The U.S. government is embarking on an all-out sprint to develop and deploy artificial intelligence in the name of national security, but its plans for protecting civil rights and civil liberties have barely taken shape.
Based on a sweeping new report by a congressionally mandated commission, it’s clear that U.S. intelligence agencies and the military are seeking to integrate AI into some of the government’s most profound decisions: who it surveils, who it adds to government watchlists, who it labels a “risk” to national security, and even who it targets using lethal weapons.
In many of these areas, the deployment of AI already appears to be well underway. But we know next to nothing about the specific systems that agencies like the FBI, Department of Homeland Security, CIA, and National Security Agency are using, and even less about the safeguards that exist — if any.
That’s why the ACLU is filing a Freedom of Information Act (FOIA) request today seeking information about the types of AI tools intelligence agencies are deploying, what rules constrain their use of AI, and what dangers these systems pose to equality, due process, privacy, and free expression.
Earlier this month, the National Security Commission on Artificial Intelligence issued its final report, outlining a national strategy to meet the opportunities and challenges posed by AI. The commission — composed of technologists, business leaders, and academic experts — spent more than two years examining how AI could impact national security. It describes AI as “a constellation of technologies” that “solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action; and technologies that may learn and act autonomously whether in the form of software agents or embodied robots.” AI systems are increasingly used to make decisions, recommendations, classifications, and predictions that impact Americans and people abroad as we all go about our daily lives.
The report urges the federal government — and especially intelligence agencies — to continue rapidly developing and deploying AI systems for a wide range of purposes. Those purposes include conducting surveillance, exploiting social media information and biometric data, performing intelligence analysis, countering the spread of disinformation via the internet, and predicting threats. The report notes that individual intelligence agencies have already made progress toward these goals, and it calls for “ubiquitous AI integration in each stage of the intelligence cycle” by 2025.
While artificial intelligence may promise certain benefits for national security — improving the speed of some tasks and augmenting human judgment or analysis in others — these systems also pose undeniable risks to civil rights and civil liberties.
Of particular concern is the way AI systems can be biased against people of color, women, and marginalized communities, and may be used to automate, expand, or legitimize discriminatory government conduct. AI systems may replicate biases embedded in the data used to train those systems, and they may have higher error rates when applied to people of color, women, and marginalized communities because of other flaws in the underlying data or in the algorithms themselves. In addition, AI may be used to guide or even supercharge government activities that have long been used to unfairly and wrongly scrutinize communities of color — including intrusive surveillance, investigative questioning, detention, and watchlisting.
The commission’s report acknowledges many of these dangers and makes a number of useful recommendations, like mandatory civil rights assessments, independent third-party testing, and the creation of robust redress mechanisms. But ultimately the report prioritizes the deployment of AI, which it says must be “immediate,” over the adoption of strong safeguards. The commission should have gone further and insisted that the government establish critical civil rights protections now, at the same time that these systems are being widely deployed by intelligence agencies and the military.
One threshold problem is that, when it comes to AI, even basic transparency is lacking. In June 2020, the Office of the Director of National Intelligence released its Artificial Intelligence Framework for the Intelligence Community — and identified “transparency” as one of the framework’s core principles. But there is almost nothing to show for it. The public does not have even basic information about the AI tools that are being developed by the intelligence agencies, despite their potential to harm Americans and people abroad. Nor is it clear what concrete rules, if any, these agencies have adopted to guard against the misuse of AI in the name of national security.
Our new FOIA request aims to shed light on these questions. In the meantime, the work of fashioning baseline AI protections must move ahead. If the development of AI systems for national security purposes is an urgent priority for the country, then the adoption of critical safeguards by Congress and the executive branch is just as urgent. We cannot wait until dangerous systems have already become entrenched.