United States Customs and Border Protection (CBP) is set to spend $225,000 for a year of access to Clearview AI, a controversial face recognition tool that compares photos of individuals against a database of more than 60 billion images scraped from the internet. The contract primarily covers the Border Patrol's intelligence division, expanding its data collection and analysis capabilities as part of efforts to "disrupt, degrade, and dismantle" perceived security threats.
According to the contract, Clearview's tools will be integrated into the daily operations of CBP analysts rather than reserved for specific investigations. The technology will support "tactical targeting" and "strategic counter-network analysis," drawing on various data sources to identify individuals and trace their connections for national security and immigration enforcement purposes.
A central concern surrounding this initiative is the handling of sensitive personal data, particularly biometric identifiers. The contract mandates non-disclosure agreements for contractors with access to this data, yet it remains unclear what types of images personnel will upload, whether U.S. citizens may be included in searches, and how long uploaded materials and search results will be retained.
This move comes at a time when the Department of Homeland Security (DHS) faces increasing scrutiny over its use of facial recognition in enforcement operations across the country. Civil liberties advocates and some lawmakers warn that these technologies risk becoming routine intelligence tools rather than limited investigative aids, and some have pushed for legislation that would bar agencies such as ICE and CBP from using facial recognition altogether, reflecting rising concern over biometric surveillance conducted without adequate limits or transparency.
Clearview AI's business model has itself drawn ire for scraping images from public websites and converting them into biometric templates without individuals' consent, fueling debate over the ethical and privacy implications of how such technologies are deployed and overseen.
Furthermore, recent evaluations by the National Institute of Standards and Technology (NIST) found that while Clearview AI performs well on controlled, high-quality images, its accuracy degrades on uncontrolled, real-world captures, producing high false match rates. That limitation raises questions about depending on such systems for national security operations, and NIST has cautioned that identification results under those conditions can be misleading.
As CBP prepares to deploy the tool, the lack of clarity about how it will be integrated raises vital questions about the future of facial recognition technology in federal enforcement actions, particularly where civil rights and liberties are concerned.