Governments need an abrupt change of direction to avoid “stumbling zombielike into a digital welfare dystopia,” Philip G. Alston, a human rights expert reporting on poverty, told the United Nations General Assembly last year, in a report calling for the regulation of digital technologies, including artificial intelligence, to ensure compliance with human rights. The private companies that play an increasingly dominant role in social welfare delivery, he noted, “operate in a virtually human-rights-free zone.”
Last month, the U.N. expert monitoring contemporary forms of racism flagged concerns that “governments and nonstate actors are developing and deploying emerging digital technologies in ways that are uniquely experimental, dangerous, and discriminatory in the border and immigration enforcement context.”
The European Border and Coast Guard Agency, also known as Frontex, has tested unpiloted military-grade drones in the Mediterranean and Aegean for the surveillance and interdiction of vessels carrying migrants and refugees trying to reach Europe, the expert, E. Tendayi Achiume, reported.
The U.N. antiracism panel, which is charged with monitoring and holding states to account for their compliance with the international convention on eliminating racial discrimination, said states must legislate measures combating racial bias and create independent mechanisms for handling complaints. It emphasized the need for transparency in the design and application of algorithms used in profiling.
“This includes public disclosure of the use of such systems and explanations of how the systems work, what data sets are being used and what measures preventing human rights harms are in place,” the group said.
The panel’s recommendations are aimed at a global audience of 182 states that have signed the convention, but most of the complaints it received over the past two years came from the United States, Ms. Shepherd said, and its findings amplify concerns voiced by American digital rights activists.
American police departments have fiercely resisted sharing details of the number or type of technologies they employ, and there is little regulation requiring accountability for what technologies they use or how they use them, said Rashida Richardson, a visiting scholar at Rutgers Law School and director of research policy at New York University’s A.I. Now Institute.