Protests following the death of George Floyd in Minneapolis, Minnesota, continue to seize national and global attention. Videos of peaceful protesters demanding justice and of rioters setting businesses ablaze have offered two seemingly different narratives of the summer's intense social activism. Protests have taken place in at least 1,700 cities across all 50 states, and the police shooting of Jacob Blake, a 29-year-old Black man in Kenosha, WI, on August 23 makes clear why that number continues to grow. What remains unclear, though, is the accuracy of the surveillance tools, methods, and overall processes law enforcement uses to retroactively arrest people who participated in protests months ago.
The scale and range of surveillance technologies being deployed are unprecedented:
- Drones or unmanned aerial systems (UAS)
- Body-worn cameras (BWCs)
- Automated license plate readers (ALPRs)
- Amazon Ring and other home surveillance systems
- Camera registries
- Face recognition
- Cell-site simulators (IMSI catchers)
- Computer vision
- Predictive policing
- Open-source intelligence
- Gunshot detection
These technologies are manufactured by a wide range of private companies: Motorola, Amazon, Harris Corporation, Axon, ShotSpotter, Hewlett Packard (HP), Flock Safety, Palantir, and IBM. The Electronic Frontier Foundation created the Atlas of Surveillance, a map showing the different types of surveillance technology owned by police departments across the nation. Its current 5,000 data points demonstrate how many state and local law enforcement agencies own and use these tools.
In Texas, the Department of Public Safety and its special agents have analyzed hundreds of hours of YouTube videos and social media posts to arrest more than a dozen Texans. The Miami Police Department ran Facebook, LinkedIn, and Instagram photos through Clearview AI's facial recognition technology to identify suspects. In Chicago, Mayor Lori Lightfoot announced the establishment of the Social Media Task Force, a 20-person unit within the Crime Prevention and Information Center focused on scraping social media with facial recognition technology to prevent potential looting.
Facial recognition technology, which is powered by AI, has faced intense scrutiny for being flawed. In 2018, the ACLU tested Amazon's face recognition technology, which falsely matched 28 sitting members of Congress with mugshots. These tools are prone to mistakes, and the lasting consequences fall disproportionately on Black people and other people of color. ACLU Senior Advocacy and Policy Counsel Chad Marlow leads the #TakeCTRL campaign, which focuses on surveillance, privacy, and technology issues. He has condemned the use of facial recognition technology and social media scraping tools to conduct arrests, pointing to examples and studies that found the technology to be biased. In June, The New York Times reported on Robert Julian-Borchak Williams, a Detroit resident wrongfully arrested by Detroit police because of flawed facial recognition technology. Detectives were so confident that the software was correct that they arrested and detained Williams for a crime he did not commit.
The pipeline through which the data travels is obscure. Once a photo or status is posted to social media and captured by law enforcement, what exactly happens between that moment and an arrest? What is the legality of using facial recognition technology and other tools to scrape tweets or Instagram photos to implicate, formally charge, and arrest someone for participating in peaceful protests?
The incentive for law enforcement to use these tools is that they lessen the workload usually needed to make arrests. In an interview, Jake Laperruque, Senior Counsel at the Project on Government Oversight, explained that these technologies reduce the level of engagement needed to do the same amount of work, which "opens the door to more abuse because there are less logistical, practical obstacles to conduct surveillance and creates the opportunity for nefarious action." As the list of predictive technologies continues to expand, real-time social media scanning capabilities will accelerate the scraping process, enabling law enforcement to make questionable arrests even more quickly.
Attorneys could help the general public achieve the transparency it deserves during this pivotal moment in our history. Attorneys should, to the greatest extent legally possible, share the details of cases they have defended in which a client was retroactively arrested for participating in constitutionally protected activity, like protests, when the evidence was predicated on facial recognition technology or social media scanning. Doing so might just be the key to preventing what Dr. Desmond Patton, Director of Columbia's SAFE Lab, warns could become a "virtual stop and frisk." This means we should tweet, email, and otherwise call on district and state attorneys to make transparent their use of facial recognition technology to arrest individuals from past and more recent social justice protests.