Disclosures of NYPD Surveillance Technologies Raise More Questions Than Answers

Surveillance technologies are reshaping society. Technical advances and investment in tools ranging from facial recognition and social media analysis to cell-site simulators and drones continue to increase each year in cities across the United States. “Because they pose an unprecedented risk to our civil liberties and privacy, we cannot passively accept them, as we have done too often with previous technologies,” concludes Jon Fasman in his book We See It All: Liberty and Justice in an Age of Perpetual Surveillance.

New disclosures of surveillance technologies employed by the New York City Police Department (NYPD) make clear why the public must demand greater specificity, input and oversight on surveillance systems, and, in some instances, seek to ban technologies altogether. The disclosures by NYPD – the first required by the POST Act – raise substantial questions about how police in New York acquire and maintain data across dozens of surveillance systems and about how NYPD thinks about safety and possible harms to society, and they reveal new details about the suite of technologies enabling covert police activities on social media networks.

I. Background: The POST Act Disclosures

In June 2020, at the height of the uprising against police violence following the murder of George Floyd, the New York City Council passed police reform measures. In an amendment to the administrative code, the City Council introduced a measure requiring “comprehensive reporting and oversight of New York City Police Department surveillance technologies.” Dubbed the Public Oversight of Surveillance Technology (POST) Act, the new law requires the NYPD to publicly disclose its entire complement of surveillance technologies. The required disclosures must be accompanied by “impact and use policies for all existing surveillance technologies describing how the technology will be used, the limitations in place to protect against abuse, and the oversight mechanisms governing use of the technology,” according to the Brennan Center for Justice at NYU Law School.

The product of years of effort by activists, academics, and civil society groups to raise awareness of police surveillance in New York, the passage of the POST Act represents a victory over NYPD, which opposed it vehemently. NYPD Deputy Commissioner of Intelligence and Counterterrorism John Miller called the disclosure requirements “insane” and “an effective blueprint for those seeking to do us harm,” according to The Intercept. Despite NYPD opposition, the act commanded a veto-proof majority; the Mayor also broke from NYPD and announced his support of the bill before its passage by the Council.

NYPD made its first disclosures earlier this year, posting links to PDFs containing information on 36 technologies, listed alphabetically from “Audio-Only Recording Devices, Covert” to “WiFi Geolocation Devices.”

According to NYPD, the disclosures “provide details of: 1) the capabilities of the Department’s surveillance technologies, 2) the rules regulating the use of the technologies, 3) protections against unauthorized access of the technologies or related data, 4) surveillance technologies data retention policies, 5) public access to surveillance technologies data, 6) external entity access to surveillance technologies data, 7) Department trainings in the use of surveillance technologies, 8) internal audit and oversight mechanisms of surveillance technologies, 9) health and safety reporting on the surveillance technologies, and 10) potential disparate impacts of the impact and use policies for surveillance technologies.” The public is invited to comment, via email, on each technology by February 25th, 2021. After a period of review, “final impact and use policies will be published by April 11th, 2021.”

While there was widespread coverage of the passage of the POST Act, so far there has been relatively little reporting on the disclosures, even as concerns and lawsuits over NYPD surveillance mount. Gizmodo notes in a report that “a full and transparent accounting of the force’s spying power has largely been absent until now.” Despite the appearance of transparency, our review of the disclosures raises a multitude of questions – not only about the technologies themselves, but also about the composition and contents of the disclosures and the policies that dictate the technologies’ use. Our analysis suggests more specificity is required to understand exactly what capabilities NYPD employs, on what scale each system is implemented, the ways in which these technologies link to one another and utilize common data sets, and how NYPD, judicial, and public oversight work in practice.

II. Inspecting the Disclosures

While the POST Act establishes a general framework for disclosure, NYPD has latitude in how it drafts the briefs and how it makes them available to the public. For the purposes of this report, we consider one technology widely discussed in the news media, facial recognition, and two technologies disclosed separately that provide a useful way to examine how these various technologies fit together – social media network analysis tools and internet attribution management infrastructure.

In general, the disclosures follow the outline provided in the law, with headings for “Capabilities of the technology,” “Rules, processes and guidelines relating to the use of the technology,” “Policies relating to retention, access, & use of the data,” etc. Most are between four and eight pages and consist entirely of text – there are no technical diagrams or charts.

Curiously, across all 36 technologies, the remarks in the “Health & Safety Reporting” category reveal a possible discrepancy between the intent of the law and the way NYPD has interpreted it. While the law requires disclosure of “any tests or reports regarding the health and safety effects of the surveillance technology,” in nearly all of the disclosures this section is essentially dismissed with boilerplate language. While NYPD notes that drones may “interfere with other lawful aircraft communication systems,” the disclosures assure the reader that its Manned Aircraft Systems meet “FAA safety standards,” and that radiation exposure from “NYPD mobile x-ray technology is considered trivial.” There are “no known health and safety issues” for the 33 other technologies, suggesting that the department is using a narrow concept of health and safety and may not have conducted any tests or produced any reports before their implementation. Indeed, researchers are now looking at the physical and psychological health impacts of policing, including surveillance, according to research published by the American Public Health Association.

Facial recognition

Perhaps anticipating it would draw the most scrutiny, the disclosure for facial recognition is the longest of the set. But on a close reading, significant questions arise. On the first page, the disclosure states: “Facial recognition technology does not use artificial intelligence, machine learning, or any additional biometric measuring technologies.” This is noteworthy since nearly all state-of-the-art facial recognition technologies employ machine learning to train model embeddings. Writing in Wired, Albert Fox Cahn, the Surveillance Technology Oversight Project’s (S.T.O.P.‘s) founder and Executive Director, and Justin Sherman, Co-Founder and Senior Fellow at Ethical Tech at Duke University, note that such statements “directly contradict New York’s own report on artificial intelligence systems,” which was also recently published.
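To see why the “no machine learning” claim strains credulity, it helps to understand what an embedding-based face match looks like. The sketch below is purely illustrative and assumes nothing about NYPD’s actual system: the vector values, the four-dimensional size, and the 0.6 threshold are invented for demonstration. In a real system, a trained neural network maps each face image to a fixed-length vector, and two images are declared a match when their vectors are sufficiently similar.

```python
# Illustrative sketch of embedding-based face matching. All numbers here
# are hypothetical; real systems derive embeddings (typically 128-512
# dimensions) from a neural network trained via machine learning.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when the probe and gallery embeddings are close enough."""
    return cosine_similarity(probe, gallery) >= threshold

# Hypothetical 4-dimensional embeddings for demonstration only.
enrolled = np.array([0.9, 0.1, 0.3, 0.2])    # stored image of a person
probe = np.array([0.85, 0.15, 0.28, 0.22])   # new image of the same person
stranger = np.array([-0.4, 0.8, -0.1, 0.5])  # image of someone else

print(is_match(probe, enrolled))  # similar vectors -> match
print(is_match(probe, stranger))  # dissimilar vectors -> no match
```

The point of the sketch is that the matching step itself is simple arithmetic; the machine learning lives in the model that produces the embeddings, which is why a claim that the technology involves no machine learning is hard to square with how such systems are built.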

Another distinction worth investigating concerns how the system functions. The disclosure states that “facial recognition technology is not integrated in any NYPD video cameras or systems” and that “NYPD video cameras or systems do not possess a capability for real-time facial recognition.” Presumably, this means that NYPD cameras do not contain embedded systems that process data on the device, but it does not rule out the possibility that images are transferred for near real-time processing. Such technologies are widely available to police. “The reality is that if you wait to take a live photo and then upstream it, it’s technically not live facial recognition. It’s a kind of distinction without a difference in terms of how they can use surveillance footage and bodycam footage pretty close to [it being] live to try to identify people,” noted Ángel Díaz, counsel in the Liberty & National Security Program at the Brennan Center for Justice.

A key question for each of the technologies is the scale of the human operational systems connected to their deployment. For instance, the facial recognition disclosure refers to the role of the “facial recognition investigator.” How many such investigators are employed by the NYPD? A 2020 BuzzFeed report on NYPD’s use of the controversial service Clearview AI states that “30 officers have Clearview accounts” and, collectively, the officers conducted 11,000 searches on the Clearview system. The NYPD disclosure also suggests that use cases for facial recognition technology that fall outside of the categories “foreseen or described” in NYPD policies are referred to the “Chief of Detectives or Deputy Commissioner of Intelligence and Counterterrorism,” who apparently have the sole responsibility for determining if novel uses of the technology are “appropriate and lawful.” If facial recognition technologies are used to investigate political activity, such as protests, that responsibility lies with the Intelligence Bureau. It is unclear if this was the protocol last summer when facial recognition was employed to identify a Black Lives Matter activist arrested after an hours-long “siege” on his apartment. This use case contradicted a past statement from NYPD that facial recognition is not used to identify protestors.

Social Network Analysis Tools

Another notable technology listed in the disclosures is a suite of software as a service (SaaS) “social network analysis tools” that allow NYPD to assess social media profiles and connections between individuals that are apparent on social platforms. The tools also equip NYPD to ingest social media content and permit police “to retain information on social networking platforms relevant to investigations and alert investigators to new activity on queried social media accounts.” The disclosure maintains that NYPD only has access to publicly available information, insofar as it is “viewable as a result of user privacy settings or practices.” User “practices” are not defined, but could refer to actions a user might take that would permit NYPD to see information even if it is not posted to a public channel. There is very little information provided in the disclosure about how long content acquired from social media profiles is retained, which may include “audio, video images, location, or similar information contained on social networking platforms.”

Similar to the disclosure for facial recognition, this disclosure does not state the number of police officers who have access to social media network analysis tools, how many individuals have been tracked using these tools, or the extent and duration of such tracking. As with facial recognition technology, social media network analysis tools that are employed in relation to political activity are under the domain of the NYPD Intelligence Bureau. While the document states that “information is not shared in furtherance of immigration enforcement,” it does make clear that NYPD shares social media information with other law enforcement agencies and other third parties, who are not disclosed.

Internet Attribution Management Infrastructure

We must also consider the ways in which systems and databases interact across NYPD’s surveillance apparatus. The disclosure for “Internet Attribution Management Infrastructure” brings this question to the fore. This suite of technologies, which includes everything from servers and network infrastructure to SaaS software to laptops and smartphone devices, permits NYPD personnel to “engage in undercover activity to covertly obtain information” by visiting social media platforms and “chatrooms” and engaging individuals on messaging applications without creating a digital footprint traceable to the NYPD, in order to “allow its personnel to safely, securely, and covertly conduct investigations and detect possible criminal activity on the internet.” As with social media network analysis, the use of this technology requires no court authorization and “may be used in any situation the supervisory personnel responsible for oversight deems appropriate.”

Just as with the other technologies, there is the open question of the scale of use that is not defined by the disclosure. How many covert profiles does the NYPD operate? How many individuals in the department are regularly participating in social media groups, or messaging members of the public (or indeed minors) under a false profile? What guidelines are there for the behavior of a police officer in these situations?

Indeed, engaging with individuals on messaging apps may happen in intimate circumstances. Rachel Levinson-Waldman, a Senior Counsel to the Liberty and National Security Program at New York University School of Law’s Brennan Center for Justice, has noted that in past investigations NYPD detectives have targeted mostly Black and Hispanic teenage males “by using a fake avatar of a female teenager.” The disclosure notes that “allegations of misuse are internally investigated at the command level or by the Internal Affairs Bureau.” How many instances of misuse have been reported? How many are related to investigations of minors? Dr. Desmond Patton, founding Director of SAFElab at Columbia University, said, “One of the places where we saw this most is in doing interviews with legal aid attorneys that had young Black and brown clients that had been arrested because of their social media profile in some way.” His lab looked specifically at how social media information is misused by police in a 2017 paper, Stop and Frisk Online: Theorizing Everyday Racism in Digital Policing in the Use of Social Media for Identification of Criminal Conduct and Associations.

This article is a collaboration between Tech Policy Press and the Center for Media Engagement. To read the rest of this article, please visit the Tech Policy Press website.