Clearview A.I. flap

Ahead of the Wednesday October 1st "town hall" meeting with 33 Division of the Toronto Police Service, I asked Google's Gemini A.I. service to recap the Clearview A.I. scandal. This is an edited transcript of Gemini’s findings. -- Roman

The Clearview AI Controversy: An Unforeseen Adoption

A significant controversy erupted in 2020 concerning the use of "Clearview AI" by law enforcement. This dustup over the unannounced "early adoption" of powerful facial recognition technology by police forces, including in Ontario, raised serious concerns among citizens and had a lasting impact on law enforcement agencies.

The controversy centered on Clearview AI, a company that developed a facial recognition application. The software's power lay in its massive database, reportedly containing billions of images scraped from public websites and social media platforms like Facebook, Instagram, and Twitter. This allowed law enforcement users to upload a photo of an unknown individual and potentially identify them by matching it against the vast repository of online images.
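For readers curious about the mechanics, the sketch below illustrates the general idea behind this kind of search: each face image is reduced to a numerical "embedding," and a probe photo's embedding is compared against the stored embeddings to find the closest matches. This is a minimal conceptual illustration in Python, not Clearview AI's actual system; the database, URLs, and function names are invented for the example, and the embeddings are simulated with random numbers where a real system would use a trained face-recognition model.

import numpy as np

# Simulated "database" of embeddings, one vector per scraped face image,
# plus the (hypothetical) URL each image was scraped from.
rng = np.random.default_rng(0)
db_embeddings = rng.standard_normal((100_000, 512)).astype(np.float32)
db_source_urls = [f"https://example.com/photo/{i}" for i in range(len(db_embeddings))]

def top_matches(probe_embedding, k=5):
    """Return the k database entries most similar to the probe photo's
    embedding, ranked by cosine similarity."""
    # Normalize vectors so that dot products equal cosine similarity.
    db_norm = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    probe_norm = probe_embedding / np.linalg.norm(probe_embedding)
    scores = db_norm @ probe_norm          # similarity against every stored face
    best = np.argsort(scores)[::-1][:k]    # indices of the k highest scores
    return [(db_source_urls[i], float(scores[i])) for i in best]

# Example: search with a simulated probe photo embedding.
probe = rng.standard_normal(512).astype(np.float32)
for url, score in top_matches(probe):
    print(url, round(score, 3))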

In early 2020, investigative journalism revealed that numerous police services across Canada, including the Toronto Police Service (TPS) and the Ontario Provincial Police (OPP), had been using Clearview AI on a trial basis. This use was largely unknown to the public and, in some cases, even to the leadership of the police forces themselves. The "early adoption" was often initiated by individual officers or specific units that were offered free trials of the technology.



Citizen and Privacy Advocate Backlash

The revelation that Clearview AI was being used in Ontario sparked immediate and widespread backlash from the public, civil liberties organizations, and privacy advocates. The primary concerns included:

Mass Surveillance: Critics argued that the scraping of billions of images without users' consent constituted a form of mass surveillance, turning the internet into a massive, searchable police lineup.

Lack of Consent and Transparency: Citizens were alarmed that their personal photos, shared on social media for personal reasons, were being used for law enforcement purposes without their knowledge or consent. The lack of transparency from police services regarding their use of such a powerful tool eroded public trust.

Accuracy and Bias: Concerns were raised about the accuracy of facial recognition technology, particularly its higher rates of misidentification for racialized individuals and women. This created a risk of innocent people being wrongly targeted or investigated.

Violation of Privacy Rights: The Canadian Civil Liberties Association and other groups argued that the use of Clearview AI was a significant infringement on the privacy rights of Canadians.


Investigations and Official Findings: An Unequivocal Verdict

In response to the public outcry, the Information and Privacy Commissioner of Ontario, along with federal and other provincial privacy watchdogs, launched a joint investigation. Their findings were unequivocal:

Illegal Practices: In a February 2021 report, the privacy commissioners jointly declared that Clearview AI's collection of images without consent was illegal under Canadian privacy laws. They stated that the company's practices represented "mass surveillance" and were "a clear violation of the privacy rights of Canadians."

RCMP violated the Privacy Act: A separate investigation into the Royal Canadian Mounted Police's use of Clearview AI concluded that the national police force had also violated the federal Privacy Act. The Privacy Commissioner of Canada emphasized that government institutions cannot collect personal information from a third party if that information was collected unlawfully.

The investigations confirmed that the use of Clearview AI by police in Ontario was not just a minor dustup but a significant problem with legal and ethical ramifications.



Impact on Law Enforcement in Ontario

The Clearview AI scandal had a profound impact on law enforcement agencies in Ontario:

Cessation of Use: In the wake of the controversy, police services in Ontario, including the TPS and OPP, ceased all use of Clearview AI's technology. Clearview AI itself announced it would no longer offer its services in Canada.

Policy and Governance Overhaul: The scandal forced police services to confront the lack of policies and governance around the adoption of new surveillance technologies. The Toronto Police Service, for example, initiated the development of a formal policy on the use of artificial intelligence and facial recognition technology, including public consultation.

Increased Scrutiny and Demand for Transparency: The public and oversight bodies now have a heightened awareness of the potential dangers of unregulated surveillance technologies. There is a greater demand for transparency and accountability from police regarding the tools they use.

Legal Challenges: The use of evidence derived from Clearview AI in criminal cases came under legal scrutiny, with defense lawyers questioning its legality and reliability.


The U.S. Connection: An Introduction to the Technology

The involvement of U.S. law enforcement agencies such as the FBI is also relevant. Reports indicated that at least one Toronto Police detective was first introduced to Clearview AI at a conference in the Netherlands, during a showcase put on by the FBI and the U.S. Department of Homeland Security. This highlights how new technologies are often disseminated and promoted within the international law enforcement community, sometimes ahead of public and legal scrutiny in the jurisdictions where they are eventually deployed.

In conclusion, the controversy surrounding Clearview AI in Ontario was a significant event that brought the challenges of new surveillance technologies to the forefront of public discourse. The "early adoption" without prior notice or discussion was deemed a serious problem by privacy watchdogs, leading to a re-evaluation of police practices and a greater demand for transparency and accountability in the use of AI-powered tools.


End of Google Gemini item. Last updated on October 3, 2025


Related items: “Facial recognition technology poses ‘enormous’ potential for abuse, NDP says” - Toronto Star, Jan. 28, 2020; the item mentions then-ethics critic Charlie Angus.

thestar.com/politics/federal/2020/01/28/facial-recognition-technology-poses-enormous-potential-for-abuse-ndp-says.html, archived at https://archive.is/AeJG4
...