Facial recognition technology raises concerns over privacy

Originally published in the Spring 2022 issue of California Publisher.

By Jason M. Shepard

A child abuser in Las Vegas sentenced to 35 years in prison. A murder in Miami solved with footage from a gas station robbery. Violent rioters identified among the protestors at the Jan. 6, 2021, insurrection at the U.S. Capitol.

Those are among the thousands of crimes reportedly solved by law enforcement agencies using cutting-edge facial recognition technology created by Clearview AI.

The software sounds simple enough: a user uploads a photo of an unknown person and gets back a match to a photo from the Internet, often with links to social media accounts or other web pages with identifying information.
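
In outline, that matching step is a nearest-neighbor search over face encodings. Here is a minimal sketch of the upload-and-match workflow using the open-source face_recognition Python library, not Clearview AI's proprietary system; the index lists and the 0.6 distance threshold are illustrative assumptions:

```python
# Sketch of a reverse face search: encode the uploaded photo, then
# compare it against a pre-built index of encodings scraped from the web.
# Uses the open-source face_recognition library, NOT Clearview AI's system.
import face_recognition

# Hypothetical index: one 128-dimension face encoding per known face,
# paired with the URL of the page it was scraped from (populated elsewhere).
indexed_encodings = []
indexed_urls = []

def reverse_face_search(probe_path, threshold=0.6):
    """Return source URLs whose indexed face is close to the probe photo's."""
    image = face_recognition.load_image_file(probe_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return []  # no face detected in the uploaded photo
    # Euclidean distance to every indexed encoding; smaller means more similar.
    distances = face_recognition.face_distance(indexed_encodings, encodings[0])
    return [url for url, d in zip(indexed_urls, distances) if d <= threshold]
```

A real system would return the matched photos along with those source links, which is where the identifying social media pages come from.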

Clearview AI’s technology stands apart from other photo databases because of its sophisticated algorithms and its vast library of photos scraped from almost everywhere online, including social media, news sites, and even banking apps.

As of March 2022, Clearview AI boasted a database of more than 20 billion images and contracts with 3,100 law enforcement agencies in the United States, including the FBI and the Department of Homeland Security.

Clearview AI is a game changer in crime solving, and the company is expanding its staff and adding new investors.

But critics and privacy watchdogs see Clearview AI as a major new threat to the right to privacy. The legal challenges are piling up.

Clearview AI is the brainchild of Hoan Ton-That, an Australian of royal Vietnamese ancestry who dropped out of college and moved to San Francisco in 2007 to develop mobile apps as smartphones hit the market, according to a profile in the New York Times.

After a string of start-up failures, including an app that let users add Donald Trump’s hair to their photos, Ton-That moved to New York in 2016. He met a business partner, and together they decided to build a facial recognition system with two key strengths. First, the software relied on biometric algorithms to map facial geometry, allowing fast and accurate matching. And second, it would draw from a vast database of photos scraped from almost everywhere on the internet.
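
The speed claim follows from that design: once every scraped photo is reduced to a fixed-length embedding, identification becomes a nearest-neighbor lookup in a vector index rather than a scan of raw images. A hedged illustration with the FAISS similarity-search library, using random vectors as stand-ins for real faceprints (the 128-dimension size and flat index are assumptions; a billion-scale deployment would use approximate, sharded indexes):

```python
# Nearest-neighbor search over face embeddings with FAISS.
# Random vectors stand in for faceprints; no real biometric data here.
import faiss
import numpy as np

d = 128                                            # embedding size (assumed)
db = np.random.rand(100_000, d).astype("float32")  # stand-in "scraped" faceprints
index = faiss.IndexFlatL2(d)                       # exact L2 distance index
index.add(db)                                      # build once, query many times

probe = np.random.rand(1, d).astype("float32")     # encoding of an uploaded photo
distances, ids = index.search(probe, 5)            # five closest faceprints
print(ids[0], distances[0])                        # ids map back to source URLs
```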

The company attracted investors and strategized about ways to make money from different uses. It marketed 30-day free trials to law enforcement officers with hopes that they’d convince their agencies to buy subscriptions.

Arrests became success stories. For example, the Indiana State Police solved a shooting case within 20 minutes of first using the app. The case involved a shooting in a park that was captured on video. Using a photo from the video, the Clearview AI database found a match to a photo of the alleged shooter from a social media account. The alleged shooter hadn’t shown up in government databases because he didn’t have a driver’s license or a criminal record.

Mentions of the company began cropping up in arrest and charging documents, and a profile of Ton-That in the New York Times in January 2020 brought the company significant media attention for the first time.

The attention brought in new investors and clients, but it also drew the ire of privacy advocates alarmed by the possibilities of the new technology.

“The weaponization possibilities of this are endless,” Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the New York Times. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

Both the biometric algorithm and the web scraping have raised novel legal questions.

In a recent decision unrelated to Clearview AI, the Ninth Circuit Court of Appeals ruled that scraping publicly accessible information does not constitute “unauthorized access” to a website prohibited by the federal Computer Fraud and Abuse Act, a 1986 anti-hacking law.

The case, hiQ Labs v. LinkedIn, involved efforts by LinkedIn to prohibit another company from downloading information users uploaded to their LinkedIn profiles. LinkedIn argued that its user agreement prohibits unauthorized downloading of its content, and that by violating the user agreement, hiQ Labs was also violating the CFAA. The district and appellate courts disagreed.

The ruling was based on the U.S. Supreme Court’s decision last summer in Van Buren v. United States, which limited the CFAA’s reach to cases in which individuals obtain unauthorized access to computer systems, not to those who misuse access they already have.

So, for now, it appears Clearview AI’s mass collection of publicly available photos has support in emerging case law.

However, Clearview AI is facing thorny legal issues on other fronts.

Internationally, Clearview AI is facing challenges in Europe, where the European Union has said Clearview AI violates the General Data Protection Regulation, and in Canada, where the country’s privacy commissioner has called Clearview AI’s practices illegal.

In May, Clearview AI settled a lawsuit in Illinois with the American Civil Liberties Union that alleged violations of Illinois’s Biometric Information Privacy Act (BIPA), a 2008 law that prohibits companies from collecting and using a person’s biometric data without their notice and consent.

As part of the settlement, Clearview AI agreed not to sell its faceprint database to private businesses in the United States, limiting its sales primarily to law enforcement. Clearview AI will also stop providing access to any entity in Illinois, including law enforcement, for five years, and will offer Illinois residents an option to opt out of its database.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project, said in a statement.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

The Illinois law is the strongest biometrics privacy law in the country. Two other states, Texas and Washington, have similar laws, while at least seven other states, including California, have bills under consideration in their legislatures.

In California, several cities, including San Francisco, Oakland, Berkeley and Alameda, have prohibited their police departments from using Clearview AI.

And in March, four individuals and two immigrant rights organizations filed a lawsuit in Alameda County Superior Court, alleging that Clearview AI’s technology provides police with an illegal surveillance system that chills free speech and association rights.

The lawsuit, Renderos v. Clearview AI, seeks an injunction prohibiting Clearview AI from collecting biometric information in California and requiring the company to delete all images and data of Californians from its databases.

“Our plaintiffs and their members care deeply about the ability to control their biometric identifiers and to be able to continue to engage in political speech that is critical of the police and immigration policy free from the threat of clandestine and invasive surveillance,” Sejal R. Zota, one of the attorneys at Just Futures Law who filed the lawsuit, told the Los Angeles Times. “And California has a Constitution and laws that protect these rights.”

Among Clearview AI’s supporters is distinguished First Amendment attorney Floyd Abrams, who is representing Clearview AI and serves on its advisory board.

As legal challenges mount, Abrams argues that the First Amendment should protect Clearview AI’s software and its accumulation and use of publicly available photos.

“The creation and dissemination of information is protected by the First Amendment. Requiring consent prior to making public already public information is inconsistent with the First Amendment,” Abrams said in an interview on Clearview AI’s website.

Abrams also recognizes that litigation involving Clearview AI may set significant legal precedents.

This, Abrams told the New York Times, “has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment rights in the 21st century.”

Jason M. Shepard, Ph.D., is professor and chair of the Department of Communications at California State University, Fullerton. His primary research expertise is in media law, and he teaches courses in journalism and in media law, history and ethics. Contact him at jshepard@fullerton.edu or on Twitter at @jasonmshepard.
