Privacy is a thing of the past now that Clearview has created a database to identify a person’s entire online profile from a single photo. | Credit: AP/Jenny Kane
- Clearview AI can create an in-depth profile on any person based on a single photo.
- Everything from that person’s activities and interests to their network of friends is easily accessible via the Clearview database.
- The police say the tech is “the biggest breakthrough in the last decade,” and the privacy concerns it raises are alarming.
A tiny company headed by an Australian has quietly opened Pandora’s box. Clearview AI created advanced facial recognition technology that law-enforcement officers call “the biggest breakthrough in the last decade.” The software, which has been scraping images from across the web for years, compares a probe photo against its massive database to build a detailed profile of the person being searched. While that has proven extremely useful to police officers trying to identify victims of sexual abuse and catch shoplifters, it also raises very serious questions about privacy.
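Clearview has not published how its matching actually works, so the sketch below is only a generic illustration of the underlying technique: each face is reduced to a numeric embedding, and a probe photo is matched by finding database entries whose embeddings are close to it. It uses the open-source `face_recognition` library as a stand-in; the file paths and the 0.6 tolerance are illustrative assumptions, not anything Clearview has disclosed.

```python
# Hypothetical sketch of face-embedding matching, NOT Clearview's actual system.
# Uses the open-source `face_recognition` library (dlib-based) to show how a
# probe photo can be compared against a database of reference photos.
import face_recognition


def build_database(image_paths):
    """Compute a 128-dimensional embedding for each reference photo."""
    db = {}
    for path in image_paths:
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:  # skip photos with no detectable face
            db[path] = encodings[0]
    return db


def search(probe_path, db, tolerance=0.6):
    """Return (path, distance) pairs whose embeddings fall within `tolerance`."""
    probe_image = face_recognition.load_image_file(probe_path)
    probe_encodings = face_recognition.face_encodings(probe_image)
    if not probe_encodings:
        return []
    probe = probe_encodings[0]
    names = list(db.keys())
    distances = face_recognition.face_distance([db[n] for n in names], probe)
    return [(n, float(d)) for n, d in zip(names, distances) if d <= tolerance]


# Example usage (paths are placeholders):
# db = build_database(["alice.jpg", "bob.jpg"])
# print(search("unknown_selfie.jpg", db))
```

The point of the illustration is scale: once embeddings exist for billions of scraped photos, a single query photo is enough to pull back every matching profile.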
Silently Storing Your Data
A huge part of Clearview’s strategy so far has been flying under the radar. The firm has already sold its technology to hundreds of law-enforcement agencies around the country while the majority of the public remains unaware.
What’s more, people who set their social media accounts to private and blocked friends from tagging them in photos are just as vulnerable to being searched as those whose profiles are publicly visible. That’s because the program may have scraped their photos into its database before those privacy settings were enabled. Even if you deleted every trace of yourself online tomorrow, Clearview would still hold copies of everything you had posted.
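Clearview hasn’t described its crawler, so the snippet below is only a minimal sketch of why this happens with any scraper: once a copy of a publicly reachable image has been downloaded, later deleting the original or locking down the account has no effect on that copy. The `ARCHIVE` folder and `archive_image` function are hypothetical names for illustration.

```python
# Hypothetical illustration of why deleting a photo doesn't remove scraped copies.
# A crawler that has already downloaded an image keeps its own copy; the source
# going private or being deleted afterward changes nothing on the crawler's side.
import hashlib
import pathlib

import requests

ARCHIVE = pathlib.Path("scraped_images")  # stand-in for a scraper's storage
ARCHIVE.mkdir(exist_ok=True)


def archive_image(url):
    """Download a publicly reachable image and store it under a content hash."""
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        return None
    digest = hashlib.sha256(resp.content).hexdigest()
    path = ARCHIVE / f"{digest}.jpg"
    path.write_bytes(resp.content)  # this copy persists even if `url` later 404s
    return path


# Once archive_image() has run, taking the photo down at the source does
# nothing to the file already sitting in ARCHIVE.
```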
Private or not, your selfies are probably stored inside Clearview’s massive database | Credit: metamorworks/Shutterstock.com
Clearview’s facial recognition technology is an extremely powerful tool, but that power comes with a heavy weight of responsibility. Unfortunately, once something like this has been created, it’s hard to control how it’s used. You can’t put the toothpaste back into the tube, so to speak.
Google Backs Away From Facial Recognition AI
In 2011, Google claimed to have the ability to build a similar facial recognition database but abandoned the project for fear of misuse. Then-CEO Eric Schmidt said Google considered that kind of tracking technology dangerous if it were to fall into the wrong hands. At the time, Schmidt used the example of a dictator deploying it against citizens to violate their privacy rights.
Credit: Lionel Bonaventure/AFP
While it’s unclear exactly who has access to Clearview’s technology, The New York Times estimates that more than 600 law enforcement agencies and a handful of private companies are making use of the program. The Chicago Police Department will shell out $50,000 over the course of two years for access to a Clearview “pilot” program.
That kind of widespread usage greatly increases the risk of the database falling into the wrong hands. We’re talking not only about hundreds of government-controlled agencies with questionable motives but also private companies using the application however they see fit for “security purposes.” If that doesn’t scream “violation of privacy,” I don’t know what does.
Privacy Concerns Rise
Social media companies like Facebook and Twitter have started to get nervous about what Clearview’s technology means for their own privacy policies. They’ve sent cease-and-desist letters aimed at stopping Clearview’s “scraping” of their platforms, but it’s becoming increasingly difficult to control where data shared online ultimately ends up.
Credit: fizkes/Shutterstock.com
Some states have questioned whether the technology infringes on their residents’ rights, prompting bans and lawmaker inquiries. Hoan Ton-That, Clearview’s creator, says he would welcome government regulation.
The Next Cambridge Analytica Privacy Scandal
Clearview’s process of “scraping” digital images from around the web is reminiscent of the Cambridge Analytica scandal that combed through millions of Facebook profiles without users’ consent. Back in 2018, Mark Zuckerberg apologized for the privacy violation and vowed to better protect users.
While what Clearview is doing may violate ethical standards of privacy, it doesn’t necessarily break any laws. That’s because lawmakers haven’t kept up with the pace of technology; a decade ago, this kind of capability would have seemed like a pipe dream. Plus, Clearview isn’t accessing just one platform’s data. Instead, the firm combs the entire internet for relevant information to make its profiles stronger. That means any blowback won’t stop at Facebook; it will likely hit the entire social media space as well.
The opinions expressed in this article do not necessarily reflect the views of CCN.com.
This article was edited by Gerelyn Terzo.