It’s You! AI, Facial Recognition, Power, Privacy & the Thomas Crown Defense
As with Covid, it is hard to remember a time before AI. In a blink, Artificial Intelligence went from science fiction trope to inescapable reality. With the public launch of ChatGPT, AI broke free from the rarefied labs of data scientists. Tech companies promised miracles, while skeptics pointed to AI’s ability to deliver bias-at-scale, fakery-at-scale and scams-at-scale.
A year ago I didn’t know a computer program could “hallucinate.” Or that the energy footprint required to keep millions of suddenly ubiquitous chatbots chatting could upend all the progress made so far to address climate change. Then came the one-sentence warning released by a newly minted “Center for AI Safety,” signed by 350 mostly male, mostly white titans of tech:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Great. No clues about what should or could be done, other than adding AI to a long list of existential worries.
A six-month “pause” on AI commercialization was proposed and ignored. Instead, marketers went to work packaging AI as a friendly “co-pilot,” while chatbots were given cute names and cartoonish personae. But in the background, Large Language Models (LLMs), the building blocks of ChatGPT and its algorithmic doppelgängers, continued to devour whatever could be digitally diced, sliced and turned into tasty nuggets of digestible data, including “ambient data”—words, sounds and visuals.
Now This
For almost a year, every time I think I have a handle on the issues around AI and begin to imagine a path forward that isn’t in some way disturbingly flawed, a new wrinkle emerges that manages to make everything worse.
In her new book, with the singularly creepy title, “Your Face Belongs to Us,” New York Times reporter Kashmir Hill dives into the world of Clearview AI, a private company based in Silicon Valley with a few dozen employees, a handful of investors, and tens of billions of photos of billions of people. Clearview is in the photo-mining business, with legions of bots trained to scrape the public internet for everyone’s image, aka “faceprint”—no permission asked for or, under current rules, required.
A single image of someone standing in the background of a photo can be enough to quickly and correctly identify that individual using Clearview’s vast, AI-enhanced database. Most of the time. Police departments immediately understood the value of such a tool and eagerly signed up for the service. There was no shortage of laudable use cases, including tracking child predators. Who could possibly be against that? The software was also used to identify several of the January 6 thugs who stormed the nation’s Capitol.
But stories have also emerged of collateral damage, including misidentification and false arrests. The technology has also been used to inflict intentional damage. Employees who worked at a firm involved in a lawsuit against Madison Square Garden found themselves singled out and banned from attending events at MSG, Radio City Music Hall and other venues owned by its holding company.
Although Clearview has faced bans in Australia, the EU and Canada, the US government has contracted with the company to work on projects for the Departments of Homeland Security and Defense, including the development of augmented reality glasses to identify people in real time as they go about their daily lives. This is referred to as “passive identification” and human rights groups—really anyone interested in privacy—are up in arms over the specter not only of Orwellian-level surveillance, but surveillance for sale by subscription. “Big Brother is Watching You” could be a description of Clearview’s business model.
In a twist laced with irony, the CIA has expressed concern that informants critical to the agency’s mission have been compromised by facial recognition tech. The surveillers have become the surveilled.
Could Clearview share or even sell intelligence data? Barring oversight—and it is hard to imagine what oversight would look like—it is quite possible that no one would know if they did. Which means a private company founded only six years ago has the potential to upend national security.
Hill notes that Google and Facebook each developed facial recognition software more than a decade ago, yet both determined it was too dangerous to deploy. The difference with Clearview AI, according to Hill, is one of ethics. “They were willing to do what other companies weren’t willing to do.”
Clearview is one of a small but growing number of private companies that have effectively managed to corner markets few realized would be markets until suddenly they were. The US government is now entirely dependent on SpaceX, a private company owned by Elon Musk, to ferry astronauts and supplies to the International Space Station. Starlink, a global satellite internet service owned by SpaceX, and thus also owned by Musk, controls Ukraine’s access to the internet, with significant ramifications for the war and, by extension, US foreign policy.
Yet that pales in comparison to the potential power of Clearview’s 35-year-old CEO, Hoan Ton-That, who controls a database that can influence the lives and futures of billions of individuals in ways that aren’t always obvious.
The one-sentence warning of an impending AI apocalypse may be sobering, but the clear and present dangers of AI point to the human side of the equation. Will the tech be used for good or evil? Will facial recognition open doors or slam them shut? Will it keep us safer or make us more vulnerable?
The Thomas Crown Defense
What happens if the face being recognized isn’t real? “Deepfake” photos and videos are getting better, easier and cheaper to create and are already starting to proliferate online, the digital “corpus” banquet upon which LLMs continually feed.
Could one AI-enabled annoyance/threat cancel out another? Could deepfakes be the best way to hide in plain sight? With a nod to Magritte and to Thomas Crown:
Ce n’est pas moi.