Why don't we get the news we need?
Art: The Mirror of Venus, Edward Burne-Jones, 1875
Last month, New York Times technology journalist Kashmir Hill wrote a shocking piece about a US-based company selling facial recognition software to law enforcement agencies around the country.
The story was about Hoan Ton-That, who wandered aimlessly around Silicon Valley after dropping out of college, looking for a business plan that would make him rich. In 2016, he moved to New York.
Tall and slender, with long black hair, he considered a modeling career, he said, but after one shoot he returned to trying to figure out the next big thing in tech. He started reading academic papers on artificial intelligence, image recognition and machine learning.
He was playing around with facial recognition when he met Richard Schwartz. Schwartz had a ton of contacts, and soon, after some fumbling around, they were talking to police departments across the country and had built the company into what it is today.
The story was a fantastic piece of reporting that had all the ingredients of a news story that’s going to go viral: Startling privacy violations. Shady government agencies. A former “Bitcoin believer.” Connections to Rudy Giuliani. Peter Thiel.
And it did indeed go viral. Hill’s tweet of the story amassed 13 thousand likes, including my own.
However, as I kept seeing the story cycle throughout the mainstream media, I became increasingly dissatisfied with how it was reported. After a couple weeks of reflection, I think it shows how the modern journalism model is broken.
FindFace in First Place
The main issue is that the story made it seem like Clearview AI was the first company anywhere to do facial recognition at a massive scale based on scraping social media. The headline itself screams in bold font, “The Secretive Company That Might End Privacy as We Know It.” The text calls the app “groundbreaking”, and notes that “Tech companies capable of releasing such a tool have refrained from doing so.” However, there was a big problem: it didn’t explain that Clearview AI was not the first company of its ilk.
A Russian company had already been doing facial recognition based on scraping social network data for years, as Anton reminded me in the replies.
He wasn’t the only one. Twitter replies to Hill’s article said the same thing.
To understand that Clearview’s AI is not doing anything new, it helps to have some background on FindFace. The person who’s done some of the best reporting on this is Daniil Turovsky. He’s a journalist for Meduza, an independent Russian-language media outlet based out of Latvia, run by Galina Timchenko, who was kicked out of her last job at Lenta.ru (an online news source) by Alexander Mamut. Remember that guy? He’s the head of Rambler group, which raided Nginx’s offices and dragged its creator into police custody.
Turovsky often writes about Russian tech, particularly as it relates to security and hackers. In 2017, he wrote a story in Russian called, “The End of Private Life,” about FindFace’s technology.
The introduction to the piece reads,
Приложение FindFace, позволяющее сфотографировать человека на улице, а потом найти его аккаунт во «ВКонтакте», в последнее время регулярно появляется в новостях: с помощью него делают арт-проекты или травят женщин, снимающихся в порно. Специальный корреспондент «Медузы» Даниил Туровский разыскал тех, кто разработал технологию FaceN, ставшую основой FindFace, а также тех, кто эту технологию купил, — и выяснил, что это только начало:
The FindFace app, which allows you to photograph a person on the street and then find their account on VKontakte (the Russian Facebook clone), has frequently been in the news as of late. With the help of FindFace, people have created art projects, or harassed women who act in pornographic movies. Meduza’s special correspondent Daniil Turovsky tracked down those who created FaceN, the technology that became the basis of FindFace, as well as those who have purchased the technology, and found that all of this is just the beginning.
Sound familiar? If it does, it’s because the Clearview tech did the same thing:
His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.
I think the whole FindFace piece is a very worthwhile read. It’s not available in English (as far as I know), but I’ve translated the parts relevant to the Clearview article below.
In the middle of the 2000s, on weekdays on the intercity buses from Troitsk to Moscow, you could have met a tall, winsome boy named Artem Kukharenko. The road to Moscow took an hour and a half one way - past the Vnukovo airport, the Moscow X-ray factory, and through the Moscow Ring Road. Even at that time, his main hobby was programming. He joined after-school groups in informatics, and in eighth grade was accepted into Gymnasium № 1543 in southwestern Moscow, considered to be one of the strongest mathematics schools in the city. In the summers, Kukharenko attended computer schools, where he learned algorithms, data structures, and their mathematical analysis (Vicki's note: This is all pretty standard comp sci class fare). During the school year, he participated in math Olympiads, and in 2006, he won the All-Russian Olympiad in Informatics.
After graduating, Kukharenko was accepted to Moscow State University (Vicki's note: one of the most prestigious schools in Russia), to the department of computational mathematics and cybernetics, not even thinking about applying anywhere else. During his second year, he took a special elective led by Anton Konushin, "Introduction to computer vision." At the end of the year, Konushin invited those interested in the topic to complete a few assignments and come work at his lab. Kukharenko got in. The laboratory conducted experiments in machine learning and neural networks.
When he was close to graduating, on the advice of the head of the lab, Kukharenko turned his attention to a new sphere of study - facial recognition. In 2012, he and Konushin wrote a paper on facial recognition in photographs. (Vicki’s note: if you’re interested in reading the paper, can read Russian and feel like spending half an hour reading about facial recognition, here it is. It adds absolutely nothing to the newsletter, but I spent too long tracking it down not to include it.)
Turovsky goes on to say that afterwards, Kukharenko stopped thinking about facial recognition and went on to do a bunch of other stuff, but that over the 2015 winter holidays he found himself bored, and, together with his girlfriend, hand-labelled 150 pictures of dogs and their corresponding breeds. Remember how I said NLP was people all the way down? Well, labeling faces is people all the way down, too.
He then trained a neural network on this data and built it into an Android app called Magic Dog, which he said could take pictures of dogs on the street and tag them with the proper breed. If you’re interested, the actual downloadable executable is still available here. The description asks the reader ominously to
Download the application now while it's free and has no ads!
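To give a sense of how small a holiday side project like that is, here is a minimal sketch, not Kukharenko’s actual code, of fine-tuning a pretrained image classifier on a hand-labeled folder of dog photos. The folder layout, model choice, and training details are all my own assumptions:

```python
# A minimal sketch (not Magic Dog's actual code): fine-tune a small
# pretrained classifier on hand-labeled dog photos, assuming a layout
# like dog_photos/<breed_name>/*.jpg.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("dog_photos", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a pretrained backbone and swap in a new head, one output per breed.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

With a pretrained backbone, even a tiny hand-labeled dataset like his 150 photos can get you a demo-quality classifier, which is part of why these side projects keep appearing.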
He decided to shop the app around to investors. The pitch went something like this:
На первых встречах с потенциальными инвесторами Кухаренко, ссылаясь на опыт Кремниевой долины, рассказывал, что нейросети и распознавание лиц — это будущее. В последние годы интернет-гиганты Google, Facebook, Apple скупили десятки проектов разработчиков распознавания лиц и нейросетей — Deepmind, MSQRD, Face.com и другие.
During his first meetings with potential investors, Kukharenko, citing the experience of Silicon Valley, said that neural nets and facial recognition are the future. In the past couple of years, the internet giants, like Google, Facebook, and Apple, have bought up dozens of facial recognition and neural network projects: DeepMind, MSQRD, Face.com, and others.
He got a few investors, founded a company called NtechLab, got a fancy office in Moscow, and hired some developers. He bought and installed four servers. He said, “Google uses thousands of servers to do this. We only need four.”
In a couple of months, the developers scraped VKontakte for faces and trained a neural network to recognize and match them. In late 2015, Kukharenko found out that the University of Washington was hosting a facial recognition competition, entered, and beat Google’s FaceNet: Kukharenko’s algorithm recognized 73.3% of the faces, while FaceNet recognized 70.5%.
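Turovsky doesn’t describe the implementation, and FindFace’s code has never been published, but the general scrape-embed-match technique is easy to sketch. Here is a minimal version using the open-source face_recognition library; the folder and file names are invented for illustration:

```python
# A minimal sketch of the general "scrape faces, embed them, match a query"
# technique, using the open-source face_recognition library. This is not
# FindFace's or Clearview's actual pipeline; the paths are placeholders.
import os
import face_recognition

# 1. "Index" a folder of scraped profile photos: one embedding per face.
known_encodings, known_names = [], []
for filename in os.listdir("scraped_profiles"):
    image = face_recognition.load_image_file(os.path.join("scraped_profiles", filename))
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was found
        known_encodings.append(encodings[0])
        known_names.append(filename)

# 2. Embed the query photo (say, a snapshot taken on the street).
query = face_recognition.load_image_file("street_photo.jpg")
query_encodings = face_recognition.face_encodings(query)

# 3. Match by distance in embedding space: the closest stored face wins.
if query_encodings:
    distances = face_recognition.face_distance(known_encodings, query_encodings[0])
    best = int(distances.argmin())
    print(known_names[best], distances[best])
```

The point of the sketch is that once you have a pile of scraped profile photos, matching a street photo against them reduces to a nearest-neighbor lookup in embedding space.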
The next paragraph should tell you all you need to know about Kukharenko and what came next for his app.
Кухаренко ходит по офису в шортах и футболке. Он похож на Эдварда Сноудена, но его точка зрения на приватность — полная противоположность взглядам главного противника слежки и перехвата трафика. «Мне как обычному человеку важнее безопасность, чем приватность», — говорит Кухаренко.
Kukharenko walks around the office in shorts and a t-shirt. He looks like Edward Snowden, but his views on privacy are the complete opposite of those held by the famous opponent of surveillance and traffic interception. “For me, as an ordinary person, safety is more important than privacy,” he says.
In 2016, he met an advertising guy named Maxim Perlin, who asked him why NtechLab wasn’t developing anything for the mass market. That’s how FindFace was born. The idea was that you could go outside, point your camera at anyone on the street, and do a reverse lookup of their VKontakte profile based on the scraped training data.
The app’s original launch page, as screenshotted by Meduza, featured three women suggestively inviting you to connect.
At the time of the launch, Perlin wrote on his Facebook:
«Это на самом деле разрывает всякие шаблоны и стирает на хрен любую анонимность, — писал он. — Увидев симпатичную девушку в клубе, вы можете сфотографировать ее на телефон и моментально найти ее профиль во „ВКонтакте“, узнать имя, интересы и отправить ей сообщение».
This really breaks the mold and erases any kind of anonymity whatsoever. If you see a pretty girl in the club, you can take a picture of her on your phone and immediately find her profile on VKontakte, find out her name and interests, and send her a message.
The app was an instant hit in Russia. By June it had been downloaded over 1 million times. It, predictably, had issues. For example, users used it to out and harass women who performed in porn under pseudonyms.
This is when the app started gaining attention in the Western media as well. (The New York Times also reported on it at length.) In an interview with the Guardian, Kukharenko and his co-founder revealed that they had even loftier goals for it:
But the FindFace app is really just a shop window for the technology, the founders said. There is a paid function for those who want to make more than 30 searches a month, but this is more to regulate the servers from overload rather than to make money. They believe the real money-spinner from their face-recognition technology will come from law enforcement and retail.
And that it did. In 2020, FindFace changed its website to a more serious, corporate blue and a .pro domain, and added case studies of organizations that are currently using it, including, most recently, the Moscow government.
“We believe that the Moscow face recognition system is the largest in the world,” said Minin, adding that there may be systems that are larger that scan archive footage.
Why is the media bad now?
All of this background brings us to the main question I had while reading the New York Times story: Why was the reader led to believe that Clearview was the first instance of this kind of massive data scraping and training of neural nets on faces, when this was not the case?
The context on this issue is really important. First, readers shouldn’t overreact to the hype about the Clearview story, because it’s not new. But they should also realize that facial recognition working in tandem with law enforcement is a global problem.
The takeaway should be this: all around the world, smart, bored young people are fooling around with facial recognition algorithms without thinking about the consequences. And the international market for these is enormous. And to say that big companies like Google aren’t producing these kinds of systems is only half-right - they may not be building the apps, but they’re providing the building blocks: Microsoft’s Face API, Google’s Face Detection, and Amazon’s Rekognition, which (for some reason, unlike the others) got into so much hot water that Amazon had to publish a FAQ for it. The technical aspects of the system are also glossed over. This lack of context does readers a great disservice. It scares us, but doesn’t allow us to correctly form our opinions about an important issue.
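To make the “building blocks” point concrete, here is roughly how little code it takes to compare two faces with one of those cloud services, sketched against Amazon Rekognition’s CompareFaces call via boto3. The filenames and similarity threshold are my own placeholders:

```python
# A minimal sketch of using a cloud "building block" directly:
# compare a probe photo against a scraped photo with Amazon Rekognition.
# Assumes AWS credentials are already configured; filenames are made up.
import boto3

client = boto3.client("rekognition")

with open("probe_photo.jpg", "rb") as source, open("scraped_photo.jpg", "rb") as target:
    response = client.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80,  # only return matches above 80% similarity
    )

for match in response["FaceMatches"]:
    print(f"Match with similarity {match['Similarity']:.1f}%")
```

None of this is Clearview’s or FindFace’s pipeline; it’s just a demonstration of how low the barrier to entry is once the building blocks are an API call away.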
Here’s why I think this happened. Hill spent a long time researching and putting together this complicated story. It involved understanding how facial recognition works, talking to a bunch of lawyers, reading police listservs, and chasing down Ton-That himself. It involved LinkedIn searches, interviews, and going to Clearview’s office to get answers.
In the meantime, Clearview was tracking her:
While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
All of this is extremely stressful. It’s even more so for a high-profile journalist. Hill has over 270k Twitter followers, which would be hard enough for a normal person to deal with, as self-help guru Tim Ferriss writes in a meditation on the awfulness of notoriety.
The point is this: you don’t need to do anything wrong to get death threats, rape threats, etc. You just need a big enough audience. Think of yourself as the leader of a tribe or the mayor of a city.
For a female journalist in a technical field, it becomes an exercise in endurance.
Then what gets added to that pressure cooker is the enormously hard task of getting a story out on a deadline, into the relentless, ceaseless news cycle. If you’re a journalist and you’re working on a story like this, you need to be able to break it first, before anyone else does, or you won’t get ahead of the news cycle. You need to balance accuracy with getting this dynamite story out the door as soon as humanly possible.
And, if you’re an editor, you need to be able to package up this story with as much outrage and fear as possible so that people will click on it, and so you can justify your advertising and subscription revenue in the fight against shrinking margins. So then you pick a headline like, “The Secretive Company That Might End Privacy as We Know It”, even though the end of privacy has already been declared at least once in Russia, and maybe more times in China. You need to get people to click.
In the middle of this economic chaos created by the digital economy, there is no time to stop, pause, reflect, and really research an issue, to think. Every minute, every extra day gathering information is a luxury.
Hill says on a podcast about the story (which is very interesting and you should take a listen, because it includes a lot of detail that I also think should have been in the story) that she’s been covering privacy for ten years. So, it’s probably fair to say she should have already known about FindFace. But how, in the middle of the vicious news cycle, is she supposed to get the time to find all of this out? In addition to everything she already had to research for the article, should she also have read up on facial recognition in Russia, China, and the EU? In a fair world, yes.
But the only reason I personally was able to write about this is that I’m not a journalist, not on a news cycle, and on only a loose deadline (and even that’s hard enough!).
Wednesday: Crap. Thursday: Crap. Friday: Thank god I sent this newsletter. Friday afternoon: Crap.
In fact, as you’ve probably noticed, Normcore is somewhat anti-cycle, exploring whatever issues are important to me in-depth. For example, in writing this issue of Normcore, I first read Hill’s story. Then I waited about a week, watching for all the responses to come in. Then, I re-read the original Meduza story about FindFace. One of the reasons I was able to find that story is because I am, right now, reading Turovsky’s book about hackers, which had Kukharenko’s story as one of the chapters.
Then, I went back to see if I could find his original academic papers to see if he was the real deal. Even then, I started running out of time, because I’m on a self-imposed deadline to release every Monday-Tuesday. I can’t imagine what New York Times journalists dealing with Twitter-length news cycles go through.
I can take the time to read computer vision papers in Russian, to write about weird, niche things, because I’m sponsored by readers and I’m not beholden to editors. But, even if I wasn’t, Normcore isn’t my daily bread and butter. I have my full-time salary to fall back on. For journalists in the modern news cycle, this isn’t a hobby. They don’t have the luxury of taking days to research PDFs of neural net research in Russian, nor to find experts who can do so. They can’t write random articles about Elon Musk memes. This is their job, and the pace of their lives.
And, journalists are squeezed from all sides. On the one hand, they’re shackled to the deadline, to the news cycle. On the other, they’re beholden to their editors, who demand maximum outrage to generate more revenue. So it’s even possible that Hill knew about FindFace and wrote about it in her draft. Maybe her editor just didn’t think it was germane to the article’s narrative.
It’s vicious. And not only does it suck for the journalist, but it sucks for the reader. Because what you get is a media ecosystem that both intentionally and unintentionally misinforms the very readers whose attention it relies on. You get a bunch of junk data, fear, and adrenaline-driven journalism and editing that does not take into account that the very fake news it reports on is fake news it might be generating itself.
In I Hate the News, a piece that I quote extremely frequently, Aaron Swartz (RIP) says,
The news’s obsession with having a little bit of information on a wide variety of subjects means that it actually gets most of those subjects wrong. (One need only read the blatant errors reported in the corrections page to get some sense of the more thorough-going errors that must lie beneath them. And, indeed, anyone who has ever been in the news will tell you that the news always gets the story wrong.) Its obsession with the criminal and the deviant makes us less trusting people. Its obsession with the hurry of the day-to-day makes us less reflective thinkers. Its obsession with surfaces makes us shallow.
And it is in these shallows, on Twitter, on Facebook, on all the social media platforms where we don’t stop to think, that we muck about on a daily basis and, buffeted by the fast, frantic rush of the news stream, get stuck.
The fear being reported in these articles is the fear of facial recognition. But what if the other real fear is that we’re not getting the full story?
What I’m reading lately:
New tech site/newsletter: Protocol
The problem with Twitter is supernodes: users with lots of followers
The Glossier brand
The Newsletter:
This newsletter’s M.O. is takes on tech news that are rooted in humanism, nuance, context, rationality, and a little fun. It goes out once a week to free subscribers, and once more to paid subscribers. If you like it, forward it to friends and tell them to subscribe!
The Author:
I’m a data scientist. Most of my free time is spent wrangling a preschooler and a baby, reading, and writing bad tweets. Find out more here or follow me on Twitter.