Surveillance Dressed as Friendship
The Cambridge Analytica story broke the way these things usually break—with a single piece that suddenly made everything make sense. The Guardian’s Observer and the New York Times reported in early 2018 that a British political consulting firm had harvested the personal data of 50 million Facebook users (a figure Facebook itself later revised upward to 87 million) through a survey app that passed itself off as academic research, then used those profiles to build targeted political advertising designed to influence the 2016 US election. By most available evidence, it worked.
I’ve disliked Facebook for years. Not in the fashionable way, where people claim they barely use it while still checking it twice a day, but with a genuine, low-level revulsion at what it represents and what it has done to the internet. Mark Zuckerberg has always struck me as someone who fundamentally doesn’t understand why privacy might matter to people with nothing obvious to hide, which is either a real failure of imagination or a convenient performance of one. The result is the same either way.
Edward Snowden put it plainly: Facebook is surveillance infrastructure wearing a social network’s clothes. What Zuckerberg does is collect data and sell it at a premium. Brian Acton, who co-founded WhatsApp and sold it to Facebook for nineteen billion dollars, publicly called for people to delete Facebook, which is either principled or the most expensive buyer’s remorse in history. Probably both.
The writer Sascha Lobo argued that Facebook is one of the most powerful companies on earth in terms of its influence on human perception and social behavior, and that the genuinely frightening part isn’t malice—it’s that Facebook itself doesn’t fully understand how Facebook works. The platform was perfected as an advertising machine. What it does to the people using it was always secondary. That indifference, more than any specific scandal, is the thing that should bother people.
I wrote about the dependency problem back in 2015 and nothing has become less true. Every blogger, every magazine, every creator voluntarily walked into a trap—handed Facebook their audience in exchange for reach, then discovered too late that the reach was always conditional. We optimized our posts for the algorithm, chased every like and share, deleted the comments that discouraged people from clicking the link. We built the cage ourselves.
The uncomfortable truth I kept circling was that there’s no clean exit. Stop using social networks to distribute your own content and you disappear faster than you can build an alternative. Keep using them and you remain dependent on platforms that increasingly treat publishers as parasites. The readers adapted to Facebook’s version of the internet and forgot there was anything else. Appealing to younger users felt like a government anti-drug campaign—technically valid, completely useless.
The #DeleteFacebook moment felt real for about a week. Genuine momentum, genuine anger. I waited to see whether it would actually tip—whether Facebook would finally join MySpace, Friendster, and Germany’s StudiVZ in the graveyard of social networks we once thought were permanent. It didn’t. The users stayed, the advertisers stayed, the data collection continued.
But Cambridge Analytica was also useful because it made the abstraction concrete. Surveillance capitalism sounds like theory until you can point to 50 million harvested profiles and a presidential election. Facebook has spent years functioning as a gatekeeper with authoritarian instincts—censoring content, suppressing reach, deciding what you see and from whom based on criteria it refuses to explain. Once that power becomes visible, it is very hard to pretend it isn’t there. The next reckoning will land harder. Or at least that’s what I keep telling myself.