AI is not like social media
So don't treat it that way.
I have a theory that part of the deep skepticism about the utility of LLMs comes from equating them with social media, even if subconsciously.[1] Especially for people born in the late ’90s and later, social media is really the only technological innovation they’ve personally experienced, and I think it has created some unfortunate priors: that new innovations are inevitably exploitative.
Nearly alone among technologies, algorithmic social media is almost universally bad. The redeeming characteristics it does have, like bringing together disparate groups of people to bond over unusual interests or discuss rare diseases, are features of the internet, not of social media. Social media in fact destroyed many of those old (often well-moderated) vBulletin sites by drawing users’ attention elsewhere. Yet for many people, social media is modern technological innovation.
Understandably, then, people look around at another huge innovation coming out of Silicon Valley, reflect on what the last big thing did to their friends, their families, and their politics, and think “never again.” I think this is how you get an administration saying “there will be no [AI] start-ups” when visited by VCs looking to collaborate.
AI is nothing like social media: there is no social lock-in; there is a clear direct-pay economic model; and the industry currently attracts many high-quality, thoughtful CEOs.
There’s a whole taxonomy of reasons why social media got so bad, but the key point is that it is very hard to leave a platform when everyone else is still there. There is a “right of exit” in theory, but in practice these platforms are something more like natural monopolies. You can set out to start your own “better” social media thing, but nobody will follow you and you will feel sad, alone, and dejected. I tried, and you almost certainly didn’t get here from there.
In contrast, the relationship between you and an AI model is like the relationship between you and a car—transactional and easy to change whenever you feel like it. BMW made great cars fifteen years ago, and I love the one I got in 2007. Now they charge a subscription fee for heated seats and I think the cars are hideous, but nobody in my life will care or probably even notice if I get a different brand next time.
Furthermore, there is practically no switching cost, and the smartest people I know constantly rotate among the different models, because the best ones seem to change weekly. Heck, products like Cursor and Kagi just have drop-downs; you don’t even need to create a new account to experiment with alternatives. If it looks like OpenAI is going to go the Facebook route on their product, well, that’s unfortunate, but Anthropic is happy to take your $20/month and not show you ads.
Speaking of taking your $20/month, the economic model here is pretty obvious. These LLMs are incredible tools, and people are used to paying for tools. Paying $20 to be outraged or shown an infinite scroll of stupid (but addictive!) videos is a much less viable model, because asking people to pay to be manipulated is . . . challenging. In contrast, there is no obvious reason why all AI companies must become behaviorally manipulative advertisers. While some will, of course, there will always be premium options that trade on their non-manipulative credentials.
Finally, the social media model is, let’s be honest, pretty gross. It is an extremely odd thing to want to be the CEO of one of these companies. It’s pretty much all downside, for one, and the selection effects are brutal: if you are not willing to exploit human weakness to drive attention, someone else will, and that person will win.
AI, on the other hand, is fascinating. It’s also lucrative. There is no difficulty attracting good actors to these companies. That is not to say that all AI CEOs will be saints, certainly, but the industry doesn’t actively select against them.
To be clear, I am not saying that everything will be roses and sunshine here. There are plenty of risks and plenty of downsides, just like with any new technology (I have written about some of them, including the dangers of manipulative ads and companies’ destroying the consumer surplus). I am only arguing against looking at this technology through the lens of the last unfortunate technology we created. It’s different, and we should think about it differently.
I don’t mean fears around job disruption and malicious use, but the surprisingly common dismissal that it could even be useful. ↩︎