A lot of social media content is pure fakery. Accounts are created and posts generated entirely to attract eyeballs, whether for revenue generation, political manipulation, or simply because it is ‘entertaining’ to certain people. The algorithms amplify much of this content towards ever more extreme viewpoints because that is what keeps most of us engaged. This is true because humanity, in the larger sense, is horrible.
But recently it has become clear that the corporate owners of some of the largest social media giants want to put their thumb on the scale in a more direct way. They want to create virtual people to produce the content itself, perfecting the engagement cycle by eliminating the messy human element. This seems like a really dangerous and outrageously bad idea.
Chasing engagement
The vast majority of social media content on Facebook, TikTok, Instagram, Threads, Twitter (X), and so on is presented to users by code that is intended to maximize engagement. The goal of engagement is to keep the audience locked into the site for the maximum amount of time. This is one of the key metrics sold to advertisers: more time spent staring at the content means more opportunity for the user to absorb the surrounding advertising messages.
The details are hidden as corporate secrets, but simplistically the application code is continuously measuring what type of content is most ‘sticky’ for each individual human. A complex web of analysis is performed based on what other people with interests similar to the current viewer’s find fascinating. If content on topic X makes you spend 2% longer on the site, and people similar to you were intrigued by content X+, then content X+ might make you spend 5% more. This focus on keeping you locked on the site overrules any assessment of whether the content is truthful, violent, induces hatred, or is otherwise destructive. All that matters is gluing your eyeballs to the site.
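None of this ranking code is public, so the following is a purely hypothetical Python sketch of the shape of the logic described above (every name, topic, and number is invented for illustration): score candidate posts by predicted extra time-on-site, borrow signals from ‘similar’ users, and always serve the top scorer.

```python
# Purely hypothetical sketch: a toy engagement-maximizing feed ranker.
# The real platform code is secret; all names and numbers here are invented.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str        # e.g. "knitting" or "knitting-drama"
    intensity: float  # how 'extreme' the framing is, 0.0 to 1.0

def predicted_stickiness(own_lift: dict[str, float],
                         lookalike_lift: dict[str, float],
                         post: Post) -> float:
    """Predicted % increase in session time if this post is shown.

    own_lift: observed time-on-site lift per topic for this user.
    lookalike_lift: observed lift per topic among 'similar' users.
    """
    own = own_lift.get(post.topic, 0.0)
    # Borrow signal from lookalike users: if they lingered on this
    # topic, assume the current user will too.
    borrowed = lookalike_lift.get(post.topic, 0.0)
    # More intense framings of an already-working topic score higher.
    return (own + borrowed) * (1.0 + post.intensity)

def pick_next_post(candidates: list[Post],
                   own_lift: dict[str, float],
                   lookalike_lift: dict[str, float]) -> Post:
    # The sole objective is stickiness; truthfulness never enters the ranking.
    return max(candidates,
               key=lambda p: predicted_stickiness(own_lift, lookalike_lift, p))

own = {"knitting": 2.0}               # this user: +2% session time on knitting
lookalikes = {"knitting-drama": 5.0}  # similar users linger on the drama variant
feed = [Post("knitting", 0.1), Post("knitting-drama", 0.8)]
print(pick_next_post(feed, own, lookalikes).topic)  # -> knitting-drama
```

The point of the toy is the objective function: nothing in it ever asks what the content is, only how long it is predicted to hold you.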
Engagement sounds innocuous at first. If you like posts about knitting, you’ll probably like posts that have more knitting-related content. What could be wrong with that? Well, the logic of the self-optimizing code that drives this process is relentless. If you react to a post about homeless people, the algorithm might determine that most people who react this way also engage with posts about homeless people being involved in crime. If that keeps you interested, then the logic dictates that you’ll probably respond well to content about displacing or removing homeless people. Next thing you know, you’ll be reading a stream of posts about burning homeless encampments down or imprisoning the poor. It will be more subtle than this, but the code will not care so long as it makes you spend more time yelling at the screen.
Note that human content creators are entirely complicit in the whole engagement process. People want their content to get ‘promoted’ by the algorithm, as it strokes their ego and possibly gets them paid based on the same logic. So of course they write increasingly loud and extreme material in order to get more clicks and revenue. Most of the humans creating this stuff don’t believe a word of what they say. All they care about is whether or not their audience ‘tunes in’, sees the advertisements, and buys the merchandise.
Any attempts by companies like Meta (Facebook/Instagram/Threads) to mitigate the social ills their focus on engagement causes are at best patchwork. The problem of social media radicalization and toxic tribalism will persist so long as there is a bit of code presenting content purely to drive increasing attention.
Make it worse by creating the content itself
What could be worse than driving viewers towards more and more ‘targeted’ (extreme) content? How about the platforms themselves creating the content? And if the algorithm does that, it would be even better if it generated the content ‘creators’ themselves: AI representations of the perfect customized profile for each audience member to attach to. The AI ‘creators’ could respond to changes in engagement metrics in milliseconds, continuously optimizing what they produce in order to increase attention.
This horrific concept is exactly what Meta has been working on for several years. Of course bots (AI-generated fake accounts) have been a problem on these platforms forever. But now the content platform itself is getting involved in the fake account and content generation game. Meta has access to all the user data and the levers of control over its users. I am confident that platform-controlled AI bots already exist on other platforms as well; Musk is too much of an egoist to miss the opportunity. There are no good outcomes here.
The human mind is distressingly weak, despite or perhaps because of its complexity. We can be made to believe lies and disbelieve the truth against all common-sense concepts of self-protection and our own best interests. Can the techbros possibly make this social media dystopia any worse? Yes, yes they can…
Comments
Ah yes, Zuck has decided to fluff his platform engagement numbers with AI garbage.
I saw somebody say that putting computer-controlled players into an FPS match that is short of contestants is okay, but filling out your social media site with bots is not.
Also, weren’t bots a problem before? I guess not when you make them yourself.
I think AI-controlled characters in games that are there ‘by design’ (i.e., not bots used by players meaning to cheat) are a net good. AI NPCs seem like the perfect use for AI, particularly in RPGs. Having an NPC that can ‘intelligently’ respond to changes in the game world caused by the player would be pretty cool if done well.
And yes, Wilhelm, you are absolutely right: there are already ‘third party’ bots on all social media platforms lying to us for engagement reasons. The big problem with the media companies themselves rolling out their own ‘fake’ humans is that they have access to all the internal user data in real time.
It is kind of like the difference between human trading and machine trading on the stock market: the inhumanly rapid response to user metrics could… would act as a multiplier on all the worst elements of the engagement cycle.
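To put a rough, purely hypothetical number on that multiplier: if every optimization pass nudges content a tiny bit further toward whatever holds attention, the cadence of the passes dominates the outcome. The figures in this sketch are invented for illustration only.

```python
# Hypothetical back-of-the-envelope: the same tiny per-update escalation
# compounds very differently at human versus machine cadence.

def escalation(days: float, updates_per_day: float,
               factor_per_update: float = 1.001) -> float:
    """Compounded 'extremeness' multiplier after a stretch of optimization."""
    return factor_per_update ** (updates_per_day * days)

# A human creator retuning their act once a day for a month:
print(f"{escalation(30, 1):.2f}x")         # ~1.03x
# A platform bot re-optimizing once a minute over the same month:
print(f"{escalation(30, 60 * 24):.2e}x")   # astronomically larger
```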
I don’t know. Most of the hysteria about AI sounds to me much like the same kind of hysteria that greeted the arrival of every other technological innovation. All of them have the potential to be awful but mostly what we’ve ended up with has been a much more mixed bag – some good, some bad, most just disposable.
I certainly don’t believe there are “no good outcomes”. I can think of a whole raft of ways AI personas could be used to make my life more fun, entertaining, and amusing in ways that aren’t in any obvious way detrimental, except for the energy costs. The web used to be known at least as much for its trivia and fluff as for the bad things it brought into your home. Think of all those cats doing funny things. I can imagine a world where we waste our time watching virtual pets doing and saying charming, endearing, amusing things as easily as one where it’s all negativity and hatred. From an advertiser’s point of view, do charmed, amused people spend money more readily than angry, frightened ones? Maybe it depends on what they’re selling.
Bhagpuss, I have no problem with the use of AI in, for example, a game: so long as I know it is an NPC, that’s a good use.
But I am very worried about the use of AI to create simulated human accounts with the directive to drive social media engagement. We’ve already seen how social media can and does get used to radicalize people through simple ‘reward’ mechanisms driving content creators. Multiply that by several orders of magnitude and you have what I expect a Facebook- or Instagram-managed AI bot army would do.
The whole problem seems to be that the human brain isn’t wired to engage as strongly with happiness or kindness. We pay more attention to things that incite fear and hatred: that is the clear message of the last 20 years or so of social media that gave us the extremist political situations we have in the world today. The engagement algorithms don’t care about right or wrong, good or evil: they just reward whatever generates the highest “time on screen” metrics.
Do Androids Dream of Electric Sheep? When the artificial people start interacting with each other, will they know it? (And they will interact. Even if they are programmed to recognize each other, the AI that powers them learns by scraping the internet for content, and there are already problems with AI being contaminated by AI-generated content, producing hallucinations.)
We’ve already seen what happens when humans interact in closed bubbles online; what of that process sped up to computer speed?
To use the game analogy – if all the players are bots, turning the game into something a regular human cannot recognize or compete in, what would happen? My guess is the humans would quit playing.
If the AIs are programmed to become people’s “friends”, that is more destructive, but still, I think, self-limiting. Herding people into bubbles of the like-minded and feeding them select disinformation has been very useful for some. But it has limits, as these groups disagree on issues like support for Ukraine. And because these bubbles are built on intolerance and rejection of dissent, seemingly minor differences of opinion turn former friends and influencers into enemies. Soon the bubble has burst. AI friends offer a way to pad the bubble, to keep that sense that “everyone I talk to agrees with what I think” going a little longer. But in the end it just feeds the fragmentation. And how many murders will occur because vulnerable people just did what their artificial friends told them to do is hard to say.
Current social media is based on segregation and control, while nature rewards cooperation and communication. Social media as it currently exists is a losing proposition and will fail. The only question is how much damage it does to humanity along the way.
Chris, I would like to think your hypothesis that social media as it currently exists will fail is correct. That is an optimistic viewpoint that implies some kind of fundamental goodness in humanity beyond our seemingly overwhelming need to be part of a group.
I think my perception is coloured by what has happened globally but particularly in the U.S. in recent years. Hatred and fear of ‘the different’ is the core tenet of the human tribal structure. Social media engagement mechanisms seem tailor-made to encourage formation and growth of like-minded communities around such viewpoints. It seems as if we aren’t communicating across boundaries but instead are building the boundaries higher.
I perceive that people don’t care if the group they belong to wants to create internment camps for ‘foreigners’, stop providing medical care for our elderly, or remove all life-saving vaccines from society. So long as they are all part of the same team, that’s all stuff that happens to the ‘bad people’: the ‘others’. Maybe some of the people with that tribal affiliation actually like those features of their clan. But I suspect… hope… the majority were suckered in by the desire to belong, facilitated by the social media engagement engines that surround us.
Can humanity pull its head out of its ass in time to realize that our worst instincts are being played upon by automated social networks? I don’t think AI ‘content creators’ pretending to be the perfect friend directing us to exactly the right engaging content to maximize profit will make things any better. But hopefully you are right, and the worst aspects of social media will be burned off by some kind of natural selection towards cooperation and communication for good ends instead of simply profit.