AI Social Media Users Are Not Always a Totally Dumb Idea

Meta caused a stir last week when it let slip that it intends to populate its platform with a significant number of entirely artificial users in the not-too-distant future.

“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice-president of product for generative AI at Meta, told The Financial Times. “They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform … that’s where we see all of this going.”

The fact that Meta seems happy to fill its platform with AI slop and accelerate the “enshittification” of the internet as we know it is concerning. Some people then noticed that Facebook was, in fact, already awash with strange AI-generated individuals, most of which stopped posting a while ago. These included, for example, “Liv,” a “proud Black queer momma of 2 & truth-teller, your realest source of life’s ups & downs,” a persona that went viral as people marveled at its awkward sloppiness. Meta began deleting these earlier fake profiles after they failed to get engagement from any real users.

Let’s pause from hating on Meta for a moment, though. It’s worth noting that AI-generated social personas can also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.

An experiment called GovSim, run in late 2024, illustrates how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore how humans collaborate when they have access to a common resource, such as communal land for grazing livestock. Several decades ago, the Nobel prize–winning economist Elinor Ostrom showed that, instead of depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without any imposed rules.

Max Kleiman-Weiner, a professor at the University of Washington and one of those involved with the GovSim work, says it was partly inspired by a Stanford project called Smallville, which I previously wrote about in AI Lab. Smallville is a Farmville-like simulation involving characters that communicate and interact with each other under the control of large language models.

Kleiman-Weiner and colleagues wanted to see if AI characters would engage in the kind of cooperation that Ostrom found. The team tested 15 different LLMs, including those from OpenAI, Google, and Anthropic, on three imaginary scenarios: a fishing community with access to the same lake; shepherds who share land for their sheep; and a group of factory owners who need to limit their collective pollution.
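
To make the setup concrete, here is a minimal sketch of the kind of shared-resource loop GovSim studies, using the fishing scenario. Everything in it is an illustrative assumption: the numbers, the regrowth rule, and the two hard-coded policies all stand in for decisions that the actual experiment elicits from prompted LLM agents.

```python
# A toy, GovSim-style common-pool resource loop: a hypothetical sketch,
# not the researchers' code. Each round, agents decide how many fish to
# harvest from a shared lake, and whatever remains regrows. In GovSim,
# each harvest decision would come from an LLM prompted with the
# scenario and the group's discussion, not from a hard-coded rule.

STOCK = 100.0     # initial fish in the shared lake (illustrative number)
CAPACITY = 100.0  # carrying capacity of the lake
REGROWTH = 2.0    # remaining stock doubles each round, up to capacity
ROUNDS = 12
AGENTS = 5

def greedy_policy(stock: float, n_agents: int) -> float:
    """Stand-in for a short-sighted agent: split everything evenly now."""
    return stock / n_agents

def sustainable_policy(stock: float, n_agents: int) -> float:
    """Stand-in for a cooperative agent: harvest only half the stock."""
    return (stock / 2) / n_agents

def run(policy) -> float:
    """Run one simulation; return the stock left after the final round."""
    stock = STOCK
    for _ in range(ROUNDS):
        total_catch = sum(policy(stock, AGENTS) for _ in range(AGENTS))
        stock = max(stock - total_catch, 0.0)
        if stock == 0.0:
            return 0.0  # the commons collapsed: cooperation failed
        stock = min(stock * REGROWTH, CAPACITY)  # regrowth between rounds
    return stock

print("greedy agents leave:     ", run(greedy_policy))       # collapses to 0
print("cooperative agents leave:", run(sustainable_policy))  # stays at 100
```

The toy version makes the failure mode visible: any group that harvests faster than the stock regrows collapses the commons within a few rounds, while a group that restrains itself can keep harvesting indefinitely.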

In 43 out of 45 simulations, they found that the AI personas failed to share resources correctly, although smarter models did do better. “We did see a pretty strong correlation between how powerful the LLM was and how able it was to sustain cooperation,” Kleiman-Weiner told me.


