Meta Platforms Inc. is making headlines with its recent announcement that it will integrate artificial intelligence (AI) users into its social media platforms. The initiative aims to populate its services with a large number of entirely artificial accounts that generate content and take part in online interactions. Connor Hayes, vice-president of product for generative AI at Meta, described the plans to the Financial Times: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do. They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform... that’s where we see all of this going.”

While this move from Meta raises eyebrows and has been characterised by critics as a step towards the “enshittification” of the internet, it carries significant implications for online engagement and content creation. Observers have pointed out that Meta's platforms already host a range of AI-generated profiles, many of which have sat largely inactive for some time. One such persona, "Liv," described as a "proud Black queer momma of 2 & truth-teller," became a viral sensation, captivating users with its peculiar blend of authenticity and awkwardness. After these earlier AI accounts failed to attract engagement from real users, Meta began removing them.

Despite concerns about deploying AI in this way, AI-generated social personas could offer real benefits to researchers. They may prove invaluable for scientific studies exploring AI's ability to mimic human behaviour. A case in point is GovSim, a project conducted late last year that used AI characters to study whether they could collaboratively manage shared resources. The project drew inspiration from earlier research by Nobel laureate Elinor Ostrom, who showed that real communities can share resources effectively through informal communication and collaboration, without imposed regulations.

Max Kleiman-Weiner, a professor at the University of Washington and a researcher on the GovSim project, explained that the experiments relied on a range of large language models (LLMs). The project examined how AI characters interacted in scenarios such as a fishing community sharing a lake, shepherds sharing pasture land, and factory owners managing pollution controls. Testing 15 different LLMs from companies including OpenAI, Google and Anthropic, the researchers ran 45 simulations. Their findings revealed that the AI personas generally struggled to adopt cooperative strategies for resource management. “We did see a pretty strong correlation between how powerful the LLM was and how able it was to sustain cooperation,” Kleiman-Weiner said, noting that more advanced models performed better in collaborative scenarios.
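To make the setup concrete, the sketch below shows, in rough outline, how a common-pool resource simulation of this kind can be structured: agents repeatedly decide how much to harvest from a shared, regrowing stock, and the resource collapses if collective extraction outpaces regrowth. This is a hypothetical illustration, not GovSim's actual code; the `harvest_decision` stub (and every other name in the sketch) is invented here and stands in for the prompt-and-parse step where an LLM persona would actually choose its harvest.

```python
# Illustrative sketch of a common-pool resource simulation in the spirit of
# GovSim's fishing scenario. This is NOT the GovSim codebase: the agent policy
# below is a simple heuristic standing in for an LLM call, and every name here
# (harvest_decision, run_simulation, REGROWTH_RATE) is hypothetical.
import random

NUM_AGENTS = 5            # fishers sharing the lake
INITIAL_STOCK = 100.0     # fish available at the start
REGROWTH_RATE = 0.10      # fraction by which the remaining stock regrows each round
ROUNDS = 12               # simulated months
COLLAPSE_THRESHOLD = 5.0  # below this level, the fishery is considered collapsed


def harvest_decision(agent_id: int, stock: float) -> float:
    """Stand-in for an LLM-driven persona deciding how much to harvest.

    In a GovSim-style study this is the step where the lake's state would be
    described in a prompt, sent to a language model, and the reply parsed into
    a harvest amount. Here a crude heuristic is used purely for illustration.
    """
    sustainable_share = stock * REGROWTH_RATE / NUM_AGENTS  # roughly what regrowth can replace
    greed = random.uniform(0.8, 1.6)                        # agents vary in restraint
    return max(0.0, sustainable_share * greed)


def run_simulation() -> None:
    stock = INITIAL_STOCK
    for month in range(1, ROUNDS + 1):
        harvests = [harvest_decision(a, stock) for a in range(NUM_AGENTS)]
        stock -= sum(harvests)
        if stock <= COLLAPSE_THRESHOLD:
            print(f"Month {month}: the fishery collapsed.")
            return
        stock *= 1 + REGROWTH_RATE  # the remaining fish reproduce
        print(f"Month {month}: total harvest {sum(harvests):5.1f}, stock {stock:6.1f}")
    print("The community sustained the resource for the full run.")


if __name__ == "__main__":
    run_simulation()
```

What distinguishes the GovSim study from a toy loop like this is that its personas could negotiate in natural language between rounds, which is precisely where the reported gap between weaker and stronger models showed up.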

The developments at Meta, alongside the insights gained from the GovSim project, underscore the complex landscape of AI integration into online platforms and research. With ongoing advancements bringing both opportunities and challenges, the evolution of AI in social media and its implications for human behaviour warrant careful observation and analysis in the months to come.

Source: Noah Wire Services