In the lead-up to the global elections of 2024, concerns about the potential impact of generative artificial intelligence (AI) on political disinformation sparked considerable debate. Experts warned that deepfake technology could produce a surge of misleading content, making it harder for voters to discern fact from fiction. Influential figures including Sadiq Khan and the Pope raised alarms about the ramifications of AI-generated propaganda, and a World Economic Forum survey identified AI disinformation as the second-most pressing risk for the upcoming elections.
However, recent analyses indicate a more restrained effect of AI disinformation than initially anticipated. Research by the Alan Turing Institute identified a mere 27 pieces of viral AI-generated content during the summer elections across the UK, France, and the European Union. Furthermore, a separate survey found that only about 5% of the British public recognised the most prominent political deepfakes associated with these elections.
In the United States, a comprehensive study by the News Literacy Project catalogued nearly 1,000 instances of misinformation related to the presidential election, of which only 6% involved generative AI. Notably, TikTok reported that removals of AI-generated content did not spike as election day approached.
Additional analysis from the Financial Times showed that mentions of terms such as “deepfake” and “AI-generated” on the platform X tracked the launches of new image-generation tools rather than any crescendo of election-related misinformation. This pattern extended to non-Western contexts: research attributed just 2% of misinformation during Bangladesh’s January elections to deepfakes, and researchers observed an “unexpected lack” of AI-generated content in South Africa’s recent elections.
Despite the widespread apprehension, reports from tech giants Microsoft, Meta, and OpenAI indicated they had discovered foreign operations attempting to exploit AI to sway electoral outcomes, but none made significant headway in reaching large audiences. Most AI-generated political content surfaced not as deception but as emotional persuasion, often used to create supportive imagery or satirical representations of political narratives. Examples cited include a deepfake of Kamala Harris at a rally adorned with Soviet flags, and an image of an Italian child with a pizza topped with insects, mocking the EU’s stance on insect-based foods.
Daniel Schiff of Purdue University noted that nearly 40% of the political deepfakes his team analysed served satirical or entertainment purposes rather than seeking to mislead voters. A subtler concern is the so-called “liar’s dividend”: the possibility that individuals will dismiss genuine, compromising content as AI-generated, thereby eroding trust in authentic information.
An analysis by the Institute for Strategic Dialogue did find signs of such confusion over political content on social media, with users frequently misclassifying legitimate images as AI-created. Nevertheless, many demonstrated scepticism towards such claims; Pew Research indicated that fewer U.S. voters struggled to identify truthful news in the run-up to the 2024 election than in 2020.
Felix Simon, a researcher at the Reuters Institute for the Study of Journalism, remarked, “We’ve had Photoshop for ages, and we still largely trust photos,” reflecting a belief that concerns surrounding deepfakes may be overstated.
Despite these reassurances, the rapid advancement of AI technology and its societal implications warrant vigilance. Deepfake misuse is already evident in impersonation scams and harassment. Experts caution that while the spectre of deepfakes in political disinformation remains salient, the core issue lies in the underlying factors that make the public willing to accept and share false narratives, notably political polarisation and media consumption shaped by platforms such as TikTok. In the aftermath of the 2024 elections, the discourse on AI-generated content continues to evolve, shifting focus towards the broader complexities of electoral integrity in the digital age.
Source: Noah Wire Services