The rapid evolution of artificial intelligence (AI) technology has introduced new complexities into the realm of consumer reviews, raising concerns among watchdog groups and researchers. With the rise of generative AI tools such as OpenAI's ChatGPT, fraudsters can now produce high volumes of fabricated online reviews with minimal effort, creating challenges for merchants, service providers, and consumers alike.

Deceptive reviews have long plagued online platforms like Amazon and Yelp, but the infusion of AI text-generation technology has exacerbated the problem, making it easier for scammers to produce convincing yet fraudulent content at scale. The Milwaukee Independent reports that the impact is especially acute during the holiday shopping season, when consumers rely heavily on reviews to guide their purchasing decisions.

Industry analysts are witnessing an upsurge in AI-generated reviews across sectors including e-commerce, hospitality, and dining, as well as services such as home repairs, medical care, and educational lessons. According to The Transparency Company, a tech group focused on identifying fraudulent reviews, an analysis it conducted in December yielded alarming figures: of 73 million reviews scrutinised across home, legal, and medical services, nearly 14% were deemed likely fake, and approximately 2.3 million appeared to have been produced at least in part with AI tools. Maury Blackman, an investor and advisor to tech startups, emphasised how much more efficient these tools have made scammers' operations.

In August, the software company DoubleVerify noted a "significant increase" in mobile applications with reviews that appeared to be AI-generated. Such reviews can mislead users into downloading malicious apps that compromise their devices or inundate them with intrusive advertisements. Subsequently, the Federal Trade Commission (FTC) filed a lawsuit against Rytr, a provider of AI writing tools, for allegedly enabling users to churn out fraudulent reviews in bulk across industries ranging from garage door repair to the sale of counterfeit handbags.

Prominent review platforms have begun grappling with the prevalence of AI-generated content. Max Spero, CEO of Pangram Labs, reported detecting convincing AI-produced reviews on Amazon, noting that their well-structured writing helped push them to the top of search results. Discerning genuine from fake reviews remains difficult, however: Amazon has stated that external parties often lack access to the data needed to identify patterns of abuse.

Many of the AI-generated entries on Yelp have been linked to individuals seeking to amass enough reviews to earn an "Elite" badge. Kay Dean, a former federal investigator who now leads the watchdog group Fake Review Watch, noted that the badge signals trustworthy content and grants access to exclusive local business events, giving fraudsters an incentive to build realistic-looking profiles.

The use of AI does not automatically render a review fake: some users harness AI tools to articulate genuine sentiments or to polish their language. Experts in the field nonetheless stress the need for a balanced approach. Sherry He, a marketing professor at Michigan State University, suggested that platforms focus on the behavioural patterns of dishonest actors rather than discouraging legitimate users from employing AI for benign purposes.

To mitigate the impact of fraudulent reviews, major corporations are updating their policies and review-removal systems to account for AI-generated content. Amazon and Trustpilot have said they will allow AI-assisted reviews, provided they reflect authentic consumer experiences. Yelp has adopted a more cautious stance, instituting guidelines that require reviewers to compose their own original text.

In light of increased AI tool adoption by consumers, the Coalition for Trusted Reviews, consisting of Amazon, Trustpilot, and several travel and employment review sites, aims to develop best practices and advanced AI detection systems to safeguard consumers and maintain review integrity.

The FTC's recent rule banning fake reviews, which came into effect in October, imposes penalties on the businesses and individuals engaged in the practice, while tech companies remain shielded from legal repercussions for content posted by users on their platforms. Despite ongoing efforts by companies like Amazon and Google to eradicate fake reviews, concerns persist about their effectiveness. Dean pointed to how readily fake reviews can still be found on these platforms, suggesting that companies of that size could do more to sharpen their detection capabilities.

Experts recommend that consumers look out for certain indicators of fake reviews, such as overly positive or negative sentiment and the repetition of product jargon. Research from Yale has demonstrated that distinguishing AI-generated from human-written reviews can be extremely challenging, with some detectors struggling with the shorter texts typical of online reviews.
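To illustrate how such surface cues might be checked mechanically, below is a minimal Python sketch. It is a toy under stated assumptions: the superlative list, weights, and scoring formula are invented for this example and do not represent the methods of the Yale researchers, Pangram Labs, or any review platform, which typically rely on trained language models rather than hand-written rules.

```python
import re

# Illustrative word list; this list, the weights, and the cap below are
# assumptions invented for this sketch, not any platform's real rules.
SUPERLATIVES = {"amazing", "perfect", "incredible", "horrible",
                "worst", "flawless", "unmatched", "exceptional"}

def suspicion_score(review: str) -> float:
    """Crude heuristic score in [0, 1]; higher means more suspicious.

    Combines two cues experts mention: extreme sentiment (proxied here
    by superlative density) and heavy repetition of the same terms,
    such as product jargon.
    """
    words = re.findall(r"[a-z']+", review.lower())
    if not words:
        return 0.0

    # Cue 1: share of words that are superlatives.
    superlative_rate = sum(w in SUPERLATIVES for w in words) / len(words)

    # Cue 2: share of words taken up by the single most repeated
    # non-trivial term (short function words are skipped).
    counts = {}
    for w in words:
        if len(w) > 4:
            counts[w] = counts.get(w, 0) + 1
    repetition_rate = max(counts.values(), default=0) / len(words)

    return min(1.0, 4.0 * superlative_rate + 3.0 * repetition_rate)

if __name__ == "__main__":
    sample = ("Amazing blender! The blender is perfect, the blender is "
              "flawless, and the blender has incredible blending power.")
    print(f"suspicion: {suspicion_score(sample):.2f}")  # near 1.0
```

Even this toy hints at why short reviews are hard to classify: with only a handful of words, both rates become noisy, which is consistent with the detector limitations researchers describe.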

However, Pangram Labs offers insight into the characteristics of AI-written reviews, noting that they often exhibit longer, well-structured formats built from generic phrases and clichéd descriptors. As the landscape of AI-generated content evolves, the implications for consumers, businesses, and online platforms continue to unfold, opening a new chapter in digital commerce and consumer interaction.

Source: Noah Wire Services