Recent data from AIPRM reveals that interest in AI-driven technologies, particularly deepfakes and voice cloning, has surged over the past year. With deepfakes alone attracting an average of 178,000 searches per month, the term has become shorthand for emerging technologies that pose risks to individuals and businesses alike. This spike in curiosity coincides with a staggering 2,137% increase in deepfake scam attempts over the last three years, indicating the technology's growing role in nefarious activities.

In parallel, AI voice cloning has also garnered attention, attracting about 23,000 searches per month. The technology has been identified as one of the fastest-growing scam vectors anticipated for 2024, raising concern amongst experts and the general public. Notably, 70% of adults express uncertainty about their ability to distinguish a cloned voice from the real thing, highlighting the potential for widespread exploitation.

Experts, including Christoph C. Cemper, founder of AIPRM, have emphasised the importance of recognising AI impersonation scams and offered guidance on protecting oneself against these increasingly sophisticated tactics. Speaking to "Start Your Business," Cemper noted, “AI scams have seen a huge rise in recent years, but 2025 may prove to be the most dangerous year yet, with developments in AI and scammers’ tactics growing more sophisticated.”

The tactics employed in AI voice scams are alarming in their simplicity. Scammers need as little as three seconds of audio to replicate a person's voice, which they then deploy in scam calls, commonly impersonating friends, family members, or colleagues. To avoid falling victim to such scams, individuals are advised to ask questions only the real person could answer, or to agree a secret phrase in advance.

In addition to voice scams, AI phishing and text scams have also proliferated. Many individuals receive suspicious texts or emails that appear to come from familiar contacts. Experts suggest verifying the sender’s email address or phone number and staying alert for the poor grammar and spelling that often characterise AI-generated communications. Urgency is another red flag: scammers frequently press for immediate responses or sensitive information, pushing recipients to act before they can exercise due diligence.

The threat is further complicated by the rise of AI-generated listings on social media and online marketplaces. Experts predict that these fraudulent listings may proliferate in 2025, especially following Meta’s decision to scale back its fact-checking efforts. Such scams typically involve listings that demand upfront payment or direct users to unfamiliar sites, opening the door to financial fraud.

Cemper recommends taking swift action if someone suspects they have fallen victim to an AI scam: reporting the incident to the appropriate government agencies, freezing bank cards to prevent unauthorised access, and changing all passwords, ideally with two-factor authentication enabled for additional security.

He underscored the importance of reporting, stating that “No matter how big or small the scam, reporting it helps not only you, but also contributes to building data on scams, which allows authorities to take action against fraudsters.”

As AI technology continues to evolve and transform business practices, the accompanying rise in AI impersonation scams presents an ongoing challenge for consumers and businesses alike. Awareness and preparedness remain essential in navigating this complex and potentially hazardous digital frontier.

Source: Noah Wire Services