The landscape of artificial intelligence (AI) in public safety is currently under scrutiny following a proposed settlement by the Federal Trade Commission (FTC) with Evolv Technologies, a company that has marketed AI-powered security scanners designed for environments such as schools. The FTC’s investigation has revealed significant discrepancies between Evolv's promises and the actual performance of its technology.

Evolv Technologies, based in Massachusetts, has claimed that its advanced scanners enhance public safety by efficiently detecting weapons while ignoring non-threatening items like water bottles and bags. These systems have been deployed in various high-profile venues, including schools, airports, sporting events, and subway stations since the company went public in 2021. However, the FTC characterised Evolv's assertions as exaggerated, highlighting a pattern of misleading marketing that overhyped the capabilities of its scanners.

FTC Chair Lina Khan stated in a post on X that Evolv has "falsely hyped" its systems, particularly to school districts that invested heavily in the technology, reportedly paying millions. There have been multiple instances where the scanners failed to detect real threats while mistakenly flagging harmless personal items as potential weapons. These failures raise serious concerns about the reliability of using AI for critical public safety measures. For instance, a report by The Intercept highlighted comments made by former CEO Peter George, who assured investors at a 2022 conference that the systems could effectively detect concealed weapons.

Despite these assurances, the efficacy of Evolv's scanners has been challenged. Five law firms have launched investigations into potential violations of securities law, based on claims that the company misled investors about the performance of its technology. Additionally, Evolv's shareholders have initiated a class-action lawsuit contending that the company's marketing overstated the effectiveness of its weapons detection systems.

In a notable development, New York City Mayor Eric Adams announced a three-month pilot program in which these scanners will be deployed in subway systems. This decision came despite reports that the technology had triggered a high number of false alarms at Jacobi Medical Center. According to further investigations, the false positive rate for Evolv's systems there reached an alarming 95%, with only a fraction of alerts corresponding to genuine threats.

In response to regulatory pressures, Evolv Technologies has reached an agreement with the FTC regarding its past marketing practices. The proposed settlement will require the company to cease making unsupported claims about its AI systems' capabilities and to inform certain K-12 school clients that they have the option to cancel contracts signed between April 2022 and June 2023. Samuel Levine, Director of the Bureau of Consumer Protection, emphasised the necessity for technology claims—especially those affecting child safety—to be substantiated.

Mike Ellenbogen, interim president and CEO of Evolv, asserted that the inquiry focused on marketing language and did not challenge the fundamental effectiveness of the technology. Nevertheless, the proposed provisions would legally restrict Evolv from claiming that its systems can effectively detect weapons, or eliminate erroneous alerts, without visitors first removing personal belongings.

As the FTC continues to examine the implications of AI technologies for public safety, the narrative surrounding Evolv Technologies serves as a cautionary tale for businesses involved in the rapidly evolving sector of AI automation. The broader conversation highlights the potential risks associated with promoting unverified AI solutions, particularly as cities and institutions seek reliable methods to enhance security in increasingly vulnerable environments.

Source: Noah Wire Services