The integration of artificial intelligence (AI) into business and workplace environments has sparked significant debate over its benefits and drawbacks. One of the most contentious developments is AI-enabled emotion recognition technology, which claims to interpret human emotions from physiological and behavioural data. This burgeoning market, valued at approximately US$34 billion in 2022, is projected to reach US$62 billion by 2027. The rise of such technologies has elicited both interest and apprehension among experts and legislators.
The European Union's AI Act, which entered into force in August 2024, reflects increasing caution towards these technologies: it prohibits their use for inferring emotions in workplace contexts, except for medical or safety purposes. Australia, by contrast, currently has no specific regulations governing the deployment of emotion recognition systems, underscoring a notable disparity in global approaches to AI oversight. Some experts view this regulatory gap as a pressing concern, particularly given the potential for misuse in work settings.
Australian tech company inTruth Technologies is at the forefront of this field, planning to launch a wrist-worn device that tracks emotions through physiological metrics such as heart rate and skin moisture. Nicole Gibson, inTruth's founder, says the technology can serve as a valuable tool for employers seeking to monitor employee performance and mental health. She characterises it as "an AI emotion coach that knows everything about you, including what you’re feeling and why you’re feeling it". The ethical implications of such monitoring, however, remain contentious.
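To give a sense of what such devices actually measure, the sketch below shows how a wearable might combine heart rate and skin conductance into a crude "arousal" score. This is purely illustrative: inTruth has not published its algorithm, and every signal name, baseline, weight, and threshold here is a hypothetical assumption, not a description of any real product.

```python
# Illustrative sketch only: a crude arousal score from two physiological
# signals. This is NOT inTruth's method; the baselines, weights, and
# scaling factors are hypothetical, chosen to show the general shape of
# such a pipeline.

from dataclasses import dataclass
from statistics import mean


@dataclass
class Sample:
    heart_rate_bpm: float        # beats per minute from an optical sensor
    skin_conductance_us: float   # electrodermal activity, microsiemens


def arousal_score(samples: list[Sample],
                  resting_hr: float = 65.0,
                  resting_sc: float = 2.0) -> float:
    """Return a 0..1 'arousal' proxy: how far the averaged signals sit
    above an assumed resting baseline. Note that arousal is not emotion;
    the same score could reflect stress, excitement, exercise, or caffeine.
    """
    hr = mean(s.heart_rate_bpm for s in samples)
    sc = mean(s.skin_conductance_us for s in samples)
    # Normalise each signal against its baseline and clamp to [0, 1].
    hr_component = max(0.0, min(1.0, (hr - resting_hr) / 60.0))
    sc_component = max(0.0, min(1.0, (sc - resting_sc) / 10.0))
    # Equal weighting is an arbitrary modelling choice.
    return 0.5 * hr_component + 0.5 * sc_component


readings = [Sample(88, 6.5), Sample(92, 7.1), Sample(90, 6.8)]
print(f"arousal ~ {arousal_score(readings):.2f}")  # prints: arousal ~ 0.45
```

Even in this toy form, the design issue is visible: the score captures physiological arousal, not any specific emotion, and it is precisely that inferential leap from signal to feeling that critics question.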
Historically, the use of emotion recognition technology in employment settings has faced scrutiny. For instance, a video interviewing system provided by US-based company HireVue was noted for using facial analysis to evaluate job applicants. Following complaints about its methodology, HireVue discontinued its emotion analysis in 2021. Despite this setback, the growing trend towards AI-driven surveillance in workplaces suggests that interest in such technologies is resurfacing.
Critics assert that the scientific basis for emotion recognition systems is tenuous at best. Some scholars argue that these technologies echo obsolete fields such as phrenology, which sought to correlate physical traits with personal characteristics. A 2019 review by a group of experts concluded that there are no objective measures for identifying emotional states, and that indicators like skin moisture can be interpreted inconsistently across different individuals and contexts.
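The measurement problem those experts describe can be made concrete with a toy example. In the hypothetical sketch below, the same raw skin-conductance reading looks dramatically elevated for one person and barely above resting for another, simply because their baselines differ; the names and values are invented for demonstration only.

```python
# Toy illustration of the critics' point: without a per-person baseline,
# the same raw skin-conductance reading means different things for
# different people. All names and values are invented.

baselines_us = {"worker_a": 1.5, "worker_b": 6.0}  # resting EDA, microsiemens


def relative_elevation(raw_us: float, baseline_us: float) -> float:
    """How far a reading sits above this person's own resting level."""
    return (raw_us - baseline_us) / baseline_us


reading = 6.5  # the identical raw measurement for both workers
for worker, baseline in baselines_us.items():
    print(f"{worker}: {relative_elevation(reading, baseline):+.0%} above baseline")

# worker_a: +333% above baseline  -> would look like extreme arousal
# worker_b: +8% above baseline    -> barely distinguishable from resting
```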
Responding to these criticisms, Gibson acknowledged the historical challenges faced by emotion recognition technologies but argued that advances in the field have brought significant improvements. Beyond accuracy, deploying such systems raises questions about fundamental rights, including privacy and protection against discrimination. Reports have documented instances of emotion recognition technologies exhibiting bias based on race, gender, and disability, casting doubt on the fairness of these applications.
A further concern is the use of emotion recognition as a workplace surveillance mechanism, which threatens personal privacy. Critics warn that employees' sensitive information could be collected without their knowledge and without any reasonable justification for the monitoring.
Public sentiment towards emotion recognition technologies in Australia reflects widespread unease. A recent survey found that only 12.9% of Australian adults favour their use in workplace settings; participants often described the technologies as invasive, unethical, and error-prone. Similar concerns have emerged from US studies, where respondents feared that inaccurate emotion readings could harm their job security and career progression. One participant said such systems could be particularly damaging to minority groups in the workplace.
As the AI landscape evolves, developments in emotion recognition technology and their implications for business practices and employee welfare will continue to attract discussion and scrutiny across various forums.
Source: Noah Wire Services