A recent study by Deloitte Consulting has raised significant questions about the ethics of, and trust in, artificial intelligence (AI) technologies, particularly generative AI (GenAI). The report, titled the State of Ethics and Trust in Technology, found that cognitive technologies, including AI, were deemed to pose the greatest potential for serious ethical risk, a view held by a striking 54 percent of respondents. That concern far outstripped the second-ranked risk, digital reality, cited by just 16 percent.

On data privacy, a substantial 40 percent of respondents expressed apprehension specifically about generative AI. Alongside these concerns, however, the survey highlighted a more optimistic view: 46 percent of respondents believe cognitive technologies hold the greatest potential for social good. The statistic underscores how polarising AI has been since its emergence.

The study noted a slight shift in sentiment over the past year: distrust and ethical concern about AI fell by 3 percent, while hope that AI will ultimately serve as a force for good rose by 7 percent. The shift suggests that as businesses and IT professionals grow more familiar with AI and GenAI, their comfort levels rise, although substantial doubts remain.

The discourse around trust in AI is complicated by popular culture, which has long traded in narratives of AI malfunctioning or turning rogue. Despite ongoing ethical concerns, the Deloitte study indicates that public perception may begin to shift positively if steps are taken to build trust. The report specifically warned that organisations that misuse AI technologies or fail to adhere to ethical standards risk reputational damage, suggesting the need for protective measures.

“AI is a powerful tool, but it requires guardrails,” the report stated, underscoring the importance of governance and compliance frameworks in gaining the support of employees and customers alike.

Amid these discussions, questions about the reliability of GenAI outputs persist. A separate study found that the fidelity of GenAI outputs can be alarmingly low, even on simple tasks. Lexin Zhou, a researcher at Spain's Polytechnic University of Valencia and co-author of the study, remarked, “Scaled-up models tend to give an apparently sensible yet wrong answer much more often,” highlighting the errors that can occur even when questions are straightforward.

To address the ethical and reliability issues surrounding AI, Deloitte recommends the appointment of Chief Ethics Officers. These individuals would oversee AI operations, ensuring compliance with ethical best practices and developing processes for safe and accurate AI usage. Bill Briggs, the Chief Technology Officer at Deloitte Consulting, noted, “Embedding ethical principles early and repeatedly in the technology development lifecycle can help demonstrate a fuller commitment to trust in organizations and keep ethics at the front of your workforce’s priorities and processes.”

The necessity for organisations to implement processes and guardrails is further emphasised, so that GenAI users can trust the reliability of outputs and avoid issues such as theft or plagiarism of intellectual property. Such protocols would also guard against the uncritical acceptance of AI conclusions, which are not infallible, thereby mitigating the rising ethical risks that accompany the accelerating adoption of AI technologies.

As the AI landscape continues to evolve, the implications for business practice grow increasingly complex, offering significant advances but also significant risks if the technology is not managed appropriately.

Source: Noah Wire Services