How does AI hallucinate? Can a virtual assistant act unethically? What’s the best way to prevent a chatbot from being offensive? Welcome to the world of responsible AI.
The age of AI has dawned. With generative AI (GenAI) now unleashed, this has further underlined the risks, as well as the possibilities, of this amazing technology.
The field of ‘responsible AI’ is a multi-disciplinary domain for ensuring that AI applications are designed, developed and deployed in a way that is ethical and accountable. Its purpose is to maximize AI’s benefits, while minimizing any potential harm to individuals, communities and society.
So, where are the risks?
Firstly, AI is only ever as good as the huge volumes of data that it’s ‘trained’ on and the learning models it creates from them. For example, it will perpetuate hidden or unconscious human biases. And because GenAI tools like ChatGPT and Gemini have been trained only on publicly available information, there are some inherent limitations and risks. There’s the risk of copyright infringement, for example; and there may be issues with ‘inexplicability’, where difficulties understanding how outputs are generated can lead to decreased user trust.. AI ‘hallucinations’ occur when AI produces outputs that make no sense or are completely inaccurate, caused by problems with training data or with how the learning model works. Microsoft’s chat AI, Sydney, admitted to falling in love with users and spying on Bing employees.[1]
Secondly, AI is convincing and eager to please, which means that it sounds like it has all the right answers– even when it doesn’t. The Galactica assistant, developed by Meta, created fake news and misinformation; for instance, it fabricated papers (sometimes attributing them to real authors) and generated wiki articles about the history of bears in space.[2]
Dimensions of responsible AI
Thirdly, if prompted in a specific way, AI can produce misleading, incorrect, unethical, or dangerous outputs. When this isn’t considered, then AI is open to undetected malicious attack. For example, Microsoft had to halt activity by its Tay chatbot within 24 hours of introducing it on Twitter after trolls taught it to spew racist and xenophobic language.[3]
Any business using AI must therefore understand and mitigate the associated risks by devising and implementing a responsible AI strategy. This encompasses the transparency and explainability of how the AI works and what its outputs are based on; fairness and bias mitigation to recognize and minimize any predispositions or partialities that arise; accountability for the effects that AI may have on people and society; and privacy and security to prevent leakage of confidential or sensitive information.
Culture of responsibility
While the technologies may be highly advanced, AI should not be seen as a ‘blackbox’. It’s important to develop a shared vision of how AI will be used. What kind of back-office and customer-facing solutions does your enterprise want to develop? How is risk management applied to ensure effective governance and oversight?
What’s more, GenAI changes how people work, so should be underpinned by change management and adoption strategies. This is about fostering a culture of responsible AI development; and it means engaging and educating stakeholders, including end users, about responsible AI principles and applications.
Legality and compliance
Externally, the governance landscape is evolving. A swathe of policies and laws are in development, with – among others – the launch of the UN AI Advisory Body, the signing of the AI Executive Order in the US, and agreement of the EU AI Act, all in late 2023.
Expert legal perspectives are needed to pinpoint what solutions your enterprise is allowed to develop, and implement what’s required for compliance of processes, systems and data. AI has impact on intellectual property and information law; and AI ethics and policy development are key, including steps to combat disinformation.
Data science and technology
From the technological and development perspective, responsible AI is about how data is selected, governed and used, which techs are deployed, and how the lifecycle of learning models is managed. Methods are needed to recognize and mitigate bias in datasets, and to address how AI might be prompted. Teams should understand how models are trained, and why they generate certain outputs or decisions.
Strong security and privacy settings are essential to respect and protect privacy rights. This includes, but is not limited to, the safeguarding of personal data, maintaining secure infrastructures, and conducting regular privacy and security audits.
Responsible enough: from principle to practice
Today enterprises face the challenge of how to translate responsible AI frameworks and policies into actionable methodologies across their business, legal and technology domains. That’s why – as part of our AI capability – Eviden has developed five categories of Responsible AI Accelerators: ethical AI; AI governance; AI data; transparency and fairness; and AI security. We also focus on sustainability, otherwise known as Green AI, to minimize the negative environmental impacts of AI development and deployment.
It’s important to note, however, that no model or application can ever be inherently 100% transparent, secure and bias-free. Human beings can aim only to minimize the risks. This is a complex and dynamic capability that requires education and discussion. A holistic responsible AI strategy will help to safeguard your business, reputation and the confidence of stakeholders as you continue to innovate with AI’s boundless possibility.