Generative AI (GenAI) is penetrating the operations of many, if not all, business domains. It facilitates, simplifies, and expands the use of sophisticated processes and applications. One natural candidate for such applications is, of course, Identity and Access Management (IAM).

“IAM, please explain to me why user X can’t access the same business applications as user Y, even though they hold the same job position in the same business unit. IAM, please assign the newly hired user Y the same permissions as user Z”: these are typical day-to-day questions an IAM practitioner would ask to be more productive in administering identity and access governance, by simplifying, shortening, and de-risking interactions with IAM tools.

To make this innovative approach a reality, Evidian (an Eviden business) has incorporated GenAI in its short-term product roadmap and is evaluating prototypes that are getting close to a minimum viable product (MVP). 

How can Generative AI help IAM? 

There are many scenarios where Generative AI can benefit IAM. These include:

  • Getting the best from the product documentation and customer care expertise, especially when the information sought is spread across several rich sources, supplementing existing support channels such as knowledge bases and consulting experts;
  • Helping extract advanced analytical insights and interact with the Identity Fabric APIs to act on the security policy lifecycle, with the objective of simplifying IAM administration. This is probably the use case where we expect the biggest breakthrough, benefiting IAM practitioners in the short term;
  • Combining several hybrid AI technologies, such as built-in AI capabilities, not only to co-pilot IAM governance but also to auto-pilot IAM processes;
  • Auto-configuring an IAM deployment. This includes automatically generating approval workflows that are often tailored to specific customers, setting up the initial instantiation of the policy model, managing the role lifecycle and entitlement assignments to end users, co-programming rules for decision management systems, and more.

How do we build a Generative AI-based IAM? 

In essence, GenAI generates new data from a large corpus learned beforehand during a training phase. In response to a question, called a prompt, new data is generated, i.e., predicted rather than merely copied. This relies on a pretrained Large Language Model (LLM) with billions of parameters.

A GenAI implementation serving IAM would typically be made up of the following elements: 

  • A conversational app, as a simplified user interface to interact with the LLM using natural language; 
  • An orchestration tool, such as LangChain, to chain inputs to and outputs from the LLM together with API requests to, and responses from, the IAM Identity Fabric;
  • Prompt engineering, to enrich the question asked with additional contextual information, to focus the question on the appropriate scope, and more; 
  • A vector store database to make the frequent requests the LLM issues against the IAM data sources efficient;
  • An LLM to execute the prompt requests, restricted to serving demands related to IAM only, with safety checks on inputs and outputs;
  • IAM agents to securely retrieve data from the IAM policy database and the data lake, as well as to act on IAM policy data, using the Identity Fabric APIs. 
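The elements above can be sketched as a minimal retrieval-augmented pipeline. This is an illustrative sketch only: the embedding, vector store, and LLM are toy stand-ins, and every class and function name here is a hypothetical assumption, not an Evidian or LangChain API.

```python
def embed(text: str) -> list:
    """Toy letter-frequency embedding standing in for a real embedding model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """In-memory store holding embedded IAM documents for retrieval."""
    def __init__(self):
        self.docs = []

    def add(self, text: str) -> None:
        self.docs.append((embed(text), text))

    def search(self, query: str, k: int = 1) -> list:
        """Return the k documents most similar to the query."""
        q = embed(query)
        scored = sorted(self.docs, key=lambda d: cosine(d[0], q), reverse=True)
        return [text for _, text in scored[:k]]

def enrich_prompt(question: str, context_docs: list) -> str:
    """Prompt engineering: scope the question to IAM and attach retrieved context."""
    context = "\n".join(context_docs)
    return (
        "You are an IAM assistant. Answer only IAM questions.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

def stub_llm(prompt: str) -> str:
    """Placeholder for the pretrained LLM call; echoes the final prompt line."""
    return "ANSWER based on: " + prompt.splitlines()[-1]

def ask_iam(question: str, store: VectorStore) -> str:
    """Orchestration: retrieve -> enrich -> generate, as a chaining tool would do."""
    docs = store.search(question)
    prompt = enrich_prompt(question, docs)
    return stub_llm(prompt)
```

In a real deployment, `stub_llm` would be replaced by a call to the chosen LLM, `VectorStore` by a production vector database, and the retrieved documents would come from the IAM policy database via the Identity Fabric APIs.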

Such a technological approach must then be evaluated, component by component, and fine-tuned to deliver efficient IAM governance and a better user experience, taking constraints and risks into consideration.

How can we mitigate specific risks? 

Let’s take a step back and look at the risks related to LLMs, chatbots, and AI technologies in general. These risks are associated with classical criteria such as security, privacy, performance, and scalability, as well as AI-specific ones such as explainability, bias, information leakage, and, more recently, hallucinations, where the LLM predicts an output recommendation that is pure invention.

Besides the known countermeasures intrinsic to GenAI technology, we can implement traditional Zero Trust security delivered by IAM processes. We must apply to GenAI the same key principles we apply to protected business applications: strong multi-factor authentication to access GenAI, fine-grained dynamic authorization enforcing the least-privilege principle for all accesses to the underlying Identity Fabric API, exhaustive auditing of inputs and outputs, and active monitoring of the audit trail.
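Two of these principles, least-privilege authorization on Identity Fabric API calls and an exhaustive audit trail, can be illustrated with a small wrapper. This is a minimal sketch under assumed names (`least_privilege`, `AUDIT_TRAIL`, the scope strings); it is not the Identity Fabric API.

```python
import functools
import time

AUDIT_TRAIL = []  # exhaustive audit of inputs, outputs, and decisions

def least_privilege(required_scope: str):
    """Decorator enforcing fine-grained authorization on a hypothetical
    Identity Fabric call and recording every attempt in the audit trail."""
    def wrap(fn):
        @functools.wraps(fn)
        def guarded(caller_scopes, *args, **kwargs):
            allowed = required_scope in caller_scopes
            AUDIT_TRAIL.append({
                "ts": time.time(),
                "call": fn.__name__,
                "args": args,
                "allowed": allowed,
            })
            if not allowed:
                raise PermissionError(f"scope '{required_scope}' required")
            return fn(*args, **kwargs)
        return guarded
    return wrap

@least_privilege("iam.read")
def list_entitlements(user: str) -> list:
    """Hypothetical read-only Identity Fabric operation."""
    return [f"{user}:app-portal"]
```

Denied calls are recorded too, so active monitoring of `AUDIT_TRAIL` can flag a GenAI component that repeatedly attempts operations outside its granted scopes.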

When GenAI interoperates with the IAM Identity Fabric, we must constrain it to process questions and deliver answers only within the strict scope of IAM. In addition, other AI techniques can help challenge the outputs delivered by the LLM, using dedicated consistency-checking rules to avoid hallucinations.
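A pair of guardrails of this kind can be sketched as follows: an input check that rejects out-of-scope questions, and an output consistency check that rejects answers naming objects absent from the policy database. The keyword filter and the `role:` naming convention are simplifying assumptions for illustration.

```python
# Illustrative IAM vocabulary for the input-scope guardrail.
IAM_KEYWORDS = {"user", "role", "entitlement", "access", "permission", "workflow"}

def in_iam_scope(question: str) -> bool:
    """Input guardrail: accept only questions that mention IAM concepts."""
    words = set(question.lower().replace("?", " ").split())
    return bool(words & IAM_KEYWORDS)

def consistent_with_policy(answer: str, known_objects: set) -> bool:
    """Output guardrail: every policy object the LLM names (here, tokens
    using an assumed 'role:' prefix) must exist in the policy database,
    rejecting hallucinated roles."""
    referenced = {tok for tok in answer.split() if tok.startswith("role:")}
    return referenced <= known_objects

def guarded_answer(question: str, raw_llm_answer: str, known_objects: set) -> str:
    """Apply both guardrails around a raw LLM answer."""
    if not in_iam_scope(question):
        return "Out of scope: I only answer IAM questions."
    if not consistent_with_policy(raw_llm_answer, known_objects):
        return "Answer withheld: it references objects unknown to the IAM policy."
    return raw_llm_answer
```

A production system would replace the keyword filter with a classifier and check referenced objects directly against the policy database through the Identity Fabric APIs, but the shape of the control is the same.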

Such an open approach is future-proof and will help minimize the uncertainty of the GenAI tool by allowing the list of safety countermeasures to be extended. A rule-based decision-management safety tool can advantageously be linked to the approval workflow in place, adding a preliminary human approval by the security officer before any changes suggested by the GenAI tool are applied.
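The human-approval step described above can be sketched as a queue that holds GenAI-suggested changes until a security officer reviews them. All names here (`ApprovalGate`, `Decision`, the method names) are hypothetical, intended only to show the control flow.

```python
from enum import Enum

class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

class ApprovalGate:
    """Holds GenAI-suggested policy changes until a security officer
    approves them; only approved changes are ever applied."""
    def __init__(self):
        self.queue = []

    def suggest(self, change: str) -> int:
        """Record a GenAI-suggested change; returns a ticket number."""
        self.queue.append({"change": change, "status": Decision.PENDING})
        return len(self.queue) - 1

    def review(self, ticket: int, approve: bool) -> None:
        """Security officer decision on a pending ticket."""
        self.queue[ticket]["status"] = Decision.APPROVED if approve else Decision.REJECTED

    def apply_approved(self) -> list:
        """Return only the approved changes for application to the IAM policy."""
        return [t["change"] for t in self.queue if t["status"] is Decision.APPROVED]
```

In practice this gate would be wired into the existing approval workflow engine rather than an in-memory queue, but the invariant is the same: no GenAI-suggested change reaches the policy database without a recorded human decision.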

A new era for the ultimate IAM user experience 

GenAI for IAM is just a step away from becoming a reality in production. Even if this technology won’t disrupt the way many IAM processes work, it is here to stay.

We can be confident that well-known security approaches will eventually minimize risks frequently associated with GenAI technology. 

With that in mind, Generative AI will definitely and deeply improve the user experience of interacting with IAM tools, making it natural, effortless, and very powerful.