At a Glance

An energy company struggled to retrieve relevant information efficiently from its Confluence documentation and GitLab code repositories. AWS and Eviden helped it fix the problem with a GenAI solution.

Outcomes

  • Improved efficiency and accuracy of information retrieval from Confluence and GitLab repositories  
  • Accelerated search and increased productivity using advanced AI techniques to enhance internal knowledge management 

The Client

A leading European energy company 

An energy company was having trouble getting relevant information out of its Confluence documentation and GitLab code repositories. Searching was time-consuming, and results were often inaccurate. The situation was draining productivity and undermining internal knowledge management.

The company worked with Eviden and AWS to improve the efficiency and accuracy of information retrieval. 

Together, they created an advanced Generative AI solution using retrieval-augmented generation (RAG).

Why Eviden

Eviden is an AWS Premier Tier Services Partner and AWS Marketplace Seller supporting a global client base by bringing together people, business and technology. An 11-year AWS partner, Eviden has 14 AWS Competencies, including Migration Consulting, and is also a member of the AWS Managed Service Provider (MSP) and AWS Well-Architected Partner Programs.

RAG enhances the capabilities of large language models by incorporating external knowledge from an organization’s internal knowledge bases, without retraining the models. The approach is cost-effective and keeps the output relevant and precise.
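In practice, the RAG flow is simple: retrieve the most relevant passages from the internal knowledge base, then pass them to the model as context for the answer. The Python sketch below illustrates the idea against Amazon Bedrock; the retrieve helper, prompt wording and token limit are illustrative assumptions, not the project's actual code.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")  # assumes AWS credentials and region are configured


def answer_with_rag(query: str, retrieve) -> str:
    """Illustrative RAG flow: fetch relevant passages, then answer from them."""
    passages = retrieve(query)  # hypothetical retriever over the OpenSearch indexes
    context = "\n\n".join(passages)
    prompt = (
        "\n\nHuman: Use only the context below to answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\n\nAssistant:"
    )
    body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 500})
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2:1",
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())["completion"]
```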

How to transform documentation search  

The solution is built on several key components, each playing a vital role in transforming the energy company’s documentation search process. 

CI/CD pipeline 

The project teams set up a robust continuous integration/continuous delivery pipeline using the AWS Cloud Development Kit (CDK), GitLab, AWS CodeCommit, AWS CodeBuild and AWS CodePipeline. The process begins with pushing CDK code to GitLab, which mirrors it to AWS CodeCommit. A push to CodeCommit triggers the deployment pipeline, where CodeBuild compiles the code and AWS CodePipeline handles the deployment. This ensures seamless integration and deployment.
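As an illustration only, a CDK stack for such a pipeline might look like the following Python sketch (assuming CDK v2; the repository name, branch and build commands are placeholders rather than the project's actual values).

```python
from aws_cdk import Stack, aws_codecommit as codecommit, pipelines
from constructs import Construct


class DeploymentPipelineStack(Stack):
    """Illustrative pipeline triggered by pushes to the CodeCommit mirror of the GitLab repo."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Hypothetical name for the CodeCommit repository mirrored from GitLab.
        repo = codecommit.Repository.from_repository_name(self, "Mirror", "docs-search-mirror")

        pipelines.CodePipeline(
            self,
            "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.code_commit(repo, "main"),
                commands=[
                    "npm install -g aws-cdk",
                    "pip install -r requirements.txt",
                    "cdk synth",
                ],
            ),
        )
```

Application stages (the ingestion, search and web-interface stacks) would then be added to this pipeline with add_stage.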

Data ingestion 

Data is ingested from Confluence and GitLab using their respective REST APIs. AWS Glue jobs process and index the data. The processed data is then stored in Amazon S3, and messages are sent to Amazon SQS for further processing, creating a structured and accessible data repository.
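A simplified Python sketch of the Confluence side of this ingestion is shown below; the bucket name, queue URL, Confluence endpoint and pagination handling are illustrative assumptions.

```python
import json

import boto3
import requests

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Hypothetical names; the real bucket, queue URL and Confluence site are project-specific.
BUCKET = "docs-search-raw"
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/confluence-ingest"
CONFLUENCE_URL = "https://example.atlassian.net/wiki/rest/api/content"


def ingest_confluence_pages(auth, space_key: str) -> None:
    """Pull pages via the Confluence REST API, land them in S3 and notify SQS."""
    params = {"spaceKey": space_key, "expand": "body.storage", "limit": 50}
    pages = requests.get(CONFLUENCE_URL, params=params, auth=auth, timeout=30).json()["results"]
    for page in pages:
        key = f"confluence/{page['id']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(page).encode("utf-8"))
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"bucket": BUCKET, "key": key}))
```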

Data storage 

Amazon OpenSearch Serverless provides the search capabilities. The ingested data is stored in vector indexes, with two separate indexes created for GitLab and Confluence, ensuring efficient and accurate search results.
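The sketch below shows what a k-NN query against one of those indexes could look like with the opensearch-py client; the collection endpoint, region, index names and vector field name are placeholders.

```python
import boto3
from opensearchpy import AWSV4SignerAuth, OpenSearch, RequestsHttpConnection

# Hypothetical OpenSearch Serverless collection endpoint and region.
HOST = "abc123example.eu-central-1.aoss.amazonaws.com"
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "eu-central-1", "aoss")

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)


def semantic_search(index: str, query_vector: list[float], k: int = 5):
    """k-NN search against one of the vector indexes (e.g. 'confluence' or 'gitlab')."""
    body = {"size": k, "query": {"knn": {"embedding": {"vector": query_vector, "k": k}}}}
    return client.search(index=index, body=body)["hits"]["hits"]
```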

Application and web interface 

A Streamlit interface hosted on Amazon ECS with AWS Fargate enhances user interaction. Users interact with a chatbot through this interface, with authentication managed by Amazon Cognito. An Application Load Balancer manages traffic, ensuring a smooth and secure user experience.
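A minimal Streamlit chat loop along these lines is sketched below. It reuses the hypothetical answer_with_rag helper from the RAG sketch above; Cognito authentication and load balancing sit in front of the container and are omitted here.

```python
import streamlit as st

st.title("Documentation assistant")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if question := st.chat_input("Ask about the Confluence or GitLab content"):
    st.session_state.messages.append({"role": "user", "content": question})
    with st.chat_message("user"):
        st.markdown(question)

    # answer_with_rag and retrieve are the hypothetical helpers sketched earlier.
    answer = answer_with_rag(question, retrieve)
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)
```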

How the components and services worked together

Components of the retrieval-augmented generation solution

  • AWS Glue ingests data from Confluence and GitLab. Glue jobs process the data and send it to S3 and SQS. 
  • Amazon S3 stores ingested data and facilitates version control and CRUD operations. 
  • Amazon EventBridge links S3 actions with SQS to decouple ingestion and processing events. 
  • Amazon SQS handles message queuing for data processing. 
  • Amazon DynamoDB manages the indexing status of Confluence pages. 
  • AWS Lambda functions process and index data from the GitLab and Confluence queues. 
  • Amazon OpenSearch Serverless performs vector and semantic searches on indexed data. 
  • Amazon Bedrock provides LLMs for text generation and analysis, including embedding documents and chatbot responses. 
  • AWS Fargate hosts the application code in a serverless environment. 
  • An Application Load Balancer manages traffic load and integrates with AWS WAF for security. 
  • Amazon Cognito manages user authentication and authorization. 
  • Streamlit provides the web interface for user interaction. 

The project team also relied on advanced techniques such as reranking and summarization.  

Reranking improves the relevance of search results using the FlashrankRerank method and the ms-marco-MiniLM-L-12-v2 cross-encoder model, integrated through LangChain. This ensures that users receive the most pertinent information in response to their queries. 
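Wired up through LangChain, that reranking step might look roughly like the sketch below; base_retriever stands in for the project's OpenSearch retriever, and the query and top_n value are illustrative.

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank

# base_retriever is assumed to be an existing LangChain retriever over the OpenSearch indexes.
reranker = FlashrankRerank(model="ms-marco-MiniLM-L-12-v2", top_n=4)
retriever = ContextualCompressionRetriever(
    base_compressor=reranker,
    base_retriever=base_retriever,
)

docs = retriever.invoke("How do we rotate the GitLab runner credentials?")  # illustrative query
```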

The summarization technique uses the anthropic.claude-v2:1 model to generate document summaries that maintain context. This technique ensures efficient and accurate information retrieval, making it easier for users to grasp essential details quickly.  
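A sketch of such a summarization call through Amazon Bedrock is shown below; the prompt wording and generation parameters are assumptions, not the project's actual settings.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")


def summarize(document_text: str) -> str:
    """Ask Claude 2.1 on Amazon Bedrock for a context-preserving summary of one document."""
    prompt = (
        "\n\nHuman: Summarize the following document, preserving the key technical context:\n\n"
        f"{document_text}\n\nAssistant:"
    )
    body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 300, "temperature": 0.2})
    response = bedrock.invoke_model(modelId="anthropic.claude-v2:1", body=body)
    return json.loads(response["body"].read())["completion"]
```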

The implementation of this GenAI solution significantly improved the efficiency and accuracy of information retrieval within the energy company’s Confluence and GitLab repositories. The project demonstrated the potential of using advanced AI to enhance internal knowledge management, accelerate search and increase overall productivity. 

Related resources

  • Client Story: E-b-trans improves fleet management. How to prove a full, serverless cloud concept in one week.
  • Client Story: Safran Group’s massive cloud transformation. Video: See how AWS improves industrial performance.
  • Client Story: RheinEnergie migrates key SAP workloads to AWS. Audio: How the cloud changed the way RheinEnergie works.