Eviden’s AI offer is a unique blend of best-of-breed solutions for AI acceleration and high-density computing. It helps enterprises and organizations accelerate their AI while improving energy efficiency and reducing carbon emissions.

Eviden’s offer has been purpose-built for AI. It combines years of consulting expertise with best-of-breed infrastructure to help customers improve and accelerate their entire portfolio of AI models.

What we offer

We specialize in helping organizations significantly accelerate AI-enabled applications. Our expertise enables flexible, scalable, composable AI architectures that deliver high performance and ongoing extensibility. We also take pride in providing some of the most energy-efficient systems on the market, ideal for environmentally conscious organizations.

Our Products and Solutions

BullSequana AI 1200H

The BullSequana AI 1200H is a powerful, enterprise-ready supercomputer engineered to power the most demanding Large Language Models (LLMs) and complex system modeling workloads.

Designed for seamless integration into enterprise environments, the BullSequana AI 1200H delivers exceptional performance while ensuring optimal energy efficiency with Eviden’s patented Direct Liquid Cooling (DLC) technology.

AI Consulting Services

Eviden empowers businesses to streamline their entire AI workflow.

From initial data exploration and understanding model outputs, to rapid deployment and performance evaluation, Eviden’s high-performance computing accelerates every stage.

This translates to faster time-to-insight and a significant boost in overall AI development efficiency.

Related resources

ECMWF · Client story · April 29, 2024

Winds of change with ECMWF’s digital transformation

GENCI · Press release · March 28, 2024

GENCI and CNRS choose Eviden to make the Jean Zay supercomputer one of the most powerful in France

Novo Nordisk Foundation · Press release · March 19, 2024

The Novo Nordisk Foundation chooses Eviden to build “Gefion” in Denmark, one of the world’s most powerful AI supercomputers

FAQ

Why do LLMs (Large Language Models) and complex models require high compute power?

Several factors drive their compute demands:

  • Massive Parameter Counts:
    These models have billions or even trillions of parameters, which are like the “brain cells” of the model. Processing and storing this many parameters demands significant computational resources.
  • Huge Datasets:
    Training these models requires massive amounts of data. This data needs to be loaded, processed, and analyzed, putting a strain on memory and processing units.
  • Complex Calculations:
    The algorithms used to train and run LLMs are incredibly complex. These calculations, like matrix multiplications and attention mechanisms, are computationally intensive, especially as the model size grows.
  • Iterative Refinement:
    The development process isn’t a one-shot deal. It involves repeated training and experimentation, each cycle adding to the computational load.

In short, the sheer size, data requirements, and intricate calculations in LLM and complex models necessitate the use of high-performance computing systems.
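To make the scale concrete, here is a back-of-envelope sketch of the memory needed just to hold a model’s weights. The parameter count, byte sizes, and 4x training multiplier below are illustrative assumptions for the example, not specifications of any particular model or system.

```python
def weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Memory (in GB) required to store the raw model weights."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 70-billion-parameter model stored in 16-bit precision:
params = 70e9
inference_gb = weight_memory_gb(params, 2)  # 2 bytes/param -> 140 GB

# Training typically also keeps gradients and optimizer state; a common
# rough rule of thumb (assumption here) is several extra copies of the
# weights, taken as 4x the inference footprint:
training_gb = 4 * inference_gb

print(f"Inference weights alone:  {inference_gb:.0f} GB")
print(f"Rough training footprint: {training_gb:.0f} GB")
```

Even this simplified estimate lands far beyond a single commodity server, which is why LLM work gravitates toward high-performance, multi-node systems.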

Why does Direct Liquid Cooling (DLC) matter?

Direct Liquid Cooling (DLC) is becoming essential for modern computing infrastructure, especially as data centers and high-performance systems grapple with increasing power demands. Here’s why DLC matters:

  • Superior Energy Efficiency:
    DLC systems cool IT components directly with liquid, resulting in massive reductions in energy needed for cooling compared to traditional air-based systems. This translates to lower operating costs and reduced carbon footprint.
  • High-Density Computing:
    By removing the limitations of air cooling, DLC enables much denser packaging of components within servers. This unlocks higher performance per square foot within data centers.
  • Reduced Noise Pollution:
    DLC systems are significantly quieter than those relying on air cooling, improving the work environment for technicians and engineers.
  • Reliability:
    The controlled environment within DLC systems can improve component longevity and reduce the risk of hardware failures due to overheating.
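The energy-efficiency point can be illustrated with a simple Power Usage Effectiveness (PUE) comparison. The PUE values and IT load below are assumptions chosen for the sake of the example, not measured figures for any specific Eviden product.

```python
def annual_energy_mwh(it_load_kw: float, pue: float) -> float:
    """Total facility energy per year (MWh) for a given IT load and PUE."""
    hours_per_year = 24 * 365
    return it_load_kw * pue * hours_per_year / 1000

it_load_kw = 500  # assumed IT load of the cluster

air_cooled = annual_energy_mwh(it_load_kw, pue=1.6)     # assumed air-cooled PUE
liquid_cooled = annual_energy_mwh(it_load_kw, pue=1.1)  # assumed DLC PUE

savings = air_cooled - liquid_cooled
print(f"Air-cooled:    {air_cooled:,.0f} MWh/year")
print(f"Liquid-cooled: {liquid_cooled:,.0f} MWh/year")
print(f"Savings:       {savings:,.0f} MWh/year")
```

Under these assumed figures, the same IT load consumes thousands fewer megawatt-hours per year when liquid-cooled, which is where the operating-cost and carbon-footprint benefits come from.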

Our Team

Jacques Conan
Senior Product Manager

Mikael Jacquemont
Artificial Intelligence Expert and Project Owner AI4SIM

Bruno Charrier
HPC & AI Senior Solutions Architect