In critical, high-intensity contexts, AI-based decision-support solutions must above all deliver enhanced performance while letting operators focus on their expertise. Whether gaining operational advantage on the battlefield or using predictive maintenance to keep public transport running smoothly, AI must support critical decision-making with speed and reliability.

What we offer

Our unique solution, RAID (Reasoning AI for Decisions), built on the Xtractis® technology by INTELLITECH, provides a quick and reliable response to complex problems in support of critical decision-making, while remaining robust and sovereign, transparent and intelligible, and ethical and responsible. The solution uses a continuous learning loop to solve the central problem of data annotation.

Industries we serve

Unlocking potential across critical industries with versatile use cases. Explore a glimpse of what our solution can achieve.

Railway

Sleepers are critical components of railway infrastructure and are highly vulnerable to damage. Implementing an AI system that performs predictive classification of sleepers is essential for early detection of potential failures, ensuring timely intervention and preventing accidents. Our solution is a decision aid system that relies on data collected by different sensors to anticipate and reduce the risk of breakdowns while optimizing safety within a constant budget.
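To make the idea of a transparent, sensor-driven decision aid concrete, here is a minimal illustrative sketch in Python. The feature names, thresholds, and rules below are invented for illustration only; the actual RAID/Xtractis® models are induced from real sensor data rather than hand-written.

```python
# Illustrative sketch only: a transparent, rule-based classifier for
# sleeper condition using hypothetical sensor features. Thresholds and
# rules are invented for illustration, not taken from the real system.

def classify_sleeper(vibration_rms: float, crack_width_mm: float, age_years: int) -> str:
    """Return a maintenance priority from simple, human-readable rules."""
    if crack_width_mm > 5.0:
        return "replace"   # structural damage: immediate action
    if vibration_rms > 2.5 and age_years > 25:
        return "inspect"   # combined wear indicators: schedule inspection
    return "monitor"       # no rule fired: routine monitoring

readings = [
    {"vibration_rms": 1.1, "crack_width_mm": 0.2, "age_years": 10},
    {"vibration_rms": 3.0, "crack_width_mm": 1.0, "age_years": 30},
    {"vibration_rms": 0.8, "crack_width_mm": 6.5, "age_years": 5},
]
decisions = [classify_sleeper(**r) for r in readings]
print(decisions)  # ['monitor', 'inspect', 'replace']
```

Because each decision traces back to a named rule, an operator can see why a given sleeper was flagged, which is the intelligibility property discussed below.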

Defense

In today’s complex geopolitical landscape, armed forces are modernizing with state-of-the-art digital systems to maintain their strategic edge. But this data overload requires tailored AI technologies to aid critical decision-making. From acoustic detection to drone recognition, we support operational planning, battlefield anticipation, and unit coordination within the right ethical and legal framework.

Key benefits

Transparent and intelligible
Robust and sovereign
Ethical and responsible

On the road to 2025

Seminar – The Sovereign Reasoning AI for Trusted Critical Decisions

Held under the aegis of the AI Action Summit and the high patronage of the Presidency of the Republic, this special event was chaired by Mr. Philippe LATOMBE, Member of Parliament, and sponsored by Ambassador Isabelle ROME.

The key topics of this seminar program included cybersecurity challenges, the stakes of a collective and evolving AI, frugal and embedded AI, along with a focus on use cases in the transportation, defense, healthcare, and R&D sectors.

A unique technology

A trusted AI is defined by several essential criteria that ensure its reliability and compliance with the ethical values and requirements of the Armed Forces and critical industries, within an evolving international and European legal framework.

One of its key characteristics is being “human-centered”. Human involvement should go beyond merely being “in the loop” as a mandatory checkpoint in a process; instead, AI should be designed to mirror human reasoning, acting as an extension of human action.

Controlling the operational framework of a complex decision-making system is often approached externally through statistical tests. But is this truly sufficient? The validation of the most critical AI systems (AIS), where failure could result in harm or loss of human life, requires far more rigorous measures.

This need for more formal control of delegation highlights two other essential criteria: the intelligibility (or transparency) of the AIS and the explainability of its predictions. Beyond statistical validation and sample testing of an opaque (“black box”) AIS, a transparent AIS undergoes formal and refutable pre-validation across its entire operational framework. This involves verifying the completeness of the knowledge embedded in the AIS, which forms its internal decision-making logic, and assessing whether that knowledge is appropriate and sufficient for its intended use.
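The completeness check described above can be sketched in a few lines. This is a toy illustration, not the Xtractis® method: a hypothetical rule base over two invented inputs (temperature and load) is swept across a discretized operational domain, and any input that fires no rule is reported as a coverage gap.

```python
# Minimal sketch, with hypothetical rules and domain bounds: verify that
# every admissible input in the operational framework fires at least one
# rule, i.e. the embedded knowledge is complete for the intended use.

import itertools

# Each rule: (name, predicate over (temperature, load))
rules = [
    ("low_load",   lambda t, l: l < 50),
    ("hot_heavy",  lambda t, l: t >= 30 and l >= 50),
    ("cool_heavy", lambda t, l: t < 30 and l >= 50),
]

# Discretized operational framework: temperature in [-10, 50], load in [0, 100]
gaps = [
    (t, l)
    for t, l in itertools.product(range(-10, 51, 5), range(0, 101, 10))
    if not any(pred(t, l) for _, pred in rules)
]
print("complete" if not gaps else f"uncovered inputs: {gaps}")
```

An exhaustive sweep like this is only feasible because the decision logic is an explicit, inspectable rule base; a black-box model offers no analogous object to verify.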

Thus, through intelligibility and explainability, the AIS validation phase enables the detection of potential biases, particularly those unconsciously introduced during the creation of training datasets – a prerequisite for the ethical and responsible use of AI systems.

Stéphane Delaye

Chief Technology Officer for Command/Control & Intelligence at Eviden

RAID, Reasoning AI for Decisions, makes it possible to deliver a fast, reliable response to a complex problem by taking the same steps as the human brain: gathering data, validating and processing this data, and producing the best possible decision.

Zyed Zalila

Founding CEO, R&D Director at INTELLITECH

Framing the problem is the human’s role, solving it is the role of Xtractis®, and auditing, certifying, and deploying the solution is the human’s role once again.

Contact us
