Reasoning AI: A powerful decision-making tool

In today’s geopolitical context, emerging threats are paving the way for new forms of conflict or the resurgence of old ones. High-intensity combat, cyberattacks and other forms of information warfare all require the armed forces to develop and integrate innovative technologies to meet today’s operational needs.

To maintain their strategic advantage and operational capability at the highest level, armed forces are evolving and modernizing by deploying state-of-the-art digital systems. The information overload created by this flow of data calls for Artificial Intelligence (AI) technologies to support critical decision-making.

So how can we adapt these technologies and their uses, inspired in part by the civilian world, while respecting the constraints inherent in their field of use?

1. The high expectations of the armed forces

Listed as priority number one in the DrOID 2023 Defense Innovation Orientation reference document, and reaffirmed by the creation in March 2024 of the Ministerial Agency for Defense Artificial Intelligence, AI in Critical Systems meets multiple needs of the Armed Forces, which revolve around several main axes:

  • Decision-making tools: first and foremost, AI must be able to support decision-making in the planning and conduct of operations. Through rapid, in-depth data analysis, it must be able to anticipate battlefield developments and threats. In this way, it can provide commanders with a better understanding of the operational environment, enabling them to make the best possible decisions.
  • Facilitating force coordination and collaborative combat: AI should also enhance operational efficiency by facilitating coordination between different units, thus supporting collaborative combat, and ensuring interoperability with allies, while maintaining the ability to operate autonomously.
  • Sovereign, resilient and trusted technology: last but not least, autonomy in the design of AI systems must guarantee the sovereignty of our critical solutions, and avoid damaging dependence on foreign countries.

To guarantee robust, secure systems that support the exercise of command without creating vulnerabilities, it is essential to have trusted, secure AI. But how should this be defined, and within what ethical and legal framework?

2. An ethical and legal framework

We need to define the ethical and legal framework within which AI applied to military concepts of use must evolve. This framework differs from the one that applies in the civilian world: the criticality and complexity of defense operations must be central to the framework governing its deployment. Forces must be able to rely on the integrated AI tools made available to them. These decision-making tools must also meet specific deployment constraints, such as computing power consumption and on-board capability. Let’s now define the terms transparent AI versus black-box AI, frugal AI and embedded AI.

Transparent AI: Transparent AI refers to an artificial intelligence system whose decision-making mechanisms and internal processes are open and understandable to users. It aims to provide visibility into how data is processed, models are built and conclusions are drawn. It is therefore formally auditable and certifiable, and makes it possible to detect biases.

Black-box AI: Black-box AI designates an artificial intelligence system whose internal decision-making processes are opaque and cannot be explained to users. Users can see the input data and output results, but cannot easily understand how or why the AI arrived at them. This opacity can pose problems in terms of accountability, trust and regulatory compliance, particularly in critical areas.

Frugal AI: Frugal AI refers to an artificial intelligence system designed to operate efficiently using minimal hardware and energy resources. Such a system is therefore optimized to have a small footprint, whether in terms of computing power, memory or data storage. Frugality also means being able to learn from limited or incomplete data.

Embedded AI: Embedded AI refers to an artificial intelligence system integrated directly into hardware devices or systems, often with constraints on size, power and processing capacity.
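To make these definitions concrete, here is a minimal sketch in Python of a transparent, frugal rule base. It is purely illustrative: the rules, thresholds and field names are invented for the example and are not drawn from Xtractis® or any other real system. The key property is that every decision can name the rule that produced it, which is exactly what a black box cannot do.

```python
# Hypothetical transparent rule base for a threat-assessment decision aid.
# All rules, thresholds and field names are invented for this illustration.

RULES = [
    # (human-readable label, condition, decision)
    ("R1: fast and closing -> HIGH", lambda t: t["speed_kts"] > 400 and t["closing"], "HIGH"),
    ("R2: inside 10 nm -> MEDIUM", lambda t: t["dist_nm"] < 10, "MEDIUM"),
    ("R3: default -> LOW", lambda t: True, "LOW"),
]

def decide(track):
    """First matching rule wins; the fired rule is returned alongside the
    decision, so every output is traceable, auditable and checkable for bias."""
    for label, condition, decision in RULES:
        if condition(track):
            return decision, label
    raise AssertionError("unreachable: R3 makes the rule base total")

track = {"speed_kts": 450, "closing": True, "dist_nm": 25}
decision, why = decide(track)
print(decision, "because", why)  # HIGH because R1: fast and closing -> HIGH
```

A statistical model trained on the same data might reach the same label, but it could not produce the fired rule, and that traceability is what makes formal audit and certification possible.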

A trusted AI is defined by several essential criteria that guarantee its reliability and its compliance with the ethical values and requirements of the Armed Forces, within a legal framework established at the international and European levels.

One of its main qualities is to be “Human Centric”. Humans must not only be “in the loop”, as a mandatory point of passage in a process; the AI must be shaped according to the way they think, so as to be an extension of their action: “AI in the loop”. Whatever the autonomy of AI systems, in France the responsibility for critical decisions remains with humans.

3. RAID – Reasoning AI for Decisions

In a theater of operations, multiple parameters have to be taken into account simultaneously in an unstable, high-intensity environment. Plan, refine, execute: this cycle, which embodies the foundations of all military action at every level of tactical decision-making, must be executed ever more rapidly.

Acting on the cognitive load at the decision-making stage is therefore essential for rapid, effective action.

RAID, Reasoning AI for Decisions, makes it possible to deliver a fast, reliable response to a complex problem by taking the same steps as the human brain: gathering data, validating and processing this data, and producing the best possible decision. Responding to all the constraints inherent in the integration of AI in Critical Systems, RAID is both robust and sovereign, transparent and intelligible, ethical and responsible.

This necessity naturally defines three other essential criteria: the transparency of systems, and the intelligibility and explainability of the predictions made by AI. These systems must therefore be certifiable and auditable. Whereas a black box can only be validated statistically, on samples, a transparent system lends itself to prior formal and refutable validation across its entire scope of use.
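To give a feel for the difference, here is a minimal sketch, with invented rules and a deliberately small discretized domain, of what validation across an entire scope of use can mean in practice: every admissible input is enumerated and checked against refutable properties, instead of being sampled statistically.

```python
# Illustrative sketch: exhaustive validation of a toy rule base over its
# whole (discretized) scope of use. The rules, thresholds and domain are
# invented for this example; real industrial validation is far richer,
# but the principle is the same: check every case, not a random sample.
import itertools

SPEEDS = range(0, 1001, 50)    # admissible speeds (kts), discretized
DISTANCES = range(0, 101, 5)   # admissible distances (nm), discretized
CLOSING = (False, True)

def decide(speed, dist, closing):
    if speed > 400 and closing:
        return "HIGH"
    if dist < 10:
        return "MEDIUM"
    return "LOW"

checked = 0
for speed, dist, closing in itertools.product(SPEEDS, DISTANCES, CLOSING):
    decision = decide(speed, dist, closing)
    # Totality: every admissible input yields a decision.
    assert decision in {"HIGH", "MEDIUM", "LOW"}
    # A refutable safety property: a fast, closing track is never rated LOW.
    assert not (speed > 400 and closing and decision == "LOW")
    checked += 1
print(f"validated all {checked} cases in the scope of use")
```

A single counterexample would refute the rule base before deployment, which is precisely the guarantee a sample-based test of a black box cannot give.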

Finally, and thanks to the characteristics defined above, it must enable the detection of biases, an indispensable condition for qualifying an AI as ethical and responsible.

One of the special features of this technology is that it can be embedded. Indeed, once the system of rules has been validated, the model operates frugally, even in a constrained environment.
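As a rough sketch of why this holds, using the same invented example: once frozen and validated, a rule base reduces to a small static table plus a few comparisons per decision, with no model weights, ML framework or accelerator required, which is what makes deployment on constrained, embedded hardware realistic.

```python
# Illustrative sketch: a validated rule base exported as plain data.
# Evaluation needs only a handful of comparisons and a few hundred bytes
# of memory. All thresholds and fields are invented for the example.

RULES = (
    # (min_speed_kts, required_closing, max_dist_nm, decision)
    (400, True, None, "HIGH"),
    (None, None, 10, "MEDIUM"),
    (None, None, None, "LOW"),   # default rule keeps the base total
)

def decide(speed, closing, dist):
    for min_speed, required_closing, max_dist, decision in RULES:
        if min_speed is not None and speed <= min_speed:
            continue
        if required_closing is not None and closing != required_closing:
            continue
        if max_dist is not None and dist >= max_dist:
            continue
        return decision

print(decide(450, True, 25))  # HIGH, computed without any ML runtime
```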

At the heart of this system is Xtractis®, a unique AI invented by Professor Zyed ZALILA and co-developed and published by the French company INTELLITECH. By placing the human at the center, this human-machine collaboration guarantees strategic and operational superiority while exhibiting all the characteristics of a trusted AI.

“The positioning of the problem is the human being, the resolution of the problem is Xtractis®, and the auditing, certification and deployment are the human being.” Zyed Zalila

To find out more, meet us at Eurosatory, stand 5AJ126, from June 17 to 21, 2024.