From proof of concept to large-scale systems
After years focused on proof-of-concept experiments, quantum computing is entering a decisive phase. The challenge is no longer to control a few dozen or even a few hundred qubits, but to build systems that can run long, complex, and reliable computations. Fault tolerance is central to this transition.
Quantum error correction, introduced in our previous article, has become a core element of both academic and industrial roadmaps. It is a direct prerequisite for quantum computing to deliver a meaningful advantage over classical methods.
Scaling is impossible without error correction
The physical error rates observed today, typically between 10⁻³ and 10⁻⁴ per gate, impose a strict limit on the number of operations a quantum circuit can usefully perform. As the number of operations grows, the probability that at least one error corrupts the result quickly becomes dominant.
By contrast, quantum algorithms with real industrial impact require error rates in the range of 10⁻⁹ to 10⁻¹². As a result, the construction of logical qubits protected by error-correcting codes operating below their threshold is widely seen as the only viable path toward large-scale quantum computing.
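To make this gap concrete, here is a rough back-of-the-envelope sketch (our illustration, under the simplifying assumption of independent, uncorrected gate errors): the probability that a circuit of n gates completes without a single fault is roughly (1 − p)ⁿ.

```python
# Illustrative only: assumes independent, uncorrected gate errors, so a
# circuit of n gates succeeds with probability (1 - p)**n.

def success_probability(p_gate: float, n_gates: int) -> float:
    """Probability that none of n_gates gates fails, for per-gate error rate p_gate."""
    return (1.0 - p_gate) ** n_gates

for p in (1e-3, 1e-4):
    for n in (1_000, 10_000, 1_000_000):
        print(f"p = {p:.0e}, n = {n:>9,} gates -> P(no error) ~ {success_probability(p, n):.2e}")

# At p = 1e-3, a 10,000-gate circuit finishes cleanly with probability ~4.5e-5,
# while billion-gate algorithms would need per-gate error rates many orders of
# magnitude lower -- the gap that error correction has to close.
```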
Error-correcting codes: what is the state of the art in 2025?
Several research groups have now demonstrated early logical qubits using small error-correcting codes, typically involving on the order of a dozen physical qubits. These experiments show modest reductions in error rates, but they remain far from the levels required for practical applications.
The surface code has emerged as the most mature approach so far. Its two-dimensional layout and reliance on local interactions between neighboring qubits make it particularly attractive from an engineering and industrial standpoint, especially for superconducting platforms.
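As an order-of-magnitude illustration, the sketch below uses the commonly quoted surface-code scaling heuristic p_L ≈ A·(p/p_th)^((d+1)/2) together with the ≈2d² physical-qubit cost of a distance-d patch; the prefactor A = 0.1 and threshold p_th = 1% are assumed values chosen for illustration, not measured figures.

```python
# Heuristic sketch, not measured data: surface-code logical error rate and
# physical-qubit cost as a function of code distance d (odd distances).

def logical_error_rate(p_phys: float, distance: int,
                       p_threshold: float = 1e-2, prefactor: float = 0.1) -> float:
    """Common heuristic p_L ~ A * (p / p_th)**((d + 1) / 2), per error-correction round."""
    return prefactor * (p_phys / p_threshold) ** ((distance + 1) // 2)

def physical_qubits(distance: int) -> int:
    """Data plus measurement qubits for one rotated surface-code patch: 2*d**2 - 1."""
    return 2 * distance ** 2 - 1

for d in (3, 11, 25):
    print(f"d = {d:>2}: ~{physical_qubits(d):>4} physical qubits, "
          f"p_L ~ {logical_error_rate(1e-3, d):.1e}")
```

With an assumed physical error rate of 10⁻³, pushing the distance from 3 to 25 takes the logical error rate from roughly 10⁻³ down to the 10⁻¹⁴ range, at the cost of over a thousand physical qubits per logical qubit.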
Color codes offer appealing features, notably the possibility of implementing certain logical gates transversally. However, this comes at the cost of more complex decoding. They are considered promising for some platforms, such as trapped ions and neutral atoms, while posing significant challenges at scale.
At the same time, quantum LDPC codes (Low Density Parity Check) are drawing increasing attention. Their sparse structure and potential for more efficient decoding make them strong candidates for the medium to long term, even though their hardware implementations are still largely experimental.
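The "sparse parity checks" idea behind LDPC codes can be shown with a toy example (ours, not a specific quantum LDPC code): each row of the check matrix touches only a few qubits, and an error flips exactly the checks that involve it.

```python
import numpy as np

# Toy parity-check matrix chosen for illustration only: each check (row)
# involves just three qubits, which is what "low density" means in practice.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 0, 1, 1, 1],
], dtype=np.uint8)

error = np.zeros(6, dtype=np.uint8)
error[4] = 1                      # a single flip on qubit 4

syndrome = (H @ error) % 2        # each syndrome bit is the parity of a few qubits
print("syndrome:", syndrome)      # -> [0 1 1]: only the checks touching qubit 4 fire
```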
No error-correcting code is universal: none allows all logical gates to be implemented transversally. The key challenge is therefore to strike the right balance between error suppression, decoding complexity and hardware practicality.
Open challenges on the road to fault tolerance
Despite steady progress, several major obstacles remain:
- Limited numbers of physical qubits
The number of physical qubits remains limited today. Yet encoding a single logical qubit requires substantial redundancy. Fewer physical qubits means even fewer logical qubits, which severely restricts the size of feasible computations.
- Physical error rates
The physical error rate measures how likely an error is to occur when a qubit is manipulated. Error correction only works below a critical threshold. If the hardware is too noisy, above this threshold, error correction actually worsens performance by introducing more errors than it removes. Below the threshold, however, redundancy allows errors to be progressively suppressed, resulting in much more reliable logical qubits. Today’s technologies operate close to this threshold. This is encouraging, but it leaves little room for degradation: small drops in hardware performance can push a system back into the noisy regime, while even modest improvements significantly ease the burden on error correction.
- Engineering complexity
Encoding a logical qubit may require several hundred physical qubits. Designing, connecting and controlling such architectures raises major engineering challenges, including connectivity, control and cooling.
- Error decoding
In a quantum computer, errors cannot be detected by simply “looking” at the qubits, since directly measuring a qubit destroys the quantum information it contains. Error correction must therefore rely on indirect methods. This is achieved through the measurement of syndromes, indirect indicators that must be decoded to infer which errors have occurred. Decoding is a complex task, often required in real time, involving continuous feedback between the quantum processor and classical computers at extremely high speeds. A minimal illustration of syndrome-based decoding is sketched after this list.
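To illustrate the division of labour between syndrome measurement and classical decoding, here is a deliberately minimal sketch based on the 3-qubit bit-flip repetition code; real decoders for surface or LDPC codes (minimum-weight matching, union-find, belief propagation) are far more sophisticated, but the principle is the same.

```python
# Minimal illustration: the 3-qubit bit-flip repetition code. Two parity
# checks (qubits 0-1 and 1-2) are measured indirectly via ancillas; the
# classical decoder maps each syndrome to the most likely single-qubit
# correction without ever reading the data qubits themselves.

SYNDROME_TO_CORRECTION = {
    (0, 0): None,   # no check fired: assume no error
    (1, 0): 0,      # only the 0-1 check fired: flip qubit 0
    (1, 1): 1,      # both checks fired: flip qubit 1
    (0, 1): 2,      # only the 1-2 check fired: flip qubit 2
}

def syndrome(bits):
    """Parities of neighbouring data qubits (what the ancilla measurements return)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode_and_correct(bits):
    """Apply the lookup-table correction suggested by the measured syndrome."""
    corrected = list(bits)
    qubit = SYNDROME_TO_CORRECTION[syndrome(bits)]
    if qubit is not None:
        corrected[qubit] ^= 1
    return corrected

print(decode_and_correct([0, 1, 0]))  # a single flip on qubit 1 -> restored to [0, 0, 0]
```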
Industrial trends and global dynamics
Error correction is no longer an academic curiosity. It now sits at the center of the strategies of QPU manufacturers, startups, industrial research labs and large national and European programs.
Many public roadmaps point to fault-tolerant demonstrators in the 2027–2030 timeframe. While these timelines are ambitious and should be treated cautiously, they reflect a shared understanding: without fault tolerance, quantum computing will not deliver industrial value.
In practice, most current demonstrations still focus on error-corrected quantum memories, with only limited examples of full multi-qubit logical computations.
Eviden’s approach: tools, integration and system thinking
Within this evolving ecosystem, Eviden focuses on tooling and integration. Our work is motivated by a clear gap: the lack of practical tools to implement, compare, and evaluate error-correcting codes across realistic and diverse hardware platforms.
Our main research directions include:
- developing emulators for noisy quantum hardware
- building tools to implement and characterize multiple error-correcting codes
- cleanly separating quantum algorithms from error-correction mechanisms
- proposing standard approaches that support future multi-technology integration
Our goal is to allow researchers and users to experiment with quantum algorithms in error-corrected regimes without requiring deep expertise in error correction itself.
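As a purely illustrative sketch of that separation (hypothetical names, not Eviden's actual tooling), an algorithm can be written once against a logical interface while the code family, distance and noise parameters remain swappable back-end choices:

```python
# Hypothetical sketch of separating algorithms from error-correction details.
# None of these classes correspond to a real product or library.
from dataclasses import dataclass

@dataclass
class NoiseModel:
    gate_error: float          # assumed per-gate error rate of the emulated hardware

@dataclass
class CodeConfig:
    family: str                # e.g. "surface", "color", "qldpc"
    distance: int

class LogicalBackend:
    """Runs logical circuits; noise emulation and decoding stay behind this interface."""
    def __init__(self, noise: NoiseModel, code: CodeConfig):
        self.noise, self.code = noise, code

    def run(self, circuit: list[str]) -> str:
        # A real back end would encode the circuit, emulate noisy syndrome cycles
        # and decode; here we only report which configuration it would use.
        return f"ran {len(circuit)} logical ops on {self.code.family}(d={self.code.distance})"

# The same algorithm can then be evaluated under different codes and noise levels:
algorithm = ["H 0", "CX 0 1", "MEASURE 0 1"]
for code in (CodeConfig("surface", 5), CodeConfig("qldpc", 5)):
    backend = LogicalBackend(NoiseModel(gate_error=1e-3), code)
    print(backend.run(algorithm))
```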
Eviden also participates in several French and European initiatives, including the France Hybrid HPC Quantum Initiative (HQI), which aims to integrate quantum technologies into high-performance computing environments.
Toward an engineering discipline for reliable quantum computing
The move toward fault-tolerant quantum computing marks a fundamental transition, not only in scale, but in methodology.
As architectures mature, error correction becomes a system-level concern spanning hardware, software, algorithms and high-performance computing.
No single technology, code, or organization can define the winning approach on its own. Progress will depend on collective efforts to build shared tools, frameworks, and evaluation methods that enable comparison and convergence, while remaining agnostic to specific technological choices.