The BACQ project (Benchmarks for Application-Centric Quantum Computing) aims to develop, apply, and promote a reliable, objective, and practical instrument for measuring the performance of quantum computers, built on a set of benchmarks close to real applications.
The project is supported by the national program MetriQs-France, which is part of the French national quantum strategy. The consortium includes Thales, Eviden, CEA, CNRS, Teratec, and LNE. Together, they aim to establish meaningful performance evaluation criteria for industry users.
Q-score™ forms the foundation of the BACQ project
In 2021, Eviden introduced Q-score, an open-source quantum computing metric built around an optimization problem, so that benchmarks can be tested and validated across different quantum machines. The Q-score corresponds to the effective number of qubits that a given quantum stack, comprising quantum hardware (the Quantum Processing Unit, or QPU) and optimization software such as compilers, can usefully exploit to solve a widely used combinatorial optimization problem known as the Maximum Cut (Max-Cut) problem. A solution is considered adequate if it delivers a significant computational improvement over what random assignment achieves.
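To make that acceptance criterion concrete, here is a minimal illustrative sketch, not the official Q-score code: it scores a candidate partition of a random graph against the expected cut of a uniformly random assignment. The G(n, 1/2) instance family follows the original Q-score publication; the function and variable names are our own.

```python
import random
import networkx as nx

def cut_value(graph, assignment):
    """Number of edges whose endpoints land on opposite sides of the partition."""
    return sum(1 for u, v in graph.edges() if assignment[u] != assignment[v])

# Erdos-Renyi random graph G(n, 1/2), the instance family described in the Q-score publication.
n = 12
graph = nx.erdos_renyi_graph(n, 0.5, seed=42)

# A candidate partition, e.g. produced by a quantum optimizer (random here, for illustration).
candidate = {node: random.randint(0, 1) for node in graph.nodes()}

# A uniformly random assignment cuts each edge with probability 1/2 on average.
random_baseline = graph.number_of_edges() / 2

print(f"cut value of candidate : {cut_value(graph, candidate)}")
print(f"random-assignment mean : {random_baseline:.1f}")
# A solution is deemed adequate only if it beats the random baseline by a significant margin.
```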
The BACQ project’s first action was to adopt Q-score and establish the benchmark’s universality. Discussions and collaborations are already under way with European and international technology providers across a range of hardware platforms, including superconducting circuits, photonics, spin qubits, neutral atoms, and trapped ions.
What are the upcoming steps in the roadmap?
The BACQ project is working on normalizing and validating the Q-score by analyzing feedback from tests conducted on real quantum processors; this includes refining the metric and developing standardized protocols. Testing will be extended to additional QPUs in 2025, and an article summarizing these tests is in progress. The Eviden quantum lab is also developing a benchmark based on many-body physics.
Which organizations are currently utilizing Q-score?
2022
Delft University of Technology published a paper adapting Q-score to quantum annealing devices. They kept the essentials of Q-score, including the Max-Cut reference problem, and used a QUBO formulation of the problem (sketched after this timeline), which was then solved on D-Wave’s devices.
2023
A Dutch team published an article introducing a new member of the Q-score family: Q-score Max-Clique. In addition, IQM released its new 20-qubit superconducting QPU and published its measurement on the Q-score Max-Cut benchmark, achieving a Q-score Max-Cut of 11.
2024
Another promising QPU maker, Quandela, released a new photonic QPU named Altair, now available on Quandela Cloud 2.0, and published its Q-score Max-Cut result.
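As a companion to the quantum-annealing adaptation mentioned in the 2022 entry, here is a minimal sketch of the standard way Max-Cut is cast as a QUBO: maximizing the cut amounts to minimizing the negated edge contributions x_i + x_j - 2·x_i·x_j over binary variables. This is the textbook construction, assumed here for illustration; it is not the code used in the Delft study.

```python
import numpy as np
import networkx as nx

def maxcut_qubo(graph):
    """Build a QUBO matrix Q such that minimizing x^T Q x over binary x
    is equivalent to maximizing the cut of `graph`.

    Each edge (i, j) adds x_i + x_j - 2*x_i*x_j to the cut size, so we
    negate that contribution to turn Max-Cut into a minimization problem.
    """
    n = graph.number_of_nodes()
    Q = np.zeros((n, n))
    for i, j in graph.edges():
        Q[i, i] -= 1      # -x_i  (diagonal term, since x_i^2 = x_i for binary x)
        Q[j, j] -= 1      # -x_j
        Q[i, j] += 2      # +2 x_i x_j  (coupling term)
    return Q

graph = nx.erdos_renyi_graph(6, 0.5, seed=1)
Q = maxcut_qubo(graph)

# Brute-force check on this tiny instance: the QUBO minimum equals minus the maximum cut.
candidates = (np.array([(k >> b) & 1 for b in range(6)]) for k in range(2 ** 6))
print("max cut =", -int(min(x @ Q @ x for x in candidates)))
```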

FAQ
What is the Q-score?
The Q-score is a metric designed to evaluate the performance of quantum processors, and of the entire quantum computing stack, on real-world problems. It specifically assesses the maximum number of qubits that can be effectively used to solve the Max-Cut problem.
Why the Max-Cut problem?
The Max-Cut problem is a well-studied combinatorial optimization problem with applications in machine learning, integrated circuit design, and network optimization. It is also NP-hard, meaning there is no known efficient classical algorithm for solving large instances, which makes it suitable for demonstrating potential quantum advantage. In addition, Max-Cut is easily formulated for implementation on quantum processors.
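To illustrate that last point, the usual gate-based formulation, standard textbook material rather than anything specific to the Q-score code, attaches one qubit per vertex and encodes the cut size in a cost Hamiltonian whose expectation value is then maximized:

```latex
% For a graph G = (V, E), each computational-basis state corresponds to a
% partition of V, and the cost Hamiltonian
H_C \;=\; \frac{1}{2} \sum_{(i,j) \in E} \bigl( 1 - Z_i Z_j \bigr)
% assigns to that state an energy equal to the number of edges it cuts,
% so maximizing \langle H_C \rangle solves Max-Cut.
```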
How does the Q-score fit into the BACQ project?
The BACQ project aims to develop application-oriented benchmarks for quantum computing, targeting a high-level evaluation that is meaningful for industry. The Q-score is a key component of the project, providing a practical and insightful metric for evaluating performance in the context of real-world applications.
How does the Q-score differ from conventional metrics?
The Q-score’s focus on solving a concrete problem like Max-Cut with a practical algorithm like QAOA (the Quantum Approximate Optimization Algorithm) provides a more comprehensive and meaningful evaluation of quantum computing capabilities. This approach contrasts with conventional metrics that focus on individual hardware figures of merit, such as gate fidelity or coherence times.
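For reference, QAOA prepares a parameterized trial state by alternating the cost Hamiltonian H_C above with a simple mixing Hamiltonian, and a classical optimizer tunes the angles to maximize the expected cut. The expression below is the standard textbook ansatz (its angles γ_k, β_k are unrelated to the benchmark ratio β(n) discussed next):

```latex
% Depth-p QAOA ansatz for Max-Cut, with mixing Hamiltonian H_M = \sum_i X_i:
\lvert \psi(\vec{\gamma}, \vec{\beta}) \rangle
  \;=\; \prod_{k=1}^{p} e^{-i \beta_k H_M}\, e^{-i \gamma_k H_C}\,
        \lvert + \rangle^{\otimes n}
```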
What is β(n)?
β(n) is the ratio used to filter out the inherent "random" performance of the algorithm: it measures how far the average cut found by the quantum stack lies above the score of purely random assignment, relative to the estimated gap between random assignment and the optimum. A problem size counts as solved only when β(n) exceeds a fixed threshold.
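For readers who want the formula, the original Q-score paper, as we read it, defines the ratio for Erdős–Rényi graphs G(n, 1/2) as follows, with a size declared solved when β(n) exceeds the threshold β* = 0.2:

```latex
% C(n): average best cut returned by the stack on random graphs G(n, 1/2)
% n^2/8: expected cut of a uniformly random assignment
% \lambda n^{3/2}, with \lambda \approx 0.178: estimated gap up to the optimal cut
\beta(n) \;=\; \frac{C(n) - \tfrac{n^2}{8}}{\lambda\, n^{3/2}},
\qquad \text{size } n \text{ is solved if } \beta(n) > \beta^{*} = 0.2
```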
How can I run the Q-score?
Running a Q-score evaluation is straightforward, as the code is open source and available on GitHub.
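Independently of the official repository, whose API we do not reproduce here, the post-processing behind the metric can be sketched in a few lines: given the average cut obtained at each graph size, compute β(n) as above and report the largest size that clears the threshold. The constants and function names below come from our reading of the original paper and should be checked against the published code; the measurement values are made up for illustration.

```python
LAMBDA = 0.178     # estimated scaling of the random-to-optimal gap (original paper)
BETA_STAR = 0.2    # acceptance threshold used in the original paper

def beta(n, average_cut):
    """Fraction of the random-to-optimal gap covered by the stack at size n."""
    random_baseline = n ** 2 / 8   # expected cut of a random assignment on G(n, 1/2)
    return (average_cut - random_baseline) / (LAMBDA * n ** 1.5)

def q_score(average_cuts, beta_star=BETA_STAR):
    """Largest graph size whose beta(n) clears the threshold.

    `average_cuts` maps a graph size n to the average best cut the quantum
    stack found over many random instances of that size.
    """
    solved = [n for n, cut in average_cuts.items() if beta(n, cut) > beta_star]
    return max(solved, default=0)

# Made-up measured averages, for illustration only.
measurements = {5: 4.4, 10: 16.5, 15: 33.0, 20: 52.0}
print(q_score(measurements))   # prints 15: size 20 does not clear the threshold here
```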
Get in touch!
Connect with us to discover BACQ.