No industry-standard benchmark exists for comparing quantum computers, making procurement decisions nearly impossible
Quantum Volume, the most widely cited benchmark for quantum computer performance, cannot scale beyond approximately 100 qubits because it requires exponentially expensive classical simulation for verification. IBM stopped reporting Quantum Volume for its largest processors. Alternative metrics like EPLG (error per layered gate) and CLOPS (circuit layer operations per second) each capture only one dimension of performance. There is no agreed-upon, comprehensive benchmark, so a CTO evaluating whether to lease time on IBM, Google, IonQ, or Quantinuum hardware has no apples-to-apples comparison available.
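The scaling wall is straightforward to see: verifying a Quantum Volume circuit requires classically computing its ideal output distribution, which for a general circuit means statevector simulation with memory that doubles per qubit. A minimal back-of-the-envelope sketch (the 16-bytes-per-amplitude figure assumes complex128 amplitudes):

```python
# Why Quantum Volume verification cannot scale: certifying a QV run
# requires the ideal output distribution, i.e. classical statevector
# simulation, whose memory grows as 2^n.
def statevector_bytes(n_qubits: int) -> int:
    # One complex128 amplitude (16 bytes) per basis state.
    return 16 * 2**n_qubits

if __name__ == "__main__":
    for n in (30, 50, 100):
        print(f"{n} qubits: {statevector_bytes(n):.3e} bytes")
```

At 30 qubits the statevector already needs about 17 GB; at 50 qubits it exceeds the memory of any supercomputer, and at 100 qubits the number of amplitudes dwarfs the number of atoms in visible matter. This is why IBM's largest processors fell outside the metric's reach.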
Why it matters: With no standard benchmark, enterprise customers cannot rationally evaluate quantum hardware for their specific workloads. Procurement decisions therefore rest on marketing claims and vendor relationships rather than measured performance, and enterprises either overpay for hardware that does not fit their use case or avoid quantum computing entirely out of uncertainty. As a result, the commercial quantum computing market grows far more slowly than the technology's potential warrants, and quantum hardware companies cannot generate the customer revenue they need to fund continued R&D, leaving them dependent on increasingly skeptical venture capital.
The structural root cause is that quantum computing platforms differ so fundamentally in architecture (superconducting vs. trapped ion vs. photonic vs. neutral atom) that no single metric can fairly compare them. Quantum Volume favors high-connectivity systems, gate fidelity favors trapped ions, qubit count favors superconducting processors, and speed favors photonic systems. Each vendor promotes the metric on which it wins. International standardization bodies such as ISO and IEEE have formed working groups, but they have explicitly stated that many benchmarking techniques are 'still significantly evolving' and that standardization would be premature.
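The single-metric problem can be made concrete with a toy example. The numbers below are invented for illustration (they are not measured values for any vendor), but they capture the qualitative trade-offs the paragraph describes: each platform tops the ranking under the metric that suits its architecture.

```python
# Hypothetical illustration (all numbers invented): ranking the same
# platforms by three different single metrics produces three different
# "winners", so no one metric yields a fair cross-platform comparison.
platforms = {
    # platform: (qubit_count, two_qubit_gate_fidelity, circuit_layers_per_sec)
    "superconducting": (1000, 0.995, 1e5),
    "trapped_ion":     (50,   0.999, 1e2),
    "photonic":        (200,  0.980, 1e7),
}

def rank_by(metric_index: int) -> list[str]:
    """Return platforms sorted best-first on a single metric."""
    return sorted(platforms, key=lambda p: platforms[p][metric_index],
                  reverse=True)

if __name__ == "__main__":
    print("by qubit count:", rank_by(0))   # superconducting wins
    print("by fidelity:   ", rank_by(1))   # trapped ion wins
    print("by speed:      ", rank_by(2))   # photonic wins
```

Each column crowns a different architecture, which is exactly why every vendor can truthfully claim leadership on its preferred metric.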
Evidence
A February 2025 paper by researchers in Poland proposed parity-preserving benchmarks as a scalable alternative to Quantum Volume, acknowledging QV's inability to scale (The Quantum Insider, February 2025). A March 2025 arXiv paper ('Systematic Benchmarking of Quantum Computers: Status and Recommendations') identified five areas requiring international standardization working groups. Quantinuum published extensive benchmark data in 2025 calling itself 'the most benchmarked quantum computer in the world,' revealing the competitive pressure around metrics. A Nature Computational Science paper (2025) benchmarked quantum computing software frameworks but noted the difficulty of comparing hardware. IBM stopped reporting Quantum Volume numbers for its largest processors after the metric became uncomputable at scale.