In 1981, the American physicist and Nobel laureate Richard Feynman gave a lecture at the Massachusetts Institute of Technology (MIT) near Boston in which he outlined a revolutionary idea. Feynman suggested that the strange physics of quantum mechanics could be used to perform calculations.
The field of quantum computing was born. In the 40-plus years since, it has become an intensive area of research in computer science. Yet despite decades of development, physicists have not yet built practical quantum computers suited to everyday use in ordinary conditions (many, for example, must operate at extremely low temperatures). Questions and uncertainties remain about the best ways to reach this milestone.
What exactly is quantum computing, and how close are these machines to entering wide use? Let’s first look at classical computing, the type of computing we rely on today, like the laptop I am using to write this piece.
Classical computers process information using combinations of “bits”, their smallest units of data. These bits have values of either 0 or 1. Everything you do on your computer, from writing emails to browsing the web, is made possible by processing combinations of these bits in strings of zeroes and ones.
Quantum computers, on the other hand, use quantum bits, or qubits. Unlike classical bits, qubits don’t just represent 0 or 1. Thanks to a property called quantum superposition, qubits can be in multiple states simultaneously: a qubit can be 0, 1, or both at the same time. This is what gives quantum computers the ability to process massive amounts of information simultaneously.
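Superposition can be sketched with ordinary arithmetic. The toy Python below is not a real quantum simulator; it simply represents a qubit as two “amplitudes” (a standard textbook description) and shows how they translate into the probabilities of reading a 0 or a 1:

```python
import math

# A qubit's state is described by two complex amplitudes (alpha, beta):
# alpha is attached to the outcome 0, beta to the outcome 1.
# Measuring the qubit gives 0 with probability |alpha|^2 and 1 with |beta|^2.

def measurement_probabilities(alpha: complex, beta: complex) -> tuple[float, float]:
    """Return (P(0), P(1)) for a qubit with amplitudes alpha and beta."""
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    total = p0 + p1
    return p0 / total, p1 / total  # normalise in case the state wasn't

# An equal superposition: both amplitudes are 1/sqrt(2).
alpha = beta = 1 / math.sqrt(2)
p0, p1 = measurement_probabilities(alpha, beta)
print(p0, p1)  # 0.5 and 0.5: until measured, the qubit is "both" 0 and 1
```

A classical bit, by contrast, would have one amplitude equal to 1 and the other equal to 0, so its readout is never in doubt.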
Imagine being able to explore every possible solution to a problem all at once, instead of one at a time. You could navigate your way through a maze by trying all possible paths simultaneously to find the right one. Quantum computers are therefore incredibly fast at finding optimal solutions, such as identifying the shortest path or the quickest route.
Think about the extremely complex problem of rescheduling airline flights after a delay or an unexpected incident. This happens regularly in the real world, but the solutions applied may not be the best or optimal ones. To work out the optimal responses, standard computers would need to consider, one by one, all possible combinations of moving, rerouting, delaying, cancelling or grouping flights.
Every day there are more than 45,000 flights, organised by over 500 airlines, connecting more than 4,000 airports. This problem would take years to solve for a classical computer.
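A back-of-the-envelope sketch shows why checking every combination one by one is hopeless. The figures below (two options per flight, a billion checks per second, 100 flights) are illustrative assumptions, not real airline data:

```python
# Crude model of combinatorial explosion: if each of n flights has just two
# options (leave on time or delay), there are 2**n combinations to check.
# These numbers are illustrative only, not an airline-scheduling model.

def combinations(n_flights: int, options_per_flight: int = 2) -> int:
    """Number of distinct schedules when each flight has a fixed set of options."""
    return options_per_flight ** n_flights

checks_per_second = 1_000_000_000  # a generous billion schedules per second
n = 100  # a tiny fraction of the ~45,000 daily flights
seconds = combinations(n) / checks_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")  # astronomically long even for 100 flights
```

Even with only 100 flights and two choices each, an exhaustive search would take on the order of 10^13 years, which is why the full real-world problem is far beyond brute force.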
On the other hand, a quantum computer would be able to try all these possibilities at once and let the best configuration organically emerge. Qubits also have a physical property known as entanglement. When qubits are entangled, the state of one qubit can depend on the state of another, no matter how far apart they are.
This is something that, again, has no counterpart in classical computing. Entanglement allows quantum computers to solve certain problems exponentially faster than traditional computers can.
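Entanglement’s key signature can be mimicked, very loosely, by sampling correlated outcomes. The toy sketch below is not a physical simulation; it only illustrates the statistics of the simplest entangled state (a so-called Bell pair), where each qubit alone looks random but the pair always agrees:

```python
import random

# A two-qubit "Bell state" has only two possible joint outcomes, 00 and 11,
# each with probability 1/2. Reading one qubit instantly tells you the
# other's value: that perfect correlation is the hallmark of entanglement.

def measure_bell_pair(rng: random.Random) -> tuple[int, int]:
    """Sample one joint measurement of the entangled pair (|00> + |11>)/sqrt(2)."""
    outcome = rng.randint(0, 1)  # 0 -> both qubits read 0; 1 -> both read 1
    return outcome, outcome

rng = random.Random(42)
results = [measure_bell_pair(rng) for _ in range(1000)]
# Each qubit on its own is a fair coin flip, yet the two never disagree.
assert all(a == b for a, b in results)
```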
A common question is whether quantum computers will completely replace classical computers. The short answer is no, at least not in the foreseeable future. Quantum computers are incredibly powerful for solving specific problems – such as simulating the interactions between different molecules, finding the best solution from many options, or handling encryption and decryption. However, they are not suited to every type of task.
Classical computers process one calculation at a time in a linear sequence, and they follow algorithms (sets of mathematical rules for carrying out particular computing tasks) designed for use with classical bits that are either 0 or 1. This makes them extremely predictable, robust and less prone to errors than quantum machines. For everyday computing needs such as word processing or browsing the internet, classical computers will continue to play a dominant role.
There are at least two reasons for that. The first one is practical. Building a quantum computer that can run reliable calculations is extremely difficult. The quantum world is incredibly volatile, and qubits are easily disturbed by things in their environment, such as interference from electromagnetic radiation, which makes them prone to errors.
The second reason lies in the inherent uncertainty of dealing with qubits. Because qubits are in superposition (not definitely 0 or 1 until measured), they are not as predictable as the bits used in classical computing. Physicists therefore describe qubits and their calculations in terms of probabilities. This means that the same problem, using the same quantum algorithm, run multiple times on the same quantum computer, might return a different solution each time.
To address this uncertainty, quantum algorithms are typically run multiple times. The results are then analysed statistically to determine the most likely solution. This approach allows researchers to extract meaningful information from the inherently probabilistic quantum computations.
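The run-many-times-and-tally approach can be sketched with a toy noise model. The success probability and bit strings below are made up purely for illustration; real quantum hardware behaves very differently, but the statistics work the same way:

```python
import random
from collections import Counter

# Toy model of a probabilistic quantum computation: each run returns the
# correct answer with some probability, and a wrong answer otherwise.
# Repeating the run and taking the most common result recovers the answer.

def noisy_run(correct: str, p_correct: float, rng: random.Random) -> str:
    """One simulated run that sometimes returns a wrong bit string."""
    return correct if rng.random() < p_correct else rng.choice(["001", "110"])

def most_likely_result(runs: int, rng: random.Random) -> str:
    """Repeat the noisy computation and return the most frequent outcome."""
    counts = Counter(noisy_run("101", 0.6, rng) for _ in range(runs))
    return counts.most_common(1)[0][0]

rng = random.Random(7)
print(most_likely_result(1000, rng))  # "101" with overwhelming probability
```

Even though any single run is right only 60% of the time in this toy model, the majority vote over 1,000 runs is almost never wrong, which is exactly how researchers extract a reliable answer from probabilistic hardware.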
From a commercial point of view, the development of quantum computing is still in its early stages, but the landscape is very diverse, with new companies appearing every year. It is fascinating to see that, in addition to big established companies like IBM and Google, newer firms such as IQM and Pasqal, and startups such as Alice & Bob, are joining the field. They are all working on making quantum computers more reliable, scalable and accessible.
In the past, manufacturers drew attention to the number of qubits in their quantum computers as a measure of how powerful the machines were. Increasingly, though, they are prioritising ways to correct the errors that quantum computers are prone to. This shift is crucial for developing large-scale, fault-tolerant quantum computers, as error-correction techniques are essential for making them usable.
Google’s latest quantum chip, Willow, recently demonstrated remarkable progress in this area: the more qubits Google used in Willow, the further it reduced the errors. This achievement marks a significant step towards building commercially relevant quantum computers that could revolutionise fields like medicine, energy and AI.
After more than 40 years, quantum computing is still in its infancy, but significant progress is expected in the next decade. The probabilistic nature of these machines represents a fundamental difference between quantum and classical computing. It is what makes them fragile and hard to develop and scale.
At the same time, it is what makes them a very powerful tool for solving optimisation problems, exploring multiple solutions at the same time, faster and more efficiently than classical computers can.
Domenico Vicinanza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.