Quantum AI: the end of powerlessness!


“AI” and “quantum”. Here you go: two buzzwords for the price of one! More seriously, although I am aware that it is difficult to sum up two such vast fields in a few articles, I wanted to lay out, as faithfully as possible, the pros and cons of quantum computing applied to artificial intelligence, and more specifically to Machine Learning. The main underlying problem is the limitation of our computing capabilities when executing heavy algorithms. Although the power of our hardware has grown by orders of magnitude over the last thirty years, we must keep in mind that we will always need more resources, and that traditional computing will not let us tackle some Big Data and IoT problems. Quantum mechanics might be one of the technical solutions that brings computer science into a new era in terms of security and algorithm execution speed.

Did you say “quantum”?

First, let us look at where the adjective “quantum” comes from. The term is so overused – like “artificial intelligence” – that we often forget its origin. We must go back to the 1920s and 1930s and to the work of physicists such as Planck, Born and Einstein – among others – to hear about quantization. In short, the word “quantum” originally refers to the quantization of atomic energy levels – as revealed by the photoelectric effect – and to black-body radiation.

When the theory was developed, physicists understood that quantities measured on molecules, atoms and particles can only take certain well-defined, indivisible values. Such quantities are said to be “quantized”. It is a bit like the planetary orbits of our solar system: the planets move along well-defined ellipses, but there is no planet between two consecutive orbits (asteroids aside). It may not seem like much, but imagine the temperature on Earth were quantized: it could only be -20°C, +5°C or +40°C… and never anything in between. What the heck!
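To make the analogy concrete, here is a playful sketch (pure illustration, with no physics behind it) of a “quantized” thermometer: whatever the real reading, it can only report one of a few allowed levels, never anything in between.

```python
# Toy illustration: a "quantized" thermometer that can only report
# one of a few allowed values, never anything in between.
ALLOWED_LEVELS = [-20, 5, 40]  # the only "permitted" temperatures (degrees C)

def quantized_temperature(measured: float) -> int:
    """Snap a continuous reading to the nearest allowed level."""
    return min(ALLOWED_LEVELS, key=lambda level: abs(level - measured))

print(quantized_temperature(12.3))   # -> 5
print(quantized_temperature(31.0))   # -> 40
```

Real quantized quantities (energy levels, spin) behave the same way: intermediate values simply do not exist.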

Today, “quantum” refers to all subatomic sciences – a world where objects behave both as particles and as waves, and where measurement is probabilistic.
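That probabilistic measurement can itself be sketched in a few lines. Below, a single qubit in the state α|0⟩ + β|1⟩ is “measured” many times: each measurement collapses it to 0 or 1 with probabilities |α|² and |β|². This is a classical simulation sketch of the rule, of course, not real quantum hardware.

```python
import random

# Amplitudes of the state alpha|0> + beta|1>; |alpha|^2 + |beta|^2 must equal 1.
alpha, beta = 0.6, 0.8           # measurement probabilities: 0.36 and 0.64
p_zero = alpha ** 2

def measure() -> int:
    """Collapse the state: return 0 with probability |alpha|^2, else 1."""
    return 0 if random.random() < p_zero else 1

random.seed(42)                  # reproducible sketch
shots = [measure() for _ in range(10_000)]
print(shots.count(0) / len(shots))   # close to 0.36
```

A single measurement tells you almost nothing; only the statistics over many “shots” reveal the underlying amplitudes – a recurring practical constraint of quantum algorithms.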

Computing power retrospective

The first Intel microprocessor was released in 1971 with 2,300 transistors – the basic building block of a processor. Since then, hardware performance, which is intrinsically related to the number of transistors in a processor, has followed a trend – Moore's empirical law – which states that computing power doubles every 18 months. This is because the feature size of transistors shrinks by about 50% over that period. The problem is that this “law” will not hold forever. IBM has recently passed the five-nanometer threshold for the size of a transistor, but we might hit a floor by 2020-2021, since we are inexorably approaching the size of an atom – on the order of the angstrom, i.e. one tenth of a nanometer. Why is that an issue? Because at such small scales, phenomena such as quantum tunneling appear (electrons “go through walls” instead of being stopped by them), and transistors can no longer behave like ordinary transistors.
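Moore's law fits in one line of code. The sketch below is a back-of-the-envelope model (not a product roadmap): starting from the 2,300 transistors of the first Intel chip, the count doubles every 18 months.

```python
def transistors(year: float, base_year: float = 1971, base_count: int = 2300,
                doubling_period_years: float = 1.5) -> float:
    """Moore's empirical law: the transistor count doubles every 18 months."""
    return base_count * 2 ** ((year - base_year) / doubling_period_years)

print(round(transistors(1971)))   # 2300
print(round(transistors(1974)))   # two doublings later: 9200
```

Extrapolating the same formula three decades out lands in the billions of transistors – which is exactly why feature sizes ended up brushing against the atomic scale.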

Moreover, even setting these quantum effects aside, manufacturers could not rely solely on miniaturization to increase processors' computing power, because they ran into issues such as circuit overheating. Thus, to ease parallel computing and limit component temperature, they focused on new architectures that can be classified into two types of approaches.

The first is horizontal scaling of resources (many small servers), typically developed for Big Data purposes. This includes distributed computing, meaning that we can now leverage several processors from different machines within a cluster.
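The idea behind horizontal scaling is simply “split the work, then merge the partial results”. A minimal scatter/gather sketch, where threads stand in for the machines of a real cluster (in production, each chunk would go to a different server):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk: list) -> int:
    """Work done by one 'node' of the (simulated) cluster."""
    return sum(x * x for x in chunk)

data = list(range(1, 101))
chunks = [data[i::4] for i in range(4)]         # scatter over 4 "nodes"

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))  # gather the partial results

print(total)  # same result as the sequential sum of squares
```

This scatter/gather pattern is the essence of frameworks like MapReduce or Spark; the hard part in practice is the network and coordination cost, not the arithmetic.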

The second is vertical scaling of resources (one single powerful server). Multi-core processors are the oldest example. They have evolved into more modern architectures that massively increase parallelization for High Performance Computing (HPC) workloads. Two architectures are typical of HPC: graphics cards (GPUs) marketed by Nvidia and AMD (among others), and manycore architectures, historically championed by Intel through its Xeon Phi product line.

In the same category, we can add Google’s tensor processing units (TPU), initially developed for the use of the TensorFlow library (deep learning techniques).

Furthermore, we can also mention neuromorphic processors (NPU) and quantum processors (QPU).

Of course, every piece of this hardware is expensive; expect more than €5,000 for a good graphics card. But the main problem, which is not only a financial matter (although the energy bill can quickly become huge), is that artificial intelligence and the Internet of Things can require a lot of power… and even with these new technologies, some algorithms take so long to execute that they are impractical.

Why is quantum computing so hyped?

Although the field of quantum computing is not yet mature, this new paradigm brings many promises. Algorithms written in the quantum formalism can exploit the special properties of quantum hardware. More specifically, for some tasks such as solving systems of linear equations (matrix inversion) and decomposing numbers into prime factors, quantum algorithms are theoretically faster than any known equivalent classical algorithm. Mathematicians say that the complexity – the number of operations as a function of the input size – of classical algorithms is much higher than that of equivalent quantum algorithms, when the latter exist. For example, when the number to factorize grows by one decimal digit, the running time of the best classical factorization algorithms grows almost exponentially – while for the quantum counterpart (Shor's algorithm), the computation time grows only polynomially, meaning much more slowly.
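This complexity gap can be made tangible with rough operation-count formulas: the best known classical factoring algorithm (the general number field sieve) scales sub-exponentially in the number of digits, while Shor's algorithm scales polynomially. The sketch below compares the two as orders of magnitude only, with all constant factors dropped:

```python
import math

def classical_ops(digits: int) -> float:
    """GNFS heuristic cost: exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3))."""
    ln_n = digits * math.log(10)        # ln N for a d-digit number N
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def quantum_ops(digits: int) -> float:
    """Shor's algorithm: polynomial, roughly (log2 N)^3 gate operations."""
    bits = digits * math.log2(10)       # number of bits of N
    return bits ** 3

for d in (50, 100, 200):
    print(d, f"{classical_ops(d):.2e}", f"{quantum_ops(d):.2e}")
```

Doubling the number of digits multiplies the quantum cost by a mere factor of 8, while the classical cost explodes by several orders of magnitude – which is the whole point.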

Which Machine Learning algorithms could be impacted?

We mentioned the decomposition into prime factors above. This computation, central to cryptography (and cybersecurity), is one of the best illustrations of how long some algorithms take to run on traditional computers. If we focus on machine learning algorithms, used in artificial intelligence projects, we also have to deal with considerable computation times: matrix inversions, gradient descent, back-propagation…
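Gradient descent, one of the workhorses just mentioned, is itself an iterative (and potentially very long) loop. A minimal sketch on a toy one-dimensional cost function f(x) = (x − 3)², whose minimum sits at x = 3:

```python
def gradient_descent(grad, x0: float, lr: float = 0.1, steps: int = 200) -> float:
    """Repeatedly step against the gradient. Each step costs one gradient
    evaluation -- the part that becomes expensive on real, high-dimensional data."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Toy cost f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # converges close to 3
```

On a deep network, the same loop runs over millions of parameters and millions of examples per step, which is where the computation time blows up.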

Thus, quantum computers might be able to run some extremely time-consuming algorithms – ones that would take several decades classically – in a sensible amount of time, from a few hours to a few days. We will see in another article the general idea of how a quantum algorithm operates.

Technical digression

Data Scientists know the machine learning algorithms that could, in theory, benefit from quantum acceleration. Among the most famous are factor analyses (e.g. PCA), most clustering methods (e.g. K-Means), support vector machines (SVM), Boltzmann machines (RBM), Bayesian inference, reinforcement learning, optimization algorithms (global extremum search), etc. More generally, these are often algorithms whose resolution involves linear operations (or linear cost functions), insofar as quantum computations are themselves linear. Neural networks are therefore harder to accelerate, because activation functions are usually non-linear. Nevertheless, quantum accelerators could be used for unsupervised pre-training steps based on Boltzmann machines, for instance.
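The “linear operations” in question are, concretely, things like solving A·x = b. The sketch below counts the multiplications performed by a plain Gaussian elimination, showing the roughly cubic growth that quantum linear-algebra algorithms (such as HHL, under strong caveats on data loading and readout) aim to beat:

```python
def elimination_mults(n: int) -> int:
    """Count the multiplications in Gaussian elimination on an n x n system."""
    count = 0
    for pivot in range(n):
        for row in range(pivot + 1, n):
            # eliminating one entry touches the rest of the row plus the RHS
            count += (n - pivot) + 1
    return count

for n in (10, 20, 40):
    print(n, elimination_mults(n))
# doubling n multiplies the work by roughly 8: cubic growth
```

This cubic wall is exactly why matrix inversion keeps appearing in the list of quantum-amenable workloads above.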

Democratizing quantum computing would therefore allow some theoretical algorithms to reach the real world, just as GPUs did not so long ago for Deep Learning.

So, should we move to the quantum processor?

No rush. In fact, there are three brakes on the adoption of quantum computing for your AI projects.

First, as explained above, not all machine learning algorithms benefit from quantum acceleration. Second, the operation of quantum computers is – today – far from convincing: algorithm outputs contain many errors due to the high sensitivity of the underlying physical phenomena (we will illustrate them in another article). Finally, the current cost of quantum prototypes is… prohibitive. If you are a compulsive buyer, plan on a few tens of millions of euros for a quantum computer, knowing that it is not even certain it will work as expected.

Nevertheless, although major players such as IBM, Atos or Microsoft are only selling (for the moment) quantum “simulators” – “classical” supercomputers capable of quickly evaluating all the possible solutions of a quantum system – it may be the right time for data teams to start testing quantum algorithms, or at least to try a quantum approach with more traditional algorithms, on targeted use cases such as fraud detection, genomics or the forecasting of heat-sensitive phenomena. This early learning could become a strong competitive advantage when the first real quantum processors come to market in the medium term, probably within a decade.

Quantum computing has a lot to offer in terms of algorithmic speed and AI performance. Its adoption will mainly be a matter of economics and politics. But rest assured that if it brings a decisive advantage in your industry and your use cases (those with very high algorithmic complexity), then it would be wise to start the upskilling and change management of your data teams, bringing R&D and business teams together around a common purpose.
