# Rainier V: Instance Classes

Several algorithms have been discovered that cannot be run on classical computing systems. These quantum algorithms require operations that are allowed by quantum mechanics but not by classical physics. Running any one of these algorithms therefore requires hardware capable of supporting all of the operations that particular algorithm demands.

One important class of quantum algorithms is the adiabatic quantum algorithms. Initially discovered in 2000 by a group at MIT, these algorithms attack formally intractable computational problems in a novel way. They are of intense interest for two main reasons. The first is that special-purpose hardware built specifically to run them is by far the most advanced enablement of quantum computation in the world today. The second is that early analysis of these algorithms shows exponential speed-ups over classical approaches on certain typical-case NP-hard problems. If the polynomial scaling seen for small problems turns out to be the correct asymptotic scaling, this would be of extreme practical importance, as NP-hard optimization problems are ubiquitous in science and business.

Unfortunately, the asymptotic run time for the adiabatic quantum algorithms is not known and appears surprisingly difficult to calculate. Therefore there is controversy about how well this class of algorithms is expected to perform.

In this post I will discuss one reason why it’s so difficult to generically predict the performance of adiabatic quantum algorithms. This reason is usually called instance dependence.

Recall that a problem of the sort we are interested in is specified by a vector $\vec{h}$ and a matrix $\hat{J}$, where the elements of both are selected from a finite set whose size is set by the precision allowed in hardware (see here for an introduction to this concept). For concreteness, let's say we have 4 bits of precision in hardware. Then the elements of $\vec{h}$ and $\hat{J}$ must be selected from the 15-element set $\{-7, -6, \ldots, 0, \ldots, +6, +7\}$. In addition, the allowed non-zero elements of $\hat{J}$ are fixed by the hardware graph introduced in post 1 of this series.
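As a quick sketch (assuming the usual signed-integer convention, in which $b$ bits of precision give the symmetric value range $-(2^{b-1}-1)$ through $+(2^{b-1}-1)$, which matches the 4-bit case above), the allowed set can be built as:

```python
# Allowed parameter values for `bits` bits of precision: a symmetric
# integer range whose largest magnitude is 2**(bits - 1) - 1.
bits = 4
max_mag = 2 ** (bits - 1) - 1          # 7 for 4 bits of precision
allowed = list(range(-max_mag, max_mag + 1))
print(len(allowed))                    # 15 allowed values
```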

So the 128-qubit Rainier chip, which has 352 couplers in addition to its 128 qubits, carries $128 + 352 = 480$ adjustable parameters, each of which can take on one of 15 settings. This gives $15^{480}$ possible settings of the values of $\vec{h}$ and $\hat{J}$, each corresponding to a problem we can ask the hardware to solve. Of course many of these are the same problem, but even after eliminating these symmetries the resulting number of possible instances is enormous.
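Since $15^{480}$ is too large to picture directly, a two-line check of its size (using the qubit and coupler counts above) gives a feel for the scale:

```python
# Count of raw parameter settings for a chip with 128 qubit biases (h)
# and 352 coupler strengths (J), each taking one of 15 allowed values.
n_params = 128 + 352            # 480 adjustable parameters in total
n_settings = 15 ** n_params     # total raw settings of h and J
print(len(str(n_settings)))     # 565 decimal digits
```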

We know that not all of these problems take the same time to solve. Some of them are trivially easy, and some of them are exceptionally hard. So how do you answer the question of how an algorithm performs on the vast numbers of problems the hardware can accept?

One way to make progress is to restrict the type of problem you will consider by choosing a prescription for selecting the $\vec{h}$ and $\hat{J}$ values that makes all of the problems generated by that prescription somehow "similar". We call this problem-generating prescription an instance class generator. For example, a valid instance class generator would be the following prescription: assign to every element of $\vec{h}$ and $\hat{J}$ a value drawn from the allowed set with equal probability. We call the set of instances produced by an instance class generator an instance class.
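A minimal sketch of this uniform instance class generator might look like the following; the function name and the toy 4-qubit edge list are my own inventions, and a real run would use the 128 qubits and 352 edges of the Rainier hardware graph:

```python
import random

ALLOWED = list(range(-7, 8))   # the 15 allowed values for 4 bits of precision

def generate_instance(n_qubits, edges, rng=random):
    """Uniform instance class generator: each qubit bias h_i and each
    coupler strength J_ij on an allowed hardware edge is drawn uniformly
    at random from ALLOWED."""
    h = [rng.choice(ALLOWED) for _ in range(n_qubits)]
    J = {edge: rng.choice(ALLOWED) for edge in edges}
    return h, J

# Toy example on a hypothetical 4-qubit chain.
h, J = generate_instance(4, [(0, 1), (1, 2), (2, 3)])
```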

Defining instance classes allows much more specificity in answering the question "how well does your algorithm perform?". This is because if the problems within an instance class are sufficiently similar, the median run time of the algorithm over the instances in that class is a meaningful measure of how you'd expect the algorithm to perform "in real life" on applications generating instances of that type.

The instance class that I am going to focus on in the next post is similar to the one I mentioned above, except that the probabilities of assigning the 15 allowed values are Gaussian: the probability of assigning zero is highest, tailing off in Gaussian fashion for the larger-magnitude values. The variance of the Gaussian I will use is set such that the probability of getting the largest-magnitude values $+7$ and $-7$ is $1\%$. The instance class thus generated is a type of random spin glass on the hardware graph, which is hard (in a way I'll define more precisely in the next post) for classical solvers.
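The Gaussian-weighted prescription can be sketched as follows. This is my own reading, not necessarily the exact recipe the post has in mind: I interpret "the probability of getting the largest magnitude elements $+7$ and $-7$ is $1\%$" as $1\%$ for each extreme value, and I find the matching variance numerically, which works because the probability of drawing $+7$ grows monotonically with the width of the Gaussian:

```python
import math
import random

VALUES = list(range(-7, 8))   # the 15 allowed values for 4 bits of precision

def probs(sigma):
    """Discrete Gaussian weights over VALUES, normalized to sum to 1."""
    w = [math.exp(-v * v / (2.0 * sigma * sigma)) for v in VALUES]
    z = sum(w)
    return [x / z for x in w]

def find_sigma(target=0.01, lo=0.1, hi=20.0, iters=60):
    """Bisect for the sigma at which P(+7) equals `target`.
    P(+7) increases monotonically with sigma (the distribution flattens
    toward uniform as sigma grows), so bisection is safe."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if probs(mid)[-1] < target:   # last entry corresponds to +7
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = find_sigma()
weights = probs(sigma)

def draw(rng=random):
    """Draw one h or J value from the Gaussian-weighted instance class."""
    return rng.choices(VALUES, weights=weights, k=1)[0]
```

Drawing every $\vec{h}$ and $\hat{J}$ element with `draw()` instead of the uniform choice used earlier yields the spin-glass instance class described above.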