I mentioned in my last Rainier post that I was going to write next about how to predict runtime performance of the quantum adiabatic algorithms we are trying to enable in the hardware. However there is one other topic that I need to cover before getting to that.

Recall that the 128-qubit Rainier chip is designed to embody a quantum algorithm whose objective is to find the global minimum of

E(s_1, ..., s_N) = \sum_j h_j s_j + \sum_{(i,j) \in E} J_{ij} s_i s_j

where s_j \in \{+1, -1\} and E is the edge set I defined in post #1 on this topic.

The parameters h_j and J_{ij} arise from currents that flow on the chip, which are set by a user. For a variety of reasons, including most importantly the presence of low frequency magnetic flux noise in the chip itself, the actual values of h_j and J_{ij} that are implemented in the chip are not the desired values, but the desired values plus some offset. The typical spread between actual and desired values defines what we refer to as the precision of the setting of these parameters.

More specifically, the h_j and J_{ij} values are physically restricted to some interval between maximum positive and negative values. In our implementation the maximum range is on the order of +3.5 GHz to -3.5 GHz for both h_j and J_{ij}. We can normalize these intervals to the range [-1, +1]. We then ask the following question: if we try to set a parameter to zero, what does the distribution of actual parameter values look like? Let's assume that it is a zero mean gaussian with variance \sigma^2. Then we know that if we try to set that parameter to a value less than about \sigma, the system does not know for certain whether we're asking for zero or the small number.
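As a rough illustration (this is not D-Wave's calibration code, and the spread sigma below is an invented number), we can simulate the smearing by adding a zero-mean Gaussian offset to every programming attempt:

```python
import random
import statistics

def set_parameter(target, sigma):
    """Simulate one attempt to program a normalized parameter:
    the chip realizes the desired value plus a zero-mean
    Gaussian offset with standard deviation sigma."""
    return random.gauss(target, sigma)

random.seed(0)
sigma = 0.03  # hypothetical spread, for illustration only

# Try to program "zero" and a nearby small value many times.
zeros = [set_parameter(0.0, sigma) for _ in range(100_000)]
smalls = [set_parameter(0.02, sigma) for _ in range(100_000)]

# Because the two targets differ by less than sigma, the empirical
# distributions overlap heavily: the hardware cannot reliably tell
# a request for 0 apart from a request for 0.02.
print(round(statistics.mean(zeros), 3), round(statistics.stdev(zeros), 3))
```

The point of the simulation is just that the two histograms sit almost on top of each other whenever the requested values are closer together than the spread.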

In order to get around this "smearing" problem, we only allow parameters that are (almost always) distinguishable. This means that in the whole allowed parameter value interval [-1, +1] we must define a set of allowed values. We require that neighboring allowed values be separated by much more than \sigma, and define B to be the number of bits of precision allowed by hardware. Here is a picture showing how this works for 2 bits of precision, for allowed values of -1, 0 and +1. The y-axis here is the probability of the value of the parameter being the indicated value for a large number of attempts to set to the three allowed values.

In this case the spreads in trying to set a parameter value of zero and +1 do not appreciably overlap. In practice we have to be careful about how we define the required spread, but this is the basic idea.

If we abstract away from the physical origin of these offsets and think about how to model this finite precision, the first order approximation is to only allow the parameters h_j and J_{ij} in the objective function to take on the discrete values k/(2^{B-1} - 1) for integer k running from -(2^{B-1} - 1) to +(2^{B-1} - 1). For example, if we specify 4 bits of precision in hardware, this means that the h_j and J_{ij} values in the objective function can only take on values in the 15-element set \{-1, -6/7, ..., -1/7, 0, +1/7, ..., +6/7, +1\}.
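A minimal sketch of this discretization (the function names here are mine, not part of any D-Wave API): for B bits of precision the allowed settings are the 2^B - 1 evenly spaced points spanning [-1, +1], and a desired parameter value is snapped to the nearest one.

```python
def allowed_values(bits):
    """The 2**bits - 1 evenly spaced allowed settings in [-1, +1].
    For bits=4 this is the 15-element set {-1, -6/7, ..., +6/7, +1};
    for bits=2 it is {-1, 0, +1}. Assumes bits >= 2."""
    m = 2 ** (bits - 1) - 1  # largest integer numerator
    return [k / m for k in range(-m, m + 1)]

def quantize(x, bits):
    """Round a desired parameter value in [-1, +1] to the nearest allowed value."""
    return min(allowed_values(bits), key=lambda v: abs(v - x))

print(len(allowed_values(4)))  # 15 allowed values for 4 bits
print(quantize(0.4, 4))        # snaps to the nearest multiple of 1/7
```

Modeling finite precision then amounts to replacing each desired h_j and J_{ij} by `quantize(value, B)` before asking what the algorithm does with the resulting objective function.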


“In our implementation the maximum range is on the order of +3.5 GHz to -3.5 GHz for both h_j and J_{ij}.”

I understand 3.5 GHz, but I don’t understand the leading + and -. How can you have negative cycles per second?

I understand vaguely that current flow can be related to frequency, but I’m missing the connection here.

Hi Charles,

The reported range is actually an energy range, not a frequency range. Think of these values as static settings. To convert to what the units really are you have to multiply by Planck's constant h to get something in Joules. I have a habit of thinking in units where h=1 and k_B (Boltzmann constant) =1 and reporting everything in units of GHz. In these units 20.8 GHz = 1 Kelvin. The reason there is a "negative" range for h_j is that it arises from the application of a magnetic field whose direction at the jth variable determines the sign. Think of a bar magnet with the external field applied one way or rotated 180 degrees to this being + or -. The J_{ij} can switch sign based on the setting of the coupling device sitting between variables i and j, which can energetically prefer alignment or anti-alignment, giving again a + and -.
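For anyone who wants to check the 20.8 GHz = 1 Kelvin figure, here is a quick computation using the CODATA values of Planck's constant and Boltzmann's constant (the frequency-to-temperature conversion factor k_B/h comes out to about 20.8 GHz per kelvin):

```python
h = 6.62607015e-34   # Planck's constant, J*s
k_B = 1.380649e-23   # Boltzmann's constant, J/K

# Express the thermal energy of 1 kelvin as an ordinary frequency:
# E = k_B * T = h * f, so f = k_B * T / h.
ghz_per_kelvin = k_B / h / 1e9
print(round(ghz_per_kelvin, 1))  # 20.8
```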

OK, my quantum-fu is weak. I'm going to crawl back into my cave and do some more reading. But 20.8 GHz = 1 Kelvin. That's good to know.