I mentioned in my last Rainier post that I was going to write next about how to predict the runtime performance of the quantum adiabatic algorithms we are trying to enable in the hardware. However, there is one other topic that I need to cover before getting to that.
Recall that the 128-qubit Rainier chip is designed to embody a quantum algorithm whose objective is to find the global minimum of

$E(s_1, \ldots, s_{128}) = \sum_i h_i s_i + \sum_{(i,j) \in E} J_{ij} s_i s_j$

where $s_i \in \{+1, -1\}$ and $E$ is the edge set I defined in post #1 on this topic.
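To make the objective concrete, here is a small sketch in Python. The dictionaries `h` and `J` are my own stand-ins for the on-chip parameters, and the brute-force search is of course only feasible for a handful of spins, not the full 128-qubit chip:

```python
import itertools

def ising_energy(h, J, s):
    """Evaluate E(s) = sum_i h_i * s_i + sum_{(i,j) in E} J_ij * s_i * s_j.

    h: dict mapping qubit index -> field h_i
    J: dict mapping edge (i, j) -> coupling J_ij (the keys form the edge set E)
    s: dict mapping qubit index -> spin value in {-1, +1}
    """
    energy = sum(h[i] * s[i] for i in h)
    energy += sum(J[i, j] * s[i] * s[j] for (i, j) in J)
    return energy

def brute_force_minimum(h, J):
    """Exhaustively find the global minimum over all 2^n spin assignments."""
    qubits = sorted(h)
    best_energy, best_s = None, None
    for bits in itertools.product([-1, +1], repeat=len(qubits)):
        s = dict(zip(qubits, bits))
        e = ising_energy(h, J, s)
        if best_energy is None or e < best_energy:
            best_energy, best_s = e, s
    return best_energy, best_s
```

For example, with `h = {0: 1.0, 1: -1.0}` and `J = {(0, 1): -1.0}`, the brute-force search returns a minimum energy of -1.0.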
The parameters $h_i$ and $J_{ij}$ arise from currents that flow on the chip, which are set by a user. For a variety of reasons, including most importantly the presence of low frequency magnetic flux noise in the chip itself, the actual values of $h_i$ and $J_{ij}$ that are implemented in the chip are not the desired values, but the desired values plus some offset. The typical spread between actual and desired values defines what we refer to as the precision of the setting of these parameters.
More specifically, the $h$ and $J$ values are physically restricted to some interval between maximum positive and negative values. In our implementation the maximum range is on the order of +3.5 GHz to -3.5 GHz for both $h$ and $J$. We can normalize these intervals to the range $[-1, +1]$. We then ask the following question: if we try to set a parameter to zero, what does the distribution of actual parameter values look like? Let's assume that it is a zero mean gaussian with variance $\sigma^2$. Then we know that if we try to set that parameter to a value smaller than roughly $\sigma$, the system does not know for certain whether we're asking for zero or the small number.
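A quick way to see this "smearing" in action is to simulate it. This is just a sketch under assumed numbers: the actual programmed value is modeled as the desired value plus zero-mean Gaussian noise on the normalized $[-1, +1]$ scale, and the value of `SIGMA` here is my own illustrative choice, not the chip's real spread:

```python
import random

SIGMA = 0.02  # assumed noise spread on the normalized scale (illustrative only)

def program_parameter(desired, sigma=SIGMA):
    """Return the value the hardware actually realizes for a desired setting."""
    return desired + random.gauss(0.0, sigma)

def misread_rate(desired_a, desired_b, trials=100_000, sigma=SIGMA):
    """Fraction of attempts at desired_a (with desired_a < desired_b) that land
    closer to desired_b: a rough measure of how distinguishable the two are."""
    midpoint = (desired_a + desired_b) / 2.0
    hits = sum(1 for _ in range(trials)
               if program_parameter(desired_a, sigma) > midpoint)
    return hits / trials
```

With these assumed numbers, attempts at 0 are essentially never confused with attempts at 1, but trying to set 0 versus 0.01 is hopeless: the noise swamps the difference, and a large fraction of the 0-attempts land closer to 0.01.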
In order to get around this "smearing" problem, we only allow parameters that are (almost always) distinguishable. This means that in the whole allowed parameter value interval we must define a discrete set of allowed values. We require that adjacent allowed values are separated by much more than $\sigma$, and define $B$ to be the number of bits of precision allowed by hardware. Here is a picture showing how this works for $B = 2$ bits of precision, for allowed values of $\{-1, 0, +1\}$. The y-axis here is the probability of the value of the parameter being the indicated value for a large number of attempts to set to the three allowed values.
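As a sketch (assuming, as the examples here suggest, that the allowed values are evenly spaced across $[-1, +1]$ and always include zero), the allowed set for $B$ bits of precision can be generated like this:

```python
def allowed_values(bits):
    """Allowed parameter settings for `bits` bits of precision: 2**bits - 1
    evenly spaced values spanning [-1, +1], always including 0.

    Assumes bits >= 2 (evenly spaced grid, as suggested by the examples)."""
    n = 2 ** (bits - 1) - 1  # largest integer step on each side of zero
    return [k / n for k in range(-n, n + 1)]
```

So `allowed_values(2)` gives the three values `[-1.0, 0.0, 1.0]` in the picture, and `allowed_values(4)` gives 15 evenly spaced values from -1 to +1.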
In this case the spreads in trying to set a parameter to zero and to +1 do not appreciably overlap. In practice we have to be careful about how to define the required spread, but this is the basic idea.
If we abstract away from the physical origin of these offsets and think about how to model this finite precision, the first-order approximation is to only allow the parameters $h_i$ and $J_{ij}$ in the objective function to take on the discrete values $\{k / (2^{B-1} - 1) : k = -(2^{B-1} - 1), \ldots, +(2^{B-1} - 1)\}$. For example, if we specify $B = 4$ bits of precision in hardware, this means that the $h$ and $J$ values in the objective function can only take on values in the 15-element set $\{-1, -6/7, \ldots, -1/7, 0, +1/7, \ldots, +6/7, +1\}$.
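Modeling this finite precision then amounts to snapping each desired parameter value to the nearest allowed discrete setting. A minimal sketch, again assuming the evenly spaced grid described above:

```python
def quantize(value, bits=4):
    """Snap a desired parameter value in [-1, +1] to the nearest of the
    2**bits - 1 evenly spaced allowed settings for `bits` bits of precision."""
    n = 2 ** (bits - 1) - 1  # e.g. n = 7 for 4 bits: allowed values are k/7
    k = round(value * n)     # nearest integer multiple of 1/n
    return k / n
```

For instance, with 4 bits of precision a desired value of 0.1 gets rounded to 1/7 (about 0.143), the nearest member of the 15-element set.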