# Finding the Most Probable Explanation using a quantum computer

The optimization problem that our hardware natively solves is known in physics as an Ising model. Defining a set of $\pm 1$ variables $s_j$, the hardware returns a set of these variables $\{s_j\}^*$ that, if our algorithm is working, minimizes the cost function

$E(s_1, \ldots, s_N) = \sum_{j=1}^N h_j s_j + \sum_{i<j} J_{ij} s_i s_j$

where $h_j$ and $J_{ij}$ are real numbers originating from the problem instance we want to solve.
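To make the cost function concrete, here is a minimal Python sketch (the instance values are invented for illustration) that evaluates $E$ for a spin configuration and finds the minimizing configuration by brute force. Exhaustive search is only feasible for small $N$, which is exactly why a hardware heuristic is interesting.

```python
import itertools

def ising_energy(s, h, J):
    """E(s) = sum_j h_j s_j + sum_{i<j} J_ij s_i s_j, with each s_j = +/-1."""
    energy = sum(h[j] * s[j] for j in range(len(s)))
    energy += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return energy

def brute_force_ground_state(h, J):
    """Exhaustive minimization over all 2^N spin configurations."""
    return min(itertools.product([-1, 1], repeat=len(h)),
               key=lambda s: ising_energy(s, h, J))

# Toy instance (values invented for illustration).
h = [0.5, -1.0, 0.2]
J = {(0, 1): -0.7, (1, 2): 0.4}
s_star = brute_force_ground_state(h, J)
```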

Note that the hardware is inherently stochastic. Regardless of how we run the quantum algorithm, we are never guaranteed to get the global minimum of $E$. What we get instead are samples from some probability distribution that we, as algorithm designers, try to engineer so that the samples we draw are very likely to be “good” in the sense of providing an approximate solution to the problem.
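The standard way to use such a sampler is to draw many samples and keep the lowest-energy one. Here is a sketch of that pattern, with a classical Metropolis sampler standing in for the hardware; the instance and all parameters (`sweeps`, `beta`) are invented for illustration.

```python
import math
import random

def ising_energy(s, h, J):
    """E(s) = sum_j h_j s_j + sum_{i<j} J_ij s_i s_j."""
    return (sum(h[j] * s[j] for j in range(len(s)))
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def draw_sample(h, J, sweeps=200, beta=2.0):
    """One Metropolis run: a classical stand-in for one stochastic hardware sample."""
    n = len(h)
    s = [random.choice([-1, 1]) for _ in range(n)]
    for _ in range(sweeps * n):
        j = random.randrange(n)
        # Local field on spin j from h and every coupling touching j.
        local = h[j]
        for (a, b), Jab in J.items():
            if a == j:
                local += Jab * s[b]
            elif b == j:
                local += Jab * s[a]
        dE = -2.0 * s[j] * local  # energy change if spin j is flipped
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s[j] = -s[j]
    return s

def best_of(k, h, J):
    """Keep the lowest-energy configuration among k independent samples."""
    samples = [draw_sample(h, J) for _ in range(k)]
    return min(samples, key=lambda s: ising_energy(s, h, J))

h = [0.5, -1.0, 0.2]            # toy instance, values invented
J = {(0, 1): -0.7, (1, 2): 0.4}
best = best_of(20, h, J)
```

Nothing here guarantees the global minimum; taking the best of many samples simply makes a good answer very likely, which is the same bargain the hardware offers.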

Alright, so we have a heuristic solver for the type of Ising model shown above. What can we do with it that might be useful? One idea is to use the hardware to solve what are known as Most Probable Explanation (MPE) problems. MPE is the problem you encounter when you have a Bayesian network plus some evidence, and you want to compute which values of the variables in your network are most probable given the evidence you’ve got.

Before I show how to do this, I should mention that the Ising model I just wrote down goes by other names. One of these is Weighted MAX-SAT. In the Weighted MAX-SAT problem, we are given a set of clauses with associated weights, and we’re asked to find the variable assignment that maximizes the sum of the weights of the satisfied clauses. While the two look a little different at first blush, they are exactly the same problem under the hood.

The ideas I’ll be discussing here are mainly from “Using Weighted MAX-SAT Engines to solve MPE” by James D. Park. A great book about Bayesian stuff (and more generally about science and human nature that every human should read) is E. T. Jaynes’ Probability Theory: The Logic of Science.

What’s a Bayesian network? Wikipedia says:

A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.

BOORRRINGGG!!!!!! … see previous post. Bayesian networks aren’t really boring; they are exciting! Here’s how I would describe them:

A Bayesian network is a framework that allows a computer to make inferences and decisions based on incomplete information.

That’s a big part of what humans do! Think about everything you’ve done today. You have reasoned about the world around you and used that reasoning to make decisions. For example, you might have heard raindrops on your roof, reasoned that it was raining, and grabbed your umbrella before heading out the door. However, it is possible it wasn’t raining. Maybe it was the sound of a horde of angry rabid squirrels running across your roof, preparing to attack you as soon as you open the door. A Bayesian network is a way to quantify all of these possibilities.

When we provide evidence to the network, we can try to calculate the Most Probable Explanation consistent with that evidence. Your own brain-based Bayesian network solved the MPE problem for you when you heard what you thought were raindrops. It computed the MPE for those sounds: rain. You then used that info and grabbed your umbrella. If you think about it, pretty much all of your actions are conditioned on MPE decisions that can be formalized in a Bayesian way. With enough computing power, 99.999% (maybe more 🙂) of what you do can be modeled using Bayesian methods.
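The rain-versus-squirrels story can be written down as a tiny Bayesian network and its MPE computed by enumeration. All the probabilities below are invented for illustration; real networks are far too large for this brute-force approach, which is where the hardware comes in.

```python
import itertools

# A toy two-cause network: Rain and Squirrels can each cause a Sound on
# the roof.  All probabilities are invented for illustration.
p_rain = {True: 0.2, False: 0.8}
p_squirrels = {True: 0.01, False: 0.99}
p_sound_given = {  # P(Sound=True | Rain, Squirrels)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def mpe(sound=True):
    """Most Probable Explanation: the hidden assignment (rain, squirrels)
    maximizing the joint probability with the observed evidence."""
    def joint(rs):
        p = p_sound_given[rs] if sound else 1.0 - p_sound_given[rs]
        return p_rain[rs[0]] * p_squirrels[rs[1]] * p
    rain, squirrels = max(itertools.product([True, False], repeat=2), key=joint)
    return {"rain": rain, "squirrels": squirrels}
```

Under these made-up numbers, hearing the sound makes rain alone the most probable explanation, mirroring the umbrella story.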

We can solve the MPE problem directly on a quantum computer: first map MPE to Weighted MAX-SAT as described in the J. D. Park article referenced above, and then solve the Weighted MAX-SAT problem in hardware using an adiabatic quantum optimization algorithm.
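As a preview, here is a sketch of the heart of that reduction on a toy two-cause network (all probabilities invented). Each CPT row becomes a weighted clause with weight $-\log P$ that is falsified exactly when the row's assignment holds; I skip the clause syntax here and simply charge the weight whenever a row matches, so minimizing total cost maximizes the product of CPT entries, i.e. the joint probability. Park's actual encoding is more careful than this sketch.

```python
import itertools
import math

# Toy network, probabilities invented: Rain and Squirrels cause a Sound.
p_rain = {True: 0.2, False: 0.8}
p_squirrels = {True: 0.01, False: 0.99}
p_sound_given = {(True, True): 0.99, (True, False): 0.9,
                 (False, True): 0.8, (False, False): 0.05}

def weighted_rows(sound=True):
    """One weighted 'clause' per CPT row, with the evidence Sound fixed.
    An assignment pays -log P for every row it matches."""
    for r in (True, False):
        yield {"rain": r}, -math.log(p_rain[r])
    for s in (True, False):
        yield {"squirrels": s}, -math.log(p_squirrels[s])
    for r, s in itertools.product((True, False), repeat=2):
        p = p_sound_given[(r, s)] if sound else 1.0 - p_sound_given[(r, s)]
        yield {"rain": r, "squirrels": s}, -math.log(p)

def cost(assignment):
    """Total weight charged: -log of the joint probability of the assignment."""
    return sum(w for pattern, w in weighted_rows()
               if all(assignment[k] == v for k, v in pattern.items()))

# Minimizing the summed -log weights recovers the MPE.
mpe = min(({"rain": r, "squirrels": s}
           for r, s in itertools.product((True, False), repeat=2)), key=cost)
```

The minimization over `cost` is exactly the kind of weighted optimization that the Ising hardware is built to sample from.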

In the next post I’ll reduce this to practice with a specific example.