# Clash of the Titans: thermodynamics vs. NP-hardness

Modern physics contains at its core the assumption that there are simple, knowable rules that govern the behavior of the world around us. There are many of these, some believed to be more fundamental than others. Many physicists believe that the laws of thermodynamics, and in particular the second law of thermodynamics, are more likely to remain true as we discover more and more about our universe than any other set of laws. Quantum mechanics or general relativity could be shown to be facets of some grander underlying theory, but thermodynamics will likely remain as a framework inside which all other physical theories must fit.

The second law of thermodynamics has many formulations. One way to think about the law is that when you put two different physical systems in contact, they will attempt to thermally equilibrate with each other — heat flows from the hotter system to the colder one, until the whole system reaches some equilibrium temperature in the middle. This simple common sense observation has very deep roots. If it were to fail, much of our understanding of how the universe works would be shown to be false. As just one example, perpetual motion machines would become feasible.

## The Ising model

There are an enormous number of physical systems we might want to model, from individual electrons to collisions of galaxies. The most useful models of nature are those that can be used to represent a large number of completely different systems. Understanding how such models work then leads to understanding all of the physical systems the model can be used to represent.

One of the most widely used models in physics is called the Ising model. It was initially proposed in the mid 1920s by Ernst Ising and Wilhelm Lenz as a way to understand how magnetic materials work. The approach modeled a magnetic material as a collection of molecules, each of which has a spin that can align or anti-align with an applied magnetic field, and which interact with each other through pairwise terms. Let $s_i\in \{-1,+1\}$ represent the spin of the $i$th molecule, $V$ represent the set of all molecules, and $|V|$ represent the number of molecules. The energy of a collection of spins $\mathbf{s}=[s_1,\cdots, s_{|V|}]$ is

$E(\mathbf{s}) = \sum_{(i,j) \in \, \text{neigh}} J_{i,j} s_i s_j + \sum_{i\in V} h_i s_i \qquad (1)$

where $h_i$ is the strength of the applied field at molecule $i$ and $J_{i,j}$ sets the interaction strength between neighbouring spins $i$ and $j$. At low temperatures, where the system is strongly biased towards low energy states, a positive $J_{i,j}$ favors anti-aligned neighbouring spins ($s_i s_j = -1$), since the product $J_{i,j} s_i s_j$ is then negative. If $J_{i,j}$ is negative the opposite occurs: $s_i$ and $s_j$ tend to align ($s_i s_j = 1$) so that $J_{i,j} s_i s_j$ is again negative. Though Ising did not realize it at the time, this simple model agrees well with the experimentally measured properties of many magnetic materials.
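To make Eq. (1) concrete, here is a minimal sketch in Python (the function and variable names are my own, not from any particular library) that evaluates the energy of a two-spin system and confirms that a positive $J_{i,j}$ favors anti-aligned spins:

```python
import itertools

def ising_energy(s, J, h):
    """Energy of a spin configuration s (entries +/-1) under Eq. (1)."""
    E = sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return E + sum(hi * s[i] for i, hi in h.items())

# Two neighbouring spins, positive coupling, no applied field.
J = {(0, 1): 1.0}          # positive J favours anti-alignment
h = {0: 0.0, 1: 0.0}

for s in itertools.product([-1, +1], repeat=2):
    print(s, ising_energy(s, J, h))
# The anti-aligned configurations (-1,+1) and (+1,-1) have energy -1;
# the aligned configurations have energy +1.
```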

Over time it was realized that Eq. (1) could be used to model an enormous number of physical systems. Any system consisting of a set of individual elements (modeled by the spins $s_i$) interacting via pairwise interactions (the quadratic terms $s_i s_j$) can be described in this framework. In the period 1969 to 1997, more than 12,000 papers were published using the Ising model to describe systems in fields ranging from artificial intelligence to zoology.

## Thermodynamics of the Ising model

Even in 1920, it was well known that if a system described by Eq. (1) was in thermal equilibrium at temperature $T$ (using units where $\hbar = k_B = 1$), the probability of observing any particular configuration $\mathbf{s}$ follows a Boltzmann distribution, i.e., the probability of $\mathbf{s}$ is proportional to $\exp\bigl(-E(\mathbf{s})/T\bigr)$. At high temperatures, where $T \gg |E(\mathbf{s})|$ for all $\mathbf{s}$, almost all configurations have nearly the same probability. As the temperature is lowered, the configuration $\mathbf{s}$ with the lowest energy becomes the most likely state, and at exactly $T=0$ only that lowest energy state will be observed.

This thermal equilibration process is driven by the second law of thermodynamics. Any system that can be described as an Ising model of the form Eq. (1), if placed in contact with any other system in thermal equilibrium, will attempt to thermalize with this other system. If this thermalization can be achieved, the most likely state of the system in Eq. (1) will be its ground state.
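The equilibration process itself can be mimicked numerically. Below is a minimal sketch (my own illustration, not D-Wave code) of single-spin-flip Metropolis dynamics, a standard algorithm whose stationary distribution is the Boltzmann distribution of Eq. (1); at low temperature it tends to drift towards the ground state, just as a physical system in contact with a cold bath would:

```python
import math
import random

random.seed(0)

J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0}   # ferromagnetic chain of 4 spins
h = {i: 0.0 for i in range(4)}

def energy(s):
    """Energy of a spin configuration s (entries +/-1) under Eq. (1)."""
    E = sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return E + sum(hi * s[i] for i, hi in h.items())

def metropolis(T, steps=2000):
    """Single-spin-flip Metropolis sampling at temperature T."""
    s = [random.choice([-1, +1]) for _ in range(4)]
    for _ in range(steps):
        i = random.randrange(4)
        old = energy(s)
        s[i] = -s[i]                       # propose flipping one spin
        dE = energy(s) - old
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if dE > 0 and random.random() >= math.exp(-dE / T):
            s[i] = -s[i]                   # reject: undo the flip
    return s, energy(s)

print(metropolis(0.2))   # at low T this typically reaches a ground state (E = -3.0)
```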

This leads to a fascinating conundrum, because finding the global ground state of the Ising model is NP-hard. This fact was established in 1982 by Francisco Barahona by constructing Ising equivalents of the logical operators $\neg$, $\wedge$, and $\vee$ of SAT. Treating $s_i=-1$ as false and $s_i=+1$ as true, the truth tables of the logical operators can be encoded as minima of the Ising energy function. From these basic elements an arbitrary SAT formula may be encoded as an Ising model, so that the SAT formula is satisfiable if and only if the lowest energy of the Ising model is 0.
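To see how a truth table becomes an energy minimum, here is a standard AND gadget (a common textbook construction, not necessarily Barahona's exact one), written in 0/1 variables for readability; substituting $x = (s+1)/2$ turns it into Ising $\{h, J\}$ form. Its zero-energy minima are exactly the rows of the truth table $c = a \wedge b$:

```python
import itertools

def and_penalty(a, b, c):
    """Quadratic penalty whose zero-energy minima are exactly c = a AND b
    (a, b, c in {0, 1}); one standard gadget among many possible."""
    return a * b - 2 * (a + b) * c + 3 * c

minima = [(a, b, c)
          for a, b, c in itertools.product([0, 1], repeat=3)
          if and_penalty(a, b, c) == 0]
print(minima)   # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

Chaining such gadgets, one per logical operator, reproduces an arbitrary formula, and the combined energy function reaches 0 exactly when every gadget sits in a satisfying row.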

Therefore it seems as though nature, via its fundamental statistically-driven tendency to thermalize all systems to the same temperature, is applying an enormous amount of computing power attempting to solve a large number of NP-hard problems. Thermalizing any of the physical systems to which Eq. (1) has been applied requires "solving" an NP-hard problem, in the sense that if equilibration can occur the most likely state of the system is its ground state, which encodes the solution to an NP-hard optimization problem.

This is a peculiar observation: perhaps the most fundamental "computational" aspect of our universe, embodied in the second law of thermodynamics, can be largely frustrated by the fact that its desired outcome of thermalizing physical systems may be computationally intractable for a large number of systems. A far future civilization might be able to use this effect to forestall a possible heat death of the universe by building large physical systems described by Eq. (1), which would remain out of thermal equilibrium for extremely long periods of time, "storing" energy for conversion into work only when it was needed.

Such a machine might start out with the $\{h, J\}$ terms configured as a "worst case" problem instance, and whenever you needed to turn on the tap to release some energy, you could morph the $\{h, J\}$ set to make the problem a little easier. The resultant thermodynamic cascade downwards in energy could be harnessed. It would be like turning on a tap, except that instead of water you get useful energy coming out of a computationally bounded storage device.

This entry was posted in D-Wave Science & Technology by Geordie.