In my last post, a reader asked an excellent question that I thought I’d provide my perspective on. This particular issue is complex, so bear with me. Here’s the question:
If quantum computing is eventually bound to completely transform the computing business landscape, how come companies with deep pockets like Intel and IBM are not planning any version of a quantum computer any time soon, but still focus only on traditional computing? If your work is worthwhile, why can a company such as Intel, which can throw US$4 billion into a fab, not spend a hundredth of this money to finance D-Wave (editor’s note: or any effort, including an internal one)?
There are at least three good reasons why a big company would decide not to invest in quantum computing as it’s currently perceived. The first is an economic argument based on the time value of money. I have a great article on this by HP’s Stan Williams that anyone interested in this question should read. Here it is:
The punchline is that any opportunity sufficiently far away with sufficient risk isn’t a good investment regardless of the size of the opportunity.
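To make the time-value-of-money argument concrete, here is a minimal sketch of risk-adjusted discounting. All of the numbers (the payoff, the probability of success, the discount rate) are hypothetical figures chosen for illustration, not values taken from Williams’s article:

```python
def risk_adjusted_npv(payoff, p_success, years, discount_rate):
    """Present value of a risky payoff `years` in the future:
    expected payoff discounted back to today."""
    return payoff * p_success / (1 + discount_rate) ** years

# Hypothetical: a $10B payoff with a 10% chance of success,
# discounted at 15% (a typical venture-style hurdle rate).
far = risk_adjusted_npv(10e9, 0.10, years=20, discount_rate=0.15)
near = risk_adjusted_npv(10e9, 0.10, years=4, discount_rate=0.15)

print(f"20 years out: ${far / 1e6:.0f}M present value")
print(f" 4 years out: ${near / 1e6:.0f}M present value")
```

Under these assumed numbers, the identical opportunity is worth roughly an order of magnitude more at 4 years out than at 20: discounting compounds, so pushing the payoff far enough into the future (or piling on enough risk) drives its present value toward zero no matter how large the headline number is.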
The second good reason is covered in Clayton Christensen’s The Innovator’s Dilemma, another must-read for understanding the dynamics of technology strategy in big companies. The main point is that businesses acting rationally produce products their customers ask for; it’s not rational to produce something that no one wants. Major disruptive technologies are sometimes things that no one is asking for (like QCs). This is related to the first reason. People working on QCs seem to think that companies like IBM will try to build these just because they are cool. That’s not the way this type of thing works. People invest in technology development because they believe people will want to buy it. What argument would someone inside IBM use to justify a long-term, high-risk, high-expense internal QC effort? If there were clear large-market applications, where you could definitely say “if we built this, we would own a new $10B/year market with 100% margins,” then maybe you could justify the investment. But currently there is an extreme lack of clarity “out there” on the issue of what you’d do with a QC if you had one.
The third good reason was pointed out by a respondent to the original question. If you’re an IBM, HP, etc., it makes a lot of sense to argue as follows: Look, we don’t know at least two things. One, we don’t know what the roadmap for QC development looks like. Some say QCs will never be built. Most people say machines competitive with conventional approaches are 20 years away. Some people say 5. Some (i.e., me) point out that QCs (NMR QCs) are already being sold at high margins. Two, we don’t know what the market looks like. What are the applications? Who are the customers? What’s the size of the opportunity over time? So, given both enormous technical and market risk, let’s wait and see what happens. If a competitor or a start-up can actually demonstrate (1) sufficient reduction of technical risk, (2) a clear path to real scalability (note: I don’t mean scalability in the sense it’s sometimes used in research articles, i.e., scalable in principle; I mean practically scalable), and (3) clear market information, most likely in the form of paying customers and a high-margin business model with high growth (~40%+ year-over-year revenue growth), then the big players figure they can always get in on the action by partnering, acquiring, licensing, etc. It is rational to pay more for less risk. If I were the CEO of IBM, given the information currently publicly available, that’s certainly the strategy I would use.
Given these types of arguments, you might ask why it makes sense to try to build QCs in a business context at all. In other words, why would a start-up attempt to do this? Here are some reasons:
- There are several well-known case studies where the prevailing wisdom about how long it would take to develop a new technology or achieve a scientific milestone was wrong. Here are two: sequencing the human genome (Celera) and producing synthetic insulin (Genentech). Here’s a Genentech case study very relevant to QC & D-Wave: scott_stern_1.pdf
- If, as in the cases of Celera and Genentech, the prevailing timescale to deployment could be shortened by a factor of 5 through a focused effort, then we’re talking about QCs competitive with conventional approaches in 1–4 years, not 5–20. This drastically changes the conclusions of the time value of money argument (reason #1 above).
- Most of the people working in quantum information science aren’t even remotely interested in applications. There is a lot of interest in algorithms, and people sometimes use the two terms interchangeably, but there is a major fundamental difference between them. This lack of interest in applications is a green-field opportunity for a start-up. If there are big-market applications for QCs in practice, the fact that there simply aren’t many people looking to connect the technology to users means that a small, focused effort stands a good chance of identifying them ahead of competitors. Therefore reason #2 above, the innovator’s dilemma, is actually an extremely valuable advantage for a start-up: we can prospect without worrying about competition from incumbents.
- Building QCs, especially in a business context, is extremely hard. The only way any effort has a chance of succeeding is by having a lot of A+ people in a wide range of roles. A first mover has a big advantage here. If someone wants to be involved in a really serious, long-term, well-financed effort with the kind of infrastructure required to do this very hard thing, there simply aren’t a lot of options for where to go. As an example, would I personally have joined Celera or Genentech in their early days if I had the chance? You bet I would. The same dynamic is at work now in QC development.