# Algorithms vs. hardware: The throwdown

Folks at an artificial intelligence institute recently asked me a very interesting question: have hardware advances (Moore’s Law) or algorithmic progress (better use of computational resources) had more impact on solving hard problems? Here is an excerpt from what I sent them:

In 1977 Pollard’s rho algorithm was among the best factoring algorithms. See here for a reference. This algorithm was likely the basis for Ron Rivest’s claim in the August 1977 Scientific American article that factoring a 129-digit product of primes would take 40 quadrillion years.

Running Pollard’s rho algorithm takes roughly 4·n^{1/4} modular multiplies, where n is the number to be factored.
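For concreteness, here is a minimal sketch of Pollard’s rho (the Floyd cycle-detection variant) in Python. The function names and the small example semiprime are mine, not from the 1977 reference; treat this as an illustration of the n^{1/4} behavior, not a production factoring routine:

```python
import math

def pollard_rho(n, c=1):
    """Pollard's rho with Floyd cycle detection; heuristically takes
    ~n^(1/4) modular multiplies to find a nontrivial factor of n."""
    if n % 2 == 0:
        return 2
    f = lambda v: (v * v + c) % n   # one modular multiply per step
    x = y = 2
    d = 1
    while d == 1:
        x = f(x)          # tortoise: one step
        y = f(f(y))       # hare: two steps
        d = math.gcd(abs(x - y), n)
    return d if d != n else None    # None means: retry with another c

def factor(n):
    """Retry rho with successive polynomial offsets until one succeeds."""
    for c in range(1, 20):
        d = pollard_rho(n, c)
        if d:
            return d
```

For example, `factor(10403)` returns one of the prime factors of 10403 = 101 × 103.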

In 2007, Blue Gene/L at LLNL runs at ~0.4 petaflops. Running Pollard’s algorithm, this system factors an n = 2^300 (90-digit, 300-bit) product of primes with about 1.5 × 10^23 modular multiplies, which Blue Gene might be able to do in about 4 × 10^8 seconds, or roughly 12 years.
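The arithmetic behind that estimate can be checked directly (assuming, as the estimate implicitly does, roughly one modular multiply per floating-point operation):

```python
# Back-of-envelope check of the Blue Gene/L estimate above.
multiplies = 4 * 2 ** (300 // 4)     # 4 * n^(1/4) with n = 2^300
flops_per_second = 0.4e15            # Blue Gene/L, ~0.4 petaflops
seconds = multiplies / flops_per_second
years = seconds / (3600 * 24 * 365)
print(f"{multiplies:.1e} multiplies -> {seconds:.1e} s ~ {years:.0f} years")
```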

In 2007 the quadratic sieve is the best public algorithm for factoring integers under 100 digits. According to this source, factoring a 90-digit number takes about 4,800 seconds on a 2.2 GHz Athlon64.

An Apple II c. 1977, with its MOS 6502 processor, runs at about 1 MHz — 2,200x slower in clock speed than the Athlon. A direct comparison is more complex than clock speed alone, though, so to be safe, extend the difference by another factor of 10. I’d estimate that the quadratic sieve on the 1977 Apple II would factor a 90-digit number in about 4800 × 2200 × 10 ≈ 1 × 10^8 seconds, or about 3.3 years.
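Spelling out that arithmetic (the extra 10x is the safety factor described above, covering architectural differences beyond raw clock speed):

```python
# Apple II estimate: scale the measured Athlon time by clock ratio
# plus a 10x safety margin for architectural differences.
athlon_seconds = 4800          # QS on a 90-digit number, 2.2 GHz Athlon64
clock_ratio = 2200             # 2.2 GHz / 1 MHz
safety = 10
apple_seconds = athlon_seconds * clock_ratio * safety
apple_years = apple_seconds / (3600 * 24 * 365)
print(f"{apple_seconds:.1e} s ~ {apple_years:.1f} years")
```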

So the Apple II beats Blue Gene/L with the algorithm swap!!!

As the numbers get bigger, the difference in performance increasingly favors the Apple II, since the asymptotic scaling of the sieving algorithms is much better than that of the rho algorithm. If you go to 100-digit-plus numbers, the difference is much more spectacular.
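A rough way to see the scaling gap: compare the standard heuristic operation count for the quadratic sieve, L_n[1/2, 1] = exp(√(ln n · ln ln n)), against 4·n^{1/4} for rho. The function names are mine, and constants and implementation details are ignored, so treat these purely as growth-rate comparisons:

```python
import math

def rho_multiplies(digits):
    """~4 * n^(1/4) modular multiplies for Pollard's rho."""
    ln_n = digits * math.log(10)
    return 4 * math.exp(ln_n / 4)

def qs_operations(digits):
    """Heuristic L_n[1/2, 1] = exp(sqrt(ln n * ln ln n)) operation
    count for the quadratic sieve (constants ignored)."""
    ln_n = digits * math.log(10)
    return math.exp(math.sqrt(ln_n * math.log(ln_n)))

for d in (90, 100, 130):
    print(f"{d} digits: rho ~ {rho_multiplies(d):.1e}, QS ~ {qs_operations(d):.1e}")
```

At 90 digits the sieve already wins by many orders of magnitude, and the gap widens rapidly with size.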


If you can figure it out you pass the Turing test.

## 12 thoughts on “Algorithms vs. hardware: The throwdown”

1. Anonymous: yeah I saw that but mysteriously it works for me. There’s something about the apostrophe that doesn’t render for some people. Not sure how to fix it.

2. Ahh got it… thanks.

3. Software algorithms are still mostly human-designed, while within a given hardware architecture increases in performance are due to machines laying out the dense circuit lines.

4. Giovannetti’s idea is to send the address down the branching tree of connections in such a way that it only affects one switch at a time. The first address qubit sets a switch at the first branching point to go one way or the other; the second qubit is sent that way and sets the switch at the next branching point, and so on. The total number of entangled quantum systems is smaller, and they are not so susceptible to interference, allowing information to be retrieved from memory intact. What do you think about that, Geordie?

5. You promised the quantum computer will be on the market in 2008. Do you have a timetable for when quantum computer services will be on the market?

6. Pingback: Progress with AI « dw2

7. Ahh got it… thanks.

8. I am curious how long it would take if both Blue Gene and the quadratic sieve algorithm are used? Thanks!

9. Hi Lingzhu, I’m not completely sure, but I would guess you can run the quadratic sieve in a highly parallel mode, where you’d get a speed-up proportional to the number of cores you were using. On a large modern supercomputer, with a smart implementation, I would guess you could do the 90-digit example above in less than a second. Note that there’s no proof that you can’t do a lot better at factoring biprimes than the public sieving methods; I’d guess there are much better algorithms known but not public.