Keynote from 15th US SCE Workshop

Yesterday I had the honor of giving the keynote speech at the 15th Annual United States Workshop on Superconductive Electronics in beautiful Lakeville, Connecticut. Now I'm not allowed to say where Lakeville is, but here is a picture, which may or may not be of Lakeville. If it even exists.

[Image: ctgr.jpg]

Here is a PG-13 version of the slides from the keynote.

24 thoughts on “Keynote from 15th US SCE Workshop”

  1. Hi Geordie!

    Really interesting presentation!!!

    ‘Why I think superconducting computers have a chance’

    I’m not a specialist in SC… What makes current superconducting computer technology more competitive than conventional technology, at least for certain applications?

    I recognize the energy consumption issue, but do you also take quantum computing into account when making this point? There was already a period of great expectations for SC in the 70s/80s. Why is it different this time?

    You quote the NSA Superconducting Technology Assessment (PDF), which identifies RSFQ as ‘the most promising technology in the continuing demand for faster processors’. It also looks like Moore’s Law will continue with existing chips for at least 10-15 years. Why do you expect SC to be advantageous already during this period?

    The idea of the rising demand for co-processor/hybrid machines is fascinating, but do we already have numbers supporting this claim? Except for IDC’s Top Ten Predictions…

  2. A superconducting classical processor will never be faster than a semiconducting one if we are talking about transistors (elements) laid out in 2D. A superconducting processor can run at, say, 200 GHz only because it is roughly 10-100 times smaller (linearly) than, say, an Intel Core 2 die; an electrical signal crosses the shorter distance in 10-100 times less time, which is why the superconducting chip can be clocked higher. But a Core 2 has 100-10,000 times more transistors than such a superconducting processor, so you would need 10-100 superconducting processors running at 20-200 GHz to equal one Core 2 running at 2 GHz (because if the die shrinks linearly, the number of transistors shrinks quadratically). And if you scale the superconducting processor back up by that factor of 10-100, to the roughly 1 cm² of a Core 2, it too could only run at about 5 GHz. So superconducting technology is a very poor way to build conventional processors, because a great deal of power is needed to reach the required low temperatures. Superconducting classical (not quantum) processors have no future. EXCEPT, maybe, if transistors are built in 3D rather than in 2D: then they would run very hot and superconducting technology would be needed, but I still don't understand how a superconducting processor inside a 3D structure could be kept cold. So I say one thing: superconducting processors will never be used for fast computing. It is a physical limitation set by the speed of light…
    For quantum computers, superconductors may be the only possible way to perform quantum computation, because as far as I know only in superconductors is it possible to control many spins at the same time and in the same way.
    “It also looks like Moore’s Law will continue with existing chips for at least 10-15 years. Why do you expect SC to be advantageous already during this period?”
    For classical computers, it is my opinion and my calculation that computers based on SC will never be faster than computers based on ordinary processors working at room temperature; it is impossible by the laws of physics. For a quantum computer the clock frequency doesn't matter: even at 1 Hz, with about 300,000 qubits running QS or Grover's algorithm, it would still be faster than the whole universe used as a computer, imagining each atom as a transistor (about 10^90 transistors).

  3. sr: your analysis of superconducting vs. semiconducting processors is incorrect.

    If you want to understand how one type of classical digital logic is proposed to work, you should read the NSA Superconducting Technology Assessment, which is linked to in the sidebar on the right.

    It is entirely reasonable that special-purpose classical digital superconducting co-processors could be operated at clock speeds enabling speed-ups of a factor of 100 or more over commodity processors, with a fraction of the heat generation. The issue isn’t whether or not you can build this type of chip (you can); it’s whether it’s worth doing (is the market size large enough to warrant the time & capital required).

  4. nextquant:

    “What makes current superconducting computer technology more competitive than conventional technology, at least for certain applications?”

    Two things: it’s faster and it consumes a lot less power.

    “I recognize the energy consumption issue, but do you also take quantum computing into account when making this point? There was already a period of great expectations for SC in the 70s/80s. Why is it different this time?”

    No, but if you have a processor architecture & computational model that works in both classical and quantum regimes (ours is the only one I know of), that’s much better. It’s different now because the performance of silicon has plateaued for applications that don’t parallelize. This wasn’t true 20 years ago. Some people need faster processors, not more processors.

    “It also looks like Moore’s Law will continue with existing chips for at least 10-15 years. Why do you expect SC to be advantageous already during this period?”

    Moore’s Law isn’t continuing, at least in its historical form. It’s already stopped. Commodity processor companies are now primarily working on better architecture with existing sizes and devices. These better architectures (multicore, special purpose accelerators, etc.) don’t always translate to better performance for HPC applications, and they can be difficult to properly program.

    “The idea of the rising demand for co-processor/hybrid machines is fascinating, but do we already have numbers supporting this claim? Except for IDC’s Top Ten Predictions…”

    Yes, in that IDC report and elsewhere; you can take a look at ClearSpeed or the GP-GPU space for data.

  5. Geordie, I read the NSA Superconducting Technology Assessment and my analysis is correct. Have you heard of transistors that switch at 1 THz? Why can't such transistors be used in processors? Because if you pack a very large number of such fast transistors into a processor, they can no longer run at 1 THz; they will run only at about the same speed as today's processors, roughly 5 GHz. Why? Because a chip die is about 10 mm long and 10 mm wide. A light-speed signal can cross 10 mm at a rate of about 30 GHz, and since the signal must reach the chip edges, the chip can be clocked at roughly 7.5-15 GHz depending on where the supply is connected; more realistically, a 100 mm² chip cannot be clocked above about 7.5 GHz.
    So say we have a 100 mm² semiconductor chip running at about 5 GHz with about 10^9 transistors, and a superconducting chip running at 500 GHz. The superconducting chip's length and width must then be 100 times smaller: 0.1 mm by 0.1 mm, i.e. 0.01 mm². If 10^9 transistors fit in 100 mm², then 0.01 mm² holds 100/0.01 = 10,000 times fewer: 10^9 / 10^4 = 10^5 transistors. Now compare the total throughput. Semiconductor processor: 10^9 (transistors) × 5×10^9 (Hz) = 5×10^18 ops. Superconducting processor: 10^5 (transistors) × 5×10^11 (Hz) = 5×10^16 ops. So the semiconductor processor is 100 times faster. (A numeric sketch of this arithmetic appears after the comments.)
    You will say that you just need 100 such superconducting processors to match the speed of one semiconductor processor with 100 times fewer transistors in total. But those SC processors would have 100 times smaller L1 and L2 caches, so if you want to send data to these cold processors with photons, you would have to fetch from RAM 100 times more frequently, because they have 10,000 times fewer transistors (while running at 100 times the frequency). The host semiconductor processor is far from those 100 superconducting processors, the data must be transmitted 100 times more often, and so the data transfer becomes very slow. You could add 100 times more transistors again, to match the semiconductor processor in transistor count and energy consumption, but the normal processor that feeds the cold processors is still far away, the latency is still large, and a lot of computation is spent just addressing each small piece of information for the small cache of each small superconducting processor.
    Neither Intel nor IBM nor AMD is trying to build superconducting supercomputers, because they know that processor speed can now only be increased with more transistors, by making them smaller and smaller. Everybody says that nanotubes will consume less energy and make transistors more efficient, but I don't think so. Only shrinking the elements makes faster processors possible, not some hyper-computation technology. Maybe that is even relevant to quantum computers, because the evidence that a quantum computer is faster than a classical one will only exist after a quantum computer faster than a classical one has been built.

  6. The speed at which electrons travel in a circuit never changes. If we want to make the CPU faster, the only thing we can do is make the circuit shorter than before. But Intel and AMD can no longer make their CPU circuits much shorter, and that is why they develop CPUs with two cores.

  7. combineguard,
    the drift speed of electrons is trillions of times slower than the speed of the electrical signal. If one electron moves, it affects the other electrons at the speed of light, and those electrons affect others at the speed of light, and so on: when one electron moves, it leaves a positive gap in its place, and the other electrons feel that gap at the speed of light and move toward it.
    Anyway, suppose you build very small processors with 10^5 transistors running at 500 GHz. You can think of one such processor as equivalent to 10^7 transistors at 5 GHz, but 10^7 is still less than 10^9, so a superconducting processor with 10^7 'transistors' would be 100 times slower than the semiconductor processor. If instead of one superconducting processor you have 100 of them, then you have 100 processors of 10^5 transistors each, running at 100 times the frequency. But there must be one big processor that sends data to these small processors; the data has to be split into 100 parts and the outgoing data combined back into one. Maybe this splitting and combining takes additional computation resources? If it doesn't (or takes very few), then you could put 10,000 superconducting processors on a 100 mm² chip and it would be 100 times faster than a conventional chip. That would then be equivalent to 100 semiconductor processors and take about 1,000-10,000 W; but cooling one superconducting processor might also need about the same 1,000-10,000 W. IBM's Blue Gene/L has a theoretical performance of about 360 TFLOPS and about 280 TFLOPS sustained, even though its processors are far from one another: many cables, but no need for photonic communication as in the superconducting case, and no need for argon or freon or whatever. And since nobody has actually built a superconducting computer faster than a semiconducting one, there may be unknown obstacles for the superconducting computer, and all this smells like hyper-computation. And even if superconductors give, say, a 100-fold speedup over semiconductor computers/supercomputers, that still does not look like very much… And then it looks like D-Wave is trying to get a benefit from two unknown technologies at the same time.

  8. BTW, combineguard, more cores are offered because the fabrication process has reached the point where any core can be overclocked to any frequency, limited only by the chip size. About 5-10 years ago, in the Pentium III / Pentium 4 era, Intel charged a higher price for the chips that happened to be fabricated better and could run at a higher frequency. Now all chips can run at the same frequency, so overclocking is limited only by the bus. So instead of binning by frequency, they divide the chip into many parts, and a chip with 4 parts is more expensive than a chip with 2 parts. They needed some way to explain that “we can't increase the frequency any more, but we can increase the number of cores”, even though the number of transistors keeps growing. In my opinion the many cores play no role in the overall chip frequency; I think they are still synchronized and work like one processor, so all this “many cores” talk is just a formality, and it would be better not to mislead people and instead build single-core processors with different numbers of transistors, or better yet one core with the same number of transistors but a locked multiplier for each chip (but then overclocking via the bus would be possible, which is why they don't do it that way). Intel chips are bigger than 100 mm², but they are still of a size at which the overall core area can work as a single chip at the same frequency. Offering more or fewer cores is just aimed at a wide range of customers, and maybe also lets a damaged core be turned off if one core doesn't work…
    IBM makes a 6 GHz, 720M-transistor chip on a 65 nm process, but it has an L3 cache, which perhaps runs at half the processor speed, or else the processor is no bigger than 156.25-625 mm² (if the chip is square), depending on where the supply is connected.
    And BTW, a GP-GPU is also 100 times faster than a CPU, so a GP-GPU can also serve as a co-processor.

  9. “SR, I HATE YOU!!!” (a sentence with some grammar mistakes?)
    Have you heard of threads in a CPU? An operating system like Windows divides CPU time into small blocks, so we can download files and chat at the same time.
    A two-core CPU is like a carriage with two horses: it lets the carriage carry more, but it can't make the carriage faster (the speed of a horse never changes).

  10. Everywhere, programs are divided into small blocks. GPUs have many pipelines, but there is no reason to put more cores on a single die. I think many cores are a temporary phenomenon; Intel now wants to fit all the cores into a single die.
    The 3dfx Voodoo 4/5 also had many cores, but that didn't make them faster than the NVIDIA GeForce 256/GeForce 2. With a single core it is also possible to download files and chat, or even play 3D games… Many cores are just decoration; the real power is in the number of pipelines, or conveyors, as in a GPU. Why don't GPUs need more cores? For example, the GeForce 8800 has about 700 million transistors, the GeForce 8600 about 300M, and the GeForce 8400 about 150M; they just have different numbers of pipelines and don't need more cores. If you made a single-core processor with the same number of transistors, it would be the same speed as a processor with many threads and would work at the same frequency. Many cores are only needed if you can't fit many transistors into one silicon die… I think that in 10 years there will again be single-core processors like the Pentium 2/3/4. But maybe more cores are simply easier to build? Whatever…

  11. What do you think: will D-Wave ever build a very fast quantum computer?
    I think not.
    “Generally the success rates are around 90% for 4-vertex MIS problems and around 85% for 6-vertex MIS problems.” ( https://dwave.wordpress.com/2007/03/28/introduction-to-orion-document/#comments )
    With a 1-vertex MIS problem, 97.5% succeed; with 2 vertices, 95%; with 3 vertices, 92.5%; with 4 vertices, 90%; with 5 vertices, 87.5%; and with 6 vertices, 85% success.
    There are more qubits than vertices, so this proves nothing. And at that rate, the success rate will be 0% at 40 vertices.

  12. I think D-Wave will build something faster than what we have now. Reading out the result is a big problem in building a quantum computer; if Geordie solves this problem, I am sure the next Nobel Prize belongs to him.

    SR, I just noticed something interesting here:
    97.5% (1 vertex) – 95% (2 vertices) = 2.5%
    95% (2 vertices) – 92.5% (3 vertices) = 2.5%
    92.5% (3 vertices) – 90% (4 vertices) = 2.5%
    and so on.

  13. Of course you noticed it: each vertex costs 2.5% of the success probability. That means that with more than 40 vertices the probability of solving the Maximum Independent Set (MIS) problem will be 0%. And if you know what an Intel 4004 processor is, you can see that even that processor easily solves such 6-vertex and maybe even 40-vertex problems. (This extrapolation is written out in the sketch after the comments.)
    Perhaps the D-Wave quantum computer really is quantum, but for a quantum computer the control precision of each qubit must increase at least linearly with the number of qubits, and D-Wave doesn't know how to do that. Nobody does. Even for something as comparatively simple as an optical quantum computer, nobody knows how to increase the computation precision (or gate accuracy) linearly. So by this rule, D-Wave with more qubits will be unable to increase the precision and will be unable to solve any problem. A 60-vertex MIS problem, say, will be 100% unsolvable.
    “Reading out the result is a big problem in building a quantum computer; if Geordie solves this problem, I am sure the next Nobel Prize belongs to him.”
    What are you talking about? What problem?

  14. Do you know why nobody has succeeded in quantum computation the way D-Wave has, apart from optical quantum computers and NMR QC? Because they want to make one qubit out of a single particle's spin or a single atom. For example, if the number of qubits is 100, then the number of atoms or particles they want to manipulate is also 100. In the optical quantum computer case, the number of gates must grow exponentially with the number of qubits. NMR quantum computers reached some success because they use not a single atom or particle as a qubit but many particles as one qubit; but in a nuclear magnetic resonance (NMR) quantum computer the energy, or the time, or the number of particles per qubit must grow exponentially (I don't remember which), because the nuclei in NMR molecules cannot all be in the same state for one qubit: they are fermions, and many fermions cannot occupy the same state, so exponentially many fermions and exponentially much energy are needed.
    In the D-Wave quantum computer, one qubit also consists of many particles, but those particles are not fermions but bosons (one boson being a pair of “frozen” electrons). Many bosons can be in the same state. So with bosons there is no need for quantum error correction (QEC); QEC is nonsense, invented specifically for quantum computers whose qubits are single particles, such as trapped ions. But nobody has built such a quantum computer, because it is impossible to control single particles with “thick fingers”. D-Wave controls not single particles but many bosons as one qubit. They can already solve simple problems with Orion with reasonably good precision, which, I guess, neither NMR nor optical QC has reached.
    But there is one problem: as the number of qubits grows, the qubit control precision must grow too, and neither NMR QC nor optical QC can solve this problem. I can tell you why nobody can solve it: because an analog computer cannot solve it. Our world is not perfect; there is thermodynamics, molecules move, and there is always some noise. Perfect qubit control in the D-Wave machine is impossible, not because the noise in the superconducting loops is too strong, but because the noise is too strong in the analog machinery with which D-Wave wants to control each qubit. It is impossible to build an analog computer that computes with as high a precision as a digital one. Qubit control is analog, so they cannot control the qubits with very high precision, because even analog computers cannot do that! Analog computation is better suited to neurons; for mathematical computation it is very bad, because nobody can build an analog computer with the computation precision of a digital computer, which is why nobody uses analog computers and everybody works with digital ones. So I think D-Wave will not build a quantum computer faster than a classical one, because they do not know how to control qubits precisely with analog techniques. Their temporary success is because the quantum computer takes a lot of energy (to cool it to near absolute zero) and the problems given to it are not very big.

  15. D-Wave showed how a quantum computer solves a Sudoku puzzle and gives a 100% correct answer, but in reality Orion was working in both quantum and probabilistic regimes…
    “…nobody is going to find out the 10,000th digit of pi with an analogue computer.” ( http://www.dself.dsl.pipex.com/MUSEUM/COMPUTE/analog/analog.htm ) From this you can guess how many qubits a D-Wave quantum computer can have. A quantum computer with more than 10,000 qubits is impossible, or something like that.
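
The back-of-the-envelope throughput comparison traded in comments 2, 5 and 7 above is easy to reproduce. Below is a minimal sketch in Python using the round numbers assumed there (a 100 mm² semiconductor die with ~10^9 transistors at 5 GHz versus a hypothetical superconducting die 100× smaller on a side and clocked 100× faster); every figure is the commenters' assumption, not a measured value for any real RSFQ design.

```python
# Back-of-the-envelope comparison from comments 2/5/7.
# All numbers are the commenters' round assumptions, not measurements.

C = 3.0e8  # speed of light, m/s


def speed_of_light_clock_ceiling_hz(die_side_m):
    """Crude clock ceiling: one edge-to-edge signal traversal per cycle."""
    return C / die_side_m


def throughput_ops(transistors, clock_hz):
    """The naive 'transistors x clock' figure of merit used in the comments."""
    return transistors * clock_hz


# Semiconductor chip: 10 mm x 10 mm die, ~1e9 transistors, ~5 GHz clock.
semi_side_m = 10e-3
semi_ops = throughput_ops(1e9, 5e9)

# Hypothetical superconducting chip: 100x smaller on a side, hence 10,000x
# fewer devices at the same device size, but clocked 100x faster (500 GHz).
sc_ops = throughput_ops(1e9 / 100**2, 5e9 * 100)

print(f"speed-of-light ceiling for a 10 mm die: "
      f"{speed_of_light_clock_ceiling_hz(semi_side_m) / 1e9:.0f} GHz")
print(f"semiconductor: {semi_ops:.1e} ops/s, superconductor: {sc_ops:.1e} ops/s")
print(f"ratio (semiconductor / superconductor): {semi_ops / sc_ops:.0f}x")
```

Under these assumptions the 100× clock advantage is outweighed by the 10,000× loss in device count, which is the whole of the commenters' argument; whether those assumptions describe a real superconducting co-processor is exactly what the NSA assessment and comment 3 dispute.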
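
Similarly, the 2.5%-per-vertex pattern noted in comments 11-13 can be written out explicitly. The linear model below is simply the commenters' extrapolation of the quoted Orion success rates; it is their assumption, not a measured scaling law for the hardware.

```python
# Linear extrapolation of the quoted Orion MIS success rates (comments 11-13):
# 97.5% at 1 vertex, dropping 2.5 percentage points per additional vertex.
# This is the commenters' assumed model, not measured data.


def assumed_success_rate(vertices):
    """Success probability under the purely linear 2.5%-per-vertex assumption."""
    return max(0.0, 0.975 - 0.025 * (vertices - 1))


for v in (1, 4, 6, 20, 40, 60):
    print(f"{v:2d}-vertex MIS: assumed success rate {assumed_success_rate(v):.1%}")

# Under this assumption the rate reaches 0% at 40 vertices, which is the basis
# of the claim in comment 13; nothing in the quoted data says the decline stays
# linear beyond 6 vertices.
```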
