Another AQC question

Does anyone know a way to reduce the space complexity of an AQC algorithm (ie number of qubits required) by increasing the time complexity? Are there any known tricks in AQC to trade off one vs. the other?

13 thoughts on “Another AQC question”

  1. For example: checking whether a number is in a number array.
    Of course, we can check every number in the array one by one; this takes no extra space, but it will take a long time 😦
    We have another way: make a connection between a value and the address where it is stored (store value 1 at address 1, store value 10001 at address 10001). When we want to check for a number, say 666, we just check whether address 666 is null or not. This reduces time :) but it takes a lot of space :(
    I hope that helps.
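
    A minimal Python sketch of this trade-off, using the numbers from the comment above (everything else is invented purely for illustration):

    ```python
    def contains_scan(values, target):
        """O(n) time, O(1) extra space: check every entry one by one."""
        for v in values:
            if v == target:
                return True
        return False

    def build_table(values, max_value):
        """O(max_value) extra space: one slot per possible value, so
        membership becomes a single O(1) address lookup."""
        table = [False] * (max_value + 1)
        for v in values:
            table[v] = True
        return table

    values = [1, 10001, 666]
    table = build_table(values, 10001)
    print(contains_scan(values, 666))  # slow path: walks the whole list
    print(table[666])                  # fast path: one array lookup
    ```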

  2. It seems like you’re scaling up the number of qubits fairly quickly. Wouldn’t you want to move in the opposite direction, then? I.e., trade time complexity for space complexity?

  3. combineguard:

    The operation you have described is a simple form of hashing (specifically, a direct-address table, where the value serves as its own hash). It is a common technique and a very interesting subject in its own right. You can read more about it here:

    http://en.wikipedia.org/wiki/Hash_function

    To use D-Wave’s computer, the problem you’re trying to solve must be representable as a maximum clique problem. Many problems in computer science are not easily translated into maximum clique problems, making D-Wave’s computer more of a special-purpose device built to quickly solve only those portions of the problem, while the remaining computation is handled by a traditional computer. For an analogy, you might compare it to the use of ASICs over more general-purpose devices for signal processing, etc. It is a purpose-built device for efficiently solving a very specific category of problems, but that is all (at the present time).

    http://en.wikipedia.org/wiki/Clique_problem
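
    To make the problem itself concrete, here is a brute-force sketch in Python; it only shows what “maximum clique” asks for, not anything about how D-Wave attacks it, and the tiny graph is invented:

    ```python
    from itertools import combinations

    def max_clique(vertices, edges):
        """Try vertex subsets from largest to smallest and return the
        first whose members are all pairwise connected. Exponential
        time -- which is the point: this is the hard problem that
        special-purpose hardware targets."""
        adj = set(map(frozenset, edges))
        for size in range(len(vertices), 0, -1):
            for subset in combinations(vertices, size):
                if all(frozenset(p) in adj for p in combinations(subset, 2)):
                    return set(subset)
        return set()

    # The triangle {1, 2, 3} is the maximum clique in this toy graph.
    print(max_clique([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3), (3, 4)]))
    ```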

    If you want to interface with the machine externally, you’ll have to wait for the publication of their commercial API. All we know is that it will be vaguely SQL-like: you spec out the parameters of your problem using a set of declarative statements, pass your job to D-Wave over the network, wait for it to be processed by their AQC, and then receive your answer back (picture a SELECT statement, tailored to max clique problems). He talks about it briefly here:

    https://dwave.wordpress.com/2006/12/01/remiss-remiss/
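
    Since the API is unpublished, any concrete syntax is pure guesswork; the fragment below is invented from whole cloth just to make “vaguely SQL-like” tangible (none of these keywords or helper names are real):

    ```python
    # Hypothetical only: every name here is made up for illustration,
    # not taken from any published D-Wave interface.
    job_spec = """
        SELECT MAX_CLIQUE
        FROM graph_upload_42
        WITH (TIMEOUT_MS = 5000)
    """
    # Imagined workflow: declare the problem, ship it over the network,
    # wait for the AQC to process it, then read back the answer, e.g.:
    #   submit(job_spec); answer = poll_for_result()
    ```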

    If you want to understand how it works internally, you will probably need to take some graduate-level classes in physics and mathematics. How it works is tied directly to a theoretical understanding of how matter behaves at the quantum scale under a very specific set of conditions; it is not a situation where having a basic grasp of the concepts is enough to give you any traction at all on the subject. It’s quite hard, and experimental results tend to be very counter-intuitive:

    http://en.wikipedia.org/wiki/Adiabatic_theorem

    In fact, no one but D-Wave really knows whether or not their computer does what they say it does, or whether it will scale. The public must patiently wait for the architecture to scale to the point that it can be openly tested with a series of random inputs sufficient to preclude the possibility of cheating (for example, by precomputing inputs), and it must perform at a level well outside the boundaries of any conventional computer or cluster for people to believe it is truly taking advantage of quantum effects (the performance hurdle does not actually prove this; think of it as a prerequisite) and to make its business case (so customers will pay for compute time on D-Wave’s system).

    Issues of scalability are probably why Geordie asked the exact questions he did. If the nature of the relationship between AQC qubits, problem complexity, and time indicates that he can provide adequate solutions to his customers at a reasonable time penalty using only 256 AQC qubits, then why go to the added expense of scaling past that point if it is non-trivial to do so? They’d also need to understand this relationship to develop a set of equitable pricing tiers for their paying clients (and for a given problem, there is probably a point of diminishing returns where using additional computing resources does not result in a substantially faster or more accurate solution).

    Then again, I know absolutely nothing about physics or computer science, so all of this could be totally, completely wrong. 🙂

  4. Time is money, friend. I don’t know why you’re interested in saving space at the cost of time.
    As we know, CPU is fast and RAM is dear, so a lot of people try to reduce space by spending time. But not every computation allows that trade.

  5. “… a way to reduce the space complexity of an AQC algorithm by increasing the time complexity?”

    The minimum requisite level of expertise for such friendly adjustment of algorithmic complexity in particular domains is not frequently available, and it is almost always quite expensive!

    (Is this one of those Google / Craigslist-style recruiting quiz tests? *grin*)

  6. Isomorphism: Well put 🙂 Implicit in your response is that such “friendly adjustment” will be specific to a particular algorithm used for a particular purpose. I’m thinking there must be well-known generic tricks for situations where memory is really expensive but time is cheap.

    Jordan: No, because real-world discrete optimization problems can easily have millions of variables >> number of qubits we’ll have in 2008.

  7. I see. I would expect an easier line of attack working with your Ising method of encoding than directly with AQC. I imagine there must be heuristic NP algorithms that work by restricting the number of variables, optimizing over the reduced space, and then repeating, with some mechanism for trading variables in and out between iterations; philosophically, a simplex method of sorts. Presuming you can solve each stage of the iteration very rapidly, I imagine this would work well. But I would talk to a comp sci man; iso might have the appropriate connections =D
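
    Something like the following sketch, perhaps: a toy large-neighborhood search over an Ising objective. The subset solver here is brute force, standing in for whatever small fast solver is actually available, and all the numbers are invented:

    ```python
    import random

    def energy(h, J, s):
        """Ising energy: sum_i h[i]*s[i] + sum_{i<j} J[i,j]*s[i]*s[j]."""
        e = sum(h[i] * s[i] for i in range(len(s)))
        e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        return e

    def optimize_subset(h, J, s, subset):
        """Exhaustively re-optimize only the spins in `subset`, holding
        the rest fixed -- the step a small solver would perform."""
        best = s[:]
        for mask in range(2 ** len(subset)):
            cand = s[:]
            for k, i in enumerate(subset):
                cand[i] = 1 if (mask >> k) & 1 else -1
            if energy(h, J, cand) < energy(h, J, best):
                best = cand
        return best

    def lns(h, J, n, subset_size=4, iters=50):
        """Iterate: pick a random subset of variables, re-optimize it,
        repeat -- trading variables in and out between iterations."""
        s = [random.choice([-1, 1]) for _ in range(n)]
        for _ in range(iters):
            s = optimize_subset(h, J, s, random.sample(range(n), subset_size))
        return s

    # Toy 8-spin instance with random fields and couplings.
    random.seed(0)
    n = 8
    h = [random.uniform(-1, 1) for _ in range(n)]
    J = {(i, j): random.uniform(-1, 1)
         for i in range(n) for j in range(i + 1, n)}
    print(lns(h, J, n))
    ```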

  8. Hi Geordie,

    It seems unlikely to be possible, beyond a few trivial optimizations. If we slice up space into segments which are processed sequentially, then we limit the entanglement possible between such segments. If the segments take up the full space available in the processor, then no entanglement between segments is possible. If they take up less space, then it is conceivable that you can use the additional space as a type of quantum memory, allowing you to maintain entanglement between segments.

    Unfortunately, doing so limits the number of types of correlation which can be maintained to a fixed number (unless your device has an infinitely large state space, in which case the problem doesn’t arise). As a result, any such process can be simulated efficiently classically (although this may be efficient only in the computer-science sense).

    Joe

  9. Hi Joe: In cases where the “slicing up of space” can be done without introducing any approximations (like when the slice consists of a set of vertices in a graph that are the subjects of a local search), what you’re suggesting does accomplish the objective (trading space for time complexity). If I understand you correctly, though, you expect that this will trade a small improvement in space complexity for a large increase in time complexity. I’m not convinced this is true. I suspect that the ability to slice up space without introducing approximations is a special case for which the trade-off might not be so bad, although I have no idea how to quantify it.

  10. Actually, that’s not exactly what I’m getting at. There are many problems for which this will work (possibly using a type of quantum memory, as mentioned), but my point is that the size of the quantum memory limits the entanglement possible, making these correlations classically simulable. This may not actually be a superlinear increase in time complexity. It’s just an observation that problems which are amenable to this cannot possibly receive a superpolynomial speedup.
