We usually talk about the power of a quantum computer in terms of the separation between gate sets that we know how to simulate efficiently on a classical computer (i.e. circuits that can only solve problems in the class BPP) and universal gate sets, which by definition can implement any quantum algorithm, including BQP-complete ones. So, assuming a separation between BPP and BQP, there is a separation in the power of the algorithms that can be implemented with these gate sets, and the difference between the gate sets can be as small as the availability of a single gate (two classic examples are the Clifford gates plus the $\pi/8$ phase gate, and Toffoli plus Hadamard). In effect, you need a universal gate set in order to gain a computational speed-up. However, that statement is specifically about algorithms with polynomial running times.
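To make the Clifford versus Clifford + $\pi/8$ example concrete, here is a small numpy check (my own illustration, not part of the question): Clifford gates conjugate Pauli operators to Pauli operators, which is the structure behind the Gottesman-Knill theorem and the reason Clifford-only circuits are efficiently simulable, while the $\pi/8$ gate breaks that structure.

```python
import numpy as np

# Standard single-qubit gates and Paulis
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)               # Clifford phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])          # the pi/8 gate

def conjugate(U, P):
    """Return U P U^dagger."""
    return U @ P @ U.conj().T

# Clifford gates map Paulis to (signed) Paulis, so a stabilizer description
# can be updated efficiently (Gottesman-Knill):
print(np.allclose(conjugate(H, X), Z))                        # True
print(np.allclose(conjugate(S, X), Y))                        # True

# The pi/8 gate does not: T X T^dag = (X + Y)/sqrt(2) is not a single Pauli,
# which is why adding it to the Clifford group escapes the stabilizer
# simulation and yields a universal gate set.
print(np.allclose(conjugate(T, X), (X + Y) / np.sqrt(2)))     # True
```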

What are the requirements that distinguish the power of a quantum computer that is intended only to provide a polynomial speed-up on a problem outside BPP? For example, a device built solely for the purpose of implementing Grover's search. Presumably the D-Wave machines fall into this category.

To be clear, I require a speed-up that changes the scaling relation. If there's a classical algorithm that requires time $O(2^n)$, then obviously there are many different ways of physically implementing it that have different running times, but all will be $O(2^n)$. I'm interested in identifying what it is in a quantum computer that permits a better scaling (but not a reduction to a polynomial running time).
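As a back-of-the-envelope illustration of what I mean by a changed scaling relation (these are just the textbook query counts for unstructured search, not tied to any particular device):

```python
import numpy as np

# Unstructured search over N = 2^n items: a classical brute-force search needs
# about N/2 oracle queries on average, while Grover's algorithm needs about
# (pi/4) * sqrt(N) iterations -- the same problem, a different scaling relation.
for n in (10, 20, 30, 40):
    N = 2 ** n
    classical_queries = N / 2
    grover_iterations = np.floor(np.pi / 4 * np.sqrt(N))
    print(f"n={n:2d}: classical ~{classical_queries:.2e}, Grover ~{grover_iterations:.2e}")
```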

Asked another way: think about the D-Wave machine (although I don't intend to limit the discussion to this one case), which we believe is doing something coherent, and which, for a given problem size, seems to be quite speedy, but we don't know how it scales. Can we know a priori, from the details of its architecture, that it at least has the potential to provide a speed-up over classical devices? If it were universal for quantum computation, then it certainly would have that potential, but universality probably isn't necessary in this context.

Part of what I'm struggling to get my head around, even in terms of defining the question properly, is that a universal gate set isn't strictly required: it doesn't necessarily matter if the gate set can be simulated efficiently on a classical computer, just so long as the overhead of performing that simulation is comparable to, or worse than, the speed-up itself.
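To illustrate the point with made-up cost models (every function below is an assumption for the sake of argument, not a measured cost): if the best direct classical algorithm takes roughly $2^n$ steps, the device takes roughly $2^{n/2}$ steps, and simulating the device classically carries only a polynomial overhead per step, then the simulation is itself a fast classical algorithm and the device only buys a polynomial factor; a genuine scaling advantage survives only when the simulation overhead grows at least as fast as the speed-up.

```python
# Hypothetical cost models, purely for illustration.
def direct_classical(n):   return 2.0 ** n                       # assumed brute-force cost
def quantum_device(n):     return 2.0 ** (n / 2)                  # assumed device cost
def simulate_device(n):    return (n ** 3) * quantum_device(n)    # assumed polynomial overhead

for n in (20, 40, 60):
    print(f"n={n}: direct classical ~{direct_classical(n):.2e}, "
          f"device ~{quantum_device(n):.2e}, "
          f"classical simulation of device ~{simulate_device(n):.2e}")
# With a polynomial overhead the simulation stays far below the direct classical
# cost, so nothing quantum is needed to achieve that scaling; only an overhead
# that grows like the speed-up itself would leave the device with a genuine
# scaling advantage.
```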

  • To facilitate the answer: is the polynomial speed-up the hard requirement here, or would a linear speed-up also be satisfactory? – agaitaarino Apr 13 at 5:37

I am not sure whether the answer to this question, in the sense of sufficient rather than merely necessary conditions, is actually known.

However, using hand-waving arguments, you can identify a few actual requirements (necessary conditions); all four show up in the toy sketch after this list:

  1. Initialization. Without the ability to initialize your system, the result will be arbitrary. (In fact, one could cite all five DiVincenzo criteria for quantum computing here.)

  2. Superpositions, or quantum parallelism. Without superposition, you cannot process "more" information than is possible classically.

  3. Interference. Without interference, you cannot make use of the "extra" information gained from superposition.

  4. Measurement. Without measurement, you cannot get a (classical) result from the computation.
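To make these four ingredients concrete, here is a minimal toy sketch (my own illustration: a plain state-vector simulation of Grover search; the register size and the marked item are arbitrary choices):

```python
import numpy as np

n = 4                                          # number of qubits (arbitrary)
N = 2 ** n
marked = 6                                     # hypothetical marked item

# 1. Initialization: start in the definite state |0...0>.
state = np.zeros(N, dtype=complex)
state[0] = 1.0

# 2. Superposition: Hadamards on every qubit produce the uniform superposition.
state = np.full(N, 1 / np.sqrt(N), dtype=complex)

# 3. Interference: oracle + diffusion steps make amplitudes interfere so that
#    probability concentrates on the marked item.
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                        # oracle: phase flip on the marked item
    state = 2 * np.mean(state) - state         # diffusion: inversion about the mean

# 4. Measurement: read out a classical answer; the marked item now dominates.
probabilities = np.abs(state) ** 2
print("P(marked) =", probabilities[marked])    # ~0.96 for these parameters
print("most likely outcome:", int(np.argmax(probabilities)))
```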
