Quantum Computing has Arrived

Those of us who have been around for a little while fondly remember Intel’s pre-Pentium days, the best of which were spent gaming on the awesome 486 DX line. At the time, the forward-looking among us (full disclosure: I was five when the first 486 debuted, so for me, forward-looking consisted of planning time around The Adventures of David the Gnome) imagined a future of computing built on a steady march of ever more powerful processors. By the late ’90s it became obvious that raw clock speed was something of a white elephant, and chasing it was shelved in favor of enhancing the quantity and quality of RAM. Over the last few years, the trend has shifted back to the processor, though not with the goal of making it faster, but rather of making it more parallel. Toss two 2.0 GHz processors into one computer, and it gets through two “normal use” calculations at about the rate a single 3.0 GHz processor gets through one. It’s only when you start dealing with complex processes that are inherently serial that you really see the benefit of a faster processor. The dichotomy stems from a simple fact: one processor can only perform one calculation at a time. Modern supercomputers don’t get their oomph from blazing-fast processors, but rather from thousands of merely snappy ones. If the human brain were a computer, each neuron would be a processor. Take it from nature: a one-neuron brain is useless, and therefore does not occur (an absurd statement, admittedly, since neurons are themselves the product of a great deal of evolutionary history; a one-neuron brain wouldn’t have much merit and would exist instead as some sort of elementary ganglion). Intelligence in the natural world is measured by parallelism: in other words, the density of neurons in a given biomass.
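A back-of-the-envelope model (my own toy function and numbers, not a benchmark) captures the trade-off: a pile of independent everyday tasks scales with core count, while a single serial task only benefits from a faster clock.

```python
# Toy model of parallel vs. serial workloads. Assumes each task takes
# work_per_task / ghz seconds and that independent tasks spread evenly
# across cores -- an idealization, ignoring scheduling overhead.
import math

def wall_time(n_tasks, n_cores, ghz, work_per_task=1.0):
    """Seconds to finish n independent tasks on n_cores running at ghz:
    tasks execute in ceil(n_tasks / n_cores) parallel rounds."""
    rounds = math.ceil(n_tasks / n_cores)
    return rounds * work_per_task / ghz

# Many small independent tasks: two 2.0 GHz cores beat one 3.0 GHz core.
print(wall_time(100, 2, 2.0))   # 25.0
print(wall_time(100, 1, 3.0))   # ~33.3

# One long serial task: only the faster clock helps.
print(wall_time(1, 2, 2.0))     # 0.5
print(wall_time(1, 1, 3.0))     # ~0.33
```

The second pair of calls is the serial case from the paragraph above: with a single indivisible calculation, the extra core sits idle and the 3.0 GHz chip wins.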
Considering this trend, the likely future holds laptops with hundreds, if not thousands, of moderately quick processors, right?  Actually, no.  The paradigm that began long before the 486 is quickly drawing to a close.  Even with the rise of nanotechnology, there are only so many transistors you can fit on a square inch of silicon (this is the breaking point of Moore’s Law).

So what new paradigm is just over the horizon?  For the last several years, rumors have floated around about atomic, or even subatomic, computers in the early stages of R&D.  It sounds so sci-fi that many simply shelved the concept in the same category as the jet pack, forever doomed to futuredom.  For as much as I read about these things, I still thought the first practical models were a decade away.  This one was even a shocker to Bob, and that says a lot.  So it was with a profound sense of surprise and joy that I read about new research published this week in Nature Chemistry highlighting the first application of a quantum computer to a task that quickly becomes intractable for conventional transistor-based computing.  So, what sort of unbelievable heavy lifting was performed with this revolutionary technology?  Was it predicting global weather patterns years in advance, big-bang simulations, or the like?  Actually, it was used to calculate the exact amount of energy contained in molecular hydrogen.  Not very sexy, I know, but surprisingly far more difficult.  Traditional supercomputers can approximate the comings and goings of one or two atoms, but every additional particle adds orders of magnitude to the time required to produce results.  This is because transistors are limited to one of two states: off or on.  Quantum computers, however, use qubits instead of transistors to run calculations.  Qubits are like traditional transistors in that they can take the values one and zero, on and off.  The major difference lies at the core of quantum mechanics, in the fundamental concept of superposition: an array of qubits doesn’t just express its current state as one string of ones and zeros, but rather captures every possible combination of zeros and ones at the same time, each weighted by a probability.  This is inherently confusing, partly because we don’t think of ourselves as observing reality on the quantum scale.
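To make that scaling point concrete, here is a toy statevector sketch of my own (pure Python, nothing from the paper): describing n qubits classically takes 2**n complex amplitudes, which is why each additional particle multiplies the classical workload.

```python
# Toy illustration: a classical n-bit register holds ONE of 2**n values
# at a time, while an n-qubit register is described by 2**n amplitudes
# all at once.
import math

def uniform_superposition(n):
    """State after a Hadamard on each of n qubits starting in |0...0>:
    every one of the 2**n basis states gets amplitude 1/sqrt(2**n)."""
    dim = 2 ** n
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

def probabilities(state):
    """Born rule: the probability of measuring basis state i is |amp_i|**2."""
    return [abs(a) ** 2 for a in state]

state = uniform_superposition(3)      # 3 qubits -> 8 amplitudes
print(len(state))                     # 8
print(round(sum(probabilities(state)), 10))   # 1.0 -- probabilities sum to one
```

Doubling the qubit count squares the number of amplitudes a classical machine has to track, which is the "orders of magnitude per particle" blow-up the paragraph above describes.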
You can rest assured that the computer you are using to read this article would not work if electrons didn’t behave in the manner predicted by quantum mechanics; but don’t take my word for it, take a dip into the intellectual deep end.  And if this whole quantum computing thing isn’t awesome enough, the researchers from Harvard and the University of Queensland who performed this feat did so using only two entangled photons.  That’s right: of the roughly 10^88 photons in the observable universe, it took just two to outperform the best multi-million-dollar supercomputers.
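That two-photon entanglement is easy to sketch in amplitudes (again a toy statevector calculation of my own, not the actual photonic setup): a Hadamard followed by a CNOT turns |00> into the Bell state (|00> + |11>)/sqrt(2), in which the two qubits' measurement outcomes are perfectly correlated.

```python
# Two-qubit statevector with amplitudes ordered |00>, |01>, |10>, |11>.
import math

def hadamard_on_first(state):
    """Apply a Hadamard to the first qubit: mixes |0x> and |1x> pairs."""
    a, b, c, d = state
    s = 1 / math.sqrt(2)
    return [s * (a + c), s * (b + d), s * (a - c), s * (b - d)]

def cnot(state):
    """CNOT with the first qubit as control: swaps the |10> and |11> amplitudes."""
    a, b, c, d = state
    return [a, b, d, c]

state = [1.0, 0.0, 0.0, 0.0]              # start in |00>
state = cnot(hadamard_on_first(state))    # -> (|00> + |11>)/sqrt(2)
print([round(x, 4) for x in state])       # [0.7071, 0.0, 0.0, 0.7071]
```

Only the |00> and |11> amplitudes survive, so measuring one qubit as 0 guarantees the other is 0 (and likewise for 1): the qubits are entangled.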

So while I might wax nostalgic for my old 486, I like the way the future is panning out.  Although if quantum mechanics teaches us one thing, it’s that nothing is entirely predictable; the best we can hope for is a little less uncertainty.


~ by Wil Finley on January 18, 2010.

7 Responses to “Quantum Computing has Arrived”

  1. I don’t think I’ve met anyone else who remembers David the Gnome.

    • How could anyone forget? That show was awesome. Little known fact: it was originally made in Spain and dubbed over for the US audience.

      • Yeah I never knew that until I read it on your post.

        I was going to ask what happens if you put a cat in the tower but I think I figured it out. http://en.wikipedia.org/wiki/Cat_state

        • Ah, Schrödinger’s cat. One thing that I think is cool is the way that quantum uncertainty gets a little tricky when considering that the cat (or qubit) essentially observes itself. A present quantum state should always be determinable, but a future quantum state not so much. So when people talk about observation resulting in a reduction in uncertainty, that always comes across to me as being more of an analogy for humans than a quantum fundamental. Essentially “observation” could be restated as “a point in the fourth dimension”.

          • I’ve always thought of time rather than a Euclidean space as the 4th dimension. It always comes across as a human needing to observe it to me also, but some people think a measurement counts as an observation which makes more sense to me. Even though it isn’t how it works, I thought it rather funny to think that a computer may or may not be processing depending on if you watch it.

          • Maybe if lots of people observed it at once it could go faster : ). It makes me think of the Infinite Improbability Drive.

          • Many respectable physicists said that they weren’t going to stand for this – partly because it was a debasement of science, but mostly because they didn’t get invited to those sort of parties.
