Archive for the ‘In the news’ Category

Will it quantum?

Friday, April 19th, 2013

This week Boixo et al. posted test results on the arXiv from quantum annealing runs with more than one hundred qubits on the commercial D-Wave One device. Since the device is marketed as a quantum computer, the natural question to ask is “Will it blend?”. Or rather, is the device genuinely quantum or just a complex classical machine? In their paper, the authors compare the experimental performance with a simulation of how a quantum device would perform. In addition, they compare the behavior to optimized classical algorithms. The conclusion is interesting in two respects. First, the behavior of the current device seems to suggest it is a quantum device. And secondly, an improved version of the current architecture might be able to deliver real-world speed-ups at larger problem sizes. This would certainly be very interesting. Or in the words of the authors: “A quantum annealer showing better scaling than classical algorithms for these problem sizes would be an exciting breakthrough, validating the potential of quantum information processing to outperform its classical counterpart.”

Update June 5: There has been a lot of buzz about D-Wave in the aftermath of this preprint (and a related study). I highly recommend reading Scott Aaronson’s post about the topic, from which it seems that the jury is still out on whether the D-Wave architecture will provide real world benefits.

An open revolution

Wednesday, February 8th, 2012

We are in the midst of an open revolution in science. This is a revolution towards doing research in the open, using online tools to share unfinished ideas and allow massive collaboration, but also an open revolt against the traditional model of publishing those finished ideas, in particular against the publishing giant Elsevier.

In his book Reinventing Discovery, Michael Nielsen (co-author of The Book in quantum information) is an enthusiastic chronicler of the rise of open science, and an inspiring advocate for its success.  Some of his examples were familiar to me, like the arXiv or the polymath project.  But there were also some neat examples I had not heard of, like Galaxy Zoo, where anyone can help classify galaxies, Foldit, where users compete video-game style to come up with low-energy configurations of proteins, or Kasparov vs. The World, where 50,000 people democratically played chess against Kasparov, doing much better than expected.

One example I was surprised not to find in the book was the stack exchange family of question and answer websites, mentioned previously on this blog.  Sites like math overflow or cs theory stack exchange, where people can ask research-level questions and expect to get informative answers in a matter of hours, have quickly become very useful research tools.

The end of Nielsen’s book is a call to arms, with suggestions of what scientists, programmers, and citizens can do to develop open science, and also some fantasies of what open science might look like in the future.  One such idea I found quite cool was a sort of stack exchange meets eHarmony service.  This tool would automatically pair questions with respondents, based on each respondent’s expertise and the questions they have successfully answered previously.  As Nielsen describes it, every morning a researcher would receive an email with a list of ten requests related to their expertise, which they could then choose to answer or not.

What I personally find the most exciting in the open science revolution is the development of open education.  The Khan Academy deserves a lot of credit for innovation here, showing that students often prefer a short video that can be paused and replayed to a live lecture.  They now have over \(10^8\) downloads and 2600 videos.

Sebastian Thrun, who co-taught the Stanford online artificial intelligence class last semester that attracted 160,000 students, recently gave a talk about the experience which I think is well worth watching.  Thrun says that this experience changed his life, and the talk is surprisingly heartfelt and emotional.  He also has some harsh words for the current university teaching model: “Miraculously, professors teach today exactly the same way they did 1000 years ago.  University has been, surprisingly, the least innovative of all places in society.”  Nielsen also addresses this point in discussing the low rate of participation of researchers in Wikipedia.

Thrun further says that after teaching 160,000 students (of whom 23,000 completed the course), he can’t go back to the old way of teaching the 200 in his normal Stanford classes.  He gave up tenure at Stanford in April of last year and has now begun a startup, udacity, focused on producing online classes.  Two classes are on tap for the first offering: CS101, going from zero programming experience to programming a web search engine, and CS373, programming some of the algorithms under the hood of a driverless car.

Blind quantum computing: An introduction

Friday, January 20th, 2012

Cross-posted from my personal blog.
Blind quantum computing - as illustrated by my lovely wife Liu Jia
Since I have a new paper out today in Science, which seems to have attracted the interest of a fair number of non-physicists, I thought I would take the time to explain the paper: what it is we do, and the background behind it.

The paper itself is an experimental implementation of a cryptographic scheme designed to allow a user to run a calculation on a remote computer (the server) in such a way that the user’s input, the operations performed and the output all remain hidden from the server. In our case we care about running a computation where the remote computer is a quantum computer, but the user has access only to a much more limited machine. With that in mind, I thought I might first explain the protocol before talking about the experiments themselves.

The current experiments are based on Universal Blind Quantum Computing, a protocol proposed by Anne Broadbent, Elham Kashefi and myself a few years ago (illustrated above).

In quantum computing, the basic unit of information is the quantum bit (or qubit for short). This is exactly the structure you get if you take a classical bit, which can be 0 or 1, and try to represent it as a quantum system. Quantum mechanics allows systems to exist in superposition: essentially a class of non-classical states which exist in between the classical states. This means that a qubit can exist in various superpositions of 0 and 1 at the same time. Each quantum state can be described in terms of the classical states by associating two numbers with each: an absolute amplitude (between 0 and 1) which, when squared, gives you the probability of obtaining that classical state when you measure the system, and a complex phase which governs interference effects. As it turns out, this means that we can represent the state of an (unentangled) qubit very simply, as a point on the surface of a sphere, known as the Bloch sphere (see below).

The Bloch sphere

The classical states 0 and 1 are the top point and bottom point of the sphere, while the other states are in superposition. The lower down you go, the higher the probability of obtaining a 1 instead of a 0, with the points around the equator representing states where there is an equal probability of measuring 0 or 1.

Since we imagine this in 3-dimensional space, it makes sense to assign X, Y and Z axes to the ball. By convention, we take the Z axis to pass through both 0 and 1, and put the point (0,0,0) at the centre of the ball. This gives us a very easy way to visualise the basic operations of quantum computing which can be performed on a single qubit: they are simply rotations of the ball, and indeed it turns out that it is enough to consider rotations about only the X, Y and Z axes. If you can do these, you can build any more complex single-qubit operation as a sequence of at most three rotations about these axes. These operations are basically a generalisation of the NOT operation a classical computer can perform, which takes 0 to 1 and 1 to 0.
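
If you like to see such things concretely, here is a minimal numpy sketch of these rotations (my own illustration for this post, not code from the paper, and the function name is made up):

```python
import numpy as np

# Pauli matrices: the generators of rotations about the X, Y and Z axes.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(axis, angle):
    """Rotation of the Bloch sphere by `angle` radians about `axis`."""
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * axis

ket0 = np.array([1, 0], dtype=complex)  # the classical state 0 (top of the ball)
ket1 = np.array([0, 1], dtype=complex)  # the classical state 1 (bottom)

# A 180-degree rotation about X is the generalised NOT: it takes 0 to 1.
flipped = rotation(X, np.pi) @ ket0
print(np.allclose(np.abs(flipped), np.abs(ket1)))  # True (up to a phase)

# Any more complex single-qubit operation is a product of three axis rotations:
U = rotation(Z, 0.3) @ rotation(Y, 1.1) @ rotation(Z, -0.7)
```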

As with a classical computer, we also need some way to affect one qubit dependent on the state of another qubit. As it turns out, an operation called a CZ gate is sufficient for this. The CZ gate rotates one qubit 180 degrees about the Z axis if the other qubit is in state 1, and does nothing if it is in state 0. Conveniently, this gate is symmetric, so it has exactly the same net effect independent of which qubit you designate to be rotated.
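
In matrix form the CZ gate is just a sign flip on the 11 component, and the symmetry is easy to check directly (again a little sketch of my own, not from the paper):

```python
import numpy as np

# CZ in the two-qubit basis {00, 01, 10, 11}: a 180-degree Z rotation of one
# qubit (a sign flip on its 1 component) conditioned on the other being 1.
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Swapping which qubit plays which role leaves the gate unchanged.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
print(np.allclose(SWAP @ CZ @ SWAP, CZ))  # True
```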

The way our protocol works is based heavily on a result which says that you can perform any computation (classical or quantum) in the following way:

  1. Prepare a sufficiently large number of qubits in the state which lies on the equator of the Bloch sphere (fixing both absolute amplitudes to \(\frac{1}{\sqrt{2}}\)) and has phase +1. I will refer to this as the + state, since in the notation most physicists use it is written as \[\frac{1}{\sqrt{2}}\mid 0 \rangle + \frac{1}{\sqrt{2}}\mid 1 \rangle\]
  2. Arrange the qubits into a regular 2D lattice (such as a square grid), applying a CZ gate between each pair of neighbouring qubits.
  3. For every qubit i in order:
  • Rotate i about the Z axis by some angle which depends on exactly which gate you want to perform. Note that this angle can depend on previous measurement results.
  • Rotate i 90 degrees about the Y axis.
  • Measure whether i is 0 or 1. (A toy simulation of one such measurement step appears below.)
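
Here is that promised toy: the smallest possible “lattice”, simulated in numpy. The sign and angle conventions are my own choices for this sketch, so don’t treat it as the canonical definition, but it shows how measuring one qubit at an angle \(\phi\) leaves its neighbour carrying the rotated input, up to a known flip when the outcome is 1:

```python
import numpy as np

ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # the + state
CZ = np.diag([1, 1, 1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def Rz(angle):
    """Rotation about the Z axis (as a diagonal phase, up to global phase)."""
    return np.diag([1, np.exp(1j * angle)]).astype(complex)

# Steps 1 and 2 on a two-qubit lattice: an input qubit |psi> entangled
# by CZ with one fresh + state.
psi = np.array([0.6, 0.8j])                   # an arbitrary input state
state = CZ @ np.kron(psi, ket_plus)

# Step 3: measuring qubit 1 in the basis (|0> +/- e^{i phi}|1>)/sqrt(2) is
# the same as undoing a Z rotation by phi, rotating 90 degrees about Y,
# then measuring 0/1.
phi = 0.4
bra0 = np.array([1, np.exp(-1j * phi)]) / np.sqrt(2)      # outcome 0
bra1 = np.array([1, -np.exp(-1j * phi)]) / np.sqrt(2)     # outcome 1

out0 = np.kron(bra0, np.eye(2)) @ state       # qubit 2, given outcome 0
out1 = np.kron(bra1, np.eye(2)) @ state       # qubit 2, given outcome 1
out0 /= np.linalg.norm(out0)
out1 /= np.linalg.norm(out1)

# Either way qubit 2 ends up carrying H Rz(-phi)|psi>, with an extra X
# correction exactly when the outcome was 1.
expected = H @ Rz(-phi) @ psi
print(abs(np.vdot(expected, out0)))           # 1.0
print(abs(np.vdot(X @ expected, out1)))       # 1.0
```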

We noticed that a Z rotation applied just before a CZ operation has exactly the same effect as one applied just after it, and that a Z rotation before the CZ is identical to simply preparing some other state chosen from the equator of the Bloch sphere. The idea is then to get the server to do all of the hard parts: applying the CZ gates, doing the Z rotations and making measurements, while the client only does the easy parts: computing the angle for the measurement in each round, and preparing single qubits (this last part may seem like a physically hard operation to perform, but in fact the client needs little more than a very weak laser and a pair of 3D glasses from the cinema to accomplish this if the server is sufficiently powerful).

If you look at the list of operations that the server is expected to perform, only the Z rotations actually depend on the computation being performed. This is because it is possible to choose a fixed arrangement for the qubits (the lattice referred to earlier) which will work for any computation you may wish to perform. The lattice has only 2 parameters, length and breadth, which are analogous to the time and memory the computation uses. As these parameters can be artificially inflated, they do not reveal anything particularly useful about the computation. So only the Z rotations reveal information about the computation. However, if the user prepares the initial qubits in random states on the equator of the Bloch sphere, the net rotation the server needs to perform must both undo this initial random rotation and rotate by the appropriate angle required to correctly implement the desired calculation. This means that, depending on the initial random state chosen (of which the server is not aware), for any fixed angle required to implement the user’s computation, the server is equally likely to be asked to perform a Z rotation through any angle. Thus the angle the server is told is completely independent of the user’s calculation.

So none of the classical messages the user sends reveal any information about the calculation, but what about the qubits? Can’t they be measured to reveal information about their random rotation? If so, then the entire computation could be inferred by the server from a combination of the classical and quantum messages sent by the user. Fortunately for us, quantum mechanics comes to the rescue. Suppose that instead of choosing the initial state of the qubit and the angle sent to the server so that they exactly add up to the correct angle required to implement the desired computation, the user chooses them randomly so that they add up either to the correct angle or to the correct angle + 180 degrees. What would happen then? Well, this would be the same as randomly choosing one of two opposing points on the Bloch sphere. No matter which two points you pick, it is impossible to determine either of them by making a measurement on such a system. This is because no matter what axis you pick to measure along, you always have an equal chance of obtaining a 1 or a 0, independent of the choice of points. So if the user sends such randomized states to the server, together with measurement angles, there is nothing to be learned by any measurement of the system. Thus everything is perfectly hidden from a malicious server, even if they deviate from the protocol.
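
The claim about opposing points is easy to check numerically. A small sketch of my own (assuming the rotation conventions from the earlier snippets):

```python
import numpy as np

def equatorial(theta):
    """The equatorial state (|0> + e^{i theta}|1>)/sqrt(2)."""
    return np.array([1, np.exp(1j * theta)]) / np.sqrt(2)

def bloch_vector(psi):
    """The point on the Bloch sphere corresponding to state psi."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.real([np.vdot(psi, P @ psi) for P in (X, Y, Z)])

# A random equatorial state and its 180-degree opposite.
theta = 1.234
a, b = equatorial(theta), equatorial(theta + np.pi)

# Their Bloch vectors are exact opposites, so the 50/50 mixture of the two
# sits at the centre of the ball: measuring along ANY axis then gives 0 or 1
# with equal probability, revealing nothing about theta.
print(np.round(bloch_vector(a) + bloch_vector(b), 10))   # [0. 0. 0.]
```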

But doesn’t this randomization mess up the computation, resulting in nonsense? As it turns out, no, it doesn’t. This is because the extra rotation through 180 degrees is identical to flipping the measurement result the server gets when they measure the qubit. Since the measurement results are randomly flipped, the server can’t learn anything about the computation. However, the user knows whether or not each result has been flipped, and so can unflip it when they receive it from the server. As long as they base their actions on these corrected results, the outcome of the computation is identical to what it would have been had the flips never occurred at all. And since the measurement results the server obtains are flipped, even for the last qubits to be measured, the server has no knowledge of the outcome of the computation, though it can be trivially read by the user.
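
The whole blinding trick fits in a few lines of arithmetic. Here is a back-of-the-envelope sketch of my own (with angles written in units of 45 degrees, matching the 8 equatorial states used later in the experiment; the function name is invented for this post):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def announced_angle(k_phi):
    """Angle (in units of pi/4) the client announces for a secret angle k_phi."""
    k_theta = rng.integers(8)     # random initial rotation of the qubit
    r = rng.integers(2)           # random extra 180-degree flip
    return (k_phi + k_theta + 4 * r) % 8, r

# Whatever the secret angle is, the announced angle is uniformly distributed
# over all 8 possibilities, so by itself it tells the server nothing.
counts = Counter(announced_angle(k_phi=2)[0] for _ in range(80000))
print(sorted(counts.items()))     # roughly 10000 of each value 0..7

# The client remembers r and unflips each reported outcome: s_true = s ^ r.
```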

Since the server doesn’t know what computations are being performed, it is possible to set up traps to detect if the server deviates from the protocol, but this isn’t necessary to ensure blindness, and is beyond the scope of this blog post.

Now that I’ve explained the theory, I’ll tell you a little about the experiment. To actually see our protocol become something real, we spent much of the last two years working with experimentalists Stefanie Barz, Philip Walther and Anton Zeilinger to find a way to implement the protocol using photons. Photons make perfect qubits, since their two polarization states can be used to encode a single qubit. In practical terms they would be ideal for a mature implementation of blind quantum computation, since photons can relatively easily be transferred over large distances, allowing the user to be hundreds or thousands of kilometers distant from the server. Additionally, the group already had significant success in implementing the measurement-based model of quantum computing on which our protocol is based.

Optical setup

The final set-up used (see above), like every other first demonstration, makes some compromises, most notably in giving the client something a little more sophisticated than a pocket laser and 3D glasses, since this allowed us to simplify the server somewhat. Nonetheless, it is a pretty faithful implementation of our original idea. With the 4 photonic qubits we had, we were able to hide all of the basic building blocks for a larger scale computation, meaning that we were able to demonstrate blind single-qubit and two-qubit gates, out of which an arbitrary computation can be built. We were also able to hide instances of two quantum algorithms: the Deutsch-Jozsa algorithm and Grover’s algorithm.

Lastly, and most importantly, we were able to show that virtually no information could have been leaked by the apparatus. Although we have a theoretical proof, as outlined above, actual experimental apparatus isn’t as well behaved as our theoretical models, and so we needed to show that our apparatus wasn’t leaking information by introducing some subtle error on the quantum states. This may sound impossible, since how do you prove a negative? By running the experiment many times over for each and every possible choice the user could make, we were able to use measurements made on different runs to infer the real state received by the server. This was possible because we knew exactly what the user’s choice of randomness was in each case, and so weren’t subject to the blindness which affects the server. With all of these measurements made, and the true states inferred, we applied a result known as Holevo’s theorem to calculate the maximum amount of information that could be extracted by a malicious server who knew the exact peculiarities of our experimental set-up. Philip and Stefanie are so good at their jobs that the system is pretty accurate, and the amount of information which could possibly have been leaked was no more than 0.169 of a bit. Even if the server could somehow learn something about the user’s desired calculation or random choices, the amount they could learn from measurements on the quantum state rises only slightly, to 0.185 of a bit. In these experiments we were using 8 different initial states (i.e. 3 bits) for each blind qubit, which is sufficient for any calculation, and so the amount of leakage is tiny. Not too bad for a first attempt, I think!
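
For the curious, here is roughly how Holevo’s theorem turns a set of states into a leakage bound. This is only a toy of my own devising, not the paper’s actual analysis (which works with the tomographically reconstructed experimental states); the function names and the miscalibration parameter are invented for illustration:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy of a density matrix, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def holevo(states):
    """Holevo bound (bits) on information about a uniformly random index."""
    avg = sum(states) / len(states)
    return entropy(avg) - sum(entropy(r) for r in states) / len(states)

def equatorial(theta):
    """Density matrix of the equatorial state (|0> + e^{i theta}|1>)/sqrt(2)."""
    psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

def server_view(phi, announced, flip=np.pi):
    """The state the server holds for secret angle phi and a fixed announced
    angle, averaged over the hidden 180-degree flip bit."""
    return (equatorial(announced - phi) + equatorial(announced - phi - flip)) / 2

secret_angles = [k * np.pi / 4 for k in range(8)]

# An ideal apparatus: each antipodal pair averages to the maximally mixed
# state, identical for every secret angle, so the bound is 0 bits.
print(holevo([server_view(phi, 0.0) for phi in secret_angles]))

# A slightly miscalibrated flip (175 instead of 180 degrees) leaks a small
# but nonzero fraction of a bit -- the kind of quantity the experiment bounds.
print(holevo([server_view(phi, 0.0, flip=np.radians(175)) for phi in secret_angles]))
```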

QC in the NYT

Wednesday, December 7th, 2011

Scott Aaronson has a very nice piece in yesterday’s New York Times entitled “Quantum Computing Promises New Insights, Not Just Supermachines”, which I’d encourage any non-specialist with an interest in quantum computing to read. He does a great job dispelling myths about quantum computing and explaining why we should be interested in quantum computing.

I agree with virtually everything Scott says in the article, and I think he has done a fantastic job of accurately explaining quantum computing and its merits without resorting to the kind of fudges found in many popularizations. The one minor point I would disagree with Scott on is where he suggests that decoherence is the limiting factor that currently prevents us from building large scale quantum computers. In my view the situation is more complicated, and depends on the physical implementation you have in mind. Several implementations have demonstrated all the required unitary operations with accuracies significantly exceeding what is necessary for fault-tolerant computation. However, these systems usually suffer from other problems, such as an inability to individually address and manipulate qubits, or difficulties with cooling. Indeed it is problems with cooling to a fixed (non-random) initial state that have limited liquid state NMR quantum computing, which provided the platform for many of the early demonstrations of quantum computing. This is only a very minor point though, and does not at all detract from Scott’s excellent article.

Chronicling Fukushima Daiichi

Saturday, December 3rd, 2011

The Institute of Nuclear Power Operations recently released their report on the events surrounding the accident earlier this year at TEPCO’s Fukushima plant.  As a detailed and technical account of the events it makes for an insightful read.  The report gives a detailed timeline, from the status of the plant’s six reactors prior to the earthquake through to the point at which power was fully restored to the site 12 days later.

Some points stand out.  First, though it was clear at the time that the tsunami’s damage to the backup power systems played a critical role in the disaster, the motion of the earthquake itself was also outside the design parameters.  The whole site was simply not conceived with such an extreme earthquake in mind.  Second, it seems that a total loss of power to the plant wasn’t considered.  Not only was there no pumping of coolant to the reactors under these conditions, but there was no lighting or monitoring equipment functioning in the control rooms of the unit 1 and 2 reactors.  There was really no way to know what was going on until the workers figured out, on the fly, a way to run the equipment off of spare batteries.  Also, it seems the explosions in units 1, 3, and 4 were caused by a buildup of hot hydrogen in the buildings housing the reactors, not in the primary containment, which was vented successfully.  Due to the power loss a number of ventilation fans were not running.  The fact that the reactors and primary containment were not the source of the explosions, nor were significantly damaged by them, is the reason why the radiation release was less than one twentieth that of the Chernobyl disaster.  (For some interesting comparisons, check this out.)  At Chernobyl the reactor itself exploded after what investigators believe was a very brief runaway chain reaction in a critical mass of molten fuel.

The report also paints the workers as achieving some heroic feats under extreme stress.  An excerpt:

“Because of the tsunami and earthquake damage to the surrounding communities, little outside assistance was initially available. Some workers lost their homes and families to the earthquake and tsunami, yet continued to work. … Some of these workers remain on site today, still working to keep the reactors cool and prevent the spread of contamination.”

This incident demonstrates that even with dedicated workers and extreme care for safety, unforeseen external events can lead to dangerous situations.  Some newer reactor designs (small modular reactors) are very interesting for reducing these risks by being entirely self-contained and sufficiently cooled by passive convection alone.  In a sense the reaction provides the power to cool itself and does not rely on the operation of any mechanical systems.  It remains to be seen whether these are more economically viable than current plants.

Wimps, Silvio, and the Masses.

Sunday, November 13th, 2011

Since last I blogged, those pesky neutrinos travelled faster than light, the fine structure constant varied spatially, 67 WIMPs turned up, making dark matter a slightly lighter shade, and our favourite Italian comedian Silvio resigned as he tried to bring down the EU, after the Greeks cracked (to use my most venerable cycling parlance). Has the world gone mad?!

Fortunately some clever people are thinking ahead to the time when we physicists won’t have any funding to play with. Yes, the time is coming when the world’s governments won’t have enough spare cash to spend on us, so we’ll have to look elsewhere. Enter crowdfunding, and the #SciFund Challenge. Their mission: to fund 49 science projects purely on public donations. So far they have raised over 40 thousand US dollars of hard-earned cash in just 13 days! It’s an interesting idea in many ways: the public has to find the projects engaging and important enough to even think of giving their money up. Why donate to science when you could, for example, help fund a water well project in Africa? It’s really a challenge of communication, of convincing people of the value of doing science. Let’s see how this new model of funding takes off.

Quantifying the desire for fast neutrinos

Wednesday, October 5th, 2011

So what do you think about these particles that travel faster than light? Even as the news about superluminal neutrinos gets to be two weeks old, that question still pops up in discussions with friends and colleagues alike. And during the latest of these discussions, I noticed a strange behavior. Physicists tend to argue that the experiment must be flawed, yet hope that it is not. This behavior has been noticed before by sources as different as The Economist and the XKCD comic strip. And since you should never really make statements without solid data, I’d like to put some numbers to that phenomenon.

More speculation and anticipation…

Tuesday, October 4th, 2011

The Washington Post has also jumped on the quantum bandwagon for the physics Nobel Prize.  Is this the year that quantum information gets the nod?

It’s that time of the year again…

Monday, October 3rd, 2011

Nobel season is upon us again. The winners of the Nobel prize in Physiology or Medicine were announced earlier today, with the prize to be split between Bruce Beutler, Jules Hoffmann and Ralph Steinman. This has already caused some confusion, as USA Today is reporting that Steinman died a few days ago, and the prize is not usually awarded posthumously. This is no doubt a hard time for Dr. Steinman’s family and friends, and so I will not dwell on this aspect of the announcements.

For me, of course, the most interesting announcement will be tomorrow, when the winners of this year’s Nobel prize in Physics will be announced.  From the perspective of someone working in quantum computing, the prize announcements are becoming increasingly exciting, as more and more researchers from this area are mentioned in connection with the prize. Each year Thomson Reuters names what it calls “citation laureates”, which are essentially its predictions for who will win a Nobel prize. The full list of candidates for the physics prize whom they have predicted over the years can be found here. It is notable how many of the people on the list are associated with quantum computing research directly, not to mention all the others associated with related fields. In particular, the list includes Alain Aspect, Ignacio Cirac, John Clauser, Anton Zeilinger and Peter Zoller. Of these, Aspect, Clauser and Zeilinger have also won the Wolf Prize in Physics, which is a fairly good predictor of future Nobels (of the 26 prizes awarded between 1978 and 2010, fourteen winners have gone on to win Nobel prizes). Certainly these aren’t the only quantum researchers worthy of consideration for the prize, and if you scan the predictions on various physics blogs you will see others being mentioned.

Predicting Nobel Prizes is a tricky business, and ultimately the prize comes down to a decision by a small number of people. The secrecy surrounding nominations further means that we never know who has been considered, as that information is kept sealed for 50 years.

Perhaps, though, this year will be a quantum year…

 

(P.S. If you think you know better, Chad Orzel is taking bets here.)