Archive for December, 2011

Communication complexity and non-negative rank

Wednesday, December 7th, 2011

One of my favorite topics is communication complexity.  In this model, two parties, usually called Alice and Bob, want to answer a question, but neither one has sufficient information to do this on their own—only by sharing what they know can they answer it.  The question is how much information they need to exchange in order to do this.  For example, imagine that Alice and Bob want to arrange a meeting for next week, but both have busy schedules. How long can this conversation go on?

Alice: “Are you free Monday at 2pm?”

Bob: “No.  Monday 3pm good for you?”

Alice: “No…”

It might be that the free slots in Alice’s schedule are completely disjoint from the free slots in Bob’s schedule and they can’t meet at all.  In fact this function, determining if Alice and Bob can meet, is known as disjointness and is the most important function in all of communication complexity.  And for this problem it turns out an optimal protocol is for Alice to simply send her whole schedule to Bob right off the bat, so that he can check if they have a common free slot.
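
The trivial protocol above can be sketched in a few lines of code (a toy illustration only — the slot names and function names here are invented, and the "message" is a set rather than the n-bit string it would be in the formal model):

```python
# A toy sketch of the trivial optimal protocol for disjointness:
# Alice sends her entire set of free slots as one message, and Bob
# checks whether it intersects his own schedule.

def alice_message(alice_free):
    # Alice's whole schedule is the message -- n bits in the worst case.
    return frozenset(alice_free)

def bob_answer(message, bob_free):
    # Bob announces whether a common free slot exists.
    return bool(message & set(bob_free))

alice = {"Mon 2pm", "Tue 10am", "Thu 4pm"}
bob = {"Mon 3pm", "Tue 10am"}
print(bob_answer(alice_message(alice), bob))  # True -- Tue 10am works
```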

Mathematically, communication complexity is modeled by saying that Alice and Bob want to compute a function \(f(x,y)\) where Alice is given the input \(x\) and Bob the input \(y\), and the question is how much they have to communicate to do this for the worst-case input \(x,y\).  There are by now many variants of this model, for example where Alice and Bob can flip coins and only have to get the answer right with high probability, or can exchange quantum messages rather than classical strings.  Today at the CQT annual event we even heard about a model where the communication was via water and garden hoses!  Here we will stick to the plain old dry deterministic model.

The function \(f(x,y)\) is naturally viewed as a matrix with rows labeled by the inputs to Alice, columns labeled by inputs to Bob, and the \((x,y)\) entry being \(f(x,y)\).  In the most common case, \(f(x,y)\) is a boolean function, so the matrix entries are just 0 or 1.  This matrix is known as the communication matrix, and analysis of matrix properties like rank plays a large role in trying to figure out the communication complexity of a function.

For example, in the deterministic case it is known that the logarithm of the rank of the communication matrix is a lower bound on the communication complexity.  One of the biggest open problems in communication complexity, known as the log rank conjecture, is whether or not this bound is always close to the right answer—is the communication complexity always upper bounded by a polynomial in the logarithm of the rank of the communication matrix?  There has not been much progress on this question in the last 15 years, although just recently an interesting new approach has been suggested.
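
As a small sanity check of the lower bound (a toy computation, using the equality function as an example — a choice made here for illustration):

```python
import numpy as np

# The communication matrix of EQUALITY on n-bit inputs has
# M[x, y] = 1 iff x == y, i.e., it is the 2^n-by-2^n identity matrix.
# Its rank is the full 2^n, so the log-rank lower bound gives n bits,
# matching the true deterministic communication complexity up to
# additive constants.
n = 3
M = np.eye(2 ** n, dtype=int)

lower_bound = np.log2(np.linalg.matrix_rank(M))
print(lower_bound)  # 3.0
```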

The log rank conjecture is known to be true…for a different kind of rank.  The usual rank of a matrix \(M\) is defined as the minimum \(k\) such that \(M\) can be written as a sum of \(k\) rank-one matrices \[M=\sum_{i=1}^k u_i v_i^t.\]

For a non-negative matrix, like the communication matrix of a boolean function, there is always such a decomposition where the \(u_i,v_i\) are non-negative vectors (though possibly with a larger \(k\)).  The non-negative rank of \(M\) is the minimum \(k\) such that \(M\) can be written as the sum of \(k\) non-negative rank-one matrices.  It turns out that the same proof that shows that the logarithm of the rank is a lower bound on communication complexity also shows that the logarithm of the non-negative rank is a lower bound on communication complexity.  This time, however, Lovász proved that the logarithm of the non-negative rank of the communication matrix times the logarithm of the non-negative rank of the complement of the communication matrix is an upper bound on communication complexity.  (The complement of the communication matrix flips 0s to 1s and 1s to 0s.  These two matrices have essentially the same rank, but can have very different non-negative ranks.)
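
The two notions of rank really can differ.  A standard small example (the non-negative-rank claim is stated here without proof; it follows from a rectangle-covering argument):

```python
import numpy as np

# A standard 4x4 boolean matrix whose ordinary rank and non-negative
# rank differ.
M = np.array([
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
])

# Ordinary rank is 3, since row1 - row2 + row3 = row4:
assert (M[0] - M[1] + M[2] == M[3]).all()
print(np.linalg.matrix_rank(M))  # 3

# Its non-negative rank, however, is 4: any non-negative rank-one
# summand must be supported on an all-ones rectangle of M, and no three
# such rectangles can cover all eight 1-entries.
```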

The problem with all this is that the non-negative rank is difficult to compute, both theoretically and practically.  Theoretically, unlike the rank, which can be computed quickly, the non-negative rank is NP-hard to compute.  And in practice, we have depressingly few tools available to lower bound the non-negative rank.  A nice, simply stated open question is to determine the non-negative rank of the \(n\)-by-\(n\) matrix whose \(i,j\) entry is \((i-j)^2\).  The regular rank of this matrix is 3, but it is conjectured that the non-negative rank is \(\Omega(n)\).
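
The rank-3 claim for this matrix is easy to see and to check (a quick numerical verification; the value \(n=8\) is an arbitrary choice for illustration):

```python
import numpy as np

# The n-by-n matrix with (i, j) entry (i - j)^2 from the open question
# above.  Expanding (i - j)^2 = i^2 - 2*i*j + j^2 writes it as a sum of
# three rank-one matrices, so its ordinary rank is at most 3 (and is
# exactly 3 once n >= 3).
n = 8
i = np.arange(n)
M = (i[:, None] - i[None, :]) ** 2

print(np.linalg.matrix_rank(M))  # 3
```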

QC in the NYT

Wednesday, December 7th, 2011

Scott Aaronson has a very nice piece in yesterday’s New York Times entitled “Quantum Computing Promises New Insights, Not Just Supermachines”, which I’d encourage any non-specialist with an interest in quantum computing to read. He does a great job dispelling myths about quantum computing and explaining why we should be interested in quantum computing in the first place.

I agree with virtually everything Scott says in the article, and I think he has done a fantastic job of accurately explaining quantum computing and its merits without resorting to the kind of fudges found in many popularizations. The one minor point I would disagree with Scott on is where he suggests that decoherence is the limiting factor that currently prevents us from building large-scale quantum computers. In my view the situation is more complicated, and depends on the physical implementation you have in mind. Several implementations have demonstrated all the unitary operations needed for fault-tolerant computation, with accuracies significantly exceeding what fault tolerance requires. However, these systems usually suffer from other problems, such as an inability to individually address and manipulate qubits, or difficulties with cooling. Indeed, it is problems with cooling to a fixed (non-random) initial state that have limited liquid-state NMR quantum computing, which provided the platform for many of the early demonstrations of quantum computing. This is only a very minor point though, and does not at all detract from Scott’s excellent article.

Chronicling Fukushima Daiichi

Saturday, December 3rd, 2011

The Institute of Nuclear Power Operations recently released their report of the events surrounding the accident earlier this year at TEPCO’s Fukushima plant.  As a detailed and technical account of the events it makes for an insightful read.  The report gives a detailed timeline, from the status of the plant’s six reactors prior to the earthquake through to the point at which power was fully restored to the site 12 days later.

Some points stand out.  First, though it was clear at the time that the tsunamis’ damage to the backup power systems played a critical role in the disaster, the motion of the earthquake itself was also outside the design parameters: the whole site was simply not conceived with such an extreme earthquake in mind.  Second, it seems that a total loss of power to the plant wasn’t considered.  Not only was there no way to pump coolant to the reactors under these conditions, but there was no lighting or monitoring equipment functioning in the control room of the units 1 and 2 reactors.  There was really no way to know what was going on until the workers figured out, on the fly, a way to run the equipment off of spare batteries.  Also, it seems the explosions in units 1, 3, and 4 were caused by a buildup of hot hydrogen in the buildings housing the reactors, not in the primary containment, which was vented successfully.  Due to the power loss a number of ventilation fans were not running.  The fact that the reactors and primary containment were neither the source of the explosions nor significantly damaged by them is the reason why the radiation release was less than one twentieth that of the Chernobyl disaster.  (For some interesting comparisons, check this out.)  At Chernobyl the reactor itself exploded after what investigators believe was a very brief runaway chain reaction in a critical mass of molten fuel.

The report also paints the workers as achieving some heroic feats under extreme stress.  An excerpt:

“Because of the tsunami and earthquake damage to the surrounding communities, little outside assistance was initially available. Some workers lost their homes and families to the earthquake and tsunami, yet continued to work. … Some of these workers remain on site today, still working to keep the reactors cool and prevent the spread of contamination.”

This incident demonstrates that even with dedicated workers and extreme care for safety, unforeseen external events can lead to dangerous situations.  Some newer reactor designs (small modular reactors) are very interesting for reducing these risks by being entirely self-contained and sufficiently cooled by passive convection alone.  In a sense the reaction provides the power to cool itself and does not rely on the operation of any mechanical systems.  It remains to be seen whether these are more economically viable than current plants.

What’s going on?

Thursday, December 1st, 2011

Below are two photos I took at Boat Quay last weekend. Does anyone know what was going on? My first thought was that it was a little late in the year for summer eights.