{\huge Graduate Student Traps}
Can we help avoid parallel repetition of mistakes?
\vspace{.5in}
Irit Dinur has recently again shown a wonderful skill at re-conceptualizing an area that had seemingly been well worked out. A notable previous instance was her re-casting of the proof of the PCP Theorem as a progressive amplification. Now she and David Steurer have posted a new paper whose title, ``Analytical Approach to Parallel Repetition,'' introduces a new framework. The subject of parallel repetition, which they call ``a basic product operation for one-round two-player games,'' is unique in having its genesis in a mistake made in a paper---a trap of automatic thinking. Lance Fortnow and Mike Sipser thought that executing multiple instances of a randomized protocol in parallel would give the same exponential reduction in the probability of overall failure as executing them independently in sequence, overlooking how the strategies in the parallel case could be dependent and exploit this.
Today we talk about similar traps that people can fall into even at advanced graduate level. How might they be avoided?
Truth-in-blogging note: this post is really about a different case of products and independence, and most of it was written months ago. It was lacking an intro section with someone to feature according to our ``blog invariant,'' and we also wanted a few short examples of ``graduate-student traps'' in computational theory and mathematics before progressing to the main one. The parallel repetition example came not only first to mind, but also second and third and fourth... as we struggled to think of more good ones. Lance's haute faute has already been mentioned a few times on this blog, and I thought it would make a tiresome and repetitious parallel to my own ``trap.'' It didn't help that the neat example I saw online years ago which furnished the phrase ``graduate-student trap''---but which I didn't preserve and forgot---has vanished into unsearchability.
I was deciding between leading with Nicholas of Cusa for his advice on not pretending to have completed knowledge, or alternatively with a colleague in Buffalo who compiled a good graduate-student advice list. Lance has similar advice, and when looking for it I spotted the mention of Dinur in his Twitter feed---actually Lance's re-tweet of one by my former student D. Sivakumar. Voilà---Lance's example it was. Thanks, Irit.
Traps
What is a trap? Pitfalls and paradoxes abound in mathematics and the sciences, and surmounting them is just part of acquiring the literature. Sometimes it is a confounding of preconceived expectations, but it is hard to find a way of defining such expectations that works for everybody or even most people. What makes a trap, in my opinion, is when there are concrete indications in the concepts, in their contextual use, in their notation, or in the literature itself that run counter to the truth. Here are what strike Dick and me as a few simple but significant examples:
$latex {\bullet}&fg=000000$ Square root is not a single-valued function on the complex numbers. It is written like a function, but isn't one there. Here is an example of what you can ``do'' by conveniently forgetting this: $latex {-1/1 = 1/-1}&fg=000000$, so take square roots of both sides. You get
$latex \displaystyle \frac{\sqrt{-1}}{\sqrt{1}} = \frac{\sqrt{1}}{\sqrt{-1}}\quad\text{so}\quad \frac{i}{1} = \frac{1}{i}\quad\text{so}\quad i^2 = 1.&fg=000000$
This contradicts the definition $latex {i^2 = -1}&fg=000000$.
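For concreteness, here is a tiny Python check using the standard cmath module, which returns the principal square root. It confirms that the two sides come out as $latex {i}&fg=000000$ and $latex {-i}&fg=000000$, so the step that breaks is the ``rule'' $latex {\sqrt{a/b} = \sqrt{a}/\sqrt{b}}&fg=000000$:

    # Principal-branch square roots: cmath.sqrt(-1) == 1j.
    import cmath

    lhs = cmath.sqrt(-1) / cmath.sqrt(1)   # i/1 = i
    rhs = cmath.sqrt(1) / cmath.sqrt(-1)   # 1/i = -i

    print(lhs, rhs)     # i versus -i: the two sides differ
    print(lhs == rhs)   # False: sqrt(a/b) = sqrt(a)/sqrt(b) fails here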
$latex {\bullet}&fg=000000$ Not all matrices are diagonalizable. Since even many singular matrices are diagonalizable, it is easy to forget this is not true in general. If it were true, then there would be a really quick proof that a matrix $latex {A}&fg=000000$ always satisfies its characteristic polynomial $latex {p}&fg=000000$. Namely, let $latex {A = P D P^{-1}}&fg=000000$ with $latex {D}&fg=000000$ the diagonal matrix of eigenvalues. Then on substituting the right-hand side into the formula $latex {p(A)}&fg=000000$, all the $latex {P}&fg=000000$'s and $latex {P^{-1}}&fg=000000$'s cancel except for one $latex {P}&fg=000000$ in front and one $latex {P^{-1}}&fg=000000$ in back. The rest is the component-wise evaluation of $latex {p(\lambda)}&fg=000000$ on each eigenvalue $latex {\lambda}&fg=000000$ in $latex {D}&fg=000000$, which vanishes because each $latex {\lambda}&fg=000000$ is a root of $latex {p}&fg=000000$, leaving the all-zero matrix.
Well, this is not a bad error to make. Every matrix $latex {A}&fg=000000$ has arbitrarily close perturbed forms $latex {A'}&fg=000000$ that are diagonalizable, indeed have distinct eigenvalues. The above proof gives $latex {p'(A') = 0}&fg=000000$, where the characteristic polynomial $latex {p'}&fg=000000$ of $latex {A'}&fg=000000$ is coefficientwise close to $latex {p}&fg=000000$. Continuity then implies $latex {p(A) = 0}&fg=000000$.
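Here is a quick numerical illustration of this perturbation argument, assuming NumPy is available; the Jordan block below is the standard non-diagonalizable example, and a tiny perturbation splits its repeated eigenvalue:

    import numpy as np

    # The 2x2 Jordan block: eigenvalue 0 with multiplicity 2, but only one
    # eigenvector, so it is not diagonalizable.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

    # A tiny perturbation gives distinct eigenvalues, hence diagonalizability.
    eps = 1e-8
    A_pert = A + np.array([[0.0, 0.0],
                           [eps, 0.0]])
    print(np.linalg.eigvals(A_pert))   # roughly +1e-4 and -1e-4, distinct

    # Cayley-Hamilton still holds for A itself: its characteristic
    # polynomial is p(x) = x^2, and indeed A @ A is the zero matrix.
    print(A @ A)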
$latex {\bullet}&fg=000000$ $latex {\mathbb{Z}_{p^k}}&fg=000000$ is not the same as the field $latex {GF(p^k)}&fg=000000$ when $latex {k \geq 2}&fg=000000$. The former is not a field, as it has zero divisors. The multiplicative group of units, formed by the elements that are not multiples of $latex {p}&fg=000000$, is not a field either. But this is again not always a bad error to make, even in crypto. A lot of properties and problems are similar between the two structures.
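To make the contrast concrete, here is a small hand-rolled Python sketch, with $latex {GF(4)}&fg=000000$ built as polynomials over $latex {GF(2)}&fg=000000$ modulo $latex {x^2+x+1}&fg=000000$: the element $latex {2}&fg=000000$ is a zero divisor in $latex {\mathbb{Z}_4}&fg=000000$, while $latex {GF(4)}&fg=000000$ has none:

    # Z_4: integers mod 4. The element 2 is a zero divisor: 2*2 = 4 = 0 mod 4.
    print((2 * 2) % 4)   # 0, so Z_4 is not a field

    # GF(4): polynomials a + b*x over GF(2), reduced mod x^2 + x + 1.
    # Represent an element as the pair (a, b).
    def gf4_mul(u, v):
        a, b = u
        c, d = v
        # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, and x^2 = x + 1.
        const = (a * c + b * d) % 2
        lin   = (a * d + b * c + b * d) % 2
        return (const, lin)

    nonzero = [(1, 0), (0, 1), (1, 1)]
    # No product of nonzero elements is zero, so GF(4) has no zero divisors.
    print(all(gf4_mul(u, v) != (0, 0) for u in nonzero for v in nonzero))  # True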
These are really at high school or undergraduate level, before the research stage. What kind of traps matter at research level?
My Trap
My own strongest feeling of falling into a ``graduate student trap'' came in October 2006, as I began my work on statistical claims of cheating with computers at chess that had arisen during the world championship match and before. I modeled a human player and a computer as distributions $latex {P}&fg=000000$ and $latex {Q}&fg=000000$ over the choices of available moves in game positions. Cheating would depend on how close the distribution of played moves was to $latex {P}&fg=000000$ vis-à-vis $latex {Q}&fg=000000$, so I wanted a suitable distance measure $latex {d(P,Q)}&fg=000000$ between distributions. Part of the reason to model the computer as a distribution is to allow not only for different chess-engine program versions and parameter settings, but also for a steady amount of small variation caused by hash collisions---as I described here and mainly here.
I decided to assume that for two different (sets of) positions $latex {S_1}&fg=000000$ and $latex {S_2}&fg=000000$, the player's distributions $latex {P_1,P_2}&fg=000000$ would be independent, and similarly $latex {Q_1,Q_2}&fg=000000$ for the computer. This makes the joint distributions $latex {P}&fg=000000$ and $latex {Q}&fg=000000$ on $latex {S_1 \cup S_2}&fg=000000$ satisfy $latex {P = P_1 \times P_2}&fg=000000$, $latex {Q = Q_1 \times Q_2}&fg=000000$. So that I could group game turns as I wished, I wanted the distance measure to be additive, namely
$latex \displaystyle d(P,Q) = d(P_1 \times P_2, Q_1 \times Q_2) = d(P_1,Q_1) + d(P_2,Q_2). &fg=000000$
The first distance measure I considered, called Kullback-Leibler (K-L) divergence, is defined (on discrete domains $latex {X}&fg=000000$) by

$latex \displaystyle \kappa(P || Q) = \sum_{x \in X}P(x)\ln\frac{P(x)}{Q(x)}. &fg=000000$
The Wikipedia page says straight out that $latex {\kappa}&fg=000000$ is additive. Great, I thought. Unfortunately, $latex {\kappa}&fg=000000$ is not symmetric, and of more concern to me, $latex {\kappa}&fg=000000$ approaches $latex {+\infty}&fg=000000$ whenever there are events $latex {x}&fg=000000$ for which $latex {Q(x)}&fg=000000$ is tiny but $latex {P(x)}&fg=000000$ is not. In chess, such events would be moves the computer recognizes as bad but that the player still falls into, or is tempted by. This was a concern because chess positions can have many bad moves, so that the ``tail'' of the move distribution could distort the value of $latex {\kappa}&fg=000000$. I could switch $latex {P}&fg=000000$ and $latex {Q}&fg=000000$ around to avoid this, but then bad moves shunned by players would cause distortion.
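The additivity itself is easy to check numerically. Here is a minimal Python sketch, with toy distributions of my own choosing rather than chess data, verifying $latex {\kappa(P_1 \times P_2 \,||\, Q_1 \times Q_2) = \kappa(P_1||Q_1) + \kappa(P_2||Q_2)}&fg=000000$ on one example:

    from math import log
    from itertools import product

    def kl(P, Q):
        # Kullback-Leibler divergence kappa(P || Q) on a finite domain,
        # assuming Q(x) > 0 wherever P(x) > 0.
        return sum(p * log(p / q) for p, q in zip(P, Q) if p > 0)

    def prod_dist(P1, P2):
        # Joint distribution of independent draws from P1 and P2.
        return [a * b for a, b in product(P1, P2)]

    P1, Q1 = [0.7, 0.3], [0.5, 0.5]
    P2, Q2 = [0.6, 0.3, 0.1], [0.4, 0.4, 0.2]

    lhs = kl(prod_dist(P1, P2), prod_dist(Q1, Q2))
    rhs = kl(P1, Q1) + kl(P2, Q2)
    print(lhs, rhs)   # the two values agree up to rounding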
Is Jensen-Shannon Divergence Additive?
Applications employing distributional divergence measures were new to me, but it so happened that my department's Distinguished Alumni Speaker that month knew something about them. After hearing my issues, he---I won't name the ``guilty party,'' though I already did---suggested using Jensen-Shannon (J-S) divergence instead. J-S reduces this distortion by employing the interpolated distribution $latex {R = \frac{1}{2}P + \frac{1}{2}Q}&fg=000000$. Then it is defined by
$latex \displaystyle \eta(P,Q) = \frac{1}{2}\kappa(P || R) + \frac{1}{2}\kappa(Q || R). &fg=000000$
This always gives finite values, and is symmetric---hence the use of comma not $latex {||}&fg=000000$. Analogous to how the sum of squared differences, which is obviously additive on product vectors, is the square of the Euclidean metric, $latex {\eta}&fg=000000$ is also the square of a metric. All this, plus the absence of contrary information, plus the frequent words ``J-S is a symmetrized and smoothed version of K-L,'' naturally made me assume that $latex {\eta}&fg=000000$ was additive. Grateful for the tip, I happily started on the machinery to apply it for chess.

A week later I started drafting a paper describing my concept and model, and decided it would be good to include a proof that J-S divergence is additive. Then, only then, is when I discovered with an electric shock:
It isn't.
I'll leave the task of actually constructing counterexamples to the reader, but here's an intuition. It uses a generalizing trick that reminds me of one by Bob Vaughan that we covered last July. For any $latex {\lambda \in (0,1)}&fg=000000$ define $latex {R' = \lambda P + (1 - \lambda)Q}&fg=000000$ and
$latex \displaystyle \eta'_{\lambda}(P,Q) = \lambda \kappa(P||R') + (1 - \lambda)\kappa(Q||R'). &fg=000000$
A little reflection may convince you that this cannot be additive for all $latex {\lambda}&fg=000000$. Hence its being additive for $latex {\lambda = \frac{1}{2}}&fg=000000$, which yields $latex {\eta}&fg=000000$, would be an ``accident.'' Finally thinking how $latex {P}&fg=000000$ and $latex {Q}&fg=000000$ themselves can give-and-go with $latex {\lambda}&fg=000000$ gives the inkling that the accident doesn't happen.
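If you would rather let a computer hint at the answer first, a few lines of Python suffice; the distributions below are arbitrary small examples, and the only point is that the two sides fail to match:

    from math import log
    from itertools import product

    def kl(P, Q):
        # Kullback-Leibler divergence on a finite domain,
        # assuming Q(x) > 0 wherever P(x) > 0.
        return sum(p * log(p / q) for p, q in zip(P, Q) if p > 0)

    def js(P, Q):
        # Jensen-Shannon divergence eta(P, Q) via the mixture R = (P + Q)/2.
        R = [(p + q) / 2 for p, q in zip(P, Q)]
        return 0.5 * kl(P, R) + 0.5 * kl(Q, R)

    def prod_dist(P1, P2):
        # Joint distribution of independent draws from P1 and P2.
        return [a * b for a, b in product(P1, P2)]

    P1, Q1 = [0.9, 0.1], [0.1, 0.9]
    P2, Q2 = [0.8, 0.2], [0.3, 0.7]

    lhs = js(prod_dist(P1, P2), prod_dist(Q1, Q2))   # about 0.42
    rhs = js(P1, Q1) + js(P2, Q2)                    # about 0.50
    print(lhs, rhs)   # the two sides differ, so eta is not additive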
How Clear in the Literature?
I can put this in the form of a ``blog-beg,'' called a bleg:
Can you find an easily-accessed source that says clearly that Jensen-Shannon divergence is not additive?
As of this writing, the Wikipedia page on J-S still does not have such a statement. Adding one yourself would be cheating. In 2006 I did not find one elsewhere, even in a couple of texts. My one-hour trial by Google when I first drafted this post last summer found a 2008 paper titled ``Nonextensive Generalizations of the Jensen-Shannon Divergence.'' This clued me in that nonextensive is an antonym of additive. So the authors' generalizations are not additive, but what about the original $latex {\eta}&fg=000000$?
Another paper I found has the promising title ``Properties of Classical and Quantum Jensen-Shannon Divergence,'' and its first author and I have Harry Buhrman as a common coauthor. It defines J-S with a bang in the opening sentence, states some generalizations of J-S, and (still on page 1) says the magic word:
Shannon entropy is additive in the sense that the entropy of independent random variables, defined as the entropy of their joint distribution, is the sum of their individual entropies. (emphasis in original)
But the next sentence brings up the different topic of Rényi entropy, and after a mention of ``non-extensive (i.e. nonadditive) generalizations'' of J-S it goes into quantum.
The 2008 paper picks up the thread in its very title, ``Nonextensive Generalizations of the Jensen-Shannon Divergence.'' The point of the first two words must be that the original J-S is additive, yes? It's the generalizations that are non-additive. Right? The paper's abstract says it builds something called the Jensen-Tsallis $latex {q}&fg=000000$-difference, ``which is a nonextensive generalization of the JSD.'' So the original J-S is extensive, then? After defining additivity, it brings up the generalizations in which
... the additivity property is abandoned, yielding the so-called nonextensive entropies.
The next paragraph introduces K-L and J-S, but doesn't tell me whether the ``abandoned'' property was ever there. It seems the simple knowledge is presumed, but how might a bright young graduate student---or an old one---find it in the first place?
Open Problems
Can you give some more examples of ``graduate-student traps''? Ones that are helpful to know?
Is J-S close to being additive, in some useful sense? This actually strikes me as a non-trappy, research-worthy question.