{\huge Honor Thy Fathers Correctly}

Partly a rant from today's New York Times \vspace{.5in}

Aryabhata is often called the father of Indian mathematics, and can be regarded as the progenitor of many modern features of mathematics. He completed his great short textbook, the Aryabhatiya, in 499 at age 23. Thus if there had been a ``STOC 500,'' he could have been the keynote speaker. The book has the first recorded algorithm for implementing the Chinese Remainder Theorem, which Sun Tzu had stated without proof or procedure. He computed $latex {\pi}&fg=000000$ to 4 decimal places, and interestingly, hinted by his choice of words that $latex {\pi}&fg=000000$ could only be ``approached.'' He introduced the concept of versine, that is $latex {1 - \cos(\theta)}&fg=000000$, and computed trigonometric tables. He maintained that the Earth rotates on an axis, 1,000 years before Nikolaus Copernicus did, and while he described a geocentric system, he treated planetary periods in relation to the Sun.
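Aryabhata's method, called the kuttaka or ``pulverizer,'' amounts to running what we now call the extended Euclidean algorithm. Here is a minimal modern sketch of the two-modulus Chinese Remainder step; the function names and framing are ours, not his.

```python
def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(r1, m1, r2, m2):
    # Smallest nonnegative x with x = r1 (mod m1) and x = r2 (mod m2),
    # assuming m1 and m2 are coprime.
    g, u, v = extended_gcd(m1, m2)
    assert g == 1, "moduli must be coprime"
    return (r1 * v * m2 + r2 * u * m1) % (m1 * m2)

# Sun Tzu's original puzzle: remainders 2, 3, 2 modulo 3, 5, 7.
# Chaining the two-modulus step solves it:
step = crt(2, 3, 3, 5)        # x = 2 (mod 3), x = 3 (mod 5)  ->  8
print(crt(step, 15, 2, 7))    # -> 23
```

Chaining pairwise like this handles any number of coprime moduli, which is essentially how the general theorem is applied in practice.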

Today Ken and I wish everyone a Happy Father's Day, and talk about recognizing our scientific fathers correctly. This includes a short rant about a flawed piece in today's New York Times.

Ken believes that Aryabhata further hinted at the ``correct'' value of $latex {\pi}&fg=000000$. As earlier referenced by our fellow blogger and friend Bill Gasarch, this article by Bob Palais wins Ken's assent that we should really think in terms of 6.283185307… Here is what Aryabhata wrote:

Add 4 to 100, multiply by 8, and then add 62,000. By this rule the circumference of a circle with a diameter of 20,000 can be approached.

Why wouldn't he say to multiply by 4, add 31,000, and get the circumference for a diameter of 10,000? Ken observes that 10,000 was already the more recognizable unit, so Aryabhata may have been hinting at the ratio to the radius. Indeed Aryabhata flatly stated that a decimal place-value system was the norm for arithmetic, though he himself extended it with a system of Sanskrit phonemes capable of handling fractions and decimal places.
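Checking the verse's arithmetic, and the halved variant, takes only a few lines:

```python
# Aryabhata's rule: "Add 4 to 100, multiply by 8, and then add 62,000."
circumference = (4 + 100) * 8 + 62000   # 62,832
print(circumference / 20000)            # diameter 20,000 -> 3.1416

# The halved variant gives the same ratio:
print(((4 + 100) * 4 + 31000) / 10000)  # -> 3.1416

# The ratio to the radius of 10,000 is Palais's constant:
print(circumference / 10000)            # -> 6.2832
```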

Aryabhata is sometimes also credited with the first symbolic use of zero, at least implicitly. His successor Brahmagupta, a century-plus later, perhaps deserves greater credit for being the first to treat zero as an explicit entity. He wrote rules such as zero plus anything is itself, zero times anything is zero, and ``a fortune subtracted from zero is a debt.'' Brahmagupta waded into trouble only when he ruled that zero divided by zero is zero, and tried to make unique sense out of other fractions with zero. But our point is that in the West he seems to get zero credit for zero at all. And that brings me to giving credit to ``fathers'' of fields correctly.
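Brahmagupta's rules read almost like a modern programming exercise. A tiny sketch in Python (our framing, not his notation):

```python
a = 7
assert 0 + a == a    # zero plus anything is itself
assert 0 * a == 0    # zero times anything is zero
assert 0 - a == -a   # "a fortune subtracted from zero is a debt"

# Where Brahmagupta stumbled: he ruled that 0/0 is 0, but division
# by zero has no consistent value; modern arithmetic leaves it
# undefined, and Python simply refuses.
try:
    0 / 0
except ZeroDivisionError:
    print("0/0 is undefined")
```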

Today's Article

This Sunday's New York Times has, on page nine of the Review section, an article by Ignacio Palacios-Huerta titled {\it The Beautiful Data Set}. He is a professor of economics at the London School of Economics and is the author of a recent book on game theory.

In his article he uses soccer as a way to explain game theory. He is especially interested in penalty kicks. These are two-person games between the kicker and the goalie. Each is a zero-sum game, since either the kick goes in or it does not. He has looked at over nine thousand such kicks in professional games over the period 1995 to 2012. He found that roughly 60% go to the right of the net and the rest to the left. This is, as he notes, not exactly an even split; it is probably because players tend to have a stronger kick to one side than the other.
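To make the zero-sum structure concrete, here is a minimal sketch of solving a $latex {2 \times 2}&fg=000000$ penalty-kick game for its minimax mixed-strategy solution. The scoring probabilities below are invented for illustration; they are not from Palacios-Huerta's data set.

```python
def solve_2x2_zero_sum(a, b, c, d):
    # Minimax solution of the 2x2 zero-sum game with row-player payoffs
    #   | a  b |   (rows: kicker aims Left/Right,
    #   | c  d |    columns: goalie dives Left/Right).
    # Assumes no saddle point, so both players must mix.
    denom = a - b - c + d
    p = (d - c) / denom              # kicker's probability of aiming Left
    q = (d - b) / denom              # goalie's probability of diving Left
    value = (a * d - b * c) / denom  # scoring probability at equilibrium
    return p, q, value

# Hypothetical scoring probabilities for the four outcomes:
p, q, v = solve_2x2_zero_sum(0.60, 0.90, 0.95, 0.70)
print(round(p, 3), round(q, 3), round(v, 3))  # 0.455 0.364 0.791
```

At this solution neither side can gain by deviating: the kicker's mix makes the goalie indifferent between diving left and right, and vice versa. That equalization is exactly the minimax property for zero-sum games.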

All of this is interesting, and I like it when the NYT, or any mainstream media, talks about mathematics. But Palacios-Huerta repeatedly gets one thing wrong. I cannot understand why, but in my opinion he does. He repeatedly attributes the theory of such zero-sum games to John Nash. He calls all of the above ``Nash's Theory.''

This is just wrong. The theory of zero-sum games is due to John von Neumann. Recall that he published the famous Minimax Theorem in 1928 and said: As far as I can see, there could be no theory of games $latex {\dots}&fg=000000$ without that theorem $latex {\dots}&fg=000000$ I thought there was nothing worth publishing until the Minimax Theorem was proved.

Palacios-Huerta knows this---he quotes the above in his book---so I am at best puzzled by his use of Nash in his NYT article. The only point I can see is that the public may recall Nash, since he won a Nobel and is featured in a great book and even a major motion picture. Soccer is after all called the ``Beautiful Game,'' whence his ``Beautiful Data Set,'' and the association with ``A Beautiful Mind'' called Nash to mind for either him or the editor, or both.

But I am still disappointed with the NYT. Nash's whole claim to fame is that he extended the theory of games to include non-zero-sum games; his theory generalized the minimax theorem to them. To invoke Nash for the zero-sum case, in such a misleading manner, seems to me to be wrong. It is something that just does not fit.

Open Problems

What do you think? Happy Father's Day.