Error-correcting codes are systematic ways of introducing redundancy into data so that the original information can be recovered even when the data is corrupted. Error-correcting codes (or just codes) are used ubiquitously in communication systems and data storage. The study of error-correcting codes (coding theory) started with the seminal works of Shannon and Hamming in the late 1940s. Since then, coding theory has been an active cross-disciplinary research area, drawing on tools from combinatorics, probability theory, and algebra, among other areas.
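As a toy illustration of redundancy and recovery (not part of the course material, just a sketch), consider the 3-fold repetition code: each bit is transmitted three times, and the decoder takes a majority vote, so any single bit flip within a block of three is corrected.

```python
# Toy illustration: the 3-fold repetition code.
# Encoding repeats each bit three times; decoding takes a majority
# vote per block, correcting any single bit flip within a block.

def encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority-vote each consecutive block of three received bits."""
    return [int(sum(received[i:i + 3]) >= 2)
            for i in range(0, len(received), 3)]

message = [1, 0, 1]
codeword = encode(message)            # [1, 1, 1, 0, 0, 0, 1, 1, 1]
corrupted = codeword[:]
corrupted[4] = 1                      # flip one bit in the middle block
assert decode(corrupted) == message   # the single error is corrected
```

Of course, this code is extremely wasteful (it triples the data to correct one error per block); much of coding theory is about doing far better, which is exactly the trade-off this course studies.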
This course will discuss the theoretical aspects of codes, focusing mostly on the worst-case noise model pioneered by Hamming. However, we will also discuss quite a few results in the stochastic noise model pioneered by Shannon.
In this course we will start with the classical theory of Shannon and Hamming and then focus on the areas of coding theory that have seen a lot of activity in the last decade or so. The course will be roughly divided into three parts. The first part will look at the combinatorial issues in the design of codes; this part will mostly cover classical results on the limits of what can and cannot be done using codes. The second part of the course will deal with the algorithmic aspects of codes. In particular, we will focus on efficient algorithms that recover the original information from corrupted data (called decoding). In this part we will discuss some exciting recent developments that bridge the "divergent" schools of thought of Shannon and Hamming. Finally, we will study some applications of codes outside of the "traditional" error-correcting applications. In particular, we will see how codes can be used to obtain results in theoretical computer science in general and in computational complexity in particular. Most of the course will focus on the first two parts, and we will spend 2-3 lectures on applications.
Here is a tentative list of topics (the actual topics discussed will be a subset, depending on class interest and time).