NOTICE: Advance registrations will be accepted until 8:00 a.m. EST, Wednesday, December 23. After that, please register on site at the meeting. The AMS Short Course will be held on Monday (1/11/10) and Tuesday (1/12/10) in Room 2004, 2nd Floor, Moscone West (AMS) and in Room 2002, 2nd Floor, Moscone West (MAA). The Registration Desk for the short courses and the tutorial will be located in the foyer outside these rooms, on Monday (1/11/10) from 8:00 a.m. to noon only.
Markov Chains and Mixing Times
January 11–12, 2010, Room 2004, 2nd Floor, Moscone Center West
(Preceding the Joint Mathematics Meetings)
Organized by David Levin, University of Oregon; Yuval Peres, University of California, Berkeley and Microsoft; Elizabeth Wilmer, Oberlin College.
In lieu of traditional lecture notes, the book Markov Chains and Mixing Times, coauthored by the organizers (description below), will be provided free of charge to the first 80 people who register for this course. A voucher will be provided to all other registrants allowing them to purchase the book at the deeply discounted price of US$32. All preregistrants will be notified whether they qualify for the book or the voucher. Those registering for the Short Course must supply a home mailing address as well as a current email address in order to receive the book (or voucher) prior to the meeting.
Advance registration fees are: member of the AMS/MAA, US$98; nonmember, US$135; student/unemployed/emeritus, US$46. On-site fees are: member of the AMS/MAA, US$132; nonmember, US$165; student/unemployed/emeritus, US$67. Click here to register.
General Introduction
Convergence of finite Markov chains to their stationary distributions is an extremely active research area. Many of the arguments are both beautiful and accessible, and the field interacts closely with both theoretical computer science and statistical physics. The main goal of both our book Markov Chains and Mixing Times [6] and this Short Course is to encourage wider dissemination of this material to a broad mathematical audience. Much of the material which we will present is related to very recent research in the area, such as [2–5].
Markov chains are a general class of stochastic processes which under mild regularity conditions converge in distribution to a unique stationary probability distribution. Traditionally, undergraduate treatments of Markov chains have focused on analyzing a fixed chain as time goes to infinity. In the past two decades a different asymptotic analysis has emerged. For a Markov chain with a large state space, we care about the finite number of steps needed to get the distribution reasonably close to the limit (stationary) distribution. This number is known as the mixing time of the chain, and there are now many methods for determining its behavior as a function of the geometry and size of the state space.
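The notion of mixing time can be made concrete in a few lines of code. The sketch below is illustrative only (it is not taken from the book; the chain, the threshold 1/4, and the function names are our own choices): it tracks the total variation distance between the t-step distribution and the stationary distribution for a lazy random walk on an n-cycle, and reports the first t at which that distance drops below 1/4, a conventional cutoff for defining the mixing time.

```python
import numpy as np

def tv_distance(p, q):
    # Total variation distance between two probability vectors.
    return 0.5 * np.abs(p - q).sum()

def mixing_time(P, pi, eps=0.25, start=0, max_steps=100_000):
    # First t with ||mu_t - pi||_TV <= eps, starting from a point mass.
    # (On a vertex-transitive chain like the cycle, the starting state
    # does not affect the answer.)
    mu = np.zeros(P.shape[0])
    mu[start] = 1.0
    for t in range(1, max_steps + 1):
        mu = mu @ P          # advance the distribution one step
        if tv_distance(mu, pi) <= eps:
            return t
    return None

# Lazy simple random walk on an n-cycle: hold with probability 1/2,
# otherwise step to a uniformly chosen neighbor.
n = 16
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] = 0.25
    P[i, (i - 1) % n] = 0.25
pi = np.full(n, 1.0 / n)     # stationary distribution is uniform

print(mixing_time(P, pi))
```

Doubling n in this experiment roughly quadruples the reported mixing time, reflecting the order-n² mixing of the lazy walk on the cycle.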
In 1986 Aldous and Diaconis wrote a wonderful Monthly article on mixing times [1]. Since then, the field and its fruitful interactions with computer science and statistical physics have grown tremendously. In the spring of 2005 a research program on Probability, Algorithms, and Statistical Physics was held at the Mathematical Sciences Research Institute in Berkeley, California. This multidisciplinary program united the interests of mathematicians, computer scientists, and physicists in discrete probabilistic models, and one of its major themes was the rigorous study of mixing times for finite Markov chains. Since much of the theory of Markov chain convergence was developed by physicists and computer scientists, the course will allow participants to see how mathematics is enriched by interaction with other disciplines. Several of the models which we will examine in the course will be "particle systems" arising in statistical physics. Interestingly, many of these models exhibit phase transitions: the behavior of the model may change abruptly as a parameter describing local interactions passes through a critical value. For our particle systems, the mixing time may shift from "fast" (polynomial in the instance size n) to "slow" (exponential in n) as interaction parameters pass through a critical value.
Talks will be given by the organizers and also by David Aldous and Alistair Sinclair, University of California, Berkeley.
References
(Where noted, articles may be retrieved from http://arxiv.org/.)
[1] D. Aldous and P. Diaconis, Shuffling cards and stopping times, Amer. Math. Monthly 93 (1986), no. 5, 333–348.
[2] P. Diaconis and L. Saloff-Coste, Separation cutoffs for birth and death chains, Ann. Appl. Probab. 16 (2006), no. 4, 2098–2122.
[3] J. Ding, E. Lubetzky, and Y. Peres, The mixing time evolution of Glauber dynamics for the mean-field Ising model, Comm. Math. Physics (2008), available at arXiv:0806.1906.
[4] ________, Total variation cutoff in birth-and-death chains, Probab. Theory and Rel. Fields (2008), available at arXiv:0801.2625.
[5] D. A. Levin, M. J. Luczak, and Y. Peres, Glauber dynamics for the mean-field Ising model: Cutoff, critical power law, and metastability, Probab. Theory and Rel. Fields (2007), available at arXiv:0712.0790.
[6] D. A. Levin, Y. Peres, and E. L. Wilmer, Markov Chains and Mixing Times, American Mathematical Society, Providence, RI, 2009, with a chapter by James G. Propp and David B. Wilson.
About the Book
Markov Chains and Mixing Times builds on recent interest in chains with large state spaces by examining mixing times and helping to explain how they grow as state spaces increase in size. The first part of the book offers illustrative examples of the theory's basic techniques. Topics include a discussion of Glauber dynamics and the Metropolis algorithm in the context of "spin systems", an examination of how, under mild conditions, Markov chains converge to their stationary distributions, and analyses of card-shuffling chains. Part II covers more sophisticated techniques, many of which have not previously been presented in textbook form.
Topics include advanced spectral techniques, families of large chains studied in computer science and statistical mechanics, the cutoff phenomenon, and lamplighter chains. The text progresses smoothly from simple to more complicated topics, making it useful for a variety of undergraduate and graduate students and researchers. The text is designed to convey the liveliness of Markov chain convergence, a central part of modern probability theory, to a wide audience.
The organizing speakers will lecture on material directly from the text; Aldous and Sinclair are leading experts and innovators in the subject matter of the book.
