PLENARY LECTURES

Learning and Message-Passing in Graphical Models
Large-scale convex optimization for machine learning
Adaptation and Learning over Complex Networks

Learning and Message-Passing in Graphical Models

Martin Wainwright

Statistics and EECS, UC Berkeley, USA

Martin Wainwright joined the faculty at the University of California at Berkeley in Fall 2004, with a joint appointment between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences. He received his Bachelor's degree in Mathematics from the University of Waterloo, and his Ph.D. degree in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology (MIT), for which he was awarded the George M. Sprowls Prize from the MIT EECS department in 2002. He is interested in large-scale statistical models and their applications to communication and coding, machine learning, and statistical signal and image processing. He has received an NSF CAREER Award (2006), an Alfred P. Sloan Foundation Research Fellowship (2005), an Okawa Research Grant in Information and Telecommunications (2005), the 1967 Fellowship from the Natural Sciences and Engineering Research Council of Canada (1996--2000), and several outstanding conference paper awards.



Abstract

Graphical models provide a powerful framework for modeling complex dependencies in structured signals, including image and video data, language and text corpora, social networks, and biological data. They also come with distributed message-passing algorithms, which generalize the familiar Kalman and Viterbi algorithms, for statistical computation. In this talk, we discuss some recent advances in the use of graphical models, including low-complexity stochastic forms of sum-product message-passing, and computationally efficient algorithms for learning graphical structure from high-dimensional data.

Based on joint work with Nima Noorshams and Po-Ling Loh, UC Berkeley.
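As background for the message-passing algorithms the abstract refers to, here is a minimal, illustrative sketch (not the speaker's code) of sum-product on a three-node chain of binary variables, with the exact marginals recovered from forward and backward messages and verified against a brute-force computation over the joint distribution. The potentials are arbitrary random choices of my own.

```python
import numpy as np

# Sum-product on the chain x1 - x2 - x3 with binary variables.
rng = np.random.default_rng(0)
phi = [rng.random(2) for _ in range(3)]   # node potentials phi1, phi2, phi3
psi12 = rng.random((2, 2))                # pairwise potential on (x1, x2)
psi23 = rng.random((2, 2))                # pairwise potential on (x2, x3)

# Forward messages: m12(x2) = sum_{x1} phi1(x1) psi12(x1, x2), etc.
m12 = phi[0] @ psi12
m23 = (phi[1] * m12) @ psi23
# Backward messages: m32(x2) = sum_{x3} psi23(x2, x3) phi3(x3), etc.
m32 = psi23 @ phi[2]
m21 = psi12 @ (phi[1] * m32)

# Marginal at each node: node potential times all incoming messages.
def normalize(v):
    return v / v.sum()

b1 = normalize(phi[0] * m21)
b2 = normalize(phi[1] * m12 * m32)
b3 = normalize(phi[2] * m23)

# Brute-force check against the full joint distribution.
joint = np.einsum('i,j,k,ij,jk->ijk', phi[0], phi[1], phi[2], psi12, psi23)
p1 = normalize(joint.sum(axis=(1, 2)))
p2 = normalize(joint.sum(axis=(0, 2)))
p3 = normalize(joint.sum(axis=(0, 1)))
assert np.allclose(b1, p1) and np.allclose(b2, p2) and np.allclose(b3, p3)
```

On a tree, the same local updates compute exact marginals in time linear in the number of edges; the stochastic variants mentioned in the abstract replace these exact message updates with low-complexity randomized ones.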


Large-scale convex optimization for machine learning

Francis Bach

PASCAL invited speaker
INRIA, France

Francis Bach is a researcher in the Sierra INRIA project-team, in the Computer Science Department of the Ecole Normale Superieure, Paris, France. He graduated from the Ecole Polytechnique, Palaiseau, France, in 1997, and earned his PhD in 2005 from the Computer Science division at the University of California, Berkeley. His research interests include machine learning, statistics, optimization, graphical models, kernel methods, sparse methods, and statistical signal processing. He was awarded a starting investigator grant from the European Research Council in 2009.



Abstract

Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data: there are many observations ("large n") and each observation is high-dimensional ("large p"). In this setting, online algorithms, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. In this talk, I will present several recent results showing that in the ideal infinite-data setting, online learning algorithms based on stochastic approximation should be preferred, but that in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate with an iteration cost similar to stochastic gradient descent.

Based on joint work with Nicolas Le Roux, Eric Moulines, and Mark Schmidt.
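To make the "linear rate at stochastic-gradient cost" behavior concrete, here is a minimal sketch, in the spirit of stochastic-average-gradient methods, on a noiseless least-squares problem. The idea is to store one gradient per example and step along their running average, so each iteration touches a single example yet uses information from the whole dataset. The problem sizes and the step size are illustrative choices of my own, not the speaker's.

```python
import numpy as np

# Noiseless least-squares problem: recover w_true from y = X @ w_true.
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true

L = np.max(np.sum(X ** 2, axis=1))   # per-example Lipschitz constant
step = 1.0 / (16 * L)                # conservative step size

w = np.zeros(p)
grad_mem = np.zeros((n, p))          # last gradient seen for each example
grad_sum = np.zeros(p)               # running sum of stored gradients

for t in range(20000):
    i = rng.integers(n)
    g_new = (X[i] @ w - y[i]) * X[i]  # gradient of example i at current w
    grad_sum += g_new - grad_mem[i]   # refresh the running sum in O(p)
    grad_mem[i] = g_new
    w -= step * grad_sum / n          # step along the average gradient
```

Each iteration costs O(p), like one stochastic gradient step, yet on this strongly convex problem the iterates contract toward `w_true` at a linear rate; plain stochastic gradient descent with a constant step would instead stall at a noise floor.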


Adaptation and Learning over Complex Networks

Ali H. Sayed

UCLA Electrical Engineering

Ali H. Sayed is Professor of Electrical Engineering at the University of California, Los Angeles (UCLA), where he directs the UCLA Adaptive Systems Laboratory. He is the author or coauthor of over 370 articles and 5 books. He is the author of the textbooks Adaptive Filters (New York: Wiley, 2008) and Fundamentals of Adaptive Filtering (New York: Wiley, 2003), and co-author of Linear Estimation (Prentice-Hall, 2000). Dr. Sayed's research interests span several areas including adaptation and learning, adaptive and cognitive networks, bio-inspired networks, flocking and swarming behavior, cooperative behavior, distributed processing, self-healing circuitry, and statistical signal processing. His research has been recognized with several awards, including the 1996 IEEE Donald G. Fink Prize, a 2002 Best Paper Award from the IEEE Signal Processing Society, the 2003 Kuwait Prize in Basic Sciences, the 2005 Frederick E. Terman Award, and a 2005 Young Author Best Paper Award from the IEEE Signal Processing Society. He served as Editor-in-Chief of the IEEE Transactions on Signal Processing (2003-2005) and the EURASIP Journal on Advances in Signal Processing (2006-2007). He also served as a 2005 Distinguished Lecturer of the IEEE Signal Processing Society, and as General Chairman of the 2008 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). He served as Vice-President-Publications of the IEEE Signal Processing Society (2009-2011), and as a member of the Board of Governors (2007-2011) of the same society.

Abstract

Complex patterns of behavior are common in many biological networks, where no single agent is in command and yet forms of decentralized intelligence are evident. Examples include fish joining together in schools, birds flying in formation, bees swarming towards a new hive, and bacteria diffusing towards a nutrient source. While no individual agent in these networks is capable of complex behavior on its own, the coordination among multiple agents gives rise to sophisticated order and learning abilities at the network level. The study of these phenomena opens up opportunities for collaborative research across several domains, including economics, life sciences, biology, machine learning, and information processing, in order to address several relevant questions: (a) How and why does organized and complex behavior arise at the group level from interactions among agents without central control? (b) What communication topologies enable the emergence of order at the higher level from interactions at the lower level? (c) How is information quantized during the diffusion of knowledge through the network? (d) How does mobility influence the learning and tracking abilities of the agents and the network? Several disciplines are concerned with elucidating different aspects of these questions, including evolutionary biology, animal behavior, physical biology, and computer graphics.

In the realm of machine learning and signal processing, these questions motivate the study and development of decentralized strategies for information processing that endow cognitive networks with real-time adaptation and learning abilities. Cognitive networks consist of spatially distributed agents linked together through a connection topology; the topology may vary with time, and the agents may also move. The agents cooperate with each other through local interactions and by means of in-network processing. Such networks are well suited to perform decentralized information processing, optimization, learning, and inference tasks. They are also well suited to model and understand the self-organized and complex behavior encountered in nature and in social and economic networks. This presentation examines several patterns of decentralized intelligence in biological networks, and describes powerful diffusion adaptation and online learning strategies that our research group has been developing in recent years to model and reproduce these kinds of learning behavior over cognitive networks.
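As a minimal illustration of the diffusion adaptation idea described above, the sketch below implements an adapt-then-combine diffusion LMS update over a small ring of agents: each agent first takes a local LMS step on its own streaming datum, then averages estimates with its neighbors. The ring topology, step size, and uniform combination weights are illustrative choices of my own, not the speaker's.

```python
import numpy as np

# Diffusion LMS (adapt-then-combine) over a ring of N agents, all
# estimating the same unknown parameter vector w_true from noisy
# streaming measurements d = u @ w_true + noise.
rng = np.random.default_rng(1)
N, p = 5, 3                      # number of agents, parameter dimension
w_true = rng.standard_normal(p)

# Ring network; A[k, l] = combination weight agent k gives to neighbor l.
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[k, l % N] = 1 / 3      # uniform weights over the neighborhood

W = np.zeros((N, p))             # row k holds agent k's current estimate
mu = 0.05                        # LMS step size

for t in range(3000):
    # Adapt: each agent takes an LMS step on its own datum.
    psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(p)                  # regressor
        d = u @ w_true + 0.01 * rng.standard_normal()  # noisy measurement
        psi[k] = W[k] + mu * (d - u @ W[k]) * u
    # Combine: each agent averages intermediate estimates over neighbors.
    W = A @ psi
```

No agent is in command and each one only communicates with its immediate neighbors, yet through the repeated adapt-and-combine steps every agent's estimate converges to a neighborhood of `w_true`, mirroring the decentralized learning behavior the talk describes.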