Tutorials

 Privacy-Preserving Speech and Audio Processing
 Manifold Learning: Modeling and Algorithms



Privacy-Preserving Speech and Audio Processing

Bhiksha Raj
Associate Professor, Carnegie Mellon University, USA

Bhiksha Raj is an Associate Professor in the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University, with additional affiliations to the Electrical and Computer Engineering and Machine Learning departments. Dr. Raj obtained his PhD from CMU in 2000 and was at Mitsubishi Electric Research Laboratories from 2001 to 2008. His chief research interests lie in automatic speech recognition, computer audition, machine learning and data privacy. His latest work is in the newly emerging field of privacy-preserving speech processing, to which his research group has made several contributions.




Abstract

The privacy of personal data has generally been considered inviolable. On the other hand, in nearly any interaction, whether it is with other people or with computerized systems, we reveal information about ourselves. Sometimes this is intended, for instance when we use a biometric system to authenticate ourselves, or when we explicitly provide personal information in some manner. Often, however, it is unintended; for instance, a simple search performed on a server reveals information about our preferences, and an interaction with a voice recognition system reveals information to the system about our gender, nationality (accent), and possibly emotional state and age.

Regardless of whether the exposure of information is intentional or not, it could be misused, potentially putting us at financial, social and even physical risk. These concerns about the exposure of information have spawned a large and growing body of research addressing how information may be leaked and how it can be protected.

One area of concern is sound data, particularly voice. For instance, voice-authentication and voice-recognition systems are becoming increasingly popular and commonplace. However, in using these services, a user is exposed to potential abuse: as mentioned above, the server, or an eavesdropper, may extract unintended demographic information about the user by analyzing the voice and sell that information. The server may also edit recordings to fabricate utterances the user never spoke. Many other such risks can be listed. Merely encrypting the data for transmission does not protect the user, since the recipient (the server) must ultimately have access to the data in the clear (i.e. in decrypted form) in order to perform its processing.

In this tutorial, we will discuss solutions for privacy-preserving sound processing, which enable a user to employ sound- or voice-processing services without being exposed to risks such as those described above.

We will describe the basics of privacy-preserving techniques for data processing, including homomorphic encryption, oblivious transfer, secret sharing, and secure multiparty computation. We will describe how these can be employed to build secure "primitives" for computation, which enable users to perform basic computational steps without revealing information, and discuss the privacy issues associated with these operations. We will then briefly present schemes that employ these techniques for privacy-preserving signal processing and biometrics.
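
To make one of these primitives concrete, here is a minimal sketch of additive secret sharing (in Python, with an arbitrary field size and toy values that are not part of the tutorial material): two parties compute the sum of their private inputs, yet no single party, nor anyone holding a single share, learns the inputs themselves.

```python
import random

PRIME = 2**61 - 1  # modulus defining the finite field (an arbitrary choice here)

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod PRIME."""
    return sum(shares) % PRIME

# Two users secret-share their private inputs between two computing parties.
a, b = 42, 17
a_shares, b_shares = share(a, 2), share(b, 2)

# Each party adds the shares it holds locally; no party ever sees a or b.
party_sums = [(a_shares[i] + b_shares[i]) % PRIME for i in range(2)]
print(reconstruct(party_sums))  # prints 59, the sum of the private inputs
```

Practical protocols combine shares of this kind with the other primitives listed above to evaluate richer functions, such as the inner products and distances that arise in speech processing.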

We will then delve into applications to sound, and particularly voice, processing, including authentication, classification and recognition, and discuss the associated computational and accuracy issues.

Finally, we will present a newer class of methods based on exact matching, built upon locality-sensitive hashing and universal quantization, which enables several of the above privacy-preserving operations at a different operating point on the privacy-accuracy tradeoff.
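
To convey the flavor of this idea (an illustrative sketch with arbitrary parameters, not the specific constructions discussed in the tutorial), the snippet below hashes feature vectors with quantized random projections, a simple locality-sensitive hashing family: nearby vectors collide with high probability, so comparison reduces to an exact match of hashes, which can be performed without inspecting the underlying features.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(x, projections, width):
    """Quantize random projections of x into integer bins (a simple LSH family)."""
    return tuple(np.floor(projections @ x / width).astype(int))

dim, n_projections, width = 20, 16, 1.0
projections = rng.normal(size=(n_projections, dim))

enrolled = rng.normal(size=dim)                        # e.g., a stored voice feature vector
probe_near = enrolled + 1e-4 * rng.normal(size=dim)    # nearly identical probe
probe_far = rng.normal(size=dim)                       # unrelated probe

# Only hashes are exchanged; matching is an exact comparison of the hash tuples.
h = lsh_hash(enrolled, projections, width)
print(h == lsh_hash(probe_near, projections, width))   # True with high probability
print(h == lsh_hash(probe_far, projections, width))    # False with high probability
```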


Manifold Learning: Modeling and Algorithms

Raviv Raich
Assistant Professor, Oregon State University, USA

Raviv Raich received the B.Sc. and M.Sc. degrees from Tel Aviv University, Tel-Aviv, Israel, in 1994 and 1998, respectively, and the Ph.D. degree from the Georgia Institute of Technology, Atlanta, in 2004, all in electrical engineering. From 2004 to 2007, he was a Postdoctoral Fellow with the University of Michigan, Ann Arbor. Since fall 2007, he has been an Assistant Professor in the School of Electrical Engineering and Computer Science at Oregon State University, Corvallis. His main research interests are in statistical signal processing and machine learning.




Abstract

Recent advances in data acquisition and high-rate information sources give rise to high-volume, high-dimensional data. For such data, dimension reduction provides a means of visualization, compression, and feature extraction for clustering or classification. Over the last decade, a variety of methods for nonlinear dimensionality reduction have been the subject of ongoing research.

In an effort to alleviate the curse of dimensionality, it is often assumed that the data possess a geometric structure which can be captured with a low-dimensional representation. Dimension reduction focuses on the identification of a mapping from the high-dimensional data to a low-dimensional representation. When a collection of data points is assumed to reside on a hyperplane, a linear transformation is sought, giving rise to well-known algorithms such as principal component analysis. Manifolds offer a generalization of linear spaces and present a natural alternative when the data points no longer reside on a linear subspace. Manifold learning and data dimension reduction have many applications, e.g., visualization, classification, and information processing. Data visualization in 2D or 3D provides further insight into the data structure, which can be used for either interpretation or data model selection.
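
As a concrete illustration of the linear case, the short sketch below (with synthetic data and dimensions chosen arbitrarily for the example) applies PCA: it projects centered data onto the leading principal directions obtained from an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data lying near a 2-D plane embedded in 10 dimensions.
latent = rng.normal(size=(500, 2))
basis = rng.normal(size=(2, 10))
data = latent @ basis + 0.05 * rng.normal(size=(500, 10))

# PCA: center the data and project onto the top right singular vectors.
centered = data - data.mean(axis=0)
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T        # 2-D representation of each point

explained = (singular_values[:2] ** 2).sum() / (singular_values ** 2).sum()
print(f"variance explained by 2 components: {explained:.3f}")  # close to 1 here
```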

In this tutorial, we will present methods of dimensionality reduction used for analysis of high dimensional data.

We will begin with an introduction to principled criteria for data dimension reduction. Specifically, we will introduce criteria for both supervised and unsupervised dimension reduction and their corresponding computational solutions.
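
For example, Fisher's linear discriminant is one classical supervised criterion: it seeks the direction that maximizes between-class scatter relative to within-class scatter. The sketch below (synthetic two-class data, not an example taken from the tutorial) computes this direction directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two labelled Gaussian classes in 5 dimensions with shifted means.
X0 = rng.normal(loc=0.0, size=(200, 5))
X1 = rng.normal(loc=1.0, size=(200, 5))

# Fisher's criterion: maximize between-class over within-class scatter.
mean0, mean1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)   # within-class scatter
w = np.linalg.solve(Sw, mean1 - mean0)                     # discriminant direction
w /= np.linalg.norm(w)

# One-dimensional supervised embedding of every point.
proj0, proj1 = X0 @ w, X1 @ w
print(f"class separation along w: {proj1.mean() - proj0.mean():.2f}")
```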

We will then continue with a variety of approaches for the geometric representation of data, linking the high-dimensional data to its low-dimensional representation for both linear and nonlinear models (e.g., via local neighborhood graphs or kernel methods). We will also introduce optimization approaches for the different methods.
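
As one sketch of the neighborhood-graph idea (in the spirit of Laplacian eigenmaps; the data, neighborhood size and kernel width below are arbitrary choices for illustration), the low-order eigenvectors of a graph Laplacian built from k-nearest-neighbor weights yield a nonlinear embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Points on a one-dimensional curve (an arc) embedded in 3-D with a little noise.
t = np.sort(rng.uniform(0, np.pi, 300))
points = np.c_[np.cos(t), np.sin(t), 0.01 * rng.normal(size=300)]

# k-nearest-neighbour graph with heat-kernel edge weights.
k, sigma = 10, 0.2
dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
W = np.zeros_like(dists)
for i in range(len(points)):
    nbrs = np.argsort(dists[i])[1:k + 1]                # skip the point itself
    W[i, nbrs] = np.exp(-dists[i, nbrs] ** 2 / sigma ** 2)
W = np.maximum(W, W.T)                                  # symmetrize the graph

# Unnormalized graph Laplacian; the first non-trivial eigenvector is the embedding.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1]                               # 1-D nonlinear coordinate

# The recovered coordinate should track the true curve parameter t.
corr = np.corrcoef(embedding, t)[0, 1]
print(f"|correlation with true parameter| = {abs(corr):.2f}")  # typically near 1
```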

Finally, we will review probabilistic approaches for nonlinear dimension reduction.