[CMC T2-2-A] 

Optimal Transport and Machine Learning

 

Date: July 3-7, 2017       Place: Rm. 1503, KIAS

Titles & Abstracts

Lecturer: Marco Cuturi (ENSAE / CREST)
Title: Computational Optimal Transport with Applications to Machine Learning
Abstract: In these lectures I will cover the fundamental aspects behind the computation of the so-called "static" optimal transport problem, first formalized by Monge and, most importantly, by Kantorovich and Hitchcock during World War II, and solved numerically as a linear program by Dantzig shortly after. I will show the computational limits of this LP approach and explain how a simple regularization of the problem can result in considerably faster computations, notably using GPUs. I will then discuss recent applications where this regularized perspective has proved effective for data analysis.

Talk schedule:

[1st part] Introduction, OT as an LP, regularized OT
- Introduction to the field, brief historical review.
- Introduction to the linear programming formulation of optimal transport.
- Review of relevant algorithms to solve OT.
- Entropic regularization.
- Algorithmic properties. Connection to matrix scaling. Convergence. (See the Sinkhorn sketch after this list.)
- Differentiability of regularized OT.
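
To make the entropic-regularization and matrix-scaling items concrete, here is a minimal numpy sketch of entropy-regularized OT solved with Sinkhorn iterations. It is not code from the lectures; the function name, regularization value and stopping rule are illustrative choices.

    # Minimal Sinkhorn iteration for entropy-regularized OT between two histograms.
    import numpy as np

    def sinkhorn(a, b, C, eps=0.05, n_iter=1000, tol=1e-9):
        """Regularized transport plan between histograms a (n,) and b (m,) for cost C (n, m)."""
        K = np.exp(-C / eps)              # Gibbs kernel
        u, v = np.ones_like(a), np.ones_like(b)
        for _ in range(n_iter):
            u_prev = u
            u = a / (K @ v)               # match row marginals
            v = b / (K.T @ u)             # match column marginals
            if np.max(np.abs(u - u_prev)) < tol:
                break
        return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)

    # Example: two random histograms on a 1-D grid with squared-distance cost.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    a = rng.random(50); a /= a.sum()
    b = rng.random(50); b /= b.sum()
    C = (x[:, None] - x[None, :]) ** 2
    P = sinkhorn(a, b, C)
    print("regularized OT cost:", np.sum(P * C))

Every step is a matrix-vector product, which is why, as the abstract notes, these computations map well onto GPUs.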


[2nd part] Wasserstein Variational Problems in Data Analysis
- Links with k-means
- The barycenter problem (see the sketch after this list)
- K-means in Wasserstein space of measures.
- Dictionary learning
- Wasserstein PCA
- Wasserstein Regression
- Minimum Kantorovich Estimation / Wasserstein GAN
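
As a companion to the barycenter item above, the following is a minimal numpy sketch of an entropy-regularized Wasserstein barycenter of histograms on a common grid, computed by iterative Bregman projections. The function name, regularization strength and fixed iteration count are assumptions for illustration, not the lecture's implementation.

    # Entropy-regularized Wasserstein barycenter of K histograms B[k] on one grid.
    import numpy as np

    def wasserstein_barycenter(B, C, weights, eps=0.05, n_iter=500):
        """B: (K, n) histograms, C: (n, n) ground cost, weights: (K,) summing to 1."""
        G = np.exp(-C / eps)                 # Gibbs kernel
        U = np.ones_like(B)
        for _ in range(n_iter):
            V = B / (U @ G)                  # match each input histogram's marginal
            KV = V @ G.T                     # G v_k for every k, stacked row-wise
            p = np.prod((U * KV) ** weights[:, None], axis=0)  # geometric-mean update
            U = p[None, :] / KV              # match the shared barycenter marginal
        return p

    # Example: the midpoint between two bumps on a 1-D grid.
    x = np.linspace(0, 1, 60)
    C = (x[:, None] - x[None, :]) ** 2
    b1 = np.exp(-((x - 0.25) ** 2) / 0.002); b1 /= b1.sum()
    b2 = np.exp(-((x - 0.75) ** 2) / 0.002); b2 /= b2.sum()
    p = wasserstein_barycenter(np.stack([b1, b2]), C, np.array([0.5, 0.5]))
    print("barycenter mass:", p.sum())

Barycenters of this kind are the natural centroid step in the Wasserstein k-means and dictionary-learning items listed above, where cluster centers or atoms are themselves probability measures.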



Lecturer: Yung-Kyun Noh (Mechanical and Aerospace Engineering, SNU)
Title: Bias Reduction and Metric Learning for Nearest-Neighbor Estimation of Kullback-Leibler Divergence
Abstract: Asymptotically unbiased nearest-neighbor estimators for the KL divergence have recently been proposed and demonstrated in a number of applications. With small sample sizes, however, these nonparametric methods typically suffer from high estimation bias due to the non-local statistics of empirical nearest-neighbor information. In this talk, I will show that this non-local bias can be mitigated by changing the local distance metric, and I propose a method for learning a locally optimal Mahalanobis-type metric based on global information provided by approximate parametric models of the underlying densities. In both simulations and experiments, this interplay between parametric models and nonparametric estimation methods significantly improves the accuracy of the nearest-neighbor KL divergence estimator.
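
For context, a hedged sketch of the standard (uncorrected) k-nearest-neighbor estimator of the KL divergence is given below. It does not include the bias-reduction or metric-learning step of the talk; the function name and defaults are illustrative assumptions.

    # Plain k-NN estimator of KL(p || q) from samples X ~ p and Y ~ q.
    import numpy as np
    from scipy.spatial import cKDTree

    def knn_kl_divergence(X, Y, k=1):
        """X: (n, d) samples from p, Y: (m, d) samples from q."""
        n, d = X.shape
        m = Y.shape[0]
        # k-th NN distance within X, excluding the query point itself (hence k + 1).
        rho = cKDTree(X).query(X, k=k + 1)[0][:, -1]
        # k-th NN distance from each x_i to the sample Y.
        nu = cKDTree(Y).query(X, k=k)[0]
        if k > 1:
            nu = nu[:, -1]
        return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1.0))

    # Example: two Gaussians in 2-D whose true KL divergence is 0.25.
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(2000, 2))
    Y = rng.normal(0.5, 1.0, size=(2000, 2))
    print(knn_kl_divergence(X, Y, k=5))

The metric-learning idea in the abstract replaces the Euclidean distances used in these nearest-neighbor queries with a Mahalanobis-type metric chosen locally from approximate parametric models of the densities.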

Lecturer: Seungjin Choi (Computer Science and Engineering, POSTECH)
Title: Deep Generative Models for Density Estimation
Abstract: Density estimation is a core problem in machine learning, the goal of which is to construct, from observed data, an estimate of an unobservable underlying probability density function. Generative models have played a critical role in density estimation. In this talk, I begin with linear generative models, in which conditional independence structures are imposed on parameterized distributions in a linear model to build a problem-specific density that is learned from a training dataset. Then I explain more recent advances in deep generative models, where deep neural networks serve as the encoder or decoder of a generative model. Two important categories, prescribed models and implicit models, are emphasized, exemplified by variational autoencoders and generative adversarial nets respectively.
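
As a concrete instance of the prescribed category, here is a minimal PyTorch sketch of a variational autoencoder trained by minimizing the negative ELBO. It is a toy illustration under assumed layer sizes, a Bernoulli decoder and random binary data, not the model discussed in the talk.

    # Minimal variational autoencoder: Gaussian encoder, Bernoulli decoder.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, x_dim=784, h_dim=256, z_dim=20):
            super().__init__()
            self.enc = nn.Linear(x_dim, h_dim)
            self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
            self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
            self.dec = nn.Linear(z_dim, h_dim)
            self.out = nn.Linear(h_dim, x_dim)      # Bernoulli logits

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.out(F.relu(self.dec(z))), mu, logvar

    def negative_elbo(x, logits, mu, logvar):
        # Bernoulli reconstruction term plus KL(q(z|x) || N(0, I)).
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (recon + kl) / x.shape[0]

    # One illustrative gradient step on random binary "data".
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.bernoulli(torch.rand(32, 784))
    loss = negative_elbo(x, *model(x))
    opt.zero_grad(); loss.backward(); opt.step()
    print("negative ELBO:", loss.item())

An implicit model such as a generative adversarial net would instead keep only the decoder-like sampler and train it against a discriminator, without writing down an explicit likelihood.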


Lecturer: Jinwoo Shin (Electrical Engineering, KAIST)
Title: Generative Machine Learning
Abstract: Generative models randomly generate observable data values, typically given some hidden parameters. In this talk, I will introduce the most successful generative models developed in the machine learning community, together with their underlying theory: (a) graphical models (e.g., Gaussian models, Markov random fields, Boltzmann machines) and (b) neural networks (e.g., variational autoencoders, generative adversarial networks).
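
To illustrate the graphical-model side in code, here is a minimal Gibbs sampler for a fully visible Boltzmann machine with binary units, i.e. p(s) proportional to exp(s^T W s / 2 + b^T s). The random weights and the number of sweeps are placeholder assumptions, used only to show how such a model generates observable values from its parameters.

    # Gibbs sampling from a fully visible Boltzmann machine with {0, 1} units.
    import numpy as np

    def gibbs_sample(W, b, n_sweeps=1000, rng=None):
        """W: (d, d) symmetric weights with zero diagonal, b: (d,) biases."""
        if rng is None:
            rng = np.random.default_rng()
        d = b.shape[0]
        s = rng.integers(0, 2, size=d).astype(float)
        for _ in range(n_sweeps):
            for i in range(d):
                # The conditional of unit i given the rest is a logistic probability.
                p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
                s[i] = float(rng.random() < p_on)
        return s

    rng = np.random.default_rng(0)
    d = 8
    W = rng.normal(scale=0.5, size=(d, d)); W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)
    b = rng.normal(size=d)
    print("sample:", gibbs_sample(W, b, rng=rng))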