Ian Goodfellow

PhD Candidate
Advisors: Yoshua Bengio and Aaron Courville
Department of Computer Science and Operations Research
Université de Montréal
Montréal, Québec
goodfeli@iro.umontreal.ca
CV

About Me

I am currently a PhD candidate in Yoshua Bengio's lab, LISA. I graduated from Stanford University in 2009 with a B.Sc. and an M.Sc. in Computer Science.
Google generously supports my PhD through the 2013 Google PhD Fellowship in Deep Learning. My Google Research mentor is Samy Bengio. At Stanford, I was part of Andrew Ng's research group, where I worked on deep learning and the Stanford AI Robot.

Recent news

The Google internship project I completed in September with the Street Smart team has been featured in online articles from MIT Technology Review, Vice, Wired, Slate, and Eparsa.

I will be joining Jeff Dean's team at Google in Mountain View, CA as a research scientist in July 2014.

Research Interests

Most of my work deals with learning large-scale hierarchical models (both probabilistic models and deterministic neural nets) for complicated real-world tasks. I usually use computer vision tasks to benchmark the performance of my machine learning algorithms, though most of my work should be more broadly applicable. My algorithms are best suited for learning in situations with lots of structure, little noise, and medium to high amounts of labeled data.

Recently, I did an internship at Google, working on machine learning / computer vision for StreetView. Earlier this year, my colleagues at LISA and I improved the state of the art on several object recognition benchmark tasks using maxout networks. I spent most of last year figuring out how to train deep Boltzmann machines without using layerwise pretraining. In 2011, I developed a fast approximate inference algorithm for the S3C model and used it to win a transfer learning contest organized by Quoc Le and Marc'Aurelio Ranzato. In 2010, I was part of LISA's team that won another transfer learning contest organized by DARPA.
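
For readers unfamiliar with maxout: a maxout unit computes several linear feature detectors and reports only the largest response. A minimal NumPy sketch (shapes and names here are illustrative, not the code from the paper):

    import numpy as np

    def maxout(x, W, b, pool_size):
        # Affine transform, then max over groups of pool_size linear pieces.
        z = x.dot(W) + b                          # (batch, num_units * pool_size)
        z = z.reshape(z.shape[0], -1, pool_size)  # (batch, num_units, pool_size)
        return z.max(axis=2)                      # (batch, num_units)

    x = np.random.randn(5, 10)
    W = np.random.randn(10, 8)            # 4 maxout units with pool_size 2
    b = np.zeros(8)
    h = maxout(x, W, b, pool_size=2)      # h.shape == (5, 4)

Because the maximum of linear pieces is piecewise linear, a maxout unit can approximate an arbitrary convex activation function, which is part of why it pairs well with dropout.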

Conference Papers

Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks [ArXiv]
Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet
ICLR 2014 (Oral presentation)

An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks [ArXiv]
Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio
ICLR 2014

An empirical analysis of dropout in piecewise linear networks [ArXiv]
David Warde-Farley, Ian J. Goodfellow, Aaron Courville, and Yoshua Bengio
ICLR 2014

Multi-prediction deep Boltzmann machines [pdf] [bib] [Code / hyperparameters]
Ian J. Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio
NIPS 2013 (Previously an oral presentation at ICLR 2013 workshops track)

Challenges in representation learning: a report on three machine learning contests [arXiv preprint] [FER-2013 dataset] [MLC-2013 dataset]
Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio
ICONIP 2013. Oral presentation by Dong-Hyun Lee.

Maxout Networks. [pdf] [bib] [code/hyperparameters]
Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio.
ICML 2013 (Full-length oral presentation).

Large-Scale Feature Learning With Spike-and-Slab Sparse Coding. [pdf] [bib]
Ian J. Goodfellow, Aaron Courville, and Yoshua Bengio.
ICML 2012. (Oral presentation at ICML 2012, The Learning Workshop aka Snowbird 2012, and NIPS workshops 2011 in recognition of winning the transfer learning challenge)

Help me help you: interfaces for personal robots. [pdf] [bib]
Ian J. Goodfellow, Nate Koenig, Marius Muja, Caroline Pantofaru, Alexander Sorokin, and Leila Takayama.
HRI 2010.

Measuring invariances in deep networks. [pdf] [bib] [video data]
Ian J. Goodfellow, Quoc V. Le, Andrew M. Saxe, Honglak Lee, and Andrew Y. Ng.
Advances in Neural Information Processing Systems (NIPS) 22.

Journal Papers

Scaling up Spike-and-Slab Models for Unsupervised Feature Learning [preprint pdf]
Ian J. Goodfellow, Aaron Courville, and Yoshua Bengio
IEEE Transactions on Pattern Analysis and Machine Intelligence, special issue on deep learning, 2013

Unsupervised and Transfer Learning Challenge: a Deep Learning approach. [pdf] [bib]
Grégoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian Goodfellow, Erick Lavoie,
Xavier Muller, Guillaume Desjardins, David Warde-Farley, Pascal Vincent, Aaron Courville, and James Bergstra.
Journal of Machine Learning Research, Volume 27.

Not-yet-published workshop papers and arXiv preprints

Pylearn2: a machine learning research library [pdf]
Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Frédéric Bastien, Yoshua Bengio
arXiv 2013

Higher-order Spike-and-Slab Boltzmann Machines for Disentangling Factors of Variation
Aaron Courville, Guillaume Desjardins, Ian Goodfellow, and Yoshua Bengio.
The Learning Workshop, 2012. Snowbird, Utah.

Software

See my GitHub account for most of my recent work.
I wrote most of Pylearn2, a Python library designed to make machine learning research convenient. Its mission is to provide a toolbox of interchangeable parts that give researchers great flexibility in setting up experiments, with enough extensibility that nearly any research idea is feasible within the library. This is in contrast to machine learning libraries such as scikit-learn, which are designed as black boxes that just work. Think of Pylearn2 as user-friendly for machine learning researchers, and scikit-learn as user-friendly for developers who want to apply machine learning.
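To make the contrast concrete, here is the kind of black-box usage scikit-learn is designed around (a small runnable sketch; the data is synthetic):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.random.randn(100, 5)        # synthetic features
    y = (X[:, 0] > 0).astype(int)      # synthetic labels
    clf = LogisticRegression()         # one object that just works
    clf.fit(X, y)
    print(clf.score(X, y))

Pylearn2 instead asks the researcher to assemble an experiment from separate dataset, model, and training-algorithm objects, so that any one of those parts can be swapped out or extended.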
I'm a regular contributor to Theano. With Razvan Pascanu, I introduced the R operator used for Hessian-free optimization and other techniques. I also rewrote the symbolic differentiation system from scratch so that it handles undefined gradients and integer-valued variables correctly, and I wrote the 3D convolution code. I regularly submit bug fixes, speed improvements, and debugging tools.
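As an example of what the R operator buys you: it yields Hessian-vector products without ever materializing the Hessian, which is the core primitive of Hessian-free optimization. A minimal sketch on a toy cost (variable names are illustrative):

    import theano
    import theano.tensor as T

    x = T.vector('x')
    v = T.vector('v')
    cost = T.sum(x ** 2)              # toy cost with Hessian H = 2I
    g = T.grad(cost, x)               # gradient: 2x
    Hv = T.Rop(g, x, v)               # Hessian-vector product via the R operator
    f = theano.function([x, v], Hv)
    print(f([1.0, 2.0], [1.0, 0.0]))  # -> [2., 0.]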
At Stanford, I was a core developer of the STAIR Vision Library.
I've also contributed to ROS and OpenCV.

Teaching

I have been a teaching assistant / course assistant for the following courses:
IFT 6266 at UdeM, Representation Learning, taught by Yoshua Bengio
CS 143 at Stanford, Compilers, taught by a video recording of Jerry Cain
CS 221 at Stanford, Introduction to Artificial Intelligence, taught by Andrew Ng
CS 229 at Stanford, Machine Learning, taught by Andrew Ng