Ph.D. Candidate, Computer Science
Stanford Artificial Intelligence Laboratory
Statistical Machine Learning Group
Curriculum Vitae
I am a fifth-year Ph.D. candidate in Computer Science at Stanford University, advised by Stefano Ermon. I completed my undergraduate studies in the Department of Computer Science and Technology at Tsinghua University, where I worked with Jun Zhu and Lawrence Carin.
My research centers on deep unsupervised learning, with applications in deep generative modeling, representation learning, and (inverse) reinforcement learning. Recently, I have been interested in the following topics:
- Information-theoretic approaches to machine learning and representation learning [1, 2, 3, 4]
- Improvements to generative modeling and statistical inference [1, 2, 3, 4]
- Learning complex behaviors and intentions from demonstrations [1, 2, 3, 4]
- Societal issues in machine learning, such as fairness and calibration [1, 2, 3]
Email: tsong [at] cs [dot] stanford [dot] edu
Teaching
- CS228, Probabilistic Graphical Models (Winter 2020, Head TA)
- CS236, Deep Generative Models (Fall 2018, TA)
Publications
2021
- [37] Denoising Diffusion Implicit Models
  ICLR 2021, In International Conference on Learning Representations. [code]
- [36] Negative Data Augmentation
  ICLR 2021, In International Conference on Learning Representations.
- [35] Improved Autoregressive Modeling with Distribution Smoothing
  ICLR 2021, In International Conference on Learning Representations (Oral presentation).
2020
- [34] Multi-label Contrastive Predictive Coding
  NeurIPS 2020, In Advances in Neural Information Processing Systems (Oral presentation).
- [33] Autoregressive Score Matching
  NeurIPS 2020, In Advances in Neural Information Processing Systems.
- [32] Belief Propagation Neural Networks
  NeurIPS 2020, In Advances in Neural Information Processing Systems.
- [31] Robust and On-the-fly Dataset Denoising for Image Classification
  ECCV 2020, In European Conference on Computer Vision. [slides]
- [30] Permutation Invariant Graph Generation via Score-Based Generative Modeling
  AISTATS 2020, In International Conference on Artificial Intelligence and Statistics. [code]
- [29] Gaussianization Flows
  AISTATS 2020, In International Conference on Artificial Intelligence and Statistics. [code]
- [28] Training Deep Energy-Based Models with f-Divergence Minimization
  ICML 2020, In International Conference on Machine Learning.
- [27] Bridging the Gap Between f-GANs and Wasserstein GANs
  ICML 2020, In International Conference on Machine Learning. [slides] [code]
- [26] Domain Adaptive Imitation Learning
  ICML 2020, In International Conference on Machine Learning.
- [25] Understanding the Limitations of Variational Mutual Information Estimators
  ICLR 2020, In International Conference on Learning Representations. [slides] [code]
- [24] A Theory of Usable Information under Computational Constraints
  ICLR 2020, In International Conference on Learning Representations (Oral presentation).
- [23] Multi-agent Adversarial Inverse Reinforcement Learning with Latent Variables
  AAMAS 2020, In International Conference on Autonomous Agents and MultiAgent Systems (extended abstract).
- [22] Imitation with Neural Density Models
  Preprint, arXiv:2010.09808.
- [21] Privacy Preserving Recalibration under Domain Shift
  Preprint, arXiv:2008.09643.
- [20] Experience Replay with Likelihood-free Importance Weights
  Preprint, arXiv:2006.13169.
2019
- [19] Bias Correction of Learned Generative Models using Likelihood-free Importance Weighting
  NeurIPS 2019, In Advances in Neural Information Processing Systems.
- [18] Calibrated Model-based Deep Reinforcement Learning
  ICML 2019, In International Conference on Machine Learning. [code]
- [17] Multi-agent Adversarial Inverse Reinforcement Learning
  ICML 2019, In International Conference on Machine Learning. [code]
- [16] InfoVAE: Balancing Learning and Inference in Variational Autoencoders
  AAAI 2019, In AAAI Conference on Artificial Intelligence.
- [15] Learning Controllable Fair Representations
  AISTATS 2019, In International Conference on Artificial Intelligence and Statistics. [code]
- [14] Unsupervised Out-of-Distribution Detection with Batch Normalization
  Preprint, arXiv:1910.09115.
2018
- [13] Multi-Agent Generative Adversarial Imitation Learning
  NeurIPS 2018, In Advances in Neural Information Processing Systems. [code]
- [12] Bias and Generalization in Deep Generative Models: An Empirical Study
  NeurIPS 2018, In Advances in Neural Information Processing Systems (Spotlight presentation). [code]
- [11] The Information Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Models
  UAI 2018, In Conference on Uncertainty in Artificial Intelligence (Oral presentation). [code]
- [10] Accelerating Natural Gradient with Higher-Order Invariance
  ICML 2018, In International Conference on Machine Learning. [code]
- [9] Adversarial Constraint Learning for Structured Prediction
  IJCAI 2018, In International Joint Conference on Artificial Intelligence. [code]
- [8] Learning with weak supervision from physics and data-driven constraints
  AI Magazine.
2017
- [7] A-NICE-MC: Adversarial training for MCMC
  NeurIPS 2017, In Advances in Neural Information Processing Systems. [slides] [code] [blog]
- [6] Learning Hierarchical Features from Deep Generative Models
  ICML 2017, In International Conference on Machine Learning. [code]
- [5] InfoGAIL: Interpretable imitation learning from visual demonstrations
  NeurIPS 2017, In Advances in Neural Information Processing Systems. [code]
- [4] Towards deeper understanding of variational autoencoding models
  Preprint, arXiv:1702.08658.
2016
- [3] Factored Temporal Sigmoid Belief Networks for Sequence Learning
  ICML 2016, In International Conference on Machine Learning.
- [2] Discriminative nonparametric latent feature relational models with data augmentation
  AAAI 2016, In AAAI Conference on Artificial Intelligence.
- [1] Max-margin Nonparametric Latent Feature Models for Link Prediction
  Preprint, arXiv:1602.07428.
Professional Services
Journal reviewer: IEEE TPAMI, JAIR, IEEE TIT, ACM TIST
Conference reviewer / Program committee: ICML (2019, 2020), NeurIPS (2019, 2020), ICLR (2018, 2019, 2020, 2021), COLT (2019), UAI (2019, 2020), CVPR (2020, 2021), ECCV (2020), ICCV (2019), AAAI (2021), ACML (2018, 2019), WACV (2020)
Workshop organization:
- NeurIPS 2019 Workshop on Information Theory and Machine Learning (chair)
- DALI 2018 Workshop on Generative Models and Reinforcement Learning (chair)
Awards and Fellowships
- Qualcomm Innovation Fellowship (QInF 2018, 4.6%)
- Stanford School of Engineering Fellowship (2016)
- Google Excellence Scholarship (2015)
- Outstanding Undergraduate, China Computer Federation (2015)
- Outstanding Winner, Interdisciplinary Contest in Modeling (2015, 0.4%)
- Zhong Shimo Scholarship (2013, 0.75%)
Acknowledgements: based on the al-folio template by Maruan Al-Shedivat.