Publications
-
Does Progress On Object Recognition Benchmarks Improve Generalization on Crowdsourced Global Data?
Megan Richards, Polina Kirichenko, Diane Bouchacourt, Mark Ibrahim
ICML Data-centric Machine Learning Research Workshop, 2023;
International Conference on Learning Representations (ICLR), 2024
[arXiv]
-
Understanding the Detrimental Class-level Effects of Data Augmentation
Polina Kirichenko, Mark Ibrahim, Randall Balestriero, Diane Bouchacourt, Rama Vedantam, Hamed Firooz, Andrew Gordon Wilson
ICML workshop on Spurious Correlations, Invariance, and Stability, 2023;
Neural Information Processing Systems (NeurIPS), 2023
[arXiv]
-
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
Polina Kirichenko*, Pavel Izmailov*, Andrew Gordon Wilson
ICML workshop on Spurious Correlations, Invariance, and Stability, 2022; oral presentation
International Conference on Learning Representations (ICLR), 2023; spotlight (notable-top-25%)
[arXiv, code]
-
On Feature Learning in the Presence of Spurious Correlations
Pavel Izmailov*, Polina Kirichenko*, Nate Gruver*, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2022
[arXiv, code]
-
Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers
Wanqian Yang, Polina Kirichenko, Micah Goldblum, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2022
[arXiv]
-
Does Knowledge Distillation Really Work?
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2021
[arXiv]
-
Task-agnostic Continual Learning with Hybrid Probabilistic Models
Polina Kirichenko, Mehrdad Farajtabar, Dushyant Rao, Balaji Lakshminarayanan, Nir Levine, Ang Li, Huiyi Hu, Andrew Gordon Wilson, Razvan Pascanu
ICML workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, 2021; spotlight talk
[arXiv, poster]
-
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
Polina Kirichenko*, Pavel Izmailov*, Andrew Gordon Wilson
ICML workshop on Invertible Neural Networks and Normalizing Flows, 2020;
Neural Information Processing Systems (NeurIPS), 2020
[arXiv, poster]
-
Semi-Supervised Learning with Normalizing Flows
Pavel Izmailov*, Polina Kirichenko*, Marc Finzi*, Andrew Gordon Wilson
ICML workshop on Invertible Neural Networks and Normalizing Flows, 2019;
14th Women in Machine Learning workshop (co-located with NeurIPS), 2019;
International Conference on Machine Learning (ICML), 2020
[arXiv, PMLR, code, poster]
-
Subspace Inference for Bayesian Deep Learning
Pavel Izmailov*, Wesley Maddox*, Polina Kirichenko*, Timur Garipov*, Dmitry Vetrov, Andrew Gordon Wilson
ICML workshop on Uncertainty & Robustness in Deep Learning (contributed talk), 2019
Uncertainty in Artificial Intelligence (UAI), 2019
[arXiv, code, UDL poster, UAI poster]
-
SWALP: Stochastic Weight Averaging in Low-Precision Training
Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa
International Conference on Machine Learning (ICML), 2019
[arXiv, PMLR, code]
Invited Talks
-
Towards Robust and Reliable Deep Learning
Princeton, Visual AI Lab seminar
-
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
Oral presentation at ICML 2022 Workshop on Spurious Correlations, Invariance, and Stability
[video]
-
Applications of Normalizing Flows: Semi-Supervised Learning, Anomaly Detection, and Continual Learning
Keynote talk at ICML 2021 Workshop on Representation Learning for Finance and E-Commerce Applications
[video]
-
Task-agnostic Continual Learning with Hybrid Probabilistic Models
Spotlight talk at INNF+ workshop at ICML 2021
[video]
-
Understanding Semantic Anomaly Detection with Generative Networks
ML Collective 2021, Deep Learning: Classics and Trends
[slides]
-
Normalizing Flows for Anomaly Detection
Technical University of Denmark
[video]
-
Anomaly Detection via Generative Models
Open Data Science DataFest 2020, Uncertainty in ML Workshop
[video]
-
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
INNF+ workshop at ICML 2020; NeurIPS 2020
[ICML video, NeurIPS video]
-
How do we build neural networks we can trust?
Broad Institute of MIT and Harvard
[video, slides]
-
Subspace Inference for Bayesian Deep Learning
ICML workshop on Uncertainty & Robustness in Deep Learning
[video, slides]