I am a machine learning researcher broadly interested in building trustworthy and reliable foundation models.
Currently, I am a researcher at FAIR (Meta AI) and a visiting researcher at the Princeton Visual AI Lab.
I recently completed my PhD at the New York University Center for Data Science, working with
Professor Andrew Gordon Wilson. I was also a
Visiting Researcher at FAIR (Meta AI), working with
Mark Ibrahim and
Diane Bouchacourt.
During my PhD, I interned at Meta AI, Google DeepMind, and Cold Spring Harbor Laboratory.
Prior to that, I received a BS in Computer Science from the Higher School of Economics,
where I worked with Professor Dmitry Vetrov.
I was also a visiting research student at EPFL with Professors
Martin Jaggi and Dan Alistarh.
I am a recipient of the Google Generation Scholarship and the DeepMind fellowship.
I am excited about a broad range of topics related to the generalization, reliability, and societal impact of AI, including but not limited to:
- Robustness and out-of-distribution generalization;
- Fairness and addressing biases in large-scale multi-modal models and data;
- Uncertainty estimation, calibration and out-of-distribution detection;
- Understanding generalization and representation learning mechanisms in models.
If you are an undergraduate or master's student interested in collaborating on research projects or looking for advice on PhD applications, please reach out!
Selected Publications
-
Decomposed Evaluations of Geographic Disparities in Text-to-Image Models
Abhishek Sureddy, Dishant Padalia, Nandhinee Periyakaruppa, Oindrila Saha, Adina Williams, Adriana Romero-Soriano, Megan Richards*, Polina Kirichenko*, Melissa Hall*
ICML Trustworthy Multi-modal Foundation Models Workshop, 2024; Outstanding paper award and oral presentation
[arXiv]
-
Modeling Caption Diversity in Contrastive Vision-Language Pretraining
Samuel Lavoie, Polina Kirichenko*, Mark Ibrahim*, Mahmoud Assran, Andrew Gordon Wilson, Aaron Courville, Nicolas Ballas
International Conference on Machine Learning (ICML), 2024
[arXiv]
-
Does Progress On Object Recognition Benchmarks Improve Generalization on Crowdsourced Global Data?
Megan Richards, Polina Kirichenko, Diane Bouchacourt, Mark Ibrahim
ICML Data-centric Machine Learning Research Workshop, 2023;
International Conference on Learning Representations (ICLR), 2024
[arXiv]
-
Understanding the Detrimental Class-level Effects of Data Augmentation
Polina Kirichenko, Mark Ibrahim, Randall Balestriero, Diane Bouchacourt, Rama Vedantam, Hamed Firooz, Andrew Gordon Wilson
ICML Workshop on Spurious Correlations, Invariance, and Stability, 2023;
Neural Information Processing Systems (NeurIPS), 2023
[arXiv]
-
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
Polina Kirichenko*, Pavel Izmailov*, Andrew Gordon Wilson
ICML Workshop on Spurious Correlations, Invariance, and Stability, 2022; oral presentation
International Conference on Learning Representations (ICLR), 2023; spotlight (notable-top-25%)
[arXiv, code]
-
On Feature Learning in the Presence of Spurious Correlations
Pavel Izmailov*, Polina Kirichenko*, Nate Gruver*, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2022
[arXiv, code]
-
Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers
Wanqian Yang, Polina Kirichenko, Micah Goldblum, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2022
[arXiv]
-
Does Knowledge Distillation Really Work?
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2021
[arXiv]
-
Task-agnostic Continual Learning with Hybrid Probabilistic Models
Polina Kirichenko, Mehrdad Farajtabar, Dushyant Rao, Balaji Lakshminarayanan, Nir Levine, Ang Li, Huiyi Hu, Andrew Gordon Wilson, Razvan Pascanu
ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models (spotlight talk), 2021
[arXiv,
poster]
-
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
Polina Kirichenko*, Pavel Izmailov*, Andrew Gordon Wilson
ICML Workshop on Invertible Neural Networks and Normalizing Flows, 2020
Neural Information Processing Systems (NeurIPS), 2020
[arXiv,
poster]
-
Semi-Supervised Learning with Normalizing Flows
Pavel Izmailov*, Polina Kirichenko*, Marc Finzi*, Andrew Gordon Wilson
ICML Workshop on Invertible Neural Networks and Normalizing Flows, 2019
International Conference on Machine Learning (ICML), 2020
[arXiv,
PMLR,
code,
poster]
-
Subspace Inference for Bayesian Deep Learning
Pavel Izmailov*, Wesley Maddox*, Polina Kirichenko*, Timur Garipov*, Dmitry Vetrov, Andrew Gordon Wilson
ICML Workshop on Uncertainty & Robustness in Deep Learning (contributed talk), 2019
Uncertainty in Artificial Intelligence (UAI), 2019
[arXiv,
code,
UDL poster,
UAI poster]
Selected Invited Talks
-
Addressing robustness to biases in vision foundation models
Invited talk at the ECCV 2024 Workshop on Uncertainty Quantification for Computer Vision
-
Towards Robust and Reliable Deep Learning
Princeton Visual AI Lab seminar, 2023
-
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
Oral presentation at the ICML 2022 Workshop on Spurious Correlations, Invariance, and Stability
Spotlight presentation at ICLR 2023
[video]
-
Applications of Normalizing Flows: Semi-Supervised Learning, Anomaly Detection, and Continual Learning
Keynote talk at the ICML 2021 Workshop on Representation Learning for Finance and E-Commerce Applications
[video]
-
Understanding Semantic Anomaly Detection with Generative Networks
ML Collective, Deep Learning: Classics and Trends, 2021
[slides]
-
Normalizing Flows for Anomaly Detection
Technical University of Denmark
[video]
-
Anomaly Detection via Generative Models
Open Data Science DataFest 2020, Uncertainty in ML Workshop
[video]
-
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
INNF+ workshop at ICML 2020; NeurIPS 2020
[ICML video,
NeurIPS video]
-
How do we build neural networks we can trust?
Broad Institute of MIT and Harvard
[video,
slides]
-
Subspace Inference for Bayesian Deep Learning
Contributed talk at the ICML Workshop on Uncertainty & Robustness in Deep Learning, 2019
[video,
slides]