I am a Research Scientist at Google Research New York. I work on fundamental research in machine learning with impact on Google Ads products.

My research interests include machine learning theory, NLP, and deep retrieval.

News and Highlights

  • Action Editor (equivalent to an Area Chair) for ACL 2024 in the Machine Learning for NLP track.
  • Co-organizer of the workshop on Scaling Behavior of Large Language Models (SCALE-LLM 2024), co-located with EACL 2024.
  • Outstanding reviewer at ICML 2022 (top 10%).

Recent Publications

What do larger image classifiers memorise?
Michal Lukasik, Vaishnavh Nagarajan, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar. In TMLR, 2024.
It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models
Lin Chen, Michal Lukasik, Wittawat Jitkrittum, Chong You, Sanjiv Kumar. In ICLR (spotlight presentation), 2024.
Two-stage LLM Fine-tuning with Less Specialization and More Generalization
Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix Yu, Cho-Jui Hsieh, Inderjit S Dhillon, Sanjiv Kumar. In ICLR, 2024.
ResMem: Learn what you can and memorize the rest
Zitong Yang, Michal Lukasik, Vaishnavh Nagarajan, Zonglin Li, Ankit Singh Rawat, Manzil Zaheer, Aditya Krishna Menon, Sanjiv Kumar. In NeurIPS, 2023.
Large language models with controllable working memory
Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, Sanjiv Kumar. In ACL (Findings), 2023.
Robust distillation for worst-class performance: on the interplay between teacher and student objectives
Serena Wang, Harikrishna Narasimhan, Yichen Zhou, Sara Hooker, Michal Lukasik, Aditya Krishna Menon. In UAI, 2023.