I am a Research Scientist at Google Research, New York. I work on fundamental research in machine learning with impact on Google Ads products.

My research interests include machine learning theory, natural language processing (NLP), and deep retrieval.

News and Highlights

  • I am serving as an Action Editor (equivalent to an Area Chair) for ACL 2024 in the Machine Learning for NLP track.
  • I am co-organizing the workshop on Scaling Behavior of Large Language Models (SCALE-LLM 2024), co-located with EACL 2024.

Recent Publications

It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models
Lin Chen, Michal Lukasik, Wittawat Jitkrittum, Chong You, Sanjiv Kumar. In ICLR (spotlight presentation), 2024.
Two-stage LLM Fine-tuning with Less Specialization and More Generalization
Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix Yu, Cho-Jui Hsieh, Inderjit S Dhillon, Sanjiv Kumar. In ICLR, 2024.
ResMem: Learn what you can and memorize the rest
Zitong Yang, Michal Lukasik, Vaishnavh Nagarajan, Zonglin Li, Ankit Rawat, Manzil Zaheer, Aditya Menon, Sanjiv Kumar. In NeurIPS, 2023.
Large language models with controllable working memory
Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, Sanjiv Kumar. In ACL (findings), 2023.
Robust distillation for worst-class performance: on the interplay between teacher and student objectives
Serena Wang, Harikrishna Narasimhan, Yichen Zhou, Sara Hooker, Michal Lukasik, Aditya Krishna Menon. In UAI, 2023.
What do larger image classifiers memorise?
Michal Lukasik, Vaishnavh Nagarajan, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar. arXiv preprint, 2023.
Teacher's pet: understanding and mitigating biases in distillation
Michal Lukasik, Srinadh Bhojanapalli, Aditya Krishna Menon, Sanjiv Kumar. In TMLR, 2022.