About Me

I am a Research Scientist at Google Research, New York. I work on machine learning research with impact on Google products.

Research Interests

  • LLMs for regression, scoring, and ranking tasks.
  • Loss function design, e.g., ranking, distillation, and label smoothing.
  • Fundamental research on deep learning and LLMs, e.g., memorization and the bias-variance trade-off.

News and Highlights

  • Action Editor (equivalent to an Area Chair) for ACL and EMNLP, 2024–2025, in the Machine Learning for NLP track.
  • Co-organized SCALE-LLM 2024, the workshop on the Scaling Behavior of Large Language Models, co-located with EACL 2024.
  • Reviewer for ICML, ICLR, and NeurIPS, 2020–2025. Outstanding Reviewer at ICML 2022 (top 10%).

Recent Publications

TRACT: Regression-Aware Fine-tuning Meets Chain-of-Thought Reasoning for LLM-as-a-Judge
Cheng-Han Chiang, Hung-yi Lee, Michal Lukasik. In ACL (main), 2025.
Bipartite Ranking From Multiple Labels: On Loss Versus Label Aggregation
Michal Lukasik, Lin Chen, Harikrishna Narasimhan, Aditya Krishna Menon, Wittawat Jitkrittum, Felix X. Yu, Sashank J. Reddi, Gang Fu, Mohammadhossein Bateni, Sanjiv Kumar. In ICML, 2025.
Better Autoregressive Regression via Regression-aware Fine-tuning
Michal Lukasik, Zhao Meng, Harikrishna Narasimhan, Yin-Wen Chang, Aditya Krishna Menon, Felix Yu, Sanjiv Kumar. In ICLR (spotlight), 2025.
Regression Aware Inference with LLMs
Michal Lukasik, Harikrishna Narasimhan, Aditya Krishna Menon, Felix Yu, Sanjiv Kumar. In EMNLP (findings), 2024.
It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models
Lin Chen, Michal Lukasik, Wittawat Jitkrittum, Chong You, Sanjiv Kumar. In ICLR (spotlight), 2024.