I am a postdoctoral researcher at Mila - Quebec AI Institute and a postdoctoral fellow at McGill University in Montréal 🇨🇦. Before that, I was a postdoctoral researcher at the Language Science and Technology Department of Saarland University in Saarbrücken 🇩🇪, where I also did my PhD in Computer Science.
The goal of my research is to enable reliable, controllable, and trustworthy Natural Language Processing (NLP) systems, particularly the Large Language Models (LLMs) that millions of people interact with daily. After pre-training, LLMs require significant adaptation (also known as fine-tuning or post-training) to become specialized, safe, and aligned with specific requirements. My research program centers on building a fundamental, scientific understanding of this crucial adaptation stage.
For our work, my collaborators and I have received a Best Paper Award 🏆 at COLING 2022, the Best Theme Paper Award 🏆 at ACL 2023, and the Most Interesting Paper Award 🏆 at the BabyLM Challenge 2023.
In my free time, I enjoy CrossFit 🏋️, playing soccer ⚽️, and occasionally baking 🍰.
Latest News
- We have a new preprint on the impact of data frequency on LLM unlearning. Surprise: Not all data is unlearned equally!
- Check out our new preprint analyzing the reasoning chains of DeepSeek-R1 💭.
- Our workshop on Actionable Interpretability has been accepted at ICML 2025 🎉!
- I attended the Simons Institute workshop on The Future of Language Models and Transformers in Berkeley 🇺🇸.