About
Hi, I’m an incoming PhD student at the University of Michigan, co-advised by Prof. Lu Wang and Prof. Honglak Lee. I earned my MS from KAIST advised by Prof. Minjoon Seo. My research focuses on making large language models (LLMs) more efficient and reliable, with a particular interest in how they acquire and utilize knowledge under real-world constraints.
Previously, I spent eight years in industry as a research scientist at Samsung Research and ESTsoft, where I worked on applied LLM systems including Galaxy AI, with a focus on LLM pre-/post-training, information retrieval, and semantic parsing. I also interned at Microsoft Research, where I worked with Yeyun Gong, Lei Ji, and Qi Chen, and was honored with the Stars of Tomorrow award at Microsoft Research Asia.
My research interests lie in making large language models more efficient and practical:
- Efficient training and inference methods for LLMs, including cost-efficient inference (NAACL 2025), distillation (ICML 2025, NAACL 2025), and data mixture optimization (ACL 2026F).
- Real-world impact: advancing agent adaptability (NAACL 2025), scientific discovery (GenBio 2025, BioNLP 2025), and real-time information access (NAACL 2024).
Work Experience