About Me
I am an Applied Scientist at Amazon Alexa working on large-scale language understanding and machine learning systems. I received my Ph.D. in Computer Science from the University of Illinois Chicago in 2025, where I was advised by Prof. Xinhua Zhang. My research broadly centers on trustworthy machine learning in temporal environments, including reinforcement learning, continual learning, robust sequential decision-making, and large language models.
My work focuses on how learning systems behave under non-stationarity, distribution shift, and adversarial perturbations. I am particularly interested in developing robust and adaptive learning frameworks that remain reliable as data distributions, environments, and optimization objectives evolve over time. This includes research on continual learning, robust reinforcement learning, data poisoning in offline-to-online RL, and learning-based data acquisition, all aimed at improving both reliability and sample efficiency in dynamic settings.
At Amazon, I build large-scale intent understanding and multilingual language systems for Alexa, with recent work spanning human-in-the-loop LLM data curation, scalable intent classification, and multilingual transfer for production language understanding. More broadly, I am interested in bridging foundational machine learning research and real-world deployment, especially in settings where reliability, scale, and temporal dynamics all matter.
Before UIC, I worked with Prof. Eli Upfal and Dr. Cyrus Cousins at Brown University, where I earned my M.S. in Computer Science. I received my B.S. in Mathematics and Computer Science from Penn State and also worked with Prof. Jeremiah Blocki as a visiting research assistant at Purdue University.
