Welcome to Kaiyi Ji's Homepage

About Me

I am an Assistant Professor in Computer Science and Engineering at the University at Buffalo. Previously, I was a Postdoctoral Researcher in Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, and a Visiting Student Research Collaborator in Electrical Engineering at Princeton University. I received my Ph.D. in Electrical and Computer Engineering from The Ohio State University, supervised by Prof. Yingbin Liang, and my B.E. from the University of Science and Technology of China. I have received the UB CSE Junior Faculty Research Award (2023), the NSF CAREER Award (2025), the CSE Excellence in Research Award (2025), and the SEAS Early Career Researcher of the Year Award (2026).

Research Interests

I am particularly interested in the theoretical and algorithmic foundations of optimization and machine learning. My current focus is on:

  • Optimization & theory for modern ML: parameter-efficient optimization, multi-objective and bilevel optimization.

  • Foundations of AI and ML: multi-task learning, continual learning, and meta-learning.

  • Efficient foundation-model adaptation: PEFT/LoRA for multi-task and federated settings, parameter-efficient continual tuning, multi-objective alignment.

  • Applied side: methods validated in robotics, recommendation, and NLP applications.

Recent News!

  • [Publication] 05/2026 Two papers, on lower bounds for bilevel optimization and backward knowledge transfer for LLMs, have been accepted to ICML 2026. Congrats to Anushka on her first paper at a top ML conference!

  • [Award] 04/2026 I am honored to receive the SEAS Early Career Researcher of the Year Award from the School of Engineering and Applied Sciences.

  • [Publication] 04/2026 Our paper, Rethinking Parameter Sharing for LLM Fine-Tuning with Multiple LoRAs <GitHub>, has been accepted to ACL 2026 Findings. Congrats to Hao!

    • This work clarifies an important detail in prior multi-LoRA sharing results: the similarity in A may largely stem from identical initialization rather than genuinely shared knowledge, offering a useful diagnostic for future method design. It further proposes a simple asymmetric design (sharing B) that improves task balance without degrading average performance, and extends this approach to federated fine-tuning.

  • [Library] 02/2026 Our open-source multi-task learning-to-rank library DeepMTL2R <paper> <code>, developed with Amazon, is now online.

  • [Award] 01/2026 My proposal on continual learning received a $15,000 award from the Research, Innovation and Economic Development Grant at UB. I sincerely thank UB for the support.

  • [Award] 12/2025 I received the CSE Excellence in Research Award from the department. I am truly grateful to my fantastic students for their great contributions and to the department for its support.

  • [Service] 08/2025 I will serve as an Area Chair for ICLR 2026.

  • [Award] 04/2025 Glad to receive the NSF CAREER Award [news].
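
The asymmetric multi-LoRA design mentioned in the ACL 2026 Findings item above (a shared B matrix with task-specific A matrices) can be sketched minimally as follows. This is an illustrative sketch only, not the paper's implementation; all dimensions, variable names, and the zero initialization of A are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 16, 4  # hypothetical layer dimensions and LoRA rank

# Asymmetric sharing: one B matrix shared across all tasks,
# with a separate low-rank A matrix per task.
B_shared = rng.normal(size=(d, r)) * 0.01
A_tasks = {t: np.zeros((r, k)) for t in ("task1", "task2")}

def lora_delta(task):
    """Low-rank weight update Delta W = B_shared @ A_task for one task."""
    return B_shared @ A_tasks[task]

# With zero-initialized A, each task's initial update is exactly zero,
# so fine-tuning starts from the unmodified base weights.
print(lora_delta("task1").shape)  # (16, 16)
```

Here only the A matrices are task-specific, so the per-task parameter count is r * k rather than r * (d + k); gradients with respect to B_shared aggregate signal from every task.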

Recent Featured Works

Selected Publications