Welcome to Kaiyi Ji's Homepage

About Me

I am an assistant professor in the Department of Computer Science and Engineering at the University at Buffalo, The State University of New York. In 2022, I was a postdoctoral research fellow in the Electrical Engineering and Computer Science Department of the University of Michigan, Ann Arbor, working with Prof. Lei Ying. I received my Ph.D. from the Electrical and Computer Engineering Department of The Ohio State University in December 2021, advised by Prof. Yingbin Liang. I was also a visiting student research collaborator in the Department of Electrical Engineering at Princeton University. I received my B.S. degree from the University of Science and Technology of China in 2016.

Prospective students: I do not have open Ph.D. positions at the moment. However, interns and visiting students are welcome! Please send me an email with your CV and transcript, or fill out this form.

Research

I work at the intersection of optimization, machine learning, and wireless networking. In particular, I work on bilevel optimization, multi-task learning, transfer learning (meta-learning, continual learning, machine unlearning), distributed learning over networks, and stochastic optimization and control, with applications in signal processing, communication, and image processing. Here are selected publications showcasing my current interests:

Continual Learning/Machine Unlearning
Multi-Objective/Task Learning
Bilevel Optimization
Distributed Learning over Networks

Recent News!

  • [Publication] 05/2024 Two papers, on continual learning theory and multi-task learning, accepted to ICML 2024. Congratulations to Hao, Meng, and our collaborators.

  • [Manuscript] 02/2024 One manuscript, “Fair Resource Allocation in Multi-Task Learning”, is available online. We connect fair resource allocation in wireless communication with multi-task learning and propose an optimization method named FairGrad, which implements different notions of fairness and achieves SOTA performance among gradient-manipulation MTL methods, with a performance guarantee. The idea has also been incorporated into existing MTL methods, with significant improvements observed (a simplified sketch of the gradient-aggregation idea appears after this news list). Check out our code: Click.

  • [Manuscript] 02/2024 One manuscript, “Discriminative Adversarial Unlearning”, is available online. We introduce a novel machine unlearning framework built on an attacker network and a defender network: the attacker teases out information about the data to be unlearned, and the defender unlearns so as to protect the network against the attack (a simplified training-loop sketch appears after this news list). We also incorporate a self-supervised objective to address feature-space discrepancies between the forget and validation sets. The method closely approximates the ideal benchmark of retraining from scratch in a variety of scenarios. Code is available at Click.

  • [Publication] 01/2024 One paper on resource-efficient self-supervised contrastive learning accepted to ICLR 2024! We achieve competitive or SOTA results on ImageNet and other standard datasets with an impressively small batch size, and the method also shows promising downstream viability for transfer learning and few-shot learning. Code and paper are coming soon! Big congratulations to Rohan and the other coauthors!

  • [Award] 12/2023 Glad to receive the CSE Junior Faculty Research Award from UB CSE. Thanks to the department and my students!

  • [Talk] 12/2023 Glad to visit RPI ECSE and give a talk on bilevel optimization for machine learning and beyond. Many thanks to Tianyi for the invitation and hospitality!

  • [Talk] 10-11/2023 Glad to give multiple invited talks at INFORMS 2023 (Phoenix), Asilomar 2023 (Pacific Grove), and MobiHoc 2023 (Washington, DC) on our recent progress in bilevel optimization for continual learning and network resource allocation.
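
For the FairGrad news item above, here is a minimal sketch of the general gradient-manipulation idea for multi-task learning: compute per-task gradients, pick nonnegative task weights via a simple alpha-fair fixed point, and step along the weighted combination. The weighting rule and all names here are illustrative assumptions, not the exact FairGrad algorithm from the paper.

```python
# Illustrative sketch only; the actual FairGrad update rule differs in details.
import numpy as np

def alpha_fair_weights(G, alpha=2.0, n_iters=100, eps=1e-8):
    """Pick task weights w >= 0 so that the per-task improvement rates
    g_i^T d, with d = G^T w, are balanced in an alpha-fair sense.
    G: (num_tasks, dim) array of per-task gradients."""
    K = G @ G.T                          # Gram matrix of task gradients
    w = np.ones(G.shape[0]) / G.shape[0]
    for _ in range(n_iters):
        rates = np.maximum(K @ w, eps)   # g_i^T d for each task i
        w = rates ** (-alpha)            # upweight tasks improving slowly
        w /= w.sum()                     # keep weights on the simplex
    return w

def fair_mtl_step(params, task_grads, lr=1e-2, alpha=2.0):
    """One shared-parameter update from a list of per-task gradients."""
    G = np.stack(task_grads)             # (num_tasks, dim)
    w = alpha_fair_weights(G, alpha)
    d = G.T @ w                          # common descent direction
    return params - lr * d
```

In practice, task_grads would come from backpropagating each task's loss through the shared parameters; see the paper and repository for the actual update and its performance guarantee.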
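
For the “Discriminative Adversarial Unlearning” item, here is a minimal PyTorch-style sketch of the attacker/defender interplay described above. The model interfaces (e.g., a hypothetical defender.features method), loss choices, and loop structure are assumptions for illustration; the paper's actual objectives, including its self-supervised term, differ in the details.

```python
# Illustrative sketch of an attacker/defender unlearning round; not the
# paper's exact training recipe.
import torch
import torch.nn.functional as F

def adversarial_unlearn_step(defender, attacker, opt_d, opt_a,
                             forget_batch, retain_batch):
    """defender: classifier with a hypothetical .features(x) method.
    attacker: binary classifier on features ("was this sample forgotten?")."""
    x_f, _ = forget_batch
    x_r, y_r = retain_batch

    # Attacker step: learn to tell forget-set features from retain-set ones.
    with torch.no_grad():
        feats = torch.cat([defender.features(x_f), defender.features(x_r)])
    labels = torch.cat([torch.ones(len(x_f)), torch.zeros(len(x_r))])
    attack_loss = F.binary_cross_entropy_with_logits(
        attacker(feats).squeeze(1), labels)
    opt_a.zero_grad(); attack_loss.backward(); opt_a.step()

    # Defender step: stay accurate on retained data while making the
    # forget-set features indistinguishable to the attacker.
    retain_loss = F.cross_entropy(defender(x_r), y_r)
    fool_loss = F.binary_cross_entropy_with_logits(
        attacker(defender.features(x_f)).squeeze(1), torch.zeros(len(x_f)))
    defender_loss = retain_loss + fool_loss
    opt_d.zero_grad(); defender_loss.backward(); opt_d.step()
    return attack_loss.item(), defender_loss.item()
```

Alternating these two steps drives the defender's forget-set representations toward being indistinguishable from retained data, which is what lets the method approximate retraining from scratch.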