Welcome to Kaiyi Ji's Homepage
About Me
I am an assistant professor at the Department of Computer Science and Engineering of the University at Buffalo, The State University of New York.
I was a postdoctoral research fellow at the Electrical Engineering and Computer Science Department of the University of Michigan, Ann Arbor, in 2022,
working with Prof. Lei Ying.
I received my Ph.D. degree from the Electrical and Computer Engineering Department of The Ohio State University in December 2021, advised by
Prof. Yingbin Liang.
I was a visiting student research collaborator in the Department of Electrical Engineering at Princeton University, working with Prof. H. Vincent Poor.
Previously, I obtained my B.S. degree from the University of Science and Technology of China in 2016.
Research
I work at the intersection of optimization, machine learning, and networks, spanning theory, algorithms, and applications.
My main research areas include:
Bilevel optimization and its application in deep learning
Meta-learning, multi-task learning and continual learning
Large-scale stochastic optimization
Federated learning and communication networks
To Prospective Students
I am looking for highly motivated students with strong mathematical backgrounds and/or programming skills in machine learning and optimization to work with me.
Intern and visiting students are also highly welcome!
One Ph.D. position is available immediately. Please fill out the following form if you are interested; I will contact you if there is a good match!
Recent News!
[Manuscript] 02/2024 Our manuscript “Fair Resource Allocation in Multi-Task Learning” is available online. We connect fair resource allocation in wireless communication with multi-task learning and propose an optimization method named FairGrad. FairGrad incorporates different notions of fairness and achieves state-of-the-art performance among gradient-manipulation MTL methods, with performance guarantees. The idea has also been incorporated into existing MTL methods, with significant improvements observed. Our code is available online.
[Manuscript] 02/2024 Our manuscript “Discriminative Adversarial Unlearning” is available online. We introduce a novel machine-unlearning framework built on an attacker network and a defender network: the attacker teases out information about the data to be unlearned, and the defender unlearns so as to protect the network against the attack. We also incorporate a self-supervised objective to address feature-space discrepancies between the forget and validation sets. The method closely approximates the ideal benchmark of retraining from scratch in various scenarios. Code is available online.
[Talk] 10-11/2023 Glad to give multiple invited talks at INFORMS 2023 (Phoenix), Asilomar 2023 (Pacific Grove), and MobiHoc 2023 (Washington, DC) on our recent progress in bilevel optimization for continual learning and network resource allocation.
[Publication] 09/2023 Five papers accepted at NeurIPS 2023, including one spotlight presentation! The topics span Hessian-free bilevel optimization, federated learning, continual learning, and multi-objective learning. Big congratulations to my students Yifan, Peiyao, and Hao, and many thanks to my collaborators!
Funded Projects
Our research is generously supported by NSF and University at Buffalo.