Yifan Yang


PhD Candidate
Department of Computer Science
University of California, Santa Barbara (UCSB)
Email: yifanyang at ucsb dot edu
Google Scholar    LinkedIn

About me

I am a PhD candidate in the Computer Science department at UCSB. Prior to UCSB, I received my B.S. in Electronic and Information Engineering from Huazhong University of Science and Technology (HUST). Currently, I work on efficient training and inference of Large Language Models (LLMs), including but not limited to parameter-efficient fine-tuning (PEFT), quantization, inference speed-up, and robustness issues arising during efficient training. Before 2023, I also worked on online optimization.

News

04/21/2024: Our paper 'LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models' was selected for an oral presentation (top 5%) at NAACL 2024.

03/28/2024: Our paper 'PID Control-Based Self-Healing to Improve the Robustness of Large Language Models' was accepted by TMLR.

03/13/2024: Our paper 'LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models' was accepted by NAACL 2024.

03/08/2024: I will join Amazon Alexa AI for a summer internship, working on inference speed-up for LLMs.

08/01/2023: I started working in the field of Natural Language Processing, focusing on the efficient training of LLMs.

Preprints

Yifan Yang, Alec Koppel, Zheng Zhang, "A Gradient-based Approach for Online Robust Deep Neural Network Training with Noisy Labels". [arxiv]

Yifan Yang, Chang Liu, Zheng Zhang, "Particle-based Online Bayesian Sampling", submitted to Transactions on Machine Learning Research (TMLR). [arxiv]

Publications

Zhuotong Chen, Qianxiao Li, Zihu Wang, Yifan Yang, Zheng Zhang, "PID Control-Based Self-Healing to Improve the Robustness of Large Language Models", to appear in Transactions on Machine Learning Research (TMLR), 2024.

Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang, "LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models", to appear in Proceedings of 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Oral, top 5%, Mexico City, Mexico, ACL, 2024. [arxiv] [code]

Yifan Yang, Lin Chen, Pan Zhou, Xiaofeng Ding, "VFLH: A Following-the-Leader-History Based Algorithm for Adaptive Online Convex Optimization with Stochastic Constraints", in Proceedings of the 35th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Atlanta, USA, 2023. (Best Student Paper Award, top 1%)

Yifan Yang, Jie Xu, Zichuan Xu, Pan Zhou and Tie Qiu, "Quantile context-aware social IoT big data recommendation with D2D communication", IEEE Internet of Things Journal 7.6 (2020): 5533-5548.