Shaolei Zhang (张绍磊) is a fifth-year Ph.D. candidate (2020-2025) at the Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (中国科学院计算技术研究所), advised by Professor Yang Feng (冯洋). He received his bachelor’s degree in computer science and technology from Beijing University of Posts and Telecommunications in 2020 (北京邮电大学计算机科学与技术实验班).

He is dedicated to developing next-generation human-computer interaction paradigms that are seamless, real-time, and secure. To this end, his research interests include natural language processing, simultaneous/streaming models, and large language models. He has published over 20 papers at top international AI/NLP conferences such as ACL, NeurIPS, ICLR, and AAAI. Some of his representative works include:

Beyond his research, he won first place in the streaming transcription track of AutoSimTrans 2021. He actively shares his research and insights at academic events, and has served as an Area Chair for ACL/EACL/NAACL ARR 2023 and as a reviewer for top AI/NLP conferences.

I am happy to communicate and share my research, and I am interested in opportunities in industry and academia, including postdoc positions. If you would like to connect, please feel free to reach out via email (zhangshaolei20z@ict.ac.cn) or WeChat (zhangshaolei0331).

🔥 News

  • 2025.01:  🎉 2 papers are accepted by ICLR 2025!
  • 2024.11:  🎉 1 paper is accepted by AAAI 2025!
  • 2024.05:  🎉 6 papers are accepted by ACL 2024!
  • 2023.12:  🎉 1 paper is accepted by ICASSP 2024!
  • 2023.10:  🎉 2 papers are accepted by EMNLP 2023!
  • 2023.09:  👏 Serving as Area Chair of ACL/EACL/NAACL ARR 2023!
  • 2023.09:  🎉 1 paper is accepted by NeurIPS 2023!
  • 2023.06:  🎉 Our cross-lingual aligned LLM BayLing is released.
  • 2023.05:  🎉 2 papers are accepted by ACL 2023.
  • 2023.01:  🎉 1 paper is accepted by ICLR 2023 (Spotlight)!
  • 2022.10:  🎉 3 papers are accepted by EMNLP 2022!
  • 2022.02:  🎉 3 papers are accepted by ACL 2022!

📝 Publications

Multilingual LLM

BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models
Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, Yang Feng

arXiv project model

  • BayLing (百聆) is an LLM equipped with advanced language alignment.
  • BayLing is the first work to use language alignment to enhance the multilingual capabilities of LLMs.
  • BayLing was selected for Open100 (2022-2023), the Top 100 open-source achievements list launched by the International Open Benchmark Council (BenchCouncil).
Vision

LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
Shaolei Zhang, Qingkai Fang, Zhe Yang, Yang Feng

arXiv model

  • LLaVA-Mini is a unified large multimodal model that efficiently supports the understanding of images, high-resolution images, and videos.
  • LLaVA-Mini requires only 1 token to represent each image, which improves the efficiency of image and video understanding, including:
    • Computation: 77% FLOPs reduction;
    • Response latency: reduced from 100 milliseconds to 40 milliseconds;
    • VRAM usage: reduced from 360 MB/image to 0.6 MB/image, enabling 3-hour video processing.
Speech

StreamSpeech: Simultaneous Speech-to-Speech Translation with Multi-task Learning
Shaolei Zhang, Qingkai Fang, Shoutao Guo, Zhengrui Ma, Min Zhang, Yang Feng

arXiv project model

  • StreamSpeech is an "All-in-One" seamless model covering over 8 offline and simultaneous tasks across speech recognition, speech translation, and speech synthesis.
  • StreamSpeech can present intermediate results (i.e., ASR or translation results) during simultaneous translation, offering a more comprehensive low-latency communication experience.
  • Received over 500 reposts and 300K views on Twitter!
Trustworthy LLM

TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
Shaolei Zhang, Tian Yu, Yang Feng

arXiv project model PWC

  • TruthX is an inference-time method that activates the truthfulness of LLMs by editing their internal representations, thereby mitigating hallucinations.
  • TruthX can steer an LLM to generate truthful or hallucinatory responses by editing only a single vector in the truthful space.
  • On the TruthfulQA benchmark, TruthX yields an average improvement of 20% in truthfulness across 13 LLMs, ranking 2nd behind GPT-4.
GitHub Repo

Awesome Simultaneous Translation
Shaolei Zhang

  • A repository collecting toolkits, common datasets, and a paper list related to research on simultaneous translation, including text-to-text machine translation and speech-to-text translation.


🏆 Honors and Awards

  • [2022] ICT’s Special Scholarship (Xia Peisu Award) (计算所 所长特别奖(夏培肃奖)) [Highest award in ICT/CAS, Top 2]
  • [2022] National Scholarship (国家奖学金)
  • [2021] First place in the streaming track of AutoSimTrans 2021 (organized by Baidu/Huawei/Google)
  • [2020] Beijing Outstanding Graduates Award (北京市优秀毕业生)
  • [2018] Beijing Merit Student (北京市三好学生)
  • [2017] National Scholarship (国家奖学金)

👏 Services

  • Area Chair of ACL/EACL/NAACL ARR 2023
  • Reviewer for ACL/EMNLP/COLING/NAACL/EACL/NeurIPS/ICLR/ICML and ACM Computing Surveys
  • Session Chair of the Student Seminar at CCL 2024
  • Session Chair of the Student Seminar at YSSNLP 2024
  • Director of the Student Executive Committee, Youth Working Committee of the Chinese Information Processing Society of China (中国中文信息学会青年工作委员会), 2020-2024
  • Program Chair of CSSNLP 2020/2021/2023

📖 Education

💬 Invited Talks

  • "Towards Real-Time Cross-Lingual Communication: Challenges, Techniques, and the Future of Real-Time Speech Models" at Xmart青年论坛 (Xmart Youth Forum) [Slides] [Video]
  • "On the Shift in Research in the Era of Large Models: Topic Selection and Practice" at ASCII-116 [Slides]
  • "How to Keep Your Research Rhythm amid the Rapid Iteration of Large Model Technology" at IMLIP 2024 [Slides]
  • "Progress in Streaming Translation", invited talk at Li Auto [Slides]
  • "Mitigating Hallucinations in Large Language Models: An Internal Representation Perspective", invited talk at Tencent [Slides]
  • "Research Topic Selection and Practice in the Era of Large Models" at the MLNLP Academic Seminar [Slides] [Video]
  • "Enhancing Large Language Models with Cross-Lingual Alignment: BayLing" at AI TIME 大模型嘉年华 (LLM Carnival) [Slides] [Video]
  • "How to Find a Research Entry Point in the Era of Large Models?" at CCMT 2023 [Slides] [Video]
  • "From Machine Translation to Simultaneous Interpretation: Challenges and Progress" at the MLNLP Academic Seminar [Slides] [Video]
  • AI TIME Youth Talk for ICLR 2023 [Video]
  • Internal talks at ByteDance, Huawei, Tencent, and Li Auto

💻 Internships

  • 2019.12 - 2021.12, Huawei Noah’s Ark Lab, industry-university-research collaboration project, China.