TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding
Sun, Hanshi; Chen, Zhuoming; Yang, Xinyu; Tian, Yuandong; Chen, Beidi
Sequoia: Scalable, Robust, and Hardware-Aware Speculative Decoding
Chen, Zhuoming; May, Avner; Svirschevski, Ruslan; Huang, Yuhsun; Ryabinin, Max; Jia, Zhihao; Chen, Beidi
Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length
Ma, Xuezhe; Yang, Xiaomeng; Xiong, Wenhan; Chen, Beidi; Yu, Lili; Zhang, Hao; May, Jonathan; Zettlemoyer, Luke; Levy, Omer; Zhou, Chunting
Prompt-prompted Mixture of Experts for Efficient LLM Generation
Dong, Harry; Chen, Beidi; Chi, Yuejie
Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity
Guo, Wentao; Long, Jikai; Zeng, Yimeng; Liu, Zirui; Yang, Xinyu; Ran, Yide; Gardner, Jacob R; Bastani, Osbert; De Sa, Christopher; Yu, Xiaodong; Chen, Beidi; Xu, Zhaozhuo
SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices
Svirschevski, Ruslan; May, Avner; Chen, Zhuoming; Chen, Beidi; Jia, Zhihao; Ryabinin, Max
Nearest Neighbor Speculative Decoding for LLM Generation and Attribution
Li, Minghan; Chen, Xilun; Holtzman, Ari; Chen, Beidi; Lin, Jimmy; Yih, Wen-tau; Lin, Xi Victoria
Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding
Zhang, Zhenyu; Chen, Runjin; Liu, Shiwei; Yao, Zhewei; Ruwase, Olatunji; Chen, Beidi; Wu, Xiaoxia; Wang, Zhangyang
LLM Inference Unveiled: Survey and Roofline Model Insights
Yuan, Zhihang; Shang, Yuzhang; Zhou, Yang; Dong, Zhen; Xue, Chenhao; Wu, Bingzhe; Li, Zhikai; Gu, Qingyi; Lee, Yong Jae; Yan, Yan; Chen, Beidi
Learn To Be Efficient: Build Structured Sparsity in Large Language Models
Zheng, Haizhong; Bai, Xiaoyan; Chen, Beidi; Lai, Fan; Prakash, Atul
InRank: Incremental Low-Rank Learning
Zhao, Jiawei; Zhang, Yifei; Chen, Beidi; Schäfer, Florian; Anandkumar, Anima
Sample-efficient Surrogate Model for Frequency Response of Linear PDEs using Self-Attentive Complex Polynomials
Cohen, Andrew; Dou, Weiping; Zhu, Jiang; Koziel, Slawomir; Renner, Peter; Mattsson, Jan-Ove; Yang, Xiaomeng; Chen, Beidi; Stone, Kevin; Tian, Yuandong
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Zhao, Jiawei; Zhang, Zhenyu; Chen, Beidi; Wang, Zhangyang; Anandkumar, Anima; Tian, Yuandong
ICML2024 (Oral)
Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference
Dong, Harry; Yang, Xinyu; Zhang, Zhenyu; Wang, Zhangyang; Chi, Yuejie; Chen, Beidi
ICML2024
LoCoCo: Dropping In Convolutions for Long Context Compression
Cai, Ruisi; Tian, Yuandong; Wang, Zhangyang; Chen, Beidi
ICML2024
HexGen: Generative Inference of Large Language Model over Heterogeneous Environment
Jiang, Youhe; Yan, Ran; Yao, Xiaozhe; Zhou, Yang; Chen, Beidi; Yuan, Binhang
ICML2024
KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache
Liu, Zirui; Yuan, Jiayi; Jin, Hongye; Zhong, Shaochen; Xu, Zhaozhuo; Braverman, Vladimir; Chen, Beidi; Hu, Xia
ICML2024
Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt
Xu, Zhaozhuo; Liu, Zirui; Chen, Beidi; Tang, Yuxin; Wang, Jue; Zhou, Kaixiong; Hu, Xia; Shrivastava, Anshumali
ICML2024
LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding
Elhoushi, Mostafa; Shrivastava, Akshat; Liskovich, Diana; Hosmer, Basil; Wasti, Bram; Lai, Liangzhen; Mahmoud, Anas; Acun, Bilge; Agarwal, Saurabh; Roman, Ahmed; Aly, Ahmed A; Chen, Beidi; Wu, Carole-Jean
ACL2024
Q-Hitter: A Better Token Oracle for Efficient LLM Inference via Sparse-Quantized KV Cache
Zhang, Zhenyu; Liu, Shiwei; Chen, Runjin; Kailkhura, Bhavya; Chen, Beidi; Wang, Zhangyang
MLSys2024
Efficient Streaming Language Models with Attention Sinks
Xiao, Guangxuan; Tian, Yuandong; Chen, Beidi; Han, Song; Lewis, Mike
ICLR2024
JoMA: Demystifying Multilayer Transformers via Joint Dynamics of MLP and Attention
Tian, Yuandong; Wang, Yiping; Zhang, Zhenyu; Chen, Beidi; Du, Simon
ICLR2024
Fast Algorithms for a New Relaxation of Optimal Transport
Charikar, Moses; Chen, Beidi; Ré, Christopher; Waingarten, Erik
COLT2023
H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
Zhang, Zhenyu; Sheng, Ying; Zhou, Tianyi; Chen, Tianlong; Zheng, Lianmin; Cai, Ruisi; Song, Zhao; Tian, Yuandong; Ré, Christopher; Barrett, Clark; Wang, Zhangyang; Chen, Beidi
NeurIPS2023
Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer
Tian, Yuandong; Wang, Yiping; Chen, Beidi; Du, Simon
NeurIPS2023
Laughing Hyena Distillery: Extracting Compact Recurrences from Convolutions
Massaroli, Stefano; Poli, Michael; Fu, Dan; Kumbong, Hermann; Parnichkun, Rom; Romero, David; Timalsina, Aman; McIntyre, Quinn; Chen, Beidi; Rudra, Atri
NeurIPS2023
Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
Liu, Zichang; Wang, Jue; Dao, Tri; Zhou, Tianyi; Yuan, Binhang; Song, Zhao; Shrivastava, Anshumali; Zhang, Ce; Tian, Yuandong; Ré, Christopher; Chen, Beidi
ICML2023 (Oral)
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
Sheng, Ying; Zheng, Lianmin; Yuan, Binhang; Li, Zhuohan; Ryabinin, Max; Chen, Beidi; Liang, Percy; Ré, Christopher; Stoica, Ion; Zhang, Ce
ICML2023 (Oral)
CocktailSGD: Fine-Tuning Foundation Models over 500Mbps Networks
Wang, Jue; Lu, Yucheng; Yuan, Binhang; Chen, Beidi; Liang, Percy; De Sa, Christopher; Ré, Christopher; Zhang, Ce
ICML2023
Decentralized Training of Foundation Models in Heterogeneous Environments
Yuan, Binhang; He, Yongjun; Davis, Jared Quincy; Zhang, Tianyi; Dao, Tri; Chen, Beidi; Liang, Percy; Ré, Christopher; Zhang, Ce
NeurIPS2022 (Oral)
Fine-Tuning Language Models over Slow Networks Using Activation Compression with Guarantees
Wang, Jue; Yuan, Binhang; Rimanic, Luka; He, Yongjun; Dao, Tri; Chen, Beidi; Ré, Christopher; Zhang, Ce
NeurIPS2022
Monarch: Expressive Structured Matrices for Efficient and Accurate Training
Dao, Tri; Chen, Beidi; Sohoni, Nimit S; Desai, Arjun; Poli, Michael; Grogan, Jessica; Liu, Alexander; Rao, Aniruddh; Rudra, Atri; Ré, Christopher
ICML2022 (Outstanding Paper Runner Up)
Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Chen, Beidi; Dao, Tri; Liang, Kaizhao; Yang, Jiaming; Song, Zhao; Rudra, Atri; Ré, Christopher
ICLR2022 (Spotlight)
HALOS: Hashing Large Output Space for Cheap Inference
Liu, Zichang; Xu, Zhaozhuo; Ji, Alan; Zhang, Junyan; Li, Jonathan; Chen, Beidi; Shrivastava, Anshumali
MLSys2022
Scatterbrain: Unifying Sparse and Low-Rank Attention
Chen, Beidi; Dao, Tri; Winsor, Eric; Song, Zhao; Rudra, Atri; Ré, Christopher
NeurIPS2021
Locality Sensitive Teaching
Xu, Zhaozhuo; Chen, Beidi; Li, Chaojian; Liu, Weiyang; Song, Le; Lin, Yingyan; Shrivastava, Anshumali
NeurIPS2021
A Tale of Two Efficient and Informative Negative Sampling Distributions
Daghaghi, Shabnam; Medini, Tharun; Meisburger, Nicholas; Chen, Beidi; Zhao, Mengnan; Shrivastava, Anshumali
ICML2021 (Oral)
MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training
Chen, Beidi; Liu, Zichang; Peng, Binghui; Xu, Zhaozhuo; Li, Jonathan Lingjie; Dao, Tri; Song, Zhao; Shrivastava, Anshumali; Ré, Christopher
ICLR2021 (Oral)
SOLAR: Sparse Orthogonal Learned and Random Embeddings
Medini, Tharun; Chen, Beidi; Shrivastava, Anshumali
ICLR2021
Satellite Images and Deep Learning to Identify Discrepancy in Mailing Addresses with Applications to Census 2020 in Houston
Xu, Zhaozhuo; Ji, Alan Baonan; Woods, Andrew; Chen, Beidi; Shrivastava, Anshumali
JSM2021
SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-scale Deep Learning Systems
Chen, Beidi; Medini, Tharun; Farwell, James; Gobriel, Sameh; Tai, Charlie; Shrivastava, Anshumali
MLSys2020
Angular Visual Hardness
Chen, Beidi; Liu, Weiyang; Yu, Zhiding; Kautz, Jan; Shrivastava, Anshumali; Garg, Animesh; Anandkumar, Animashree
ICML2020
Fast and Accurate Stochastic Gradient Estimation
Chen, Beidi; Xu, Yingchen; Shrivastava, Anshumali
NeurIPS2019
Densified Winner Take All (WTA) Hashing for Sparse Datasets
Chen, Beidi; Shrivastava, Anshumali
UAI2018
Unique Entity Estimation with Application to the Syrian Conflict
Chen, Beidi; Shrivastava, Anshumali; Steorts, Rebecca C
The Annals of Applied Statistics 12.2 (2018) (IISA 2018 Best Student Paper in Applied Statistics)
Analyzing Log Analysis: An Empirical Study of User Log Mining
Alspaugh, Sara; Chen, Beidi; Lin, Jessica; Ganapathi, Archana; Hearst, Marti; Katz, Randy
LISA2014 (Best Student Paper)
Towards Structured Sparsity in Transformers for Efficient Inference
Dong, Harry; Chen, Beidi; Chi, Yuejie
Workshop on Efficient Systems for Foundation Models @ ICML2023
BearLoc: A Composable Distributed Framework for Indoor Localization Systems
Chen, Kaifei; He, Siyuan; Chen, Beidi; Kolb, John; Katz, Randy H; Culler, David E
Proceedings of the 2015 Workshop on IoT Challenges in Mobile and Industrial Systems