He Yuxiong
Microsoft Research
Verified email at microsoft.com - Homepage
Title · Cited by · Year
ZeRO: Memory optimizations toward training trillion parameter models
S Rajbhandari, J Rasley, O Ruwase, Y He
SC20: International Conference for High Performance Computing, Networking …, 2020
Cited by 1064 · 2020
DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters
J Rasley, S Rajbhandari, O Ruwase, Y He
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020
Cited by 987 · 2020
Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model
S Smith, M Patwary, B Norick, P LeGresley, S Rajbhandari, J Casper, ...
arXiv preprint arXiv:2201.11990, 2022
Cited by 602 · 2022
ZeRO-Offload: Democratizing billion-scale model training
J Ren, S Rajbhandari, RY Aminabadi, O Ruwase, S Yang, M Zhang, D Li, ...
2021 USENIX Annual Technical Conference (USENIX ATC 21), 551-564, 2021
Cited by 329 · 2021
Graph query processing using plurality of engines
S Elnikety, Y He, S Sakr
US Patent 9,053,210, 2015
Cited by 308 · 2015
ZeRO-Infinity: Breaking the GPU memory wall for extreme scale deep learning
S Rajbhandari, O Ruwase, J Rasley, S Smith, Y He
Proceedings of the international conference for high performance computing …, 2021
Cited by 289 · 2021
ZeroQuant: Efficient and affordable post-training quantization for large-scale transformers
Z Yao, R Yazdani Aminabadi, M Zhang, X Wu, C Li, Y He
Advances in Neural Information Processing Systems 35, 27168-27183, 2022
Cited by 283 · 2022
DeepSpeed-Inference: Enabling efficient inference of transformer models at unprecedented scale
RY Aminabadi, S Rajbhandari, AA Awan, C Li, D Li, E Zheng, O Ruwase, ...
SC22: International Conference for High Performance Computing, Networking …, 2022
Cited by 229 · 2022
DeepSpeed-MoE: Advancing mixture-of-experts inference and training to power next-generation AI scale
S Rajbhandari, C Li, Z Yao, M Zhang, RY Aminabadi, AA Awan, J Rasley, ...
International conference on machine learning, 18332-18346, 2022
Cited by 202 · 2022
Provably-efficient job scheduling for energy and fairness in geographically distributed data centers
S Ren, Y He, F Xu
2012 IEEE 32nd International Conference on Distributed Computing Systems, 22-31, 2012
Cited by 156 · 2012
Learning intrinsic sparse structures within long short-term memory
W Wen, Y He, S Rajbhandari, M Zhang, W Wang, F Liu, B Hu, Y Chen, ...
arXiv preprint arXiv:1709.05027, 2017
Cited by 155 · 2017
The Cilkview scalability analyzer
Y He, CE Leiserson, WM Leiserson
Proceedings of the twenty-second annual ACM symposium on Parallelism in …, 2010
Cited by 146 · 2010
Adaptive work-stealing with parallelism feedback
K Agrawal, CE Leiserson, Y He, WJ Hsu
ACM Transactions on Computer Systems (TOCS) 26 (3), 1-32, 2008
Cited by 145 · 2008
Swayam: Distributed autoscaling to meet SLAs of machine learning inference services with resource efficiency
A Gujarati, S Elnikety, Y He, KS McKinley, BB Brandenburg
Proceedings of the 18th ACM/IFIP/USENIX middleware conference, 109-120, 2017
Cited by 144 · 2017
Few-to-many: Incremental parallelism for reducing tail latency in interactive services
ME Haque, YH Eom, Y He, S Elnikety, R Bianchini, KS McKinley
ACM Sigplan Notices 50 (4), 161-175, 2015
Cited by 138 · 2015
DeepCPU: Serving RNN-based Deep Learning Models 10x Faster
M Zhang, S Rajbhandari, W Wang, Y He
2018 USENIX Annual Technical Conference (USENIX ATC 18), 951-965, 2018
Cited by 123 · 2018
Predictive parallelization: Taming tail latencies in web search
M Jeon, S Kim, S Hwang, Y He, S Elnikety, AL Cox, S Rixner
Proceedings of the 37th international ACM SIGIR conference on Research …, 2014
Cited by 121 · 2014
Performance modeling and scalability optimization of distributed deep learning systems
F Yan, O Ruwase, Y He, T Chilimbi
Proceedings of the 21th ACM SIGKDD International Conference on Knowledge …, 2015
Cited by 114 · 2015
Accelerating training of transformer-based language models with progressive layer dropping
M Zhang, Y He
Advances in Neural Information Processing Systems 33, 14011-14023, 2020
Cited by 101 · 2020
Adaptive scheduling with parallelism feedback
K Agrawal, Y He, WJ Hsu, CE Leiserson
Proceedings of the eleventh ACM SIGPLAN symposium on Principles and practice …, 2006
Cited by 91 · 2006
Articles 1–20