Wenhui Wang
Microsoft Research
Verified email at microsoft.com
Unified language model pre-training for natural language understanding and generation
L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon
Advances in neural information processing systems 32, 2019
Cited by 1760 · 2019
MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
Cited by 1067 · 2020
Gated self-matching networks for reading comprehension and question answering
W Wang, N Yang, F Wei, B Chang, M Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
Cited by 843 · 2017
Image as a foreign language: BEiT pretraining for vision and vision-language tasks
W Wang, H Bao, L Dong, J Bjorck, Z Peng, Q Liu, K Aggarwal, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 657* · 2023
Kosmos-2: Grounding multimodal large language models to the world
Z Peng, W Wang, L Dong, Y Hao, S Huang, S Ma, F Wei
arXiv preprint arXiv:2306.14824, 2023
Cited by 413 · 2023
UniLMv2: Pseudo-masked language models for unified language model pre-training
H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, ...
International conference on machine learning, 642-652, 2020
Cited by 407 · 2020
Language is not all you need: Aligning perception with language models
S Huang, L Dong, W Wang, Y Hao, S Singhal, S Ma, T Lv, L Cui, ...
Advances in Neural Information Processing Systems 36, 72096-72109, 2023
Cited by 377 · 2023
InfoXLM: An information-theoretic framework for cross-lingual language model pre-training
Z Chi, L Dong, F Wei, N Yang, S Singhal, W Wang, X Song, XL Mao, ...
arXiv preprint arXiv:2007.07834, 2020
Cited by 327 · 2020
VLMo: Unified vision-language pre-training with mixture-of-modality-experts
H Bao, W Wang, L Dong, Q Liu, OK Mohammed, K Aggarwal, S Som, ...
Advances in Neural Information Processing Systems 35, 32897-32912, 2022
Cited by 279 · 2022
MiniLMv2: Multi-head self-attention relation distillation for compressing pretrained transformers
W Wang, H Bao, S Huang, L Dong, F Wei
arXiv preprint arXiv:2012.15828, 2020
Cited by 196 · 2020
Graph-based dependency parsing with bidirectional LSTM
W Wang, B Chang
Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016
Cited by 183 · 2016
Multiway attention networks for modeling sentence pairs
C Tan, F Wei, W Wang, W Lv, M Zhou
IJCAI, 4411-4417, 2018
Cited by 151 · 2018
Cross-lingual natural language generation via pre-training
Z Chi, L Dong, F Wei, W Wang, XL Mao, H Huang
Proceedings of the AAAI conference on artificial intelligence 34 (05), 7570-7577, 2020
Cited by 144 · 2020
LongNet: Scaling transformers to 1,000,000,000 tokens
J Ding, S Ma, L Dong, X Zhang, S Huang, W Wang, N Zheng, F Wei
arXiv preprint arXiv:2307.02486, 2023
Cited by 108 · 2023
Language models are general-purpose interfaces
Y Hao, H Song, L Dong, S Huang, Z Chi, W Wang, S Ma, F Wei
arXiv preprint arXiv:2206.06336, 2022
Cited by 94 · 2022
The era of 1-bit LLMs: All large language models are in 1.58 bits
S Ma, H Wang, L Ma, L Wang, W Wang, S Huang, L Dong, R Wang, J Xue, ...
arXiv preprint arXiv:2402.17764, 2024
Cited by 76 · 2024
Learning to ask unanswerable questions for machine reading comprehension
H Zhu, L Dong, F Wei, W Wang, B Qin, T Liu
arXiv preprint arXiv:1906.06045, 2019
Cited by 54 · 2019
Harvesting and refining question-answer pairs for unsupervised QA
Z Li, W Wang, L Dong, F Wei, K Xu
arXiv preprint arXiv:2005.02925, 2020
Cited by 50 · 2020
Consistency regularization for cross-lingual fine-tuning
B Zheng, L Dong, S Huang, W Wang, Z Chi, S Singhal, W Che, T Liu, ...
arXiv preprint arXiv:2106.08226, 2021
Cited by 46 · 2021
VL-BEiT: Generative vision-language pretraining
H Bao, W Wang, L Dong, F Wei
arXiv preprint arXiv:2206.01127, 2022
Cited by 42 · 2022
Articles 1–20