Hangbo Bao
Microsoft Research
Verified email at microsoft.com - Homepage
Title · Cited by · Year
BEiT: BERT Pre-Training of Image Transformers
H Bao, L Dong, S Piao, F Wei
International Conference on Learning Representations, 2022
Cited by 2941 · 2022
MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
Cited by 1188 · 2020
Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks
W Wang, H Bao, L Dong, J Bjorck, Z Peng, Q Liu, K Aggarwal, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 912* · 2023
VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
H Bao, W Wang, L Dong, Q Liu, OK Mohammed, K Aggarwal, S Som, ...
36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022
Cited by 526* · 2022
UniLMv2: Pseudo-masked language models for unified language model pre-training
H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, ...
International Conference on Machine Learning, 642-652, 2020
Cited by 426 · 2020
Neural question generation from text: A preliminary study
Q Zhou, N Yang, F Wei, C Tan, H Bao, M Zhou
Natural Language Processing and Chinese Computing: 6th CCF International …, 2018
Cited by 417 · 2018
BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers
Z Peng, L Dong, H Bao, Q Ye, F Wei
arXiv preprint arXiv:2208.06366, 2022
Cited by 267 · 2022
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers
W Wang, H Bao, S Huang, L Dong, F Wei
arXiv preprint arXiv:2012.15828, 2020
Cited by 218 · 2020
THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
T Chen, H Bao, S Huang, L Dong, B Jiao, D Jiang, H Zhou, J Li, F Wei
Findings of the Association for Computational Linguistics: ACL 2022, 3510-3520, 2022
Cited by 82 · 2022
Corrupted Image Modeling for Self-Supervised Visual Pre-Training
Y Fang, L Dong, H Bao, X Wang, F Wei
arXiv preprint arXiv:2202.03382, 2022
Cited by 81 · 2022
Neural melody composition from lyrics
H Bao, S Huang, F Wei, L Cui, Y Wu, C Tan, S Piao, M Zhou
Natural Language Processing and Chinese Computing: 8th CCF International …, 2019
Cited by 37 · 2019
Fine-tuning pretrained transformer encoders for sequence-to-sequence learning
H Bao, L Dong, W Wang, N Yang, S Piao, F Wei
International Journal of Machine Learning and Cybernetics 15 (5), 1711-1728, 2024
Cited by 24 · 2024
Attention Temperature Matters in Abstractive Summarization Distillation
S Zhang, X Zhang, H Bao, F Wei
arXiv preprint arXiv:2106.03441, 2021
Cited by 19 · 2021
Learning to Sample Replacements for ELECTRA Pre-Training
Y Hao, L Dong, H Bao, K Xu, F Wei
arXiv preprint arXiv:2106.13715, 2021
Cited by 9 · 2021
Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension
H Bao, L Dong, F Wei, W Wang, N Yang, L Cui, S Piao, M Zhou
Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 14-18, 2019
Cited by 4 · 2019
A unified view of masked image modeling
Z Peng, L Dong, H Bao, F Wei
TMLR, 2023
2023
Articles 1–16