Saksham Singhal
Verified email at microsoft.com
Title | Cited by | Year
Image as a foreign language: BEiT pretraining for all vision and vision-language tasks
W Wang, H Bao, L Dong, J Bjorck, Z Peng, Q Liu, K Aggarwal, ...
arXiv preprint arXiv:2208.10442, 2022
821* | 2022
Language is not all you need: Aligning perception with language models
S Huang, L Dong, W Wang, Y Hao, S Singhal, S Ma, T Lv, L Cui, ...
Advances in Neural Information Processing Systems 36, 72096-72109, 2023
471* | 2023
InfoXLM: An information-theoretic framework for cross-lingual language model pre-training
Z Chi, L Dong, F Wei, N Yang, S Singhal, W Wang, X Song, XL Mao, ...
arXiv preprint arXiv:2007.07834, 2020
333 | 2020
XLM-E: Cross-lingual Language Model Pre-training via ELECTRA
Z Chi
arXiv preprint arXiv:2106.16138, 2021
125 | 2021
mT6: Multilingual pretrained text-to-text transformer with translation pairs
Z Chi, L Dong, S Ma, S Huang, S Singhal, XL Mao, H Huang, X Song, ...
arXiv preprint arXiv:2104.08692, 2021
78 | 2021
DeltaLM: Encoder-decoder pre-training for language generation and translation by augmenting pretrained multilingual encoders
S Ma, L Dong, S Huang, D Zhang, A Muzio, S Singhal, HH Awadalla, ...
arXiv preprint arXiv:2106.13736, 2021
66 | 2021
On the representation collapse of sparse mixture of experts
Z Chi, L Dong, S Huang, D Dai, S Ma, B Patra, S Singhal, P Bajaj, X Song, ...
Advances in Neural Information Processing Systems 35, 34600-34613, 2022
63 | 2022
Consistency regularization for cross-lingual fine-tuning
B Zheng, L Dong, S Huang, W Wang, Z Chi, S Singhal, W Che, T Liu, ...
arXiv preprint arXiv:2106.08226, 2021
46 | 2021
Multilingual machine translation systems from Microsoft for WMT21 shared task
J Yang, S Ma, H Huang, D Zhang, L Dong, S Huang, A Muzio, S Singhal, ...
arXiv preprint arXiv:2111.02086, 2021
39 | 2021
MAGNETO: a foundation transformer
H Wang, S Ma, S Huang, L Dong, W Wang, Z Peng, Y Wu, P Bajaj, ...
International Conference on Machine Learning, 36077-36092, 2023
38* | 2023
XLM-T: Scaling up multilingual machine translation with pretrained cross-lingual transformer encoders
S Ma, J Yang, H Huang, Z Chi, L Dong, D Zhang, HH Awadalla, A Muzio, ...
arXiv preprint arXiv:2012.15547, 2020
27 | 2020
Allocating large vocabulary capacity for cross-lingual language model pre-training
B Zheng, L Dong, S Huang, S Singhal, W Che, T Liu, X Song, F Wei
arXiv preprint arXiv:2109.07306, 2021
25 | 2021
Beyond English-centric bitexts for better multilingual language representation learning
B Patra, S Singhal, S Huang, Z Chi, L Dong, F Wei, V Chaudhary, X Song
arXiv preprint arXiv:2210.14867, 2022
16 | 2022
Bootstrapping a high quality multilingual multimodal dataset for Bletchley
OK Mohammed, K Aggarwal, Q Liu, S Singhal, J Bjorck, S Som
Asian Conference on Machine Learning, 738-753, 2023
2 | 2023
Dispersion based similarity for mining similar papers in citation network
S Singhal, V Pudi
2015 IEEE International Conference on Data Mining Workshop (ICDMW), 524-531, 2015
2 | 2015
Informational grounding with respect to a generative model
Z Liu, S Singhal, X Song, R Lal
US Patent App. 18/335,983, 2024
2024
On the Adaptation of Unlimiformer for Decoder-Only Transformers
K Ahrabian, A Benhaim, B Patra, J Pujara, S Singhal, X Song
Proceedings of the 2024 Joint International Conference on Computational …, 2024
2024
Articles 1–17