Suchin Gururangan
Verified email at cs.washington.edu - Homepage
Title
Cited by
Year
Don't stop pretraining: Adapt language models to domains and tasks
S Gururangan, A Marasović, S Swayamdipta, K Lo, I Beltagy, D Downey, ...
arXiv preprint arXiv:2004.10964, 2020
Cited by 2122 · 2020
Annotation artifacts in natural language inference data
S Gururangan, S Swayamdipta, O Levy, R Schwartz, SR Bowman, ...
arXiv preprint arXiv:1803.02324, 2018
Cited by 1185 · 2018
RealToxicityPrompts: Evaluating neural toxic degeneration in language models
S Gehman, S Gururangan, M Sap, Y Choi, NA Smith
arXiv preprint arXiv:2009.11462, 2020
Cited by 879 · 2020
All that's 'human' is not gold: Evaluating human evaluation of generated text
E Clark, T August, S Serrano, N Haduong, S Gururangan, NA Smith
arXiv preprint arXiv:2107.00061, 2021
Cited by 328 · 2021
Show your work: Improved reporting of experimental results
J Dodge, S Gururangan, D Card, R Schwartz, NA Smith
arXiv preprint arXiv:1909.03004, 2019
Cited by 259 · 2019
Editing models with task arithmetic
G Ilharco, MT Ribeiro, M Wortsman, S Gururangan, L Schmidt, ...
arXiv preprint arXiv:2212.04089, 2022
Cited by 217 · 2022
Variational pretraining for semi-supervised text classification
S Gururangan, T Dang, D Card, NA Smith
arXiv preprint arXiv:1906.02242, 2019
Cited by 132 · 2019
Detoxifying language models risks marginalizing minority voices
A Xu, E Pathak, E Wallace, S Gururangan, M Sap, D Klein
arXiv preprint arXiv:2104.06390, 2021
Cited by 104 · 2021
Branch-train-merge: Embarrassingly parallel training of expert language models
M Li, S Gururangan, T Dettmers, M Lewis, T Althoff, NA Smith, ...
arXiv preprint arXiv:2208.03306, 2022
Cited by 92 · 2022
DEMix layers: Disentangling domains for modular language modeling
S Gururangan, M Lewis, A Holtzman, NA Smith, L Zettlemoyer
arXiv preprint arXiv:2108.05036, 2021
Cited by 92 · 2021
Time waits for no one! Analysis and challenges of temporal misalignment
K Luu, D Khashabi, S Gururangan, K Mandyam, NA Smith
arXiv preprint arXiv:2111.07408, 2021
Cited by 64 · 2021
kNN-Prompt: Nearest Neighbor Zero-Shot Inference
W Shi, J Michael, S Gururangan, L Zettlemoyer
arXiv preprint arXiv:2205.13792, 2022
Cited by 39 · 2022
SILO language models: Isolating legal risk in a nonparametric datastore
S Min, S Gururangan, E Wallace, H Hajishirzi, NA Smith, L Zettlemoyer
arXiv preprint arXiv:2308.04430, 2023
Cited by 37 · 2023
LESS: Selecting influential data for targeted instruction tuning
M Xia, S Malladi, S Gururangan, S Arora, D Chen
arXiv preprint arXiv:2402.04333, 2024
Cited by 33 · 2024
Scaling expert language models with unsupervised domain discovery
S Gururangan, M Li, M Lewis, W Shi, T Althoff, NA Smith, L Zettlemoyer
arXiv preprint arXiv:2303.14177, 2023
Cited by 23 · 2023
Analysis of graph invariants in functional neocortical circuitry reveals generalized features common to three areas of sensory cortex
SS Gururangan, AJ Sadovsky, JN MacLean
PLoS computational biology 10 (7), e1003710, 2014
Cited by 17 · 2014
Whose language counts as high quality? Measuring language ideologies in text data selection
S Gururangan, D Card, SK Dreier, EK Gade, LZ Wang, Z Wang, ...
arXiv preprint arXiv:2201.10474, 2022
Cited by 16 · 2022
M2D2: A massively multi-domain language modeling dataset
M Reid, V Zhong, S Gururangan, L Zettlemoyer
arXiv preprint arXiv:2210.07370, 2022
Cited by 15 · 2022
OSWorld: Benchmarking multimodal agents for open-ended tasks in real computer environments
T Xie, D Zhang, J Chen, X Li, S Zhao, R Cao, TJ Hua, Z Cheng, D Shin, ...
arXiv preprint arXiv:2404.07972, 2024
Cited by 12 · 2024
Emergent coordination underlying learning to reach to grasp with a brain-machine interface
M Vaidya, K Balasubramanian, J Southerland, I Badreldin, A Eleryan, ...
Journal of neurophysiology 119 (4), 1291-1304, 2018
Cited by 11 · 2018
Articles 1–20