Leonard Berrada
Research Scientist, DeepMind
Verified email at google.com

Title · Cited by · Year
Unlocking high-accuracy differentially private image classification through scale
S De, L Berrada, J Hayes, SL Smith, B Balle
arXiv preprint arXiv:2204.13650, 2022
189 · 2022
Smooth Loss Functions for Deep Top-k Classification
L Berrada, A Zisserman, MP Kumar
International Conference on Learning Representations, 2018
131 · 2018
Training neural networks for and by interpolation
L Berrada, A Zisserman, MP Kumar
International Conference on Machine Learning, 2020
60 · 2020
Differentially private diffusion models generate useful synthetic images
S Ghalebikesabi, L Berrada, S Gowal, I Ktena, R Stanforth, J Hayes, S De, ...
arXiv preprint arXiv:2302.13861, 2023
56 · 2023
Griffin: Mixing gated linear recurrences with local attention for efficient language models
S De, SL Smith, A Fernando, A Botev, G Cristian-Muraru, A Gu, R Haroun, ...
arXiv preprint arXiv:2402.19427, 2024
52 · 2024
Deep Frank-Wolfe For Neural Network Optimization
L Berrada, A Zisserman, MP Kumar
International Conference on Learning Representations, 2019
44 · 2019
Make sure you're unsure: A framework for verifying probabilistic specifications
L Berrada, S Dathathri, K Dvijotham, R Stanforth, RR Bunel, J Uesato, ...
Advances in Neural Information Processing Systems 34, 11136-11147, 2021
21* · 2021
Trusting SVM for piecewise linear CNNs
L Berrada, A Zisserman, MP Kumar
International Conference on Learning Representations, 2017
20 · 2017
Convnets match vision transformers at scale
SL Smith, A Brock, L Berrada, S De
arXiv preprint arXiv:2310.16764, 2023
14 · 2023
Unlocking accuracy and fairness in differentially private image classification
L Berrada, S De, JH Shen, J Hayes, R Stanforth, D Stutz, P Kohli, ...
arXiv preprint arXiv:2308.10888, 2023
12* · 2023
RecurrentGemma: Moving Past Transformers for Efficient Open Language Models
A Botev, S De, SL Smith, A Fernando, GC Muraru, R Haroun, L Berrada, ...
arXiv preprint arXiv:2404.07839, 2024
5 · 2024
A Stochastic Bundle Method for Interpolation
A Paren, L Berrada, RPK Poudel, MP Kumar
Journal of Machine Learning Research 23 (15), 1-57, 2022
5 · 2022
Comment on stochastic Polyak step-size: Performance of ALI-G
L Berrada, A Zisserman, MP Kumar
arXiv preprint arXiv:2105.10011, 2021
5 · 2021
Operationalizing Contextual Integrity in Privacy-Conscious Assistants
S Ghalebikesabi, E Bagdasaryan, R Yi, I Yona, I Shumailov, A Pappu, ...
arXiv preprint arXiv:2408.02373, 2024
2 · 2024
Leveraging structure for optimization in deep learning
L Berrada
University of Oxford, 2019
2019