Aaron Mishkin
PhD Student, Stanford University
Verified email at cs.stanford.edu - Homepage
Title
Cited by
Year
Painless stochastic gradient: Interpolation, line-search, and convergence rates
S Vaswani, A Mishkin, I Laradji, M Schmidt, G Gidel, S Lacoste-Julien
Advances in neural information processing systems 32, 2019
236 · 2019
SLANG: Fast structured covariance approximations for Bayesian deep learning with natural gradient
A Mishkin, F Kunstner, D Nielsen, M Schmidt, ME Khan
Advances in neural information processing systems 31, 2018
76 · 2018
Fast convex optimization for two-layer ReLU networks: Equivalent model classes and cone decompositions
A Mishkin, A Sahiner, M Pilanci
International Conference on Machine Learning, 15770-15816, 2022
31 · 2022
Interpolation, Growth Conditions, and Stochastic Gradient Descent
A Mishkin
University of British Columbia, 2020
7 · 2020
Optimal sets and solution paths of ReLU networks
A Mishkin, M Pilanci
International Conference on Machine Learning, 24888-24924, 2023
5 · 2023
To each optimizer a norm, to each norm its generalization
S Vaswani, R Babanezhad, J Gallego-Posada, A Mishkin, ...
arXiv preprint arXiv:2006.06821, 2020
5 · 2020
Directional Smoothness and Gradient Methods: Convergence and Adaptivity
A Mishkin, A Khaled, Y Wang, A Defazio, RM Gower
arXiv preprint arXiv:2403.04081, 2024
3 · 2024
Web ValueCharts
AP Mishkin, EA Hindalong
3 · 2018
A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features
E Zeger, Y Wang, A Mishkin, T Ergen, E Candès, M Pilanci
arXiv preprint arXiv:2403.01046, 2024
2 · 2024
Analyzing and Improving Greedy 2-Coordinate Updates for Equality-Constrained Optimization via Steepest Descent in the 1-Norm
AV Ramesh, A Mishkin, M Schmidt, Y Zhou, JW Lavington, J She
arXiv preprint arXiv:2307.01169, 2023
1 · 2023
Exploring the loss landscape of regularized neural networks via convex duality
S Kim, A Mishkin, M Pilanci
arXiv preprint arXiv:2411.07729, 2024
2024
Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation
A Mishkin, M Pilanci, M Schmidt
arXiv preprint arXiv:2404.02378, 2024
2024
Level Set Teleportation: An Optimization Perspective
A Mishkin, A Bietti, RM Gower
arXiv preprint arXiv:2403.03362, 2024
2024
Web ValueCharts: Analyzing Individual and Group Preferences with Interactive, Web-based Visualizations
A Mishkin
2017
A novel analysis of gradient descent under directional smoothness
A Mishkin, A Khaled, A Defazio, RM Gower
OPT 2023: Optimization for Machine Learning
Level Set Teleportation: the Good, the Bad, and the Ugly
A Mishkin, A Bietti, RM Gower
OPT 2023: Optimization for Machine Learning
Strong Duality via Convex Conjugacy
A Mishkin
Solving Projection Problems using Lagrangian Duality
A Mishkin
Fast Convergence of Greedy 2-Coordinate Updates for Optimizing with an Equality Constraint
AV Ramesh, A Mishkin, M Schmidt
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop)
The Solution Path of the Group Lasso
A Mishkin, M Pilanci
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop)
Articles 1–20