Matthieu Geist
Cohere (formerly Google; on leave from a professorship at Université de Lorraine)
Verified email at univ-lorraine.fr
Title    Cited by    Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
2213    2023
What matters for on-policy deep actor-critic methods? A large-scale study
M Andrychowicz, A Raichuk, P Stańczyk, M Orsini, S Girgin, R Marinier, ...
International conference on learning representations, 2021
466*    2021
A theory of regularized Markov decision processes
M Geist, B Scherrer, O Pietquin
International Conference on Machine Learning, 2160-2169, 2019
342    2019
Human activity recognition using recurrent neural networks
D Singh, E Merdivan, I Psychoula, J Kropf, S Hanke, M Geist, A Holzinger
Machine Learning and Knowledge Extraction: First IFIP TC 5, WG 8.4, 8.9, 12 …, 2017
226    2017
IQ-Learn: Inverse soft-Q Learning for Imitation
D Garg, S Chakraborty, C Cundy, J Song, M Geist, S Ermon
arXiv preprint arXiv:2106.12142, 2022
168    2022
Approximate modified policy iteration and its application to the game of Tetris
B Scherrer, M Ghavamzadeh, V Gabillon, B Lesner, M Geist
J. Mach. Learn. Res. 16 (49), 1629-1676, 2015
158    2015
Primal Wasserstein imitation learning
R Dadashi, L Hussenot, M Geist, O Pietquin
arXiv preprint arXiv:2006.04678, 2020
147    2020
Fictitious play for mean field games: Continuous time analysis and applications
S Perrin, J Pérolat, M Laurière, M Geist, R Elie, O Pietquin
Advances in neural information processing systems 33, 13199-13213, 2020
139    2020
On the convergence of model free learning in mean field games
R Elie, J Perolat, M Laurière, M Geist, O Pietquin
Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 7143-7150, 2020
138*    2020
Inverse reinforcement learning through structured classification
E Klein, M Geist, B Piot, O Pietquin
Advances in neural information processing systems 25, 2012
135    2012
Leverage the average: an analysis of KL regularization in reinforcement learning
N Vieillard, T Kozuno, B Scherrer, O Pietquin, R Munos, M Geist
Advances in Neural Information Processing Systems 33, 12163-12174, 2020
131*    2020
Kalman temporal differences
M Geist, O Pietquin
Journal of artificial intelligence research 39, 483-532, 2010
127    2010
Algorithmic survey of parametric value function approximation
M Geist, O Pietquin
IEEE Transactions on Neural Networks and Learning Systems 24 (6), 845-867, 2013
124*    2013
Sample-efficient batch reinforcement learning for dialogue management optimization
O Pietquin, M Geist, S Chandramohan, H Frezza-Buet
ACM Transactions on Speech and Language Processing (TSLP) 7 (3), 1-21, 2011
121    2011
User simulation in dialogue systems using inverse reinforcement learning
S Chandramohan, M Geist, F Lefevre, O Pietquin
Interspeech 2011, 1025-1028, 2011
120    2011
Bridging the gap between imitation learning and inverse reinforcement learning
B Piot, M Geist, O Pietquin
IEEE transactions on neural networks and learning systems 28 (8), 1814-1826, 2016
113    2016
Off-policy learning with eligibility traces: a survey
M Geist, B Scherrer
J. Mach. Learn. Res. 15 (1), 289-333, 2014
112    2014
Munchausen reinforcement learning
N Vieillard, O Pietquin, M Geist
Advances in Neural Information Processing Systems 33, 4235-4246, 2020
107    2020
On-policy distillation of language models: Learning from self-generated mistakes
R Agarwal, N Vieillard, Y Zhou, P Stanczyk, SR Garea, M Geist, ...
The Twelfth International Conference on Learning Representations, 2024
106*    2024
Convolutional and recurrent neural networks for activity recognition in smart environment
D Singh, E Merdivan, S Hanke, J Kropf, M Geist, A Holzinger
Towards Integrative Machine Learning and Knowledge Extraction: BIRS Workshop …, 2017
99    2017
Articles 1–20