Rick Valenzano
Assistant Professor of Computer Science, Toronto Metropolitan University
Verified email at ryerson.ca - Homepage
Title
Cited by
Year
Using reward machines for high-level task specification and decomposition in reinforcement learning
RT Icarte, T Klassen, R Valenzano, S McIlraith
International Conference on Machine Learning, 2107-2116, 2018
333 · 2018
LTL and Beyond: Formal Languages for Reward Function Specification in Reinforcement Learning
A Camacho, RT Icarte, TQ Klassen, RA Valenzano, SA McIlraith
IJCAI 19, 6065-6073, 2019
254 · 2019
Reward machines: Exploiting reward function structure in reinforcement learning
RT Icarte, TQ Klassen, R Valenzano, SA McIlraith
Journal of Artificial Intelligence Research 73, 173-208, 2022
213 · 2022
Learning reward machines for partially observable reinforcement learning
R Toro Icarte, E Waldie, T Klassen, R Valenzano, M Castro, S McIlraith
Advances in neural information processing systems 32, 2019
154 · 2019
Teaching multiple tasks to an RL agent using LTL
R Toro Icarte, TQ Klassen, R Valenzano, SA McIlraith
Proceedings of the 17th International Conference on Autonomous Agents and …, 2018
148 · 2018
Evaluating state-space abstractions in extensive-form games
M Johanson, N Burch, R Valenzano, M Bowling
Proceedings of the 2013 international conference on Autonomous agents and …, 2013
100 · 2013
Simultaneously searching with multiple settings: An alternative to parameter tuning for suboptimal single-agent search algorithms
R Valenzano, N Sturtevant, J Schaeffer, K Buro, A Kishimoto
Proceedings of the International Conference on Automated Planning and …, 2010
65 · 2010
A comparison of knowledge-based GBFS enhancements and knowledge-free exploration
R Valenzano, N Sturtevant, J Schaeffer, F Xie
Proceedings of the International Conference on Automated Planning and …, 2014
47 · 2014
Arvandherd: Parallel planning with a portfolio
R Valenzano, H Nakhost, M Müller, J Schaeffer, N Sturtevant
ECAI 2012, 786-791, 2012
44 · 2012
Advice-based exploration in model-based reinforcement learning
R Toro Icarte, TQ Klassen, RA Valenzano, SA McIlraith
Advances in Artificial Intelligence: 31st Canadian Conference on Artificial …, 2018
30 · 2018
Using alternative suboptimality bounds in heuristic search
R Valenzano, SJ Arfaee, J Thayer, R Stern, N Sturtevant
Proceedings of the International Conference on Automated Planning and …, 2013
26 · 2013
Arvand: the art of random walks
H Nakhost, M Müller, R Valenzano, F Xie
The, 15-16, 2011
19 · 2011
On the completeness of best-first search variants that use random exploration
R Valenzano, F Xie
Proceedings of the AAAI Conference on Artificial Intelligence 30 (1), 2016
18 · 2016
Worst-case solution quality analysis when not re-expanding nodes in best-first search
R Valenzano, N Sturtevant, J Schaeffer
Proceedings of the AAAI Conference on Artificial Intelligence 28 (1), 2014
13 · 2014
Better time constrained search via randomization and postprocessing
F Xie, R Valenzano, M Müller
Proceedings of the International Conference on Automated Planning and …, 2013
10 · 2013
Probably bounded suboptimal heuristic search
R Stern, G Dreiman, R Valenzano
Artificial Intelligence 267, 39-57, 2019
9 · 2019
The act of remembering: a study in partially observable reinforcement learning
RT Icarte, R Valenzano, TQ Klassen, P Christoffersen, A Farahmand, ...
arXiv preprint arXiv:2010.01753, 2020
7 · 2020
Arvandherd 2014
R Valenzano, H Nakhost, M Müller, J Schaeffer, N Sturtevant
The Eighth International Planning Competition. Description of Participant …, 2014
7 · 2014
Searching for Markovian subproblems to address partially observable reinforcement learning
RT Icarte, E Waldie, TQ Klassen, R Valenzano, MP Castro, SA McIlraith
Proceedings of the 4th Multi-disciplinary Conference on Reinforcement …, 2019
6 · 2019
Using metric temporal logic to specify scheduling problems
R Luo, RA Valenzano, Y Li, JC Beck, SA McIlraith
Fifteenth International Conference on the Principles of Knowledge …, 2016
6 · 2016
Articles 1–20