I am a senior staff research scientist at DeepMind London.
My research interests include (modular/hierarchical) reinforcement learning, (stochastic/black-box) optimization with minimal hyperparameter tuning, and (deep/recurrent) neural networks. My favorite application domain is games.
I grew up in Luxembourg and studied computer science in Switzerland (with exchanges at Waterloo and Columbia), obtaining an MSc from EPFL in 2005. I hold a PhD from TU Munich (2011), which I completed under the supervision of Jürgen Schmidhuber at the Swiss AI Lab IDSIA. From 2011 to 2013 I was a postdoc at the Courant Institute of NYU, in the lab of Yann LeCun.
NeurIPS 2022 | T. Schaul, A. Barreto, J. Quan and G. Ostrovski.
The Phenomenon of Policy Churn. Advances in Neural Information Processing Systems. [arXiv]
Nature Comm. 2020 | N. Tomašev, J. Cornebise, F. Hutter et al.
AI for Social Good: Unlocking the Opportunity for Positive Impact. Nature Communications 11 (2468). [Link]
Nature 2019 | O. Vinyals, I. Babuschkin, W. Czarnecki et al.
Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 574 (7780). [Link] [Preprint] [Blog] [Video]
RLDM 2019 | T. Schaul, D. Borsa, J. Modayil and R. Pascanu.
Ray Interference: a Source of Plateaus in Deep Reinforcement Learning. Multidisciplinary Conference on Reinforcement Learning and Decision Making. [arXiv]
ICLR 2016 | T. Schaul, J. Quan, I. Antonoglou and D. Silver.
Prioritized Experience Replay. International Conference on Learning Representations.
ICML 2015 | T. Schaul, D. Horgan, K. Gregor and D. Silver.
Universal Value Function Approximators. International Conference on Machine Learning.