Today’s RL algorithms are great at exploiting a particular environment but terrible at transferring that knowledge to new situations. Here’s a new environment which is already helping us understand why, and which may help develop RL algorithms that generalize:
We’re releasing CoinRun, an environment generator that provides a metric for an agent’s ability to generalize across new environments - blog.openai.com/quantifying-…
This is our third major attempt in the past two years (Universe, Retro Contest) to develop a platform for RL generalization. Each time, we’ve made the task easier but more focused on the core generalization challenge. We’re already seeing promising results on CoinRun.
Looks interesting! Seems very related to our paper “Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation” arxiv.org/pdf/1806.10729.pdf @togelius @nojustesen
Just had a quick look at the paper. It seems to me like you are doing essentially the same thing we did in our paper, which we released in June and which is presented at the @NeurIPS Deep RL Workshop (see previous tweet). An acknowledgement would be nice. @OpenAI @gdb
Following up with the team!

Dec 6, 2018 · 5:21 PM UTC
