A good analysis of the embeddings we use in the OpenAI Five model: neuro.cs.ut.ee/the-use-of-em…. After analyzing the rest of the model, it concludes: “All the strategy and tactics must lie in one place – 1024-unit LSTM.” Pretty remarkable what LSTMs (invented in 1997!) can do at scale.
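To make the single-LSTM claim concrete, here is a minimal sketch of that kind of core: one observation embedding per game tick feeds an LSTM cell, and its recurrent state is the only place long-term strategy can persist. The 1024-unit size is from the post; the embedding width, action head, and use of PyTorch are my own placeholders, not OpenAI Five's actual interface.

```python
import torch
import torch.nn as nn

class SingleLSTMCore(nn.Module):
    """Sketch: one embedding per tick drives a single LSTM cell;
    anything the policy "remembers" must fit in its (h, c) state."""

    def __init__(self, embed_dim=512, hidden_units=1024, n_actions=16):
        super().__init__()
        # 1024 hidden units matches the model discussed above;
        # embed_dim and n_actions are made-up placeholders.
        self.lstm = nn.LSTMCell(embed_dim, hidden_units)
        self.policy = nn.Linear(hidden_units, n_actions)

    def forward(self, obs_embedding, state):
        h, c = self.lstm(obs_embedding, state)
        return self.policy(h), (h, c)

core = SingleLSTMCore()
state = (torch.zeros(1, 1024), torch.zeros(1, 1024))
logits, state = core(torch.randn(1, 512), state)  # one game tick
```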
(Also, "at scale" is a very relative term. After two doublings, our latest Five model is 4096-units, and consumes roughly the same amount of compute per second as a honeybee.)
Sep 11, 2018 · 10:24 PM UTC
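For a rough sense of that comparison (my own back-of-envelope, not numbers from the thread): an LSTM step is four affine maps from (hidden + input) inputs to hidden outputs, so at 4096 units the recurrent core alone is a few hundred megaflops per step. The input width and decision rate below are assumptions, not stated in the post.

```python
# Back-of-envelope for one LSTM step: 4 gates, each an affine map
# from (h + d) inputs to h outputs, ~2 FLOPs per multiply-add.
h = 4096            # hidden units (from the tweet)
d = 4096            # input embedding width: an assumption, not stated
ticks_per_sec = 30  # assumed decision rate, also not stated

flops_per_step = 2 * 4 * h * (h + d + 1)   # weights + biases
print(f"{flops_per_step * ticks_per_sec / 1e9:.1f} GFLOP/s")  # ≈ 8.1
```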