A good analysis of the embeddings we use in the OpenAI Five model: neuro.cs.ut.ee/the-use-of-em…. After analyzing the rest of the model, it concludes: “All the strategy and tactics must lie in one place – 1024-unit LSTM.” Pretty remarkable what LSTMs (invented in 1997!) can do at scale.
(Also, "at scale" is a very relative term. After two doublings, our latest Five model has 4096 units, and consumes roughly the same amount of compute per second as a honeybee.)

Sep 11, 2018 · 10:24 PM UTC
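The thread describes the architecture only at a high level: observation embeddings feed a single recurrent core that holds all temporal context. A minimal sketch of that idea, assuming PyTorch and made-up sizes and heads (only the 1024-to-4096 hidden-unit figures come from the thread; this is not the actual OpenAI Five code):

```python
import torch
import torch.nn as nn

class FivePolicyCoreSketch(nn.Module):
    """Toy single-LSTM policy core in the spirit of the thread."""

    def __init__(self, obs_dim=512, hidden_units=4096, num_actions=30):
        super().__init__()
        # Assumed observation encoder; the real model embeds many per-unit features.
        self.encode = nn.Linear(obs_dim, hidden_units)
        # The single recurrent core the article points to (1024 units then, 4096 after two doublings).
        self.lstm = nn.LSTM(hidden_units, hidden_units, batch_first=True)
        # Hypothetical action head; the real model has several structured heads.
        self.action_head = nn.Linear(hidden_units, num_actions)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim) already-embedded observations, one per game tick
        x = torch.relu(self.encode(obs_seq))
        out, state = self.lstm(x, state)  # all temporal "strategy and tactics" context lives here
        return self.action_head(out), state
```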

Replying to @gdb
Do you see as much improvement with the second doubling as you saw with the first?
Replying to @gdb
Stop these comparisons to animal models; they are just plainly wrong 🤣
Replying to @gdb
New goal for biologists: train literal honeybees to play Dota2
Replying to @gdb
Even though the NN doesn't work at a pixel level, I thought a spatial attention-like mechanism (i.e., getting the bots to pay more attention to stuff happening closer to where they are) would work well, especially since most bots can only act locally.
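A rough sketch of what this reply is suggesting (my own illustration, not anything OpenAI has described): attention logits over other units are penalized by their map distance to the acting hero, so nearby events get more weight. All names and the `scale` parameter are assumptions.

```python
import torch

def distance_biased_attention(query, unit_feats, unit_pos, hero_pos, scale=0.01):
    """query: (d,), unit_feats: (n, d), unit_pos: (n, 2), hero_pos: (2,)."""
    logits = unit_feats @ query                             # content-based relevance scores
    dist = torch.linalg.norm(unit_pos - hero_pos, dim=-1)   # map distance to the acting hero
    weights = torch.softmax(logits - scale * dist, dim=0)   # closer units get a score bonus
    return weights @ unit_feats                             # attended summary of nearby units
```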
Replying to @gdb
Did the farming improve? Also, I wonder if the AI learned to stack camps.
Replying to @gdb
What have your results been with time-convolution vs LSTM?