Seeing a lot of posts speculating that our results at The International show that today’s AI just can’t hope to match humans at strategy. Our interpretation is we have yet to reach the limits of current AI technology. Learning curves haven’t asymptoted just yet!
Note: would be quite a cool finding if our current approach does *not* scale to the level of the top human pros. But this week's matches were a snapshot of current progress, not what's possible.

Aug 24, 2018 · 10:31 AM UTC

Replying to @gdb
Agree. Nothing moves science forward like solid empirical evidence. This will always be valuable work, regardless of winning.
Replying to @gdb
I wish we could see how well humans would play after 5 million matches in their prime!
Replying to @gdb
It would point to an incomplete understanding of the variation and selection of agent models needed for universal knowledge creation; the structure of evolving self-play would need to be reworked to unlock human-level complexity of strategy, if that's even possible in the Dota sandbox.
Replying to @gdb
I think you guys could be missing something in the rewards that pro players consider in their daily gameplay. Also, you have to focus on these factors: timing (early, mid, late game), the role of each hero, and laning.
Replying to @gdb
Idea: extend the robot hand, possibly using two hands, to learn to solve a Rubik's Cube.
Replying to @gdb
Congrats on progress. Hope your team gets a little break!
Replying to @gdb
Seems pretty obvious that the bots can still get a lot better. Even just learning to itemise on their own would dramatically improve their performance.
Replying to @gdb
The people touting "never" need to realize that OpenAI Five was using its biggest spells to dispose of creeps, something even the lowest MMR players don't do. It's still in its infancy.