From Richard Sutton (incompleteideas.net/), an essay on the repeated historical finding that computational scale has always beaten cleverness in AI (and some commentary on why this is such a hard-to-accept fact): incompleteideas.net/IncIdeas…
I agree to some extent, but as I mentioned in person previously, Q-learning is a nontrivial algorithm built on a nontrivial framework (MDPs and reinforcement learning), and without Q-learning there certainly wouldn't be DQN or the current explosion in deep RL.
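[Editor's note, not part of the thread: for context, the algorithm referred to is the one-step Q-learning update, which DQN later scaled up by replacing the table with a neural network. A minimal tabular sketch in Python; the state/action counts and hyperparameters are arbitrary illustrative assumptions.]

import numpy as np

n_states, n_actions = 16, 4   # arbitrary sizes, for illustration only
alpha, gamma = 0.1, 0.99      # learning rate and discount factor (assumed values)
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next, done):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target."""
    # Target is the reward plus the discounted value of the best next action,
    # unless the episode ended, in which case it is just the reward.
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])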
Replying to @TheGregYang
Agree!
Replying to @etzioni
Both are important! Look at GPT-2, for instance: that's a general-purpose architectural improvement (the Transformer) run at massive scale. One interesting point from the essay is that scale gets a bad rap, yet retreating from scale toward hand-crafted cleverness isn't a good way of fixing the problem!

Mar 15, 2019 · 4:44 AM UTC