AI algorithms are making rapid progress — there's been an exponential decrease in the amount of compute needed to train a neural network to a fixed level of performance on ImageNet. The slope is steeper than Moore's Law.
Since 2012, the amount of compute for training to AlexNet-level performance on ImageNet has been decreasing exponentially — halving every 16 months, in total a 44x improvement. By contrast, Moore's Law would only have yielded an 11x cost improvement: openai.com/blog/ai-and-effic…

May 5, 2020 · 4:12 PM UTC
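
For scale, here is a quick back-of-the-envelope check of the figures quoted above. This is only a sketch under assumed dates (AlexNet in 2012, the measurement endpoint around 2019) and an assumed 2-year doubling time for Moore's Law; the rounded 16-month halving time lands slightly below the quoted 44x.

```python
# Rough sanity check of the tweet's numbers (assumptions: ~7-year span from
# AlexNet in 2012 to the ~2019 endpoint, Moore's Law doubling every ~24 months).
months = 7 * 12                      # assumed span: 2012 -> 2019

algo_gain = 2 ** (months / 16)       # training efficiency doubles every 16 months
moore_gain = 2 ** (months / 24)      # Moore's Law: doubling every ~24 months

print(f"algorithmic efficiency gain: ~{algo_gain:.0f}x")   # ~38x (tweet quotes 44x)
print(f"Moore's Law improvement:     ~{moore_gain:.0f}x")   # ~11x, matching the tweet
```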

Replying to @gdb
people got smarter, not algorithms
Replying to @gdb
But training these algorithms is using more and more computation... especially NAS trainings.
Replying to @gdb
Great analysis. Added it to a recent discussion on the topic
This recent analysis by @OpenAI seems pertinent to the thread
Replying to @gdb
Damn, I just noticed your epic twitter handle.
Replying to @gdb @TheKnave2
Governor AI.