Released a new analysis showing that compute for landmark AI models before 2012 grew exactly in line with Moore's Law. From 2012 to 2018, compute grew every ~1.5 years by the amount that previously took a decade. Deep learning is 60, not 6, years of steady progress: openai.com/blog/ai-and-compu…

Nov 7, 2019 · 6:36 PM UTC
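
A minimal back-of-envelope sketch of the growth comparison in the tweet above (Python), assuming the classic ~2-year Moore's Law doubling and the ~3.4-month doubling time reported in the linked analysis; both numbers are approximations, not quoted from the thread.

# Back-of-envelope for "every ~1.5 years vs. a decade": assumes a ~2-year
# Moore's Law doubling and a ~3.4-month post-2012 doubling time.

MOORE_DOUBLING_MONTHS = 24
AI_DOUBLING_MONTHS = 3.4

def growth_factor(months, doubling_months):
    """Multiplicative growth over `months` for a given doubling time."""
    return 2 ** (months / doubling_months)

decade_of_moore = growth_factor(120, MOORE_DOUBLING_MONTHS)  # ~32x
ai_in_18_months = growth_factor(18, AI_DOUBLING_MONTHS)      # ~39x

print(f"Moore's Law over a decade: ~{decade_of_moore:.0f}x")
print(f"Post-2012 trend over 1.5 years: ~{ai_in_18_months:.0f}x")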

Replying to @gdb
Petaflop/s-days are an awkward metric; just FLOPs would be fine if you're going to have an exponent on the axis anyway...
We debated that internally before releasing the original analysis last year :). The plus side is that a petaflop/s-day is, at least in principle, somewhat relatable: it's the equivalent of 8 V100s at full efficiency for a day, or ~32 V100s at typical efficiency.
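
A rough sketch of that conversion, assuming ~125 TFLOP/s peak mixed-precision throughput per V100 and ~25% typical utilization; these are ballpark assumptions, not figures stated in the thread.

# One petaflop/s-day in raw FLOPs, and how many V100s it corresponds to.
# Assumes ~125 TFLOP/s tensor-core peak per V100 and ~25% typical utilization.

PFS_DAY_FLOPS = 1e15 * 86_400            # 1 PFLOP/s sustained for 24 hours
V100_PEAK_FLOPS = 125e12                 # assumed per-GPU peak, FLOP/s
TYPICAL_UTILIZATION = 0.25               # assumed fraction of peak achieved

gpus_at_peak = 1e15 / V100_PEAK_FLOPS                          # 8 GPUs
gpus_typical = 1e15 / (V100_PEAK_FLOPS * TYPICAL_UTILIZATION)  # 32 GPUs

print(f"1 petaflop/s-day = {PFS_DAY_FLOPS:.2e} FLOPs")
print(f"V100s at peak for one day: {gpus_at_peak:.0f}")
print(f"V100s at ~25% utilization: {gpus_typical:.0f}")
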
Replying to @gdb
This is because Moore's Law (rising transistors/$) was facing an opposite trend in CPU design (falling computations/s/transistor), which unwound suddenly when GPGPU went mainstream. Most of that bump is a one-time switchover; the rest is rising $ budgets.
Replying to @gdb
Yet we are not an inch closer to reasoning. Just better statistics.
Replying to @gdb
Very interesting analysis, thank you! I was curious: what opportunities are there to decrease the compute used in AI research? Is there any room to optimize the implementations? 🤔