Released a new analysis showing that compute used for landmark AI models before 2012 grew in line with Moore's law, doubling roughly every two years.
From 2012 to 2018, compute doubled every ~3.4 months, growing as much every 1.5 years as it previously did in a decade.
Deep learning is 60, not 6, years of steady progress: openai.com/blog/ai-and-compu…
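A back-of-the-envelope check of that claim, assuming the ~2-year (Moore's law) and ~3.4-month doubling times from the linked post:

import math

# Doubling times from the linked post (assumptions): pre-2012 compute
# tracked Moore's law (~24-month doubling); 2012-2018 compute doubled
# roughly every 3.4 months.
MOORE_DOUBLING_MONTHS = 24.0
AI_DOUBLING_MONTHS = 3.4

# Growth over 1.5 years (18 months) in the 2012-2018 regime.
growth = 2 ** (18.0 / AI_DOUBLING_MONTHS)
print(f"1.5 years at a 3.4-month doubling: {growth:.0f}x")  # ~39x

# Time for the same growth under Moore's law's slower doubling.
years = math.log2(growth) * MOORE_DOUBLING_MONTHS / 12
print(f"same growth under Moore's law: {years:.1f} years")  # ~10.6 years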
We debated that internally before releasing the original analysis last year :). The plus side is that a petaflop/s-day is, at least in principle, somewhat relatable: it's the equivalent of 8 V100s running at full efficiency for a day, or ~32 V100s at typical efficiency.
Nov 7, 2019 · 6:56 PM UTC
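A minimal sketch of that petaflop/s-day arithmetic, assuming a V100 peak of 125 teraflop/s (tensor cores, mixed precision) and ~25% as the "typical efficiency"; both figures are assumptions consistent with the 8-GPU and ~32-GPU numbers above:

# One petaflop/s-day is 10^15 FLOP/s sustained for a full day.
V100_PEAK_FLOPS = 125e12          # FLOP/s per GPU at peak (assumed)
SECONDS_PER_DAY = 86_400
PFS_DAY = 1e15 * SECONDS_PER_DAY  # total FLOP in one petaflop/s-day

def pfs_days(num_gpus: int, days: float, utilization: float) -> float:
    """Petaflop/s-days delivered by num_gpus V100s over the given days."""
    total_flop = num_gpus * V100_PEAK_FLOPS * utilization * days * SECONDS_PER_DAY
    return total_flop / PFS_DAY

print(pfs_days(8, days=1, utilization=1.0))    # 1.0 (8 V100s at full efficiency)
print(pfs_days(32, days=1, utilization=0.25))  # 1.0 (~32 V100s at typical efficiency)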