AI algorithms are making rapid progress: the compute needed to train a neural network to a fixed level of performance on ImageNet has been falling exponentially, on a slope steeper than Moore's Law.
Since 2012, the compute required to reach AlexNet-level performance on ImageNet has halved every 16 months, a 44x improvement in total.
By contrast, Moore's Law would only have yielded an 11x cost improvement over the same period: openai.com/blog/ai-and-effic…
May 5, 2020 · 4:12 PM UTC
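
A quick back-of-the-envelope check of those growth rates, as a minimal Python sketch; the ~84-month window (2012 to 2019) and the 24-month Moore's Law doubling period are assumptions for illustration, not figures stated in the tweet:

```python
# Back-of-the-envelope check of the tweet's figures.
# Assumptions (not stated in the tweet): a ~7-year window
# (2012-2019, ~84 months) and a 24-month doubling period
# for the Moore's Law baseline.

months = 84  # 2012 -> 2019

# Algorithmic efficiency: compute to reach AlexNet-level
# performance halves every 16 months.
algo_gain = 2 ** (months / 16)   # ~38x; the reported 44x implies a
                                 # slightly longer span
                                 # (log2(44) * 16 ~= 87 months).

# Moore's Law baseline: cost halves every ~24 months.
moore_gain = 2 ** (months / 24)  # ~11x, matching the tweet

print(f"algorithmic gain: ~{algo_gain:.0f}x, "
      f"Moore's Law: ~{moore_gain:.0f}x")
```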