We’re starting a new team to apply deep learning to the deep learning iteration cycle. Looking at our own workflows, especially on large projects like Dota, it feels like there’s a lot of room for improvement. Join us:
At OpenAI, we're excited by work on learned optimizers such as Learning To Learn By Gradient Descent By Gradient Descent. These methods could improve the efficiency of our models and allow for higher-quality science. Looking for a founding team member! tinyurl.com/tjz8ct3
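The core idea behind "gradient descent by gradient descent" is to treat the optimizer itself as something you train: unroll a few inner optimization steps and backpropagate through them to improve the update rule. The paper uses an LSTM optimizer; the toy sketch below (a hypothetical, hand-derived example, not the paper's method) meta-learns just a single learning rate `eta` by differentiating through an unrolled SGD loop on f(x) = x².

```python
# Minimal sketch of the learned-optimizer idea: meta-learn a learning rate
# by gradient descent through an unrolled inner gradient-descent loop.
# Toy problem: inner SGD on f(x) = x^2, so x_{t+1} = x_t - eta * 2*x_t,
# i.e. x_T = x0 * (1 - 2*eta)^T. Everything here is a hand-picked toy.

def inner_loss_and_meta_grad(eta, x0=1.0, steps=5):
    shrink = 1.0 - 2.0 * eta          # per-step multiplier of the inner SGD
    x_T = x0 * shrink ** steps        # parameter after `steps` unrolled updates
    loss = x_T ** 2                   # final inner loss f(x_T)
    # Meta-gradient d(loss)/d(eta), chained through every unrolled step:
    # loss = x0^2 * shrink^(2*steps), d(shrink)/d(eta) = -2.
    dloss_deta = 2.0 * steps * x0 ** 2 * shrink ** (2 * steps - 1) * (-2.0)
    return loss, dloss_deta

eta = 0.05                            # start with a deliberately poor step size
for _ in range(100):                  # outer (meta) gradient descent on eta
    loss, g = inner_loss_and_meta_grad(eta)
    eta -= 0.05 * g                   # meta learning rate, chosen by hand

print(f"learned eta={eta:.4f}, final inner loss={loss:.2e}")
```

On this quadratic the ideal step size is eta = 0.5 (which zeroes x in one step), and the meta-loop drives `eta` toward it while the inner loss shrinks by orders of magnitude. The real paper replaces the single scalar with an RNN that maps gradients to updates, but the backprop-through-the-unrolled-loop structure is the same.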

Dec 6, 2019 · 8:14 PM UTC

Replying to @gdb
Excited to see that. I think there's still a lot of room for automation in DL, Google's AutoML notwithstanding. But a word of caution: don't fall into the trap of language designers (the Haskell crowd, etc.) who create great tooling but have little impact on applications.
Replying to @gdb
Yo dawg, I heard you like deep learning, so I..