President & Co-Founder @OpenAI

Joined July 2010
Introducing Mr. and Mrs. Brockman:
Replying to @ID_AA_Carmack
We debated that internally before releasing the original analysis last year :). The plus side is that a petaflop/s-day is, at least in principle, somewhat relatable — it's the equivalent of 8 V100s at full efficiency for a day, or ~32 V100s at typical efficiency.
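The unit converts to raw operation counts with a couple of lines of arithmetic. A minimal sketch, assuming a ~125 TFLOPS mixed-precision peak per V100 (an assumed figure, not from the tweet):

```python
# Back-of-envelope check of the petaflop/s-day unit.
# Assumption: ~125e12 operations/second peak per V100 (mixed precision).
V100_PEAK_FLOPS = 125e12
SECONDS_PER_DAY = 24 * 60 * 60

gpus_for_one_petaflop = 1e15 / V100_PEAK_FLOPS      # -> 8.0 GPUs at full efficiency
ops_per_pfs_day = 1e15 * SECONDS_PER_DAY            # -> 8.64e19 operations

print(f"{gpus_for_one_petaflop:.0f} V100s at peak sustain 1 petaflop/s")
print(f"1 petaflop/s-day = {ops_per_pfs_day:.2e} operations")
```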
Released a new analysis showing that compute for landmark AI models before 2012 grew exactly in line with Moore's Law. From 2012-2018, compute grew as much every 1.5 years as it previously did in a decade. Deep learning is 60, not 6, years of steady progress: openai.com/blog/ai-and-compu…
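The "1.5 years ≈ a Moore's-Law decade" framing can be sanity-checked with a short calculation. A rough sketch, assuming the ~3.4-month post-2012 doubling time reported in the linked analysis and a classic 2-year Moore's Law doubling period:

```python
# Compare growth over 18 months at the post-2012 rate with a decade of Moore's Law.
moore_doubling_years = 2.0           # classic Moore's Law doubling period
post_2012_doubling_years = 3.4 / 12  # ~3.4-month doubling reported for 2012-2018

growth_in_18_months = 2 ** (1.5 / post_2012_doubling_years)  # ~39x
growth_in_moore_decade = 2 ** (10 / moore_doubling_years)    # 32x

print(f"18 months post-2012: ~{growth_in_18_months:.0f}x")
print(f"A Moore's-Law decade: ~{growth_in_moore_decade:.0f}x")
```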
The best part of our relationship is we love all the same things.
Just released the largest version of GPT-2, completing our 9-month staged release process:
We're releasing the 1.5-billion-parameter GPT-2 model as part of our staged release publication strategy.
- GPT-2 output detection model: github.com/openai/gpt-2-outp…
- Research from partners on potential malicious uses: d4mucfpksywv.cloudfront.net/…
- More details: openai.com/blog/gpt-2-1-5b-r…
A fireside chat with @drewhouston about OpenAI: blog.dropbox.com/topics/comp…. Covered our culture, how we think about AI progress, and many of our recent results.
Cool to see the discussion of our multiagent work at the top of /r/programming:
Just reversed the semantics of a "dryrun" flag, meaning that my script started deleting data when it was just supposed to be printing. Good reminder of how easy it is to make damaging bugs, and that you never grow out of some programming mistakes.
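A hedged sketch of the pattern the tweet is about (a hypothetical script, not the one mentioned): keep the safe behavior as the default and put the destructive path behind a positive, explicit flag, so flipping a boolean's meaning in a refactor can't silently start deleting.

```python
# Hypothetical dry-run guard: default is print-only; deletion requires --delete.
import argparse
import os

def delete_files(paths, dry_run=True):
    for path in paths:
        if dry_run:
            print(f"[dry-run] would delete {path}")
        else:
            print(f"deleting {path}")
            os.remove(path)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # A positive "--delete" flag is harder to invert by accident than a
    # "--dryrun" boolean whose sense might get flipped later.
    parser.add_argument("--delete", action="store_true",
                        help="actually delete; omit to only print")
    parser.add_argument("paths", nargs="*")
    args = parser.parse_args()
    delete_files(args.paths, dry_run=not args.delete)
```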
Kinda funny that @GaryMarcus has a problem with the title of "Solving Rubik's Cube with a Robot Hand" for a result that literally involves a robot hand solving a Rubik's cube.
Replying to @GaryMarcus
I will say again that the work itself is impressive, but mischaracterized, and that a better title would have been "manipulating a Rubik's cube using reinforcement learning" or "progress in manipulation with dextrous robotic hands" or similar lines.
OpenAI fuses the best of academia and startups. We do large-scale research with close-knit teams of researchers and engineers. Both build/maintain large systems and constantly iterate on new ideas:
"Solving the Rubik's Cube with a Robot Hand" took many human hands over the past 2.5 years — meet our Robotics team! (PS they're hiring: openai.com/jobs/!)
Replying to @nikhilbd
Co-founders make it easier :).
One reason ML for physical robotics is so hard is how many confounders you need to deal with:
Replying to @OpenAI
One person on the project used to have consistently better results on the robot. For a while, we couldn't figure out why. It turned out that his laptop was faster, and it incurred less latency on the robot, which in turn gave better results.
Replying to @tsimonite
Rubik’s certainly won’t be our next milestone, but I wouldn’t be surprised if it remained a benchmark!
One of the surprising parts of working with a smart, adaptable robotic system is that it might be broken and secretly compensating for its injury:
Replying to @OpenAI
Sometimes we were unaware that our robot was partially broken because the neural network could compensate for it. The model worked just fine with broken fingers or defective sensors.
Our robot is a "small but vital step toward the kind of robots that might one day perform manual labor or household tasks and even work alongside humans, instead of in closed-off environments, without any explicit programming governing their actions." theverge.com/2019/10/15/2091…
Replying to @tsimonite
Getting beyond 0% success was the goal. Like with OpenAI Five, there's no fundamental barrier to increasing reliability (and note that after OpenAI Five's losses at TI 2018, many people thought that we'd hit the limits of our technology!).
Our robotic system solves the cube 60% of the time under normal conditions — but only 20% with an adversarially scrambled cube. The big result is that it's possible at all. Like with OpenAI Five, reliability keeps getting better the more we train.
Replying to @gdb @xpasky @egrefen
Though, curious what y'all think of this tweet, which uses 540 million years as the comparison:
Replying to @drsrinathsridha
Unfortunately, no. Human hand dexterity is a product of millions of years of evolution, not just as primates but through our ancestors all the way back in the Cambrian explosion (~540 million years ago). (3/9) en.wikipedia.org/wiki/Cambri…
When we started working on robotics in 2016, there was controversy about how to make robots that learn. Gather experience from *many* physical robots, or maybe *somehow* transfer knowledge from simulation? @woj_zaremba bet on sim, and it's worked better than any of us imagined.
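One documented ingredient of that sim-first approach in the dexterity work is domain randomization: randomize the simulator's physical parameters every episode so the policy can't overfit to any single simulated world. A minimal sketch of the idea, with hypothetical parameter names and ranges (not OpenAI's actual configuration):

```python
# Sketch of domain randomization for sim-to-real training (illustrative only).
import random

def sample_sim_params():
    """Draw a fresh set of physics parameters for each training episode."""
    return {
        "object_mass": random.uniform(0.05, 0.2),     # kg
        "finger_friction": random.uniform(0.5, 1.5),
        "actuator_delay": random.uniform(0.0, 0.04),  # seconds of latency
        "observation_noise": random.uniform(0.0, 0.02),
    }

def run_training(num_episodes, train_episode):
    # train_episode(params) rolls out and updates the policy in a simulator
    # configured with the sampled parameters.
    for _ in range(num_episodes):
        train_episode(sample_sim_params())
```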
OpenAI progress with physical robots over the past 2.5 years:
- openai.com/blog/spam-detecti…
- openai.com/blog/robots-that-…
- openai.com/blog/generalizing…
- openai.com/blog/learning-dex…
- openai.com/blog/solving-rubi…
Pretty exciting to see the progress since our first result of a Spam Detecting robot:
Replying to @xpasky @egrefen
The question was "How many billion years of training did the Deep RL agent need again?" Just wanted to point out that humans have an irreducible evolutionary prior, whose effect is hard to internalize. Could certainly have been more precise!