President & Co-Founder @OpenAI

Joined July 2010
Step towards a general computer-using agent: openai.com/blog/vpt/ AI learns to use keyboard & mouse from video (& small amount of contractor data). With a bit of reinforcement learning, can do a task that takes proficient humans 20 minutes—crafting diamond tools in Minecraft.
Really cool to see how many people want to use cutting-edge AI tools. Seems like we're just crossing the threshold of usefulness — and we are still just at the beginning of how amazing these technologies can become:
little openai update: gpt-3, github copilot, and dall-e each have more than 1 million signups! took gpt-3 ~24 months to get there, copilot i think around 6 months, and dall-e only 2.5 months.
Very cool to see what a creative group of kids can express with DALL·E 2: youtube.com/watch?v=y9y22D7z…
Frog Comic, created with GPT-3 & DALL-E 2.
GPT-3 is now a published poet! newyorker.com/culture/cultur…
The waitlist is over; GitHub Copilot is now generally available! One of our favorite usages of Codex:
GitHub Copilot helps you get better focus and build faster by instantly suggesting code—and is now available for developers everywhere. github.blog/2022-06-21-githu…
Always wanted to be on the cover of Cosmo:
PSA: The world’s smartest artificial intelligence (aka @OpenAI) made this magazine cover (yes, really!)—its first EVER.
Evolving the code of a simulated robot via mutations provided by a large language model:
“Evolution through Large Models” – new paper from our team at OpenAI. Step towards evolutionary algos that continually invent and improve at inventing: Large models can suggest (+ improve at making) meaningful mutations to code. Paper: arxiv.org/abs/2206.08896 1/4
What I find most incredible is that the idea of the artificial neural network was designed in the 1940s, and the field since then has been realizing the vision while yielding increasingly stunning results. Feels like a collective project of humanity for 80 years and counting.
i find it both obvious and incredible that a neural network is a digital brain that lives inside a computer (and that actually kinda works)
AI alignment will require talent from many fields; there are many hard open problems but we're starting to make rapid progress. Welcome, Scott!
Really looking forward to working with the legendary Scott Aaronson! scottaaronson.blog/?p=6484
Explaining current events is fitting the train set. Predicting what comes next is generalizing to the test set. The former sees most of the activity for both humans and models, but the latter is where the value is.
One of my favorite design patterns is the middleware stack—providing a simple API that is meant to be “wrapped” by a new object implementing the same API. Makes it very easy to add & reuse functionality with minimal coupling between components. Equally useful in web servers and ML.
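The pattern above can be sketched in a few lines. This is a minimal illustration, not from any particular framework; the names (Handler, LoggingMiddleware, TimingMiddleware) and the one-method `handle()` API are invented for the example.

```python
# Middleware-stack sketch: each layer wraps an object exposing the same
# single-method API it presents itself, so layers compose in any order.
import time

class Handler:
    """Innermost component: turns a request dict into a response string."""
    def handle(self, request):
        return f"hello, {request.get('name', 'world')}"

class LoggingMiddleware:
    """Wraps anything with .handle(), logging before delegating."""
    def __init__(self, inner):
        self.inner = inner
    def handle(self, request):
        print(f"request: {request}")
        return self.inner.handle(request)

class TimingMiddleware:
    """Wraps anything with .handle(), timing the inner call."""
    def __init__(self, inner):
        self.inner = inner
    def handle(self, request):
        start = time.perf_counter()
        response = self.inner.handle(request)
        print(f"took {time.perf_counter() - start:.6f}s")
        return response

# Because every layer consumes and presents the same API, stacking order
# is a one-line decision with no coupling between the layers.
app = TimingMiddleware(LoggingMiddleware(Handler()))
print(app.handle({"name": "reader"}))  # prints "hello, reader" after the log lines
```

The same shape underlies WSGI/Rack middleware in web servers and layer-wrapping (e.g. adding normalization or dropout around a module) in ML code.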
While weightlifting, I've noticed my form gets better the heavier the weight. Seems like a good analogy for how startups can succeed against long odds — only when the challenge is sufficiently hard will performance rise to the occasion.
Extremely impressed by the progress the frontend ecosystem has made in the past decade. Eg writing a real-time updating app is now dead simple with React and GraphQL — in 2010, it seemed like every company was inventing their own framework. Also npm, ES, etc have come a long way.
Progress on AI alignment: we've created an AI system which can write a critique of a short story summary, successfully helping humans find hidden flaws (a task that takes humans ~10 minutes). Step towards our long-term alignment roadmap: openai.com/blog/critiques/
Refactoring existing code is underrated. Huge leverage in making existing functionality more useful rather than starting from scratch. Also quite fun to have a chance to do better than in the past, with the benefit of your present & historical knowledge.
A good overview of what it's like working with DALL-E:
A week with Dall-E 2, OpenAI's text-to-image AI tool that is in private research beta and feels like a breakthrough in the history of consumer tech (@caseynewton / The Verge) theverge.com/23162454/openai… techmeme.com/220610/p21#a220…
How large neural networks are trained across increasingly massive clusters — cleverly slicing the computation on a wide variety of axes, rematerializing intermediate results, and much more:
Techniques for training large neural networks, by @lilianweng and @gdb: openai.com/blog/techniques-f…
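Rematerialization (gradient checkpointing), one of the techniques mentioned above, can be sketched on a toy scalar chain: the forward pass discards intermediate activations to save memory, and the backward pass recomputes them when gradients are needed. This is an illustrative sketch only — the function chain and helper names are invented here, and real systems checkpoint at segment boundaries rather than recomputing from the original input.

```python
# Rematerialization sketch: trade compute for memory by recomputing
# activations during the backward pass instead of storing them.
import math

# A toy "network": a chain of scalar functions paired with their derivatives.
layers = [
    (math.sin, math.cos),                          # f(x)=sin x,  f'(x)=cos x
    (lambda x: x * x, lambda x: 2 * x),            # f(x)=x^2,    f'(x)=2x
    (math.tanh, lambda x: 1 - math.tanh(x) ** 2),  # f(x)=tanh x
]

def forward(x):
    """Run the chain, discarding intermediates (low memory)."""
    for f, _ in layers:
        x = f(x)
    return x

def grad_with_rematerialization(x):
    """Backward pass that recomputes activations from the saved input."""
    # Rematerialize: rebuild the activation entering each layer.
    acts = [x]
    for f, _ in layers[:-1]:
        acts.append(f(acts[-1]))
    # Chain rule, walking back through the recomputed activations.
    g = 1.0
    for (_, df), a in zip(reversed(layers), reversed(acts)):
        g *= df(a)
    return g
```

At cluster scale the same idea is applied per pipeline segment: keep activations only at segment boundaries and rerun each segment's forward pass during backprop, cutting activation memory at the cost of roughly one extra forward pass.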
A surprising fraction of accomplishing any unprecedented feat is having the conviction that it can be done.
Twitter Spaces discussion on Best Practices for LLM deployment with @AI21Labs @OpenAI @CohereAI at noon PT, moderated by @percyliang:
Join @Miles_Brundage @aidangomezzz @percyliang and @Udi73613335 for a Twitter Spaces discussion in an hour (noon PT) to hear about best practices for deploying large language models👇