President & Co-Founder @OpenAI

Joined July 2010
Overfitting is the mind killer.
Machine learning is the art of getting a computer to optimize what you can measure.
There's nothing quite so freeing as the feeling of starting a greenfield project from scratch.
DALL-E as a new medium of artistic expression:
been playing with dall-e for ~24hrs and as someone who has never felt traditionally “artistic”, being able to see my ideas in such visual fidelity is incredible!!! i'm completely obsessed. here's some stuff i've made so far (will keep adding to this thread):
Nice result: hinting to a language model that it's ok to show its work (by adding "Let's think step by step" to the prompt) rather than directly outputting the final answer massively boosts results on reasoning benchmarks. True of humans too.
Large Language Models are Zero-Shot Reasoners Simply adding “Let’s think step by step” before each answer increases the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with GPT-3. arxiv.org/abs/2205.11916
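A minimal sketch in Python of the zero-shot chain-of-thought trick described above (illustrative only, not code from the tweet or the paper; the question text and variable names are made up). The whole technique is appending the cue to the prompt before the model's answer:

# Illustrative sketch: the only difference between the two prompts is the
# "Let's think step by step" cue that precedes the model's answer.
question = (
    "A juggler has 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# Standard zero-shot prompt: asks for the final answer directly.
direct_prompt = f"Q: {question}\nA: The answer is"

# Zero-shot chain-of-thought prompt: identical except for the appended cue,
# which invites the model to show its reasoning before the final answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

print(direct_prompt)
print()
print(cot_prompt)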
ML systems debugging rewards a willingness to dig across the entire stack, to chase a slightly suspicious signal back to its source, and to derive chains of failures from surprising end results. High cognitive burden but also some of the most exhilarating work upon success.
On a day-to-day basis, certainty in one's own beliefs is something that the scientist cannot afford; but on the scale of decades, certainty in correct contrarian beliefs is what differentiates the great careers from the rest.
In most startups, the median business person has less impact than the median engineer, but the top business person has more impact than the top engineer. To have truly outsized impact as an engineer, you need to find the rare business where sustained tech velocity is the high-order bit.
The most productive engineers also tend to be the ones who create the biggest absolute number of bugs (even if their relative rate is low). Don't be afraid to make mistakes, but instead be afraid to let problems lurk beneath the surface or linger once they have been discovered.
Replying to @ilyasut
And it ❤️s you back!
There are now 70 production Codex apps: openai.com/blog/codex-apps/
Any sufficiently advanced magic is indistinguishable from AI.
DALL-E 2 is stunningly better than DALL-E 1, but what's interesting (and underappreciated) is that the newer model is much smaller and uses a similar amount of training compute. The improvements are essentially all due to algorithmic innovation.
The way you know you are on an exponential is that you never cease to be amazed by what your team produces.
If you can repro it cheaply, you can fix it.
I’d always wondered what the Jabberwock would look like:
Replying to @adnothing @npew
Here's mine, using dall-e 2 as well:
Why did the statistician go back to school? To get his degree in regression! (Courtesy of GPT-3 when prompted to make a joke involving the dual meaning of regression.)
One trick to stop your TODO list from growing without bound: create a fresh list every day, and archive the previous list. Copy over old items only as you actively remember them. Pretty soon you get good at only tracking items that are actually worth the mental space.
Even in machine learning, as important as knowing how to listen to the data is knowing when to ignore it.
A surprising fraction of machine learning performance engineering is figuring out how to profile the system component that shouldn't be your bottleneck but somehow is.