President & Co-Founder @OpenAI

Joined July 2010
Replying to @wear_here
Yes, this technique (which we call "context stuffing") is about hinting to the model the task you want, what an acceptable output looks like, etc. — basically establishing the pattern to follow or process to apply, often involving extracting knowledge already in the model.
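A minimal sketch of a context-stuffed prompt for a translation task (the example text is illustrative, not from one of the demos):

    # "Context stuffing": show the model the pattern to follow,
    # then let it continue. (Illustrative example only.)
    prompt = (
        "English: Hello, how are you?\n"
        "French: Bonjour, comment allez-vous ?\n"
        "English: Where is the library?\n"
        "French:"
    )
    # The model continues the established pattern, e.g. with
    # " Où est la bibliothèque ?"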
Replying to @wear_here
Ah, got it! Empirically, giving the model examples of the kind of behavior you'd like to see really improves its performance. I would think of it as a way of communicating to the model what task it's supposed to solve. We study this in the GPT-3 paper: arxiv.org/abs/2005.14165
Replying to @gdb @wear_here
Here is a video of a full integration: nitter.vloup.ch/gdb/status/12712… Curious if anything still seems off to you!
Replying to @wear_here
We definitely omitted the "display" code, but the typical workflow for Q&A is: 1. Prepare a generic Q&A context. 2. Concatenate in a question. 3. Call the API to generate text. 4. Return the newly generated text. In the demo, the green text is step 2; the displayed output is the question plus the text generated in step 4.
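A minimal sketch of that four-step workflow in Python (the engine name, parameters, and context text here are assumptions for illustration, not the demo's exact code):

    import requests

    API_KEY = "sk-..."  # placeholder beta API key

    # Step 1: a generic Q&A context that establishes the pattern.
    QA_CONTEXT = (
        "I am a question answering bot.\n"
        "Q: What is the capital of France?\n"
        "A: Paris.\n"
    )

    def answer(question):
        # Step 2: concatenate the question into the context.
        prompt = QA_CONTEXT + "Q: " + question + "\nA:"
        # Step 3: call the API to generate text.
        resp = requests.post(
            "https://api.openai.com/v1/engines/davinci/completions",
            headers={"Authorization": "Bearer " + API_KEY},
            json={"prompt": prompt, "max_tokens": 64, "stop": ["\n"]},
        )
        # Step 4: return the newly generated text.
        return resp.json()["choices"][0]["text"]

    print(answer("Am I a robot?"))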
Replying to @mitchellgoffpc
From our waitlist (forms.office.com/Pages/Respo…): "We're offering free access to the API for the next two months of our private beta, while we determine our longer-term pricing."
Building a natural language shell using our API in about 3 minutes: vimeo.com/427943407/98fe5258…
Replying to @AIonmymind
Using our Q&A prompt: """... Q: Am I a robot? A:""" The API returns: " No, you are not a robot."
Using the API is very simple. Just POST some text to us, and we'll stream back text from the model:
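A sketch of what such a request could look like (the endpoint, engine name, and server-sent-event framing are assumptions about the beta API, not confirmed details from this tweet):

    import json
    import requests

    # Assumed beta endpoint; with "stream": true the API sends the
    # completion back incrementally as server-sent events.
    resp = requests.post(
        "https://api.openai.com/v1/engines/davinci/completions",
        headers={"Authorization": "Bearer sk-..."},  # placeholder key
        json={"prompt": "Say hello:", "max_tokens": 32, "stream": True},
        stream=True,
    )
    for line in resp.iter_lines():
        # Each event line looks like b'data: {...}'; skip the terminator.
        if line.startswith(b"data: ") and line != b"data: [DONE]":
            event = json.loads(line[len(b"data: "):])
            print(event["choices"][0]["text"], end="", flush=True)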
This is one of the models in the API with a very simple UI on top! If you sign up for the beta, you can build your own integration using that model.
Replying to @mark_riedl
We have an academic access program, where we're offering free API usage to academics: forms.office.com/Pages/Respo… You can run the same evaluations that our internal teams do:
Training/eval'ing GPT-3 involved a bunch of gnarly distributed system problems (which I love, but are an acquired taste tbh). The API hides those messy details so you can use normal python w/ a tight feedback loop. Gave me the same tingles as switching from TF to pytorch 😊
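A hypothetical sketch of that kind of tight loop: a tiny eval over a handful of prompts in plain Python, no cluster required (the helper and examples are illustrative, not OpenAI's internal harness):

    import requests

    def complete(prompt, max_tokens=16):
        # Hypothetical helper around the assumed beta completions endpoint.
        resp = requests.post(
            "https://api.openai.com/v1/engines/davinci/completions",
            headers={"Authorization": "Bearer sk-..."},  # placeholder key
            json={"prompt": prompt, "max_tokens": max_tokens, "stop": ["\n"]},
        )
        return resp.json()["choices"][0]["text"].strip()

    # A tiny eval loop with an instant feedback cycle.
    examples = [
        ("Q: What is 2 + 2?\nA:", "4"),
        ("Q: What is the capital of France?\nA:", "Paris"),
    ]
    correct = sum(complete(p) == want for p, want in examples)
    print(f"{correct}/{len(examples)} correct")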
Also — if you'd like to work on building the API with me and an amazing cross-functional team, we're hiring: openai.com/jobs/#applied-ai!
It does! We also have an academic access program if you'd like to try it out: forms.office.com/Pages/Respo…
We've been heads down building something new — an API to general-purpose AI technology. Definitely the most exciting project I've worked on.
We're releasing an API for accessing new AI models developed by OpenAI. You can "program" the API in natural language with just a few examples of your task. See how companies are using the API today, or join our waitlist: beta.openai.com/
The main program requirements are comfort with writing software and US work authorization / being located in the US. No ML experience is required! We'll post a call for applications in the coming weeks.
People ask how I can look so happy when I program. It's simple: I love solving hard problems.
Ilya is also one of my favorite people to explore ideas with :). Worth watching!
Here's my conversation with Ilya Sutskever (@ilyasut), co-founder of @OpenAI & one of the greatest AI researchers ever. Plus, he is one of my favorite people to explore ideas with, each time quickly delving to the core & first principles of the problem. youtube.com/watch?v=13CZPWmk…
AI algorithms are making rapid progress — there's been an exponential decrease in the amount of compute needed to train a neural network to a fixed level of performance on ImageNet. The slope is steeper than Moore's Law.
Since 2012, the amount of compute for training to AlexNet-level performance on ImageNet has been decreasing exponentially — halving every 16 months, in total a 44x improvement. By contrast, Moore's Law would only have yielded an 11x cost improvement: openai.com/blog/ai-and-effic…
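A quick back-of-the-envelope check of those figures (treating 2012 to 2019 as roughly 84 months):

    import math

    months = 84                               # 2012 to 2019, about 7 years
    # Moore's Law: transistor density doubling every ~24 months.
    print(round(2 ** (months / 24), 1))       # ~11.3, the quoted 11x
    # A measured 44x efficiency gain implies this halving time:
    print(round(months / math.log2(44), 1))   # ~15.4 months, i.e. ~16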
Replying to @maraoz
We do have pre-trained models: github.com/openai/jukebox/!
Replying to @sedielem
Thanks Sander!