Not going to dignify them with a link but the OpenAI beta demos are such absolute bullshit. What does this prompt (in green) even have to do with the output?
We definitely omitted the "display" code, but the typical workflow for Q&A is:
1. Prepare a generic Q&A context
2. Concatenate in a question
3. Call the API to generate text
4. Return the newly generated text
In the demo, the green is step 2; the output is the question plus the text generated in step 4.
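To make those four steps concrete, here is a minimal sketch using the legacy openai Python Completion API; the Q&A context, engine name, and sampling parameters are illustrative assumptions, not the demo's actual code.

```python
# Hypothetical sketch of the Q&A workflow described above (not the demo's code).
import openai  # legacy pre-1.0 client; assumes openai.api_key is already set

# 1. Prepare a generic Q&A context: a few example pairs establish the pattern.
QA_CONTEXT = (
    "Q: What is the capital of France?\n"
    "A: Paris.\n"
    "\n"
    "Q: Who wrote Hamlet?\n"
    "A: William Shakespeare.\n"
    "\n"
)

def answer(question):
    # 2. Concatenate the new question onto the context.
    prompt = QA_CONTEXT + "Q: " + question + "\nA:"
    # 3. Call the API to generate a continuation of the prompt.
    response = openai.Completion.create(
        engine="davinci",   # engine name is an assumption
        prompt=prompt,
        max_tokens=64,
        temperature=0.0,
        stop=["\n"],
    )
    # 4. Return only the newly generated text (the answer).
    return response["choices"][0]["text"].strip()

print(answer("When was the first fax sent?"))
```

Only the text from step 3 is newly generated; everything else the demo displays is the stuffed context plus the question itself.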
Here is a video of a full integration: nitter.vloup.ch/gdb/status/12712… Curious if anything still seems off to you!
Building a natural language shell using our API in about 3 minutes: vimeo.com/427943407/98fe5258…
Hey, thanks for the response. I'm not commenting on the format of the response; I see how multiple Qs followed by As could train the model to generate an A following a Q. I'm talking about the content. How do Qs not about faxes prepare the model to answer when they were first sent?
And how do examples of shell commands other than iptables prepare the model to generate an iptables command with all the arguments? That seems like extremely specific knowledge already baked into the model beforehand.
Yes, this technique (which we call "context stuffing") is about hinting to the model the task you want, what an acceptable output looks like, etc. — basically establishing the pattern to follow or process to apply, often involving extracting knowledge already in the model.
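The natural language shell demo works the same way: the stuffed context is just a handful of English-request-to-command pairs that establish the task, while knowledge of the commands themselves comes from pretraining. A sketch with made-up examples, using the same legacy client as above:

```python
# Hypothetical context-stuffing prompt for a natural language shell
# (illustrative examples; not the prompt used in the demo video).
import openai

SHELL_CONTEXT = (
    "Input: list all files in the current directory\n"
    "Command: ls -la\n"
    "\n"
    "Input: show how much disk space is free\n"
    "Command: df -h\n"
    "\n"
    "Input: find every Python file under my home directory\n"
    "Command: find ~ -name '*.py'\n"
    "\n"
)

def to_command(request):
    prompt = SHELL_CONTEXT + "Input: " + request + "\nCommand:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=64,
        temperature=0.0,
        stop=["\n"],
    )
    # The examples only establish the task and output format; the model's
    # knowledge of specific tools (iptables flags, find syntax, etc.) was
    # learned during pretraining, not supplied by the prompt.
    return response["choices"][0]["text"].strip()

print(to_command("block all incoming traffic on port 22"))
```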
Replying to @gdb @wear_here
The GPT-3 paper might be an interesting read to get a sense of this: arxiv.org/abs/2005.14165. Alternatively, if you sign up for the beta, once you start playing with the API you can get a feel for it pretty quickly.

Jun 12, 2020 · 5:23 AM UTC

Replying to @gdb
I signed up for the beta, thanks. Very impressive tech. I appreciate the conversation here too, will tweet a correction