Not going to dignify them with a link, but the OpenAI beta demos are such absolute bullshit. What does this prompt (in green) even have to do with the output?
We definitely omitted the "display" code, but the typical workflow for Q&A is:
1. Prepare a generic Q&A context
2. Concatenate in a question
3. Call the API to generate text
4. Return the newly generated text
In the demo, the green is step 2; the output is the question plus the text generated in step 4.
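The four steps above can be sketched in a few lines of Python. This is a minimal illustration, not the demo's actual code: the example Q&A context and the `complete` callable (a stand-in for the real API call) are hypothetical.

```python
# A generic few-shot Q&A context (step 1). These example pairs are
# illustrative, not taken from the demo.
QA_CONTEXT = """Q: What is the capital of France?
A: Paris.

Q: How many legs does a spider have?
A: Eight.

"""

def answer(question, complete):
    # Step 2: concatenate the new question onto the context.
    prompt = QA_CONTEXT + f"Q: {question}\nA:"
    # Step 3: call the API to generate text ('complete' is a stub here).
    full_text = complete(prompt)
    # Step 4: return only the newly generated text, not the echoed prompt.
    return full_text[len(prompt):].strip()

# Stand-in for the API, purely for illustration: it echoes the prompt
# and appends a placeholder completion.
fake_complete = lambda p: p + " [generated answer]"
result = answer("When was the first fax sent?", fake_complete)
# → "[generated answer]"
```

The green text in the demo corresponds to the `prompt` string; the displayed output is the question together with what step 4 returns.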
Here is a video of a full integration: nitter.vloup.ch/gdb/status/12712…
Curious if anything still seems off to you!
Building a natural language shell using our API in about 3 minutes: vimeo.com/427943407/98fe5258…
Hey, thanks for the response. I'm not commenting on the format; I see how multiple Qs followed by As could train the model to generate an A following a Q. I'm talking about the content. How do Qs not about faxes prepare the model to answer when faxes were first sent?
And how do examples of shell commands other than iptables prepare the model to generate an iptables command with all the arguments? That seems like extremely specific knowledge baked into the model during training.
Yes, this technique (which we call "context stuffing") is about hinting to the model the task you want, what an acceptable output looks like, etc. — basically establishing the pattern to follow or process to apply, often involving extracting knowledge already in the model.
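A context-stuffed prompt for the natural-language-shell case might look like the sketch below. The example English/command pairs are my own illustrations, not taken from the demo; the point is that the pairs establish the pattern, while the iptables knowledge itself would come from training.

```python
# "Context stuffing": a few input/output pairs establish the task and
# output format, then the new input is appended for the model to continue.
prompt = """Convert English to a shell command.

English: list all files, including hidden ones
Command: ls -a

English: show disk usage of the current directory
Command: du -sh .

English: block all incoming traffic on port 8080
Command:"""

# The model's continuation would draw on iptables knowledge acquired
# during training, e.g. something like:
#   iptables -A INPUT -p tcp --dport 8080 -j DROP
# None of that syntax appears in the prompt itself.
```

The examples hint at the task and the expected shape of an answer; the specifics of any given command are extracted from knowledge already in the model.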
Yep, that's right, for this specific example.
For a bit of a different case, see the "Parse Unstructured Data" demo for an example where the model needs to extract some content from an input paragraph that it's certainly never seen before: beta.openai.com/?demo=2
Jun 12, 2020 · 5:29 AM UTC
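The extraction pattern can be sketched the same way. This is a hypothetical reconstruction of the style of prompt, not the actual demo content: one worked example shows the extraction format, and a brand-new paragraph is appended for the model to process.

```python
# One worked example establishes the text-to-table extraction format.
# (Names and fields here are invented for illustration.)
example = (
    "Text: Alice, 34, lives in Lyon and works as a chemist.\n"
    "Name | Age | City | Job\n"
    "Alice | 34 | Lyon | chemist\n"
    "\n"
)

# A new paragraph the model has never seen; it is expected to continue
# the table in the same format.
new_input = "Text: Bob, 29, lives in Oslo and works as a welder.\n"

prompt = example + new_input + "Name | Age | City | Job\n"
```

Here the prompt establishes a process to apply rather than facts to recall: the answer must come from the new paragraph, not from training data.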