Not going to dignify them with a link, but the OpenAI beta demos are such absolute bullshit. What does this prompt (in green) even have to do with the output?
We definitely omitted the "display" code, but the typical workflow for Q&A is:
1. Prepare a generic Q&A context
2. Concatenate in a question
3. Call the API to generate text
4. Return the newly generated text
In the demo, the green is step 2; the output is the question plus the newly generated text from step 4.
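The four steps above can be sketched in a few lines of Python. This is a minimal illustration, not the actual demo code: `generate` is a hypothetical stand-in for the API call, and the helper names are made up for this sketch.

```python
def build_prompt(context_pairs, question):
    """Steps 1 and 2: prepare a generic Q&A context, then concatenate the new question."""
    lines = []
    for q, a in context_pairs:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")  # the model is expected to continue from here
    return "\n".join(lines)

def answer(context_pairs, question, generate):
    """Steps 3 and 4: call the API (here a stub) and return only the newly generated text."""
    prompt = build_prompt(context_pairs, question)
    completion = generate(prompt)  # step 3: the API returns a continuation of the prompt
    return completion.strip()      # step 4: hand back just the new text, not the prompt

# Hypothetical stub standing in for the real API call.
fake_generate = lambda prompt: " 1964"

context = [("What is human life expectancy in the United States?", "78 years.")]
print(answer(context, "When was the first fax sent?", fake_generate))
```

The green text in the demo corresponds to the `question` being appended in `build_prompt`; everything before it is the reusable context.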
Here is a video of a full integration: nitter.vloup.ch/gdb/status/12712…
Curious if anything still seems off to you!
Building a natural language shell using our API in about 3 minutes: vimeo.com/427943407/98fe5258…
Hey, thanks for the response. I'm not commenting on the format of the response—I see how multiple Qs followed by As could train the model to generate an A following a Q. I'm talking about the content. How do Qs not about faxes prepare the model to answer when they were first sent?
Yes, this technique (which we call "context stuffing") is about hinting to the model the task you want, what an acceptable output looks like, etc. — basically establishing the pattern to follow or process to apply, often involving extracting knowledge already in the model.
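The point is that the context questions needn't share a topic with the final one; they only demonstrate the Q-then-A pattern for the model to continue. A sketch of what such a context-stuffed prompt might look like (the example Q&A pairs are illustrative; the factual answer would come from the model's training data, not from the context):

```python
# These context questions have nothing to do with faxes; they exist only to
# establish the pattern: a question, then a complete-sentence answer.
context = """\
Q: What is human life expectancy in the United States?
A: Human life expectancy in the United States is 78 years.

Q: Who was president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.
"""

# The real question is concatenated at the end; the trailing "A:" cues the
# model to emit an answer in the same format as the examples above.
question = "Q: When was the first fax sent?\nA:"
prompt = context + "\n" + question
print(prompt)
```

The model then extends the prompt the way the pattern suggests, pulling the actual fact about faxes from knowledge already baked into its weights.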
Jun 12, 2020 · 5:21 AM UTC