Using my AI writing project based on @OpenAI's API, I asked this question and got a very sincere reply:
Dear Hulk,
Why Hulk smash?
Best,
Banner
---
Dear Bruce,
Hulk likes to smash. Why? Hulk not know why.
Please help.
Your friend,
Hulk
We have a few beta customers working on related applications today (and you can too if you sign up for the beta!): beta.openai.com/
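For anyone wondering what's going on under the hood: a project like the one above is essentially sending Banner's letter to the API as a prompt and letting the model write what comes next. Here's a rough sketch of what that call could look like, assuming the beta's Python bindings; the engine name, key handling, and sampling settings are illustrative guesses, not what the project actually uses:

```python
import openai

# Sketch only: assumes the beta-era Python `openai` package.
# The engine name, sampling settings, and key are illustrative.
openai.api_key = "YOUR_API_KEY"

# The prompt is just the letter; the model writes the reply.
prompt = (
    "Dear Hulk,\n"
    "Why Hulk smash?\n"
    "Best,\n"
    "Banner\n"
    "---\n"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0.7,
    stop=["---"],  # stop before the model starts another letter
)

print(response["choices"][0]["text"])
```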
The API today is already *very* good at this, and we're quickly making it a lot better.
We have a *very* long waitlist and are working down it as quickly as we can.
That being said, we can prioritize folks who seem particularly excited. Check your inbox :).
It's long :). But we're working through it as fast as we can.
That being said, we're prioritizing users who are particularly excited about working with the API — check your inbox :).
We're releasing an API for accessing new AI models developed by OpenAI. You can "program" the API in natural language with just a few examples of your task. See how companies are using the API today, or join our waitlist: beta.openai.com/
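To make "program the API in natural language" concrete, here's a small sketch of the few-shot pattern: show the task with a couple of examples in the prompt, then let the model complete the next one. (Again assuming the beta's Python bindings; the engine name, example text, and settings are illustrative.)

```python
import openai

# "Programming" by demonstration: the prompt shows the task
# (English -> French) with a few examples, then asks for one more.
# Engine name and parameters are illustrative assumptions.
openai.api_key = "YOUR_API_KEY"

prompt = (
    "English: Hello, how are you?\n"
    "French: Bonjour, comment allez-vous ?\n"
    "English: I would like a coffee, please.\n"
    "French: Je voudrais un café, s'il vous plaît.\n"
    "English: Where is the train station?\n"
    "French:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=32,
    temperature=0.0,  # low temperature for a stable answer
    stop=["\n"],      # stop at the end of the French line
)

print(response["choices"][0]["text"].strip())
```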
We found that just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. openai.com/blog/image-gpt/
Thanks for the kind words :). There are certainly tasks where the API fails or makes mistakes, but users have found it reliable enough for production (and you should expect it to just get better).
You can sign up for our beta and try it out for yourself! forms.office.com/Pages/Respo…
Yep, that's right, for this specific example.
For a bit of a different case, see the "Parse Unstructured Data" demo, where the model has to extract content from an input paragraph it's certainly never seen before: beta.openai.com/?demo=2
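Here's roughly what that pattern looks like as a prompt: one worked example establishes the output format, then the model fills in the fields for a paragraph it hasn't seen. (Sketch only; the text, engine name, and settings below are made up for illustration, not copied from the demo.)

```python
import openai

# Parse-unstructured-data pattern: one worked example shows the
# format, then the model extracts the same fields from new text.
# All specifics here are illustrative assumptions.
openai.api_key = "YOUR_API_KEY"

prompt = (
    "Text: Acme Corp was founded in 1998 in Portland and now employs 250 people.\n"
    "Company: Acme Corp | Founded: 1998 | City: Portland | Employees: 250\n"
    "\n"
    "Text: Globex, started in Berlin in 2011, has grown to a staff of 40.\n"
    "Company:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=48,
    temperature=0.0,
    stop=["\n\n"],  # stop after the extracted line
)

print("Company:" + response["choices"][0]["text"])
```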
The GPT-3 paper might be an interesting read to get a sense of this: arxiv.org/abs/2005.14165.
Alternatively, if you sign up for the beta, you'll get a feel for it pretty quickly once you start playing with the API.