Our dev platform team is planning their work for the next month, and wants to make sure we're building what people want. So if you're an OpenAI API user — what should we add/fix/improve to make your experience better?

Mar 8, 2023 · 5:10 PM UTC

Replying to @gdb
Remove the 'I am an AI' thing, and make the system message that the devs define actually do something, even for longer prompts.
Replying to @gdb
Love your team’s work! Real-time streaming Whisper transcriptions using web sockets, please. For bonus brownie points, auto language detection on that streaming interface. 🙏🙏🙏
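No streaming transcription endpoint exists today; the sketch below only illustrates what such a client loop could look like, with the WebSocket URL, auth omission, and message format all invented for the example.

```python
import asyncio
import json
import websockets  # pip install websockets

# Entirely hypothetical endpoint and protocol: stream raw audio chunks up,
# receive incremental transcript segments (plus a detected language) back.
WS_URL = "wss://api.openai.com/v1/audio/transcriptions/stream"  # hypothetical

async def stream_transcription(audio_chunks):
    async with websockets.connect(WS_URL) as ws:
        async def sender():
            for chunk in audio_chunks:                    # bytes from a mic or file
                await ws.send(chunk)
            await ws.send(json.dumps({"event": "end"}))   # hypothetical end-of-stream marker

        send_task = asyncio.create_task(sender())
        async for message in ws:                          # server streams partial results
            event = json.loads(message)
            print(event.get("language"), event.get("text"))
        await send_task

# asyncio.run(stream_transcription(my_audio_chunks))
```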
Replying to @gdb
Would it be possible to add an additional "role" to the Chat API so a product could add additional context without the end-user seeing it? Similar to the "system" role, but throughout the conversation
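A rough sketch of what that could look like against the Chat Completions API as it exists now; the extra "context" role is purely hypothetical and would be rejected by the API today, where the closest workaround is interleaving additional system messages mid-conversation.

```python
import openai

openai.api_key = "sk-..."  # your API key

messages = [
    {"role": "system", "content": "You are a support bot for Acme Widgets."},
    {"role": "user", "content": "Where is my order?"},
    # Hypothetical role: injected by the product, never shown to the end user.
    {"role": "context", "content": "Order #1234 shipped yesterday via UPS."},
    {"role": "user", "content": "Can you check again?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response["choices"][0]["message"]["content"])
```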
Replying to @gdb
1) Improve AI steerability. Give me a knob, like temperature, to control the RLHF factors. 2) Self-serve dedicated instances. This is primarily a function of speed/reliability. 3) Probably hard, but hosting on GCP/AWS as well. 4) More types of models that can do more! 😀
Replying to @gdb
Fine-tuning ChatGPT
Replying to @gdb
10x cheaper. The cost is insane, out of this world at scale basically; it needs to be $0.002 per 1K tokens, not $0.02.
Replying to @gdb
Davinci 2 and 3 at 10x lower cost. And if I could only pick one, Davinci 2.
Replying to @gdb
You are doing a phenomenal job. You gotta let us access ChatGPT more than an hour at a time though if we pay premium!
Replying to @gdb
An open-source endpoint with PII detection and redaction capabilities that can swap sensitive info for proxy tokens. The tokens can still appear in generated responses and then be replaced with the correct values internally. Protecting data privacy is critical for adoption.
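Nothing like this ships with the API; a minimal sketch of the proxy-token idea, using a toy email regex as the only PII detector, might look like this.

```python
import re
import openai

openai.api_key = "sk-..."  # your API key

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text, vault):
    """Replace each email with a proxy token and remember the mapping."""
    def _swap(match):
        token = f"<PII_{len(vault)}>"
        vault[token] = match.group(0)
        return token
    return EMAIL_RE.sub(_swap, text)

def restore(text, vault):
    """Put the original values back into the model's response."""
    for token, value in vault.items():
        text = text.replace(token, value)
    return text

vault = {}
prompt = redact("Write a reply to jane.doe@example.com about her refund.", vault)
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(restore(resp["choices"][0]["message"]["content"], vault))
```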
Replying to @gdb
Being able to send an id from the previous response to keep context. This would allow developers to break down the content they submit to the API and send it in batches, allowing for more than 4k tokens.
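The Chat API has no such id today (the full message history is resent on every call), so the previous_response_id parameter in this sketch is purely hypothetical.

```python
import openai

openai.api_key = "sk-..."  # your API key

# Hypothetical: instead of resending the whole history (and paying for it
# against the ~4k-token context window), pass an id that points at the
# server-side state of the previous exchange.
first = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Here is part 1 of my document: ..."}],
)

second = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    previous_response_id=first["id"],  # hypothetical parameter, not in the real API
    messages=[{"role": "user", "content": "Here is part 2: ... Now summarize both parts."}],
)
print(second["choices"][0]["message"]["content"])
```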