The underlying spirit in many debates about the pace of AI progress, that we need to take safety very seriously and proceed with caution, is key to our mission. We spent more than six months testing GPT-4 and making it even safer, and built it on years of alignment research that we pursued in anticipation of models like GPT-4.

We expect to continue to ramp up our safety precautions more proactively than many of our users would like. Our general goal is for each model we ship to be our most aligned one yet, and that has held so far: from GPT-3 (initially deployed without any special alignment), to GPT-3.5 (aligned enough to be deployed in ChatGPT), to GPT-4 (which performs much better on all of our safety metrics than GPT-3.5).

We believe (and have been saying in policy discussions with governments) that powerful training runs should be reported to governments, be accompanied by increasingly sophisticated predictions of their capability and impact, and require best practices such as dangerous-capability testing. We think governance of large-scale compute usage, safety standards, and regulation of (and lesson-sharing from) deployment are good ideas, but the details really matter and should adapt over time as the technology evolves. It's also important to address the whole spectrum of risks, from present-day issues (e.g. preventing misuse or self-harm, mitigating bias) to longer-term existential ones.

Perhaps the most common theme in the long history of AI has been confident predictions from experts that turned out to be wrong. One way to avoid unspotted prediction errors is for the technology in its current state to have early and frequent contact with reality as it is iteratively developed, tested, deployed, and all the while improved. And there are creative ideas people don't often discuss which can improve the safety landscape in surprising ways. For example, it's easy to create a continuum of incrementally better AIs (such as by deploying subsequent checkpoints of a given training run), which presents a safety opportunity very unlike our historical approach of infrequent major model upgrades (see the sketch below).

The upcoming transformative technological change of AI is simultaneously cause for optimism and concern; the whole range of emotions is justified and is shared by people within OpenAI, too. It's a special opportunity and obligation for us all to be alive at this time, to have a chance to design the future together.
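To make the checkpoint-continuum idea concrete, here is a minimal, purely illustrative sketch of what a gated incremental-deployment loop could look like. Everything in it (the run_safety_evals stub, the SAFETY_THRESHOLD bar, the checkpoint spacing) is an assumption made for illustration, not a description of OpenAI's actual pipeline.

```python
# Hypothetical sketch of deploying subsequent checkpoints of a single
# training run: each checkpoint is safety-evaluated and promoted only if
# it clears a threshold, producing many small upgrades instead of one
# infrequent major release. All names and numbers here are illustrative.

from dataclasses import dataclass

@dataclass
class Checkpoint:
    step: int            # training step at which the checkpoint was saved
    safety_score: float  # aggregate score from (hypothetical) safety evals

SAFETY_THRESHOLD = 0.9   # illustrative promotion bar

def run_safety_evals(step: int) -> float:
    """Stand-in for dangerous-capability and alignment testing.
    Here we fake a score that improves slowly with training."""
    return min(1.0, 0.80 + step * 1e-5)

def incremental_deploy(checkpoint_steps: list) -> list:
    """Promote each checkpoint that passes evals, yielding a continuum of
    small capability deltas rather than one large jump."""
    deployed = []
    for step in checkpoint_steps:
        score = run_safety_evals(step)
        ckpt = Checkpoint(step=step, safety_score=score)
        if score >= SAFETY_THRESHOLD:
            deployed.append(ckpt)  # small, observable change from last deploy
        else:
            print(f"step {step}: held back (score {score:.3f})")
    return deployed

if __name__ == "__main__":
    # Checkpoints saved every 2,000 steps during a single training run.
    steps = list(range(2_000, 22_000, 2_000))
    for ckpt in incremental_deploy(steps):
        print(f"deployed checkpoint at step {ckpt.step} "
              f"(safety score {ckpt.safety_score:.3f})")
```

The design point is simply that each promotion carries a small, observable capability delta, so problems can surface while the gap from the previously deployed model is still small.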

Apr 12, 2023 · 4:08 PM UTC

Replying to @gdb
> which presents a safety opportunity

I don't quite understand what you mean by this. Is the pause for follow-up training the opportunity?
Replying to @gdb @sama
Errinwright: “You give a monkey a stick, inevitably he’ll beat another monkey to death with it.”
Replying to @gdb @sama
I'm barely even a novice user, and after playing with autoGPT I can attest that you guys have done a fantastic and wonderful job. You can definitely tell how much love has been put in :)
Replying to @gdb
"We believe (and have been saying in policy discussions with governments) that powerful training runs should be reported to governments" Thanks for the reminder to support open-source efforts like OpenAssistant and LAION, and stock up on GPUs!
Replying to @gdb
Experience, the greatest teacher: it cleaves through the uncertainty and catastrophizing. Though some lessons we do not want to learn from experience.
Replying to @gdb
👏👏
Replying to @gdb
> "One way to avoid unspotted prediction errors is for the technology in its current state to have early and frequent contact with reality..." This is the main schism in the community right now. I'm of the position that feedback from reality is the alignment. But i understand the possible costs give well-founded skepticism.
Replying to @gdb @sama
Should people who do powerful training runs without reporting them to the government, or alternatively without the government's permission, be thrown in prison? A yes or no answer, please.
Replying to @gdb
How do we know this text wasn't written by ChatGPT and that you aren't its hostage? 😀