This is a surprisingly unintelligent move from OpenAI. It adds corporate inertia to something as mundane as the choice of deep learning framework.
Imagine you work at OpenAI and want to experiment with JAX. Imagine it's the best solution for the problem. Now you can't ship.
We're standardizing OpenAI's deep learning framework on PyTorch to increase our research productivity at scale on GPUs (and have just released a PyTorch version of Spinning Up in Deep RL): openai.com/blog/openai-pytor…
That'd be totally fine! Per our post: "Going forward we’ll primarily use PyTorch as our deep learning framework but sometimes use other ones when there’s a specific technical reason to do so".
Jan 31, 2020 · 5:16 AM UTC