Deep learning is capable of performing reasoning tasks, in the sense that it is a pattern-recognition engine, and any task (including ones we would categorize as involving reasoning) can be cast as pattern recognition, given a sufficiently dense sampling of the input/output space.
.@ilyasut and I, together with awesome OpenAI coworkers, are starting a new team: the Reasoning team. Many people think deep networks are inherently unable to reason — we'll find out! We're looking for a great people manager to join us. Interested? Email me: gdb@openai.com.
Replying to @fchollet
"In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them." — blog.keras.io/the-limitation…

Apr 11, 2019 · 5:38 PM UTC

Replying to @gdb
"no matter how much data" is a slight exaggeration, since there is in fact a level of data density at which any task becomes possible. But the blog post is otherwise accurate, because that level of sampling density is rarely practical.
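The sampling-density point above can be made concrete with a toy sketch (my own illustration, not from the thread): with a sufficiently dense sampling of the input space, even a pure lookup strategy — 1-nearest-neighbour, a stand-in for a "pattern recognition engine" — solves a task we might describe as involving reasoning, here deciding whether a + b > c.

```python
import itertools
import random

def label(a, b, c):
    # The "reasoning" task: is a + b greater than c?
    return a + b > c

# Densely sample the input space on a regular grid (step 0.05).
grid = [x / 20 for x in range(21)]  # 0.0, 0.05, ..., 1.0
train = [((a, b, c), label(a, b, c))
         for a, b, c in itertools.product(grid, repeat=3)]

def predict(query):
    # Pure pattern matching: answer with the label of the closest
    # training point, no rule about addition anywhere.
    nearest = min(train,
                  key=lambda t: sum((q - x) ** 2
                                    for q, x in zip(query, t[0])))
    return nearest[1]

# Evaluate on random unseen points.
random.seed(0)
tests = [(random.random(), random.random(), random.random())
         for _ in range(200)]
accuracy = sum(predict(q) == label(*q) for q in tests) / len(tests)
print(f"accuracy on unseen inputs: {accuracy:.2f}")
```

The lookup table generalizes well here only because the grid is dense relative to the decision boundary; the impracticality fchollet points to is that the number of grid points grows exponentially with input dimension.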
I'll make sure to be more explicit about it next time -- since now I expect I'll be taken too literally :)
Replying to @gdb @fchollet
In domains such as medicine, it is impossible to sample all possible states to develop, say, a reinforcement learning model. Patient care is not an Atari game for which you can build a simulator and replay it again and again, exploring different (possibly risky) states.
"heavier-than-air flying machines are impossible" - Lord Kelvin, 1895
Replying to @gdb @fchollet
Are opinions never allowed to change?