Neural nets are not black boxes. With study, we can interpret the circuits they learn. For example, the image below shows a car detector composed of wheel, window, and car body detectors.
We show how to read a human-interpretable algorithm from a neural network's weights, by studying the circuits formed by the connections between individual neurons: distill.pub/2020/circuits/zo…
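A minimal sketch of the idea in the tweet: a "circuit" is read off the weights connecting individual units in adjacent layers. The layer shape, channel indices, and detector names below are hypothetical placeholders, not values from the linked article; the real analysis studies trained InceptionV1 features rather than the randomly initialized layer used here as a stand-in.

```python
import torch
import torch.nn as nn

def circuit_weights(conv: nn.Conv2d, in_channel: int, out_channel: int) -> torch.Tensor:
    """Return the kH x kW spatial kernel connecting one lower-level unit
    (in_channel) to one higher-level unit (out_channel)."""
    # conv.weight has shape (out_channels, in_channels, kH, kW)
    return conv.weight[out_channel, in_channel].detach()

# Stand-in for a trained layer; in practice you would load pretrained weights
# (e.g. torchvision.models.googlenet) and pick the channels identified as
# wheel / window / car-body detectors.
conv = nn.Conv2d(in_channels=480, out_channels=512, kernel_size=5, padding=2)

WHEEL, WINDOW, CAR = 10, 20, 30   # hypothetical channel indices

for name, ch in [("wheel", WHEEL), ("window", WINDOW)]:
    k = circuit_weights(conv, ch, CAR)
    # For a trained car detector one would expect the wheel kernel to be
    # positive toward the bottom and the window kernel positive toward the
    # top; printing the kernels makes that spatial structure visible.
    print(f"{name} -> car kernel:\n{k}\n")
```

With trained weights, inspecting these per-pair kernels is how a human-readable story ("wheels at the bottom, windows at the top") can be recovered directly from the parameters.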

Mar 10, 2020 · 3:52 PM UTC

Replying to @gdb
What about adversarial examples? Add a bit of random noise to the image and the neural net now sees it as a monkey.
Replying to @gdb
Very powerful sentence, because I believe it applies not only to neural networks but to any complex system.
Replying to @gdb
The first comment to shoot down my deep-learning-in-hydro proposal, back in 2016, was that neural nets are black boxes and thus we cannot learn anything from them, even though half of the proposal explained how we could get insights.