Neural nets are not black boxes. With study, we can interpret the circuits they learn.
For example, the image below shows a car detector composed of wheel, window, and car-body detectors.
We show how to read a human-interpretable algorithm from a neural network's weights by studying the circuits formed by the connections between individual neurons: distill.pub/2020/circuits/zo…
Mar 10, 2020 · 3:52 PM UTC
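
To make the idea concrete, here is a minimal sketch (not the Distill authors' code) of what "reading a circuit from the weights" can look like for one convolutional layer: rank which earlier-layer channels most strongly excite a chosen later-layer unit. The weight tensor and channel indices below are hypothetical placeholders, not values from the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conv weights with shape [out_channels, in_channels, kh, kw].
W = rng.normal(size=(16, 8, 5, 5))

car_unit = 3  # hypothetical index of a "car detector" output channel

# Strength of each input channel's connection to the car unit, measured
# as the summed positive (excitatory) weight over the spatial kernel.
excitation = W[car_unit].clip(min=0).sum(axis=(1, 2))

# The top-k most excitatory input channels are candidate circuit
# components (e.g. wheel, window, and car-body detectors above).
top_inputs = np.argsort(excitation)[::-1][:3]
print("strongest inputs to unit", car_unit, ":", top_inputs)
```

In the real analysis the spatial pattern of the kernel matters too (e.g. a wheel detector exciting the bottom of a car detector); this sketch only ranks overall connection strength under those assumptions.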