ML bugs are so much trickier than bugs in traditional software because rather than getting an error, you get degraded performance (and it's not obvious a priori what ideal performance is). So ML debugging works by continual sanity checking, e.g. comparing to various baselines.

May 14, 2022 · 4:23 PM UTC

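To make the sanity-checking idea concrete: a buggy model typically still produces predictions without raising errors, so one routine check is to compare it against a trivial baseline such as majority-class prediction. A minimal sketch (all names and data here are illustrative, not from the thread):

```python
# Baseline sanity checking: a model that can't beat a trivial
# majority-class baseline is almost certainly buggy, even though
# it runs without errors.

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def majority_baseline(train_labels):
    """Return a 'model' that predicts the most common training label."""
    majority = max(set(train_labels), key=train_labels.count)
    return lambda xs: [majority] * len(xs)

def sanity_check(model_preds, test_labels, train_labels):
    """Compare model accuracy against the majority-class baseline."""
    baseline = majority_baseline(train_labels)
    base_acc = accuracy(baseline(test_labels), test_labels)
    model_acc = accuracy(model_preds, test_labels)
    if model_acc <= base_acc:
        print(f"suspicious: model {model_acc:.2f} <= baseline {base_acc:.2f}")
    return model_acc, base_acc

# A degraded model (here: predictions anti-correlated with the labels,
# a classic sign of a label-handling bug) still returns numbers rather
# than an error -- only the baseline comparison flags it.
train = [0, 0, 0, 1]
test_labels = [0, 0, 1, 1]
buggy_preds = [1, 1, 0, 0]
sanity_check(buggy_preds, test_labels, train)
```

The same pattern extends to other baselines (random predictions, a previous model version, a simple heuristic): each one is a sanity bound the new model should clear.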
Replying to @gdb
Put ML and data science tools directly in the hands of domain experts. Domain experts avoid pitfalls and ethical issues, and are likely to sniff out nonsensical inferences. No one from the IT department is checking the experts' work.
Replying to @gdb
It’s sometimes difficult, too, because a model’s output may not be what you expected, but for more conceptually ambiguous things, it may not necessarily be clearly wrong, either.
Replying to @gdb
Use explainable AI.
Replying to @gdb
Sanity checking is for the insane-adjacent.