Overfitting is the mind killer.

May 29, 2022 · 3:56 PM UTC

Replying to @gdb
As dangerous as underfitting is.
Replying to @gdb
Overfitting and data sufficiency
Replying to @gdb
Ironically enough, it is neural nets themselves that provide us an *echo chamber of information* to which we end up becoming overfit. In other words, YouTube, Facebook, and Twitter optimize for the overfitting of humans.
Replying to @gdb
Then obtain a balance with SVM kernels.
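
For concreteness, here is a minimal sketch of the kind of balance that reply is gesturing at, using scikit-learn's SVC on synthetic two-class data; the kernel and parameter choices are illustrative assumptions, not anything specified in the thread:

```python
# Sketch: how an SVM's kernel and regularization settings trade off
# under- and overfitting on a non-linear toy problem (illustrative only).
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.3, random_state=0)

candidates = {
    "underfit (linear kernel)":    SVC(kernel="linear", C=1.0),
    "overfit (rbf, weak penalty)": SVC(kernel="rbf", C=1000.0, gamma=10.0),
    "balanced (rbf, defaults)":    SVC(kernel="rbf", C=1.0, gamma="scale"),
}

for name, model in candidates.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:30s} cv accuracy = {acc:.3f}")
```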
Replying to @gdb
there needs to be an essay like “worse is better” but for training models. there should always be a getting started memo for every domain that’s important.
Replying to @gdb
I think we've learnt in the past months that most LLMs are actually suffering from a specific sort of underfitting: datasets too small & models too large. Train smaller models for many more epochs on larger datasets.
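
For scale, a hedged sketch of the arithmetic behind that claim, using the roughly 20-training-tokens-per-parameter heuristic associated with the Chinchilla scaling-law results (Hoffmann et al., 2022); the constant and the reported token counts below are approximations:

```python
# Rough back-of-the-envelope: how much data a model "wants" under the
# ~20-tokens-per-parameter heuristic (an approximation, not an exact law).
TOKENS_PER_PARAM = 20  # assumed rule of thumb

def compute_optimal_tokens(n_params: float) -> float:
    """Approximate training tokens suggested for a model with n_params."""
    return TOKENS_PER_PARAM * n_params

reported = [
    ("GPT-3 (175B params)",     175e9, 300e9),   # ~300B training tokens, as reported
    ("Chinchilla (70B params)", 70e9,  1.4e12),  # ~1.4T training tokens, as reported
]

for name, n_params, trained_tokens in reported:
    suggested = compute_optimal_tokens(n_params)
    print(f"{name}: trained on ~{trained_tokens/1e9:.0f}B tokens, "
          f"heuristic suggests ~{suggested/1e9:.0f}B")
```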
Replying to @gdb
We all overfit as we grow older…
Replying to @gdb
Haha, true that! In a real-life context, would you instead get more data, reduce the model size, or apply regularization?
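
As a hedged illustration of those three options, here is a minimal scikit-learn sketch; the dataset, network sizes, and regularization strengths are all assumptions made for the example:

```python
# Sketch: three common responses to overfitting, side by side
# (more data, a smaller model, stronger L2 regularization).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def cv_acc(model, X, y):
    return cross_val_score(model, X, y, cv=5).mean()

# Small vs. larger synthetic dataset from the same generating process.
X_small, y_small = make_classification(n_samples=200, n_features=20,
                                       n_informative=5, random_state=0)
X_big, y_big = make_classification(n_samples=5000, n_features=20,
                                   n_informative=5, random_state=0)

big_net   = MLPClassifier(hidden_layer_sizes=(256, 256), alpha=1e-6,
                          max_iter=2000, random_state=0)
small_net = MLPClassifier(hidden_layer_sizes=(16,), alpha=1e-6,
                          max_iter=2000, random_state=0)
reg_net   = MLPClassifier(hidden_layer_sizes=(256, 256), alpha=1e-2,
                          max_iter=2000, random_state=0)

print("baseline (big net, little data):", cv_acc(big_net, X_small, y_small))
print("1. get more data:               ", cv_acc(big_net, X_big, y_big))
print("2. reduce model size:           ", cv_acc(small_net, X_small, y_small))
print("3. add regularization (L2 up):  ", cv_acc(reg_net, X_small, y_small))
```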
Replying to @gdb
The point may be that you are always either overfitting or underfitting. There is no perfect fit, only plausibly functional and serviceable fits. "All models are wrong, but some are useful" (a quip from George Box).