NYU Deep Learning Week 14 – Practicum: Overfitting and regularization, and Bayesian neural nets
This video was published under the Creative Commons Attribution license (reuse allowed) and is reposted for educational purposes, to encourage involvement in research.
Source: https://youtu.be/DL7iew823c0
Original channel (Alfredo Canziani): https://www.youtube.com/channel/UCupQLyNchb9-2Z5lmUOIijw
When training highly parametrized models such as deep neural networks, there is a risk of overfitting the training data, which increases generalization error. To reduce overfitting, we can introduce regularization into training, discouraging certain solutions so that the model fits less to noise.

0:01:41 – Overfitting and regularization
0:18:11 – Model regularization (L2, L1, dropout, batch norm, and data augmentation)
0:49:30 – Visualizing regularization and overfitting, Bayesian neural networks
...

https://www.youtube.com/watch?v=77KkT59DKu8
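As a rough illustration of the techniques named in the chapter list, here is a minimal PyTorch sketch (PyTorch being the course's framework) of one training step that combines L2 weight decay, an explicit L1 penalty, dropout, and batch norm. The layer sizes, coefficients, and random data are illustrative assumptions, not values from the lecture.

import torch
import torch.nn as nn

# A small model using dropout and batch norm as architectural regularizers.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.BatchNorm1d(256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes activations during training
    nn.Linear(256, 10),
)

# L2 regularization via the weight_decay term built into the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)

criterion = nn.CrossEntropyLoss()
l1_lambda = 1e-5  # illustrative L1 coefficient, not from the lecture

# Dummy batch: 32 samples of 784 features, 10 classes.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))

model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
# L1 regularization added explicitly to the loss.
loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
loss.backward()
optimizer.step()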