Ferenc Huszár 4/23/2021

On Information Theoretic Bounds for SGD


This technical blog post discusses a theoretical approach to understanding the generalization of Stochastic Gradient Descent (SGD) using information theory. It explains a thought experiment linking the mutual information between model parameters and the training dataset to generalization performance, and outlines how KL divergences are used to derive formal bounds for SGD.
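The mutual-information bound described above is presumably of the Xu–Raginsky type (an assumption; the post may discuss a different variant). In that family of results, if the loss is σ-subgaussian and the algorithm maps a dataset S of n i.i.d. samples to parameters W, the expected generalization gap is controlled by the mutual information I(S; W):

```latex
\left| \mathbb{E}\left[ L_{\mu}(W) - L_{S}(W) \right] \right|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
```

where L_μ(W) is the population risk and L_S(W) the empirical risk. Intuitively, the less the learned parameters reveal about the particular training set, the smaller the gap between training and test performance.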
