Meta-Learning Millions of Hyper-parameters using the Implicit Function Theorem
This technical article analyzes a 2019 research paper on meta-learning that uses the Implicit Function Theorem and implicit differentiation to optimize vast numbers of hyperparameters. It explains the nested optimization problem, compares the proposed Neumann-series approximation of the inverse Hessian to methods like iMAML, and discusses experiments demonstrating the approach's versatility, such as treating the training dataset as a hyperparameter.
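At its core, the approach differentiates the validation loss through the (approximately) converged training weights: the Implicit Function Theorem expresses the hypergradient as a product involving the inverse Hessian of the training loss, which is then approximated with a truncated Neumann series so that only Hessian-vector products are needed. The sketch below illustrates that recipe in JAX; it is not the paper's code, and the toy train_loss/val_loss, step size alpha, and number of Neumann terms are illustrative assumptions.

# Minimal sketch (not the paper's implementation) of an IFT hypergradient
# with a truncated Neumann-series inverse-Hessian approximation.
import jax
import jax.numpy as jnp

def train_loss(w, lam):
    # Toy inner objective: quadratic fit with one log-regularization
    # strength per weight (lam), i.e. "millions of hyperparameters" in spirit.
    return jnp.sum((w - 1.0) ** 2) + jnp.sum(jnp.exp(lam) * w ** 2)

def val_loss(w, lam):
    # Toy outer (validation) objective; it depends on lam only through w*.
    return jnp.sum((w - 0.5) ** 2)

def neumann_inverse_hvp(w, lam, v, steps=20, alpha=0.1):
    # Approximate H^{-1} v by alpha * sum_{j=0}^{K} (I - alpha*H)^j v,
    # where H is the Hessian of train_loss w.r.t. the weights.
    # Only Hessian-vector products are ever formed.
    p, acc = v, v
    for _ in range(steps):
        hvp = jax.grad(lambda w_: jnp.vdot(jax.grad(train_loss)(w_, lam), p))(w)
        p = p - alpha * hvp
        acc = acc + p
    return alpha * acc

def hypergradient(w_star, lam):
    # IFT: dL_val/dlam = direct term - (dL_val/dw) H^{-1} (d^2 L_train / dw dlam).
    dval_dw = jax.grad(val_loss, argnums=0)(w_star, lam)
    v = neumann_inverse_hvp(w_star, lam, dval_dw)
    mixed = jax.grad(lambda l: jnp.vdot(jax.grad(train_loss)(w_star, l), v))(lam)
    direct = jax.grad(val_loss, argnums=1)(w_star, lam)
    return direct - mixed

# Usage: w_star would come from (approximately) solving the inner problem first.
w_star = jnp.zeros(5)
lam = jnp.zeros(5)
print(hypergradient(w_star, lam))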