Convergence in a Neural Network
- August 18, 2018
- Posted by: vsinghal
- Category: Deep Learning, Machine Learning
Convergence refers to the point in training where a network's weights have settled and the cost function has been minimized.
Training, however, is often plagued by local minima and slow learning. This is where mini-batch gradient descent comes in handy: it combines the advantages of batch gradient descent with those of stochastic gradient descent.
The process starts with random weights and runs training loops until the cost reaches a minimum, ideally the global minimum.
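The loop described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the post's own code: the linear model, MSE cost, learning rate, and batch size are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, epochs=100, batch_size=32, seed=0):
    """Mini-batch gradient descent on a linear model with MSE cost.

    Illustrative sketch: model and hyperparameters are assumptions,
    not taken from the original post.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(size=d)              # start from random weights
    b = 0.0
    for _ in range(epochs):
        idx = rng.permutation(n)        # shuffle before each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            err = Xb @ w + b - yb       # prediction error on this mini-batch
            w -= lr * (Xb.T @ err) / len(batch)   # gradient step for weights
            b -= lr * err.mean()                  # gradient step for bias
    return w, b

# Usage: recover known parameters from noiseless synthetic data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([3.0, -2.0]) + 1.0
w, b = minibatch_gd(X, y)
```

Because each update uses only a small batch, the cost moves toward a minimum with some noise in its trajectory; that noise is what helps the optimizer escape shallow local minima while still being far faster per step than full-batch descent.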