A numpy implementation of SPSA for optimizing neural networks

anomic1911/SPSA-Net

Neural networks are at the core of deep learning, but they are usually trained with the back-propagation algorithm, which requires the derivative of the loss function with respect to the network parameters. This repository shows that neural networks are not limited to back-propagation: we can use Simultaneous Perturbation Stochastic Approximation (SPSA) to obtain noisy gradient estimates instead. This technique is especially useful when the gradient of the loss function is very expensive to compute, or when the loss is not differentiable at all, whereas gradient descent and other popular optimizers such as Adam or RMSProp require an exact gradient.

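As a minimal sketch of the idea (not the exact code in train.py), SPSA perturbs all parameters simultaneously with a random ±1 (Rademacher) vector and estimates the full gradient from just two loss evaluations. The toy quadratic loss and the gain-schedule constants below are illustrative assumptions, not taken from this repository:

    import numpy as np

    def spsa_gradient(loss_fn, theta, c_k, rng):
        """Two-evaluation SPSA estimate of the gradient of loss_fn at theta.

        loss_fn: callable mapping a parameter vector to a scalar loss
        theta:   current parameter vector (1-D numpy array)
        c_k:     perturbation magnitude at iteration k
        rng:     numpy random generator
        """
        # Rademacher (+/-1) perturbation applied to every parameter at once
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        loss_plus = loss_fn(theta + c_k * delta)
        loss_minus = loss_fn(theta - c_k * delta)
        # Simultaneous-perturbation estimate of the whole gradient vector
        return (loss_plus - loss_minus) / (2.0 * c_k * delta)

    # Illustrative usage on a toy quadratic loss (assumed for demonstration)
    rng = np.random.default_rng(0)
    theta = rng.normal(size=10)
    loss_fn = lambda w: np.sum(w ** 2)

    a, c, A, alpha, gamma = 0.1, 0.1, 10.0, 0.602, 0.101  # common SPSA gain-schedule choices
    for k in range(1, 201):
        a_k = a / (k + A) ** alpha   # decaying step size
        c_k = c / k ** gamma         # decaying perturbation size
        g_hat = spsa_gradient(loss_fn, theta, c_k, rng)
        theta = theta - a_k * g_hat  # gradient-descent-style update with the noisy estimate

The same update rule can drive a neural network's weights: flatten the parameters into a single vector, let loss_fn run a forward pass on a batch, and the noisy SPSA estimate replaces the back-propagated gradient.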
Usage

python3 train.py

You can read the full article at http://anshulyadav.me/2019/06/21/SPSA-Neural-Network/
