Class SGD

Stochastic gradient descent (often shortened to SGD), also known as incremental gradient descent, is an iterative method for optimizing a differentiable objective function; it is a stochastic approximation of gradient descent optimization. A 2018 article[1] implicitly credits Herbert Robbins and Sutton Monro for developing SGD in their 1951 article titled "A Stochastic Approximation Method"; see Stochastic approximation for more information. It is called stochastic because samples are selected randomly (or shuffled) rather than processed as a single group (as in standard gradient descent) or in the order they appear in the training set.
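
The update rule this class's parameters suggest can be sketched as follows; this is the textbook formulation, and whether SiaNet.dll implements exactly these equations is an assumption, not confirmed by this page. With learning rate \eta_t, momentum \mu, and loss gradient \nabla_\theta L(\theta_t):

v_{t+1} = \mu v_t - \eta_t \nabla_\theta L(\theta_t)
\theta_{t+1} = \theta_t + v_{t+1}

With Nesterov momentum, the step instead applies a velocity look-ahead:

\theta_{t+1} = \theta_t + \mu v_{t+1} - \eta_t \nabla_\theta L(\theta_t)

A common convention for per-update decay (again an assumption for SiaNet) is \eta_t = \eta_0 / (1 + d t), where \eta_0 is the initial learning rate, d the decay rate, and t the update count.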

Inheritance
System.Object
BaseOptimizer
SGD
Inherited Members
BaseOptimizer.Name
BaseOptimizer.LearningRate
BaseOptimizer.Momentum
BaseOptimizer.DecayRate
Namespace: SiaNet.Optimizers
Assembly: SiaNet.dll
Syntax
public class SGD : BaseOptimizer

Constructors

SGD(Single, Single, Single, Boolean)

Initializes a new instance of the SGD class.

Declaration
public SGD(float lr = 0.01F, float momentum = 0F, float decayRate = 0F, bool nesterov = false)
Parameters
Type Name Description
System.Single lr

The initial learning rate.

System.Single momentum

Parameter that accelerates SGD in the relevant direction and dampens oscillations.

System.Single decayRate

Learning rate decay applied over each update.

System.Boolean nesterov

Whether to apply Nesterov momentum.
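
Examples

A minimal construction sketch, using only the constructor signature documented above; how the resulting optimizer is attached to a model (for example, through a compile step) depends on the wider SiaNet API and is not shown here.

using SiaNet.Optimizers;

// Plain SGD with the default learning rate of 0.01.
var plain = new SGD();

// SGD with momentum, a small per-update learning rate decay,
// and the Nesterov correction enabled.
var tuned = new SGD(lr: 0.01f, momentum: 0.9f, decayRate: 1e-6f, nesterov: true);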

Properties

Nesterov

Whether to apply Nesterov momentum.

Declaration
public bool Nesterov { get; set; }
Property Value
Type Description
System.Boolean

true if Nesterov momentum is applied; otherwise, false.
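
Because the property exposes a setter, the flag can also be toggled after construction instead of through the constructor argument; a short sketch:

using SiaNet.Optimizers;

// Start with classical momentum, then switch on the Nesterov correction.
var sgd = new SGD(lr: 0.01f, momentum: 0.9f);
sgd.Nesterov = true;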

See Also

BaseOptimizer