
Namespace Keras.Optimizers

Classes

Adadelta

Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even after many updates have been made. Unlike Adagrad, the original version of Adadelta does not require an initial learning rate to be set. In this version, the initial learning rate and decay factor can be set, as in most other Keras optimizers.
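A minimal construction sketch in C# follows; the parameter names (lr, rho, decay) are assumed to mirror the Python Keras signature and are not taken from this page.

using Keras.Optimizers;

// Assumption: defaults mirror Keras (lr = 1.0, rho = 0.95, decay = 0.0).
var adadelta = new Adadelta(lr: 1.0f, rho: 0.95f, decay: 0.0f);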

Adagrad

Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the learning rate.
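For illustration, a hedged sketch of constructing the optimizer; the lr and decay parameter names are assumptions based on the Python Keras API.

using Keras.Optimizers;

// Assumption: lr defaults to 0.01 as in Keras; decay adds time-based learning-rate decay.
var adagrad = new Adagrad(lr: 0.01f, decay: 0.0f);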

Adam

Adam optimizer. Default parameters follow those provided in the original paper.
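The sketch below compiles a small model with Adam at its paper defaults; the layer and Compile calls follow the Keras.NET README style, and the parameter names (lr, beta_1, beta_2) are assumptions rather than values documented on this page.

using Keras;
using Keras.Layers;
using Keras.Models;
using Keras.Optimizers;

// Assumption: defaults mirror the paper (lr = 0.001, beta_1 = 0.9, beta_2 = 0.999).
var adam = new Adam(lr: 0.001f, beta_1: 0.9f, beta_2: 0.999f);

var model = new Sequential();
model.Add(new Dense(32, activation: "relu", input_shape: new Shape(8)));
model.Add(new Dense(1, activation: "sigmoid"));
model.Compile(optimizer: adam, loss: "binary_crossentropy", metrics: new string[] { "accuracy" });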

Adamax

Adamax optimizer from Section 7 of the Adam paper. It is a variant of Adam based on the infinity norm. Default parameters follow those provided in the paper.
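As with Adam, a brief construction sketch, assuming the constructor exposes the same parameter names as the Python implementation (lr, beta_1, beta_2):

using Keras.Optimizers;

// Assumption: paper defaults are mirrored (lr = 0.002, beta_1 = 0.9, beta_2 = 0.999).
var adamax = new Adamax(lr: 0.002f, beta_1: 0.9f, beta_2: 0.999f);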

Nadam

Nesterov Adam optimizer. Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum. Default parameters follow those provided in the paper. It is recommended to leave the parameters of this optimizer at their default values.
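Since the defaults are recommended, the sketch below simply constructs the optimizer without arguments; the assumed defaults in the comment come from the Python Keras implementation, not from this page.

using Keras.Optimizers;

// Keeping the recommended defaults (assumed: lr = 0.002, schedule_decay = 0.004).
var nadam = new Nadam();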

RMSprop

RMSProp optimizer. It is recommended to leave the parameters of this optimizer at their default values (except the learning rate, which can be freely tuned). This optimizer is usually a good choice for recurrent neural networks.
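A sketch showing the one parameter that is normally tuned; the lr parameter name is an assumption carried over from the Python Keras API.

using Keras.Optimizers;

// Only the learning rate is changed; rho, epsilon and decay keep their (assumed) defaults.
var rmsprop = new RMSprop(lr: 0.0005f);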

SGD

Stochastic gradient descent optimizer. Includes support for momentum, learning rate decay, and Nesterov momentum.
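A construction sketch enabling the features listed above; the parameter names (lr, momentum, decay, nesterov) are assumed to match the Python Keras signature.

using Keras.Optimizers;

// Assumption: classic Keras-style SGD with momentum, time-based decay and Nesterov updates.
var sgd = new SGD(lr: 0.01f, momentum: 0.9f, decay: 1e-6f, nesterov: true);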
