Namespace SiaNet.Layers.Activations

Classes

Elu

Exponential Linear Unit activation function: f(x) = x for x > 0 and f(x) = alpha * (exp(x) - 1) for x <= 0.
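
For reference, a minimal element-wise sketch of this formula in plain C#; the class and method names are illustrative, not part of the SiaNet API, and alpha = 1.0 is only a commonly used default:

    using System;

    static class EluMath
    {
        // elu(x) = x for x > 0, alpha * (exp(x) - 1) for x <= 0.
        // alpha = 1.0 is a common default, not necessarily SiaNet's.
        public static double Elu(double x, double alpha = 1.0) =>
            x > 0 ? x : alpha * (Math.Exp(x) - 1.0);
    }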

Exp

Exponential activation function, which simply returns exp(x).

HardSigmoid

Hard sigmoid activation: a faster, piecewise-linear approximation of the sigmoid function.
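
A minimal sketch, assuming the common Keras-style formulation clip(0.2 * x + 0.5, 0, 1); SiaNet may use different constants, and the names here are illustrative, not the SiaNet API:

    using System;

    static class HardSigmoidMath
    {
        // Assumed Keras-style formulation: max(0, min(1, 0.2 * x + 0.5)).
        public static double HardSigmoid(double x) =>
            Math.Max(0.0, Math.Min(1.0, 0.2 * x + 0.5));
    }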

LeakyRelu

Leaky version of a Rectified Linear Unit.

It allows a small gradient when the unit is not active: f(x) = alpha * x for x < 0, f(x) = x for x >= 0.
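
A minimal element-wise sketch of the formula above (illustrative names, not the SiaNet API; alpha is a small fixed slope):

    static class LeakyReluMath
    {
        // f(x) = alpha * x for x < 0, f(x) = x for x >= 0.
        // alpha is a small constant, e.g. 0.01 or 0.3.
        public static double LeakyRelu(double x, double alpha) =>
            x >= 0 ? x : alpha * x;
    }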

Linear

Linear activation with f(x) = x.

PRelu

Parametric Rectified Linear Unit.

It follows: f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha is a learned array with the same shape as x.
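
A minimal element-wise sketch, with alpha passed in as the learned per-element slopes (illustrative names, not the SiaNet API):

    static class PReluMath
    {
        // f(x_i) = x_i for x_i >= 0, alpha_i * x_i otherwise;
        // alpha has the same shape as x and is learned during training.
        public static double[] PRelu(double[] x, double[] alpha)
        {
            var y = new double[x.Length];
            for (int i = 0; i < x.Length; i++)
                y[i] = x[i] >= 0 ? x[i] : alpha[i] * x[i];
            return y;
        }
    }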

Relu

Rectified Linear Unit.

With default values, it returns element-wise max(x, 0).

Otherwise, it follows: f(x) = max_value for x >= max_value, f(x) = x for threshold <= x < max_value, f(x) = alpha * (x - threshold) otherwise.
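
A minimal element-wise sketch of the full rule above; with the defaults alpha = 0, max_value = infinity, threshold = 0 it reduces to max(x, 0) (illustrative names, not the SiaNet API):

    static class ReluMath
    {
        public static double Relu(double x, double alpha = 0.0,
                                  double maxValue = double.PositiveInfinity,
                                  double threshold = 0.0)
        {
            if (x >= maxValue) return maxValue;   // clipped at max_value
            if (x >= threshold) return x;         // linear region
            return alpha * (x - threshold);       // leaky region below the threshold
        }
    }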

Selu

SELU is equal to scale * elu(x, alpha), where alpha and scale are predefined constants. The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers, as long as the weights are initialized correctly (see lecun_normal initialization) and the number of inputs is "large enough".
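
A minimal element-wise sketch using the constants derived in the SELU paper (Klambauer et al., 2017); the names are illustrative, not the SiaNet API:

    using System;

    static class SeluMath
    {
        // Constants from the SELU paper, chosen to preserve mean and variance.
        const double Alpha = 1.6732632423543772;
        const double Scale = 1.0507009873554805;

        // selu(x) = scale * elu(x, alpha)
        public static double Selu(double x) =>
            Scale * (x > 0 ? x : Alpha * (Math.Exp(x) - 1.0));
    }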

Sigmoid

Sigmoid takes a real value as input and outputs a value between 0 and 1. It is easy to work with and has many desirable properties for an activation function: it is non-linear, continuously differentiable, monotonic, and has a fixed output range.
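
A minimal sketch of the underlying formula, sigmoid(x) = 1 / (1 + exp(-x)) (illustrative names, not the SiaNet API):

    using System;

    static class SigmoidMath
    {
        // Maps any real x into the open interval (0, 1).
        public static double Sigmoid(double x) =>
            1.0 / (1.0 + Math.Exp(-x));
    }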

Softmax

The Softmax function calculates a probability distribution over 'n' different events. In other words, it computes the probability of each target class over all possible target classes; these probabilities can then be used to determine the target class for the given inputs.
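
A minimal sketch over a vector of scores; subtracting the maximum before exponentiating is a standard trick that avoids overflow without changing the result (illustrative names, not the SiaNet API):

    using System;
    using System.Linq;

    static class SoftmaxMath
    {
        // softmax(x)_i = exp(x_i) / sum_j exp(x_j); outputs are non-negative and sum to 1.
        public static double[] Softmax(double[] x)
        {
            double max = x.Max();
            double[] exps = x.Select(v => Math.Exp(v - max)).ToArray();
            double sum = exps.Sum();
            return exps.Select(e => e / sum).ToArray();
        }
    }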

Softplus

The softplus activation: log(exp(x) + 1).
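
A minimal sketch; the second method is a numerically safer rewrite of the same formula for large |x| (illustrative names, not the SiaNet API):

    using System;

    static class SoftplusMath
    {
        // Direct form: log(exp(x) + 1). Overflows for large positive x.
        public static double Softplus(double x) =>
            Math.Log(Math.Exp(x) + 1.0);

        // Equivalent, numerically stable form: max(x, 0) + log(1 + exp(-|x|)).
        public static double SoftplusStable(double x) =>
            Math.Max(x, 0.0) + Math.Log(1.0 + Math.Exp(-Math.Abs(x)));
    }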

Softsign

The softsign activation: x / (abs(x) + 1).
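
A minimal sketch of the formula (illustrative names, not the SiaNet API):

    using System;

    static class SoftsignMath
    {
        // softsign(x) = x / (|x| + 1); output lies in (-1, 1).
        public static double Softsign(double x) =>
            x / (Math.Abs(x) + 1.0);
    }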

Tanh

Hyperbolic tangent activation. Tanh squashes a real-valued number to the range [-1, 1]. It is non-linear, but unlike Sigmoid its output is zero-centered; therefore, in practice the tanh non-linearity is often preferred to the sigmoid non-linearity.