Class Activations
Activations can be used either through an Activation layer or through the activation argument supported by all forward layers; both patterns are sketched after the Syntax block below.
Namespace: Keras
Assembly: Keras.dll
Syntax
public class Activations : Base, IDisposable
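A minimal sketch of both usage patterns, assuming Keras.NET's Sequential, Dense, Activation, and Shape types from the Keras.Models, Keras.Layers, and Keras namespaces (layer sizes and names are illustrative):

```csharp
using Keras;
using Keras.Layers;
using Keras.Models;

var model = new Sequential();
// Pattern 1: pass the activation by name via the layer's activation argument.
model.Add(new Dense(64, activation: "relu", input_shape: new Shape(16)));
// Pattern 2: add a standalone Activation layer after the layer it applies to.
model.Add(new Dense(10));
model.Add(new Activation("softmax"));
```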
Methods
Elu(NDarray, Single)
Exponential Linear Unit: returns x for x > 0 and alpha * (exp(x) - 1) for x <= 0.
Declaration
public static NDarray Elu(NDarray x, float alpha = 1F)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
System.Single | alpha | Scalar, slope of the negative section. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor, the exponential linear activation. |
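A short usage sketch with illustrative inputs; np.array is assumed from the Numpy.NET package that provides NDarray:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -2f, 0f, 2f });
// alpha scales the saturated negative section: elu(-2) = 1 * (exp(-2) - 1).
var y = Activations.Elu(x, alpha: 1f);
Console.WriteLine(y); // approximately [-0.865, 0, 2]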
Exponential(NDarray)
Exponential (base e) activation function.
Declaration
public static NDarray Exponential(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor. |
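A usage sketch with illustrative inputs (same Numpy.NET assumption as in the Elu example):

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { 0f, 1f, 2f });
var y = Activations.Exponential(x); // element-wise exp(x)
Console.WriteLine(y); // approximately [1, 2.718, 7.389]
```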
HardSigmoid(NDarray)
Hard sigmoid activation function. A piecewise linear approximation of the sigmoid that is faster to compute: 0 for x < -2.5, 1 for x > 2.5, and 0.2 * x + 0.5 otherwise.
Declaration
public static NDarray HardSigmoid(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor. |
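A sketch with illustrative inputs chosen to hit each linear segment:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -3f, 0f, 3f });
// Piecewise linear: 0 below -2.5, 1 above 2.5, 0.2 * x + 0.5 in between.
var y = Activations.HardSigmoid(x);
Console.WriteLine(y); // [0, 0.5, 1]
```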
Linear(NDarray)
Linear (i.e. identity) activation function.
Declaration
public static NDarray Linear(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor. |
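A sketch with illustrative inputs:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -1f, 0f, 1f });
var y = Activations.Linear(x); // identity: the input passes through unchanged
Console.WriteLine(y); // [-1, 0, 1]
```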
Relu(NDarray, Single, Nullable<Single>, Single)
Rectified Linear Unit. With default values it returns the element-wise max(x, 0). Otherwise it returns max_value for x >= max_value, x for threshold <= x < max_value, and alpha * (x - threshold) for x < threshold.
Declaration
public static NDarray Relu(NDarray x, float alpha = 0F, float? max_value = default(float?), float threshold = 0F)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
System.Single | alpha | Slope of the negative section. Defaults to zero. |
System.Nullable<System.Single> | max_value | Saturation threshold. |
System.Single | threshold | Threshold value for thresholded activation. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor, the rectified activation. |
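A sketch exercising the optional parameters, with illustrative inputs:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -2f, -1f, 0f, 3f, 6f });
// Leaky slope of 0.1 below the threshold; outputs saturate at max_value = 5.
var y = Activations.Relu(x, alpha: 0.1f, max_value: 5f, threshold: 0f);
Console.WriteLine(y); // approximately [-0.2, -0.1, 0, 3, 5]
```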
Selu(NDarray)
Scaled Exponential Linear Unit (SELU). SELU is equal to scale * elu(x, alpha), where alpha and scale are predefined constants. The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers, as long as the weights are initialized correctly (see lecun_normal initialization) and the number of inputs is "large enough" (see references for more information).
Declaration
public static NDarray Selu(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | The scaled exponential unit activation: scale * elu(x, alpha). |
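A sketch with illustrative inputs:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -1f, 0f, 1f });
// Uses the fixed constants alpha ~ 1.6733 and scale ~ 1.0507.
var y = Activations.Selu(x);
Console.WriteLine(y); // approximately [-1.111, 0, 1.051]
```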
Sigmoid(NDarray)
Sigmoid activation function.
Declaration
public static NDarray Sigmoid(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Tensor, output of sigmoid activation. |
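A sketch with illustrative inputs:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -1f, 0f, 1f });
var y = Activations.Sigmoid(x); // element-wise 1 / (1 + exp(-x))
Console.WriteLine(y); // approximately [0.269, 0.5, 0.731]
```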
Softmax(NDarray, Int32)
Softmax activation function.
Declaration
public static NDarray Softmax(NDarray x, int axis = -1)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | The input tensor. |
System.Int32 | axis | Integer, axis along which the softmax normalization is applied. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Tensor, output of softmax transformation. |
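A sketch on a small batch of logits; the two-dimensional np.array overload is assumed from Numpy.NET:

```csharp
using System;
using Keras;
using Numpy;

// Two rows of logits; axis: -1 normalizes each row so it sums to 1.
var x = np.array(new float[,] { { 1f, 2f, 3f }, { 1f, 1f, 1f } });
var y = Activations.Softmax(x, axis: -1);
Console.WriteLine(y); // approximately [[0.090, 0.245, 0.665], [0.333, 0.333, 0.333]]
```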
Softplus(NDarray)
Softplus activation function. The softplus activation: log(exp(x) + 1).
Declaration
public static NDarray Softplus(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor. |
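A sketch with illustrative inputs:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -1f, 0f, 1f });
var y = Activations.Softplus(x); // element-wise log(exp(x) + 1)
Console.WriteLine(y); // approximately [0.313, 0.693, 1.313]
```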
Softsign(NDarray)
Softsign activation function: x / (abs(x) + 1).
Declaration
public static NDarray Softsign(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor. |
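A sketch with illustrative inputs:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -1f, 0f, 3f });
var y = Activations.Softsign(x); // element-wise x / (abs(x) + 1)
Console.WriteLine(y); // [-0.5, 0, 0.75]
```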
Tanh(NDarray)
Hyperbolic tangent activation function.
Declaration
public static NDarray Tanh(NDarray x)
Parameters
Type | Name | Description |
---|---|---|
Numpy.NDarray | x | Input tensor. |
Returns
Type | Description |
---|---|
Numpy.NDarray | Output tensor. |
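A sketch with illustrative inputs:

```csharp
using System;
using Keras;
using Numpy;

var x = np.array(new float[] { -1f, 0f, 1f });
var y = Activations.Tanh(x); // element-wise hyperbolic tangent
Console.WriteLine(y); // approximately [-0.762, 0, 0.762]
```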