Namespace SiaNet.Losses
Classes
BaseLoss
A loss function (or objective function, or optimization score function) is one of the two parameters required to compile a model.
BinaryCrossentropy
Binary Cross-Entropy Loss, also called Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss, it is computed independently for each vector component (class), meaning that the loss computed for one vector component is not affected by the other components' values.
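A minimal sketch of that per-component arithmetic in plain C# (illustrative only, not the SiaNet implementation; Bce is a hypothetical helper applied to already-sigmoided probabilities):

    using System;
    using System.Linq;

    // Per-sample binary cross-entropy: -(y*log(p) + (1-y)*log(1-p)), averaged over samples.
    double Bce(double[] yTrue, double[] pPred) =>
        yTrue.Zip(pPred, (y, p) => -(y * Math.Log(p) + (1 - y) * Math.Log(1 - p))).Average();

    Console.WriteLine(Bce(new[] { 1.0, 0.0 }, new[] { 0.9, 0.2 })); // ≈ 0.164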
CategorialHinge
The categorical hinge loss extends the hinge (max-margin) loss to multi-class problems: for a one-hot target, it penalizes the amount by which the score of the true class fails to exceed the highest score among the wrong classes by a margin of 1.
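A minimal sketch of that margin computation in plain C# (not the SiaNet implementation; CatHinge is a hypothetical helper), assuming a one-hot target vector:

    using System;
    using System.Linq;

    // Categorical hinge for one sample: max(0, 1 + best wrong-class score - true-class score).
    double CatHinge(double[] yTrueOneHot, double[] yPred)
    {
        double pos = yTrueOneHot.Zip(yPred, (y, p) => y * p).Sum();        // true-class score
        double neg = yTrueOneHot.Zip(yPred, (y, p) => (1 - y) * p).Max();  // best wrong-class score
        return Math.Max(0.0, 1.0 + neg - pos);
    }

    Console.WriteLine(CatHinge(new[] { 0.0, 1.0, 0.0 }, new[] { 0.3, 0.8, 0.1 })); // max(0, 1 + 0.3 - 0.8) = 0.5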
CategoricalCrossentropy
Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label.
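As a rough illustration of how the loss grows as the predicted probability diverges from the label, here is a plain C# sketch (CatCrossEntropy is a hypothetical helper, not the SiaNet API), assuming a one-hot target and a probability vector:

    using System;
    using System.Linq;

    // Categorical cross-entropy for one sample: -sum over classes of y * log(p).
    double CatCrossEntropy(double[] yTrueOneHot, double[] pPred) =>
        yTrueOneHot.Zip(pPred, (y, p) => -y * Math.Log(p)).Sum();

    Console.WriteLine(CatCrossEntropy(new[] { 0.0, 1.0, 0.0 }, new[] { 0.1, 0.7, 0.2 })); // -log(0.7) ≈ 0.357
    Console.WriteLine(CatCrossEntropy(new[] { 0.0, 1.0, 0.0 }, new[] { 0.6, 0.2, 0.2 })); // -log(0.2) ≈ 1.609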
CosineProximity
Cosine Proximity loss function computes the cosine proximity between the predicted value and the actual value. It is the negative of cosine similarity, which is a measure of similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them. In this case, note that unit vectors are maximally “similar” if they’re parallel and maximally “dissimilar” if they’re orthogonal (perpendicular). This is analogous to the cosine, which is unity (maximum value) when the segments subtend a zero angle and zero (uncorrelated) when the segments are perpendicular.
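A minimal sketch of the computation in plain C# (CosineProximity here is a hypothetical helper, not the SiaNet implementation); note the leading minus sign, so that minimizing the loss aligns the vectors:

    using System;
    using System.Linq;

    // Negative cosine of the angle between two vectors: -(a . b) / (|a| * |b|).
    double CosineProximity(double[] a, double[] b)
    {
        double dot = a.Zip(b, (x, y) => x * y).Sum();
        double normA = Math.Sqrt(a.Sum(x => x * x));
        double normB = Math.Sqrt(b.Sum(x => x * x));
        return -dot / (normA * normB);
    }

    Console.WriteLine(CosineProximity(new[] { 1.0, 0.0 }, new[] { 1.0, 0.0 })); // -1 (parallel)
    Console.WriteLine(CosineProximity(new[] { 1.0, 0.0 }, new[] { 0.0, 1.0 })); //  0 (orthogonal)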
Hinge
Hinge Loss, also known as the max-margin objective, is a loss function used for training classifiers. The hinge loss is used for “maximum-margin” classification, most notably for support vector machines (SVMs).
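A minimal sketch in plain C# (Hinge is a hypothetical helper, not the SiaNet implementation), assuming labels encoded as -1/+1:

    using System;
    using System.Linq;

    // Hinge loss: mean of max(0, 1 - y * score); zero once a sample clears the margin.
    double Hinge(double[] yTrue, double[] yPred) =>
        yTrue.Zip(yPred, (y, p) => Math.Max(0.0, 1.0 - y * p)).Average();

    Console.WriteLine(Hinge(new[] { 1.0, -1.0 }, new[] { 0.6, -2.0 })); // (0.4 + 0) / 2 = 0.2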
KullbackLeiblerDivergence
KL Divergence, also known as relative entropy or information divergence/gain, is a measure of how one probability distribution diverges from a second, expected probability distribution.
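A minimal sketch in plain C# (Kl is a hypothetical helper, not the SiaNet implementation), assuming two discrete distributions with strictly positive entries:

    using System;
    using System.Linq;

    // KL divergence of Q from P: sum over events of p * log(p / q); zero when P == Q.
    double Kl(double[] p, double[] q) =>
        p.Zip(q, (pi, qi) => pi * Math.Log(pi / qi)).Sum();

    Console.WriteLine(Kl(new[] { 0.5, 0.5 }, new[] { 0.5, 0.5 })); // 0
    Console.WriteLine(Kl(new[] { 0.5, 0.5 }, new[] { 0.9, 0.1 })); // ≈ 0.511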
LogCosh
Logarithm of the hyperbolic cosine of the prediction error.
log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
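The two regimes are easy to check numerically; a plain C# sketch (LogCosh is a hypothetical helper, not the SiaNet implementation):

    using System;
    using System.Linq;

    // Mean of log(cosh(residual)) over all samples.
    double LogCosh(double[] yTrue, double[] yPred) =>
        yTrue.Zip(yPred, (t, p) => Math.Log(Math.Cosh(p - t))).Average();

    Console.WriteLine(LogCosh(new[] { 1.0 }, new[] { 1.1 }));  // ≈ 0.005, close to 0.1^2 / 2
    Console.WriteLine(LogCosh(new[] { 1.0 }, new[] { 11.0 })); // ≈ 9.307, close to 10 - log(2)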
MeanAbsoluteError
Mean Absolute Error (MAE) is a quantity used to measure how close forecasts or predictions are to the eventual outcomes.
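A minimal sketch in plain C# (Mae is a hypothetical helper, not the SiaNet implementation):

    using System;
    using System.Linq;

    // Mean absolute error: average of |prediction - truth|.
    double Mae(double[] yTrue, double[] yPred) =>
        yTrue.Zip(yPred, (t, p) => Math.Abs(p - t)).Average();

    Console.WriteLine(Mae(new[] { 1.0, 2.0, 3.0 }, new[] { 1.5, 2.0, 2.0 })); // (0.5 + 0 + 1) / 3 = 0.5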
MeanAbsolutePercentageError
The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of the prediction accuracy of a forecasting method in statistics, for example in trend estimation. It is also used as a loss function for regression problems in machine learning.
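A minimal sketch in plain C# (Mape is a hypothetical helper, not the SiaNet implementation); note it is undefined when a true value is zero:

    using System;
    using System.Linq;

    // Mean absolute percentage error: average of |(truth - prediction) / truth|, times 100.
    double Mape(double[] yTrue, double[] yPred) =>
        100.0 * yTrue.Zip(yPred, (t, p) => Math.Abs((t - p) / t)).Average();

    Console.WriteLine(Mape(new[] { 100.0, 200.0 }, new[] { 110.0, 190.0 })); // (10% + 5%) / 2 = 7.5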
MeanSquaredError
The mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors—that is, the average squared difference between the estimated values and what is estimated.
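A minimal sketch in plain C# (Mse is a hypothetical helper, not the SiaNet implementation):

    using System;
    using System.Linq;

    // Mean squared error: average of squared residuals.
    double Mse(double[] yTrue, double[] yPred) =>
        yTrue.Zip(yPred, (t, p) => (t - p) * (t - p)).Average();

    Console.WriteLine(Mse(new[] { 1.0, 2.0, 3.0 }, new[] { 1.1, 1.9, 3.2 })); // 0.02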
MeanSquaredLogError
Mean squared logarithmic error (MSLE) is, as the name suggests, a variation of the Mean Squared Error. The loss is the mean over the seen data of the squared differences between the log-transformed true and predicted values, or, as a formula: MSLE = (1/n) * Σ (log(yᵢ + 1) - log(ŷᵢ + 1))², where yᵢ is the true value and ŷᵢ is the predicted value.
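A minimal sketch of that formula in plain C# (Msle is a hypothetical helper, not the SiaNet implementation); the +1 keeps the logarithm defined at zero:

    using System;
    using System.Linq;

    // Mean of squared differences between log(1 + truth) and log(1 + prediction).
    double Msle(double[] yTrue, double[] yPred) =>
        yTrue.Zip(yPred, (t, p) => Math.Pow(Math.Log(1 + p) - Math.Log(1 + t), 2)).Average();

    Console.WriteLine(Msle(new[] { 3.0 }, new[] { 2.0 })); // (log(3) - log(4))^2 ≈ 0.083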
Poisson
Poisson loss function is a measure of how the predicted distribution diverges from the expected distribution. It derives from the Poisson distribution, which is widely used for modeling count data. The Poisson distribution can be shown to be the limiting distribution of a binomial when the number of trials goes to infinity and the probability of success goes to zero, both at such a rate that np approaches some mean frequency for the process.
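A common formulation of this loss (used by Keras-style libraries; assumed here, not confirmed against the SiaNet source) is mean(prediction - truth * log(prediction)), which is the Poisson negative log-likelihood up to a constant. A plain C# sketch with a hypothetical Poisson helper:

    using System;
    using System.Linq;

    // Poisson loss: mean of (p - t * log(p)); assumes strictly positive predictions.
    double Poisson(double[] yTrue, double[] yPred) =>
        yTrue.Zip(yPred, (t, p) => p - t * Math.Log(p)).Average();

    Console.WriteLine(Poisson(new[] { 2.0 }, new[] { 2.0 })); // 2 - 2 * log(2) ≈ 0.614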
SquaredHinge
Squared Hinge Loss function is a variant of Hinge Loss. By squaring the margin term, it fixes the problem that the derivative of the hinge loss has a discontinuity at Pred * True = 1.
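A minimal sketch in plain C# (SquaredHinge is a hypothetical helper, not the SiaNet implementation), again assuming -1/+1 labels:

    using System;
    using System.Linq;

    // Squared hinge: mean of max(0, 1 - y * score)^2; the square smooths the kink at the margin.
    double SquaredHinge(double[] yTrue, double[] yPred) =>
        yTrue.Zip(yPred, (y, p) => Math.Pow(Math.Max(0.0, 1.0 - y * p), 2)).Average();

    Console.WriteLine(SquaredHinge(new[] { 1.0, -1.0 }, new[] { 0.6, -2.0 })); // (0.16 + 0) / 2 = 0.08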