August 5th, 2017
L1Loss
$loss(x, y) = \frac{1}{n} \sum_i |x_i - y_i|$
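A minimal usage sketch with `nn.L1Loss` (mean absolute error); the tensor shapes and names here are illustrative, not prescribed:

```python
import torch
import torch.nn as nn

criterion = nn.L1Loss()            # mean absolute error
pred = torch.randn(4, 5)           # hypothetical model outputs
target = torch.randn(4, 5)         # hypothetical ground truth
loss = criterion(pred, target)     # averages |pred - target| over all elements
```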
MSELoss
$loss(x, y) = \frac{1}{n} \sum_i |x_i - y_i|^2$
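The call pattern is identical to `L1Loss`; only the criterion changes (shapes again illustrative):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()           # mean squared error
pred = torch.randn(4, 5)
target = torch.randn(4, 5)
loss = criterion(pred, target)     # averages (pred - target)^2, so outliers weigh more
```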
NLLLoss
It is useful for training a classification problem with n classes. The input x is expected to contain log-probabilities for each class.
$loss(x, class) = -x[class]$
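A small sketch of the expected inputs, assuming the usual pairing with `nn.LogSoftmax`; the batch size and class count are made up:

```python
import torch
import torch.nn as nn

log_softmax = nn.LogSoftmax(dim=1)
criterion = nn.NLLLoss()
scores = torch.randn(3, 5)              # 3 samples, 5 classes (hypothetical)
target = torch.tensor([1, 0, 4])        # class index per sample
log_probs = log_softmax(scores)         # NLLLoss expects log-probabilities
loss = criterion(log_probs, target)     # mean of -log_probs[i, target[i]]
```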
CrossEntropyLoss
This criterion combines LogSoftmax and NLLLoss in a single class.
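Because the log-softmax is folded in, the criterion takes raw scores directly; a sketch with hypothetical logits:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
scores = torch.randn(3, 5)              # raw, unnormalized scores (logits)
target = torch.tensor([1, 0, 4])
loss = criterion(scores, target)        # same result as LogSoftmax + NLLLoss above
```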
BCELoss
Binary Cross Entropy
$loss(o, t) = -\frac{1}{n} \sum_i \big(t[i] \log(o[i]) + (1 - t[i]) \log(1 - o[i])\big)$
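A sketch assuming the outputs o have already been squashed into (0, 1) by a sigmoid; the values are illustrative:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()
o = torch.sigmoid(torch.randn(4))       # probabilities in (0, 1)
t = torch.tensor([1., 0., 1., 0.])      # binary targets
loss = criterion(o, t)
```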
BCEWithLogitsLoss
This loss combines a Sigmoid layer and BCELoss in a single class. This version is more numerically stable than using a plain Sigmoid followed by BCELoss.
$loss(o, t) = -\frac{1}{n} \sum_i \big(t[i] \log(\mathrm{sigmoid}(o[i])) + (1 - t[i]) \log(1 - \mathrm{sigmoid}(o[i]))\big)$
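The same example as above, but feeding raw logits and letting the criterion apply the sigmoid internally:

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits = torch.randn(4)                 # raw scores, no sigmoid applied
t = torch.tensor([1., 0., 1., 0.])
loss = criterion(logits, t)             # sigmoid is applied inside, in a numerically stable way
```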
HingeEmbeddingLoss
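The section gives no formula here; as a hedged sketch based on the PyTorch documentation, the input x is typically a distance between a pair of embeddings and the target y is 1 for similar pairs and -1 for dissimilar ones (the embeddings below are made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

a = torch.randn(4, 8)                         # hypothetical embedding pairs
b = torch.randn(4, 8)
x = F.pairwise_distance(a, b)                 # input is usually a distance
y = torch.tensor([1., -1., 1., -1.])          # 1 = similar pair, -1 = dissimilar
criterion = nn.HingeEmbeddingLoss(margin=1.0)
loss = criterion(x, y)                        # y=1 terms: x; y=-1 terms: max(0, margin - x)
```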
SmoothL1Loss
L1Loss produces sparse solutions and avoids exploding gradients at the boundaries; SmoothL1Loss is additionally differentiable at 0.
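A sketch of the call; the comment states the piecewise definition used by PyTorch's default settings (an assumption, since the section does not give the formula):

```python
import torch
import torch.nn as nn

criterion = nn.SmoothL1Loss()
pred = torch.randn(4, 5)
target = torch.randn(4, 5)
# per element: 0.5 * d^2 if |d| < 1, else |d| - 0.5, where d = pred - target
loss = criterion(pred, target)
```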