BCELoss and BCEWithLogitsLoss

Source: https://www.cnblogs.com/jiangkejie/p/11207863.html



BCELoss

CLASS torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')

Creates a criterion that measures the binary cross entropy between the target and the output.

In the unreduced case (i.e. with reduction set to 'none') the loss is described as:

    l(x, y) = L = {l_1, ..., l_N}^T,  l_n = -w_n * [ y_n * log(x_n) + (1 - y_n) * log(1 - x_n) ]

where N is the batch size. If reduction is not 'none' (the default is 'mean'), then:

    l(x, y) = mean(L)  if reduction = 'mean'
    l(x, y) = sum(L)   if reduction = 'sum'

That is, the per-sample losses in the batch are either averaged or summed.

 

This can be used to measure the reconstruction error in, for example, an auto-encoder. Note that the targets y should be numbers between 0 and 1.

Parameters:

  • weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.

  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'. (That is, return the per-element losses, the batch mean, or the batch sum; the default is the batch mean.)

Shape:

  • Input: (N, *) where * means any number of additional dimensions

  • Target: (N, *), same shape as the input

  • Output: scalar. If reduction is 'none', then (N, *), same shape as the input.
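The usage pattern follows from the above: apply a Sigmoid to get probabilities, then feed them to BCELoss. A minimal sketch (the tensor size (3, 2) is arbitrary):

```python
import torch
import torch.nn as nn

m = nn.Sigmoid()
loss_fn = nn.BCELoss()            # default reduction='mean'

logits = torch.randn(3, 2, requires_grad=True)
target = torch.rand(3, 2)         # targets must lie between 0 and 1

probs = m(logits)                 # BCELoss expects probabilities, not raw logits
loss = loss_fn(probs, target)     # scalar: the batch mean
loss.backward()

# With reduction='none' the loss keeps the input shape:
per_element = nn.BCELoss(reduction='none')(probs.detach(), target)
```

The mean of the unreduced losses equals the scalar returned with the default reduction.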

Source code:

def binary_cross_entropy(input, target, weight=None, size_average=None,
                         reduce=None, reduction='elementwise_mean'):
    r"""Function that measures the Binary Cross Entropy
    between the target and the output.

    See :class:`~torch.nn.BCELoss` for details.

    Args:
        input: Tensor of arbitrary shape
        target: Tensor of the same shape as input
        weight (Tensor, optional): a manual rescaling weight
                if provided it's repeated to match input tensor shape
        size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
            the losses are averaged over each loss element in the batch. Note that for
            some losses, there are multiple elements per sample. If the field :attr:`size_average`
            is set to ``False``, the losses are instead summed for each minibatch. Ignored
            when reduce is ``False``. Default: ``True``
        reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
            losses are averaged or summed over observations for each minibatch depending
            on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
            batch element instead and ignores :attr:`size_average`. Default: ``True``
        reduction (string, optional): Specifies the reduction to apply to the output:
            'none' | 'elementwise_mean' | 'sum'. 'none': no reduction will be applied,
            'elementwise_mean': the sum of the output will be divided by the number of
            elements in the output, 'sum': the output will be summed. Note: :attr:`size_average`
            and :attr:`reduce` are in the process of being deprecated, and in the meantime,
            specifying either of those two args will override :attr:`reduction`. Default: 'elementwise_mean'

    Examples::

        >>> input = torch.randn((3, 2), requires_grad=True)
        >>> target = torch.rand((3, 2), requires_grad=False)
        >>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
        >>> loss.backward()
    """
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_enum(size_average, reduce)
    else:
        reduction = _Reduction.get_enum(reduction)
    if not (target.size() == input.size()):
        warnings.warn("Using a target size ({}) that is different to the input size ({}) is deprecated. "
                      "Please ensure they have the same size.".format(target.size(), input.size()))
    if input.nelement() != target.nelement():
        raise ValueError("Target and input must have the same number of elements. target nelement ({}) "
                         "!= input nelement ({})".format(target.nelement(), input.nelement()))

    if weight is not None:
        new_size = _infer_size(target.size(), weight.size())
        weight = weight.expand(new_size)

    return torch._C._nn.binary_cross_entropy(input, target, weight, reduction)

BCEWithLogitsLoss (improved numerical stability)

CLASS torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)

This loss combines a Sigmoid layer and the BCELoss in one single class.

This version is more numerically stable than using a plain Sigmoid followed by a BCELoss: by combining the two operations into one layer, it takes advantage of the log-sum-exp trick to achieve numerical stability.
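The stability gain is easy to demonstrate. In the sketch below, the logits are hand-picked so that sigmoid saturates in float32: the naive two-step computation produces log(0) = -inf, while the fused version stays finite:

```python
import torch
import torch.nn.functional as F

# Extreme logits chosen so that sigmoid saturates to exactly 0.0 / 1.0 in float32.
logits = torch.tensor([-200.0, 200.0])
target = torch.tensor([1.0, 0.0])

# Fused, log-sum-exp based computation: finite even at the extremes.
stable = F.binary_cross_entropy_with_logits(logits, target, reduction='none')

# Naive two-step computation: sigmoid(-200) underflows to 0,
# so log(0) = -inf leaks into the loss.
probs = torch.sigmoid(logits)
naive = -(target * torch.log(probs) + (1 - target) * torch.log(1 - probs))
```

Here the stable losses come out to roughly 200 per element (the magnitude of the logit), whereas the naive losses are infinite.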

def binary_cross_entropy_with_logits(input, target, weight=None, size_average=None,
                                     reduce=None, reduction='elementwise_mean', pos_weight=None):
    r"""Function that measures Binary Cross Entropy between target and output
    logits.

    See :class:`~torch.nn.BCEWithLogitsLoss` for details.

    Args:
        input: Tensor of arbitrary shape
        target: Tensor of the same shape as input
        weight (Tensor, optional): a manual rescaling weight
            if provided it's repeated to match input tensor shape
        size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
            the losses are averaged over each loss element in the batch. Note that for
            some losses, there are multiple elements per sample. If the field :attr:`size_average`
            is set to ``False``, the losses are instead summed for each minibatch. Ignored
            when reduce is ``False``. Default: ``True``
        reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
            losses are averaged or summed over observations for each minibatch depending
            on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
            batch element instead and ignores :attr:`size_average`. Default: ``True``
        reduction (string, optional): Specifies the reduction to apply to the output:
            'none' | 'elementwise_mean' | 'sum'. 'none': no reduction will be applied,
            'elementwise_mean': the sum of the output will be divided by the number of
            elements in the output, 'sum': the output will be summed. Note: :attr:`size_average`
            and :attr:`reduce` are in the process of being deprecated, and in the meantime,
            specifying either of those two args will override :attr:`reduction`. Default: 'elementwise_mean'
        pos_weight (Tensor, optional): a weight of positive examples.
                Must be a vector with length equal to the number of classes.

    Examples::

         >>> input = torch.randn(3, requires_grad=True)
         >>> target = torch.empty(3).random_(2)
         >>> loss = F.binary_cross_entropy_with_logits(input, target)
         >>> loss.backward()
    """
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    if not (target.size() == input.size()):
        raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))

    max_val = (-input).clamp(min=0)

    if pos_weight is None:
        loss = input - input * target + max_val + ((-max_val).exp() + (-input - max_val).exp()).log()
    else:
        log_weight = 1 + (pos_weight - 1) * target
        loss = input - input * target + log_weight * (max_val + ((-max_val).exp() + (-input - max_val).exp()).log())

    if weight is not None:
        loss = loss * weight

    if reduction == 'none':
        return loss
    elif reduction == 'elementwise_mean':
        return loss.mean()
    else:
        return loss.sum()
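As the `log_weight` branch above shows, pos_weight rescales only the positive term of the loss, which is useful for class imbalance. A quick sketch with hand-picked numbers: a logit of 0 with a positive target gives a base loss of log(2), and pos_weight=3 triples it.

```python
import torch
import torch.nn as nn

# One logit at 0 with a positive target: the base loss is log(2) ≈ 0.6931.
logits = torch.tensor([0.0])
target = torch.tensor([1.0])

plain = nn.BCEWithLogitsLoss()(logits, target)

# pos_weight multiplies the positive-example term of the loss by 3.
weighted = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3.0]))(logits, target)
```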
