【DL】PyTorch Implementations of FocalLoss

This post does not cover the theory behind FocalLoss; it only shows two ways to implement it in PyTorch. The underlying theory is already explained clearly in the article 《FocalLoss原理通俗解释及其二分类和多分类场景下的原理与实现》, so it is not repeated here.
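For quick reference (this recap is mine, not from the referenced article), both implementations below compute the same per-sample quantity: with $p$ the softmax probability assigned to the true class, $\alpha$ the (optionally per-class) weight, $\gamma$ the focusing parameter, and $\epsilon$ a small numerical stabilizer,

$$\mathrm{FL} = -\alpha \,(1 - p)^{\gamma} \,\log(p + \epsilon),$$

averaged over the batch.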

Method 1

Compute the FocalLoss for all samples in a batch at once (from the article 《FocalLoss原理通俗解释及其二分类和多分类场景下的原理与实现》, with some comments of my own added):

import random

import numpy as np
import torch
from torch import nn


class FocalLoss(nn.Module):
    """
    Adapted from https://github.com/lonePatient/TorchBlocks
    """

    def __init__(self, gamma=2.0, alpha=1, epsilon=1.e-9, device=None):
        super(FocalLoss, self).__init__()
        self.gamma = gamma
        if isinstance(alpha, list):
            # per-class weights; use torch.tensor (lowercase) so the
            # device argument is actually honored
            self.alpha = torch.tensor(alpha, device=device)
        else:
            self.alpha = alpha
        self.epsilon = epsilon

    def forward(self, input, target):
        """
        Compute the loss for all samples in the batch at once.

        Args:
            input: model's output, shape of [batch_size, num_cls]
            target: ground truth labels, shape of [batch_size]
        Returns:
            a scalar: the mean focal loss over the batch
        """
        num_labels = input.size(-1)  # number of classes
        idx = target.view(-1, 1).long()  # reshape target from [batch_size] to [batch_size, 1]
        one_hot_key = torch.zeros(idx.size(0), num_labels, dtype=torch.float32, device=idx.device)
        # each row of one_hot_key is the one-hot label vector of the corresponding
        # sample: scatter_ writes 1 at the label index and leaves 0 elsewhere
        one_hot_key = one_hot_key.scatter_(1, idx, 1)
        # ignore index 0. Keep this line only if class 0 should be ignored;
        # if your labels actually include class 0 (rather than starting from
        # class 1), comment it out
        one_hot_key[:, 0] = 0
        logits = torch.softmax(input, dim=-1)
        # focal loss: -alpha * (1 - p)^gamma * log(p + eps), masked by the one-hot labels
        loss = -self.alpha * one_hot_key * torch.pow((1 - logits), self.gamma) * (logits + self.epsilon).log()
        loss = loss.sum(1)
        return loss.mean()


# fix the random seeds for reproducibility
def setup_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    np.random.seed(seed)
    random.seed(seed)
    torch.backends.cudnn.deterministic = True


if __name__ == '__main__':
    loss = FocalLoss(alpha=[0.1, 0.2, 0.3, 0.15, 0.25])
    # set the random seed
    setup_seed(20)
    input = torch.randn(3, 5, requires_grad=True)  # torch.Size([3, 5]) [sample_num, class_num]
    target = torch.empty(3, dtype=torch.long).random_(5)  # torch.Size([3]) [sample_num]
    output = loss(input, target)
    # print(output)
    output.backward()
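As a quick sanity check (my addition, not from the original post): with gamma=0 and alpha=1 the focusing term (1 - p)^gamma is identically 1, so the loss above should reduce to ordinary cross-entropy. A minimal sketch, assuming the FocalLoss class from Method 1 is in scope and its one_hot_key[:, 0] = 0 line has been commented out (otherwise samples labeled 0 contribute zero loss):

import torch
from torch import nn

torch.manual_seed(0)
input = torch.randn(4, 5)
target = torch.randint(0, 5, (4,))

focal = FocalLoss(gamma=0.0, alpha=1)  # focusing term disabled
ce = nn.CrossEntropyLoss()             # mean reduction, same as FocalLoss here

# the two values should agree up to the epsilon stabilizer
print(focal(input, target))
print(ce(input, target))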

Method 2

Compute the FocalLoss sample by sample within a batch, then average, returning the mean FocalLoss over all samples in the batch:

import random

import numpy as np
import torch
from torch import nn


class FocalLoss(nn.Module):
    """
    Adapted from https://github.com/lonePatient/TorchBlocks
    """

    def __init__(self, gamma=2.0, alpha=1, epsilon=1.e-9, device=None):
        super(FocalLoss, self).__init__()
        self.gamma = gamma
        if isinstance(alpha, list):
            # per-class weights; torch.tensor honors the device argument
            self.alpha = torch.tensor(alpha, device=device)
        else:
            self.alpha = alpha
        self.epsilon = epsilon

    def forward(self, input, target):
        """
        Compute the loss sample by sample.

        Args:
            input: model's output, shape of [batch_size, num_cls]
            target: ground truth labels, shape of [batch_size]
        Returns:
            a scalar: the mean focal loss over the batch
        """
        num_labels = input.size(-1)  # number of classes
        loss = None
        for i, sample in enumerate(input):
            # one-hot label vector of the i-th sample
            one_hot_key = torch.zeros(1, num_labels, dtype=torch.float32, device=input.device)
            one_hot_key.scatter_(1, target[i].view(1, -1), 1)

            logits = torch.softmax(sample, dim=-1)
            # focal loss for this sample: -alpha * (1 - p)^gamma * log(p + eps)
            loss_this_sample = -self.alpha * one_hot_key * torch.pow((1 - logits), self.gamma) * (logits + self.epsilon).log()
            loss_this_sample = loss_this_sample.sum(1)
            if i == 0:
                loss = loss_this_sample
            else:
                loss = torch.cat((loss, loss_this_sample))

        return loss.mean()


# fix the random seeds for reproducibility
def setup_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    np.random.seed(seed)
    random.seed(seed)
    torch.backends.cudnn.deterministic = True


if __name__ == '__main__':
    loss = FocalLoss(alpha=[0.1, 0.2, 0.3, 0.15, 0.25])
    # set the random seed
    setup_seed(20)
    input = torch.randn(3, 5, requires_grad=True)  # torch.Size([3, 5]) [sample_num, class_num]
    target = torch.empty(3, dtype=torch.long).random_(5)  # torch.Size([3]) [sample_num]
    output = loss(input, target)
    # print(output)
    output.backward()
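The two implementations should agree numerically; Method 2 merely trades the vectorized one-hot construction for an explicit Python loop (and is correspondingly slower). A minimal sketch of that check, assuming the two classes have been renamed to the hypothetical FocalLossBatch (Method 1, with its one_hot_key[:, 0] = 0 line commented out) and FocalLossPerSample (Method 2):

import torch

torch.manual_seed(0)
input = torch.randn(8, 5)
target = torch.randint(0, 5, (8,))

alpha = [0.1, 0.2, 0.3, 0.15, 0.25]
loss_batch = FocalLossBatch(alpha=alpha)(input, target)           # hypothetical rename of Method 1
loss_per_sample = FocalLossPerSample(alpha=alpha)(input, target)  # hypothetical rename of Method 2

# both compute -alpha * (1 - p)^gamma * log(p + eps) per sample, then average,
# so the results should agree to floating-point precision
assert torch.allclose(loss_batch, loss_per_sample)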
