李沐 29: Residual Networks (ResNet) - Self-Study Notes

Residual Networks

The core idea of residual networks is that every additional layer should more easily be able to contain the original function as one of its elements.

Residual Blocks

Stacking another layer changes the function class; what we want is to enlarge it. A residual block adds a shortcut (fast path) around the added layers so that the output becomes f(x) = x + g(x): if the learned residual g(x) is zero, the block reduces to the identity mapping.
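A minimal toy sketch of this idea (hypothetical code for illustration only, not the implementation used later): when the residual branch starts at zero, the block is exactly the identity map, so adding it cannot shrink the function class.

import torch
from torch import nn

class ToyResidual(nn.Module):
    """Toy block computing f(x) = x + g(x); hypothetical, for intuition only."""
    def __init__(self, dim):
        super().__init__()
        self.g = nn.Linear(dim, dim)   # the learned residual g(x)
        nn.init.zeros_(self.g.weight)  # start g at zero ...
        nn.init.zeros_(self.g.bias)    # ... so the block starts as the identity

    def forward(self, x):
        return x + self.g(x)           # f(x) = x + g(x)

x = torch.randn(2, 4)
print(torch.allclose(ToyResidual(4)(x), x))  # True: identity at initialization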

ResNet Blocks

1. A ResNet block that halves the height and width (stride 2)

2. Followed by multiple ResNet blocks that keep the height and width unchanged

ResNet Architecture

1. The overall architecture is similar to VGG and GoogLeNet

2. But the blocks are replaced with ResNet blocks

Summary

Residual blocks make very deep networks much easier to train; even networks with a thousand layers can be trained.

Code Implementation

ResNet follows VGG's design of full 3×3 convolutional layers. A residual block first has two 3×3 convolutional layers with the same number of output channels. Each convolutional layer is followed by a batch normalization layer and a ReLU activation function. We then skip these two convolution operations through a cross-layer data path and add the input directly before the final ReLU activation function. This design requires the output of the two convolutional layers to have the same shape as the input, so that they can be added. If we want to change the number of channels, we need to introduce an additional 1×1 convolutional layer to transform the input into the required shape before the addition.

import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l


class Residual(nn.Module):  #@save
    def __init__(self, input_channels, num_channels,
                 use_1x1conv=False, strides=1):
        super().__init__()
        self.conv1 = nn.Conv2d(input_channels, num_channels,
                               kernel_size=3, padding=1, stride=strides)
        self.conv2 = nn.Conv2d(num_channels, num_channels,
                               kernel_size=3, padding=1)
        if use_1x1conv:
            self.conv3 = nn.Conv2d(input_channels, num_channels,
                                   kernel_size=1, stride=strides)
        else:
            self.conv3 = None
        self.bn1 = nn.BatchNorm2d(num_channels)
        self.bn2 = nn.BatchNorm2d(num_channels)

    def forward(self, X):
        Y = F.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)
        Y += X  # add the shortcut connection before the final ReLU
        return F.relu(Y)

In one case, when use_1x1conv=False, the input is added to the output before the ReLU nonlinearity is applied. In the other case, when use_1x1conv=True, a 1×1 convolution is added to adjust the channels and resolution.

When the input and output have the same shape:

blk = Residual(3,3)
X = torch.rand(4, 3, 6, 6)
Y = blk(X)
Y.shape
torch.Size([4, 3, 6, 6])

Increase the number of output channels while halving the height and width:

blk = Residual(3,6, use_1x1conv=True, strides=2)
blk(X).shape # batch size, channel, h, w
torch.Size([4, 6, 3, 3])

The ResNet Model

# The first module: 7x7 convolution, batch norm, ReLU, and 3x3 max pooling
b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                   nn.BatchNorm2d(64), nn.ReLU(),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

def resnet_block(input_channels, num_channels, num_residuals,
                 first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            # First block of the module: halve height/width and change the
            # number of channels, using the 1x1 convolution on the shortcut
            blk.append(Residual(input_channels, num_channels,
                                use_1x1conv=True, strides=2))
        else:
            blk.append(Residual(num_channels, num_channels))
    return blk

# Four modules, each made of two residual blocks
b2 = nn.Sequential(*resnet_block(64, 64, 2, first_block=True))
b3 = nn.Sequential(*resnet_block(64, 128, 2))
b4 = nn.Sequential(*resnet_block(128, 256, 2))
b5 = nn.Sequential(*resnet_block(256, 512, 2))

# Global average pooling followed by a fully connected output layer
net = nn.Sequential(b1, b2, b3, b4, b5,
                    nn.AdaptiveAvgPool2d((1,1)),
                    nn.Flatten(), nn.Linear(512, 10))

Observe how the shape changes across the different modules of ResNet. Each of the four residual modules b2 to b5 contains four convolutional layers (not counting the 1×1 shortcut convolutions); together with the first 7×7 convolution and the final fully connected layer, the network has 18 layers, which is why this model is commonly called ResNet-18.

X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
    X = layer(X)
    print(layer.__class__.__name__,'output shape:\t', X.shape)
Sequential output shape:	 torch.Size([1, 64, 56, 56])
Sequential output shape:	 torch.Size([1, 64, 56, 56])
Sequential output shape:	 torch.Size([1, 128, 28, 28])
Sequential output shape:	 torch.Size([1, 256, 14, 14])
Sequential output shape:	 torch.Size([1, 512, 7, 7])
AdaptiveAvgPool2d output shape:	 torch.Size([1, 512, 1, 1])
Flatten output shape:	 torch.Size([1, 512])
Linear output shape:	 torch.Size([1, 10])

Training the Model

lr, num_epochs, batch_size = 0.05, 10, 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
loss 0.016, train acc 0.995, test acc 0.915
1553.6 examples/sec on cuda:0

[Figure: training loss, training accuracy, and test accuracy curves over 10 epochs]
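As a follow-up, here is a small hypothetical usage sketch (assuming the training cell above has already been run) that uses the trained net to predict labels for one batch from test_iter:

# Hypothetical usage sketch: predict labels for one test batch with the trained net
device = d2l.try_gpu()
net.to(device)
net.eval()                                   # disable batch-norm statistics updates
X, y = next(iter(test_iter))                 # one batch of Fashion-MNIST images (resized to 96x96)
with torch.no_grad():
    preds = net(X.to(device)).argmax(dim=1)  # predicted class indices
print(preds[:10].cpu(), y[:10])              # compare predictions with the true labels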
