pytorch: Modifying a Pretrained Model (Fully Connected Layer, a Single Conv Layer, Several Conv Layers)

Overview

This post shows three common ways to modify a pretrained torchvision model: changing the number of classes in the final fully connected layer, replacing a single convolution layer, and patching the parameters of several convolution layers at once.

1. Changing the number of classes in the fully connected layer

import torch.nn as nn
import torchvision

model = torchvision.models.resnet50(pretrained=True)
# Redefine the last fully connected layer: 2048 input features, 10 output classes
model.fc = nn.Linear(2048, 10)
print(model.fc)
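
If you prefer not to hardcode the 2048 input features, you can read them from the existing layer before replacing it; a minimal sketch, where the class count of 10 is just an example:

model = torchvision.models.resnet50(pretrained=True)
num_classes = 10                        # example class count
in_features = model.fc.in_features      # 2048 for resnet50
model.fc = nn.Linear(in_features, num_classes)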

2. Changing a single convolution layer

model = torchvision.models.resnet50(pretrained=True)
# Redefine the first convolution so it takes 4 input channels instead of 3
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
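
Replacing conv1 like this discards its pretrained weights. If you want to keep them for the first three channels, one option is to copy the old filters into the new layer; a minimal sketch, assuming the extra fourth channel is simply zero-initialized:

import torch

model = torchvision.models.resnet50(pretrained=True)
old_weight = model.conv1.weight.data.clone()   # shape [64, 3, 7, 7]
new_conv = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    new_conv.weight.zero_()                    # extra channel starts at zero
    new_conv.weight[:, :3] = old_weight        # reuse the pretrained RGB filters
model.conv1 = new_conv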

3. Changing several convolution layers

Below is code from a DeepLabv3+ model, where the resnet backbone's layer3 and layer4 have to be switched to dilated convolutions with their downsampling stride removed (set to 1); a standalone sketch of the same dilation trick follows the full class below.

class Deeplabv3_plus(nn.Module):
    def __init__(self,
                 layers=50,
                 atrous_rates=[6, 12, 18],
                 classes=1,
                 BatchNorm2d=nn.BatchNorm2d,
                 criterion=nn.CrossEntropyLoss(),
                 pretrained=False):
        super(Deeplabv3_plus, self).__init__()
        assert layers in [50, 101, 152]
        self.criterion = criterion
        models.BatchNorm2d = BatchNorm2d

        # Note: resnet50 is instantiated here regardless of the `layers` argument
        resnet = models.resnet50(pretrained=pretrained)
        
        self.layer0 = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu)
        self.layer1 = nn.Sequential(resnet.maxpool, resnet.layer1)
        self.layer2, self.layer3, self.layer4 = resnet.layer2, resnet.layer3, resnet.layer4
		
        # The following lines modify the dilation rate, padding and stride
        # so layer3/layer4 use dilated convolutions without further downsampling
        for n, m in self.layer3.named_modules():
            if 'conv2' in n:
                m.dilation, m.padding, m.stride = (2, 2), (2, 2), (1, 1)
            elif 'downsample.0' in n:
                m.stride = (1, 1)
        for n, m in self.layer4.named_modules():
            if 'conv2' in n:
                m.dilation, m.padding, m.stride = (4, 4), (4, 4), (1, 1)
            elif 'downsample.0' in n:
                m.stride = (1, 1)

        fea_dim = 2048
        self.aspp = ASPP(fea_dim, BatchNorm2d, atrous_rates=atrous_rates)

        self.low_level_feature_conv = nn.Sequential(nn.Conv2d(256, 48, kernel_size=1, bias=False),
                                                    BatchNorm2d(48),
                                                    nn.ReLU(inplace=True))

        self.cls = nn.Sequential(
            nn.Conv2d(304, 256, kernel_size=3, padding=1, bias=False),
            BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
            BatchNorm2d(256),
            nn.ReLU(),
            nn.Conv2d(256, classes, kernel_size=1, stride=1))

        if self.training:
            self.aux = nn.Sequential(
                nn.Conv2d(1024, 256, kernel_size=3, padding=1, bias=False),
                BatchNorm2d(256),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, classes, kernel_size=1)
            )

    def forward(self, x, y=None):
        x_size = x.size()
        h = x_size[2]
        w = x_size[3]

        x = self.layer0(x)
        low_level_features = self.layer1(x)
        x = self.layer2(low_level_features)
        x_tmp = self.layer3(x)
        x = self.layer4(x_tmp)

        x = self.aspp(x)
        x = F.interpolate(x, size=(int(h / 4), int(w / 4)), mode='bilinear', align_corners=True)
        low_level_features = self.low_level_feature_conv(low_level_features)
        x = torch.cat((x, low_level_features), dim=1)

        x = self.cls(x)

        x = F.interpolate(x, size=(h, w), mode='bilinear', align_corners=True)
        main_out = torch.sigmoid(x)

        if self.training:
            aux = self.aux(x_tmp)
            aux = F.interpolate(aux, size=(h, w), mode='bilinear', align_corners=True)
            main_loss = self.criterion(x, y)
            aux_loss = self.criterion(aux, y)
            return main_out, main_loss, aux_loss
        else:
            # at inference time return the sigmoid-activated prediction map
            return main_out
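
The key trick above is iterating over named_modules() and patching each conv's dilation, padding and stride attributes in place. Here is a minimal standalone sketch of the same idea on a bare resnet50 (the 224x224 input size is just an example), verifying that layer4 no longer halves the feature map:

import torch
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet50(pretrained=False)
# Turn layer4 into dilated convolutions and remove its downsampling stride
for n, m in resnet.layer4.named_modules():
    if 'conv2' in n:
        m.dilation, m.padding, m.stride = (2, 2), (2, 2), (1, 1)
    elif 'downsample.0' in n:
        m.stride = (1, 1)

backbone = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
                         resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4)
with torch.no_grad():
    out = backbone(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 2048, 14, 14]) instead of the original [1, 2048, 7, 7]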
