I'm 靓丽白昼, a blogger at 靠谱客. This post covers a PyTorch error I ran into while writing RNN-GRU code: "didn't match because some of the arguments have invalid types: (float, int)". I hope it serves as a useful reference.

Overview

While writing a model that chains an RNN module into a GRU, the following error appeared:

didn't match because some of the arguments have invalid types: (float, int)

The cause is a type mismatch: the RNN's hidden size is divided by 2 before being passed to the GRU, and in Python 3 `configs.hidden_size / 2` is a float, while `hidden_size` must be an int. The buggy code:

import torch
import torch.nn as nn

class RNN_GRU(nn.Module):
    def __init__(self, configs):
        super(RNN_GRU, self).__init__()
        self.hidden_size = configs.hidden_size
        self.num_layers = configs.num_layers
        self.seq_len = configs.seq_len
        self.pred_len = configs.pred_len
        self.rnn = nn.RNN(input_size=1, hidden_size=configs.hidden_size, num_layers=configs.num_layers,
                          batch_first=True)  # batch_first – if True, input/output tensors are shaped (batch, seq, feature)
        self.gru = nn.GRU(input_size=configs.hidden_size, hidden_size=configs.hidden_size / 2,  # BUG: '/' yields a float
                          num_layers=configs.num_layers,
                          batch_first=True)
        self.fc = nn.Linear(configs.hidden_size / 2, configs.pred_len)  # BUG: '/' yields a float

    def forward(self, x):
        # Set initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).cuda()
        out, _ = self.rnn(x, h0)  # out: tensor of shape (batch_size, seq_length, hidden_size)
        h1 = torch.zeros(self.num_layers, out.size(0), self.hidden_size / 2).cuda()
        out, _ = self.gru(out, h1)  # out: tensor of shape (batch_size, seq_length, hidden_size)
        out = self.fc(out[:, -1, :])  # -1 keeps only the output of the last time step (h_n)
        return out
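The root cause is Python 3's division semantics: `/` always performs true division and returns a float, even when both operands are ints, while `//` performs floor division and returns an int for int operands:

```python
hidden_size = 64

half_true = hidden_size / 2    # true division: always a float in Python 3
half_floor = hidden_size // 2  # floor division: an int when both operands are ints

print(type(half_true).__name__, half_true)    # float 32.0
print(type(half_floor).__name__, half_floor)  # int 32
```

This is why the same expression worked in Python 2 (where `/` on two ints truncated to an int) but breaks under Python 3.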

The fix is to replace every '/' with the floor-division operator '//'. The corrected code:

import torch
import torch.nn as nn

class RNN_GRU(nn.Module):
    def __init__(self, configs):
        super(RNN_GRU, self).__init__()
        self.hidden_size = configs.hidden_size
        self.num_layers = configs.num_layers
        self.seq_len = configs.seq_len
        self.pred_len = configs.pred_len
        self.rnn = nn.RNN(input_size=1, hidden_size=configs.hidden_size, num_layers=configs.num_layers,
                          batch_first=True)  # batch_first – if True, input/output tensors are shaped (batch, seq, feature)
        self.gru = nn.GRU(input_size=configs.hidden_size, hidden_size=configs.hidden_size // 2,
                          num_layers=configs.num_layers,
                          batch_first=True)
        self.fc = nn.Linear(configs.hidden_size // 2, configs.pred_len)

    def forward(self, x):
        # Set initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).cuda()
        out, _ = self.rnn(x, h0)  # out: tensor of shape (batch_size, seq_length, hidden_size)
        h1 = torch.zeros(self.num_layers, out.size(0), self.hidden_size // 2).cuda()
        out, _ = self.gru(out, h1)  # out: tensor of shape (batch_size, seq_length, hidden_size)
        out = self.fc(out[:, -1, :])  # -1 keeps only the output of the last time step (h_n)
        return out
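As a quick sanity check (a minimal sketch assuming a recent PyTorch version; the `hidden_size = 64` value is arbitrary), `nn.GRU` accepts an int `hidden_size` but rejects the float that `/` produces:

```python
import torch.nn as nn

hidden_size = 64

# Floor division keeps hidden_size an int, which nn.GRU requires
gru = nn.GRU(input_size=hidden_size, hidden_size=hidden_size // 2, batch_first=True)
print(gru.hidden_size)  # 32

# True division yields a float; PyTorch rejects it with a TypeError
try:
    nn.GRU(input_size=hidden_size, hidden_size=hidden_size / 2, batch_first=True)
except TypeError as e:
    print("rejected:", type(e).__name__)
```

The same applies to `torch.zeros(...)` when building the initial hidden state `h1`: passing a float size triggers the "didn't match because some of the arguments have invalid types" overload error, so the `//` fix is needed in `forward` as well.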
