(5) Multi-class Sentiment Analysis with a CNN

Table of Contents

    • Preparing the Data
    • Building the Model
    • Training the Model
    • User Input
    • Complete Code

In all of the previous notebooks we performed sentiment analysis on datasets with only two classes (positive or negative). With only two classes, the output can be a single scalar, bounded between 0 and 1, that indicates which class an example belongs to. With more than two classes, the output must instead be a C-dimensional vector, where C is the number of classes.
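As a minimal sketch of that difference (the tensors and values below are invented for illustration and are not part of the model built later): a binary model emits one logit per example that a sigmoid squashes into (0, 1), while a C-class model emits C logits per example that a softmax turns into a probability distribution over the classes.

import torch
import torch.nn.functional as F

# Binary case: one logit per example -> probability of the positive class.
binary_logits = torch.tensor([0.8, -1.2])        # shape [batch_size]
print(torch.sigmoid(binary_logits))              # values in (0, 1)

# Multi-class case: C logits per example -> distribution over C classes.
C = 6
multi_logits = torch.randn(2, C)                 # shape [batch_size, C]
print(F.softmax(multi_logits, dim=1))            # each row sums to 1
print(multi_logits.argmax(dim=1))                # predicted class index per example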

In this notebook we'll classify a dataset with 6 classes. Note that this dataset isn't really a sentiment analysis dataset: it's a question dataset, and the task is to classify which category a question belongs to. However, everything covered in this notebook applies to any dataset whose examples are input sequences belonging to one of C classes.

Below, we set up the fields and load the dataset.

The first difference is that we do not need to set the dtype on the LABEL field. When dealing with a multi-class problem, PyTorch expects the labels to be numericalized LongTensors.

The second difference is that we use TREC instead of IMDB to load the TREC dataset. The fine_grained argument lets us use the fine-grained labels (of which there are 50) or not (in which case there are 6 classes). A short sketch of both differences follows.
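As a hedged illustration of those two points (this snippet is not part of the tutorial's own code): in the earlier binary notebooks the label field was typically declared with dtype=torch.float to match BCEWithLogitsLoss, whereas here the default long dtype is what CrossEntropyLoss expects; and passing fine_grained=True to TREC would load the 50-class label set instead of the 6 coarse classes.

import torch
from torchtext import data, datasets

# Binary notebooks: float labels for BCEWithLogitsLoss.
BINARY_LABEL = data.LabelField(dtype=torch.float)

# This notebook: default dtype (torch.long) for CrossEntropyLoss.
MULTI_LABEL = data.LabelField()

# Alternative load with the 50 fine-grained TREC classes (labels like 'HUM:ind' rather than just 'HUM'):
# fine_train, fine_test = datasets.TREC.splits(TEXT, MULTI_LABEL, fine_grained=True)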

Preparing the Data

import torch
from torchtext import data
from torchtext import datasets

SEED = 1234

torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize='spacy', tokenizer_language='en_core_web_sm')
LABEL = data.LabelField()

train_data, test_data = datasets.TREC.splits(TEXT, LABEL, fine_grained=False)

train_data, valid_data = train_data.split()

Let's look at one of the examples in the training set.

print(vars(train_data[-1]))
{'text': ['What', 'is', 'a', 'Cartesian', 'Diver', '?'], 'label': 'DESC'}

Next, we'll build the vocabulary. As this dataset is small (only ~3800 training examples) it also has a very small vocabulary (~7500 unique tokens), which means we do not really need to set a max_size on the vocabulary as we did before. We can sanity-check these sizes right after building the vocabulary below.

MAX_VOCAB_SIZE = 25_000

TEXT.build_vocab(train_data,
                 max_size = MAX_VOCAB_SIZE,
                 vectors = "glove.6B.100d",
                 unk_init = torch.Tensor.normal_)

LABEL.build_vocab(train_data)
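As an optional sanity check (the exact token count is approximate and depends on the random train/validation split), we can print the vocabulary sizes to confirm the small vocabulary and the 6 labels mentioned above:

print(f'Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}')    # roughly 7500 (+2 for <unk>/<pad>)
print(f'Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}')  # 6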

Next, we can check the labels.

The 6 labels (for the non-fine-grained case) correspond to the 6 types of questions in the dataset:

  • HUM for questions about humans
  • ENTY for questions about entities
  • DESC for questions asking for a description
  • NUM for questions where the answer is numerical
  • LOC for questions where the answer is a location
  • ABBR for questions about abbreviations
print(LABEL.vocab.stoi)
defaultdict(<function _default_unk_index at 0x7f0a50190d08>, {'HUM': 0, 'ENTY': 1, 'DESC': 2, 'NUM': 3, 'LOC': 4, 'ABBR': 5})

As always, we set up the iterators.

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)

We'll use the CNN model from a previous notebook, and it works on this dataset as well. The only difference is that output_dim is now C instead of 1.

Building the Model

import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes,
                 output_dim, dropout, pad_idx):
        super(CNN, self).__init__()
        # pass pad_idx so the <pad> embedding stays zero during training
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
        self.convs = nn.ModuleList([
            nn.Conv2d(in_channels=1,
                      out_channels=n_filters,
                      kernel_size=(fs, embedding_dim))
            for fs in filter_sizes
        ])
        self.fc = nn.Linear(n_filters * len(filter_sizes), output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text):
        # text = [sent_len, batch_size]
        text = text.permute(1, 0)
        # text = [batch_size, sent_len]
        embedded = self.embedding(text)
        # embedded = [batch_size, sent_len, emb_dim]
        embedded = embedded.unsqueeze(1)
        # embedded = [batch_size, 1, sent_len, emb_dim]
        convd = [conv(embedded).squeeze(3) for conv in self.convs]
        # conv_n = [batch_size, n_filters, sent_len - fs + 1]
        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in convd]
        # pooled_n = [batch_size, n_filters]
        cat = self.dropout(torch.cat(pooled, dim=1))
        # cat = [batch_size, n_filters * len(filter_sizes)]
        return self.fc(cat)

We define the model, making sure to set OUTPUT_DIM to C. We can get C easily by using the size of the LABEL vocab, just like we used the size of the TEXT vocab to get the size of the input vocabulary.

The examples in this dataset are generally a lot smaller than those in the IMDb dataset, so we'll use smaller filter sizes.

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2, 3, 4]
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)

Checking the number of parameters, we can see that the smaller filter sizes mean this model has about a third of the parameters of the CNN model used on the IMDb dataset.

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 834,206 trainable parameters

Next, we'll load the pre-trained embeddings.

pretrained_embeddings = TEXT.vocab.vectors

model.embedding.weight.data.copy_(pretrained_embeddings)
tensor([[-0.1117, -0.4966,  0.1631,  ...,  1.2647, -0.2753, -0.1325],
        [-0.8555, -0.7208,  1.3755,  ...,  0.0825, -1.1314,  0.3997],
        [ 0.1638,  0.6046,  1.0789,  ..., -0.3140,  0.1844,  0.3624],
        ...,
        [-0.3110, -0.3398,  1.0308,  ...,  0.5317,  0.2836, -0.0640],
        [ 0.0091,  0.2810,  0.7356,  ..., -0.7508,  0.8967, -0.7631],
        [ 0.4306,  1.2011,  0.0873,  ...,  0.8817,  0.3722,  0.3458]])

Then we zero the initial weights of the <unk> and <pad> tokens.

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

Training the Model

Another difference from the previous notebooks is our loss function (aka criterion). Previously we used BCEWithLogitsLoss; now we use CrossEntropyLoss. Without going into too much detail, CrossEntropyLoss performs a softmax over our model outputs, and the loss is the cross-entropy between that and the labels.

Generally:

  • CrossEntropyLoss is used when our examples belong to exactly one of C classes.
  • BCEWithLogitsLoss is used when our examples belong to only two classes (0 and 1), and is also used when examples can belong to between 0 and C classes at once (aka multi-label classification); see the sketch after this list.
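A minimal toy sketch of that distinction (the numbers here are invented for illustration): CrossEntropyLoss consumes raw logits of shape [batch_size, C] with one integer class index per example, while BCEWithLogitsLoss consumes one logit per (example, class) pair with an independent 0/1 target for each.

import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])       # [batch_size=2, C=3] raw scores

# Single-label, multi-class: one correct class index per example.
ce = nn.CrossEntropyLoss()
print(ce(logits, torch.tensor([0, 1])))                          # applies softmax + cross-entropy internally

# Multi-label: each class is an independent yes/no decision per example.
bce = nn.BCEWithLogitsLoss()
print(bce(logits, torch.tensor([[1., 0., 0.], [0., 1., 1.]])))   # sigmoid + binary cross-entropy per element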
import torch.optim as optim

optimizer = optim.Adam(model.parameters())

criterion = nn.CrossEntropyLoss()

model = model.to(device)
criterion = criterion.to(device)

Previously, we had a function that calculated accuracy in the binary-label case, where we said that if the value was over 0.5 we assumed the prediction was positive. With more than 2 classes, our model outputs a C-dimensional vector, where the value of each element indicates how strongly the model believes the example belongs to that class.

For example, in our labels we have: 'HUM' = 0, 'ENTY' = 1, 'DESC' = 2, 'NUM' = 3, 'LOC' = 4 and 'ABBR' = 5. If the output of our model was something like [5.1, 0.3, 0.1, 2.1, 0.2, 0.6], this means the model strongly believes the example belongs to class 0 (a question about a human) and slightly believes it belongs to class 3 (a numerical question). The small worked example below makes this concrete.
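As a small worked example, using the made-up output vector from the paragraph above (not an actual model prediction):

import torch
import torch.nn.functional as F

scores = torch.tensor([5.1, 0.3, 0.1, 2.1, 0.2, 0.6])

pred_index = scores.argmax().item()
print(pred_index)                      # 0
print(LABEL.vocab.itos[pred_index])    # 'HUM', using the label vocab built earlier
print(F.softmax(scores, dim=0))        # nearly all of the probability mass is on class 0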

We calculate accuracy by performing an argmax to get the index of the maximum predicted value for each element in the batch, counting how many times this equals the actual label, and then averaging across the batch.

def categorical_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    max_preds = preds.argmax(dim = 1, keepdim = True)  # get the index of the max probability
    correct = max_preds.squeeze(1).eq(y)
    return correct.sum() / torch.FloatTensor([y.shape[0]])

The training loop is similar to before, without the need to squeeze the model predictions, since CrossEntropyLoss expects the input to be [batch_size, n_classes] and the labels to be [batch_size].

The labels need to be a LongTensor, which they are by default, since we did not set the dtype to FloatTensor as before. A quick shape check appears below.
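As an optional sanity check of those shape expectations (random tensors, not real model output):

dummy_preds = torch.randn(64, len(LABEL.vocab)).to(device)            # [batch_size, n_classes]
dummy_labels = torch.randint(0, len(LABEL.vocab), (64,)).to(device)   # [batch_size], dtype torch.long
print(criterion(dummy_preds, dummy_labels))                           # runs without any squeezing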

def train(model, iterator, optimizer, criterion):

    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for batch in iterator:

        optimizer.zero_grad()

        predictions = model(batch.text)

        loss = criterion(predictions, batch.label)

        acc = categorical_accuracy(predictions, batch.label)

        loss.backward()

        optimizer.step()

        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

The evaluation loop is, again, similar to before.

def evaluate(model, iterator, criterion):

    epoch_loss = 0
    epoch_acc = 0

    model.eval()

    with torch.no_grad():

        for batch in iterator:

            predictions = model(batch.text)

            loss = criterion(predictions, batch.label)

            acc = categorical_accuracy(predictions, batch.label)

            epoch_loss += loss.item()
            epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

Next, we train our model.

N_EPOCHS = 5

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut5-model.pt')

    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc*100:.2f}%')
Epoch: 01 | Epoch Time: 0m 3s
    Train Loss: 1.281 | Train Acc: 49.74%
     Val. Loss: 0.940 |  Val. Acc: 66.27%
Epoch: 02 | Epoch Time: 0m 3s
    Train Loss: 0.855 | Train Acc: 69.60%
     Val. Loss: 0.772 |  Val. Acc: 73.10%
Epoch: 03 | Epoch Time: 0m 3s
    Train Loss: 0.645 | Train Acc: 77.69%
     Val. Loss: 0.645 |  Val. Acc: 77.02%
Epoch: 04 | Epoch Time: 0m 3s
    Train Loss: 0.476 | Train Acc: 84.39%
     Val. Loss: 0.556 |  Val. Acc: 80.35%
Epoch: 05 | Epoch Time: 0m 3s
    Train Loss: 0.364 | Train Acc: 88.34%
     Val. Loss: 0.513 |  Val. Acc: 81.40%

Finally, let's run our model on the test set!

model.load_state_dict(torch.load('tut5-model.pt'))

test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
Test Loss: 0.390 | Test Acc: 86.57%

User Input

Similar to how we made a function to predict the sentiment of any given sentence, we can now make a function that predicts the class of the question given.

The only difference here is that instead of using a sigmoid to squash the output between 0 and 1, we use the argmax to get the index of the highest-scoring class. We then use this index with the label vocab to get the human-readable label.

import spacy
nlp = spacy.load('en_core_web_sm')

def predict_class(model, sentence, min_len = 4):
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    if len(tokenized) < min_len:
        tokenized += ['<pad>'] * (min_len - len(tokenized))
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device)  # move to the same device as the model
    tensor = tensor.unsqueeze(1)
    preds = model(tensor)
    max_preds = preds.argmax(dim = 1)
    return max_preds.item()
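If you also want a probability for each class rather than just the winning index, a small hedged variant (not part of the original tutorial) can apply a softmax to the logits. It reuses nlp, TEXT, LABEL, device and F (torch.nn.functional) defined above.

def predict_class_probs(model, sentence, min_len = 4):
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    if len(tokenized) < min_len:
        tokenized += ['<pad>'] * (min_len - len(tokenized))
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device).unsqueeze(1)
    probs = F.softmax(model(tensor), dim = 1).squeeze(0)   # distribution over the 6 classes
    return {LABEL.vocab.itos[i]: p.item() for i, p in enumerate(probs)}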

Now, let's try it out on a few different questions...

pred_class = predict_class(model, "Who is Keyser Söze?") print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 0 = HUM

pred_class = predict_class(model, "How many minutes are in six hundred and eighteen hours?") print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 3 = NUM

pred_class = predict_class(model, "What continent is Bulgaria in?") print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 4 = LOC

pred_class = predict_class(model, "What does WYSIWYG stand for?") print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

Predicted class is: 5 = ABBR

Complete Code

import torch
from torchtext import data
from torchtext import datasets

SEED = 1234

torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize='spacy', tokenizer_language='en_core_web_sm')
LABEL = data.LabelField()

train_data, test_data = datasets.TREC.splits(TEXT, LABEL, fine_grained=False)
train_data, valid_data = train_data.split()

print(vars(train_data[-1]))

MAX_VOCAB_SIZE = 25_000

TEXT.build_vocab(
    train_data,
    max_size = MAX_VOCAB_SIZE,
    vectors = 'glove.6B.100d',
    unk_init = torch.Tensor.normal_
)
LABEL.build_vocab(train_data)

print(LABEL.vocab.stoi)

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size=BATCH_SIZE,
    device=device
)

import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes,
                 output_dim, dropout, pad_idx):
        super(CNN, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
        self.convs = nn.ModuleList([
            nn.Conv2d(in_channels=1,
                      out_channels=n_filters,
                      kernel_size=(fs, embedding_dim))
            for fs in filter_sizes
        ])
        self.fc = nn.Linear(n_filters * len(filter_sizes), output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text):
        # text = [sent_len, batch_size]
        text = text.permute(1, 0)
        # text = [batch_size, sent_len]
        embedded = self.embedding(text)
        # embedded = [batch_size, sent_len, emb_dim]
        embedded = embedded.unsqueeze(1)
        # embedded = [batch_size, 1, sent_len, emb_dim]
        convd = [conv(embedded).squeeze(3) for conv in self.convs]
        # conv_n = [batch_size, n_filters, sent_len - fs + 1]
        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in convd]
        # pooled_n = [batch_size, n_filters]
        cat = self.dropout(torch.cat(pooled, dim=1))
        # cat = [batch_size, n_filters * len(filter_sizes)]
        return self.fc(cat)

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2, 3, 4]
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')

pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

import torch.optim as optim

optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()

model = model.to(device)
criterion = criterion.to(device)

def categorical_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    max_preds = preds.argmax(dim = 1, keepdim = True)  # get the index of the max probability
    correct = max_preds.squeeze(1).eq(y)
    return correct.sum() / torch.FloatTensor([y.shape[0]])

def train(model, iterator, optimizer, criterion):

    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for batch in iterator:

        optimizer.zero_grad()

        predictions = model(batch.text)

        loss = criterion(predictions, batch.label)

        acc = categorical_accuracy(predictions, batch.label)

        loss.backward()

        optimizer.step()

        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

def evaluate(model, iterator, criterion):

    epoch_loss = 0
    epoch_acc = 0

    model.eval()

    with torch.no_grad():

        for batch in iterator:

            predictions = model(batch.text)

            loss = criterion(predictions, batch.label)

            acc = categorical_accuracy(predictions, batch.label)

            epoch_loss += loss.item()
            epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

N_EPOCHS = 5

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut5-model.pt')

    print(f'Epoch: {epoch + 1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc * 100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc * 100:.2f}%')

model.load_state_dict(torch.load('tut5-model.pt'))

test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')

import spacy
nlp = spacy.load('en_core_web_sm')

def predict_class(model, sentence, min_len = 4):
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    if len(tokenized) < min_len:
        tokenized += ['<pad>'] * (min_len - len(tokenized))
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device)
    tensor = tensor.unsqueeze(1)
    preds = model(tensor)
    max_preds = preds.argmax(dim = 1)
    return max_preds.item()

pred_class = predict_class(model, "Who is Keyser Söze?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

pred_class = predict_class(model, "How many minutes are in six hundred and eighteen hours?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

pred_class = predict_class(model, "What continent is Bulgaria in?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')

pred_class = predict_class(model, "What does WYSIWYG stand for?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
