Overview
Why is nb_words = len(tokenizer.word_index) + 1?
Answer:
1. The Tokenizer's word indices start from 1, so index 0 is never assigned to any word and row 0 of the matrix always stays all zeros.
import numpy as np

# Row 0 is left as zeros for the padding index; word index i goes straight to row i.
embedding_matrix = np.zeros((nb_words, embedding_dim))
for word, i in word_index.items():
    try:
        embedding_vector = word_vectors[word]
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
    except KeyError:
        print("vector not found for word - %s" % word)
2. When the word indices are mapped to vectors, the padding value 0 is mapped to row 0 of the matrix, which remains an all-zero vector of length embedding_dim (np.zeros(embedding_dim)). A quick check follows the code below.
train_sequences_1 = tokenizer.texts_to_sequences(sentence1)                           # map each word to its integer index
train_padded_data_1 = pad_sequences(train_sequences_1, maxlen=max_sequence_length)    # pad every sequence with 0 up to max_sequence_length
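As a quick check (reusing the hypothetical tokenizer and the embedding_matrix built in the sketches above), the positions that pad_sequences fills in are 0, and index 0 looks up the all-zero first row of the matrix:

from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = tokenizer.texts_to_sequences(["the cat"])   # e.g. [[1, 3]]
padded = pad_sequences(seqs, maxlen=5)             # padded with 0 -> e.g. [[0, 0, 0, 1, 3]]
print(padded)
print(embedding_matrix[0])                         # row 0 is still all zeros, i.e. the padding vector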
3. With the extra row, every index produced above (including the padding index 0) has a matching row in embedding_matrix, so the sentence encoding is complete.
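Putting it together, here is a minimal sketch (my own wiring, not taken from the post) of how such a matrix is typically handed to a Keras Embedding layer; input_dim must equal nb_words so that both index 0 and the largest word index have a row to look up:

from tensorflow.keras.layers import Embedding
from tensorflow.keras.initializers import Constant

embedding_layer = Embedding(
    input_dim=nb_words,                                  # len(word_index) + 1 rows
    output_dim=embedding_dim,
    embeddings_initializer=Constant(embedding_matrix),   # load the pretrained vectors
    trainable=False,                                     # common choice: keep the pretrained vectors fixed
    mask_zero=True,                                      # optional: treat index 0 (padding) as a mask
)

Setting mask_zero=True is optional but pairs naturally with reserving index 0: downstream layers that support masking will then skip the padded time steps.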