【Deep Learning】Recurrent Neural Networks (RNN): Derivation and Implementation

Overview

       This post mainly follows the wildml blog. All of the code is written in plain Python without any deep learning framework, which makes it very helpful for understanding how an RNN actually works.

1. Language Model

        If a sentence consists of m words w_1, w_2, ..., w_m, then the probability of generating the sentence is:

        P(w_1, w_2, ..., w_m) = ∏_{i=1}^{m} P(w_i | w_1, ..., w_{i-1})

        That is, the model assumes the probability of generating the next word depends only on the words that precede it in the sentence. For example, the probability of generating "How are you" can be written as:

 P(How are you) = P(How) * P(are | How) * P(you | How, are)
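
As a quick numerical illustration of this chain rule (a sketch that is not in the original post; the conditional probabilities below are made up rather than taken from a trained model), multiplying the per-step conditionals gives the sentence probability:

# Hypothetical conditional probabilities for "How are you";
# in practice a trained language model would supply these values.
cond_probs = [0.05,   # P(How)
              0.40,   # P(are | How)
              0.60]   # P(you | How, are)

sentence_prob = 1.0
for p in cond_probs:
    sentence_prob *= p   # chain rule: multiply the conditionals

print("P(How are you) = %g" % sentence_prob)   # 0.012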

2. Data Preprocessing

       During corpus preprocessing we drop low-frequency words to keep the vocabulary manageable: here we keep the 8000 most frequent words and replace every remaining word with a single placeholder token, UNKNOWN_TOKEN. After preprocessing, every word is mapped to an integer index. To let the model learn which words typically start and end a sentence, we also introduce two special tokens, SENTENCE_START and SENTENCE_END. The code is as follows (Python 2, using the csv, itertools and nltk modules):

import csv
import itertools
import nltk
 
vocabulary_size = 8000
unknown_token = "UNKNOWN_TOKEN"
sentence_start_token = "SENTENCE_START"
sentence_end_token = "SENTENCE_END"
 
# Read the data and append SENTENCE_START and SENTENCE_END tokens
print "Reading CSV file..."
with open('data/reddit-comments-2015-08.csv', 'rb') as f:
    reader = csv.reader(f, skipinitialspace=True)
    reader.next()
    # Split full comments into sentences
    sentences = itertools.chain(*[nltk.sent_tokenize(x[0].decode('utf-8').lower()) for x in reader])
    # Append SENTENCE_START and SENTENCE_END
    sentences = ["%s %s %s" % (sentence_start_token, x, sentence_end_token) for x in sentences]
print "Parsed %d sentences." % (len(sentences))
     
# Tokenize the sentences into words
tokenized_sentences = [nltk.word_tokenize(sent) for sent in sentences]
 
# Count the word frequencies
word_freq = nltk.FreqDist(itertools.chain(*tokenized_sentences))
print "Found %d unique words tokens." % len(word_freq.items())
 
# Get the most common words and build index_to_word and word_to_index vectors
vocab = word_freq.most_common(vocabulary_size-1)
index_to_word = [x[0] for x in vocab]
index_to_word.append(unknown_token)
word_to_index = dict([(w,i) for i,w in enumerate(index_to_word)])
 
print "Using vocabulary size %d." % vocabulary_size
print "The least frequent word in our vocabulary is '%s' and appeared %d times." % (vocab[-1][0], vocab[-1][1])
 
# Replace all words not in our vocabulary with the unknown token
for i, sent in enumerate(tokenized_sentences):
    tokenized_sentences[i] = [w if w in word_to_index else unknown_token for w in sent]
 
print "nExample sentence: '%s'" 
