I am 成就彩虹, a blogger at 靠谱客. This article, which I put together during recent development work, introduces word frequency counting with Python; I found it quite useful and am sharing it here in the hope that it serves as a reference.

Overview

Word Frequency Count for English Text

def getText():
    txt = open("hamlet.txt", "r").read()  # read the whole file
    txt = txt.lower()  # normalize everything to lowercase
    for ch in '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~':  # replace punctuation with spaces
        txt = txt.replace(ch, " ")
    return txt


hamletText = getText()
words = hamletText.split()  # split the text into a list of words
counts = {}  # dictionary mapping word -> frequency
for word in words:
    counts[word] = counts.get(word, 0) + 1
items = list(counts.items())  # convert the dictionary to a list of (word, count) pairs
items.sort(key=lambda x: x[1], reverse=True)  # sort by frequency, descending; a common idiom worth remembering
for i in range(10):  # print the 10 most frequent words
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))

Word Frequency Count for Chinese Text

import jieba  # third-party Chinese segmentation library (pip install jieba)

txt = open("threekingdoms.txt", "r", encoding="utf-8").read()
words = jieba.lcut(txt)  # segment the text into a list of words with jieba
counts = {}
for word in words:
    if len(word) == 1:  # skip single characters (mostly function words and punctuation)
        continue
    else:
        counts[word] = counts.get(word, 0) + 1
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(15):  # print the 15 most frequent words
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
The raw top-15 list mixes different aliases of the same character (for example 玄德 and 刘备) and includes frequent words that are not names at all. The refined version below merges aliases into one canonical name and removes a set of common non-name words:

import jieba

txt = open("threekingdoms.txt", "r", encoding="utf-8").read()
excludes = {"将军", "却说", "荆州", "二人", "不可", "不能", "如此"}  # frequent non-name words to drop
words = jieba.lcut(txt)  # segment the text with jieba
counts = {}
for word in words:
    if len(word) == 1:  # skip single characters
        continue
    elif word == "诸葛亮" or word == "孔明曰":
        rword = "孔明"  # merge aliases of Zhuge Liang
    elif word == "关公" or word == "云长":
        rword = "关羽"  # merge aliases of Guan Yu
    elif word == "玄德" or word == "玄德曰":
        rword = "刘备"  # merge aliases of Liu Bei
    else:
        rword = word
    counts[rword] = counts.get(rword, 0) + 1  # count under the canonical name
for word in excludes:
    counts.pop(word, None)  # drop excluded words; pop avoids a KeyError if one is absent
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(15):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
