```python
from pyspark.ml.feature import HashingTF, IDF, Tokenizer

sentenceData = sqlContext.createDataFrame([
    (0, "Hi I heard about Spark"),
    (0, "I wish Java could use case classes"),
    (1, "Logistic regression models are neat")
], ["label", "sentence"])

# Split each sentence into a list of words
tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
wordsData = tokenizer.transform(sentenceData)
wordsData.show(5, False)

# Hash each word list into a fixed-length term-frequency vector
hashingTF = HashingTF(inputCol="words", outputCol="rawFeatures", numFeatures=20)
featurizedData = hashingTF.transform(wordsData)
# alternatively, CountVectorizer can also be used to get term frequency vectors
featurizedData.select('rawFeatures', 'label').show(5, False)

# Rescale the raw term frequencies by inverse document frequency
idf = IDF(inputCol="rawFeatures", outputCol="features")
idfModel = idf.fit(featurizedData)
rescaledData = idfModel.transform(featurizedData)
rescaledData.select("features", "label").show(5, False)
```
In closing
The above covers computing word importance with Spark's TF-IDF algorithm, as collected and organized by Fendou Tangdou.
This content was contributed by users or collected from the web for learning reference; copyright belongs to the original authors.