I'm 无情鸡, a blogger at 靠谱客. This is an article I collected recently during development; it walks through a simple Python crawler that scrapes the Baidu Baike "Python" entry and its related entry pages. I found it quite good and am sharing it here in the hope that it can serve as a reference.

Overview

Target analysis:

Target: the Baidu Baike "Python" entry and the entry pages it links to - title and summary

Entry page: https://baike.baidu.com/item/Python/407313

URL format:

- Entry page URL: /item/xxxx

Data format:

- Title: inside the <dd class="lemmaWgt-lemmaTitle-title"><h1>...</h1></dd> node

- Summary: inside the <div class="lemma-summary">...</div> node

Page encoding: utf-8
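
Because the entry links on the page are relative paths of the form /item/xxxx, the crawler has to join them with the page URL before downloading them. A minimal sketch of that resolution, using Python 2's urlparse module (the same call the parser below relies on; the link value here is just a made-up example):

# coding:utf-8
import urlparse  # Python 2; in Python 3 the same function lives in urllib.parse

page_url = "https://baike.baidu.com/item/Python/407313"
relative_href = "/item/%E8%AE%A1%E7%AE%97%E6%9C%BA"  # a hypothetical /item/xxxx link

full_url = urlparse.urljoin(page_url, relative_href)
print full_url  # https://baike.baidu.com/item/%E8%AE%A1%E7%AE%97%E6%9C%BA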

Crawler main entry file

spider_main.py

#coding:utf-8

import url_manager
import html_downloader
import html_parser
import html_outputer


class SpiderMain(object):
    def __init__(self):
        # URL manager
        self.urls = url_manager.UrlManager()
        # downloader
        self.downloader = html_downloader.HtmlDownloader()
        # parser
        self.parser = html_parser.HtmlParser()
        # output collector
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        # count tracks which URL is currently being crawled
        count = 1
        self.urls.add_new_url(root_url)
        # keep looping while there are URLs left to crawl
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print 'craw %d : %s' % (count, new_url)
                # download the page
                html_cont = self.downloader.download(new_url)
                # parse the page: collect new URLs and the page data
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                # stop after 10 pages (an earlier, commented-out variant used 1000)
                if count >= 10:
                    break
                count = count + 1
            except Exception as e:
                print 'craw failed'
                print e
        # write the collected data to the output page
        self.outputer.output_html()


if __name__ == "__main__":
    root_url = "https://baike.baidu.com/item/Python/407313"
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
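
The plain import url_manager style imports assume all five modules sit side by side in one directory (the folder name below is only illustrative). Running python spider_main.py from that directory crawls the root entry plus the entries it links to (10 pages in this version) and writes output.html next to the scripts.

baike_spider/
├── spider_main.py
├── url_manager.py
├── html_downloader.py
├── html_parser.py
└── html_outputer.py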

URL manager

url_manager.py

#coding:utf-8

class UrlManager(object):
    def __init__(self):
        # URLs waiting to be crawled
        self.new_urls = set()
        # URLs that have already been crawled
        self.old_urls = set()

    def add_new_url(self, url):
        if url is None:
            return
        # only add the URL if it is neither queued nor already crawled
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
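
A quick way to convince yourself of the deduplication behaviour is to drive the manager by hand; a small sketch (the URL is the entry page from above):

# coding:utf-8
from url_manager import UrlManager

manager = UrlManager()
manager.add_new_url("https://baike.baidu.com/item/Python/407313")
# adding the same URL again has no effect: it is already queued
manager.add_new_url("https://baike.baidu.com/item/Python/407313")
print manager.has_new_url()   # True
url = manager.get_new_url()   # moves the URL from new_urls to old_urls
# once a URL has been crawled it is never queued again
manager.add_new_url(url)
print manager.has_new_url()   # False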

Page downloader

html_downloader.py

#coding:utf-8

import urllib2


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib2.urlopen(url)
        if response.getcode() != 200:
            return None
        return response.read()
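
urllib2.urlopen sends Python's default User-Agent, and Baidu Baike sometimes serves such requests a stripped-down or error page. If download keeps returning None or empty content, a variant that sets a browser-like User-Agent header is worth trying; a sketch under that assumption (the header string is only an example):

#coding:utf-8

import urllib2


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        # pretend to be a regular browser instead of urllib2's default agent
        request = urllib2.Request(url, headers={
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
        })
        response = urllib2.urlopen(request)
        if response.getcode() != 200:
            return None
        return response.read()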

Page parser

html_parser.py

#coding:utf-8

import re
import urlparse

from bs4 import BeautifulSoup


class HtmlParser(object):
    def _get_new_urls(self, page_url, soup):
        # collect every entry link, i.e. <a href="/item/xxxx">
        links = soup.find_all('a', href=re.compile(r"/item/.*"))
        new_urls = set()
        for link in links:
            new_url = link['href']
            # resolve the relative /item/ path against the current page URL
            new_full_url = urlparse.urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        # url
        res_data['url'] = page_url
        # title: <dd class="lemmaWgt-lemmaTitle-title"><h1>Python</h1></dd>
        title_node = soup.find('dd', class_='lemmaWgt-lemmaTitle-title').find("h1")
        res_data['title'] = title_node.get_text()
        # summary: <div class="lemma-summary">...</div>
        summary_node = soup.find('div', class_="lemma-summary")
        res_data['summary'] = summary_node.get_text()
        return res_data

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data
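
To see what parse returns without hitting the site, it can be fed a tiny HTML fragment shaped like a Baike entry page; a sketch (the fragment and its contents are made up):

# coding:utf-8
from html_parser import HtmlParser

html_cont = '''
<html><body>
  <dd class="lemmaWgt-lemmaTitle-title"><h1>Python</h1></dd>
  <div class="lemma-summary">Python is a programming language.</div>
  <a href="/item/Guido">Guido</a>
</body></html>
'''

parser = HtmlParser()
new_urls, new_data = parser.parse("https://baike.baidu.com/item/Python/407313", html_cont)
print new_urls           # set(['https://baike.baidu.com/item/Guido'])
print new_data['title']  # Python
print new_data['summary']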

Page outputer

html_outputer.py

#coding:utf-8

class HtmlOutputer(object):
    def __init__(self):
        self.datas = []

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    # ascii: the collected title/summary are unicode, so they are encoded
    # to utf-8 below before being written to the file
    def output_html(self):
        fout = open('output.html', 'w')
        fout.write("<html>")
        fout.write("<body>")
        fout.write("<table>")
        for data in self.datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>" % data['url'])
            fout.write("<td>%s</td>" % data['title'].encode('utf-8'))
            fout.write("<td>%s</td>" % data['summary'].encode('utf-8'))
            fout.write("</tr>")
        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
        fout.close()
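
The "ascii" hint above is about Python 2's default codec: writing a unicode object that contains non-ASCII characters to a file opened with open() raises UnicodeEncodeError, which is why title and summary are encoded to utf-8 first. A minimal demonstration (the string is just an example):

# coding:utf-8
title = u'Python（计算机程序设计语言）'

fout = open('demo.html', 'w')
try:
    fout.write(title)  # UnicodeEncodeError: 'ascii' codec can't encode characters ...
except UnicodeEncodeError as e:
    print e
fout.write(title.encode('utf-8'))  # encoding explicitly as utf-8 works
fout.close()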

Running the code:

(screenshot of the crawler's console output omitted)

Result page:

(screenshot of the generated output.html omitted)

Finally

That is the whole write-up that 无情鸡 collected and organized on building a simple Python crawler for the Baidu Baike Python entry pages. I hope it helps you with any related development problems you run into.
