Overview
Preface
The previous section covered regular expressions, but a poorly written regex can easily produce results we did not want. A web page has a particular structure and hierarchy, and many nodes carry id or class attributes, so we can also extract data by relying on that structure and those attributes.
This section introduces Beautiful Soup, a library that parses web pages using exactly those structural and attribute features.
Installing Beautiful Soup
pip3 install beautifulsoup4
Parsers
Beautiful Soup relies on a parser to do the actual parsing. The lxml parser handles both HTML and XML, is fast, and is tolerant of malformed markup, so this book recommends it.
To use lxml, pass 'lxml' as the second argument when initializing Beautiful Soup. For example:
from bs4 import BeautifulSoup
soup = BeautifulSoup('<p>Hello</p>', 'lxml')
print(soup.p.string)
Output:
Hello
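If lxml is not installed, Beautiful Soup can fall back on Python's built-in html.parser, which is slower and less fault-tolerant than lxml but requires no extra dependency. A minimal sketch:

```python
from bs4 import BeautifulSoup

# html.parser ships with the Python standard library, so no extra install is needed
soup = BeautifulSoup('<p>Hello</p>', 'html.parser')
text = soup.p.string
print(text)  # Hello
```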
Basic Usage
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.prettify())
print(soup.title.string)
The prettify() method prints the parsed string with standard indentation. Note that the output contains body and html nodes even though the input string did not: Beautiful Soup automatically corrects non-standard HTML. This correction is not done by prettify() itself; it happens when the BeautifulSoup object is initialized.
soup.title.string prints the text inside the HTML title node.
Node Selectors
Calling a node's name directly selects that node element, and calling its string attribute then returns the text inside it. When a document's structure is simple and unambiguous, this is a convenient way to parse it.
Selecting Elements
The following example walks through selecting elements:
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.title)
print(type(soup.title))
print(soup.title.string)
print(soup.head)
print(soup.p)
Running the program produces:
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story
<head><title>The Dormouse's story</title></head>
<p class="title" name="dormouse"><b>The Dormouse's story</b></p>
As print(type(soup.title)) shows, the result is of type bs4.element.Tag, an important data structure in Beautiful Soup. Every selection made this way yields a Tag. A Tag has several attributes; for example, calling its string attribute returns the node's text content.
print(soup.p) prints only the first p node; the later p nodes are not selected. In other words, when multiple nodes match, this style of selection returns only the first match and ignores the rest.
Extracting Information
(1) Getting the name
The name attribute returns a node's name. Using the same text as above, select the title node and call name:
print(soup.title.name)
The result is as follows:
title
(2) Getting attributes
A node may carry several attributes, such as id and class. After selecting the element, call attrs to get all of them:
print(soup.p.attrs)
print(soup.p.attrs['name'])
The result:
{'class': ['title'], 'name': 'dormouse'}
dormouse
As you can see, attrs returns a dictionary. There is an even simpler way: skip attrs and index the node element directly with square brackets, passing the attribute name:
print(soup.p['name'])
print(soup.p['class'])
The result:
dormouse
['title']
Note that some lookups return a string while others return a list of strings. An attribute such as name has a single value, so a single string comes back; a node element may carry several classes, so class returns a list.
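One practical note beyond the original example: bracket indexing raises KeyError when the attribute is absent, while a Tag also supports dict-style get(), which returns None instead. A small sketch, using the built-in html.parser:

```python
from bs4 import BeautifulSoup

html = '<p class="title" name="dormouse"><b>The Dormouse\'s story</b></p>'
soup = BeautifulSoup(html, 'html.parser')

p = soup.p
print(p['name'])        # dormouse -- bracket access; raises KeyError if absent
print(p.get('id'))      # None -- dict-style get() for attributes that may be missing
print(p.get('id', ''))  # a default can be supplied, as with dict.get()
```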
(3) Getting content
The string attribute returns the text contained in a node element. For example, to get the text of the first p node:
print(soup.p.string)
The result:
The Dormouse's story
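A caveat worth knowing: string only returns text when the node has a single text child; for mixed content it returns None, and get_text() collects all descendant text instead. A sketch with html.parser:

```python
from bs4 import BeautifulSoup

html = '<p>Hello <b>world</b></p>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.p.string)      # None -- the p node has more than one child
print(soup.b.string)      # world -- single text child, so string works
print(soup.p.get_text())  # Hello world -- concatenates all descendant text
```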
Nested Selection
Every selection returns a bs4.element.Tag, so a result can itself be used to continue selecting deeper nodes:
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.head.title)
print(type(soup.head.title))
print(soup.head.title.string)
The result is as follows:
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story
Associated Selection
Sometimes you cannot reach the desired node element in a single step; instead you first select one node and then, starting from it, select its children, parent, siblings, and so on.
(1) Child and descendant nodes
After selecting a node element, you can call the contents attribute to get its direct children:
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><span>Elsie</span></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.contents)
The result:
['Once upon a time there were three little sisters; and their names were\n', <a class="sister" href="http://example.com/elsie" id="link1"><span>Elsie</span></a>, ',\n', <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, ' and\n', <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>, ';\nand they lived at the bottom of a well.']
As you can see, the result is a list. The p node contains both text and tags, and all of them are returned together in one list.
Note that every element in the list is a direct child of the p node. The first a node contains a span node inside it, which is a descendant, yet the result does not list the span separately. The contents attribute therefore yields a list of direct children only.
We can get the same nodes through the children attribute:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.children)
for i, child in enumerate(soup.p.children):
    print(i, child)
The result:
<list_iterator object at 0x000002851B0EE0F0>
0 Once upon a time there were three little sisters; and their names were
1 <a class="sister" href="http://example.com/elsie" id="link1"><span>Elsie</span></a>
2 ,
3 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
4
and
5 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
6 ;
and they lived at the bottom of a well.
With the same HTML text, calling the children attribute returns an iterator, so a for loop is used to print the items.
To get all descendant nodes, call the descendants attribute instead:
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><span>Elsie</span></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.descendants)
for i, child in enumerate(soup.p.descendants):
    print(i, child)
The result is as follows:
<generator object descendants at 0x000002C2FC548518>
0 Once upon a time there were three little sisters; and their names were
1 <a class="sister" href="http://example.com/elsie" id="link1"><span>Elsie</span></a>
2 <span>Elsie</span>
3 Elsie
4 ,
5 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
6 Lacie
7
and
8 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
9 Tillie
10 ;
and they lived at the bottom of a well.
This time the result is a generator. Iterating over it shows that the output now includes the span node: descendants recursively walks all children and returns every descendant node.
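The three access paths can be compared directly: contents is a list of direct children, children an iterator over those same nodes, and descendants a recursive walk of the whole subtree. A small sketch (html.parser used here for brevity):

```python
from bs4 import BeautifulSoup

html = '<p><a><span>Elsie</span></a></p>'
soup = BeautifulSoup(html, 'html.parser')

direct = soup.p.contents         # [<a>...</a>] -- one direct child
same = list(soup.p.children)     # the same nodes, but via an iterator
deep = list(soup.p.descendants)  # <a>, <span>, and the text 'Elsie'

print(len(direct), len(same), len(deep))  # 1 1 3
```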
(2) Parent and ancestor nodes
To get the parent of a node element, call the parent attribute:
html = """
<html>
<head><title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.a.parent)
The result:
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
Here we selected the parent of the first a node. Clearly its parent is the p node, and the output is that p node together with everything inside it.
Note that parent returns only the direct parent of the a node; it does not continue outward to the earlier ancestors. To get all ancestor nodes, call the parents attribute:
html = """
<html>
<head><title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(type(soup.a.parents))
print(list(enumerate(soup.a.parents)))
The result:
<class 'generator'>
[(0, <p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>), (1, <body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story">...</p>
</body>), (2, <html>
<head><title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story">...</p>
</body></html>), (3, <html>
<head><title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story">...</p>
</body></html>)]
The result is a generator. Here its indices and contents are printed as a list, and the list elements are the ancestors of the a node. Note the last entry: it is the BeautifulSoup object for the whole document, which prints the same as the html node, so the html markup appears twice in the output.
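Since each yielded ancestor is a Tag, it is often enough to collect just the tag names. The last ancestor is the BeautifulSoup object itself, whose name is '[document]'. A sketch with html.parser:

```python
from bs4 import BeautifulSoup

html = '<html><body><p><a href="#">link</a></p></body></html>'
soup = BeautifulSoup(html, 'html.parser')

# Walk upward from the a node and record each ancestor's tag name
names = [parent.name for parent in soup.a.parents]
print(names)  # ['p', 'body', 'html', '[document]']
```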
(3) Sibling nodes
To get nodes at the same level (that is, siblings), use the following attributes:
html = """
<html>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
Hello
<a href="http://example.com/lacie" class="sister" id="link2">lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of the well.
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print('Next Sibling', soup.a.next_sibling)
print('Prev Sibling', soup.a.previous_sibling)
print('Next Siblings', list(enumerate(soup.a.next_siblings)))
print('Prev sibling', list(enumerate(soup.a.previous_siblings)))
The result:
Next Sibling
Hello
Prev Sibling
Once upon a time there were three little sisters; and their names were
Next Siblings [(0, '\nHello\n'), (1, <a class="sister" href="http://example.com/lacie" id="link2">lacie</a>), (2, '\nand\n'), (3, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>), (4, '\nand they lived at the bottom of the well.\n')]
Prev sibling [(0, '\nOnce upon a time there were three little sisters; and their names were\n')]
Four attributes are used here: next_sibling and previous_sibling return the next and previous sibling of a node, while next_siblings and previous_siblings return generators over all following and all preceding siblings.
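Because the whitespace between tags counts as a text sibling, next_sibling frequently returns a string rather than the next tag, as the output above shows. Beautiful Soup's find_next_sibling() and find_previous_sibling() methods skip text nodes and return Tags directly. A sketch:

```python
from bs4 import BeautifulSoup

html = '<p><a id="link1">Elsie</a>\n<a id="link2">Lacie</a></p>'
soup = BeautifulSoup(html, 'html.parser')

first = soup.a
print(repr(first.next_sibling))   # '\n' -- the text node between the two tags
print(first.find_next_sibling())  # <a id="link2">Lacie</a> -- skips text nodes
```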
(4) Extracting information
The previous examples showed how to select related nodes. To extract information from them, such as text and attributes, the same methods apply:
html = """
<html>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Bob</a><a href="http://example.com/lacie"
class="sister" id="link2">lacie</a>
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print('Next Sibling:')
print(type(soup.a.next_sibling))
print(soup.a.next_sibling)
print(soup.a.next_sibling.string)
print('Parent:')
print(type(soup.a.parents))
print(list(soup.a.parents)[0])
print(list(soup.a.parents)[0].attrs['class'])
The result:
Next Sibling:
<class 'bs4.element.Tag'>
<a class="sister" href="http://example.com/lacie" id="link2">lacie</a>
lacie
Parent:
<class 'generator'>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Bob</a><a class="sister" href="http://example.com/lacie" id="link2">lacie</a>
</p>
['story']
Method Selectors
The selection techniques described so far all work through attribute access. They are fast, but for more complex selections they become clumsy and inflexible. Beautiful Soup also provides query methods such as find_all() and find(); call them with the appropriate arguments to query flexibly.
find_all()
It queries for all elements matching the given criteria.
Its API is as follows:
find_all(name, attrs, recursive, text, **kwargs)
(1) name
We can query elements by node name. For example:
html = """
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(name='ul'))
print(type(soup.find_all(name='ul')[0]))
The result:
[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
<class 'bs4.element.Tag'>
Here find_all() is called with the name parameter set to ul, meaning we want to query every ul node. The result is a list of length 2, and each element is again of type bs4.element.Tag.
Since each element is a Tag, nested queries are possible. Using the same text, after selecting all ul nodes we can go on to query the li nodes inside each one:
for ul in soup.find_all(name='ul'):
    print(ul.find_all(name='li'))
    for li in ul.find_all(name='li'):
        print(li.string)
The result:
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
Foo
Bar
Jay
[<li class="element">Foo</li>, <li class="element">Bar</li>]
Foo
Bar
(2) attrs
Besides the node name, we can also pass attributes to query by, as in the following example:
html = """
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(attrs={'id':'list-1'}))
print(soup.find_all(attrs={'name':'elements'}))
The result:
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
Here the attrs parameter is a dictionary, and the result comes back as a list. For common attributes such as id and class, we do not need attrs at all. For example, to query the node whose id is list-1, we can pass the id parameter directly:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(id='list-1'))
print(soup.find_all(class_='element'))
The result:
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
Passing id='list-1' directly queries the node element whose id is list-1. For class, since class is a reserved keyword in Python, a trailing underscore is needed: class_='element'. The result is still a list of Tags.
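Filters can also be combined, passing a node name together with one or more keyword arguments in a single call, and a query can be scoped to a subtree by calling find_all() on a previously selected Tag. A sketch with illustrative list markup:

```python
from bs4 import BeautifulSoup

html = '''
<ul id="list-1">
  <li class="element">Foo</li>
  <li class="element">Bar</li>
</ul>
<ul id="list-2">
  <li class="element">Baz</li>
</ul>
'''
soup = BeautifulSoup(html, 'html.parser')

# Name and keyword filters combined in one call: all matching li nodes
items = soup.find_all('li', class_='element')
print(len(items))  # 3

# Scope the query to one subtree by querying from a selected Tag
first_list = soup.find(id='list-1')
names = [li.string for li in first_list.find_all('li', class_='element')]
print(names)  # ['Foo', 'Bar']
```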
(3) text
The text parameter matches against node text; it accepts either a string or a compiled regular expression object:
import re
html = '''
<div class="panel">
<div class="panel-body">
<a>Hello, this is a link</a>
<a>Hello, this is a link,too</a>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text=re.compile('link')))
The result:
['Hello, this is a link', 'Hello, this is a link,too']
There are two a nodes here, each containing text. Passing a regular expression object as the text parameter to find_all() returns a list of all node texts matching the pattern.
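In Beautiful Soup 4.4 and later the same parameter is also available under the name string; both forms accept a plain string (exact match) or a compiled regex (pattern match). A sketch:

```python
import re
from bs4 import BeautifulSoup

html = '<a>Hello, this is a link</a><a>Another line</a>'
soup = BeautifulSoup(html, 'html.parser')

# A plain string matches exactly; a compiled regex matches by pattern
print(soup.find_all(string='Another line'))
print(soup.find_all(string=re.compile('link')))
```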
find()
Besides find_all(), there is also find(), which returns a single element: the first match. For example:
html = """
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find(name='ul'))
print(type(soup.find(name='ul')))
print(soup.find(class_='list'))
print(soup.find(class_='element'))
The result:
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<class 'bs4.element.Tag'>
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<li class="element">Foo</li>
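One difference worth remembering: find_all() returns an empty list when nothing matches, but find() returns None, so check the result before chaining attribute access. A sketch:

```python
from bs4 import BeautifulSoup

html = '<ul class="list"><li>Foo</li></ul>'
soup = BeautifulSoup(html, 'html.parser')

missing = soup.find('table')  # no table node in this document
print(missing)                # None

# Guard before chaining; otherwise .string would raise AttributeError
if missing is not None:
    print(missing.string)

found = soup.find('li')
print(found.string)           # Foo
```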