Overview
Goal: I recently joined a research-assistant project at school that required downloading more than 40,000 newspaper articles. The supervisor told us that newspaper articles on CNKI (China National Knowledge Infrastructure) can be downloaded for free, but each account is capped at roughly 300 downloads, so I registered about ten accounts.
At first my plan was simple: scrape the desktop version of CNKI directly. That turned out to be a big mistake. The desktop site's anti-scraping measures are brutal (repeated requests trigger a CAPTCHA), and they cost me a fistful of hair. Only when I stumbled onto the mobile (WAP) version of CNKI did the problem get solved cleanly.
The approach: first collect the id of every article. For the Inner Mongolia Daily (内蒙古日报) assigned to me, ids follow a format like MGRB201807200032.
The ids can be harvested effortlessly from http://wap.cnki.net/acanewslist.aspx?p=2&property=default&channel=CCND&ccndcode=MGRB, and no CAPTCHA is ever required (the p parameter is the page number).
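For example, each list item's href contains the id, which a small regex can pull out (a standalone sketch; the href value here is made up to match the site's link format):

```python
import re

# hypothetical href as it appears in a WAP list page
href = 'baozhi-MGRB201807200032.html'

# non-greedy capture of everything between 'baozhi-' and '.html'
pattern = re.compile(r'baozhi-(.*?)\.html')
article_id = pattern.search(href).group(1)
print(article_id)  # MGRB201807200032
```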
The id-harvesting script:
# -*- coding: utf-8 -*-
# Author: LZW
import re
import time

import pymongo
import requests
from pyquery import PyQuery as pq

lzw = pymongo.MongoClient()
assistant = lzw['ass']
ass = assistant['ass']
pattern = re.compile(r'baozhi-(.*?)\.html', re.S)
url = 'http://wap.cnki.net/acanewslist.aspx?p={}&property=default&channel=CCND&ccndcode=MGRB'
headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'zh-CN,zh;q=0.9',
'Connection': 'keep-alive',
'Cookie': 'Ecp_ClientId=2171118141202985576; cnkiUserKey=fe7c8caa-ec4f-a151-650c-f315d59f0bbd; UM_distinctid=1612b412fe0e1-0838ebea265002-393d5f0e-144000-1612b412fe1459; SID=201139; SID_qdcnc=000001; Hm_lvt_07a576e0f1481e3cafe74b0eb336a2d7=1532253080; ASP.NET_SessionId=jtjx5uxxq2njsg2tbuds31jz; Login_Session_UserSID=B3673AD8E0D34F0481DE76250A0F0D34; Login_Cookies_UserSID=3029653; Login_Cookies_UserName=767228083%40qq.com; Login_Cookies_WxOpenId=; Login_Session_UserToken=0C765CF78A62E137C7F1B1F77F19EA06; Login_Cookies_Userpwd=keepupfight; touchThirdLoginReturnUrl=http%3a%2f%2fwap.cnki.net%2ftouch%2fusercenter%2fZone%2fIndex; BK_Login_Session_UserToken=BE1C40E6D17A46700BF61541A32E9C92; BK_Login_Session_UserSID=B3673AD8E0D34F0481DE76250A0F0D34; BK_Login_Cookies_Userid=WEEvREcwSlJHSldTTEYzVTE0YXF0b2lqMW85UzdIUS9nMFBrM2xrOGwwZz0%3d%249A4hF_YAuvQ5obgVAqNKPCYcEjKensW4IQMovwHtwkF4VYPoHbKxJw!!; BK_Login_Cookies_UserSID=4497881; BK_Login_Cookies_UserName=767228083%40qq.com; BK_Login_Cookies_OrgID=; Login_Cookies_OrgID=; Login_Cookies_NickName=767228083%40qq.com; Login_Cookies_UserType=0; Login_Cookies_Userid=WEEvREcwSlJHSldTTEYzVTE0YXF0b2lqMW85UzdIUS9nMFBrM2xrOGwwZz0%3d%249A4hF_YAuvQ5obgVAqNKPCYcEjKensW4IQMovwHtwkF4VYPoHbKxJw!!; zoneReturnUrl=http%3a%2f%2fwap.cnki.net%2ftouch%2fusercenter%3freturnurl%3dhttp%253a%252f%252fwap.cnki.net%252ftouch%252fusercenter%252fZone%252fIndex; SID_wapedu=201140; Search_History=%5b%7b%22SearchType%22%3a101%2c%22SearchKeyWord%22%3a%22%e5%86%85%e8%92%99%e5%8f%a4%e6%97%a5%e6%8a%a5%22%7d%2c%7b%22SearchType%22%3a101%2c%22SearchKeyWord%22%3a%22%e5%86%85%e8%92%99%e5%8f%a4%e6%97%a5%e6%8a%a5%22%7d%2c%7b%22SearchType%22%3a106%2c%22SearchKeyWord%22%3a%22%e5%86%85%e8%92%99%e5%8f%a4%e6%97%a5%e6%8a%a5%22%7d%2c%7b%22SearchType%22%3a101%2c%22SearchKeyWord%22%3a%22%e5%86%85%e8%92%99%e5%8f%a4%e6%97%a5%e6%8a%a5%22%7d%5d; LID=WEEvREcwSlJHSldTTEYzVTE0YXF0b2lqMW85UzdIUS9nMFBrM2xrOGwwZz0=$9A4hF_YAuvQ5obgVAqNKPCYcEjKensW4IQMovwHtwkF4VYPoHbKxJw!!; Hm_lpvt_07a576e0f1481e3cafe74b0eb336a2d7=1532253218; E7F38EA2E837979238D6F8CFF3FB9516=9871D3A2C554B27151CACF1422EEC048=WEEvREcwSlJHSldTTEYzVTE0YXF0b2lqMW85UzdIUS9nMFBrM2xrOGwwZz0%3d%249A4hF_YAuvQ5obgVAqNKPCYcEjKensW4IQMovwHtwkF4VYPoHbKxJw!!&4040592CEC1880AA70936989F05E7C31=767228083%40qq.com&2D53A8FB7ABF5BE7F4A3CF4B565CC75C=; CNZZDATA4207386=cnzz_eid%3D68120689-1532250036-%26ntime%3D1532255436; Hm_lpvt_07a576e0f1481e3cafe74b0eb336a2d7=1532257001; Hm_lvt_07a576e0f1481e3cafe74b0eb336a2d7=1532257001; c_m_LinID=LinID=WEEvREcwSlJHSldTTEYzVTE0YXF0b2lqMW85UzdIUS9nMFBrM2xrOGwwZz0=$9A4hF_YAuvQ5obgVAqNKPCYcEjKensW4IQMovwHtwkF4VYPoHbKxJw!!&ot=07/22/2018 19:22:01; c_m_expire=2018-07-22 19:22:01',  # replace with your own session cookie
'Host': 'wap.cnki.net',
'Upgrade-Insecure-Requests': '1',
'Referer': 'http://wap.cnki.net/baozhi-MGRB200112270084.html',
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Mobile Safari/537.36'}
for i in range(1025, 2102):  # set the page range for your own newspaper
    res = requests.get(url.format(i), headers=headers)
    text = pq(res.text)
    print('page {}'.format(i))
    time.sleep(2)
    for j in range(3, 21):  # list items 3-20 on each page are article links
        d = {}
        title = text('div ul li:nth-child({})'.format(j)).text()
        a = text('div ul li:nth-child({}) a'.format(j)).attr('href')
        a = re.search(pattern, a)
        d['title'] = title
        d['id'] = a.group(1)
        ass.insert_one(d)
lzw.close()
The parsing here is straightforward; each title/id pair is stored in MongoDB.
Next, Selenium is used to log in and click the download button for each article; the files land in Chrome's default download directory.
The code:
# -*- coding: utf-8 -*-
# Author: LZW
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def click(browser, position, wait):
    button = wait.until(EC.element_to_be_clickable((By.XPATH, position)))
    button.click()
    time.sleep(5)

def send_key(browser, position, data, wait):
    send = wait.until(EC.presence_of_element_located((By.XPATH, position)))
    send.send_keys(data)
    time.sleep(3)

def download(browser, article_id, wait):
    browser.get('http://wap.cnki.net/baozhi-{}.html'.format(article_id))
    click(browser, '//*[@id="form1"]/div[5]/div[8]/div/div/a', wait)
    time.sleep(2)

def signup(browser, username, url, wait):
    browser.get(url)
    send_key(browser, '//*[@id="username"]', '{}@qq.com'.format(username), wait)
    send_key(browser, '//*[@id="password"]', '*******', wait)  # every account I registered shares the same password
    click(browser, '//*[@id="btnLogin"]', wait)
    time.sleep(3)

if __name__ == '__main__':
    user = ['23738766***', '5163188***', '850376****', '232473***', '292298***', '2545574***', '34659613***',
            '4949689**', '767228***']
    # the ids could also be read straight from MongoDB:
    # lzw = pymongo.MongoClient()
    # assistant = lzw['ass']
    # ass = assistant['ass']
    # data = list(ass.find())
    f = open('id.json', 'r', encoding='gbk')
    for i in range(len(user)):
        browser = webdriver.Chrome(r'C:\Users\Administrator\Downloads\chromedriver.exe')  # point this at your chromedriver
        browser.implicitly_wait(10)
        wait = WebDriverWait(browser, 10)
        url = 'http://wap.cnki.net/touch/usercenter?'
        signup(browser, user[i], url, wait)
        for j in range(240):  # roughly 240 downloads per account before the quota runs out
            download(browser, f.readline().strip(), wait)  # strip() drops the trailing newline
            print('download no. {}'.format(j))
        browser.close()
    f.close()
The script reads its ids from a file; strictly speaking that is unnecessary, since the MongoDB collection built in the first step could be queried directly. But to let my roommate (who also joined the project) reuse my code, I exported the ids to a file, one per line.
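A minimal sketch of that hand-off (file name and ids are illustrative). One detail to watch: reading a line back keeps the trailing newline, which must be stripped or it ends up inside the download URL:

```python
ids = ['MGRB201807200032', 'MGRB200112270084']  # sample ids

# write one id per line, matching what the Selenium script expects
with open('id.json', 'w', encoding='gbk') as f:
    for article_id in ids:
        f.write(article_id + '\n')

# read them back and build the article URLs
urls = []
with open('id.json', 'r', encoding='gbk') as f:
    for line in f:
        # strip() removes the newline; without it the URL would contain '\n'
        urls.append('http://wap.cnki.net/baozhi-{}.html'.format(line.strip()))

print(urls[0])  # http://wap.cnki.net/baozhi-MGRB201807200032.html
```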
Finally
Lesson learned the hard way: if you must scrape, scrape the mobile site.
----------------------------------------
Downloading this way still felt clumsy; in particular, Selenium clicking gives no control over where the files are saved. After learning how to submit forms with POST, I rewrote the code. This is only a proof of concept without a loop; feel free to extend it.
# -*- coding: utf-8 -*-
# Author: LZW
import requests
from pyquery import PyQuery as pq

data = {
    'username': '767228***@qq.com',
    'password': 'keepupf***t',
    'keeppwd': 'keepPwd',
    'app': ''
}  # capture these form fields from the login page with your browser's developer tools
headers ={
'Host': 'wap.cnki.net',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Mobile Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'zh-CN,zh;q=0.9',
'Cache-Control': 'max-age=0',
'Connection': 'keep-alive',
'Referer': 'http://wap.cnki.net/touch/usercenter?',
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
'Content-Length': '69'
}
headers2 = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Mobile Safari/537.36'
}
url = 'http://wap.cnki.net/touch/usercenter/Account/Validator'
downurl ='http://wap.cnki.net/baozhi-MGRB201807200010.html'
with requests.Session() as s:
    r = s.post(url=url, data=data, headers=headers)
    print(r.status_code)
    # note: do not reuse the full login headers here; doing so raises an error,
    # most likely because of the hard-coded Content-Length
    res = s.get(downurl, headers=headers2, verify=False)
    t = pq(res.text)
    keyurl = t('div .btn01 a').attr('href')  # the real PDF link behind the download button
    content = s.get(keyurl, stream=True)
    with open('second.pdf', 'wb') as f:
        f.write(content.content)
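To turn the trial above into a batch job, one possible loop structure (a sketch of my own; the function names are made up, and the fetch step is passed in so the session logic from the snippet above can be plugged in):

```python
import time

def batch_download(ids, fetch, save, delay=0):
    """Download each article id via fetch() and persist it via save().

    fetch(article_id) should return the PDF bytes (e.g. by wrapping the
    session/keyurl logic shown earlier); save(article_id, data) writes
    them out. Returns the number of articles saved.
    """
    count = 0
    for article_id in ids:
        data = fetch(article_id)
        if data:  # skip articles whose download link could not be resolved
            save(article_id, data)
            count += 1
        time.sleep(delay)  # be polite between requests
    return count

# usage with stub functions standing in for the real network calls
store = {}
n = batch_download(
    ['MGRB201807200010', 'MGRB201807200032'],
    fetch=lambda i: b'%PDF-stub',
    save=lambda i, d: store.__setitem__(i + '.pdf', d),
)
print(n)  # 2
```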
Very satisfying: even though I had rashly dropped out of the assistant project by then, finishing this code still made me happy.