N Ways of Python Web Scraping (3)

The output is as follows (the middle portion is omitted and replaced with ......):

##################################################
Dejen Gebremeskel , Ethiopian long-distance runner
Erik Kynard , American high jumper
......
Buzz Aldrin , American astronaut
Egon Krenz , former General Secretary of the Socialist Unity Party of East Germany
Using async (regular expressions), total time elapsed: 16.521944999694824
##################################################

16.5 seconds, a mere 1/43 of the time taken by the plain method. Speed like this is jaw-dropping (thanks to a reader for suggesting the attempt). I had implemented an async method myself, but used BeautifulSoup to parse the pages and it took 127 seconds; I did not expect regular expressions to achieve such a striking improvement. Evidently, quick as BeautifulSoup is at parsing pages, it still caps the speed of the async approach. The downside of this method is that once the content you need to scrape becomes complex, ordinary regular expressions are no longer up to the task and you have to find another way.
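For reference, below is a minimal sketch of what such an async + regex scraper can look like. It assumes aiohttp is installed; the fetch_and_parse helper and the two patterns are illustrative assumptions, inferred from the wikibase-title-label and wikibase-descriptionview-text spans that the Scrapy spider later in this article selects on.

import re
import time
import asyncio
import aiohttp

# Patterns (assumed): pull the title and description spans straight out of the raw HTML
NAME_RE = re.compile(r'<span class="wikibase-title-label"[^>]*>(.+?)</span>')
DESC_RE = re.compile(r'<span class="wikibase-descriptionview-text"[^>]*>(.+?)</span>')

async def fetch_and_parse(session, url):
    # download one page and extract name/description without building a parse tree
    async with session.get(url) as resp:
        html = await resp.text()
    name = NAME_RE.search(html)
    desc = DESC_RE.search(html)
    print((name.group(1) if name else ''), ',', (desc.group(1) if desc else ''))

async def main(urls):
    # issue all requests concurrently over a single shared session
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(fetch_and_parse(session, u) for u in urls))

# urls = [...]  # the same 500 Wikidata URLs as before
# t0 = time.time()
# asyncio.run(main(urls))
# print('Using async (regular expressions), total time elapsed:', time.time() - t0)

Skipping a parser object entirely is exactly why this variant is faster: each page costs one network round trip plus two regex searches, instead of a full BeautifulSoup tree build.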

The Scrapy Crawler Framework

Finally, we use Scrapy, the well-known Python crawler framework, to solve this scraping task. Our crawler project is named wikiDataScrapy (the standard skeleton produced by scrapy startproject wikiDataScrapy), and its structure is as follows:

[Figure: the wikiDataScrapy project structure]

In settings.py, set ROBOTSTXT_OBEY = False. Then modify items.py as follows:

# -*- coding: utf-8 -*-
import scrapy


class WikidatascrapyItem(scrapy.Item):
    # define the fields for your item here, like:
    name = scrapy.Field()
    desc = scrapy.Field()

Next, create wikiSpider.py in the spiders folder, with the following code:

import scrapy.cmdline
import requests
from bs4 import BeautifulSoup

from wikiDataScrapy.items import WikidatascrapyItem


# Fetch the 500 URLs to be crawled, using requests + BeautifulSoup
def get_urls():
    url = "http://www.wikidata.org/w/index.php?title=Special:WhatLinksHere/Q5&limit=500&from=0"
    # request headers
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36'}
    # send the HTTP request
    req = requests.get(url, headers=headers)
    # parse the page
    soup = BeautifulSoup(req.text, "lxml")
    # locate the list items containing the name and description links
    human_list = soup.find(id='mw-whatlinkshere-list')('li')
    urls = []
    # collect the URLs
    for human in human_list:
        url = human.find('a')['href']
        urls.append('https://www.wikidata.org' + url)
    # print(urls)
    return urls


# crawl with the Scrapy framework
class WikiSpider(scrapy.Spider):
    name = 'wikiScrapy'          # spider name
    start_urls = get_urls()      # the 500 URLs to crawl

    def parse(self, response):
        item = WikidatascrapyItem()
        # extract name and description
        item['name'] = response.css('span.wikibase-title-label').xpath('text()').extract_first()
        item['desc'] = response.css('span.wikibase-descriptionview-text').xpath('text()').extract_first()
        yield item


# run the spider and export the results to a CSV file
scrapy.cmdline.execute(['scrapy', 'crawl', 'wikiScrapy', '-o', 'wiki.csv', '-t', 'csv'])
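Note that the final line embeds the runner inside the script, so executing python wikiSpider.py from the project root starts the crawl and writes the results to wiki.csv. You can get the same effect by running scrapy crawl wikiScrapy -o wiki.csv from the command line; the -t csv flag should be redundant there, since Scrapy can infer the export format from the .csv extension.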

The output is as follows (only the final Scrapy stats summary is included):

{'downloader/request_bytes': 166187,
 'downloader/request_count': 500,
 'downloader/request_method_count/GET': 500,
 'downloader/response_bytes': 18988798,
 'downloader/response_count': 500,
 'downloader/response_status_count/200': 500,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 10, 16, 9, 49, 15, 761487),
 'item_scraped_count': 500,
 'log_count/DEBUG': 1001,
 'log_count/INFO': 8,
 'response_received_count': 500,
 'scheduler/dequeued': 500,
 'scheduler/dequeued/memory': 500,
 'scheduler/enqueued': 500,
 'scheduler/enqueued/memory': 500,
 'start_time': datetime.datetime(2018, 10, 16, 9, 48, 44, 58673)}

As you can see, all 500 pages were scraped successfully (500 responses with status 200 and an item_scraped_count of 500), taking roughly 31 seconds from start_time to finish_time, which is also quite fast. Now let us look at the generated wiki.csv file, which contains the name and description of every scraped entry.
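To spot-check the export without opening a spreadsheet, here is a quick sketch (assuming pandas is installed; the column names follow the item fields defined earlier):

import pandas as pd

# load the CSV written by the spider and verify the row count and columns
df = pd.read_csv('wiki.csv')
print(df.shape)                      # expect (500, 2)
print(df[['name', 'desc']].head())   # first few name/description pairs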
