Scraping using the Scrapy framework

First, you have to set up a new Scrapy project. Enter a directory where you'd like to store your code and run:

scrapy startproject projectName
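This creates a projectName directory containing the project skeleton; the exact files vary a little between Scrapy versions, but the layout is roughly:

```
projectName/
    scrapy.cfg            # deploy/configuration file
    projectName/          # the project's Python module
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/          # your spiders go in this directory
            __init__.py
```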

To scrape, we need a spider. Spiders define how a certain site will be scraped. Here is the code for a spider that follows the links to the top-voted questions on StackOverflow and scrapes some data from each page (source):

import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'  # each spider has a unique name
    start_urls = ['http://stackoverflow.com/questions?sort=votes']  # the parsing starts from a specific set of urls

    def parse(self, response):  # for each request this generator yields, its response is sent to parse_question
        for href in response.css('.question-summary h3 a::attr(href)'):  # do some scraping stuff using css selectors to find question urls 
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response): 
        yield {
            'title': response.css('h1 a::text').extract_first(),
            'votes': response.css('.question .vote-count-post::text').extract_first(),
            'body': response.css('.question .post-text').extract_first(),
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
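The comments in the spider describe how Scrapy drives it: `parse` yields `Request` objects, Scrapy downloads each one, and the response is passed to the request's callback (`parse_question`), which yields item dicts. A minimal sketch of that request/callback loop in plain Python, using a stand-in `Request` class and a toy in-memory "site" rather than real Scrapy API:

```python
from collections import deque

class Request:
    """Stand-in for scrapy.Request: a URL plus the callback for its response."""
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback

def run(start_requests, fetch):
    """Drain the request queue the way Scrapy's engine does: fetch each URL,
    feed the response to the request's callback; callbacks may yield further
    Requests (which get queued) or plain dicts (collected as scraped items)."""
    queue = deque(start_requests)
    items = []
    while queue:
        req = queue.popleft()
        response = fetch(req.url)  # stand-in for the real download
        for result in req.callback(response):
            if isinstance(result, Request):
                queue.append(result)
            else:
                items.append(result)
    return items

# A toy "site": the question-list page links to two question pages.
pages = {
    '/questions': ['/q/1', '/q/2'],
    '/q/1': 'First question',
    '/q/2': 'Second question',
}

def parse(response):           # mirrors StackOverflowSpider.parse
    for href in response:
        yield Request(href, callback=parse_question)

def parse_question(response):  # mirrors parse_question: yield an item dict
    yield {'title': response}

items = run([Request('/questions', parse)], fetch=pages.get)
print(items)  # → [{'title': 'First question'}, {'title': 'Second question'}]
```

In the real spider, `fetch` is Scrapy's downloader and the responses are HTML pages queried with CSS selectors, but the scheduling of callbacks works the same way.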

Save the spider class in the projectName\spiders directory - in this case, projectName\spiders\stackoverflow_spider.py.

Now you can use your spider. For example, try running (from the project's directory):

scrapy crawl stackoverflow
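Scrapy can also serialize the yielded dicts for you: running `scrapy crawl stackoverflow -o top-questions.json` writes the scraped items to a JSON file, which you can then load like any other JSON data. The sample item below is illustrative, not real scraped output:

```python
import json

# An illustrative exported item with the shape the spider yields
# (title/votes/tags/link); not real scraped data.
sample = ('[{"title": "Example question", "votes": "42", '
          '"tags": ["python"], "link": "http://stackoverflow.com/q/1"}]')

items = json.loads(sample)
print(items[0]['title'])  # → Example question
```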