Scraping using the Scrapy framework
First, you have to set up a new Scrapy project. Enter a directory where you'd like to store your code and run:
scrapy startproject projectName
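This command generates a project skeleton roughly like the following (a sketch; the exact set of files varies slightly between Scrapy versions):

```
projectName/
    scrapy.cfg          # deploy configuration
    projectName/        # the project's Python module
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/        # directory where you put your spiders
            __init__.py
```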
To scrape we need a spider. Spiders define how a certain site will be scraped. Here's the code of a spider that follows the links to the top-voted questions on StackOverflow and scrapes some data from each page (source):
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'  # each spider has a unique name
    start_urls = ['http://stackoverflow.com/questions?sort=votes']  # the parsing starts from a specific set of urls

    def parse(self, response):  # for each request this generator yields, its response is sent to parse_question
        for href in response.css('.question-summary h3 a::attr(href)'):  # do some scraping stuff using css selectors to find question urls
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract_first(),
            'votes': response.css('.question .vote-count-post::text').extract_first(),
            'body': response.css('.question .post-text').extract_first(),
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
Save your spider class in the projectName\spiders directory. In this case: projectName\spiders\stackoverflow_spider.py.
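A detail worth noting in the spider above: the question links extracted by the CSS selector are relative URLs, and response.urljoin resolves them against the page's URL before a new Request is made. A minimal sketch of that resolution step, using the standard library's urljoin (the href value below is a made-up example):

```python
from urllib.parse import urljoin

# the page the spider is currently parsing
base = 'http://stackoverflow.com/questions?sort=votes'

# a relative href as it would appear in the page's HTML (hypothetical example)
href = '/questions/11227809/why-is-processing-a-sorted-array-faster'

# response.urljoin(href) behaves like urljoin(response.url, href)
full_url = urljoin(base, href)
print(full_url)
# -> http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster
```

Note that the query string of the listing page (?sort=votes) is dropped when an absolute path is joined, which is exactly what you want for the question URLs.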
Now you can use your spider. For example, try running (in the project's directory):
scrapy crawl stackoverflow
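Since parse_question yields plain dicts, you can also use Scrapy's built-in feed exports to save the scraped items directly to a file. For example (the output filename here is just an illustration):

```
scrapy crawl stackoverflow -o top-stackoverflow-questions.json
```

This writes every yielded item to the given file; the export format (JSON, CSV, etc.) is inferred from the file extension.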