A First Try at Scrapy

Preface

I had been doing all my crawling with BeautifulSoup4 + urllib2. I knew Scrapy by reputation, but its sprawling documentation scared me off, so I kept putting it off. Things have been quiet lately, so I decided to work through the official tutorial and get my hands dirty.

Environment Setup

Create a development environment with Virtualenv: virtualenv venv && . venv/bin/activate

Install Scrapy: pip install scrapy

Create a new project: scrapy startproject tutorial
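
If the command succeeds, it generates a project skeleton. With Scrapy 0.22 (the version in the log further down), the layout should look roughly like this:

tutorial/
    scrapy.cfg
    tutorial/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py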

Configuring the Spider

Edit tutorial/items.py to define what the data items we want to scrape look like:

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()
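
An Item behaves much like a dict, which makes it easy to poke at in a Python shell. A minimal sanity check, with made-up values purely for illustration:

# illustrative values only, not real scraped data
item = DmozItem(title='Example Book', link='http://example.com/')
item['desc'] = 'a short description'
print item['title']   # prints: Example Book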

Create a dmoz_spider.py file under the tutorial/spiders/ directory to define our spider:

from scrapy.spider import Spider

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        # save the raw page body to a file named after the last segment of the URL path
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

That is enough to run. Go back to the project root and execute: scrapy crawl dmoz, and the spider starts crawling! You will then see a long stream of output, something like this:

2014-01-27 17:46:47+0800 [scrapy] INFO: Scrapy 0.22.0 started (bot: tutorial)
2014-01-27 17:46:47+0800 [scrapy] INFO: Optional features available: ssl, http11
2014-01-27 17:46:47+0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2014-01-27 17:46:48+0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-01-27 17:46:48+0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-01-27 17:46:48+0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-01-27 17:46:48+0800 [scrapy] INFO: Enabled item pipelines:
2014-01-27 17:46:48+0800 [dmoz] INFO: Spider opened
2014-01-27 17:46:48+0800 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-01-27 17:46:48+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-01-27 17:46:48+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-01-27 17:46:49+0800 [dmoz] DEBUG: Crawled (200)  (referer: None)
2014-01-27 17:46:49+0800 [dmoz] DEBUG: Crawled (200)  (referer: None)
2014-01-27 17:46:49+0800 [dmoz] INFO: Closing spider (finished)
2014-01-27 17:46:49+0800 [dmoz] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 516,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 14888,
     'downloader/response_count': 2,
     'downloader/response_status_count/200': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 1, 27, 9, 46, 49, 512993),
     'log_count/DEBUG': 4,
     'log_count/INFO': 7,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2014, 1, 27, 9, 46, 48, 67847)}
2014-01-27 17:46:49+0800 [dmoz] INFO: Spider closed (finished)

The shell command lets you debug the spider interactively: scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/". Once it opens, try commands like these:

In [1]: sel.xpath('//title')
Out[1]: []

In [2]: sel.xpath('//title').extract()
Out[2]: [u'Open Directory - Computers: Programming: Languages: Python: Books']

In [3]: sel.xpath('//title/text()')
Out[3]: []

In [4]: sel.xpath('//title/text()').extract()
Out[4]: [u'Open Directory - Computers: Programming: Languages: Python: Books']

In [5]: sel.xpath('//title/text()').re('(\w+):')
Out[5]: [u'Computers', u'Programming', u'Languages', u'Python']
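
The same shell is also a convenient place to try out the list-item selectors that the revised spider below relies on. Output is omitted here because it depends on the live page:

In [6]: sites = sel.xpath('//ul/li')

In [7]: sites[0].xpath('a/text()').extract()

In [8]: sites[0].xpath('a/@href').extract()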

Based on the debugging we just did in the shell, modify our Spider like this:

from scrapy.spider import Spider
from scrapy.selector import Selector

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//ul/li')
        for site in sites:
            title = site.xpath('a/text()').extract()
            link = site.xpath('a/@href').extract()
            desc = site.xpath('text()').extract()
            print title, link, desc

Then re-run the spider: scrapy crawl dmoz

Scrapy expects the spider to return the scraped data wrapped in Item objects, so modify the spider code once more:

from scrapy.spider import Spider
from scrapy.selector import Selector
from tutorial.items import DmozItem

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['title'] = site.xpath('a/text()').extract()
            item['link'] = site.xpath('a/@href').extract()
            item['desc'] = site.xpath('text()').extract()
            items.append(item)
        return items
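
Collecting everything into a list works, but parse() can equally be written as a generator that yields each item as soon as it is built; Scrapy accepts both forms. A sketch of just the parse() method in that style:

    def parse(self, response):
        sel = Selector(response)
        for site in sel.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = site.xpath('a/text()').extract()
            item['link'] = site.xpath('a/@href').extract()
            item['desc'] = site.xpath('text()').extract()
            yield item  # hand each item to Scrapy as soon as it is filled in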

Once the spider returns Item objects, Scrapy can do a lot more for us, such as storing the scraped data. A single command is enough: scrapy crawl dmoz -o items.json -t json. When the crawl finishes, the data has been saved locally as JSON.
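
The resulting items.json is a JSON list with one object per item. With made-up values, an entry looks roughly like this (each field is itself a list, because extract() always returns a list):

[{"title": ["An Example Python Book"],
  "link": ["http://www.example.com/book/"],
  "desc": ["A made-up description, just to show the shape of the data."]}]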

