An image crawler built with Scrapy (demo)

Just a small demo. In my own test run it downloaded twenty-odd images in total.

#!/usr/bin/python
#coding: utf-8
#########################################################################
# File Name: spiders/wallpaper.py
# Author: mylonly
# mail: tianxianggen@gmail.com
# Blog: www.mylonly.com
# Created Time: Mon 2014-09-01 14:20:07
#########################################################################
import urllib2
import os

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from mylonly.items import wallpaperItem
from scrapy.http import Request

class wallpaper(CrawlSpider):
    name = "wallpaperSpider"
    allowed_domains = ['sj.zol.com.cn']
    start_urls = ['http://sj.zol.com.cn/']
    number = 0
    rules = (
        # Follow every wallpaper detail page (detail_XXXX_XXXXX.html).
        Rule(SgmlLinkExtractor(allow=('detail_\d{4}_\d{5}\.html',)),
             callback='parse_image', follow=True),
    )

    def parse_image(self, response):
        self.log('hi, this is an item page! %s' % response.url)
        sel = Selector(response)
        sites = sel.xpath("//div[@class='wrapper mt15']//dd[@id='tagfbl']"
                          "//a[@target='_blank']/@href").extract()
        for site in sites:
            url = 'http://sj.zol.com.cn%s' % site
            print 'one page:', url
            # yield, not return, so every resolution link is followed
            # rather than only the first one
            yield Request(url, callback=self.parse_href)

    def parse_href(self, response):
        print 'I am in:', response.url
        sel = Selector(response)
        src = sel.xpath("//body//img/@src").extract()[0]
        self.download(src)

    def download(self, url):
        self.number += 1
        savePath = '/mnt/python_image/%d.jpg' % self.number
        print 'downloading...', url
        try:
            u = urllib2.urlopen(url)
            r = u.read()
            downloadFile = open(savePath, 'wb')
            downloadFile.write(r)
            u.close()
            downloadFile.close()
        except Exception:
            print savePath, 'can not download.'
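To see which URLs the crawl rule actually follows, the `allow` pattern above (`detail_\d{4}_\d{5}\.html`: four digits, underscore, five digits) can be checked on its own with the `re` module; the sample URLs below are made up for illustration:

```python
import re

# Same pattern as the Rule's allow= argument in the spider above.
DETAIL_RE = re.compile(r'detail_\d{4}_\d{5}\.html')

def is_detail_page(url):
    """Return True if the URL looks like a wallpaper detail page."""
    return DETAIL_RE.search(url) is not None

print(is_detail_page('http://sj.zol.com.cn/bizhi/detail_1234_56789.html'))  # True
print(is_detail_page('http://sj.zol.com.cn/bizhi/index.html'))              # False
```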
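The spider is written for Python 2 (`urllib2`, `print` statements, the long-removed `scrapy.contrib` modules). For reference, here is a rough Python 3 sketch of just the download helper; the function name and the incrementing-number save scheme mirror the original, while the directory argument is illustrative:

```python
import os
import urllib.request

def download(url, save_dir, number):
    # Save each image under an incrementing numeric filename,
    # like /mnt/python_image/<n>.jpg in the original spider.
    save_path = os.path.join(save_dir, '%d.jpg' % number)
    try:
        with urllib.request.urlopen(url) as u, open(save_path, 'wb') as f:
            f.write(u.read())
        return save_path
    except Exception:
        print(save_path, 'can not download.')
        return None
```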

Please credit when reposting: 独自一人 » An image crawler built with Scrapy (demo)
