
Scraping Dynamic Data with Scrapy and Selenium

This article is based on a course by Qianfeng Education (千锋教育):
https://www.bilibili.com/video/BV1QY411F7Vt?p=1&vd_source=5f425e0074a7f92921f53ab87712357b
Many thanks to the instructor.

1. Using Selenium to Drive Chrome, Log In to Taobao, and Capture the Cookies

  Taobao's search feature can only be used after logging in, so we drive the browser programmatically to perform the login and then capture the post-login cookies.
  First, create a Chrome browser object that we will use to control Chrome:

import json
from selenium import webdriver

def create_chrome_driver(*, headless=False):  # Create a Chrome browser object; Selenium drives it to visit URLs
    options = webdriver.ChromeOptions()
    if headless:  # If True, run without showing a browser window
        options.add_argument('--headless')

    # Tweaks that make the automation harder to detect
    options.add_experimental_option('excludeSwitches', ['enable-automation'])
    options.add_experimental_option('useAutomationExtension', False)
    # Create the browser object (the chromedriver path is from the original project layout)
    browser = webdriver.Chrome(options=options,
                               executable_path=r"D:\python爬虫学习\Scrapy框架学习\TaoSpider\venv\Lib\site-packages\chromedriver.exe")
    # Counter anti-bot detection: hide the navigator.webdriver flag
    browser.execute_cdp_cmd(
        'Page.addScriptToEvaluateOnNewDocument',
        {'source': 'Object.defineProperty(navigator, "webdriver", {get: () => undefined})'}
    )

    return browser
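
  Note: the executable_path argument used above works with Selenium 3, but Selenium 4 removed it in favor of a Service object. If you are on a newer Selenium, a minimal sketch of the equivalent (the chromedriver path below is just a placeholder):

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

options = webdriver.ChromeOptions()
# Selenium 4 takes the driver path via a Service object (placeholder path)
service = Service(executable_path=r"D:\tools\chromedriver.exe")
browser = webdriver.Chrome(service=service, options=options)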

  Next, use this object to drive the browser through the Taobao login flow and save the resulting cookies to taobao2.json:

# Simulate the login
import json
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait
from utils import create_chrome_driver

browser = create_chrome_driver()
browser.get('https://login.taobao.com')

# Implicit wait: element lookups retry for up to 10 seconds
browser.implicitly_wait(10)

# Locate the page elements and simulate the user's typing and clicks
username_input = browser.find_element(By.CSS_SELECTOR, '#fm-login-id')
username_input.send_keys('xxx')  # Enter the username

password_input = browser.find_element(By.CSS_SELECTOR, '#fm-login-password')
password_input.send_keys('xxx')  # Enter the matching password

# Login button
login_button = browser.find_element(By.CSS_SELECTOR, '#login-form > div.fm-btn > button')
login_button.click()

# Explicit wait (an alternative to the fixed sleep below)
# wait_obj = WebDriverWait(browser, 10)
# wait_obj.until(expected_conditions.presence_of_element_located((By.CSS_SELECTOR, 'div.m-userinfo')))
time.sleep(15)  # Leave time for the login (and any slider captcha) to complete

# Grab the logged-in cookies and write them to a file
with open('taobao2.json', 'w') as file:
    json.dump(browser.get_cookies(), file)
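
  A fixed sleep either wastes time or cuts the login short; the commented-out explicit wait above is the more robust option. A minimal sketch (the div.m-userinfo selector comes from that commented code and may need adjusting to the current page):

# These imports already appear at the top of the login script
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait

# Block until the logged-in user-info element appears, up to 15 seconds
wait_obj = WebDriverWait(browser, 15)
wait_obj.until(expected_conditions.presence_of_element_located(
    (By.CSS_SELECTOR, 'div.m-userinfo')))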

  Finally, before sending requests we attach the saved cookies, after which we can search Taobao for products. Add the cookies to the browser object (note that Selenium only accepts a cookie once the browser is already on a page in that cookie's domain):

def add_cookies(browser, cookie_file):  # Attach the saved login cookies to a browser object
    with open(cookie_file, 'r') as file:
        cookie_list = json.load(file)
        for cookie_dict in cookie_list:
            if cookie_dict['secure']:  # only add cookies flagged secure
                browser.add_cookie(cookie_dict)

  We can now test that the browser can be driven end to end. Scraping requires the login cookies, so run the login code first; the code from Section 1 runs as a plain Python script and does not need to live inside the Scrapy project. Then run the code that opens the search page:

'''
Fetch product information via Taobao search
'''

from utils import create_chrome_driver, add_cookies

browser = create_chrome_driver()  # Create a Chrome browser object; we drive it to visit URLs
browser.get('https://www.taobao.com')  # Visit the domain first so add_cookie() can set taobao.com cookies
add_cookies(browser, 'taobao2.json')
browser.get('https://s.taobao.com/search?q=手机&s=0')  # Taobao search requires login; the cookies prove our identity

  The program automatically drives the browser to the Taobao search results page.
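
  The original post showed a screenshot of the result here. As a quick substitute, a hedged way to confirm the search page actually loaded (continuing from the script above):

# Simple sanity checks that the search results page loaded
print(browser.title)  # the title should mention the search keyword
browser.save_screenshot('search_page.png')  # save an image to inspect manually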

2. Writing the Spider

  Here we scrape listings for three categories: phones (手机), laptops (笔记本电脑), and keyboard-mouse combos (键鼠套装). Each category is scraped for two pages at 48 items per page, 288 items in total; since every page carries a few ads, the number actually scraped comes out below 288. The spider code:

import scrapy
from scrapy import Request, Selector

from TaoSpider.items import TaospiderItem


class TaobaoSpider(scrapy.Spider):
    name = 'taobao'
    allowed_domains = ['taobao.com']

    def start_requests(self):
        keywords = ['手机', '笔记本电脑', '键鼠套装']
        for keyword in keywords:
            for page in range(2):
                url = f'https://s.taobao.com/search?q={keyword}&s={48 * page}'
                yield Request(url=url)

    # def parse_detail(self, response, **kwargs):
    #     pass

    def parse(self, response, **kwargs):  # Taobao listings are rendered by JS, not served as static HTML, so the default response has nothing for selectors to match; Selenium fetches the rendered page for us (implemented in the downloader middleware, Section 3)
        sel = Selector(response)
        selectors = sel.css('div.items > div.item.J_MouserOnverReq > div.ctx-box.J_MouseEneterLeave.J_IconMoreNew')
        for selector in selectors:  # type: Selector
            item = TaospiderItem()
            item['title'] = ''.join(selector.css('div.row.row-2.title > a::text').extract()).strip()
            item['price'] = selector.css('div.row.row-1.g-clearfix > div.price.g_price.g_price-highlight > strong::text').extract_first().strip()
            item['deal_count'] = selector.css('div.row.row-1.g-clearfix > div.deal-cnt::text').extract_first().strip()
            item['shop'] = selector.css('div.row.row-3.g-clearfix > div.shop > a > span:nth-child(2)::text').extract_first().strip()
            item['location'] = selector.css('div.row.row-3.g-clearfix > div.location::text').extract_first().strip()
            yield item
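
  One fragility worth noting: extract_first() returns None when a field is missing (ads and unusual listings often lack one), and calling .strip() on None raises an AttributeError. A hedged helper, not part of the original course code, that the field lines above could use:

def safe_first(selector, query):
    # Return the first CSS match stripped of whitespace, or '' if absent
    value = selector.css(query).extract_first()
    return value.strip() if value else ''

# e.g. item['price'] = safe_first(selector, 'div.price.g_price.g_price-highlight > strong::text')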

  The item definitions (Items) are as follows:

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TaospiderItem(scrapy.Item):
    title = scrapy.Field()  # title
    price = scrapy.Field()  # price
    deal_count = scrapy.Field()  # sales count
    shop = scrapy.Field()  # shop name
    location = scrapy.Field()  # shop location
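
  Items behave like dicts, which is how both the spider above and the pipeline below use them; a tiny illustration (the values are made up):

item = TaospiderItem()
item['title'] = 'demo phone'  # fields are written and read like dict keys
print(item.get('price', ''))  # a missing field falls back to the default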

3. Writing the Middleware

  Here we mainly rewrite the downloader middleware. Taobao's listings are rendered by JS, and Scrapy's default downloader only fetches static content, so it cannot capture them; for dynamic content we need Selenium. And since Taobao's search requires login, we also reuse the code from Section 1. The downloader middleware:

from scrapy import Request, signals
from scrapy.http import HtmlResponse

from utils import create_chrome_driver, add_cookies


class TaospiderDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        # s = cls()
        # crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        # return s
        return cls()

    def __init__(self):  # Log in (via the saved cookies) when the middleware is created
        self.browser = create_chrome_driver()
        self.browser.get('https://www.taobao.com')
        add_cookies(self.browser, 'taobao2.json')

    def __del__(self):  # Runs when the object is destroyed; close the browser
        self.browser.close()

    def process_request(self, request: Request, spider):  # Skip the default downloader and fetch the page with Selenium instead
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        self.browser.get(request.url)
        # page_source is the JS-rendered HTML, unlike the static source a plain
        # HTTP request would return.
        # Wrap the rendered page in a response and hand it straight back to the
        # engine, which passes it to the spider for parsing.
        return HtmlResponse(url=request.url, body=self.browser.page_source,
                            request=request, encoding='utf-8')

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
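
  Relying on __del__ works in practice, but Python does not guarantee when (or whether) it runs. A sketch of a cleaner shutdown using Scrapy's spider_closed signal, replacing the from_crawler above (same class; the method name spider_closed is my own choice):

    @classmethod
    def from_crawler(cls, crawler):
        middleware = cls()
        # Close the browser when the spider finishes instead of relying on __del__
        crawler.signals.connect(middleware.spider_closed, signal=signals.spider_closed)
        return middleware

    def spider_closed(self, spider):
        self.browser.quit()  # quit() also terminates the chromedriver process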

  Don't forget to enable the middleware in the settings file:

DOWNLOADER_MIDDLEWARES = {
   'TaoSpider.middlewares.TaospiderDownloaderMiddleware': 543,
}

4. Data Storage: Writing the Item Pipeline

  If you don't need to store the data in a third-party target such as a database or Excel, you can skip this section: simply run scrapy crawl taobao -o taobao.csv and the data is written to a CSV file. Here I store the data in Excel via the item pipeline instead. The pipeline code:

import openpyxl


class TaospiderPipeline:

    def __init__(self):
        self.wb = openpyxl.Workbook()  # Create a workbook
        self.ws = self.wb.active  # Grab the default active worksheet
        self.ws.title = 'TaoBaoData'  # Worksheet name
        self.ws.append(('标题', '价格', '销量', '店铺名称', '店铺地址'))  # Header row: title, price, sales, shop name, shop location

    def close_spider(self, spider):  # Hook called automatically when the spider shuts down; no manual call needed
        self.wb.save('淘宝商品数据.xlsx')


    def process_item(self, item, spider):
        title = item.get('title', '')  # If 'title' is missing, fall back to '' (style 1)
        price = item.get('price') or 0  # If 'price' is missing or empty, fall back to 0 (style 2)
        deal_count = item.get('deal_count', '')
        shop = item.get('shop', '')
        location = item.get('location', '')
        self.ws.append((title, price, deal_count, shop, location))  # Append one row per item
        return item
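
  One caveat: the workbook is only written to disk in close_spider, so a crash mid-crawl loses everything collected so far. A hedged variant of process_item that also checkpoints periodically (the 50-row interval is arbitrary):

    def process_item(self, item, spider):
        self.ws.append((item.get('title', ''), item.get('price') or 0,
                        item.get('deal_count', ''), item.get('shop', ''),
                        item.get('location', '')))
        if self.ws.max_row % 50 == 0:  # checkpoint the workbook every ~50 rows
            self.wb.save('淘宝商品数据.xlsx')
        return item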

  Don't forget to enable the pipeline in the settings file:

ITEM_PIPELINES = {
   'TaoSpider.pipelines.TaospiderPipeline': 300,
}

  Finally, run from the command line: scrapy crawl taobao, where taobao is the spider's name.

  Taobao defends against scraping with a slider captcha, so this crawl only works a few times before being blocked, typically for about an hour. Bypassing the slider captcha requires modifying the Chrome driver; readers can search online for the details.