Collecting spider statistics with Scrapy middleware

Using Scrapy extension middleware (extensions)

1. A spider-statistics extension middleware

During one crawl, the requirements side suddenly came up with a new request: they wanted to know, per day, how many items each source's spider had collected.

A first look shows that the Scrapy log output already contains the information we want:

item_scraped_count: total number of items scraped in the whole crawl

finish_time: time the spider finished

elapsed_time_seconds: how long the spider ran, in seconds
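
These values come from Scrapy's stats collector, so they can also be read in code. A minimal sketch, with a made-up spider, reading the same keys when the spider closes:

import scrapy


class DemoSpider(scrapy.Spider):
    # Hypothetical spider, only to show where the stats can be read.
    name = "demo"
    start_urls = ["https://example.com"]

    def parse(self, response):
        yield {"url": response.url}

    def closed(self, reason):
        # Called automatically when the spider closes; the same keys that
        # appear in the final log dump are available on the stats collector.
        stats = self.crawler.stats
        self.logger.info(
            "items=%s finish_time=%s elapsed=%ss",
            stats.get_value("item_scraped_count", 0),
            stats.get_value("finish_time"),
            stats.get_value("elapsed_time_seconds"),
        )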

In the Scrapy source code we can find the file logstats.py:

import logging

from twisted.internet import task

from scrapy.exceptions import NotConfigured
from scrapy import signals

logger = logging.getLogger(__name__)


class LogStats:
    """Log basic scraping stats periodically"""

    def __init__(self, stats, interval=60.0):
        self.stats = stats
        self.interval = interval
        self.multiplier = 60.0 / self.interval
        self.task = None

    @classmethod
    def from_crawler(cls, crawler):
        interval = crawler.settings.getfloat('LOGSTATS_INTERVAL')
        if not interval:
            raise NotConfigured
        o = cls(crawler.stats, interval)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        return o

    def spider_opened(self, spider):
        self.pagesprev = 0
        self.itemsprev = 0

        self.task = task.LoopingCall(self.log, spider)
        self.task.start(self.interval)

    def log(self, spider):
        items = self.stats.get_value('item_scraped_count', 0)
        pages = self.stats.get_value('response_received_count', 0)
        irate = (items - self.itemsprev) * self.multiplier
        prate = (pages - self.pagesprev) * self.multiplier
        self.pagesprev, self.itemsprev = pages, items

        msg = ("Crawled %(pages)d pages (at %(pagerate)d pages/min), "
               "scraped %(items)d items (at %(itemrate)d items/min)")
        log_args = {'pages': pages, 'pagerate': prate,
                    'items': items, 'itemrate': irate}
        logger.info(msg, log_args, extra={'spider': spider})

    def spider_closed(self, spider, reason):
        if self.task and self.task.running:
            self.task.stop()

The code above is what produces the periodic crawl-related log output.
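
How often LogStats prints is controlled by the LOGSTATS_INTERVAL setting (60 seconds by default), which is the value read in from_crawler above. It can be changed in settings.py:

# settings.py
LOGSTATS_INTERVAL = 30.0  # log crawl/scrape rates every 30 seconds instead of 60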

We can also see that corestats.py is where fields such as elapsed_time_seconds, finish_time and finish_reason are set, so if we want to collect these spider statistics, all we need to do is write a new class that inherits from CoreStats.
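
For reference, the relevant part of corestats.py looks roughly like this (abridged from the Scrapy source; details vary slightly between versions):

from datetime import datetime

from scrapy import signals


class CoreStats:
    def __init__(self, stats):
        self.stats = stats
        self.start_time = None

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler.stats)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(o.response_received, signal=signals.response_received)
        # (item_dropped handling omitted here)
        return o

    def spider_opened(self, spider):
        self.start_time = datetime.utcnow()
        self.stats.set_value('start_time', self.start_time, spider=spider)

    def spider_closed(self, spider, reason):
        finish_time = datetime.utcnow()
        elapsed_time_seconds = (finish_time - self.start_time).total_seconds()
        self.stats.set_value('elapsed_time_seconds', elapsed_time_seconds, spider=spider)
        self.stats.set_value('finish_time', finish_time, spider=spider)
        self.stats.set_value('finish_reason', reason, spider=spider)

    def item_scraped(self, item, spider):
        self.stats.inc_value('item_scraped_count', spider=spider)

    def response_received(self, spider):
        self.stats.inc_value('response_received_count', spider=spider)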

2. Implementing our own component to collect the statistics

I define my own extension component, named SpiderStats, to collect the spider statistics and store the collected information in MongoDB. The code is as follows:

"""
author:tyj
"""
import datetime
import logging

from pymongo import MongoClient, ReadPreference

from scrapy.extensions.corestats import CoreStats
from scrapy.utils.conf import get_config

logger = logging.getLogger(__name__)


class SpiderStats(CoreStats):
    """Extension that records per-run statistics and writes them to MongoDB.

    item_scraped_count and finish_time are maintained by the built-in CoreStats
    extension (enabled by default), since the overrides below do not call the
    base-class handlers.
    """

    batch = None
    sources = None

    def item_scraped(self, item, spider):
        # Remember the batch/sources fields carried on the items so they can be
        # written out together with the run statistics when the spider closes.
        batch = item.get("batch")
        if batch:
            self.batch = batch
        if item.get("sources"):
            self.sources = item.get("sources")

    def spider_closed(self, spider, reason):
        items = self.stats.get_value('item_scraped_count', 0)
        # The stats collector stores UTC datetimes; shift by 8 hours to Beijing time.
        finish_time = self.stats.get_value('finish_time') + datetime.timedelta(hours=8)
        finish_time = finish_time.strftime('%Y-%m-%d %H:%M:%S')
        start_time = self.stats.get_value('start_time') + datetime.timedelta(hours=8)
        start_time = start_time.strftime('%Y-%m-%d %H:%M:%S')
        result_ = {
            "total_items": items,
            "start_time": start_time,
            "finish_time": finish_time,
            "batch": self.batch,
            "sources": self.sources,
        }
        logger.info("items=%s start_time=%s finish_time=%s batch=%s sources=%s",
                    items, start_time, finish_time, self.batch, self.sources)
        # Connection details come from the [mongo_cfg_prod] section of scrapy.cfg.
        section = "mongo_cfg_prod"
        MONGO_HOST = get_config().get(section=section, option='MONGO_HOST', fallback='')
        MONGO_DB = get_config().get(section=section, option='MONGO_DB', fallback='')
        MONGO_USER = get_config().get(section=section, option='MONGO_USER', fallback='')
        MONGO_PSW = get_config().get(section=section, option='MONGO_PSW', fallback='')
        AUTH_SOURCE = get_config().get(section=section, option='AUTH_SOURCE', fallback='')
        mongo_url = 'mongodb://{0}:{1}@{2}/?authSource={3}&replicaSet=rs01'.format(
            MONGO_USER, MONGO_PSW, MONGO_HOST, AUTH_SOURCE)
        client = MongoClient(mongo_url)
        db = client.get_database(MONGO_DB, read_preference=ReadPreference.SECONDARY_PREFERRED)
        coll = db["ware_detail_price_statistic"]
        coll.insert_one(result_)  # Collection.insert() was removed in PyMongo 4
        client.close()

You can add extension middleware like this to match whatever your own business scenario needs; a minimal sketch of how to enable it follows.
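
A minimal sketch of the wiring (the module path and the scrapy.cfg values below are placeholders, not from the original project):

# settings.py
EXTENSIONS = {
    # "myproject.extensions" is a placeholder; point it at wherever SpiderStats lives.
    "myproject.extensions.SpiderStats": 500,
}

# SpiderStats reads its MongoDB credentials via scrapy.utils.conf.get_config(),
# which parses scrapy.cfg, so that file needs a matching section, e.g.:
#
# [mongo_cfg_prod]
# MONGO_HOST = mongo1:27017,mongo2:27017
# MONGO_DB = crawler_stats
# MONGO_USER = user
# MONGO_PSW = password
# AUTH_SOURCE = admin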



