Web scraping - how to use the example of scrapy-redis


I have read the example of scrapy-redis but still don't quite understand how to use it.

I have run the spider named dmoz and it works well. But when I start the spider named mycrawler_redis I get nothing.

Besides, I'm quite confused about how the request queue is set. I didn't find any piece of code in the example project that illustrates the request queue setting.

And if spiders on different machines want to share the same request queue, how can that be done? It seems that I should first make the slave machine connect to the master machine's redis, but I'm not sure where to put the relevant code: in spider.py, or typed into the command line?

I'm quite new to scrapy-redis; any help is appreciated!

If the example spider is working and your custom one isn't, there must be something you have done wrong. Update your question with your code, including the relevant parts, so we can see what went wrong.

"Besides, I'm quite confused about how the request queue is set. I didn't find any piece of code in the example project that illustrates the request queue setting."

As far as the spider is concerned, this is done with the appropriate project settings, for example if you want FIFO:

# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Don't cleanup redis queues, allows pause/resume crawls.
SCHEDULER_PERSIST = True

# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'
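If you want an ordering other than FIFO, this revision of scrapy-redis also ships a stack and a priority queue; a settings sketch (pick one of the three queue classes):

```python
# LIFO: schedule requests using a stack instead of a queue.
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'

# Or schedule requests by their priority attribute.
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'
```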

As far as the implementation goes, the queuing is done via RedisSpider, which your spider must inherit from. You can find the code for enqueuing the requests here: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/scheduler.py#L73

As for the connection, you don't need to manually connect to the redis machine; you just specify the host and port information in the settings:

REDIS_HOST = 'localhost'
REDIS_PORT = 6379

And the connection is configured in connection.py: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/connection.py and an example of its usage can be found in several places: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/pipelines.py#L17
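Note that a RedisSpider sits idle until something is pushed onto its start-URLs key, which is one common reason a spider like mycrawler_redis produces nothing. A sketch of seeding the queue from the command line, assuming the default '&lt;spider_name&gt;:start_urls' key and an illustrative URL:

```shell
# Push a start URL onto the list the spider is polling.
redis-cli lpush mycrawler_redis:start_urls http://www.example.com/
```

On a multi-machine setup, run this against the master's redis (the same host/port the slaves have in REDIS_HOST/REDIS_PORT), and every connected spider will pick work from that shared queue.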