Our blog for cool new technologies and random stuff.

Calling Scrapy from a Python script

When you need to do some web scraping in Python, an excellent choice is the Scrapy framework. Not only does it take care of most of the networking (HTTP, SSL, proxies, etc.), but it also facilitates the process of extracting data from the web by providing things such as nifty XPath selectors.

Scrapy is built upon the Twisted networking engine. A limitation of its core component, the reactor, is that it cannot be restarted. This can cause trouble if we are trying to devise a mechanism to run Scrapy spiders independently from a Python script (and not from the Scrapy shell). Say, for example, that we want to implement a Python function that receives some parameters, performs a search/web scraping on some sites, and returns a list of scraped items. A naive solution, in which the function simply creates a crawler and starts the reactor on every call, will not work: each call would need the Twisted reactor restarted, and this is unfortunately not possible.
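To make the limitation concrete, here is a stdlib-only sketch (no Scrapy or Twisted required; OneShotReactor is an invented stand-in, not a real Twisted class) of an engine that, like the Twisted reactor, refuses to be started twice in the same process:

```python
class OneShotReactor:
    """Toy stand-in for the Twisted reactor: it can be started only once."""

    def __init__(self):
        self._started = False

    def run(self, job):
        if self._started:
            # The real reactor raises twisted.internet.error.ReactorNotRestartable
            raise RuntimeError("reactor not restartable")
        self._started = True
        return job()

reactor = OneShotReactor()
print(reactor.run(lambda: "first crawl"))  # works fine
try:
    reactor.run(lambda: "second crawl")    # a second start is rejected
except RuntimeError as exc:
    print(exc)
```

This is why a plain function that starts the reactor works on the first call and then fails on every subsequent one.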

A workaround for this is to run Scrapy in its own process. After doing a search, I could not get any of the existing solutions to work with the latest Scrapy. However, one of them used multiprocessing and came pretty close! Here is an updated version for Scrapy 0.13:

from scrapy import project, signals
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess
from scrapy.xlib.pydispatch import dispatcher
from multiprocessing.queues import Queue
import multiprocessing

class CrawlerWorker(multiprocessing.Process):

    def __init__(self, spider, result_queue):
        multiprocessing.Process.__init__(self)
        self.result_queue = result_queue

        self.crawler = CrawlerProcess(settings)
        if not hasattr(project, 'crawler'):
            self.crawler.install()
        self.crawler.configure()

        self.items = []
        self.spider = spider
        # Collect each scraped item as the engine passes it along
        dispatcher.connect(self._item_passed, signals.item_passed)

    def _item_passed(self, item):
        self.items.append(item)

    def run(self):
        self.crawler.crawl(self.spider)
        self.crawler.start()
        self.crawler.stop()
        # Hand the collected items back to the parent process
        self.result_queue.put(self.items)

One way to invoke this, say inside a function, would be:

        result_queue = Queue()
        crawler = CrawlerWorker(MySpider(myArgs), result_queue)
        crawler.start()
        for item in result_queue.get():
            yield item

where MySpider is, of course, the class of the spider you want to run, and myArgs are the arguments you wish to invoke the spider with.
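For readers who want to see the pattern in isolation, here is a Scrapy-free sketch using only the standard library; scrape_job and its fake results are invented placeholders for the real crawl:

```python
import multiprocessing

def scrape_job(url):
    # Stand-in for the real crawl; imagine the Twisted reactor running in here.
    return [{"url": url, "title": "example"}]

class Worker(multiprocessing.Process):
    """Run a job in a child process and ship the results back through a queue."""

    def __init__(self, url, result_queue):
        multiprocessing.Process.__init__(self)
        self.url = url
        self.result_queue = result_queue

    def run(self):
        # Executed in the child process, so any one-shot machinery
        # (like the reactor) gets a fresh process on every call.
        self.result_queue.put(scrape_job(self.url))

if __name__ == "__main__":
    result_queue = multiprocessing.Queue()
    worker = Worker("http://example.com/", result_queue)
    worker.start()
    items = result_queue.get()   # blocks until the child publishes its results
    worker.join()
    print(items)
```

Because the crawl happens in a fresh child process each time, the "reactor cannot be restarted" limitation never bites: each process starts its own reactor exactly once.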

27 thoughts on “Calling Scrapy from a Python script”

  1. You do not set any environment variable?
    I'm just new to Scrapy and still get an error with
    "crawler = CrawlerWorker(MySpider('url='), result_queue)"

    What should MySpider be? The class name? The project name? The name of the crawler (name="myspider" in the class)?


  2. It works only for one running process…

    When I run this code for two or more processes concurrently

    for spider in spiders:
        crawler = CrawlerWorker(spider(myArgs), result_queue)

    I get errors from Twisted

    Unhandled Error
    Traceback (most recent call last):
    File "/usr/lib64/python2.7/site-packages/twisted/python/", line 84, in callWithLogger
    return callWithContext({"system": lp}, func, *args, **kw)
    File "/usr/lib64/python2.7/site-packages/twisted/python/", line 69, in callWithContext
    return{ILogContext: newCtx}, func, *args, **kw)
    File "/usr/lib64/python2.7/site-packages/twisted/python/", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
    File "/usr/lib64/python2.7/site-packages/twisted/python/", line 81, in callWithContext
    return func(*args,**kw)
    — <exception caught here> —
    File "/usr/lib64/python2.7/site-packages/twisted/internet/", line 631, in _doReadOrWrite
    why = selectable.doWrite()
    File "/usr/lib64/python2.7/site-packages/twisted/internet/", line 1094, in doWrite
    raise RuntimeError, "doWrite called on a %s" % reflect.qual(self.__class__)
    exceptions.RuntimeError: doWrite called on a twisted.internet.tcp.Port

  3. The Twisted errors in the example above were eliminated by setting WEBSERVICE_ENABLED and TELNETCONSOLE_ENABLED to False. With that, I can run any number of processes, each with its own spider, without errors.
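For reference, in Scrapy 0.13 those settings could be overridden programmatically roughly like this (a sketch, assuming the settings.overrides mechanism of that era), before the CrawlerProcess is created:

```python
from scrapy.conf import settings

# Both services bind TCP ports; with several crawler processes running
# side by side they collide over those ports, so turn them off.
settings.overrides['WEBSERVICE_ENABLED'] = False
settings.overrides['TELNETCONSOLE_ENABLED'] = False
```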

  4. Hi Alan,

    I am learning Scrapy and Python; I am basically a Java developer, and I am using the Eclipse PyDev IDE for this development, so I need to install Scrapy in my Eclipse. Please help me out with how to achieve this.


  12. Hey Alan, thanks for the example! I'm building a spider for my site Spoots to crawl all pages and get social stats, but on Mac it is not easy to install :'(

    Luckily, everything goes well on my server.

