Scrapy, being based on Twisted, introduces an incredible host of obstacles to writing self-contained unit tests easily and efficiently:

1. You can't call reactor.run() multiple times.
2. You can't stop the reactor multiple times either, so you can't blindly call "crawler.signals.connect(reactor.stop, signal=signals.spider_closed)" in every test (the naive pattern sketched just below trips over both of these).
3. The reactor runs in its own thread, so your failed assertions never make it back to the main unittest thread: test failures are raised as AssertionErrors, but unittest never knows about them.
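
To see the first two obstacles in action, here's a rough sketch of the naive approach (not the base class described below), using the same pre-1.0 Crawler API; it works for the first test method and breaks for every one after it:

import unittest
from scrapy import signals
from scrapy.crawler import Crawler
from scrapy.spider import Spider
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor

class NaiveScrapyTestCase(unittest.TestCase):
	def test_first(self):
		crawler = Crawler(get_project_settings())
		# Stop the reactor as soon as the spider closes.
		crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
		crawler.configure()
		crawler.crawl(Spider(""))
		crawler.start()
		reactor.run()  # works once: blocks until spider_closed stops the reactor
	def test_second(self):
		# Doing the same thing again fails: the reactor was already stopped in
		# test_first, and a second reactor.run() raises ReactorNotRestartable.
		pass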

To get around these hurdles, I created a BaseScrapyTestCase class that uses tl.testing's ThreadAwareTestCase and the following workarounds.

from scrapy import signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings
from tl.testing.thread import ThreadAwareTestCase, ThreadJoiner
from twisted.internet import reactor

class BaseScrapyTestCase(ThreadAwareTestCase):
	in_suite = False
	def setUp(self):
		self.last_crawler = None
		self.settings = get_project_settings()
	def run_reactor(self, called_from_suite=False):
		# In suite mode, only the suite-level call actually starts the reactor.
		if not called_from_suite and BaseScrapyTestCase.in_suite:
			return
		self.last_crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
		reactor.run()
	def queue_spider(self, spider, callback):
		crawler = Crawler(self.settings)
		self.last_crawler = crawler
		crawler.signals.connect(callback, signal=signals.spider_closed)
		crawler.configure()
		crawler.crawl(spider)
		crawler.start()
		return crawler
	def wrap_asserts(self, fn):
		# Run the assertions in a joinable thread so unittest sees any failures.
		with ThreadJoiner(1):
			self.run_in_thread(fn)

You'll use it like so:

from scrapy.spider import Spider

class SimpleScrapyTestCase(BaseScrapyTestCase):
	def test_suite(self):
		BaseScrapyTestCase.in_suite = True
		self.do_test_simple()
		self.run_reactor(called_from_suite=True)
	def do_test_simple(self):
		spider = Spider("")
		def _fn():
			def __fn():
				self.assertTrue(True)  # replace with real assertions about the crawl
			self.wrap_asserts(__fn)
		self.queue_spider(spider, _fn)
		self.run_reactor()

1. Call run_reactor() at the end of the test method.
2. You have to place your assertions in their own function, which wrap_asserts() runs inside a ThreadJoiner so that unittest actually learns about assertion failures.
3. If you're testing multiple spiders, just call queue_spider() for each and run_reactor() at the end (see the sketch after this list).
4. BaseScrapyTestCase keeps track of the crawlers as they're created and makes sure to attach the reactor.stop signal only to the last one.
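
To make point 3 concrete, here's a rough sketch of a multi-spider test; FirstSpider and SecondSpider (and the myproject.spiders module) are hypothetical stand-ins for spiders from your own project:

from myproject.spiders import FirstSpider, SecondSpider  # hypothetical spiders

class MultiSpiderTestCase(BaseScrapyTestCase):
	def test_both_spiders(self):
		def _first_closed():
			def __check():
				self.assertTrue(True)  # assertions about FirstSpider's results
			self.wrap_asserts(__check)
		def _second_closed():
			def __check():
				self.assertTrue(True)  # assertions about SecondSpider's results
			self.wrap_asserts(__check)
		self.queue_spider(FirstSpider(), _first_closed)
		self.queue_spider(SecondSpider(), _second_closed)
		# One reactor run covers both crawls; reactor.stop is attached to the last crawler only.
		self.run_reactor()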

Let me know if you come up with a better/more elegant way of testing scrapy spiders!