Supermind Search Consulting Blog 
Solr - Elasticsearch - Big Data

Posts about Python

Monier-Williams Sanskrit-English-IAST search engine

Posted by Kelvin on 17 Sep 2015 | Tagged as: programming, Lucene / Solr / Elasticsearch / Nutch, Python

I just launched a search application for the Monier-Williams dictionary, which is the definitive Sanskrit-English dictionary.

See it in action here: http://sanskrit.supermind.org

The app is built in Python and uses the Whoosh search engine. I chose Whoosh over Solr or Elasticsearch because I wanted to try building a search app that didn't depend on Java.

Features include:
– full-text search in Devanagari, English, IAST, ASCII and Harvard-Kyoto (HK)
– results link to the original page scans
– more frequently occurring word senses are boosted higher in search results
– the MW level or depth of a word is shown visually through list indentation
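
If you haven't used Whoosh before, here is a minimal sketch of what a Whoosh index and query look like. The schema, field names and sample entry below are assumptions for illustration only, not the actual schema behind sanskrit.supermind.org:

import os
from whoosh.fields import Schema, TEXT, ID
from whoosh.index import create_in
from whoosh.qparser import MultifieldParser

# Hypothetical schema: one field per script/transliteration plus the gloss.
schema = Schema(
    headword=ID(stored=True),
    devanagari=TEXT(stored=True),
    iast=TEXT(stored=True),
    definition=TEXT(stored=True),
)

if not os.path.exists("mw_index"):
    os.mkdir("mw_index")
ix = create_in("mw_index", schema)

writer = ix.writer()
writer.add_document(
    headword=u"dharma",
    devanagari=u"धर्म",
    iast=u"dharma",
    definition=u"that which is established or firm; law, duty",
)
writer.commit()

# Query several fields at once, the way the app accepts Devanagari,
# IAST or plain ASCII input.
with ix.searcher() as searcher:
    parser = MultifieldParser(["devanagari", "iast", "definition"], schema=ix.schema)
    results = searcher.search(parser.parse(u"dharma"), limit=10)
    for hit in results:
        print(hit["headword"] + ": " + hit["definition"])

Because Whoosh is pure Python, the whole stack installs with pip and runs without a JVM.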

Properly unit testing scrapy spiders

Posted by Kelvin on 20 Nov 2014 | Tagged as: crawling, Python

Scrapy, being based on Twisted, throws up a host of obstacles to writing self-contained unit tests easily and efficiently:

1. You can't call reactor.run() multiple times (see the snippet right after this list).
2. You can't stop the reactor multiple times either, so you can't blindly call "crawler.signals.connect(reactor.stop, signal=signals.spider_closed)" for every crawler.
3. The reactor runs in its own thread, so failed assertions never reach the main unittest thread: test failures surface as assertion errors, but unittest never registers them.
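
To make obstacle 1 concrete, here's a tiny standalone snippet (not from the original test code) showing what Twisted does when you try to restart the reactor:

from twisted.internet import reactor

# Schedule an immediate stop so the first run() returns.
reactor.callLater(0, reactor.stop)
reactor.run()    # first run: starts, then stops almost immediately

# A second run() raises twisted.internet.error.ReactorNotRestartable.
reactor.run()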

To get around these hurdles, I created a BaseScrapyTestCase class that uses tl.testing's ThreadAwareTestCase and the following workarounds.

# Imports assumed for this snippet (Scrapy 0.24-era API and tl.testing):
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings
from tl.testing.thread import ThreadAwareTestCase, ThreadJoiner
from twisted.internet import reactor

class BaseScrapyTestCase(ThreadAwareTestCase):
    # Set to True by a suite that queues several spiders and runs the
    # reactor only once at the end.
    in_suite = False

    def setUp(self):
        self.last_crawler = None
        self.settings = get_project_settings()

    def run_reactor(self, called_from_suite=False):
        # When running inside a suite, individual test methods skip this;
        # only the suite-level call actually starts the reactor.
        if not called_from_suite and BaseScrapyTestCase.in_suite:
            return
        log.start()
        # Stop the reactor only when the last queued spider closes.
        self.last_crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
        reactor.run()

    def queue_spider(self, spider, callback):
        crawler = Crawler(self.settings)
        self.last_crawler = crawler
        crawler.signals.connect(callback, signal=signals.spider_closed)
        crawler.configure()
        crawler.crawl(spider)
        crawler.start()
        return crawler

    def wrap_asserts(self, fn):
        # Run the assertions in a joined thread so unittest sees failures.
        with ThreadJoiner(1):
            self.run_in_thread(fn)

You'll use it like so:

# Assumed import for this example (Scrapy 0.24-era path):
from scrapy.spider import Spider

class SimpleScrapyTestCase(BaseScrapyTestCase):
    def test_suite(self):
        BaseScrapyTestCase.in_suite = True
        self.do_test_simple()
        self.run_reactor(True)

    def do_test_simple(self):
        spider = Spider("site.com")
        def _fn():
            def __fn():
                # A deliberately failing assertion, to show that failures
                # inside wrap_asserts() are reported to unittest.
                self.assertTrue(False)
            self.wrap_asserts(__fn)
        self.queue_spider(spider, _fn)
        self.run_reactor()

1. Call run_reactor() at the end of the test method.
2. Place your assertions in their own function, which gets called within a ThreadJoiner so that unittest knows about assertion failures.
3. If you're testing multiple spiders, just call queue_spider() for each and run_reactor() once at the end.
4. BaseScrapyTestCase keeps track of the crawlers created and makes sure to attach the reactor.stop signal only to the last one.

Let me know if you come up with a better or more elegant way of testing Scrapy spiders!