Full-text search has traditionally been about the indexing and ranking of a corpus of unstructured text content.

The vector space model (VSM) and its cousins, in addition to structural ranking algorithms such as PageRank, have been the authoritative ways of ranking documents.

However, with the recent proliferation of personalization, analytics, social networks and the like, there are more and more ways of determining document relevance, both globally and on a per-user basis. These are often called relevance signals.

Global relevance signals are simple to incorporate into Solr, either as a separate field with a query-time boost, or as an index-time document boost.
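As a concrete illustration of the query-time-boost approach: assuming a numeric field named popularity (the field name is an assumption here) holding a global signal, a dismax/edismax boost function can fold it into the score:

```
http://localhost:8983/solr/select?q=laptop&defType=edismax&bf=log(sum(popularity,1))
```

The log(sum(...,1)) wrapper is just one common way to dampen the boost and avoid log(0) for documents with no signal.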

However, there has traditionally been no satisfactory way of incorporating per-user relevance signals into Lucene/Solr's search process. We'll therefore focus on user-specific relevance signals for the rest of this document…

Before going further, here are some examples of user-specific relevance signals:

  • clickstream data
  • search logs
  • user preferences
  • likes, +1s, etc
  • purchase history
  • blog, Twitter, Tumblr feeds
  • social graph

I'm going to describe a system for incorporating user-specific relevance signals into your Solr searches in a scalable fashion.


Documents

In your Lucene/Solr index, store the documents you want searched: products, companies, jobs, etc. Multiple data types are fine, but each document needs a unique id.
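In schema.xml terms, that typically means a stored, required string field declared as the unique key (the field name "id" is just the conventional choice):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```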

Relevance signals

In a separate SQL/NoSQL database, store your relevance signals. Structure them so that no complex joins are required, keyed by user id: a single get() query should retrieve all the relevance data needed for that user.
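One shape this can take, with Redis as the store (the relevance:&lt;userid&gt; key convention and the data shown are hypothetical): the user's entire signal set lives under one key, so retrieval is a single GET:

```
redis> GET relevance:201
"{\"SPN332\": 9, \"SPN107\": 3}"
```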

One way of doing this is to store the relevance data as JSON, with the object ids as the field names.

You should also preferably pre-process the relevance data so that each signal carries a float/integer "score" or "value".

For example:
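The ids and scores here are illustrative:

```json
{
  "SPN332": 9,
  "SPN107": 3,
  "SPN845": 7
}
```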


In this JSON example, the SPNxxx keys are product ids, and the integer values are the scores.


The FunctionQuery

Now implement a custom FunctionQuery in Solr which accepts the user id as a parameter. Usage will look something like this: influence(201)^0.5, where influence is the name of the FunctionQuery, 201 is the user id, and 0.5 is the boost weight.
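Wiring this up means registering a custom ValueSourceParser in solrconfig.xml (the class name below is hypothetical):

```xml
<valueSourceParser name="influence"
                   class="com.example.InfluenceValueSourceParser"/>
```

Solr then recognizes influence(...) anywhere a function query is accepted, e.g. in a bf parameter or a _val_ hook.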

In the FunctionQuery, issue the DB request and obtain the relevance-signal JSON, e.g. the example above.

Now within the ValueSource itself, load the document ids via the FieldCache and check them against the JSON. The code looks something like:

@Override public DocValues getValues(Map context, IndexReader reader) throws IOException {
    // Uninverted id field: internal doc id -> unique id string
    final String[] lookup = FieldCache.DEFAULT.getStrings(reader, idField);
    return new DocValues() {
      @Override public float floatVal(int doc) {
        final String id = lookup[doc];
        if (jsonObj == null) return 0f;
        Object v = jsonObj.get(id);
        if (v == null) return 0f;
        // JSON numbers may parse as Integer, Long, Double, etc.
        if (v instanceof Number) {
          return ((Number) v).floatValue();
        }
        return 0f;
      }

      @Override public String toString(int doc) {
        return "influence=" + floatVal(doc);
      }
    };
}

What's happening here: for each candidate document, its id field value is fetched from the FieldCache. With our JSON example above, that value could be something like "SPN332".

That id is then looked up in the JSON object. If present, its integer/float value is returned as the FunctionQuery score for that doc; otherwise 0 is returned.
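The per-document scoring rule boils down to a null-safe map lookup. Here it is in isolation as plain Java, with a HashMap standing in for the parsed JSON (the class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class InfluenceLookup {
    // Returns the user's signal score for a document id, or 0 when
    // the user has no signal for that id (or no signals at all).
    public static float score(Map<String, Number> signals, String id) {
        if (signals == null) return 0f;
        Number v = signals.get(id);
        return v == null ? 0f : v.floatValue();
    }

    public static void main(String[] args) {
        Map<String, Number> signals = new HashMap<>();
        signals.put("SPN332", 9);  // ids/scores mirror the JSON example
        signals.put("SPN107", 3);
        System.out.println(score(signals, "SPN332")); // 9.0
        System.out.println(score(signals, "SPN999")); // 0.0
    }
}
```

Documents the user has no signal for fall back to 0, so the ^0.5 boost simply has no effect on them.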