Large-scale Search

Practical Relevance Ranking for 11 Million Books, Part 3: Document Length Normalization.

In Part 2 we argued that most relevance ranking algorithms used for ranking text documents are based on three fundamental features:

Practical Relevance Ranking for 11 Million Books, Part 2: Document Length and Relevance Ranking


In Part 1, we made the argument that the one to two orders of magnitude of difference in document length between HathiTrust books and the documents used in standard test collections affects all aspects of relevance ranking. 

Practical Relevance Ranking for 11 Million Books, Part 1


This is the first in a series of posts about our work towards practical relevance ranking for the 11 million books in the HathiTrust full-text search application.

A Tale of Two Solrs

When we first started working on large scale search we confronted the issue of whether to index pages or complete books as our fundamental unit of indexing.[i]   We had some concerns about indexing on the page level.  We knew we would need to scale to 10-20 million books and at an average of 300 pages per book that comes out to about 6 billion pages.  At that time we did not think that Solr would scale to 6 billion pages.[ii]  If we indexed by page, we also wanted to be able

Multilingual Issues Part 1: Word Segmentation

At the core of the Solr/Lucene search engine is an inverted index.  The inverted index has a list of tokens and a list of the documents that contain those tokens. In order to index text, Solr needs to break strings of text into “tokens.”  In English and Western European languages spaces are used to separate words, so Solr uses whitespace to determine what is a token for indexing.   In a number of languages the words are not separated by spaces.
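The idea behind an inverted index can be sketched in a few lines. This is illustrative only (not Solr's implementation), using the same whitespace tokenization described above:

```python
# Minimal sketch of an inverted index built with whitespace tokenization.
# Illustrative only -- Solr/Lucene's real analyzers and index structures
# are far more sophisticated.
from collections import defaultdict

def tokenize(text):
    # Split on whitespace, as a simple analyzer would for English text.
    return text.lower().split()

def build_inverted_index(docs):
    # Map each token to the set of document ids that contain it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in tokenize(text):
            index[token].add(doc_id)
    return index

docs = {
    1: "full text search of books",
    2: "search at large scale",
}
index = build_inverted_index(docs)
print(sorted(index["search"]))  # [1, 2] -- both documents contain "search"
```

For text written without spaces, `tokenize` would return an entire run of characters as a single token, which is why languages that are not whitespace-delimited need their own segmentation strategy.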

Forty Days and Forty Nights: Re-indexing 7+ million books (part 1)


Forty days and forty nights; That’s how long we estimated it would take to re-index all 7+ million volumes in HathiTrust. Because of this forty day turnaround time, when we found a problem with our current indexing, we were reluctant to do a complete re-index. Whenever feasible we would just re-index the affected materials.

Too Many Words Again!

After Mike McCandless increased the limit of unique words in a Lucene/Solr index segment from 2.1 billion words to around 274 billion words, we thought we didn't need to worry about having too many words (see Too Many Words!). We recently discovered that we were wrong!

Making personal collections from Large Scale Search Results

We just released a new feature in our full-text Large Scale Search. When you do a search, you will see check boxes next to each search result. You can select items you want from the search results and create a personal collection. This should make it much easier to do repeated searches and explore a targeted subset of the HathiTrust volumes. If you are not logged in, the collection will be temporary. If you log in, you can save the collection permanently.

Too Many Words!

When we read that the Lucene index format used by Solr has a limit of 2.1 billion unique words per index segment, we didn't think we had to worry.  However, a couple of weeks ago, after we optimized the indexes on each shard down to one segment, we started seeing Java "ArrayIndexOutOfBounds" exceptions in our logs.  After a bit of investigation we determined that indeed, most of our index shards contained over 2.1 billion unique words and some queries were triggering these exceptions.  Currently ea
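The 2.1 billion figure lines up with the largest signed 32-bit integer (Java's `int`), which is presumably why overflowing the limit surfaces as array-index errors rather than a clean failure. A quick arithmetic check:

```python
# The ~2.1 billion unique-word limit matches the largest signed 32-bit
# integer. A term count past this wraps negative in a Java int, which
# is consistent with the ArrayIndexOutOfBounds exceptions described above.
max_int32 = 2**31 - 1
print(max_int32)        # 2147483647
print(max_int32 / 1e9)  # roughly 2.1 (billion)
```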

Performance at 5 million volumes

On November 19, 2009, we put new hardware into production to provide full-text searching against about 4.6 million volumes.  Currently we have about 5.3 million volumes.  The average response time is about  3 seconds,  90% of queries take under 4 seconds, 9% of queries take between 4 seconds and 24 seconds, and 1% of queries take longer than 24 seconds.

Scaling up Large Scale Search from 500,000 volumes to 5 Million volumes and beyond

To scale up from 500,000 volumes of full-text to 5 million, we decided to use Solr's distributed search feature, which allows us to split an index into a number of separate indexes (called "shards"). The shards are searched in parallel and the results then aggregated, so performance is better than with a single very large index.
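The scatter/gather pattern behind distributed search can be sketched like this. This is a toy illustration, not Solr's internals; the hard-coded shard contents and occurrence-count "score" are stand-ins:

```python
# Toy sketch of scatter/gather search across index shards.
# Not Solr's implementation: shards are plain dicts of doc_id -> text,
# and the "score" is just the query term's occurrence count.
import heapq

def search_shard(shard, term):
    # Each shard scores only its own documents.
    return [(text.split().count(term), doc_id)
            for doc_id, text in shard.items()
            if term in text.split()]

def distributed_search(shards, term, top_k=3):
    # Scatter the query to every shard, then gather the partial
    # result lists and merge them by score.
    hits = []
    for shard in shards:
        hits.extend(search_shard(shard, term))
    return heapq.nlargest(top_k, hits)

shards = [
    {"a1": "search engines at scale", "a2": "book metadata"},
    {"b1": "full text search search"},
]
print(distributed_search(shards, "search"))  # [(2, 'b1'), (1, 'a1')]
```

Because each shard only scans its own slice of the collection, the per-shard work shrinks as shards are added, at the cost of the final merge step.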

Sizing the shards

Common Word list for CommonGrams

This is the list of 400 (actually 415) common words used for our current Solr configuration as described in Tuning Search Performance.

New Hardware for searching 5 million+ volumes of full-text

On November 19, 2009, we put new hardware into production to provide full-text searching against about 4.6 million volumes.  Currently we have about 5.3 million volumes indexed.  Below is a brief description of our current production hardware.  Future posts will give  details about performance and background on our experiments with different system architectures and configurations.

Hardware details

Solr Server configuration

Tuning search performance

Before we implemented the CommonGrams Index, our slowest query with the standard index was “the lives and literature of the beat generation” which took about 2 minutes  for the 500,000 volume index.  When we implemented the CommonGrams index, that query took only 3.6 seconds. 

Slow Queries and Common Words (Part 2)

In part 1 we talked about why some queries are slow and the effect of these slow queries on overall performance. The slowest queries are phrase queries containing common words.  These queries are slow because the size of the positions index for common terms on disk is very large and disk seeks are slow.  These long positions index entries cause three problems relating to overall response time:
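The CommonGrams approach sidesteps those long positions entries by fusing each common word with its neighbor into a single bigram token at index time, so a phrase like "the lives" matches one rare token instead of walking the huge positions list for "the". A rough sketch of the token stream (with a tiny hypothetical common-word list, not our full 400-word list):

```python
# Sketch of CommonGrams-style tokenization: emit each word, plus a fused
# bigram token whenever the word or its successor is a common word.
# Illustrative only -- Solr's CommonGramsFilter handles many more cases.
COMMON = {"the", "and", "of"}

def common_grams(words):
    tokens = []
    for i, word in enumerate(words):
        tokens.append(word)
        if i + 1 < len(words) and (word in COMMON or words[i + 1] in COMMON):
            tokens.append(word + "_" + words[i + 1])
    return tokens

print(common_grams("the lives and literature".split()))
# ['the', 'the_lives', 'lives', 'lives_and', 'and', 'and_literature', 'literature']
```

A phrase query can then be rewritten against the bigram tokens (`the_lives`, `lives_and`, ...), each of which is far rarer on disk than the common words themselves.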

Current Hardware Used for Testing

This is a brief note on the  current hardware and software environment we are using for Solr testing.

Solr Servers

  • Two Dell PowerEdge 1950 blades
  • 2 x Dual Core Intel Xeon 3.0 GHz 5160 Processors
  • 8GB - 32GB RAM depending on the test configuration
  • Red Hat Enterprise Linux 5.3 (kernel: 2.6.18 PAE)
  • Java(TM) SE Runtime Environment (build: 1.6.0_11-b03)
  • Solr 1.3
  • Tomcat 5.5.26

Storage Server

Slow Queries and Common Words (Part 1)

All Queries are not created equal

Update on Testing (Memory and Load tests)

Since we finished the work described in the Large Scale Search Report, we have made some changes to our test protocol and upgraded our Solr implementations to Solr 1.3. We have completed some testing with increased memory and some preliminary load testing.

The new test protocol has these features

Large-scale Full-text Indexing with Solr

[Copied from the Blog for Library Technology]

A recent blog pointed out that search is hard when there are many indexes to search because results must be combined. Search is hard for us in DLPS for a different reason. Our problem is the size of the data.