HTRC User Getting Started FAQ

Introduction

Q: What is HTRC?

A: HTRC is the research arm of HathiTrust. It is a partnership between Indiana University (IU) and the University of Illinois at Urbana-Champaign (UIUC), involving the IU Libraries, the Pervasive Technology Institute, and the School of Informatics and Computing at IU, along with the UIUC Libraries and the Graduate School of Library and Information Science at UIUC.

Q: What are the HTRC Services?

A: We have created two platforms for you to experiment with. The main HTRC services (sometimes referred to as the production stack) give you a Portal and a Workset Builder.
From the Portal you can log in and run analytic algorithms against a set of predefined collections of volumes. These algorithms, powered by the SEASR toolkit, run against the HathiTrust volumes that are in the public domain (close to 3 million).
The Workset Builder is a search interface for the HathiTrust public domain corpus; search results can be saved as a "workset", a collection of volumes against which the text mining algorithms are run.
In addition to the main services, we also provide a Sandbox stack with the same tools. The Sandbox runs against non-Google-scanned content (about 260,000 volumes). Its advantage is that you can access the index and the Data API directly, so you can write your own algorithms.

Q: How do I use the HTRC?

A: The HTRC has several overarching paradigms: worksets, algorithms, jobs, and results.

  1. Worksets are collections of volumes and other data to be processed. Worksets are built using software that functions like many library catalog systems. In the Workset Builder application (often referred to as Blacklight), you can search for, view, and select the items that you would like to process.
  2. Algorithms are research methodologies expressed in executable code; that is, they are programs that run one or more functions against your workset. You can choose from a set of algorithms that have been integrated into the HTRC, and you can customize the parameters for each one.
  3. Jobs: When you hit submit, you are submitting a job. A job is a set of instructions executed by one of the computing resources available to the HTRC. You can view the status of the jobs you have submitted, and you can delete a job if you find that you have made an error in your setup.
  4. Results: When your job has completed, you can view its results in the HTRC or download them.

Q: What types of data and metadata does HTRC provide?

A: HTRC currently provides the OCR text of the public domain corpus, along with MARC and METS XML metadata.

Access and Services

Q: How do I obtain an account to access HTRC Production Portal?

A: You may sign up for an account by going to the HTRC Production Portal at http://htrc2.pti.indiana.edu and choosing "Sign up" from the menu.

Q: How do I access HTRC Production Stack?

A: This table lists the HTRC Production Stack endpoints:

Service | Endpoint | Comments
Portal | http://htrc2.pti.indiana.edu | The portal allows you to browse volume lists and algorithms, execute algorithms, and view results.
Blacklight | http://htrc2.pti.indiana.edu/blacklight | The Blacklight search interface allows you to search for volumes and create volume lists that can be used by algorithms. It provides a GUI interface to our Solr index.

Q: How do I obtain an account to access HTRC Sandbox?

A: Please send an email to htrc-tech-help-l@list.indiana.edu (a list read by HTRC internal staff only) to request an account. Include your name and contact information, and indicate that you would like to access the HTRC Sandbox.

Q: How do I access HTRC Sandbox?

A: This table lists the HTRC Sandbox endpoints:

Service | Endpoint | Comments
Portal | https://sandbox.htrc.illinois.edu/HTRC-UI-Portal2/ | The portal allows you to browse volume lists and algorithms, execute algorithms, and view results.
Blacklight | https://sandbox.htrc.illinois.edu/blacklight | The Blacklight search interface allows you to search for volumes and create volume lists that can be used by algorithms. It provides a GUI interface to our Solr index.
Data API | https://sandbox.htrc.illinois.edu/data-api | The HTRC Data API provides access to the corpus data and METS XML via a RESTful web service. Please refer to the HTRC Data API Guide for details on how to use it.
Solr Proxy | http://sandbox.htrc.illinois.edu/solr | The HTRC Solr Proxy provides access to the Solr index. A sample query is: http://sandbox.htrc.illinois.edu/solr/ocr/select?q=shakespeare. Please refer to the Solr Proxy API Guide for details on queries.
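
The Solr Proxy speaks standard Solr query syntax, so you can script queries against it. Below is a minimal Python sketch (using the third-party requests library) that runs the sample query above and prints the number of matches. The wt=json response parameter and the response structure are standard Solr conventions, but the exact field names in the HTRC schema are assumptions.

import requests

# Query the read-only HTRC Solr Proxy with the sample query from the table above.
SOLR_SELECT = "http://sandbox.htrc.illinois.edu/solr/ocr/select"

params = {
    "q": "shakespeare",  # full-text query term
    "wt": "json",        # standard Solr parameter requesting a JSON response
    "rows": 10,          # number of documents to return
}

response = requests.get(SOLR_SELECT, params=params)
response.raise_for_status()

result = response.json()["response"]
print("Total matches:", result["numFound"])
for doc in result["docs"]:
    # "id" is the conventional Solr unique-key field; the actual HTRC
    # schema may name it differently.
    print(doc.get("id"))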


Q: What are the differences between the Production Stack and the Sandbox?

A: This table outlines the differences between the Production Stack and the Sandbox:

 | Production Stack | Sandbox
Purpose | A distributed, service-oriented cyberinfrastructure supporting digital humanities research and text analysis by HTRC members | A community asset open to interested users who want to try things out on a smaller scale
Number of machines | 9 | 1
Corpus | Full public domain set | Non-Google-scanned public domain subset
Number of volumes | 2.7 million | 250,000
Compute resource | A separate 128-node cluster | Local on the Sandbox
Accounts | Personal accounts | Pre-defined account pool
Account reclamation | No | Yes (accounts are reclaimed and reassigned after 30 days of inactivity)


Q: What is the HTRC Solr Proxy and how is it different from Apache Solr?

A: The HTRC Solr Proxy is a thin service in front of the Apache Solr services, added for security and auditing purposes. The Solr Proxy filters requests, allowing only read-only requests, to protect our indices from being modified; other than that, it is fully compatible with Apache Solr. Please see the Solr Proxy API User Guide.
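
To illustrate the read-only behavior, the sketch below sends one select request and one update request through the proxy: the select should succeed, while the update should be rejected. The exact error status returned for blocked requests is an assumption.

import requests

SOLR_BASE = "http://sandbox.htrc.illinois.edu/solr/ocr"

# A read-only select query is forwarded to Solr as usual.
r = requests.get(SOLR_BASE + "/select", params={"q": "shakespeare", "wt": "json"})
print("select:", r.status_code)  # expected: 200

# A write request should be blocked by the proxy. The exact status code
# (e.g. 403) is an assumption; the point is that the index cannot be
# modified through the proxy.
r = requests.post(SOLR_BASE + "/update", params={"commit": "true"})
print("update:", r.status_code)  # expected: an error, not 200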

Q: What is the difference between the HTRC Data API and HathiTrust Data API?

A: This table outlines the differences between the HTRC Data API and the HathiTrust Data API:

 | HTRC Data API | HT Data API
Purpose | To serve high-performance, large-scale algorithms and programs | To provide public users with volume retrieval capabilities
Throttling enforcement | No | Yes
Security | OAuth2 | OAuth
Bulk retrieval of volumes | Yes | No
Metadata available | METS | METS, MARC


Q: What is HTRC's non-consumptive research environment, the HTRC Data Capsule?

A: The HTRC Data Capsule provides a researcher with a virtual machine that they configure as they need, including loading the necessary software packages and data sets. When they are ready to run their analysis, they switch the capsule from maintenance mode to secure mode, in which their routines run without allowing content from the HT repository to leak out. When the job completes, the researcher receives an email giving the location from which to download the results. The HTRC Data Capsule is in alpha and undergoing internal testing.

Use

Q: How do I use the HTRC Data API?

A: Please see the HTRC Data API Users Guide.
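
As a rough orientation before you read the guide, here is a minimal Python sketch of a bulk volume retrieval call against the Sandbox Data API. The endpoint path, the volumeIDs parameter, the pipe separator, and the ZIP response are assumptions based on this FAQ; the guide is the authority on the actual request format.

import requests

DATA_API = "https://sandbox.htrc.illinois.edu/data-api/volumes"  # assumed endpoint path
TOKEN = "your-oauth2-access-token"  # obtained as described in the guide

volume_ids = ["mdp.39015001796500", "uc2.ark:/13960/t42r3rg51"]

# The Data API secures requests with OAuth2 (see the comparison table above).
response = requests.post(
    DATA_API,
    headers={"Authorization": "Bearer " + TOKEN},
    data={"volumeIDs": "|".join(volume_ids)},  # assumed parameter name and separator
    stream=True,
)
response.raise_for_status()

# Assuming the API streams back a ZIP archive of the requested volumes.
with open("volumes.zip", "wb") as out:
    for chunk in response.iter_content(chunk_size=8192):
        out.write(chunk)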

Q: How do I create and analyze worksets in the portal?

A: Worksets are collections of volumes from our collection. There are currently two types of worksets: basic and labeled. Basic worksets can be created with the Workset Builder or with the CSV upload functionality; labeled worksets can only be created by uploading a CSV.

Creating worksets with the Workset Builder

The easiest way to create a basic workset is to use the Workset Builder, which allows you to search across our collection.
All the items that you select are kept in the Workset Builder; to review them, click "selected items" in the navigation bar. This is meant as a workspace for building the workset's volume list. To save these items as a workset, click "Create/Update workset".
When saving a workset, note that it can be saved publicly (viewable by all users) or privately. After saving, the workset will be available in the HTRC Portal for use in analysis or for download. If you are new to creating and analyzing worksets, please review our step-by-step tutorial.

Building labeled worksets

While a basic workset simply collects volumes in one place, it is possible to add classes to worksets. This allows for use with classification algorithms, such as Naive Bayes.

The CSV can be built in whatever way you prefer; a scripted alternative is sketched after the example CSV below. One common approach is to:

  1. build a basic workset in the Workset Builder;
  2. download the basic workset;
  3. open the workset in the HTRC CSV Editor prototype (or a spreadsheet app of your choosing);
  4. in the CSV Editor or spreadsheet:
    1. add and fill in a "class" column;
    2. append additional CSVs if needed;
    3. add volumes manually (by looking up the volume_id in the Workset Builder);
  5. upload the output of the HTRC CSV Editor (or the saved spreadsheet) to the HTRC Portal.

A labeled workset CSV should follow this format:

  • the first line should be a header giving the name of each column;
  • the first column should be the volume ID, and the second column should record the label of the volume.

Below is an example of what the CSV file looks like. Given a set of volumes, classes are assigned to them based on some criterion; here, the labels are the names of the authors of the volumes:

volume_id,class
mdp.39015001796500,Austen
uc2.ark:/13960/t42r3rg51,Austen
uc2.ark:/13960/t3dz03x48,Austen
uiuo.ark:/13960/t4km00443,Austen
mdp.39015004997253,Austen
uc2.ark:/13960/t6c24sq2z,Austen
uc2.ark:/13960/t6m041m4z,Dickens
uiuo.ark:/13960/t5cc1pz8f,Dickens
uiuo.ark:/13960/t1wd47104,Dickens
uc2.ark:/13960/t2v40sj3m,Dickens
uiuo.ark:/13960/t3tt5bm1x,Dickens
uiuo.ark:/13960/t6n013296,Dickens
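
If you would rather script this step than use the CSV Editor or a spreadsheet, the following sketch builds a labeled workset CSV in the format above using only Python's standard library. The input and output file names and the label lookup table are hypothetical.

import csv

# Hypothetical labeling rule: map known volume IDs to author labels.
labels = {
    "mdp.39015001796500": "Austen",
    "uc2.ark:/13960/t6m041m4z": "Dickens",
}

# Read a downloaded basic workset CSV (first column: volume ID) and write
# a labeled workset CSV: a header row, then one volume_id,class pair per line.
with open("basic_workset.csv", newline="") as src, \
     open("labeled_workset.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    next(reader)                 # skip the input header row
    writer.writerow(["volume_id", "class"])
    for row in reader:
        volume_id = row[0]
        if volume_id in labels:  # keep only volumes we have a label for
            writer.writerow([volume_id, labels[volume_id]])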

Uploading Worksets

Worksets are uploaded in the HTRC Portal, under Worksets > Upload Workset, or with the '+' button in the workset list view. This is an alternative to the Workset Builder and is currently the only way to add labeled worksets.

Notes:

As of now, the portal and the CSV file display the volumes of a workset in different orders. (We are working on a fix for this issue.) Be aware of this so that you do not assume the two orderings match; if you refer to the order displayed in the portal when assigning classes to the volumes in your CSV file, you may assign labels to the wrong volumes.

One way to find the title or content of a volume while assigning classes is to view it at http://babel.hathitrust.org/cgi/pt?id=mdp.39015033434559;view=1up;seq=1 (substituting the desired volume ID into the URL).
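
A small Python helper can generate that link for any volume ID in your CSV; the URL pattern is taken directly from the example above.

def pageturner_url(volume_id):
    # Substitute the volume ID into the HathiTrust page-turner URL pattern.
    return "http://babel.hathitrust.org/cgi/pt?id=%s;view=1up;seq=1" % volume_id

print(pageturner_url("uc2.ark:/13960/t6m041m4z"))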

Q: Should I save my results in the portal?
A: If you want to ensure that results are retained through a restart of the services, then you should save your results.

Q: What is the login timeout?
A: The current login timeout is 12 hours.

Support

Q: Where do I go for more information?

A: See the guides referenced throughout this FAQ, in particular the HTRC Data API Users Guide and the Solr Proxy API User Guide.

Q: This is a release. Can I download the code?

A: Yes. All of the HTRC services code modules are open source and available from SourceForge. Go to http://sourceforge.net/p/htrc/code/ to browse the code, or check out directly from SVN using:

svn co svn://svn.code.sf.net/p/htrc/code/

Q: How do I ask questions or start discussions with other users?

A: Please join the HTRC Usergroup mailing list.

Q: How do I report issues or give feedback?

A: To report a bug, please go to http://jira.htrc.illinois.edu/browse/HTRC. You will need to create a JIRA account if you have not done so already. To provide feedback, use the "feedback" tab found on the right-hand side of various portal pages, which pops up a feedback form.