Author Archives: Jeremy Pickens


About Jeremy Pickens

Jeremy Pickens is one of the world’s leading information retrieval scientists and a pioneer in the field of collaborative exploratory search, a form of information seeking in which a group of people who share a common information need actively collaborate to satisfy it. Dr. Pickens has seven patents and patents pending in the field of search and information retrieval. As senior applied research scientist at Catalyst, Dr. Pickens has spearheaded the development of Insight Predict. His ongoing research and development focuses on methods for continuous learning and the variety of real-world technology assisted review workflows that are only possible with this approach. Dr. Pickens earned his doctoral degree at the University of Massachusetts, Amherst, Center for Intelligent Information Retrieval. He conducted his post-doctoral work at King’s College London. Before joining Catalyst, he spent five years as a research scientist at FX Palo Alto Lab, Inc. In addition to his Catalyst responsibilities, he continues to organize research workshops and speak at scientific conferences around the world.

Catalyst’s Report from TREC 2016: ‘We Don’t Need No Stinkin Training’

One of the bigger, and still enduring, debates among technology assisted review (TAR) experts revolves around the method and amount of training you need to get optimal[1] results from your TAR algorithm. Over the years, experts have prescribed a variety of approaches, including:

  1. Random Only: Have a subject matter expert (SME), typically a senior lawyer, review and judge several thousand randomly selected documents.
  2. Active Learning: Have the SME review several thousand marginally relevant documents chosen by the computer to assist in the training (see the sketch after this list).
  3. Mixed TAR 1.0 Approach: Have the SME review and judge a mix of documents, some randomly selected, some found through keyword search, and others selected by the algorithm to help it find the boundary between relevant and non-relevant documents.
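
To make the second approach concrete, here is a minimal sketch of computer-driven training selection in the uncertainty sampling style, using scikit-learn. The corpus, model choice, and batch size are my own illustrative assumptions, not a description of any particular TAR product:

    # Minimal sketch of approach 2 (active learning): the machine picks the
    # documents whose relevance it is least sure about and hands them to the
    # SME for judgment. Corpus, model, and batch size are illustrative only.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["car sale contract", "lunch menu", "automobile lease terms",
            "fantasy football picks", "vehicle purchase invoice"]
    judged = {0: 1, 1: 0}  # SME judgments so far: doc index -> relevant?

    X = TfidfVectorizer().fit_transform(docs)
    clf = LogisticRegression().fit(X[list(judged)], list(judged.values()))

    # Probability of relevance for the unjudged documents; the ones nearest
    # the 0.5 decision boundary are the most informative to judge next.
    unjudged = [i for i in range(len(docs)) if i not in judged]
    probs = clf.predict_proba(X[unjudged])[:, 1]
    batch = [unjudged[i] for i in np.argsort(np.abs(probs - 0.5))[:2]]
    print("next documents for the SME:", batch)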

Continue reading

Ask Catalyst: Does Insight Predict Use Metadata In Ranking Documents?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]
We received this question:

In ranking documents, does Insight Predict use metadata information or is the ranking based solely on the document text?

Today’s question is answered by Dr. Jeremy Pickens, chief (data) scientist.

Insight Predict, Catalyst’s unique, second-generation technology assisted review engine, does use metadata. However, there are dozens if not hundreds of different types of metadata that could be extracted from various kinds of documents. Some types of metadata have proven more fruitful, others less so. Continue reading
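
By way of illustration only (not a description of Predict’s internals), one common way to let a ranking model see metadata is to vectorize selected fields alongside the body text and concatenate the features. The fields and documents below are hypothetical:

    # Hedged sketch: vectorize body text and selected metadata fields
    # separately, then concatenate into one feature vector for the ranker.
    # The field names and documents here are invented examples.
    from scipy.sparse import hstack
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        {"text": "quarterly revenue forecast attached", "sender": "cfo@example.com"},
        {"text": "team offsite travel logistics",       "sender": "hr@example.com"},
    ]

    text_features = TfidfVectorizer().fit_transform(d["text"] for d in docs)
    meta_features = TfidfVectorizer().fit_transform(d["sender"] for d in docs)

    # Each row now carries both textual and metadata evidence.
    X = hstack([text_features, meta_features])
    print(X.shape)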

Ask Catalyst: How Does Insight Predict Handle Synonyms?


[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

How does Insight Predict handle synonyms? For example, assume document 1 has “car” in it and not “automobile” and document 2 has “automobile” and not “car.” If Predict gets the thumbs up on document 1, it doesn’t necessarily rank document 2 high, correct? It doesn’t know that the words are the same concept, right?

Today’s question is answered by Dr. Jeremy Pickens, senior applied research scientist.

As the Catalyst scientist who developed Insight Predict, I made the conscious, explicit choice not to build synonyms into the process. I’ll tell you why: Continuous learning obviates the issue. Continue reading
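
A toy illustration of that point, under my own simplifying assumptions (a plain bag-of-words logistic model, which is not Predict’s actual engine):

    # Round 1: a model trained only on the "car" judgment gives the
    # "automobile" document no synonym credit. Round 2: once the continuous
    # learning loop feeds the judged "automobile" document back in, the
    # term itself becomes a positive feature. Toy corpus, for intuition.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    corpus = ["car sale", "automobile purchase", "soup recipe", "tax form"]
    X = CountVectorizer().fit_transform(corpus)

    clf = LogisticRegression().fit(X[[0, 2]], [1, 0])        # car=rel, soup=not
    print(clf.predict_proba(X[1])[0, 1])   # "automobile" doc: ~0.5, no credit

    clf = LogisticRegression().fit(X[[0, 1, 2]], [1, 1, 0])  # reviewer judged it
    print(clf.predict_proba(X[1])[0, 1])   # now scored high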

Ask Catalyst: What Is ‘Supervised Machine Learning’?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

What is supervised machine learning?

Today’s question is answered by Dr. Jeremy Pickens, senior applied research scientist. Continue reading
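
The short version: supervised machine learning means fitting a model to examples a human has already labeled, then letting it predict labels for examples it has not seen. A minimal sketch, with an invented corpus:

    # Supervised machine learning in miniature: learn from human-labeled
    # examples, then predict labels for documents the human has not seen.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    train_docs = ["merger agreement draft", "office picnic signup",
                  "acquisition term sheet", "parking garage notice"]
    train_labels = [1, 0, 1, 0]        # human judgments: relevant or not

    vec = TfidfVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(train_docs), train_labels)

    new_docs = ["revised merger terms", "picnic rain date"]
    print(clf.predict(vec.transform(new_docs)))   # supervised predictions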

Ask Catalyst: What Are The Thresholds for Using Technology Assisted Review?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

What are the thresholds (in numbers of documents) at which your company will recommend the use of predictive coding? Would this be case-dependent or just a percentage of documents (e.g., 100 out of 1,000 documents giving us 10%)?

Today’s question is answered by Dr. Jeremy Pickens, senior applied research scientist. Continue reading

Does Your TAR System Have My Favorite Feature? A Primer on Holistic Thinking

I have noticed that in certain popular document-based systems in the e-discovery marketplace, there is a feature (a capability) that often gets touted. Although I am a research scientist at Catalyst, I have been on enough sales calls with my fellow Catalyst team members to have heard numerous users of such systems ask whether we have the capability to automatically remove common headers and footers from email. Systems that showcase this capability present it as a feature that is good to have, so clients often include it in the checklist of capabilities they’re seeking.
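
For readers wondering what that capability actually amounts to, here is a rough sketch under my own assumptions; real products are considerably more sophisticated about detecting repeated boilerplate:

    # Rough sketch of header/footer stripping: drop lines that look like
    # quoted email headers or known repeated boilerplate before the text
    # reaches indexing or a TAR engine. Patterns here are illustrative only.
    import re

    HEADER_RE = re.compile(r"^(from|to|cc|sent|date|subject):", re.IGNORECASE)
    BOILERPLATE = {
        "confidentiality notice",
        "this email and any attachments are confidential.",
    }

    def strip_email_chrome(body: str) -> str:
        kept = []
        for line in body.splitlines():
            bare = line.strip()
            if HEADER_RE.match(bare) or bare.lower() in BOILERPLATE:
                continue
            kept.append(line)
        return "\n".join(kept)

    print(strip_email_chrome("From: a@b.com\nSubject: hi\nThe actual message."))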

This leads me to ask: Why?

For the longest time, this request confused me. It was a capability that many declared they needed simply because they saw that it existed elsewhere. That leads me to the topic of holistic thinking when it comes to one’s technology assisted review (TAR) algorithms and processes. Continue reading

Why Control Sets are Problematic in E-Discovery: A Follow-up to Ralph Losey

In a recent blog post, Ralph Losey lays out a case for the abolition of control sets in e-discovery, particularly if one is following a continuous learning protocol. Here at Catalyst, we could not agree more with this position. From the very first moment we rolled out our TAR 2.0 continuous learning engine, we have not only recommended against the use of control sets, we decided against ever implementing them in the first place, so we never even had the potential to steer clients awry.

Losey points out three main flaws with control sets. These may be summarized as (1) knowledge issues, (2) sequential testing bias, and (3) representativeness. In this blog post I offer my own take and evidence in favor of these three points, and add a fourth difficulty with control sets: rolling collection. Continue reading
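
On the rolling collection point in particular, a toy simulation (with my own illustrative numbers) shows how a control set frozen at the start of a matter drifts away from the truth as new custodians arrive:

    # Toy simulation of the rolling collection problem: a control set drawn
    # from the first batch of documents keeps "measuring" that batch even
    # after new, very different custodians arrive. Numbers are illustrative.
    import random
    random.seed(0)

    initial = [random.random() < 0.10 for _ in range(10_000)]  # 10% richness
    control_ids = random.sample(range(10_000), 500)            # frozen control set

    rolling = [random.random() < 0.40 for _ in range(10_000)]  # later custodians
    full_population = initial + rolling

    control_estimate = sum(initial[i] for i in control_ids) / len(control_ids)
    true_richness = sum(full_population) / len(full_population)
    print(f"control set says {control_estimate:.1%}, truth is {true_richness:.1%}")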

Sorting Out the Real Cost and Value of E-Discovery Technology

There has been a bit of talk lately in the e-discovery echo chamber about fixed-price models for processing, hosting, review and productions. The purported goal of this discussion was to create a stir and drum up business. Yet conspicuously absent from this entire discussion was talk of total cost, aka value. I am a research scientist at Catalyst, so typically I do not get involved in discussions like this. However, as there still seems to be a great deal of confusion over value, I felt the need to help sort all this out.

First, a bit of my background. I have spent the last 18 years of my professional life developing and applying algorithms to the task of finding relevant information. Currently, I am the senior applied research scientist at Catalyst. I obtained my Ph.D. in computer science with a focus on information retrieval (search engines) from the Center for Intelligent Information Retrieval (CIIR) at UMass Amherst in 2004. I did a postdoc at King’s College London and then spent five years at the Fuji Xerox research lab in Palo Alto (FXPAL) before joining Catalyst in 2010. Continue reading

Thinking Through the Implications of CAL: Who Does the Training?

Before joining Catalyst in 2010, my entire academic and professional career revolved around basic research. I spent my time coming up with new and interesting algorithms: ways of improving document ranking and classification. However, in much of my research, it was not always clear which algorithms would have immediate application. It is not that the algorithms were not useful; they were. They just did not always have immediate application to a live, deployed system.

Since joining Catalyst, however, my research has become much more applied. I have come to discover that this doesn’t just mean the algorithms I design have to be more narrowly focused on the task at hand. It also means that I have to design those algorithms to be aware of the larger real-world contexts in which they will be deployed and the limitations that may exist therein.

So it is with keen interest that I have been watching the e-discovery world react to the recent (SIGIR 2014) paper from Maura Grossman and Gordon Cormack on the CAL (continuous active learning) protocol, Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery. Continue reading
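
For readers who have not seen the paper, the protocol itself is simple enough to sketch in a few lines. Everything below (the corpus, the model, the stand-in reviewer) is an illustrative assumption, not Grossman and Cormack’s implementation:

    # Minimal sketch of a CAL-style loop: train on everything judged so far,
    # rank the rest, have the reviewer judge the top-ranked document, repeat.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["car sale contract", "automobile lease", "soup recipe",
            "vehicle invoice", "tax form", "truck purchase order"]
    truth = [1, 1, 0, 1, 0, 1]        # stands in for the human reviewer
    X = TfidfVectorizer().fit_transform(docs)

    judged = {0: 1, 2: 0}             # seed judgments to start the loop
    while len(judged) < len(docs):
        clf = LogisticRegression().fit(X[list(judged)], list(judged.values()))
        scores = clf.predict_proba(X)[:, 1]
        nxt = max((i for i in range(len(docs)) if i not in judged),
                  key=lambda i: scores[i])
        judged[nxt] = truth[nxt]      # the judgment goes straight back into training
    print("review order and judgments:", judged)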

The Five Myths of Technology Assisted Review, Revisited

On Jan. 24, Law Technology News published John Tredennick’s article, “Five Myths about Technology Assisted Review.” The article challenged several conventional assumptions about the predictive coding process and generated a lot of interest, and a bit of dyspepsia too. At the least, it got some good discussions going and perhaps nudged the status quo a bit.

One writer, Roe Frazer, took issue with our views in a blog post. Apparently, he tried to post his comments with Law Technology News but was unsuccessful. Instead, he posted his reaction on the blog of his company, Cicayda. We would have responded there, but we don’t see a spot for replies on that blog either. Continue reading