Author Archives: Thomas Gricks

About Thomas Gricks

Managing Director, Professional Services, Catalyst. A prominent e-discovery lawyer and one of the nation's leading authorities on the use of TAR in litigation, Tom advises corporations and law firms on best practices for applying Catalyst's TAR technology, Insight Predict, to reduce the time and cost of discovery. He has more than 25 years’ experience as a trial lawyer and in-house counsel, most recently with the law firm Schnader Harrison Segal & Lewis, where he was a partner and chair of the e-Discovery Practice Group.

How to Get More Miles Per Gallon Out of Your Next Document Review

How many miles per gallon can I get using Insight Predict, Catalyst’s technology-assisted review platform, which is based on continuous active learning (CAL)? And how does that fuel-efficiency rating compare to what I might get driving a keyword search model?

While our clients don’t always use these automotive terms, this is a key question we are often asked. How does CAL review efficiency compare to the review efficiency I have gotten using keyword search? Put another way, how many non-relevant documents will I have to look at to complete my review using CAL versus the number of false hits that will likely come back from keyword searches? Continue reading
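To make the fuel-efficiency metaphor concrete, here is a minimal Python sketch of the underlying arithmetic. The counts are purely hypothetical, chosen only to illustrate the comparison, and are not drawn from any actual matter.

    def review_efficiency(docs_reviewed: int, relevant_found: int) -> float:
        """Relevant documents found per document reviewed (higher is better)."""
        return relevant_found / docs_reviewed

    # Keyword search: suppose the searches return 100,000 hits, of which
    # 20,000 prove responsive (that is, 80,000 false hits).
    keyword_mpg = review_efficiency(100_000, 20_000)

    # CAL: suppose finding the same 20,000 responsive documents takes only
    # 40,000 reviewed, because the ranking keeps serving likely-responsive
    # documents to the front of the review queue.
    cal_mpg = review_efficiency(40_000, 20_000)

    print(f"Keyword search: {keyword_mpg:.2f} relevant docs per doc reviewed")  # 0.20
    print(f"CAL:            {cal_mpg:.2f} relevant docs per doc reviewed")      # 0.50

On these made-up numbers, CAL reviews 2.5 times fewer documents to find the same set of responsive documents.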

Review Efficiency Using Insight Predict

An Initial Case Study

Much of the discussion around Technology Assisted Review (TAR) focuses on “recall,” which is the percentage of all relevant documents in the collection that the review actually finds. Recall is important because lawyers have a duty to take reasonable (and proportionate) steps to produce responsive documents. Indeed, Rule 26(g) of the Federal Rules of Civil Procedure effectively requires that an attorney certify, after reasonable inquiry, that discovery responses and any associated production are reasonable and proportionate under the totality of the circumstances.

In that regard, achieving a recall rate of less than 50% does not seem reasonable, and it is rarely likely to be proportionate. Current TAR decisions suggest that reaching 75% recall is likely reasonable, especially given the potential cost of finding additional relevant documents. Recall rates of 80% or more would seem reasonable in almost every case. Continue reading
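For readers new to the metric, recall reduces to a simple fraction. A minimal sketch, with hypothetical counts:

    # Recall = relevant documents found / relevant documents in the collection.
    total_relevant = 10_000   # responsive documents in the full collection
    relevant_found = 8_200    # responsive documents located by the review

    recall = relevant_found / total_relevant
    print(f"Recall: {recall:.0%}")   # 82% -- within the range treated as reasonable above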

Just Say No to Family Batching in Technology Assisted Review

It is time to put an end to family batching, one of the most widespread document review practices in the e-discovery world and one of the worst possible workflows if you want to implement an efficient technology-assisted review (TAR) protocol. Simply put, except in the most unusual situations, family batching cannot be as efficient as document-level coding.

We set out to evaluate this relationship with real-world data and found document-level coding to be nearly 25 percent more efficient than family batching, even when you review and produce all members of responsive families. Continue reading
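The arithmetic behind that gap is easy to sketch. The counts below are hypothetical placeholders, not the figures from our study; they simply show how batching whole families inflates the number of documents that pass before a reviewer’s eyes.

    # Hypothetical counts, for illustration only.
    families = 10_000
    avg_family_size = 4
    total_docs = families * avg_family_size         # 40,000 documents

    # Family batching: every member of every batched family gets reviewed.
    family_batched_review = total_docs              # 40,000 docs reviewed

    # Document-level coding: review the individually ranked documents (say
    # 18,000 resolve every family), then review the remaining 12,000 members
    # of responsive families before producing them.
    doc_level_review = 18_000 + 12_000              # 30,000 docs reviewed

    savings = 1 - doc_level_review / family_batched_review
    print(f"Document-level review saves {savings:.0%}")   # 25%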

Comparing the Effectiveness of TAR 1.0 to TAR 2.0: A Second Simulation Experiment

In a recent blog post, we reported on a technology-assisted review simulation that we conducted to compare the effectiveness and efficiency of a family-based review versus an individual-document review. That post was one of a series here reporting on simulations conducted as part of our TAR Challenge – an invitation to any corporation or law firm to compare its results in an actual litigation against the results that would have been achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

As we explained in that recent blog post, the simulation used actual documents that were previously reviewed in an active litigation. Based on those documents, we conducted two distinct experiments. The first was the family vs. non-family test. In this blog post, we discuss the second experiment, testing a TAR 1.0 review against a TAR 2.0 review. Continue reading

Comparing Family-Level Review Against Individual-Document Review: A Simulation Experiment

In two recent posts, we’ve reported on simulations of technology-assisted review conducted as part of our TAR Challenge—an opportunity for any corporation or law firm to compare its results in an actual, manual review against the results it would have achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

Today, we are taking a slightly different tack. We are again conducting a simulation using actual documents that were previously reviewed in an active litigation. However, this time, we are … Continue reading

Does Recall Measure TAR’s Effectiveness Across All Issues? We Put It To The Test

For some time now, critics of technology-assisted review have opposed using general recall as a measure of its effectiveness. Overall recall, they argue, does not account for the fact that general responsiveness covers an array of more-specific issues. And the documents relating to each of those issues exist within the collection in different numbers, spanning a wide range of prevalence levels.

Since general recall measures effectiveness across the entire collection, the critics’ concern is that you will find a lot of documents from the larger groups and only a few from the smaller groups, yet overall recall may still be very high. Using overall recall as a measure of effectiveness can theoretically mask a disproportionate and selective review and production. In other words, you may find a lot of documents about several underlying issues, but you might find very few about others. Continue reading
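A toy calculation makes the critics’ point concrete. The issue breakdown below is invented for illustration: overall recall looks strong even though the lowest-prevalence issue is badly under-represented.

    # Hypothetical per-issue counts: (relevant docs in collection, docs found).
    issues = {
        "Issue A (high prevalence)": (9_000, 8_100),
        "Issue B (medium)":          (900, 700),
        "Issue C (low prevalence)":  (100, 20),
    }

    total_relevant = sum(rel for rel, _ in issues.values())
    total_found = sum(found for _, found in issues.values())
    print(f"Overall recall: {total_found / total_relevant:.0%}")   # 88%

    for name, (rel, found) in issues.items():
        print(f"{name}: {found / rel:.0%} recall")
    # Issue C comes in at 20% recall even though overall recall is 88%.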

Catalyst Research: Family-Based Review and Expert Training — Experimental Simulations, Real Data

ABSTRACT

In this research we answer two main questions: (1) What is the efficiency of a TAR 2.0 family-level document review versus a TAR 2.0 individual document review, and (2) How useful is expert-only (aka TAR 1.0 with expert) training, relative to TAR 2.0’s ability to conflate training and review using non-expert judgments [2]? Continue reading

Catalyst’s Report from TREC 2016: ‘We Don’t Need No Stinkin Training’

One of the bigger, and still enduring, debates among Technology Assisted Review experts revolves around the method and amount of training you need to get optimal[1] results from your TAR algorithm. Over the years, experts have prescribed a variety of approaches, including the following (for contrast, a sketch of a CAL-style loop appears after the list):

  1. Random Only: Have a subject matter expert (SME), typically a senior lawyer, review and judge several thousand randomly selected documents.
  2. Active Learning: Have the SME review several thousand marginally relevant documents chosen by the computer to assist in the training.
  3. Mixed TAR 1.0 Approach: Have the SME review and judge a mix of randomly selected documents, some found through keyword search and others selected by the algorithm to help it find the boundary between relevant and non-relevant documents.
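By contrast, a continuous active learning workflow dispenses with a separate training phase altogether: every reviewer judgment feeds straight back into the ranking. The toy loop below is a self-contained illustration of that idea, not Insight Predict’s actual algorithm; the noisy score simply stands in for a classifier retrained on each round of judgments.

    import random

    random.seed(0)
    signal = {i: random.random() for i in range(1_000)}   # doc -> latent relevance
    truth = {i: s > 0.8 for i, s in signal.items()}       # roughly 200 relevant docs
    target = 0.8 * sum(truth.values())                    # stop at 80% recall

    judgments = {}                                        # doc -> reviewer's call
    while sum(judgments.values()) < target:
        # "Retrain" and re-rank the unreviewed documents, then serve the
        # top of the queue to the reviewers.
        unreviewed = [d for d in signal if d not in judgments]
        unreviewed.sort(key=lambda d: signal[d] + random.gauss(0, 0.1), reverse=True)
        for doc in unreviewed[:20]:
            judgments[doc] = truth[doc]                   # judgment feeds the next round

    print(f"Reviewed {len(judgments)} of {len(signal)} documents to reach "
          f"{sum(judgments.values())} of {sum(truth.values())} relevant")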

Continue reading

Ask Catalyst: If I Use Outside Docs to Train the TAR Algorithm, Do I Risk Exposing Them to My Opponent?

[This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

Does using documents from other matters to gain intelligence [train the algorithm] run the risk of exposing that data if opposing counsel requests the training set?

Today’s question is answered by John Tredennick, founder and CEO, and Thomas Gricks, managing director of professional services.

Continue reading

Ask Catalyst: How Can I Prove a Negative — That A Document Doesn’t Exist?

[This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

How can I prove a negative — that a document does not exist in a collection — using Catalyst Insight and Predict?

Today’s question is answered by Thomas Gricks, managing director of professional services.

Continue reading