Author Archives: Thomas Gricks


About Thomas Gricks

Managing Director, Professional Services, Catalyst. A prominent e-discovery lawyer and one of the nation's leading authorities on the use of TAR in litigation, Tom advises corporations and law firms on best practices for applying Catalyst's TAR technology, Insight Predict, to reduce the time and cost of discovery. He has more than 25 years’ experience as a trial lawyer and in-house counsel, most recently with the law firm Schnader Harrison Segal & Lewis, where he was a partner and chair of the e-Discovery Practice Group.

How Can I Use TAR 2.0 for Investigations?

Across the legal landscape, lawyers search for documents for many different reasons. TAR 1.0 systems were primarily used to classify large numbers of documents when lawyers were reviewing documents for production. But how can you use TAR for even more document review tasks?

Modern TAR technologies (TAR 2.0, based on the continuous active learning, or CAL, protocol) can handle low richness, rolling and small collections, and flexible inputs, in addition to offering vast improvements in speed. These improvements also allow TAR to be used effectively in many more document review workflows than traditional TAR 1.0 systems. Continue reading
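To make the CAL protocol concrete, here is a minimal sketch of the loop, assuming a TF-IDF/logistic-regression model and a batch size of 10. These are illustrative choices only, not how Insight Predict is actually built.

```python
# A minimal sketch of a continuous active learning (CAL) loop.
# Illustrative only -- model, features, and batch size are assumptions.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(docs, get_label, batch_size=10):
    """Serve the reviewer batches of the likeliest-relevant unreviewed
    documents, retraining on every new coding decision. get_label(i)
    stands in for the human reviewer's call (1 = relevant, 0 = not)."""
    X = TfidfVectorizer().fit_transform(docs)
    unreviewed = set(range(len(docs)))
    seen, labels = [], []
    while unreviewed:
        pool = sorted(unreviewed)
        if len(set(labels)) < 2:
            # Seed randomly until both tags exist, so the model can train.
            batch = random.sample(pool, min(batch_size, len(pool)))
        else:
            model = LogisticRegression(max_iter=1000).fit(X[seen], labels)
            scores = model.predict_proba(X[pool])[:, 1]
            ranked = sorted(zip(scores, pool), reverse=True)
            batch = [i for _, i in ranked[:batch_size]]
        for i in batch:                      # human codes the batch
            seen.append(i)
            labels.append(get_label(i))
            unreviewed.discard(i)
    return seen                              # the full review order
```

The defining move is inside the loop: every coding decision feeds the next ranking, so the system keeps steering review toward the likeliest-relevant documents rather than training once and stopping.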

Optimizing Document Review in Compliance Investigations, Part 2

This article was originally published in Corporate Compliance Insights on August 6, 2018.

Using Advanced Analytics and Continuous Active Learning to “Prove a Negative”

This is the second article in a two-part series that focuses on document review techniques for managing compliance in internal and regulatory investigations. Part 1 provided several steps for implementing an effective document review directed at achieving the objectives of a compliance investigation. This installment outlines an approach that can be used to demonstrate, to a statistical certainty, that there are no responsive documents – essentially, proving a negative.
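The statistical machinery behind "proving a negative" isn't spelled out in this excerpt, but one standard approach is a null sample with an exact binomial bound: if a random sample of the remaining documents turns up zero responsive ones, you can cap the plausible responsiveness rate. A hedged sketch, with hypothetical numbers (the article's exact procedure may differ):

```python
# If a random sample of n documents from the unreviewed population contains
# zero responsive documents, an exact binomial bound caps the plausible
# responsiveness rate. A standard technique, shown generically here.

def max_prevalence(n_sampled: int, confidence: float = 0.95) -> float:
    """Largest responsiveness rate p consistent with 0 hits in n_sampled
    random draws: solve (1 - p) ** n_sampled = 1 - confidence for p."""
    return 1 - (1 - confidence) ** (1 / n_sampled)

# Hypothetical: a 2,995-document null sample supports ~0.1% at 95%
# confidence; the quick "rule of three" approximation is simply 3 / n.
print(f"{max_prevalence(2995):.4%}")     # -> 0.1000%
print(f"rule of three: {3 / 2995:.4%}")  # -> 0.1002%
```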

What Does It Mean to “Prove a Negative”?

The objective of a compliance investigation is most often to quickly locate the critical documents that will establish a cohesive fact pattern and provide the materials needed to conduct effective personnel interviews. In that situation, the documents are merely a means to an end. Continue reading

Optimizing Document Review in Compliance Investigations, Part 1

This article was originally published in Corporate Compliance Insights on July 17, 2018.

Internal/Regulatory Investigations Versus Litigation

Too many corporations approach litigation and compliance investigations the same way, using the same technology, approach and people. But your approach to managing electronic information in internal and regulatory compliance investigations should differ from the one for litigation.

Most of the discussion surrounding compliance investigations focuses on best practices for planning and conducting personnel interviews. This article addresses document review, specifically electronic document review, an equally critical component of the investigation process directed at finding what some refer to as the “truth serum” for controlling those interviews and structuring much of the investigation. Continue reading

57 Ways to Leave Your (Linear) Lover

A Case Study on Using Catalyst’s Insight Predict to Find Relevant Documents Without SME Training

A Big Four accounting firm with offices in Tokyo recently asked Catalyst to demonstrate the effectiveness of Insight Predict, its technology assisted review (TAR) platform based on continuous active learning (CAL), on a Japanese language investigation. They gave us a test population of about 5,000 documents that had already been tagged for relevance. Of those, only 57 had been found relevant during their linear review.

We offered to run a free simulation designed to show how quickly Predict would have found those same relevant documents. The simulation would be blind (Predict would not know how the documents were tagged until it presented its ranked list). That way we could simulate an actual Predict review using CAL. Continue reading
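As a rough sketch of how such a blind replay can work, reusing the illustrative cal_review() loop from the sketch above: the pre-existing tags stand in for the reviewer and are revealed only as each batch is served. The harness below is our assumption, not Catalyst's actual simulation tooling.

```python
# Replay a blind CAL simulation against pre-tagged documents. The tags are
# revealed only when a document is served, mimicking a live review.

def simulate(docs, tags):                  # tags[i] == 1 if tagged relevant
    order = cal_review(docs, get_label=lambda i: tags[i])
    total, found, curve = sum(tags), 0, []
    for n, i in enumerate(order, start=1):
        found += tags[i]
        curve.append((n, found / total))   # docs reviewed vs. recall
    return curve

# The question the curve answers: how deep into the ~5,000 documents does
# the ranking have to go to surface all 57 relevant ones, versus the full
# 5,000 a linear review requires?
```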

The Importance of Contextual Diversity in Technology Assisted Review

How do you know what you don’t know? This is a classic problem when searching a large volume of documents in litigation or an investigation.

In a technology assisted review (TAR), a key concern for some is whether the algorithm has missed important relevant documents, especially those that you may know nothing about at the outset of the review. This is because most modern TAR systems focus exclusively on relevance feedback, which means that the system feeds you the unreviewed documents that are likely to be the most relevant because they are most like what you have already coded as relevant. In other words, what is highly ranked depends on the documents that were tagged previously. Continue reading
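Catalyst's answer to this problem is contextual diversity: deliberately sampling from regions of the collection the review hasn't touched. As an illustration of the idea only (not Predict's actual algorithm), one simple stand-in is to cluster the collection and draw picks from clusters containing no reviewed documents:

```python
# A simple stand-in for contextual-diversity sampling: cluster the
# collection, then draw the next training documents from clusters the
# review has not touched -- "what you don't know" rather than more of
# what you've already coded. Illustrative only.
from sklearn.cluster import KMeans

def diversity_picks(X, reviewed, n_clusters=50, n_picks=5):
    """Return up to n_picks documents from clusters containing no reviewed
    document, i.e. regions of the collection the review hasn't explored.
    `reviewed` is a set of document indices coded so far."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    explored = {km.labels_[i] for i in reviewed}
    picks, used = [], set()
    for i in range(X.shape[0]):
        c = km.labels_[i]
        if i not in reviewed and c not in explored and c not in used:
            picks.append(i)                # one representative per cluster
            used.add(c)
            if len(picks) == n_picks:
                break
    return picks
```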

How to Get More Miles Per Gallon Out of Your Next Document Review

How many miles per gallon can I get using Insight Predict, Catalyst’s technology assisted review platform, which is based on continuous active learning (CAL)? And how does that fuel efficiency rating compare to what I might get driving a keyword search model?

While our clients don’t always use these automotive terms, this is a key question we are often asked. How does CAL review efficiency compare to the review efficiency I have gotten using keyword search? Put another way, how many non-relevant documents will I have to look at to complete my review using CAL versus the number of false hits that will likely come back from keyword searches? Continue reading
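The arithmetic behind the fuel-efficiency metaphor is simple: treat review precision as the miles-per-gallon rating, and the wasted review falls straight out. The precision figures below are hypothetical, purely to show the shape of the comparison.

```python
# Non-relevant documents read en route to a recall target, as a function
# of review precision. All figures below are hypothetical.

def wasted_review(total_relevant, target_recall, precision):
    need = total_relevant * target_recall  # relevant docs you must find
    reviewed = need / precision            # total docs eyeballed to find them
    return reviewed - need                 # the non-relevant ones

# Hypothetical matter: 10,000 relevant documents, 80% recall target.
print(wasted_review(10_000, 0.80, 0.60))   # CAL at 60% precision   -> ~5,333
print(wasted_review(10_000, 0.80, 0.20))   # keyword at 20% hit rate -> 32,000
```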

Review Efficiency Using Insight Predict

An Initial Case Study

Much of the discussion around Technology Assisted Review (TAR) focuses on “recall,” which is the percentage of the relevant documents found in the review process. Recall is important because lawyers have a duty to take reasonable (and proportionate) steps to produce responsive documents. Indeed, Rule 26(g) of the Federal Rules of Civil Procedure effectively requires that an attorney certify, after reasonable inquiry, that discovery responses and any associated production are reasonable and proportionate under the totality of the circumstances.

In that regard, achieving a recall rate below 50% does not seem reasonable, nor is it often likely to be proportionate. Current TAR decisions suggest that reaching 75% recall is likely reasonable, especially given the potential cost of finding additional relevant documents. Higher recall rates, 80% or above, would seem reasonable in almost every case. Continue reading
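For readers who want the underlying arithmetic: recall is simply the share of relevant documents found, and the proportionality question turns on the marginal cost of pushing it higher. A small sketch with hypothetical numbers; in practice the total relevant count is itself estimated by sampling.

```python
# Recall, and the marginal cost of raising it -- the proportionality
# question the TAR decisions weigh. All numbers here are hypothetical.

def recall(found: int, total_relevant: int) -> float:
    return found / total_relevant

def marginal_cost(extra_docs_reviewed: int, extra_relevant_found: int) -> float:
    """Documents a reviewer must read per additional relevant document."""
    return extra_docs_reviewed / extra_relevant_found

# Hypothetical tail of a review: moving from 75% to 80% recall took 4,000
# more documents to surface 50 more relevant ones.
print(recall(8_000, 10_000))     # -> 0.8
print(marginal_cost(4_000, 50))  # -> 80 docs read per relevant doc found
```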

Just Say No to Family Batching in Technology Assisted Review

It is time to put an end to family batching, one of the most widespread document review practices in the e-discovery world and one of the worst possible workflows if you want to implement an efficient technology-assisted review (TAR) protocol. Simply put, in all but the most unusual situations, family batching cannot match the efficiency of document-level coding.

We set out to evaluate this relationship with real world data, and found document-level coding to be nearly 25 percent more efficient than family batching, even if you review and produce all of the members of responsive families. Continue reading
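To see where the efficiency gap comes from, consider the counting rules for the two workflows. The sketch below illustrates those rules with toy data; it is not the study's actual dataset or methodology.

```python
# Counting rules for the two workflows: family batching reviews every member
# of any family containing a batched document; document-level review reads
# only what the ranking serves, then pulls in the families of responsive
# documents for production. Toy data, for illustration only.

def family_batch_cost(families, batched):
    """Docs reviewed when whole families are batched together."""
    return sum(len(fam) for fam in families if fam & batched)

def doc_level_cost(families, served, responsive):
    """Docs reviewed individually, plus family members of responsive
    documents reviewed afterward for production."""
    pulled = set().union(*(fam for fam in families if fam & responsive))
    return len(served | pulled)

fams = [frozenset({1, 2, 3}), frozenset({4, 5})]
print(family_batch_cost(fams, {1, 4}))    # 5: both whole families reviewed
print(doc_level_cost(fams, {1, 4}, {1}))  # 4: doc 4's family is never pulled
```

The saving accumulates across a collection because most non-relevant documents that the ranking serves sit in families that never need to be pulled at all.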

Comparing the Effectiveness of TAR 1.0 to TAR 2.0: A Second Simulation Experiment

In a recent blog post, we reported on a technology-assisted review simulation that we conducted to compare the effectiveness and efficiency of a family-based review versus an individual-document review. That post was one of a series here reporting on simulations conducted as part of our TAR Challenge – an invitation to any corporation or law firm to compare its results in an actual litigation against the results that would have been achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

As we explained in that recent blog post, the simulation used actual documents that were previously reviewed in an active litigation. Based on those documents, we conducted two distinct experiments. The first was the family vs. non-family test. In this blog post, we discuss the second experiment, testing a TAR 1.0 review against a TAR 2.0 review. Continue reading
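For contrast with the CAL loop sketched earlier, the TAR 1.0 side of such an experiment can be sketched as one-time training on a seed set followed by a single, never-updated ranking. The model choice and protocol details below are illustrative assumptions, not either system's actual implementation.

```python
# The TAR 1.0 side, sketched: one-time training on a seed set (classically
# chosen or reviewed by a subject-matter expert), then a single fixed
# ranking, reviewed top-down to a cutoff. Illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def tar1_order(docs, seed_idx, seed_labels):
    """Rank the non-seed documents once, from a model trained only on the
    seed set; review then proceeds down this fixed list."""
    X = TfidfVectorizer().fit_transform(docs)
    model = LogisticRegression(max_iter=1000).fit(X[seed_idx], seed_labels)
    seed = set(seed_idx)
    rest = [i for i in range(len(docs)) if i not in seed]
    scores = model.predict_proba(X[rest])[:, 1]
    return [i for _, i in sorted(zip(scores, rest), reverse=True)]
```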

Comparing Family-Level Review Against Individual-Document Review: A Simulation Experiment

In two recent posts, we’ve reported on simulations of technology assisted review conducted as part of our TAR Challenge—an opportunity for any corporation or law firm to compare its results in an actual, manual review against the results it would have achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

Today, we are taking a slightly different tack. We are again conducting a simulation using actual documents that were previously reviewed in an active litigation. However, this time, we are … Continue reading