Category Archives: Technology-Assisted Review

Comparing the Effectiveness of TAR 1.0 to TAR 2.0: A Second Simulation Experiment

In a recent blog post, we reported on a technology-assisted review simulation we conducted to compare the effectiveness and efficiency of a family-based review against an individual-document review. That post was one of a series here reporting on simulations conducted as part of our TAR Challenge – an invitation to any corporation or law firm to compare its results in an actual litigation against the results it would have achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

As we explained in that recent blog post, the simulation used actual documents that were previously reviewed in an active litigation. Based on those documents, we conducted two distinct experiments. The first was the family vs. non-family test. In this blog post, we discuss the second experiment, testing a TAR 1.0 review against a TAR 2.0 review.

Comparing Family-Level Review Against Individual-Document Review: A Simulation Experiment

In two recent posts, we’ve reported on simulations of technology assisted review conducted as part of our TAR Challenge—an opportunity for any corporation or law firm to compare its results in an actual, manual review against the results it would have achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

Today, we are taking a slightly different tack. We are again conducting a simulation using actual documents that were previously reviewed in an active litigation. However, this time, we are comparing a family-level review against an individual-document review.

What Can TAR Do? In This Case, Eliminate Review of 260,000 Documents

Many legal professionals continue to question whether technology assisted review is right for them. Perhaps you are a corporate counsel wondering whether TAR can actually reduce review costs. Or maybe you are a litigator unsure of whether TAR is suitable for your case.

For anyone still uncertain about TAR, Catalyst is offering the TAR Challenge. Give us an actual case of yours in which you’ve completed a manual review, and we will run a simulation showing you how the review would have gone – and what savings you would have achieved – had you used Insight Predict, Catalyst’s award-winning TAR 2.0 platform.

Deep Learning in E-Discovery: Moving Past the Hype

Deep learning. The term seems to be ubiquitous these days, appearing everywhere from self-driving cars and speech transcription to victories in the game “Go” and cancer diagnosis. If we measure things by press coverage, deep learning seems poised to make every other form of machine learning obsolete.

Recently, Catalyst’s founder and CEO John Tredennick interviewed Catalyst’s chief scientist, Dr. Jeremy Pickens (who we at Catalyst call Dr. J), about how deep learning works and how it might be applied in the legal arena.

JT: Good afternoon, Dr. J. I have been reading about deep learning and would like to know more about how it works and what it might offer the legal profession.

Our 10 Most Popular TAR-Related Posts of 2017 (so far)

Machine learning is an area of artificial intelligence that enables computers to self-learn, without explicit programming. In e-discovery, machine-learning technologies such as technology assisted review (TAR) are helping legal teams dramatically speed document review and thereby reduce its cost. TAR learns which documents are most likely relevant and feeds those first to reviewers, typically eliminating the need to review from 50 to 90 percent of a collection.
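
To make the prioritization idea concrete, here is a minimal Python sketch using scikit-learn. It is purely illustrative – not Insight Predict’s actual algorithm – and the documents and judgments below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical reviewed documents with relevance judgments (1 = relevant).
reviewed_texts = ["merger agreement draft", "holiday party invite",
                  "due diligence memo", "fantasy football picks"]
judgments = [1, 0, 1, 0]

# Unreviewed pool to be prioritized.
unreviewed_texts = ["revised merger terms", "lunch menu", "escrow instructions"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(reviewed_texts)
X_pool = vectorizer.transform(unreviewed_texts)

model = LogisticRegression().fit(X_train, judgments)
scores = model.predict_proba(X_pool)[:, 1]  # estimated P(relevant) per doc

# Feed reviewers the most-likely-relevant documents first.
for score, doc in sorted(zip(scores, unreviewed_texts), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The point is simply that each round of review gives the model more labeled examples, so the ranking of what remains keeps improving.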

Lawyers are getting it, as evidenced by their expanding use of TAR. At Catalyst, 50 percent of matters now routinely use TAR—and none have been challenged in court.

Does Recall Measure TAR’s Effectiveness Across All Issues? We Put It To The Test

For some time now, critics of technology assisted review have opposed using general recall as a measure of its effectiveness. Overall recall, they argue, does not account for the fact that general responsiveness covers an array of more-specific issues, and that the documents relating to each of those issues appear in the collection in very different numbers, representing a wide range of prevalence levels.

Since general recall measures effectiveness across the entire collection, the critics’ concern is that you may find a lot of documents from the larger groups and only a few from the smaller groups, yet overall recall may still be very high. Using overall recall as a measure of effectiveness can theoretically mask a disproportionate and selective review and production. In other words, you may find a lot of documents about several underlying issues, but very few about others.
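
A small, hypothetical example makes the concern concrete. The counts below are invented, but they show how a healthy overall recall can coexist with very poor recall on low-prevalence issues:

```python
# Hypothetical counts: relevant documents per issue in the collection,
# and how many of each the review actually found.
relevant_by_issue = {"issue_A": 9000, "issue_B": 800, "issue_C": 200}
found_by_issue    = {"issue_A": 8500, "issue_B": 300, "issue_C":  20}

overall_recall = sum(found_by_issue.values()) / sum(relevant_by_issue.values())
print(f"overall recall: {overall_recall:.1%}")  # 88.2% -- looks healthy

# Per-issue recall tells a different story for the smaller issues.
for issue, relevant in relevant_by_issue.items():
    print(f"{issue}: {found_by_issue[issue] / relevant:.1%}")
# issue_A: 94.4%, issue_B: 37.5%, issue_C: 10.0%
```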

Citing TAR Research, Court OKs Production Using Random Sampling

Citing research on the efficacy of technology assisted review over human review, a federal court has approved a party’s request to respond to discovery using random sampling.

Despite a tight discovery timeline in the case, the plaintiff had sought to compel the defendant hospital to manually review nearly 16,000 patient records.
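
For readers unfamiliar with the statistics behind this kind of order, here is a rough Python sketch of the underlying idea: draw a simple random sample from the record set and estimate prevalence with a confidence interval. The sample size, responsiveness rate and record IDs are all hypothetical; the actual protocol would be whatever the court approved:

```python
import math
import random

# Hypothetical setup: 16,000 record IDs, of which we review only a sample.
population = list(range(16_000))
sample = random.sample(population, 400)

# In practice, reviewers would judge each sampled record; here we fake
# the judgments with a placeholder 12% responsiveness rate.
responsive = sum(random.random() < 0.12 for _ in sample)

# Normal-approximation 95% confidence interval for prevalence.
p_hat = responsive / len(sample)
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"estimated prevalence: {p_hat:.1%} +/- {margin:.1%}")
```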

New Book from Catalyst Answers Your Questions about TAR

Hot off the press is a new, complimentary book from Catalyst that answers your questions about technology assisted review.

The new book, Ask Catalyst: A User’s Guide to TAR, provides detailed answers to 20 basic and advanced questions about TAR, and particularly about advanced TAR 2.0 using continuous active learning.

The questions all came from you – our clients, blog readers and webinar attendees. We receive a lot of good questions about e-discovery technology and specifically about TAR, and we answer every question we get.

Catalyst Research: Family-Based Review and Expert Training — Experimental Simulations, Real Data

ABSTRACT

In this research we answer two main questions: (1) What is the efficiency of a TAR 2.0 family-level document review versus a TAR 2.0 individual-document review, and (2) How useful is expert-only (aka TAR 1.0 with expert) training, relative to TAR 2.0’s ability to conflate training and review using non-expert judgments [2]?

Catalyst’s Report from TREC 2016: ‘We Don’t Need No Stinkin Training’

One of the bigger, and still enduring, debates among Technology Assisted Review experts revolves around the method and amount of training you need to get optimal[1] results from your TAR algorithm. Over the years, experts have prescribed a variety of approaches, including:

  1. Random Only: Have a subject matter expert (SME), typically a senior lawyer, review and judge several thousand randomly selected documents.
  2. Active Learning: Have the SME review several thousand marginally relevant documents chosen by the computer to assist in the training.
  3. Mixed TAR 1.0 Approach: Have the SME review and judge a mix of randomly selected documents, some found through keyword search and others selected by the algorithm to help it find the boundary between relevant and non-relevant documents.
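
TAR 2.0, by contrast, uses continuous active learning (CAL): there is no separate expert training phase, and every judgment the review team makes feeds back into the model, which then re-ranks what remains. Here is a toy, self-contained Python sketch of that loop on synthetic data – an illustration of the general CAL pattern, not Catalyst’s implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for a collection: feature vectors plus hidden
# relevance labels that the "reviewer" reveals only when asked.
X = rng.normal(size=(1000, 20))
true_labels = (X[:, 0] + X[:, 1] > 0.8).astype(int)

# Seed with randomly reviewed documents until both classes have been seen.
reviewed, judgments = [], []
while len(set(judgments)) < 2:
    i = int(rng.integers(len(X)))
    if i not in reviewed:
        reviewed.append(i)
        judgments.append(int(true_labels[i]))

model = LogisticRegression()
for _ in range(20):  # each round: retrain, then review the top-ranked docs
    model.fit(X[reviewed], judgments)
    seen = set(reviewed)
    remaining = [j for j in range(len(X)) if j not in seen]
    scores = model.predict_proba(X[remaining])[:, 1]
    top = np.argsort(scores)[::-1][:25]
    for j in np.array(remaining)[top]:
        reviewed.append(int(j))
        judgments.append(int(true_labels[j]))

print(f"reviewed {len(reviewed)} of {len(X)} docs, "
      f"found {sum(judgments)} of {true_labels.sum()} relevant")
```

Running a sketch like this typically shows the review finding the large majority of the relevant documents while examining only a fraction of the collection – the intuition behind the training debate discussed in the post.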
