Category Archives: Insight Predict

TAR for Smart Chickens

Special Master Grossman offers a new validation protocol in the Broiler Chicken Antitrust Cases

Validation is one of the more challenging parts of technology assisted review. We have written about it, and about the attendant difficulty of proving recall, several times.

The fundamental question is whether a party using TAR has found a sufficient number of responsive documents to meet its discovery obligations. For reasons discussed in our earlier articles, proving that you have attained a sufficient level of recall to justify stopping the review can be a difficult problem, particularly when richness is low. Continue reading
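
To make the difficulty concrete, here is a minimal sketch of the sampling arithmetic behind a recall estimate, assuming a simple random control set drawn before review; the function and numbers are illustrative, not any party’s actual protocol.

```python
import math

def estimate_recall(responsive_in_sample: int, found_by_review: int, z: float = 1.96):
    """Estimate recall from a random control set (normal approximation).

    responsive_in_sample: responsive documents in the control sample
    found_by_review: how many of those the review actually located
    """
    if responsive_in_sample == 0:
        raise ValueError("no responsive documents in the sample")
    p = found_by_review / responsive_in_sample
    half = z * math.sqrt(p * (1 - p) / responsive_in_sample)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical numbers: the review found 80 of 100 sampled responsive documents.
point, low, high = estimate_recall(100, 80)
print(f"recall ~ {point:.0%}, 95% interval roughly {low:.0%} to {high:.0%}")
```

Note the catch when richness is low: if only 1 percent of the collection is responsive, a random sample of 10,000 documents yields only about 100 responsive ones, so narrowing that interval gets expensive fast.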

Review Efficiency Using Insight Predict

An Initial Case Study

Much of the discussion around Technology Assisted Review (TAR) focuses on “recall,” the percentage of the relevant documents found during the review process. Recall is important because lawyers have a duty to take reasonable (and proportionate) steps to produce responsive documents. Indeed, Rule 26(g) of the Federal Rules of Civil Procedure effectively requires an attorney to certify, after reasonable inquiry, that discovery responses and any associated production are reasonable and proportionate under the totality of the circumstances.

In that regard, achieving a recall rate below 50% does not seem reasonable, nor is it likely to be proportionate in most cases. Current TAR decisions suggest that reaching 75% recall is likely reasonable, especially given the potential cost of finding additional relevant documents. Higher recall rates, 80% or above, would seem reasonable in almost every case. Continue reading
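
As a toy illustration of the metric itself (the numbers are invented, not drawn from any matter):

```python
def recall(found_relevant: int, total_relevant: int) -> float:
    """Recall = relevant documents found / relevant documents in the collection."""
    return found_relevant / total_relevant

# A collection holds 10,000 relevant documents and the review surfaces 7,500 of them.
print(f"{recall(7_500, 10_000):.0%}")  # 75%, the level current TAR decisions treat as likely reasonable
```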

Comparing Family-Level Review Against Individual-Document Review: A Simulation Experiment

In two recent posts, we’ve reported on simulations of technology assisted review conducted as part of our TAR Challenge, an opportunity for any corporation or law firm to compare its results in an actual, manual review against the results it would have achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

Today, we are taking a slightly different tack. We are again conducting a simulation using actual documents that were previously reviewed in an active litigation. However, this time, we are Continue reading
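
For readers unfamiliar with the distinction, here is a minimal sketch of family-level roll-up under one common convention, namely that a family is treated as a unit if any member is responsive; the data shapes and the convention itself are our illustrative assumptions.

```python
from collections import defaultdict

# doc_id -> (family_id, is_responsive); invented data for illustration only
docs = {
    "D1": ("F1", False),  # parent email, not responsive on its own
    "D2": ("F1", True),   # responsive attachment
    "D3": ("F2", False),  # standalone document
}

members = defaultdict(list)
for doc_id, (family_id, responsive) in docs.items():
    members[family_id].append(responsive)

# Family-level review: the whole family is treated as responsive if any member is.
print({fid: any(flags) for fid, flags in members.items()})  # {'F1': True, 'F2': False}
```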

The TAR Challenge: How One Client Could Have Cut Review By More Than 57%

How much can you save using TAR 2.0, the advanced form of technology assisted review used by Catalyst’s Insight Predict? That is a question many of our clients ask, until they try it and see for themselves.

Perhaps you’ve wondered about this. You’ve read articles or websites talking about TAR’s ability to lower review costs by reducing the number of documents requiring review. You might even have read about the even greater gains in efficiency delivered by second-generation TAR 2.0 platforms that use the continuous active learning protocol. But still you’ve held out, maybe uncertain of the technology or wondering whether it is right for your cases. Continue reading

Our 10 Most Popular TAR-Related Posts of 2017 (so far)

Machine learning is an area of artificial intelligence that enables computers to learn without explicit programming. In e-discovery, machine-learning technologies such as technology assisted review (TAR) are helping legal teams dramatically speed document review and thereby reduce its cost. TAR learns which documents are most likely relevant and feeds those to reviewers first, typically eliminating the need to review 50 to 90 percent of a collection.
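
The feedback loop behind that prioritization can be sketched generically; this is an outline of continuous active learning under our own simplifying assumptions (a scikit-learn classifier, a seed set containing both relevant and non-relevant examples, and a stand-in for the human reviewer), not Insight Predict’s actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cal_review(vectors: np.ndarray, seed: dict, judge, batch_size: int = 50) -> dict:
    """Generic continuous active learning loop (illustrative sketch only).

    vectors: feature matrix for the collection (n_docs x n_features)
    seed:    initial doc_index -> bool judgments; must include both classes
    judge:   callable standing in for a human reviewer's call on one document
    """
    labels = dict(seed)
    model = LogisticRegression(max_iter=1000)
    while len(labels) < len(vectors):
        reviewed = list(labels)
        model.fit(vectors[reviewed], [labels[i] for i in reviewed])
        rest = [i for i in range(len(vectors)) if i not in labels]
        # Rank the unreviewed documents and feed the likeliest-relevant first.
        scores = model.predict_proba(vectors[rest])[:, 1]
        for _, i in sorted(zip(scores, rest), reverse=True)[:batch_size]:
            labels[i] = judge(i)
        # A real system stops once new batches yield few relevant documents.
    return labels
```

In practice the loop runs against document feature vectors, with reviewers’ coding decisions flowing back in as each batch is completed.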

Lawyers are getting it, as evidenced by their expanding use of TAR. At Catalyst, 50 percent of matters now routinely use TAR—and none have been challenged in court. Continue reading

Does Recall Measure TAR’s Effectiveness Across All Issues? We Put It To The Test

For some time now, critics of technology assisted review have objected to using general recall as a measure of its effectiveness. Overall recall, they argue, does not account for the fact that general responsiveness covers an array of more specific issues, and the documents relating to each of those issues appear in the collection in different numbers, spanning a wide range of prevalence levels.

Since general recall measures effectiveness across the entire collection, the critics’ concern is that you will find a lot of documents from the larger groups and only a few from the smaller groups, yet overall recall may still be very high. Using overall recall as a measure of effectiveness can theoretically mask a disproportionate and selective review and production. In other words, you may find a lot of documents about several underlying issues, but you might find very few about others. Continue reading
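
The masking effect is easy to see with toy numbers (invented for illustration): compute recall per issue alongside the overall figure.

```python
# (found, total relevant) per underlying issue -- invented counts for illustration
issues = {"pricing": (900, 1_000), "shipping": (850, 900), "safety": (5, 100)}

found = sum(f for f, _ in issues.values())
total = sum(t for _, t in issues.values())
print(f"overall recall: {found / total:.0%}")   # about 88%, which looks healthy
for name, (f, t) in issues.items():
    print(f"  {name}: {f / t:.0%}")             # but safety recall is only 5%
```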

Ask Catalyst: How Does Insight Predict Handle ‘Bad’ Decisions By Reviewers?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

I understand that the QC feature of Insight Predict shows outliers between human decisions and what Predict believes the result should be. But what if the parties who performed the original review that Predict is using to make judgments were making “bad” decisions? Would the system simply use the bad training documents and base its decisions on those alone?

Similarly, what about the case where half the team is making good decisions and half the team is making bad decisions? Can Insight learn effectively when being fed disparate results on very similar documents?

Can you eliminate the judgments of reviewers who you find were making poor decisions, to keep the system from “learning” bad things and making judgments based on those human errors?

Today’s question is answered by Mark Noel, managing director of professional services.  Continue reading
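
For intuition about the kind of QC check the question describes, here is a generic sketch of flagging human/model disagreements; the threshold, data shapes, and function are our assumptions, not Predict’s actual code.

```python
def qc_outliers(judgments: dict, scores: dict, threshold: float = 0.35) -> list:
    """Flag documents where the human call and the model score sharply disagree.

    judgments: doc_id -> bool (did the reviewer mark it responsive?)
    scores:    doc_id -> float in [0, 1] (model's predicted probability)
    """
    flagged = []
    for doc_id, responsive in judgments.items():
        score = scores[doc_id]
        # Marked responsive but scored very low, or the reverse.
        if (responsive and score < threshold) or (not responsive and score > 1 - threshold):
            flagged.append(doc_id)
    return flagged

# The reviewer called D7 responsive, but the model scores it 0.08 -- worth a second look.
print(qc_outliers({"D7": True, "D8": False}, {"D7": 0.08, "D8": 0.12}))  # ['D7']
```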

An Open Look at Keyword Search vs. Predictive Analytics

Can keyword search be as effective as, or even more effective than, technology assisted review at finding relevant documents?

A client recently asked me this question and it is one I frequently hear from lawyers. The issue underlying the question is whether a TAR platform such as our Insight Predict is worth the fee we charge for it.

The question is a fair one and it can apply to a range of cases. The short answer, drawing on my 20-plus years of experience as a lawyer, is unequivocally, “It depends.” Continue reading

Case Study: Is It Ever Too Late in A Review to Start Using TAR?

“It’s never too late,” people often say. But is that true for technology assisted review? If a legal team has already put substantial time and effort into manual review, can TAR still be worthwhile? That was the issue presented in a patent infringement case where the client’s approval to use TAR came only after the law firm had manually reviewed nearly half the collection. Even that late in the game, Insight Predict produced substantial savings in time and cost.
Continue reading