Category Archives: Predictive Coding

Another Court Declines to Force A Party To Use TAR

You may recall that, in an opinion issued last August, Hyles v. New York City, U.S. Magistrate Judge Andrew J. Peck denied the plaintiff’s request to force the defendant to use technology assisted review instead of keyword searches to find relevant documents and emails. Now, another court has followed suit, similarly concluding that it was without legal authority to force a party to use a particular method of e-discovery search.

In the Aug. 1 Hyles decision, attorneys for Pauline Hyles, a black woman suing the city for workplace discrimination, had sought to force the city to use TAR, arguing it would be more cost-efficient and effective than keyword searches. But even though Judge Peck agreed with Hyles’ attorneys “that in general, TAR is cheaper, more efficient and superior to keyword searching,” he concluded that the party responding to a discovery request is best situated to choose its methods and technologies and that he was without authority to force it to use TAR. Continue reading

Video: Understanding Yield Curves in Technology Assisted Review

In information retrieval science and e-discovery, yield curves (also called gain curves) are graphic visualizations of how quickly a review finds relevant documents or how well a technology assisted review tool has ranked all your documents.

This video shows you how they work and how to read them to measure and validate the results of your document review. Continue reading
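To make the idea concrete, here is a minimal sketch in Python of how a yield (gain) curve is built: review documents in ranked order and track the cumulative percentage of relevant documents found against the percentage of the collection reviewed. The collection size, richness and ranking scores below are simulated assumptions for illustration only, not Insight Predict’s implementation.

```python
# Minimal gain/yield curve sketch with simulated data (not a real TAR engine).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulated collection: 10,000 documents, 10% of them relevant.
n_docs, richness = 10_000, 0.10
relevant = rng.random(n_docs) < richness

# Hypothetical ranking: relevant documents tend to receive higher scores.
scores = rng.normal(loc=np.where(relevant, 1.0, 0.0), scale=1.0)
order = np.argsort(-scores)  # review in descending rank order

# Yield curve: cumulative recall vs. portion of the collection reviewed.
cum_relevant = np.cumsum(relevant[order])
pct_reviewed = np.arange(1, n_docs + 1) / n_docs * 100
pct_recall = cum_relevant / relevant.sum() * 100

plt.plot(pct_reviewed, pct_recall, label="Ranked (TAR) review")
plt.plot([0, 100], [0, 100], linestyle="--", label="Random / linear review")
plt.xlabel("% of collection reviewed")
plt.ylabel("% of relevant documents found")
plt.title("Yield (gain) curve")
plt.legend()
plt.show()
```

The steeper the curve rises above the diagonal, the better the tool has ranked the collection; a curve that hugs the diagonal is no better than reviewing documents at random.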

Video: How Contextual Diversity in TAR 2.0 Keeps You from Missing Key Pockets of Documents

How do you know what you don’t know when using technology assisted review? As I discussed in a recent post, this is a classic problem when searching a large volume of documents. You could miss documents, topics or terms in a collection simply because you don’t know to search for them.

Contextual Diversity is the solution to that problem. A proprietary TAR 2.0 tool built into Insight Predict, it continuously and actively explores unreviewed documents for concepts or topics that haven’t been seen, ensuring you’ve looked into all corners of the collection. Continue reading
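For readers curious about the underlying idea, the sketch below shows a generic form of diversity sampling: score each unreviewed document by its similarity to everything already reviewed, and surface the documents that look least like anything seen so far. The sample texts and the TF-IDF/cosine-similarity approach are illustrative assumptions only; Catalyst’s Contextual Diversity algorithm is proprietary and differs in its details.

```python
# Generic diversity-sampling sketch (illustrative only; not Catalyst's
# proprietary Contextual Diversity algorithm). Document texts are placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "merger agreement and purchase price negotiations",
    "quarterly accounting entries and revenue recognition",
    "holiday party planning and catering menu",
    "loan covenant waiver request from the borrower",
    "fantasy football league standings",
]
reviewed_idx = [0, 1]  # documents reviewers have already seen
unreviewed_idx = [i for i in range(len(documents)) if i not in reviewed_idx]

tfidf = TfidfVectorizer().fit_transform(documents)
sims = cosine_similarity(tfidf[unreviewed_idx], tfidf[reviewed_idx])

# For each unreviewed document: similarity to its closest reviewed document.
nearest_sim = sims.max(axis=1)

# Lowest similarity first -- these likely cover topics the review hasn't touched.
for rank in np.argsort(nearest_sim):
    i = unreviewed_idx[rank]
    print(f"{nearest_sim[rank]:.2f}  {documents[i]}")
```

Documents that score near zero here share almost no vocabulary with anything reviewed so far, which is exactly the kind of unexplored pocket a diversity component is meant to put in front of reviewers.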

Assessing Adequacy of Production in a TAR 1.0 Review: Further Lessons from Rio Tinto and a Chance to Do a Little Fishing

Last March, we wrote about U.S. Magistrate Judge Andrew J. Peck’s decision in Rio Tinto PLC v. Vale SA (S.D.N.Y. March 3, 2015). The decision focused on the types of disputes over process that can arise when parties negotiate a TAR 1.0 protocol. In that post, we noted with approval Judge Peck’s acknowledgment that one common bone of contention in TAR 1.0 negotiations, transparency around training and the seed set, becomes less of an issue when the TAR methodology uses continuous active learning.

If the TAR methodology uses ‘continuous active learning’ (CAL) (as opposed to simple passive learning (SPL) or simple active learning (SAL)), the contents of the seed set is much less significant.

After issuing his opinion, and doubtless facing continuing squabbles among the parties, Judge Peck appointed Maura Grossman to serve as a special master to resolve discovery disputes relating to the parties’ use of TAR. Several months later, she issued a “Stipulation and Order re: Revised Validation and Audit Protocols for the Use of Predictive Coding in Discovery,” which is the subject of this blog post. Continue reading

Why Control Sets are Problematic in E-Discovery: A Follow-up to Ralph Losey

In a recent blog post, Ralph Losey lays out a case for the abolition of control sets in e-discovery, particularly when one is following a continuous learning protocol. Here at Catalyst, we could not agree more with this position. From the moment we rolled out our TAR 2.0 continuous learning engine, we have recommended against the use of control sets; in fact, we decided against ever implementing them in the first place, so we never risked steering clients awry.

Losey points out three main flaws with control sets, which may be summarized as (1) knowledge issues, (2) sequential testing bias, and (3) representativeness. In this blog post I offer my own take and evidence in support of these three points, and add a fourth difficulty with control sets: rolling collection. Continue reading
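To illustrate the rolling collection problem in particular, here is a toy simulation (all numbers are invented for the example): a control set drawn from the initial collection keeps reporting the original richness even after a later wave of documents with much lower richness arrives, so any recall estimate calibrated against that control set is now measuring the wrong collection.

```python
# Toy illustration of why a fixed control set breaks under rolling collection.
import numpy as np

rng = np.random.default_rng(7)

# Initial collection: 100,000 documents at 15% richness; control set drawn from it.
initial = rng.random(100_000) < 0.15
control_set = rng.choice(initial, size=1_000, replace=False)

# Later wave from new custodians: 100,000 more documents at only 2% richness.
rolling = rng.random(100_000) < 0.02
full_collection = np.concatenate([initial, rolling])

print(f"Control-set richness estimate:       {control_set.mean():.1%}")
print(f"Actual richness after rolling wave:  {full_collection.mean():.1%}")
# The control set still says roughly 15%, but the full collection is closer to
# 8.5%, so recall measured against the control set no longer reflects reality.
```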

Infographic: A TAR is Born: The Making of a Superstar

E-discovery review has come a long way in a short time. Not long ago, manual, linear review was the norm. Then came keyword search, which helped increase efficiency but was imperfect in its results. Technology-assisted review was a great leap forward, but early TAR 1.0 versions were rigid and slow. Only with the arrival of TAR 2.0 and Continuous Active Learning did TAR finally save the day for e-discovery.

The brief history of how TAR evolved is depicted in a new Catalyst infographic, A TAR is Born: The Making of a Superstar. See how e-discovery review matured from a demanding infant to a Ph.D. in savings. After you check out the infographic, read much more about TAR in Catalyst’s free e-book, TAR for Smart People.

View Infographic >

Case Study Details How a Major Bank Used Catalyst’s Insight Predict to Cut Its Production Review by 94%

In his 2015 opinion in Rio Tinto PLC v. Vale SA, U.S. Magistrate Judge Andrew J. Peck extolled the benefits of technology assisted review using Continuous Active Learning. In particular, he noted that CAL reduces or even eliminates the need for the rigid seed set required by older TAR methods.

CAL’s flexibility on seed sets was illustrated in a case where a large banking institution alleged it lost millions due to a borrower’s accounting fraud. In response to the borrower’s production request, the bank faced review of 2.1 million documents, even after culling. With neither the time nor budget to review them all, the bank turned to Catalyst’s Insight Predict, the first commercial TAR engine to use CAL. Predict cut the review by 94%.

Read the case study to see how it was done >>

Another Federal Decision Acknowledges that TAR Beats Manual Review

In the annals of case law about e-discovery and technology assisted review (TAR), Malone v. Kantner Ingredients will be only a footnote. In fact, were it not for a footnote, the case would barely warrant mention here.

This blog has chronicled the increasing judicial acceptance of TAR, starting with U.S. Magistrate Judge Andrew J. Peck’s seminal 2012 opinion in Da Silva Moore v. Publicis Groupe, which was the first to approve TAR, and continuing through to Judge Peck’s recent opinion in Rio Tinto PLC v. Vale SA, which declared, “the case law has developed to the point that it is now black letter law that where the producing party wants to utilize TAR for document review, courts will permit it.” Continue reading

Magistrate Judge Andrew Peck Discusses TAR in the Courtroom

U.S. Magistrate Judge Andrew J. Peck — author of the first-ever court decision approving the use of technology assisted review in e-discovery — was recently a guest on the Legal Talk Network podcast Digital Detectives. Hosts Sharon D. Nelson and John W. Simek, president and vice president of Sensei Enterprises, interviewed Judge Peck about how TAR works, what cases it is suitable for, and how it is being accepted in the courts.

Given Judge Peck’s leadership in broadening the adoption of TAR, we thought his comments would be of interest to readers of this blog. With the gracious permission of Sharon, John and the Legal Talk Network, below is a partial transcript of the show highlighting Judge Peck’s comments on TAR. You can hear the entire program through the Soundcloud player above or at the Legal Talk Network. Continue reading

The Luck of the Irish: TAR Approved by Irish High Court

I do not know if any leprechauns appeared in this case, but the Irish High Court found the proverbial pot of gold under the TAR rainbow in Irish Bank Resolution Corp. v. Quinn—the first decision outside the U.S. to approve the use of technology assisted review for civil discovery.

The protocol at issue in the March 3, 2015, decision was TAR 1.0 (Clearwell). For that reason, some of the points addressed by the court will be immaterial for legal professionals who use the more-advanced TAR 2.0 and Continuous Active Learning (CAL). Even so, the case makes for an interesting read, both for its description of the TAR process at issue and for its ultimate outcome. Continue reading