Category Archives: Technology-Assisted Review

Citing TAR Research, Court OKs Production Using Random Sampling

Citing research on the efficacy of technology assisted review over human review, a federal court has approved a party’s request to respond to discovery using random sampling.

Despite a tight discovery timeline in the case, the plaintiff had sought to compel the defendant hospital to manually review nearly 16,000 patient records. Continue reading
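The excerpt doesn’t say how the approved sample was to be drawn or sized, but sizing a simple random sample is straightforward statistics. Here is a minimal sketch in Python, assuming a normal-approximation confidence interval for a proportion with a finite-population correction; the 95 percent confidence level and 5 percent margin of error are illustrative choices, not figures from the opinion.

```python
import math
import random

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size for estimating a proportion via the normal
    approximation, with a finite-population correction.
    p=0.5 is the most conservative (largest-sample) assumption."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Stand-in IDs for the roughly 16,000 patient records at issue.
records = list(range(16_000))
n = sample_size(len(records))         # 376 at 95% confidence, +/-5% margin
reviewed = random.sample(records, n)  # simple random sample to review
print(n)
```

Under those assumptions, only about 376 of the nearly 16,000 records would need human review, which is the kind of arithmetic that makes sampling attractive against a tight discovery timeline.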

New Book from Catalyst Answers Your Questions about TAR

Hot off the press is a new, complimentary book from Catalyst that answers your questions about technology assisted review.

The new book, Ask Catalyst: A User’s Guide to TAR, provides detailed answers to 20 basic and advanced questions about TAR, and particularly about advanced TAR 2.0 using continuous active learning.

The questions all came from you – our clients, blog readers and webinar attendees. We receive a lot of good questions about e-discovery technology and specifically about TAR, and we answer every question we get. Continue reading

Catalyst Research: Family-Based Review and Expert Training — Experimental Simulations, Real Data

ABSTRACT

In this research we answer two main questions: (1) What is the efficiency of a TAR 2.0 family-level document review versus a TAR 2.0 individual document review, and (2) How useful is expert-only (aka TAR 1.0 with expert) training, relative to TAR 2.0’s ability to conflate training and review using non-expert judgments [2]? Continue reading

Catalyst’s Report from TREC 2016: ‘We Don’t Need No Stinkin Training’

One of the bigger, and still enduring, debates among Technology Assisted Review experts revolves around the method and amount of training you need to get optimal[1] results from your TAR algorithm. Over the years, experts have prescribed a variety of approaches, including:

  1. Random Only: Have a subject matter expert (SME), typically a senior lawyer, review and judge several thousand randomly selected documents.
  2. Active Learning: Have the SME review several thousand marginally relevant documents chosen by the computer to assist in the training (a minimal sketch of this kind of loop appears after this list).
  3. Mixed TAR 1.0 Approach: Have the SME review and judge a mix of randomly selected documents, some found through keyword search and others selected by the algorithm to help it find the boundary between relevant and non-relevant documents.
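The post doesn’t tie any of these approaches to a particular implementation, but the second is easy to illustrate. Below is a minimal sketch of an uncertainty-sampling active-learning loop using scikit-learn; the TF-IDF features, logistic-regression model, and the `label` callback standing in for the SME are all illustrative assumptions, not Catalyst’s algorithm.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning(docs, seed_idx, seed_labels, label, rounds=5, k=10):
    """Train on SME judgments, then repeatedly ask the SME (the `label`
    callback) to judge the k documents nearest the decision boundary.
    seed_labels must include at least one relevant and one non-relevant
    judgment so the classifier can fit."""
    X = TfidfVectorizer().fit_transform(docs)
    judged, labels = list(seed_idx), list(seed_labels)
    for _ in range(rounds):
        model = LogisticRegression().fit(X[judged], labels)
        pool = [i for i in range(X.shape[0]) if i not in set(judged)]
        probs = model.predict_proba(X[pool])[:, 1]           # P(relevant)
        picks = [pool[j] for j in np.argsort(np.abs(probs - 0.5))[:k]]
        judged += picks
        labels += [label(i) for i in picks]  # SME reviews the k picks
    return model
```

Each round retrains on everything judged so far and routes the documents the model is least sure about back to the expert, which is the intuition behind approach 2.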

Continue reading

Riddle Me This: How Does One Reviewer Do The Work of 48?

How does one reviewer do the work of 48? It may sound like a riddle, but a new infographic created by Catalyst illustrates the answer.

The question the infographic poses is this: In a review of 723,537 documents, how many reviewers would you need to finish in five days?

The answer depends on whether you are using an early version of technology assisted review (TAR 1.0) or a new-generation TAR 2.0 version. Continue reading
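The infographic’s own rate and cutoff assumptions aren’t reproduced in this excerpt, so the sketch below uses illustrative inputs: 60 documents per hour, eight-hour days, and a TAR 2.0 process that stops after roughly 10 percent of the collection is reviewed. It shows the shape of the arithmetic, not the infographic’s exact 1-versus-48 result.

```python
import math

DOCS = 723_537
RATE_PER_HOUR = 60     # assumed reviewer speed, docs/hour
HOURS_PER_DAY = 8
DAYS = 5

def reviewers_needed(docs_to_review):
    """Headcount required to finish the given volume within the deadline."""
    per_reviewer = RATE_PER_HOUR * HOURS_PER_DAY * DAYS  # 2,400 docs each
    return math.ceil(docs_to_review / per_reviewer)

tar1 = reviewers_needed(DOCS)              # eyes on (nearly) every document
tar2 = reviewers_needed(int(DOCS * 0.10))  # stop after ~10% reviewed
print(tar1, tar2)                          # 302 vs. 31 under these inputs
```

The point of the comparison: headcount scales linearly with the number of documents humans must read, so a process that can stop early shrinks the review team proportionally.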

Infographic: Cut the Cost of Discovery in Five Easy Steps

It is difficult to pin down precise numbers on how much companies spend on e-discovery. A 2010 survey prepared for the Duke Conference on Civil Litigation found that the average company paid $621,880 to $3 million per case and that companies at the high end paid $2.4 million to $9.8 million per case. A RAND study put the cost at a median of $1.8 million per case.

What we do know for certain is that e-discovery costs continue to rise as data grows more voluminous and complex. According to RAND, roughly 70 percent of e-discovery costs are attributable to document review. Continue reading
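Taking RAND’s figures at face value, the leverage of cutting review volume is easy to quantify. A back-of-the-envelope sketch in Python (the 40 percent reduction is an assumed example, not a figure from either study):

```python
MEDIAN_COST = 1_800_000   # RAND's median e-discovery cost per case
REVIEW_SHARE = 0.70       # RAND's share of cost attributable to review

def savings_per_case(review_reduction):
    """Dollars saved per case by cutting review volume by this fraction."""
    return MEDIAN_COST * REVIEW_SHARE * review_reduction

print(savings_per_case(0.40))  # ~= $504,000 saved per case
```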

Ask Catalyst: What is the Difference Between TAR 1.0 and TAR 2.0?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

This week’s question:

Your blog and website often refer to “TAR 1.0” and “TAR 2.0.” While I understand the general concept of technology assisted review, I am not clear what you mean by the 1.0 and 2.0 labels. Can you explain the difference?

Today’s question is answered by John Tredennick, founder and CEO.

Continue reading

Another Court Declines to Force A Party To Use TAR

You may recall that, in an opinion issued last August in Hyles v. New York City, U.S. Magistrate Judge Andrew J. Peck denied the plaintiff’s request to force the defendant to use technology assisted review instead of keywords to search for relevant documents and emails. Now, another court has followed suit, similarly concluding that it was without legal authority to force a party to use a particular method of e-discovery search.

In the Aug. 1 Hyles decision, attorneys for Pauline Hyles, a black female who is suing the city for workplace discrimination, had sought to force the city to use TAR, arguing it would be more cost efficient and effective than keyword searches. But even though Judge Peck agreed with Hyles’ attorneys “that in general, TAR is cheaper, more efficient and superior to keyword searching,” he concluded that the party responding to a discovery request is best situated to choose its methods and technologies and that he was without authority to force it to use TAR. Continue reading

Ask Catalyst: In TAR, What Is Validation And Why Is It Important?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

This week’s question:

In technology assisted review, what is validation and why is it important?

Today’s question is answered by John Tredennick, founder and CEO.

Validation is the “act of confirming that a process has achieved its intended purpose.”[1] It is important to TAR for several reasons: it confirms that the TAR algorithm has worked properly, and Rule 26(g) requires counsel to certify that the process they used for producing discovery documents was reasonable and reasonably effective.[2] While courts have approved validation methods in specific cases,[3] no court has yet purported to set forth specific validation standards applicable to all cases or all TAR review projects. Continue reading
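The post’s footnoted methods aren’t reproduced in this excerpt, but one common validation exercise is estimating recall from a random sample judged after the review. A minimal sketch, assuming a simple random sample in which each document is judged relevant or not and checked against whether the review found it; this is one common protocol, not a court-endorsed standard.

```python
import math

def estimated_recall(sample, z=1.96):
    """sample: (is_relevant, was_found_by_review) pairs for a simple
    random sample judged after the review. Returns the recall point
    estimate and a normal-approximation 95% confidence interval."""
    found_flags = [found for relevant, found in sample if relevant]
    if not found_flags:
        raise ValueError("no relevant documents in the sample")
    r = sum(found_flags) / len(found_flags)   # recall point estimate
    half = z * math.sqrt(r * (1 - r) / len(found_flags))
    return r, (max(0.0, r - half), min(1.0, r + half))

# Illustrative: 300 sampled relevant documents, 255 found by the review.
sample = [(True, i < 255) for i in range(300)]
print(estimated_recall(sample))  # ~0.85 recall, roughly +/-0.04
```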

Companies’ Use of TAR Grows, Reports New Survey of Litigation Trends

Not only are more companies using technology assisted review, but they are using it in a much broader range of matters, concludes the 12th annual Litigation Trends Survey by the law firm Norton Rose Fulbright.

The proportion of respondents using TAR increased from 57 percent in 2015 to 60 percent this year, the survey finds. In the U.S., two-thirds of all companies use TAR, whereas 46 percent of UK companies use it.

More significantly, companies that use TAR are applying it to a broader share of their matters, with 29 percent using it in half or more of them. Continue reading