Despite a tight discovery timeline in the case, the plaintiff had sought to compel the defendant hospital to manually review nearly 16,000 patient records.
The new book, Ask Catalyst: A User’s Guide to TAR, provides detailed answers to 20 basic and advanced questions about TAR, and particularly about advanced TAR 2.0 using continuous active learning.
The questions all came from you – our clients, blog readers and webinar attendees. We receive a lot of good questions about e-discovery technology and specifically about TAR, and we answer every question we get.
In this research we answer two main questions: (1) What is the efficiency of a TAR 2.0 family-level document review versus a TAR 2.0 individual document review? (2) How useful is expert-only (aka TAR 1.0 with expert) training, relative to TAR 2.0’s ability to conflate training and review using non-expert judgments?
One of the biggest and most enduring debates among technology assisted review experts revolves around the method and amount of training needed to get optimal results from your TAR algorithm. Over the years, experts have prescribed a variety of approaches, including:
- Random Only: Have a subject matter expert (SME), typically a senior lawyer, review and judge several thousand randomly selected documents.
- Active Learning: Have the SME review several thousand marginally relevant documents chosen by the computer to assist in the training.
- Mixed TAR 1.0 Approach: Have the SME review and judge a mix of randomly selected documents, some found through keyword search and others selected by the algorithm to help it find the boundary between relevant and non-relevant documents.
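The approaches above differ mainly in how documents are chosen for SME review. As a rough illustration of the active-learning idea (a generic sketch, not Catalyst's actual algorithm; all names and scores are hypothetical), an uncertainty-based selector surfaces the documents whose predicted relevance sits closest to the decision boundary, since those judgments teach the classifier the most:

```python
# Hypothetical sketch of uncertainty-based active learning selection.
# Documents whose predicted relevance probability is nearest 0.5 lie on the
# decision boundary between relevant and non-relevant, so human judgments
# on them are the most informative for the classifier.

def select_for_training(scored_docs, batch_size=10):
    """scored_docs: list of (doc_id, relevance_probability) pairs."""
    # Rank by distance from the 0.5 decision boundary, closest first.
    ranked = sorted(scored_docs, key=lambda d: abs(d[1] - 0.5))
    return [doc_id for doc_id, _ in ranked[:batch_size]]

docs = [("d1", 0.95), ("d2", 0.51), ("d3", 0.10), ("d4", 0.48), ("d5", 0.70)]
print(select_for_training(docs, batch_size=2))  # → ['d2', 'd4']
```

A TAR 2.0 continuous-active-learning workflow, by contrast, typically feeds every reviewer judgment back into training rather than relying on a separate SME training phase.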
How does one reviewer do the work of 48? It may sound like a riddle, but a new infographic created by Catalyst illustrates the answer.
The question the infographic poses is this: In a review of 723,537 documents, how many reviewers would you need to finish in five days?
The answer depends on whether you are using an early version of technology assisted review (TAR 1.0) or a new-generation TAR 2.0 version.
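The staffing arithmetic behind a question like this is straightforward. A back-of-the-envelope sketch, where the review rate and schedule are illustrative assumptions (not figures from the Catalyst infographic):

```python
import math

# Back-of-the-envelope staffing estimate for a linear (read-everything)
# review. The pace and schedule below are assumed illustrative values.
DOCS = 723_537
DOCS_PER_HOUR = 60      # assumed per-reviewer pace
HOURS_PER_DAY = 8
DAYS = 5

docs_per_reviewer = DOCS_PER_HOUR * HOURS_PER_DAY * DAYS  # 2,400 per reviewer
reviewers_needed = math.ceil(DOCS / docs_per_reviewer)
print(reviewers_needed)  # → 302 reviewers to read every document in 5 days
```

TAR changes this arithmetic not by speeding up reading but by shrinking the pile: if the algorithm ranks documents well enough that reviewers can stop after seeing a small fraction of the collection, the same deadline needs far fewer reviewers.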
It is difficult to pin down precise numbers on how much companies spend on e-discovery. A 2010 survey prepared for the Duke Conference on Civil Litigation found that the average company paid $621,880 to $3 million per case and that companies at the high end paid $2.4 million to $9.8 million per case. A RAND study put the cost at a median of $1.8 million per case.
What we do know for certain is that e-discovery costs continue to rise as data continues to become more voluminous and complex. According to RAND, roughly 70 percent of e-discovery costs are attributable to document review.
This week’s question:
Your blog and website often refer to “TAR 1.0” and “TAR 2.0.” While I understand the general concept of technology assisted review, I am not clear what you mean by the 1.0 and 2.0 labels. Can you explain the difference?
You may recall that, in an opinion issued last August, Hyles v. New York City, U.S. Magistrate Judge Andrew J. Peck denied the plaintiff’s request to force the defendant to use technology assisted review instead of keywords to search for relevant documents and emails. Now, another court has followed suit, similarly concluding that it was without legal authority to force a party to use a particular method of e-discovery search.
In the Aug. 1 Hyles decision, attorneys for Pauline Hyles, a black female who is suing the city for workplace discrimination, had sought to force the city to use TAR, arguing it would be more cost efficient and effective than keyword searches. But even though Judge Peck agreed with Hyles’ attorneys “that in general, TAR is cheaper, more efficient and superior to keyword searching,” he concluded that the party responding to a discovery request is best situated to choose its methods and technologies and that he was without authority to force it to use TAR.
This week’s question:
In technology assisted review, what is validation and why is it important?
Today’s question is answered by John Tredennick, founder and CEO.
Validation is the “act of confirming that a process has achieved its intended purpose.” It is important in TAR for two main reasons: it confirms that the TAR algorithm has worked properly, and Rule 26(g) requires counsel to certify that the process used to produce discovery documents was reasonable and reasonably effective. While courts have approved validation methods in specific cases, no court has yet purported to set forth specific validation standards applicable to all cases or to all TAR review projects.
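One common validation technique (a generic statistical sketch, not a court-endorsed standard or Catalyst's specific protocol) is to draw a random sample from the collection, have a human judge each sampled document, and estimate recall: the fraction of truly relevant documents that the review actually produced. All names and figures here are hypothetical:

```python
# Illustrative recall estimate from a random validation sample.
# Each sampled document is judged relevant/not relevant by a human and
# checked against the production set. All figures are hypothetical.

def estimate_recall(sample):
    """sample: list of (is_relevant, was_produced) boolean pairs."""
    relevant = [s for s in sample if s[0]]
    if not relevant:
        return None  # no relevant docs in sample; draw a larger sample
    found = sum(1 for _, produced in relevant if produced)
    return found / len(relevant)

# Sample of 200 docs: 40 judged relevant, 34 of those were produced.
sample = [(True, True)] * 34 + [(True, False)] * 6 + [(False, False)] * 160
print(f"Estimated recall: {estimate_recall(sample):.0%}")  # → 85%
```

In practice, the confidence interval around this point estimate matters as much as the estimate itself, since a small sample can make a poor review look adequate by chance.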
Not only are more companies using technology assisted review, but they are using it in a much broader range of matters, concludes the 12th annual Litigation Trends Survey by the law firm Norton Rose Fulbright.
The proportion of respondents using TAR increased from 57 percent in 2015 to 60 percent this year, the survey finds. In the U.S., two-thirds of all companies use TAR, whereas 46 percent of UK companies use it.
More significantly, companies that use TAR are applying it to a greater share of their matters, with 29 percent using it in half or more of their matters.