Machine learning is an area of artificial intelligence that enables computers to learn from data without explicit programming. In e-discovery, machine-learning technologies such as technology assisted review (TAR) help legal teams dramatically speed document review and thereby reduce its cost. TAR learns which documents are most likely relevant and feeds those to reviewers first, typically eliminating the need to review 50 to 90 percent of a collection.
Lawyers are getting it, as evidenced by their expanding use of TAR. At Catalyst, 50 percent of matters now routinely use TAR, and none has been challenged in court.
For some time now, critics of technology assisted review have argued against using overall recall as a measure of its effectiveness. Overall recall, they contend, does not account for the fact that general responsiveness spans an array of more specific issues, and the documents relating to each of those issues may be present in the collection at widely varying levels of prevalence.
Because overall recall measures effectiveness across the entire collection, the critics’ concern is that a review could find many documents from the larger groups and only a few from the smaller groups while overall recall remains very high. Overall recall can therefore, in theory, mask a disproportionate and selective review and production: you may find many documents about several underlying issues but very few about others.
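The critics’ arithmetic is easy to see with a toy calculation. The counts below are invented purely for illustration: a large high-prevalence issue reviewed thoroughly and a small low-prevalence issue mostly missed can still yield high overall recall.

```python
# Hypothetical illustration: overall recall can look strong even when
# recall on a low-prevalence issue is poor. All counts are invented.

def recall(found, total):
    """Fraction of the relevant documents that the review actually found."""
    return found / total

# Relevant documents per issue: (found by review, total in collection)
issues = {
    "pricing": (9000, 10000),  # large, high-prevalence issue, well covered
    "safety": (40, 400),       # small, low-prevalence issue, mostly missed
}

found_all = sum(f for f, _ in issues.values())
total_all = sum(t for _, t in issues.values())

overall = recall(found_all, total_all)  # about 0.87 across the collection
per_issue = {name: recall(f, t) for name, (f, t) in issues.items()}
# per_issue shows 0.90 recall on "pricing" but only 0.10 on "safety"
```

Overall recall of roughly 87 percent sounds defensible, yet nine of every ten relevant "safety" documents were never found, which is exactly the masking effect the critics describe.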
[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]
We received this question:
I understand that the QC feature of Insight Predict shows outliers between human decisions versus what Predict believes should be the result. But what if the parties who performed the original review that Predict is using to make judgments were making “bad” decisions? Would the system just use the bad training docs and base decisions just upon those docs?
Similarly, what about the case where half the team is making good decisions and half the team is making bad decisions? Can Insight learn effectively when being fed disparate results on very similar documents?
Can you eliminate the judgments of reviewers if you find they were making poor decisions to keep the system from “learning” bad things and thus making judgments based on the human errors?
Today’s question is answered by Mark Noel, managing director of professional services.
Do two years in a row constitute a streak? If so, Catalyst is on one.
For the second consecutive year, Legaltech News has named Catalyst a winner of its Innovation Award. Catalyst was honored in the category “Best E-Discovery Hosting Provider.”
Can keyword search be as effective as, or more effective than, technology assisted review at finding relevant documents?
A client recently asked me this question and it is one I frequently hear from lawyers. The issue underlying the question is whether a TAR platform such as our Insight Predict is worth the fee we charge for it.
The question is a fair one, and it applies to a range of cases. The short answer, drawing on my 20-plus years of experience as a lawyer, is an unequivocal “It depends.”
“It’s never too late,” people often say. But is that true for technology assisted review? If a legal team has already put substantial time and effort into manual review, can TAR still be worthwhile? That was the issue presented in a patent infringement case where the client’s approval to use TAR came only after the law firm had manually reviewed nearly half the collection. Even that late in the game, Insight Predict produced substantial savings in time and cost.
Unless you’ve been living in a cave for the last five years, you’ve likely heard about Technology Assisted Review and how it can help reduce the time and cost of document review in e-discovery. But maybe you’ve still never tried it on one of your own cases. Maybe you’ve been on the fence for fear that it won’t deliver real value for your matter.
If so, here’s your chance to get off the fence. Catalyst this week announced that it is so confident of the effectiveness of its award-winning TAR platform, Insight Predict, that it is putting its money where its mouth is. Catalyst is offering an unconditional, money-back guarantee to anyone who uses Insight Predict on their next case. If you are not completely satisfied with the value you receive using Insight Predict, Catalyst will refund your cost.
In their July 2014 paper, “Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery,” Maura Grossman and Gordon Cormack reported the results of a controlled TAR study comparing the effectiveness of a continuous active learning (CAL) protocol against two first-generation (TAR 1.0) protocols that rely on one-time training. The study found the CAL protocol to be more effective, often much more effective, than the TAR 1.0 protocols.
With that evidence establishing CAL’s advantages, e-discovery vendors began jumping on the bandwagon. Suddenly, it seems that every e-discovery vendor claims to use CAL or to somehow incorporate it into its TAR protocols.
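What distinguishes CAL from one-time training is the loop itself: the model is retrained after every reviewed batch, so each new round of ranking benefits from the latest reviewer judgments. The sketch below illustrates only that loop structure; the toy keyword scorer, the batch size, and the tiny document set are invented stand-ins, not Insight Predict’s actual classifier or workflow.

```python
# Minimal sketch of a continuous active learning (CAL) loop, assuming a
# toy keyword-overlap "model" in place of a real classifier.

def train(labeled):
    """Build the model: collect words seen in documents judged relevant."""
    vocab = set()
    for doc, relevant in labeled:
        if relevant:
            vocab.update(doc.split())
    return vocab

def score(model, doc):
    """Rank a document by its word overlap with relevant vocabulary."""
    return len(model & set(doc.split()))

def cal_review(collection, oracle, seed, batch_size=2):
    """Review in model-ranked batches, retraining after every batch.

    `oracle` stands in for the human reviewer's relevance judgment.
    """
    labeled = [(doc, oracle(doc)) for doc in seed]
    unreviewed = [d for d in collection if d not in seed]
    while unreviewed:
        model = train(labeled)  # retrain on all judgments so far
        unreviewed.sort(key=lambda d: score(model, d), reverse=True)
        batch, unreviewed = unreviewed[:batch_size], unreviewed[batch_size:]
        labeled += [(doc, oracle(doc)) for doc in batch]
    return labeled

docs = ["merger price deal", "lunch menu", "deal terms price", "holiday party"]

def is_relevant(d):
    return "price" in d or "deal" in d

result = cal_review(docs, is_relevant, seed=["merger price deal"])
```

Even in this tiny example the loop surfaces the second relevant document immediately after the seed, ahead of the irrelevant ones, which is the "most likely relevant first" behavior the Grossman and Cormack study measured at scale.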
When e-discovery goes international, it gets even more complex and costly. But technology assisted review can be as effective in reducing costs for multi-language, multinational matters as it is for matters here in the U.S.
This infographic illustrates how Catalyst Insight Predict helped streamline a recent patent litigation for a Japanese client. See how Predict’s Continuous Active Learning improved results and cut the time and cost of review by over 85%.
View Infographic >
Read Full Case Study >
No actual birds were harmed in the making of this blog post!
Since the advent of Technology Assisted Review (aka TAR, predictive coding or computer-assisted review), one of the open questions has been whether you must run a separate TAR process for each item in a document request. As litigation professionals know, it is rare to see only one numbered request in a Rule 34 pleading. Rather, you can expect scores of requests (typically as many as the local rules allow).