[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]
We received this question:
I understand that the QC feature of Insight Predict flags discrepancies between human decisions and what Predict believes the result should be. But what if the reviewers who performed the original review that Predict is using to make judgments were making “bad” decisions? Would the system just use the bad training documents and base its decisions solely upon those docs?
Similarly, what about the case where half the team is making good decisions and half the team is making bad decisions? Can Insight learn effectively when being fed disparate results on very similar documents?
Can you eliminate the judgments of reviewers if you find they were making poor decisions to keep the system from “learning” bad things and thus making judgments based on the human errors?
Today’s question is answered by Mark Noel, managing director of professional services. Continue reading
Do two years in a row constitute a streak? If so, Catalyst is on one.
For the second consecutive year, Legaltech News has named Catalyst a winner of its Innovation Award. Catalyst was honored in the category “Best E-Discovery Hosting Provider.” Continue reading
Can keyword search be as effective as, or even more effective than, technology assisted review at finding relevant documents?
A client recently asked me this question and it is one I frequently hear from lawyers. The issue underlying the question is whether a TAR platform such as our Insight Predict is worth the fee we charge for it.
The question is a fair one and it can apply to a range of cases. The short answer, drawing on my 20-plus years of experience as a lawyer, is unequivocally, “It depends.” Continue reading
“It’s never too late,” people often say. But is that true for technology assisted review? If a legal team has already put substantial time and effort into manual review, can TAR still be worthwhile? That was the issue presented in a patent infringement case where the client’s approval to use TAR came only after the law firm had manually reviewed nearly half the collection. Even that late in the game, Insight Predict produced substantial savings in time and cost.
Unless you’ve been living in a cave for the last five years, you’ve likely heard about Technology Assisted Review and how it can help reduce the time and cost of document review in e-discovery. But maybe you’ve still never tried it for one of your own cases. Maybe you’ve been on the fence for fear that it won’t really have any value for your case.
If so, here’s your chance to get off the fence. Catalyst this week announced that it is so confident of the effectiveness of its award-winning TAR platform, Insight Predict, that it is putting its money where its mouth is. Catalyst is offering an unconditional, money-back guarantee to anyone who uses Insight Predict on their next case. If you are not completely satisfied with the value you receive using Insight Predict, Catalyst will refund your cost. Continue reading
In their July 2014 paper, Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, Maura Grossman and Gordon Cormack reported the results of their controlled TAR study in which they compared the effectiveness of a Continuous Active Learning (CAL) protocol against two first-generation (TAR 1.0) protocols that use one-time training. Their study found the CAL protocol to be more effective—most times much more effective—than the TAR 1.0 protocols.
With that evidence establishing the advantages of CAL, e-discovery vendors began jumping on the bandwagon. All of a sudden, it seems that every e-discovery vendor claims to use CAL or somehow incorporate CAL into its TAR protocols. Continue reading
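To make the distinction concrete, here is a minimal sketch of the Continuous Active Learning loop in Python. It is purely illustrative and is not Predict's (or any vendor's) actual implementation: the `review` oracle stands in for a human reviewer, the term-overlap `score` function stands in for a real classifier, and the stopping rule is deliberately simplified. The point it shows is the defining trait of CAL versus one-time training: the model is re-ranked after every reviewed batch, and the next batch is always drawn from the top of the current ranking.

```python
import random

def review(doc):
    # Hypothetical oracle standing in for a human reviewer:
    # here, a document is "relevant" if it mentions "fraud".
    return "fraud" in doc

def score(doc, relevant_terms):
    # Toy relevance model: overlap with terms seen so far in
    # documents judged relevant. A real system would train a classifier.
    return sum(term in doc.split() for term in relevant_terms)

def cal_review(collection, seed_size=2, batch_size=3):
    random.seed(0)  # deterministic for the example
    unreviewed = list(collection)
    random.shuffle(unreviewed)
    judged = {}            # doc -> human relevance judgment
    relevant_terms = set() # the "model state" in this toy version

    # Seed: review a few random documents to get training started.
    batch = [unreviewed.pop() for _ in range(min(seed_size, len(unreviewed)))]
    while batch:
        found = 0
        for doc in batch:
            judged[doc] = review(doc)  # human judgment on each batch doc
            if judged[doc]:
                found += 1
                relevant_terms |= set(doc.split())
        # Simplified stopping rule: quit once a trained model's batch
        # comes back with nothing relevant.
        if relevant_terms and found == 0:
            break
        # The "continuous" part: re-rank all remaining documents with the
        # updated model, then pull the next batch from the top.
        unreviewed.sort(key=lambda d: score(d, relevant_terms))
        batch = [unreviewed.pop() for _ in range(min(batch_size, len(unreviewed)))]
    return judged
```

Contrast this with a TAR 1.0 protocol, where the model would be trained once on a fixed seed set and the remaining collection ranked a single time, with no feedback from the ongoing review.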
When e-discovery goes international, it gets even more complex and costly. But technology assisted review can be as effective in reducing costs for multi-language, multinational matters as it is for matters here in the U.S.
This infographic illustrates how Catalyst Insight Predict helped streamline a recent patent litigation for a Japanese client. See how Predict’s Continuous Active Learning improved results and cut the time and cost of review by over 85%.
View Infographic >
Read Full Case Study >
No actual birds were harmed in the making of this blog post!
Since the advent of Technology Assisted Review (aka TAR, predictive coding or computer-assisted review), one of the open questions has been whether you have to run a separate TAR process for each item in a document request. As litigation professionals know, it is rare to have only one numbered request in a Rule 34 pleading. Rather, you can expect to see scores of requests (typically as many as the local rules allow). Continue reading
I have been on the road quite a bit lately, attending and speaking at several e-discovery events. Most recently I was at the midyear meeting of the Sedona Conference Working Group 1 in Dallas, and before that I was a speaker at both the University of Florida’s 3rd Annual Electronic Discovery Conference and the 4th Annual ASU-Arkfeld E-Discovery and Digital Evidence Conference.
In my travels and elsewhere, I continue to see a marked increase in talk about the new TAR 2.0 protocol, Continuous Active Learning (CAL). I have been seeing increasing interest in CAL ever since the July 2014 release of the Grossman/Cormack study, “Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery.” Continue reading
Our Summit partner, DSi, has a large financial institution client that had allegedly been defrauded by a borrower. The details aren’t important to this discussion, but assume the borrower employed a variety of creative accounting techniques to make its financial position look better than it really was. And, as is often the case, the problems were missed by the accounting and other financial professionals conducting due diligence. Indeed, there were strong factual suggestions that one or more of the professionals were in on the scam.
As the fraud came to light, litigation followed. Perhaps in retaliation, or simply to mount a counteroffensive, the defendant borrower hit the bank with lengthy document requests. After collection and best-efforts culling, our client was still left with over 2.1 million documents that might be responsive. Neither time deadlines nor budget allowed for manual review of that volume of documents. Keyword search offered some help, but the problem remained: what to do with 2.1 million potentially responsive documents? Continue reading