Category Archives: Review

Catalyst Research: Family-Based Review and Expert Training — Experimental Simulations, Real Data

ABSTRACT

In this research we answer two main questions: (1) How efficient is a TAR 2.0 family-level document review compared with a TAR 2.0 individual-document review? (2) How useful is expert-only training (aka TAR 1.0 with an expert) relative to TAR 2.0’s ability to conflate training and review using non-expert judgments [2]? Continue reading

Video: The Three Types of E-Discovery Search Tasks and What They Mean for Your Workflow


Not all search tasks are created equal. Sometimes we need to do a reasonable and cost-effective job of finding the majority of relevant documents, sometimes we need to be 100 percent certain that we’ve found every last bit of sensitive data, and sometimes we just need the best examples of certain topics to tell us what’s happening or to use as evidence. This video explains these three broad categories of search tasks; the differences in recall, precision and relevance objectives for each; and the implications for choosing tools and workflows.
Continue reading
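For readers who want the recall and precision trade-offs made concrete, here is a minimal Python sketch of the two measures; the document sets and counts are hypothetical, not taken from the video:

    def recall_precision(retrieved, relevant):
        # Recall: fraction of all relevant documents that we actually found.
        # Precision: fraction of what we retrieved that is actually relevant.
        hits = len(retrieved & relevant)
        recall = hits / len(relevant) if relevant else 0.0
        precision = hits / len(retrieved) if retrieved else 0.0
        return recall, precision

    # Hypothetical numbers: 20 documents retrieved, 10 truly relevant, 8 found.
    retrieved = set(range(20))
    relevant = set(range(12, 22))
    print(recall_precision(retrieved, relevant))  # -> (0.8, 0.4)

A “find the majority” task tolerates the lower precision shown here; a “find every last bit” task pushes recall toward 1.0 even at great cost; a “best examples” task cares mostly about precision at the top of the list.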

Why Control Sets are Problematic in E-Discovery: A Follow-up to Ralph Losey

In a recent blog post, Ralph Losey lays out a case for the abolishment of control sets in e-discovery, particularly if one is following a continuous learning protocol. Here at Catalyst, we could not agree more. From the very first moment we rolled out our TAR 2.0 continuous learning engine, we have not only recommended against the use of control sets but decided against ever implementing them, so we never risked steering clients awry.

Losey points out three main flaws with control sets. These may be summarized as (1) knowledge issues, (2) sequential testing bias, and (3) representativeness. In this blog post I offer my own take and evidence in support of these three points, and add a fourth difficulty with control sets: rolling collection. Continue reading
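To make the sequential testing bias concrete, here is a small, purely hypothetical Python simulation (mine, not Losey’s). Every candidate model below truly achieves 75% recall, yet repeatedly testing against the same small control set and keeping the best score makes performance look better than it is:

    import random

    random.seed(42)

    TRUE_RECALL = 0.75        # every candidate model truly achieves 75% recall
    CONTROL_POSITIVES = 40    # relevant documents in a small control set
    PEEKS = 20                # times we test against the same control set

    def measured_recall():
        # On a small control set, measured recall is a noisy estimate:
        # each relevant document is found with probability TRUE_RECALL.
        found = sum(random.random() < TRUE_RECALL for _ in range(CONTROL_POSITIVES))
        return found / CONTROL_POSITIVES

    best = max(measured_recall() for _ in range(PEEKS))
    print(f"true recall: {TRUE_RECALL:.0%}; best of {PEEKS} peeks: {best:.0%}")

The “best of 20 peeks” reading runs systematically above 75 percent: pure selection bias, with no improvement in the underlying model.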

Case Study: Is It Ever Too Late in a Review to Start Using TAR?

“It’s never too late,” people often say. But is that true for technology assisted review? If a legal team has already put substantial time and effort into manual review, can TAR still be worthwhile? That was the issue presented in a patent infringement case where the client’s approval to use TAR came only after the law firm had manually reviewed nearly half the collection. Even that late in the game, Insight Predict produced substantial savings in time and cost.
Continue reading

Forbes Interviews Catalyst’s John Tredennick and Mark Noel on Technology Assisted Review


What is the impact of data and technology on the modern law firm and lawyer? This was the question Forbes contributor David J. Parnell set out to answer when he recently interviewed John Tredennick, Catalyst’s founder and CEO, and Mark Noel, Catalyst’s managing director of professional services.

At one point in the wide-ranging interview — which was published on Forbes last week — Parnell asks Tredennick about some of the major changes in legal technology he has witnessed over the years. In response, Tredennick says that the legal industry is currently in the midst of a major transition with respect to technology assisted review.

Suddenly technology has come where you take a million documents in review—and for any big firm lawyer that’s a big smile on their face because with junior associates reviewing at 500 docs a day, you’ve got your year made—and somebody comes along and says, “You know, with a wave of a wand and a couple training docs, we’re going to cut that million documents down to about 50,000 docs that are probably important.” Maybe 95% of those billable hours go away. That does not make the lawyers smile. That does not make you smile.

But I’ve seen this for 30 years. The innovation comes out; the billable hour suffers; but you always have a few law firms that are not at the top, and then they say, “We don’t have 50,000 associates. Let’s go outsmart them. We’ll lead the way.” And they start taking business away from the big guys. And the corporate entities listen and change happens.

Elsewhere in the interview, Noel tells Parnell that technologies such as TAR are helping to ease the work of lawyers, but will never replace them.

Many people are realizing that they have to change the way they work. And tools like technology assisted review are changing the way attorneys work. But it’s not going to replace them. TAR tools can quickly analyze millions of documents for subtle patterns, but only humans can decide what’s important to the case, or what stories the documents can tell. So these systems are hybrids: The machines do what they do best, and the humans do what they do best. There will be plenty of work to go around for skilled practitioners who know the tools and have the right skillsets.

Read the full interview at Forbes.

Another Court Formally Endorses the Use of Technology Assisted Review

Given the increasing prevalence of technology assisted review in e-discovery, it seems hard to believe that it was just 19 months ago that TAR received its first judicial endorsement. That endorsement came, of course, from U.S. Magistrate Judge Andrew J. Peck in his landmark ruling in Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), adopted sub nom. Moore v. Publicis Groupe SA, No. 11 Civ. 1279 (ALC)(AJP), 2012 WL 1446534 (S.D.N.Y. Apr. 26, 2012), in which he stated, “This judicial opinion now recognizes that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.”

Other courts have since followed suit, and now there is another to add to the list: the U.S. Tax Court. Continue reading

Continuous Active Learning for Technology Assisted Review
(How it Works and Why it Matters for E-Discovery)

Last month, two of the leading experts on e-discovery, Maura R. Grossman and Gordon V. Cormack, presented a peer-reviewed study on continuous active learning, “Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery,” at the annual conference of the Special Interest Group on Information Retrieval (SIGIR), a part of the Association for Computing Machinery (ACM).

In the study, they compared three TAR protocols, testing them across eight different cases. Two of the three protocols, Simple Passive Learning (SPL) and Simple Active Learning (SAL), are typically associated with early approaches to predictive coding, which we call TAR 1.0. The third, Continuous Active Learning (CAL), is a central part of a newer approach to predictive coding, which we call TAR 2.0. Continue reading
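The CAL loop itself is simple to sketch. In the toy Python example below, the “classifier” is just a score threshold and the corpus is synthetic; this illustrates the shape of the protocol, not the actual systems Grossman and Cormack tested:

    import random

    random.seed(0)

    # Synthetic corpus: each document is one feature value; the first 50 are relevant.
    docs = {i: random.gauss(1.0 if i < 50 else 0.0, 0.7) for i in range(500)}
    truth = {i: i < 50 for i in docs}   # ground truth, standing in for the reviewer

    def train(labels):
        # Toy classifier: threshold midway between labeled relevant/irrelevant means.
        rel = [docs[d] for d, y in labels.items() if y]
        irr = [docs[d] for d, y in labels.items() if not y]
        mid = (sum(rel) / len(rel) + sum(irr) / len(irr)) / 2
        return lambda d: docs[d] - mid  # higher score = more likely relevant

    labels = {0: True, 499: False}      # seed judgments
    while sum(labels.values()) < 45 and len(labels) < len(docs):
        score = train(labels)           # retrain on every judgment made so far
        batch = sorted((d for d in docs if d not in labels),
                       key=score, reverse=True)[:20]
        for d in batch:                 # always review the current top of the ranking
            labels[d] = truth[d]

    print(f"reviewed {len(labels)} documents to find {sum(labels.values())} of 50 relevant")

By contrast, SPL trains once on randomly selected documents and SAL stops training once the model stabilizes; CAL keeps learning for as long as review continues.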

Judge Refuses to Order Predictive Coding Where Not Agreed in ESI Protocol

Two years ago, it was big news in the world of e-discovery when U.S. Magistrate Judge Andrew J. Peck issued the first judicial opinion expressly approving the use of predictive coding. As other judges followed suit, issuing their own opinions endorsing or approving predictive coding, the trend led law firm Gibson Dunn, in its annual e-discovery update, to declare 2012 “the year of predictive coding.”

The trend towards judicial acceptance of predictive coding and other forms of technology assisted review (TAR) has continued, to the point where it is now newsworthy when a judge declines to order TAR. Continue reading

In the World of Big Data, Human Judgment Comes Second; the Algorithm Rules

I read a fascinating blog post that Andrew McAfee wrote for the Harvard Business Review, titled “Big Data’s Biggest Challenge? Convincing People NOT to Trust Their Judgment.” Its primary thesis is that as the amount of data goes up, the importance of human judgment should go down.

Downplay human judgment? In this age, one would think that judgment is more important than ever. How can we manage in this increasingly complex world if we don’t use our judgment?

Even though it may seem counterintuitive, support for this proposition is piling up rapidly. McAfee cites numerous examples to back his argument. For one, it has been shown that parole boards do much worse than algorithms in assessing which prisoners should be sent home. Pathologists are not as good as image analysis software at diagnosing breast cancer. Continue reading

TAR 2.0: Continuous Ranking – Is One Bite at the Apple Really Enough?

For all of its complexity, technology-assisted review (TAR) in its traditional form is easy to sum up (see the sketch after this list):

  1. A lawyer (subject matter expert) sits down at a computer and looks at a subset of documents.
  2. For each, the lawyer records a thumbs-up or thumbs-down decision (tagging the document). The TAR algorithm watches carefully, learning during this training.
  3. When training is complete, we let the system rank the full set of documents and divide it into (predicted) relevant and irrelevant.[1]
  4. We then review the relevant documents, ignoring the rest. Continue reading
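Here is a toy Python sketch of this one-bite workflow; the threshold “model” and the synthetic scores are stand-ins for a real TAR 1.0 classifier and corpus, purely for illustration:

    import random

    random.seed(1)

    # Synthetic corpus: the first 100 of 1,000 documents are relevant.
    docs = {i: random.gauss(1.0 if i < 100 else 0.0, 0.8) for i in range(1000)}
    truth = {i: i < 100 for i in docs}

    # Steps 1-2: the expert tags a training subset (stratified here only so the
    # toy example always sees both classes).
    training = random.sample(range(100), 8) + random.sample(range(100, 1000), 72)
    labels = {d: truth[d] for d in training}

    # Toy classifier: cutoff midway between labeled relevant/irrelevant means.
    rel = [docs[d] for d, y in labels.items() if y]
    irr = [docs[d] for d, y in labels.items() if not y]
    cutoff = (sum(rel) / len(rel) + sum(irr) / len(irr)) / 2

    # Steps 3-4: a single ranking pass, then review only the predicted-relevant side.
    predicted = [d for d in docs if d not in labels and docs[d] >= cutoff]
    found = sum(truth[d] for d in predicted)
    print(f"review {len(predicted)} documents, catching {found} of 92 remaining relevant")

Note what is missing: once the single ranking pass is done, nothing the reviewers learn afterward ever improves the model. That is the one bite at the apple the title asks about.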