Author Archives: Mark Noel

About Mark Noel

Mark Noel is a managing director of professional services at Catalyst Repository Systems, where he specializes in helping clients use technology-assisted review, advanced analytics, and custom workflows to handle complex and large-scale litigation. Before joining Catalyst, Mark was a member of the Acuity team at FTI Consulting, co-founded an e-discovery software startup, and was an intellectual property litigator with Latham & Watkins LLP.

Catalyst’s Report from TREC 2016: ‘We Don’t Need No Stinkin Training’

One of the bigger, and still enduring, debates among technology assisted review (TAR) experts revolves around the method and amount of training you need to get optimal results from your TAR algorithm. Over the years, experts have prescribed a variety of approaches (two of which are sketched in code after the list), including:

  1. Random Only: Have a subject matter expert (SME), typically a senior lawyer, review and judge several thousand randomly selected documents.
  2. Active Learning: Have the SME review several thousand marginally relevant documents chosen by the computer to assist in the training.
  3. Mixed TAR 1.0 Approach: Have the SME review and judge a mix of randomly selected documents, some found through keyword search and others selected by the algorithm to help it find the boundary between relevant and non-relevant documents.
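
For a feel for the difference, here is a minimal sketch of approaches 1 and 2: a toy logistic model over synthetic features, not any vendor's actual TAR engine. Every name and number in it is hypothetical.

```python
# A toy contrast of random selection vs. uncertainty-based active learning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))                  # stand-in document features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in relevance labels

def random_batch(unlabeled_idx, k=50):
    """Approach 1: pick k documents at random for SME review."""
    return rng.choice(unlabeled_idx, size=k, replace=False)

def uncertainty_batch(model, X, unlabeled_idx, k=50):
    """Approach 2 (active learning): pick the k documents whose predicted
    probability sits closest to 0.5, i.e. nearest the decision boundary."""
    probs = model.predict_proba(X[unlabeled_idx])[:, 1]
    closest = np.argsort(np.abs(probs - 0.5))[:k]
    return unlabeled_idx[closest]

# Seed with a small random sample, then let the model pick the next batch.
labeled = rng.choice(len(X), size=100, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
model = LogisticRegression().fit(X[labeled], y[labeled])
next_batch = uncertainty_batch(model, X, unlabeled, k=50)
print("Next docs for SME review:", next_batch[:5], "...")
```

The mixed approach (3) would simply seed the initial labeled set with keyword-search hits alongside the random picks before the first fit.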

Continue reading

Ask Catalyst: How Did the F.B.I. Review All Those Emails So Fast?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

On Sunday, F.B.I. Director James Comey announced that his agency had completed its review of 650,000 emails it had found just eight days earlier. How could the F.B.I. review 650,000 emails in just a week?

Today’s question is answered by Mark Noel, managing director of professional services.
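
The full answer is behind the link, but one standard culling technique that makes that kind of speed plausible is deduplication: a large share of a big email collection is typically copies of messages already seen. A minimal sketch of hash-based deduplication, with hypothetical email bodies:

```python
# Emails with identical normalized bodies get reviewed only once.
import hashlib

emails = [
    "Meeting moved to 3pm.",
    "meeting moved  to 3PM.",   # same text once normalized
    "Q3 numbers attached.",
]

def fingerprint(body: str) -> str:
    """Lowercase and collapse whitespace, then hash; identical
    bodies map to the same key."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

unique = {}
for body in emails:
    unique.setdefault(fingerprint(body), body)

print(f"{len(emails)} emails -> {len(unique)} unique to review")
```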


Continue reading

Judge Peck Declines to Force the Use of TAR

Bob Ambrogi, who serves as director of communications here at Catalyst, posted a detailed analysis yesterday at Bloomberg Law’s Big Law Business of Magistrate Judge Andrew J. Peck’s latest decision involving technology assisted review, Hyles v. New York City. It’s well worth a look.

Hyles is an employment case where the plaintiff wanted the court to force New York City to use TAR rather than its proposed search terms. Even though Judge Peck emphasized “that in general, TAR is cheaper, more efficient and superior to keyword searching,” he nevertheless declined to force the defendants to use TAR, finding that it hasn’t yet displaced other tools to the point where using something else is unreasonable. Further, the Sedona Principles state:

Responding parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.

Continue reading

Ask Catalyst: Why Can’t You Tell Me Exactly How Much TAR Will Save Me?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

Why can’t you tell me exactly how much I’ll save on my upcoming review project by using technology assisted review?
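
A hint at why, in miniature: every input to a savings estimate is an assumption until the review is underway. A toy calculation, with all numbers hypothetical:

```python
# Back-of-the-envelope review-savings estimate. Every input here is a
# guess until the project starts, which is the point of the question.
collection = 1_000_000    # documents collected (hypothetical)
richness = 0.05           # fraction actually relevant -- unknown up front
recall_target = 0.80      # share of relevant docs we must find
docs_per_relevant = 2.5   # ranked-review efficiency -- depends on the data

relevant = collection * richness                         # 50,000
reviewed = relevant * recall_target * docs_per_relevant  # 100,000
savings = 1 - reviewed / collection                      # 90%
print(f"Review ~{reviewed:,.0f} of {collection:,} docs ({savings:.0%} saved)")
```

Change the richness or the ranking's efficiency and the answer moves by tens of thousands of documents, which is exactly why no one can honestly quote an exact number in advance.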

Today’s question is answered by Mark Noel, managing director of professional services.

Continue reading

Video: The Three Types of E-Discovery Search Tasks and What They Mean for Your Workflow

Not all search tasks are created equal. Sometimes we need to do a reasonable and cost-effective job of finding the majority of relevant documents, sometimes we need to be 100 percent certain that we’ve found every last bit of sensitive data, and sometimes we just need the best examples of certain topics to tell us what’s happening or to use as evidence. This video explains these three broad categories of search tasks; the differences in recall, precision and relevance objectives for each; and the implications for choosing tools and workflows.
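
To pin down the vocabulary, here is a minimal computation of the two metrics, with hypothetical counts:

```python
# Recall asks "how much of what's relevant did we find?"
# Precision asks "how much of what we found is relevant?"
true_positives = 800    # relevant docs we retrieved
false_negatives = 200   # relevant docs we missed
false_positives = 400   # non-relevant docs we retrieved

recall = true_positives / (true_positives + false_negatives)      # 0.80
precision = true_positives / (true_positives + false_positives)   # ~0.67
print(f"recall={recall:.2f}, precision={precision:.2f}")
```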
Continue reading

Ask Catalyst: How Does Insight Predict Handle ‘Bad’ Decisions By Reviewers?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

I understand that the QC feature of Insight Predict shows outliers where human decisions differ from what Predict believes the result should be. But what if the people who performed the original review that Predict is using to make judgments were making “bad” decisions? Would the system simply use the bad training docs and base its decisions on those docs?

Similarly, what about the case where half the team is making good decisions and half the team is making bad decisions? Can Insight learn effectively when being fed disparate results on very similar documents?

Can you exclude the judgments of reviewers if you find they were making poor decisions, to keep the system from “learning” bad things and then making judgments based on those human errors?
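
Before the answer, a generic sketch of the QC idea the question refers to: flag documents where a reviewer's call and the model's score sharply disagree. This illustrates the concept only, not Catalyst's actual Insight Predict implementation; the threshold and data are invented.

```python
# Flag documents where the human label and model score sharply disagree,
# and track which reviewers rack up the most outliers.
def flag_outliers(decisions, threshold=0.7):
    """decisions: (doc_id, reviewer, human_label, model_score) tuples,
    where human_label is 1 (relevant) or 0 and model_score is P(relevant)."""
    flagged = []
    for doc_id, reviewer, label, score in decisions:
        disagreement = abs(label - score)   # 0 = full agreement, 1 = opposite
        if disagreement > threshold:
            flagged.append((doc_id, reviewer, disagreement))
    return flagged

decisions = [
    ("D1", "alice", 1, 0.92),   # agrees with the model
    ("D2", "bob",   0, 0.88),   # marked non-relevant; model says relevant
    ("D3", "bob",   1, 0.05),   # marked relevant; model says non-relevant
]
for doc_id, reviewer, gap in flag_outliers(decisions):
    print(f"QC: recheck {doc_id} ({reviewer} disagrees with model by {gap:.2f})")
```

In this generic framing, excluding a discredited reviewer is just filtering their decisions out of the training set before the next model fit.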

Today’s question is answered by Mark Noel, managing director of professional services.

Continue reading

Video: Understanding Yield Curves in Technology Assisted Review

In information retrieval science and e-discovery, yield curves (also called gain curves) are graphic visualizations of how quickly a review finds relevant documents or how well a technology assisted review tool has ranked all your documents.
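
A minimal sketch of how such a curve is computed and drawn, using synthetic scores and labels rather than real review data:

```python
# Plot cumulative relevant documents found against documents reviewed
# in ranked order. A curve that climbs steeply at the left means the
# ranking is front-loading the relevant documents.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
scores = rng.random(1000)                          # stand-in model scores
labels = (rng.random(1000) < scores).astype(int)   # relevance tracks score

order = np.argsort(-scores)          # review the highest-ranked docs first
found = np.cumsum(labels[order])     # relevant docs found so far

plt.plot(np.arange(1, len(found) + 1), found)
plt.xlabel("Documents reviewed (ranked order)")
plt.ylabel("Relevant documents found")
plt.title("Yield (gain) curve")
plt.show()
```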

This video shows you how they work and how to read them to measure and validate the results of your document review.

Continue reading

Video: How Contextual Diversity in TAR 2.0 Keeps You from Missing Key Pockets of Documents

How do you know what you don’t know when using technology assisted review? As I discussed in a recent post, this is a classic problem when searching a large volume of documents. You could miss documents, topics or terms in a collection simply because you don’t know to search for them.

Contextual Diversity is the solution to that problem. A proprietary TAR 2.0 tool built into Insight Predict, it continuously and actively explores unreviewed documents for concepts or topics that haven’t been seen, ensuring you’ve looked into all corners of the collection.

Continue reading
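
For intuition, here is a generic diversity-sampling sketch, emphatically not Catalyst's proprietary algorithm: surface the unreviewed documents that sit farthest from anything a human has already seen.

```python
# Pick unreviewed documents least similar to any reviewed document,
# so unseen pockets of the collection get human eyes.
import numpy as np

rng = np.random.default_rng(1)
docs = rng.normal(size=(2000, 50))       # stand-in document vectors
reviewed = rng.choice(2000, size=100, replace=False)
unreviewed = np.setdiff1d(np.arange(2000), reviewed)

# Distance from each unreviewed doc to its nearest reviewed neighbor.
diffs = docs[unreviewed, None, :] - docs[None, reviewed, :]
nearest = np.linalg.norm(diffs, axis=2).min(axis=1)

# The farthest documents are the least explored; send them for review.
next_batch = unreviewed[np.argsort(-nearest)[:10]]
print("Most novel unreviewed docs:", next_batch)
```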

Catalyst Publishes 2nd Edition of its Popular Book ‘TAR for Smart People’

I never liked the …for Dummies book titles. So when we released the revised and expanded second edition of our book about technology assisted review at Legaltech New York, I was glad we stuck with the original title, TAR for Smart People: How Technology Assisted Review Works and Why It Matters for Legal Professionals.

Download TAR for Smart People.

In a complex professional practice area such as law, it has become impossible for individual practitioners to hold high levels of expertise in every area that a project might involve. Our brains just aren’t big enough to hold everything we humans have learned. No shame in that. Continue reading