Category Archives: TAR 2.0

Can You Do Good TAR with a Bad Algorithm?

Should proportionality arguments allow producing parties to get away with poor productions simply because they wasted a lot of effort due to an extremely bad algorithm? That was a question that Dr. Bill Dimm, founder and CEO of Hot Neuron (the maker of Clustify software), posed in a recent blog post, TAR, Proportionality, and Bad Algorithms (1-NN), and it was the subject of our TAR Talk podcast.

This question is critical to e-discovery, and especially relevant to technology-assisted review (TAR). Listen to our short podcast, led by Bill with participants Mary Mack of ACEDS and Catalyst’s John Tredennick and Tom Gricks, as they discuss whether one can do “good” TAR with a bad algorithm. Continue reading

Are People the Weakest Link in Technology Assisted Review? Not Really.

In mid-October, our friend Michael Quartararo wrote a post for Above the Law asking whether people were the weakest link in technology-assisted review (TAR). Michael offered some thoughts around whether this may be the case, but he didn’t really answer the question. So, we have to ask:

Why aren’t more people using TAR?

One answer is that there is still a lot of confusion about different types of TAR and how they work. Unfortunately, it appears that Michael’s post may have added to the confusion because he did not differentiate between legacy TAR 1.0 and TAR 2.0. Our first thought was to let the article pass without rejoinder or correction. To our surprise, however, it has been cited and reposted as authoritative by several others. To that end, we want to help clarify a number of Michael’s points. We will quote from his post. Continue reading

Using TAR for Asian Language Discovery

In the early days, many questioned whether technology-assisted review (TAR) would work for non-English documents. There were a number of reasons for this, but one fear was that TAR only “understood” the English language.

Ironically, that was true in a way for the early days of e-discovery. At the time, most litigation support systems were built for ASCII text. The indexing and search software didn’t understand Asian character combinations and thus couldn’t recognize which characters should be grouped together in order to index them properly. In English (and most other Western languages) we have spaces between words, but there are no such obvious markers in many Asian languages to denote which characters go together to form useful units of meaning (equivalent to English words). Continue reading
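The indexing problem described above can be illustrated with a minimal sketch (a toy, not any particular search engine's implementation): whitespace tokenization works for English, but it treats an unspaced Japanese phrase as a single opaque token, while character n-gram indexing, a common workaround for CJK text, still produces matchable units without a language-aware segmenter.

```python
def whitespace_tokens(text):
    # Works for English and most Western languages, which delimit words with spaces
    return text.split()

def char_ngrams(text, n=2):
    # A common CJK workaround: index overlapping character n-grams so that
    # multi-character "words" can still be matched without a segmenter
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# English splits cleanly into words
print(whitespace_tokens("technology assisted review"))

# Japanese for "document review" has no spaces, so naive tokenization
# yields one undifferentiated token...
print(whitespace_tokens("文書レビュー"))

# ...but bigram indexing still produces searchable units
print(char_ngrams("文書レビュー"))
```

Modern systems typically go further and use statistical or dictionary-based segmenters, but even simple n-gram indexing shows why "understanding" word boundaries matters for search and for TAR feature extraction.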

Was It a Document Dump or a Deficient TAR Process?

That’s the topic of our recent TAR Talk podcast.* We talked about the recent decision by the U.S. District Court for the District of Columbia in In Re Domestic Airline Travel Antitrust Litigation, 2018 WL 4441507 (D.D.C. Sept. 13, 2018), an antitrust class action lawsuit against the four largest commercial airlines in the United States—American Airlines, Delta Air Lines, Southwest Airlines, and United Airlines.

The declarations around this decision prompted much discussion in the e-discovery world, particularly for those using technology-assisted review (TAR) in the review process. The argument was based on United’s core document production. The plaintiffs called it a deficient TAR process and complained that they were forced to review mountains of non-relevant documents (aka, a document dump). Continue reading

Moving Beyond Outbound Productions: Using TAR 2.0 for Knowledge Generation and Protection

Lawyers search for documents for many different reasons. TAR 1.0 systems were primarily used to reduce review costs in outbound productions. As most know, modern TAR 2.0 protocols, which are based on continuous active learning (CAL), can support a wide range of review needs. In our last post, for example, we talked about how TAR 2.0 systems can be used effectively to support investigations.

That isn’t the end of the discussion. There are a lot of ways to use a CAL predictive ranking algorithm to take on other types of document review projects. Here we explore various techniques for implementing a TAR 2.0 review for even more knowledge generation tasks than investigations, including opposing party reviews, depo prep and issue analysis, and privilege QC. Continue reading

How Can I Use TAR 2.0 for Investigations?

Across the legal landscape, lawyers search for documents for many different reasons. TAR 1.0 systems were primarily used to classify large numbers of documents when lawyers were reviewing documents for production. But how can you use TAR for even more document review tasks?

Modern TAR technologies (TAR 2.0, based on the continuous active learning, or CAL, protocol) include the ability to deal with low richness, rolling and small collections, and flexible inputs, in addition to vast improvements in speed. These improvements also allow TAR to be used effectively in many more document review workflows than traditional TAR 1.0 systems. Continue reading
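For readers unfamiliar with the protocol, the core CAL loop can be sketched in a few lines. This is an illustrative toy, not Catalyst's implementation; the `score` function here is a crude stand-in for a real classifier that is retrained as reviewer labels accumulate.

```python
def score(doc, labeled):
    # Toy relevance scorer: word overlap with documents already judged
    # responsive. A real CAL system retrains a statistical classifier
    # on every label collected so far.
    responsive_words = set()
    for d, is_responsive in labeled.items():
        if is_responsive:
            responsive_words.update(d.split())
    return len(responsive_words & set(doc.split()))

def cal_review(documents, label_fn, batch_size=10, budget=100):
    """Minimal continuous active learning loop: rank the unreviewed
    documents, review the top-ranked batch, fold the new labels back
    into the model, and repeat."""
    labeled = {}  # doc -> True/False (responsive?)
    while len(labeled) < budget:
        unreviewed = [d for d in documents if d not in labeled]
        if not unreviewed:
            break
        ranked = sorted(unreviewed, key=lambda d: score(d, labeled), reverse=True)
        for doc in ranked[:batch_size]:
            labeled[doc] = label_fn(doc)  # the human reviewer's judgment
    return labeled

docs = ["contract breach terms", "lunch menu",
        "contract pricing terms", "weather report"]
result = cal_review(docs, lambda d: "contract" in d, batch_size=2, budget=4)
```

Because the ranking is refreshed after every batch, CAL copes naturally with rolling collections: newly loaded documents simply join the unreviewed pool and are scored like any others.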

Five Questions to Ask Your E-Discovery Vendor About CAL

In the aftermath of studies showing that continuous active learning (CAL) is more effective than first-generation technology-assisted review (TAR 1.0) protocols, it seems like every e-discovery vendor is jumping on the bandwagon. At the least, it feels like every e-discovery vendor claims to use CAL or somehow incorporate it into its TAR protocols.

Despite these claims, there remains a wide chasm between the TAR protocols available on the market today. As a TAR consumer, how can you determine whether a vendor that claims to use CAL actually does? Here are five basic questions you can ask your vendor to ensure that your review effectively employs CAL. Continue reading

In AI, No Evolution Without Evaluation

At the recent Legalweek New York AI Bootcamp Workshop, I was reminded of a very small, cheap pocket dictionary that I once bought at a book fair when I was in third grade. One day, while looking up definitions, I came across the entry for “bull.” Bull was defined as “the opposite of cow.” Curious, I looked up “cow.” It was defined as “the opposite of bull.” Neither entry referred to bovines, gender, or any other defining description. Just that bull and cow are each other’s opposites.

At the boot camp—designed to cover the foundation, use cases, and legal considerations to separate the value of AI technology from “the noise”—I learned that “machine learning” is “not expert systems,” and “expert systems” is “not machine learning.” How is this any more helpful than my third-grade dictionary? Continue reading

TAR for Smart Chickens

Special Master Grossman offers a new validation protocol in the Broiler Chicken Antitrust Cases

Validation is one of the more challenging parts of technology-assisted review. We have written about it, and the attendant difficulty of proving recall, several times.

The fundamental question is whether a party using TAR has found a sufficient number of responsive documents to meet its discovery obligations. For reasons discussed in our earlier articles, proving that you have attained a sufficient level of recall to justify stopping the review can be a difficult problem, particularly when richness is low. Continue reading
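To make the recall question concrete, here is a back-of-the-envelope sketch with hypothetical numbers and simple point estimates only; a defensible validation protocol would also report confidence intervals around these figures.

```python
def estimate_responsive(sample_size, sample_hits, collection_size):
    # Point estimate of total responsive documents in the collection,
    # extrapolated from a simple random sample
    return collection_size * sample_hits / sample_size

def recall(found_responsive, total_responsive):
    # Recall = responsive documents found / responsive documents that exist
    return found_responsive / total_responsive

# Hypothetical: a 1,000,000-document collection; a 10,000-document random
# sample turns up 100 responsive documents (1% richness)
total = estimate_responsive(10_000, 100, 1_000_000)  # about 10,000 responsive
print(recall(8_000, total))                          # 0.8, i.e. 80% recall

# The low-richness difficulty: the estimate of `total` rests on only 100
# sampled positives, so its sampling error (and hence the uncertainty in
# the recall figure) is large.
```

This is why proving a given recall level gets harder as richness falls: hitting enough responsive documents in a random sample to pin down the denominator can require very large, and very expensive, control samples.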

Comparing the Effectiveness of TAR 1.0 to TAR 2.0: A Second Simulation Experiment

In a recent blog post, we reported on a technology-assisted review simulation that we conducted to compare the effectiveness and efficiency of a family-based review versus an individual-document review. That post was one of a series here reporting on simulations conducted as part of our TAR Challenge – an invitation to any corporation or law firm to compare its results in an actual litigation against the results that would have been achieved using Catalyst’s advanced TAR 2.0 technology, Insight Predict.

As we explained in that recent blog post, the simulation used actual documents that were previously reviewed in an active litigation. Based on those documents, we conducted two distinct experiments. The first was the family vs. non-family test. In this blog post, we discuss the second experiment, testing a TAR 1.0 review against a TAR 2.0 review. Continue reading