Author Archives: John Tredennick


About John Tredennick

A nationally known trial lawyer and longtime litigation partner at Holland & Hart, John founded Catalyst in 2000. Over the past four decades he has written or edited eight books and countless articles on legal technology topics, including two American Bar Association best sellers on using computers in litigation, a book (supplemented annually) on deposition techniques and several other widely read books on legal analytics and technology. He served as Chair of the ABA’s Law Practice Section and edited its flagship magazine for six years. John’s legal and technology acumen has earned him numerous awards, including being named one of the top six “E-Discovery Trailblazers” by the American Lawyer, named to the FastCase 50 as a legal visionary and named one of the “Top 100 Global Technology Leaders” by London Citytech magazine. He has also been named the Ernst & Young Entrepreneur of the Year for Technology in the Rocky Mountain Region and Top Technology Entrepreneur by the Colorado Software and Internet Association. John regularly speaks on legal technology to audiences across the globe. In his spare time, you will find him competing on the national equestrian show jumping circuit or playing drums and singing in a classic rock jam band.

Moving Beyond Outbound Productions: Using TAR 2.0 for Knowledge Generation and Protection

Lawyers search for documents for many different reasons. TAR 1.0 systems were used primarily to reduce review costs in outbound productions. As most know, modern TAR 2.0 protocols, which are based on continuous active learning (CAL), can support a wide range of review needs. In our last post, for example, we talked about how TAR 2.0 systems can be used effectively to support investigations.

That isn’t the end of the discussion. There are many ways to use a CAL predictive ranking algorithm to take on other types of document review projects. Here we explore techniques for implementing a TAR 2.0 review for knowledge generation tasks beyond investigations, including opposing party reviews, depo prep and issue analysis, and privilege QC. Continue reading

Five Questions to Ask Your E-Discovery Vendor About CAL

In the aftermath of studies showing that continuous active learning (CAL) is more effective than the first-generation technology assisted review (TAR 1.0) protocols, it seems like every e-discovery vendor is jumping on the bandwagon. At the least, nearly every vendor now claims to use CAL or somehow incorporate it into its TAR protocols.

Despite these claims, there remains a wide chasm between the TAR protocols available on the market today. As a TAR consumer, how can you determine whether a vendor that claims to use CAL actually does? Here are five basic questions you can ask your vendor to ensure that your review effectively employs CAL. Continue reading

Predict Proves Effective Even With High Richness Collection

Finds 94% of the Relevant Documents Despite Review Criteria Changes

Our client, a major oil and gas company, was hit with a federal investigation into alleged price fixing. The claim was that several of the drilling companies had conspired through various pricing signals to keep interest owner fees from rising with the market. The regulators believed they would find the evidence in the documents.

The request to produce was broad, even for this three-letter agency. Our client would have to review over 2 million documents. And the deadline to respond was short, just four months to get the job done. Continue reading

57 Ways to Leave Your (Linear) Lover

A Case Study on Using Catalyst’s Insight Predict to Find Relevant Documents Without SME Training

A Big Four accounting firm with offices in Tokyo recently asked Catalyst to demonstrate the effectiveness of Insight Predict, technology assisted review (TAR) based on continuous active learning (CAL), on a Japanese language investigation. They gave us a test population of about 5,000 documents that had already been tagged for relevance. Their linear review had turned up only 55 relevant documents.

We offered to run a free simulation designed to show how quickly Predict would have found those same relevant documents. The simulation would be blind (Predict would not know how the documents were tagged until it presented its ranked list). That way we could simulate an actual Predict review using CAL. Continue reading

Using TAR Across Borders: Myths & Facts

As the world gets smaller, legal and regulatory compliance matters increasingly encompass documents in multiple languages. Many legal teams involved in cross-border matters, however, still hesitate to use technology assisted review (TAR), questioning its effectiveness and its ability to handle non-English document collections. They perceive TAR as a process that involves “understanding” documents. If the documents are in a language the system does not understand, they reason, then TAR cannot be effective.

The fact is that, done properly, TAR can be just as effective for non-English documents as it is for English ones. This is true even for complex Asian languages such as Chinese, Japanese and Korean (CJK). Although these languages do not use standard English-language delimiters such as spaces and punctuation, they are nonetheless candidates for the successful use of TAR. Continue reading

The Importance of Contextual Diversity in Technology Assisted Review

How do you know what you don’t know? This is a classic problem when searching a large volume of documents in litigation or an investigation.

In a technology assisted review (TAR), a key concern for some is whether the algorithm has missed important relevant documents, especially those that you may know nothing about at the outset of the review. This is because most modern TAR systems focus exclusively on relevance feedback, which means that the system feeds you the unreviewed documents that are likely to be the most relevant because they are most like what you have already coded as relevant. In other words, what is highly ranked depends on the documents that were tagged previously. Continue reading
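The relevance-feedback loop described above can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions (bag-of-words cosine similarity), not Catalyst’s actual ranking engine; its point is that the next documents served up are, by construction, the ones most similar to what has already been tagged relevant.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_by_relevance_feedback(unreviewed, coded_relevant):
    """Rank unreviewed docs by similarity to documents already coded relevant.
    What rises to the top depends entirely on the prior tags."""
    profile = Counter()
    for doc in coded_relevant:  # pool the terms of everything tagged relevant
        profile.update(doc.split())
    scored = [(cosine(Counter(d.split()), profile), d) for d in unreviewed]
    return [d for _, d in sorted(scored, reverse=True)]

tagged_relevant = ["price fixing conspiracy email", "fixing fees below market"]
pool = ["lunch schedule memo", "email about price fixing fees", "hr policy update"]
print(rank_by_relevance_feedback(pool, tagged_relevant)[0])  # "email about price fixing fees"
```

A contextual-diversity component would add a second scorer that rewards documents unlike anything yet reviewed, so pockets of unknown material are not starved of attention.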

Legal Holds for Smart Teams: Tips for IT Professionals Working with Legal to Preserve Company Data

As I recently wrote about in Law360, when litigation or a government investigation looms, a corporation has a duty to identify and preserve data (documents or other electronically-stored information) that may be relevant to the matter. This requirement, imposed by the courts as well as government regulators, is known as a “legal hold” or sometimes a “litigation hold.” It stems from the duty not to destroy relevant evidence that may be required for a judicial proceeding.

Increasingly, courts require legal departments and their outside counsel to supervise the preservation process and to certify that reasonable steps were taken. In most cases, lawyers must rely heavily on Information Technology (IT) professionals to execute the mechanics of the hold and ensure data is preserved correctly. After all, IT knows and works with the company’s systems and networks. And if legal preservation obligations aren’t met properly, penalties for that failure can be substantial. Continue reading

How to Get More Miles Per Gallon Out of Your Next Document Review

How many miles per gallon can I get using Insight Predict, Catalyst’s technology assisted review platform, which is based on continuous active learning (CAL)? And how does that fuel efficiency rating compare to what I might get driving a keyword search model?

While our clients don’t always use these automotive terms, this is a key question we are often asked. How does CAL review efficiency compare to the review efficiency I have gotten using keyword search? Put another way, how many non-relevant documents will I have to look at to complete my review using CAL versus the number of false hits that will likely come back from keyword searches? Continue reading
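The mileage metaphor can be made concrete as documents reviewed per relevant document found. A minimal sketch with hypothetical numbers (none of these figures come from an actual matter):

```python
def docs_per_relevant(reviewed_total, relevant_found):
    """Documents reviewed per relevant document found -- lower is better.
    Its inverse plays the role of 'miles per gallon' for a review."""
    return reviewed_total / relevant_found

# Hypothetical: both approaches find the same 10,000 relevant documents,
# but keyword search makes the team read through far more false hits.
cal_cost = docs_per_relevant(reviewed_total=20_000, relevant_found=10_000)
keyword_cost = docs_per_relevant(reviewed_total=60_000, relevant_found=10_000)
print(cal_cost, keyword_cost)  # 2.0 6.0 -- three times the reading per relevant doc
```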

TAR for Smart Chickens

Special Master Grossman offers a new validation protocol in the Broiler Chicken Antitrust Cases

Validation is one of the more challenging parts of technology assisted review. We have written about it, and about the attendant difficulty of proving recall, several times.

The fundamental question is whether a party using TAR has found a sufficient number of responsive documents to meet its discovery obligations. For reasons discussed in our earlier articles, proving that you have attained a sufficient level of recall to justify stopping the review can be a difficult problem, particularly when richness is low. Continue reading
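One standard way to check a stopping decision is to draw a random sample from the unreviewed discard pile and extrapolate a recall estimate. The sketch below, with hypothetical numbers, shows the point estimate and hints at why low richness makes the proof hard: a sample that surfaces only a couple of relevant documents leaves a very wide confidence interval around that estimate.

```python
def estimated_recall(found_relevant, sample_size, sample_relevant, discard_count):
    """Point estimate of recall from a random sample of the discard pile.
    Relevant documents remaining are extrapolated from the sample rate."""
    est_remaining = discard_count * (sample_relevant / sample_size)
    return found_relevant / (found_relevant + est_remaining)

# Hypothetical: 9,000 relevant documents found; a 1,000-document sample of
# the 500,000 unreviewed documents turns up just 2 relevant ones.
r = estimated_recall(9_000, 1_000, 2, 500_000)
print(round(r, 3))  # 0.9, but resting on only 2 sample hits
```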

Review Efficiency Using Insight Predict

An Initial Case Study

Much of the discussion around Technology Assisted Review (TAR) focuses on “recall,” which is the percentage of the relevant documents found in the review process. Recall is important because lawyers have a duty to take reasonable (and proportionate) steps to produce responsive documents. Indeed, Rule 26(g) of the Federal Rules effectively requires that an attorney certify, after reasonable inquiry, that discovery responses and any associated production are reasonable and proportionate under the totality of the circumstances.
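Recall itself reduces to a simple ratio. A minimal sketch with hypothetical figures:

```python
def recall(relevant_found, total_relevant):
    """Recall: the share of all relevant documents that the review found."""
    return relevant_found / total_relevant

# Hypothetical: the collection holds 10,000 relevant documents and the
# review surfaced 7,500 of them.
print(recall(7_500, 10_000))  # 0.75
```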

In that regard, achieving a recall rate of less than 50% seems neither reasonable nor, in most cases, proportionate. Current TAR decisions suggest that reaching 75% recall is likely reasonable, especially given the potential cost to find additional relevant documents. Higher recall rates, 80% or above, would seem reasonable in almost every case. Continue reading