Author Archives: John Tredennick


About John Tredennick

A nationally known trial lawyer and longtime litigation partner at Holland & Hart, John founded Catalyst in 2000 and is responsible for its overall direction, voice and vision.

Well before founding Catalyst, John was a pioneer in the field of legal technology. He was editor-in-chief of the multi-author, two-book series, Winning With Computers: Trial Practice in the Twenty-First Century (ABA Press 1990, 1991). Both were ABA best sellers focusing on the use of computers in litigation. At the same time, he wrote How to Prepare for, Take and Use a Deposition at Trial (James Publishing 1990), which he and his co-author continued to supplement for several years. He also wrote Lawyer's Guide to Spreadsheets (Glasser Publishing 2000) and Lawyer's Guide to Microsoft Excel 2007 (ABA Press 2009).

John has been widely honored for his achievements. In 2013, the American Lawyer named him one of the top six "E-Discovery Trailblazers" in its special issue on the "Top Fifty Big Law Innovators" of the past fifty years. In 2012, he was named to the FastCase 50, which recognizes the smartest, most courageous innovators, techies, visionaries and leaders in the law. London's CityTech magazine named him one of the "Top 100 Global Technology Leaders." In 2009, he was named the Ernst & Young Entrepreneur of the Year for Technology in the Rocky Mountain Region. Also in 2009, he was named the Top Technology Entrepreneur by the Colorado Software and Internet Association.

John is the former chair of the ABA's Law Practice Management Section. For many years, he was editor-in-chief of the ABA's Law Practice Management magazine, a monthly publication focusing on legal technology and law office management. More recently, he founded and edited Law Practice Today, a monthly ABA webzine that focuses on legal technology and management. Over two decades, John has written scores of articles on legal technology and has spoken on the subject to audiences on four continents.
In his spare time, you will find him competing on the national equestrian show jumping circuit.

Legal Holds for Smart People: Part 1 – What Is A Legal Hold?

Our judicial system is firmly rooted in the belief that parties to litigation should share documents and other information prior to trial. In support of that proposition, each party has a duty to identify, locate and preserve information and other evidence that is relevant to that specific litigation. The purpose is to avoid the intentional or inadvertent destruction ("spoliation") of relevant evidence that might be used at trial.

The key point to understand is that this duty to preserve evidence may arise even before suit is filed or the information is otherwise requested. In 2003, a federal court judge set out the rule for what has become known as a “legal hold.”

“Once a party reasonably anticipates litigation, it must suspend its routine document retention/destruction policy and put in place a ‘litigation hold.’” Continue reading

Deep Learning in E-Discovery: Moving Past the Hype

Deep learning. The term seems to be ubiquitous these days, turning up everywhere from self-driving cars and speech transcription to victories in the game of Go and cancer diagnosis. If we measure things by press coverage, deep learning seems poised to make every other form of machine learning obsolete.

Recently, Catalyst's founder and CEO John Tredennick interviewed Catalyst's chief scientist, Dr. Jeremy Pickens (whom we at Catalyst call Dr. J), about how deep learning works and how it might be applied in the legal arena.

JT: Good afternoon Dr. J. I have been reading about deep learning and would like to know more about how it works and what it might offer the legal profession. Continue reading

How Many Documents in a Gigabyte? Our Latest Analysis Shows A Shifting Pattern

Since 2011, I have been sampling our document repository and reporting about file sizes in my "How Many Docs in a Gigabyte" series of posts here. I started writing about the subject because we were seeing a large discrepancy between the number of files per gigabyte we stored and the number considered to be standard by our industry colleagues. Indeed, in 2011, I reported that we were finding far fewer documents per GB (2,500) than was generally thought to be the industry norm, which ranged from 5,000 to 15,000. Continue reading
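The arithmetic behind a docs-per-gigabyte figure is straightforward. Here is a minimal sketch, using the 2,500 docs/GB figure quoted above purely for illustration (the sample sizes below are made up, not from the actual analysis):

```python
# Illustrative docs-per-gigabyte arithmetic; the real analysis
# samples a large document repository.

GIGABYTE = 2 ** 30  # 1,073,741,824 bytes

def docs_per_gb(total_bytes, doc_count):
    """Documents per gigabyte for a sampled collection."""
    return doc_count / (total_bytes / GIGABYTE)

# A hypothetical 10 GB sample holding 25,000 files works out to
# 2,500 docs/GB -- the 2011 figure reported above, well below
# the 5,000-15,000 range often cited as the industry norm.
sample_bytes = 10 * GIGABYTE
sample_docs = 25_000
print(round(docs_per_gb(sample_bytes, sample_docs)))  # 2500
```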

Catalyst’s Report from TREC 2016: ‘We Don’t Need No Stinkin Training’

One of the bigger, and still enduring, debates among Technology Assisted Review experts revolves around the method and amount of training you need to get optimal[1] results from your TAR algorithm. Over the years, experts have prescribed a variety of approaches, including:

  1. Random Only: Have a subject matter expert (SME), typically a senior lawyer, review and judge several thousand randomly selected documents.
  2. Active Learning: Have the SME review several thousand marginally relevant documents chosen by the computer to assist in the training.
  3. Mixed TAR 1.0 Approach: Have the SME review and judge a mix of randomly selected documents, some found through keyword search and others selected by the algorithm to help it find the boundary between relevant and non-relevant documents.
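The "Active Learning" step above can be sketched in a few lines: the computer repeatedly selects the documents whose predicted relevance is closest to the decision boundary and routes them to the reviewer. This is a toy illustration of the general technique, not any vendor's actual algorithm:

```python
# Toy uncertainty-sampling sketch: pick the documents whose
# relevance scores sit nearest the 0.5 decision boundary --
# the "marginally relevant" documents described in step 2.
# Illustrative only, not a production TAR implementation.

def select_for_review(scores, batch_size=3):
    """Return indices of the most uncertain documents,
    i.e. those with scores closest to 0.5."""
    ranked = sorted(range(len(scores)), key=lambda i: abs(scores[i] - 0.5))
    return ranked[:batch_size]

# Scores from a (hypothetical) classifier over six documents:
scores = [0.95, 0.51, 0.10, 0.48, 0.85, 0.55]
print(select_for_review(scores))  # [1, 3, 5]
```

After the reviewer judges each batch, the classifier is retrained and the selection repeats, so training effort concentrates on the documents the algorithm is least sure about.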

Continue reading

Ask Catalyst: If I Use Outside Docs to Train the TAR Algorithm, Do I Risk Exposing Them to My Opponent?

[This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]

We received this question:

Does using documents from other matters to gain intelligence [train the algorithm] run the risk of exposing that data if opposing counsel requests the training set?

Today’s question is answered by John Tredennick, founder and CEO, and Thomas Gricks, managing director of professional services.

Continue reading

Ask Catalyst (Video Edition): How Does TAR Work and Why Does It Matter?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

This week's question:

How does technology assisted review work and why does TAR matter for legal professionals?

In a special video edition of Ask Catalyst, today’s question is answered by John Tredennick, founder and CEO.

Continue reading

Ask Catalyst: What is the Difference Between TAR 1.0 and TAR 2.0?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

This week’s question:


Your blog and website often refer to “TAR 1.0” and “TAR 2.0.” While I understand the general concept of technology assisted review, I am not clear what you mean by the 1.0 and 2.0 labels. Can you explain the difference?

Today’s question is answered by John Tredennick, founder and CEO.

Continue reading

Ask Catalyst: In TAR, What Is Validation And Why Is It Important?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

This week's question:

In technology assisted review, what is validation and why is it important?

Today’s question is answered by John Tredennick, founder and CEO.

Validation is the "act of confirming that a process has achieved its intended purpose."[1] It is important to TAR for several reasons, including the need to ensure that the TAR algorithm has worked properly and the fact that Rule 26(g) requires counsel to certify that the process they used for producing discovery documents was reasonable and reasonably effective.[2] While courts have approved validation methods in specific cases,[3] no court has yet purported to set forth specific validation standards applicable to all cases or to all TAR review projects. Continue reading
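One of the most common validation measures is recall: the fraction of all relevant documents that the review actually found. A minimal sketch of the calculation, with invented counts (in practice the "missed" count is estimated by sampling the unreviewed or discarded documents):

```python
# Recall = relevant documents found / all relevant documents.
# Counts below are hypothetical; real validation estimates the
# missed count by sampling, with attendant margins of error.

def recall(relevant_found, relevant_missed):
    """Fraction of all relevant documents that the review found."""
    total = relevant_found + relevant_missed
    return relevant_found / total if total else 0.0

# If review produced 8,600 relevant documents and sampling suggests
# 1,400 relevant documents were left behind:
print(round(recall(8_600, 1_400), 2))  # 0.86
```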

Ask Catalyst: Is Recall A Fair Measure Of The Validity Of A Production Response?

[Editor’s note: This is another post in our “Ask Catalyst” series, in which we answer your questions about e-discovery search and review. To learn more and submit your own question, go here.]  

This week's question:

Is recall a fair measure of the validity of a production response?

Today’s question is answered by John Tredennick, founder and CEO. Continue reading

A Discussion About Dynamo Holdings: Is 43% Recall Enough?

In September 2014, Judge Ronald L. Buch became the first judge to approve the use of technology assisted review (aka predictive coding) in the U.S. Tax Court. See Dynamo Holdings Limited Partnership v. Commissioner of Internal Revenue, 143 T.C. No. 9. We mentioned it here.

This summer, Judge Buch issued a follow-on order addressing the IRS commissioner's objections to the outcome of the TAR process, which we chronicled here. In that opinion, he affirmed the petitioner's TAR process and rejected the commissioner's challenge that the production was not adequate. In doing so, the judge debunked what he called the two myths of review: that human review is the "gold standard," and that any discovery response is or can be perfect. Continue reading