In First for UK, High Court Master Approves Use of TAR

Taking his lead from the seminal U.S. case, Da Silva Moore v. Publicis Groupe, a master of Britain’s High Court of Justice has approved the use of technology assisted review, making this the first U.K. case, and only the second outside the U.S., in which a court has approved TAR.

In a written decision issued Feb. 16, 2016, in the case Pyrrho Investments Ltd. v. MWB Property Ltd., Master Matthews, whose role is similar to that of a magistrate judge in the U.S. federal court system, provided his reasons for approving the parties’ request to use TAR in a case involving some 3.1 million electronic documents.

I considered that the present was a suitable case in which to use, and that it would promote the overriding objective set out in Part 1 of the CPR if I approved the use of, predictive coding software, and I therefore did so. Whether it would be right for approval to be given in other cases will, of course, depend upon the particular circumstances obtaining in them.

Master Matthews issued the written opinion, he explained, because of the “novelty” of the issue in the United Kingdom. The only other court outside the U.S. that has formally approved the use of TAR is the High Court of Ireland. (See our post: The Luck of the Irish: TAR Approved by Irish High Court.)

His opinion is interesting on several levels. Three in particular warrant expanded discussion in this post:

  • The legal authorities he relied on in reaching his decision.

  • The factors that he found to weigh in favor of TAR.

  • The type of TAR he approved.

We’ll discuss each of these.

The Legal Authorities

Master Matthews began his legal analysis by noting the paucity of British case law on TAR. “In England there is not a great deal by way of guidance, and nothing by way of authority, on the use of such software as part of the disclosure process,” he wrote.

In fact, he found only a fleeting reference in a single case to anything that even comes close to TAR. In that case, Goodale v. Ministry of Justice [2009] EWHC B41 (QB), the then Senior Master of the Queen’s Bench Division, Master Whitaker, discussed the problem of the increasing volumes of information at issue in “e-disclosure” (as e-discovery is called in the UK). After laying out the problems with standard forms of search, he wrote:

Indeed, when it comes to review, I am aware of software that will effectively score each document as to its likely relevance and which will enable a prioritization of categories within the entire document set.

So finding nothing to rely on in UK law, Master Matthews turned to U.S. Magistrate Judge Andrew J. Peck’s 2012 decision in Da Silva Moore, which was the first U.S. court decision to approve the use of TAR. Master Matthews quoted extensively from Judge Peck’s decision, and also from U.S. District Judge Andrew L. Carter’s subsequent decision affirming Judge Peck’s opinion. He also cited and quoted from the Irish High Court decision we mentioned above, Irish Bank Resolution Corporation Ltd v. Quinn.

After reviewing these two cases, he summed up:

So far as I am aware, no English court has given a judgment which has considered the use of predictive coding software as part of disclosure in civil procedure. At all events, a search of the BAILII online database for “predictive coding software” returned no hits at all, and for “predictive coding” and “computer-assisted review” only the Irish case referred to above.

He also noted that in both Da Silva Moore and Irish Bank, the courts had approved the use of TAR even though one party had objected. He contrasted that with the case before him, in which the parties stipulated to the use of TAR.

Factors Favoring TAR

With that brief basis in law, Master Matthews went on to detail the factors that he found weighed in favor of approving TAR in this case. He enumerated 10 factors, which we quote directly:

  1. Experience in other jurisdictions, whilst so far limited, has been that predictive coding software can be useful in appropriate cases.

  2. There is no evidence to show that the use of predictive coding software leads to less accurate disclosure being given than, say, manual review alone or keyword searches and manual review combined, and indeed there is some evidence (referred to in the US and Irish cases to which I referred above) to the contrary.

  3. Moreover, there will be greater consistency in using the computer to apply the approach of a senior lawyer towards the initial sample (as refined) to the whole document set, than in using dozens, perhaps hundreds, of lower-grade fee-earners, each seeking independently to apply the relevant criteria in relation to individual documents.

  4. There is nothing in the CPR or Practice Directions to prohibit the use of such software.

  5. The number of electronic documents which must be considered for relevance and possible disclosure in the present case is huge, over 3 million.

  6. The cost of manually searching these documents would be enormous, amounting to several million pounds at least. In my judgment, therefore, a full manual review of each document would be “unreasonable” within paragraph 25 of Practice Direction B to Part 31, at least where a suitable automated alternative exists at lower cost.

  7. The costs of using predictive coding software would depend on various factors, including importantly whether the number of documents is reduced by keyword searches, but the estimates given in this case vary between £181,988 plus monthly hosting costs of £15,717, and £469,049 plus monthly hosting costs of £20,820. This is obviously far less expensive than the full manual alternative, though of course there may be additional costs if manual reviews still need to be carried out when the software has done its best.

  8. The ‘value’ of the claims made in this litigation is in the tens of millions of pounds. In my judgment the estimated costs of using the software are proportionate.

  9. The trial in the present case is not until June 2017, so there would be plenty of time to consider other disclosure methods if for any reason the predictive software route turned out to be unsatisfactory.

  10. The parties have agreed on the use of the software, and also how to use it, subject only to the approval of the Court.

Master Matthews concluded that there were no factors of any weight against the use of TAR.

The Type of TAR Used

While this decision approving TAR brings English law on par with contemporary technology, the same can’t be said for the type of TAR being used in the case. Based on Master Matthews’ description of the TAR process at issue, it appears that the parties are using a first-generation TAR 1.0 process rather than the current TAR 2.0 state of the art.

The decision describes a TAR process that would proceed through these steps:

A representative sample of the document set is used to ‘train’ the software. In the present case, the sample will comprise 1,600-1,800 documents. A senior lawyer involved in the litigation – what we generally call a subject matter expert (SME) – considers and makes a decision for each of the documents in the sample, and each such document is categorized accordingly. “It is essential that the criteria for relevance be consistently applied at this stage,” Master Matthews wrote, “so the best practice would be for a single, senior lawyer who has mastered the issues in the case to consider the whole sample. Where documents would for some reason not be good examples, they should be deselected so that the software does not use them to learn from.”

The software analyses all of the documents for common concepts and language used. Based on the training that the software has received, it then reviews and categorizes each individual document in the whole document set as either relevant or not.
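To make that train-then-categorize step concrete, here is a minimal sketch in Python, using scikit-learn as a stand-in for the predictive coding software (which the opinion does not name). The document texts, labels and relevance cutoff below are invented placeholders; in the actual case the training sample would be the 1,600-1,800 SME-reviewed documents and the full set roughly 3.1 million documents.

# A minimal sketch of "train on the SME's sample, then categorize everything."
# scikit-learn stands in for the unnamed predictive coding software; the
# documents and labels below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sample_texts = ["lease renewal dispute", "office party invitation",
                "rent arrears demand", "weekly cafeteria menu"]
sample_labels = [1, 0, 1, 0]   # the SME's decisions: 1 = relevant, 0 = not relevant
all_texts = ["notice of rent increase", "holiday rota",
             "service charge arrears", "gym membership offer"]

vectorizer = TfidfVectorizer()
model = LogisticRegression(max_iter=1000)
model.fit(vectorizer.fit_transform(sample_texts), sample_labels)

# Score every document in the full set; the score drives the relevant /
# not-relevant categorization the opinion describes.
scores = model.predict_proba(vectorizer.transform(all_texts))[:, 1]
predicted_relevant = scores >= 0.5   # the cutoff here is purely illustrative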

The results of this categorization exercise are then validated through a number of quality assurance exercises based on statistical sampling. “The sampling size will be fixed in advance depending on what confidence level and what margin of error are desired,” the opinion said. “The higher the level of confidence, and the lower the margin of error, the greater the sample must be, the longer it will take and the more it will cost.”
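The trade-off the Master describes between confidence level, margin of error and sample size follows the standard statistical formula for estimating a proportion. A quick sketch, using our own illustrative figures rather than anything from the judgment:

# Sample size for a desired confidence level and margin of error, using the
# standard worst-case formula for a proportion (p = 0.5).
from math import ceil
from statistics import NormalDist

def sample_size(confidence, margin_of_error, population=None):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # about 1.96 for 95%
    n = (z ** 2) * 0.25 / (margin_of_error ** 2)         # p * (1 - p) is at most 0.25
    if population is not None:                           # finite population correction
        n = n / (1 + (n - 1) / population)
    return ceil(n)

print(sample_size(0.95, 0.02))                        # about 2,401 documents
print(sample_size(0.98, 0.01, population=3_100_000))  # tighter targets mean a much larger sample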

The selected samples are reviewed blind by a human for relevance. The software creates a report of its decisions that the human reviewer overturned. The overturns are themselves reviewed by a senior reviewer. Where the human decision is adjudged correct, it is fed back into the system for further learning. (The software analyses the correctly overturned documents just as the originals were analyzed.) Where it is not, the document is removed from the overturns. Where the relevance of the original document was incorrectly assessed at the first stage, that assessment is changed and all the documents depending on it will have to be re-assessed.

The process of sampling is repeated as many times as required to bring the overturns to a level within agreed tolerances, and so as to achieve a stability pattern. This is usually not less than 3, making 4 rounds in total. A lawyer in the case said that this should involve review of some 8 to 12 batches of documents. The trend of overturns should be lower from round to round. Ultimately there will be a final overturn report within the agreed tolerance, so that the expense of further rounds of review will not be justified by the reduced chance of finding further errors, and the list of relevant documents can be produced.
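Schematically, those overturn rounds amount to a control loop like the one sketched below. This is our generic rendering of a TAR 1.0 workflow, not the parties’ actual protocol; the tolerance figure and the placeholder functions for scoring, sampling, blind review, adjudication and retraining are all assumptions.

# A schematic of the overturn/stabilization rounds described above. The
# callables passed in stand in for the software and the human reviewers.

AGREED_TOLERANCE = 0.05   # overturn rate the parties accept; assumed for illustration
MAX_ROUNDS = 10

def run_overturn_rounds(score_documents, draw_sample, blind_review, adjudicate, retrain):
    for round_no in range(1, MAX_ROUNDS + 1):
        calls = score_documents()                    # software's relevant / not-relevant calls
        sample = draw_sample(calls)                  # statistically sized sample of those calls
        upheld_overturns = []
        for doc, software_call in sample:
            human_call = blind_review(doc)           # reviewer does not see the software's call
            if human_call != software_call and adjudicate(doc, human_call):
                upheld_overturns.append((doc, human_call))
        retrain(upheld_overturns)                    # upheld overturns feed further learning
        overturn_rate = len(upheld_overturns) / len(sample)
        if overturn_rate <= AGREED_TOLERANCE:        # within tolerance: stop and produce the list
            return round_no, overturn_rate
    return MAX_ROUNDS, overturn_rate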

“Although the number of documents that have to be manually reviewed in a predictive coding process may be high in absolute numbers, it will be only a small proportion of the total that need to be reviewed in the present case,” Master Matthews concluded. “Thus – whatever the cost per document of manual review – provided that the exercise is large enough to absorb the up-front costs of engaging a suitable technology partner, the costs overall of a predictive coding review should be considerably lower. It will be seen that, because the software has to be trained for every case, each use of the predictive coding process is bespoke for that case.”

Using TAR 2.0 Would Simplify Things

While we are pleased to see other jurisdictions catch up to the TAR revolution, the parties would have found life to be much simpler (and the results far better) had they used an advanced TAR protocol called Continuous Active Learning. Here is why.

With CAL there is no need to dragoon a senior lawyer into: 1) reviewing 500 or so documents in order to create a control set; 2) reviewing another 2,000 to 3,000 documents for training; and 3) reviewing another 500 or so documents for testing and retesting. Rather, the team could start the review immediately, seeding the ranking process with as many relevant documents as they could easily find.

Going further, the CAL process (done properly at least) would not crater if the team found more documents to be reviewed. You could simply add them to the collection and keep reviewing. There would be no need to restart the training process from scratch.
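As a rough illustration, here is a minimal CAL loop, again using scikit-learn as a stand-in ranker. The function names, the batch size and the assumption that the seed set contains at least one relevant and one non-relevant judgment are ours, not any particular vendor’s implementation.

# A minimal continuous active learning loop: rank, review the top of the
# ranking, feed every judgment back in, re-rank, repeat. New documents can be
# appended to the pool at any time without restarting training.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(pool, seed_judgments, review, batch_size=10, max_rounds=100):
    """pool: {doc_id: text}. seed_judgments: {doc_id: 1 or 0}, keys drawn from pool."""
    judgments = dict(seed_judgments)   # this sketch needs at least one 1 and one 0 to start
    for _ in range(max_rounds):
        unreviewed = [d for d in pool if d not in judgments]
        if not unreviewed:
            break
        vec = TfidfVectorizer()
        X = vec.fit_transform([pool[d] for d in judgments])
        model = LogisticRegression(max_iter=1000).fit(X, list(judgments.values()))
        scores = model.predict_proba(vec.transform([pool[d] for d in unreviewed]))[:, 1]
        ranked = [d for _, d in sorted(zip(scores, unreviewed), reverse=True)]
        for doc_id in ranked[:batch_size]:
            judgments[doc_id] = review(doc_id)   # humans review the likeliest-relevant documents
        # documents collected later can simply be added to pool here; training never restarts
    return judgments

In practice you would also add a stopping rule (for example, a run of batches turning up few or no relevant documents) and the validation sample discussed below.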

Going even further, the CAL process has been shown in research involving hundreds of cases to find relevant documents more quickly than the TAR 1.0 processes and at a much lower review cost.

And lastly, with a CAL process you wouldn’t have to try to understand (let alone manage) this process:

The process of sampling is repeated as many times as required to bring the overturns to a level within agreed tolerances, and so as to achieve a stability pattern. This is usually not less than 3, making 4 rounds in total. A lawyer in the case said that this should involve review of some 8 to 12 batches of documents. The trend of overturns should be lower from round to round. Ultimately there will be a final overturn report within the agreed tolerance, so that the expense of further rounds of review will not be justified by the reduced chance of finding further errors, and the list of relevant documents can be produced.

Rather, you could simply take a sample of the unreviewed documents to show that you weren’t overlooking too many relevant documents.
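That final check is often called an elusion sample. A short sketch, with the sample size and review function as our own assumptions:

# Sample the documents the review will not produce, review them, and estimate
# how many relevant documents are being left behind.
import random

def estimate_elusion(unreviewed_ids, review, sample_size=400, seed=1):
    """unreviewed_ids: documents the process will not produce; review: human relevance call."""
    ids = list(unreviewed_ids)
    sample = random.Random(seed).sample(ids, min(sample_size, len(ids)))
    relevant_in_sample = sum(1 for doc_id in sample if review(doc_id))
    elusion_rate = relevant_in_sample / len(sample)
    # Scale the sample rate up for a rough estimate of relevant documents left behind.
    return elusion_rate, round(elusion_rate * len(ids))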

As we have described in numerous posts on this blog as well as in our book, TAR for Smart People, we believe that TAR 2.0 processes using Continuous Active Learning are far preferable to the TAR 1.0 process described in this opinion. TAR 2.0 is simpler to use, takes less time and produces better results.

But we should be thankful for small victories. We’re glad to see that the High Court in England has taken its lead from our courts here in the U.S. and endorsed the use of TAR in litigation. Next time, maybe the parties will take their game to the next level and use an advanced TAR process to find relevant documents. It will certainly make it easier for the court to follow and approve the process.

 


About John Tredennick

A nationally known trial lawyer and longtime litigation partner at Holland & Hart, John founded Catalyst in 2000. Over the past four decades he has written or edited eight books and countless articles on legal technology topics, including two American Bar Association best sellers on using computers in litigation technology, a book (supplemented annually) on deposition techniques and several other widely-read books on legal analytics and technology. He served as Chair of the ABA’s Law Practice Section and edited its flagship magazine for six years. John’s legal and technology acumen has earned him numerous awards including being named by the American Lawyer as one of the top six “E-Discovery Trailblazers,” being named to the FastCase 50 as a legal visionary and being named one of the “Top 100 Global Technology Leaders” by London Citytech magazine. He has also been named the Ernst & Young Entrepreneur of the Year for Technology in the Rocky Mountain Region, and Top Technology Entrepreneur by the Colorado Software and Internet Association. John regularly speaks on legal technology to audiences across the globe. In his spare time, you will find him competing on the national equestrian show jumping circuit or playing drums and singing in a classic rock jam band.


About Bob Ambrogi

Bob is known internationally for his expertise in the Internet and legal technology. He held the top editorial positions at the two leading national U.S. legal newspapers, the National Law Journal and Lawyers USA. A long-time advisor to Catalyst, Bob now divides his time between law practice and media consulting. He writes two blogs, LawSites and MediaLaw, co-authors Law.com's Legal Blog Watch, and co-hosts the weekly legal-affairs podcast Lawyer2Lawyer. A 1980 graduate of Boston College Law School, Bob is a life member of the Massachusetts Bar Foundation and an active member of the Massachusetts Bar Association, which honored him in 1994 with its President's Award.