One of the less appreciated aspects of the UK Post Office Horizon scandal is that it was made possible by a legal change in 1999 on the admissibility of computer evidence. That change was made following a flawed Law Commission analysis which showed two things:
- how poorly lawyers understand technical evidence, especially software and digital evidence generally;
- the arrogance of those who do not know what they do not know.
It was an ignorant arrogance not confined to the Law Commission. Its recommendation effectively reversed the burden of proof and made it practically impossible for a defendant to prove that computer-based evidence was not accurate.
MPs who passed the relevant law treated this change with a frivolity which would be mildly amusing were it not for its baleful consequences. They made the mistake of thinking that because computer technology had become more complicated, and therefore harder to prove reliable, the answer was not to bother at all because doing so would be “impractical”.
The idea that it was precisely this complexity which made it imperative to find a way of ensuring, and proving, that such evidence could be relied on did not – apparently – occur to them.
The relevant Minister, Paul Boateng (a solicitor), treated it as a trivial change, remarking that eight-year-old children were the only ones who understood computers. There was simply no understanding of how complex computer systems operate, of the importance of their reliability, or of the vital necessity for a whole range of people and groups to be able to rely on and trust them.
That was then. Now – more than two decades later, with the knowledge of the Post Office Horizon scandal (and others) arising because of this change – is the government going to look at this again?
No. In 2022 the Ministry of Justice said there were no plans to review this (despite a paper having been prepared in 2020 by lawyers and IT professionals setting out detailed proposals for how the law on computer evidence might be reformed).
Why? The relevant Minister has stated that the presumption that what a computer says is accurate “has wide application”. This repeats the error made in 1999. Then it was “impractical” to expect people to prove their evidence was reliable because this was too difficult. Now it is too much effort to review the presumption because it is used so widely; again, it is all too difficult.
But it is precisely because it has wide application that it is imperative the law catches up with the world as it is now and is based on a proper understanding of IT. We are in a digital world which is only going to become more so. Laws which do not reflect that and which are based on ignorance (or laziness) are unpardonable.
This is not an issue which only matters when someone is prosecuted. The reliability of computers matters to customers: of banks, insurance companies, pension companies, investment firms. It matters to everyone operating in the financial sector. It matters in relation to monitoring. It matters in relation to disclosure and discovery. It matters to customers and regulators of entities in a wide range of sectors. It matters to patients. It is hard to think of any commercial sector or activity where it does not matter. And it is only going to become more important with the increasing use of AI.
If the law says that you do not need to prove, in a court battle, that your systems are working properly, this risks undermining the need to do so rather than reinforcing it. If your customers and counterparties assume that your systems are reliable and they turn out not to be, how will they react on being told that, too bad, the law presumes they are and it is up to them to prove otherwise?
Having a law which is at odds with how society works today, which has the potential for prejudicial consequences and which undermines the trust essential to the working world is absurd. And it will only become more absurd as AI technology and its use expand.
Directors of companies have a duty to “promote the success of the company”. Directors of regulated companies have additional duties, particularly in relation to the technology and systems they use and the information which is captured, stored and derived from them. They need to be able to show their regulators – and other stakeholders – that their systems work effectively and are reliable, and that the information coming from them can be trusted.
The government needs to look at this again. And this time it should involve and listen to IT professionals and others who really understand how computer systems work and how they can go wrong. It could also do with listening to those who understand the consequences of such failures.
Fortunately, the Law Commission is now led by someone who certainly does understand this. Mr Justice Fraser, the judge who decided the Bates v Post Office litigation which blew this scandal open, and who was less than impressed with the Post Office’s factual statements about Horizon, is its new Chair. Time for the government to stop finding excuses for inaction.