By Tony Collins
Earlier this month the pilots of a Boeing 787 “Dreamliner” carrying 249 passengers aborted a landing at Okayama airport in Japan when the wheels failed to deploy automatically. The pilots circled and deployed the landing gear manually.
A year ago pilots of an Airbus A380, the world’s largest passenger plane, made an emergency landing at Singapore on landing gear that they deployed using gravity, the so-called “gravity drop emergency extension system”.
In both emergencies the contingencies worked. The planes landed safely and nobody was hurt.
Five years earlier, during tests, part of the landing gear on a pre-operational A380 initially failed to drop down under gravity.
The Teflon solution
The problem was solved, thanks in part to the use of Teflon paint (see below). Eventually the A380 was certified to carry 853 passengers.
Those who fly sometimes owe their lives to the proven and certified backup arrangements on civil aircraft. Compare this safety culture to the improvised and sometimes improvident way some health IT systems are tested and deployed.
Hospital boards routinely order the installation of information systems without proven backup arrangements or certification. Health IT lacks the mandatory standards that underpin air safety.
When an airliner crashes investigators launch a formal inquiry and publish their findings. Improvements usually follow, if they haven’t already, which is one reason flying is so safe today.
Shutters come down when health IT fails
When health IT implementations go wrong the effect on patients is unknown. Barts and The London NHS Trust, the Royal Free Hampstead, the Nuffield Orthopaedic Centre, Barnet and Chase Farm Hospitals NHS Trust and other trusts had failed go-lives of NPfIT patient administration systems. They have not published reports on the consequences of the incidents, and have no statutory duty to do so.
Instead of improvements triggered by a public report there may, in health IT, be an instinctive and systemic cover-up, one that is entirely within the law. Why would hospitals own up to the seriousness of incidents brought about by IT-related confusion or chaos? And, on the advice of their lawyers, suppliers are unlikely to admit to weaknesses in their systems after pervasive problems.
Supplier “hold harmless” clauses
Indeed a “hold harmless” clause is said to be common in contracts between electronic health record companies and healthcare provider organisations. The clause shifts liability to the users of EHRs: users become liable for adverse patient consequences that result from the use of clinical software, even if the software contains errors.
That said, the supplier’s software will have been configured locally, and it is those modifications that might have caused or contributed to incidents.
Done well, health IT implementations can improve the care and safety of patients. But after the go-live of a patient administration system Barts and The London NHS Trust lost track of thousands of patient appointments and had no idea how many were in breach of the 18-week limit for treatment after being referred by a GP. There were also delays in appointments for cancer checks.
At the Royal Free Hampstead staff found they had to cope with system crashes, delays in booking patient appointments, data missing in records and extra costs.
And an independent study of the Summary Care Records scheme by Trisha Greenhalgh and her team found that electronic records can omit allergies and potentially dangerous reactions to certain combinations of drugs. Her report also found that the SCR database:
– Omitted some medications
– Listed ‘current’ medication the patient was not taking
– Included allergies or adverse reactions which the patient probably did not have
Electronic health records can also record wrong dosages of drugs, or the wrong drugs, or fail to provide an alert when clinical staff have come to wrongly rely on such an alert.
A study in 2005 found that Computerized Physician Order Entry systems, which were widely viewed as a way of reducing prescribing errors, could lead to doses being prescribed at double the correct level.
One problem with health IT in hospitals is that computer systems are introduced alongside paper, with neither serving as a single source of truth. This can cause mistakes analogous to those in early air crashes, which were caused not by technology alone but by pilots not fully understanding how the systems worked and not recognising the signs and effects of systems failing to work as intended.
In air crashes the lessons are learned the hard way. In health IT the lessons from failed implementations will be learned by committed professionals. But what happens when a hospital boss is overly ambitious, is bowled over by unproven technology and is cajoled into a premature go-live?
In 2011, indeed in the past few months, headlines in the trade press have continued to flow when a hospital’s patient information system goes live, or when a trust reaches a critical mass of Summary Care Record uploads of patient records (although some of the SCR records may or may not be accurate, and may or may not be correctly updated).
What we won’t see are headlines on any serious or even tragic consequences of the implementations. A BBC File on 4 documentary this month explained how hospital mistakes are unlikely to be exposed by coroners or inquests.
So hospital board chief executives can order new and large-scale IT systems without the fear of any tragic failure of those implementations being exposed, investigated and publicly reported on. The risk lies with the patient. Certification and regulation of health IT systems would reduce that risk.
Should health IT systems be tested as well as the A380’s landing gear? – those tests in detail
Qantas Flight 32 was carrying 466 passengers when an engine exploded. The Airbus A380 made an emergency landing at Singapore Changi Airport on 4 November 2010. The failure was the first of its kind for the four-engined A380.
Shrapnel from the exploding engine punctured part of the wing and damaged some of the systems, equipment and controls. Pilots deployed the landing gear manually, using gravity – and it worked well. Despite many technical problems the plane landed safely.
Five years earlier, tests of a manual deployment of the A380’s landing gear had initially failed. It happened in a test hangar, more than a year before the A380 received regulatory approval to carry 853 passengers.
The story of the landing gear tests is told by Channel 4 as part of a well-made documentary, “Giant of the Skies” on the construction and assembly of the A380. Against a backdrop of delays and a budget overspend, Airbus engineers must show that if power is lost the wheels will come down on their own, using gravity.
The film shows an Airbus A380 suspended in a hangar while the undercarriage test gets underway. The undercarriage doors open under their own weight and a few seconds later the locks that hold up the outer wheels release. Two massive outer sets of four wheels each fall down through a 90-degree arc. Something goes wrong. At about 45 degrees, one of the Michelin tyres catches on an undercarriage door which looks as if it has not opened as fully as it would have if powered electrically. Only after 16 seconds does the jammed wheel set slip free. Engineers watching the test look mortified.
Simon Sanders, head of landing gear design at Airbus, tells Channel 4: “We need to understand and find a solution.”
An engineer smeared some grease (Aeroshell 70022 from Shell, Houston) on a guide ramp where the A380’s wheels are supposed to push the door open in an emergency loss of power. This worked and the test was successful: the left-side outer landing gear doors opened under their own weight; a few seconds later the wheels fell down, also under their own weight, and this time the tyre that had jammed earlier hit the grease on the door and slid down without any delay. But a permanent solution was needed.
A month later Airbus repeated the undercarriage “gravity” test. Sanders told Channel 4: “We have applied a layer of Teflon paint which is similar to the Teflon coating you have on non-stick frying pans. This will reduce the friction when we do the free-fall [of the undercarriage]. We are now going to perform the test to demonstrate that with this low-friction Teflon coating we have solved the problem.”
This time the A380’s chief test pilot Gérard Desbois was watching. If the wheels got stuck again Desbois could refuse to accept the aircraft for its first test flight, which would mean a further delay.
The loss-of-power test began. The outer landing gear doors opened … the wheels fell down under their own weight … and jammed again. This time they freed themselves more quickly than before. After some hesitation Desbois accepted the aircraft on the basis that if power were lost and the left outer landing wheels took a little longer to come down than the right outer set, this would not be a problem. The gravity free-fall backup system was further refined before the A380 went into service.
The importance of the tests was shown in 2010 when an exploding Rolls-Royce Trent 900 engine on an Airbus A380, Qantas Flight 32 from Singapore to Sydney, caused damage to various aircraft systems including the landing gear computer which stopped working. The pilots had to deploy the landing gear manually. The official incident report shows that all of the A380’s 22 wheels deployed fully under the gravity extension backup arrangements.
If a hospital board had been overseeing the A380’s tests back in 2005, would directors have taken the view that the test was very nearly successful, so the undercarriage could be left to prove itself in service?
For the test engineers at Airbus, safety was not a matter of choice but of regulation and certification. It is a pity that the deployment of health IT, which can affect the safety of patients, is not a matter of regulation or certification.
Giant of the Skies – Channel 4 documentary on manufacture and testing of the Airbus A380