Why hospital IT needs airline safety culture

By Tony Collins

Earlier this month the pilots of a Boeing 787 “Dreamliner” carrying 249 passengers aborted a landing at Okayama airport in Japan when the wheels failed to deploy automatically. The pilots circled and deployed the landing gear manually.

A year ago pilots of an Airbus A380, the world’s largest passenger plane, made an emergency landing at Singapore on landing gear that they deployed using gravity, the so-called “gravity drop emergency extension system”.

In both emergencies the backup arrangements worked. The planes landed safely and nobody was hurt.

Five years earlier, during tests, part of the landing gear on a pre-operational A380 failed initially to drop down using gravity.

The Teflon solution

The problem was solved, thanks in part to the use of Teflon paint (see below). Eventually the A380 was certified to carry 853 passengers.

Those who fly sometimes owe their lives to the proven and certified backup arrangements on civil aircraft. Compare this safety culture to the improvised and sometimes improvident way some health IT systems are tested and deployed.

Hospital boards routinely order the installation of information systems without proven backup arrangements or certification. Absent in health IT are the mandatory standards that underpin air safety.

When an airliner crashes investigators launch a formal inquiry and publish their findings. Improvements usually follow, if they haven’t already, which is one reason flying is so safe today.

Shutters come down when health IT fails 

When health IT implementations go wrong the effect on patients is unknown. Barts and The London NHS Trust, the Royal Free Hampstead, the Nuffield Orthopaedic Centre, Barnet and Chase Farm Hospitals NHS Trust and other trusts had failed go-lives of NPfIT patient administration systems. They have not published reports on the consequences of the incidents, and have no statutory duty to do so.

Instead of improvements triggered by a public report there may, in health IT, be an instinctive and systemic cover-up, which is within the law. Why would hospitals own up to the seriousness of any incidents brought about by IT-related confusion or chaos? And, on the advice of their lawyers, suppliers are unlikely to own up to weaknesses in their systems after pervasive problems.

Supplier “hold harmless” clauses

Indeed a “hold harmless” clause is said to be common in contracts between electronic health record companies and healthcare provider organisations. The clause shifts liability to the users of EHRs: users are liable for adverse patient consequences that result from the use of clinical software, even if the software contains errors.

That said, the supplier’s software will have been configured locally; and it is those local modifications that might have caused or contributed to incidents.

Done well, health IT implementations can improve the care and safety of patients. But after the go-live of a patient administration system Barts and The London NHS Trust lost track of thousands of patient appointments and had no idea how many were in breach of the 18-week limit for treatment after being referred by a GP. There were also delays in appointments for cancer checks.

At the Royal Free Hampstead staff found they had to cope with system crashes, delays in booking patient appointments, data missing in records and extra costs.

And an independent study of the Summary Care Records scheme by Trisha Greenhalgh and her team found that electronic records can omit allergies and potentially dangerous reactions to certain combinations of drugs. Her report also found that the SCR database:

–  Omitted some medications

–  Listed ‘current’ medication the patient was not taking

–  Included allergies or adverse reactions which the patient probably did not have
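Findings like these are, in essence, reconciliation failures between the source record and its summary copy. A minimal sketch of the kind of automated cross-check a certification regime could require is below; the function name, field shapes and drug names are illustrative assumptions, not the real SCR data model.

```python
# Hypothetical sketch: cross-checking a summary record against the source
# GP record to surface the discrepancy classes listed above. All names
# here are illustrative assumptions, not the real SCR data model.

def reconcile(gp_meds: set, scr_meds: set) -> dict:
    """Compare the GP record's current medications with the summary copy."""
    return {
        "omitted": gp_meds - scr_meds,    # in the GP record, missing from the SCR
        "spurious": scr_meds - gp_meds,   # listed as 'current' but not actually taken
    }

report = reconcile({"metformin", "ramipril"}, {"ramipril", "aspirin"})
print(sorted(report["omitted"]))   # ['metformin']
print(sorted(report["spurious"]))  # ['aspirin']
```

A certified system could be required to run such a reconciliation on every upload and refuse, or flag, records that fail it.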

Electronic health records can also record wrong dosages of drugs, or the wrong drugs, or fail to provide an alert when clinical staff have come to wrongly rely on such an alert.

A study in 2005 found that Computerized Physician Order Entry systems, which were widely viewed as a way of reducing prescribing errors, could lead to double the correct doses being prescribed.
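A doubled dose is exactly the kind of error a certified order-entry system could be required to trap before an order is accepted. The sketch below shows one form such a ceiling check might take; the drug names, dose limits and function names are illustrative assumptions, not clinical reference data or any real CPOE product's API.

```python
# Hypothetical sketch of a dose "sanity check" a prescribing system could
# run before accepting an order. Drug names and limits are illustrative
# assumptions only, not real clinical reference data.

MAX_SINGLE_DOSE_MG = {
    "paracetamol": 1000,   # illustrative adult single-dose ceiling
    "ibuprofen": 800,
}

def check_order(drug: str, dose_mg: float) -> str:
    """Return 'ok', or a warning if the dose exceeds the configured ceiling."""
    limit = MAX_SINGLE_DOSE_MG.get(drug.lower())
    if limit is None:
        return "warn: no reference limit for this drug"
    if dose_mg > limit:
        return f"warn: {dose_mg} mg exceeds the {limit} mg ceiling for {drug}"
    return "ok"

# A doubled dose entered in error would be flagged rather than accepted:
print(check_order("paracetamol", 2000))
```

The point is not the few lines of logic but the regime around them: in aviation, an equivalent safeguard would have to be demonstrated before certification, not discovered missing in service.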

One problem of health IT in hospitals is that computer systems are introduced alongside paper, so that neither one nor the other is a single source of truth. This could cause mistakes analogous to those made in early air crashes, which were caused not by technology alone but by pilots not fully understanding how the systems worked and not recognising the signs and effects of systems failing to work as intended.

In air crashes the lessons are learned the hard way. In health IT the lessons from failed implementations will be learned by committed professionals. But what happens when a hospital boss is overly ambitious, is bowled over by unproven technology and is cajoled into a premature go-live?

In 2011, indeed in the past few months, headlines in the trade press have continued to flow when a hospital’s patient information system goes live, or when a trust reaches a critical mass of Summary Care Record uploads of patient records (although some of the SCR records may or may not be accurate, and may or may not be correctly updated).

What we won’t see are headlines on any serious or even tragic consequences of the implementations. A BBC File on 4 documentary this month explained how hospital mistakes are unlikely to be exposed by coroners or inquests.

So hospital board chief executives can order new and large-scale IT systems without the fear of any tragic failure of those implementations being exposed, investigated and publicly reported on. The risk lies with the patient. Certification and regulation of health IT systems would reduce that risk.

Should health IT systems be tested as thoroughly as the A380’s landing gear? Those tests in detail

Qantas Flight 32 was carrying 466 passengers when an engine exploded. The Airbus A380 made an emergency landing at Singapore Changi Airport on 4 November 2010. The failure was the first of its kind for the four-engined A380.

Shrapnel from the exploding engine punctured part of the wing and damaged some of the systems, equipment and controls. Pilots deployed the landing gear manually, using gravity – and it worked well. Despite many technical problems the plane landed safely.

Five years earlier, tests of a manual deployment of the A380’s landing gear failed initially. It happened in a test hangar, more than a year before the A380 received regulatory approval to carry 853 passengers.

The story of the landing gear tests is told by Channel 4 as part of a well-made documentary, “Giant of the Skies” on the construction and assembly of the A380.  Against a backdrop of delays and a budget overspend, Airbus engineers must show that if power is lost the wheels will come down on their own, using gravity.

The film shows an Airbus A380 suspended in a hangar while the undercarriage test gets underway. The undercarriage doors open under their own weight and a few seconds later the locks that hold up the outer wheels release. Two massive outer sets of four wheels each fall down through a 90-degree arc. Something goes wrong.  At about 45 degrees, one of the Michelin tyres catches on an undercarriage door which looks as if it has not opened as fully as it would have if powered electrically. Only after 16 seconds does the jammed wheel set slip free. Engineers watching the test look mortified.

Simon Sanders, head of landing gear design at Airbus, tells Channel 4: “We need to understand and find a solution.”

An engineer smeared some grease (Aeroshell 70022 from Shell, Houston) on a guide ramp where the A380’s wheels are supposed to push the door open in an emergency loss of power. This worked and the test was successful: the left-side outer landing gear doors opened under their own weight; a few seconds later the wheels fell down, also under their own weight, and this time the tyre that had jammed earlier hit the grease on the door and slid down without any delay. But a permanent solution was needed.

A month later Airbus repeated the undercarriage “gravity” test. Sanders told Channel 4: “We have applied a layer of Teflon paint which is similar to the Teflon coating you have on non-stick frying pans. This will reduce the friction when we do the free-fall [of the undercarriage]. We are now going to perform the test to demonstrate that with this low-friction Teflon coating we have solved the problem.”

This time the A380’s chief test pilot Gérard Desbois was watching. If the wheels got stuck again Desbois could refuse to accept the aircraft for its first test flight, which would mean a further delay.

The loss-of-power test began. The outer landing gear doors opened … the wheels fell down under their own weight … and jammed again. This time they freed themselves quicker than before. After some hesitation Desbois accepted the aircraft on the basis that if power were lost and the left outer landing wheels took a little longer to come down than the right outer set this would not be a problem.  The gravity free-fall backup system was further refined before the A380 went into service.

The importance of the tests was shown in 2010 when an exploding Rolls-Royce Trent 900 engine on an Airbus A380, Qantas Flight 32 from Singapore to Sydney, damaged various aircraft systems including the landing-gear computer, which stopped working. The pilots had to deploy the landing gear manually. The official incident report shows that all of the A380’s 22 wheels deployed fully under the gravity extension backup arrangements.

If a hospital board had been overseeing the A380’s tests back in 2005, would directors have taken the view that the test was very nearly successful, so the undercarriage could be left to prove itself in service?

For the test engineers at Airbus, safety was not a matter of choice but of regulation and certification. It is a pity that the deployment of health IT, which can affect the safety of patients, is not a matter of regulation or certification.

Links:

Oxford University Hospitals NHS Trust postpones major IT go-live.

Giant of the Skies – Channel 4 documentary on manufacture and testing of the Airbus A380


2 responses to “Why hospital IT needs airline safety culture”

  1. Well Mr Collins, although I understand the analogy and see what you are driving at, I must completely disagree with your inferences.

    The first issue I have is that implicit in your analogy is a level playing field on which all aircraft manufacturers play. This is not the case for NHS IT. The DOH has a very clear and well documented set of requirements with which all NHS information systems must be compliant (http://www.connectingforhealth.nhs.uk/systemsandservices/data/nhsdmds). CfH / DOH / Trusts happily procure systems which are not so compliant! So from the outset we are not comparing landing gear with landing gear.

    Second issue: nobody in their right mind would buy an airliner in kit form or one bit at a time, glue / bolt the lot together and hope it will get off the ground. That is exactly what CfH did with NPfIT.

    Third issue: people who buy / commission airliners know exactly what they want; their requirements are fixed and not subject to external change pressures. The pilots who will fly them and the end users who will be passengers have no say in design and construction, which is precisely what you don’t want in an information system design or implementation.

    Fourth point: the ‘hold harmless’ clause pre-supposes a number of things which have nothing to do with an information system and everything to do with culture and change management (or lack thereof) – namely that front line staff correctly record what they do and do so in a timely manner, that handovers are performed with access to the necessary information by the staff who need it when they need it, that staff performance is monitored against defined criteria, and that staff performance is audited. I could go on but I think you get the picture. These few processes not followed could easily lead to the ‘wheels getting stuck’ without ever going near an IT system. Sure, IT is supposed to make things easier, faster, safer and cheaper, BUT without the human element it’s useless.

    Health care information systems should remain, for the moment, passive repositories of data to be consumed by humans who can then make informed decisions: it’s a human who enters the data and it’s a human who interprets it. I don’t see any software supplier willing to stick their neck out and take responsibility for a ‘wheel getting stuck’. As an afterthought, they do seem to like charging as though they do, though, don’t they?


    • All good points, particularly the one about the existence of clear standards that are not complied with. Non-compliance with safety standards means being denied a certificate in the world of aviation – and without airworthiness approval a manufacturer cannot sell planes, because airlines cannot carry passengers on a non-compliant aircraft. In health IT, certification would stop non-compliant systems being used; and the certification process could be the last step (as it is in aircraft manufacture) and so take into account the way the system is to be used in a hospital. Obvious shortcomings and the system would fail the certification process.

      When the Airbus A380 was tested at its maximum safe speed, the flight certification test at high altitude had to be halted because vibration caused part of the undercarriage to dislodge. Designers were required to think again – at a time when the Airbus was already late and over budget. The key thing is that Airbus had no choice – it could not get the safety certification without making the necessary safety and design changes. It’s not like that in health IT. A hospital, the Department of Health, or a supplier can, as you say, decide not to comply with standards; and as there is no need for a safety certificate they can introduce a non-compliant system. That is wrong and potentially dangerous. Arguably it is an anarchic way to introduce unproven new technology into a safety-related field. It is time that regulation and certification caught up with health IT.

