Category Archives: risk

Why hospital IT needs airline safety culture

By Tony Collins

Earlier this month the pilots of a Boeing 787 “Dreamliner” carrying 249 passengers aborted a landing at Okayama airport in Japan when the wheels failed to deploy automatically. The pilots circled and deployed the landing gear manually.

A year ago pilots of an Airbus A380, the world’s largest passenger plane, made an emergency landing at Singapore on landing gear that they deployed using gravity, the so-called “gravity drop emergency extension system”.

In both emergencies the contingencies worked.  The planes landed safely and nobody was hurt.

Five years earlier, during tests, part of the landing gear on a pre-operational A380 failed initially to drop down using gravity.

The Teflon solution

The problem was solved, thanks in part to the use of Teflon paint (see below). Eventually the A380 was certified to carry 853 passengers.

Those who fly sometimes owe their lives to the proven and certified backup arrangements on civil aircraft. Compare this safety culture to the improvised and sometimes improvident way some health IT systems are tested and deployed.

Hospital boards routinely order the installation of information systems without proven backup arrangements or certification. Absent in health IT are the mandatory standards that underpin air safety.

When an airliner crashes investigators launch a formal inquiry and publish their findings. Improvements usually follow, if they haven’t already, which is one reason flying is so safe today.

Shutters come down when health IT fails 

When health IT implementations go wrong the effect on patients is unknown. Barts and The London NHS Trust, the Royal Free Hampstead, the Nuffield Orthopaedic Centre, Barnet and Chase Farm Hospitals NHS Trust and other trusts had failed go-lives of NPfIT patient administration systems. They have not published reports on the consequences of the incidents, and have no statutory duty to do so.

Instead of improvements triggered by a public report there may, in health IT, be an instinctive and systemic cover-up, which is within the law. Why would hospitals own up to the seriousness of any incidents brought about by IT-related confusion or chaos? And, on the advice of their lawyers, suppliers are unlikely to own up to weaknesses in their systems after pervasive problems.

Supplier “hold harmless” clauses

Indeed a “hold harmless” clause is said to be common in contracts between electronic health record companies and healthcare provider organisations. The clause shifts liability to the users of EHRs: users become liable for adverse patient consequences that result from the use of clinical software, even if the software contains errors.

That said, the supplier’s software will have been configured locally; and it is those modifications that might have caused or contributed to incidents.

Done well, health IT implementations can improve the care and safety of patients. But after the go-live of a patient administration system Barts and The London NHS Trust lost track of thousands of patient appointments and had no idea how many were in breach of the 18-week limit for treatment after being referred by a GP. There were also delays in appointments for cancer checks.

At the Royal Free Hampstead staff found they had to cope with system crashes, delays in booking patient appointments, data missing in records and extra costs.

And an independent study of the Summary Care Records scheme by Trisha Greenhalgh and her team found that electronic records can omit allergies and potentially dangerous reactions to certain combinations of drugs. Her report also found that the SCR database:

–  Omitted some medications

–  Listed ‘current’ medication the patient was not taking

–  Indicated allergies or adverse reactions which the patient probably did not have

Electronic health records can also record wrong dosages of drugs, or the wrong drugs, or fail to provide an alert when clinical staff have come to wrongly rely on such an alert.

A study in 2005 found that Computerized Physician Order Entry systems, which were widely viewed as a way of reducing prescribing errors, could lead to doses double the correct amount being prescribed.

One problem of health IT in hospitals is that computer systems are introduced alongside paper, so that neither is a single source of truth. This could cause mistakes analogous to those in early air crashes, which were caused not by technology alone but by pilots not fully understanding how the systems worked and not recognising the signs and effects of systems failing to work as intended.

In air crashes the lessons are learned the hard way. In health IT the lessons from failed implementations will be learned by committed professionals. But what happens when a hospital boss is overly ambitious, is bowled over by unproven technology and is cajoled into a premature go-live?

In 2011, indeed in the past few months, headlines in the trade press have continued to flow when a hospital’s patient information system goes live, or when a trust reaches a critical mass of Summary Care Record uploads of patient records (although some of the SCR records may or may not be accurate, and may or may not be correctly updated).

What we won’t see are headlines on any serious or even tragic consequences of the implementations. A BBC File on 4 documentary this month explained how hospital mistakes are unlikely to be exposed by coroners or inquests.

So hospital board chief executives can order new and large-scale IT systems without the fear of any tragic failure of those implementations being exposed, investigated and publicly reported on. The risk lies with the patient. Certification and regulation of health IT systems would reduce that risk.

Should health IT systems be tested as well as the A380’s landing gear? – those tests in detail

Qantas Flight 32 was carrying 466 passengers when an engine exploded. The Airbus A380 made an emergency landing at Singapore Changi Airport on 4 November 2010. The failure was the first of its kind for the four-engined A380.

Shrapnel from the exploding engine punctured part of the wing and damaged some of the systems, equipment and controls. Pilots deployed the landing gear manually, using gravity  – and it worked well. Despite many technical problems the plane landed safely.

Five years earlier, tests of a manual deployment of the A380’s landing gear failed initially. It happened in a test hangar, more than a year before the A380 received regulatory approval to carry 853 passengers.

The story of the landing gear tests is told by Channel 4 as part of a well-made documentary, “Giant of the Skies”, on the construction and assembly of the A380. Against a backdrop of delays and a budget overspend, Airbus engineers must show that if power is lost the wheels will come down on their own, using gravity.

The film shows an Airbus A380 suspended in a hangar while the undercarriage test gets underway. The undercarriage doors open under their own weight and a few seconds later the locks that hold up the outer wheels release. Two massive outer sets of four wheels each fall down through a 90-degree arc. Something goes wrong. At about 45 degrees, one of the Michelin tyres catches on an undercarriage door, which looks as if it has not opened as fully as it would have if powered electrically. Only after 16 seconds does the jammed wheel set slip free. Engineers watching the test look mortified.

Simon Sanders, head of landing gear design at Airbus, tells Channel 4: “We need to understand and find a solution.”

An engineer smeared some grease (Aeroshell 70022 from Shell, Houston) on a guide ramp where the A380’s wheels are supposed to push the door open in an emergency loss of power. This worked and the test was successful: the left-side outer landing gear doors opened under their own weight; a few seconds later the wheels fell down, also under their own weight, and this time the tyre that had jammed earlier hit the grease on the door and slid down without any delay. But a permanent solution was needed.

A month later Airbus repeated the undercarriage “gravity” test. Sanders told Channel 4: “We have applied a layer of Teflon paint which is similar to the Teflon coating you have on non-stick frying pans. This will reduce the friction when we do the free-fall [of the undercarriage]. We are now going to perform the test to demonstrate that with this low-friction Teflon coating we have solved the problem.”

This time the A380’s chief test pilot Gérard Desbois was watching. If the wheels got stuck again Desbois could refuse to accept the aircraft for its first test flight, which would mean a further delay.

The loss-of-power test began. The outer landing gear doors opened … the wheels fell down under their own weight … and jammed again. This time they freed themselves quicker than before. After some hesitation Desbois accepted the aircraft on the basis that if power were lost and the left outer landing wheels took a little longer to come down than the right outer set this would not be a problem.  The gravity free-fall backup system was further refined before the A380 went into service.

The importance of the tests was shown in 2010 when an exploding Rolls-Royce Trent 900 engine on an Airbus A380,  Qantas Flight 32 from Singapore to Sydney, caused damage to various aircraft systems including the landing gear computer which stopped working.  The pilots had to deploy the landing gear manually. The official incident report shows that all of the A380’s 22 wheels deployed fully under the gravity extension backup arrangements.

If a hospital board had been overseeing the A380’s tests back in 2005, would directors have taken the view that the test was very nearly successful, so the undercarriage could be left to prove itself in service?

For the test engineers at Airbus, safety was not a matter of choice but of regulation and certification. It is a pity that the deployment of health IT, which can affect the safety of patients, is not a matter of regulation or certification.

Links:

Oxford University Hospitals NHS Trust postpones major IT go-live.

Giant of the Skies – Channel 4 documentary on manufacture and testing of the Airbus A380

Firecontrol: same mistakes repeated on other projects

By Tony Collins

A report published today by the Public Accounts Committee on the £469m Firecontrol project reads much like the committee’s earlier reports on government IT-enabled project disasters.

Margaret Hodge, chair of the Committee, said:

“This is one of the worst cases of project failure that the committee has seen in many years. FiReControl was an ambitious project with the objectives of improving national resilience, efficiency and technology by replacing the control room functions of 46 local Fire and Rescue Services in England with a network of nine purpose-built regional control centres using a national computer system.

“The project was launched in 2004, but following a series of delays and difficulties, was terminated in December 2010 with none of the original objectives achieved and a minimum of £469m being wasted.

“The project was flawed from the outset, as the Department attempted, without sufficient mandatory powers, to impose a single, national approach on locally accountable Fire and Rescue Services who were reluctant to change the way they operated.

“Yet rather than engaging with the Services to persuade them of the project’s merits, the Department excluded them from decisions about the design of the regional control centres and the proposed IT solution, even though these decisions would leave local services with potential long-term costs and residual liabilities to which they had not agreed.

“The Department launched the project too quickly, driven by its wider aims to ensure a better co-ordinated national response to national disasters, such as terrorist attacks, rail crashes or floods. The Department also wanted to encourage and embed regional government in England.

“But it acted without applying basic project approval checks and balances – taking decisions before a business case, project plan or procurement strategy had been developed and tested amongst Fire Services. The result was hugely unrealistic forecast costs and savings, naïve over-optimism on the deliverability of the IT solution and under-appreciation or mitigation of the risks. The Department demonstrated poor judgement in approving the project and failed to provide appropriate checks and challenge.

“The fundamentals of project management continued to be absent as the project proceeded. So the new fire control centres were constructed and completed whilst there was considerable delay in even awarding the IT contract, let alone developing the essential IT infrastructure.

“Consultants made up over half the management team (costing £69m by 2010) but were not managed. The project had convoluted governance arrangements, with a lack of clarity over roles and responsibilities. There was a high turnover of senior managers although none have been held accountable for the failure. The committee considers this to be an extraordinary failure of leadership. Yet no individuals have been held accountable for the failure and waste associated with this project.”

Comment:

Firecontrol was a politically-motivated project which used bricks, mortar and IT to try to change the way people worked. The users in the fire service didn’t want a single national approach of nine new regional centres – complete with new hardware and software – just as NHS clinicians, in general, did not want the National Programme for IT [NPfIT]. The Firecontrol regional centres were built anyway and the NPfIT went ahead anyway.

One lesson is that, in the public sector, you cannot engage users who won’t support the scheme. If they want to change, and they want the new IT, they’ll find ways to overcome the technology’s deficiencies. If they don’t want the scheme – and fire personnel did not want Firecontrol – the end-users will be incorrigibly harsh evaluators of what’s delivered, and not delivered.

It’s better to get the support of users, and involve them in the prototype design and test implementations, long before the scheme is finalised. It’s different in the private sector because the support of users is not essential – those who don’t accept business change and the associated IT will be expected to quit.

So what today is the mistake that is being repeated? The Public Accounts Committee touched on it when it said that the Department for Communities and Local Government – which was responsible for Firecontrol –  “failed to provide appropriate checks and challenge”.

During the life of Firecontrol, the Office of Government Commerce carried out “Gateway reviews” which independently assessed progress or otherwise. The reviews  could have provided an early warning of a project that was about to waste hundreds of millions of pounds. But the Gateway review reports were not published. They had a limited internal distribution and, it appears, were ignored.

According to the Public Accounts Committee, a Gateway review in April 2004, near the start of the Firecontrol project, said the scheme was in poor condition overall and at significant risk of failing to deliver.

Why was this Gateway review not published? If it had been, Parliament and the media could have held ministers to account – and perhaps have campaigned to stop the project before millions were thrown away.

There was indeed a media campaign in 2004 – and before – to have Gateway reviews published, but ministers – and particularly civil servants – said no.

Now the same thing is happening. The civil service has persuaded the coalition government to carry on Labour’s tradition of keeping Gateway reviews secret. So Parliament and the media will continue to be kept in the dark on whether a major project is going wrong.

By the time details of the reviews are published, perhaps years later in a report of the National Audit Office,  it may be too late to rescue the scheme. By then tens or hundreds of millions may have been wasted. Gateway reviews should be published around the time they are written, not years later.

Ministers do not have to pander to civil servants. They are paid to stand up to them. They receive a premium over the salary of MPs in part to be independent voices – to provide a challenge.

Subservient ministers in the DWP are among those who continue to allow Gateway reviews to remain hidden. If you ask the DWP under the Freedom of Information Act for the release of the Starting Gate report on Universal Credit (which I am told is not the same as a Gateway review report) the DWP will refuse your request. It refused mine.

So we have to accept the word of civil servants that the Universal Credit programme is going well; but haven’t there been enough IT-related disasters in government for all to know that the word of civil servants on whether things are going well needs to be tested independently? The publication of Gateway reviews – and Starting Gate reviews – could help outsiders hold a department to account. It’s time ministers began to realise this.

Are ministers such as Iain Duncan Smith in control of their departments – or are their civil servants in control of them?

Links:

Margaret Hodge on BBC “Today” programme 20/9/11 – “what could go wrong did go wrong” – did PA Consulting get away without blame?

What the NPfIT and Firecontrol have in common.

Firecontrol:  should PA Consulting share some responsibility for what happened?

CSC optimistic on new NPfIT deal – officials less so

By Tony Collins

CSC is due to meet officials from the Cabinet Office next month to discuss a possible new deal over the company’s £3bn worth of NHS IT contracts. Proposals from the Cabinet Office’s Efficiency and Reform Group have gone to the Department of Health and Downing Street for approval.

Nobody seems to know yet what the ERG has proposed but CSC remains confident that a new NPfIT deal will be signed that is good for the supplier’s finances and for the NHS.  Not all Whitehall officials share CSC’s confidence, however.

A new deal may be signed – but perhaps without the exclusive arrangements in the original contracts and the NHS commitments to place a minimum amount of business with the company.

One reason it’s hard for civil servants to innovate?

By Tony Collins

James Gardner has seen for himself the institutional obstacles to innovation. He was, in effect, chief innovator [CTO] at the Department for Work and Pensions. He now works for Spigit.

In a blog on the need for innovators to have “courageous patience” he quotes the British politician Tony Benn who used to be Minister of Technology in the Wilson government:

“It’s the same each time with progress. First they ignore you, then they say you’re mad, then dangerous, then there’s a pause and then you can’t find anyone who disagrees with you.”

He also quotes Warren Bennis who, he says, established leadership as a credible academic discipline:

“Innovation— any new idea—by definition will not be accepted at first. It takes repeated attempts, endless demonstrations, monotonous rehearsals before innovation can be accepted and internalized by an organization. This requires courageous patience.”

Patience comes easily in the civil service but courage? The courage to spend a little with inventive SMEs rather than a lot with large systems integrators? Perhaps this is why it’s so hard to get central departments to innovate.

Mutuals: balancing the benefits of employee ownership and innovation with the risks and rewards

By David Bicknell

The excellent King’s Fund report released yesterday on social enterprise in healthcare made some interesting points on employee ownership and risk in social enterprises and mutuals.

It said: “Evidence from other sectors (the commercial industry, and other public services to a lesser extent) largely focuses on the employee ownership model. In the UK, there is considerable evidence based on the John Lewis Partnership, a major retailer and the UK’s largest employee-owned organisation. However, much of the literature in this field is from the United States, where a significant proportion of the workforce (more than one-fifth) is financially involved in their organisation.

“Literature from the private sector is predominantly supportive of employee ownership, and suggests that there is a positive link between employee ownership and productivity, innovation and job satisfaction. This literature is based on the argument that, by giving employees a stake in their organisation, they will be more engaged and potentially more productive.

“However, Ellins and Ham report evidence that suggests that employee ownership may slow down decision-making and generate a risk-averse culture. A review of the literature by Matrix Evidence also suggests that any productivity gains are not immediate, but become stronger over time.

“The relationship between employee ownership and staff engagement is quite complex. It has been suggested that employee ownership does not automatically lead to greater staff participation, but that staff participation is necessary for the development of a successful and productive employee-owned organisation. The literature suggests that the main benefit of employee ownership is greater staff involvement in decision-making, which is associated with a stronger tendency for organisational innovation. However, the direct link between ownership and staff satisfaction is much less clear.

“In commercial industries, employee-owned firms tend to have a lower risk of failure. They are able to create jobs quickly, and are at least as profitable when compared to conventionally structured businesses. Further, a survey by the Social Enterprise Coalition found that social enterprises were twice as confident of future growth compared with small- and medium-sized enterprises (SMEs) (48 per cent as opposed to 24 per cent of SMEs). Additionally, since the recession began, 56 per cent of social enterprises have increased their turnover from the previous year (compared with 28 per cent of SMEs).”

The other week when David Cameron launched the Open Public Services White Paper, he suggested that the Civil Service (and perhaps other enterprises too) would need to adopt a risk-taking culture.

“The biggest challenge for the Civil Service is to try and adapt to this new culture and also a very difficult thing to do, and an easy thing to say, is that actually civil servants will have to take some risks. We all know that in business it is very easy to award the contract to Price Waterhouse. They’ve done it before, they’re an enormous organisation, they won’t fail. I think there’s a similar tendency within the Civil Service. It’s safe to keep it in house and deal with one of the big providers.

“If we really want to see diversity, choice and competition, we have to take some risks and recognise that sometimes there will be a new dynamic social enterprise that has a great way of tackling poverty or drug abuse or helping prisoners go straight, and we do need to take some risks with those organisations and understand that rather like in business, when you have a failure, that that doesn’t mean that the Civil Service has done a disastrous job.

“In business, we try new things in order to do better, and when they don’t work, we sit back and think, ‘How do we do that better next time?’ We do need a sense of creativity and enterprise in our Civil Service which is clearly there….a change of culture, perhaps a different attitude towards innovation and risk and a sense that that will be a good way of driving performance.”

Interesting then that a blog post on the Harvard Business Review site discusses risk and argues that taking a risk is not immoral – as some might argue – and that “the world is full of people who sit on their high horses disparaging risk and risk takers. They counsel caution in order to gain moral stature, all the while making use of a thousand innovations made possible by the very people and practices they shun.”

It’s not the people who shun risks who are the saints, the author, Dan Pallotta, says. It’s the ones who dare to take them. Good piece – worth a read.