Category Archives: IT-related failures

Cerner US-wide outage – what went wrong?

By Tony Collins

Hospital EMR and EHR has an account of what caused Cerner’s outage, which, according to reports, affected client sites across the USA and some international customers.

It makes the point that the problem had little or nothing to do with the cloud.

The Los Angeles Times reports that within minutes of the outage, doctors and nurses reverted to writing orders and notes by hand, but in many cases they no longer had access to previous patient information saved remotely with Cerner.

“That information isn’t typically put on paper charts when records are kept electronically,” said the newspaper, which quoted Dr. Brent Haberman, a pediatric pulmonologist in Memphis, as saying: “If you can’t get to all the patient notes and all the previous data, you can imagine it’s very confusing and mistakes could be made… A new doctor comes on shift and doesn’t have access to what happened the past few hours or days.”

Hospital EMR and EHR

Some hospitals cope well through the outage

Maude gives up on plan to publish regular reports on major projects

By Tony Collins

Cabinet Office minister Francis Maude has given up on publishing regular “Gateway” reports on the progress or otherwise of big IT and construction projects.

Publication of the independent reviews has proved a step too far towards open government.  Were Maude to insist on publishing Major Projects Authority “Gateway” review reports, it would alienate too many influential senior civil servants whose support Maude needs to implement the Civil Service Reform Plan of June 2012.

Gateway reviews are independent reports on medium and high-risk projects at important stages of their lifecycle. While current and topical, the reviews are always kept secret. One copy is given to the project’s senior responsible owner and the Cabinet Office’s Major Projects Authority keeps another. Other copies have limited distribution.

In opposition Maude said he would publish the reviews; and when in power he took the necessary steps: the Cabinet Office’s “Structural Reform Plan Monthly Implementation Updates” included an undertaking to publish Gateway reviews by December 2011.

When some officials, particularly those who had worked at the Office of Government Commerce, objected strongly to publishing the reports (for reasons set out below), the undertaking to publish them vanished from further Structural Reform Plan Monthly Implementation Updates. When asked why, a spokesman for the Cabinet Office said the plan to publish Gateway reviews had only ever been a “draft” proposal.

The anti-publication officials have thwarted even Sir Bob Kerslake, head of the Home Civil Service, who replaced Sir Gus O’Donnell.  When in May 2012 Conservative MP Richard Bacon asked Kerslake about publishing Gateway reviews, Kerslake replied:

“Yes, actually we are looking at this specific issue as part of the Civil Service Reform Plan… I cannot say exactly what will be in the plan because we have not finalised it yet, but it is due in June and my expectation is that I am very sympathetic to publication of the RAG [red, amber, green] ratings.”

Inexplicably there was a change of plan. The Civil Service Reform Plan in fact said nothing about Gateway reports. It made no mention of RAG ratings. What the Plan offered on openness over major projects was an undertaking that “Government will publish an annual report on major projects by July 2012, which will cover the first full year’s operation of the Major Projects Authority.”  (This is a far cry from publishing regular independent Gateway assessments on major projects such as the IT for Universal Credit.)

Even that promise has yet to materialise: no annual report has been published. The Cabinet Office originally promised Parliament an annual report on the Major Projects Authority by December 2011. The Cabinet Office says that the annual MPA report has been delayed because the “team is now clear that it makes sense to include a full financial year’s worth of data and analysis in its first report”.

When eventually published the annual report will, says the Cabinet Office, “make for a far more informative and comprehensive piece, and will include analysis of data up to 31 March 2012. This will be the first time the UK government has reported on its major projects in such a coherent and transparent way.”

Even so, it’s now clear that the Cabinet Office is discarding its plans to publish regular Gateway review reports. Maude wants cooperation with officials, not confrontation. He made this clear in the reform plan, in which he said:

“Some may caricature this action plan as an attack on the Civil Service. It isn’t. It would be just as wrong to caricature the attitude of the Civil Service as one of unyielding resistance to change. Many of the most substantive ideas in this paper have come out of the work led by Permanent Secretaries themselves.”

But Maude is also frustrated at the quiet recalcitrance of some officials. To a Lords committee that was inquiring into the accountability of civil servants, he said:

“The thing for me that is absolutely fundamental in civil servants is that they should feel wholly uninhibited in challenging, advising and pushing back and then when a decision is made they should be wholly clear about implementing it.

“For me the sin against the holy ghost is to not push back and then not do it – that is what really enrages ministers, certainly in talking to ministers in the last government and in the current government. It is by no means universal, but it is far more widespread than is desirable.”

It’s likely that Maude will keep Gateway reports secret so long as he has the cooperation of officials on civil service reforms.

Why officials oppose publication

The reasons for opposing publication were set out in the OGC’s evidence to an Information Tribunal on the Information Commissioner’s ruling in 2006 that the OGC publish two Gateway reports on the ID Cards scheme.

Below are some of the OGC’s arguments (all of which the Tribunal rejected).  The OGC went to the High Court to stop two early ID Cards Gateway reports being published, at which time OGC lawyers cited the 1689 Bill of Rights. The ID Cards gateway reports were eventually published (and the world didn’t end).

The OGC had argued that publishing Gateway reports would mean that:

– Interviewees, who give their time to Gateway reviews voluntarily, might refuse to cooperate. (The Information Commissioner did not accept that officials would cease to perform their duties on the grounds that the information might be disclosed.)

– Interviewees would be guarded in what they said; reviewers would be less inclined to cooperate; and disclosure would result in anodyne reports. These three arguments were given in evidence by Sir Peter Gershon, the first Chief Executive of the OGC.

– Civil servants would be reluctant to take on the role of senior responsible owner of a project.

– Critics of a project would have ammunition which could discourage other departments and agencies from participating in the scheme.

– Cabinet collective responsibility could be undermined if Ministers were interviewed for a review.

– Criticisms in the reviews could be “in the newspapers within a very short time”, and the media could misrepresent the review’s findings. (The Tribunal discovered that those involved in the reviews were generally more concerned with their programme than possible adverse publicity.)

– Reports would take longer to write.

– The public would not understand the complexities in the reports.

Why Gateway reports should be published

The Tribunal found that OGC fears about publishing were speculative and that disclosure would contribute to a public debate about the merits of ID Cards, and provide some insight into the decision-making which underlay the scheme. Disclosure would ensure that a complex and sensitive scheme was “properly scrutinised and implemented”, said the Tribunal.

Was OGC evidence to Tribunal fixed?

The Tribunal was also suspicious that the OGC had submitted several witness statements that used identical wording. The Tribunal said the witnesses should have expressed views in their own words.

It found that disclosure could make Gateway reviewers more candid because they would know that their recommendations and findings would be subject to public scrutiny; and criticisms in the reports, if made public, could strengthen the assurance process.

Importantly, the Tribunal said the disclosure would help people judge whether the Gateway process itself works.

Comment

Hundreds of Gateway reviews are carried out by former civil servants who can earn more than £1,000 a day for doing a review (although note Peter Smith’s comment below). As the reports are to remain secret, how will the reviewers be held properly accountable for their assessments? No wonder officials don’t want the reports published.

Any idea how many projects we have and what they’ll cost? – Cabinet Office.

Whitehall cost cutting saves £5.5bn

Some lessons from a major outage

By Tony Collins

One of the main attractions of remote hosting is that you don’t have to worry about security, and uptime is guaranteed. Support is 24x7x365. State-of-the-art data centres offer predictable, affordable monthly charges.

In the UK more hospitals are opting for remote hosting of business-critical systems. Royal Berkshire NHS Foundation Trust and Homerton University Hospital NHS Foundation Trust are among those taking remote hosting from Cerner, their main systems supplier.

More trusts are expected to do the same, for good reasons: remote hosting from Cerner will give Royal Berkshire a single point of contact for technical problems, without the risk and delay of ascertaining whether the cause is hardware, third-party software or the application.

But what happens when the network goes down – across the country and possibly internationally?

With remote hosting of business-critical systems becoming more widespread it’s worth looking at some of the implications of a major outage.

A failure at CSC’s Maidstone data centre in 2006 was compounded by problems with its recovery data centre in Royal Tunbridge Wells. Knock-on effects extended to information services in the North and West Midlands. The outage affected 80 trusts that were moving to CSC’s implementation of Lorenzo under the NPfIT.

An investigation later found that CSC had been over-optimistic when it informed NHS Connecting for Health that the situation was under control. Chris Johnson, a professor of computing science at Glasgow University, has written an excellent case study on what happened and how the failure knocked out primary and secondary levels of protection. What occurred was a sequence of events nobody had predicted.

Cerner outage

Last week Cerner had a major outage across the US. Its international customers might also have been affected.

InformationWeek Healthcare reported that Cerner’s remote hosting service went down for about six hours on Monday, 23 July. It hit “hospital and physician practice clients all over the country”. InformationWeek said the unusual outage “reportedly took down the vendor’s entire network” and raised “new questions about the reliability of cloud-based hosting services”.

Cerner spokesperson Kelli Christman told InformationWeek:

“Cerner’s remote-hosted clients experienced unscheduled downtime this week. Our clients all have downtime procedures in place to ensure patient safety. The issue has been resolved and clients are back up and running. A human error caused the outage. As a result, we are reviewing our training protocol and documented work instructions for any improvements that can be made.”

Christman did not respond to a question about how many Cerner clients were affected. HIStalk, a popular health IT blog, reported that hospital staff resorted to paper but it is unclear whether they would have had access to the most recent information on patients.

One tweet, by @UhVeeNesh, said: “Thank you Cerner for being down all day. Just how I like to start my week…with the computer system crashing for all of NorCal.”

Another by @wsnewcomb said “We have not charted any pts [patients] today. Not acceptable from a health care leader.”

Cerner Corp tweeted “Our apologies for the inconvenience today. The downtime should be resolved at this point.”

One HIStalk reader praised Cerner communications. Another didn’t:

“Communication was an issue during the downtime as Cerner’s support sites was down as well. Cerner unable to give an ETA on when systems would be back up. Some sites were given word of possible times, but other sites were left in the dark with no direction. Some sites only knew they were back up when staff started logging back into systems.

“Issue appears to have something to do with DNS entries being deleted across RHO network and possible Active Directory corruption. Outage was across all North America clients as well as some international clients.”

Colleen Becket, chairman and co-CEO of Vurbia Technologies, a cloud computing consultancy, told InformationWeek Healthcare that NCH Healthcare System, which includes two Tampa hospitals, had no access to its Cerner system for six hours. The outage affected the facilities and NCH’s ambulatory-care sites.

Lessons?

A HIStalk reader said Cerner has two electronic back-up options for remote-hosted clients. One is read-only access – but that would have required users to log into Cerner’s systems, which wasn’t possible with the DNS servers out of action last week.

The other downtime service downloads patient information to local computers, perhaps at least one on each floor, at regularly scheduled intervals – updated, say, every five minutes. “That way, even if all connection with [Cerner’s data centre] is lost, staff have information (including meds, labs and more) locally on each floor which is accurate up to the time of the last update”.
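
Purely as an illustration of that pattern – a scheduled job copying the latest patient summaries to a machine on each floor – a minimal sketch might look like the Python below. The endpoint URL, cache path and five-minute interval are assumptions for the example, not details of Cerner’s actual service.

```python
import json
import os
import time

import requests  # third-party HTTP client

# Hypothetical endpoint and cache location -- not Cerner's real API.
SUMMARY_URL = "https://ehr.example.org/api/ward/summaries"
CACHE_PATH = "/var/lib/downtime/ward_summaries.json"
INTERVAL_SECONDS = 300  # refresh every five minutes, as the reader suggests

def refresh_cache():
    """Fetch the latest patient summaries and store them locally, so
    staff still have recent meds/labs data if the remote link drops."""
    response = requests.get(SUMMARY_URL, timeout=30)
    response.raise_for_status()
    # Write to a temporary file and rename: the rename is atomic, so a
    # download that fails part-way never corrupts the previous snapshot.
    tmp_path = CACHE_PATH + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(response.json(), f)
    os.replace(tmp_path, CACHE_PATH)

if __name__ == "__main__":
    while True:
        try:
            refresh_cache()
        except requests.RequestException:
            # Connection to the remote host lost: keep serving the last
            # good snapshot from disk and retry at the next interval.
            pass
        time.sleep(INTERVAL_SECONDS)
```

The atomic rename matters: an outage that strikes mid-download leaves the previous snapshot intact, which is exactly the property a downtime copy needs.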

Finally, says the HIStalk commentator, “since this outage was due to a DNS problem, anyone logged into the system at the time it went down was able to stay logged in. This allowed many floors to continue to access the production system even while most of the terminals couldn’t connect.”
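
That behaviour follows from how DNS is used: a client resolves the host name once, when it opens a connection, and an established TCP session then runs over the resolved IP address regardless of whether the name servers are still answering. One possible mitigation – caching the last successfully resolved address so that new connections can still be attempted during a DNS outage – is sketched below; the host name is invented for the example.

```python
import socket

_last_good = {}  # host name -> last successfully resolved IP address

def resolve(host, port=443):
    """Resolve a host name, falling back to the last-known-good
    address if the DNS servers are unreachable."""
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        _last_good[host] = infos[0][4][0]  # first address returned
    except socket.gaierror:
        # Lookup failed: reuse the cached address if we have one.
        if host not in _last_good:
            raise
    return _last_good[host]

# e.g. resolve("rho.example.net") keeps returning the cached address
# for new connections even while the name servers are down.
```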

But could the NHS afford a remote hosted service, and a host of on-site back-up systems?

Common factors in health IT implementation failures

In its discussion on the Cerner outage, HIStalk put its finger on the common causes of hospital IT implementation failures. It says the main problems are usually:

– a lack of customer technical and implementation resources;
– poorly developed, self-deceiving project budgets that don’t support enough headcount, training and hardware to get the job done right;
– letting IT run the project without getting users properly involved;
– unreasonable and inflexible timelines, as everybody wants to see something light up quickly after spending millions; and
– expecting that simply implementing new software will clear away all the bad decisions (and indecisions) of the past and force a fresh corporate agenda on users and physicians, with the supplier being the convenient whipping boy for any complaints about ambitious and sometimes oppressive changes that the culture just can’t support.

Cerner hosting outage raises concerns

HIStalk on Cerner outage

Case study on CSC data centre crash in 2006

How to identify a high-risk supplier – Cabinet Office works out details

By Tony Collins

Francis Maude, the Cabinet Office minister, has agreed mechanisms for officials to identify high-risk suppliers where “material and substantial underperformance is evident”.

On his blog Spend Matters, Peter Smith has published parts of a letter from Maude.

Where under-performing suppliers are identified “departments will be asked to engage with the Cabinet Office at each stage of any procurement process involving the affected supplier to ensure that performance concerns are taken fully into account before proceeding”.

The implication is that the Cabinet Office will draw up a blacklist of bad suppliers which departments will take account of when buying. Smith says that two suppliers are already on the blacklist.

Comment: 

For more than 20 years the trade press has identified the same suppliers in a succession of failed or failing IT-based projects, but poor performance has never been seriously taken into account.

This is usually because the suppliers argue that the media and/or Parliament has got it all wrong.  Departments, it appears, will always prefer a potential supplier’s version to whatever is said in the media or in Parliament.

The Office of Government Commerce, now part of the Cabinet Office, kept intelligence information on suppliers but it seems to have made no difference in procurements.

It is unlikely the Cabinet Office’s blacklist will rule out any suppliers from a shortlist. As Smith says, suppliers will claim that any problem was all the fault of ministers or civil servants who kept changing their minds, were not around to make key decisions, or didn’t understand the nature of the work.

But still the blacklist is a worthwhile innovation. At least one big IT supplier has made a habit of threatening to withdraw from existing assignments when officials have refused to revise terms, prices or length of contract. The blacklist will strengthen the negotiating hand of officials.

The challenge for Maude will be persuading departments to take the blacklist idea seriously.

Peter Smith, Spend Matters.

Lessons from an IT disaster

By Tony Collins

Only rarely is an independent report on an IT-related disaster published. So North Bristol NHS Trust deserves credit for publishing a report by PricewaterhouseCoopers (PwC) into the problematic go-live of Cerner Millennium in December 2011. PwC calls the Cerner system a “business-critical patient record system”.

The implementation, says PwC, resulted in significant, continuing operational difficulty. PwC was asked to review the implementation, identify what went wrong and make recommendations.

What is clear from PwC’s report is that North Bristol NHS Trust repeated the known mistakes of other trusts that had gone live with Cerner Millennium:

– A lack of independent challenge

– Not enough testing of the system and new business processes

– Inadequate contingency arrangements

– Not enough time for data migration

– Training systems not the same as those to be used

– Preparations treated as an IT project, not a change programme

– Differences between legacy and Cerner systems not fully understood before go-live

– Staff did not always understand new or changed business processes

In 2007 the National Audit Office reported in detail on the lessons from the go-live of Cerner Millennium at Nuffield Orthopaedic Centre, Oxford in December 2005.

One of those lessons was that the Trust did not learn lessons from earlier NPfIT Cerner Millennium go-lives. This happened again at North Bristol, suggests the PwC report:

“There were not dissimilar Cerner implementations within the Greenfield [other ex-Fujitsu and now BT-managed Cerner Millennium implementations under the NPfIT] systems running a few months before NBT’s [North Bristol Trust] implementation. Similar difficulties were experienced there, but they were more successfully addressed.”

Below are extracts from PwC’s report “Independent review of Cerner Millennium implementation North Bristol NHS Trust”.

“The success of an implementation of this scale, complexity and timing depends on substantial, robust and enduring programme management focusing on:

– The IT implementation, incorporating configuration of Cerner Millennium, infrastructure, security, interfaces and testing;

– The migration of data from the two legacy PAS systems into Cerner Millennium;

– Change management to engage and train stakeholders, embed change in the organisation and ensure that processes and procedures are aligned to the new system;

– Continuous communication with users about changes to business processes as a result of the implementation; and

– Quality control criteria and the associated governance to ensure that go-live went ahead in a safe and sustainable manner.

The Trust needed stringent programme management, with programme and project managers of the highest quality, to ensure that effective governance and project planning procedures were followed.

The go-live decision and assurances needed to pass strict criteria, with sufficient evidence to provide assurance to the board that all necessary activities were completed prior to go-live.

The implementation in both the wards and the Emergency Department (ED) went well. Staff in ED were well engaged in the project and as a result were fully aware of the changes to their business processes at go live. There were some minor system issues initially but these were resolved quickly and ED was fully operational with Cerner Millennium soon after go live. One of the underlying factors in the success of the deployment to ED was that there was no data migration required as the historical data remains in the old system.

The launch in the wards went as expected; the functionality was tested well and the data was loaded manually, although there now appear to be issues with staff engaging with and using the system as intended.

The majority of problems encountered at go live related to the theatre and outpatient clinic builds.

Outpatients had the most disruption immediately after go live. The Trust’s back office team had not finished building the outpatient clinics in Cerner Millennium, so the new and old systems did not mirror each other and data could not successfully migrate. Changes continued to be made to clinics in the old PAS systems, and these were not all reflected in Cerner Millennium.

Ad hoc clinics were used in the old PAS system to allow overbooking to maximise activity. These were not separated from real clinics at go live and migrated to Cerner Millennium as real clinics. The ad hoc clinics in PAS had deliberately abnormal timings so they could be excluded from time-based reports, for example 12:30am and 5:30am. The system generated letters for these ad hoc out-of-hours clinics, and many were sent to patients.

In the old system, clinics for a number of consultants could be pooled to facilitate patients seeing the next available consultant.  All clinics in Cerner Millennium are specific to a consultant and this caused significant confusion to administration staff using the new system.

PAS [the legacy patient administration system] treats “weeks” differently to Cerner Millennium. On migration, weeks were misaligned and the dates for clinics and theatres were incorrect. This created huge confusion as patient notes did not agree with Cerner Millennium, despite exhaustive work before go live to ensure that all patient notes were ready for the clinics that should have been on the system. This also affected information in letters, with patients advised to attend their appointment on the wrong date.

There was a further issue in theatres relating to theatre procedure codes. The Trust did not map the old procedure codes to the new to ensure that all the required procedures would be available in Cerner Millennium for the data to migrate successfully. The Trust identified this issue soon after go live and has run a parallel manual process to ensure patients received the correct procedures.
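
The ad hoc clinics and the unmapped theatre codes are two instances of the same migration pitfall: legacy conventions – sentinel timings, local code lists – that a bulk copy cannot see. Purely as an illustration, a pre-migration check of the kind that might have flagged both problems is sketched below; the 12:30am and 5:30am sentinel times come from the report, while the field names and data structures are assumptions.

```python
from datetime import time

# Start times the Trust used to mark ad hoc "overbooking" clinics,
# according to the PwC report (12:30am and 5:30am).
SENTINEL_TIMES = {time(0, 30), time(5, 30)}

def premigration_issues(records, procedure_code_map):
    """Flag legacy records that should not migrate as-is.

    records: iterable of dicts with hypothetical 'id', 'start_time'
             (a datetime.time) and 'procedure_code' keys.
    procedure_code_map: mapping of legacy codes to Cerner codes.
    """
    issues = []
    for record in records:
        if record["start_time"] in SENTINEL_TIMES:
            issues.append((record["id"], "ad hoc placeholder clinic"))
        if record["procedure_code"] not in procedure_code_map:
            issues.append((record["id"], "no Cerner mapping for procedure code"))
    return issues
```

Run against the legacy PAS extract before go-live, a report like this could have surfaced the phantom out-of-hours clinics and the unmapped theatre procedures while there was still time to fix them.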

The training provided to staff by the Trust did not equip them to use Cerner Millennium at go live. The training environment did not mirror the system the Trust implemented, as certain elements of the system were not complete when the training domain was created. Theatre and outpatient staff could not train on a system with theatre schedules and outpatient clinics built in.

The Trust is now beginning to move out of the crisis and return to normal operations.

Lack of effective quality controls

There was insufficient rigour over the controls criteria and sign-off of the gateway reviews.

There was inadequate operational control over the go-live process, such as clinic freezes and updates before, during and after go-live. Evidence from the interviews suggests that:

  • There was little challenge to confirm that the gateway criteria had in fact been met.
  • There was no evidence presented to the Cerner Programme Board or the Trust Board to demonstrate that the gateway criteria had been met.
  • There was not enough focus on or monitoring of risks and issues and their impact on go live.
  • The cleansing of old and out-of-date data from the legacy PAS systems was inadequate; as a result, erroneous data became live data in the Cerner system.
  • Data migration issues were not all resolved and their impact on go live was not considered.
  • The outpatient and theatre builds were neither complete nor accurate, and there were no controls which could have detected this before go live.
  • There were inadequate controls over clinic freeze and clinic changes prior to go live.

Lack of effective programme planning

Programme plans were not rigorously updated as the programme progressed and planning around training, testing and data migration and build was not robust. The Trust failed to recognise this programme as a change programme and did not effectively manage the engagement and feedback from their stakeholders. Evidence from the interviews suggests that:

  • The Trust did not factor contingency into its programme plan to account for changes to the go live date.
  • The Cerner Programme Management Office was not effective because of inadequate resource and programme tools.
  • The Trust had a lack of sufficiently skilled resources for a project on this scale.
  • The Trust’s operational staff were not fully engaged in the Cerner project.
  • The Cerner project was treated as an IT project and not a business change programme.
  • The training was inadequate and did not provide users with the skills they needed to be able to use the system at go live.
  • The testing focused on the functionality of the system and not end-user testing of the outpatient and theatre builds.
  • There was no end-user testing of the final outpatient clinic and theatre builds prior to go live.
  • There was lack of understanding of roles within the wider programme team.
  • External parties offered NBT help and advice. They felt that the advice was not taken and the help was refused.

Lack of effective programme governance

Programme governance processes were not reviewed and updated regularly to ensure that they were adequate and there was inappropriate accountability for key decision making. During the implementation, the Trust established new overarching change management arrangements for the Building our Future programme. Evidence from the interviews suggests that:

  • The Cerner Project team failed to comply with the Trust’s Building our Future governance processes;
  • The information presented to the Cerner Programme Board and the Trust Board by the Cerner Project team was inadequate for them to make informed decisions;
  • The Cerner Programme Board was not effective; and
  • Significant issues relating to the theatre and outpatient clinic builds were not escalated to the Cerner Programme Board or the Trust Board.

PwC’s Conclusions

For a programme of this scale and complexity, the management arrangements were not sufficiently extensive or robust. There were many issues with the software and data migration, the training of users and operational go live planning. The Trust Board and the Cerner Programme Board did not plan to have, and did not receive, independent assurance that the state of the programme supported a decision to go-live.

Complex IT implementations are never without risks and issues that need to be managed, even at the point of go live. The scale of the issues in this implementation was not properly understood by those with responsibility, and as a result they were not in a position to make sound decisions.

Many of the problems are associated with poor data and process migration. Staff found that a significant proportion of migrated data was incorrect in the new system, and this had rapid and substantial operational impact which has taken a considerable time to rectify with manual processes. Staff needed to be more directly involved in migration and process testing.

The implementation was manifestly a complex change programme. But IT took the lead, and there was no intelligent customer with sufficient distance from IT to ensure products and progress were properly challenged.

There were not dissimilar Cerner implementations within the Greenfield systems running a few months before NBT’s implementation. Similar difficulties were experienced there, but they were more successfully addressed.”

PwC recommends that:

– the Trust “stop and take stock”. It says: “The Trust needs to take stock of its position and develop a coherent and detailed plan for the remainder of the recovery stage. The Trust then needs to ensure that effective cross programme planning and governance arrangements are enforced for all current projects, especially those under the Building Our Future programme.”

PwC also recommends that the Trust carry out a:

– Governance review

– Capability/capacity review

– Cross programme plan review

– Operational assessment

– Review of process and controls

– Review of information requirement

– Technical resilience/infrastructure review

– Review of access controls

Comment:

To me the PwC report throws up at least six points:

1) Are NPfIT go-lives more political than pragmatic?

In the 1990s Barclays Bank went live with new systems for all its branches. During the night (I was invited to watch the go-live at head office) the most striking element was a checklist that asked questions on progress so far. The answers determined whether the go-live would happen. The checklist was completed repeatedly – seemingly endlessly – during the night.

Many different types of mishap could have stopped the go-live. None did. Go-lives of Cerner Millennium are different: they seem unstoppable, whatever the circumstances, whatever the problems. There was nothing political about the Barclays go-live. But NPfIT go-lives are intensely political.

Would North Bristol’s board have accepted with equanimity a last-minute cancellation, especially after go-lives had been postponed at least twice before?

2)  Are NHS boards too focused on “good news” to oversee an NPfIT go live?

North Bristol NHS Trust deserves praise for publishing the PwC report.  But it’s not the whole story.  The report says little about any potentially serious impact on patients. Also it mentions (almost in passing) that the Trust board discussed in November 2011 the readiness of Cerner Millennium to go live. That discussion was probably positive because Millennium went live a month later. But there is no mention of that discussion in the Trust’s board papers for November 2011.

Why did the Trust discuss its readiness to go live in secret? And why did it keep secret its November 2011 report on its readiness to go live?

If North Bristol, like so many NHS trusts, is congenitally beset with a good news culture at board level, can the full truth ever be properly discussed?

3) Isn’t it time Cerner lessons were learnt?

After seven years of Cerner implementations in the NHS, several of them notorious failures, isn’t it time Trusts learnt the lessons?

4)  What’s the current position?

PwC’s report is succinct and professional. It’s also diplomatically worded. There is little in the report that points to how the Trust is coping with the operational difficulties. Indeed it suggests the Trust is returning to normal: “The Trust is now beginning to move out of the crisis and return to normal operations,” says the PwC report. But that is, in essence, what the Trust has been saying publicly since January 2012. PwC says nothing about whether the safety of patients was jeopardised by the go-live.

5) Where were the Trust’s Audit Committee – and internal auditors?

Every NHS Trust has an audit committee and internal auditors to warn about things that are going wrong, or may go wrong. It appears that they were out to lunch when it came to North Bristol’s Cerner Millennium project and its consequences.  The Audit Committee seems hardly to have mentioned the project. Should North Bristol’s board hold the Audit Committee and internal auditors to account?

6) Is the Trust board to blame?

Perhaps rightly PwC does not seek to apportion blame. But did the Trust board ask the right questions often enough?  The tacit criticism in the PwC report is of the IT department and layers of management below board level. But is that criticism misdirected? If the board’s culture of encouraging good news – of “bring me solutions not problems” –  has not changed, perhaps little or nothing will have been learned from North Bristol’s IT-related disaster.

PwC report: Independent review of Cerner Millennium implementation, North Bristol NHS Trust.

Lessons from Nuffield Orthopaedic’s Cerner Millennium implementation in 2005.

North Bristol apologises over Cerner go-live.

New hospital system caused chaos.

MP asks why two Cerner systems cost vastly different prices.

NHS Trust has “major concern” over spend on Cerner

By Tony Collins

North Bristol NHS Trust reports in its latest board papers that “overall the level of spending on Cerner continues to be a major concern and the IM&T Director is working to develop a plan to identify what will be needed in the current year”.

The trust went live with Cerner Millennium in December 2011 and had various problems which the Trust said had been “overcome” by 1 May 2012.

But the Trust’s board papers last month hint that some difficulties are continuing.

“There are also clearly still data issues from Cerner which are affecting these numbers which the team are working on,” says a North Bristol finance paper in June.

The overspend on Cerner is about £900,000 for a two-month period. The paper says the “costs of Cerner remain a risk as some of the forecast spend may need to be re-classified as revenue.

“The detail on this is currently being reviewed by the Director of IM&T and isn’t included in the month 2 position… There has been relatively little spend in capital with the exception of Cerner which has incurred £0.9m of cost for 2 months.”

The anticipated spending on the Cerner implementation for the Trust will be more than £5m.

Comment:

It’s not unusual for hospitals to run into trouble with a Cerner Millennium implementation. When confronted with serious IT-related difficulties, private sector organisations sometimes tackle what has gone wrong with urgency and pragmatism, trying not to pretend things are better than they are.

Public sector organisations, when facing IT-related difficulties, can fall into the trap of concentrating on what has gone right and talking as little as possible about the problems. Indeed North Bristol’s latest board papers hardly mention the Millennium difficulties. There is not a mention in the Audit Committee report. Not a mention in the board agenda. Only a finance report says that spending on Cerner is a major concern. Elsewhere in the board papers there are short, oblique references to data difficulties.

“With reference to the figures in Table 3, it was confirmed that all patients had been contacted but accuracy of the data could still not be guaranteed and reporting continued to be 2 months behind…  There were also a lot of duplicate referrals on the system.  This was being rectified but may affect billing,” says one board paper.

It would be wrong to suggest that a culture of accentuating the positive and shrugging the shoulders at the negative is peculiar to IM&T. It’s one of the differences between the private and public sectors.

North Bristol’s board needs to be more open. If it cannot admit its difficulties how will it tackle them? And what is the point of taxpayers paying for internal auditors that simply assure the board they are doing a great job?

NPfIT Cerner go-live at North Bristol has more problems than anticipated.

Halt Cerner implementations after patient safety problems at five hospitals says MP

Richard Granger “ashamed” of some systems

North Bristol overspends £1m on Cerner

All change at the DH, CfH and on NPfIT – or not?

By Tony Collins

Katie Davis is to leave as interim Managing Director of NHS Informatics, says eHealth Insider, which has seen an internal memo.

The memo indicates that Davis “intends to focus on being a full-time mother to her two children”.

She joined the Department of Health on 1 July 2011, on loan from the Cabinet Office where she was Executive Director, Operational Excellence, in the Efficiency and Reform Group.

Before that she was Executive Director of Strategy at the Identity and Passport Service in the Home Office.

The memo indicates that the director responsible for the day-to-day delivery of NHS programmes and services, Tim Donohoe, will take over Davis’s role until NHS Connecting for Health shuts down at the end of March 2013.

CfH’s national projects look set to move to the NHS Commissioning Board in Leeds, while its delivery functions will move to the Health and Social Care Information Centre.

Davis had told eHealth Insider that her priorities included concluding a piece of unfinished business on the NPfIT – the future of the [CSC] local service provider deal for the North, Midlands and East.

Comment:

Davis has been a strong independent voice at the Department of Health. Partly under her influence, buying decisions have passed to NHS trusts without penalties being paid by the NHS to NPfIT local service provider CSC.

It is a little worrying, though, that high-level responsibility for the rump of the NPfIT – CSC’s contracts, Choose and Book, the Spine, Summary Care Record and other centrally-managed projects and programmes – may fall to David Nicholson, Chief Executive of the NHS.

Labour appointed Nicholson in 2006 with a brief that included making a success of the NPfIT. He has been the NPfIT’s strongest advocate.

Indeed a confidential briefing paper from the Department of Health to the then PM Tony Blair in 2007 on the progress of the NPfIT said:

“… much of the programme is complete with software delivered to time and to budget.”

It is difficult to see the NPfIT being completely dismantled under David Nicholson. It’s probable that CfH will be shut down in name but recreated in other parts of the NHS, while the NPfIT programmes and projects run down very slowly.  It’s even conceivable that CSC’s and BT’s local service provider contracts will be extended before they are due to expire in 2015/16.

A comment on eHealth Insider says:

“My understanding is that NPfIT is leaving us with a legacy of ancient PAS systems barely fit for purpose which cost a fortune to operate and which will transfer to a massive service charge once national contracts end. That’s if you don’t count the most expensive PACS system in the universe. And I wonder what Lorenzo cost?”

It’s hard to argue with that. Meanwhile the costly NPfIT go-lives are due to continue, at Imperial College Healthcare NHS Trust, for example.

End game for Davis and CfH announced.

IBM won bid without lowest price – council gives detail under FOI

By Tony Collins

Excessive secrecy has characterised a deal between IBM and Somerset County Council which was signed in 2007.

Indeed I once went to the council’s offices in Taunton, on behalf of Computer Weekly, for a pre-arranged meeting to ask questions about the IBM contract. A council lawyer refused to answer most of my questions because I did not live locally.

Now (five years later) Somerset’s Corporate Information Governance Officer Peter Grogan at County Hall, Taunton, has shown that the council can be surprisingly open.

He has overturned the council’s refusal to disclose the bid prices. Suppliers sometimes complain that the public sector awards contracts to the lowest-price bidder. But …

Supplier / bid           Total cost over 10 years
BT Standard Bid          £220.552M
BT Variant Bid           £248.055M
Capita Standard Bid      £256.671M
Capita Variant Bid       £267.687M
IBM Standard Bid         £253.820M
IBM Variant Bid          £253.820M

The FOI request was made by former council employee Dave Orr who has, more than anyone, sought to hold Somerset and IBM to account for what has turned out to be a questionable deal.

Under the FOI Act, Orr asked Somerset County Council for the bid totals. It refused, saying the suppliers had given the information in confidence. Orr appealed. In granting the appeal Grogan said:

“I would also consider that the passage of time has a significant impact here as the figures included under the exemption are now some 5 years old and their commercial sensitivity is somewhat eroded.

“Whilst, at the time those companies tendering for the contract would justifiably expect the information to be confidential and that they could rely upon confidentiality clauses, I am not able to support the non-disclosure due the fact that the FOI Act creates a significant argument for disclosure that outweighs the confidentiality agreement once the tender exercise is complete and a reasonable amount of time has passed.

“I therefore do not consider this exemption [section 41] to be engaged. Please find the information you requested below…”

[In my FOI experience – making requests to central government departments – the internal review process has always proved pointless. So all credit to Peter Grogan for not taking the easy route, in this case at least.]

MP Ian Liddell-Grainger’s website on the “Southwest One” IBM deal.

IBM struggles with SAP two years on – a shared services warning.

Council accepts IBM deal as failing.

Was the Audit Commission Somerset and IBM’s unofficial PR agent?

Poor IT suppliers to face ban from contracts?

By Tony Collins

The Cabinet Office minister Francis Maude is due to meet representatives of suppliers today, including Accenture, BT, Capgemini, Capita, HP, IBM, Interserve, Logica, Serco and Steria.

They will be warned that suppliers with poor performance may find it more difficult to secure new work with the Government. The Cabinet Office says that formal information on a supplier’s performance will be available and will be taken into consideration at the start of and during the procurement process (pre-contract).

Maude will tell them that the Government is strengthening its supplier management by monitoring suppliers’ performance for the Crown as a whole.

“I want Whitehall procurement to become as sharp as the best businesses”, says Maude. “Today I will tell companies that we won’t tolerate poor performance and that to work with us you will have to offer the best value for money.”

The suppliers at today’s meeting represent around £15bn worth of central government contract spend.

The representatives will also be:

– asked their reactions on the government’s approach to business over the past two years

– briefed on the expanded Cabinet Office team of negotiators (Crown Representatives) from the private and public sectors. Maude says these negotiators aim to maximise the Government’s bulk buying power to obtain strategic discounts for taxpayers and end the days of lengthy and inflexible contracts.

Spending controls made permanent

Maude is announcing today that cross-Whitehall spending controls will be a permanent way of life. The Government introduced temporary controls in 2010 on spending in areas such as ICT and consultancy. It claims £3.75bn of cash savings in 2010/11, and efficiency savings for 2011/12 which it says are being audited.

The Cabinet Office says: “By creating an overall picture of where the money is going, the controls allow government to act strategically in a way it never could before. For example, strict controls on ICT expenditure do not just reduce costs but also reveal the software, hardware and services that departments are buying and whether there is a competitive mix of suppliers and software standards across government.”

Maude said: “Our cross-Whitehall controls on spending have made billions of cash savings for the taxpayer – something that has never been done before. That’s why I’m pleased to confirm that our controls will be a permanent feature, helping to change fundamentally the way government operates.”

Why is MoD spending more on IT when its data is poor?

By Tony Collins

The Ministry of Defence and the three services have spent many hundreds of millions of pounds on logistics IT systems over the past 20 years, and new IT projects are planned.

But the National Audit Office, in a report published today – Managing the defence inventory – found that logistics data is so unreliable and limited that it hampered the NAO’s investigation into stock levels.

“During the course of our study,” says the NAO, “the Department provided data for our analyses from a number of its inventory systems. However, problems in obtaining reliable information have limited the scope of our analysis…”

The NAO does not ask why the MoD is spending money on more IT while its data is unreliable and there are gaps in the information collected.

But the NAO does question whether new IT will solve the MoD’s information problems.

“The Department has acknowledged the information and information systems gaps and committed significant funds to system improvements. However these will not address the risk of failure across all of the inventory systems nor resolve the information shortfall.”

MPs on the Public Accounts Committee, who will question defence staff on the NAO report, may wish to ask why the MoD is apparently so anxious to hand money to IT suppliers when data is poor and new technology will not plug information gaps.

Comment:

MPs on the Public Accounts Committee found in 2003 (Progress in reducing stocks) that the MoD was buying and storing stock it did not need. Indeed, after two major fires at the MoD’s warehouses at Donnington in 1983 and 1988, more than half of the destroyed stock did not need replacing. Not much has changed, judging by the NAO’s latest report.

It’s clear that the MoD lacks good management information. Says the NAO in today’s report:

“The summary management and financial information on inventory that is provided to senior staff within Defence Equipment and Support is not sufficient for them to challenge and hold to account the project teams…”

But will throwing money at IT suppliers make much difference? The MoD plans the:

–  Future Logistics Information Services project, which is intended to bring together and replace a number of legacy inventory management systems; and

–  Management of the Joint Deployed Inventory system which will provide the armed services with a common system for the inventory they hold and manage.

But is the MoD using IT spending as proof of its commitment to improving the quality of its data and the management of its inventory?

Managing the defence inventory