
Companies nervous over HMRC customs IT deadline?

By Tony Collins

This Computer Weekly article in 1994 was about the much-delayed customs system CHIEF. Will its CDS replacement that’s being built for the post-Brexit customs regime also be delayed by years?

The Financial Times  reported this week that UK companies are nervous over a deadline next year for the introduction of a new customs system three months before Brexit.

HMRC’s existing customs system, CHIEF (Customs Handling of Import Export Freight), copes well with about 100 million transactions a year. Its £157m replacement, which uses software from IBM and European Dynamics, is expected to handle about 255 million transactions a year, with many more complexities and interdependencies than the existing system.

If the new system fails post-Brexit and CHIEF cannot be adapted to cope, it could be disastrous for companies that import and export freight. A post-Brexit failure could also have a serious impact on the UK economy and the collection of billions of pounds in VAT, according to the National Audit Office.

The FT quoted me on Monday as calling for an independent review of the new customs system by an outside body.

I told the FT of my concern that officials will, at times, tell ministers what they want to hear. Only a fully independent review of the new customs system (as opposed to a comfortable internal review conducted by the Infrastructure and Projects Authority) would stand a chance of revealing whether the new customs system was likely to work on time and whether smaller and medium-sized companies handling freight had been adequately consulted and would be able to integrate the new system into their own technology.

The National Audit Office reported last year that HMRC has a well-established forum for engaging with some stakeholders but has

“significant gaps in its knowledge of important groups. In particular it needs to know more about the number and needs of the smaller and less established traders who might be affected by the customs changes for the first time”.

The National Audit Office said that the new system will need to cope with 180,000 new traders who will use the system for the first time after Brexit, in addition to the 141,000 traders who currently make customs declarations for trade outside the EU.

The introduction in 1994 of CHIEF was labelled a disaster at the time by some traders,  in part because it was designed and developed without their close involvement. CHIEF  was eventually accepted and is now much liked – though it’s 24 years old.

Involve end-users – or risk failure

Lack of involvement of prospective end-users is a common factor in government IT disasters. It happened on the Universal Credit IT programme, which turned out to be a failure in its early years, and on the £10bn National Programme for IT which was dismantled in 2010. Billions of pounds were wasted.

The FT quoted me as saying that the chances of the new customs system CDS (Customs Declaration Service) doing all the things that traders need it to do from day one are almost nil.

The FT quotes one trader as saying,

“HMRC is introducing a massive new programme at what is already a critical time. It would be a complex undertaking at the best of times but proceeding with it at this very moment feels like a high stakes gamble.”

HMRC has been preparing to replace CHIEF with CDS since 2013. Its civil servants say that the use of the SAFe agile methodology, combined with the skills and capabilities of its staff, means that programme risks and issues will be effectively managed.

But, like other government departments, HMRC does not publish its reports on the state of major IT-related projects and programmes. One risk, then,  is that ministers may not know the full truth until a disaster is imminent.

In the meantime ministerial confidence is likely to remain high.

Learning from past mistakes?

HMRC has a mixed record on learning from past failures of big government IT-based projects. Taking some of the lessons from “Crash”, these are the good things about the new customs project:

  • It’s designed to be simple to use – a rarity for a government IT system. Last year HMRC reduced the number of system features it plans to implement from 968 to 519. It considered that there were many duplicated and redundant features listed in its programme backlog.
  • The SAFe agile methodology HMRC is using is supposed to help organisations implement large-scale, business-critical systems in the shortest possible time.
  • HMRC is directly managing the technical development and is carrying out this work using its own resources, independent contractors and the resources of its government technology company, RCDTS. Last year it had about 200 people working on the IT programme.

These are the potentially bad things:

  • It’s not HMRC’s fault but it doesn’t know how much work is going to be involved because talks over the post-Brexit customs regime are ongoing.
  • It’s accepted in IT project management that a big bang go-live is not a good idea. The new Customs Declaration Service is due to go live in January 2019, three months before Britain is due to leave the EU. The CHIEF system was commissioned from BT in 1989 and its scheduled go-live was delayed by two years; in pre-live trials CHIEF rejected hundreds of test customs declarations for no obvious reason. Could CDS be delayed by two years as well?
  • The new service will use, at its core, commercially available software from IBM to manage customs declarations and software from European Dynamics to calculate tariffs. The use of software packages is a good idea – but not if they need large-scale modification. Tampering with proven packages is a much riskier strategy than developing software from scratch. The new system will also need to integrate with other HMRC systems and a range of third-party systems, and to provide information to 85 systems across 26 other government bodies.
  • If a software package works well in another country it almost certainly won’t work when deployed by the UK government. The customs declaration management component at the core of the new system works well in the Netherlands, but there it is not integrated with other systems, as it will need to be at HMRC, and it handles only 14 million declarations a year.
  • The IBM component has been tested in laboratory conditions to cope with 180 million declarations a year, but the UK may need to process about 255 million (see the rough arithmetic after this list).
  • Testing software in laboratory conditions will give you little idea of whether it will work in the field. This was one of the costly lessons from the NHS IT programme NPfIT.
  • The National Audit Office said in a report last year that HMRC’s contingency plans were under-developed and that there were “significant gaps in staff resources”.
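To put those declaration volumes into perspective, here is a rough back-of-the-envelope calculation. It is only illustrative: the annual totals are the figures quoted above, while the peak-to-average multiplier is purely an assumption.

    # Rough arithmetic on customs declaration volumes (illustrative only).
    # The annual totals are those quoted above; the peak factor is an assumption.

    SECONDS_PER_YEAR = 365 * 24 * 3600

    tested_capacity = 180_000_000   # declarations/year the IBM component was lab-tested for
    expected_volume = 255_000_000   # declarations/year the UK may need post-Brexit

    avg_rate = expected_volume / SECONDS_PER_YEAR   # average declarations per second
    shortfall = expected_volume - tested_capacity   # volume beyond tested capacity, per year

    # Traffic is never evenly spread; assume (purely for illustration) that peak
    # periods run at five times the average rate.
    assumed_peak_factor = 5
    peak_rate = avg_rate * assumed_peak_factor

    print(f"Average rate:  {avg_rate:.1f} declarations/second")
    print(f"Assumed peak:  {peak_rate:.1f} declarations/second")
    print(f"Volume beyond tested capacity: {shortfall:,} declarations/year")

On those assumptions the system would need to sustain roughly eight declarations a second on average, with peaks several times higher – and a quarter of the expected annual volume lies beyond what has been tested in the laboratory.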

Comment

HMRC has an impressive new CIO, Jackie Wright, but whether she will be given the freedom she needs within Whitehall’s restrictive practices is uncertain. It seems that the more talented the CIO, the more they’re made to feel like an outsider by senior civil servants who haven’t worked in the private sector. It’s a pity that the best CIOs don’t usually last long in Whitehall.

Meanwhile HMRC’s top civil servants and IT specialists seem to be confident that CDS, the new customs system, will work on time. Their confidence is not reassuring. Ministers and civil servants publicly and repeatedly expressed confidence that Universal Credit would be fully rolled out by the end of 2017. Now it’s running five years late. The NHS IT programme NPfIT was to have been rolled out by 2015. By 2010 it was dismantled as hopeless.

With some important exceptions, Whitehall’s track record on IT-related projects is poor – and that’s when what is needed is known. Brexit is still being negotiated. How can anyone build a new bridge without knowing how long it will need to be or what the many and varied external stresses will be?

If the new or existing systems cannot cope with customs declarations after Brexit it may not be the fault of HMRC. But that’ll be little comfort for the hundreds of thousands of traders whose businesses rely, in part, on a speedy and efficient customs service.

FT article – UK companies nervous over deadline for new Customs system


Real time information – the good and bad

By Tony Collins

Widespread publicity over the past week has drawn attention to inaccuracies in Real Time Information, HMRC’s system for handling PAYE submissions from employers every time they pay an employee rather than at the year-end. The Daily Telegraph broke the story with the headline

“Five million UK workers face uncertainty after tax bills wrongly calculated twice in HMRC blunder”

The BBC said tax  statement errors affect thousands of people.  Accountancy Live reported that tax experts were urging HMRC to review RTI to see if it’s fit for purpose. The FT reported HMRC as admitting that an “unknown number of inaccurate P800 statements and payment orders for the 2013/14 tax year had been sent to taxpayers since September 15”.

But HMRC says that RTI is a success for more than 98% of those employers who have to use it.

Tens of millions of PAYE employees are now on RTI – and if the system were a disaster HMRC and MPs would be deluged with complaints. That hasn’t happened.

Indeed the National Audit Office, in its audit of HMRC’s 2013/14 accounts, was complimentary about the ability of RTI to give employees the correct tax code when their jobs change – thereby reducing the levels of under and overpayments.

“Data quality has improved and HMRC’s own evaluation suggests that RTI is helping to change employer behaviour by encouraging them to tell HMRC of changes in employee circumstances earlier,” said the NAO.

RTI – the good and bad

The good news for HMRC and the government’s welfare reformers is that Universal Credit, which relies on RTI to calculate benefits, is running well behind its original schedule.

UC is rolling out to a small number of people – fewer than 12,000 by 14 August 2014 –  rather than the expected 184,000 by April 2014, according to the DWP’s revised December 2012 business case.  This means that inaccuracies in RTI will have little effect on UC for the foreseeable future.

The bad

If RTI cannot be relied on to provide accurate information on whether Universal Credit claimants are paying the right amount of tax, UC cannot be relied on to provide correct payments to claimants – which would undermine the welfare reform programme.

Another problem is that tax experts are weary of HMRC’s repeated blaming of employers for RTI’s problems. One of the reasons RTI contains inaccuracies is that HMRC uses employers’ changing internal “works” numbers as individual identifiers, as well as the National Insurance Number.

Employers change their payroll works numbers for a variety of reasons, say when an employee is promoted to management, when the company wants to distinguish various groups for the employer’s own purpose, or when an employee moves location.

The works number is for the internal use of the employer but is included in information submitted to HMRC. The number is “owned” by employers and is for them to use and administer as they see fit. It should have no relevance to HMRC.  But when the works number changes, it can trigger a false assumption in HMRC’s systems that the employee has two employments, with the same employer.  This would generate an incorrect tax code – and would be HMRC’s fault, not the employer’s.
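The effect is easy to reproduce in outline. The sketch below is purely illustrative – the field names and matching rule are assumptions, not HMRC’s actual schema – but it shows how matching PAYE submissions on the employer’s works number, as well as the National Insurance number, can conjure up a phantom second employment when the works number changes mid-year.

    # Illustrative sketch only: field names and the matching rule are assumptions,
    # not HMRC's real schema. It shows how keying employment records on an
    # employer's internal "works number" can create a phantom second employment
    # when that number changes mid-year.

    employments = {}  # (ni_number, employer_ref, works_number) -> running totals

    def record_submission(ni_number, employer_ref, works_number, pay):
        # The works number forms part of the matching key.
        key = (ni_number, employer_ref, works_number)
        employments.setdefault(key, {"total_pay": 0})
        employments[key]["total_pay"] += pay

    # Same person, same employer - but the works number changes after a promotion.
    record_submission("QQ123456C", "123/AB456", works_number="0042", pay=2000)
    record_submission("QQ123456C", "123/AB456", works_number="9042", pay=2500)

    # The system now "sees" two concurrent employments with the same employer,
    # which can generate an incorrect tax code - HMRC's fault, not the employer's.
    print(len(employments))  # prints 2

Matching on the National Insurance number and employer reference alone would treat both submissions as the same employment; including the works number in the key is what splits them in two.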

Steve Wade, tax director at KPMG, puts it well.  He says of the latest publicity about RTI errors:

“These systems issues are causing so called ‘employer errors’, which is where the data supplied by the employer is not processed by HMRC systems as expected.

“Sometimes this can be due to bad data being supplied but equally it can be due to errors in HMRC systems which were not designed to deal with all the complexities of PAYE.

“The upshot for employers and employees is that they find that the PAYE tax and National Insurance Contributions that have been paid do not match those calculated by HMRC, despite their providing the information as requested.  As a result, they now face uncertainty over whether they have paid the right amount of tax.

“There needs to be some significant and urgent investment in the processing and back end software systems at HMRC which collect and process this data to generate the operational efficiencies envisaged when the whole RTI initiative was conceived.”

Wade told Accountancy Live: “At the moment, RTI just does not seem to be delivering information that is real. What we need is a thorough investigation of what has happened by a team which includes not just HMRC personnel but external specialists. Only that will give the necessary degree of confidence in the system that is vital for everyone who depends upon it.”

Natalie Miller, President of the Association of Taxation Technicians, says of RTI’s inaccuracies:

“This is an alarming revelation and further underscores the need for collaboration with external stakeholders, all of whom have a vested interest in the success of RTI.

“We have been drawing HMRC’s attention to the quirks and complexities of RTI in meetings and correspondence from its inception. We have also highlighted the significant burdens it places on employers and agents. What we are seeing now are real and serious practical problems for possibly many thousands of employees at a time when building confidence in the system is crucial.

“Some of those difficulties might have been avoided if HMRC had heeded advice from ATT and similar bodies at an early stage.

“In light of this latest revelation, we are calling for an urgent review of the RTI system to ensure that it is fit for purpose. This is essential because every employer and employee is entitled to know that PAYE is being dealt with properly. It is doubly important because the RTI system underpins the Universal Credit system that is being rolled out by the Department for Work and Pensions to replace certain state benefits.

“If, as HMRC’s reported comments suggest, the particular problem arose because employers had failed to send in final payment statements for the full 2013/2014 tax year, that suggests two things.

“Firstly, that the process is simply too complex for employers to understand. Secondly, that either HMRC know the information to be incomplete and are failing to address this before placing reliance on the information, or HMRC do not know the information is incomplete, which raises the equally worrying prospect that the system cannot identify when important information is missing.

“It is in nobody’s interest that RTI stumbles from problem to problem; that threatens its credibility. We all need a system that does what it says on the tin. At the moment, Real Time Information just does not seem to be delivering information that is real. What we need is a thorough investigation of what has happened by a team which includes not just HMRC personnel but external specialists.

“Only that will give the necessary degree of confidence in the system that is vital for everyone who depends upon it (employees, pensioners, employers, payroll bureaux, tax advisers, other parts of government and HMRC itself). The review’s remit should extend to other areas of RTI where systemic problems have been identified. The ATT and many other professional bodies stand ready to assist HMRC in that review.”

George Bull, senior tax partner at Baker Tilly, said that the RTI system had so far failed to demonstrate that it can put an end to the annual problem of incorrect tax demands and refunds. “It seems to me that in 2014, this is a pretty sorry state to be in.”

HMRC note to employers, professional bodies and business groups in full (published by Accountingweb)

“We are today emailing our stakeholders to explain that we are aware that a number of employees recently received a form 2013-14 P800 which was issued during our bulk 2013-14 End of Year reconciliation exercise.

“The 2013-14 P800 shows an incorrect overpayment or underpayment where the pay and tax shown on the P800 is incorrect and does not match that shown on their 2013-14 P60.

“The most common scenarios are where:

  • An incorrect overpayment is created as the 2013-14 reconciliation is based upon the Full Payment Submission (FPS) up to month 11 although the employment continued all year.
  • Where the year to date figures supplied are incorrect, for example where an employer reference changed in-year and the previous pay and tax is incorrectly included in the “year to date” (YTD) totals.
  • We have received an “Earlier Year Update” (EYU) and this is yet to be processed to the account.
  • There is a duplicate employment (often caused by differences in works numbers and other changes throughout the year)

“We are urgently investigating these cases and will look to resolve the matter in the next 6-8 weeks.

“We currently do not know the scale of the issue, but some large employers are involved, so several thousands of employees may be affected.

“Next Steps

“We are very sorry that some customers will receive an incorrect 2013-14 P800 tax calculation.

“We are urgently investigating these cases and will look to resolve the matter and issue a revised P800 to the employee in the next 6-8 weeks.

“Employers and their agents should not send any 2013-14 EYUs unless requested by us. We are aware that there are still some 2013-14 EYUs which we have yet to process to the relevant account.

“If an employee asks about a 2013-14 P800 which they think is incorrect, they should advise them:

  • Not to repay any underpayment shown on the P800
  • Not to cash any payable order they may have received
  • Employees will not be affected by the incorrect tax code as we will issue a revised P800 before Annual Coding.”

Comment

RTI is not a disaster but it’s clearly not in a fit state to support Universal Credit – another uncertainty for UC. When the National Audit Office reports on UC, as it is due to do in the next few weeks, it would be useful if it also reported on the state of RTI.

If it does, the NAO should not take at face value HMRC’s claims that the fault with RTI lies mainly with employers.

[The NAO will find that, even after the modernisation of PAYE processes, the systems still incorporate COP/CODA/BROCS software that dates back more than 30 years.]

A welcome boost for agile in government

By Tony Collins

David Wilks, Digital Performance Manager at Government Digital Service, which is part of the Cabinet Office, says there has been “incredible” interest in clarified guidance that makes it easier for departments to obtain funding for agile projects.

The guidance applies to major projects.

Wilks says on the GDS blog that the guidance will “cut bureaucracy and encourage innovation, making digital transformation easier across government”.

It means that, in most cases, government organisations can spend up to £750,000 on the first two phases of a government agile project, discovery and alpha, on the basis of Cabinet Office spending controls – without needing an HM Treasury business case.

The guidance means:

  • more use of “light-touch” Programme Business Cases
  • using agile discovery to replace the Strategic Outline Case in most cases
  • avoiding the need for a separate Full Business Case stage where procurement uses a pre-competed arrangement such as the Digital Services Framework

“For agile and finance teams in government departments, this guidance clarification has produced incredible interest,” says Wilks.

Comment

It seems fashionable to criticise the use of agile in government, perhaps because agile requires a mindset and culture that may be alien in parts of the civil service. But done well agile could help to modernise and reform central government administration.  It’s not a cure for all the problems of bloated government IT and it has risks, among them:

–  Zeno’s paradox, where a project is perpetually on the point of delivering successfully but never actually does – as with the BBC’s Digital Media Initiative.

–  A so-called agile project that combines waterfall and agile approaches. A project is either waterfall or agile; it’s difficult to see how it can be both. Projects that have taken a hybrid agile-waterfall approach have not been successful: Universal Credit, the BBC’s DMI and an Oracle IT-related project disaster in Oregon.

That said, investigators of the “Cover Oregon” failure seem now to advocate a purer form of agile as one solution. A highly critical official report into the failure has some positive comments on agile:

“Since September 2013, CO [Cover Oregon] has been utilizing a home grown development process which is based upon agile methodologies. There are seven functional areas within the process, referred to as tables, with each table having a dedicated table lead (a mini project manager) and a dedicated business analyst. This process appears to be well orchestrated.

“Each morning there are daily “scrum” meetings for the different functional areas. While not rigidly adhering to the formal agile scrum format, these meetings serve a valuable purpose in providing a regular opportunity for various parties from a functional area to provide the latest updates on the progress across the outstanding major defects/issues …”


With some reservations the Cabinet Office’s initiative to cut bureaucracy and make it easier for departments to adopt agile is welcome.


Trust spends £16.6m on consultants for Cerner EPR

By Tony Collins

Reading-based Royal Berkshire NHS Foundation Trust says in an FOI response that its spending on “computer consultants since the inception of the EPR system is £16.6m”.

The Trust’s total spend on the Cerner Millennium system was said to have been £30m by October 2012.

NHS IT suppliers have told me that the typical cost of a Trust-wide EPR [electronic patient record] system, including support for five years, is about £6m-£8m, which suggests that the Royal Berkshire has spent £22m more than necessary on new patient record IT.

Jonathan Isaby, Taxpayers’ Alliance political director, said: “This is an astonishing amount of taxpayers’ money to have squandered on a system which is evidently failing to deliver results.

“Every pound lost to this project is a pound less available for frontline medical care. Those who were responsible for the failure must be held to account for their actions as this kind of waste cannot go unchecked.”

 The £16.6m consultancy figure was uncovered this week through a Freedom of Information request made by The Reading Chronicle. It had asked for the spend on consultants working on the Cerner Millennium EPR [which went live later than expected in June 2012].

The Trust replied: “Further to your request for information the costs spent on computer consultants since the inception of the EPR system is £16.6m.”

The Chronicle says that the system is “meant to retrieve patient details in seconds, linking them to the availability of surgeons, beds or therapies, but has forced staff to spend up to 15 minutes navigating through multiple screens to book one routine appointment, leading to backlogs on wards and outpatient clinics”.

Royal Berkshire’s chief executive Edward Donald had said the Cerner Millennium go live was successful.  A trust board paper said:

 “The Chief Executive emphasised that, despite these challenges, the ‘go-live’ at the Trust had been more successful than in other Cerner Millennium sites.”

A similar, stronger message appeared in a separate board paper which was released under FOI. Royal Berkshire’s EPR [electronic patient record] Executive Governance Committee minutes said:

“… the Committee noted that the Trust’s launch had been considered to be the best implementation of Cerner Millennium yet and that despite staff misgivings, the project was progressing well. This positive message should also be disseminated…”

Comment

Royal Berkshire went outside the NPfIT. But its costs are even higher than the breathtakingly high costs to the taxpayer of NPfIT Cerner and Lorenzo implementations.

As senior officials at the Department of Health have been so careless with public funds over NHS IT – and have spent millions on the same sets of consultants – they are in no position to admonish Royal Berkshire.

So who can criticise Royal Berkshire and should its chief executive be held accountable?

When it’s official policy to spend tens of millions on EPRs that may or may not make things better for hospitals and patients – and could make things much worse – how can accountability play any part in the purchase of the systems and consultants?

The enormously costly Cerner and Lorenzo EPR implementations go on – in an NHS IT world that is largely without credible supervision, control, accountability or regulation.

Cash squandered on IT help

Trust loses £18m on IT system

The best implementation of Cerner Millennium yet?

Universal Credit – good for its IT suppliers?

By Tony Collins

The DWP is conceding in its own tangential way that the IT for Universal Credit is not up to scratch; and an article in the Daily Telegraph suggests that Universal Credit this year (and perhaps well beyond) will handle so few claimants that the calculations for the time being could be done by hand, or on a spreadsheet, and not automatically by IT systems. The Register, through anonymous sources, has confirmation of this.

The FT says the progressive national rollout of the coalition’s welfare reform will extend to just six additional jobcentres, which it said was the “latest sign the project is falling behind schedule”. It added that a significant shake-up of the IT underpinning Universal Credit is under way.

The DWP said David Pitchford, the Whitehall troubleshooter who took over the running of Universal Credit for three months, had been asked to “review” the IT and ministers had “accepted his recommendation that they should explore enhancing the IT for universal credit working with the government digital service”.

“Advancements in technology since the current system was developed have meant that a more responsive system that is more flexible and secure could potentially be built,” said the DWP.

The FT quoted Howard Shiplee, who has led the Universal Credit  project since May, as denying claims from MPs that the original IT had been dumped because it had not delivered. “The existing systems that we have are working, and working effectively,” he said.

He added that he had set aside 100 days not to stop the programme, but to reflect on where it had got to and to start looking at the entire plan.

Iain Duncan Smith, the work and pensions secretary, doesn’t concede that the  timetable for the implementation of Universal Credit has changed. He told the work and pensions committee on Wednesday that numbers of claimants would ramp up during 2014 and he insisted that all claimants would be on the system by 2017, as originally planned.

“We get fixated on things like IT; the reality is it’s about a cultural shift,” Duncan Smith told MPs.

Comment

Iain Duncan Smith makes it clear that his DWP staff and suppliers, with the help of HMRC, are implementing Universal Credit with extreme care. Labour’s  work and pensions spokesman Liam Byrne says the Universal Credit project is a shambles. The truth is hard to fathom.

For years the DWP has rejected press reports that the IT for Universal Credit was in trouble. It has been able to do so without fear of authoritative contradiction because it keeps secret all its consultancy reports on the state of the Universal Credit project, despite FOI requests.

The Cabinet Office minister Francis Maude and his officials talk much about the need for openness and transparency. Isn’t it time they persuaded DWP officials to release their internal and external reports on the detailed challenges faced by suppliers and civil servants on Universal Credit and other major government IT projects?

All big government IT projects are characterised by secrecy and defensiveness, although a little information about them is in the vague and subjectively-worded Major Projects Authority annual report.

One by-product of departmental defensiveness and secrecy is that the IT suppliers – in Universal Credit’s case HP, IBM and Accenture – are likely to continue to be paid even if the project is halted and redesigned. It’s probable the suppliers would argue that they have successfully done what they were asked to do in the contract. Who knows what the truth is?

The DWP is in effect protecting its suppliers from public and parliamentary scrutiny. It has been this way for decades and nothing has changed.

Some lessons from a major outage

By Tony Collins

One of the main selling points of remote hosting is that you don’t have to worry about security, and uptime is guaranteed. Support is 24x7x365. State-of-the-art data centres offer predictable, affordable monthly charges.

In the UK more hospitals are opting for remote hosting of business-critical systems. Royal Berkshire NHS Foundation Trust and Homerton University Hospital NHS Foundation Trust are among those taking remote hosting from Cerner, their main systems supplier.

More trusts are expected to do the same, for good reasons: remote hosting from Cerner will give Royal Berkshire a single point of contact to deal with on technical problems without the risks and delay of ascertaining whether the cause is hardware, third party software or application related.

But what happens when the network goes down – across the country and possibly internationally?

With remote hosting of business-critical systems becoming more widespread it’s worth looking at some of the implications of a major outage.

A failure at CSC’s Maidstone data centre in 2006 was compounded by problems with its recovery data centre in Royal Tunbridge Wells. Knock-on effects extended to information services in the North and West Midlands. The outage affected 80 trusts that were moving to CSC’s implementation of Lorenzo under the NPfIT.

An investigation later found that CSC had been over-optimistic when it informed NHS Connecting for Health that the situation was under control. Chris Johnson, a professor of computing science at Glasgow University, has written an excellent case study on what happened and how the failure knocked out primary and secondary levels of protection. What occurred was a sequence of events nobody had predicted.

Cerner outage

Last week Cerner had a major outage across the US. Its international customers might also have been affected.

InformationWeek Healthcare reported that Cerner’s remote hosting service went down for about six hours on Monday, 23 July. It hit “hospital and physician practice clients all over the country”. Information Week said the unusual outage “reportedly took down the vendor’s entire network” and raised “new questions about the reliability of cloud-based hosting services”.

A Cerner spokesperson, Kelli Christman, told InformationWeek,

“Cerner’s remote-hosted clients experienced unscheduled downtime this week. Our clients all have downtime procedures in place to ensure patient safety. The issue has been resolved and clients are back up and running. A human error caused the outage. As a result, we are reviewing our training protocol and documented work instructions for any improvements that can be made.”

Christman did not respond to a question about how many Cerner clients were affected. HIStalk, a popular health IT blog, reported that hospital staff resorted to paper but it is unclear whether they would have had access to the most recent information on patients.

One Tweet by @UhVeeNesh said “Thank you Cerner for being down all day. Just how I like to start my week…with the computer system crashing for all of NorCal.”

Another by @wsnewcomb said “We have not charted any pts [patients] today. Not acceptable from a health care leader.”

Cerner Corp tweeted “Our apologies for the inconvenience today. The downtime should be resolved at this point.”

One HIStalk reader praised Cerner communications. Another didn’t:

“Communication was an issue during the downtime as Cerner’s support sites was down as well. Cerner unable to give an ETA on when systems would be back up. Some sites were given word of possible times, but other sites were left in the dark with no direction. Some sites only knew they were back up when staff started logging back into systems.

“Issue appears to have something to do with DNS entries being deleted across RHO network and possible Active Directory corruption. Outage was across all North America clients as well as some international clients.”

Colleen Becket, chairman and co-CEO of Vurbia Technologies, a cloud computing consultancy, told InformationWeek Healthcare that NCH Healthcare System, which includes two Tampa hospitals, had no access to its Cerner system for six hours. The outage affected the facilities and NCH’s ambulatory-care sites.

Lessons?

A HIStalk reader said Cerner has two electronic back-up options for remote hosted clients. Read-only access would have required the user to be able to log into Cerner’s systems, which wouldn’t have been possible with the DNS servers out of action last week.

Another downtime service downloads patient information to local computers, perhaps at least one on each floor, at regularly scheduled intervals, updated say every five minutes. “That way, even if all connection with [Cerner’s data centre] is lost, staff have information (including meds, labs and more) locally on each floor which is accurate up to the time of the last update”.

Finally, says the HIStalk commentator, “since this outage was due to a DNS problem, anyone logged into the system at the time it went down was able to stay logged in. This allowed many floors to continue to access the production system even while most of the terminals couldn’t connect.”
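The local-snapshot fallback described above is straightforward to sketch. The code below is only an illustration of the idea – the function names, file location and five-minute interval are assumptions, not a description of Cerner’s actual downtime service. A scheduled job pulls the latest patient summaries onto a machine on each ward, so something reasonably current stays readable locally if the hosted system, or DNS, disappears.

    # Illustrative sketch of a periodic local-snapshot fallback, as described above.
    # Function names, the file location and the five-minute interval are assumptions,
    # not a description of Cerner's actual downtime service.

    import json
    import time

    SNAPSHOT_PATH = "ward_snapshot.json"   # local file on a ward workstation
    INTERVAL_SECONDS = 5 * 60              # refresh every five minutes

    def fetch_patient_summaries():
        """Placeholder for a query to the remote-hosted system over the network."""
        return [{"patient_id": "12345", "meds": [], "labs": []}]

    def refresh_snapshot():
        """Write the latest summaries to local disk; keep the old copy on failure."""
        try:
            data = fetch_patient_summaries()
            with open(SNAPSHOT_PATH, "w") as f:
                json.dump({"taken_at": time.time(), "patients": data}, f)
        except Exception:
            # Network or DNS failure: the previous snapshot stays readable locally.
            pass

    if __name__ == "__main__":
        while True:
            refresh_snapshot()
            time.sleep(INTERVAL_SECONDS)

The trade-off is the one the article raises: every ward-level cache is another system to buy, secure and keep patched, which is why the cost question below matters.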

But could the NHS afford a remote hosted service, and a host of on-site back-up systems?

Common factors in health IT implementation failures

In its discussion on the Cerner outage, HIStalk put its finger on the common causes of hospital IT implementation failures. It says the main problems are usually:

– a lack of customer technical and implementation resources;
– poorly developed, self-deceiving project budgets that don’t support enough headcount, training, and hardware to get the job done right;
– letting IT run the project without getting users properly involved;
– unreasonable and inflexible timelines, as everybody wants to see something light up quickly after spending millions; and
– expecting that just implementing new software will clear away all the bad decisions (and indecisions) of the past and force a fresh corporate agenda on users and physicians, with the supplier being the convenient whipping boy for any complaints about ambitious and sometimes oppressive changes that the culture just can’t support.

Cerner hosting outage raises concerns

HIStalk on Cerner outage

Case study on CSC data centre crash in 2006

IBM won bid without lowest-price – council gives detail under FOI

By Tony Collins

Excessive secrecy has characterised a deal between IBM and Somerset County Council which was signed in 2007.

Indeed I once went to the council’s offices in Taunton, on behalf of Computer Weekly, for a pre-arranged meeting to ask questions about the IBM contract. A council lawyer refused to answer most of my questions because I did not live locally.

Now (five years later) Somerset’s Corporate Information Governance Officer Peter Grogan at County Hall, Taunton, has shown that the council can be surprisingly open.

He has overturned a refusal of the council to give the bid prices. Suppliers sometimes complain that the public sector awards contracts to the lowest-price bidder. But …

Supplier / Bid          Total cost over 10 years
BT Standard Bid         £220.552m
BT Variant Bid          £248.055m
Capita Standard Bid     £256.671m
Capita Variant Bid      £267.687m
IBM Standard Bid        £253.820m
IBM Variant Bid         £253.820m

The FOI request was made by former council employee Dave Orr who has, more than anyone, sought to hold Somerset and IBM to account for what has turned out to be a questionable deal.

Under the FOI Act, Orr asked Somerset County Council for the bid totals. It refused, saying the suppliers had given the information in confidence. Orr appealed. In granting the appeal Grogan said:

“I would also consider that the passage of time has a significant impact here as the figures included under the exemption are now some 5 years old and their commercial sensitivity is somewhat eroded.

“Whilst, at the time those companies tendering for the contract would justifiably expect the information to be confidential and that they could rely upon confidentiality clauses, I am not able to support the non-disclosure due the fact that the FOI Act creates a significant argument for disclosure that outweighs the confidentiality agreement once the tender exercise is complete and a reasonable amount of time has passed.

“I therefore do not consider this exemption [section 41] to be engaged. Please find the information you requested below…”

[In my FOI experience – making requests to central government departments – the internal review process has always proved pointless. So all credit to Peter Grogan for not taking the easy route, in this case at least.]

MP Ian Liddell-Grainger’s website on the “Southwest One” IBM deal.

IBM struggles with SAP two years on – a shared services warning.

Council accepts IBM deal as failing.

Was Audit Commission Somerset and IBM’s unofficial PR agents?

Poor IT suppliers to face ban from contracts?

By Tony Collins

The Cabinet Office minister Francis Maude is due to meet representatives of suppliers today, including Accenture, BT, Capgemini, Capita, HP, IBM, Interserve, Logica, Serco and Steria.

They will be warned that suppliers with poor performance may find it more difficult to secure new work with the Government. The Cabinet Office says that formal information on a supplier’s performance will be available and will be taken into consideration at the start of and during the procurement process (pre-contract).

Maude will tell them that the Government is strengthening its supplier management by monitoring suppliers’ performance for the Crown as a whole.

“I want Whitehall procurement to become as sharp as the best businesses”, says Maude. “Today I will tell companies that we won’t tolerate poor performance and that to work with us you will have to offer the best value for money.”

The suppliers at today’s meeting represent around £15bn worth of central government contract spend.

The representatives will also be:

– asked for their reactions to the government’s approach to business over the past two years

– briefed on the expanded Cabinet Office team of negotiators (Crown Representatives) from the private and public sectors. Maude says these negotiators aim to maximise the Government’s bulk buying power to obtain strategic discounts for taxpayers and end the days of lengthy and inflexible contracts.

Spending controls made permanent

Maude is announcing today that cross-Whitehall spending controls will be a permanent way of life. In 2010 the Government introduced temporary controls on spending in areas such as ICT and consultancy. It claims £3.75bn of cash savings in 2010/11, and efficiency savings for 2011/12 which it says are being audited.

The Cabinet Office says: “By creating an overall picture of where the money is going, the controls allow government to act strategically in a way it never could before. For example, strict controls on ICT expenditure do not just reduce costs but also reveal the software, hardware and services that departments are buying and whether there is a competitive mix of suppliers and software standards across government.”

Maude said: “Our cross-Whitehall controls on spending have made billions of cash savings for the taxpayer – something that has never been done before. That’s why I’m pleased to confirm that our controls will be a permanent feature, helping to change fundamentally the way government operates.”

Why is MoD spending more on IT when its data is poor?

By Tony Collins

The Ministry of Defence and the three services have spent many hundreds of millions of pounds on logistics IT systems over the past 20 years, and new IT projects are planned.

But the National Audit Office, in a report published today – Managing the defence inventory – found that logistics data is so unreliable and limited that it hampered the NAO’s own investigation of stock levels.

“During the course of our study,” says the NAO, “the Department provided data for our analyses from a number of its inventory systems. However, problems in obtaining reliable information have limited the scope of our analysis…”

The NAO does not ask why the MoD is spending money on more IT while data is unreliable and there are gaps in the information collected.

But the NAO does question whether new IT will solve the MoD’s information problems.

“The Department has acknowledged the information and information systems gaps and committed significant funds to system improvements. However these will not address the risk of failure across all of the inventory systems nor resolve the information shortfall.”

MPs on the Public Accounts Committee, who will question defence staff on the NAO report, may wish to ask why the MoD is apparently so anxious to hand money to IT suppliers when data is poor and new technology will not plug information gaps.

Comment:

MPs on the Public Accounts Committee found in 2003 (Progress in reducing stocks) that the MoD was buying and storing stock it did not need. Indeed after two major fires at the MoD’s warehouses at Donnington in 1983 and 1988 more than half of the destroyed stock did not need replacing. Not much has changed judging by the NAO’s latest report.

It’s clear that the MoD lacks good management information. Says the NAO in today’s report:

“The summary management and financial information on inventory that is provided to senior staff within Defence Equipment and Support is not sufficient for them to challenge and hold to account the project teams…”

But will throwing money at IT suppliers make much difference? The MoD plans:

–  the Future Logistics Information Services project, which is intended to bring together and replace a number of legacy inventory management systems; and

–  the Management of the Joint Deployed Inventory system, which will provide the armed services with a common system for the inventory they hold and manage.

But is the MoD using IT spending as proof of its commitment to improving the quality of its data and the management of its inventory?

Managing the defence inventory

RBS/Natwest: Some lessons from the IT crash – Bank of England Governor.

By Tony Collins

Sir Mervyn King, Governor of the Bank of England, promised today that there would be a “very detailed inquiry” once RBS/Natwest is back to normal.

Such a report would be unusual because the cause or causes of IT-related crashes in the public and private sectors are usually kept secret unless, in rare cases, a legal action comes to court.

Mervyn King told the Treasury Committee today:

“Once the current difficulties are over then we will need the FSA to go in and carry out a very detailed investigation to find out first of all what went wrong but even more importantly why it took so long to recover.

“Computer systems will always go wrong from time to time. The important things are your back-up systems and the time it takes to implement recovery. As of now we have kept in very close touch. My office was in touch with senior RBS management right through the weekend. Our banking director was in touch with RBA and FSA on this right since this problem began … It is still going to take time to catch up, to get back to normal.

“The important thing now is that we provide whatever support is needed to let them put it right. Once it is back to normal then we must carry out a very detailed inquiry.

“To my mind, one of the big lessons from this is that it shows everyone is how important the basic functions of banking actually are: what can go wrong when the system of payments from person to another is interrupted. Fortunately it has been one bank, albeit a very big bank, and customers of that bank have been affected, and of course customers of other banks have been affected, and payments have not gone through.

“I hope this is a reminder, a demonstration, to everyone, for example, of what might have happened if we had not rescued RBS in the autumn of 2008. The whole payment system would have collapsed. [It is] why it is so important to ensure you have a banking system where the people running it are completely focused on this essential service function of banking to provide … customers with a functioning payment system.

Learning from supermarkets.

“I have been driven by the belief that the nature of banking and providing these kinds of services is very different from investment banking operations. Those are important but they are very different. When you go out and see how supermarkets operate, the senior management is utterly focused on ensuring that the IT systems, the ordering systems, the delivery system, works hour by hour. That is very important to ensure that that is true of the banking system as well…”

Comment:

History is, to some extent, the story of the unforeseen, so a published report on the cause of the problems at RBS/Natwest could be helpful to other banks and major organisations whose ageing systems are vulnerable to an unforeseen failure of huge proportions.

A published report on the crisis may show systemic management failures. The mere fear of such a report would be an added deterrent – additional to potential losses and payments of compensation – to any bank that does not give the attention it should to operational systems, even when those systems support a retail banking operation that may represent a small part, perhaps only 2%, of a bank’s balance sheet.

It is ironic that RBS is publicly owned. Will the IT disaster now be added to the list of other public sector failures? Did RBS, now in the public sector, drop its IT-related standards and caution in part because the commercial imperative was absent?

Natwest/RBS – what went wrong?