Owen McCall Consulting Limited

Project Success and Failure


A few years ago I was contacted by a local doctoral student who was looking into success rates and best practices for ERP implementations. What he was looking for was local case study input for his thesis. Whenever I get these requests I try to help as much as I can. It's a good thing to do in terms of giving back to the industry in a small way, and it all helps contribute to the accumulation of knowledge around our industry.

To support him, the first thing I needed to do was select a recent ERP project I had been involved in and describe it to him. I chose the most recent one. It was not a project I had directly participated in, as it was performed in a related company, but I was very familiar with what happened. Choosing this particular project took what I thought was a dose of bravery because, while not my project, I was broadly associated with it and in my view it did not go well. It was a project that I consider to be at least troubled and most likely failed. But more on that later.

The starting point was to outline the project for him.  Broadly the project went like this:

  • The scope was to implement a fairly broad base of standard ERP functionality including finance, supply chain, warehouse management and associated data warehousing and reporting.  There was also some significant integration with other applications.
  • This was a large project in relation to the size of the organisation, but in the scheme of super projects it was at best medium sized and most likely considered small.  The original project plan had the project completed about 9 months from approval of the business case.  The work to complete the business case took another 4 months or so on top of that.
  • There was extensive work done at an executive level to ensure there was buy-in to the project and to ensure the team knew exactly what was expected in terms of executive support and benefits realisation.  All benefits were signed off by the executive team as part of the design sign off in a very public and committed way.
  • It was about then that things began to unravel.  On the company side, leadership turned over.  The project manager had to be replaced twice: once for personal reasons, as they moved on, and once for performance.  There was also a change of CEO while the implementation was in progress.  Without getting into details, the change of chief executive was accompanied by a revisiting of the benefits already agreed by the executive team.  The cracks began to emerge.  On the implementation partner side, they struggled to get the right resources engaged in the project.  This was a reasonably leading-edge implementation for New Zealand, or perhaps more correctly a first for this system and this industry.  There were several discussions about utilising overseas resources who had done this before, but for whatever reason it didn't happen.
  • As the project approached go live it experienced the usual issues of large projects: running out of time and a high volume of testing issues that took time to resolve, and as a result the project was delayed.  Better to delay the go live than to risk a troubled go live.   When we did go live, about 2 – 3 months later and despite the delay, we had a troubled go live.  There were a large number of issues but the main ones were:
    1. Users struggled to use the system and complained it took longer to process transactions than it had previously.  To make matters worse, when the system was under load it was very, very slow.  So yes, it took forever to process everything, but it wasn't clear whether the problem was a bad process or a really slow system.
    2. There were issues in the data conversion.  These issues alone were not major problems, but they quickly compounded as people got frustrated with the slow processing and the uncertainty around the accuracy of the data and began to work around the system.  It wasn't long until there was no trust in the quality of the data and the organisation began to lose visibility of their stock.  Lack of visibility of stock meant late or wrong stock movements, which meant rework and potential loss of sales.
    3. As things progressed, stress levels began to climb.  To top it all off, the financial results were dependent on the benefits of this new system being realised.  Not only were the benefits not being realised, but costs were increasing as staffing levels were temporarily increased to cope with the growing workload.  As time progressed there were many tense and angry conversations.  Alongside that, there were people and organisations who stepped up and committed to help, however and wherever they could, even if it wasn't their role to do so.

In the end, about 18 months after go live, they got there.  The system was stable, the extra people had begun to be rolled off and at least some of the benefits were beginning to be realised.  For the first time ever the company got a clean audit report with no significant data integrity issues.  The project budget was well exceeded (costs came in at over 150% of budget) and there was about $2 million in other costs incurred outside the project post go live to “get things right”.

In my mind this project was “a disaster”.  It was over budget, it was late, it caused a significant increase in short term costs for the organisation and the benefits were not fully realised and were not realised in a timely fashion.  With this narrative running through my head I was stunned when the researcher said “actually this is one of the most successful ERP projects I have heard about.”  He said it unprompted and in complete sincerity.  

From an industry point of view this is a problem.  How can this be considered a success?  If this is considered a success is it any wonder we still struggle for credibility in the boardroom?

The unfortunate thing is the research does confirm his view of the world.  The Standish Group’s Chaos report has consistently reported that IT project success rates are less than 30%.  If you look at the largest, and presumably most important, projects, success rates dip below 10%.  The Standish Group is not alone, however.  Consider this finding from a joint McKinsey and Oxford University study into the success rates of large projects, where a large project was defined as one with an initial budget over $15 million.  They found that “17 percent of IT projects go so bad that they can threaten the very existence of the company.”  If you compare the reports, the implication is that large IT projects have a higher chance of bringing the organisation to a brush with bankruptcy than they do of being successful.

One last statistic to confirm this.  A recent KPMG survey of New Zealand project management reported that only 20% of organisations were consistently realising the benefits of their projects.  Let’s turn that around: 80% of organisations are wasting millions of dollars on projects that do not deliver. I once quipped in a speech that if we were honest we would ask our leaders to approve all projects twice, because the second approval would increase our chance of success.  It seems I was too conservative and perhaps we need to ask 3, 4 or 5 times!

This needs to change.  We need to be able to deliver our technology projects consistently.  If we don’t, we stand little chance of helping our organisations to succeed. With this in mind, I would love to hear from you if you have been part of a project that struggled or failed to deliver, and to understand what you believe the reasons for this are.

Or perhaps more importantly, I would love to hear from you if you have been involved in a project that has succeeded and to understand what you believe are the reasons for this success.
