Cobb’s Paradox

Martin Cobb worked for the Secretariat of the Treasury Board of Canada. In 1995, after reviewing ten complex information and communications technology (ICT) projects, he asked: “We know why projects fail; we know how to prevent their failure; so why do they still fail?” This has since become known as Cobb’s paradox.

Over 20 years later, organisations still struggle. Richard Self (University of Derby) published a fascinating slide summarising the results of more than two decades of IT project delivery surveys by the Standish Group, and he concluded: “Essentially no improvements in the success rates of IT related projects over the last 22 years!”

So, back to Cobb’s paradox. Why do projects still fail?

My earlier blog provided a long list of the causes of project failure and the corresponding conditions for project success. Many of these causes have been known for over 20 years, and the list continues to grow. But the evidence suggests that this knowledge alone doesn’t provide the answer.

So, why do projects still fail? 

I straddle the divide between being a seasoned project manager and applying the theory of knowledge management (KM). Although KM is helping to break down organisational walls, encourage conversations and stimulate challenge, I do not believe it will provide the step change needed to address Cobb’s paradox. In my view, a significant proportion of the challenge is not one of knowledge management. If the project manager of a multi-million project doesn’t know how to develop a schedule or a business case, or how to implement effective governance, then they probably shouldn’t be in the job. In the majority of cases they have the knowledge, or know where to find it. It’s something that is eminently knowable.

The challenge is one of implementation. When the pressure is on, resources get diverted, fires get lit and projects evolve. The project manager’s best intentions get prioritised away. These priorities are driven by a combination of personal experience, senior-level intervention and client feedback, combined with adjusting to the emerging project conditions. This is mirrored up and down the chain of command. The end result is that projects compromise on the very elements that are known to trip a project up. In the majority of cases, there is no one available to counsel the team on where priorities should and shouldn’t be traded.

Cobb’s Paradox in Real-World Conditions

I’ve thought long and hard about this challenge and I believe that the solution lies in hard evidence. If the body of lessons learned is shouting messages loud and clear, then the project manager and governance authorities should be held to account if they ignore them. The problem is that the lessons learned don’t shout loud and clear. They all shout together, at the same volume, and this degenerates into background noise that gets ignored. The lessons are too numerous, the world has moved on, the project is a one-off and ‘it’s different this time’.

Data holds the key. I’ve collated nearly 10,000 lessons, and when they are aggregated, segmented and filtered to the specific circumstances of the project, the messages are clear. Furthermore, by understanding the impact of lessons, the consequences of ignoring these messages are self-evident. Should the project manager decide to ignore the lessons, their decision will be on record.
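To make this concrete, here is a minimal sketch in Python (using pandas) of the kind of aggregation, segmentation and filtering I mean. The file and column names (lessons.csv, category, sector, project_size, impact_cost_pct) are invented for illustration; the point is simply that once lessons are held as structured data, the loudest, costliest messages for a given type of project rise above the background noise.

```python
import pandas as pd

# Hypothetical lessons-learned register: one row per lesson, with the
# project context it came from and its recorded impact.
lessons = pd.read_csv("lessons.csv")  # columns assumed: category, sector,
                                      # project_size, impact_cost_pct

# Segment: keep only lessons from projects that resemble the current one.
relevant = lessons[
    (lessons["sector"] == "ICT") & (lessons["project_size"] == "large")
]

# Aggregate: how often does each category of lesson recur, and what does
# ignoring it typically cost?
summary = (
    relevant.groupby("category")["impact_cost_pct"]
    .agg(occurrences="count", mean_cost_overrun="mean")
    .sort_values("occurrences", ascending=False)
)

# The messages that matter for this project, ranked by how loudly the
# evidence shouts them.
print(summary.head(10))
```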

I’m not advocating a lessons-based straitjacket, more of a handrail. But departure from the handrail should be the product of conscious decision making rather than a response to the loudest voice or the latest fire.

This data will also provide evidence at a corporate level of where improvements need to be made, mitigating known and knowable issues from the outset.

Bent Flyvbjerg’s paper on black swans highlights that the probability of an ICT project becoming a black swan is 17%, and that for “Black Swans the average cost risks almost triples again to +197%, schedule risk slightly increases further to +68%”. I challenge whether a black swan is really the victim of an unknowable, out-of-the-blue event. In isolation, I would agree. For the project manager, such as on NHS24, the project may be a once-in-a-lifetime event. But if we aggregate this data across all projects, then the unknown becomes knowable. Not in all cases, I agree, but it’s a matter of viewpoint.
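As a back-of-the-envelope illustration of that viewpoint, using only the figures quoted above: a single project experiences the black swan as a bolt from the blue, but a portfolio owner can price it as an expected cost.

```python
# Illustrative arithmetic only, using the figures quoted from Flyvbjerg:
# a 17% chance of an ICT project becoming a black swan, with an average
# cost overrun of +197% when it does.
p_black_swan = 0.17
cost_overrun_if_swan = 1.97  # +197% of original budget

# Unknowable for one project; an expected, budgetable cost across many.
expected_extra_cost_per_project = p_black_swan * cost_overrun_if_swan
print(f"Expected overrun per project: +{expected_extra_cost_per_project:.0%}")
# -> roughly +33% of budget, before any other source of risk is counted.
```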

The evidence demonstrates that, despite some very admirable and valiant quests, we aren’t really making an impact on project delivery success. We can’t continue to tackle the problem incrementally; we need an approach that is transformational. Data holds the key, and by packaging data in the right way we can use the lessons and insights from the past to inform how we deliver the future.

Better still, imagine a world where schedule risk assessments aren’t just informed by risk, but by an evidence-based assessment of the lessons that have gone before. If the evidence highlights a high risk of failure, then that information needs to be built into the forecast unless the project team can demonstrate otherwise. Lessons that rumble on should become risks for the next project; this feedback loop is currently broken. Furthermore, with the right tools it is eminently possible to give the project team a glimpse into the future: what is likely to trip it up, and by how much. A crystal ball driven by data and machine learning; it’s like a scene from Minority Report, but entirely possible.
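As a sketch of what that crystal ball could look like, assuming a historical dataset of past projects exists (the file name, column names and feature choices below are invented for illustration), a simple regression model is enough to show the shape of the idea. The real value would lie in the breadth and quality of the underlying lessons data, not in the model code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: past projects described by a few features,
# together with the schedule overrun they actually experienced.
history = pd.read_csv("past_projects.csv")  # columns assumed: budget_m,
                                            # duration_months, team_size,
                                            # sector_code, schedule_overrun_pct

features = ["budget_m", "duration_months", "team_size", "sector_code"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(history[features], history["schedule_overrun_pct"])

# Ask the "crystal ball" about a new project before it starts.
new_project = pd.DataFrame(
    [{"budget_m": 25, "duration_months": 36, "team_size": 40, "sector_code": 3}]
)
predicted = model.predict(new_project)[0]
print(f"Evidence-based schedule overrun forecast: +{predicted:.0f}%")
```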

How cool would that be?