I have submitted a few posts on LinkedIn recently about how advanced data analytics and data science will change project management. There have been a few comments challenging the extent of this impact. The overarching concern is that projects are not deterministic, i.e. their trajectory is not predetermined. Projects encounter obstacles on a daily basis which need to be overcome, so there is a probabilistic, if not random, element to them. As such, some believe that the utility of advanced data analytics will be diminished. Let me try and give some counterarguments:
Let me start with the big picture. In 2011 one of the project management gurus, Bent Flyvbjerg, argued that one in six projects is a black swan, i.e. these projects suffered from ‘an unpredictable or unforeseen event, typically one with extreme consequences’, and as a result they experienced significant delay and cost overrun. I have looked into some of these projects, including submitting FOI requests for the lessons learned. From what I saw, the majority of these issues were what I would expect to see as a seasoned project manager. There may be unique technical integration issues which trip up progress, but most are known challenges for a project manager. ‘Unforeseeable’ is relative to personal and team experience, combined with bias and strategic blindness. If we can call upon a data set of thousands of projects, the unforeseeable becomes known.
Our challenge is to assess the predisposition of a project to these known challenges and then watch out for them. I believe that we can deploy ‘sensors’ that could provide us with an early warning far in advance of when a human could detect it. Can I prove this hypothesis? Not today, but progress from multiple organisations in the community provides tangible evidence that this future is a reality; the uncertainty is in the scope of what we can detect. Through correlations, I believe we’ll even be able to catch many (though not all) of the others. There will always be outliers and shocks.
People also rightly challenge me that project management is something that needs human involvement. I agree, absolutely, for most projects. More so for complex projects, less so for projects which are turnkey or ‘obvious’. But on megaprojects, do we still need the standing army of project professionals? Probably not. I would argue that we should be redeploying some of these people into extracting insight from data, grounded in statistics. Insights that enable the project manager to become more adept at focusing project resources on areas of emerging challenge, whilst also understanding the overall cohesion and performance of the team.
Some believe we can flick a switch and deliver this future. But it’s not that easy. From what I have seen, there is a disconnect between the problems we are trying to solve and the data we collect, in terms of scope, volume and quality. It will take years to fix this. The early movers will secure an advantage, not just in terms of data, but in the data culture that underpins it (see my other blog) and the momentum that accrues from it.
We need to work top down and bottom up. I’ll blog about the former soon. The latter can be tackled now.
Quality and volume
We can use advanced data analytics to improve data quality and volume today. We can provide feedback on the quality of site diaries – a team developed an MVP to solve this challenge at Project:Hack 3. We can do the same almost everywhere. We can use these insights to provide feedback on individual team performance – how good they are at their job. We can do this today.
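To make this concrete, here is a minimal sketch of what automated feedback on site diary quality could look like. The checks, topics and thresholds are invented for illustration; this is not the Project:Hack MVP, and a production version would use proper NLP rather than keyword matching.

```python
import re

# Illustrative completeness checks for a free-text site diary entry.
# Topics and weightings are assumptions, not a real standard.
REQUIRED_TOPICS = ["weather", "labour", "plant", "progress"]

def diary_quality_score(entry: str) -> int:
    """Score a site diary entry 0-100 on simple completeness heuristics."""
    score = 0
    text = entry.lower()
    # Reward a minimum level of detail (assumed threshold: 25 words).
    if len(text.split()) >= 25:
        score += 20
    # Reward an explicit date (e.g. 12/03/2021 or 2021-03-12).
    if re.search(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b|\b\d{4}-\d{2}-\d{2}\b", text):
        score += 20
    # Reward coverage of each key topic (15 points each).
    for topic in REQUIRED_TOPICS:
        if topic in text:
            score += 15
    return min(score, 100)

entry = ("2021-03-12: Overcast weather, no rain. 14 labour on site, "
         "two excavators (plant) working on the east abutment. "
         "Progress on pile caps ahead of schedule; no incidents reported.")
print(diary_quality_score(entry))  # a complete entry scores highly
```

Even a crude score like this, fed back to site teams daily, nudges the data quality upwards and makes the diaries usable for downstream analytics.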
We can use a range of tools to automate some of our current roles. In a recent meetup we demonstrated how we can save thousands of hours by automating the federation of BIM models. We have also just solved a use case for the auto-naming of BIM documentation and files; it’s a laborious job that no longer needs to be done. There are thousands of other use cases, from PMO functions through to technical and engineering work. We are just scratching the surface.
This will take longer and needs the organisation to be committed to an AI future or it will stall. Other priorities will get in the way. But the ones who stay the course will transform how projects are delivered.
We can start with some simple use cases. If we collect data on tens of thousands of IT projects, we should have a good understanding of what the distributions of cost and schedule look like (benchmarking data) – and not just at a project level: we should be able to break it down to product or WBS level. We should be able to capture what influenced the variance, particularly if we capture ‘micro narrative’. We can then connect the risks, issues and contingency with the schedule and cost plan.
We can also look at the connections between projects, through multiple lenses, from dependencies through to the attribution of resources. We can then ease the pain for the portfolio manager.
We can then develop a map of the possible risks on a project (there will always be unknown unknowns). Machine learning provides a capability to assess the probability and impact of each of these risks based on project parameters (hundreds or thousands of features influence the weighting). It should also provide insights into successful mitigation actions. There is debate about whether we’ll have enough volume of data for this to work accurately, but as a decision aid it should provide us with tremendous insights.
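To illustrate the shape of such a model, here is a toy logistic scorer for the probability that a given risk materialises. The three features and all the weights are invented; a real model would learn its weights from thousands of historical projects and use far more features.

```python
import math

# Assumed feature weights -- for illustration only, not learned from data.
WEIGHTS = {
    "novel_technology": 1.2,     # first-of-a-kind technical scope (0-1)
    "contract_complexity": 0.8,  # major interfaces, normalised 0-1
    "team_turnover": 1.5,        # annualised churn rate, 0-1
}
BIAS = -2.0

def risk_probability(features: dict[str, float]) -> float:
    """Logistic score: P(risk materialises) given project features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

stable = {"novel_technology": 0.0, "contract_complexity": 0.2, "team_turnover": 0.1}
risky  = {"novel_technology": 1.0, "contract_complexity": 0.9, "team_turnover": 0.6}
print(f"stable project: {risk_probability(stable):.2f}")
print(f"risky project:  {risk_probability(risky):.2f}")
```

The value is not the point estimate itself but the ranking: it tells the project manager which risks, on this project's parameters, deserve attention first.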
I have been in board meetings where the board can sometimes disagree with the PM’s call for resources; they have multiple projects all shouting loudly. Data science provides evidence to underpin this analysis. ‘But PMs can manipulate this evidence’, I hear you cry – but data science should pick most of that up too.
We can understand which stakeholders are likely to impede or accelerate progress, and which resources are likely to under-deliver. It’s evidence-based. It’s not deterministic, but it gives us evidence on which to have a discussion. There are thousands of other use cases.
I also acknowledge that projects are almost always different: technical scope, quantities, environment, team, etc. But at a functional level there is a lot of commonality: design, procurement, integration, commissioning and so on. I know there are patterns in this data, and some organisations are more predisposed to these patterns than others.
There are also human behaviours which provide lead indicators. Some of the work on this is quite advanced and starting to provide some really fascinating and cost effective insights.
All of this requires human interpretation. It needs a data analyst to ensure that the data is sound. It then needs a data scientist to understand the performance of the algorithms and undertake sensitivity analysis. Our project teams will need to evolve. We currently lack the capacity to deliver this, which is why we have launched a level 4 [project] data analyst apprenticeship. It provides the essential capability and capacity to initiate this journey.
But this only works for waterfall, doesn’t it? I don’t agree. We can use data science to understand velocity, likely obstacles, the likelihood of benefits being realised, and so on.
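For agile delivery, one simple and widely used statistical technique is a Monte Carlo forecast of how many sprints a backlog will take, built by resampling historical sprint velocities. The velocity figures and backlog size below are invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical historical sprint velocities (story points per sprint)
# and remaining backlog for one team.
historical_velocity = [18, 22, 15, 25, 20, 17, 23]
backlog_points = 120

def sprints_to_finish() -> int:
    """One simulated delivery: draw velocities until the backlog is done."""
    remaining, sprints = backlog_points, 0
    while remaining > 0:
        remaining -= random.choice(historical_velocity)
        sprints += 1
    return sprints

# Run many simulated deliveries and read off confidence levels.
runs = sorted(sprints_to_finish() for _ in range(10_000))
p50, p85 = runs[5_000], runs[8_500]
print(f"50% confident in {p50} sprints, 85% confident in {p85} sprints")
```

Instead of a single-point velocity average, the team gets a probabilistic answer – exactly the kind of evidence-based, non-deterministic insight argued for above.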
I also hope this helps to paint a picture of why data trusts will be essential: we securely pool huge volumes of data to inform these insights. See our website on the construction data trust for more details.
Data science isn’t about the development of an infallible crystal ball. It’s about streamlining project delivery, working collegiately as a profession and changing the dial on how we deliver projects, through a better understanding of predisposition and lead indicators.
Martin Paver is the CEO and Founder of Projecting Success, a consultancy that specialises in leveraging project data to transform project delivery. He has led a $1bn megaproject and a multi-billion-dollar portfolio office. He is the founder of the Project Data Analytics community, comprising nearly 4,000 members who share a passion for leveraging the exhaust plume of project data. He regularly blogs and presents at international conferences, helping to ignite the professional imagination and inspire change.