I read a post from the Association for Project Management recently suggesting that “next year could be a turning point for project management and AI”. It is an interesting article that is worth a read. I wanted to take the opportunity to delve deeper into the conclusion on why next year is the turning point and whether limiting it to AI overly constrains the opportunity.
Robotic Process Automation
We have seen robotic process automation take off in multiple sectors, removing the burden of repetitive work. RPA isn’t AI; it is predominantly focused on the automation of workflows (although the higher-end capabilities can execute machine learning algorithms). There is no obstacle to implementing this now other than understanding, skills, budget and, most importantly, a desire to implement it. Projects are laden with processes, and many of these could be automated. This is all possible today. We are doing it within Projecting Success – automating everything we possibly can, from client work and HR workflows through to posts about meetups. It also allows us to run checks to assess whether the data we are receiving is accurate; it should perform better than a human.
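To make this concrete, an automated data-quality check of the kind mentioned above might look like the minimal sketch below. The record fields and validation rules are invented for illustration, not taken from any particular tool.

```python
# Minimal sketch of an automated data-quality check, as might run inside
# an RPA workflow. Field names and rules are illustrative assumptions.
from datetime import date

def validate_record(record):
    """Return a list of data-quality issues found in a project record."""
    issues = []
    if not record.get("project_id"):
        issues.append("missing project_id")
    cost = record.get("cost")
    if cost is None or cost < 0:
        issues.append("cost missing or negative")
    start, finish = record.get("start"), record.get("finish")
    if start and finish and finish < start:
        issues.append("finish date precedes start date")
    return issues

record = {"project_id": "P-001", "cost": -500,
          "start": date(2020, 1, 1), "finish": date(2019, 12, 1)}
print(validate_record(record))
```

A bot running checks like this on every incoming record never gets tired or skims a column – which is why it should outperform a human at the task.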
But RPA is only one part of the spectrum. We can rapidly extend out into data analytics and data science. At our last Project:Hack the winning team created a tool to check for errors in specifications. They scraped the ISO and British Standards websites and compared these with the standards in the specification, highlighting any that were out of date or inaccurate. We have used Python for a wide range of applications, such as topic modelling (sense making) and error checking. It is all possible today and only limited by our imagination.
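The core of a specification-checking tool like the hackathon winner's might be sketched as follows. The standards list here is hard-coded and invented for illustration; in practice it would come from scraping the ISO and BSI sites, as the team did.

```python
import re

# Hypothetical lookup of current standard editions (in practice scraped
# from the ISO / BSI websites). Entries are invented for illustration.
CURRENT_EDITIONS = {
    "ISO 9001": "2015",
    "BS 5950": "2000",
}

def check_spec(spec_text):
    """Flag standards cited in a specification that are out of date."""
    findings = []
    for std, year in re.findall(r"((?:ISO|BS) \d+):(\d{4})", spec_text):
        current = CURRENT_EDITIONS.get(std)
        if current and year != current:
            findings.append(f"{std}:{year} is out of date (current edition {current})")
    return findings

spec = "Steelwork to BS 5950:1990; quality system to ISO 9001:2015."
print(check_spec(spec))
```

Even a sketch this small shows why the approach works: a few lines of pattern matching against an authoritative list catches errors that are tedious and unreliable to find by eye.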
We then move into network theory, where we look for relationships between data. We tend to store data in silos and lose the connections between datasets. Even a single source of truth may not have the requisite connectivity in the data to enable us to conduct complex queries. For instance, if I want to understand what the key risks are with a certain supplier at a specific stage of a project, including impact, I need to correlate the following:
- Supplier database
- Work breakdown structure
- Risk register
- Cost and schedule variance associated with the risk, linked through to compensation events or claims
We also need to capture the data dynamically so that we can see how the project evolved, understand chains of events and vectors.
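The supplier-risk query above can be sketched once the records are linked rather than siloed. Everything here – the data, field names and impact threshold – is invented to illustrate the traversal, not a real schema.

```python
# Sketch of the supplier -> WBS -> risk -> cost/claims query, using
# in-memory linked records. All data and field names are invented.
suppliers = {"S1": {"name": "Acme Steel"}}
wbs = {"W1.2": {"stage": "construction", "supplier": "S1"}}
risks = [
    {"id": "R7", "wbs": "W1.2", "impact": 0.8,
     "cost_variance": 120_000, "claims": ["CE-004"]},
    {"id": "R9", "wbs": "W1.2", "impact": 0.2,
     "cost_variance": 5_000, "claims": []},
]

def risks_for_supplier(supplier_id, stage, min_impact=0.5):
    """Find high-impact risks for a supplier at a given project stage."""
    packages = {w for w, d in wbs.items()
                if d["supplier"] == supplier_id and d["stage"] == stage}
    return [r for r in risks
            if r["wbs"] in packages and r["impact"] >= min_impact]

print(risks_for_supplier("S1", "construction"))
```

The point is not the code but the connectivity: the query is trivial once supplier, work package and risk records share identifiers, and impossible while they sit in disconnected silos.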
We haven’t yet touched on AI.
We can introduce AI using off the shelf algorithms. A lot of them are freely available. Some of these algorithms have been trained using vast data sets, such as translation or image recognition tools. But we often lack the data to train algorithms on project delivery challenges. For instance, it would be great to use AI to identify lead indicators on health and safety issues on a construction site. But the Health and Safety Executive only collate RIDDOR data, i.e. reportable incidents. They aren’t able to access data on safety audits, safety observations or levels of training. The same applies to risk, schedules, costs, benefits, commercial disputes, logistics, quality etc.
We need to be careful of conflating data analytics and AI.
In my view, AI in project delivery will only become commonplace when we address the following:
- Awareness. We educate the community on the art of the possible. But there are lots of people jumping on the bandwagon who use language loosely, which may confuse rather than educate. I spend a lot of time finding people from across the globe who are pushing the boundaries on project data analytics, sharing the latest ideas and trying to put this into terms that we can all understand. But they are still generally a rare breed.
- Accessibility. There is a perception that this is all really hard and practitioners need a PhD before they can apply it. We have been working hard to change these perceptions via our hackathons. What starts off as something very scary rapidly becomes digestible and within reach of many.
- Competence. I see three schools of thought emerging on developing competence within an organisation:
- Sheep-dip everyone, from the marketing team to project engineers, in a one-week course on data science.
- Create a centralised team of data ninjas who work on projects on demand.
- Upskill project delivery professionals. This is where we focus, because project delivery is different. Data doesn’t stream out of a control panel; it is messy, incomplete and often full of errors. If we don’t educate project professionals on the art of the possible and show them how to do it, we will never create the culture within the business to change. This change needs to come from within. This is why we have focused so heavily on the project data analyst apprenticeship. It’s a key part of the system.
- Data. We also need to provide access to data. If we constrain data to the big software vendors, they will dominate and create monopolies at the expense of innovation. We need to work collegiately to pool data for the benefit of the collective, democratising data to enable access for researchers, innovators and industry professionals. We are not advocating a free-for-all – that will never work, because a lot of this data is sensitive. But by independently stewarding the data within a data trust, we create the ecosystem to make it work. Projecting Success are working with a number of organisations, such as the Open Data Institute, Oil and Gas Technology Centre, Sir Robert McAlpine and others, to make this a reality.
A Systems Approach
It is possible to implement RPA and data analytics today. But if we want to unlock the potential of AI we need a systems approach. We can deliver it piecemeal, but we’ll get there a lot quicker if we are able to create critical mass across multiple sectors and multiple professions. But it takes time and a lot of resilience. The tide is turning, but we need to think more strategically than bolting on software solutions and waiting for the magic to happen. We need to educate, upskill, reskill, experiment and collaborate.