# Category Archives: Risk

## CPM Anomalies Invalidate Monte Carlo

A couple of weeks ago I posted on some of the anomalies in CPM logic that will cause unexpected results: CPM Scheduling – the logical way to error #1. A comment on the post by Santosh Bhat started me thinking about the effect of these logical constructs on risk analysis.

The various arrangements of activities and links shown in CPM Scheduling – the logical way to error #1 (with the addition of a few more non-controlling links) pass all of the scheduling rules tested by DCMA and other assessments. The problem is that when you change the duration of a critical activity, there is either no effect, or the reverse effect, on the overall schedule duration.

In this example, the change in the overall project duration is the exact opposite of the change in the duration of Activity B (read the previous post for a more detailed explanation).  For this discussion, it is sufficient to know that an increase of 2 weeks in the duration of ‘B’ results in a reduction of the overall project duration of 2 weeks (and vice-versa).
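To make the anomaly concrete, here is a minimal sketch in Python of one construct that behaves this way – a ladder of finish-to-finish and start-to-start links. It is an assumed example for illustration, not the exact network from the earlier post, and the forward pass is deliberately simplified:

```python
# Assumed ladder logic: A --FF--> B --SS--> C (durations in weeks).
# B's finish is held by the FF link from A; B's start floats back as B gets longer,
# and the SS link passes that earlier start on to C, so the project finishes earlier.
def project_duration(dur_a: float, dur_b: float, dur_c: float) -> float:
    a_finish = dur_a                # A starts at time 0
    b_finish = a_finish             # FF link: B finishes when A finishes
    b_start = b_finish - dur_b      # simplified: no other constraint on B's start
    c_finish = b_start + dur_c      # SS link: C starts when B starts
    return max(a_finish, b_finish, c_finish)

print(project_duration(10, 4, 10))  # 16 weeks
print(project_duration(10, 6, 10))  # 14 weeks: B is 2 weeks longer, the project is 2 weeks shorter
```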

The effect of these anomalies on the validity of a Monte Carlo analysis is significant. The essence of Monte Carlo is to analyze a schedule hundreds of times using different activity durations selected from a pre-determined range that represents the uncertainty associated with each of the identified risks in the schedule. If the risk event occurs, or is more serious, the affected activity duration is increased appropriately (see more on Monte Carlo).
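For readers who have not looked inside the tools, a minimal sketch of the sampling loop in Python may help. Everything in it – the three-activity finish-to-start chain, the duration ranges, and the risk probability – is an invented illustration, not data from the post:

```python
import random

# Assumed (min, most likely, max) duration ranges in weeks for a simple A -> B -> C chain,
# plus one discrete risk that, if it occurs, extends activity B.
ACTIVITIES = {"A": (8, 10, 14), "B": (4, 5, 9), "C": (9, 10, 13)}
RISK_B = {"probability": 0.3, "extra": (2, 4, 8)}

def one_iteration() -> float:
    durations = {name: random.triangular(lo, hi, ml) for name, (lo, ml, hi) in ACTIVITIES.items()}
    if random.random() < RISK_B["probability"]:          # does the risk event occur this iteration?
        lo, ml, hi = RISK_B["extra"]
        durations["B"] += random.triangular(lo, hi, ml)  # if so, increase the affected duration
    return sum(durations.values())                       # finish-to-start chain: durations simply add

results = sorted(one_iteration() for _ in range(5000))
print(f"P80 completion: {results[int(0.8 * len(results))]:.1f} weeks")
```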

In addition to calculating the probability of completing by any particular date, most Monte Carlo tools also generate tornado charts showing the comparative significance of each risk included in the analysis and its effect on the overall calculation.  For example, listing the risks that have the strongest correlation between the event occurring and the project being delayed.
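The ranking behind a tornado chart can be approximated by correlating each risk's sampled impact with the simulated project duration across all iterations, then sorting by the strength of the correlation. A sketch, assuming the per-iteration risk impacts and total durations have been collected from a run like the one above:

```python
from statistics import correlation   # Python 3.10+

# risk_impacts[name][i] = extra duration sampled for that risk in iteration i (0 if it did not occur)
# totals[i] = simulated overall project duration in iteration i
def tornado_ranking(risk_impacts: dict[str, list[float]], totals: list[float]) -> list[tuple[str, float]]:
    ranked = [(name, correlation(impacts, totals)) for name, impacts in risk_impacts.items()]
    return sorted(ranked, key=lambda pair: abs(pair[1]), reverse=True)   # strongest influence first
```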

Tornado charts help the project’s management to focus on mitigating the most significant risks.

When a risk is associated with an activity that causes one of the anomalies outlined in CPM Scheduling – the logical way to error #1, the consequence is a reduction in the accuracy of the overall probability assessment and, more importantly, a reduction in the significance of that risk in the tornado charts. Anomalous modelling of this kind undermines the fundamental basis of Monte Carlo analysis. More examples of similar logical inconsistencies that will devalue a Monte Carlo analysis are included in Section 3.5 of Easy CPM.

Easy CPM is designed for schedulers who know how to operate the tools efficiently and are looking to lift their skills to the next level. The book is available for preview, purchase (price \$35), and immediate download, from: https://mosaicprojects.com.au/shop-easy-cpm.php

## Project Management in the time of COVID

The first in a series of five papers by Dr. Lynda Bourne, commissioned by PM World Journal, has been uploaded to the Mosaic website.

Uncertainty – Is it time to rethink? considers what uncertainty might mean post-pandemic. Through a discussion of the concept of a Zone of Uncertainty, the paper considers how the current heightened awareness of uncertainty and unknown unknowns can contribute to more effective risk management practice, and how the introduction of the concepts of resilience, persistence, and adaptability may assist individuals, groups, and organizations to recover, review and reform their practices.

PM World Journal is a free monthly project management journal, see more at: https://pmworldjournal.com/

## Murphy’s Law is not an excuse, it is a call to action!

To apply Murphy’s Law proactively, you need to think through everything before you start work and ask yourself: if this part fails, does the system still work?  This article looks at the historical origins of Murphy’s Law and how to use the concept to avoid problems.

For more on risk management see: https://mosaicprojects.com.au/PMKI-PBK-045.php#General

## Radical Uncertainty – Project controls for an unknowable future

A new book suggests a paradigm shift is needed in the way project controls are used on major projects, combining the discipline required for major engineering works with the flexibility to deal with an uncertain future – getting the balance right could be very profitable. This article outlines the challenges and shortcomings of existing control processes: https://mosaicprojects.com.au/Mag_Articles/AA009_Radical_Uncertainty.pdf

For more papers on risk and uncertainty see: https://mosaicprojects.com.au/PMKI-SCH-045.php

For more papers on complexity see: https://mosaicprojects.com.au/PMKI-ORG-040.php

## Contingencies are not a soft option!

Knowing what a project is really likely to cost is important from every perspective: personal, professional, and organizational. But developing a realistic and achievable cost estimate has two components: first you develop an accurate baseline estimate, then you develop a realistic contingency.  Most people do step one; very few even think about step two.

This article focuses on the importance of adequate contingencies and reserves in delivering a successful project: https://mosaicprojects.com.au/Mag_Articles/AA006_Contingencies_are_not_a_soft_option.pdf

## Probability -v- luck. Should we give up our day-job?

Based on a successful day at the races, 5 winners and one place from 8 bets, this article looks at the balance between luck and process in achieving the result.  Our conclusion is that you should not confuse luck with skill. Good processes will help build success, persistence will generate more opportunities for you to be lucky, and skill or capability will shift the odds in your favour, but randomness rules!

To quote Coleman Cox: ‘I am a great believer in Luck. The harder I work, the more of it I seem to have.’

For more papers on risk and probability see: https://mosaicprojects.com.au/PMKI-SCH-045.php#Process1

## The reference case for management reserves

Risk management and Earned Value practitioners, and a range of standards, advocate the inclusion of contingencies in the project baseline to compensate for defined risk events. The contingency may (should) include an appropriate allowance for variability in the estimates modelled using Monte Carlo or similar; these are the ‘known unknowns’.  They also advocate creating a management reserve that should be held outside of the project baseline, but within the overall budget to protect the performing organisation from the effects of ‘unknown unknowns’.  Following these guidelines, the components of a typical project budget are shown below.

PMBOK® Guide Figure 7-8

The calculations of contingency reserves should be incorporated into an effective estimating process to determine an appropriate cost estimate for the project[1]. The application of appropriate tools and techniques supported by skilled judgement can arrive at a predictable cost estimate which in turn becomes the cost baseline once the project is approved. The included contingencies are held within the project and are accessed by the project management team through normal risk management processes. In summary, good cost estimating[2] is a well understood (if not always well executed) practice, that combines art and science, and includes the calculation of appropriate contingencies. Setting an appropriate management reserve is an altogether different problem.
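As a simple illustration of how the components fit together (the figures are invented for the example, not taken from the PMBOK® Guide):

```python
# Illustrative budget build-up following the structure described above (all figures invented).
work_package_estimates = 8_000_000   # sum of the activity / work package cost estimates
contingency_reserve = 1_200_000      # allowance for identified risks ('known unknowns'), e.g. from Monte Carlo

cost_baseline = work_package_estimates + contingency_reserve   # managed by the project team
management_reserve = 800_000         # for 'unknown unknowns', held outside the baseline

project_budget = cost_baseline + management_reserve
print(cost_baseline, project_budget)  # 9200000 10000000
```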

Setting a realistic management reserve

Management reserves are an amount of money held outside of the project baseline to ‘protect the performing organisation’ against unexpected cost overruns. The reserves should be designed to compensate for two primary factors: the first is genuine ‘black swans’; the other is estimating errors (including underestimating the levels of contingency needed).

The definition of a ‘black swan’ event is a significant unpredicted and unpredictable event[3].  In his book of the same name, N.N. Taleb defines ‘Black Swans’ as having three distinct characteristics: they are unexpected and unpredictable outliers, they have extreme impacts, and they appear obvious after they have happened. The primary defence against ‘black swans’ is organisational resilience rather than budget allowances but there is nothing wrong with including an allowance for these impacts.

Estimating errors leading to a low-cost baseline, on the other hand, are both normal and predictable; there are several different drivers for this phenomenon, most of them innate to the human condition. The factors leading to the routine underestimating of costs and delivery times, and the overestimating of benefits to be realised, can be explained in terms of optimism bias and strategic misrepresentation.  The resulting inaccurate estimates of project costs, benefits, and other impacts are a major source of uncertainty in project management – the occurrence is predictable and normal, but the degree of error is the unknown variable leading to risk.

The way to manage this component of the management reserves is through the application of reference class forecasting which enhances the accuracy of the budget estimates by basing forecasts on actual performance in a reference class of comparable projects. This approach bypasses both optimism bias and strategic misrepresentation.

Reference class forecasting is based on theories of decision-making in situations of uncertainty and promises more accuracy in forecasts by taking an ‘outside view’ of the projects being estimated. Conventional estimating takes an ‘inside view’ based on the elements of the project being estimated – the project team assesses the elements that make up the project and determines a cost. This ‘inside’ process is essential, but on its own insufficient to achieve a realistic budget. The ‘outside’ view draws on knowledge about the actual performance of a reference class of comparable projects and resolves to a percentage markup to be added to the estimated price to arrive at a realistic budget.  This addition should be used to assess the value of the project (with a corresponding discounting of benefits) during the selection/investment decision-making processes[4], and logically should be held in management reserves.
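For example (figures invented for illustration): if the inside-view estimate for a project is \$10 million, and the reference class shows comparable projects finishing around 40% over their approved estimates at the chosen level of confidence, the figure used for the investment decision becomes \$14 million, with the \$4 million uplift held as management reserve rather than being added to the project’s baseline.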

Overcoming bias by simply hoping for an improvement in the estimating practice is not an effective strategy!  Prof. Bent Flyvbjerg’s 2006 paper ‘From Nobel Prize to Project Management: Getting Risks Right[5]’ looked at 70 years of data.  He found: Forecasts of cost, demand, and other impacts of planned projects have remained constantly and remarkably inaccurate for decades. No improvement in forecasting accuracy seems to have taken place, despite all claims of improved forecasting models, better data, etc.  For transportation infrastructure projects, inaccuracy in cost forecasts in constant prices is on average 44.7% for rail, 33.8% for bridges and tunnels, and 20.4% for roads.

The consistency of the error and the bias towards significant underestimating of costs (and a corresponding overestimate of benefits) suggest the root causes of the inaccuracies are psychological and political rather than technical – technical errors should average towards ‘zero’ (plusses balancing out minuses) and should improve over time as industry becomes more capable, whereas there is no imperative for psychological or political factors to change:

• Psychological explanations can account for inaccuracy in terms of optimism bias; that is, a cognitive predisposition found with most people to judge future events in a more positive light than is warranted by actual experience[6].
• Political factors can explain inaccuracy in terms of strategic misrepresentation. When forecasting the outcomes of projects, managers deliberately and strategically overestimate benefits and underestimate costs in order to increase the likelihood that their project will gain approval and funding either ahead of competitors in a portfolio assessment process or by avoiding being perceived as ‘too expensive’ in a public forum – this tendency particularly affects mega-projects such as bids for hosting Olympic Games.

Optimism Bias

Reference class forecasting was originally developed to compensate for the type of cognitive bias that Kahneman and Tversky found in their work on decision-making under uncertainty, which won Kahneman the 2002 Nobel Prize in economics[7]. They demonstrated that:

• Errors of judgment are often systematic and predictable rather than random.
• Many errors of judgment are shared by experts and laypeople alike.
• The errors remain compelling even when one is fully aware of their nature.

Because awareness of a perceptual or cognitive bias does not by itself produce a more accurate perception of reality, any corrective process needs to allow for this.

Strategic Misrepresentation

When strategic misrepresentation is the main cause of inaccuracy, differences between estimated and actual costs and benefits are created by political and organisational pressures, typically to have a business case approved, or a project accepted, or to get on top of issues in the 24-hour news cycle.  The Grattan Institute (Australia) has reported that in the last 15 years Australian governments had spent \$28 billion more than taxpayers had been led to expect on transport infrastructure projects. A key ‘political driver’ for these cost overruns was announcing the project (to feed the 24-hour news cycle) before the project team had properly assessed its costs.  While ‘only’ 32% of the projects were announced early, these projects accounted for 74% of the value of the cost overruns.

Reference class forecasting will still improve accuracy in these circumstances, but the managers and estimators may not be interested in this outcome because the inaccuracy is deliberate. Biased forecasts serve their strategic purpose, which overrides any commitment to accuracy and truth; consequently, the application of reference class forecasting needs strong support from the organisation’s overall governance functions.

Applying Reference Class Forecasting

Reference class forecasting does not try to forecast specific uncertain events that will affect a particular project, but instead places the project in a statistical distribution of outcomes from the class of reference projects.  For any particular project it requires the following three steps:

1. Identification of a relevant reference class of past, similar projects. The reference class must be broad enough to be statistically meaningful, but narrow enough to be truly comparable with the specific project – good data is essential.
2. Establishing a probability distribution for the selected reference class. This requires access to credible, empirical data for a sufficient number of projects within the reference class to make statistically meaningful conclusions.
3. Comparing the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
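A hedged sketch of the mechanics in Python, assuming you have already assembled actual-cost to estimated-cost ratios for a credible reference class (the ratios and estimate below are invented):

```python
from statistics import quantiles

# Steps 1 & 2: actual cost / estimated cost for past, comparable projects (invented data).
reference_class = [1.08, 1.15, 1.22, 1.31, 1.38, 1.44, 1.52, 1.63, 1.71, 1.95]

# Step 3: place the new project in that distribution. Here the organisation accepts a
# 20% chance of overrunning the budget, so the 80th percentile uplift is used.
p80_ratio = quantiles(reference_class, n=10)[7]    # 8th of 9 cut points ~ 80th percentile

base_estimate = 25_000_000                         # the project's own 'inside view' estimate
budget_for_decision = base_estimate * p80_ratio
management_reserve = budget_for_decision - base_estimate
print(round(p80_ratio, 2), round(management_reserve))
```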

The UK government (the Treasury) was an early user of reference class forecasting and continues the practice.  A 2002 study by Mott MacDonald for the Treasury found that, over the previous 20 years of government projects, the average works duration was underestimated by 17%, CAPEX was underestimated by 47%, and OPEX was underestimated by 41%.  There was also a small shortfall in benefits realised.

This study fed into the updating of the Treasury’s ‘Green Book’ in 2003, which is still the standard reference in this area. The Treasury’s Supplementary Green Book Guidance: Optimism Bias[8] provides the recommended range of markups with a requirement for the ‘upper bound’ to be used in the first instance by project or program assessors.

These are very large markups to shift from an estimate to a likely cost, and they relate to the UK government’s estimating (ie, the client’s view), not the final contractors’ estimates – errors of this size would bankrupt most contractors.  However, Gartner and most other authorities routinely report that projects and programs overrun their cost and time estimates (particularly internal projects and programs), and the reported ‘failure rates’ and overruns have remained relatively stable over extended periods.

Conclusion

Organisations can choose to treat each of their project failures as a ‘unique one-off’ occurrence (another manifestation of optimism bias), or learn from the past and develop their own framework for reference class forecasting. The markups don’t need to be included in the cost baseline (the project’s estimates are its estimates, and the team should attempt to deliver as promised); but they should be included in the assessment process for approving projects, and in the management reserves held outside of the baseline, to protect the organisation from the effects of both optimism bias and strategic misrepresentation.  As systems, and particularly business cases, improve, the reference class adjustments should reduce, but they are never likely to reach zero: optimism is an innate characteristic of most people, and political pressures are a normal part of business.

If this post has sparked your interest, I recommend exploring the UK information to develop a process that works in your organisation: http://www.gov.uk/government/publications/the-green-book-appraisal-and-evaluation-in-central-governent

______________________

[1] For more on risk assessment see: http://www.mosaicprojects.com.au/WhitePapers/WP1015_Risk_Assessment.pdf

[2] For more on cost estimating see: http://www.mosaicprojects.com.au/WhitePapers/WP1051_Cost_Estimating.pdf

[3] For more on ‘black swans’ see: https://mosaicprojects.wordpress.com/2011/02/11/black-swan-risks/

[4] For more on portfolio management see: http://www.mosaicprojects.com.au/WhitePapers/WP1017_Portfolios.pdf

[5] Project Management Journal, August 2006.

[6] For more on the effects of bias see: http://www.mosaicprojects.com.au/WhitePapers/WP1069_Bias.pdf

[7] Kahneman, D. (1994). New challenges to the rationality assumption. Journal of Institutional and Theoretical Economics, 150, 18–36.

## Risk management handbook published

The Risk Management Handbook edited by Dr. David Hillson (the ‘risk doctor’) is a practical guide to managing the multiple dimensions of risk in modern projects and business.  We contributed Chapter 10: Stakeholder risk management.

The 23 chapters survey the risk management landscape, providing a broad and up-to-date introduction to risk, with expert guidance on current best practice and cutting-edge insight into new developments within risk management.

For more on the book, see: www.koganpage.com/product/the-risk-management-handbook-9780749478827

## The language used to define risks can contribute to failure.

If a risk is going to be adequately managed, it needs to be defined.  Failing to describe the actual risk (or risks) will almost inevitably lead to project failure and will frequently exacerbate the damage.

In recent times there seems to have been an explosion of documents in the public domain, including academic papers (where one would have hoped the reviewers and editors knew better), listing as ‘risks’ factors that cannot ever be risks.  Stating a fact as a ‘risk’ hides the real or consequential risks that may be manageable.

Risk 101 – a risk is an uncertainty that may affect a project objective if it occurs. For something to be a risk, there has to be an uncertainty and the uncertainty may have a positive or negative impact on one or more objectives (see more on risk management). Risk management involves balancing the uncertainty, its potential impact and the cost and effort needed to change these for the better. But to do this you need to focus on the uncertainties that can be managed.

One of the more frequently mis-described risks is ‘technical complexity’.  The degree of technical difficulty involved in a project is a FACT that can be measured and described!  Some projects, such as launching a space rocket, are technically complex, others less so; but NASA has a far higher success rate in its rocket launches than most IT departments have in developing successful software applications that achieve their objectives.  The technical difficulty may give rise to consequential risks that need addressing, but these risks have to be identified and catalogued if they are going to be managed. Some of the risks potentially arising out of technical complexity include:

• Inadequate supply of skilled resources in the marketplace / organisation;
• Management failing to allow adequate time for design and testing;
• Management failing to provide appropriately skilled resources;
• Management lacking the skills needed to properly estimate and manage the work;
• Etc.

Another common risk in many of these pseudo risk lists is ‘lack of senior management support’.  This is a greyer area: the project team’s perception of management support and the actual level of support from senior management may differ. Developing an understanding of the actual attitude of key senior managers requires a methodical approach using tools such as the Stakeholder Circle.  However, even after defining the actual attitude of important senior managers, the lack of precision in the risk description will often hide the real risks and their potential solutions or consequences:

• If there is a real lack of senior management support, the project should be cancelled; its probability of failure is greater than 80% and continuing is simply wasting money.
• If the problem is senior management failing to understand the importance of the project, this is an issue (it exists) and the solution is directed communication (see more on directed communication). The risk is that the directed communication effort will fail, leading to project failure; this risk needs careful monitoring.
• If the problem is a project sponsor (or steering committee) who is not committed to project success, and/or a sponsor (or steering committee) lacking understanding of his/her role (see more on the role of a sponsor), this is another issue, with a solution based in education or replacement. Depending on the approach to resolving the issue (and its guaranteed impact on project success if the issue remains unresolved), the risk is either that the necessary education process may not work, or that poor governance and senior management oversight will allow the issue to continue unresolved – these specific risks need to be explicitly described and acknowledged if they are to be managed.

The first step to managing risks effectively is developing a precise description of the actual risk that requires managing. If there are several associated risks, log each one separately and then group them under a general classification.   The description of each risk is best done using a common meta language such as:

• ‘[Short name]: If a [description of risk] caused by [cause of risk] occurs, it may cause [consequence of occurrence]’. For example:
• ‘Storms: If a heavy thunderstorm caused by summer heat occurs, it may cause flooding and consequential clean up’.

For each risk you need to:

• Define the risk category and short name;
• Describe the risk using an effective ‘risk meta language’;
• Determine if the risk is an opportunity or threat and quantify its effect;
• Prioritise the risk using a qualitative assessment process;
• Determine the optimum response;
• Implement the response and measure its effectiveness (see more on risk assessment).

A simple Excel template such as this can help: http://www.mosaicprojects.com.au/Practical_Risk_Management.html#Tools
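The same structure can equally be captured in a few lines of code; a minimal sketch of a register entry using the meta language above (the field names and 1-to-5 scoring scales are assumptions, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    short_name: str
    category: str
    description: str   # written in the risk meta language above
    risk_type: str     # 'threat' or 'opportunity'
    probability: int   # qualitative score, 1 (rare) to 5 (almost certain)
    impact: int        # qualitative score, 1 (negligible) to 5 (severe)
    response: str = ""

    @property
    def priority(self) -> int:
        return self.probability * self.impact   # simple probability x impact ranking

storms = Risk(
    short_name="Storms",
    category="Weather",
    description="If a heavy thunderstorm caused by summer heat occurs, "
                "it may cause flooding and consequential clean up.",
    risk_type="threat",
    probability=3,
    impact=4,
    response="Allow inclement-weather float; check site drainage before the storm season.",
)
print(storms.priority)   # 12
```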

Managing issues is similar; the key difference is that the consequences of an unresolved issue are certain – the issue is a fact that has to be dealt with (see more on issues management).

There are a number of factors that can cause both risks and issues to be improperly defined, some technical, most cultural. Three of the most important are:

• Dealing with easy to identify symptoms without looking for the root cause of the risk / issue (see more on root cause analysis).
• A management culture that does not allow open and honest reporting of risks and issues; preferring to hide behind amorphous descriptions such as ‘technical complexity’ rather than the real risk ‘management’s inability to manage this level of complicated technology’.
• Failing to allow adequate time to analyse the stakeholder community using tools such as the Stakeholder Circle so that the full extent of risks associated with people’s capabilities and attitudes can be understood – these can account for up to 90% of the actual risks in most projects.

Management culture is the key to both allowing and expecting rigorous and honest assessment of risk. One of the key functions of every organisation’s governing body is to design, create and maintain the organisation’s management culture; this is a problem that starts at the top! For more on the roles of governance see: http://www.mosaicprojects.com.au/WhitePapers/WP1096_Six_Functions_Governance.pdf.

## Project Risk Management – how reliable is old data?

One of the key underpinnings of risk management is reliable data on which to base probabilistic estimates of what may happen in the future.  The importance of understanding the reliability of the data being used is emphasised in PMBOK® Guide 11.3.2.3 Risk Data Quality Assessment and virtually every other risk standard.

One of the tenets underpinning risk management in all of its forms, from gambling to insurance, is the assumption that reliable data about the past is a good indicator of what will happen in the future – there’s no certainty in this process, but there is a degree of probability that future outcomes will be similar to past outcomes if the circumstances are similar. ‘Punters’ know this from their ‘form guides’, insurance companies rely on this to calculate premiums, and almost every prediction of some future outcome relies on an analogous interpretation of similar past events. Project estimating and risk management is no different.

Every time or cost estimate is based on an understanding of past events of a similar nature; in fact the element that differentiates an estimate from a guess is having a basis for the estimate! See:
–  Duration Estimating
–  Cost Estimating

The skill in estimating both normal activities and risk events is understanding the available data, and being able to adapt the historical information to the current circumstances. This adaptation requires understanding the differences between the old work and the current work, and the reliability and stability of the information being used. Range estimates (three point estimates) can be used to frame this information and allow a probabilistic assessment of the event; alternatively a simple ‘allowance’ can be made. For example, in my home state we ‘know’ three weeks a year is lost to inclement weather if the work is exposed to the elements.  Similarly, office-based projects in the city ‘know’ they can largely ignore the risk of power outages – they are extremely rare occurrences. But how reliable is this ‘knowledge’ gained over decades and based on weather records dating back 180 years?
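As a minimal sketch of how a range estimate can be framed (the PERT-style weighting is one common convention and the numbers are illustrative only):

```python
# Three-point (PERT-style) estimate for a weather-exposed activity, in days.
optimistic, most_likely, pessimistic = 8, 10, 16

expected = (optimistic + 4 * most_likely + pessimistic) / 6   # weighted mean
std_dev = (pessimistic - optimistic) / 6                      # common approximation of the spread

# Roughly an 85% confidence figure if the distribution were normal.
print(f"expected: {expected:.1f} days, with allowance: {expected + std_dev:.1f} days")
```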

Last year was the hottest year on record (by a significant margin), as was 2014 – increasing global temperatures increase the number of extreme weather events of all types, and exceptionally hot days place major strains on the electrical distribution grids, increasing the likelihood of blackouts.  What we don’t know, because there is no reliable data, are the consequences.  The risks of people not being able to get to work, of blackouts, and of inclement weather events are now different – but we don’t know how different.

Dealing with this uncertainty requires a different approach to risk management and a careful assessment of your stakeholders. Ideally some additional contingencies will be added to projects and additional mitigation action taken such as backing up during the day as well as at night – electrical storms tend to be a late afternoon / evening event. But these cost time and money…..

Getting stakeholder buy-in is more difficult:

• A small but significant number of people (including some in senior roles) flatly refuse to accept there is a problem. Despite the science they believe based on ‘personal observations’ the climate is not changing…….
• A much larger number will not sanction any action that costs money without a cast iron assessment based on valid data. But there is no valid data, the consequences can be predicted based on modelling but there are no ‘facts’ based on historical events……..
• Most of the rest will agree some action is needed but require an expert assessment of the likely effect and the value proposition for creating contingencies and implementing mitigation activities.

If it ain’t broke, don’t fix it????

The challenge facing everyone in management is deciding what to do:

• Do nothing and respond heroically if needed?
• Think through the risks and potential responses to be prepared (but wait to see what actually occurs)??
• Take proactive action and incur the costs, but never being sure if they are needed???

There is no ‘right answer’ to this conundrum; we certainly cannot provide a recommendation because we ‘don’t know’ either.  But at least we know we don’t know!

I would suggest that discussing what you don’t know about the consequences of climate change for your organisation is a serious conversation that needs to be started within your team and your wider stakeholder community.

Doing nothing may feel like a good option – wait and see (ie, procrastination) can be very attractive to a whole range of innate biases. But can you afford to do nothing?  Hoping for the best is not a viable strategy, even if inertia in your stakeholder community is intense. This challenge is a real opportunity to display leadership, communication, and negotiation skills to facilitate a useful conversation.