Tag Archives: Risk Management

The reference case for management reserves

Risk management and Earned Value practitioners, and a range of standards, advocate including contingencies in the project baseline to compensate for defined risk events. The contingency may (and should) include an appropriate allowance for variability in the estimates, modelled using Monte Carlo or similar techniques; these are the ‘known unknowns’. They also advocate creating a management reserve, held outside of the project baseline but within the overall budget, to protect the performing organisation from the effects of ‘unknown unknowns’. Following these guidelines, the components of a typical project budget are shown below.
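As an illustration only (the work packages and their triangular cost distributions below are invented assumptions, not data from any real project), the ‘known unknowns’ variability component of a contingency can be sketched with a simple Monte Carlo simulation: sample each work package's cost, sum the samples, and set the contingency to the difference between a chosen confidence level (say P80) and the base estimate.

```python
import random

# Hypothetical work packages: (best case, most likely, worst case) cost.
work_packages = [
    (90, 100, 130),
    (45, 50, 70),
    (180, 200, 260),
]

def simulate_total_cost(packages, iterations=20_000):
    """Sample a total project cost per iteration using triangular distributions."""
    totals = []
    for _ in range(iterations):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in packages))
    return sorted(totals)

random.seed(42)  # repeatable demonstration run
totals = simulate_total_cost(work_packages)

base_estimate = sum(mode for _, mode, _ in work_packages)  # sum of most-likely costs
p80 = totals[int(0.8 * len(totals))]                       # 80th percentile outcome
contingency = p80 - base_estimate
print(f"Base: {base_estimate}, P80: {p80:.1f}, contingency: {contingency:.1f}")
```

Because the example distributions are skewed towards overrun (the worst case is further from the most likely than the best case), the P80 figure sits above the base estimate, which is exactly why a contingency is needed.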

PMBOK® Guide Figure 7-8

The calculation of contingency reserves should be incorporated into an effective estimating process to determine an appropriate cost estimate for the project[1]. The application of appropriate tools and techniques, supported by skilled judgement, can arrive at a realistic cost estimate, which in turn becomes the cost baseline once the project is approved. The included contingencies are held within the project and are accessed by the project management team through normal risk management processes. In summary, good cost estimating[2] is a well understood (if not always well executed) practice that combines art and science, and includes the calculation of appropriate contingencies. Setting an appropriate management reserve is an altogether different problem.

 

Setting a realistic management reserve

Management reserves are an amount of money held outside of the project baseline to ‘protect the performing organisation’ against unexpected cost overruns. The reserves should be designed to compensate for two primary factors: the first is genuine ‘black swans’; the second is estimating errors (including underestimating the levels of contingency needed).

A ‘black swan’ is a significant event that was both unpredicted and unpredictable[3]. In his book of the same name, N.N. Taleb defines ‘black swans’ as having three distinct characteristics: they are unexpected and unpredictable outliers, they have extreme impacts, and they appear obvious after they have happened. The primary defence against ‘black swans’ is organisational resilience rather than budget allowances, but there is nothing wrong with including an allowance for these impacts.

Estimating errors leading to a low cost baseline, on the other hand, are both normal and predictable; there are several different drivers for this phenomenon, most of them innate to the human condition. The factors leading to the routine underestimating of costs and delivery times, and the overestimating of benefits to be realised, can be explained in terms of optimism bias and strategic misrepresentation. The resulting inaccurate estimates of project costs, benefits, and other impacts are a major source of uncertainty in project management; the occurrence is predictable and normal, but the degree of error is the unknown variable that leads to risk.

The way to manage this component of the management reserves is through the application of reference class forecasting which enhances the accuracy of the budget estimates by basing forecasts on actual performance in a reference class of comparable projects. This approach bypasses both optimism bias and strategic misrepresentation.

Reference class forecasting is based on theories of decision-making under uncertainty and promises more accurate forecasts by taking an ‘outside view’ of the projects being estimated. Conventional estimating takes an ‘inside view’ based on the elements of the project being estimated: the project team assesses the elements that make up the project and determines a cost. This ‘inside’ process is essential, but on its own insufficient to achieve a realistic budget. The ‘outside’ view draws on knowledge about the actual performance of a reference class of comparable projects and resolves to a percentage markup to be added to the estimated price to arrive at a realistic budget. This addition should be used to assess the value of the project (with a corresponding discounting of benefits) during the selection/investment decision-making processes[4], and logically should be held in management reserves.

Overcoming bias by simply hoping for an improvement in estimating practice is not an effective strategy! Prof. Bent Flyvbjerg’s 2006 paper ‘From Nobel Prize to Project Management: Getting Risks Right’[5] looked at 70 years of data. He found: ‘Forecasts of cost, demand, and other impacts of planned projects have remained constantly and remarkably inaccurate for decades. No improvement in forecasting accuracy seems to have taken place, despite all claims of improved forecasting models, better data, etc. For transportation infrastructure projects, inaccuracy in cost forecasts in constant prices is on average 44.7% for rail, 33.8% for bridges and tunnels, and 20.4% for roads.’

The consistency of the error, and the bias towards significantly underestimating costs (and correspondingly overestimating benefits), suggest the root causes of the inaccuracies are psychological and political rather than technical. Technical errors should average towards zero (pluses balancing out minuses) and should improve over time as the industry becomes more capable, whereas there is no imperative for psychological or political factors to change:

  • Psychological explanations can account for inaccuracy in terms of optimism bias; that is, a cognitive predisposition found with most people to judge future events in a more positive light than is warranted by actual experience[6].
  • Political factors can explain inaccuracy in terms of strategic misrepresentation. When forecasting the outcomes of projects, managers deliberately and strategically overestimate benefits and underestimate costs in order to increase the likelihood that their project will gain approval and funding either ahead of competitors in a portfolio assessment process or by avoiding being perceived as ‘too expensive’ in a public forum – this tendency particularly affects mega-projects such as bids for hosting Olympic Games.

 

Optimism Bias

Reference class forecasting was originally developed to compensate for the type of cognitive bias that Kahneman and Tversky found in their work on decision-making under uncertainty, which won Kahneman the 2002 Nobel Prize in economics[7]. They demonstrated that:

  • Errors of judgment are often systematic and predictable rather than random.
  • Many errors of judgment are shared by experts and laypeople alike.
  • The errors remain compelling even when one is fully aware of their nature.

Because awareness of a perceptual or cognitive bias does not by itself produce a more accurate perception of reality, any corrective process needs to allow for this.

 

Strategic Misrepresentation

When strategic misrepresentation is the main cause of inaccuracy, differences between estimated and actual costs and benefits are created by political and organisational pressures, typically to have a business case approved, to have a project accepted, or to get on top of issues in the 24-hour news cycle. The Grattan Institute (Australia) has reported that in the last 15 years Australian governments spent $28 billion more on transport infrastructure projects than taxpayers had been led to expect. A key ‘political driver’ for these cost overruns was announcing the project (to feed the 24-hour news cycle) before the project team had properly assessed its costs. While ‘only’ 32% of the projects were announced early, these projects accounted for 74% of the value of the cost overruns.

Reference class forecasting will still improve accuracy in these circumstances, but the managers and estimators may not be interested in this outcome because the inaccuracy is deliberate. Biased forecasts serve their strategic purpose, which overrides any commitment to accuracy and truth; consequently the application of reference class forecasting needs strong support from the organisation’s overall governance functions.

 

Applying Reference Class Forecasting

Reference class forecasting does not try to forecast specific uncertain events that will affect a particular project, but instead places the project in a statistical distribution of outcomes from the class of reference projects.  For any particular project it requires the following three steps:

  1. Identification of a relevant reference class of past, similar projects. The reference class must be broad enough to be statistically meaningful, but narrow enough to be truly comparable with the specific project – good data is essential.
  2. Establishing a probability distribution for the selected reference class. This requires access to credible, empirical data for a sufficient number of projects within the reference class to make statistically meaningful conclusions.
  3. Comparing the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
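The second and third steps can be sketched in a few lines of code. The overrun figures below are invented for illustration, not real reference class data: build the empirical distribution of cost overruns (actual cost divided by estimate, minus one) from the reference class, then read off the markup at the required confidence level.

```python
# Hypothetical cost overruns from a reference class of comparable projects,
# expressed as (actual / estimate) - 1.  Real use needs credible empirical data.
overruns = [0.10, 0.25, -0.05, 0.40, 0.18, 0.55, 0.08, 0.30, 0.22, 0.47]

def reference_class_markup(overruns, percentile=0.8):
    """Return the markup that 'percentile' of reference projects stayed within."""
    ranked = sorted(overruns)
    index = min(int(percentile * len(ranked)), len(ranked) - 1)
    return ranked[index]

estimate = 10_000_000                      # the 'inside view' cost estimate
markup = reference_class_markup(overruns)  # the 'outside view' adjustment
budget = estimate * (1 + markup)           # realistic budget for approval
```

The markup (here the 80th-percentile overrun) would be held as management reserve rather than added to the project's cost baseline, consistent with the budget structure described above.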

The UK Treasury was an early user of reference class forecasting and continues the practice. A 2002 study by Mott MacDonald for the Treasury found that, over the previous 20 years of government projects, the average works duration was underestimated by 17%, CAPEX was underestimated by 47%, and OPEX was underestimated by 41%. There was also a small shortfall in benefits realised.

 

This study fed into the updating of the Treasury’s ‘Green Book’ in 2003, which is still the standard reference in this area. The Treasury’s Supplementary Green Book Guidance: Optimism Bias[8] provides the recommended range of markups with a requirement for the ‘upper bound’ to be used in the first instance by project or program assessors.

These are very large markups to shift from an estimate to a likely cost, and they relate to the UK government’s estimates (ie, the client’s view), not the final contractors’ estimates; errors of this size would bankrupt most contractors. However, Gartner and most other authorities routinely report that projects and programs overrun their cost and time estimates (particularly internal projects and programs), and the reported ‘failure rates’ and overruns have remained relatively stable over extended periods.

 

Conclusion

Organisations can choose to treat each of their project failures as a ‘unique one-off’ occurrence (another manifestation of optimism bias) or learn from the past and develop their own framework for reference class forecasting. The markups don’t need to be included in the cost baseline (the project’s estimates are its estimates, and the team should attempt to deliver as promised); but they should be included in the assessment process for approving projects, and the management reserves held outside of the baseline to protect the organisation from the effects of both optimism bias and strategic misrepresentation. As systems, and particularly business cases, improve, the reference class adjustments should reduce, but they are never likely to reduce to zero; optimism is an innate characteristic of most people and political pressures are a normal part of business.

If this post has sparked your interest, I recommend exploring the UK information to develop a process that works in your organisation: http://www.gov.uk/government/publications/the-green-book-appraisal-and-evaluation-in-central-governent

______________________

[1] For more on risk assessment see: http://www.mosaicprojects.com.au/WhitePapers/WP1015_Risk_Assessment.pdf

[2] For more on cost estimating see: http://www.mosaicprojects.com.au/WhitePapers/WP1051_Cost_Estimating.pdf

[3] For more on ‘black swans’ see: https://mosaicprojects.wordpress.com/2011/02/11/black-swan-risks/

[4] For more on portfolio management see: http://www.mosaicprojects.com.au/WhitePapers/WP1017_Portfolios.pdf

[5] Project Management Journal, August 2006.

[6] For more on the effects of bias see: http://www.mosaicprojects.com.au/WhitePapers/WP1069_Bias.pdf

[7] Kahneman, D. (1994). New challenges to the rationality assumption. Journal of Institutional and Theoretical Economics, 150, 18–36.

[8] Green Book documents can be downloaded from: http://www.gov.uk/government/publications/the-green-book-appraisal-and-evaluation-in-central-governent

Risk management handbook published

The Risk Management Handbook edited by Dr. David Hillson (the ‘risk doctor’) is a practical guide to managing the multiple dimensions of risk in modern projects and business.  We contributed Chapter 10: Stakeholder risk management.

The 23 chapters survey the risk management landscape, providing a broad and up-to-date introduction to risk, with expert guidance on current best practice and cutting-edge insight into new developments within risk management.

For more on the book, see: www.koganpage.com/product/the-risk-management-handbook-9780749478827

The language used to define risks can contribute to failure.

If a risk is going to be adequately managed, it needs to be defined.  Failing to describe the actual risk (or risks) will almost inevitably lead to project failure and will frequently exacerbate the damage.

In recent times there seems to have been an explosion of documents in the public domain, including academic papers (where one would have hoped the reviewers and editors knew better), listing as ‘risks’ factors that cannot ever be risks. Stating a ‘fact’ as a risk hides the real or consequential risks that may be manageable.

Risk 101 – a risk is an uncertainty that may affect a project objective if it occurs. For something to be a risk, there has to be an uncertainty, and the uncertainty may have a positive or negative impact on one or more objectives (see more on risk management). Risk management involves balancing the uncertainty, its potential impact, and the cost and effort needed to change these for the better. But to do this you need to focus on the uncertainties that can be managed.

One of the more frequently misdescribed risks is ‘technical complexity’.  The degree of technical difficulty involved in a project is a FACT that can be measured and described!  Some projects, such as launching a space rocket, are technically complex, others less so; but NASA has a far higher success rate in its rocket launches than most IT departments have in developing software applications that achieve their objectives.  The technical difficulty may give rise to consequential risks that need addressing, but these risks have to be identified and catalogued if they are going to be managed. Some of the risks potentially arising out of technical complexity include:

  • Inadequate supply of skilled resources in the marketplace / organisation;
  • Management failing to allow adequate time for design and testing;
  • Allowing technicians to ‘design in’ unnecessary complexity;
  • Management failing to provide appropriately skilled resources;
  • Management lacking the skills needed to properly estimate and manage the work;
  • Etc.

Another common entry in many of these pseudo risk lists is ‘lack of senior management support’.  This is a greyer area: the project team’s perception of management support and the actual level of support from senior management may differ. Developing an understanding of the actual attitude of key senior managers requires a methodical approach using tools such as the Stakeholder Circle.  However, even after defining the actual attitude of important senior managers, the lack of precision in the risk description will often hide the real risks and their potential solutions or consequences:

  • If there is a real lack of senior management support, the project should be cancelled; its probability of failure is greater than 80% and continuing is simply wasting money.
  • If the problem is senior management failing to understand the importance of the project, this is an issue (it exists) and the solution is directed communication (see more on directed communication). The risk is that the directed communication effort will fail, leading to project failure; this risk needs careful monitoring.
  • If the problem is a project sponsor (or steering committee) who is not committed to project success, and/or a sponsor (or steering committee) lacking understanding of his/her role (see more on the role of a sponsor), this is another issue, with a solution based in education or replacement. Depending on the approach to resolving the issue (and its guaranteed impact on project success if it remains unresolved), the risk is either that the necessary education process may not work, and/or that poor governance and senior management oversight will allow the issue to continue unresolved – these specific risks need to be explicitly described and acknowledged if they are to be managed.

The first step to managing risks effectively is developing a precise description of the actual risk that requires managing. If there are several associated risks, log each one separately and then group them under a general classification.   The description of each risk is best done using a common meta language such as:

  • Template: ‘[Short name]: If a [description of risk] caused by [cause of risk] occurs, it may cause [consequence of occurrence]’.
  • Example: ‘Storms: If a heavy thunderstorm caused by summer heat occurs, it may cause flooding and consequential clean up’.

For each risk you need to:

  • Define the risk category and short name;
  • Describe the risk using an effective ‘risk meta language’;
  • Determine if the risk is an opportunity or threat and quantify its effect;
  • Prioritise the risk using a qualitative assessment process;
  • Determine the optimum response;
  • Implement the response and measure its effectiveness (see more on risk assessment).

A simple Excel template such as this can help: http://www.mosaicprojects.com.au/Practical_Risk_Management.html#Tools
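For teams that prefer code to spreadsheets, the meta language and the per-risk fields listed above can be captured in a simple structure. This is a sketch only: the field names and the probability-times-impact prioritisation score are assumptions, not part of any cited standard.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    short_name: str     # eg 'Storms'
    category: str       # general classification used for grouping
    risk: str           # [description of risk]
    cause: str          # [cause of risk]
    consequence: str    # [consequence of occurrence]
    opportunity: bool   # True = opportunity, False = threat
    probability: float  # qualitative likelihood score mapped to 0..1
    impact: float       # relative impact score, 0..1

    def statement(self) -> str:
        """Render the risk in the standard meta-language form."""
        return (f"{self.short_name}: If a {self.risk} caused by {self.cause} "
                f"occurs, it may cause {self.consequence}.")

    def priority(self) -> float:
        """Simple qualitative prioritisation score: probability x impact."""
        return self.probability * self.impact

# The worked example from the meta-language template above:
storm = Risk("Storms", "Weather", "heavy thunderstorm", "summer heat",
             "flooding and consequential clean up", False, 0.3, 0.6)
```

Logging each associated risk as a separate `Risk` instance, grouped by `category`, mirrors the advice above to log risks individually and then classify them.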

Managing issues is similar, the key difference is the consequences of an unresolved issue are certain – the issue is a fact that has to be dealt with (see more on issues management).

There are a number of factors that can cause both risks and issues to be improperly defined, some technical, most cultural. Three of the most important are:

  • Dealing with easy to identify symptoms without looking for the root cause of the risk / issue (see more on root cause analysis).
  • A management culture that does not allow open and honest reporting of risks and issues; preferring to hide behind amorphous descriptions such as ‘technical complexity’ rather than the real risk ‘management’s inability to manage this level of complicated technology’.
  • Failing to allow adequate time to analyse the stakeholder community using tools such as the Stakeholder Circle, so that the full extent of risks associated with people’s capabilities and attitudes can be understood – these can account for up to 90% of the actual risks in most projects.

Management culture is the key to both allowing and expecting rigorous and honest assessment of risk. One of the key functions of every organisation’s governing body is to design, create and maintain the organisation’s management culture; this is a problem that starts at the top! For more on the roles of governance see: http://www.mosaicprojects.com.au/WhitePapers/WP1096_Six_Functions_Governance.pdf.

Stakeholders and Reputational Risk

Your reputation and your organisation’s reputation are valuable assets. The willingness of others to trust you, their desire to work with you and virtually every other aspect of the relationship between you and your stakeholders is influenced by their perception of your reputation (see more on The value of trust).  But reputations are fragile: they can take a lifetime to build and seconds to lose. Some of the factors influencing them are:

  1. Reputation cannot be controlled: it exists in the minds of others so it can only be influenced, not managed directly.
  2. Reputation is earned: trust is based on consistent behaviour and performance.
  3. Reputation is not consistent: it depends on each stakeholder’s view. One organisation can have many different reputations, varying with each stakeholder.
  4. Reputation will vary: each stakeholder brings a different expectation of behaviour or performance and so will have a distinct perception of reputation.
  5. Reputation is relational: you have a reputation with someone for something. The key question is therefore: ‘with whom, for what?’
  6. Reputation is comparative: it is valued in comparison to what a particular stakeholder experiences or believes in relation to peers, performance and prejudice.
  7. Reputation is valuable: but the true value of reputation can only be appreciated once it is lost or damaged.

Estimating the ‘true value’ of your reputation is difficult, and as a consequence decisions on how much to invest in enhancing and protecting your reputation become a value judgement rather than a calculation. Your reputation is created and threatened by both your actions and their consequences (intended or not).  Some actions and their effects on your reputation are predictable; others are less so, and their consequences, good or bad, are even less certain. This is true regardless of your intention; unexpected outcomes can easily cause unintended benefit or damage to your reputation.

Building a reputation requires hard work and consistency; the challenge is protecting your hard earned reputation against risks that can cause damage; and you never know for sure what will cause reputational damage until it is too late – many reputational risks are emergent.

Managing Reputational Risk in Organisations

Because an organisation’s reputation is not easy to value or protect, managing reputational risk is difficult! This is particularly true for larger organisations where thousands of different interactions between staff and stakeholders are occurring daily.

The first step in managing an organisation’s reputational risk is to understand the scope of possible damage, as well as the potential sources and the degree of possible disruption. The consequence of a loss of reputation is always the withdrawal of stakeholder support:

  • In the private sector this is usually investor flight and share value decline; these can spiral out of control if confidence cannot be restored.
  • In the public sector this is typically withdrawal of government support to reflect declining confidence.
  • In the professional sector client confidence is vital for business sustainability; a loss of reputation means a loss of clients.

Each sector can point to scenarios where the impact of reputation damage varies from mild to catastrophic; and whilst the consequences can be measured after the event, they are not always predictable in advance.  To overcome this problem, managing reputation risk for an organisation requires three steps:

  • Predict: All risk is future uncertainty, and an appropriate risk forecasting system to identify reputation risk is required – creative thinking is needed here! The outcomes from a reputational risk workshop will be specific to the organisation and the information must feed directly into the governance process if reputation risk is to be taken seriously (see more on The Functions of Governance).
  • Prepare: Reputation risk is a collective responsibility, not just the governing body’s. All management and operational staff must recognise the organisation’s reputation is important and take responsibility for protecting it in their interaction with stakeholders. The protection of reputation should also be a key element in the organisation’s disaster recovery plans.
  • Protect: A regular vulnerability review will reveal where reputation risk is greatest, and guide actions to prevent possible damage. Each vulnerability must be assessed objectively and actions taken to minimise exposure. Significant risks will need a ‘protection plan’ developed and then implemented and monitored.

Dealing with a Reputational Risk Event

When a risk event occurs, some standard elements need to be part of the response for individuals and organisations alike. For reputation-enhancing risk events, make sure you acknowledge the ‘good luck’ appropriately and take advantage of the opportunity in a suitably authentic way. Over-hyping an event will be seen as inauthentic and have a negative effect on reputation; but good news and good outcomes should be celebrated. Reputation-threatening risk events need a more proactive approach:

  • Step 1: Deal with the event itself. You will not protect your reputation by trying to hide the bad news or ignoring the issue.  Proactively working to solve the problem in a way that genuinely minimises harm for as many stakeholders as possible also minimises the damage that has to be managed.
  • Step 2: Communicate. And keep communicating – organisations need to have a sufficiently senior person available quickly as the contact point and keep the ‘news’ coming. Rumours and creative reporting will always be worse than the facts and will grow to fill the void. All communication needs to be open, honest and as complete as possible at the time.  Where you ‘don’t know’, tell people what you are doing to find out (see Integrity is the key to delivering bad news successfully).
  • Step 3: Keep your promises and commitments. If this becomes impossible because of changing circumstances, tell people as soon as you know; don’t wait for them to find out.
  • Step 4: Follow up afterwards. Actions that show you really care after the event can go a long way towards repairing the damage to your reputation.

Summary

Reputation is ephemeral, and a good reputation is difficult to create and maintain. Warren Buffett, in his 2015 memo to his top management team at Berkshire Hathaway, emphasised that their top priority must be to ‘zealously guard Berkshire’s reputation’. He also reminded his leadership team that ‘we can afford to lose money – even a lot of money. But we can’t afford to lose reputation – even a shred of reputation’ (discussed in Ethics, Culture, Rules and Governance). In the long run I would suggest this is true for every organisation and individual – your reputation is always in the minds of other people!

Project Risk Management – how reliable is old data?

One of the key underpinnings of risk management is reliable data on which to base probabilistic estimates of what may happen in the future.  The importance of understanding the reliability of the data being used is emphasised in PMBOK® Guide 11.3.2.3 Risk Data Quality Assessment and virtually every other risk standard.

One of the tenets underpinning risk management in all of its forms, from gambling to insurance, is the assumption that reliable data about the past is a good indicator of what will happen in the future. There is no certainty in this process, but there is a degree of probability that future outcomes will be similar to past outcomes if the circumstances are similar. ‘Punters’ know this from their ‘form guides’, insurance companies rely on it to calculate premiums, and almost every prediction of a future outcome relies on an analogous interpretation of similar past events. Project estimating and risk management are no different.

Every time or cost estimate is based on an understanding of past events of a similar nature; in fact, the element that differentiates an estimate from a guess is having a basis for the estimate! See:
–  Duration Estimating
–  Cost Estimating

The skill in estimating both normal activities and risk events is understanding the available data, and being able to adapt the historical information to the current circumstances. This adaptation requires understanding the differences in the work between the old and the current, and the reliability and stability of the information being used. Range estimates (three-point estimates) can be used to frame this information and allow a probabilistic assessment of the event; alternatively, a simple ‘allowance’ can be made. For example, in my home state we ‘know’ three weeks a year are lost to inclement weather if the work is exposed to the elements.  Similarly, office-based projects in the city ‘know’ they can largely ignore the risk of power outages – they are extremely rare occurrences. But how reliable is this ‘knowledge’ gained over decades and based on weather records dating back 180 years?
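One common way to turn a three-point range estimate into a single planning number is the PERT (beta distribution) approximation. The duration figures below are invented for illustration; the formula itself is the standard PERT weighting.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Return the PERT weighted mean and approximate standard deviation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# Hypothetical task: 8 days best case, 10 most likely, 16 if weather is bad
mean, sd = pert_estimate(8, 10, 16)

# Roughly a P84 planning duration: mean plus one standard deviation
allowance = mean + sd
```

Note how the asymmetric range (the pessimistic value is further from the most likely than the optimistic one) pulls the weighted mean above the most likely value, quantifying the same skew that a simple inclement-weather ‘allowance’ addresses.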


Last year was the hottest year on record (by a significant margin), as was 2014. Increasing global temperatures increase the number of extreme weather events of all types, and exceptionally hot days place major strains on electrical distribution grids, increasing the likelihood of blackouts.  What we don’t know, because there is no reliable data, is the consequences. The risks of people not being able to get to work, blackouts, and inclement weather events are different – but we don’t know how different.

Dealing with this uncertainty requires a different approach to risk management and a careful assessment of your stakeholders. Ideally some additional contingencies will be added to projects, and additional mitigation action taken, such as backing up during the day as well as at night – electrical storms tend to be a late afternoon / evening event. But these cost time and money.

Getting stakeholder buy-in is more difficult:

  • A small but significant number of people (including some in senior roles) flatly refuse to accept there is a problem. Despite the science, they believe, based on ‘personal observations’, that the climate is not changing.
  • A much larger number will not sanction any action that costs money without a cast-iron assessment based on valid data. But there is no valid data; the consequences can be predicted based on modelling, but there are no ‘facts’ based on historical events.
  • Most of the rest will agree some action is needed but require an expert assessment of the likely effect and the value proposition for creating contingencies and implementing mitigation activities.

If it ain’t broke, don’t fix it???? 

The challenge facing everyone in management is deciding what to do:

  • Do nothing and respond heroically if needed?
  • Think through the risks and potential responses to be prepared (but wait to see what actually occurs)??
  • Take proactive action and incur the costs, but never being sure if they are needed???

There is no ‘right answer’ to this conundrum, we certainly cannot provide a recommendation because we ‘don’t know’ either.  But at least we know we don’t know!

I would suggest that discussing what you don’t know about the consequences of climate change on your organisation is a serious conversation that needs to be started within your team and your wider stakeholder community.

Doing nothing may feel like a good option – wait and see (ie, procrastination) can be very attractive to a whole range of innate biases. But can you afford to do nothing?  Hoping for the best is not a viable strategy, even if inertia in your stakeholder community is intense. This challenge is a real opportunity to display leadership, communication and negotiation skills to facilitate a useful conversation.

Extreme Risk Taking is Genetic…

A 2014 scientific study, Going to Extremes – The Darwin Awards: sex differences in idiotic behaviour, highlights the need for gender diversity.  The class of risk studied in this report is ‘idiotic risk’, defined as a senseless risk where the apparent payoff is negligible or non-existent, and the outcome is often extremely negative and often final. The results suggest that having an all-male or male-dominated decision-making group may be a source of risk in itself.

Sex differences in risk seeking behaviour, emergency hospital admissions, and mortality are well documented and confirm that males are more at risk than females. Whilst some of these differences may be attributable to cultural and socioeconomic factors (eg, males may be more likely to engage in contact and high risk sports, and are more likely to be employed in higher risk occupations), sex differences in risk seeking behaviour have been reported from an early age, raising questions about the extent to which these behaviours can be attributed purely to social and cultural differences. This study extends the earlier work to look at ‘male idiot theory’ (MIT), based on the archives of the ‘Darwin Awards’. Its hypothesis, derived from Women are from Venus, men are idiots (Andrews McMeel, 2011), is that many of the differences in risk seeking behaviour may be explained by the observation that men are idiots and idiots do stupid things… but little is known about sex differences in idiotic risk taking behaviour.

The Darwin Awards are named in honour of Charles Darwin, and commemorate those who have improved the human gene pool by removing themselves from it in an idiotic way (note the photographs are both of unsuccessful attempts to win an award).  Whilst usually awarded posthumously (the nominee normally has to kill themselves), the 2014 The Thing Ring award shows there are other options.  Based on this invaluable record of idiotic human behaviour, the study considered the gender of the award recipients over a 20 year period (1995-2014) and found a marked sex difference in Darwin Award winners: males are significantly more likely to receive the award than females.

Of the 413 Darwin Award nominations in the study period, 332 were independently verified and confirmed by the Darwin Awards Committee. Of these, 14 were shared by male and female nominees (usually overly adventurous couples in compromising positions – see: La Petite Mort), leaving 318 valid cases for statistical testing. Of these 318 cases, 282 Darwin Awards were awarded to males and just 36 to females, meaning 88.7% of the idiots accepted as Darwin Award winners were male!
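The headline percentage is simple arithmetic, and the imbalance is so lopsided that it is highly significant under a simple binomial test against a 50/50 split (a sketch in Python; the study’s own statistical method may differ, so the test choice here is my assumption):

```python
from math import comb

# Figures quoted from the study (Going to Extremes, 2014)
nominations = 413
verified = 332
shared = 14                    # awards shared by male and female nominees
valid = verified - shared      # 318 cases used for statistical testing
male, female = 282, 36
assert male + female == valid

pct_male = 100 * male / valid
print(f"{pct_male:.1f}% of winners were male")   # prints "88.7% of winners were male"

# Two-sided binomial test: probability of 282 or more male winners
# out of 318 if either sex were equally likely, doubled for two tails.
p_one_tail = sum(comb(valid, k) for k in range(male, valid + 1)) / 2 ** valid
print(f"two-sided p ≈ {2 * p_one_tail:.1e}")     # vanishingly small
```

Even without the formal test, 282 versus 36 leaves little room for doubt about the direction of the effect.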

Gender diversity on decision making bodies may help to reduce this potential risk factor in two ways.  First, by reducing the percentage of people potentially susceptible to MIT. Second, by modifying the social and cultural environment within the decision making body, reducing its tendency to take ‘extreme risk decisions’.

One well documented example is Australia’s current Federal Government. Given the extremely limited representation of women in the make-up of the Abbott government, and some of the self-destructive decisions it has made, I’m wondering if there is a correlation. A softer, less aggressive, lower risk approach to implementing many of the policies the government has failed to enact may have resulted in a very different outcome.

Two events not to miss

Two events in Melbourne not to be missed!

1: IPMD Melbourne:

First, on the 14th November – Melbourne’s first ever International Project Management Day breakfast.


To preview the thoughts of Dr Amantha Imber, the breakfast keynote speaker, on innovation and creativity see: http://www.youtube.com/watch?v=nfUNp9LABhU.  Want more?  Download the event PDF and book your breakfast from: http://www.mosaicprojects.com.au/ftp/IPMD_14th%20Nov_Melbourne%20Event.pdf

Unfortunately I cannot make the breakfast – we are running our last PMP and CAPM courses for the year that week (starting 11th November) and, as a scheduler, I know being in two places at the same time is difficult. For more on the training see: http://www.mosaicprojects.com.au/Training-Schedule.html

2: Making Sense of Schedule Risk Analysis – Free Event

Second, on Wednesday 20th November (starting 5.30pm), international risk management expert Tony Welsh, President of Barbecana Inc and developer of the Full Monte risk analysis tool, is the speaker at the AIPM Project Controls SIG (PC-SIG) meeting, to be held at The Water Rat Hotel, 256 Moray Street, South Melbourne: http://www.thewaterrathotel.com.au/. There is no catering for the forum, but interested participants are invited to pre- and post-forum drinks at the bar (after all, it is a pub!!).

This is a free event open to anyone with an interest in project controls; however, to make sure there’s adequate seating we are asking you to register with AIPM at http://www.aipm.com.au/iMIS/Events/Event_Display.aspx?EventKey=VI131120&zbrandid=2139&zidType=CH&zid=5168408&zsubscriberId=505907810&zbdom=http://aipm.informz.net. But as long as you don’t mind the risk of standing, pre-registration is not essential. I definitely will be attending this event.