Monthly Archives: January 2015

Two new papers on the web

We presented papers at the Engineers Australia MCPC14 conference late last year. They are now available on our website.

Understanding Design – The challenge of informed consent looks at the problem of communicating complex project information to stakeholders in a way they can understand.

Scheduling Complexity discusses the challenges of managing time in complex projects and the need for qualified schedulers.

For more of our papers and articles see:

Take the time to be creative

One of the most overlooked aspects of creativity and learning is simply taking the time needed to reflect and think. Professor Manfred Kets de Vries suggests that the fast-paced, continuous-access, instant-response culture we operate in is eroding people’s ability to reflect and create innovative solutions to problems. The pressure to ‘just finish this’ or ‘find out now’ (reaching for Google on your smartphone) is usually too great to resist. But working quicker and harder is not necessarily working smarter.

De Vries believes that deliberately slowing down and setting aside regular periods of ‘constructively doing nothing’ may be the best thing you can do to induce a state of mind that nurtures imagination and creativity, and improves your mental well-being, by giving ideas time to mature.

“Learning without reflection is a waste, reflection without learning is dangerous” – Confucius

Busyness may be all-pervasive; almost everyone seems glued to their PDA and feels compelled to respond to virtually every email instantaneously. But being busy and being effective are not the same thing (unless you work in a customer service or support role)!

If you are in a management, problem-solving, or creative role, a significant part of your job is developing new ideas or concepts that have been thought through and optimised. This requires thinking time. But is creatively doing nothing really acceptable? Most of us feel guilty if we don’t have something to do, and we get a buzz when we feel really busy. These busy behaviours generate their own reward by stimulating the brain to release dopamine into the bloodstream, giving us a rush that can make stopping being busy so much harder. It really is nice to feel wanted, busy and in demand.

The problem with being busy is that if you don’t allow yourself periods of uninterrupted, freely associated thought, then personal growth, insight and creativity are less likely to emerge. And taking the time to ‘smell the daisies’ has multiple benefits…

The world of multitasking and hyperactivity helps us delude ourselves that we are productive. The reality is that social media is reactive and not very original; it constricts creativity and can impact mental health. If we don’t know how to calibrate the balance between action and reflection we may become casualties of information overload and psychological burnout.

Similarly, in many contemporary organisations work addicts are encouraged and rewarded; the behaviour is superficially useful to the organisation. Unfortunately, a workaholic environment can contribute to serious personal and mental health problems including low morale, depression, and above average absenteeism. The most effective knowledge workers are those who can both act and reflect, which means unplugging themselves from the compulsion to keep busy.

Deliberately doing nothing creates valuable opportunities for unconscious thought processes. Unconscious thought excels at integrating and associating information; we are less constrained by conventional associations and more likely to generate novel ideas. As well as being good for our mental health, doing nothing may turn out to be the best way to resolve complex problems.  Italian painter Giorgio Vasari summed it up well when he said “Men of genius sometimes accomplish most when they work least”.

Some of the ways you can create time for reflection include:

  1. Maintaining your relationships. We all need meaningful contact with people to feel fully alive. Maintaining our relationships needs interaction, engagement and time out from work. Conversation is also a powerful stimulant for creativity (just make sure you have a notebook handy).
  2. Saying No. Being able to say no is a key skill. Simply saying no to unimportant requests can free up time for more important things (see more on personal time management).
  3. Managing your sleep habits. In a perfect world we should all sleep around eight hours a night. Good sleep is essential for personal growth and creativity.

The challenge with taking time out to be creative is that good ideas always come ‘from nowhere’, usually at the most inappropriate moments (eg, in the shower). If this happens to you, you are not alone; from Archimedes in his bath to Newton in his Lincolnshire garden (but no ‘apple’), brilliant ideas just seem to appear. So the final element in creatively doing nothing is being able to trap your ideas when they surface.

In summary, a walk around outside or time spent with your feet on the desk can be more productive than working through a lunch break – now all you have to do is convince the boss.

For a different take on productive laziness see:

New Articles posted to the Web #19

We have been busy beavers updating the PM Knowledge Index on our website with White Papers and Articles. Some of the more interesting ones uploaded during the last couple of weeks include:

And we continue to tweet a free PMI-style exam question every day for PMP, CAPM and PMI-SP candidates: see today’s question and then click through for the answer and the Q&As from last week.

You are welcome to download and use the information under our Creative Commons licence.

Project failure revisited

Over the holiday period there have been a couple of interesting discussions on project success and failure. The consensus among the many commentators was that the simplistic measures of time, cost and scope are inadequate, but there was little consensus on the solution. This post poses some of the questions that need a considered answer:

Firstly, the APM website posed the question: which of the following projects was successful?

Two organisations decided to undertake identical projects with a normalised value of $1 million.
–  Organisation A assessed their project and set the project budget at $800,000
–  Organisation B assessed their project and set the project budget at $1,200,000
Organisation A’s team did their best to ‘meet the challenge’ and achieved an outcome of $900,000 – a cost overrun of $100,000, nominally a project failure.
Organisation B’s team did ‘a good job’ and achieved an outcome of $1,100,000 – a cost underrun of $100,000, nominally a project success.

But which project is really successful? The one that cost $900,000, or the one that cost $1,100,000 to produce the same output? This example is simplistic – the numbers are given and the problem is demonstrated – but nowhere will you ever have two identical projects run against different baselines. How can you assess the ‘project risk’ caused by soft or hard targets?

Similar issues arise when allocating the blame for ‘failure’ to different parts of the ‘performing organisation’. Many so-called project management and project leadership failures are likely to be either unavoidable consequences or symptoms of far more significant underlying issues (for more on this see: Project or Management Failures?). Focusing on the superficial (and blaming the project manager) prevents a more thorough ‘root cause analysis’ of the real issues and problems in organisations. I will take two examples and, borrowing from Toyota’s ‘Five Whys’, ask ‘why’ a few times:

  1. Failure of PM leadership. The project manager failed to lead, relate to, or communicate with stakeholders. But the project manager did not appoint him/herself; some of the unanswered questions are:
    1. Why did the organisation appoint a PM lacking the requisite skills?
    2. Why did the organisation fail to support/train the PM?
    3. Why were the failings not picked up and resolved during routine project surveillance?
  2. Failing to use recognised techniques such as risk management. Some of the unanswered questions are:
    1. Why does the organisation allow sub-standard practices to exist?
    2. Does the organisation have proper templates, processes and support in place to support the practice?
    3. Does the organisation provide adequate time, training and resources to implement the practice?
    4. Why were the failings not picked up and resolved during routine project surveillance?

The answer to these questions may go back to organisational culture, the overall organisational ability to effectively manage and support its projects (the strategic management of projects)  and/or ultimately the governance of the organisation.

Certainly some projects will fail for project-related reasons; projects and programs are innately risky, and this means project-related failures are to be expected – minimising this cause of failure will be valuable. But any simple measurement of performance against cost and time targets is influenced by the way the initial target was set in the first place.

The problem is compounded by the lack of ‘root cause’ assessments. I expect a proper study of the root causes of many so-called ‘project failures’ will show many projects are effectively set up to fail by the organisation. Allowing executive management to continue with these types of practices is ultimately a governance failure. Addressing the ‘root causes’ of failure hidden in executive management practice, culture and governance is likely to generate significantly greater benefits than simply trying to ‘fix project management’; but you cannot see the failures without proper data.

One initiative aimed at working towards a standardised assessment of project failures is a series of articles being published by Prof. Alan Stretton in PM World Journal, see:  (registration is free).

Given the general management mantra of ‘you cannot manage what you cannot measure’, developing a measure of project failure that is valuable and consistent would be a good start in developing the data needed to allow management improvement across the board.

As Alan concluded in the referenced article:

The above deficiencies in current data all point to an urgent and obvious need to develop comprehensive data on causes of project failures – preferably validated by appropriate and agreed criteria as to what constitutes success / failure, and covering the widest possible range of project types and project management application areas.

A suggestion (or challenge) here is for global project management organisations (IPMA, PMI, apfpm, etc) to jointly create a framework to develop and share project success/failure data, covering the widest possible range of project management types and application areas. This would include:

  • Developing and agreeing common criteria for project success / failure;
  • Collecting and sharing validated data on success/ failure rates;
  • Researching and sharing validated data on success drivers / failure causes.

If you agree, support Alan and start lobbying your PM association of choice. Defining the problem is easy; solving it elegantly is not!

Ethics, Culture, Rules and Governance

Far too many governing bodies spend far too much time focused on rules, conformance and assurance. While these factors are important, they should be an outcome of good governance, not the primary focus of the governors.

When an organisation sets high ethical standards and invests in building an executive management culture that supports those standards, the need for ‘rules’ is minimised and the organisation as a whole focuses on doing ‘good business’ (see: Corporate Governance).

The order of the functions outlined in The Functions of Governance places ‘Determining the objectives of the organisation’, ‘Determining the ethics of the organisation’, and ‘Creating the culture of the organisation’ ahead of both assurance and conformance. The rationale is that creating a culture of ‘doing the right thing’, extending from the very top of the organisation to the very bottom, means most people, most of the time, will be doing the ‘right thing’ – making assurance and conformance a relatively simple adjunct, there to catch the few errors and malpractices that will inevitably occur.

A very strong endorsement of this approach to governance has recently come from one of the world’s most successful business people, Warren Buffett. His recent memo to the top management of his holding company Berkshire Hathaway’s subsidiaries (his ‘All Stars’) emphasised that their top priority must be to ‘zealously guard Berkshire’s reputation’ (read: act ethically). He also reminded his leadership team that ‘we can afford to lose money – even a lot of money. But we can’t afford to lose reputation – even a shred of reputation’.

His advice to managers also included this guidance: ‘There’s plenty of money to be made in the centre of the court. If it’s questionable whether some action is close to the line, just assume it’s outside and forget it’. This is a simple ethical guideline that avoids the need for pages of precise ‘rules’ designed to map the edge of legality, drafted by lawyers and argued over endlessly. See more on Ethics.

Reading the memo, it’s clear Buffett has built a massive organisation based on an ethical culture, employs executives who reinforce the culture, and still makes a very good profit. It’s a long-term investment, but infinitely preferable to the sort of issues that confronted Salomon Bros. 20 years ago (see: Warren Buffett’s Wild Ride at Salomon), the banks associated with the GFC, and the ongoing damage continuing to be suffered by the Australian banks as more ethical failures come to light. I’m sure they all had hundreds of ‘rules’, some of which may even have been sensible.

A copy of Warren Buffett’s memo can be downloaded from:

The three phases of project controls

The need to control projects (or bodies of work that we would call projects today) extends back thousands of years. Certainly the Ancient Greeks and Romans used contracts and contractors for many public works, which meant the contractors needed to manage the work within a predefined budget and an agreed timeframe. However, what was done to control projects before the 19th century is unclear – ‘phase 0’. But from the 1800s onward there were three distinct phases in the control processes.

Phase 1 – reactive

The concept of using charts to show the intended sequence and timing of the work became firmly established in the 19th century, and the modern bar chart was in use by the start of the 20th century. One of the best examples is from a German project in 1910 (see: Schürch). A few years later Henry Gantt started publishing his various charts.


From a controls perspective, these charts were static and reactive. The diagrams enabled management to see, in graphic form, how well work was progressing, and indicated when and where action would be necessary to keep the work on time. However, there is no documented evidence that any of these charts were ever used as predictive tools to determine schedule outcomes. To estimate the completion of a project, a revised chart had to be drawn based on the current knowledge of the work – a re-estimation process; and there is no documentation to suggest even this occurred regularly. The focus seemed to be on using ‘cheap labour’ to throw resources at the defined problem and get the work back onto program.

Cost management seems to have been little different; the reports of the Royal Commissioners to the English Parliament on the management of the ‘Great Exhibition’ of 1851 clearly show the accurate prediction of cost outcomes. Their 4th report predicted a profit of £173,000. The 5th and final report defined the profit as £186,436 18s. 6d. However, this forward estimation of cost outcomes does not seem to have transitioned to predicting time outcomes, and there is no real evidence as to how the final profit was ‘estimated’ (see Crystal Palace).

Phase 2 – empirical logic

Karol Adamiecki’s Harmonygraph (1896) introduced two useful concepts to the static views used in bar charts and the various forms of Gantt chart. In a Harmonygraph, the predecessors of each activity are listed at the top, and each activity’s timing and duration are represented by a vertical strip of paper pinned to a date scale. As the project changed, the strips could be re-pinned and an updated outcome assessed.

The first step towards a true predictive process to estimate schedule completion based on current performance was the development of PERT and CPM in the late 1950s. Both used a logic-based network to define the relationships between tasks, allowing the effect of the current status at ‘Time Now’ to be cascaded forward and a revised schedule completion calculated. The problem with CPM and PERT is that the remaining work is assumed to occur ‘as planned’; no consideration of actual performance is included in the standard methodology. It was necessary to undertake a complete rescheduling of the project to assess a ‘likely’ outcome.
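The ‘cascade forward from Time Now’ idea at the heart of CPM can be sketched in a few lines of code. The network, task names and durations below are invented purely for illustration:

```python
# A minimal sketch of a CPM forward pass: each task's early finish is the
# latest early finish of its predecessors plus its own duration.

def cpm_forward_pass(tasks):
    """tasks: dict of name -> (duration, [predecessor names]).
    Returns the early finish of the whole project."""
    early_finish = {}

    def finish(name):
        if name not in early_finish:
            duration, preds = tasks[name]
            early_start = max((finish(p) for p in preds), default=0)
            early_finish[name] = early_start + duration
        return early_finish[name]

    return max(finish(t) for t in tasks)

# A -> B, A -> C, then both B and C -> D
network = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}
print(cpm_forward_pass(network))  # 12: the critical path A-B-D (3 + 4 + 5)
```

Note how the calculation assumes every remaining task takes exactly its planned duration – which is precisely the limitation described above.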

Cost controls had been using a similar approach for a considerable period. Cost variances could be included in the spreadsheets and cost reports and their aggregate effect demonstrated, but it was necessary to re-estimate future cost items to predict the likely cost outcome.

Phase 3 – predictive calculations

The first of the true predictive project controls processes was Earned Value (EV). EV was invented in the early 1960s and was formalised in the Cost/Schedule Control Systems Criteria issued by the US DoD in December 1967. EV uses predetermined performance measures and formulae to predict the cost outcome of a project based on performance to date. Unlike any of the earlier systems, a core tenet of EV is to use current project data to predict a probable cost outcome – the effect of performance efficiencies to date is transposed onto future work. Many and varied assessments of this approach have consistently demonstrated EV is the most reliable of the options for attempting to predict the likely final cost of a project.
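As a rough illustration of the EV prediction principle – projecting the cost efficiency achieved to date onto the total budget – here is a minimal sketch; the figures and helper name are invented, not taken from any standard implementation:

```python
# Hedged sketch of the core EV prediction: cost efficiency to date
# (CPI = EV / AC) is assumed to continue for the remaining work.

def estimate_at_completion(bac, ev, ac):
    cpi = ev / ac      # cost performance index to date
    return bac / cpi   # EAC assuming current efficiency continues

bac = 1_000_000   # budget at completion (total planned cost)
ev = 400_000      # value of the work actually performed so far
ac = 500_000      # actual cost of performing that work
print(round(estimate_at_completion(bac, ev, ac)))  # CPI = 0.8 -> EAC of 1,250,000
```

The single line `bac / cpi` is what makes EV predictive rather than reactive: past performance is transposed onto future work instead of assuming the remainder goes ‘as planned’.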

Unfortunately EV in its original format was unable to translate its predictions of the final cost outcome (EAC) into time predictions. On a plotted ‘S-curve’ it was relatively easy to measure the time difference between when a certain value was planned to be earned and when it was actually earned (SV(t)), but the nature of an ‘S-curve’ meant the current SV(t) had no fixed relationship to the final time variance. A similar, but different, issue made using SPI equally unreliable. The established doctrine was to ‘look to the schedule’ to determine time outcomes, but the schedules were only at ‘Phase 1’ or ‘Phase 2’ capability – not predictive.

A number of options were tried through the 1960s, 70s and 80s to develop a process that could accurately predict schedule completion based on progress to date – ‘Count the Squares’ and ‘Earned Time’ in various guises, to name two. Whilst these systems could provide reasonable information on where the project was at ‘time now’, and overcame some of the limitations in CPM by indicating issues sooner than standard CPM (eg, float burn hiding a lack of productivity), none had a true predictive capability.

The development of Earned Schedule resolved this problem. Earned Schedule (ES) is a derivative of Earned Value: it uses EV data and modified EV formulae to create a set of ‘time’ information that mirrors EV’s ‘cost’ information and generates a predicted time outcome for the project. Since its release in 2003, studies have consistently shown ES to be as accurate in predicting schedule outcomes as EV is in predicting cost outcomes. In many respects this is hardly surprising, as the underlying data is the same for EV and ES, and the ES formulae are adaptations of the proven EV formulae (see more on Earned Schedule).
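A simplified sketch of how ES derives a time prediction from the same EV data may help. ES is the point in time at which the planned value (PV) curve equals the earned value to date; SPI(t) = ES / AT then projects the planned duration. The planned-value curve and all figures below are assumed for illustration only:

```python
# Simplified Earned Schedule calculation (illustrative data, linear
# interpolation between reporting periods).

def earned_schedule(pv_cumulative, ev_now):
    """pv_cumulative: planned value at the end of periods 1..n (cumulative).
    Returns ES: the time at which PV equals the EV achieved to date."""
    es = 0.0
    for period, pv in enumerate(pv_cumulative, start=1):
        if ev_now >= pv:
            es = float(period)          # a whole period is fully 'earned'
        else:
            prev = pv_cumulative[period - 2] if period > 1 else 0.0
            es += (ev_now - prev) / (pv - prev)  # fraction of the next period
            break
    return es

pv = [100, 250, 450, 700, 1000]   # planned value per month, cumulative
ev_now = 300                       # earned value at month 3 (actual time AT = 3)
es = earned_schedule(pv, ev_now)   # 2 full months + 50/200 = 2.25
spi_t = es / 3                     # 0.75: earning value slower than planned
predicted_duration = 5 / spi_t     # planned duration / SPI(t)
```

The mirror-image relationship with EV is visible in the last line: just as EAC = BAC / CPI, the predicted duration is PD / SPI(t).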

Phase 4 – (the future) incorporating uncertainty

The future of the predictive aspects of project controls needs to focus on the underlying uncertainty of all future estimates (including EV and ES). Monte Carlo and similar techniques need to become a standard addition to the EV and ES processes so the probability of achieving the forecast date can be added to the information used for project decision making. Techniques such as ‘Schedule Density’ move project controls into the proactive management of uncertainty, but again are rarely used.


From the mid-1800s (and probably much earlier) projects and businesses were being managed against ‘plans’. The plans could be used to identify problems that required management action, but they did not predict the consequential outcome of the progress being achieved. Assessing a likely outcome required a re-estimation of the remaining work, which was certainly done for the cost outcome on projects such as the construction of the Crystal Palace.

The next stage of development was the use of preceding logic, prototyped by Karol Adamiecki’s Harmonygraph, and made effective by the development of CPM and PERT as dynamic computer algorithms in the late 1950s. However, the default assumption in these ‘tools’ was that all future work would proceed as planned. Re-scheduling was needed to change future activities based on learned experience.

The ability to apply a predictive assessment to determine cost outcomes was introduced through the Earned Value methodology, developed in the early 1960s and standardised in 1967. However, it was not until 2003 that the limitations in ‘traditional EV’ related to time were finally resolved with the publication of ‘Earned Schedule’.

In the seminal paper defining ES, “Schedule is Different”, the concept of ES was defined as an extension of the graphical technique of schedule conversion (which had long been part of the EVM methodology). ES extended the simple ‘reactive statement’ of the difference between ‘time now’ and the date when PV = EV by using time-based formulae, derived from the EV formulae, to predict the expected time outcome for the project.

The Challenge

The question every project controller and project manager needs to take into the New Year is why more than 90% of projects are run using 19th-century reactive bar charting, and the vast majority of the remainder using 60-year-old CPM-based approaches, none of which offer any form of predictive assessment. Don’t they want to know when the project is likely to finish?

It’s certainly important to understand where reactive management is needed to ‘fix problems’, but it is also important to understand the likely project outcome and its consequences so more strategic considerations can be brought into play.

Prediction is difficult (especially about the future) but it is the only way to understand what the likely outcome will be based on current performance, and therefore support value based decision making focused on changing the outcome when necessary.

I have not included dozens of references in this post; all of the papers are available at

Fine Tune your detectors

The quality of any decision you make is determined by the quality of the information and advice you receive. Good information does not necessarily mean a good decision, but bad information will almost certainly lead to a bad decision.

The decision-making process and the types of decision a project manager (and almost anybody else) has to make are discussed in WP1053 Decision Making; the closely aligned process of problem solving in WP1013. Good information and advice are an essential input to both of these processes.

The right information has the potential to reduce or remove the uncertainty at the centre of every decision. If you are lucky and the information or advice removes all of the uncertainty, then there is nothing left to decide! Usually even with good advice, there is still some uncertainty and you still have to make the decision.

In reality, we rarely if ever have enough information; the challenge is to get as much information as is sensible in the circumstances and then make a timely decision accepting there will inevitably be gaps in your knowledge potentially leading to suboptimal outcomes.

However, simply collecting vast quantities of information does not help (unless you are using data mining). Generally, information has no value unless it has the potential to change your decision! The critical thing in decision making is having the key elements of information available when needed, in a useful form that improves your awareness of the situation and your ability to decide.

But no information or advice is perfect. Before making use of any information, the decision maker has to evaluate its reliability and accuracy, and look for any vested interests or bias on the part of the people developing the information or proposing the advice. Good decision makers usually have very finely tuned ‘bull s**t’ detectors. And whilst this skill often seems to be innate, many of its elements can be learned.

Some of the elements to consider when weighing up information are:

  1. As a starting point, everyone is biased and most people have vested interests.
    The antidote to bias and vested interests is to consider what effect these influences may have. The more effort someone has committed to developing a set of information, the greater their vested stake in the work. See more on Biases.
  2. Beware of factoids!
    You will be pleased to know, you are one of the 1635 people who have read this post, and as a consequence are now aware of factoids. How do we know this? We don’t. I just made it up; but you can’t call me wrong, because you don’t know either. A factoid is something that looks like a very precise fact. The antidote to factoids is source information. Good source information in the statement above would be ‘our web counter shows that you are visitor 1635 to this page’. Start worrying if the source is nebulous: ‘our webmaster advises’ or ‘based on a sophisticated time-related algorithm…’.
  3. Beware of false precision.
    Almost everything that affects project decisions is a guess, assessment or estimate (the terms are largely synonymous) about something that may occur in the future. But no one has precise information about the future! False precision damages credibility (see: Is what you heard what I meant?) and is generally less than useful. The antidote to false precision is to ask for ranges and the basis of the range statement.
  4. Lies, damned lies and statistics 1.
    Some statistics result from the counting of real things. If you trust the people who do the counting, the math and the reporting, the data is as good as you are going to get. However, most statistics are estimates for a large population, derived from the extrapolation of the results from a small sample. Professional statisticians and pollsters attach a calculated margin of error to their work – this margin is important!  The antidote to false statistics is to ignore any that do not come with a statement of the margin for error and how this was derived.
  5. Lies, damned lies and statistics 2.
    Understand the basis for comparison – it is very easy to distort information. Project A will increase the profit on the sale of widgets by 50%, whereas project B will only increase the profit on our training business by 10%; if both projects involve a similar cost outlay, which one is best? You need to know the basis for comparison to answer the question: a 50% increase in profits from a base of $100,000 is $50,000, which is half the value of a 10% increase in profits from a base of $1 million. The antidote to statistical distortion is to largely ignore percentage changes and statements such as ‘fastest growing’, ‘biggest increase’, etc. It is always easier to be the ‘biggest’ if your starting point is the smallest.
  6. The ‘one-in-a-million’ problem
    As discussed in The role of ‘sentinels’, many ‘one-off’ problems are symptoms of a much deeper issue. Our entire working life is less than 20,000 days, so the chance of encountering a genuine ‘one-in-a-million’ event even once in your working life is about 2%. Other phrases that should trigger concern include ‘she’ll be right’, ‘no problems’, ‘it’s easy’, etc. The antidote to these types of expression is to simply reverse the statement:
    – one-off / one-in-a-million = there’s probably a structural cause to be discovered;
    – she’ll be right = I have no idea how to fix it (and it’s definitely not OK);
    – no problems = this is a major problem for me;
    – it’s easy = this will be very difficult (unless the ‘easy’ is followed by an explanation of how it is easy).
  7. The false prophet
    False prophecies are allegations and unsubstantiated statements made with the expectation that the ‘expertise’ of the person the statement is attributed to will cover the statement with absolute credibility. If the statement is improbable, it is improbable regardless of the alleged source. The antidote to false prophets being quoted in the ‘third party’ – eg, “Einstein said controlled nuclear fusion was easy” – is simply to seek authentication from the source. If the ‘prophet’ is present, ask them for more information. Real experts know both the upside and the downside of any course of action they are proposing – they understand the uncertainty. Wannabe experts pretend there is no downside or uncertainty.
  8. Well-known facts
    Remember, most ‘well-known facts’ are in fact commonly held misconceptions (this statement is a factoid, but also useful). The antidote to ‘well-known facts’ is to dig deeper and gather actual facts.
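The arithmetic behind the statistics and ‘one-in-a-million’ points above can be worked through directly; the figures are the ones used in the text:

```python
# Point on comparison bases: a headline '50%' can be worth less in
# absolute terms than a modest '10%'.

def absolute_gain(base, percent_increase):
    return base * percent_increase / 100

project_a = absolute_gain(100_000, 50)    # 50% of $100,000  = $50,000
project_b = absolute_gain(1_000_000, 10)  # 10% of $1,000,000 = $100,000
print(project_a < project_b)  # True: the '10%' project earns twice as much

# Point on 'one-in-a-million': probability of at least one such event
# across roughly 20,000 working days.
p_event = 1 - (1 - 1e-6) ** 20_000
print(round(p_event * 100, 1))  # about 2 (%), as stated above
```

Neither calculation is sophisticated, which is rather the point: a few seconds of arithmetic is often enough to deflate a misleading claim.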

These are just a few of the ways bad advice and information can be introduced into a decision-making process. Taking a few minutes to verify the quality of the advice you are being given, ditching the unsound advice and information, and then using what’s left to inform the decision will enhance the probability of making the best decision in the circumstances. This is not easy to do (but good decisions are rarely ‘easy’); the consolation is that once you develop a reputation for having a good ‘bull s**t’ detector, most sensible people will stop trying to use it on you. Then all you need to do is make the right decision.