Tag Archives: Project Controls

Project Governance and Controls Symposium (PGCS) 2015

The Project Governance and Controls Symposium (PGCS) 2015 will be held on 6-7 May 2015 at the Australian Defence Force Academy (ADFA) thanks to the continued platinum sponsorship provided by the University of New South Wales Canberra.

The PGCS is the only annual project governance, controls and management event held in Australia. It has a unique focus on the project governance and control needs of the Australian Federal Government from the perspectives of Government agencies, Industry and Academia. The Symposium is focused on public sector projects and seeks to engage with Defence/DMO and other large departments because they have the largest, longest running and most nationally significant projects in Australia.

The PGC Symposium also differs significantly from those run by other organisations. While it does not ignore the soft skills (they are a critical part of governance), the focus is on the quantitative project control techniques that are essential to providing a clear and objective view of where a project truly stands – particularly for major capital acquisition projects – and how this information supports effective governance.

The PGCS is supported by the Australian Institute for Project Management (AIPM), and a range of other project management bodies.

For more information visit the conference website at: http://www.pgcs.org.au/

Bookings are now open and it is not too late to offer a paper – contact me if you are interested in speaking.

Two new papers on the web

We presented papers at the Engineers Australia MCPC14 conference late last year. They are now available on our website.

Understanding Design – The challenge of informed consent looks at the problem of communicating complex project information to stakeholders in a way they can understand.

Scheduling Complexity discusses the challenges of managing time in complex projects and the need for qualified schedulers.

For more of our papers and articles see: http://www.mosaicprojects.com.au/PM-Knowledge_Index.html

Project failure revisited

Over the holiday period there have been a couple of interesting discussions on project success and failure. The consensus among the many commentators was that the simplistic measures of time, cost and scope are inadequate, but there was little consensus on the solution. This post poses some of the questions that need a considered answer:

Firstly, the APM website posed the question: which of the following projects was successful?

Two organisations decided to undertake identical projects with a normalised value of $1 million.
–  Organisation A assessed their project and set the project budget at $800,000
–  Organisation B assessed their project and set the project budget at $1,200,000
Organisation A’s team did their best to ‘meet the challenge’ and achieved an outcome of $900,000 – a cost overrun of $100,000, nominally a project failure.
Organisation B’s team did ‘a good job’ and achieved an outcome of $1,100,000 – a profit of $100,000, nominally a project success.

But which project was really successful – the one that cost $900,000 or the one that cost $1,100,000 to produce the same output? This example is simplistic; the numbers are given and the problem is demonstrated, but nowhere will you ever have two identical projects run against different baselines. How can you assess the ‘project risk’ caused by soft or hard targets?

Similar issues arise when allocating the blame for ‘failure’ to different parts of the ‘performing organisation’. Many so-called project management and project leadership failures are likely to be either unavoidable consequences or symptoms of far more significant underlying issues (for more on this see: Project or Management Failures?). Focusing on the superficial (and blaming the project manager) prevents a more thorough ‘root cause analysis’ of the real issues and problems in organisations. I will take two examples and, borrowing from Toyota’s ‘Five Whys’, ask ‘why’ a few times:

  1. Failure of PM leadership. The project manager failed to lead, relate to or communicate with stakeholders. But the project manager did not appoint him/herself; some of the unanswered questions are:
    1. Why did the organisation appoint a PM lacking the requisite skills?
    2. Why did the organisation fail to support/train the PM?
    3. Why were the failings not picked up and resolved during routine project surveillance?
  2. Failing to use recognised techniques such as risk management. Some of the unanswered questions are:
    1. Why does the organisation allow sub-standard practices to exist?
    2. Does the organisation have proper templates, processes and support in place for the practice?
    3. Does the organisation provide adequate time, training and resources to implement the practice?
    4. Why were the failings not picked up and resolved during routine project surveillance?

The answer to these questions may go back to organisational culture, the overall organisational ability to effectively manage and support its projects (the strategic management of projects)  and/or ultimately the governance of the organisation.

Certainly some projects will fail for project-related reasons; projects and programs are innately risky, and this means project-related failures are to be expected – minimising this cause of failure will be valuable. But simply measuring performance against cost and time targets is influenced by the way the initial target was set in the first place.

The problem is compounded by the lack of ‘root cause’ assessments. I expect a proper study of the root causes of many so-called ‘project failures’ will show many projects are effectively set up to fail by the organisation. Allowing executive management to continue with these types of practices is ultimately a governance failure. Addressing the ‘root causes’ of failure hidden in executive management practice, culture and governance is likely to generate significantly greater benefits than simply trying to ‘fix project management’; but you cannot see the failures without proper data.

One initiative aimed at working towards a standardised assessment of project failures is a series of articles being published by Prof. Alan Stretton in PM World Journal, see: http://pmworldjournal.net/article/series-project-success-failure-deficiencies-published-causes-project-failures/  (registration is free).

Given the general management mantra of ‘you cannot manage what you cannot measure’, developing a measure of project failure that is valuable and consistent would be a good start in developing the data needed to allow management improvement across the board.

As Alan concluded in the referenced article:

The above deficiencies in current data all point to an urgent and obvious need to develop comprehensive data on causes of project failures – preferably validated by appropriate and agreed criteria as to what constitutes success / failure, and covering the widest possible range of project types and project management application areas.

A suggestion (or challenge) here is for global project management organisations (IPMA, PMI, apfpm, etc) to jointly create a framework to develop and share project success / failure data, covering the widest possible range of project management types and application areas. This would include:

  • Developing and agreeing common criteria for project success / failure;
  • Collecting and sharing validated data on success/ failure rates;
  • Researching and sharing validated data on success drivers / failure causes.

If you agree, support Alan and start lobbying your PM association of choice. Defining the problem is easy; solving it elegantly is not!

Ethics, Culture, Rules and Governance

Far too many governing bodies spend far too much time focused on rules, conformance and assurance. While these factors are important, they should be an outcome of good governance, not the primary focus of the governors.

When an organisation sets high ethical standards and invests in building an executive management culture that supports those standards, the need for ‘rules’ is minimised and the organisation as a whole focuses on doing ‘good business’ (see: Corporate Governance).

The order of the functions outlined in The Functions of Governance places ‘Determining the objectives of the organisation’, ‘Determining the ethics of the organisation’, and ‘Creating the culture of the organisation’ ahead of both assurance and conformance. The rationale is that creating a culture of ‘doing the right thing’, extending from the very top of the organisation to the very bottom, means most people most of the time will be doing the ‘right thing’, making assurance and conformance a relatively simple adjunct, there to catch the few errors and malpractices that will inevitably occur.

A very strong endorsement of this approach to governance has recently come from one of the world’s most successful business people, Warren Buffett. His recent memo to the top management of his holding company Berkshire Hathaway’s subsidiaries (his ‘All Stars’) emphasised that their top priority must be to ‘zealously guard Berkshire’s reputation’ (read: act ethically). He also reminded his leadership team that ‘we can afford to lose money – even a lot of money. But we can’t afford to lose reputation – even a shred of reputation’.

His memo to managers also included this good advice: ‘There’s plenty of money to be made in the centre of the court. If it’s questionable whether some action is close to the line, just assume it’s outside and forget it’. This is a simple ethical guideline that avoids the need for pages of precise ‘rules’ designed to map the edge of legality, drafted by lawyers and argued over endlessly. See more on Ethics.

Reading the memo, it’s clear Buffett has built a massive organisation based on an ethical culture, employs executives who reinforce that culture, and still makes a very good profit. It’s a long-term investment, but infinitely preferable to the sort of issues that confronted Salomon Bros. 20 years ago (see: Warren Buffett’s Wild Ride at Salomon), the banks associated with the GFC, and the ongoing damage being suffered by the Australian banks as more ethical failures come to light. I’m sure they all had hundreds of ‘rules’, some of which may even have been sensible.

A copy of Warren Buffett’s memo can be downloaded from:  http://www.mosaicprojects.com.au/pdf/Ethics_Culture_Rules-Buffet_Memo.pdf

The three phases of project controls

The need to control projects (or bodies of work that we would call projects today) extends back thousands of years. Certainly the Ancient Greeks and Romans used contracts and contractors for many public works, which meant the contractors needed to manage the work within a predefined budget and an agreed timeframe. However, what was done to control projects before the 19th century is unclear – ‘phase 0’. But from the 1800s onward there were three distinct phases in the control processes.

Phase 1 – reactive

The concept of using charts to show the intended sequence and timing of the work became firmly established in the 19th century, and the modern bar chart was in use by the start of the 20th century. One of the best examples is from a German project in 1910 (see: Schürch). A few years later Henry Gantt started publishing his various charts.


From a controls perspective, these charts were static and reactive. The diagrams enabled management to see, in graphic form, how well work was progressing, and indicated when and where action would be necessary to keep the work on time. However, there is absolutely no documented evidence that any of these charts were ever used as predictive tools to determine schedule outcomes. To estimate the completion of a project, a revised chart had to be drawn based on current knowledge of the work – a re-estimation process; however, there is no documentation to suggest even this occurred regularly. The focus seems to have been on using ‘cheap labour’ to throw resources at the defined problem and get the work back onto program.

Cost management seems to have been little different; the reports of the Royal Commissioners to the English Parliament on the management of the ‘Great Exhibition’ of 1851 clearly show the accurate prediction of cost outcomes. Their 4th report predicted a profit of £173,000. The 5th and final report defined the profit as £186,436 18s. 6d. However, this forward estimation of cost outcomes does not seem to have transitioned to predicting time outcomes, and there is no real evidence as to how the final profit was ‘estimated’ (see Crystal Palace).

Phase 2 – empirical logic

Karol Adamiecki’s Harmonygraph (1896) introduced two useful concepts to the static views used in bar charts and the various forms of Gantt chart. In a Harmonygraph, the predecessors of each activity are listed at the top, and each activity’s timing and duration are represented by vertical strips of paper pinned to a date scale. As the project changed, the strips could be re-pinned and an updated outcome assessed.

The first step towards a true predictive process to estimate schedule completion based on current performance was the development of PERT and CPM in the late 1950s. Both used a logic-based network to define the relationships between tasks, allowing the effect of the current status at ‘Time Now’ to be cascaded forward and a revised schedule completion calculated. The problem with CPM and PERT is that the remaining work is assumed to occur ‘as planned’; no consideration of actual performance is included in the standard methodology. It was necessary to undertake a complete rescheduling of the project to assess a ‘likely’ outcome.
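To make the mechanics concrete, here is a minimal sketch of the forward-pass logic (in Python, with hypothetical task data and a simplified day-number convention – an illustration, not any particular tool’s method):

    # Forward pass over a small logic network: each task has a duration and a
    # list of predecessors; a task's early start is the latest early finish of
    # its predecessors, and the project finish is the largest early finish.
    tasks = {
        # name: (duration in days, [predecessors])
        'A': (3, []),
        'B': (5, ['A']),
        'C': (2, ['A']),
        'D': (4, ['B', 'C']),
    }

    early_start, early_finish = {}, {}

    def forward_pass(name):
        if name in early_finish:
            return early_finish[name]
        duration, preds = tasks[name]
        early_start[name] = max((forward_pass(p) for p in preds), default=0)
        early_finish[name] = early_start[name] + duration
        return early_finish[name]

    print(max(forward_pass(t) for t in tasks))  # 12 days: A (3) + B (5) + D (4)

Statusing the network at ‘Time Now’ and re-running the same cascade gives a revised completion – but, as noted, the untouched future durations are still assumed to occur ‘as planned’.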

Cost controls had been using a similar approach for a considerable period. Cost Variances could be included in the spreadsheets and cost reports and their aggregate effect demonstrated, but it was necessary to re-estimate future cost items to predict the likely cost outcome.

Phase 3 – predictive calculations

The first of the true predictive project controls processes was Earned Value (EV). EV was invented in the early 1960s and was formalised in the Cost/Schedule Control Systems Criteria issued by the US DoD in December 1967. EV uses predetermined performance measures and formulae to predict the cost outcome of a project based on performance to date. Unlike any of the earlier systems, a core tenet of EV is to use current project data to predict a probable cost outcome – the effect of performance efficiencies to date is transposed onto future work. Many and varied assessments of this approach have consistently demonstrated EV is the most reliable of the options for attempting to predict the likely final cost of a project.
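For readers unfamiliar with the mechanics, a minimal sketch of the standard EV cost forecast (illustrative numbers only):

    # Core EV prediction: performance to date (CPI) is transposed onto the
    # remaining work to forecast the cost outcome (EAC).
    BAC = 1_000_000      # Budget At Completion
    EV = 400_000         # Earned Value (budgeted cost of work performed)
    AC = 500_000         # Actual Cost of the work performed

    CPI = EV / AC        # Cost Performance Index = 0.8
    EAC = BAC / CPI      # Estimate At Completion = $1,250,000
    ETC = EAC - AC       # Estimate To Complete = $750,000
    print(CPI, EAC, ETC)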

Unfortunately EV in its original format was unable to translate its predictions of the final cost outcome (EAC) into time predictions. On a plotted ‘S-Curve’ it was relatively easy to measure the time difference between when a certain value was planned to be earned and when it was actually earned (SV time), but the nature of an ‘S-Curve’ meant the current SVt had no relationship to the final time variance. A similar issue made using SPI equally unreliable. The established doctrine was to ‘look to the schedule’ to determine time outcomes, but the schedules were either at ‘Phase 1’ or ‘Phase 2’ capability – not predictive.

A number of options were tried through the 1960s, 70s and 80s to develop a process that could accurately predict schedule completion based on progress to date – ‘Count the Squares’ and ‘Earned Time’ in their various guises, to name two. Whilst these systems could provide reasonable information on where the project was at ‘time now’, and overcame some of the limitations in standard CPM by indicating issues sooner (eg, float burn hiding a lack of productivity), none had a true predictive capability.

The development of Earned Schedule resolved this problem. Earned Schedule (ES) is a derivative of Earned Value; it uses EV data with modified EV formulae to create a set of ‘time’ information that mirrors EV’s ‘cost’ information and generate a predicted time outcome for the project. Since its release in 2003, studies have consistently shown ES to be as accurate in predicting schedule outcomes as EV is in predicting cost outcomes. In many respects this is hardly surprising, as the underlying data is the same for EV and ES, and the ES formulae are adaptations of the proven EV formulae (see more on Earned Schedule).
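A minimal sketch of the ES calculation, applying the published formulae to hypothetical data:

    # Earned Schedule: find the point in time at which the value earned to
    # date (EV) was planned to be earned, interpolating between whole periods.
    planned_value = [0, 100, 250, 450, 700, 1000]  # cumulative PV by month
    EV = 380     # value earned at Actual Time (AT)
    AT = 4       # months elapsed
    PD = 5       # planned duration in months

    # C = last whole period in which cumulative PV <= EV
    C = max(i for i, pv in enumerate(planned_value) if pv <= EV)
    ES = C + (EV - planned_value[C]) / (planned_value[C + 1] - planned_value[C])

    SVt = ES - AT        # time-based schedule variance = -1.35 months
    SPIt = ES / AT       # time-based performance index = 0.66
    IEACt = PD / SPIt    # predicted duration at completion = approx 7.5 months
    print(ES, SVt, SPIt, IEACt)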

Phase 4 – (the future) incorporating uncertainty

The future of the predictive aspects of project controls needs to focus on the underlying uncertainty of all future estimates (including EV and ES). Monte Carlo and similar techniques need to become a standard addition to the EV and ES processes so the probability of achieving the forecast date can be added to the information used for project decision making. Techniques such as ‘Schedule Density‘ move project controls into the proactive management of uncertainty, but again are rarely used.
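As an indication of what this could look like, here is a minimal Monte Carlo sketch (hypothetical three-point duration estimates for a single chain of activities; real schedule risk analysis samples across the whole network):

    import random

    # One trial: sample a duration for each activity on the chain and sum them.
    def simulate_chain(activities):
        return sum(random.triangular(low, high, likely)
                   for (low, likely, high) in activities)

    # (optimistic, most likely, pessimistic) durations in days - assumed values
    chain = [(8, 10, 15), (4, 5, 9), (18, 20, 30)]
    target = 38
    trials = 20_000

    hits = sum(simulate_chain(chain) <= target for _ in range(trials))
    print('P(finish within %d days) = about %.0f%%' % (target, 100 * hits / trials))

The output is not a single date but a probability of achieving the target – exactly the extra dimension the deterministic EV/ES numbers lack.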


From the mid 1800s (and probably much earlier) projects and businesses were being managed against ‘plans’.  The plans could be used to identify problems that required management action, but they did not predict the consequential outcome of the progress being achieved.  Assessing a likely outcome required a re-estimation of the remaining work, which was certainly done for the cost outcome on projects such as the construction of the Crystal Palace.

The next stage of development was the use of preceding logic, prototyped by Karol Adamiecki’s Harmonygraph, and made effective by the development of CPM and PERT as dynamic computer algorithms in the late 1950s. However, the default assumption in these ‘tools’ was that all future work would proceed as planned. Re-scheduling was needed to change future activities based on learned experience.

The ability to apply a predictive assessment to determine cost outcomes was introduced through the Earned Value methodology, developed in the early 1960s and standardised in 1967. However, it was not until 2003 that the limitations in ‘traditional EV’ related to time were finally resolved with the publication of ‘Earned Schedule’.

In the seminal paper defining ES, “Schedule is Different”, the concept of ES was defined as an extension of the graphical technique of schedule conversion (which had long been part of the EVM methodology). ES extended the simple ‘reactive statement’ of the difference between ‘time now’ and the date when PV = EV by using ‘time’ based formulae, derived from the EV formulae, to predict the expected time outcome for the project.

The Challenge

The question every project controller and project manager needs to take into the New Year is why more than 90% of projects are run using 19th century reactive bar charting, and the vast majority of the remainder using 60-year-old CPM based approaches, none of which offer any form of predictive assessment. Don’t they want to know when the project is likely to finish?

It’s certainly important to understand where reactive management is needed to ‘fix problems’, but it is also important to understand the likely project outcome and its consequences so more strategic considerations can be brought into play.

Prediction is difficult (especially about the future) but it is the only way to understand what the likely outcome will be based on current performance, and therefore support value based decision making focused on changing the outcome when necessary.

I have not included dozens of references in this post; all of the papers are available at http://www.mosaicprojects.com.au/PM-History.html

What is the critical path?

One of the most common misconceptions in planning and scheduling is that float somehow determines the ‘critical path’. For the PMI-SP exam and any serious consideration of the definition of the ‘critical path’, float is not the right answer.

Associating zero float with the critical path is correct if, and only if, there are no constraints placed on the schedule. As soon as you introduce a contract completion date, the critical path may finish before the contract requirement and have positive float, or after the contracted completion date and have negative float (and knowing by how much is important to managing both the schedule and the work).
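A trivial worked example (assumed numbers) shows the point:

    early_finish = 210          # day the CPM forward pass says the path can finish
    contract_completion = 200   # contractually imposed completion date

    total_float = contract_completion - early_finish
    print(total_float)          # -10: the path is forecast to finish 10 days late

Against an imposed date, float is measured to the constraint rather than to the schedule’s own end, so it can be positive or negative – and either way, zero float no longer identifies the critical path.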

Then add in the common contractual issues of delayed access to areas of work (available on or after a specified date) and mandated interim handovers of parts of the deliverables, and float goes all over the place. These issues were considered at length when we were writing the Guide to Good Practice in the Management of Time in Complex Projects.

The description of the critical path developed for The Guide is:

Critical Path = the longest sequence of activities from commencement to completion of a key date, section, or completion of the works as a whole. In relation to each, it is that sequence of activities, which will take the longest to complete or, put another way, the sequence of activities, which will determine the earliest possible finish date. Hence, it is timely commencement and completion of those activities on that path, which will secure completion of the key date, section, or the works as a whole on time.

This description was condensed to a definition in ISO 21500:2012, Guidance on Project Management, as:

Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase.

This ‘Standard Definition’ does not preclude the possibility of several ‘completions’ within the one project to account for interim handovers required under a contract. It allows for the possibility of the critical path starting at the beginning of the schedule or at some interim point where an external dependency allows the ‘critical’ work to start. Additionally, the sequence of activities may be determined logically (through links or dependencies) or through the sequential movement of resources. The definition is both concise and unambiguous. For more see: http://www.mosaicprojects.com.au/WhitePapers/WP1043_Critical_Path.pdf

You need to get with the game – people who want to ignore the current international standard definition will become increasingly marginalised as the various national standards move into alignment with ISO.

Schedule Calculations – Old and New

The way CPM schedules were calculated in the 1970s and 80s (prior to the availability of low-cost PC scheduling tools) used a simplification designed to minimise error and speed up a tedious task. Whilst some of us are old enough to have used this ‘manual’ technique on real schedules, everyone in the modern world recognises that if Day 1 = Wednesday 1st October, a 3-day duration activity will work on Wednesday, Thursday and Friday to finish on the 3rd October. The fact that 1 + 3 = 4 is simply an anomaly in the way integers and ‘elapsed time’ interact, which has to be dealt with inside the computer’s computations to produce accurate date-based bar charts and tabulations.

Unfortunately there has been a rash of postings on LinkedIn over the last week totally confusing everyone with nonsense about CPM calculations. This blog post is designed to correct the message!

To overcome the problem that a 3-day activity starting on the 1st October finishes on the 3rd October, but starting on day 1 and adding a duration of 3 gives you 1 + 3 = 4, the simplified manual calculations assumed the starting point was ‘day zero’: 0 + 3 = 3!

However, the old manual calculations starting from day zero were never strictly correct – the start day number for every activity in the schedule is always the day before the activity actually starts. The end day numbers / dates are correct, and the advantage of this option is that it only requires one simple calculation per task for both the forward and backward passes, and the Free Float calculations are a simple subtraction.

EF = ES + Duration
LS = LF – Duration ….  Easy!!

This simplistic methodology was absolutely essential for manually calculating large PDM schedules, and it remained the ‘normal’ scheduling practice through to the mid 1980s when affordable PCs arrived – very few companies could afford the expense of mainframe scheduling tools, and those that did wanted to make sure the data was correct before the computer run.

The accurate calculation used in all scheduling software recognises that a 3-day activity starts at the beginning of day 1, works on days 1, 2 and 3, and finishes at the end of day 3; its successor (assuming an FS0 link) starts at the beginning of day 4. Unfortunately, these ‘real’ calculations are more complex[1].

ES = start of day 1; EF = (1 + 3) – 1 = end of day 3.
The zero duration FS link makes the successor’s ES = (EF 3 + 0) + 1 = start of day 4.

This approach more than doubles the amount of calculation effort, increases the opportunity for error, and of course affects the Free Float calculations as well.
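Putting the two conventions side by side (a sketch, assuming a continuous calendar and an FS0 link):

    duration = 3

    # Old manual convention: day numbers mark the end of each day, so an
    # activity 'starts' at the day number before work begins (day zero).
    ES_manual = 0
    EF_manual = ES_manual + duration          # 0 + 3 = 3 (end of day 3), one sum

    # Software convention: work occupies days 1, 2 and 3 inclusive.
    ES_real = 1
    EF_real = ES_real + duration - 1          # (1 + 3) - 1 = 3 (end of day 3)
    successor_ES = EF_real + 0 + 1            # FS0 link: start of day 4

    print(EF_manual, EF_real, successor_ES)   # 3 3 4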

Fortunately, computer software is not prone to making calculation errors and runs these more complex sums 100% accurately, calculating the dates activities start and end when transposed onto a calendar. For more on the actual calculations see: http://www.mosaicprojects.com.au/PDF/Schedule_Calculations.pdf

Given no one has used manual calculations to determine a major schedule in the last 20 years (at least), the old simplistic manual approach is redundant and should be consigned to my area of interest, the history of project scheduling (see: http://www.mosaicprojects.com.au/PM-History.html).

[1] For a more complete discussion see the excellent paper by Ron Winters written in 2003 and entitled ‘How to Befuddle a College Professor’, which can be found at:  http://www.ronwinterconsulting.com/Befuddle.pdf