Tag Archives: Project Planning

Free, Exclusive Project Scheduling Virtual Event for PMI Members


PMI members are entitled to register and attend this member-only event on the 29th March (9:00 am to 5:00 pm ET) for free! It is the perfect way to learn what’s new in project scheduling and network with PMI members across the globe. This year we are talking about how to tackle project scheduling challenges in a changing profession.

My presentation is focused on Project Controls Using Integrated Data – The Opportunities and Challenges. It looks at the practical and ethical challenges posed by integrated information management tools such as BIM and ‘drones’ in the construction and engineering industries, and how these affect the work of project controls professionals.

To register go to: https://www.projectmanagement.com/events/356123/PMI-Scheduling-Conference-2017

If you are not a PMI member (or cannot make the date) watch this space.

 

The future of project controls

Last week I participated in two PUXX panel discussions in Perth and Sydney focused on predicting the influence of technology on project controls.  The subjects covered ranged from drones and remote monitoring to virtual reality.

Many of the topics discussed offered better ways to do things we already do, provided we can make effective use of the data generated in ever increasing quantities – significant improvements but essentially ‘business-as-usual’ done better. The aspect I want to focus on in this post is the potential to completely reframe the way project schedules are developed and controlled when existing ‘gaming technology’ and BIM are synthesised.

The current paradigm used for critical path scheduling is a (dumbed-down) solution to a complex set of problems required to allow the software to run on primitive mainframe computers in the late 1950s – the fundamentals have not changed since! See: A Brief History of Scheduling.

The underlying assumption is that a project consists of a set of activities, each with a defined duration, and that depending on the logical relationships between the activities, some are ‘critical’ while others have ‘float’.  The basic flaw in this approach can be demonstrated by looking at the various options open to a scheduler to define the work involved in three simple foundations involving excavation and mass concrete fill.

[Diagram: four alternative CPM models of the three-foundation example]

All four of the above options are viable alternatives that different schedulers might choose to describe the work using CPM, yet none of them really describes what actually happens. Adding more links would help, but even then the real situation is that one resource crew visits the three locations in turn and excavates the foundations, while a second crew follows and places the concrete, with options for overlapping, parallel working and possibly synchronising the actual pouring of all three foundations on the same day. Optimising the work of the crews is the key to a cost-effective outcome, and this depends on what follows their work.  For more on resource optimisation see: www.mosaicprojects.com.au/Resources_Papers_152.html. Advances in computer software offer the opportunity to develop a new way of working.
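To make the CPM mechanics concrete, here is a minimal forward/backward pass sketch. The activity names, durations and links are illustrative assumptions based on the three-foundation example, not taken from any real schedule. Note how CPM declares the excavation chain critical and gives the first two concrete pours ‘float’, even though the real driver is the flow of the two crews:

```python
# Minimal CPM forward/backward pass (finish-to-start links only).
# Activities, durations and links are illustrative assumptions.

def cpm(durations, links):
    """durations: {activity: days}; links: (predecessor, successor) FS pairs.
    Assumes the activities are listed in a valid (topological) order."""
    order = list(durations)
    es, ef = {}, {}
    for a in order:                              # forward pass
        es[a] = max((ef[p] for p, s in links if s == a), default=0)
        ef[a] = es[a] + durations[a]
    project_end = max(ef.values())
    ls, lf, total_float = {}, {}, {}
    for a in reversed(order):                    # backward pass
        lf[a] = min((ls[s] for p, s in links if p == a), default=project_end)
        ls[a] = lf[a] - durations[a]
        total_float[a] = ls[a] - es[a]
    return project_end, total_float

# One crew excavates the three foundations in turn; a second crew concretes them.
durations = {"Exc1": 2, "Exc2": 2, "Exc3": 2, "Conc1": 1, "Conc2": 1, "Conc3": 1}
links = [("Exc1", "Exc2"), ("Exc2", "Exc3"),        # excavation crew sequence
         ("Conc1", "Conc2"), ("Conc2", "Conc3"),    # concrete crew sequence
         ("Exc1", "Conc1"), ("Exc2", "Conc2"), ("Exc3", "Conc3")]
end, total_float = cpm(durations, links)
print(end)          # 7 (days)
print(total_float)  # zero-float activities are 'critical'
```

The model gives the first two concrete pours ‘float’, yet in practice the concrete crew’s continuity is what a planner would actually protect – exactly the gap between CPM logic and resource flow discussed above.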

The starting point for the hypothesis outlined in this post is 4D BIM (Building Information Modelling). Last month I was in London working on the final edits to the second edition of the CIOB’s book, Guide to Good Practice in the Management of Time in Complex Projects (due for publication in 2017 as The Management of Time in Major Projects). One of the enhancements in the second edition is an increased focus on BIM. To assist our work, a demonstration of cutting-edge 4D BIM was provided by Freeform.

Their current capabilities include:

  • The ability to model, in real time, clashes in working space, provided the space needed for each crew’s work is parameterised. Change the timing of one work crew and the effect on others in a space is highlighted.
  • The ability to view the work from any position at any time in the construction process; allowing things such as a tower crane driver’s actual line of sight to be literally ‘seen’ at different stages of the construction.
  • The relatively normal ability to import schedule timings from a range of standard tools to animate the building of the model, and the ability to feed back information derived from processes such as the identification of clashes in the use of working space.
  • The space occupied by temporary works and various pieces of equipment can be defined and clashes with permanent works identified over time.
  • Finally the ability for a person to see and move around within the virtual model using the same type of 3D virtual reality goggles used by many gaming programmes. The wearer is literally immersed in the model.

For all of this in action on a major rail project see: https://www.newcivilengineer.com/future-tech/pushing-the-limits-of-bim/10012298.article

Moving into the world of game playing, there are many different games that allow players in competition, or collaboration, to ‘build’ cities, empires, fortifications, farms, etc. These games know the resources available to the players and how many resources will be required to construct each new element in the game – if you don’t have the resources, you can’t build the new asset.

Combining these two concepts opens up the possibility for a completely new approach to scheduling physical projects that involve the deployment of resources to physical locations to undertake work. The concept of location-based scheduling is not new, it was used in the 1930s to construct the Empire State Building (see: Line of Balance) and is still widely used.  For more on location-based scheduling see: Location-Based Management for Construction: Planning, Scheduling, and Control by Prof. Russell Kenley.

How these concepts tie into BIM starts with the model itself.  A BIM model consists of a series of parameterised objects. Each object can contain data on its size, weight, durability, cost, maintainability, carbon footprint, etc. As BIM develops, many of these objects will come from standard libraries created by suppliers and subcontractors. Change an object, for example, replace windows from manufacturer “A” with similar windows from manufacturer “B”, and the model is updated and potential issues with sizes, fixings and waterproofing can be identified. It is only a small step from this point to add parameters related to the resources needed to undertake the work of installation.
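As a purely hypothetical sketch (the class, field names and figures below are my assumptions, not any real BIM schema), a parameterised object could carry installation-resource data alongside its physical data, so that swapping one window for another flags both the engineering and the scheduling impacts:

```python
# Hypothetical sketch of a parameterised BIM object that also carries the
# resources needed for its installation (all names and figures are assumptions).
from dataclasses import dataclass

@dataclass
class BIMObject:
    name: str
    width_mm: int
    height_mm: int
    cost: float
    install_crew: str          # resource type needed for installation
    install_crew_hours: float  # scheduling parameter held in the model

window_a = BIMObject("Window-A", 1200, 900, 450.0, "glaziers", 1.5)
window_b = BIMObject("Window-B", 1210, 900, 430.0, "glaziers", 2.0)

def swap_check(old, new):
    """List the issues a model update should raise when objects are swapped."""
    issues = []
    if (old.width_mm, old.height_mm) != (new.width_mm, new.height_mm):
        issues.append("opening size / fixings need review")
    if new.install_crew_hours != old.install_crew_hours:
        issues.append("installation duration changes")
    return issues

print(swap_check(window_a, window_b))
# ['opening size / fixings need review', 'installation duration changes']
```

The point of the sketch is that once installation resources are parameters of the object, a design substitution automatically becomes a schedule question as well as an engineering one.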

With this information and relatively minor enhancements to current BIM capabilities, once the engineering model is reasonably complete a whole new paradigm for planning work opens up.


To plan the work the ‘planning team’ put on their virtual reality headsets and literally ‘walk’ onto the site.  As they start to locate temporary works and begin the building process the model is tracking the use of resources and physical space in real time. The plan is developed based on the embedded parameters in the fully integrated 3D model.

Current 4D imports a schedule and ‘shows you’ the effect.  Using the proposed gaming approach and parameterised objects, you can literally build the project in the virtual space and either see the consequences on resource loading or be limited by resource availability.  A whole bunch of games do this already; add in existing clash-detection capabilities (but applied to workers using the space) and you change the whole focus of planning a project. Decisions can be made to adjust the size of resource crews, and the flow of work can be optimised to balance the competing objectives of cost efficiency, time efficiency and resource optimisation.

The proposed model is a paradigm shift away from CPM and its arbitrary determination of activities and durations to a process focused on the smooth flow of resources through work areas. The computational base will be focused on resource effectiveness and resource utilisation. Change ‘critical path’ to ‘critical resources’, eliminate the illusion of ‘float’ but look for underutilised resources and resource waiting time. To optimise the work, different scenarios can be stored, replayed and edited – the ultimate ‘what-if’ experience.

The concept of schedule density ties in with this approach nicely; initial planning is done for the whole project at the ‘low density’ level with activity durations of several weeks or months setting out the overall ‘time budget’ for the project and establishing the strategic flow of work.  As the design improves and more information becomes available, the schedule is enhanced first to ‘medium density’ and then to ‘high density’. The actual work is controlled by the ‘high density’ part of the schedule. For more on ‘schedule density’ see: www.mosaicprojects.com.au/WhitePapers/WP1016_Schedule_Density.pdf.

Where this concept gets really interesting is in the control of the work.  The medium and high density elements of the schedule are built using the same ‘virtual reality’ process as the overall schedule, therefore each object in the overall BIM model can include data on the resources allocated to the work, the sequence of work and the time allowed. Given workers on BIM-enabled projects already use various PDAs to access details of their work, the same tablet or smart device can be used to tell the workers their next job and how long they have to complete it. When they complete a task, updating the BIM model with that progress information updates the schedule, tells the crew their next job, and tells the next resources planned to move into the area that the space is available. The schedule and the 3D model are the same entity.

Similarly, off-site manufacturing and design lead-times can be integrated into the dataset.  Each manufactured item can have its design, manufacture, transport and approval times associated with the element, making the development of an off-site works / procurement schedule a simple report to extract once the schedule is set.  Identifying delays in the supply chain and dealing with changes in the timing of installation become straightforward.

When the inevitable problems occur, the project management team has the ideal tool to work through solutions and determine the optimum way forward; as soon as the new schedule is agreed, the BIM model already holds the information.

One of the key concepts in ‘schedule density’ is that any work planned for the short-term future has to be based on the actual performance of the crews doing the work. In a BIM enabled scheduling system this can also be automated. The work content of each activity is held in the model as is the crew assigned to the work. As soon as the work crew’s productivity can be measured, the benchmark values used in the original planning can be updated with real data. Where changes in performance are needed to deal with slippages and productivity issues these can be properly planned and incorporated into the schedule based on when the implemented changes can be expected to occur.
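The benchmark update at the heart of this idea is simple arithmetic; here is a sketch with assumed figures (the work content, rates and units are illustrative only):

```python
# Illustrative sketch: replace the benchmark production rate with the crew's
# measured rate when forecasting the remaining work (all figures are assumptions).

work_content = 900.0     # e.g. m2 of formwork held in the model for this element
work_done = 300.0        # progress reported back into the BIM model
benchmark_rate = 45.0    # m2 per day assumed during original planning
measured_rate = 37.5     # actual rate achieved by the assigned crew to date

remaining = work_content - work_done
planned_days = remaining / benchmark_rate   # what the original plan assumed
forecast_days = remaining / measured_rate   # what measured performance implies

print(round(planned_days, 1), round(forecast_days, 1))  # 13.3 16.0
```

Once the model holds both the work content and the crew assignment, this substitution of real data for benchmarks can happen automatically every time progress is reported.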

I’m not sure if this is BIM2 or BIM++ but these ideas are not very far in advance of current capabilities – all we need now is a software developer to take on the ideas and make them work.

These concepts will be harder to apply to ‘soft projects’, but the planning paradigms in soft projects have already been shaken up by Agile. Integrating 3D modelling with a capability for real 4D interaction certainly seems to make sense for projects where the primary time management issue is the flow of resources in the correct sequence through a defined series of work locations in three dimensions.   What do you think?

New Planning and controls website


Our new Project Planning and Controls website at www.planning-controls.com.au/ is now up and running.  This site currently has two focuses:

Helping people study to pass their PMI-SP® examination: www.planning-controls.com.au/pmisp-courses/, backed by a library of helpful PMI-SP exam support resources: www.planning-controls.com.au/support/

Providing a single location for planners and schedulers to access our library of project controls papers and other free resources: www.planning-controls.com.au/controls/   Almost all of the papers are available for download and use under the Creative Commons licence.

This site will be progressively updated with a view to becoming a key reference for all planning and control professionals worldwide!  Any suggestions for improvements will be appreciated – we look forward to hearing from you.

 

 

Critical confusion – when activities on the critical path don’t compute……

The definition of a schedule ‘critical path’ varies (see Defining the Critical Path), but the essence of all the valid definitions is that the critical path determines the minimum time needed to complete the project. Either by implication or overtly, the definitions state that delaying an activity on the critical path will delay the completion of the project, and that accelerating an activity will (subject to float on other paths[1]) accelerate the completion of the project.

A series of blog posts by Miklos Hajdu, Research Fellow at Budapest University of Technology and Economics, published earlier this year highlights the error in this assumption and significantly enhances the basic information contained in my materials on ‘Links, Lags and Ladders’ and our current PMI-SP course notes.  The purpose of this post is to consolidate all these concepts into a single publication.

The best definition of a critical path is: ‘Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase’[2].  This definition is always correct.  Furthermore, in simple precedence networks (PDM) that only use Finish-to-Start links, and in traditional Activity-on-Arrow (ADM) networks, the general assumption that increasing the duration of an activity on the critical path delays the completion of the schedule, and reducing the duration accelerates it, holds true.  The problems occur in PDM schedules using more sophisticated link types.  Miklos has defined five constructs using standard PDM links in which the normal assumption outlined above fails. These constructs, starting with the ‘normal critical’ that behaves as expected, are shown diagrammatically below[3].

Normal Critical

The overall project duration responds as expected to a change in the activity duration.

[Diagram 1: Normal critical]

A one day reduction of the duration of an activity on the critical path will shorten the project duration by one day; a one day increase will lengthen the project duration by one day.

Reverse Critical

The change in the overall project duration is the opposite of any change in the activity duration.

[Diagram 2: Reverse critical]

A one day reduction of the duration of Activity B will lengthen the project duration by one day; a one day increase will reduce the project duration by one day.

Neutral Critical

Either a one day decrease or a one day increase leaves the project duration unaffected. There are two variants, SS and FF:

[Diagram 3: Neutral critical – variant 1]

[Diagram 3: Neutral critical – variant 2]

In both cases it does not matter what change you make to Activity B, there is no change in the overall duration of the project.  This is one of the primary reasons almost every scheduling standard requires a link from a predecessor into the start of every activity and a link from the end of the activity to a successor.

Bi-critical Activities

Any change in the duration of Activity B will cause the project duration to increase.

[Diagram 4: Bi-critical]

A one day reduction of the duration of Activity B will lengthen the project duration by one day; a one day increase will also lengthen the project duration by one day.  Bi-critical activities depend on having a balanced ladder where all of the links and activities are critical in the baseline schedule. Increasing the duration of B pushes the completion of C through the FF link.  Reducing the duration of B ‘pulls’ the SS link back to a later time and therefore delays the start of C.  The same effect will occur if the ladder is unbalanced or there is some float across the whole ladder; it is just not as obvious and may not flow through to a delay, depending on the float values and the extent of the change.

Increasing Normal Decreasing Neutral

An increase in Activity B will delay completion, but a reduction has no effect! There are two variations on this type of construct.

[Diagram 5: Increasing normal / decreasing neutral – variant 1]

[Diagram 5: Increasing normal / decreasing neutral – variant 2]

A one day increase in the duration of Activity B will increase the project duration by one day; however, reducing the length of Activity B has no effect on the project’s duration.

Increasing Neutral Decreasing Reverse

An increase in Activity B has no effect, but a reduction will delay completion! Again, there are two variations on this type of construct.

[Diagram 6: Increasing neutral / decreasing reverse – variant 1]

[Diagram 6: Increasing neutral / decreasing reverse – variant 2]

A one day increase in the duration of Activity B has no effect on the project’s duration; however, reducing the length of Activity B by one day will increase the project duration by one day.
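The behaviour of these constructs can be reproduced with a minimal PDM early-date calculation. The sketch below is an illustration only (zero lags, contiguous non-splittable activities, early-date logic – not a full scheduler); it demonstrates the ‘reverse critical’ case, where lengthening Activity B shortens the project:

```python
# Minimal PDM early-date pass (zero lags, contiguous activities) to reproduce
# the 'reverse critical' behaviour; a sketch, not a full scheduling engine.

def early_dates(durations, links):
    """links: (pred, succ, type) where type is 'FS', 'SS', 'FF' or 'SF'.
    Returns the project duration implied by the early dates."""
    es = {a: 0 for a in durations}
    changed = True
    while changed:                    # iterate until the dates stabilise
        changed = False
        for p, s, t in links:
            pf = es[p] + durations[p]
            if t == "FS":             # pred finish drives succ start
                need_start = pf
            elif t == "SS":           # pred start drives succ start
                need_start = es[p]
            elif t == "FF":           # pred finish drives succ finish
                need_start = pf - durations[s]
            else:                     # SF: pred start drives succ finish
                need_start = es[p] - durations[s]
            if need_start > es[s]:
                es[s] = need_start
                changed = True
    return max(es[a] + durations[a] for a in durations)

# Reverse-critical construct: A drives B's finish (FF), B's start drives C (SS).
links = [("A", "B", "FF"), ("B", "C", "SS")]
base = early_dates({"A": 5, "B": 3, "C": 5}, links)
longer_b = early_dates({"A": 5, "B": 4, "C": 5}, links)  # B stretched by 1 day
print(base, longer_b)  # 7 6 – lengthening B here SHORTENS the project
```

Stretching B pulls its start (and hence C’s SS-driven start) earlier, which is exactly the counter-intuitive arithmetic behind the reverse-critical construct.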

Why does this matter?

The concept of the schedule model accurately reflecting the work of the project, to support decision making during the course of the work and the forensic assessment of claims after the project has completed, is central to modern project management.  Apart from the ‘normal critical’ construct, all of the other constructs outlined above will produce wrong information, or allow a claim to be dismissed based on the nuances of the model rather than the real effect.

Using most contemporary tools, all the planner can do is be aware of the issues and avoid creating the constructs that cause them.  In the medium term, there is a need to revisit the whole function of overlapping activities in a PDM network to allow overlapping and progressive feed to function efficiently.  This problem was solved in some of the old ADM scheduling tools; ICL VME PERT had a sophisticated ‘ladder’ construct[4].  Similar capabilities are available in some modern scheduling tools that can model a ‘continuous precedence relationship’[5] or implement RD-CPM[6].


[1] For more on the effect of ‘float’ see: http://www.mosaicprojects.com.au/PDF/Schedule_Float.pdf

[2] From ISO 21500 Guide to Project Management.

[3] The calculations for these constructs are on Miklos’s blog at: https://www.linkedin.com/in/miklos-hajdu-a1418862

[4] For more on ‘Links, Lags and Ladders’ see: http://www.mosaicprojects.com.au/PDF/Links_Lags_Ladders.pdf

[5] For more on continuous relationships see:  http://www.sciencedirect.com/science/article/pii/S1877705815031811

[6] For more on RD-CPM see: http://www.mosaicprojects.com.au/WhitePapers/WP1035_RD-CPM.pdf

The PM College of Scheduling Conference and Membership

The Project Management College of Scheduling is now officially open for business.   As you may already know, a group of us led by Jon Wickwire and Stu Ockman joined together to found the Project Management Institute College of Scheduling in early 2002.  A dozen years later, a new group (including me and many of the leaders of the former College) founded its successor, The Project Management College of Scheduling (PM-COS), which has since completed the formalities required under US law.

The role fulfilled by PM-COS is intended to be quite different to most member-based organisations, being focused on creating knowledge and capability in the scheduling profession.  As a member, you will:

  • Be a part of creating the centre of excellence for the advancement of scheduling and project controls throughout the world
  • Collaborate with other top schedule professionals, consultants and experts in identifying and instituting best practices on your projects
  • Help develop standards in all areas of scheduling including specifying, preparing, updating, software, claims, training and research
  • Provide education and training to promote accurate and ethical scheduling
  • Join in a dialog with software developers to foster implementations of new, innovative features in upcoming releases
  • Participate in mentoring the next generation of scheduling professionals

If sharing ideas and giving back to the profession get you excited and you’d like to be a part of our journey, why not Join Us now.  And, whether or not membership in the College is in your future, we’d love to have you with us at our annual conference, May 15th-18th in Chicago.

The Project Management College of Scheduling Annual International Conference, Scheduling the Future, will be held from the 15th to 18th May at the Hyatt Chicago Magnificent Mile.  This is a terrific opportunity to:

(1) share ideas,
(2) see old friends and make new ones and
(3) participate in this year’s premier planning and scheduling event.

We have a terrific technical program offering 14 Professional Development Units (PDUs), with speakers and panel discussions planned to give everyone a chance to participate.  In addition, we have a social program with a Sunday night vendor reception, a Monday night Gala Dinner, and Tuesday night free for a night on the town.  We’re also planning a golf tournament for Wednesday afternoon.

Don’t forget to check out the conference program, and drop by our website, http://www.pmcos.org/, to sign up now.  We’re offering a discount for PMCOS Members and another for early member-registration.  Finally, make your hotel reservations directly with the Hyatt Magnificent Mile at their website.  This may be the most important part since we’re visiting during peak season and the hotel has reserved a limited number of rooms for the conference.

We’ve got a lot planned, and you can help us make it a success!

GAO Schedule Assessment Guide released

On 22nd December, the U.S. Government Accountability Office (GAO) issued the final version of its Schedule Assessment Guide: Best Practices for Project Schedules (GAO-16-89G). This guide is a companion to the GAO Cost Estimating and Assessment Guide published in 2009. The Government Accountability Office is an independent, nonpartisan agency that exists to support Congress in meeting its constitutional responsibilities, and works to improve the performance of federal government programs.

The Schedule Assessment Guide applies to civilian and defence projects managed by either government entities or private contractors in the USA; it is also an extremely valuable reference for all projects world-wide. On its release, Gene Dodaro, Comptroller General of the United States and head of the GAO said “A well-planned schedule is an essential tool for program management. The best practices described in the guide are intended to help agencies create and maintain schedules that are comprehensive, well-constructed, credible, and controlled.”

Over the last five years, the GAO has worked with experts in cost estimating, scheduling, and earned value management from government agencies, private industry, and academia to develop and formalise the scheduling best practices outlined in the Schedule Assessment Guide. The ten best practices associated with a high-quality and reliable schedule, as defined in the Guide, are:

  1. Capturing all activities. The schedule should reflect all activities necessary to accomplish a project’s objectives, including activities both the owner and the contractors are to perform.
  2. Sequencing all activities. All activities must be logically sequenced and linked. Date constraints and lags should be minimised and justified.
  3. Assigning resources to all activities. The schedule should reflect the resources (labour, materials, travel, facilities, equipment, and the like) needed to do the work.
  4. Establishing the duration of all activities. The schedule should realistically reflect how long each activity will take. Schedules that contain planning and summary planning packages as activities will normally reflect longer durations until broken into work packages or specific activities.
  5. Verifying that the schedule can be traced horizontally and vertically. The schedule should be horizontally traceable, with “hand-offs” defined, and vertically traceable, with lower-level schedules clearly consistent with upper-level schedule milestones.
  6. Confirming that the critical path is valid. The schedule should identify the program’s critical path.
  7. Ensuring reasonable total float. The schedule should identify reasonable total float on activities.
  8. Conducting a schedule risk analysis. Using a statistical simulation to predict the level of confidence in meeting a program’s completion date. Programs should include the results of the schedule risk analysis in constructing an executable baseline schedule.
  9. Updating the schedule using actual progress and logic. Progress updates and logic provide a realistic forecast of start and completion dates for program activities. Maintaining the integrity of the schedule logic is necessary to reflect the true status of the program.
  10. Maintaining a baseline schedule. A baseline schedule is the basis for managing the program scope, the time period for accomplishing it, and the required resources. Program performance is measured, monitored, and reported against the baseline schedule.

In its 224 pages the Schedule Assessment Guide provides detailed explanations of each of the best practices, supported by case studies and includes ‘key questions’ and the ‘key documentation’ to be used by auditors in assessing schedule compliance.

The development of the Schedule Assessment Guide was led by 2014 PGCS keynote presenter Karen Richey; her presentation to the symposium outlining the challenges faced by US government auditors can be downloaded from: http://www.pgcs.org.au/index.php/download_file/view/116/
(see more on the Project Governance and Controls Symposium).

The Schedule Assessment Guide validates many of the concepts defined in our scheduling papers and the CIOB Guide to Good Practice in the Management of Time in Complex Projects, see: http://www.mosaicprojects.com.au/Planning.html

To download your copy of the Schedule Assessment Guide go to: http://www.gao.gov/products/gao-16-89g

The three phases of project controls

The need to control projects (or bodies of work that we would call a project today) extends back thousands of years. Certainly the Ancient Greeks and Romans used contracts and contractors for many public works, which meant the contractors needed to manage the work within a predefined budget and an agreed timeframe.  However, what was done to control projects before the 19th century is unclear – ‘phase 0’.  From the 1800s onward there were three distinct phases in the control processes.

Phase 1 – reactive

The concept of using charts to show the intended sequence and timing of the work became firmly established in the 19th century, and the modern bar chart was in use by the start of the 20th century. One of the best examples is from a German project in 1910, see: Schürch.  A few years later Henry Gantt started publishing his various charts.


From a controls perspective, these charts were static and reactive. The diagrams enabled management to see, in graphic form, how well work was progressing, and indicated when and where action would be necessary to keep the work on time. However, there is no documented evidence that any of these charts were ever used as predictive tools to determine schedule outcomes. To estimate the completion of a project, a revised chart had to be drawn based on the current knowledge of the work – a re-estimation process; however, there is no documentation to suggest even this occurred regularly. The focus seems to have been on using ‘cheap labour’ to throw resources at the defined problem and get the work back onto program.

Cost management seems to have been little different; the reports of the Royal Commissioners to the English Parliament on the management of the ‘Great Exhibition’ of 1851 clearly show the accurate prediction of cost outcomes. Their 4th report predicted a profit of £173,000; the 5th and final report defined the profit as £186,436 18s. 6d. However, this forward estimation of cost outcomes does not seem to have transitioned to predicting time outcomes, and there is no real evidence as to how the final profit was ‘estimated’ (see Crystal Palace).

Phase 2 – empirical logic

Karol Adamiecki’s Harmonygraph (1896) introduced two useful concepts to the static views used in bar charts and the various forms of Gantt chart. In a Harmonygraph, the predecessors of each activity are listed at the top, and each activity’s timing and duration are represented by a vertical strip of paper pinned to a date scale. As the project changed, the strips could be re-pinned and an updated outcome assessed.

The first step towards a true predictive process to estimate schedule completion based on current performance was the development of PERT and CPM in the late 1950s.  Both used a logic-based network to define the relationships between tasks, allowing the effect of the current status at ‘Time Now’ to be cascaded forward and a revised schedule completion calculated.  The problem with CPM and PERT is that the remaining work is assumed to occur ‘as planned’; no consideration of actual performance is included in the standard methodology. It was necessary to undertake a complete rescheduling of the project to assess a ‘likely’ outcome.

Cost controls had been using a similar approach for a considerable period. Cost Variances could be included in the spreadsheets and cost reports and their aggregate effect demonstrated, but it was necessary to re-estimate future cost items to predict the likely cost outcome.

Phase 3 – predictive calculations

The first of the true predictive project controls processes was Earned Value (EV). EV was invented in the early 1960s and was formalised in the Cost Schedule Control Systems Criteria issued by the US DoD in December 1967.  EV uses predetermined performance measures and formula to predict the cost outcome of a project based on performance to date.  Unlike any of the earlier systems, a core tenet of EV is to use current project data to predict a probable cost outcome – the effect of performance efficiencies to date is transposed onto future work. Many and varied assessments of this approach have consistently demonstrated EV is the most reliable of the options for predicting the likely final cost of a project.

Unfortunately EV in its original format was unable to translate its predictions of the final cost outcome (EAC) into time predictions.  On a plotted ‘S-Curve’ it was relatively easy to measure the time difference between when a certain value was planned to be earned and when it was actually earned (SVt), but the nature of an ‘S-Curve’ meant the current SVt had no relationship to the final time variance.  A similar but different issue made using SPI equally unreliable. The established doctrine was to ‘look to the schedule’ to determine time outcomes, but the schedules were either at ‘Phase 1’ or ‘Phase 2’ capability – not predictive.

A number of options were tried through the 1960s, 70s and 80s to develop a process that could accurately predict schedule completion based on progress to date – ‘Count the Squares’ and ‘Earned Time’ in their various guises, to name two.  Whilst these systems could provide reasonable information on where the project was at ‘time now’, and overcame some of the limitations in CPM by indicating issues sooner (e.g. float burn hiding a lack of productivity), none had a true predictive capability.

The development of Earned Schedule resolved this problem.  Earned Schedule (ES) is a derivative of Earned Value; it uses EV data with modified EV formula to create a set of ‘time’ information that mirrors EV’s ‘cost’ information and generates a predicted time outcome for the project. Since its release in 2003, studies have consistently shown ES to be as accurate in predicting schedule outcomes as EV is in predicting cost outcomes.  In many respects this is hardly surprising, as the underlying data is the same for EV and ES and the ES formula are adaptations of the proven EV formula (see more on Earned Schedule).
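The published EV and ES formula can be sketched in a few lines. The monthly figures below are assumed purely for illustration; the formula are the standard ones: CPI = EV/AC, EAC = BAC/CPI, ES interpolated from the PV curve, SPI(t) = ES/AT, and IEAC(t) = PD/SPI(t):

```python
# Sketch of the standard Earned Value / Earned Schedule predictions.
# The monthly figures are assumptions for illustration only.

PV = [0, 100, 250, 450, 700, 1000]   # cumulative planned value by month
EV_now, AC_now, month_now = 380, 420, 3
BAC, PD = 1000, 5                     # budget at completion, planned duration

CPI = EV_now / AC_now                 # cost performance index
EAC = BAC / CPI                       # predicted cost outcome

# Earned Schedule: find when the current EV was planned to have been earned.
C = max(i for i, pv in enumerate(PV) if pv <= EV_now)
ES = C + (EV_now - PV[C]) / (PV[C + 1] - PV[C])   # interpolate within month C+1
SPI_t = ES / month_now                # time performance index
IEAC_t = PD / SPI_t                   # predicted time outcome (months)

print(round(EAC), round(ES, 2), round(IEAC_t, 2))  # 1105 2.65 5.66
```

The symmetry is the point: the time prediction uses the same EV data and the same style of performance-index formula as the long-established cost prediction.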

Phase 4 – (the future) incorporating uncertainty

The future of the predictive aspects of project controls needs to focus on the underlying uncertainty of all future estimates (including EV and ES).  Monte Carlo and similar techniques need to become a standard addition to the EV and ES processes so the probability of achieving the forecast date can be added into the information used for project decision making. Techniques such as ‘Schedule Density‘ move project controls into the proactive management of uncertainty but again are rarely used.

Summary:

From the mid 1800s (and probably much earlier) projects and businesses were being managed against ‘plans’.  The plans could be used to identify problems that required management action, but they did not predict the consequential outcome of the progress being achieved.  Assessing a likely outcome required a re-estimation of the remaining work, which was certainly done for the cost outcome on projects such as the construction of the Crystal Palace.

The next stage of development was the use of preceding logic, prototyped by Karol Adamiecki’s Harmonygraph, and made effective by the development of CPM and PERT as dynamic computer algorithms in the late 1950s. However, the default assumption in these ‘tools’ was that all future work would proceed as planned. Re-scheduling was needed to change future activities based on learned experience.

The ability to apply a predictive assessment to determine cost outcomes was introduced through the Earned Value methodology, developed in the early 1960s and standardised in 1967.   However, it was not until 2003 that the limitations in ‘traditional EV’ related to time were finally resolved with the publication of ‘Earned Schedule’.

In the seminal paper defining ES, “Schedule is Different”, the concept of ES was defined as an extension of the graphical technique of schedule conversion (that had long been part of the EVM methodology). ES extended the simple ‘reactive statement’ of the difference between ‘time now’ and the date when PV = EV, by using ‘time’ based formula, derived from EV formula, to predict the expected time outcome for the project.

The Challenge

The question every project controller and project manager needs to take into the New Year is: why are more than 90% of projects run using 19th century reactive bar charting, and the vast majority of the remainder run using 60-year-old CPM-based approaches, none of which offer any form of predictive assessment?  Don’t they want to know when the project is likely to finish?

It’s certainly important to understand where reactive management is needed to ‘fix problems’, but it is also important to understand the likely project outcome and its consequences so more strategic considerations can be brought into play.

Prediction is difficult (especially about the future) but it is the only way to understand what the likely outcome will be based on current performance, and therefore support value based decision making focused on changing the outcome when necessary.

I have not included dozens of references in this post; all of the papers are available at http://www.mosaicprojects.com.au/PM-History.html