Tag Archives: Project Controls

The origins of PERT and CPM – What came before the computers!

The development of PERT and CPM as mainframe software systems from 1957 onward is well documented, with contemporary accounts from the key people involved readily available.  What is less clear is how two systems developed at the same time but in isolation, together with a number of less well documented systems created in the same timeframe in the UK and Europe, came to share so many features.  These early tools used the ‘activity-on-arrow’ (AoA or ADM) notation, which is a far from obvious model.  Later iterations of CPM used the ‘precedence’ notation, which evolved from the way flow charts were (and are) drawn.


One obvious connection between the early developments was the community of interest around Operation (or Operational) Research (OR), a concept developed by the British at the beginning of WW2.  By the mid-1950s OR had grown to include linear programming, the mathematical underpinning of CPM; but while this link explains some of the cross-pollination of ideas and the mathematics, it does not explain terms such as ‘float’ or the AoA notation (for more on the development of CPM as a computer-based tool see http://www.mosaicprojects.com.au/PDF_Papers/P042_History%20of%20Scheduing.pdf).

A recent email from Chris Fostel, an Engineering Planning Analyst with Northrop Grumman Corporation (CFostel@rcn.com) appears to offer a rational explanation.  I’ve reproduced Chris’ email pretty much verbatim below – the challenge posed to you is to see if the oral history laid out below can be corroborated or validated.  I look forward to the responses.

Chris’ Oral History

I was told this story in 1978 by a retired quartermaster who founded his own company after the War to utilize his global contacts and planning skills.  Unfortunately the individual who told me this story passed away quite a few years ago and I’m not sure any of his compatriots are still alive either.  Regardless, I thought I should pass this along before I join them in the next life.  I do not wish to minimize the work of Kelly and Walker. They introduced critical path scheduling to the world and formalized the algorithms.  They did not develop or invent the technique.

The origin of critical path scheduling was the planning of the US Pacific Island hopping campaign during World War II.  The Quartermaster Corps coordinated orders to dozens if not hundreds of warships, troop ships and supply ships for each assault on a new island.  If any ships arrived early it would alert the Japanese of an imminent attack.  Surprise was critical to the success of the island hopping campaign.  The US did not have enough warships to fight off the much larger Japanese fleet until late in the war. Alerting the Japanese high command would allow the Japanese fleet to intercept and destroy the slow moving US troop ships before they had a chance to launch an attack. 

Initially the quartermasters drew up their plans on maps of the pacific islands, including current location and travel times of each ship involved.  The travel times were drawn as arrows on the map.  Significant events, personnel or supplies that traveled by air were shown as dashed lines hopping over the ship’s arrows.  The quartermasters would then calculate shortest and longest travel times to the destination for all ships involved in the assault. The plans became very complicated.  Many ships made intermediate stops at various islands to refuel or transfer cargo and personnel.  The goal was to have all ships arrive at the same time.  It didn’t take the quartermasters long to realize that a photograph of the planning maps would be a devastating intelligence lapse.  They started drawing the islands as identical bubbles with identification codes and no particular geographical order on the bubble and arrow charts. These were the first activity on arrow critical path charts; circa 1942. 

The only validation I can offer you is that by now you should realize that activity on arrow diagrams were intuitive as was the term ‘float.’  Float was the amount of time a particular ship could float at anchor before getting underway for the rendezvous.  Later when the US quartermasters introduced the technique to the British for planning the D-Day invasion the British changed float to “Slack”, to broaden the term to include air force and army units which did not float, but could ‘slack off’ for the designated period of time. 

You will not find a written, dated account of this story by a quartermaster corps veteran.  Critical path scheduling was a military secret until declassification in 1956.  In typical fashion, the veterans of WWII did not write about their experiences during the War.  No one broke the military secrecy.  After 1956 they were free to pass the method on to corporate planners such as Kelly and Walker.  A living WWII Quartermaster veteran should be able to provide more than my intuitive confirmation.

This narrative makes sense to me from a historical perspective (military planning has involved drawing arrows on maps for at least 200 years) and a timing perspective.  Can we find any additional evidence to back this up??  Over to you!

The Yin and Yang of Integrated Data Systems

Integrated project management information systems (PMIS) are becoming more common and more sophisticated, ranging from ‘web portals’ that hold project data through to the potential for fully integrated design and construction management using BIM[1].  The benefits derived from using these systems can be as much as 20% of the build price on complex construction projects using BIM.

The advantages of this type of information storage and retrieval system include:

  • Ready access to data when needed via PDAs and ‘tablets’ significantly reducing the need for ‘push’ communication and the existence of ‘redundant data’[2].
  • One place to look for information with indexing and cross-referencing to minimise the potential for missed information.
  • Audit trails and systems to ensure only the latest version of any document is available.
  • Cross-linking of data in different documents and formats to assist with configuration management, requirements traceability, and change control.
  • Controls on who can ‘see’ the data, access the data and edit the data.
  • Workflow functions to remind people of their next job, list open actions, record actual progress, etc[3].
  • A range of built-in functions to validate data and avoid ‘clashes’, including locking or ‘freezing’ parts of the data set when that information has been moved into ‘work’.

These benefits are significant and a well-designed system reduces errors and enhances productivity leading to reduced costs, but the ‘yin’ of well-designed PMIS comes with a ‘yang’!

People increasingly tend to believe information produced by a computer system; this is true of Facebook and Wikipedia, and it flows through to more sophisticated systems. There also seems to be a steady reduction in the ability of younger people in particular to critically analyse information; in short, if it comes from the computer, many people will assume it is correct. Add to this the ability of many of the more sophisticated PMIS tools to transpose and transfer information between different parts of the system automatically or semi-automatically, and there is the potential for many of the benefits outlined above to be undermined by poor data. This issue has been recognised for decades and has its own acronym: GIGO (garbage in, garbage out).

The question posed in this blog is: how many projects and project support organisations (PMOs, etc.) consider or actively implement effective data traceability?  Failed audits, overruns from scope oversights, and uninformed or ill-informed decision-making are just a few of the consequences project teams suffer if they do not have full traceability of their project management data. This issue exists in any information processing system, from basic schedule updating, through monthly reporting, to the most sophisticated integrated PMIS. If you cannot rely on the source data, no amount of processing will improve the situation! And to be able to rely on data, you need to be able to trace it back to its source.

Traceability is defined as ‘the ability to trace the location, history and use of each data element’. This sounds simple, but in reality it can be very challenging, and the results of poor traceability can be devastating to a project. Some of the key questions to ask are:

  • Where did this data or these actuals come from?
  • What is the authorizing document and when did it get signed/approved?
  • Has everyone approved the change request or action item?

Traceability does not happen by accident! Project management information systems have to be designed with traceability as a key element of every aspect.  As information comes into the system, the author or origin of the information has to be recorded (preferably automatically). Depending on the nature of the information, it may need to be quarantined until appropriate checks have been carried out and/or approvals have been obtained, and there then needs to be traceability of any subsequent changes. The foundation of traceability is the combination of processes (people) and data management.
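As a rough illustration of what ‘designed for traceability’ might look like, the Python sketch below shows a data record that carries its own provenance and stays quarantined until approved. The field names, checks and example values are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class DataRecord:
    """A project data element that carries its own provenance."""
    value: str
    source: str                        # originating document, system or person
    author: str                        # who entered the data
    captured_at: datetime              # when it entered the system
    approved_by: Optional[str] = None  # set once checks/approvals are complete
    quarantined: bool = True           # held back from use until verified
    history: List[str] = field(default_factory=list)  # audit trail of changes

    def approve(self, approver: str) -> None:
        """Release the record for use once the appropriate checks are done."""
        self.approved_by = approver
        self.quarantined = False
        self.history.append(f"approved by {approver}")

    def amend(self, new_value: str, editor: str) -> None:
        """Every subsequent change is recorded, preserving the trail to source."""
        self.history.append(f"{editor} changed '{self.value}' to '{new_value}'")
        self.value = new_value

# Illustrative usage: actuals arrive quarantined, are approved, stay traceable.
record = DataRecord("120 hrs", "weekly timesheet (hypothetical)", "site clerk",
                    datetime(2016, 11, 7))
record.approve("project controls lead")
record.amend("124 hrs", "cost engineer")
print(record.history)
```

The design choice worth noting is that the provenance travels with the data element itself, rather than living in a separate log that can drift out of step with the data.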

Therefore, the ‘yang’ of a sophisticated, integrated project management information system is that as systems become more integrated and sophisticated, people will come to rely on the information provided and ‘trust it’, whilst the source and veracity of the underlying data become less obvious.

Resolving this is partly process and partly people. The Chartered Institute of Building (CIOB) has produced the Time and Cost Management Contract Suite 2015 focused on complex construction projects using BIM.  This contract defines a number of key support roles (largely independent of the parties) focused on managing the information flows into and out of the system to ensure its accuracy and validity. Similar roles and responsibilities are essential in any effective PMIS.

My latest post on the PMI ‘Voices’ blog, From Data to Wisdom: Creating & Managing Knowledge, highlights the importance of data as the underpinning of all reporting and communication.  So the question is: how much focus does your project team or PMO put on ensuring the data it is using is timely, complete, accurate and traceable?

____________________

[1] BIM = Building Information Modelling, see: http://www.mosaicprojects.com.au/WhitePapers/WP1082_BIM_Levels.pdf

[2] For more on planning project communication see: http://www.mosaicprojects.com.au/Mag_Articles/ESEI-09-communication-planning.pdf

[3] A discussion on how these capabilities can enhance project controls is at: https://mosaicprojects.wordpress.com/2016/11/26/the-future-of-project-controls/

The future of project controls

Last week I participated in two PUXX panel discussions, in Perth and Sydney, focused on predicting the influence of technology on project controls.  The subjects covered ranged from drones and remote monitoring to virtual reality.

Many of the topics discussed offered better ways to do things we already do, provided we can make effective use of the data generated in ever increasing quantities – significant improvements but essentially ‘business-as-usual’ done better. The aspect I want to focus on in this post is the potential to completely reframe the way project schedules are developed and controlled when existing ‘gaming technology’ and BIM are synthesised.

The current paradigm used for critical path scheduling is a (dumbed-down) solution to a complex set of problems required to allow the software to run on primitive mainframe computers in the late 1950s – the fundamentals have not changed since! See: A Brief History of Scheduling.

The underlying assumption is that a project consists of a set of activities, each with a defined duration; depending on the logical relationships between the activities, some are ‘critical’ and others have ‘float’.  The basic flaw in this approach can be demonstrated by looking at the various options open to a scheduler to define the work involved in three simple foundations involving excavation and mass concrete fill.

[Figure: schedule options]

All four of the options above are viable alternatives that different schedulers might choose to describe the work using CPM, and none of them really describes what actually happens.  Adding more links would help, but even then they would not capture the real situation: one resource crew visits the three locations in turn and excavates the foundations, a second crew follows and places the concrete, with options for overlapping, parallel working and possibly synchronising the actual pouring of all three foundations on the same day.  Optimising the work of the crews is the key to a cost-effective outcome, and this depends on what follows their work.  For more on resource optimisation see: www.mosaicprojects.com.au/Resources_Papers_152.html. Advances in computer software offer the opportunity to develop a new way of working; a simple sketch of the crew-flow view follows.
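To make the contrast concrete, here is a minimal Python sketch of that ‘real situation’: two crews flowing through three locations in turn, the second crew constrained by both the work in front of it and its own availability. All durations are illustrative assumptions:

```python
# Three foundations (F1-F3): an excavation crew followed by a concrete crew.
excavate = {"F1": 2, "F2": 2, "F3": 2}   # crew 1 work content per location (days)
concrete = {"F1": 1, "F2": 1, "F3": 1}   # crew 2 work content per location (days)

def crew_flow(excavate, concrete):
    """Schedule two crews flowing through the locations in sequence."""
    exc_finish = conc_finish = 0
    plan = {}
    for loc in excavate:
        exc_start = exc_finish                     # crew 1 moves straight to the next pit
        exc_finish = exc_start + excavate[loc]
        conc_start = max(conc_finish, exc_finish)  # crew 2 needs the pit dug AND its crew free
        conc_finish = conc_start + concrete[loc]
        plan[loc] = (exc_start, exc_finish, conc_start, conc_finish)
    return plan

for loc, (es, ef, cs, cf) in crew_flow(excavate, concrete).items():
    print(f"{loc}: excavate day {es}-{ef}, concrete day {cs}-{cf}")
```

Run this and the concrete crew is seen waiting a day between each pour; the quantity that matters for optimisation is crew waiting time, not the ‘float’ on an arbitrarily defined activity.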

The starting point for the hypothesis outlined in this post is 4D BIM (Building Information Modelling). Last month I was in London working on the final edits to the second edition of the CIOB’s book, Guide to Good Practice in the Management of Time in Complex Projects (due for publication in 2017 as The Management of Time in Major Projects). One of the enhancements in the second edition is an increased focus on BIM. To assist our work, a demonstration of cutting-edge 4D BIM was provided by Freeform.

Their current capabilities include:

  • The ability to model, in real time, clashes in working space, provided the space needed for each crew’s work is parameterised. Change the timing of one work crew and the effect on others in the same space is highlighted.
  • The ability to view the work from any position at any time in the construction process; allowing things such as a tower crane driver’s actual line of sight to be literally ‘seen’ at different stages of the construction.
  • The relatively routine ability to import schedule timings from a range of standard tools to animate the building of the model, and to feed back information derived from processes such as the identification of clashes in the use of working space.
  • The space occupied by temporary works and various pieces of equipment can be defined and clashes with permanent works identified over time.
  • Finally, the ability for a person to see and move around within the virtual model using the same type of 3D virtual reality goggles used by many gaming programmes. The wearer is literally immersed in the model.

For all of this in action on a major rail project see: https://www.newcivilengineer.com/future-tech/pushing-the-limits-of-bim/10012298.article

Moving into the world of game playing, there are many different games that allow players in competition, or collaboration, to ‘build’ cities, empires, fortifications, farms, etc. These games know the resources available to the players and how many resources will be required to construct each new element in the game – if you don’t have the resources, you can’t build the new asset.

Combining these two concepts opens up the possibility of a completely new approach to scheduling physical projects that involve the deployment of resources to physical locations to undertake work. The concept of location-based scheduling is not new; it was used in the 1930s to construct the Empire State Building (see: Line of Balance) and is still widely used.  For more on location-based scheduling see: Location-Based Management for Construction: Planning, Scheduling, and Control by Prof. Russell Kenley.

How these concepts tie into BIM starts with the model itself.  A BIM model consists of a series of parameterised objects. Each object can contain data on its size, weight, durability, cost, maintainability, carbon footprint, etc. As BIM develops, many of these objects will come from standard libraries created by suppliers and subcontractors. Change an object (for example, replace windows from manufacturer “A” with similar windows from manufacturer “B”) and the model is updated, and potential issues with sizes, fixings and waterproofing can be identified. It is only a small step from this point to add parameters for the resources needed to undertake the work of installation.
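By way of illustration only, a parameterised object carrying installation-resource data might look something like the sketch below; the fields and values are assumptions, not any particular BIM schema:

```python
from dataclasses import dataclass

@dataclass
class WindowObject:
    """Illustrative parameterised object; field names are assumptions."""
    manufacturer: str
    width_mm: int
    fixing_type: str
    install_crew: str      # resource required to install the object
    install_hours: float   # work content driving the schedule and resource plan

window_a = WindowObject("A", 1200, "lugs", "glazing crew", 3.0)
window_b = WindowObject("B", 1210, "straps", "glazing crew", 3.5)

# Swap A for B: the model can flag dimension and fixing mismatches, and the
# changed work content flows straight through to the resource plan.
if (window_a.width_mm, window_a.fixing_type) != (window_b.width_mm, window_b.fixing_type):
    print("review needed: opening size or fixing detail has changed")
print(f"extra crew time per window: {window_b.install_hours - window_a.install_hours:.1f} hrs")
```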

With this information and relatively minor enhancements to current BIM capabilities, once the engineering model is reasonably complete a whole new paradigm for planning work opens up.


To plan the work the ‘planning team’ put on their virtual reality headsets and literally ‘walk’ onto the site.  As they start to locate temporary works and begin the building process the model is tracking the use of resources and physical space in real time. The plan is developed based on the embedded parameters in the fully integrated 3D model.

Current 4D tools import a schedule and ‘show you’ the effect.  Using the proposed gaming approach and parameterised objects, you could literally build the project in the virtual space and either see the consequences on resource loading or be limited by resource availability.  A whole range of games do this already; add in existing clash-detection capabilities (but applied to workers using the space) and you change the whole focus of planning a project. Decisions can be made to adjust the size of resource crews, and the flow of work can be optimised to balance the competing objectives of cost efficiency, time efficiency and resource optimisation.

The proposed model is a paradigm shift away from CPM and its arbitrary determination of activities and durations to a process focused on the smooth flow of resources through work areas. The computational base will be focused on resource effectiveness and resource utilisation. Change ‘critical path’ to ‘critical resources’, eliminate the illusion of ‘float’ but look for underutilised resources and resource waiting time. To optimise the work, different scenarios can be stored, replayed and edited – the ultimate ‘what-if’ experience.

The concept of schedule density ties in with this approach nicely; initial planning is done for the whole project at the ‘low density’ level with activity durations of several weeks or months setting out the overall ‘time budget’ for the project and establishing the strategic flow of work.  As the design improves and more information becomes available, the schedule is enhanced first to ‘medium density’ and then to ‘high density’. The actual work is controlled by the ‘high density’ part of the schedule. For more on ‘schedule density’ see: www.mosaicprojects.com.au/WhitePapers/WP1016_Schedule_Density.pdf.

Where this concept gets really interesting is in the control of the work.  The medium and high density elements of the schedule are built using the same ‘virtual reality’ process as the overall schedule; therefore each object in the overall BIM model can include data on the resources allocated to the work, the sequence of work and the time allowed. Given that workers on BIM-enabled projects already use various PDAs to access details of their work, the same tablet or smart device can be used to tell the workers their next job and how long they have to complete it. When they complete a task, updating the BIM model with that progress information updates the schedule, tells the crew their next job, and tells the next resources planned to move into the area that the space is available. The schedule and the 3D model are the same entity.

Similarly, off-site manufacturing and design lead times can be integrated into the dataset.  Each manufactured item can have its design, manufacture, transport and approval times associated with the element, making the development of an off-site works / procurement schedule a simple report to extract once the schedule is set.  Identifying delays in the supply chain and dealing with changes in the timing of installation become straightforward.

When the inevitable problems occur, the project management team has the ideal tool to work through solutions and determine the optimum way forward; as soon as the new schedule is agreed, the BIM model already holds the information.

One of the key concepts in ‘schedule density’ is that any work planned for the short-term future has to be based on the actual performance of the crews doing the work. In a BIM-enabled scheduling system this can also be automated. The work content of each activity is held in the model, as is the crew assigned to the work. As soon as the work crew’s productivity can be measured, the benchmark values used in the original planning can be updated with real data. Where changes in performance are needed to deal with slippages and productivity issues, these can be properly planned and incorporated into the schedule based on when the implemented changes can be expected to take effect.
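A minimal sketch of that feedback loop, with assumed units and rates, might look like this:

```python
# Benchmark rate used for the initial plan, refined with measured performance.
benchmark_rate = 12.0                  # e.g. m2 of formwork per crew-day (assumed)
remaining_work = 240.0                 # m2 still to be done in the activity

measured_rates = [10.5, 11.0, 10.8]    # rates observed in recent update periods
actual_rate = sum(measured_rates) / len(measured_rates)

print(f"planned:  {remaining_work / benchmark_rate:.1f} days")   # 20.0
print(f"forecast: {remaining_work / actual_rate:.1f} days")      # 22.3
```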

I’m not sure if this is BIM2 or BIM++ but these ideas are not very far in advance of current capabilities – all we need now is a software developer to take on the ideas and make them work.

These concepts will be harder to apply to ‘soft projects’, but the planning paradigms in soft projects have already been shaken up by Agile. Integrating 3D modelling with a capability for real 4D interaction certainly seems to make sense for projects where the primary time management issue is the flow of resources in the correct sequence through a defined series of work locations in three dimensions.   What do you think?

Governmentality – the cultural underpinning of governance

Two major governance failures in recent times highlight the importance of organisational culture in delivering a well-governed entity.  Professor Ralf Müller has adapted the term ‘governmentality’ to describe the systems of governance and the willingness of the people within an organisation to support the governance objectives of the organisation’s governing body. When the willingness to be governed breaks down, as these two examples demonstrate, governance failures follow.

Toyota

The Lexus ‘unintended acceleration’ problem from 2009 has cost car manufacturer Toyota a staggering $1.2 billion fine to avoid prosecution for covering up severe safety problems and continuing to make cars with parts the FBI said Toyota “knew were deadly”, in addition to numerous civil actions and the cost of reputational damage.  The saga has been described as a classic case of a corporate culture that favoured the seemingly easy way out instead of paying the cost and doing the right thing.  But the actions of the people who magnified the problem by attempting to cover up the issues fundamentally contradict the ‘Toyota Way’ that has guided Toyota since 2001. The Toyota Way has two core principles: respect for people and continuous improvement (kaizen).

Respect for people puts ‘people before profits’, and this is not an idle slogan.  Following an Australian Government decision in 2014, all motor vehicle manufacturing in Australia will cease by 2018 (this affects General Motors Holden, Ford and Toyota). In February 2014 Toyota president Akio Toyoda personally came to Australia to tell his workers of the closure, and Toyota’s commitment to its staff through training and other activities has maintained their commitment at our local Altona plant, with everyone working to make the “last car the best global car!”

The difference between the “people first equals customer first” attitude demonstrated in the approach to closing the Altona plant, where people are still being released for paid training to upskill for new roles, and the ‘customer last’ approach that dominated the Lexus saga is staggering.  The reaffirmation of the ‘Toyota Way’ may have been driven in part by the Lexus disaster, but this does not explain why quality and customer service were allowed to fail so badly in the company that practically invented modern quality.

Volkswagen

A similar dichotomy is apparent in the Volkswagen diesel engine emissions scandal.  A company renowned for engineering excellence, from a country renowned for engineering excellence, allowed engineering standards to slip to the point where the cars being sold were illegal.  The actual emissions were only part of the problem: Volkswagen engineers had developed a software program dubbed the ‘diesel dupe’ that could detect when the cars were being tested and change the engine performance to improve the results. When the cars were operating under controlled laboratory conditions (which typically involve putting them on a stationary test rig) the device appears to have put the vehicle into a sort of safety mode in which the engine ran below normal power and performance, thereby reducing emissions. Once on the road, the engines switched out of this test mode.

Governance issues

Neither of these issues involved ‘a few bad apples’, the excuse used by most institutions to explain banking and financial scandals. Both required extensive management involvement and cover-ups or acquiescence. A substantial subset of both organisations’ management felt that doing the wrong thing was in the best interests of either themselves or the organisation (or both), at least in the short term. Yet the governing bodies of both organisations would seem to have maintained a commitment to their overall philosophies, the ‘Toyota Way’ and ‘engineering excellence’.  So what caused the governance failures?

Governmentality

One element that seems central to both of these failures was a breakdown in the willingness of managers to comply with the overall governance philosophy of the organisation which in turn caused the governance processes to fail; this is the domain of governmentality. Governance cannot be successfully imposed on a population that does not want to be governed!

Governmentality is a term coined by philosopher Michel Foucault around 1980 and refers to the way in which the state (or another governing body) exercises control over, or governs, the body of its populace. The concept involves a complex series of two-way transactions involving:

  • the way governing bodies try to produce the people best suited to fulfil those governments’ policies;
  • the organised practices (mentalities, rationalities, and techniques) through which people are governed, and
  • the techniques and strategies by which a society is rendered governable.

In the same way as governments rely on most people complying with legislation most of the time, organisational governance mechanisms such as ‘project management offices’ and ‘portfolio management’ cannot function effectively without the cooperation of the people being governed. When governmentality breaks down and people no longer support the governance processes they cease to be effective.

The challenge facing every governing body, in every organisation, has three parts:

  1. Creating an authentic vision and mission for the organisation.
  2. Creating an effective governance system that supports the achievement of the vision.
  3. Creating and maintaining an ethical culture that embraces and supports governmentality.

Effective governance systems can weed out the bad apples and correct errors, but they cannot oversee the actions of every manager all of the time if the majority of people do not wish to follow the governance dictates, or actively work to subvert them.

Developing the ‘right culture’ by employing the right people (and importantly offloading the wrong people) starts at the top.  The governing body needs to ‘walk the talk’, their CEO and senior executives need to model the desired behaviours and ‘doing the right thing’ needs to be encouraged throughout the organisation.

Achieving this requires authenticity and a holistic approach to the way the organisation functions; all of the elements need to work together cohesively. This is the primary responsibility of, and challenge for, the ‘governing body’: in most organisations, the Board of Directors!

If you get the vision, mission and culture right, even major lapses such as the Lexus unintended acceleration problem can be overcome.  Despite the damage this caused, Toyota is now the world’s largest automotive manufacturer, with a market capitalisation nearly double that of Ford and GM combined.  This is also the reason why Objectives, ethics and culture are the top three elements in my model of the ‘Functions of Governance’.

New Planning and controls website


Our new project Planning and Controls website at www.planning-controls.com.au/ is now up and running.  This site currently has two focuses:

  • Helping people study to pass their PMI-SP® examination: www.planning-controls.com.au/pmisp-courses/ backed by a library of helpful PMI-SP exam support resources: www.planning-controls.com.au/support/

  • Providing a single location for planners and schedulers to access our library of project controls papers and other free resources: www.planning-controls.com.au/controls/   Almost all of the papers are available for download and use under the Creative Commons licence.

This site will be progressively updated with a view to becoming a key reference for all planning and control professionals worldwide!  Any suggestions for improvements will be appreciated – we look forward to hearing from you.


Critical confusion – when activities on the critical path don’t compute……

The definition of a schedule ‘critical path’ varies (see Defining the Critical Path), but the essence of all of the valid definitions is that the ‘critical path’ determines the minimum time needed to complete the project.  Either by implication or overtly, the definitions state that delaying an activity on the critical path will delay the completion of the project, and that accelerating an activity on the critical path will (subject to float on other paths[1]) accelerate the completion of the project.

A series of blog posts by Miklos Hajdu, Research Fellow at Budapest University of Technology and Economics, published earlier this year highlights the errors in this assumption and significantly enhances the basic information in my materials on ‘Links, Lags and Ladders’ and our current PMI-SP course notes.  The purpose of this post is to consolidate these concepts into a single publication.

The best definition of a critical path is: ‘Critical Path: sequence of activities that determine the earliest possible completion date for the project or phase’[2].  This definition is always correct.  Furthermore, in simple precedence (PDM) networks that only use finish-to-start links, and in traditional activity-on-arrow (ADM) networks, the general assumption holds true: increasing the duration of an activity on the critical path delays completion of the schedule, and reducing the duration of an activity on the critical path accelerates it.  The problems occur in PDM schedules using more sophisticated link types.  Miklos has defined five constructs built from standard PDM links in which this normal assumption fails. These constructs, together with the ‘normal critical’ case that behaves as expected, are shown diagrammatically below[3].

Normal Critical

The overall project duration responds as expected to a change in the activity duration.

[Diagram 1: Normal critical]

A one day reduction in the duration of an activity on the critical path will shorten the project duration by one day; a one day increase will lengthen the project duration by one day.

Reverse Critical

The change in the overall project duration is the opposite of any change in the activity duration.

[Diagram 2: Reverse critical]

A one day reduction in the duration of Activity B will lengthen the project duration by one day; a one day increase will reduce the project duration by one day.
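Because the diagrams cannot be reproduced here, the Python sketch below shows one arrangement that produces this behaviour: A linked finish-to-finish into B, and B linked start-to-start into C, with zero lags and contiguous (non-splittable) activities. The durations are illustrative assumptions:

```python
def project_duration(dur_b, dur_a=5, dur_c=5):
    """Forward pass for A -FF-> B -SS-> C (zero lags, contiguous activities)."""
    ef_a = dur_a                 # A runs from day 0
    ef_b = max(dur_b, ef_a)      # FF link: B cannot finish before A finishes
    es_b = ef_b - dur_b          # contiguous activity: start = finish - duration
    es_c = es_b                  # SS link: C cannot start before B starts
    return max(ef_a, ef_b, es_c + dur_c)

for d in (2, 3, 4):
    print(f"B = {d} days -> project = {project_duration(d)} days")
# B = 2 -> 8, B = 3 -> 7, B = 4 -> 6: shortening B lengthens the project.
```

The effect comes from B’s start being pulled later as its duration shrinks (its finish is pinned by the FF link), which in turn pushes C’s start later through the SS link.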

Neutral Critical

Either a one day decrease or a one day increase in the activity duration leaves the project duration unaffected. There are two variants, SS and FF:

[Diagram 3a: Neutral critical, variant 1]

[Diagram 3b: Neutral critical, variant 2]

In both cases it does not matter what change you make to Activity B; there is no change in the overall duration of the project.  This is one of the primary reasons almost every scheduling standard requires a link from a predecessor into the start of every activity and a link from the end of every activity to a successor; a simple automated check of this rule is sketched below.
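A logic check for these ‘dangling’ starts and finishes is easy to automate; the sketch below is a minimal illustration (the first and last activities in a network are expected exceptions):

```python
def dangling_links(activities, links):
    """Flag activities whose start is not driven by any predecessor, or whose
    finish drives no successor. links: (pred, succ, type), type in FS/SS/SF/FF."""
    problems = []
    for act in activities:
        start_driven = any(s == act and t in ("FS", "SS") for _, s, t in links)
        finish_drives = any(p == act and t in ("FS", "FF") for p, _, t in links)
        if not start_driven:
            problems.append(f"{act}: nothing links into its start")
        if not finish_drives:
            problems.append(f"{act}: its finish links to nothing")
    return problems

# An SS-only 'neutral' construct: B's finish dangles, so stretching B is invisible.
print(dangling_links(["A", "B", "C"], [("A", "B", "SS"), ("A", "C", "FS")]))
# A and C sit at the network boundaries; B's dangling finish is the real defect.
```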

Bi-critical Activities

Any change in the duration of Activity B will cause the project duration to increase.

[Diagram 4: Bi-critical]

A one day reduction in the duration of Activity B will lengthen the project duration by one day, and a one day increase will also lengthen the project duration by one day.  Bi-critical activities depend on a balanced ladder in which all of the links and activities are critical in the baseline schedule. Increasing the duration of B pushes the completion of C through the FF link; reducing the duration of B ‘pulls’ the SS link to a later time and therefore delays the start of C.  The same effect will occur if the ladder is unbalanced or there is some float across the whole ladder; it is just less obvious, and may not flow through to a delay, depending on the float values and the extent of the change.
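As a further illustration, the sketch below computes a balanced ladder under assumed parameters: paired SS and FF links (two-day lags) join A to B and B to C, activities are contiguous, and all three durations are five days in the baseline. Any change to B’s duration, in either direction, delays completion:

```python
def ladder_duration(dur_b, dur_a=5, dur_c=5, lag=2):
    """Forward pass for a ladder with paired SS and FF links, contiguous activities."""
    ef_a = dur_a                                # A runs from day 0
    ef_b = max(0 + lag + dur_b, ef_a + lag)     # SS and FF constraints on B
    es_b = ef_b - dur_b
    ef_c = max(es_b + lag + dur_c, ef_b + lag)  # SS and FF constraints on C
    return ef_c

for d in (4, 5, 6):
    print(f"B = {d} days -> project = {ladder_duration(d)} days")
# B = 4 -> 10, B = 5 -> 9, B = 6 -> 10: the balanced baseline is the optimum.
```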

Increasing Normal Decreasing Neutral

An increase in Activity B will delay completion, but a reduction has no effect! There are two variations on this type of construct.

[Diagram 5a: Increasing normal / decreasing neutral, variant 1]

[Diagram 5b: Increasing normal / decreasing neutral, variant 2]

A one day increase in the duration of Activity B will increase the project duration by one day; however, reducing the duration of Activity B has no effect on the project’s duration.

Increasing Neutral Decreasing Reverse

An increase in Activity B has no effect, but a reduction will delay completion! Again, there are two variations on this type of construct.

[Diagram 6a: Increasing neutral / decreasing reverse, variant 1]

[Diagram 6b: Increasing neutral / decreasing reverse, variant 2]

A one day increase in the duration of Activity B has no effect on the project’s duration; however, reducing the duration of Activity B by one day will increase the project duration by one day.

Why does this matter?

The concept that the schedule model should accurately reflect the work of the project, both to support decision making during the course of the work and for the forensic assessment of claims after the project has completed, is central to modern project management.  Apart from the ‘normal critical’ construct, all of the other constructs outlined above will produce wrong information, or allow a claim to be dismissed, based on the nuances of the model rather than the real effect.

Using most contemporary tools, all the planner can do is be aware of the issues and avoid creating the constructs that cause them.  In the medium term, there is a need to revisit the whole function of overlapping activities in a PDM network to allow overlapping and progressive feed to work efficiently.  This problem was solved in some of the old ADM scheduling tools; ICL VME PERT had a sophisticated ‘ladder’ construct[4].  Similar capabilities are available in some modern scheduling tools that can model a ‘continuous precedence relationship’[5] or implement RD-CPM[6].


[1] For more on the effect of ‘float’ see: http://www.mosaicprojects.com.au/PDF/Schedule_Float.pdf

[2] From ISO 21500 Guide to Project Management.

[3] The calculations for these constructs are on Miklos’s blog at: https://www.linkedin.com/in/miklos-hajdu-a1418862

[4] For more on ‘Links, Lags and Ladders’ see: http://www.mosaicprojects.com.au/PDF/Links_Lags_Ladders.pdf

[5] For more on continuous relationships see:  http://www.sciencedirect.com/science/article/pii/S1877705815031811

[6] For more on RD-CPM see: http://www.mosaicprojects.com.au/WhitePapers/WP1035_RD-CPM.pdf

Risk management handbook published

The Risk Management Handbook, edited by Dr David Hillson (the ‘Risk Doctor’), is a practical guide to managing the multiple dimensions of risk in modern projects and business.  We contributed Chapter 10: Stakeholder risk management.

The 23 chapters provide a broad and up-to-date survey of the risk management landscape, with expert guidance on current best practice and cutting-edge insight into new developments within risk management.

For more on the book, see: www.koganpage.com/product/the-risk-management-handbook-9780749478827