Category Archives: General Project Management

The Quest for Control.

In our paper A Brief History of Agile we looked at the evolution of the agile approach to software development, which briefly covered the ‘Waterfall Dead-end’.  This post looks in more detail at the factors that made Waterfall appear attractive to a large number of competent people, even though most of them were not involved in developing major software programs. The people actually managing major software developments were always largely in favor of iterative and incremental development approaches, which evolved into Agile some 20+ years later. The focus of this discussion is on the structured ‘Waterfall’ approach to developing software. 

Just for clarification, Waterfall has only ever been applied to software projects; it is not a synonym for plan-driven, predictive projects.

In most hard[1] projects, the design has to be significantly progressed before much work can be done, and the work has to be done in a defined sequence. For example, before concreting a house slab, the precise position of every plumbing fitting and wall has to be known (i.e., the architectural plans finalized), so that a pipe of the right size is positioned exactly under each fitting or centered within each wall line.  The loadings on the slab also have to be known so the engineering design can correctly calculate the required thicknesses of the slab, ground beams, and steel reinforcement. Once all of the necessary design has been done, the various trade works have to be completed in the right sequence, and checked, before the slab can be poured. Once the concrete is set, any change is very expensive!  

This is quite different from software projects, which are a class of soft project particularly amenable to incorporating change, and to being developed incrementally and iteratively to minimize test failures and rework. This iterative and incremental approach was being used by most major software development projects prior to the introduction of Waterfall in 1988, so what was it that made the Waterfall concept seem attractive?  To understand this, we need to go back 30 years before the Agile Manifesto was published, to the US Defense Department of the 1980s:

  • The Cold War was in full swing and fear of the Soviet Union’s perceived lead in technology was paramount. Sputnik 1 had been launched in 1957 and the USA still felt it was playing catch-up.
  • Defense projects were becoming increasingly complex systems of systems and software was a major element in every defense program.
  • Complexity theory was not generally understood; early developments were largely academic theories[2].
  • CPM was dominant and appeared to offer control of workflows.
  • EVM appeared to offer control of management performance.
  • Disciplined cost controls were more than 100 years old.

The three dominant control paradigms – CPM, EVM, and cost control – appeared to offer certainty in most projects, but there seemed to be very little control over the development of software. This was largely true: none of these approaches offers much assistance in managing the creative processes needed to develop a new software program, and the concept of Agile was some 20+ years in the future.  

In this environment, Waterfall appeared to offer the opportunity to bring control to software projects by mimicking other hard engineering projects:

  1. The management paradigm most familiar to the decision makers was hard engineering – you need the design of an aircraft or missile to be close to 100% complete before cutting and riveting metal – and computers were big new pieces of ‘metal’, so why treat them differently?
  2. For the cost of 1 hour’s computer time, you could buy a couple of months of software engineering time – spending time to get the design right before running the computer nominally made cost-sense. 
  3. The ability to work on-line was only just emerging and computer memory was very limited. Most input was batch loaded using punch cards or tape (paper or magnetic). Therefore, the concept of ‘design the code right, load it once, and expect success’ may not have seemed too unrealistic.
[Image: the moon-landing software, written by Margaret Hamilton (c. 1969)]

The problem was that nothing actually worked. Iterative and incremental development means you look for errors in small sections of code and use the corrected, working code as the foundation for the next step.  Waterfall failures were ‘big-bang’, with the problems hidden in thousands of lines of code, often nested one within another. Finding and fixing errors was a major challenge.

To the US DoD’s credit, they ditched the idea of Waterfall remarkably quickly for a large department. Waterfall was formally required by the US DoD for a period of only 6 years, between 1988 and 1994; both before and after that period, iterative and incremental approaches were allowed.  

The reasons why the name Waterfall still drags on are covered in two papers:
A Brief History of Agile
How should the different types of project management be described?

Conclusion

While Waterfall was an unmitigated failure, significantly increasing the time and cost needed to develop major software programs, the US DoD’s decision to implement it is understandable and reasonable in the circumstances.  The software development methods current in the 1980s were largely iterative and incremental, and were failing to meet expectations. The new idea of Waterfall offered a solution, and it was developed by people with little direct experience of major software development (who were therefore not tarnished with the perceived failures). The advice of people with significant software development experience was ignored – they were already perceived to be failing. The result was 6 years of even worse performance before Waterfall was dropped as a mandated method. The mistake was not listening to the managers with direct experience of developing major software systems. But these same people were the ones managing the development of software that was taking much longer and costing far more than allowed in the budget.  

The actual cause of the perceived failures (cost and time overruns) was unrealistic expectations caused by a lack of understanding of complexity leading to overly optimistic estimates. Everyone could see the problems with the current approach to software development and thought Waterfall was a ‘silver bullet’ to bring control to a difficult management challenge.

Unfortunately, the real issues lay elsewhere: underestimating the difficulties involved in software development, and a lack of appreciation of the challenges in managing complex projects. This issue is still not fully resolved; even today, the complexities of managing major software developments are underestimated most of the time.

For more on the history of Agile, see: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile


[1] The definition of hard and soft projects can be found at: https://mosaicprojects.wordpress.com/2023/01/21/hard-v-soft-projects/

[2] For more on complexity see: https://mosaicprojects.com.au/PMKI-ORG-040.php#Overview

Measuring Time Updated

The process of measuring time (and just about everything else) has become more precise and more confusing.  The weird result of the last (2019) update to the international system of measurements is that almost everything from length to weight has time (seconds) as part of its definition.

Our article on Measuring Time has been updated to trace the way time is measured from around 3500 BCE through to the present time.

See the full article at: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Overview

Assessing Delays in Agile & Distributed Projects

Published in the April edition of PM World Journal,  our latest paper, Assessing Delays in Agile & Distributed Projects looks at the challenge of assessing delay and disruption in projects where a CPM schedule is not being used. 

For the last 50 or more years, the internationally recognized approaches to assessing delay and disruption have been based on the forensic assessment of a CPM schedule. However, the methods detailed in RP 29R-03 Forensic Schedule Analysis and the SCL Delay and Disruption Protocol do not offer a viable option for assessing delay and disruption in a wide range of projects including:
– Projects managed using an agile or iterative approach to development.
– Distributed projects where there is no particular requirement for large parts of the work to be completed in any particular order.
– Projects managed using Lean Construction and similar approaches where the forward planning is iterative and short term.

With Agile becoming mainstream, the number of commercial contracts requiring the delivery of a defined output, within specified time and cost parameters, using management methodologies that do not support CPM scheduling is increasing. But the law of contract will not change: if the contractor fails to deliver the defined scope within the required time, the contractor will be in breach of contract and liable to pay damages to the client for that breach.  A number of IT companies have already been successfully sued for failing to comply with their contractual obligations; the risk is real. 

One way to minimize the contractor’s exposure to damages is to claim extensions of time to the contract for delay events, particularly delays caused by the client.  What has been missing in the literature to this point is a set of processes for calculating the delay in the absence of a viable CPM schedule. The good news is the law was quite capable of sorting out these questions before CPM was invented; what is missing in the current situation is an approach that can be used to prove a delay in agile or distributed projects.

This paper defines a set of delay assessment methods that will provide a robust and defensible assessment of delay in this class of project where there simply is no viable CPM schedule. The effect of any intervening event is considered in terms of the delay and disruption caused by the loss of resource efficiency, rather than its effect on a predetermined, arbitrary, sequence of activities.

Download Assessing Delays in Agile & Distributed Projects and supporting materials from: https://mosaicprojects.com.au/PMKI-ITC-020.php#ADD

Understanding the Iron Triangle and Projects

The concept of the iron triangle as a framework for managing projects has long passed its use-by date. Almost everyone, including PMI, recognizes that the challenges faced by every project manager are multi-faceted and multi-dimensional – three elements are not enough. But dig back into the original concepts behind the triangle, and you do uncover a useful framework for defining a project, and for separating project management from general management.

The Iron Triangle

The origin of the project management triangle is attributed to Dr. Martin Barnes. In 1969[1], he developed the triangle as part of his course ‘Time and Money in Contract Control’ and labelled three tensions that needed to be balanced against each other in a project: time, cost, and output (the correct scope at the correct quality). The invention of the iron triangle is discussed in more depth in The Origins of Modern Project Management.

The concept of the triangle was widely accepted, and over the years different versions emerged:

  • The time and cost components remained unchanged, but the ‘output’ became variously scope or quality, and then scope became one of the three components with quality in the middle of the triangle (or vice versa). These changes are semantic:
    • you have not delivered the scope unless it complies with all contractual obligations, including quality requirements
    • achieving quality requires delivering 100% of what is required by the client.

  • The shift from tensions to constraints changes the concept completely. A constraint is something that cannot be changed, or requires permission to change. Tensions vary based on external influences; the three tensions can work together or against each other.  
     
  • The introduction of iron!  It seems likely the ‘iron’ in the iron triangle is based on the concept of the iron cage from Max Weber’s 1905 book The Protestant Ethic and the Spirit of Capitalism (English translation 1930). The iron cage traps individuals in systems based purely on goal efficiency, rational calculation, and control[2].

So, while the concept of the iron triangle and/or the triple constraint has been consigned to history, the original concept of the triangle, as a balance between three elements that are always present in a project, still has value.

Defining a project

Understanding precisely what work is a project, and what is operational (or some other forms of working) is becoming increasingly important as various methodologies such as Lean and Agile span between the operations and project domains. 

There are many parameters that can be used to define or categorize projects, some of which are discussed in the papers at: https://mosaicprojects.com.au/PMKI-ORG-035.php#Class. But these classifications do not really provide a concise definition of a project. And, while there are many different definitions, we consider the best definition of a project to be: 

Project:  A temporary organization established to accomplish an objective, under the leadership of a person (or people) nominated to fulfil the role of project manager[3].

The two key elements being:

  1. The project is temporary: the project delivery team or organization closes, and its people are reallocated (or terminated) when the objective is achieved, and

  2. The project is established to accomplish an objective for a client which may be internal or external to the delivery organization.

The concept of a project being temporary is straightforward (even though it is often ignored). Departments that are set up and funded to maintain and enhance plant, equipment and/or software systems on an ongoing basis are not projects, and neither is most of their work. We discussed this several years back in De-Projectizing IT Maintenance, and the concept seems to be becoming mainstream with the concept of ‘flow’ being introduced to Disciplined Agile.

The value of the project management triangle is in identifying the three mandatory elements needed to describe the client’s objective. There are usually a number of additional elements, but if any of these three is not present, the work is not a project:

  1. The expected outcome is understood. The project is commissioned to deliver a change to the status quo. This may be something new, altered, and/or removed; and the outcome may be tangible, intangible or a combination.  The outcome does not need to be precisely defined to start a project, and may evolve as the project progresses, but the key element is there is an understood objective the project is working to achieve.  
      
  2. There is an anticipated completion time. This may be fixed by a contract completion date, or a softer target with some flexibility.  However, if there are no limitations on when the work is expected to finish (or it is assumed to be ongoing), you are on a journey, not working on a project. 
     
  3. There is an anticipated budget to accomplish the work. Again, the budget may be fixed by a contract, or a softer target with some flexibility.  The budget is the client view of the amount they expect to pay to receive the objective. The actual cost of accomplishing the work may be higher or lower and who benefits or pays depends on the contractual arrangements. 

Conclusion

Understanding the difference between project work and non-project work is important.  The overheads of project management are justified when dealing with the challenges of delivering a project, including scope creep, cost overruns, schedule slippage, and change in general. The mission of the project team is to deliver the agreed objective as efficiently as possible.

The objective of an operational department is to maintain and enhance the organizational assets under its control. This typically needs different approaches, and focuses on a wider range of outcomes.  Many techniques are common to both operational and project work, including various agile methodologies and lean, and many management traits, such as agility, are desirable across the spectrum.  The difference is understanding the overarching management objectives, and tailoring processes to suit.  

For more on project definition and classification see:
https://mosaicprojects.com.au/PMKI-ORG-035.php#Overview


[1] We published and widely circulated this claim after a meeting with Dr. Barnes in 2005 at his home in Cambridge. So far no one has suggested an alternative origin.  

[2] For more on the work of Max Weber, see The Origins of Modern Management: https://mosaicprojects.com.au/PDF_Papers/P050_Origins_of_Modern_Management.pdf

[3] The basis of this definition is described in Project Fact or fiction: https://mosaicprojects.com.au/PDF_Papers/P007_Project_Fact.pdf 

Waterfall is Dead

The PMI 2024 Pulse of the Profession has introduced a framework for categorizing projects based on the management approach being used: Predictive – Hybrid – Agile.  If generally adopted, this framework will at long last kill off the notion of waterfall as a project delivery methodology.

As shown in our historical research The History of Agile, Lean, and Allied Concepts, the idea of waterfall as a project delivery methodology was a mistake, and its value as a software development approach was limited.

The PMI framework has some problems, but the predictive project delivery paradigm is described as focused on schedule, scope, and budget. These projects tend to use a phase-based approach and are plan driven.  This describes most hard projects, and many soft projects that are not using an unconstrained agile approach.

For a detailed review of the PMI 2024 Pulse of the Profession report, and how the classification system works see How should the different types of project management be described?, download from: https://mosaicprojects.com.au/Mag_Articles/AA026_How_should_different_types_of_PM_be_described.pdf

For more on project classification see: https://mosaicprojects.com.au/PMKI-ORG-035.php#Class

White Constructions v PBS Holdings Revisited

White Constructions Pty Ltd v PBS Holdings Pty Ltd [2019] NSWSC 1166, involved a claim for delay and costs arising out of a contract to design a sewerage system for a subdivision and submit it for approval. The alleged breach was the failure to create and submit a sewer design acceptable to the approval authority which had the effect of delaying completion of the subdivision, giving rise to a claim for damages by White.

White and PBS both appointed experts to undertake a schedule analysis. They agreed an ‘as-built’ program of the works but disagreed on almost everything else, including the delay analysis method to use, the correct application of the methods, and the extent of the overall project delay caused by the delays in approving the sewer design.

The Judge found:

[Clause 18]      Plainly, both experts are adept at their art. But both cannot be right. It is not inevitable that one of them is right.
[Note: This approach is consistent with the UK court decision of Akenhead J in Walter Lilly & Company Ltd v Mckay [2012] EWHC 1773 (TCC) at [377], “the court is not compelled to choose only between the rival approaches and analyses of the experts. Ultimately it must be for the court to decide as a matter of fact what delayed the works and for how long”. This precedent has been followed on a number of occasions[1].]

[Clause 22]      The expert reports are complex. To the unschooled, they are impenetrable. It was apparent to me that I would need significant assistance to be put in a position to critically evaluate their opinions and conclusions.

[Clause 25]      Under UCPR r 31.54, the Court obtained the assistance of Mr Ian McIntyre (on whose appointment the parties agreed).

[Clause 137]   The major components of the works were:
       • earthworks,
       • roadworks and kerbing,
       • sewerage,
       • electrical and National Broadband Network (NBN) installation,
       • footpaths, and
       • landscaping.

[Clause 138]   The electrical and NBN installation was contracted to and carried out by an organisation called Transelect. Landscaping was contracted to RK Evans Landscaping Pty Ltd. The as-built program is not in dispute.
[Note: the rest of the work was undertaken by other contractors]

[Clause 184]   White bears the onus of establishing that it suffered loss and the quantum of it.

[Clause 185]   White’s damages are based on delay to the whole project, said to be attributable to the late (underbore) sewer design. This is not the type of subject upon which precise evidence cannot be adduced. [Therefore] It is not a subject which involves the Court having to make an estimation or engage in some degree of guesswork.

[Clause 188]   The descriptions of the methods adopted by Shahady and Senogles respectively are evidently derived from the publication of the United Kingdom Society of Construction Law, the Delay and Disruption Protocol….

[Clause 191]   Mr McIntyre’s opinion, upon which I propose to act, is that for the purpose of any particular case, the fact that a method appears in the Protocol does not give it any standing, and the fact that a method, which is otherwise logical or rational, but does not appear in the Protocol, does not deny it standing.
[Note: this is the same wording as an express statement contained in the Delay and Disruption Protocol]

[Clause 195]   Mr McIntyre’s opinion, upon which I propose to act, is that neither method [used by the parties experts] is appropriate to be adopted in this case.

[Clause 196]   Mr McIntyre’s opinion, upon which I propose to act, is that close consideration and examination of the actual evidence of what was happening on the ground will reveal if the delay in approving the sewerage design actually played a role in delaying the project and, if so, how and by how much. In effect, he advised that the Court should apply the common law common sense approach to causation referred to by the High Court in March v E & MH Stramare Pty Ltd (1991) 171 CLR 506.

[Clause 197]   The Court is concerned with common law notions of causation. The only appropriate method is to determine the matter by paying close attention to the facts, and assessing whether White has proved, on the probabilities, that delay in the underboring solution delayed the project as a whole and, if so, by how much.

[Clause 198]   This requires it to establish that:
• the whole project would have been completed by 15 July 2016,
• the final sewer approval delay delayed sewer works,
• the sewer works delay prevented non-sewer works from otherwise proceeding, that is, that the programme could not reasonably have been varied to accommodate the consequences of late approval, and
• other works could not have been done to fill downtimes so as to save time later.

[Clause 199]   ……… White has failed to discharge this burden.

Summary

The factors required to prove a delay, outlined by the Judge at Clause 198, can be generalised as follows:

  1. The completion date for the project before the delay event occurred has to be known with some certainty.
  2. The delay event has to be shown to cause a delay which flowed through to extend the overall project completion date.
  3. There were not reasonable alternative ways of working that could mitigate the effect of the delay on project completion.

More significantly, none of these steps needs a CPM schedule.  The project status and the effect of the disruption on project completion can be assessed based on its effect on the productivity of key resources. This is discussed in Assessing Delays in Agile & Distributed Projects: https://mosaicprojects.com.au/PDF_Papers/P215_Assessing_Delays_In_Agile_+_Distributed_Projects.pdf   
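The productivity-based approach can be illustrated with a minimal ‘measured mile’ style sketch: compare output per resource-hour in an un-impacted period with the disrupted period, and convert the lost efficiency into time. All numbers below are invented for illustration; they are not drawn from the judgment or the paper.

```python
# Hypothetical sketch of a resource-productivity delay assessment
# (a "measured mile" style comparison). All figures are invented.

def productivity(units_done, hours_worked):
    """Units of output achieved per resource-hour in a period."""
    return units_done / hours_worked

# Un-impacted reference period: 400 units in 500 crew-hours
baseline = productivity(400, 500)     # 0.8 units/hour

# Disrupted period: only 240 units in the same 500 crew-hours
impacted = productivity(240, 500)     # 0.48 units/hour

# Hours the disrupted work would have needed at baseline productivity
expected_hours = 240 / baseline       # 300 hours
lost_hours = 500 - expected_hours     # 200 hours of lost efficiency

# With a crew working 40 hours/week, the disruption equates to
# roughly 5 weeks of delay on this work front.
delay_weeks = lost_hours / 40
print(round(delay_weeks, 1))          # → 5.0
```

The point of the sketch is that the delay is derived from evidence of actual resource performance, not from a predetermined activity sequence.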


[1]     This approach by the courts is discussed in Delivering Expert Evidence is Becoming Harder: https://mosaicprojects.com.au/Mag_Articles/AA028_Delivering_Expert_Evidence.pdf

The Artificial Intelligence Ecosystem

We have posted a number of times discussing aspects of Artificial Intelligence (AI) in project management, but what exactly is AI?  This post looks at the components in the AI ecosystem and briefly outlines what the various terms mean.

𝗔𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲: a range of computer algorithms and functions that enable computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

Automatic Programming: is a technology that enables computers to generate code or write programs with minimal human intervention.

Knowledge Representation: is concerned with representing information about the real world in a way that a computer can understand, so it can utilize this knowledge and behave intelligently.

Expert System: is a computer system emulating the decision-making ability of a human expert. A system typically includes: a knowledge base, an inference engine that applies logical rules to the knowledge base to deduce new information, an explanation facility, a knowledge acquisition facility, and a user interface.

Planning and Scheduling: an automated process that achieves the realization of strategies or action sequences that are complex and must be discovered and optimized in multidimensional space, typically for execution by intelligent agents, autonomous robots, and unmanned vehicles.

Speech Recognition: the ability of devices to respond to spoken commands. Speech recognition enables hands-free control of various devices, provides input to automatic translation, and creates print-ready dictation.

Intelligent Robotics: robots that function as intelligent machines and can be programmed to take actions or make choices based on input from sensors.

Visual Perception: enables machines to derive information from, and understand, images and visual data in a way similar to humans.

Natural Language Processing (NLP): gives computers the ability to understand text and spoken words in much the same way human beings can.

Problem Solving & Search Strategies: Involves the use of algorithms to find solutions to complex problems by exploring possible paths and evaluating the outcomes. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴: is concerned with the development and study of statistical algorithms that allow a machine to be trained so it can learn from the training data and then generalize to unseen data, to perform tasks without explicit instructions. There are three basic machine learning paradigms: supervised learning, unsupervised learning, and reinforcement learning.

• Supervised learning: is when algorithms learn to make decisions based on past known outcomes. The data set containing past known outcomes and other related variables used in the learning process is known as training data.

• Unsupervised learning: is a type of machine learning that learns from data without human supervision. Unlike supervised learning, unsupervised machine learning models are given unlabelled data and allowed to discover patterns and insights without any explicit guidance or instruction.

• Reinforcement Learning (RL): is an interdisciplinary area of machine learning concerned with how an intelligent agent ought to take actions in a dynamic environment to maximize the cumulative reward.

Classification: a process where AI systems are trained to categorize data into predefined classes or labels.

K-Means Clustering: an analytical technique used in data mining and machine learning to group similar objects into a chosen number (k) of clusters, each centered on the mean of its members.
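As an illustration of the idea, here is a deliberately minimal k-means sketch in pure Python (k=2, one-dimensional data, invented numbers); practical work would use a library implementation.

```python
# Minimal k-means sketch: repeatedly assign points to the nearest
# centroid, then move each centroid to the mean of its cluster.
def kmeans_1d(points, c1, c2, iters=10):
    for _ in range(iters):
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(a) / len(a)   # centroid = mean of assigned points
        c2 = sum(b) / len(b)
    return c1, c2

# Two obvious groups in the data; centroids converge near 1.0 and 10.0
centroids = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.4, 9.6], 0.0, 5.0)
print(centroids)
```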

Principal Component Analysis (PCA): is a dimensionality reduction method used to reduce the dimensionality of large data sets, by transforming a large set of variables into a smaller one that still contains most of the information in the large set.

Automatic Reasoning: attempts to provide assurance about what a system or program will do or will never do based on mathematical proof.

Decision Trees: flow charts created by a computer algorithm to make decisions or numeric predictions based on information in a digital data set.
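The tree structure is just nested branching rules. The toy example below hand-codes a tree with invented thresholds to show the shape of what a learning algorithm would induce automatically from data.

```python
# A hand-built decision tree (hypothetical rules and thresholds)
# illustrating the branching structure a tree-learning algorithm
# would derive from training data.
def approve_loan(income, credit_score):
    if credit_score >= 700:       # first split: credit score
        return "approve"
    if income >= 80000:           # second split: income
        return "review"
    return "decline"

print(approve_loan(60000, 720))   # → approve
```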

Random Forest: is an algorithm that combines the output of multiple decision trees to reach a single result. It handles both classification and regression problems.

Ensemble Methods: are techniques that aim at improving the accuracy of results by combining multiple models instead of using a single model. The combined models can increase the accuracy of the results significantly.
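The simplest ensemble rule is a majority vote across models. This toy sketch (with invented labels) shows the combination step only; the individual models are assumed to exist.

```python
# Combine the predictions of several (hypothetical) models by
# majority vote - the simplest ensemble method.
from collections import Counter

def majority_vote(predictions):
    """predictions: a list of labels, one per model."""
    return Counter(predictions).most_common(1)[0][0]

# Three models classify the same item; two of three say "spam".
print(majority_vote(["spam", "ham", "spam"]))   # → spam
```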

Naive Bayes: is a statistical classification technique based on Bayes Theorem. It is one of the simplest supervised learning algorithms.
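A minimal sketch of the idea, using an invented four-document training set: the classifier multiplies a class prior by per-word likelihoods (the "naive" independence assumption), with Laplace smoothing for unseen words.

```python
# Tiny Naive Bayes text classifier on invented toy data.
from math import log

train = [("buy cheap pills", "spam"),
         ("cheap pills now", "spam"),
         ("meeting agenda today", "ham"),
         ("project meeting now", "ham")]

def classify(text):
    vocab = {w for t, _ in train for w in t.split()}
    scores = {}
    for c in ("spam", "ham"):
        docs = [t.split() for t, label in train if label == c]
        words = [w for d in docs for w in d]
        # log prior + sum of smoothed log likelihoods
        score = log(len(docs) / len(train))
        for w in text.split():
            score += log((words.count(w) + 1) / (len(words) + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(classify("cheap pills"))   # → spam
```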

Anomaly Detection: the identification of rare events, items, or observations which are suspicious because they differ significantly from standard behaviours or patterns.
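A very simple statistical detector illustrates the idea: flag values that sit far from the mean of the data (here, more than two standard deviations; the readings are invented).

```python
# Flag readings more than `threshold` standard deviations from the
# mean - a basic statistical anomaly detector on invented data.
from statistics import mean, stdev

def anomalies(values, threshold=2.0):
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > threshold * s]

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 45.0]
print(anomalies(readings))   # → [45.0]
```

Real systems use more robust statistics or learned models, since a single large outlier inflates the mean and standard deviation.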

𝗡𝗲𝘂𝗿𝗮𝗹 𝗡𝗲𝘁𝘄𝗼𝗿𝗸𝘀: are machine learning (ML) models designed to mimic the function and structure of the human brain, helping computers gather insights and meaning from text, data, and documents by being trained to recognize patterns and sequences.

Large Language Model (LLM): is a type of neural network called a transformer program that can recognize and generate text, answer questions, and generate high-quality, contextually appropriate responses in natural language. LLMs are trained on huge sets of data.

Radial Basis Function Networks: are a type of neural network used for function approximation problems. They are distinguished from other neural networks due to their universal approximation and faster learning speed.

Recurrent Neural Networks (RNN): is a type of neural network where the output from the previous step is used as input to the current step. In traditional neural networks, all the inputs and outputs are independent of each other. For example, when it is required to predict the next word of a sentence, the previous words are required and hence there is a need to remember the previous words.

Autoencoders: is a type of neural network used to learn efficient coding of unlabelled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data into code, and a decoding function that recreates the input data from the encoded representation.

Hopfield Networks: recurrent neural networks with a synaptic connection pattern such that there is an underlying Lyapunov function (a measure of stability) for the activity dynamics. Started in any initial state, the state of the system evolves to a final state that is a (local) minimum of the Lyapunov function.

Modular Neural Networks: are characterized by a series of independent neural networks moderated by some intermediary to allow for more complex management processes.

Adaptive Resonance Theory (ART): is a theory developed to address the stability-plasticity dilemma. The terms adaptive and resonance means that it can adapt to new learning (adaptive) without losing previous information (resonance).

Deep Learning:  is a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain. Deep learning models can recognize complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions. The adjective deep refers to the use of multiple layers in the network.

Transformer Model:  is a neural network that learns context and thus meaning by tracking relationships in sequential data by applying an evolving set of mathematical techniques to detect subtle ways even distant data elements in a series influence and depend on each other.

Convolutional Neural Networks (CNN): are a regularized type of feed-forward neural network that learns features by itself via optimization of its filters (kernels).

Long Short-Term Memory Networks (LSTM): are a type of recurrent neural network (RNN) designed to deal with the vanishing gradient problem present in traditional RNNs.

Deep Reinforcement Learning: is a subfield of machine learning that combines reinforcement learning (RL) and deep learning.

Generative Adversarial Networks (GAN): are a class of machine learning frameworks for approaching generative AI. Two neural networks contest with each other in the form of a zero-sum game, where one agent’s gain is another agent’s loss. Given a training set, this technique learns to generate new data with the same statistics as the training set; for example, a GAN trained on photographs can generate new photographs that look at least superficially authentic.

Deep Belief Networks (DBN): are a type of neural network composed of several layers of shallow networks (restricted Boltzmann machines, or RBMs) that can be trained using unsupervised learning. The output of each RBM is used as input to the next layer of the network, until the final layer is reached. The final layer of a DBN is typically a classifier trained using supervised learning. DBNs are effective in applications such as image recognition, speech recognition, and natural language processing.

For more discussion on the use of AI in project management see:
https://mosaicprojects.com.au/PMKI-SCH-033.php#AI-Discussion

A Brief History of Agile

The history of agile software development is not what most people think, and is nothing like the story pushed by most Agile Evangelists.

Our latest publication A Brief History of Agile shows that from the beginning of large system software development the people managing the software engineering understood the need for prototyping and iterative and incremental development. This approach has always been part of the way good software is developed.

The environment in which the authors of the early papers referenced and linked in the article were operating – satellite software and ‘cold-war’ control systems, plus the limitations of the computers they were working on – did require a focus on testing and documentation: it’s too late for a bug-fix once WW3 has started. But this is no different from modern-day control systems development where people’s lives are at stake. Otherwise, nothing much has changed; good software is built incrementally and tested progressively.

The side-track into ‘waterfall’ seems to have been started by people with a focus on requirements management and configuration management, both approached from a document-heavy, bureaucratic perspective. Add the desire of middle management for the illusion of control, and you get waterfall imposed on software developers by people who knew little about the development of large software systems. As predicted in 1970, ‘doing waterfall’ doubles the cost of software development. The fact that waterfall survives in some organisations through to the present day is a product of culture and the desire for control, even if that control is an illusion.

The message from history, echoed in the Agile Manifesto, is that you need to tailor the documentation, discipline, and control processes to meet the requirements of the project. Developing a simple website with easy access to fix issues is very different to developing the control systems for a satellite that is intended to work for years, millions of miles from Earth.

To read the full article and access many of the referenced papers and third-party analysis see: https://mosaicprojects.com.au/PMKI-ZSY-010.php#Agile

Agile’s Hidden Secret!

The two fundamental questions standard agile metrics cannot answer consistently are:

1.  How far ahead or behind schedule are we currently?

2.  When are we expected to finish?

Most of the tools and techniques used to manage Agile projects are good at defining the work (done, in-progress, or not started) and can indicate if the work is ahead of, or behind, a nominated planned rate of production, but there is no direct calculation of the time the work is currently ahead or behind the required production rate, or of what this is likely to mean for the completion of the project. A full discussion of this topic is in Calculating Completion. However, most project sponsors and clients need to know when the project they are funding will actually finish; they have other people who need to make use of the project’s outputs to achieve their objectives. At present, all Agile can offer is an educated assessment based on the project team’s understanding of the work.

Work Performance Management (WPM) has been designed to solve this challenge by providing answers to these questions based on consistent, repeatable, and defensible calculations.

WPM is a simple, practical tool that uses project metrics already being collected for other purposes within the project to assess progress and calculate a predicted completion date, by comparing the amount of work achieved at a point in time with the amount of work that should have been achieved. From this data, WPM calculates the project status and the expected completion date, assuming the rate of progress remains constant.
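As a purely numerical illustration of the kind of calculation described (my own simplified sketch assuming a linear planned production rate; it is not the WPM tool itself):

```python
def project_status(total_work, planned_weeks, elapsed_weeks, work_done):
    """Compare work achieved with work planned at a status date,
    then extrapolate the current production rate to completion."""
    # the week by which the plan expected the achieved work to be done
    earned_weeks = work_done * planned_weeks / total_work
    weeks_behind = elapsed_weeks - earned_weeks   # positive = behind plan
    rate = work_done / elapsed_weeks              # current units per week
    predicted_weeks = total_work / rate           # if the rate holds
    return weeks_behind, predicted_weeks

# 40 of 100 units of work done at week 10 of a 20-week plan:
behind, finish = project_status(100, 20, 10, 40)
# behind == 2.0 (two weeks behind plan), finish == 25.0 weeks
```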

Our latest article, WPM for Agile Projects, identifies the cause of this information gap in Agile project management, explains why current tools cannot accurately predict completion, and demonstrates how WPM can effectively close the gap.
Download WPM for Agile Projects: https://mosaicprojects.com.au/Mag_Articles/AA040_-_WPM_for_Agile_Projects.pdf

For more on the practical use of WPM, free sample files, and access to the tool see: https://mosaicprojects.com.au/PMKI-SCH-041.php  

The Problem with Waterfall

The term ‘waterfall’ is seen in lots of different posts without any clear definition of what the writers of those posts mean by the term. The only constant seems to be that, in each writer’s view, ‘waterfall’ is not Agile and generally represents bad project management practice. In summary, the agile advocates’ view seems to be:

Agile: A well-defined flexible project delivery process, based on the Agile Manifesto, applicable to software development and a wide range of other “soft projects” such as business change. Agile = Good!

Waterfall: Any project delivery process that is not Agile. Waterfall = Bad!

There are many problems with this simplistic viewpoint, starting with the fact that the concept of ‘waterfall’ had a very short life and, with the possible exception of a few very traditional software development organizations, no one uses waterfall for anything.

History of Waterfall.

To the best of my knowledge, the first publication to use the term Waterfall was the 1976 paper Software Requirements: Are They Really a Problem?, by T.E. Bell and T.A. Thayer. This paper misrepresented the 1970 paper Managing the Development of Large Software Systems, by Dr Winston Royce[1]. Royce proposed an iterative approach to the development of large systems, but Bell and Thayer falsely claimed he supported ‘waterfall’[2].

Summary diagram from Royce 1970.

The real start of Waterfall was the publication in 1988 of DOD-STD-2167A by the US Department of Defense, which established uniform requirements for the development of software based on the Waterfall approach[3].   

Extract from DOD-STD-2167A

Problems with the Waterfall approach were quickly identified, and in 1994 MIL-STD-498 was released by the US Department of Defense to correct them. Officially, Waterfall was dead and buried, but many companies had already adopted it, and because waterfall projects were slow and subject to delay, hourly paid staff and contractors had a powerful incentive not to change, despite many better software development processes having been available from the early 1980s.

Other types of projects and project delivery.

Waterfall was a short-lived software development methodology. The vast majority of projects in the construction, engineering, oil & gas, defence, and aerospace industries use project delivery methods based on the approaches described in A Guide to the Project Management Body of Knowledge (PMBOK® Guide)—Sixth Edition, and a range of other standards. These other projects generally have three phases:

  1. A definition phase, undertaken by the client organization, to define the capabilities of the product being developed
  2. A procurement phase, where the client selects a delivery agent for the development of the product
  3. A delivery phase, where the delivery agent builds and delivers the product

The design of the product (ship, building, rocket, etc.) may be undertaken in full or in part during any one of the three phases. A minimum level of design is required to initiate procurement, but for simple buildings and civil engineering projects, it is not unusual for a complete design and specification to be provided by the client.

The procurement phase may be a simple pricing exercise, or a complex, and phased, design process (sometimes even involving the production of working prototypes), with selection being based on the capabilities of the design produced by the successful tenderer.

Then, in many projects, a significant amount of detailed design is still required during the delivery phase, including shop drawings produced by subcontractors and suppliers.

Similarly, the procurement arrangements vary widely. The client may choose to enter into some form of alliance or partnership with the preferred delivery agent based on shared risk and profits, or the client may choose a hard-dollar contract based on a fixed price to deliver a fixed scope, or some other form of contractual arrangement.

The only certainties are that the typical project approaches used for the vast majority of ‘other’ projects bear no resemblance to the waterfall approach, and this ‘other’ classification includes more than two-thirds of the world’s projects by value.

Conclusions

  1. I suggest it is time to follow the US DOD lead from 1994 and bury the concept of ‘waterfall’ – using the name 30 years after it was officially dropped is long enough.
  2. People involved in the ‘Agile’ industry need to wake up to the fact that software development is only one of many types of project. Most of the ‘other’ types of project do not use Agile, and they certainly don’t use waterfall.
  3. Agile and agility are not synonymous – all organisations benefit from a degree of agility, but this has nothing to do with selecting the best project delivery methodology (more on this later).
  4. In the 21st century, Waterfall is not synonymous with over-documentation and/or bad project management. There is plenty of bad project management practice around, but bad management needs to be called out for what it is – 99.999% of the time the bad managers are not trying to use waterfall in their work.

Ditching the concept of waterfall does create a couple of challenges. We all have an understanding of what Agile means as a project delivery process; we need similar, generally accepted, classifications for other types of project delivery – more on this later. Similarly, the bad management practices branded as ‘waterfall’ need to be identified and understood; you cannot improve a bad process until the root cause of the problem is understood.

For more on Agile management see: https://mosaicprojects.com.au/PMKI-ITC-040.php#Process1

Note: The Myth of the ‘Waterfall’ SDLC expands on this post in far greater detail and is highly recommended as a reference: http://www.bawiki.com/wiki/Waterfall.html


[1] Download a copy of the 1970 Royce paper: https://mosaicprojects.com.au/PDF-Gen/Royce_-_Managing_the_development_of_large_software_systems.pdf  See Fig. 10.

[2] Download a copy of the 1976 Bell & Thayer paper: https://mosaicprojects.com.au/PDF-Gen/software_requirements_are_they_really_a_problem.pdf

[3] Download DOD-STD-2167A Defense System Software Development (1988): https://mosaicprojects.com.au/PDF-Gen/DOD-STD-2167A.pdf