At one level, completing on schedule has been a requirement, enforced to a greater or lesser extent, for millennia. In the 1905 English court case Clydebank Engineering and Shipbuilding Co Ltd v Don Jose Ramos Yzquierdo y Castaneda [1905] AC 6, the court was prepared to uphold a ‘liquidated damages’ clause of £500 per week for each vessel not delivered by the contractors within the contract time. Rather more severe penalties could be imposed by Roman Emperors for late delivery.
As governments do today, the Romans outsourced most of their major works to contractors, with both public accountability and a legal framework as key governance constraints. What was significantly different was the consequence of failure! If a project went badly wrong in Roman times, the responsible public official would suffer a major career-limiting event that could affect the prospects of his descendants for generations to come, whilst the retribution applied to the contractor could be even more serious, up to and including death, with consequences for his family for generations. Applying the Roman approach could give a whole new meaning to the ‘pain share’ part of a modern Alliance contract… as well as removing, by execution, many of the worst performing contractors. Rome was not built in a day, but the empire did last for close to 1000 years [Frontinus – A Project Manager from the Roman Empire Era, by Walker & Dart, Project Management Journal, Vol. 42, No. 5, 4-16].
However, whilst there was undoubtedly an imperative for timely completion of contracts (projects in today’s terminology), there seems to be little in the way of predictive processes used by managers to assess the current expected completion date prior to the 1950s.
Having said that, I’m sure that ‘smart people’ would have been assessing the expected completion of any significant ‘body of work’, both during the planning processes and during the course of the work. You simply cannot run a factory profitably if you cannot tell a customer when to expect their order – but predictive assessments and predictive processes are quite different things.
Cost management and accounting has documented roots more than 6000 years old (provided you can read clay tablets), with modern bookkeeping emerging in the 15th century. I have found plenty of evidence of proficient governance and effective cost control on projects in the 17th, 18th and 19th centuries, but so far nothing ‘predictive’ (cost or time) until the 20th century. Prior to the 20th century, ‘cost control’ focused on comparing actual costs against the planned cost (a process still common in many organisations).
Similarly, the idea of probability and making calculations about future outcomes from a risk management perspective can be traced back to the 17th century and the work of Newton, Leibniz, Bernoulli and Pascal. These mathematicians advanced probability to the point where life insurance and annuities could be bought and sold, but again there seems to be little crossover into the realm of predicting project outcomes until the 20th century.
From a time management perspective, William Playfair ‘invented’ graphical statistics (including bar charts) and published a series of different charts in his Commercial and Political Atlas of 1786.
However, whilst Playfair’s charts are detailed and accurate, they only report history; trends and forecasts were not considered (or at least not published).
There is a continuum from these early charts through to the work of Henry Gantt (who is wrongly credited with developing ‘project management’ and ‘bar charts’) more than a century later (for more on this see: The Origins of Bar Charting).
The most sophisticated of Gantt’s charts, described in ‘The Gantt Chart, a working tool of management’ (Wallace Clark, 1923), shows slippage or acceleration against the overall target production for one batch of parts on one machine, but again this work does not extend to predicting the completion date for the work, or for a related set of activities.
From a measurement perspective, the concept of ‘piece rates’ can be traced back to the 16th century (the phrase ‘piece work’ first appears in writing around the year 1549). Piece work requires measurement of performance to calculate a worker’s pay, and record keeping. However, whilst there is ample evidence of people being measured and paid this way for more than 400 years, there is little indication of this information being used to predict outcomes.
Measuring performance was integral to Taylor’s scientific management and the work of Henry Gantt. Chapter 3 of Gantt’s ‘Work, Wages & Profits’ focuses on incentives and bonus payments for production work in machine shops: foremen and workers are paid a bonus if they achieve the target production time for a ‘piece’ of work. The bonuses are calculated after the event, and nothing in ‘Work, Wages & Profits’ refers to any form of predictive scheduling beyond the usual planning needed for machine loadings. Gantt’s work is the most advanced of any of the options discussed to date, but all of his charts are focused on highlighting problems so that management action could be directed to resolving them.
In short, nothing in the documented practice of accounting, ‘bar charting’, piece rates, or Gantt’s motivational bonuses was designed to predict the completion date of the work or its final cost based on performance to date. All of these processes, including Gantt’s, were focused on solid planning and then working to achieve the plan by eliminating the problems that caused slippage, to make sure the work was accomplished in accordance with the plan (reactive management).
Whilst it would have taken very little effort to take the actual, planned or estimated production rate (minutes per unit) and multiply it by the units remaining in the lot (the scope of work) to predict when the production lot was going to be finished, no one seems to have taken this step. Predictive calculation does not seem to have emerged until operational research people started playing around with the concepts during WW2 (the 1940s).
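To show just how little effort this step would have taken, here is a minimal sketch of the calculation described above (the machine, lot size and dates are invented for illustration – nothing suggests anyone actually computed this at the time):

```python
from datetime import datetime, timedelta

def predict_finish(lot_size, units_done, minutes_per_unit, now):
    """Predict when a production lot will finish, assuming the
    observed production rate (minutes per unit) continues unchanged."""
    remaining_units = lot_size - units_done
    remaining_minutes = remaining_units * minutes_per_unit
    return now + timedelta(minutes=remaining_minutes)

# A 500-piece lot, 200 pieces done, averaging 12 minutes per piece,
# assessed at 8:00 am: 300 x 12 = 3600 minutes (2.5 days) remain.
finish = predict_finish(500, 200, 12, datetime(1910, 6, 1, 8, 0))
print(finish)  # 1910-06-03 20:00:00
```

The data needed (lot size, units completed, time per unit) was exactly what piece-rate record keeping already captured; only the forward projection was missing.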
Predictive, time-focused planning emerged at some time in the late 1940s or early 1950s with the development of linear programming in association with OR, which in turn morphed into CPM, PERT, MPM and a number of similar techniques, all at around the same time in the UK, Europe and the USA. Prior to the 1950s the focus was on ‘how far behind’ any element of the work was; the advent of network-based scheduling provided a process for predicting completion. Kelley and Walker’s 1959 paper is very clear on this (see the Annex in A Brief History of Scheduling). From these time management foundations PERT Cost emerged, then C/SCSC, which in turn grew into EVM and more recently Earned Schedule (see: Predicting Completion).
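For readers unfamiliar with the EVM techniques mentioned above, the standard textbook forecasting formulas can be sketched as follows (the figures are purely illustrative, and this is only the simplest of several recognised EAC formulas):

```python
def evm_forecasts(bac, ev, ac, pv, planned_duration):
    """Standard earned value forecasting: CPI = EV/AC gives a cost
    estimate at completion (EAC = BAC/CPI); SPI = EV/PV gives a
    simple duration forecast (planned duration / SPI)."""
    cpi = ev / ac                      # cost performance index
    spi = ev / pv                      # schedule performance index
    eac = bac / cpi                    # forecast cost at completion
    est_duration = planned_duration / spi  # forecast duration
    return eac, est_duration

# Budget 1000, earned 400, spent 500, planned 500, 10-month plan:
# CPI = 0.8 so EAC = 1250; SPI = 0.8 so forecast duration = 12.5 months.
eac, duration = evm_forecasts(bac=1000.0, ev=400.0, ac=500.0,
                              pv=500.0, planned_duration=10.0)
```

It is precisely this use of performance to date to project a final cost and completion date that distinguishes these post-1950s techniques from everything discussed earlier in this post.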
Today, project controls are expected to predict cost and time outcomes for each project, and in both business and government forward predictions of profits, incomes and expenditures are a normal expectation.
The question posed by this blog is: given that astronomers were predicting celestial events from 1000 BC or earlier, and that some of the charts and risk assessments we use today were available from at least the 17th century, if not earlier, were these concepts used by managers to predict work outcomes? Or did all of this emerge in the last 60 years?
More precisely, is there documented evidence of someone using current performance to update a plan and predict cost, time or other outcomes before the 1950s?
The evidence I have found to date, which suggests prediction is very much a development of the last 60 years, is freely available at: http://www.mosaicprojects.com.au/PM-History.html. I would be delighted to be proved wrong!