Three schedule activities of 10 days duration each need to be completed before their outputs can be integrated.

Activity 1 & 2 both have a 90% probability of achieving the estimated duration of 10 days.

Activity 3 has an 80% probability of achieving the 10 days.

**Scenario 1:**

The three activities are in parallel with no cross dependencies. What is the probability of the integration activity starting on schedule?

**Possible solution #1**

There is a 10% probability of the start being delayed by Activity 1 overrunning.

There is a 10% probability of the start being delayed by Activity 2 overrunning.

There is a 20% probability of the start being delayed by Activity 3 overrunning.

Therefore, in aggregate, there is a 40% probability of the start being delayed, meaning there is a 60% probability of the integration activity starting on time.

**Possible solution #2**

The three activities are in parallel and the start of the integration is dependent on all three activities achieving their target duration. The probability of a ‘fair coin toss’ landing on heads three times in a row is 0.5 x 0.5 x 0.5 = 0.125 (an independent series).

Therefore the probability of the three activities achieving ‘on time’ completion as opposed to ‘late’ completion should be 0.9 x 0.9 x 0.8 = 0.648 or a 64.8% probability of the integration activity starting on time.
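Assuming the three stated probabilities are valid and the activities are independent, the two candidate answers can be checked with a few lines of Python (a sketch, not part of the original question):

```python
# A quick check of the two candidate answers for Scenario 1, assuming the
# three stated probabilities are valid and the activities are independent.
p_on_time = [0.9, 0.9, 0.8]  # P(activity i finishes within 10 days)

# Solution #2: the integration starts on time only if ALL three activities
# finish on time, and independent probabilities multiply.
p_start_on_time = 1.0
for p in p_on_time:
    p_start_on_time *= p
print(round(p_start_on_time, 3))  # 0.648

# Solution #1 adds the delay probabilities (0.1 + 0.1 + 0.2 = 0.4), but that
# double-counts the cases where two or more activities overrun together.
# The exact probability of at least one delay is the complement:
p_any_delay = 1.0 - p_start_on_time
print(round(p_any_delay, 3))  # 0.352, not 0.4
```

The additive figure of 40% overstates the delay risk because the overrun events can overlap; the exact figure is 35.2%.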

Which of these probabilities is correct?

**Scenario #2**

The more usual project scheduling situation, where activities 1, 2 and 3 are joined ‘Finish-to-Start’ in series (an interdependent series). Is there any way of determining the probability of activity 4 starting on time from the information provided, or are range estimates needed to deal with the probability of the activities finishing early as well as late?
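One way to see why the series case needs range estimates is that nothing can be computed until a distribution shape is assumed. The sketch below (every distribution parameter is an assumption, not part of the question) treats each duration as Normal with a spread of 1 day, calibrated so that P(duration ≤ 10) matches the stated confidence, then simulates the 30-day chain:

```python
# Hedged sketch: Scenario 2 cannot be resolved from the confidence numbers
# alone, so a distribution shape must be ASSUMED before anything can be
# computed. Here each duration is assumed Normal with sigma = 1 day.
import random
from statistics import NormalDist

random.seed(42)
SIGMA = 1.0  # assumed spread in days -- NOT given in the question
confidences = [0.9, 0.9, 0.8]
# Choose each mean so that P(Normal(mean, SIGMA) <= 10) equals the confidence.
means = [10.0 - SIGMA * NormalDist().inv_cdf(c) for c in confidences]

trials = 100_000
hits = sum(
    sum(random.gauss(m, SIGMA) for m in means) <= 30.0  # chain fits in 30 days
    for _ in range(trials)
)
p_activity4_on_time = hits / trials
print(round(p_activity4_on_time, 3))
```

Under this particular assumption the chain finishes within 30 days far more often than 64.8% of the time, because early finishes offset overruns. A different assumed shape gives a different number, which is exactly why range estimates are needed.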

**There is a correct answer and an explanation – see the next post**

(it's too long for a comment)


Bollocks!

You’ve set the exercise to fail.

1. The question was set in a PMP exam prep book. I know enough about risk to appreciate that the answer in the book was wrong.

2. I believe Scenario 1 is resolvable based on probabilities, but there are two alternative answers that produce different outcomes and I would like to understand which option is correct (and why).

3. However, I am fairly sure Scenario 2 is impossible to resolve based on probability alone, but I would be interested to have this confirmed by real risk experts.

Not enough information. What are the probability distributions for each activity's probability of completion? It could be a smooth symmetric PDF or a stepwise binary distribution.

In our Monte Carlo simulation this would not be the approach; we'd need to know the aleatory uncertainties for each of these activities.

The words used are “there is a 90% confidence of completing on or before the date.”

So the answer – again from the MCS point of view – is that the question can not be answered from the information given.

Here's the mathematical answer. The probability distributions that produce the confidence number – 90% confidence of completion – can not simply participate in arithmetic operations like multiplication, since they are probability distributions represented by the integral equations of the generating functions for the random numbers used to model the 90% confidence value.
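The point can be illustrated with a small simulation (both distributions below are invented purely for illustration): two very different duration distributions can carry the same "90% confidence of completing within 10 days" and still give quite different results once three such activities are chained in series.

```python
# Two invented distributions that share P(duration <= 10) = 0.9 but behave
# very differently when three such activities are chained in series.
import random
random.seed(1)

def two_point():
    # Day 9 with p = 0.9, day 14 with p = 0.1  ->  P(<= 10) = 0.9
    return 9.0 if random.random() < 0.9 else 14.0

def uniform_wide():
    # Uniform on [5, 5 + 50/9]: P(<= 10) = 5 / (50/9) = 0.9
    return random.uniform(5.0, 5.0 + 50.0 / 9.0)

trials = 100_000
results = {}
for dist in (two_point, uniform_wide):
    on_time = sum(dist() + dist() + dist() <= 30.0 for _ in range(trials))
    results[dist.__name__] = on_time / trials
    print(dist.__name__, round(results[dist.__name__], 3))
```

With the two-point shape, any single overrun blows the 30-day window, so the chain is on time only about 0.9³ ≈ 73% of the time; the wide uniform almost always recovers because early finishes offset late ones. Same confidence number, different series answer.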

Thanks Glen – you've confirmed my thoughts re scenario 2 being impossible to work out without range data and distribution information. But there seems to be a lot of published information on variants of scenario 1 assuming the probabilities have a basis for validity. Most of the material talks about multiplying the probabilities, but this gives a different answer to a simple logical consideration?

Pat

From a purely risk management perspective (without going into the maths and standard formulae which Glen has done) the probability of the integration activity is entirely dependent on the Activities being completed.

If the Activities have not been completed, irrespective of probability, the Integration cannot take place – from a risk management perspective.

Notwithstanding that this is a question based on the hypothetical, I would be advising the project management to look at which aspects of each Activity are causing the negative value and to improve on the management of those risks.

Probability is a thumb-suck. Sometimes accurate, most times not.

For scenario 1 I would expect solution 2 to be correct, as co-dependent probabilities are multiplied, not added. For scenario 2, a simplistic answer might be that the probability of Integration starting is most dependent on the least certain activity (#3), as there is greater certainty of Activities 1 and 2 completing within their advertised durations. This assumes the distribution curves for Activities 1 and 2 have smaller deviations (tall and skinny) compared to Activity 3, which has a wider range of potential values. But that is a simplistic view, and the simplistic answer would say 80% for Integration.

Mervyn,

“Probability is a thumb-suck. Sometimes accurate, most times not.” is simply not allowed in our domain – Federal Contracts subject to EVM. DI-MGMT-81861 mandates a Schedule Risk Analysis. Some agencies mandate that the range estimate be based on “past performance” – Reference Class Forecasting, http://goo.gl/qsm6a. These RCFs are derived from like systems, parametric (calibrated, meaning Cardinal) models, and similar sources.

As well, the probabilistic basis of estimating is based on several other sources. One used many times is the Joint Confidence Level, http://goo.gl/V5i1M

Other domains “may” not have this approach, and may consider probabilistic cost and schedule models a “thumb suck.”

As an aside the question is about an aleatory risk – a risk created by an aleatory uncertainty – the probabilistic duration. The key here is the “possible” durations are generated by an underlying probabilistic generating function – the generator of “all” possible durations. More information about this process cannot be “purchased.” It’s just part of the natural processes associated with the work.

This is not the case for epistemic uncertainties – Event Based Risk. The actual probability of completing “on or before” a date is likely composed of both these uncertainties. This question does not distinguish between them.

Change the scenario without changing the probability factors. Three totally independent projects are needed for a ‘go live’ date. Each project has been modelled in accordance with the best practices you have just outlined. Two of the projects have a P90 date of the 1st March; the third has a P80 date of the 1st March. What is the probability of going live on the 1st of March?

Option 1: 0.9 x 0.9 x 0.8 = 0.648

Option 2: 60% (or 0.6) because there is a 10% + 10% + 20% probability of delay

The difference seems to be if one person tosses a coin 3 times, the probability of getting 3 heads in a row is 0.5 x 0.5 x 0.5 = 0.125

However if 3 people simultaneously toss one coin each what is the chance of any one of the three being a head?
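The two coin questions have different answers, and the gap between them mirrors the gap between the two candidate solutions: the integration start corresponds to the ‘all heads’ event, while the additive 40% figure is (incorrectly) trying to measure the ‘any failure’ event by addition.

```python
# 'All heads' versus 'at least one head' for three independent fair coins.
p_head = 0.5

# One person tossing three heads in a row: every toss must succeed.
p_all_heads = p_head ** 3                      # 0.125

# Three people tossing once each: 'any one being a head' is the
# complement of all three being tails.
p_any_head = 1.0 - (1.0 - p_head) ** 3         # 0.875
print(p_all_heads, p_any_head)
```

Note that naively adding 0.5 + 0.5 + 0.5 would give 1.5, which is not a probability at all – the same flaw, in exaggerated form, as adding the 10% + 10% + 20% delay probabilities.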

Glen

Thank you for your full and frank explanation, it is noted with appreciation. In hindsight I did not relate the question into the context it deserved, I simply identified that the question itself (or the manner in which the data was offered) was flawed, and therefore answered from that angle.

As my comments regarding risk probability mechanism to be a thumbsuck, I retract this comment with humility as I did not seek to offend, either by way of portrayed lack of knowledge or misplaced wit.

Your original comment does address the question in the context that it deserves, as does your explanation above. Nevertheless, you are (attempting) to answer a flawed question with an (industry) relevant answer and it is there that I am finding it difficult to apply pragmatism of understanding.

Pat,

Your numbers are cumulative. These numbers are “created” by the underlying probability density function. We need to know the “shape” of that pdf, before the answer you’re looking for can be addressed.

The duration statistic is not a binomial (H or T) function, as in the example of tossing coins, but a continuous, or possibly a semi-continuous, function.

Thanks everyone – the consensus seems to be ‘stupid question’ with no real answer! I obviously need to be more careful about the books I read…

Mervyn,

Pat’s example is flawed (no offense Pat), because it’s missing the underlying statistical processes that drive the duration variances.

Is that the flaw you are speaking of?

Glen

In the context that you are looking at, Yes, the question is flawed due to lack of (background or cumulative) data – what you refer to as the statistical processes.

As Pat explained that this was a question in a PMP exam, I do not have the subject matter to which it would be referring, which possibly explains why we are all coming from different directions but all heading to the same point.

Pat,

No, it's NOT a bad question. It is a question asked many times in many domains. The issue is that most of those asking the question are unfamiliar with the concepts of probabilistic uncertainty and its impact on aleatory and epistemic risks.

Without those pesky pdf’s it’s all a guess as to the outcomes, given only the probability of the final completion time – the confidence number.

Mervyn,

The approach used for modeling probabilistic uncertainty demands “prior conditions.” These could be past performance, parametric models, “like comparisons,” or similar approaches.

Take a look at the Reference Class Forecasting link in a previous post. There are references there for the broader approach to estimating in a variety of domains, including large construction projects.

Pat and Mervyn,

Here's a conjecture. Is there any project you've ever worked on that was “pure, never been done before” development – a project that would not have some basis for estimating the cost and duration?

It would mean that never before in the history of mankind was there any prior performance that could be used as the starting point for the “basis of estimate.”

Glen

I’ve worked on some pretty unique projects, particularly in Africa, where the general book of rules does not apply. Having said that I must admit that as challenging as those projects were to deliver, and deliver we did, each challenge and each risk was managed according to our current knowledge and management systems – the same which you, Pat and I use today.

Mervyn,

Long ago I was a newly minted physicist. Not a very good one, but good enough to get my first job at a defense contractor working on a radar signal processing system. Radar processes are similar to the signal processes needed for particle detection on an accelerator.

The little joke at this firm, when we were estimating the effort and cost to add features to one of the first missile defense systems (Cobra Dane), was “does this require we invent new physics?”

In one case it did. The Surface Acoustic Wave device was “invented” to fulfill a technical need: down-converting microwave signals to frequencies that could then be sampled by the Analog-to-Digital converters of the era. Today this is all done digitally. But that notion stuck. It would be very rare to need to invent new physics for the vast majority of projects we work on.

That doesn't mean you may not know how to do something. But that is different from it not being “knowable.”

Glen, I can appreciate the subtle difference between not knowing how to do something and the knowledge pertaining to the job being absent. Very rare in today’s global fishbowl.

It turns out Geoff Markley is correct for Scenario 1 and as Glen has pointed out there is no answer for scenario 2 – for an explanation see the next post.