# Averaging the Power of Portfolios

The interaction between dependent or connected risk and independent risk is interesting and will significantly change the overall probability of success or failure of an endeavour or organisation.

As discussed in my last post on ‘The Flaw of Averages’, using a single average value for an uncertainty is a recipe for disaster. But there is a difference between averaging, connecting and combining uncertainties (or risks).

Where risk events are connected, modelling and appreciating the effect of the risk events interacting with each other is difficult. In ‘The Flaw of Averages’ Sam Shaw uses the simile of wobbling a stepladder to determine how safe the ladder is to climb. You can test the stability of one ladder by giving it a good ‘wobble’. However, if you are trying to determine the stability of a plank between two stepladders, doubling the information from wobbling just one of them is not a lot of help. Far more sophisticated modelling is needed, and even then you cannot be certain the full set of potential interactions is correctly combined in the model. The more complex the interactions between uncertainties, the less accurate the predictive model.

However, when the risks or uncertainties are independent, combining the risks through the creation of a portfolio of uncertainties reduces the overall uncertainty quite dramatically.

## The effect of portfolios

Consider a single, totally unbiased die: any one throw can produce any result, and every value between 1 and 6 has an equal probability of being achieved. The more throws, the more even the results for each possibility, and consequently there is no way of predicting the outcome of any individual throw!

The distribution after 10, 100 and 1000 throws.

As the number of throws increases, the early distortions apparent after 10 throws smooth out, and after 1000 throws the probabilities are almost equal.
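This smoothing is easy to check with a short simulation (a sketch in Python; the seed and throw counts are purely illustrative):

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the illustration is repeatable

# Frequency of each face after 10, 100 and 1000 throws of one die
for n in (10, 100, 1000):
    throws = [random.randint(1, 6) for _ in range(n)]
    counts = Counter(throws)
    freqs = {face: counts.get(face, 0) / n for face in range(1, 7)}
    print(n, {face: round(f, 2) for face, f in sorted(freqs.items())})
```

After 10 throws the frequencies are lumpy; after 1000 each face sits close to the expected 1/6.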

However, combining two dice and totalling the score produces a very different outcome. Whilst it is possible to throw any value between 2 and 12, the probability of achieving a number near the middle of the range is much higher than the probability of achieving a 2 or a 12. The potential range of outcomes starts to approximate a ‘normal distribution curve’ (or bell curve). The reason for this is that there is only one combination of numbers that will produce a 2 or a 12, whereas there are six combinations that can make 7 (1+6, 2+5, 3+4, 4+3, 5+2 and 6+1).

The more dice you add to the ‘throw’, the closer the curve becomes to a ‘normal distribution’ (or bell curve), so called because it is normally what you expect, and get, which is the origin of the name!
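A quick enumeration confirms the counting argument above (a Python sketch; the five-dice case is simply to illustrate the trend):

```python
from collections import Counter
from itertools import product

# Enumerate every combination of two dice and count the ways to make each total
ways = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(ways[2], ways[7], ways[12])  # one way to make 2 or 12, six ways to make 7

# Adding more dice pushes the shape closer to a bell curve:
# with five dice the most likely total sits in the middle of the 5-30 range
ways5 = Counter(sum(dice) for dice in product(range(1, 7), repeat=5))
peak = max(ways5, key=ways5.get)
print(peak, ways5[peak])
```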

The consequence of this phenomenon is that creating a portfolio of projects will generate a normal distribution curve for the outcome of the overall portfolio, which makes portfolio management a more certain undertaking than the management of the individual projects within it. The overall uncertainty is less than the individual uncertainties…

Each project carries its own level of uncertainty and has a probability of succeeding off-set by a probability of failing (see Stakeholder Risk Tolerance), but as more projects are added, the probability of the overall portfolio performing more or less as expected increases, provided each of the uncertainties is independent! This effect is known as the Central Limit Theorem.

One important effect of the Central Limit Theorem is that the size of the contingency needed to achieve a desired level of safety for a portfolio of projects is much smaller than the sum of the contingencies needed to achieve the same level of ‘safety’ in each of the individual projects. Risk management is a project-centric process; contingency management is better handled at the portfolio level. Not only is the overall uncertainty reduced, but the portfolio manager can offset losses in one project against gains in another.
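A minimal Monte Carlo sketch illustrates the point. The figures are entirely hypothetical: ten independent projects, each with an expected cost of 100 and normally distributed uncertainty, with contingency set at the 80th percentile (P80):

```python
import random

random.seed(1)
N_PROJECTS, N_TRIALS = 10, 20_000

def p80(samples):
    """80th percentile: the cost level that covers 80% of simulated outcomes."""
    return sorted(samples)[int(0.8 * len(samples))]

# Each project: expected cost 100 with independent uncertainty (sd 20)
project_costs = [[random.gauss(100, 20) for _ in range(N_TRIALS)]
                 for _ in range(N_PROJECTS)]

# Contingency funded project by project: each project holds its own P80 buffer
per_project = sum(p80(costs) - 100 for costs in project_costs)

# Contingency funded at portfolio level: one buffer over the summed cost
portfolio_totals = [sum(costs[t] for costs in project_costs)
                    for t in range(N_TRIALS)]
portfolio = p80(portfolio_totals) - 100 * N_PROJECTS

print(round(per_project), round(portfolio))  # the portfolio buffer is much smaller
```

Funding one buffer at the portfolio level typically needs only a fraction of the sum of the individual P80 buffers, which is the Central Limit Theorem at work.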

Whilst this theorem is statistically valuable, the nature of most organisations constrains the potential benefit. From a statistical perspective diversity is the key; this is why most conservative investment portfolios are diversified. However, project portfolios tend to be concentrated in the organisation’s area of expertise, which removes some of the randomness needed for the Central Limit Theorem to have its full effect.

It is also important to remember that whilst creating a portfolio will reduce uncertainty, no portfolio can remove all uncertainty.

In addition to the residual risk of failure inherent in every project, there is always the possibility of a ‘black swan’ lurking in the future. Originally conceptualised by philosopher Karl Popper and refined by N. N. Taleb, a ‘black swan’ is a risk event that has never occurred before, would have an extreme impact if it did occur, and is easy to explain after the event but culturally impossible to predict in advance (i.e., the event could be foreseen if someone were asked to think about it, but it is nearly impossible to think the thought for a compelling reason). For more on black swans see our blog post and White Paper.

## The Law of Averages

The Central Limit Theorem is closely aligned to the Law of Averages. The Law of Averages states that if you repeatedly sample the same type of uncertain number, the average of the samples will converge to a single result: the true average of the uncertain number. However, as the ‘flaw of averages’ has demonstrated, this does not mean you can replace every uncertainty with an average value, and some uncertain numbers never converge.
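The convergence is easy to demonstrate with the die from earlier (a sketch; the seed and sample points are illustrative):

```python
import random

random.seed(7)

# Running average of die throws: converges towards the true average of 3.5
total = 0
for n in range(1, 100_001):
    total += random.randint(1, 6)
    if n in (10, 1000, 100_000):
        print(n, round(total / n, 3))
```

After 10 throws the running average can be well off; by 100,000 throws it is pinned very close to 3.5.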

## Summary

Both the Law of Averages and the Central Limit Theorem are useful concepts; they are the statistical equivalent of the adage “don’t put all your eggs in one basket”. When you create a portfolio of projects, the average probability of any one project succeeding or failing remains the same as if the project were excluded from the portfolio, but the risk of the portfolio suffering an overall failure becomes less as the number of projects included in the portfolio increases.

However, unlike physical laws such as gravity, these laws are not immutable – drop an apple within the earth’s gravitational pull and it will fall; create a portfolio and there is always a low probability that the results will not conform to normal expectations!

Certainly the probability of a portfolio of projects ‘failing’ is lower than the average probability of each project failing, but a reduced level of risk still leaves a residual level of risk.

### 4 Responses to Averaging the Power of Portfolios

1. Pat, some care is needed here.

I know the “simple” approach is sought, but many times that simple approach is simplistic and leads to errors that are not revealed until it is too late for the Portfolio Manager.

The Law of Large Numbers (the precursor to the CLT) says that when “n” is large (determining how large is a separate problem), the sample mean X(n) has a probability distribution that is concentrated around μ – the common mean of the independent, identically distributed random variables (i.i.d.r.v.).

This is a working concept for “fair dice,” but the variables in projects are rarely if ever i.i.d.r.v.’s. And since this is rarely the case, as the number of projects in the portfolio grows there are many ways the portfolio as a whole can fail, starting with a “common cause failure.”

The CLT requires that the random variables are “approximately normal” when n is large – again, how large is not yet stated. The examples for “fair dice” can be used to show how the CLT works. The “normalized” distribution Z = (X - μ(X)) / σ(X) can then be used to determine the number of samples needed for the CLT to be useful.

First, the CLT is based on the premise that the random variables are chosen “with replacement.” This is not likely in actual portfolios of projects either. Next you need to determine the sample size compared to the population of all possible samples. In the dice example the population size and the sample size are the same.

Then the sampling size, the sampling error and the “true” proportion of success – that is, whether the sample belongs to the probability distribution – are needed to answer the question (in actual project portfolios with risk): “How can a value for the true proportion be stated for the value of the sample proportion that is being measured?”

This is answered by the Clopper-Pearson estimation for the number of samples needed to establish a confidence level and an error.

For example, if you want the sample proportion to be a specific number, with 90% confidence and a 5% error, here are the numbers of samples required:

- p=0.4 (40% chance your sample is in the population), n=50
- p=0.3 or 0.4, n=80
- p=0.2 or 0.8, n=200
- p=0.1 or 0.9, n=600
- p=0.5 or 0.95, n=1,400

So this means that for each project containing i.i.d.r.v.’s, samples in the range of 1,000 to 1,400 are needed to get a credible view of the CLT for the portfolio. But all those projects must be i.i.d.r.v.’s, which is very sporty in practice.
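For readers who want to explore these bounds, the exact Clopper-Pearson interval can be computed directly from the binomial tail. The sketch below (pure Python; the bisection approach and the n values are my own illustrative choices) finds the 90% interval around an observed proportion of 0.4:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, confidence=0.90):
    """Exact Clopper-Pearson interval for a proportion k/n, found by bisection."""
    alpha = 1 - confidence

    def solve(f):
        lo, hi = 0.0, 1.0      # f is true for small p, false for large p
        for _ in range(40):    # bisect the crossing point to ~1e-12
            mid = (lo + hi) / 2
            if f(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Lower bound: largest p with P(X >= k) still at or below alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p) <= alpha / 2)
    # Upper bound: smallest p that pushes P(X <= k) down to alpha/2
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

# The 90% interval around an observed proportion of 0.4 narrows as n grows
for n in (50, 200, 600):
    lo, hi = clopper_pearson(int(0.4 * n), n)
    print(n, round(lo, 3), round(hi, 3))
```

The interval width is what drives the sample sizes quoted above: tightening the error from a small sample’s interval down to ±5% is what pushes n into the hundreds.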

2. Thanks Glen for the statistical backing to support Lynda’s proposition that project portfolios are never random and therefore the C.L.T. has its limitations.

However, whilst the uniqueness of every project makes precisely assessing its actual variability impossible, and the tendency of project portfolios to concentrate in specific areas of expertise severely limits the ability to accurately calculate aggregate values, the inability to do the calculations does not remove the importance of the underlying principles discussed in the article.

‘The absence of evidence does not mean the evidence of absence.’ Projects will never generate good statistical data; they are too unique (and too few in number). But this should not be an excuse for ignoring relatively sound principles such as the value of aggregating risk to provide more efficient management of the uncertainties (always assuming there are managers capable of doing the managing).

• Pat,

The problem is that without an understanding of the individual probabilistic models, and the means of combining them in a portfolio (Monte Carlo or the Method of Moments), what actual decisions can be made with a known confidence and a known error?

It’s back to the core problem of providing actionable information in units of measure meaningful to the decision makers.

Those important principles have no way to be put into practice. As Yogi Berra says, “In theory there is no difference between theory and practice. In practice there is.”