A Geeky Detour: Identifying Robust Portfolios by Using Simulation Optimization

To develop our model portfolios in 2003, we used an approach called simulation optimization, or "SO" for short.

Our goal in employing the simulation optimization methodology was to develop multi-period asset allocation solutions that are robust enough to achieve an investor's objectives under a wide range of possible future asset class return scenarios.

In practice, SO works like this:

We start with two investment objectives: (a) an annual real income target (defined as a percentage of the portfolio's initial value); and (b) a savings target (defined as the size of the ending portfolio relative to the current portfolio).

Next we add an assumption about life expectancy and assumptions about the future average annual return, standard deviation, and correlations for the asset classes under consideration.

These asset class assumptions are based on a combination of historical data (our reference case) and outputs from a forward-looking asset pricing model.

The third step in setting up our SO model is to define the maximum amounts that can be invested in any single asset class. This is a further hedge against allocation mistakes caused by errors in our estimates of future asset class risks and returns. At this point, we are ready to run our simulation optimization model.

Here's how it works. SO starts with a "candidate" asset allocation solution (e.g., 20% domestic bonds, 10% foreign bonds, 40% domestic equities, and 30% international equities). It then uses the asset class inputs to calculate a scenario covering a holding period equal to the life expectancy assumption.

For example, if you had seven asset classes and a holding period of twenty years, the scenario would contain 140 different annual returns. Once the scenario has been defined, the model checks to see how well the candidate asset allocation strategy satisfies our income and savings goals. It then repeats this process many times (we use 10,000 scenarios per asset allocation strategy) to develop a clear picture of the range of outcomes the candidate asset allocation strategy might produce.
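To make the mechanics concrete, here is a minimal sketch in Python of how a single candidate allocation can be scored against income and savings goals across many simulated scenarios. The asset classes, return assumptions, and goal parameters below are illustrative placeholders of our own, not the actual model inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs only (not the actual model's asset classes or estimates):
# four asset classes with assumed mean real returns, standard deviations, and correlations.
mu = np.array([0.02, 0.03, 0.05, 0.055])
sd = np.array([0.05, 0.08, 0.17, 0.19])
corr = np.array([[1.0, 0.5, 0.2, 0.2],
                 [0.5, 1.0, 0.3, 0.3],
                 [0.2, 0.3, 1.0, 0.7],
                 [0.2, 0.3, 0.7, 1.0]])
cov = np.outer(sd, sd) * corr

years, n_scenarios = 20, 10_000   # life expectancy assumption and scenarios per candidate
income_rate = 0.04                # annual real income target, as a % of the initial portfolio
savings_target = 1.0              # ending portfolio must be at least as large as the initial one

def success_rate(weights):
    """Fraction of simulated scenarios in which a candidate allocation meets both goals."""
    weights = np.asarray(weights)
    successes = 0
    for _ in range(n_scenarios):
        wealth, failed = 1.0, False
        for r in rng.multivariate_normal(mu, cov, size=years):
            wealth = wealth * (1.0 + r @ weights) - income_rate   # earn returns, then withdraw income
            if wealth <= 0.0:                                      # portfolio exhausted before the end
                failed = True
                break
        if not failed and wealth >= savings_target:
            successes += 1
    return successes / n_scenarios

candidate = [0.20, 0.10, 0.40, 0.30]   # 20% domestic bonds, 10% foreign bonds, 40%/30% equities
print(f"Estimated probability of meeting both goals: {success_rate(candidate):.1%}")
```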

But the SO approach doesn't stop here. It repeats this process over and over again to test other asset allocation strategies that might do a better job of achieving the goals we have set under a wide range of possible future scenarios.

It is also important to understand how these alternative asset allocation strategies are determined.

Because of the nature of the optimization problem (multiple years of outcomes, with multiple possible asset class combinations and constraints on which ones are permissible), it is impossible to use a linear optimization algorithm to solve it. Moreover, given the large number of possible solutions, a "brute force" approach ("test all of them") is, from a computational perspective, out of the question. What we need instead is a process that intelligently searches the landscape of possible asset allocation solutions for one which is likely (but not guaranteed) to be at least one of the best available (in terms of its probability of achieving the income and savings goals under a wide range of future asset class return scenarios).
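To get a feel for why exhaustive search is impractical, the following back-of-the-envelope calculation (ours, not from the original analysis) counts the candidate allocations on a simple percentage grid, before any constraints are applied.

```python
from math import comb

def n_allocations(n_assets: int, step_pct: int) -> int:
    """Number of ways to split 100% across n_assets in multiples of step_pct
    (i.e., compositions of 100/step_pct into n_assets non-negative parts)."""
    slots = 100 // step_pct
    return comb(slots + n_assets - 1, n_assets - 1)

print(f"{n_allocations(7, 5):,}")   # 230,230 candidates on a 5% grid
print(f"{n_allocations(7, 1):,}")   # ~1.7 billion candidates on a 1% grid
# Each candidate would then need thousands of multi-year scenarios to evaluate it.
```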

Our SO model (which was built in 2003) used a probabilistic search process (technically, scatter/tabu search with a neural network accelerator) to identify different asset allocation strategies to test.
The results of our SO methodology cannot be said to be "optimal" in the same sense that linear optimization produces a single optimal solution. Instead, the goal of our SO approach is to produce a robust solution -- one that, in comparison to other asset allocation strategies, has a relatively higher probability of achieving the specified income and savings goals under a wide range of possible future conditions.
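The actual model used scatter/tabu search with a neural network accelerator, which is well beyond a short example. As a rough illustration of the general idea of probabilistic search (and nothing more), the sketch below randomly perturbs a candidate allocation and keeps changes that improve the estimated success rate; it assumes the success_rate() function and inputs from the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_search(start_weights, n_iter=50):
    """Crude stochastic local search: randomly perturb the current best allocation,
    renormalize, and keep the move if it improves the estimated success rate.
    Reuses success_rate() from the earlier sketch; this is NOT the scatter/tabu
    search used in the actual model."""
    best_w = np.asarray(start_weights, dtype=float)
    best_score = success_rate(best_w)
    for _ in range(n_iter):
        w = np.clip(best_w + rng.normal(0.0, 0.05, size=best_w.shape), 0.0, None)
        if w.sum() == 0.0:
            continue
        w = w / w.sum()                      # candidate weights must total 100%
        score = success_rate(w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

weights, score = stochastic_search([0.20, 0.10, 0.40, 0.30])
print(np.round(weights, 3), f"{score:.1%}")
```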

The advantages of the SO approach are that it handles time, uncertainty, and multiple investor objectives much better than either rules of thumb or single-period "mean/variance" optimization methodologies. However, it is also subject to some limitations. As one should never use a quantitative tool without being clear about what it can and cannot do, let's look at these in more detail.

We'll start with the most important point:
All quantitative approaches to portfolio construction suffer from some shortcomings.

The first major issue that many people don't fully appreciate is that the inputs used in asset allocation processes are themselves only statistical estimates of the "true" values for these variables. Techniques such as resampling (essentially, using Monte Carlo simulation to make these statistical estimates explicit) show that, because of the possibility of estimation error, many portfolios with different asset allocations are statistically indistinguishable from one another in terms of their expected risk and return. Practically, the lesson here is that because of estimation errors, a portfolio should only be rebalanced when its actual asset class weights get significantly out of line with their long term targets (see the Member Section of this site for more details about this).
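To see how large these estimation errors can be, here is a short illustration with assumed numbers (not the site's own inputs): even with several decades of annual data, the confidence interval around an estimated mean return is wide.

```python
import math

# Illustrative assumption: an equity-like asset class with a 17% annual standard
# deviation, and a mean return estimated from 32 years of annual data (e.g., 1971-2002).
sigma, n_years = 0.17, 32

std_error = sigma / math.sqrt(n_years)
print(f"Standard error of the estimated mean return: {std_error:.1%}")        # about 3.0%
print(f"Approximate 95% confidence interval: +/- {1.96 * std_error:.1%}")     # about +/- 5.9%
```

An estimated mean of, say, 5 percent real is therefore hard to distinguish statistically from 2 percent or 8 percent, which is why so many different allocations look equivalent.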

The second issue that affects asset allocation models is the fact that the historical returns for many asset classes are not normally distributed, and have "fatter tails" than would be the case in a normal distribution. Statistically, this means extreme events are more likely to happen than would be the case if the returns were normally distributed. How much more likely? Fortunately, a 19th century Russian mathematician named Pafnuty Chebyshev worked out the worst case. In a normal distribution, the range defined as the mean (average) plus or minus two standard deviations covers about 95 percent of possible outcomes, while plus or minus three standard deviations covers more than 99 percent. Chebyshev showed that if the distribution isn't normal, you could need (at most) about four and a half standard deviations to cover 95 percent of the possible outcomes, and about ten standard deviations to capture 99 percent.
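The arithmetic behind those figures is straightforward: Chebyshev's inequality guarantees that at least 1 - 1/k^2 of any distribution lies within k standard deviations of the mean, regardless of its shape. The short sketch below compares the multiples needed under the normal assumption with Chebyshev's worst case.

```python
import math
from statistics import NormalDist

def chebyshev_k(coverage: float) -> float:
    """Smallest k such that at least `coverage` of ANY distribution lies within
    k standard deviations of the mean (Chebyshev: 1 - 1/k**2 >= coverage)."""
    return math.sqrt(1.0 / (1.0 - coverage))

def normal_k(coverage: float) -> float:
    """k such that `coverage` of a normal distribution lies within k standard deviations."""
    return NormalDist().inv_cdf(0.5 + coverage / 2.0)

for coverage in (0.95, 0.99):
    print(f"{coverage:.0%} coverage: normal needs {normal_k(coverage):.1f} std devs, "
          f"Chebyshev's worst case needs {chebyshev_k(coverage):.1f}")
# 95% coverage: normal needs 2.0 std devs, Chebyshev's worst case needs 4.5
# 99% coverage: normal needs 2.6 std devs, Chebyshev's worst case needs 10.0
```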

Unfortunately, back in 2003 the assumption of normality was practically necessary to make many asset allocation models computationally feasible, and most investors weren't told that, as a result, those models often provide a false sense of confidence about "the worst" outcomes that could occur. This means that there is more risk inherent in high standard deviation asset classes (like equities) than people may realize, and that more conservative asset allocations are probably more effective in the long term (a point we took to heart in the construction of our model portfolios).

The third issue that affects asset allocation models is the fact that the underlying economic processes that generate the return distributions they use as inputs are not themselves stable (or, as they say in statistics, they aren't "stationary"). The evidence in support of this observation is quite strong: for example, standard deviations (also known as volatility) are not stable across time; rather, they tend to cluster in "regimes" of high and low values. The same is true for the correlations of returns between asset classes: there is considerable evidence that correlations tend to increase during bad times, and then move apart again during good times.

Developing new ways to deal with this "non-stationarity" risk has become quite a hot topic in the financial world.
Our model portfolios were based on a so-called "regime switching" model. We assumed that financial markets could be in one of three regimes, which we termed normal times, high inflation, and high uncertainty.

For each regime, we then estimated asset class inputs, including the average real return, standard deviation of return, and correlation with the real returns on other asset classes. To do this, we combined historical data with the outputs from our asset pricing model. Within each regime we assumed Gaussian/normal distributions of asset class returns. However, mixing different distributions across regimes via the regime switching model produced an aggregate distribution of asset class returns that came quite close to the features observed in the historical data (i.e., fat tails, clustered volatility, etc.).
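As an illustration of how normal distributions within regimes can produce fat-tailed, volatility-clustered returns in aggregate, here is a simplified single-asset, two-regime sketch. The regime parameters and switching probabilities are made up for the example; the actual model used three regimes and full asset class covariance matrices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative two-regime, single-asset example (the actual model used three regimes --
# normal times, high inflation, and high uncertainty -- and full covariance matrices).
regimes = {"calm":      {"mu": 0.06,  "sigma": 0.10},
           "turbulent": {"mu": -0.02, "sigma": 0.30}}
stay_prob = {"calm": 0.90, "turbulent": 0.75}   # probability of remaining in the current regime

def simulate(years):
    """Returns are normal within each regime, but fat-tailed and volatility-clustered overall."""
    state, returns = "calm", []
    for _ in range(years):
        p = regimes[state]
        returns.append(rng.normal(p["mu"], p["sigma"]))
        if rng.random() > stay_prob[state]:
            state = "turbulent" if state == "calm" else "calm"
    return np.array(returns)

r = simulate(100_000)
excess_kurtosis = ((r - r.mean())**4).mean() / r.var()**2 - 3.0
print(f"Excess kurtosis: {excess_kurtosis:.2f} (zero for a normal distribution)")
```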


In portfolio construction there are other steps one can take to address the challenges posed by model non-stationarity and parameter estimation errors.

With respect to the latter, the first important point was made by Chopra and Ziemba in their 1993 Journal of Portfolio Management article, "The Effect of Errors in Means, Variances, and Covariances on Optimal Portfolio Choice." Their key finding was that estimation error in the means is roughly ten times as important as estimation error in the variances, which in turn is roughly twice as important as estimation error in the covariances (except where the number of assets is large, in which case the covariance error becomes more important).

There are both heuristic and more quantitative approaches one can take to reduce the impact of these estimation errors.

First, there are a number of simple heuristic approaches. These include giving equal weights to all asset classes, or excluding return estimates altogether and simply optimizing to minimize risk.

Another heuristic approach is to put constraints on the maximum weight that can be given to any asset class. The balance of theoretical argument on the merits of this approach seems to favor the view that in general, it does more good than harm. This view is also reinforced by the finding that resampling analysis typically concludes that many "intuitive" portfolios (which generally include either explicit or implicit asset class constraints) are within the "efficient region" of statistically equivalent portfolios.
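As a concrete example of these heuristics, the sketch below drops return estimates entirely and simply minimizes portfolio variance, subject to a cap on each asset class weight. The covariance matrix and the 40% cap are illustrative assumptions of our own.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative covariance matrix for four asset classes (annual real returns).
sd = np.array([0.05, 0.08, 0.17, 0.19])
corr = np.array([[1.0, 0.5, 0.2, 0.2],
                 [0.5, 1.0, 0.3, 0.3],
                 [0.2, 0.3, 1.0, 0.7],
                 [0.2, 0.3, 0.7, 1.0]])
cov = np.outer(sd, sd) * corr
n = len(sd)
max_weight = 0.40   # heuristic cap on any single asset class

result = minimize(
    fun=lambda w: w @ cov @ w,                                      # minimize variance; no return estimates used
    x0=np.full(n, 1.0 / n),                                         # start from equal weights
    bounds=[(0.0, max_weight)] * n,                                 # long-only, capped weights
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],   # fully invested
    method="SLSQP",
)
print("Minimum-variance weights with a 40% cap:", np.round(result.x, 3))
```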

Resampling is one of three more quantitative approaches to dealing with estimation risk. Its primary benefit seems to be that it results in less rebalancing, due to the statistical equivalence it demonstrates between many efficient portfolios.
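Here is a minimal sketch of the resampling idea under our own illustrative assumptions: bootstrap the historical returns, re-estimate the inputs from each resample, re-optimize, and then look at how much the resulting weights vary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Illustrative "historical" sample: 32 years of annual real returns for four asset classes.
true_mu = np.array([0.02, 0.03, 0.05, 0.055])
sd = np.array([0.05, 0.08, 0.17, 0.19])
corr = np.array([[1.0, 0.5, 0.2, 0.2],
                 [0.5, 1.0, 0.3, 0.3],
                 [0.2, 0.3, 1.0, 0.7],
                 [0.2, 0.3, 0.7, 1.0]])
history = rng.multivariate_normal(true_mu, np.outer(sd, sd) * corr, size=32)

def mean_variance_weights(mu, cov, risk_aversion=4.0):
    """Simple long-only mean/variance optimization: maximize mu'w - (lambda/2) w'Sigma w."""
    n = len(mu)
    res = minimize(
        fun=lambda w: -(w @ mu) + 0.5 * risk_aversion * (w @ cov @ w),
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# Resample the history with replacement, re-estimate the inputs, and re-optimize each time.
weight_sets = []
for _ in range(500):
    sample = history[rng.integers(0, len(history), size=len(history))]
    weight_sets.append(mean_variance_weights(sample.mean(axis=0), np.cov(sample.T)))
weight_sets = np.array(weight_sets)

print("Average resampled weights:      ", np.round(weight_sets.mean(axis=0), 3))
print("Dispersion of resampled weights:", np.round(weight_sets.std(axis=0), 3))
```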

On the other hand, resampling has two key shortcomings. First, portfolios with statistically equivalent risk/return trade-offs can have very different asset weights, which leaves more room for discretion than some (but certainly not all) people might prefer. More important, because all resampled returns are drawn from the same distribution, resampling implicitly assumes that the underlying return generating process is stationary. Unfortunately, a number of research papers (not to mention the existence of clustered volatility in most asset classes) have demonstrated that this is not the case. For example, in "Structural Change and the Predictability of Stock Returns," Rapach and Wohar "find, in the period since World War Two, evidence of structural breaks in seven of the eight predictive models for the S&P 500" that they study. They note that these breaks occur for many reasons, including changes in political conditions (e.g., war), economic conditions (e.g., monetary or tax policy), and financial market conditions (e.g., bubbles). The net result is "significant parameter uncertainty in the use of predictive models."

The second more quantitative approach to dealing with estimation errors has been proposed by Horst, de Roon, and Werker in their paper "Incorporating Estimation Risk in Portfolio Choice." In essence, they propose using a higher-than-actual risk aversion when determining mean/variance efficient portfolios. You can see the relationship of this approach to the heuristic of simply setting a maximum constraint on the allocation to certain asset classes, which is another way of increasing your de facto risk aversion.
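As a rough, simplified illustration of that idea (a two-asset textbook case with made-up numbers, not the authors' actual method): inflating the risk aversion parameter in the standard mean/variance rule directly scales down the allocation to the risky asset, much as an explicit weight cap would.

```python
# Two-asset (risky vs. riskless) mean/variance rule of thumb:
# fraction in the risky asset = expected excess return / (risk aversion * variance).
excess_return, sigma = 0.05, 0.17   # illustrative equity assumptions

def risky_weight(risk_aversion: float) -> float:
    return excess_return / (risk_aversion * sigma**2)

print(f"Nominal risk aversion (lambda = 3):        {risky_weight(3.0):.0%} in equities")
print(f"Inflated for estimation risk (lambda = 6): {risky_weight(6.0):.0%} in equities")
```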

The third quantitative approach is using Bayesian estimators that combine prior beliefs with historical (i.e., base rate) returns to generate posterior beliefs. The challenge here is deciding what prior belief about the distribution of returns one should use. A number of different alternatives have been proposed, including a grand mean (that is, the average of all the sample means for all the asset classes under consideration), the same mean for all asset classes, and the outputs from a theoretically sound asset pricing model (which also introduces potential model error into the estimating process).
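A minimal sketch of the grand-mean variant, with illustrative numbers of our own: shrink each asset class's historical mean toward the average of all the sample means, with the shrinkage weight treated as a judgment call (here set to 0.5).

```python
import numpy as np

# Illustrative historical (sample) mean real returns for four asset classes.
sample_means = np.array([0.02, 0.03, 0.05, 0.055])

grand_mean = sample_means.mean()   # prior belief: all asset classes share one common mean
shrinkage = 0.5                    # weight on the prior -- itself a judgment call

posterior_means = shrinkage * grand_mean + (1.0 - shrinkage) * sample_means
print(np.round(posterior_means, 4))   # each estimate is pulled toward the grand mean
```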

We used this Bayesian approach when developing our asset class assumptions, which we reviewed every two years. We first derived two different sets of future asset class return estimates. The first was based on 1971-2002 historical real returns (except where data for the full period was unavailable, in which case we used the longest series possible -- e.g., our emerging markets equities data only starts in 1988).

The second set was derived from standard forward-looking asset pricing models. The standard deviations and correlations were the same in both cases, and were based on the historical data. Both the historical and model-based approaches have their strengths and weaknesses; combining them should, in theory, produce a better estimate of future returns. An interesting point here is the weight we gave to the estimates from each approach. We chose 0.67 for the historical estimates (because they were derived from quite a long data series relative to our 20 year investment horizon) and 0.33 for the model-based estimates. We noted, however, that reasonable people can and do disagree on such matters, so we presented both sets of estimates so people could do their own calculations if they desired. The fact is, science, even with clearly explained theory, can only take you so far; at some point in the asset allocation process, the need for informed judgement is inescapable!
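In code form, the combination is just a weighted average; the numbers below are placeholders for the actual estimates.

```python
import numpy as np

# Placeholder return estimates for illustration only (one entry per asset class).
historical_estimates = np.array([0.020, 0.030, 0.050, 0.055])   # based on 1971-2002 real returns
model_estimates      = np.array([0.015, 0.025, 0.035, 0.040])   # forward-looking asset pricing model

combined = 0.67 * historical_estimates + 0.33 * model_estimates
print(np.round(combined, 4))
```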

In sum, back in 2003 we thought long and hard about asset allocation issues, and were quite sure we were using a theoretically sound approach to address them.


So how did Retired Investor's model portfolios perform between 2003 and 2019?
