Bringing Down the House
Starting in the late 1970s, teams of blackjack players recruited from MIT began traveling down the Eastern Seaboard to the casinos of Atlantic City to test a fundamental statistical principle... conditional probability.
By keeping a running mental count of the ratio of high-value to low-value cards remaining in a blackjack shoe, teams could deploy dynamic betting strategies to take advantage of the changing probabilities of successful outcomes in the game...
It worked.
For the next twenty years, hundreds of Massachusetts numberphiles took on the world's casinos armed with this Bayesian principle, raking in millions of dollars.
Conditional Probability in Finance
Conditional probability is defined as 'the likelihood of an event or outcome occurring, based on the occurrence of a previous event or outcome' and it's the cornerstone of financial data modeling.
Trading algorithms call indicators of changing probabilities 'signals'... IF a certain signal, or set of signals, is triggered, THEN an action is taken... e.g. buy long, sell short, buy-to-close, sell-to-close, reduce risk by half, etc...
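This IF/THEN structure can be sketched as a simple rule function. The signal names, thresholds, and actions below are purely illustrative assumptions, not taken from any production trading system:

```python
# Hypothetical signal-to-action rule for a 2/10 spread signal.
# Thresholds and actions are illustrative only.
def signal_action(spread_2s10s: float, position: str) -> str:
    """Map a 2/10 spread observation plus current position to an action."""
    if spread_2s10s <= 0.0 and position == "long":
        return "reduce risk by half"   # curve flat/inverted while long: de-risk
    elif spread_2s10s <= 0.0 and position == "flat":
        return "sell short"            # curve flat/inverted with no position
    elif spread_2s10s > 0.5 and position == "flat":
        return "buy long"              # comfortably steep curve
    return "hold"

print(signal_action(-0.15, "long"))
```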
One such signal that has been a reliable indicator of the economy's future performance is the yield differential between the 10-year and 2-year Treasury notes... more commonly referred to as the 2/10 spread.
Over the last 40+ years, each time the 2/10 spread has fallen to zero or below, the economy has entered a recession - however brief - within the subsequent two-year period.
Building Dynamic Data Sets
In order to build a meaningful data set to test our hypothesis, we'll include the following measurements:
- Daily observations of the 2/10 Treasury spread going back to 1978
- Since yield levels have changed dramatically over this period, we'll measure each spread relative to the 10-year yield at the time - this produces a (10 Yr - 2 Yr)/10 Yr variable... we'll refer to this as the 'spread variable' going forward
- We'll then compare the spread variable against 2-year forward equity returns in the S&P 500 (using the S&P as a lone proxy for overall market behavior)
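The measurements above can be constructed in a few lines of pandas. The series below are synthetic stand-ins for the actual yield and S&P 500 data (the real inputs would come from a data provider); only the transformations - the spread variable and the 2-year forward return - reflect the method described:

```python
import numpy as np
import pandas as pd

# Synthetic daily series standing in for 10yr/2yr Treasury yields and
# S&P 500 closes; values are illustrative only.
rng = np.random.default_rng(0)
n = 2000
idx = pd.bdate_range("1978-01-02", periods=n)
ten_yr = pd.Series(6 + np.cumsum(rng.normal(0, 0.02, n)), index=idx)
two_yr = ten_yr - (0.5 + np.cumsum(rng.normal(0, 0.01, n)))
spx = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, n))), index=idx)

# Spread variable: (10 Yr - 2 Yr) / 10 Yr, normalizing for the level of yields
spread_var = (ten_yr - two_yr) / ten_yr

# 2-year forward total return on the S&P 500 (~504 business days ahead)
horizon = 504
fwd_2y = spx.shift(-horizon) / spx - 1

# Drop the trailing rows that have no forward return yet
df = pd.DataFrame({"spread_var": spread_var, "fwd_2y_ret": fwd_2y}).dropna()
```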
This produces a data set with the following distribution:
Here are the key points of our sample population of 2-year forward equity returns:
- Total observations: 10,617
- Mean annualized return: 9.61%
- Mean annualized volatility of 2-year forward returns: 16.66%
- Total negative observations: 1,443 or 13.59%
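The summary figures above boil down to two small calculations: annualizing a 2-year cumulative return and counting the negative share. The data here is synthetic, so the printed values will not match the article's figures:

```python
import numpy as np

# Vector of 2-year cumulative forward returns (synthetic, for illustration)
rng = np.random.default_rng(2)
fwd_2y = rng.normal(0.20, 0.24, 10617)

# Annualize a 2-year cumulative return: (1 + r)^(1/2) - 1
ann_ret = (1 + fwd_2y.mean()) ** 0.5 - 1

# Share of observations with a negative 2-year forward return
neg_share = (fwd_2y < 0).mean()

print(round(ann_ret, 4), round(neg_share, 4))
```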
To understand the potential effects of flat yield curves on expected market performance, we'll now create two data subsets:
- The bottom quartile of the sample population data set based on the spread variable
- All 2-year forward returns following daily observations of the 2/10 spread equal to or less than 0
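Both subsets are simple filters on the spread variable. Assuming a frame with `spread_var` and `fwd_2y_ret` columns (synthetic values below), the construction might look like:

```python
import numpy as np
import pandas as pd

# Illustrative frame of spread observations and 2-year forward returns
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "spread_var": rng.uniform(-0.2, 0.6, 5000),
    "fwd_2y_ret": rng.normal(0.19, 0.23, 5000),
})

# Subset 1: bottom quartile of the sample by the spread variable
q1_cut = df["spread_var"].quantile(0.25)
bottom_quartile = df[df["spread_var"] <= q1_cut]

# Subset 2: observations where the 2/10 spread was zero or negative
# (spread_var <= 0 is equivalent whenever the 10-year yield is positive)
inverted = df[df["spread_var"] <= 0]

print(len(bottom_quartile), len(inverted))
```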
Here's how our subsets compare to the population data...
- Both data subsets have significantly different expected returns than the population data
- Both data subsets exhibit higher realized volatility than the population data
- The proportions of negative observations and observations less than -10% are significantly different than the population data
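Whether the proportion of negative observations in a subset differs significantly from the population can be checked with a standard two-proportion z-test. The population counts below come from the sample statistics above; the subset counts are hypothetical placeholders:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-statistic using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Population: 1,443 negative observations out of 10,617 (13.59%)
# Hypothetical subset: 700 negative out of 2,654 (~26%)
z = two_prop_z(700, 2654, 1443, 10617)
print(round(z, 2))
```

A |z| beyond roughly 2 corresponds to significance at the 5% level.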
Based on these observations, we would reject the null hypothesis that flat yield curves have no material effect on future expected market performance.
However, it is worth noting that there is no linear - or even non-linear - relationship between the yield spread variable and future market performance... markets are still a stochastic process, and trying to fit a regression will not yield a meaningful R-squared.
Mean-Variance Optimization Modeling for Asset Allocation
Next, we'll assemble a basket of investable proxies across the major asset classes:
- Equity:
- S&P 500 - SPY
- Nasdaq 100 - QQQ
- Russell 2000 - IWM
- MSCI EAFE - EFA
- Fixed Income:
- Core US Aggregate Bond Index - AGG
- 20+ Year US Treasury Bond - TLT
- US Investment Grade Corporate Bond - LQD
- Real Estate:
- US Real Estate - IYR
- Utilities:
- Utilities Select Sector - XLU
- Commodities:
- Gold - GLD
- Hedge Funds:
- Eurekahedge Asset Weighted Hedge Fund Index - HFI
...and then use mean-variance optimization to build an efficient frontier for the data population...
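A minimal sketch of the mean-variance step, using the classic closed-form solution for the unconstrained (shorting allowed) minimum-variance portfolio at a target return. The three-asset expected returns and covariance matrix below are made-up inputs, not values fitted to the ETF data above:

```python
import numpy as np

# Assumed annualized expected returns and variance-covariance matrix
mu = np.array([0.09, 0.05, 0.08])
cov = np.array([[0.0256, 0.0024, 0.0040],
                [0.0024, 0.0036, 0.0012],
                [0.0040, 0.0012, 0.0144]])

def frontier_weights(target_ret):
    """Unconstrained minimum-variance weights hitting a target return,
    via the two-fund closed form (w'1 = 1, w'mu = target)."""
    inv = np.linalg.inv(cov)
    ones = np.ones_like(mu)
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    d = a * c - b * b
    lam = (c - b * target_ret) / d
    gam = (a * target_ret - b) / d
    return inv @ (lam * ones + gam * mu)

w = frontier_weights(0.07)
print(w, w @ mu, np.sqrt(w @ cov @ w))
```

Sweeping `target_ret` over a grid and plotting volatility against return traces out the efficient frontier.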
Now, we'll examine the effects of a flat yield curve on our basket of investment vehicles (remember this only dates back to 2005) by looking at the bottom quartile of the spread variable:
The Impact of Runaway Gold
The output from the portfolio optimization model may not be what was expected after we determined flat yield curves lead to underperforming markets. What we're seeing, however, is the impact of a single input to the model.
If we look at the risk/return characteristics of gold for the population data set, we see it averages an 8.92% return with 13.19% vol for a risk-adjusted return of 0.676... however, its 2-year forward performance following a flat yield curve environment is so dramatically different that it impacts the entire modeling exercise: since 2005, when the yield curve has been flat, gold has a forward return of 19.36% with a vol of just 5.92% for a risk-adjusted return of 3.269.
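The "risk-adjusted return" here is simply return divided by volatility, i.e. a Sharpe-style ratio with the risk-free rate omitted:

```python
# Risk-adjusted return as used above: mean return divided by volatility
# (a Sharpe-style ratio ignoring the risk-free rate)
def risk_adjusted(ret, vol):
    return ret / vol

print(round(risk_adjusted(0.0892, 0.1319), 3))  # gold, full sample
print(round(risk_adjusted(0.1936, 0.0592), 3))  # gold, flat-curve subset
```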
Therefore, when included in an unconstrained asset allocation model, gold absolutely dominates the asset allocation and leads to a higher efficient frontier curve.
By taking gold out of the mix, we get an efficient frontier more in line with the expectation of depressed market returns...
...of course this does somewhat defeat the point of the exercise but it's also unlikely that any real world multi-asset portfolio would allow for unconstrained asset allocations.
Tactical Asset Allocation
So what would happen if a multi-asset manager running an efficient 5% vol portfolio stuck with their strategic asset allocation?
- Given the expected asset class performance under the population data, the portfolio would return 9.85% with a risk-adjusted return of 1.97x
- While keeping the strategic asset allocation in a flat yield curve environment would generate a seemingly superior return of 10.79%, the portfolio volatility would be significantly higher than budgeted, resulting in a sub-optimal risk-adjusted return of 1.51x
- By shifting to a tactical asset allocation, the manager would expect to generate a return of 11.35% while staying within the original volatility budget of 5% thus producing a superior 2.27x risk-adjusted return
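The tactical re-allocation step - maximize expected return subject to the original 5% volatility budget - can be sketched as a constrained optimization. All inputs below (four assets, a diagonal covariance for simplicity) are assumptions for illustration, not the article's fitted estimates:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed expected returns (e.g. equity, bonds, REITs, gold) and a
# diagonal covariance matrix built from assumed vols
mu = np.array([0.11, 0.04, 0.09, 0.19])
cov = np.diag([0.18, 0.05, 0.20, 0.06]) ** 2
vol_budget = 0.05

def port_vol(w):
    return np.sqrt(w @ cov @ w)

# Maximize w'mu subject to: weights sum to 1, long-only, vol <= budget
res = minimize(
    lambda w: -(w @ mu),
    x0=np.full(4, 0.25),
    bounds=[(0, 1)] * 4,
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1},
        {"type": "ineq", "fun": lambda w: vol_budget - port_vol(w)},
    ],
    method="SLSQP",
)
w = res.x
print(w.round(3), (w @ mu).round(4), port_vol(w).round(4))
```

With these inputs the optimizer naturally concentrates in the low-vol, high-return asset - the same dynamic that let gold dominate the unconstrained frontier above.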
Critiquing the Data Set
When building any model that requires correlation input (in this case, a variance-covariance matrix), we are constrained to the term of the shortest available time series data... which for this analysis, limited me to a starting date of January 2005. This produced the following shortcomings in the analysis output:
- The expected return and volatility of technology stocks fail to incorporate the effects of the 2000 dotcom bubble crash... a longer data set would reduce return and increase volatility, especially in the bottom quartile
- Expected return and volatility of real estate is significantly affected by the housing crash of 2008... a longer data set would increase expected return and decrease volatility for the population data set of real estate
- As mentioned before, the performance of gold is significant in the bottom quartile... and while gold also produced similar performance following the 2019 curve flattening, there were exogenous political factors that contributed to its performance following the financial crisis... a longer time series might produce lower expected return and higher volatility for gold
Having access to more complete time series data would almost certainly change the output of this analysis... however, the sentiment of conditional probability and using dynamic tactical asset allocation remains relevant - especially in the curve flattening environment we're currently in.