
Projecting Equity Prices Using Exponential Trends and Stochastic Simulations...

A RIDDLE, WRAPPED IN A MYSTERY, INSIDE AN ENIGMA

Speaking in a 1939 radio broadcast regarding a British alliance with Russia, then First Lord of the Admiralty Winston Churchill said, "I cannot forecast to you the action of Russia.  It is a riddle, wrapped in a mystery, inside an enigma..."  Present-day Russian actions could be described in much the same way, but that's a whole other blog altogether.  More fitting to the context of this discussion is the riddle of forecasting equity prices.

Let's be clear at the start... equity prices are impossible to predict with precision... people far smarter than I have tried and failed on countless occasions.  Despite this, traders and academics alike continue the pursuit of predicting future prices, often for motivations far less noble than those of our grandparents' generation.

According to the Black-Scholes-Merton options pricing model, equity prices follow a stochastic process known as Geometric Brownian Motion, under which prices are lognormally distributed.  A stochastic process has a 'random probability distribution... but may not be predicted precisely.'  What this means is that while future prices can't be pinpointed, they can be estimated by constructing a probability density function to project likely outcomes.

What I'm going to illustrate here is how this estimated range (I'll refer to it as the 'distribution' from here on out) is constructed and what can be concluded from it.  Or at least, how I have designed my models to do it and what I take from the results.


FIRST, THESE TWO THINGS

In order to build our distribution we are going to need two input variables... 1) an expected future return and 2) an expected future volatility.  There is no precise way of knowing either of these unless you have a crystal ball... and if you do, I suspect you wouldn't waste your time doing the math.


TRENDING vs TRACKING... THE QUEST FOR EXPECTED RETURN

Predictive analytics is the application of data to determine likely future outcomes.  One of its key building blocks is trend analysis... identifying historical movements to project future expectations.

Trending:

Simple trends are easy to identify and require little more than common sense to project future values. Take this one for example:

10, 20, 30, 40, 50...

It is easy to deduce that the next likely outcome for the series is 60.  The pattern is perfectly smooth and finding the subsequent expected value only requires the application of linear logic.

When we think about trends in relation to stock prices, we think in terms of moving averages.  The most commonly cited trends in trading are the 20-, 50-, 100- and 200-day moving averages.  These benchmarks are ubiquitous among professional traders in their attempts to determine where equity prices will trade in the future.

The longer the term of a moving average, the less sensitive it is to random price fluctuations... typically in the form of systematic, beta-driven noise.  Longer moving averages will therefore be smoother than their shorter-term counterparts and will tend to give better price projections in the absence of a fundamental shift in the stock's price.
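The smoothing effect is easy to demonstrate with a quick sketch.  The prices and window lengths below are hypothetical, just to show that a longer window damps day-to-day wiggles far more than a shorter one:

```python
import numpy as np

def sma(prices, window):
    """Simple moving average over a trailing window of fixed length."""
    prices = np.asarray(prices, dtype=float)
    return np.convolve(prices, np.ones(window) / window, mode="valid")

# Hypothetical noisy price series: a small upward drift plus random shocks.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(0.1 + rng.normal(0.0, 1.0, 500))

short = sma(prices, 20)    # reacts quickly, but stays noisy
long_ = sma(prices, 200)   # much smoother, slower to adapt

# Day-to-day changes of the 200-day average are far smaller.
print(np.std(np.diff(short)), np.std(np.diff(long_)))
```

The trade-off previewed in the next section is visible right here: the smoother 200-day line is also the one that will lag the most when the trend genuinely changes.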

But what happens if the price does fundamentally shift?

Tracking:

On August 23, 1999, Enron stock traded for roughly $44 a share.  One year later, shares of the once modest utility company turned energy giant had more than doubled and traded for an all-time high of $90.75.  One year later still, the closing price was less than $37.  By the end of 2001, the market value of a share of stock was just sixty cents.

Ask why WTF?!


Trends change... sometimes, dramatically so.  The ability to recognize and adapt to these changes is crucial. 

When using moving averages to predict future prices, shorter-term trends are quick to incorporate potentially meaningful changes and thus have a higher tracking efficiency... or, put conversely, a lower tracking error.  Longer-term trends are slower to adapt to changes because of their 'arithmetic bulkiness' and will thus have lower tracking efficiency and higher tracking error.

So what's the answer?:

To solve the trending vs. tracking dilemma, I use a predictive method called the exponentially weighted moving average (EWMA).  EWMAs combine the strengths of both approaches to produce an optimized expected future value.

Here's how it works.

EWMAs are basically what they sound like... weighted moving averages.  The most recent price in a time series gets the highest weight, and the weight of each previous price is reduced.  The weights of all the prices in the series sum to 100%, so the sum of the weighted prices is the expected future price.

But then again, it's not entirely that easy.

The rate of decay in the weights is governed by the smoothing constant, the exponential reduction factor applied to each successive weight.  The initial price weight and the smoothing constant are calibrated against individual forecast errors by way of mean-variance optimization to produce the lowest aggregate standard error for the time series.
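The weighting mechanics can be sketched as follows.  The calibration step (optimizing the smoothing constant against forecast errors) is omitted; the `lam = 0.9` value is an illustrative assumption, not a calibrated one:

```python
import numpy as np

def ewma_forecast(prices, lam=0.9):
    """One-step-ahead EWMA forecast.

    The most recent price gets the largest weight; each earlier price's
    weight is reduced by the smoothing constant lam.  The weights are
    renormalized so they sum to 100%, and the weighted sum of prices is
    the expected future value.
    """
    prices = np.asarray(prices, dtype=float)
    weights = lam ** np.arange(len(prices))   # weights[0] -> newest price
    weights /= weights.sum()                  # force weights to sum to 1
    return float(np.dot(weights, prices[::-1]))

# A smooth upward series: the forecast leans toward the latest prices,
# landing above the simple average of 30.
print(ewma_forecast([10, 20, 30, 40, 50]))
```

Notice the forecast sits between the simple mean and the latest price: smaller values of `lam` push it toward the most recent observation (better tracking), larger values spread weight across history (better smoothing).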


EXPECTED VOLATILITY... DEFINING THE RANGE

The expected volatility ultimately determines the size of the distribution... higher volatility expectations will lead to a larger range of simulated values while lower expectations will generate smaller ranges of values.


Historical Volatility:

We've all heard the tired legal disclaimer that 'past performance is no guarantee of future returns.'  This may be true, but what else do we have besides history when making the projections needed to build well-constructed investment portfolios?

My model uses historical volatility to project future volatility in much the same way it uses historical return trends to project expected returns with EWMAs.  The problem here is that volatility does not always follow patterns that lend themselves to linear regression.  When that happens, it isn't possible to make future projections based on historical observations... so in this case, historical performance really is no indication of the future, and the lawyers are finally good for something.  This issue can be addressed using a GARCH (Generalized Autoregressive Conditional Heteroskedasticity) process... but I'm not going to go that deep here.

When my model can't project future volatility, it simply reverts to using the most recent historical volatility observation.   
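The standard calculation for that historical observation is the annualized standard deviation of daily log returns.  A minimal sketch, using a simulated series with a known 20% "true" volatility as a sanity check (the series and its parameters are hypothetical):

```python
import numpy as np

def historical_vol(prices, periods_per_year=252):
    """Annualized historical volatility: the standard deviation of daily
    log returns, scaled by the square root of trading days per year."""
    prices = np.asarray(prices, dtype=float)
    log_returns = np.diff(np.log(prices))
    return float(np.std(log_returns, ddof=1) * np.sqrt(periods_per_year))

# Sanity check on a simulated series whose true annual volatility is 20%.
rng = np.random.default_rng(7)
daily_sigma = 0.20 / np.sqrt(252)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, daily_sigma, 2520)))

print(round(historical_vol(prices), 3))  # should land near 0.20
```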

Implied Volatility:

Implied volatility is taken directly from the market.  Options prices reflect investor expectations for future performance.  Using options prices, we can back out the implied volatility expectations using Black-Scholes-Merton (BSM).
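Because the BSM call price increases monotonically in volatility, "backing out" the implied volatility is a simple root search.  Here's a sketch using the 1-week SPY ATM example discussed below; the near-zero risk-free rate and the 5-trading-day time to expiry are my assumptions (different day-count conventions will shift the result slightly):

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r=0.0, lo=1e-4, hi=5.0):
    """Bisection search for the sigma that reproduces the market price;
    this works because the call price is increasing in sigma."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# 1-week SPY ATM call: spot and strike $210.50, market price $1.18.
# Assumed: 5 trading days to expiry out of 252, r ~ 0.
iv = implied_vol(1.18, S=210.50, K=210.50, T=5 / 252)
print(f"implied vol: {iv:.1%}")  # lands near the ~10% cited below
```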

Here are a couple of examples.

In the first example, we're looking at the at-the-money price of a 1 week SPY call option.  The price of the ETF closed last Friday at exactly $210.50.  The offer price of the $210.50 call option is $1.18.


Here, two volatility numbers are circled.  The circle on the left is the implied volatility assumption plugged into a BSM calculator so that the calculated price of the option reconciles with the market price of $1.18.  The circle on the right is the projected volatility produced by the model.

The two projections are very close... the implied volatility is 9.9% and the projected volatility is 9.8%.  These are both annualized numbers.

In the second example, we'll look at a 1 week Disney (DIS) call option.  DIS closed last Friday at exactly $120.  The offer price of the call option is $1.75.


In this example, there is a significant difference between the two volatility assumptions.  The implied volatility is 25.8% while the historically projected volatility is 18.46%.  The drastic difference is due directly to the fact that DIS will issue an earnings report in the middle of the week, and the implied volatility reflects investor expectations given an imminent pricing catalyst.

Historical volatility clearly has its limitations, but when used in combination with implied volatility we have the advantage of being able to directly compare any differences between future expectations and historical performance.


SIMULATING EQUITY PRICE DISTRIBUTIONS

Now that we have our two input variables, we can plug them into a stochastic price simulation and get a distribution.

Before we do that, however, let's look at a single simulation to understand how the variables interact within the process.

Below is a 1 year illustration of a single simulation on the SPY with a current price of $210.50, an expected volatility of 14% (taken from the June 2016 ATM call) and an expected return of 15% (unfortunately, my model doesn't go a year out, so this is an arbitrary assumption).


Our expected return of 15% is represented by the straight orange line.  The volatility component is represented by the gray line at the bottom of the chart.  Finally, the combination of the two is the simulated price movement over the course of 1 year, represented in blue.
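A single path like the one in the chart can be generated in a few lines.  This is a generic GBM sketch, not my model's exact implementation; the seed and daily step count are arbitrary:

```python
import numpy as np

def gbm_path(S0, mu, sigma, T=1.0, steps=252, seed=None):
    """One simulated Geometric Brownian Motion price path: a constant
    drift term plus normally distributed shocks, applied to log prices
    and then exponentiated back into price space."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    increments = (mu - 0.5 * sigma ** 2) * dt \
        + sigma * np.sqrt(dt) * rng.normal(size=steps)
    return S0 * np.exp(np.concatenate(([0.0], np.cumsum(increments))))

# The illustration above: SPY at $210.50, 15% expected return, 14% volatility.
path = gbm_path(210.50, mu=0.15, sigma=0.14, seed=1)
print(path[0], path[-1])
```

The `mu - 0.5 * sigma ** 2` adjustment is the standard lognormal drift correction: without it, the simulated paths would overshoot the intended expected return.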

My model repeats this simulation 500 times to produce a distribution that will look like this...


This distribution goes back to our 5 day SPY projection that has an expected return of -0.53% and a projected annualized volatility of 9.8%.

The bars of the histogram make up a density function for expected prices.  The highest densities fall in the range of approximately $209 to $211... these values represent the most likely outcomes for the ETF over the next week.  However, there are also outliers representing other possible outcomes... as low as $201 and as high as $219.
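When only the distribution of ending prices matters, each terminal value can be simulated in one step rather than day by day.  A sketch using the 5-day SPY inputs above; treating the -0.53% as the expected move over the week and converting it to an annualized drift is my assumption about how to wire the inputs together:

```python
import numpy as np

def terminal_prices(S0, mu, sigma, T, n_sims=500, seed=42):
    """Simulate GBM terminal prices directly; the intermediate path is
    not needed when only the ending distribution is of interest."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n_sims)
    return S0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)

# The 5-day SPY projection: -0.53% expected return over the week and
# 9.8% annualized volatility, with one trading week taken as 5/252.
T = 5 / 252
mu = np.log(1 - 0.0053) / T   # annualized drift matching the weekly move
prices = terminal_prices(210.50, mu, 0.098, T)

lo, hi = np.percentile(prices, [1, 99])
print(f"middle 98% of outcomes: ${lo:.2f} to ${hi:.2f}")
```

With 500 draws, the tails of this simulated distribution land in the same neighborhood as the $201-$219 outliers described above, though the exact extremes vary run to run.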

The inevitable passage of time will answer this particular riddle of future price movements.
