Mean-Variance Portfolio Optimization with R and Quadratic Programming

The following is a demonstration of how to use R to do quadratic programming for mean-variance portfolio optimization under different constraints, e.g., no leverage, no shorting, max concentration, etc.

Taking a step back, it’s probably helpful to realize the point of all of this. In the 1950s, Harry Markowitz introduced what we now call Modern Portfolio Theory (MPT), a mathematical formulation for diversification. Intuitively, because some stocks zig when others zag, a portfolio of these stocks can earn some notional return at a lower variance than holding the stocks outright. More specifically, given a basket of stocks, there exists a notion of an efficient frontier: for any return you choose, there exists a portfolio with the lowest variance, and for any variance you fix, there exists a portfolio with the greatest return. Any portfolio you choose that is not on this efficient frontier is considered sub-optimal (for a given return, why would you choose a higher-variance portfolio when a lower-variance one exists?).

The question becomes: given a selection of stocks to choose from, how much do we invest in each stock, if at all?

In an investments course I took a while back, we worked out the solution in Excel for the case of a basket of three stocks. Obviously, this solution wasn’t really scalable outside of the N=3 case. When asked about extending N to an arbitrary number, the behind-schedule professor did some hand-waving about matrix math. Looking into this later, I found that there does exist a closed-form equation for determining the holdings for an arbitrary basket of stocks. However, the math gets more complicated with each constraint you decide to tack on (e.g., no leverage).

The happy medium between “portfolio optimizer in Excel for three stocks” and “hardcore matrix math for an arbitrary number of stocks” is to use a quadratic programming solver. Some context is needed to see why this is the case.

Quadratic Programming
According to Wikipedia, quadratic programming attempts to minimize a function of the form \frac{1}{2}x^{T}Qx + c^{T}x subject to one or more constraints of the form Ax \le b (inequality) or Ex = d (equality).

Modern Portfolio Theory
The mathematical formulation of MPT is that for a given risk tolerance q \in [0,\infty), we can find the efficient frontier by minimizing w^{T} \Sigma w - q*R^{T}w.

Where,

  • w is a vector of holding weights such that \sum w_i = 1
  • \Sigma is the covariance matrix of the returns of the assets
  • q \ge 0 is the “risk tolerance”: q = 0 works to minimize portfolio variance and q = \infty works to maximize portfolio return
  • R is the vector of expected returns
  • w^{T} \Sigma w is the variance of portfolio returns
  • R^{T} w is the expected return on the portfolio

My introducing quadratic programming before mean-variance optimization was clearly a setup: look at the equivalence between \frac{1}{2}x^{T}Qx + c^{T}x and w^{T} \Sigma w - q*R^{T}w. Mapping x = w, Q = 2\Sigma, and c = -qR makes the two forms identical.

Quadratic Programming in R
solve.QP, from quadprog, is a good choice for a quadratic programming solver. From the documentation, it minimizes quadratic programming problems of the form -d^{T}b + \frac{1}{2} b^{T}Db subject to the constraints A^{T}b \ge b_0. Pedantically, note the variable mapping of D = 2\Sigma (this is to offset the \frac{1}{2} in the implied quadratic programming setup) and d = qR.

The fun begins when we have to modify A^{T}b \ge b_0 to impose the constraints we’re interested in.

Loading Up the Data
I went to Google Finance and downloaded historical data for all of the sector SPDRs, e.g., XLY, XLP, XLE, XLF. I’ve named the files in the format dat.{SYMBOL}.csv. The R code (sketched below) loads them up, formats them, and ultimately creates a data frame where each column is a symbol and each row represents an observation (close-to-close log return).
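A minimal sketch of that loading step, assuming each dat.{SYMBOL}.csv has a Close column with the oldest rows first and all series aligned on the same dates (the actual file layout may differ):

library(quadprog)

# Assumed layout: one dat.{SYMBOL}.csv per ETF with a Close column,
# oldest rows first, all series aligned on the same dates
symbols <- c("XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY")
dat.ret <- sapply(symbols, function(sym) {
  px <- read.csv(sprintf("dat.%s.csv", sym))$Close
  diff(log(px))  # close-to-close log returns
})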

The data is straightforward enough, with approximately 13 years’ worth:

> dim(dat.ret)
[1] 3399    9
> head(dat.ret, 3)
              XLB         XLE          XLF         XLI          XLK
[1,]  0.010506305  0.02041755  0.014903406 0.017458395  0.023436164
[2,]  0.022546751 -0.00548872  0.006319802 0.013000812 -0.003664126
[3,] -0.008864066 -0.00509339 -0.013105239 0.004987542  0.002749353
              XLP          XLU          XLV          XLY
[1,]  0.023863921 -0.004367553  0.022126545  0.004309507
[2,] -0.001843998  0.018349139  0.006232977  0.018206972
[3,] -0.005552485 -0.005303294 -0.014473165 -0.009255754
> 

Mean-Variance Optimization with Sum of Weights Equal to One
If it wasn’t clear before, we typically fix the q in w^{T} \Sigma w - q*R^{T}w before optimization. By sweeping the value of q, we then generate the efficient frontier. As such, for these examples, we’ll set q = 0.5.

solve.QP’s arguments are:

solve.QP(Dmat, dvec, Amat, bvec, meq=0, factorized=FALSE)

Dmat (twice the covariance matrix) and dvec (the returns scaled by the risk parameter) are generated easily enough:

risk.param <- 0.5

# D = 2 * Sigma (to offset the 1/2 in the solver's objective); d = q * R
Dmat <- 2 * cov(dat.ret)
dvec <- risk.param * colMeans(dat.ret)

Amat and bvec are part of the inequality (or equality) you can impose, i.e., A^{T}b \ge b_0. meq is an integer argument that specifies “how many of the first meq constraints are equality statements instead of inequality statements.” The default for meq is zero.

By construction, you need to think of the constraints in terms of matrix math. E.g., to have all the weights sum up to one, Amat needs to contain a column of ones and bvec needs to contain a single value of one. Additionally, since it’s an equality constraint, meq needs to be one.

In R code:

# Constraints: sum(x_i) = 1
Amat <- matrix(1, nrow=ncol(dat.ret))
bvec <- 1

Having instantiated all the arguments for solve.QP, it’s relatively straightforward to invoke it. Multiple things are returned, e.g., the constrained solution, the unconstrained solution, the number of iterations taken to solve, etc. For our purposes, we’re primarily interested in the solution.

> qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)
> qp$solution
[1] -0.1489193  0.6463653 -1.0117976  0.4107733 -0.4897956  0.2612327 -0.1094819
[8]  0.5496478  0.8919753

Things to note in the solution are that we have negative values (shorting is allowed) and there exists at least one weight whose absolute value is greater than one (leverage is allowed).

Mean-Variance Optimization with Sum of Weights Equal to One and No Shorting
We need to modify Amat and bvec to add the constraint of no shorting. Concretely, we want to append a diagonal matrix of ones to Amat and a vector of zeros to bvec; working out the matrix multiplication, this says that each weight must be greater than or equal to zero.

# Constraints: sum(x_i) = 1 & x_i >= 0
Amat <- cbind(1, diag(ncol(dat.ret)))
bvec <- c(1, rep(0, ncol(dat.ret)))
qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)

> qp$solution
[1] 0.0000000 0.4100454 0.0000000 0.0000000 0.0000000 0.3075880 0.0000000
[8] 0.2823666 0.0000000

Note that with the constraints that all the weights sum up to one and that the weights are positive, we’ve implicitly also constrained the solution to have no leverage.

Mean-Variance Optimization with Sum of Weights Equal to One, No Shorting, and No Heavy Concentration
Looking at the previous solution, note that one of the weights suggests that we put 41% of our portfolio into a single asset. We may not be comfortable with such a heavy allocation, so we might impose the additional constraint that no single asset in our portfolio takes up more than 15%. In matrix terms, alongside our existing constraints, that’s the same as saying -x \ge -0.15, which is equivalent to saying x \le 0.15.

# Constraints: sum(x_i) = 1 & x_i >= 0 & x_i <= 0.15
Amat <- cbind(1, diag(ncol(dat.ret)), -diag(ncol(dat.ret)))
bvec <- c(1, rep(0, ncol(dat.ret)), rep(-0.15, ncol(dat.ret)))
qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)

> qp$solution
[1] 0.1092174 0.1500000 0.0000000 0.1407826 0.0000000 0.1500000 0.1500000
[8] 0.1500000 0.1500000

Turning the Weights into Expected Portfolio Return and Expected Portfolio Volatility
With our weights, we can now calculate the portfolio return as R^{T}w and the portfolio volatility as \sqrt{w^{T} \Sigma w}. Doing this, we might note that the values look “small” and not what you might expect. Keep in mind that our observations are in daily space, and thus our expected return is an expected daily return and our expected volatility is an expected daily volatility. You will need to annualize them, i.e., R^{T}w * 252 and \sqrt{w^{T} \Sigma w * 252}.
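As a sketch, using the weights from the last solve (252 trading days per year is the usual annualization assumption):

# Annualized expected return and volatility for the optimized weights
w <- qp$solution
port.ret <- sum(colMeans(dat.ret) * w) * 252
port.sd  <- sqrt(as.numeric(t(w) %*% cov(dat.ret) %*% w) * 252)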

The following is an example of the values of the weights and portfolio statistics while sweeping the risk parameter and solving the quadratic programming problem with the constraints that the weights sum to one and there’s no shorting.
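A sketch of how ef.w and ef.stat might be generated; the grid of risk parameters matches the row names in the output below, but the helper code itself is illustrative rather than the exact original:

# Sweep the risk parameter; solve with sum-to-one and no-shorting constraints
n    <- ncol(dat.ret)
Amat <- cbind(1, diag(n))
bvec <- c(1, rep(0, n))
qs   <- seq(1, 5, by=0.005)  # assumed range for the risk parameter

ef.w <- t(sapply(qs, function(q) {
  solve.QP(2 * cov(dat.ret), q * colMeans(dat.ret), Amat, bvec, meq=1)$solution
}))
rownames(ef.w) <- qs
colnames(ef.w) <- colnames(dat.ret)

# Annualize each frontier point's return and volatility
ef.stat <- data.frame(ret = as.numeric(ef.w %*% colMeans(dat.ret)) * 252,
                      sd  = sqrt(rowSums((ef.w %*% cov(dat.ret)) * ef.w) * 252),
                      row.names = rownames(ef.w))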

> head(ef.w)
      XLB       XLE XLF XLI XLK XLP XLU       XLV        XLY
1       0 0.7943524   0   0   0   0   0 0.1244543 0.08119329
1.005   0 0.7977194   0   0   0   0   0 0.1210635 0.08121713
1.01    0 0.8010863   0   0   0   0   0 0.1176727 0.08124097
1.015   0 0.8044533   0   0   0   0   0 0.1142819 0.08126480
1.02    0 0.8078203   0   0   0   0   0 0.1108911 0.08128864
1.025   0 0.8111873   0   0   0   0   0 0.1075003 0.08131248
> head(ef.stat)
             ret        sd
1     0.06663665 0.2617945
1.005 0.06679809 0.2624120
1.01  0.06695954 0.2630311
1.015 0.06712098 0.2636519
1.02  0.06728243 0.2642742
1.025 0.06744387 0.2648981
> 

Note that as we increase the risk parameter, we’re working to maximize return at the expense of risk. While obvious, it’s worth stating that we’re looking at the efficient frontier: if you plot ef.stat in its entirety on axes of risk and return, you get the efficient frontier.
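For instance, a minimal plot of the frontier:

# Efficient frontier: annualized volatility vs. annualized return
plot(ef.stat$sd, ef.stat$ret, type="l",
     xlab="Expected volatility (annualized)",
     ylab="Expected return (annualized)")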

Wrap Up
I’ve demonstrated how to use R and the quadprog package to do quadratic programming. It happens that the mean-variance portfolio optimization problem really lends itself to quadratic programming: it’s relatively straightforward to do the variable mapping between the two problems. The only potential gotcha is how to state your desired constraints in the form A^{T}b \ge b_{0}, but several examples of constraints were given, from which you can hopefully extrapolate.

Getting away from the mechanics and talking about the theory, I’ll also offer that there are some serious flaws with the approach demonstrated here if you attempt to implement it for your own trading. Specifically, you will most likely want to create return forecasts and risk forecasts instead of using only historical values. You might also want to impose constraints that induce sparsity in what you actually hold, in order to minimize transaction costs. In saying that your portfolio is mean-variance optimal, there’s the assumption that the returns you’re working with are normally distributed, which is definitely not the case. These and additional considerations will need to be handled before you let this run in “production.”

All that being said, however, Markowitz’s mean-variance optimization is the building block for whatever more robust solution you might end up coming up with. And an understanding of both the theory and implementation of mean-variance optimization is needed before you can progress.

Helpful Links
Lecture on Quadratic Programming and Markowitz Model (R. Vanderbei)
Lecture on Linear Programming and a Modified Markowitz Model (R. Vanderbei)

You are Horrible at Market Timing

You are horrible at market timing. Don’t even attempt it. I probably can’t convince you how horrible you are, but hopefully some empirical data analysis will show how you and the majority of people are no good at market timing.

Recently a friend came to me lamenting a recent stock purchase: the stock had gone down since he bought it, and he wished he had waited to buy it even cheaper. This reminded me of an anecdote from a professor in my econometrics class. I was taking the class in late 2008, which, if you don’t remember, was right in the midst of the major financial collapse, with all the major indices taking a huge nose dive.

Students being students, somebody asked the professor what he thought about the collapse and what he was doing in his own personal account. Keep in mind the tone was about what a “normal” person does, not what a 1000-person hedge fund does. He referred to a past study showing that most recoveries in the equities space didn’t come from steady returns but instead were concentrated in a few, infrequently-spaced days. That is, there was no way for you to catch the recoveries unless you were already invested the day before. If you were sitting on cash, saw the move happen, and attempted to then get into the market, it would have been too late.

I decided to (very) roughly replicate this purported study for my friend.

I first went to Google Finance to download daily prices for SPY. They provide a nice facility for exporting the data to CSV format.

The data is relatively straightforward.

Date,Open,High,Low,Close,Volume
29-May-12,133.16,133.92,132.75,133.70,32727593
25-May-12,132.48,132.85,131.78,132.10,28450945
24-May-12,132.67,132.84,131.42,132.53,31309748
...

I wrote some R code to read in this data and trim out days that didn’t have an open, which left me with observations starting on 2000/01/03, roughly 3,100 data points. Additionally, I created log returns for each day’s open to close, i.e., log(p_{close}) - log(p_{open}).

# Get the data
xx <- read.table(file="~/tmp/spy.data.csv", header=T, sep=",", as.is=T)
names(xx) <- c("date", "open", "high", "low", "close", "vlm")

# Get the date in numeric yyyymmdd format, drop the original date string,
# and move ymd to the first column
xx$ymd <- as.numeric(strftime(as.Date(xx$date, "%d-%b-%y"), "%Y%m%d"))
xx <- xx[, names(xx)[-1]]
xx <- xx[, c(ncol(xx), 1:(ncol(xx)-1))]

# We want to work with complete data
xx <- xx[xx$open != 0,]

# I prefer low dates first rather than high dates
xx <- xx[order(xx$ymd),]
rownames(xx) <- 1:nrow(xx)

# Compute open-to-close log returns; drop infinities from bad price data
xx$o2c <- log(xx$close) - log(xx$open)
xx <- xx[!is.infinite(xx$o2c),]

Getting the top 10 return days is relatively straightforward. Note that, finger in the wind, a lot of the top 10 return days came from the end of 2008, by which point presumably a lot of people had moved their money into cash out of fear.

> head(xx[order(-xx$o2c),], n=10)
          ymd   open   high    low  close       vlm        o2c
635  20020724  78.14  85.12  77.68  84.72    671400 0.08084961
2202 20081013  93.87 101.35  89.95 101.35   2821800 0.07666903
2213 20081028  87.34  94.24  84.53  93.76  81089900 0.07092978
2225 20081113  86.13  91.73  82.09  91.17 753800996 0.05686811
2234 20081126  84.30  89.19  84.24  88.97 370320441 0.05391737
2019 20080123 127.09 134.19 126.84 133.86  53861000 0.05189898
248  20010103 128.31 136.00 127.66 135.00  17523900 0.05082557
2241 20081205  83.65  88.42  82.24  87.93 471947872 0.04989962
2239 20081203  83.40  87.83  83.14  87.32 520103726 0.04593122
2315 20090323  78.74  82.29  78.31  82.22 420247245 0.04324730

Emphasizing this point more: if you didn’t have your cash in equities at the beginning of the day, you would have missed out on the recovery. An additional check we can do is to see what the returns were on the prior day. In other words, was there some in-your-face behavior the prior day that would lead you to believe that huge returns would come the next day?

> max.ndx <- head(order(-xx$o2c), n=10)
> max.ndx <- as.vector(t(cbind(max.ndx, max.ndx-1)))
> xx[max.ndx,]
          ymd   open   high    low  close       vlm          o2c
635  20020724  78.14  85.12  77.68  84.72    671400  0.080849612
634  20020723  82.55  83.24  78.85  79.95  65806500 -0.032002731
2202 20081013  93.87 101.35  89.95 101.35   2821800  0.076669027
2201 20081010  86.76  93.94  83.58  88.50  90590400  0.019856866
2213 20081028  87.34  94.24  84.53  93.76  81089900  0.070929778
2212 20081027  85.97  89.51  83.70  83.95  62953200 -0.023777015
2225 20081113  86.13  91.73  82.09  91.17 753800996  0.056868113
2224 20081112  88.23  90.15  85.12  85.82 454330554 -0.027694962
2234 20081126  84.30  89.19  84.24  88.97 370320441  0.053917369
2233 20081125  87.30  87.51  83.82  85.66 454188290 -0.018964491
2019 20080123 127.09 134.19 126.84 133.86  53861000  0.051898981
2018 20080122 127.21 132.43 126.00 130.72  75350600  0.027218367
248  20010103 128.31 136.00 127.66 135.00  17523900  0.050825568
247  20010102 132.00 132.16 127.56 128.81   8732200 -0.024463472
2241 20081205  83.65  88.42  82.24  87.93 471947872  0.049899616
2240 20081204  86.06  88.05  83.74  85.30 444341542 -0.008870273
2239 20081203  83.40  87.83  83.14  87.32 520103726  0.045931222
2238 20081202  83.47  85.49  82.04  85.27 469785220  0.021335407
2315 20090323  78.74  82.29  78.31  82.22 420247245  0.043247296
2314 20090320  78.76  78.91  76.53  76.71 371165651 -0.026373176

Looking at the data, we can see that there were both positive and negative returns the day before. However, there weren’t any moments of “large return today, I better get in.” My takeaway is that a normal investor saving for retirement should just leave their money in, and hopefully they’re already using some variant of dollar-cost averaging.

For what it’s worth, my professor said he hardly touched his own personal investments, presumably just putting his 401k money in a few indices and forgetting about it. His time was better spent on writing academic papers.