
An Insider's Guide to Optimizers

How optimizers work


As a reader of this blog, you no doubt know — if only vaguely — about optimizers. They’ve become standard tools for managing complex, customized, and tax-managed portfolios. But how, exactly, do they work? If you’ve ever wondered, perhaps we can help. Here it is: an insider’s guide to optimizers. (It’s great stuff. Read this, and you’re practically guaranteed to be the life of the party.1)

To see how they work, think of an old-fashioned balance scale. Suppose IBM has been replaced by HP in a model, and we’re trying to decide, in a particular portfolio, whether to sell the IBM and buy HP. On the left side of the scale, we put all the factors that favor selling (e.g., reduced drift and improved expected returns). On the right side of the scale, we put all the factors that favor not selling (e.g., reduced taxes and transaction costs). If the scale tips to the left, we sell. If the scale tips to the right, we hold.

That’s the essence of it, but, of course, in real life, it’s more complicated.

First, we can’t actually put things on scales. Instead, we give each of the various factors we’re considering a numerical score. Then, for any set of trades and any resulting portfolio, we add up all the scores and see which one has the highest total. In our example, we'd calculate the total score for holding IBM by adding a negative number for drift and a negative number for return expectations. For selling IBM and buying HP, we'd add a positive number for return expectations and a negative number for taxes and trading costs.  We compare the totals for the two portfolios and go with whichever is higher. It looks something like this: 

                         Holdings       Drift   Return expectations   Transaction & tax costs   Total   Decision
Current portfolio        IBM             -1            -1                        0                -2        ✓
Model                    HP               0            +1                       -5                -4

In this example, holding IBM wins because it has the higher score.
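
To see the bookkeeping spelled out, here is a minimal sketch in Python of the comparison above. The scores are the illustrative numbers from the table, not the output of a real risk or tax model, and the labels are ours:

```python
# Illustrative scores from the table above; a real optimizer would derive these
# from a risk model, return forecasts, transaction-cost estimates, and tax lots.
candidates = {
    "hold IBM":         {"drift": -1, "return_expectations": -1, "tax_and_trading_costs": 0},
    "sell IBM, buy HP": {"drift":  0, "return_expectations": +1, "tax_and_trading_costs": -5},
}

# Add up each candidate's scores and keep whichever total is highest.
totals = {name: sum(scores.values()) for name, scores in candidates.items()}
best = max(totals, key=totals.get)

print(totals)               # {'hold IBM': -2, 'sell IBM, buy HP': -4}
print("Decision:", best)    # Decision: hold IBM (the higher total wins)
```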

The second complication is that our objective is not just to make a decision about a single trade (sell IBM, buy HP). We’re making decisions about every position in the portfolio, and these decisions can’t be made independently. Risk and drift are not properties of single positions; they’re properties of the portfolio as a whole. Everything else being equal, an overweighted position in IBM would contribute to drift. But everything else may not be equal. If there is a “never buy HP” restriction that keeps the portfolio underweighted in HP, being overweighted in IBM might lower overall drift (because the IBM is acting as a pretty good substitute for the missing HP position).

So we don’t want to just score selling IBM and buying HP. We want to assign scores (calculated, as before, as the sum of the scores for drift, return expectations, and transaction and tax costs) to every possible combination of permitted buys and sells, and choose the one with the highest score. It might look something like this:

 
                         Holdings       Drift   Return expectations   Transaction & tax costs   Total   Decision
Current portfolio        IBM, F, Cash    -3            -2                        0                -5
Intermediate portfolio   IBM, GM, XOM    -1            +1                       -1                -1        ✓
Model                    HP, GM, XOM      0            +3                       -6                -3


In this example, buying the model is better than keeping the current portfolio, but there’s an even better intermediate choice, where we hold onto IBM to reduce taxes and transaction costs but otherwise buy the model.
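
A brute-force version of that search might look something like the sketch below: generate every permitted combination of sells and buys, score each resulting portfolio as a whole, and keep the best. Here score_portfolio is a hypothetical stand-in for the portfolio-level scoring described above (drift plus return expectations plus transaction and tax costs); it is not how any particular optimizer actually organizes things.

```python
from itertools import chain, combinations

def all_subsets(tickers):
    """Every possible subset of a collection of tickers, including the empty set."""
    tickers = list(tickers)
    return chain.from_iterable(combinations(tickers, k) for k in range(len(tickers) + 1))

def brute_force(current_holdings, permitted_buys, score_portfolio):
    """Score every permitted combination of sells and buys; return the best.

    score_portfolio(holdings, sells, buys) stands in for the portfolio-level
    scoring described above: drift + return expectations + transaction and tax costs.
    """
    best_score, best_trades = float("-inf"), None
    for sells in all_subsets(current_holdings):
        for buys in all_subsets(permitted_buys):
            holdings = (set(current_holdings) - set(sells)) | set(buys)
            score = score_portfolio(holdings, sells, buys)
            if score > best_score:
                best_score, best_trades = score, (sells, buys)
    return best_trades, best_score
```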

Optimizers don’t literally score every possible combination. They start with the existing portfolio and incrementally find small changes that make things better. You can think of it as a hiker trying to find the top of the hill. From wherever you are, just head off in the uphill direction, stopping when you can’t go any higher.2
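
A simplified sketch of that hill-climbing loop is below. The propose_small_changes helper is hypothetical; think of it as generating candidate portfolios that each differ from the current one by a single small trade:

```python
def hill_climb(portfolio, propose_small_changes, score_portfolio):
    """Greedy local search: repeatedly apply whichever small trade improves the
    portfolio's total score the most, and stop when no candidate change helps."""
    current_score = score_portfolio(portfolio)
    while True:
        best_change, best_score = None, current_score
        for candidate in propose_small_changes(portfolio):
            candidate_score = score_portfolio(candidate)
            if candidate_score > best_score:
                best_change, best_score = candidate, candidate_score
        if best_change is None:        # no uphill direction left
            return portfolio           # we're at the top (or at least a top)
        portfolio, current_score = best_change, best_score
```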

As you might expect, there are a lot of details that go into actually building the optimizer logic. These include:

  • Obeying constraints. You can’t consider every portfolio, only those that obey constraints. These can be basic constraints that apply to all accounts, like “you can’t spend more than you have” (no buying on margin) or “you can’t own negative shares” (no short sales). Or they can be account-specific, like “never sell Walmart” and “never buy tobacco”.
  • Dealing with conflicting constraints. Sometimes constraints conflict. For example, it might not be possible to simultaneously obey tax budget and asset class min/max constraints. You need to decide what the optimizer should do. Return an “I can’t do this” error message? Or privilege one of the constraints over the other?
  • Creating a uniform measure of drift. In the above examples, we talked about drift but never described how it’s measured. The standard approach is to use expected “tracking error”, defined as (wait for it) the estimated standard deviation of the distribution of return differences between two portfolios. Tracking error is usually calculated using a risk model (which is its own story, possibly the subject of a future post); a sketch of the calculation appears after this list.
  • Not trading on noisy data. Drift and return estimates are inexact. If you treat them as perfect predictors of the future, you’ll end up trading too much and possibly making overly concentrated bets. You need to take countermeasures to prevent noisy data from causing noisy trades.
  • Loss harvesting. Loss harvesting means selling a position not because you intrinsically want to get rid of it, but to realize a capital loss that can be used to lower your taxes. Typically, you sell the position and then buy it back 31 days later (after the so-called wash-sale period). But what is the optimal price at which to sell a position with an unrealized loss? You want to at least cover round-trip transaction costs (otherwise you’re losing money on the whole process). But just covering transaction costs would leave you with no net benefit, which would be pointless. So you need to cover more than transaction costs. But how much more? There’s an optimal strategy for this, and it needs to be built into the process.
  • Short-term gains. When faced with an overweighted position with short-term gains, you don’t have just two choices (sell or don’t sell), you have three (sell now, sell later when it’s long-term, or don’t sell). So the set of choices you look at needs to expand to include the “sell later” option.
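
Here is the tracking-error sketch promised in the drift bullet above. Under the standard formulation, tracking error is the square root of dᵀΣd, where d is the vector of weight differences between the portfolio and its target and Σ is a covariance matrix of asset returns. The three-asset numbers below are purely illustrative:

```python
import numpy as np

def tracking_error(portfolio_weights, target_weights, covariance):
    """Estimated tracking error: the standard deviation of the return difference
    between the portfolio and its target, i.e. sqrt(d' * Sigma * d)."""
    d = np.asarray(portfolio_weights) - np.asarray(target_weights)
    return float(np.sqrt(d @ covariance @ d))

# Purely illustrative covariance matrix and weights (not real risk-model output).
cov = np.array([[0.04, 0.02, 0.01],
                [0.02, 0.05, 0.02],
                [0.01, 0.02, 0.03]])
current = [0.50, 0.30, 0.20]   # current portfolio weights
target  = [0.40, 0.40, 0.20]   # model (target) weights

print(f"{tracking_error(current, target, cov):.2%}")   # about 2.24% in this example
```

In practice, the covariance matrix comes from the risk model mentioned above rather than being written out by hand.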

Optimizers were designed to help advisors make better trade-off decisions. But they’re complicated instruments, and, traditionally, they were only used by “quants”. This made them expensive to deploy, and so their application for private accounts was pretty much limited to High Net Worth (HNW) and Ultra High Net Worth (UHNW) portfolios.

But that is changing. Because of advances in automation, it is now economical to use them for all accounts, even robo-accounts (you can read about how Smartleaf uses an optimizer here: The Difference Between Smartleaf and an Optimizer). So their use is spreading, and offering the type of customization that was once the exclusive preserve of UHNW accounts is becoming “table stakes”. And that, we think, is optimal.

1 Warning: Not really. We don’t advise talking about optimizers at parties.

2 This procedure is not foolproof. There’s a danger of ending up not at the top of the peak, but at the top of an outcrop (a so-called “local maximum”). Optimizers take steps to avoid this.
