Machine Learning’s Growing Pains

Brad Solomon, Junior Investment Analyst

“Machine learning,” on its surface, sounds nothing short of miraculous.  For anyone who has ever felt overwhelmed when working with a large amount of intractable data, it evokes a certain fantasy: press a button, and let the machine learn. Poof, without any further instruction, your computer spits out relationships in the data seemingly untraceable to the human eye.

Yet paradoxically, there is also a competing perception that only a certain breed of mathematics PhDs and programming prodigies is worthy of using machine learning (ML) techniques.  The field of computer science has never been short on patronization; this post recommends first making sure that you have several advanced degrees and then learning C or C++, two languages widely seen as among the least user-friendly and neither of which is the language in which most machine learning is actually implemented.

Now that machine learning has made its way to the top of Gartner’s Hype Cycle for emerging technologies, and has also become pervasively marketed as part of the tool set of quantitative investment strategies, it’s probably a good time to debunk some misconceptions about what machine learning is, and what it isn’t.

Let’s start with a positive.  ML encompasses a wide range of statistical modeling techniques that can be applied toward facial recognition, predicting credit card fraud, and classifying tumors as malignant or benign, to name just a few implementations.  At the heart of machine learning are a number of different models that all serve as means to the same ends: predicting a value or classifying something categorically.  The list of models themselves is an intimidating mouthful: to name a few, there are neural networks, decision trees, Bayesian ridge regression, and support vector machines.

If your head is spinning, you’re not alone.  However, you might be surprised to learn that you likely covered some elements of machine learning in any introductory statistics course: for instance, ordinary least squares regression (linear regression) also falls under the umbrella of machine learning. Machine learning practitioners also like to throw around a number of fancy terms that go by other names elsewhere in the broader statistics discipline.  For example, training and test data are analogous to the more familiar terms in-sample and out-of-sample; supervised learning simply means that you start with an independent and a dependent variable, establish a relationship between the two, and then apply that relationship to “fresh” (test) data.
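To make those terms concrete, here is a minimal sketch in Python using scikit-learn and synthetic data (not any particular dataset): a “training”/“test” split and a plain linear regression treated as a supervised learning model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                          # independent variables (features)
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)   # dependent variable

# "Training" vs. "test" data: the ML names for in-sample and out-of-sample
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)                       # supervised learning: fit in-sample
print("Out-of-sample R^2:", round(model.score(X_test, y_test), 3))     # evaluate on "fresh" data
```

In other words, the jargon maps onto familiar ideas: fit the relationship in-sample, then check how well it holds up out-of-sample.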

Now, to debunk one of several myths: ML is not new; the term was coined in 1959 and has been used pervasively in the tech industry for decades.  However, growth in the popularity of the Python programming language, which is open-source, free, and offers a number of user-friendly machine learning packages, has fueled interest in the concept.

One result has been the proliferation of machine learning techniques and their (purported) use in quantitative investment applications.  At Brinker, we’ve come across more than a handful of managers using machine learning: for example, random forest classification to estimate the likelihood that a company will cut its dividend, or forecasting of market volatility regimes through Markov chain Monte Carlo methods.  However, we would be remiss not to mention that for every manager that usefully employs machine learning, there are a half-dozen others that simply like being able to include it in a slide in their strategy’s pitch-book.  Bloomberg bluntly articulated this recently: “Hedge Funds Beware: Most Machine Learning Talk Is Really ‘Hokum’.”  A healthy dose of skepticism is warranted.
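For concreteness, here is a hedged sketch of the first technique mentioned above, a random forest classifier scoring the likelihood of a dividend cut. The features, labels, and data are entirely hypothetical placeholders, not any manager’s actual inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical fundamentals (placeholders): payout ratio, leverage, free-cash-flow yield
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1).astype(int)  # 1 = dividend cut

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

# Predicted probability of a cut for each out-of-sample company
cut_prob = clf.predict_proba(X_test)[:, 1]
print("Example probabilities:", cut_prob[:5].round(2))
```

The point is not the model itself but that the high-level intuition of such a model is easy to articulate, which is exactly what you should ask a manager to do.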

That engenders a second key point: when interacting with managers who profess to use ML in their everyday process, ask as many “dumb” questions as possible.  In layman’s terms, can you describe what’s going on “under the hood”?  Why did you select this model in particular?  While the mathematics behind certain models can be quite hairy, the high-level intuition should not be.  And lastly, while machine learning hasn’t yet been fully commoditized, that doesn’t mean you should be paying a 2 & 20 fee to access its capabilities.

The views expressed are those of Brinker Capital and are not intended as investment advice or recommendation. For informational purposes only. Holdings are subject to change. Brinker Capital, Inc., a Registered Investment Advisor.

Active share: What’s size got to do with it?

Brad Solomon, Junior Investment Analyst

Following the scholarly debate over Active Share has been akin to witnessing a group of kids fight over a toy in the sandbox. Does Active Share “work”?  Yes or no? Fortunately, we’re not debating over Coke versus Pepsi. There are objective, wholly verifiable means to assess Active Share’s efficacy, and here I’ll attempt it in 999 words or less.

First, some obligatory background: the original working paper on Active Share, by Martijn Cremers and Antti Petajisto (“CP” hereafter), proposed a measure of the dissimilarity of a fund’s holdings versus an index.  Mathematically,

\[ \text{Active Share} = \frac{1}{2} \sum_{i=1}^{N} \left| w_{\text{fund},i} - w_{\text{index},i} \right| \]

where $w_{\text{fund},i}$ and $w_{\text{index},i}$ are the weights of security $i$ in the fund and in the index.  Intuitively, this quantifies the portion of the fund’s holdings that differs from the fund’s benchmark. Petajisto followed up in 2013 with an updated version that extends the study through 2009 (from 2003 in the original paper).
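As a quick illustration, here is a minimal Python sketch of the formula using made-up tickers and weights:

```python
# Made-up tickers and weights, purely for illustration
fund = {"AAPL": 0.10, "MSFT": 0.08, "XYZ": 0.82}
index = {"AAPL": 0.04, "MSFT": 0.03, "AMZN": 0.02, "OTHER": 0.91}

def active_share(fund_weights, index_weights):
    names = set(fund_weights) | set(index_weights)
    return 0.5 * sum(abs(fund_weights.get(n, 0.0) - index_weights.get(n, 0.0)) for n in names)

print(round(active_share(fund, index), 2))   # 0.93 -> 93% of the portfolio differs from the benchmark
```

A fund holding nothing in common with its benchmark scores 100%; a pure index fund scores near 0%.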

CP’s main finding is straightforward. In their sample of equity mutual funds, higher Active Share is associated with greater subsequent benchmark outperformance.[1]

Proponents of “true” active management were instantly infatuated. However, in early 2015, AQR Capital Management[2] published a scathing criticism[3] that states bluntly, “Our conclusions do not support an emphasis on Active Share as a tool for selecting managers.”

AQR makes two chief claims:

  1. Sorting funds by Active Share is equivalent to a sort on market cap orientation (benchmark type). High Active Share funds are, by and large, also small cap funds.
  2. When funds are first sorted by their benchmark and then scored on group-relative Active Share, Active Share’s usefulness goes away.

Some concluded that Active Share had been thoroughly dismantled. However, Petajisto and then Cremers authored responses (here and here, respectively) that are worth reading. Specifically, if your eyelids haven’t started to droop already, I’ll direct you to pages 16-18 of Cremers’ response. Here’s an impartial look at each of AQR’s claims.

Claim #1: Active Share functions largely as a sort on market cap focus.

Verdict: True.

Here’s a simple verification using panel data from the original Active Share studies. For the slice of equity funds extant as of September 2015 (1,850 funds), funds with large-cap benchmarks[4] dominate the universe, accounting for a little under 60% of the sample. Next, dividing the universe into Active Share quintiles allows for an easy examination of how “different” the top-quintile group of Active Share funds looks from the broader universe:

active share graph 1

Chart source: Brinker Capital, Martijn Cremers & University of Notre Dame (Data as of September, 2015).

 

The takeaway is pretty clear: investing “blindly” in high Active Share mutual funds will lead you to own a much greater proportion of small and mid-cap funds than would taking a random “slice” of the universe. Let’s move on.

Claim #2: Active Share no longer “works” after controlling for benchmark.

Verdict: Sorry, this one’s False or, at the very least, shaky.

With regard to this claim, the devil is in the details, and follow-up studies from investment managers with vested interests in destroying Active Share have further muddied the picture.

AQR’s method is to first sort funds into benchmark groups (17, to be precise) and then to form Active Share quintiles within each group. This constitutes a group-relative scoring rather than an absolute ranking. The performance of high Active Share funds is then the equal-weighted average of top-quintile Active Share funds across all 17 benchmarks. I’ll ignore the discussion around t-stats and simply point to the following “damning” chart from AQR’s paper. The spread between high- and low Active Share funds seems to be a random draw above or below zero when controlling for benchmark:

active share graph 2

Chart source: AQR Capital

 

But wait—Professor Cremers rightfully deserves the last word here. The lurking issue with the above is that it effectively overweights small groups (only 5 funds were benchmarked to the S&P 500 Value Index in the latest year of the study, versus 304 to the S&P 500 Index, for instance—is an equal-weighted average of these two groups meaningful?). As Cremers explains, “The 11 benchmark groups where high Active Share funds do not outperform contain about 28% of fund‐year observations.” He goes on to admit, “It is a fair and natural question to ask whether the outperformance of high Active Share funds is only true for small cap funds and the underperformance of low Active Share funds.”

Cremers’ fix is to form broader sorts. Instead of 17 benchmark groups, a handful of which are tiny and skew the results nonsensically, he first classifies each benchmark by its cap orientation (large, mid, small) and then groups funds into these three cap-orientation samples, allowing for more diversified groups and a truer result.
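For readers who think in code, here is a hedged pandas sketch contrasting the two approaches: an absolute ranking across the whole universe versus a ranking formed within groups (by benchmark or, per Cremers, by cap orientation). The column names and toy data are illustrative only.

```python
import pandas as pd

# Toy data; column names ("active_share", "benchmark", "cap_orientation") are illustrative only
funds = pd.DataFrame({
    "active_share":    [0.95, 0.40, 0.88, 0.35, 0.70, 0.60, 0.92, 0.55],
    "benchmark":       ["R2000", "SP500", "R2000", "SP500", "SP500V", "SP500V", "R2000", "SP500"],
    "cap_orientation": ["small", "large", "small", "large", "large", "large", "small", "large"],
})

# Absolute sort: bucket every fund against the whole universe (quartiles here, given only 8 funds)
funds["abs_bucket"] = pd.qcut(funds["active_share"], 4, labels=False)

# Group-relative sort: rank each fund only against funds sharing its cap orientation
funds["rel_rank"] = funds.groupby("cap_orientation")["active_share"].rank(pct=True)

print(funds.sort_values("active_share", ascending=False))
```

The point of the broader grouping is simply that each group contains enough funds for the within-group comparison to be meaningful.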

I’ll leave you with the following two tables, reproduced from page 18 of Cremers’ response. They convey a conclusion that is difficult to counter: in both the large-cap and small-cap benchmark groups, there is a positive spread in outperformance between high and low Active Share funds. In most cases, the spread is statistically significant.[5]

active share graph 3

Chart source: Martijn Cremers, University of Notre Dame

 

In some cases, someone’s wrong, and someone’s right. Personally, I’ll take a Pepsi over Coke any day.

[1] “Outperformance” here means both gross outperformance versus a relevant benchmark and positive 4-factor alpha (an approach used to measure risk-adjusted value added in empirical academic research).

[2] This blog is not in any way a criticism of AQR at the firm level. In fact, Brinker Capital utilizes an AQR strategy in several of its products, and we will be the first to acknowledge AQR as a thought leader and one of the best sources of pertinent and rigorous research. Moreover, Brinker Capital’s own interest in advocating for or destroying Active Share is minimal; the extent to which we employ active management varies significantly based on our assessment of the efficiency of price discovery within each asset class.

[3] See also here for a revised version.

[4] I use each fund’s “minimum Active Share benchmark” as its benchmark, following the common methodology of Active Share studies. This establishes a fund’s benchmark as the benchmark to which it has the lowest Active Share, which is the index that it most resembles on a holdings basis.

[5] t-statistics greater than 2 represent the “industry standard” and imply statistical significance at a 95% confidence level.

The views expressed are those of Brinker Capital and are not intended as investment advice or recommendation. For informational purposes only. Brinker Capital, Inc., a Registered Investment Advisor.

Start the New Year off right: Resolve to read more

Brad Solomon, Junior Investment Analyst

Many New Year’s resolutions focus on developing healthy habits. An important habit to maintain is intellectual curiosity. In no particular order, below is a reading list for 2017. Some deal more directly with finance than others, but they each explore economic, sociopolitical, cultural, and behavioral issues that are ultimately relevant to global markets.

  1. Nation on the Take: How Big Money Corrupts Our Democracy and What We Can Do About It. Wendell Potter & Nick Penniman, Bloomsbury Press, 2016.

Nation on the Take explores the evolution of lobbying in the United States and the increased role of money in politics following the Citizens United case of 2010. What is most satisfying about the book is the extent to which its authors manage to remain nonpartisan, calling out Republicans and Democrats alike. If your New Year’s resolution involves lowering your blood pressure, I advise against skipping over this suggestion.

  2. Hillbilly Elegy: A Memoir of a Family and Culture in Crisis. J.D. Vance, Harper Collins Publishing, 2016.

J.D. Vance’s Hillbilly Elegy details the disenchantment of Appalachia in a view that manages to be impartially critical but also remain in solidarity with the region. This book seems to be making its way onto every “essential reading” list, and deservedly so given its relevancy to the foundations of the new wave of populism that is still working its way across the globe.

  3. Nothing is True and Everything is Possible: The Surreal Heart of the New Russia. Peter Pomerantsev, Public Affairs Publishing, 2015.

While Charles Clover’s more recent Black Wind, White Snow overtly concerns itself with the Kremlin as its sole subject, Nothing is True is a wide-ranging, colorful firsthand account of the backwards elements of Russia’s culture. A poll of a certain political party recently showed that 37 percent of respondents view Vladimir Putin favorably, versus just 10 percent in July 2014. As America’s attitude towards Russia evolves, this book is a warning to think twice before offering such a seal of approval—a stark illustration of just how diametrically opposed many Russian norms are relative to those of the U.S.

  4. The Crisis of Crowding: Quant Copycats, Ugly Models, and the New Crash Normal. Ludwig Chincarini, John Wiley & Sons, 2012.

Chincarini’s The Crisis of Crowding could best be described as a mathematically detailed, focused version of Scott Patterson’s The Quants. The book analytically decomposes the 1998 collapse of Long-Term Capital Management and the 2008-09 Financial Crisis, exploring the common thread between them in that both resulted partly from incomplete pictures of risk in behaviorally erratic systems.

  5. Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat Casinos and Wall Street. William Poundstone, Hill and Wang, 2006.

Like the preceding choice on this list, Fortune’s Formula is a technical treatise of a subject that often gets “glossed over” despite its critical importance to markets. The author manages to explore the mathematically weighty Kelly criterion in a boiled-down, coherent, and practically applicable framework.

  6. Personal Benchmark: Integrating Behavioral Finance and Investment Management. Chuck Widger and Daniel Crosby, John Wiley & Sons, 2014.

Financial advisors do their clients a great service by educating them about investing best practices, but at times of volatility, logic is often thrown out the window. As the authors wrote in the book, “While investor awareness and education can be powerful, the very nature of stressful events is such that rational thinking and self-reliance are at their nadir when fear is at its peak.” The authors provide a framework for embedding good behavior into the investment process.

  7. The Laws of Wealth: Psychology and the Secret to Investing Success. Daniel Crosby, Harriman House, 2016.

And if you are looking for a list of rules to follow in the year ahead to exercise good investing behavior, The Laws of Wealth helps keep you on the straight and narrow. The book provides clear, concise direction on what investors should think, ask, and do.

Once you finish these, more recommendations can be found on the year-end lists from The Economist, Financial Times, and Bloomberg.

Enjoy, and happy New Year!

The views expressed are those of Brinker Capital and are not intended as investment advice or recommendation. For informational purposes only. Brinker Capital, Inc., a Registered Investment Advisor.


Trump’s free lunch: Avoiding painful indigestion

Brad Solomon, Junior Investment Analyst

The aphorism “there is no free lunch” is one of those handy phrases used ad nauseam in economics courses. The seductively tasty platter currently set in front of investors is a lightning-fast reallocation of assets towards stocks that should “clearly” benefit from a Donald Trump presidency. Often, however, it pays to be a skeptic. I’m not critiquing the efficacy of the policies themselves in promoting Americans’ well-being; I’m talking about the need to unhurriedly assess the second-level investment implications of policy and whether they have already been discounted into asset prices.

The ascendancy of the Trump administration and the degree to which President-Elect Trump will remain wedded to his campaign rhetoric involve a number of moving parts. Now may be an opportune time to patiently exercise what Howard Marks of Oaktree Capital calls “second-level thinking”:

First-level thinking says, “It’s a good company; let’s buy the stock.” Second-level thinking says, “It’s a good company, but everyone thinks it’s a great company, and it’s not. So the stock’s overrated and overpriced; let’s sell.”[1]

At Brinker Capital, we believe that second-level thinking is best nurtured by asking questions. Trump’s vision is to “transform America’s crumbling infrastructure into a golden opportunity for accelerated economic growth.” The number touted by greatagain.gov is $550 billion, and a recent paper by senior Trump advisors Wilbur Ross and Peter Navarro calls for spurring $1 trillion in privately-financed infrastructure investment over the next decade through use of tax credits.[2]  “Buy infrastructure” seems to be the screamingly obvious investment implication, but here are a few less obvious questions:

Is our infrastructure actually “crumbling”?

The American Society of Civil Engineers (ASCE) gave America’s infrastructure a “D” in its 2013 report card.[3] But coming from a professional trade organization of civil engineers, that’s probably akin to asking the cows from the Chick-fil-A commercials whether they prefer beef or chicken. Policy analyst Marc Scribner calls this the “Great Infrastructure Myth” and notes that the number of structurally deficient bridges has been declining for over two decades while pavements have become smoother in aggregate.[4]  A recent piece by Deutsche Bank Research[5] argued that infrastructure spending in the U.S. is not, as commonly assumed, lacking:

  • When using infrastructure-specific price indices, the share of real government investment to output has been stable for much of the last three decades.
  • After taking into account compositional changes in private capex, business investment has also remained steady as a percent of output.

How much “leakage” is there to the transmission mechanism by which government spending boosts profits in the private sector?

Investors would be wise to examine the intended and realized consequences of President Obama’s $840 billion American Recovery and Reinvestment Act (ARRA) of February 2009, much of which was directed towards infrastructure. Michael Grabell’s 2012 piece “How Not to Revive an Economy” provides a sobering look at what led President Obama to admit that “there’s no such thing” as a shovel-ready project.[6]

Which subsectors are winners of increased public spending on infrastructure?

Infrastructure is a blanket term that encompasses a large array of systems: energy, transit, ports, aviation, levees, dams, schools, roads, inland waterways, public parks, rail, bridges, drinking water, and waste treatment. Twelve of the 16 sectors reviewed on the ASCE’s 2013 report card received a grade of “C” or worse. Narrowing in on two subsectors, what evidence exists that Trump will favor oil and gas over renewable energy, for instance, and will he possess the means to undo the renewable energy investment tax credit (ITC) that was recently renewed in December 2015?

Okay, you’ve decided to buy an infrastructure fund. What’s under the hood?

There are 18 open-end funds focused on infrastructure and 15 ETFs with “infrastructure” in their name. Let’s say that you’ve set your sights on one of the larger ETFs in the group focused on income-generating infrastructure equities. By sector, utilities comprise 49% of the ETF, not uncommon for other members of the group. Is that an allocation you’re comfortable making? The Committee for a Responsible Federal Budget projects that the Trump administration’s plans will increase the national debt by $5.3 trillion, to 105 percent of GDP by 2026.[7] Profligate deficits tend to have the effect of raising benchmark interest rates, and high-yielding utility stocks have traditionally been rate-sensitive instruments.

The investment world lends mythical status to the “contrarian” who takes out-of-favor positions. But standing out from the crowd is also possible simply by exercising patience and requiring a fully fleshed-out view as a precondition for making a judgment.

Our founder, Chuck Widger, provides timeless advice in his New York Times best-selling book, Personal Benchmark: Integrating Behavioral Finance and Investment Management, that helps advisors and investors stay the course in times such as these:

What this boils down to is that advisors must develop and oversee the execution of an investment strategy that anticipates the inevitable potholes and stays the course of efficiently compounding the investment portfolio to create purchasing power. This requires both the management of the investment portfolio and the management of investor behavior. Skilled, experienced advisors know that one of their most important responsibilities is to help investors avoid making emotional decisions when volatility is high or when markets are irrationally exuberant.

The views expressed are those of Brinker Capital and are not intended as investment advice or recommendation. For informational purposes only. Brinker Capital, Inc., a Registered Investment Advisor. 

[1] Marks, Howard.  “It’s Not Easy.”  Oaktree Capital Management.  September 2015.

[2] Ross, Wilbur & Peter Navarro.  “Trump versus Clinton on Infrastructure.”  October 2016.  Specifically, the paper assumes projects are funded by debt and equity at a ratio of 5:1 and proposes to award a tax credit to the equity investor at 82% of the equity contribution or 13.7% of the project cost, and then tax the labor component of construction and the contractor’s pretax profits to bring the program towards revenue neutrality.
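As a quick check of the arithmetic in that footnote: with debt and equity at 5:1, equity is one-sixth of project cost, and an 82% credit on that contribution works out to roughly 13.7% of the total.

```python
debt_to_equity = 5
equity_share = 1 / (debt_to_equity + 1)       # equity is 1/6, or ~16.7%, of project cost
credit_rate = 0.82                            # credit equal to 82% of the equity contribution
print(round(credit_rate * equity_share, 3))   # -> 0.137, i.e., ~13.7% of total project cost
```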

[3] American Society of Civil Engineers.  “2013 Report Card for America’s Infrastructure.”  March 2013.

[4] Scribner, Marc.  “The Great Infrastructure Myth.”  Competitive Enterprise Institute.  November 2016.

[5] Tierney, John.  “America’s Fiscal Consensus—A Bridge Too Far.”  Deutsche Bank Research.  October 2016.

[6] Grabell, Michael.  “How Not To Revive an Economy.”  The New York Times.  February 2012.

[7] Committee for a Responsible Federal Budget.  “Promises and Price Tags: An Update.”  September 22, 2016.

60% of the Time, It Works Every Time

Brad Solomon, Junior Investment Analyst

“Bonds Show 60% Odds of Recession.”

It was a bold, slightly jarring headline to an article I happened across one recent morning. I had done a solid minute of skimming before I scrolled back to the top and noticed the published date—October 22, 2011.  If the models cited in the article had bet their chips on red, so to speak, then the U.S. economy continued to hit black for some time.  Over the next four years, the domestic unemployment rate nearly halved while the S&P 500 returned a cumulative 84%.  Say what you want about much of that return being multiple expansion (84% total return on cumulative earnings-per-share growth of 16%)—it would’ve been a tough four years for investors to sit on the sidelines.

I’m writing this from an investment perspective rather than an academic one, but monitoring the economy remains a preoccupation for both fields.  Why? Because, as quantified by Evercore ISI, S&P 500 bear markets have been more severe (-30%) when they precede what actually morphs into an economic recession than when dire signs of economic stress do not ultimately materialize (-15%).

The world is once again on “recession watch” in 2016; signs of financial strain include the offshore weakening of China’s yuan, widening credit spreads, an apparent peak in blue chip earnings per share, and spiking European bank credit default swaps (CDSs).  One telling recession indicator, yield curve inversion, has seemingly not reared its head.  As measured through the difference between 10-year and 3-month Treasury yields, the spread today stands around 150 basis points, while it has fallen like clockwork to zero or below prior to each U.S. recession since 1956. (Recessions are indicated by the shaded grey areas below, as defined by the NBER.)

Source: The Federal Reserve, Brinker Capital

A number of commentators have raised concerns that the statistics above should not be read as an “all clear” signal that there won’t be a recession.  In full awareness of the folly of claiming that “this time is different”—well, this time may be different.  Breaking the term spread into its two components—the yield on a shorter-dated bill and that on a longer-dated bond—short rates have been artificially held down by a zero-bound federal funds rate for the past six years, while the positive convexity that is inherently more pronounced for long rates means that it is, in theory, very tough to “close the gap” on the remaining 150 basis point spread that would mathematically produce an inverted yield curve.  (A convexity illustration is shown below—the takeaway is that the yield-price relationship becomes asymptotic at high prices, meaning that the 10-year note would need to be exorbitantly bid up to bring its yield down to equate with much shorter maturities.)

Source: Brinker Capital
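For the curious, here is a rough Python sketch of the convexity point, pricing a hypothetical 10-year, 2%-coupon note (annual pay, par 100; purely illustrative numbers) at successively lower yields. Each further drop in yield corresponds to a larger and larger required price gain.

```python
# Price of a hypothetical 10-year, 2%-coupon note (par 100, annual pay); illustrative only
def bond_price(yield_rate, coupon=2.0, par=100.0, years=10):
    cash_flows = [coupon] * years
    cash_flows[-1] += par
    return sum(cf / (1 + yield_rate) ** t for t, cf in enumerate(cash_flows, start=1))

for y in (0.020, 0.015, 0.010, 0.005, 0.000):
    print(f"yield {y:.1%} -> price {bond_price(y):6.2f}")
# Each successive 50 bp drop in yield requires a larger price gain (convexity)
```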

So, what are the odds of a recession?  If it’s not clear yet, I’m not writing this to assign a current probability but rather to warn against viewing such a figure in isolation.  Following the logic illustrated in papers such as this one, statistical programs make it possible to truly fine-tune a model: plug in any number of explanatory vectors (time series variables such as industrial production or unemployment claims) and “fit” the historical data to the response variable, which is essentially a switch that is “on” during a recession and “off” when not.  But as calibrated as the model becomes, there is still subjectivity involved: what is the proper “trigger” for alarm?  Should your reaction to a 70% implied probability be different from your reaction to a 60% reading?  An important consideration is the objective behind such a model in the first place—to take a continuous range (an infinite number) of possible outcomes and assign a probability to a discrete event (red or black, recession or no recession).  When framed this way, often it is the unquantifiable, intangible narratives and examination of what’s different this time (rather than what looks “the same”) that can create a fuller picture.
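Here is a hedged sketch of the type of model described above: a logit regression of a 0/1 recession indicator on a couple of explanatory series. The data below are random placeholders rather than actual economic series, and the variable names are only illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
term_spread = rng.normal(1.5, 1.0, n)         # e.g., 10-year minus 3-month yield, in %
claims_growth = rng.normal(0.0, 1.0, n)       # e.g., growth in unemployment claims
recession = (term_spread + rng.normal(0, 1, n) < 0).astype(int)   # the 0/1 "switch"

X = sm.add_constant(np.column_stack([term_spread, claims_growth]))
model = sm.Logit(recession, X).fit(disp=0)

# Implied probability for one hypothetical observation: constant, spread of 0.5%, claims growth of 0.2
new_x = np.array([[1.0, 0.5, 0.2]])
print(f"Implied recession probability: {model.predict(new_x)[0]:.0%}")
```

The model dutifully returns a number; whether 60% versus 70% should change your behavior is the part no regression can answer.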

The views expressed are those of Brinker Capital and are not intended as investment advice or recommendation. For informational purposes only. Holdings are subject to change. Brinker Capital, Inc., a Registered Investment Advisor.

Has Quantitative Easing Worked? A Two-Part Blog Series Perspective (Part II)

Brad Solomon, Junior Investment Analyst

Part two in a two-part blog series discussing quantitative easing measures on a domestic and global scale. Part one published last week.

Transmission to Main Street has been dubious.

The Fed’s FRB/US model, which is the workhorse behind quantifying QE’s transmission mechanisms into the general economy, forecast a 0.2 percentage-point drop in unemployment over a two-year horizon as a result of a $500 billion LSAP, according to then-Fed governor Stein in 2012. Given that the cumulative scale of QE in the U.S. totaled around $4 trillion over about 4.4 years, excluding intermittent periods between buying sprees, the FRB/US model would then forecast a reduction in unemployment of 1.6 percentage points. (This assumes that there are no marginally diminishing returns to QE dollars.) Building in a “lag” of six months, the actual U.S. unemployment rate fell by 4.0 percentage points during this period and currently hovers near 5%, right above what is often pegged as the natural rate of unemployment. To what extent that reduction is due to QE, though, is very difficult to answer—there is no “control subject” in real-world experiments. The next-best option is an event study that looks at variables prior to and following some stimulus, although this risks conflating the effect with some other variable. While unemployment has fallen near its natural rate, anecdotal evidence speaks to widespread underemployment.
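The scaling above is just a back-of-the-envelope linear extrapolation:

```python
effect_per_500bn = 0.2            # percentage points of unemployment, per Stein (2012)
total_qe_bn = 4000                # ~$4 trillion of cumulative U.S. LSAPs
print(total_qe_bn / 500 * effect_per_500bn)   # -> 1.6 percentage points
```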

Other metrics look either ambiguous or decidedly impressive. Across the U.S., U.K., Eurozone, and Japan, industrial production growth has been significantly more volatile than it was pre-recession; unemployment has fallen, with the exception of the Eurozone, where it has marched further upward after a double-dip recession in 2013; and household saving as a percent of disposable income has come down substantially. Lack of healthy inflation has proven to be the fly in the ointment. Nearly 30 countries have explicitly adopted inflation targeting (around half of those in the last 15 years), but the majority continue to be plagued by nagging disinflation or outright deflation. Consider the poster child, Japan, which pioneered QE over the 2001-2006 period with its commitment to purchase ¥3-6 trillion in Japanese government bonds (JGBs) per month until core CPI became “stably above zero.” While the Bank of Japan wrapped up the program in March 2006 after witnessing year-over-year core CPI in Japan clock in just above zero for three consecutive months, this was more of a mathematical win. Headline inflation over the period picked up solely due to a rapid rise in the price of crude oil, which arguably has little connection to monetary policy. This is not to say that some commentators have not already called for an indefinite deflationary environment, or that QE’s effects on the money supply don’t appear ambiguous.

Getting back to using the U.S. as an example, income growth has not followed the drop in unemployment, and inequality has persisted. Annualized growth rates since 2010 have been near zero and well below their long-term averages, and the lack of growth is particularly pronounced in the lower income quintiles.

Solomon_QE_4

On another front, record-low mortgage rates are undoubtedly a product of QE but have not translated into pre-2008 home buying, even in the presence of rising FICO scores and real home prices that are hovering around their 10-year trailing average. In fairness to QE, though, there simply seems to be a lack of a relationship between the cost of borrowing money to buy a home, and the demand for borrowing that money, as evidenced by the chart below.

Solomon_QE_5

QE’s efficacy seems to have varied case-by-case, and there is a growing consensus that there are diminishing marginal returns to QE.

Of this last point, the Bank of Japan and the ECB should take note. While the Bank of Japan refrained from expanding its QE program beyond the current $670 billion per-annum rate at its meeting this past Friday, such expansion remains on the table for its November and December meetings. A similar decision faces the ECB in December, and the rhetoric of ECB President Mario Draghi has been mostly dovish in tone. (The annual rate of asset purchases by the ECB currently stands at about $816 billion.) While both banks will ultimately adhere to their mandates in trying to combat deflation and negative export growth, perhaps expectations should be set low for how effective further QE will be in meeting those mandates.

Proponents of real business cycle theory would not be surprised at much of the above—that is, that aggressive monetary policy has failed to override a general shift in appetites for home-buying, tepid supply-glut disinflation, banks’ reduced appetite to lend, and businesses’ preference for doing little productive with bond issuance beyond repurchasing their own equity. These “exogenous” factors may overpower the stimulatory nature of QE, or the problem may be one of model specification. (Getting back to the home sales/mortgage rate example, QE may do its job of lowering borrowing rates, but this may not ultimately stoke home-buying appetites—a failure of the assumed indirect transmission mechanism that underlies QE’s rationale.) Whatever the case, while it has helped solve short-run liquidity problems by injecting cash into the financial system, QE has proven to be no cure-all for general economic lethargy.

Further reading

  1. Fawley, Brett & Christopher Neely. “Four Stories of Quantitative Easing.” (2013)
  2. Krishnamurthy, Arvind & Annette Vissing-Jorgensen. “The Ins and Outs of LSAPs.” (2013)
  3. Klyuev, Vladimir et al. “Unconventional Choices for Unconventional Times.” (2009)
  4. McTeer, Robert. “Why Quantitative Easing May Not Work the Same Way in Europe as in the U.S.” (2015)
  5. Raab, Carolin et al. “Large-Scale Asset Purchases by Central Banks II: Empirical Evidence.” (2015)
  6. Schuman, Michael. “Does QE Work? Ask Japan.” (2010)
  7. Stein, Jeremy. “Evaluating Large-Scale Asset Purchases.” (2012)
  8. Williams, John. “Monetary Policy at the Zero Lower Bound.” (2014)
  9. Williamson, Stephen D. “Current Federal Reserve Policy Under the Lens of Economic History.” (2015)
  10. Yardeni, Edward & Mali Quintana. “Global Economic Briefing: Central Bank Balance Sheets.” (2015)

The views expressed are those of Brinker Capital and are not intended as investment advice or recommendation. For informational purposes only. Holdings are subject to change. Brinker Capital, Inc., a Registered Investment Advisor.

Has Quantitative Easing Worked? A Two-Part Blog Series Perspective

Brad Solomon, Junior Investment Analyst

Part one in a two-part blog series discussing quantitative easing measures on a domestic and global scale.

As policy rates hover near (or below) zero, the focus has been on the timing and magnitude of rate hikes by the Fed and other central banks. Don’t worry, I’m not here to add my speculative voice to that crowded discussion. Instead, I want to provide a quick ex-post assessment of another tool that has left the spotlight after being largely phased out by the Fed. I’m talking about quantitative easing (QE)—the buying of massive amounts of financial assets—or large-scale asset purchases (LSAPs) as they are termed by some economists.

At its core, QE attempts to influence the supply and demand for financial assets, thereby shifting preferences towards spending and investment and away from saving. (For those interested in getting further into the weeds on QE’s theoretical underpinnings, check out Ben Bernanke’s 2012 Jackson Hole speech, Jeremy Stein’s remarks that same year, or this release by the IMF.) Among the U.S., U.K., Japan, and the ECB, the scope of QE to date has amounted to around 10-20% of 2014 nominal GDP. To put that into perspective for the U.S.’s case, that is about the magnitude of U.S. total federal discretionary spending over the trailing four years.

Solomon_QE_1

So, with the Bank of Japan and ECB contemplating expanding quantitative easing at their upcoming meetings, does the existing research generally conclude that QE globally has been a few trillion dollars well spent? Let’s take a closer look.

LSAPs seem to have benefited U.S. equities unequivocally, and international equities less so. Evidence on financial system vitality is mixed.

The algebraic explanation is relatively straightforward: the yield on risk-free securities is an element of the discount rate used to value stocks and other assets. Artificially keeping this rate low, as well as creating expectations that it will stay that way, increases the discounted present value of other financial assets. However, only in the U.S. has the annualized return of the country’s respective MSCI index over the past five years exceeded the return required by a general equity risk premium of 5.57% (from Fama & French, 2002) and country risk premiums as computed by Aswath Damodaran of NYU (2015).
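As a toy illustration of that algebra (the cash flow and rate figures are arbitrary placeholders): holding cash flows fixed, a lower risk-free component of the discount rate raises present value.

```python
# Ten years of a fixed cash flow, discounted at a risk-free yield plus an equity risk premium
def present_value(cash_flow, discount_rate, years=10):
    return sum(cash_flow / (1 + discount_rate) ** t for t in range(1, years + 1))

risk_premium = 0.0557                         # the ~5.57% equity risk premium cited above
for risk_free in (0.04, 0.02, 0.00):          # QE pushes the risk-free component lower
    print(f"risk-free {risk_free:.0%} -> PV {present_value(10, risk_free + risk_premium):.1f}")
```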

Solomon_QE_2

Evidence on QE’s ability to reduce stress within the financial system is mixed. Event studies show that QE announcements were followed by sharp reductions in financial stress indicators, which consist of variables including the TED spread, corporate bond spreads, and beta of banking stocks. However, some studies on Japan’s experience with QE assert that it took a substantial amount of time for bank lending to improve, as banks were burdened by nonperforming loans and uneasiness towards extending credit.

Solomon_QE_3

Furthermore, QE may have also distorted asset prices (some have gone far enough to use the term bond bubble) while creating “price-insensitive buyers,” a term used by Ben Inker of GMO to describe an investor for whom the expected return on the asset does not dictate their decision to purchase.

Look for part two of this blog series later in the week.

The views expressed are those of Brinker Capital and are not intended as investment advice or recommendation. For informational purposes only. Holdings are subject to change.