HFT: In Search of the Truth

Most studies on the impact of high-frequency trading on market quality include at least one fatal flaw. But the truth is out there.

Statistics is a funny field – you can generally construct well-reasoned, well-supported arguments on both sides of an issue. I have found a paucity of independent, objective writing on market quality, and recently found myself researching what studies have been performed. Reviewing the literature, it is clear that nearly all studies of equity market quality have been done by one of three groups:

  1. HFT firms
  2. Exchanges (whose best customers are HFT firms), or
  3. Academics sponsored by HFT firms or exchanges.

Part of this problem is the data; it’s simply immense. In fact, it has helped to coin the term “Big Data.” And you know it’s bad when a bunch of computer scientists call something “big.”

This is not just a question of acquisition cost, though it is certainly costly to obtain this data (irrationally so, considering this is data about public markets). It’s also a question of technological infrastructure and expertise. It takes a lot of storage and computing resources to analyze this data, and those who know how are generally making large amounts of money in the industry. Or they’re looking to get a job making large amounts of money in the industry.

Before going further, I’d like to reiterate my call to the SEC to open up the market data. Open up access to MIDAS, the SEC’s quantitative analysis platform, to academics and independent researchers! Embrace the principles of the open source movement, and make it cheap and easy to perform studies on market data with the goal of advancing the public discussion and regulatory decisions. The SEC team at the head of the MIDAS project is talented, but small and resource-constrained. I have called for this in my Senate testimony and my public comments at the SEC Technology Roundtable. Open up the data! There’s really no good argument against it.

That being said, it is fruitful to examine evidence and studies that are not produced by the three groups mentioned above. Is it ironic to decry the manipulation of statistics in one sentence, and then to use them to prove your case? Probably. Disingenuous? I don’t think so. After all, each of the parties mentioned above has a distinct book to talk, a P&L they’re responsible for, whereas I’m seeking independent analysis and trying to improve the markets. The case could be made that I’m as guilty as any of them. Clearly, I won’t let that stop me from trying.

It has become commonplace in this industry to accept that HFT has improved market quality by providing tighter spreads, greater depth (more displayed shares in the book), and reduced volatility. But do independent studies bear this out?


I would like to examine this question in detail, but first I’d like readers to consider something broader: The market has clearly transformed over the past two decades. Many of these changes were extremely beneficial, including decimalization, improved order routing algorithms and increased competition. Some of these changes are more subtle, and far less studied.

In an interesting paper published in 2010, Reginald Smith asked the question, “Is high-frequency trading inducing changes in market microstructure and dynamics?” He went on to demonstrate that while the Hurst exponent of the stock market has historically been measured at 0.5, consistent with Brownian motion and our general understanding of how the market functions on small timescales, that value has been rising since 2005 and the adoption of Reg NMS. While this is certainly an esoteric statistical discussion, his conclusion is striking:

“We can clearly demonstrate that HFT is having an increasingly large impact on the microstructure of equity trading dynamics. ... This increase becomes most marked, especially in the NYSE stocks, following the implementation of Reg NMS by the SEC which led to the boom in HFT. ... Volatility in trading patterns is no longer due to just adverse events but is becoming an increasingly intrinsic part of trading activity. Like internet traffic Leland et. al. (1994), if HFT trades are self-similar with H > 0.5, more participants in the market generate more volatility, not more predictable behavior.”

In other words, a market that is becoming more self-similar is a market more prone to positive feedback loops, illiquidity contagions, and generally non-linear behavior. It’s a market that is less resilient and more prone to instability. It’s a market that none of us want. It’s a market that takes our entire body of academic theories on how capital markets operate and overturns the most basic underlying assumption. I don’t mean to get bogged down in this discussion, but I’d like readers to keep it in mind as we discuss market quality.
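For readers who want to see what such a measurement looks like in practice, below is a minimal sketch of rescaled-range (R/S) estimation of the Hurst exponent in Python. It is an illustration only, not Smith’s methodology; the function name, window scheme, and parameters are my own.

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis.

    H ~ 0.5 is consistent with Brownian motion; H > 0.5 indicates
    persistent, self-similar behavior.
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    window_sizes, rs_means = [], []
    w = min_window
    while w <= n // 2:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            chunk = x[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from the chunk mean
            r = dev.max() - dev.min()              # range of those deviations
            s = chunk.std(ddof=1)                  # chunk standard deviation
            if s > 0:
                rs_vals.append(r / s)
        window_sizes.append(w)
        rs_means.append(np.mean(rs_vals))
        w *= 2
    # The Hurst exponent is the slope of log(R/S) against log(window size).
    slope, _intercept = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

# White noise should come out near 0.5 (up to small-sample bias);
# a persistent, self-similar series comes out above 0.5.
print(hurst_rs(np.random.default_rng(0).normal(size=4096)))
```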

The Spread

The spread between the prices at which market participants are willing to buy and sell stock is often the most cited statistic in the HFT debate. HFT proponents scream about penny spreads and how dramatically they’ve improved spreads in the market. But it is questionable whether spreads have actually tightened, both in absolute terms and when adjusted for volume. For example, Watson, Van Ness and Van Ness (2012), using Dash-5 data, find that the average bid-ask spread from 2001-2005 was $0.022 (2.2 cents), while the average bid-ask spread from 2006-2010 was $0.027 (2.7 cents), a dramatic increase of 22.7%. This strongly suggests that spread improvements owe more to decimalization and pre-NMS computerization than to HFT efficiencies.

Kim and Murphy (2011) examined model-implied effective spreads. They looked at four common models used to calculate effective spreads and found that all four underestimate spreads by 41%-46% for 2007-2009. Using their own model, which takes volume into account and therefore provides a true apples-to-apples comparison of spreads across different time periods, they find that spreads between 1997 and 2009 are actually quite similar. Trade sizes are much smaller in the era of electronic markets and HFT, as institutional orders must be broken up dramatically to reduce market impact. You must therefore collapse consecutive buy/sell orders into a single transaction to evaluate the effective spread of a trade relative to spreads from the days when trades were larger.

Their analysis focused on SPY, the most liquid instrument in the market. According to Kim and Murphy, “The average size of a buy or sell order in 1997 is 5,600 shares, while in 2009, it is only 400 shares.” Therefore, any study that compares spreads between these periods without adjusting for volume, or that simply looks at instantaneous bid-ask prices, is flawed. It also means that nearly any study claiming spreads have tightened in a post-NMS world while neglecting to account for changes in trade size (as every study I’ve seen does) is fundamentally flawed.
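To make that adjustment concrete, here is a minimal sketch (my own construction, not Kim and Murphy’s code) of collapsing consecutive same-side prints into a single parent trade before computing a volume-weighted effective spread. The record layout (‘side’, ‘price’, ‘size’, ‘mid’) is assumed purely for illustration.

```python
from itertools import groupby

def collapsed_effective_spread(trades):
    """Volume-weighted effective spread after collapsing consecutive
    same-side prints into a single parent trade.

    `trades` is an ordered iterable of dicts with keys:
      'side'  -- 'B' or 'S'
      'price' -- execution price
      'size'  -- shares executed
      'mid'   -- quote midpoint at the time of the print
    """
    spread_dollars = 0.0
    total_shares = 0
    for side, run in groupby(trades, key=lambda t: t['side']):
        run = list(run)
        shares = sum(t['size'] for t in run)
        vwap = sum(t['price'] * t['size'] for t in run) / shares
        mid = run[0]['mid']                    # midpoint when the parent order began executing
        sign = 1 if side == 'B' else -1
        eff_spread = 2 * sign * (vwap - mid)   # cost paid above (buys) / below (sells) the midpoint, doubled
        spread_dollars += eff_spread * shares
        total_shares += shares
    return spread_dollars / total_shares

# Three consecutive child buys collapse into one 1,200-share parent trade.
tape = [
    {'side': 'B', 'price': 10.02, 'size': 400, 'mid': 10.00},
    {'side': 'B', 'price': 10.03, 'size': 400, 'mid': 10.01},
    {'side': 'B', 'price': 10.04, 'size': 400, 'mid': 10.02},
    {'side': 'S', 'price': 9.99,  'size': 500, 'mid': 10.00},
]
print(collapsed_effective_spread(tape))
```

Measuring the collapsed parent trade against the midpoint where it started trading is the comparison that matters when 1997-era trades were thousands of shares and today’s child orders are a few hundred.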

Another excellent study led by RBC’s Stephen Bain demonstrates that in Canada, spreads have indeed tightened post-2009 (these are nominal spreads, not effective spreads), but that this has come at a cost in terms of volatility. They found that “short-term price gyrations for individual stocks ... have effectively doubled from pre-2000 levels to present.” They conclude that “the behavior incented by today’s market has increased effective spread costs for investors by eroding the quality and reliability of the liquidity provided.”


Chart from RBC Capital Markets: Evolution of Canadian Equity Markets, February 2013.

Which brings us to our next measure of market quality: volatility.

Volatility

Volatility is one of those things that is difficult to study. It’s certainly not difficult to measure, but attributing it to individual causes is a challenging exercise. We know that HFT thrives at heightened levels of volatility (though not too high!), and therefore extracts higher rents. So we should examine the claim that volatility has dropped as a result of market structure improvements and HFT.

A cursory look at the VIX calls this story into question. From 1996-2003, the VIX oscillated but maintained a baseline in the 20s. From 2003-2007 it dropped steadily. We’ve obviously had some interesting and non-standard market conditions since then, but from a steady-state perspective the VIX since early 2009 looks remarkably similar to the VIX during the post dot-com crash period of 2003-2007. I believe the VIX to be a less interesting and less accurate gauge of volatility, and much prefer examining intraday volatility to better understand what’s happening in the market on a daily basis.

Once again referring to the RBC study, the researchers demonstrate conclusively that volatility has increased in Canadian markets, and has been doing so since 1996. In a very compelling illustration of this, they present “a histogram of the distribution for four date ranges covering eighths trading, nickel trading, decimalization and finally market fragmentation and proliferation of maker-taker. For the identified time periods we can see that the distributions for more recent periods exhibit a higher mean and more positive skew.”


Chart from RBC Capital Markets: Evolution of Canadian Equity Markets, February 2013.

This is a histogram of the “distribution of trading prices relative to short-term equilibrium” and demonstrates that each of these spread-narrowing innovations has been associated with an increase in relative volatility.
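RBC does not spell out its definition of “short-term equilibrium” here, so the following is only an illustrative sketch of how one might build this kind of histogram: proxy the equilibrium with a rolling average price and bin each trade’s relative deviation from it. The rolling window and column names below are my assumptions, not RBC’s methodology.

```python
import numpy as np
import pandas as pd

def deviation_histogram(trades, window=100, bins=50):
    """Histogram of trade prices relative to a rolling 'equilibrium' price.

    `trades` is a DataFrame with a 'price' column in time order. The
    rolling mean is only a stand-in for whatever short-term equilibrium
    measure RBC actually used.
    """
    equilibrium = trades["price"].rolling(window, min_periods=window).mean()
    rel_dev = (trades["price"] / equilibrium - 1.0).dropna()
    counts, edges = np.histogram(rel_dev, bins=bins)
    return pd.Series(counts, index=pd.IntervalIndex.from_breaks(edges))

# Comparing these distributions across eras (eighths, nickels, decimals,
# maker-taker) is the spirit of the RBC chart: a higher mean and more
# positive skew means trading farther from the short-term equilibrium.
```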

Another problem in volatility analysis mirrors the one in spread analysis. RBC found in the same study that 60% of its order flow is traded by 11am, and volatility during this first hour and a half of trading was far higher than in the past. Failing to adjust volatility calculations for volume or time of day, and instead averaging over the entire day, is a flawed methodology, but one that most pro-HFT studies use.
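As a sketch of the kind of adjustment this implies (again my own construction, not RBC’s methodology), one can compute realized volatility per intraday time bucket, alongside each bucket’s share of volume, rather than a single whole-day figure. The example assumes one-minute bars in a pandas DataFrame with a DatetimeIndex and ‘close’ and ‘volume’ columns.

```python
import numpy as np
import pandas as pd

def bucketed_intraday_vol(bars, freq="30min"):
    """Realized volatility and volume share per intraday bucket.

    `bars` is a DataFrame of one-minute bars with a DatetimeIndex and
    'close' and 'volume' columns. Annualization assumes 390 one-minute
    bars per session and 252 sessions per year.
    """
    log_ret = np.log(bars["close"]).diff().dropna()
    annualize = np.sqrt(252 * 390)
    per_bucket_vol = log_ret.groupby(pd.Grouper(freq=freq)).std() * annualize
    volume_share = bars["volume"].groupby(pd.Grouper(freq=freq)).sum() / bars["volume"].sum()
    whole_day_vol = log_ret.std() * annualize  # the single number a naive study would report
    summary = pd.DataFrame({"realized_vol": per_bucket_vol, "volume_share": volume_share})
    return summary, whole_day_vol

# If most of the volume and most of the volatility land before 11am, the
# whole-day number understates what investors actually experience while
# they are trading.
```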

Nanex has done some excellent work in volatility analysis, and its Relative Intraday Volatility (RIV) measure gives us a good indication of the trends in the US markets. The data show a distinct trend of increasing intraday volatility since Reg NMS was passed:


Chart from Nanex: https://www.nanex.net/aqck2/3563.html

This chart shows that since 2007, RIV in SPY has more than doubled, with peak intraday volatility 10 times higher in August 2011 than in 2006. I would argue that the increase in the Hurst exponent mentioned above is one major cause of this increased volatility.

Another excellent paper may hold another key to these increases in volatility, in both Canada and the US. Dichev, Huang and Zhou (2011) tie increased trading volume to increased volatility. They looked at data going back to 1926, and cross-sectionally over the past 20 years, and found that “in recent years trading-induced volatility accounts for about a quarter of total observed stock volatility.” Their conclusion is that as volume increases, so does volatility. And if there’s one thing everyone can agree on, it’s that HFT has increased volume. Lest anyone conflate the two, even the SEC has acknowledged that increasing volume is not the same thing as increasing liquidity, and the Flash Crash is the perfect illustration of that.

The Head of Financial Stability at the Bank of England, Andrew Haldane, found in his July 2011 study that “intraday volatility has risen most in those markets open to HFT.” He also noted that “HFT algorithms tend to amplify cross stock correlation in the face of a rise in volatility,” meaning that individual risks are made more systemic, and particularly so during times of market stress.

While examining intraday volatility is useful, proponents of our current market structure will also point to many studies arguing against the ones I’ve presented (although most of those come from one of the three conflicted groups identified at the beginning of this article). At the very least, the takeaway should be that this is a much more nuanced argument than is commonly accepted. At worst, the opposing studies are published by those with an incentive to maintain the status quo in market structure, and are flawed.

Of course, this examination of volatility says nothing about the increase in catastrophic technology failures and so-called “black swan” events. The complexity of today’s market makes it more fragile and more prone to these catastrophic events. It seems to be only a matter of time before we see another one.

Conclusion

This article is not meant to vilify high-frequency trading. Rather, it is intended to shine some light on the changes in market quality during the most incredible regulatory and technological transition that most of us have witnessed. The primary force at work here is Regulation NMS in the US, and one of its consequences was summed up by Eric Noll of Nasdaq at the Market Structure Roundtable on May 13, 2013. He stated that, as an unintended consequence of Reg NMS, speed has become the de facto differentiator after price.

This is something that I will examine further in subsequent articles, along with a discussion of market impact and recommendations on how to disincentivize speed and end the race to zero latency. This drive to zero has had extreme consequences in terms of market complexity and fragility, with little benefit to the investing public. That is not to say we must wax poetic about the days of the specialist; it is simply to state that there is a far better market structure out there than what we have currently, and we should seek to structure proper regulatory incentives to gently nudge the industry in that direction. That begins with reducing complexity (reexamining market data fees and revenue distribution, along with the trade-through rule, is a great start here) and reducing the value of speed.

Let us not fool ourselves that the issue of market quality is cut-and-dry. It is a nuanced discussion, and one that should be taken more seriously than it currently is.

Follow Dave Lauer at twitter.com/dlauer
