Does increased data liquidity lead to a less volatile stock market?


#1

Saw an interesting tweet by Dan Romero

In a world where most economic activity happens on public blockchains, GDP, CPI, and BLS reports are all real-time.

Does increased data liquidity lead to a less volatile stock market?

I’m not an econ expert, so I can’t say. Just wanted to pose the question here.


#2

It’s an interesting thought. I can see the potential upside of data liquidity: increased transparency and lower volatility because the market can price things more accurately based on available data.

However, I wonder what the negative consequences are, if any.


#3

I think @preethi is right: there are studies showing that greater availability of information tends to result in more efficient market pricing (and a reduction in real and perceived risk, because there are fewer unknowns), thus lowering volatility and increasing liquidity.

Along your line of thinking: the 2008 crisis showed that policymakers and regulators were unaware of how much debt was in the system. What would happen if, instead, debt were issued on an open blockchain? Would the financial system be less volatile, and the economy more immune to boom-and-bust cycles? Maybe a system like MarketDAO could enable fully transparent debt creation, permitting debt/growth capital to be created while ensuring that debt levels would not harm the system itself.



#4

If one believes in the efficient market hypothesis, then increased “data liquidity” should lead to a less volatile stock market, as the “correct” price of an asset should be found even more quickly than it currently is. In a world full of “data liquidity”, there would be very little information asymmetry. In a perfectly accessible data market, the “correct” price of an asset should be found almost instantaneously.

I do not believe in the efficient market hypothesis, though; in fact, I think markets are in a perpetual state of disequilibrium.

In my opinion, it is very hard to argue that increased data means a less volatile stock market. Does more data mean that people make more “rational” or “logical” predictions? Not necessarily. In fact, easier access to data could lead to more people making “irrational” decisions because of a faulty analysis of the data. Those individuals could have more confidence in their decisions because they are data-driven, making those irrational decisions even more costly than they are now. That is, unless the market is irrational in the same way as the individual at that point in time, which would most likely decrease volatility.
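
To make the “confident misreading” case concrete, here is a toy simulation of my own (the model, parameters, and numbers are all assumptions, nothing from the thread). Informed trading moves the price on genuine information shocks, while traders who misanalyze the same data add flow unrelated to fundamentals, amplified by their confidence:

```python
import numpy as np

rng = np.random.default_rng(1)
steps = 1000

def simulated_volatility(misread_weight: float, confidence: float) -> float:
    # Informed flow moves the price on genuine information shocks; traders
    # who misanalyze the same data add flow unrelated to fundamentals,
    # amplified by how confident their "data-driven" conclusions make them.
    informed = rng.normal(0, 1, steps)
    misread = confidence * rng.normal(0, misread_weight, steps)
    returns = informed + misread
    return returns.std(ddof=1)

print(f"everyone reads the data correctly: {simulated_volatility(0.0, 1.0):.3f}")
print(f"confident misreaders in the mix:   {simulated_volatility(0.5, 2.0):.3f}")
```

Note this sketch only covers the case where misreadings are uncorrelated across traders; the correlated case mentioned above (everyone irrational in the same way) would behave differently.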

That type of behavior could lead to more volatility, less volatility, or no change at all. I think any definite statement about the outcome is faulty.


#5

Excellent point about the possibility of still making a “faulty analysis of data”. It reminds me of Taleb’s book “The Black Swan”.

In the first chapter, the Black Swan theory is first discussed in relation to Taleb’s coming of age in the Levant. The author then elucidates his approach to historical analysis. He describes history as opaque, essentially a black box of cause and effect: one sees events go in and events come out, but has no way of determining which produced what effect. Taleb argues this is due to what he calls the Triplet of Opacity.

The second chapter discusses a neuroscientist named Yevgenia Nikolayevna Krasnova and her book A Story of Recursion. She published her book on the web and was discovered by a small publishing company; they published her unedited work, and the book became an international bestseller. The small publishing firm became a big corporation, and Yevgenia became famous. This incident is described as a Black Swan event.

The book goes on to admit that the so-called author is a work of fiction. Yevgenia rejects the distinction between fiction and nonfiction. She also hates the very idea of forcing things into well-defined “categories”, holding that the world generally is complex and not easy to define. Though female, the character is based, in part, on the author himself (according to Taleb), and shares many of his traits.

Chapter four brings together the topics discussed earlier into a narrative about a turkey. Taleb uses it to illustrate the philosophical problem of induction and how past performance is no indicator of future performance. He then takes the reader into the history of skepticism.


#6

Hence my username.


#7

I do agree that the relationship between data liquidity and price volatility is complex, although I would actually argue that the efficient market hypothesis does not make a direct claim about the relationship: it states that market prices reflect all available information (assuming the strong form). If information is being generated continuously, and can be expected to change the price of a stock, then even if EMH holds, the volatility of the stock price can increase rather than decrease.
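
A quick way to see this: under EMH the price is (approximately) a martingale whose increments are the information shocks themselves, so a faster flow of price-relevant information means larger realized volatility, not smaller. A minimal sketch (the arrival model and all numbers are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
steps = 2000

def realized_vol(info_arrival_prob: float) -> float:
    # Under EMH the price already reflects all information, so each new
    # piece of information moves the price by an unpredictable amount.
    # The price path is a martingale; its volatility is set by how often
    # price-relevant information arrives.
    arrivals = rng.binomial(1, info_arrival_prob, steps)
    shocks = rng.normal(0, 1, steps) * arrivals
    prices = 100 + np.cumsum(shocks)
    return np.diff(prices).std(ddof=1)

print(f"sparse information flow:     {realized_vol(0.1):.3f}")
print(f"continuous information flow: {realized_vol(0.9):.3f}")
```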

It is useful to look at the actual claim a bit more formally, as it leads to a similar conclusion: data availability can increase, decrease, or leave unchanged the volatility of a stock, depending on the relevant conditions, and EMH may or may not hold in any of those scenarios.

Scenario 1 - Volatility and data liquidity are positively correlated

  • Volatility = sudden deviations in the price of an asset (typically measured against its average, e.g., as a standard deviation; see the sketch after this list)
  • The price of an asset is (theoretically) given by the present value of its future cash flows (note this is not, in itself, imposed by EMH)
  • Future cash flows cannot be observed, so prices of publicly traded assets are given by investors’ beliefs about prospective cash flows, based on available information (assuming strong-form EMH holds) and on rational expectations (assuming the rational expectations hypothesis, REH, holds)
  • Assuming the above theories hold, any sudden deviation in price (i.e., volatility) can be driven by data liquidity only if the data releases new information about the stock’s prospective cash flows, and only if investors act rationally on that information
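
A minimal sketch of the two definitions used above, with made-up numbers (the prices, cash flows, and discount rate are all assumptions, not data):

```python
import numpy as np

# Hypothetical daily closing prices for a single stock.
prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1, 101.9, 104.2])

# Volatility as the standard deviation of log returns, annualized with
# the usual 252-trading-day convention.
log_returns = np.diff(np.log(prices))
annualized_vol = log_returns.std(ddof=1) * np.sqrt(252)
print(f"annualized volatility: {annualized_vol:.2%}")

# Theoretical price as the present value of expected future cash flows:
# P = sum over t of CF_t / (1 + r)^t, for an assumed discount rate r.
expected_cash_flows = [5.0, 5.2, 5.4, 5.6]  # assumed annual cash flows
r = 0.08                                    # assumed discount rate
present_value = sum(cf / (1 + r) ** (t + 1)
                    for t, cf in enumerate(expected_cash_flows))
print(f"present value of cash flows: {present_value:.2f}")
```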

Scenario 2 - Volatility and data liquidity are not correlated

Conversely, data liquidity may not translate into any change in volatility if:

  1. The data does not impact the stock’s cash flows (EMH can still hold)
  2. Investors are not, or never were, acting rationally (failure of REH), so that prices were never given by available data (failure of EMH)
  3. There are limitations that restrict investors from acting on new information (e.g., regulatory limits on short selling), which would render EMH false, but not REH

Scenario 3 - Volatility and data liquidity are inversely correlated

Finally, a lack of data liquidity could also translate into more volatility if investors were basing decisions on incomplete information, which can encourage them to change their opinions more sporadically (EMH can still hold if this is true, although it is not a necessary condition, since the volatility can be driven by a range of other factors, such as investor sentiment).

Empirically, Robert Shiller’s work (https://www.nber.org/papers/w0456) has shown that for publicly traded equities (as measured by the S&P Composite index), volatility cannot be explained by publicly available information on prospective cash flows (i.e., historical dividend distributions). One could argue that the volatility was driven by insider information, but actual (ex-post) dividend distributions also fail to explain it, which begs the question: what were investors basing their investment decisions on?
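
For intuition, here is a sketch of the variance comparison behind that result, on synthetic data (the dividend process, discount rate, and noise term are all my assumptions; Shiller’s test uses the actual S&P Composite series):

```python
import numpy as np

rng = np.random.default_rng(0)
T, r = 200, 0.05          # periods and an assumed constant discount rate

# Synthetic dividend stream and the "fundamental" price level it implies.
dividends = 5.0 + np.cumsum(rng.normal(0, 0.05, T))
fundamental = dividends / r

# Actual prices: fundamentals plus an extra non-fundamental noise term,
# standing in for whatever investors were actually trading on.
actual_prices = fundamental + rng.normal(0, 4.0, T)

# Ex-post rational price p*_t: the discounted value of realized future
# dividends, computed backward from a terminal fundamental value.
p_star = np.empty(T)
p_star[-1] = fundamental[-1]
for t in range(T - 2, -1, -1):
    p_star[t] = (dividends[t + 1] + p_star[t + 1]) / (1 + r)

# Shiller's bound: if prices are rational forecasts of p*, then
# var(actual prices) should not exceed var(p*).
print(f"var(actual prices): {actual_prices.var(ddof=1):10.2f}")
print(f"var(ex-post p*):    {p_star.var(ddof=1):10.2f}")
```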

History is also littered with examples where data availability cannot explain asset price volatility (https://www.amazon.co.uk/Manias-Panics-Crashes-Financial-Investment/dp/0471467146), as investors do not always base investment decisions on rational assumptions. I personally think crypto-related financial products can only go as far as increasing data availability; their use has to be tightly controlled if the data is to be used to form rational expectations.


#8

I very much think the implications of EMH bear on whether increasing data liquidity leads to less volatile markets. Agreed that these implications are not direct, but they follow from the interpretation of the various forms of EMH.

Agreed with you that strong-form EMH claims that market prices reflect all available information. In that scenario, I would argue that increased “data liquidity” should have no effect on market prices, barring any sort of government-regulation impairment, since that information should already have been “priced in.” From my understanding, EMH doesn’t claim that prices are a reflection of DCF analyses.

Yet if one takes the weak or semi-strong form of EMH, then increased “data liquidity” should lead to a less volatile stock market.

Under semi-strong EMH, prices would instantly change to reflect the changing data. If “data liquidity” were nearly perfect, I would argue that unforeseen changes or aberrations in the data would be far rarer, since instantaneous access would allow one to see changes in a trend even in their nascent stages. For example, if we could get a real-time, continuous calculation of US GDP growth, the dramatic changes that can occur from quarter to quarter could be foreseen. It is very unlikely that if we went from 2% growth in Q2 to 5% growth in Q3, most of the growth in Q3 came from the first day of the new quarter. If the data is published and tracked continuously, our time horizons for analysis would change in such a way that significant variations or surprises should occur less often. And since surprises would occur less often, there should be less volatility in a “liquid data” environment under semi-strong EMH, since volatility is positively correlated with unforeseen events or market environments.
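
To put rough numbers on that intuition, a toy nowcasting sketch (the daily growth process is entirely made up; it just ramps gradually from 2% toward 5% within the quarter):

```python
import numpy as np

rng = np.random.default_rng(42)
days = 63  # trading days in a quarter

# Hypothetical daily annualized growth readings: Q2 steady near 2%,
# Q3 ramping gradually toward 5% rather than jumping on day one.
q2_daily = rng.normal(2.0, 0.3, days)
q3_daily = rng.normal(np.linspace(2.0, 5.0, days), 0.3)
q2_print = q2_daily.mean()
q3_print = q3_daily.mean()

# Quarterly-release world: the market's prior for Q3 is just the Q2 print,
# so the whole change lands as a single surprise on release day.
one_shot_surprise = abs(q3_print - q2_print)

# Real-time world: each day the nowcast is the quarter-to-date average,
# and the "surprise" is the largest single-day revision to that nowcast.
nowcasts = np.cumsum(q3_daily) / np.arange(1, days + 1)
daily_revisions = np.abs(np.diff(np.concatenate(([q2_print], nowcasts))))

print(f"one-shot quarterly surprise: {one_shot_surprise:.2f} pp")
print(f"largest daily nowcast move:  {daily_revisions.max():.2f} pp")
```

Under these assumptions, the single release-day surprise is several times larger than any one-day revision of the running nowcast, which is the sense in which continuous data could dampen volatility.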

In a weak EMH environment, the same sort of scenario would apply. The distribution of information could occur faster, but the information would be new and thus could affect the market. Yet if everyone thinks the same way (which, from my understanding, is essential to EMH), they will react to the new information in the same manner and should update their models to account for it in exactly the same way. While there could be significant changes from second to second, chances are the dramatic changes would not happen in such a short period of time, and thus the increased speed of acquiring information and its transparency would yield a less volatile market.


#9

Increased data liquidity might introduce increased complexity that actually reduces our ability to properly connect premise to conclusion, as more data increases the number of dependencies and “unknowns”. The burden then moves from the data on hand to the model used to analyze it. Thus, we reach points of diminishing returns where our results are not limited by the liquidity of the data, but by the models we currently use to analyze it.
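
As one illustration of that diminishing-returns point (my own toy example, not anything from the thread): a much more flexible model fit to the same noisy observations can generalize worse than a simple one.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical noisy observations of a simple underlying relationship y = 2x.
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = 2.0 * x_train + rng.normal(0, 0.2, 20)
x_test = np.sort(rng.uniform(0, 1, 200))
y_test = 2.0 * x_test + rng.normal(0, 0.2, 200)

# A simple model vs. a much more complex one fit to the same data:
# the complex fit chases the noise and does worse out of sample.
for degree in (1, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d} polynomial, out-of-sample MSE: {mse:.4f}")
```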

It’s why, even today, our most elegant solutions to problems are often approximations built on simple yet “true” assumptions, rather than extremely complicated models ingesting massive amounts of data, where the links between data and conclusion are less sure.