Robert Flood and Miles Kimball on the Status of the Efficient Markets Theory

Robert Flood is an economist famous for his study of asset bubbles. Links to many of his papers can be found here. 

My post “Robert Shiller: Against the Efficient Markets Theory” started a lively discussion on my Facebook wall (which is totally public). I added my discussion with Dennis Wolfe and a summing up by Richard Manning to “Robert Shiller: Against the Efficient Markets Theory” itself, but I thought the discussion with Robert Flood deserved its own post. See what you think:

Robert: This stuff is fun to talk about without a model, but finding one that works so you can use it for testing is harder. The stuff without a model says nothing about data so is nice for talk shows.

Miles: Any model we would write down at this point would be drastically wrong, so it would not be of much immediate practical value. What we need first is a suite of survey and experimental tools for measuring all the narrative forces that Bob Shiller is talking about.

Robert: As I have said, this is fun stuff. I just wish you’d get past the Efficient Markets thing. It’s undefined w/o a model and you do not want to talk about models - neither do I. The “stories organizing” notion is as good as anything else. I look forward to seeing where it goes. John Cochrane aside, the SDF approach looks like a dead end to me. It’s killing Macro and does not seem to do much for Finance.

Miles: There is no lack of efficient markets theory models that describe the way the world isn’t. Take your pick. In class today, I talked about the no-trade theorems, for example. In the real world, 95% of all trading volume cannot be explained if you insist that everyone has the same expectations.
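
To spell out the logic behind the no-trade point, here is a minimal textbook-style sketch in the spirit of the Milgrom and Stokey result; the notation (u_i for agent i's utility, c^i for the initial allocation, x^i for a private signal, tau^i for a trade) is purely illustrative:

```latex
\[
\text{common prior} \;+\; \text{Bayesian updating} \;+\; \text{ex-ante Pareto-efficient allocation } (c^i)
\;\Longrightarrow\;
\text{no trade } (\tau^i),\ \textstyle\sum_i \tau^i = 0,\ \text{with }
\mathbb{E}\!\big[u_i(c^i+\tau^i)\,\big|\,x^i\big] \ge \mathbb{E}\!\big[u_i(c^i)\,\big|\,x^i\big]\ \forall i
\text{ (strict for some } i\text{).}
\]
```

In words: if everyone shares the same prior and processes signals the same way, the arrival of private information can move prices, but it generates zero trading volume, which is why observed volume is so hard to square with homogeneous expectations.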

Robert: Now you are talking. The issue you mention is a problem with Rep Agent - RA - not with EM. Indeed, having problems with RA gives an immediate research strategy - no RA - information dispersion, taste dispersion, life span dispersion, information discovery, transactions costs, rules of thumb… In my view EM is not a hypothesis, it is an assumption about how people behave, and not just in financial markets.

Miles: None of information dispersion, taste dispersion, life span dispersion, information discovery, or transactions costs can possibly explain the volume we see. Only different people processing the available information differently can possibly yield the volume we see. Of your list, only “rules of thumb” is in this category, but in reality there are many people very actively processing the same information in different ways to come to different opinions. That is a failure of rational expectations. Without rational expectations, there is no efficient markets theory left, since the EMT logic runs from (approximately?) perfect competition in asset markets and (approximately?) rational expectations.
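
To make that chain explicit, a standard way to write it down (with illustrative notation: P_t the price, D_{t+1} the dividend, r the required return, Omega_t the publicly available information set) is that competitive pricing plus rational expectations gives

```latex
\[
P_t \;=\; \frac{\mathbb{E}\big[\,P_{t+1} + D_{t+1} \mid \Omega_t\,\big]}{1+r}
\qquad\Longrightarrow\qquad
\mathbb{E}\!\left[\frac{P_{t+1} + D_{t+1}}{P_t} - (1+r) \;\middle|\; \Omega_t\right] \;=\; 0 ,
\]
```

so expected excess returns conditional on available information are zero. If people process Omega_t differently, neither equality is implied, which is the sense in which no efficient markets theory is left.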

Robert: Agreed, volume is a real issue. I think it has something to do with the way we have structured compensation. Why is different processing of information an RE failure? People have very different experiences, different abilities and therefore different costs and therefore process things differently. The only failure is the failure by definition of RA. Forget econ for a moment. Look at politics. The dispersion of beliefs is, I think, much wider than the dispersion of information.

Miles: The assumption of rational expectations is the assumption of perfect information processing, given the information you have in front of you. There was a time half a century ago when economists thought that imperfect information and imperfect information processing were similar issues, but technical advances have made it clear that imperfect information can be dealt with by nice extensions of standard theory. Not so for imperfect information processing. Methodologically, that is a radical departure from standard theory, though a necessary one for many applications, since people in the real world are not infinitely intelligent and many real-world economic decisions are quite difficult computationally and conceptually - difficult enough to tax the abilities even of PhD economists, let alone people who don’t love solving optimization problems. I raised some of these issues in my post “The Unavoidability of Faith.”

Robert: Sure. The Muth model, Lucas, Sargent, Sharpe, etc. had free relevant information - including full information about the model generating outcomes and costless processing. So what? Expand the framework to include all sorts of costs and you have a bigger model, but that does not make people behave stupidly. The guys in the model will use their history (goodbye Markov) to process things until they think (Bayes comes in here) it’s not worth processing more. (Oddly, this is Peter Garber’s completely incomprehensible thesis written under Lucas.)
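
The stopping rule Robert is gesturing at can be put roughly as follows; this is a stylized rendering, not a claim about what is in Garber's thesis, with b_n the agent's current beliefs, h_n the accumulated history, V(.) the value of the best decision given those beliefs, and c_n the marginal cost of one more round of processing:

```latex
\[
\text{keep processing at step } n
\quad\Longleftrightarrow\quad
\mathbb{E}\big[\,V(b_{n+1}) - V(b_n) \,\big|\, h_n\,\big] \;>\; c_n .
\]
```

The dependence on the whole history h_n, rather than just a current state, is the sense in which the simple Markov structure drops out.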

Miles: I agree that people do not generally behave stupidly. My point is that to this day, our standard technical tools depend crucially on them being infinitely intelligent. There is a reason Peter Garber’s thesis was not easy to understand. Dealing with imperfect information *processing* is a *much* bigger technical leap than dealing with imperfect information. This is one of the themes of my paper “Cognitive Economics” that I am giving as the keynote speech at the Japanese Economic Review Conference in Tokyo in August.

Robert: OK. I am happy to agree on the NSS way of thinking (NSS = Not So Stupid). In my view, that’s all EM or RE says. Remember where we came from - no expectations, static expectations, adaptive expectations.

Update: Willem Buiter writes on the Facebook version of this post:

Willem H. Buiter: Efficient Markets Theory is an obvious empirical failure. Unfortunately, the alternative is a swamp of mutually contradictory but non-refutable (i.e., non-scientific - long live Popper) anecdotes.