He doesn’t go the whole distance [like McCloskey did years ago] and claim that 95% of all published econometrics is garbage, and certainly doesn’t do the Austrian thing and reject the whole sordid business as not even economics, but he does point out where the huge gaping logical holes are in econometrics.
And it's all written in clear, jargon-free language anyone can understand.
My reason for introducing the notion of theoretical cherry picking is to emphasize that since a given result can almost always be supported by a theoretical model, the existence of a theoretical model that leads to a given result in and of itself tells us nothing definitive about the real world. Though this is obvious when stated baldly like this, in practice various claims are often given credence – certainly more than they deserve – simply because there are theoretical models in the literature that "back up" these claims. In other words, the results of theoretical models are given an ontological status they do not deserve. In my view this occurs because models and specifically their assumptions are not always subjected to the critical evaluation necessary to see whether and how they apply to the real world.
What’s astounding are the assumptions accepted in the econometrics community that he has to stoop to refute. Smiling Dave knew it was bad, but he didn’t know it was that bad. Here are a few that he mentions:
1. It’s OK to apply the model’s results to the real world without bothering to check whether the model is even remotely connected to reality. He gives many examples of this.
2. Models should only be judged by their predictions and not by the realism of their assumptions. Milton Friedman.
3. All models have equal standing until definitive empirical tests are conducted.
4. Quantum mechanics makes no intuitive sense and is still good science, so why can’t econometrics get away with the same thing?
5. It’s OK to distort reality in our model to make the math easier.
6. Gross circular reasoning. The process goes like this. Assume solution X is optimal. Root around until you have a model based on various assumptions that will conclude X is optimal. Then say you have proven X is optimal because your model shows it to be so.
7. CFOs and other managers solve extremely complex programming problems that ultimately must be solved numerically on a computer, using programs that took a researcher weeks or months to write. And they do this in their heads, instantly, with no computer and with no knowledge whatsoever of math. It’s also OK to assume that everybody does this, all the time.
8. If I get very precise results in my simplified model, then I will get very precise results applying the model to the real world.
9. The better model is the more complicated one.
10. The better model is the one that uses deeper math.
Devil’s Advocate: But Dave, surely this guy is a nobody, some uneducated bum with no standing in the mainstream.
SD: Here’s some info about him: BA, MPhil and PhD in Economics from Yale University. He has held an academic appointment at Stanford since 1981, has consulted for various corporations and banks, and has been involved in developing risk models and optimization software for use by portfolio managers. A member of the Stanford Graduate School of Business research faculty, he is C.O.G. Miller Distinguished Professor of Finance, and Senior Associate Dean for Academic Affairs. Somebody even gave him a free law degree. Take a look at the long list of articles in journals and the like that he’s published right here.