The Next Obamacare Fiasco

Thousands Of Consumers Get Insurance Cancellation Notices Due To Health Law Change – Kaiser Health News

Some health insurance gets pricier as Obamacare rolls out – Los Angeles Times

Kaiser:

Health plans are sending hundreds of thousands of cancellation letters to people who buy their own coverage, … The main reason insurers offer is that the policies fall short of what the Affordable Care Act requires starting Jan. 1.

Florida Blue, for example, is terminating about 300,000 policies, about 80 percent of its individual policies in the state. Kaiser Permanente in California has sent notices to 160,000 people – about half of its individual business in the state. Insurer Highmark in Pittsburgh is dropping about 20 percent of its individual market customers, while Independence Blue Cross, the major insurer in Philadelphia, is dropping about 45 percent.

LA Times:
Blue Shield of California sent roughly 119,000 cancellation notices out in mid-September, about 60 percent of its individual business. About two-thirds of those policyholders will see rate increases in their new policies….
Middle-income consumers face an estimated 30% rate increase, on average, in California due to several factors tied to the healthcare law. Some may elect to go without coverage if they feel prices are too high. Penalties for opting out are very small initially. Defections could cause rates to skyrocket if a diverse mix of people don't sign up for health insurance.
This is interesting. Obamacare could actually increase the number of people without insurance, because you are not allowed to keep (consumer) or sell (insurance company) simple cheap insurance.


If you’re healthy and have been paying for individual insurance all along – largely because you know people with preexisting conditions can’t get insurance, and you want to lock in your right to continue your policy should you get sick – there is now a strong incentive to drop out.

The government has just wiped out the value of those premiums you paid all these years – you don't need the right to buy health insurance anymore, as you can always get it later. You're seeing a large increase in premiums for benefits you don't want and to cross-subsidize other people. The mandate penalties are almost certainly going to be pushed back, the penalties are a good deal less than the cost of health insurance (which you can always get later if you get sick), and the IRS has already said it's not going after people who don't pay them. Dropping out of individual health insurance starts to make a lot of sense.

This was bad enough on its own. But if insurance companies cancel these people’s policies, all at once,  it’s dramatically worse. It would be hard to design a more effective “nudge” to get such people to think about it and conclude that dropping health insurance is a good idea.

The overall numbers may not change. Other reports suggest that poor and sick people have been signing up in droves, mostly to get on the expanded Medicaid. But it's an obvious fiscal disaster if Obamacare only attracts the poor and sick, does not attract the young and healthy – and now drives away the healthy people who were provident enough to buy individual health insurance!

Why is this happening? A curious tidbit:
All these cancellations were prompted by a requirement from Covered California, the state’s new insurance exchange. The state didn’t want to give insurance companies the opportunity to hold on to the healthiest patients for up to a year, keeping them out of the larger risk pool that will influence future rates.
The destruction of the off-exchange individual insurance market is deliberate.

The best quote of the bunch, from the LA Times:
Pam Kehaly, president of Anthem Blue Cross in California, said she received a recent letter from a young woman complaining about a 50% rate hike related to the healthcare law.

“She said, ‘I was all for Obamacare until I found out I was paying for it,’” Kehaly said.
This realization will come soon to millions more.

Bob Shiller's Nobel

As with Lars Hansen and Gene Fama, Bob Shiller has also produced a span of interesting, innovative work that I can't possibly cover here. Again, don't let a Nobel Prize for one contribution overshadow the rest. In addition to volatility, Bob did (with Grossman and Melino) some of the best and earliest work on the consumption model, and his work on real estate and innovative markets is justly famous. But space is limited, so again I'll just focus on the volatility and predictability of returns, which are at the core of the Nobel.

Source: American Economic Review
The graph on the left comes from Bob’s June 1981  American Economic Review paper. Here Bob contrasts the actual stock price p with the “ex-post rational” price p*, which is the discounted sum of actual dividends. If price is the expected discounted value of dividends, then price should vary less than the actual discounted value of ex-post dividends.  Yet the actual price varies tremendously more than this ex-post discounted value.
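
The logic of the bound, in one line (my sketch of the standard argument): if the price is the rational forecast of the ex-post value, the forecast error is uncorrelated with the forecast, so
\[ p_t = E_t\, p_t^{*} \quad\Rightarrow\quad \operatorname{var}(p_t^{*}) = \operatorname{var}(p_t) + \operatorname{var}(p_t^{*}-p_t) \;\ge\; \operatorname{var}(p_t). \]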

This was a bombshell. It said to those of us watching at the time (I was just starting graduate school) that you Chicago guys are missing the boat. Sure, you can't forecast stock returns. But look at the wild fluctuations in prices! That can't possibly be efficient. It looks like a whole new category of test, an elephant in the room that the Fama crew somehow overlooked while running little regressions. It looks like prices are incorporating information – and then a whole lot more! Shiller interpreted it as psychological and social dynamics, waves of optimism and pessimism.


(Interestingly, Steve LeRoy and Richard Porter also wrote an essentially contemporaneous paper on volatility bounds in the May 1981 Econometrica, "The Present-Value Relation: Tests Based on Implied Variance Bounds," which has been pretty much forgotten. I think Shiller got a lot more attention because of the snazzy graph, and the seductive behavioral interpretation. This is not a criticism. As I've said of the equity premium, knowing what you have and marketing it well matters. Deirdre McCloskey tells us that effective rhetoric is important, and she's right. Most great work emerges as the star among a lot of similar efforts. Young scholars take note.)

But wait, you say. "Detrended by an exponential growth factor?" You're not allowed to detrend a series with a unit root. And what exactly is the extra content, overlooked by Fama's return forecasting regressions? Aha, a 15-year investigation took off, as a generation of young scholars dissected the puzzle. Including me. Well, you get famous in economics for inducing lots of people to follow you, and Shiller (like Fama and Hansen) is justly famous here by that measure.

My best attempt at summarizing the whole thing is in the first few pages of "Discount Rates," and in the theory section of that paper. For a better explanation, look there. Here is the digested version.

Along the way I wrote "Volatility Tests and Efficient Markets" (1991), establishing the equivalence of volatility tests and return regressions; "Explaining the Variance of Price-Dividend Ratios" (1992), an up-to-date volatility decomposition; "Permanent and Transitory Components of GNP and Stock Prices" (1994); "The Dog That Did Not Bark" (2008); three review papers; an extended chapter in my textbook "Asset Pricing" covering volatility, bubbles and return regressions; and last but not least an economic model that tries to explain it all, "By Force of Habit" (1999) with John Campbell. And that's just me. Read the citations in the Nobel Committee's "Understanding Asset Prices." John Campbell's list is three times as long and distinguished.

So, in the end, what do we know? A modern volatility test starts with the Campbell-Shiller linearized present value relation
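In the notation defined in the next sentence, the relation reads roughly as follows (my rendering of the standard Campbell-Shiller form):
\[ p_t - d_t = \sum_{j=1}^{\infty}\rho^{\,j-1}\,\Delta d_{t+j} \;-\; \sum_{j=1}^{\infty}\rho^{\,j-1}\, r_{t+j} \;+\; \lim_{j\to\infty}\rho^{\,j}\,(p_{t+j}-d_{t+j}). \]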
Here p = log price, d = log dividend, r = log return, and rho is a constant, about 0.96. This is just a clever linearization of the rate of return – you can rearrange it to read that the long-run return equals final price less initial price plus intermediate dividends. Conceptually, it is no different from reorganizing the definition of return to
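something like the one-period identity (my illustration, with capital letters denoting levels rather than logs)
\[ R_{t+1} \equiv \frac{P_{t+1}+D_{t+1}}{P_t} \quad\Longleftrightarrow\quad P_t = \frac{D_{t+1}+P_{t+1}}{R_{t+1}}, \]
which, iterated forward, delivers the same kind of present value statement in levels rather than logs.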
You can also read the first equation as a present value formula. The first term says prices are higher if dividends are higher. The second term says prices are higher if returns are lower – the discount rate effect. The third term represents "rational bubbles." A price can be high with no dividends if people expect the price to grow forever.

Since it holds ex-post, it also holds ex-ante – the price must equal the expected value of the right hand side. And now we can talk about volatility: the price-dividend ratio can only vary if expected dividend growth, expected returns, or the expected bubble vary over time.

Likewise, multiply both sides of the present value identity by p-d (demeaned) and take expectations. On the left, you have the variance of p-d. On the right, you have the amount by which p-d forecasts dividend growth, returns, or future p-d. The price-dividend ratio can only vary if it forecasts future dividend growth, future returns, or its own long-run future.
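
In symbols, a sketch of that calculation (my rendering, with terms in the same order as above):
\[ \operatorname{var}(p_t-d_t) = \operatorname{cov}\Big(p_t-d_t,\ \sum_{j=1}^{\infty}\rho^{\,j-1}\Delta d_{t+j}\Big) - \operatorname{cov}\Big(p_t-d_t,\ \sum_{j=1}^{\infty}\rho^{\,j-1} r_{t+j}\Big) + \lim_{j\to\infty}\rho^{\,j}\operatorname{cov}\big(p_t-d_t,\ p_{t+j}-d_{t+j}\big), \]
and dividing through by var(p-d) turns each term into the corresponding long-run regression coefficient on p-d.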

The question for empirical work is, which is it? The surprising answer: it’s all returns. You might think that high prices relative to current dividends mean that markets expect dividends to be higher in the future. Sometimes, you’d be right. But on average, times of high prices relative to current dividends (earnings, book value, etc.) are not followed by higher future dividends. On average, such times are followed by lower subsequent long-run returns.

Shiller’s graph we now understand as such a regression: price-dividend ratios do not forecast dividend growth. Fortunately, they do not forecast the third term, long-term price-dividend ratios, either – there is no evidence for “rational bubbles.” They do forecast long-run returns. And the return forecasts are enough to exactly account for price-dividend ratio volatility!

Starting in 1975 and continuing through the late 1980s, Fama and coauthors, especially Ken French, were running regressions of long-run returns on price-dividend ratios, and finding that returns were forecastable and dividend growth (or the other "complementary" variables) were not. So, volatility tests are not something new and different from regressions. They are exactly the same thing as long-run return forecasting regressions. Return forecastability is exactly enough to account for price-dividend volatility. Price-dividend volatility is another implication of return forecastability – and an interesting one at that! (Lots of empirical work in finance is about seeing the same phenomenon through different lenses that show its economic importance.)
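
For readers who have never run one, here is a toy version of such a long-horizon regression in Python. The series are synthetic placeholders, and it omits the overlapping-observation standard-error corrections (Hansen-Hodrick, Newey-West) that real work requires; it is only meant to show the mechanics.

```python
import numpy as np

# Toy long-horizon forecasting regression: regress the cumulative log return
# over the next k years on today's log dividend-price ratio.
rng = np.random.default_rng(0)
T = 80                                              # years of synthetic data
dp = np.cumsum(0.1 * rng.standard_normal(T))        # placeholder log d-p series
r = 0.1 * dp + 0.2 * rng.standard_normal(T)         # placeholder log returns

k = 5                                               # forecast horizon in years
y = np.array([r[t + 1:t + 1 + k].sum() for t in range(T - k)])  # long-run return
x = dp[:T - k]                                      # forecasting variable

X = np.column_stack([np.ones_like(x), x])           # add a constant
b, *_ = np.linalg.lstsq(X, y, rcond=None)           # OLS
print("long-horizon slope on d-p:", round(b[1], 3))
```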

And the pattern is pervasive across markets. No matter where you look – stocks, bonds, foreign exchange, and real estate – high prices mean low subsequent returns, and low prices (relative to "fundamentals" like earnings, dividends, rents, etc.) mean high subsequent returns.

These are the facts, which are not in debate. And they are a stunning reversal of how people thought the world worked in the 1970s. Constant discount rate models are flat out wrong.

So, does this mean markets are "inefficient?" Not by itself. One of the best parts of Fama's 1970 essay was to prove a theorem: any test of efficiency is a joint hypothesis test with a "model of market equilibrium." It is entirely possible that the risk premium varies through time. In the 1970s, constant expected returns were a working hypothesis, but the theory long anticipated time-varying risk premiums – it was at the core of Merton's 1973 ICAPM – and it surely makes sense that the risk premium might vary through time.

So here is where we are: we know the expected return on stocks varies a great deal through time. And we know that time-variation in expected returns varies exactly enough to account for all the puzzling price volatility. So what is there to argue about? Answer: where that time-varying expected return comes from.

To Fama, it is a business cycle related risk premium. He (with Ken French again) notices that low prices and high expected returns come in bad macroeconomic times, and vice versa. December 2008 was a recent time of low price/dividend ratios. Is it not plausible that the average investor, like our endowments, said, "Sure, I know stocks are cheap, and the long-run return is a bit higher now than it was. But they are about to foreclose on the house, repossess the car, take away the dog, and I might lose my job. I can't take any more risk right now." Conversely, in the boom, when people "reach for yield," is it not plausible that people say "Yeah, stocks aren't paying a lot more than bonds. But what else can I do with the money? My business is going well. I can take the risk now."

To Shiller, no. The variation in expected returns is too big, according to him, to be explained by rational variation in risk premiums over the business cycle. He sees irrational optimism and pessimism in investors' heads. Shiller's followers somehow think the government is more rational than investors and can and should stabilize these bubbles. Noblesse oblige.

Finally, the debate over “bubbles” can start to make some sense. When Shiller says “bubble,” in light of the facts, he can only mean “time-variation in the expected return on stocks, less bonds, which he believes is disconnected from rational variation in the risk premium needed to attract investors.” When Fama says no “bubble,” he means that the case has not been proven, and it seems pretty likely the variation in stock expected returns does correspond to rational, business-cycle related risk premiums. Defining a “bubble,” clarifying what the debate is about, and settling the facts, is great progress.

How are we to resolve this debate? At this level, we can't. That's the whole point of Fama's joint hypothesis theorem and its modern descendants (the discount-factor existence theorems). "Prices are high, risk aversion must have fallen" is as empty as "prices are high, there must be a wave of irrational optimism." And as empty as "prices are high, the Gods must be pleased." To advance this debate, one needs an economic or psychological model that independently measures risk aversion or optimism/pessimism, and predicts when risk premiums are high and low. If we want to have Nobels in economic "science," we do not stop at story-telling about regressions.

One example: John Campbell and I (interestingly, Shiller was John's PhD adviser and frequent coauthor) wrote such a model, in "By Force of Habit." It uses the history of consumption and an economic model as an independent measure of time-varying risk aversion, which rises in recessions. Like any model that makes rejectable hypotheses, it fits some parts of the data and not others. It's not the end of the story. It is, I think, a good example of the kind of model one has to write down to make any progress.

I am a little frustrated by behavioral writing that has beautiful interpretive prose, but no independent measure of fad, or at least no number of facts explained greater than number of assumptions made. Fighting about who has the more poetic interpretation of the same regression, in the face of a theorem that says both sides can explain it, seems a bit pointless. But an emerging literature is trying to do with psychology what Campbell and I did with simple economics. Another emerging literature on "institutional finance” ties risk aversion to internal frictions in delegated management, and independent measures such as intermediary leverage.

That's where we are. Which is all a testament to Fama, Shiller, Hansen, and asset pricing. These guys led a project that assembled a fascinating and profound set of facts. Those facts changed 100% from the 1970s to the 1990s. We agree on the facts. Now is the time for theories to understand those facts. Real theories, that make quantitative predictions (it is a quantitative question: how much does the risk premium vary over time?), and more predictions than assumptions.

If it all were settled, their work would not merit the huge acclaim that it has, and deserves.

Update: I’m shutting down most comments on these. For this week, let’s congratulate the winners, and debate the issues some other day.

Lars Hansen's Nobel

Lars has done so much  deep and pathbreaking research, that I can’t begin to even list it, to say nothing of explain the small part of it that I understand.  I wrote whole chapters of my textbook “Asset Pricing” devoted to just one Hansen paper. Lars writes for the ages, and it often takes 10 years or more for the rest of us to understand what he has done and how important it is.

So I will just try to explain GMM and the consumption estimates, the work most prominently featured in the Nobel citation. Like all of Lars’ work, it looks complex at the outset, but once you see what he did, it is actually brilliant in its simplicity.

The GMM approach basically says, anything you want to do in statistical analysis or econometrics can be written as taking an average.

For example, consider the canonical consumption-based asset pricing model, which is where he and Ken Singleton took GMM out for its first big spin. The model says, we make sense out of asset returns – we should understand the large expected-return premium for holding stocks, and why that premium varies over time (we'll talk about that more in the upcoming Shiller post) – by the statement that the expected excess return, discounted by marginal utility growth, should be zero.
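
In standard notation (my rendering, assuming power utility), that condition is
\[ E_t\!\left[\beta\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}\left(R_{t+1}-R^{f}_{t+1}\right)\right] = 0, \]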

where Et means conditional expectation, beta and gamma capture investors' impatience and risk aversion, c is consumption, R is a stock or bond return, and Rf is the riskfree bond return. E(R-Rf) is the premium – how much you expect to earn on a risky asset over a riskfree one, as compensation for risk. (Non-economists, just ignore the equations. You'll get the idea.) Expected returns vary over time and across assets in puzzling ways, but the expected discounted excess return should always be zero.

How do we take this to data? How do we find parameters beta and gamma that best fit the data? How do we check this over many different times and returns, to see if those two parameters can explain lots of facts? What do we do about that conditional expectation Et, conditional on information in people’s heads? How do we bring in all the variables that seem to forecast returns over time (D/P) and across assets (value, size, etc.)? How do we handle the fact that return variance changes over time, and consumption growth may be autocorrelated?

When Hansen wrote, this was a big headache. No, suggested Lars. Just multiply by any variable z that you think forecasts returns or consumption, and take the unconditional average of this conditional average, and the model predicts  that the unconditional average obeys
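(in my notation)
\[ E\!\left[\beta\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}\left(R_{t+1}-R^{f}_{t+1}\right) z_t\right] = 0. \]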
So, just take this average in the data. Now, you can do this for lots of different assets R and lots of different “instruments” z, so this represents a lot of averages. Pick beta and gamma that make some of the averages as close to zero as possible. Then look at the other averages and see how close they are to zero.
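
A bare-bones sketch of the idea in Python. The data arrays are synthetic placeholders; for simplicity I minimize an identity-weighted quadratic form in all the moments at once and then inspect how close each is to zero, rather than Hansen's efficient two-step weighting, and I skip standard errors entirely.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data: T periods, N excess returns, K instruments.
rng = np.random.default_rng(0)
T, N, K = 200, 3, 2
cons_growth = 1.02 + 0.02 * rng.standard_normal(T)                   # c_{t+1}/c_t
excess_ret  = 0.05 + 0.15 * rng.standard_normal((T, N))              # R - Rf
instruments = np.column_stack([np.ones(T), rng.standard_normal(T)])  # z_t (incl. constant)

def moments(params):
    beta, gamma = params
    m = beta * cons_growth ** (-gamma)                # marginal utility growth
    # Discounted excess returns, interacted with each instrument, averaged over time.
    g = (m[:, None] * excess_ret)[:, :, None] * instruments[:, None, :]
    return g.reshape(T, N * K).mean(axis=0)           # N*K sample averages

def objective(params):
    g = moments(params)
    return g @ g                                      # identity-weighted quadratic form

# Pick beta and gamma to make the averages as close to zero as possible.
est = minimize(objective, x0=np.array([0.99, 2.0]), method="Nelder-Mead")
print("estimates (beta, gamma):", est.x)
print("remaining moments:", moments(est.x))           # how close to zero are they?
```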

Lars worked out the statistics of this procedure – how close should the other averages be to zero, and what's a good measure of the sample uncertainty in the beta and gamma estimates – taking into account a wide variety of statistical problems you could encounter. The latter part and the proofs make the paper hard to read. When Lars says "general," Lars means General!

But using the procedure is actually quite simple and intuitive. All of econometrics comes down to a generalized version of the formula sigma/root T for standard errors of the mean. (I recommend my book “Asset Pricing” which explains how to use GMM in detail.)
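
To be slightly more concrete (a sketch in my notation): each moment is the sample mean of some series u_t; write g_T(theta) for that vector of sample means. Then
\[ \sqrt{T}\, g_T(\theta_0) \;\to\; \mathcal{N}(0,\,S), \qquad S=\sum_{j=-\infty}^{\infty}E\!\left[u_t u_{t-j}'\right], \]
so the variance of the sample mean is S/T – sigma squared over T, generalized to serially correlated vectors – and standard errors for beta and gamma follow by the usual first-order (delta method) expansion.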

Very cool.

The results were not that favorable to the consumption model. If you look hard, you can see the equity premium puzzle – Lars and Ken needed huge gamma to fit the difference between stocks and bonds, but then couldn’t fit the level of interest rates.  But that led to an ongoing search – do we have the right utility function? Are we measuring consumption correctly? And that is now bearing fruit.

GMM is really famous because of how it got used. We get to test parts of the model without writing down the whole model. Economic models are quantitative parables, and we get to examine and test the important parts of the parable without getting lost in irrelevant details.

What do these words mean? Let me show you an example. The classic permanent income model is a special case of the above, with quadratic utility. If we model income y as an AR(1) with coefficient rho, then the permanent income model says consumption should follow a random walk with innovations equal to the change in the present value of future income:
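In symbols, my rendering of that special case (the exact constant depends on timing conventions) is
\[ y_t = \rho\, y_{t-1} + \varepsilon_t, \qquad c_t - c_{t-1} = \frac{r}{1+r-\rho}\,\varepsilon_t. \]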


This is the simplest version of a “complete” model that I can write down. There are fundamental shocks, the epsilon; there is a production technology which says you can put income in the ground and earn a rate of return r, and there is an interesting prediction – consumption smooths over the income shocks.

Now, here is the problem we faced before GMM. First, computing the solutions of this sort of thing for real models is hard, and most of the time we can’t do it and have to go numerical. But just to understand whether we have some first-order way to digest the Fama-Shiller debate, we have to solve big hairy numerical models? Most of which is beside the point? The first equations I showed you were just about investors, and the debate is whether investors are being rational or not. To solve that, I have to worry about production technology and equilibrium?

Second, and far worse, suppose we want to estimate and test this model. If we follow the 1970s formal approach, we immediately have a problem. This model says that the change in consumption is perfectly correlated with income minus rho times last year’s income. Notice the same error epsilon in both equations. I don’t mean sort of equal, correlated, expected to be equal, I mean exactly and precisely equal, ex-post, data point for data point.

If you hand that model to any formal econometric method (maximum likelihood), it sends you home before you start. There is no perfect correlation in the data, for any parameter values. This model is rejected. Full stop.

Wait a minute, you want to say. I didn’t mean this model is a complete perfect description of reality. I meant it is a good first approximation that captures important features of the data. And this correlation between income shocks and consumption shocks is certainly not an important prediction.  I don’t think income is really an AR(1), and most of all I think agents know more about their income than my simple AR(1). But I can’t write that down, because I don’t see all their information. Can’t we just look at the consumption piece of this and worry about production technology some other day?

In this case, yes. Just look at whether consumption follows a random walk. Run the change in consumption on a bunch of variables and see if they predict consumption. This is what Bob Hall did in his famous test, the first test of a part of a model that does not specify the whole model, and the first test that allows us to "condition down" and respect the fact that people have more information than we do. (Lars too walks on the shoulders of giants.) Taking the average of my first equation is the same idea, much generalized.
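
In regression form, Hall's test is roughly (my summary)
\[ c_{t+1}-c_t = a + b' z_t + \eta_{t+1}, \qquad H_0:\ b=0, \]
for any set of variables z_t known at time t.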

So the GMM approach allows you to look at a piece of a model – the intertemporal consumption part, here – without specifying the whole rest of the model – production technology, shocks, information sets. It allows you to focus on the robust part of the quantitative parable – consumption should not take big predictable movements – and gloss over the parts that are unimportant approximations – the perfect correlation between consumption and income changes.  GMM is a tool for matching quantitative parables to data in a disciplined way.

This use of GMM is part of a large and, I think, very healthy trend in empirical macroeconomics and finance. Roughly at the same time, Kydland and Prescott started “calibrating” models rather than estimating them formally, in part for the same reasons. They wanted to focus on the “interesting” moments and not get distracted by the models’ admitted abstractions and perfect correlations.

Formal statistics asks "can you prove that this model is not a 100% perfect representation of reality?" The answer is often "yes," but on a silly basis. Formal statistics does not allow you to ask "does this model capture some really important pieces of the picture?" Is the glass 90% full, even if we can prove it's missing the last 10%?

But we don't want to give up on statistics, which much of the calibration literature did. We want to pick parameters in an objective way that gives models their best shot. We want to measure how much uncertainty there is in those parameters. We want to know how precise our predictions for the "testing" moments are. GMM lets you do all these things. If you want to "calibrate" on the means (pick parameters by observations such as the mean consumption/GDP ratio, hours worked, etc.), then "test" on the variances (relative volatility of consumption and output, autocorrelation of output, etc.), GMM will let you do that. And it will tell you how much you really know about parameters (risk aversion, substitution elasticities, etc.) from those "means," and how accurate your predictions about "variances" are, including the degrees of freedom chewed up in estimation!

In asset pricing, similar pathologies can happen. Formal testing will lead you to focus on strange portfolios, thousands of percent long some assets and thousands of percent short others. Well, those aren't "economically interesting." There are bid/ask spreads, price pressure, short-sale constraints and so on. So, let's force the model to pick parameters based on interesting, robust moments, and let's evaluate the model's performance on the actual assets we care about, not some wild massive long-short ("minimum variance") portfolio.

Fama long ran OLS regressions when econometricians said to run GLS, because OLS is more robust.  GMM allows you to do just that sort of thing for any kind of model – but then correct the standard errors!

In sum, GMM is a tool, a very flexible tool. It has let us learn what the data have to say, refine models, understand where they work and where they don't, emphasize the economic intuition, and break out of the straitjacket of "reject" or "don't reject," to a much more fruitful empirical style.

Of course, it’s just a tool. There is no formal definition of an “economically interesting” moment, or a “robust” prediction. Well, you have to think, and read critically.

Looking hard at first but achieving a remarkable simplicity once you understand it is a key trait of Lars' work. GMM really is just applying sigma/root T (generalized) to all the hard problems of econometrics, once you make the brilliant step of recognizing that they can be mapped to a sample mean. His "conditioning information" paper with Scott Richard took me years to digest. But once you understand L2, the central theorem of asset pricing is "to every plane there is an orthogonal line." His operators in continuous time, and his new work on robust control and recursive preferences, share the same elegance.

The trouble with the Nobel is that it leads people to focus on the cited work. Yes, GMM is a classic. I got here in 1985 and everyone already knew it would win a Nobel some day. But don’t let that fool you, the rest of the Lars portfolio is worth studying too. We will be learning from it for years to come. Maybe this will inspire me to write up a few more of his papers. If only he would stop writing them faster than I can digest them.

Source: Becker-Friedman Institute
I won’t even pretend this is unbiased. Lars is a close friend as well as one of my best colleagues at Chicago. I learned most of what I know about finance by shuttling back and forth between Lars’ office and Gene Fama’s, both of whom patiently explained so many things to me. But they did so in totally different terms, and understanding what each was saying in the other’s language led me to whatever synthesis I have been able to achieve. If you like the book “Asset Pricing,” you are seeing the result. He is also a great teacher and devoted mentor to generations of PhD students.

(This is a day late, because I thought I'd have to wait a few more years, so I didn't have a Hansen essay ready to go. Likewise for Shiller; that will take a day or two. Thanks to Anonymous and Greg for reporting a typo in the equations.)

Update: I’m shutting down most comments on these posts. This week, let’s congratulate the winners, and discuss issues again next week.

Gene Fama's Nobel

(For a pdf version click here.)

Photo: Elizabeth Fama

Gene Fama’s Nobel Prize

Efficient Markets

Gene's first really famous contributions came in the late 1960s and early 1970s under the general theme of "efficient markets." "Efficient Capital Markets: A Review of Theory and Empirical Work" [15] is often cited as the central paper. (Numbers refer to Gene's CV.)

“Efficiency” is not a pleasant adjective or a buzzword. Gene gave it a precise, testable meaning. Gene realized that financial markets are, at heart, markets for information. Markets are “informationally efficient” if market prices today summarize all available information about future values. Informational efficiency is a natural consequence of competition, relatively free entry, and low costs of information in financial markets. If there is a signal, not now incorporated in market prices, that future values will be high, competitive traders will buy on that signal. In doing so, they bid the price up, until the price fully reflects the available information.

Like all good theories, this one sounds simple when stated in such an overly simplified form. The greatness of Fama's contribution does not lie in a complex "theory" (though the theory is, in fact, quite subtle and in itself a remarkable achievement). Rather, "efficient markets" became the organizing principle for 30 years of empirical work in financial economics. That empirical work taught us much about the world, and in turn affected the world deeply.

For example, a natural implication of market efficiency is that simple trading rules should not work, e.g. "buy when the market went up yesterday." This is a testable proposition, and an army of financial economists (including Gene, [4], [5], [6]) checked it. The interesting empirical result is that trading rules, technical systems, market newsletters and so on have essentially no power beyond that of luck to forecast stock prices. It's not a theorem, an axiom, or a philosophy; it's an empirical prediction that could easily have come out the other way, and sometimes did.


Similarly, if markets are informationally efficient, the "fundamental analysis" performed by investment firms has no power to pick stocks, and professional active managers should do no better than monkeys with darts at picking stock portfolios. This is a remarkable proposition. In any other field of human endeavor, we expect seasoned professionals systematically to outperform amateurs. But other fields are not so ruthlessly competitive as financial markets! Many studies checked this proposition. It's not easy. Among other problems, you only hear from the winners. The general conclusion is that markets are much closer to efficient here than anybody thought. Professional managers seem not to systematically outperform well-diversified passive investments. Again, it is a theory with genuine content. It could easily have come out the other way. In fact, a profession that earns its salary teaching MBA students could ask for no better result than to find that better knowledge and training lead to better investment management. Too bad the facts say otherwise.

If markets are informationally efficient, then corporate news such as an earnings announcement should be immediately reflected in stock prices, rather than set in motion some dynamics as knowledge diffuses. The immense "event study" literature, following [12], evaluates this question, again largely in the affirmative. Much of the academic accounting literature is devoted to measuring the effect of corporate events by the associated stock price movements, using this methodology.

Perhaps the best way to illustrate the empirical content of the efficient markets hypothesis is to point out where it is false. Event studies of the release of inside information usually find large stock market reactions. Evidently, that information is not incorporated ex-ante into prices. Restrictions on insider trading are effective. When markets are not efficient, the tests verify the fact.

These are only a few examples. The financial world is full of novel claims, especially that there are easy ways to make money. Investigating each “anomaly” takes time, patience and sophisticated statistical skill; in particular to check whether the gains were not luck, and whether the complex systems do not generate good returns by implicitly taking on more risk. Most claims turn out not to violate efficiency after such study.

But whether "anomalies" are truly there or not is beside the point for now. For nearly 40 years, Gene Fama's efficient market framework has provided the organizing principle for empirical financial economics. Random walk tests continue. For example, in the last few years, researchers have been investigating whether "neural nets" or artificial intelligence programs can forecast short-run stock returns, and a large body of research is dissecting the "momentum effect," a clever way of exploiting very small autocorrelations in stock returns to generate economically significant profits. Tests of active managers continue. For example, a new round of studies is examining the abilities of fund managers, focusing on new ways of sorting the lucky from the skillful in past data. Hedge funds are under particular scrutiny as they can generate apparently good returns by hiding large risks in rare events. Event studies are just as alive. For example, a large literature is currently using event study methodology to debate whether the initial public offerings of the 1990s were "underpriced" initially, leading to first-day profits for insiders, and "overpriced" at the end of the first day, leading to inefficiently poor performance for the next six months. It's hard to think of any other conceptual framework in economics that has proved so enduring.

Development and testing of asset pricing models and empirical methods

Financial economics is at heart about risk. You can get a higher return, in equilibrium, in an efficient market, but only if you shoulder more risk. But how do we measure risk? Once an investment strategy does seem to yield higher returns, how do we check whether these are simply compensation for greater risk?

Gene contributed centrally to the development of the theoretical asset pricing models such as the Capital Asset Pricing Model (CAPM) that measure the crucial components of risk ([9], [11], [13], [14], [16], [17], [20], [21], [26], [31], [75], [79]). [14] is a classic in particular, for showing how the CAPM could apply beyond the toy two period model.

However, Gene’s greatest contribution is again empirical. “Risk, Return and Equilibrium: Empirical Tests” with James MacBeth [25] stands out. The Capital Asset Pricing model specifies that assets can earn higher returns if they have greater “beta” or covariance with the market portfolio. This paper convincingly verified this central prediction of the CAPM.

Its most important contribution, though, lies in methods. Checking the prediction of the CAPM is difficult. This paper [25] provided the "standard solution" for all of the statistical difficulties that survives to this day. For example, we now evaluate asset pricing theories on portfolios, sorted on the basis of some characteristic, rather than using individual stocks; we often use 5-year rolling regressions to estimate betas. Most of all, The Journal of Finance in 2008 is still full of "Fama-MacBeth regressions," which elegantly surmount the statistical problem that returns are likely to be correlated across test assets, so N assets are not N independent observations. Gene's influence is so strong that even many of the arbitrary and slightly outdated parts of this procedure are faithfully followed today. What they lose in econometric purity, they gain by having become a well-tested and trusted standard.
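
For readers who have never seen one, here is a bare-bones Fama-MacBeth sketch in Python. The returns and the characteristic are synthetic placeholders; real applications estimate rolling betas first and often add further corrections to the standard errors.

```python
import numpy as np

# Placeholder panel: T months, N assets, one characteristic (e.g., estimated beta).
rng = np.random.default_rng(0)
T, N = 120, 25
char = rng.standard_normal((T, N))                       # right-hand-side characteristic
ret  = 0.5 * char + 2.0 * rng.standard_normal((T, N))    # asset returns (percent)

# Step 1: run a cross-sectional regression each period.
lambdas = np.empty(T)
for t in range(T):
    X = np.column_stack([np.ones(N), char[t]])
    coef, *_ = np.linalg.lstsq(X, ret[t], rcond=None)
    lambdas[t] = coef[1]                                 # period-t slope (premium estimate)

# Step 2: average the slopes over time; the time-series variation of the
# estimates gives the standard error, which respects cross-sectional correlation.
lam_hat = lambdas.mean()
se = lambdas.std(ddof=1) / np.sqrt(T)
print(f"premium estimate {lam_hat:.3f}, t-stat {lam_hat / se:.2f}")
```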

“The adjustment of stock prices to new information” [12] is another example of Gene’s immense contribution to methods. As I mentioned above, this paper, with over 400 citations, launched the entire event study literature. Again, actually checking stock price reactions to corporate events is not as straightforward as it sounds. Gene and his coauthors provided the “standard solution” to all of the empirical difficulties that survives to this day. Similarly, his papers on interest rates and inflation led the way on how to impose rational expectations ideas in empirical practice.

Simply organizing the data has been an important contribution. Gene was central to the foundation of the Center for Research in Security Prices, which provides the standard data on which all U.S. stock and bond research is done. The bond portfolios he developed with Robert Bliss are widely used. He instigated the development of a survivor-bias-free mutual fund database, and the new CRSP-Compustat link is becoming the standard for a new generation of corporate finance research, again led by Gene's latest efforts ([80], [83], [85], [87]).

This empirical aspect of Gene’s contribution is unique. Gene did not invent fancy statistical “econometric techniques,” and he is not a collector of easily observed facts. Gene developed empirical methods that surmounted difficult problems, and led a generation through the difficult practical details of empirical work. The best analogy is the controlled clinical trial in medicine. One would call that an empirical method, not a statistical theorem. Gene set out the empirical methods for finance, methods as central as the clinical trial is to medicine, empirical methods that last unquestioned to this day.

Predictable returns

Many economists would have rested on their laurels at this point, and simply waited for the inevitable call from the Nobel Prize committee. The above contributions are widely acknowledged as more than deserving in the financial and macroeconomics community. But Gene’s best and most important work (in my opinion) still lies ahead.

The efficient markets work of the 1960s and 1970s found that stock returns are not predictable (“random walks”) at short horizons. But returns might well still be predictable at long horizons, if investors’ fear of risk varies over time. For example, in the depths of a recession few people may want to hold risky assets, as they are rightly worried about their jobs or the larger economic risks at these times. This quite rational fear lowers the demand for risky assets, pushing down their prices and pushing up subsequent returns. If this is true, we could predict good and bad returns in the stock market based on the state of the economy, even though the market is perfectly efficient (all information is reflected in current prices). This argument is obviously much more plausible at business cycle frequencies than at short horizons, which is why the early tests concentrated on short horizons. Gene’s next great contribution, in the 1980s, was to show how returns are predictable at long horizons.

Though the last paragraph makes it sound like an easy extension, I cannot begin to describe what a difficult intellectual leap this was for Gene, as well as for the rest of the financial economics profession. Part of the difficulty lay in the hard-won early success of simple efficient markets in its first 10 years. Time after time, someone would claim a system that could "beat the market" (predict returns) in one way or another, only to see the anomaly beat back by careful analysis. So the fact that returns really are predictable by certain variables at long horizons was very difficult to digest.

The early inklings of this set of facts came from Gene’s work on inflation ([30], [32], [35], [37], [39], [43], [44], [49]). Since stocks represent a real asset, they should be a good hedge for inflation. But stock returns in the high inflation of the 1970s were disappointing. Gene puzzled through this conundrum to realize that the times of high inflation were boom times of low risk premiums. But this means that risk premiums, and hence expected returns, must vary through time.

Gene followed this investigation in the 1980s with papers that cleanly showed how returns are predictable in stock ([55], [58], [59], [62]), bond ([50], [52], [57], [62], [64]), commodity ([56], [60]) and foreign exchange ([40], [51]) markets, many with his increasingly frequent coauthor Ken French. These papers are classics. They define the central facts that theorists of each market are working on to this day. None have been superseded by subsequent work, and these phenomena remain a focus of active research.

(I do not mean to slight the contributions of others, as I do not mean to slight the contribution of others to the first generation of efficient markets studies. Many other authors examined patterns of long horizon return predictability. This is a summary of Gene’s work, not a literature review, so I do not have space to mention them. But as with efficient markets, Gene was the integrator, the leader, the one who most clearly saw the overarching pattern in often complex and confusing empirical work, and the one who established and synthesized the facts beyond a doubt. Many others are often cited for the first finding that one or another variable can forecast returns, but Gene’s studies are invariably cited as the definitive synthesis.)

The central idea is that the level of prices can reveal time varying expected returns. If expected returns and risk premiums are high, this will drive prices down. But then the “low” price today is a good signal to the observer that returns will be high in the future. In this way stock prices relative to dividends or earnings predict stock returns; long term bond prices relative to short-term bond prices predict bond returns; forward rates relative to spot rates predict bond and foreign exchange returns, and so forth. Low prices do not cause high returns any more than the weatherman causes it to snow.

This work shines for its insistence on an economic interpretation. Other authors have taken these facts as evidence for "fads" and "fashion" in financial markets. This is a plausible interpretation, but it is not a testable scientific hypothesis; a "fad" is a name for something you don't understand. Gene's view, as I have articulated here, is that predictable returns reflect time-varying risk premia related to changing economic conditions. This is a testable view, and Gene takes great pains to document empirically that the high returns come at times of great macroeconomic stress (see especially [60], [62]). This does not prove that return forecastability is not due to "fads," any more than science can prove that lightning is really not caused by the anger of the Gods. But had it come out the other way; had times of predictably high returns not been closely associated with macroeconomic difficulties, Gene's view would have been proven wrong. Again, this is scientific work in the best sense of the word.

The influence of these results is really only beginning to be felt. The work of my generation of theoretically inclined financial economists has centered on building explicit economic models of time-varying macroeconomic risk to explain Fama and French’s still unchallenged findings. Most of corporate finance still operates under the assumption that risk premia are constant over time. Classic issues such as the optimal debt/equity ratio or incentive compensation change dramatically if risk premia, rather than changing expectations of future profits, drive much price variation. Most of the theory of investment still pretends that interest rates, rather than risk premia, are the volatile component of the cost of capital. Portfolio theory is only beginning to adapt. If expected returns rise in a recession, should you invest more to take advantage of the high returns? How much? Or are you subject to the same additional risk that is, rationally, keeping everyone else from doing so? Macroeconomics and growth theory, in the habit of considering models without risk, or first order approximations to such models in which risk premia are constant and small, are only beginning to digest the fact that risk premia are much larger than interest rates, let alone that these risk premia vary dramatically over time.

In these and many other ways, the fact that the vast majority of stock market fluctuation comes from changing expected returns rather than changing expectations of future profits, dividends, or earnings, will fundamentally change the way we do everything in financial economics.

The cross section, again

We are not done. A contribution as great as any of these, and perhaps greater still, lies ahead.

If low prices relative to some multiple (dividends, earnings, book value) signal times of high stock returns, perhaps low prices relative to some multiple signal stocks with high risks and hence high returns. In the early 1990s, Gene, with Ken French, started to explore this idea.

The claim was old, that “value stocks” purchased for low prices would yield higher returns over the long run than other stocks. This claim, if true, was not necessarily revolutionary. The Capital Asset Pricing Model allows some asset classes to have higher average returns if they have higher risk, measured by comovement with the market return, or “beta.” So, if the value effect is not a statistical anomaly, it could easily be consistent with existing theory, as so many similar effects had been explained in the past. And it would be perfectly sensible to suppose that “value” stocks, out of favor, in depressed industries, with declining sales, would be extra sensitive to declines in the market as a whole, i.e. have higher betas. The “value premium” should be an interesting, but not unusual, anomaly to chase down in the standard efficient markets framework.

Given these facts, Gene and Ken’s finding in “The Cross Section of Expected Stock Returns” [68] was a bombshell. The higher returns to “value stocks” were there all right, but CAPM betas did nothing to account for them! In fact, they went the wrong way – value stocks have lower market betas. This was an event in Financial Economics comparable to the Michelson-Morley experiment in Physics, showing that the speed of light is the same for all observers. And the same Gene who established the cross-sectional validity of the Capital Asset Pricing Model for many asset classifications in the 1970s was the one to destroy that model convincingly in the early 1990s when confronted with the value effect.

But all is not chaos. Just as asset pricing theory had long recognized the possibility of time-varying risk premia and predictable returns, it had also recognized, since the early 1970s, the possibility of "multiple factors" to explain the cross section. Both possibilities are clearly reflected in Gene's 1970 essay. It remained to find them. Though several "multiple factor" models had been tried, none had really caught on. In a series of papers with Ken French ([72], [73], [78], and especially [74]), Gene established the "three factor model" that does successfully account for the "value effect."

The key observation is that "value stocks" – those with low prices relative to book value – tend to move together. Thus, buying a portfolio of such stocks does not give one a riskless profit. It merely moves the times at which one bears risk from a time when the market as a whole declines to a time when value stocks as a group decline. The core idea remains: one only gains expected return by bearing some sort of risk. The nuance is that other kinds of risk beyond the market return keep investors away from otherwise attractive investments.

Since it is new, the three-factor model is still the object of intense scrutiny. What are the macroeconomic foundations of the three factors? Are there additional factors? Do the three factors stand in for a CAPM with time-varying coefficients? Once again, Gene’s work is defining the problem for a generation.

Though literally hundreds of multiple-factor models have been published, the Fama-French three-factor model has quickly become the standard basis for comparison of new models, for risk-adjustment in practice, and it is the summary of the facts that the current generation of theorists aims at. It has replaced the CAPM as the baseline model. Any researcher chasing down an anomaly today first checks whether anomalously high returns are real, and then checks whether they are consistent with the CAPM and the Fama-French three-factor model. No other asset pricing model enjoys this status.
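
As a rough illustration of that "check against the three-factor model" step, a minimal Python sketch; the factor and portfolio return series are synthetic placeholders, and the alpha standard error here is classical OLS, without the corrections used in serious work.

```python
import numpy as np

# Placeholder monthly data: anomaly portfolio excess returns and the three factors.
rng = np.random.default_rng(0)
T = 240
mkt, smb, hml = (rng.standard_normal(T) for _ in range(3))
anomaly = 0.3 + 0.9 * mkt + 0.4 * hml + rng.standard_normal(T)   # excess returns

# Time-series regression: anomaly_t = alpha + b*MKT_t + s*SMB_t + h*HML_t + e_t.
X = np.column_stack([np.ones(T), mkt, smb, hml])
coef, *_ = np.linalg.lstsq(X, anomaly, rcond=None)
resid = anomaly - X @ coef
# Classical OLS standard error for alpha (no HAC correction in this toy).
cov = np.linalg.inv(X.T @ X) * resid.var(ddof=X.shape[1])
print(f"alpha = {coef[0]:.3f}, t-stat = {coef[0] / np.sqrt(cov[0, 0]):.2f}")
```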

Additional contributions

Gene has made fundamental contributions in many other areas. His early work on the statistical character of stock returns, especially the surprisingly large chance of large movements, remains a central part of our understanding ([1], [2], [3], [4]). He has made central contributions to corporate finance, both its theory ([24], [36], [38], [42], [46], [47], [54], [63], [75]) and empirical findings ([10], [29], [80], [83], [85], [86], [87]). Some of the latter begin the important work of integrating predictable returns and new risk factors into corporate finance, which will have a major impact on that field. These are as central as his contributions to asset pricing that I have surveyed here; I omit them only because I am not an expert in the field. He has also made central contributions to macroeconomics and the theory of money and banking ([40], [41], [48], [49], [53], [70]).

The case for a Prize

I have attempted to survey the main contributions that must be mentioned in a Nobel Prize; any of these alone would be sufficient. Together they are overwhelming. Of course, Gene leads most objective indicators of influence. For example, he is routinely at or near the top of citation studies in economics, as well as financial economics.

The character of Gene’s work is especially deserving of recognition by a Nobel Prize, for a variety of reasons.

Empirical Character. Many economists are nominated for Nobel prizes for influential theories, ideas other economic theorists have played with, or theories that seem to have potential in the future for understanding actual phenomena. Gene's greatness is empirical. He is the economist who has taught us more about how the actual financial world works than any other. His ideas and organizing framework guided a generation of empirical researchers. He established the stylized facts that 30 years of theorists puzzle over. Gene's work is scientific in the best sense of the word. You don't ask of Gene, "What's your theory?" You ask, "What's your fact?" Finance today represents an interplay of fact and theory unparalleled in the social sciences, and this is largely due to Gene's influence.

Ideas are Alive. Gene's ideas are alive, and his contributions define our central understanding of financial markets today. His characterizations of time-varying bond, stock, and commodity returns, and the three-factor model capturing value and size effects, remain the baseline for work today. His characterization of predictable foreign exchange returns from the early 1980s is still one of the two or three puzzles that define international finance research. The critics still spend their time attacking Gene Fama. For example, researchers in the "behavioral finance" tradition are using evidence from psychology to give some testable content to an alternative to Gene's efficient market ideas, to rebut caustic comments like mine above about "fads." This is remarkable vitality. Few other ideas from the early 1970s, including ideas that won well-deserved Nobel prizes, remain areas of active research (including criticism) today.

Of course, some will say that the latest crash "proves" markets aren't "efficient." This attitude only expresses ignorance. Once you understand the definition of efficiency and the nature of its tests, as made clear by Gene 40 years ago, you see that the latest crash no more "proves" lack of efficiency than did the crash of 1987, the great slide of 1974, the crash of 1929, the panic of 1907, or the Dutch tulip crisis. Gene's work, and that of all of us in academic finance, is about serious quantitative scientific testing of explicit economic models, not armchair debates over anecdotes. The heart of efficient markets is the statement that you cannot earn outsize returns without taking on "systematic" risk. Given the large average returns of the stock market, it would be inefficient if it did not crash occasionally.

Practical importance. Gene’s work has had profound influence on the financial markets in which we all participate.

For example, in the 1960s, passively managed mutual funds and index funds were unknown. It was taken for granted that active management (constant buying and selling, identifying "good stocks" and dumping "bad stocks") was vital for any sensible investor. Now all of us can invest in passive, low-cost index funds, gaining the benefits of wide diversification only available in the past to the super rich (and the few super-wise among those). In turn, these vehicles have spurred the large increase in stock market participation of the last 20 years, opening up huge funds for investment and growth. Even proposals to open social security systems to stock market investment depend crucially on the development of passive investing. The recognition that markets are largely "efficient," in Gene's precise sense, was crucial to this transformation.

Unhappy investors who lost a lot of money to hedge funds, dot-coms, bank stocks, or mortgage-backed securities can console themselves that they should have listened to Gene Fama, who all along championed the empirical evidence – not the “theory” – that markets are remarkably efficient, so they might as well have held a diversified index.

Gene's concepts, such as "efficiency," or that value and growth define the interesting cross section of stock returns, are not universally accepted in practice, of course. But they are widely acknowledged as the benchmark. Where an active manager 40 years ago could just say "of course," now he or she needs to confront the overwhelming empirical evidence that most active managers do not do well. Less than 10 years after Fama and French's small/large and value/growth work was first published, mutual fund companies routinely categorize their products on these dimensions. (See www.vanguard.com for example.)

Influence in the field. Finally, Gene has had a personal influence in the field that reaches beyond his published work. Most of the founding generation of finance researchers got their Ph.D.'s under Gene Fama, and his leadership has contributed centrally to making the Booth School at the University of Chicago such a superb institution for developing ideas about financial economics.

Fama, Hansen, and Shiller Nobel

Gene Fama, Lars Hansen and Bob Shiller win the Nobel Prize. Congratulations! (Minor complaint: Nobel committee, haven’t you heard of Google? There are lots of nice Gene Fama photographs lying around. What’s with the bad cartoon?)

I'll write more about each in the coming days. I've spent most of my professional life following in their footsteps, so at least I think I understand what they did better than I would for the typical prize.

As a start, here is an introduction I wrote for Gene Fama's talk, "The History of the Theory and Evidence on the Efficient Markets Hypothesis," given for the AFA history project. There is a link to this document on my webpage here. The video version is here at IGM.

Introduction for Gene Fama

On behalf of the American Finance Association and the University of Chicago Graduate School of Business, it is an honor and a pleasure to introduce Gene Fama. This talk is being videotaped for the AFA history project, so we speak for the ages.

Gene will tell us how the efficient-markets hypothesis developed. I’d like to say a few words about why it’s so important. This may not be obvious to young people in the audience, and Gene will be too modest to say much about it.

“Market efficiency” means that asset prices incorporate available information about values. It does not mean that orders are “efficiently” processed, that prices “efficiently” allocate resources, or any of the other nice meanings of “efficiency.” Why should prices reflect information? Because of competition and free entry. If we could easily predict that stock prices will rise tomorrow, we would all try to buy today. Prices would rise today until they reflect our information.


This seems like a pretty simple “theory,” hardly worth all the fuss. Perhaps you expect general relativity, lots of impenetrable equations. Gene is more like Darwin, and the efficient markets hypothesis is more like evolution. Both evolution and efficient markets are elegant, simple, and powerful ideas that organized and energized vast empirical projects, and that’s the true measure of any theory. Without evolution, natural history would just be a collection of curious facts about plants and animals. Without the efficient markets hypothesis, empirical finance would just be a collection of Wall-Street anecdotes, how-I-got-rich stories, and technical-trading newssheets.

Efficient-market theory and empirical work are also a much deeper intellectual achievement than my little story suggests. There are plenty of hard equations. It took nearly a century to figure out the basic prediction of an efficient market, from Bachelier’s random walk to the consumption Euler equation (price equals conditionally expected value, discounted by marginal utility growth). It took hard work and great insight to account for risk premiums, selection biases, reverse causality, and endogenous variables, and to develop the associated statistical procedures.
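For readers who want the formula behind that phrase, the condition can be written (in standard modern notation, which is not necessarily the notation of the original papers) as

p_t = E_t[ m_{t+1} x_{t+1} ],   with   m_{t+1} = β u′(c_{t+1}) / u′(c_t),

where p_t is today’s price, x_{t+1} is next period’s payoff, and m_{t+1} is the discount factor built from marginal utility growth. Bachelier’s random walk is the special case in which the discount factor is essentially constant, so expected returns are constant and price changes are unpredictable.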

Efficient-markets empirical work doesn’t check off easy “predictions.” It typically tackles tough anomalies, each of which looks superficially like a glaring violation of efficiency, and each endorsed by a cheering crowd of rich (or perhaps lucky?) traders. It’s not obvious that what looks like an inefficiently low price is really a hidden exposure to systematic risk. It took genius to sort through the mountains of charts and graphs that computers can spit out, to see the basic clear picture.

Efficient-market predictions can be beautifully subtle and unexpected. One example: In an efficient market, expert portfolio managers should do no better than monkeys throwing darts. That’s a remarkable prediction. Experts are better than amateurs in every other field of human endeavor: Tiger Woods will beat you at golf; you should hire a good house painter and a better tax lawyer. The prediction is even more remarkable for how well it describes the world, after we do a mountain of careful empirical work.

That empirical work consists, fundamentally, of applying scientific method to financial markets. Modern medicine doesn’t ask old people for their health secrets. It does double-blind clinical trials. To this, we owe our ability to cure many diseases. Modern empirical finance doesn’t ask Warren Buffett to share his pearls of investment wisdom. We study a survivor-bias-free sample of funds sorted on some ex-ante visible characteristic, to separate skill from luck, and we correct for exposure to systematic risk. To this we owe our wisdom, and maybe, as a society, a lot of wealth as well.

This point is especially important now, in a period of great financial turbulence. It’s easy to look at the latest market gyration and opine, “Surely markets aren’t efficient.” But that’s not how we learn anything of lasting usefulness. Efficient markets taught us to evaluate theories by their rejectable predictions and by the numbers; to do real, scientific, empirical work, not to read newspapers and tell stories.

Efficient markets are also important to the world at large, in ways that I can only begin to touch on here. The assurance that market prices are in some sense basically “right” lies behind many of the enormous changes we have seen in the financial and related worlds, from index funds, which have allowed for wide sharing of the risks and rewards of the stock market, to mark-to-market accounting, quantitative portfolio evaluation and benchmarking, and modern risk management.

With 40 years’ hindsight, are markets efficient? Not always, and Gene said so in 1970. For example, prices rise on the release of inside information, so that information, though known by someone, was not reflected in the original price. More recently, I think we have seen evidence that short-sales constraints and other frictions can lead to informationally-inefficient prices.

This is great news. Only a theory that can be proved wrong has any content at all. Theories that can “explain” anything are as useless as “prices went down because the Gods are angry.”

Gene went on, arguing that no market is ever perfectly efficient, since no market is perfectly competitive and frictionless. The empirical question has always been to what degree a given phenomenon approaches an unattainable ideal.

Still, the answer today is much closer to “yes” than to “no” in the vast majority of serious empirical investigations. It certainly is a lot closer to “yes” than anyone expected in the 1960s, or than the vast majority of practitioners believe today. There are strange fish in the water, but even the most troublesome are surprisingly small fry. And having conquered 157 anomalies with patient hard work, many of us can be excused for suspecting that just a little more work will make sense of the 158th.

However, empirical finance is no longer really devoted to “debating efficient markets,” any more than modern biology debates evolution. We have moved on to other things. I think of most current research as exploring the amazing variety and subtle economics of risk premiums – focusing on the “joint hypothesis” rather than the “informational efficiency” part of Gene’s 1970 essay.

This is also great news. Healthy fields settle debates with evidence and move on to new discoveries. But don’t conclude that efficient markets are passé. As evolution lies quietly behind the explosion in modern genetics, markets that are broadly efficient, in which prices quickly reflect information, quietly underlie all the interesting things we do today. This is the best fate any theory can aspire to.

Gene will talk about the history of efficient markets. People expect the wrong things of history, just as they expect overly complex “theory.” No lone genius ever thought up a “hypothesis,” went out to “test” it, and convinced the world with his 2.1 t-statistic. Theory and empirical work develop together, ideas bounce back and forth between many people, the list of salient vs. unimportant facts shifts, and evidence, argument and, alas, age gradually change people’s minds. This is how efficient markets developed too, as Gene has always graciously acknowledged. Gene’s two essays describe the ideas, but much less of this process. It was an amazing adventure, and historians of science should love this story. Ladies and Gentlemen, please welcome Gene Fama to tell us about it.

Friday Art Fun

Friday Art Fun

Totally off topic. It’s Friday, time to relax.

Source: Nina Katchadourian

15th Century Flemish Style Portraits Recreated In Airplane Lavatory. Click the link for the full set.

From the artist:
While in the lavatory on a domestic flight in March 2010, I spontaneously put a tissue paper toilet seat cover over my head and took a picture in the mirror using my cellphone. The image evoked 15th-century Flemish portraiture. I made several forays to the bathroom from my aisle seat, and by the time we landed I had a large group of new photographs entitled Lavatory Self-Portraits in the Flemish Style.
From the art critic (Sally Cochrane):
What no one’s saying, though, is that she was hogging the bathroom while a line of antsy people held their bladders! 
In related art news, the street artist Banksy is prowling New York. A group of Brooklyn locals, seeing people coming in to photograph one of his stencils, promptly covered it with cardboard and started charging $5 per shot. Entrepreneurship and property rights are still alive.
Krugtron parts 2 and 3

Krugtron parts 2 and 3

Niall Ferguson has completed his Krugtron trilogy, with Part 2 and Part 3 (Part 1 here, FYI, which I blogged about earlier).

Part 2 continues in the vein of Part 1: Krugman is as human as the rest of us, and the future is hard to see. Niall compiles a long record of what Krugman actually said at the time. As before, those of us on the sharp end of Krugman’s insults enjoy seeing at least his own record set straight.

But Niall admits what I said last time: we don’t really learn much from anyone’s prognostication:

In the past few days, I have pointed out that he has no right at all to castigate me or anyone else for real or imagined mistakes of prognostication. But the fact that Paul Krugman is often wrong is not the most important thing…
What Niall is really mad at are the insults, the lying and slandering (I’m sorry, that’s what it is and there are no polite words for impolite behavior), and the lack of scholarship – Krugman does not read the things he castigates people for.

And it matters.

Insults
Why have I taken the trouble to do this? I have three motives…to assert the importance of humility and civility in public as well as academic discourse…
…his hero John Maynard Keynes did not go around calling his great rival Friedrich Hayek a “mendacious idiot” or a “dope”.
The “Always-Wrong Club” is just the latest of many ad hominem attacks he has made on me since 2009. On one occasion he implied that I was a racist and then called me a “whiner” when I objected. On another he referred to me as a “poseur”, adding for good measure that I had “choked on [my] own snark”. Last year he wildly accused [me] of making “multiple errors and misrepresentations” in [an] article for Newsweek, only one of which he ever specified. More recently I was accused of “trying to flush [my] own past statements down the memory hole” - a characteristically crude turn of phrase - and of being “inane”. Re-reading these, I can only marvel at the man’s hypocrisy, for Krugman often sanctimoniously denies that he “does ad hominem” - and once had the gall to accuse Joe Scarborough of making such an attack on him when Scarborough merely quoted Krugman’s own words back at him…
Lying
… Krugman has repeatedly misrepresented what I said in that debate. Immediately afterwards, he cynically claimed on his blog that I had been arguing that high deficits would crowd out private spending. Later, in order to have a straw man for his vulgar Keynesian claim that even larger deficits would have produced a faster recovery, he started to pretend that I had predicted “soaring interest rates” and had called for immediate austerity…. But anyone who reads the transcript of our debate - even the edited version that was published - can see that this was not my position.

Scholarship
 When Paul Krugman first began his attacks against me, he made it clear - as if almost proud of the fact - that he had read none of my books. (Quote: “I’m told that some of his straight historical work is very good.”) 
Krugman’s unabashed ignorance of my academic work raises the question of what, in fact, he does read, apart from posts by the other liberal bloggers who are his zealous followers. … (When he does read a book, he mentions it in his blog as if it’s a special holiday treat.)
It matters
“My duty, as I see it, is to make my case as best I honestly can,” Krugman has written, “not [to] put on a decorous show of civilized discussion.” Well, I am here to tell him that “civilized discussion” matters. It matters because vitriolic language of the sort he uses is a key part of what is wrong with America today. As an eminent economist said to me last week, people are afraid of Krugman. More “decorous” but perhaps equally intelligent academics simply elect not to enter a public sphere that he and his parasitical online pals are intent on poisoning. I agree with Raghuram Rajan, one of the few economists who authentically anticipated the financial crisis: Krugman’s is “the paranoid style in economics”:
“All too often, the path to easy influence is to impugn the other side’s motives and methods … Instead of fostering public dialogue and educating the public, the public is often left in the dark. And it discourages younger, less credentialed economists from entering the public discourse.”
The originals are full of links to documentation (a good historian’s habit) which I could not reproduce here.

There is a reason the rest of the world – especially the academic world – abides by a simple set of ethics that includes: read what you criticize, document what you say, try to understand the other side’s view, respect their integrity, don’t lie, don’t insult, don’t deliberately misquote, attack ideas if you will but not people, don’t make up slanderous allegations about your opponents’ personal motives, and (hello, New York Times) check your facts. And when you see someone flagrantly violating these rules, tune out.

Some interesting New York Times inside commentary.

PS: My last post on this resulted in a whole lot of nasty Ferguson’s-a-crank comments, which I deleted. You may criticize Ferguson, but do so politely and factually.
Mulligan on Obamacare Marginal Tax Rates

Mulligan on Obamacare Marginal Tax Rates

Casey Mulligan wrote a nice Wall Street Journal Oped last week, summarizing his recent NBER Working Paper (also here on Casey’s webpage) on marginal tax rates.

What do I mean, “tax,” you might ask? Obamacare is about giving people stuff, not taxing. Sadly, no. Obamacare gives subsidies that depend on income. As you earn more, you receive smaller subsidies for health care, which reduces the incentive to earn more. Casey tots this sort of thing up, along with the actual taxes people will pay.

Economists use the word “tax” here and we know what we mean, but it would be better to call it “disincentives” so it’s clearer what the problem is, and just how painful we make it for poor people in this country to rise out of that poverty.
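A minimal sketch of the mechanism, with hypothetical round numbers (a 30% statutory tax rate and a subsidy that phases out at 15 cents per extra dollar of income – placeholders, not Casey’s estimates):

# Hypothetical rates for illustration only; not Casey Mulligan's figures.
def take_home(extra_earnings, statutory_rate=0.30, subsidy_phaseout=0.15):
    """Extra take-home pay from earning more, when a benefit
    phases out as income rises."""
    taxes_paid = statutory_rate * extra_earnings
    subsidy_lost = subsidy_phaseout * extra_earnings
    return extra_earnings - taxes_paid - subsidy_lost

extra = 1000.0
kept = take_home(extra)
print(f"Earn ${extra:.0f} more, keep ${kept:.0f}; "
      f"effective marginal rate = {1 - kept / extra:.0%}")
# Earn $1000 more, keep $550; effective marginal rate = 45%

The phase-out adds to the statutory rate even though nobody calls it a tax.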

As the graph in Casey’s oped shows, the average marginal “tax” rate has gone up 10 percentage points since 2007, about 5 percentage points of which is due to Obamacare alone.

Going back to the working paper, I think this is actually an understatement. (Probably the first time Casey or I have ever been accused of that!)


First, not even Casey can add everything up, and it all adds in one direction. State and local taxes, and a vast number of state, county, city, and other income- or asset-based transfers, all add to the total. I haven’t read all his papers or the whole book yet, but did Casey get them all? For example, I just got in the mail a notice for a little program offered by the state of Illinois to lower your property taxes if you earn less than $100,000 per year. Nice, but it’s one more little incentive not to earn more than $100,000 per year, and it’s not in Casey’s calculation.

In email correspondence, Casey pulled me back from these thoughts in a way that is revealing about the calculation. I wanted to add sales tax. After all, if you earn a dollar, but you have to pay 10% sales tax to do anything with it, that’s another 10% distortion, no? Casey responds no, because he wants to measure the income-compensated distortion to labor, period. If you don’t work, and somehow you also get income, you still have to pay the 10% sales tax. So the sales tax does not distort that pure work-no work decision. Casey’s right, but I think this clearly illuminates the conservative nature of his calculation and what it means. There are a lot more margins, wedges, and distortions out there, and he’s not trying to measure them all. He’s also not trying to measure the wedges and disincentives for employers to hire people, towards non-market activities, and certainly not the effects of the regulatory tangle.

Another note of conservatism: “The results account for the fact that many people will not participate in programs for which they are eligible.” This is an important issue, one that had not sunk in for me until I read a recent CBO report. People don’t sign up for all the benefits for which they are eligible. If they did, marginal tax rates would be astronomical. It also sends a warning: just how long will it be before people in an increasingly stagnant economy figure out all the programs they are eligible for?

Drawing a single line can also be unduly calming. You might say, “well 50% isn’t so bad. Europeans still work, sort of, paying 50% marginal tax rates.”  But as Casey reminds us, the spread in marginal tax rates across people is enormous. For example, the paper has a nice example (p. 13, table 2) of how a typical earner will come out ahead by choosing to work part time and receive subsidies, rather than work full time. This is a case of a 100% marginal tax rate.

It’s likely that the effect of marginal taxes is nonlinear. Much of the labor decision comes in chunks: work or don’t work; work part time or full time; apply for benefits or don’t, with transactions costs and irreversibilities.  Suppose half the population feels a 100% marginal tax rate and half feels zero. Half the population works, half does not. That is likely a much larger effect than if the whole population felt a 50% marginal tax rate.
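A toy calculation makes the point. The labor-supply rule below (work full time if you keep at least half of each dollar, otherwise don’t work) is entirely made up for this sketch:

# Toy extensive-margin illustration; the labor-supply rule is invented.
def hours_worked(marginal_tax_rate, full_time_hours=2000):
    keep_share = 1 - marginal_tax_rate
    return full_time_hours if keep_share >= 0.5 else 0

# Scenario A: everyone faces a 50% marginal rate.
scenario_a = sum(hours_worked(0.50) for _ in range(100))

# Scenario B: same 50% average, but half face 100% and half face 0%.
scenario_b = sum(hours_worked(1.00) for _ in range(50)) + \
             sum(hours_worked(0.00) for _ in range(50))

print(scenario_a, scenario_b)   # 200000 100000

Same average rate, half the hours in the second scenario. Averages hide the damage done by the spread.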

In sum, it’s probably worse than even Casey’s graph. But Casey is doing the right thing in putting up a carefully documented graph and paper rather than speculating. Speculation is for blogs, not papers and not for good opeds.

And Casey’s big point remains the additional effect of Obamacare and other changes to Federal programs. Whatever you think the level is, it’s now 10 percentage points more than what it used to be. On average.

Margins on Exchanges

Margins on Exchanges

A nice Bloomberg View by David Goldhill offers an Econ 101 lesson in incentives. Though the average subsidy rate to health insurance is limited, the marginal subsidy rate is 100% once consumers hit the income limits – so many consumers have no incentive at all to shop for lower prices. In turn, this greatly lowers the chance that insurers will compete on price.

David:

Let’s take an example. A family of four at 138 percent of the poverty level ($32,499) has its premium capped at 3.29 percent of income or $1,071. The rest is subsidy. So, if the cost of a silver plan is $10,000, the subsidy for this family is $8,929. A family at 400 percent of the poverty level ($94,200) has to pay up to 9.5 percent of its income for a plan, or $8,949. So the same $10,000 premium carries a subsidy of only $1,051.

But now look at those two families from the insurer’s perspective. A $10,000 plan already costs more than the maximum amount either family would pay. If the insurer raises the premium to $10,001, both families get $1 in additional subsidy. If it raises premiums to $11,000, both families get $1,000 in additional subsidy. In other words, no matter how much an insurer raises rates, a subsidized household pays zero more.
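The arithmetic in that example fits in a few lines. A sketch, using the income and percentage figures quoted above (my numbers differ from David’s by a dollar or two of rounding):

# The subsidy caps a family's premium payment at a share of income;
# whatever the plan costs beyond that cap is subsidy.
def subsidy(income, cap_share, plan_cost):
    family_pays = min(plan_cost, cap_share * income)
    return plan_cost - family_pays

for plan_cost in (10000, 11000):
    low = subsidy(income=32499, cap_share=0.0329, plan_cost=plan_cost)   # 138% of poverty
    high = subsidy(income=94200, cap_share=0.095, plan_cost=plan_cost)   # 400% of poverty
    print(plan_cost, round(low), round(high))
# 10000 8931 1051
# 11000 9931 2051

Raise the premium by $1,000 and both subsidies rise by the full $1,000. Neither subsidized family pays a cent more, which is exactly why they have no reason to shop on price.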


Of course, it takes some cleverness for insurance companies to separate out these now totally price-insensitive buyers from regular customers, and David explains some of the ways they can do so. Having seen the airlines at work, you get a sense of just how clever companies can be. Modern big-data driven marketing is all about careful price discrimination.

A better-known incentive problem: a rule that you must spend 80% of what you take in can limit profits. Or it can incentivize wasted spending.
In what may be the single greatest source of unintended consequences in the Affordable Care Act, insurers are now required to spend at least 80 percent of revenue from premiums on care. Superficially, this means that if they set premiums too high, they will have to eventually refund much of the money that they don’t end up spending on care. But let’s say you’re running an insurance company. You can find ways to spend more money on beneficiaries’ health care – say, with more generous definitions of free preventive care, more expansive rehabilitation services or higher reimbursement rates on doctors’ services – and keep 20 percent of all the money you bring in. Or alternatively, you can spend less on care and give refunds. Easy choice.
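To see the incentive in numbers, here is a minimal sketch with hypothetical round figures (a lean block of business with $10,000 in premiums, versus a padded one at $12,000):

# Under the 80% minimum medical-loss-ratio rule, the insurer may keep at
# most 20% of premium revenue; anything above that must be refunded.
# Numbers are hypothetical.
def insurer_keeps(premium_revenue, care_spending):
    kept = premium_revenue - care_spending
    cap = 0.20 * premium_revenue
    refund = max(0.0, kept - cap)
    return kept - refund

lean = insurer_keeps(premium_revenue=10000, care_spending=7500)
padded = insurer_keeps(premium_revenue=12000, care_spending=9600)
print(lean, padded)   # 2000.0 2400.0

Holding spending down just triggers refunds; spending more on care, and charging the premiums to match, raises the dollars the insurer keeps. “Easy choice,” as David says.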
Update: I don’t know why I didn’t think of this before… Cash rebates! Just like credit cards. OK, we can’t be so obvious: points on your credit card, free miles, toasters, checkups for your cat… Each dollar added to the premium comes from the government, so attract customers with the closest thing to cash back that you can get away with.
Ferguson on Krugtron

Ferguson on Krugtron

A fun show is breaking out. Niall Ferguson on “Krugtron the Invincible.”

Paul Krugman, for a while now, has been lambasting those he disagrees with by trumpeting their supposed “predictions” which came out wrong, and using words like “knaves and fools” to describe them – when he’s feeling polite. These claims are often based on a superficial study, if any, of what the people involved actually wrote, mirroring the sudden narcolepsy of Times fact-checkers any time Krugman steps into the room. Niall has lately been a particular target of this calumnious campaign.

Niall’s fighting back. “Oh yeah? Let’s see how your ‘predictions’ worked out!” Don’t mess with a historian. He knows how to check the facts. This is only “part 1”! Ken Rogoff seems to be on a similar tear (and a new item here). This will be worth watching.

As regular blog readers know, I don’t think science advances by evaluating soothsaying. You can make good unconditional predictions with badly wrong structural models, and very good structural models can make bad unconditional predictions. The talent for predicting and the talent for understanding are largely uncorrelated. The judgmental forecasts of individuals are a poor way to evaluate any serious economic or scientific theory. I carefully don’t make “predictions” for just that reason. So I don’t regard this cheery deconstruction effort as a useful way to show that Krugman’s “model,” whatever it is, is wrong. I also can’t see that anyone but the devoted choir of lemmings is paying much attention to Krugman’s mudslinging any more. But it is nice that Niall and Ken are making the effort to ask the great doctor whether perhaps he, too, doesn’t need a bit of healing; perhaps they will force Krugman to go back to actually writing about economics.

Update: Benn Steil chimes in, this time on the Baltics, Iceland, and the supposed wonders of currency devaluation.

Dupor and Li on the Missing Inflation in the New-Keynesian Stimulus

Bill Dupor and Rong Li have a very nice new paper on fiscal stimulus: “The 2009 Recovery Act and the Expected Inflation Channel of Government Spending” available here.

New-Keynesian models are really utterly different from Old-Keynesian stories. In the old-Keynesian account, more government spending raises income directly (Y=C+I+G); income Y then raises consumption, so you get a second round of income increases.

New-Keynesian models act entirely through the real interest rate. Higher government spending means more inflation. More inflation reduces the real interest rate when the nominal rate is stuck at zero, or when the Fed chooses not to respond with higher nominal rates. A lower real interest rate induces people to consume today rather than in the future, when output is expected to return to trend anyway, so consumption and output today rise. Making the economy deliberately more inefficient also raises inflation, lowers the real rate, and so stimulates output today. (Bill and Rong’s introduction gives a better explanation; recommended.)
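In textbook form – a standard statement of the model, not Bill and Rong’s exact specification – the channel runs through the intertemporal IS relation:

x_t = E_t x_{t+1} − σ ( i_t − E_t π_{t+1} − r*_t ),

where x_t is the output gap, i_t the nominal rate, π inflation, r*_t the natural rate, and σ the intertemporal substitution elasticity. With i_t stuck at zero, or simply unresponsive, anything that raises expected inflation E_t π_{t+1} lowers the real rate and raises output today relative to its expected future level. That is the mechanism being tested.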

So, the key proposition of new-Keynesian multipliers is that they work by increasing expected inflation. Bill and Rong look at that mechanism: did the ARRA stimulus in 2009 increase inflation or expected inflation?  Their answer: No.


This is a quantitative question. How much do the large-multiplier models say the ARRA should have increased inflation? Their answer: 4.6%. Where is it?

We know, of course, that inflation (especially core inflation) basically did nothing during the period of the ARRA, and Bill and Rong have some nice graphs. Defenders might say: aha, but without the stimulus we would have had a catastrophic deflationary spiral. Critics might reply: that’s what George Washington’s doctors said while they were bleeding him. As always, teasing out cause and effect is hard.

Bill and Rong have a range of interesting facts that address this question. Here are two that I thought particularly clever. First, they look at the Survey of Professional Forecasters and examine how forecasters changed their inflation forecasts along with their government spending forecasts, i.e., when they figured out that a big stimulus was coming. I plotted the data from Bill and Rong’s Table 2:

Dupor and Li Table 2
As you can see, in 2008Q4 and 2009Q1, many forecasters updated their views on government spending, a few by a lot. However, there is next to no correlation between learning of a big stimulus and raising expected inflation, especially among the forecasters who strongly updated their stimulus forecasts.
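For concreteness, the exercise amounts to something like the following (the revision numbers here are placeholders, not the survey data; statistics.correlation needs Python 3.10+):

import statistics

# Placeholder data: each entry is one forecaster's revision, in percentage
# points, to its government-spending forecast and to its inflation forecast.
spending_revisions = [0.1, 0.5, 2.0, 3.5, 0.0, 1.2]
inflation_revisions = [0.2, -0.1, 0.0, 0.1, -0.2, 0.0]

corr = statistics.correlation(spending_revisions, inflation_revisions)
print(f"correlation = {corr:.2f}")

If the new-Keynesian channel were at work, large upward spending revisions should come with large upward inflation revisions; in the actual survey data they don’t.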

Bill and Rong’s interpretation is that the stimulus failed to increase expected inflation. The main defense I can think of is to say that this evidence tells us about professional forecasters’ models, not about true inflation expectations. Professional forecasters are a bunch of old-Keynesians, not properly enlightened new-Keynesians; they don’t realize that stimulus works through inflation, and they’re still thinking about a pre-Friedman consumption function. That’s probably true. But if so, it’s hard to think that everyone else in the economy does understand the new truth, and changed their inflation forecasts dramatically when they learned of the stimulus.

Another nice piece of evidence: the US had a much bigger government spending stimulus than the UK, yet the behavior of expected inflation, revealed in the real vs. nominal Treasury spread, was almost exactly the same. (Yes, Bill and Rong delve into TIPS pricing in the crisis.)

Source: Dupor and Li

Finally, a key point missing in most of the stimulus debate: these models predict big multipliers not just at the zero bound, but any time interest rates don’t respond to inflation. We don’t have to rely on theory alone; there is some experience. New-Keynesians, since at least Clarida, Gali, and Gertler’s famous regressions, have said that the Fed was not increasing interest rates fast enough in the 1970s, and the 1930s and the interest-rate peg of the late 1940s and early 1950s are other testing grounds. Using standard measures of exogenous spending increases, Bill and Rong find no impact of government spending on inflation in any of these periods.

New-Keynesian stimulus analysis has been particularly slippery, both on the difference between the models and the words, and on advocating the policy answers without checking or believing the mechanisms. The models are Ricardian: the same stimulus happens whether spending is paid for by taxes or by borrowing. The opeds scream that the government must borrow. The models say totally useless spending stimulates. The opeds are full of infrastructure, roads, and bridges. (At least the “sprawl” complaint is temporarily quiet.) The models say that spending works by creating inflation, not through a consumption function. With inflation totally flat, and the counterfactual argument weak, you don’t hear much about that in the opeds. The models say we should be seeing a huge deflation along with strong expected output growth. The facts are protracted stagnation. (More in my last stimulus post.) The models are models, worthy of careful examination and empirical testing. All I ask is that their proponents take them seriously, and not as holy water for a completely different old-Keynesian agenda.