Macroeconomic Theory
by Thomas J. Sargent
"This was the first book I ever had in grad school, in my very first macro class. It was a good book for me to learn macroeconomics as it existed in the early 1980s. It actually presents a very Keynesian model, because that was the dominant model of the time, and it also presents the beginnings of New Classical models. The reason I like it, and still use it as a reference in classes, is that it shows us how to solve expectational difference equations. These are just equations that have expectations in them. You might say that GDP today is equal to some function of government spending, of interest rates and the money supply – and it might also be a function of expected income tomorrow. So income today depends on what you expect to happen tomorrow. Once you put those expectations into that equation, it’s really, really hard to solve. In this book, Sargent begins showing us how to solve those problems in a way that’s general and works in a lot of different cases. So he brings a brand new technology to the literature that opens up a lot of questions you couldn’t ask before. The book he wrote with Lars Ljungqvist is an updated, expanded and better version of this older book. But this book is still really good at solving models that have expectations in them. I still assign one chapter of it to my students. Particularly those tools for solving difference equations – they’re called expectational difference equations – are just as good as they ever were. It’s still the best source that I know of. There’s a lot of econometric tools in these books that would still work. My colleague George Evans does exactly what you say. He builds learning models, and he doesn’t assume agents are rational. He then sees whether by using simple learning rules the models converge to a rational solution over time. He still uses quite a few of the techniques that [Christopher] Sims developed, like impulse response functions, causality testing, all those kinds of techniques. There’s a set of things that aren’t very model dependent, things that you can bring to any set of data. But there’s another set of techniques where the techniques themselves depend on implications of the rational expectations hypothesis. The rational expectations hypothesis, for instance, will you tell you that some variables have to be uncorrelated. Stock returns have to be uncorrelated over time, because if you could predict stocks tomorrow, any rational agent would arbitrage that, make money and take away the predictability. What that gives you is a zero correlation between yesterday and today. That fact that that correlation is zero is often exploited in these techniques, that’s what makes them work. So if your rational expectations hypothesis falls apart, a lot of what I would call the more structural-based econometric techniques would fall apart with it, because they rely on the implications of rational expectations. There should be, and there has been. But not as much as I would like to see. Since I was in grad school – I graduated in 1986 so it’s been about 25 years – we’ve probably gone through two or three generations of models. When I started it was very Keynesian, then it was New Classical, then we got something called the Real Business Cycle models, then we got the New Keynesian models, and today there is an emerging set of models called the new monetarist models. Within the field there’s been a lot of churning of models. The reason those first sets of models didn’t survive was because they didn’t stand up to the data. 
The models that Lucas got his Nobel prize for – the New Classical models, where expectations play a fundamental role, only unexpected money matters and things like that – had some really strong implications. We took that model to the data and it couldn’t explain the magnitude and the duration of business cycles simultaneously. It also couldn’t explain why expected money was correlated with output, and it got rejected. Then we went to real business cycle models. They did better. But they had trouble explaining Great Depressions and other sorts of things, so we rejected those models and went to New Keynesian models. Those models were doing great, or relatively so anyway, right up to the crisis. Then they did horridly. You don’t need advanced econometrics to reject that class of models – it’s clear that they just didn’t handle the crisis. So we’re going to reject those too.

There’s been a lot of change, and I expect that change will speed up. I wish it were even faster, because it’s very clear to me that the models we were using prior to the crisis are not going to get the job done. I like George’s learning models. I also like what John Geanakoplos is doing at Yale. Eric Maskin mentioned him in his interview with you. What was wrong prior to the crisis is that the macroeconomy wasn’t connected to the financial sector. There’s a technical reason for that which has to do with representative agent models – we just didn’t have any way to connect financial intermediation to the macro model. And we didn’t think we needed to. We didn’t think that was an important question. We didn’t think there was any reason to worry about the kind of meltdown we had in the 30s happening in the US today. So no one bothered to build these models, or even to ask the right questions. Nevertheless, even before the crisis, Geanakoplos was building models that tried to explain how we could have these endogenous cycles. I really like that, because it uses the same tools and techniques that we’ve been using all along, but it puts them together in a different way, and in a way that I think makes a lot more sense.

A little bit. It’s partly that, and it’s partly that the answers you get as an econometrician aren’t always that precise. Because of that lack of precision, and the lack of ability to experiment, you often find people getting different values for, say, the multiplier – getting different answers with different data sets – and that makes it look, perhaps correctly, as if we really don’t have any answers. What happened is that the theorists retreated into their deductive world, where they weren’t taking their models to the data enough. When they did take them to the data, and found that they didn’t work, they just said, “Oh, it’s because of bad econometrics; the model is logically correct, so we’re going to stick with it.” I think the arrogance of the theorists, and the lack of ability to do experiments, combined to make the theorists in particular way too insular in terms of taking their models and forcing them to interact with the actual world."
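The "expectational difference equation" idea is easy to state precisely. Below is a minimal worked example of the forward-solution technique the interview refers to; the particular equation, the symbols y (income) and x (the policy variable), and the coefficients are illustrative assumptions, not taken from Sargent’s text.

\[
y_t = a\,\mathbb{E}_t[y_{t+1}] + b\,x_t, \qquad |a| < 1.
\]

Substituting the same equation in for y at t+1 and using the law of iterated expectations (\(\mathbb{E}_t\mathbb{E}_{t+1}[\cdot] = \mathbb{E}_t[\cdot]\)) gives, after k substitutions,

\[
y_t = b\sum_{j=0}^{k} a^{j}\,\mathbb{E}_t[x_{t+j}] + a^{k+1}\,\mathbb{E}_t[y_{t+k+1}].
\]

If the last term vanishes as \(k \to \infty\) (the standard "no-bubble" condition), the unique stable solution is

\[
y_t = b\sum_{j=0}^{\infty} a^{j}\,\mathbb{E}_t[x_{t+j}],
\]

so income today is a discounted sum of expected future fundamentals – exactly the sense in which "income today depends on what you expect to happen tomorrow."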
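The learning models attributed to George Evans can be illustrated with a standard exercise from the adaptive-learning literature. The sketch below is a generic textbook-style example under assumed parameter values, not Evans’s actual models: agents forecast with a simple running-mean rule, and the question is whether their belief converges to the rational expectations equilibrium.

import random

# Illustrative self-referential model (an assumption for this sketch):
#   y_t = mu + alpha * belief_t + eps_t, where belief_t is the agents'
#   forecast of y_t. The rational expectations equilibrium (REE) is the
#   fixed point y* = mu / (1 - alpha).
mu, alpha, sigma = 2.0, 0.5, 0.1
ree = mu / (1 - alpha)
belief = 0.0  # agents start from an arbitrary, non-rational forecast

for t in range(1, 100001):
    y = mu + alpha * belief + random.gauss(0.0, sigma)  # realized outcome
    belief += (y - belief) / t  # learning rule: running mean of outcomes

print(f"REE: {ree:.3f}  learned forecast: {belief:.3f}")  # both near 4.0

With alpha < 1 the simple rule converges to the REE over time; with alpha > 1 it would diverge. Whether such rules converge is the kind of stability question this literature asks.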
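The zero-correlation restriction on stock returns is a one-line consequence of rational expectations plus no arbitrage. Simplifying returns to have a constant expected value \(\mu\) (an assumption made here for illustration), arbitrage forces the conditional forecast to \(\mathbb{E}_t[r_{t+1}] = \mu\), and the law of iterated expectations then kills the autocovariance:

\[
\operatorname{Cov}(r_t, r_{t+1})
= \mathbb{E}\big[(r_t-\mu)(r_{t+1}-\mu)\big]
= \mathbb{E}\big[(r_t-\mu)\,\mathbb{E}_t[r_{t+1}-\mu]\big]
= 0.
\]

This is the zero restriction the more structural econometric techniques exploit – and it is exactly what disappears if the rational expectations hypothesis fails.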
Econometrics · fivebooks.com