August 29, 2011
Bastiat’s “Parable of the Broken Window” shouldn’t be controversial at all, but it is. Your more libertarian-type economists say that it’s a fallacy committed by Keynesians all the time, as they attempt “stimulus” while ignoring the costs. On the other hand, you have Matt Yglesias and others countering that the parable only applies when we have full employment, not vast quantities of idle resources, as we have today. As is so often the case, the problem is that everything got all screwed up when we tried to do the analysis in terms of money, as opposed to the perfectly comprehensible but non-existent barter economy. Here’s what Bastiat writes:
The glazier comes, performs his task, receives his six francs, rubs his hands, and, in his heart, blesses the careless child. All this is that which is seen. [...] It is not seen that as our shopkeeper has spent six francs upon one thing, he cannot spend them upon another. It is not seen that if he had not had a window to replace, he would, perhaps, have replaced his old shoes, or added another book to his library. In short, he would have employed his six francs in some way, which this accident has prevented.
The way Bastiat writes it, it would be easy to interpret this passage as saying that the wasted resource is the six francs that get used up in repairing the window.* But you don’t have to think about it very long to realize that that can’t be true. The six francs are as usable as ever, it’s just that now they are in the hands of the glazier instead of the shopkeeper.
Clearly, the real loss here is the glazier’s time and effort, which could have been applied to some other valuable activity had the window not been broken. Among the more important ideas in economics is that when resources have alternative incompatible uses, there is always a cost to employing them in some particular use: namely, the next-best alternative that you must give up.
Those who claim the Broken Window Fallacy is not really a fallacy during recessions are abandoning too much of classical economics. The economic problem remains the same: we are trying to find and communicate information about the ways in which to employ our resources. Their main beef is that the Classicals assume this information is always transmitted quickly and accurately; critics argue that that assumption is way off during recessions.
Still, that doesn’t mean that you should employ “idle” resources in just any old way you please. When you engage in Keynesian pyramid-building you can’t argue that it was a wise policy because “Look, now we have all these great pyramids that we built out of resources that were formerly unemployed.” Instead, you have to say, “It’s true, we built all these crappy pyramids, but actually it was worth it because building these pyramids was the only way to get that Neoclassical information-transmission machine up and running again.”
The Broken Window Fallacy, which is always a fallacy, is simply to ignore opportunity cost. If, for the sake of argument, World War II was what cured the Great Depression, it wasn’t because we built a bunch of tanks and ships and blew a bunch of stuff up. Perhaps doing all that made it possible for the Depression to end, but we have to acknowledge that it would have been even better if we could somehow end the Depression and also use the resources that went into building that war stuff in ways that people valued.
*Matt Yglesias tries to rescue this interpretation by claiming, strangely, that in the 1850s silver pieces really were a scarce resource.
June 4, 2011
There’s actually a lot more wisdom than I sometimes think in the “folk” definition of the word “profit.” When discussing a possible policy change, people will often justify it by saying, “Yes, it will be costly to businesses, but the only real effect is that businesses will make lower profits. What’s so bad about that?”
Actually, this imaginary person seems to have a pretty good working definition of profit, except I think that what they mean is captured more precisely by the term “rent.” Generally, in economics, a rent is defined as payment to a resource owner above what would be necessary to keep that resource in its current use. In effect, this seems to be what most people have in mind when they’re talking about profits. A business will stay open as long as it’s “making a profit,” i.e. the payment is above what it costs to acquire and use all the necessary resources, so there wouldn’t seem to be any harm done by cutting those profits from $1 million to $1.
So far, we’re doing alright, but the idea I would like people to add to their understanding is that in the long run, profits are driven down to zero. It makes sense when you recognize that “long run” doesn’t refer to time per se, but rather just to the period when all options are open to people. A person who has already built a power plant will be making a profit on the power he sells to customers, because it costs less to operate the already-existing power plant than he can sell power for. The market price of electricity could drop a lot, all the way down to the cost of production, and he would keep operating the plant. The only thing is, in retrospect, he’d curse the bad investment he made by constructing the power plant in the first place. But that bad investment is in the past, the loss on that has happened, and there’s no reason to not operate the plant just because he wishes he had never invested in electricity generation.
On the other hand, people who own steel and all the other stuff you need to build a power plant haven’t committed their resources yet in any particular way. They have to decide whether the future profits from building a plant are enough to justify the initial investment. Another name for the “profits” that the plant owner makes, from this perspective, is “quasi-rent,” because they seem like profit from the perspective of owners of specialized resources (e.g. the constructed power plant). But people who haven’t invested yet have to take into account the cost of building a new plant, as well as the cost of running the plant after it’s built; they have to compare these costs to the income from operation.
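The asymmetry between the two decisions can be sketched with some invented numbers (all figures here are hypothetical, chosen only to make the point concrete):

```python
# Hypothetical numbers: all figures invented for illustration.
construction_cost = 900.0   # sunk once the plant is built
operating_cost = 40.0       # cost per unit of electricity produced
price = 50.0                # market price per unit
units = 20                  # lifetime output of the plant

# Decision 1: someone who already owns a plant. Construction is sunk,
# so he keeps operating as long as price covers the operating cost.
quasi_rent = (price - operating_cost) * units
print(quasi_rent)  # prints 200.0 -- looks like "profit," keeps the plant running

# Decision 2: someone deciding whether to build a new plant. He must
# cover construction too, so he weighs the quasi-rent against it.
net_of_construction = quasi_rent - construction_cost
print(net_of_construction)  # prints -700.0 -- building a new plant is a loss
```

With these numbers, the existing owner keeps running his plant (operating it beats shutting it down), while nobody would build a second one.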
Matt Yglesias and some other people want to say that a good way to deal with the rising cost of health care is to reduce the profits of medical care providers. We know, for example, that drugs cost pennies to produce while drug companies sell them for much higher prices. That means that drug companies are making a profit, and if those profits were reduced, they would keep on producing drugs all the same.
But what you have to recognize is that drug company profits aren’t some kind of closely-guarded secret. Everyone knows that drugs don’t cost much to make once you have already developed them. That means that investors have to make a decision: is it better to invest in creating a new drug, possibly to be rewarded later by “profits” on selling them, or should they invest in something else? The very last investor should be at the point where he expects to earn just enough in future profits to pay off his current investment. Cut the profits, and the marginal investment that was just barely worth making no longer is; investment will settle at a much lower level.
That’s why in the long run, it’s quite silly to argue that we can just cut spending on medical care without affecting the supply. Of course, current doctors are going to remain doctors, even if you cut their salary in half, because they can make more money as doctors than they can doing anything else. But the student deciding whether or not to enter medical school is looking at a very different situation. At present, the reward to being a doctor is just high enough so that the student on the margin is indifferent between that career and his next-best option. Cut the reward, and a lot of students are going to decide that it’s not worth the time and trouble to become a doctor. Think about what you would have to believe to believe that the number of students in medical school would be the same if doctors’ salaries were cut in half. I suspect you can’t possibly believe it.
So, don’t be fooled by “profits” that are really just current rewards for past investments. In the short term, if you cut those profits it’s not going to make much of a difference. Long term, though, it’s pretty much crazy to believe that you can cut spending and nothing else will have to change.
April 21, 2011
Reading a post over at Aidwatchers, “An Ignorant Perspective on Libya”, it occurred to me what it is that unites my positions on most everything.
So I was thinking, what are the people and things that I am most interested in or convinced by? F.A. Hayek. Adam Smith. Immanuel Kant. The Supreme Court. Religion. Libertarianism. What all of these have in common is the idea of following strict rules.
People always try to stymie Kant’s ethics by asking, “Well, what if the Nazis showed up at the door, and they asked ‘Where are you hiding the Jews?’” Kant says you should never lie, so presumably the idea that you have to tell the truth here is pretty devastating.
It can be devastating, I myself won’t lie about that. You probably should lie to the Nazis. But my question is, if you adopt a rule that says, “I am never going to lie, except when I judge that I am able to avoid something really bad by lying,” how often are you going to end up lying to protect Jews from Nazis, and how often are you going to lie just because it’s convenient for you?
I’m talking about Type I and Type II errors (I always forget which is which, but I looked it up on Wikipedia again just now). Type I errors are false positives, in this case thinking it is a good idea to break a rule when it really isn’t, while Type II errors are false negatives, failing to break a rule when it would have been a good idea to break it.
So basically, what makes me special is that I’m willing to tolerate a lot of Type II error to avoid Type I error, and that I make most of my case for what rules we should have on the basis of epistemic problems. We can never know very much, and if we try to say we’ll just do the best we can with what we know at any given time, my perspective is that this will lead to a lot of bad decisions.
If I were a very skilled writer, I would make it more clear that the title of this post is ironic. The idea of the post is that when trying to construct a system that deals with what we should do, we can’t assume that we are special and thus the rules don’t apply to us. Because it’s not true.
April 18, 2011
Probably not of general interest, but maybe of general interest?
“If you are falling from a height it is not cowardly to clutch at a rope. If you have come up from deep water it is not cowardly to fill your lungs with air. It is merely an instinct which cannot be destroyed.”
-Nineteen Eighty-Four, p. 284
“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”
“In fact,” said Mustapha Mond, “you’re claiming the right to be unhappy.”
“All right then,” said the Savage defiantly, “I’m claiming the right to be unhappy.”
“Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen tomorrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind.” There was a long silence.
“I claim them all,” said the Savage at last.
-Brave New World, pp. 263-4
A second reason for not plugging in is that we want to be a certain way, to be a certain sort of person. Someone floating in a tank is an indeterminate blob. There is no answer to the question of what a person is like who has long been in the tank. It’s not merely that it’s difficult to tell; there’s no way he is. Plugging into the machine is a kind of suicide.
-Anarchy, State, and Utopia. p. 43
“And so, turmoil, confusion, and unhappiness—these are the present lot of mankind, after you suffered so much for their freedom! Your great prophet tells in a vision and an allegory that he saw all those who took part in the first resurrection and that they were twelve thousand from each tribe. But even if there were so many, they, too, were not like men, as it were, but gods. They endured your cross, they endured scores of years of hungry and naked wilderness, eating locusts and roots, and of course you can point with pride to these children of freedom, of free love, of free and magnificent sacrifice in your name. But remember that there were only several thousand of them, and they were gods. What of the rest? Is it the fault of the rest of feeble mankind that they could not endure what the mighty endured? Is it the fault of the weak soul that it is unable to contain such terrible gifts? Can it be that you indeed came only to the chosen ones and for the chosen ones? But if so, there is a mystery here, and we cannot understand it. And if it is a mystery, then we, too, had the right to preach mystery and to teach them that it is not the free choice of the heart that matters, and not love, but the mystery, which they must blindly obey, even setting aside their own conscience. And so we did. We corrected your deed and based it on miracle, mystery, and authority. And mankind rejoiced that they were once more led like sheep, and that at last such a terrible gift, which had brought them so much suffering, had been taken from their hearts. Tell me, were we right in teaching and doing so? Have we not, indeed, loved mankind, in so humbly recognizing their impotence, in so lovingly alleviating their burden and allowing their feeble nature even to sin, with our permission? Why have you come to interfere with us now? And why are you looking at me so silently and understandingly with your meek eyes? Be angry! I do not want your love, for I do not love you. 
And what can I hide from you? Do I not know with whom I am speaking? What I have to tell you is all known to you already, I can read it in your eyes. And is it for me to hide our secret from you? Perhaps you precisely want to hear it from my lips. Listen, then: we are not with you, but with him, that is our secret!”
-The Brothers Karamazov, pp. 256-7
March 22, 2011
I have no particularly warm feelings towards Greg Mankiw. He’s not an exceptionally libertarian or conservative economist. He hasn’t written anything that I’m aware of that makes any sort of argument for more conservative public policy. If he has, it’s not famous or widely-cited by libertarians. The main economic theory with which he is associated, New Keynesianism, is basically dedicated to showing how the discoordinated market is subject to failure and inefficiency, and how wise public policy might improve the situation. Greg Mankiw is not exactly Milton Friedman.
And yet, for some reason I get extremely angry when people make personal attacks against Greg Mankiw. I don’t know why, I can’t explain it, it just makes me very, very angry. Two examples:
First: A while back, Nate Silver wrote a post, entitled “Intellectual Dishonesty (Gasp!) from a Conservative Economist”, in which he claims that Mankiw misrepresents the conclusions of a paper by David and Christina Romer. Problem: Silver is wrong. Romer and Romer concluded exactly what Mankiw said they did. In his post on Mankiw, Silver argues that the Romers’ conclusion, which Mankiw cites, is not relevant for the question at hand. This argument, right or wrong, is speculative and is a challenge to the Romers’ conclusion, not a more accurate presentation of it. Silver is wrong on the facts about what the Romers were doing and why they were doing it, and uses his misunderstanding to accuse Mankiw of dishonesty.
Second: In two separate posts, Matt Yglesias accuses Greg Mankiw of dishonesty and malfeasance for linking to a table describing the share of taxes paid by the top 10% of earners in various OECD nations. Yglesias notes that although income tax is a progressive tax, other taxes are regressive, and the fact that those taxes are not included in the table makes linking to the table dishonest.
Yglesias is wrong on the facts before we even get to the level of trying to understand what those facts mean. As a commenter on Scott Sumner’s blog put it:
“The only way that Mankiw, the Tax Foundation and the OECD are misleading is if those regressive taxes would make the US look less progressive, right? Why else would Yglesias argue that those 3 sources are misleading when they all point out that the US is more progressive than other countries?
But he is wrong.”
Whether you believe the table Mankiw links to has any important implications or not, Yglesias’ description of this chart as “misleading” has no basis, unless he means that the US system is actually more progressive than this table would make it appear. Because it is. But that’s definitely not what Yglesias means.
And now a chart that predicts income share as a function of tax share is getting posted all around the blogosphere. Come again? Why would you want to predict income share as a function of tax share? We’re trying to figure out in what countries the top decile pay a higher or lower share of the total taxes, relative to their income share. So wouldn’t the correct chart be this one?
The United States is above the trendline, meaning that US top-decile earners pay a higher share of taxes than you would expect, given their share of earnings.
The only real lesson here is that if you are going to attack someone’s character, you should be damn sure that you have the facts right before you do it. Nate Silver and Yglesias should be embarrassed that their “Gotcha!” posts turned out to be factually mistaken. And unless they are willing to say that their own partisan ideology caused them to make the errors they made, they should probably cool it with the attacks on Mankiw’s character for making what might just as easily be an honest mistake. The fact that Silver and Yglesias got the analysis wrong is a hint that it’s not just partisanship that causes people to err.
March 21, 2011
March 13, 2011
“What’s a ‘p-value’ and why do I need one?”
Late last night somebody asked me roughly that question, which led me to realize that I can’t really formulate a clear description of what’s going on in regression analysis. So I thought I’d write a post and give it a try.
Imagine that somebody tells you that, because of a recent head trauma he suffered, he can accurately predict the outcome of coin tosses. Not being an overly credulous person, but having seen a number of sitcoms where such events are possible, you want to find out whether or not what your friend says is true.
You propose a simple test: your friend makes a prediction, “heads” or “tails,” and then you flip a coin and record whether or not he got the prediction right. You will then repeat this simple procedure for 10 total coin flips, and add up his scores to see how many he got right overall.
From intuition, it’s clear that getting, say, 6 right out of 10 would not be a very impressive result. After all, anyone has a 50% chance of guessing the outcome of any given coin toss. You would hardly be ready to declare your friend psychic if he got one more correct answer than you would expect from pure chance.
How many correct answers would it take to convince you that there is something to these legendary sitcom injuries? Eight correct? Nine? When you make this judgment, you are implicitly comparing your friend’s result to the result you would expect from a person with no special powers, and thinking about whether the difference between those two results is large enough to be convincing.
The p-value is just a formalization of this intuition (the “p” stands for “probability”). After you perform the test and look at the number your friend got correct, you want to answer the question, “What is the chance I would see a result as extreme as the one I am now seeing, if my friend were just a normal person with no powers?”
For flipping coins, it turns out this question is very easy to answer. If you guess and flip a coin once, your chance of getting it right is 50%. The chance of getting it right twice in a row is 25%. The chance of getting it wrong twice in a row is also 25%. Some basic extensions of this type of reasoning lead to the following table, for 10 coin flips:
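The table’s contents follow directly from counting outcomes: the chance of getting exactly k predictions right out of 10, for a pure guesser, is the binomial probability. A minimal Python sketch that reproduces the numbers:

```python
from math import comb

# Probability of getting exactly k predictions right out of 10 coin
# flips, assuming pure 50/50 guessing: C(10, k) out of 2^10 outcomes.
n = 10
probs = [comb(n, k) / 2**n for k in range(n + 1)]

for k, p in enumerate(probs):
    print(f"{k:2d} right: {p:7.2%}")
```

The distribution is symmetric around 5, which is why extreme results on either side count as evidence against “no powers.”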
To put the same information in a chart, it looks like this:
At this point, it helps to reformulate what your friend is saying into a testable hypothesis. What we would expect, for a normal person, is that when doing this test with 10 coin flips, the average number they would get right is five. What your friend is saying, in effect, is “My personal average is not five.” The purpose of this test is to figure out whether it’s reasonable to believe your friend.
Okay, so let’s say you do the test and your friend gets 8 out of 10. Vindicated? I don’t know, let’s see. If his real average is 5, what is the chance of seeing a result at least as far away from 5 as this is? To find out, we look at all the results that are at least three away from five. That means 8, 9, and 10, but also 0, 1, and 2. Adding up all those percentages, we find that the probability that a normal person would get a result at least this extreme is almost 11%. It was a pretty good performance overall, but not really overwhelming evidence of psychic powers.
What if you had performed the test and your friend got 0 out of 10 correct? That is, every single time he says “heads,” it comes up “tails,” and vice-versa. Even though all his answers were wrong, you could still take this as strong evidence that his personal expected average is different from 5. According to the table, in only 0.20% of cases would you see a result as extreme as this one for a normal person (chance of getting 0 right plus chance of getting 10 right). Even though he got all the answers wrong, it would still seem to be the case that you can just reverse all his predictions and do better than chance.
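Both calculations above can be checked in a few lines, using the same binomial probabilities (the fair 50/50 guess is the “no powers” hypothesis):

```python
from math import comb

# Binomial probabilities for 10 fair coin-flip guesses.
n = 10
pmf = [comb(n, k) / 2**n for k in range(n + 1)]

# 8 out of 10: every outcome at least 3 away from 5 counts,
# i.e. 0, 1, 2 on the low side and 8, 9, 10 on the high side.
p_eight = sum(pmf[:3]) + sum(pmf[8:])
print(f"{p_eight:.4f}")  # prints 0.1094 -- "almost 11%"

# 0 out of 10: only the two most extreme outcomes (0 and 10) qualify.
p_zero = pmf[0] + pmf[10]
print(f"{p_zero:.4f}")  # prints 0.0020 -- i.e. 0.20%
```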
So this is basically what we’re trying to do when we do statistical analysis. We have this information, our friend saying “heads” or “tails,” that we’re trying to use to predict a real-world event, a coin actually coming up either heads or tails. We want to know if his information helps us make better predictions or if it’s just garbage. The p-value is a way to measure that. It tells us that if his predictions were really garbage, this is the probability that they would have seemed to provide a prediction at least as useful as the one they actually did provide.