Wednesday, November 30, 2011

Too Much Imprisonment

The U.S. criminal justice system and its health care system have an intriguing point of similarity: They are both systems where the U.S. spends a lot more than other countries, but doesn't see substantially better outcomes--whether in terms of better health or lower crime rates.

It's perhaps useful to begin with a brief review of how the U.S. crime rate has evolved over time. Perhaps the best-known source is the Uniform Crime Reports data from the FBI, but there is also a National Crime Victimization Survey. The broad trends in all of these data sources are the same. For example, in the UCR data, the number of violent crimes doubled from about 300,000 at the start of the 1960s to 600,000 by 1968, then doubled again to 1.2 million in 1979, and continued rising to a peak of 1.9 million in the early 1990s. However, since then the number of violent crimes has been falling, and was down to 1.3 million by 2009. In other words, rates of crime have been falling for two decades, and have fallen substantially.

Nonetheless, here's Richard Posner writing in "Incarceration Blues" in the November 17 issue of the New Republic: "For the percentage of our [U.S.] population that is incarcerated is the highest of any country in the world; and at a shade under 3 percent (though it has been falling recently because of the states’ fiscal distress), it exceeds by factors of 4 to 7 the percentage incarcerated in any of our peer countries. And yet our crime rates are generally no lower than in those countries, and our murder rate is much higher. Our higher rate of incarceration is attributable primarily to more arrests and convictions rather than to our criminalizing more activities or imposing longer prison sentences, though those are factors as well."

Many of the basic facts can be illustrated with figures from the Bureau of Justice Statistics at the U.S. Department of Justice website. As a starting point, the rate of imprisonment per 100,000 people has quadrupled over the last 30 years, and the number of people on parole has quintupled. I'm certainly open to the argument that imprisonment rates needed to rise in response to the higher crime rates from the 1960s through the 1980s. I believe that the higher imprisonment rate helped to bring down crime rates in the 1990s. For example, economist Steven Levitt makes this case in a Winter 2004 article in my own Journal of Economic Perspectives. But did the rate of imprisonment need to quadruple--and then to remain at that level? At a cost (including prison construction costs) of about $50,000 per inmate per year, we may well have taken a defensible public policy and pushed it too far.

With these increases, rates of imprisonment in the United States are far higher than in other countries. Here's a graphic (including jails as well as prisons) from a June 22, 2010, article in The Economist: "Rough justice in America: Too many laws, too many prisoners."


Unsurprisingly, the costs of the criminal justice system have been rising dramatically as well. Here are graphs again from the Bureau of Justice Statistics. The numbers are adjusted for inflation (into year 2007 dollars). Although this area of spending has leveled out in the last few years, it has done so at levels double or more those of the early 1980s.

Given the pressures on state and local budgets, if nothing else, what might be done about this? Here are some charts that suggest some possible directions. First, Posner noted in the quotation above that more arrests are a large part of the reason for the higher U.S. imprisonment rate. A large share of the increase in arrests is for drug-related offenses, which have tripled in the last three decades. Moreover, if one looks at the number of those imprisoned for non-violent drug offenses, the total has risen from near-zero three decades ago to about 250,000 today.

Another issue is that spending in the criminal justice system is tilting more heavily toward prisons, and less toward police: in practice, this means more toward punishing crime after it has happened and less toward keeping the peace beforehand. Notice in the graph below that "corrections" spending was about half of police spending in 1982, but has risen to about 3/4 of police spending.

On pure crime-reduction grounds, one can make a plausible case for acting to reduce the number of people in prison for non-violent crimes like a number of drug-related offenses, and transferring the funds saved to less expensive community supervision and more active programs for reducing crime in communities--including more police on the streets and drug treatment programs. I also suspect that when our society decided to de-institutionalize its mental patients, we ended up re-institutionalizing some of that same population, more expensively, in prisons.

But in addition, it is not healthy for the land of the free to have reached a situation where almost one male out of every 20 in the entire country--including those on probation and parole--is under the supervision of the criminal justice system. For African-Americans, the rate of those under the supervision of the criminal justice system is nearly one out of ten. (For details, here is a 2009 Pew Foundation report.)

People--and especially young men--will commit crimes. Many of them, especially repeat violent offenders, should do some time. But for the crime rate, and for our communities, and for the offenders themselves, we need to find more productive answers than imprisonment at $50,000 per year for nonviolent criminals. Earlier this week, a police officer named Daniel Horan wrote an op-ed in the Wall Street Journal, "The Just-Say-No Crimebusters," about how pairing the threat of punishment with a direct request that petty criminals stop can be remarkably effective.







Tuesday, November 29, 2011

Credit Rating Agencies

One of the many groups of financial institutions thoroughly embarrassed by the financial crisis was the big credit rating agencies: Standard & Poor's, Moody's, and Fitch. They had rated some of the financial securities based on pooling together subprime mortgages as AAA-safe, which is what made it legally OK for banks to hold such securities--thus creating a situation in which the popping of the bubble in housing prices caused banks to suffer severe losses. For some history on how U.S. government financial regulators outsourced their judgment on the riskiness of financial securities to the credit rating agencies, see my post from last August: "Where Did S&P Get Its Power? The Federal Government (Of Course)."

Nicolas Véron discusses "What Can and Cannot Be Done About Credit Rating Agencies" in a Policy Brief written for the Peterson Institute for International Economics. Here are some highlights (citations and footnotes omitted):

Credit rating agencies (CRAs) have had some successes, and some very prominent failures.

"From this standpoint there was a clear failure of CRAs when it came to US mortgage-based structured products in the mid-2000s. Many mortgage-based securities were highly rated but had to be downgraded in large numbers following the housing market downturn in 2006–07, especially in
the subprime segment. Subsequent enquiries, in particular the Securities and Exchange Commission and Financial Crisis Inquiry Commission, have convincingly linked the CRAs’ failure to a quest for market share in a rapidly growing and highly profitable market segment. Under commercial pressure, CRAs failed to devote sufficient time and resources to the analysis of individual transactions, and also neglected to back single transaction assessments with top-down macroeconomic analysis that could have alerted them to the possibility of a US nationwide property market downturn. ...

"[T]here have been several past cases in which rating agencies clearly failed to spot deteriorations of sovereign or corporate creditworthiness in due time. This was particularly true of Lehman, AIG, and Washington Mutual, which kept investment-grade credit ratings until September 15, 2008. CRAs were similarly criticized for their failure to anticipate the Asian crisis of 1997–98 or the Enron bankruptcy in  late 2001. It appears fair to conclude that all three main CRAs have a decent though far from spotless record in sovereign and corporate ratings, but that their hardly excusable failure on
rating US residential mortgage-based securities in the mid- 2000s has lastingly damaged their brands and reputations."

Credit rating agencies typically follow the market, rather than leading it.

"Since the beginning of the crisis, CRAs have frequently been accused of timing their downgrades badly and of precipitating sudden negative shifts in investor consensus. However, it is infrequent that rating downgrades surprise markets—generally they follow degradations of market sentiment rather than precede it. When CRAs do anticipate, they are often not given much attention by investors, such as when S&P started downgrading Greece in 2004."

The most useful reforms of credit rating agencies may focus on those issuing the securities in the first place.

Véron offers an exceptionally level-headed discussion of what might be done about the credit rating agencies. For example, one could forbid such ratings, either in general or when markets are turbulent; but beyond the issues of suppressing freedom of speech, it's hard to see how suppressing information is workable or would help to calm a panic. Some in Europe have suggested the creation of a new non-profit quasi-governmental credit rating agency. But could it have any legitimacy among investors? There have been calls to regulate specifically how credit ratings are done. But the difficulty here is that the credit rating agencies did not know how to rate certain financial securities, and it's not clear that government regulators knew any more. If anything, we need more innovative and insightful measures of credit risk to be created, not for certain methods to be set in concrete. Also, tighter regulation would probably raise the cost of entering this market, making new entrants less likely.

Thus, Véron emphasizes that it may be most useful to require those issuing securities, and asking to have them rated, "to disclose more standardized and audited information about their risk factors and financial exposures." This new regulatory rule would apply to corporate bond issues, to other financial securities being rated, and also to sovereign debt being rated. Credit rating agencies could be required to conduct their ratings with whatever methods they choose--but based only on the publicly available information. He concludes:

"To put it in a simplistic but concise way, what is needed is “a John Moody for the 21st century.” CRAs themselves can perhaps be somewhat improved by adequate regulation and supervision, but public policy initiatives that focus only on CRAs are unlikely to adequately address the need for substantially better financial risk assessments. If real progress is to be made towards a better public understanding of financial risks, it will have to involve innovative approaches that even well-regulated
CRAs, on the basis of recent experience, may not be the best placed to deliver."

Finally, here's a useful table showing the dominance of S&P, Moody's and Fitch in the market for credit rating services.






Monday, November 28, 2011

How Alexander Del Mar (Who?) Scooped Milton Friedman

Milton Friedman famously proposed in the 1960s that rather than having a central bank pursue a discretionary monetary policy, and thus in a world of time lags and policy uncertainties end up contributing to economic booms and busts, it would be better if the central bank just managed the money supply to grow at a constant rate of 3% per year. Unknown to Friedman, Alexander Del Mar made this same proposal in 1886.

Which raises the question, "Who the heck was Alexander Del Mar?" George S. Tavlas gives a lively and readable overview in "The Money Man" in the November/December issue of the American Interest.

Those who have heard of Del Mar make some striking comments about him. According to Tavlas, James Tobin called him "one of the most important U.S. monetary economists of the 19th century," while Robert Mundell called him “too hot to handle.” Among other professional accomplishments, Del Mar was a co-founder in 1865 of the New York Social Science Review: Devoted to Political Economy and Statistics, often thought of as one of the first economics journals published in the United States. He was also the first director of the U.S. Bureau of Statistics in 1866--which later evolved into the U.S. Department of Commerce.

But among students of monetary history, Del Mar is best-known for having challenged the prevailing view of his time that money needed to be something with intrinsic value, like gold or silver. Tavlas explains:

"He [Del Mar]  referred to the example of the ancient states of Ionia, Byzantium, Sparta and Athens. As far back as the 10th century BCE, these states created huge discs of sheet iron or bronze that had no practical value but served as common measures of value against which all exchanges of goods could take place. In other words, Del Mar realized that money originated not to serve as a medium of payment in purchases (these discs were too heavy and bulky to be exchanged for goods), but to serve as a measure of value, or what economists call “a unit of account.” By establishing common units of account, early societies enabled the direct exchange of goods against goods to take place without the need for a physical object to be interposed as a medium of exchange. Del Mar saw that the existence of a unit of account is a necessary condition for the emergence of a full-fledged money economy ..."

"The observations—first, that the function of money is to measure value rather than itself be value, and, second, that in advanced societies the state determines what is used as money—allowed Del Mar to develop a further insight. Since money measures value, if the state creates too much money, then it can cause an inflation of prices. In such a circumstance, the valuableness of money diminishes because its ability to perform its primary function degrades—that of measuring the value of one good against another. Money, he argued, is a measure like a yardstick."

Thinking of money as a unit of account for transactions and as a yardstick of value is standard fare in modern economics, but it was radical in Del Mar's time. However, these ideas are what led Del Mar to propose his rule that the government should guarantee that the supply of money would rise at 3% per year--which was Del Mar's estimate of the annual increase in the supply of goods and services each year. His proposal was well-timed, since in 1886 the U.S. economy was living through a period of ongoing deflation with periodic severe recessions. But his advice was ignored.

Why is Del Mar so little-known? Tavlas makes a case that part of the reason may be that Del Mar was Jewish, and overt anti-Semitism was common at the time among many economists and universities. Another reason is that, when it came to monetary theory, Del Mar was too far ahead of his time.




International Travel: Boosting America's Biggest Export Industry

Most Americans don't think of themselves as living in a lucrative destination for tourism. Moreover, after the terrorist attacks on 9/11, procedures for anyone visiting the United States were understandably tightened. But partly as a result of not perceiving the opportunity and partly as a result of overcautiousness, the U.S. economy is missing out on the potentially job-rich growth of international tourism.


A few months ago, I touched on this theme when writing about  "Where Will America's Future Jobs Come From?"  A McKinsey Global Institute report looked at opportunities in different sectors for job creation and noted: "[T]o reach the high-job-growth scenario, the United States needs to retake lost ground in global tourism. ... In particular, the United States is not getting its share of tourism from a rising global middle class. More Chinese tourists visit France than the United States, for example."



Roger Dow, who is President and CEO of the trade group the U.S. Travel Association, wrote an op-ed in the Wall Street Journal on November 21 called "America's Lost Decade of Tourism." His piece hit the high spots of a longer report from the U.S. Travel Association called "Ready for Takeoff: A Plan to Create 1.3 Million U.S. Jobs by Welcoming Millions of International Travelers."
Here are some facts and comments from the report (footnotes omitted for readability):

"Between 2000 and 2010, the international long‑haul travel market grew by 60 million travelers each year. And yet, in 2010, the United States attracted essentially the same number of travelers as in 2000. International travel remains one of the few bright spots in the global economy, generating exports worth $1.1 trillion and supporting more than 96 million jobs worldwide in 2010. Despite the fragile economic recovery, global travel spending continues to grow at impressive rates, leading
some economists to describe it as a “gold rush.” ... Over the coming decade, long-haul arrivals are
forecast to rise by an additional 40 percent. Global travel spending is forecast to double between 2010 and 2020, reaching $2.1 trillion and making travel an increasingly important contributor to GDP growth for countries able to attract more overseas visitors."

"Lawrence Summers, former director of the National Economic Council, recently observed that “the easiest way to increase exports and close the trade gap is by increasing international travel to the U.S.” In fact, international travel is already the United States’ largest industry export, representing
8 percent of U.S. domestic exports of goods and services in 2010 and nearly one-fourth of services
exports alone."
"Therefore, the United States should make it a national priority to restore our share of the global long-haul travel market, currently at 12 percent, to the 2000 level of 17 percent. Achieving this goal by 2015 and sustaining it through 2020 would add nearly $390 billion in U.S. exports over the next decade and create 1.3 million more American jobs by 2020."

"[T]oday 35 percent of overseas visitors to the United States require an entry visa. Looking forward, that number is expected to rise to 51 percent. Put another way, the greatest growth in the world travel market is expected to occur in countries where the U.S. is already unable to meet existing demand for visas. The visa system is undermining our ability to compete for travel exports."


"Overall, the entire visa application process from end to end can take as long as 145 days in
Brazil and 120 days in China. In comparison, the United Kingdom takes an average of 12 days to process visas in Brazil and 11 days in China. ... In our survey, more than 40 percent of Chinese
respondents, 35 percent of Indian respondents and 29 percent of Brazilian respondents cited visa
costs as a barrier. For millions of overseas travelers seeking admission to the United States, the $140
application fee for a U.S. visa represents just the tip of the iceberg. Travelers must also
pay additional fees that vary from country to country, but can add as much as $50 to the
application fee. On top of these fees, travelers who do not live in a city where a U.S. consulate is located must incur hundreds or thousands of dollars in expenses (and take time off from their
work or studies) to complete the mandatory face-to-face interview. In Brazil, for example, just one embassy and three consulates serve a country spanning 3.3 million square miles with a population of 199 million. Eleven cities with more than one million inhabitants do not have a U.S. visa-processing center. The lack of accessibility to consular offices is an even bigger issue in China, where the United States has just five visa processing operations serving a much larger market. Indeed, there are 27 cities in China and eight in India with more than two million inhabitants that do not have a U.S. visa-processing center. By comparison, the United Kingdom has 12 visa facilities in China and 10 in India, while France has six in China and five in India."

"Among all overseas travelers to the U.S., those from China, India and Brazil rank first, second
and fourth, respectively, in spending. Because of these high levels of traveler spending, one visitor
from India is roughly equal to two visitors from the United Kingdom, Germany or France in
terms of average spending."


The U.S. Travel Association report starts with a provocative comparison that sums up the effects of these barriers imposed by the current visa process:  "Imagine an overseas biker desperate to own a Harley-Davidson—a purchase that would increase U.S. exports and improve our trade balance. Unfortunately, his government has put in place several barriers that make it more difficult and expensive to purchase this American cultural icon. Before he can even place his order, he must wait several weeks for an interview and travel hundreds of miles to a distant government office to get to an appointment. On top of that, he must pay $140 up front just to request the opportunity to purchase a Harley, with no assurance he will actually be able to buy one. ... If any foreign government even attempted to create such onerous barriers to U.S. exports, members of Congress would instantly threaten trade reprisals; U.S. government trade lawyers would quickly file legal actions with the World Trade Organization; and government policymakers at all levels would search to find a way to end these restrictions. Amazingly, the United States has imposed almost exactly these types of restrictive trade barriers on itself while competing in one of the most critical global export markets—the $1.1 trillion market for international travel."

Sure, the U.S. Travel Association is a trade group. Take the details of its predictions about gains from tourism with a grain of salt. But the overall storyline is correct: The U.S. is missing an opportunity for a major growth industry as a destination for international tourism, where the major hurdle is the inability of the federal government to set up a timely and convenient process for tourist visas.

Friday, November 25, 2011

Brain Science and Economics

It's becoming more possible every day to see what happens in the human brain. One tool is functional magnetic resonance imaging, or fMRI, which measures which parts of the brain are especially active when facing certain kinds of choices. There are "transcranial magnetic stimulation" and "transcranial direct current stimulation," in which a researcher can increase or decrease neural activity in a specific region of the brain and observe the effect on choices. There is electroencephalography, or EEG, in which a bunch of wires and stickers record electrical activity on the scalp, which picks up neural activity in the brain.

What can economics learn from this evidence? The Fall 2011 issue of my own Journal of Economic Perspectives offers a couple of articles on this theme. Two economists, Ernst Fehr and Antonio Rangel, offer their sense of what has been learned in "Neuroeconomic Foundations of Economic Choice—Recent Advances." Two brain scientists, Marieke van Rooij and Guy Van Orden, then offer a counterpoint in "It’s about Space, It’s about Time, Neuroeconomics and the Brain Sublime."

Fehr and Rangel note several times that economic applications of brain science to how people make choices are really just getting started. However, they also believe that these studies are coming together around a five-part model of how the brain makes choices:

1. The brain computes a decision value signal for each option at the time of choice.
2. The brain computes an experienced utility signal at the time of consumption.
3. Choices are made by comparing decision values using a “drift-diffusion model” (a minimal simulation appears just after this list).
4. Decision values are computed by integrating information about the attributes associated with each option and their attractiveness.
5. The computation and comparison of decision values is modulated by attention.
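
To make item 3 concrete, here is a minimal sketch of a drift-diffusion simulation in Python. The drift scale, noise level, and decision threshold below are illustrative assumptions for the sketch, not parameters from the Fehr and Rangel article; the point is only the mechanism, in which noisy evidence accumulates toward one of two decision boundaries.

```python
import random

def drift_diffusion_choice(value_a, value_b, drift_scale=0.1,
                           noise_sd=1.0, threshold=10.0, max_steps=10_000):
    """Simulate one choice between options A and B.

    Evidence accumulates in small noisy steps whose average direction is
    proportional to the difference in decision values; a choice is made
    when the accumulated evidence hits either boundary. All parameter
    values are illustrative assumptions.
    """
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift_scale * (value_a - value_b) + random.gauss(0, noise_sd)
        if evidence >= threshold:
            return "A", step
        if evidence <= -threshold:
            return "B", step
    return "no decision", max_steps

# A larger gap in decision values produces faster, more consistent choices.
random.seed(0)
for gap in (0.5, 2.0):
    trials = [drift_diffusion_choice(5 + gap, 5) for _ in range(1000)]
    share_a = sum(1 for choice, _ in trials if choice == "A") / len(trials)
    mean_steps = sum(steps for _, steps in trials) / len(trials)
    print(f"value gap {gap}: chose A {share_a:.0%} of the time, "
          f"mean {mean_steps:.0f} steps")
```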

While the evidence on some of these statements is stronger than for others, and while complex choices like whether to save for the future are going to be harder to understand than a choice about which music you would prefer to listen to while having your fMRI scan, they make a solid case for what has been learned.

Van Rooij and Van Orden are more skeptical about what has been learned. They point out that looking at parts of the brain with fMRI scans and trying to identify them as the nexus of cooperation or calculation or risk-taking or other behaviors has been only a modest success so far. Different studies often seem to locate decisions in different parts of the brain, or in different constellations of brain activity.

They point out that looking at data from fMRI scans is actually a matter of looking at about 130,000 "voxels," which are sort of like pixels in a video screen, except that they estimate whether a certain point in the brain is more or less active. In doing such comparisons, the obvious question is how many of these "voxels" must differ before one can conclude that a certain area of the brain really is more active. Indeed, what actually happens in these studies is that the fMRI brain scans of people who undertake these activities under different conditions have their voxels averaged together, and then these averaged-together images are morphed onto a common brain format for comparison. The results of such studies are, understandably, not always robust.

They argue further that it may be mistaken to think of the brain as a set of spatial locations where different decisions happen. Instead, they argue that the brain may be characterized by movements over time, where activities start in certain parts of the brain but then propagate--or not--into other parts of the brain. They tend to favor EEG data over fMRI scans, because the EEG data can capture lots of spots of brain activity second-by-second over time. They argue that the brain over time follows patterns that can be discerned using analysis of fractals.

Both sets of authors are careful to emphasize that the field is very young, and both are optimistic about its possibilities. Fehr and Rangel write: "Neuroeconomics is a nascent field. Much of the basic work remains to be done, and many of the details of the computational models of choice described here are likely to change and evolve over time. However, we hope that this description of the current frontier of neuroeconomics convinces economists that a great deal has already been learned about how the brains make choices, and that these findings already provide insights that are useful in advancing our understanding of economic behavior in many domains." Similarly, van Rooij and Van Orden write: "[B]ringing together reliable economic paradigms with reliable brain science holds a possibility for taking this new science beyond anyone’s wildest dreams."


Genetic Data and Economics: Problems in Drawing Inferences

A number of datasets that have economic and demographic information are also starting to have genetic information about the participants: in the U.S., some examples include the National Longitudinal Study of Adolescent Health, the Wisconsin Longitudinal Study, and the Health and Retirement Study. It is becoming possible, in other words, to look for connections between a person's genes and their education, income, and other economic outcomes. In the Fall 2011 issue of my own Journal of Economic Perspectives, Jonathan Beauchamp, David Cesarini, and a host of co-authors tackle the issue of drawing inferences from these data in "Molecular Genetics and Economics."

The fundamental problem in these studies is that humans have a lot of genes. To be more specific, each person has about 3 billion "base pairs" of DNA material, and "genes" are combinations of these base pairs. However, the human genome includes more than just genes and DNA; there is also RNA and all sorts of other stuff. Figuring out the interactions between DNA, RNA, various proteins, and other ingredients is exciting and cutting-edge work in the life sciences.

For social scientists, working with this data is tricky. Current technologies create data on about 500,000 possible individual differences at the base-pair level in genes; before long, it will be a million and more. To those marinated in a bit of statistics, the problem can be phrased this way: If you have 500,000 independent variables in a least-squares regression, a whole lot of them will be statistically "significant" at conventional levels just by chance. For those to whom that statement carries no particular meaning, think of it this way:

When social scientists look at data, they are always trying to distinguish a real pattern from a pattern that could have happened by chance. To understand the difference, imagine watching a person flip a coin 10 times, and get "heads" every time. The probability of getting "heads" 10 times in a row with a fair coin is .5 raised to the power of 10, or .0009766--which is roughly one in a thousand. If you see a pattern that happens by chance only one time in a thousand, you would strongly suspect something is going on. Maybe it's a two-headed coin? But now imagine that you start off with 500,000 people each flipping a coin. After they have all flipped a coin 10 times, on average 488 of them will have gotten 10 straight heads. In this context, observing 10 straight heads is just what happens a certain amount of the time because of random chance when you start with very large numbers of people.
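
A quick simulation confirms the arithmetic in this example; here is a minimal sketch in Python (the seed is arbitrary):

```python
import random

random.seed(1)
n_people, n_flips = 500_000, 10

# Count how many of 500,000 fair-coin flippers get 10 heads in a row
# purely by chance.
all_heads = sum(
    all(random.random() < 0.5 for _ in range(n_flips))
    for _ in range(n_people)
)
print(f"flippers with 10 straight heads: {all_heads}")
print(f"expected by chance: {n_people * 0.5 ** n_flips:.0f}")   # about 488
```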

Bottom line: When you observe a particular event in a fairly small group, you can have some confidence (never complete certainty!) as to whether it occurred by chance. But if you see the same event happen to a small proportion of a really big group, then it could easily have happened by chance. When you have 500,000 pieces of genetic data, it's like a big group, and any individual connection you see could be the result of chance.

What's to be done? Beauchamp, Cesarini, and their co-authors suggest three steps.

First, a researcher who is working with 500,000 variables needs to demand a much more extreme event before concluding that a connection is real. If I'm starting with 500,000 people flipping coins, I want to see someone flip heads maybe 100 times in a row before I conclude that something other than random chance is happening here. There are statistical methods for making this kind of correction, but they are still a work in progress. Research has found 180 different base-pairs that seem to be associated with height, but perhaps many more need to be considered as well, and perhaps considered all at once, not one at a time.
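
One standard version of this kind of correction is the Bonferroni adjustment, which divides the conventional significance threshold by the number of tests. A minimal sketch (the 5 percent family-wise level is the usual convention, not a figure from the article):

```python
n_tests = 500_000
family_wise_alpha = 0.05   # conventional overall false-positive rate

# Bonferroni correction: each individual test must clear a far stricter
# threshold so that the chance of even one false positive across all
# 500,000 tests stays near 5 percent.
per_test_alpha = family_wise_alpha / n_tests
print(f"per-test significance threshold: {per_test_alpha:.1e}")   # 1.0e-07
```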

Second, it becomes extremely important to do the same calculation with multiple different datasets, to see whether the results are replicated. In their JEP article, they look at genetic determinants of education in two different datasets--and fail to replicate the results.

Third, if you're going to have really large numbers of variables, it's useful to have really large populations in your data, which isn't yet true of most of the datasets in this area.

In the same issue, Charles Manski comments on this research in "Genes, Eyeglasses, and Social Policy." He offers several useful insights. For example:

First, a finding that genes cause an effect is totally different from deciding about appropriate social policy. It seems likely that genes are highly correlated with poor eyesight, for example, but that genetic condition is easily and cheaply remedied with corrective lenses. Social policy should be about costs and benefits, not about whether something is "caused" by genes.

Second, it's important to be cautious about interactions of genes, environment, and outcomes. If one looked at genetic patterns and the propensity to eat with chopsticks, for example, one might find a statistical correlation. But the obvious reason is that many of those with the common genetic pattern are also living in a common society, and it's society rather than genes that is causing the correlation with chopstick use. In addition, certain traits like height are definitely highly heritable, but they can still shift substantially over time as the environment alters--as in the way that average human heights have increased in the last century.

Third, Manski expresses some doubt that brute-force statistical calculations with hundreds of thousands of possible explanatory variables will ever yield solid inferences about causality. Instead, he suggests that over time, biologists, medical researchers and social scientists will develop better insights about how genes and all the rest of the activity in the human genome affects various traits. It will then be somewhat easier--if never actually easy--to understand cause and effect.


Thursday, November 24, 2011

Turkey Demand and Supply, and the Thanksgiving Dinner Price Index

(Originally appeared on Monday, 11/21. Bumped to today for your Thanksgiving entertainment.)

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there's anything wrong with that.

The last time the U.S. Department of Agriculture did a detailed "Overview of the U.S. Turkey Industry" appears to be back in 2007. Some themes about the turkey market waddle out from that report on both the demand and supply sides.

On the demand side, the quantity of turkey consumed rose dramatically from the mid-1970s to the mid-1990s, but since then has declined somewhat. The figure below is from the USDA study, but more recent data from the Eatturkey.com website run by the National Turkey Federation report that U.S. producers raised 244 million turkeys in 2010, so the decline has continued in the last few years. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.



On the production side, the National Turkey Federation explains: "Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing - from breeding through delivery to retail." However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized--with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys.  Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:
"In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg  capacity per hatchery in 2007.

Turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent."


U.S. agriculture is full of these kinds of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a "turkey" as a product that doesn't have a lot of opportunity for technological development, but clearly I'm wrong. Here's a graph showing the rise in size of turkeys over time.


The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here's a list of top turkey producers in 2010 from the National Turkey Federation.




For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant "basket" or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and for 26 years, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:


The top line of this graph shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. One could use the underlying data here to calculate an inflation rate: that is, the increase in nominal prices for the same basket of goods was 13% from 2010 to 2011. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate over the last 26 years. But in 2011, the rise in the price of the Classic Thanksgiving Dinner, like the rise in food prices generally, outstripped the overall rate of inflation.
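
The mechanics are simple enough to sketch in a few lines of Python. The dollar figures and index levels below are illustrative assumptions chosen to match the roughly 13% increase mentioned above, not the Farm Bureau's actual data:

```python
# Illustrative cost of the same Thanksgiving dinner basket, two years apart.
# These figures are assumptions for the sketch, not Farm Bureau data.
nominal_2010 = 43.50   # dollars
nominal_2011 = 49.20

dinner_inflation = nominal_2011 / nominal_2010 - 1
print(f"dinner inflation, 2010 to 2011: {dinner_inflation:.1%}")   # about 13%

# Deflating by an overall price index turns the nominal (top) line
# into the inflation-adjusted (lower) line; index levels are assumed.
cpi_2010, cpi_2011 = 218.1, 224.9
real_2011_in_2010_dollars = nominal_2011 * cpi_2010 / cpi_2011
print(f"2011 dinner in 2010 dollars: ${real_2011_in_2010_dollars:.2f}")
```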



Thanksgiving is my favorite holiday. Good food, good company, no presents--and all these good topics for conversation. What's not to like?






Wednesday, November 23, 2011

Underpurchasing of Annuities

Outliving your wealth is a frightening idea for any retiree. An obvious answer is to annuitize some of that wealth, thus guaranteeing a stream of future income for life. In the most recent issue of my own Journal of Economic Perspectives, Shlomo Benartzi, Alessandro Previtero, and Richard H. Thaler make the case that not enough people are annuitizing enough of their wealth in "Annuitization Puzzles."

As a starting point, instead of thinking about life expectancy as a point estimate, think about it as a distribution. The authors look at the distribution of life expectancy at age 65 and explain: "There is a 22-year difference between the 10th and 90th percentile of the distribution for men (dying at 70 versus 92). Similarly, there is a 23-year difference between the 10th and 90th percentile of the distribution for women (dying at 72 versus 95). In other words, one in ten men retiring at 65 might expect to live another 27 years, and one in ten women can expect to live another 30 years. These numbers give a sense of the potential magnitude of the risk of outliving one’s retirement wealth. Of course, annuities are a straightforward way to hedge longevity risk." Here's an illustrative figure:

Of course, any given person might have some knowledge based in family history or lifetime health patterns that helps them to predict their life expectancy within this distribution. Still, looking at average experience suggests that at least some people, perhaps many people, are not annuitizing as much as they should. The authors write: "For men, life expectancy at age 65 has increased from 12.8 years to 16.6 from 1950 to 2010, an increase of nearly 30 percent. Since saving rates have also fallen, it is hard to see how Americans are rationally planning to fund this extended period of retirement—which suggests that some of them are making a mistake." Again, here's an illustrative figure.
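
A stylized sketch of the longevity-risk arithmetic: the nest egg and spending figures below are assumptions for illustration, while the 10th and 90th percentile ages for men (70 and 92) are the ones quoted above.

```python
# Stylized longevity risk: a fixed drawdown versus an uncertain lifespan.
# The nest egg and spending figures are illustrative assumptions.
nest_egg = 300_000        # savings at age 65 (assumed)
annual_spending = 25_000  # fixed real drawdown per year (assumed)

years_funded = nest_egg / annual_spending
print(f"self-funding lasts {years_funded:.0f} years, to age {65 + years_funded:.0f}")

# One man in ten retiring at 65 lives to 92 or beyond: 27+ years of retirement.
shortfall = (92 - 65) - years_funded
print(f"years unfunded for a 90th-percentile lifespan: {shortfall:.0f}")
```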


The authors consider a number of possible reasons why people don't annuitize more: for example, the difficulties of shopping for an annuity, the pain of writing that big check to buy an annuity, and other reasons. They cite research that people tend to like an annuity more if it is sold as providing assured consumption of (say) $650/month for life, but to like the same annuity less if it is sold as providing an investment return of $650/month for life. To many people, guaranteeing consumption sounds safe and reassuring, but an investment return sounds risky and uncertain.

But they also point out that many Americans have a very easy way to annuitize much more of their lifetime wealth: that is, work a few years longer before they start claiming Social Security benefits. Benartzi, Previtero, and Thaler offer a nice example that's due to Jeff Brown: "Individuals are allowed to start claiming Social Security benefits as early as age 62 but do not have to begin claiming before turning age 70. As one waits longer before claiming benefits, those benefits are adjusted upward in an actuarially fair manner. This choice effectively means that by delaying the onset of benefits, participants can buy, at better-than-market prices, a larger annuity, and one that is indexed for inflation and offers survivor benefits. If one wants to buy an annuity at a good price, this is an excellent way to do it. But few participants avail themselves of this opportunity. Most people begin claiming within a year of becoming eligible, and less than 5 percent delay claiming past age 66."
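
To make Brown's point concrete, here is a stylized sketch. For a worker whose full retirement age is 66, claiming at 62 yields roughly 75 percent of the full monthly benefit, while waiting until 70 yields roughly 132 percent; those adjustment factors are approximations, and the $1,000 benchmark benefit is an assumption for the sketch.

```python
# Stylized Social Security claiming arithmetic, assuming a full
# retirement age of 66 and a $1,000 monthly benefit if claimed then.
full_benefit = 1_000                              # assumed benchmark
claim_factor = {62: 0.75, 66: 1.00, 70: 1.32}     # approximate adjustments

for age, factor in claim_factor.items():
    print(f"claim at {age}: ${full_benefit * factor:,.0f}/month for life, "
          "indexed for inflation")

# Waiting from 62 to 70 buys a substantially larger lifetime annuity:
increase = claim_factor[70] / claim_factor[62] - 1
print(f"increase in the monthly annuity from waiting, 62 to 70: {increase:.0%}")
```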

Thus, the easiest way to encourage greater annuitization may be through the Social Security system, where the fact that delayed benefits are adjusted upward in an actuarially fair manner is often not well understood. An additional step would be to design 401(k) and other defined contribution retirement plans so that a clear and attractively phrased annuity option is available.

Tuesday, November 22, 2011

True Love and Other Times When Monetary Incentives are Misguided

In the Fall 2011 issue of my own Journal of Economic Perspectives, Uri Gneezy, Stephan Meier, and Pedro Rey-Biel tackle the issue of "When and Why Incentives (Don’t) Work to Modify Behavior." Many of their well-chosen examples made me smile, but none more than this one.

"Consider a thought experiment: You meet an attractive person, and in due time you tell that person, “I like you very much and would like to have sex with you.” Alternatively, consider the same situation, but now you say, “I like you very much and would like to have sex with you, and, to sweeten the deal, I’m also willing to pay you $20!” Only a certain kind of economist would expect your partner to be happier in the second scenario. However, offering $20 worth of (unconditional) flowers might indeed make the desired partner happier."



The authors point out: "Monetary incentives have two kinds of effects: the standard direct price effect, which makes the incentivized behavior more attractive, and an indirect psychological effect. In some cases, the psychological effect works in an opposite direction to the price effect and can crowd out the incentivized behavior." They investigate these potential conflicts in three situations: incentives for students to learn; incentives to contribute to public goods (like being a blood donor); and incentives to make lifestyle changes like to start exercising or to quit smoking.

Here's their summary of their findings: "When explicit incentives seek to change behavior in areas like education, contributions to public goods, and forming habits, a potential conflict arises between the direct extrinsic effect of the incentives and how these incentives can crowd out intrinsic motivations in the short run and the long run. In education, such incentives seem to have moderate success when the incentives are well-specified and well-targeted (“read these books” rather than “read books”), although the jury is still out regarding the long-term success of these incentive programs. In encouraging contributions to public goods, one must be very careful when designing the incentives to prevent adverse changes in social norms, image concerns, or trust. In the emerging literature on the use of incentives for lifestyle changes, large enough incentives clearly work in the short run and even in the middle run, but in the longer run the desired change in habits can again disappear. ... A considerable and growing body of evidence suggests that the effects of incentives depend on how they are designed, the form in which they are given (especially monetary or nonmonetary), how they interact with intrinsic motivations and social motivations, and what happens after they are withdrawn. Incentives do matter, but in various and sometimes unexpected ways."









Long-Term Care Insurance in the U.S.

The health care reform bill signed into law by President Obama in 2010 included the Community Living Assistance Services and Supports (CLASS) Act, under which the federal government was going to sell long-term care insurance policies directly to the public. However, the law also included a provision that before the program got underway, it had to be certified to be actuarially sound over the long term. On October 14, 2011, the Department of Health and Human Services announced that it could not make such a certification, and thus was going to end the implementation of the program.

Where does this leave insurance for long-term care in the United States? In the Fall 2011 issue of my own Journal of Economic Perspectives, Jeffrey R. Brown and Amy Finkelstein tackle this question in "Insuring Long-Term Care in the United States." Here is a sampling of the insights that emerge:

Only a minority of older Americans have long-term care insurance. Even among those in the top wealth quintile, who have the most wealth to protect by purchasing long-term care insurance, only a bit more than a quarter have it.

Long-term care insurance policies with only moderately decent coverage can still be quite expensive. Brown and Finkelstein look at annual premiums for several kinds of long-term care insurance policies as of mid-2010: "These policies all cover institutional and home care, and have a maximum daily benefit amount of $150. They differ in their deductible (60 day or 30 day), their benefit period (four year or unlimited), and whether or not the daily benefit is constant in nominal terms or escalates at 5 percent per year (compounded)." For the record, average nursing home costs are already above $200/day, so this insurance would cover only part of future costs.




The loads on long-term care insurance are very high. A "load" is a measure of how the expected value of all premiums paid compares to the expected value of all benefits received. A load of zero means these are equal to each other. A load of, say, 20% means that for every $1 in premiums paid, you can on average expect 80 cents in benefits. The formula for load is:

Load = 1 − (expected present discounted value of benefits) / (expected present discounted value of premiums)

Moreover, the idea of load can be extended to take into account that many people take out a long-term care policy, but at some point stop paying the premiums--and as a result get little or nothing in benefits. The first column shows loads assuming that people keep their policies; the second column shows the load with this "lapsation" and policy termination taken into account.
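
A minimal numerical sketch of the load calculation, with made-up premium, benefit, and lapse figures (none of these numbers are from the article):

```python
# Illustrative load calculation for a long-term care policy.
# All dollar values and probabilities here are made-up assumptions.
epv_premiums = 10_000   # expected present discounted value of premiums
epv_benefits = 8_000    # expected present discounted value of benefits

load = 1 - epv_benefits / epv_premiums
print(f"load if the policy is held to term: {load:.0%}")   # 20%

# Suppose 30 percent of buyers eventually lapse, collecting nothing
# after having paid $6,000 of the $10,000 in expected premiums.
lapse_prob = 0.30
epv_benefits_lapse = (1 - lapse_prob) * epv_benefits
epv_premiums_lapse = (1 - lapse_prob) * epv_premiums + lapse_prob * 6_000
load_with_lapsation = 1 - epv_benefits_lapse / epv_premiums_lapse
print(f"load with lapsation: {load_with_lapsation:.0%}")   # higher
```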


Long-term care insurance requires a very long-term contract, which raises problems of its own. Brown and Finkelstein write: "First, the organization and delivery of long-term care is likely to change over the decades, so it is uncertain whether the policy bought today will cover what the consumer wants out of the choices available in 40 years. Second, why start paying premiums now when there is some chance that by the time long-term care is needed in several decades, the public sector may have substantially expanded its insurance coverage? A third concern is about counterparty risk. While insurance companies are good at pooling and hence insuring idiosyncratic risk, they may be less able to hedge the aggregate risks of rising long-term care utilization or long-term care costs over decades. In turn, potential buyers of such insurance may be discouraged by the risk of future premium increases and/or insurance company insolvency."


Long-term care insurance faces the tough problem that it has to compete with Medicaid, which pays for long-term care if you run down your assets. There's no easy way out of this trap. If Medicaid didn't cover long-term care, then politicians would face the catastrophe of potentially turning the elderly and infirm poor into the streets. If Medicaid was redesigned to cover long-term care without requiring that you run down your assets first, it would encourage saving--but also cost considerably more. As Brown and Finkelstein write: "Attempts to reduce the implicit tax and stimulate private insurance markets tend to have at least one of two undesirable consequences: either they increase public expenditures, for example, by making Medicaid a primary payer and reducing means testing; or they require that policymakers be willing to deny care to individuals who fail to insure themselves adequately."

We know that demand for long-term care is going to expand dramatically as the U.S. population ages. We know that it is less expensive and probably better for the elderly if such care can be provided, at least for many of the elderly, through supportive home care rather than institutionalization. We know that requiring the elderly to become impoverished before being eligible for public assistance in long-term care seems a peculiar and counterproductive way to proceed. We know that private markets for long-term care insurance aren't working especially well. The CLASS legislation in the 2010 health care reform bill wasn't the answer. It made utterly unrealistic promises about costs and benefits, and the government was right to pull the plug on it. But the problem of how to provide and finance long-term care isn't going away.

Here is a post from August 2011 looking at Long-Term Care in an International Perspective.








Friday, November 18, 2011

Fall 2011 issue of Journal of Economic Perspectives

The Fall 2011 issue of my own Journal of Economic Perspectives is now available on-line. As has been true for several years now, all of the articles as well as archives going back to the late 1990s are freely available to all, courtesy of the American Economic Association. I may well do some posting about individual articles in the next week or so. But for now, here is the Table of Contents, with titles and authors, abstracts, and also links to the articles and on-line appendices. 

(1) Front Matter

Full-Text Access | Supplementary Materials

(2) Neuroeconomic Foundations of Economic Choice--Recent Advances
Ernst Fehr and Antonio Rangel
Neuroeconomics combines methods and theories from neuroscience, psychology, economics, and computer science in an effort to produce detailed computational and neurobiological accounts of the decision-making process that can serve as a common foundation for understanding human behavior across the natural and social sciences. Because neuroeconomics is a young discipline, a sufficiently sound structural model of how the brain makes choices is not yet available. However, the contours of such a computational model are beginning to arise; and, given the rapid progress, there is reason to be hopeful that the field will eventually put together a satisfactory structural model. This paper has two goals: First, we provide an overview of what has been learned about how the brain makes choices in two types of situations: simple choices among small numbers of familiar stimuli (like choosing between an apple or an orange), and more complex choices involving tradeoffs between immediate and future consequences (like eating a healthy apple or a less-healthy chocolate cake). Second, we show that, even at this early stage, insights with important implications for economics have already been gained.
Full-Text Access | Supplementary Materials

(3) It's about Space, It's about Time, Neuroeconomics and the Brain Sublime
Marieke van Rooij and Guy Van Orden
Neuroeconomics has investigated which regions of the brain are associated with the factors contributing to economic decision making, emphasizing the position in space of brain areas associated with the factors of decision making—cognitive or emotive, rational or irrational. An alternative view of the brain has given priority to time over space, investigating the temporal patterns of brain dynamics to determine the nature of the brain's intrinsic dynamics, how its various activities change over time. These two ways of approaching the brain are contrasted in this essay to gauge the contemporary status of neuroeconomics.
Full-Text Access | Supplementary Materials

(4) Molecular Genetics and Economics
Jonathan P. Beauchamp, David Cesarini, and coauthors
The costs of comprehensively genotyping human subjects have fallen to the point where major funding bodies, even in the social sciences, are beginning to incorporate genetic and biological markers into major social surveys. How, if at all, should economists use and combine molecular genetic and economic data from these surveys? What challenges arise when analyzing genetically informative data? To illustrate, we present results from a "genome-wide association study" of educational attainment. We use a sample of 7,500 individuals from the Framingham Heart Study; our dataset contains over 360,000 genetic markers per person. We get some initially promising results linking genetic markers to educational attainment, but these fail to replicate in a second large sample of 9,500 people from the Rotterdam Study. Unfortunately such failure is typical in molecular genetic studies of this type, so the example is also cautionary. We discuss a number of methodological challenges that face researchers who use molecular genetics to reliably identify genetic associates of economic traits. Our overall assessment is cautiously optimistic: this new data source has potential in economics. But researchers and consumers of the genoeconomic literature should be wary of the pitfalls, most notably the difficulty of doing reliable inference when faced with multiple hypothesis problems on a scale never before encountered in social science.
Full-Text Access | Supplementary Materials

(5) Genes, Eyeglasses, and Social Policy
Charles F. Manski
Someone reading empirical research relating human genetics to personal outcomes must be careful to distinguish two types of work: An old literature on heritability attempts to decompose cross-sectional variation in observed outcomes into unobservable genetic and environmental components. A new literature measures specific genes and uses them as observed covariates when predicting outcomes. I will discuss these two types of work in terms of how they may inform social policy. I will argue that research on heritability is fundamentally uninformative for policy analysis, but make a cautious argument that research using genes as covariates is potentially informative.
Full-Text Access | Supplementary Materials

(6) The Composition and Drawdown of Wealth in Retirement
James Poterba, Steven Venti and David Wise
This paper presents evidence on the resources available to households as they enter retirement. It draws heavily on data collected by the Health and Retirement Study. We calculate the "potential additional annuity income" that households could purchase, given their holdings of non-annuitized financial assets at the start of retirement. We also consider the role of housing equity in the portfolios of retirement-age households and explore the extent to which households draw down housing equity and financial assets as they age. Because home equity is often conserved until very late in life, for many households it may provide some insurance against the risk of living longer than expected. Finally, we consider how our findings bear on a number of policy issues, such as the role for annuity defaults in retirement saving plans.
Full-Text Access | Supplementary Materials

(7) Insuring Long-Term Care in the United States
Jeffrey R. Brown and Amy Finkelstein
Long-term care expenditures constitute one of the largest uninsured financial risks facing the elderly in the United States and thus play a central role in determining the retirement security of elderly Americans. In this essay, we begin by providing some background on the nature and extent of long-term care expenditures and insurance against those expenditures, emphasizing in particular the large and variable nature of the expenditures and the extreme paucity of private insurance coverage. We then provide some detail on the nature of the private long-term care insurance market and the available evidence on the reasons for its small size, including private market imperfections and factors that limit the demand for such insurance. We highlight how the availability of public long-term care insurance through Medicaid is an important factor suppressing the market for private long-term care insurance. In the final section, we describe and discuss recent long-term care insurance public policy initiatives at both the state and federal level.
Full-Text Access | Supplementary Materials

(8) Annuitization Puzzles
Shlomo Benartzi, Alessandro Previtero and Richard H. Thaler
In his Nobel Prize acceptance speech given in 1985, Franco Modigliani drew attention to the "annuitization puzzle": that annuity contracts, other than pensions through group insurance, are extremely rare. Rational choice theory predicts that households will find annuities attractive at the onset of retirement because they address the risk of outliving one's income, but in fact, relatively few of those facing retirement choose to annuitize a substantial portion of their wealth. There is now a substantial literature on the behavioral economics of retirement saving, which has stressed that both behavioral and institutional factors play an important role in determining a household's saving accumulations. Self-control problems, inertia, and a lack of financial sophistication inhibit some households from providing an adequate retirement nest egg. However, interventions such as automatic enrollment and automatic escalation of saving over time as wages rise (the "save more tomorrow" plan) have shown success in overcoming these obstacles. We will show that the same behavioral and institutional factors that help explain savings behavior are also important in understanding 1) how families handle the process of decumulation once retirement commences and 2) why there seems to be so little demand to annuitize wealth at retirement.
Full-Text Access | Supplementary Materials

(9) The Case for a Progressive Tax: From Basic Research to Policy Recommendations
Peter Diamond and Emmanuel Saez
This paper presents the case for tax progressivity based on recent results in optimal tax theory. We consider the optimal progressivity of earnings taxation and whether capital income should be taxed. We critically discuss the academic research on these topics and when and how the results can be used for policy recommendations. We argue that a result from basic research is relevant for policy only if 1) it is based on economic mechanisms that are empirically relevant and first order to the problem, 2) it is reasonably robust to changes in the modeling assumptions, and 3) the policy prescription is implementable (i.e., is socially acceptable and not too complex). We obtain three policy recommendations from basic research that satisfy these criteria reasonably well. First, very high earners should be subject to high and rising marginal tax rates on earnings. Second, low-income families should be encouraged to work with earnings subsidies, which should then be phased out with high implicit marginal tax rates. Third, capital income should be taxed. We explain why the famous zero marginal tax rate result for the top earner in the Mirrlees model and the zero capital income tax rate results of Chamley and Judd, and Atkinson and Stiglitz are not policy relevant in our view.
Full-Text Access | Supplementary Materials

(10) When and Why Incentives (Don't) Work to Modify Behavior
Uri Gneezy, Stephan Meier and Pedro Rey-Biel
First we discuss how extrinsic incentives may come into conflict with other motivations. For example, monetary incentives from principals may change how tasks are perceived by agents, with negative effects on behavior. In other cases, incentives might have the desired effects in the short term, but they still weaken intrinsic motivations. To put it in concrete terms, an incentive for a child to learn to read might achieve that goal in the short term, but then be counterproductive as an incentive for students to enjoy reading and seek it out over their lifetimes. Next we examine the research literature on three important examples in which monetary incentives have been used in a nonemployment context to foster the desired behavior: education; increasing contributions to public goods; and helping people change their lifestyles, particularly with regard to smoking and exercise. The conclusion sums up some lessons on when extrinsic incentives are more or less likely to alter such behaviors in the desired directions.
Full-Text Access | Supplementary Materials

(11) Retrospectives: X-Efficiency
Michael Perelman
In a 1966 article in the American Economic Review, Harvey Leibenstein introduced the concept of "X-efficiency": the gap between ideal allocative efficiency and actually existing efficiency. Leibenstein insisted that absent strong competitive pressure, firms are unlikely to use their resources efficiently, and he suggested that X-efficiency is pervasive. Leibenstein, of course, was attacking a fundamental economic assumption: that firms minimize costs. The X-efficiency article created a firestorm of criticism. At the forefront of Leibenstein's powerful critics was George Stigler, who was very protective of classical price theory. In terms of rhetorical success, Stigler's combination of brilliance and bluster mostly carried the day. While Leibenstein's response to Stigler was well reasoned, it never resonated with many economists, and Leibenstein remains undeservedly underappreciated. Leibenstein's challenge is as relevant today as it ever was.
Full-Text Access | Supplementary Materials

(12) Recommendations for Further Reading
Timothy Taylor
Full-Text Access | Supplementary Materials

(13) Notes

Full-Text Access | Supplementary Materials

The "Chermany" Problem of Unsustainable Exchange Rates

Martin Wolf coined the term "Chermany" in one of his Financial Times columns in March 2010. China and Germany have been running the largest trade surpluses in the world for the last few years. Moreover, one reason they both have such large trade surpluses is that the exchange rates of their respective currencies are set low enough vis-a-vis their main trading partners to ensure strong exports and weaker imports. Germany's huge trade surpluses are part of the reason that the euro-zone is flailing. Could China's huge trade surpluses at some point be part of a broader crisis in the U.S. dollar-denominated world market for trade?

Start with some facts about Chermany's trade surpluses, using graphs generated from the World Bank's World Development Indicators database. The first graph shows their current account trade surpluses since 1980 expressed as a share of GDP: China in blue, Germany in yellow. The second graph shows their trade surpluses in U.S. dollars. Notice in particular that the huge trade surpluses for Chermany are a relatively recent phenomenon. China ran only smallish trade surpluses or outright trade deficits up to the early 2000s. Germany ran trade deficits through most of the 1990s. In recent years, China's trade surpluses are larger in absolute dollars, but Germany's surpluses are larger when measured as a share of GDP.
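For readers who want to pull the underlying series themselves, here is a minimal sketch against the World Bank's public API. The indicator codes are my assumptions about the relevant WDI series (BN.CAB.XOKA.GD.ZS for the current account balance as a percent of GDP, BN.CAB.XOKA.CD for current U.S. dollars); check the WDI catalog before relying on them.

```python
# Sketch: fetch China's and Germany's current account balances from the
# World Bank API. Indicator codes are assumptions about the WDI series
# behind the charts; verify them in the WDI catalog.
import requests

INDICATORS = {
    "BN.CAB.XOKA.GD.ZS": "current account balance (% of GDP)",
    "BN.CAB.XOKA.CD": "current account balance (current US$)",
}

def fetch(indicator, countries="CHN;DEU", years="1980:2010"):
    url = f"https://api.worldbank.org/v2/country/{countries}/indicator/{indicator}"
    resp = requests.get(url, params={"format": "json", "date": years,
                                     "per_page": "500"})
    resp.raise_for_status()
    _meta, rows = resp.json()   # response is [paging metadata, records]
    # Return (country, year, value) triples, oldest first, skipping gaps.
    return sorted((r["country"]["value"], int(r["date"]), r["value"])
                  for r in rows if r["value"] is not None)

for code, label in INDICATORS.items():
    print(label)
    for country, year, value in fetch(code):
        print(f"  {country} {year}: {value:,.1f}")
```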



Those who have trade surpluses, like China and Germany, wear them as a badge of economic virtue. Those with trade deficits, like the United States, like to complain about those trade surpluses as a sign of unfairness, and wear their own trade deficits as a hairshirt of economic shame. In my view, the balance of trade is the most widely misunderstood basic economic statistic.

The economic analysis of trade surpluses starts by pointing out that a trade surplus isn't at all the same thing as healthy economic growth. Economic growth is about better-educated and more-experienced workers, using steadily increasing amounts of capital investment, in a market-oriented environment where innovation and productivity are rewarded. Sometimes that is accompanied by trade surpluses; sometimes not. China had rapid growth for several decades before its trade surpluses erupted. Germany has been a high-income country for a long time without running trade surpluses of nearly this magnitude. Japan has been running trade surpluses for decades, with a stagnant economy over the last 20 years. The U.S. economy has run trade deficits almost every year since 1980, but has had solid economic growth and reasonably low unemployment rates during much of that time.

Instead, think of trade imbalances as creating mirror images. A country like China can only have huge trade surpluses if another country, in this case the United States, has correspondingly large trade deficits. China's trade surplus means that it earns U.S. dollars with its exports, doesn't use all of those U.S. dollars to purchase imports, and ends up investing those dollars in U.S. Treasury bonds and other financial investments. China's trade surpluses and enormous holdings of U.S. dollar assets are the mirror image of U.S. trade deficits and the growing indebtedness of the U.S. economy.
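The mirror-image point is just the balance-of-payments accounting identity. In standard textbook notation (my notation, not anything from the original sources), and ignoring net income and transfers:

```latex
% A current account surplus equals net exports, which in turn equals
% national saving minus domestic investment:
CA = X - M = S - I
% Summed across all countries, current accounts must cancel:
\sum_i CA_i = 0
% so a Chinese surplus requires an equal-sized deficit somewhere else,
% and the surplus country necessarily accumulates foreign assets.
```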

For the European Union as a whole, exports and imports are fairly close to balance. Thus, if a country like Germany is running huge trade surpluses, they must be balanced out by other EU countries running large trade deficits. Germany's trade surpluses mean that it earns euros selling to other countries within the EU, doesn't use all of those euros to buy from the rest of the EU, and ends up investing the extra euros in debt issued by other EU countries. In short, Germany's trade surpluses and build-up of financial holdings are the necessary flip side of the high levels of borrowing by Greece, Italy, Spain, Portugal, and Ireland.

Joshua Aizenman and Rajeswari Sengupta explored the parallels between Germany and China in an October 2010 essay: "Global Imbalances: Is Germany the New China? A sceptical view." They carefully mention a number of differences, and emphasize the role of the U.S. economy in global imbalances as well. But they also point out the fundamental parallel: just as China runs trade surpluses and uses the funds to finance U.S. borrowing, Germany runs trade surpluses and uses the funds to finance borrowing elsewhere in the eurozone. They write: "Ironically, Germany seems to play the role of China within the Eurozone, de-facto financing deficits of other members."



Of course, when loans are at risk of not being repaid, lenders complain. German officials blame the profligacy of the borrowers in other EU countries. Chinese officials like to warn the U.S. that it needs to rectify its overborrowing. But whenever loans go really bad, it's fair to put some of the responsibility on the lenders, not just the borrowers.


If a currency isn't allowed to fluctuate--like the Chinese yuan vs. the U.S. dollar, or like Germany's euro vs. the euros of its EU trading partners--and if the currency is undervalued when compared with wages and productivity in trading partners, then huge and unsustainable trade imbalances will result. And without enormous changes in economic patterns of wages and productivity, as well as in levels of government borrowing, those huge trade imbalances will eventually lead to financial crisis.
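One way to make that mechanism concrete is the textbook real exchange rate (standard notation, my addition; the convention here takes e as the domestic price of foreign currency):

```latex
% Real exchange rate: the relative price of foreign goods.
% Even with the nominal rate e frozen, q keeps moving whenever foreign
% prices P* and domestic prices P (or unit labor costs) grow at
% different speeds -- which is how a fixed nominal rate turns into a
% persistently undervalued real rate.
q = \frac{e \, P^{*}}{P}
```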

A group of 16 prominent economists and central bank officials calling themselves the "Committee on International Economic Policy and Reform" wrote a study on "Rethinking Central Banking" that was published by the Brookings Institution in September 2011. They point out that most international trade used to be centered on developed countries with floating exchange rates: the U.S., the countries of Europe, and Japan. A number of smaller economies might seek to stabilize or fix their exchange rate, but in the context of the global macroeconomy, their effect was small. There was a sort of loose consensus that when economies became large enough, their currencies would be allowed to move.

But until very recently, China was not letting its foreign exchange rate move vis-a-vis the U.S. dollar. As the Committee points out: "While a large part of the world economy has adopted this model, some fast-growing emerging markets have not. The coexistence of floaters and fixers therefore remains a characteristic of the world economy. ... A prominent instance of the uneasy coexistence of floaters and fixers is the tug of war between US monetary policy and exchange rate policy in emerging market “fixers” such as China." The Committee emphasizes that the resulting patterns of huge trade surpluses and corresponding deficits lead to spillover effects around the world economy.

The Brookings report doesn't discuss the situation of Germany and the euro, but the economic roots of an immovable exchange rate leading to unsustainable imbalances apply even more strongly to Germany's situation inside the euro area.

The world economy needs a solution to its Chermany problem: What adjustments should happen when exchange rates are fixed at levels that lead to unsustainably large levels of trade surpluses for some countries and correspondingly large trade deficits for others? Germany's problems with the euro and EU trading system are the headlines right now. Unless some policy changes are made, China's parallel problems with the U.S. dollar and the world trading system are not too many years away.


Thursday, November 17, 2011

Are U.S. Banks Vulnerable to a European Meltdown?

The Federal Reserve conducts a Senior Loan Officer Opinion Survey on Bank Lending Practices each quarter at about 60 large U.S. banks. The results of the October 2011 survey, released last week, suggest that these banks do not see themselves as facing high exposure to the risks of a European financial and economic meltdown.

Here are a couple of sample questions from the survey. The first question shows that of the senior loan officers at 50 banks who answered the question, 36 say that less than 5% of their outstanding commercial and industrial loans are to nonfinancial companies with operations in the U.S. that have significant exposure to European economies. The second question shows that out of 49 banks, 24 don't have any outstanding loans to European banks, and 17 of the 25 banks that do have such loans have tightened their lending standards somewhat or considerably in the last three months.



Of course, a European economic and financial meltdown could affect the U.S. economy in a number of ways: for example, by reducing U.S. exports to European markets, or by affecting nonbank financial institutions like money market funds, hedge funds, or broker-dealers. But banks, because they hold federally insured deposits, are more likely to be bailed out by taxpayers if they get into trouble. So it's good news (if it indeed turns out to be true!) that major U.S. banks don't have high exposure either to firms doing business in Europe or to European banks.