Too Much Concern about International Reserves?

A few weeks back on December 19, I posted on \”Can $12.1 Trillion be Boring? Thoughts on International Reserves.\” In that post, Edwin Truman explained the reasons why the enormous size of international reserves should be a legitimate policy concern. As a contrasting point of view, the Independent Evaluation Office of the International Monetary Fund has published a report on \”International Reserves: IMF Concerns and Country Perspectives,\” which makes the case that the IMF and others have overemphasized the potential problems of large international reserves and understated their benefits. 

(For the record, the IMF deserves credit for the existence of the Independent Evaluation Office, which was set up in 2001 with a staff of 11, mostly recruited from outside the IMF, to provide an alternative evaluation of IMF policies.)

Everyone agrees that international reserves have taken off in recent years, largely driven by changes in China, along with other emerging and developing nations.

Worrying about large and growing international reserves may be part of the intellectual DNA of the IMF; after all, the organization was formed in large part to provide a way of dealing with balance of payments problems and imbalances across countries. But as the Independent Evaluation Office report points out, countries holding these reserves feel that the IMF should be turning its attention to other targets. Here are some of the arguments (footnotes omitted). 

International reserves are modest relative to overall global capital markets–and the private-sector portion of those markets, in particular, should be the real focus of concern about instability.

\”International reserves remain small relative to the global stock of financial assets under private management … There is considerable historical precedent and economic analysis to suggest that concerns about global financial stability should focus more closely on trends in private asset accumulation and capital flows. Country officials and private sector representatives also noted that the IMF should be more attentive to the accumulation of the private foreign assets that are the consequence of persistent current account surpluses, and which from a historical perspective have arguably been more destabilizing than official reserve accumulation.\” For illustration, here\’s a figure showing those international reserves in comparison with other global financial assets. 


Countries that hold large international reserves see them as providing a number of benefits. 

\”There was a common view among country authorities that the IMF tended to
underestimate the benefits of reserves. Thinking about the tradeoff between costs and
benefits of reserves, country officials often mentioned a range of benefits that they
considered important but were not easily incorporated into either single indicators or formal
models. In addition to precautionary self-insurance (also emphasized by the Fund), they
mentioned other important advantages: reserves provide a country with reliability of access
and the policy autonomy to act quickly, flexibly, and counter cyclically, and, as was evident
during the global crisis, they inspire confidence. Reserves have also allowed authorities to
avoid the stigma associated with approaching the Fund for resources—an issue that is very
much alive in a number of countries.\”


Large international reserves are a symptom of the concerns that countries have about instability of capital flows in the global economy, and rather than worrying about this symptom, policy-makers should focus on underlying causes. 


\”Most country officials interviewed for this evaluation also felt that in a discussion of
the stability of the international monetary system, there were more pressing issues to be
considered. These included the fluctuating leverage in global financial institutions and its
impact on global liquidity conditions and hence capital flows and exchange rate volatility;
the role and effectiveness of prudential regulations and supervision in mitigating risks
associated with cross-border finance; and the difficulty in managing capital flows in recipient
countries.\”


A final point made in the IEO report, which seems to me to have a ring of truth, is that a lot of the concern over outsized international financial reserves is really about the reserves held by China. The enormous reserves held by Japan, for example, don\’t seem to stimulate the same amount of angst. It\’s probably not wise to try to set a wide-ranging international reserve policy for countries across the world based heavily on the experience of China, which is an extraordinary case in so many ways. 

Less Globalization Than You Think: The DHL Global Connectedness Index

Pankaj Ghemawat of the IESE Business School at the University of Navarra in Spain, together with Steven A. Altman, is the author of the DHL Global Connectedness Index 2012. The report has a lot of interesting angles for looking at the extent of globalization in recent years, but I was especially interested in an argument I've made a few times–that while globalization has definitely increased by historical standards, the movement toward globalization is nowhere close to complete, and has in fact stalled in the last few years. The report looks at many dimensions of global connectedness: here, I'll focus on international flows of goods and services, flows of capital, and international flows of communication and information.

For international flows of goods and services, this figure shows total exports of goods and services as a share of world GDP. Of course, the estimates from the 19th century and first half of the 20th century are based on less data, but the overall patterns are clear: a first wave of 19th century globalization that lasted up until about the start of World War I, the stagnation or even decline of globalization until after World War II, and then the second wave of globalization since then. Globalization is near an all-time high by this measure, but notice that after the drop associated with the Great Recession, this measure of globalization is about the same in 2011 as it was in 2007.

A couple of other thoughts to help put this figure in perspective. While the level of exports/world GDP is high by historical standards, there's still a lot of room for it to expand further. As the report says: "Furthermore, while 20% (or even 30%) of goods and services being traded across borders is far more than the same ratio mere decades ago, it is still far short of the 90% or more that one would expect if borders and distance did not matter at all. If the world truly became “flat,” countries’ exports-to-GDP ratios would tend toward an average of 1 minus their shares of world GDP since buyers would be no more likely to purchase goods and services from their home countries than from abroad. Borders and distance still matter a great deal, implying that even the most connected countries have substantial headroom available to participate more in international trade."
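To make that benchmark concrete, here is the arithmetic it implies, written out as a rough sketch (the 22% share of world GDP used below is my own illustrative round number, not a figure from the report):

```latex
% "Flat world" benchmark: a country's exports-to-GDP ratio would tend toward
% one minus its share of world GDP.
\[
\frac{X_i}{GDP_i} \;\longrightarrow\; 1 - s_i,
\qquad \text{e.g. } s_i = 0.22 \;\Rightarrow\; \frac{X_i}{GDP_i} \approx 0.78,
\]
```

which is far above the actual export ratios of around 20-30% that the report cites.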

In addition, although the volume of trade has risen, the distance that trade travels on average, between the sending and the receiving country, has not risen. In that sense, geographical distance still very much affects trade. "The global connectedness patterns traced in this report also highlight how distance, far from being dead, continues to depress connectedness of all types. While the distance between a randomly selected pair of countries is roughly 8,500 km, the average distance traversed by merchandise trade, foreign direct investment flows, telephone calls, and human migration all cluster in the range from 3900 km to 4750 km. This accords with the finding that most international flows take place within rather than between continental regions. …

[F]ocusing on trade in goods only rather than goods and services combined, as of 2011, 47% of trade took place between countries in different regions rather than within the same region, a proportion that has typically been between 40% and 50% since 1965. The average distance traveled by a dollar worth of traded merchandise in 2011 was roughly 4,750 kilometers, also in line with historical norms over the past four decades … Thus, while the depth of merchandise trade (the volume of goods traded in comparison to total economic output) has scaled new heights in recent decades, that trend has not been matched by an extension of the distances traveled by traded goods on average. Rather, much of the action in terms of trade integration has been the weaving together of national economies within the same region.\”

When it comes to flows of foreign direct investment, a similar pattern holds: it\’s at an historically high level, but it\’s still recovering from the Great Recession. Here\’s the figure.

More detailed analysis shows that the "breadth" of foreign direct investment, as measured by the range of different countries receiving such investment, has declined since 2007. The report explains it this way: "This pattern of declining breadth scores was not matched by declines in the average distance “traveled” by FDI or the proportion that occurs between rather than within regions. Rather, the average distance “traveled” by FDI flows rose from 2007 to 2010 from roughly 4000 to 4900 kilometers and the proportion taking place within regions declined from 58% to 52%. These patterns suggest that while investors are indeed keeping more money at home (declining depth), they are not generally shifting their foreign investments from distant countries to neighbors. Rather, they are selectively choosing a narrower set of investment destinations, some of which may be distant safe havens, selected in part to diversify risks in investors’ home regions."

We all know that the internet and related technologies have opened up extraordinary possibilities for information and communication to flow across national borders. But by and large, people are still using those technologies to connect with others closer to home. Here are some comments from the report:

\”While new technologies indeed have made it far easier and cheaper to share information with people on the other side of the world, we actually tend to use these technologies much more intensively to connect to people close to home. Consider, first of all, postal communications. As a result of efforts spearheaded by the Universal Postal Union, organized in 1874 and one of the world’s first global institutions, it has long been fairly simple to send mail anywhere in the world. And yet, only about 1 percent of all letter mail sent around the world is international.

\”What about telephone calls? Only 2 percent of voice calling minutes are international despite rapidly falling costs and improving call quality. These figures do exclude calls placed over the internet via services such as Skype, but including calls over such services would not push this ratio up past 5%. …

\”Global data on information flows over the internet, however, indicate that while internet traffic is more international than phone calls or mail, it remains primarily domestic, with international internet traffic accounting for about 17% of the total. And what about communications on social media? Facebook aims to provide a platform for “frictionless” sharing that theoretically makes it as easy to “friend” someone around the world as one’s next door neighbor. But the reality is that relationships on social media reflect offline human relationships that remain highly distance sensitive. Less than 15 percent of Facebook friends live in different countries. …

\”While the growth of international internet bandwidth implies that we can just as easily read foreign news websites as domestic ones, people still overwhelmingly get their news from domestic sources when they go online: news page views from foreign news sites constitute 1% of the total in Germany, 3% in France, 5% in the United Kingdom and 6% in the United States (and are in single digits everywhere else sampled – as low as 0.1% in China). Furthermore, news coverage by domestic sources itself tends to be very domestic. In the U.S., 21% of U.S. news coverage across all media was international according to a recent study, and of that 11 percent dealt with U.S. foreign affairs (such as U.S. diplomacy and military engagements), leaving only 10% of coverage for topics entirely unrelated to the U.S.\”

When it comes to focusing on the U.S. economy in particular, our experience of globalization differs from that of most other countries. The U.S. has a huge internal economy, and so our international trade–relative to the size of that domestic economy–is actually much smaller than in most other countries. However, U.S. economic ties are very widespread: in the terminology of the report, U.S. global connectedness has a lot of "breadth." In addition, U.S. global connectedness is more related to finance than to goods and services. Here's the report:

\”The United States ranks 20th overall [in global connectedness] and has the world´s second highest breadth score, reflecting its significant ties to nearly every other country around the world. It has a more modest rank on depth (89th), which is not unusual for a country with a very large internal market. The U.S. has its strongest position on the capital pillar on which it ranks 6th overall and 1st on breadth. On the other hand, the U.S. has a remarkably low score on the trade pillar, 76th overall and 139th (next to last) on depth. Merchandise and services exports account for only 14% of U.S. GDP and imports add up to only 18%. The U.S. has maintained a stable level of connectedness since 2007.\”

For those who doubt and oppose the movement to globalization, there is a good news/bad news story here: sure, globalization has risen, but it remains far from dominant. For those who welcome and support the movement to globalization, there\’s a converse good news/bad news story here: sure, globalization has risen, but there is so very much further to go. The report takes the second view, and offers an interesting argument that the potential gains from additional trade may be substantially larger than many economic models suggest:

\”[I]n calculating the benefits of additional trade, these kinds of models focus almost exclusively on growth generated by reductions in production costs as each country’s output becomes more specialized, a limited fraction of the potential gains. To broaden the range of benefits covered, consider a modified version of the ADDING Value Scorecard, a framework originally developed to help businesses evaluate international strategies. ADDING is an acronym for the following sources of value: Adding Volume, Decreasing Costs, Differentiating, Intensifying Competition, Normalizing Risk, and Generating and Diffusing Knowledge. Because traditional models assume full employment (especially problematic in times like these) and leave out scale economies, they capture only part of the gains in the first two categories, Adding Volume and Decreasing Costs. And they entirely leave out the last four categories, whose benefits can be seen clearly, for example, in the U.S. automobile industry. Decades ago, Japanese automakers started offering consumers differentiated (more reliable) products. Increased competition prompted U.S. automakers to improve their own quality. Now, GM sells more cars in China than in the U.S., diversifying its risks and helping it recover from the crisis. And cars are becoming “greener” faster because of international knowledge flows.\”

Taking these sorts of factors into account, the report estimates that reasonable expansions of globalization could generate gains on the order of 8% of world GDP: in round numbers, call that $5 trillion or so per year in benefits.
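For what it's worth, the rough dollar conversion works as follows, taking world GDP to be about $70 trillion (my own round number for 2011-2012, not the report's):

```latex
% 8 percent of world GDP, with world GDP taken as roughly $70 trillion
\[
0.08 \times \$70\ \text{trillion} \;\approx\; \$5.6\ \text{trillion per year.}
\]
```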

A final note: My discussion of the report touches on some of the main points that caught my eye, but completely leaves out other points, like discussions of specific countries, of regional differences, international migration, industry studies of mobile phones, cars, and pharmaceuticals, and more. Thus, those interested in additional detail and insights will find a lot more in this report.

Management in U.S. Manufacturing Firms

Everyone knows that good management matters, but in a research study, how do you define \”good management\”? For example, defining \”good management\” according to whether a company earns high profits would be circular logic; it would assume (\”good management leads to higher profits\”) what needs to be proven. Instead, \”good management\” has to be defined in some at least moderately objective way, so it can then be compared to how firms perform.

Nicholas Bloom and John Van Reenen, along with various co-authors, have been leaders in academic work that bravely seeks to define \”good management,\” surveys the management practices of firms,  and then looks at how firms with \”good management\” are performing. Much of their early work compared management practices across countries: for example, see their  article \”Why Do Management Practices Differ across Firms and Countries?\” in the Winter 2010 issue of my own Journal of Economic Perspectives.

But now the U.S. Census Bureau has for the first time used this approach as part of a survey of 30,000 U.S. manufacturing firms. A group of authors including Bloom and Van Reenen, together with Erik Brynjolfsson, Lucia Foster, Ron Jarmin, and Itay Saporta-Eksten, provide the first glimpse of these results in "Management in America," a discussion paper written for the Center for Economic Studies, which is based at the U.S. Census Bureau.

Before describing the results, it\’s useful to say a bit more about how \”good management\” is measured in this survey. They refer to it as \”structured management,\” a term which is more neutral than  \”good.\” But the basic concept is a list of 16 questions about management at the firm, divided into questions about monitoring, targets and incentives. They offer these examples:

\”The monitoring section asked firms about their collection and use of information to monitor and improve the production process. For example, how frequently were performance indicators tracked at the establishment, with options ranging from “never” to “hourly or more frequently”. The targets section asked about the design, integration and realism of production targets. For example, what was the time-frame of production targets, ranging from “no production targets” to “combination of short-term and long-term production targets”. Finally, the incentives asked about non-managerial and managerial bonus, promotion and reassignment/dismissal practices. For example, how were managers promoted at the establishment, with answers ranging from “mainly on factors other than performance and ability, for example tenure or family connections” to “solely on performance and ability”? The full questionnaire is available on
http://bhs.econ.census.gov/bhs/mops/form.html.\” 

For each of the 16 questions, they arbitrarily say that the lowest possible answer is given a score of 0, the highest possible answer is given a score of 1, and in-between answers are given fractional totals. They then just sum up the total points to get a management score for each firm. A number of other questions collect information both about the firm and about the personal characteristics of the middle manager answering the survey. Those who want additional gory details on the survey process can head for the report. Here are some findings.

The obvious starting point is to figure out whether manufacturing firms that get higher scores for \”structured management\” are performing better in some way. They divide up the firms into tenths, or deciles, from lowest score to highest, and find that firms with higher scores do better on profits, productivity, output growth, research and development spending, and patents. Here are a few sample figures to give you a feeling for what these relationships look like. The relationships continue to hold after adjusting for all the obvious factors, like size of firm, industry type, and the like.
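For readers who like to see the mechanics, here is a minimal Python sketch of the two steps described above: summing the 16 question scores into a management score, and then comparing an outcome across deciles of that score. The variable names and the simulated data are my own illustrative choices, not the Census Bureau's actual files.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_firms = 1000

# Simulated answers to the 16 management questions, each already rescaled
# so that 0 = least structured response and 1 = most structured response.
answers = pd.DataFrame(
    rng.uniform(0.0, 1.0, size=(n_firms, 16)),
    columns=[f"q{i}" for i in range(1, 17)],
)

# Management score: the simple sum of the 16 question scores (range 0 to 16).
mgmt_score = answers.sum(axis=1)

# A made-up outcome that is loosely increasing in the score, for illustration.
log_productivity = 0.05 * mgmt_score + rng.normal(0.0, 0.5, size=n_firms)

df = pd.DataFrame({"mgmt_score": mgmt_score,
                   "log_productivity": log_productivity})

# Split firms into deciles of the management score and compare average
# outcomes across deciles, mirroring the figures described in the text.
df["decile"] = pd.qcut(df["mgmt_score"], q=10, labels=range(1, 11))
print(df.groupby("decile", observed=True)["log_productivity"].mean())
```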

An additional conclusion is that there is a great deal of variation in structured management across firms. As they write: \”[T]here is enormous dispersion of management practices across America: 18% of establishments adopt at least 75% of structured management practices for performance monitoring, targets and incentives; while 27% of establishments adopt less than 50% of these practices.\”

They also find that management scores tended to rise from 2005 to 2010; that management scores are higher in larger firms; and that firms in the South and Midwest tend to have higher management scores than firms in the West and Northeast.

This research is, if not quite in its infancy, still in its toddler-hood, and so it would be unwise to draw conclusions too aggressively. But some preliminary conclusions would be: 1) It\’s possible to come up with a survey tool that captures at least a significant portion of what is meant by \”good management;\” 2) This measure of management appears to be correlated with positive performance by firms on a number of dimensions; and 3) This measure of management differs quite substantially across firms.

But there are also difficulties at this stage. Good management, of course, isn't just a matter of responding in a certain way on survey questions, or announcing a certain set of policies. In real life, good management is also about implementing those policies, and even about having a culture within the company in which the implementation of those policies is widely practiced and accepted. It may be, for example, that companies which lack structured management have a range of other problems or issues not well-captured by the existing data. It may be that implementing more structured management in a company that isn't used to this approach will prove a difficult process. I expect that future research will delve more deeply into these issues, and others. 

Some Thoughts on James Buchanan

When I think of James Buchanan, who died last week, I remember hearing him answer a question at a conference in the late 1980s, not long after he had won the 1986 Nobel prize. A questioner with stars in his eyes asked how Buchanan had been able to write, not a few dozen journal articles, but a series of more than 30 book-length tomes like The Calculus of Consent during his career. (In fact, Buchanan\’s work was later collected into 20 volumes.) Buchanan paused for a moment, the very image of a courtly, soft-spoken, silver-haired Southern professor, and then gently drawled: \”Apply butt to chair.\” Words to live by!

I had barely heard of Buchanan\’s work when he won the Nobel prize in 1986: it certainly didn\’t have much prominence in my undergraduate or graduate studies in economics. But Buchanan contributed an article, \”Tax Reform as Political Choice,\” to the first issue of my own Journal of Economic Perspectives in Summer 1987. (As with all JEP articles from the first issue to the most recent, it is freely available on-line courtesy of the American Economic Association.)

At that time, Buchanan had just presented his Nobel lecture, "The Constitution of Economic Policy," which offers an overview of his arguments about how economists should do policy analysis. He quoted the great Swedish economist Knut Wicksell (1851-1926) who wrote things like: "[N]either the executive nor the legislative body, and even less the deciding majority in the latter, are in reality … what the ruling theory tells us they should be. They are not pure organs of the community with no thought other than to promote the common weal. … [M]embers of the representative body are, in the overwhelming majority of cases, precisely as interested in the general welfare as are their constituents, neither more nor less."

In a similar spirit, here\’s Buchanan in his own words: \”Economists should cease proffering policy advice as if they were employed by a benevolent despot, and they should look to the structure within which political decisions are made. … I called upon my fellow economists to postulate some model of the state, of politics, before proceeding to analyse the effects of alternative policy measures. I urged economists to look at the \”constitution of economic polity,\” to examine the rules, the constraints within which political agents act. Like Wicksell, my purpose was ultimately normative rather than antiseptically scientific. I sought to make economic sense out of the relationship between the individual and the state before proceeding to advance policy nostrums.\”

The first issue of JEP back in 1987 had a symposium on the 1986 Tax Reform Act, and in his JEP paper, Buchanan applied his perspective to how one might think about the passage of a largely revenue-neutral tax reform that sought to reduce special exemptions, credits, and deductions, and to use the revenue saved to reduce marginal tax rates.

Buchanan argued that the \”political agents\” who set tax policy in general prefer higher taxes, because they like to have control over more resources. They also like to offer tax breaks to special constituencies, who reward them with political support. But as the political agents offer tax breaks to special constituencies, they need to ratchet up tax rates on others–and as those rates get higher, it becomes harder to offer still more tax breaks. From the perspective of these kinds of political agents, the 1986 tax reform could thus be interpreted as a chance to wipe the slate clean: that is, start over again with lower tax rates and fewer tax breaks. But this just meant that the political agents could then again start their pattern of offering tax breaks and pushing up rates all over again.

\”The 1986 broadening of the tax base by closing several established loopholes and shelters offers potential rents to those agents who can promise to renegotiate the package, piecemeal, in subsequent rounds of the tax game. The special interest lobbyists, whose clients suffered capital value losses in the 1986 exercise, may find their personal opportunities widened after 1986, as legislators seek out personal and private rents by offering to narrow the tax base again. In one fell swoop, the political agents may have created for themselves the potential for substantially increased rents. This rent-seeking hypothesis will clearly be tested by the fiscal politics of the post-1986 years. To the extent that agents do possess discretionary authority, the tax structure established in 1986 will not be left substantially in place for decades or even years.\”

Over time, Buchanan\’s prediction has held true: new tax breaks have been created, and marginal tax rates have been pushed back up to pay for them. Thus, one common proposal for dealing with the disturbing prospects of long-run budget deficits is to raise additional revenue by limiting many tax breaks, and then using some of the revenue for lower marginal tax rates and some for deficit reduction. For previous posts of mine about such proposals, see my February 2012 post on  \”Tax Expenditures: A Way to End Budget Gridlock?\” or my  August 2011 post on \”Tax Expenditures: One Way Out of the Budget Morass?\”

Of course, Buchanan\’s work is a continual reminder that even when economists are involved in drawing up some plans for a tax reform that would broaden the tax rates and lower the base, political agents will be the ones who actually draw up the plan, vote on it, and determine how much it changes each year. This insight about the centrality of politics is perhaps obvious to the average person, but many of us who focus on the economy can lose sight of it. Buchanan\’s work is one reason that in my own Principles of Economics textbook  (available here), I follow up all the chapters on what can go wrong in markets–monopoly, externalities, public goods, inequality, incomplete information, all the rest–with a chapter on political economy. As a teacher, I want to encourage students to be thoughtful skeptics of how markets work, but I also want them to be equally skeptical about the extent to which political agents will implement the sort of economically enlightened policies that will actually address the problems of markets.
Buchanan\’s obituary in the New York Times is here. It closes with a great line from Buchanan, who once said: “I have faced a sometimes lonely and mostly losing battle of ideas for some 30 years now in efforts to bring academic economists’ opinions into line with those of the man on the street. … My task has been to ‘uneducate’ the economists.”

The National Taxpayer Advocate Speaks Up

Yes, there is a National Taxpayer Advocate, and her name is Nina E. Olson. She has just released the National Taxpayer Advocate 2012 Annual Report to Congress. The report covers a lot of ground, each year looking at issues like taxpayer rights, identity theft, and other topics. Here, I'll just focus on one topic: "The most serious problem facing taxpayers — and the IRS — is the complexity of the Internal Revenue Code." Here are some samples of the comments (footnotes omitted): 

  • \”According to a TAS analysis of IRS data, individuals and businesses spend about 6.1 billion hours a year complying with the filing requirements of the Internal Revenue Code. And that figure does not include the millions of additional hours that taxpayers must spend when they are required to respond to IRS notices or audits. If tax compliance were an industry, it would be one of the largest in the United States. To consume 6.1 billion hours, the “tax industry” requires the equivalent of more than three million full-time workers. … Based on Bureau of Labor Statistics data on the hourly cost of an employee, TAS estimates that the costs of complying with the individual and income tax requirements for 2010 amounted to $168 billion — or a staggering 15 percent of aggregate income tax receipts.\”
  • \”According to a tally compiled by a leading publisher of tax information, there have been approximately 4,680 changes to the tax code since 2001, an average of more than one a day.\”
  • \”The tax code has grown so long that it has become challenging even to figure out how long it is. A search of the Code conducted using the “word count” feature in Microsoft Word turned up nearly four million words.\”
  • \”Individual taxpayers find return preparation so overwhelming that about 59 percent now pay preparers to do it for them. Among unincorporated business taxpayers, the figure rises to about 71 percent. An additional 30 percent of individual taxpayers use tax software to help them prepare their returns, with leading software packages costing $50 or more.\”
  • \”IRS data show that when taxpayers have a choice about reporting their income, tax compliance rates are remarkably low. … [A]mong workers whose income is not subject to tax withholding, compliance rates plummet. An IRS study found that nonfarm sole proprietors report only 43 percent of their business income and unincorporated farming businesses report only 28 percent. Noncompliance cheats honest taxpayers, who indirectly pay more to make up the difference. According to the IRS’s most recent comprehensive estimate, the net tax gap stood at $385 billion in 2006, when there were 116 million households in the United States. This means that each household was effectively paying a “surtax” of some $3,300 to subsidize noncompliance by others.\”
  • \”From FY 2004 to FY 2012, the number of calls the IRS received from taxpayers on its Accounts Management phone lines increased from 71 million to 108 million, yet the number of calls answered by telephone assistors declined from 36 million to 31 million. The IRS has increased its ability to handle taxpayer calls using automation, but even so, the percentage of calls from taxpayers seeking to speak with a telephone assistor that the IRS answered dropped from 87 percent to 68 percent over the
    period. And among the callers who got through, the average time they spent waiting on hold increased from just over 2½ minutes in FY 2004 to nearly 17 minutes in FY 2012.\”
  • \”The IRS receives more than ten million letters from taxpayers each year responding to IRS adjustment notices. Comparing the final week of FY 2004 with the final week of FY 20102, the backlog of taxpayer correspondence in the tax adjustments inventory increased by 188 percent (from 357,151 to 1,028,539 pieces), and the percentage of taxpayer correspondence classified as “overage” jumped by 316 percent (from 11.5 percent to 47.8 percent).\”
  • \”In 2012, TAS [the Taxpayer Advocate Service] conducted a statistically representative national survey of over 3,300 taxpayers who operate businesses as sole proprietors. Only 16 percent said they believe the tax laws are fair. Only 12 percent said they believe taxpayers pay their fair share of taxes.\”
  • \”To alleviate taxpayer burden and enhance public confidence in the integrity of the tax system, the National Taxpayer Advocate urges Congress to vastly simplify the tax code. In general, this means paring back the number of income exclusions, exemptions, deductions, and credits (generally known as “tax expenditures”). For fiscal year (FY) 2013, the Joint Committee on Taxation has projected that tax expenditures will come to about $1.09 trillion, while individual income tax revenue is projected to be about $1.36 trillion. This suggests that if Congress were to eliminate all tax expenditures, it could cut individual income tax rates by about 44 percent and still generate about the same amount of revenue.\”

Finally, I\’m not the biggest fan of infographics,which often seem to me like overcrowded Powerpoint slides on steroids. But for those who like them, here\’s one from the National Taxpayer Advocate summarizing many of these points: 

FTC Data on Horizontal Merger Enforcement

The Federal Trade Commission has published Horizontal Merger Investigation Data: Fiscal Years 1996 – 2011. I found it interesting to notice the categories the report uses to classify horizontal merger enforcement: it categorizes the cases along five dimensions: the Herfindahl-Hirschman Index before and after the proposed merger, the number of competitors in the market, the presence of "hot documents," the presence of "strong customer complaints," and ease of entry into the market.
What surprised me was that the notion of classifying merger policy along these kinds of categories is often de-emphasized in the recent economic research literature on industrial organization. Here, I'll first say a bit about the categories in the FTC report, and then compare them to the emphasis of recent empirical work in industrial organization.

The data for the FTC report comes from a requirement in the Hart Scott Rodino legislation, which I described in a post last June 14, \”Next Merger Wave Coming? Hart-Scott-Rodino 2011,\” which reviewed some of the evidence from the annual HSR report. As I noted there: \”The Hart-Scott-Rodino legislation requires that when businesses plan a merger or an acquisition above a certain price–typically $66 million in 2011–it must first be reported to the Federal Trade Commission. The FTC can let the merger proceed, or request more information. Based on that additional information, the FTC can then let the merger proceed, block it, or approve it subject to various conditions (for example, requiring that the merged entity divest itself of certain parts of the business to preserve competition in those areas).\” Thus, the  most recent FTC report looks at those 464 horizontal merger cases from 1996-2011 where the FTC requested additional information, and whether those requests led to an enforcement action of some sort (either blocking the merger or placing conditions on it), or alternatively if the request was followed by closing the case without an enforcement action.

The first category discussed in the report is the Herfindahl-Hirschman Index (HHI): as the report says, "The HHI is the sum of the squares of the market shares of the competitors in the relevant market." In other words, if an industry has two firms, one with 60% of the market and one with 40% of the market, the HHI would be 60 squared, or 3600, plus 40 squared, or 1600, which would equal 5200. The highest possible HHI would be 10,000–that is, a complete monopoly with 100% of the market. The lowest HHI would be for an industry with many very small firms, each with only a minuscule portion of the market. After squaring these minuscule market shares and summing them up, the result would be a very low number.
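Here is a minimal Python sketch of the calculation, including the post-merger level of the HHI and the change in the HHI that the FTC table classifies cases by (the market shares are hypothetical, chosen only for illustration):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares in percentage points, so a pure monopoly scores 10,000."""
    return sum(s ** 2 for s in shares)

print(hhi([60, 40]))                        # the text's example: 3600 + 1600 = 5200

# A hypothetical market with four competitors (shares in percent).
pre_merger = [40, 30, 20, 10]
print(hhi(pre_merger))                      # 1600 + 900 + 400 + 100 = 3000

# Suppose the firms with 20% and 10% shares propose to merge.
post_merger = [40, 30, 30]
print(hhi(post_merger))                     # 1600 + 900 + 900 = 3400

# The FTC table sorts cases by the post-merger HHI (rows) and by the
# increase in the HHI caused by the merger (columns).
print(hhi(post_merger) - hhi(pre_merger))   # an increase of 400
```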

Here\’s a table making HHI comparisons , with the columns showing how much the HHI would increase as a result of the merger and the rows showing the level of the HHI after the merger. This is measured across markets, where a given proposed merger often has implications across several different markets. Notice that none of the 14 cases where the post-merger HHI was less than 1700 and the rise in HHI was less than 99 led to an enforcement action. In the cells in the bottom right corner, if the post-merger HHI is above 3,000 and the gain in the HHI is above about 800, it\’s highly likely that an enforcement action will be taken. 

 
The next category in the FTC report is the number of competitors–which of course, in some ways, is based on the same information used to calculate the HHI. In the 20 cases with more than 10 competitors, there were no enforcement actions. But if the number of competitors is being reduced from 2 to 1, or 3 to 2, or 4 to 3, the request for additional information is very likely to lead to an enforcement action.

The FTC report then looks at a few other categories. "Hot documents" refers to "cases where the staff identified one or more party documents clearly predicting merger-related anticompetitive effects." "Strong customer complaints" refers to cases "where customers expressed a credible concern that a significant anticompetitive effect would result if the transaction were allowed to proceed." And "ease of entry" is determined by an assessment by the FTC staff of the "timeliness, likelihood, and sufficiency" of entry. As one would expect, hot documents and strong customer complaints make enforcement actions more likely; ease of entry makes enforcement less likely.

For teachers of undergraduate or high school courses in economics, this may all seem pretty straightforward, basically offering some background material for what is already being taught. But what was striking to me is that the direction of research in empirical industrial organization in recent years has tended to steer away from the HHI and such measures. For a nice discussion, I recommend the article by Liran Einav and Jonathan Levin, "Empirical Industrial Organization: A Progress Report," which appeared in the Spring 2010 issue of my own Journal of Economic Perspectives. Like all articles in JEP going back to the first issue in 1987, it is freely available on-line courtesy of the American Economic Association.

As Einav and Levin point out, there was an older tradition in industrial organization called \”structure-conduct-performance.\” It typically looked at the structure of an industry, as measured by something like the HHI, and then at how the industry behaved, and then at what profits the industry earned. But by the 1970s, it was clear that this approach had run into all kinds of problems. As one  example, the factors that were causing a certain industry structure might also be causing the high profits–so it would be muddled thinking to infer that the profits came from industry structure if both came from an outside factor. The available data on \”profits\” is based on accounting numbers, which differ in many ways from the economic concept of profits. More broadly, the whole idea that there should be common patterns across all industries in structure, conduct and performance seemed implausible.  They explain:

\”Both the concerns about cross-industry regression models and the development of clearer theoretical foundations for analyzing imperfect competition set the stage for a dramatic shift in the 1980s toward what Bresnahan (1989) coined the “New Empirical Industrial Organization.” Underlying this approach was the idea that individual industries are sufficiently distinct, and industry details sufficiently important, that cross-industry variation was often going to be problematic … Instead, the new wave of research set out to understand the institutional details of particular industries and to use this knowledge to test specific hypotheses about consumer or firm behavior, or to estimate models that could be used for counterfactual analysis, such as what would happen following a
merger or regulatory change. The current state of the fifi eld reflects this transition. Today, most of the influential research in empirical industrial organization looks extensively to economic theory for guidance, especially in modeling firm behavior. Studies frequently focus on a single industry or market, with careful attention paid to the institutional specifics, measurement of key variables, and econometric identification issues.\”

Einav and Levin go on to describe how this kind of research is done, and to evaluate its strengths and weaknesses. But my point here is that the summary of HHI, number of competitors and the rest across broad categories provided by the recent FTC report is quite different in spirit than how Einav and Levin are describing the research literature.

It seems to me that there are two possibilities here. The pessimistic view would be that the FTC just put together this report based on old categories and old thinking, and that it has little relevance to how antitrust analysis is actually done. But the more optimistic view, and the one that I prefer, is that the old traditional categories of measuring market structure with methods like the Herfindahl-Hirschman Index or a four-firm concentration ratio, as well as looking at factors like ease of entry, remain a more-than-adequate starting point for thinking about how antitrust enforcement is actually done. The more complex analysis described by Einav and Levin would then be applied to the hard cases, and discussion of the fine details of hard cases can be reserved for higher-level classes.

Interview with Elhanan Helpman

Douglas Clement has a characteristically excellent \”Interview with Elhanan Helpman\” in the December 2012 issue of The Region, published by the Federal Reserve Bank of Minneapolis. The main focuses of the interview are \”new growth theory, new trade theory and trade (and policy) related to market structure.\” Here\’s Helpman:

On the origins of \”new trade theory\”

\”When I was a student, the type of trade theory that was taught in colleges was essentially based on Ricardo’s 1817 insight, Heckscher’s 1919 insights and then Ohlin’s work, especially as formulated by [Paul] Samuelson later on. This view of trade emphasized sectoral trade flows. So, one country exports electronics and imports food, and another country exports chemicals and imports cars. This was the view of trade. The whole research program was focused on how to identify features of economies that would allow you to predict sectoral trade flows. In those years, there was actually relatively little emphasis on Ricardian forces, which deal with relative productivity differences across sectors, across countries, and there was much more emphasis on differences across countries in factor composition. …

Two interesting developments in the 1970s triggered the new trade theory. One was the book by Herb Grubel and Peter Lloyd in which they collected a lot of detailed data and documented that a lot of trade is not across sectors, but rather within sectors. Moreover, that in many countries, this is the great majority of trade. So, if you take the trade flows and decompose them into, say, the fraction that is exchanging [within sectors] cars for cars, or electronics for electronics, versus [across sectors] electronics for cars, then you find that in many countries, 70 percent—sometimes more and sometimes less—would have been what we call intra-industry trade, rather than across industries….

The other observation that also started to surface at the time was that when you looked at trade flows across countries, the majority of trade was across the industrialized countries. And these are countries with similar factor compositions. There were obviously differences, but they were much smaller than the differences in factor composition between the industrialized and the less-developed countries. Nevertheless, the amount of trade between developed and developing countries was much smaller than among the developed countries.

This raised an obvious question. If you take a view of the world that trade is driven by [factor composition] differences across countries, why then do we have so much trade across countries that look pretty similar? …

Then, on the theoretical front, monopolistic competition was introduced forcefully by both Michael Spence in his work, which was primarily about industrial organization, and [Avinash] Dixit and [Joseph] Stiglitz in their famous 1977 paper. These studies pointed out a way to think about monopolistic competition in general equilibrium. And trade is all—or, at least then, was all—about general equilibrium.

So combining these new analytical tools with the empirical observations enabled scholars to approach these empirical puzzles with new tools. And this is how the new trade theory developed.\”

On trade and inequality: an inverted U-shape?

\”Most of the work on trade and inequality in the neoclassical tradition was focused on inequality across different inputs. So, for example, skilled workers versus unskilled workers, or capital versus labor, and the like. There was a lot of interest in this issue with the rise in the college wage premium in the United States, which people then found happened also in other countries, including less-developed countries. …The other interesting thing that happened was that labor economists who worked on these issues also identified another source of inequality. They called it “residual” wage inequality, which is to say, if you look at wage structures and clean up wage differences across people for differences in their observed characteristics, such as education and experience, there is a residual wage difference, and wages are still quite unequal across people. In fact, it’s a big component of wage inequality.

Our aim in this research project, which has lasted now for a number of years, was to try to see the extent to which one can explain this inequality in residual wages by trade. It wasn’t an easy task, obviously, but the key theoretical insight came from the observation that once you have heterogeneity in firm productivities within industries, you might be able to translate this also into inequality in wages that different firms pay. …We tried to combine these insights, labor market frictions on the one hand and trade and firm heterogeneity on the other …We managed eventually, after significant effort, to build a model that has this feature but also maintains all the features that have been observed in the data sets previously. It was really interesting that the prediction of this model was that if you start from a very closed economy and you reduce trade frictions, then initially inequality is going to rise. However, once the economy is open enough, in a well-defined way, then additional reductions in trade friction reduce the inequality. Now, it is not clear that this is a general phenomenon, but our analytical model generated it. … [I]t’s an inverted U shape …\”

On how the gains from research and development spill across national borders

\”We computed productivity growth in a variety of OECD [Organisation for Economic Co-operation and Development] countries in this particular paper. We constructed R&D capital stocks for countries … Then we estimated the impact of the R&D capital stocks of various countries on their trade partners’ productivity levels. And we found substantial spillovers across countries. Importantly, in those data, these spillovers were related to the trade relations between the countries. And we showed that you gain more from the country that does more R&D if you trade with this country more. This produced a direct link between R&D investment in different countries and how trading partners benefit from it. …

 The developing countries don’t do much R&D. The overwhelming majority of R&D is done in industrialized countries, and this was certainly true in the data set we used at the time. So we asked the following question: If you look at developing countries, they trade with industrialized countries. Do they gain from R&D spillovers in the industrialized countries, and how does that gain depend on their trade structure with these industrialized countries? We showed empirically that the less-developed countries also benefited from R&D spillovers. And the more they trade with industrialized countries that engage heavily in R&D, the more they gain. …

One of the important findings—which analytically is almost obvious, but many people miss it—is that, if you have a process that raises productivity, such as R&D investment, then this also induces capital accumulation. So then, the contribution of R&D to growth comes not only from the direct productivity improvement, but also through the induced accumulation of capital. When you simulate the full-fledged model with these features, you get a very clear decomposition. You can see how much is attributable to each one.

With this, we could handle a relatively large number of countries in all different regions of the world, and [run some] interesting simulations. We could ask, for example, if all the industrialized countries raise their investment in R&D by an additional half percent of gross domestic product, who is going to benefit from it? Well, you find that the industrialized countries benefit from it a lot, but the less-developed countries benefit from it also a lot. It was still the case that the industrialized countries would benefit more, so in some way it broadened the gap between the industrialized and the less-developed countries. Nevertheless, all of them moved up significantly.\”
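The estimation approach Helpman describes at the start of this passage can be summarized in a stylized equation (my own sketch of the general flavor of the specification, not the exact equation from the underlying papers): productivity in each country is related to its own R&D capital stock and to an import-share-weighted average of its trading partners' R&D capital stocks.

```latex
% Stylized R&D spillover regression: own and import-weighted foreign R&D stocks
\[
\log TFP_i \;=\; \alpha_i \;+\; \beta_d \log S^{d}_i \;+\; \beta_f \log S^{f}_i \;+\; \varepsilon_i,
\qquad
S^{f}_i \;=\; \sum_{j \neq i} m_{ij}\, S^{d}_j,
\]
% where m_{ij} is the share of country i's imports coming from partner j.
```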

Are Loss Leaders an Anticompetitive Free Lunch?

All experienced shoppers understand the concept of a \”loss leader.\” A store offers an exceptionally low price on a particular item that is likely to be popular–indeed, the price is so low that on that item, the seller may lose money. But by advertising this \”loss leader\” item, the store hopes to bring in consumers who will then purchase other items as well that are not marked down. Of course, from the consumer point of view, the challenge is whether, in buying the loss leader item and making other purchases at that store, you actually end up with a better deal than if you bought the loss leader and then went to another store–or perhaps even did all your shopping in one stop at a different store. 

The terminology of loss-leaders is apparently nearly a century old. The earliest usage given in the Oxford English Dictionary is from a 1922 book called Chain Stores, by W.S. Hayward and P. White, who wrote: \”Many chains have a fixed policy of featuring each week a so-called ‘loss leader’. That is, some well known article, the price of which is usually standard and known to the majority of purchasers, is put on sale at actual cost to the chain or even at a slight loss..on the theory..that people will be attracted to this bargain and buy other goods as well. Loss leaders are often termed ‘weekly specials’.\”

But every economist knows at least one example of the classic loss leader from the late 19th century, the \”free lunch.\” At that time, a number of bars and saloons would advertise a \”free lunch,\” but customers were effectively required to purchase beer or some other drink. If you tried to eat the free lunch without purchasing a drink, you would likely be thrown out. Thus, the origins of the TANSTAAFL abbreviation: \”There ain\’t no such thing as a free lunch.\”

What I had not known is that there is a serious argument in the industrial organization literature over whether loss leaders should be treated by antitrust authorities as an anticompetitive practice. In 2002, for example, Germany's highest court upheld a decision of Germany's Federal Cartel Office that Wal-Mart was required to stop selling basic food items like milk and sugar below cost as a way of attracting customers. Ireland and France have also been known for their fairly strict laws prohibiting resale below cost.

The theory of \”Loss Leading as an Exploitative Practice\” is laid out by Zhijun Chen and Patrick Rey in the December 2012 issue of the American Economic Review. (The AER is not freely available online, but many academics will have access to this somewhat technical article through library subscriptions.) Their approach has two kinds of sellers: large firms that sell a wide range of product, and smaller firms that sell a more limited range of products. It also has some buyers who have a high time cost of shopping–and thus prefer to shop at one or a few locations–along with other buyers who have a lower time cost of shopping and thus are more willing to shop at many locations. They then build up a mathematical model in which large firms use loss leaders as a way of sorting consumers and attracting those who are likely to do all their shopping in one place. Once they have those consumers inside the store, they can then charge higher prices for other items. Thus, the result of allowing \”loss leaders\” in this model is that a number of consumers end up paying more, and large stores with a wide product range may tend to drive smaller stores out of the market.

Of course, the fact that it is possible to put together a certain theoretical model with this outcome doesn't prove that it is the only possible outcome, or that it's the outcome that should be of greatest practical concern. Back in 2007, the OECD Journal of Competition Law and Policy hosted a symposium on "Resale Below Cost Laws and Regulations." The articles can be read freely on-line with a slightly clunky browser here, or again, many academics will have on-line access through library subscriptions. The general tone of these articles is that loss leaders should not be viewed as an anticompetitive practice. In no particular order, and in my own words, here are some of the points that are made:

  • In general, business practices that reduce prices of at least some items to consumers should be presumptively supported by antitrust authorities, unless there is a very strong case against them. For the most part, regulators should spend little time second-guessing prices that are too low, and more time looking at prices that are too high or practices that are clearly unfair to consumers. 
  • There are a number of reasons why loss leaders might tend to encourage competition. Offering a loss leader can encourage consumers to overcome their inertia of buying the same things at the same prices and try out a new product or a new store. Sometimes producers may want to reward customers who are especially loyal or buy in especially large volumes. It may be far more effective for a store to advertise low prices on a few items than to advertise that "everything in the store is on average 2% cheaper than the competition." Loss leaders may be linked to other things the seller desires, like becoming a provider of credit to the buyer or raising the chance of getting detailed feedback from the buyer. Loss leaders may be especially useful to new entrants seeking to gain a foothold in a market.
  • Evidence from Ireland suggests that the grocery products where loss leaders are prohibited tend to have higher or rising prices, compared with other products. Since the products where loss leaders are prohibited tend to be more essential products, the prohibition on selling such items below cost tends to weigh most heavily on those with lower income levels.
  • In general, there has been a trend in retailing toward big-volume, low price retailers. But this trend doesn\’t seem to have been any slower in places where limitations on resale-below-cost were in place. And if the policy goal is to help small retailers, there are likely to be better targeted and less costly approaches than preventing loss leaders. Indeed, small firms may in some cases wish to entice buyers by offering loss leaders themselves.
  • It’s not clear how to apply prohibitions against loss leaders to vertically integrated firms, since they have considerable ability to use accounting rules to reduce the “cost” of production, and then to resell at whatever price they wish. Even in a firm that is not vertically integrated, invoices for what is purchased can often include various discounts and allowances, and it is an administrative challenge for rules preventing resale-below-cost to take these into account. And if a firm buys products at a high wholesale price and the market price then drops, presumably a resale-below-cost rule would prevent the firm from selling those products at all.
  • It’s worth noting that laws preventing loss leaders are not the same as laws that block “predatory pricing,” where the idea is to drive a competitor out of business with very low prices and then to charge higher prices on everything. That intertemporal scenario is quite different from the ongoing practice that a prohibition of loss leaders targets.
  • Allowing loss leaders doesn’t mean allowing deceptive claims, where for example there is a very low price advertised but the item is immediately out of stock, or of unexpectedly low quality.

Market competition happens along many dimensions at once. I lack confidence that government regulators with a mandate to block overly low prices will end up acting in a way that benefits consumers. 

Size of Global Capital Markets

While browsing through the Statistical Appendix to the October 2012 Global Financial Stability Report from the IMF (and yes, it’s the sort of thing I do), I ran across these numbers on the size of the global financial sector. In particular, the sum of the value of global stocks, bonds, and bank assets is 366% of the size of global GDP.

Of course, there is some mixing of apples and oranges here: bank assets, debt, and equities may overlap in various ways. But contemplate the sheer size of the $255 trillion total!
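As a rough back-of-the-envelope check on what those two figures imply about the size of world GDP (my own arithmetic from the numbers quoted above, not a calculation taken from the IMF report):

```python
# Back-of-the-envelope arithmetic from the two figures quoted above (rounded).
total_financial_assets = 255e12   # global stocks + bonds + bank assets, in dollars
ratio_to_gdp = 3.66               # 366 percent of world GDP

implied_world_gdp = total_financial_assets / ratio_to_gdp
print(f"Implied world GDP: roughly ${implied_world_gdp / 1e12:.0f} trillion")
```

That is, the two figures together are consistent with a world GDP in the neighborhood of $70 trillion.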

Given this size, and given the financial convulsions that have rocked the world economy in the last few years, it seems time to remember an old argument and put it to rest. The old argument was over whether finance should be considered part of economics at all. For me, the most memorable statement of the view that it is not came when Harry Markowitz, who would later win the Nobel Prize in economics for his role in developing portfolio theory, defended his doctoral dissertation on that work back in 1955.

Markowitz had taken a job at RAND, and so was flying back to the University of Chicago to defend his dissertation. He often told this story in interviews: here’s a version from a May 2010 interview.

“I remember landing at Midway Airport thinking, ‘Well, I know this field cold. Not even Milton Friedman will give me a hard time.’ And, five minutes into the session, he says, ‘Harry, I read your dissertation. I don’t see any problems with the math, but this is not a dissertation in economics. We can’t give you a Ph.D. in economics for a dissertation that isn’t about economics.’ And for most of the rest of the hour and a half, he was explaining why I wasn’t going to get a Ph.D. At one point, he said, ‘Harry, you have a problem. It’s not economics. It’s not mathematics. It’s not business administration.’ And the head of my committee, Jacob Marschak, shook his head, and said, ‘It’s not literature.’

“So we went on with that for a while and then they sent me out in the hall. About five minutes later Marschak came out and said, ‘Congratulations, Dr. Markowitz.’ So, Friedman was pulling my leg. At the time, my palms were sweating, but as it turned out, he was pulling my leg …”

It’s not clear to me that Friedman was only pulling Markowitz’s leg. Yes, Friedman wasn’t willing to block the dissertation. But in a later interview, Friedman did not recall the episode, though he did say: “What he [Markowitz] did was a mathematical exercise, not an exercise in economics.”

But Markowitz had the last word after winning the Nobel prize. In his acceptance lecture back in 1990, he ended by telling a version of this story, and then said: “As to the merits of his [Milton Friedman’s] arguments, at this point I am quite willing to concede: at the time I defended my dissertation, portfolio theory was not part of Economics. But now it is.”

The old view of economics and finance was that, except perhaps for episodes like major bubbles, the real economy was the dog and the financial economy was the tail, and the tail couldn’t wag the dog. But the problems of the modern world economy are incomprehensible without taking finance into account.

Classroom Evaluation of K-12 Teachers

Proposals for evaluating the classroom performance of K-12 teachers are typically based on hopes and fears, not on actual evidence. Those who support such evaluations hope to improve the quality of teaching by linking evaluations to teacher pay and jobs. The teachers’ unions that typically oppose such evaluations fear that they will be used arbitrarily, punitively, even whimsically, and in some way that will make teaching an even harder job.

The dispute seems intractable. But in the December 2012 issue of the American Economic Review, Eric S. Taylor (no relation!) and John H. Tyler offer actual real-world evidence on “The Effect of Evaluation on Teacher Performance.” (The AER is not freely available on-line, but many in academia will have access through library subscriptions.)

Taylor and Tyler have evidence on a sample of a little more than 100 mid-career math teachers in the Cincinnati Public Schools teaching fourth through eighth grade. These teachers were hired between 1993–1994 and 1999–2000. Then in 2000, a district planning process called for these teachers to be evaluated in a year-long, classroom-observation-based program, which occurred some time between 2003–2004 and 2009–2010. The order in which teachers were chosen for evaluation, and the year in which the evaluation occurred, were for practical purposes random. The actual evaluation involved observation of actual classroom teaching. But the researchers were also able to collect evidence on math test scores for students. Although these scores were not part of the teacher evaluation, the researchers could look to see whether the teacher evaluation process affected student scores. (Indeed, one reason for looking at math teachers is that scores on a math test provide a fairly good measure of student performance, compared with other subjects.) Again, these were mid-career teachers who typically had not been evaluated in any systematic way for years. 

Here’s how the evaluation process worked: “During the TES [Teacher Evaluation System] evaluation year, teachers are typically observed in the classroom and scored four times: three times by an assigned peer evaluator—high-performing, experienced teachers who are external to the school—and once by the principal or another school administrator. Teachers are informed of the week during which the first observation will occur, with all other observations being unannounced. The evaluation measures dozens of specific skills and practices covering classroom management, instruction, content knowledge, and planning, among other topics. Evaluators use a scoring rubric, based on Charlotte Danielson’s Enhancing Professional Practice: A Framework for Teaching (1996), which describes performance of each skill and practice at four levels: “Distinguished,” “Proficient,” “Basic,” and “Unsatisfactory.” … After each classroom observation, peer evaluators and administrators provide written feedback to the teacher, and meet with the teacher at least once to discuss the results.”

A common pattern shows up in these kinds of subjective evaluations: evaluators are often fairly tough in grading and commenting on specific skills and practices, but they still tend to give a high overall grade. That pattern occurred here as well. The authors write: “More than 90 percent of teachers receive final overall TES scores in the “Distinguished” or “Proficient” categories. Leniency is much less frequent in the individual rubric items and individual observations …”

In theory, teachers who were fairly new to the district could lose their job if their evaluation score was low enough, and those who scored very high could get a raise. But because almost everyone ended up with fairly high overall scores, the practical effects of this evaluation on pay and jobs were pretty minimal.

Nevertheless, student performance not only went up during the year in which the evaluation happened; it also stayed higher in later years for teachers who had already been evaluated. “The estimates presented here—greater teacher productivity as measured by student achievement gains in years following TES evaluation—strongly suggest that teachers develop skill or otherwise change their behavior in a lasting manner as a result of undergoing subjective performance evaluation in the TES system. Imagine two students taught by the same teacher in different years who both begin the year at the fiftieth percentile of math achievement. The student taught after the teacher went through comprehensive TES evaluation would score about 4.5 percentile points higher at the end of the year than the student taught before the teacher went through the evaluation. … Indeed, our estimates indicate that postevaluation improvements in performance were largest for teachers whose performance was weakest prior to evaluation, suggesting that teacher evaluation may be an effective professional development tool.”
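To make the logic of that before-and-after comparison concrete, here is a small simulation of my own: a stylized sketch of a staggered-evaluation design like Cincinnati’s, not Taylor and Tyler’s actual estimation, with all of the numbers (the count of teachers, the noise, the assumed effect) chosen purely for illustration.

```python
# A stylized simulation of a staggered-evaluation comparison.
# This is my own illustrative sketch, NOT Taylor and Tyler's estimation;
# every number below is invented, except that the assumed effect size
# (4.5 percentile points) is borrowed from the result quoted above.
import random

random.seed(0)

N_TEACHERS = 100
YEARS = list(range(2004, 2010))
EVAL_EFFECT = 4.5            # assumed lasting gain after the evaluation year

# Each teacher gets an (effectively random) evaluation year and a baseline
# class-average math percentile that is unrelated to when they are evaluated.
eval_year = {t: random.choice(YEARS) for t in range(N_TEACHERS)}
baseline = {t: random.gauss(50, 5) for t in range(N_TEACHERS)}

pre_scores, post_scores = [], []
for t in range(N_TEACHERS):
    for year in YEARS:
        score = baseline[t] + random.gauss(0, 2)      # class average that year
        if year > eval_year[t]:
            score += EVAL_EFFECT                      # years after the evaluation
            post_scores.append(score)
        else:
            pre_scores.append(score)

pre_mean = sum(pre_scores) / len(pre_scores)
post_mean = sum(post_scores) / len(post_scores)
print(f"Mean class percentile before evaluation: {pre_mean:.1f}")
print(f"Mean class percentile after evaluation:  {post_mean:.1f}")
print(f"Estimated effect of evaluation: {post_mean - pre_mean:.1f} percentile points")

# Because the timing of the evaluation is as good as random, and a teacher's
# baseline class quality is unrelated to that timing, the simple gap between
# the two averages recovers something close to the assumed 4.5 points.
```

The actual paper is of course far more careful than this, but the effectively random timing of the evaluations is what makes a before-and-after comparison of this kind informative.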

By the standards typically prevailing in K-12 education, the idea that teachers should experience an actual classroom evaluation consisting of four visits in a year, maybe once a decade or so, would have to be considered highly interventionist, which is ludicrous. Too many teachers perceive their classroom as a private zone where they should not and perhaps cannot be judged. But teaching is a profession, and the job performance of professionals should be evaluated by other professionals. The Cincinnati evidence strongly suggests that detailed, low-stakes, occasional evaluation by other experienced teachers can improve the quality of teaching over time. Maybe if some of the school reformers backed away from trying to attach potentially large consequences to such evaluations in terms of pay and jobs, at least a few teachers’ unions would be willing to support this step toward higher-quality teaching.

Note: Some readers might also be interested in this earlier post from October 3, 2011, “Low-Cost Education Reforms: Later Starts, K-8, and Focusing Teachers.”