Lemley on Fixing the U.S. Patent System

Mark Lemley has written "Fixing the Patent Office" for SIEPR, the Stanford Institute for Economic Policy Research (Discussion Paper No. 11-014, published May 21, 2012). Lemley has an interesting starting point for thinking about the U.S. patent system. He writes (footnotes omitted):

"Most patents don’t matter. They claim technologies that ultimately failed in the marketplace. They protect a firm from competitors who for other reasons failed to materialize. They were acquired merely to signal investors that the relevant firm has intellectual assets. Or they were lottery tickets filed on the speculation that a given industry or invention would take off. Those patents will never be licensed, never be asserted in negotiation or litigation, and thus spending additional resources to examine them would yield few benefits."

"Some bad patents, however, are more pernicious. They award legal rights that are far broader than what their relevant inventors actually invented, and they do so with respect to technologies that turn out to be economically significant. Many Internet patents fall into this category. Rarely a month goes by that some unknown patent holder does not surface and claim to be the true inventor of eBay or the first to come up with now‐familiar concepts like hyperlinking and e‐commerce. While some such Internet patents may be valid–someone did invent those things, after all–more often the people asserting the patents actually invented something much more modest. But they persuaded the Patent Office to give them rights that are broader than what they actually invented, imposing an implicit tax on consumers and thwarting truly innovative companies who do or would pioneer those fields."

"Compounding the problem, bad patents are too hard to overturn. Courts require a defendant to provide “clear and convincing evidence” to invalidate an issued patent. In essence, courts presume that the Patent Office has already done a good job of screening out bad patents. Given what we know about patents in force today, that is almost certainly a bad assumption."

"The problem, then, is not that the Patent Office issues a large number of bad patents. Rather, it is that the Patent Office issues a small but worrisome number of economically significant bad patents and those patents enjoy a strong, but undeserved, presumption of validity."

Long-time devotees of my own Journal of Economic Perspectives may recognize this argument, because it is similar to what Lemley argued with co-author Carl Shapiro in "Probabilistic Patents" in the Spring 2005 issue. (JEP articles are freely available to all courtesy of the American Economic Association.) As Lemley argues, the problems of the patent system aren't as simple as taking longer to examine patent applications, hiring more patent examiners, or being more stingy in granting patents. Instead, the goal should be to give greater attention to the patents that are likely to end up being more important. How might this be done?

One approach is to give patent applicants a method of signalling whether they believe the patent will be important. The idea here is that patent applicants can apply under the current system, in which case their patent would have only the usual legal presumption in its favor if challenged in court, or they can pay a substantial amount extra for a more exhaustive patent examination, which would have a much stronger presumption in its favor if challenged in court. Lemley writes:

"[A]pplicants should be allowed to “gold plate” their patents by paying for the kind of searching review that would merit a strong presumption of validity. An applicant who chooses not to pay could still get a patent. That patent, however, would be subject to serious—maybe even de novo—review in the event of litigation. Most likely, applicants would pay for serious review with respect to their most important patents but conserve resources on their more speculative entries. That would allow the Patent Office to focus its resources on those self-selected patents, thus benefiting from the signal given by the applicant’s own self‐interested choice. The Obama campaign proposed this sort of tiered review, and the PTO [Patent and Trademark Office] has recently implemented a scaled‐down version, in which applicants can choose the speed but not the intensity of review. Adoption has been significant but modest … [I]t appears to be performing its intended function of distinguishing some urgent applications from the rest of the pack."

Another approach would be to allow other parties to pay a substantial fee to the Patent Office to re-examine the grounds for a recently granted patent. Lemley again:

"Post‐grant opposition is a process by which parties other than the applicant have the opportunity to request and fund a thorough examination of a recently issued patent. A patent that survives collateral attack should earn a presumption of validity … [P]ost‐grant opposition is attractive because it harnesses private information; this time, information in the hands of competitors. It thus helps the PTO to identify patents that warrant serious review, and it also makes that review less expensive by creating a mechanism by which competitors can share critical information directly with the PTO. A post‐grant opposition system is part of the new America Invents Act, but it won’t begin to apply for another several years, and the new system will be unavailable to many competitors because of the short time limits for filing an opposition. … But the evidence from operation of similar systems in Europe is encouraging."

Finally, the traditional way to focus on the 1-2% of patents that really matter, and where the parties can't agree, is to litigate. Lemley argues that such litigation will continue to be quite important, and that the underlying legal doctrine should acknowledge that many patents do not deserve a strong presumption of validity–unless it has been earned through an especially exhaustive process at the Patent and Trademark Office. Lemley one more time:

"[W]e will continue to rely on litigation for the foreseeable future as a primary means for weeding out bad patents. Litigation elicits information from both patentees and competitors through the adversarial process, which is far superior to even the best‐intentioned government bureaucracy as a mechanism for finding truth. More important, litigation is focused on the very few patents (1-2 percent) that turn out to be important and about which parties cannot agree in a business transaction. Litigation can be abused, and examples of patent litigation abuse have been rampant in the last two decades. But a variety of reforms have started to bring that problem under control, and the courts have the means to continue that process. … Courts could modulate the presumption of validity for issued patents. A presumption like that embraced by the current “clear and convincing” standard must be earned, and under current rules patent applicants do not earn it. … The current presumption is so wooden that courts today assume a patent is valid even against evidence that the patent examiner never saw, much less considered, a rule that makes no sense."

None of this is to say that it doesn't make sense to rethink training and expectations for patent examiners themselves, and Lemley has some interesting evidence about how patent examiners tend to turn down fewer patents the longer they are on the job, and how they often rely on the background material that they personally gather, rather than on background collected by others–including others in the Patent Office itself. But the idea that patent reform shouldn't focus on trying to review every application exhaustively, but instead on how to give greater attention to the applications that have real-world importance, seems to me a highly useful insight.

Dimensions of U.S. College Attendance

Alan Krueger, chairman of President Obama's Council of Economic Advisers, gave a lecture at Columbia University in late April on "Reversing the Middle Class Jobs Deficit." A certain proportion of the talk is devoted to explaining how all the good economic news is due to Obama's economic policies and how all of Obama's economic policies have benefited the U.S. economy. Readers can evaluate their own personal tolerance for that flavor of exposition. But the figures that accompany such talks are often of independent interest, and in particular, my eye was caught by some figures about U.S. college attendance. (Full disclosure: Alan was editor of my own Journal of Economic Perspectives, and thus my direct boss, from 1996-2002.)

First look at the share of U.S. 55-64 year-olds in 2009 who have a post-secondary degree of some sort. It hovers around 40% of this age group, highest in the world, according to OECD data. Then look at the share of U.S. 25-34 year-olds in 2009 who have a post-secondary degree of some sort. It's also right around 40% for this age group. Although one might expect that a higher proportion of the younger generation would be obtaining post-secondary degrees, this isn't actually true for the United States over the last 30 years. However, it is true for many other countries, and as a result, the U.S. is middle-of-the-pack in post-secondary degrees among the 25-34 age group. This news isn't new–for example, I posted about it in July 2011 here–but it's still striking. It seems to me possible to have doubts about the value and cost of certain aspects of post-secondary education (and I do), but still to be concerned that the U.S. population is falling back among its international competitors on this measure (and I am).

Krueger also points out that the chance of completing a bachelor's degree is strongly affected by the income level of your family. The horizontal axis shows the income distribution divided into fourths. The vertical axis shows the share of those who complete a bachelor's degree by age 25. The lower red line is for those born between 1961 and 1964–that is, those who started attending college roughly 18 years later, in 1979. The upper line is for those born from 1979 to 1982–that is, those who started attending college in 1998.

Here are a few observations based on this figure:

1) Even for those from top-quartile groups in the more recent time frame, only a little more than half are completing a bachelor's degree by age 25. To put it another way, the four-year college degree has never been the relevant goal for the median U.S. high school student. Given past trends and the current cost of such degrees, it seems implausible to me that the U.S. is going to increase dramatically the share of its population getting a college degree. I've posted at various times about how state and local funding for public higher education is down; about how the U.S. plan for expanding higher education appears to involve handing out more student loans, which then are often used at for-profit institutions with low graduation rates; and about how alternatives to college like certification programs, apprenticeships, and ways of recognizing nonformal and informal learning should be considered.

2) Those from families in lower income quartiles clearly have a much lower chance of finishing a four-year college degree. My guess is that this difference is only partly due to the cost of college; a major reason for the difference is that those with lower incomes are more likely to attend schools and to come from family backgrounds that aren't preparing them to attend college. Moreover, the gap in college attendance between those from lower- and higher-income families hasn't changed much over the two decades between the lower and the higher line in the figure, so whatever we've been doing to close the gap doesn't seem to be working.

3) It\’s a safe bet that many of those in the top quarter are families where the parents are college graduates, supporting and pushing their children to be college graduates. It\’s also a safe bet that many of those in the bottom quarter are families where the parents are not college graduates, and their children are not getting the support of all kinds that they need to become college graduates. In this way, it seems likely that college education is serving a substantial role in causing inequality of incomes to pass from one generation to the next.  Krueger has referred to this pattern of high income inequality at one time leading to high inequality in the future as the \”Great Gatsby Curve,\” as I described here.

Lawyers without Licenses?

I wrote a few days back about how widespread state-level requirements for occupational licenses limit the job market opportunities for many low-skilled workers. But of course, many other occupations are licensed, too. In their book First Thing We Do, Let's Deregulate All the Lawyers, Clifford Winston, Robert Crandall, and Vikram Maheshri make the case for lawyers without licenses. (The Brookings Institution Press page for ordering the book is here; the Amazon page is here.) Cliff Winston has a nice readable overview of their argument, "Deregulate the Lawyers," which appears in the Second Quarter 2012 issue of the Milken Institute Review (which is ungated, although a free registration is required).

Just to be clear, the proposal here isn't for abolishing law schools or law degrees. Instead, the proposal is that it should be legal, if the buyer so desires, to hire people without such degrees to do legal work. Here are a few of the points they make:

  • The U.S. has about one million lawyers. "[T]oday, all but a handful of states – the notable exception being California – require bar applicants to be graduates of ABA-accredited law schools. And every state except Wisconsin (which grants free passes to graduates of the state’s two major law schools) then requires them to pass a bar exam."
  • "State governments (and state appellate courts) have also gone along with the ABA’s [American Bar Association's] wish to prohibit businesses from selling legal services unless they are owned and managed by lawyers. And not surprisingly, the group’s definition of the practice of law is expansive, including nearly every conceivable legal service, including the sale of simple standard form wills."
  • In the book, Winston, Crandall, and Maheshri attempt to estimate how much more income lawyers are able to receive, above and beyond the alternative jobs for those with similar levels of education, as a result of these licensing rules. They argue that about 50% of the income of lawyers is a result of the licensing limits. I view this number as closer to an educated guess than a precise valuation, but given that the U.S. spends about $200 billion per year on legal services, even half that amount would be a very large dollar value. 
  • In usual markets, more supply drives down the price. But when a society has more lawyers, and more of those lawyers end up in political and regulatory jobs, it is plausible that, under the current regulatory restrictions, lawyers as a group are making more work and generating billable hours for each other.
  • It\’s never easy to predict what would happen if it became legal to hire those with less than a three-year law degree from an accredited institution to do legal work. But it seems plausible that a lot of jobs done by lawyers could be done by someone with fewer years of education and fewer student loans to pay off: basic wills; criminal defense in simple cases of DWI or public intoxication; basic divorce and bankruptcy; simple incorporation papers; real estate transactions; and other situations. One can imagine that the skills needed in these cases might be taught as part of an undergraduate major in law, or law schools might offer one-year and two-year degrees along with the full three-year degree, or even as part of an apprenticeship program. National firms might seek to establish brand names and reputations in these areas, like H&R Block does for tax preparation services. In some cases, like certain kinds of legally required corporate disclosure filings, perhaps sophisticated forms and information technology could substitute for a lawyer filling in the blanks. 
  • Certainly, some of these steps might drive down wages for existing lawyers. But on average, lawyers receive pay that is well above the U.S. average, with unemployment rates below the U.S. average. Indeed, the U.S. economy as a whole might be better off if some of those who now work as lawyers entered other parts of the private sector–perhaps starting and managing businesses. "There is little doubt that some people who become attorneys would have chosen to work in other occupations – and possibly made greater contributions to society – if they were not attracted to law by the prospect of inflated salaries."
  • Many people with low incomes end up without legal representation because of cost. "Surely, many of the currently unrepresented litigants would be better off even if they gained access only to uncredentialed legal advocates."
  • Perhaps the quality of legal representation would decline without the licensing laws, but that isn't obviously true. "[T]he American Bar Association’s own Survey on Lawyer Discipline Systems reported that, in 2009, some 125,000 complaints were logged by state disciplinary agencies – one complaint for every eight lawyers practicing in the United States. Note that this figure is a lower bound on client dissatisfaction because it includes only those individuals who took the time to file a complaint." A deregulated environment for lawyers might well produce other methods of ensuring quality: warranties; money-back guarantees; brand-name reputation; and firms that monitor or rate providers of legal services.
  • Deregulation in the airline industry back in the 1970s occurred partly because it was possible to observe how airline competition was actually working within the states of California and Texas. Might there be some example of deregulating the lawyers that would have a similar effect? "One state – perhaps Arizona, whose legislature has declined to re-enact its unauthorized practice statute, or California, whose bar indicated it would not initiate actions under its statute – may realize benefits that build support elsewhere. And perhaps England’s and Australia’s recent efforts to liberalize regulation of their legal services will attract attention here."

My own sense is that while the U.S. economy does need a certain number of big-time lawyers, many law students spend years of class-time and tens of thousands of tuition dollars on classes that bear no relationship to the law that they will actually practice. Back in college, one of my economics professors used to have a nice pre-packaged rant against regulations that were intended to ensure high quality, because he believed that everyone should have the right to buy cheap and low-quality stuff if it was what they wanted–or all they could afford. The legal services that most of us need most of the time could be provided far more cheaply, and at least as reliably, without requiring that every provider get a four-year college degree and then spend three more years in law school.

Illustrating Economies of Scale

The concept of \”economies of scale\”has been lurking around economics since Alfred Marshall\’s Principles of Economics back in 1890 (see Book IV, Ch. VIII, from the 1920 edition here). It\’s one of the few semi-technical bits of economics-speak to make it into everyday discussions. But in explaining the concept to students I don\’t always have as as many good concrete examples as I would like. Here are some of the examples I use. But if readers are aware of sound citations to academic research to back up these examples, or other other examples with a sound research backing, I\’d be delighted to hear about them.

A number of real-world examples of economies of scale are plausible. Why are there only two major firms producing airplanes: Boeing and Airbus? A likely answer is that economies of scale make it difficult for smaller firms to get more than a very specialized niche of the market. Why are there two big cola soft-drink companies: Coca-Cola and Pepsi? Why are there a relatively small number of national fast-food hamburger chains: McDonald's, Burger King, Wendy's? A likely explanation is that there are economies of scale for such firms, partly in terms of being able to afford a national advertising and promotion budget, partly in terms of cost advantages of buying large quantities of inputs. Why is there only one company providing tap water in your city? Because there are economies of scale to building this kind of network, and running duplicative sets of pipes for additional water companies would be inefficient.

While I believe that these examples are a reasonable enough approximation of an underlying truth to pass along to students, I confess that I'm not familiar with solid economic research establishing the existence and size of economies of scale in these cases.
  
In the second edition of my own Principles of Economics textbook, I give one of my favorite examples of economies of scale: the "six-tenths rule" from the chemical manufacturing industry. (If you are an instructor for a college-level intro economics class–or you know such an instructor!–the book is available from Textbook Media. The price ranges from $20 for a pure on-line book to $40 for a black-and-white paper book with on-line access as well. In short, it's a good deal–and on-line student questions and test banks are available, too.) The research on this rule actually goes back some decades. Here's my one-paragraph description from the textbook (p. 178):

"One prominent example of economies of scale occurs in the chemical industry. Chemical plants have a lot of pipes. The cost of the materials for producing a pipe is related to the circumference of the pipe and its length. However, the volume of gooky chemical stuff that can flow through a pipe is determined by the cross-section area of the pipe. … [A] pipe which uses twice as much material to make (as shown by the circumference of the pipe doubling) can actually carry four times the volume of chemicals (because the cross-section area of the pipe rises by a factor of four). Of course, economies of scale in a chemical plant are more complex than this simple calculation suggests. But the chemical engineers who design these plants have long used what they call the “six-tenths rule,” a rule of thumb which holds that increasing the quantity produced in a chemical plant by a certain percentage will increase total cost by only six-tenths as much."
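The arithmetic behind that paragraph is easy to check directly. A minimal sketch in Python (the specific numbers are mine, chosen purely for illustration; the 0.6 exponent is the rule of thumb quoted above):

```python
import math

# Doubling a pipe's circumference doubles the material needed (cost),
# but quadruples the cross-section area (throughput).
def pipe_area(circumference):
    radius = circumference / (2 * math.pi)
    return math.pi * radius ** 2

small, large = pipe_area(1.0), pipe_area(2.0)
print(large / small)  # roughly 4: four times the volume for twice the material

# The six-tenths rule of thumb: scaling output by a factor k
# scales total cost by roughly k ** 0.6.
def scaled_cost(base_cost, output_ratio, exponent=0.6):
    return base_cost * output_ratio ** exponent

# Doubling plant output raises total cost by only about half, not 100%.
print(scaled_cost(100.0, 2.0))
```

So under the rule of thumb, average cost per unit keeps falling as the plant scales up, which is exactly what "economies of scale" means.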

A recent related example of how pure size can add to efficiency is the trend toward ever-larger container ships. For a press discussion, see "Economies of scale made steel: The economics of very large ships" in the Economist, November 12, 2011. The new generation of ships are 400 meters long and 50 meters wide, with the largest internal combustion engines ever built, driving a propeller shaft that is 130 meters long and a propeller that weighs 130 tons. Running such a ship takes a crew of only 13 people, although crews typically include a few more for redundancy. Ships with 20% larger capacity than this one are on the way.

One useful way to help make economies of scale come alive for students is to link the concept with antitrust and public policy concerns. For example, a big question in the aftermath of the financial crisis is whether big banks should be broken up, so that the government doesn't face the necessity of bailing them out because they are "too big to fail." I posted about this issue in Too Big To Fail: How to End It? on April 2, 2012. One piece of evidence in the question of whether to break up the largest banks is whether they might have large economies of scale–in which case breaking them up would force consumers of bank services to pay higher costs. However, in that post, I cite Harvey Rosenblum of the Dallas Fed arguing: "Evidence of economies of scale (that is, reduced average costs associated with increased size) in banking suggests that there are, at best, limited cost reductions beyond the $100 billion asset size threshold." Since the largest U.S. banks are a multiple of this threshold, the research suggests that they could be broken up without a loss of economies of scale.

Another recent example of the interaction between claims about economies of scale and competition policy came up in the proposed merger between AT&T and T-Mobile. The usual counterclaims arose in this case: the companies argued that the merger would bring efficiencies that would benefit consumers, while the antitrust authorities worried that the merger would reduce competition and lead only to higher prices. Yan Li and Russell Pittman tackle the question of whether the merger was likely to produce efficiencies in "The proposed merger of AT&T and T-Mobile: Are there unexhausted scale economies in U.S. mobile telephony?", a discussion paper published by the Economic Analysis Group of the U.S. Department of Justice in April 2012.

"AT&T’s proposed $39 billion acquisition of T-Mobile USA (TMU) raised serious concerns for US policymakers, particularly at the Federal Communications Commission (FCC) and the Antitrust Division of the Justice Department (DOJ), which shared jurisdiction over the deal. Announced on March 20, 2011, the acquisition would have combined two of the four major national providers of mobile telephony services for both individuals and businesses, with the combined firm’s post-acquisition share of revenues reportedly over 40 percent, Verizon a strong number two at just under 40 percent, and Sprint a distant number three at around 20 percent. …

All of this raises the crucial question: How reasonable is it to assume that under current (i.e. without the merger) conditions, AT&T and T-Mobile enjoy substantial unexhausted economies of density and size of national operations? Recall that the fragmentary estimates made public suggest claims of at least 10-15 percent reductions in cost, and perhaps 25 percent or more. Absent an econometric examination of mobile telephony for the US as a whole as well as for individual metropolitan areas, what can we infer from the existing literature? The literature on at least one other network industry is not particularly supportive. … Most of the existing empirical literature features observations at the firm level, with output measured as number of subscribers or, less frequently, revenues or airtime minutes. These studies tend to find constant returns to scale or even decreasing returns to scale for the largest operators – i.e., generally U-shaped cost curves. …


[I]t is unlikely that T-Mobile, and very unlikely that AT&T, are currently operating in a range where large firm-level economies related to activities such as procurement, marketing, customer service, and administration would have been achievable due to the merger. Regarding both measures, the presence of “immense” unexhausted economies for the two firms seems unlikely indeed. On this basis (and on this basis alone), our results support the decision of DOJ to challenge the merger and the scepticism expressed by the FCC staff."


Li and Pittman also raise the useful point that very large firms should perhaps be cautious about claiming huge not-yet-exploited economies of scale are available if only they could merge with other very large firms. After all, if economies of scale persist to a level of output where only one or a few mega-firms can take advantage of them, then an economist will ask whether this is a case of "natural monopoly," and thus whether there is a case for regulation to assure that the mega-firm, insulated from competitive challenge because it can take advantage of economies of scale, will not exploit its monopoly power to overcharge consumers. As Li and Pittman write of the proposed merger between AT&T and T-Mobile: "[W]e may justifiably ask whether if one believes the evidence of “immense” economies presented by the merging companies, one should take the next step and consider whether mobile telephony in U.S. cities is a “natural monopoly”, with declining costs throughout the relevant regions of demand?"

Finally, an intriguing thought that economies of scale may become less important in the future, at least in some areas, comes from the new technology of manufacturing through 3D printing. Here's a discussion from the Economist, April 21, 2012, in an article called "A third industrial revolution":

"Ask a factory today to make you a single hammer to your own design and you will be presented with a bill for thousands of dollars. The makers would have to produce a mould, cast the head, machine it to a suitable finish, turn a wooden handle and then assemble the parts. To do that for one hammer would be prohibitively expensive. If you are producing thousands of hammers, each one of them will be much cheaper, thanks to economies of scale. For a 3D printer, though, economies of scale matter much less. Its software can be endlessly tweaked and it can make just about anything. The cost of setting up the machine is the same whether it makes one thing or as many things as can fit inside the machine; like a two-dimensional office printer that pushes out one letter or many different ones until the ink cartridge and paper need replacing, it will keep going, at about the same cost for each item.

"Additive manufacturing is not yet good enough to make a car or an iPhone, but it is already being used to make specialist parts for cars and customised covers for iPhones. Although it is still a relatively young technology, most people probably already own something that was made with the help of a 3D printer. It might be a pair of shoes, printed in solid form as a design prototype before being produced in bulk. It could be a hearing aid, individually tailored to the shape of the user’s ear. Or it could be a piece of jewellery, cast from a mould made by a 3D printer or produced directly using a growing number of printable materials."
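The hammer example above is really a statement about fixed versus variable costs, which can be sketched in a few lines of Python. All the dollar figures here are invented purely for illustration: the point is the shape of the comparison, not the numbers.

```python
# Traditional manufacturing: a large one-time setup cost (mould, tooling)
# spread over the production run, plus a low per-unit cost.
# 3D printing: almost no setup cost, but a higher per-unit cost.
def average_cost(fixed, per_unit, quantity):
    return fixed / quantity + per_unit

TOOLING = 50_000.0      # hypothetical cost of the mould and tooling
FACTORY_UNIT = 2.0      # hypothetical factory cost per hammer
PRINTER_UNIT = 15.0     # hypothetical 3D-print cost per hammer

for q in (1, 100, 10_000):
    factory = average_cost(TOOLING, FACTORY_UNIT, q)
    print(f"qty {q:>6}: factory ${factory:,.2f}/unit vs printed ${PRINTER_UNIT}/unit")
```

With these made-up numbers, a one-off factory hammer costs over $50,000 while the printed one costs $15; at 10,000 units the factory's average cost falls to $7 and mass production wins. That crossover is exactly why economies of scale matter for the factory and "matter much less" for the printer.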

Right now, 3D printing is a more expensive manufacturing technology than standard mass production, but it is also vastly more customizable. For uses where this flexibility matters, like a hearing aid or other medical device that exactly fits, or making a bunch of physical prototypes to be tested, 3D printing is already beginning to make some economic sense. As the price of 3D printing falls, it will probably become integrated into a vast number of production processes that combine old-style mass manufacturing with 3D-printed components. One suspects that a high proportion of the value-added and the price that is charged to consumers will be in the customized part of the production process.

Capital Controls: Evolution of the IMF and Conventional Wisdom

Back when I was first being doused with economics in the late 1970s and early 1980s, the idea that a country might impose controls on the inflow or outflow of international capital had a sort of fusty, past-the-expiration-date odor around it. Sure, such controls had existed back before World War II, and persisted for some years after the war. But wasn't it already time, or slightly past time, to phase them out? Not coincidentally, this was also the position of the IMF at around this time. But when the IMF was founded back in the late 1940s, it accepted the necessity for capital controls, and now the IMF is returning to an acceptance of capital controls–but with a twist. Let me walk through this evolution.

At the founding of the IMF back in the late 1940s, "almost all members maintained comprehensive capital controls that the drafters of the Articles assumed would remain in place for the foreseeable future," reports the IMF in a 2010 paper on "The IMF's Role Regarding Cross-Border Capital Flows." The perspective was that "bodies such as the General Agreement on Tariffs and Trade (GATT—now the World Trade Organization, WTO) were to be responsible for the liberalization of trade in goods and now services, while the Fund would ensure that members liberalized the payments and transfers associated with such trade." However, while the IMF would encourage liberalizing payments for trade, it would not especially encourage international capital flows for purposes of financial investment, based on a "rather negative view of capital flows that then prevailed, premised on the belief that speculative capital movements had contributed to the instability of the prewar system and that it was necessary to control such movements."

By the 1970s, the position of the IMF (and many mainstream economists) had changed. \”Many advanced economies were liberalizing their capital accounts and it was recognized that international capital movements had begun to play an important role in the functioning of the international monetary system …\”
The IMF began to actively encourage countries to allow free movement of capital for investment purposes, although it stopped short of requiring such a change as a condition for loans. The general tenor of the IMF advice was that if there were problems with international capital flows, they could typically be resolved in other ways, like through flexibility of exchange rates or alterations in fiscal and monetary policy. By the 1990s, the IMF was proposing that all of its member countries should gradually but surely remove all capital controls.
For more details on the IMF history with respect to capital controls, see the April 2005 report from the IMF\’s Independent Evaluation Office, Report on the Evaluation of the IMF\’s Approach to Capital Account Liberalization.

But this proposal never took effect, and one big reason was the East Asian financial crisis of 1997-98. The East Asian \”tiger\” economies like South Korea, Thailand, Malaysia, Indonesia and Taiwan had been growing ferociously in the 1980s and into the 1990s. They were viewed as genuine economic success stories: growing with rapid productivity growth, and fairly well-managed in their fiscal and monetary policies. But they attracted a wave of international investment capital in the early 1990s that pumped up their currencies and stock markets to unsustainable levels, and when the bubble burst and the international financial capital rushed out, it left behind a financial crisis and a deep recession. Of course, the recent difficulties in small European economies like Greece, Ireland, Portugal and Spain follow a similar pattern: international financial capital rushed in, promoted a boom, and then rushed out, leaving financial crisis and recession.

If you are in an economy that is small by global standards–which is most of them–then international capital markets have a tendency to overreact dramatically. It\’s as if, when I said \”I\’m hungry,\” someone dumped a bathtub full of spaghetti over my head, and then when I said \”that\’s too much,\” they starved me for a week. When small national economies look like a good place to invest, international money floods in and can lead to price bubbles and unsustainable booms. When the economic problems become apparent, then at some point a \”sudden stop\” occurs and international money floods out, leading to financial and economic crises.

Even before the East Asian crisis hit, some folks in the IMF and many outside it were rethinking the notion that capital controls should only be viewed as obstructions to be removed, and began trying to develop a more nuanced view. The most recent IMF effort along these lines is a series of papers on \”capital flows and the policies that affect them.\” The first paper in the series is the 2010 paper cited above. The fourth paper came out in March 2012, called \”Liberalizing Capital Flows and Managing Outflows.\” Here are a few highlights (references to figures and citations omitted):

Removing capital controls has theoretical benefits, but in the real world often has costs
\”In perfect markets with full information and no externalities, liberalization of capital flows can benefit both source and recipient countries by improving resource allocation. The more efficient global allocation of savings can facilitate investment in capital-scarce countries. In addition, liberalization of capital flows can promote risk diversification, reduce financing costs, generate competitive gains from entry of foreign investors, and accelerate the development of domestic financial systems.\”

The main cost of removing capital controls is the risk of sudden stops
\”The principal cost of capital account openness stems from the vulnerability to financial crises triggered by sudden stops in capital flows, and from currency and maturity mismatches. Systemic risk-taking can increase investment, leading to higher growth but also to a greater incidence of crises. Many empirical studies have established the strong association between surges in capital inflows (and their composition) and the likelihood of debt, banking, and currency crises in emerging market countries. Other studies, however, do not find a systematic association between crises and capital account openness, but find that the relationship hinges on the level of financial sector development, institutional quality, macroeconomic policy, and trade openness … \”

The main policy recommendations still propose a gradual movement toward fewer restrictions on capital movements, but this recommendation now comes hedged about with qualifications. Three of these seem especially important to me.

1) \”In low-income countries, the benefits of capital flows arise mainly from foreign direct investment (FDI). In many countries, FDI has helped to boost investment, employment, and growth. Low-income countries generally need to strengthen their institutions and markets in order to safely absorb most other types of capital flows, which carry substantial risks until such thresholds are met.\” A common recommendation is that countries should allow foreign direct investment, and then gradually open up to international investment in equity markets, and then gradually open up to international lending and borrowing.

2) The pace at which this gradual opening-up to international capital happens should depend on the prior development of a nation\’s economic conditions and political institutions.

3) When using capital constraints, focus more on limiting international inflows than on outflows. This recommendation is a reversal from the early days of the IMF, when constraints on capital outflows were common, but constraints on inflows were almost unheard of. But the recommendation makes sense. Limits on capital outflows are very hard to enforce in an interconnected modern world economy, and they are only needed after a financial crisis has already occurred. Limits on capital inflows, like encouraging foreign direct investment but discouraging dependence on short-term capital inflows from abroad, help to prevent a financial bubble from inflating in the first place.

This advice seems sensible to me, if perhaps difficult to implement. After all, would it have been politically or economically possible for Greece or Ireland or Spain to have restricted inflows of international financial capital back in 2007 or 2008? In response to such questions it\’s worth repeating an honest confession from up near the front of the March 2012 IMF report. \”[T]he theoretical and practical understanding of capital flows remains incomplete. Capital flows are a financial phenomenon, and many of the unresolved analytical and policy questions related to the financial sector carry over to capital flows.\”

China: Does 8% Growth Cause Less Satisfaction?

China\’s economy grew at extraordinary annual rates of 8% or more, on a per capita basis, in the two decades from 1990 to 2009. Using the old \"rule of 72\" that is sometimes taught to approximate the effect of growth rates, take 72, divide by the annual percentage growth rate, and it will tell you (roughly) how many years it takes for the original quantity to double. So at an 8% growth rate, China\’s per capita GDP doubles in 9 years, and quadruples in 18 years. In the two decades from 1990-2009, average per person GDP in China has quadrupled, at least. The number of Chinese living below the international poverty line of $1.25 in consumption per day fell by 662 million from the early 1980s up to 2008, according to World Bank estimates.
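The rule-of-72 arithmetic can be checked directly against exact compounding, using the 8% rate from the text:

```python
# Rule of 72: divide 72 by the annual percentage growth rate to approximate
# the doubling time. Checked here against exact compounding at 8% per year.
growth = 0.08
doubling_years = 72 / (growth * 100)   # approximate years to double
exact_factor = (1 + growth) ** 18      # exact growth factor after 18 years

print(doubling_years)          # 9.0 -- doubles in about 9 years
print(round(exact_factor, 2))  # 4.0 -- so roughly quadruples in 18 years
```

The approximation is close: exact compounding at 8% gives a factor of about 3.996 after 18 years, essentially the quadrupling the rule predicts.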

But survey researchers ask people in China (and all over the world): \"All things considered, how satisfied are you with your life as a whole these days? Please use this card to help with your answer:
       1 “dissatisfied” 2 3 4 5 6 7 8 9 10 “satisfied”.\”
These researchers find that people in China are not, on average, more satisfied in 2009 than in 1990. How can this be?

This finding is an example of the \"Easterlin paradox.\" Back in 1974, Richard Easterlin wrote a paper called \"Does Economic Growth Improve the Human Lot?\" which appeared in a conference volume (Nations and Households in Economic Growth: Essays in Honor of Moses Abramovitz, edited by Paul A. David and Melvin W. Reder). The paper is available here. Easterlin found that in a given society, those with more income tended to report higher happiness or satisfaction than those with less income. However, he also found that the average level of happiness or satisfaction on a 10-point scale didn\’t seem to rise over time as an economy grew: for example, in the U.S. economy between 1946 and 1970. He argued: \"The increase in output itself makes for an escalation in human aspirations, and thus negates the expected positive impact on welfare.\"

But can this effect hold true even when the standard of living is rising as dramatically as in China? Easterlin, still going strong at USC, looks at the data with co-authors Robson Morgan, Malgorzata Switek, and Fei Wang in \"China\’s life satisfaction, 1990-2010,\" just published in the Proceedings of the National Academy of Sciences. Here is a (slightly messy) graph showing survey results from six different surveys of satisfaction or happiness in China: the World Values Survey, a couple of Gallup surveys, and surveys by Pew, Asiabarometer, and Horizon. The surveys use different scales: 1-10, 0-10, 1-4, 1-5, so the vertical axes of the graph are a mess. But remember, this is a time frame when per capita GDP more than quadrupled! It\’s hard to look at this data and see a huge upward movement.

Easterlin and co-authors summarize the patterns this way: \”According to the surveys that we analyzed, life satisfaction in the Chinese population declined from 1990 to around 2000–2005 and then turned upward, forming a U-shaped pattern for the period as a whole (Fig. 1). Although a precise comparison over the full study period is not possible, there appears to be no increase and perhaps some overall decline in life satisfaction. A downward tilt along with the U-shape is evident in the WVS, the series with the longest time span.\”

Indeed, Easterlin and co-authors point out that the happiness trend may be biased upward, because over this time there was a large rise in the \"floating population\" (persons living in places other than where they are officially registered) in urban areas. This group tends to have lower life satisfaction. \"Between 1990 and 2010, the floating population rose substantially, from perhaps 7% to 33% of the total urban population … If the floating population is not as well covered in the life satisfaction surveys as their urban-born counterparts, then this negative impact is understated, and thus the full period trend is biased upward.\"

Why has satisfaction not flourished in China with the rise in GDP growth? Surely one reason is what some call the \"aspirational treadmill\": the more you have, the more you want. But the reason emphasized by Easterlin\’s group is that \"the high 1990 level of life satisfaction in China was consistent with the low unemployment rate and extensive social safety net prevailing at that time. Urban workers were essentially guaranteed life-time positions and associated benefits, including subsidized food, housing, health care, child care, and pensions, as well as jobs for grown children …\" However, urban unemployment in China rose sharply from about 1990 into the early 2000s, but has fallen some since the mid-2000s. In addition, \"Although incomes have increased for all income groups, China’s transition has been marked by a sharp increase in income inequality. This increasing income inequality is related to the growing urban–rural disparity in income, increased income differences in both urban and rural areas, and the significant increase of unemployment in urban areas associated with restructuring …\"

Intriguingly, the rise in income inequality in China is mirrored by greater inequality in reported life satisfaction. \”In its transition, China has shifted from one of the most egalitarian countries in terms of distribution of life satisfaction to one of the least egalitarian. Life satisfaction has declined markedly in the lowest-income and least-educated segments of the population, while rising somewhat in the upper SES [socioeconomic status] stratum.\” For example, here\’s a graph that divides the population into thirds by income level. The figure shows what share of the population gave an answer from 7-10 on the World Values Survey satisfaction data. Notice that in 1990, all three income groups are clustered together. By 2007, they have separated out, with the highest income group remaining at about the same level, and the other groups declining in reported satisfaction–despite the fact that the incomes for all groups are much higher.

Taken to an extreme, the Easterlin paradox and these results from China might seem to suggest that economic growth is a waste of time. After all, economic growth doesn\’t seem to be making people more satisfied! But Easterlin would not make this argument, and it doesn\’t quite fit the survey results.
When answering on a scale of 1-10 or 1-5, most people will make a choice thinking about the present. They don\’t answer by thinking:  \”Wow, I\’m sure glad that I wasn\’t born a Roman slave 2000 years ago, and compared to that, I\’m a 10 in satisfaction.\” Nor do they think: \”Wow, compared to people who live 100 years from now, I\’m living a short and deprived life, so I\’m a 1 in satisfaction.\”

When people in China answered a \"satisfaction\" survey in 1990, they were not all that far removed in time from a period of brutal repression, and so it\’s not shocking to me that many in the lower and middle part of the income distribution told surveyors (who after all might have a government connection) that they were really quite satisfied. It\’s not just the economy that has grown in China since 1990; it\’s also the willingness and ability of many ordinary people to express dissatisfaction or discontent. I suspect that not many people in China would view their 1990 standard of living as similar or preferable to their current standard of living.

People do seem to answer satisfaction questions with some perspective on the rest of the world. For example, in a Spring 2008 article in my own Journal of Economic Perspectives, Angus Deaton presents evidence that if you look across the countries of the world in 2003, the level of satisfaction seems to rise steadily each time per capita GDP doubles.
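Deaton's cross-country pattern amounts to saying that satisfaction is roughly linear in the logarithm of per capita GDP: each doubling of income adds about the same step to the satisfaction score. A toy sketch of that functional form; the intercept and per-doubling step here are invented for illustration, not estimates from Deaton's paper:

```python
import math

# Hypothetical illustration of the cross-country pattern: average life
# satisfaction rises by a roughly constant step each time per capita GDP
# doubles, i.e. satisfaction is approximately linear in log2(GDP).
# The intercept and step below are made-up numbers, for illustration only.
def satisfaction(gdp_per_capita, intercept=1.0, step_per_doubling=0.8):
    return intercept + step_per_doubling * math.log2(gdp_per_capita / 1000)

for gdp in (2000, 4000, 8000, 16000):
    print(gdp, round(satisfaction(gdp), 2))
# Each doubling of GDP adds the same 0.8-point step: 1.8, 2.6, 3.4, 4.2
```

Note that this is a cross-sectional relationship across countries at a point in time, which is exactly why it does not contradict the Easterlin finding about trends within one country over time.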

It would be unwise to use survey data at different points in time, measured on scales that only offer the same limited range of choices, to argue that people do not receive greater satisfaction or happiness from economic growth. If I was rating a 1970 standard of living in 1970, I might give it a similar numerical satisfaction score to the one I would give a 2012 standard of living in 2012–but that doesn\’t mean I would be equally happy in 2012 with a 1970 standard of living!

However, the results of the satisfaction surveys do highlight that when people are asked about their satisfaction, they take many factors into account along with average income levels: health, education, personal and political freedom, economic security, risk of unemployment, inequality, and others. When China allowed a greater degree of economic and political freedom, it unleashed an extraordinary rate of economic growth, but it also created a public space for many other potential reasons for dissatisfaction. A record of past economic growth, even when exceptionally rapid, doesn\’t trump present concerns in people\’s minds–nor should it.

McWages Around the World

It\’s hard to compare wages in different countries, because the details of the job differ. A typical job in a manufacturing facility, for example, is a rather different experience in China, Germany, Michigan, or Brazil. But for about a decade, Orley Ashenfelter has been looking at one set of jobs that are extremely similar across countries–jobs at McDonald\’s restaurants. He discussed this research and a broader agenda of \”Comparing Real Wage Rates\” across countries in his Presidential Address last January to the American Economic Association meetings in Chicago. The talk has now been published in the April 2012 issue of the American Economic Review, which will be available to many academics through their library subscriptions. But the talk is also freely available to the public here as Working Paper #570 from Princeton\’s Industrial Relations Section.

How do we know that food preparation jobs at McDonald\’s are similar? Here\’s Ashenfelter:  

\”There is a reason that McDonald’s products are similar.  These restaurants operate with a standardized protocol for employee work. Food ingredients are delivered to the restaurants and stored in coolers and freezers. The ingredients and food preparation system are specifically designed to differ very little from place to place. Although the skills necessary to handle contracts with suppliers or to manage and select employees may differ among restaurants, the basic food preparation work in each restaurant is highly standardized. Operations are monitored using the 600-page Operations and Training Manual, which covers every aspect of food preparation and includes precise time tables as well as color photographs. … As a result of the standardization of both the product and the workers’ tasks, international comparisons of wages of McDonald’s crew members are free of interpretation problems stemming from differences in skill content or compensating wage differentials.\”

Ashenfelter has built up McWages data from about 60 countries. Here is a table of comparisons. The first column shows the hourly wage of a crew member at McDonald\’s, expressed in U.S. dollars (using the then-current exchange rate). The second column is the wage relative to the U.S. wage level, where the U.S. wage is 1.00. The third column is the price of a Big Mac in that country, again converted to U.S. dollars. And the fourth column is the McWage divided by the price of a Big Mac–as a rough-and-ready way of measuring the buying power of the wage.
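The two derived columns are simple ratios, which a short sketch makes concrete. The U.S. figures below (a $7.25 McWage and a $3.50 Big Mac) are assumed round numbers for illustration, not values from Ashenfelter's table:

```python
# The two derived columns: the wage relative to the U.S. (U.S. = 1.00), and
# "Big Macs per hour" (BMPH), a rough-and-ready real-wage measure.
def wage_ratio(mcwage, us_mcwage):
    """McWage relative to the U.S. McWage (U.S. = 1.00)."""
    return mcwage / us_mcwage

def bmph(mcwage, big_mac_price):
    """Big Macs an hour of work will buy: a crude purchasing-power measure."""
    return mcwage / big_mac_price

us_wage, us_price = 7.25, 3.50   # assumed U.S. McWage and Big Mac price
print(round(bmph(us_wage, us_price), 2))  # 2.07 -- about 2 Big Macs per hour
```

Dividing the wage by the local Big Mac price sidesteps exchange rates entirely, which is the appeal of the BMPH measure: the Big Mac is the same product everywhere, so it works as a one-good price index.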

Ashenfelter sums up this data, and I will put the last line in boldface type: \”There are three obvious, dramatic conclusions that it is easy to draw from the comparison of wage rates in Table 3.  First, the developed countries, including the US, Canada, Japan, and Western Europe have quite similar wage rates, whether measured in dollars or in BMPH.   In these countries a worker earned between 2 and 3 Big Macs per hour of work, and with the exception of Western Europe with its highly regulated wage structure, earned around $7 an hour.  A second conclusion is that the vast majority of workers, including those in India, China, Latin America, and the Middle East earned about 10% as much as the workers in developed countries, although the BMPH comparison increases this ratio to about 15%, as would any purchasing-power-price adjustment.   Finally, workers in Russia, Eastern Europe, and South Africa face wage rates about 25 to 35% of those in the developed countries, although again the BMPH comparison increases this ratio somewhat.  In sum, the data in Table 3 provide transparent and credible evidence that workers doing the same tasks and producing the same output using identical technologies are paid vastly different wage rates.\”

In passing, it\’s interesting to note that McWage jobs pay so much more in western Europe than in the U.S., Canada and Japan. But let\’s pursue the highlighted theme: How can the same job with the same output and the same technology pay more in one country than in another? One part of the answer, of course, is that you can\’t hire someone in India or South Africa to make you a burger and fries for lunch. But at a deeper level, the higher McWages in high-income countries are not about the skill or human capital in those countries, but instead reflect that the entire economy is operating at a higher productivity level.
 

Here is an illustrative figure. The horizontal axis shows the \”McWage ratio\”: that is, the U.S. McWage is equal to 1.00, and the McWages in all other countries are expressed in proportion. The vertical axis is \”Hourly Output Ratio.\” This is measuring output per hour worked in the economy, again with the U.S. level set equal to 1.00, and the output per hour worked in all other countries expressed in proportion. The straight line at a 45-degree angle plots the points in which a country with, say, a McWage at 20% of the U.S. level also has output per hour worked at 20% of the U.S. level, a country with a McWage at 50% of the U.S. level also has output per hour worked at 50% of the U.S. level, and so on. 
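The construction of the two ratios can be sketched in a few lines. The U.S. benchmark numbers and the sample country below are hypothetical, chosen only to show a point that lands on the 45-degree line:

```python
# Construction of the two ratios plotted in the figure, each with the U.S.
# level normalized to 1.00. The benchmark values are invented for illustration.
US_MCWAGE = 7.25           # assumed U.S. McWage, dollars per hour
US_OUTPUT_PER_HOUR = 60.0  # assumed U.S. output per hour worked, dollars

def ratios(mcwage, output_per_hour):
    """Return (McWage ratio, hourly output ratio), U.S. = 1.00 on both axes."""
    return mcwage / US_MCWAGE, output_per_hour / US_OUTPUT_PER_HOUR

w, p = ratios(1.45, 12.0)  # a hypothetical country
print(round(w, 2), round(p, 2))  # 0.2 0.2 -- this country sits on the 45-degree line
```

A country plots above the 45-degree line when its McWage ratio exceeds its productivity ratio, which is where the minimum-wage countries mentioned below show up.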

The key lesson of the figure is that the differences in McWages across countries line up with the overall productivity differences across countries. The main exceptions, in the upper right-hand part of the diagram, are countries where the McWage is above U.S. levels but output-per-hour for the economy as a whole is below U.S. levels: New Zealand, Japan, Italy, Germany. These are countries with minimum wage laws that push up the McWage. 

Ashenfelter emphasizes in his remarks how real wages can be used to assess and compare the living standards of workers. I would add that these measures show that the most important factor determining wages for most of us is not our personal skills and human capital, or our effort and initiative, but whether we are using those skills and human capital in the context of a high-productivity or a low-productivity economy.

Ignorance as Asset and Strategic Outcome

The February 2012 issue of Economy and Society is a special issue focused on a theme of \”Strategic unknowns: towards a sociology of ignorance.\” The opening essay with this title, by Linsey McGoey, is freely available here. Many academics will have access to the rest of the issue through their library subscriptions.

The central theme of the issue is that ambiguity and ignorance are not just the absence of knowledge, waiting to be illuminated by facts and disclosure. Instead, ambiguity and ignorance are in certain situations the preferred strategic outcome. McGoey writes (citations omitted): \”Ignorance is knowledge: that is the starting premise and impetus of the following collection of papers. Together, they contribute to a small but growing literature which explores how different forms of strategic ignorance and social unknowing help both to maintain and to disrupt social and political orders, allowing both governors and the governed to deny awareness of things it is not in their interest to acknowledge …\”

Many of the examples are sociological in nature, but others are based in economic and policy situations. For example, consider a number of situations that have to do with a policy response to risky situations: the risk that smoking causes cancer, the risk that growing carbon emissions will lead to climate change, the risk of future terrorist actions (and whether invading certain countries will increase or reduce those risks), and the risk of fluctuations in financial markets. McGoey writes:

\”Within the game of predicting risk, one often wins regardless of whether risks materialize or not. If a predicted threat fails to emerge, the identification of the threat is credited for deterring it. If a predicted threat does emerge, authorities are commended for their foresight. If an unpredicted threat appears, authorities have a right to call for more resources to combat their own earlier ignorance. ‘The beauty of a futuristic vision, of course, is that it does not have to be true’, writes Kaushik Sunder Rajan (2006, p. 121) in a study of the way expectations surrounding new biotechnologies help to create funding opportunities and foster faith in the technology regardless of whether expectations prove true or not. In fact, expectations are often particularly fruitful when they fail to materialize, for more hope and hype are needed to remedy thwarted expectations. Attention to the resilience of risks–the way that claims of risk often feed on their own inaccuracy–helps to highlight the value of conditionality for those in political authority.\”

One of the essays in the volume, by William Davies and Linsey McGoey, applies this framework to thinking about the recent financial crisis. They point out that many financial professionals begin from the starting point that risk and uncertainty are huge problems, and thus one needs their high-priced help to address these issues. In this way, claims of ambiguity and ignorance are an asset for the finance industry. If investments go well, then the financial professionals claim credit for steering successfully through these oceans of uncertainty. But when investments and decisions go badly, as in the Great Recession, they claim absolution for their decisions by reiterating just how ambiguous and unclear the financial markets are, and how no one could have really known what was going to happen. And somehow, this just proves that their expertise is more needed than ever. They write: \”We examine the usefulness of the failure or refusal to act on warning signs, regardless of the motivations why. We look at the double value of ignorance: the ways that social silence surrounding unsettling facts enabled profitable activities to endure despite unease about their implications and, second, the way earlier silences are then harnessed and mobilized to absolve earlier inaction.\”

In another essay, Jacqueline Best applies these ideas in the context of the World Bank\’s \”good governance agenda\” and the IMF\’s \”conditionality policy.\” She writes: \”Both policies have been ambiguously defined throughout their history, enabling them to be interpreted and applied in different ways. This ambiguity has facilitated the gradual expansion of the scope of the policies. … Actors at both the IMF and the World Bank were not only aware of the central role of ambiguity in their policies, but were also ambivalent about it.  … Finally, although staff and directors at both institutions may have been ambivalent about the role of ambiguity in these policies, they ultimately ensured that ambiguities persisted and even proliferated.\”  Best also notes that ambiguity is hard to control, and can lead to unintended consequences. 

In yet another essay, Steve Rayner writes about \”Uncomfortable knowledge: the social construction of ignorance in science and environmental policy discourses.\” He writes: \”My interest is therefore in how information is kept out rather than kept in and my approach is to treat ignorance as a necessary social achievement rather than a simple background failure to acquire, store, and retrieve knowledge.\” Rayner continues: \”An example of clumsy or incompletely theorized arrangements is the implicit consensus on US nuclear energy policy that emerged in the 1980s and persisted for the best part of three decades. Despite the complete absence of any Act of Congress or Presidential Order, it was implicitly accepted by government, industry, and environmental NGOs that the US would continue to support nuclear R&D while operating an informal moratorium on the addition of new nuclear generating capacity. All of the parties agreed to this, but for various reasons, all had a stake in not acknowledging the existence of a settlement.\”

One might add that many environmental laws and other regulatory policies are chock-full of ambiguous language, which gives regulators the ability to interpret these rules as tough-minded while also giving potential offenders the possibility of saying that they had no way of knowing the rules would be applied in this way. Rayner also offers a nicely provocative claim about tendencies to dismiss and deny in the context of warnings about climate change: \”It seems odd that climate science has been held to a `platinum standard\’ of precision and reliability that goes well beyond anything that is normally required to make significant decisions in either the public or private sectors. Governments have recently gone to war based on much lower-quality intelligence than that which science offers us about climate change. Similarly, firms embark on product launches and mergers on the basis of much lower-quality information.\”

Academic research of course often uses a feigned ignorance to generate a greater persuasive effect. The title of a research paper is often written in the form of a question, and the theory and data are often presented as if the author was a Solomonic figure encountering this material for the first time, guided only by a disinterested pursuit of Truth (with a capital T). The implications for the reputation of past work, or its political implications, are shunted off to the side. Research would have less persuasive effect if it started off by saying, \”I\’ve been hammering on this same conclusion for 25 years now, and I find pretty much exactly the same result every time I look at any data set from any time or place–and by the way, this conclusion also supports the political outcomes I prefer.\”

One of many implications of thinking about ignorance and ambiguity as assets and as strategic behavior is that it highlights that many economic actors and policy-makers have strong incentives to promote both their own ignorance, and more broadly, the idea that ambiguity makes true knowledge impossible. Ignorance can be a power grab, and the basis for a job, and a get-out-of-jail-free card.

Why Does the U.S. Spend More on Health Care than Other Countries?

Everyone knows that the U.S. spends far more on health care than other countries, but do you know how much more? In 2009, the U.S. spent 17.4% of GDP on health care (using OECD data). The closest contenders are Netherlands (12% of GDP), France (11.8%), Germany (11.6%), Denmark (11.5%), and Canada (11.4%). The U.S. has higher per capita GDP than these countries, so the gap in absolute spending is even higher. In 2009, the U.S. spent $7,960 per person on health care, and the closest contenders were Switzerland ($5,144 per person) and Netherlands ($4,914).

When I hear people argue that the U.S. should follow the path of the UK health care system, I sometimes find myself thinking: \”You mean that U.S. health care spending per person should be slashed by 56%, from $7,960 per person to $3,487 per person? Really?\”
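The 56% figure is just the percentage gap between the two per-person spending levels quoted in the text:

```python
# The percentage cut implied by moving U.S. per-person health spending to
# the U.K. level, using the 2009 figures quoted in the text.
us_spending, uk_spending = 7960, 3487   # dollars per person
cut = (us_spending - uk_spending) / us_spending
print(f"{cut:.0%}")  # 56%
```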

What accounts for these differences in health care spending across countries? David Squires assembles some of the evidence in \”Explaining High Health Care Spending in the United States: An International Comparison of Supply, Utilization, Prices and Quality,\” a May 2012 \”issue brief\” written for the Commonwealth Fund. I ran across it here at Larry Willmore\’s Thought du Jour blog.   I\’ll also contrast and compare it with a paper by David M. Cutler and Dan P. Ly, \”The (Paper)Work of Medicine: Understanding International Medical Costs,\” which appeared in the Spring 2011 issue of my own Journal of Economic Perspectives. For readability, footnotes and references to exhibits are omitted from the quotations below.

 Higher U.S. health care spending is not because Americans on average are notably less healthy.
As Squires sums up: \”U.S. has smaller elderly population and fewer smokers, but higher obesity rates. …  Higher rates of obesity undoubtedly inflate health spending; one study estimates the medical costs attributable to obesity in the U.S. reached almost 10 percent of all medical spending in 2008. However, the younger population and lower rates of smoking likely have an opposite effect, reducing U.S. health care spending relative to most other countries.\”

Higher U.S. health care spending is not because the U.S. has more doctors or hospital beds.
“There were 2.4 physicians per 1,000 population in the U.S. in 2009, fewer than in all other study countries except Japan. Likewise, patients had fewer doctor consultations in the U.S. (3.9 per capita) than in any other country except Sweden. Hospital supply and use showed similar trends, with the U.S. having fewer hospital beds (2.7 per 1,000 population), shorter lengths of stay for acute care (5.4 days), and fewer discharges (131 per 1,000 population) than the OECD median …”

Prices for brand-name drugs are much higher in the U.S., but generics are cheaper.
Squires writes: “[P]rices for the 30 most-commonly prescribed drugs are one-third higher than in Canada and Germany, and more than double the prices in Australia, France, Netherlands, New Zealand, and the U.K. Notably, prices for generic drugs are lower in the U.S. than in these other countries, whereas prices for brand-name drugs are much higher.”

Cutler and Ly confirm this general pattern, but also put the potential cost savings in perspective: “However, because pharmaceuticals are only about 10 percent of U.S. healthcare spending, the overall amount that could be saved by moving to U.S. government monopsony purchasing of drugs is relatively small—perhaps 20 to 30 percent of pharmaceutical spending, or 2 to 3 percent of total medical costs. These cost savings also would have to be weighed against the possibility of reduced incentives for investment and innovation in the pharmaceutical industry. The dollar amount of excess pharmaceutical payments in the United States is approximately the total amount of pharmaceutical company research and development (R&D).”
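Cutler and Ly's 2-to-3-percent estimate is just the product of the two shares they cite; a small sketch of that arithmetic, using the 10 percent pharmaceutical share and the 20-to-30 percent savings range quoted above:

```python
pharma_share = 0.10            # pharmaceuticals as a share of total U.S. health spending
savings_range = (0.20, 0.30)   # potential savings on drug spending from monopsony pricing

# Savings expressed as a share of total medical costs
low, high = (s * pharma_share for s in savings_range)
print(f"{low:.0%} to {high:.0%} of total spending")  # → 2% to 3% of total spending
```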

U.S. doctors are paid more, but they also live in an economy with a more unequal distribution of wages.
Squires writes: “U.S. primary care physicians generally receive higher fees for office visits and orthopedic physicians receive higher fees for hip replacements than in Australia, Canada, France, Germany, and the U.K. … U.S. primary care doctors ($186,582) and particularly orthopedic doctors ($442,450) earned greater income than in the other five countries …”

Cutler and Ly confirm: “The average U.S. specialist physician earns $230,000 annually—78 percent above the average in other countries … . Primary care physicians earn less (they earn $161,000 on average), but the same percentage more than their peers in other countries. … If we reduced all physician incomes in the United States to match the international ratio of physicians’ incomes to per capita GDP, U.S. healthcare spending would be lower by roughly 2 percent. However, these seemingly high salaries for U.S. physicians appear less high in the context of the broader income distribution.” Cutler and Ly go on to point out that high-compensation workers in the U.S. economy earn more than their international counterparts in just about every profession–after all, that’s part of what it means to say that the U.S. has a less equal distribution of income.
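The 78 percent premium implies comparison-country averages of roughly $129,000 for specialists and $90,000 for primary care doctors. This sketch derives those implied figures from the numbers quoted above; the implied averages are my back-of-the-envelope inference, not figures Cutler and Ly report directly:

```python
us_specialist = 230_000   # average U.S. specialist income (Cutler and Ly)
us_primary = 161_000      # average U.S. primary care income (Cutler and Ly)
premium = 0.78            # U.S. physicians earn 78% more than the other-country average

# Implied average incomes in the comparison countries
foreign_specialist = us_specialist / (1 + premium)
foreign_primary = us_primary / (1 + premium)
print(f"${foreign_specialist:,.0f}, ${foreign_primary:,.0f}")  # → $129,213, $90,449
```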

Some medical device technologies, like scanning, are more widely used in the U.S.; some, like hip replacements, are not.
“In 2009, the U.S., along with Germany, performed the most knee replacements (213 per 100,000 population) among the study countries, and 75 percent more knee replacements than the OECD median (122 per 100,000 population). However, the U.S. performed barely more hip replacements than the OECD median, and significantly less than several of the other study countries …”

“Relative to the other study countries where data were available, there were an above-average number of magnetic resonance imaging (MRI) machines (25.9 per million population), computed tomography (CT) scanners (34.3 per million), positron emission tomography (PET) scanners (3.1 per million), and mammographs (40.2 per million) in the U.S. in 2009. Utilization of imaging was also highest in the U.S., with 91.2 MRI exams and 227.9 CT exams per 1,000 population. MRI and CT devices were most prevalent in Japan, though no utilization data were available for that country. … [T]he U.S. commercial average diagnostic imaging fees ($1,080 for an MRI and $510 for a CT exam) are far higher than what is charged in almost all of the other countries …”

The U.S. does a relatively poor job of managing chronic disease.
Squires writes: “[Consider] rates of potentially preventable mortality due to asthma (for those between ages 5 and 39) and lower-extremity amputations due to diabetes per 100,000 population. On both measures, the U.S. had among the highest rates, suggesting a failure to effectively manage these chronic conditions that make up an increasing share of the disease burden.”

Many chronic diseases share the general property that if they are well-managed every single day, with a combination of drugs, lifestyle, and certain kinds of monitoring of physical conditions, it is possible to reduce the need for enormously costly episodes of hospitalization. As the Centers for Disease Control puts it: “Chronic diseases—such as heart disease, cancer, and diabetes—are the leading causes of death and disability in the United States. Chronic diseases account for 70% of all deaths in the U.S., which is 1.7 million each year. These diseases also cause major limitations in daily living for almost 1 out of 10 Americans ….”

Prices for hospital stays are substantially higher in the U.S.
Squires points out: “[H]ospital stays in the U.S. were far more expensive than in the other study countries, exceeding $18,000 per discharge compared with less than $10,000 in Sweden, Australia, New Zealand, France, and Germany.” And remember, these higher costs per hospital stay happen even though the stays themselves are on average shorter in the U.S.

The tougher question is to what extent these higher costs per hospital stay reflect a larger quantity of concentrated and effective high-tech care being provided, and to what extent it’s just a matter of higher prices. The evidence here is mixed. It does appear that for some conditions, Americans receive more hospital care. Cutler and Ly write: “Americans also receive more-intensive care than do Canadians. While the population-adjusted hospital admission rates are about the same in the two countries, additional procedures are provided to those with the same diagnosis in the United States. For example, people with a heart attack in the United States are twice as likely to receive bypass surgery or angioplasty than are similar people in Canada.” When it comes to cancer survival rates, Squires points out: “The U.S. had the highest survival rates among the study countries for breast cancer (89%) and, along with Norway, for colorectal cancer (65%).”

On the other side, the more aggressive use of heart surgery in the U.S. as compared to Canada doesn’t seem to mean better health outcomes; instead, it reflects the existence of more heart-surgery facilities. Cutler and Ly: “On one side, the greater use of intensive therapies after a heart attack in the United States compared to Canada is not associated with improved mortality, though morbidity is more difficult to determine. Similarly, a recent study concluded that there was no systematic difference in outcomes in favor of the United States over Canada; if anything, Canadians had better outcomes in most circumstances … [T]he province of Ontario has 11 open-heart surgery facilities, while the state of Pennsylvania, with roughly the same population as Ontario, has more than five times the number of heart surgery facilities. California is three times larger in population but has 10 times the number of heart surgery facilities. Given this difference in the number of facilities, it is simply impossible for physicians in Ontario to perform as many open heart surgery operations as those in Pennsylvania or California.”

Also, not all cancer survival rates are better in the U.S. Squires writes: “However, at 64 percent, the survival rate for cervical cancer in the U.S. was worse than the OECD median (66%), and well below the 78 percent survival rate in Norway—indicating significant room for improvement.”

Administrative costs of health care are much higher in the U.S.
Squires doesn’t mention this point, but it is a main emphasis for Cutler and Ly. They write:

“[T]he U.S. healthcare system is in great need of administrative simplification. There are few other areas of the U.S. economy where waste is so apparent and the possibility of savings is so tangible. … Perhaps the most troubling difference between the U.S. and Canadian healthcare systems is the differential amount spent on administration. For every office-based physician in the United States, there are 2.2 administrative workers. That exceeds the number of nurses, clinical assistants, and technical staff put together. One large physician group in the United States estimates that it spends 12 percent of revenue collected just collecting revenue. Canada, by contrast, has only half as many administrative workers per office-based physician. The situation is no better in hospitals. In the United States, there are 1.5 administrative personnel per hospital bed, compared to 1.1 in Canada. Duke University Hospital, for example, has 900 hospital beds and 1,300 billing clerks. On top of this are the administrative workers in health insurance. Health insurance administration is 12 percent of premiums in the United States and less than half that in Canada.

“International comparisons of medical care occupations are difficult, but they suggest that the United States has more administrative personnel than other countries do. … [T]he United States has 25 percent more healthcare administrators than the United Kingdom, 165 percent more than the Netherlands, and 215 percent more than Germany. The number of clerks of all forms (including data entry clerks) is much higher in the United States as well.”

“What are all these administrative personnel doing? … One part is credentialing—receiving permission to practice medicine in a particular hospital or for a particular health plan. The average physician submits 18 credentialing applications annually—each insurer, hospital, ambulatory surgery facility, and the like, requires a different one—consuming 70 minutes of staff time and 11 minutes of physician time per application. Verifying eligibility for services is also costly. Insurance information must be verified for 20 to 30 patients daily, including three or four patients for whom verification must be sought orally. Because people change insurance plans frequently and the cost-sharing they are charged varies with plan and with past utilization (for example, how much of the deductible have they spent?), the determination of what to charge a patient is especially difficult. … Finally, significant time is spent on billing and payment collection. On average, about three claims are denied per physician per week and need to be rebilled. … Three-quarters of denied bills are ultimately paid, but the administrative cost of securing the payment is very high. Provider groups in the United States employ 770 full-time equivalent workers per $1 billion collected, compared to an average in other U.S. industries of about 100. By all indications, the administrative burden is rising over time as insurance policies have become more complex, while the technology of administration has not kept pace.”

Conclusion

The question of why the U.S. spends more than 50% more per person on health care than the next highest countries (Switzerland and Netherlands), and more than double per person what many other countries spend, may never have a simple answer. Still, the main ingredients of an answer are becoming clearer. The U.S. spends vastly more on hospitalization and acute care, with a substantial share of that going to high-tech procedures like surgery and imaging. The U.S. does a poor job of managing chronic conditions, which then lead to episodes of costly hospitalization. The U.S. also seems to spend vastly more on administration and paperwork, with much of that related to credentialing, documenting, and billing–which is again a particularly important issue in hospitals. Any honest effort to come to grips with high and rising U.S. health care costs will have to tackle these factors head-on.

Occupational Licensing and Low-Income Jobs

Pretty much everything I know about the economics of occupational licensing I learned from Morris Kleiner, a colleague from the days when I was based at the Humphrey School at the University of Minnesota. Morrie lays out many of the issues here in a Fall 2000 article in my own Journal of Economic Perspectives, as well as in his 2006 book, Licensing Occupations: Ensuring Quality or Restricting Competition?

He points out that nearly one-third of the U.S. labor force works in jobs where some form of government license is a requirement. Some of the largest occupations that require licenses include teachers, nurses, engineers, accountants, and lawyers. Occupational licensing poses a potential tradeoff: on one side, requiring licenses offers a promise of a reliably high quality of service; on the other side, requiring licenses is a barrier to entry that tends to reduce the quantity of jobs in that occupation but increase the wage. Kleiner and others investigate this subject by looking at differences in licensing requirements for a certain occupation across states, and searching for evidence of wage and quality differences. A typical finding is that the wage differences are readily perceptible, but the quality differences are not. Licensing is distinguishable from certification: with certification, you are free to hire someone who doesn’t possess the certification if you like, but with licensing, hiring someone without the license is illegal. As an example, travel agents and mechanics are often certified, but they are typically not licensed.

Dick M. Carpenter II, Ph.D., Lisa Knepper, Angela C. Erickson and John K. Ross focus on documenting differences between states in 102 of the job categories counted by the Bureau of Labor Statistics that require a license in at least one state and that pay below-average wages. They report the results in License to Work: A National Study of Burdens from Occupational Licensing, a report from the Institute for Justice. They make the case, in an indirect way, that many of these occupational rules are more about limiting competition than about quality of service: they point out that licensing rules about fees, training, exams, minimum age, and minimum schooling vary enormously across states, with no particular evidence that reliability or safety are worse in states with lesser or no licensing requirements. The report goes into state-by-state and occupation-by-occupation detail, but here are some summary comments:

“The need to license any number of the occupations in this sample defies common sense. A short list would include interior designers, shampooers, florists, upholsterers, home entertainment installers, funeral attendants, auctioneers and interpreters for the deaf. Most of these occupations are licensed in just a handful of states; interpreters are licensed in only 16 states, while auctioneers are licensed in 33. If, as licensure proponents often claim, a license is required to protect the public health and safety, one would expect more consistency. For example, only five states require licenses for shampooers, but it is highly unlikely that conditions in those five states are any different …”

“Quite literally, EMTs [emergency medical technicians] hold lives in their hands, yet 66 other occupations have greater average licensure burdens than EMTs. This includes interior designers, barbers and cosmetologists, manicurists and a host of contractor designations. By way of perspective, the average cosmetologist spends 372 days in training; the average EMT a mere 33.”

“Licensure irrationalities are doubly evident in the inconsistencies by burden across states. Looking again at manicurists, while 10 states require four months or more of training, Alaska demands only about three days and Iowa about nine days. It seems unlikely that aspiring manicurists in Alabama (163 days) and Oregon (140 days) truly need so much more time in training. But manicurists are not alone. The education and experience requirements for animal trainers range from zero to almost 1,100 days, or three years. And for vegetation pesticide handlers, training obligations range from zero to 1,460 days, or four years, with fees up to $350. This high degree of variation is prevalent throughout the occupations. Thirty-nine of them have differences of more than 1,000 days between the minimum and maximum number of days required for education and experience. And another 23 occupations have differences of more than 700 days.”

“Finally, irrationalities are particularly notable when few states license an occupation but do so onerously. One clear example is interior design, the most difficult of the 102 occupations to enter, yet licensed in only three states and D.C. Another is social service assistants, the fourth most difficult occupation to enter. It requires nearly three-and-a-half years of training but is only licensed in six states and D.C. Dietetic technicians must spend 800 days in education and training, making for the eighth most burdensome requirements, but they are licensed in only three states. Home entertainment installers must have about eight months of training on average, but only in three states. The seven states that license tree trimmers require, on average, more than a year of training.”

“The 102 occupational licenses studied require of aspiring workers, on average, $209 in fees, one exam and about nine months of education and training. Thirty-five occupations require more than a year of education and training, on average, and another 32 require three to nine months. At least one exam is required for 79 of the occupations. … Particularly noteworthy is the percentage of low- and middle-income workers with less than a high school diploma—15.7 percent. As documented below, a number of the 102 occupations studied require the completion of at least 12th grade, a requirement that effectively bans a substantial number of people from those occupations.”

“[S]even of the 102 occupations studied are licensed in all 50 states and the District of Columbia: pest control applicator, vegetation pesticide handler, cosmetologist, EMT, truck driver, school bus driver and city bus driver. Another eight occupations are licensed in 40 to 50 states. Thus, the vast majority of these occupations are licensed in fewer than 40 states, and five are licensed in only one state each: florist, forest worker, fire sprinkler system tester, conveyor operator and non-contractor pipelayer. On average, the occupations on this list are licensed in about 22 states.”

My own guess is that the politics of passing state-level occupational licensing laws is driven by three factors: 1) lobbying by those who already work in the occupation to limit competition; 2) passing laws in response to wildly unrepresentative anecdotes of terrible or dangerous service; and 3) the tendency when setting standards to feel like more is better. But in a U.S. economy which is hurting for job creation, especially jobs for low-income workers, states should be seriously rethinking many of their occupational licensing rules. Many would be better replaced with lower standards, with certification rather than licenses, or even with no licenses at all.