Apprenticeships for the U.S. Economy

The U.S. economy has a major problem with unemployment, and in particular with unemployment of low-skilled workers. Apprenticeships are one major way that other countries, like Germany, have addressed this issue. Diane Auer Jones, a former assistant secretary for postsecondary education at the Department of Education and now at the Career Education Corporation in Washington, DC, writes about this in the Summer 2011 issue of Issues in Science and Technology, in an article called “Apprenticeships: Back to the Future.”

One useful takeaway from the article is the contrast between the central role that apprenticeship programs play in the education of the vast majority of students in places like Germany and Switzerland and their comparatively minor role in the U.S. labor market.

What apprenticeship programs are like in Germany and Switzerland

“In Germany and Switzerland, for example, apprenticeships are a critical part of the secondary education system, and most students complete an apprenticeship even if they plan to pursue postsecondary education in the future. It is not uncommon for German or Swiss postsecondary institutions to require students to complete an apprenticeship before enrolling in a tertiary education program. In this way, apprenticeships are an important part of the education continuum, including for engineers, nurses, teachers, finance workers, and myriad other professionals.”

“An apprenticeship is a formal, on-the-job training program through which a novice learns a marketable craft, trade, or vocation under the guidance of a master practitioner. Most apprenticeships include some degree of theoretical classroom instruction in addition to hands-on practical experience. Classroom instruction can take place at the work site, on a college campus, or through online instruction in partnership with public- or private-sector colleges. Some apprenticeships are offered as one-year programs, though most span three to six years and require apprentices to spend at least 2,000 hours on the job. Apprentices are paid a wage for the time they spend learning in the workplace. Some apprenticeship sponsors also pay for time spent in class, whereas others do not. Some sponsors cover the costs associated with the classroom-based portion, whereas others require apprentices to pay tuition out of their wages. All of these details are part of the apprenticeship contract …”

“In Switzerland, almost 70% of students between the ages of 16 and 19 participate in dual-enrollment vocational education and training (VET) programs, which require students to go to school for one to two days per week and spend the rest of their time in paid on-the-job training programs that last three to four years. … Apprentices are subjected to regular assessments in the classroom and on the job, culminating in final exams associated with certification. In 2008, the completion rate for Swiss apprentices was 79%, and the exam pass rate among program completers was 91%. One of the main benefits of the Swiss apprenticeship system is that nearly 70% of all students participate in it, which means that students of all socioeconomic and ability levels are engaged in this form of learning. Such widespread involvement prevents the social stigmatization of apprenticeship programs, unlike in the United States where social prestige is almost exclusively preserved for college-based education and training. Moreover, because students entering dual-track VET programs are frequently high performers, they are academically indistinguishable from the students who elect university education rather than vocational training or dual education. As a result, Swiss dual-track VET students are likely to enter the workplace well prepared for work by possessing strong academic skills.”
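As a quick back-of-the-envelope combination of the Swiss figures quoted above, the 79% completion rate and the 91% exam pass rate among completers imply the share of entering apprentices who end up certified. The sketch below is only an illustration of that arithmetic, under the assumption (not stated in the report) that the pass rate applies uniformly to all completers:

```python
# Swiss figures quoted above (2008): 79% of apprentices complete the
# program, and 91% of completers pass the final certification exam.
completion_rate = 0.79
pass_rate_among_completers = 0.91

# Implied share of entering apprentices who finish with certification.
certified_share = completion_rate * pass_rate_among_completers
print(round(certified_share, 3))  # 0.719
```

So roughly 72 percent of those who enter a Swiss apprenticeship emerge with a certification, a high yield for a program covering nearly 70% of all students.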

The status of apprenticeship programs in the United States

“In the United States, however, apprenticeships generally have been considered to be labor programs for training students to work in the skilled trades or crafts. They are not viewed as education programs, so they have not become a conventional part of most secondary or postsecondary systems or programs. …”

“Apprenticeship programs do exist in the United States, but they are vastly underused, poorly coordinated, nonstandardized, and undervalued by students, parents, educators, and policymakers. The first successful federal legislative effort to promote and coordinate apprenticeships was the National Apprenticeship Act of 1937, commonly known as the Fitzgerald Act. This act treated apprentices not as students but as laborers, and it authorized the Department of Labor (DOL) to establish minimum standards to protect the health, safety, and general welfare of apprentice workers. The DOL still retains oversight responsibility through its Office of Apprenticeships, but the office receives an anemic annual appropriation of around $28 million.”

“In each state, the DOL supports a state apprenticeship agency that certifies apprenticeship sponsors, issues certificates of completion to apprentices, monitors the safety and welfare of apprentices, and ensures that women and minorities are not victims of discriminatory practices. In 2007, the latest year for which data are available, there were approximately 28,000 Registered Apprenticeship programs involving approximately 465,000 apprentices. Most of the programs were in a handful of fields and industries, including construction and building trades, building maintenance, automobile mechanics, steamfitting, machinist, tool and die, and child care.”


In America, many schools and parents and students will speak out strongly in favor of strong commitments to “community service” and volunteer projects and unpaid short-term internships. But many of these same people tend to recoil if the discussion turns to devoting similar amounts of time to a paid apprenticeship. As an American, it’s hard to imagine a Swiss-style system where 70% of students, spread across the distribution of incomes and education levels, are in apprenticeship programs. It’s hard to think about apprenticeships that would spread across a much wider range of jobs and industries than we currently see in the U.S. Such a change would require a substantial adjustment from firms, existing employees, schools, government, and students themselves. But the current hand-off from the education system to the job market isn’t going too well for a lot of Americans at a wide array of skill levels. Maybe apprenticeships could help.

Millions of Missing Women: WDR #2

The most recent World Development Report from the World Bank is centered on the theme “Gender Equality and Development.” The first chapter of the report focuses on gains that have been made, and I posted earlier today on the surprising (to me) conclusion that around the world, gender equality has been largely attained in education and health. The second chapter, however, focuses on dimensions of inequality that persist: for example, the lack of female participation in certain occupations and in political leadership, and certain areas or economic groups where females are disadvantaged in many dimensions of life. But to me, the most appalling example of gender inequality is the problem of the “missing women.” Based on evidence from biology and experience in high-income countries, we know that on average, slightly more boys are born than girls, and that women tend to have longer life expectancies than men. But when we apply those known proportions to certain countries and regions, we find that girls are missing at birth, and women are dying too frequently. Some excerpts, with footnotes and most references to tables and figures omitted, as usual:

Skewed sex ratios at birth and 3.9 million women missing under the age of 60
“First, the problem of skewed sex-ratios at birth in China and India (and in some countries in the Caucasus and the Western Balkans) remains unresolved (table 2.1). Population estimates suggest that an additional 1.4 million girls would have been born (mostly in China and India) if sex ratios at birth in these countries resembled those found worldwide. Second, compared with developed economies, the rate at which women die relative to men in low- and middle-income countries is higher in many regions of the world. Overall, missing girls at birth and excess female mortality under age 60 totaled an estimated 3.9 million women in 2008—85 percent of them were in China, India, and Sub-Saharan Africa.”
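The “missing girls at birth” estimate rests on a simple counterfactual: apply the worldwide sex ratio at birth to a country’s observed male births and compare the expected number of girls with the number actually born. The sketch below illustrates that logic with invented numbers; the ratio of 1.05 boys per girl is a commonly cited biological norm, not the World Bank’s exact input.

```python
# Illustration of the "missing girls at birth" counterfactual described
# above. All inputs are invented; 1.05 boys per girl is a commonly cited
# worldwide norm, not the report's exact parameter.
WORLDWIDE_RATIO = 1.05  # boys born per girl born

def missing_girls(male_births_m: float, female_births_m: float) -> float:
    """Girls (in millions) who would have been born alongside the observed
    male births if the worldwide sex ratio at birth had held."""
    expected_girls = male_births_m / WORLDWIDE_RATIO
    return max(0.0, expected_girls - female_births_m)

# Hypothetical country: 10.0 million boys and 8.5 million girls born,
# i.e. a heavily skewed ratio of about 1.18 boys per girl.
print(round(missing_girls(10.0, 8.5), 2))  # 1.02
```

On this accounting, a country with such a skewed ratio would be “missing” about a million girls in a single year, which conveys the scale behind the 1.4 million figure in the excerpt.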

A first main cause: Preference for Sons
“The disadvantage against unborn girls is widespread in many parts of Asia and in some countries in the Caucasus (such as Armenia and Azerbaijan), where the intersection of a preference for sons, declining fertility, and new technology increases the missing girls at birth. In China and India, sex ratios at birth point to a heavily skewed pattern in favor of boys. Where parents continue to favor sons over daughters, a gender bias in sex-selective abortions, female infanticide, and neglect is believed to account for millions of missing girls at birth. In 2008 alone, an estimated 1 million girls in China and 250,000 girls in India were missing at birth. The abuse of new technologies for sex-selective abortions—such as cheap mobile ultrasound clinics—accounted for much of this shortfall, despite laws against such practices in many nations, such as India and China. Economic prosperity will continue to increase amniocentesis and ultrasound services throughout the developing world, possibly enabling the diffusion of sex-selective abortions where son-preferences exist. … This does not imply that change is impossible: The Republic of Korea’s male-female sex ratio under age five was once the highest in Asia, but it peaked in the mid-1990s and then reversed—a link to societal shifts in normative values stemming from industrialization and urbanization.”

A second main cause: Maternal Mortality

“The female disadvantage in mortality during the reproductive ages is in part driven by the risk of death in pregnancy and childbirth and associated long-term disabilities. Although maternal mortality ratios have fallen by 34 percent since 1990, they remain high in many parts of the world: Sub-Saharan Africa had the highest ratio in 2008 at 640 maternal deaths per 100,000 live births, followed by South Asia (280), Oceania (230), and Southeast Asia (160). Bangladesh, Cambodia, India, and Indonesia have maternal mortality ratios comparable to Sweden’s around 1900, and Afghanistan’s is similar to Sweden’s in the 17th century. … Driving the high maternal mortality rates in many countries are poor obstetric health services and high fertility rates. Income growth and changes in household behavior alone appear insufficient to reduce maternal mortality; public investments are key to improving maternal health care services.”

Worldwide Gender Equality in Education and Health: WDR #1

The most recent World Development Report from the World Bank is centered on the theme “Gender Equality and Development.” The first chapter of the report focuses on gains that have been made; the second chapter focuses on dimensions of inequality that persist. In this post, I’ll focus on two patterns from the first chapter that I had not known about: on average, women around the world seem to have near-parity, and in some cases better than parity, with men in education and in health care. In a later post, I’ll focus on what seems to me the most appalling widespread gender inequality that remains in the world today.

Global gender parity in education

“In the past decade, female enrollments have grown faster than male enrollments in the Middle East and North Africa, South Asia, and Sub-Saharan Africa. Gender parity has been reached in 117 of 173 countries with data (figure 1.1). Even in regions with the largest gender gaps—South Asia and Sub-Saharan Africa (particularly West Africa)—gains have been considerable. In 2008, in Sub-Saharan Africa, there were about 91 girls for every 100 boys in primary school, up from 85 girls in 1999; in South Asia, the ratio was 95 girls for every 100 boys.

“The patterns are similar in secondary education, with one notable difference. In roughly one-third of developing countries (45), girls outnumbered boys in secondary education in 2008 (see figure 1.1). Although the female gender gap tends to be higher in poorer countries, boys were in the minority in a wide range of nations including Bangladesh, Brazil, Honduras, Lesotho, Malaysia, Mongolia, and South Africa.

“Tertiary enrollment growth is stronger for women than for men across the world. The number of male tertiary students globally more than quadrupled, from 17.7 million to 77.8 million between 1970 and 2008, but the number of female tertiary students rose more than sevenfold, from 10.8 million to 80.9 million, overtaking men. Female tertiary enrollment rates in 2008 lagged behind in only 36 developing countries of 96 with data (see figure 1.1). …

“Although boys are more likely than girls to be enrolled in primary school, girls make better progress—lower repetition and lower dropout rates—than boys in all developing regions. … Gender now explains very little of the remaining inequality in school enrollment … In a large number of countries, a decomposition of school enrollments suggests that wealth is the constraining factor for most, and in only a very limited number will a narrow focus on gender (rather than poverty) reduce inequalities further …”

Gender parity in health

“In most world regions, life expectancy for both men and women has consistently risen, with women on average living longer than men. The gap between male and female life expectancy, while still rising in some regions, stabilized in others. On average, life expectancy at birth for females in low-income countries rose from 48 years in 1960 to 69 years in 2008, and for males, from 46 years to 65. Mirroring the worldwide increase in life expectancy, every region except Sub-Saharan Africa added between 20 and 25 years of life between 1960 and today. … And since 1980, every region has had a female advantage in life expectancy.

“In most developing countries, fertility rates fell sharply in a fairly short period. These declines were much faster than earlier declines in today’s rich countries. In the United States, fertility rates fell gradually in the 1800s through 1940, increased during the baby boom, and then leveled off at just above replacement. In India, fertility was high and stable through 1960 and then sharply declined from 6 births per woman to 2.3 by 2009. What took the United States more than 100 years took India 40 (figure 1.4). Similarly, in Morocco, the fertility rate fell from 4 children per woman to 2.5 between 1992 and 2004.”

“On various other aspects of health status and health care, differences by sex are small. In many low-income countries, the proportion of children stunted, wasted, or underweight remains high, but girls are no worse off than boys. In fact, data from the Demographic and Health Surveys show that boys are at a slight disadvantage. … Similarly, there is little evidence of systematic gender discrimination in the use of health services or in health spending. Out-of-pocket spending on health in the 1990s was higher for women than for men in Brazil, the Dominican Republic, Paraguay, and Peru. Evidence from South Africa reveals the same pro-female pattern, as does that for lower income countries. … Evidence from India, Indonesia, and Kenya tells a similar story. … For preventive health services such as vaccination, poverty rather than gender appears to be the major constraining factor …”

The Vast, Automatic, Invisible Economy: W. Brian Arthur

W. Brian Arthur has written “The Second Economy” for the October 2011 issue of the McKinsey Quarterly. (Free registration is needed to access the article.) The subheading under the title is: “Digitization is creating a second economy that’s vast, automatic, and invisible—thereby bringing the biggest change since the Industrial Revolution.” As one expects from Arthur, this short essay is full of thought-provoking comments. Here are a few highlights:

What does an economic transformation look like?

“In 1850, a decade before the Civil War, the United States’ economy was small—it wasn’t much bigger than Italy’s. Forty years later, it was the largest economy in the world. What happened in-between was the railroads. They linked the east of the country to the west, and the interior to both. They gave access to the east’s industrial goods; they made possible economies of scale; they stimulated steel and manufacturing—and the economy was never the same.

Deep changes like this are not unusual. Every so often—every 60 years or so—a body of technology comes along and over several decades, quietly, almost unnoticeably, transforms the economy: it brings new social classes to the fore and creates a different world for business. Can such a transformation—deep and slow and silent—be happening today? …

But I want to argue that something deep is going on with information technology, something that goes well beyond the use of computers, social media, and commerce on the Internet. Business processes that once took place among human beings are now being executed electronically. They are taking place in an unseen domain that is strictly digital. On the surface, this shift doesn’t seem particularly consequential—it’s almost something we take for granted. But I believe it is causing a revolution no less important and dramatic than that of the railroads. It is quietly creating a second economy, a digital one.” …

“Now this second, digital economy isn’t producing anything tangible. It’s not making my bed in a hotel, or bringing me orange juice in the morning. But it is running an awful lot of the economy. It’s helping architects design buildings, it’s tracking sales and inventory, getting goods from here to there, executing trades and banking operations, controlling manufacturing equipment, making design calculations, billing clients, navigating aircraft, helping diagnose patients, and guiding laparoscopic surgeries. Such operations grow slowly and take time to form. …”

First an economic system with muscles, and now one with nerves

“Think of it this way. With the coming of the Industrial Revolution—roughly from the 1760s, when Watt’s steam engine appeared, through around 1850 and beyond—the economy developed a muscular system in the form of machine power. Now it is developing a neural system. This may sound grandiose, but actually I think the metaphor is valid. Around 1990, computers started seriously to talk to each other, and all these connections started to happen. The individual machines—servers—are like neurons, and the axons and synapses are the communication pathways and linkages that enable them to be in conversation with each other and to take appropriate action.

Is this the biggest change since the Industrial Revolution? Well, without sticking my neck out too much, I believe so. In fact, I think it may well be the biggest change ever in the economy. It is a deep qualitative change that is bringing intelligent, automatic response to the economy. There’s no upper limit to this, no place where it has to end. Now, I’m not interested in science fiction, or predicting the singularity, or talking about cyborgs. None of that interests me. What I am saying is that it would be easy to underestimate the degree to which this is going to make a difference.

I think that for the rest of this century, barring wars and pestilence, a lot of the story will be the building out of this second economy, an unseen underground economy that basically is giving us intelligent reactions to what we do above the ground. For example, if I’m driving in Los Angeles in 15 years’ time, likely it’ll be a driverless car in a flow of traffic where my car’s in a conversation with the cars around it that are in conversation with general traffic and with my car. The second economy is creating for us—slowly, quietly, and steadily—a different world. …”

Will this second economy change the nature of jobs and how economic production is distributed?

“The second economy will produce wealth no matter what we do; distributing that wealth has become the main problem. For centuries, wealth has traditionally been apportioned in the West through jobs, and jobs have always been forthcoming. When farm jobs disappeared, we still had manufacturing jobs, and when these disappeared we migrated to service jobs. With this digital transformation, this last repository of jobs is shrinking—fewer of us in the future may have white-collar business process jobs—and we face a problem.

The system will adjust of course, though I can’t yet say exactly how. Perhaps some new part of the economy will come forward and generate a whole new set of jobs. Perhaps we will have short workweeks and long vacations so there will be more jobs to go around. Perhaps we will have to subsidize job creation. Perhaps the very idea of a job and of being productive will change over the next two or three decades. The problem is by no means insoluble. The good news is that if we do solve it we may at last have the freedom to invest our energies in creative acts.”

Global Supply Chains: U.S. ITC #2

The U.S. International Trade Commission has published the 7th edition of its occasional report: “The Economic Effects of Significant U.S. Import Restraints.” The report comes in two main parts. The first part, discussed in an earlier post here, is an overview and status report on the main U.S. barriers. The second part concerns the trend toward longer global supply chains. Here are some highlights (with footnotes and citations expunged for readability throughout):

Description and illustration of a basic global supply chain

“For example, a domestic firm might provide the R&D and design of a product, and produce the initial intermediate inputs using local raw materials, as in figure 3.1. Then these intermediate inputs would be exported to a second country, where a firm would use them to produce a semifinished product. That firm would then export the semifinished good to a third country, where the final good is assembled and packaged. The third country would then export the good back to the domestic firm, which would oversee the marketing, retailing, and delivery of the product domestically and abroad. Supply chains like these require extensive organizational oversight. They also typically involve heavy reliance on telecommunications to ensure that different stages of the product are made to specification and on logistics to coordinate the movement of material across many firms and countries. As the case studies later in this chapter illustrate, global supply chains can involve complex interconnections between different tasks, as well as between domestic and foreign firms carrying out those tasks. This complexity is managed by lead firms in the chain that oversee production and make other key decisions …”

What factors are driving longer global supply chains?

“A key force behind the widespread development of global supply chains has been technological change. Over time, technological change has allowed more production processes to be fragmented—split into stages or tasks—and those stages or tasks to be carried out in new, often distant locations. For example, in the 1970s some apparel production for the U.S. market was offshored in nearby countries in the Caribbean region. But advances in telecommunications and in transport have allowed the industry to source from distant Asian suppliers and still meet the time-sensitive demands of the industry. … Two other important drivers in the development of global chains are the extensive global trade liberalization (e.g., reduction in tariff and nontariff barriers) and falling transportation costs that have occurred in the past quarter-century. Because goods and services produced by global supply chains typically cross borders multiple times, they pass through multiple customs regimes and are affected by multiple tariffs and nontariff barriers. Thus, the benefits of trade liberalization can also be multiplied for goods and services produced in global supply chains.”

Expansion of the processing trade

“Numerous countries have set up programs to encourage processing trade, which allow duty-free imports of components used in products made solely for export. Using data on these programs provides a more direct measure of global supply chain trade, since all of the trade in the components and products affected by the programs moves through a supply chain. China and Mexico are the two largest users of export processing regimes in the developing world, and together account for about 80–85 percent of such exports worldwide. Chinese trade grew by more than 800 percent between 1995 and 2008—and about half of this growth is attributable to Chinese processing trade. Mexico is also heavily reliant on processing trade; processing imports represented over 50 percent of total Mexican imports in 2006.”

A Cautionary Story for the U.S. in Global Supply Chains: Flat-Panel Display Televisions

“There are two key components for FPD [flat-panel display] televisions, the display panel and the chipset, which together account for 94 percent of the costs. The global supply chain for FPD televisions uses glass produced in Japan and Korea; displays incorporating the glass, assembled in Japan, Korea, and Taiwan; and semiconductor chip sets designed in the United States and elsewhere and produced in China, Korea, Singapore, and Taiwan. Assembly occurs principally in China, the world’s largest television producer, although most sets destined for the U.S. market are assembled in Mexico. … U.S. participation in the global supply chain is now limited to the design of chips, some product development, distribution, marketing, and customer service. The last U.S. television factory (owned by Sony) closed in 2009. All televisions sold in the United States now are imported from original equipment manufacturers (OEMs) with factories outside the United States (principally in Mexico) or from contract manufacturers with factories principally in Mexico and China. The sole remaining U.S.-headquartered television brand, Vizio, entered the U.S. market in 2002. Vizio has no factories of its own, but rather uses contract manufacturers in China, Taiwan, and Mexico to produce goods to Vizio’s specifications. Although Vizio builds products that incorporate current technology, it does no R&D; instead, it purchases patents or licenses the technology from other patent owners. Vizio has also acquired other patents, which it licenses to other television manufacturers. The principal suppliers of finished televisions to Vizio are two contract manufacturers in Taiwan, Foxconn and Amtran. These companies are also part owners of Vizio.”

A U.S. Success in Global Supply Chains: Logistics

“U.S. firms are among the leading logistics providers worldwide and hence have become essential participants in global supply chains. Logistics, the coordinated movement of goods and services, encompasses diverse activities that oversee the end-to-end transport of raw, intermediate, and final goods between suppliers, producers, and consumers. … The largest and most diversified U.S. logistics firms are FedEx and UPS, although for both firms, primary revenues are derived from the express delivery of letters and small packages. Some other large U.S.-based logistics firms include C.H. Robinson Worldwide, Expeditors International of Washington, Caterpillar Logistics Services, and Penske Logistics. All of these firms operate globally and typically have hundreds of offices worldwide. Like FedEx and UPS, these firms have added logistics and supply chain capabilities to their main lines of business which, for example, include the transportation of heavy freight (Caterpillar) and the arrangement of transportation services (C.H. Robinson and Expeditors). For all firms, supply chain management is a fast-growing business segment, with U.S. revenues for supply chain services having grown by about 20 percent during 2004–09.”

Shifting to a value-added view of trade

When products cross national borders several times, then instead of focusing on the value of what crosses the border, which is “gross trade,” it becomes important to understand “value-added” trade–that is, what value added occurred within your country. One approach here is to look at the foreign content in your production. A figure in the report shows that foreign content in U.S. manufacturing has risen from about 10% in the mid-1980s to more than 25% now. Overall foreign content in U.S. exports has risen, but more slowly, from about 8% in the late 1970s to as high as 15% before the recession hit full force in 2008.

Looking at value-added also affects how one sees bilateral trade patterns. Here’s an explanation: “China is the final assembler in a large number of global supply chains, and it uses components from many other countries to produce its exports. The figure below shows that the U.S.-China trade deficit on a value-added basis is considerably smaller (by about 40 percent in 2004) than on the commonly reported basis of official gross trade. By contrast, Japan exports parts and components to countries throughout Asia; many of these components are eventually assembled into final products and exported to the United States. Thus the U.S.-Japan trade balance on a value-added basis is larger than the comparable gross trade deficit. The U.S. value-added trade deficits with other major trading partners (Canada, Mexico, and the EU-15) differ by smaller amounts from their corresponding gross trade deficits.”

Other ways in which longer global supply chains change thinking about international trade

Here are some other changes: “Modern complex supply chains generate more trade than traditional supply networks in which only raw materials or final goods might be sent across international borders. In the earlier example of a supply chain in which the stages in figure 3.1 were carried out in three countries, the product was exported three times before being sold in final form at home or abroad. Global chains can also generate new patterns of specialization, as firms in a particular country often specialize in a particular stage or task. In electronics, for example, intermediate and semifinished goods are often produced in Japan, Hong Kong, South Korea, and Taiwan, while final assembly activities are often contracted to Chinese firms. Finally, global chains can change the nature of a nation’s trade. As countries become more vertically specialized, their imports and exports are increasingly composed of intermediate goods and services that are moving to the next stage in the chain.”
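The arithmetic behind gross versus value-added trade in a chain like this is easy to sketch. In the toy example below (all dollar figures invented), a good gains value in three countries and crosses three borders, so gross trade counts the good’s cumulative value at each crossing while value-added trade counts each country’s contribution exactly once:

```python
# Toy three-country chain in the spirit of the figure 3.1 example quoted
# above: country A makes intermediate inputs, B a semifinished good,
# C assembles the final good and exports it back to A's market.
# All dollar values are invented for illustration.
stages = [
    ("A", 40),  # (country, value added at this stage, in $)
    ("B", 30),
    ("C", 30),
]

# Gross trade counts the good's full cumulative value at every border
# crossing (A->B at $40, B->C at $70, C->A at $100).
cumulative = 0
gross_trade = 0
for _, value_added in stages:
    cumulative += value_added
    gross_trade += cumulative

# Value-added trade credits each country only with what it added.
va_trade = sum(v for _, v in stages)

print(gross_trade)  # 210: more than double the $100 final-good value
print(va_trade)     # 100
```

This multiple counting is why, as the report notes, goods produced in supply chains pass through multiple customs regimes: a tariff levied at each of the three crossings here would apply to $210 of gross flows for a good worth $100, which is also why tariff reductions compound along a chain.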

I would add two final thoughts here:

1) It will be interesting to see if the growth of global supply chains alters the political economy of trade. In the old view of trade, firms within a certain country made goods like cars or machine tools or computers. That doesn’t happen so much any more; instead, firms within a country do pieces and parts of the production process. As manufacturers of cars and computers and other goods become less national in scope, will there be less political pressure to protect them from international trade? Or will being more economically intertwined make trade seem like a more frightening and salient issue?

2) The U.S. economy has some large advantages in a world of longer global supply chains: the sheer size of its existing markets; its functional rules of law and finance; its expertise in logistics and marketing; its well-developed communication and transportation facilities; the cultural and personal connections that America has throughout the world economy; its R&D and scientific capabilities; and the flexibility of its workers and firms. There are a lot of clouds in the future economic outlook for the U.S., but one potential bright spot–if we go out and seize it–is the multiplicity of roles that the U.S. can play in the longer supply chains of an evolving global economy.

For more on this subject, and in particular some measures of how foreign content in exports has evolved over recent decades, see my August 19 post about an IMF report on longer global supply chains.

U.S. Barriers to Imports: U.S. ITC #1

The U.S. International Trade Commission has published the 7th edition of its occasional report: "The Economic Effects of Significant U.S. Import Restraints." The report comes in two main parts. The first part is an overview and status report on the main U.S. barriers. The second part, which I'll discuss in a follow-up post, concerns the trend toward longer global supply chains.

The main message of the first part of the report is that the U.S. economy is in general extremely open to imports: "The United States is one of the world’s most open economies. In 2010, the average U.S. tariff on all goods remained near its historic low of 1.3 percent, on an import-weighted basis, essentially unchanged from the previous update in 2009. Nonetheless, significant restraints on trade remain in certain sectors. The U.S. International Trade Commission (Commission) estimates that U.S. economic welfare, as defined by total public and private consumption, would increase by about $2.6 billion annually by 2015 if the United States unilaterally ended (“liberalized”) all significant restraints quantified in this report. Exports would expand by $9.0 billion and imports by $11.5 billion. These changes would result from removing import barriers in the following sectors: sugar, ethanol, canned tuna, dairy products, tobacco, textiles and apparel, and other high-tariff manufacturing sectors."

The single most costly trade barrier concerns rules against importing ethanol. The fact that such rules exist at all, of course, strongly suggests that the key issue in ethanol policy is not how much gasoline we can replace, but instead how much of a subsidy we can find a justification for sending to farmers. Here's the ITC overview:

"Because of rapidly increasing quantities of ethanol mandated by the U.S. Renewable Fuel Standard, both U.S. ethanol production and U.S. imports of ethanol are projected to rise markedly by 2015. The projected higher import quantities and the continued moderate restrictiveness of ethanol restraints combine to make these restraints the most costly (in welfare terms) among all sectors considered. The Commission estimates that liberalizing ethanol import restraints would increase welfare by $1.5 billion and increase imports by 45 percent in 2015. Although liberalization would reduce the domestic industry’s output and employment from their projected 2015 levels by 4–5 percent, these changes are minor considering that the ethanol industry employment and output are both projected to more than double between 2005 and 2015, with or without liberalization."

One final element of these reports that I always appreciate is that they treat employment issues in the context of the overall economy, where over time wages and industries adjust. Thus, while for each trade barrier the report seeks to quantify the output and employment changes that would arise if that trade barrier were lifted, the report is also careful to note that as the economy adjusts, an equivalent number of jobs would arise elsewhere. This message comes through far too seldom in discussions of international trade: barriers to trade, or lifting barriers to trade, aren't going to alter the total number of jobs over time, but instead will shift the industries and sectors where those jobs occur.

More on Hating Biofuels: The National Research Council

I've posted here and here on how many international organizations hate government subsidies for biofuels. Now it's time for the National Research Council to have a whack at this piñata. The Committee on Economic and Environmental Impacts of Increasing Biofuels Production of the National Research Council has published "Renewable Fuel Standard: Potential Economic and Environmental Effects of U.S. Biofuel Policy." The report was mostly written under the chairmanship of Lester Lave, but was completed after his death last May. As befits a report from the NRC, it is a sober-sided discussion that lays out evidence at great length without seeking to take a particular policy stance. Here are the eight major findings of the study, with a few quick comments from me, as quoted from the "prepublication copy" that can be downloaded free of charge:

FINDING: Absent major technological innovation or policy changes, the RFS2-mandated consumption of 16 billion gallons of ethanol-equivalent cellulosic biofuels is unlikely to be met in 2022.
RFS2 is the committee's way of referring to the Renewable Fuel Standard passed into law in 2005 and revised in 2007. Cellulosic biofuel is not made from corn or soybeans or animal fat, but instead from certain kinds of grasses or wood chips. Cellulosic biofuel has the theoretical advantage that the sources for such fuel are cheap and abundant; however, producing fuel from these sources is harder than producing it from corn or soybeans or sugar, and the technologies for converting cellulosic material to biofuels are far from cost-effective. Indeed, the committee writes: "no commercially viable biorefineries exist for converting lignocellulosic biomass to fuels as of the writing of this report."

FINDING: Only in an economic environment characterized by high oil prices, technological breakthroughs, and a high implicit or actual carbon price would biofuels be cost-competitive with petroleum-based fuels.
Indeed, the case for biofuels probably comes down to either very high oil prices or technological breakthroughs that make them much cheaper, because as the next finding notes, it's not at all clear that biofuels reduce greenhouse gas emissions.

FINDING: RFS2 may be an ineffective policy for reducing global GHG emissions because the effect of biofuels on GHG emissions depends on how the biofuels are produced and what land-use or land-cover changes occur in the process.
Expanded production of biofuels will almost certainly involve clearing and planting additional land. Depending on how it is done, this process can release more carbon than biofuels save. In addition, it's important to remember that biofuels and agricultural products operate in a global market, so the issue is not just how U.S. biofuels policies affect clearing and planting of U.S. land, but how they affect clearing and planting of land all around the world.

FINDING: Absent major increases in agricultural yields and improvement in the efficiency of converting biomass to fuels, additional cropland will be required for cellulosic feedstock production; thus, implementation of RFS2 is expected to create competition among different land uses, raise cropland prices, and increase the cost of food and feed production.
FINDING: Food-based biofuel is one of many factors that contributed to upward price pressure on agricultural commodities, food, and livestock feed since 2007; other factors affecting those prices included growing population overseas, crop failures in other countries, high oil prices, decline in the value of the U.S. dollar, and speculative activity in the marketplace.
Many U.S. households can find ways to adjust without too much pain to a slightly higher price of food. But food products are sold in global markets, and for many people around the world, higher food prices can have dire consequences for nutrition and health.

FINDING: Achieving RFS2 would increase the federal budget outlays mostly as a result of increased spending on payments, grants, loans, and loan guarantees to support the development of cellulosic biofuels and forgone revenue as a result of biofuel tax credits.
Even if explicit subsidies for biofuels are allowed to expire, as they are scheduled to do at the end of 2012, the mandates for consuming biofuels will remain in place, which will raise costs for consumers. Also, gasoline is taxed and biofuels are subsidized, so a movement from gasoline to biofuels will reduce government tax revenues.

FINDING: The environmental effects of increasing biofuels production largely depend on feedstock type, site-specific factors (such as soil and climate), management practices used in feedstock production, land condition prior to feedstock production, and conversion yield. Some effects are local and others are regional or global. A systems approach that considers various environmental effects simultaneously and across spatial and temporal scales is necessary to provide an assessment of the overall environmental outcome of increasing biofuels production.
Biofuels are commonly sold on their environmental merits. The committee is saying here, in a very polite way, that when different feedstocks are considered, along with their effects on air, soil, and water, these purported environmental gains have not yet been convincingly demonstrated. 

FINDING: Key barriers to achieving RFS2 are the high cost of producing cellulosic biofuels compared to petroleum-based fuels and uncertainties in future biofuel markets.

I'm a supporter of expanded energy R&D efforts. Maybe some scientists will find a way to make biofuels that are both cost-effective and clearly an environmental gain, in a way that doesn't drive up food prices around the world. But at this stage, subsidizing production of biofuels or mandating that they be used in certain quantities–especially for technologies like cellulosic biofuels that don't exist on a commercial basis–is putting the cart way in front of the horse.

Using Financial Repression to Reduce Government Debt

The usual ways of reducing a government debt burden over time are fairly well-known: cut spending or raise taxes; have the economy grow faster than the debt burden, so the ratio of debt/GDP declines over time; a burst of inflation, which reduces the real value of past debt; and in some cases an outright default or restructuring of the debt. To this list, Carmen Reinhart, Jacob F. Kirkegaard, and M. Belen Sbrancia offer "Financial Repression Redux." Here are some main themes (references omitted for readability):

Here's their definition of financial repression:
"Financial repression occurs when governments implement policies to channel to themselves funds that in a deregulated market environment would go elsewhere. Policies include directed lending to the government by captive domestic audiences (such as pension funds or domestic banks), explicit or implicit caps on interest rates, regulation of cross-border capital movements, and (generally) a tighter connection between government and banks, either explicitly through public ownership of some of the banks or through heavy “moral suasion.” Financial repression is also sometimes associated with relatively high reserve requirements (or liquidity requirements), securities transaction taxes, prohibition of gold purchases, or the placement of significant amounts of government debt that is nonmarketable …"

How financial repression works like a tax
"One of the main goals of financial repression is to keep nominal interest rates lower than they would be in more competitive markets. Other things equal, this reduces the government’s interest expenses for a given stock of debt and contributes to deficit reduction. However, when financial repression produces negative real interest rates (nominal rates below the inflation rate), it reduces or liquidates existing debts and becomes the equivalent of a tax—a transfer from creditors (savers) to borrowers, including the government. But this financial repression tax is unlike income, consumption, or sales taxes. The rate is determined by financial regulations and inflation performance, which are opaque compared with more visible and often highly politicized fiscal measures. Given that deficit reduction usually involves highly unpopular expenditure reductions and/or tax increases, authorities seeking to reduce outstanding debts may find the stealthier financial repression tax more politically palatable."
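The arithmetic behind this "tax" is simple enough to sketch. The numbers below are purely hypothetical (a made-up debt load, rate cap, and inflation rate, not figures from the Reinhart, Kirkegaard, and Sbrancia paper), but they show how a modest negative real rate quietly liquidates debt each year:

```python
# Hypothetical numbers: debt equal to 100% of GDP, a capped nominal
# interest rate of 2%, and inflation running at 5%.
debt_to_gdp = 1.00       # debt stock as a share of GDP
nominal_rate = 0.02      # capped rate actually paid to bondholders
inflation = 0.05         # annual inflation rate

# The real return to savers is roughly the nominal rate minus inflation.
real_rate = nominal_rate - inflation          # -3%: savers lose purchasing power

# The "financial repression tax": the annual erosion of the real debt
# burden, expressed as a share of GDP.
liquidation_effect = -real_rate * debt_to_gdp

print(f"real return to savers: {real_rate:.1%}")            # -3.0%
print(f"debt liquidated per year: {liquidation_effect:.1%} of GDP")  # 3.0% of GDP
```

With these (invented) inputs, the stealth tax comes to 3 percent of GDP a year, which happens to be in the range the authors report for the postwar U.S. and U.K.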

How is financial repression currently operating in the U.S.?
One potential example of financial repression operating in the U.S. is today's super-low interest rates. In part, of course, these are an attempt to stimulate the economy, but it also seems plausible to me that they are intended to help the U.S. government in financing its debt. A more straightforward example is that the Federal Reserve and other central banks buy U.S. Treasury debt directly: debt that might very well need to pay a higher interest rate if it were sold to outsiders. Back in 1990, outsiders owned about 75% of U.S. Treasury debt; now they own about half.

How is financial repression happening in other countries? 
Central banks in many other countries–for example, the UK, Ireland, Portugal, and Greece–have sharply increased their holdings of government debt. In France and Ireland, major pension funds have been required to invest in government debt.

How much can financial repression reduce government debt?
These authors cite research showing that financial repression can have a major effect in reducing government debt, through what they call "the liquidation effect." Many of their calculations focus on how government debt burdens were reduced after WWII. They write: "For the United States and the United Kingdom, the annual liquidation effect [between 1945 and 1980] amounted on average to between 3 and 4 percent of GDP a year. … For Australia and Italy, which recorded higher inflation rates, the liquidation effect was larger (about 5 percent a year)."

My point here isn't to argue for or against what they call financial repression. But if their calculations are roughly right, it's an option for reducing government debt that could end up playing a major role, and it needs to be better understood.


2011 Nobel Prize to Thomas Sargent and Christopher Sims

According to the Nobel website: "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2011 was awarded jointly to Thomas J. Sargent and Christopher A. Sims 'for their empirical research on cause and effect in the macroeconomy.'" But what does that actually mean?

The website of the Nobel organization always offers useful background information about the laureates, including a "Scientific Background" paper about the winners. This year's background paper about Thomas Sargent and Christopher Sims is going to be hard sledding for those uninitiated into academic macroeconomics–by which I mean it has a bunch of equations. But the opening pages offer an accessible overview of why they are eminently deserving of the prize. Here are some excerpts, mixed with some of my own explanations:

How was macroeconomic analysis done before the work of Sargent, Sims, and others? 
Here's my own description: If one looks back at how macroeconomics was typically done in the 1960s and into the early 1970s, the common macroeconomic models were big sets of equations–that is, they added up relationships between elements like consumption, investment, saving, imports, exports, and total economic output, along with equations for how interest rates and exchange rates affected each other and these categories. A big category like "consumption" would be broken down into durable goods and nondurable goods, and in turn these categories would be broken down still further. The resulting models would have hundreds of equations, all interrelated with each other and adding up to a picture of the macroeconomy as a whole. But as the Nobel background paper reports: "This estimated system was then used to interpret macroeconomic time series, to forecast the economy, and to conduct policy experiments. Such large models were seemingly successful in accounting for historical data. However, during the 1970s most western countries experienced high rates of inflation combined with slow output growth and high unemployment. In this era of stagflation, instabilities appeared in the large models, which were increasingly called into question."

The key role of expectations in this analysis
Many of the public policy discussions in the stagflation of the 1970s focused on expectations. What if workers were expecting higher wages? What if firms could promise higher wages because they expected prices to rise? Were the expectations causing inflation and recession, or were inflation and recession causing the expectations, or were there feedback loops in all of these and other economic factors? The macroeconomics of that time had no clear-cut tools for dealing with these issues.

The background paper puts it this way: "In any empirical economic analysis based on observational data, it is difficult to disentangle cause and effect. This becomes especially cumbersome in macroeconomic policy analysis due to an important stumbling block: the key role of expectations. Economic decision-makers form expectations about policy, thereby linking economic activity to future policy. Was an observed change in policy an independent event? Were the subsequent changes in economic activity a causal reaction to this policy change? Or did causality run in the opposite direction, such that expectations of changes in economic activity triggered the observed change in policy? Alternative interpretations of the interplay between expectations and economic activity might lead to very different policy conclusions. The methods developed by Sargent and Sims tackle these difficulties in different, and complementary, ways."

Sargent and structural econometrics
Instead of trying to build a macroeconomic model out of a pile of statistics and the ways those statistics added up and interrelated, the approach of Sargent (and others) was to build a macroeconomic model starting from the idea that economic actors like households and firms were doing their best to pursue their own interests. This approach has sometimes been called "rational expectations," but that term is probably misleading. The "rationality" here doesn't mean that economic actors have all available information, can calculate everything perfectly, and always make correct decisions. It only implies that they won't make the same mistake over and over again. In Sargent's hands, at least, this approach explicitly leaves open the question of just how people form expectations and learn.

Here's the background paper: "Sargent began his research around this time [the early 1970s], during the period when an alternative theoretical macroeconomic framework was proposed. It emphasized rational expectations, the notion that economic decisionmakers like households and firms do not make systematic mistakes in forecasting. This framework turned out to be essential in interpreting the inflation-unemployment experiences of the 1970s and 1980s. It also formed a core of newly emerging macroeconomic theories. Sargent played a pivotal role in these developments. He explored the implications of rational expectations in empirical studies, by showing how rational expectations could be implemented in empirical analyses of macroeconomic events–so that researchers could specify and test theories using formal statistical methods–and by deriving implications for policymaking. … In fact, the defining characteristic of Sargent's overall approach is not an insistence on rational expectations, but rather the essential idea that expectations are formed actively, under either full or bounded rationality. In this context, active means that expectations react to current events and incorporate an understanding of how these events affect the economy. This implies that any systematic change in policymaking will influence expectations, a crucial insight for policy analysis."

I would add that instead of a model of the macroeconomy with potentially hundreds of variables, Sargent and others worked with models that on the surface appeared much simpler: one example in the background paper is a model of the macroeconomy with only three variables–inflation, output, and a nominal interest rate. But the inferences about cause-and-effect in these models are defensible and logical.
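The idea that rational agents "won't make the same mistake over and over again" can be illustrated with a toy simulation of my own (not drawn from the background paper). Suppose inflation drifts steadily upward; a purely backward-looking forecaster errs by the same amount every period, which is exactly the kind of systematic mistake rational expectations rules out:

```python
# Toy illustration: inflation drifts upward by 0.5 points each period.
inflation = [2.0 + 0.5 * t for t in range(20)]   # trending inflation path

# Naive backward-looking forecast: expect this period to equal last period.
# The forecaster is wrong by exactly 0.5 every single period -- a
# systematic, repeatable mistake.
adaptive_errors = [inflation[t] - inflation[t - 1] for t in range(1, 20)]

# A crude stand-in for a "rational" forecaster: the agent understands
# the drift and builds it into the forecast, so errors average zero.
rational_errors = [inflation[t] - (inflation[t - 1] + 0.5) for t in range(1, 20)]

print("average naive error:   ", sum(adaptive_errors) / len(adaptive_errors))   # 0.5
print("average rational error:", sum(rational_errors) / len(rational_errors))   # 0.0
```

The point of the 1970s debates was that policy built on the naive forecaster's behavior would fail once people started building the drift into their expectations.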

Sims and vector autoregressions
Sims pointed out that the earlier generation of macroeconomic models were built on a series of assumptions about how certain economic factors or policies "caused" other policies. But in a model with expectations, these statements about "cause" needed to be demonstrated, not assumed. Thus, instead of having a model in which some factors caused other factors, Sims proposed that macroeconomic analysis should begin with a model in which it was possible for every factor to "cause" a change in every other factor, and in addition for past values of every factor over the last few years to "cause" a change in every factor. This approach is called a "vector autoregression," but I often prefer to think of it as starting from a position of honest ignorance.

You then plug in all your data–say, quarterly data over a period of years–and see what patterns emerge. As you might imagine, it quickly becomes clear that certain factors are not affecting others. Sims proposed a process for figuring out when certain factors aren't connected. As you begin to rule out what is NOT connected, what is left behind is a model of the connections that actually exist. It's sort of like the way that a sculptor starts with a block of stone and, by gradually removing pieces, ends up with an image.

The background paper puts it this way: "Sims launched what was perhaps the most forceful critique of the predominant macroeconometric paradigm of the early 1970s by focusing on identification, a central element in making causal inferences from observed data. Sims argued that existing methods relied on 'incredible' identification assumptions, whereby interpretations of 'what causes what' in macroeconomic time series were almost necessarily flawed. Misestimated models could not serve as useful tools for monetary policy analysis and, often, not even for forecasting. As an alternative, Sims proposed that the empirical study of macroeconomic variables could be built around a statistical tool, the vector autoregression (VAR). Technically, a VAR is a straightforward N-equation, N-variable (typically linear) system that describes how each variable in a set of macroeconomic variables depends on its own past values, the past values of the remaining N – 1 variables, and on some exogenous 'shocks.' Sims's insight was that properly structured and interpreted VARs might overcome many identification problems and thus were of great potential value not only for forecasting, but also for interpreting macroeconomic time series and conducting monetary policy experiments."
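To make the N-equation, N-variable idea concrete, here is a minimal sketch of my own (a toy example on simulated data, not taken from the background paper) of a two-variable VAR with one lag, estimated equation-by-equation with ordinary least squares:

```python
import numpy as np

# Simulate a 2-variable VAR(1). In the true dynamics below, variable 2
# helps drive variable 1, but variable 1 does NOT drive variable 2 --
# a "cause" pattern the data should reveal rather than assume.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.2],
                   [0.0, 0.7]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# The "honest ignorance" starting point: regress each variable on the
# lagged values of ALL variables, allowing every coefficient to be
# nonzero until the data say otherwise.
X, Y = y[:-1], y[1:]                              # lagged values, current values
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # OLS estimate of the coefficient matrix

print(np.round(A_hat, 2))
# The estimated effect of variable 1's lag on variable 2 should come
# out near zero, recovering the true one-way structure.
```

In a real application the columns of `y` would be macro series such as output growth and inflation, with more lags; the whittling-away process the text describes corresponds to testing which estimated coefficients are statistically indistinguishable from zero.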

Other thoughts and resources
Sargent and Sims were colleagues at the University of Minnesota for about 15 years. The Federal Reserve Bank of Minneapolis puts out a readable publication called "The Region," which often does in-depth interviews with prominent economists about their work. An interview with Sargent from the September 2010 issue is available here; an interview with Sims from the June 2007 issue is available here.

Also, Sims published an article in the Spring 2010 issue of my own Journal of Economic Perspectives called "But Economics Is Not an Experimental Science," on issues of how to draw defensible cause-and-effect inferences from naturally occurring data. Like all articles in my journal, it is freely available courtesy of the American Economic Association.

While no one quite knows what the Nobel committee is thinking when they choose laureates, it seems clear that one standard is whether the ideas have been important enough to launch a sustained research literature. The ideas of Sargent and Sims from back in the 1970s and early 1980s certainly meet this test. Both these authors, and hundreds of others, have built on these ideas for decades.


After Japan's Quake, the Intervention to Stabilize the Yen

In the aftermath of the dreadful earthquake and tsunami which hit Japan on March 11, 2011, I completely missed that there was an international intervention to stabilize the exchange rate of the Japanese yen. Fortunately, Christopher J. Neely tells the story and offers useful context in "A Foreign Exchange Intervention in an Era of Restraint." Here are some highlights of what happened, and what lessons can be drawn:

Foreign exchange intervention has become rare for the G-7 countries
Back in the late 1980s and early 1990s, many major central banks stopped frequent intervention in exchange rate markets, as shown on the figure. In fact, there have been only three exchange rate interventions for these countries since 1995: an intervention after Japan's quake in March 2011, an intervention soon after the start of the euro in September 2000, and an intervention in the yen after East Asia's financial crisis in 1998.

The FX intervention after Japan's March 2011 Quake
Japan's currency rose sharply after the earthquake. Here's how Neely describes what happened: "Nevertheless, the G-7 finance ministers and central bank governors held a conference call on the evening of Thursday, March 17 (Friday morning in Tokyo) and decided to conduct a coordinated intervention to weaken the JPY. The G-7 issued a press release containing the following text:

 In response to recent movements in the exchange rate of the yen associated with the tragic events in Japan, and at the request of the Japanese authorities, the authorities of the United States, the United Kingdom, Canada, and the European Central Bank will join with Japan, on 18 March 2011, in concerted intervention in exchange markets. As we have long stated, excess volatility and disorderly movements in exchange rates have adverse implications for economic and financial stability. We will monitor exchange markets closely and will cooperate as appropriate (G-7, 2011).

Figure 7 shows that the yen reacted immediately to the intervention announcement, surging almost 4 percent within the hour against the USD …"

As Neely reports, the total intervention was about $10.4 billion. Notice that the yen starts stabilizing when the announcement is made, and then moves to a certain level and more-or-less sticks at that level for a while. The volatility of the yen foreign exchange rate diminishes a great deal.

What did the 1998 exchange rate intervention look like?
Neely describes the background to the 1998 intervention this way: "The June 1998 intervention also followed a financial crisis, the 1997 Asian exchange rate crisis in which international capital fled many developing Asian countries, such as Thailand and South Korea. In early June 1998, the main macroeconomic concern was that the yen was unusually weak and weakening further, which made goods and services from other Asian countries less competitive with Japanese goods and services and harmed those countries’ recoveries. Policymakers probably feared that a falling yen might cause China to devalue the renminbi (RMB), possibly sparking competitive devaluations, inflation, and instability throughout the region."

The pattern of the 1998 intervention is qualitatively similar to the 2011 intervention: a sharp reaction when the announcement is made, a movement to a new level, and diminished volatility.

What happened in the September 2000 intervention? 
Neely sets the stage: "On January 1, 1999, the ECB began conducting a common monetary policy with a new currency, the euro, for the 11 original nations of the European Monetary Union (EMU). From its inception, the euro tended to depreciate against the dollar, falling from about 1.18 USD/EUR on the inception date to less than 0.85 USD/EUR in September 2000. Doubts about the policies of the new central bank probably contributed to this weakness. At the same time, the U.S. economy was slowing—it would officially enter a recession in March 2001—and the strong dollar/weak euro was perceived as detrimental to U.S. exporters. In addition, the Japanese feared that an overly strong yen would price Japanese exports out of the European markets. Against this backdrop, the ECB, the United States, and Japan decided to intervene to support the euro on September 22, 2000."

Again, the qualitative pattern is the same: the exchange rate takes a jump, but then stabilizes at a new level with diminished volatility.

What are the overall lessons?
Neely summarizes the lessons this way: "Since 1995 most advanced governments/central banks have used intervention only very sparingly as a policy tool. Examination of coordinated interventions during this period shows that intervention is not a magic wand that authorities can use to move exchange rates at will. It can be a very effective tool in certain circumstances, however, to coordinate market expectations about fundamental values of the exchange rate and calm disorderly foreign exchange markets by reintroducing two-sided risk."

Those who are talking about pressuring China to adjust its exchange rate vs. the U.S. dollar have a reasonable case to make. But they would be wise to take to heart the practical issues here. Foreign exchange rate intervention can stabilize a disorderly market in a short-run situation where everyone is betting the currency will move in only one direction, but it is not a magic wand to move exchange rates at will.