Global Supply Chains and the Changing Nature of International Trade

The World Investment Report 2013 from UNCTAD (the UN Conference on Trade and Development) is my go-to source for statistics about levels and trends of foreign direct investment. This year, Chapter IV offers an interesting additional essay on "Global Value Chains: Investment and Trade for Development." Here are a few points that jumped out at me.

The Preeminence of International Trade in Intermediate Goods 

The textbook story of international trade, in which an easily identifiable product made in one country (cars, computers, textiles, oil, wine, or wheat) is traded for a similarly identifiable product made in another country, is no longer a fair representation of the majority of world trade. "About 60 per cent of global trade, which today amounts to more than $20 trillion, consists of trade in intermediate goods and services that are incorporated at various stages in the production process of goods and services for final consumption."

Here's a concrete example of trade in intermediate goods through a global value chain, as it operates at Starbucks:

"For instance, even the relatively simple GVC [global value chain] of Starbucks (United States), based on one service (the sale of coffee), requires the management of a value chain that spans all continents; directly employs 150,000 people; sources coffee from thousands of traders, agents and contract farmers across the developing world; manufactures coffee in over 30 plants, mostly in alliance with partner firms, usually close to final market; distributes the coffee to retail outlets through over 50 major central and regional warehouses and distribution centres; and operates some 17,000 retail stores in over 50 countries across the globe. This GVC has to be efficient and profitable, while following strict product/service standards for quality. It is supported by a large array of services, including those connected to supply chain management and human resources management/development, both within the firm itself and in relation to suppliers and other partners. The trade flows involved are immense, including the movement of agricultural goods, manufactured produce, and technical and managerial services."

And here are a couple of figures showing the importance of trade in intermediates to the exports of various countries. The darker green bar shows the "upstream" component of global supply chains: that is, the share of foreign value-added that is first imported and then re-exported by an economy. The lighter green bar shows the "downstream" component: that is, the share of a country's exports that later becomes part of the value-added of exports from another country. The sum of the two is the "GVC participation rate": the share of a country's exports involved either upstream or downstream in a global value chain.
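The participation rate is just the sum of those two shares. Here is a minimal sketch in Python; the shares used are invented for illustration, not figures from the UNCTAD report:

```python
# Hypothetical shares of gross exports, for illustration only
# (not actual UNCTAD figures).
def gvc_participation_rate(upstream_share, downstream_share):
    """upstream_share: foreign value-added embodied in a country's exports.
    downstream_share: domestic value-added later re-exported by others.
    Both are expressed as fractions of the country's gross exports."""
    return upstream_share + downstream_share

rate = gvc_participation_rate(0.30, 0.25)
print(f"GVC participation rate: {rate:.0%}")  # 55% of exports tied to GVCs
```

A country can score high on this measure from either direction: by assembling imported components (upstream) or by supplying inputs that others re-export (downstream).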

And here's a list by country:

The Centrality of Transnational Corporations in International Trade

Global supply chains are typically coordinated by transnational corporations, often through making foreign direct investments in other countries (which is why it makes sense to have a discussion of global supply chains in a report focused on foreign direct investment). In fact, a relatively small number of transnational corporations coordinate and carry out the overwhelming majority of international trade, through some combination of owning foreign subsidiaries, contract manufacturing, franchising, or arm's-length buying and selling from local firms.

"In the EU, the top 10 per cent of exporting firms typically accounts for 70 to 80 per cent of export volumes, while this figure rises to 96 per cent of total exports for the United States, where about 2,200 firms (the top 1 per cent of exporters, most of which are TNC [transnational corporation] parent companies or foreign affiliates) account for more than 80 per cent of total trade. The international production networks shaped by TNC parent companies and affiliates account for a large share of most countries' trade. On the basis of these macro-indicators of international production and firm-level evidence, UNCTAD estimates that about 80 per cent of global trade (in terms of gross exports) is linked to the international production networks of TNCs …"

Global Value Chains and Economic Development

Clearly, global supply chains lead to prices for consumers in the importing countries that are lower than they would otherwise be; that's most of the reason that transnational corporations develop such chains. They also generate profits for the transnational corporations themselves. But do global supply chains help low- and middle-income countries develop? The answer depends on several factors.

  • Does the position in the global value chain lend itself to additional learning? "Is it the type of chain that presents potential for learning and upgrading? Will it enable capabilities to be acquired by firms that can be applied to the production of other products or services? In the garments industry, Mexican firms have been able to acquire new skills and functions, becoming full-package suppliers, while it seems very difficult for firms in sub-Saharan Africa supplying garments under the African Growth and Opportunity Act programme to move beyond cut, make and trim."
  • Does the host economy offer a supportive environment? "Is there an environment conducive to firm-level learning and have investments been made in technical management skills? Are firms willing to invest in developing new skills, improving their capabilities and searching for new market opportunities? Local firms' capabilities and competences determine their ability to gain access to cross-border value chains, and to be able to learn, benefit from and upgrade within GVCs [global value chains]. Government policies can facilitate this process."

The overall pattern seems to be that participation in global value chains typically does benefit economic growth and development, but there are a number of potentially difficult and important issues about the treatment of workers, environmental effects, interactions with local institutions and the host government, and so on.

America's Teachers: Some International Comparisons

As the school year gets underway, here are some figures comparing the situation of America's K-12 teachers to that of primary and secondary teachers in other countries around the world. The figures are taken from the OECD volume "Education at a Glance 2013." The quick bottom line: the average U.S. teacher faces a student/teacher ratio similar to the average for teachers in other countries, but the pay of U.S. teachers relative to the average wage is lower than the comparable ratio in many countries, and the number of hours worked by U.S. teachers is higher than in other countries.

First, here's a look at the ratio of teachers to students across countries: the top panel is pre-primary education, followed by primary education, lower secondary education, and upper secondary education.

Next, here's a look at teacher pay. When compared in absolute dollars, the pay of U.S. teachers looks quite good by international standards–but after all, the U.S. economy generally has higher per capita incomes and higher wages than these other countries. Thus, the figure here shows the ratio of the pay of teachers to the pay of someone with a tertiary (that is, college) degree in these comparison countries.

Finally, here's a comparison of the number of hours that teachers work in different countries, using the example of lower secondary school, where U.S. teachers are well above average.

The expectations and responsibilities of teachers can be quite different across countries, so it would be a mistake to view these comparisons as precisely apples-to-apples. But bring them up with a teacher of your acquaintance, and you'll get an earful of reaction.

The Origins of Labor Day

[Originally published on this blog on Labor Day, 2011]

It's clear that the first Labor Day celebration was held on Tuesday, September 5, 1882, and organized by the Central Labor Union, an early trade union organization operating in the greater New York City area in the 1880s. By the early 1890s, more than 20 states had adopted the holiday. On June 28, 1894, President Grover Cleveland signed into law: "The first Monday of September in each year, being the day celebrated and known as Labor's Holiday, is hereby made a legal public holiday, to all intents and purposes, in the same manner as Christmas, the first day of January, the twenty-second day of February, the thirtieth day of May, and the fourth day of July are now made by law public holidays."

What is less well known, at least to me, is that the very first Labor Day parade almost didn't happen, and that historians now dispute which person is most responsible for that first Labor Day. The U.S. Department of Labor tells how the first Labor Day almost didn't happen, for lack of a band:

"On the morning of September 5, 1882, a crowd of spectators filled the sidewalks of lower Manhattan near city hall and along Broadway. They had come early, well before the Labor Day Parade marchers, to claim the best vantage points from which to view the first Labor Day Parade. A newspaper account of the day described "…men on horseback, men wearing regalia, men with society aprons, and men with flags, musical instruments, badges, and all the other paraphernalia of a procession."

The police, wary that a riot would break out, were out in force that morning as well. By 9 a.m., columns of police and club-wielding officers on horseback surrounded city hall.

By 10 a.m., the Grand Marshal of the parade, William McCabe, his aides and their police escort were all in place for the start of the parade. There was only one problem: none of the men had moved. The few marchers that had shown up had no music.

According to McCabe, the spectators began to suggest that he give up the idea of parading, but he was determined to start on time with the few marchers that had shown up. Suddenly, Mathew Maguire of the Central Labor Union of New York (and probably the father of Labor Day) ran across the lawn and told McCabe that two hundred marchers from the Jewelers Union of Newark Two had just crossed the ferry — and they had a band!

Just after 10 a.m., the marching jewelers turned onto lower Broadway — they were playing "When I First Put This Uniform On," from Patience, an opera by Gilbert and Sullivan. The police escort then took its place in the street. When the jewelers marched past McCabe and his aides, they followed in behind. Then, spectators began to join the march. Eventually there were 700 men in line in the first of three divisions of Labor Day marchers.

With all of the pieces in place, the parade marched through lower Manhattan. The New York Tribune reported that, "The windows and roofs and even the lamp posts and awning frames were occupied by persons anxious to get a good view of the first parade in New York of workingmen of all trades united in one organization."

At noon, the marchers arrived at Reservoir Park, the termination point of the parade. While some returned to work, most continued on to the post-parade party at Wendel's Elm Park at 92nd Street and Ninth Avenue; even some unions that had not participated in the parade showed up to join in the post-parade festivities that included speeches, a picnic, an abundance of cigars and, "Lager beer kegs… mounted in every conceivable place."

From 1 p.m. until 9 p.m. that night, nearly 25,000 union members and their families filled the park and celebrated the very first, and almost entirely disastrous, Labor Day."

As to the originator of Labor Day, the traditional story I learned back in the day gave credit to Peter McGuire, the founder of the Carpenters Union and a co-founder of the American Federation of Labor. At a meeting of the Central Labor Union of New York on May 8, 1882, the story went, he recommended that Labor Day be designated to honor "those who from rude nature have delved and carved all the grandeur we behold." McGuire also typically received credit for suggesting the first Monday in September for the holiday, "as it would come at the most pleasant season of the year, nearly midway between the Fourth of July and Thanksgiving, and would fill a wide gap in the chronology of legal holidays." He envisioned that the day would begin with a parade, "which would publicly show the strength and esprit de corps of the trade and labor organizations," and then continue with "a picnic or festival in some grove."

But in recent years, the International Association of Machinists has also staked a claim, because one of its members, a machinist named Matthew Maguire, was serving as secretary of the Central Labor Union in New York in 1882 and clearly played a major role in organizing the day. The U.S. Department of Labor has a quick summary of the controversy.

"According to the New Jersey Historical Society, after President Cleveland signed into law the creation of a national Labor Day, The Paterson (N.J.) Morning Call published an opinion piece entitled, "Honor to Whom Honor is Due," which stated that "the souvenir pen should go to Alderman Matthew Maguire of this city, who is the undisputed author of Labor Day as a holiday." This editorial also referred to Maguire as the "Father of the Labor Day holiday." …

According to The First Labor Day Parade, by Ted Watts, Maguire held some political beliefs that were considered fairly radical for the day, and also for Samuel Gompers and his American Federation of Labor. Allegedly, Gompers did not want Labor Day to become associated with the sort of "radical" politics of Matthew Maguire, so in an 1897 interview, Gompers' close friend Peter J. McGuire was assigned the credit for the origination of Labor Day."

Those Who Pay Zero in U.S. Income Tax

Back in January 1969, the story goes, U.S. Treasury Secretary Joseph Barr testified before the Joint Economic Committee of Congress that 155 Americans with income of over $200,000 had paid no income tax in 1967. Adjusted for inflation, $200,000 in 1967 income would be equal to about $1.4 million in 2013. Back in 1969, members of Congress received more letters from constituents about 155 non-taxpayers than they did about the Vietnam War.
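The inflation adjustment is just a ratio of price levels. A rough sketch in Python, using approximate annual-average CPI-U values (roughly 33.4 for 1967 and 233.0 for 2013; these are my approximations for illustration, not figures cited in the post):

```python
# Approximate annual-average CPI-U values (assumed for illustration).
CPI_1967 = 33.4
CPI_2013 = 233.0

income_1967 = 200_000
# Scale the 1967 dollar amount by the ratio of price levels.
income_2013 = income_1967 * CPI_2013 / CPI_1967
print(f"${income_2013:,.0f}")  # roughly $1.4 million
```

The same ratio applied in either direction lets you compare dollar figures across any two years for which a price index is available.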

The public outrage notwithstanding, it's not obvious to me that 155 high-income people paying no income taxes is a problem that needs a solution. After all, a few high-income people will have made very large charitable donations in a year, knocking their tax liability down to zero. A few taxophobes will invest all their funds in tax-free municipal bonds. A few may have high income this year, but be able to offset it for tax purposes with large losses from previous years. It only takes a few dozen people in each of these and similar categories to make a total of 155 high-income non-taxpayers.

The ever-useful Tax Policy Center has now published some estimates of what share of taxpayers will pay no income taxes in 2013 and future years. Just to be clear, these estimates are based on their microsimulation model of the tax code and taxpayers–the actual IRS data for 2013 taxpayers won't be available for a couple of years. But the estimates are nonetheless thought-provoking. Here's a summary table:

At the high end, the top 0.1% of the income distribution would kick in at about $1.5 million in annual income for 2013–not too different, adjusted for inflation, from the $200,000 level that created such controversy back in 1969. Of the 119,000 "tax units" in the top 0.1% of incomes, 0.2% paid no income tax–so about 200-250 people. Given the growth of population in the last four decades, it's a very similar number to the 155 non-taxpayers that caused such a stir back in 1969.
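The back-of-the-envelope count follows directly from the table's figures:

```python
top_tenth_percent_units = 119_000  # "tax units" in the top 0.1% of incomes
zero_tax_share = 0.002             # 0.2% of them paid no income tax

non_taxpayers = top_tenth_percent_units * zero_tax_share
print(round(non_taxpayers))  # about 240, consistent with "200-250 people"
```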

The big change, of course, is that after 1969 an Alternative Minimum Tax was enacted in an attempt to ensure that all those with high incomes would pay something in taxes. But with about 162 million tax units in the United States, and an income tax code that has now reached 4 million words, it's not a big shock to me that a few hundred high-income people would be able to find legitimate and audit-proof ways of knocking their tax liability down to zero.

Rather than getting distracted by the lack of tax payments by a few hundred outliers, I'd much rather focus on the tax payments of all 119,000 in the top 0.1%, or all 1,160,000 in the top 1%. As I've written before on this blog, I'm open to policies that would raise marginal tax rates on those with the highest incomes, especially if they are part of an overall deal to reduce the future path of U.S. budget deficits. But I would prefer a tax reform approach of seeking to reduce "tax expenditures," which is the generic name for all the legal deductions, exemptions and credits with which those with higher incomes can reduce their taxes (for earlier posts on this subject, see here, here, and here).

The other pattern that jumps out when looking at those who pay zero in income taxes is that the overwhelming majority of them have low incomes. The table shows that 87% of those in the lowest income quintile, 52% of those in the second income quintile, and 28% of those in the middle income quintile owed nothing in federal income taxes. This situation is nothing new, of course. The original federal income tax back in 1913 was explicitly aimed only at those with high incomes, and only about 7% of households paid income tax. Even after the income tax expanded during World War I, and then was tweaked through the 1920s and into the 1930s, only about 20-30% of households owed federal income tax in a given year.

The standard explanation for why the federal income tax covers only a portion of the population is that it is the nation's main tax for having those with higher incomes pay a greater share of income in taxes. With payroll taxes for Social Security and Medicare, as well as with state and local sales taxes, those with higher incomes don't pay a higher share of their income–in fact, they typically pay a lower share of income.

The standard argument for why a higher share of people should pay into the income tax is that democracy is healthier if more people have "skin in the game"–that is, if tax increases and tax cuts affect everyone, and aren't just a policy that an untaxed or lightly-taxed majority can impose on a small share of the population. I recognize the theoretical power of this argument, but in practical terms, it doesn't seem to have much force. If those with low incomes made minimal income tax payments so that they had "skin in the game," and then saw those minimal payments vary by even more minimal amounts as taxes rose and fell, that wouldn't much alter their incentives to impose higher taxes on others. Also, the U.S. doesn't seem to have been subject to populist fits of expropriating the income of the rich, so I don't worry overmuch about it. Sure, it would be neat and tidy if we could reach a broad social agreement on how the income tax burden should be distributed across income groups, and then, with that agreement in hand, raise or lower all taxes on everyone together. But I'm not holding my breath for such an agreement on desirable tax burdens to be reached.

Will a Computer at Home Help My Children in School?

How much do students benefit from having access to a computer at home? Obviously, one can't just compare the test performance of students who have a computer at home with those who don't, because families and households that do provide a computer at home to their students are likely to be different in many ways from families that do not do so. It's possible to make statistical adjustments for these differences, but such adjustments only account for "observable" factors like income, ethnicity, gender, family structure, employment status of parents, and the like. Differences across families that aren't captured in readily-available statistics will remain.

Thus, an alternative approach is a social experiment. Take a substantial number of families who don't have a home computer, and randomly give some of them a home computer. Then compare the results for those with and without a computer. Robert W. Fairlie and Jonathan Robinson report the results of the largest field experiment thus far conducted along these lines in "Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren," recently published in the American Economic Journal: Applied Economics (5:3, pp. 211–240). (This journal is not freely available on-line, although many readers will have access through library subscriptions or their membership in the American Economic Association.) Here's the conclusion from their abstract:

"Although computer ownership and use increased substantially, we find no effects on any educational outcomes, including grades, test scores, credits earned, attendance, and disciplinary actions. Our estimates are precise enough to rule out even modestly-sized positive or negative impacts. The estimated null effect is consistent with survey evidence showing no change in homework time or other "intermediate" inputs in education."

And here's a bit more detail on their results. They note: "There are an estimated 15.5 million instructional computers in US public schools, representing one instructional computer for every three schoolchildren. Nearly every instructional classroom in these schools has a computer, averaging 189 computers per school … [M]any children do not have access to a computer at home. Nearly 9 million children ages 10–17 in the United States (27 percent) do not have computers with Internet connections at home …"

The sample for this study includes students in grades 6–10 in 15 different middle and high schools in 5 school districts in the Central Valley area of California, during the two school years from 2008 to 2010. The researchers surveyed students at the beginning of the school year about whether they had a computer at home. After going through parental consent forms and all the paperwork, they ended up with about 1,100 students; they gave half of them a computer at the beginning of the year and the other half a computer at the end of the year. Everyone got a home computer–but the researchers could study the effect of having one a year earlier.

Having a computer at home increased computer use. Students without a computer at home (the "control group") reported using a computer (at school, the library, or a friend's house) about 4.2 hours per week, while students who now had a computer at home (the "treatment group") used a computer 6.7 hours per week. Of that extra computer time, "Children spend an additional 0.8 hours on schoolwork, 0.8 hours per week on games, and 0.6 hours on social networking."

Of course, any individual study is never the final say. Perhaps having access to a home computer for several years, rather than just one year, would improve outcomes. Perhaps in the future, computer-linked pedagogy will improve in a way where having a computer at home makes a demonstrable difference to education outcomes. Perhaps there is some overall benefit from familiarity with computers that pays off in the long run, even if it is not captured in any of the outcomes measured here. It's important to remember that this study is not about the use of computers in the classroom or in education overall, just about access to computers at home. My wife and I have three children ranging in age from grades 6 to 10–the same age group as represented in this study–and they have access to computers at home. The evidence suggests that while this may be more convenient for them in various ways, I shouldn't expect it to boost their reading and math scores.

(Full disclosure: The American Economic Journal: Applied Economics is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as the managing editor.)

The Myth Behind the Origins of Summer Vacation

Why do students have summer vacation? One common answer is that it's a holdover from when America was more rural and needed children to help out on the farm, but even just a small amount of introspection suggests that answer is wrong. Even if you know very little about the practical side of farming, think for just a moment about what are probably the most time-sensitive and busiest periods for a farmer: spring planting and fall harvest. Not summer!

I'm not claiming to have made any great discovery here: summer vacation didn't start as a result of following some typical pattern of agricultural production. Mess around on the web a bit, and you'll find more accurate historical descriptions of how summer vacation got started (for example, here's one from a 2008 issue of TIME magazine and here's one from the Washington Post last spring). My discussion here draws heavily on a 2002 book by Kenneth M. Gold, a professor of education at the City University of New York, called School's In: The History of Summer Vacation in American Public Schools.

Gold points out that back in the early 19th century, US schools followed two main patterns. Rural schools typically had two terms: a winter term and a summer one, with spring and fall available for children to help with planting and harvesting. The school terms in rural schools were relatively short: 2-3 months each. In contrast, in urban areas early in the first half of the 19th century, it was fairly common for school districts to have 240 days or more of school per year, often in the form of four quarters spread over the year, each separated by a week of official vacation. However, whatever the length of the school term, actual school attendance was often not compulsory.

In the second half of the 19th century, school reformers who wanted to standardize the school year found themselves wanting to lengthen the rural school year and to shorten the urban school year, ultimately ending up by the early 20th century with the modern school year of about 180 days. Indeed, Gold cites an 1892 report by the U.S. Commissioner of Education William Torrey Harris, which sharply criticized "the steady reduction that our schools have suffered" as urban schools had reduced their school days down toward 200 per year over the preceding decades.

With these changes, why did summer vacation arise as a standard pattern during the second half of the 19th century, when it had not been common in either rural or urban areas before that? At various points, Gold notes a number of contributing factors.  

1) Summer sessions of schools in the first half of the 19th century were often viewed as inferior by educators at that time. It's not clear that the summer sessions actually were inferior: for example, attendance didn't seem to drop off much. But the summer sessions were more often taught by young women, rather than by male schoolteachers.

2) School reformers often argued that students needed substantial vacation for their health. Horace Mann wrote that overtaxing students would lead to "a most pernicious influence on character and habits … not infrequently is health itself destroyed by overstimulating the mind." This concern over health seemed to have two parts. One was that schoolhouses were unhealthy in the summer: education reformers of the time reminded teachers to keep windows open, to sprinkle floors with water, and to build schools with an eye to good air ventilation. Mann wrote that "the small size, ill arrangement, and foul air, of our schoolhouses, present serious obstacles to the health and growth of the bodies and minds of our children." The other concern was that overstudy would lead to ill-health, both mental and physical. An article in the Pennsylvania School Journal expressed concern that children "were growing up puny, lank, pallid, emaciated, round-shouldered, thin-breasted," all because they were kept at study too long. Indeed, there was an entire medical literature of the time holding that "mental strain early in life" led to lifelong "impairment of mental and physical vigour."

Of course, these arguments were mainly deployed in urban areas as reasons for shortening the school year. In rural areas where the goal was to lengthen the school year, an opposite argument was deployed, that the brain was like a muscle that would develop with additional use.

3) Potential uses of a summer vacation for teachers and for students began to be discussed. For students, there were arguments over whether the brain was a muscle that should be exercised or relaxed during the summer. But there was also a widespread sense at the time, almost a social mythology, that summer should be a time for intense interaction with nature and outdoor play. For teachers, there was a sense that they also needed summer: as one writer put it, "Teachers need a summer vacation more than bad boys need a whipping." There was a sense in both urban and rural areas that something like a 180-day school year, with a summer vacation, would be the sort of job that would be attractive to talented individuals and well-paid enough to make teaching a full-time career. For teachers as well, there was a conflict as to whether they should spend summers working on lesson plans or relaxing, but the slow professionalization of teaching meant that more teachers were using the summer at least partially for work.

4) More broadly, Gold argues that the idea of a standard summer vacation as widely practiced by the start of the 20th century grew out of a tension in the ways that people thought about time in the late 19th century. On one side, time was viewed as an annual cycle, not just for agricultural purposes, but as a series of community practices and celebrations linked to the seasons. On the other side, time was becoming industrial, in a way that made seasons matter much less and the smooth coordination of production effort matter more. A standard school year with a summer vacation coordinated society along the lines of time while offering a respect for seasonality as well.

C-Sections: Trends and Comparisons

It would be comforting to believe that medical decisions are always made based on a clean, clear evaluation of the health of the patient. But when it comes to births by Caesarean section, it's hard to believe that this is the case. For example, here's a comparison of C-section rates across countries. (The figure was produced for a briefing book distributed by the Stanford Institute for Economic Policy Research, using OECD data.)

I am unaware of any evidence about the health of mothers and children that would explain why the U.S. rate of C-sections is similar to Germany and Portugal, twice as high as Sweden, but only 2/3 the rate of Mexico. China seems to be the world leader, with nearly half of all births occurring via C-section. 

In the U.S., the rate of C-sections has risen dramatically over time, as Michelle J.K. Osterman and Joyce A. Martin lay out in "Changes in Cesarean Delivery Rates by Gestational Age: United States, 1996–2011," a National Center for Health Statistics Data Brief released in June.

In the US, C-sections were 21% of all births in 1996, but 33% of all births by 2009, although the rate has not increased since then. To be sure, the calculation of costs and benefits for doing a C-section will evolve over time, as the surgery gradually becomes safer. But this sharp increase doesn't seem to be driven by health calculations. As Osterman and Martin point out, "the American College of Obstetricians and Gynecologists developed clinical guidelines for reducing the occurrence of nonmedically-indicated cesarean delivery and labor induction prior to 39 weeks." And the much higher rates of C-sections in countries where surgery can be less safe than in the U.S., like China and Mexico, are clearly not driven by concerns over the health of mother and child.

Some C-sections are necessary and even life-saving. But to me, the high and rising rates of C-sections have the feeling of a boulder rolling downhill: as C-sections have become more popular, they have become more expected and acceptable for a broader range of reasons, which in turn has made them even more popular, and so on. It won't be easy to push that boulder back up its hill.

Looking Back at the Baby Boom

The trend toward lower fertility rates seems like an inexorable long-run trend, in the U.S. and elsewhere. The U.S. total fertility rate–that is, the average number of births per woman–is about 2 right now, and the long-run projections published by the Social Security Administration assume that it will hold at about 2.0 over the next 75 years or so.

But I recently saw a graph that raised my eyebrows. Here is the fertility rate for U.S. women (albeit only for white women for the early part of the time period) going back to 1800, taken from a report done for the Social Security Administration. If you were making projections about fertility rates in about 1940, and you had access to this data, you might have predicted that the rate of decline would level off. But it would have taken a brash forecaster indeed to predict the fertility bump that we call the "baby boom."

Two thoughts here:

1) The baby boom was a remarkable demographic anomaly. It gave the U.S. economy a "demographic dividend" in the form of a higher-than-otherwise proportion of working-age adults for a time. But the aging of the boomers is already leading to financial tensions for government programs like Medicare and Social Security.

2) There's really no evidence at all that another baby boom might happen; but then, there was no evidence that the first one was likely to happen either. Someone who is wiser and smarter than I am about social trends–perhaps a science fiction writer–might be able to offer some interesting speculation about what set of factors and events could lead to a new baby boom.



Thoughts on the Diamond-Water Paradox

Water is necessary to sustain life. Diamonds are mere ornamentation. But getting enough water to sustain life typically has a low price, while a piece of diamond jewelry has a high price. Why does an economy put a lower value on what is necessary to sustain life than on a frivolity? This is the "diamond-water paradox," a hardy perennial of intro economics teaching since it was incorporated into Paul Samuelson's classic 1948 textbook. Here, I'll offer a quick review of the paradox as it originated in Adam Smith's classic The Wealth of Nations, and then some thoughts.

Adam Smith used the comparison of diamonds and water to make a distinction between what he called "value in use" and "value in exchange." The quotations here are taken from the version of the Wealth of Nations that is freely available on-line at the Library of Economics and Liberty website. Smith wrote:

"The word VALUE, it is to be observed, has two different meanings, and sometimes expresses the utility of some particular object, and sometimes the power of purchasing other goods which the possession of that object conveys. The one may be called 'value in use;' the other, 'value in exchange.' The things which have the greatest value in use have frequently little or no value in exchange; and on the contrary, those which have the greatest value in exchange have frequently little or no value in use. Nothing is more useful than water: but it will purchase scarce any thing; scarce any thing can be had in exchange for it. A diamond, on the contrary, has scarce any value in use; but a very great quantity of other goods may frequently be had in exchange for it."

In the classroom, the example is often then used to make two conceptual points. One is that economics is about value-in-exchange, and that value-in-use is a fuzzy concept that Smith (and the class) can set aside. The other is to explain the importance of scarcity and marginal analysis. Diamonds are high-priced because the demand is high relative to the limited quantity available. Water is inexpensive because it is typically fairly abundant, but if one is dying of thirst, then it would have a much higher value-in-exchange–conceivably even greater than diamonds.
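The scarcity-and-marginal-analysis point can be sketched numerically. The utility figures and decay rate below are hypothetical, chosen only to illustrate that price tracks marginal utility, not total utility:

```python
# Illustrative sketch with hypothetical numbers: under diminishing
# marginal utility, the value of the next unit falls as more units
# are already consumed.

def marginal_utility(first_unit_utility: float, decay: float, units_consumed: int) -> float:
    """Utility of the next unit, declining geometrically with consumption."""
    return first_unit_utility * (decay ** units_consumed)

# Water: enormous utility for the first units, but consumed in abundance.
water_mu = marginal_utility(1000.0, 0.5, units_consumed=20)

# Diamonds: modest utility, but scarce -- almost none consumed yet.
diamond_mu = marginal_utility(50.0, 0.5, units_consumed=0)

# At the margin, the scarce diamond is worth more than another unit
# of abundant water, despite water's far greater total usefulness.
assert diamond_mu > water_mu
```

With water abundant, its twenty-first unit is worth almost nothing at the margin; a first diamond, being scarce, commands a far higher marginal valuation. Reverse the abundance (as in the dying-of-thirst case) and the ranking flips.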

It now seems possible that Uranus and Neptune may have oceans of liquid carbon, with diamond icebergs floating in them. (For a readable overview, see here. For an underlying scientific paper, see J. H. Eggert et al.  2010. \”Melting temperature of diamond at ultrahigh pressure.\” Nature Physics 6, pp. 40-43.) On such planets, the scarcity and price of water and diamonds might well be reversed!

But at a deeper level, as Michael V. White pointed out 10 years ago in an article in the History of Political Economy, Smith wasn't thinking of this as a "paradox" ("Doctoring Adam Smith: The Fable of the Diamonds and Water Paradox," 2002, 34:4, pp. 659-683). White traces the references to the value and price of diamonds and water through Jeremy Bentham, David Ricardo, William Stanley Jevons, Alfred Marshall, and other luminaries. In various ways, these writers all deconstructed Smith's paragraph to argue that value could not depend on use alone, that "use" would vary according to scarcity, that supply must be included, and so on.

Of course, Smith was aware of the importance of scarcity. A few chapters later in the Wealth of Nations, he revisited the subject of the price and value of diamonds in several other passages. He wrote:

"Their highest price, however, seems not to be necessarily determined by any thing but the actual scarcity or plenty of those metals themselves. It is not determined by that of any other commodity, in the same manner as the price of coals is by that of wood, beyond which no scarcity can ever raise it. Increase the scarcity of gold to a certain degree, and the smallest bit of it may become more precious than a diamond, and exchange for a greater quantity of other goods. …The demand for the precious stones arises altogether from their beauty. They are of no use, but as ornaments; and the merit of their beauty is greatly enhanced by their scarcity, or by the difficulty and expence of getting them from the mine."

However, White documents that by the late 19th and early 20th century, references to Smith's diamond-water paradox tended to condemn Smith for hashing up the conceptual discussion so badly. For example, White describes a contribution from Paul Douglas at a University of Chicago symposium in 1926, 150 years after the publication of The Wealth of Nations. Here's White, summarizing Douglas's argument (citations omitted):

"Smith's 'failure' to correctly analyze utility was attributed to his personality, which, Douglas asserted, reflected a national stereotype. The inability to follow the 'hints' of his predecessors (Locke, Law, and Harris) was due to Smith's 'moralistic sense.' … In his thrifty Scottish manner, with its opposition to ostentation as almost sinful, he concluded that diamonds 'have scarce any value in use.' The stingy Scot had thus managed to 'divert' English (!) political economists 'into a cul-de-sac from which they did not emerge … for nearly a century.' Smith on value and distribution was embarrassing: 'it might seem to be the path of wisdom to pass these topics by in discreet silence.'"

Paul Samuelson was a student of Paul Douglas at the University of Chicago, and Samuelson inserted the diamond-water question into his 1948 textbook, where it has remained a standard example ever since–and for all the ambiguity and complexity, I think a useful piece of pedagogy.

American Women and Marriage Rates: A Long-Term View

Julissa Cruz looks at some long-run patterns of marriage rates for U.S. women in \”Marriage: More Than a Century of Change,\” written as one of the Family Profile series published by the National Center for Family and Marriage Research at Bowling Green State University. Here are some striking patterns.

"The proportion of women married was highest in 1950 at approximately 65%. Today, less than half (47%) of women 15 and over are married—the lowest percentage since the turn of the century."

"The proportion of women married has declined among all racial/ethnic groups since the 1950s. This decline has been most dramatic for Hispanic and Black women, who experienced 33% and 60% declines in the proportion of women married, respectively."

Back in 1940, education level made relatively little difference to the likelihood that a woman was married, but women with less education were more likely to be married. Those patterns have now changed. Education levels now show a much larger correlation with whether a woman is married, and women with less education have become much less likely to be married.

I'll forbear from offering a dose of pop sociology about the changing nature of marriage and what it all means. But clearly, the changes over recent decades are substantial.