"Big Oil"–Actually Small and Vulnerable

When it comes to Big Oil, I run into a lot of people who are still living in the 1960s. They still think that Exxon and Mobil and Shell and BP and a few others dominate world oil markets. The October 29 issue of the Economist has a nice article (“Big Oil’s bigger brothers”), which puts the modern reality of Big Oil in context.


When it comes to reserves, the big country-owned firms control 80% of the world’s oil, while the private oil companies are bit players.



ExxonMobil, Shell and BP do continue to have enormous expertise in getting to hard-to-reach oil. When world oil prices are high, and this costly technology gets put to work, they can make high profits. But their technological leadership is under continual challenge both from the state-owned oil companies and from small technology-intensive private firms–often working together. Moreover, Big Oil is very vulnerable to a drop in oil prices. As the Economist writes:


“Life is getting harder for the supermajors. Their edge over their rivals—the ability to extract oil from difficult places—is terrifically useful while prices are high. But since it is terrifically costly to extract oil from difficult places, their competitive advantage fizzles if oil prices fall. If it does, their bumper profits could vanish like a pool of petrol into which a lighted match has been carelessly dropped.”

Recognizing Non-formal and Informal Learning

When information is imperfect, markets may not work well. If consumers are highly uncertain about the quality of what they are buying, they become less likely to buy. If a lender is highly uncertain about whether a potential borrower will repay a loan, the lender is less likely to make that loan. The more uncertain that an employer is about the quality of a potential employee, the less likely the employer is to hire that person. This problem of imperfect information in labor markets is especially severe now, with an unemployment rate that has been between 8.8% and 10.1% since April 2009. If someone hasn’t been working, how can an employer judge their skills and talents?

Is there a way for workers with experience to demonstrate their skills and knowledge that doesn’t involve taking a class or getting a degree? There are some recent experiments along these lines in the U.S. economy, but it turns out that several other countries have already developed processes for recognizing non-formal and informal learning.

Jeff Selingo, who is editorial director of the Chronicle of Higher Education, tackled the question of whether colleges might lose their near-monopoly power over anointing people with job credentials in a short article last month. Selingo writes:

The day when other organizations besides colleges provide a nondegree credential to signify learning might not be as far off as we think. One interesting project on this front is an effort to create “digital badges,” which would allow people to demonstrate their skills and knowledge to prospective employers without necessarily having a degree. Badges could recognize, for example, informal learning that happens outside the classroom; “soft skills,” such as critical thinking and communication; and new literacies, such as aggregating information from various sources and judging its quality. And in a digital age, the badge could include links back to documents and other artifacts demonstrating the work that led to earning the stamp of approval.

Until now an interesting-but-somewhat-fringe idea, digital badges received a big boost last week, when the John D. and Catherine T. MacArthur Foundation announced a $2-million competition to create and develop badges and a badge system. (The contest is also supported by Mozilla and the Humanities, Arts, Sciences, and Technology Advance Collaboratory, otherwise known as Hastac.)

At the announcement in Washington, the U.S. secretary of education, Arne Duncan, called badges a “game-changing strategy” and said his agency would join with the Department of Veterans Affairs to award $25,000 for the best badge prototype that serves veterans looking for well-paying jobs. Under a badge system, colleges would no longer be the sole providers of a credential. While badges could be awarded by traditional colleges, they could also be given out by professional organizations, online and open-courseware providers, companies, or community groups.

In the Autumn 2011 issue of the Wilson Quarterly (not available free online), Kevin Carey writes in an essay, “College for All?”, about how Western Governors University is awarding degrees based on competency, not classroom hours. Carey writes:

“While American higher education is diverse in many ways, encompassing a variety of missions and constituencies, it is remarkably undiverse when it comes to awarding degrees. Every institution grants the same two- and four-year credentials that signify little more than how many hours the bearer sat in classrooms. Newer institutions such as Western Governors University (WGU) are turning that equation upside-down, awarding degrees when students demonstrate defined competencies, regardless of how long it took to achieve them.”

“WGU is a fully accredited nonprofit institution founded in the 1990s by the governors of 19 western states that now enrolls 25,000 mostly adult students online. It currently focuses on occupation-specific fields such as education, business, and health care. But efforts are afoot to expand the model into more traditional academic fields.”

“The WGU experiment points to a future public education system in which public subsidies are tied to commonly understood goals for learning, not how old the student happens to be or where he or she happens to live. In increasingly digital learning environments, it will be possible to track, store, and summarize evidence of learning in ways that render traditional time-based credentials obsolete.”

On the international front, Patrick Werquin wrote an OECD report on “Recognising Non-Formal and Informal Learning: Outcomes, Policies and Practices,” which was published in spring 2010 (and to my knowledge is not freely available on-line). Some highlights (omitting some references for readability):

“All data on lifelong learning indicate that the highest qualification held by the great majority of people is obtained in the formal system of education and initial training, which in the case of many adults occurred some time ago. This is confirmed by other sources revealing that almost 90% of adult learning initiatives do not lead to a qualification, even though, depending on the country, 20-60% of individuals who embark on learning do so primarily to obtain one. … There is therefore a patent lack of visibility as regards people’s real knowledge, skills and competences, since those acquired during their working lives or other activities remain invisible. This lack of visibility is all the more significant for those who left the initial education and training system many years earlier. It is also especially detrimental to those with a low level of qualification …”

“More recently, OECD (2007) ranked the recognition of non-formal and informal learning outcomes high on a list of 20 mechanisms identified as potentially capable of motivating learning. At the same time, major international organizations are showing a close interest in the recognition of learning outcomes. All these studies point in the same direction: formal learning alone cannot account for all of the learning encompassed by the concept of lifelong learning. There is thus no shortage of studies that argue for the recognition of non-formal and informal learning outcomes. …”

Werquin’s report for the OECD lists mechanisms for recognizing non-formal and informal learning in 21 countries–notably, with no mention of any such effort in the United States. Two countries with especially well-developed policies along these lines are Ireland, which has certificates for Recognition of Prior Learning (RPL), Accreditation of Prior Experiential Learning (APEL), Recognition of Current Competences (RCC), Learning Outside Formal Teaching (LOFT) and others, and Norway, which has a “skills passport” system.

These systems for recognizing non-formal and informal learning vary considerably across countries. I can imagine a number of practical concerns. But for many Americans, maybe especially those with lower and medium skill levels, their educational credentials (often from long ago) don’t reveal their true skill set. For many of them, going back to school for some additional degree or certificate is impractical, and frankly a waste of time, because whether or not they have a piece of paper from an educational institution to prove it, they have already acquired the skills they need for many jobs. America should be thinking more about ways of connecting potential workers to the labor market that don’t involve telling those who don’t flourish in school that they need to keep attending. A couple of weeks ago I posted on Apprenticeships for the U.S. Economy as one such option. Ways to recognize competences achieved through non-formal and informal learning seem like a complementary approach.

What if Country Size Were Relative to Population? A World Map

Joseph Chamie, former director of the UN Population Division and now Director of the Center for Migration Studies, is interviewed in the Third Quarter 2011 issue of Southwest Economy, a publication of the Dallas Fed:
On the Record: Shifting from World Population Explosion to Global Aging–A Conversation with Joseph Chamie.

The interview includes one of those usefully provocative maps: What would a map of the world look like if it were distorted so that the size of every country is relative to its population? The patterns are expected: In North America, Canada shrinks and Mexico grows. In the rest of the world, Russia shrinks and China and India grow. Japan looks a lot larger when weighted by population; Australia looks smaller. Africa appears notably larger than South America. For me, such maps also emphasize that U.S. economic growth over the next few decades is likely to be related to how extensively the American economy participates in the growth that is happening in the rest of the world.


Chamie also points out that the growth rate of world population is slowing dramatically. One of the next main demographic preoccupations will be population aging. Soon, for the first time in world history, the number of people over 65 in the world will exceed the number of children.

“Two thousand years ago, world population was estimated at about 300 million. It reached the first billion mark at the beginning of the 19th century—the estimate is about 1804—when Thomas Jefferson was U.S. president. The second billion mark was reached in 1927. We had a tripling of world population from 1927 to near the end of the 20th century, when it reached 6 billion. We’re now approaching 7 billion people.

Why did that happen? It’s because we had this wonderful thing occur: a decline in mortality rates. This decrease in mortality is humanity’s greatest achievement. Every government wishes to see lower mortality and longer life. The world benefited from modern medicine and public health; antibiotics, of course; also better nutrition, better facilities, better working conditions. What lagged behind were changes in birth rates. This difference between birth rates and death rates gave rise to what is commonly called the population explosion. We reached a peak population growth rate of about 2.1 percent in the late ’60s, and we reached the peak annual increase of about 87 million people in the late ’80s. The latest United Nations projections show a world of about 10.1 billion people by the end of the 21st century. …”

“While the 20th century was the century of demographic growth (and this growth will continue through the 21st century—we are likely to add 2 to 3 billion people), the world’s population is aging. Very soon, we will see a reversal where the number of children, which has historically been more than the number of people above 65, will become less than the elderly. The aging of the world’s population will be pervasive; it will affect every household. It will affect the economy, social interactions, voting patterns, lifestyles.”

Lorenz curves and Gini coefficients: CBO #3

This is the third of three posts based on the Congressional Budget Office report “Trends in the Distribution of Household Income Between 1979 and 2007.” The first was Incomes of the Top 1%, and the second was Federal Redistribution is Dropping.

This post focuses on explaining some basic tools for measuring inequality. The Lorenz curve offers an intuitively clear picture of inequality. The Gini coefficient, which is based on the curve, offers a way of measuring inequality across the income distribution as a single number–and thus is often used in graphs and figures about inequality. The CBO report has a nice clear explanation of these topics.

The Lorenz curve

The Lorenz curve was developed by an American statistician and economist named Max Lorenz when he was a graduate student at the University of Wisconsin. His article on the topic, “Methods of Measuring the Concentration of Wealth,” appeared in Publications of the American Statistical Association, Vol. 9, No. 70 (June 1905), pp. 209-219. The CBO report explains it this way:

“The cumulative percentage of income can be plotted against the cumulative percentage of the population, producing a so-called Lorenz curve (see the figure). The more even the income distribution is, the closer to a 45-degree line the Lorenz curve is. At one extreme, if each income group had the same income, then the cumulative income share would equal the cumulative population share, and the Lorenz curve would follow the 45-degree line, known as the line of equality. At the other extreme, if the highest income group earned all the income, the Lorenz curve would be flat across the vast majority of the income range, following the bottom edge of the figure, and then jump to the top of the figure at the very right-hand edge.

Lorenz curves for actual income distributions fall between those two hypothetical extremes. Typically, they intersect the diagonal line only at the very first and last points. Between those points, the curves are bow-shaped below the 45-degree line. The Lorenz curve of market income falls to the right and below the curve for after-tax income, reflecting its greater inequality. Both curves fall to the right and below the line of equality, reflecting the inequality in both market income and after-tax income.”
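The CBO’s verbal description translates directly into a few lines of code. Here is a minimal sketch of computing the points on a Lorenz curve; the five household incomes are made up purely for illustration:

```python
# Compute the points of a Lorenz curve from a list of incomes.
def lorenz_points(incomes):
    """Return (cumulative population shares, cumulative income shares)."""
    x = sorted(incomes)          # poorest to richest
    total = sum(x)
    pop, inc, running = [0.0], [0.0], 0.0   # curve starts at the origin (0, 0)
    for i, v in enumerate(x, start=1):
        running += v
        pop.append(i / len(x))
        inc.append(running / total)
    return pop, inc

# Five hypothetical households.
pop, inc = lorenz_points([10, 20, 30, 40, 100])
# The bottom 80% of these households receive half of total income:
print(pop[4], inc[4])  # -> 0.8 0.5
```

Plotting `inc` against `pop` and comparing it with the 45-degree line reproduces the bow-shaped figure the CBO describes.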

The Gini coefficient

The Gini coefficient was developed by the Italian statistician (and noted fascist thinker) Corrado Gini in a 1912 paper written in Italian (and to my knowledge not freely available on the web). The intuition is straightforward (although the mathematical formula will look a little messier). On a Lorenz curve, greater equality means that the line based on actual data is closer to the 45-degree line that shows a perfectly equal distribution. Greater inequality means that the line based on actual data will be more “bowed” away from the 45-degree line. The Gini coefficient is based on the area between the 45-degree line and the actual data line. As the CBO writes:

“The Gini index is equal to twice the area between the 45-degree line and the Lorenz curve. Once again, the extreme cases of complete equality and complete inequality bound the measure. At one extreme, if income was evenly distributed and the Lorenz curve followed the 45-degree line, there would be no area between the curve and the line, so the Gini index would be zero. At the other extreme, if all income was in the highest income group, the area between the line and the curve would be equal to the entire area under the line, and the Gini index would equal one. The Gini index for [U.S.] after-tax income in 2007 was 0.489—about halfway between those two extremes.”
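As a sketch of that intuition, the Gini index can be computed as one minus twice the (trapezoid-rule) area under the Lorenz curve; the two test cases below mirror the CBO’s extremes of complete equality and complete inequality:

```python
# Gini index: twice the area between the 45-degree line and the Lorenz curve.
def gini(incomes):
    x = sorted(incomes)
    n, total = len(x), sum(x)
    # Cumulative income shares at each population point, starting from (0, 0).
    shares, running = [0.0], 0.0
    for v in x:
        running += v
        shares.append(running / total)
    # Area under the Lorenz curve by the trapezoid rule (equal steps of 1/n).
    area = sum((shares[i] + shares[i + 1]) / 2 for i in range(n)) / n
    return 1.0 - 2.0 * area

print(gini([1, 1, 1, 1]))  # complete equality -> 0.0
print(gini([0, 0, 0, 4]))  # one household gets everything -> 0.75 (tends to 1 as n grows)
```

Fed the full U.S. after-tax distribution for 2007, the CBO’s analogous calculation yields the 0.489 quoted above.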

Federal Redistribution is Dropping: CBO #2

This is the second of three posts on the recent Congressional Budget Office report “Trends in the Distribution of Household Income Between 1979 and 2007.” The first post on Incomes of the Top 1% is here, while the third explains the concepts of the Lorenz Curve and the Gini Coefficient.

The federal government can redistribute income in two ways: by taxing those with high incomes relatively more, and by making transfer payments to those with lower incomes. Using the Gini index (explained in the third post in this grouping) as a measure of inequality, the CBO reports: “The dispersion of after-tax income in 2007 is about four-fifths as large as the dispersion of market income. Roughly 60 percent of the difference in dispersion between market income and after-tax income is attributable to transfers and roughly 40 percent is attributable to federal taxes. The redistributive effect of transfers and federal taxes was smaller in 2007 than in 1979 …”

Redistribution through federal taxes

Here are three figures showing average federal tax rates paid by the top 1% of the income distribution in each year from 1979 to 2007, by the 81st to 99th percentiles, the 21st to 80th percentiles, and the lowest 20%. The first graph shows average payments as a share of income for individual income tax, the second shows payroll taxes, and the third shows all federal taxes combined. Here are a few patterns that jump out.

  • The top 1% pays more of its income on average in income taxes, but much less in payroll taxes. This is because the income on which Social Security payroll taxes must be paid is capped, so such taxes are a smaller share of income for those with very high incomes. 
  • With income taxes, the lowest quintile pays on average a negative tax rate: that is, with refundable tax credits, they receive more from the federal government through the tax code than they pay. 
  • Total taxes paid as a share of income have dropped off somewhat for all groups in the last decade or so; if one looks back to the mid-1990s, the drop in tax rates for the top 1% looks larger than for other groups.
  • The overall patterns here seem to be that the federal tax code as a whole became less progressive in the 1980s, more progressive in the 1990s, and since then has either not changed or become slightly less progressive, depending on what statistical measure one chooses to emphasize.
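The payroll-tax point in the first bullet above is easy to see numerically. The sketch below uses a 6.2% rate and a $106,800 wage base, roughly the employee side of the Social Security tax in 2011; the exact figures matter less than the shape of the result:

```python
# Why a capped payroll tax takes a smaller share of income at the top:
# the tax applies only to wages up to the cap, so the effective rate
# declines once income passes the wage base.
RATE, CAP = 0.062, 106_800  # illustrative: ~2011 Social Security employee tax

def payroll_share(wage):
    """Payroll tax paid as a share of total wage income."""
    return RATE * min(wage, CAP) / wage

for wage in (50_000, 106_800, 1_000_000):
    print(f"${wage:>9,}: {payroll_share(wage):.3%}")
# ->
# $   50,000: 6.200%
# $  106,800: 6.200%
# $1,000,000: 0.662%
```

Below the cap the tax is a flat share of wages; above it, the share falls in proportion to income, which is exactly the pattern in the CBO’s payroll-tax figure.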

These sorts of graphs always turn my mind to Warren Buffett, and his claim that he pays a smaller share of his income in taxes than his secretaries do. For example, see Buffett’s August 14 article, “Stop Coddling the Super-Rich,” in the New York Times, where he writes: “Last year my federal tax bill — the income tax I paid, as well as payroll taxes paid by me and on my behalf — was $6,938,744. That sounds like a lot of money. But what I paid was only 17.4 percent of my taxable income — and that’s actually a lower percentage than was paid by any of the other 20 people in our office. Their tax burdens ranged from 33 percent to 41 percent and averaged 36 percent.”
What Buffett pays as a share of income sounds plausible to me: he is well-known for taking a fairly small annual salary and then receiving most of his income in the form of gains from his investments. Because many of the investments have been held for long periods of time, they are subject to a lower capital gains tax rate. But the tax burdens that Buffett claims for his staff look unrealistically high. Let’s say his office staff are in the 81st-99th percentiles of the income distribution. Average tax rates for that group are in the range of 22-23% in recent years. Buffett’s staff might face a marginal tax rate of 36% or 41%, depending on rules about phase-outs of deductions and the like. But if Buffett’s staff really are paying an average federal tax rate of 36% as a share of total income, they need access to better accountants or tax lawyers.
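The gap between marginal and average rates is worth making concrete. The bracket schedule below is purely hypothetical, not the actual tax code; the point is that a taxpayer whose top (marginal) rate is 36% pays a much lower average rate on total income:

```python
# Average vs. marginal tax rates under a hypothetical progressive schedule:
# 10% on the first $50,000, 25% on the next $100,000, 36% above that.
BRACKETS = [(50_000, 0.10), (100_000, 0.25), (float("inf"), 0.36)]

def tax_owed(income):
    """Apply each bracket's rate only to the income falling inside it."""
    owed, remaining = 0.0, income
    for width, rate in BRACKETS:
        taxed = min(remaining, width)
        owed += taxed * rate
        remaining -= taxed
        if remaining <= 0:
            break
    return owed

income = 200_000
# Marginal rate is 36%, but the average rate is far lower:
print(round(tax_owed(income) / income, 4))  # -> 0.24
```

This is why a 36% *average* burden would require essentially all of a staffer’s income to be taxed at the top rate, which the bracket structure prevents.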

Redistribution through federal transfer payments

Federal government transfer payments averaged about 10-12% of household market income from 1979 to 2007. Spending in this category is heavily driven by Social Security and Medicare. In recent years, about half of federal transfer spending has been Social Security, and about a third health-related programs like Medicare and Medicaid. The rest is programs like unemployment insurance and welfare.

The share of federal transfer spending on the elderly is rising. In 1979 about 62% of all federal transfer payments went to elderly childless households, while about 19% went to nonelderly childless households and another 19% to households with children. By 2007, 69% of all federal transfer payments went to elderly childless households. The share going to nonelderly childless households stayed about the same, and the share going to households with children fell to about 11%.

Of course, Social Security and Medicare are not means-tested programs, so as they took a larger share of the federal transfer pie, the share going to the poor declined. Not coincidentally, back in 1979 about 54% of federal transfers went to households in the lowest quintile of income; by 2007, only about 36% of federal transfers went to households in the lowest quintile of income.

Summing up the redistribution by federal taxes and transfers
Here’s a final figure showing how federal transfers and taxes affect income inequality, measured by the percentage by which these policies reduce the Gini index of inequality. The extent to which these policies reduce income inequality dropped in the 1980s, rose in the early 1990s, dropped in the late 1990s, rose in the early 2000s, and has fallen since then. Interestingly, the diminished effect of federal redistribution since the mid-1990s is, by this measure, much more traceable to changes in transfer payments than to changes in the progressivity of taxes.

Income of the Top 1%: CBO #1

The Congressional Budget Office has put out a report on “Trends in the Distribution of Household Income Between 1979 and 2007.” It’s a treasure trove of useful figures and explanations. I can’t resist offering a bunch of highlights, which I will divide into three posts–with this as the first one.

  1. The gains to the top 1% of the income distribution. 
  2. How the federal role in redistributing income through taxes and transfers has weakened in recent decades. 
  3. An explanation of the Lorenz curve and the Gini coefficient, for those who would like to understand some terminology that is often used when discussing inequality. 

The basic concept of the top 1% ranked by annual income is often muffed in the media. For example, a Washington Post headline about this report read: “Nation’s wealthiest 1 percent triple their incomes, according to CBO report.” This is incorrect in two ways. First, wealth is what you have accumulated over time, so it is not the same as income, which is what is received in a given year. The CBO report says nothing about the “wealthiest” 1 percent. Second, referring to the top 1% as a fixed group is incorrect: who is in the top 1 percent will change over time, especially over the period of nearly three decades from 1979-2007. Some people make the top 1% when they get a big annual bonus, but not in other years. For example, think about those in the top 1% who were in the 45-50 age bracket in 2007. Back in 1979, they would have been 28 years younger, in the 17-22 age bracket, and very few of them would have been in the top 1% of income at that time. Conversely, those who were in the top 1% and the 45-50 age bracket in 1979 would be 28 years older by 2007, and many of those in the 73-78 age bracket would have retired.

While it is certainly true that the top 1% is a group that evolves over time, this point shouldn’t be pushed too hard. Inequality as measured by annual income is rising; however, I don’t know of any evidence that mobility between broad income groups has been rising. Greater inequality isn’t being offset by greater mobility.

Here’s a graph showing the cumulative percentage growth in after-tax, after-transfer income for various income groups, measured on an annual basis. Income growth is slowest for those in the lowest quintile (or fifth) of the income distribution, and then faster in ascending order for those in the 21st to 80th percentiles, the 81st to 99th percentiles, and the top 1%. The percentage gains for the top 1% are remarkably higher than for the other groups.

One can look at this underlying data in a different way: What share of total market income did these groups receive in 1979, compared to 2007? And what share of after-tax, after-transfer income did these groups receive in 1979, compared to 2007? The timeframe is a useful one because both endpoints, 1979 and 2007, are business cycle peaks just before deep recessions, so patterns over this time can’t be attributed to comparing a recession year to a nonrecession year. The overall pattern is fairly clear. Whether looking at market income or at after-tax, after-transfer income, the 81st-99th percentiles received about the same share of income in 2007 as in 1979. The top 1% got a notably larger share. Each of the lower four-fifths of the income distribution got a lower share.

There is a modest rise in inequality of annual incomes even leaving out the top 1%, but most of the increase in annual income inequality is being driven by the rising incomes of the top 1%. It’s perhaps useful to add that pointing out the fact of rising inequality doesn’t say anything about underlying causes or possible policies. For a July 18 post on causes of inequality, see “Causes of Inequality: Supply and Demand for Skilled Workers.” For an overview of philosophical and economic arguments about inequality, see the September 30 post, “A Critique of the Arguments for Inequality.”

Financial Transactions Tax: The Vatican vs. the IMF

In a hedged and roundabout way, the Vatican has endorsed a financial transactions tax. The announcement came in a Note on financial reform from the Pontifical Council for Justice and Peace, which is one branch within the official governing structure of the Catholic Church. The Note advocates a “world Authority” and “global monetary management,” but also admits: “However, a long road still needs to be travelled before arriving at the creation of a public Authority with universal jurisdiction.” When the Note gets down to more immediate suggestions, it offers three:

“On the basis of this sort of ethical approach, it seems advisable to reflect, for example, on: a) taxation measures on financial transactions through fair but modulated rates with charges proportionate to the complexity of the operations, especially those made on the “secondary” market. Such taxation would be very useful in promoting global development and sustainability according to the principles of social justice and solidarity. It could also contribute to the creation of a world reserve fund to support the economies of the countries hit by crisis as well as the recovery of their monetary and financial system; b) forms of recapitalization of banks with public funds making the support conditional on “virtuous” behaviours aimed at developing the “real economy”; c) the definition of the domains of ordinary credit and of Investment Banking. This distinction would allow a more effective management of the “shadow markets” which have no controls and limits.”

I will sidestep here the second and third points, on what it means to have “virtuous” bankers who develop the “real economy” and what kind of financial regulation is appropriate for all the institutions in a modern economy. But on the issue of a financial transactions tax, Thornton Matheson of the IMF offers a nice review of the economics in “Taxing Financial Transactions: Issues and Evidence,” Working Paper WP/11/54, released last March. Here are a few highlights (footnotes and citations omitted):

Financial transactions have increased substantially

“Transaction costs have indeed fallen dramatically across financial markets over the past 35 years due to advances in information technology, deregulation, and product innovation. In the U.S. equity market, commission deregulation (1975) and decimalization (2000) both substantially lowered transactions costs. Bid/ask spreads on the NYSE now average about 0.1 percent, vs. 1.3 percent in the mid-1980s. In the foreign exchange market, bid-ask spreads for major currencies are currently as little as 1–4 basis points, half the level of a decade ago. Spreads in interest rate futures and swaps are also on the order of a few basis points. Development of the interest rate and credit default swap markets has enabled investors to tailor their fixed-income exposure more cheaply than by trading the underlying bonds.”

“As economic theory would predict, this steep decline in financial transaction costs has produced an increase in financial transactions relative to real activity. The value of world financial transactions, which was 25 times world GDP in 1995, rose to 70 times that value by 2007. The growth of transactions has been concentrated in derivatives markets, which often have much lower transaction costs relative to notional values than spot markets. Growth in interest rate and equity derivatives transactions has far outstripped growth in business investment in North America and Europe, while the ratio of spot transactions to investment has remained fairly steady. As theory would also predict, lower transactions costs have particularly spurred short-term trading. The past decade has witnessed explosive growth in algorithm or computer-driven trading that relies on high-speed transactions. In 2009, algorithm trading accounted for at least 60 percent of U.S. equity trading volume (up from about 30 percent in 2006), and 30–40 percent of European and Japanese equity trading. Algorithm trading also accounts for 10–20 percent of foreign exchange trading volume, 20 percent of U.S. options volume, and 40 percent of U.S. futures volume.”

Many countries already have some version of a financial transactions tax at a low level

In the United States, for example: “The United States’ Securities and Exchange Commission (SEC), its equity market regulator, imposes a 0.17 basis point charge on stock market transactions to fund its regulatory operations. … New York State levies a tax of up to five cents per share on within-state stock trades with a cap of $350 per trade …” However, the trend in recent decades is that the level of such taxes has been dropping around the world.

The case for a financial transactions tax is weak

“The potentially large base of an STT [security transactions tax] promises an opportunity to raise substantial revenue with a low-rate tax. Current estimates of the revenue potential of a low-rate (0.5–1 basis point) multilateral CTT [currency transactions tax] on the four major trading currencies suggest that it could raise about $20–40 billion annually, or roughly 0.05 percent of world GDP. A one basis point STT on global stocks, bonds and derivatives is estimated to raise approximately 0.4 percent of world GDP.

“However, financial transactions taxes create many distortions that militate against using an STT to raise revenue. STTs reduce security values and raise the cost of capital for issuers, particularly issuers of frequently traded securities. STTs also reduce trading volume: studies of existing STTs and other transaction costs suggest that the elasticity of trading volume with respect to transactions costs ranges broadly between -0.4 and -2.6, depending on the market studied. Markets with products for which there are more untaxed substitutes, such as derivatives or foreign listings, have higher elasticities. Lower trading volume in turn reduces liquidity and slows price discovery.

“An STT is also an inefficient instrument for regulating financial markets and preventing bubbles. There is no convincing evidence that STTs lower short-term price volatility, and high transaction costs are likely to increase it. Current economic thought attributes asset bubbles to excessive leverage, not excessive transactions per se. …

“The short-run incidence of an STT would likely be quite progressive, as securities values fell in response to the tax. Financial activity, particularly short-term trading, would contract, lowering financial sector profits. Financial firms would likely pass the cost of an STT on surviving activity on to clients, which include not only wealthy individuals and corporations but also charities and pension and mutual funds. In the medium term, release of resources from the financial sector could lower the equilibrium return to highly skilled labor. In the long run, the burden of an STT depends on the elasticity of the capital supply: Like the corporate income tax, the higher financing costs imposed by an STT will fall more heavily on labor than on capital owners as the elasticity of the supply of capital increases.”

This last argument points out that in the long run, if a financial transactions tax makes it more costly to raise capital, then the capital stock will be lower than it would otherwise be. As a result, workers in that country, who will have less capital to work with, end up bearing much of the burden of the tax.

If the goal is to discourage asset bubbles, tax changes to discourage leverage are more appropriate
“To discourage leverage at the institutional level, a tax on balance sheet debt (net of insured deposits and equity), such as the financial sector contribution (FSC), could be used. The FSC could be tailored to tax systemically important institutions more heavily, since their risks pose a greater danger to the broad economy. Another means of combating leverage at the firm level is reform of the corporate income tax (CIT), which encourages debt over equity finance due to its disparate treatment of interest and earnings. To discourage debt finance while raising revenue, interest deductibility could be reduced or even eliminated, as in a comprehensive business income tax …”

If the goal is to raise tax revenue from the financial sector, think VAT or FAT

“To tax the financial sector, the base of an existing VAT [value-added tax] could be broadened to include fee-based financial services, or an FAT could be introduced.” A FAT is a “financial activities tax,” which would be levied on the sum of financial institutions’ profits and wages.

The Natural Resources Curse

Jeffrey Frankel has a readable overview of the arguments over “The Curse: Why Natural Resources are Not Always a Good Thing,” in the Fourth Quarter 2011 issue of the Milken Institute Review, available (with free registration) here. Frankel writes:

“It is striking how often countries that are rich with oil, minerals or fertile land have failed to grow more rapidly than those without. Angola, Nigeria and Sudan are all awash in petroleum, yet most of their citizens are bitterly poor. Meanwhile, East Asian economies, including Japan, Korea, Taiwan, Singapore and Hong Kong, have achieved Western-level standards of living despite being rocky islands (or peninsulas) with virtually no exportable natural resources. This is the phenomenon known to economists as the “natural resources curse.” The evidence for its existence is more than anecdotal. The curse shows up in econometric tests of the determinants of economic performance across a comprehensive sample of countries.

Consider the figure on page 31, which plots the relationship between nonagricultural resource exports as a portion of total goods exports and average economic growth rates over the past four decades. The usual suspects – China, Korea, Thailand – are conspicuously high in growth and low in natural resources. Likewise, resource-rich Liberia, Venezuela and Zambia have little to show for their wealth in terms of economic development. The negative correlation is not very strong because some countries – think Chile and Saudi Arabia – have managed to have it both ways. But the data certainly suggest no positive correlation between natural resource wealth and economic growth.”

Frankel offers a wide array of examples and evidence on the natural resources curse. Along the way, he reviews the possible reasons why natural resources might hinder economic growth: 1) Commodity prices fluctuate a lot, so an economy that depends on commodity exports will be hit by a series of shocks; 2) An economy focused on natural resources diverts land, labor, and capital from other sectors of the economy, like manufacturing; 3) Natural resource endowments can foster corruption and weak institutions, as different groups jostle for control of the income from the resources; 4) High exports of natural resources can lead to currency appreciation which then disadvantages all other exports; and 5) Natural resources can be depleted.

He also discusses policies that countries can use to reduce the risk of these pitfalls.

• Hedge export proceeds on derivatives markets (in particular, options markets), as Mexico has done with oil. That way, exporting countries can plan government budgets around firm expectations of revenues and, as important, dampen shocks caused by unanticipated changes in price.

• Denominate debt in terms of the world price of the export commodity. Exporting countries can (and, in some cases, should) borrow abroad, for example, to develop infrastructure. By writing debt contracts in which the principal is indexed to the price of their export commodity, borrowing countries can share the risk of commodity price volatility with lenders. …

• Adopt Chilean-style fiscal rules, which prescribe a structural budget surplus and use independent panels of experts to determine what future price of the export commodity – in Chile’s case, copper – should be assumed in forecasting the structural budget. Thus, when the independent experts determine that copper prices have fallen below long-term expectations, the government is authorized to offset the impact with temporary fiscal stimulus. But when copper prices are above the long term trend, and the bonanza is determined to be entirely temporary, the government must save the proceeds.

• Intervene in foreign exchange markets to dampen upward pressure on an exporter’s currency in the early stages of commodity booms, while seeking to prevent the money supply from swelling. Subsequently, allow gradual appreciation when the commodity boom has proved to be long-lived or when domestic inflation is no longer contained.

• Establish transparent sovereign wealth funds with the proceeds of commodity exports in order to assure that future generations share the bounty. Botswana’s Pula Fund, built on earnings from the sale of diamonds, is a good model. The fund, invested entirely in securities denominated in other currencies, serves both as a sinking fund to offset the depletion of diamonds and as a buffer to smooth economic fluctuations.

• Make lump-sum per capita distributions of revenues from mineral exports in order to make sure the money doesn’t end up in the bank accounts of corrupt officials.

Sunk Costs: Why You SHOULD Pay Attention

Every introductory course in economics points out that rational actors should ignore sunk costs: look forward at costs and benefits, not backward at things that can’t be changed. But in the November 2011 issue of the American Economic Journal: Microeconomics, Sandeep Baliga and Jeffrey C. Ely offer an argument as to why rational actors should pay attention to sunk costs in “Mnemonomics: The Sunk Cost Fallacy as a Memory Kludge.” Those who want the mathematical model and details on the follow-up laboratory experiment will need to go through a library to get the article. But in the opening pages, the authors do a nice job of providing the basic intuition.

They start with a reminder of some of the classic evidence on people paying attention to sunk costs: theater tickets and production of the Concorde supersonic jet. They write: “In a classic experiment, Hal R. Arkes and Catherine Blumer (1985) sold theater season tickets at three randomly selected prices. Those who purchased at the two discounted prices attended fewer events than those who paid the full price. Hal Arkes and Peter Ayton (1999) suggest those who had “sunk” the most money into the season tickets were most motivated to use them. R. Dawkins and T. R. Carlisle (1976) call this behavior the Concorde effect. France and Britain continued to invest in the Concorde supersonic jet after it was known it was going to be unprofitable. This so-called “escalation of commitment” results in an over-investment in an activity or project.”

Here is their argument for why paying attention to sunk costs can make sense: “We provide a theory of sunk cost bias as a substitute for limited memory. We consider a model in which a project requires two stages of investment to complete. As new information arrives, a decision-maker or investor may not remember his initial forecast of the project’s value. The sunk cost of past actions conveys information about the investor’s initial valuation of the project and is therefore an additional source of information when direct memory is imperfect. This means that a rational investor with imperfect memory should incorporate sunk costs into future decisions. … If the investor has imperfect memory of his profit forecast, a high sunk cost signals that the forecast was optimistic enough to justify incurring the high cost. For example, the willingness to incur a high sunk cost digging dry wells may signal that the oil exploration project is worth continuing. If this is the main issue the investor faces, it generates the Concorde effect as he is more likely to continue a project which was initiated at a high cost.”
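
The logic can be captured in a toy example (all numbers below are my own illustrative assumptions, not taken from the Baliga-Ely model): an investor only pays a high stage-1 cost when the initial forecast was optimistic, so the sunk cost later serves as a record of the forgotten forecast.

```python
# Toy sketch of the memory-kludge argument. ASSUMED numbers, for illustration only.
FORECASTS = (40, 100)  # pessimistic vs. optimistic forecast of project value
COSTS = (30, 70)       # low vs. high stage-1 (sunk) cost

def starts_project(forecast: int, cost: int) -> bool:
    # Stage-1 rule: invest only if the forecasted value exceeds the cost.
    return forecast > cost

# Every (forecast, cost) combination that leads to a started project:
started = [(f, c) for f in FORECASTS for c in COSTS if starts_project(f, c)]

# Among projects started at the HIGH sunk cost, what share had the
# optimistic forecast? In this toy setup the sunk cost fully reveals it.
high_cost = [f for f, c in started if c == 70]
share_optimistic = sum(f == 100 for f in high_cost) / len(high_cost)
print(share_optimistic)  # -> 1.0
```

An investor who has forgotten the forecast but remembers the sunk cost therefore rationally treats a high sunk cost as evidence the project is worth continuing, which is exactly the Concorde effect.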

They point out various ways in the real world that this dynamic–that is, imperfect information about why a decision was made in the past–can cause current actors to treat sunk costs as useful information. “There are a few different ways these effects can manifest themselves in practice. Most directly, the decision maker may be an individual responsible for making the initiation and continuation decision and he may simply forget the information. An organization may also forget information or knowledge. Managerial turnover can generate organizational forgetting. In this case, we can think of the investor in the model as representing a long-lived organization headed by a sequence of short-term executives. An executive who inherits an ongoing project will not have access to all of the information available at the time of planning. Existing strategies and plans will then encode missing information and a new executive may continue to implement the plans of the old executive. In our model, data about sunk costs partially substitutes for missing information and a rational executive takes this into account.”

Finally, and intriguingly, they suggest that paying attention to sunk costs may be hard-wired into people’s brains as an adaptive mechanism. “Finally, we can think of sunk-cost bias as a kludge: an adaptive heuristic wherein metaphorical Nature is balancing a design tradeoff. … In our model, the sunk cost bias is an optimal heuristic that compensates for the constraints of limited memory. This can explain the prevalence and persistence of sunk-cost bias despite its appearance, superficially, as a fallacy. To the extent that heuristics are hard-wired or built into preferences, the sunk-cost bias in observed behavior would be adapted to the “average” environment but not always a good fit in specific situations. For example, we would expect that decision-makers display a sunk-cost bias even when full memory is available, and that sometimes the bias goes in the wrong direction for the specific problem at hand.”

On this final point, I’m not yet persuaded. But many economic decisions do unfold over time, and many of them involve a period of disutility or losses early in the process, followed later by compensating utility or gains. It would clearly be misguided to choose a path that unfolds over time and then, partway through the process, forget about or undervalue the gains still to come.

Who is Using $1 Trillion in U.S. Currency?

Jeremy Gerst and Daniel J. Wilson of the Federal Reserve Bank of San Francisco have a short essay, “What’s in Your Wallet? The Future of Cash.” They offer evidence for the intriguing fact that in an economy with increasing use of credit cards, debit cards, and electronic transfers, the amount of cash outstanding has kept growing. But they don’t tackle what to me is the most intriguing question in this area: Who is holding all this currency?

Facts first. As Gerst and Wilson write: “Over the past few decades, the dominant position of cash as a store of value and a means of payment has increasingly been challenged. The growth of electronic payments, especially credit cards and, more recently, debit cards, has radically changed the role of cash in the global economy. Yet, the circulation of the U.S. dollar, the world’s most widely used currency, has continued to grow without interruption. Last year, the value of U.S. currency in circulation reached nearly $1 trillion dollars.”

Gerst and Wilson offer a figure to describe the rise in currency in circulation, and describe it this way: “First, currency in circulation has grown around a steady trend with minimal fluctuations throughout this period. Despite dramatic changes in the payment landscape over the past 30 years and the recent financial crisis and recession, cash holdings among households, businesses, and banks have continued to grow at a steady pace. Second, around 2005 and 2006, the Fed’s gross shipments of cash began to diverge sharply from currency in circulation. Cash shipments fell steeply, while currency in circulation continued to grow. In other words, banks sharply reduced the value of cash they sent to the Fed for processing, but continued to order more new cash than they returned, so that currency in circulation grew. The divergence between Fed cash shipments and currency in circulation was probably due to a recirculation policy that the Fed instituted in 2006 to discourage banks from overusing the Fed’s free cash-processing service. The policy applied specifically to $10 and $20 notes. Indeed, the steep drop in cash volumes was unique to those two denominations …”

Gerst and Wilson offer some estimates and simulations about trends in cash payments and demand for cash in the future. But to me, the more interesting question is who is holding all that cash. In my Principles of Economics textbook, available from Textbook Media (and of course I encourage all those teaching an introductory economics course to check it out), I included a discussion box on this question in Chapter 29 on “Money and Banking.” The cash total cited here is from a few years ago, but the basic message remains. I’ll quote from the textbook here:

The Case of the Missing Currency

Here is a puzzle. The Federal Reserve reported that about $800 billion in currency—that is, paper money and coins—was in circulation in 2006. Dividing the total currency by roughly 210 million U.S. adults over the age of 18 works out to an average of approximately $3,800 in currency for every man and woman.

This average seems ridiculously high. Survey results suggest that the average holdings of currency for each adult are more like $350—and that number takes into account well-off people who hold large amounts of cash in safes and safety deposit boxes. If 210 million adults are each holding $350, then total holdings of U.S. currency by individuals are about $73 billion. Businesses traditionally hold about 3% of all currency, or about $24 billion. After all, most businesses try not to hold currency, but instead quickly get money to the bank where it can earn some interest.
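
The arithmetic in this box is easy to reproduce; the code below simply recomputes the figures given in the text:

```python
# Recompute the missing-currency arithmetic from the discussion above.
total_currency = 800e9   # U.S. currency in circulation, 2006
adults = 210e6           # U.S. adults over age 18

per_adult = total_currency / adults          # the text's "approximately $3,800"
household_holdings = 350 * adults            # survey-based $350 per adult
business_holdings = 0.03 * total_currency    # businesses hold ~3% of all currency
missing = total_currency - (household_holdings + business_holdings)

print(round(per_adult))            # per-adult average in dollars
print(household_holdings / 1e9)    # ~73.5, rounded to "$73 billion" in the text
print(business_holdings / 1e9)     # ~24, the text's "$24 billion"
print(missing / 1e9)               # ~702.5, the text's "$703 billion"
```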

So if households and firms together hold about $97 billion in currency, where is the other $703 billion? No one is quite sure. There are three possible answers: it may be held by children; as part of the unmeasured “underground” economy; or by foreigners. Even if one believes that teenage buying power is important to certain sectors of the economy, and that underground, unreported businesses are everywhere, it is very hard to believe that these two components are larger than the uses of money by adults and legal businesses. As a result, it seems likely that hundreds of billions of dollars of U.S. currency is circulating in the hands of foreigners. When a country’s own currency is plagued with uncertainty, perhaps because of a high inflation rate, people in that country may start carrying out transactions and saving money by using U.S. dollars.

But this explanation is not completely satisfactory either. In high-income economies like those of the European nations, Japan, and Canada, people have relatively little need to use U.S. currency for transactions or saving. People may invest in the United States, but they can do so with electronic money; they don’t need actual currency in the form of paper and coins. In the middle- and low-income countries of the world, most families are too poor to account for hundreds of billions of dollars of U.S. currency.

The case of the missing currency remains unsolved.

I would be remiss if I didn’t add that I first learned about “The Case of the Missing Currency” from an article by Case Sprenkle in the Fall 1993 issue of my own Journal of Economic Perspectives. The journal is freely available online from the most recent issue back to around 1998, but the Fall 1993 issue is a little too far back to be included. However, the article is readily available if you search the web, as well as on JSTOR.