The Share of Itemizers and the Politics of Tax Reform

Those who fill out a US tax return always face a choice. On one hand, there is a “standard deduction,” which is the amount you deduct from your income before calculating the taxes owed on the rest. On the other hand, there is a group of individual tax deductions: for mortgage interest, state and local taxes, high medical expenses, charitable contributions, and others. If the sum of all these deductions is larger than the standard deduction, then a taxpayer will “itemize” deductions–that is, fill out additional tax forms that list all the deductions individually. Conversely, if the standard deduction is larger than the sum of all the individual deductions, then the taxpayer just uses the standard deduction, and doesn’t go through the time and bother of itemizing.
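As a concrete illustration of the choice, here is a minimal sketch in Python. The $24,000 standard deduction is roughly the 2018 figure for a married couple filing jointly under the new law; the individual deduction amounts are hypothetical.

```python
# Sketch of the itemize-or-not decision with hypothetical numbers. Not tax advice.
STANDARD_DEDUCTION = 24_000  # roughly the 2018 figure for a joint return

itemized_deductions = {
    "mortgage_interest": 9_000,
    "state_and_local_taxes": 10_000,   # now capped at $10,000 under the 2017 law
    "charitable_contributions": 2_500,
}

total_itemized = sum(itemized_deductions.values())

if total_itemized > STANDARD_DEDUCTION:
    print(f"Itemize: ${total_itemized:,} in deductions exceeds the standard deduction.")
else:
    print(f"Take the standard deduction: ${total_itemized:,} in itemized deductions "
          f"falls short of ${STANDARD_DEDUCTION:,}.")
```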

In the last 20 years or so, typically about 30-35% of federal tax returns found it worthwhile to itemize deductions.

But the Tax Cuts and Jobs Act, passed into law and signed by President Trump in December 2017, will change this pattern substantially. The standard deduction rises sharply, while limits or caps are imposed on some prominent deductions. As a result, the number of taxpayers who will find it worthwhile to itemize will drop substantially.

Simulations from the Tax Policy Center, for example, suggest that the total number of itemizers will fall by almost 60%, from 46 million to 19 million — which means that in next year’s taxes, maybe only about 11% of all returns will find it worthwhile to itemize.

Set aside all the arguments over pros and cons and distributional effects of the changes in the standard deduction and the individual deductions, and focus on the political issue. It seems to me that this dramatic fall in the number of itemizing taxpayers, especially if it is sustained for a few years, will realign the political arguments over future tax reform. If one-third or so of taxpayers are itemizing–and those who itemize are typically those with high incomes and high deductions who make a lot of noise–then reducing deductions will be politically tough. But if only one-ninth of taxpayers are itemizing, while eight-ninths are just taking the standard deduction, then future reductions in the value of tax deductions may be easier to carry out. It will be interesting to see if the political dynamics of tax reform shift along these lines in the next few years.

When Britain Repealed Its Income Tax in 1816

Great Britain first had an income tax in 1799, but then abolished it in 1816. In honor of US federal tax returns being due tomorrow, April 17, here’s a quick synopsis of the story.

Great Britain was in an on-and-off war with France for much of the 1790s. The British government borrowed heavily and was short of funds. When Napoleon came to power in 1799, the government under Prime Minister William Pitt introduced a temporary income tax. Here’s a description from the website of the British National Archives:

‘Certain duties upon income’ as outlined in the Act of 1799 were to be the (temporary) solution. It was a tax to beat Napoleon. Income tax was to be applied in Great Britain (but not Ireland) at a rate of 10% on the total income of the taxpayer from all sources above £60, with reductions on income up to £200. It was to be paid in six equal instalments from June 1799, with an expected return of £10 million in its first year. It actually realised less than £6 million, but the money was vital and a precedent had been set.

In 1802 Pitt resigned as Prime Minister over the question of the emancipation of Irish catholics, and was replaced by Henry Addington. A short-lived peace treaty with Napoleon allowed Addington to repeal income tax. However, renewed fighting led to Addington’s 1803 Act which set the pattern for income tax today. …

Addington’s Act for a ‘contribution of the profits arising from property, professions, trades and offices’ (the words ‘income tax’ were deliberately avoided) introduced two significant changes:

  • Taxation at source – the Bank of England deducting income tax when paying interest to holders of gilts, for example
  • The division of income taxes into five ‘Schedules’ – A (income from land and buildings), B (farming profits), C (public annuities), D (self-employment and other items not covered by A, B, C or E) and E (salaries, annuities and pensions).

 Although Addington’s rate of tax was half that of Pitt’s, the changes ensured that revenue to the Exchequer rose by half and the number of taxpayers doubled. In 1806 the rate returned to the original 10%.

Pitt in opposition had argued against Addington’s innovations: he adopted them almost unchanged, however, on his return to office in 1805. Income tax changed little under various Chancellors, contributing to the war effort up to the Battle of Waterloo in 1815.

Perhaps unsurprisingly, Britain’s government was not enthusiastic about repealing the income tax even after the defeat of Napoleon. But there was an uprising of taxpayers. The website of the UK Parliament described it this way:

“The centrepiece of the campaign was a petition from the City of London Corporation. In a piece of parliamentary theatre, the Sheriffs of London exercised their privilege to present the petition from the City of London Corporation in person. They entered the Commons chamber wearing their official robes holding the petition.

“The petition reflected the broad nature of the opposition to renewing the tax. Radicals had long complained that ordinary Britons (represented by John Bull in caricatures) had borne the brunt of wartime taxation. Radicals argued that the taxes were used to fund ‘Old Corruption’, the parasitic network of state officials who exploited an unrepresentative political system for their own interests.

“However, the petitions in 1816 came from very different groups, including farmers, businessmen and landowners, who were difficult for the government to dismiss. Petitioners, such as Durham farmers, claimed they had patriotically paid the tax during wartime with ‘patience and cheerfulness’, distancing themselves from radical critics of the government.

“In barely six weeks, 379 petitions against renewing the tax were sent to the House of Commons. MPs took the opportunity when presenting these petitions, to highlight the unpopularity of the tax with their constituents and the wider public. … Ministers were accused of breaking the promise made in 1799 when the tax was introduced as a temporary, wartime measure and not as a permanent tax. The depressed state of industry and agriculture was blamed on heavy taxation.

“The tax was also presented as a foreign and un-British measure that allowed the state to snoop into people’s finances. As the City of London petition complained, it was an ‘odious, arbitrary, and detestable inquisition into the most private concerns and circumstances of individuals’.”

Also unsurprisingly, the repeal of the income tax led the British government to raise other taxes instead. The BBC writes: “Forced to make up the shortfall in revenue, the Government increased indirect taxes, many of which, for example taxes on tea, tobacco, sugar and beer, were paid by the poor. Between 1811 and 1815 direct taxes – land tax, income tax, all assessed taxes – made up 29% of all government revenue. Between 1831 and 1835 it was just 10%.”

There’s a story that when Britain repealed its income tax in 1816, Parliament ordered that the records of the tax be destroyed, so posterity would never learn about it and be tempted to try again. The BBC reports:

“Income tax records were then supposedly incinerated in the Old Palace Yard at Westminster. Whether this bonfire really took place we can’t say. Several historians who have studied the period refer to the event as a story or legend that may have been true. Perhaps the most convincing evidence are reports that, in 1842, when Peel re-introduced income tax, albeit in a less contentious form, the records were no longer available. Another story is that those burning the records were unaware of the fact that duplicates had been sent for safe-keeping to the King’s Remembrancer. They were then put into sacks and eventually surfaced in the Public Records Office.”

The Global Rise of Internet Access and Digital Government

What happens if you mix government and the digital revolution? The answer is Chapter 2 of the April 2018 IMF publication Fiscal Monitor, called “Digital Government.” The report offers some striking insights about access to digital technology in the global economy and how government may use this technology.

Access to digital services is rising fast in developing countries, especially in the form of mobile phones; access to mobile phones appears to be on its way to outstripping access to water, electricity, and secondary schools.

Of course, there are substantial portions of the world population not connected as yet, especially in Asia and Africa.

The focus of the IMF chapter is on how digital access might improve the basic fiscal functions of government: taxes and spending. On the tax side, for example, taxes levied at the border on international trade, or value-added taxes, can function much more simply as records become digitized. Income taxes can be submitted electronically. The government can use electronic records to search for evidence of tax evasion and fraud.

On the spending side, many developing countries experience a situation in which those with the lowest income levels don’t receive government benefits to which they are entitled by law, either because they are disconnected from the government or because there is a “leakage” of government spending to others. The report cites evidence along these lines:

“[D]igitalizing government payments in developing countries could save roughly 1 percent of GDP, or about $220 billion to $320 billion in value each year. This is equivalent to 1.5 percent of the value of all government payment transactions. Of this total, roughly half would accrue directly to governments and help improve fiscal balances, reduce debt, or finance priority expenditures, and the remainder would benefit individuals and firms as government spending would reach its intended targets (Figure 2.3.1). These estimates may underestimate the value of going from cash to digital because they exclude potentially significant benefits from improvements in public service delivery, including more widespread use of digital finance in the private sector and the reduction of the informal sector.”
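To see how the quoted figures fit together, here is a small back-of-the-envelope sketch; the GDP and payment-volume totals are simply implied by the report’s own percentages, not separately sourced.

```python
# Back-of-the-envelope check on the quoted figures (all totals implied, not sourced):
# savings of $220-320 billion are described as roughly 1% of developing-country GDP
# and 1.5% of the value of all government payment transactions.
savings_low, savings_high = 220e9, 320e9

implied_gdp = (savings_low / 0.01, savings_high / 0.01)         # about $22-32 trillion
implied_payments = (savings_low / 0.015, savings_high / 0.015)  # about $15-21 trillion

print(f"Implied GDP of the countries covered: ${implied_gdp[0] / 1e12:.0f}-{implied_gdp[1] / 1e12:.0f} trillion")
print(f"Implied government payment volume: ${implied_payments[0] / 1e12:.0f}-{implied_payments[1] / 1e12:.0f} trillion")
```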

I’ll also add that the IMF is focused on potential gains from digitalization, which is fair enough. But this chapter doesn’t have much to say about potential dangers of overregulation, over-intervention, over-taxation, and even outright confiscation that can arise when certain governments gain extremely detailed access to information on sales and transactions.

State and Local Spending on Higher Education

“Everyone” knows that the future of the US economy depends on a well-educated workforce, and on a growing share of students achieving higher levels of education. But state spending patterns on higher education aren’t backing up this belief. Here are some figures from the SHEF 2017: State Higher Education Finance report published last month by the State Higher Education Executive Officers Association.

The bars in this figure show per-student spending on public higher education by state and local government from all sources of funding, with the lower blue part of each bar showing government spending and the upper green part showing spending based on tuition revenue from students. The red line shows enrollments in public colleges, which have gone flat or even declined a little since the Great Recession.

This figure clarifies a pattern that is apparent from the green bars in the above figure: the share of spending on public higher education that comes from tuition has been rising. It was around 29-31% of total spending in the 1990s, up to about 35-36% in the middle of the first decade of the 2000s, and in recent years has been pushing 46-47%. That’s a big shift in a couple of decades.
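As a rough illustration of how that tuition share is calculated, here is a minimal sketch with hypothetical per-student figures; the dollar amounts are made up, and SHEF’s actual definitions of net tuition and educational appropriations are more involved.

```python
# Hypothetical per-student figures. The tuition share is net tuition revenue
# divided by total education revenue (appropriations plus net tuition).
appropriations_per_student = 7_600   # assumed state and local support, dollars
net_tuition_per_student = 6_600      # assumed net tuition revenue, dollars

total_revenue_per_student = appropriations_per_student + net_tuition_per_student
tuition_share = net_tuition_per_student / total_revenue_per_student

print(f"Tuition share of total spending: {tuition_share:.0%}")  # about 46% with these numbers
```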

The reliance on tuition for state public education varies wildly across states, with less than 15% of total spending on public higher ed coming from tuition in Wyoming and California, and 70% or more of total spending on public higher education coming from tuition in Michigan, Colorado, Pennsylvania, Delaware, New Hampshire, and Vermont.
There are lots of issues in play here: competing priorities for state and local spending, rising costs of higher education, the returns from higher education that encourage students (and their families) to pay for it, and so on. For the moment, I’ll just say that it doesn’t seem like a coincidence that the tuition share of public higher education costs is rising at the same time that enrollment levels are flat or declining.

US Mergers and Antitrust in 2017

Each year the Federal Trade Commission and the Department of Justice Antitrust Division publish the Hart-Scott-Rodino Annual Report, which offers an overview of merger and acquisition activity and antitrust enforcement during the previous year. The Hart-Scott-Rodino legislation requires that all mergers and acquisitions above a certain size–now set at $80.8 million–be reported to the antitrust authorities before they occur. The report thus offers an overview of recent merger and antitrust activity in the United States.

For example, here’s a figure showing the total number of mergers and acquisitions reported. The total has been generally rising since the end of the Great Recession in 2009, and there was a substantial increase from 1,832 transactions in 2016 to 2,052 transactions in 2017. Just before the Great Recession, the number of merger transactions peaked at 2,201, so the current level is high but not unprecedented.

The report also provides a breakdown on the size of mergers. Here’s what it looked like in 2017. As the figure shows, there were 255 mergers and acquisitions of more than $1 billion.

After a proposed merger is reported, the FTC or the US Department of Justice can issue a “second request” if it perceives that the merger might raise some anticompetitive issues. In the last few years, about 3-4% of the reported mergers get this “second request.”
This percentage may seem low, but it’s not clear what level is appropriate. After all, the US government isn’t second-guessing whether mergers and acquisitions make sense from a business point of view. It’s only asking whether the merger might reduce competition in a substantial way. If two companies that aren’t directly competing with each other combine, or if two companies combine in a market with a number of other competitors, the merger/acquisition may turn out well or poorly from a business point of view, but it is less likely to raise competition issues.
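For readers who want the arithmetic behind those figures, here is a small sketch; the transaction counts are the ones cited above, while the count of second requests is a made-up illustration chosen only to land in the 3-4% range.

```python
# Year-over-year change in reported transactions (counts cited in the text).
filings_2016, filings_2017 = 1832, 2052
pct_change = (filings_2017 - filings_2016) / filings_2016
print(f"Change in reported transactions, 2016 to 2017: {pct_change:.1%}")  # about +12%

# Share of filings receiving a "second request" -- 65 is a hypothetical count,
# not the actual number from the report.
second_requests = 65
print(f"Second-request rate: {second_requests / filings_2017:.1%}")  # about 3.2%
```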
Teachers of economics may find the report a useful place to come up with some recent examples of antitrust cases, and there are also links to some of the underlying case documents and analysis (which students can be assigned to read). Here are a few examples from 2017 cases of the Antitrust Division at the US Department of Justice and the Federal Trade Commission. In the first, a merger was blocked because it would have reduced competition for the disposal of low-level radioactive waste. In the second, a merger between two movie theater chains was allowed only if a number of conditions were met, aimed at preserving competition in local markets. The third case involved a proposed merger between the two largest providers of paid daily fantasy sports contests, and the two firms decided to drop the merger after it was challenged.

In United States v. Energy Solutions, Inc., Rockwell Holdco, Inc., Andrews County Holdings, Inc. and Waste Control Specialists, LLC, the Division filed suit to enjoin Energy Solutions, Inc. (ES), a wholly-owned subsidiary of Rockwell Holdco, Inc., from acquiring Waste Control Specialists LLC (WCS), a wholly-owned subsidiary of Andrews County Holdings, Inc. The complaint alleged that the transaction would have combined the only two licensed commercial low-level radioactive waste (LLRW) disposal facilities for 36 states, Puerto Rico and the District of Columbia. There are only four licensed LLRW disposal facilities in the United States. Two of these facilities, however, did not accept LLRW from the relevant states. The complaint alleged that ES’s Clive facility in Utah and WCS’s Andrews facility in Texas were the only two significant disposal alternatives available in the relevant states for the commercial disposal of higher-activity and lower-activity LLRW. At trial, one of the defenses asserted by the defendants was that that WCS was a failing firm and, absent the transaction, its assets would imminently exit the market. The Division argued that the defendants did not show that WCS’s assets would in fact imminently exit the market given its failure to make good-faith efforts to elicit reasonable alternative offers that might be less anticompetitive than its transaction with ES. On June 21, 2017, after a 10-day trial, the U.S. District Court for the District of Delaware ruled in favor of the Division. …

In United States v. AMC Entertainment Holdings, Inc. and Carmike Cinemas, Inc., the Division challenged AMC Entertainment Holdings, Inc.’s proposed acquisition of Carmike Cinemas, Inc. AMC and Carmike were the second-largest and fourth-largest movie theatre chains, respectively, in the United States. Additionally, AMC owned significant equity in National CineMedia, LLC (NCM) and Carmike owned significant equity in SV Holdco, LLC, a holding company that owns and operates Screenvision Exhibition, Inc. NCM and Screenvision are the country’s predominant preshow cinema advertising networks, covering over 80 percent of movie theatre screens in the United States. The complaint alleged that the proposed acquisition would have provided AMC with direct control of one of its most significant movie theatre competitors, and in some cases, its only competitor, in 15 local markets in nine states. As a result, moviegoers likely would have experienced higher ticket and concession prices and lower quality services in these local markets. The complaint further alleged that the acquisition would have allowed AMC to hold sizable interests in both NCM and Screenvision post-transaction, resulting in increased prices and reduced services for advertisers and theatre exhibitors seeking preshow services. On December 20, 2016, a proposed final judgment was filed simultaneously with the complaint settling the lawsuit. Under the terms of the decree, AMC agreed to (1) divest theatres in the 15 local markets; (2) reduce its equity stake in NCM to 4.99 percent; (3) relinquish its seats on NCM’s Board of Directors and all of its other governance rights in NCM; (4) transfer 24 theatres with a total of 384 screens to the Screenvision cinema advertising network; and (5) implement and maintain “firewalls” to inhibit the flow of competitively sensitive information between NCM and Screenvision. The court entered the final judgment on March 7, 2017. …

In DraftKings/FanDuel, the Commission filed an administrative complaint challenging the merger of DraftKings and FanDuel, two providers of paid daily fantasy sports contests. The Commission’s complaint alleged that the transaction would be anticompetitive because the merger would have combined the two largest daily fantasy sports websites, which controlled more than 90 percent of the U.S. market for paid daily fantasy sports contests. The Commission alleged that consumers of paid daily fantasy sports were unlikely to view season-long fantasy sports contests as a meaningful substitute for paid daily fantasy sports, due to the length of season-long contests, the limitations on number of entrants, and several other issues. Shortly after the Commission filed its complaint, the parties abandoned the merger on July 13, 2017, and the Commission dismissed its administrative complaint.

Should the 5% Convention for Statistical Significance be Dramatically Lower?

For the uninitiated, the idea of “statistical significance” may seem drier than desert sand. But it’s how research in the social sciences and medicine decides what findings are worth paying attention to as plausibly true–or not. For that reason, it matters quite a bit. Here, I’ll sketch a quick overview for beginners of what statistical significance means, and why there is controversy among statisticians and researchers over what research results should be regarded as meaningful or new.

To gain some intuition, consider an experiment to decide whether a coin is equally balanced, or whether it is weighted toward coming up “heads.” You toss the coin once, and it comes up heads. Does this result prove, in a statistical sense, that the coin is unfair? Obviously not. Even a fair coin will come up heads half the time, after all.

You toss the coin again, and it comes up “heads” again. Do two heads in a row prove that the coin is unfair? Not really. After all, if you toss a fair coin twice in a row, there are four possibilities: HH, HT, TH, TT. Thus, two heads will happen one-fourth of the time with a fair coin, just by chance.

What about three heads in a row? Or four or five or six or more? You can never completely rule out the possibility that a string of heads, even a long string of heads, could happen entirely by chance. But as you get more and more heads in a row, a finding that is all heads, or mostly heads, becomes increasingly unlikely. At some point, it becomes very unlikely indeed.  
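To put numbers on that intuition, here is a short sketch computing the chance that a fair coin produces an unbroken run of heads. The run has to reach five before the probability drops below the conventional 5% threshold discussed next.

```python
# Probability that a fair coin produces an unbroken run of n heads: 0.5 ** n.
for n in range(1, 8):
    print(f"{n} heads in a row: probability {0.5 ** n:.4f}")
# Four heads in a row (0.0625) is still above 5%; five in a row (0.03125) falls below it.
```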

Thus, a researcher must make a decision. At what point are the results sufficiently unlikely to have happened by chance that we can declare them meaningful? The conventional answer is that if the observed result had a 5% probability or less of happening by chance, then it is judged to be “statistically significant.” Of course, real-world questions of whether a certain intervention in a school will raise test scores, or whether a certain drug will help treat a medical condition, are a lot more complicated to analyze than coin flips. Thus, practical researchers spend a lot of time trying to figure out whether a given result is “statistically significant” or not.
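For the mostly-heads case, the same logic runs through the binomial distribution: the p-value is the chance that a fair coin would produce a result at least as lopsided as the one observed. A minimal sketch:

```python
from math import comb

def p_value_at_least_k_heads(k: int, n: int) -> float:
    """Chance that a fair coin gives k or more heads in n flips."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Eight heads in ten flips is not quite enough to clear the 5% bar...
print(p_value_at_least_k_heads(8, 10))   # about 0.055
# ...but nine heads in ten flips is.
print(p_value_at_least_k_heads(9, 10))   # about 0.011
```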

Several questions arise here.

1) Why 5%? Why not 10%? Or 1%? The short answer is “tradition.” A couple of years ago, the American Statistical Association put together a panel to reconsider the 5% standard.

Ronald L. Wasserstein and Nicole A. Lazar wrote a short article, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” in The American Statistician (2016, 70:2, pp. 129-132). (A p-value is an algebraic way of referring to the standard for statistical significance.) They started with this anecdote:

“In February 2014, George Cobb, Professor Emeritus of Mathematics and Statistics at Mount Holyoke College, posed these questions to an ASA discussion forum:

Q: Why do so many colleges and grad schools teach p = 0.05?
A: Because that’s still what the scientific community and journal editors use.
Q: Why do so many people still use p = 0.05?
A: Because that’s what they were taught in college or grad school.

Cobb’s concern was a long-worrisome circularity in the sociology of science based on the use of bright lines such as p<0.05: “We teach it because it’s what we do; we do it because it’s what we teach.”

But that said, there’s nothing magic about the 5% threshold. It’s fairly common for academic papers to report results that are statistically significant using a threshold of 10%, or 1%. Confidence in a statistical result isn’t a binary, yes-or-no situation, but rather a continuum.

2) There’s a difference between statistical confidence in a result, and the size of the effect in the study. As a hypothetical example, imagine a study which says that if math teachers used a certain curriculum, learning in math would rise by 40%. However, the study included only 20 students.

In a strict statistical sense, the result may not be statistically significant: with a fairly small number of students, and the complexities of looking at other factors that might have affected the results, it could have happened by chance. (This is similar to the problem that if you flip a coin only two or three times, you don’t have enough information to state with statistical confidence whether it is a fair coin or not.) But it would seem peculiar to ignore a result that shows a large effect. A more natural response might be to design a bigger study with more students, and see if the large effects hold up and are statistically significant in a bigger study.

Conversely, one can imagine a hypothetical study which uses results from 100,000 students, and finds that if math teachers use a certain curriculum, learning in math would rise by 4%. Let’s say that the researcher can show that the effect is statistically significant at the 5% level–that is, there is less than a 5% chance that this rise in math performance happened by chance. It’s still true that the rise is fairly small in size.

In other words, it can sometimes be more encouraging to discover a large result in which you do not have full statistical confidence than to discover a small result in which you do have statistical confidence.
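A rough sketch of the contrast, using a simple two-sample z-test on made-up numbers (the effect sizes, standard deviation, and group sizes are all hypothetical, and a real evaluation would need a more careful design):

```python
from math import sqrt, erf

def one_sided_p_value(effect: float, sd: float, n_per_group: int) -> float:
    """One-sided p-value for a difference in group means, normal approximation."""
    se = sd * sqrt(2.0 / n_per_group)              # standard error of the difference
    z = effect / se
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))  # upper-tail probability

# Hypothetical case 1: large effect (40-point gain), only 10 students per group.
print(one_sided_p_value(effect=40, sd=60, n_per_group=10))      # roughly 0.07: not "significant"

# Hypothetical case 2: small effect (4-point gain), 50,000 students per group.
print(one_sided_p_value(effect=4, sd=60, n_per_group=50_000))   # essentially zero: highly "significant"
```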

3) When a researcher knows that 5% is going to be the dividing line between a result being treated as meaningful or not meaningful, it becomes very tempting to fiddle around with the calculations (whether explicitly or implicitly) until you get a result that seems to be statistically significant.

As an example, imagine a study that considers whether early childhood education has positive effects on outcomes later in life. Any researcher doing such a study will be faced with a number of choices. Not all early childhood education programs are the same, so one may want to adjust for factors like the teacher-student ratio, training received by students, amount spent per student, whether the program included meals, home visits, and other factors. Not all children are the same, so one may want to look at factors like family structure, health,  gender, siblings, neighborhood, and other factors. Not all later life outcomes are the same, so one may want to look at test scores, grades, high school graduation rates, college attendance, criminal behavior, teen pregnancy, and employment and wages later in life.

But a problem arises here. If a researcher hunts through all the possible factors, and all the possible combinations of all the possible factors, there are literally scores or hundreds of possible connections. Just by blind chance, some of these connections will appear to be statistically significant. It’s similar to the situation where you do 1,000 repetitions of flipping a coin 10 times. In those 1,000 repetitions, there will likely be at least a few runs in which heads comes up 8 or 9 times out of 10 tosses. But that doesn’t prove the coin is unfair! It just proves you tried over and over until you got a specific result.
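Here is a quick simulation of that thought experiment, seeded so the output is reproducible. With a fair coin, about 5-6% of 10-flip runs should show 8 or more heads just by chance, so on the order of 50 or so of the 1,000 runs will look lopsided.

```python
import random

random.seed(0)

REPETITIONS = 1_000
FLIPS_PER_RUN = 10

lopsided_runs = 0
for _ in range(REPETITIONS):
    heads = sum(random.random() < 0.5 for _ in range(FLIPS_PER_RUN))
    if heads >= 8:
        lopsided_runs += 1

# None of these lopsided runs says anything about the coin being unfair;
# they are what blind chance produces when you try over and over.
print(f"Runs with 8 or more heads, out of {REPETITIONS}: {lopsided_runs}")
```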

Modern researchers are very aware of the danger that when you hunt through lots of possibilities, then just by chance, a random scattering of the results will appear to be statistically significant. Nonetheless, there are some tell-tale signs that this research strategy of hunting to find a result that looks statistically meaningful may be all too common. For example, one warning sign is when other researchers try to replicate the result using different data or statistical methods, but fail to do so. If a result only appeared statistically significant by random chance in the first place, it’s likely not to appear at all in follow-up research.

Another warning sign is that when you look at a bunch of published studies in a certain area (like how to improve test scores, how a minimum wage affects employment, or whether a drug helps with a certain medical condition), you keep seeing that the finding is statistically significant at almost exactly the 5% level, or just a little less. In a large group of unbiased studies, one would expect to see the statistical significance of the results scattered all over the place: some 1%, 2-3%, 5-6%, 7-8%, and higher levels. When all the published results are bunched right around 5%, it makes one suspicious that the researchers have put their thumbs on the scale in some way to get a result that magically meets the conventional 5% threshold.

The problem that arises is that research results are being reported as meaningful in the sense that they had a 5% or less probability of happening by chance, when in reality, that standard is being evaded by researchers. This problem is severe and common enough that a group of 72 researchers recently wrote: “Redefine statistical significance: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries,” which appeared in Nature Human Behaviour (Daniel J. Benjamin et al., January 2018, pp. 6-10). One of the signatories, John P.A. Ioannidis, provides a readable overview in “Viewpoint: The Proposal to Lower P Value Thresholds to .005” (Journal of the American Medical Association, March 22, 2018, pp. E1-E2). Ioannidis writes:

“P values and accompanying methods of statistical significance testing are creating challenges in biomedical science and other disciplines. The vast majority (96%) of articles that report P values in the abstract, full text, or both include some values of .05 or less. However, many of the claims that these reports highlight are likely false. Recognizing the major importance of the statistical significance conundrum, the American Statistical Association (ASA) published a statement on P values in 2016. The status quo is widely believed to be problematic, but how exactly to fix the problem is far more contentious. … Another large coalition of 72 methodologists recently proposed a specific, simple move: lowering the routine P value threshold for claiming statistical significance from .05 to .005 for new discoveries. The proposal met with strong endorsement in some circles and concerns in others. P values are misinterpreted, overtrusted, and misused. … Moving the P value threshold from .05 to .005 will shift about one-third of the statistically significant results of past biomedical literature to the category of just “suggestive.”

This essay is published in a medical journal, and is thus focused on biomedical research. The theme is that a result with 5% significance can be treated as “suggestive,” but for a new idea to be accepted, the threshold level of statistical significance should be 0.5%–that is, the probability of the outcome happening by random chance should be 0.5% or less.

The hope of this proposal is that researchers will design their studies more carefully and use larger sample sizes. Ioannidis writes: “Adopting lower P value thresholds may help promote a reformed research agenda with fewer, larger, and more carefully conceived and designed studies with sufficient power to pass these more demanding thresholds.” Ioannidis is quick to admit that this proposal is imperfect, but argues that it is practical and straightforward–and better than many of the alternatives.

The official “ASA Statement on Statistical Significance and P-Values” which appears with the Wasserstein and Lazar article includes a number of principles worth considering. Here are three of them:

  • Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold. …
  • A p-value, or statistical significance, does not measure the size of an effect or the importance of a result. …
  • By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.

Whether you are doing the statistics yourself, or are just a consumer of statistical studies produced by others, it’s worth being hyper-aware of what “statistical significance” means, and doesn’t mean.

For those who would like to dig a little deeper, some useful starting points might be the six-paper symposium on “Con out of Economics” in the Spring 2010 issue of the Journal of Economic Perspectives, or the six-paper symposium on “Recent Ideas in Econometrics” in the Spring 2017 issue.

US Lagging in Labor Force Participation

Not all that long ago, in 1990, the share of “prime-age” workers aged 25-54 who were participating in the labor force was basically the same in the United States, Germany, Canada, and Japan. But since then, labor force participation in this group has fallen in the United States while rising in the other countries. Mary C. Daly lays out the pattern in “Raising the Speed Limit on Future Growth” (Federal Reserve Bank of San Francisco Economic Letter, April 2, 2018).

Here’s a figure showing the evolution of labor force participation in the 25-54 age bracket in these four economies. Economists often like to focus on this group because it avoids differences in rates of college attendance (which can strongly influence labor force participation at younger ages) and differences in old-age pension systems and retirement patterns (which can strongly influence labor force participation at older ages).

U.S. labor participation diverging from international trends

Daly writes (citations omitted):

“Which raises the question—why aren’t American workers working?

“The answer is not simple, and numerous factors have been offered to explain the decline in labor force participation. Research by a colleague from the San Francisco Fed and others suggests that some of the drop owes to wealthier families choosing to have only one person engaging in the paid labor market …

“Another factor behind the decline is ongoing job polarization that favors workers at the high and low ends of the skill distribution but not those in the middle. … Our economy is automating thousands of jobs in the middle-skill range, from call center workers, to paralegals, to grocery checkers. A growing body of research finds that these pressures on middle-skilled jobs leave a big swath of workers on the sidelines, wanting work but not having the skills to keep pace with the ever-changing economy.

“The final and perhaps most critical issue I want to highlight also relates to skills: We’re not adequately preparing a large fraction of our young people for the jobs of the future. Like in most advanced economies, job creation in the United States is being tilted toward jobs that require a college degree. Even if high school-educated workers can find jobs today, their future job security is in jeopardy. Indeed by 2020, for the first time in our history, more jobs will require a bachelor’s degree than a high school diploma.

“These statistics contrast with the trends for college completion. Although the share of young people with four-year college degrees is rising, in 2016 only 37% of 25- to 29-year-olds had a college diploma. This falls short of the progress in many of our international competitors, but also means that many of our young people are underprepared for the jobs in our economy.”

On this last point, my own emphasis would differ from Daly’s. Yes, steps that aim to increase college attendance over time are often worthwhile. But as she notes, only 37% of American 25-29 year-olds have a college degree. A dramatic rise in this number would take an extraordinary social effort. Among other things, it would require a dramatic expansion in the number of those leaving high school who are willing and ready to benefit from a college degree, together with a vast expansion of enrollments across the higher education sector. Even if the share of college graduates could be increased by one-third or one-half–which would be a very dramatic change–a very large share of the population would not have a college degree.

It seems to me important to separate the ideas of “college” and “additional job-related training.” It’s true that the secure and decently-paid jobs of the future will typically require additional training past high school. At least in theory, that training could be provided in many ways: on-the-job training, apprenticeships, focused short courses certifying competence in a certain area, and so on. For some students, a conventional college degree will be a way to build up these skills. However, there are also a substantial number of students who are unlikely to flourish in a conventional classroom-based college environment, and a substantial number of jobs where a traditional college classroom doesn’t offer the right preparation. College isn’t the right job prep for everyone. We need to build up other avenues for US workers to acquire the job-related skills they need, too.

Misconceptions about Milton Friedman’s 1968 Presidential Address

For macroeconomists, Milton Friedman’s (1968) Presidential Address to the American Economic Association about “The Role of Monetary Policy” marks a central event (American Economic Review, March 1968, pp. 1-17). Friedman argued that monetary policy had limits. Actions by a central bank like the Federal Reserve could have short-run effects on an economy–either for better or for worse. But in the long run, he argued, monetary policy affected only the price level. Variables like unemployment or the real interest rate were determined by market forces, and tended to move toward what Friedman called the “natural rate”–which is a potentially confusing term for saying that they are determined by forces of supply and demand.

Here, I’ll give a quick overview of the thrust of Friedman’s address, offer a plug for the recent issue of the Journal of Economic Perspectives, which has a lot more, and point out a useful follow-up article that clears up some misconceptions about Friedman’s 1968 speech.

In the Winter 2018 issue of the Journal of Economic Perspectives, where I work as Managing Editor, we published a three-paper symposium on “Friedman’s Natural Rate Hypothesis After 50 Years.” The papers are:

I won’t try to summarize the papers here, or the many themes they offer on how Friedman’s speech influenced the macroeconomics that followed and which aspects of Friedman’s analysis have held up better than others. But to give a sense of what’s at stake, here’s an overview of Friedman’s themes from the paper by Mankiw and Reis:

“Using these themes of the classical long run and the centrality of expectations, Friedman takes on policy questions with a simple bifurcation: what monetary policy cannot do and what monetary policy can do. It is a division that remains useful today (even though, as we discuss later, modern macroeconomists might include different items on each list).

“Friedman begins with what monetary policy cannot do. He emphasizes that, except in the short run, the central bank cannot peg either interest rates or the unemployment rate. The argument regarding the unemployment rate is that the trade-off described by the Phillips curve is transitory and unemployment must eventually return to its natural rate, and so any attempt by the central bank to achieve otherwise will put inflation into an unstable spiral. The argument regarding interest rates is similar: because we can never know with much precision what the natural rate of interest is, any attempt to peg interest rates will also likely lead to inflation getting out of control. From a modern perspective, it is noteworthy that Friedman does not consider the possibility of feedback rules from unemployment and inflation as ways of setting interest rate policy, which today we call “Taylor rules” (Taylor 1993).

“When Friedman turns to what monetary policy can do, he says that the “first and most important lesson” is that “monetary policy can prevent money itself from being a major source of economic disturbance” (p. 12). Here we see the profound influence of his work with Anna Schwartz, especially their Monetary History of the United States. From their perspective, history is replete with examples of erroneous central bank actions and their consequences. The severity of the Great Depression is a case in point.

“It is significant that, while Friedman is often portrayed as an advocate for passive monetary policy, he is not dogmatic on this point. He notes that “monetary policy can contribute to offsetting major disturbances in the economic system arising from other sources” (p. 14). Fiscal policy, in particular, is mentioned as one of these other disturbances. Yet he cautions that this activist role should not be taken too far, in light of our limited ability to recognize shocks and gauge their magnitude in a timely fashion. The final section of Friedman’s presidential address concerns the conduct of monetary policy. He argues that the primary focus should be on something the central bank can control in the long run—that is, a nominal variable …”

Edward Nelson offers a useful follow-up to these JEP papers in “Seven Fallacies Concerning Milton Friedman’s ‘The Role of Monetary Policy,’” Finance and Economics Discussion Series 2018-013, Board of Governors of the Federal Reserve System. Nelson summarizes at the start:

“[T]here has been widespread and lasting acceptance of the paper’s position that monetary policy can achieve a long-run target for inflation but not a target for the level of output (or for other real variables). For example, in the United States, the Federal Open Market Committee’s (2017) “Statement on Longer-Run Goals and Policy Strategy” included the observations that the “inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation,” and that, in contrast, the “maximum level of employment is largely determined by nonmonetary factors,” so “it would not be appropriate to specify a fixed goal for employment.”

Nelson then lays out seven fallacies. The details are in his paper: here, I just list the fallacies with a few words of his explanations.

Fallacy 1: “The Role of Monetary Policy” was Friedman’s first public statement of the natural rate hypothesis
“Certainly, Friedman (1968) was his most extended articulation of the ideas (i) that an expansionary monetary policy that tended to raise the inflation rate would not permanently lower the unemployment rate, and (ii) that full employment and price stability were compatible objectives over long periods. But Friedman had outlined the same ideas in his writings and in other public outlets on several earlier occasions in the 1950s and 1960s.”

Fallacy 2: The Friedman-Phelps Phillips curve was already presented in Samuelson and Solow’s (1960) analysis
“A key article on the Phillips curve that is often juxtaposed with Friedman (1968) is Samuelson and Solow (1960). This paper is often (and correctly, in the present author’s view) characterized as advocating the position that there is a permanent tradeoff between the unemployment rate and inflation in the United States.”

Fallacy 3: Friedman’s specification of the Phillips curve was based on perfect competition and no nominal rigidities
“Modigliani (1977, p. 4) said of Friedman (1968) that “[i]ts basic message was that, despite appearances, wages were in reality perfectly flexible.” However, Friedman (1977, p. 13) took exception to this interpretation of his 1968 paper. Friedman pointed out that the definition of the natural rate of unemployment that he gave in 1968 had recognized the existence of imperfectly competitive elements in the setting of wages, including those arising from regulation of labor markets. Further support for Friedman’s contention that he had not assumed a perfectly competitive labor market is given by the material in his 1968 paper that noted the slow adjustment of nominal wages to demand and supply pressures. … Consequently, that [1968 Friedman] framework is consistent with prices being endogenous—both responding to, and serving as an impetus for, output movements—and the overall price level not being fully flexible in the short run.”

Fallacy 4: Friedman’s (1968) account of monetary policy in the Great Depression contradicted the Monetary History’s version
“But the fact of a sharp decline in the monetary base during the prelude to, and early stages of, the 1929-1933 Great Contraction is not in dispute, and it is this decline to which Friedman (1968) was presumably referring.”

Fallacy 5: Friedman (1968) stated that a monetary expansion will keep the unemployment rate and the real interest rate below their natural rates for two decades
“[T]hese statements are inferences from the following passage in Friedman (1968, p. 11): “But how long, you will say, is ‘temporary’? … I can at most venture a personal judgment, based on some examination of the historical evidence, that the initial effects of a higher and unanticipated rate of inflation last for something like two to five years; that this initial effect then begins to be reversed; and that a full adjustment to the new rate of inflation takes about as long for employment as for interest rates, say, a couple of decades.” The passage of Friedman (1968) just quoted does not, in fact, imply that a policy involving a shift to a new inflation rate involves twenty years of one-sided unemployment and real-interest-rate gaps. Such prolonged gaps instead fall under the heading of Friedman’s “initial effects” of the monetary policy change—effects that he explicitly associated with a two-to-five-year period, with the gaps receding beyond this period. Friedman described “full adjustment” as comprising decades, but such complete adjustment includes the lingering dynamics beyond the main dynamics associated with the initial two-to-five-year period. It is the two-to-five-year period that would be associated with the bulk of the nonneutrality of the monetary policy change.”

Fallacy 6: The zero lower bound on nominal interest rates invalidates the natural rate hypothesis
“A zero-bound situation undoubtedly makes the analysis of monetary policy more difficult. In addition, the central bank in a zero-bound situation has fewer tools that it can deploy to stimulate aggregate demand than it has in other circumstances. But, important as these complications are, neither of them implies that the long-run Phillips curve is not vertical.”

Fallacy 7: Friedman’s (1968) treatment of an interest-rate peg was refuted by the rational expectations revolution. 
“The propositions that the liquidity effect fades over time and that real interest rates cannot be targeted in the long run by the central bank remain widely accepted today. These valid propositions underpinned Friedman’s critique of pegging of nominal interest rates.”

Since the JEP published this symposium, I’ve run into some younger economists who have never read Friedman’s talk and lack even a general familiarity with his argument. For academic economists of whatever vintage, it’s an easily readable speech worth becoming acquainted with–or revisiting.

China Worries: Echoes of Japan, and the Soviet Union

There seems to be an ongoing fear in the psyche of Americans that an economy based on intensive government planning will inevitably outstrip a US economy that lacks such a degree of central planning. I first remember encountering this fear with respect to the Soviet Union, which was greatly feared as an economic competitor to the US from the 1930s up through the 1980s. Sometime in the 1970s and 1980s, US fears of a government-directed economy transferred over to Japan. And in recent years, those fears seem to have transferred to China.

Back in the 1960s and 1970s, there was a widespread belief among prominent economists that the Soviet Union would overtake the US economy in per capita GDP within 2-3 decades. Such predictions seem deeply implausible now, knowing what we know about the breakup of the Soviet Union in the 1990s and its economic course since then. But at the time, the perspective was that the US economy frittered away output on raising personal consumption, while the Soviet economy led to high levels of investment in equipment and technology. Surely, these high levels of investment would gradually cause the Soviet standard of living to pull ahead?

As one illustration of this viewpoint, Mark Skousen discussed the treatment of Soviet growth in Paul Samuelson’s classic introductory economics textbook (in “The Perseverance of Paul Samuelson’s Economics,” Journal of Economic Perspectives, Spring 1997, 11:2, pp. 137-152). The first edition of the book was published in 1948. Skousen writes:

“But with the fifth edition (1961), although expressing some skepticism of Soviet statistics, he [Samuelson] stated that economists “seem to agree that her recent growth rates have been considerably greater than ours as a percentage per year,” though less than West Germany, Japan, Italy and France (5:829). The fifth through the eleventh editions showed a graph indicating the gap between the United States and the USSR narrowing and possibly even disappearing (for example, 5:830). The twelfth edition replaced the graph with a table declaring that between 1928 and 1983, the Soviet Union had grown at a remarkable 4.9 percent annual growth rate, higher than did the United States, the United Kingdom, or even Germany and Japan (12:776). By the thirteenth edition (1989), Samuelson and Nordhaus declared, “the Soviet economy is proof that, contrary to what many skeptics had earlier believed, a socialist command economy can function and even thrive” (13:837). Samuelson and Nordhaus were not alone in their optimistic views about Soviet central planning; other popular textbooks were also generous in their descriptions of economic life under communism prior to the collapse of the Soviet Union.

“By the next edition, the fourteenth, published during the demise of the Soviet Union, Samuelson and Nordhaus dropped the word “thrive” and placed question marks next to the Soviet statistics, adding “the Soviet data are questioned by many experts” (14:389). The fifteenth edition (1995) has no chart at all, declaring Soviet Communism “the failed model” (15:714–8).”

My point here is not to single out the Samuelson text. As Skousen notes, this perspective on Soviet growth was common among many economists. In retrospect, there were certainly signs from the 1960s through the 1980s that the Soviet economy was not in fact catching up. Commonsensical observation of how average people were living in the Soviet Union, especially in rural areas, told a different story. And those projections about when the Soviet Union would catch the US in per capita GDP always seemed to remain 2-3 decades off in the future. Nonetheless, standard economics textbooks taught for about three decades that the Soviets were likely to catch up and pull ahead.
But as the risk of being overtaken by an ever-richer Soviet Union came to seem less plausible, the rising threat from Japan took its place. Again, we now think of Japan as suffering a financial meltdown back in the early 1990s, which has since been followed by a quarter-century of slow growth. But as Japanese competitors rose in world markets in the 1970s and 1980s, the view was quite different.
For a trip down memory lane on this issue, I recommend a 1998 essay called “Revisiting the “Revisionists”: The Rise and Fall of the Japanese Economic Model,” by Brink Lindsey and Aaron Lukas (Cato Institute, July 31, 1998). Here’s a snippet:

“After the collapse of Soviet-style communism, the “Japan, Inc.” economic model stood as the world’s only real alternative to Western free-market capitalism. Its leading American supporters—who became known as “revisionists”—argued in the late 1980s and early 1990s that the United States could not compete with Japan’s unique form of state-directed insider capitalism. Unless Washington adopted Japanese-style policies and abandoned free markets in favor of “managed trade,” they said, America would become an economic colony of Japan. …

“Four figures in particular stand out: political scientist Chalmers Johnson, whose 1982 book MITI and the Japanese Miracle laid much of the intellectual groundwork for later writers; former Reagan administration trade negotiator Clyde Prestowitz, who authored Trading Places: How We Are Giving Our Future to Japan and How to Reclaim It and later founded the Economic Strategy Institute to advance the revisionist viewpoint; former U.S. News & World Report editor James Fallows, whose 1989 article “Containing Japan” in the Atlantic Monthly cast U.S.-Japan relations in Cold War terms; and Dutch journalist Karel van Wolferen, author of The Enigma of Japanese Power. These men influenced many others—including novelist Michael Crichton, whose 1992 jingoistic thriller Rising Sun became a number-one bestseller.

“The revisionists asserted that, in contrast to the open-market capitalism of the “Anglo-American” model, Japan practiced a unique form of state-directed insider capitalism. Under that model, close relationships among business executives, bankers, and government officials strongly influence economic outcomes. By strategically allocating capital through a tightly controlled banking system, they argued, Japan would drive foreign competitors out of sector after sector, leading eventually to world economic domination.

“Revisionists also maintained that because Japan was not playing by the normal rules of Western capitalism, it was useless to employ rules-based trade negotiations to open the Japanese market. Instead, they advocated “results-based” or “managed trade” agreements as the only realistic way to reduce the U.S.-Japan trade imbalance. Beyond that, they proposed elements of a Japanese-style industrial policy as a means of improving U.S. economic performance.”

I was working as a newspaper editorial writer for the San Jose Mercury News in the mid-1980s, in the heart of Silicon Valley, so I heard lots about the Japanese threat. I remember a lot of anguish about Japan’s “Fifth Generation Computer Project,” which was going to assure Japanese dominance of computing, and about a Japanese program to take the lead in high-definition televisions–a program built on analog rather than digital technology. But again, the overall story was that Japan had high levels of investment that it would focus on key technology areas, and thus would surely outstrip the US. The fears of Japan as an economic colossus turned out to be considerably overblown, too.

It seems to me that China has now taken the place of Russia and Japan, and many of the terms used by Lindsey and Lukas to describe attitudes toward Japan fit quite well in current arguments about China. Thus, it’s become fairly common to hear claims that China practices “state-directed insider capitalism,” that China has “close relationships among business executives, bankers, and government officials,” that China practices “strategically allocating capital through a tightly controlled banking system,” and that China is “not playing by the normal rules of Western capitalism.” Just as with Japan, the argument is now made that the only way to address US-China trade is with “results-based” or “managed trade” agreements.

Of course, the fact that these very similar arguments and predictions turned out to be incorrect with the Soviet Union and with Japan doesn’t prove they will be incorrect with regard to China. But it should raise some questions.

It’s worth remembering that according to the World Bank, per capita GDP in the United States was $57,600 in 2016, which compares with $38,900 in Japan, $8,748 in the Russian Federation, and $8,123 in China.

China’s economy has of course been growing quickly in recent decades, and has the possibility to continue rapid growth in the future. It’s also an economy facing a number of challenges: an extraordinary rise in corporate debt in recent years; a risk of its own housing price bubble; the difficulties of shifting from an investment-driven to a consumption-driven economy; and an aging population creating a real possibility that China will get old before it gets rich.

Kenneth Rogoff recently wrote an op-ed on the topic, \”Will China Really Supplant US Economic Hegemony?\” He points out that for a country with an extremely large workforce, like China, the rise of robotics may be especially disruptive. He adds:

“But China’s rapid growth has been driven mostly by technology catch-up and investment. … China’s gains still come largely from adoption of Western technology, and in some cases, appropriation of intellectual property. … In the economy of the twenty-first century, other factors, including rule of law, as well as access to energy, arable land, and clean water may also become increasingly important. China is following its own path and may yet prove that centralized systems can push development further and faster than anyone had imagined, far beyond simply being a growing middle-income country. But China’s global dominance is hardly the predetermined certainty that so many experts seem to assume.”

The US economy has its full share of challenges and difficulties, many of which have been chronicled on this blog repeatedly in the last few years. But the fear that the US economy will soon be overtaken by a country using a recipe of state-directed high investment and unfair trading practices has not been borne out in the past. Perhaps the energies of US economic policymakers should be less focused on worries about outside threats, and more focused on how to strengthen US productivity and competitiveness.

Wakanda and Economics

I told my teenage son that the economics of Wakanda, the home country of the "Black Panther" in the recent movie, raised some interesting questions. He looked at me for a long moment and then explained very slowly: "Dad, it's not a documentary." Duly noted. No superhero movie is a documentary, and pretty much by definition, even asking for plausibility from a superhero movie is asking too much. But undaunted, I continue to assert that the economy of Wakanda raises some interesting questions. For some overviews, see:

For the 8-10 people on Planet Earth not yet familiar with the premise, Wakanda sits on top of the world's only supply of a rare mineral called "vibranium," which absorbs vibrations, including sound and kinetic motion, and also gives off the magic "radioactivity" that, in superhero movies, creates whatever human strengths or plants and animals the plot requires. Wakanda has used this mineral as the basis for building what is portrayed in the movie as a very technologically advanced and sophisticated economy. However, a quick glance around the world economy suggests that countries endowed with valuable natural resources don't always show broad-based economic success or participatory forms of governance (think Venezuela, Angola, or Saudi Arabia).

There\’s a substantial research literature on the \”the resource curse question\” of why natural resources have so often been accompanied by a lack of growth. For an overview, see Anthony J. Venables, 2016. \”Using Natural Resources for Development: Why Has It Proven So Difficult?\” Journal of Economic Perspectives (Winter 2016, 30:1, pp. 161-84).  When a country is in a situation in which the export of a key natural resource looms large, it often leads to a situation of economic and political economy. Moreover, a country with very large exports in one area is likely to have an economy that is not well-diversified, and thus become vulnerable to price shocks concerning that resource.

How does Wakanda escape these traps? Well, it isn't obvious that it escapes all of them. Political power in Wakanda is concentrated in a hereditary monarchy, with arguments over succession decided by ritual combat. However, the comic books also suggest that Wakanda sells enough vibranium on world markets to raise money for a large national investment in science and technology. Thus, Wakanda manages to become a combination technology-warrior state, in which the leaders seem to be generally beneficent.

In the real world, countries that have managed to overcome the resource curse to at least some extent, like Norway, Indonesia, Botswana, and others, often have a government focus on spreading the benefits of their mineral exports across the population. In turn, widespread buying power in the economy helps other industries to develop. A "social fund" is often set up to fund human capital and financial investment, and to assure that the benefits of mineral exports will be spread out over time.
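As a rough illustration of how such a fund spreads benefits over time, here is a minimal sketch of the simplest possible rule: deposit the resource revenue while it lasts, and spend only the real return, so that spending can continue at the same level after the resource is gone. The revenue, horizon, and 3% return are hypothetical numbers, not drawn from any actual fund.

```python
# A minimal sketch of the "social fund" logic, in the spirit of the
# Norway-style funds mentioned above. All numbers are hypothetical.

resource_revenue = 10.0   # annual resource revenue while the resource lasts
deposit_years = 20        # deposits stop once the resource is exhausted
real_return = 0.03        # assumed real return on the fund's investments
horizon = 40              # years to simulate

fund = 0.0
for year in range(1, horizon + 1):
    revenue = resource_revenue if year <= deposit_years else 0.0
    fund += revenue                # deposit this year's resource revenue
    spending = real_return * fund  # spend only the real return on the fund
    fund = fund * (1 + real_return) - spending  # principal is preserved
    if year in (1, deposit_years, horizon):
        print(f"Year {year:2d}: fund = {fund:6.1f}, spending = {spending:5.2f}")
```

Under this rule the fund's principal is preserved, so the annual spending it supports continues at the same level indefinitely after the deposits stop.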

The level of inequality in Wakanda in the "Black Panther" movie is not clear. In the old comic books, the Black Panther is described as the richest person in the world, thanks to effective ownership of the mineral wealth, while some people of Wakanda are portrayed as living in traditional huts. The movie does not spell out household living standards or the full distribution of income ("Dad, it's not a documentary!"), but in various montage scenes of daily life, it does not appear that everyone is living in modern luxury.
As a scientist-warrior state with an unlimited supply of cheap vibranium, Wakanda has managed to develop many upstream uses of vibranium, with a variety of applications to transportation, health, and weapons. In the real world, developing upstream uses of domestic resources is often tricky. It works sometimes; for example, Botswana has managed to have more diamond-cutting and sales done in Botswana, rather than elsewhere. But it is hard for the economy of a small country to stretch from traditional economic practices all the way up to the highest technology, with all the steps in between. As the Economist notes: "Countries often find it easier to move diagonally, rather than vertically, graduating into products that belong to different value chains but require similar mixes of labour, capital and knowhow."
Wakanda has a high level of prosperity while being almost entirely cut off from the outside world. This seems to me an ongoing dream of politicians in communities and countries alike: how can my jurisdiction become well-off by trading only with itself? In this view, trade and investment by outsiders almost always ends up being exploitative, and is best minimized or avoided. However, there are no countries that have managed to accomplish this combination of economic development and splendid isolation in the real world.
The underlying issue here is that Wakanda is portrayed as being well-off because of its combination of vibranium and science. But in the real world, economic development typically involves a number of complementary ingredients. For example, vibranium had to be discovered, mined, and refined. There had to be a process for discovering its scientific properties, and then for engineering those properties into products. An obvious question (for economists, at least) is who had the incentive to do the innovation, the trial-and-error learning, and the investments in human and physical capital to make all this happen. In the real world, all of this happens against a backdrop of incentives, competition, property rights, financial contracts, and so on. The movie doesn't give us a back-story here. ("Dad, it's not a documentary.") But technologically leading economies typically don't emerge through the actions of a beneficent hereditary monarch.
For an economist, an obvious question is whether the people of Wakanda might be better off with greater openness to the world. From a pure economic point of view, it seems clear that Wakanda could benefit from a greater opening to international trade. At Wakanda's stage of technological development, it wouldn't even be necessary to sell vibranium itself. Instead, it could sell services that make use of vibranium, such as construction and public works or health care, while still keeping control of the underlying resource.
But economics isn\’t everything. Wakanda also clearly places a high value on its traditional culture, which can be a two-edges sword. For many people much of the time, traditional culture can be a source of purpose and meaning; but for many of the same people at least some of the time, traditional culture can also feel like a trap and a limitation.  I learned from reading Subrick\’s paper about recent developments in the back story. He writes: 

\”Although the Black Panther has retained political legitimacy throughout the centuries, in the new millennium Wakandans have begun to question the validity of their government. The Black Panther’s support depends on his ability to provide the most basic of public goods—security from outside invasion and domestic turmoil. In the story arc “A Nation Under Our Feet” Ta-Nehisi Coates and Brian Stelfreeze describe a Wakanda on the verge of civil war. The combination of the never-ending attempts by outsiders to steal vibranium, recent internal challenges to his rule, and political intrigue, not to mention a terrorist organization poisoning the minds of the people, have raised the question of whether a monarchy and freedom are compatible. Government by a monarch in the age of democracy creates tensions within society. Furthermore, high levels of income inequality exacerbate these societal pressures. In Wakanda, traditional ways of life exist in the presence of the world’s most sophisticated technologies. Citizens live in huts that are located next to the factories. A disenfranchised and relative poor citizenry are beginning to demand massive social change.\”

To put it another way, the concerns of social scientists about the resource curse, inequality, openness to international trade, and lack of democracy are actually becoming, in their own way, part of the plot.