Some Economics for Martin Luther King Day

On November 2, 1983, President Ronald Reagan signed a law establishing a federal holiday for the birthday of Martin Luther King Jr., to be celebrated each year on the third Monday in January. As the legislation that passed Congress said: "such holiday should serve as a time for Americans to reflect on the principles of racial equality and nonviolent social change espoused by Martin Luther King, Jr." Of course, the case for racial equality stands fundamentally upon principles of justice, not economics. But here are a few economics-related thoughts for the day from the archives:

_________________________
1) Inequalities of race and gender impose large economic costs on society as a whole, because one consequence of discrimination is that it hinders people in developing and using their talents. In "Equal Opportunity and Economic Growth" (August 20, 2012), I wrote:

——–
A half-century ago, white men dominated the high-skilled occupations in the U.S. economy, while women and minority groups were often barely seen. Unless one holds the antediluvian belief that, say, 95% of all the people who are well-suited to become doctors or lawyers are white men, this situation was an obvious misallocation of social talents. Thus, one might predict that as other groups had more equal opportunities to participate, it would provide a boost to economic growth. Pete Klenow reports the results of some calculations about these connections in "The Allocation of Talent and U.S. Economic Growth," a Policy Brief for the Stanford Institute for Economic Policy Research.

Here's a table that illustrates some of the movement to greater equality of opportunity in the U.S. economy. White men are no longer 85% or more of the managers, doctors, and lawyers, as they were back in 1960. High-skill occupation is defined in the table as "lawyers, doctors, engineers, scientists, architects, mathematicians and executives/managers." The share of white men working in these fields is up by about one-fourth. But the share of white women working in these occupations has more than tripled; of black men, more than quadrupled; of black women, more than octupled.


Moreover, wage gaps for those working in the same occupations have diminished as well. "Over the same time frame, wage gaps within occupations narrowed. Whereas working white women earned 58% less on average than white men in the same occupations in 1960, by 2008 they earned 26% less. Black men earned 38% less than white men in the typical occupation in 1960, but had closed the gap to 15% by 2008. For black women the gap fell from 88% in 1960 to 31% in 2008."

Much can be said about the causes behind these changes, but here, I want to focus on the effect on economic growth. For the purposes of developing a back-of-the-envelope estimate, Klenow builds up a model with some of these assumptions: "Each person possesses general ability (common to all occupations) and ability specific to each occupation (and independent across occupations). All groups (men, women, blacks, whites) have the same distribution of abilities. Each young person knows how much discrimination they would face in any occupation, and the resulting wage they would get in each occupation. When young, people choose an occupation and decide how much to augment their natural ability by investing in human capital specific to their chosen occupation."
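
To see the mechanism at work, here is a minimal simulation sketch in Python. It is not Klenow's actual model (which also includes general ability and endogenous human capital investment); it simply treats discrimination as a wedge that taxes one group's pay in skilled occupations, and every parameter value is an illustrative assumption.

```python
# A minimal sketch (not Klenow's actual model) of how a discriminatory
# wedge on one group's skilled-occupation pay misallocates talent and
# lowers aggregate output. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                  # workers in the affected group
ability_skilled = rng.lognormal(mean=0.0, sigma=0.5, size=N)
ability_basic = np.ones(N)                   # productivity in the fallback job

def aggregate_output(tau):
    """tau is the discriminatory wedge: workers keep only (1 - tau) of
    the skilled wage, so some able workers sort into the fallback job."""
    perceived_pay = (1 - tau) * ability_skilled
    chooses_skilled = perceived_pay > ability_basic
    # Output counts true productivity, regardless of the wedge.
    return np.where(chooses_skilled, ability_skilled, ability_basic).sum()

for tau in [0.0, 0.2, 0.5]:
    print(f"wedge {tau:.0%}: output = {aggregate_output(tau):,.0f}")
# Output falls as the wedge rises: workers whose skilled productivity
# exceeds their fallback productivity, but not by enough to offset the
# wedge, end up in the wrong occupation.
```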

With this framework, Klenow can then estimate how much of U.S. growth over the last 50 years or so can be traced to greater equality of opportunity, which made it worthwhile for many women and members of minority groups who had the underlying ability to make a greater investment in human capital.

"How much of overall growth in income per worker between 1960 and 2008 in the U.S. can be explained by women and African Americans investing more in human capital and working more in high-skill occupations? Our answer is 15% to 20% … White men arguably lost around 5% of their earnings, as a result, because they moved into lower skilled occupations than they otherwise would have. But their losses were swamped by the income gains reaped by women and blacks."

At least to me, it is remarkable to consider that 1/6 or 1/5 of total U.S. growth in income per worker may be due to greater economic opportunity. In short, reducing discriminatory barriers isn't just about justice and fairness to individuals; it's also about a stronger U.S. economy that makes better use of the underlying talents of all its members.

____________________________
2) The black-white wage gap–and the share of the gap that is "unexplained"–is rising, not falling. Here's part of what I wrote in "Breaking Down the Black-White Wage Gap" (September 6, 2017):

———
Mary C. Daly, Bart Hobijn, and Joseph H. Pedtke set the stage for a more insightful discussion in their short essay, "Disappointing Facts about the Black-White Wage Gap," written as an "Economic Letter" for the Federal Reserve Bank of San Francisco (September 5, 2017, 2017-26). Here are a couple of figures showing the black-white wage gap, and then seeking to explain what share of that gap is associated with differences in state of residence, education, part-time work, industry/occupation, and age. The first figure shows the wage gap for black and white men; the second for black and white women.

Here are some thoughts on these patterns:

1) The black-white wage gap is considerably larger for men (about 25%) than for women (about 15%). Also, the wage gaps seem to have risen since the 1980s.

2) The three biggest factors associated with the wage gap seem to be education level, industry/occupation, and "unexplained."

3) The "unexplained" share is rising over time. As the authors explain: "Perhaps more troubling is the fact that the growth in this unexplained portion accounts for almost all of the growth in the gaps over time. For example, in 1979 about 8 percentage points of the earnings gap for men was unexplained by readily measurable factors, accounting for over a third of the gap. By 2016, this portion had risen to almost 13 percentage points, just under half of the total earnings gap. A similar pattern holds for black women, who saw the gaps between their wages and those of their white counterparts more than triple over this time to 18 percentage points in 2016, largely due to factors outside of our model. This implies that factors that are harder to measure—such as discrimination, differences in school quality, or differences in career opportunities—are likely to be playing a role in the persistence and widening of these gaps over time." The authors also cite this more detailed research paper with similar findings.

4) In looking at the black-white wage gap for women, it's quite striking that this gap was relatively small back in the 1980s, at only about 5%, and that observable factors like education and industry/occupation explained more than 100% of the wage gap at the time. But as the black-white wage gap for women increased starting in the 1990s, an "unexplained" gap opened up.

5) It is tempting to treat the "unexplained" category as an imperfect but meaningful measure of racial discrimination, but it's wise to be quite cautious about such an interpretation. On one side, the "unexplained" category may overstate discrimination, because it doesn't include other possible variables that affect wages (for example, one could include previous years of lifetime work experience, length of tenure at a current job, scores on standardized tests, or many other variables). In addition, the variables that are included, like level of education, are measured in broad terms, and so it is possible that, say, blacks and whites with a college education are not the same in their skills and background. On the other side, the "unexplained" category could easily understate the level of discrimination. After all, education levels and industry/occupation outcomes don't happen in a vacuum, but are a result of the income, education, and jobs of family members. For this reason, noting that a wage gap is associated with some difference in education or industry/occupation may itself reflect aspects of social discrimination. The kinds of calculations presented here are useful, but they don't offer final answers.
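
For readers who want the mechanics behind an "explained" versus "unexplained" split, here is a minimal sketch of the generic Oaxaca-Blinder-style decomposition that underlies calculations of this kind. The data are simulated, the single education regressor is a deliberate simplification, and nothing here reproduces the exact specification that Daly, Hobijn, and Pedtke use.

```python
# A minimal Oaxaca-Blinder-style decomposition on simulated data. The
# numbers are fabricated for illustration: group B has one year less
# education on average (explained channel) and an extra 8% log-wage
# penalty (unexplained channel).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
educ_a = rng.normal(14, 2, n)
educ_b = rng.normal(13, 2, n)
log_wage_a = 1.0 + 0.10 * educ_a + rng.normal(0, 0.3, n)
log_wage_b = 1.0 + 0.10 * educ_b - 0.08 + rng.normal(0, 0.3, n)

# Fit group A's wage equation, then decompose the mean log-wage gap.
slope, intercept = np.polyfit(educ_a, log_wage_a, 1)
gap = log_wage_a.mean() - log_wage_b.mean()
explained = slope * (educ_a.mean() - educ_b.mean())
unexplained = gap - explained
print(f"total gap {gap:.3f} = explained {explained:.3f} "
      f"+ unexplained {unexplained:.3f}")
# "Unexplained" is simply the residual once measured characteristics are
# priced out, which is why it can both overstate and understate
# discrimination, as discussed above.
```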

In short, the black-white wage gap is rising, not falling. The wage gap is also less associated with basic measures like level of education or industry/occupation than it was before. I can hypothesize a number of explanations for this pattern, but none of my hypotheses are cheerful ones.

___________________

3) The patterns in which speeding tickets are given to those just a little over the speed limit can reveal discrimination. I discuss some evidence on this point in "Leniency in Speeding Tickets: Bunching Evidence of Police Bias" (April 5, 2017):

——–
Imagine for a moment the distribution of speed for drivers who are breaking the speed limit. One would expect that a fairly large number of drivers break the speed limit by a small amount, and then a decreasing number of drivers break the speed limit by larger amounts.

But here's the actual distribution of the amount over the speed limit on the roughly 1 million tickets given by about 1,300 officers of the Florida Highway Patrol between 2005 and 2015. The graph is from Felipe Goncalves and Steven Mello, "A Few Bad Apples? Racial Bias in Policing," Princeton University Industrial Relations Section Working Paper #608, March 6, 2017. The left-hand picture shows the distribution of the amount over the speed limit on the speeding tickets given to whites; the right-hand picture shows the distribution of the amount over the speed limit on the speeding tickets given to blacks and Hispanics.


Some observations:

1) Very few tickets are given to those driving only a few miles per hour over the speed limit. Then there is an enormous spike in those given tickets for being about 9 mph over the speed limit. There are also smaller spikes at some higher levels. In Florida, the fine for being 10 mph over the limit is substantially higher (at least $50, depending on the county) compared to the fine for being 9 mph over the limit.

2) The jump at 9 mph is sometimes called a "bunching indicator," and it can be a revealing approach in a number of contexts. For example, if being above or below a certain test score makes you eligible for a certain program or job, and one observes bunching at the relevant test score, it's evidence that the test scores are being manipulated. If being above or below a certain income level affects your eligibility for a certain program, or whether you owe a certain tax, and there is bunching at that income level, it's a sign that income is being manipulated. Real-world data is never completely smooth, and always has some bumps. But the spikes in the figure above are telling you something.
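
As a concrete illustration of the idea, here is a minimal sketch of a bunching check on simulated ticket data. The decay rate, the share of tickets that lenient officers shift down, and the fitting window are all invented assumptions; real applications, including Goncalves and Mello's, construct the counterfactual far more carefully.

```python
# A minimal bunching check on simulated speeding tickets: compare the
# observed count at 9 mph over the limit with a smooth counterfactual
# fitted on bins outside the manipulated region. All numbers invented.
import numpy as np

rng = np.random.default_rng(2)
# A smoothly decaying distribution of miles per hour over the limit ...
mph_over = rng.exponential(scale=8.0, size=200_000).astype(int) + 1
counts = np.bincount(mph_over, minlength=31)[1:30].astype(float)
bins = np.arange(1, 30)

# ... plus lenient officers writing half the 10-14 mph tickets as 9 mph.
window = (bins >= 10) & (bins <= 14)
moved = 0.5 * counts[window]
counts[window] -= moved
counts[bins == 9] += moved.sum()

# Counterfactual at 9: fit log-counts outside the 9-14 mph "donut".
donut = (bins >= 9) & (bins <= 14)
slope, intercept = np.polyfit(bins[~donut], np.log(counts[~donut]), 1)
excess = counts[bins == 9][0] / np.exp(intercept + slope * 9)
print(f"observed/predicted tickets at 9 mph over: {excess:.1f}x")
# A ratio well above 1 is the bunching indicator; comparing that ratio
# across driver groups is one way to quantify differential leniency.
```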

3) Goncalves and Mello note that the spike at 9 mph is higher for whites than for blacks and Hispanics. This suggests that whites are more likely to catch a break from an officer and get the 9 mph ticket. The research in the paper investigates this hypothesis in some detail …

In the big picture, one of the reminders from this research is that bias and discrimination don't always involve doing something negative. In the modern United States, my suspicion is that some of the most prevalent and hardest-to-spot biases just involve not cutting someone an equal break, or not being quite as willing to offer an opportunity that would otherwise have been offered.

____________________

4) Many of the communities that suffer the most from crime are also the communities where the law-abiding and the law-breakers both experience a heavy law enforcement presence, and where large numbers of young men end up being incarcerated. Here are some slices of my discussion from \”Inequalities of Crime Victimization and Criminal Justice\” (May 20, 2016):

———-
And law-abiding people in some communities, many of them predominantly low-income and African-American, can end up facing an emotionally crucifying choice. On one side, crime rates in their community are high, which is a terrible and sometimes tragic and fatal burden on everyday life. On the other side, they are watching a large share of their community, mainly men, becoming involved with the criminal justice system through fines, probation, or incarceration. Although those who are convicted of crimes are the ones who officially bear the costs, in fact the costs when someone needs to pay fines, or can't earn much or any income, or can only be visited by making a trip to a correctional facility are also shared with families, mothers, and children. Magnus Lofstrom and Steven Raphael explore these questions of "Crime, the Criminal Justice System, and Socioeconomic Inequality" in the Spring 2016 issue of the Journal of Economic Perspectives. …

It's well-known that rates of violent and property crime have fallen substantially in the US in the last 25 years or so. What is less well-recognized is that the biggest reductions in crime have happened in the often predominantly low-income and African-American communities that were most plagued by crime. Lofstrom and Raphael look at crime rates across cities with lower and higher rates of poverty in 1990 and 2008:

"However, the inequality between cities with the highest and lower poverty rates narrows considerably over this 18-year period. Here we observe a narrowing of both the ratio of crime rates as well as the absolute difference. Expressed as a ratio, the 1990 violent crime rate among the cities in the top poverty decile was 15.8 times the rate for the cities in the lowest poverty decile. By 2008, the ratio falls to 11.9. When expressed in levels, in 1990 the violent crime rate in the cities in the upper decile for poverty rates exceeds the violent crime rate in cities in the lowest decile for poverty rates by 1,860 incidents per 100,000. By 2008, the absolute difference in violent crime rates shrinks to 941 per 100,000. We see comparable narrowing in the differences between poorer and less-poor cities in property crime rates. … "
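
As a quick consistency check, the quoted ratio and absolute difference together pin down the implied crime rates in the two groups of cities: if top/bottom = r and top - bottom = d, then bottom = d/(r - 1) and top is r times that. A short sketch:

```python
# Backing out the implied violent-crime rates from the quoted ratio (r)
# and absolute difference (d) between top and bottom poverty deciles.
for year, r, d in [(1990, 15.8, 1860), (2008, 11.9, 941)]:
    bottom = d / (r - 1)
    top = r * bottom
    print(f"{year}: ~{top:,.0f} vs. ~{bottom:,.0f} incidents per 100,000")
# Both the ratio and the gap in levels shrink between 1990 and 2008,
# which is the narrowing Lofstrom and Raphael describe.
```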

It remains true that one of the common penalties for being poor in the United States is that you are more likely to live in a neighborhood with a much higher crime rate. But as overall rates of crime have fallen, the inequality of greater vulnerability to crime has diminished.

On the other side of the crime-and-punishment ledger, low-income and African-American men are more likely to end up in the criminal justice system. Lofstrom and Raphael give sources and studies for the statistics: "[N]early one-third of black males born in 2001 will serve prison time at some point in their lives. The comparable figure for Hispanic men is 17 percent … [F]or African-American men born between 1965 and 1969, 20.5 percent had been to prison by 1999. The comparable figures were 30.2 percent for black men without a college degree and approximately 59 percent for black men without a high school degree."

I'm not someone who sympathizes with or romanticizes those who commit crimes. But economics is about tradeoffs, and imposing costs on those who commit crimes has tradeoffs for the rest of society, too. For example, the cost to taxpayers is on the order of $350 billion per year, which in 2010 broke down as "$113 billion on police, $81 billion on corrections, $76 billion in expenditure by various federal agencies, and $84 billion devoted to combating drug trafficking." The question of whether those costs should be higher or lower, or reallocated between these categories, is a worthy one for economists. … Lofstrom and Raphael conclude:

"Many of the same low-income predominantly African American communities have disproportionately experienced both the welcome reduction in inequality for crime victims and the less-welcome rise in inequality due to changes in criminal justice sanctioning. While it is tempting to consider whether these two changes in inequality can be weighed and balanced against each other, it seems to us that this temptation should be resisted on both theoretical and practical grounds. On theoretical grounds, the case for reducing inequality of any type is always rooted in claims about fairness and justice. In some situations, several different claims about inequality can be combined into a single scale—for example, when such claims can be monetized or measured in terms of income. But the inequality of the suffering of crime victims is fundamentally different from the inequality of disproportionate criminal justice sanctioning, and cannot be compared on the same scale. In practical terms, while higher rates of incarceration and other criminal justice sanctions may have had some effect in reducing crime back in the 1970s and through the 1980s, there is little evidence to believe that the higher rates have caused the reduction in crime in the last two decades. Thus, it is reasonable to pursue multiple policy goals, both seeking additional reductions in crime and in the continuing inequality of crime victimization and simultaneously seeking to reduce inequality of criminal justice sanctioning. If such policies are carried out sensibly, both kinds of inequality can be reduced without a meaningful tradeoff arising between them."

______________

5) An "audit study" of housing discrimination involves finding pairs of people, giving them similar characteristics (job history, income, married/unmarried, parents/not parents) and sending them off to buy or rent a place to live. In "Audit Studies and Housing Discrimination" (September 21, 2016), I wrote in part:
______________

Cityscape magazine, published by the US Department of Housing and Urban Development three times per year, has a nine-paper symposium on "Housing Discrimination Today" in the third issue of 2015. The lead article by Sun Jung Oh and John Yinger asks: "What Have We Learned From Paired Testing in Housing Markets?" (17: 3, pp. 15-59). …

There have been four large national-level paired-testing studies of housing discrimination in the US in the last 40 years. "The largest paired-testing studies in the United States are the Housing Market Practices Survey (HMPS) in 1977 and the three Housing Discrimination Studies (HDS1989, HDS2000, and HDS2012) sponsored by the U.S. Department of Housing and Urban Development (HUD)." Each of the studies was spread over several dozen cities. The first three involved about 3,000-4,000 tests; the 2012 study involved more than 8,000 tests. The appendix also lists another 21 studies done in recent decades.
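
For a sense of how results from paired tests are typically summarized, here is a minimal sketch of the "gross" and "net" incidence measures common in this literature. The tallies are invented for illustration and are not results from any of the HUD studies.

```python
# Gross incidence: share of tests where the white tester was favored.
# Net incidence: that share minus the share where the minority tester
# was favored, so symmetric random differences wash out.
def testing_summary(white_favored, minority_favored, no_difference):
    total = white_favored + minority_favored + no_difference
    gross = white_favored / total
    net = (white_favored - minority_favored) / total
    return gross, net

# Hypothetical tallies from 1,000 paired tests:
gross, net = testing_summary(white_favored=280, minority_favored=120,
                             no_difference=600)
print(f"gross incidence {gross:.1%}, net incidence {net:.1%}")
# Net incidence is the usual headline measure: it registers only
# systematic favoring of one group.
```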

Overall, the findings from the 2012 study show ongoing discrimination against blacks in rental and sales markets for housing. For Hispanics, there appears to be discrimination in rental markets, but not in sales markets. Here's a chart summarizing a number of findings, which also gives a sense of the kind of information collected in these studies.


However, the extent of housing discrimination in 2012 has diminished from previous national-level studies. Oh and Yinger write (citations omitted): "In 1977, Black homeseekers were frequently denied access to advertised units that were available to equally qualified White homeseekers. For instance, one in three Black renters and one in every five Black homebuyers were told that there were no homes available in 1977. In 2012, however, minority renters or homebuyers who called to inquire about advertised homes or apartments were rarely denied appointments that their White counterparts were able to make."

The Problem of Questionable Patents

The theoretical case for patents is clear enough: if you want people and companies to have an incentive for investing money and time in seeking innovations, you need to offer them some assurance that others won't immediately copy any successful discoveries. But with the power of patents comes the risk of gaming the patent system and of patents being granted when the proffered invention is either not new, or obvious, or both. Michael D. Frakes and Melissa F. Wasserman tackle these issues in "Decreasing the Patent Office's Incentives to Grant Invalid Patents" (Hamilton Project Policy Proposal 2017-17, December 2017). Also, Jay Shambaugh, Ryan Nunn, and Becca Portman offer some useful background information in "Eleven Facts about Innovation and Patents" (Hamilton Project, December 2017).

The Shambaugh, Nunn, and Portman paper offers a few background figures on patents that, as you look at them, can raise your eyebrows a bit. The background here is that the three main patent-granting agencies in the world–the US Patent and Trademark Office, the Japanese Patent Office, and the European Patent Office–are sometimes referred to as the Trilateral Patent Offices. The usual belief is that "compared to the USPTO, the JPO and EPO are believed to apply stricter scrutiny to applications." Getting a patent from all three of these offices is called a "triadic" patent, and the number of triadic patents is sometimes used as a measure of quality. Now consider a couple of comparisons.

The number of patent applications in the US has more or less doubled since 2000. In that time, the number of patent applications in Japan has dropped by one-quarter, while the number in Europe has risen by about 50%. One possible interpretation of this pattern is that the US economy is in the grip of a massive wave of innovation far outstripping Japan and Europe, which may foretell a productivity boom for the US economy. An alternative interpretation is that it's so much easier to apply for a patent in the US, and to have a patent granted, that the US Patent Office is attracting lots of low-quality and invalid patent applications, and some of those are sneaking through the system to receive actual patents.

Here's a figure that poses a similar question. This graph shows the share of GDP spent on research and development on the horizontal axis. The vertical axis is a measure of the number of "high-quality" patents, which in this figure refers to an innovation that is patented in at least two of the three Trilateral Patent Offices. The US level of R&D spending is a bit below that of Germany and Japan, but similar. However, when measured in terms of high-quality patents filed, the US lags well behind. Again, this could be the result that US firms aren't bothering to apply for European and Japanese protection for all their great patents. Or it could be a signal that the rise in US patents includes a greater share of low-quality or even invalid patents than those in Japan and Europe.

Frakes and Wasserman lay out how the US Patent Office works in greater detail, in a way that for me sharpens these concerns. For example, they write (citations omitted):

"There is an abundance of anecdotal evidence that patent examiners are given insufficient time to adequately review patent applications. On average, a U.S. patent examiner spends only 19 hours reviewing an application, including reading the application, searching for prior art, comparing the prior art with the application, and (in the case of a rejection) writing a rejection, responding to the patent applicant’s arguments, and often conducting an interview with the applicant’s attorney. Because patent applications are legally presumed to comply with the statutory patentability requirements when filed, the burden of proving unpatentability rests with the Agency. That is, a patent examiner who does not explicitly set forth reasons why the application fails to meet the patentability standards must then grant the patent."

The US Patent Office is funded by the fees it collects, which fall into several categories, as Frakes and Wasserman explain:

"The overwhelming majority of Patent Office costs are attributed to reviewing and examining applications. To help cover these expenses, the Agency charges examination fees to applicants. These fees fail to cover even half of the Agency’s examination costs, however. To make up for this deficiency, the Agency relies heavily on two additional fees that are collected only in the event that a patent is granted: (1) issuance fees, paid at the time a patent is granted; and (2) renewal fees, paid periodically over the lifetime of an issued patent as a condition of the patent remaining enforceable. Combined with examination fees, these fees account for nearly all of the Patent Office’s revenue. … In fiscal year 2016 the Patent Office estimated that the average cost of examining a patent application was about $4,200. The examination fee that year was set at only $1,600 for large for-profit corporations; at $800 for individuals, small firms, nonprofit corporations, or other enterprises that qualify for small-entity status; and at $400 for individuals, small firms, nonprofit corporations, or other enterprises that qualify for micro-entity status."
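
Plugging in the quoted FY2016 figures makes the funding tension easy to see. The shortfall arithmetic below is mine, using only the numbers in the passage above:

```python
# Per-application arithmetic from the quoted FY2016 figures: examination
# fees cover well under half the average examination cost, and the gap
# can only be recouped through post-grant issuance and renewal fees.
avg_exam_cost = 4200
exam_fees = {"large entity": 1600, "small entity": 800, "micro entity": 400}
for entity, fee in exam_fees.items():
    print(f"{entity}: fee covers {fee / avg_exam_cost:.0%} of cost; "
          f"${avg_exam_cost - fee:,} per application must come from "
          f"fees that are paid only if a patent is granted")
```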

An obvious concern is that if the US Patent Office relies heavily on fees that are collected only after a patent is granted, then it has a financial incentive to grant more patents. Indeed, the authors cite studies showing that when the Patent Office is facing financial troubles, it tends to grant more patents.

An additional concern is that the US Patent Office doesn't really reject patents, at least not permanently, because applicants can apply repeatedly. "Considering that about 40 percent of the applications filed in fiscal year 2016 are repeat applications (up from 11 percent in 1980), a substantial percentage of the Patent Office’s backlog can be attributed to its inability to definitively reject applications." To put it another way, an applicant can just keep refiling until the application is assigned to a less-experienced examiner during a budget crunch, improving the odds that it will eventually be granted.
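
The arithmetic of repeated chances is worth spelling out. Under the simplifying assumption that each refiling is an independent draw with the same grant probability, persistence alone drives the odds up quickly:

```python
# If each (re)filing of a weak application has an assumed, independent
# 30% chance of being granted, the chance of eventually obtaining a
# patent after n attempts is 1 - (1 - p)**n. Illustrative numbers only.
p = 0.3
for n in [1, 2, 4, 8]:
    print(f"{n} attempt(s): {1 - (1 - p) ** n:.0%} chance of a grant")
# Even a weak application becomes likely to succeed if the applicant can
# simply keep refiling until it draws a favorable review.
```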

With these thoughts in mind, Frakes and Wasserman offer some practical solutions, which include: 1) increase patent examination fees and abolish "issuance" fees, to reduce the financial incentive to grant patents; 2) limit repeat applications, perhaps by charging higher fees; 3) give patent examiners more time (and charge higher fees to support that additional time as needed).

But the key economic insight behind these proposals and others is that in an economy whose future is based on innovation and technology, granting a substantial number of patents that should never have been allowed imposes important costs. As Frakes and Wasserman write:

"Although patents encourage innovation by helping inventors to recoup their research and development expenses, this comes at a cost—consumers pay higher prices and have less access to the patented invention. Although society can accept such consequences for a properly issued patent, an invalid patent imposes these costs on society without providing the commensurate benefits from additional innovation because, by definition, an invalid patent is one issued for an existing technology or an obvious technological advancement. Invalid patents provide no innovative benefit to society because the public already possessed the patented inventions.

"In addition to this harm, erroneously issued patents can stunt innovation and competition. Competitors might forgo research and development in areas covered by improperly issued patents to minimize the risk of expensive and time-consuming litigation. There is growing empirical evidence that invalid patents can increase so-called patent thickets—dense webs of overlapping patent rights—that in turn raise the cost of licensing and complicate business planning. Because a firm needs a license to all of the patents that cover its products, other firms can use questionable patents to opportunistically extract licensing fees. There is mounting evidence that nonpracticing entities—commonly known as patent trolls—use patents of questionable validity to assert frivolous lawsuits and extract licensing revenue from innovative firms. Invalid patents can also undermine the business relations of market entrants because customers might be deterred from transacting with a company out of fear of a contributory patent infringement suit. Finally, erroneously issued patents can inhibit the ability of start-ups to obtain venture capital, especially if a dominant player in the market holds the patent in question."

For some other thoughts on the economics of patents, the interested reader might check: 

Does Retirement Raise the Risk of Death?

Maria D. Fitzpatrick and Timothy J. Moore take up this question in "The Mortality Effects of Retirement: Evidence from Social Security Eligibility at Age 62" (NBER Working Paper, December 2017). From the abstract: "Social Security eligibility begins at age 62, and approximately one third of Americans immediately claim at that age. We examine whether age 62 is associated with a discontinuous change in aggregate mortality, a key measure of population health. Using mortality data that covers the entire U.S. population and includes exact dates of birth and death, we document a robust two percent increase in male mortality immediately after age 62. The change in female mortality is smaller and imprecisely estimated. Additional analysis suggests that the increase in male mortality is connected to retirement from the labor force and associated lifestyle changes."

This is a technical research paper that will only be accessible to the initiated, but you can get a good flavor of the results from a couple of figures. A first figure shows the patterns of claiming Social Security. There are various rules about the age at which different kinds of benefits can be claimed. For example, Social Security disability benefits can be claimed earlier, but access to what most people think of as the usual Social Security benefits starts at 62. The figure shows the step-change or discontinuity in the number of people claiming benefits at 62, followed by a slower rise in claiming and a smaller jump at age 65.

There's also a step-change discontinuity in the death rate at age 62–but only for men, not for women. Mortality rates increase with age. As this figure shows, the mortality rate for women before and after age 62 is close to a smooth line. But for men, there's a jump.
The authors dig into data on behavioral and economic patterns to see if they can find an underlying reason for this difference. Proving cause-and-effect here is very difficult, as the authors admit, but some patterns do emerge in the data. For example, there doesn't seem to be a gender gap in how income or health insurance coverage shifts at age 62. However, one difference is that men are more likely than women to stop working for pay when they start claiming Social Security. Men are also more likely to start or increase smoking (even if they have never smoked before) and to become more sedentary. Men at age 62 also see a rise in deaths due to chronic obstructive pulmonary disease, lung cancer, and traffic accidents.
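
The statistical idea behind "a discontinuous change at age 62" is a regression discontinuity: fit mortality as a smooth function of age on each side of the cutoff and compare the fitted values at 62. Here is a minimal sketch on simulated data; the jump size, the noise, and the linear specification are all assumptions for illustration, and the paper's estimation (with exact dates of birth and death) is considerably more careful.

```python
# A minimal regression-discontinuity sketch: simulate mortality that
# rises smoothly with age plus an assumed 2% jump at 62, then estimate
# the jump by fitting a line on each side of the cutoff.
import numpy as np

rng = np.random.default_rng(3)
age = rng.uniform(58, 66, 200_000)
base = 0.8 + 0.05 * (age - 62)        # smooth age profile (arbitrary units)
jump = 0.02 * 0.8 * (age >= 62)       # assumed 2% discontinuity at 62
mortality = base + jump + rng.normal(0, 0.05, age.size)

left, right = age < 62, age >= 62
fit_left = np.polyfit(age[left], mortality[left], 1)
fit_right = np.polyfit(age[right], mortality[right], 1)
disc = np.polyval(fit_right, 62) - np.polyval(fit_left, 62)
print(f"estimated jump at 62: {disc / np.polyval(fit_left, 62):+.1%}")
```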
In short, if you are retiring, take up a habit other than smoking. And if you know someone who is retiring, invite them for a walk and give them a hug. 

Measuring the "Free" Digital Economy

The digital economy provides a number of services for which the marginal price (given an internet connection) is zero: games like Candy Crush, email, web searches, access to information and entertainment, and many more. Because users are not paying an additional price for using these services, this form of economic output doesn't seem to be captured by conventional economic statistics. Leonard Nakamura, Jon Samuels, and Rachel Soloveichik offer some ways of thinking about the question in "Measuring the 'Free' Digital Economy within the GDP and Productivity Accounts," written for the Economic Statistics Centre of Excellence, an independent UK research center funded by Britain's Office for National Statistics (December 2017, ESCoE Discussion Paper 2017-3).

Essentially, they propose that the economic value of "free" content can be measured by the marketing and advertising revenue that it generates. In other words, you "pay" for "free" content not with money, but by selling a slice of your attention to advertising. Thus, their approach is a practical application of the saying: "If you're not paying for it, you're the product." They write:

”Free” digital content is pervasive. Yet, unlike the majority of output produced by the private business sector, many facets of the digital economy (e.g., Google, Facebook, Candy Crush) are provided without a market transaction between the final user of the content and the producer of the content. … Furthermore, because these technologies are so pervasive and have induced large changes in consumer behavior and business practice, these open questions have evolved into arguments that the exclusion of these technologies from the national accounts leads to a significant downward bias in official estimates of growth and productivity.

The first contribution of this paper is to provide an argument that, yes, it is possible to measure many aspects of the ”free” digital economy via the lens of a production account. … To be clear at the outset, this approach does not provide a willingness to pay or welfare valuation of the “free” content. But this approach does provide an estimate of the value of the content that is consistent with national accounting estimates of production.

We model the provision of “free” content as a barter transaction. Consumers and businesses receive content in exchange for exposure to advertising or marketing. Our approach reduces to treating the provision of the ”free” digital content as payment in kind for viewership services produced by households and businesses. Put differently, the national accounts currently ignore the role of households in the production of advertising and marketing. In our methodology, households are active producers of viewership services that they barter for consumer entertainment. …

We focus on two types of ”free” content: advertising‐supported media and marketing‐supported information. Advertising‐supported media includes digital content like Google search, but also more traditional content like print media and broadcast television. Marketing‐supported information includes digital content like so‐called freemium games for smartphones or recipes from BettyCrocker.com, but also more traditional content like print newsletters and audiovisual marketing. Conceptually, the barter transaction between the producer and user of “free” information is nearly identical to that with advertising‐supported media. The main difference is that advertising viewership is almost exclusively ”purchased” by media companies from the general public and then resold to outside companies. In contrast, the marketing viewership that is exchanged for “free” information is generally ”purchased” by nonmedia companies from potential customers and used in‐house.
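
To make the barter treatment concrete, here is a stylized sketch of the direction of the adjustment. The numbers are invented, and the paper's actual production-account treatment of intermediate versus final flows is more detailed than this.

```python
# A stylized sketch (invented numbers) of the barter treatment:
# households produce viewership services valued at what advertisers and
# marketers pay, and consume "free" content of equal value in exchange.
ad_and_marketing_spend = 100.0   # value of viewership bought from households
conventional_gdp = 20_000.0      # GDP as conventionally measured

household_output = ad_and_marketing_spend       # new household production
household_consumption = ad_and_marketing_spend  # content received in kind
assert household_output == household_consumption  # the barter balances

adjusted_gdp = conventional_gdp + household_output
print(f"GDP rises by {household_output / conventional_gdp:.2%}")
# Measured output rises by the advertising/marketing value of the
# content, which is why the authors' estimated adjustments are modest
# relative to total GDP.
```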

A number of interesting insights emerge from this approach. Here's a figure showing total US advertising spending over time as a share of GDP. Total advertising revenue has been fairly stable over time, with the abrupt fall in print advertising being mostly offset by a rise in digital advertising.
This figure shows total expenditures on marketing over time as a share of GDP. In this case, spending on print marketing has declined, but because of rising expenditures on digital marketing, total spending on marketing has risen by more than 1% of GDP in the last 20 years. 
Overall, measuring the value of the "free" digital economy has relatively little effect on output or trends in total factor productivity (TFP). They write (citations and footnotes omitted):

"We are particularly interested in the analysis of “free” digital content beginning in 1995 because that year has been previously identified as an inflection point in the production of information technology (IT) equipment. Moreover, that is when the Internet emerged as a significant source of ”free” content. We calculate that, from 1995 to 2014, our experimental methodology applied to digital content annually raises nominal GDP growth by 0.036 percentage point, real GDP growth by 0.089 percentage point, and TFP growth 0.048 percentage point. The growth of digital content is partially offset by a decrease in ”free” print content like newspapers. From 1995 to 2014, all “free” content categories together annually raise nominal GDP growth by 0.033 percentage point, raise real GDP growth by 0.080 percentage point, and raise TFP growth by 0.073 percentage point. … These revised numbers slightly ameliorate the recent slowdown in economic growth—but not nearly enough to reverse the slowdown."

This analysis seems broadly sensible and correct to me: for a previous argument along similar lines, see "How Well Does GDP Measure the Digital Economy?" (July 19, 2016). But it comes with a warning that applies to all discussions of economic output, and is recognized repeatedly by the authors here.

GDP is measured by the monetary value of what is bought and sold, but it doesn't measure consumer welfare (or "happiness" or "utility") in a direct way. Thus, it's possible that even if the gains to GDP from including "free" digital services are relatively small, those small gains are increasing consumer welfare and happiness by a much larger amount. Of course, one can make a similar argument that the monetary value of certain other outputs, from broadcast television back in the 1960s and 1970s, or the availability of aspirin, is a lot less than the consumer welfare generated by these products. Measuring "the economy" is an exercise in adding up sales receipts, while thinking about benefits and costs of economic patterns (as has long been recognized) is a much broader exercise.

The State of Play with Carbon Capture and Storage

Carbon capture and storage technology isn't likely to be the silver bullet that slays climate change by itself. But it may well be a necessary and meaningful part of the package of policy responses. Akshat Rathi has written a series of readable articles for Quartz magazine (listed here) that give a useful sense of the state of the technology in this area, and its real-but-limited potential.

For example, in an overview article on December 4, 2017, "Humanity’s fight against climate change is failing. One technology can change that," Rathi notes that before starting a year of research and writing on the topic, he was skeptical that carbon capture and storage could be cost-effective. However, he also concludes that this technology may be both necessary and possible.

On the issue of necessity, Rathi writes: "The foremost authority on the matter, the Intergovernmental Panel on Climate Change, has modeled hundreds of possible futures to find economically optimal paths to achieving these goals, which require the world to bring emissions down to zero by around 2060. In virtually every IPCC model, carbon capture is absolutely essential—no matter what else we do to mitigate climate change." (Rathi and David Yanovsky offer an interactive game to drive home this point here.)

On the issue of possibility, the evidence is scattered, and still more at the proof-of-concept stage than a full-fledged and ongoing industry. But some of these fledgling projects are intriguing. For example, Rathi discusses an operation in Iceland (discussed in more detail in an earlier article) which uses geothermal heat to capture carbon dioxide and inject it underground in a location where it combines with minerals to form solid rock–an operation which is an overall net subtraction of carbon from the atmosphere. Rathi notes:

"Since 2014, the plant has been extracting heat from underground, capturing the carbon dioxide released in the process, mixing it with water, and injecting it back down beneath the earth, about 700 meters (2,300 ft) deep. The carbon dioxide in the water reacts with the minerals at that depth to form rock, where it stays trapped. … In other words, Hellisheidi is now a zero-emissions plant that turns a greenhouse gas to stone. … Critics laughed at those pursuing a moonshot in “direct-air capture” only a decade ago. Now Climeworks is one of three startups—along with Carbon Engineering in Canada and Global Thermostat in the US—to have shown the technology is feasible. The Hellisheidi carbon-sucking machine is the second Climeworks has installed in 2017. If it continues to find the money, the startup hopes its installations will capture as much as 1% of annual global emissions by 2025, sequestering about 400 million metric tons of carbon dioxide per year."

In another article, Rathi discusses a plant which generates electricity from natural gas near Houston, in "A radical startup has invented the world’s first zero-emissions fossil-fuel power plant" (December 5, 2017). The process involves using "supercritical" carbon dioxide, at high temperatures and pressures. He writes:

"In the end, the Allam cycle is only slightly more efficient than typical combined-cycle systems. But it has the major added benefit of capturing all potential carbon dioxide emissions essentially for free. … Beyond the greenhouse-gas effect, carbon dioxide has some fascinating properties. At high pressure and temperature, for instance, it enters a state of matter where it’s neither a gas nor a liquid but has properties of both. It’s called a “supercritical fluid.” If you’ve ever had decaf coffee, you’ve likely been an unwitting customer of supercritical carbon dioxide, which is often used to extract caffeine from coffee beans with minimal changes to the taste."

China, which leads the countries of the world in carbon emissions, has been experimenting with carbon capture and storage, without yet making a strong commitment to the technology, as Rathi explains here (and a 2015 report from the Asia Development Bank discusses here).

There are a variety of new projects and possible innovations, whether to capture carbon from emissions at lower cost, to turn carbon dioxide into solids like soda ash, or to take other approaches. Carbon capture and storage isn't yet a proven large-scale technology, but it's a promising one.

For some previous posts on this topic, with links to various reports and articles, see:

No, a Seller Doesn't Have to Accept Cash

If private sellers wish to do so, they can require that payment be made in the form of credit cards or checks. They are not required to accept cash. The Federal Reserve crisply explains the law in this FAQ (last updated June 17, 2011):

Is it legal for a business in the United States to refuse cash as a form of payment?

Section 31 U.S.C. 5103, entitled "Legal tender," states: "United States coins and currency [including Federal reserve notes and circulating notes of Federal reserve banks and national banks] are legal tender for all debts, public charges, taxes, and dues."

This statute means that all United States money as identified above is a valid and legal offer of payment for debts when tendered to a creditor. There is, however, no Federal statute mandating that a private business, a person, or an organization must accept currency or coins as payment for goods or services. Private businesses are free to develop their own policies on whether to accept cash unless there is a state law which says otherwise.

The US Department of the Treasury has a very similar statement at its website.

There are, of course, a rising number of examples of sellers who do not accept cash: certain stores, parking garages, flight attendants on most airlines, and others. Other sellers may refuse to accept certain types of currency, like buses that won't take pennies for the fare, or stores that won't accept bills larger than a certain denomination.

My understanding is that no state enforces laws with actual teeth that require firms to take cash. (Massachusetts seems to have an unenforced law, without penalties, requiring that retailers accept cash.) It's not obvious that states should enact such laws, either. But as the use of cash fades in many contexts, it's worth remembering that not everyone has credit cards. For example, about one-quarter of all US families in the bottom 20% of the income distribution are "unbanked," and for those without a bank account, having access to plastic-based spending power can be difficult or costly or both.

When Invoking Poverty and Necessity is a Ruse

Consider a policy of giving every American $1,000. The argument made in support of the policy is that poor people need money to buy necessities. If you point out that helping poor people does not require writing a check to everyone, the response is to accuse you of being someone who opposes helping poor people.

The logical fallacy in this syllogism is obvious.

Premise A: A policy helps poor people.
Premise B: You oppose the policy.
Conclusion: You oppose helping poor people. 

Of course, it may be that what is actually opposed is not support for those in need, but the broader policy which promises benefits for everyone, and thus imposes a much higher cost. One might also support an alternative way of helping the poor, which involves different incentives. In this sense, there are a number of cases where invoking "the poor" or "basic necessities" is a ruse, designed to provide a smokescreen for policies that mainly seek to benefit the middle and upper classes. Once you are alerted to the dynamic, examples are numerous.

A standard example arises when some items are exempted from state sales taxes. Here in my home state of Minnesota, a number of items are exempt from state sales tax: food, clothing, home heating fuels, prescription drugs, certain medical devices, and caskets and funeral urns. The usual reason given for these exemptions is that these are "basic" items. The list of basic necessities is disputable: for example, only three other states exempt clothing from sales tax. But more broadly, the poverty rate in Minnesota is below 10%. If the goal is to help the bottom 10-20% of the income distribution in affording "basic" items, it is not sensible to exempt 100% of the population from sales tax on these items. It would be straightforward to collect the sales tax from everyone, and then rebate the money to the poor in some way. Or the poor could be issued a "no sales tax" ID card that would be shown to cashiers when buying certain goods.
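
Some illustrative arithmetic makes the targeting point vivid. Every number below is hypothetical, chosen only to show the structure of the comparison:

```python
# Hypothetical comparison: exempting everyone from sales tax on "basic"
# items versus collecting the tax and rebating it to poor households.
population = 5_000_000
poor_share = 0.10                 # assumed share of households that are poor
spending_per_person = 2_000       # assumed annual spending on exempt items
tax_rate = 0.07

exemption_cost = population * spending_per_person * tax_rate
rebate_cost = population * poor_share * spending_per_person * tax_rate
print(f"universal exemption: ${exemption_cost / 1e6:,.0f}M forgone")
print(f"rebate to the poor:  ${rebate_cost / 1e6:,.0f}M")
# With a 10% poverty rate, roughly 90% of the exemption's cost flows to
# households the policy was not claimed to be about.
```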

Another example on my personal list involves congestion tolls charged for traveling in certain lanes during peak commuting times. The Washington Post recently reported that newly established congestion toll lanes in northern Virginia were charging $34.50 on a certain day to travel 10 miles during the worst of the morning commute. The story quotes the concern that such tolls are "going to introduce a real hardship for people on low wages or working in the nonprofit or public sector," along with arguments that congestion tolls need to be capped. But of course, a price cap to make congestion fees "affordable" won't actually get rid of the congestion–and then no one will want to pay the fee, either. If the goal is to reduce transportation costs for people with low incomes, spending the congestion fees on subsidizing the mass transit that they use is likely to be a choice that helps more people in a more cost-effective manner.

Another example involves a recent proposal from the US Department of the Interior to raise the price of entry to 17 of the most popular national parks during their peak seasons. This fee increase doesn't seem especially well thought out: for example, the price for entry into these 17 parks during peak season would rise to $70 for a week of access, while the proposed fee for an annual pass to these same parks would be almost the same at $75. But those who go to national parks typically have above-average incomes, and what they spend on transportation, lodging, food, and gear is typically a lot more than the cost of entering the park. If the goal is to make national parks available and affordable to those with lower income levels, it would be fairly straightforward to set up a system where low-income people could receive vouchers. It would also be useful to provide additional low-cost mass transit and housing within the national parks. But low park entrance fees for everyone are not a cost-effective way of helping those with low incomes enjoy the parks.

Government programs that start out being aimed at the poor often find themselves expanding so that much of the assistance goes to other groups. A classic example is Community Development Block Grants, through which the federal government was going to provide funds to low-income communities. But unsurprisingly, the rules for allocating these funds involve more than average income levels in a community, and a lot of the funds end up flowing to destinations and purposes that don't seem to fit the broader intention of the program. Steven Malanga makes the case for "Let’s Kill the CDBG" in the Autumn 2017 issue of City Journal. He writes:

"These days, the CDBG hands out money for projects that have little to do with its poverty-combating mission. With an average annual family income of $67,000, well above the poverty line, Manchester, New Hampshire, is no one’s idea of a depressed community; but the city is spending $200,000 in block-grant money to fill an unused pool and convert it into a “splash pad.” Elgin, in suburban Illinois—with a poverty rate of just 8 percent—is sprucing up its parks with $740,000 in CDBG funds. Fast-growing Berkeley County in South Carolina is building a library and recreation complex, including a swimming pool and tennis courts, partly with block-grant money. In 2016, Monmouth County, New Jersey—average household income: $115,000—spent more than $110,000 in CDBG funds on enhancements to a publicly owned entertainment venue, the Count Basie Theater."

An April 2017 report from the Urban Institute hits similar notes in its review of the literature: "For example, wealthy suburbs may have older housing stock and low population growth but are not especially needy. … Studies have found that the formulae’s abilities to match funding to need have diminished over time."

Examples of programs that claim to be supporting the poor, while actually designed to confer equal or greater benefits on the non-poor, can easily be multiplied. For example, I've heard attempts to justify universal pre-K education because it might be helpful for low-income families. But subsidizing pre-K education for low-income families doesn't require subsidizing it for all. At the other end of the education spectrum, I've heard attempts to justify free college education for all because some people can't afford the costs–but it's not necessary to make it free for everyone in order to help a subset of the population. I've heard the tax deduction for mortgage interest defended as a way of encouraging homeownership, but even leaving aside the issue that most homebuyers by definition are not poor, giving a boost to first-time homebuyers can be done in a lot of ways that don't involve allowing deductibility of mortgage interest for all. The minimum wage is often defended as a way of helping the poor, but about half of those receiving the minimum wage are not actually below the poverty line. Rather than advocating a plan that seeks to boost the incomes of high school workers from middle-class families, there are a variety of other ways of subsidizing wages for low-income workers.

And of course, there are a number of international examples of this phenomenon as well, most prominently the many low-income countries that hold down prices or offer broad subsidies for basic necessities like fuel and food.

Many countries have had policies of keeping food prices low to help the poor, but found that most of the benefits went to the nonpoor. A 2014 story in the Economist noted: "In Burkina Faso, Egypt and the Philippines less than 20% of spending on food subsidies goes to poor households. In the Middle East and North Africa only 35% of subsidies reach the poorest 40%, the IMF reckons." Similar patterns often emerge for fuel subsidies, which can cost about $1 trillion annually in developing countries. Again, helping the poor could be done through cash transfers, or some form of direct distribution, or through vouchers–pretty much any approach targeted more specifically at the poor will be more effective than universal (or nearly universal) lower prices or subsidies.
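
The quoted shares translate directly into a cost per dollar delivered. A quick sketch, treating each quoted percentage as the share of subsidy spending that reaches the target group:

```python
# If only a share s of subsidy spending reaches the target group, then
# delivering $1 to that group costs $1/s in total subsidy spending.
examples = [
    ("food subsidies, Burkina Faso/Egypt/Philippines (<20% to poor)", 0.20),
    ("MENA subsidies (35% to the poorest 40%)", 0.35),
]
for label, s in examples:
    print(f"{label}: ${1 / s:.2f} spent per $1 reaching the target group")
# Even a targeted transfer with substantial overhead beats these ratios,
# which is the economic case for cash transfers or vouchers.
```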

I'm often in favor of programs that transfer resources to the poor–and only to the poor. But it's worth being wary of the political dynamic which uses the poor as a stalking horse for policies and programs with rather different effects.

Nicholas Stern Interviews Tony Atkinson: Poverty, Inequality, and the Economics Profession

All economists are at least somewhat familiar with the work of Tony Atkinson, who died a year ago on January 1, 2017. The Annual Review of Economics offers a tribute in "Tony Atkinson on Poverty, Inequality, and Public Policy: The Work and Life of a Great Economist," by Anthony Barnes Atkinson and Nicholas Stern (2017, pp. 1-20). More specifically, Lord Stern interviews Sir Tony. Here are some snippets, among many of the lively exchanges that caught my eye.

The Value of Measuring and Publicizing Poverty

Atkinson: [I]t’s one principle I work on: I won’t do something unless I actually see, firstly, that it’s something where I actually want to know the answer because I think it’s intellectually interesting but, secondly, that there is some potential way in which it’s feeding into what actually happens.

The European Social Indicators is a good example of that, because it came about because the European Union was, under Jacques Delors, becoming quite concerned about social dimensions and the fact that there was significant poverty in the European Union, which Delors did quite a lot to identify. There was movement, as it were, at the beginning of the 2000s to give it some more priority. And it turned out by coincidence or by chance that the Belgians had the presidency, and the Belgian Minister of Social Affairs was one of my former students. … Well, economists might well say, “Oh, it’s cheap talk.” We know that there are 125 million people in the European Union living in poverty, according to standard measures. But actually, it changed the climate of discussion because each country is peer-reviewed every year as to what their performance is according to indicators, and it is more than mildly embarrassing when, as in Germany at the moment, poverty is going up quite rapidly.

Restoring Official Measurements of Inequality in the United Kingdom, and Elsewhere

Atkinson: [C]learly, since about the early 1990s, I’ve been trying to get the government and other bodies to restore income distribution to being something that they actually publish data on. You have to remember, in this country—the UK—we dropped the income distribution statistics somewhere in the 1980s. After that, there were none.

Stern: We had a Royal Commission on the Distribution of Income and Wealth—

Atkinson: —which I was a member of, indeed. And we were sacked.

Stern: By Margaret Thatcher?

Atkinson: Yes, indeed. And after that, the income distribution statistics were stopped. The OECD [Organisation for Economic Co-operation and Development], for example, after putting their toe in the water in the 1970s, didn’t return to the subject for another 20 years. So the report that I did with Tim Smeeding and Lee Rainwater in 1995 for OECD (Atkinson et al. 1995) was the first time they’d had a publication on income distribution for 20 years.

On the Importance of Looking at Deviations from Pure General Equilibrium Thinking

Atkinson: I can remember the lecture given by Jacques Dreze, which you may have been at, which was called “the firm in general equilibrium theory.” He said, “How do you get the firm into general equilibrium theory? Well, you blow up a paper bag, and then you puncture it. … And so, you’ve let all the air out. The firm has no real existence.”

The Importance of Knowing How Economic Data is Generated

Atkinson: I think the other thing is that our understanding of data on the more macro side is much inferior to what it was. In the early days of national accounts, they were constructed by people who did macroeconomics, as well, people like Richard Stone, Paul Samuelson, James Meade, and so on.

Stern: The best of the best.

Atkinson: Exactly. They were doing work on constructing national accounts, so they knew perfectly well what they were using. Keynes, for example, knew how his younger colleagues were making up those numbers. I fear that, today, that’s one of the areas where people just don’t understand what they’re using, and the origin of the numbers should not just be a footnote point. … And I came across this when I wrote a review of how government output is measured, because the United States—still, as I understand it—measures government output according to the input. Some US economists say this is a general policy, but it is not; the European Union, and the UK as part of it, has been using an output-based measure for quite a long time. When we looked at this issue, we discovered that about half the difference in the recorded growth rates between the UK and the US was due to this difference in method.

On the problems of narrowness and publication pressure for young economists

Stern: Are we really helping create the all-around economist in a way that, perhaps, came more naturally earlier? 

Atkinson: One has to recognize there was this question, again, about change over time. When you and I were students, we could actually read the major journals—there were probably, at most, a dozen—and one could at least cast one’s eye down and see what was going on. They were all a lot less fat than they are today, too. So, I think one has to recognize the subject is partly the victim of its success. The profession is so much bigger, and there’s so much more research going on. But I think this has come at a cost. We have become too specialized, and people define themselves as being specialized economists, whereas I just think of myself as an economist. 

Now, if you meet people, they’ll say, “I’m a labor economist,” or, “I’m an IO [industrial organization] economist,” as if they belong to that tribe. I think that’s fine, but of course you then get seminars taking place on labor economics, which actually would’ve benefited enormously from the seminar on industrial organization that happened at the same time. And people just don’t talk to each other, and I think that’s a loss; at least all my cohorts had an appreciation of what was going on elsewhere. 

Stern: Is there something we could do? 

Atkinson: Well, I think it’s partly a question of training; that is, one needs to have more courses teaching people the appreciation of something rather than the identification of a thesis topic. But also I think the loss, in many places, of the general seminar is an example of an issue with the academic departments; when I was there, Harvard, to its credit, did have three general seminars a term, which were well attended. Probably 60 or 70 people, at least, would come to them. And the talks were, on the whole, at very appropriate levels. Of course, there are various forms of diffusion through media; they all serve this function. But I think it’s perhaps more an issue of persuading younger economists that this is something they ought to take more seriously than they do. …
Whenever I talk to a would-be graduate student, I say, “What is it you want to know?” I’m sure you do the same. Not having an answer to that question is a weakness, and, in some way, it’s partly due to the professionalization. People are doing economics as a profession rather than because they’re really interested in the answers. …

Well, I think the position of young economists is actually very difficult at the moment, at least as far as the academic sphere is concerned, because we’ve now moved to a pretty unforgiving judgement based on journal publication. This means they’re under great pressure, which is often very hard for them to satisfy in the sense that everyone is trying to publish in top journals. I think this pressure affects the choice of subject matter and the style of economics. It’s much easier to publish, I suspect, theoretical than applied economics in major journals; it is certainly easier to publish theory than applied economics concerned with countries other than the US. I think that young economists are being pressured into a very difficult situation where their academic careers are related to things that are often quite opposed to what they want to do. If you ask them what the question is that they want an answer to, many of them have a very good response: They’re doing economics because there is something they really want to find out, they’re really concerned about some particular issue, or they’ve read something that really inspired them and that they want to follow up. I often find it very difficult to advise them. My instinct is to say, “Follow your instincts,” but, on the other hand, they may never get jobs.

Gender Mix in AP Economics

About 30% of undergraduate economics degrees given to US citizens and permanent residents go to women. Perhaps unsurprisingly, about 30% of the PhD degrees in economics given to US citizens and permanent residents go to women, too. A sizable literature has tried to spell out possible reasons for these patterns. Here, I will point out that the underrepresentation of females in economics starts before college, and can be seen in the number of students taking the AP economics examinations in micro and macro.

The data discussed below are from the AP Data–Archived Data 2016 (scroll down to the National and State Summary Reports, and then look just at the National Data for the US).

More male students than female students take the AP economics exams, although overall, females take more AP exams than males. The scores of males are higher on average, and the number of males with a score of 4 or 5 on the exams is higher, too. I have no wish to overinterpret these numbers, but a couple of quick thoughts seem fair (a small illustrative calculation follows these points).

1) These differences suggest that among high school students who are thinking about college, and thus taking AP exams, more males than females have an interest in economics. Moreover, a greater number of males have received positive feedback for that interest, in the form of a high score.

2) To the extent that many students arrive in college with at least a fuzzy idea of the areas in which they might wish to concentrate, colleges that want to move toward more gender balance in their undergraduate economics enrollments will need to overcome patterns that were already set in high school. If the pipeline of women who major in economics isn't expanded substantially, it will be difficult for the number of women who earn PhD degrees in economics to expand substantially.
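For readers who want to compute this sort of tabulation from the AP summary reports themselves, here is a minimal sketch. The counts in it are placeholders for illustration, not the actual 2016 figures, which are available in the archived reports.

```python
# Placeholder counts in the spirit of the AP national summary tables;
# these are NOT the actual 2016 figures.
exams = {
    "microeconomics": {"male": 50_000, "female": 35_000},
    "macroeconomics": {"male": 75_000, "female": 55_000},
}

for subject, counts in exams.items():
    total = counts["male"] + counts["female"]
    female_share = counts["female"] / total
    print(f"AP {subject}: {total:,} exams, {female_share:.0%} taken by females")
```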

"If You\’re Not Paying for It, You\’re the Product"

This blog is free in monetary terms. I don't pay a fee to an internet company; readers don't pay a fee to me. The costs are mainly in terms of time: that is, I spend time writing the posts, and readers spend time looking them over. But while it's comforting and even partially true to think of this blog as a public service provided for my own devious reasons, the software is provided and the hosting is done by Google. Thus, I'm working without a monetary return to draw your attention to Google, and you are providing your attention to Google.

All of which serves as a reminder of a saying that I've seen repeated a number of times in various forms during the last few years: "If you're not paying for it, you're the product." Fortunately, I didn't have to track down the origins of this quotation, because the Quote Investigator website already did so last summer.

The renaissance of this sentiment seems to trace back to a comment on the Metafilter website in 2010:

If you are not paying for it, you're not the customer; you're the product being sold.
posted by blue_beetle at 1:41 PM on August 26, 2010

The comment was then picked up and amplified by other writers. It turns out that "blue_beetle" is the username of Andrew Lewis.

But the first clear enunciation of the aphorism that the audience of mass media is the product, not the customer, seems to date back to a 7-minute 1973 movie by Richard Serra and Carlota Fay Schoolman called "Television Delivers People" (and watchable with the magic of YouTube). The movie is almost entirely a slow scroll of text, one sentence at a time, with spaces between the sentences (to allow time for your deeper contemplation) and muzak playing in the background. It's the kind of movie you would watch in a modern art museum. The scroll starts like this:

\”The product of Television, Commercial Television, is the Audience. 

Television delivers people to an advertiser. 

There is no such thing as mass media in the United States except for television. 

Mass media means that a medium can deliver masses of people. 

Commercial television delivers 20 million people a minute. 

In commercial broadcasting the viewer pays for the privilege of having himself sold. 

It is the consumer who is consumed. 

You are the product of t.v.

You are delivered to the advertiser, who is the customer. 

He consumes you. 

The viewer is not responsible for programming——

You are the end product. 

You are the end product delivered en masse to the advertiser. 

You are the product of t.v.\”

The text goes on to mention the NEW MEDIA STATE (in capital letters, natch) run by corporations to indoctrinate us all in materialism. Setting aside the giggle-worthy levels of portentousness and pretentiousness, here are a few thoughts:
In thinking about the social effects of the internet and social media, it's worth remembering that many of the same issues were raised with some force about television. American households have a television turned on about eight hours per day, and time use surveys suggest that Americans spend more than half of their five hours of "leisure time" in a given day watching television. There does seem to be a shift away from watching television screens toward watching other screens. But the ability of screens to draw our attention is not new.

In economic terms, the value of broadcast television (and radio) was determined by the revenue collected, which for a long time was mostly advertising revenue. Similarly, when economists today try to put an economic value on the "free" services from Google and others, they use advertising (and other) revenues to estimate how the attention of the audience is valued in the market.
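As a stylized version of that valuation logic: divide advertising revenue by the total hours of attention that generate it. Every figure in the sketch below is a hypothetical placeholder, not actual company data.

```python
# All figures are hypothetical placeholders, not actual company data.
ad_revenue = 100e9       # annual advertising revenue, in dollars (assumed)
users = 1.5e9            # people using the "free" service (assumed)
hours_per_user = 300     # annual hours of attention per user (assumed)

value_per_hour = ad_revenue / (users * hours_per_user)
print(f"implied market value of attention: ${value_per_hour:.2f} per hour")
```

On these made-up numbers, an hour of audience attention is worth about 22 cents to the platform; the point is the method, not the magnitude.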

Analogies between different technologies aren't likely to be perfect, of course, and the analogy between television and the internet is no exception. Internet screens offer some greater possibilities for audience participation: as game-players, content providers (written, musical, video), commenters, shoppers, and so on. But the two share the characteristic that content comes and goes, while the platform through which the content is provided lives on. And they share the characteristic that, while much of the attention given to screens is provided in a household context, there is ongoing social pressure to be part of the in-group that saw the video clip, the picture, the tweet, the Instagram or Facebook update, the article, the game.

In the old-time days of broadcast television, this pressure may have been a little less, because if you missed a certain TV show, you wouldn't be able to see it again until the summer re-runs. But social media is asynchronous, so even if you don't see something when it first appears, you can check it out an hour or a day or a week later. Content on old-time broadcast television was like catching a bus that came by now and then; modern internet media is a treadmill where, every time you step off, you can step right back on again.

The most fundamental and unbending of all economic tradeoffs is that none of us gets more than 24 hours in a day. For all of us, it is worth considering which roles we actually play for hours each day, whether looking at screens or otherwise.