In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
After recessions, the U.S. economy has typically had a period of bounce-back growth, which then catches the economy up to the path of “potential GDP” it would have been on prior to the recession. One of the imponderable questions in the aftermath of the Great Recession that lasted from 2007-2009 was when–or if–this bounceback growth would arrive. The nonpartisan Congressional Budget Office offers a prediction in its just-released “The Budget and Economic Outlook: Fiscal Years 2013 to 2023” that “economic activity will expand slowly in 2013 but will increase more rapidly in 2014.” In other words, the long-delayed period of bounceback growth is now at least visible on the horizon. Here’s the forecast in pictures.
“Potential GDP” refers to how much an economy could produce with full employment of workers and productive capacity. The blue line shows growth in potential GDP; the gray line shows the actual course of the economy during the recession and its aftermath. Notice the catch-up growth, bringing the economy back to potential GDP.
You can also see the catch-up growth in the CBO predictions for the annual growth rate of GDP.
As the catch-up growth arrives, the CBO is also predicting that the unemployment rate will drop briskly.
In addition, some measures of the grim economy will start to reverse themselves. For example, real investment in residential housing (that is, in building and remodeling houses), after several years of negative growth, has moved back to positive territory.
The underlying story here is that when a recession is accompanied by a financial crisis, the economic bounceback can be painfully slow. When households and firms across the economy are all feeling that they have borrowed too much, and need to get their financial houses back in order, it takes time. The U.S. recession has actually been somewhat shallower and less prolonged than the experience in many other countries when they faced the double-whammy of financial crisis and recession, as I discussed here. But by fits and starts, the economic bounceback process does eventually work its way forward.
In the last few years, from 80-90% of all e-mail traffic has been spam. This imposes a considerable cost in terms of computer security and people’s time. In the Summer 2012 issue of my own Journal of Economic Perspectives, Justin M. Rao and David H. Reiley discuss “The Economics of Spam” and conservatively estimate social costs to businesses and consumers of about $20 billion per year. But in 2012, it looks as if the tide may be turning against e-mail spam, at least a bit.
Some evidence comes from monitoring of spam done by Kaspersky Lab, a seller of information technology security services. In particular, Darya Kudkova has written the “Kaspersky Security Bulletin: Spam Evolution 2012.” The first bar chart, from the Economist magazine, relies on Kaspersky Lab data to show monthly patterns of spam from 2006 up through 2012. The second bar chart, from Kudkova’s report, shows monthly spam patterns just during 2012.
The Rao and Reiley paper in JEP offers an extended discussion of how the spam wars have evolved over time. (Here is my post on this paper from last August.) As an example, they describe a study in which a group attempted to send 345 million spam e-mails, but three-quarters were blocked when the server was blacklisted. The 82 million e-mails that escaped the blacklist then had to run the normal gauntlet of anti-spam software, and ultimately there were just 28 purchases.
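For a sense of how thin the economics of spam are, the numbers from that study can be run through a quick back-of-the-envelope calculation (the figures are as reported above; the rounding is mine):

```python
# Back-of-the-envelope arithmetic on the spam study described above
# (345 million attempted, 82 million past the blacklist, 28 purchases).
sent = 345_000_000
escaped_blacklist = 82_000_000
purchases = 28

escape_share = escaped_blacklist / sent
conversion = purchases / escaped_blacklist

print(f"Share of spam that escaped the blacklist: {escape_share:.0%}")
print(f"Purchases per e-mail past the blacklist: {conversion:.1e}")
print(f"One purchase per {escaped_blacklist // purchases:,} delivered e-mails")
```

Even before the final layer of anti-spam filters, that is roughly one sale per three million delivered messages, which is why even small increases in sending costs can make spam uneconomic.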
However, the Kaspersky report suggests that even better anti-spam software has been the main driver of the decline. “This continual and considerable decrease in spam volumes is unprecedented…. The main reason behind the decrease in spam volume is the overall heightened level of anti-spam protection. To begin with, spam filters are now in place on just about every email system, even free ones, and the spam detection level typically bottoms out at 98%. Next, many email providers have introduced mandatory DKIM signature policies (digital signatures that verify the domain from which emails are sent).”
The other big change mentioned in the report is that, partly as a result of the improvements in shutting down spam on e-mail, the spammers are trying to use other pathways to your credit cards. The Kaspersky report comments:
“When anti-spam experts answer questions about what needs to be done in order to reduce the amount of spam, in addition to anti-spam legislation, quality filters and user education, one factor that is always mentioned is inexpensive advertising on legal platforms. With the emergence of Web 2.0, advertising opportunities on the Internet have skyrocketed: banners, context-based advertising, and ads on social networks and blogs. Ads in legal advertising venues are not as irritating for users on the receiving end, they aren’t blocked by spam filters, and emails are sent to target audiences who have acknowledged a potential interest in the goods or services being promoted. Furthermore, when advertisers are after at least one user click, legal advertising can be considerably less costly than advertising through spam.
“Based on the results from several third-party studies, we have calculated that at an average price of $150 per 1 million spam emails sent, the final CPC (cost per click, the cost of one user using the link in the message) is a minimum of $4.45. Yet the same indicator for Facebook is just $0.10. That means that, according to our estimates, legal advertising is more effective than spam. Our conclusion has been indirectly confirmed by the fact that the classic spam categories (such as fake luxury goods, for example) are now switching over to social networks. We have even found some IP addresses for online stores advertising on Facebook that were previously using spam.”
“Advertisers have also been drawn to yet another means of legal Internet promotion: coupon services, or group discount websites where users can purchase so-called coupons. These services appeared several years ago. After a user buys a coupon, he/she presents it when purchasing a product or service and receives a discount. In 2012, coupon services gained a lot of popularity. Many companies around the world are striving to grow their client base, and in turn, clients receive generous offers. … The popularity of coupon services has made the migration of advertisers from spam to other platforms more noticeable. At the same time, the prevalence of coupon services has had an impact on spam. Malicious users have started to copy emails from major coupon services, using the originals to advertise their own goods or services, or to lure users to a malicious website.”
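The cost-per-click comparison in the quoted passage implies a strikingly low click rate for spam. A small sketch of the arithmetic, using only the numbers Kaspersky reports:

```python
# Implied click-through arithmetic from the Kaspersky figures quoted above:
# $150 per million spam e-mails sent, a minimum CPC of $4.45 for spam,
# and a CPC of $0.10 for Facebook ads.
cost_per_million = 150.0
spam_cpc = 4.45
facebook_cpc = 0.10

clicks_per_million = cost_per_million / spam_cpc
cpc_ratio = spam_cpc / facebook_cpc

print(f"Implied clicks per million spam e-mails: about {clicks_per_million:.0f}")
print(f"Spam costs roughly {cpc_ratio:.1f} times as much per click as Facebook")
```

In other words, at $150 per million messages, a $4.45 cost per click means only a few dozen recipients per million ever click at all.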
In other words, the economics of sending and screening emails, together with the economics of online advertising, is tipping the balance a bit against e-mail spam. Getting people to click on offers in random emails is becoming more costly; getting people to click on legal advertisements is becoming cheaper. Before you send a credit card number, be sure you know who is at the other end.
My father the mathematician first posed the checkerboard puzzle to me back in grade-school, perhaps on some rainy Saturday. His version of the story went something like this:
The jester performs a great deed, and the king asks him how he would like to be rewarded. The jester is aware that the king is a highly volatile individual, and if the jester asks for too much, the king might just kill him then and there. The jester also knows that the king views his promise as sacred, so if the king says “yes” to the jester’s proposal, then the king will honor that promise. So in a way, the jester’s problem is how to ask for a lot, but have the king at least initially think it’s not very much, so that the king will give his consent.
So the jester clowns around a bit and then says: “Here’s all I want. Take this checkerboard. On the first square, put one piece of gold. On the second square, two pieces. On the third square, four pieces, and on the fourth square, 8 pieces. Double the amount on each square until you reach the end of the checkerboard.”
In the story, the king laughs at this comic proposal and says, “Your great deed was so wonderful, I would have happily done much more than this! I grant your request!”
But of course, when the king starts hauling up gold pieces from the treasury, he will discover that the final square of the checkerboard alone, at 2 raised to the 63rd power, requires about 9 quintillion gold pieces (that is, a 9 followed by 18 zeros).
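The arithmetic behind the punch line is easy to check: square n of the 64-square board holds 2 to the power (n-1) pieces, so:

```python
# Gold on the jester's checkerboard: square n holds 2**(n-1) pieces,
# so the 64th and final square alone holds 2**63.
last_square = 2 ** 63
whole_board = 2 ** 64 - 1  # a doubling series sums to one less than the next double

print(f"Final square: {last_square:,} pieces")   # about 9.2 quintillion
print(f"Entire board: {whole_board:,} pieces")   # just shy of twice the final square
```

The closed form for the total, 2 to the 64th minus 1, is itself a nice illustration of why the last doubling dominates: the final square holds one more piece than all the previous squares combined.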
I’ve had some sense of the power of exponential growth ever since. But what I hadn’t thought about is the interaction of Moore’s Law and economic growth. Moore’s Law is of course named for Gordon Moore, one of the founders of Intel, who noticed this pattern back in 1965. Back in the 1970s, he wrote a paper that contained the following graph showing how much it cost to produce a computer chip with a certain number of components. Here’s his figure. Notice that the number of components on the horizontal axis and the cost figures on the vertical axis are both graphed on logarithmic scales (specifically, each step up the axis is a change by a factor of 10). The key takeaway was that the number of transistors (“components”) on an integrated circuit was doubling about every two years, making computing power much cheaper and faster.
Ever since I started reading up on Moore’s law in the early 1980s, there have been predictions in the trade press that it will soon reach technological limits and come to an end. But Moore’s law marches on: indeed, the research and innovation targets at Intel and other chip-makers are defined in terms of making sure that Moore’s law continues to hold for at least a while longer. Stephen Shankland offers a nice accessible overview of the current situation in an October 15, 2012, essay on CNET: “Moore’s Law: The rule that really matters in tech.” (The Gordon Moore graph above is copied from Shankland’s essay.)
As Shankland writes: “To keep up with Moore’s Law, engineers must keep shrinking the size of transistors. Intel, the leader in the race, currently uses a manufacturing process with 22-nanometer features. That’s 22 billionths of a meter, or roughly a 4,000th the width of a human hair.” He cites a variety of industry and research experts to the effect that Moore’s law has at least another decade to run–and remember, a decade of doubling every two years means five more doublings!
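To put Shankland’s numbers in perspective, here is a quick sanity check. The hair width of about 90 micrometers is a typical textbook figure and my assumption, not a number from his essay:

```python
# Scale of a 22-nanometer process feature against a human hair
# (hair width of ~90 micrometers is my assumption, not Shankland's figure).
hair_width_m = 90e-6
feature_m = 22e-9

print(f"Hair width / feature size: about {hair_width_m / feature_m:,.0f}")

# Moore's-law doublings over the next decade, at one doubling per two years.
years = 10
doublings = years // 2
print(f"{doublings} doublings in a decade: a factor of {2 ** doublings}")
```

The ratio comes out a little above 4,000, consistent with the “roughly a 4,000th” in the quote, and the five doublings give the factor of 32 discussed below.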
It’s hard to wrap one’s mind around what it means to say that the power of microchip technology will increase by a factor of 32 (doubling five times) in the next 10 years. A characteristically intriguing survey essay from the January 10 issue of the Economist on the future of innovation uses the checkerboard analogy to think about the potential effects of Moore’s law. Here’s a comment from the Economist essay:
“Ray Kurzweil, a pioneer of computer science and a devotee of exponential technological extrapolation, likes to talk of “the second half of the chess board”. There is an old fable in which a gullible king is tricked into paying an obligation in grains of rice, one on the first square of a chessboard, two on the second, four on the third, the payment doubling with every square. Along the first row, the obligation is minuscule. With half the chessboard covered, the king is out only about 100 tonnes of rice. But a square before reaching the end of the seventh row he has laid out 500m tonnes in total—the whole world’s annual rice production. He will have to put more or less the same amount again on the next square. And there will still be a row to go.
“Erik Brynjolfsson and Andrew McAfee of MIT make use of this image in their e-book “Race Against the Machine”. By the measure known as Moore’s law, the ability to get calculations out of a piece of silicon doubles every 18 months. That growth rate will not last for ever; but other aspects of computation, such as the capacity of algorithms to handle data, are also growing exponentially. When such a capacity is low, that doubling does not matter. As soon as it matters at all, though, it can quickly start to matter a lot. On the second half of the chessboard not only has the cumulative effect of innovations become large, but each new iteration of innovation delivers a technological jolt as powerful as all previous rounds combined.”
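The rice numbers in the Economist’s version can be checked the same way. Here is a sketch assuming roughly 25 milligrams per grain of rice; that per-grain weight is my assumption, chosen because it matches the magnitudes in the passage, not a figure from the article:

```python
# Cumulative rice on the chessboard: grains through square n total 2**n - 1.
GRAMS_PER_GRAIN = 0.025   # assumed ~25 mg per grain (my assumption)
GRAMS_PER_TONNE = 1e6

def cumulative_tonnes(square):
    """Tonnes of rice laid down through the given square (1-indexed)."""
    return (2 ** square - 1) * GRAMS_PER_GRAIN / GRAMS_PER_TONNE

print(f"Half the board (square 32): about {cumulative_tonnes(32):,.0f} tonnes")
print(f"Whole board (square 64): about {cumulative_tonnes(64):.1e} tonnes")
```

At these assumptions the half-board total lands close to the “about 100 tonnes” in the passage, while the full board works out to hundreds of billions of tonnes, hundreds of times the world’s annual rice crop.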
Now, it’s of course true that doubling the capacity of computer chips doesn’t translate in a direct way into a higher standard of living: there are many steps from one to the other. But my point here is to note that many of us (myself included) have been thinking about the changes in electronics technology a little too much like the king in the checkerboard story: that is, we think of something doubling a few times, even 10 or 20 times, and we know it’s a big change, but it somehow seems within our range of comprehension.
But when something has already been doubling every 18 months or two years for a half-century–and it is continuing to double!–the absolute size of each additional doubling is starting to get very large. I lack the imagination to conceive of what will be done with all this cheap computing power in terms of health care, education, industrial process, communication, transportation, entertainment, food, travel, design, and more. But I suspect that these enormous repeated doublings, as Moore’s law marches forward in the next decade and drives computing speeds up and prices down, will transform lives and industries in ways that we are only just starting to imagine.
Of course, “literacy” is a somewhat elastic term, ranging from the most basic functional literacy that lets a person handle day-to-day tasks like reading a map or a drive-through menu, up to the ability to read more complex and specialized texts with comprehension of strengths and weaknesses. Here’s a figure showing various dimensions of literacy and how U.S. children are performing. Nearly all children manage the basics like letter recognition, and beginning and ending sounds, by second grade. But even by 8th grade, a large share of students have real problems with being able to read well enough to evaluate text, especially when faced with nonfiction or with complex syntax.
What are the trends in literacy over time? Performance hasn’t changed much. Here’s a figure showing reading and math test scores on the National Assessment of Educational Progress (NAEP) tests, from 1971 to 2008. There appears to be a bit of a rise for 9-year-old readers in recent years, but at least so far, that hasn’t translated to higher reading scores for 13- or 17-year-olds.
Of course, these figures are averages, and it’s always important to remember the tails of the distribution. “At any given age, students vary considerably in their literacy abilities. For example, at age nine, students scoring at the 10th percentile can carry out simple discrete reading tasks (such as following brief written directions), while students scoring at the 90th percentile are already able to make generalizations and interrelate ideas. … Roughly 10 percent of seventeen-year-olds have knowledge-based competencies lower than those of the median nine-year-old student.”
How does the literacy of U.S. students stack up against those in other high-income countries?
“On international comparisons, American students perform modestly above average compared with those in other OECD countries, and well above average among the larger set of countries for which the PIRLS [Progress in International Reading Literacy Study] and PISA [Programme for International Student Assessment] studies provide comparative data. Moreover, there is no evidence that U.S. students lose ground relative to those in other countries during the middle-school years. Between ages ten and fifteen, when most students are learning crucial comprehension and evaluation literacy skills, students in the United States appear to learn at a rate that places them at the average among OECD countries. This evidence of average to above-average performance of U.S. students on literacy assessments is in stark contrast to the poor relative performance of U.S. students on internationally administered math and science assessments.”
And with that comment the authors touch on a point that nags at me from time to time. Literacy doesn’t always get the attention given to STEM education: that is, science, technology, engineering, and mathematics. Literacy lacks a cute acronym. It’s a softer subject in some ways: teaching about deeper levels of comprehension can’t be as black-and-white as balancing a chemical formula or solving a geometry problem. As the authors suggest at various points, the curriculum pathway to teaching literacy and spoken skills after the most basic level is less clear-cut.
But as I often try to emphasize with students, a national economy is not like an Olympic team, where a few performers can win medals while the rest of us couch potatoes sit at home and watch. In the economy, the vast majority of adults participate, and the economy performs better when workers at all levels have more human capital. For many workers, literacy is much more at the core of their job responsibilities than are specific technical or statistical capabilities. Reardon, Valentino, and Shores offer a useful reminder on the core importance of literacy:
“Literacy—the ability to access, evaluate, and integrate information from a wide range of textual sources—is a prerequisite not only for individual educational success but for upward mobility both socially and economically. In addition, because much of the growth in the economy in recent decades has been in areas requiring moderate- to high-level literacy skills, economic growth in the United States relies increasingly on the literacy skills of the labor force. Finally, in an information-rich age, thoughtful participation in democratic processes requires citizens who can read, interpret, and evaluate a multitude of often-conflicting information and opinions regarding social and political choices.”
What’s to be done about Medicare? The Kaiser Family Foundation has usefully pulled together a list of possible “Policy Options to Sustain Medicare for the Future.” I especially liked that the report is fairly exhaustive in listing about 130 options (depending on how one counts options, suboptions, and sub-suboptions), and fairly honest in admitting that realistic cost estimates are not available for many of those options. Here, I’ll start with a quick reminder of where Medicare is currently headed, and then list just 12 of the choices–those that in the KFF tally would reduce Medicare costs or raise Medicare taxes by at least $4 billion per year over the next few years.
Medicare spending is taking off for two reasons: as the baby boomers retire, a rising proportion of Americans will become eligible, and continually rising health care costs will push up spending still further. The first figure shows projections for the rising number of Medicare enrollees and Medicare spending as a share of GDP. The second figure shows Medicare spending projected as a rising share of the overall federal budget.
Discussions of how to fix Medicare often head for happy talk about how, if we all just provide patients and doctors with the right information and incentives, and link them together with the right network of health information technology and thoughtful counselors, we can save billions while improving everyone’s health. For a recent example, see this report from the UnitedHealth Center for Health Reform & Modernization, which suggests that steps along these lines could save up to $542 billion in Medicare and Medicaid spending over the next decade. It’s a cheerful story, and I’m certainly fine with pursuing these kinds of win-win possibilities. But the U.S. health care system has been facing ever-rising costs and talking about win-win solutions for several decades. While we’re waiting for the cost savings from these kinds of more enlightened and efficient practices to arrive, we need to start thinking about some less pleasant options.
Here’s the list of 12 possibilities from the KFF report that would involve Medicare cost savings or revenue increases of at least $4 billion per year. In that report, all the proposals for better information sharing and quality control and improved decision making by patients and providers have the effect on costs and revenues listed as “Not available,” which seems fair to me, given historical experience with attempts along these lines as overall health care costs have continued to rise. What’s left are choices that sting (with the effect on costs or revenues in parentheses). The KFF report gives a couple of pages of more detailed explanation for each of these, along with the other 100+ choices.
1) Raise the age of Medicare eligibility from 65 to 67 ($113 billion over 10 years)
2) 10% coinsurance payment on all home health episodes ($40 billion over 10 years)
3) Restrict first-dollar Medigap coverage ($53 billion over 10 years)
4) Increase premiums for Part B and Part D: for example, raise Part B premiums by 2% per year until they cover 35% of total Part B expenses ($231 billion over 10 years)
5) Increase Medicare payroll tax by 1 percentage point for all workers ($651 billion over 10 years)
6) Require manufacturers to pay a minimum rebate on drugs covered under Medicare Part D for beneficiaries receiving low-income subsidies ($137 billion over 10 years).
7) Repeal provisions in the Affordable Care Act that would close the Part D coverage gap by 2020 ($51 billion over 10 years)
8) Reduce and restructure graduate medical education payments to hospitals ($69 billion over 10 years)
9) Rebase skilled nursing facility (SNF) and home health payment rates: for example, reduce payment updates for post-acute care by 1.1 percentage points ($45 billion over 10 years)
10) Adopt traditional tort reforms at the Federal level ($40 billion to $57 billion over 10 years)
11) Establish a combined deductible, uniform coinsurance rate, and a limit on out-of-pocket spending, along with Medigap reforms ($93 billion over 10 years)
12) Set Federal contributions per beneficiary at the average plan bid in a given area, including traditional Medicare as a plan, weighted by enrollment ($161 billion over 10 years)
A few thoughts:
1) One of the policy changes would dramatically increase costs. Congress has been playing a game for years now in which it lowballs the future costs of Medicare by proposing very large cuts in payments to health care providers that will take place a few years in the future. Then Congress perpetually pushes back those cuts. To their credit, the official Medicare actuaries have been quite blunt in pointing out “Why Official Medicare Costs are Understated.” But if, for example, the currently legislated future cuts in payments to health care providers were replaced with a 10-year freeze on fees and “only” a 5.9% cut in fees for non-primary care services each year for the first three years, Medicare costs would be $200 billion higher over 10 years than the current legislative estimates. If fees for health care providers rise at the rate of GDP growth, or a percentage point or two faster, then Medicare costs will be $300 billion or more higher over the next 10 years. Thus, take your first few hundred billion in cost savings or revenue increases above, and assume that it’s going to go to sidestepping the huge future cuts to health care providers in current legislation.
2) I did leave out a few proposals on the KFF list for increasing taxes on other items and earmarking the funds for Medicare. For example, one could raise taxes on alcohol, tobacco, soft drinks, or employer-provided health insurance and earmark the funds for Medicare. But one could also raise those taxes and spend the money on deficit reduction or some other program, so at least to me, these are not specifically “Medicare” reforms.
3) Just for the record, you can’t just add up the cost estimates for several of these proposals, because they interact in various ways. For example, option #3 on restricting first-dollar Medigap coverage overlaps heavily with option #11 on Medigap reforms. If the Medicare age were raised to 67, it would alter the cost changes from all of the other proposals.
My bottom line is that too many of the arguments over Medicare spending are magically nonspecific. Sometimes they describe innovations in health care delivery that would improve health and save money and leave everyone with a big rosy smile. I’m all for such changes, and I’ll believe in their effectiveness as soon as they are actually effective in reducing costs–but not before. Other times, politicians talk tough about how they will just put a cap on Medicare spending, or just not let it rise faster than some set rate. Again, I’ll believe in the workability of such caps when I’ve seen them operate for a few years.
In contrast, the list above is not a pleasant one. Some of these proposals reduce coverage for the elderly or require them to pay more. Some reduce payments to health care providers. One raises taxes on current workers. I am fully aware that none of these are popular options! Which options are more palatable is an argument for another day. But these are real choices, and the inexorable arithmetic of Medicare\’s rising costs is likely to force choices among these sorts of options.
“What makes ideas so remarkable is their capacity for shared use. A bottle of valuable medicine can heal one person, but the formula that is used to make the medicine is as valuable as the total number of people on Earth. Economists call this concept “non-rivalry.”… There is a saying that you all know that we use to capture this character of non-rivalry: If you give someone a fish, you feed them for a day, but if you teach someone to fish, you destroy another aquatic ecosystem.”
For me, the classic statement about the economic power of ideas and their relation to the patent system comes from Thomas Jefferson, in a letter he wrote in 1813:
“If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation.
“Inventions then cannot, in nature, be a subject of property. Society may give an exclusive right to the profits arising from them, as an encouragement to men to pursue ideas which may produce utility, but this may or may not be done, according to the will and convenience of the society, without claim or complaint from anybody. Accordingly, it is a fact, as far as I am informed, that England was, until we copied her, the only country on earth which ever, by a general law, gave a legal right to the exclusive use of an idea. In some other countries it is sometimes done, in a great case, and by a special and personal act, but, generally speaking, other nations have thought that these monopolies produce more embarrassment than advantage to society; and it may be observed that the nations which refuse monopolies of invention, are as fruitful as England in new and useful devices.”
I’ll just admit up front that the vast inequities that exist even before children start school bother me, and that I am predisposed to favor programs that would help disadvantaged children early in life. Thus, I was delighted when Head Start announced some years back that it was going to carry out a randomized controlled trial–that is, to assign some preschool children randomly to Head Start and others not–so that it would be possible to do a statistically meaningful test of how well Head Start worked. I presumed that the test would provide ammunition for my pre-existing views.
But as the evidence has built up, Head Start is failing its test. The latest evidence appears in the “Third Grade Follow-up to the Head Start Impact Study: Final Report,” which was released in December. The report was carried out by a company called Westat and published by the Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. Basically, the report shows that Head Start provides short-term gains to preschool children, but those gains have faded to essentially nothing by third grade.
To appreciate how depressing this conclusion is, you need to appreciate the high quality of the study. It’s based on a nationally representative sample of more than 5,000 three- and four-year-olds from low-income families who were eligible for Head Start. These children were randomly assigned either to Head Start or not. Data collection started in 2002, and so by 2008, data was available on how the children were performing in third grade. The study didn’t just look at test scores: it considered a range of data on how Head Start might affect aspects of cognitive development, social-emotional development, health status and services, and even parenting practices.
The findings are summarized in this way: “In summary, there were initial positive impacts from having access to Head Start, but by the end of 3rd grade there were very few impacts found for either cohort in any of the four domains of cognitive, social-emotional, health and parenting practices. The few impacts that were found did not show a clear pattern of favorable or unfavorable impacts for children.”
Of course, because I am predisposed to favor these kinds of programs, I look for silver linings. Perhaps for certain specific subgroups such preschool programs can be useful? Perhaps certain kinds of curriculum are more likely to make a lasting difference? Perhaps helping children from low-income families catch up before they start school is insufficient, but a sustained set of interventions continuing through elementary school would show lasting results? Sometimes studies of early preschool interventions have found little gain in measured outcomes a few years into school, but later gains like greater rates of high school completion or reductions in certain risky behaviors in adolescence. Maybe as the Head Start study continues, these sorts of longer-term gains will emerge?
I don’t have an answer here. Some years ago, I edited a paper in a special issue of the Future of Children about the enormous gaps in school readiness for preschool children. Equal opportunity is an important goal of public policy, and as a society, we are clearly not providing equal opportunity to many children–who are already well behind before their first day of school. If the Head Start study had positive results about the long-run efficacy of preschool programs, I’d trumpet it to the hills. But the unfolding evidence isn’t backing up the conclusion I would prefer.
Bruce Everett of the Fletcher School at Tufts University offers a healthy double-helping of reality in his essay entitled “Back to Basics on Energy Policy: For the past 40 years, political leaders have promised that government can plan and engineer a fundamental transformation of our energy industry. They were wrong.” It appears in the Fall 2012 edition of Issues in Science and Technology. He begins:
"In June 1973, President Richard Nixon addressed the emerging energy crisis, saying that “the answer to our long-term needs lies in developing new forms of energy.” He asked Congress for a five-year, $10 billion budget to “ensure the development of technologies vital to meeting our future energy needs.” With this speech, the federal government set out to engineer a fundamental transformation of our energy supply. All seven subsequent presidents have endorsed Nixon’s goal, and during the past 40 years, the federal government has spent about $150 billion (in 2012 dollars) on energy R&D, offered $35 billion in loan guarantees, and imposed numerous expensive energy mandates in an effort to develop new energy sources. During this time, many talented and dedicated people have worked hard, done some excellent science, and learned a great deal. Yet federal energy technology policy has failed to reshape the U.S. energy market in any meaningful way."
For example, about 30% of that energy R&D spending went to nuclear power, with President Nixon forecasting 40 years ago that nuclear would provide half of the nation's electricity supply by 2000. But nuclear power plateaued at 20% of the electricity supply in 1991, and given the lack of new plants and the gradual retirement of older ones, it seems certain to be a declining contributor in the next few decades.
Over the last 40 years, the U.S. government has backed a number of renewable energy technologies: hydropower, solar, wind, geothermal, synthetic fuels including ethanol, burning municipal waste, and others. Over that time, the share of all these renewables in energy consumption went from 6% in 1973 to 8% at present. Hydropower and corn ethanol comprise more than half that total. Current projections from the Energy Information Administration hold that solar power will quadruple by 2035–at which point it will still be less than 0.5% of U.S. energy consumption.
The fundamental problem, Everett argues, is that showing something is possible at high cost is one thing, but commercializing it at low cost is quite another. He writes: "The mantra of the energy R&D program has always been, “If we can put a man on the Moon, we can do anything,” but this comparison is wrong. Apollo was a conceptual and technical triumph with no commercial aspirations. Between 1969 and 1972, the United States landed 12 astronauts on the Moon at a cost of $12.5 billion (in 2012 dollars) per astronaut. The purpose of the program was to accomplish a technically difficult feat a few times despite the enormous cost. Civilian technology requires the exact opposite: the ability to do something on a large scale at a low cost."
As a matter of public policy, Everett argues, the government has shown a pattern of trying to force-feed commercialization before it is actually ready to happen, through subsidies to nuclear power, or synthetic fuels, or wind, or battery-powered cars. It's always politically enticing to promise that a few temporary subsidies will jump-start large industries with many new jobs, but the record in energy is that the subsidies are often long-lasting and large, while the subsidized companies are short-lived. Managers get their bonuses, but sustainable jobs aren't created. (And for those who argue that fossil fuels are subsidized as well, Everett points out that the taxes collected on oil use are vastly higher than any public support received by the oil industry.)
Everett doesn't emphasize the point, but the newfound ability of U.S. energy producers to access vast reserves of natural gas will reshape energy markets in manifold ways. I've posted in the past about "Unconventional Natural Gas and Environmental Issues" and also about my own preference for "The Drill-Baby Carbon Tax," which would be a policy of moving ahead with all deliberate speed in developing U.S. fossil fuel resources while also imposing a carbon tax and addressing the costs of other environmental issues as well.
But after 40 years of watching the U.S. government try to force energy markets on to a different path, it\’s time for an alternative approach. The U.S. government should stop subsidizing commercial energy firms, and instead put that money into a dramatic increase in energy research and development.
The proportion of U.S. adults who are "in the labor force"–that is, who either have jobs or are unemployed and looking for a job–has been falling for a decade, as I explored in an April 26, 2012, post on "Falling Labor Force Participation." But for one demographic group, the elderly, labor force participation is rising substantially.
Braedyn Kromer and David Howard of the U.S. Census Bureau offer some snapshots of the data in their just-released survey brief, "Labor Force Participation and Work Status of People 65 Years and Older." For example, here are some comparisons of the labor force participation of men and women, for those over 65 and for some age subgroups. While labor force participation is down a bit for the 75+ age group, it is noticeably higher since 1990 for the 65-69 and 70-74 age groups.
From a slightly longer-run perspective, 1990 was roughly the time period when labor force participation rates among the elderly were at their lowest. Here are a couple of figures from "The Increasing Labor Force Participation of Older Workers and its Effect on the Income of the Aged," by Michael V. Leonesio, Benjamin Bridges, Robert Gesumaria, and Linda Del Bene, which appeared in the Social Security Bulletin earlier in 2012. The figures show that for men over age 62, rates of labor force participation were falling through the 1980s, bottomed out around 1990, and have been rising since then. For women, the pattern is a little different, because a much greater proportion of women entered the paid workforce in the 1970s and 1980s, and so compared with earlier generations, a larger share of women continued working into their 60s and 70s, too.
The rising labor force participation of the elderly in the last two decades represents a remarkable social change. Here's a figure created with the ever-useful FRED website at the Federal Reserve Bank of St. Louis, showing the labor force participation rate of those over age 55, going back to the late 1940s. Through the 1950s, 1960s, and 1970s, the notion that more and more people would retire earlier and earlier seemed like an inexorable social trend. But the patterns have changed–and they changed long before the Great Recession.
When I was starting off as a young adult in the paid workforce in the 1980s, I remember looking at this kind of data and thinking about how I might retire–like many other people!–in my mid-50s. But that's clearly not going to happen for me; instead, my current expectation is to be working well into my late 60s or 70s. A steadily rising average age of retirement has become the new normal. But then, a pattern of "keep living more years while working fewer years" was never a viable long-term option.
Early in 2012, my book The Instant Economist: Everything You Need to Know About How the Economy Works, was published by Penguin Plume. Here's the Amazon link; here's the Barnes & Noble link. At the tail end of the year, the book was named an "Outstanding Academic Title" by Choice magazine, which is published by the American Library Association. It was also listed as one of the Best Books for 2012 in the "Business" category by Library Journal, another prominent trade publication for librarians.
Here's the review from the August 2012 issue of Choice:
"Currently The Instant Economist is the most readable and up-to-date summary of a typical US college principles of economics course. Following the traditional table of contents–from microeconomics through macroeconomics and international topics–and using original, helpful metaphors (and only two graphs), Taylor (managing editor, Journal of Economic Perspectives) takes the reader through the terminology, key concepts, and controversies dominant in today's economics profession. Noteworthy additions to the standard textbook canon are a chapter on personal investing and detailed accounts of the minimum wage, corporate merger, and inequality debates, introducing readers to the data issues that lie behind these controversies. The 36 short chapters reflect the book's origin in the author's Teaching Company recording, Economics; however, the book is a valuable stand-alone option. For supplementary coverage of the history of economic thought and more complete institutional context, see Robert Heilbroner and Lester Thurow's Economics Explained (4th ed., 1998; 1st ed., CH, Oct'82). Summing Up: Highly recommended. All levels of undergraduate students as well as general readers wanting a readable introduction to economics. — M. H. Maier, Glendale Community College"
And here's the review from the Library Journal:
"Taylor's (managing editor, Journal of Economic Perspectives) volume can help conversationalists looking to raise the bar for their watercooler chats and casual readers who want to understand better the current economic condition of the United States. Taylor uses simple language with field-specific vocabulary to explain economic concepts, and each concept is successfully reinforced with a real-life—and usually entertaining—example. He hits all the subjects that might interest a layperson, such as division of labor, supply and demand, wages, competition and monopoly, inflation, banking, and trade, for a total of 36 petite chapters—just enough information to give the reader a basic but well-rounded understanding of the subject. VERDICT This highly readable, nonpoliticized look at some of the economic principles that shape our society, presented in an engaging, anecdotal fashion, is highly recommended for armchair economists and anyone with a general interest in the state of our economy. —Poppy Johnson-Renvall, Central New Mexico Community Coll. Lib., Albuquerque"
As these reviews emphasize, the book is written for the general non-economist reader who would like to gain some insight into the terminology and structure of economic thinking. Those who are interested in knowing a bit more about the genesis of the book might check here. As one of the reviews notes, this book was rooted in a course I recorded several years ago for the Teaching Company, which is available here. Or if you are teaching or taking an introductory college-level course in economics, I of course recommend my Principles of Economics textbook, available here.