MuseLetter #224 / January 2011 by Richard Heinberg
This article is an excerpt from Richard’s new book (working title ‘The End of Growth’), which is set for publication by New Society Publishers in July 2011. Given the urgency and fragility of the global economic crisis, we are serializing the rough content as Richard writes it. Follow Richard’s writing process through Facebook and Twitter accounts created expressly for this publication.
The Sound of Air Escaping
If the previous chapter had been written as a novel, one wouldn’t have to read long before concluding that it is a story unlikely to end well. But it is not just a story, it is a description of the system in which our lives and the lives of everyone we care about are embedded. How economic events unfold from here on is a matter of more than idle curiosity or academic interest.
It’s not hard to find plenty of opinions about where the economy is, or should be, headed. There are Chicago School economists, who see debt and meddling by government and central banks as problems and austerity as the solution; and Keynesians, who see the problem as being insufficient government stimulus to counter deflationary trends in the economy. There are those who say that bloated government borrowing and spending mean we are in for a currency-killing bout of hyperinflation, and those who say that government cannot inject enough new money into the economy to make up for commercial banks’ hesitancy to lend, so the necessary result will be years of deflationary depression. As we’ll see, each of these perspectives is probably correct up to a point. Our purpose in this chapter will be not to forecast exactly how the global economic system will behave in the near future—which is impossible in any case because there are too many variables at play—but to offer a brief but fairly comprehensive, non-partisan survey of the factors and forces at work in the post-2008 world financial economy, integrating various points of view as much as possible.
To do this, we start with a brief overview of the meltdown that began three years ago, then look at the theoretical and practical limits to debt; we review the bailout and stimulus packages deployed to lessen the impact of the crisis; and finally we explore a few scenarios for the short- and mid-term future.
Houses of Cards: The Last Bubble
Lakes of printer’s ink have been spilled in recounting the events leading up to the financial crisis that began in 2007-2008; here we will add only a small puddle. Nearly everyone agrees that it unfolded in essentially the following steps:
- Easy credit (due to the Fed’s lowering of interest rates in an attempt to limit the consequences of the dot-com crash of 2000) led to a
- Housing bubble, which was made much worse by sub-prime lending.
- Partly because of the prior deregulation of the financial industry, the housing bubble also was magnified by over-leveraging within the financial services industry, which was in turn exacerbated by financial innovation and complexity (including the creation of derivatives, collateralized debt obligations, and a dizzying variety of related investment instruments)—all feeding the boom of a shadow banking system, whose potential problems were hidden by incorrect pricing of risk on the part of ratings agencies.
- A commodities boom (which drove up gasoline and food prices) and temporarily rising interest rates (especially on adjustable-rate mortgages) ultimately undermined consumer spending and confidence, helping to burst the housing bubble—which, once it started to deflate, set in motion a chain reaction of defaults and bankruptcies.
Each of the elements of that brief description has been unpacked at great length in books like Andrew Ross Sorkin’s Too Big to Fail and Bethany McLean and Joe Nocera’s All the Devils Are Here: The Hidden History of the Financial Crisis, and in the documentary film “Inside Job.” It’s old, sad news now, though many parts of the story are still controversial (e.g., was the problem deregulation or bad regulation?). It is important that we review this recent history in a little more detail, and it is even more important that we understand that these events were merely the manifestations of a deeper trend toward dramatically and unsustainably increasing debt, credit, and leverage, in order to see why, from a purely financial point of view, growth is currently on hold and is unlikely to return for the foreseeable future.
Starting in the 1970s, GDP growth rates in Western countries began to taper off. The U.S. had been the world’s primary petroleum producer; now its oil production was sliding into permanent decline, and that meant that oil imports would have to grow to compensate, thus encouraging trade deficits. Moreover, markets for major consumer goods were starting to become saturated.
In the U.S., wages—particularly for the hourly workers who comprise 80 percent of the workforce—were stagnating after two decades of major gains. Relatively constant wage levels meant that most households couldn’t afford to increase their spending (remember: the health of the economy requires growth) unless they saved less and borrowed more. Which they began to do.
With the rate of growth of the real economy stalling somewhat, profitable investment opportunities in manufacturing companies dwindled; this created a surplus of investment capital looking for high yields. The solution hit upon by wealthy investors was to direct this surplus toward financial markets.
The most important financial development during the 1970s was the growth of securitization—a financial practice of pooling various types of contractual debt (such as residential mortgages, commercial mortgages, auto loans, or credit card debt obligations) and selling it to investors in the form of bonds, pass-through securities, or collateralized mortgage obligations (CMOs). The principal and interest on the debts underlying the security are paid back to investors regularly. Securitization provided an avenue for more investors to fund more debt. In effect, securitization caused (or allowed) claims on wealth to increase far above previous levels. In the U.S., aggregate debt began rising faster than GDP, and the debt-to-GDP ratio began to grow from about 150 percent (where it had been for many years until 1980) up to its level of about 300 percent today.
Also starting in the late 1970s, economists and policy makers began arguing that, in order to end persistent “stagflation” largely caused by high oil prices, government should cut taxes on the rich—who, seeing more money in their bank accounts, would naturally invest capital in ways that would ultimately benefit everyone.[1] At the same time, policy makers decided it was time to liberate the financial sector from various New Deal-era restraints so that it could create still more innovative investment opportunities.
Some commentators insist that the Community Reinvestment Act of 1977 (with updates in 1989, 1991, etc.)—which was designed to encourage commercial banks and savings associations to meet the needs of borrowers in low- and moderate-income neighborhoods—would later contribute to the housing bubble of 2000-2006. However, this notion has been widely contested. Nevertheless, the chartering of Fannie Mae (the Federal National Mortgage Association, or FNMA) and Freddie Mac (the Federal Home Loan Mortgage Corporation, or FHLMC) by Congress in 1968 as government-sponsored enterprises (GSEs) with the purpose of expanding the secondary mortgage market by securitizing mortgages in the form of mortgage-backed securities (MBSs) would certainly have implications much later, when the real estate market crashed in 2007. But we are getting ahead of ourselves.
The process of deregulation and regulatory change continued for the next quarter-century, and included, for example, the Commodity Futures Modernization Act, drafted by Senate Republican Phil Gramm and signed into law by President Bill Clinton in 2000, that contained an “Enron loophole” (so-called for its treatment of energy derivatives), and that legalized the trafficking in packages of dubious home mortgages.
These regulatory changes were accompanied by a shift in corporate culture: executives began running companies more for the benefit of management than for shareholders, paying themselves spectacular salaries and putting increasing emphasis on boosting share prices rather than dividends. Auditors, boards of directors, and Wall Street analysts encouraged these trends, convinced that soaring share prices and other financial returns (often via derivatives) justified them.
America’s distribution of income, which had been reasonably equitable during the post-WWII era, began to return to the disparity seen in the 1920s, in the lead-up to the Great Depression. This was partly due to changes in tax law, begun during the Reagan Administration, that reduced taxes on the wealthiest Americans. In 1970 the top 100 CEOs earned about $45 for every dollar earned by the average worker; by 2008 the ratio was over 1,000 to one.
In the 1990s, as the surplus of financial capital continued to grow, investment banks began inventing a slew of new securities with high yields. In assessing these new products, rating agencies used mathematical models that, in retrospect, seriously underestimated their levels of risk. Until the early 1970s, bond credit ratings agencies had been paid for their work by investors who wanted impartial information on the creditworthiness of securities issuers and their various offerings. Starting in the early 1970s, the “Big Three” ratings agencies (Standard & Poor’s, Moody’s, and Fitch) were paid instead by the securities issuers for whom they issued those ratings. This eventually led to ratings agencies actively encouraging the issuance of collateralized debt obligations (CDOs).
The Clinton administration adopted “affordable housing” as one of its explicit goals (this didn’t mean lowering house prices; it meant helping Americans get into debt), and over the next decade the percentage of Americans owning their homes increased by 7.8 percent. This initiated a persistent upward trend in real estate prices.
In the late 1990s investors piled into Internet-related stocks, creating a speculative bubble. The dot-com bubble burst in 2000 (as with all bubbles, it was only a matter of “when,” not “if”), and a year later the terrifying crimes of September 11, 2001 resulted in a four-day closure of U.S. stock exchanges and the largest one-day point decline in the history of the Dow Jones Industrial Average up to that time. These events together triggered a significant recession. Seeking to counter a deflationary trend, the Federal Reserve lowered its federal funds rate target from 6.5 percent to 1.0 percent, making borrowing more affordable.
Downward pressure on interest rates was also coming from the nation’s high and rising trade deficit. Every nation’s balance of payments must sum to zero, so a nation running a current account deficit must cover the difference with earnings from foreign investments, by running down reserves, or by borrowing from other countries. In other words, a country that imports more than it exports must borrow to pay for those imports. Hence American imports had to be offset by large and growing amounts of foreign investment capital flowing into the U.S. Much of that capital flowed into U.S. bonds, bidding their prices up; and since bond prices and interest rates move inversely, the trade deficit tended to force interest rates down.
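That inverse relationship between a bond’s price and its yield is simple arithmetic. Here is a minimal sketch in Python, using a hypothetical ten-year bond with a 5 percent coupon (the figures are illustrative, not drawn from the text): the more buyers bid the price up, the lower the effective interest rate the bond pays.

```python
# Illustrative only: how the price of a simple annual-coupon bond moves
# inversely with its yield. A hypothetical 10-year, 5% coupon, $1,000 bond.

def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of the coupon stream plus the face value."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

for y in (0.06, 0.05, 0.04):
    print(f"yield {y:.0%}: price = {bond_price(1000, 0.05, y, 10):.2f}")

# yield 6%: price = 926.40   (price below par when yields rise)
# yield 5%: price = 1000.00
# yield 4%: price = 1081.11  (capital inflows bid prices up, pushing yields down)
```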
Foreign investors had plenty of funds to lend, either because they had very high personal savings rates (in China, up to 40 percent of income saved), or because of high oil prices (think OPEC). A torrent of funds—it’s been called a “Giant Pool of Money” that roughly doubled in size from 2000 to 2007, reaching $70 trillion—was flowing into the U.S. financial markets. While foreign governments were purchasing U.S. Treasury bonds, thus avoiding much of the impact of the eventual crash, other foreign investors, including pension funds, were gorging on mortgage-backed securities (MBSs) and CDOs. The indirect consequence was that U.S. households were in effect using funds borrowed from foreigners to finance consumption or to bid up house prices.
By this time a largely unregulated “shadow banking system,” made up of hedge funds, money market funds, investment banks, pension funds, and other lightly-regulated entities, had become critical to the credit markets and was underpinning the financial system as a whole. But the shadow “banks” tended to borrow short-term in liquid markets to purchase long-term, illiquid, and risky assets, profiting on the difference between lower short-term rates and higher long-term rates. This meant that any disruption in credit markets would result in rapid deleveraging, forcing these entities to sell long-term assets (such as MBSs) at depressed prices.
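The maturity-transformation business model just described, and why it unravels so quickly, can be illustrated with a deliberately toy example (all figures below are hypothetical, not taken from any actual firm):

```python
# Hypothetical shadow-bank balance sheet, in $billions: borrow short-term,
# hold long-term risky assets, profit on the spread between the two rates.
assets = 100.0          # long-term assets (e.g., MBSs)
long_yield = 0.06       # yield earned on those assets
short_rate = 0.02       # cost of the short-term funding that must be rolled over
equity = 5.0            # thin capital cushion
debt = assets - equity  # 95.0 of short-term borrowing

normal_profit = assets * long_yield - debt * short_rate
print(f"normal year: profit = {normal_profit:.2f} on {equity:.1f} of equity")

# Credit-market disruption: lenders refuse to roll over half the funding,
# so half the long-term assets must be sold at a 20 percent discount.
fire_sale_loss = (debt / 2) * 0.20
print(f"after fire sale: equity = {equity - fire_sale_loss:.2f}")  # negative => insolvent
```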
Between 1997 and 2006, the price of the typical American house increased by 124 percent. House prices were rising much faster than income was growing. During the two decades ending in 2001, the national median home price ranged from 2.9 to 3.1 times median household income. This ratio rose to 4.0 in 2004, and 4.6 in 2006. This meant that, in increasing numbers of cases, people could not actually afford the homes they were buying. Meanwhile, with interest rates low, many homeowners were refinancing their homes, or taking out second mortgages secured by price appreciation, in order to pay for new cars or home remodeling. Many of the mortgages had initially negligible—but adjustable—rates, which meant that borrowers would soon face a nasty surprise.
People bragged that their houses were earning more than they were, believing that the bloating of house values represented a flow of real money that could be tapped essentially forever. In a sense this money was being stolen from the next generation: younger first-time buyers had to burden themselves with unmanageable debt in order to enter the market, while older homeowners who bought before the bubble were able to sell, downsize, and live on the profit.
Wall Street had connected the “Giant Pool of Money” to the U.S. mortgage market, with enormous fees accruing throughout the financial supply chain, from the mortgage brokers selling the loans, to small banks funding the brokers, to giant investment banks that would ultimately securitize, bundle, and sell the loans to investors the world over. This capital flow also provided jobs for millions of people in the home construction and real estate industries.
Wall Street brokers began thinking of themselves as each deserving many millions of dollars a year in compensation, simply because they were smart enough to figure out how to send the debt system into overdrive and skim off a tidy percentage for themselves. Bad behavior was being handsomely rewarded, so nearly everyone on Wall Street decided to behave badly.
By around 2003, the supply of mortgages originating under traditional lending standards had largely been exhausted. But demand for MBSs continued, and this helped drive down lending standards—to the point that some adjustable-rate mortgage (ARM) loans were being offered at no initial interest, or with no down payment, or to borrowers with no evidence of ability to pay, or all of the above.
Bundled into MBSs, sold to pension funds and investment banks, and hedged with derivatives contracts, mortgage debt became the very fabric of the U.S. financial system, and, increasingly, the economies of many other nations as well. By 2005 mortgage-related activities were making up 62 percent of commercial banks’ earnings, up from 33 percent in 1987.
As a result, what would have been a $300 billion sub-prime mortgage crisis, when the bubble inevitably burst, turned into a multi-trillion dollar catastrophe engulfing the financial systems of the U.S. and many other countries as well.
Between July 2004 and July 2006, the Fed raised its federal funds rate from 1 percent to 5.25 percent. This contributed to an increase in 1-year and 5-year adjustable mortgage rates, pushing up mortgage payments for many homeowners. Since asset prices generally move inversely to interest rates, it suddenly became riskier to speculate in housing. The bubble was deflating.
In early 2007 home foreclosure rates nosed upward and the U.S. sub-prime mortgage industry simply collapsed, with more than 25 lenders declaring bankruptcy, announcing significant losses, or putting themselves up for sale.
The whole scheme had worked fine as long as the underlying collateral (homes) appreciated in value year after year. But as soon as house prices peaked, the upside-down pyramid of property, debt, CDOs, and derivatives wobbled and began crashing down.
For a brief time between 2006 and mid-2008, investors fled toward futures contracts in oil, metals, and food, driving up commodities prices worldwide. Food riots erupted in many poor nations, where the cost of wheat and rice doubled or tripled. In part, the boom was based on a fundamental economic trend: demand for commodities was growing—due in part to the expansion of economies in China, India, and Brazil—while supply growth was lagging. But speculation forced prices higher and faster than physical shortage could account for. For Western economies, soaring oil prices had a sharp recessionary impact, with already cash-strapped new homeowners now having to spend eighty to a hundred dollars every time they filled the tank in their SUV. The auto, airline, shipping, and trucking industries were sent reeling.
Between mid-2006 and September 2008, average U.S. house prices declined by over 20 percent. As prices dove, many recent borrowers with adjustable-rate mortgages found themselves “underwater”—that is, with houses worth less than the amount of their loan; this meant they could not refinance to avoid higher payments as the interest rates on their loans reset. Default rates exploded. In 2007, foreclosure proceedings increased 79 percent over 2006 (affecting nearly 1.3 million properties). The trend worsened in 2008, with 2.3 million properties foreclosed, an 81 percent increase over the previous year. By August 2008, 9.2 percent of all U.S. mortgages outstanding were either delinquent or in foreclosure; in September the following year, the figure had jumped to 14.4 percent.
Once property prices began to plummet and the subprime industry went bust, dominos throughout the financial world began toppling.
In September 2008, the entire financial system came within 48 hours of complete ruin. The giant investment house of Lehman Brothers was allowed to go bankrupt, sending shock waves through global financial markets. The global credit system froze, and the U.S. government stepped in with an extraordinary set of bailout packages for the largest Wall Street banks and insurance companies. All told, the U.S. package of loans and guarantees added up to an astounding $12 trillion. GDP growth for the nation as a whole went negative and eight million jobs disappeared in a matter of months. Both the president and Congress pledged to put in place new regulations that would make the recurrence of such a fiasco impossible. The result was a mild rewrite of laws; according to Christine Harper of Bloomberg, “Lawmakers spurned changes that would wall off deposit-taking banks from riskier trading. They declined to limit the size of lenders or ban any form of derivatives. Higher capital and liquidity requirements agreed to by regulators worldwide have been delayed for years to aid economic recovery.”
Much of the rest of the world was infected, too, due to interlocking investments based on mortgages. The Eurozone countries and the UK experienced economic contraction or dramatic slowing of growth; some developing countries that had been seeing rapid growth saw significant slowdowns (for example, Cambodia went from 10 percent growth in 2007 to nearly zero in 2009); and by March 2009, the Arab world had lost an estimated $3 trillion due to the crisis—partly from a crash in oil prices.
Then in 2010, Greece faced a government debt crisis that threatened the economic integrity of the European Union. Successive Greek governments had run up large deficits to finance public sector jobs, pensions, and other social benefits; in early 2010, it was discovered that the nation’s government had paid Goldman Sachs and other banks hundreds of millions of dollars in fees since 2001 to arrange transactions that hid the actual level of borrowing. Between 2009 and May 2010, official government deficit estimates rose from 6 percent to 13.6 percent of GDP—the latter figure being one of the highest in the world. The direct effect of a Greek default would have been small for the other European economies, as Greece represents only 2.5 percent of the overall Eurozone economy. But it could have caused investors to lose faith in other European countries that also have high debt and deficit issues: Ireland, with a government deficit of 14.3 percent of GDP, the U.K. with 12.6 percent, Spain with 11.2 percent, and Portugal at 9.4 percent, were most at risk. And so Greece was bailed out with loans from the E.U. and the IMF, whose terms included the requirement to slash social spending. By late November of 2010, it was clear that Ireland needed a bailout, too—and got one, along with its own painful austerity package and loads of political upheaval. But this raised the inevitable questions: Who would be next? Could the IMF and the E.U. afford to bail out Spain? What would happen if the enormous U.K. economy needed rescue?
China, whose economy continued growing at an astonishing 8 to 10 percent per year, and which has run a large trade surplus for the past three decades, had inflated an enormous real estate bubble (average housing prices in the country tripled from 2005 to 2009; and price-to-income and price-to-rent ratios for property, as well as the number of unoccupied residential and commercial units, were all sky-high).
In short, a global economy that had appeared robust in 2007 had become fragile, suffering from several persistent maladies any one of which could erupt into virulence, spreading rapidly and sending the world back into the throes of crisis.
The U.S. real estate bubble of the early 2000s was the largest (in terms of the amount of capital involved) in history. And its crash carried an eerie echo of the 1930s: Austrian and Post-Keynesian economists have argued that it wasn’t the stock market crash that drove the Great Depression so much as farm failures making it impossible for farmers to make mortgage payments—along with housing bubbles in Florida, New York, and Chicago.
Real estate bubbles are essentially credit bubbles, because property owners generally use borrowed money to purchase property (this is in contrast to currency bubbles, in which nations inflate their currency to pay off government debt). The amount of outstanding debt soars as buyers flood the market, bidding property prices up to unrealistic levels and taking out loans they cannot repay. Too many houses and offices are built, and materials and labor are wasted in building them. Real estate bubbles also lead to an excess of homebuilders, who must retrain and retool when the bubble bursts. These kinds of bubbles lead to systemic crises affecting the economic integrity of nations.
Indeed, the housing bubble of the early 2000s had become the oxygen of the U.S. economy—the source of jobs, the foundation for Wall Street’s recovery from the dot-com bust, the attractant for foreign capital, the basis for household wealth accumulation and spending. Its bursting changed everything.
And there is reason to think it has not fully deflated: commercial real estate may be waiting to exhale next. Over the next five years, about $1.4 trillion in commercial real estate loans will reach the end of their terms and require new financing. Commercial property values have fallen more than 40 percent nationally since their 2007 peak, so nearly half the loans are underwater. Vacancy rates are up and rents are down.
The impact of the real estate crisis on banks is profound, and goes far beyond defaults upon outstanding mortgage contracts: systemic dependence on MBSs, CDOs, and derivatives means many of the banks, including the largest, are effectively insolvent and unable to take on more risk (we’ll see why in more detail in the next section).
The demographics are not promising for a recovery of the housing market anytime soon: the oldest of the Baby Boomers are 65 and entering retirement. Few have substantial savings; many had hoped to fund their golden years with house equity—and to realize that, they must sell. This will add more houses to an already glutted market, driving prices down even further.
In short, real estate was the main source of growth in the U.S. during the past decade. With the bubble gone, leaving a gaping hole in the economy, where will the new jobs and further growth come from? Can the problem be solved with yet another bubble?
Limits to Debt
Figure 1: Total U.S. debt since 1979, broken down into household, corporate, financial, and government debt.
Let’s step back a moment and look at our situation from a slightly different angle. Take a careful look at Figure 1, the total amount of debt in the U.S. since 1979. The graph breaks the debt down into four categories—household, corporate, financial, and government. All have grown very substantially during these past 30+ years, with the largest percentage growth having taken place in the financial sector. Note the shape of the curve: it is not a straight line (which would indicate additive growth); instead, up until 2008, it more closely resembles the J-curve of compounded or exponential growth (as discussed in the Introduction).
Growth that proceeds this way, whether it’s growth in U.S. oil production from 1900 to 1970 or growth in the population of Entamoeba histolytica in the bloodstream of a patient with amoebic dysentery, always hits hard limits eventually.
With regard to debt, what are those limits likely to be and how close are we to hitting them?
A good place to start the search for an answer would be with an exploration of how we have managed to grow our debt so far. It turns out that, in an economy that’s based on money creation through fractional reserve banking, with ever more loans being taken out to finance ever more consumer purchases and capital projects, it is usually possible to repay earlier debts along with the interest attached to those debts. There is never enough money in the system at any one time to repay all outstanding debt with interest; but, as long as total debt (and therefore the money supply as well) is constantly growing, that doesn’t pose a practical problem. The system as a whole does have some of the characteristics of a bubble or a Ponzi scheme, but it also has a certain internal logic and even the potential for (temporary) dynamic stability.
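The claim that the system holds together only while total debt grows can be made concrete with a deliberately over-simplified model, in which all money enters circulation as loan principal and interest paid to banks is treated as leaving circulation. This is a strong simplification, offered only as a sketch of the logic described above:

```python
# Deliberately over-simplified: all money enters circulation as loan principal,
# and interest paid to banks is treated as leaving circulation. The point is
# only that, under these assumptions, servicing the interest requires a
# continual expansion of new lending.

rate = 0.05      # interest charged on outstanding loans
growth = 0.05    # growth rate of new lending (try 0.0 to see the squeeze)
money = 100.0    # money in circulation = principal lent into existence
debt = 100.0     # principal outstanding

for year in range(1, 6):
    interest_due = debt * rate
    new_loans = debt * growth          # fresh principal = fresh money
    money += new_loans - interest_due
    debt += new_loans
    print(f"year {year}: money {money:7.2f}  debt {debt:7.2f}  interest {interest_due:5.2f}")

# With new lending growing at least as fast as interest accrues, repayments can
# be made; set growth = 0.0 and the money supply drains away while the debt remains.
```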
However, there are practical limits to debt within such a system, and those limits are likely to show up in somewhat different ways for each of the four categories of debt indicated in the graph.
With government debt, problems arise when required interest payments become a substantial fraction of tax revenues. Currently for the U.S., the total Federal budget amounts to about $3.5 trillion, of which about 12 percent (or $414 billion) goes toward interest payments. But in 2009, tax revenues amounted to only $2.1 trillion; thus interest payments currently consume almost 20 percent of tax revenues. For various reasons (including the economic recession, the wars in Iraq and Afghanistan, the Bush tax cuts, and various stimulus programs) the Federal government is currently running a deficit of over a trillion dollars a year. That adds to the debt, and therefore to future interest payments. Government debt stands at $13.6 trillion now (it has increased by more than 50 percent since 2006), and it’s growing at over $1 trillion a year due to the deficits, which are officially projected to continue for several years.[2] By the time the debt reaches $20 trillion, roughly ten years from now, interest payments may constitute the largest Federal budget outlay category, eclipsing even military expenditures.[3] If Federal tax revenues haven’t increased by then, interest on that larger debt would be consuming roughly 30 percent of them. Interest already eats up nearly half the government’s income tax receipts, which are estimated at $901 billion for fiscal year 2010.[4]
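Those percentages follow directly from the dollar figures just cited. A quick back-of-envelope check (in Python, using only the numbers in this paragraph; the $20 trillion projection assumes today’s average interest rate simply carries forward):

```python
# Back-of-envelope check using only the figures cited above (FY 2009-2010 era).
budget = 3.5e12            # total Federal budget
interest = 414e9           # annual interest on the debt
tax_revenue = 2.1e12       # total Federal tax revenues (2009)
income_tax = 901e9         # individual income tax receipts (FY 2010 estimate)
debt_now = 13.6e12
debt_future = 20e12        # the roughly-ten-years-out figure cited above

print(f"interest / budget:       {interest / budget:.0%}")       # ~12%
print(f"interest / tax revenue:  {interest / tax_revenue:.0%}")  # ~20%
print(f"interest / income taxes: {interest / income_tax:.0%}")   # ~46%

# At the same average interest rate, a $20 trillion debt would cost about:
implied_rate = interest / debt_now                                # ~3%
future_interest = debt_future * implied_rate                      # ~$0.6 trillion
print(f"projected interest on $20T: ${future_interest/1e9:.0f} billion, "
      f"or {future_interest / tax_revenue:.0%} of today's tax revenues")
```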
Clearly, once 100 percent of tax revenues have to go toward interest payments and all government operations have to be funded with more borrowing—on which still more interest will have to be paid—the system will have arrived at a kind of financial singularity: a black hole of debt, if you will. But in all likelihood we would not have to get to that ultimate impasse before serious problems appear. Many economic wags suggest that when government has to spend 30 percent of tax receipts on interest payments, the country is in a debt trap from which there is no easy escape. Given current trajectories of government borrowing and interest rates, that 30 percent mark could be hit in just a few years. Even before then, U.S. creditworthiness will suffer and borrowing costs will climb.
However, some argue that limits to government debt (due to snowballing interest payments) need not be a hard constraint—especially for a large nation, like the U.S., that controls its own currency. The United States government is constitutionally empowered to create money, including creating money to pay the interest on its debts. Or, the government could in effect loan the money to itself via its central bank, which would then rebate interest payments back to the Treasury (this is in fact what the Treasury and Fed are doing with Quantitative Easing 2, which we shall discuss more below). For a perspective on why U.S. government debt may not face limits anytime soon, as long as the economy returns to growth, see James K. Galbraith.
The most obvious complication that might arise is this: If at some point general confidence that external U.S. government debt (i.e., money owed to private investors or other nations) could be repaid with money of equal “value” were deeply and widely shaken, potential buyers of that debt might decide to keep their money under the metaphorical mattress (using it to buy factories or oilfields instead), even if doing so posed its own set of problems. Then the Fed would become virtually the only available buyer of government debt, which might eventually undermine confidence in the currency, possibly igniting a rapid spiral of refusal that would end only when the currency failed. There are plenty of historic examples of currency failures, so this would not be a unique occurrence.[5]
Some who come to understand that government deficit spending is unsustainable immediately conclude that the sky is falling and doom is imminent. It is disquieting, after all, to realize for the first time that the world economic system is a kind of Ponzi scheme that is only kept going by the confidence of its participants. But as long as deficit spending doesn’t exceed certain bounds, and as long as the economy resumes growth in the not-too-distant future, then it can be sustained for quite some time. Ponzi schemes theoretically can continue forever—if the number of potential participants is infinite. The absolute size of government debt is not necessarily a critical factor, as long as future growth will be sufficient so that the proportion of debt relative to revenues remains the same. Even an increase in that proportion is not necessarily cause for alarm, as long as it is only temporary. This, at any rate, is the Keynesian argument. Keynesians would also point out that government debt is only one category of total debt, and that U.S. government debt hasn’t grown proportionally relative to other categories of debt to any alarming degree (until the current recession). Again, as long as growth returns, further borrowing can be justified (up to a point)—especially if the goal is to restart growth.
The limits to household debt are different, but somewhat analogous: consumers can’t create money the way banks (and some governments) do, and can’t take on more debt if no one will lend to them. Lenders usually require collateral, so higher net worth (often in the form of house equity) translates to greater ability to take on debt; likewise, lenders wish to see evidence of ability to make payments, so a higher salary also translates to a greater ability to take on increased levels of debt.
As we have seen, the actual inflation-adjusted income of American workers has not risen substantially since the 1970s, but home values did rise during the 2000-2006 period, giving many households a higher theoretical net worth. Many homeowners used soaring house value as collateral for more debt—in many cases, substantially more. At the same time, lenders found ways of easing consumer credit standards and making credit generally more accessible—whether through “no-doc” mortgages or blizzards of credit card offers. The result: household debt increased from less than $2 trillion to $13.5 trillion between 1980 and 2008. This borrowing and spending on the part of U.S. households was the major engine not only of domestic economic expansion during most of the last decade, but of worldwide economic expansion as well.
But with the crash in the U.S. real estate market starting in 2007, household net worth also crashed (falling by a total of $17.5 trillion or 25.5 percent from 2007 to 2009—the equivalent loss of one year of GDP); and as unemployment rose from 4.6 percent in 2007 to almost ten percent (as officially measured) in 2010, average household income declined. At the same time, banks tightened their lending standards, with credit card companies slashing the number of offers and mortgage lenders requiring much higher qualifications from borrowers. Thus the ability of households to take on more debt has contracted substantially. Less debt means less spending (households usually borrow money so they can spend it—whether for a new car or a kitchen makeover). This is potentially a short-term problem; however, the only way the situation will change is if somehow the economy as a whole begins to grow again (leading to higher house prices, lower unemployment, and easier credit). Here’s the catch: increased consumer demand is a big part of what would be needed to drive that shift back to growth.
So we just need to get households borrowing and spending again. Perhaps government could somehow put a bit of seed money in citizens’ pockets (cash for clunkers, anyone?) to start the process. But any such strategy must fly against a demographic headwind: As mentioned earlier, Baby Boomers (the most numerous demographic cohort in the nation’s history, encompassing 70 million Americans) are reaching retirement age, which means that their lifetime spending cycle has peaked. It’s not that Boomers won’t continue to buy things (everybody has to eat), but their aggregate spending is unlikely to increase, given that cohort members’ savings are, on average, inadequate for retirement (one-third of them have no savings whatever). Out of necessity, Boomers will be saving more from now on, and spending less. And that won’t help the economy grow.
When demand for products declines, corporations aren’t inclined to borrow to increase their productive capacity. Even corporate borrowing aimed at increasing financial leverage has limits. Too much corporate debt reduces resiliency during slow periods—and the future is looking slow for as far as the eye can see. Durable goods orders are down, housing starts and new home sales are down, savings are up. As a result, banks don’t want to lend to companies, because the risk of default on such loans is now perceived as being higher than it was a few years ago; in addition, the banks are reluctant to take on more risk of any sort given the fact that many of the assets on their balance sheets consist of now-worthless derivatives and CDOs.
Meanwhile, ironically and perhaps surprisingly, U.S. corporations are sitting on over a trillion dollars because they cannot identify profitable investment opportunities and because they want to hang onto whatever cash they have in anticipation of continued hard times.
If only we could get to the next upside business cycle, then more corporate debt would be justified for both lenders and borrowers. But so far confidence in the future is still weak.
The category of financial debt—which, of the four categories, has grown the most—consists of debt and leverage within the financial system itself. This category can mostly be disregarded, as financial institutions are primarily acting as intermediaries for ultimate borrowers. But this category does not directly include the notional value of derivatives contracts, which is roughly five times the amount of U.S. government, household, corporate, and financial debt combined (roughly $260 trillion in outstanding derivatives, versus $55 trillion in debt), and derivatives have arguably helped create a situation that limits further growth in the financial system’s ability to perform its only truly useful function within society—to provide investment capital for productive enterprise.
One of the main reforms enacted during the Great Depression, contained in the Glass-Steagall Act of 1933, was a requirement that commercial banks refrain from acting as investment banks. In other words, they were prohibited from dealing in stocks, bonds, and derivatives. This prohibition was based on an implicit understanding that there should be some sort of firewall within the financial system separating productive investment from pure speculation, or gambling. This firewall was eliminated by the passage of the Gramm–Leach–Bliley Act of 1999 (for which the financial services industry lobbied tirelessly). As a result, over the past decade all of the large U.S. banks became deeply engaged in speculative investment, using both their own and their clients’ money.
With derivatives, since there is no requirement to own the underlying asset, and since there is often no requirement of evidence of ability to cover the bet, there is no effective limit to the amount that can be wagered. It’s true that many derivatives largely cancel each other out, and that their ostensible purpose is to reduce financial risk. Nevertheless, if a contract is settled, somebody has to pay—unless they can’t.
Credit default swaps (CDSs, discussed in the last chapter) are usually traded “over the counter”—meaning without the knowledge of anyone other than the two counterparties. They are a sort of default insurance: the seller of the swap acts as “insurer” against default, bankruptcy, or other “credit event,” collecting regular “insurance” payments as premiums; this looks like “free money” to the “insurer.” But if a default occurs, a huge payment becomes due. Here’s one example: In 2005, auto parts maker Delphi defaulted on $5.2 billion in outstanding bonds and loans—but over $20 billion in credit default derivative contracts had been written on those bonds and loans (the result: losses for the sellers of that protection far larger than the losses suffered by those who actually held the bonds or loans). This degree of leverage was not uncommon throughout corporate America and the U.S. financial system as a whole. Was this really reducing risk, or merely spreading it throughout the economy?
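A stripped-down numerical sketch of that asymmetry (the premium rate and recovery value below are hypothetical, loosely patterned on the scale of the Delphi case):

```python
# Hypothetical CDS position: the protection seller collects small premiums
# every year, but owes a large payout if the reference debt defaults.
notional = 20e9        # face value of CDS contracts written (cf. Delphi's ~$20B)
premium_rate = 0.02    # assumed 2% of notional per year, paid by protection buyers
recovery = 0.35        # assumed cents on the dollar recovered on the defaulted debt

annual_income = notional * premium_rate       # the "free money" while nothing defaults
default_payout = notional * (1 - recovery)    # owed if a credit event occurs

print(f"premium income per year: ${annual_income/1e9:.1f} billion")
print(f"payout on default:       ${default_payout/1e9:.1f} billion")
# Years of premium income needed just to cover a single default:
print(f"break-even horizon:      {default_payout / annual_income:.0f} years")
```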
An even more telling example relates to the insurance giant AIG, which insured the obligations of various financial institutions through CDSs. The transaction went like this: AIG received a periodic premium in exchange for a promise to pay party A if party B defaulted. As it turned out, AIG did not have the capital to back its CDS commitments when defaults began to spread throughout the U.S. financial system in 2008, and a failure of AIG would have brought down many other companies in a kind of financial death-spiral. Therefore the Federal government stepped in to bail out AIG with tens of billions of dollars.
In the heady years of the 2000s, even the largest and most prestigious banks engaged in what can only be termed criminally fraudulent behavior on a massive scale. As revealed in sworn Congressional testimony, firms including Goldman Sachs deliberately created flawed securities and sold tens of billions of dollars’ worth of them to investors, then took out many more billions of dollars’ worth of derivatives contracts essentially betting against the securities they themselves had designed and sold. They were quite simply defrauding their customers, which included foreign and domestic pension funds. To date, no senior executive with any bank or financial services firm has been prosecuted for running these scams. Instead, most of the key figures are continuing to amass immense personal fortunes, confident no doubt that what they were doing—and in many cases continue to do—is merely a natural extension of the inherent logic of their industry.
The degree and concentration of exposure on the part of the biggest banks with regard to derivatives was and is remarkable: as of 2005, JP Morgan Chase, Bank of America, Citibank, Wachovia, and HSBC together accounted for 96 percent of the $100 trillion of derivatives contracts held by 836 U.S. banks.[6]
Even though many derivatives were insurance against default, or wagers that a particular company would fail, to a large degree they constituted a giant bet that the economy as a whole would continue to grow (and, more specifically, that the value of real estate would continue to climb). So when the economy stopped growing, and the real estate bubble began to deflate, this triggered a systemic unraveling that could be halted (and only temporarily) by massive government intervention.
Suddenly “assets” in the form of derivative contracts that had a stated value on banks’ ledgers were clearly worth much less. If these assets had to be sold, or if they were “marked to market” (valued on the books at the amount they could actually sell for), the banks would be shown to be insolvent. Government bailouts essentially enabled the banks to keep those assets hidden, so that banks could appear solvent and continue carrying on business.
Despite the proliferation of derivatives, the financial system still largely revolves around the timeworn practice of receiving deposits and making loans. Bank loans are the source of money in our modern economy. If the banks go away, so does the rest of the economy.
But as we have just seen, many banks are probably actually insolvent because of the many near-worthless derivative contracts and bad mortgage loans they count as assets on their balance sheets.
One might well ask: If commercial banks have the power to create money, why can’t they just write off these bad assets and carry on? Ellen Brown explains the point succinctly in her useful book Web of Debt:
[U]nder the accountancy rules of commercial banks, all banks are obliged to balance their books, making their assets equal their liabilities. They can create all the money they can find borrowers for, but if the money isn’t paid back, the banks have to record a loss; and when they cancel or write off debt, their assets fall. To balance their books . . . they have to take the money either from profits or from funds invested by the bank’s owners [i.e., shareholders]; and if the loss is more than its owners can profitably sustain, the bank will have to close its doors. [7]
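A minimal balance-sheet sketch of the constraint Brown describes, with hypothetical figures:

```python
# Stylized commercial bank balance sheet, in $billions. Assets must equal
# liabilities plus shareholders' equity; write-offs come out of equity.
loans_and_securities = 100.0   # mortgages, MBSs, CDOs carried at face value
deposits_and_borrowing = 92.0  # owed to depositors and other creditors
equity = loans_and_securities - deposits_and_borrowing   # 8.0 of capital

write_off = 12.0               # bad loans / worthless derivatives marked to market
equity_after = equity - write_off

print(f"equity before write-off: {equity:.1f}")
print(f"equity after write-off:  {equity_after:.1f}")   # negative => insolvent
# A loss larger than shareholders' capital cannot be absorbed: the bank must
# either hide the loss, raise new capital, or close its doors.
```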
So, given their exposure via derivatives, bad real estate loans, and MBSs, the banks aren’t making new loans because they can’t take on more risk. The only way to reduce that risk is for government to guarantee the loans. Again, as long as the down-side of this business cycle is short, such a plan could work in principle.
But whether it actually will in the current situation is problematic. As noted above, Ponzi schemes can theoretically go on forever, as long as the number of new investors is infinite. Yet in the real world the number of potential investors is always finite. There are limits. And when those limits are hit, Ponzi schemes can unravel very quickly.
These are the four categories of debt. Over the short term, there is no room for growth of debt in the household or corporate sectors. Within the financial sector, there is little room for growth in productive lending. The shadow banks can still write more derivative contracts, but that doesn’t do anything to help the real economy and just spreads risk throughout the system. That leaves government, which (if it controls its own currency and can fend off attacks from speculators) can continue to run large deficits, and the central banks, which can enable those deficits by purchasing government debt outright—but unless such efforts succeed in jump-starting growth in the other sectors, that is just a temporary end-game strategy.
A single statistic is revealing: in the U.S., the ratio of total debt to GDP rose to more than 300 percent by 2005, exceeding the previous record of 290 percent achieved immediately prior to the stock market crash of 1929. External debt (what the U.S. owes the rest of the world) increased to $3 trillion, a balance that had been in surplus just a few years previously.
Remember: in a system in which money is created through bank loans, there is never enough money in existence to pay back all debts with interest. The system only continues to function as long as it is growing.
So, what happens to this mountain of debt in the absence of economic growth? Answer: Some kind of debt crisis. And that is what we are seeing.
Debt crises have occurred frequently throughout the history of civilizations, beginning long before the invention of fractional reserve banking and credit cards. Many societies learned to solve the problem with a “debt jubilee”: According to the Book of Leviticus in the Bible, every fiftieth year is a Jubilee Year, in which slaves and prisoners are to be freed and debts are to be forgiven. Evidence of similar traditions can be found in an ancient Hittite-Hurrian text entitled “The Song of Debt Release”; in the history of Ancient Athens, where Solon (638 BC–558 BC) instituted a set of laws called seisachtheia, canceling all current debts and retroactively canceling previous ones that had caused slavery and serfdom (thus freeing debt slaves and debt serfs); and in the Qur’an, which advises debt forgiveness for those who are genuinely unable to pay.
For householders facing unaffordable mortgage payments or a punishing level of credit card debt, a jubilee may sound like a capital idea. But what would that actually mean today, if carried out on a massive scale—when debt has become the very fabric of the economy? Remember: we have created an economic machine that needs debt like a car needs gas.
Realistically, we are unlikely to see a general debt jubilee in coming years; what we will see instead are defaults and bankruptcies that accomplish essentially the same thing—the destruction of debt. Which, in an economy like ours, effectively means a destruction of wealth and claims upon wealth. Debt will have to be written off in enormous amounts—by the trillions of dollars. Over the short term, government will attempt to stanch this flood of debt-shedding in the household, corporate, and financial sectors by taking on more debt of its own—but eventually it simply won’t be able to keep up, given the inherent limits on government borrowing discussed above.
We began with the question, “How close are we to hitting the limits to debt?” The evident answer is: we have already probably hit realistic limits to household debt and corporate debt; the ratio of U.S. total debt-to-GDP is probably near or past the danger mark; and limits to government debt may be within sight, though that conclusion is more controversial and doubtful.
Stimulus Duds, Bailout Blanks
In response to the financial crisis, governments and central banks have undertaken a series of extraordinary, dramatic measures. In this section we will focus primarily on the U.S. (the bailouts of banks, insurance and car companies, and GSEs; the stimulus packages of 2008 and 2009; and actions by, and new powers given to, the Federal Reserve), but we will also briefly touch upon some actions by governments and central banks in other nations (principally China and the Eurozone).
For the U.S., actions undertaken by the Federal government and the Federal Reserve bank system have so far resulted in totals of $3 trillion actually spent and $11 trillion committed as guarantees. Some of these actions are discussed below; for a complete tally of the expenditures and commitments, see the online CNN Bailout Tracker.[8]
Bailouts
Bailouts directly funded by the U.S. Department of the Treasury were mostly bundled together under the Troubled Asset Relief Program (TARP), signed into law October 3, 2008, which allowed the Treasury to purchase or insure up to $700 billion worth of “troubled assets.” These were defined as residential or commercial mortgages and “any securities, obligations, or other instruments that are based on or related to such mortgages,” issued on or before March 14, 2008. Essentially, TARP allowed the Federal government to purchase illiquid, difficult-to-value assets (primarily CDOs) from banks and other financial institutions in order to prevent a wave of insolvency from sweeping the financial world. The list of companies receiving TARP funds included the largest, wealthiest, and most powerful firms on Wall Street—Citigroup, Bank of America, AIG, JPMorgan Chase, Wells Fargo, Goldman Sachs, and Morgan Stanley—as well as GMAC, General Motors, and Chrysler.
The program was controversial, with some calling it “lemon socialism” (privatization of profits and socialization of losses). Critics were especially outraged when it became known that executives in the bailed-out companies were continuing to reward themselves with enormous salaries and bonuses. Some instances of fraud were uncovered, as well as the use of substantial amounts of money by participating companies to lobby against financial reforms.
Nevertheless, some of the initial fears about good money being thrown after bad did not appear to be borne out: Much of the TARP outlay was quickly repaid (for example, as of mid-2010, over $169 billion of the $245 billion invested in U.S. banks had been paid back, including $13.7 billion in dividends, interest and other income). Some of the repayment efforts appeared to be motivated by the desire on the part of companies to get out from under onerous restrictions (including restrictions from the Obama Administration on executive pay).
A bailout of Fannie Mae and Freddie Mac was announced in September 2008 in which the federal government, via the Federal Housing Finance Agency, placed the two firms into conservatorship, dismissed the firms’ chief executive officers and boards of directors, and gave the Treasury a 79.9 percent ownership stake in each GSE. The authority of the U.S. Treasury to continue paying to stabilize Fannie Mae and Freddie Mac is limited only by the statutory ceiling on Federal government debt. The Fannie-Freddie bailout law increased the national debt ceiling by $800 billion, to a total of $10.7 trillion, in anticipation of the potential need for government mortgage purchases.
The U.S. market for mortgage-backed securities had collapsed from $1.9 trillion in 2006 to just $50 billion in 2008. Thus the upshot of the Freddie-Fannie bailout was that the Federal government became the U.S. mortgage lender of first and last resort.
Altogether, the bailouts succeeded in preventing an immediate meltdown of the national (and potentially the global) financial system. But they did not significantly alter the culture of Wall Street (i.e., the paying of exorbitant bonuses for the acquisition of inappropriate risk via cutthroat competition that ignores long-term sustainability of companies or economies). And they did not relieve the underlying solvency crisis faced by the banks—they merely papered these problems over temporarily, until the remaining bulk of the “troubled” assets are eventually marked to market (listed on banks’ balance sheets at realistic values). Meanwhile, the U.S. government has taken on the burden of guaranteeing most of the nation’s mortgages, in a market in which residential and commercial real estate values may be set to decline much further than they have already done.
Stimulus packages
During 2008 and 2009, the U.S. Federal government implemented two stimulus packages, spending a total of nearly $1 trillion.
The first (the Economic Stimulus Act of 2008) consisted of direct tax rebates, mostly distributed at $300 per taxpayer, or $600 per couple filing jointly. The total cost of the bill was projected at $152 billion.
The second, the American Recovery and Reinvestment Act of 2009, or ARRA, comprised an enormous array of projects, tax breaks, and programs—everything from $100 million for free school lunch programs to $6 billion for the cleanup of radioactive waste, mostly at nuclear weapons production sites. The total nominal worth of the spending package was $787 billion. A partial list:
- Tax incentives for individuals (e.g., a new payroll tax credit of $400 per worker and $800 per couple in 2009 and 2010). Total: $237 billion.
- Tax incentives for companies (e.g., to extend tax credits for renewable energy production). Total: $51 billion.
- Healthcare (e.g., Medicaid). Total: $155.1 billion.
- Education (primarily, aid to local school districts to prevent layoffs and cutbacks). Total: $100 billion.
- Aid to low-income workers, unemployed, and retirees (including job training). Total: $82.2 billion ($40 billion of this went to provide extended unemployment benefits through Dec. 31, and to increase them).
- Infrastructure Investment. Total: $105.3 billion.
- Transportation. Total: $48.1 billion.
- Water, sewage, environment, and public lands. Total: $18 billion.
In addition to these two programs, Congress also appropriated a total of $3 billion for the temporary Car Allowance Rebate System (CARS) program, known colloquially as “Cash for Clunkers,” which provided cash incentives to U.S. residents to trade in their older gas-guzzlers for new, more fuel-efficient vehicles.
The New Deal had cost somewhere between $450 and $500 billion in inflation-adjusted dollars and had increased government’s share of the national economy from 4 percent to 10 percent. ARRA represented a much larger outlay that was spent over a much shorter period, and increased government’s share of the economy from 20 percent to 25 percent.
Given the scope and cost of the two stimulus programs, they were bound to have some effect—though the extent of the effect was debated mostly along political lines. The 2008 stimulus helped increase consumer spending (one study estimated that the stimulus checks increased spending by 3.5 percent). And unemployment undoubtedly rose less in 2009-2010 than it would have done without ARRA.
Whatever the degree of impact of these spending programs, it appeared to be temporary. For example, while “Cash for Clunkers” helped sell almost 700,000 cars and nudged GM and Chrysler out of bankruptcy, once the program expired U.S. car sales languished at their lowest level in 30 years.
At the end of 2010, President Obama and congressional leaders negotiated a compromise package of extended and new tax cuts that, in total, would reduce potential government revenues by an estimated $858 billion. This was, in effect, a third stimulus package.
Critics of the stimulus packages argued that transitory benefits to the economy had been purchased by raising government debt to frightening levels. Proponents of the packages answered that, had government not acted so boldly, an economic crisis might have turned into complete and utter ruin.
Actions by, and new powers of, the Federal Reserve
While the U.S. government stimulus packages were enormous in scale, the actions of the Federal Reserve dwarfed them in terms of dollar amounts committed.
During the past three years, the Fed’s balance sheet has swollen to more than $2 trillion through its buying of bank and government debt. Actual expenditures included $29 billion for the Bear Stearns bailout; $149.7 billion to buy debt from Fannie Mae and Freddie Mac; $775.6 billion to buy mortgage-backed securities, also from Fannie and Freddie; and $109.5 billion to buy hard-to-sell assets (including MBSs) from banks. However, the Fed committed itself to trillions more in insuring banks against losses, loaning to money market funds, and loaning to banks to purchase commercial paper. Altogether, these outlays and commitments totaled a minimum of $6.4 trillion.
Documents released by the Fed on December 1, 2010 showed that more than $9 trillion in total had been supplied to Wall Street firms, commercial banks, foreign banks, and corporations, with Citigroup, Morgan Stanley, and Merrill Lynch borrowing sums that cumulatively totaled over $6 trillion. The collateral for these loans was undisclosed but widely thought to be stocks, CDSs, CDOs, and other securities of dubious value.
In one of its most significant and controversial programs, known as “quantitative easing,” the Fed twice expanded its balance sheet substantially, first by buying mortgage-backed securities from banks, then by purchasing outstanding Federal government debt (Treasury securities) to support the Treasury debt market and help keep interest rates down on consumer loans. The Fed essentially creates money on the spot for this purpose (though no money is literally “printed”), thus monetizing U.S. government debt.
In addition, the Federal Reserve has created new sub-entities to pursue various new functions:
- Term Auction Facility (which injects cash into the banking system),
- Term Securities Lending Facility (which injects Treasury securities into the banking system),
- Primary Dealer Credit Facility (which enables the Fed to lend directly to “primary dealers,” such as Goldman Sachs and Citigroup, which was previously against Fed policy), and
- Commercial Paper Funding Facility (which makes the Fed a crucial source of credit for non-financial businesses in addition to commercial banks and investment firms).
Finally, while remaining the supervisor of 5,000 U.S. bank holding companies and 830 state banks, the Fed has taken on substantial new regulatory powers. Under the Wall Street Reform and Consumer Protection Act, known as the Dodd-Frank law (signed July 21, 2010), the central bank gains the authority to control the lending and risk taking of the largest, most “systemically important” banks, including investment banks Goldman Sachs Group and Morgan Stanley, which became bank holding companies in September 2008. The Fed also gains authority over about 440 thrift holding companies and will regulate “systemically important” nonbank financial firms, including the biggest insurance companies, Warren Buffett’s Berkshire Hathaway Inc., and General Electric Capital Corp. It is also now required to administer stress tests at the biggest banks every year to determine whether they need to set aside more capital. The law prescribes that the largest banks write “living wills,” approved by the Fed, that will make it easier for the government to break them up and sell the pieces if they suffer a Lehman Brothers-style meltdown. The Fed also houses and funds a new federal consumer protection agency (headed, as of September 2010, by Elizabeth Warren), which operates independently.
All of this makes the Federal Reserve a far more powerful actor within the U.S. economy. The justification put forward is that without the Fed’s bold actions the result would have been utter financial catastrophe, and that with its new powers and functions the institution will be better able to prevent future economic crises. Critics say that catastrophe has merely been delayed.
Actions by other nations and central banks
In November 2008 China announced a stimulus package totaling 4 trillion yuan ($586 billion) as an attempt to minimize the impact of the global financial crisis on its domestic economy. In proportion to the size of China’s economy, this was a much larger stimulus package than that of the U.S. Public infrastructure development made up the largest portion, nearly 38 percent, followed by earthquake reconstruction, funding for social welfare plans, rural development, and technology advancement programs. The stimulus program was judged a success, as China’s economy (according to official estimates) continued to expand, though at a slower pace, even as many other nations saw their economies contract.
In December 2009, Japan’s government approved a stimulus package amounting to 7.2 trillion yen ($82 billion), intended to stimulate employment, incentivize energy-efficient products, and support business owners.
Europe also instituted stimulus packages: in November 2008, the European Commission proposed a plan for member nations amounting to 200 billion euros, including incentives for investment, lower interest rates, tax rebates (notably on green technology and eco-friendly cars), and social measures such as increased unemployment benefits. In addition, individual European nations implemented plans ranging in size from 0.6 percent of GDP (Italy) to 3.7 percent (Spain).
The European Central Bank’s response to sovereign debt crises, primarily affecting Greece and Ireland but likely to spread to Spain and Portugal, has included a comprehensive rescue package (approved May 2010) worth almost a trillion dollars. This was accompanied by requirements to cut deficits in the most heavily indebted countries; the resulting austerity programs led, as already noted, to widespread domestic discontent. Greece received a $100 billion bailout, along with a punishing austerity package, in the spring of 2010, while Ireland got the same treatment in November.
A meeting of central bankers in Basel, Switzerland in September 2010 resulted in an agreement to require banks in the OECD nations to progressively increase their capital reserves starting Jan. 1, 2013. In addition, banks will be required to keep an emergency reserve known as a “conservation buffer” of 2.5 percent. By the end of the decade each bank is expected to have rock-solid reserves amounting to 8.5 percent of its balance sheet. The new rules will strengthen banks against future financial crises, but in the process they will curb lending, making economic recovery more difficult.
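To put those percentages in concrete terms, here is a simplified illustration using a hypothetical bank with $1 trillion in assets (taking the figures just cited at face value; under the Basel rules the ratios are actually measured against risk-weighted assets rather than the raw balance sheet):

\[
0.085 \times \$1\ \text{trillion} = \$85\ \text{billion of required capital}, \qquad 0.025 \times \$1\ \text{trillion} = \$25\ \text{billion of that as the conservation buffer}.
\]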
What’s the bottom line on all these stimulus and bailout efforts? In the U.S., $12 trillion of total household net worth disappeared in 2008, and there will likely be more losses ahead, largely as a result of continuing declines in real estate values, though increasingly as a result of job losses as well. The government’s stimulus efforts, totaling less than $1 trillion, cannot hope to make up for this historic evaporation of wealth. While indirect subsidies may temporarily keep home prices from falling further, that only keeps houses unaffordable for workers whose incomes are shrinking. Meanwhile, the bailouts of banks and shadow banks have been characterized as government throwing money at financial problems it cannot solve, rewarding the very people who created those problems. Rather than being motivated by the suffering of American homeowners or of governments in over their heads, the bailouts of Fannie Mae and Freddie Mac in the U.S., and of Greece and Ireland in the E.U., were (according to critics) essentially geared toward securing the investments of banks and wealthy bondholders.
These are perhaps facile criticisms: it is no doubt true that, without the extraordinary measures undertaken by governments and central banks, the crisis that gripped U.S. financial institutions in the fall of 2008 would have deepened and spread, hurling the entire global economy into a Depression surpassing that of the 1930s.
Facile or not, however, the critiques nevertheless contain more than a mote of truth.
The stimulus-bailout efforts of 2008-2009, which in the U.S. cut interest rates from 5 percent to zero, pushed the budget deficit up to 10 percent of GDP, and guaranteed $6.4 trillion to shore up the financial system, arguably cannot be repeated. These constituted quite simply the largest commitments of funds in world history, dwarfing in inflation-adjusted terms the total amounts spent in all the wars of the 20th century (for the U.S., the inflation-adjusted cost of World War II was about $3.2 trillion). Not only the U.S., but Japan and the European nations as well have exhausted their arsenals.
But more will be needed as countries, states, counties, and cities near bankruptcy due to declining tax revenues. Meanwhile the U.S. has lost 8.4 million jobs, plus the equivalent of another 3 million if lost work hours are counted; the nation will need to generate an extra 450,000 jobs each month for three years to return to pre-crisis levels of employment. The only way these problems can be allayed (not fixed) is through more central bank money creation and government spending.
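The arithmetic behind the 450,000-per-month figure works out roughly as follows, provided one also allows for new entrants to the labor force (the figure of roughly 130,000 entrants per month used here is an illustrative assumption, not a number from official statistics):

\[
450{,}000 \times 36 \approx 16.2\ \text{million} \;\approx\; \underbrace{8.4}_{\text{jobs lost}} + \underbrace{3.0}_{\text{lost hours}} + \underbrace{4.8}_{\text{new entrants}}\ \text{million}.
\]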
Austrian School and post-Keynesian economists have contributed a basic insight to the discussion: once a credit bubble has inflated, the eventual correction (which entails the destruction of credit and assets) is greater in magnitude than government’s capacity to counter it through spending. The cycle must sooner or later play itself out.
There may be a few more arrows in the quiver of economic policy makers: central bankers could try to drive down the value of domestic currencies to stimulate exports; the Fed could also engage in more quantitative easing. But these measures will sooner or later merely undermine currencies (we will return to this point in Chapter 6).
Further, the way the Fed at first employed quantitative easing in 2009 was minimally productive. In effect, QE1 (as it has been called) amounted to adding about a trillion dollars to banks’ balance sheets, on the assumption that banks would then use this money as a basis for making loans.[9] The “multiplier effect” (in which the banking system as a whole makes loans amounting to many times its reserves) should theoretically have resulted in the creation of roughly $9 trillion within the economy. However, this did not happen: because there was reduced demand for loans (companies didn’t want to expand in a recession and families didn’t want to take on more debt), the banks simply sat on the extra capital. A better result could arguably have been obtained if the Fed had somehow distributed the same amount of money directly to debtors rather than to banks, because then at least the money would either have circulated to pay for necessities or helped to reduce the general debt overhang.
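The “roughly $9 trillion” figure follows from the standard textbook money-multiplier relationship, here assuming a 10 percent reserve requirement (an illustrative simplification; actual reserve ratios vary by type of deposit):

\[
\Delta M = \frac{\Delta R}{r} = \frac{\$1\ \text{trillion}}{0.10} = \$10\ \text{trillion},
\]

that is, about $9 trillion of new credit on top of the original $1 trillion of reserves.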
In November 2010, the Fed again resorted to quantitative easing (“QE2”). This time, instead of purchasing mortgage securities and thus inflating banks’ balance sheets, the Fed set out to purchase Treasuries: $600 billion worth, in monthly installments lasting through June 2011. While QE1 was essentially about saving the banks, QE2 was about funding Federal government debt interest-free. Because the Federal Reserve rebates its profits (after deducting expenses) to the Treasury, creating money to buy government debt obligations is an effective way of increasing that debt without increasing interest payments. Critics describe this as the government “printing money” and assert that it is highly inflationary; however, given the extremely deflationary context (trillions of dollars’ worth of write-downs in collateral and credit), the Fed would have to “print” far more than it is doing to produce real inflation. Nevertheless, as we will see in Chapter 5 in a discussion of “currency wars,” other nations view this strategy as a way to drive down the dollar so as to decrease the value of foreign-held dollar-denominated debt, in effect forcing them to pay for America’s financial folly.
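The announced schedule implies purchases of roughly $75 billion per month, spreading the $600 billion over the approximately eight months from November 2010 through June 2011:

\[
\frac{\$600\ \text{billion}}{8\ \text{months}} \approx \$75\ \text{billion per month}.
\]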
In any case, the Federal Reserve has effectively become a different institution since the crisis began. It and certain other central banks have taken on most of the financial bailout burden (dealing in trillions rather than mere hundreds of billions of dollars) simply because they have the power to create money with which to guarantee banks against losses and buy government debt. Together, central banks and governments are barely keeping the wheels on society, but their actions come with severe long-term costs and risks. And what they can actually accomplish is most likely limited anyway. Perhaps the situation is best summed up in a comment from a participant at the central bankers’ annual gathering in Jackson Hole, Wyoming in August 2010: “We can’t create growth ourselves, all we can do is create the conditions that make growth possible.”[10]
Deflation or Inflation?
If the bailouts and economic stimuli are effectively just a way of buying time, then there is probably further trouble ahead—but trouble of what sort? Typically, economic crises play out as inflation or deflation. There is considerable controversy among forecasters as to which will ensue. Let’s examine the arguments.
The inflation argument
Many economic observers (especially the hard money advocates) point out that the amount of debt that many governments have taken on cannot realistically be repaid, and that the U.S. government in particular will have great difficulty fulfilling its obligations to an aging citizenry via programs like Social Security, Medicare, and Medicaid. The only way out of the dilemma—and it is a time-tested if dangerous strategy—is to inflate the currency. The risk is that inflation undermines the value of the currency and wipes out savings.
There are many historical examples, going back to the very earliest days of money. The Romans generated inflation by debasing their coinage, gradually reducing the precious-metal content until coins were almost entirely made of base metals. With paper money, currency inflation became much easier and more tempting: Germany famously inflated away its onerous World War I reparations burdens during the early 1920s. Between June and December 1922, Germans’ cost of living increased approximately 1,600 percent; citizens resorted to carrying bundles of banknotes in wheelbarrows merely to purchase daily necessities, and some even used currency as wallpaper. In the United States, hyperinflation occurred during the Revolutionary War and the Civil War. Hungary inflated its currency at the end of World War II, as did Yugoslavia in the late 1980s just before the breakup of the country. During the last decade, Zimbabwe inflated its currency so dramatically that banknotes with a face value of 100 trillion Zimbabwe dollars eventually entered circulation. The result has always been the same: a complete gutting of savings and an eventual re-valuation of the currency, in effect re-setting the value of money from scratch.
How does a nation inflate its currency? There are two primary routes: maintaining very low interest rates, which encourages borrowing (and, with fractional-reserve banking, the creation of more money); and direct injection of new money into the economy by government or the central bank. The latter can happen via the central bank creating money with which to buy government debt, or via government creating money and distributing it either to financial institutions (so they can make more loans) or directly to citizens.
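The fractional-reserve route can be sketched with the same textbook deposit-multiplier model mentioned above in connection with QE1: an initial deposit D is lent out and re-deposited repeatedly, with a fraction r held back as reserves at each step (a simplification of how modern banking actually works, and the 10 percent reserve ratio is again purely illustrative):

\[
D + D(1-r) + D(1-r)^2 + \cdots = \frac{D}{r}, \qquad \text{so with } r = 0.10 \text{ each dollar of reserves can support about ten dollars of deposits}.
\]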
Those who say we are heading toward hyperinflation argue either that existing bailouts and stimulus actions by governments and central banks are inherently inflationary; or that, as the economy relapses, the Federal Reserve will create fresh money not only to buy government debt, but to purchase stocks and securities and perhaps even to buy real estate directly from homeowners. The addition of all this new money, chasing after a limited pool of goods and services, will inevitably cause the currency to lose value.
The deflation argument
Others say the most likely course for the world economy is toward continued deleveraging by businesses and households, and this ongoing shedding of debt (mostly through defaults and bankruptcies) will exceed either the ability or willingness of governments and central banks to inflate the currency, at least over the near-term (the next few years). Those who see government actions so far as inflationary fail to see that increasing public debt has simply replaced a portion of the amount of private debt that has vanished through deleveraging; total debt has actually declined, even in the face of massive government borrowing.
If a bubble consists of lots of people all at once taking advantage of what looks like a once-in-a-lifetime opportunity to get rich quick, deflation consists of lots of people all at once doing things that would be perfectly sensible under a different set of circumstances: saving, paying off debts, walking away from underwater homes, and pulling back on borrowing and spending. The net effect of deflation is the destruction of businesses, the layoff of millions of workers, and a drop in consumption levels, which in turn leads to further business bankruptcies as overabundant goods and services go unpurchased.
Deflation represents a disappearance of credit and money, so that whatever money remains has increased purchasing power. Once the bubble began to burst back in 2007-2008, say the deflationists, a process of contraction began that inevitably must continue to the point where debt service is manageable and prices for assets such as homes and stocks are compelling based on long-term historical trends.
However, many deflationists tend to agree that the inflationists are probably right in the long run: at some point, perhaps several years from now, some future U.S. administration will resort to truly extraordinary means to avoid defaulting on interest payments on its ballooning debt, as well as to avert social disintegration and restart economic activity. There are several scenarios by which this might happen—including government simply printing money in enormous quantities and distributing it directly to banks or citizens. The net effect would be the same in all cases: a currency collapse.
In general, what we are actually seeing so far is neither dramatic deflation nor hyperinflation. Despite the evaporation of trillions of dollars in wealth during the past four years, and despite government and central bank interventions with a potential nameplate value also running in the trillions of dollars, prices (which most economists regard as the signal of inflation or deflation) have remained fairly stable. That is not to say that the economy is doing well: the ongoing problems of unemployment, declining tax revenues, and business and bank failures are obvious to everyone. Rather, what seems to be happening is that the efforts of the U.S. Federal government and the Federal Reserve have temporarily more or less succeeded in balancing out the otherwise massively deflationary impacts of defaults, bankruptcies, and falling property values. With its new functions, the Fed is acting as the commercial bank of last resort, transferring debt (mostly in the form of MBSs and Treasuries) from the private sector to the public sector. The Fed’s zero-interest-rate policy has given a huge hidden subsidy to banks by allowing them to borrow Fed money for nothing and then lend it to the government at a 3 percent interest rate. But this is still not inflationary, because the Federal Reserve is merely picking up the slack left by the collapse of credit in the private sector. In effect, the nation’s government and its central bank are together becoming the lender of last resort and the borrower of last resort, and (via the military) increasingly also the consumer of last resort and the employer of last resort.
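The scale of that hidden subsidy is easy to illustrate with hypothetical numbers (the $100 billion below is purely illustrative): a bank that borrows $100 billion from the Fed at essentially zero percent and uses it to buy Treasuries yielding 3 percent pockets the spread,

\[
\$100\ \text{billion} \times (0.03 - 0.00) = \$3\ \text{billion per year in nearly risk-free income}.
\]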
How can the U.S. continue to run deficits amounting to a sizable proportion of GDP? If other nations did the same, the result would be currency devaluation and inflation. America can get away with it for now because the dollar is the world’s reserve currency, so that if the dollar failed outright, most or all of the global economy would go down with it. Meanwhile some currency devaluation actually works to America’s advantage by making its exports more attractively priced.
Over the short to medium term, then, the U.S.—and, by extension, most of the rest of the world—appears to have achieved a kind of tentative and painful balance. The means used will prove unsustainable, and in any case this period will be characterized by high unemployment, declining wages, and political unease. While leaders will make every effort to portray this as a gradual return to growth, in fact the economy will be losing ground and will remain fragile, highly vulnerable to upsetting events that could take any of a hundred forms—including international conflict, terrorism, the bankruptcy of a large corporation or megabank, a sovereign debt event (such as a default by one of the European countries now lined up for bailouts), a food crisis, an energy shortage or temporary grid failure, an environmental disaster, a curtailment of government-Fed intervention based on a political shift in the makeup of Congress, or a currency war (again, more on that in Chapter 5).
What should be done to avert further deterioration of the global financial system? Once again, the public debate (such as it is) is dominated by the opposed viewpoints of the Keynesians and the Chicago Schoolers, which are approximately reflected in the positions of the U.S. Democratic and Republican parties.
The Keynesians still see the world through the lens of the Great Depression. During the 1930s, industrialized countries were in the early stages of their shift from an agrarian coal-based rural economy to an electrified, oil-based, urban economy—a shift that required enormous infrastructure investments (in new highways, airports, dams, and power lines) that would ultimately pay off handsomely for a nation on the verge of realizing a consumer utopia. All that was needed to initiate the building of that infrastructure was credit—grease for the wheels of commerce. Government got those wheels rolling by taking on debt, with private companies increasingly taking up the lead after World War II. The expansion that occurred from the 1950s through 2000, as that infrastructure was built out and put to use, easily justified the government pump-priming that initiated the process. Interest payments on the government debt could be paid through growth of the tax base.
Now is different. As we will see in the next two chapters, both the U.S. and the world as a whole have passed a fundamental crossroads characterized by increasing scarcity of energy and crucial minerals. Because of this, strategies of growth that worked spectacularly well in the mid-to-late 20th century—via various forms of business and technological development—have reached a point of diminishing returns.
Thus the Keynesian spending bridge today leads nowhere.
But stopping its construction now will result in a catastrophic weakening of the entire economy. The backstop provided by government spending and central bank guarantees and debt acquisition is the only thing keeping the system from hurtling into a deflationary spiral. Fiscal conservatives who rail against bigger government and more government debt need to comprehend the alternative—a gaping, yawning economic void. For a mere glimpse of what major government spending cutbacks might look like in the U.S., consider the impacts on European nations that are being subjected to fiscal austerity measures as a corrective for too-rosy expectations of future growth. The picture is bleak: rising poverty, disappearing social services, and general strikes and protests.
Extreme social unrest would be an inevitable result of the gross injustice of requiring a majority of the population to forego promised entitlements and economic relief following the bailout of a small super-wealthy minority on Wall Street. Political opportunists can be counted on to exacerbate that unrest and channel it in ways utterly at odds with society’s long-term best interests. This is a toxic brew with disturbing precedents in recent European history.
If the Keynesian remedy doesn’t cure the ailment but merely extends the suffering (while increasing government debt to truly toxic levels), the medicine of austerity may have such severe side effects that it could kill the patient outright. Both sides—left and right, the socialists and free-marketers—assume and hope to the point of desperation that their prescription will result in a rapid return to continuous economic growth and low unemployment. But as we are about to see, that hope is futile.
There is no “silver bullet,” no magic solution that will turn back the clock to an era of abundant resources and easy growth. For now, all that governments can do is buy time through further deficit spending—ideally, using that time to build infrastructure that will continue to function in the coming era of reduced flows of energy and resources. Meanwhile, we must all find ways to come out from under a burden of debt that will otherwise crush us. The inherent contradiction within this prescription is obvious but unavoidable.
Footnotes
2. U.S. Department of the Treasury, TreasuryDirect, “Historical Debt Outstanding, 2000-2010,” http://www.treasurydirect.gov/govt/reports/pd/pd.htm.
5. Carmen M. Reinhart and Kenneth Rogoff, This Time Is Different: Eight Centuries of Financial Folly (Princeton, NJ: Princeton University Press, 2009).
8. CNNMoney.com, “Bailout Tracker,” http://money.cnn.com/news/storysupplement/economy/bailouttracker/index.html.
10. Henny Sender, “Spectre of Deflation Kills the Mood at Jackson Hole,” Financial Times, September 13, 2010.