Finance Isn’t Free and Never Really Was

Don Watkins
Mar 1, 2017


Now that Trump is in office, there is talk that his administration will support repealing or revising Dodd-Frank — the government’s regulatory response to the financial crisis of 2008. The bill was sold as a way to protect ourselves from future crises by making the financial system more stable.

One of the most striking things about financial crises is how sudden and unexpected they are. Nearly everyone, including America’s top bankers and financial regulators, was unprepared for September 2008. Few foresaw the collapse of many of the nation’s leading financial institutions or the government bailouts putting taxpayers on the hook for hundreds of billions of dollars — not to mention the sheer fear of the unknown this calamity produced. We wondered: How many businesses would fail? How many jobs would be lost? How far would the stock market (and our retirement savings) fall?

The Great Recession made viscerally real for people the dangers of financial instability. We want the benefits of a healthy financial system — thriving businesses, available credit, low unemployment, stable prices — but we want to make sure the system doesn’t collapse and leave us struggling to pick up the pieces.

What creates financial instability? The most popular narrative says that banking and finance are inherently unstable unless overseen and controlled by the government. Absent massive government intervention, greedy financiers will engage in reckless and sometimes predatory practices in order to line their own pockets, and then leave us with the tab once the system implodes.

That narrative shaped our response to the Panic of 1907, which led to the creation of America’s central bank, the Federal Reserve, in 1913. It shaped our response to the Great Depression, which led to a massive new regulatory infrastructure, including the creation of the SEC (Securities and Exchange Commission), which regulates stocks and other securities, the FDIC (Federal Deposit Insurance Corporation), which insures bank deposits, and Glass-Steagall’s separation of commercial and investment banking. And it shaped our response to 2008’s financial crisis, which led to Dodd-Frank, the most sweeping set of financial controls since the New Deal.

Part of what has made this narrative plausible is the belief that financial markets and institutions were free in the lead-up to the crisis — if not totally, then at least in important respects. During the 2008 financial crisis, for instance, it was not uncommon to hear blame cast on “deregulation,” “cowboy capitalism,” and “laissez-faire banking.”

That’s a recipe for government intervention. If people think the unrestrained pursuit of self-interest by bankers, traders, speculators, and other financiers leads to crises, and they believe that financial markets lacked government control prior to crises, then why wouldn’t the answer be greater government control going forward?

The truth is that finance has never been free in the United States — not even close. And the system that nearly collapsed in 2008 was in most ways more controlled by government than at any time in U.S. history.

Even more important, it is government interference that makes financial systems fragile. If we look at the most stable financial systems in history — the 19th-century Scottish system or the Canadian system, for instance — they are invariably the freest systems. If we look at financial systems prone to panics and crises, what we invariably find are price distortions, twisted incentives, and regulatory straitjackets created by government intrusion.

It isn’t the unrestrained pursuit of self-interest by financiers that makes financial markets fragile — it is the distortions and restraints governments put on the pursuit of self-interest. If we value a stable financial system, then government intervention is not the answer to our problem, it is the problem.

Financial freedom has two components: freedom from regulation and freedom from government support.

Financial regulations generally fall into two categories — I call them “fairness” regulations and “fragility” regulations. Fairness regulations include everything from proscriptions on fraud (which are proper) to rules governing conflicts of interest to undefined “crimes” like “stock manipulation” — all aimed at allegedly making sure financial dealings are, in the government’s eyes at least, fair. Fragility regulation aims to promote prudence — to minimize risks that can potentially threaten a financial institution and the entire financial system. This can mean anything from bank capital requirements to regulators dictating which loans a bank can make. (In this series, I’m going to focus on fragility regulation, although the two categories sometimes overlap.)

“Government support” refers to the ways in which the government intervenes in the economy to protect financial institutions, including subsidies, bailouts, and restrictions on competition — typically under the guise of benefiting the public (bank customers, mortgage borrowers, etc.).

Such “protections” are no less destructive than regulations: they reduce market discipline, distort banking incentives, and supply the justification for many financial regulations, making our financial system incredibly fragile.

In this essay I’ll cover:

1. How government intervention, not free banking or the gold standard, led to the bank panics of the late 19th and early 20th centuries.

2. How government intervention, not the gold standard or Wall Street speculators, made possible the Great Depression.

3. How the New Deal’s response to the Great Depression did not address its root causes and laid the groundwork for future crises.

4. How, despite the so-called deregulation of the late 20th century, the financial system was more controlled than ever on the eve of the 2008 financial crisis.

5. How the Great Recession was made possible by government intervention in the financial system — notably including the Federal Reserve’s control of money and the moral hazard created by federal deposit insurance and its progeny, the “too big to fail” doctrine.

Defining what a fully free financial system would look like and why it would be resilient rather than unstable is a complex undertaking, and beyond my scope. (If you’re interested, start with the work of free-banking scholars such as George Selgin, Lawrence H. White, and Kevin Dowd.)

What we will see is how deeply wrong the conventional narrative blaming crises on “unregulated free markets” is — and why anyone concerned with a healthy financial system should take the time to look for solutions that don’t involve handing the government enormous new powers.

BANKING PANICS AND THE CREATION OF THE FEDERAL RESERVE

The Myth: We tried free banking and the result was constant bank runs and panics. The Federal Reserve was created to make the system stable and it succeeded.

The Reality: America’s recurrent panics were the product of financial control, and there is no evidence the Federal Reserve has made things better.

No one disputes that America’s banking system prior to the Federal Reserve’s (the Fed’s) creation in 1914 was unstable, prone to money shortages and recurrent panics. But what was the cause of that instability?

The conventional wisdom says that it was the inherent weakness of a free banking system — in particular, not having a central bank that could act as a “lender of last resort” to banks in need of cash during times of stress and panic.

One major reason to doubt that story, however, is that the phenomenon of recurrent banking panics was unique to the U.S. during the late 19th century, even though the U.S. was far from the only country without a central bank. Canada, for example, lacked a central bank and was far less regulated than the U.S., yet its financial system was notoriously stable.

In the U.S., government control over the banking system goes back to the earliest days of the republic. But when people speak about pre-Fed panics, what they usually have in mind is the period that runs from the Civil War to the creation of the Federal Reserve in 1913 (when the U.S. was on what was known as the National Currency System). During that era, there were two regulations that explain why the U.S. system was so volatile, while freer systems in Canada, Scotland, and elsewhere were remarkably stable:

(1) bond-collateral banking

(2) restrictions on branch banking

How Bond-Collateral Banking and Branch Banking Restrictions Fostered Crises

To understand bond-collateral banking, we need to take a step back and look at the monetary system at the time. Today we think of money as green pieces of paper issued by the government. But during the 19th and early 20th centuries, money meant specie: gold (or sometimes gold and silver). Paper money existed, but it was an IOU issued by a bank, which you could redeem in specie. A $10 bank note meant that if you brought the note to the bank, the bank had to give you $10 worth of gold.

In a fully free system, banks issue their own notes, and although those are redeemable in specie, banks don’t keep 100 percent of the gold necessary to redeem their notes on hand. Instead, they hold some gold as well as a variety of other assets, including government bonds, commercial paper (basically a short-term bond issued by businesses), and the various loans on their books.

This is what’s known as fractional reserve banking. The basic idea is that not every depositor will seek to redeem his notes for gold at the same time, and so some of the funds deposited at the bank can be invested by the bank and earn a return (which gold sitting in the vault does not). This was an important innovation in banking, which among other benefits meant that banks could pay depositors interest on their deposits rather than charge depositors for holding their gold in the vault.

But fractional reserve banking also carries with it what’s called liquidity risk. Even a solvent bank can be illiquid under a fractional reserve system. Although its assets (what it owns) are worth more than its liabilities (what it owes), the bank may not be able to quickly turn assets like long-term loans into cash. As a result, if too many depositors want to redeem their bank notes at once, the bank won’t be able to meet its obligations, which can lead it to suspend redemptions or even go out of business.
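
To make the liquidity-risk point concrete, here is a minimal Python sketch with made-up numbers (not data about any actual bank): a bank whose assets comfortably exceed its liabilities, but whose gold and other liquid assets fall short of a sudden wave of redemptions.

```python
# Toy balance sheet with made-up figures: solvent (assets exceed liabilities)
# but illiquid (cannot meet today's redemption demand in specie).

gold_reserves = 100          # specie on hand, redeemable immediately
other_liquid_assets = 50     # e.g., short-term commercial paper
long_term_loans = 850        # sound loans, but slow to convert into gold

notes_and_deposits = 900     # liabilities redeemable on demand

equity = (gold_reserves + other_liquid_assets + long_term_loans) - notes_and_deposits
solvent = equity > 0

redemption_demand_today = 200
can_pay_today = (gold_reserves + other_liquid_assets) >= redemption_demand_today

print(f"equity: {equity}")                                # 100: the bank is worth more than it owes
print(f"solvent: {solvent}")                              # True
print(f"can meet today's redemptions: {can_pay_today}")   # False: a run can sink a sound bank
```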

In the banking systems that most closely approximated free banking, such as Scotland’s system up to 1845, this was rarely a problem. Even highly illiquid banks were able to operate without facing bank runs so long as they remained solvent (i.e., so long as their assets were worth more than their liabilities, meaning they could pay their debts).

But in the post-Civil War era, solvent banks frequently experienced liquidity crises. Why? Because of banking regulations.

We’re taught to think of regulations as efforts to prevent “greedy” businesses from harming people. But historically banking regulations have often been designed to exploit the banking system in order to finance government spending. The typical pattern is to make the freedom of individuals to start banks or to engage in some banking activity, like issuing notes, contingent upon filling the government’s coffers. That’s what happened with the bond-collateral system imposed by the National Bank Act during the Civil War.

At the time, the federal government was in desperate need of funds to support the war effort, and so among other provisions it created an artificial market for its bonds by essentially forcing banks to buy them. Under the bond-collateral system, U.S. banks could only issue notes if those notes were backed by government bonds. For every $100 of government bonds a bank purchased, it was allowed to issue up to $90 in notes.

How did this make U.S. banking unstable? Imagine a bank that carries two liabilities on its books: the bank notes it has issued and checking account deposits. Now imagine that a customer with a checking account worth $200 wants to withdraw $90 worth of bank notes. In a free system, that’s no problem: the bank simply debits his account and issues him $90 in notes. There is no effect on the asset side of the bank’s balance sheet.

But consider what happens under the bond-collateral system. In order to issue the bank customer $90 in notes, the bank has to sell some of its assets and buy $100 of government bonds. At minimum that takes time and imposes a cost on the bank. But those problems were exacerbated because the U.S. government began retiring its debt in the 1880s, making its remaining bonds harder and more expensive to buy. The result was that, at a time when the economy was growing quickly, the available supply of paper money was shrinking.
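Here is a minimal Python sketch of the two cases described above, using invented figures and the $90-of-notes-per-$100-of-bonds rule. It illustrates only the balance-sheet mechanics, not any particular bank.

```python
# Invented figures. Contrast note issuance in a free system (a pure swap of one
# liability for another) with issuance under the bond-collateral rule
# ($90 of notes per $100 of government bonds held).

def issue_notes_free(bank, amount):
    # Free system: debit the customer's deposit, hand over notes. Assets untouched.
    bank["deposits"] -= amount
    bank["notes"] += amount

def issue_notes_bond_collateral(bank, amount, notes_per_100_bonds=90.0):
    # Bond-collateral system: the bank must first hold enough government bonds.
    bonds_required = amount * 100.0 / notes_per_100_bonds
    shortfall = max(0.0, bonds_required - bank["gov_bonds"])
    if shortfall > 0:
        # Sell other earning assets (with delay and cost) to buy the bonds.
        bank["other_assets"] -= shortfall
        bank["gov_bonds"] += shortfall
    bank["deposits"] -= amount
    bank["notes"] += amount

free_bank = {"gov_bonds": 0.0, "other_assets": 500.0, "deposits": 200.0, "notes": 0.0}
regulated_bank = {"gov_bonds": 0.0, "other_assets": 500.0, "deposits": 200.0, "notes": 0.0}

issue_notes_free(free_bank, 90)
issue_notes_bond_collateral(regulated_bank, 90)

print(free_bank)       # assets unchanged; only the mix of liabilities shifts
print(regulated_bank)  # $100 of earning assets now locked up in bonds just to issue $90 in notes
```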

The bond-collateral system thus created the problem of an inelastic currency. The demand for paper currency isn’t constant — it rises and falls. This was especially true in 19th-century America, which was still a heavily agricultural society. During harvest season, farmers needed extra currency, say, to pay the migrant workers who helped bring their crops to market. After the harvest season, demand for currency would shrink, as farmers deposited their notes back at the banks.

This left banks with a lousy set of options. They could either keep a bunch of expensive government bonds on their books (assuming they could get them), so that they could meet a temporary increase in demand for notes — or they could try to meet the temporary demand for cash by drawing down their gold reserves. Typically, they did the latter.

That would be bad enough if it simply meant that a small country bank would find its gold reserves dwindling. But making matters worse was the impact of branch banking restrictions.

Throughout America’s history, banks were legally prevented from branching — that is, the same bank was barred from operating in multiple locations around the country, the way you can find a Chase branch whether you’re in Virginia or California today. Instead, Americans were left with what was known as a unit banking system. For the most part, every bank was a stand-alone operation: one office building serving the surrounding community.

One result was a banking system that was highly undiversified. A bank’s fortunes were tied to its community. In an oil town, for instance, a downturn in the petroleum market could put the local bank out of business.

But the bigger problem was that unit banking made it harder for banks to deal with liquidity crises. A branched bank always had the option of calling on the cash reserves of its sister branches. This option was off limits to American banks. What developed instead was a system of correspondent banking and the so-called pyramiding of reserves, which concentrated problems in the heart of America’s financial center: New York. As economist George Selgin explains, unit banking

forced banks to rely heavily on correspondent banks for out-of-town collections, and to maintain balances with them for that purpose. Correspondent banking, in turn, contributed to the “pyramiding” of bank reserves: country banks kept interest-bearing accounts with Midwestern city correspondents, sending their surplus funds there during the off season. Midwestern city correspondents, in turn, kept funds with New York correspondents, and especially with the handful of banks that dominated New York’s money market. Those banks, finally, lent the money they received from interior banks to stockbrokers at call.

The pyramiding of reserves was further encouraged by the National Bank Act, which allowed national banks to use correspondent balances to meet a portion of their legal reserve requirements. Until 1887, the law allowed “country” national banks — those located in rural areas and in towns and smaller cities — to keep three-fifths of their 15 percent reserve requirement in the form of balances with correspondents or “agents” in any of fifteen designated “reserve cities,” while allowing banks in those cities to keep half of their 25 percent requirement in banks at the “central reserve city” of New York. In 1887 St. Louis and Chicago were also classified as central reserve cities. Thanks to this arrangement, a single dollar of legal tender held by a New York bank might be reckoned as legal reserves, not just by that bank, but by several; and a spike in the rural demand for currency might find all banks scrambling at once, like players in a game of musical chairs, for legal tender that wasn’t there to be had, playing havoc in the process with the New York stock market, as banks serving that market attempted to call in their loans. . . .

Nationwide branch banking, by permitting one and the same bank to operate both in the countryside and in New York, would have avoided this dependence of the entire system on a handful of New York banks, as well as the periodic scramble for legal tender and ensuing market turmoil.
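
To see what those quoted percentages imply, here is a stylized back-of-the-envelope calculation (my own illustration, not Selgin’s): it traces $100 of country-bank deposits through the reserve pyramid and asks how much actual legal tender stands behind the nominal reserves.

```python
# Stylized arithmetic using the percentages quoted above: a 15% country-bank
# requirement (up to 3/5 holdable as correspondent balances), a 25% reserve-city
# requirement (up to half holdable in New York), and a 25% requirement held as
# cash in New York. Illustration only.

deposits_at_country_bank = 100.0

country_required = 0.15 * deposits_at_country_bank      # $15.00 of "legal reserves"
country_vault_cash = (2 / 5) * country_required         # $6.00 actually held as cash
country_balance_in_city = (3 / 5) * country_required    # $9.00 redeposited with a city correspondent

city_required = 0.25 * country_balance_in_city          # $2.25 reserve against that balance
city_vault_cash = 0.5 * city_required                   # $1.125 held as cash
city_balance_in_ny = 0.5 * city_required                # $1.125 redeposited in New York

ny_cash = 0.25 * city_balance_in_ny                     # about $0.28 of cash in New York

total_cash_in_chain = country_vault_cash + city_vault_cash + ny_cash
print(f"Nominal legal reserves behind $100 of deposits: ${country_required:.2f}")
print(f"Actual legal tender held anywhere in the chain: ${total_cash_in_chain:.2f}")
# Roughly $7.41 of cash ends up supporting $15 of nominal reserves, so a spike
# in rural currency demand pulls on New York funds that are largely lent out.
```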

It sounds complex, but in the final analysis it’s all pretty straightforward. Bankers were not free to run their businesses in a way that would maximize their profits and minimize their risks. The government forced them to adopt an undiversified, inflexible business model they would have never chosen on their own. America’s banking system was unstable because government regulations made it unstable, and the solution would have been to liberate the system from government control.

That’s not what happened.

The Creation of the Federal Reserve and Its Unimpressive Record

There was widespread recognition at the time that branching restrictions and bond-collateral banking were responsible for the turmoil in the American system. Neither of these regulations existed in Canada, and Canada’s stability was anything but a secret. As Americans debated what to do about the financial system during the early 20th century, many pointed to Canada’s success and urged repealing these restrictions in the U.S. As economist Kurt Schuler observes:

Many American economists and bankers admired Canada’s relatively unregulated banking system. The American Bankers’ Association’s ‘Baltimore plan’ of 1894 and a national business convention’s ‘Indianapolis plan’ of 1897 referred to Canada’s happy experience without American-style bond collateral requirements. (The Experience of Free Banking, chapter 4).

And Selgin also notes:

Proposals to eliminate or relax regulatory restrictions on banks’ ability to issue notes had as their counterpart provisions that would allow banks to branch freely. The Canadian system supplied inspiration here as well. Canadian banks enjoyed, and generally took full advantage of, nationwide branching privileges.

Of course, the push for deregulation of banking did not carry the day, thanks to various pressure groups and the general ideological climate of the country, which had shifted away from the pro-capitalist ideas that had characterized the 19th century. Instead, following the Panic of 1907, America got the Federal Reserve.

The Federal Reserve is America’s central bank, which today exercises enormous control over the money supply and the entire financial system. At the time of its creation, however, the Fed was seen as having a more limited function: to protect the safety and soundness of the banking system primarily by furnishing an elastic currency and acting as a “lender of last resort,” providing liquidity to banks in times of crises.

So what was the Fed’s track record? Did it put an end to the instability of the not-so-free banking period? Most people think so. But most people are wrong.

Bank runs and panics did not decrease in the first decades after the Fed was established. As economist Richard Salsman observes, “Bank failures reached record proportions even before the Great Depression of 1929–1933 and the collapse of the banking system in 1930. From 1913–1922, bank failures averaged 166 per year and the failure rate increased to 692 per year from 1923–1929 despite that period’s economic boom.”

True, bank panics did decline following the Great Depression, but that’s not thanks to the Fed — the credit for that goes to deposit insurance. (And, as we’ll see, deposit insurance laid the groundwork for severe troubles down the road.)

But even if we ignore the period from 1914, when the Fed was established, to the end of World War II, it is still not clear that the Federal Reserve has been a stabilizing force in the financial system. In their study “Has the Fed Been a Failure?”, economists George Selgin, William D. Lastrapes, and Lawrence H. White find that:

(1) The Fed’s full history (1914 to present) has been characterized by more rather than fewer symptoms of monetary and macroeconomic instability than the decades leading to the Fed’s establishment. (2) While the Fed’s performance has undoubtedly improved since World War II, even its postwar performance has not clearly surpassed that of its undoubtedly flawed predecessor, the National Banking system, before World War I. (3) Some proposed alternative arrangements might plausibly do better than the Fed as presently constituted.

Those may be controversial claims — although the evidence the authors marshal is impressive — but the key point is this: the conventional wisdom that America’s history shows that an unregulated financial system leads to disaster and only a government-controlled one can save the day is without merit. On the contrary, there is far more reason to suspect that the story runs the other way: that it’s government control that takes a naturally stable financial system and makes it fragile.

THE GREAT DEPRESSION AND THE ROLE OF GOVERNMENT INTERVENTION

The Myth: An unregulated free market and unrestricted Wall Street greed caused the Great Depression and only the interventionist policies of Franklin D. Roosevelt got us out.

The Reality: The Great Depression was caused by government intervention, above all a financial system controlled by America’s central bank, the Federal Reserve — and the interventionist policies of Hoover and FDR only made things worse.

The precise causes of the Great Depression remain a subject of debate, although, as economist Richard Timberlake observed in 2005, “Virtually all present-day economists . . . deny that a capitalist free-market economy in any way caused” it.

At the time, however, the free market was blamed, with much of the ire directed at bankers and speculators. Financiers were seen as having wrecked the economy through reckless speculation. President Hoover came to be viewed as a laissez-faire ideologue who did nothing while the economy fell deeper and deeper into depression, and Franklin D. Roosevelt’s interventionist policies under the New Deal were credited with rescuing us from disaster.

Americans came to conclude that the basic problem was the free market and the solution was government oversight and restraint of financiers and financial markets. It’s a view that the public, unaware of the consensus of modern economists, continues to embrace.

But the conventional story ignores the elephant in the room: the Federal Reserve. To place the blame for the Great Depression on a free financial system is like placing the blame for the fall of Rome on credit default swaps: you can’t fault something that didn’t exist. And by the time of the Great Depression, America’s financial system was controlled by the Fed.

It’s hard to overstate the importance of this fact. The Federal Reserve isn’t just any old government agency controlling any old industry. It controls the supply of money, and money plays a role in every economic transaction in the economy. If the government takes over the shoe industry, we might end up with nothing but Uggs and Crocs. But when the government messes with money, it can mess up the entire economy.

The two deadly monetary foes are inflation and deflation. We tend to think of inflation as generally rising prices and deflation as generally falling prices. But not all price inflation or price deflation is malignant — and not all price stability is benign. What matters is the relationship between the supply of money and the demand for money — between people’s desire to hold cash balances and the availability of cash.

Economic problems emerge when the supply of money does not match the demand for money, i.e., when there is what economists call monetary disequilibrium. Inflation, on this approach, refers to a situation where the supply of money is greater than the public’s demand to hold money balances at the current price level. Deflation refers to a situation where the supply of money is less than necessary to meet the public’s demand to hold money balances at the current price level.

In a free banking system, as George Selgin has argued, market forces work to keep inflation and deflation in check, i.e., there is a tendency toward monetary equilibrium. Not so when the government controls the money supply. Like all attempts at central planning, centrally planning an economy’s monetary system has to fail: a central bank has neither the knowledge nor the incentive to match the supply and demand for money. And so what we find when the government meddles in money are periods where the government creates far too much money (leading to price inflation or artificial booms and busts) or far too little money (leading to deflationary contractions).

And it turns out there are strong reasons to think that the Great Depression was mainly the result of the Federal Reserve making both mistakes.

The goal here is not to give a definitive, blow-by-blow account of the Depression. It’s to see in broad strokes the way in which government regulation was the sine qua non of the Depression. The free market didn’t fail: government intervention failed. The Great Depression doesn’t prove that the financial system needs regulation to ensure its stability — instead it reveals just how unstable the financial system can become when the government intervenes.

Creating the Boom

Was the stock market crash of 1929 rooted in stock market speculation fueled by people borrowing money to buy stock “on margin,” as those who blamed the bankers for the Great Depression claimed? Few economists today think so. As economist Gene Smiley observes:

There was already a long history of margin lending on stock exchanges, and margin requirements — the share of the purchase price paid in cash — were no lower in the late twenties than in the early twenties or in previous decades. In fact, in the fall of 1928 margin requirements began to rise, and borrowers were required to pay a larger share of the purchase price of the stocks.

For my money, the most persuasive account of the initial boom/bust that set off the crisis places the blame, not on speculators, but on central bankers.

Prior to the publication of John Maynard Keynes’s General Theory in 1936, the most influential account of the cause of the Great Depression was the Austrian business cycle theory pioneered by Ludwig von Mises and further developed by Friedrich Hayek. The Austrians, in fact, were among the few who predicted the crisis (though not its depth).

What follows is a highly simplified account of the Austrian theory. For a more in-depth treatment, see Lawrence H. White’s uniformly excellent book The Clash of Economic Ideas, which summarizes the Austrian theory and its account of the Great Depression. For a detailed theoretical explanation of the Austrian theory of the business cycle, see Roger W. Garrison’s Time and Money: The Macroeconomics of Capital Structure.

The Austrian theory, in the briefest terms, says that when a central bank creates too much money and expands the supply of credit in the economy, it can spark an artificial boom that ultimately has to lead to a bust.

It’s a pretty technical story, so let’s start with a simple analogy. Imagine you are planning a dinner party, and you’re an organized person, so you keep an inventory of all the items in your kitchen. But the night before your party, some prankster decides to sneak in and rewrite the list so that it shows you have double the ingredients you actually have.

The next morning you wake up and check your inventory list. With so many ingredients available, you decide to invite a few more friends to the dinner. Meanwhile, your kid unexpectedly comes home from college and decides to make herself a large breakfast — but it’s no big deal. According to your inventory, you have more than enough eggs and butter to finish your recipe. Of course, your inventory is wrong, and half an hour before your guests arrive, you realize you’re short what you need to finish the meal. The dinner is a bust.

Well, something like that happens when the government artificially expands the supply of credit in the economy. It causes everyone to think they’re richer than they are and, just like someone planning a meal with an inaccurate inventory list, they end up making decisions — about what to produce and how much to consume — that wouldn’t have made sense had they known how many resources were actually available to carry out their plans.

Under the Austrian theory, the key mistake is for the central bank to inject new money into the economic system, typically by creating additional bank reserves.* Bank reserves are a bank’s cash balance. Just as your cash balance consists of the money you have in your wallet and in your checking account, so a bank’s cash balance consists of the cash it has in its vault and in the deposit account it maintains with the central bank.

When a central bank creates additional bank reserves, it encourages the banks to lend out the new money at interest, rather than sit on a pile of cash that isn’t earning a return. To attract borrowers for this additional money, the banks will lower the interest rate they charge on loans, leading entrepreneurs to invest in plans that would not have been profitable at the previous, higher interest rate.

This is a big problem. In a free market, interest rates coordinate the plans of savers and investors. Investment in productive enterprises requires that real resources be set aside rather than consumed immediately. If people decide to spend less today and save more for the future, there are more resources available to fund things like new businesses or construction projects, and that will be reflected in a lower rate of interest.

But when the central bank pushes down interest rates by creating new money, the lower interest rate does not reflect an increase in genuine savings by the public. It is artificially low — the prankster has falsified the inventory list. The result is an unsustainable boom. The increased business activity uses up resources while at the same time people start consuming more, thanks to cheaper consumer credit and a lower return on savings — there is what economist Lawrence H. White calls “a tug-of-war for resources between longer processes of production (investment for consumption in the relatively distant future) and shorter processes (consumption today and in the near future).”

Eventually prices and interest rates start to rise, and entrepreneurs find that they cannot profitably complete the projects they started. The unsustainable boom leads inevitably to a bust. As Mises writes in his 1936 article “The ‘Austrian’ Theory of the Trade Cycle,” once

a brake is thus put on the boom, it will quickly be seen that the false impression of “profitability” created by the credit expansion has led to unjustified investments. Many enterprises or business endeavors which had been launched thanks to the artificial lowering of the interest rate, and which had been sustained thanks to the equally artificial increase of prices, no longer appear profitable. Some enterprises cut back their scale of operation, others close down or fail. Prices collapse; crisis and depression follow the boom. The crisis and the ensuing period of depression are the culmination of the period of unjustified investment brought about by the extension of credit. The projects which owe their existence to the fact that they once appeared “profitable” in the artificial conditions created on the market by the extension of credit and the increase in prices which resulted from it, have ceased to be “profitable.” The capital invested in these enterprises is lost to the extent that it is locked in. The economy must adapt itself to these losses and to the situation that they bring about.

This, the Austrians argued, was precisely what happened in the lead-up to the 1929 crash. (Two economists, Barry Eichengreen and Kris Mitchener, who are not part of the Austrian school and who by their own admission “have vested interests . . . emphasizing other factors in the Depression,” nevertheless found that the empirical record is consistent with the Austrian story.)

The Federal Reserve during the late 1920s held interest rates artificially low, helping spark a boom — notably in the stock market, which saw prices rise by 50 percent in 1928 and 27 percent in the first 10 months of 1929. Starting in August of 1929, the Fed tried to cool what it saw as an overheated stock market by tightening credit. The boom came to an end on October 29.

Magnifying the Bust

When the government sparks an inflationary boom, the boom has to end eventually. One way it can end is that the government can try to keep it going, ever-more rapidly expanding the money supply until price inflation wipes out the value of the currency, as happened in Germany during the 1920s.

The other way is for the central bank to stop expanding credit and allow the boom to turn into a bust. Some businesses close or fail, some people lose their jobs, investments lose their value: the market purges itself of the mistakes that were made during the boom period.

That adjustment process is painful but necessary. But what isn’t necessary is for there to be an economy-wide contraction in spending — a deflationary contraction. A deflationary contraction occurs when the central bank allows the money supply to contract artificially, so that the demand for money goes unmet. As people scramble to build up their cash balances, they cut back on their spending, which sends ripples through the economy. In economist Steven Horwitz’s words:

As everyone reduces spending, firms see sales fall. This reduction in their income means that they and their employees may have less to spend, which in turn leads them to reduce their expenditures, which leads to another set of sellers seeing lower income, and so on. All these spending reductions leave firms with unsold inventories because they expected more sales than they made. Until firms recognize that this reduction in expenditures is going to be economy-wide and ongoing, they may be reluctant to lower their prices, both because they don’t realize what is going on and because they fear they will not see a reduction in their costs, which would mean losses. In general, it may take time until the downward pressure on prices caused by slackening demand is strong enough to force prices down. During the period in which prices remain too high, we will see the continuation of unsold inventories as well as rising unemployment, since wages also remain too high and declining sales reduce the demand for labor. Thus monetary deflations will produce a period, perhaps of several months or more, in which business declines and unemployment rises. Unemployment may linger longer as firms will try to sell off their accumulated inventories before they rehire labor to produce new goods. If such a deflation is also a period of recovery from an inflation-generated boom, these problems are magnified as the normal adjustments in labor and capital that are required to eliminate the errors of the boom get added on top of the deflation-generated idling of resources.

In short, a deflationary contraction can unleash a much more severe and widespread drop in prices, wages, and output and a much more severe and widespread rise in unemployment than is necessary to correct the mistakes of an artificial boom.

Unfortunately, that’s exactly what happened during the Great Depression. Three factors were particularly important in explaining the extreme deflationary contraction that occurred during the 1930s.

1. Bank failures

In my last post, I discussed how government regulation of banking made banks more fragile. In particular, I noted that government regulations prevented banks from branching, making them far less robust in the face of economic downturns.

That remained true throughout the 1920s and ’30s, leaving U.S. banks vulnerable in a way that Canadian banks, which could and did branch, were not. Not a single Canadian bank failed during the Depression. In the United States, 9,000 banks failed between 1930 and 1933 (roughly 40 percent of all U.S. banks), destroying the credit these banks supplied and so further contracting the money supply.

A report from the Federal Reserve Bank of St. Louis describes it this way:

Starting in 1930, a series of banking panics rocked the U.S. financial system. As depositors pulled funds out of banks, banks lost reserves and had to contract their loans and deposits, which reduced the nation’s money stock. The monetary contraction, as well as the financial chaos associated with the failure of large numbers of banks, caused the economy to collapse.

Less money and increased borrowing costs reduced spending on goods and services, which caused firms to cut back on production, cut prices and lay off workers. Falling prices and incomes, in turn, led to even more economic distress. Deflation increased the real burden of debt and left many firms and households with too little income to repay their loans. Bankruptcies and defaults increased, which caused thousands of banks to fail.

(The banking panics of 1932, it should be noted, were at least in part the result of fears that incoming president FDR would seize Americans’ gold and take the nation off the gold standard — which he ultimately did. Another contributing factor was the protectionist Smoot-Hawley tariff passed in 1930, which, among many other negative impacts on the economy, devastated the agricultural sector and many of the unit banks dependent on it.)

Thanks to these massive bank failures, the U.S. was being crippled by a severe deflation, and yet the Federal Reserve — which, despite being on a pseudo-gold standard, could have stepped in (see here and here) — did nothing.

2. The check tax

Also contributing to the collapse of the money supply was the check tax, part of the Revenue Act of 1932, signed into law by Hoover. The Act raised taxes in an effort to balance the budget, which was bad enough in the midst of a deflationary crisis. But the worst damage was done by the check tax. This measure placed a 2-cent tax (40 cents today) on bank checks, prompting Americans to flee from checks to cash, thereby removing badly needed cash from the banks. The result, economists William Lastrapes and George Selgin argue, was to reduce the money supply by an additional 12 percent.

3. Hoover’s high wage policy

The net result of the bank failures and the check tax was a credit-driven deflation the likes of which the U.S. had never seen. As Milton Friedman and Anna Schwartz explain in their landmark Monetary History of the United States:

The contraction from 1929 to 1933 was by far the most severe business-cycle contraction during the near-century of U.S. history we cover, and it may well have been the most severe in the whole of U.S. history. . . . U.S. net national product in constant prices fell by more than one-third. . . . From the cyclical peak in August 1929 to the cyclical trough in March 1933, the stock of money fell by over a third.

Why is a deflationary contraction so devastating? A major reason is because prices don’t adjust uniformly and automatically, which can lead to what scholars call economic dis-coordination. In particular, if wages don’t fall in line with other prices, this effectively raises the cost of labor, leading to — among other damaging consequences — unemployment. And during the Great Depression, although most prices fell sharply, wage rates did not.

One explanation is that wages are what economists call “sticky downward”: people don’t like seeing the number on their paychecks go down, regardless of whether economists are assuring them that their purchasing power won’t change. The idea of sticky prices is somewhat controversial, however — in earlier downturns, after all, wages fell substantially, limiting unemployment.

What is certainly true is that government intervention kept wages from falling — particularly the actions of President Hoover and, later, President Roosevelt.

Hoover believed in what was called the “high wage doctrine,” a popular notion in the early part of the 20th century. The high wage doctrine said that keeping wages high helped cure economic downturns by putting money into the pockets of workers who would spend that money, thereby stimulating the economy.

When the Depression hit and prices began falling, Hoover urged business leaders not to cut wages. And the evidence suggests that they listened (whether at Hoover’s urging or simply because they too accepted the high wage doctrine). According to economists John Taylor and George Selgin:

Average hourly nominal wage rates paid to 25 manufacturing industries were 59.3 cents in October 1929, and 59.5 cents by April 1930. Wage rates had fallen only to 59.1 cents by September 1930, despite substantially reduced output prices and profits. Compare this to the 20 percent decline in nominal wage rates during the 1920–21 depression. During the first year of the Great Depression the average wage rate fell less than four-tenths of one percent.

Hoover would go on to put teeth into his request for high wages, signing into law the Davis-Bacon Act of 1931 and the Norris-LaGuardia Act of 1932, both of which used government power to prop up wages. FDR would later implement policies motivated by the high wage doctrine, including the 1933 National Industrial Recovery Act, the 1935 National Labor Relations Act, and the 1938 Fair Labor Standards Act.

The problem is that the high wage doctrine was false — propping up wages only meant that labor became increasingly expensive at the same time that demand for labor was falling. The result was mass unemployment.

The Aftermath

It’s worth repeating: this is far from a full account of the Great Depression. It’s not even a full account of the ways the Federal Reserve contributed to the Great Depression (many scholars fault it for the so-called Roosevelt Recession of 1937–38). What we have seen is that there are strong reasons to doubt the high school textbook story of the Great Depression that indicts free markets and Wall Street.

We’ve also started to see a pattern that recurs throughout history: government controls create problems, but the response is almost never to get rid of the problematic controls. Instead, it’s to pile new controls on top of old ones, which inevitably creates even more problems.

And that’s what happened with the Great Depression.

Did we abolish the Fed? No.

Did we return to the pre-World War I classical gold standard? No.

Did we abolish branch banking restrictions? No.

Instead, we created a vast new army of regulatory bodies and regulatory acts, which would spawn future problems and crises: above all, the Glass-Steagall Act of 1933, which separated investment and commercial banking and inaugurated federal deposit insurance. I’ll turn to that in the next post.

*How central banks go about conducting monetary policy has varied throughout history. Richard Timberlake explains the process as it took place during the 1920s and 1930s. George Selgin describes the process in more recent times, both prior to the 2008 financial crisis and since.

HOW THE NEW DEAL MADE THE FINANCIAL SYSTEM LESS SAFE

The Myth: New Deal regulation of the financial system made the system safer.

The Reality: New Deal regulation of the financial system failed to address the real source of the problems that led to the Great Depression and laid the foundation for future crises.

Although there is widespread agreement among economists that the Great Depression was not caused by the free market, there is also widespread, if not universal, agreement that the government’s regulatory response to the Great Depression made the system safer. Many commentators on the 2008 financial crisis argue that it was the abandonment of the post–New Deal regulatory regime during the 1980s and 1990s that set the stage for our current troubles.

There are three major parts of the government’s regulatory response to the Great Depression:

1. Banking regulation
2. Housing regulation
3. Securities regulation

The government’s top priority on housing was to bail out mortgage borrowers and lenders, spawning the creation of the Federal Housing Administration and Fannie Mae. The Securities Act of 1933 and the Securities Exchange Act of 1934, which established the Securities and Exchange Commission, were passed to control the trading of securities in the name of protecting investors and making securities markets more orderly and fair.

Here I’m going to focus on banking regulation, specifically the Banking Act of 1933, often referred to as Glass-Steagall. Among other provisions, Glass-Steagall created a separation between commercial and investment banking activities, and established the Federal Deposit Insurance Corporation (FDIC), which insures banking deposits.

Conventional wisdom says Glass-Steagall made the system safer. The truth is that it failed to address the causes of the Great Depression, and instead contributed to future crises.

The Senseless Separation of Commercial and Investment Banking

During the 1920s, commercial banks (i.e., those that accepted deposits and made loans) started expanding into lines of business traditionally dominated by investment banks, such as underwriting and trading securities. The development of universal banking allowed commercial banks to become, in effect, one-stop shops for their customers, and they grew quickly by taking advantage of economies of scope and offering customers major discounts on brokerage services. (Technically, commercial banks did not usually engage in investment banking activities, but instead operated through closely allied security affiliates.)

In 1932, the government launched an investigation of the crash of ’29, which became known as the Pecora hearings. The hearings regaled Americans with claims of banking abuses arising from banks’ involvement in securities, although the evidence for these claims was, to be generous, scant. (See, for instance, here, here, here, and here.)

Whatever the truth, the Pecora hearings enraged the public, and bolstered a number of pressure groups and politicians who argued that universal banking made banks and the financial system more fragile, and demanded the separation of commercial and investment banking activities.

The opponents of universal banking made several arguments to support their agenda, but the central claim was that securities were inherently more risky than the traditional banking activities of taking deposits and making loans, and so allowing banks to have securities affiliates made them less sound.

But the starting premise — that securities activities were riskier than commercial banking activities — was not obviously true. As economist Robert Litan writes, “the underwriting of corporate securities probably involves less risk than extending and holding loans.” That’s because underwriting risk typically only lasts a few days and involves assets that are more liquid than a standard loan, which can stay on a bank’s books for years and be difficult to sell.

Certainly some activities of securities affiliates were riskier than some activities of traditional commercial banks. But it doesn’t follow that a commercial bank that engages in securities activities via its affiliate is taking on more risk overall. That’s because it is also gaining the benefits of diversification.

Diversification reduces risk. A single bond may be less risky than any given stock, yet a diversified portfolio of stocks can be less risky than the single bond. Similarly, even if a commercial bank that accepts deposits and makes loans enjoys less risk than an investment bank, that doesn’t imply that the commercial bank increases its overall risk by taking on investment banking activities. On the contrary, it is entirely possible for the risk-reducing features of diversification to outweigh the additional risk.
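
A toy numerical illustration of that diversification point (made-up return distributions, not historical data, with Python/NumPy used only for the arithmetic): adding a riskier activity in a modest proportion can leave the combined operation about as volatile as, or even less volatile than, lending alone, so long as the two income streams don’t move in lockstep.

```python
# Made-up return distributions, for illustration only. The "lending" stream is
# low-risk, the "underwriting" stream is riskier, and the two are independent.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

lending = rng.normal(0.04, 0.05, n)        # lower-risk commercial-banking returns
underwriting = rng.normal(0.06, 0.10, n)   # higher-risk securities-affiliate returns

combined = 0.8 * lending + 0.2 * underwriting   # securities kept to a modest share of the business

print(f"lending-only volatility:  {lending.std():.4f}")       # about 0.050
print(f"underwriting volatility:  {underwriting.std():.4f}")  # about 0.100
print(f"combined volatility:      {combined.std():.4f}")      # about 0.045, below lending alone
```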

Apparently, this was true of most banks with securities affiliates in the lead-up to the Great Depression. The best analysis of the pre-1933 period, by economist Eugene White, finds that banks with securities affiliates were more stable than those without them:

One of the most convincing pieces of evidence that the union of commercial and investment banking posed no threat to parent banks is the significantly higher survival rate of banks with securities operations during the massive bank failures of 1930–1933. While 26.3% of all national banks failed during this period, only 6.5% of the 62 banks which had affiliates in 1929 and 7.6% of the 145 banks which conducted large operations through their bond departments closed their doors.

This suggests that, by limiting banks’ ability to diversify their activities, Glass-Steagall made banks more risky. This risk would become manifest later in the century when commercial banks increasingly found themselves unable to compete with foreign universal banks. (As for the claim that the repeal of Glass-Steagall in 1999 contributed to the 2008 financial crisis, I’ll address that in the next post.)

Deposit Insurance and the Problem of Moral Hazard

The proximate cause of the Great Depression was the wave of bank failures that took place in the early 1930s. Federal deposit insurance was touted as a way to stop bank runs, protecting depositors and shielding sound but illiquid banks from the so-called contagion effects of bank failures.

But why was deposit insurance seen as the solution? Canada, as I’ve noted, did not experience a single bank failure during the Depression, even though it lacked deposit insurance. U.S. banks were unstable because, unlike Canadian banks, they could not branch, a fact that was widely recognized at the time.

And deposit insurance did not exactly have a great record. It had been tried at the state level for more than a hundred years, and every deposit insurance scheme that looked anything like the system eventually adopted under Glass-Steagall ended in failure.

The obvious solution to banking instability would have been to eliminate branch banking restrictions, allowing banks to consolidate and diversify geographically. But there were pressure groups who wanted to protect unit banking and who thereby benefited from deposit insurance. As Representative Henry Steagall, the politician who was the driving force behind deposit insurance, admitted, “This bill will preserve independent dual banking [i.e., unit banking] in the United States . . . that is what the bill is intended to do.”

What were the effects? As is so often the case in the history of finance, government support for the industry creates problems that are used to justify government control of the industry.

Deposit insurance encourages risk-taking. Because of limited liability, bank owners are always incentivized to take risks, since they enjoy unlimited upside gains and are insulated from the downside: their stock can become worthless, but they aren’t personally liable for the business’s debts. (It’s worth noting that prior to 1933, U.S. bank owners faced double liability: if their bank went out of business, they could be required to pay up to two times their initial investment to reimburse depositors.) Depositors act as a counterweight: they are risk averse and will flee imprudent banks.

Deposit insurance reduces that counterweight by introducing moral hazard into the banking system. “Moral hazard” refers to the fact that when risks are insured against, people take more risks because they bear a smaller cost if things go wrong. In the case of deposit insurance, depositors are incentivized to patronize the bank that offers the highest interest rate, regardless of how much risk it is taking. As economist Richard Salsman puts it, “Deposit insurance was established in order to avert future bank runs. But its history has demonstrated a singular inducement to bankers to become reckless and pay excess yields, while encouraging depositors to run to bad banks instead of away from them.” If things go bad, after all, the depositors will be bailed out — at least up to the cap set by the FDIC, a cap that has ballooned over time from $2,500 in 1934 (more than $40,000 in 2008 dollars) to $250,000 in 2008.

The moral hazard introduced by deposit insurance was particularly intense given the scheme adopted by the FDIC. In normal insurance plans, such as car insurance or life insurance, if you are riskier you pay more for your insurance. But until a 1991 rule change, the FDIC charged banks a flat rate based on the size of their deposits. This meant that riskier banks were effectively being subsidized by prudent banks.
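
A hypothetical example of that subsidy (the rates below are invented for illustration, not the FDIC’s actual schedule): under a flat premium tied only to deposit size, a prudent bank and a reckless bank of the same size pay the same, even though the reckless one exposes the insurance fund to far larger expected losses.

```python
# Hypothetical rates, for illustration only: contrast a flat-rate premium
# (based purely on deposit size, as under the pre-1991 scheme described above)
# with a premium scaled to each bank's own expected losses.

def flat_premium(deposits, rate=0.0008):
    # Every bank pays the same rate, however risky its assets.
    return deposits * rate

def risk_based_premium(deposits, expected_loss_rate):
    # A premium that scales with the bank's expected losses (hypothetical).
    return deposits * expected_loss_rate

banks = {
    "prudent bank":  {"deposits": 100_000_000, "expected_loss_rate": 0.0004},
    "reckless bank": {"deposits": 100_000_000, "expected_loss_rate": 0.0020},
}

for name, b in banks.items():
    print(f"{name}: flat premium = ${flat_premium(b['deposits']):,.0f}, "
          f"risk-based premium = ${risk_based_premium(b['deposits'], b['expected_loss_rate']):,.0f}")
# Under the flat rate both pay $80,000; priced to risk, the reckless bank would
# pay $200,000 and the prudent one $40,000. The gap is the prudent bank's subsidy.
```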

The government was not blind to this moral hazard problem. FDR had initially opposed deposit insurance on the grounds that, as he put it in a letter to the New York Sun in 1932:

It would lead to laxity in bank management and carelessness on the part of both banker and depositor. I believe that it would be an impossible drain on the Federal Treasury to make good any such guarantee. For a number of reasons of sound government finance, such plan would be quite dangerous.

(There’s no evidence FDR ever changed his mind on this point: deposit insurance made it into law because the president saw no other way to get his banking bill passed.)

In order to deal with the moral hazard problem created by deposit insurance, the government sought to limit risk-taking through command and control regulation. Discussing Glass-Steagall, economist Gerald O’Driscoll writes:

Among other things, the act prevented banks from being affiliated with any firm engaged in the securities business; established limits on loans made by banks to affiliates, including holding company affiliates; prohibited the payment of interest on demand accounts; and empowered the Federal Reserve Board to regulate interest rates paid on savings and time deposits. These regulations were intended to provide for the safety and soundness of the banking system.

However, these and other regulations meant to address the risks created by deposit insurance would fail to restrain government-encouraged risk-taking by banks and actually create even greater problems in the future. I’ll be discussing those problems in future posts. But it’s worth noting here that it was deposit insurance that set the stage for the doctrine that would eventually become known as “too big to fail.”

The Origins of “Too Big to Fail”

Businesses fail all the time and life goes on. What’s so different about financial institutions? It goes back to the peculiar nature of their business model; namely, even healthy financial institutions are typically illiquid. In industry parlance, banks borrow short and lend long. That is, they take in money from depositors who can draw down their accounts at any time and they lend those funds to business and consumer borrowers who repay their loans over a longer time horizon.

It’s a brilliant system in that it dramatically increases the financial capital available in the economy without forcing depositors to tie up their money in long-term investments. But it also carries with it a vulnerability: a healthy bank can fail if too many of its depositors demand their money back at once.

Most people — today and in the past — have believed that banking failures are “contagious”: a run on an insolvent bank can lead depositors at healthy banks to fear their money isn’t safe, setting off a cascade of bank failures and the collapse of the financial system.

Historically, this was seldom a genuine problem in systems that approximated free banking: solvent banks rarely suffered bank runs as the result of runs on insolvent banks. And financiers had developed effective private mechanisms, such as last-resort lending by clearinghouses, for dealing with widespread panics when they did occur. Nevertheless, concern over the contagion effects of bank failures has played an important role in justifying the expansion of government control over banking.

One solution to the problem of contagion was for the government to institute central banks, which would act as a lender of last resort. The idea, as formulated by Walter Bagehot in his famous 1873 work Lombard Street, was that a central bank’s role in a crisis should be to lend to solvent banks on good collateral at high interest rates.

But during the 1930s, the Federal Reserve didn’t perform this function. As Norbert Michel points out, “In 1929, the Federal Reserve Board prohibited the extension of credit to any member bank that it suspected of stock market lending, a decision that ultimately led to a 33 percent decline in the economy’s stock of money.” Yet instead of insisting that the central bank do better, politicians decided that additional regulations were needed to address the problem.

This led to the creation of deposit insurance. Now, instead of propping up solvent but illiquid institutions, the FDIC would try to prevent runs by promising to bail out depositors (up to a legally defined limit) even of insolvent banks.

But now regulators started to see contagion lurking around every corner, and came to believe that large financial institutions could not be allowed to fail lest that lead to the failure of other banking institutions tied to them in some way, thus setting off a chain of failures that could bring down the system. Thus was born the doctrine of “too big to fail.”

Actually, that name is misleading. A “too big to fail” institution can be allowed to fail in the sense that the company’s shareholders can be wiped out. What the government doesn’t let happen to such companies is for their debt holders (including depositors) to lose money: they are made whole.

Under Section 13(c) of the Federal Deposit Insurance Act of 1950, the FDIC was empowered to bail out a bank “when in the opinion of the Board of Directors the continued operation of such bank is essential to provide adequate banking service in the community.” It would first use that authority in 1971 to save Boston’s Unity Bank, but such bailouts would quickly become the norm, with the major turning point being the bailout of Continental Illinois in 1984.

As a result of “too big to fail,” much of the remaining debt holder-driven discipline was eliminated from the system. Thanks to the moral hazard created by the government’s deposit insurance and “too big to fail” subsidies, financial institutions were able to grow larger, more leveraged, and more reckless than ever before, creating just the sort of systemic risk that deposit insurance was supposed to prevent.

The bottom line is that Glass-Steagall failed on two counts: it did not fix the problems that had led to the Great Depression and it created new problems that would in time contribute to further crises.

THE MYTH OF BANKING DEREGULATION

Myth: Finance was deregulated during the 1980s and 1990s, laying the groundwork for the 2008 financial crisis.

Reality: Although some financial regulations were rolled back during the late 20th century, the overall trend was toward increased government control.

According to many commentators, the New Deal regulatory regime led to the longest period of banking stability in U.S. history, but that regime was destroyed by free market ideologues who, during the late 20th century, oversaw a radical deregulation of the financial industry. This, they conclude, laid the groundwork for the 2008 financial crisis.

But while some restrictions on finance were lifted during this period, other controls were added — and the subsidization of finance that drained the system of market discipline only increased. As we entered the 21st century, our financial system was not a free market but a Frankenstein monster: large and imposing but inflexible and unstable.

The Collapse of the New Deal Regulatory Regime and the Re-Regulatory Response

The banking system was in many respects fairly stable in the decades following the New Deal, with far fewer bank failures than in the past.

By far the most important factor in postwar stability was not New Deal financial regulations, however, but the strength of the overall economy from the late 1940s into the 1960s, a period when interest rates were relatively stable, recessions were mild, and growth and employment were high.

Part of the credit for this stability goes to monetary policy. Although the classical gold standard that had achieved unrivaled monetary stability during the late 19th century had fallen apart during World War I, the Bretton Woods agreement struck in the aftermath of World War II retained some link between national currencies and gold, limiting the government’s power to meddle with money. According to economist Judy Shelton:

[T]here can be little question that the sound money environment that reigned in the postwar years contributed to the impressive economic performance of both the victors and the vanquished and enabled the world to begin reconstructing an industrial base that would raise living standards to new heights for the generations that followed.

This would change as an increasingly expansive and expensive U.S. government cut its remaining ties to gold in 1971. The volatile inflation and interest rates that followed would throw the financial system into disarray, revealing the hidden weaknesses created by the New Deal regulatory regime. The failure of the New Deal regime would become most clear during the Savings & Loan crisis.

The New Deal had divided up the financial industry into highly regimented, tightly controlled silos. Insurance companies, investment banks, commercial banks, and Savings & Loans (or thrifts, as they were often called) all operated in their own universes, free from outside competition. The players in each sub-industry faced their own unique set of restrictions as well as their own government subsidies and privileges.

Thrifts were limited by the government almost exclusively to accepting deposits and making loans to homebuyers. In exchange for promoting home ownership, they were given special privileges by the government, including protection from competition and the ability to pay a slightly higher interest rate on their deposits than traditional banks. It was a simple business model best summed up by the famous 3–6–3 rule: borrow at 3 percent, lend at 6 percent, and be on the golf course by 3.

But this setup made thrifts enormously vulnerable to interest rate risk. They were making long-term loans — often 30 years — at fixed interest rates, yet were borrowing short-term via savings accounts. What would happen if depositors could suddenly get a higher return elsewhere, say by parking their savings in one of the new money market accounts? What would happen if inflation rose and their savings actually began losing purchasing power? Depositors might flee, depriving the thrifts of funding. Thrifts, meanwhile, would have their hands tied: Regulation Q set a cap on the interest rate they could pay on deposits. And even if their hands hadn’t been tied by Regulation Q, paying higher interest rates would have caused thrifts to lose money on their existing loans: they could end up paying out 10 percent or more in interest to their depositors while receiving only 6 percent in interest payments from the loans already on their books.
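To make the squeeze concrete, here is a minimal sketch in Python with purely illustrative numbers (the $100 million loan book and the specific rates are hypothetical): a thrift earning a fixed 6 percent on its old loans does fine while it pays 3 percent on deposits, but bleeds money once it must pay 10 percent to keep depositors from leaving.

    # The thrift squeeze in miniature: long-term loans locked at a fixed rate,
    # deposits repriced at today's market rate. Illustrative numbers only.

    def annual_net_interest(assets, loan_rate, deposit_rate):
        # Interest earned on old fixed-rate loans minus interest paid on deposits.
        return assets * (loan_rate - deposit_rate)

    book = 100_000_000  # a hypothetical $100M book of mortgages written at 6%
    for deposit_rate in (0.03, 0.06, 0.10):
        net = annual_net_interest(book, 0.06, deposit_rate)
        print(f"deposits at {deposit_rate:.0%}: net interest {net:+,.0f} per year")
    # deposits at 3%: +3,000,000; at 6%: break even; at 10%: -4,000,000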

All of this is exactly what happened when, starting in the late 1960s, the Federal Reserve began expanding the money supply to help the government finance a burgeoning welfare state and the Vietnam War. By the late 1970s, inflation had reached double digits.

As interest rates rose, thrifts began to fail in large numbers, but rather than unwind them, the government tried to save them. It did so in part through a program of partial deregulation. For example, the government allowed thrifts to diversify their assets, e.g., by moving into commercial real estate or purchasing high-yield bonds, and it eliminated Regulation Q’s cap on deposit interest rates. Meanwhile, the government also dramatically expanded its deposit insurance subsidy for banks, including thrifts, increasing coverage in 1980 from $40,000 to $100,000.

The government’s program was disastrous — but not because of any problem inherent in deregulation. Had the government pursued a genuine free-market policy by allowing failed institutions to go out of business, ending the moral hazard created by deposit insurance, and then allowing the remaining thrifts to enter new lines of business and pay market interest rates, there still would have been pain and the process would have been messy, but the financial system would have moved in a more sound, more stable direction. Instead, the government created one of the greatest catastrophes in U.S. banking history by propping up and subsidizing insolvent “zombie banks,” giving them the power and incentive to gamble with taxpayers’ money.

To say a thrift is insolvent is to say that its capital has been wiped out. The bank no longer has any skin in the game. That creates a perverse set of incentives. It pays the thrift’s owners to make huge gambles, which, if they pay off, will make them rich, and if they don’t, will leave them no worse off. Deposit insurance, meanwhile, gives them virtually unlimited access to capital, since they can promise to pay high interest rates on deposits to depositors who don’t have to worry about the risks the bank is taking.

Well, the thrifts that took huge gambles generally ended up taking huge losses, destroying far more wealth than if they had simply been wound down when they reached insolvency. This was not an indictment of deregulation. It was an indictment of re-regulation — of regulatory reform that removed or changed some controls while retaining and expanding other controls and subsidies.

There are two lessons here. The first is that the New Deal regulatory regime could not last. It was (partially) dismantled because it collapsed under the pressure of bad monetary policy from the Federal Reserve and the perverse constraints and incentives imposed by regulators. (Technological innovations in the financial industry and other economic forces, such as increased global competition, also played a role.)

The second lesson is that if we want to evaluate the conventional narrative about financial deregulation, we have to investigate more carefully which regulations and subsidies were repealed, which regulations and subsidies were changed (and in what way), which regulations and subsidies weren’t changed or repealed, and what the consequences were. To speak simply of “deregulation” blinds us to the fact that in many respects financial intervention was increasing during this period, and that even when some regulations were altered or rescinded, the system itself was dominated by government distortions and controls.

The Big Picture

At the time of the 2008 financial crisis, there were — in addition to hundreds of state-level regulators — seven federal regulators overseeing the financial industry:

  • Federal Reserve
  • Office of the Comptroller of the Currency
  • Office of Thrift Supervision
  • Securities and Exchange Commission
  • Federal Deposit Insurance Corporation
  • Commodity Futures Trading Commission
  • National Credit Union Administration

No matter what metric you look at, it’s hard to find any evidence that financial regulation by these bodies was decreasing overall. According to a study from the Mercatus Center, outlays for banking and financial regulation grew from $190 million in 1960 to $1.9 billion in 2000. By 2008 that number had reached $2.3 billion. (All in constant 2000 dollars.) In the years leading up to the financial crisis, regulatory staff levels mostly rose, budgets increased, and the annual number of proposed new rules went up. There were also major expansions of government regulation of the financial industry, including Sarbanes-Oxley, the Privacy Act, and the Patriot Act.

None of this comes close to conveying the scale of industry regulation, however. The simple fact is that there was virtually nothing a financial firm could do that wasn’t overseen and controlled by government regulators.

There were, to be sure, some cases of genuine deregulation, but on the whole these were undeniably positive policies, such as the elimination of Regulation Q and other price controls and the removal of branch banking restrictions. And typically the bills that instituted these policies expanded regulation in other ways.

But consider what didn’t change. As we’ve seen, the major sources of instability in the U.S. financial system were branch banking restrictions, the creation of the Federal Reserve with its power to control the monetary system, and the creation of deposit insurance and the “too big to fail” doctrine, which encouraged risky behavior by banks.

Yet it was only the first of those problems that was addressed during the era of deregulation, when the Riegle-Neal Interstate Banking and Branching Efficiency Act eliminated restrictions on branching in 1994. The Federal Reserve was left untouched, and the scope of deposit insurance expanded: the government raised the cap on insured deposits to $100,000, though in reality it effectively insured most deposits through its policy of bailing out the creditors of institutions seen as “too big to fail.”

What, then, do people have in mind when they say that deregulation led to the Great Recession? Advocates of this view generally point to two examples: the “repeal” of Glass-Steagall, and the failure of the government to regulate derivatives.

Did the “Repeal” of Glass-Steagall Make the Banking System More Fragile?

When people say that Glass-Steagall was repealed, they’re referring to the Gramm-Leach-Bliley Act of 1999 (GLBA). The GLBA did not actually repeal Glass-Steagall; it repealed only two of its provisions, Section 20 and Section 32. There was nothing banks could do after the repeal that they couldn’t do before, save for one thing: they could now be affiliated with securities firms. Under the new law, a single holding company could provide banking, securities, and insurance services, increasing competition and allowing financial institutions to diversify.

Why this change? There were numerous factors. First of all, the barriers between commercial and investment banks had been eroding, due in part to innovations in the financial industry, such as money market mutual funds, which allowed investment banks to provide checking deposit-like services. The GLBA didn’t so much change what was going on in financial markets as recognize that the strict separation between commercial and investment banking was no longer tenable.

At a theoretical level, the case for Glass-Steagall had always been tenuous, and this had been reinforced by more recent scholarship that argued that the Great Depression was not in any significant way the result of banks dealing in securities.

Even more compelling, virtually no other country separated commercial and investment banking activities. In fact, as the authors of a 2000 report on the GLBA noted, “compared with other countries, U.S. law still grants fewer powers to banks and their subsidiaries than to financial holding companies, and still largely prohibits the mixing of banking and commerce.” The authors go on to observe that less restrictive banking laws were associated with greater banking stability, not less.

The question, then, is whether the GLBA’s marginal increase in banking freedom played a significant role in the financial crisis. Advocates of this thesis claim that it allowed the risk-taking ethos of investment banks to pollute the culture of commercial banking. But here are the facts:

  • The two major firms that failed during the crisis, Bear Stearns and Lehman Brothers, were pure investment banks, unaffiliated with depository institutions. Merrill Lynch, which came close to failing, wasn’t affiliated with a commercial bank either. Their problems were not caused by any affiliation with commercial banking, but by their traditional trading activities.
  • On the whole, institutions that combined investment banking and commercial banking did better during the crisis than banks that didn’t.
  • Glass-Steagall had stopped commercial banks from underwriting and dealing in securities, but it hadn’t barred them from investing in things like mortgage-backed securities or collateralized debt obligations: to the extent banks suffered losses on those instruments during the crisis, Glass-Steagall would not have prevented them.

In light of such facts, even Barack Obama acknowledged that “there is not evidence that having Glass-Steagall in place would somehow change the dynamic.”

Finally, it is important to emphasize that the GLBA was not a deregulatory act, strictly speaking. As with much else that went on during the era, it was an instance of re-regulation. The government still dictated what financial institutions could and couldn’t do down to the smallest detail. Indeed, aside from repealing two sections of Glass-Steagall, the GLBA expanded banking subsidies and regulations, including new regulations on thrifts, new privacy and disclosure rules, and new Community Reinvestment Act requirements for banks.

Were Derivatives Unregulated?

The role of derivatives in fostering the financial crisis has been wildly overstated. Take the credit default swaps (CDSs) that contributed to the downfall of insurance giant AIG. In the simplest terms, a CDS is a form of insurance. If I make a loan to Acme Corp., I can buy a CDS from a CDS seller that pays me if Acme defaults on its obligations. All I’ve done is transfer an existing risk — Acme’s default on a debt — from me to the CDS seller.
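For readers who like to see the mechanics spelled out, here is a deliberately stripped-down sketch of the cash flows in Python. The names and numbers are illustrative, and real CDS contracts involve quarterly premiums, accrued payments, and auction-determined recovery values; the point is only that the buyer pays a premium and the seller absorbs the loss if the borrower defaults.

    def cds_cash_flows(notional, annual_spread, years, default_year=None, recovery_rate=0.4):
        # The protection buyer's yearly net cash flows, hugely simplified.
        flows = []
        for year in range(1, years + 1):
            if default_year is not None and year == default_year:
                # On default, the seller covers the loss on the underlying debt.
                flows.append(notional * (1 - recovery_rate))
                break
            # Otherwise the buyer pays the annual premium to the seller.
            flows.append(-notional * annual_spread)
        return flows

    # Protection on a hypothetical $10M loan to "Acme Corp." at 100 basis points a year:
    print(cds_cash_flows(10_000_000, 0.01, 5))                  # no default: five premiums
    print(cds_cash_flows(10_000_000, 0.01, 5, default_year=3))  # default in year 3: payout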

On the whole, CDSs and other derivatives didn’t create new risks: they mainly transferred risks among financial players, from those who didn’t want to bear them to those who did. True, these instruments were used by some firms, not just to hedge existing risks, but to take on new risks in the belief their bets would pay off — and the firms that made bad bets should have suffered the consequences. But focusing on derivatives detracts from the real story of the financial crisis.

At the most basic level, the financial crisis resulted from financial institutions using enormous leverage to buy mortgage-backed securities that turned out to be far riskier than most people assumed. Take CDSs out of the equation, and the crisis still would have happened. The details would have played out differently, but the bottom line would have been the same.

That said, it simply wasn’t true that derivatives were “unregulated.” As Heritage’s Norbert Michel points out, “Federal banking regulators, including the Federal Reserve and the OCC [Office of the Comptroller of the Currency], constantly monitor banks’ financial condition, including the banks’ swaps exposure.” In particular, bank capital requirements explicitly took swaps into account. (To the extent CDSs were a problem, it was a problem encouraged by regulation, since, under Basel I capital rules, CDSs allowed banks to hold less capital.)

When people say that derivatives were unregulated, they are typically referring to the 2000 Commodity Futures Modernization Act (CFMA). But the CFMA didn’t prevent regulation of CDSs. It merely prevented the Commodity Futures Trading Commission from regulating them and, most likely, from treating them as futures contracts that had to be traded on an exchange. (For various technical reasons, CDSs generally don’t make sense to trade on an exchange rather than over the counter.)

It is possible that different regulations or behavior by regulators might have prevented the financial crisis. Certainly it is easy to concoct such scenarios after the fact. But the “deregulation” story pretends that regulators were eager to step in and prevent a crisis and simply lacked the power. That view is completely without merit. The government had all the power it needed to control the financial industry, and such deregulation as did take place was largely (though not universally) good.

The real problem, as we’ll see, was that government intervention had created an unstable system that encouraged the bad decisions that led to the crisis.

FREE MARKETS DIDN’T CREATE THE GREAT RECESSION

Myth: The Great Recession was caused by free-market policies that led to irrational risk taking on Wall Street.

Reality: The Great Recession could not have happened without the vast web of government subsidies and controls that distorted financial markets.

As with the Great Depression, the causes of the Great Recession remain controversial, even among free-market-leaning economists. What we know for sure is that the free market can’t be blamed, because there was no free market in finance: finance (including the financial side of the housing industry) was one of the most regulated industries in the economy. And we also know that, absent some of those regulations, the crisis could not have occurred.

What Everyone Agrees On

The basic facts aren’t in dispute. During the early to mid 2000s, housing prices soared. At the same time, lending standards started to decline as the government encouraged subprime lending (i.e., lending to borrowers who had a spotty credit history and found it difficult to get conventional mortgages), and as businesses saw profit opportunities in extending loans to riskier borrowers and in offering riskier kinds of loans.

Increasingly, mortgage originators did not keep the loans they made on their own books, but sold them off to Fannie Mae, Freddie Mac, investment banks, or other financial firms, which bundled these loans into mortgage-backed securities (MBSs) and other financial instruments — instruments often rated super-safe by the three government-approved credit ratings agencies — and sold them to investors.

Financial institutions of all kinds invested heavily in housing, often financing these investments with enormous leverage (i.e., far more debt than equity). These investments went bad when housing prices began to decline and the underlying loans began to default at higher rates than expected.

As the value of MBSs and other mortgage-related instruments fell, the financial institutions that held them started to suffer losses, setting off a chain of failures and bailouts by the federal government, and ultimately causing credit markets to freeze up, threatening the entire financial system.

On these points, there is agreement. But why did this happen? What led so many institutions to invest so heavily in housing? Why did they make these investments using extreme amounts of leverage — and why were they able to take on so much debt in the first place? What led credit markets to break down in 2008? And what led the problems in housing and finance to spill over into the rest of the economy, turning a financial crisis into the Great Recession?

As with our discussion of the Great Depression, this is not intended to be a definitive, blow-by-blow account of the crisis. The goal is to lay to rest the myth that our financial system was anything close to free, and to see some of the ways in which government intervention played a role in creating the Great Recession.

The Federal Reserve Makes The Housing Boom Possible

We typically speak of central bankers controlling interest rates. More precisely, they influence interest rates by expanding or contracting the money supply. Recall from our discussion of the Great Depression that central bankers can make two crucial mistakes when it comes to monetary policy: they can be too loose (leading to price inflation or credit booms) or they can be too tight (leading to deflationary contractions).

The best explanation of the root cause of the housing boom is that, during the early 2000s, the Federal Reserve’s monetary policy was too loose, setting off — or at least dramatically magnifying — a boom in housing.

There are various metrics you can look at to assess whether monetary policy is too tight or too expansionary, but they all point in the same direction during this period. Take interest rates. As economist Lawrence H. White points out:

The Fed repeatedly lowered its target for the federal funds interest rate until it reached a record low. The rate began 2001 at 6.25 percent and ended the year at 1.75 percent. It was reduced further in 2002 and 2003; in mid-2003, it reached a then-record low of 1 percent, where it stayed for one year. The real Fed funds rate was negative — meaning that nominal rates were lower than the contemporary rate of inflation — for more than three years. In purchasing power terms, during that period a borrower was not paying, but rather gaining, in proportion to what he borrowed.

As White and others have argued, the Fed’s easy credit found its way (mostly) into the residential home market, where it had two major effects.

First, it helped drive up housing prices, as lower interest rates made buying a home more attractive. A $150,000 mortgage would have cost $2,400 a month at the 18 percent interest rates borrowers faced in 1980. But at the 6 percent rate borrowers could often get during the 2000s, that fell to a mere $1,050 a month. Low interest rates, then, made it possible for more people to buy homes, to buy bigger homes, and to speculate in real estate, helping spark the housing boom.
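For anyone who wants to check the arithmetic, the standard fixed-rate amortization formula is easy to run. The sketch below assumes a bare 30-year loan with no taxes, insurance, or points, so it produces somewhat lower figures than the round numbers above, which presumably fold in those extra costs; what matters is the gap between an 18 percent world and a 6 percent world.

    def monthly_payment(principal, annual_rate, years=30):
        # Standard amortization formula: P * r / (1 - (1 + r)**-n),
        # where r is the monthly rate and n the number of monthly payments.
        r = annual_rate / 12
        n = years * 12
        return principal * r / (1 - (1 + r) ** -n)

    print(round(monthly_payment(150_000, 0.18)))  # 18 percent, 1980-style rates
    print(round(monthly_payment(150_000, 0.06)))  # 6 percent, 2000s-style rates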

Second, the Fed’s policies encouraged riskier lending practices. Partly this was a side-effect of the rising price of housing. As long as home prices are rising, the risk that a borrower will default on his mortgage is low, because he can always sell the house rather than quit paying down the debt. But if housing prices stop rising or even fall? Then the home might end up being worth less than what is owed on the mortgage and it can make economic sense for the underwater home buyer to walk away from the home.

Fed policy also encouraged more risky kinds of loans. One obvious example was the proliferation of adjustable-rate mortgages (ARMs), where borrowers took on the risk that interest rates would rise. ARMs dominated subprime and other non-prime lending by 2006 as borrowers sought to take advantage of the Fed’s ultra-low short-term interest rates — a trend encouraged by Fed chairman Alan Greenspan. The net result was that when interest rates did eventually rise, defaults went, well, through the roof.

And the riskiest kinds of loans — no-money-down, interest-only adjustable-rate mortgages, low-doc and no-doc loans? All of them seemed to make sense only because of the boom in housing prices.

Absent cheap money from the Fed, there would have been no crisis. The groundwork for 2008 was laid in 1914.

What Role Did Government Housing Policy Play?

During the 1990s and 2000s, the government attempted to increase home ownership, especially by subprime borrowers. Through the Community Reinvestment Act, tax incentives, Fannie Mae and Freddie Mac, and other channels, the government actively sought to put more Americans in homes.

But what role did the government’s housing crusade play in creating the Great Recession? There seem to be at least two important roles.

First, it contributed to the Fed’s easy money becoming concentrated in the housing market. In 1997, the government passed the Taxpayer Relief Act, which eliminated capital gains taxes on home sales (up to $500,000 for a family and $250,000 for an individual). According to economists Steven Gjerstad and Vernon Smith, “the 1997 law, which favored houses over all other investments, would have naturally led more capital to flow into the housing market, causing an increased demand — and a takeoff in expectations of further increases in housing prices.” By the time the Federal Reserve started easing credit in 2001, they argue, the housing market was the most rapidly expanding part of the economy and became a magnet attracting the Fed’s new money.

Second, government housing policy encouraged the lowering of lending standards that further inflated the housing bubble. Two key forces here were the Community Reinvestment Act (CRA) and especially the Government-Sponsored Enterprises (GSEs), Fannie Mae and Freddie Mac, which were the main conduits through which the government pursued its affordable housing agenda.

Starting in 1992, Fannie and Freddie were required to help the government meet its affordable housing goals by repurchasing mortgages made to lower income borrowers. Over the next decade and a half, the Clinton and Bush administrations would increase the GSEs’ affordable housing quotas, which over time forced them to lower their underwriting standards by buying riskier and riskier mortgages. American Enterprise Institute scholar Peter J. Wallison sums up the role this would ultimately play in the crisis:

By 2008, before the financial crisis, there were 55 million mortgages in the US. Of these, 31 million were subprime or otherwise risky. And of this 31 million, 76% were on the books of government agencies, primarily Fannie and Freddie. This shows where the demand for these mortgages actually came from, and it wasn’t the private sector. When the great housing bubble (also created by the government policies) began to deflate in 2007 and 2008, these weak mortgages defaulted in unprecedented numbers, causing the insolvency of Fannie and Freddie, the weakening of banks and other financial institutions, and ultimately the financial crisis.

To be sure, lending standards would decline industry-wide during the 2000s. In large part this was because other financial institutions could not compete with the GSEs without dropping their own lending standards. And although the government was not the sole force driving increased risk-taking in housing, it was the government that first insisted it was virtuous to exercise less caution if it meant getting more people into homes, and that continued to approve of declining lending standards throughout the housing boom. It started the trend of lower standards, which only later spread to the rest of the market.

Had the government not encouraged the imprudent lending that defined the crisis, it is unlikely the crisis would have occurred.

Government Policy and the Financial Sector

The Fed’s monetary policy and the government’s housing policy helped ensure that there would be a massive malinvestment in real estate. But why did those risks become concentrated and magnified in the financial sector?

The main transmission mechanism was securitization: MBSs and other derivatives moved mortgage risk from mortgage originators to large financial institutions such as Fannie Mae, Freddie Mac, and the big commercial and investment banks, as well as to institutional investors. Not only did these players make big bets on housing, they did so using enormous leverage — often 30 or 40 dollars of debt for every 1 dollar of equity by the eve of the crisis. (Fannie and Freddie were levered even more.)
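To see why that much leverage matters, here is a minimal sketch with illustrative numbers: with $1 of equity supporting $30 or $40 of debt, an asset decline of just a few percent is enough to wipe out the equity entirely.

    def decline_to_insolvency(debt_per_dollar_of_equity):
        # With $1 of equity and D of debt, assets are D + 1; the equity is
        # exhausted once assets fall by 1 / (D + 1).
        return 1.0 / (debt_per_dollar_of_equity + 1.0)

    for leverage in (10, 30, 40):
        drop = decline_to_insolvency(leverage)
        print(f"{leverage}-to-1 debt-to-equity: insolvent after a {drop:.1%} asset decline")
    # 10-to-1: ~9.1%; 30-to-1: ~3.2%; 40-to-1: ~2.4%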

Why? Was it irrationality and greed run amok? Well, no. Although there was plenty of irrationality and greed, government interference in financial markets once again played a key role in what happened.

Specifically, there were at least three major forces at play: (1) the credit ratings agencies, (2) bank capital regulation, and (3) government-created moral hazard.

1. The Ratings Agencies

The conventional view is that financiers loaded up on mortgage derivatives because they placed the desire for riches above fear of risk. The truth is more complex. In large part the reason mortgage products became so popular was because they seemed relatively safe.

Why did they appear safe? One reason is that the credit ratings agencies tasked with evaluating credit instruments said they were safe.

The three credit ratings agencies in the lead-up to the crisis — Moody’s, Standard & Poor’s, and Fitch — were not free-market institutions. By the time of the crisis, they were the only institutions the government permitted to supply official ratings on securities. As political scientist Jeffrey Friedman notes, “A growing number of institutional investors, such as pension funds, insurance companies, and banks, were prohibited from buying bonds that had not been rated ‘investment grade’ (BBB- or higher) by these firms, and many were legally restricted to buying only the highest-rated (AAA) securities.”

Because no one could compete with the ratings agencies, they had virtually no incentive to assess risks accurately. Thanks to bad incentives, incompetence, and honest error, the agencies stamped many mortgage derivatives AAA — as safe as ExxonMobil’s and Berkshire Hathaway’s debt. These products thereby appeared to be safe but relatively high-yielding assets.

Did the buyers of mortgage-backed securities put stock in the quality of the ratings agencies’ assessments? Many did. Research from economist Manuel Adelino found that while investors did not rely on ratings agencies to assess the riskiness of most investments, they did take AAA ratings at face value. Anecdotal evidence backs Adelino up. For example, a New York Times article from 2008 reported:

When Moody’s began lowering the ratings of a wave of debt in July 2007, many investors were incredulous.

“If you can’t figure out the loss ahead of the fact, what’s the use of using your ratings?” asked an executive with Fortis Investments, a money management firm, in a July 2007 e-mail message to Moody’s. “You have legitimized these things, leading people into dangerous risk.”

But from another perspective, it hardly mattered whether anyone believed the ratings were accurate. The sheer fact that these instruments were rated AAA or AA gave financial institutions an incentive to load up on them, thanks to government-imposed capital regulations.

2. Bank-Capital Regulations

As we saw when we looked at the New Deal’s regulatory response to the Great Depression, at the same time that the government began subsidizing banks through federal deposit insurance it started regulating banks to limit the risk taking deposit insurance encouraged. In particular, the government sought to limit how leveraged banks could be through bank-capital regulations.

Bank capital is a bank’s cushion against risk. It’s made up of the cash a bank holds and the equity it uses to finance its activities, which can act as a shock absorber if its assets decline in value. The greater a bank’s capital, the more its assets can decline in value before the bank becomes insolvent. Prior to the FDIC, it wasn’t unusual for banks’ capital levels to hover around 25 percent. By the time of the 2008 financial crisis, bank capital levels were generally below 10 percent — sometimes well below 10 percent.

Bank-capital regulations forced banks to maintain a certain amount of capital. Until the 1980s, there were no worked-out standards governing capital regulation, but in 1988 the U.S. and other nations adopted the Basel Capital Accord, which became known as Basel I.

Basel is what’s known as risk-based capital regulation: the amount of capital a bank must hold is determined by the riskiness of its assets, so the riskier an asset, the more a bank has to finance it with equity capital rather than debt. Assets the Basel Committee on Banking Supervision regarded as riskless, such as cash and government bonds, required no capital at all. Assets judged most risky, such as commercial loans, had to be funded with at least 8 percent equity capital. Other assets fell somewhere between 0 and 8 percent.

What’s important for our story is that securities issued by “public-sector entities,” such as Fannie and Freddie, were considered half as risky as conventional home mortgages: a bank could dramatically reduce its capital requirements by buying mortgage-backed securities from Freddie Mac and Fannie Mae rather than making mortgage loans and holding them on its books. In 2001, the U.S. adopted the Recourse Rule, which meant that privately issued asset-backed securities rated AAA or AA were considered just as risky as securities issued by the GSEs.
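The arithmetic behind that incentive is simple: required capital equals the exposure times the 8 percent base charge times the asset’s risk weight. The sketch below uses the commonly cited Basel I and Recourse Rule buckets for illustration; the actual rules contained more categories and caveats.

    # Simplified Basel I / Recourse Rule risk-weight buckets (illustrative).
    RISK_WEIGHTS = {
        "cash_or_treasuries": 0.00,
        "gse_or_aaa_rated_mbs": 0.20,      # agency paper; AAA/AA private MBS after 2001
        "whole_residential_mortgage": 0.50,
        "commercial_loan": 1.00,
    }

    def required_capital(exposure, asset_type, base_charge=0.08):
        # Required capital = exposure x base charge x risk weight.
        return exposure * base_charge * RISK_WEIGHTS[asset_type]

    for asset in RISK_WEIGHTS:
        print(f"{asset}: ${required_capital(100_000_000, asset):,.0f} per $100M held")
    # Rated MBS tie up $1.6M of capital per $100M; whole mortgages $4M; commercial loans $8M.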

The net result was that banks were encouraged by government regulators to make big bets on mortgage derivatives rated AAA or AA by the ratings agencies.

3. Moral Hazard

As we’ve seen, many financial players really did believe that mortgage-related products were relatively safe. Government certainly encouraged that impression. The Federal Reserve was assuring markets that interest rates would remain low and that it would fight any significant declines in securities markets with expansionary monetary policy. The credit ratings agencies were stamping many mortgage derivatives AAA. Congress and the president were touting the health of the mortgage market, which was putting ever more Americans into homes.

Still, it is puzzling that more people weren’t worried. Contrary to the housing industry’s bromide that home prices never fall, there were examples of housing prices going down — even nationally, as during the Great Depression. There was also evidence that many of the loans underlying the mortgage instruments were of increasingly poor quality. And there were plenty of Cassandras who foresaw the problems to come.

Some market participants did understand how risky mortgage-related derivatives were, but were not overly concerned with those risks because they could pass them on to others. Mortgage originators, for instance, were incentivized to make bad loans because they could pawn off the loans to securitizers such as Fannie and Freddie.

But what about the people ultimately buying the mortgage securities? Why were they willing to knowingly take big risks? Part of the answer is that the moral hazard introduced through government policies including (but not limited to) “too big to fail” had convinced them that they would reap the rewards on the upside, yet would be protected on the downside thanks to government intervention. We’ve seen, after all, how the government had started bailing out financial institutions seen as “too big to fail” decades before the crisis.

Many people resist this hypothesis. It simply doesn’t seem plausible that investors were thinking to themselves, “This could easily blow up, but it’s okay, I’ll get bailed out.” But tweak that thought just a bit: “There’s some risk this will blow up, as there is with every financial investment, but there’s also a good probability the government will step in and shield me from most if not all of the losses.” How could that not influence investor decision-making?

And there is also another, more subtle effect of moral hazard to consider. Over the course of decades, the government had increasingly insulated debt holders from downside risk. Thanks to deposit insurance, “too big to fail,” and other government measures, debt holders simply weren’t hit over the head by the message that they could get wiped out if they weren’t careful.

More generally, the regulatory state had taught people that they need not exercise their own independent judgment about risk. Is your food safe? The FDA has seen to that. Is your airplane safe? The FAA has seen to that. Is your doctor competent? If he wasn’t, the government wouldn’t allow him to practice medicine. Is it surprising, then, that even many sophisticated investors thought they didn’t need to check the work of the ratings agencies?

To be clear, I don’t think government regulation fully explains the widespread failure to accurately assess the risks of mortgages. Part of it I chalk up to honest error. It’s easy to see the folly of people’s judgment with hindsight, but people aren’t making decisions with hindsight. I also think there are psychological reasons why many people are vulnerable to speculative bubbles. But moral hazard almost certainly played a role in reducing investors’ sensitivity to risk and in allowing many financial institutions to take on dangerous amounts of leverage.

The Federal Reserve Made Things Worse

Given the massive malinvestment in residential real estate, the declining lending standards, and the concentration of mortgage risks in the financial sector that took place during the 2000s, the bust was inevitable. But was the bust sufficient to explain the economy-wide recession that followed?

No doubt there was going to be a recession as the result of the crisis, but there is a compelling argument that the severity of the recession — the thing that made it the Great Recession — was causally tied to the government’s response. In particular, what turned a crisis into a catastrophe was overly tight monetary policy from the Federal Reserve in response to the crisis.

Tight money, recall, can lead to deflationary spirals: debtors have trouble repaying their debts, putting stress on financial institutions, and output and employment fall as people struggle to adjust to declining prices. The argument is that although the Federal Reserve started easing money in mid-2008, it did not do so nearly enough, leading to a monetary contraction and hence to the deflation that turned a financial crisis into the Great Recession.

Judging whether monetary policy is too tight isn’t straightforward. Typically people look to interest rates, but interest rates alone can be deceiving. Although the low interest rates of the early 2000s were associated with easy money, easy money can also lead to high interest rates, as it did in the late 1970s (or in Zimbabwe during its bout of hyperinflation).

But by looking at other, more revealing indicators, a number of economists have concluded that monetary policy tightened substantially during 2008–2009, leading to a decline in total spending in the economy and helping spread the pain in the housing and financial sectors to the rest of the economy.

The Ultimate Lesson

Ayn Rand often stressed that the U.S. isn’t and has never been a fully free, fully capitalist nation. Rather, it’s been a mixed economy, with elements of freedom and elements of control. This means that we cannot, as is so often done, automatically blame bad things on the free element and credit good things to the controlled element. As Rand explained:

When two opposite principles are operating in any issue, the scientific approach to their evaluation is to study their respective performances, trace their consequences in full, precise detail, and then pronounce judgment on their respective merits. In the case of a mixed economy, the first duty of any thinker or scholar is to study the historical record and to discover which developments were caused by the free enterprise of private individuals, by free production and trade in a free market — and which developments were caused by government intervention into the economy.

As we’ve seen, the field of finance has been dominated by government intervention since this country’s founding. In this series, I’ve tried to highlight some of the most important government subsidies and controls affecting the industry, and indicate how they were often responsible for the very problems they were supposedly created to solve.

If you examine the historical and economic evidence carefully, the conclusion that follows is clear: if we value economic stability, our top priority should be to liberate the field of finance from government support and government control.
