On the role of the state in economic crises

Essay written in response to the call for submissions launched by the International Policy and Leadership Institute (IPLI) in connection with the 2012 European Public Policy Conference, on the origins of economic crises.

When it became clear in 2008 that the bursting of the housing bubble in the United States was no longer a mere financial crisis and that the “real” economy would also be hit, many economists and columnists turned to the Great Depression in order to find similarities and, possibly, solutions. At first sight, the two crises did indeed seem to have a common origin: an oversized asset bubble — stocks in the 1920s, houses during the last decade — fed by an accommodative monetary policy and a poorly regulated banking sector that provided loans to virtually everyone, regardless of their solvency. Excess liquidity was then spent on bubbling markets in the firm belief that prices would rise endlessly, transforming assets into risk-free cash machines. As long as the illusion lasted, it was even possible to use these assets as collateral for new loans and add momentum to the vicious circle. In such a system, the intrinsic value of collateral matters very little, because the main engine of trade is individual expectations. If economic agents are convinced that prices will never go down and that they can easily find buyers for their assets, even tulip bulbs may become an object of speculation, as the Dutch experienced in the 1630s. A single bulb could then be worth ten years of a craftsman’s labour. Until the bubble burst, and a bulb was just a bulb again.

While the tulipomania example sounds absurd nowadays, the belief in brick and mortar remains, despite the latest crisis, very firmly anchored. Real estate, the argument goes, is a rock-solid investment because it has a physical reality, unlike stocks or other pieces of paper. It is also said to have a superior use value, since it meets one of the basic human needs, namely shelter. Last but not least, buying is held to be more profitable than renting a house, where money is basically “wasted” instead of helping to build a personal estate. The validity of all these arguments could be debated, yet they cannot on their own generate a bubble. Bankers may be greedy and financial markets irrational, but crises do not occur every day. Moreover, they do not always turn into an economic downturn. When they do, another factor must enter the game, one that is neither human nature — relatively stable, at least in the short term — nor market mechanisms — probably as old as bargaining.

Ineffective state responses…

Part of the answer lies in the state or, more precisely, in public policy choices. Interestingly enough, over the past four years, Western countries have radically changed their minds about the role of public authorities in tackling the financial and economic crisis. After Lehman Brothers’ collapse in September 2008, the United States and, to a lesser extent, European countries decided to resort to Keynesian economic policy instruments in order to mitigate the effects of the shock. By pumping money into the financial system through very low interest rates or “non-conventional” measures, central banks were to ensure that no credit crunch would happen and that interbank transactions remained smooth. At the same time, governments tolerated even larger public deficits to fund recovery plans and avoid a sharp rise in unemployment. These actions were taken on the premise that the crisis of 2008 was comparable to the initial crash of 1929 and that an active, counter-cyclical economic policy would be sufficient to absorb the blow until the economy recovered.

Such a parallel was ill-founded for two reasons. First, contrary to the 1920s, the world economy did not really face a situation of overproduction in 2008. Keynesian budgetary policies, largely based on a Marxian analysis of capitalism, were first and foremost designed to smooth boom-and-bust cycles, in which erratic investment and consumption decisions led either to a shortage of goods and services in the economy — with subsequent inflation risks — or to a superabundance, growing inventories and eventually cuts in production capacity combined with redundancies. Meanwhile, the international division of labour has evolved to the point where government-supported consumption no longer translates into increased demand for domestic firms, but into additional imports from low-wage countries like China, where most finished manufactured products are made. Second, the level of public debt was already relatively high in 2008, owing to the scissor effect of expanding military expenditure and tax cuts in the United States, and of unreformed welfare systems in Southern and Eastern Europe. Piling up even more deficits therefore brought Western economies to the limits of debt sustainability, a situation made worse by the crisis of confidence triggered by rating agencies’ failure to correctly assess the default risk of subprime mortgages.

In other words, the wrong diagnosis led to the wrong cure — even if monetary policy and government involvement in the banking sector at least prevented a complete collapse of the financial system — and when this became clear soon afterwards, public authorities radically changed course. Monetary policy, it is true, has remained largely untouched, owing to the lack of visible negative effects on the economy and to the persisting fragility of credit channels. However, all Western countries decided to withdraw their supportive budgetary policies and to follow instead the path of austerity. By a curious twist, in certain cases such as Greece, public spending cuts were sold not only as a way to restore public finances but also as a means to enhance competitiveness after years of relative decline vis-à-vis trading powers such as Germany. This is hardly a new therapy, and whatever results it brings about, it is above all a convenient alibi to conceal the fact that public deficits are only one side of the coin, private over-indebtedness being the other.

… to crises it has itself triggered

Here again, the comparison between 1929 and 2007/2008, despite some shortcomings, is very illustrative. The stock bubble that swelled until Black Thursday was fed by very cheap credit supplied by banks, in the absence of sound prudential regulation. Lessons were then drawn from the crisis, and new rules introduced. Under this framework, banks could not normally count on public bail-outs, and their ability to grant loans was limited by stringent solvency criteria applied to potential borrowers. Greedy or not, bankers therefore had no incentive to lend money to insolvent applicants, since they had no way to recover the likely losses. They were simply doing what their job is mainly about: collecting savings, evaluating default risks and setting interest rates on loans accordingly.
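To fix ideas with a stylised example (the figures are purely illustrative, not drawn from any source cited in this essay), consider a bank lending at rate $r$ to a borrower who defaults with probability $p$, in which case a fraction $R$ of the principal is recovered. Against a risk-free rate $r_f$, the bank breaks even when

$$(1-p)(1+r) + pR = 1 + r_f \quad\Longrightarrow\quad r = \frac{1 + r_f - pR}{1-p} - 1.$$

With $r_f = 3\,\%$, $p = 10\,\%$ and $R = 0.5$, the break-even rate is already close to $9\,\%$; as $p$ rises towards certainty of default, no finite rate can compensate the lender. Screening borrowers’ solvency is thus not an accessory task but the heart of the trade.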

Yet something changed at the beginning of the 1990s. Short of ideas for escaping a decade of poor economic growth, several governments started to meddle in this business, in particular where mortgage loans were concerned. The ideological victory of the West, after half a century of Cold War, encouraged them to believe that a “nation of homeowners” was a desirable model to pursue, not only because it would appeal to voters but also because it would be intrinsically more efficient from an economic and social point of view. The triumph of private property over state ownership would help foster economic growth and social cohesion, thanks to the greater care citizens take of their own property. Moreover, having a personal estate would give them better access to loans, as Chile experienced in the 1980s. Credit could then be used to make up for stagnating incomes, boosting consumption and putting the economy back on track.

The reasoning sounded very convincing but suffered from one major weakness: most people simply did not have the money to become homeowners and, given the bad shape of the labour market, had little chance of being granted a loan by banks. Yet if external forces could start up the machine, it would later be able to function on its own, in the form of a virtuous circle. Public authorities, notably in the United States, were to give this first impetus. Mortgage loans became government-sponsored, which meant in practice that banks were hedged against default risks or politically “encouraged” to grant more loans, and that applicants could benefit from tax credits when taking out mortgages. As a result, an artificially attractive market was created: almost risk-free, very profitable for lenders, appealing to borrowers and ensuring politicians a high level of popularity without additional public expenditure — a rarity in the world of politics.

Securitization, a much-criticized “financial innovation” held to be partly responsible for the subprime crisis, was from this point of view a logical response to the set of perverse incentives put in place. Many bankers certainly understood that their new, subprime customers were mostly insolvent and should not be granted loans. Yet if they could pass the brunt of the risk to someone else through government-sponsored guarantees or securities trading, it was not irrational for them to take advantage of these business opportunities. As in Keynes’s beauty contest, following the herd is not always irrational if the rules of the game reward those who do so. The problem arises when the aggregate of individually rational decisions leads to a collective failure.
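A stylised back-of-the-envelope calculation (the notation is purely illustrative) makes the point. Let a subprime loan have a negative expected value $V < 0$ for whoever ultimately bears the default risk, and let the originating bank collect a certain fee $f > 0$ for arranging the loan and passing the risk on:

$$\underbrace{f > 0}_{\text{originator's payoff}} \qquad \text{while} \qquad \underbrace{f + V < 0}_{\text{aggregate payoff, whenever } |V| > f}.$$

As long as a guarantee or a flattering rating hides $V$ from whoever ends up holding it, originating the loan is individually rational even though it destroys value in the aggregate.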

Improper institutional framework rather than “human nature” to blame

This articulation between individual decisions and collective outcomes is probably the key question raised by the crisis, and it goes far beyond the boundaries of the economics discipline. Since Ancient Greece, political philosophers have been striving to reconcile individual behaviours with social imperatives. Those who held an optimistic vision of human nature hoped that civic virtues would be strong enough to sustain an orderly society, but these soon proved very unreliable. On the other hand, more pessimistic thinkers who advocated an iron-fist regime neglected people’s appetite for freedom and eventually realized that there would never be enough policemen to protect a government unanimously considered illegitimate.

In the field of politics as well as economics, market mechanisms were to provide a crucial solution to this equation. Personal virtues, by definition unevenly distributed, were no longer indispensable for the system to function. If everyone was made free to bring their input to a common marketplace, the resulting conjunction of wills would be as close as possible to a collective optimum. The mathematical model of the barycenter was to inspire both the formation of nation-wide democracies — in contrast with their remote Greek ancestors — and that of decentralized, market-based economies. This paradigm, whose first concrete foundations were laid down during the 18th century, has remained dominant up to now, even if it has been challenged several times.
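For readers unfamiliar with the image, the barycenter is simply the weighted average of a set of positions,

$$\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i},$$

where each agent $i$ contributes a position $x_i$ with weight $w_i$. The aggregate $\bar{x}$ is pulled towards wherever the mass of individual wills lies, without any single will dictating the outcome; this is the sense in which both elections and markets were expected to approximate a collective optimum.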

Yet as all the crises since then have shown, it is far from flawless. The “invisible hand” is no silver bullet, and in repeated instances, letting everyone pursue their own interests has led not to the common good but to its opposite. After reading how the housing bubble swelled, one may believe that trouble only comes when the model is perverted by state intervention, i.e. when a visible hand replaces the invisible one with the pretension of doing a better job. This was essentially true for 2007, but not, for example, for 1929, when regulation was actually deficient.

The choice is not between an abstentionist government and an omnipotent one. One has to understand that there is no such thing as a neutral state, and that the absence of action is already an action. Those who thought in 2008 that we were witnessing a “return of the state” after three decades of neo-liberal ideological predominance proved to be wrong less than two years later. It does not follow, though, that partisans of a minimalist government are right, and the consequences of reckless public spending cuts are very likely to be felt for a long time to come. At the end of the day, the question is simply not whether we should have rules, but what rules we should have.

As it appears, many crises occur because institutional frameworks, by default or by design, provide individuals with inappropriate signals and incentives. One should then not be surprised to see people responding to them in ways that serve what they have defined as their interests, even if this eventually leads to a collective disaster. This is not to deny that people may sometimes react to signals in an unexpected manner: after all, we are human beings, not machines, and the exercise of our liberty may contradict what is supposed to be rational “behaviour”. Nevertheless, this is the exception rather than the rule, and that is why we live in relatively peaceful societies, where a high degree of complexity has been reached despite the lack of a master plan. Crisis or not, our rubbish is still collected, store shelves are full of food, and newspapers are printed and delivered every day to the most remote places. These institutions work, and they are very resilient.

Would it be conceivable to achieve the same level of stability on a macroeconomic scale? While Keynesian policy instruments were relatively effective against endogenous shocks from the post-war period until the late 1970s, they have been to a large extent disqualified by the second wave of globalization, and up to now we have been left with no substitute. If it turns out that no quick fix can any longer mitigate the effects of crises reputed to be inherent to the system, then, considering their devastating effects on people’s lives, it may be high time to revise the rules of the game and work on an alternative model.