Wednesday, May 29, 2019

Anosognosia paradox (AOT) & Cold War libertarianism (narcissism of small difference)

Some patients do not perceive that there is anything wrong with themselves, despite what is obvious to all.


As a leading neurologist at the Hôpital de la Pitié in Paris, Joseph Babinski was used to seeing all sorts of unusual phenomena. But in 1914, two patients stuck out. Both had damage to the right hemispheres of their brains, leaving them paralysed on the left side of their body (each brain hemisphere controls the opposite side of the body).
For an experienced neurologist like Babinski, this was hardly noteworthy. What did strike him was that both patients insisted they were completely normal. They were unaware that there was anything wrong with them. In a 1914 medical journal article, Babinski explained that when he recommended electrotherapy to one of these patients, she replied, “Why do you want to electrify me? I am not paralysed.” He coined a new word to describe this symptom: anosognosia, literally meaning ‘without knowledge’.

The concept of anosognosia is politically charged because it turns out that the homeless who are mentally ill typically do not perceive themselves as ill -- and for the past generation, the law has been committed to respecting the autonomy of the patient. People generally cannot be treated for mental illness against their will unless they are deemed a danger to themselves or to others, so it is nearly impossible to help them. Unfortunately, many of them eventually do become a danger to themselves or to others, but by then it is too late.

This legal understanding emerged in the 1960s, with the rise of improved medication and a growing concern for patients' rights.

Antipsychotics were only one factor in the shift towards deinstitutionalisation, which also included a growing awareness of the civil and human rights abuses that often occurred inside psychiatric hospitals. In 1967, California’s then-governor Ronald Reagan signed the Lanterman–Petris–Short Act, a landmark bill intended to halt the “inappropriate, indefinite, and involuntary commitment of persons with mental health disorders, developmental disabilities, and chronic alcoholism”. The bill formed the basis of much of the psychiatric legislation for the rest of the US.

What resulted wasn’t just a shift away from long-term psychiatric hospitalisation, but also towards giving patients greater control over their own treatment, including the decision whether to receive treatment at all.

The big problem was that once out on the streets, patients did not get the support they needed, and once they went off their medication, they avoided treatment. Patients would end up institutionalized again -- this time in prisons, not hospitals.

A related problem was that outpatient treatment was simply not as available as it needed to be. As new legislation narrowed the criteria by which patients could be involuntarily committed, many people affected by mental illness were excluded. Repeated threats of violence were no longer grounds for commitment. The result was that many of the most severely mentally ill were left to fend for themselves on the streets. By the 1980s, one observer noted that a patient’s basic rights were “being exercised in a vacuum with none of the attendant supports and backup systems in place to make it viable”. Instead of hospitals, many ended up in jail, arrested for nuisance crimes. As a result, the country’s prisons have now taken the place of its former mental institutions.

There has been a movement toward mandatory treatment of high-risk persons -- "Assisted Outpatient Treatment" (AOT). In the US, studies show that these forcibly treated patients improve. The policy also conserves scarce time and funding (for police, medical services, etc.).

Six years after Kendra’s Law was implemented in New York, officials had logged a 77% decrease in psychiatric hospitalisations and a 74% decline in homelessness for people in the AOT programme; incarcerations had dropped by 87%. In 2015, seven years after the implementation of Laura’s Law, Nevada County reported that people who had completed the AOT programme spent 43% less time in hospital, 52% less time in prison and 54% less time homeless than before they were treated.

Ironically, forcibly institutionalizing delusional people ultimately increases their freedom.

“The way I describe it is when you see somebody walking down the street [who thinks] they have a transmitter in their head, it is not because they believe they have a transmitter in their head. They know it. Their illness tells them so. And this is the group who won’t accept treatment, and treatment can restore their free will. Being psychotic is not an exercise of free will. It is the inability to exercise free will.”

However, some controversial studies in the UK show no improvement in the patients' condition after commitment. Moreover, there have been instances of patients who are debilitated by depression and anxiety -- yet who are self-aware and rational -- being classified as delusional and then forcibly institutionalized. Taken to an extreme, the implication is that, as in the former Soviet Union, anyone who disagrees with authority is obviously delusional and should be institutionalized indefinitely for their own good.

Hence, there is a practical ethical dilemma in forcibly treating mental illness that parallels and reflects a logical paradox about notions of free will. Forced treatment may, ironically, be the path to freedom and rationality for those who are delusional, but for those who are rational yet rebellious, it may be a trap. 

Libertarianism in the shadow of the Cold War

The avoidance of forced treatment for those who are obviously schizophrenic is largely a legacy of the Cold War.

In the Soviet Union, dissidents who openly disagreed with the Soviet regime were often put in mental institutions because they were "obviously" mentally ill if they were opposed to communism. Importantly, psychiatrists in the USSR sincerely believed in this policy.


In the American mind during the Cold War, there were generally two responses to the USSR: 
1) The Soviet model needs to be avoided, and 
2) the USA is already remarkably similar to the USSR. 

One can see this ambivalence in the 1956 movie "Invasion of the Body Snatchers".


The film's storyline concerns an extraterrestrial invasion that begins in the fictional California town of Santa Mira. Alien plant spores have fallen from space and grown into large seed pods, each one capable of reproducing a duplicate replacement copy of each human. As each pod reaches full development, it assimilates the physical characteristics, memories, and personalities of each sleeping person placed near it; these duplicates, however, are devoid of all human emotion. Little by little, a local doctor uncovers this "quiet" invasion and attempts to stop it.

The slang expression "pod people" that arose in late 20th century American culture references the emotionless duplicates seen in the film.

The movie has been taken as a criticism of both communism and the conformism of the 1950s.

Some reviewers saw in the story a commentary on the dangers facing America for turning a blind eye to McCarthyism (Leonard Maltin speaks of a McCarthy-era subtext[19]), or on bland conformity in postwar Eisenhower-era America. Others viewed it as an allegory for the loss of personal autonomy in the Soviet Union or communist systems in general.[20]
For the BBC, David Wood summarized the circulating popular interpretations of the film as follows: "The sense of post-war, anti-communist paranoia is acute, as is the temptation to view the film as a metaphor for the tyranny of the McCarthy era."[21] Danny Peary in Cult Movies pointed out that the addition of the framing story had changed the film's stance from anti-McCarthyite to anti-communist.[17] Michael Dodd of The Missing Slate has called the movie "one of the most multifaceted horror films ever made", arguing that by "simultaneously exploiting the contemporary fear of infiltration by undesirable elements as well as a burgeoning concern over homeland totalitarianism in the wake of Senator Joseph McCarthy's notorious communist witch hunt, it may be the clearest window into the American psyche that horror cinema has ever provided".

In fact, the film's "political commentary" might have been largely unintended because the film was seen by its makers as an entertainment vehicle onto which critics projected their own political meaning. 

Despite a general agreement among film critics regarding these political connotations of the film, actor Kevin McCarthy said in an interview included on the 1998 DVD release that he felt no political allegory was intended. The interviewer stated that he had spoken with the author of the novel, Jack Finney, who professed no specific political allegory in the work (DVD commentary track, quoted on Feo Amante's homepage).[25]

In his autobiography, I Thought We Were Making Movies, Not History, Walter Mirisch writes: "People began to read meanings into pictures that were never intended. The Invasion of the Body Snatchers is an example of that. I remember reading a magazine article arguing that the picture was intended as an allegory about the communist infiltration of America. From personal knowledge, neither Walter Wanger nor Don Siegel, who directed it, nor Dan Mainwaring, who wrote the script nor original author Jack Finney, nor myself saw it as anything other than a thriller, pure and simple."

From a psychological point of view, this lack of intention might actually confirm that the USSR and American conformity were relevant to the subconscious appeal of the movie, an aspect that only emerged into consciousness in the later critical interpretation of the movie. 

For the contemporary viewer, at least since the advent of the political satire of Saturday Night Live in the 1970s, it is difficult not to instantly imagine a political subtext.

[Invasion of the Body Snatchers, 1956, trailer]


The equivalence of the Soviet Union and the USA in the popular mind was evident in the rise of a counterculture in the 1960s.


The Making of a Counter Culture "captured a huge audience of Vietnam War protesters, dropouts, and rebels--and their baffled elders." Theodore Roszak found common ground between 1960s student radicals and hippie dropouts in their mutual rejection of what he calls the technocracy--the regime of corporate and technological expertise that dominates industrial society.

In Roszak's account, the failure of the Soviet model loomed large for the critics of American life in the 1960s. The classic Marxist model was that the economic transformation of society into a model of sharing (socialism) would transform what is traditionally regarded as a fixed "human nature". That did not happen, and the new leftist model in the West was to reverse the Marxist model by first transforming one's own way of thinking (e.g., taking drugs, meditating, reading Asian philosophy) and then getting together in small groups and therapeutically discussing how brainwashed everyone is by society ("consciousness raising"). Rather than having a centrally directed economy that would promote the common good, people should "get off the grid" and develop technology that would promote independence (solar power, geodesic domes, personal computers). This general counterculture framework is still arguably the dominant paradigm within liberal academia. (In fact, the popular, non-Kuhnian conception of a "paradigm" is beholden to the counterculture paradigm.)

It is in this context that the Hungarian psychiatrist Thomas Szasz rose to prominence.


A distinguished lifetime fellow of the American Psychiatric Association and a life member of the American Psychoanalytic Association, he was best known as a social critic of the moral and scientific foundations of psychiatry, of what he saw as the social control aims of medicine in modern society, and of scientism. His books The Myth of Mental Illness (1961) and The Manufacture of Madness (1970) set out some of the arguments most associated with him.

Szasz argued throughout his career that mental illness is a metaphor for human problems in living, and that mental illnesses are not "illnesses" in the sense that physical illnesses are; and that except for a few identifiable brain diseases, there are "neither biological or chemical tests nor biopsy or necropsy findings for verifying DSM diagnoses."[5]

Szasz maintained throughout his career that he was not anti-psychiatry but was rather anti-coercive psychiatry. He was a staunch opponent of civil commitment and involuntary psychiatric treatment but believed in, and practiced, psychiatry and psychotherapy between consenting adults.

His views on special treatment followed from libertarian roots, based on the principles that each person has the right to bodily and mental self-ownership and the right to be free from violence from others, and he criticized the "Free World" as well as the communist states for their use of psychiatry. 

The Scottish psychiatrist R.D. Laing professed a similar perspective.


Laing’s views on the causes and treatment of psychopathological phenomena were influenced by his study of existential philosophy and ran counter to the chemical and electroshock methods that had become psychiatric orthodoxy. Taking the expressed feelings of the individual patient or client as valid descriptions of lived experience rather than simply as symptoms of mental illness, Laing regarded schizophrenia as a theory, not a fact. Though associated in the public mind with anti-psychiatry, he rejected the label.[2] Politically, he was regarded as a thinker of the New Left.

The so-called "New Left" was related to the counterculture and focused on identity politics rather than class struggle. Its alienation from orthodox Marxism is apparent.


The New Left was a broad political movement mainly in the 1960s and 1970s consisting of activists in the Western world who campaigned for a broad range of social issues such as civil and political rights, feminism, gay rights, abortion rights, gender roles and drug policy reforms.[2] Some saw the New Left as an oppositional reaction to earlier Marxist and labor union movements for social justice that focused on dialectical materialism and social class, while others who used the term saw the movement as a continuation and revitalization of traditional leftist goals.
Some who self-identified as "New Left"[5] rejected involvement with the labor movement and Marxism's historical theory of class struggle,[6] although others gravitated to their own takes on established forms of Marxism, such as the New Communist movement (which drew from Maoism) in the United States or the K-Gruppen in the German Sprachraum. In the United States, the movement was associated with the anti-war college-campus protest movements, including the Free Speech Movement.

It has been noted more than once that the hippies grew up and became capitalists. But to be fair, the counterculture tended toward libertarianism and anarchism, opposed to the New Deal fusion of big business, big government and big labor. The hippies were always capitalists insofar as they saw themselves as insurgent entrepreneurs.

Also in line with the New Left was a historical review of the idea of "madness" by the French philosopher Michel Foucault.


Foucault's first major book, Madness and Civilization is an examination of the evolving meaning of madness in European culture, law, politics, philosophy and medicine from the Middle Ages to the end of the eighteenth century, and a critique of historical method and the idea of history. It marks a turning point in Foucault's thought away from phenomenology toward structuralism: though he uses the language of phenomenology to describe an evolving experience of the mad as "the other", he attributes this evolution to the influence of specific powerful social structures.

Foucault utilizes a common division of Western intellectual history into three distinct periods: 1) the tradition ("Renaissance"), 2) the Scientific Revolution of the mid-17th century (the "classical" age), and 3) the "modern" age beginning in 1800. 

In the Renaissance, the mad lived in public, were seen to possess a touch of divine knowledge, and embodied moral innocence.

He argues that in the Renaissance the mad were portrayed in art as possessing a kind of wisdom – a knowledge of the limits of our world – and portrayed in literature as revealing the distinction between what men are and what they pretend to be. Renaissance art and literature depicted the mad as engaged with the reasonable while representing the mysterious forces of cosmic tragedy such as the Ship of Fools,[5] but the Renaissance also marked the beginning of an objective description of reason and unreason (as though seen from above) compared with the more intimate medieval descriptions from within society.

Following the Scientific Revolution, the mad were confined, managed and studied, and were understood as fundamentally rational people who could be rehabilitated.

Foucault contends that at the dawn of the age of reason, in the mid-seventeenth century, the rational response to the mad, who until then had been consigned to society's margins, was to separate them completely from society by confining them, along with prostitutes, vagrants, blasphemers and the like, in newly created institutions all over Europe – a process he calls "the Great Confinement".[2]

The condition of these outcasts was seen as one of moral error. They were viewed as having freely chosen prostitution, vagrancy, blasphemy, unreason, etc. and the regimes of these new rational institutions were meticulous programs of punishment and reward aimed at causing them to reverse those choices.[2]

The social forces Foucault sees driving this confinement include the need for an extra-judicial mechanism for getting rid of undesirables, and the wish to regulate unemployment and wages (the cheap labour of the workhouses applied downward pressure on the wages of free labour). He argues that the conceptual distinction between the mad and the rational was in a sense a product of this physical separation into confinement: confinement made the mad conveniently available to medical doctors who began to view madness as a natural object worthy of study and then as an illness to be cured.

Around the period of the French Revolution, the mad remained confined with the purpose of treatment, but without the prior assumption of their rationality. Notably, there was no improvement in their conditions, despite the enlightened rhetoric.

For Foucault the modern experience began at the end of the eighteenth century with the creation of places devoted solely to the confinement of the mad under the supervision of medical doctors, and these new institutions were the product of a blending of two motives: the new goal of curing the mad away from their family who could not afford the necessary care at home, and the old purpose of confining undesirables for the protection of society. These distinct purposes were lost sight of, and the institution soon came to be seen as the only place where therapeutic treatment could be administered. He sees the nominally more enlightened and compassionate treatment of the mad in these modern medical institutions as just as cruel and controlling as their treatment in the earlier, rational institutions had been.

Foucault's interrogation of the idea of "mental illness" seems very much of a piece with the intellectual re-thinking of the period.

Kenneth Lewes writes that Madness and Civilization is an example of the "critique of the institutions of psychiatry and psychoanalysis" that occurred as part of the "general upheaval of values in the 1960s". Lewes sees Foucault's work as being similar to, but more profound than, Thomas Szasz's The Myth of Mental Illness (1961).

One question is whether Foucault's research actually expresses a form of romanticism which views the past as a period of freedom and happiness that was later corrupted and repressed.

The sociologist José Guilherme Merquior discusses Madness and Civilization in Foucault (1985). Merquior argues that while Foucault raises important questions about the influence of social forces on the meaning of, and responses to, deviant behavior, Madness and Civilization is nonetheless so riddled with serious errors of fact and interpretation as to be of very limited value. Merquior notes that there is abundant evidence of widespread cruelty to and imprisonment of the insane during eras when Foucault contends that the mad were perceived as possessing wisdom, and that Foucault has thus selectively cited data that supports his assertions while ignoring contrary data. Madness was typically linked with sin by Christian Europeans, notes Merquior, and was therefore regarded as much less benign than Foucault tends to imply. Merquior sees Madness and Civilization as "a call for the liberation of the Dionysian id" similar to Norman O. Brown's Life Against Death (1959), and an inspiration for Gilles Deleuze and Félix Guattari's Anti-Oedipus (1972).

There might be a kernel of romanticism in Foucault's thought as well as Foucault's politics and his personality. But the overall drift of Foucault's historical studies is not that the past was rosy and beautiful, but that "the past is a foreign country" fundamentally alien to the way we are now. (For example, Foucault points out how capital punishment today is sanitized and hidden, whereas centuries ago it was a gory public spectacle.)

In any case, the theories of Szasz, Laing and Foucault contributed to the intellectual backdrop of a period in which psychiatric hospitals were closed down and their patients were disgorged upon a society that championed freedom but did not want to spend money on their welfare.


Deinstitutionalisation (or deinstitutionalization) is the process of replacing long-stay psychiatric hospitals with less isolated community mental health services for those diagnosed with a mental disorder or developmental disability. In the late 20th century, it led to the closure of many psychiatric hospitals, as patients were increasingly cared for at home or in halfway houses, clinics and regular hospitals.

Deinstitutionalisation works in two ways. The first focuses on reducing the population size of mental institutions by releasing patients, shortening stays, and reducing both admissions and readmission rates. The second focuses on reforming psychiatric care to reduce (or avoid encouraging) feelings of dependency, hopelessness and other behaviors that make it hard for patients to adjust to a life outside of care.[1]

The modern deinstitutionalisation movement was initiated by three factors:
  • A socio-political movement for community mental health services and open hospitals;
  • The advent of psychiatric drugs able to manage psychotic episodes;
  • Financial imperatives (in the US specifically, to shift costs from state to federal budgets)[2]
The movement to reduce institutionalisation was met with wide acceptance in Western countries, though its effects have been the subject of many debates. Critics of the policy include defenders of the previous policies[3] as well as those who believe the reforms did not go far enough to provide freedom to patients.

This turns the conversation away from the left-wing libertarian intellectual history of the 1960s to the right-wing libertarian economic policies of the 1980s. The actual reality of why localities do not institute mandatory treatment for high-risk psychiatric patients boils down to an unwillingness by voters to commit resources to the issue. This might suggest a certain continuity between elements of libertarianism (anarchism) in the New Left and the economic libertarianism of the New Right.


In the United States, New Right refers to three historically distinct conservative political movements.[33]:624–25 These American New Rights are distinct from and opposed to the more moderate tradition of the so-called Rockefeller Republicans. The New Right also differs from the Old Right (1933–55) on issues concerning foreign policy with neoconservatives being opposed to the non-interventionism of the Old Right.

Indeed, it might be the similarity of the New Left and the New Right that is the source of their mutual animosity. Sigmund Freud pointed out that the great hatreds in European history were not between Christians and Jews, but between Protestant northern Germans and Catholic southern Germans, and between the Spanish and the Portuguese -- people who were otherwise indistinguishable. 


The narcissism of small differences (German: der Narzissmus der kleinen Differenzen) is the thesis that communities with adjoining territories and close relationships are especially likely to engage in feuds and mutual ridicule because of hypersensitivity to details of differentiation.[1] The term was coined by Sigmund Freud in 1917, based on the earlier work of British anthropologist Ernest Crawley. In language differing only slightly from current psychoanalytic terminology, Crawley declared that each individual is separated from others by a taboo of personal isolation, a narcissism of minor differences.

In 2010, author Christopher Hitchens cited the phenomenon when talking about ethno-national conflicts.[9] "In numerous cases of apparently ethno-nationalist conflict, the deepest hatreds are manifested between people who—to most outward appearances—exhibit very few significant distinctions." 

Mr. Spock understands.

[Star Trek, S3E15, Let That Be Your Last Battlefield, @33:28]


Aside from mutual animosity between political parties that are as similar as Tweedledum and Tweedledee, the American political establishment is entrenched in ways of thinking that are rooted in the Cold War era. For better or worse, Donald Trump is one of the few major political voices diverging from an outmoded establishment.

(In the philosophy of organic gardening, weeds are to be seen as a symptom of an underlying imbalance in the ecology of the garden, and weeds in their own imperfect and unattractive way help to supply the garden with what it lacks until the garden is rebuilt. Politically, Donald Trump is a weed.) 

Sunday, May 26, 2019

The geography of success

An article on the ascendancy of the large city in the 21st century:


In the early 20th century, American cities were industrial powerhouses that were fed by natural resources extracted from the rural interior of the United States. The ties that once bound the city to the hinterland have now been severed. Materially, big cities have detached from the surrounding region because they can now be supported with natural resources from around the world; intellectually, big cities are now interconnected with one another as they focus more on services.

The companies that now drive the Bay Area’s soaring wealth — and that represent part of the American economy that’s booming — don’t need these communities in the same way. Google’s digital products don’t have a physical supply chain. Facebook doesn’t have dispersed manufacturers. Apple, which does make tangible things, now primarily makes them overseas.

“These types of urban economies need other major urban economies more than they need the standardized production economies of other cities in their country,” said Saskia Sassen, a sociologist at Columbia who has long studied the global cities that occupy interdependent nodes in the world economy. New York, in other words, needs London. But what about Bethlehem, Pa.?

Such a picture, Ms. Sassen said, “breaks a past pattern where a range of smaller, more provincial cities actually fed the rise of the major cities.” Now major cities are feeding one another, and doing so across the globe.

Ram Mudambi, a professor in the Fox School of Business at Temple University, offers an even more unnerving hypothesis, in two parts: The more globally connected a city, the more prosperous it is. And as such cities gain global ties, they may be shedding local ones to the “hinterland” communities that have lost their roles in the modern economy or lost their jobs to other countries.

Today, major cities are where knowledge workers gather together. Their interaction with one another has a magnifying effect that dramatically increases their productivity and draws in other professionals and creative types. There is no substitute for a metropolis as an economic engine fed by the presence of white-collar "symbolic manipulators".

Cities full of highly educated workers like Boston, San Francisco and New York began to pull away. And that pattern, Ms. Giannone finds, has been driven entirely by what’s happening with high-skilled workers: When they cluster together in these places, their wages rise even more. That widens inequality both within wealthy cities and between wealthy regions and poorer ones.

“Big changes have been happening over the last 30 years,” Ms. Giannone said. “Now we’re actually seeing the impact of them.”

Those changes have come from multiple directions — from globalization, from computerization, from the shift in the United States away from manufacturing toward a knowledge and service economy. These trends have buffeted many smaller cities and nonurban areas. The uncomfortable political truth is that they’ve also benefited places like San Francisco and New York.

“The economic base has shifted in a way that highly favors cities — and big cities — because it’s now based on knowledge, on idea exchange, on agglomeration,” said Mark Muro, the policy director of the Metropolitan Policy Program at the Brookings Institution.

Programmers benefit from having more programmers nearby, in ways different than when assembly-line workers gather together. The forces of agglomeration, which big cities enable, are strongest in the kind of knowledge work that has become central to the economy.

Major cities also benefit from their connections with other major cities.

The advantages bestowed by the global economy keep compounding from there. Research by Filipe Campante at Harvard and David Yanagizawa-Drott at the University of Zurich finds that when two cities are linked by direct flights across the globe, business links between them increase as well, such that places with more connections grow more economically. Those economic benefits, though, don’t appear to touch places more than 100 miles beyond the airport.

Harald Bathelt at the University of Toronto has found that firms in leading tech clusters in Canada tend to invest in leading tech clusters in China, and vice versa. They’re pouring resources into and linking up to places that are already similarly successful.

“The Torontos, Ottawas and Waterloos in countries like Canada and the U.S., they will link with Shenzhen in China, they will link with Munich and Stockholm in Europe,” Mr. Bathelt said. “And other places will be kind of left out.”

The article in no way contradicts the general view of Trump supporters -- in particular, that the liberal urban elites have economically abandoned the working class, especially in the industrial and rural sectors. However, Trump supporters part ways with the article's gloomy diagnosis: Trump fans imagine that the Old Economy will come roaring back under Trump's leadership. The article, in contrast, points out that these changes in the economy can be traced back to 1980 -- when Ronald Reagan ascended to the presidency -- and, more importantly, are rooted in the predictable long-term evolution of a global economy that has been developing for centuries.

In an editorial responding to the above article, the economist Paul Krugman says that he found the article "stimulating", but he neither confirms nor denies its validity.


In particular, I’ve been trying to clarify my thoughts after reading Emily Badger’s stimulating piece on how megacities seem to have less and less need for smaller cities. I found myself asking what might seem like an odd question: what, in the modern economy, are small cities even for? What purpose do they serve? And this question leads me to a chain of thought that’s a bit different from Badger’s, although not necessarily contradictory.

At first, Krugman follows Badger's narrative of the historical relationship between city and region.

Rural areas were once dedicated to agriculture, and later factories supplemented the rural economy.

Once upon a time, it was obvious what towns and small cities did: they served as central places serving a mainly rural population engaged in agriculture and other natural resource-based activities. The rural population was dispersed because arable land and other resources were dispersed, and so you had lots of small cities dotting the landscape.
Over time, however, agriculture has become ever less important as a share of the economy, and the rural population has correspondingly declined as a determinant of urban location. Nonetheless, many small cities survived and grew by becoming industrial centers, generally specialized in some cluster of industries held together by the Marshallian trinity of information exchange, specialized suppliers, and a pool of labor with specialized skills.

But for Krugman, this raises a question. Why did some small cities die while others continued to prosper?

What determined which industries a small city developed? In some cases particular features of the location and nearby resources were important, but often it was more or less random chance at first, then a sequence in which one industry created conditions that favored another.

In effect, Krugman turns the issue on its head by observing that obsolescence is natural and inevitable, and that it is survival that is peculiar. Farms, villages, small towns and small cities have been disappearing for centuries in the wake of modernity and its transformations. The real puzzle is how some of them managed to survive. The problem is not that urban elites have forsaken the small-town heartland, but that small towns were economically doomed centuries ago with the advent of the Scientific Revolution and the Industrial Revolution. How did these rural areas manage to persist for so long?

Krugman has worked extensively with the model of "path dependence", which might help to explain how some localities survive while most do not.


Path dependence explains how the set of decisions one faces for any given circumstance is limited by the decisions one has made in the past or by the events that one has experienced, even though past circumstances may no longer be relevant.

That sounds like the Hindu notion of karma, albeit without the ethical component.

There are many models and empirical cases where economic processes do not progress steadily toward some pre-determined and unique equilibrium, but rather the nature of any equilibrium achieved depends partly on the process of getting there. Therefore, the outcome of a path-dependent process will often not converge towards a unique equilibrium, but will instead reach one of several equilibria (sometimes known as absorbing states). 
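One classic toy model of such path dependence is the Pólya urn, in which early random draws permanently shape the long-run outcome. The sketch below is my own illustration, not something from the quoted text: two "towns" start out identical, each success attracts further success, and different random histories settle into different stable shares rather than one unique equilibrium.

```python
import random

def polya_urn(steps, seed):
    """Polya urn: pick a town with probability proportional to its current
    size, then add one more 'firm' there. Early luck compounds forever."""
    random.seed(seed)
    red, blue = 1, 1                      # two identical towns at the start
    for _ in range(steps):
        if random.random() < red / (red + blue):
            red += 1                      # success attracts more of the same
        else:
            blue += 1
    return red / (red + blue)             # town A's long-run share

# Different random histories converge to *different* stable shares:
shares = [round(polya_urn(10_000, seed), 2) for seed in range(5)]
print(shares)  # five different long-run outcomes -- no unique equilibrium
```

The point of the sketch is that nothing about the towns themselves determines the outcome; only the accidents of the early draws do, which is exactly the "absorbing states" idea in the passage above.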

The orthodoxy within the field of economics is the concept of "comparative advantage", usually credited to David Ricardo, building on Adam Smith. Countries and regions tend to specialize economically in what they are relatively good at. Scotland is good at raising sheep, and Portugal is good at growing grapes, and so Britain would be best at exporting wool and Portugal should export wine. Free trade meshes perfectly with the way countries naturally specialize in products based on their geography.

But that is not always true. Some places develop despite their geographic disadvantages. It is early investment that gives such locales a disproportionate advantage. One example might be the rise of Las Vegas. There is no reason why a major American city would emerge in the deserts of Nevada, but gambling laid the foundation for further economic diversification. Now that Las Vegas is established, few other locales in North America can seriously compete with it in terms of gambling, hosting conventions and entertainment.

This dynamic vision of economic evolution is very different from the tradition of neo-classical economics, which in its simplest form assumed that only a single outcome could possibly be reached, regardless of initial conditions or transitory events. With path dependence, both the starting point and 'accidental' events (noise) can have significant effects on the ultimate outcome.

Most relevant to the issue at hand, there does seem to be some agreement among economists from Adam Smith to Krugman that the fate of localities rests not on their inherent characteristics, but on which of them developed first and subsequently diversified the most.

Economists from Adam Smith to Paul Krugman have noted that similar businesses tend to congregate geographically ("agglomerate"); opening near similar companies attracts workers with skills in that business, which draws in more businesses seeking experienced employees. There may have been no reason to prefer one place to another before the industry developed, but as it concentrates geographically, participants elsewhere are at a disadvantage, and will tend to move into the hub, further increasing its relative efficiency. This network effect follows a statistical power law in the idealized case, though negative feedback can occur (through rising local costs).
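The agglomeration feedback described above can be sketched as a preferential-attachment simulation, in which each new firm picks a location with probability proportional to the firms already there. This is an illustrative toy model of mine, not taken from the quoted source:

```python
import random

def agglomerate(n_firms, n_cities, seed=0):
    """Each new firm locates in a city with probability proportional to
    the number of firms already there (preferential attachment)."""
    random.seed(seed)
    counts = [1] * n_cities                   # every city starts with one firm
    for _ in range(n_firms):
        city = random.choices(range(n_cities), weights=counts)[0]
        counts[city] += 1
    return sorted(counts, reverse=True)

sizes = agglomerate(5_000, 50)
print(sizes[:3], "...", sizes[-3:])  # a few large hubs, many small places
```

Run it and the 50 initially identical cities end up wildly unequal: early winners keep winning, producing the heavy-tailed, hub-dominated size distribution the power-law remark describes.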

Diversification is risky. Sometimes it works, and sometimes it fails. Big cities can fail multiple times with fewer repercussions, whereas smaller cities cannot weather such setbacks so easily. So size matters, and so does luck. If a small city can somehow manage to grow and diversify, then that buys it time so that in the future it can again grow and diversify, buying it more time yet again. The alternative is obsolescence and doom. Krugman writes:

This was typical of small industrial cities: even if what a city was doing in, say, 1970 seemed very different from what it was doing in 1880, there was usually a sort of chain of external economies creating the conditions that allowed the city to take advantage of particular new technological and market opportunities when they arose.

Obviously, this was a chancy process. Some localized industries created fertile ground for new industries to replace them; others presumably became dead ends. And while a big, diversified city can afford a lot of dead ends, a smaller city can’t. Some small cities got lucky repeatedly, and grew big. Others didn’t; and when a city starts out fairly small and specialized, over a long period there will be a substantial chance that it will lose enough coin flips that it effectively loses any reason to exist.

I’m not saying that there weren’t patterns of success and failure. Small cities were and are more likely to fail if they have miserable winters, more likely to come up with new tricks if they’re college towns and/or destinations for immigrants. Still, if you back up enough, it makes sense to think of urban destinies as a random process of wins and losses in which small cities face a relatively high likelihood of experiencing gambler’s ruin.

Again, it was not always thus: once upon a time dispersed agriculture ensured that small cities serving rural hinterlands would survive. But for generations we have lived in an economy in which smaller cities have nothing going for them except historical luck, which eventually tends to run out.
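Krugman's gambler's-ruin image is easy to make concrete: model a city as a fair coin-flip random walk, where its "buffer" (size and diversification) is its distance from ruin. The simulation below is my own sketch rather than anything from Krugman, but it shows why a small buffer makes ruin far more likely over the same stretch of history.

```python
import random

def ruin_probability(buffer, rounds, trials, seed=0):
    """Fraction of simulated cities that hit zero within `rounds` coin
    flips. `buffer` = how many failed reinventions the city can absorb."""
    random.seed(seed)
    ruined = 0
    for _ in range(trials):
        wealth = buffer
        for _ in range(rounds):
            wealth += 1 if random.random() < 0.5 else -1
            if wealth == 0:                 # out of second chances
                ruined += 1
                break
    return ruined / trials

for buffer in (2, 5, 20):                   # small town ... big diversified city
    print(buffer, ruin_probability(buffer, rounds=200, trials=2_000))
```

The coin is fair, so no city is inherently worse than any other; the small ones simply cannot absorb as many bad flips, which is exactly the asymmetry between "a big, diversified city" and a small specialized one in the quoted passage.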

Krugman points out that trade is not a villain in the processes of obsolescence. With or without international trade, small cities and rural regions tend to become obsolete over time, anyway. No, the problem is not China and Mexico.

Notice, by the way, that globalization and all that isn’t central to this story. If I’m right, the conditions for small-city decline and fall have been building for a very long time, and we’d be seeing much the same story – maybe more slowly – even without the growth of world trade.

Are there policy implications from this diagnosis? Maybe. There are arguably social costs involved in letting small cities implode, so that there’s a case for regional development policies that try to preserve their viability. But it’s going to be an uphill struggle. In the modern economy, which has cut loose from the land, any particular small city exists only because of historical contingency that sooner or later loses its relevance.

Complicating Krugman's analysis might be the infamous "resource curse". Societies that have an abundance of natural resources tend to fail to diversify their economies. In fact, they often remain in a state of frozen economic, political and cultural development. A classic case would be Saudi Arabia, which seemed to be a fossil in every societal respect -- until the revolution in unconventional oil and natural gas extraction (the "fracking boom") brought permanently "low" oil prices (oil at $60 a barrel is, in fact, several times the historical norm for the price of oil). Now Saudi Arabia seems racked by a "revolution from above" as the ruling elites panic. But there is no revolution from above in rural America. Small-town people perceive their own ever-growing bounty in coal and corn and are puzzled by its diminishing economic returns, but the desire for radical transformation of their economy or lifestyle is alien to them.


The resource curse, also known as the paradox of plenty, refers to the paradox that countries with an abundance of natural resources (like fossil fuels and certain minerals), tend to have less economic growth, less democracy, and worse development outcomes than countries with fewer natural resources. There are many theories and much academic debate about the reasons for and exceptions to these adverse outcomes. Most experts believe the resource curse is not universal or inevitable, but affects certain types of countries or regions under certain conditions.

What should the response be to rural economic obsolescence? Paul Krugman is sad to say that in terms of economic policy, not much can be done. 


There do not seem to be any voices in the mass media offering practical, realistic, honest, useful, hopeful and creative advice to those who live in rural areas. In fact, the closest thing to encouraging people to move away from rural stagnation is an article on how people in New Orleans who were permanently relocated far from their poor, dysfunctional urban neighborhoods ended up much better off.


Things are so dire in rural economies and realistic solutions are so lacking that perhaps brainstorming amongst the general public is in order. 

Here goes nothing....

At the personal level, perhaps individuals should move away from rural areas. They are highly resistant to moving from their communities, however, because community and family mean everything to them. Those who do move tend to move to the outer suburbs (for example, the ever-expanding suburbs of Texas), but rarely to a city or its inner suburbs. A useful word of advice might be to suggest moving to a place within one's region that has twice the population of one's current town. This might be well within an emotional comfort zone.

Here are some random thoughts on government policy in rural areas. The government should: 
1) assist migration to somewhat larger places; 
2) pay people living in rural areas to move to more populous, denser areas; 
3) close down government services, like the post office, in places where they are simply not efficient; 
4) have programs to re-wild rural areas, which would temporarily bring in revenue as rural areas transition; 
5) declare such areas national parks.