Abstract: Classic American foreign policy doctrines such as the Monroe, Truman and Carter doctrines assert that Latin America, Europe and the Middle East, respectively, are of vital interest to the USA and are therefore in the American sphere of influence. The USA thereby maintains the right to intervene militarily in those regions. In these regions, this is often perceived as American imperialism. From an American viewpoint, however, these doctrines are anti-imperialist in that their primary purpose is to warn away other global superpowers. In the event of Soviet involvement in Latin America, Europe or the Middle East, for example, there existed the very real danger of rapid escalation to global nuclear war. In contrast, when global superpowers are absent from a regional conflict, as with the Iraqi invasion of Kuwait in 1990, the American reaction can be expected to be less draconian and more uncertain. The USA might go to war to protect an ally (Saudi Arabia) from a regional threat (Iran); but then again, maybe not.
The Carter Doctrine, articulated in 1980, states that the Middle East is an area of vital interest to the USA and that the USA reserves the right to intervene with military force if those American interests (oil) are threatened.
The Carter Doctrine was a policy proclaimed by President of the United States Jimmy Carter in his State of the Union Address on January 23, 1980, which stated that the United States would use military force, if necessary, to defend its national interests in the Persian Gulf. It was a response to the Soviet Union's intervention in Afghanistan in 1979, and it was intended to deter the Soviet Union, the United States' Cold War adversary, from seeking hegemony in the Persian Gulf region.
The wording of the address stresses that it is forces from outside the region against which the USA would retaliate.
Let our position be absolutely clear: An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America, and such an assault will be repelled by any means necessary, including military force.
The Carter Doctrine was modeled on the Truman Doctrine of 1947, which asserted that the USA would globally counter the spread of communism, and which signified the beginning of the Cold War. Notably, it was not so much communism itself which was perceived as a threat, but Soviet Communism, and the real area of concern was Europe.
The Truman Doctrine was an American foreign policy whose stated purpose was to counter Soviet geopolitical expansion during the Cold War. It was announced to Congress by President Harry S. Truman on March 12, 1947,[1] and further developed on July 4, 1948, when he pledged to contain threats in Greece and Turkey. Direct American military force was usually not involved, but Congress appropriated financial aid to support the economies and militaries of Greece and Turkey. More generally, the Truman Doctrine implied American support for other nations allegedly threatened by Soviet communism. The Truman Doctrine became the foundation of American foreign policy, and led, in 1949, to the formation of NATO, a military alliance that is still in effect. Historians often use Truman's speech to date the start of the Cold War.
The Truman Doctrine was informally extended to become the basis of American Cold War policy throughout Europe and around the world.[5] It shifted American foreign policy toward the Soviet Union from wartime anti-fascist alliance to a policy of containment of Soviet expansion as advocated by diplomat George Kennan. It was distinguished from rollback by implicitly tolerating the previous Soviet takeovers in Eastern Europe.
The Truman Doctrine pragmatically tolerated the preexisting conquest of Eastern Europe by the Soviets, rather than calling for regime change and revolution ("rollback"). The Truman Doctrine's (conservative) realism was quite different from the later Reagan Doctrine, which called for the USA to support the global overthrow of communism.
In its pragmatism, the Truman Doctrine resembled the earlier Monroe Doctrine, which declared that further European attempts to colonize the western hemisphere would be regarded by the USA as an act of war -- but nevertheless recognized current European colonies in the New World as legitimate.
The Monroe Doctrine was a United States policy of opposing European colonialism in the Americas beginning in 1823. It stated that further efforts by European nations to take control of any independent state in North or South America would be viewed as "the manifestation of an unfriendly disposition toward the United States."[1] At the same time, the doctrine noted that the U.S. would recognize and not interfere with existing European colonies nor meddle in the internal concerns of European countries. The Doctrine was issued on December 2, 1823 at a time when nearly all Latin American colonies of Spain and Portugal had achieved, or were at the point of gaining, independence from the Portuguese and Spanish Empires.
In the USA, the Monroe, Truman and Carter Doctrines are understood as anti-imperialist policies warning away other great powers; for the people in these regions, these doctrines are often seen as the imposition of American imperialism. This is classic "sphere of influence" politics. A superpower will inevitably dominate its neighbors. Moreover, that superpower will also recognize the right of other superpowers to likewise dominate their own neighbors (e.g., Soviet domination of eastern Europe) -- even if those other superpowers are arch-enemies.
In the field of international relations, a sphere of influence (SOI) is a spatial region or concept division over which a state or organization has a level of cultural, economic, military, or political exclusivity, accommodating to the interests of powers outside the borders of the state that controls it.
In more extreme cases, a country within the "sphere of influence" of another may become a subsidiary of that state and serve in effect as a satellite state or de facto colony. The system of spheres of influence by which powerful nations intervene in the affairs of others continues to the present. It is often analyzed in terms of superpowers, great powers, and/or middle powers.
A sphere of influence is sometimes referred to as a "sphere of interest".
The national interest, often referred to by the French expression raison d'État (English: "reason of State"), is a country's goals and ambitions, whether economic, military, cultural or otherwise. The concept is an important one in international relations, where pursuit of the national interest is the foundation of the realist school.
The USA has at least three major spheres of interest: Latin America, Europe and the Middle East. Any intervention by an outside power in these regions would endanger the USA according to established American foreign policy doctrine. That's a lot of real estate to police.
This background might help to explain why the USA is not rushing off to war despite Iran's attacks on oil production in Saudi Arabia. Iran is a regional power, not a global superpower posing an existential risk to the USA the way the USSR did. If the Soviet Union still existed and was allied with Iran and these strikes against Saudi Arabia were carried out, the world might well have very rapidly found itself veering toward nuclear war. Without the USSR, the "Big One" happened in the Middle East, yet to no consequence (yet).
From a realist perspective, the problem in the Cold War was not primarily ideology or communism, but great power conflict between the USA and the USSR. (Hence Nixon and Kissinger's recognition of communist China in order to counter the power of the USSR.) Communism has disappeared, but with the revival of the Russian economy, Russia has emerged as an adversary to the USA. Idealists in the USA are flummoxed by this, but to a realist, it is all very predictable.
This tells us something about the ascent of Ronald Reagan and Donald Trump to the presidency in the USA. When there is a life-or-death crisis, people tend to become pragmatic and realistic. When Reagan was elected, the Cold War was heating up with the Soviet invasion of Afghanistan. But that invasion was itself a sign of the steep decline in all areas in the USSR (the Soviet leadership was literally senile). The political sphere in the USA seems to have become incorporated into the entertainment industry precisely because there are so few obvious existential threats.
Ironically, things have never been better. More ironically, the cocky sense of overconfidence that the absence of real crisis inspires among the masses is itself a source of instability and grievous concern.
Robert Reich asserts that President Trump is dangerously unstable and needs to be forced from office. What the good professor fails to realize is that thanks to democracy, someone far worse than Trump -- and far more fun! -- would be elected in Trump's place.
The American financier Jim Rogers asserts that the worst economic downturn since the Great Depression will happen in 2020. Japan in particular will be hit hard because of its high national debt, low birth rates and virtual ban on immigration.
In particular, the world has loaded up on debt, and debt is a time bomb.
In the book, he foresees a catastrophic economic downturn within a year or two due to the “unprecedented” level of debt worldwide. The Institute of International Finance reported that global debt soared to $247 trillion in the first quarter of 2018, and Rogers said debt has increased by $75 trillion, or 43 percent, since 2008.
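A quick back-of-the-envelope check of those figures (a sketch in Python; the implied 2008 baseline is derived from the numbers quoted above, not taken from the IIF report itself):

# Sanity check of the quoted global debt figures.
debt_2018 = 247e12   # global debt, Q1 2018, per the IIF figure quoted above
increase = 75e12     # increase since 2008, per Rogers
debt_2008 = debt_2018 - increase
print(debt_2008 / 1e12)            # about 172 (trillion dollars) implied for 2008
print(100 * increase / debt_2008)  # about 43.6 percent, consistent with the "43 percent" claim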
[Chart: Debt to GDP of the most populous nations in the world]
It is as though the Japanese government imagines that borrowed money can be a substitute for a lack of people.
The "replacement rate" for fertility -- at which a population exactly replaces itself from one generation to the next -- is 2.1 children per family. The fertility rate in Japan is almost half of that.
The immigration rate in the USA is 14%. That is, 14% of Americans are foreign born. That is one of the highest rates of immigration in American history.
The immigration rate in Australia is an astonishing 28 percent. Australia has not had a recession in 27 years because immigrants provide both a workforce and demand for goods and services. (The sore points of high immigration are higher housing prices and a fear of immigrants, especially Muslims.)
A black swan event is a catastrophe that nobody foresaw -- although after the event all sorts of "experts" chime in on how it was obviously going to happen for whatever reasons.
The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on an ancient saying that presumed black swans did not exist – a saying that became reinterpreted to teach a different lesson after black swans were discovered in the wild.
A classic black swan event might be the bubonic plague that afflicted Europe in the mid-14th century. No one expected it then, although in retrospect, classic conditions existed for the Black Death to sweep through Europe:
globalization (the plague originated in Mongolia and spread with the Mongol conquest);
overpopulation in Europe (especially with dense cities and with people living near animals, either on farms or next to the wilderness); and
economic crisis caused by climate change.
Because the Black Death did happen and the conditions for it have been identified, another plague devastating humanity would not be a classic black swan event. Rather, a potential for a subsequent plague would be better described as a "white swan" threat in that it would be predictable because the conditions for a plague are now better known.
A white swan threat might remind us of the saying "Fool me once, shame on you; fool me twice, shame on me." Something unexpected and totally unpredictable once happened, and now we should know better. But people forget.
Interestingly, the conditions for the transmission of the plague that existed in the 14th century are widely expected to exist throughout the 21st century. Yet there is little discussion about or preparation for the possibility of a pandemic that might kill half the world's population within our lifetime. It would appear that white swan threats can induce collective amnesia.
Is the potential for Japan's collapse in 2020 also a white swan threat that is being overlooked? After all, Jim Rogers is not a fringe conspiracy theorist or a perpetually gloomy economist with a weird foreign name (Nouriel Roubini). Jim Rogers seems to be a totally positive and friendly American guy doing conventional, plain vanilla economic forecasting.
Jim Rogers' advice to Japan to allow more immigration does not seem to be gaining traction. Jim Rogers does give some advice to young Japanese to deal with their reduced prospects.
In 2017, he said on a financial podcast that “if I were a 10-year-old Japanese, I’d get myself an AK-47 or I’d leave” because of the country’s harrowing economic prospects.
What is Jim Rogers' advice for the rest of the world in terms of dealing with the impending economic and political collapse of Japan? Should we also buy AK-47s? How would a long economic depression change politics in Japan? The world has been afflicted by a wave of populism, nationalism and authoritarianism, yet after 30 years of stagnation, Japan has not succumbed to even a trace of anything like that. The reason might be that for most of the postwar era, the conservative Liberal Democratic Party has had a near-monopoly on power and has institutionalized an elitist form of nationalism and authoritarianism. In the face of economic collapse, that ingrained ultra-conservatism might either prevent or promote the rise of fascism in Japan.
Nassim Taleb might offer some useful advice. Because we cannot predict a black swan event, Taleb argues that the best policy is to reduce vulnerability in general to prepare for potential unexpected catastrophes. For example, a low debt level is advisable because debt blows up in a crisis.
To prepare for a white swan event like the collapse of Japan, it might be imperative to reduce dependency on Japan. South Korea might have an inadvertent head start limiting its dependency on Japan. Because the two countries are now in a trade dispute, South Korea has been forced to consider alternative suppliers for its tech industry.
Two practical questions are suggested by the case of Japan's economic vulnerability:
In particular, how can we reduce dependence on Japan?
More generally, as individuals and as a society, how can we reduce vulnerability?
Perhaps one of the greatest white swan threats today is the possibility of an oil price spike that could disrupt the world economy. The rise of oil prices to $147 per barrel in 2008 was a classic black swan event.
Amnesia seems to have set in since 2008. The initiatives to shift to renewable energy that were launched in 2008 were aimed at achieving greater energy security. Since then, the campaign to shift to renewable energy has been appropriated by the discourse of "global warming". Moreover, the so-called "fracking revolution" in the US oil and natural gas industry has also fostered the collective forgetting of the imperative to adopt local, distributed generation in order to promote energy security. In sum, liberal rhetoric and American ingenuity have together diminished the sense of urgency that existed in 2008 across the political spectrum that an energy transition was necessary for state security. However, periodic crises can trigger a collective remembering of white swan threats. In the face of a threat to the oil supply, people will -- temporarily -- remember that local generation is safer and more reliable than imported energy in any form.
Abstract: The concept of a "host culture" has been appropriated by the tourism industry from sociology, and is used incoherently when applied to Hawaii. Native Hawaiians are typically described as a "host culture" distinct from the "local culture", although no one would consider locals in Hawaii to be "guests". Native Hawaiian culture might better be understood as having been a very influential "hegemonic" culture, despite the relative decline in the fortunes of Native Hawaiians. In the latter half of the 20th century, it was ethnic Japanese in Hawaii whose values and attitudes became covertly culturally hegemonic within the local population, much to the benefit of Hawaii. In the 21st century, mainstream American culture might have a greater purchase on Hawaii's youth than locals might recognize.
The concept of a "host culture"
The term "host culture" does not seem to be used much in an academic context today. One of the earliest glimmers of the use of the term is from post-WW2 sociology in referring to the conflict between immigrants and the dominant culture that they encounter in their new country. The assumption was that a "failure" to assimilate on the part of immigrants led to a racist backlash in mainstream society.
The immigrant-host model was an approach that developed in postwar sociology to explain the new patterns of immigration and racism. It explained the racism of hosts as a reaction to the different cultural traditions of the immigrants, which acted as obstacles for their economic development and social integration.[1] The model assumed that the disruption immigration caused to stability would be solved by the cultural assimilation of immigrants into the dominant culture.
This perspective is rooted in the integrationist model that underlay the civil rights movement of the 1960s, which began to disintegrate in the face of the successes and failures of that movement. Even after the enforcement of voting rights and desegregation, blacks found that old prejudices did not change with the change in laws, and even where attitudes did change, economic conditions did not. Also, among northern whites who supported the civil rights movement when it rolled through the South, the quasi-liberal segregationist slogan of "separate but equal" became appealing when the movement took hold in their own neighborhoods. White supremacy had been squelched in the South, but white and black separatism replaced it. The South transformed and became the "New South" that relinquished explicit racial domination in favor of an unofficial policy of avoidance -- and so did the rest of the USA. Hence the 1970s renaissance of American multiculturalism. This is the current orthodoxy across the political spectrum.
Subsequently, in sociology, the integrationist immigrant-host model fell out of fashion. The perceived problem was no longer immigrants clinging to their culture and refusing to assimilate, but an unfair and exploitative situation in which immigrants were discriminated against on the pretext of their difference. The 1950s immigrant-host model was itself now perceived as yet another example of the racist "blame-the-victim" mentality of the dominant society.
Since the 1970s, however, the assumptions in this model have been increasingly discarded in the sociology of race relations. The core idea of the model that the immigrants' children would gradually assimilate and, thus, that racism and racial inequality would cease proved false. The model was criticized and blamed for reflecting and, even, reinforcing the racist assumptions by describing the cultures of immigrants as social problems and ignoring the role structural inequality plays in their subjugation.
The very concept of a "host culture" seems to have disappeared from the social sciences. In fact, there is no coherent definition of a "host culture" out there anywhere. The concept of a host culture seems to have been appropriated by the travel industry, and is no longer used in an academic world that would at least take pains to define its terms. The typical travel industry website makes a distinction between tourists or "guests", on the one hand, and the "host culture" of the tourist destination, on the other hand. The host culture is the mainstream culture of those destinations. Notably, the host culture is not the culture of indigenous people in places like Australia or Brazil, but rather the mainstream, dominant culture of those countries.
The "host culture" in Hawaii
The one exception to this trend is found on the Travel Weekly website, where the author distinguishes between the "host culture" and the "local culture". He takes Hawaii as his example.
https://www.travelweekly.com/Arnie-Weissmann/Host-culture-local-culture
One consequence of globalization is an acceleration in the gap that's widening between host cultures and local cultures. The implications for the travel industry are significant.
Hawaii is the destination that perhaps provides the clearest distinction between a host culture and a local culture.
The Hawaiian traditions, language, customs and, above all, the aloha spirit, define its host culture. Sacred as that is, the influences of everything and everyone who have passed through or settled on the islands has made an impact on modern Hawaiian life and shaped a distinctive local culture that's quite different from the host culture.
World War II GIs can take credit, if they want, for the prevalence of Spam on local menus. And Japanese tourists' demand for familiar ingredients has helped the local cuisine evolve in, well, less processed ways.
In Hawaii, host culture and local culture coexist in relative harmony, in large measure because few in the local culture challenge the importance of the host culture, and the host culture's spirit is so inherently welcoming that many outside influences are not viewed as conflicting with traditional beliefs.
Today, both local and host cultures are being erased by the spread of corporate mass culture -- Starbucks, McDonalds, Walmart.
But elsewhere, globalization has been accused of contributing to the dilution of dominant host cultures while concurrently contributing to the homogenization of local cultures.
This part of the story is familiar: Fast-food outlets, retail stores and hotels have become brands without borders. Popular culture, from singers to movie franchises, is similarly ubiquitous.
It would seem that the world is moving toward sameness just as it's getting easier for more people to move around and explore the world's differences.
As far back as the 1950s, the French historian Fernand Braudel spoke of "global civilization" as a modernizing force that was transforming the world into somewhat similar societies. There is some irony in the French fear of globalization -- in particular, the spread of American corporate mass culture -- as a threat to the French sense of distinctiveness. Historically, it was other countries -- notably, German-speaking lands -- that chafed under the universal appeal of a French culture that aggressively broadcast itself throughout Europe and the world. Paralleling the split between Protestantism and Catholicism, Germans distinguished between "Kultur" as distinctive inward spiritual values that were under threat from the outward, material aspects of "civilization", such as political and legal systems, economies, technology, infrastructure, even rationality and science. (Adam Kuper's 1999 book "Culture: The anthropologist's account".)
Historically, the role of mass culture is downplayed in normative debates on how immigrants should behave in relation to society. In the USA, three models vied for legitimacy:
Anglo-dominant assimilation, in which immigrants gratefully conform -- at least outwardly -- to the sociocultural norms of the society that has admitted them;
fusion and synthesis of various cultures, from which a "New American" culture would be born; and
multiculturalism, in which subcultures maintain their distinct identity.
Interestingly, these three competing models are also the conceptual parameters of the 1990s debates on globalization, and whether the emerging world order fosters diversity or is really an "artificially intelligent" way of imposing Anglo-American domination (e.g., Benjamin Barber's "Jihad versus McWorld: How Globalism and Tribalism Are Shaping the World"). This itself mirrors Cold War-era foreign policy debates on whether the 20th-century American project ("liberal internationalism") of "liberating" the world by promoting the spread of liberty and democracy is itself merely a sneaky form of imperialism (so insidious that it even fools the American leadership that buys into it).
These normative debates on how people should act might fail to notice the descriptive realities of how ethnic groups actually do act. The reality of how groups behave turns out to be a mix of all three models: ethnic groups simultaneously conform, create new cultures and also remain distinct. Moreover, there is a huge element of "global civilization" and "mass culture" in ethnic cultures distinct from those three models. Making things even more complicated, "global civilization" is composed to a remarkable extent of creative, even subversive, proudly local popular cultures of lower-class minorities -- for example, the rap music of urban America -- that have been packaged and sold by multinational corporations that are usually associated with bland, generic "mass culture".
But back to the distinction between "host cultures" and "local cultures" made above by the travel writer. Usually the host culture is associated with the majority population ("British culture"), and local cultures refer to particular subsets of that population (rural Yorkshire). The typical use of these terms when applied to Hawaii, however, reverses this definition -- the local culture is associated with the majority, and the host culture is associated with Native Hawaiians who are a subset of that local population. Logically, Native Hawaiians should be considered a "native" element within the local culture, and this local culture would properly be considered the host culture of Hawaii.
However ... Hawaii might be a special case. As the travel writer explained, there is a sense that the Native Hawaiian culture exerted and still exerts a formative influence on the locality. Beyond the cultural influence of Native Hawaiians on contemporary Hawaii, early influences persist in the legal-political institutions, from the very strong powers granted to Hawaii's governor (as opposed to Texas, where the governor has more limited power) down to simple government functions (in Hawaii, the deed to a house is kept not by the owner but by two separate State bureaucracies, an arrangement dating back to the Kingdom). This is to say that Hawaii's local population is to some extent still operating within a framework that is a holdover from a host society that was predominantly Native Hawaiian. Economically, Native Hawaiians might have lost ground in relation to the rest of Hawaii's population, but they -- as did their ancestors -- still exert an outsized cultural influence on the local population.
The concept of "cultural hegemony"
This confounds the notion of "cultural hegemony", usually understood as the upper classes imposing their values, attitudes and perspectives onto the general population. At one time in Hawaii, the Hawaiian upper classes did do just that, but that culture has persisted despite the waning of the Hawaiian upper classes. As stated above, it is ordinary Hawaiians who still exert a strong cultural influence on the local population.
Matthew ("Matt") King is a Honolulu-based attorney and the sole trustee of a family trust of 25,000 pristine acres on Kauai. The land has great monetary value, but is also a family legacy.
By the end of the movie, Matt King refuses to sell the land, arguing that his family was originally Native Hawaiian, but married into a long line of white Americans until they themselves became white Americans. The final scene in the movie involves multiple reconciliations.
Later, the three are at home sitting together sharing ice cream and watching television, all wrapped in the Hawaiian quilt Elizabeth had been lying in.
The film rankled a critic in Hawaii, Chad Blair, because of its elitism.
Things seemed promising early on in the film: shots of boxy apartment buildings, congested freeways, Diamond Head devoid of rainfall, homeless people downtown, Chinatown shoppers, squatters on beaches.
“Paradise?” says Clooney’s character. “Paradise can go fuck itself.”
I thought: Tell it like it is, Alexander Payne!
But that all changed, right about the time I learned that Clooney’s surname is King and that he’s descended from Hawaii royalty and haole aristocracy and that he lives in a beautiful house in Manoa (or Nuuanu or Tantalus; someplace like that) with a swimming pool and the kids go to Punahou and they hang out at the Outrigger Canoe Club and they own a beach and mountain on the Garden Isle.
“The Descendants” had gone from hibiscus and Gabby Pahinui to a dashboard hula doll and Laird Hamilton in just minutes, and the film kept right on descending.
Blair implies that the book was written by a member of Hawaii's white elite, and that the non-white elite loved it.
On March 1, the Hawaii State Senate honored Kaui Hart Hemmings, the author of the book version of “The Descendants” and an advisor to the film. (And stepdaughter of former GOP state Sen. Fred Hemmings.)
“We are very proud of Kaui Hart Hemmings and the role she played in showcasing Hawaii in her novel. She shares the story of life in our islands through the eyes of a Kamaaina, which everyone in the world would be able to appreciate,” said Senate Majority Leader Brickwood Galuteria, who presented the certificate. That’s from a Senate press release.
“Scenery from Kauai’s iconic properties and landscapes are beautifully photographed and highlighted in the film, thanks to Kaui and the producers of the movie,” said Sen. Ron Kouchi of Kauai. “We are pleased with being able to share our island lifestyle with those who watch the movie.”
“Our island lifestyle”? More like the richest 1 percent, the ones teeing up at Princeville.
The local politicians loved the movie, and maybe that in part reflects a complacent attitude toward hierarchy in Hawaii that winds through Hawaii's colonial era but was even more profound in the pre-contact era. The flip side of the Native Hawaiian legacy was authoritarianism, and one can see that strain in Hawaii's political culture, which otherwise exhibits a casual populism and egalitarianism.
On the other hand, Chad Blair applauds how a little bit of social criticism was sneaked into the movie in George Clooney's big speech, which is included in Chad Blair's article (in big bold letters).
We didn’t do anything to own this land — it was entrusted to us. Now, we’re haole as shit and we go to private schools and clubs and we can hardly speak Pidgin let alone Hawaiian.
But we’ve got Hawaiian blood, and we’re tied to this land and our children are tied to this land.
Now, it’s a miracle that, for some bullshit reason, 150 years ago, we owned this much of paradise. But we do. And, for whatever bullshit reason, I’m the trustee now and I’m not signing.
In the case of Hawaii, the term "host culture" might be understood as the culture that is secretly hegemonic amidst a mix of all sorts of ethnicities. "The Descendants" does a pretty good job of showing how among those who descend from the old white elite, there is still some sort of residue of values and attitudes that derive from Native Hawaiian culture. That might be something that people in Hawaii -- and their politicians -- could recognize when they saw the movie.
California uber alles?
A sharp contrast to "The Descendants" might be found in the 2010 comedy-drama "The Kids Are All Right".
Nic (Annette Bening) and Jules (Julianne Moore) are a married same-sex couple living in the Los Angeles area. Nic is an obstetrician and Jules is a housewife who is starting a landscape design business. Each has given birth to a child using the same sperm donor.
The kids locate the donor, Paul, who has an affair with Jules and wants to marry her and adopt the kids. This causes a crisis, but by the end of the movie, there is a reconciliation.
The next morning, the family drives Joni to college. While Nic and Jules hug Joni goodbye, they also affectionately touch each other. During the ride home, Laser tells his mothers that they should not break up because they are too old. Jules and Nic giggle, and the film ends with them smiling at each other and holding hands.
The actual feel of the final scene of the movie is not as warm as that description might suggest. Joni breaks down as she watches her family drive away because her parents may split, but also perhaps because she is now an adult and on her own. It is a very different final scene from that of "The Descendants". It's much more classically American, and all very Californian.
["The Kids Are Alright", 2010, final scene, "You're too old."]
Does California exert an outsized cultural influence on suburban America? And insofar as California is culturally a proverbial "postcard from the future", does California help the rest of America deal with social changes that California has already undergone before other places (for example, the blended family of the "Brady Bunch")? That is, through a television industry that is very Californian, does the rest of middle-class, suburban America find a culture that it can relate to despite the provincialism of its own surroundings? For example, the TV show "Freaks and Geeks" was set in the Midwest in the 1980s, but that region was still adjusting to realities that had already beset more urban places like California in the 1970s -- realities like the rise of lonely latchkey children.
A little setup: Bill Haverchuck (Martin Starr) is having a terrible time of it. He's a latchkey kid. He’s horrible in phys-ed class. And he learns, in this episode, that his mom is seriously dating his gym teacher, whom he hates.
First, the music. “I’m One” by the Who, from the 1973 album “Quadrophenia.” It builds from mournfulness (“I’m a loser / No chance to win”) to a defiant chorus. And it's a great example of how “Freaks and Geeks” chose its soundtracks. The episode is set in 1981, but it avoids on-the-nose ’80s-song choices. Paul Feig, the show’s creator, once told me that the thing about the early ’80s in the Midwest was that they were really still the ’70s.
The old switcheroo
If the dominant values can be a product of social groups that are not dominant (Native Hawaiians in Hawaii) or are not physically present in the locality (Californian influence in the Midwest, or French hegemony in traditional Europe), could one group's secret influence be replaced by that of another group?
It could be that through the latter half of the 20th century, cultural hegemony was exerted in Hawaii by local ethnic Japanese who disproportionately made up the schoolteachers and politicians in this period (in fact, a potential majority).
The Japanese in Hawaii (simply Japanese or “Local Japanese”, rarely Kepanī) are the second largest ethnic group in Hawaii. At their height in 1920, they constituted 43% of Hawaii's population.[2] They now number about 16.7% of the islands' population, according to the 2000 U.S. Census. The U.S. Census categorizes mixed-race individuals separately, so the proportion of people with some Japanese ancestry is likely much larger.
During elections in Hawaii, politicians are constantly talking about the welfare of the "keiki" (children) and the "kupuna" (elders). Hawaiian language is being utilized, but the target audiences are Asians in general and the coveted Japanese voting bloc in particular. The image and identity of Hawaii might be located in a Polynesian past, but during the second half of the 20th century this masked the hegemony of Asian American values and attitudes. Similarly, Japanese American soldiers in WW2 supposedly suffered the highest level of fatalities in that war among American soldiers, and the entrenched rhetoric of patriotic self-sacrifice that one still encounters in Hawaii among the local Japanese ironically serves as a vehicle for perpetuating culturally conservative Asian values under the guise of Americanization.
Asian cultural hegemony in Hawaii might have been crucial in terms of accelerating Hawaii's economic development in the second half of the 20th century. As it became clear in the 2016 American presidential election, there is in places like rural West Virginia a segment of the population that identifies with family, community, tradition, religion and hard physical labor, and it resists urbanization and formal education. In Hawaii, the Asian and especially the Japanese population identify with those culturally conservative values, but they also put a premium on professional orientation and technical education, and this serves as a crucial bridge into the future. Without this cultural bridge between the rural community-oriented working class and the individualistic middle-class suburbs, there might not have been that much difference between the economic fortunes of Hawaii and a place like Puerto Rico. Moreover, the idea that you can "have it both ways" in Hawaii means that almost anyone can live in Hawaii and in their own little niche enjoy a very high quality of life (if they can afford it).
That was then and this is now
That being said, the 21st century might be another matter entirely. Cultural hegemony in Hawaii might have shifted yet again, this time toward mainstream American culture.
For example, the orientation of children of Asian immigrants in Hawaii today seems quite different from the past. In the past, Asian immigrants in Hawaii would typically assimilate into local culture. Today, they assimilate just as much into American culture. For example, there is a local literature in Hawaii that stresses the dysfunctional continuities with the past. This contrasts with an Asian American literature in the continental USA that stresses the profound chasms between the generations that signify not just a generation gap, but thousands of years of difference ("The Joy Luck Club"). Today, however, the young writers in Hawaii who are the educated offspring of immigrants engage in a form of literature that is really typically Asian American. The torch has passed to a new generation.
The economist Nouriel Roubini argues that there is going to be a recession in 2020. Unfortunately, at some point the USA will not be able to use the usual stimulus measures to revive the economy. This is because there will be several "permanent supply shocks" at work, including trade and technology conflict between the USA and China, as well as military conflict between the USA and Iran that would lead to rising oil prices. In the face of inflation, the USA will not be able to stimulate the economy in the midst of recession because this could trigger hyperinflation.
All three of these potential shocks would have a stagflationary effect, increasing the price of imported consumer goods, intermediate inputs, technological components and energy, while reducing output by disrupting global supply chains. Worse, the Sino-American conflict is already fuelling a broader process of deglobalisation, because countries and firms can no longer count on the long-term stability of these integrated value chains. As trade in goods, services, capital, labour, information, data and technology becomes increasingly balkanized, global production costs will rise across all industries.
Roubini is known as "Dr. Doom" because he is a pessimist. As far back as 2005, he accurately predicted the meltdown in real estate and financial markets. Roubini worked in the Clinton administration, and is associated with the Democratic Party.
Here is a list of economists who accurately predicted the 2008 meltdown:
It is said that very few economists foresaw the financial crisis of 2008. Actually, quite a few topnotch economists predicted the 2008 crisis. The problem was that few listened.
Just prior to that crisis, the premier economist within the Republican Party was David Rosenberg. He accurately explained why the USA was headed into the worst recession since the Great Depression. He is again saying something like that now. For instance, there is a bubble in corporate debt.
Since the last recession, nonfinancial corporate debt has ballooned to more than $9 trillion as of November 2018, which is nearly half of U.S. GDP. As you can see below, each recession going back to the mid-1980s coincided with elevated debt-to-GDP levels—most notably the 2007-2008 financial crisis, the 2000 dot-com bubble and the early '90s slowdown.
Through 2023, as much as $4.88 trillion of this debt is scheduled to mature. And because of higher rates, many companies are increasingly having difficulty making interest payments on their debt, which is growing faster than the U.S. economy, according to the Institute of International Finance (IIF).
On top of that, the very fastest-growing type of debt is riskier BBB-rated bonds—just one step up from “junk.” This is literally the junkiest corporate bond environment we’ve ever seen.
Combine this with tighter monetary policy, and it could be a recipe for trouble in the coming months.
Along with the bubble in corporate debt, there is a bubble in consumer spending. Earnings are down, and spending is being financed by borrowing.
“What no one seems to talk about is the underlying fundamentals behind the consumer are actually deteriorating before our very eyes,” the firm’s chief economist and strategist said Tuesday on CNBC’s “Futures Now. ”
Yet the latest economic data suggests the consumer, considered the main driver of the U.S. economy, is on solid ground. Commerce Department figures show retail sales in July rose 0.7%, after a 0.3% increase in June.
In a recent note, Rosenberg criticized the government’s retail report. He estimated the bullish retail sales number was completely financed by credit and, therefore, unsustainable.
According to Rosenberg, the problems lie in the Bureau of Labor Statistics’ real average weekly earnings decline in July of 0.3% from June.
“It’s actually been flat to down now in six of the past eight months,” said Rosenberg. “I’m actually wondering how long the consumer is actually going to keep it up and hold the glue together for the economy when real incomes are starting to subside as much as they are.”
In order to maintain a middle-class lifestyle, Americans are going into debt.
Cars, college, houses and medical care have become steadily more costly, but incomes have been largely stagnant for two decades, despite a recent uptick. Filling the gap between earning and spending is an explosion of finance into nearly every corner of the consumer economy.
Average tuition at public four-year colleges, however, went up 549%, not adjusted for inflation, according to data from the College Board. On the same basis, average per capita personal health-care expenditures rose about 276% over a slightly shorter period, 1990 to 2017, according to data from the Centers for Medicare and Medicaid Services.
And average housing prices swelled 188% over those three decades, according to the S&P CoreLogic Case-Shiller National Home Price Index.
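Those cumulative increases can be translated into rough annualized growth rates (a sketch; the 30-year window for tuition and housing and the 27-year window for health care are inferred from the quoted passages, and the figures are nominal, not inflation-adjusted):

# Convert a cumulative percentage increase into an implied annual growth rate.
def annualized(cumulative_increase_pct, years):
    factor = 1 + cumulative_increase_pct / 100
    return 100 * (factor ** (1 / years) - 1)

print(f"tuition:     {annualized(549, 30):.1f}% per year")  # about 6.4%
print(f"health care: {annualized(276, 27):.1f}% per year")  # about 5.0%
print(f"housing:     {annualized(188, 30):.1f}% per year")  # about 3.6%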
Growing debt reflects confidence in the economy, but when borrowers bet wrong and the economy sours, debt blows up in their face -- and in the economy. The crucial question is whether the debt is paying for investments or for luxuries. Going into debt to buy status symbols makes people poorer, and this increases wealth inequality.
Taking on a mortgage to buy a house that could appreciate, or borrowing for a college degree that should boost earning power, can be wise decisions. Borrowing for everyday consumption or for assets such as cars that lose value makes it harder to save and invest in stocks and real estate that tend to create wealth. So the rise in consumer borrowing exacerbates the wealth gap.
A case study suggests that typical borrowing is for both investments and luxury.
Jonathan Guzman and Mayra Finol earn about $130,000 a year, combined, in technology jobs. Though that is more than double the median, debt from their years at St. John’s University in New York has been hard to overcome.
The two 28-year-olds in West Hartford, Conn., have about $51,000 in student debt, plus $18,000 in auto loans and $50,000 across eight credit cards. Adding financial pressure are a baby daughter and a mortgage of around $270,000.
“I’m normally a worrier, but this is next-level stuff. I’ve never been more stressed,” Mr. Guzman said. “Never would I have thought with the amount we make I would have these problems.”
They no longer dine out several times a week. Other hits to their budget were hard to avoid, such as a wrecked car that forced them to borrow more.
First of all, these are not middle-class people; they are upper-middle class. Second, in an earlier generation, middle-class people did not eat out several times a week; they ate out maybe several times a month or several times a year. And when they did dine out, they ate food like chicken and waffles.
One consistent pattern in the rising cost of cars, college, houses and medical care is that everything has changed, but the way of thinking remains stuck in the past.
Cars (actually, trucks and SUVs)
The rising price of automobiles is telling, and so is the rising method of paying for cars -- borrowing.
Nowhere is the struggle to maintain a middle-class lifestyle more apparent than in cars. The average new-car price in the U.S. was $37,285 in June, according to Kelley Blue Book. It didn’t deter buyers. The industry sold or leased at least 17 million cars each year from 2015 to 2018, its best four-year stretch ever. Partly because of demand satisfied by that run, sales are projected to be off modestly this year.
How households earning $61,000 can acquire cars costing half their gross income is a story of the financialization of the economy. Some 85% of new cars in the first quarter of this year were financed, including leases, according to Experian. That is up from 76% in the first quarter of 2009.
Car trouble
And 32% of new-car loans were for six to seven years. A decade ago, only 12% were that long. The shorter-term loans of the past gave many owners several years of driving without car payments.
Now, a third of new car buyers roll debt from their old loans into a new one. That’s up from roughly 25% in the years before the financial crisis. The average amount rolled into the new loan is just over $5,000, according to Edmunds, an auto-industry research firm.
Leasing, which often entails lower payments than purchase loans, accounted for 34% of financed new vehicles in the first quarter, up from 20% a decade earlier, according to Experian. Drivers of used cars also finance them—more than half did last year.
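To see why stretched-out loan terms are so tempting -- and so costly -- here is a standard fixed-rate amortization calculation (a sketch; the 6% APR is an assumed rate for illustration, and only the $37,285 average price comes from the figures above):

# Monthly payment and total interest for the same car financed over 5 vs. 7 years.
def monthly_payment(principal, annual_rate, months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

price = 37285
for months in (60, 84):
    pmt = monthly_payment(price, 0.06, months)
    interest = pmt * months - price
    print(f"{months} months: ${pmt:,.0f}/month, ${interest:,.0f} total interest")
# roughly $721/month and $6,000 in interest at 60 months,
# versus $545/month and $8,500 in interest at 84 months

The lower monthly payment is what makes a $37,000 vehicle feel affordable on a $61,000 household income, even though the total cost is substantially higher.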
The primary driver of the increasing price of automobiles is improved safety features. The gorgeous cars of the 1950s would today be seen as flying coffins. Some of the money that Americans save by not winding up in the hospital after a car accident must now be paid up front when they buy the car.
So, dent a fender or smack a mirror and you might need to repair or replace one of these sensors. AAA found that in a minor front or rear collision involving a car with ADAS technology the repair costs can run as high as $5,300. That’s about $3,000 more than repairing the same vehicle without the safety features.
Consequently, the chances of dying in an automobile accident continue to fall (even as vehicle miles traveled have increased).
In the 1960s and ’70s, when odometers typically registered no more than 99,999 miles before returning to all zeros, the idea of keeping a car for more than 100,000 miles was the automotive equivalent of driving on thin ice. You could try it, but you’d better be prepared to swim.
But today, as more owners drive their vehicles farther, some are learning that the imagined limits of vehicular endurance may not be real limits at all. Several factors have aligned to make pushing a car farther much more realistic.
As with safety features, the increasing longevity of motor vehicles is partly a result of government regulation.
Customer satisfaction surveys show cars having fewer and fewer problems with each passing year. Much of this improvement is a result of intense global competition — a carmaker simply can’t allow its products to leak oil, break down or wear out prematurely.
But another, less obvious factor has been the government-mandated push for lower emissions.
“The California Air Resources Board and the E.P.A. have been very focused on making sure that catalytic converters perform within 96 percent of their original capability at 100,000 miles,” said Jagadish Sorab, technical leader for engine design at Ford Motor. “Because of this, we needed to reduce the amount of oil being used by the engine to reduce the oil reaching the catalysts.
“Fifteen years ago, piston rings would show perhaps 50 microns of wear over the useful life of a vehicle,” Mr. Sorab said, referring to the engine part responsible for sealing combustion in the cylinder. “Today, it is less than 10 microns. As a benchmark, a human hair is 200 microns thick.
“Materials are much better,” Mr. Sorab continued. “We can use very durable, diamondlike carbon finishes to prevent wear. We have tested our newest breed of EcoBoost engines, in our F-150 pickup, for 250,000 miles. When we tear the engines down, we cannot see any evidence of wear.”
Another reason for more durable cars is competition from Japan.
The trend toward better, longer-lasting cars seems to have begun way back in the ’60s, when the first imports from Asia started to encroach on American and European carmakers’ sales figures.
Another factor is that cars from the ’60s and ’70s were susceptible to rust and corrosion — many literally fell apart before their engines and transmission wore out. But advances in corrosion protection, some propelled by government requirements for anticorrosion warranties, have greatly reduced that problem.
“Competition is part of it,” said Peter Egan, a former auto mechanic and now editor at large of Road & Track magazine. “Japanese cars kind of upped everyone’s game a bit. With some exceptions, the engines would go a long time without burning oil or having other major problems.”
Hyundai and Kia, the South Korean carmakers, now include 100,000-mile/10-year warranties on their cars’ powertrains. If a relatively abusive driver can count on no major mechanical failures before 100,000 miles, a careful owner can — and does — expect his car to go much farther.
In a sense, American cars slowly morphed into German cars -- more solid, more safe, more expensive. Americans, however, have not adopted German habits of automobile ownership. Germans buy cars with cash, not credit, and keep the car forever, replacing every part of a car until it essentially becomes a new car. The American mentality is rooted in the 1950s practice of trading in a car for a better car every few years, a practice based on two assumptions: 1) the income of the car owner will rise in the interval, and 2) a car is the equivalent of a disposable razor blade that needs to be swapped out.
Also, Americans generally don't buy cars anymore: seven of the top ten best-selling vehicles in the USA in 2018 were trucks and SUVs. While Americans swear that they need these big vehicles for practical reasons, these vehicles are actually much less practical.
In sum, automobile technology has changed, but Americans haven't.
College
The pattern of new realities afflicted by old ways of thinking also holds for the rising cost of college.
Back in the 1950s, only about 5% of American adults completed college. That percentage now approaches 30%.
Relatively few people went to college, and they tended to be either the wealthiest people or the smartest people. Most likely, the banker was the son (and grandson) of a banker, went to Harvard and got mediocre grades, and focused instead on getting into the right clubs and marrying the right woman from the right family. In fact, studying was seen as unseemly. Elite universities were complicit in the American aristocracy's self-replication.
Given the decades-long history of grade inflation, particularly in Ivy League schools in the non-engineering disciplines, a “gentleman’s C” is proxy for “it should have been an F but we are too civilized to tarnish your record as such.”
In graduate schools this is perhaps even more pronounced.
I went to a top 30 or so law school where the grades were curved to a strict 3.17.
The saying there was “It’s hard to get an A; it’s even harder to get a C.” In practice the distribution was something like 15% As, 75% Bs, 10% Cs in each class.
I never heard of anyone actually failing, but it was well known that a C+ was the equivalent of a D, and a C- was tantamount to an F.
This stands in contrast to those students who graduated from elite schools and who were not from the elite. The educational career of Robert McNamara is a case in point.
Immediately thereafter, McNamara worked a year for the accounting firm Price Waterhouse in San Francisco. He returned to Harvard in August 1940 to teach accounting in the Business School and became the institution's highest paid and youngest assistant professor at that time.
UC Berkeley charged no tuition up until Governor Ronald Reagan attempted to raise tuition in 1968. Berkeley might be understood to have been a free public school for the intellectual elite (like City College of New York) that has become a partly subsidized private elite university for the elites in general.
It is often stated that there is a hierarchy of intellectual rigor in the university.
STEM fields like math are the most difficult.
The social sciences and humanities are not quite as hard.
Business degrees are the easiest.
Historically, the aristocracy that went to college majored in things like history and business -- things that would teach practical skills and critical thinking skills useful for leadership without being unduly difficult. The ordinary person, in contrast, would learn vocational skills, often on the job. The intellectual elite not born into wealth might have a trajectory very similar to McNamara. They might major in a practical and rigorous field like economics but they would also be fascinated by abstract, rigorous fields like math and philosophy. (In the 2003 documentary "The Fog of War", McNamara says that he had never been so excited in his life as when he discovered philosophy in college.)
Rich students study general things; smart students study difficult things; everyone else learns a trade. The Bush dynasty is a case in point. President George W. Bush majored in history at Yale; his father, President George H.W. Bush, majored in economics at Yale; his father, Senator Prescott Bush, attended Yale (major unknown); his father, the industrialist Samuel Bush, attended Stevens Institute of Technology; his father, Rev. James Smith Bush, attended Yale; his father, Obadiah Bush, was a schoolteacher, prospector and abolitionist; his father, Timothy Bush, was a blacksmith. As the Bushes moved up in the world, they went from the trades at the bottom to a generalized education for the aristocracy.
As more people go to college, this pattern might have become inverted.
In order to graduate from college, academically challenged students whose parents did not go to college gravitate to the social sciences and humanities, which have a reputation for being easy, "Mickey Mouse" majors. Ironically, this is the kind of academic career that once typified the aristocracy. One problem is that this liberal arts education for ordinary students is funded today by student loans, not by trust funds from Swiss bank accounts. Another problem is that unless one is going to Yale and one's father is President of the United States of America, such a degree is only really good for going to graduate school. Graduate school is perfectly fine for the likes of Robert McNamara, but does it really make sense for 90% of the population whose IQ is below 120?
In the meantime, the aristocracy has developed its own quasi-vocational career path that includes a traditional aristocratic education. In the UK, the PPE course of study is exemplary.
In the 1980s, the University of York went on to establish its own PPE degree based upon the Oxford model; King's College London, the University of Warwick, the University of Manchester, and other British universities later followed. According to the BBC, the Oxford PPE "dominate[s] public life" (in the UK).[7] It is now offered at several other leading colleges and universities around the world. More recently Warwick University and King’s College added a new degree under the name of PPL (Politics, Philosophy and Law) with the aim to bring an alternative to the more classical PPE degrees.
The PPE major is a weird hybrid in that it is modern vocational training for an established aristocracy.
Oxford PPE graduate Nick Cohen and former tutor Iain McLean consider the course's breadth important to its appeal, especially "because British society values generalists over specialists". Academic and Labour peer Maurice Glasman noted that "PPE combines the status of an elite university degree – PPE is the ultimate form of being good at school – with the stamp of a vocational course. It is perfect training for cabinet membership, and it gives you a view of life". However he also noted that it had an orientation towards consensus politics and technocracy.[4]
Geoffrey Evans, an Oxford fellow in politics and a senior tutor, argues that the Oxford course's success and consequent over-demand are a self-perpetuating feature of those in front of and behind the scenes in national administration: "all in all, it's how the class system works". In the current economic system, he bemoans the unavoidable inequalities besetting admissions and, thereby, the enviable recruitment prospects of successful graduates. The argument itself is intended as a paternalistic ethical reflection on how governments and peoples perpetuate social stratification.[7]
Stewart Wood, a former adviser to Ed Miliband who studied PPE at Oxford in the 1980s and taught politics there in the 1990s and 2000s, acknowledged that the programme has been slow to catch up with contemporary political developments, saying that "it does still feel like a course for people who are going to run the Raj in 1936... In the politics part of PPE, you can go three years without discussing a single contemporary public policy issue". He also stated that the structure of the course gave it a centrist bias, due to the range of material covered: "...most students think, mistakenly, that the only way to do it justice is to take a centre position".[4]
To what extent is the quasi-vocational training of the elites a disaster? After the 2016 election, Barack Obama lectured Mark Zuckerberg on why Facebook's mission to "connect the world" and its policy to "move fast and break things" were maybe not such a good idea, yet Zuckerberg only came away confused. It is as though Zuckerberg, focused on computer science, never learned how to think while he was at Harvard.
Silicon Valley's Peter Thiel argues that college is obsolete and needs to be disrupted. The standard reply to Thiel is that the irrelevance of college only applies to a few Silicon Valley success stories like Bill Gates, Steve Jobs and Zuckerberg, men who dropped out of elite universities or prestigious colleges. But Zuckerberg's cluelessness is evidence that even though Thiel is wrong, the usual criticism of Thiel is even more wrongheaded. That is, future decision-makers in particular should be like Robert McNamara: exposed to, fascinated by and engaged in the humanities. In fact, Steve Jobs would have insisted as much. (Jobs's great disappointment in life was dropping out of Reed College: having been adopted by a working-class family, he lacked the funds and the educational preparation, as he angrily stated in his Stanford speech.)
The point is that in the past, very few people -- the aristocracy, men of talent -- went to college. The current assumption that everyone should go to college is based on assuming that what was once true for those few who did go will also apply to everyone.
"Men of talent have rising incomes thanks to their college diplomas." This is a true statement.
"The Aristocracy is generally not so talented, but they can handle college." This is likewise true.
"The liberal arts are an excellent preparation for life." This is true for both the aristocracy and the Talented Tenth.
"There is a correlation between a college degree and higher salaries." This is true for men of talent.
But these statements do not apply to the 90% of the population who neither possess superior ability nor attended elite preparatory academies. The reality is that as more people go to college, the college degree becomes less useful in the marketplace, and this is particularly true for the liberal arts.
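One way to see why the statements above stop generalizing is a toy selection-effect simulation -- a sketch under invented numbers (the wage formula, the modest $2,000 "true" degree premium and the 60% mass-attendance rate are assumptions for illustration, not estimates). When only the talented attend college, the observed degree-wage gap mostly measures who attended; when attendance broadens, the gap shrinks toward the degree's modest intrinsic value:

```python
import random
import statistics

random.seed(0)

def degree_wage_gap(attend_rule, n=100_000, true_premium=2_000):
    """Toy model: wages track ability; a degree adds only a small true premium.
    attend_rule(ability) decides who goes to college."""
    grads, non_grads = [], []
    for _ in range(n):
        ability = random.gauss(100, 15)        # IQ-like ability score
        wage = 20_000 + 400 * ability          # wages mostly reflect ability
        if attend_rule(ability):
            grads.append(wage + true_premium)  # the degree adds a modest bump
        else:
            non_grads.append(wage)
    return statistics.mean(grads) - statistics.mean(non_grads)

# Old regime: only the "Talented Tenth" (ability above roughly 120) attends.
elite_gap = degree_wage_gap(lambda ability: ability > 120)
# New regime: 60% attend, regardless of ability.
mass_gap = degree_wage_gap(lambda ability: random.random() < 0.6)

print(f"Degree wage gap, elite-only college: ${elite_gap:,.0f}")
print(f"Degree wage gap, mass college:       ${mass_gap:,.0f}")
```

Run as written, the elite-only gap comes out several times larger than the mass-attendance gap, even though the degree's assumed contribution is identical in both regimes: the correlation between a degree and a higher salary was largely a statement about the people who held the degree.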
The thesis here is that everything has changed, but our way of thinking is stuck in the past. The irony is that the previous system of higher education might serve as a better model.
Elite but easy education for the aristocracy, as long as they pay for it (Bushes);
Elite, rigorous education for the Talented Tenth, free of cost (McNamara); and
Vocational training for the remaining 90%, free of cost (although it might be called "college").
There would be no student debt in this model and, in fact, the cost of "college" would fall. Students would attend state universities for free and they would take classes in the humanities and social sciences as part of the general curriculum, but only the Talented Tenth would be allowed to major in such liberal arts disciplines. In fact, 90% of students would be encouraged to first attend a (free) community college and attain a vocational degree. They would be encouraged to patronize their local library.
Houses
The rising cost of houses in the USA has many unexpected causes. Home sizes keep increasing, even while family sizes continue to decrease. Homeowners, many of them liberals, oppose upzoning their neighborhoods for higher density, which would lower home prices and make homes affordable for young families. Lonely seniors will not downsize and move out of their big houses -- a big-house-in-the-suburbs lifestyle that is also becoming popular among younger childless couples.
One common theme in discussions of rising home prices is young Millennials in conflict with the older generation(s). The classic stereotype of Millennials is the minimalist lifestyle: small, spare rented apartments in the city with few possessions. However, studies show that Millennials somehow expect to someday own a house even larger than their parents' house.
As Albert Einstein said about nuclear war, everything has changed except the way we think.
Medical care
The only things I know about healthcare policy are from reading articles by the surgeon Atul Gawande.
However, I do know a little about healthcare spending on pets. During the worst days of the 2009 recession, consumer spending fell in every category except two: vitamins and pets. People ate less food and cheaper food, and supplemented their diets with vitamins. They reexamined their lives, concluded that the best thing in their lives was their pet, and so spent more on pet food and veterinary care.
This is a "crisis trend", in which a trend is exposed and accelerated by crises like war and recession.
Let's contrast this with the past, specifically with the BBC classic "All Creatures Great and Small" about a veterinary clinic in rural northern England.
The primary job of veterinarians used to be to help large farm animals like cows and horses give birth. The primary tool of the trade was a stethoscope.
Today, dogs and cats are primarily companion animals, and as members of the family, people are willing to spend large amounts of money on them. The tools of the trade now include MRIs, insulin shots and stem cell therapy.
We talk about the "rising cost of technology", but technology drives costs down over time. What rises are expectations. People import expectations from their knowledge of human life spans and human medical technology and apply them to their pets. The lifespan of a serious working dog might be five years (the typical lifespan of a wolf in the wild). As a pet, a dog might now live 12 years (the lifespan of a wolf in captivity), but people assume that there is something wrong with this, that 12 years is too short a life. So while veterinary technology has been transformed, people have unrealistic expectations based on earlier ways of thinking (about humans).
Trends versus predictions
The forecasts of preeminent economists feel a bit like fortune-telling. The fortune teller will amaze the customer with insider knowledge and then make a very specific prediction about exactly what will happen in the future and precisely when it will happen.
The insider knowledge consists of processes and trends that no one else is observing. Out come the charts about previous downturns and the statistics that everyone else has forgotten. It seems plausible and illuminating.
The predictions have an odd certainty. These two countries will go to war and interest rates will go down by so many points. After so many months, the credit bubble will then burst.
It seems difficult enough to observe or at least try to apprehend trends without making predictions.
One general trend now is for the USA finally to move away from global leadership, a shift set in motion by the demise of the Soviet Union and the end of the Cold War almost three decades ago. "America First" isolationism might not have been inevitable, but the old role of global policeman began to ring hollow over a generation. In this situation, one would expect increasingly open conflict between American allies like Japan and South Korea, and regional trade conflicts would ensue. So the collapse of free trade might be much worse than Roubini expects from the current tariff spats between the USA and China.