The house below in Morocco is completely autonomous.
(Autonomous House B – Green Solutions Awards 2019)
https://youtu.be/U8alMuoCM2s
For example, one reason it uses very little electricity is that it relies on the sun for lighting.
Opaque blinds outside are manually lowered to diffuse direct sunlight.
In fact, the glass doors and windows on the south side sit behind a
wall that opens mechanically, which both shields against and channels
the sunlight.
Notably, three-quarters of the house's materials were sourced locally from craft workshops.
.
Some of these features might make this kind of home less appealing for most people.
For example, aside from their garage door, most people might not want sections of their house to be mechanized and mobile.
It feels like a novelty and one more thing that can go wrong.
Also, sourcing material from local craftsmen is fine for a unique architectural project, but not for mass home construction.
Also, does the average person want a home in which one of the walls in each room is a solid piece of glass?
This creates a sense of vulnerability.
It also feels alien.
In their domestic architecture, people seem to prefer tradition, domesticity, security, and convention.
It might be more appealing to create such an autonomous house as a hybrid with a kasbah, which is a traditional fortified house.
In fact, an autonomous house coupled to a kasbah would make the house even more fortified.
And that is what most people want from a house.
Almost no one is thinking about global warming when they buy or build a house.
The clear craze was a marketing fad from the late 1980s to the early 2000s that often equated transparency with purity. Inspired by Ivory’s
“99 and 44/100 percent pure” campaign for bath soap, and by low-calorie
or “light” beverages, sodas were reformulated in the 1980s and 1990s to be
free of artificial dyes, such as the caffeine-free and
preservative-free Crystal Pepsi. Personal hygiene products were then relaunched as clear, dye-free gels, and many electronics were given transparent cases.
Since the development of plastics, there has always been the odd
transparent product designed to showcase its internal
mechanics.
Since the introduction of Plexiglas in the late 1930s, devices have been made with clear shells to expose the electromechanical components inside. At the 1939 New York World’s Fair, a 1939 Pontiac Deluxe Six with a clear Plexiglas body was on display.[1][2] Peaking
in the 1960s and 1970s, transparent-shelled devices fell out of fashion
until the clear craze of the late 1980s. Following the breakup of the Bell System in
the mid-1980s, a surge of manufacturers began producing phones, many of
them transparent, with neon lights that flashed when the phone rang.
The clear craze was grafted onto the diet craze of the 1980s in the marketing of low-calorie beer.
A trend of “light” beer with
fewer calories started in the 1960s. Then, color was identified in the
marketing industry as a “tool for visual persuasion” toward a product’s
purity and health consciousness. Ivory soap was adapted from its classic milky solution and its slogan of “99 and 44/100 percent pure”.[4] This
led up to the clear craze starting in the 1980s. To showcase the reduction
of calories or artificial flavors, many companies released clear
versions of their products.
However, the transparency of the beer was linked to its “lite”-ness, not to its healthiness.
It was in the soft-drink market of the early 1990s that the clear craze came into its own in terms of the marketing of purity.
Crystal Pepsi entered the clear cola market on April 13, 1992,[5] featuring no preservatives or caffeine, although the existing Pepsi also did not have preservatives, and a caffeine-free version was already available.[4][6] Coca-Cola soon responded with Tab Clear.[7] In August 1992, Coors announced Zima, a clear, carbonated malt beverage, and in 1993, Miller released Miller Clear to mixed reviews.
However, it turned out that Americans considered transparent things to be quirky novelty items.
.
Why did marketers believe that purity (or, at least, its appearance) would be a selling point in the early 1990s?
At the time, marketers explained that the USA was emerging from the
HIV-AIDS pandemic, and Americans (supposedly) yearned for purity.
(At least, that’s what some guy on the PBS NewsHour explained back then.)
Historically, pandemics brought in their wake a new emphasis on cleanliness and purity.
.
One can see a precursor of this kind of hygiene-oriented design in the aftermath of the 1918 “Spanish flu” influenza pandemic.
In architecture, there was a shift toward modernist minimalism that emphasized light, fresh air, and openness.
With this impulse in mind,
architect Walter Gropius founded the Bauhaus School in Weimar, Germany,
in 1919. The Bauhaus aimed to bridge art and design, training students
to reject frivolous ornamentation in order to create art objects that
were practical and useful in everyday life. Marcel Breuer, who started
at the Bauhaus in 1920 and eventually taught there, designed furnishings
that historians believe were influenced by the flu. In contrast to the
heavy, upholstered furniture that was popular at the time, Breuer’s
minimalist pieces were made of hygienic wood and tubular steel, able to
facilitate cleaning. Lightweight and movable, works like the designer’s
bicycle-inspired Wassily Chair and Long Chair met modern sanitary needs by being easy to disinfect and rid of dust build-up.
“The
rise of modern architecture and design in the 1920s was inextricably
linked to the prevailing discourse on health and social hygiene,” says
Monica Obniski, curator of decorative arts and design at Atlanta’s High
Museum of Art.
Closets were added and expanded to replace hard-to-clean armoires.
White kitchen tiles and linoleum replaced wallpaper and wooden floors.
Powder
rooms—or half baths on the ground floor of a house near the front
door—are also the result of the attempt to prevent the spread of
infectious diseases in the early 20th century.
.
Yet, the clear craze did not catch on in the wake of the HIV-AIDS pandemic.
In fact, just the opposite seemed to happen.
The 1990s were characterized by the aesthetics of darkness.
.
In 1989, “Batman” was released into theaters.
It had the dark gothic theme typical of director Tim Burton’s films.
The aesthetics of the 1989 Batman film seemed to have been inspired
by the post-punk “goth” fashion that emerged in London in the 1980s.
During the emergence of the goth subculture in 1980s London,[13] many genres of music played a large role in establishing fashion trends – fashion spelled
out the music an individual would listen to. Because of its origins,
the major musical inspirations during the early emergence of the goth
subculture were likewise English bands. Bands that have influenced gothic fashion over the years include Bauhaus, The Cure, Sisters of Mercy, and Siouxsie and the Banshees.[14]
The
Batcave Club was a nightclub in London from 1982 to 1986 that hosted
live music and paid homage to all things goth. The interior, as
described by Kelly Rankin, included cob-webbed ceilings and a real
coffin at the entrance. She says that “The Batcave became iconic because
it aided the progression of this movement”.
Within the mainstream media, there were only a few hints of goth fashion.
Nevertheless, the dark aesthetics that characterized goth fashion
became a counterpoint to the generic nothingness of the 1990s pop
culture.
.
What’s strange about the Batman franchise is that Hollywood seems to be stuck in the 1990s.
Rather than a passing fad that exists on the margins, the gothic
Batman aesthetic has persisted, even while it has altered and evolved
and has gone mainstream.
In fact, it seems to have become ever more dominant within the cinematic culture at large.
Remarkably, for a dark genre that seems to have taken over the
American film industry, from the very beginning of the franchise, the
various takes on Batman were bizarre.
Granted, that strangeness might be inherent to the Batman universe.
But the strangeness was extreme from the beginning.
.
For example, Tim Burton’s 1992 “Batman Returns” has the earmarks of a
Nazi propaganda film, with obvious anti-Jewish motifs that have nothing
to do with the Batman universe.
Where did such fascistic themes come from, especially considering how
Tim Burton films typically show sympathy for outsiders and a disdain of
conformity (for example, 1990’s “Edward Scissorhands”)?
The supervillain Penguin has an obvious physical resemblance to Nazi propaganda images of the Jewish financier.
In the Batman comics, the Penguin is portrayed as an underworld kingpin who disguises himself as a respectable businessman.
The Penguin is a Gotham City mobster who fancies himself the number one “Gentleman of Crime”. He is most often seen wearing a monocle, top hat, and tuxedo while carrying his signature umbrella. The character usually appears as a short, fat man with a long nose.
The Penguin owns and runs a nightclub called the Iceberg Lounge which provides a cover for his criminal activity.
According to his creator Bob Kane, the character was inspired by the advertising mascot of Kool cigarettes
in the 1940s: a penguin with a top hat and cane. Co-creator Bill Finger
thought that the image of high-society gentlemen in tuxedos was
reminiscent of emperor penguins.
However, the Penguin in “Batman Returns” was a dweller in the sewers, not a wealthy criminal with business interests.
It seems that the Penguin of the comic books was broken up into two characters in the movie:
A deformed, sewer-dwelling criminal pariah known as The Penguin.
The ruthless businessman Max Shreck, who broke the law but was no kingpin.
One can see the fission of a famous character into two main characters in the sequels of franchises.
For example, Captain James T. Kirk of “Star Trek” was broken into the
two main characters Captain Jean-Luc Picard and Commander Will Riker
for “Star Trek: TNG”.
Hence, Picard and Riker are strangely in sync with one another in so many scenes, such as when they are simply walking around.
Kirk = Picard + Riker
.
Another answer that helps to explain the racism of “Batman Returns” might be found in the bat as a motif in vampire movies.
In a sense, “Batman Returns” is a crossover movie that taps deeply into one strain of the vampire genre.
This reflects the tendency and necessity of commercial filmmakers to cannibalize and recycle EVERYTHING.
It also reflects the tendency of inspired artists like Tim Burton to
tap into the collective unconscious and boldly combine elements and
images from adjacent genres (for example, “The Nightmare Before
Christmas”).
.
One variant of the vampire movie is the creepy, exotic foreigner who brings over vermin (rats) and pestilence in his wanderings.
This cinematic genre stretches back a century to the 1922 film “Nosferatu”.
It was exactly 100 years ago, in
March 1922, that Berlin’s movers and shakers attended the premiere of FW
Murnau’s Nosferatu: A Symphony Of Horror, and saw the nightmarish Count
Orlok springing bolt upright from his coffin. Those unsuspecting
viewers could well have witnessed the first great jump scare in the
history of horror movies. They had certainly witnessed its first great
monster. An unofficial adaptation of Bram Stoker’s Dracula – hence the
Count’s name-change from Dracula to Orlok – this silent masterpiece
pioneered techniques and established horror tropes that have been used
ever since. But the creation of the iconic Orlok, played by Max Schreck,
is its supreme achievement. He is, says Cristina Massaccesi, in her
guide to Nosferatu for the Devil’s Advocates horror history series, “the
Ur-Vampire, the father of all undead creatures lurking in the darkest
recesses of a cinema screen”.
He is also one of the few monsters
to be instantly recognisable, even in silhouette. Murnau makes
spine-tingling use of his shadow – and once you see the outline of
Orlok’s domed, bald head, his pointed ears, his hunched shoulders, his
stick-thin body and his snaking talons, you know who’s on the prowl.
Then you see his gaunt, chalk-white face. More animal than human, Orlok
has huge bushy eyebrows, sunken eyes, a beaky nose, and a rodent’s
incisors in the centre of his mouth (far odder than the sharp canines
possessed by later screen vampires). As Kevin Jackson says in
Constellation of Genius, his survey of 1922 in the arts, Orlok “must be
the strangest and most hideous leading man in all cinema”.
This portrayal of the vampire as a big, foreign nocturnal rat has been criticized as anti-Jewish.
Indeed, in the annals of European xenophobia, there is a historical
association of Jews with unclean, cunning rats that can survive
anywhere.
Again, the name of the actor who played the vampire Count Orlok in the 1922 film was Max Schreck.
And again, Max Shreck is the name of Christopher Walken’s ruthless businessman in 1992’s “Batman Returns”.
So, the vampire theme — and, with it, the undercurrent of racism —
does seem to carry over from “Nosferatu” to “Batman Returns” in direct
and obvious ways.
.
However, a closer analysis of the making of the 1922 film shows how
the filmmakers — who had no history of racism — were focused on
embodying a generalized fear of foreign contagion.
The psychologist Paul Rozin has explained that fear of contagion lies at the heart of the human disgust response.
Some commentators have condemned this “vermin-like creature” as an anti-Semitic caricature. J Hoberman, a film critic who specialises in Jewish representation, notes in a 2020 essay for Tablet magazine that Orlok is an
“ancient, tremendously powerful creature, a sort of humanoid rodent
given an imposing hooked nose, who communicates with his minions in a
mysterious code, which includes several Hebrew letters as well as the
Star of David”. On the other hand, Hoberman argues,
Nosferatu may project a primal fear of “foreign contagion” which isn’t
specifically fixated on Jewishness. “Nosferatu’s script was
written by a Jew, Henrik Galeen,” he wrote. “The cast included several
Jewish actors… [and] there is no suggestion that Murnau or Grau, who
weren’t Jewish, were anti-Semitic. Indeed, the love of Murnau’s life,
poet Hans Ehrenbaum-Degele, killed in the war, was the son of a Jewish
banker.”
Rather than racism, the filmmakers might have been tapping into the trauma of WW1 and the 1918 influenza pandemic that followed.
Others have theorised how the
abominable, ratty design of Orlok had to do with the traumas of the war –
a conflict that Grau described as a “cosmic vampire” – and the
subsequent Spanish flu epidemic: in the hold of a ship bound for Germany from Transylvania, the Count is accompanied by plague-carrying rats.
However, there were various vague connections in the Victorian mind between Jews, vampires, and contagion.
The novel’s representation of
vampirism has been discussed as symbolising Victorian anxieties about
disease. The theme is discussed with far less frequency than others
because it is discussed alongside other topics rather than as the
central object of discussion.[86] For
example, some connect its depiction of disease with race. Jack
Halberstam points to one scene in which an English worker says that the
repugnant odour of Count Dracula’s London home smells like Jerusalem, making it a “Jewish smell”.[87] Jewish people were frequently described, in Victorian literature, as parasites;
Halberstam highlights one particular fear that Jews would spread
diseases of the blood, and one journalist’s description of Jews as
“Yiddish bloodsuckers”.[88] In contrast, Mathias Clasen draws parallels between vampirism and sexually-transmitted diseases, specifically syphilis.[89][m] Martin
Willis, a researcher focused on the intersection of literature and
disease, argues that the novel’s characterisation of vampirism makes it
both the initial infection and resulting illness.
.
Another type of vampire movie portrays the vampire not as repulsive, but as irresistibly attractive.
This is the image of the vampire as a suave, aristocratic gentleman
that was personified in the Hungarian-American actor Bela Lugosi in the
1931 film “Dracula”.
Lugosi’s accent, as mentioned by
Newman, was heard by the world in 1931, when he starred in Tod
Browning’s Hollywood film of Dracula. Ever since, when we think
of vampires, we tend to think of him: his Hungarian lilt, his oiled
black hair, his bow tie, and his wonderfully swishable cape. Lugosi’s
suave, seductive Dracula would influence countless other actors,
from Christopher Lee in the Hammer films to Adam Sandler, who voices
Drac in the first three Hotel Transylvania cartoons. But if Lugosi’s
romantic interpretation of Dracula is the most influential, Schreck’s
repulsive Orlok casts his own shadow across the genre. “There
are two main strains of cinematic vampires,” says Professor Stacey
Abbott, author of Undead Apocalypse: Vampires and Zombies in the 21st
Century. “You have the Bela Lugosi tradition of the attractive, alluring
vampire, but Orlok set the template for the macabre, pestilence-ridden
vampire who is associated with disease and plague. He’s who you turn to when you want to really capture the monstrosity of the vampire.”
.
To be sure, the very first vampire story in English was all about an
aristocratic vampire committed to an eternity of seducing — and
exsanguinating — women.
It was John William Polidori’s “The Vampyre”, published in 1819.
Polidori got the idea for the story from a short tale told by his friend Lord Byron.
According to John Polidori, Byron
intended to have Darvell reappear, alive again, as a vampire, but did
not finish the story. Polidori’s account of Byron’s story in a letter to
his publisher in 1819 indicates it “depended for interest upon the
circumstances of two friends leaving England, and one dying in Greece, the other finding him alive upon his return, and making love to his sister.”
.
The next famous work of vampire fiction after Polidori’s did not come out until 1872.
It was Sheridan Le Fanu’s “Carmilla” and was set in the southeast of Austria (Styria).
This time, it was about a FEMALE aristocratic vampire (sort of) seducing and killing young women.
Carmilla is an 1872 Gothic novella by Irish author Sheridan Le Fanu and one of the early works of vampire fiction, predating Bram Stoker‘s Dracula (1897) by 26 years. First published as a serial in The Dark Blue (1871–72), the story is narrated by a young woman preyed upon by a female vampire named Carmilla, later revealed to be Mircalla, Countess Karnstein (Carmilla is an anagram of Mircalla). The character is a prototypical example of the lesbian vampire,
expressing romantic desires toward the protagonist. The novella notably
never acknowledges homosexuality as an antagonistic trait, leaving it
subtle and morally ambiguous. The story is often anthologised, and has been adapted many times in film and other media.
.
Bram Stoker’s “Dracula” came out 25 years after “Carmilla”, and was heavily influenced by that predecessor.
Interestingly, both Le Fanu and Stoker were Irishmen.
In fact, the emergence and rise of a parasitic foreign aristocracy
might have been an unspoken theme in “Dracula” — whether the author knew
it or not.
Dracula became the subject of critical interest into Irish fiction during the early 1990s. Dracula is set largely in England, but Stoker was born in Ireland, which was at that time a British colony, and lived there for the first 30 years of his life. As a result, a significant body of writing exists on Dracula,
Ireland, England, and colonialism. Calvin W. Keogh writes that Harker’s
voyage into Eastern Europe “bears comparison with the Celtic fringe to
the west”, highlighting them both as “othered” spaces. Keogh notes that
the Eastern Question has been both symbolically and historically associated with the Irish question.
In this reading, Transylvania functions as a stand-in for Ireland.
Several critics have described Count Dracula as an Anglo-Irish landlord.
In this case, “Dracula” would be an inverted “Big House” novel.
The Irish Big House novel was typically written by wealthy English
landlords in Ireland, who wrote fondly of the peculiar and amusing
Celtic peasantry in whose midst they dwelt in luxury.
The
Big House novel is a peculiarly Irish phenomenon and is based on an
Irish reality, namely the big house where the landlord (often English)
lived, surrounded by the poor Irish peasants. The novel, as written in
the eighteenth and nineteenth centuries, was about the situation that
then prevailed. However, it continued on into the twentieth century,
well after the big house had all but ceased to exist, at least as a
social phenomenon. There were several early exponents, mainly though
certainly not exclusively women. The best-known may well be Maria Edgeworth and her best-known big house novel is undoubtedly Castle Rackrent, published in 1800 and set in 1782. Nineteenth century novelists in this genre include Somerville and Ross, particularly their Big House at Inver, Charles Lever, William Carleton, particularly his The Squanders of Castle Squander and Lady Morgan.
By the 20th century, the Irish had taken over the Big House novel and subverted it every which way.
Stoker and Le Fanu might have unknowingly been pioneers of this literary maligning of a parasitic foreign aristocracy.
In the twentieth century, while
there have certainly been conventional big house novels, there have also
been ones that aim to mock or subvert the concept. Though, of course,
not a novel, Brendan Behan‘s The Big House, originally a radio play and later a stage play, even has the big house as a character. Aidan Higgins‘ first novel, Langrishe, Go Down shows the decline of the big house culture, a theme that will be found in other twentieth century big house novels. John Banville‘s Birchwood shows the decline of the inhabitants of the big house, with lunacy, chaos and death to the fore. Elizabeth Bowen‘s Last September takes this one step further, by having the big house symbolically burned down at the end. Other novels in this genre include Molly Keane‘s Loving and Giving, Good Behaviour and Time After Time, Padraic Colum‘s Castle Conquer, Joyce Cary‘s Castle Corner and A House of Children, based on his own upbringing, Mervyn Wall‘s Leaves for Burning, Julia O’Faolain‘s No Country For Young Men, David Thomson‘s Woodbrook, Jennifer Johnston‘s How Many Miles to Babylon?, William Trevor‘s Fools of Fortune and Thomas Kilroy‘s The Big Chapel. Paul Murray‘s An Evening of Long Goodbyes blows the idea apart.
And what would it be like if it were an Irishman who was the landlord of an English official?
One more theory might help to explain the relationship between the
theme of seduction and the vampire as aristocrat in literature.
The French philosopher Michel Foucault asserted that there was a
shift in the modern world away from the display of brute force by
authorities toward careful regulation.
Foucault called this modern mode of governance “bio-power” because it was all about refined population control.
Bio-power had a dual focus on regulating the forces of production at
the macro level and the intricacies of re-production at the micro level.
Foucault has suggested that the
two major forms of bio-power are the discipline of the body and the
regulation of population. Sex has become such a preoccupation in the
modern world because it deals with both these forms of bio-power.
In contrast, the old regime was all about blood in terms of the
spilling of blood as a source of power and on the legitimacy of
aristocratic pedigree (bloodlines).
Foucault characterizes the
transition between the right of death and power over life as a
transition from a “symbolics of blood” to an “analytics of sex.”
Previously, blood was taken as a symbol of power. Blood lines and purity
of blood were all important, the right of death was exercised by
spilling blood, and so on. Now, power is exercised through sex. This
interest in sexuality has rendered possible unprecedented knowledge,
power, and control over a population. This transition was far from
smooth, and Foucault identifies a symbolics of blood lingering in the
racism of the Nazis and their demands for racial “purity.” In
psychoanalysis, sexuality is also read as being born out of earlier laws
based on blood ties.
.
This theory deserves a little more articulation to explain its relevance to the modern vampire genre.
Again, bio-power consists of modern disciplinary practices over individuals and the regulations of entire populations.
Sexuality lay at the intersection of the regulation of individuals and the collective.
This was at variance with traditional governance in European
societies, which consisted of a policy of (benign?) neglect punctuated
by occasional violent suppression.
The traditional European order was akin to slash-and-burn
agriculture in its annual harvest of taxes, its impulse to colonize new
territory, its relative indifference to maintenance, and its periodic
bouts of destruction.
The modern state is more like an aggressively micro-managed factory farm.
Where discipline is about the
control of individual bodies, biopolitics is about the control of entire
populations. Where discipline constituted individuals as such,
biopolitics does this with the population. Prior to the invention of
biopolitics, there was no serious attempt by governments to regulate the
people who lived in a territory, only piecemeal violent interventions
to put down rebellions or levy taxes. As with discipline, the main
precursor to biopolitics can be found in the Church, which is the
institution that did maintain records of births and deaths, and did
minister to the poor and sick, in the medieval period. In the modern
period, the perception grew among governments that interventions in the
life of the people would produce beneficial consequences for the state,
preventing depopulation, ensuring a stable and growing tax base, and
providing a regular supply of manpower for the military. Hence they took
an active interest in the lives of the people. Disciplinary mechanisms
allowed the state to do this through institutions, most notably perhaps
medical institutions that allowed the state to monitor and support the
health of the population. Sex was the most intense site at which
discipline and biopolitics intersected, because any intervention in
population via the control of individual bodies fundamentally had to be
about reproduction, and also because sex is one of the major vectors of
disease transmission. Sex had to be controlled, regulated, and monitored
if the population was to be brought under control.
However, under the velvet glove of modern regulation lies the old
brutal iron fist of state power — which reveals itself when regulation
falters.
In particular, the harsh spectacles of blood and gore that were so
common in the traditional world persist most openly in the modern use of
military force in foreign policy.
There is another technology of
power in play, however, older than discipline, namely “sovereign power.”
This is the technology we glimpse at the beginning of Discipline and Punish,
one that works essentially by violence and by taking, rather than by
positively encouraging and producing as both discipline and biopolitics
do. This form of power was previously the way in which governments dealt
both with individual bodies and with masses of people. While it has
been replaced in these two roles by discipline and biopower, it retains a
role nonetheless at the limits of biopower. When discipline breaks
down, when the regulation of the population breaks down, the state
continues to rely on brute force as a last resort. Moreover, the state
continues to rely on brute force, and the threat of it, in dealing with
what lies outside its borders.
.
The old order of blood and the new order of population control (and economic productivity) are incompatible.
Perhaps in psychological terms this contradiction would trigger deep cognitive dissonance when they collided in literature.
The mixing of the two social orders, ancient and modern, in
literature would inspire in the modern reader a mix of horror,
revulsion, and fascination.
The image of the aristocratic vampire as seducer would intersect in
the most dissonant way between the old politics of blood, bloodlines,
and death, and the new order of bio-power.
Earlier in Western history, such an image might have had little emotional resonance at all.
Umberto Eco illustrated in “The Name of the Rose” that the murder
mystery would have made no sense for people in the Middle Ages.
Likewise, the vampire novel would have seemed nonsensical in an earlier period.
For example, the eastern European folktales of vampires seem more like
simple zombie stories than elaborate semi-tragic tales of undead
aristocrats.
For a modern person, the nobility no longer commands respect, and is
instead viewed as a quaint romantic relic (“Downton Abbey”) or a
decrepit, if pitiable, fossil (“Brideshead Revisited”).
Similarly, the Christian view of sexuality, like the attitude toward
the accumulation of wealth, was that it was a distraction from the true
goal of salvation.
Of course, material possessions and sexuality are daily existential
realities in any society, but in Europe of old they were not topics of
public discussion.
Today, money and sexuality are front and center in all things, be
they academic disciplines, advertising, social media, journalism, and so
on.
Modern life involves the public celebration of worldly fruitfulness —
which is more in line with the ethos of the Old Testament than the New
Testament.
However, unlike any section of the Bible or any other ancient literature, the realistic portrayal of violence has been pushed back to the margins where sex and material acquisition once dwelt.
For example, the 1986 movie “Henry: Portrait of a Serial Killer” is much less violent than many television shows.
In fact, the worst of the violence in the movie is off-camera — much
the way violence is offstage yet vivid and real in the most horrific
Greek tragedies.
But because the film involves the realistic depiction of a prolific serial killer, the movie was initially banned in the UK and given an X rating in the USA.
That’s very different from American and British society of yesteryear, when the fighting of animals was a common spectacle.
For instance, a century ago in rural America, one might find the local mayor and the local clergy at a dog fight.
That kind of blood sport has been pushed off into the margins of
society, while the discourses and images of sex and money have moved
from the periphery to the focus.
Only rarely in the modern world do we glimpse the spectacles of violence that were central in the ancient world.
For example, the film critic Roger Ebert complained that Mel Gibson’s
2004 film “The Passion of the Christ” was the most violent movie he had
ever seen.
A more accurate criticism is that the film’s violence was REALISTIC.
The realistic portrayal of the Stations of the Cross is a part of
traditional Christianity that has been edited out of modern
consciousness.
In contrast, a John Wick film is much more violent, but it is patently unrealistic.
.
Again, as incompatible as the old and new orders may be, the two
orders still exist in the modern world, with the old brutal sovereign
power ensconced in international relations.
In fact, the two orders quietly intermingle in domestic policy in the form of biological discrimination.
While it was stated that the old brutal order has been relegated to
foreign policy, it still exists within the modern state in the form of
racial policy that picks winners and losers.
For Foucault, there is a
mutual incompatibility between biopolitics and sovereign power. Indeed,
he sometimes refers to sovereign power as “thanatopolitics,” the
politics of death, in contrast to biopolitics’s politics of life. Biopolitics
is a form of power that works by helping you to live, thanatopolitics
by killing you, or at best allowing you to live. It seems impossible for
any individual to be simultaneously gripped by both forms of power,
notwithstanding a possible conflict between different states or state
agencies. There is a need for a dividing line between the two, between
who is to be “made to live,” as Foucault puts it, and who is to be
killed or simply allowed to go on living indifferently. The most
obvious dividing line is the boundary between the population and its
outside at the border of a territory, but the “biopolitical border,” as
it has been called by recent scholars, is not the same as the
territorial border. In Society Must Be Defended, Foucault
suggests there is a device he calls “state racism,” that comes variably
into play in deciding who is to receive the benefits of biopolitics or
be exposed to the risk of death.
That is, the mixing of the old aristocratic culture that emphasized
purity of blood and the new governing model of population regulation
results in “racism”.
Foucault does not use this term in any of the works he published himself, but nevertheless does point in The Will to Knowledge to
a close relationship between biopolitics and racism. Discourses of
scientific racism that emerged in the nineteenth century posited a link
between the sexual “degeneracy” of individuals and the hygiene of the
population at large. By the early twentieth century, eugenics, the
pseudo-science of improving the vitality of a population through
selective breeding, was implemented to some extent in almost all
industrialized countries. It of course found its fullest expression in
Nazi Germany. Nevertheless, Foucault is quite clear that there is
something quite paradoxical about such attempts to link the old theme of
“blood” to modern concerns with population health. The essential point
about “state racism” is not then that it necessarily links to what we
might ordinarily understand as racism in its strict sense, but that
there has to be a dividing line in modern biopolitical states between
what is part of the population and what is not, and that this is, in a
broad sense, racist.
.
This new order of bio-power might sit uneasily even in the USA, which
is often considered the most modern and most Western of countries.
The new order might not even exist in technologically modernized
non-Western societies like Japan (or, at least, not in the standard way
it exists in the West).
For example, take the case of the Frenchman who wrote of his travels around the world.
In the USA, he discovered a society saturated by public images of sexuality.
Later, he discovered that Americans are actually quite prudish and squeamish.
In contrast, in Japan, the public realm is devoid of traces of sex.
And yet Japan has long been a sexually permissive society, in which
promiscuity serves as a safety valve in a highly regimented social
order.
In America, sex is seemingly everywhere in public — but not so much in private.
In Japan, sex is invisible — but it’s going on all over.
Interestingly, in contemporary Japan, dog fighting is not only largely legal, but is considered a family outing.
Many Japanese proudly consider dog fighting to be one of their great traditions.
Blood is still front and center in Japan.
In fact, in Japan, even membership in organized-crime groups is
legal, and these groups exist openly, with their own offices (although
their crimes are certainly illegal).
But neither JAWS nor any other
group has made a concerted effort to ban dogfighting—for two reasons.
The first is the perception that it is a Japanese tradition. The “dog
men,” as dogfighters are called, say it’s part of their country’s
cultural history, much like whaling or dolphin hunting. Enough members of the parliament agree with them to block changes to the law, Yamaguchi says.
The
other reason is dogfighting’s deep ties to the yakuza. “In Japan,
anything to do with dogs is run by gangsters,” Oliver says. “In the old
days, they made money from prostitution and gunrunning, but now they
make a huge profit in the pet business.”
.
After director Tim Burton was replaced in the Batman franchise, sexuality became more of an obvious theme.
Director Joel Schumacher’s 1995 “Batman Forever” and 1997 “Batman & Robin” veered into campiness.
That’s especially strange in terms of how the wholesome clear craze was expected to be the legacy of the HIV-AIDS pandemic.
However, to be fair, camp is part of the traditional Batman universe.
The Dark Knight Trilogy consists of Batman Begins (2005), The Dark Knight (2008), and The Dark Knight Rises (2012), all directed by Christopher Nolan. Collectively grossing over $2.4 billion at the worldwide box office, the trilogy has been ranked among the greatest ever made.
.
The critic A.O. Scott writes about the progression of the Batman movies from fun to grimness.
The Batman — not just any Batman! — is less the enemy of this state of things than its avatar. On television in the 1960s,
Batman was playful. Later, in the Keaton-Clooney-Kilmer era of the ’80s
and ’90s, he was a bit of a playboy. In the 21st century, through
Christopher Nolan’s “Dark Knight” trilogy and after, onscreen
incarnations of the character have been purged of any trace of joy,
mischief or camp. We know him as a brooding avenger, though not an
Avenger, which is a whole different brand of corporate I.P.
But going back to the clear craze of the early 1990s, Batman in any
form was not supposed to happen because of the jitters of the HIV-AIDS
crisis.
The psychological theory is that in the aftermath of pandemics,
purity becomes popular, and that deathly trauma inspires a shift toward
peace and harmony.
If anything, the 1990s should have been a clean-cut, wholesome Superman decade.
It could be that Americans were not deeply disturbed by the HIV-AIDS
crisis, which killed over 700,000 Americans over a 40-year period.
After all, more Americans than that died from Covid over a two-year
period — and throughout the pandemic there was a stubborn sense of
normality in the USA.
It could be that AIDS only reached an emotional crisis level in certain high-risk groups and communities.
.
It might be worth noting that tattoos took off in popularity in the mainstream during the 1990s.
Middle-class Americans were swearing off tobacco in the 1990s, and
tattoos were becoming the more salubrious symbolic replacement for
telegraphing one’s coolness.
But not much earlier, getting a tattoo was (falsely) considered a high-risk activity in terms of contracting HIV-AIDS.
Did this aura of danger in fact enhance the status of tattoos in the 1990s?
Likewise, did a film industry traumatized by loss embrace darkness as
a result, rather than embrace the wholesomeness, transparency, and
hygiene that the clear craze represented?
Moreover, like tattoos, did this darkness represent a new sense of lighthearted creative indulgence, rather than mourning?
In any case, the tendency in film and popular culture was toward increasing darkness.
.
So, why did Superman lose his cinematic monopoly to Batman?
It was the Superman franchise that preceded all the other current superhero films with the launch of “Superman” in 1978.
The Superman franchise movies were essentially the only major superhero movies until “Batman” came out in 1989.
.
Superman and Batman seem to be the two iconic superheroes in the
American imagination (with Spider-Man as a strong third contender).
But now, even Superman has gone Batman in a turn toward darkness.
Indeed, perhaps all superhero movies have become dark Batman movies (and superhero movies have come to dominate American cinema).
It’s like the 1990s never really ended.
Like the title of the 1995 film, it’s been “Batman Forever”.
.
What led to this Batman-ification of everything?
The first film in the Batman franchise came out in 1989.
It was in November of that year that the Berlin wall was torn down.
The first few Batman movies were dark but goofy, and one would expect
that the franchise would become more and not less optimistic with time.
After all, in the American view, human history had finally culminated in the Americanization of the world.
Instead, the Batman franchise only became darker and darker.
And then, like gangrene, this gloom spread through all superhero
movies — which themselves have seemingly taken over mainstream cinema.
.
To understand how superhero movies have become so dark one must first understand how Superman and Batman comprise a dualism.
That is, the relationship between Batman and Superman and all the
other superheroes can be understood through the young Nietzsche’s theory
of ancient Greek tragedy.
The book in which Nietzsche explained this theory is his 1872 “The Birth of Tragedy from the Spirit of Music”.
For the Greeks, tragedy represented an affirmation of life in the face of the inherent meaninglessness of the universe.
Nietzsche found in classical Athenian tragedy an art form that transcended the pessimism and nihilism of
a fundamentally meaningless world. The Greek spectators, by looking
into the abyss of human suffering and affirming it, passionately and
joyously affirmed the meaning of their own existence. They knew
themselves to be infinitely more than petty individuals, finding
self-affirmation not in another life, not in a world to come, but in the
terror and ecstasy alike celebrated in the performance of tragedies.
Nietzsche distinguished between an Apollonian world of order versus a Dionysian world of flux and change.
Dionysian: reality as disordered and undifferentiated by forms.
Apollonian: reality as ordered and differentiated by forms.
Nietzsche claims life always
involves a struggle between these two elements, each battling for
control over the existence of humanity. In Nietzsche’s words, “Wherever
the Dionysian prevailed, the Apollonian was checked and destroyed….
wherever the first Dionysian onslaught was successfully withstood, the
authority and majesty of the Delphic god Apollo exhibited
itself as more rigid and menacing than ever.” And yet neither side ever
prevails due to each containing the other in an eternal, natural check
or balance.
Tragedy united the two by combining Apollonian dialogue with Dionysian music.
Nietzsche argues that the tragedy
of Ancient Greece was the highest form of art due to its mixture of both
Apollonian and Dionysian elements into one seamless whole, allowing the
spectator to experience the full spectrum of the human condition. The Dionysian element was to be found in the music of the chorus,
while the Apollonian element was found in the dialogue which gave a
concrete symbolism that balanced the Dionysian revelry. Basically, the
Apollonian spirit was able to give form to the abstract Dionysian.
Prior to the development of tragedy, Greek cultural life was split
between the Apollonian forms (like statues) and Dionysian excess (the
drunken revelry of festivals).
In a sense, there were only the two gods, Apollo and Dionysus.
The pantheon of the various gods was merely a manifestation of the Apollonian principle of form, order, and individuation.
Before the tragedy, there was an
era of static, idealized plastic art in the form of sculpture that
represented the Apollonian view of the world. The Dionysian element was
to be found in the wild revelry of festivals and drunkenness, but, most
importantly, in music. The combination of these elements in one art form
gave birth to tragedy. He theorizes that the chorus was originally
always satyrs, goat-men. (This is speculative, although the word “tragedy” τραγωδία is contracted from trag(o)-aoidiā = “goat song” from tragos = “goat” and aeidein =
“to sing”.) Thus, he argues, “the illusion of culture was wiped away by
the primordial image of man” for the audience; they participated with
and as the chorus empathetically, “so that they imagined themselves as
restored natural geniuses, as satyrs.” But in this state, they have an
Apollonian dream vision of themselves, of the energy they’re embodying.
It’s a vision of the god, of Dionysus, who appears before the chorus on
the stage. And the actors and the plot are the development of that dream
vision, the essence of which is the ecstatic dismembering of the god
and of the Bacchantes‘ rituals, of the inseparable ecstasy and suffering of human existence.
Under the influence of the rise of philosophy, tragedy died.
After the time of Aeschylus and Sophocles, there was an age where tragedy died. Nietzsche ties this to the influence of writers like Euripides and the coming of rationality, represented by Socrates.
Euripides reduced the use of the chorus and was more naturalistic in
his representation of human drama, making it more reflective of the
realities of daily life. Socrates emphasized reason to such a degree
that he diffused the value of myth and suffering to human knowledge. For
Nietzsche, these two intellectuals helped drain the ability of the
individual to participate in forms of art, because they saw things too
soberly and rationally. The participation mystique aspect of art and myth was lost, and along with it, much of man’s ability to live creatively in optimistic harmony
with the sufferings of life. Nietzsche concludes that it may be
possible to reattain the balance of Dionysian and Apollonian in modern
art through the operas of Richard Wagner, in a rebirth of tragedy.
.
Likewise, there are only two superheroes, Superman and Batman — and all the other superheroes are expressions of Superman.
Superman has been interpreted and discussed in many forms in the years since his debut, with Umberto Eco noting that “he can be seen as the representative of all his similars”.
According to Wikipedia, Superman is considered the prototypical superhero.
He established the major conventions of the archetype: a selfless,
prosocial mission; extraordinary, perhaps superhuman, abilities; a
secret identity and code name; and a colorful costume that expresses his
nature.[212] Superman’s cape and skintight suit are widely recognized as the generic superhero costume.
What is it that Superman and Batman represent?
Superman and Batman form a different dualism than that of Apollonian form and Dionysian flux.
Superman represents the ideal of America, and Batman is the reality.
When Americans look at Superman, they see what they want to be.
American exceptionalism is the idea that the United States is inherently different from other nations.[2] Its
proponents argue that the values, political system, and historical
development of the U.S. are unique in human history, often with the
implication that the country is both destined and entitled to play a
distinct and positive role on the world stage.[3]
Political scientist Seymour Martin Lipset traces the origins of American exceptionalism to the American Revolution, from which the U.S. emerged as “the first new nation” with a distinct body of ideas.[4] This ideology is based on liberty, equality before the law, individual responsibility, republicanism, representative democracy, and laissez-faire economics; these principles are sometimes collectively referred to as “American exceptionalism”,[5] and
entail the U.S. being perceived both domestically and internationally
as superior to other nations or having a unique mission to transform the
world.[6]
On one hand, in terms of political culture, the USA would seem to be
typical among the family of English-speaking countries (the
“Anglo-sphere”).
For example, it has been endlessly pointed out by high-school civics
teachers that the American form of government is 95% derived from the
British form of government.
On the other hand, the USA does diverge in its political culture and ideals even from these other Anglo-phone countries.
In fact, American political culture differs from these other
Anglo-phone countries to the point that those countries are somewhat
alienated by the USA (for example, Canada).
This would suggest that, contrary to what many Americans believe, the
USA has no special mystical destiny to convert and transform the world
to its own unusual value system.
In fact, Americans are weird even to have this feeling of being exceptional.
Most countries have no such sense of being extremely special.
Again, all of this might mean that the American ideal would be incompatible as a potential donor ideology to other countries.
.
The distinctiveness of American ideals does not mean that the USA is
radically different from other countries in terms of the reality of
American history.
The USA is very similar to other countries in terms of the actual history of the USA.
Indeed, Page Smith wrote that the United States as reality is really just like any other country.
Every country’s history is all about the will to survive and the
relentless pursuit of prestige, money, and power — and the United States
is no different.
And here the USA does have something to offer the world — namely, the means to attain those worldly goods.
The United States is a font of new technology and methods for people
in other countries to attain more and higher status, class, and power.
.
Again, Superman as an ideal of America was the “mother” of all
superheroes — including Batman, who is the reality of the United States.
The history of Superman as the original superhero illustrates this (literally).
An influence on early Superman stories is the context of the Great Depression. Superman took on the role of social activist, fighting crooked businessmen and politicians and demolishing run-down tenements.
A.C. Grayling, writing in The Spectator,
traces Superman’s stances through the decades, from his 1930s campaign
against crime being relevant to a nation under the influence of Al Capone, through the 1940s and World War II, a period in which Superman helped sell war bonds, and into the 1950s, where Superman explored the new technological threats.
There were multiple iterations of Superman as a character that ranged
from villain to hero as he was developed by the writer Jerry Siegel and
the illustrator Joe Shuster.
Originally, Superman was conceived as a supervillain who was created
when an unscrupulous scientist experimented on a homeless man.
Later, Siegel and Shuster invented Superman’s Clark Kent alter ego
for comic relief, basing Superman and Kent on two very different old
Hollywood staples.
The concept of superheroes having dual identities is widely considered a fundamental breakthrough in the genre.
Originally a crime fighter as a superhero, Superman became overtly political with the rise of the Nazis in Germany.
As time passed, they started to
include stories of Superman fighting off anti-Semitic people. Hitler’s
rise in Europe, with his anti-Semitic words and the negative
stereotypes of Jewish people, pushed Siegel and Shuster to make a hero
that defended the weak. They often would portray Superman protecting the
weak and those who were mistreated. He was a hero the world needed as
World War II began in Europe. Shuster and Siegel worked hard to tell
stories of hope that would cheer people on as things looked hopeless,
even as they struggled to keep the rights to create those stories.
This inspired other artists and writers to create superheroes like Superman.
It was stories like this that
inspired other Jewish artists to create their own comics about
protecting the persecuted. One of those men was Jack Kirby. Along with
his partner, Joe Simon, Kirby created Captain America in 1941.
Batman was likewise created by a Jewish writer (Bill Finger) and a Jewish illustrator (Bob Kane).
Again, the argument here is that Batman is (comparatively) the
“reality-based” version of Superman, in that he has no inherent
superpowers.
According to the Heritage Foundation, Superman used to fight for
“truth, justice, and the American way” — but in 2011 Superman officially
renounced patriotism.
Detective fiction in the
English-speaking world is considered to have begun in 1841 with the
publication of Poe’s “The Murders in the Rue Morgue”, featuring “the
first fictional detective, the eccentric and brilliant C. Auguste Dupin“. When the character first appeared, the word detective had
not yet been used in English; however, the character’s name, “Dupin”,
originated from the English word dupe or deception. Poe devised a “plot
formula that’s been successful ever since, give or take a few shifting
variables.”
In applying the scientific method to solving problems in the social
world, the detective story was a model of Enlightenment virtue.
Poe referred to his stories as “tales of ratiocination“.
In stories such as these, the primary concern of the plot is
ascertaining truth, and the usual means of obtaining the truth is a
complex and mysterious process combining intuitive logic, astute
observation, and perspicacious inference. “Early detective stories
tended to follow an investigating protagonist from the first scene to
the last, making the unraveling a practical rather than emotional
matter.”
The first detective novel was written by the British writer Wilkie Collins 27 years after Poe’s first detective story.
T. S. Eliot called Collins’s novel The Moonstone (1868)
“the first, the longest, and the best of modern English detective
novels… in a genre invented by Collins and not by Poe”,[27] and Dorothy L. Sayers called it “probably the very finest detective story ever written”.[28] The Moonstone contains a number of ideas that have established in the genre several classic features of the 20th century detective story.
Because the detective is unknowingly the culprit in “The Moonstone”,
that novel is also cited as the classic example of self-surveillance —
which is supposedly an element in the genre.
That kind of self-conscious self-surveillance might be found in
hybrids of the detective novel and other genres that also descend from
gothic literature.
Detective fiction is among the major literary genres that were spawned from gothic literature.
Detective
fiction can be traced back to the 1800s, around the time of the
Industrial Revolution. Before this time, most people lived in smaller
towns and worked and socialized in closer circles, so people knew
everyone they came into contact with for the most part. But with the
rise of industrial jobs, more people began moving to cities, which led
to interacting with more strangers on a daily basis, a heightened sense
of suspicion and uncertainty, and yes, more crime. It was around this
time too where police forces were first established. London’s police
force came to be in 1829, and New York City got its police force in
1845. With more people living in cities and crime rates on the rise, the
setting was right for detective genres to flourish.
The difference between the USA and the UK is that the superhero movie took over America and detective fiction conquered Britain.
There is a notable paucity of British superheroes and a surplus of British fictional detectives.
Holmes and Columbo do not completely align with the dualism of ideal and real the way that Superman and Batman do, respectively.
The proper dualism for Holmes and Columbo might be the distinction between the elite and the popular.
But that’s complicated.
Saying that Columbo represents a democratic ideal is like saying Dr. Anthony Fauci is a man of the people.
Clearly, Fauci is from the people in his origins and for the
people in his vocation, but as a leading infectious disease expert he
is a member of the scientific and professional elite (in fact, at their
apex).
Columbo is also comparable to the disheveled and ascetic philosopher
Socrates, who interrogated the most esteemed men of Athens and politely
punctured their reasoning.
In fact, it might be more than merely an analogy because Columbo
might be the American equivalent and direct descendant and avatar of
Socrates.
After all, in every episode of Columbo, one is only learning what one already knows.
The spirit of Lieutenant Columbo lives on in the TV franchise “Law
& Order” which was revolutionary in that it refuted the American
bias that the display of wealth is a sign of virtue.
“Law & Order” — Everyone in Manhattan is guilty of something.
“Criminal Intent” — Smart people are extremely dangerous.
“Special Victims Unit” — Men are dangerous and guilty.
Nevertheless, this skepticism of respectability remains a heterodoxy in American life.
The orthodox belief for Americans is the widespread equation of morality with status, wealth, and power.
After all, it is normal for hard-working people to hope that the
reward for their labors will be in this life, and not merely in the
afterlife.
However, among highly educated, urban Americans — some of whom are
wealthy Democrats — there is a skepticism of conventional success as a
marker of ethics.
For example, “Law & Order” is the only television show that the
linguist and radical public intellectual Noam Chomsky watches.
Along these lines, the “Democratic Socialists” like Alexandria
Ocasio-Cortez would be the prime target audience of “Law & Order”.
The value system of the over-educated, romantic “tenured radical” is more like that of upper-middle class Catholic France.
There are at least three different types of symbolic boundaries based on three types of capital.
Moral boundaries: based on honesty, work ethic, personal integrity, consideration
Socioeconomic boundaries: based on people’s social position indicated by their wealth, power and professional success
Cultural boundaries: based on education, intelligence, manners, tastes, command of high culture
Moral boundaries are of tremendous importance in both the USA and France — even though they are interpreted differently.
American upper-middle-class men rely on socioeconomic criteria such as power, success, or wealth when it comes to drawing “symbolic boundaries”.
In contrast, French men of that same class tend to privilege cultural boundaries over socioeconomic characteristics.
“First the charioteer of the human
soul drives a pair, and secondly one of the horses is noble and of
noble breed, but the other quite the opposite in breed and character.
Therefore in our case the driving is necessarily difficult and
troublesome.”
The Charioteer represents intellect, reason,
or the part of the soul that must guide the soul to truth; one horse
represents rational or moral impulse or the positive part of passionate
nature (e.g., righteous indignation); while the other represents the
soul’s irrational passions, appetites, or concupiscent nature. The
Charioteer directs the entire chariot/soul, trying to stop the horses
from going different ways, and to proceed towards enlightenment.
Obviously, Spock is rational and McCoy is compassionate.
However, in “Star Trek”, Spock and McCoy are subordinate to Captain Kirk — who is a man of passion, appetite, and ambition.
Thus, contrary to Plato, in “Star Trek”, the charioteer turns out to
be physical desire and the two winged horses it guides are rationality
and conscience.
.
Perhaps since the Industrial Revolution and the Romantic Era of the
19th century, reason (in the form of technology) has increasingly been
seen as the tool of the emotions.
That is, in the modern world, volition or will is increasingly seen
as the core of the personality, and human rationality is seen as either
merely instrumental or even a retrospective illusion.
But perhaps it was always like this — and Plato knew that reason is never in the driver’s seat of the mind.
For whatever reason, Plato might have written a deceptive analogy that he himself did not believe in.
(His clue for the reader that he does not actually believe in
something is supposedly when he gives it a mythical twist, such as a
reference to the fictitious continent of Atlantis — or to winged horses
rather than to just plain old horses.)
That is, willpower rather than reason or conscience is to be seen as
the faculty that in all times and places is in control of the
personality.
After all, the word “volition” means conscious choice or decision.
It has at its root the Latin word “volo” — to wish, want, or intend — plus a suffix denoting action.
Thus, desire and action — and not reason — are at the heart of “free will”.
In practical and sociopolitical terms, every society is run by driven high-achievers, and not by brainiacs or saints.
For example, the Catholic Church held sway over kings and peasants —
but the clerical hierarchy was infiltrated and dominated by the
nobility.
Likewise, the Communist Party ruled over both administrators and
workers — but those Party members evolved into classic politicians and
bureaucratic managers rather than revolutionary intellectuals (like
Lenin).
.
If Sherlock Holmes symbolizes intellect and Dr. Watson embodies
conscience, then who represents the appetite in the mysteries written by
Sir Arthur Conan Doyle?
That (dis)honor might go to Sherlock Holmes’ brilliant archnemesis, Professor Moriarty.
He shows a fiery disposition,
becoming enraged when his plans are thwarted, resulting in his being
placed “in positive danger of losing my liberty”. While personally
pursuing Holmes at a train station, he furiously elbows aside
passengers, heedless of whether this draws attention to himself.
.
There is abundant speculation about whom the character Moriarty is based on.
What is rarely noticed is the similarity between Professor Moriarty and Arthur Conan Doyle (Irish origins, same schools).
As the quintessential writer of detective fiction, Arthur Conan Doyle was himself a mastermind behind criminal mysteries.
The supervillain Moriarty rather than the superhero Holmes might have been Doyle’s alter ego.
On the one hand, an alter ego can be a very different alternative
self from the normal self — especially if the alter ego is either a
darker, evil, shadow self or a heroic, superhuman self.
On the other hand, in literature, an alter ego can be a realistic
character who represents the author and is conspicuously similar to the
author.
Moriarty was both similar to Arthur Conan Doyle in background and a reverse mirror image of Conan Doyle’s healthy, moral, and thoroughly honorable personality.
Arthur Conan Doyle was the ultimate charioteer in this triad.
An alter ego (Latin for “other I”; “doppelgänger”) means an alternative self, which is believed to be distinct from a person’s normal or true original personality.
A distinct meaning of alter ego is found in the literary analysis used when referring to fictional literature and other narrative forms, describing a key character in
a story who is perceived to be intentionally representative of the
work’s author (or creator), by oblique similarities, in terms of psychology, behavior, speech, or thoughts, often used to convey the author’s thoughts.
The point is that, again, it is the will — in this case in the form of Conan Doyle (Moriarty) — that is in control of reason and conscience.
.
What reality does Batman represent?
Every Superhero is an ordinary person who has at least one superhuman ability.
They wear a disguise to hide their human identity.
For example, Spiderman has exceptional strength, speed, and agility
because he was bitten by a radioactive spider — and, uniquely, he can
climb walls.
However, Spiderman’s webbing is a technology that Peter Parker devised.
In this way, Spiderman is half superhero, half technology.
In contrast to other superheroes, Batman has no superhuman talents.
In fact, Batman is not even Batman when he puts on his outfit; he is still just the billionaire Bruce Wayne with lots of technology.
The only super-ability that Bruce Wayne possesses is superior technology thanks to his phenomenal wealth.
In a way, Batman is somewhat of a strange superhero with too much money.
And that, in a nutshell, is the United States as a reality.
.
In fact, even the American ideal of liberty and democracy (“liberal democracy”) can be seen as a form of technology.
What Americans refer to as “democracy” is a sophisticated form of “social technology”.
“Freedom” is a useful method of organizing society in order to increase productivity.
Social technology is a way of using human, intellectual and digital resources in order to influence social processes.
Closely related to social technology is the term social engineering. Thorstein Veblen used ‘social engineering’ in 1891, but suggested that it was used earlier.[16] In the 1930s both ‘social engineering’ and ‘social technology’ became associated with the large scale socio-economic policies of the Soviet Union. The Soviet economist Yevgeni Preobrazhensky wrote a book in which he defined social technology as “the science of organized production, organized labour, of organized systems of production relations, where the legality of economic existence is expressed in new forms.” (p. 55 in the translation of 1963[17])
American society can be seen as an open-ended, improvised form of
social engineering aimed at solving catastrophic problems rather than
creating a centralized, pre-planned utopia.
Karl Popper discusses social technology and social engineering in his book The Open Society and Its Enemies[18] and in the article “The Poverty of Historicism”,[19] in which he criticized the Soviet political system and the Marxist theory (Marxism) on which it was based. Eventually he combined the “Poverty of Historicism” series into a book, “The Poverty of Historicism”, which he wrote “in memory of the countless men and women of all creeds or nations or races who fell victim to the fascist and communist belief in Inexorable Laws of Historical Destiny”.[20] In his book “The Open Society and Its Enemies”, Popper distinguished two kinds of social engineering, and the corresponding social technology. Utopian engineering strives to reach “an ideal state, using a blueprint of society as a whole, is one which demands a strong centralized rule of a few, and which therefore is likely to lead to a dictatorship” (p. 159). Communism is an example of utopian social technology.
On the other hand, there is piecemeal engineering, with its corresponding
social technology, which adopts “the method of searching for, and
fighting against, the greatest and most urgent evils of society, rather
than searching for, and fighting for, its greatest ultimate good”
(p. 158). The use of piecemeal social technology is crucial for
democratic social reconstruction.
.
A classic example of the difference between material and social
technology would be the construction of the pyramids in ancient Egypt.
Today, we gaze at the pyramids and wonder how they were constructed.
Yet historical texts from the ancient Greeks suggest that the Egyptians used simple material technology like ramps, levers, and sheer manpower to build the pyramids.
The Egyptians might have had simple material technology.
However, their social technology was so super-sophisticated that they could mobilize their entire society for long-term construction projects.
In a sense, that is the exact opposite of American society, in which
the material technology is complex but the social technology is simple
(money).
In fact, it could be that as material technology grows more complex
in the USA, the social technology — in the form of social cohesion —
erodes further.
In that case, the landing of a man on the moon and the project behind
it represented one giant leap for material technology — but a small
step backward for the American social order.
In contrast, the construction of the pyramids in ancient Egypt saw no
innovations, advances, or breakthroughs in material technology.
Indeed, the conservative ethos of ancient Egypt seems to have been to keep everything frozen in place, in line with a changeless universe (where every day is a cloudless summer day).
Yet the building of the pyramids not only reflected abundant social technology, but contributed to it further.
That is, the pyramids were built when the pharaohs were strong — and,
in turn, the public’s awe at the pyramids would have fed the monarch’s
power.
The pyramids were originally covered with limestone and marble and capped with gold, all of which was later stripped away.
For Egyptians in small villages, even to hear of such a wondrous structure would have served to cement the rule of a pharaoh, as well as to unite a far-flung country.
.
Superman is a symbol of the American ideal, which is “liberal democracy”.
The relationship between democracy and capitalism is a contentious area in theory and in popular political movements. The extension of adult-male suffrage in 19th-century Britain occurred along with the development of industrial capitalism and representative democracy became
widespread at the same time as capitalism, leading capitalists to posit
a causal or mutual relationship between them. However, according to
some authors in the 20th-century, capitalism also accompanied a variety
of political formations quite distinct from liberal democracies,
including fascist regimes, absolute monarchies and single-party states.[36] Democratic peace theory asserts
that democracies seldom fight other democracies, but critics of that
theory suggest that this may be because of political similarity or
stability rather than because they are “democratic” or “capitalist”.
Moderate critics argue that though economic growth under capitalism has
led to democracy in the past, it may not do so in the future as authoritarian régimes
have been able to manage economic growth using some of capitalism’s
competitive principles without making concessions to greater political freedom.
However, this popular interpretation of American capitalism at the
service of American democracy might involve a certain amnesia.
As the historian Richard Hofstadter has observed, the American political
system has always been conceived by Americans as subordinate to the
values and needs of capitalism.
Hofstadter’s introduction argues
that the major political traditions in the United States, despite
contentious battles, have all “shared a belief in the rights of
property, the philosophy of economic individualism, the value of competition … [T]hey have accepted the economic virtues of a capitalist culture as necessary qualities of man”.
Rather
than focusing on political conflict, Hofstadter proposes that a common
ideology of “self-help, free enterprise, competition, and beneficent
cupidity” has guided the United States since its inception. Through
analyses of the ruling class in the US, Hofstadter argues that this
consensus is the hallmark of political life in the US.
Hofstadter’s critique is not radical — in fact, Hofstadter’s assertion is consonant with the views of American conservatives.
Importantly, America can serve neither as a destiny nor as a model for the world, because American “democracy” and American “capitalism” are not really democracy or capitalism.
That is, both the economic and political structures and ideologies in
the USA are of mixed types — and are under constant reconstruction.
In other words, because America as a model is a moving target, it is difficult to emulate.
It has been periodically noted — and then quickly forgotten — that
“capitalism” in the USA is really a mixed economy, and it is constantly
evolving.
Likewise, Americans unknowingly use the word “democracy” as shorthand
for a mixed form of government that was necessary in order to stabilize
society.
Mixed government (or a mixed constitution) is a form of government that combines elements of democracy, aristocracy and monarchy, ostensibly making impossible their respective degenerations which are conceived as anarchy, oligarchy and tyranny. The idea was popularized during classical antiquity in order to describe the stability, the innovation and the success of the republic as a form of government developed under the Roman constitution.
The ancient Greeks had a notion of history as cyclical in nature.
One of their models of political history involved a predictable
pattern of tacking back and forth between bad and good versions of
government.
However, the general direction of this evolution was toward greater participation — and greater vulgarity.
A polity begins as a noble monarchy, and eventually finds its culmination in mob rule.
Then the cycle begins all over.
There is an underlying logic to how this historical political process plays out.
The political doctrine of anacyclosis (or anakyklosis from Greek:
ἀνακύκλωσις) is a cyclical theory of political evolution. The theory of
anacyclosis is based upon the Greek typology of constitutional forms of
rule by the one, the few, and the many. Anacyclosis states that three
basic forms of “benign” government (monarchy, aristocracy, and democracy) are inherently weak and unstable, tending to degenerate rapidly into the three basic forms of “malignant” government (tyranny, oligarchy, and ochlocracy).
A mixed form of government was advocated by some Greek thinkers as an antidote to this fatal cycle.
One can find the logic of this three-part mixed form of government in the British model involving:
monarch and
a parliament divided between
an elite and
a common chamber.
That is, since the Norman invasion and the signing of the Magna Carta, the British had a hybrid government that consisted of:
a (Roman) god-like king who wielded immense power, but
he governed by consent (in the Germanic communal tradition).
But within this Latin-German hybrid form of government, the ancient Greek triune model of government persisted.
The one, the few, and the many would rule simultaneously.
This model was carried over to the American model of government in
the relationship between an independently elected president (not a prime
minister) and Congress.
However, the Americans innovated in their commitment to establishing an independent judiciary.
The judiciary would be an aristocracy of the few that would balance
the powers of a democratized legislature and a monarchical presidency.
.
American social technology in the form of “democracy” and
“capitalism” is highly advanced and can provide a useful model for other
countries to emulate.
However, other countries have innovated to produce other social technologies.
For example, the economist Branko Milanovic has stated that the great economic success story of the past generation is China’s “state capitalism” — and not American liberal capitalism.
State capitalism might be of more interest than liberal capitalism to developing economies (like India).
Milanovic has stated that the Chinese model has not been studied adequately, so it remains uncertain how to apply it to other societies.
Chinese capitalism does seem closer to the classic Japanese model of development.
South Korea seems to have developed rapidly by adopting the Japanese
model — and has more recently shifted to the American model.
Meanwhile, Japan seems to be stagnating.
It could be that different models of development are more or less appropriate depending on a country’s level of development.
The point here is not economics, but psychology.
People tend to fixate on a particular model of development in terms of a one-size-fits-all universal applicability.
Forms of social technology can be dogmatically worshiped, almost like a cargo cult.
.
What is not technology?
After all, if everything is technology, then nothing is technology.
Things that exist outside the realm of usefulness — and that are themselves the purpose for which useful things exist — are not technology.
For example, art, religion, political action, and science can be useful, but they are also valued in themselves.
Consumption and entertainment exist primarily so that people can
recuperate and be nourished so that they can labor more in the future.
In this way, consumption is a form of technology — but it is usually taken for granted as an end in itself.
In fact, increased consumption is how American society evaluates itself as successful or not.
So one problem is that in the USA, everything becomes technology.
But there is a good side to this.
In fact, it could be argued that America has only one true historical
potentiality, which is to create new technology, both social and
material.
America is all about inventing stuff.
But that’s not a preexisting mystical destiny.
.
But there is another problem.
Again, the USA is not about excellence outside of technology.
The problem is that even American social and material technology might be mediocre.
There has been a lot of talk about innovation and how it is disrupting everything.
However, there might actually have been less substantial technological innovation going on in the past decade than in the period prior to it.
That means that America is failing at its sole historical mission of creating great new technology — both material and social.
Here, the example of junk food can help explain what is meant by “technology” and mediocrity.
In France, having a meal is a ritual that has value in itself, and it
should ideally involve wine and conversation and last about two hours.
In contrast, American fast food is designed to be served immediately and eaten with one’s hands (while driving).
In the eyes of the French, this kind of reduces humans to draft
animals like oxen who are being fed food as a fuel as quickly as
possible so that they can labor more.
This seems to be especially true of the educated professional classes in the USA.
They often very carefully regulate their diet and exercise in order to optimize their energy levels for work.
But this American approach to food as technology is doubly perverse in the case of fast food.
This is because it is not healthy and diminishes long-term performance.
Thus, fast food is exemplary of:
the takeover of everything by technology (in this case, food as dehumanizing fuel), and
how it has also become mediocre, dysfunctional, and degraded as a technology.
.
And so the interpretation here is that superhero movies are an
example of a mediocre technocracy — and a critique of that technocracy
in the ever-darkening mood of those films.
This raises issues about interpretation.
When an anthropologist writes about a culture, he or she is translating that culture for an audience in a different culture.
That is, the anthropologist is explaining to outsiders what the
people he is studying mean or intend by their actions or communication.
However, the study of superhero movies is a different kind of interpretation.
This is because it assumes that the superheroes represent something unconscious.
Even the artists, writers, actors, and directors involved do not consciously know what their artwork represents.
Like dreams, popular commercial art forms do not explicitly state
what it is that makes them emotionally resonate with the public.
Moreover, popular culture can be seen at times to present a critique
of the social order — even if it is only an unconscious critique.
In fact, when artists are unknowingly and unconsciously engaged in a
social critique, it might be all the more powerful, unrestrained, and
creative.
In fact, popular culture might even involve self-critique — and self-criticism — of popular culture itself.
Superhero movies might unconsciously offer a critique of technology’s appropriation of non-technology.
But the superhero movie is itself the prime example of that appropriation.
To understand this, one has to study and compare various superheroes.
.
The quirks of superheroes are therefore important.
For example, in terms of superpowers, Superman is the opposite of Batman.
Batman has no super-abilities, whereas Superman possesses all of the major super-powers.
But Superman is also the opposite of all the other superheroes in that while they usually have only one big talent, he has only one big vulnerability — namely, kryptonite.
There is also the issue of identity.
Superheroes hide their ordinary, prosaic, imperfect “backstage” self
when they don their costumes and assume their “onstage” self that has
one main superpower.
In contrast, when Bruce Wayne puts on the mask and cape, he is still an ordinary man — only now encased in expensive body armor.
Contrary to Batman, Superman does not wear a mask because as Superman
he is being his true self — which is a god from another planet.
In fact, Superman’s disguise is when he puts on a pair of glasses and pretends to be Clark Kent.
(Kill Bill: Vol. 2 (2004) – Superman and Clark Kent Scene)
One other obvious difference between Batman and Superman is that Batman is named after an animal.
Indeed, it’s fairly common for superheroes like Spiderman to be named
after a species of fauna — much the way that modern sports teams are
often named after species of animals.
Superman diverges from other superheroes in this way.
Yet Superman is the ultimate embodiment of all superheroes — which is evident in his name.
But the use of an animal as an emblem has deeper roots from a sociological perspective.
It has been argued that in the earliest religions, a particular plant or animal was worshipped by a clan as a representation of the clan itself — although the clan did not know this.
In other words, a society’s object of worship is secretly a representation of society itself.
According to Durkheim, through worship of the sacred, a culture becomes spiritually aware of its own existence.
In summing up, then, we must say
that society is not at all the illogical or a-logical, incoherent and
fantastic being which it has too often been considered. Quite on the
contrary, the collective consciousness is the highest form of the
psychic life, since it is the consciousness of the consciousnesses.
Being placed outside of and above individual and local contingencies, it
sees things only in their permanent and essential aspects, which it
crystallizes into communicable ideas. At the same time that it sees from above, it sees farther; at every moment of time, it embraces all known reality; that is why it alone can furnish the mind with the moulds which are applicable to the totality of things and which make it possible to think of them. It does not create these moulds artificially; it finds them within itself; it does nothing but become conscious of them.
.
There is a related but distinct sociology of religion that traces how
the modern world has become increasingly disenchanted because of the
growth of science.
In fact, for Weber, all sorts of modern forces were arrayed against religious faith.
Durkheim shared Weber’s view that
modern society was one in which traditional forms of religion were in
terminal decline. Weber saw modernity in terms of the rise of secular,
rationalised and bureaucratic social systems. Durkheim described it as
an age in which the influence of the old gods of traditional religion
was being replaced by new, more scientific ways of understanding the
world.
Weber was a pessimist in terms of his prognosis for religion in the modern world.
But Durkheim was optimistic that new forms of the sacred would emerge in this modern world.
Arguably what is most important,
though, is not what Weber and Durkheim shared in terms of their beliefs
about the inevitable decline of traditional religion in modern society,
but what they disagreed about. While Weber saw the rise of a soulless,
rationalised society (“specialists without spirit, sensualists without
heart”), Durkheim believed the society of his day to be in a
transitional moment in which the old gods might have faded, but new
forms of the sacred were emerging. Religion might
be dying, in its traditional forms, but sacred passions were not. We
might, in Durkheim’s terms, be living in a more secular age, but not in a
desacralised one.
.
There is one frequently overlooked footnote to Weber’s gloomy
sociology of religion that might be relevant to the rise of the
superhero movie.
Weber observed that economic and social forces in modern societies
have the kind of god-like power over modern man that natural forces once
had over archaic man.
In the ancient world, these mysterious, powerful, inexorable natural
forces were often represented as spirits or gods in polytheistic
religions.
Could these overwhelming societal forces manifest themselves as divine beings in a future religion?
Perhaps they already have manifested themselves — not as gods, but as comic book supervillains.
Comic book heroes might represent the different aspects of the social order and the modern self and value system, and the supervillains would represent modern problems and crises.
Again, Batman represents the reality of the United States as having
only a purely technological legacy for humanity (and this includes the
social technology glibly referred to as “democracy”).
Then what do Batman’s enemies symbolize?
Batman’s foes represent the problems and dysfunctions of technology.
Notably, Batman’s nemeses, like the caped crusader himself, generally have no inherent superpowers and must rely on technology.
To understand these villains and how they represent perversions of technology, it is helpful to distinguish between “instrumental rationality” and “value rationality”.
“Instrumental” and “value rationality” are terms scholars use to identify two ways humans reason when coordinating group behavior to maintain social life.
Every society maintains itself by coordinating instrumental means with value rational ends.
Together they make humans rational.
These two ways of reasoning seem to operate separately.
Instrumental rationality recognizes means that
“work” efficiently to achieve ends. Efficient means are recognized
inductively in heads or brains or minds. Instrumental rationality
provides intellectual tools—scientific and technological facts and
theories—that appear to be impersonal, value-free means. This is
determined by expectations as to the behavior of objects in the
environment and of other human beings; these expectations are used as
“conditions” or “means” for the attainment of the actor’s own rationally
pursued and calculated ends.
Value rationality recognizes
ends that are “right,” legitimate in themselves. Legitimate ends are
felt deductively in hearts or guts or souls. Value rationality provides
legitimate rules—moral valuations—that appear to be emotionally
satisfying, fact-free ends. This is determined by a conscious belief
in the value for its own sake of some ethical, aesthetic, religious, or
other form of behavior, independently of its prospects of success; …
the more the value to which action is oriented is elevated to the status
of an absolute value, the more “irrational” in this [instrumental]
sense the corresponding action is. For the more unconditionally the
actor devotes himself to this value for its own sake, … the less he is
influenced by considerations of the [conditional] consequences of his
action.
An example of value rationality would be
when a person does volunteer work, or goes to an art museum, or goes to
church — without any expectation of extrinsic reward because these
activities have value in themselves.
An example of a corruption of value rationality is when a
celebrity does community service, or a professor goes to a symphony
concert, or a politician goes to a religious event — in order to raise
or rehabilitate their public persona and media profile.
Again, Batman’s foes represent various perversions of instrumental rationality — that is, the technical methods to accomplish a goal.
The Penguin is a straightforward criminal. He is a sane but evil man rationally engaged in the pursuit of wealth through crime. “Unlike
most of Batman’s rogues gallery, the Penguin is completely sane and in
full control of his actions, giving him a unique relationship with
Batman.”
Ra’s al Ghul, as an eco-terrorist seeking to bring balance to the world, embraces mass murder to attain a righteous goal. “Raʼs
al Ghul is an international criminal mastermind whose ultimate goal is a
world in perfect environmental balance. He believes that the best way
to achieve this balance is to eliminate most of humanity.“
The Riddler is
obsessed with puzzles. Insofar as puzzles are entertainment, they are
an end in themselves (value rationality); but puzzles are educational,
and can also serve to hone practical problem solving (instrumental
rationality). The Riddler stages crimes that involve a puzzle that
challenges the public or investigators, but the puzzle serves as a ruse
to accomplish the crime (instrumental rationality); at other times, the
Riddler engages in crime as a form of puzzle solving (value rationality).
“The character’s origin story recounts Edward Nigma’s fascination
with puzzles from a young age. After a teacher announces that a contest
will be held over who can solve a puzzle the fastest, Nigma sets his
sights on winning this, craving the glory and satisfaction that will
come with the victory. He breaks into the school at night to practice
the puzzle until he is able to solve it in under a minute. Due to this
he wins the contest and is given a book of riddles as a prize.”
The Joker is irrational, and neither his methods nor his goals make sense. “The Joker commits whimsical, brutal crimes for reasons that, in Batman’s words, ‘make sense to him alone’.”
.
What does Spider Man personify?
If Superman symbolizes the ideals of America, and
Batman represents the technological means to attain worldly goals, then
Spider Man embodies SCIENCE, and his enemies represent the corruption of science.
Science is value rational insofar as it is engaged as an end in
itself, much like religion, art, or altruism — but science can serve the
needs of technology.
Spider Man and most of the supervillains that he fights against are
typically unintentional victims of accidents during scientific
experiments.
The new superpowers are of benefit to society in the case of Spider
Man, but in the case of the villains, it’s all about scientific research
gone wrong.
Although they have been technically enhanced, the fact that their
transformation was not deliberate severs the connection to purposive
instrumental rationality and technology.
That is, their new powers were neither a goal nor meant to be a means to some nefarious premeditated end.
In fact, so many villains in Spider Man’s world were good (if
difficult) men who were frustrated with a corrupt status quo that had
trapped them prior to their transformation.
This lends a tragic aspect to the villains that can deepen the audience’s engagement with the characters.
But it also underscores the alienation and disconnection that the
villains once experienced with the “greedy” industry that had ruined
their lives and stymied their ambitions.
Again, this diminishes and attenuates their relationship to instrumental rationality and technology.
Also, so many of the animal totems in the Spider Man franchise —
spiders, octopuses, vultures, chameleons, lizards, scorpions, jackals,
cats, and even goblins and hobgoblins — are either trickster animals who
hunt by stealth rather than by the chase, or are scavengers.
Their relative passivity in not being active hunters detaches them somewhat from active, instrumental reason.
Again, what’s with the “Batman-ification” or darkening of the Spider Man movies?
It’s been said that the great crisis of the 20th century was the industrialization of science.
Science has become a conveyor-belt, factory-like activity.
In other words, science has not only become subsumed under technology
in the role it has been assigned in modern society, but research has
itself become technology in its practice.
If it is true that current scientific and technological progress is
stagnating, the colonization of science by technology might be one
reason.
There are supposedly more working scientists in the world today than
there have been throughout all of history combined — and yet the great
scientific breakthroughs seem to be a thing of the past.
Again, a classic Spider Man supervillain is a scientist frustrated by the corruption and petty politics of the military-industrial-university complex.
.
The Superhero movie is both a protest against and the
perfect example of the takeover of everything by the
military-industrial-university-entertainment complex (technocracy).
Superman symbolizes American ideals (of truth, justice, and hope) and
Batman represents the reality of the United States as a technocracy, and
the current gloom in superhero movies reflects:
the full subordination and subsumption of the ideal (Superman) to technology (Batman), as well as
the resulting mediocrity.
Thus, the proper role of technology at the service of values has been inverted.
.
A microcosm of this inversion of the roles of values and technology is found in Silicon Valley propaganda, with tech companies:
telling their workers that they are living fulfilling,
meaningful, beneficial, “disruptive”, and creative lives through
constant, tireless labor, and
telling the public that their lives will be fulfilling, meaningful, beneficial, “disruptive”, and creative if they purchase the latest tech product.
This corporate and governmental “gaslighting” or “brainwashing” is exactly what the popular critique of the 1960s warned against.
The protests of the 1960s were against governance by experts who secretly manipulated all aspects of society.
The Making of a Counterculture … chronicled and gave explanation to the European and North American counterculture of the 1960s. The term “counterculture” was first used by Roszak in this book.
The Making of a Counter Culture “captured a huge audience of Vietnam War protesters,
dropouts, and rebels–and their baffled elders. Theodore Roszak found
common ground between 1960s student radicals and hippie dropouts in their mutual rejection of what he calls the technocracy–the
regime of corporate and technological expertise that dominates
industrial society. He traces the intellectual underpinnings of the two
groups in the writings of Herbert Marcuse and Norman O. Brown, Allen Ginsberg and Paul Goodman.“
The critique of the technocracy finds its origins in the elite university — which is the heart of the technocracy.
For example, the Beat Generation of the 1950s was a precursor to the Hippies of the 1960s.
The
Beat Generation was a group of American post-World War II writers who
came to prominence in the 1950s, including the cultural phenomena they
documented and inspired. Central elements of Beat culture included the
experimentation with drugs, alternative forms of sexuality, interest in
Eastern religions (such as Buddhism), rejection of materialism, and
idealizing exuberant means of expression and being.
Allen Ginsberg’s Howl (1956), William S. Burroughs’s Naked Lunch (1959), and Jack Kerouac’s On the Road (1957) are among the best known examples of Beat literature. Both Howl and Naked Lunch became
the focus of obscenity trials. The publishers won the trials, however,
and publishing in the United States became more liberalized. The members
of the Beat Generation developed a reputation as new bohemian hedonists
who celebrated non-conformity and spontaneous creativity.
Many of the foundational members of the Beat Generation were students at Columbia University.
Origin of the Beats
The origins of the Beat Generation can be traced to Columbia University,
where Kerouac, Ginsberg, Lucien Carr, Hal Chase, and others first met.
Classmates Carr and Ginsberg discussed the need for a new vision to
counteract what they perceived as their teachers’ conservative,
formalistic literary ideals. Later, in the mid-1950s, the central
figures of the Beat Generation (with the exception of Burroughs) ended
up living in San Francisco together.
Complicating matters, this anti-technocratic attitude finds a
conservative counterpart in an academic classicism that emphasizes the
liberal arts.
Indeed, the left-wing counter-culture might be an unconscious expression of this traditionalism.
The idea of the great books emerged at the same time as the modern university.
The
idea made its way into universities after 1900 as part of a backlash
against the research model, led by proponents of what was called
“liberal culture.” These were professors, mainly in the humanities, who
deplored the university’s new emphasis on science, specialization, and
expertise. For the key to the concept of the great books is that you do
not need any special training to read them.
It’s not an
accident or a misfortune that great-books pedagogy is an antibody in the
“knowledge factory” of the research university, in other words. It was intended as
an antibody. The disciplinary structure of the modern university came
first; the great-books courses came after. As Montás says, “The practice of liberal education, especially in the context of a research university, is pointedly countercultural.”
Hence the protests of professors in the humanities against
pragmatically oriented research of the modern university — which is
deeply intertwined with the military-industrial complex.
However, the traditional liberal arts and modern scientific research share the same focus on rigorous critical inquiry.
In contrast, the original mission of universities like Harvard was to train the next generation of religious clergy.
So, the modern research university is actually not so different from
stodgy old academic traditionalism that celebrates “The Great Books”.
The real divide is between the religious mission of universities in
the past with modern secular research — whether it is in the STEM
subjects or the humanities.
In the creation of the modern university, science was the big winner. The big loser was not literature. It was religion. The
university is a secular institution, and scientific research—more
broadly, the production of new knowledge—is what it was designed for.
All the academic disciplines were organized with this end in view.
Philology prevailed in literature departments because philology was
scientific. It represented a research agenda that could produce
replicable results. Weinstein is not wrong to think that critical theory
has played the same role. It does aim to add rigor to literary
analysis.
Moreover, much of the support for the liberal arts has come about because STEM subjects have historically been expensive to teach (laboratories, instructors in great demand in the private sector, etc.).
In contrast, the lecture courses of the liberal arts are (were?)
relatively cheap (indeed, a cash cow in the case of law school).
That is, modern universities have funded the liberal arts not out of a
deep reverence for tradition, but because they were comparatively
inexpensive.
However, the calculus might be changing as fewer students choose them as their major.
.
The point is that within academia, the technocracy and the
countercultures that oppose it — whether radical or conservative — are
not so different.
In fact, the technocracy and the countercultures are identical with one another.
Something like this has been asserted about the new commercial and
consumer capitalism of the 1960s and the 1960s counterculture.
The central values of the 1960s counterculture — such as spontaneity,
creativity, rebellion, and freedom — both influenced and had their
origin in the advertising industry and the postwar shift to consumer
capitalism.
The greatest concern in the 1960s counter-cultural critique was the manipulation of desires through propaganda and advertising.
A classic example of cool-blooded, rational technicians finessing
public opinion can be found in Germany’s long-serving leader Angela
Merkel.
Merkel was able to rule for so long because whenever she was faced
with a crisis, she would radically change her positions in order to seem
hip and cool and progressive.
For instance, Merkel was pro-nuclear.
However, after the nuclear disaster in Fukushima, Japan in 2011,
Merkel vowed to permanently close Germany’s nuclear power plants.
Likewise, Merkel was strict on immigration.
In 2015, Merkel tried to comfort a Palestinian girl who expressed her
fears of imminent deportation to an impoverished Middle East.
Merkel’s efforts to calm the girl and explain the necessity of a strict immigration policy only made the girl more upset — and also creeped out the German television audience.
Merkel soon thereafter confirmed her openness to foreign immigration.
The effects of Merkel’s policy reversals are not simply that she
adopted policies that she knew would be disastrous in order to burnish
her popularity.
The greater problem is precisely that her long rule did produce the kind of remarkable stability that a conservative technocrat values so highly.
That kind of smoothed-over, artificial “stability” only serves to
foster a moribund society, economy, and political culture — as in the
cautionary tale of a stagnating Japan.
Perhaps that kind of static stability ultimately leads to a
Soviet-bloc scenario, in which governments that seemed permanently
entrenched suddenly crumble overnight.
That line of thought leads to a rather disturbing conclusion….
Although it is fashionable to predict that the USA is headed toward
another civil war, the USA might actually be profoundly stable — in
fact, perhaps more stable than any time in its history.
Instead, it is rather Germany and Japan that might be headed toward a dire crisis and day of reckoning.
In any case, it might have been better for Merkel to have opened
public debates on immigration and nuclear energy rather than do a sudden
about-face in policy.
To her credit, Merkel did engage in a rational public discourse
during the worst days of the Covid pandemic — although as a scientist
explaining concepts, not as a debater.
.
The concept of “technocracy” seems to be flexible enough to be applied to almost any political faction.
Technocracy is usually associated with rational management.
This was the goal of the Progressive movement of the early 1900s.
Progressivism was a reform movement that sought to replace corruption with science.
For example, instead of elected mayors, towns and cities would be run by an appointed manager.
Also, progressives sought to shatter business monopolies.
President Theodore Roosevelt was a classic example of a progressive
who sought to break up large conglomerates that stifled competition.
The Progressive Era (1896–1916) was a period of widespread social activism and political reform across the United States of America that spanned the 1890s to World War I.[1] The main objectives of the Progressive movement were addressing problems caused by industrialization, urbanization, immigration, and political corruption. Social reformers were primarily middle-class citizens who targeted political machines and their bosses. By taking down these corrupt representatives in office, a further means of direct democracy would be established. They also sought regulation of monopolies through methods such as trustbusting and corporations through antitrust laws,
which were seen as a way to promote equal competition for the advantage
of legitimate competitors. They also advocated for new government roles
and regulations, and new agencies to carry out those roles, such as
the FDA.
One finds an updated version of the Progressive movement in the
figure of Ralph Nader during his heyday of the 1970s, when he crusaded
for:
consumer protection,
environmentalism, and
government reform.
The critique made by progressive reformers like Nader is that governance in the USA is dominated by corrupt “iron triangles” comprised of:
politicians,
bureaucrats, and
special interests (business and labor).
For instance, one criticism of the US Forest Service is that it
should be renamed the US Lumber Service because it caters to the logging
industry.
Fees for logging on public lands are supposed to go toward road construction;
but those fees go entirely to building roads on the public lands used by the loggers themselves.
The point here is that progressives who believe in rational
management are accused of being technocrats — but progressives have
their own critique of technocracy.
Moreover, the technocracy of the iron triangle is not based on
rational management and open debate, but on secretive negotiations.
Complicating matters more, the iron triangle is a legacy of the New Deal.
The New Deal brought social harmony and economic growth to the USA by uniting big business, big government, and big labor.
Republicans like Eisenhower in the 1950s consolidated rather than abolished the New Deal.
(This is when small-town small businessmen — who were outsiders in
the New Deal — turned away from conservatism and toward right-wing
populism.)
Another wrinkle is that all of the above might suggest that a “Green
New Deal” might be more problematic than its advocates would imagine.
That is, environmentalism and the New Deal would make an odd couple.
Yet another wrinkle is the figure of President Joe Biden, who, as a
man of the New Deal, is a dinosaur who values inclusive coalitions that
work in secret.
“Democratic Socialists” like Alexandria Ocasio-Cortez consider this to be patronage and the essence of corruption.
But Biden is also an outsider from the Democratic Party’s
establishment since the 1990s, which champions deregulation — albeit
with strong social investment.
Unlike during the 1950s, right-wing populists today don’t have so
much of a problem with Joe Biden, who represents yesteryear’s New Deal
technocracy.
Today, the right-wing populist considers the quasi-libertarian
policies of the Democratic establishment — such as free trade with China
— to be treason.
So, each outsider political faction seems to demonize an insider faction that is in power as an “evil” technocracy.
Indeed, perhaps because they are the ultimate outsiders, “Democratic
Socialists” just might conceive everyone else to be an evil technocracy.
But socialism is often perceived as being the ultimate rationalistic, centralized technocracy.
.
It seems that every faction in American politics has its own discourse on a version of technocracy against which it is opposed.
At the same time, this faction is itself conceived as a technocracy by another.
The historical pattern might be for this anti-technocratic agenda to
find its origins in elite circles and then filter down into society.
For example, the anti-technocratic agenda of the 1960s counterculture
found its origin in elite academia of the 1950s, and went mainstream in
the 1970s.
But one finds a similar mainstreaming of anti-technocratic sentiment among conservatives.
Perhaps most famously, there was President Dwight Eisenhower’s 1961
Farewell Address to the Nation, which warned of a “military-industrial
complex”:
A vital element in keeping the
peace is our military establishment. Our arms must be mighty, ready for
instant action, so that no potential aggressor may be tempted to risk
his own destruction…
This conjunction of an immense military
establishment and a large arms industry is new in the American
experience. The total influence—economic, political, even spiritual—is
felt in every city, every statehouse, every office of the federal
government. We recognize the imperative need for this development. Yet
we must not fail to comprehend its grave implications. Our toil,
resources and livelihood are all involved; so is the very structure of
our society. In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military–industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist.
We
must never let the weight of this combination endanger our liberties or
democratic processes. We should take nothing for granted. Only an alert
and knowledgeable citizenry can compel the proper meshing of the huge
industrial and military machinery of defense with our peaceful methods
and goals so that security and liberty may prosper together.
During this period, this sentiment was reflected within elite intellectual circles across the political spectrum.
Attempts to conceptualize
something similar to a modern “military–industrial complex” existed
before Eisenhower’s address. Ledbetter finds the precise term used in
1947 in close to its later meaning in an article in Foreign Affairs by Winfield W. Riefler.[14][18] In 1956, sociologist C. Wright Mills had claimed in his book The Power Elite that
a class of military, business, and political leaders, driven by mutual
interests, were the real leaders of the state, and were effectively
beyond democratic control. Friedrich Hayek mentions in his 1944 book The Road to Serfdom the danger of a support of monopolistic organization of industry from World War II political remnants.
Just near the end of the Cold War, the diplomat George Kennan made a prediction about the persistence of the military-industrial complex.
In order to justify defense spending in a post-Soviet age, it would have to conjure up a fake Cold War.
And in a self-fulfilling process, by antagonizing foreign powers, it would eventually manage to create a new Cold War in earnest.
George F. Kennan wrote in his preface to Norman Cousins‘s 1987 book The Pathology of Power,
“Were the Soviet Union to sink tomorrow under the waters of the ocean,
the American military–industrial complex would have to remain,
substantially unchanged, until some other adversary could be invented.
Anything else would be an unacceptable shock to the American economy.”
.
Thus, during the 1950s, across the political spectrum among the
American elites, there was a wariness of a technocracy generally — and,
more specifically, of a military-industrial complex.
However, within mainstream America, it was the left wing who were the early adopters of fear of the technocracy.
It started with a countercultural fringe in the 1960s that sought to change society through psychological transformation (free love, psychedelics, music festivals).
By the 1970s, the hedonistic trappings of the counterculture (sex, drugs, and rock and roll) were cynically appropriated by the mainstream more generally.
The critique of the technocracy persisted — but in hopeless form — in the 1970s Hollywood paranoid thrillers that postulated a hidden government.
The deep state is a widely discredited conspiracy theory which claims the existence of a clandestine group of actors who exercise power from within high levels of government, finance, and industry in the United States.
A common perception is that the idea of the “deep state” originated in a right-wing fringe and then migrated to the mainstream.
But, as traced above, the critique of a hidden, technocratic government actually:
originated in various elite circles in the 1950s, then
entered the mainstream in the far left in the 1960s, and then
assimilated into the mainstream in bastardized form in the 1970s, then
popped up on the far right, and finally
wound up on Fox News.
.
But the critique of the deep state goes back even further in American
history to the creation of a post-election “spoils system” — in which
the victor replaces government staff with his own supporters.
In politics and government, a spoils system (also known as a patronage system) is a practice in which a political party, after winning an election, gives government jobs to its supporters, friends (cronyism), and relatives (nepotism) as a reward for working toward victory, and as an incentive to keep working for the party—as opposed to a merit system, where offices are awarded on the basis of some measure of merit, independent of political activity.
The spoils system finds its first systematic example in the election of President Andrew Jackson in 1828.
The term was derived from the phrase “to the victor belong the spoils” by New York Senator William L. Marcy,[1][2] referring to the victory of Andrew Jackson in the election of 1828, with the term spoils meaning goods or benefits taken from the loser in a competition, election or military victory.
This aggressive new patronage system wreaked havoc on the federal civil service.
The Jackson administration aimed at creating a more efficient system where the chain of command of
public employees all obeyed the higher entities of government. The
most-changed organization within the federal government proved to be the
Post Office. The Post Office was the largest department in the federal
government, and had even more personnel than the War Department. In one
year, 423 postmasters were deprived of their positions, most with
extensive records of good service.
What the Wikipedia entry does not mention is that Jackson ideologically justified the spoils system.
Nowhere was the Jacksonian ideal
of openness made more concrete than in Jackson’s theory of rotation in
office, known as the spoils system. In his first annual message to
Congress, Jackson defended the principle that public offices should be
rotated among party supporters in order to help the nation achieve its
republican ideals.
Performance in public office, Jackson
maintained, required no special intelligence or training, and rotation
in office would ensure that the federal government did not develop a
class of corrupt civil servants set apart from the people. His
supporters advocated the spoils system on practical political grounds,
viewing it as a way to reward party loyalists and build a stronger party
organization. As Jacksonian Senator William Marcy of New York
proclaimed, “To the victor belongs the spoils.”
However, to some extent, Jackson’s gutting of the civil service — replacing elites with ordinary people — was limited and fake.
The spoils system opened
government positions to many of Jackson’s supporters, but the practice
was neither as new nor as democratic as it appeared. During his first 18
months in office, Jackson replaced fewer than 1,000 of the nation’s
10,000 civil servants on political grounds, and fewer than 20 percent of
federal officeholders were removed during his administration. Moreover,
many of the men Jackson appointed to office had backgrounds of wealth
and social eminence. Jackson did not originate the spoils system. By the
time he took office, a number of states, including New York and
Pennsylvania, practiced political patronage.
.
However, at least in its rhetoric, Jackson’s spoils system shared the
radical intent of the Cultural Revolution in China in the 1960s.
Alarmed at the rise of a bureaucratic state soon after the Communist Revolution, Mao pursued various policies to undermine it.
In the case of the Cultural Revolution, things quickly degenerated into anarchy and civil war.
Launching the movement in May 1966 with the help of the Cultural Revolution Group, Mao charged that bourgeois elements had infiltrated the government and society with the aim of restoring capitalism. Mao called on young people to “bombard the headquarters”, and proclaimed that “to rebel is justified”. The youth responded by forming Red Guards and “rebel groups” around the country. A selection of Mao’s sayings were compiled into the Little Red Book, which became a sacred text for Mao’s personality cult. They held “denunciation rallies” against revisionists regularly, and grabbed power from local governments and CCP branches, eventually establishing the revolutionary committees in 1967. The committees often split into rival factions and became involved in armed fights known as “violent struggles”, to which the army had to be sent to restore order.
The
Cultural Revolution was characterized by violence and chaos. Death toll
estimates vary widely, with roughly 250,000 to 20 million people
perishing during the Revolution.
.
Preventing the rise of an entrenched elite, as well as the formation
of a centralized state, might also be the purpose of elaborate gift-giving
rituals among the indigenous peoples of the Pacific Northwest.
The abundant resources of the region created economic surpluses that
could undermine tribal life and its profound social cohesion.
Importantly, surpluses were not only given away, but ostentatiously destroyed.
A potlatch involves giving away or
destroying wealth or valuable items in order to demonstrate a leader’s
wealth and power. Potlatches are also focused on the reaffirmation of
family, clan, and international connections, and the human connection
with the supernatural world. Potlatch also serves as a strict resource
management regime, where coastal peoples discuss, negotiate, and affirm
rights to and uses of specific territories and resources. Potlatches
often involve music, dancing, singing, storytelling, making speeches,
and often joking and games. The honouring of the supernatural and the
recitation of oral histories are a central part of many potlatches.
.
In sum, the darkening or “Batman-ification” of the superhero movie is
the superhero movie making a critique of itself — which is a
self-critique of the technocracy.
In particular, the ideals of American society (embodied in Superman) have been subsumed under technology (Batman).
However, there can be dual ideals in society — an official ideal that
promotes the functioning of society, and the greater ideal that society
serves.
As the poet Charles Bukowski said, above and behind every ideal there is a secret hidden ideal.
But the relationship between these two ideals is not that of the
perfect Platonic form like the idea of a circle versus the actual flawed
thing.
Rather, these two ideals are two very different things.
.
The relationship between these two ideals is comparable to that
between “manifest” and “latent” functions in social institutions.
For example, the manifest function of a rain dance within an indigenous culture is to produce rain during a drought.
In contrast, the latent function of the rain dance that skeptical
outside observers would find obvious is the promotion of social cohesion
during a crisis.
“…the “manifest” function of
antigambling legislation may be to suppress gambling, its “latent”
function to create an illegal empire for the gambling syndicates. Or
Christian missions in parts of Africa “manifestly” tried to convert
Africans to Christianity, “latently” helped to destroy the indigenous
tribal cultures and this provided an important impetus towards rapid
social transformation. Or the control of the Communist Party over all
sectors of social life in Russia “manifestly” was to assure the
continued dominance of the revolutionary ethos, “latently” created a new
class of comfortable bureaucrats uncannily bourgeois in its aspirations
and increasingly disinclined toward the self-denial of Bolshevik
dedication (nomenklatura). Or the “manifest” function of many
voluntary associations in America is sociability and public service, the
“latent” function to attach status indices to those permitted to belong
to such associations.”
The relationship between society’s manifest ideal and its latent
ideal finds an analogy within the discipline of political science.
Political scientists must learn political theory in order to function within their fields, such as public policy.
But that kind of basic, local political theory is distinct from
political philosophy, which stands outside and above political science
(at least for the philosopher).
Likewise, on closer inspection, popular culture exhibits latent ideals above and behind manifest ideals.
Hence, Superman is obviously the manifest ideal of American liberal democracy.
But
it is easy to forget that Superman is not Clark Kent in disguise, but
rather the benevolent space alien Kal-El — the son of a Kryptonian
scientist father and an astronaut mother.
In this way, Kal-El is
similar to Sherlock Holmes, who is the manifest ideal of the aristocrat
as scientific investigator dedicated to public service.
.
It should be noted that within society there is a plurality of ideals, some of which are covert.
For example, the dominant ideal of happiness in the USA might be the
display of success as a sign and culmination of a virtuous life — in
particular, owning a house.
That version of the American dream would be a legacy of secularized Protestantism.
Materialism is what one would expect from a democracy, because democratic cultures reflect the mentality of the ordinary person.
But the American dream of achieving a grandiose material goal is probably not suited to 90 percent of people.
That is, the ordinary person might be tempted and transfixed by the fantasy of owning a giant house and giant truck.
However, they would not be happy in either the pursuit or the attainment of those goals.
So often, they are really nice people trying hard to appear to be individualistic high-achievers.
Their real goal is to simply fit in — and there is nothing wrong with that.
They might be better off being content with what they have and living
moderately and responsibly — in line with Buddhism and the original
Christianity.
However, ten percent of the population are only happy when they are striving for the summits of accomplishment.
One can find this elitist notion of happiness in ancient Greek philosophy.
First, happiness is a goal that stands outside of technological ways to attain things.
In the Nicomachean Ethics, written in 350 BCE, Aristotle stated
that happiness (also being well and doing well) is the only thing that
humans desire for their own sake, unlike riches, honour, health or
friendship. He observed that men sought riches, or honour, or health not
only for their own sake but also in order to be happy.
Second, this is happiness as fulfillment through action in which a
worldly goal is being pursued, and not happiness as a state of being
attained through the proper attitude.
For Aristotle the term eudaimonia, which is translated as ‘happiness’ or ‘flourishing’, is an activity rather than an emotion or a state.[43] Eudaimonia
(Greek: εὐδαιμονία) is a classical Greek word consisting of the word “eu”
(“good” or “well-being”) and “daimōn” (“spirit” or “minor deity”, used
by extension to mean one’s lot or fortune). Thus understood, the happy
life is the good life, that is, a life in which a person fulfills human
nature in an excellent way.
Third, the active life is a rational life.
Specifically, Aristotle argued
that the good life is the life of excellent rational activity. He
arrived at this claim with the “Function Argument”. Basically, if it is
right, every living thing has a function, that which it uniquely does.
For Aristotle human function is to reason, since it is that alone which
humans uniquely do. And performing one’s function well, or excellently,
is good. According to Aristotle, the life of excellent rational activity
is the happy life.
Finally, this rational active life is not for everybody, and the
alternative is to lead a more limited simple ethical life (for example,
Buddhism and Christianity).
Aristotle argued a second-best life for those incapable of excellent rational activity was the life of moral virtue.
The argument here is that this rational active life is, in fact, not appropriate for most people.
For most people, the best life is that of modest, reasonable
small-town living with one’s circle of family and friends (much like
Clark Kent growing up in Smallville).
This should be the proper manifest ideal of American life.
For example, the billionaire investor Warren Buffett lives in a relatively modest house in Omaha, Nebraska.
Arguably, if every billionaire and millionaire in America lived like Warren Buffett:
there would be much greater social cohesion in the USA;
Americans would live more within their means; and
millionaires and billionaires would actually be happier.
.
Warren Buffett would then be the exemplar of the manifest ideal of American life.
The rational active life would be the latent ideal of society pursued by few.
That is, the unspoken, tacit purpose of the American way of life
would be to produce a few rational men of action like Thomas Jefferson
and Ralph Nader.
Also, the latent goal would be to produce great scientists and artists.
These latent, elitist goals are necessary to keep society from getting
“stuck in place” because of the moderation and reasonableness of its
manifest goals.
In particular, the materialism of ordinary people makes them easily tempted by the consumerist values of the technocracy.
Society needs to learn to accept societal projects that are not
designed to increase material productivity and that have value in
themselves.
For example, there is a popular notion that society can avoid stagnation by engaging in a grandiose infrastructure project.
Actually, that would exacerbate stagnation.
For example, fusion power is touted as a potentially safe, clean, and cheap form of energy.
However, these material utopian expectations only lead to critics complaining that “fusion power is always five years away”.
Instead, research on fusion power should be pursued because it has value in itself.
The proposal is that the world would invest $10 billion in fusion power
research in the first year, and that this funding would increase by five
percent every year, without any foreseen practical benefit.
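As a rough illustration of the arithmetic of such a schedule (the 20-year horizon below is an arbitrary assumption added for illustration, not part of the proposal), a minimal sketch:
```python
# Sketch of the proposed fusion-research funding schedule:
# $10 billion in the first year, growing 5 percent annually.
# The 20-year horizon is arbitrary, chosen only for illustration.

initial_funding_billion = 10.0   # first-year outlay, in billions of dollars
annual_growth = 0.05             # five percent yearly increase
years = 20

total = 0.0
for year in range(1, years + 1):
    funding = initial_funding_billion * (1 + annual_growth) ** (year - 1)
    total += funding
    print(f"Year {year:2d}: ${funding:6.2f}B  (cumulative ${total:7.2f}B)")

# After 20 years the annual outlay is roughly $25 billion and the
# cumulative investment roughly $330 billion, modest next to global
# energy spending.
```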
It would be the modern equivalent of a sacrifice to the gods in order to avoid spiritual stagnation.
It could be that Americans simply would not understand this.
We would then turn our hopes toward Eurasia as a source of permanent funding for pure research on fusion power.
Even the least developed countries of Eurasia could contribute to
funding research that explicitly might not have any practical payback.