Bon Anomie Productions
On Individuality

Everyone is different; it is the extent to which one perceives, acknowledges and acts on such difference that matters. Community negates difference, subsumes it. It puts things in their proper place, orders them, naturalises. By the same token, individuality, in its strictest sense, reveres difference. To be an individual is to recognise that no social structure is self-evident, that one’s community is arbitrary, that society’s constraints are internal, not external.

Individuality is celebrated in western culture, and aspired to. Yet expressions of one’s uniqueness are highly regulated, by moral norms, public discourse, state institutions. One is permitted to be an individual only on the condition that the manifestations of this being accord with broader social phenomena (such as fashions) and exist independently of behaviours associated with deviance and mental illness. Individuality must be confined to specific domains of activity, such that its expressions do not contaminate society as a whole. By orchestrating this quarantine, which communalises difference, society protects itself from threats to its ontological givenness.

Popular expressions of difference, as in art, music and fashion, serve less to challenge social values than to reinforce them. They are communal in nature, dependent on the recognition and endorsement of such differences by others, both within and beyond one’s community. Difference is domesticated. In other words, western individuality is affected, ostensible. Its surface forms betray an underlying conformity, an acceptance – and perhaps even a celebration – of existing social structures. Thus, in the West, to be different is to opt into society. True difference, in contrast, is anti-social.

 

tags: Individuality, Society, Norms, Philosophy
categories: 2014
Sunday 10.06.24
Posted by David Jobanputra
 

Writing & Ideology

An ideology (from the Greek idea (image, form, idea) and logos (system of thought, discipline)) can be understood as a set of aims and ideas that direct one’s action within the world. The term may refer to a worldview, to a political philosophy or, in the Marxian sense, to an instrument of social reproduction – the ruling class’s means of justifying the social order. In epistemology, ideology is often synonymous with paradigm (a broad philosophical and theoretical framework); in semiotic theory, it signifies ‘a unitary object that incorporates complex sets of meanings with the social agents and processes that produced them’ (Hodge 1999). Foucault’s notion of discourse (an entity of sequences of signs (enouncements) that assign specific repeatable relations to objects, subjects and other enouncements (Foucault 1969)) is comparable to ideology, though the former is concerned chiefly with verbal systems. For the purpose of this short essay, however, it appears quite apposite.

My aim here is not to present a detailed discursive or ideological analysis of academic writing (which, besides, is well beyond the abilities of any one writer), but rather to consider briefly that which is ‘left out’ in such an approach. I am interested in the limits of the discursive representational form on which academia is predicated, in the modes of thought and symbolisation that are excluded through an emphasis on verbal communication. To extricate myself momentarily from this thicket of irony, and to clarify, I offer this vivid invocation from Isadora Duncan, the mother of modern dance: ‘If I could tell you what it meant, there would be no point in dancing it’.

It has been argued by authorities from diverse fields that there are two (or more) orderings of reality simultaneously present in humans. Freud compared the unconscious processes of dreaming to symbolisation in the arts, and contrasted these with the structured systems of speech. Gregory Bateson, a pioneer of cybernetic theory, drew a similar distinction between verbal (or digital) and iconic (or analogue) coding, a duality that he related to the conscious and unconscious levels of the mind. For Bateson, the messages and meanings communicated by art forms are achieved at least in part at an unconscious level, or at the interface between conscious and unconscious levels. The devices of propositional language and verbal discourse – tense, simple negatives, modal markers, etc. – are here absent, yet the iconic can communicate with an intensity of experience that is largely unavailable in speech. For the communication of ideas and experiences, then, language may both help and hinder.

What is it to argue that language constrains? To the extent that discourse sets ‘the laws of possibility, rules of existence for the objects that are named, designated or described within it, and for the relations that are affirmed or denied in it’ (Foucault 1972:91), it is certainly true that individuals are not only constrained but indeed constituted by language. Unlike those forms of iconic communication discussed previously, language consists of strict conventions to which we are obliged to adhere, lest all meaning be lost. (Poetry is perhaps an exception to this rule.) As such we are constrained, but not only that. For in thought itself, in our eternal inner monologues, we are slaves to verbosity. Language defines not only what we can say, but also what we can think.

Professor Ivanic invokes a theory of practice to situate the self as writer; we both shape and are shaped by the structures of language. Yet this is a Pyrrhic victory. For the greater part we are ‘subjected’ to language, blind to ‘resemblance’ (Foucault 1966) and ‘the form of life’ (Wittgenstein). Thus, writing is to ideology (in its purest sense) a pollutant; the perfect system of thought is inexpressible.

tags: Writing, Creativity, Ideology
categories: 2009
Sunday 10.06.24
Posted by David Jobanputra
 

The Bareback Human Oasis

“And although it seems heaven sent / we ain't ready, to see a black President”
– Tupac Shakur

"When Barack wins this evening, it's a victory for all of America - because black people and brown people and red people and yellow people all understand that he understands that all villages matter."
– Oprah Winfrey

Tupac is dead. And as of November 5th 2008, so too perhaps is history. Not in a Fukuyama-melodrama kind of a way. No. But in becoming the most eligible candidate for assassination in a generation, Barack Obama appears inadvertently to have ‘redefined’ history. Or to have inverted the space-time continuum, which is likely well within his powers.

Consider: the term ‘history’ has traditionally been applied to an aggregate of past events, or to a branch of knowledge concerned with the systematic narrative of such events. (Granted, history is a bit of a cad, having fathered numerous illegitimate intellectual offspring and probably screwing Hegel on alimony. So treat the preceding definition loosely, like history has his women.) ‘Past events’, like a wall coming down in Berlin, or bombs falling on a harbour, or the penning of a timeless creed that sums up the spirit of a people (cf. Tribe Called Quest, A. 1991. Can I Kick It? New York: Jive Records). Past events.

What business, then, have contemporary affairs in being packaged up and sold as ‘historic’? Does not ‘the historic’, as a temporal division, necessitate some future period of reflection, the privilege of, in President-speak, our children, and our children’s children, and so on? Is it not this interval itself that ultimately constitutes history?

It seems the post-modern obsession with reflective histrionics has finally caught up with itself. The temporal dislocation of history, which allows for events to exist simultaneously both in the present and in an imagined future-past, has rendered its study overtly metaphysical. But wait. Is this really anything new? Haven’t prophets, leaders, sportsmen and the like been looking to ‘situate’ themselves in future-history for many centuries past? As social beings predisposed to seeking the approval of others, is it not inevitable that we entertain ideas about how we will be perceived by coming generations? Has anything changed? And how long is this string of rhetorical questions?

In recent times, the definition of the term ‘historic’ appears to have been expanded, with many new meanings having been appropriated and introduced into public discourse by politically dominant groups. (That singular, current events can now be deemed ‘historic’ testifies to this subtle semantic manipulation.) The word is increasingly applied to the cessation of processes, such as wars or Olympic medal droughts, and occasionally even to incipient ones. Consider a recent example:

When Barack Hussein Obama was ordained as the 44th President of the United States, leader of the free world, saviour of the solar system and legitimate puppy-purchaser, celebration was unconfined. A jubilant nation witnessed scenes of unparalleled excitement; not since November 1989 had the world’s media captured in such totality the joy and anticipation of a people. What do these events have in common? Teary, rosy-cheeked girls; a cross-cultural camaraderie; the collective warmth of heart that fought off the chill of a November midnight? In Berlin they played Beethoven’s 9th, with the word “Joy” (Freude) changed to “Freedom” (Freiheit); in Chicago’s Grant Park, where some 750 billion people had gathered to witness the Second Coming, they played Jackie Wilson’s 1967 hit “(Your Love Keeps Lifting Me) Higher and Higher”. Yet despite these startling similarities, the two occasions differed in their temporal orientations. While the fall of the Berlin Wall was celebrated chiefly as an ending, the coming of Barack was heralded as a beginning. Of what remains to be seen, and as such the widespread elation that has accompanied his ascension seems at once baffling and injudicious.

Yet the excitement is neither illusory nor localised. The global community has endorsed the occasion of Obama’s election as ‘historic’, based on novel, pervasive and slightly contradictory definitions of this term. These new strains of ‘the historic’ interbreed with older, grander and more abstract ones, in turn producing highly resistant, air-wave-borne forms. Upon contact with these virulent ideas, people come to believe that they are indeed witnessing a pivotal moment in some logical sequence – human progress perhaps - leading to symptoms of (unbounded and unfounded) hope and joy. Side-effects include back-slapping and fist-pumping; celebrity sufferers may choke on their own soundbites.

Seduced by the sirens of history, paralysed in a future-past, it is an excessive pattern of emotionality and attention-seeking behaviour that characterises the children of HPD (Histrionic Personality Disorder). Hungry for history’s approval, and impatient for his arrival, they build history in their own image. Chants resound: “Yes we can, yes we can, yes we can”. In sun-scorched canyons, through bristling forests and ‘cross the icy wastes, sweet melody on the breeze: “…taking me higher, than I’ve ever been lifted be-…”

In the distant valleys of time, history lies in wait.

All this is absolute nonsense of course. Except maybe Tupac’s quote at the beginning – I can think of 49% of a certain adult population who would probably agree with him there. But as Oprah so astutely observed, it’s not about black and brown and red and yellow, not anymore. We have to rid ourselves of this Lego mentality if we’re not to be overcome by those Eastern powerhouses, the Reds, the Yellow Reds and the Browns.

tags: History, Politics, USA
categories: 2008
Sunday 10.06.24
Posted by David Jobanputra
 

The End of Innovation

“Inventions reached their limit long ago, and I see no hope for further development.”
– Julius Frontinus, 1st Century AD

Humanity has reached a “technological plateau”. So say the so-called “end of innovation” theorists – a small but increasingly vocal group of economists heralding an era of technological and economic stagnation. In the words of Tyler Cowen, a leading proponent, there are “no more low-hanging fruit” – all the good inventions (or at least the easy ones) have already been invented. This view is shared by Robert Gordon, another key theorist, who argues that there are only a handful of truly fundamental innovations – electrification, transportation, communication – and that most of these have already happened. "There will be more innovation”, he predicts, “but it will not change the way the world works in the way electricity, internal-combustion engines, plumbing, petrochemicals and the telephone have."[1]

Gordon cites the example of human mobility. In the year 1900, the quickest way to move people and goods was by horse and cart. There was no heating or air-conditioning, and certainly no USB charger. A horse and cart travels at one per cent of the speed of sound. Fast-forward 60 years, and we have the Boeing 707, which travels at 80 per cent of the speed of sound. Fast-forward another 60 years, and we’re still travelling at the same speed. A similar “plateauing” is observable in electrification, car ownership and central heating, all of which, Gordon notes, “went from zero to 100 per cent” [in the US] in a few short decades. These innovations spurred massive growth, but their reach is now absolute, and hence their influence is waning. The IT revolution, he argues, cannot match the impact of these earlier innovations in terms of growth in productivity, and thus living standards.

Yet the “end of innovation” is not a universally accepted theory, even among economists. The McKinsey Global Institute, one of the world’s leading private-sector think tanks, recently published a 152-page report titled “Disruptive technologies: Advances that will transform life, business, and the global economy”, which identified 12 emerging technologies with “the potential to truly reshape the world”.[2] These include mobile internet, autonomous vehicles, cloud technology and the Internet of Things. In contrast to Gordon’s bleak projection, the report posits a direct correlation between future innovation and growth. For example, emerging energy technologies, such as improved energy storage and renewable energy, could boost overall economic growth, while others, such as advanced robotics and 3D printing, could bolster productivity and growth within the manufacturing sector. As noted by the Forbes technology journalist Gil Press, “what is interesting…about this list of disruptive technologies is how most of them are derived from a single disruption or revolution, that of information technology”.[3] Crucially, Press argues, information technology is radically different from the “fundamental innovations” that preceded it: whereas electricity, steam trains and the like have a narrow, specific purpose, information technology has limitless applications. IT is not a single innovation, then, but a whole new way of innovating.

“The only function of economic forecasting is to make astrology look respectable.”
– John Kenneth Galbraith

So what should we make of the “end of innovation”? It is the wont of economists to predict the future, yet predicting innovation may be beyond even them. In the words of Ben Bernanke, Chairman of the US Federal Reserve Board, “innovation, almost by definition, involves ideas that no one has yet had, which means that forecasts of future technological change can be, and often are, wildly wrong.” We will continue to innovate, he predicts, just as we’ll continue to forecast the end of innovation.[4]

[1] https://www.economist.com/briefing/2013/01/12/has-the-ideas-machine-broken-down

[2] https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/disruptive-technologies

[3] https://www.forbes.com/sites/gilpress/2013/06/15/the-end-of-innovation-crowd-doesnt-get-it/?sh=2003225840c0

[4] https://www.federalreserve.gov/newsevents/speech/bernanke20130518a.htm

tags: Innovation, Technology, Future
categories: 2021
Sunday 10.06.24
Posted by David Jobanputra
 

Ethical Imperialism

“Man is the only animal that laughs and weeps, for he is the only animal that is struck with the difference between what things are and what they ought to be.”

– William Hazlitt (1778 – 1830)

Our society is forged upon ideals of the sanctity of human life, though paradoxically the chief authority of the age, the decontextualised rationality of science, upholds no such belief.  That the life of the individual human is more valuable than that of another species, let alone that it is somehow ‘sacred’, is a notion in direct contradiction to the West’s founding ontological model, namely the theory of evolution, and is, as many have observed, largely derived from archaic theological and philosophical separations of humanity and ‘nature’.

The implications of this are far-reaching, yet they cannot reasonably be accepted.  The current strategy is to try to widen the privileged domain of ‘humanity’, granting citizenship to some genetically similar and charismatic megafauna, such as the higher primates.  Still, the arbitrary factors – language, transmitted learning, tool usage, self-recognition, etc. – that once accounted for humanity’s self-separation from the rest of creation are employed here.  Other life, meanwhile, remains of lesser value, its status unquestioned.

What is the future of such attempts at ethical imperialism?  Can other species be given the rights of humans, and should they be?  Would they want them?  Applied ethics works solely as an anthropocentric model of customs and values.  Human beings, by virtue of their evolutionary past, are not geared towards the protection of other species (except perhaps for domesticates, most of which enjoy protection only preceding predation).  A human ethical code is thus heavily skewed.  For humans, operating within the dual streams of evolution and ethics, existence (specifically, social interaction) is an unceasing trade-off between ‘natural’ (evolved) and ‘cultural’ (moralised) behaviours.  One example is the ‘natural’ act of rape, which is morally objectionable in most human cultures.  No other species exhibits such dynamic restraints on its behaviour – it is, if you like, immersed solely in the evolutionary stream – and so should arguably not be afforded human ethical consideration, unless in a completely revised fashion.

What alternatives exist to extending the boundaries of personhood and its concomitant system of ‘rights’?  It is now widely believed, if not with any demonstrable change in attitude, that humans, animals, plants and other life forms have evolved from the same stock in a process that began around 3.7 billion years ago.  One might fairly suggest, therefore, that all life is in fact of equal status.  Extending the logic of ethics, one might well conclude that our interactions with other life forms should not exhibit preference or prejudice, also known as speciesism.  Needless to say, such a position is untenable.  Most life forms depend on the consumption of other life for their continued existence, and this is often only accomplished through cooperation with other members of the same species (e.g. pack-hunting).  Preference and prejudice, it seems, are part and parcel of existence.

If other species cannot reasonably be granted human moral status, should humans’ ethical codes follow those of the rest of the natural world?  In short, should ethics exist at all?  Such a question requires a detailed understanding of morality’s evolutionary context.  Darwin likened human morality to the altruistic behaviour of other social species, suggesting that, as for meerkats and flocking birds, altruistic acts, though of little or no benefit to the individual, are adaptive at the level of the group:

It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an increase in the number of well-endowed men and advancements in the standard of morality will certainly give an immense advantage to one tribe over another.  There can be no doubt that a tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to aid one another, and to sacrifice themselves for the common good would be victorious over most other tribes; and this would be natural selection (Darwin, C. 1871.  The Descent of Man. New York: Appleton. p.166).

For Darwin, morality represented not humanity’s most vital break from the tyranny of nature, as some still hold, but rather an evolved response to inter-group competition.  As such, it was not associated with the belief in the immutable sanctity of human life, which owed its existence to the dualist epistemologies of Judeo-Christian doctrine and Cartesian philosophy.  Instead, moral tendencies functioned, at least initially, as a means for ensuring group survival.  The human propensity for unselfish acts thus stems from an innate capacity for social cooperation in the name of one’s family, tribe or clan.  It is neither inclusive of other species nor indicative of the inviolability of human life. 

Human ethics, then, has a biological origin, one which has been instrumental in our cultural evolution.  Biomorphising ethics – bringing human morality in line with that exhibited by other species – is therefore not a solution, for this would demand a deconstruction of age-old systems of laws and rights, upon which all civilisation has been constructed.

We have now established some parameters for our understanding of ethics.  Ethical judgements cannot reasonably be applied to any species other than our own, as this is to betray the origins and function of morality.  [The urge to do so can be seen as an evolutionary by-product, or spandrel, of our ancestral inclination towards pro-social behaviour for the benefit of the group, just as our fondness for baby animals is triggered by the ‘hard-wired’ obligation to protect our own infants.]  Neither can ethical codes be totally jettisoned, as this would be to commit societal suicide.  Instead, we must acknowledge the limitations of our value systems and seek to apply and refine these only within our species.

This brings us to the problem of empathy.  It is all very well to state that our morality is merely a device for ensuring our clan’s survival, but what of the myriad permutations and nuances that now, after thousands of years of cultural development, constitute a society’s ethical code?  Surely, in seeking to iron out every inequality, human morality has truly attained a righteous position?  This is a question best left to the ethicists.  The notions of right and wrong, though by no means universal, are so much a part of most cultures that to question them is to risk accusations of sociopathy.  Despite this, most people in the western world are confronted by moral ambiguities on a daily basis.  Many of these stem from apparent contradictions to the idea, mentioned previously, that all human life is holy.  We are, for example, less concerned by the loss of human life in far off places than on our own doorstep (which, incidentally, fits well with the Darwinian perspective).  Whether it is desirable to attempt to ‘rectify’ these contradictions, that is, whether we should strive to expand upon our evolved capacity for altruism and empathy, is again dependent on this notion of human life as sacrosanct.  This is a problem which has only originated within the last two or three thousand years, perhaps since the development of city states, and which has only really gained currency since the dawning of global mass media.  It can be summarised as follows:

  • Humans living in small groups develop a tendency towards pro-social/moral behaviour in the interests of the group.

  • Moral behaviour is codified through religions, still in the interests of group-living.

  • Moral behaviour is further codified and intellectualised in philosophy.

  • Religious and philosophical notions of right and wrong become increasingly rigid, and are used, in conjunction with Judeo-Christian and Enlightenment thought etc., as evidence of humanity’s uniqueness.

  • Humanism challenges Judeo-Christian orthodoxy but continues to assert humanity’s uniqueness, leading to the idea of the ‘sanctity’ of human life.

  • The ‘sanctity’ of human life becomes the dominant ideology of the modern era.

  • Global communication stretches the boundaries of one’s ‘moral community’ to include one’s whole country and, finally, the entire planet.

  • Disproportionate concern over the plight of others in one’s global moral community hints at limitations of human empathy.

Most people feel a moral obligation towards their family and local community.  Others extend this feeling to their country (patriotism).  In recent years many people have been encouraged to include all humanity in their moral community, a scheme facilitated by global mass media, which can trigger feelings of empathy – originally reserved for our close kin – from a great distance.  In the days prior to widespread media coverage of wars and famines, such events were morally unproblematic.  Now, however, television images engage our empathetic brains, giving rise to apparent moral ambiguities.  In a similar vein, the separation, objectification and commoditisation of domestic animals (and the extinction of some wild ones) is causing increased confusion for that part of our reason that serves to protect our close kin.  Vegetarianism and wildlife protection agencies are just two phenomena triggered by this ancestral predisposition to kin protectionism.

What can be done about these alleged moral ambiguities?  Should we confine altruistic acts to our closest kin, as ‘dictated’ by our evolutionary history, or should we confront our ‘hypocrisies’ and strive to extend protection to all humanity (as suggested by our ‘evolutionary present’)?  These questions prompt further, larger ones.  Again, the idea of human life as sacred demands we seek to look out for all our species.  But once the myth of sanctity is neutralised, where does that leave us?  To whom, if anyone, do we have moral obligations?  In this modern atomised society the idea of a moral community of close kin has lost all meaning; is this therefore also true of the concepts of good and bad, right and wrong?

The answer to such questions, if any indeed exists, lies within the domain of meta-ethics.  Meta-ethical schools of thought are commonly divided into realist and anti-realist.  Theorists of the former hold that moral values are objective, intrinsic properties of the world that are simply discovered or intuited.  Anti-realists, to the contrary, assert that moral values are contingent on the history and belief systems of individuals and cultures, and that differing moral codes are as numerous as the people on earth.  It is not my intention here to provide tentative responses to the above anomalies.  Rather, I wish to explore the nature of this dilemma in the hope of shedding light on some causal mechanisms.

It should be apparent that the paradox discussed here is above all else a secular problem: unlike moral codes enmeshed in religion, secular ethics afford no consensus and no ultimate salvation.  Morality, extrapolated over centuries by human reason, is now crumbling under the weight of global relativism (akin to what Henri Bergson termed the ‘dissolving power’ of human intelligence).  The real issue here is not so much ‘what is good:bad/right:wrong?’ as ‘how can we agree on a notion of good:bad/right:wrong?’  It is this writer’s opinion that the lack of consensus on moral issues in western secular society derives from the decontextualisation of many fields of knowledge, such as scientific and economic thought.  By disembedding such knowledge, reducing it to that which can be tested empirically, modelled and predicted, we have inadvertently narrowed our realm of understanding and meaning.  Economics, for instance, gives objects, experience and beings a specious commensurability:

The application of a common monetary metric to dissimilar things reduces their qualitative distinctiveness to the status of mere quantitative difference.  The most appropriate answer to questions of the type “What is the difference between a forest and a parking lot?” becomes so many dollars per acre (Rappaport, R. 1979.  Ecology, Meaning and Religion. Richmond, CA: North Atlantic Books. p. 130).

Of great importance to the sustainability of human societies is ‘higher-order meaning’[1] – not of distinctions but of similarities underlying distinctions, as in metaphor, symbol, aesthetics and so on.  It has been argued by Gregory Bateson, among others, that such levels of meaning compensate for the inability of the linear, problem-solving nature of human consciousness to comprehend the circular connectedness of living systems.  In other words, aesthetics, ritual and so on can make people aware of the holistic nature of existence by alluding to underlying unity.  A common understanding of this could do much to dampen the pathological insecurity of modern moralists.

What does this mean for the future of ethics?  Given the failure of conscious reason alone to construct solid moral foundations for secular society (as described above), it seems much could be gained from re-embedding ethical knowledge.  Through a more holistic conception of the nature of living systems – their circularity and interconnectedness – we would likely be better qualified to answer questions relating to the way in which individuals should act in the world.  This is less a matter of building watertight moral systems than of gaining some perspective on, and reverence for, the systems of which we are a part.  It should be of little surprise that all cultures and creeds that have managed their societies and environments sustainably have exhibited a heightened awareness of their part in a great network of life, coupled with respect for each node and interconnection.  The future course of ethics, if it is to be anything other than deleterious, depends on realigning it with the spiritual and, in so doing, restoring meaning to the natural world.

 

[1] This term was coined by the anthropologist Roy Rappaport (1979) in his book Ecology, Meaning and Religion. Richmond, CA: North Atlantic Books.

 

tags: Imperialism, Colonialism, Ethics, Speciesism
categories: 2011
Sunday 10.06.24
Posted by David Jobanputra
 

Ode to the Open-Minded

There can be few aspects of modern palaeopathological research more morbidly intriguing than trephination (also known as trepanation), the surgical removal of a disc of bone from the skull of a living individual without causing damage to the underlying blood vessels and brain. The procedure constitutes the earliest form of surgical intervention for which we have objective evidence, and its continued practice amongst both isolated tribal societies and a rare ilk of free-thinking Westerners has cemented its current position at the intersection of age-old and New Age healing.

In its ancient form, trephination was a lengthy and traumatic affair. Prehistoric surgeons experimented with a variety of tools and techniques to facilitate the extraction of bone from the cranium. In Europe, skulls were commonly trephined using the flint-scraping method, whereby an elliptical orifice was created by gradually scraping away the lamina externa (outer table) and diploë and then, with considerable delicacy, the lamina interna (inner table) to expose the dura mater. Elsewhere, the modus operandi involved ‘boring’ a series of small, closely adjoining perforations that extended to the lamina interna. The strands of bone between perforations were then cut, and the resulting piece of bone levered out. Arguably the most macabre approach was the push-plough method, in which a series of curved grooves were scraped into the skull to form a thin recessed circle. Repeated scraping would eventually release a smooth roundel of bone, although this could take several hours, or even days. It is therefore unsurprising to learn that many societies utilised native plants with psychoactive and medicinal properties to provide much-needed pain relief. (The high survival rates in Peruvian trephinations hint at the adoption of an effective surgical antiseptic or anaesthetic, and given the fundamental and spiritual role of the coca plant in their culture, it is likely that its cocaine-bearing leaves fulfilled this function.)

Skulls exhibiting trephination’s telltale orifices have been excavated at sites of almost every archaeological period and location, from Stone Age East Africa to 17th century Scandinavia. There appear to have been centres of concentrated surgical activity in Neolithic France and pre-Incan and Incan Peru, although it is now evident that the procedure predates both cultures by several millennia. The earliest documented trephined skull was recovered from the Vasilyevka II cemetery in the Dnieper Rapids region of Ukraine (radiocarbon dated to 7,300-6,220 B.C.). The skull belonged to a man who was over 50 years old at death and showed a healed lesion of several centimetres in diameter on the left of the frontal bone. The discovery of similar healed trephinations on every habitable continent has widespread implications for our understanding of the movement of prehistoric peoples and ideas. Trephination was being practised on a global scale long before the earliest known trans-Atlantic and Pacific voyages, yet many academics have refused to believe that such a bizarre procedure could have come about as an independent innovation in more than one location. We are forced to conclude, therefore, that our prehistoric forefathers boasted either a precocious seafaring capability or an instinctive predilection for neurosurgery.

The most arresting aspect of trephination concerns our ancestors’ motivation for indulging in such a hazardous and traumatic procedure. Current consensus holds that trephinations were performed chiefly for the relief of intracranial maladies, such as depressed fractures, scalp wounds, concussion and, in Peru, lesions of a syphilitic nature. However, because the majority of trephined skulls show no signs of trauma, it is likely that the alleviation of headaches was the fundamental motive. Other authorities have argued that while trephinations were undoubtedly performed for the relief of intracranial maladies, in prehistory these ailments were ascribed to evil spirits thought to be dwelling within the head. The cure, therefore, was to open the skull and release the spirits. This may have inadvertently improved the patient’s condition, and hence the surgical intervention persisted. The paracetamol generation may have little sympathy for such severe curative methods, but for many centuries it was thought that the bodily organs were the seat of individual emotional attributes, such as courage (to be found in the heart), anger (based in the spleen) and so on, and that these could be manipulated accordingly. It is little wonder, then, that prehistoric skull surgery was approached in a similarly blasé manner, perhaps stemming from the belief that the brain was the seat of one’s psychological characteristics. Indeed, the anxiety associated with modern neurosurgery is derived largely from the relatively recent realisation of the brain’s fundamental importance as the central control system of the body.

Paul McCartney, in a 1986 interview in Musician magazine, recalls John Lennon asking him and his wife, Linda, “You fancy getting the trepanning done?” They declined. This was during the late ‘60s, when the Beatles were extolling the virtues of LSD, the Velvet Underground had chosen to ‘nullify’ their lives with heroin, and millions of other young people were beginning to appreciate that eluding the semblance of known things through the use of drugs is one of the perennial avocations of mankind. Some nine millennia after its conception, trephination was seized upon by this enlightened bunch as a subtler, safer, and more prolonged remedy for the pain of consciousness.

Among the new proponents of the age-old practice was Amanda Feilding, an Oxford dropout who performed and filmed her own trephination with the aid of a bathroom mirror and an electric drill. Like many others, she reported the onset of a mildly euphoric sensation, of relaxation and peacefulness. (According to trephination advocacy groups these changes result from the restoration of the full pulsation that is lost when the skull is sealed in childhood. Such pulsation inflates the brain’s capillaries, accelerating its metabolism and empowering it to permanently regain its youthful level.) Feilding ran for Parliament in the late ‘70s under the banner of “Trepanation for the National Health”, and over the course of two elections she succeeded in convincing 188 voters of their entitlement to free professional trephining. She later married Lord James Neidpath, a former professor at Oxford, where he taught international relations to a young Bill Clinton. Encouraged by his new wife, Neidpath travelled with her to Cairo, where they found a surgeon willing to trephine his skull for around $2000. Within a few hours Neidpath says he felt the effects. “It seemed to be very beneficial.”

So what does the future hold for trephination and, indeed, other forms of mind-enhancing masochism? Among its primitive practitioners, trephination will persist so long as they do. In the West, meanwhile, it is likely that the development of highly sophisticated virtual worlds will negate the need for conventional avenues of escape. Virtual reality might seem preferable to skull surgery or drug use, but it is no less perilous. As John Gray notes in Straw Dogs: ‘the world disclosed in ordinary perception is a makeshift of habit and convention. Virtual worlds disrupt this consensual hallucination, but in doing so they leave us without a test for a reality that is independent of ourselves’. Rarely does the past challenge the future for shock value.

 

 

tags: Trephination, Trepanation, Archaeology, Anthropology, History, Medicine, Altered States of Consciousness
categories: 2003
Sunday 10.06.24
Posted by David Jobanputra
 

Better Off Without?

It is a commonly held belief in many secular societies that religion is not only unnecessary but also, in an historical perspective, inextricably linked with violence and hatred on an unparalleled scale. The implied assertion, then, is that religion has been little more than an ornamental obstruction to our species' development.

Neither history nor anthropology knows of any society in which religion has been totally absent, and even those modern states that have attempted to abolish religion have replaced it with beliefs and practices which themselves seem religious (Rappaport 1971). The anthropologist E. B. Tylor, writing in 1871, attempted to account for the universality of human religious beliefs by reference to the psychic unity of humankind. It is the experience of dreaming, posited Tylor, that has suggested to all men the existence of a soul, and it is from this primordial notion that all religion has evolved.

At the turn of the last century, Tylor's view was challenged by the great sociologist Emile Durkheim, who asked "How could a vain fantasy have been able to fashion the human consciousness so strongly and so durably?". It cannot be accepted, he argued, that "systems of ideas like religions, which have held so considerable a place in history, and from which, in all times, men have come to receive the energy which they must have to live, should be made up of a tissue of illusions" (1961).

As Rappaport (1971) has noted, it is both plausible and prudent to assume, at least initially, that anything which is universal to human culture is likely to contribute to human survival:

"Phenomena that are merely incidental, or peripheral, or epiphenomenal to the mechanisms of survival are hardly likely to become universal, nor to remain so if they do. When we consider further that religious beliefs and practices have frequently been central to human concerns and when we reflect upon the amount of time, energy, emotion, and treasure that men have expended in building religious monuments, supporting priestly hierarchies, fighting holy wars, and in sacrifices to assure their well-being in the next world, we find it hard to imagine that religion, as bizarre and irrational as it may seem or even be, has not contributed positively to human evolution and adaptation."

Would not an enterprise as expensive as religion have been defeated by selective pressures if it were merely frivolous and illusory? Surely its benefits must outweigh its costs? Rappaport's hypothesis, therefore, is that religion has not merely been important but crucial to human adaptation.  If such a contention is valid, as much evidence since amassed has suggested it is, it may prove a further thorn in the side of those secular humanists who so readily and naively engage in the religious hatred they supposedly abhor. The reactionary dismissal of "Religion" based only on consideration of its costs is akin to throwing away one's stove because one occasionally burns one's fingers. So, would humankind somehow have been "better off" without religion? Ask an orangutan.

 

Further Reading

Tylor, E. B. 1871. Primitive Culture. London: John Murray.

Durkheim, E. 1961. The Elementary Forms of the Religious Life. New York: Collier.

Rappaport, R. A. 1971. The sacred in human evolution. Annual Review of Ecology and Systematics 2: 23-44.

 

tags: Religion, Ritual
categories: 2007
Sunday 10.06.24
Posted by David Jobanputra
 

The Ethics of Domestication

It is evident that Western perceptions of and attitudes toward animals (and ‘nature’ in general) have their roots firmly embedded in the Judaeo-Christian philosophical tradition (Serpell 1986:122). According to this tradition, the Earth and the animal and plant species which inhabit it were created specifically to serve the interests of humanity (Thomas 1983:17). In the biblical accounts of creation, for example, God creates humans ‘in His own image’, and awards Man ‘dominion over every living thing that moveth upon the earth’ (Genesis 1:24-28). Here it is not my intention merely to highlight further examples of such outdated narratives. Rather, I wish to demonstrate that Western attitudes toward animals, from at least the time of Greek writing, are part of a wider-scale anthropocentrism inherent in Western theological and philosophical discourse.

The myth of human supremacy has two major cornerstones: the rationalist philosophies of ancient Greece (Singer 1995:188) and the fundamentalist dogmas of Medieval and Renaissance Europe. In the case of the former, chiefly the works of Plato and Aristotle, intellect and the power of reason were exalted above all other human faculties. By the seventeenth century anthropocentric thought had become far more than a mere arbitrary ideal; ‘it was a fundamental and fiercely dogmatic moral precept whose exponents vigorously and…violently opposed alternative doctrines’ (Serpell 1986:124-25). With the establishment of the Inquisition, those who doubted the Aristotelian notions of a human-animal divide or a geocentric Universe could be harshly persecuted. Copernicus (1473-1543) and Giordano Bruno (1548-1600) revived the theory that the Earth revolved around the Sun. Bruno even speculated that the Universe was in fact infinite, and concluded that our solar system was neither at the centre of the Cosmos nor unique: ‘man is no more than an ant in the presence of the infinite’ (Singer 1995:199). Having refused to recant these heresies, Bruno was burnt at the stake (Lovell 1979:3). A similar fate awaited anyone who ‘threatened to undermine the distinction between human and animal, culture and nature’ (Serpell 1986:125). Harmless cults involving nature worship, and superstitious rituals related to ‘pagan divinities of grove, stream and mountain’, were ruthlessly suppressed (Thomas 1983:22).

The martyrdom of Giordano Bruno and the persecution of Galileo in the early seventeenth century marked a (minor) turning point in the tide of humanism that had afflicted European thought for the previous five centuries. Along with the astronomers Kepler, Hooke, and Newton (Lovell 1979:4), the moral philosopher Jeremy Bentham played a significant part in this transition. With regard to the question of duties toward animals, Bentham did not dispute the fact that, in many respects, humans were superior to animals. This fact was, however, irrelevant. For Bentham, ‘the question is not, Can they reason? nor Can they talk? but, Can they suffer?' (Singer 1995:7). Serpell (1986:129) notes that, ironically, the Cartesian vivisectors of the previous century had sown the seeds of their own destruction. The evidence amassed on animals’ internal anatomy and physiology, particularly the underlying mechanisms and responses, seemed to suggest that animals and humans experienced similar sensations of pain and discomfort. Cruelty to animals was, therefore, comparable to cruelty towards an irrational and speechless human infant.

The humanist heritage is a pervasive one. Despite several centuries of near-constant challenges to Christian dogma, many of which (Copernicus 1543; Darwin 1859) are now taken as ‘gospel’ fact, the ‘overwhelming majority of humans…are speciesists’ (Singer 1995:9). True, public vivisections are now a rare sight in town squares, and products tested on animals are becoming increasingly unpopular. Yet our treatment of domesticated animals does little to reflect this trend. The detached commoditisation synonymous with intensive farming, or agribusiness, has ushered in a new era of abuse, and spawned a paradoxical out-of-sight-out-of-mind attitude:

…most pigs now spend their entire lives indoors. They are born and suckled in a farrowing unit, raised initially in a nursery, and brought to slaughter weight in a growing-feeding unit…They are sent to market at between five and six months of age weighing about 220 pounds (Singer 1995:123)

Broiler chickens are killed when they are seven weeks old (the natural lifespan of a chicken is about seven years). [They] may have as little as half a square foot of space per chicken…Under these conditions, when there is normal lighting, the stress of crowding and absence of natural outlets for the birds’ energies lead to outbreaks of fighting, with birds pecking at each other’s feathers and sometimes killing and eating one another. Very dim lighting has been found to reduce such behaviour and so the birds are likely to live out their last weeks in near darkness (Singer 1995:99).

This prompts the enquiry: do humans have an ethical obligation towards domesticates? On a deeper level, it raises the question whether there is something wrong with the condition of domestication itself (Palmer 1995:13). In the following section, these issues will be addressed in the light of current and past consensus on the origins and nature of domestication.

Domestication as Contractualism

Any investigation into the ethics of domestication requires consideration of both the process of domestication and its beneficiaries. Animal domestication is essentially an evolutionary phenomenon involving a symbiotic relationship between two species (Bökönyi 1989:24). It is widely acknowledged that the species which were amenable to domestication all shared similar characteristics, such as sociability, hardiness, free-breeding, and the ability to communicate (Clutton-Brock 1987:15-16). Budiansky (1992:15) and others have maintained that the earliest domesticates were scavengers who chose for their own benefit to live close to human settlements. Thus domestication is conceptualised as a process of co-evolution between two species, one of which gradually ‘exchanges’ wild attributes (fear, aggression, etc.) for the food and protection afforded by the other. Similarities in communication structures and dominance/submission hierarchies between human beings and other species thenceforth permitted the former to assume a role equivalent to pack- or herd-leader, thus facilitating domestication. Budiansky (1992) has argued that such associations are mutually beneficial: by cooperating with the dominant species on Earth, domesticates have ensured a steadily increasing population while their wild counterparts slowly veer towards extinction. In this respect, domestic animals are an evolutionary success story. For this reason, some authorities speak of domestication as a bargain or contract between humans and certain other animals. As J. B. Callicott once put it, there is a kind of ‘evolved and unspoken contract between man and beast’.

If one takes as fact the notion that animals voluntarily chose to associate with humans prior to domestication, one might feel justified in applying contractualism to the relationship between humans and domestic animals. Contractualist theories are those that justify moral principles by appealing to a social contract that is voluntarily committed to under ideal conditions for such commitment. According to Rawls (1972:11), however, contract theory is dependent on such contracts being recognised as just by ‘free and rational persons’. Animals are, of course, not in such a position, and hence contract theory has generally been seen as fruitless in examining the moral considerability of animals (Palmer 1995:16). It is entirely possible that animals learnt to coexist with humans for their own benefit, and that this was encouraged by humans for their benefit. However, this does not constitute a contract, as animals generally act in ways that are immediately beneficial to their own well-being, and thus would have remained largely unaware of any long-term implications of such associations. Furthermore, had animals fully understood the content of such a ‘bargain’, it is at best questionable whether they would have accepted it, given the ‘unequal power and irreversible change in animal nature resulting from such a contract’ (Palmer 1995:20). The terms of this quasi-contract were dictated exclusively by humans, for human benefit. In most cases this involves a period of protection and provision of food, followed by slaughter. Such a contract also grants humans agency over breeding patterns, permitting them to select characteristics which best suit human convenience (Palmer 1995:17). This, then, situates the fate of both the individual animal and the species in general firmly in human hands.

What advantages, if any, do domesticated animals accrue from an association with humans? Budiansky (1992:143) notes that ‘the struggle between species is a grim reality of the world, and the evolutionary advantages that led to the “domestic alliance”…underscore some genuine improvements in the lives of species that cast their lot with man’s’. Freedom from predators, starvation, and parasites is, he proposes, reason enough to believe that the terms of the aforementioned contract are equally satisfied. Would this be an acceptable bargain? The animal rights philosopher Peter Singer (1995) has claimed that the reasoning employed by defenders of animal agriculture is comparable to that adopted by advocates of slavery, who claimed improvement in the lot of ‘inferior’ Africans brought to America. Regardless of the quality of living conditions on a farm, by keeping animals at all we are depriving them of a basic right to freedom. Even the steady food supply on a farm is not an unmitigated blessing, since ‘it deprives the animal of its most basic natural activity, the search for food….the result is a life of utter boredom…surely the life of freedom is to be preferred’ (Singer 1995). But if domestication is to be viewed as an evolutionary phenomenon, as has been argued throughout this dissertation, then we must grant that the evolutionary changes that led to domestication occurred precisely because ‘freedom’ was not to be ‘preferred’ (Budiansky 1992:144). Moreover, if an animal no longer has the biological urge or ability to search for food, as is the case for most domesticates, one cannot argue that the animal is being deprived of its basic natural behaviour. With regard to Singer’s suggestion that life for the domesticate is one of ‘utter boredom’, one must concede that this factor is unquantifiable. Nevertheless, the systematic neoteny that has occurred in all domesticated animals has rendered them genetically much more dependent than their wild cousins; their dependence is not merely a consequence of their confinement in barns and fields, it is a result of their evolutionary history (Budiansky 1992:148).

The Ethics of Dependence

The notion of dependence, then, represents a fundamental moral aspect of the process of domestication. Palmer (1995:12) notes three different ways in which animals may become dependent on human beings: by apparently voluntary association, by being kept in captivity, and by being bred into dependence. Domestic animals fit into this last category. They have either been bred for dependence or for other characteristics which have, as side-effects, increased their dependence. Budiansky (1992:122) gives the example of problem births among sheep. In many farm flocks the proportion of problem births is growing steadily, since the ‘problem’ lambs are usually the ones farmers want to save for breeding: ‘the very traits that make them troublesome – the propensity to conceive twins or triplets, or to grow large lambs – are both heritable and economically desirable’ (Budiansky 1992:122). As a result, within a few generations the majority of ewes will become dependent on humans for assistance during lambing. For almost all domestic animals, the possibility of independent survival has been lost, not just for the individual, but for the whole lineage. Through dependency, the reduction of ‘natural’ selective pressures, and excess ‘kindness’ from humans, domesticates have become increasingly handicapped. They are, in this respect, degenerates (Budiansky 1992:123). This does not mean, however, that they are less worthy of our consideration. In fact, their degeneracy, which we had a hand in (albeit indirectly), dictates a greater responsibility on our part.

How, then, can one assume a more viable moral stance in our relationships with domesticated animals? Regarding the question of whether an animal gets a better deal out of living and being slaughtered, rather than never being bred at all, Nozick (1974:287) argues that a similar scenario for humans would ‘not look very convincing’. If humans bred other humans for consumption using this same justification, we would not be satisfied, as ‘an existing person has claims, even against those whose purpose in creating him was to violate those claims’ (Nozick 1974:287). Similarly, where the alternative to domestication is a life in the wild, we find the process to be morally problematic. Palmer (1995:18) comments that if aliens bred humans for food in captivity, reasoning that we were generally better off in captivity than taking our chances in the wild, we would likely find this an unsuitable ‘bargain’. One might argue that humans have different psychological needs, and that they would suffer differently from confinement and the knowledge that they were to be eaten. But, as Palmer (1995:19) hypothesises, such qualities could be ‘bred-out’ of humans. Would this make the proposed bargain any more acceptable, given that the alternative, life as a hunter-gatherer, would involve the ever-present threat of attack and disease?

No Turning Back: The Future of Domesticatory Relations

Regarding the dependency of domesticated animals, Coppinger & Smith (1983) have suggested that evolution and time are in fact now on the side of the ‘degenerates’. Dependence not only means there is no turning back; dependence has actually become such a powerful evolutionary force unto itself that it is leading inevitably to a new evolutionary age: an ‘Age of Interdependent Forms’ (Coppinger & Smith 1983:284). In this new age, the dominance of domestic symbioses in the global ecosystem will spell mass extinction for the more specialised and independent species on a level comparable to that which removed the dinosaurs. As Coppinger & Smith (1983) point out, ‘current biological concepts of what is fit and adaptive are at least 15,000 years out of date’. Those species traditionally cast as nature’s fittest (the lion, for example) will likely be out-competed by those with more cooperative, interdependent tendencies (the house cat). The age of dinosaurs may seem no more foreign to future generations than the world of ‘self-sufficient, highly specialised “dinosaurs” of our day’ (Coppinger & Smith 1983).

The crux of this argument, as a glance at recent population growth figures for humans and domesticates readily substantiates, is that species outside the dependent alliance are simply incapable of reproducing as rapidly. In 1860, humans and domesticated species accounted for 5 percent of terrestrial biomass – the net total weight of plant and animal life on Earth. Today the figure is roughly 20 percent; Coppinger & Smith (1983) suggest that by 2020 it will be 40 percent and, if the world population reaches twelve billion – the point at which it is expected to level off – around 60 percent. But, as Coppinger & Smith (1983) are quick to point out, this is not necessarily the environmental disaster so regularly ‘depicted by those who point with alarm to the extinction of species and the appropriation of ever more of the earth’s resources by humans’ (Budiansky 1992:125). From an evolutionary standpoint, change is neither good nor bad; it is simply inevitable.

Bibliography

Bökönyi, S. 1989. ‘Definitions of animal domestication’ in Clutton-Brock, J. (ed.) The Walking Larder: Patterns of Domestication, Pastoralism and Predation. London: Unwin Hyman.

Budiansky, S. 1992. The Covenant of the Wild: Why Animals Chose Domestication. New York: William Morrow & Co.

Clutton-Brock, J. 1987. A Natural History of Domesticated Mammals. Cambridge: Cambridge University Press.

Coppinger, R. P. & Smith, C. K. 1983. The domestication of evolution. Environmental Conservation, 10: 283-92.

Lovell, B. 1979. In the Centre of Immensities. London: Hutchinson.

Nozick, R. 1974. Anarchy, State and Utopia. New York: Basic Books.

Palmer, C. 1995. Animal liberation, environmental ethics and domestication. OCEES Research Paper No 1. Oxford: Oxford Centre for the Environment, Ethics and Society.

Rawls, J. 1972. A Theory of Justice. Oxford: Clarendon Press.

Serpell, J. 1986. In the Company of Animals: A Study of Human-Animal Relationships. Oxford: Blackwell.

Singer, P. 1995. Animal Liberation (2nd edition). New York: New York Review.

Thomas, K. 1983. Man and the Natural World: Changing Attitudes in England 1500-1800. London: Allen Lane.

 

tags: Animals, Ethics, Domestication, Environment, Morality
categories: 2004
Sunday 10.06.24
Posted by David Jobanputra
 

Prejudice

Prejudice, said Mark Twain, is “the very ink with which history is written”. It is the inevitable consequence of dividing the world into “us” and “them” – what psychologists call “ingroups” and “outgroups” – whether in the form of tribes, faiths, nations or even football teams. We are “hardwired” to think favourably of our own group – known as ingroup bias – and unfavourably of others – known as prejudice.  

There are countless forms of prejudice in the world. Racism and sexism are perhaps the most common, but there is also widespread prejudice on the basis of class, age, sexuality, disability and religious affiliation, to name but a few. In all of these cases, prejudice stems from a preconceived belief in the superiority of one’s own group over another; the behavioural manifestation of this belief is termed discrimination.

At its core, prejudice is a matter of categorisation. It’s about how we divide up the world in our heads, grouping things together for ease of cognition. We use these categories to define ourselves as well as those around us. Thus, “I” may be male, European, young, Buddhist, vegetarian, teetotal, and so on, while “They” are anything outside this: female or Asian or elderly or Christian or alcoholic, and so on. Clearly, there are innumerable groups in society, some to which we belong and others to which we do not. Crucially, some of these categories are stigmatised, which marks their members as inherently inferior to the dominant group. (In Ancient Greece, a stigma was literally a mark, like a tattoo, that was cut or burnt into the skin of criminals, slaves or other disgraced persons.)

So, prejudice arises when (a) we put people into categories, and (b) some of these categories come with a social stigma, meaning their members are seen as lower status. And since some categories, such as race and sex, are so obvious to us that we use them chronically – that is, continually, without thinking – prejudice comes to be ingrained in society. Let us consider these two cases in more detail.

Racism is prejudice against someone based on their race or ethnicity. In the past, racial prejudice was largely overt and blatant, meaning it was expressed openly in the form of negative stereotypes and derogatory language. Identifying a particular race as lazy, stupid, greedy or dishonest is an example of this overt form, which is often referred to as old-fashioned racism – the type your grandparents may have used. In the West, such explicit racist attitudes are now socially taboo, and it is therefore tempting to conclude that racism is on the decline. However, there is a second, more pervasive form of racism that is implicit or unconscious, which psychologists refer to as aversive racism.

Aversive racism is a distinctly modern phenomenon. It is characterised by the presence of both egalitarian attitudes and negative feelings towards particular minorities. In other words, it is a form of racism that persists (unconsciously) even when we know racism to be wrong. This dissonance provokes feelings of shame and guilt, which we seek to suppress by avoiding triggering encounters and concealing our “true” emotions. And since people rarely admit to this form of prejudice, it is difficult to investigate.[1]

Sexism is prejudice against someone based on their sex or gender. Though sexism can affect anyone, it is typically directed at women and girls, whether overtly (as in sexual harassment) or covertly (as in workplace inequality). Around the world, sexual discrimination has been observed in almost every cultural sphere, from politics and education to legal justice and employment. At its most extreme, it can lead to the murder, rape and mutilation of women, girls and new-born babies.

Like racism, sexism comprises two distinct strains: hostile sexism and benevolent sexism. Hostile sexism refers to blatant expressions of sexual prejudice, such as the view that women are irrational or incompetent in comparison to men. This underlying prejudice is manifested in myriad forms of discrimination, from the gender pay gap to female infanticide. Benevolent sexism, in contrast, refers to sexist attitudes that are subjectively positive, such as the reverence of women as wives, mothers and homemakers, and as the objects of male affection. Although these attitudes may appear harmless to their holder, they are in fact detrimental to gender equality, since they restrict women to particular roles (such as mother, lover, homemaker) and thus limit their personal and professional opportunities.

Benevolent sexism can be difficult to counteract, since it is often conflated with tradition and politeness. An obvious example is modern-day chivalry, which may take the form of men holding the door open for women or paying the bill in a restaurant. For many people – both men and women – this is simply “the done thing” – an unquestioned, apparently harmless, social norm born of kindness and consideration. However, traditions such as these are rooted in historical ideas of women as the “weaker” or “fairer” sex, and as such serve to perpetuate these sexist notions. This ambivalence can lead to disagreements between people over what constitutes sexism in today’s world. Furthermore, it is possible for people to hold both hostile and benevolent sexist views simultaneously, such as a man who feels threatened by women in the workplace but venerates them in the home. This may explain why sexism is so hard to stamp out.

Racism and sexism are just two of the many forms of prejudice that exist in society. And since all individuals belong to numerous, fluid and often overlapping groups, they may be subject to multiple modes of discrimination acting simultaneously – a phenomenon known as intersectionality.

According to the contact hypothesis, developed by the social psychologist Gordon Allport, overcoming such prejudice depends on establishing closer relations between ingroups and outgroups. Above all else, these groups must be interdependent, working together to achieve common goals. Only then, Allport argues, can the stain of prejudice be erased.

[1] The psychologists Samuel Gaertner and Leonard Bickman found one way around this in a now-famous 1971 study.

tags: Prejudice, Psychology, Racism, Sexism, Inequality
categories: 2020
Sunday 10.06.24
Posted by David Jobanputra
 

Social Categorisation

In the world, there are a lot of things. Things like penguins and guavas and diamonds and house music and double-decker buses and so on and so forth. Keeping track of all this “stuff” takes time and energy, resources which could be better spent on other things, things like eating or sleeping or hunting or dating or posting cat videos on YouTube. For this reason, when confronted with the vast and ever-widening array of “things” in the world, we place them into groups. Bananas, oranges and apples become “fruit”. Elephants, pandas and walruses become “animals”. Tables, chairs and wardrobes become “furniture”. It’s a lot easier to ask someone to pass the fruit bowl than it is to ask someone to pass the bowl containing bananas, oranges, apples, pears, and maybe a few mouldy grapes. Likewise, it’s a lot easier to refer to “weather” or “jobs” or “sports” or “countries” than it is to list all the thousands of objects, events or concepts that comprise these groups. The brain loves a shortcut, and placing things into groups – categorisation – is exactly that (even if it sometimes gets us into trouble – more on this later).

Categorisation can be defined as “the process of understanding what something is by knowing what other things it is equivalent to, and what other things it is different from” (McGarty, 1999:1). In other words, categorisation is about similarities and differences: when two or more things are similar or interconnected, we place them in the same category; when they are different or unconnected, we place them in different categories. It is important to note that things are not always different or always the same; rather, this depends on the nature of the categorisation. For example, carrots and oranges may be categorised differently (as vegetables and fruits), or similarly (as food, or organic matter, or orange things). Categories are also hierarchical: some are very specific, while others are much broader – just think of the category of “thing”.

Categories are not rigid. Just as human perception is fluid and flexible, so categories may have porous boundaries. Some things are easier to categorise than others. Most people would have no problem identifying a cabbage as a vegetable and an apple as a fruit. But what about tomatoes? Or rhubarb? Or aubergines? The same is true for almost every other category: some members are easy to categorise, while others are more difficult. The more typical a member, the easier it is to categorise, and vice versa. So a dog is a typical mammal, as most people would acknowledge, while a duck-billed platypus is a wholly atypical one. (In fact, when the first European naturalists encountered a preserved platypus, they dismissed it as a fake, cobbled together from bits of other animals.) We call the most typical members of a category (like dogs and apples and cabbages) prototypes. These are easy to bring to mind and quick to categorise, unlike atypical members (like tomatoes and platypuses), which take time and effort.

When prototypes are applied to social categories, we often refer to them as stereotypes (literally, a solid or rigid type). Stereotyping, like all categorisation, is a heuristic – a mental shortcut that reduces complex judgements to simple rules of thumb. We use stereotypes for two main reasons. First, by “filling in the blanks” for people we know little about, they save us time and energy. And second, by providing a framework for our perception of the world, they help us to predict what others will do, thereby reducing uncertainty. This sort of categorisation is fundamental to the way we, as humans, create meaning.

Categorisation is pervasive: it can influence our thinking even when we are trying to think systematically. In other words, some categories are activated without our awareness. For example, categories such as race, age and gender are so common that their use becomes automated, a phenomenon known as chronic accessibility. Similarly, we tend to categorise based on what we encounter first (known as temporal primacy) and on what stands out most to us (known as perceptual salience). Imagine, for example, that the first dog you ever encountered was a rabid, ferocious, bloodthirsty poodle – this now forms the basis of your canine template. Likewise, hearing about a rabid-poodle attack can make us more scared of dogs in general, since stories of poodles attacking humans are particularly notable, or salient. In all of these cases, categorisation occurs unconsciously, shaping our judgement without our even knowing it.

We use social categories to form a general impression about the people we meet and interact with. How exactly we do this has been much studied by social psychologists, beginning in the 1940s with the work of Solomon Asch, and continuing to this day with research by Alexander Todorov. There are several troubling consequences of social categorisation. When categories are formed on the basis of race or ethnicity, for example, stereotyping can easily lead to prejudice. Even when such stereotypes are apparently positive – for example, ethnicity X is good at business; ethnicity Y is good at maths – they still reinforce social distinctions rather than transcending them. Similarly, judgements that are inconsistent with negative stereotypes can actually serve to perpetuate such stereotypes. For example, lauding a woman who can drive well or a man who can cook presents these cases as exceptions to the categorical rule, which itself remains intact. Only when enough such exceptions are registered can the overall stereotype be challenged, leading to a shift in the category prototype.

Another consequence of social categorisation is stereotype threat – the fear of conforming to a negative stereotype of a group to which one belongs. This threat can cause one to underperform on tasks in the threatened domain. In the case of the female driver, for instance, being confronted with the stereotype that “women can’t drive” can itself provoke impaired performance. The same is true for other groups, and other negative stereotypes.

It is clear, then, that social categorisation is not an inert or neutral process. Rather, it is variable and value-laden, owing to the countless cognitive factors that determine what we categorise, why, and when. These categories structure our experience of ourselves, and the world around us. Just as we create categories, categories create us.

tags: Social Categorisation, Psychology, Stereotypes, Racism, Prejudice
categories: 2020
Sunday 10.06.24
Posted by David Jobanputra
 

Conformity

Look where you’re going!

Don’t talk with your mouth full!  

Say “Cheese”!

What do these various imperatives have in common? They are all examples of social norms.

Norms are templates for acceptable conduct. They tell us how we should behave in a given situation – everything from walking down the street to running a business. Queuing is a social norm. So is tipping your waiter. So is minding your Ps and Qs. Swearing, belching and nose-picking are all proscribed by social norms; bowing, wearing a tie and flushing the toilet are all sanctioned. Some norms – such as the incest taboo – are universal, while others are context-specific: class, culture and gender are prominent variables.

How and why norms form has long been of interest to social psychologists, beginning with the work of Muzafer Sherif (1935), whose experiments on the “autokinetic effect” showed that individuals’ estimates of how far a stationary point of light appeared to move in a dark room gradually converged on a shared group norm. Social norms do not always reflect our individual or private opinion. It is perfectly possible to hold a public attitude that is at odds with our personal one. Personally, you might think that shaking hands is unhygienic or outdated, but publicly you go along with it, upholding the social norm. How many of us went along with our peers at school, picking on other students or smoking behind the bike shed, even when we felt, deep down, that it was wrong? These are examples of conformity.

Generally speaking, conformity is incidental, which means it is not deliberate. This distinguishes conformity from other forms of social influence, such as compliance, persuasion and obedience. The first major psychological study of conformity was carried out by Solomon Asch in 1951: participants asked to judge the lengths of lines frequently went along with a majority of confederates who gave obviously wrong answers, rather than trust the evidence of their own eyes.

Sherif’s and Asch’s experiments reflect two different explanations for conformity. The first, relevant to Sherif’s study, is informational influence, where people conform to a group norm in order to gain information. (This occurs when someone is unsure of their own opinion, and needs more data to make a judgement.) The second explanation, evident in Asch’s experiment, is normative influence, where people conform in order to gain acceptance or avoid exclusion. (This occurs when someone is sure of their own opinion, but doesn’t want to risk being “the odd one out”.)

Conformity can be constructive,[1] but it can also be dangerous. This was famously demonstrated by John Darley and Bibb Latané, in one of the most widely cited experiments in the history of psychology: the smoke-filled room. In the experiment, a number of university students were invited to share their thoughts about urban life. Those who agreed to take part were asked to first fill out some forms in a waiting room prior to being interviewed. As they did so, smoke began to enter the room through a small vent in the wall. After four minutes, the smoke was sufficient to affect one’s vision and breathing.

When the participants were alone, most of them investigated the smoke and informed someone about it. However, Darley and Latané were more interested in how people reacted when others were present. For the second phase of the experiment, the students were joined in the waiting room by two undercover actors, who had been instructed not to react to the smoke. At most, they would shrug their shoulders before returning to their forms. What happened next was significant. In the words of Darley and Latané, “only one of the ten subjects…reported the smoke. The other nine subjects stayed in the waiting room for the full six minutes while it continued to fill up with smoke. … They coughed, rubbed their eyes, and opened the window – but they did not report the smoke.”

The smoke-filled room study is an example of the so-called bystander effect, which holds that individuals are less likely to help others when other people are present. Though this theory has been widely criticised in recent years, it has become part of psychological lore, and continues to be implicated in tragedies around the world.

Social influence works not only on individuals but also on groups, shaping the norm itself. This can be seen in research on group polarisation and groupthink. Group polarisation is a process through which a social norm becomes more extreme, or polarised, among people with similar beliefs and attitudes. Imagine, for example, a group of environmental activists discussing the climate crisis. Naturally, people will share their ideas and findings with other group members, resulting in informational influence. At the same time, people will be motivated to “fit in” with others, expressing public attitudes in line with the group – this is normative influence. Together, these pressures cause people to converge on the group norm, which in turn causes the consensus of the group to become stronger, shifting the norm to a more extreme position. Group polarisation is particularly prevalent in political groups, social media, juries, and other opinion-based assemblies. In extreme cases, group polarisation can give way to groupthink, where the desire for conformity results in irrational or immoral decision making.

Groupthink was first studied in the early 1970s by Irving Janis, who identified three broad types of symptom: closed-mindedness (such as dismissing any ideas that might challenge the group’s thinking); heightened pressure to conform (such as through self-censorship); and overestimation of the group’s value (such as a sense of moral and intellectual infallibility). According to Janis, groupthink occurs when the cohesiveness of a group is so high that there is no scope for disagreement or dissent. This is particularly common in high-stress situations, and in contexts where the flow of information is disrupted. The consequences of groupthink can be catastrophic. A classic example is the 1986 Challenger disaster, in which NASA’s high cohesiveness, overconfidence and closed-mindedness contributed to fundamentally flawed decision making and led, ultimately, to the deaths of all seven astronauts on board. (Groupthink has also been implicated in the Bay of Pigs invasion of 1961 and the 2016 US presidential election, among other events.)

Conformity, then, is morally ambiguous – both opportunity and constraint. On the one hand, conforming helps us to learn “the rules of society”, which in turn reduces risk and disruption. On the other hand, excessive conformity can give way to coercion, corruption and complacency, which in turn can lead to disaster.


[1] Are public smoking laws an example of positive conformity? See https://greatergood.berkeley.edu/article/item/how_conformity_can_be_good_and_bad_for_society

tags: Conformity, Psychology, Norms, Groupthink
categories: 2020
Sunday 10.06.24
Posted by David Jobanputra
 

© David Jobanputra 2019