You could say that the question this post poses is both rhetorical and not. Why wouldn’t living forever, whether through cycles of reincarnation, or as a disembodied ‘ancestor spirit’, or in heaven, jannah, elysium or wherever, be appealing? And what could possibly be appealing about the finality of death?
But it’s worth exploring this question more deeply, as I believe it’s a major key to understanding many aspects of religion and ‘spirituality’. I’ve written about this subject before in the context of children and the origins of religious and magical thinking, but this time I want to focus on the afterlife in more detail.
I like to focus on childhood because it’s fertile ground for thinking beyond the limits of our mortality and our physical constraints. Shapeshifting, super-powers, magic, the absolutes of good and evil – all of these come very easily to young children, and immortality is just another element of that thinking. I want to emphasise this because I object to claims made by some atheists that a lot of this thinking, about magic and absolutes and immortality, is irrational. I don’t think that’s a useful term in this instance.
I’ve given the example, which I’ll repeat here, of kids playing life-and-death games like cops and robbers, cowboys and Indians, goodies and baddies. When a kid’s shot dead, he accepts it reluctantly, lies down for a few seconds, then declares he’s ‘alive again’, and this encapsulates time-honoured attitudes towards mortality.
Because death is literally unimaginable, and kids, with their vivid and unrestrained imaginations, don’t need much time to work that one out. What’s more, even playing dead is boring. Not moving, holding your breath, trying to get your brain to shut down its thinking and imagining, it’s all hard and unnatural work.
On the other hand, thinking about the afterlife can bear rich fruit. To give just one of hundreds of literary examples, Dante’s Divine Comedy divides the afterlife, from which no-one can escape, into three realms – hell, purgatory and heaven – each of them divided into nine parts, or really ten: nine descending circles of the Inferno, with Lucifer lurking at the bottom as number ten; nine rings around Mount Purgatory, with the garden of Eden at its summit as number ten; and nine celestial spheres of heaven, with the tenth at the top, the Empyrean, filled with the essence of god. And there are various other divinely numerical schemes operating throughout the work. Another very interesting depiction of the afterlife occurs in Plato’s Republic, in which a soldier, Er, carried from the battlefield as a corpse, reveals himself after a number of days not to be dead but unconscious, and on recovering consciousness tells a richly detailed tale of the afterlife, which he’s been privileged to witness, and also to recall, having been excused from the requirement of drinking from the river Lethe’s ‘waters of forgetfulness’.
Two points can be drawn from these afterlife descriptions: first, they offer great scope for the imagination; second, they’re constrained by the particular time and place of their own culture, not unlike current descriptions of UFOs and alien abductions. So the Divine Comedy is a large-canvas imaginative rendering of Christian revelation and eschatology as experienced, at least by one atypical individual, in thirteenth and fourteenth century Italy, while Er’s tale reveals much about how Greeks living not so far away, but some 1700 years earlier, might have imagined the life to come.
Interestingly, while there are many cultural peculiarities to these descriptions, they have one key feature in common – the afterlife constitutes a punishment or reward for the life lived on earth. It’s a theme repeated in many religions, as well as in beliefs in reincarnation which aren’t strictly religious. There are those who manage to believe that, even though there’s no deity pulling the strings, we get reincarnated into something ‘higher’ or ‘lower’ depending on how we behaved in the life just completed. How this happens, without some conscious being making judicial decisions, is not a question that seems to bother their brains. But what interests me more is that this kind of thinking goes back a long, long way. It appears to have a very powerful appeal, one that, as I’ve said, is way too prevalent to be dismissed as irrational.
So I want to explore not only why the afterlife is so appealing, but why a particular kind of afterlife, based on perfect justice, is so appealing. I prefer ‘perfect justice’ to ‘divine justice’, as it takes away the religious trappings while preserving the most important ideal of many religions – the ideal hope that nobody will evade proper justice in the end.
Again I turn to early childhood, a period when rationality and logic mean little, to look for clues to this appeal. I suspect that one of the great events of childhood, or it might be a series of events, is the experience that your parents or your guardians are not the all-protecting beings that you’d more or less unconsciously assumed them to be. I think this experience is made much of in certain branches of psychoanalytic theory, and I associate it with the name of Jacques Lacan, but I have a very limited acquaintance with his views or theories.
In talking of all-protecting beings, I’m really thinking of them in god-like terms. Beings who protect us from harm caused by dangerous objects or predators, but also from harm caused by our own ignorance or folly, by correcting us and guiding us. Our early survival is, of course, entirely dependent on being nurtured by these all-protecting entities, so that it’s all the more shocking when, at some stage in our development, we come to see these entities, even if only for brief moments, as actually threatening our existence. I’m not sure when this may happen. It could be at a very early stage, when, say, a mother refuses the breast to her child, resulting in a screaming fit, and perhaps a great sense of inner trauma and crisis. Or it could be later, when the child has developed an independent sense of justice and realises, or at least strongly feels, that her parent is punishing her unjustly, and quickly infers from this that the parent could be a real threat to her freedom and even her life.
I see an obvious association between this very real experience, which may be near-universal in humans, and the garden of Eden story, though the fact that in the Eden story it’s the humans who have ‘fallen’, rather than the gods, is well worth pondering. It seems to me that monotheistic religions, by creating a perfect deity or parent, shift the focus of the world’s obvious injustices from that parent to the children, which has at least the advantage of avoiding what could become a problem for children who ‘see through’ their parents – the problem of blame-shifting. Not that this has always stopped irate believers from berating their perfect Dad for their sufferings.
Of course the more developed way of seeing the parent-child relation is as one between two faulty, all-too-human entities, but let’s face it: the seemingly utterly powerless child and the seemingly all-powerful parent are neither of them likely to possess such equipoise, at least not for long. Both are profoundly frustrated – the child at not being able to get the parent to see the justice of her situation, or at least at not being able to penetrate the imperviousness and mystery of the parent’s judgment; the parent at not having the power to transform the child through judicious punishment. Frustration leads to idealist fantasies, in which everyone understands each other, everyone judges and measures each other in perfect understanding and harmony. Of course this never happens in this world, as bitter experience reveals, especially in the harsh and often desperate environments out of which so many religions have been born.
It all happens in another life, in another world, another place, a world that doesn’t bear too much thinking about, but a world that can absorb all the hope aimed at it, all the dreams of the ‘faithful’. In absorbing all these hopes and dreams and cries for justice it just keeps expanding, like a balloon, ever more diaphanous, amorphous, enticing. Who’d want to be the prick that bursts it?
I happen to be reading an enjoyable little book in the ‘brief history’ series, A brief history of life in the middle ages by Martyn Whittock. His focus is England, and he covers a period from around the ninth century through to the fifteenth, but he provides enough interesting data from approximately a millennium ago and onwards to make the above question worth pursuing – with a bit more research too of course.
Australia is generally regarded as a Christian country, but Christianity sure ain’t what it used to be. Generally when talking about the decline of Christianity, pundits refer to the past few decades, but it’s worth taking a much longer view to see just how Christianity is faring compared to what it once was. It’s also convenient that Christianity is around 2000 years old – so going back a thousand years takes us to half its life-span up to now. We don’t know how much longer it will live, but I’m more interested in its ‘quality of life’ compared to what it once was. Is it in a near-vegetative state, or is it still thriving?
Obviously we can’t look at Christianity in Australia 1000 years ago, so England seems the obvious choice as the nation that brought Christianity to this country, so very recently.
Eleventh-century England was thoroughly Christian, chock-full of powerful bishops and clerics. The Norman conquest had little effect on Christianity generally, except that the sees of bishops tended to be relocated to the commercial centres along continental lines, and the continental style of church architecture replaced the Anglo-Saxon, resulting in the loss of virtually all the great Anglo-Saxon churches. Edward the Confessor had already signalled this change before the Norman invasion with his reconstruction of Westminster, but of course after William I’s accession this rebuilding process was a deliberate sign of the new order – an erasing of Anglo-Saxon taste, style and political influence rather than of its version of Christianity.
The Church, undivided as it was then, played a vastly greater role in eleventh century society than it does today. The Church hierarchy, with its higher levels of literacy, played a significant, indeed dominant, role in civil administration, and of course the Church was a major landowner, charged with all the minutiae of running large estates, so that you could be a senior Church official without being in any way engaged in what we see as the domain of Christian workers today – sermons, spirituality and charitable works. The Church was in fact an international administrative network dominated by Rome, and administering estates for two masters in a sense – the ‘local’ royalty or nobility, and the pope. Chancery was run more or less entirely by Church officials until major changes occurred in the early fifteenth century.
It’s probably fair to say that atheism wasn’t even a concept in eleventh-century England or Europe. Godlessness might’ve been a term of abuse for those who weren’t sufficiently orthodox, but essentially everyone was Christian, to a degree unthinkable today. One quite small but economically successful religious minority existed, namely the Jews, increasingly harassed and oppressed from the mid-twelfth century onwards and finally expelled from England in 1290. The whole nation was divided into parishes, grouped into dioceses each overseen by a bishop, under the two archdioceses of Canterbury, which had seniority, and York. Everyone in the parish was expected to attend mass on Sundays and on various festival days, and a yearly procession called Rogationtide served to remind everyone of the boundaries of their particular parish.
All parishioners paid a tithe of their income to the church. A tithe is literally a tenth, though the amount no doubt varied. The practice originated in Judaism and has been followed in a variety of ways by Christianity and Islam, and even by secular authorities – though in the medieval period the distinction between Church and State was blurred in any case, with the monarchy being seen as a quasi-religious inheritance.
In the wealthiest parishes tithes were held in tithe barns, for all to see, but of course there was always tension about this form of taxation, especially if the churches or monasteries and their abbots were displaying conspicuous wealth, since a good part of the tithes was expected to support the needy of the parish.
Of course, as among the religious today, the Church presided over all the Main Events – baptism (for babies), confirmation (for toddlers) and penance (for all the rest), as well as the Eucharist (regularly), marriage, ordination (for many, but only performed by bishops) and extreme unction (for everyone in the end). However it would be wrong to assume that religious belief was uniform, either in thought or practice. It changed constantly over time, and according to many and varied regional influences. Early medieval Christianity interacted with local folk practices, and various trends and fashions had a general impact, such as the rise of the mendicant friar movement as a response to the perceived or actual corruption of the settled monastic orders. This movement, largely intended as a return to the simple peripatetic teachings of Jesus, in turn suffered from its own popularity, and eventually became associated with a new form of parasitism. Another major influence on religious thinking in the later medieval period was plague, and the devastation it brought, which led many to a darker and more personal relation to the deity. Chantry chapels for the burial of the dead were built, with special clergy to deal with the overload, since priests were allowed by law to say only one mass a day.
The concept of the ‘clergy’ in medieval Britain was necessarily vague – to the advantage of offenders against the law. In the thirteenth and fourteenth centuries any schoolboy (only boys, of course) who achieved some literacy could be given the tonsure, the clerical haircut, and wrong-doers could claim ‘benefit of clergy’ if they were literate, the test for which was to recite psalm 51:1 in Latin – ‘Have mercy on me, O God, according to your unfailing love; according to your great compassion blot out my transgressions.’ The verse became known as ‘the neck verse’, presumably because it saved your neck, canon law penalties being much lighter than secular ones. A reaction against this avoidance of proper justice led to benefit of clergy being restricted to minor crimes by the end of the sixteenth century (by which time England had broken with Rome). Of course, this controversial relationship between canon and secular law is still a problem today, with the Catholic Church still unable to accept the paramountcy of secular law.
Orthodoxy and its maintenance were a problem, as ever, what with Dominicans (blackfriars), Franciscans (friars minor, or greyfriars), Cistercians, Carmelites (whitefriars), and other assorted monks, nuns, canons, priors, churchwardens etc roaming the land or administering estates and distributing finances (at least 20% of all land was owned by the Church in the late middle ages), not to mention anchorites and mystical eccentrics such as Margery Kempe keeping the pot stirred. The Peasants’ Revolt of 1381 and the Lollard movement, both led by religious figures and both savagely repressed, gave an indication of the tenuous hold of religious authority in times of stress, but these movements never threatened Christianity itself – they were aimed at reinforcing it through renovation.
And then came the Reformation, the great church schism that fuelled the genocidal treatment of the Catholic Irish, not to mention the Thirty Years War in central Europe and the English civil war…
As a lover of history I could go on and on, but the essential point is clear. We’ve never lived in a more secular age, nowhere near it. We can easily live our lives without interference from Christianity, to a degree that was impossible even 200 years ago let alone 1000. A situation which certainly gives added perspective to such recent apologist texts as The Twilight of Atheism.
Here in Australia, rated the happiest country in the world for the third year in a row by the Paris-based OECD (Organisation for Economic Co-operation and Development), the rise of the nones is as spectacularly speedy as it is anywhere else. And it seems to me there are great historical reasons for embracing secularism. The current approach of the Catholic Church with respect to canon law and the behaviour of its clergy is one example, but one has only to look at those states where the churches, mosques, synagogues etc have political power, and compare them to those where religion plays little or no political role. Compare also the Europe and England of today with the pre-Enlightenment versions, when the official language was God-saturated but the kind of justice we now take for granted was in very short supply. It’s taken a long time, and the situation remains patchy, but Aristotelian empiricism, so far as ethics is concerned, is winning out.
There’s no turning back. It seems to me that, as far as Christianity is concerned, it’s the long, long fade-out.
The small ancient Greek city of Epidaurus, about 50 ks due south-west of Athens, was a place of pilgrimage and hope to the sick, the halt and the lame for centuries, throughout the Hellenistic period, and well into the early Christian era. It was the haunt and reputed birthplace of the healer god Asclepius, as popular as modern-day Lourdes and no doubt just as efficacious. But it’s not the healing powers of Epidaurus that I want to focus on, it’s its amphitheatre, situated a few ks out of town, and renowned for its miraculous acoustic qualities.
The theatre of Epidaurus was designed by the sculptor and architect Polykleitos the Younger and built in the fourth century BCE. It was added to in the Roman era but fell into disuse after the fall of the empire. In 1881 it was rediscovered and renovated, bringing to light for modern audiences its extraordinary acoustic properties. Strictly speaking, an ‘amphitheatre’ is a theatre in the round, with seating on all sides; the semicircular Greek theatre at Epidaurus was one of the largest of the Hellenistic era, though the great Roman amphitheatres were larger and more visually spectacular.
When I was a kid one of the first things I ‘learned’ about the ancient Greeks was that they were great speculators and hypothesisers but not much chop at proofs and other such practicalities. I’ve been unlearning that fact ever since, and the Epidaurus amphitheatre is another step along the way. It seats around 15,000 people, and was deliberately set in the open air against a beautiful backdrop of shrubbery. But no matter where you stand or sit amongst the tiers, you’ll be able to hear a coin drop or a match being struck centre stage. Tour guides are on hand these days to prove it to you.
A 2007 study of the theatre’s acoustics by Nico Declercq and Cindy Dekeyser of the Georgia Institute of Technology showed that there’s nothing magical about them. The seating, built from limestone, filters out sound waves of low frequency, damping background noise from the crowd, while high-frequency waves are reflected back from the rows of seats, enhancing the effect. The corrugated design of the seats acts as an acoustic trap for the low frequencies, yet the actors’ lines can still be heard, thanks to a phenomenon known as virtual pitch – the brain’s ability to reconstruct missing low frequencies from the harmonics that do arrive, something it does all the time with the tinny speakers of mobile phones and other electronic gadgetry.
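Virtual pitch is easy to demonstrate computationally. Here’s a minimal sketch in Python (using only numpy) – the frequencies and the 500 Hz cutoff are invented for illustration, not a model of the actual theatre. We build a voice-like stack of harmonics, strip out everything below the cutoff the way the seats strip out low-frequency noise, and check that the signal’s periodicity – the cue the brain uses to ‘hear’ the missing fundamental – survives the filtering:

```python
import numpy as np

fs = 8000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)  # half a second of signal
f0 = 150                       # fundamental of a roughly voice-like tone

# A 'voice' as a stack of harmonics: 150, 300, 450, ... 1500 Hz
voice = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 11))

# Crude high-pass filter: zero out everything below 500 Hz, mimicking
# the seats' damping of low frequencies (illustrative only)
spectrum = np.fft.rfft(voice)
freqs = np.fft.rfftfreq(len(voice), 1 / fs)
spectrum[freqs < 500] = 0
filtered = np.fft.irfft(spectrum, n=len(voice))

# The fundamental and its first two harmonics are gone, yet the signal
# still repeats every 1/150th of a second. Autocorrelation finds that
# period, much as the auditory system does with the missing fundamental.
ac = np.correlate(filtered, filtered, mode='full')[len(filtered) - 1:]
lag = np.argmax(ac[20:]) + 20  # skip the trivial zero-lag peak
print(f"recovered pitch: {fs / lag:.0f} Hz (true fundamental: {f0} Hz)")
```

The recovered pitch comes out within a hertz or two of the fundamental that was filtered away: the periodicity the ear exploits was never carried only by the low frequencies in the first place.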
All of this raises the question – did Polykleitos the Younger know exactly what he was doing? I think the only proper answer is that we’ll never know for sure. If he left any written explanations as to why the seating should be built of limestone, with corrugations in the surfaces, those explanations have been lost, along with the majority of classical and Hellenistic Greek writings. Perhaps more importantly for these speculations, Greek and Roman amphitheatres built afterwards didn’t copy the Epidaurus design features.
This doesn’t necessarily mean Polykleitos didn’t know what he was doing – though he probably didn’t know exactly what he was doing. He would likely have been experimenting with materials and designs, trying to find the right acoustic effect for his theatre. You might say he was fumbling about in the dark (or the acoustic equivalent) with a specific goal in mind. It wasn’t so much a matter of chance as of chance favouring the prepared mind. And it’s the preparedness of mind of so many Greek intellectuals of this era that is so impressive.
Today we take for granted a scientific approach in which we work on the findings of others in order to reproduce them, disprove them or augment them, and so build up, tiny piece by piece, a more accurate and reliable picture of how our world works. To return to the ancient Greek world is to find the first glimmerings of such an approach, along with numerous attempts to ‘reinvent the wheel’, to start from scratch, because the fleshed-out, multi-tested, cumulative knowledge of the world that is today’s scientific picture didn’t exist then, and the difference is such that we can barely imagine ourselves in that world. It’s only by trying to think ourselves into that context that we can appreciate the achievements of Polykleitos and so many other contributors to a world we now take for granted – Pythagoras, Aristotle, Theophrastus, Hippocrates, Archimedes, Eratosthenes and Heron, to name but a few.
I’ll end with a reminder of the importance of scientific research and the struggle to understand our world, from Erasistratus – a physician and early researcher on the heart, the nervous system and the digestive processes – writing some 2,300 years ago:
Those who are totally unfamiliar with research, once they begin to exercise their minds, become dumbfounded and immediately abandon the pursuit out of mental exhaustion, collapsing like runners who enter a race without prior conditioning. But the person who is experienced at research keeps trying every possible approach and every possible angle and, rather than giving up after a single day’s labor, persists for the remainder of his life. Focusing on one idea after another that bears upon what he seeks, he presses on until he reaches his goal.
As I’m overwhelmed and a bit stressed by work issues, I’ve not posted here for a while, or to be precise I’ve got three or four posts going which I’ve not been able to finish. So I’ve decided to throw something down and push it out today no matter what.
There’s a fascinating post on the John Hawks blog, brought to my attention by Butterflies and Wheels. Hawks goes into much detail on an issue that has long fascinated me, in my dilettantish way. My general reading on human ancestry, which turned up names such as Homo erectus, Homo habilis, Homo ergaster, Homo rudolfensis, Homo heidelbergensis et al, together with the information that the remains of these hominids or hominins were scanty and their precise identities disputed, made me wonder from my distant armchair whether they all represented different species or just variants of one. Of course I have no expertise at all, and I don’t know the difference between a species and a subspecies, but my reading did make me aware that this was a genuine issue amongst paleoanthropologists.
The Hawks post takes its departure from a recently published paper on a newly revealed specimen of Homo erectus, and goes into some detail on all this. The cranial specimen, D4500, from Dmanisi in Georgia, is the best-preserved of any so far discovered. The paper’s authors take the opportunity to put forward the view that the early Homo finds, such as D4500 and remains from the Malapa fossil site in South Africa, and by inference a number of others, represent a single lineage, a view with which Hawks largely concurs. So there, I told you so.
Of course Hawks goes into a lot of detail, and expresses his views with the diffidence we generally find in true scientists, but I’m delighted to find my vague sense of things so thoroughly supported. I must be a ‘lumper’ rather than a ‘splitter’, but of course I’m prepared to change my mind at the slightest change in the winds of research. Now I just need to work out where all those australopithecines fit into the general picture, without moving too far from my armchair, of course.
The term ‘autism’ (coined decades earlier by the psychiatrist Eugen Bleuler) was first applied to a distinct syndrome in the 1940s by two physicians working independently of each other, Hans Asperger in Austria and Leo Kanner in the USA. Its key feature was a problem with interacting with others in ‘normal’ ways. Sounds vague, but the problem was anything but wishy-washy to these individuals’ parents and families, and over time a more detailed profile has built up.
The term itself is from the Greek autos, or ‘self’, because those with the syndrome had clear difficulties in interpreting others’ moods and responses, resulting in a withdrawn, often antisocial state. Autistic kids often avoid eye contact and are all at sea over the simplest communication.
Already though, I feel I’m saying too much. When describing autism, it’s common to use words like ‘often’ or ‘sometimes’ or ‘some’, because the symptoms are seemingly so disparate. Much of what follows relies on the neurologist V S Ramachandran’s book The tell-tale brain, especially chapter 5, ‘Where is Steven? The riddle of autism’.
Autistic symptoms can be categorised into two major groups, social-cognitive and sensorimotor. The social-cognitive symptoms include mental aloneness, a lack of contact with the world of other humans, an inability to engage in conversation and a lack of emotional empathy, as well as an absence of any overt ‘playfulness’ or sense of make-believe in childhood. These can be offset by a heightened, sometimes obsessive interest in the inanimate world – e.g. the memorising of ostensibly useless data, such as lists of phone numbers.
On the sensorimotor side, symptoms include over-sensitivity and intolerance to noise, a fear of change or novelty, and an intense devotion to routine. There’s also a physical repetitiveness of actions and performances, and regular rocking motions.
These two types of symptoms raise an obvious question – how are the two types connected to each other? We’ll return to that.
Another motor symptom, which Ramachandran thinks is key, is a difficulty in physically imitating the actions of others. This has led him to pursue the hypothesis that autism is essentially the result of a deficiency in the mirror neuron system.
In recent years there’s been a lot of excitement about mirror neurons – possibly too much, according to some neurologists. A mirror neuron is one that fires not only when we perform an action but also when we observe it being performed by others. They’ve been observed in mammals and also, it seems, in birds, and in humans they’ve been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex. It’s easier, however, to locate them than to determine their function. Clearly, to describe them as ‘responsible’ for empathy, or intention, is to go too far. As Patricia Churchland points out, ‘a neuron is just a neuron’, and what we describe as empathy or intention will likely involve a plethora of higher-order processes and connections, in which mirror neurons play their part.
With that caveat in mind, let’s continue with Ramachandran’s speculations on autism and mirror neurons. First, we’ll need to be reminded of the term ‘theory of mind’, used regularly in psychology. It’s basically the idea that we attribute to others the same sorts of intentions and desires that we have because of the assumption that they, like us, have that internal feeling and processing and regulating system we call a ‘mind’. A sophisticated theory of mind is one of the most distinctive features of the human species, one which gives us a unique kind of social intelligence. That autism would be related to theory-of-mind deficiencies seems a reasonable assumption, so what is the brain circuitry behind theory of mind, and how do mirror neurons fit into this picture?
Although neuro-imaging has revealed that autistic children have larger brains with larger ventricles (brain cavities) and notably different activity within the cerebellum, this hasn’t helped researchers much, because autism sufferers don’t present any of the usual symptoms of cerebellum damage. It could be that these changes are simply side effects of the genes that produce autism. Some researchers felt it was better to focus straight off on mirror neurons, as obvious suspects, and to see how they fired and where they connected in particular situations. They used EEG (electroencephalography) as a non-invasive way to observe mirror neuron activity, focusing on the suppression of mu waves, a type of brain wave. It has long been known that mu waves are suppressed when a person makes any volitional movement, and more recently it has been discovered that the same suppression occurs when we watch others performing such movements.
So researchers used EEG (involving electrodes placed on the scalp) to monitor neuronal activity in a medium-functioning autistic child, Justin. Justin exhibited a suppressed mu wave, as expected, when asked to make voluntary movements. However, he didn’t show the same suppression when watching others perform those movements, as ‘neurotypical’ children do. It seemed that his motor-command system was functioning more or less normally, but his mirror-neuron system was deficient. This finding has been replicated many times, using a variety of techniques, including MEG (magnetoencephalography), fMRI and TMS (transcranial magnetic stimulation). Reading about all these techniques would be a mind-altering experience in itself.
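For the technically curious, the measurement itself is conceptually simple. Here’s a rough sketch in Python (numpy and scipy) of the usual way mu suppression is quantified – as the log-ratio of 8–13 Hz band power during observation to band power at rest. The signals below are synthetic stand-ins, not real EEG:

```python
import numpy as np
from scipy.signal import welch

fs = 256                      # EEG sample rate (Hz)
t = np.arange(0, 10, 1 / fs)  # ten seconds per condition
rng = np.random.default_rng(0)

def eeg(mu_amplitude):
    """Synthetic EEG: background noise plus a 10 Hz mu rhythm."""
    return rng.normal(0, 1, t.size) + mu_amplitude * np.sin(2 * np.pi * 10 * t)

baseline = eeg(mu_amplitude=2.0)  # resting: strong mu rhythm
watching = eeg(mu_amplitude=0.5)  # observing movement: mu suppressed
                                  # (in the autistic case it wouldn't be)

def mu_power(signal):
    """Average spectral power in the 8-13 Hz mu band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

# The standard index: log-ratio of condition power to baseline power.
# Negative means suppression - the 'neurotypical' response to watched
# movement; a value near zero would suggest a deficient mirror system.
suppression = np.log(mu_power(watching) / mu_power(baseline))
print(f"mu suppression index: {suppression:.2f}")
```

In the studies Ramachandran describes, something like this index is computed from electrodes over the motor cortex; the synthetic numbers here just make the logic visible.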
According to Ramachandran, all these confirmations ‘provide conclusive evidence that the [mirror neuron] hypothesis is correct.’ It certainly helps to explain why a subset of autistic children have trouble with metaphor, taking everything literally. They have difficulty separating the physical and the referential, a separation that mirror neurons appear to mediate somehow.
A well-developed theory of mind, which lets us anticipate the behaviour of others, also seems to be what lets us understand our own minds better. In Ramachandran’s words:
If the mirror-neuron system underlies theory of mind and if theory of mind in normal humans is supercharged by being applied inward, towards the self, this would explain why autistic individuals find social interaction and strong self-identification so difficult, and why so many autistic children have a hard time correctly using the pronouns ‘I’ and ‘you’ in conversation. They may lack a mature-enough self-representation to understand the distinction.
Of course, tons more can be said about the ‘mirror network’ and tons more research remains to be done, but there are many promising signs. For example, the findings about lack of mu wave suppression could be used as a diagnostic tool for the early detection of autism, and some interesting work is being done on the use of biofeedback to treat the disorder. Biofeedback is a process whereby physiological signals picked up by a machine from the brain or body of a subject are presented back to the subject in such a way that he or she might be able to affect or manipulate those signals by a conscious change of behaviour or thinking. Experiments have shown that subjects can alter their own brain waves through this process. Some experimental work is also being done with drugs such as MDMA (otherwise known as the party drug ‘ecstasy’) which appear to enhance empathy through their action on neurotransmitter release.
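The feedback loop described above is simple enough to sketch. Below is a toy version in Python: a simulated stream of readings is displayed to the ‘subject’ as a bar on the screen, with a reward message whenever the level drops below a target. Everything here – the signal source, the threshold, the display – is invented for illustration; a real rig would read from EEG hardware rather than a random walk:

```python
import random
import time

TARGET = 0.5   # hypothetical level the subject is training towards
level = 0.8    # starting value of the simulated signal

def read_signal(current):
    """Stand-in for a real acquisition device: a bounded random walk.
    In a real rig this reading would come from the brain or body."""
    current += random.uniform(-0.1, 0.1)
    return min(max(current, 0.0), 1.0)

# The biofeedback loop: measure, display, reward. The subject's only
# 'control' over the signal is whatever conscious change of behaviour
# or thinking moves the displayed bar.
for _ in range(20):
    level = read_signal(level)
    bar = '#' * int(level * 40)
    status = 'REWARD' if level < TARGET else ''
    print(f"{bar:<40} {level:.2f} {status}")
    time.sleep(0.1)
```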
So that’s a very brief introduction to autism. Hopefully I’ll come back to it in the future to explore the progress being made in understanding and treating the syndrome.
I recently had an interesting conversation-cum-dispute over the question of male-female differences, and then listened to a podcast from Stuff You Should Know on the neurological differences between the human male and the human female, which contained some claims that astonished me (and, for that matter, the show’s presenters). So I’ve decided to try to satisfy my own curiosity about this pretty central question. Should be fun.
The above link is to How Stuff Works, which I think is the written version of the Stuff You Should Know podcast – that’s to say, with more content and less humour (and fewer ads) – but I do recommend the podcast, because the guys have lots of fun with it while still delivering plenty of useful and thought-provoking info. Anyway, the conversation I mentioned was one of those kitchen-table, wine-soaked bullshit sessions, in which one of the participants, a woman, was adamant that nurture was pretty well entirely the basis of male-female differences. I naturally felt sympathetic to this view, having spent much of my life trying to blur the distinctions between masculinity and femininity – I’ve generally been turned off by ultra-masculine and ultra-feminine traits, and want to push for blended behaviour, which obviously suggests we can shape these things through nurture. However, I had just enough knowledge of the research to say, ‘well no, there are distinct neurological differences between males and females’, but not enough to give more than a vague idea of what those differences were. The podcast further whetted my appetite, but writing about it here should pin things down in my mind a bit more, here’s hoping.
I’ve chosen the title of this post reasonably carefully, with apologies for its clunkiness. For the fact is, we still know little enough about our brains. I’ve mentioned humans, but I expect there are gender differences in the brains of all mammals, so I’m particularly interested in that part of the brain that distinguishes us, though not completely, from other mammals, namely the prefrontal cortex.
Here’s an interesting summary, from a blurb on a New Scientist article by Hannah Hoag from 2008:
Research is revealing that male and female brains are built from markedly different genetic blueprints, which create numerous anatomical differences. There are also differences in the circuitry that wires them up and the chemicals that transmit messages between neurons. All this is pointing towards the conclusion that there is not just one kind of human brain, but two. …
Men have bigger brains on average than women, even allowing for body size, but different regions are larger in each sex. A 2001 Harvard study found that some frontal lobe regions involved in problem-solving and decision-making were larger in women, as were regions of the limbic cortex, responsible for regulating emotions. On the other hand, areas of the parietal cortex and the amygdala, regions that regulate social and sexual behaviour, were larger in men.
The really incredible piece of data, though, is that men have about 6.5 times more grey matter (neurons) related to general intelligence than women, while women have about ten times more white matter (axons and dendrites – that’s to say, connections) related to intelligence than men. White matter is white because it’s sheathed in myelin, which allows signals to travel much faster. On the face of it, I find this really hard, if not impossible, to believe – that’s one effing huge difference. It comes from a study led by Richard Haier of the University of California, Irvine, with colleagues from the University of New Mexico, yet this extraordinary finding appears to be of little consequence for male performance in intellectual tasks as compared to female. What appears to have happened is that two different ‘brain types’ have evolved alongside and in conjunction with each other to perform much the same tasks. Other research appears to confirm this amazing finding that males and females access different parts of the brain when performing the same tasks. Gina Kolata reported on one such experiment, in which men and women were asked to sound out words, in the New York Times back in early 1995:
The investigators, who were seeking the basis of reading disorders, asked what areas of the brain were used by normal readers in the first step in the process of sounding out words. To their astonishment, they discovered that men use a minute area in the left side of the brain while women use areas in both sides of the brain.
After lesions to the left hemisphere, men develop aphasia (problems with understanding and formulating speech) more often than women do.
While I’m a bit sceptical about the extent of the gender differences in grey and white matter, it’s clear that these and many other differences exist, though they’re difficult to summarise. We can point to different regions, such as the amygdala, but there are also differences in hormone activity throughout the brain, and many other factors besides, such as ‘the number of dopaminergic cells in the mesencephalon’, to quote one abstract (meaning the number of cells containing the neurotransmitter dopamine in the midbrain). But let me dwell a bit on the amygdala, which appears to be central to neurophysiological sex differences.
Actually, there are two amygdalae, located within the left and right temporal lobes. They play a vital role in the formation of emotional memories and their storage in the adjacent hippocampus, and in fear conditioning. They’re seen as part of the limbic system, but their connections with and influences on other regions of the brain are too complex for me to dare to elaborate here. The amygdalae are larger in human males, a sex difference that appears in children from age seven. But get this:
In addition to size, other differences between men and women exist with regards to the amygdala. Subjects’ amygdala activation was observed when watching a horror film. The results of the study showed a different lateralization of the amygdala in men and women. Enhanced memory for the film was related to enhanced activity of the left, but not the right, amygdala in women, whereas it was related to enhanced activity of the right, but not the left, amygdala in men.
This right-left difference is significant because the right amygdala connects differently with other brain regions than the left does. For example, the left amygdala has more connections with the hypothalamus, which directs stress and other emotional responses, whereas the right amygdala connects more with motor and visual neural regions, which interact more with the external world. Researchers are of course reluctant to speculate beyond the evidence, but as a non-scientist and pure dilettante I don’t give a flock about that – just don’t pay attention to my ravings. It seems to me that most female mammals, having offspring to tend, would be more primed for flight than fight in response to danger, compared with the unencumbered males??? OMG, is that evolutionary psychology?
It’s interesting but hardly surprising to note that studies have shown this right-left amygdala difference is also correlated to sexual orientation. Presumably – speculating again – it would also relate to those individuals who sense from early on that they’re born into ‘the wrong gender’.
Neuroimaging studies have found that the amygdala develops structurally at different rates in males and females, and this seems to be due to the concentration of sex hormone receptors in the different sexes – where there’s a size difference, there also appears to be a marked difference in the density of sex hormone receptors in the area. Again this is difficult to interpret, and it’s early days for this research. One brain structure, the stria terminalis, a bundle of fibres that constitutes the major output pathway of the amygdala, has become a focus of controversy in the determination of our sense of gender and sexual orientation. As a dilettante I’m reluctant to comment much on this, but the central subdivision of the bed nucleus of the stria terminalis is on average twice as large in men as in women, and contains twice the number of somatostatin neurons in males. Somatostatin is a peptide hormone which helps regulate the endocrine system, the system that maintains homeostasis.
What all this means for the detail of sex differences is obviously very far from being worked out, but it seems that the more we examine the brain, the more we find structural and process differences between the male and female brain in humans. And it’s likely that we’ll find similar differences in other mammals.
It’s important to note, though, that these differences, as in other mammals, exist within the one species, in which the genders have evolved to be codependent and to work in tandem towards their survival and success. Just as it would seem silly to say that female kangaroos are smarter or dumber than males, it’s silly to say the same of humans – the terms smart and dumb are not very useful here. The two genders, in all mammals, perform complementary roles, yet both are able to survive independently of one another. The amazing thing is that such different brain designs can be so similar in output and achievement. It’s more impressive evidence of the enormous diversity of evolutionary development.
Some years ago, when I was a bit more financially solvent than I am these days, I went to a gym for a while, and even employed a personal trainer. I learned from that experience, thanks to some simple exercises the trainer put me through, and my own quick development through these exercises, that, once I’d gotten this kick start, I didn’t need the expense of a gym, or a personal trainer for that matter, which is just as well, as I soon went broke and abandoned both.
Since then I’ve been using a combo of my trainer’s tips and some CSIRO-recommended exercises to stay moderately in shape at home, happily far from the sight of buffed-up men hefting obscene weights, not to mention bubble-butted women with sweat sparkling from their flawless sun-tinted flesh…
Anyway, one of the things that sometimes worried me when I turned up at the gym was my footwear. I noticed that most of the inhabitants wore all the ‘right’ gear, including what looked like the latest state-of-the-art top-brand ‘gym shoes’ or running shoes or whatever. I wore a pair of $10 canvas slip-ons, and I always expected the trainer to query them, though I’d also heard or read somewhere that all these expensive ‘scientifically tested’ exercise shoes were a load of malarkey, and you’re possibly better off with good old-fashioned plimsolls, or even nothing at all…
So it was with some interest that I listened to a little segment on a recent science show podcast, dealing precisely with this subject. An English researcher, Mick Wilkinson, who’s also a keen amateur runner, has been looking at running barefoot v running shod, and he ran a half-marathon barefoot in 2011 just to test things out. He came out of it more or less unscathed in spite of some less than barefoot-friendly surfaces.
As to the evidence, much of it was a summary of what Professor Lieberman of Harvard has found, findings published in Nature and a recent issue of New Scientist. Basically, Lieberman has found that we are born – that’s to say, evolutionarily adapted – to run, considering our skeleton and muscles, and issues of endurance and heat loss (the latter being an obvious consideration in going barefoot). An analysis of ‘peak impact forces and the rate at which those forces are absorbed by the body’ indicates that barefoot running, because it favours a ‘forefoot landing’, a type of foot strike pattern associated with ‘a lower loading rate’ (that is, the impact force builds up more gradually), is less jarring than its alternative.
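To make ‘loading rate’ concrete, here’s a toy calculation in Python. The force curves are invented for illustration (they’re not Lieberman’s data): two foot strikes reach the same peak force, but the heel strike reaches it much sooner, which is what a higher loading rate means:

```python
import numpy as np

# Vertical impact force over the first 50 ms of a foot strike, in units
# of body weight (BW). All numbers are invented for the example.
t = np.linspace(0, 0.05, 6)                           # 0, 10, 20 ... 50 ms
heel_strike = np.array([0, 1.5, 1.9, 1.8, 1.7, 1.6])  # sharp early spike
forefoot = np.array([0, 0.3, 0.7, 1.1, 1.5, 1.9])     # same peak, gradual rise

def loading_rate(force, time):
    """Crude loading rate: peak force divided by the time taken to reach it."""
    i = np.argmax(force)
    return force[i] / time[i]

print(f"heel strike: {loading_rate(heel_strike, t):.0f} BW/s")
print(f"forefoot:    {loading_rate(forefoot, t):.0f} BW/s")
# Same 1.9 BW peak, but the heel strike hits it at 20 ms rather than
# 50 ms - roughly 95 BW/s versus 38 BW/s. How abruptly the force
# arrives, not how big it gets, is the 'jarring' at issue.
```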
Looking at joint movements and rotational forces around the ankles and the knees, the evidence is that, with barefoot running, forces around the ankles are increased, forces around the knees are decreased. This is very interesting to me, as I stopped jogging years ago because one of my knees would stiffen up every time I did it. I was running in fairly basic running shoes, but more importantly to my mind I was running on a hard gravel track. Years later when I did a bit of jogging on grass I didn’t have a problem. Generally though I hate jogging and much prefer cycling, with a nice café at the end.
But what about the effect on the ankles? According to Lieberman, the structures on the rear of the leg – the Achilles tendon and the calf muscles, including the powerful soleus in the lower calf – have evolved to cope with these stresses on the ankle region. More research needs to be done, but there are some pretty serious difficulties, as Wilkinson points out:
So we’ve got biomechanical aspects linking forces, we know that forces are theoretically linked to some kinds of injuries, but that’s where it stops. What is missing is the next piece of the puzzle which would be the randomised control prospective studies examining injury rates in people who are learning to run barefoot, people who are learning to run in shoes. But the design of the study would be so complex, it would be prohibitive. I mean, you’d have to get people who were matched for training history, matched for age, matched for injury status. In fact it would probably be better to start off with people who had never run at all and just say, right, randomly allocate you into a group who are going to learn to run in shoes, you’re going to learn to run barefoot, and then track them over a very long period of time to find out what injury rates are per so many thousand miles. But again, it’s so difficult to operationalise a study like that, probably why one hasn’t been done.
In any case these studies wouldn’t so much answer the question of whether you run faster in shoes as whether you run better – that’s to say, with less general impact on the body. It’s not quite the same thing, though they are connected. And obviously there are hazards in running barefoot in modern urban environments. But there doesn’t seem to be much evidence to support all the advertising claptrap trying to get you to buy ultra-expensive running shoes. In fact, there’s been little noticeable difference in distance-running times – the real test for shoes v bare feet – in spite of not only high-tech footwear but all the other high-tech analysis in terms of diet, running technique and so forth. Wilkinson tells us that the American distance runner Steve Prefontaine, who in the early seventies held American records at every distance from 2,000 to 10,000 metres (he was killed in a car crash in 1975, aged 24), always ran in a standard pair of plimsolls.
So it looks like another case of advertising, and dare I say pseudo-science, winning out over the evidence…