The 2007 Indonesian film The photograph definitely has some power, in spite of certain manipulations and conventions which I’ll get to later. It boils down to a very simple story, a two-hander essentially, about the relationship between an old and infirm photographer and a young, struggling single mother, Sita (Shanty), teetering on the abyss. Sita sings in a karaoke bar and is clearly being forced into pleasing the customers in other ways by a hectoring standover figure. She’s separated from her young son Yani, whom she rings whenever she can, as well as sending money home (she also has an ailing grandmother).
But let’s begin at the beginning. The film opens as we enter the photographer’s dilapidated studio, with old pictures on the wall in old gilt frames. The old man shuffles among these images, regularly contemplates a trunk of photographic and other memorabilia, and spends some of his time burning offerings to his ancestors, or whatever gods he believes in, on an abandoned rail line just outside of town.
The beautiful Sita, having been forced to leave her living quarters, asks the old man if she can rent the room above his studio. The photographer’s responses are always non-committal if not grudging, and he seems to be lost in another world. Sita takes advantage of this to simply move in.
That’s when we turn to Sita’s life as a karaoke singer and spruiker for clients. Her ‘pimp’, if that’s what he is, is presented rather one-dimensionally as a whining, bullying little packet of evil who bangs on the door of the phone booth while she speaks to her son, and cajoles her into a room where three thugs rape and abuse her. He appears also to take all her earnings because she apologizes to the photographer for not being able to pay for her room and begs him to let her stay on. Having been beaten up, she’s unable to work, and so she makes herself useful to him by cleaning his studio and helping with the occasional customers he photographs against painted backdrops of the countryside.
The film dwells on this awkward relationship, contrasting the spent, secretive old photographer with his face toward the past, and the struggling young woman with a mixture of pragmatic hopes and idealistic dreams for her and her son’s future. The old man is looking to groom a successor, but he needs someone who can carry on the spirit of his ancestors. Sita is half-interested herself in taking on the role, but realises that the tradition-bound old man, in spite of his growing kindness toward her, would find her unsuitable simply because she’s a woman.
Sita hasn’t told the pimp her new address, but he soon finds her and starts haranguing her, only to be beaten away by the neighbours. Later he returns and, in one of the film’s most unconvincing scenes, chases her out of the town along a railway track, where, conveniently, the old man turns up and somehow the pimp manages to get himself run over by a train. The impact isn’t shown, and the likelihood of this young man, who’s clearly been living by his wits for years, allowing himself to be hit by a train in this way is just about zero.
Anyway, being freed of this man, she’s able to look more clearly towards the future – she’d love to become a chanteuse on a cruise ship. Meanwhile the photographer is getting more tottery, and while he’s on what might be his deathbed she explores the place further, including a trunk that he’s strictly forbidden her to open. It contains, inter alia, some tattered photos of the mutilated victim or victims of a train accident. The old man, suddenly recovered, catches her snooping, and we get a flashback to his youth, when he was on a train which hit someone on the line. He took photos of various parts of the victim’s body, the photos Sita found in the trunk, and he’s been haunted by the event ever since.
The old man returns to his dying, and he may already be dead when a last photograph is taken, with him propped in a chair and Sita by his side. This is the photo of the film’s title, and it eventually comes into the possession of Yani, Sita’s son, who narrates the final moments of the film, uniting past and future through the power of photography among other things. A pleasant and sometimes moving film, a little marred by some unlikely plot elements, and by a slightly unreal spareness of scene, with little of the bustle you would surely find in urban Indonesia. Film-makers, of course, create their own reality in a film, which is never the ‘real’ reality. At the same time a degree of verisimilitude is essential to evoke the sorts of responses you want to evoke in viewers. This is one of the essential balancing acts in any film, and the hardest thing to manage (and that’s what makes James Bond films such abject failures in my view). The photograph, unfortunately, doesn’t quite succeed in this regard, but the characters, especially Sita, are interesting enough to compensate.
To me – and I’ve written about this before – the invocation of the supernatural, the ‘call’ of the supernatural, if you will, is something deeply psychological, and so not to be sniffed at, though sniff at it I often do.
I’m prompted to write about this because of a program I saw recently on Heath Ledger (Australia’s own), an understandably romantic, mildly hagiographic presentation, in which a few film directors and friends fondly remembered him as wise beyond his years, with hidden depths, a kind of inner force, a certain je ne sais quoi, that sort of thing. As both a romantic and a skeptic, I was torn as usual. The word ‘spiritual’ was given an airing, unsurprisingly, though mercifully it wasn’t dwelt on. I once came up with my own definition of spirituality: ‘To be spiritual is to believe there’s more to this world than this world, and to know that by believing this you’re a better person than those who don’t believe it’. This might sound a mite cynical but I didn’t mean it to be, or maybe I did.
Anyway one of Ledger’s associates, a film director I think, told this story of the young Heath. A number of friends were partying in his apartment when he, the director, picked up a didgeridoo, which obviously Ledger had brought with him from Australia, and attempted to play it, but not knowing much about the instrument, held it upside-down. Heath gently took it from him and corrected him, saying ‘no, no, if you hold it that way it will lose its power, the power of the instrument and its maker,’ or some such thing. And the seriousness and respectfulness with which this young actor spoke of his didge impressed the director, who considered this a favourite memory, something which caught an ‘essence’ of Ledger that he wanted to preserve.
I’ve been bothered by this tale, and by my ambivalent response to it, ever since. It would be superfluous, I suppose, to say that I don’t believe that briefly holding a didge upside-down has any permanent effect on its musical power.
It’s quite likely that Ledger didn’t believe this either, though you never know. What I’m fairly sure of, though, was that his respectfulness was genuine, and that there was something very likeable, to me at least, in this.
All of this takes me back to a piece I wrote some years ago, since lost, about big and small religions. I was contrasting the ‘big’ religions, like Catholicism and the two main strands of Islam, with their political power in the big world, often horrific in its impact, with the ‘small’ religions or spiritual belief systems, such as those found among Australian Aboriginal or some African societies, which have no political power in the big world but provide their adherents with identity and a kind of social energy that’s marvelous to contemplate. My piece focused on the art work of Emily Kame Kngwarreye, whose prolific and astonishing oeuvre, with its characteristic energy and vitality, clearly owed so much to the beliefs and practices of her ‘mob’, the so-called Utopian Community in Central Australia, between Alice Springs and Tennant Creek to the north.
Those beliefs and practices include dreaming stories and totemic identifications that many western skeptics, such as myself, might find difficult to swallow, in spite of a certain romantic appeal. The fact is, though, that the Utopian Community has been remarkably successful, in terms of the usual measures of well-being, and particularly in the area of health and mortality, compared to other Aboriginal groups, and its success has been put down to tighter community living, an outdoor outstation life, the use of traditional foods and medicines, and a greater resistance to the more destructive western products, such as alcohol.
This might put a red-blooded but reflective skeptic in something of a quandary, and the response might be something like – ‘well, the downside of their vitality and health, derived from spiritual beliefs which have served them well for thousands of years, is that, in order to preserve it, they must live in this bubble of tribal thinking, unpierced by modern evolutionary or cosmological knowledge, and this bubble must inevitably burst.’ Must it? Is there a pathway from tribalism to modern globalism that isn’t entirely destructive? Is the preservation of tribal spiritual beliefs a good thing in itself? Can we take the statement, that holding a didgeridoo upside-down affects its spirit, as a truth over and above, or alongside, the contrasting truths of physical laws?
I don’t know the answer to these questions, of course. Groping my way through these issues, I would say that we should respect and acknowledge those beliefs that give a people their dignity, and which have served them for so long, but perhaps that’s because we’re feeling the generosity of someone outside that system who’s unlikely to be affected or to feel diminished by it. These are, after all, small religions, from our perspective, not the big, profoundly ambitious religions intent on global domination, with their missionaries and their jihadists and their historical trampling of other belief systems, as in Mexico and South America and Africa and here in Australia.
Of course there’s the question – what if those small religions grew bigger and more ambitious? Highly unlikely – but what if?
While at Victor Harbour, we did the usual walk around Granite Island, marveling at these massive lichen-covered granite boulders and reading the signs about their origins, and their hardness and consequent permanence, compared to, say, limestone.
Granite is a composite of 3 minerals – quartz (bluish), feldspar (pink and white) and mica (black biotite). On the island it’s found in great heaps of rocks, called xenoliths, subject to the cavernous weathering known as tafoni – though the examples there aren’t spectacular, compared to others, such as Kangaroo Island’s Remarkable Rocks. This granite has upwelled from – well, somewhere – back in the Cambrian, about 520 million years ago. Granite is igneous rock, generally formed from molten magma which slowly pushes its way through cracks and spaces to just below the surface, over millions of years, where it’s finally revealed through soil erosion – at least that’s the story I’m getting through my reading. What I see, though, is a mixture – boulders in heaps, at the tops of hills, that look like they’ve rained down from the sky; great cliff faces that look more like upwellings; and, in gullies, a combo of large and small boulders that look like the end-product of an avalanche.
Well, a lot can happen in 500 million years, but I’ll try to make sense of it, not only for Granite Island but for the region around it. Here’s an intro from the Geological Society of Australia:
Grey metamorphic rocks are exposed in natural outcrops, road cuttings and along the sea coast over much of southern Fleurieu Peninsula and Kangaroo Island.
They are called the Kanmantoo Group by geologists and were deposited into a rapidly subsiding ocean basin as fine grained sand and silt eroded from large land masses to the west and south in the Cambrian Period about 520 million years ago.
After the basin filled, this sequence of sediments was buried deeply below the earth’s surface and altered (metamorphosed) by heat and pressure into their present form. They were also intruded by masses of molten granite (called the Encounter Bay Granite) and were then thrust up into a mountain range in a major earthmoving event called the Delamerian Orogeny which ended about 475 million years ago.
So it would seem that the Delamerian Orogeny was responsible for the granite formations on Granite Island and thereabouts. They were igneous intrusions resulting from the uplift and folding of the lithosphere (the earth’s crust and upper mantle) due to the clash of tectonic plates – orogeny literally means mountain-building. This particular orogeny, occurring at the end of the Cambrian period and into the Ordovician, created the Flinders and Mount Lofty Ranges (the Adelaide geosyncline). In those days, the area was part of the supercontinent called Gondwana – in fact the Delamerian was one of several orogenies that contributed to its formation.
Fast forward a couple of hundred million years, towards the end of the Paleozoic era, and Gondwana was located around the South Pole, though parts of it extended almost to the equator. In those days the highlands of the Adelaide geosyncline, which had eroded down over the years, were often covered in ice caps, though the planet overall was warmer than today. Geologists find evidence of glacial activity from that period, from Port Elliot round to Hallett Cove:
Boulders of Encounter Bay Granite and Kanmantoo Group rocks, plucked off the surface and moved many kilometres by the ice from their original location, are a common feature of this glacial terrain. They are called erratics.
There’s so much more to explore in this line, obviously, and it’s a perfect example of a little scratching at the surface of a subject revealing, for me, a whole world of ‘known unknowns’, to quote the immortal Donald Rumsfeld. Science is amazing in its accumulations from researchers across the globe. So now, when I see strange boulders in out of the way places and unrelated, apparently, to the rocks around them, I’ll think of glaciation and erratics as a possible explanation.
So here I am at lovely Victor Harbour on Encounter Bay, where England’s Matt Flinders and co encountered France’s Nick Baudin and co most unexpectedly over 200 years ago, as each expedition was sailing round this great south land in opposite directions, mapping and exploring and discovering. But I’m not going to tell that story; I’m going to explore a much earlier era, for we spent a little over an hour in the heat of the day in the local cinema, watching a thing called Walking with dinosaurs – the movie. I think this was a companion-piece to Walking with dinosaurs – the real thing, or something like that. Anyway, it was aimed largely at kids, with a horribly anthropomorphised storyline replete with Yank cliches, in Yank accents, in spite of its being a BBC production. The animation was fine, though, and hey, it was dinosaurs, so more or less bearable.
But what about historical accuracy? Wouldn’t want to be leading kids up the garden path. The story, we’re told, takes place 70mya, in what’s now Alaska. Our hero starts life as the runt of the litter, and of course ends up as the leader of a herd of hundreds if not thousands. He’s a pachyrhino or something, and they headbutt for control of the females, and other males, and have to fight off their natural predators, the omnivorous gorgosauri. He also at one stage gets adopted by a wandering herd of gigantic edmontosauri, a herbivorous bunch. I’m no dinosaur expert but I’ve never heard of any of these beasties, whose names are presented to us with an air of scientific authenticity.
Well, as it turns out they’re all quite real (what was I thinking, BBC and all). Gorgosaurus (‘fierce lizard’) is known to have roamed about the region of modern Alberta, Canada some 75mya (the late or upper Cretaceous). Weighing in at more than 2 tonnes, it was an apex predator, a genus of tyrannosaurid theropod dinosaur, and is one of the best-represented tyrannosaurid theropods, with dozens of specimens found, so shame on me for my ignorance. Smaller than Tyrannosaurus, to which it’s distantly related, it’s often confused with Albertosaurus, and they may simply be variants. As with all tyrannosaurids, its massive head is crammed with teeth, though not so many, and not so blade-like, as T rex’s. The Wikipedia article on gorgosaurus is incredibly detailed and overwhelmingly rich for dilettantes comme moi, but it’s well worth a visit.
The protagonist of the movie was a Pachyrhinosaurus. They inhabited the Alberta and Alaska regions from 79 to 66mya. They’re a genus (of which 3 separate species have been recognised) of centrosaurine ceratopsid dinosaurs. They were gentle giants (when they weren’t headbutting), weighing up to 4 tonnes, and their presentation in the film as herd animals is backed up by the most important find of pachyrhinosaurus fossils, a bone bed along Pipestone Creek in Alberta, where some 3500 bones and 14 skulls have been found, apparently the site of a mass mortality, possibly a failed river crossing.
Pachyrhinosaurus has become a popular dino since being relatively recently discovered, in the forties. I’ve mentioned it’s a centrosaurine ceratopsid, the centrosaurinae being a subfamily of ceratopsid dinosaurs (a subfamily which doesn’t include Triceratops, the best-known ceratopsid). The centrosaurines are divided into two tribes, the centrosaurins and the pachyrhinosaurins. Ceratopsids all have these fearsome-looking great horny heads, like elephantine frill-necked lizards, but they’re all quadrupedal herbivores, so not only are we safe from being eaten by them, we might be able to eat them ourselves if we could bring them back to life. And I’m sure their horns would have aphrodisiac qualities.
The other dinosaur type featured, Edmontosaurus, was a hadrosaurid or duck-billed dinosaur, some 12 metres long and 4 tonnes in weight. There are two known species, one of which is known to have lived right up to the Cretaceous-Paleogene extinction event (the one that killed off all non-avian dinosaurs). They were coastal-dwelling herbivores, from North America (so named because first found near modern Edmonton), and if the general rule is – and I’m largely guessing here – that the herbivorous dinos roamed about in herds, like modern-day bison, antelopes and kangaroos, then the scenario in Walking with dinosaurs, in which our young pachyrhino and his bro hook up for a while with a herd of edmontosauri and are savaged by scavenging gorgosauri, is almost plausible for the time and place.
So, with the help of Wikipedia mainly – it’s very comprehensive on this stuff – I managed to get quite a lot out of Walking with dinosaurs, though I have to say, some of it was strictly for the birds.
Well, the atheist wars continue to provide an amusing spectacle, as the rise of the nones proceeds apace. I recently took a look at 3 Quarks Daily, and this piece was heading the bill. Atheist David Johnson has launched into the soi-disant new atheists, dubbing them ‘undergraduate atheists’, and wondering whether we might not be better off with religion. The 3 quarks essay, written by Stefany Ann Golberg and Morgan Meis, is an interesting consideration of just this question, but I’ll look at it from a different perspective, before returning to their essay, which deals largely with existential doubt.
The question of whether or not we’re better off with religion appears to be answering itself, in the modern world, and my reading of history – and surely any reasonable reading of history – would support the view that our slow, patchy emergence from religion has been a positive thing. So I’m going to look, or glance, at it from both a diachronic and a synchronic perspective, as the post-modernists used to say.
Starting with the diachronic, Steven Pinker’s The better angels of our nature (which I’ve not read) chronicles human violence and brutality from our hunter-gatherer ancestors through to the early civilisations and their religio-cultural practices, then the long years of Christendom, the enlightenment, modern warfare and more or less secular modern western society. I don’t have to read it to know that his case for our society having become less violent is a convincing one. I’ve read enough history myself to independently verify this. To cite just a few texts – Robert Hughes’ The fatal shore, Martyn Whittock’s Brief history of life in the middle ages, Geoffrey Robertson’s The tyrannicide brief, Simon Schama’s Rough crossings and Ben Kiernan’s Blood and soil: a world history of genocide and extermination from Sparta to Darfur – these readings have overwhelmingly confirmed to me how lucky I am to be alive in the here and now, in the country I happen to live in, Australia.
So what does that have to do with religion, specifically? Well, the fact that life was nastier, more brutish and shorter in the past than it is today, and the more so the further back you go, suggests some kind of positive evolution. It hasn’t been smoothly linear, it’s bumped along in fits and starts, but being poor (which I am) in the 21st century, in a fairly wealthy country, has been far less of a pain than it was in most of the 20th century. The 19th century would’ve been far worse, and the 9th century – well, then I would’ve been subject to the vagaries of the politico-religious forces way above my head. I certainly wouldn’t have been able to read or write, I’d never have moved beyond my local district, my thoughts would’ve dwelt far more on basic survival, and knowing what I do today, I surely would’ve considered it a miserable and frustratingly circumscribed life (but at least I wasn’t a woman). And religion would’ve played its part in this. It’s been said recently, by an ultra-conservative government appointee looking into ‘reforming’ our schools, that Australia’s a Christian country. But the very backlash created by such a statement, which would’ve been completely uncontroversial in the 1950s, indicates that in the interim things have changed, and quite dramatically. What it means to be a Christian country has changed over the past few decades, but you get an even better perspective if you look back over centuries, as I roughly did here. And it would be hard to deny that, as religion has loosened its grip, both politically and economically, life has improved. For example, the general view that held sway centuries ago, that probing into and questioning God’s creation was idle and pointless if not downright devilish (with sometimes devastating consequences for those who didn’t toe the line), held us back from a multitude of discoveries that would’ve improved the lot of humanity. 
The prevailing attitude seemed to be, don’t expect too much from this world, cause that’s just how God made it, but if you get through it and obey God, you won’t believe what’s waiting for you on the other side.
So let’s move now to the synchronic. Even if you agree that generally life’s better now than in the past, you might want to point to Somalia, or Syria, or Sudan, etc, so it matters which country or region you’re looking at. I’ve mentioned before the extraordinary fact that the Paris-based OECD has assessed Australia as ‘the happiest country in the world’ for the last three consecutive years, based on 11 separate social and economic indices. There are a few such regular surveys floating around, and Australia obviously doesn’t win them all, but it’s up there with the same handful of countries every time. Australia, New Zealand, Canada, and a few western European countries, particularly the Scandinavian ones – these are always at the top of the lists of best countries to live in, and they also happen to be the least religious countries on the planet. They share other features too, of course, such as affluence combined with social safety nets, long-term political stability, low crime rates, and high levels of social mobility and social participation, all of which prompts reflection on whether these features are more a cause or a consequence of the rise of the nones. In any case, it’s very clear that a large and growing percentage of the populations of the world’s happiest and most successful countries are deciding they’re better off without religion. And that’s not an opinion, it’s a fact.
So there’s an answer, on broad political and economic grounds. Now to look at the more existential, personal issues around belief and doubt, as explored in the essay that prompted this post. David Johnson argues that maybe we’re better off with religion, even though it’s not true. This poses immediate problems in that Johnson’s an atheist. I would find his argument more convincing if he himself embraced religion (but which one?) and denied its falsehood. Otherwise he’s muddying the term ‘we’ and taking on a position of Napoleonic condescension. ‘Religion keeps the people happy so I’ll make a pact with the pope and even build a few churches, but for me it’s all BS’. It’s a kind of bad faith argument, for once we’ve eaten of the tree of knowledge, we can’t return to a state of innocence.
Golberg and Meis look at the issue by re-examining the thoughts of Miguel de Unamuno, an early 20th century Spanish writer and philosopher not much read nowadays. Unamuno was raised Catholic, lost his faith, embraced ‘scientific rationalism’, found it unsatisfactory, and explored in his writing a kind of tortured existence between faith and doubt. What his work brings back to our mind, as we’re sometimes forgetful of it, is that doubt, not of the scientifically skeptical kind, but of the soul-gripping and sometimes soul-destroying kind, is a major factor in our lives, whether or not we’re religious. Not believing in the supernatural is easy, for me at least, but it doesn’t solve any of the major problems of life, such as what we owe to ourselves, what we owe to others, whether we should stay or go, whether we should submit or fight back, whether we should take a situation seriously or lightly, whether we should forgive or take umbrage, or even, in some instances, whether life is worth the candle. And of course most people who strongly believe in an afterlife, or profess to, still cling to this life tenaciously at the end. Doubt is everywhere.
Having said this, I must say that I personally take great solace in science, for many reasons, but perhaps mostly because it takes me out of myself into a much larger world. And I don’t think of it in rational terms. In fact, describing it as science doesn’t quite do it for me; it’s more a world of exploration and adventure, which, like the world of fractals, never ends, but just keeps on giving. And if we’re talking of mysteries, as Unamuno does, there’s an ever-replenishing supply in such explorations.
Food irradiation is a well-known process for preserving food and eliminating or reducing bacteria. It’s used for much the same purpose that pressure cooking of tinned food is used, or the pasteurization of milk. All food used by NASA astronauts in space is irradiated, to reduce the possibility of food-borne illness.
advantages and disadvantages of irradiation
According to the US Centers for Disease Control and Prevention (CDC), irradiation, if applied correctly, has been clearly shown to reduce or eliminate food pathogens, without reducing the nutritional value of the food. It should be noted that irradiation doesn’t make food radioactive. I’ll look at the science of irradiation shortly.
Of course it’s not a cure-all. For example, it doesn’t halt the ageing process, and can make older fruit look fresher than it is. The reduction in nutritional value of the food, caused by the ageing process, can be masked by irradiation. It can also kill off bacteria that produce an odour that alerts you that the food is going off. Also, it doesn’t get rid of neurotoxins like those produced by Clostridium botulinum. Irradiation will kill off the bacteria, but not the toxins produced by the bacteria prior to irradiation.
how does food irradiation work?
Three different types of irradiation technology are in use, employing gamma rays (from cobalt-60), electron beams and x-rays. The idea is the same with each: the use of ionising radiation to break chemical bonds in molecules within bacteria and other microbes, leading to their death or greatly inhibiting their growth. The amount of ionising radiation is carefully measured, and the radiation takes place in a special room or chamber for a specified duration.
When radioactive cobalt 60 is the energy source, it’s contained in two stainless steel tubes, one inside the other, called ‘source pencils’. They’re kept on a rack in an underground water chamber, and raised out of the water when required. The water isn’t radioactive. Food products move along a conveyor belt into a room where they’re exposed to the rack containing the source pencils. Gamma rays (photons) pass through the tubes and treat the food. The cobalt 60 process is generally used in the USA.
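One consequence of using a radioactive source, incidentally, is that it fades: cobalt-60 decays with a half-life of about 5.27 years, so the source pencils steadily lose activity and need periodic replenishment. Here’s a quick sketch of the standard half-life calculation – the function and sample numbers are my own illustration, not from any facility’s specs:

```python
# Half-life decay sketch (illustrative only).
# Cobalt-60's half-life is about 5.27 years.

HALF_LIFE_YEARS = 5.27

def remaining_fraction(years):
    """Fraction of the original cobalt-60 activity left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# After one half-life, half the activity remains:
print(round(remaining_fraction(5.27), 2))   # 0.5
# After two half-lives (about ten and a half years), a quarter:
print(round(remaining_fraction(10.54), 2))  # 0.25
```

So a gamma facility loses roughly half its treatment capacity each five-and-a-bit years unless fresh pencils are added.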
An electron-beam linear accelerator generates, concentrates and accelerates electrons to up to 99% of light-speed. These electron beams are scanned over the product. The machine uses energy levels of 5, 7.5 or 10 MeV (million electron volts). Again the product is usually guided under the beam by a conveyor system at a predetermined speed to obtain the appropriate dosage, which will clearly vary with product type and thickness.
The x-ray process starts with an electron beam accelerator targeting electrons on a metal plate. The energy that isn’t absorbed is converted into x-rays, which, like gamma rays, can penetrate food containers more than 40 cm thick – shipping containers, for example.
Most of the radiation used in these processes passes through the food without being absorbed. It’s the absorbed radiation, of course, that has the effect, destroying microbes and so extending shelf life, and slowing down the ripening of fruits and vegetables. The potential is there for food irradiation to replace chemical fumigants and fungicides used after harvest. It also has the potential, through the use of higher doses, to kill contaminating bacteria in meat, such as Salmonella.
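To give a feel for the quantities involved: absorbed dose is measured in grays (1 Gy = 1 joule absorbed per kilogram of food), so for a conveyor system the average dose is roughly the absorbed beam power divided by the mass throughput. A back-of-the-envelope sketch – all the figures here are hypothetical examples of mine, not data from any real irradiator:

```python
# Back-of-the-envelope dose estimate for a conveyor irradiator.
# 1 gray (Gy) = 1 joule absorbed per kilogram of product, so
# kilowatts absorbed / (kg per second) gives kilograys (kGy).
# All figures are invented for illustration.

def absorbed_dose_kgy(beam_power_kw, absorbed_fraction, throughput_kg_per_s):
    """Average dose in kilograys: absorbed energy per kg of product."""
    absorbed_kw = beam_power_kw * absorbed_fraction  # kJ absorbed per second
    return absorbed_kw / throughput_kg_per_s         # kJ/kg == kGy

# e.g. a 10 kW beam with half its energy absorbed, treating 1 kg/s:
print(absorbed_dose_kgy(10, 0.5, 1.0))  # 5.0 kGy
```

That’s the crude logic behind setting the conveyor speed: slow the belt (lower throughput) and the dose rises; speed it up and it falls.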
Food irradiation is a cold treatment. It doesn’t significantly raise the temperature of the food, and this minimises nutrient loss or changes in texture, colour and flavour. The energy it uses is too low to cause food to become radioactive – it has been compared to light passing through a window. Food irradiation uses the same principle as pasteurization, and can be described as pasteurization by energy instead of heat, or cold pasteurization.
the use of food irradiation in Australia
Due largely to fears about irradiation having to do with radioactivity and nuclear energy, the process isn’t used as widely in Australia (or indeed the USA) as it could be. Irradiation is used in some 50 countries, but the level of usage varies from country to country, from very limited in Austria and other EU countries to very widespread in Brazil. Food Standards Australia New Zealand (FSANZ) summarises our situation thus:
In Australia and New Zealand, only herbs and spices, herbal infusions, tomatoes, capsicums and some tropical fruits can be irradiated.
FSANZ has established that there is a technological need to irradiate these foods, and that there are no safety concerns or significant loss of nutrients when irradiating these foods.
Irradiated food or ingredients must be labelled clearly as having been treated by ionising radiation.
food irradiation, health and safety
Since 1950 hundreds of studies have been carried out on animals fed with irradiated products, including multi-generational studies. On the basis of these studies, food irradiation has been approved by the World Health Organization, the American Dietetic Association, the Scientific Committee of the European Union and many other national and international monitoring bodies. Of course this hasn’t stopped many individuals and organisations from complaining and campaigning against the practice. Concerns include: chemical changes harmful to the consumer; impairment of flavour; the destruction of more ‘good’ than ‘bad’ bacteria; and that it’s an unnecessary process which runs counter to the movement towards regional product, seasonality and real freshness. I’ve already mentioned other problems, such as that it can mask spoiled food, and that it doesn’t destroy toxins already released by bacteria.
opposition from the organic food movement
Food products must be irradiation-free if they are to be certified as ‘organic’, in Australia and elsewhere. Now, I’ve fairly regularly expressed irritation with the ‘organic’ food ideology, most particularly in this post, but I recognise that it appeals to a very diverse set of people, with perhaps a majority simply believing, on faith, that ‘organic’ food will be more nutritious, safer and better for the environment than conventional food. Most of those people wouldn’t know much about food irradiation, but hey, it sounds dodgy, so why not avoid it? I’ve no great argument to make with such people, apart from the old ‘knowledge is power’ arguments, but there are a few individuals and organisations trying to get food irradiation banned, based on what they claim to be evidence. Unsurprisingly, most of these critics are also ‘organic’ food proponents. I’ll look at some criticisms from Eden Organic Foods, a US outfit, which admittedly represents the extreme end of the spectrum (nature before the fall?).
Firstly, in their ‘factsheet’ on irradiation, linked to above (and reprinted verbatim here by another alarmist organisation, the Center for Food Safety), they waste no time in informing us that the beams used are ‘millions of times more powerful than standard medical x-rays’. This sounds pretty scary, but it’s a bogus comparison. Irradiation is designed to kill bugs and bacteria, whereas medical x-rays are for making visible what is invisible to the naked eye. Clearly, the first and foremost concern in testing and studying the technology is to make sure that the chemical changes it induces are safe for humans. Comparisons with medical x-rays are more than irrelevant to this concern, as the author of this factsheet well knows.
Next comes this disturbing claim:
Radiation can do strange things to food, by creating substances called “unique radiolytic products.” These irradiation byproducts include a variety of mutagens – substances that can cause gene mutations, polyploidy (an abnormal condition in which cells contain more than two sets of chromosomes), chromosome aberrations (often associated with cancerous cells), and dominant lethal mutations (a change in a cell that prevents it from reproducing) in human cells. Making matters worse, many mutagens are also carcinogens
Wow. So much for the poor people of Brazil – they’re obviously done for. But how is it that the world’s top scientific agencies missed all these mutagens and carcinogens? Let’s take a closer look.
The term ‘radiolytic products’ simply means the products created by chemical changes that occur when food is irradiated. Similarly, the products created by heat treatment, or simply cooking, might be called ‘thermolytic products’. These are not ‘strange’, they’re quite predictable, for irradiation would be totally ineffective if it didn’t bring about some chemical changes. One of the differences is that radiolytic products are generally undetectable and produce only minor changes in the food compared to the major operation we call cooking. It is, of course, precisely these products that the scientific community scrutinises when determining the safety of irradiated foods.
Interestingly, in an article, dating back to 1999, called ‘Scientific answers to irradiation bugaboos’, for 21st Century Science & Technology magazine, Marjorie Mazel Hecht has this to say:
The July 1986 report of the Council for Agricultural Science and Technology (CAST), which reviewed all the research work on food irradiation, defined unique radiolytic products “as compounds that are formed by treating foods with ionizing energy, but are not found normally in any untreated foods and are not formed by other accepted methods of food processing.”
The report states that “on the basis of this definition no unique radiolytic compounds have been found in 30 years of research. Compounds produced in specific foods by ionizing energy have always been found in the same foods when processed by other accepted methods or in other foods” (Vol. 1, p. 15).
This slightly contradicts the factsheet put out by Idaho State University’s Radiation Information Network, which acknowledges the existence of such products while insisting on their nugatory nature:
Scientists find the changes in food created by irradiation minor compared to those created by cooking. The products created by cooking are so significant that consumers can smell and taste them, whereas only a chemist with extremely sensitive lab equipment may be able to detect radiolytic products.
Needless to say, alarmists thrive on these contradictions. So what evidence is there of mutagenic irradiation byproducts? Well, there are radiolytic byproducts of fatty acids in meat, called alkylcyclobutanones (2-ACBs), first detected a few decades ago, and the research done on them seems to be so far inconclusive. A book entitled Food Irradiation Research and Technology, the second edition of which was published last year, states that ‘knowledge about the toxicological properties of 2-ACBs is still scarce’, and that ‘it may be prudent to collect more knowledge on the toxicological and metabolic properties of 2-ACBs in order to quantify a possible risk – albeit minimal.’ The book describes a number of studies on rats and humans, going into more detail than I can comprehend, but the results have been difficult to interpret and generally not easily replicable in other studies, indicating very minute and hard-to-measure effects. No doubt such studies will be ongoing. As far as I know, 2-ACBs are the only products about which there is any concern.
What is obvious though, in looking at the research material available online, is the difference between the caution, skepticism and uncertainty of researchers compared to the adamantine certainty of such critics as the Center for Food Safety.
But what about polyploidy? Polyploid cells contain more than two paired sets of chromosomes. Eukaryotic cells, such as those of multicellular creatures, are typically diploid (two sets), while prokaryotic, bacterial cells are haploid (one set). Polyploidy is regarded as a chromosomal aberration, common in many plants and some invertebrates, but relatively rare in humans. However it is present in humans, and the percentage varies from individual to individual, and within individuals from day to day and week to week, depending on a range of factors including diet, age, and even circadian rhythms. Levels of up to 3-4% have been found in the lymphocytes of healthy individuals, though some researchers have claimed much higher percentages in liver cells. The overall finding so far is that fluctuations in polyploidy are the norm, and no clear correlation has been found between these fluctuations and health profiles. It seems that the biological significance of polyploidy simply isn’t known.
Critics of irradiation have been going on about polyploidy and other mutations supposedly caused by irradiation for decades, and unsurprisingly, some are fanatically obsessed with the issue, accompanying their rants with long reference lists, mostly from like-minded activists. However, the text Safety of irradiated foods, 2nd edition discusses polyploidy in some detail, with particular reference to a study of malnourished Indian children fed irradiated wheat, a study regularly cited by anti-irradiation activists. It turns out that there were many problems with the study. First, not enough cells were counted to validly pinpoint an effect, such as a change in diet. Second, polyploidy is notoriously difficult to detect – superimposed diploid cells can be easily mistaken for polyploid cells under a microscope (in fact when two independent observers looked at the same microscope slides, one found 34 polyploid cells, the other found 9). Further, the study only gave group results rather than individual results, so it wasn’t possible to know whether the polyploidy was restricted to one or two individuals rather than spread over the group. Another problem was that the reference or control group was found to have no polyploidy at all, a very strange finding given that other researchers always found some degree of polyploidy in their subjects, regardless of irradiation or other effects. In fact, the study was so poorly written up that it’s impossible to replicate – for example, the exact diet given to the children wasn’t described. How was the wheat fed to the children? Presumably it was prepared in some way, but how? The omission is crucial. The study also didn’t take into account the effect of malnutrition itself on chromosomal abnormalities. And so on.
You get the picture, and it’s the same with other claims about mutations and carcinogens. Every time you look into the claims you find the same problems that no doubt other scientific watchdog organisations have found – poorly conducted studies that either can’t be replicated or haven’t survived replication. That, of course, is no reason for complacency, and at least the activists can assist, in their sometimes muddle-headed ways, in improving our knowledge of 2-ACBs, polyploidy and other biological effects, just as the creationists who bang on about a lack of transitional forms, or ‘irreducible complexity’, help us to focus on refutations, clarifications and further evidence.
Finally, food irradiation, while clearly not the zappo-horrorshow that activists are determined to make it, doesn’t replace proper handling techniques and a good instinct about food quality. The fact is, though, that it does increase shelf life, and is a useful tool in our increasingly global economy, where food is shipped from here to there and everywhere, in season and out. If you prefer to eat locally, with fresh and seasonal produce, fine, and we can argue about the sustainability of that approach on a worldwide scale, but let’s none of us pretend that food irradiation is other than what it is. Let the evidence, properly evaluated, be your guide.
In this post I want to try to avoid politics, and to focus on the English language, its use and abuse. If you google the word ‘coward’, followed by the word ‘meaning’ (I often ask my NESB students to do this with words they don’t know), you’ll come up first with this definition: a person who is contemptibly lacking in the courage to do or endure dangerous or unpleasant things. Second comes this: [a person who is] excessively afraid of danger or pain.
These are, to me, bog-standard, uncontroversial definitions of the word ‘coward’. To be a coward is to be nothing more and nothing less than what these definitions describe.
So, as a person who cares about language, it disturbs and aggravates me that the word ‘coward’ is now regularly used by the media and by commentators of all kinds, from world leaders to pub philosophers, to refer to suicide bombers, mass shooters, Wikileakers and terrorists of every description. I would ask you to pause for a moment, and think of these categories of people, and the people themselves, if you can bear it. Think of, say, Thenmozhi Rajaratnam, a member of the Tamil Tigers and the suicide killer of Rajiv Gandhi and 14 others besides herself in 1991. Or Reem Riyashi, the wealthy Palestinian mother of two and Hamas operative who killed herself and 4 Israelis at the Erez Crossing in Gaza in 2004. Think of Martin Bryant, the murderer of 35 people at Port Arthur in Tasmania in 1996, or Anders Behring Breivik, killer of 77 people by bombing and gunfire in Norway in 2011. Think again of Bradley (now Chelsea) Manning, who leaked large quantities of classified US information to Wikileaks in early 2010, or Edward Snowden, recent leaker of classified documents from the USA’s National Security Agency to various media outlets. Now think finally of Mohamed Atta, a principal player in the September 11 2001 attacks in the USA, and pilot of the plane that crashed into the North Tower of the World Trade Centre, or Noordin Top, mastermind of several fatal bombings in Indonesia, and indefatigable recruiter and indoctrinator for the Jihadist organisation Jemaah Islamiya.
No doubt these characters will awaken many diverse thoughts, but it’s unlikely that cowardice would be part of your description of any of them, especially after having been primed with the definitions of a coward at the top of this post. Describing any of these various characters as a coward seems to be what philosophers call a ‘category mistake’, for the actions that have made them notorious are about as far from cowardly, in the bog-standard sense of that term, as actions can be.
So what is going on here? Intellectual laziness? Overblown rhetoric? Well, yes and no. To dismiss this rhetoric of cowardice as just plain ignorant or lazy would be to miss the point of it, for there is method in this apparent madness, intended or not. The real point of describing any or all of these people as cowards is to remove them as far as possible from any association with another word, more or less directly opposed to cowardice: courage.
Courage is seen as positive of course. It’s seen as a virtue, yet when we delve further into it, as Socrates and his interlocutors did in the Laches, we find it to be a more slippery concept than at first glance. It rather sticks in our craw, to say the least, to claim that Mohamed Atta was courageous in carrying out his mission to fly an unfamiliar Boeing 767 into the World Trade Centre, or that Thenmozhi Rajaratnam showed amazing courage in blowing herself up with Rajiv Gandhi and many others, or that Anders Breivik displayed steely resolve and courage in carrying out his long-planned slaughter of scores of innocent children. The actions of Manning and Snowden have naturally received more mixed responses, with some feeling that the term ‘courageous’ is singularly apt in describing them, while others would baulk at the term.
So let’s perform the same operation on ‘courage’ as we did on the word ‘coward’. Here’s the very straightforward result:
Courage: the ability to do something that frightens one.
Now, it’s worth noting that this bog-standard definition, as with that of ‘coward’, has nothing whatever to say about the moral implications of the action or actions that the brave person engages in and the coward avoids. That action might be the slaughter, or the rescue, of thousands. This is key: the moral implications or the consequences of the actions are irrelevant to the definition. For some, it seems, this point is hard, if not impossible, to swallow. That’s the problem; because of the negative load that the term ‘coward’ carries, some people are determined to describe any action that they consider has negative consequences as cowardly. But to try to extend the meaning of the term from the bog-standard, more limited definition quoted at the top means moving away from consensus into a field of contestation that enormously diminishes the coherence and so the usefulness of the term.
I was prompted to write this piece because a recent editorial in a major Australian newspaper, attacking Edward Snowden as a coward, was brought to my attention. It was the last straw, you might say. I admit I haven’t read the editorial, and I can’t recall the newspaper, but really, you don’t need to read the detail – and I may well be convinced by the newspaper editor’s views of the implications of Snowden’s actions – to know that the application of the term ‘coward’ to Snowden’s leaking of classified information is just wrong, by the definition of terms.
This sort of thing should matter to those who respect language and its value as an effective communicative tool. By the bog-standard consensus definitions given above, we need to admit that the actions of Atta and Rajaratnam, for example, were courageous. As people in full possession of their faculties, as I assume they were, they would have had to overcome enormous fear and anxiety to perform their suicidal actions. Of course we can and should condemn their actions on a whole host of ethical grounds, but to call them cowardly doesn’t add anything to the ethical debate, it just muddies definitions (while allowing us to let off steam, to vent our indignation and disgust). It’s just name-calling.