the new ussr illustrated

assorted reflections from the urbane society for sceptical romantics

the anthropic principle lives on and on

The anthropic principle, the idea that the universe – and let’s not muddle up our heads with multiverses – appears to be tweaked just right, in a variety of ways, for the existence and flourishing of humans, has long been popular with the religious, those invested in the idea of human specialness, a specialness which evokes guided evolution, both in the biological and the cosmological sense. And, of course, God is our guide.

Wikipedia, God bless it, does an excellent job with the principle, introducing it straight off as the obvious fact that anyone able to ascertain the various parameters of the universe must necessarily be living in a universe, or a particular part of it, that enables her to do the ascertaining. In other words the human specialness mob have it arse backwards.

So I’ll happily refer all those questing to understand the anthropic principle, in its strong and weak forms, its proponents and critics, etc, to Wikipedia. I’ve been brought to reflect on it again by my reading of Stephen Jay Gould’s essay, ‘mind and supermind’, in his 1985 collection, The Flamingo’s Smile.

Yes, the anthropic principle, which many tend to think is a clever new tool for deists, invented by the very materialists who dismiss the idea of supernatural agency as unscientific, is an old idea – much more than 30 years old, because Gould was critiquing not only Freeman Dyson’s reflections on it in the eighties, but those of Alfred Russel Wallace more than a century ago, in his 1903 book Man’s Place in the Universe. Gould had good reason for comparing Dyson and Wallace; their speculations, almost a century apart, were based on vastly different understandings of the universe. It reminds us that our understanding of the universe, or that of the best cosmologists, continues to develop, and, I strongly suspect, will never be settled.

Theories and debates about our universe, or multiverse, its shape and properties, are more common, and fascinating, than ever, and accompanied by enough mathematics to make my brain bleed. The other day one of my regular emails from Huff Po science declared that maybe the universe didn’t have a beginning after all. This apparently from some scientists trying to grab attention in a pretty noisy field. I’ve only scanned the piece, which I would hardly be qualified to pass judgment on. But not long ago I read The Unknown Universe, a collection of essays from New Scientist magazine, dedicated to all ideas cosmological. I didn’t understand all of it of course, but genuine questions were raised about whether the universe is finite or infinite, about whether we really understand the time dimension, about how the laws that govern the universe came into being, and many other fundamental concepts. It’s interesting then to look back to more than a century ago, before Einstein, quantum mechanics, and space probes, and to reflect on the scientific understanding of the universe at that time.

A version of the universe, based on Lord Kelvin’s calculations, used by Wallace

In Wallace’s time (a rather vague term because the great scientist’s life spanned 90 years, which saw substantial developments in astronomy) the universe, though considered almost unimaginably massive, was calculated to be much smaller than today’s reckoning. According to a diagram in Man’s Place in the Universe, it ended a little outside the Milky Way galaxy, because we had no tools at the time to measure any further, though Lord Kelvin, the dominant figure in physics and astronomy in the late 19th century, made a number of dodgy calculations that were taken seriously at the time. In fact, Kelvin’s figures for the size of the universe, and for the age of the earth, though too small by orders of magnitude, were considered outrageously huge by most of his contemporaries; but they at least began to accustom the educated public to the idea of ginormity in space and time.

But size wasn’t of course the only thing that made the universe of that time so different from our own conceptions. The universe of Wallace’s imagination was stable, timeless and, to his mind, lifeless, apart of course from our planet. He doesn’t appear to have had any good argument for this, only improbability – and an odd kind of hope, that we are unique. This hope is revealed in a passage of his book where he goes off the scientific rails just a bit, in a paean to our gloriously unique humanity. A plurality of intelligent life-forms in the universe

… would imply that to produce the living soul in the marvellous and glorious body of man – man with his faculties, his aspirations, his powers for good and evil – that this was an easy matter which could be brought about anywhere, in any world. It would imply man is an animal and nothing more, is of no importance in the universe, needed no great preparations for his advent, only, perhaps, a second-rate demon, and a third or fourth-rate earth.

Wallace, though by no means Christian, was given to ‘spiritualism’, souls and the supernatural, all in relation to humans exclusively. That’s to say, he was wedded to ‘human specialness’ – somewhat surprisingly for the co-discoverer of the theory of evolution by wholly natural selection from random variation. This is the chain, it seems, that links him to modern clingers to the anthropic principle, such as William Lane Craig and his epigones, who must needs believe in a value-laden universe, with their god as the source of value, and we humans, platonically created as the feeble facsimiles of the godhead, struggling to achieve enlightenment in the form of closeness to the Creator, with its appropriate heavenly rewards. And so we have such typical WL Craigisms as ‘God is the best explanation of moral agents who apprehend necessary moral truths’, ‘God is the best explanation of why there are self-aware beings’ and ‘God is the best explanation of the discoverability of the universe [by humans of course]’. These best explanation ‘arguments’ can be added to ad nauseam, of course, for they’re all of a piece, and all connected to the Wallace quote above. We’re special, we must be special, we must be central to the creator’s plan, and our amazingness, our so-much-more-than-animalness, in spite of our many flaws, suggests a truly amazing creator, who made all this just for us.

That’s the hope, captured well by the great French biologist Jacques Monod when he wrote:

All religions, nearly all philosophies, and even a part of science testify to the unwearying, heroic effort of mankind desperately denying its contingency.

I think modern philosophy has largely moved on from desperate denialism, but Monod’s remarks certainly hold true for religions, past, present and future. Basically, the denial of our contingency is the central business of religion. It’s hardly surprising then that the relationship between religion and science is uneasy at best, and antagonistic at its heart. The multiverse could surely be described as religion’s worst nightmare. But that’s another story.

a couple of controversial subjects

OUR NUCLEAR FUTURE?

Our state government has, surprisingly, ordered a Royal Commission into nuclear power, which will bring on the usual controversy, but everything I’ve read on the science and energy front suggests that our energy future will rely on a mix of sources, including renewables, clean coal (if that’s not an oxymoron), nuclear energy and perhaps even fusion. Fukushima scared everyone, but it was an old reactor, built in the wrong place, and nuclear technology has advanced considerably since it was built. I hope to write more on this, probably on my much-neglected solutions ok? blog.

PNG WITCH-HUNTING GOES ON AND ON

VLAD SOKHIN These men call their gang “Dirty Dons 585” and admit to rapes and armed robberies in the Port Moresby area. They say two-thirds of their victims are women. Photo taken from The Global Mail

I was shocked to hear the other day about people (mostly women, but also children) being accused of witchcraft/sorcery in PNG, right on our doorstep, over the past couple of years, and hunted down and brutally slaughtered, often in bizarrely sexualised ways. Somehow this story has passed me by until now. The country, or part of it, seems to have been gripped by a witch craze, like those that broke out in Europe from time to time, especially in the seventeenth century, not really so long ago – crazes that fizzled out as mysteriously as they burst into being. In fact the USA’s Committee for Skeptical Inquiry was reporting on these horrors as far back as 2008, with some 50 victims claimed for that year. The article ended on an optimistic note:

Papua New Guinea is in dire need of skepticism, education, and legal reform. It appears that the latter is finally happening. These latest horrific killings, and no doubt the ensuing media outrage, have prompted the country’s Constitutional Review and Law Reform Commission to create new laws to prevent (or at least reduce) witchcraft-related deaths.

However, this 2013 article, from the sadly no longer extant Global Mail, indicates that the problem is far more deeply rooted and long-standing than first thought, and it’s mutating and shifting to different regions of the country. Brutality is on the increase, fuelled by drugs and alcohol, but above all by massive social dysfunction, with children being regularly indoctrinated in methods of public torture. Sanguma, or sorcerers, are usually blamed for any deaths, and gangs of unemployed men (and almost all the men of the region are unemployed) go hunting for them amongst the most marginalised and unprotected women in the community. The article makes for harrowing reading, going into some detail on the suffering of these women, and highlighting the intractability of the problem. There are heroes too, including Catholic priests and nuns working at the coal-face. Unfortunately, tossing around terms like skepticism, education and legal reform won’t cut it here. This is a problem of deep social malaise, suspicion, superstition, poverty and despair that will take generations to resolve, it seems.

Written by stewart henderson

February 12, 2015 at 6:19 pm

some reflections on Christianity in the 1630s

puritans off to benight the new world

The past is a foreign country: they do things differently there

L P Hartley, The Go-Between, 1953

You occasionally read that atheists or non-believers are having a hard time of it these days, and I’ve certainly encountered some Dawkins-haters and ‘arrogant atheist’ bashers, both in person and online. I’ve even had a go at the likes of Terry Eagleton, Melvyn Bragg and Howard Jacobson for their puerile arguments – which I’m really quite fond of doing. But the fact is that we atheists have never had it so good, and it’s getting better all the time.

This post is partly a response to one by the Friendly Atheist, in which he expresses skepticism about a report by the International Humanist and Ethical Union (IHEU) on the worldwide treatment of non-believers, but doesn’t really develop his argument. It’s also partly inspired by a book I’m reading, God’s Fury, England’s Fire: A New History of the English Civil Wars, by Michael Braddick, which is extraordinarily detailed and begins with a comprehensive scene-setting, describing the civil and ecclesiastical context in which ordinary lives were lived in England circa 380 years ago.

I’ve written before about taking the long view. We tend to be impatient, understandably, for our lives are short, and we’re keen to see worldwide transformation within its span, but I invite you to travel back in time to another country, our ‘mother country’, or mine at least, to see for yourself how foreign, and how hostile to non-belief, it was back then.

Essentially, there were no atheists in Britain in the 1630s, and the way Christianity was practiced was a hot political issue, central to most people’s lives. Sunday church attendance was compulsory, subject to government fines, but there was a plurality of positions within both Protestantism and the more or less outlawed Catholicism. Due to the horrific religious wars then raging in the Germanic regions, there was more than a whiff of the Last Days in the air. Parishes often took up collections for the distressed Protestants of Europe, and although the government of Charles I maintained an uneasy neutrality, many volunteers, especially from Scotland, went off to join the fighting on the continent.

Braddick’s book begins with an event that underlines the everyday religiosity of the era. In 1640, a Scottish army passed solemnly through Flodden, just south of the border with England. It wasn’t an invasion, though, it was more like a funeral procession. The Scots were engaging in a very public mourning of ‘the death of the Bible’. Trumpeters death-marched in front, followed by religious ministers bearing a Bible covered by a funeral shroud. After them came a number of elderly citizens, petitions in hand, and then the troops, their pikes trailing in the ground. Everyone was wearing black ribbons or other signs of mourning.

This was not quite an official Scottish army; it was an army of the Covenanters, essentially Calvinists or Presbyterians, defenders of the ‘true religion’, who were protesting against the imposition, in 1637, of a new prayer book upon their congregations. Considering the history of Scots-English warfare, this was a provocative incursion, but the Scots met with little resistance, and after a brief battle at Newburn, they marched into Newcastle, a major northern English town, unopposed.

To understand how this bizarre event could’ve occurred, we need to analyse the complex religious politics of Britain at a time when religion and politics were almost impossible to separate – as any analysis of the contemporaneous Thirty Years’ War would show. The fact is that many of the English were sympathetic to the Scots cause and becoming increasingly disgruntled at the government of Charles, the long proroguing of parliament, and the perceived turning away from the ‘true religion’ towards a more embellished form that resembled the dreaded ‘papism’.

England and Scotland were both governed by Charles I, a nominally Scots king who, since moving to London to join his father as a young child in 1604, had never been back to his native country. However, as is still the case today, the two countries perceived themselves as, and in fact were, quite distinct, with separate churches, laws, administration and institutions. The Covenanters were, in a sense, nationalists, though their attitude to Charles was, unsurprisingly, ambivalent. In a propaganda campaign preceding their march south, they generally made it clear that they had no quarrel with England (though some went further and hoped to ‘rescue’ England from religious error), but were acting to defend their religious liberty.

Charles and his advisers were naturally alarmed at this development, and a proclamation was issued describing the Covenanters as ‘rebels and traitors’. At the same time it was felt that Charles’ physical presence, if not in Scotland at least in the north of England, was needed to stop the traitorous rot. Charles’ attitude was that if he was to enter ‘foreign territory’, it had better be at the head of an army. However, to raise and arm a military force required money, which required taxation – usually sanctioned by parliament. It also required the goodwill of the people, from whom a force would have to be raised, and here’s where politics, bureaucratic administration and religious attitudes could combine to create a dangerous brew, a brew made more poisonous by the king’s unbending temperament.

Charles was married to a Catholic, the not-so-popular Henrietta Maria of France. Henrietta Maria’s Catholicism was devout, public and extravagant. The famous architect Inigo Jones designed a chapel for her in a style that outraged the puritans, and she held her own court at which Catholics were welcomed and protected. Charles’ own tastes, too, were hardly in line with the move towards austere Protestantism that was sweeping the country (though there were plenty who resisted it). Charles had in fact been moving in the opposite direction since his accession to the throne in the 1620s, as had his father James I. It wasn’t that they were about to embrace Catholicism, but they were reacting against strict Calvinism, in terms of outward display if not in terms of theology. But in many ways it was the theology of Calvinism – not only the weird doctrine of predestinarianism but the ideas of justification by faith alone, and of a direct, unmediated connection with the deity – that attracted the populace, to varying degrees, though it never caught on as strongly in England as in Scotland. The term ‘popery’, which didn’t always refer in an uncomplicated way to Catholicism, was increasingly used to indicate suspect if not heretical tendencies.

A key figure in all this turmoil was William Laud, the most influential religious authority in England. He was the Bishop of London from 1628, and became Archbishop of Canterbury in 1633. It was Laud who was largely responsible for issuing the new prayer book in 1637, along with many other reforms in line with Charles’ more formal approach to Protestant religious practice, an approach that later became known as High Church Anglicanism. But so much was at stake with even the mildest reforms, and by the end of the thirties, a wave of puritan hysteria was gripping the country, which created an equal and opposite reaction. Laud was arrested and imprisoned in the Tower in 1641, and executed in 1645, by which time the civil war was in full swing, with the tide having turned decisively against Charles.

However, I don’t want to get into the details of the religious factionalism and strife of those days here; I simply want to emphasise just how religious – and barbaric – those days were. The civil war was horrifically brutal, and as the primary documents reveal, it was accompanied by wagonloads of biblical rhetoric and god-invocations on both sides. The royalists’ principal argument was the king’s divine right to rule, while parliament was always referred to as ‘God’s own’. It was theocracy in turmoil, though many of the points of discontent were decidedly worldly, such as taxation and what we would now call conscription – forced service in the king’s military. Besides monitoring of church attendance there were the ‘Holy Days of Obligation’ such as Ascension Day and the Rogation Days surrounding it, when the bounds of the parish were marked out on foot – and sometimes by boat if it was a seaside parish – so that jurisdictions were imprinted in the minds of God’s subjects, for in those days the local church had control and responsibility over the care of the poor, elderly and infirm. Certainly in those days the church acted as a kind of social glue, keeping communities together, but it was never as idyllic and harmonious as it sounds. Rogation processions were often proscribed or limited to ‘respectable citizens’ because of the drunken revelry they attracted, and there were always the political dissensions, usually related to some church leader or other being too popish or too puritan. Just like today, it was a world of noisy, opinionated, half-informed people, some of them very clever and frustrated, who demanded to be heard.

Witchcraft, though, was very much a thing in this period. Recently a workmate was expressing understandable disgust at the brutish burning of infidels or traitors or whatever by the Sunni invaders of northern Iraq – and she might also have mentioned the brutish slaughter of women and children as ‘witches’ on our own doorstep in Papua New Guinea. When I mentioned that our culture, too, used to burn witches, the response was predictable – ‘but that was in the Middle Ages’. We like to push these atrocities back in time as far as we can get away with. In fact, the largest witch-hunt in English history occurred in East Anglia in 1645, when 36 women were put on trial, 19 were executed and only one was acquitted. Like an earthquake, this mass trial caused a number of aftershocks throughout the country, with some 250 women tried and more than 100 executed. A large proportion of all the witch-killings in England occurred in this one year. These women were hanged rather than burnt, but burning at the stake – the punishment reserved for heresy, an indication of how theocratic the state was – wasn’t abandoned until 1676, under Charles II.

We should be grateful for having emerged from the theocratic thinking of earlier centuries, and we can look around at theocratic states today, or just at those with theocratic mindsets, to see how damaging they can be. To have gods on your side is to be absolutely right, fighting against or punishing the absolutely wrong. In this superhuman world with its superhuman stakes, the mere human is a cypher to be trampled in the dust, or burned, beheaded, sacrificed on the altar of Divine Justice. The past, our past, is another country, but we need to visit it from time to time, and examine it unflinchingly, though it’s sometimes hard not to shudder.

Written by stewart henderson

February 8, 2015 at 11:24 pm

a change of focus, and Charlie Darwin’s teenage fantasies

He’s just so moi, though I’m more rough than ruff

“bashful, insolent; chaste, lustful; prating, silent; laborious, delicate; ingenious, heavy; melancholic, pleasant; lying, true; knowing, ignorant; liberal, covetous, and prodigal”

Michel de Montaigne, ‘Myself’

I was sitting at my computer with the ABC’s ‘Rage’ on in the background when on came a video by an artist who’s taken the moniker ‘Montaigne’, and how could I not be attracted? Good luck to her. I first stumbled on the original Montaigne decades ago, and like thousands before and since, I was fairly blown away. He’s been an inspiration and a touchstone ever since, and to think I’m now approaching his age at his death. One thing he wrote has always stayed with me, and I’ll misquote in the Montaignian tradition, being more concerned with the idea than the actual words – something like ‘I write not to learn about myself, but to create myself’. This raises the importance of writing, of written language, to an almost ridiculous degree, and I feel it in myself, as I’ve sacrificed much to my writing, such as it is. Certainly relationships, friendships, career – but I was always bad at those. All I have to show for it is a body of work, much of it lost, certainly before the blogosphere came along, the blogosphere that retains everything, for better or worse.

The New Yorker captures the appeal of Montaigne well. He wasn’t an autobiographical writer, in that he didn’t dwell on the details of his own life, but as a skeptic who trusted little beyond his own thoughts, he provided a fascinating insight into a liberal and wide-ranging thinker of an earlier era, and he liberated the minds of those who came later and were inspired by his example, including moi, some 400 years on. So, I’d like to make my writings a bit more Montaignian in future (I’ve been thinking about it for a while).

I’ve been focussing mainly on science heretofore, but there are hundreds of bloggers better qualified to write about science than me. My excuse, now and in the future, is that I’m keen to educate myself, and science will continue to play a major part, as I’m a thorough-going materialist and endlessly interested in our expanding technological achievements and our increasing knowledge. But I want to be a little more random in my focus, to reflect on implications, trends, and my experience of being in this rapidly changing world. We’ll see how it pans out.

what’s in that noddle?

Reading the celebrated biography of Charles Darwin by Adrian Desmond and James Moore, I was intrigued by some remarks in a letter to his cousin and friend, William Darwin Fox, referring to the ‘paradise’ of Fanny and Sarah Owen’s bedrooms. This was 1828, and the 19-year-old Darwin, already an avid and accomplished beetle collector and on his way to becoming a self-made naturalist, was contemplating ‘divinity’ studies at Cambridge, having flunked out of medicine in Edinburgh. Fanny was his girlfriend at the time. These bedrooms were

‘a paradise… about which, like any good Mussulman I am always thinking… (only here) the black-eyed Houris… do not merely exist in Mahomets noddle, but are real substantial flesh and blood.’

It’s not so much the sensual avidity shown by the 19-year-old that intrigues me here, but the religious attitude (and the fascinating reference to Islam). For someone about to embark on a godly career – though with the definite intention of using it to further his passion for naturalism – such a cavalier treatment of religion, albeit the wrong one, as ‘inside the noddle’, is quite revealing. But then Darwin’s immediate family, or the males at least, were all quasi-freethinkers, unlike his Wedgwood cousins. Darwin never took the idea of Holy Orders seriously.

Written by stewart henderson

February 8, 2015 at 10:53 am

the low-down on antioxidants

forget ORAC, just eat them coz they look so yummy

I’m going to risk alienating other colleagues here, but this post follows on from the last set in being inspired by work conversations, this time about plants and antioxidants. A plant was brought in by a staffer who apparently dabbles in naturopathy on the side, and its antioxidant properties were extolled. What do I know about antioxidants? Very little, except that some years ago red wine and various berries were being sold to us as containing life-enhancing quantities of these good molecules or whatever they are. It had something to do with binding to and neutralising ‘bad’ free radicals in our bodies. Of course I had no idea what free radicals were. Then later, via the Skeptics’ Guide to the Universe and other sources, I heard that the experts were back-tracking heavily on these life-enhancing properties.

So, what with being told in the staff room that antioxidants could cure cancer or some such thing, while elsewhere hearing that they’ve been wildly over-hyped, I’ve been considering for some time that I should do a post on these beasties, for dummies like me.

As usual, the first thing that greets me when I attempt to research this kind of thing is the pile of propagandist rubbish you have to wade through in order to find bona fide, science-based info sites. The good thing is that, over time, you get quicker at dodging bullshit.

I immediately homed in on a link saying ‘beware of antioxidant claims’, as being right up my alley. It took me to the ‘Berkeley Wellness‘ site out of the University of California. There I’m given the first definition – that an antioxidant is ‘a substance that helps mop up cell-damaging substances known as free radicals’, which leaves me hardly the wiser. I’m also told that selling products with claimed antioxidant properties is real big business in the US.

I’m also introduced to the ORAC (oxygen radical absorbance capacity) concept. My neighbour has an ORAC diet book and I’ve wondered what it meant. It seems that in the USA there’s a trend towards advertising the ‘antioxidant power’ of products based on ORAC scores – 7,300 ORAC units per 100 grams for a certain cereal, for example, or 6,000 for a pack of corn chips. Are these numbers reliable, and what do they mean exactly?

Not much, apparently. The fact is that antioxidant interactions in the body are extremely complex and little understood. ORAC is only one of a number of different antioxidant tests used by different scientists in different labs, and even when they use the same test, such as ORAC, different labs come up with widely different results. Let me quote the Berkeley site directly:

Moreover, ORAC and other tests measure antioxidant capacity of substances only in test tubes. How well the antioxidants suppress oxidation and protect against free radicals in people is pretty much anyone’s guess.

A lot can happen to antioxidants once a food is digested and metabolized in the body, and little is known about their interactions. What has high antioxidant activity in a test tube may end up having little or no effect in the body. Preliminary research has found that when people eat high-ORAC foods, their blood antioxidant levels rise, but such results still don’t prove that this translates into actual health benefits.
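To see one trap with these numbers before we even get to the test-tube problem, here’s a toy sketch in Python – the cereal and corn-chip figures are the advertised ones quoted above, while the blueberry figure and all the serving sizes are invented for illustration – showing how a per-100-gram ORAC ranking can flip once you account for how much of a food anyone actually eats:

```python
# Toy illustration: per-100g ORAC scores vs per-serving scores.
# The serving sizes (and the blueberry figure) are made up.

foods = {
    # name: (ORAC units per 100 g, typical serving in grams)
    "cereal": (7300, 40),
    "corn chips": (6000, 30),
    "blueberries": (4700, 150),
}

for name, (orac_per_100g, serving_g) in foods.items():
    per_serving = orac_per_100g * serving_g / 100
    print(f"{name}: {orac_per_100g} ORAC/100g -> {per_serving:.0f} per serving")

# The per-100g ranking (cereal > chips > berries) reverses per serving
# (berries > cereal > chips) -- and neither number tells you anything
# about what happens after digestion.
```

And that’s before the lab-to-lab variation, let alone the test-tube-to-body problem the Berkeley site describes.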

The article ends with the usual smart advice. Choose a balanced diet, don’t eat too much, not too heavy on the meat, and with a fair quantity of whole grains, nuts and legumes, fresh fruit and veg, and you’ll get all the antioxidants and other nutrients you need. Actually, this article from I fucking love science, which gathers together expert advice on avoiding cancers, covers it all – keep your weight down, keep to the above-mentioned diet, exercise regularly in moderation, watch the sugar and salt intake and usually she’ll be right, whether it’s cancer, heart disease or whatever.

Not much more to say, really. But no doubt a lot more can be said about the science, and I’ll say just a bit about it here. Antioxidants, as the name suggests, are compounds that reduce oxidation in the body. Free radicals – unstable molecules – are produced when oxygen is metabolised. Free radicals remove electrons from other molecules, damaging DNA and other cellular material. They’re necessary for the body to function, but an overload can cause serious problems, and that’s where a common-sense diet comes in – though there are other factors which can bring about an overload, including stress, pollution, smoking (pollution by another name), sunlight and alcohol. Everything counts in large amounts.

Antioxidants come in many varieties. Nutrient antioxidants found in a variety of foods include vitamins A, C and E, as well as copper, zinc and selenium. Non-nutrient antioxidants, believed to have even greater effects (raising antioxidant levels), include phytochemicals such as lycopene in tomatoes, and anthocyanins, found in blueberries and cranberries. I can’t find any clear info on the difference between non-nutrient and nutrient antioxidants, and it doesn’t appear to be important. There is, of course, a lot of ongoing research on all of this, and it would be easy to get obsessed with it all, raising your stress levels and sending those free radicals zinging through your body in legions. And if that’s what you want, why not buy this book, for a small fortune, and find out all that we currently know about how frying food affects its nutritive value, with particular attention to antioxidants. Of course, by the time you’ve finished it, it’ll likely be out of date.

There’s a ton of material out there on antioxidants, but Wikipedia is an excellent place to start, and to finish. One key piece of advice, in this as with other matters of diet, is – don’t rely on supplements when you can simply improve your diet (recent large-scale trials have shown they don’t work anyway). Get what you need from real food, as far as you can.

Written by stewart henderson

February 1, 2015 at 9:52 pm

on vaccines and type 1 diabetes, part 3 – causes

[graph: how falling immunisation rates put other children at risk – referred to below]

As mentioned earlier, it’s not precisely known what causes diabetes type 1, more commonly known as childhood diabetes. There’s a genetic component, but it’s clearly environmental factors that are leading to the recent apparently rapid rise in this type.

I use the word ‘apparently’ because it’s actually hard to put figures on this rise, due to a paucity of historical records. This very thorough and informative article, already 12 years old, from the ADA (American Diabetes Association – an excellent website for everything to do with the evidence and the science on diabetes), tries to gather together the patchy worldwide data to cover the changing demography and the evolving disease process. At the beginning of the 20th century childhood diabetes was rare but commonly fatal (before insulin), and even by mid-century no clear rise in childhood incidence had been recorded. To quote the article, ‘even by 1980 only a handful of studies were available, the “hot spots” in Finland and Sardinia were unrecognized, and no adequate estimates were available for 90% of the world’s population’. Blood glucose testing in the early 20th century was far from being as simple a matter as it is today, and the extent of undiagnosed cases is hard to determine.

There’s no doubt, however, that in those countries keeping reliable data, such as Norway and Denmark, a marked upturn in incidence occurred from the mid 20th century, followed by a levelling out from the 1980s. Studies from Sardinia and the Netherlands have found a similar pattern, but in Finland the increase from mid-century has been quite linear, with no levelling out. Data from other northern European countries and the USA, though less comprehensive, show a similar mid-century upturn. Canada now (or as of 12 years ago) has the third highest rate of childhood diabetes in the world. The trend seems to have been that many of the more developed countries first showed a sharp increase, followed by something of a slow-down, and then other countries, such as those of central and eastern Europe and the Middle East, ‘played catch-up’. Kuwait, for example, had reached seventh in the world at the time of the article, confounding many beliefs about the extent of the disease’s genetic component.

The article is admirably careful not to rush to conclusions about causes. It may be that a number of environmental factors have converged to bring about the rise in incidence. For example, it’s known that rapid growth in early childhood increases the risk, and children do in fact grow faster on average than they did a century ago. Obesity may also be a factor. Baffled researchers naturally look for something new that has entered the childhood environment, either in terms of nutrition (e.g. increased exposure to cow’s milk) or infection (enteroviruses). Neither of these possibilities fits the pattern of incidence in any obvious way; there may be subtle changes in antigenicity or exposure at different stages of development, but there’s scant evidence of these.

Another line of inquiry is the possible loss of protective factors, as part of the somewhat vague but popular ‘hygiene hypothesis’, which argues that lack of early immune system stimulation creates greater susceptibility, particularly to allergies and asthma, but perhaps also to childhood diabetes and other conditions. The ADA article has this comment:

Epidemiological evidence for the hygiene hypothesis is inconsistent for childhood type 1 diabetes, but it is notorious that the NOD mouse is less likely to develop diabetes in the presence of pinworms and other infections. Pinworm infestation was common in the childhood populations of Europe and North America around the mid-century, and this potentially protective exposure has largely been lost since that time.

The NOD (non-obese diabetic) strain of mice was developed in Japan as an animal model for type 1 diabetes.

The bottom line from all this is that more research and monitoring of the disease needs to be done. Type 1 diabetes is a complex challenge to our understanding of the human immune system, and of the infinitely varied feedback loops between genetics and environment, requiring perhaps a broader questioning and analysis than has been applied thus far. Again I’ll quote, finally, from the ADA article:

In conclusion, the quest to understand type 1 diabetes has largely been driven by the mechanistic approach, which has striven to characterize the disease in terms of defining molecular abnormalities. This goal has proved elusive. Given the complexity and diversity of biological systems, it seems increasingly likely that the mechanistic approach will need to be supplemented by a more ecological concept of balanced competition between complex biological processes, a dynamic interaction with more than one possible outcome. The traditional antithesis between genes and environment assumed that genes were hardwired into the phenotype, whereas growth and early adaptation to the environment are now viewed as an interactive process in which early experience of the outside world is fed back to determine lasting patterns of gene expression. The biological signature of each individual thus derives from a dynamic process of adaptation, a process with a history.

However, none of this appears to provide any backing for those who claim that a vaccine is responsible for the increased prevalence of the condition. So let’s wade into this specific claim.

It seems the principal claim of the anti-vaxxers is that vaccines suppress our natural immune system. This is the basic claim, for example, of Dr Joseph Mercola, a prominent and heavily self-advertising anti-vaxxer whose various sites happen to come up first when you combine and google key terms such as ‘vaccination’ and ‘natural immunity’. Mercola’s railings against vaccination, microwaves, sunscreens and HIV (it’s harmless, he says) have garnered him quite a following among the non compos mentis, but you should be chary of leaping in horror from his grasp into the waiting arms of the next site on the list, that of the Vaccination Awareness Network (VAN), another Yank site chock-full of BS about the uselessness of and the harm caused by every vaccine ever developed, some of it impressively technical-sounding, but accompanied by ‘research links’ that either go nowhere or lead to tabloid news reports. Watch out too for the National Vaccine Information Center (NVIC), another anti-vax front, full of heart-rending anecdotes which omit everything required to make an informed assessment. The best may seem to lack conviction, being skeptics and all, but it’s surely true that the worst are full of passionate intensity.

There is no evidence that the small volumes of targeted antigens introduced into our bodies by vaccines have any negative impact on our highly complex immune system. This would be well-nigh impossible to test for, and the best we might do is look for a correlation between vaccination and increased (or decreased) levels of disease incidence. No such correlation has been found between the MMR vaccine and diabetes, though this Italian paper did find a statistically significant association between the incidence of mumps and rubella viral infections and the onset of type 1 diabetes. Another paper from Finland found that the incidence of type 1 diabetes levelled out after the introduction of the MMR vaccine there, and that the presence of mumps antibodies was reduced in diabetic children after vaccination. This is a mixed result, but as yet there haven’t been any follow-up studies.
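For a sense of what such a correlation hunt actually involves, here’s a minimal sketch in Python, with invented cohort numbers (not drawn from any of the studies mentioned), of the basic epidemiological comparison: incidence of type 1 diabetes in vaccinated versus unvaccinated children, expressed as a rate ratio with a rough 95% confidence interval:

```python
# Minimal sketch of a cohort comparison. All numbers are hypothetical.
import math

# (cases of type 1 diabetes, person-years of follow-up)
vaccinated = (120, 500_000)
unvaccinated = (25, 100_000)

def rate(cases, person_years):
    return cases / person_years

rr = rate(*vaccinated) / rate(*unvaccinated)

# approximate 95% CI for a rate ratio, computed on the log scale
se = math.sqrt(1 / vaccinated[0] + 1 / unvaccinated[0])
low = math.exp(math.log(rr) - 1.96 * se)
high = math.exp(math.log(rr) + 1.96 * se)

print(f"rate ratio = {rr:.2f}, 95% CI ({low:.2f}, {high:.2f})")
# Here: rate ratio ~0.96, CI roughly (0.62, 1.48). A CI straddling 1.0
# means no detectable association -- the recurring result of the large
# studies on vaccines and diabetes.
```

Real studies, of course, adjust for age, genetics and a dozen confounders; this just shows the shape of the question being asked.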

To conclude, there is just no substantive evidence of any kind to justify all the hyperventilating.

But to return to the conversation with colleagues that set off this bit of exploration, it concluded rather blandly with the claim that, ‘yes of course vaccinations have done more good than harm, but maybe the MMR vaccine isn’t so necessary’. One colleague took a ‘neutral’ stance. ‘I know kids that haven’t been vaccinated, and they’ve come to no harm, and I know kids that have, and they’ve come to no harm either. And measles and mumps, they’re everyday diseases, and relatively harmless, it’s probably not such a bad thing to contract them…’

But this is a false neutrality. Firstly, when large numbers of parents choose not to immunise their kids, it puts other kids at risk, as the graph at the top shows. And secondly, these are not harmless diseases. Take measles. While writing this, I had a memory of someone I worked with over twenty years ago. He had great thick lenses in his glasses. I wear glasses too, and we talked about our eye defects. ‘I had pretty well perfect vision as a kid,’ he told me, ‘and I always sat at the back of the class. Then I got measles and was off school for a fortnight. When I went back, sat at the back, couldn’t see a thing. Got my eyes tested and found out they were shot to buggery.’

Anecdotal evidence! Still, it’s well known that blindness and serious eye defects are a major complication of measles, which remains a killer disease in many of the poorest countries in the world. In fact, measles is the single leading cause of childhood blindness in those countries, with an estimated 15,000 to 60,000 cases a year. So pat yourself on the back for living in a rich country.

In 2013, some 145,700 people died from measles – mostly young children. In 1980, before vaccination was widely implemented, an estimated 2.6 million died annually from measles, according to the WHO.
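Taken at face value, those WHO figures represent a decline of roughly 94% – (2,600,000 − 145,700) / 2,600,000 ≈ 0.944 – over a period in which the world’s population grew substantially.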

Faced with such knowledge, claims to ‘neutrality’ are hardly forgivable.

Written by stewart henderson

January 30, 2015 at 6:02 pm

on vaccines and diabetes [type 1], part 2

Okay, before I look at the claimed dangers of vaccines in general, I’ll spend some time on diabetes, which, as mentioned, I know precious little about.

Diabetes mellitus, to use its full name, is a metabolic disease which causes blood sugar levels to be abnormally high. Some of the immediate symptoms of prolonged high blood sugar include frequent urination and feelings of hunger and thirst, but the disease can lead to many serious complications including kidney failure, heart disease and strokes. Diabetes is generally divided into type 1, in which the pancreas fails to produce enough insulin, and type 2, in which the body’s cells fail to process the insulin produced. Type 2, which accounts for some 90% of cases, can itself progress to the point of requiring insulin treatment. There’s a third recognised type called gestational diabetes, a sudden-onset form occurring in pregnant women, which usually disappears after giving birth. As I’m not sure whether the claim about the MMR vaccine was related to type 1 or type 2, I’ll examine both.

type 1 diabetes and vaccination

A factsheet from Australia’s National Centre for Immunisation Research and Surveillance (NCIRS), a joint service of Westmead Hospital and Sydney University, and part of the World Health Organisation’s Vaccine Safety Net system of public information websites, summarises type 1 diabetes thus:

This is thought to be an autoimmune disease, where the immune system malfunctions to cause destruction of the insulin-producing cells in the pancreas. This is the usual type of diabetes in children, and requires treatment with insulin injections. Without insulin, people with Type 1 diabetes will die. Diabetes is thought to be due to an interaction between inherited and environmental factors, not all of which have been identified.

It goes on to describe an ‘unexplained’ increase in cases in Australia and many other (but not all) countries. There are regional variations in rates of increase, with higher rates in Northern European countries, lower in Asia and Africa, probably due to genetic factors. A number of environmental factors that may also contribute to the incidence of the disease have been studied, including breast feeding, infections, immunisation, nitrates and vitamin D. Breast feeding slightly reduces the risk of contracting diabetes, and drinking cow’s milk may increase the risk. As to infections, few have been proven to cause diabetes – though one of them, interestingly, is mumps. Diabetes incidence is affected by seasonal variation, and it’s likely that seasonal viral infections may trigger the onset of diabetes in some people. It’s also possible that some strong medications may compromise the immune system and so cause or promote the onset of the disease. High levels of nitrates in drinking water have been shown to increase the incidence.

The factsheet is entitled ‘Diabetes and vaccines’, so it deals head-on with the vaccination issue, and its conclusion is uncompromising: ‘No, there is no evidence that vaccines cause diabetes’. It cites 15 separate studies in its reading list, three of which are authored or co-authored by a Dr John Classen. These three are the only studies to suggest a link. Here’s the NCIRS response:

There have been a number of studies which have searched for links between diabetes and immunisations. The only studies suggesting a possible increase in risk have come from Dr John B Classen. He found that if the first vaccination in children is performed after 2 months of age, there is an increased risk of diabetes. His laboratory study in animals also found that certain vaccines, if given at birth, actually decrease the risk of diabetes. This study was based on experiments using anthrax vaccine, which is very rarely used in children or adults. Dr Classen also compared diabetes rates with vaccination schedules in different countries, and interpreted his results as meaning that vaccination causes an increased risk of diabetes. This has been criticised because the comparison between countries included vaccines which are no longer used or used rarely, such as smallpox and the tuberculosis vaccine (BCG).

The study also failed to consider many reasons other than vaccination which could influence rates of diabetes in different countries. Later, in 2002, Dr Classen suggested that vaccination of Finnish children with Hib vaccine caused clusters of diabetes 3 years later, and that his experiments in mice confirmed this association.

Other researchers who have studied the issue have not verified Dr Classen’s findings. Two large population-based American studies failed to support an association between any of the childhood vaccines and an increased risk of diabetes in the 10 years after vaccination. The highly respected international Cochrane Collaboration reviewed all the available studies and did not find an increased risk of diabetes associated with vaccination.

Dr Classen, it turns out, is an established anti-vaxxer who has more recently tried to prove a link between vaccines and autism.

I should point out also that the above factsheet, which is a few years old, doesn’t include a more recent study, on a very large scale, which showed a significant decrease in the incidence of type 1 diabetes with various vaccinations, including MMR.

Classen, though, wasn’t looking at the MMR vaccine; his claims were about the Hib vaccine, which prevents invasive disease caused by the Haemophilus influenzae type b bacterium. It also significantly reduces the incidence of early childhood meningitis. The NCIRS factsheet doesn’t even mention MMR, stating that the vaccines being debated are Hib, BCG (for tuberculosis) and hepatitis B.

The Vaccine Education Centre at the Children’s Hospital of Philadelphia (whose director, Dr Paul Offit, is one of the world’s leading immunologists and experts on vaccines) cites a long list of studies – have a look yourself – which together find no evidence of a causal connection between diabetes (mostly type 1) and various vaccines. I’ve yet to find any published studies, even poorly conducted ones, that claim a specific negative connection between the MMR vaccine and diabetes. If anybody out there can point me to such a study, I’d be grateful.

So, while I wait for someone to get back to me on this (ho, ho), I’ll explore what immunologists and epidemiologists are saying about the rise of type 1 diabetes in recent decades in my next piece.

Written by stewart henderson

January 23, 2015 at 5:11 pm
