Thursday, August 07, 2014

Calvino's culturomics

Italo Calvino’s If On a Winter’s Night a Traveller is one of the finest and funniest meditations on writing that I’ve ever read. It also contains a glorious pre-emptive critique of what began as Zipf’s law and is now called culturomics: the statistical mining of vast bodies of text for word frequencies, trends and stylistic features. What is so nice about it (apart from the wit) is that Calvino seems to recognize that this approach is not without validity (and I certainly think it is not), while at the same time commenting on the gulf that separates this clinical enumeration from the true craft of writing – and for that matter, of reading. I am going to quote the passage in full – I don’t know what copyright law might have to say about that, but I am trusting to the fact that anyone familiar with Calvino’s book would be deterred from trying to enforce ownership of the text by the baroque level of irony that would entail.
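(For anyone who wants to try Lotaria’s method for themselves, the whole business is now a few lines of code. Here is a minimal sketch in Python – the filename is a placeholder, and any plain-text novel will do – of the kind of frequency listing she describes below.)

    # A minimal sketch of Lotaria-style "electronic reading": list a novel's
    # words by frequency. "novel.txt" is a placeholder for any plain-text file.
    import re
    from collections import Counter

    with open("novel.txt", encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())

    counts = Counter(words)

    # Words of middling frequency (Lotaria's "repeated about twenty times")
    print(sorted(w for w, n in counts.items() if n == 19))

    # Words used only once ("though no less important for that")
    print(sorted(w for w, n in counts.items() if n == 1)[:15])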

__________________________________________________________

[From Vintage edition 1998, translated by William Weaver]

I asked Lotaria if she has already read some books of mine that I lent her. She said no, because here she doesn’t have a computer at her disposal.

She explained to me that a suitably programmed computer can read a novel in a few minutes and record the list of all the words contained in the text, in order of frequency. “That way I can have an already completed reading at hand,” Lotaria says, “with an incalculable saving of time. What is the reading of a text, in fact, except the recording of certain thematic recurrences, certain insistences of forms and meanings? An electronic reading supplies me with a list of the frequencies, which I have only to glance at to form an idea of the problems the book suggests to my critical study. Naturally, at the highest frequencies the list records countless articles, pronouns, particles, but I don’t pay them any attention. I head straight for the words richest in meaning; they can give me a fairly precise notion of the book.”

Lotaria brought me some novels electronically transcribed, in the form of words listed in the order of their frequency. “In a novel of fifty to a hundred thousand words,” she said to me, “I advise you to observe immediately the words that are repeated about twenty times. Look here. Words that appear nineteen times:
“blood, cartridge belt, commander, do, have, immediately, it, life, seen, sentry, shots, spider, teeth, together, your…”
“Words that appear eighteen times:
“boys, cap, come, dead, eat, enough, evening, French, go, handsome, new, passes, period, potatoes, those, until…”

“Don’t you already have a clear idea what it’s about?” Lotaria says. “There’s no question: it’s a war novel, all actions, brisk writing, with a certain underlying violence. The narration is entirely on the surface, I would say; but to make sure, it’s always a good idea to take a look at the list of words used only once, though no less important for that. Take this sequence, for example:
“underarm, underbrush, undercover, underdog, underfed, underfoot, undergo, undergraduate, underground, undergrowth, underhand, underprivileged, undershirt, underwear, underweight…”

“No, the book isn’t completely superficial, as it seemed. There must be something hidden; I can direct my research along these lines.”

Lotaria shows me another series of lists. “This is an entirely different novel. It’s immediately obvious. Look at the words that recur about fifty times:
“had, his, husband, little, Riccardo (51) answered, been, before, has, station, what (48) all, barely, bedroom, Mario, some, Times (47) morning, seemed, went, whom (46) should (45) hand, listen, until, were (43) Cecilia, Delaia, evening, girl, hands, six, who, years (42) almost, alone, could, man returned, window (41) me, wanted (40) life (39)"

“What do you think of that? An intimatist narration, subtle feelings, understated, a humble setting, everyday life in the provinces … As a confirmation, we’ll take a sample of words used a single time:
“chilled, deceived, downward, engineer, enlargement, fattening, ingenious, ingenuous, injustice, jealous, kneeling, swallow, swallowed, swallowing…"

“So we already have an idea of the atmosphere, the moods, the social background… We can go on to a third book:
“according, account, body, especially, God, hair, money, times, went (29) evening, flour, food, rain, reason, somebody, stay, Vincenzo, wine (38) death, eggs, green, hers, legs, sweet, therefore (36) black, bosom, children, day, even, ha, head, machine, make, remained, stays, stuffs, white, would (35)"

“Here I would say we’re dealing with a full-blooded story, violent, everything concrete, a bit brusque, with a direct sensuality, no refinement, popular eroticism. But here again, let’s go on to the list of words with a frequency of one. Look, for example:
“ashamed, shame, shamed, shameful, shameless, shames, shaming, vegetables, verify, vermouth, virgins…"

“You see? A guilt complex, pure and simple! A valuable indication: the critical inquiry can start with that, establish some working hypothesis…What did I tell you? Isn’t this a quick, effective system?”

The idea that Lotaria reads my books in this way creates some problems for me. Now, every time I write a word, I see it spun around by the electronic brain, ranked according to its frequency, next to other words whose identity I cannot know, and so I wonder how many times I have used it, I feel the whole responsibility of writing weigh on those isolated syllables, I try to imagine what conclusions can be drawn from the fact that I have used this word once or fifty times. Maybe it would be better for me to erase it…But whatever other word I try to use seems unable to withstand the test…Perhaps instead of a book I could write lists of words, in alphabetical order, an avalanche of isolated words which expresses that truth I still do not know, and from which the computer, reversing its program, could construct the book, my book.

On the side of the angels


Here’s my take on Dürer’s Melencolia I on its 500th anniversary, published in Nature this week.

________________________________________________________

Albrecht Dürer’s engraving Melencolia I, produced 500 years ago, seems an open invitation to the cryptologist. Packed with occult symbolism from alchemy, astrology, mathematics and medicine, it promises hidden messages and recondite meanings. What it really tells us, however, is that Dürer was a philosopher-artist of the same stamp as Leonardo da Vinci, immersed in the intellectual currents of his time. In the words of art historian John Gage, Melencolia I is “almost an anthology of alchemical ideas about the structure of matter and the role of time” [1].

Dürer’s brooding angel is surrounded by the instruments of the proto-scientist: a balance, an hourglass, measuring calipers, a crucible on a blazing fire. Here too is numerological symbolism in the “magic square” of the integers 1-16, the rows, columns and main diagonals of which all add up to 34: a common emblem of both folk and philosophical magic. Here is the astrological portent of a comet, streaming across a sky in which an improbable rainbow arches, a symbol of the colour-changing processes of the alchemical route to the philosopher’s stone. And here is the title itself: melancholy, associated in ancient medicine with black bile, the same colour as the material with which the alchemist’s Great Work to make gold was supposed to begin.
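(The square’s arithmetic is easily verified. Here is a quick sketch in Python – purely illustrative – using the arrangement Dürer engraved, in which the middle of the bottom row also works in the date of the print, 1514.)

    # Dürer's magic square from Melencolia I: every row, column and main
    # diagonal sums to 34; the middle of the bottom row gives the date, 1514.
    square = [
        [16,  3,  2, 13],
        [ 5, 10, 11,  8],
        [ 9,  6,  7, 12],
        [ 4, 15, 14,  1],
    ]

    rows  = [sum(r) for r in square]
    cols  = [sum(c) for c in zip(*square)]
    diags = [sum(square[i][i] for i in range(4)),
             sum(square[i][3 - i] for i in range(4))]

    assert set(rows + cols + diags) == {34}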

But why the tools of the craftsman – the woodworking implements in the foreground, the polygonal block of stone awaiting the sculptor’s hammer and chisel? Why the tormented, introspective eyes of the androgynous angel?

Melencolia I is part of a trio of complex engravings on copper plate that Dürer made in 1513-14. Known as the Master Engravings, they are considered collectively to raise this new art to an unprecedented standard of technical skill and psychological depth. This cluttered, virtuosic image is widely thought to represent a portrait of Dürer’s own artistic spirit. Melancholy, often considered the least desirable of the four classical humours then believed to govern health and medicine, was traditionally associated with insanity. But during the Renaissance it was ‘reinvented’ as the humour of the artistic temperament, originating the link popularly asserted between madness and creative genius. The German physician and writer Cornelius Agrippa, whose influential Occult Philosophy (widely circulated in manuscript form from 1510) Dürer is almost certain to have read, claimed that “celestial spirits” were apt to possess the melancholy man and imbue him with the imagination required of an “excellent painter”. For it took imagination to be an image-maker – but also to be a magician.

The connection to Agrippa was first made by the art historian Erwin Panofsky, a doyen of symbolism in art, in 1943. He argued that what leaves Dürer’s art-angel so vexed is the artist’s constant sense of failure: an inability to fly, to exceed the bounds of the human imagination and create the truly wondrous. Her tools, in consequence, lie abandoned. Why astronomy, geometry, meteorology and chemistry should have any relation to the artistic temperament is not obvious today, but in the early sixteenth century the connection would have been taken for granted by anyone familiar with the Neoplatonic idea of correspondences in nature. This notion, which pervades Agrippa’s writing, held that all natural phenomena, including the predispositions of humankind, are joined in a web of hidden forces and symbols. Melancholy, for instance, is the humour governed by the planet Saturn, whence “saturnine.” That blend of ideas was still present in Robert Burton’s The Anatomy of Melancholy, published a century later, which called melancholics “dull, sad, sour, lumpish, ill-disposed, solitary, any way moved, or displeased.” A harsh description perhaps, but Burton reminds us that “from these melancholy dispositions no man living is free” – for melancholy is in the end “the character of Mortality.” But some are more prone than others: Agrippa reminded his readers of Aristotle’s opinion “that all men that were excellent in any Science, were for the most part melancholy.”

So there would have been nothing obscure about this picture for its intended audience of intellectual connoisseurs. It was precisely because Dürer mastered and exploited the new technologies of printmaking that he could distribute these works widely, and he indicated in his diaries that he sold many on his travels, as well as giving others as gifts to friends and humanist scholars such as Erasmus of Rotterdam. Unlike paintings, prints could be afforded by buyers of only moderate wealth. Ferdinand Columbus, son of Christopher, collected more than 3,000 prints, 390 of which were by Dürer and his workshop [2].

But even if the alchemical imagery of Melencolia I was part of the ‘occult parcel’ that this engraving presents, it would be wrong to imagine that alchemy was, to Dürer and his contemporaries, purely an esoteric art associated with gold-making. As Lawrence Principe has recently argued (The Secrets of Alchemy, University of Chicago Press, 2013), this precursor to chemistry was not just or even primarily about furtive and futile experimentation to make gold from base metals. It was also a practical craft, not least in providing artists with their pigments. For this reason, artists commonly knew something of its techniques; Dürer’s friend, the German artist Lucas Cranach the Elder, was a pharmacist on the side, which may explain why he was almost unique in Northern Europe in using the rare and poisonous yellow pigment orpiment, an arsenic sulphide. The extent of Dürer’s chemical knowledge is not known, but he was one of the first artists to use acids for etching metal, a technique developed only at the start of the sixteenth century. The process required specialist knowledge: it typically used nitric acid, made from saltpetre, alum and ferrous sulphate, mixed with dilute hydrochloric acid and potassium chlorate (“Dutch mordant”).

Humility should perhaps compel us to concur with art historian Keith Moxey that “the significance of Melencolia I is ultimately and necessarily beyond our capacity to define” [3] – we are too removed from it now for its themes to resonate. But what surely endures in this image is a reminder that for the Renaissance artist there was continuity between theories about the world, matter and human nature, the practical skills of the artisan, and the business of making art.

References
1. Gage, J. Colour and Culture, p.149. Thames & Hudson, London, 1993.
2. McDonald, M. in Albrecht Dürer and his Legacy, ed. G. Bartrum. British Museum, London, 2003.
3. Moxey, K. The Practice of Theory, p.93. Cornell University Press, Ithaca, 1994.

Wednesday, August 06, 2014

All hail the man who makes the bangs


The nerd with the safety specs who is always cropping up on TV doing crazy experiments for Jim Al-Khalili or Mark Miodownik or Michael Mosley, while threatening to upstage them with his patter? That’s Andrea Sella of UCL, who has just been awarded the Michael Faraday Prize by the Royal Society. And this is a very splendid thing. With previous recipients including Peter Medawar, Richard Dawkins, David Attenborough, Robert Winston and Brian Cox, it is clear what a prestigious award this is. But whereas those folks have on the whole found themselves celebrated and supported for their science-communication work, Andrea has sometimes been under a lot of pressure to justify doing this stuff instead of concentrating on his research (on lanthanides). I hope very much that this recognition will help to underline the value of what we now call “outreach activities” when conducted by people in regular research positions, rather than just by those who have managed to establish science communication as a central component of their work. Being able to talk about science (and in Andrea’s case, show it in spectacular fashion) is a rare skill, the challenge of which is sometimes under-estimated and under-valued, and so it is very heartening to see it recognized here.

Monday, August 04, 2014

Dreams of invisibility

Here’s my Point of View piece from the Guardian Review a week ago. It’s fair to say that my new book Invisible is now out, and I’m delighted that John Carey seemed to like it (although I’m afraid you can’t fully see why without a subscription).

___________________________________________________________________

H. G. Wells claimed in his autobiography that he and Joseph Conrad had “never really ‘got on’ together”, but you’d never suspect that from the gushing fan letter Conrad sent to Wells, 8 years his junior but far more established as a writer, in 1897. Before their friendship soured Conrad was a great admirer of Wells, and in that letter he rhapsodized the author of scientific romances as the “Realist of the Fantastic”. It’s a perceptive formulation of the way Wells blended speculative invention with social realism: tea and cakes and time machines. That aspect is nowhere more evident than in the book that stimulated Conrad to write to his idol: The Invisible Man.

To judge from Wells’ own account of his aims, Conrad had divined them perfectly. “For the writer of fantastic stories to help the reader to play the game properly”, he wrote in 1934, “he must help him in every possible unobtrusive way to domesticate the impossible hypothesis… instead of the usual interview with the devil or a magician, an ingenious use of scientific patter might with advantage be substituted. I simply brought the fetish stuff up to date, and made it as near actual theory as possible.”

In other words, Wells wanted to turn myth into science, or at least something that would pass for it. This is why The Invisible Man is a touchstone for interpreting the claims of modern physicists and engineers to be making what they call “invisibility cloaks”: physical structures that try to hide from sight what lies beneath. The temptation is to suggest that, as with atomic bombs, Wells’ fertile imagination was anticipating what science would later realise. But the light that his invisible man sheds on today’s technological magic is much more revealing.

It’s likely Wells was explicitly updating myth. One of the earliest stories about invisibility appears near the start of Plato’s Republic, a book that had impressed Wells in his youth. Plato’s narrator Glaucon tells of a Lydian shepherd named Gyges who discovered a ring of invisibility in the bowels of the earth. Without further ado, Gyges used the power to seduce the queen, kill the king and establish a new dynasty of Lydian rulers. In a single sentence Plato tells us what many subsequent stories of invisibility would reiterate about the desires that the dream of invisibility feeds: they are about sex, power and death.

Evidently this power corrupts – which is one reason why Tolkien made much more mythologically valid use of invisibility magic than did J. K. Rowling. But Glaucon’s point has nothing to do with invisibility itself; it is about moral responsibility. Given this power to pass unseen, he says, no one “would be so incorruptible that he would stay on the path of justice, when he could with impunity take whatever he wanted from the market, go into houses and have sexual relations with anyone he wanted, kill anyone, and do the other things which would make him like a god among men.” The challenge is how to keep rulers just if they can keep their injustices hidden.

The point about Gyges’ ring is that it doesn’t need to be explained, because it is metaphorical. The same is true of this and other magic effects in fairy tales: they just happen, because they are not about the doing but the consequences. Fairy-tale invisibility often functions as an agent of seduction and voyeurism (see the Grimms’ “The Twelve Dancing Princesses”), or a gateway to Faerie and other liminal realms. It’s precisely because children don’t ask “how is that possible?” that we shouldn’t fret about filling them with false beliefs.

But it seems to be a peculiarity of our age that we focus on the means of making magic and not the motive. The value of The Invisible Man is precisely that it highlights the messy outcome of this collision between science and myth. True, Wells makes some attempt to convince us that his anti-hero Griffin is corrupted by discovering the “secret of invisibility” – but it is one of the central weaknesses of the tale that Griffin scarcely has any distance to fall, since he is thoroughly obnoxious from the outset, driving his poor father to suicide by swindling him out of money he doesn’t possess in order to fund his lone research. If we are meant to laugh at the superstitions of the bucolic villagers of Iping as the invisible Griffin rains blows on them, I for one root for the bumpkins.

No, where the book both impresses and exposes is in its description of how Griffin becomes invisible. A plausible account of that trick had been attempted before, for example in Edward Page Mitchell’s 1881 short story “The Crystal Man”, but Wells had enough scientific nous to make it convincing. While Mitchell’s scientist simply makes his body transparent, Wells knew that it was necessary not just to eliminate pigmentation (which Griffin achieves chemically) but to eliminate refraction too: the bending of light that we see through glass or water. There was no known way of doing that, and Wells was forced to resort to the kind of “jiggery-pokery magic” he had criticized in Mary Shelley’s Frankenstein. He exploited the very recent discovery of X-rays by saying that Griffin had discovered another related form of “ethereal vibration” that gives materials the same refractive strength as air.

Despite this, Griffin finds that invisibility is more a burden than a liberation. He dreams of world domination but, forgetting to vanish his clothes too, has to wander naked in the winter streets of London, bruised by unseeing crowds and frightened that he will be betrayed by the snow that threatens to settle on his body and record his footsteps. His eventual demise has no real tragedy in it but is like the lynching of a common criminal, betrayed by sneezes, sore feet and his digestive tract (in which food visibly lingers for a time). In all this, Wells shows us what it means to domesticate the impossible, and what we should expect when science tries to do magic.

That same gap between principle and practice hangs over today’s “invisibility cloaks”. They work in a different, and technologically marvelous, way: not by transparency, but by guiding light around the object they hide. But when the first of them was unveiled in 2006, it was perplexing: for there it sat, several concentric rings of printed circuits, as visible as you or me. It was, the scientists explained, invisible to microwaves, not to visible light. What had this to do with Gyges, or even with Griffin?

Some scientists argue that, for all their technical brilliance (which is considerable, and improving steadily), these constructs should be regarded as clever optical devices, not as invisibility cloaks. It’s hard to imagine how they could ever conceal a person walking around in daylight. This “magic” is cumbersome and compromised: it is not the way to seduce the queen, kill the king and become a tyrant.

This isn’t to disparage the invention and imagination that today’s “invisibility cloaks” embody. But it’s a reminder that myth is not a technical challenge, not a blueprint for the engineer. It’s about us, with all our desires, flaws, and dreams.

Cutting-edge metallurgy


This is my Materials Witness column for the August issue of Nature Materials. I normally figure these columns are a bit too specialized to put up here, but this subject is just lovely: there is evidently so much more to the "sword culture" of the so-called Dark Ages, the Viking era and the early medieval period than a bunch of blokes running amok with big blades. As Snorri Sturluson surely said, you can't beat a good sword.

__________________________________________________________________

There can be few more mythologized ancient materials technologies than sword-making. The common view – that ancient metalsmiths had an extraordinary empirical grasp of how to manipulate alloy microstructure to make the finest-quality blades – contains a fair amount of truth. Perhaps the most remarkable example of this was discovered several years ago: the near-legendary Damascus blades used by Islamic warriors, which were flexible yet strong and hard enough to cleave the armour of Crusaders, contained carbon nanotubes [1]. Formation of the nanotubes was apparently catalysed by impurities such as vanadium in the steel, and these nanostructures assisted the growth of cementite (Fe3C) fibres that thread through the unusually high-carbon steel known as wootz, making it hard without paying the price of brittleness.

Yet it seems that the skill of the swordsmith wasn’t directed purely at making swords mechanically superior. Thiele et al. report that the practice called pattern-welding, well established in swords from the second century AD to the early medieval period, was primarily used for decorative rather than mechanical purposes and, unless used with care, could even have compromised the quality of the blades [2].

Pattern-welding involved the lamination and folding of two materials – high-phosphorus iron and low-phosphorus mild steel or iron – to produce a surface that could be polished and etched to striking decorative effect. After twisting and grinding, the metal surface could acquire striped, chevron and sinuous patterns that were highly prized. A letter to a Germanic tribe in the sixth century AD, complimenting them for the swords they gave to the Ostrogothic king Theodoric, conqueror of Italy, praised the interplay of shadows and colours in the blades, comparing the pattern to tiny snakes.


This and the image above are modern pattern-welded swords made by Patrick Barta using traditional methods.

But was it all about appearance? Surely what mattered most to a warrior was that his sword could be relied on to slice, stab and maim without breaking? It seems not. Thiele et al. commissioned internationally renowned swordsmith Patrick Barta to make pattern-welded rods for them using traditional techniques and re-smelted medieval iron. In these samples the high-phosphorus component was iron and not, as some earlier studies have mistakenly assumed, steel.

They subjected the samples to mechanical tests that probed the stresses typically experienced by a sword: impact, bending and buckling. In no case did the pattern-welded samples perform any better than hardened and tempered steel. This is not so surprising, given that phosphoric iron itself has rather poor toughness, no matter how it is laminated with other materials.

The prettiness of pattern welding didn’t, however, have to compromise the sword’s strength, since – at least in later examples – the patterned section was confined to panels in the central “fuller” of the blade, while the cutting edge was steel. All the same, here’s an example of how materials use may be determined as much by social as by technical and mechanical considerations. From the Early to the High Middle Ages, swords weren’t just or even primarily for killing people with. For the Frankish warrior, the spear and axe were the main weapons; swords were largely symbols of power and status, carried by chieftains, jarls and princes but used only rarely. Judging by the modern reproductions, they looked almost too gorgeous to stain with blood.

References
1. Reibold, M. et al., Nature 444, 286 (2006).
2. Thiele, A., Hosek, J., Kucypera, P. & Dévényi, L. Archaeometry online publication doi:10.1111/arcm.12114 (2014).

Thursday, July 24, 2014

Science books you (and I) should read


La Recherche asked me to recommend my favourite science book for a special issue of the magazine. I had to go for Richard Holmes’ The Age of Wonder (Harper Press, London, 2008). Lots of science books have interested me, and many have captivated me with their wonderful writing. But this is the only one that left me feeling quite so excited.

________________________________________________________________________

Who should write books about science? A Nobel laureate once made his views on that plain enough to me, saying “I have a healthy disregard for anybody and everybody who has not made advances in the field in which they are pontificating.” And in compiling The Oxford Book of Modern Science Writing, Richard Dawkins proclaimed that “This is a collection of good writing by professional scientists, not excursions into science by professional writers” – implying not only that those two groups are distinct but that true science writing embraces only the former.

There are plenty of good examples that demonstrate the wisdom of the academic impulse never to stray outside your own field – in which you have perhaps painstakingly accumulated expertise over decades. The results of forays into foreign intellectual territory have sometimes been disastrous. But the idea that non-scientists have nothing to say about science that could possibly be useful or interesting to scientists, or that scientists from one discipline are unlikely to say anything valuable about another, is one that I find not just dismaying but terrifying.

I don’t think Dawkins or my Nobel colleague actually doubts for a moment that science can be effectively popularized by non-experts. After all, as many people have read and been informed by Bill Bryson’s A Short History of Nearly Everything (2005) as by any of Dawkins’ splendid and profoundly erudite expositions on evolution. But can outsiders actually bring anything new to the table?

My answer to that is Richard Holmes’ book The Age of Wonder. It tells of the period in the late eighteenth and early nineteenth centuries when scientists sat down with artists and poets – Humphry Davy and Samuel Coleridge exchanged mutually admiring correspondence, for example. They all shared a common view of nature as a source of sublime wonder, the exploration of which was a voyage of romantic discovery that needed the cadences of poetry as much as the precision of scientific experiment and observation. This material could have become a formulaic lament about the “two cultures” that have allegedly arisen since, but Holmes does something much subtler, richer and more fulfilling.

Beginning with James Cook’s expedition to Tahiti in 1769, whose botanist was Joseph Banks, the future president of the Royal Society, Holmes takes in episodes of “romantic science” that include William and Caroline Herschel’s telescopic investigations of the moon and stars, early balloon flights and Davy’s experiments with laughing gas and the miner’s lamp. Holmes’ sights are trained firmly on the cultural setting and reception of these studies, and the mindset that informed them. “For many Romantic scientists”, he writes,

there was no immediate contradiction between religion and science: rather the opposite. Science was a gift of God or Providence to mankind, and its purpose was to reveal the wonders of His design.

Holmes has insisted that he knows rather little about science. This is excessively modest, but not falsely so. One doesn’t doubt, from the confident tone of the book, either that he spared no pains to find out what he needed to know or that he knew what to do with that information. Precisely because Holmes is an expert on the lives and intellectual milieu of Coleridge, Percy Shelley and the British and French Romantics generally, he was able to draw out themes and ideas that historians of science would not have seen.

But the book isn’t just to be celebrated for its fresh perspective. It is also a joy to read. Every page delivers something interesting, always with elegance and wryness. Even the footnotes (mark this, academics) are not to be missed. I have read a lot of science books – when I first read The Age of Wonder it was as a judge of the Royal Society Science Book Prize (which Holmes won), and so I was wading through literally stacks of them. But none has left me with such genuine exhilaration as this one. And none has better illuminated the case which Holmes makes at the end, and which surely all scientists would applaud:

Perhaps most important, right now, is a changing appreciation of how scientists themselves fit into society as a whole, and the nature of the particular creativity they bring to it. We need to consider how they are increasingly vital to any culture of progressive knowledge, to the education of young people (and the not so young), and to our understanding of the planet and its future.

More challenging to some, I suspect, is Holmes’ corollary:

The old rigid debates and boundaries – science versus religion, science versus the arts, science versus traditional ethics – are no longer enough. We should be impatient with them. We need a wider, more generous, more imaginative perspective.

Here already is the beginning of that perspective.

Wednesday, July 23, 2014

The hidden structure of liquids


Here’s a Commentary written for the collection of essays curated by Nature Materials for the International Year of Crystallography. It is worth also taking a look at Nature’s Crystallography Milestones collection.

___________________________________________________________________________

From its earliest days, crystallography has been viewed as a means to probe order in matter. J. D. Bernal’s work on the structure of water reframed it as a means of examining the extent to which matter can be regarded as orderly.

In 1953, J. Desmond Bernal wrote that, as a result of the development of X-ray crystallography,

“science was beginning to find explanations in terms of atoms and their combinations not only of the phenomena of physics and chemistry but of the behaviour of ordinary things. The beating out of metal under the hammer, the brittleness of glass and the cleavage of mica, the plasticity of clay, the lightness of ice, the greasiness of oil, the elasticity of rubber, the contraction of muscle, the waving of hair, and the hardening of a boiled egg are among the hundreds of phenomena that had already been completely or partially explained." [1]

What is striking here is how far beyond crystalline and ordered matter Bernal perceived the technique to have gone: into soft matter, amorphous solids, and viscous liquids. For biological polymers, Bernal himself had pioneered the study of globular proteins, while William Astbury, Bernal’s one-time colleague in William Bragg’s laboratory at the Royal Institution in London, had by mutual agreement focused on the fibrous proteins that constitute hair and muscle. Of course, in the year in which Bernal was writing, the most celebrated X-ray structure of a fibrous biological macromolecule, DNA, was solved by Crick and Watson under the somewhat sceptical auspices of William’s son Lawrence Bragg, head of the Cavendish Laboratory in Cambridge.

All those macromolecular materials do form crystals. But one of Bernal’s great insights (if not his alone) was to recognize that the lack of long-ranged order in a material was no obstacle to the use of X-rays for deducing its structure. That one could meaningfully talk about a structure for the liquid state was itself something of a revelation. What is sometimes overlooked is the good fortune that the natural first choice for such investigation of liquids – water, ubiquitous and central to life and the environment – happens to have an unusually high degree of structure. Indeed, Bernal first began his studies of liquid-state structure by regarding it as a kind of defective crystal.

The liquid state is notoriously problematic precisely because it bridges other states that can, at least in ideal terms, be considered as perfectly ordered (the crystal) and perfectly disordered (the gas). Is the liquid a dense gas or an imperfect solid? It has become clear today that neither view does full justice to the issue – not least because, in liquids, structure must be considered not only as a spatial but also as a temporal property. We are still coming to terms with that fact and how best to represent it, which is one reason why there is still no consensual “structure of water” in the same way as there is a structure of ice. What is more, it is also now recognized that there is a rich middle ground between crystal and gas, of which the liquid occupies only a part: this discussion must also encompass the quasi-order or partial order of liquid crystals and quasicrystals, the ‘frozen disorder’ of glasses, and the delicate interplay of kinetic and thermodynamic stability. X-ray diffraction has been central to all of these ideas, and it offered Bernal and others the first inkling of how we might meaningfully talk about the elusive liquid state.

Mixed metaphors

One of the first attempts to provide a molecular picture of liquid water came from the discoverer of X-rays themselves, Wilhelm Röntgen. In 1891 Röntgen suggested that the liquid might be a mixture of freely diffusing water molecules and what he termed “ice molecules” – something akin to ice-like clusters dispersed in the fluid state. He suggested that such a ‘mixture model’, as it has become known, could account for many of water’s anomalous properties, such as the decrease in viscosity at high pressure. Mixture models are still proposed today [2,3], attesting to the tenacity of the idea that there is something crystal-like in water structure.

X-ray scattering was already being applied to liquids, in particular to water, by Peter Debye and others in the late 1920s. These experiments showed that there was structural information in the pattern: a few broad but clearly identifiable peaks, which Debye interpreted as coming from both intra- and intermolecular interference. In 1933 Bernal and his colleague Ralph Fowler set out to devise a structural model that might explain the diffraction pattern measured from water. It had been found only the previous year that the water molecule has a V shape, and Bernal and Fowler argued from quantum-chemical considerations that it should have positive charges at the hydrogen atoms, balanced by two lobes of negative charge on the oxygen to produce a tetrahedral motif. On electrostatic grounds, each molecule should then form hydrogen bonds with four others in a tetrahedral arrangement. Noting the similarity with the tetrahedral structure in silicates, Bernal and Fowler developed a model in which water was regarded as a kind of distorted quartz. Their calculations produced fair agreement with the X-ray data: the peaks were in the right places, even if their intensities did not match so well [4].
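(The interference Debye measured follows directly from the set of interatomic distances in a structural model. Here is a minimal sketch – Python with NumPy, using arbitrary illustrative coordinates rather than any real water model – of the Debye scattering sum, which yields exactly the kind of broad, smooth maxima seen for liquids rather than sharp Bragg reflections.)

    import numpy as np

    # Debye scattering sum for identical point scatterers:
    #   I(q) = sum_ij sin(q r_ij) / (q r_ij)
    # A handful of arbitrary positions (angstroms, illustrative only) stands in
    # for a small, loosely tetrahedral cluster of molecules.
    positions = np.array([[ 0.0,  0.0,  0.0],
                          [ 2.8,  0.0,  0.0],
                          [-1.4,  2.4,  0.0],
                          [-1.4, -1.2,  2.2],
                          [ 0.0,  1.3, -2.5]])

    r_ij = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    q = np.linspace(0.1, 10.0, 500)            # scattering vector magnitude (1/angstrom)

    qr = q[:, None, None] * r_ij[None, :, :]
    safe = np.where(qr > 0, qr, 1.0)
    terms = np.where(qr > 0, np.sin(safe) / safe, 1.0)   # sin(x)/x -> 1 as x -> 0
    intensity = terms.sum(axis=(1, 2))         # broad, liquid-like maxima in q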

This work established some of the core ideas of water structure, in particular the tetrahedral coordination. It set the scene for other models that started from a crystalline viewpoint. Notably, Henry Eyring and colleagues at the University of Utah devised a general picture of the liquid state consisting of an essentially crystalline close-packing threaded with many dislocations [5]. Molecules that escape from this close-packing can, in Eyring’s picture, wander almost gas-like between the dense clusters, making it a descendant of Röntgen’s mixture model.

Building liquids by hand

But Bernal was not happy with this view of the liquid as a defective solid, saying that it postulates “a greater degree of order…in the liquid than actually exists there” [6]. In the 1950s he started again, this time by considering a ‘simple liquid’ in which the molecules are spheres that clump together in an unstructured (and presumably dynamic) heap. Bernal needed physical models to guide his intuition, and during this period he constructed many of them, some now sadly lost. He used ball bearings to build dense random packings, or to see the internal structure better he would prop apart ping-pong balls or rubber balls with wires or rods, sometimes trying to turn himself into the required randomizing influence by selecting rods of different length without thinking. He was able to construct models of water that respected the local tetrahedral arrangement while producing no long- or medium-range order among molecules: a random hydrogen-bonded network in which the molecules are connected in rings with between four and seven members, as opposed to the uniformly six-membered rings of ordinary ice. Not only did this structure produce a good fit to the X-ray data (he counted out the interatomic distances by hand and plotted them as histograms), but the model liquid proved to have a higher density than ice, just as is the case for water [7].
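(Bernal’s hand-tallying is, in effect, the construction of a pair-distance histogram – the raw material of the radial distribution function. A minimal sketch of the same bookkeeping, in Python with NumPy and with random points in a box standing in purely for illustration; a genuine dense random packing or hydrogen-bonded network would show the structured peaks he found.)

    import numpy as np

    # What Bernal counted by hand: every interatomic distance in a model,
    # binned into a histogram. Random points in a box stand in here for his
    # ball-bearing packings, purely to show the bookkeeping.
    rng = np.random.default_rng(0)
    points = rng.uniform(0.0, 10.0, size=(500, 3))

    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pair_dists = dists[np.triu_indices(len(points), k=1)]   # each pair counted once

    hist, edges = np.histogram(pair_dists, bins=100, range=(0.0, 10.0))
    # Dividing each bin by the 4*pi*r^2*dr shell volume and the mean number
    # density turns this raw histogram into the radial distribution function g(r).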

This ‘mixed-ring’ random network supplies the basis for most subsequent molecular models of water [8, 9], although it is now clear that the network is highly dynamic – hydrogen bonds have a lifetime of typically 1 ps – and permeated with defects such as bifurcated and distorted hydrogen bonds [9, 10].

But although the tetrahedron seems to fit the local structure of water, that liquid is unusual in this regard, having a local geometry that is dictated by the high directionality of the hydrogen bonds. At the same time as Bernal was developing these ideas in the 1950s, Charles Frank at the University of Bristol proposed that for simple liquids, such as monatomic liquids and molten metals, a very common motif of short-ranged structure is instead the icosahedron [11]. This structure, Frank argued, provides the closest packing for a small number of atoms. But as one adds successive layers to an icosahedral cluster, the close-packing breaks down. What is more, the clusters have fivefold symmetry, which is incompatible with any crystalline arrangement. It is because of this incommensurability, Frank said, that liquid metals can be deeply supercooled without nucleating the solid phase. It was after hearing Frank speak on these ideas about polyhedral packings with local fivefold symmetry – very much within the context of the solid-state physics that was Frank’s speciality – that Bernal was prompted to revisit his model of the liquid state in the 1950s.

Forbidden order

Both X-ray scattering [12, 13] and X-ray spectroscopy [14] now offer some support for Frank’s picture of liquid metals, showing that something like icosahedral structures do form in metastable supercooled melts. Frank’s hypothesis had already been recalled in 1984, however, when the discovery was reported of a material that seemed to possess a crystalline form of icosahedral order: a quasicrystal [15]. X-ray diffraction from an alloy of aluminium and manganese produced a pattern of sharp peaks with tenfold symmetry, which is now rationalized in terms of a solid structure that has local five- and tenfold symmetry but no perfect long-range translational order. Such materials are now recognized by the International Union of Crystallography as formally crystalline, according to the definition that they produce sharp, regular diffraction peaks. Frank’s icosahedral liquid clusters could provide the nuclei from which these quasicrystalline phases form, and indeed synchrotron X-ray crystallography of a supercooled melt of a Ti-Zr-Ni alloy shows that their formation precedes the activated formation first of a metastable quasicrystalline phase and then of a stable crystal [12].
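(How sharp diffraction peaks can arise without a repeating unit cell is easy to demonstrate in one dimension. The sketch below – Python with NumPy, purely illustrative and not a model of the Al-Mn alloy – builds a Fibonacci chain of long and short spacings, a standard one-dimensional analogue of quasicrystalline order, and computes its diffraction intensity, which is sharply peaked even though the chain never repeats.)

    import numpy as np

    # Fibonacci chain: the substitution L -> LS, S -> L generates a quasiperiodic
    # sequence of long and short spacings, a 1D analogue of quasicrystalline order.
    seq = "L"
    for _ in range(12):
        seq = "".join("LS" if c == "L" else "L" for c in seq)

    golden = (1 + 5 ** 0.5) / 2
    steps = np.array([golden if c == "L" else 1.0 for c in seq])
    positions = np.concatenate(([0.0], np.cumsum(steps)))

    # Diffraction intensity |sum_j exp(i q x_j)|^2: sharp peaks appear at
    # incommensurate q values despite the absence of any repeating unit cell.
    q = np.linspace(0.01, 10.0, 4000)
    amplitude = np.exp(1j * q[:, None] * positions[None, :]).sum(axis=1)
    intensity = np.abs(amplitude) ** 2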

It seems fitting that Linus Pauling, whose work helped to explain the structures of water and ice [16], should have entered the early debate over the interpretation of quasicrystal diffraction. Pauling was on the wrong side, insisting dismissively that this was probably just a case of crystal twinning. But he, Bernal, Frank, and indeed William Bragg himself (a pioneer of X-ray studies of liquid crystals), all grappled with the question of determining how far ideas from crystalline matter can be imported into the study of the liquid state. Or to put it another way, they showed that X-ray crystallography is better viewed not as a method of probing order in matter, but as a means of examining the extent to which matter can be regarded as orderly. With the advent of high-intensity synchrotron sources that reduce the exposure times sufficiently to study ultrafast dynamic processes by X-ray diffraction [17, 18], it is now possible to explore that question as a function of the timescale being probed. It has been suggested that recent debate about the structure of water – a discussion that has oscillated between the poles of Bernal’s random tetrahedral network and mixture models – is itself all a matter of defining the notion of ‘structure’ on an appropriate timescale [19].

Such studies also seem to be confirming that Bernal asked the right question about the liquid state in 1959 (even if he phrased it as a statement): “it is not the fluidity of the liquid that gives rise to its irregularity. It is its irregularity that gives rise to its fluidity.” [7] Which is it, really? Do defects such as bifurcated hydrogen bonds give water its fluidity [10]? Or is it the dynamical making and breaking of hydrogen bonds that undermines the clathrate-like regularity proposed by Pauling [20]? Whatever the case, there is little doubt now that, as Bernal perceived, extending X-ray diffraction to biomolecules and liquids – and now to quasicrystals and all manner of soft matter – has led to a broader view of what crystallography is:

“And so there are no rules, or the old rules are enormously changed… We are seeing now a generalized crystallography, although it hasn’t been written up as such… [These materials] have their own inner logic, the same kind of logic but a different chapter of the logic that applies to the three-dimensional regular lattice crystals.” [21]

References
[1] A. L. Mackay, Journal of Physics: Conference Series 57, 1–16 (2007)
[2] C. H. Cho, S. Singh & G. W. Robinson, Faraday Discuss. 103, 19-27 (1996)
[3] C. Huang et al., Proc. Natl Acad. Sci. 106, 15214-15218 (2009)
[4] J. D. Bernal and R. H. Fowler, J. Chem. Phys. 1, 515 (1933)
[5] H. Eyring, F. W. Cagle Jr & C. J. Christiansen, Proc. Natl Acad. Sci. 44, 123-126 (1958)
[6] J. L. Finney, Journal of Physics: Conference Series 57, 40–52 (2007)
[7] J. D. Bernal, Proc. R. Inst. Great Britain 37, 355-393 (1959)
[8] F. H. Stillinger, Science 209, 451-457 (1980)
[9] J. L. Finney, Philos. Trans. R. Soc. Lond. B 359, 1145-1165 (2004)
[10] F. Sciortino, A. Geiger & H. E. Stanley, Nature 354, 218-221 (1991)
[11] F. C. Frank, Proc. R. Soc. London, Ser. A 215, 43-46 (1952)
[12] K. F. Kelton et al., Phys. Rev. Lett. 90, 195504 (2003)
[13] T. Schenk et al., Phys. Rev. Lett. 89, 075507 (2002)
[14] A. Filipponi, A. Di Cicco & S. De Panfilis, Phys. Rev. Lett. 83, 560 (1999).
[15] D. Shechtman, I. Blech, D. Gratias, & J. W. Cahn, Phys. Rev. Lett. 53, 1951–1953 (1984)
[16] L. Pauling, Nature 317, 512-514 (1985)
[17] H. Ihee et al., Science 309, 1223-1227 (2005)
[18] S. Bratos and M. Wulff, Adv. Chem. Phys. 137, 1-29 (2008)
[19] T. D. Kühne and R. Z. Khaliullin, Nat. Commun. 4, 1450 (2013)
[20] L. Pauling, in D. Hadzi & H. W. Thompson (eds), Hydrogen Bonding, 1-6 (Pergamon Press, New York, 1959).
[21] J. D. Bernal (1966). Opening remarks, in G. E. W. Wolstenholme & M. O’Connor (eds.), Principles of Biomolecular Organization. Little Brown & Co., Boston.

Thursday, July 17, 2014

How to get your starter for ten


This piece started off headed for Prospect's blog, but didn't quite make it. So you get it instead.

__________________________________________________________________

For the first time in its 52-year history, the BBC’s student quiz show University Challenge has allowed the cameras behind the scenes to reveal how the teams get selected and what it’s like for them facing fiercely recondite questions while Jeremy Paxman barks “Come on!” at them.

Well, sort of. For of course what you’re seeing in these entertaining programmes is as carefully stage-managed and engineered as any other documentary. The Oxbridge teams look like cocky snobs, if not downright peculiar, while the redbrick teams are the plucky give-it-a-go underdogs played by James McAvoy in the UC-based film Starter For Ten. In the first episode the students hadn’t got within barking distance of Paxman, but already they had had to jump through all manner of hoops, passing the gruelling qualifying test and convincing the BBC’s selection team of their telegenic potential (which makes you wonder, on occasion, what some of the teams who didn’t make the cut must have come across like).

What I think the programmes will struggle to convey, however, is the sheer terror of sitting behind those surprisingly flimsy tables with a red buzzer in front of you and a name panel that will light up to announce your desperate ignorance. I know, because I have done it.

In recent years, UC has staged occasional mini-tournaments dubbed “The Professionals”, in which the teams are composed not of fresh-faced students but jaded oldies representing a particular organization or guild, who have long forgotten, if they ever knew, how to integrate cosines or who was Stanley Baldwin’s Chancellor of the Exchequer. In 2006 Prospect magazine – “Britain’s intelligent conversation”, after all – was invited to take part, and I was asked to be the obligatory “scientist” on the team.

Let me say again: it was unspeakably scary. If I looked cadaverous, as my wife helpfully told me, it was because I had no sleep the night before we travelled up to Manchester to face the Paxman inquisition.

I had the distinct disadvantage – an inexcusable solecism, I now realise, for anyone who professes to know anything about anything – of not having watched the show previously, except for the one where Rik Mayall and pals pour water on the heads of Stephen Fry and the other toffs below. Students, don’t make that mistake. Only after repeated viewing do you see that you must trust your instincts and not double-check your answer before blurting it out. Yes, you might say something spectacularly foolish, but the chance is greater that you’ll be spot on. Now, I might add, I watch UC obsessively, like Christopher Walken driven by his trauma to repeat games of Russian roulette in the dingy bars of Saigon.

So while I feel for the poor students having to work so hard to get on the show, that preparation is worth it. The Prospect team had the benefit only of watching a couple of old episodes at the editor and captain David Goodhart’s house, then taking a written test one gloomy evening in an empty office block in Farringdon. Then it was off to face the bright lights.

What contestants must know is that button technique is everything. You think that the person who buzzes is the one who knows the answer? As often as not, he or she just has the quickest finger. What’s more, too much knowledge can hinder as much as it helps – you start going down all kinds of blind alleys rather than plumping for the obvious. Our second game opened with the kind of question that contestants dream of, in which some obscure, random cache of information promises to make you look like a cultured genius. “Which cathedral city is associated with…?” Well, Paxman seems to be talking about the twelfth-century theologian John of Salisbury, one-time bishop of Chartres – although did he say the man was a biographer of Anselm of Bec or Anselm and Becket, which changes things…? Well let’s see, thinks the man who has just written a book on Chartres cathedral, John studied in Paris, so maybe… By which time the question has moved on to the vital clue that allows the opposing team to buzz in with “Salisbury”. (You see, I knew, I knew!)

But the Professionals have another disadvantage, which is that they will have been around for long enough to be supposed to have picked up some kind of expertise – and your reputation as an “expert” is therefore on the line in a way that it isn’t for tender undergraduates who have nothing yet to lose. The terror is not that you’ll fail to know obscure Pacific islands but that you’ll foul up on the easiest of questions about your own speciality. This can undo the strongest of us. You might think that Prospect’s previous editor would be justifiably confident in his encyclopaedic knowledge and cultural breadth, but in fact David became so convinced that there was going to be a question on the then-current government of Germany – German politics being his forte – that he was phoning the office moments before the game to get a rundown of all Angela Merkel’s ministers. Strangely, that question never came up.

I found the answer to something that might have puzzled you: how is it that Jeremy Paxman, betrayed by his researchers or his interpretation, occasionally gets away with announcing a wrong answer? The BBC team recognize that their research isn’t infallible, and all contestants are told that they can challenge a response if they think their answer has been wrongly dismissed – you have to buzz again. Such interruptions would be edited out anyway (the filming isn’t quite as smooth and seamless as it appears). But you tell me: who, especially if you are nineteen years old, is going to buzz Jeremy Paxman and tell him that he’s got it wrong? That’s how Paxo once got away with my favourite blooper, pronouncing incorrect the answer “Voodoo Child” to a question about that Jimi Hendrix song because he assumed that the slang spelling “Voodoo Chile” on his card implied that this must be a song about magical practices in the native land of Pablo Neruda.

And what do you do if your answer is not just wrong but spectacularly so, a blunder that plunges you into a James McAvoy moment, filling a million screens with your mortification? A member of our team showed the way. A man of immense learning, his answer was so obviously wrong that my jaw dropped. But if he died inside, you would never have known it from his nonchalant smile. Such a dignified and elegant way of dealing with a screaming gaffe is worth aspiring to.

Yes, it’s all about lessons in life, students. And let me tell you that the most important is the hoary old claim of the loser: it’s how you compete, not where you finish. When we learnt that the winning side in our competition had been practising with home-made buttons for weeks, and when they gracelessly said they hoped we’d win our second round because “we knew we could beat you”, then I knew that there are indeed more important things than coming first.

Would I go through it all again? I’m not sure my wife would let me, but I’m a little ashamed to say that I wouldn’t hesitate.

Wednesday, July 16, 2014

Unnatural creations

Here is a commentary that I have just published in the Lancet.

___________________________________________________________________

“I don’t think we should be motivated by a fear of the unknown.” Susan Solomon, chief executive of the New York Stem Cell Foundation, made this remark in the context of the current debate on mitochondrial transfer for human reproduction. Scientists working on the technique presented evidence to the US Food and Drug Administration last February in hearings to determine whether safety concerns are sufficiently minimal to permit human trials to proceed.

Although the hearings were restricted to scientific, not social or ethical, issues, Solomon was responding to a perception that already the topic was becoming sensationalized. Critics have suggested that this research “could open the door to genetically modified children”, and that it would represent an unprecedented level of experimentation on babies. Newspapers have decreed that, since mitochondrial transfer will introduce the few dozen mitochondrial genes of the donor into the host egg, the technique will create “three-parent babies”. There seems little prospect that Solomon’s appeal will be heeded.

The issue is moot for the present, because the scientific panel felt that too many questions remain about safety to permit human trials. However, the method – which aims to combat harmful genetic mutations in the mitochondria of the biological mother while still enabling her to contribute almost all of her DNA to an embryo subsequently made by IVF – is evidently going to be beset by questions about what is right and proper in human procreation.

In part, this is guilt by association. Because mitochondrial transfer introduces genes foreign to the biological parents, it is seen as a kind of genetic modification of the same ilk as that associated with alleged “designer babies”. That was sufficient justification for Marcy Darnovsky, executive director of the California-based Center for Genetics and Society, to warn that human trials would begin “a regime of high-tech consumer eugenics”: words calculated to invoke the familiar spectre of totalitarian social engineering. But the debate also highlights the way in which technologies like this are perceived as a challenge to the natural order, to old ideas of how babies are “meant” to be made.

All of this is precisely what one should expect. The same imagery has accompanied all advances in reproductive science and technology. It is imagery with ancient roots, informed by a debate that began with Plato and Aristotle about the possibilities and limitations of human art and invention and to what extent they can ever compare with the faculties of nature. J. B. S. Haldane understood as much when he wrote in his 1924 book Daedalus, or Science and the Future that
“The chemical or physical inventor is always a Prometheus. There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which had not previously heard of their existence, would not appear to him as indecent and unnatural.”

In Haldane’s time one of the most potent mythical archetypes for these ‘perversions’ of nature was Faust’s homunculus, the ‘artificial being’ made by alchemy. The message of the Faust legend seemed to be that human hubris, by trying to appropriate godlike powers, would lead to no good. That was the moral many people drew from Mary Shelley’s secular retelling of the Faust legend in 1818, in which Frankenstein’s punishment for making life came not from God or the Devil but from his creation itself. While Shelley’s tale contains far more subtle messages about the obligations and responsibilities of parenthood, it was all too easy to interpret it as a Faustian fable about the dangers of technology and the pride of the technologists. Many still prefer that view today.

This was surely what led IVF pioneer Robert Edwards to complain that “Whatever today’s embryologists may do, Frankenstein or Faust or Jekyll will have foreshadowed, looming over every biological debate.” But Edwards could have added a more recent blueprint for fears about where reproductive technologies would lead. He had, after all, seen evidence of it already. When Louise Brown was born in 1978, Newsweek announced that it was “a cry around the brave new world.”

One could charge Aldous Huxley with having a lot to answer for. His most famous book now rivals Frankenstein as the off-the-shelf warning about where all new reproductive technologies will lead: to a totalitarian state biologically engineered into a strict social hierarchy, devoid of art, inspiration or human spirit. Science boosters thought as much at the time: H. G. Wells considered that Huxley had “betray[ed] the future.” But Huxley was only exploring ideas that his biologist brother Julian and Haldane were then discussing, including eugenic social engineering and the introduction of in vitro gestation or “ectogenesis”. That we continue to misappropriate Brave New World today, as if it were a work of futurology rather than (like much of the best science fiction) a bleak social satire of its times, suggests that it fed myths we want to believe.

One of the most powerful of these myths, which infuses Frankenstein but began in ancient Greece, is that there is a fundamental distinction between the natural and the artificial, and a “natural order” that we violate at our peril. In a recent study challenging the arguments for draconian restriction of human reproductive technologies, philosopher Russell Blackford remarks that
“where appeals against violating nature form one element in public debate about some innovation, this should sound an alarm. It is likely that opponents of the practice or technology are, to an extent, searching for ways to rationalize a psychological aversion to conduct that seems anomalous within their contestable views of the world.”

In other words, accusations of “unnaturalness” may be the argument of last resort for condemning a technology when a more rational objection is not so easily found. Blackford shows that it is extremely hard to develop such objections with rigour and logical consistency. But the fact is that these accusations are often the arguments of first resort. In discussions of public opinion they tend to be dubbed the “Yuk!” factor, which conservative bioethicist Leon Kass dignifies with the term “the wisdom of repugnance”: “the emotional expression of deep wisdom, beyond reason’s power fully to articulate it.” In other words, one can intuit the correct response without being obliged to justify it. Whether there is wisdom in it or not, disgust at “violating nature” has a long history. “We should not mess around with the laws of nature”, insisted one respondent in Life magazine’s survey on reproductive technologies when IVF was becoming a reality in 1969.

These attitudes need probing, not simply ridiculing. One common thread in such responses, both then and now, is a fear for the traditional family. It is a fear that reaches beyond reason, because the new technologies become a lightning rod for concerns that already exist in our societies. Take the worry voiced by around 40 percent of participants in the Life poll that a child conceived by IVF “would not feel love for family”. Such an incoherent collision of anxieties will resist all inroads of reason. A review of my 2011 book Unnatural in the conservative magazine Standpoint took it for granted that a defence of IVF was a defence of single-parent families, making the book merely “erudite propaganda in the ongoing cultural war against the traditional family and the values and beliefs that have traditionally sustained it”.

These assumptions are not always so easily spotted – which brings us back to “three-parent embryos”. This label prejudices the discussion from the outset: what could possibly be more unnatural than three parents? Only on reflection do we realise we probably already know some three-parent families: gay couples with children via sperm donation, step-parents, adoptive families. The boundaries of parental and family units are in any case more fluid in many cultures outside of Europe and the United States. Ah, but three genetic parents – surely that is different? Perhaps so, if we like to sustain the convenient fiction that our parents acquired their genes de novo, or that the word “parent” is exclusively linked to the contribution of DNA rather than of love and nurture. Calling an embryo created by mitochondrial replacement a “three-parent baby” perhaps makes sense in a world where we tell children that all babies are made by a mummy and daddy who look after them for life. But I suspect most parents no longer do that, and feel that their duties neither begin nor end with their chromosomes.

In a poll in the early 1980s in Australia – the second country to achieve a successful live birth through IVF – the most common reason given for opposition to the technique was that it was thought to be ‘unnatural’. Why does this idea still have such resonance, and what exactly does it mean?

People have spoken since antiquity about actions that are contra naturam. But they didn’t necessarily mean what we mean. The simple act of lifting up an object was contra naturam according to Aristotelian physics, which ascribed to heavy things a natural tendency to fall. This was a simple, neutral description of a process. Today, saying something is unnatural or ‘against nature’ has a pejorative intent: the Germanic prefix ‘un-’ implies moral reprehension. This is a corollary of the ‘natural law’ outlined by Thomas Aquinas in the thirteenth century, whereby God created a teleological universe in which everything has a natural part to play and which gives a direction to the moral compass. The implication remains in the Catholic Catechism: God intended the natural end of sex to be procreation, ergo the natural beginning of procreation must be sex (not sperm meeting egg, but an approved conjunction of body parts).

Those who oppose mitochondrial transfer on grounds of discomfort about its “naturalness” are not, in all probability, appealing to Aquinas. But those who support it might need to recognize these roots – to move beyond logical and utilitarian defences, and understand that the debate is framed by deep, often hidden ideas about naturalness. This is part of what makes us fear the unknown.

Further reading

P. Ball (2011). Unnatural: The Heretical Idea of Making People. Bodley Head, London.
L. Daston & F. Vidal (eds) (2004). The Moral Authority of Nature. University of Chicago Press, Chicago.
D. Evans & N. Pickering (eds) (1996). Conceiving the Embryo. Martinus Nijhoff, The Hague.
J. B. S. Haldane (1924). Daedalus; or, Science and the Future. Kegan Paul, Trench, Trubner & Co., London.
L. R. Kass (1985). Toward a More Natural Science: Biology and Human Affairs. Free Press, New York.
M. J. Mulkay (1997). The Embryo Research Debate: Science and the Politics of Reproduction. Cambridge University Press, Cambridge.
S. M. Squier (1994). Babies in Bottles: Twentieth-Century Visions of Reproductive Technology. Rutgers University Press, New Brunswick, NJ.