Wednesday, July 23, 2014

The hidden structure of liquids


Here’s a Commentary written for the collection of essays curated by Nature Materials for the International Year of Crystallography. It is worth also taking a look at Nature’s Crystallography Milestones collection.

___________________________________________________________________________

From its earliest days, crystallography has been viewed as a means to probe order in matter. J. D. Bernal’s work on the structure of water reframed it as a means of examining the extent to which matter can be regarded as orderly.

In 1953, J. Desmond Bernal wrote that, as a result of the development of X-ray crystallography,

“science was beginning to find explanations in terms of atoms and their combinations not only of the phenomena of physics and chemistry but of the behaviour of ordinary things. The beating out of metal under the hammer, the brittleness of glass and the cleavage of mica, the plasticity of clay, the lightness of ice, the greasiness of oil, the elasticity of rubber, the contraction of muscle, the waving of hair, and the hardening of a boiled egg are among the hundreds of phenomena that had already been completely or partially explained." [1]

What is striking here is how far beyond crystalline and ordered matter Bernal perceived the technique to have gone: into soft matter, amorphous solids, and viscous liquids. For biological polymers, Bernal himself had pioneered the study of globular proteins, while William Astbury, Bernal’s one-time colleague in William Bragg’s laboratory at the Royal Institution in London, had by mutual agreement focused on the fibrous proteins that constitute hair and muscle. Of course, in the year in which Bernal was writing, the most celebrated X-ray structure of a fibrous biological macromolecule, DNA, was solved by Crick and Watson under the somewhat sceptical auspices of William’s son Lawrence Bragg, head of the Cavendish Laboratory in Cambridge.

All those macromolecular materials do form crystals. But one of Bernal’s great insights (if not his alone) was to recognize that the lack of long-ranged order in a material was no obstacle to the use of X-rays for deducing its structure. That one could meaningfully talk about a structure for the liquid state was itself something of a revelation. What is sometimes overlooked is the good fortune that the natural first choice for such investigation of liquids – water, ubiquitous and central to life and the environment – happens to have an unusually high degree of structure. Indeed, Bernal first began his studies of liquid-state structure by regarding it as a kind of defective crystal.

The liquid state is notoriously problematic precisely because it bridges other states that can, at least in ideal terms, be considered as perfectly ordered (the crystal) and perfectly disordered (the gas). Is the liquid a dense gas or an imperfect solid? It has become clear today that neither view does full justice to the issue – not least because, in liquids, structure must be considered not only as a spatial but also as a temporal property. We are still coming to terms with that fact and how best to represent it, which is one reason why there is still no consensus “structure of water” in the same way as there is a structure of ice. What is more, it is also now recognized that there is a rich middle ground between crystal and gas, of which the liquid occupies only a part: this discussion must also encompass the quasi-order or partial order of liquid crystals and quasicrystals, the ‘frozen disorder’ of glasses, and the delicate interplay of kinetic and thermodynamic stability. X-ray diffraction has been central to all of these ideas, and it offered Bernal and others the first inkling of how we might meaningfully talk about the elusive liquid state.

Mixed metaphors

One of the first attempts to provide a molecular picture of liquid water came from the discoverer of X-rays themselves, Wilhelm Röntgen. In 1891 Röntgen suggested that the liquid might be a mixture of freely diffusing water molecules and what he termed “ice molecules” – something akin to ice-like clusters dispersed in the fluid state. He suggested that such a ‘mixture model’, as it has become known, could account for many of water’s anomalous properties, such as the decrease in viscosity at high pressure. Mixture models are still proposed today [2,3], attesting to the tenacity of the idea that there is something crystal-like in water structure.

X-ray scattering was already being applied to liquids, in particular to water, by Peter Debye and others in the late 1920s. These experiments showed that there was structural information in the pattern: a few broad but clearly identifiable peaks, which Debye interpreted as coming from both intra- and intermolecular interference. In 1933 Bernal and his colleague Ralph Fowler set out to devise a structural model that might explain the diffraction pattern measured from water. It had been found only the previous year that the water molecule has a V shape, and Bernal and Fowler argued from quantum-chemical considerations that it should have positive charges at the hydrogen atoms, balanced by two lobes of negative charge on the oxygen to produce a tetrahedral motif. On electrostatic grounds, each molecule should then form hydrogen bonds with four others in a tetrahedral arrangement. Noting the similarity with the tetrahedral structure in silicates, Bernal and Fowler developed a model in which water was regarded as a kind of distorted quartz. Their calculations produced fair agreement with the X-ray data: the peaks were in the right places, even if their intensities did not match so well [4].
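The kind of interference Debye invoked can be sketched with the Debye scattering formula, which gives the orientationally averaged diffraction intensity from any cluster of point scatterers. The snippet below is only an illustration – a single rigid tetrahedron of four identical scatterers at a nominal 2.8-unit separation (roughly the O…O distance in water, in ångströms), not Bernal and Fowler’s actual quartz-like model:

```python
import numpy as np

def debye_intensity(positions, q_values):
    """Debye scattering formula for identical point scatterers:
    I(q) = sum over pairs i,j of sin(q r_ij) / (q r_ij),
    with the i == j (self) terms equal to 1."""
    diffs = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diffs ** 2).sum(axis=-1))   # pairwise distance matrix
    intensity = []
    for q in q_values:
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(qr/pi) = sin(qr)/(qr),
        # and it correctly gives 1 at r = 0 (the self terms)
        intensity.append(np.sinc(q * r / np.pi).sum())
    return np.array(intensity)

# A regular tetrahedron of scatterers, scaled so each edge is 2.8 units
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
verts *= 2.8 / np.linalg.norm(verts[0] - verts[1])

q = np.linspace(0.5, 10, 50)   # scattering vector magnitudes, arbitrary range
I = debye_intensity(verts, q)
```

Even this toy cluster produces broad intensity oscillations whose positions encode the characteristic interatomic distance – the basic logic by which a structural model of the liquid can be tested against the measured peaks.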

This work established some of the core ideas of water structure, in particular the tetrahedral coordination. It set the scene for other models that started from a crystalline viewpoint. Notably, Henry Eyring and colleagues at the University of Utah devised a general picture of the liquid state consisting of an essentially crystalline close-packing threaded with many dislocations [5]. Molecules that escape from this close-packing can, in Eyring’s picture, wander almost gas-like between the dense clusters, making it a descendant of Röntgen’s mixture model.

Building liquids by hand

But Bernal was not happy with this view of the liquid as a defective solid, saying that it postulates “a greater degree of order…in the liquid than actually exists there” [6]. In the 1950s he started again, this time by considering a ‘simple liquid’ in which the molecules are spheres that clump together in an unstructured (and presumably dynamic) heap. Bernal needed physical models to guide his intuition, and during this period he constructed many of them, some now sadly lost. He used ball bearings to build dense random packings, or to see the internal structure better he would prop apart ping-pong balls or rubber balls with wires or rods, sometimes trying to turn himself into the required randomizing influence by selecting rods of different length without thinking. He was able to construct models of water that respected the local tetrahedral arrangement while producing no long- or medium-range order among molecules: a random hydrogen-bonded network in which the molecules are connected in rings with between four and seven members, as opposed to the uniformly six-membered rings of ordinary ice. Not only did this structure produce a good fit to the X-ray data (he counted out the interatomic distances by hand and plotted them as histograms), but the model liquid proved to have a higher density than ice, just as is the case for water [7].
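Bernal’s hand-counting of interatomic distances amounts to building a pair-distance histogram, the precursor of the radial distribution functions extracted from diffraction data today. Here is a minimal sketch, using random sequential placement of non-overlapping spheres as a crude stand-in for his ball-bearing heaps (the box size, sphere diameter, and sample size are arbitrary choices for illustration, not Bernal’s):

```python
import numpy as np

rng = np.random.default_rng(0)

def periodic_dist(a, b, box):
    """Distance between two points under the minimum-image convention."""
    d = np.abs(a - b)
    d = np.minimum(d, box - d)
    return np.sqrt((d ** 2).sum())

def random_sequential_packing(n, box=10.0, diameter=1.0, max_tries=100000):
    """Drop spheres at random, rejecting any that overlap an earlier one."""
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        p = rng.uniform(0, box, 3)
        if all(periodic_dist(p, q, box) >= diameter for q in pts):
            pts.append(p)
    return np.array(pts)

def distance_histogram(pts, box, bins=30, r_max=5.0):
    """Tally all pair distances -- the quantity Bernal counted by hand."""
    dists = [periodic_dist(pts[i], pts[j], box)
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
    return np.histogram(dists, bins=bins, range=(0.0, r_max))

pts = random_sequential_packing(200)
counts, edges = distance_histogram(pts, box=10.0)
```

Dividing such counts by the volume of each spherical shell gives the radial distribution function, whose first peak marks the contact distance; for a genuinely dense random packing like Bernal’s, the peaks beyond it wash out quickly – short-range order without long-range order.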

This ‘mixed-ring’ random network supplies the basis for most subsequent molecular models of water [8, 9], although it is now clear that the network is highly dynamic – hydrogen bonds have a lifetime of typically 1 ps – and permeated with defects such as bifurcated and distorted hydrogen bonds [9, 10].

But although the tetrahedron seems to fit the local structure of water, that liquid is unusual in this regard, having a local geometry that is dictated by the high directionality of the hydrogen bonds. At the same time as Bernal was developing these ideas in the 1950s, Charles Frank at the University of Bristol proposed that for simple liquids, such as monatomic liquids and molten metals, a very common motif of short-ranged structure is instead the icosahedron [11]. This structure, Frank argued, provides the closest packing for a small number of atoms. But as one adds successive layers to an icosahedral cluster, the close-packing breaks down. What is more, the clusters have fivefold symmetry, which is incompatible with any crystalline arrangement. It is because of this incommensurability, Frank said, that liquid metals can be deeply supercooled without nucleating the solid phase. It was after hearing Frank speak on these ideas about polyhedral packings with local fivefold symmetry – very much within the context of the solid-state physics that was Frank’s speciality – that Bernal was prompted to revisit his model of the liquid state in the 1950s.

Forbidden order

Both X-ray scattering [12, 13] and X-ray spectroscopy [14] now offer some support for Frank’s picture of liquid metals, showing that something like icosahedral structures do form in metastable supercooled melts. Frank’s hypothesis had already been recalled in 1984, however, when the discovery was reported of a material that seemed to have crystalline icosahedral order: a quasicrystal [15]. Electron diffraction from an alloy of aluminium and manganese produced a pattern of sharp peaks with tenfold symmetry, which is now rationalized in terms of a solid structure that has local five- and tenfold symmetry but no perfect long-range translational order. Such materials are now recognized by the International Union of Crystallography as formally crystalline, according to the definition that they produce sharp, regular diffraction peaks. Frank’s icosahedral liquid clusters could provide the nuclei from which these quasicrystalline phases form, and synchrotron X-ray diffraction from a supercooled melt of a Ti-Zr-Ni alloy shows that their formation does indeed precede the activated formation first of a metastable quasicrystalline phase and then of a stable crystal [12].

It seems fitting that Linus Pauling, whose work helped to explain the structures of water and ice, should have entered the early debate over the interpretation of quasicrystal diffraction. Pauling was on the wrong side, insisting dismissively that this was probably just a case of crystal twinning [16]. But he, Bernal, Frank, and indeed William Bragg himself (a pioneer of X-ray studies of liquid crystals) all grappled with the question of how far ideas from crystalline matter can be imported into the study of the liquid state. Or to put it another way, they showed that X-ray crystallography is better viewed not as a method of probing order in matter, but as a means of examining the extent to which matter can be regarded as orderly. With the advent of high-intensity synchrotron sources that reduce exposure times sufficiently to study ultrafast dynamic processes by X-ray diffraction [17, 18], it is now possible to explore that question as a function of the timescale being probed. It has been suggested that the recent debate about the structure of water – a discussion that has oscillated between the poles of Bernal’s random tetrahedral network and mixture models – is itself all a matter of defining the notion of ‘structure’ on an appropriate timescale [19].

Such studies also seem to be confirming that Bernal asked the right question about the liquid state in 1959 (even if he phrased it as a statement): “it is not the fluidity of the liquid that gives rise to its irregularity. It is its irregularity that gives rise to its fluidity.” [7] Which is it, really? Do defects such as bifurcated hydrogen bonds give water its fluidity [10]? Or is it the dynamical making and breaking of hydrogen bonds that undermines the clathrate-like regularity proposed by Pauling [20]? Whatever the case, there is little doubt now that, as Bernal perceived, extending X-ray diffraction to biomolecules and liquids – and now to quasicrystals and all manner of soft matter – has led to a broader view of what crystallography is:

“And so there are no rules, or the old rules are enormously changed… We are seeing now a generalized crystallography, although it hasn’t been written up as such… [These materials] have their own inner logic, the same kind of logic but a different chapter of the logic that applies to the three-dimensional regular lattice crystals.” [21]

References
[1] A. L. Mackay, Journal of Physics: Conference Series 57, 1–16 (2007)
[2] C. H. Cho, S. Singh & G. W. Robinson, Faraday Discuss. 103, 19-27 (1996)
[3] C. Huang et al., Proc. Natl Acad. Sci. 106, 15214-15218 (2009)
[4] J. D. Bernal and R. H. Fowler, J. Chem. Phys. 1, 515 (1933)
[5] H. Eyring, F. W. Cagle, Jr. & C. J. Christiansen, Proc. Natl Acad. Sci. 44, 123-126 (1958)
[6] J. L. Finney, Journal of Physics: Conference Series 57, 40–52 (2007)
[7] J. D. Bernal, Proc. R. Inst. Great Britain 37, 355-393 (1959)
[8] F. H. Stillinger, Science 209, 451-457 (1980)
[9] J. L. Finney, Philos. Trans. R. Soc. Lond. B 359, 1145-1165 (2004)
[10] F. Sciortino, A. Geiger & H. E. Stanley, Nature 354, 218-221 (1991)
[11] F. C. Frank, Proc. R. Soc. London, Ser. A 215, 43-46 (1952)
[12] K. F. Kelton et al., Phys. Rev. Lett. 90, 195504 (2003)
[13] T. Schenk et al., Phys. Rev. Lett. 89, 075507 (2002)
[14] A. Filipponi, A. Di Cicco & S. De Panfilis, Phys. Rev. Lett. 83, 560 (1999).
[15] D. Shechtman, I. Blech, D. Gratias, & J. W. Cahn, Phys. Rev. Lett. 53, 1951–1953 (1984)
[16] L. Pauling, Nature 317, 512-514 (1985)
[17] H. Ihee et al., Science 309, 1223-1227 (2005)
[18] S. Bratos and M. Wulff, Adv. Chem. Phys. 137, 1-29 (2008)
[19] T. D. Kühne and R. Z. Khaliullin, Nat. Commun. 4, 1450 (2013)
[20] L. Pauling, in D. Hadzi & H. W. Thompson (eds), Hydrogen Bonding, 1-6 (Pergamon Press, New York, 1959)
[21] J. D. Bernal, opening remarks in G. E. W. Wolstenholme & M. O’Connor (eds), Principles of Biomolecular Organization (Little, Brown & Co., Boston, 1966)

Thursday, July 17, 2014

How to get your starter for ten


This piece started off headed for Prospect's blog, but didn't quite make it. So you get it instead.

__________________________________________________________________

For the first time in its 52-year history, the BBC’s student quiz show University Challenge has allowed the cameras behind the scenes to reveal how the teams get selected and what it’s like for them facing fiercely recondite questions while Jeremy Paxman barks “Come on!” at them.

Well, sort of. For of course what you’re seeing in these entertaining programmes is as carefully stage-managed and engineered as any other documentary. The Oxbridge teams look like cocky snobs, if not downright peculiar, while the redbrick teams are the plucky give-it-a-go underdogs played by James McAvoy in the UC-based film Starter For Ten. In the first episode the students hadn’t got within barking distance of Paxman, but already they had to jump through all manner of hoops, passing the gruelling qualifying test and having to convince the BBC’s selection team of their telegenic potential (which makes you wonder, on occasion, what some of the teams who didn’t make the cut must have come across like).

What I think the programmes will struggle to convey, however, is the sheer terror of sitting behind those surprisingly flimsy tables with a red buzzer in front of you and a name panel that will light up to announce your desperate ignorance. I know, because I have done it.

In recent years, UC has staged occasional mini-tournaments dubbed “The Professionals”, in which the teams are composed not of fresh-faced students but jaded oldies representing a particular organization or guild, who have long forgotten, if they ever knew, how to integrate cosines or who was Stanley Baldwin’s Chancellor of the Exchequer. In 2006 Prospect magazine – “Britain’s intelligent conversation”, after all – was invited to take part, and I was asked to be the obligatory “scientist” on the team.

Let me say again: it was unspeakably scary. If I looked cadaverous, as my wife helpfully told me, it was because I had no sleep the night before we travelled up to Manchester to face the Paxman inquisition.

I had the distinct disadvantage – an inexcusable solecism, I now realise, for anyone who professes to know anything about anything – of not having watched the show previously, except for the one where Rik Mayall and pals pour water on the heads of Stephen Fry and the other toffs below. Students, don’t make that mistake. Only after repeated viewing do you see that you must trust your instincts and not double-check your answer before blurting it out. Yes, you might say something spectacularly foolish, but the chance is greater that you’ll be spot on. Now, I might add, I watch UC obsessively, like Christopher Walken driven by his trauma to repeat games of Russian roulette in the dingy bars of Hanoi.

So while I feel for the poor students having to work so hard to get on the show, that preparation is worth it. The Prospect team had the benefit only of watching a couple of old episodes at the editor and captain David Goodhart’s house, then taking a written test one gloomy evening in an empty office block in Farringdon. Then it was off to face the bright lights.

What contestants must know is that button technique is everything. You think that the person who buzzes is the one who knows the answer? As often as not, he or she just has the quickest finger. What’s more, too much knowledge can hinder as much as it helps – you start going down all kinds of blind alleys rather than plumping for the obvious. Our second game opened with the kind of question that contestants dream of, in which some obscure, random cache of information promises to make you look like a cultured genius. “Which cathedral city is associated with…?” Well, Paxman seems to be talking about the twelfth-century theologian John of Salisbury, one-time bishop of Chartres – although did he say the man was a biographer of Anselm of Bec or Anselm and Becket, which changes things…? Well let’s see, thinks the man who has just written a book on Chartres cathedral, John studied in Paris, so maybe… By which time the question has moved on to the vital clue that allows the opposite team to buzz in with “Salisbury”. (You see, I knew, I knew!)

But the Professionals have another disadvantage, which is that they will have been around for long enough to be supposed to have picked up some kind of expertise – and your reputation as an “expert” is therefore on the line in a way that it isn’t for tender undergraduates who have nothing yet to lose. The terror is not that you’ll fail to know obscure Pacific islands but that you’ll foul up on the easiest of questions about your own speciality. This can undo the strongest of us. You might think that Prospect’s previous editor would be justifiably confident in his encyclopaedic knowledge and cultural breadth, but in fact David became so convinced that there was going to be a question on the then-current government of Germany – German politics being his forte – that he was phoning the office moments before the game to get a rundown of all Angela Merkel’s ministers. Strangely, that question never came up.

I found the answer to something that might have puzzled you: how is it that Jeremy Paxman, betrayed by his researchers or his interpretation, occasionally gets away with announcing a wrong answer? The BBC team recognize that their research isn’t infallible, and all contestants are told that they can challenge a response if they think their answer has been wrongly dismissed – you have to buzz again. Such interruptions would be edited out anyway (the filming isn’t quite as smooth and seamless as it appears). But you tell me: who, especially if you are nineteen years old, is going to buzz Jeremy Paxman and tell him that he’s got it wrong? That’s how Paxo once got away with my favourite blooper, pronouncing incorrect the answer “Voodoo Child” to a question about that Jimi Hendrix song because he assumed that the slang spelling “Voodoo Chile” on his card implied that this must be a song about magical practices in the native land of Pablo Neruda.

And what do you do if your answer is not just wrong but spectacularly so, a blunder that plunges you into a James McAvoy moment, filling a million screens with your mortification? A member of our team showed the way. A man of immense learning, his answer was so obviously wrong that my jaw dropped. But if he died inside, you would never have known it from his nonchalant smile. Such a dignified and elegant way of dealing with a screaming gaffe is worth aspiring to.

Yes, it’s all about lessons in life, students. And let me tell you that the most important is the hoary old claim of the loser: it’s how you compete, not where you finish. When we learnt that the winning side in our competition had been practising with home-made buttons for weeks, and when they gracelessly said they hoped we’d win our second round because “we knew we could beat you”, then I knew that there are indeed more important things than coming first.

Would I go through it all again? I’m not sure my wife would let me, but I’m a little ashamed to say that I wouldn’t hesitate.

Wednesday, July 16, 2014

Unnatural creations

Here is a commentary that I have just published in the Lancet.

___________________________________________________________________

“I don’t think we should be motivated by a fear of the unknown.” Susan Solomon, chief executive of the New York Stem Cell Foundation, made this remark in the context of the current debate on mitochondrial transfer for human reproduction. Scientists working on the technique presented evidence to the US Food and Drug Administration last February in hearings to determine whether safety concerns are sufficiently minimal to permit human trials to proceed.

Although the hearings were restricted to scientific, not social or ethical, issues, Solomon was responding to a perception that already the topic was becoming sensationalized. Critics have suggested that this research “could open the door to genetically modified children”, and that it would represent an unprecedented level of experimentation on babies. Newspapers have decreed that, since mitochondrial transfer will introduce the few dozen mitochondrial genes of the donor into the host egg, the technique will create “three-parent babies”. There seems little prospect that Solomon’s appeal will be heeded.

The issue is moot for the present, because the scientific panel felt that too many questions remain about safety to permit human trials. However, the method – which aims to combat harmful genetic mutations in the mitochondria of the biological mother while still enabling her to contribute almost all of her DNA to an embryo subsequently made by IVF – is evidently going to be beset by questions about what is right and proper in human procreation.

In part, this is guilt by association. Because mitochondrial transfer introduces genes foreign to the biological parents, it is seen as a kind of genetic modification of the same ilk as that associated with alleged “designer babies”. That was sufficient justification for Marcy Darnovsky, executive director of the California-based Center for Genetics and Society, to warn that human trials would begin “a regime of high-tech consumer eugenics”: words calculated to invoke the familiar spectre of totalitarian social engineering. But the debate also highlights the way in which technologies like this are perceived as a challenge to the natural order, to old ideas of how babies are “meant” to be made.

All of this is precisely what one should expect. The same imagery has accompanied all advances in reproductive science and technology. It is imagery with ancient roots, informed by a debate that began with Plato and Aristotle about the possibilities and limitations of human art and invention and to what extent they can ever compare with the faculties of nature. J. B. S. Haldane understood as much when he wrote in his 1924 book Daedalus, or Science and the Future that
“The chemical or physical inventor is always a Prometheus. There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which had not previously heard of their existence, would not appear to him as indecent and unnatural.”

In Haldane’s time one of the most potent mythical archetypes for these ‘perversions’ of nature was Faust’s homunculus, the ‘artificial being’ made by alchemy. The message of the Faust legend seemed to be that human hubris, by trying to appropriate godlike powers, would lead to no good. That was the moral many people drew from Mary Shelley’s secular retelling of the Faust legend in 1818, in which Frankenstein’s punishment for making life came not from God or the Devil but from his creation itself. While Shelley’s tale contains far more subtle messages about the obligations and responsibilities of parenthood, it was all too easy to interpret it as a Faustian fable about the dangers of technology and the pride of the technologists. Many still prefer that view today.

This was surely what led IVF pioneer Robert Edwards to complain that “Whatever today’s embryologists may do, Frankenstein or Faust or Jekyll will have foreshadowed, looming over every biological debate.” But Edwards could have added a more recent blueprint for fears about where reproductive technologies would lead. He had, after all, seen evidence of it already. When Louise Brown was born in 1978, Newsweek announced that it was “a cry around the brave new world.”

One could charge Aldous Huxley with having a lot to answer for. His most famous book now rivals Frankenstein as the off-the-shelf warning about where all new reproductive technologies will lead: to a totalitarian state biologically engineered into a strict social hierarchy, devoid of art, inspiration or human spirit. Science boosterists thought so at the time: H. G. Wells considered that Huxley had “betray[ed] the future.” But Huxley was only exploring the ideas that his biologist brother Julian, along with Haldane, was discussing at the time, including eugenic social engineering and the introduction of in vitro gestation or “ectogenesis”. That we continue to misappropriate Brave New World today, as if it were a work of futurology rather than (like much of the best science fiction) a bleak social satire of its times, suggests that it fed myths we want to believe.

One of the most powerful of these myths, which infuses Frankenstein but began in ancient Greece, is that there is a fundamental distinction between the natural and the artificial, and a “natural order” that we violate at our peril. In a recent study challenging the arguments for draconian restriction of human reproductive technologies, philosopher Russell Blackford remarks that
“where appeals against violating nature form one element in public debate about some innovation, this should sound an alarm. It is likely that opponents of the practice or technology are, to an extent, searching for ways to rationalize a psychological aversion to conduct that seems anomalous within their contestable views of the world.”

In other words, accusations of “unnaturalness” may be the argument of last resort for condemning a technology when a more rational objection is not so easily found. Blackford shows that it is extremely hard to develop such objections with rigour and logical consistency. But the fact is that these accusations are often the arguments of first resort. In public opinion they tend to be dubbed the “Yuk!” factor, which conservative bioethicist Leon Kass dignifies with the term “the wisdom of repugnance”: “the emotional expression of deep wisdom, beyond reason’s power fully to articulate it.” In other words, one can intuit the correct response without being obliged to justify it. Whether there is wisdom in it or not, disgust at “violating nature” has a long history. “We should not mess around with the laws of nature”, insisted one respondent in Life magazine’s survey on reproductive technologies when IVF was becoming a reality in 1969.

These attitudes need probing, not simply ridiculing. One common thread in such responses, both then and now, is a fear for the traditional family. It is a fear that reaches beyond reason, because the new technologies become a lightning rod for concerns that already exist in our societies. Take the worry voiced by around 40 percent of participants in the Life poll that a child conceived by IVF “would not feel love for family”. Such an incoherent collision of anxieties will resist all inroads of reason. A review of my 2011 book Unnatural in the conservative magazine Standpoint took it for granted that a defence of IVF was a defence of single-parent families, making the book merely “erudite propaganda in the ongoing cultural war against the traditional family and the values and beliefs that have traditionally sustained it”.

These assumptions are not always so easily spotted – which brings us back to “three-parent embryos”. This label prejudices the discussion from the outset: what could possibly be more unnatural than three parents? Only on reflection do we realise we probably already know some three-parent families: gay couples with children via sperm donation, step-parents, adoptive families. The boundaries of parental and family units are in any case more fluid in many cultures outside of Europe and the United States. Ah, but three genetic parents – surely that is different? Perhaps so if we like to sustain the convenient fiction that our parents acquired their genes de novo, or that the word “parent” is exclusively linked to the contribution of DNA rather than of love and nurture. Calling an embryo created by mitochondrial replacement a “three-parent baby” perhaps makes sense in a world where we tell children that all babies are made by a mummy and daddy who look after them for life. But I suspect most parents no longer do that, and feel that their duties do not either begin or end with their chromosomes.

In a poll in the early 1980s in Australia – the second country to achieve a successful live birth through IVF – the most common reason given for opposition to the technique was that it was thought to be ‘unnatural’. Why does this idea still have such resonance, and what exactly does it mean?

People have spoken since antiquity about actions that are contra naturam. But they didn’t necessarily mean what we mean. The simple act of lifting up an object was contra naturam according to Aristotelian physics, which ascribed to heavy things a natural tendency to fall. This was a simple, neutral description of a process. Today, saying something is unnatural or ‘against nature’ has a pejorative intent: the Germanic prefix ‘un-’ implies moral reprehension. This is a corollary of the ‘natural law’ outlined by Thomas Aquinas in the thirteenth century, whereby God created a teleological universe in which everything has a natural part to play and which gives a direction to the moral compass. The implication remains in the Catholic Catechism: God intended the natural end of sex to be procreation, ergo the natural beginning of procreation must be sex (not sperm meeting egg, but an approved conjunction of body parts).

Those who oppose mitochondrial transfer on grounds of discomfort about its “naturalness” are not, in all probability, appealing to Aquinas. But those who support it might need to recognize these roots – to move beyond logical and utilitarian defences, and understand that the debate is framed by deep, often hidden ideas about naturalness. This is a part of what makes us fear the unknown.

Further reading

P. Ball (2011). Unnatural: The Heretical Idea of Making People. Bodley Head, London.
L. Daston & F. Vidal (eds) (2004). The Moral Authority of Nature. University of Chicago Press, Chicago.
D. Evans & N. Pickering (eds) (1996). Conceiving the Embryo. Martinus Nijhoff, The Hague.
J. B. S. Haldane (1924). Daedalus; or, Science and the Future. Kegan Paul, Trench, Trubner & Co., London.
L. R. Kass (1985). Toward a More Natural Science: Biology and Human Affairs. Free Press, New York.
M. J. Mulkay (1997). The Embryo Research Debate: Science and the Politics of Reproduction. Cambridge University Press, Cambridge.
S. M. Squier (1994). Babies in Bottles: Twentieth-Century Visions of Reproductive Technology. Rutgers University Press, New Brunswick, NJ.

Monday, July 14, 2014

A feeling for flow

Here is a slightly different version of my article for Nautilus on turbulence – in particular, with several more images, as this piece seemed to cry out for them.

____________________________________________________________________

When the German physicist Arnold Sommerfeld assigned his most brilliant student a subject for his doctoral thesis in 1923, he admitted that “I would not have proposed a topic of this difficulty to any of my other pupils.” Those others included such geniuses as Wolfgang Pauli and Hans Bethe, yet for Sommerfeld the only one who was up to the challenge of this particular subject was Werner Heisenberg.

Heisenberg went on to be a key founder of quantum theory, for which work he was awarded the 1932 Nobel prize in physics. He developed one of the first mathematical descriptions of this new and revolutionary discipline, discovered the uncertainty principle, and together with Niels Bohr he engineered the “Copenhagen Interpretation” of what quantum theory means, to which many physicists still adhere today.

The subject of Heisenberg’s doctoral dissertation, however, wasn’t quantum physics. It was harder than that. The 59-page calculation that he submitted to the faculty of the University of Munich later in 1923 was titled “On the stability and turbulence of fluid flow.”

Sommerfeld had been contacted by the Isar Company of Munich, which was contracted to prevent the Isar River from flooding by building up its banks. The company wanted to know at what point the river flow changed from being smooth (the technical term is ‘laminar’) to being turbulent, beset with eddies. That question requires some understanding of what turbulence actually is. Heisenberg’s work on the problem was impressive – he solved the mathematical equations of flow at the point of the laminar-to-turbulent change – and it stimulated ideas for decades afterwards. But he didn’t really crack it – he couldn’t construct a comprehensive theory of turbulence.

Heisenberg was not given to modesty, but it seems he had no illusions about his achievements here. One popular story says that he was once asked what he would ask God. (Whether Heisenberg possessed a sufficiently unblemished character to get him a divine audience is a matter that still divides historians, but the story implies he had no misgivings about that.) “When I meet God”, he is said to have replied, “I am going to ask him two questions. Why relativity? And why turbulence? I really believe he will have an answer for the first.”

It is probably an apocryphal tale. The same remark has been attributed to at least one other person: the British mathematician and expert on fluid flow, Horace Lamb, is said to have hoped that God might enlighten him on quantum electrodynamics and turbulence, saying that “about the former I am rather optimistic.” You get the point: turbulence, a ubiquitous and eminently practical problem in the real world, is frighteningly hard to understand.

It’s still, almost a century after Heisenberg, a cutting-edge problem in science, exemplified by the award of the 2014 Abel Prize for mathematics – often seen as the “Nobel of maths” – to the Russian mathematician Yakov Sinai, in part for his work on turbulence and chaotic flow.

Yet I propose that, to fully articulate and understand turbulence, we need the perspectives of both intuitive description and detailed analysis – both art and science. It is no coincidence that the science of turbulence has often been forced to fall back on qualitative accounts, while art that celebrates turbulence sometimes resembles a quasi-scientific gathering of data and idealization of form. Intuition of turbulent flow can serve the mathematician and the engineer, while careful observation and even experiment can benefit the artist. Scientists tend to view turbulence as a form of “complexity”, a semi-technical term which just tells us that there is a lot going on and that everything depends on everything else – and that a reductionist approach therefore has limits. Rather than regarding turbulence as a phenomenon awaiting a complete mathematical description, we should see it as one of those concepts, like life, love, language and beauty, that overlaps with science but is not wholly contained within it. Turbulence has to be experienced to be grasped.

Into the storm

It’s not hard to see why turbulence has been so hard for science to understand – and I mean that literally. When you look at a turbulent flow – cream in stirred coffee, say, or a jet of exhaled air traced out in the smoke of a cigarette – you can see that it is full of structure, a profound sort of organization made up of eddies and whirls of all sizes that coalesce for an instant before dissolving again. That’s rather different to what we imply in the colloquial use of the word, to describe, say, a life, a history, a society. Here we tend to mean that the thing in question is chaotic and random, a jumble within which it is difficult to identify any cause and effect. But pure randomness is not so hard to describe mathematically: it means that every event or movement in one place or at one time is independent of those at others. On average, randomness blurs into dull uniformity. A turbulent flow is different: it does have order and coherence, but an order in constant flux. This constant appearance and disappearance of pockets of organization in a disorderly whole has a beautiful, mesmerizing quality. For this reason, turbulence has proved as irresistible to artists as it is intransigent to scientists.


Turbulence: complicated, chaotic, but not random

Flows of fluids – liquids and gases – generally become turbulent once they start flowing fast enough. When they flow slowly, all of the fluid moves in parallel, rather like ranks of marching soldiers: this is laminar flow. But as the speed increases, the ranks break up: you could say that the “soldiers” begin to bump into one another or move sideways, and so swirls and eddies begin to form. This transition to turbulence doesn’t happen at the same flow speed for all fluids – more viscous ones can be “kept in line” at higher speeds than very runny ones. For flow down a channel or pipe, a quantity called the Reynolds number determines when turbulence appears: roughly speaking, this is the ratio of the flow speed to the viscosity of the fluid. Turbulence develops at high values of the Reynolds number. The quantity is named after Osborne Reynolds, an Anglo-Irish engineer whose pioneering work on fluid flow in the nineteenth century provided the foundation for Heisenberg’s work.
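More precisely, the Reynolds number compares a flow speed and a characteristic size of the flow with the fluid’s viscosity. A minimal sketch of the arithmetic – the numerical values below are illustrative guesses, not figures from this article:

```python
# Rough sketch: Reynolds number Re = U * L / nu for a channel flow.
# U: flow speed (m/s); L: a characteristic length such as the channel
# depth (m); nu: kinematic viscosity (m^2/s). Turbulence sets in at high Re.

def reynolds_number(speed, length, kinematic_viscosity):
    return speed * length / kinematic_viscosity

# Water (nu ~ 1e-6 m^2/s) in a river two metres deep flowing at 1 m/s:
re_river = reynolds_number(speed=1.0, length=2.0, kinematic_viscosity=1e-6)

# A far more viscous fluid (nu ~ 1e-2 m^2/s) at the same speed and depth:
re_viscous = reynolds_number(speed=1.0, length=2.0, kinematic_viscosity=1e-2)

print(f"river: Re ~ {re_river:.0e}, viscous fluid: Re ~ {re_viscous:.0e}")
# river: Re ~ 2e+06, viscous fluid: Re ~ 2e+02
```

The river sits far above the transition threshold (of order thousands for pipe flow), so its flow is turbulent; the viscous fluid, at the same speed, stays comfortably laminar – the “kept in line” effect described above.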

Many of the flows we encounter in nature have high Reynolds numbers, for example in rivers and atmospheric air currents like the jet streams. The eddies and knots of air turbulence can make for a bumpy ride when an aircraft passes through them.

Turbulence provides a perfect example of why a problem is not solved simply by writing down a mathematical equation to describe it. Such equations exist for all fluid flows, whether laminar or turbulent: they are called the Navier-Stokes equations, and they amount largely to an expression of Isaac Newton’s second law of motion (force = mass times acceleration) applied to fluids. These equations are the bedrock of the modern investigation of flow in the science of fluid dynamics.
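For the mathematically curious, the standard incompressible form of these equations is the textbook statement below: the left-hand side is the “mass times acceleration” of a parcel of fluid, the right-hand side the forces acting on it.

```latex
% Incompressible Navier-Stokes equations (standard textbook form).
% u is the fluid velocity field, p the pressure, rho the density,
% mu the viscosity and f any external body force (e.g. gravity).
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
      + \mathbf{u} \cdot \nabla \mathbf{u} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0 .
```

The troublesome term is \(\mathbf{u} \cdot \nabla \mathbf{u}\): the flow is carried along by itself, which is what makes each part of the flow depend on all the others.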

The problem is that, except in a few particularly simple cases, the equations can’t be solved. Yet it’s those solutions, not the equations themselves, that describe the world. What makes the solutions so complicated is that, crudely speaking, each part of the flow depends on what all the other parts are doing. When the flow is turbulent, this inter-dependence is extreme and the flow therefore becomes chaotic, in the technical sense that the smallest disturbances at one time can lead to completely different patterns of behaviour at a later moment.

Observation and invention

Pretty much all scientific histories of the problem of turbulence start in the same place: with the sketches of turbulent flow made by Leonardo da Vinci in the fifteenth century. For the most part these commentaries do not really know what to do with Leonardo’s efforts, other than to commend him for his careful observation before leaping ahead to the more recognizably scientific work on turbulence by Reynolds. Artists, meanwhile, sidelined Leonardo’s schematic representations in favour of a more impressionistic or ostensibly realistic play of light and movement in chaotic waters – not until the Art Nouveau movement do we see something like Leonardo’s arabesque sketches return.


Leonardo’s drawing of turbulence in an artificial waterfall.

But what Leonardo was up to was rather profound. In the words of art historian Martin Kemp, Leonardo regarded nature “as weaving an infinite variety of elusive patterns on the basic warp and woof of mathematical perfection.” He was trying to grasp those patterns. So when he drew an analogy between the braided vortices in water flowing around a flat plate in a stream, and the braids of a woman’s hair, he wasn’t just saying that one looks like the other – he was positing a deep connection between the two, a correspondence of form in the manner that Neoplatonic philosophers of his age deemed to exist throughout the natural world. For the artist as much as the scientist, what mattered was not the superficial and transient manifestations of these forms but their underlying essence. This is why Leonardo didn’t imagine that the artist should be painting “what he sees”, but rather, what he discerns within what he sees. It therefore behooves the artist to invent: painting is “a subtle inventione with which philosophy and subtle speculation considers the natures of all forms.” That’s not a bad definition of science, when you think about it.



Sketches of complex flows in water by Leonardo da Vinci (top), in which he saw analogies with braided hair (bottom).

As something approaching a Neoplatonist himself, Leonardo saw this implicit order in fluid flow as a static, almost crystalline entity: his sketches have a solidity to them, seeming almost to weave water into ropes and coils. There can be a similar frozen tangibility to the depictions of turbulent flow in East Asian art, some of which predate Leonardo by several centuries. The early Qing Dynasty painter Shitao in the late seventeenth century drew an analogy between water waves and mountain ranges – a comparison that is explicitly rendered by Shitao’s friend Wang Gai in The Mustard-Seed Garden Manual of Painting. Here the serried ranks of waves could almost be the limestone peaks of Guilin, while the frothy tendrils of breaking wave-crests recall the pitted and punctured pieces of rock with which Chinese intellectuals loved to adorn their gardens. For Chinese artists, working within a context that idealized the artistic contemplation of the Yangtze and the other great waterways, these flow forms are mostly those one can find in rivers and streams. On the island nation of Japan, beset by tsunamis, it is instead the ocean’s waves that supply the archetypes, most famously in the prints of Hokusai.

For Chinese artists, the forms of turbulent flow were defined not by a static but by a dynamic principle: the ebb and flow of a natural energy called qi, which supplies the creative spontaneity of Taoist philosophy. The artist captured this energy not with slow, meticulous attention to detail but with a free movement of the wrist that imparted qi to the watery ink on the brush and thus to the trace it left on silk: the wrist, Shitao wrote, should be “flowing deep down like water.” It is this insistence on dynamic change that makes Chinese art a profound meditation on turbulence.

A new confluence?

One can’t help noticing how several of these images in East Asian art resemble the attempts of modern fluid dynamicists to capture the essentials of complex flow in so-called streamlines, which, to a rough approximation, trace out the trajectories of particles borne along in the flow.



Images of water from the seventeenth-century The Mustard-Seed Garden Manual of Painting (top), and streamlines in modern computer simulations of turbulent flow (bottom).

Are these resemblances more than superficial and coincidental? I think so: they express a recognition both that turbulent flows contain orderly patterns and forms, and that these have to be visualized in order to be appreciated. However, for scientists in the twentieth century this “deep structure” of turbulence became increasingly an abstract, scientific notion. One of the key advances in the science of turbulence came from the Soviet mathematical physicist Andrei Kolmogorov, under whose guidance Yakov Sinai began his work in the 1950s. By this time turbulence was regarded as a hierarchy of eddies of all different sizes, down which energy cascades from the largest to the smallest until ultimately being frittered away as heat in the friction of molecules rubbing viscously against one another. This picture of turbulence was famously captured by the English mathematician Lewis Fry Richardson, another pioneer of turbulence theory, in a 1922 poem indebted to Jonathan Swift:
"Big whirls have little whirls
That feed on their velocity,
And little whirls have lesser whirls
And so on to viscosity."

In the 1940s Kolmogorov calculated how much energy is bound up in the eddies of different sizes, showing that there is a rather simple mathematical relationship called a power law that relates the energy to the scale. This idea of turbulence as a so-called spectrum of different energies at different size scales is one that Heisenberg’s work on the subject had already begun to develop: it’s a very fruitful and elegant way of looking at the problem, but one in which the actual physical appearance of turbulent flow is subsumed into something much more recondite. Kolmogorov’s analysis can supply a statistical description of the buffeting, swirling masses of gases in the atmosphere of Earth or Jupiter – but what we see, and sometimes what concerns us most, is the individual vortices of a tropical cyclone or the Great Red Spot.
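Kolmogorov’s power law can be sketched in a few lines. The famous result is the exponent of -5/3; the constant and the particular values below are illustrative assumptions, not figures from this article:

```python
# Kolmogorov's 1941 result: in the "inertial range" of eddy sizes, the
# energy E(k) carried by eddies of wavenumber k (inverse eddy size) obeys
#     E(k) = C * epsilon**(2/3) * k**(-5/3),
# where epsilon is the rate at which energy cascades down the hierarchy
# and C is an empirical constant of order one.

def energy_spectrum(k, epsilon, C=1.5):
    return C * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

# The signature of a power law: doubling the wavenumber (halving the eddy
# size) always cuts the energy density by the same factor, 2**(5/3):
ratio = energy_spectrum(1.0, 1.0) / energy_spectrum(2.0, 1.0)
print(round(ratio, 2))  # 3.17, independent of epsilon and C
```

However fast the cascade runs, halving the eddy size trims the energy by that same factor of about 3.17 – which is why the relationship shows up “on average” in flows that look nothing alike in detail.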



Out of turbulence: a cyclone and Jupiter’s Great Red Spot.

But there were, at the same time, stranger currents at play. While Heisenberg was juggling with equations, an Austrian forest warden named Viktor Schauberger was feeling his way towards a more intuitive understanding of turbulent flow. Schauberger’s interest in the subject arose in the 1920s from his wish to improve log flumes so that they didn’t get jammed as they carried timber through the forest. This led him to develop an idiosyncratic theory of turbulent vortices which mutated into something akin to a theory of everything: a view of how energy pervades the universe, which he claimed yielded Einstein’s E=mc2 as a special case. It is said that Schauberger was forced by the Nazis to work on secret weapons related to his “implosion theory” of vortices, and even that he was taken for an audience with Hitler. After the war Schauberger was brought to the United States, where he became convinced that all his ideas were being stolen for military use.

Inevitably this is the stuff of conspiracy theory – Schauberger is said to have designed top-secret flying saucers powered by turbulent vortices. The spirit of his approach can also be discerned in the ideas of the German anthroposophist Theodor Schwenk in the 1950s and 60s. Schwenk claimed that his work was “based on scientific observations of water and air but above all on the spiritual science of Rudolf Steiner”, and he believed that the flow forms of water, and in particular the organization of vortices, reflect the wisdom of a teleological, creative nature. These “flow forms”, he said, are elements of a “cosmic alphabet, the word of the universe, which uses the element of movement in order to bring forth nature and man.”


“Flow forms” at a Californian biodynamic vineyard inspired by Theodor Schwenk’s work.

Schauberger and Schwenk were not doing science; it is not unduly harsh to say that, in the way they clothed their ideas in arcane theory disconnected from the scientific mainstream, they were practicing pseudo-science. Their appropriation by New Age thinkers today reflects this. But we shouldn’t be too dismissive of them on that account. One way to look at their work is as an attempt to restore the holistic, contemplative attitude exemplified by Leonardo to a field that seemed to be retreating into abstruse mathematics.

The gorgeous photographs of complex flow forms, of turbulent plumes and interfering waves and rippled erosion features in sand, in Schwenk’s 1963 book Sensitive Chaos offered a reminder that this was how flow manifests itself to human experience, not as an energy spectrum or hierarchical cascade. Such images seem to insist on a spontaneous natural creativity that is a far cry from the deterministic mechanics of a Newtonian universe. Schwenk himself suggested that images of vortices and waves in primitive art, such as the stone carvings on the Neolithic burial chamber at Newgrange in Ireland, were intuitions of the fecund cosmic language of flow forms.


Are these swirling forms engraved in rock at the Neolithic chamber at Newgrange prehistoric intuitions of flow patterns?

Flow on film

However sniffy scientists might be about Schauberger and Schwenk, their ideas have captivated artists and designers, and continue to do so. The contemporary British artist Susan Derges, who has made several works concerned with waves and flow in water, says that she was inspired by their ideas. Growing up beside the Basingstoke canal in southern England, Derges spent a lot of time exploring the towpath walks. “I was intrigued by the mixture of orderly patterning and interference set up by barges and bird life moving through the water”, she says. She began to explore how waves and interference patterns give rise to orderly, stable patterns: “It was a way of revealing a sense of mysterious but ordered processes behind the visible world.”


Waves meeting and mingling in Theodor Schwenk’s Sensitive Chaos (1965)

When she moved to Dartmoor in the 1990s, Derges encountered the torrent rivers coming down from the high moor. “I found it fascinating that a huge amount of energy, momentum and complex, chaotic movement could give rise to stable vortices and flow forms that remained in areas of the river’s course”, she says. “It seemed to suggest a metaphor for how one might consider all apparently constant and solid appearances as being sustained by a more fluid energetic underlying process.”

In a series of works in the 1990s Derges captured these turbulent structures in the River Taw on Dartmoor in southwest England by placing large sheets of photographic paper, protected with a waterproof covering, just beneath the water surface at night and exposing them with a single bright flash of light. In her inspiration, motives and even techniques, there is very little distance between what Derges did here and what an experimental scientist might do: such “shadowgraphs” of flow structures are commonly used as data in fluid dynamics. But for Derges this ‘data gathering’ becomes an artistic moment.


Susan Derges, image from River Taw Series (1997-9).

Like Derges, American artist Athena Tacha was inspired by Leonardo’s sketches of vortices, a debt that she made particularly explicit in her 1977 sculpture maquette Eddies/Interchanges (Homage to Leonardo). Much of Tacha’s work over the past several decades is an enquiry into the deep structures of turbulent flow, which, like Leonardo, she often reduces to their abstract essence and transforms into something more permanent and rigid. Because much of her work involves large-scale public commissions, these architectural sculptures allow people to literally get inside the forms and experience them as if they were particles borne along in the flow – for example, in the brickwork-trellis maze of Marianthe (1985-6) and the stepped crescent forms of Green Acres (1985-7). If you want a visceral sense of the real tantalizing confusion of a turbulent maelstrom, no scientific description will improve on Tacha’s photographic series such as Chaos (1998).


Athena Tacha, Eddies/Interchanges (Homage to Leonardo) (1977).


Athena Tacha, Marianthe (1985-6; brickwork and cedar), Fort Myers, Florida (now destroyed).


Athena Tacha, Green Acres (1985-7), Trenton, New Jersey.


Athena Tacha, Chaos (1998; work in progress).

“I think I respond to turbulence because I am generally interested in fluid forms that evoke the state of ‘chaos’ in nature”, says Tacha – “which I consider a different kind of order, with constant irregularities and changes, but ultimately extremely organized.” Kolmogorov and his scientific successors would find little to object to in that claim.

Nothing, perhaps, better captures the sense of a flow frozen into an instant than Tacha’s sculpture Wave, which allows the viewer to experience the terrifying beauty of Hokusai’s Great Wave without fear of being pulled under. If this work hints at the connection to an East Asian appreciation of flow, that context is unmistakable in the work of Japanese artist Goh Shigetomi. Shigetomi has found a way to disperse black sumi ink into natural streams so that it can imprint an image of the flow on paper: as he puts it, the water “spontaneously draws lines”. Only the right ink and the right (Japanese) paper will work, and it took years of experimentation to refine the technique.



Hokusai, The Great Wave (c.1830) (top), and Athena Tacha, Wave (2004-5; lead sheet and silicone sealant) (bottom).

The results are unearthly, and Shigetomi expresses them in almost magical terms, reminiscent of Schwenk: “‘New-born’ water is full of infinite live force”. He believes that “the water remembers every single thing which has happened on and around the earth”, and that one can see “the fragments of the memories in flows and movements of water as certain patterns.”



Flow forms captured in ink on paper by Goh Shigetomi.

Can these claims be in any sense true from a scientific viewpoint? Not obviously; they seem closer to a form of thaumaturgy, of divination from natural symbols. (Shigetomi literally believes that a ‘spirit of water’ is sending him messages.) But the inky traceries, seen at first hand, are richer and more subtle than anything I have seen in a ‘strictly scientific’ photograph – there is only one printmaker in all of Japan who can reproduce the images with sufficient fidelity. They seem to conjure up much more than a cold physical trace of the technical process of their production.

Shigetomi denies any connection to the traditions of East Asian art, finding more in common with Leonardo, whose drawings he has examined in the notebooks of the Royal Collection at Windsor Castle in England. But I find it hard not to see these “water figures” as in some sense an extension of Shitao’s instruction that the painter must find a spontaneous, unforced way of applying ink to paper, a way that captures the dynamic force of qi. Shigetomi explains that it takes a finely developed sensibility to make these “experiments” work – one cultivated in his case by 38 years of standing in rivers, waiting for the right moment. Derges says the same: “I had to be very aware of the tide and the wave patterns… One would watch and wait for the seventh wave and one needed split second timing." These artists have had to develop the same patient, observant sensitivity to flow that characterizes both the meditations of the Chinese Tang Dynasty water poets Li Bai and Du Fu and the sketches of Leonardo.

But can this attitude of contemplative observation, rather than careful testing and measurement, serve the scientist too? Certainly it can. In 1934 the French mathematician Jean Leray proved that the Navier-Stokes equations have so-called “weak” solutions, meaning that there are solutions that satisfy the equations on average but not in detail at every point in space: flow patterns that “fit”, you could say, so long as you don’t examine them with a microscope. And Leray is said to have found much of his inspiration for this mathematical tour de force not by poring over his desk into the small hours but by leaning over the Pont-Neuf in Paris and watching, for hour after hour, the eddies of the Seine surging around the piles.

A sense of order and chaos

There is, however, a still more dramatic example of how these intuitions of the form of turbulence can cross boundaries between art and science. One of the most striking, and certainly one of the most famous, artistic depictions of turbulence is Vincent van Gogh’s Starry Night (1889). It is a fantastical vision, of course – the night sky is not really alive with these swirling stellar masses, at least not in a way that the eye can see. But spiral galaxies and stellar nebulae were known in van Gogh’s day, having been revealed in particular by the telescopic studies of William Herschel a hundred years earlier. It is tempting to conclude that van Gogh’s notion of turbulent heavens was simply a metaphor for his tumultuous inner world – but whether or not this is so, the artist seems to have had a startlingly accurate sense of what turbulence is about.


Vincent van Gogh, Starry Night (1889).

Kolmogorov’s work showed how to relate the velocity of the flow at one point to that at some other point a certain distance away: something that varies from place to place but which has a constant mathematical relationship on average. In 2006, researchers in Mexico showed that this same relationship deduced by Kolmogorov also describes the probabilities of differences in brightness, as a function of distance, between points in Starry Night. The same is true of some of van Gogh’s other ‘swirly’ works, such as Road with Cypress and Star (1890) and Wheat Field with Crows (1890). In other words, these paintings offer a way to visualize an otherwise recondite and hidden regularity of turbulence: they show us what Kolmogorov turbulence “looks like”.
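The kind of statistic involved can be sketched simply: collect the differences between values (velocities in a flow, or pixel brightnesses in a digitized painting) at pairs of points a fixed distance apart, and study how their spread varies with that distance. The function names and toy data below are illustrative, not taken from the 2006 study:

```python
# Structure-function-style statistic: differences between values at pairs
# of points separated by a fixed lag r. For Kolmogorov turbulence the
# spread of these differences grows with r in a lawful way; the analysis
# described above applied the same idea to brightness values in paintings.

def pairwise_differences(values, r):
    """Differences between entries of `values` separated by lag r."""
    return [values[i + r] - values[i] for i in range(len(values) - r)]

def mean_square_difference(values, r):
    diffs = pairwise_differences(values, r)
    return sum(d * d for d in diffs) / len(diffs)

# Toy one-dimensional "brightness" signal standing in for a row of pixels:
signal = [((i * 37) % 101) / 100 for i in range(500)]
for r in (1, 4, 16):
    print(r, round(mean_square_difference(signal, r), 3))
```

How the mean-square difference scales with the separation r is the fingerprint being compared: in a turbulent velocity field it follows Kolmogorov’s power law on average, and the brightness fields of van Gogh’s “swirly” canvases were found to share that statistical signature.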

These works were created when van Gogh was mentally unstable: the artist is known to have experienced psychotic episodes in which he had hallucinations, minor fits and lapses of consciousness, perhaps indicating epilepsy. “We think that van Gogh had a unique ability to depict turbulence in periods of prolonged psychotic agitation,” says the team leader Jose Luis Aragon. Any psychological explanation is sure to be tendentious, but the connection does seem to be more than just chance – other, superficially similar paintings such as Edvard Munch’s The Scream don’t have this mathematical property connecting the brush strokes, for example.

Of course, it would be absurd to suggest that van Gogh had somehow intuited Kolmogorov’s result before the Russian mathematician deduced it. But the incident does imply that a sensitive and receptive artist can penetrate to the core of a complex phenomenon, even if the result falls short of a scientific account. And here, even what might seem to be a flawed or mystical view of the natural world can offer guidance towards useful insights. How, for example, did Leonardo manage to produce sketches of the aerial topography of mountains laced by river networks that look almost identical to modern satellite images? He was surely guided by his Neoplatonic conviction of a correspondence between the microcosm of the human body and the macrocosm of the wider world: when he spoke of rivers as being the “blood of the earth”, it wasn’t just a visual pun on the resemblance to vein networks.



Leonardo’s sketch of the topography of northern Italy (top), and a modern satellite image of mountainous terrain carved by rivers (bottom).

As scientists strive to make sense of ever more complex phenomena such as turbulence, then, perhaps it is worthwhile listening to what artists think about them. As Derges puts it, “I feel there will probably always be a movement back and forth between the controlled and chaotic environments of simulated and real fluid events in order to be able to make images that communicate something of the mystery of what lies behind the visible.” The most revealing images of flow patterns, she says, “need to be situated in between something that has been closely observed and something that has been emotionally experienced.”

That something which is “emotionally experienced” should find any place in science might horrify some scientists. It needn’t. We now know that emotional experience plays a significant role in cognition: it can be a part of what allows us to grasp the essence of what happens. There are researchers who already accept the value of this. Last fall, for example, physical oceanographer Larry Pratt of the Woods Hole Oceanographic Institution in Massachusetts and performing artist Liz Roncka led a workshop near MIT in Cambridge in which the participants, mostly mathematicians and scientists, were encouraged to dance their interpretation of turbulence. As Genevieve Wanucha, science writer for the “Oceans at MIT” program, reported, Pratt “was able to improvise complex movements that responded fluidly to the motion of his partner’s body, inspired by obvious intuition about turbulence.” Wanucha explains that Pratt uses dance “as a teaching tool to elegantly and immediately represent to the human mind how eddies transport heat, nutrients, phytoplankton or spilled oil down beneath the ocean surface.” His hope is that such an approach will help young scientists working on ocean flows to “gain a more intuitive understanding” of their work.

An intuitive understanding has been an essential part of any great scientist’s mental toolkit. It is what has motivated researchers to make physical models and draw pictures, immerse themselves in virtual sensory environments that display their data, and create “haptic interfaces” that let them feel their way to understanding. I daresay that dance and other somatic experiences could also be valuable guides to scientists. This interplay of art and science should be especially fruitful when applied to a question like turbulence that is so hard to grasp, so elusive and ephemeral yet also governed and permeated by an underlying regularity. It seems unlikely that Heisenberg’s quest will ever be completed until we cultivate a feeling for flow.

Friday, July 11, 2014

Quantum reality and Einstein's moon

Here’s a more expansive and referenced version (in English!) of my article in the latest La Recherche on the quantum view of “reality”. It was something of a revelation writing it, because I realised to my embarrassment that I had not been properly understanding quantum nonlocality for all these years. It’s no excuse to say this, but a big part of the reason for that is that the concept is so poorly explained not only in most other popular articles but also in some scientific papers. It is usually explained in terms that suggest that, because of entanglement, “Einstein’s spooky action at a distance is real!” No it’s not. Quantum nonlocality, as explored by John Bell, is precisely not action at a distance, but the alternative to it: we only have to see the correlations of entanglement as “action at a distance” if we are still insisting on an Einsteinian hidden variables picture. This is how Johannes Kofler puts it:

“(Quantum) nonlocality” is usually used as an abbreviation for “violating Bell’s inequality”. But only if you stick to hidden variables is there “real” non-locality (= negation of Einstein’s locality). If you keep a Copenhagen-like interpretation (giving up determinism), i.e. not use any hidden variables in the first place, you do not need any non-locality (= negation of Einstein locality) to explain quantum nonlocality. Then there is (quantum) nonlocality without the need for (Einstein) non-locality.

Duh, you knew that, didn’t you? Now I do too. Similarly, quantum contextuality doesn’t mean that quantum measurements depend on context, but that they would depend on context in a hidden-variable picture. Aha!

___________________________________________________________________

No matter where we look in quantum theory, we seem to play an active part in constructing the reality we observe.

Philosophers and mystics from Plato to the Buddha have long maintained that the reality we perceive is not really there. But quantum theory seems to insist on a far stranger situation than that. In this picture, it is meaningless to ask about what is “there” until we look. Pascual Jordan, one of the physicists working with Niels Bohr who helped to define the new quantum world view in the 1920s, claimed that “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements” [1].

In this comment lurk all the notorious puzzles and peculiarities of quantum theory. It seems to be an incredibly grandiose, self-obsessed image of reality: nothing exists (or at least, we can’t say what does) until we bring it into being. Isn’t this the antithesis of science, which assumes an objective reality that we can examine and probe with experiments?

No wonder Albert Einstein was uncomfortable with this kind of quantum reality. He expressed his worries very concretely to the young physicist Abraham Pais. “I recall”, Pais later wrote, “that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it” [2]. Isn’t the moon, after all, just made up of quantum particles?

So what’s the answer? Is the moon there when nobody looks? How, without looking, could we know anyway?

Increasingly, scientists today are finding ways of “looking” – of conducting experiments that test whether Bohr and his colleagues or Einstein was right. So far, the evidence clearly favours Bohr: reality is what we make it. But what exactly does that mean? No one is sure. In fact, no one even really knows how serious a problem this is. Richard Feynman, who knew more about quantum theory than almost anyone else ever, famously summed it up: “I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem” [3].

Fretting about these questions led Einstein to arguably his greatest contribution to quantum theory. But it’s not one for which he tends to receive much credit, because he was attempting to do the opposite: to banish a quantum property that turns out to be vital.

In the mid-1920s, Bohr, working in Copenhagen with Jordan, Werner Heisenberg and Wolfgang Pauli, came up with his radical interpretation of what quantum mechanics tells us about reality. In this “Copenhagen Interpretation”, the theory doesn’t say anything about “how things are”. All it can do, and all science can ever do (in Bohr’s view), is tell us about “how things seem”: what we can measure. To ask what is really underlying those measurements, said Bohr, is to ask a question that lies beyond science.

In 1935, in collaboration with Boris Podolsky and Nathan Rosen, Einstein described a thought experiment that sought to show how absurd the Copenhagen Interpretation was. He imagined an experiment in which two particles interact to make their quantum states inter-related. Imagine two photons of light, for example, interacting so that one of them gets polarized horizontally (that is, the oscillating electromagnetic fields are oriented in this manner) and the other vertically. According to Bohr’s view of quantum mechanics, the actual photon polarizations aren’t determined until we make the measurement – all we know is that they are correlated.

But what if, once they have been “entangled” in this way, the photons are allowed to separate over vast, even cosmic, distances? Quantum theory would still seem to insist that, if we make a measurement on one of them, it instantly decides the polarization of them both. Yet this sort of instant “action at a distance” had apparently been ruled out by Einstein’s theory of special relativity, which insisted that no influence could be transmitted faster than light – a condition called locality. The only alternative to this “spooky action at a distance”, as Einstein called it, was that the polarizations of the photons had been decided all along – even though quantum mechanics couldn’t say what they were. “I am therefore inclined to believe”, Einstein wrote to his friend Max Born in 1948, “that the description of quantum mechanics… has to be regarded as an incomplete and indirect description of reality” [4]. He suspected that there were “hidden variables” that, while we couldn’t measure them, endowed the two particles with definite states.

The Austrian physicist Erwin Schrödinger saw at once that this property of “entanglement” – a word he coined – was central to quantum theory. In it, he said, was the essence of what made quantum theory distinct from the kind of reality we are used to from everyday experience. To Einstein, that was precisely the problem with quantum theory – entanglement was supposed to show not how strange quantum theory was, but why it wasn’t a complete description of reality.

It wasn’t until 1964 that anyone came up with a way to test those assertions. The Irish physicist John Bell imagined another thought experiment involving making measurements on entangled pairs of quantum particles. If the measurements turned out one way, he said, then quantum systems could not be explained by any “realist” hidden-variables theory while also being “local” in Einstein’s sense. Rather, either the world lacks a realist description or it must permit real “action at a distance”. Such a situation is called quantum nonlocality. If, on the other hand, the results of Bell’s hypothetical experiment came out the other way, then Einstein would be right: reality is local and realist, meaning that all properties are inherent in a system whether we observe them or not. In this way, Bell showed how in principle we might conduct experiments to determine this fundamental question about the nature of reality: does it obey “quantum nonlocality” or Einstein’s “local realism”?

It took almost another 20 years before Bell’s theorem was put to the test. In the early 1980s Alain Aspect at the University of Paris in Orsay figured out a way to do that using laser beams, and he discovered that the observable effects of quantum entanglement can’t be explained by local hidden variables [5]. Bell’s test is statistical: it relies on making many measurements and discovering whether collectively they stay within the bounds prescribed by local realism or whether they exceed them [see Box 1].

___________________________________________________________________________

Box 1: Testing quantum nonlocality

The concept is simple: a source of particles (C) sends one each of an entangled pair to two detectors, well separated in opposite directions (A and B).



In the Aspect experiment the source is a calcium atom, which emits polarized photons that travel 6 metres to the detectors. Each photon can have one of two types of polarization (horizontal or vertical), and so there are in principle four different possibilities for what the two detectors might measure. Aspect and colleagues arranged for the photons to be dispatched at random in the two opposite directions. So it should be easy to work out the statistical probabilities of the various experimental outcomes. But here’s the key point: if Bohr was right to say that quantum quantities are undefined before they are measured, then these seemingly straightforward statistics change: you can’t assume that the photons must have polarizations that are horizontal or vertical until you measure them – even though you know that these are the only possibilities! The correlations between the entangled photons then produce a statistical outcome of measurements that lies outside the bounds of what “common-sense” arithmetic seems to imply. John Bell quantified these bounds, and Aspect’s experiments confirmed that they are indeed violated.
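To make the statistics concrete, here is a small Python sketch – my own illustration, not Aspect’s actual analysis. It compares the so-called CHSH combination of correlations (the form of Bell’s bound used in these experiments) for the quantum prediction against a toy local hidden-variable model, using the standard polarizer angles that maximize the quantum value:

```python
import math
import random

# Polarizer settings (radians) that maximize the quantum CHSH value
a, a2 = 0.0, math.pi / 4              # Alice's two settings
b, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's two settings

def quantum_E(x, y):
    """Quantum correlation for a polarization-entangled photon pair."""
    return math.cos(2 * (x - y))

def hidden_variable_E(x, y, trials=200_000):
    """A toy local hidden-variable model: each pair carries a shared random
    polarization angle lam, and each detector deterministically outputs +1
    or -1 according to its alignment with lam."""
    total = 0
    for _ in range(trials):
        lam = random.uniform(0, math.pi)
        A = 1 if math.cos(2 * (x - lam)) >= 0 else -1
        B = 1 if math.cos(2 * (y - lam)) >= 0 else -1
        total += A * B
    return total / trials

def chsh(E):
    """CHSH combination: local realism requires |S| <= 2."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print("Quantum CHSH value: %.3f" % chsh(quantum_E))          # 2.828 = 2*sqrt(2)
print("Hidden-variable CHSH value: %.3f" % chsh(hidden_variable_E))  # ~2, within the bound
```

The quantum prediction reaches 2√2 ≈ 2.83, beyond the local-realist limit of 2; the hidden-variable model, however its internal angle is distributed, stays at or below 2 (up to sampling noise).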

______________________________________________________________

It’s similar to the way you detect the wave properties of quantum particles by looking at how they interfere with each other. Look at one particle, and it just pops up at a certain position in the detector. But look at many, and you see that more of them appear in some regions (where the interference enhances the chances of finding them) than in others. This indeterminacy of any single experiment, Aspect showed, was not due to our inability to access local hidden variables, but was fundamental to quantum theory.

But wait – didn’t we say that special relativity forbids this kind of faster-than-light interaction? Well, Einstein thought so, but it’s not quite true. What it actually forbids is events at one place having a causal influence on events at another faster than the time it takes for light to pass between them. Although it is possible to figure out that a particle in one place has displayed the “action at a distance” of entanglement on a particle at another, it turns out that you can only ever deduce this by exchanging information between the two places – which is indeed restricted to light speed. In other words, while it is possible to demonstrate this action, it’s impossible to use it to communicate faster than light. And that restriction is enough to preserve the integrity of special relativity.
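This no-signalling property can be seen directly in the numbers. In the standard quantum prediction for a polarization-entangled pair, Bob’s outcome probabilities – summed over Alice’s outcomes – are the same whatever setting Alice chooses, so her choice carries no usable signal. A minimal sketch:

```python
import math

def joint(A, B, a, b):
    """Quantum joint outcome probability for the entangled pair:
    P(A, B | a, b) = (1/4) * (1 + A*B*cos(2*(a - b))), with A, B = +1 or -1."""
    return 0.25 * (1 + A * B * math.cos(2 * (a - b)))

# Bob's marginal probability of outcome +1, for several of Alice's settings:
# the cosine terms cancel, so it is always exactly 1/2.
for a in (0.0, 0.3, math.pi / 4, 1.2):
    pB = joint(+1, +1, a, 0.7) + joint(-1, +1, a, 0.7)
    print("a = %.2f: P(B=+1) = %.3f" % (a, pB))  # always 0.500
```

The correlations only become visible once the two sets of results are brought together and compared – a comparison that itself can happen no faster than light.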

According to Johannes Kofler of the Max Planck Institute of Quantum Optics in Garching, Germany, Bell’s theorem is relevant to many aspects of the emerging discipline of quantum information technology, such as quantum computing and quantum cryptography. But he cautions that there are still loopholes – ways one can argue that perhaps something else in Bell-type experiments is masking hidden variables. Experiments have eliminated each of these loopholes individually, he says, but “what is still lacking, is a so-called definitive Bell test where all loopholes are closed simultaneously” [see Box 2]. This isn’t just an academic nicety, he adds, because completely secure telecommunications using quantum cryptography will rely on ensuring that all such loopholes are firmly shut.

___________________________________________________________

Box 2: Closing the loopholes

There have now been many experimental demonstrations of violations of the bounds on nonlocal correlations set by Bell’s theorem. But none is entirely immune to the objections of ingenious advocates of “local realism”, who insist that all objects have local properties that fully specify their state. There are three such loopholes, all of them reliant on the fact that the sampling of the particles’ properties at the detectors must be truly random. Loophole number 1 is the “locality” loophole, which says that the measurements at the two detectors could still be influenced by some hidden, fast but slower-than-light communication between them, so that the randomization of, say, the detector polarization filters is imperfect. Ruling that out demands simply increasing the distance between the detectors, which researchers at the University of Innsbruck in Austria achieved in 1998 by placing them 400 m apart, with the photons sent along optical fibres [6]. Loophole number 2 is the “freedom of choice” loophole, in which some “local realist” property of the particles themselves influences the choices made in their measurement. This was ruled out in 2010 in an experiment that also closed the locality loophole, by making sure that the detectors were distant not only from one another but also from the photon source: the source and one of the detectors were located on separate islands in the Canaries [7]. This made it possible to control the timing of the switching of the polarization at the detectors so precisely that it couldn’t possibly be influenced by anything happening at the source.

Finally there is the “fair-sampling” loophole, in which the subset of all the photons that is actually measured is biased in some way by a “local realist” property. Ruling out this possibility demands a high detection efficiency, which was achieved for photons only last year [8]. So all the loopholes have been closed – but not yet all of them simultaneously, arguably still giving local realism a precarious handhold.

________________________________________________________

Three years after Bell proposed his theorem, two physicists named Simon Kochen and Ernst Specker suggested a similar counterintuitive feature of quantum theory: that measurements can depend on their context. The macroscopic world isn’t like this. If you want to count the number of black and white balls in a jar, it doesn’t matter if you count the black ones or the white ones first, or if you tot them up in rows of five or pour them all onto a set of scales and weigh them. But in quantum mechanics, the answer you get may depend on the context of the measurement. Kochen and Specker showed that if quantum systems really display contextuality, this is logically incompatible with the idea that they might be more fully described by hidden variables.

Pawel Kurzynski of the National University of Singapore says that studies of contextuality have lagged behind those of quantum nonlocality by two or three decades – the first experiments that clearly confirmed it were performed only in 2011. But “the attention now paid to contextuality is similar to the attention paid to nonlocality after the Aspect experiment 30 years ago”, he says, and contextuality seems likely to be just as important. For one thing, it might explain why some quantum computers seem able to work faster than classical ones [9]. “Quite a number of people now hope that contextuality is an important ingredient for the speedup”, says Kofler.

Recently, a team led by Kurzynski have suggested that nonlocality and contextuality might ultimately be expressions of the same thing: different facets of a more fundamental “quantum essence” [10]. The researchers took the two simplest experimental tests of quantum nonlocality and contextuality, and figured out how, in theory, to merge them into one. Then, says Kurzynski, “the joint system either exhibits local contextuality or nonlocality, never both at the same time.” For that reason, they call this behaviour quantum monogamy. “Our result shows that these two issues are related via some more general feature that can take form of either nonlocality or contextuality”, says Kurzynski [see Box 3].

__________________________________________________________

Box 3: Searching for the quantum essence

The simplest experimental tests of quantum nonlocality and contextuality have complicated names: respectively, the Clauser-Horne-Shimony-Holt (CHSH) and Klyachko-Can-Binicioglu-Shumovsky (KCBS) tests. In the CHSH experiment, two observers (say Alice and Bob) each make measurements, with a choice of two settings, on one photon of an entangled pair, and the statistics of the combined measurements are compared with the predictions of a local realist theory. The KCBS scenario, by contrast, involves only a single observer making measurements on a single particle, without entanglement. The statistics of certain combinations of successive measurements are again bounded within a realistic theory that doesn’t depend on the context of measurement, so that contextuality shows up as a violation of this bound.
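For the curious, the KCBS bound can be checked with a short calculation. The test uses five measurements on a three-level quantum system, arranged in a cycle so that neighbouring measurements are compatible; a noncontextual theory requires the sum of the five pairwise correlations to be at least −3, while quantum theory reaches 5 − 4√5 ≈ −3.94. A sketch using the standard KCBS construction (not any particular experiment):

```python
import math

# Five qutrit projectors P_j = |l_j><l_j|, with neighbouring vectors
# orthogonal so that neighbouring measurements are compatible.
theta = math.acos(math.sqrt(math.cos(math.pi / 5) / (1 + math.cos(math.pi / 5))))
vecs = [(math.sin(theta) * math.cos(4 * math.pi * j / 5),
         math.sin(theta) * math.sin(4 * math.pi * j / 5),
         math.cos(theta)) for j in range(5)]

psi = (0.0, 0.0, 1.0)  # the state being measured

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Sanity check: neighbouring measurement directions really are orthogonal.
for j in range(5):
    assert abs(dot(vecs[j], vecs[(j + 1) % 5])) < 1e-9

# With A_j = 2*P_j - 1 and P_j * P_{j+1} = 0, the pair correlation is
# <A_j A_{j+1}> = 1 - 2<P_j> - 2<P_{j+1}>.
probs = [dot(psi, v) ** 2 for v in vecs]
kcbs = sum(1 - 2 * probs[j] - 2 * probs[(j + 1) % 5] for j in range(5))

print("KCBS value: %.3f (noncontextual bound: -3)" % kcbs)  # -3.944
```

The quantum value dips below −3, which no noncontextual assignment of predetermined outcomes can reproduce.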

“What we did was to merge both scenarios”, Kurzynski explains. “We have two observers, Alice and Bob, but this time, in addition to the CHSH scenario Alice also performs additional measurements that allows her to test the KCBS scenario on her subsystem.” One might expect that, since it is possible to violate the realist bounds in both cases when the two scenarios are considered separately, it will be also possible to violate both of them when they are considered jointly. However, what the researchers observed when they worked through the calculations was that only one bound can be violated in any one experiment, never both at once.

___________________________________________________________

So Schrödinger may have been premature – as Kurzynski explains, “the fundamental quantum property is not entanglement, but non-classical correlations in a more general form.” That idea is supported by a recent finding by Maximilian Schlosshauer of the University of Portland in Oregon and Arthur Fine of the University of Washington in Seattle. The theorems of Bell and Kochen-Specker are generally called no-go theorems, because they specify situations that aren’t physically possible – in this case, that certain measurement outcomes can’t be reconciled with a hidden-variables picture. Schlosshauer and Fine have devised another no-go theorem which shows that, even if two quantum states are not entangled, they can’t be considered independently [11].

“If we put two quantum systems together, and if we want to think of each system as having some ‘real physical state’ that fully determines what we can measure”, says Schlosshauer, “then the real physical state of the two systems together is not simply the combination of the states of each system.” When you make measurements on the two systems together, each looks different from how it would if you measured it alone. This new form of entanglement-free interdependence of quantum systems has yet to be demonstrated experimentally.

Again, Schlosshauer says, we see that “trying to uphold classical intuitions gets you into trouble with quantum mechanics.” This, he says, underscores what the theorems of Bell and Kochen-Specker have told us: “quantum measurements do not just ascertain what's already there, but create something new.”

Why can’t we just accept that reality isn’t what we thought it was – that the world is nonlocal and contextual and entangled and correlated? The answer is that it just doesn’t seem that way. If I move a coffee cup on my desk, it isn’t going to move one on yours (unless I’ve rigged up some device that transmits the action). In the “classical” world we experience, these weird effects don’t seem to apply. How the physics of the macroscale arises from the quantum physics of fundamental particles is a hot area of research, and many scientists are devising experiments on the “mesoscale” at which one becomes the other: objects consisting of perhaps thousands to billions of atoms. They have, for example, already shown that organic molecules big enough to see in the electron microscope can display quantum behaviour such as wave-like interference [12].

Although there are still questions about exactly how this quantum-to-classical transition happens, you might think that we do at least know where we stand once we get to everyday objects like apples – they, surely, aren’t going to show any quantum weirdness. But can we be sure? In 1985, physicists Anthony Leggett and Anupam Garg proposed some ground rules for what they called macrorealism: the idea that macroscopic objects will behave in the “realistic” way we have come to expect [13]. Perhaps, they said, there’s some fundamental size limit above which quantum theory as we currently know it breaks down and objects are no longer influenced by measurement. Leggett and Garg worked out what observations would be compatible with the macrorealist picture – something like a macroscopic Bell test. If we carried out the corresponding experiments and found that they violate the Leggett-Garg constraint, it would mean that even macroscopic objects could in principle show quantum behaviour. But the challenge is to find a way of looking at the object without disturbing it – in effect, to figure out how to sense that Einstein’s moon is “there” without directly looking. Such experiments are said to be “non-invasive”.

Over the past four years, several experiments of this kind have been devised and carried out [see Box 4], and they suggest that Leggett and Garg’s macrorealism might indeed be violated by large objects. But so far these experiments have only managed to study systems that don’t necessarily qualify as big enough to be truly macroscopic. The problem is that the experiments get ever harder as the objects get bigger. “How to define macroscopic is unfortunately subjective and almost a small research field on its own”, says Kofler. “We’re eagerly awaiting better experiments.”

_____________________________________________________________________

Box 4: Is the world macrorealistic?

In testing the Leggett-Garg condition for macrorealism, one is essentially asking if a macroscopic system initially prepared in some particular state will evolve in the same way regardless of whether or not one observes it. So an experimental test means measuring the state of a single system at various points in time and comparing the outcomes for different measurement sequences. The trick, however, is that the measurements must be “non-invasive”: you have to observe the system without disturbing it. One way to do that is with a “negative” observation: if, say, an object can be on either one side of a chamber or the other, and you don’t see it on one of those sides at some particular time, you can infer – without observing the object directly – that it is on the other side. Sometimes you will see the object when you look, but then you just discard this run and start again, keeping track of the statistics.
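For a simple two-level system the arithmetic of the Leggett-Garg test is easy to sketch. For measurements at three equally spaced times on a coherently oscillating system, the two-time correlations are cosines of the elapsed phase, and the macrorealist bound K ≤ 1 can be exceeded. A sketch of the textbook calculation, not of any particular experiment:

```python
import math

def K(omega_tau):
    """Leggett-Garg combination K = C12 + C23 - C13 for a two-level system
    oscillating at angular frequency omega, measured at three times spaced
    by tau.  Macrorealism requires K <= 1."""
    C12 = math.cos(omega_tau)       # correlation between t1 and t2
    C23 = math.cos(omega_tau)       # correlation between t2 and t3
    C13 = math.cos(2 * omega_tau)   # correlation between t1 and t3
    return C12 + C23 - C13

# The quantum prediction peaks when the phase between measurements is pi/3:
print("K = %.3f (macrorealist bound: 1)" % K(math.pi / 3))  # K = 1.500
```

The violation at K = 1.5 is the quantum maximum for this scenario; the experiments in refs [14] and [15] looked for just this kind of excess in mesoscale systems.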

But there’s another problem too: you need to be sure that the various states of your system are “macroscopically distinct”: that you can clearly see they are different. That, famously, is the case for Schrödinger’s hypothetical cat: it is either alive or dead. But what are the distinct states that you might hope to observe for, say, a cannonball sitting in a box? Its quantum particles might have many different quantum states, but what are the chances that you could initially prepare them all in one state and then see them all jump to another?

That’s why experimental tests of the Leggett-Garg condition have so far tended to be restricted to “mesoscale” systems: they contain many quantum particles, but not so many that distinct collective states can’t be identified. And they have had to find ways of making the observations non-invasively. One test in 2010, for example, monitored oscillations between two different states in a superconducting circuit, which could be regarded as having many thousands of atoms in distinct states [14]. Another test two years later used the “negative-observation” approach to monitor the quantum states of around 10^10 phosphorus impurities in a piece of doped silicon [15]. Both violated the Leggett-Garg condition for macrorealism.

______________________________________________________________

“The various debates about the interpretation of quantum mechanics can be seen as debates about what quantum states refer to”, says Kofler’s colleague Caslav Brukner of the University of Vienna. “There are two major points of view: the states refer to reality, or they refer to our knowledge of the basis from which ‘reality’ is constructed. My current view is that the quantum state is a representation of knowledge necessary for a fictitious observer, within his experimental limitations, to compute probabilities of outcomes of all possible future experiments.”

That brings us back to Einstein’s moon. It now seems that something is there when we don’t look, but exactly what is there is determined only when we look. But there’s no reason to apologize for this intrusion of the observer, Brukner argues. “Fictitious observers are not restricted to quantum theory, but are also introduced in thermodynamics or in the theory of special relativity.” It seems we had better get used to the fact that we’re an essential part of the picture.

References
1. Quoted by M. Jammer, The Philosophy of Quantum Mechanics (Wiley, New York, 1974) p.151.
2. A. Pais, Rev. Mod. Phys. 51, 863 (1979).
3. R.P.Feynman, Int. J. Theor. Phys. 21, 471 (1982).
4. The Born-Einstein Letters, with comments by M. Born (Walker, New York, 1971).
5. A. Aspect et al., Phys. Rev. Lett. 49, 1804 (1982).
6. G. Weihs et al., Phys. Rev. Lett. 81, 5039 (1998).
7. T. Scheidl et al., Proc. Natl Acad. Sci. USA 107, 19708 (2010).
8. M. Giustina et al., Nature 497, 227 (2013).
9. M. Howard, J. Wallman, V. Veitch & J. Emerson, Nature 510, 351 (2014).
10. P. Kurzynski, A. Cabello & D. Kaszlikowski, Phys. Rev. Lett. 112, 100401 (2014).
11. M. Schlosshauer & A. Fine, Phys. Rev. Lett. 112, 070407 (2014).
12. S. Gerlich et al., Nature Commun. 2, 263 (2011).
13. A. J. Leggett & A. Garg, Phys. Rev. Lett. 54, 857 (1985).
14. A. Palacios-Laloy et al., Nature Phys. 6, 442 (2010).
15. G. C. Knee et al., Nature Commun. 3, 606 (2012).