Friday, February 27, 2015

Mitochondria: who mentioned God?

Oh, they used the G word. The Guardian put “playing God” in the headline of my article today on mitochondrial replacement, and now everyone on the comments thread starts ranting about God. I’m not sure God has had much to say in this debate so far, and it’s a shame to bring him in now. But for the sake of the record, I’ll just add here what I said about this phrase in my book Unnatural. I hope that some of the people talking about naturalness and about concepts of the soul in relation to embryos might be able to take a peek at that book too. So here’s the extract:

“Time and again, the warning sounded by the theocon agenda is that by intervening in procreation we are ‘playing God’. Paul Ramsey made artful play of this notion in his 1970 book Fabricated Man, saying that ‘Men ought not to play God before they learn to be men, and after they have learned to be men they will not play God.’ To the extent that ‘playing God’ is simply a modern synonym for the accusation of hubris, this charge against anthropoeia is clearly very ancient. Like evocations of Frankenstein, the phrase ‘playing God’ is now no more than lazy, clichéd – and secular – shorthand, a way of expressing the vague threat that ‘you’ll be sorry’. It is telling that this notion of the man-making man becoming a god was introduced into the Frankenstein story not by Mary Shelley but by Hollywood. For ‘playing God’ was never itself a serious accusation levelled at the anthropoetic technologists of old – one could tempt God, offend him, trespass on his territory, but it would have been heretical seriously to entertain the idea that a person could be a god. As theologian Ted Peters has pointed out,
“The phrase ‘playing God’ has very little cognitive value when looked at from the perspective of a theologian. Its primary role is that of a warning, such as the word ‘stop’. In common parlance it has come to mean just that: stop.”

And yet, Peters adds, ‘although the phrase ‘playing God’ is foreign to theologians and is not likely to appear in a theological glossary, some religious spokespersons employ the idea when referring to genetics.’ It has, in fact, an analogous cognitive role to the word ‘unnatural’: it is a moral judgement that draws strength from hidden reservoirs while relying on these to remain out of sight.”

OK, there you go. Now here’s the pre-edited article.

____________________________________________________________________

It was always going to be a controversial technique. Sure, conceiving babies this way could alleviate suffering, but as a Tory peer warned in the Lords debate, “without safeguards and serious study of safeguards, the new technique could imperil the dignity of the human race, threaten the welfare of children, and destroy the sanctity of family life.” Because it involved the destruction of embryos, the Catholic Church inevitably opposed it. Some scientists warned of the dangers of producing “abnormal babies”, there were comparisons with the thalidomide catastrophe and suggestions that the progeny would be infertile. Might this not be just the beginning of a slippery slope towards a “Frankenstein future” of designer babies?

I’m not talking about mitochondrial replacement and so-called “three person babies”, but about the early days of IVF in the 1970s and 80s, when governments dithered about how to deal with this new reproductive technology. Today, with more than five million people having been conceived by IVF, the term “test-tube baby” seems archaic if not a little perverse (not least because test tubes were never involved). What that debate about assisted conception led to was not the breakup of the family and the birth of babies with deformities, but the formation of the HFEA in the Human Fertilisation and Embryology Act of 1990, providing a clear regulatory framework in the UK for research involving human embryos.

It would be unscientific to argue that, because things turned out fine on that occasion, they will inevitably do so for mitochondrial replacement. No one can be wholly certain what the biological consequences of this technique will be, which is why the HFEA will grant licences to use it only on the carefully worded condition that they are deemed “not unsafe”. But the parallels in the tone of the debate then and now are a reminder of the deep-rooted fears that technological intervention in procreation seems to awaken.

Scientists supportive of such innovations often complain that the opponents are motivated by ignorance and prejudice. They are right to conclude that public engagement is important – in a poll on artificial insemination in 1969, the proportion of people who approved almost doubled when they were informed about the prospects for treating infertility rather than just being given a technical account. But they shouldn’t suppose that science will banish these misgivings. They resurface every time there is a significant advance in reproductive technology: with pre-implantation genetic diagnosis, with the ICSI variant of IVF and so on. They will undoubtedly do so again.

In all these cases, much of the opposition came from people with a strong religious faith. As one of the versions of mitochondrial replacement involves the destruction of embryos, it was bound to fall foul of Catholic doctrine. But rather little was made of that elsewhere, perhaps an acknowledgement that in terms of UK regulation that battle was lost some time ago. (In Italy and the US, say, it is a very different story.) The Archbishops’ Council of the Church of England, for example, stressed that it was worried about the safety and ethical aspects of the technique: the Bishop of Swindon and the C of E’s national adviser for medical ethics warned of “unknown interactions between the DNA in the mitochondria and the DNA in the nucleus [that] might potentially cause abnormality or be found to influence significant personal qualities or characteristics.” Safety is of course paramount in the decision, but the scientific assessments have naturally given it a great deal of attention already.

Lord Deben, who led opposition to the bill in the Lords, addressed this matter head on by denying that his Catholicism had anything to do with it. “I hope no one will say that I am putting this case for any reason other than the one that I put forward,” he said. We can take it on trust that this is what he believes, while finding it surprising that the clear and compelling responses to some of his concerns offered by scientific peers such as Matt Ridley and Robert Winston left him unmoved.

Can it really be coincidental, though, that many of the peers speaking against the bill are known to have strong religious convictions? Certainly, there are secular voices opposing the technology too, in particular campaigners against genetic manipulations in general such as Marcy Darnovsky of the Center for Genetics and Society, who responded to the ongoing deliberations of the US Food and Drug Administration over mitochondrial transfer not only by flagging up alleged safety issues but also by insisting that we consider babies conceived this way to be “genetically modified”, and warning of “mission creep” and “high-tech eugenics”. “How far will we go in our efforts to engineer humans?” she asked in the New York Times.

Parallels between these objections from religious and secular quarters suggest that they reflect a deeper and largely unarticulated sense of unease. We are unlikely to progress beyond the polarization between technological boosterism on one side and conservative Luddites and theologians on the other unless we can get to the core of the matter – which is evidently not scriptural, the Bible being somewhat silent about biotechnological ethics.

Bioethicist Leon Kass, who led the George W. Bush administration’s Council on Bioethics when in 2001 it blocked public funding of most stem-cell research, has argued that instinctive disquiet about some advances in assisted conception and human biotechnology is “the emotional expression of deep wisdom, beyond reason’s power fully to articulate it”: an idea he calls the wisdom of repugnance. “Shallow are the souls”, he says, “that have forgotten how to shudder.” I strongly suspect that, beneath many of the arguments about the safety and legality of mitochondrial replacement lies an instinctive repugnance that is beyond reason’s power to articulate.

The problem, of course, is that what one person recoils from, another sees as a valuable opportunity for human well-being. Yet what are these feelings really about?

Like many of our subconscious fears, they are revealed in the stories we tell. Disquiet at the artificial intervention in procreation goes back a long way: to the tales of Prometheus, of the medieval homunculus and golem, and then to Goethe’s Faust and Shelley’s Victor Frankenstein, E.T.A. Hoffmann’s automaton Olympia, the Hatcheries of Brave New World, modern stories of clones and Ex Machina’s Ava. On the surface these stories seem to interrogate humankind’s hubris in trying to do God’s work; so often they turn out on closer inspection to explore more intimate questions of, say, parenthood and identity. They do the universal job of myth, creating an “other” not as a cautionary warning but in order more safely to examine ourselves. So, for example, when we hear that a man raising a daughter cloned from his wife’s cells (not, I admit, an unproblematic scenario) will be irresistibly attracted to her, we are really hearing about our own horror of incestuous fantasies. Only in Hollywood does Frankenstein’s monster turn bad because he is tainted from the outset by his origins; for Shelley, it is a failure of parenting.

I don’t think it is reading too much into the “three-parent baby” label to see it as a reflection of the same anxieties. Many children already have three effective parents, or more – through step-parents, same-sex relationships, adoption and so forth. When applied to mitochondrial transfer, this term shows how strongly personhood has become equated now with genetics, and indicates to geneticists that they have some work to do to move the public on from the strictly deterministic view of genetics that the early rhetoric of the field unwittingly fostered.

We can feel justifiably proud that the UK has been the first country to grapple with the issues raised by this new technology. It has shown already that embracing reproductive technologies can be the exact opposite of a slippery slope: what IVF led to was not a Brave New World of designer babies, but a clear regulatory framework that is capable of being permissive and casuistic, not bound by outmoded principles. The UK is not alone in declining to prohibit the technique, but it is right to have made that decision actively.

It is also right that that decision canvassed a wide range of opinions. Some scientists have questioned why religious leaders should be granted any special status in pronouncing on ethics. But the most thoughtful of them often turn out to have a subtle and humane moral sensibility of the kind that faith should require. There is a well-developed strand of philosophical thought on the moral authority of nature, and theology is a part of it. But on questions like this, we have a responsibility to examine our own responses as honestly as we can.

Monday, February 23, 2015

Why dogs aren't enough in Many Worlds

I'm very glad some folks are finding this exchange on Many Worlds instructive. That was really all I wanted: to get a proper discussion of these issues going. The tone that Sean Carroll found “snide and aggressive” was intended as polemical: it’s just a rhetorical style, you know? What I certainly wanted to avoid (forgive me if I didn’t) was any name-calling or implications of stupidity, fraud, chicanery etc. (It doesn’t surprise me that some of the responses failed to do the same.) My experience has been that it is necessary to light a fire under the MWI in order to get a response at all. Indeed, even then it is proving very difficult to keep the feedback to the point and not get led astray by red herrings. For example, Sean made a big point of saying:
“The people who object to MWI because of all those unobservable worlds aren’t really objecting to MWI at all; they just don’t like and/or understand quantum mechanics.”
I’m genuinely unsure if this is supposed to be referring to me. Since I said in my article
“Certainly, to say that the world(s) surely can’t be that weird is no objection at all”
then I kind of assume it isn’t – so I’m not sure why he brings the point up. I even went to the trouble of trying explicitly to ward off attempts to dismiss my arguments that way:
“Many Worlders harp on about this complaint precisely because it is so easily dismissed.”
Puzzling.

But what Sean said next seems to get (albeit obliquely) to the heart of the matter:
“Hilbert space is big, regardless of one’s personal feelings on the matter.”

Whatever these arguments are about, they are surely not about what Hilbert space looks like, since Hilbert space is a mathematical construct – that is simply true by definition, and there is no argument about it. The argument is about what ontological status we ascribe to the state vectors that appear in Hilbert space. I do see the MW reasoning here: the reality we currently experience corresponds to a state vector in Hilbert space, so what grounds do we have for denying reality to the other states into which it can evolve by smooth unitary transformation? The problem, of course, is that a single state in quantum mechanics can evolve into multiple states. Yet if we are going to exclude any of those from having objective reality, we surely must have some criterion for doing so. Absent that, we have the MWI. I do understand that reasoning.
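To put that reasoning in symbols – this is the standard textbook schematic, not anyone’s specific formulation – a measurement interaction evolves unitarily, and a single initial state ends up entangled with the apparatus as a superposition of branch states:

```latex
% Unitary measurement interaction: one state in, a superposition of branches out
\[
\bigl(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\bigr)\otimes|A_0\rangle
\;\xrightarrow{\;\hat{U}\;}\;
\alpha\,|{\uparrow}\rangle|A_\uparrow\rangle \;+\; \beta\,|{\downarrow}\rangle|A_\downarrow\rangle
\]
```

The Everettian move is simply to decline to strike out either term on the right; the objector’s demand is for a criterion that would license striking one out.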

So it seems that the arguments could be put like this: is it an additional axiom to say “All states in Hilbert space accessible from an initial one that describes our real world are also describing real worlds” – or is it not? To objectors, it is, and a very expensive one at that. To MWers, it is merely what we do for all theories. “Give us one good reason why it shouldn’t apply here”, they say.

It’s a fair point. One objection, which has nothing whatsoever to do with the vastness of Hilbert space, is to say, well, no one has seriously posited such a vast number of multiple and in some sense “parallel” (initially) worlds before – and don’t we in science say that extraordinary claims require extraordinary evidence?* Might we not ask you to work a bit harder in this particular case to establish the relationship between what the formalism says and what exists in physical reality? After all, whether or not we grant all accessible states in Hilbert space a physical reality, we seem to get identical observational consequences. So right now, the only way we can choose between them is philosophically. And we don’t usually regard philosophy as the final arbiter in science.

___________________________________________
*For example, Sean emphasizes that the many worlds are a prediction, not a postulate of the theory. But most other theories (all others?) can also tell us specific things that they do not predict: things we will not see happen. I’m not clear whether the MWI can rule out any particular thing actually coming to pass that is consistent with the laws of physics. The Copenhagen interpretation (just to take an example) can exclude the “prediction” that human life came to an end following a nuclear conflict sparked by the Bay of Pigs incident. Correct me if I am wrong, but the MWI cannot rule out this “prediction”. It cannot rule out the “prediction” that Many Worlders were never bothered by this irritating science writer. Even if MWI does not exactly say “everything happens”, can it tell us there is anything in particular (consistent with the laws of physics) that does not?
____________________________________________

So up to this point, I can appreciate both points of view. What makes me uncomfortable is that the MWers seem so determined to pretend that what they are telling us is actually not so remarkable after all. What’s so surprising, they ask, about the idea that you can instantly duplicate a consciousness, again and again and again? What is frustrating is the blithe insistence that we should believe this, I suspect the most extraordinary claim that science has ever made, on the basis simply of Occam’s (fallible) razor. This is not, do please note, at all the same as worrying about “too many worlds”.

Still, who cares about my discomfort, right? But I wanted to suggest that it’s not just a matter of whether we are prepared to accept this extraordinary possibility. We need to acknowledge that it is rather more complicated than coming to terms with a cute gaggle of sci-fi Doppelgängers. This is not about whether or not people are “all that different from atoms”. It is about whether what people say can be ascribed a coherent meaning. Those responses that have acknowledged this point at all have tended to say “Oh who cares about selfhood and agency? How absurd to expect the theory to deal with unplumbed mysteries like that!” To which I would say that interpretations of quantum theory that don’t have multiple physical worlds don’t even have to think about dealing with them. So perhaps even that Occam’s razor argument is more complicated than you think.

It’s been instructive to see that the MWI is something of a hydra: there are several versions, or at least several views on it. Some say that the “worlds” bit is itself a red herring, a bit of gratuitous sci-fi that we could do without. Others insist that the worlds must be actual: Sean says that people must be copied, and that only makes any kind of sense if the world is copied around them. Some say that invoking problems with personhood is irrelevant since Many Worlds would be true anyway even without people in it. (The inconvenience with this argument is that there are people in it.) Sean, interestingly, says that copying people is not only real but essential, “for deriving the Born rule” in MWI. This is a pointer to his fascinating paper on “self-locating uncertainty”. Here he and Charles Sebens point out that, in the MWI where branch states are rendered distinct and non-interacting by decoherence, the finite time required for an observer to register which branch she is on means that there is a tiny but inescapable interval during which she exists as two identical copies but doesn’t know which one she is. In this case, Carroll and Sebens argue, the rational way to “apportion credence to the different possibilities” is to use the Born rule, which allows us to calculate from the wavefunction the likelihood of finding a particular result when we make a measurement. This, they say, is why probability seems to come into the situation at all, given that the MWI says that everything that can happen does happen with 100% probability.
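For reference, the Born rule in question is the ordinary textbook statement – nothing Everett-specific appears in the formula itself:

```latex
% Born rule: outcome probabilities from wavefunction amplitudes
\[
|\psi\rangle = \sum_i c_i\,|i\rangle
\qquad\Longrightarrow\qquad
P(i) = |\langle i|\psi\rangle|^2 = |c_i|^2
\]
```

The Carroll–Sebens claim, as I understand it, is that an observer caught in the post-decoherence, pre-observation interval should apportion her credences across branches in exactly these proportions.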

This sounds completely bizarre: a rule of quantum physics works because of us? But I think I can see how it makes sense. The universe doesn’t care about the Born rule: it’s not forever calculating “probabilities”. Rather, the Born rule is only needed in our mathematical theory of quantum phenomena – and this argument offers an explanation of why it works when it is put there. Now, there is a bit of heavy pulling still to do in order to get from a “rational way to make predictions while we are caught in that brief instant after the universe has split but before we have been able to determine which branch we are in” to a component of the theory that we use routinely even while we are not agreed that this situation arises in the first place. I’m still not clear how that bit works. Neither is it fully clear to me how we are ever really in that limbo between the universe splitting and us knowing which branch we took, given that, in one view of the Many Worlds at least, the universe has split countless times again during that interval. Maybe the answer would be that all those subsequent splits produce versions that are identical with respect to the initial “experiment”, unless they involve processes that interact with the “experiment” and so are part of it anyway. I don’t know.

I do think I can see the answer to my question to Sean (not meant flippantly) of whether it has to be humans who split in order to get the Born rule, and not merely dogs. The answer, I think, is that dogs won’t do because dogs don’t do quantum mechanics. What seems weird is that we’re then left with an aspect of quantum theory that, in this argument, is the way it is not because of some fundamental underlying physical reason so much as because we asked the question in the first place. It feels a bit like Einstein’s moon: was the Born rule true before we invented quantum theory? Or to put it another way, how is consciousness having this agency without appearing explicitly anywhere in the theory? I’m not advancing these as critiques, just saying it seems odd. I’m happy to believe that, within the MWI, the logic of this derivation of the Born rule is sound.

But doesn’t that mean that deriving the Born rule, a longstanding problem in QM, is evidence for the MWI? Sadly not. There are purported derivations within the other interpretations too. None is universally accepted.

The wider point is that, if this is Sean’s reason for insisting we include dividing people in MWI, then the questions about identity raised in my article stand. You know, perhaps they really are trivial? But no one seems to want to say why. This refusal to confront the apparent logical absurdities and contradictions of a theory which predicts that “everything” really happens is curious. It feels as though the MWers find something improper about it – as though this is not quite the respectable business for a physicist who should be contemplating rates of decoherence and the emergence of pointer states and so on. But if you insist on a theory like this, you’re stuck with all its implications – unless, that is, you have some means of “disappearing worlds” that scramble the ability to make meaningful statements about anything.

Saturday, February 21, 2015

Many Worlds: can we make a deal?

OK, picking up from my last post, I think I see a way whereby we can leave this. Advocates of the Many Worlds Interpretation will agree that it does not pretend to say anything about humans and stuff, and that expecting it to do so is as absurd as expecting someone to write down and solve the Schrödinger equation for a football game. They will agree that all those popular (and sometimes technical) books and articles telling us about our alternative quantum selves and Many-Worlds morality and so forth, are just the wilder speculative fringes of the theory that struggle with problems of logical coherence. They agree that statements like DeWitt’s that “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies” aren’t actually what the theory says at all. They acknowledge a bit more clearly that the Alices and Bobs in their papers are just representations of devices that can make an observation (yes, I know this is all they have ever been intended as anyway). They agree that when they say “The world is described by a quantum state”, they are using “world” in quite a special sense that makes no particular claims about our place(s) or even our existence(s) in it*. They admit that if one tries to broaden this sense of “world”, some difficult conundrums arise. They admit that the mathematical and ontological status of these “worlds” are not the same thing, and that the difference is not resolved by saying that the “worlds” are “really” there in Hilbert space, waiting to be realized.

Then – then – I’m happy to say, sure, the Many Worlds Interpretation, which yes indeed we might better relabel the Everettian Interpretation (shall we begin now?), is a coherent way to think about quantum theory. Possibly even a default way, though I shall want to seek advice on that.

Is that a deal?

*I submit that most physicists and chemists, if they write down the Schrödinger equation for, say, a molecular orbital, are not thinking that they are actually writing down the equation for a “world” but with some bits omitted. One might respond “Well, they should, unless they are content to be “shut up and calculate” scientists”. But I would submit that they are just being good scientists in recognizing the boundaries of the system their equations describe and are not trying to make claims about things they don’t know about or understand.

Friday, February 20, 2015

The latest on the huge number of unobservable worlds

OK, I get the point. Sean Carroll really doesn’t care about problems of the ontology of personhood in the Many Worlds Interpretation. I figured that these would not be at the forefront of a physicist’s mind, which is fair enough. But philosophically they are valid questions – which is why David Lewis thought a fair bit about them in his theory of modal realism. It seems to me that a supposedly scientific theory that walks up and says “Sorry, but you are not you – I can’t say what it is you are, but it’s not what you think you are” is obliged to take questions afterwards. I wrote my article in Aeon to try to get those questions, so determinedly overlooked in many expositions of Many Worlds (though clearly acknowledged, if not really addressed, by one of its thoughtful proponents Lev Vaidman), on the table.

But no. We’re not having that, apparently. Sean Carroll’s response doesn’t even mention them. Perhaps he feels as Chad Orzel does: “Who cares? All that stuff is just a collection of foggily defined emergent phenomena that arise from vast numbers of simple quantum systems. Absent a concrete definition, and most importantly a solid idea of how you would measure any of these things, any argument about theories of mind and selfhood and all that stuff is inescapably incoherent.” I’m sort of hoping that isn’t the case. I’m hoping that when Carroll writes of an experiment on a spin superposition being measured by Alice, “There’s a version of Alice who saw up and a version who saw down”, he doesn’t really think we can treat Alice – I mean real-world Alices, not the placeholder for a measuring device – like a CCD camera. It’s the business of physics to simplify, but we know what Einstein said about that.

All he picks up on is the objection that I explicitly call minor in comparison: the matter of testing the MWI. His response baffles me:
"The MWI does not postulate a huge number of unobservable worlds, misleading name notwithstanding. (One reason many of us like to call it “Everettian Quantum Mechanics” instead of “Many-Worlds.”) Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate."

(I don’t quite get the discomfort with the “Many Worlds” label. It seems to me that is a reasonable name for a theory that “predicts the existence of a huge number of unobservable worlds.” Still, call it what you will.)

I’m missing something here. By and large, scientific theories make predictions, and then we do experiments to see if those predictions are right. MWI predicts “a huge number of worlds”, but apparently it is unreasonable to ask if we might examine that prediction in the laboratory.

But, Carroll says, “You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away.” The latter is a non-sequitur: accepting a prediction that can’t be tested is not the same as accepting the possibility of exceptions. And you might reasonably say that there is a difference between accepting a theory even if you can’t get experimentally at what it implies in some obscure corner of parameter space and accepting a theory that “predicts a huge number of unobservable worlds”, some populated by other versions of you doing unobservable things. But OK, might we then have just one prediction that we can test please?

I was dissatisfied with Carroll’s earlier suggestion that you can test MWI just by finding a system that violates the Schrödinger equation or the principle of superposition, because, as I pointed out, it is not a unique interpretation of quantum theory in that regard. His response? “So what?” Alternatives to MWI, he says, have to add to its postulates (or change them), and so they too should predict something we can test. And some do. I understand that Carroll thinks the MWI is uniquely exempt from having to defend its interpretation in particular in the experimental arena, because its axioms are the minimal ones. The point I wanted to raise in my article, though, was that the wider implications of the MWI make it less minimal than its advocates claim. If a “minimal” physical theory predicted something that seemed nonsensical about how cells work, but a more complex theory with an experimentally unsupported postulate took away that problem, would we be right to assert that the minimal theory must be right until there was some evidence for that other postulate? Of course, there may be a good argument for why trashing any coherent notion of self and identity and agency is not a problem. I’d love to hear it. I’d rather it wasn’t just ignored.

“Those worlds happen automatically” – sure, I see that. They are a prediction – sure, I see that. But this point-blank refusal to think any more about them? I don’t get that. Perhaps if Many Worlders were to stop, just stop, trying to tell us anything about how those many unobservable worlds are peopled, to stop invoking copies of Alice as placeholders for quantum measurements, to stop talking about quantum brothers, to say simply that they don’t really have a clue what their interpretation can mean for our notions of identity, then I would rest easier. And so would many, many other physicists. That, I think, would make them a lot happier than being told they don’t understand quantum theory or that they are being silly.

I’m concerned that this sounds like a shot at Sean Carroll. I really don’t want that. Not only is he a lot smarter than me, but he writes so damned well on such intensely interesting stuff. I’m not saying that just to flatter him. I just wanted to get these things discussed.

Many Worlds - a longer view

Here is the pre-edited version of my article for Aeon on the Many Worlds Interpretation of quantum theory. I’m putting it here not because it is any better than the published version (Aeon’s editing was as excellent and improving as ever), but because it gives me a bit more room to go into some of the issues.

In my article I stood up for philosophy. But that doesn’t mean philosophers necessarily get it right either. In the ensuing discussion I have been directed to a talk by philosopher of science David Wallace. Here he criticizes the Copenhagen view that theories are there to make predictions, not to tell us how the world works. He gets a laugh from his audience for suggesting that, if this were so, scientists would have been forced to ask for funding for the LHC not because of what we’d learn from it but so that we could test the predictions made for it.

This is wrong on so many levels. Contrasting “finding out about the world” against “testing predictions of theories” is a totally false opposition. We obviously test predictions of theories to find out if they do a good job of helping us to explain and understand the world. The hope is that the theories, which are obviously idealizations, will get better and better at predicting the fine details of what we see around us, and thereby enable us to tell ever more complete and satisfying stories about why things are this way (and, of course, to allow us to do some useful stuff for “the relief of man’s estate”). So there is a sense in which the justification for the LHC derided by Wallace is in fact completely the right one, although that would have been a very poor way of putting it. Almost no one in science (give or take the [very] odd Nobel laureate who capitalizes Truth like some religious crank) talks about “truth” – they recognize that our theories are simply meant to be good working descriptions of what we see, with predictive value. That makes them “true” not in some eternal Platonic sense but as ways of explaining the world that have more validity than the alternatives. No one considers Newtonian mechanics to be “untrue” because of general relativity. So in this regard, Wallace’s attack on the Copenhagen view is trivial. (I don’t doubt that he could put the case better – it’s just that he didn’t do so here.)

What I really object to is the idea, which Wallace repeats, that Many Worlds is simply “what the theory tells you”. To my mind, a theory tells you something if it predicts the corresponding states – say, the electrical current flowing through a circuit, or the reaction rate of an enzymatic process. Wallace asserts that quantum theory “predicts” a you seeing a live Schrödinger’s cat and a you seeing a dead one. I say, show me the equation where those “yous” appear (along with the universes they are in). The best the MWers can do is to say, well, let’s just denote those things as Ψ(live cat) and Ψ(dead cat), with Ψ representing the corresponding universes. Oh please.
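To be concrete about what is and isn’t on offer here: the most the formalism gives you is a schematic superposition in which the observer terms are put in by hand. Something like this (my own notation, with the ket labels as mere placeholders, not anything the theory derives):

```latex
% Schematic post-measurement state: Many Worlds reads each term as a "branch".
% The labels "you see ..." are placeholders added by hand, not derived states.
\[
|\Psi\rangle = \alpha\,|\text{live cat}\rangle \otimes |\text{you see live}\rangle
             + \beta\,|\text{dead cat}\rangle \otimes |\text{you see dead}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1.
\]
```

Nothing in such an expression specifies what a “you” (or a universe) actually is; the words in the kets do all the work.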

Some objectors to my article have been keen to insist that the MWI really isn’t that bizarre: that the other “yous” don’t do peculiar things but are pretty much just like the you-you. I can see how some, indeed many, of them would be. But there is nothing to exclude those that are not, unless you do so by hand: “Oh, the mind doesn’t work that way, they are still rational beings.” What extraordinary confidence this shows in our ability to understand the rules governing human behaviour and consciousness in more parallel worlds than we can possibly imagine: as if the very laws of physics will make sure we behave properly. Collapsing the wavefunction seems a fairly minor sleight of hand (and moreover one we can actually continue to investigate) compared to that. The truth is that we know nothing about the full range of possibilities that the MWI insists on, and nor can we ever do so.

One of the comments underneath my article – and others will doubtless repeat this – makes the remark that Many Worlds is not really about “many universes branching off” at all. Well, I guess you could choose to believe Anonymous Pete instead of Brian Greene and Max Tegmark, if you wish. Or you could follow his link to Sean Carroll’s article, which is one of the examples I cite in my piece of why MWers simply evade the “self” issue altogether.

But you know, my real motivation for writing my article is not to try to bury the MWI (the day I start imagining I am capable of such things, intellectually or otherwise, is the day to put me out to grass), but to provoke its supporters into actually addressing these issues rather than blithely ignoring them while bleating about the (undoubted) problems with the alternatives. Who knows if it will work.

_____________________________________________________________________

In 2011, participants at a conference on the placid shore of Lake Traunsee in Austria were polled on what the conference was about. You might imagine that this question would have been settled before the meeting was convened – but since the subject was quantum theory, it’s not surprising that there was still much uncertainty. The conference was called “Quantum Physics and the Nature of Reality”, and it grappled with what the theory actually means. The poll, completed by 33 of the participating physicists, mathematicians and philosophers, posed a range of unresolved questions, one of which was “What is your favourite interpretation of quantum mechanics?”

The mere question speaks volumes. Isn’t science supposed to be decided by experiment and observation, free from personal preferences? But experiments in quantum physics have been obstinately silent on what it means. All we can do is develop hunches, intuitions and, yes, favourite ideas.

Which interpretations did these experts favour? There were no fewer than 11 answers to choose from (as well as “other” and “none”). The most popular (42%) was the view put forward by Niels Bohr, Werner Heisenberg and their colleagues in the early days of quantum theory, now known as the Copenhagen Interpretation. In third place (18%) was the Many Worlds Interpretation (MWI).

You might not have heard of most of the alternatives, such as Quantum Bayesianism, Relational Quantum Mechanics, and Objective Collapse (which is not, as you might suppose, saying “what the hell”). Maybe you’ve not heard of the Copenhagen Interpretation either. But the MWI is the one with all the glamour and publicity. Why? Because it tells us that we have multiple selves, living other lives in other universes, quite possibly doing all the things that we dream of but will never achieve (or never dare). Who could resist that idea?

Yet you should. You should resist it not because it is unlikely to be true, or even because, since no one knows how to test it, the idea is not truly scientific at all. Those are valid criticisms, but the main reason you should resist it is that it is not a coherent idea, philosophically or logically. There could be no better contender for Wolfgang Pauli’s famous put-down: it is not even wrong.

Or to put it another way: the MWI is a triumph of canny marketing. That’s not some wicked ploy: no one stands to gain from its success. Rather, its adherents are like giddy lovers, blinded to the flaws beneath the superficial allure.

The measurement problem

To understand how this could happen, we need to see why, more than a hundred years after quantum theory was first conceived, experts are still gathering to debate what it means. Despite such apparently shaky foundations, it is extraordinarily successful. In fact you’d be hard pushed to find a more successful scientific theory. It can predict all kinds of phenomena with amazing precision, from the colours of grass and sky to the transparency of glass, the way enzymes work and how the sun shines.

This is because quantum mechanics, the mathematical formulation of the theory, is largely a technique: a set of procedures for calculating what properties substances have based on the positions and energies of their constituent subatomic particles. The calculations are hard, and for anything more complicated than a hydrogen atom it’s necessary to make simplifications and approximations. But we can do that very reliably. The vast majority of physicists, chemists and engineers who use quantum theory today don’t need to go to conferences on the “nature of reality” – they can do their job perfectly well if, in the famous words of physicist David Mermin, they “shut up and calculate”, and don’t think too hard about what the equations mean.

It’s true that the equations seem to insist on some strange things. They imply that very small entities like atoms and subatomic particles can be in several places at the same time. A single electron can seem to pass through two holes at once, interfering with its own motion as if it was a wave. What’s more, we can’t know everything about a particle at the same time: Heisenberg’s uncertainty principle forbids such perfect knowledge. And two particles can seem to affect one another instantly across immense tracts of space, in apparent (but not actual) violation of Einstein’s theory of special relativity.

But quantum scientists just accept such things. What really divides opinion is that quantum theory seems to do away with the notion, central to science from its beginnings, of an objective reality that we can study “from the outside”, as it were. Quantum mechanics insists that we can’t make a measurement without influencing what we measure. This isn’t a problem of acute sensitivity; it’s more fundamental than that. The most widespread form of quantum maths, devised by Erwin Schrödinger in the 1920s, describes a quantum entity using an abstract concept called a wavefunction. The wavefunction expresses all that can be known about the object. But a wavefunction doesn’t tell you what properties the object has; rather, it enumerates all the possible properties it could have, along with their relative probabilities.

Which of these possibilities is real? Is an electron here or there? Is Schrödinger’s cat alive or dead? We can find out by looking – but quantum mechanics seems to be telling us that the very act of looking forces the universe to make that decision, at random. Before we looked, there were only probabilities.

The Copenhagen Interpretation insists that that’s all there is to it. To ask what state a quantum entity is in before we looked is meaningless. That was what provoked Einstein to complain about God playing dice. He couldn’t abandon the belief that quantum objects, like larger ones we can see and touch, have well defined properties at all times, even if we don’t know what they are. We believe that a cricket ball is red even if we don’t look at it; surely electrons should be no different? This “measurement problem” is at the root of the arguments.

Avoiding the collapse

The way the problem is conventionally expressed is to say that measurement – which really means any interaction of a particle with another system that could be used to deduce its state – “collapses” the wavefunction, extracting a single outcome from the range of probabilities that the wavefunction encodes. But quantum mechanics offers no prescription for how this collapse occurs; it has to be put in by hand. That’s highly unsatisfactory.

There are various ways of looking at this. A Copenhagenist view might be simply to accept that wavefunction collapse is an additional ingredient of the theory, which we don’t understand. Another view is to suppose that wavefunction collapse isn’t just a mathematical sleight-of-hand but an actual, physical process, a little like radioactive decay of an atom, which could in principle be observed if only we had an experimental technique fast and sensitive enough. That’s the Objective Collapse interpretation, and among its advocates is Roger Penrose, who suspects that the collapse process might involve gravity.

Proponents of the Many Worlds Interpretation are oddly reluctant to admit that their preferred view is simply another option. They often like to insist that There Is No Alternative – that the MWI is the only way of taking quantum theory seriously. It’s surprising, then, that in fact Many Worlders don’t even take their own view seriously enough.

That view was presented in the 1957 doctoral thesis of the American physicist Hugh Everett. He asked why, instead of fretting about the cumbersome nature of wavefunction collapse, we don’t just do away with it. What if this collapse is just an illusion, and all the possibilities announced in the wavefunction have a physical reality? Perhaps when we make a measurement we only see one of those realities, yet the others have a separate existence too.

An existence where? This is where the many worlds come in. Everett himself never used that term, but his proposal was championed in the 1970s by the physicist Bryce DeWitt, who argued that the alternative outcomes of the experiment must exist in a parallel reality: another world. You measure the path of an electron, and in this world it seems to go this way, but in another world it went that way.

That requires a parallel, identical apparatus for the electron to traverse. More, it requires a parallel you to measure it. Once begun, this process of fabrication has no end: you have to build an entire parallel universe around that one electron, identical in all respects except where the electron went. You avoid the complication of wavefunction collapse, but at the expense of making another universe. The theory doesn’t exactly predict the other universe in the way that scientific theories usually make predictions. It’s just a deduction from the hypothesis that the other electron path is real too.

This picture really gets extravagant when you appreciate what a measurement is. In one view, any interaction between one quantum entity and another – a photon of light bouncing off an atom – can produce alternative outcomes, and so demands parallel universes. As DeWitt put it, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies”.

Recall that this profusion is deemed necessary only because we don’t yet understand wavefunction collapse. It’s a way of avoiding the mathematical ungainliness of that lacuna. “If you prefer a simple and purely mathematical theory, then you – like me – are stuck with the many-worlds interpretation,” claims MIT physicist Max Tegmark, one of the most prominent MWI popularizers. That would be easier to swallow if the “mathematical simplicity” were not so cheaply bought. The corollary of Everett’s proposal is that there is in fact just a single wavefunction for the entire universe. The “simple maths” comes from representing this universal wavefunction as a symbol Ψ: allegedly a complete description of everything that is or ever was, including the stuff we don’t yet understand. You might sense some issues being swept under the carpet here.

What about us?

But let’s stick with it. What are these parallel worlds like? This hinges on what exactly the “experiments” that produce or differentiate them are. So you’d think that the Many Worlders would take care to get that straight. But they’re oddly evasive, or maybe just relaxed, about it. Even one of the theory’s most thoughtful supporters, Russian-Israeli physicist Lev Vaidman, seems to dodge the issue in his entry on the MWI in the Stanford Encyclopedia of Philosophy:

“Quantum experiments take place everywhere and very often, not just in physics laboratories: even the irregular blinking of an old fluorescent bulb is a quantum experiment.”

Vaidman stresses that every world has to be formally accessible from the others: it has to be derived from one of the alternatives encoded in the wavefunction of one of the particles. You could say that the universes are in this sense all connected, like stations on the London Underground. So what does this exclude? Nobody knows, and there is no obvious way of finding out.

I put the question directly to Lev: what exactly counts as an experiment? An event qualifies, he replied, “if it leads to more than one ‘story’”. He added: “If you toss a coin from your pocket, does it split the world? Say you see tails – is there a parallel world with heads?” Well, that was certainly my question. But I was kind of hoping for an answer.

Most popularizers of the MWI are less reticent. In the “multiverse” of the Many Worlds view, says Tegmark, “all possible states exist at every instant”. One can argue about whether that’s quite the same as DeWitt’s version, but either way the result seems to accord with the popular view that everything that is physically possible is realized in one of the parallel universes.

The real problem, however, is that Many Worlders don’t seem keen to think about what this means. No, that’s too kind. They love to think about what it means – but only insofar as it lets them tell us wonderful, lurid and beguiling stories. The MWI seduces us by multiplying our selves beyond measure, giving us fantasy lives in which there is no obvious limit to what we can do. “The act of making a decision”, says Tegmark – a decision here counting as an experiment – “causes a person to split into multiple copies.”

That must be a pretty big deal, right? Not for theoretical physicist Sean Carroll of the California Institute of Technology, whose article “Why the Many-Worlds formulation of quantum mechanics is probably correct” on his popular blog Preposterous Universe makes no mention of these alter egos. Oh, they are there in the background all right – the “copies” of the human observer of a quantum event are casually mentioned in the midst of the 40-page paper by Carroll that his blog cites. But they are nothing compared with the relief of not having to fret about wavefunction collapse. It’s as though the burning question about the existence of ghosts is whether they obey the normal laws of mechanics, rather than whether they would radically change our view of our own existence.

But if some Many Worlders are remarkably determined to avert their eyes, others delight in this multiplicity of self. Even then, they will contemplate it only as material for those seductive tales of other lives, in which, in some world or another, we have already done everything we could wish.

Most MWI popularizers think they are blowing our minds with this stuff, whereas in fact they are flattering them. They delve into the implications for personhood just far enough to lull us with the uncanniness of the centuries-old Doppelgänger trope, and then flit off again. The result sounds transgressively exciting while familiar enough to be persuasive.

Identity crisis

In what sense are those other copies actually “us”? Brian Greene, another prominent MW advocate, tells us gleefully that “each copy is you.” In other words, you just need to broaden your mind beyond your parochial idea of what “you” means. Each of these individuals has its own consciousness, and so each believes he or she is “you” – but the real “you” is their sum total. Vaidman puts the issue more carefully: all the copies of himself are “Lev Vaidman”, but there’s only one that he can call “me”.

“‘I’ is defined at a particular time by a complete (classical) description of the state of my body and of my brain,” he explains. “At the present moment there are many different ‘Levs’ in different worlds, but it is meaningless to say that now there is another ‘I’.” Yet it is also scientifically and, I think, logically meaningless to say that there is an “I” at all in his definition, given that we must assume that any “I” is generating copies faster than the speed of thought. A “complete description” of the state of his body and brain never exists.

What’s more, this half-baked stitching together of quantum wavefunctions and the notion of mind leads to a reductio ad absurdum. It makes Lev Vaidman a terrible liar. He is actually a very decent fellow and I don’t want to impugn him, but by his own admission it seems virtually inevitable that “Lev Vaidman” has in other worlds denounced the MWI as a ridiculous fantasy, and has won a Nobel prize for showing, in the face of prevailing opinion, that it is false. (If these scenarios strike you as silly or frivolous, you’re getting the point.) “Lev Vaidman” is probably also a felon, for there is no prescription in the MWI for ruling out a world in which he has killed every physicist who believes in the MWI, or alternatively, every physicist who doesn’t. “OK, those Levs exist – but you should believe me, not them!” he might reply – except that this very belief denies the riposte any meaning.

The difficulties don’t end there. It is extraordinary how attached the MWI advocates are to themselves, as if all the Many Worlds simply have “copies” leading other lives. Vaidman’s neat categorization of “I” and “Lev” works because it sticks to the tidy conceit that the grown-up “I” is being split into ever more “copies” that do different things thereafter. (Not all MWI descriptions will call this copying of selves “splitting” – they say that the copies existed all along – but that doesn’t alter the point.)

That isn't, however, what the MWI is really about – it's just a sci-fi scenario derived from it. As Tegmark explains, the MWI is really about all possible states existing at every instant. Some of these, it’s true, must contain essentially indistinguishable Maxes doing and seeing different things. Tegmark waxes lyrical about these: “I feel a strong kinship with parallel Maxes, even though I never get to meet them. They share my values, my feelings, my memories – they’re closer to me than brothers.”

He doesn't trouble his mind about the many, many more almost-Maxes, near-copies with perhaps a gene or two mutated – not to mention the not-much-like Maxes, and so on into a continuum of utterly different beings. Why not? Because you can't make neat ontological statements about them, or embrace them as brothers. They spoil the story, the rotters. They turn it into a story that doesn't make sense, that can't even be told. So they become the mad relatives in the attic. The conceit of “multiple selves” isn’t at all what the MWI, taken at face value, is proposing. On the contrary, it is dismantling the whole notion of selfhood – it is denying any real meaning of “you” at all.

Is that really so different from what we keep hearing from neuroscientists and psychologists – that our comforting notions of selfhood are all just an illusion concocted by the brain to allow us to function? I think it is. There is a gulf between a useful but fragile cognitive construct based on measurable sensory phenomena, and a claim to dissolve all personhood and autonomy because it makes the maths neater. In the Borgesian library of Many Worlds, it seems there can be no fact of the matter about what is or isn’t you, and what you did or didn’t do.

State of mind

Compared with these problems, the difficulty of testing the MWI experimentally (which would seem a requirement of it being truly scientific) is a small matter. ‘It’s trivial to falsify [MWI]’, boasts Carroll: ‘just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes.’ But most other interpretations of quantum theory assume them (at least) too – so an experiment like that would rule them all out, and say nothing about the special status of the MWI. No, we’d quite like to see some evidence for those other universes that this particular interpretation uniquely predicts. That’s just what the hypothesis forbids, you say? What a nuisance.

Might this all simply be a habit of a certain sort of mind? The MWI has a striking parallel in analytic philosophy that goes by the name of modal realism. Ever since Gottfried Leibniz argued that the problem of good and evil can be resolved by postulating that ours is the best of all possible worlds, the notion of “possible worlds” has supplied philosophers with a scheme for debating the issue of the necessity or contingency of truths. The American philosopher David Lewis pushed this line of thought to its limits by asserting, in the position called modal realism, that all worlds that are possible have a genuine physical existence, albeit isolated causally and spatiotemporally from ours. On what grounds? Largely on the basis that there is no logical reason to deny their existence, but also because accepting this leads to an economy of axioms: you don’t have to explain away their non-existence. Many philosophers regard this as legerdemain, but the similarities with the MWI of quantum theory are clear: the proposition stems not from any empirical motive but simply because it allegedly simplifies matters (after all, it takes only four words to say “everything possible is real”, right?). Tegmark’s so-called Ultimate Ensemble theory – a many-worlds picture not explicitly predicated on quantum principles but still including them – has been interpreted as a mathematical expression of modal realism, since it proposes that all mathematical entities that can be calculated in principle (that is, which are possible in the sense of being “computable”) must be real. Lewis’s modal realism does, however, at least have the virtue that he thought in some detail about the issues of personal identity it raises.

If I call these ideas fantasies, it is not to deride or dismiss them but to keep in view the fact that beneath their apparel of scientific equations or symbolic logic they are acts of imagination, of “just supposing”. Who can object to imagination? Not me. But when taken to the extreme, parallel universes become a kind of nihilism: if you believe everything then you believe nothing. The MWI allows – perhaps insists – not just on our having cosily familial ‘quantum brothers’ but on worlds where gods, magic and miracles exist and where science is inevitably (if rarely) violated by chance breakdowns of the usual statistical regularities of physics.

Certainly, to say that the world(s) surely can’t be that weird is no objection at all; Many Worlders harp on about this complaint precisely because it is so easily dismissed. MWI doesn’t, though, imply that things really are weirder than we thought; it denies us any way of saying anything, because it entails saying (and doing) everything else too, while at the same time removing the “we” who says it. This does not demand broad-mindedness, but rather a blind acceptance of ontological incoherence.

That its supporters refuse to engage in any depth with the questions the MWI poses about the ontology and autonomy of self is lamentable. But this is (speaking as an ex-physicist) very much a physicist’s blind spot: a failure to recognize, or perhaps to care, that problems arising at a level beyond that of the fundamental, abstract theory can be anything more than a minor inconvenience.

If the MWI were supported by some sound science, we would have to deal with it – and to do so with more seriousness than the merry invention of Doppelgängers to measure both quantum states of a photon. But it is not. It is grounded in a half-baked philosophical argument about a preference to simplify the axioms. Until Many Worlders can take seriously the philosophical implications of their vision, it’s not clear why their colleagues, or the rest of us, should demur from the judgement of the philosopher of science Robert Crease that the MWI is ‘one of the most implausible and unrealistic ideas in the history of science’ [see The Quantum Moment, 2014]. To pretend that the only conceptual challenge for a theory that allows everything conceivable to happen (or at best fails to provide any prescription for precluding the possibilities) is to accommodate Sliding Doors scenarios shows a puzzling lacuna in the formidable minds of its advocates. Perhaps they should stop trying to tell us that philosophy is dead.

Monday, February 16, 2015

General relativity's big year?

For the record, my op-ed in the International New York Times.

______________________________________________________________

You might think that physicists would be satisfied by now. They have been testing Einstein’s theory of general relativity (GR), which explains what gravity is, ever since he first described it one hundred years ago this year. And not once has it been found wanting. But they are still investigating its predictions to the nth decimal place, and this centenary year should see some particularly stringent tests. Perhaps one will uncover the first tiny flaw in this awesome mathematical edifice.

Stranger still is that, although GR is celebrated and revered among physicists like no other theory in science, they would doubtless react with joy if it is proved to fail. That’s science: you produce a smart idea and then test it to breaking point. But this determination to expose flaws isn’t really about skepticism, far less wanton nihilism. Most physicists are already convinced that GR is not the final word on gravity. That’s because the theory, which is applied mostly at the scale of stars and galaxies, doesn’t mesh with quantum theory, the other cornerstone of modern physics, which describes the ultra-small world of atoms and subatomic particles. It’s suspected that underlying both theories is a theory of quantum gravity, from which GR and conventional quantum theory emerge as excellent approximations just as Isaac Newton’s theory of gravity, posed in the late seventeenth century, works fine except in some extreme situations.

The hope is, then, that if we can find some dark corner of the universe where GR fails, perhaps because the gravitational fields it describes are so enormously strong, we might glimpse what extra ingredient is needed – one that might point the way to a theory of quantum gravity.

General relativity was not just the last of Einstein’s truly magnificent ideas, but arguably the greatest of them. His annus mirabilis is usually cited as 1905, when, among other things, he kick-started quantum theory and came up with special relativity, describing the distortion of time and space caused by travelling close to the speed of light. General relativity offered a broader picture, embracing motion that changes speed, such as objects accelerating as they fall in a gravitational field. Einstein explained that gravity can be thought of as curvature induced in the very fabric of time and space by the presence of a mass. This too distorts time: clocks run slower in a strong gravitational field than they do in empty space. That’s one prediction that has now been thoroughly confirmed by the use of extremely accurate clocks on space satellites, and in fact GPS systems have to adjust their clocks to allow for it.

Einstein presented his theory of GR to the Prussian Academy of Sciences in 1915, although it wasn’t officially published until the following year. The theory also predicted that light rays will be bent by strong gravitational fields. In 1919 the British astronomer Arthur Eddington confirmed that idea by making careful observations of the positions of stars whose light passes close to the sun during a total solar eclipse. The discovery established Einstein as an international celebrity. When he met Charlie Chaplin in 1931, Chaplin is said to have told Einstein that the crowds cheered them both because everyone understood Chaplin and no one understood Einstein.

General relativity predicts that some burnt-out stars will collapse under their own gravity. They might become incredibly dense objects called neutron stars only a few miles across, from which a teaspoon of matter would weigh around 10 billion tons. Or they might collapse without limit into a “singularity”: a black hole, from whose immense gravitational field not even light can escape, since the surrounding space is so bent that light just turns back on itself. Many neutron stars have now been seen by astronomers: some, called pulsars, rotate and send out beams of intense radio waves from their magnetic poles, lighthouse beams that flash on and off with precise regularity when seen from afar. Black holes can only be seen indirectly from the X-rays and other radiation emitted by the hot gas that surrounds and is sucked into them. But astrophysicists are certain that they exist.

While Newton’s theory of gravity is mostly good enough to describe the motions of the solar system, it is around very dense objects like pulsars and black holes that GR becomes indispensable. That’s also where it might be possible to test the limits of GR with astronomical investigations. Last year, astronomers at the National Radio Astronomy Observatory in Charlottesville, Virginia, discovered the first pulsar orbited by two other shrunken stars, called white dwarfs. This situation, with two bodies moving in the gravitational field of a third, should allow one of the central pillars of GR, called the strong equivalence principle, to be put to the test by making very detailed measurements of the effects of the white dwarfs on the pulsar’s metronomic flashes as they circulate. The team hopes to carry out that study this year.

But the highest-profile test of GR is the search for gravitational waves. The theory predicts that some astrophysical processes involving very massive bodies, such as supernovae (exploding stars) or pulsars orbited by another star (binary pulsars), should excite ripples in space-time that radiate outwards as waves. The first binary pulsar was discovered in 1974, and we now know the two bodies are getting slowly closer at just the rate expected if they are losing energy by radiating gravitational waves.

The real goal, though, is to see such waves directly from the tiny distortions of space that they induce as they ripple past our planet. Gravitational-wave detectors use lasers bouncing off mirrors along two arms, each kilometres long, set at right angles like an L, to measure such minuscule contractions or stretches. Two of the several gravitational-wave detectors so far built – the American LIGO, with two observatories in Louisiana and Washington, and the European VIRGO in Italy – have just been upgraded to boost their sensitivity, and both will start searching in 2015. The European Space Agency is also launching a pilot mission for a space-based detector, called LISA Pathfinder, this September.
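Just how minuscule? A rough illustration of my own (using LIGO’s 4-km arms and a typical target strain of one part in 10²¹, not figures from the piece): the change in arm length a passing wave produces is a small fraction of the width of a proton.

```python
# Sense of scale for a gravitational-wave detection (illustrative round
# numbers): a passing wave with strain h changes an interferometer arm
# of length L by dL = h * L.
h = 1e-21          # typical strain amplitude the detectors aim to sense
L = 4e3            # LIGO arm length in metres
dL = h * L
proton = 0.8e-15   # approximate proton radius, metres
print(f"arm length change: {dL:.1e} m")
print(f"about {dL/proton:.3f} of a proton's width")  # ~1/200 of a proton
```

Measuring a displacement that small against seismic noise, thermal jitter and laser fluctuations is why the detectors needed years of sensitivity upgrades before a detection became plausible.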

If we’re lucky, then, 2015 could be the year we confirm both the virtues and the limits of GR. But neither will do much to alter the esteem with which it is regarded. The Austrian-Swiss physicist Wolfgang Pauli called it “probably the most beautiful of all existing theories.” Many physicists (including Einstein himself) believed it not so much because of the experimental tests but because of what they perceived as its elegance and simplicity. Anyone working on quantum gravity knows that it is a very hard act to follow.

Holding Rome together



Here’s my latest Material Witness column for Nature Materials.

____________________________________________________________________________

Calling it the world’s earliest shopping mall is perhaps a qualified accolade, but Trajan’s Market in Rome is certainly a remarkable structure. These vaulted arcades, built early in the second century AD and perhaps originally administrative offices, have withstood almost two millennia of moderate-scale earthquakes. They aren’t alone in that: the Pantheon, Hadrian’s Mausoleum and the Baths of Diocletian in Rome have all shown comparable longevity and resilience. What is their secret?

The structures use concrete made from the pyroclastic volcanic rock of the region: coarse rubble of tuff and brick bound with a mortar made from volcanic ash. It is this mortar that provides structural stability, but the properties that give it such durability have only now been examined. Jackson et al. [Proc. Natl Acad. Sci. USA 111, 18484 (2014) – here] have reproduced the mortar used by Roman builders and used microdiffraction and tomography to study how it acquires its remarkable cohesion.

The Roman mortar was the result of a century or more of experimentation. It used pozzolan, an aluminosilicate volcanic pumice found in the region of the town of Pozzuoli, near Naples, which, when mixed with slaked lime (calcium hydroxide) in the presence of moisture, recrystallizes into a hydrated cementitious material. Although named for its Roman use, pozzolan has a much longer history in building and remained in use until the introduction of Portland cements in the nineteenth century.

The production of the volcanic ash–lime cement was described by the Roman engineer Vitruvius in his first-century-BC book On Architecture, and Jackson et al. followed his recipe to make modern analogues. They found that the tensile strength and fracture energy increased steadily over several months, and used electron microscopy and synchrotron X-ray diffraction to look at the fracture surfaces and the chemical nature of the cementitious phases. Within the poorly crystalline matrix are platy crystals of a calcium aluminosilicate phase called strätlingite, crystallized in situ, which seem to act rather like the steel or glass microfibres added to some cements today to toughen them by providing obstacles to crack propagation. Unlike them, however, strätlingite resists corrosion.

Since the cement industry is a major producer of carbon dioxide liberated during the production of Portland cement, there is considerable interest in finding environmentally friendly alternatives. Some of these have a binding matrix of similar composition to the Roman mortar, and so Jackson et al. suggest that an improved understanding of what makes it so durable could point to approaches worth adopting today – such as using chemical additives that promote the intergrowth of reinforcing platelets.

Of course, the Roman engineers knew of the superior properties of their mortar only by experience. A similar combination of astute empiricism and good fortune lies behind the medieval lime mortars that, because of their slow setting, have preserved some churches and other buildings in the face of structural shifting. They tempt us to celebrate the skills of ancient artisans, but we should also remember that what we see today is selective: time has already levelled the failures.

Monday, January 26, 2015

Secrets of exploding sodium revealed


Here’s the longer version of my latest news story for Nature. I love this stuff. I saw the experiments being done by Phil M when I visited Pavel a couple of years ago, and have been waiting for the work to come together ever since. Could you possibly need any more evidence that chemistry rocks?

____________________________________________________________________________

There’s more than exploding hydrogen in the violence of the reaction of alkali metals with water.

It’s the classic piece of chemical tomfoolery: take a lump of sodium or potassium metal, toss it into water, and watch the explosion. Yet a paper in Nature Chemistry reveals that this familiar piece of pyrotechnics has not previously been understood [1].

The explosion, say Pavel Jungwirth and his collaborators at the Czech Academy of Sciences in Prague, is not merely a consequence of the ignition of the hydrogen gas that the alkali metals release from water. That may happen eventually, but it begins as something far stranger: a rapid exodus of electrons followed by explosion of the metal driven by electrical repulsion.

Neurologist and chemical enthusiast Oliver Sacks offers a vivid account of how, as a boy, he and his friends carried out the reaction on Highgate Ponds in North London with a lump of sodium bought from the local chemicals supplier [2]: “It took fire instantly and sped around and around on the surface like a demented meteor, with a huge sheet of yellow flame above it. We all exulted – this was chemistry with a vengeance.”

Highly reactive sodium and potassium react with water to form the metal hydroxide and hydrogen, and the reaction liberates so much heat that the hydrogen may ignite spontaneously. The process seems so straightforward and understandable that no one previously seems to have felt there was anything else to explain.

But as Jungwirth says, there is a fundamental problem with the conventional explanation. “In order to have a runaway explosive behaviour of a chemical reaction, very good mixing of the reactants needs to be ensured,” he says. But the hydrogen gas and steam released at the surface of the metal should impede the further access of water and quench the reaction. Why doesn’t it?

This, Jungwirth admits, was only a part of the original motivation for looking more deeply into the reaction. The experiments were conducted by his colleague Philip Mason, and Jungwirth says that “an equally important part is Phil's love for exciting experimentation and the easy availability of our balcony, where the first experiments were carried out.” There Mason set up a high-speed video camera to film the process, although the final movies were shot in the lab of coauthor Sigurd Bauerecker at the Technical University of Braunschweig in Germany.

Despite its notoriously explosive nature, the reaction of sodium with water is in fact extremely erratic: sometimes it explodes and sometimes it doesn’t, largely because of surface oxidation of the metal. “The basic trick Phil came up with is to use liquid metal – a sodium/potassium alloy that is liquid at room temperature”, says Jungwirth. But getting a reliable explosion has its hazards. “A face shield is a must”, he adds. “Phil took it off once to blow out a small fire and a tiny piece of metal exploded into his face: luckily lower part of it, so he only had a few scratches on his cheek.”

The movies revealed a vital clue to what was fuelling the violent reaction in its early stages. The reaction starts less than a millisecond after the metal droplet, released from a syringe, enters the water. After just 0.4 ms, “spikes” of metal shoot out from the droplet, too fast to have been expelled by heating alone.

What’s more, between 0.3 and 0.5 ms, this “spiking” droplet becomes surrounded by a dark blue/purple colour in the solution. The reason for these two observations became clear when Jungwirth’s postgraduate student Frank Uhlig carried out quantum-mechanical computer simulations of the process with clusters of just 19 sodium atoms. He found that each of the atoms at the surface of the cluster loses an electron within just a few picoseconds (10^-12 s), and these electrons enter the surrounding water where they are solvated (surrounded by water molecules) [3].

Solvated electrons in water are known to have the deep blue colour observed transiently in the videos – although they are highly reactive, quickly decomposing water to hydrogen gas and hydroxide ions. What’s more, their departure leaves the metal cluster full of positively charged ions, which repel each other. The result is a “Coulomb explosion” in which the cluster bursts apart due to its own electrostatic repulsion, a process first explained by Lord Rayleigh in the late nineteenth century.
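A crude order-of-magnitude check – my own sketch, not a calculation from the paper – shows why such a cluster must fly apart. If all 19 atoms lose an electron and the ions are treated as a uniformly charged sphere a few ångströms across (both assumptions mine), the Coulomb repulsion energy per atom vastly exceeds the roughly 1 eV per atom that holds sodium metal together:

```python
# Order-of-magnitude Coulomb-explosion estimate for a small ionized Na cluster.
# Assumptions (illustrative, not from the paper): all 19 atoms are singly
# ionized, and the ions form a uniformly charged sphere of radius ~5 angstroms.
k = 8.9875e9             # Coulomb constant, N m^2 / C^2
e = 1.602e-19            # elementary charge, C
eV = 1.602e-19           # joules per electronvolt
N = 19                   # number of ions in the cluster
R = 5e-10                # assumed cluster radius, m

# Electrostatic self-energy of a uniformly charged sphere: (3/5) k Q^2 / R
E_coulomb = 0.6 * k * (N * e)**2 / R
per_atom = E_coulomb / (N * eV)   # repulsion energy per atom, in eV

cohesive = 1.1                    # approximate cohesive energy of bulk Na, eV/atom
print(f"Coulomb repulsion: ~{per_atom:.0f} eV/atom "
      f"vs ~{cohesive} eV/atom holding the metal together")
```

Even with generous assumptions the repulsion exceeds the binding by more than an order of magnitude, so the droplet bursts rather than merely heating up.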

This explosion creates the spikes known as Taylor cones, the researchers say. They support that idea with less detailed simulations involving clusters of 4,000 sodium atoms, which also break up with spike-like instabilities at the surface.

“Four thousand sodium atoms is still a very tiny piece of matter, and I do not think we see proper Taylor cones in the simulations”, says Jungwirth. “At best, we see a microscopic version.”

Inorganic chemist James Dye of Michigan State University, a specialist on solvated electrons, is full of praise for the work. “I have done the demonstration dozens of times and wondered why sodium globules often danced on the surface, while potassium leads to explosive behaviour”, he says. “The paper gives a complete and interesting account of the early stages of the reaction.”

References
1. Mason, P. E. et al., Nat. Chem. http://dx.doi.org/10.1038/nchem.2161 (2015).
2. Sacks, O. Uncle Tungsten, p.123. Picador, London, 2001.
3. Young, R. M. & Neumark, D. M., Chem. Rev. 112, 5553-5577 (2012).

Friday, January 23, 2015

Are you ready? Then I'll begin...

The beginning of a play or book is so hard. I was reminded of this last night while watching the RSC’s new production in Stratford-upon-Avon, Oppenheimer. It’s a pretty good play, as I’ll say in my review in Nature soon. But I had first to get over the hump of the opening lines, where Oppenheimer reads from Niels Bohr’s 1934 book Atomic Theory and the Description of Nature: “The task of science is both to extend the range of our experience and to reduce it to order.” It seems an unobjectionable claim, even a rather good one. But as spoken by an actor dressed in period style as Oppenheimer, it seemed a terribly stagey and self-conscious opening. It was as if he were saying “The play’s starting now, and it’s about science, and now you have to believe that I’m Oppenheimer, OK?”

I had the same feeling at the start of Michael Frayn’s Copenhagen when I first saw it years ago. As I recall, the actress playing Margrethe Bohr marched on stage, struck a pose and said “But why?” And I thought “Yeah, yeah, so we are supposed to allow that the play is starting in mid-conversation and to ask ourselves, Why what?” But Copenhagen is brilliant, and so is Frayn, so what’s my problem here?

It’s all about that transition to another reality, and how to make us believe in it. Once Oppenheimer was underway, there was no problem – there was still the odd stagey moment in that production, but on the whole we can get inside the narrative quite comfortably once we are acclimatized. But how do you avoid that awkward instant at the start, where the actors have to say “We’ve started pretending now”?

This matters to me even more with books. I won’t say that I judge them by their first line, but that first line is certainly a hurdle that they have to clear. If it feels as though it has been worked on, burnished, set in place like a jewel for us to admire, then I am off to a bad start. New writers seem to be told that first lines matter a lot, and in a sense they do – but this doesn’t mean that a first line has to strive to be brilliant and lapidary, to compete with the astonishingly over-rated opening lines of Pride and Prejudice or War and Peace. Getting it right with a memorable first line, like Camus in L’Étranger or Dickens in A Christmas Carol, is far more difficult than is generally acknowledged, and more often these attempts just come across as contrived and self-conscious. How much better it is to go for the effortlessly mundane: “Stately, plump Buck Mulligan came from the stairhead, bearing a bowl of lather on which a mirror and a razor lay crossed.” What matters far more is that the opening page or so is captivating. If you can create one as jaw-dropping as Dickens does in Bleak House, it doesn’t matter what the heck your very first line is.

But theatre: that’s another challenge. Here you’ve got the added problem that there are real people standing in front of you pretending to be different real people, and you know that and they know you know that. So how to start weaving the illusion without a jolt?

One of the best answers I ever saw was in Theatre de Complicité’s Mnemonic, when Simon McBurney just began by talking to us, as the audience. It seemed like a preamble to the start of the play, but gradually we realized that this actually was the play. Arguably that was a trick or gimmick, but it contained a more general solution: don’t try too hard. A Brechtian approach won’t work for every play, but at the very least it seems a good idea to relax and not to feel you have to ensnare the audience from the very first utterance. At that point at least, there’s really no risk we will be bored.