Friday, July 24, 2015

Silence of the gerontologists

I was perhaps a bit cryptic in tweeting about my New Statesman piece on “the immortality business” (which I’m afraid I can’t put up here, but it should be online soon – and NS is always worth its modest cover price anyway). This is what I meant.

When I pester researchers for comments on a topic I’m writing about, I recognize of course that none is under the slightest obligation to respond. That they almost always do (even if it’s to apologize for being unable to help) is a testament to the extraordinary generosity of the research community, and is one of the abiding joys and privileges of writing about science – my impression is that some other disciplines don’t fully share this willingness to explain and discuss their work. Occasionally I do simply get no response at all from a researcher, although a gentle follow-up enquiry will usually at least elicit an explanation that the person concerned is too busy or otherwise indisposed to comment.

That’s why my experience in writing this piece was so clearly anomalous. I contacted a large number of gerontologists and others working on ageing, explaining what I was trying to do with this piece. With the very few honourable exceptions named in my article, none responded at all. (One other did at least have the grace to pretend that this was “not really my field”, despite that being self-evidently untrue.) I am almost certain that this is because these folks have decided that any “journalist” contacting them while mentioning names like Aubrey de Grey wants to write another uncritical piece about how he and others like him are going to conquer ageing.

I can understand this fear, especially in the light of what I said in the article: some researchers feel that even allowing the immortalists the oxygen of publicity is counter-productive. But truly, chaps, burying your head in the sand is the worst way to deal with this. A blanket distrust of the press, while to some degree understandable, just takes us back to the bad old days of adversarial science communication, the kind of “us versus them” mentality that, several years ago, I saw John Sulston so dismayingly portray at a gathering of scientists and science writers. What researchers need to do instead is to be selective and discerning: to decide that all writers are going to recycle the same old rubbish is not only silly but damaging to the public communication of science. I would even venture to say that, in figuring out how to deal with the distortions and misrepresentations that science sometimes undoubtedly suffers from, scientists need help. While it is understandable that, say, IVF pioneer Robert Edwards should have bemoaned the way “Frankenstein or Faust or Jekyll… [loom] over every biological debate”, I see little indication that biologists and medics really know how to grapple with that fact rather than just complain about it. You really need to talk to us, guys – we will (some of us) do our very best to help.

Wednesday, July 22, 2015

Understanding the understanding of science

That the computer scientist Charles Simonyi has endowed a professorial chair at Oxford for the Public Understanding of Science seems a rather splendid thing, acknowledging as it does the cultural importance of science communication (which was for a long time disdained by some academics, as Carl Sagan knew only too well). Richard Dawkins was the natural choice for the first occupant of the position, and indeed it seems to have been created partly with him in mind.

When his incumbency ended and applications were invited for his successor, a few well-meaning folks told me “you should have a go!” I quickly assured them that I am simply not in that league. Little did I know, however, that had I been overcome with mad delusions of grandeur, I’d not only have stood less than a cat’s chance in hell but would have been specifically excluded from consideration in the first place. The full text of Simonyi’s manifesto in creating the position is reproduced in the second volume of Dawkins’ autobiography, Brief Candle in the Dark. It doesn’t simply say, as it might quite reasonably have done, that the post is for academics and not professional science communicators. No, it goes out of its way to insult the latter. Get this, fellow science hacks:

The university chair is intended for accomplished scholars who have made original contributions to their field, and who are able to grasp the subject, when necessary, at the highest levels of abstraction. A populariser, on the other hand, focuses mainly on the size of the audience and frequently gets separated from the world of scholarship. Popularisers often write on immediate concerns or even fads. In some cases they seduced less educated audiences by offering a patronizingly oversimplified or exaggerated view of the state of the art or the scientific process itself. This is best seen in hindsight, as we remember the ‘giant brains’ computer books of yesteryear but I suspect many current science books will in time be recognized as having fallen into this category. While the role of populariser may [may, note] still be valuable, nevertheless it is not one supported by this chair.

OK, I won’t even get started in on this. Richard doesn’t reproduce this without comment, however. He says he wants to “call attention especially” to “the distinction between popularizers of science and scientists (with original scientific contributions to their credit) who also popularize.” It’s not clear why he does this, especially as the distinction is spurious for many reasons.

I might add that Simonyi also stipulates that “preference should be given to specialities which express or achieve their results mainly by symbolic manipulation, such as Particle physics, Molecular biology, Cosmology, Genetics, Computer Science, Linguistics, Brain research, and of course, Mathematics.” So stuff you, chemists and earth scientists. Actually, stuff you too, cell biologists, immunologists and many others.

It doesn’t much matter to the world that I find this citation offensive. I think it does matter that it displays such ignorance of what science communication is about. I would be much more troubled, however, if the chair were not currently occupied by such a profoundly apt, capable and broad-minded individual as Marcus du Sautoy. If it continues to attract incumbents of such quality, I guess we needn’t trouble ourselves too much about the attitudes of its founder and patron.

Friday, July 17, 2015

Dawkins and the Spotted Dick mystery

I have agreed, with some trepidation, to review volume 2 of Richard Dawkins’ autobiography, this one called Brief Candle in the Dark. I guess I figured it might be refreshing to return to the pre-God-bashing, pre-Twitter Dawkins, when he was rightly known primarily as our pre-eminent science communicator (who called out the idiocies of creationism). And on the whole it is: rather than appearing to be the polarizing caricature that Dawkins is often presented as today, he comes across so far in the book as simply a chap with appealing features as well as foibles, not least among the former being his touching generosity to students. Sure, there are Pooterish touches (note to editors: if I ever write anything autobiographical that includes the line “I think my speech went down quite well”, then I’m counting on you guys), but also a sense of the humane individual (not to mention the splendid writer) whom these days it can be hard to discern behind all the controversy that surrounds him. I should add that I’m still only on page 50.

But there are also occasional glimpses of the Twitter-era Dawkins, springing out Hyde-like from the good Jekyllish doctor. I was particularly struck by a passage in which, apropos of nothing in particular, Dawkins tells us about a “care home for old people in England” at which a “local government inspector” banned the traditional pudding Spotted Dick from the menu on the grounds that its name was “sexist”. This looked to me for all the world like one of those apocryphal “PC gone mad” stories that the Daily Mail loves to run (and then occasionally retract a few weeks later in small print). Could it really be true?

The only item that comes up after a quick Google is one reported – well, what did you expect? – by the Daily Mail. There, the change in naming was not occasioned by a prudish, PC government inspector. The story says that staff in a council canteen were totally fed up with a few customers (one in particular) who kept on making lewd and childish remarks whenever Spotted Dick was on the menu, and so they decided to take matters into their own hands – with the extremely ill-advised idea of calling it instead Spotted Richard. A council official then rather shamefacedly decided to intervene and reverse this policy because it looked so silly (and because it was being reported as an example of political correctness). There was no mention of anyone finding the name sexist, nor of officialdom actually trying to be politically correct.

Some Twitter comments challenged Dawkins about this, and his response was that this was not the same story at all. Rather, the Spotted Dickgate that he heard was from “a personal acquaintance, personally vouched for,” and not the infamous Flintshire Spotted Dickgate. And that, it seems, is all we are going to get from him (though you might think he’d be curious about the parallels).

So you must make up your own minds, people. Was Dawkins’ acquaintance recounting what shows every sign of becoming an urban myth, or was this really a case of Spotted Dick strikes again? Can anyone, in any event, figure out how Spotted Dick could be construed as “sexist” – or even, to paraphrase Spinal Tap, as “sexy”? The anecdote doesn’t really make sense.

Alleged political correctness has of course become one of Dawkins’ bête noirs (bêtes noir?) – after all, it did for his good friend James Watson after Watson betrayed his racist views once too often, and it also came close to doing for his friend Tim Hunt (a much nicer man than Watson) after Tim said something stupidly sexist. Could it possibly be that it suited Dawkins to believe what he was told without feeling the need to inquire further?

If that’s so, it’s simply another example of the kind of confirmation bias that often leads scientists astray, as I discussed here. What is ironic is that this passage comes so soon after Dawkins has given us a rather nice account of the critical thinking that interview questions at Oxford aim to probe. But it’s one thing to be led to false conclusions in research by seeking out the answer you are already predisposed to find; it’s quite another to recycle an anecdote in a way that makes you sound like a ranter in the comments section of the Daily Mail website.

So pending a full disclosure of data and references, preferably in a major peer-reviewed journal, I propose we should avoid propagating the “Spotted Dick” meme, even if the inventor of memes himself repeats it. This has been a public service announcement.

Monday, July 13, 2015

Beckett's epic fail (again)

One of my esteemed colleagues recently finished a nice piece on careers in science by quoting Samuel Beckett: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” The sentiment is entirely laudable: you’ll get things wrong, but don’t be deterred – every time you attempt something and fail, you get a little better. Or something like that.

Yet whenever I see Beckett put to use this way, I can’t help thinking “Hmphrgh”. This is Beckett you’re quoting. Yes, Samuel Beckett. Does anyone believe that he was ever going to write a soundbite of fist-punching, keep-on-goin’ self-motivation?

The line comes, of course, from Beckett’s late work Worstward Ho. I say of course because that’s commonly acknowledged, but I wonder how many have seen or read Worstward Ho. It is, shall we say, opaque even by the standards of a master of opacity. Dense, you might say. Difficult. Now, I love Beckett and find him an intensely funny writer, but funny because of a wry bleakness that makes Will Self seem like a bouncing-bunny optimist. It’s a braver soul than me who will pronounce with certainty on what Beckett was driving at with “Fail better”, but I will bet a pint of Guinness that he did not intend this to be a boiled-down version of that pious little primary-school mantra “If at first you don’t succeed, try, try again.”

It’s wise not to get too po-faced and spluttery about this misappropriation, not least because Beckett would doubtless have appreciated the joke. We get the memes we seem to need, like the martyrdom of Giordano Bruno or the misuse of “deconstruct”, and I’d be a sad fool indeed to think that a blog comment is going to make the slightest difference in squelching them.

But it’s sad that the irony here is so seldom recognized. Indeed, what seems particularly sad is that the opportunity to take a more nuanced view of failure is bypassed by this bit of repurposed wisdom.

Mark O’Connell has a great piece on Slate, called “How Samuel Beckett became Silicon Valley’s life coach.” He says “What has happened here, I suppose, is that a small shard of a fragmentary and difficult work of literature has been salvaged from the darkness of its setting, sanded and smoothed of the jagged remnants of that context”. The result, O’Connell says, is that Beckett is pressed “into service as a kind of highbrow motivational thought-leader.” But in truth “his attitude toward success and failure was more complex and perverse than this interpretation suggests.” That’s surely true.

What, then, was that attitude? Maggi Dawn has a nice interpretation on her blog: “there is a sense in which claiming always to fail is comedy not tragedy. It releases us from the lie of success, frees us from the obligation to adopt its thin veneer, and allows us to do whatever it is we do for its own sake.”

My own suspicion is that Beckett was hinting at the glorious tragedy of our own self-delusion, in which we tell ourselves that we will eventually transform failure into success, and that the world really cares whether we do or not. We are not Steve Jobs but Harold Steptoe (and if you’re too young to get that allusion, you can thank me later for broadening your horizons), doomed forever to be making pathetic plans for betterment in a kind of frenzied desperation, forever glimpsing our cherished goal only to have it snatched from our grasp by the realities of our sad and miserable existence. And perhaps to realise that our only real hope of solace lies in accepting that Albert will always thwart our efforts, so that we might as well celebrate failure and get drunk with the surly old sod.

But imagine trying to sell that in Silicon Valley.

Wednesday, July 08, 2015

Does anyone have any questions?

That I can be fairly relied upon to put my foot in it was confirmed after a talk I gave at the Royal Society last week. The Q&A seemed to be going well enough, but then the RS staff said “Well, we’ll have to bring it to an end there.”

“Oh, there’s just one more”, I quickly interjected, pointing out the chap at the end of the row with his hand up. What I didn’t know was that this fellow is a regular at RS events, where he apparently makes a habit of getting bolshy. The attempt to end the proceedings before handing him the mike was not an oversight but a tactful intervention – which I’d now undermined.

As the question began, I thought I could see a way to create a valid question out of what seemed like his skepticism about the way science is used (“delinquent science” was the term). But as he went on (and on), it became clear that this wasn’t a question at all but a rant about how science wastes taxpayers’ money making things that no one wants or needs, regardless of the consequences, and how the person who switched on the LHC didn’t give a damn whether it would make a black hole that would swallow us all, and – OK, you get the point. One of the organizers had to step in to halt the bitter diatribe.

I try to make a point of turning any question into a reason to say something that I hope will be of interest, even if the connection with the question itself is slender. That’s to say, I will try to answer questions as directly as I can, but when they aren’t really questions at all, or when they are questions about fairies or telepathy, I’ll try to move the discussion in what I hope is a useful direction. I have no problem with disagreeing with a questioner (and if, say, I was confronting a climate sceptic then I’d feel obliged to do so). But I would feel uncomfortable making my answer a put-down. Speakers are in a position of relative power in these situations, and so it seems only fair to try to engage with the issues raised rather than to dismiss, far less ridicule, them. The number of times I have been approached after a talk by someone saying “I have a stupid question, so I didn’t want to ask it in public…” makes me realize how many people, probably because of experiences at school, are extremely nervous about putting up their hand, thinking everyone will laugh at them. (I’m not sure that the questions which follow such a disclaimer have ever been stupid in any event – in my experience, people whose questions are genuinely of dubious value, for example when they serve only to showcase the erudition of the questioner, are rarely averse to asking them.)

So what did I do on this occasion? I waffled something about how modest the aims of most science are, and about the common contrast between the way a piece of work is presented to the public and what its real goals are. I don’t know, it was something to say, but it wasn’t terribly insightful. But I came away troubled. Not because I’d been attacked by the questioner, but because I felt I hadn’t dealt with it in the best way. So I asked my wife later – she being far more generous, perceptive and sensitive than I am – if she thought that on this occasion I should have answered more firmly – not by getting into the vague and paranoid issues that the questioner was, after a fashion, raising, but to say explicitly that they were not relevant here. I realized that what he had said was in fact rather rude – not to me (or rather, that aspect doesn’t greatly bother me), but to the audience, who hadn’t come to hear some aimless angry diatribe against science in general. Was it really right to be so tolerant and irenic in this situation? No, she said, it wasn’t. I had every right to deal with such a “question” with firm curtness – to say, perhaps, that I had trouble discerning any kind of question at all in his comments, and that I wasn’t going to launch into a general defence of what science is all about and why it is done. That’s all it would have taken.

I think she is right. Speakers have a responsibility to treat an audience with respect, but the reverse applies too – at least, in a situation like this. I see no reason why questions should not be challenging, even angry, when controversial subjects are being aired (mine certainly wasn't one such), but even then they need to be brief and to the point.

I wonder how others deal with situations like this? The likelihood of getting flaky or strange or irrelevant questions after a public scientific talk (“Do you think drugs allow us to see other dimensions?”) is of course fairly high. One can perhaps try, as I heard Adam Rutherford do recently, to anticipate that by asking at the outset “Please try not to be mad”. (It didn’t work though, did it Adam? – the question above is one of those that followed.) But mad questions aren’t so much the issue (though of course one has to try to be sensitive to genuine mental-health issues here, and I’m not being facetious). Rather, what’s the best way of dealing with folks whose determination to mount a hobby horse, or push a particular point of view, or show off, leads them into confrontational or boring rudeness? Should one treat them the way stand-ups treat hecklers, with an acerbic put-down? Or by politely declining to answer the question? (“You know, I don’t think I can say anything very intelligent about that.”) Or with a brusque and magisterial Dawkinsesque dismissal? With attempted humour? (“What have you been drinking?”) When do you hold back, and when do you let rip?

[Postscript: Incidentally, I was awed by how, at a talk last week at the Royal Institution, Frank Wilczek was able instantly to cut to the physics core of left-field questions. Like this:
Questioner: [apropos supersymmetry] Does this have anything to do with wave-particle duality?
Me: [thinks] Um yeah, does it? Or are you just mouthing a buzzword you’ve heard?
Wilczek: Wave-particle duality is what makes this possible, because [I paraphrase] it's the bosonic picture of quantum fields that we’re hoping to unify with the fermionic nature of matter.
Me: [thinks] Well yeah, I knew that.]

Wednesday, July 01, 2015

Perkin's purple: a journey around London

I have just presented one of BBC Radio 4’s Science Stories, a new series looking at episodes in the history of science. This one tells the tale of William Perkin’s purple coal-tar dye and how it changed the course of chemistry. That, of course, is the kind of grand and often contentious claim these programmes inevitably end up making, but I do feel that there is a case to be made for it here.

The initial plan was for me to take a journey across London, visiting the key locations en route: from Shadwell in the East End to the Royal College of Chemistry in the West End and then the site of the Perkins’ factory in Greenford Green on the outskirts of west London. In the end it didn’t quite happen that way, but I got a few pictures of some of the relevant locations as we recorded, and so wanted to include these here with the original draft of the script – it changed considerably, and I’m sure very much for the better, but this at least tells and illustrates the story. For more details, see Simon Garfield's excellent book Mauve, Tony Travis's authoritative The Rainbow Makers, and my own Bright Earth.

_________________________________________________________________________

“A reservoir of dirt, drunkenness and drabs” – that’s what Dickens called Shadwell, and I’m not sure that he wasn’t being affectionate. There’s not a lot of Dickens’ Shadwell left: whatever the bombs didn’t destroy during the war disappeared soon after in the slum clearances. But I can’t say that what took its place has added much to its appeal: all these ugly flats and traffic bollards.

But here’s the place I want. King David Lane. Just down here in the mid-nineteenth century there was a big old house at 1 King David Fort, but now it’s just a council block.


Visiting the site of William Perkin’s family home in Shadwell – on a very blustery spring day!

This was the home of the Perkin family, who were wealthy by the standards of Shadwell. George Perkin was a successful carpenter who could afford to indulge his son William’s passion for chemistry. William had a little home laboratory on the top floor of the house – just a simple place, with a table and bottles of chemicals, no running water, no gas. But when he was 18 years old and still a student, he discovered something here that for once justifies that awful cliché: it changed the world.

There’s a blue plaque here to back me up. “Sir William Henry Perkin, FRS, discovered the first aniline dyestuff, March 1856, while working in his home laboratory on this site, and went on to found science-based industry.”


The blue plaque marking the spot where Perkin discovered mauveine.

Listen to that again: “went on to found science-based industry”. In other words, what Perkin discovered led to the whole idea that industry might be based on science.

That’s an astonishing claim. What could this young lad have found that was so important?

Let’s start with a gin and tonic.

For the British army in India in the nineteenth century, this drink really was medicinal. The troops were issued with their bitter tonic water at daybreak, but the officers started taking this medicine on the verandah as the sun set, not just with a spoonful of sugar but with a splash of lime and a generous shot of gin.

You see, the bitter taste was due to quinine, the only effective anti-malarial drug then known. This stuff was extracted at great labour and expense from the bark of a Peruvian tree called the cinchona. The bark had been known since the seventeenth century to help treat and prevent malaria. No one really knew what was in it until two French chemists separated and purified quinine in 1820. With quinine to protect them, the Europeans were able to begin the colonization of Africa, the consequences of which are still reverberating today.

You really didn’t want to get malaria. Chills, convulsions, fever, vomiting, delirium, and quite possibly at the end of it all – death. But quinine cost a fortune. Peru was then just about the only place where the tree was found and the bark contained only tiny amounts of it. And the Peruvians kept a monopoly by outlawing the export of cinchona seeds or saplings. In the nineteenth century, the East India Company was spending about £100,000 every year to keep the officers and officials in the colonies healthy.

But what if, instead of extracting this stuff drip by drip from tree bark, you could make it from scratch?

What does that mean? Well, over the previous centuries, chemists had found how to take simple chemical ingredients and get them to combine to make entirely different chemicals: useful substances like soap, soda, bleach. Might they be able to make a complicated natural drug like quinine?

One man in particular had this dream of using chemistry to reproduce and even rival nature. He was a German chemist called August Wilhelm Hofmann, and many people, including Prince Albert, hoped that he’d be the saviour of British chemistry. In 1845 Hofmann was appointed director of the Royal College of Chemistry in London, which had been set up at Albert’s request.


August Wilhelm Hofmann

So what do we know about Hofmann? Well, according to the sign that now marks the spot in Oxford Street where the Royal College of Chemistry used to stand [it’s next to Moss Bros, opposite John Lewis’s], he “inspired the young to do great things in chemistry, and relate them to both academic and everyday life.”


The plaque erected by the Royal Society of Chemistry to mark the former site of the Royal College of Chemistry in Oxford Street, London.

There were two aspects of everyday life that Perkin, walking down these streets in the mid-nineteenth century, couldn’t fail to have noticed. In the lanes and docks of Shadwell, Dickens said, everyone seemed to be wearing rough blue sailors’ jackets, oilskin hats and big canvas trousers. But up here in the fashionable West End, it wasn’t so different to the style emporiums of today: ladies wore the latest colours – yellow silks from France and fabrics printed in patterns of rich madder red and indigo. Those last two colours were plant extracts, and they faded after lots of washing and being out in the sun. But the yellow silk, which had graced the Great Exhibition in 1851, was coloured with a new dye that was made artificially – by chemistry.

And the stuff it was made of was a by-product of the other thing that distinguished the splendour of Oxford Street from the gloomy alleys of Shadwell: the street lights. They had brightened up the evenings since the start of the century, burning gas that was extracted from coal.

Left over from that process was a thick, smelly tar called, naturally enough, coal tar. At first it seemed to be just noxious waste, and was often just dumped into streams. But then folks figured out that coal tar might be useful. Charles Macintosh used it to make waterproof raincoats. And if you distilled it, then you could extract a whole range of chemicals, which like coal itself were composed primarily of carbon. They often had an acrid smell – aromatics, the chemists called them. One was carbolic acid, also known as phenol. You remember that stinky old coal-tar soap? That’s phenol you were smelling, and it was in there to act as a disinfectant, one of its main uses since the 1850s.

But phenol was also the starting ingredient for the yellow silk dye that rich ladies bought from Lyon. Yes, this coal tar had some valuable stuff within it.

No one knew that better than August Hofmann, who had become pretty much the world expert on coal-tar compounds. So when William Perkin enrolled at the Royal College of Chemistry in 1853, pretty soon he found himself working on coal tar.

And when Hofmann set Perkin the challenging task of trying to make synthetic quinine in 1856, the coal-tar compounds seemed like good materials to start from.

We need to do some chemistry now. But don’t worry. I’ve got a Scrabble set to help me. You see, molecules are like poems: you have to get the words in the right order. Each word is a cluster of letters, and we can think of each letter as an atom. Making molecules is like stringing together these letters in an order that has some meaning. Now, some molecules, like polythene or DNA, really are a lot like strings of atoms. But others have other shapes. Benzene, for example, which is at the heart of all the coal-tar aromatic compounds, is a ring of six carbon atoms, each with a hydrogen atom attached. I take all six C’s for carbon – and yes, this isn’t exactly a regular Scrabble set – and put them in a ring.

But the problem was that in Perkin’s day no one knew that molecules have shapes like this, with atoms in particular arrangements. All they knew was the relative amounts of each kind of element, like carbon and hydrogen, that a substance contained. Benzene was equal parts of carbon and hydrogen, rather like a G&T is one part gin to three parts tonic water.

So then, what Hofmann and Perkin knew about the element cocktail that is quinine was that it is twenty parts carbon to twenty-four of hydrogen, two of nitrogen and two of oxygen.

What gives quinine its meaning – what lets it cure malaria – is its particular arrangement of these atoms. But Perkin knew nothing about that. His strategy – so crude that in retrospect it was obviously hopeless – was, roughly speaking, to take a compound that had half of these amounts – ten parts carbon and so on – and try and stick them together, as if mixing up these two piles of letters is going to miraculously give them the same meaning as quinine.
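For anyone who wants to see the bookkeeping spelled out, here is a minimal sketch of that doomed arithmetic. (A hedged illustration: I’m taking allyltoluidine, C10H13N, as the “ten parts carbon and so on” compound, following the standard accounts of the story.)

```python
# Tallying atoms the way Perkin had to: by counts alone, with no
# knowledge of how the atoms are arranged.
from collections import Counter

allyltoluidine = Counter({"C": 10, "H": 13, "N": 1})        # the starting base
quinine = Counter({"C": 20, "H": 24, "N": 2, "O": 2})       # the target

# Doubling the starting material gets the carbon and nitrogen right,
# but leaves no oxygen and two hydrogens too many:
doubled = Counter({el: 2 * n for el, n in allyltoluidine.items()})

# On paper the books can still be balanced with oxidation:
#   2 C10H13N + 3 O -> C20H24N2O2 + H2O
left = doubled + Counter({"O": 3})
right = quinine + Counter({"H": 2, "O": 1})                 # quinine + water
print(left == right)  # the element counts balance perfectly

# But a balanced tally says nothing about the arrangement of atoms -
# the "meaning" of the molecule - which is why the recipe failed.
```

The sums work out; the chemistry, of course, did not.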

It’s not surprising he didn’t succeed. When he did the experiment at home one night, instead of colourless quinine he got a red sludge.

He could have been forgiven for just flushing it down the drain. But he was too good a student for that, which is why Hofmann had made Perkin his personal assistant.

Instead, he thinks, well, what seems to be going on here? Let’s try the same reaction with another two identical piles of letters, rather like the ones before but a bit simpler. And so he goes through the same procedure with a different coal-tar extract, one of Hofmann’s own favourites: a compound called aniline.

Well, this time the result is even worse. Now the gunk is black. Even so, Perkin keeps going. He dries the stuff and swills it around in methylated spirits.

And now at last, something nice. It dissolves to turn the liquid a beautiful purple.

Here Perkin thinks of those fine ladies of Oxford Street in their bright silks. He knows that the textile industry is hungry for new dyes. And so he takes a piece of white silk and dips it into the liquid, and when he pulls it out the colour has stuck fast to the fabric.

So what now? Perkin manages to get hold of the name of a dye works in Scotland and he sends them a piece of his purple-dyed silk. When the reply comes a few months later, it must make his heart beat faster:
“If your discovery does not make the goods too expensive it is decidedly one of the most valuable that has come out for a very long time. This colour is one which has been very much wanted in all classes of goods and could not be had fast on silk and only at great expense on cotton yarns… the best lilac we have… is done by only one house in the United Kingdom… and they get any price they wish for it, but… it does not stand the tests that yours does and fades by exposure to air.”

So there it was: Perkin had a potential new dye on his hands.

But remember what the man had said: “If your discovery does not make the goods too expensive”. Well, aniline was expensive. If this dye was going to succeed, Perkin had to find a way of making it cheaply – which meant, on an industrial scale.

He realized that he wasn’t going to be able to do that while he was still a chemistry student. So he told Hofmann that he was quitting. But Hofmann had made the young man his protégé, and as Perkin recalled many years later, “he appeared much annoyed”. What was his best student thinking of, abandoning a promising career in pure research to go into industry? As Perkin recalled,
“Hofmann perhaps anticipated that the undertaking would be a failure, and was very sorry to think that I should be so foolish as to leave my scientific work for such an object, especially as I was then but a lad of eighteen years of age.”

The funny thing is that purple was already fashionable even before Perkin discovered his aniline dye. From the 1830s a purple dye called murexide became popular, though probably its fans had little idea that it was made from Peruvian bird droppings. Another purple dye was made from an extract of lichen. In the year that Perkin made his discovery, the Pre-Raphaelite Arthur Hughes painted his picture April Love, showing a young woman in the kind of long flowing purple dress then in style. The French, who even at that time called the shots in fashion, had a word for these rather pale purples. It was what they called the purple-flowered mallow: mauve.


April Love (1856), by Arthur Hughes.

But he did leave, and when he couldn’t find a backer for the factory he proposed to build, his father George put up his life savings, even though he’d never wanted William to become a chemist in the first place. William’s older brother Thomas chipped in to help too.

Now they had to give aniline purple a catchy trade name. Perkin thought of the famous royal purple of Rome, originally made in the Phoenician city of Tyre from a substance extracted a drop at a time from shellfish. Why not call it Tyrian purple?

But it didn’t catch on. Soon enough the aniline dye he’d intended to call Tyrian purple had become synonymous instead with the colour mauve.

There was nowhere suitable in the East End for the coal-tar dyeworks of Perkin & Sons, and in the end they found a meadow right over in Greenford Green, near Harrow, northwest of London, conveniently close to the Grand Junction Canal. In less than six months, a factory was turning coal-tar chemicals into purple for the dyers of Great Britain.

Well, I can’t say that the industrial estate in Greenford Green is much of an improvement on the faceless modern development in Shadwell. But I guess it wasn’t any better in Perkin’s day. His dyeworks grew quickly, and it looks pretty grim in old engravings and photos, with its tall chimneys belching smoke and toxic nitrous fumes. He found a way to make aniline cheaply on the site from benzene, sulphuric and nitric acid, so goodness knows what the factory’s chemical vats spewed into the canal. The chemical process was dangerously explosive, and none of the Perkins had any experience with industrial-scale chemistry. It’s a wonder the whole place didn’t go up in smoke.


A photograph of the Perkins’ dyeworks in Greenford Green.

The last traces of the old factory were destroyed in 1976, but there’s a blue plaque here to mark its place… and here it is. “William Henry Perkin established on this site in 1857 the first synthetic dye factory in the world.”


The blue plaque at Greenford Green where the original coal-tar dye factory of Perkin and Sons once stood.

Mauve became so much the rage in London that it even drew comment from Dickens in 1859:
“As I look out of my window, the apotheosis of Perkin’s purple seems at hand – purple hands wave from open carriages – purple hands shake each other at street doors – purple hands threaten each other from opposite sides of the street; purple-striped gowns cram barouches, jam up cabs, throng steamers, fill railway stations; all flying countryward, like so many migrating birds of purple Paradise.”

Perkin’s Greenford Green factory marks the end of the beginning – for aniline dyes and for the entire synthetic chemicals industry.

Perkin & Sons couldn’t get the French patent rights for their mauve, and within a year French and German companies started to make it too. Soon the coal-tar dyes were everywhere – not just purple but green, red, blue, black. The liberation of colour had arrived, and fashion became positively gaudy.

Bright colour – once the preserve of the rich – could be worn in all walks of life. Gone was the colour-coding of social hierarchies that had existed since the Middle Ages. Colour became a matter of individual expression.

What began as a stroke of serendipity in Shadwell was now becoming an exact science. Chemists came to understand that the particular arrangement of atoms in a molecule determines what it does – what, as I said earlier, the molecule means. And what it does might include which colours it absorbs and which it reflects, when light shines onto it.

So on the one hand, it became possible to make new colours to order. By carefully studying aniline dyes, chemists in the late nineteenth century could predict from the architecture of these compounds what colour they were likely to have. This is now the entire business of synthetic chemistry: constructing molecules with particular atomic arrangements and therefore particular properties.

On the other hand, if there was a substance found in nature that had useful properties – like quinine, say – then if you could figure out the shape of its atomic framework you had a chance of working out how to make it synthetically, perhaps more cheaply than harvesting it from plants.

But what became of the natural dyes, such as indigo and madder? They didn’t go out of fashion; instead, synthetic chemistry re-invented them. Getting these substances pure and in large amounts was costly and labour-intensive, and indigo plantations in India were the British Empire’s most lucrative business in all of Asia.

But as chemists came to understand that molecules were made of atoms linked together into particular architectures, they turned themselves into molecular architects who could even aspire to construct the molecules of nature. They figured out how, from simple ingredients like coal-tar substances, they could string together atoms to make the very molecules that gave indigo and madder their colours.



The molecular structures of indigo (top) and alizarin (bottom), which gives madder red its colour.

When two German chemists figured out how to make synthetic madder red in 1868 from the coal-tar compound anthracene, William Perkin quickly figured out how to do it more cheaply and on an industrial scale. By 1873 he’d got rich enough from this and other dyes to sell his company and return to pure research.


The blue plaque in Victory Place, near Elephant and Castle in southeast London, showing where the dyeworks of Simpson, Maule and Nicholson was situated. The company was established here in 1853, and in 1860 it began to manufacture aniline red dye, known also as magenta. Three years later they marketed an aniline violet, discovered by August Hofmann, that offered Perkin’s mauve some stiff competition. In 1873 William Perkin sold his dye company to the firm that Simpson, Maule and Nicholson had become, called Brooke, Simpson and Spiller. I was terribly excited when I discovered this plaque on my usual cycling route into London; I suspect I was the only person who could say that for a good many years.


Portrait of William Henry Perkin, painted in 1906 by Arthur S. Cope.

Perkin’s main competitor for synthetic madder was the German chemicals company BASF. If you’re like me, the name BASF will put you in mind of cassette tapes. But that’s just an example of how the dye companies diversified into other areas, because BASF stands for Badische Anilin- und Soda-Fabrik: the aniline and soda makers of Baden.

In 1877 one of their academic consultants, the German chemist Adolf Baeyer, worked out how to make indigo from the coal-tar extract toluene. BASF was soon producing it by the hundreds of tons. Within just a few years the price of indigo plummeted and the colonial plantations were put out of business, which the British government declared a national calamity.

Doesn’t this then make the chemist a kind of modern Prometheus? If you can control the shapes of molecules, what can you not create?

These colour manufacturers now pervade our language, our material world, our history. ICI, Hoechst, Agfa, Novartis – all began with dyes. In 1925 some of the major German dye companies merged to form the notorious cartel IG Farben, a force powerful enough to dictate its terms to Hitler. The diversification into pesticides left IG Farben with the patent for the poison gas Zyklon B, which it licensed for use in the concentration camps.

The diversification of the great dye companies into areas like pharmaceuticals had begun by the late nineteenth century. The coal-tar dyes themselves showed the way. In the 1870s the German physician Paul Ehrlich began to use the dyes for staining cells, which made them easier to see and distinguish under the microscope. He found that some dyes actually killed the microorganisms they stuck to.

That sounded useful. In 1909 Ehrlich discovered an arsenic-containing dye that would destroy the microorganism responsible for one of the most feared and deadly afflictions of the day: the disease that dare not speak its name, syphilis. Other coal-tar dyes worked as antibiotics.

Before this time, most drugs were, like quinine, extracts from natural sources, mostly from plants – like the extract of willow bark called salicylic acid that had long been used as a painkiller. In 1897 a chemist at the German dye company Bayer turned phenol into a compound related to salicylic acid but which worked even better. The company started selling it under a trade name: aspirin.

To make sense of the science behind all this, chemicals companies could no longer rely simply on hiring the services of academics. They started to employ their own chemists, who could design products like drugs based on a rational understanding of how the molecules needed to be shaped, and what they would do.

This, then, is what science-based industry is all about. It’s what the pharmaceuticals industry looks like today.

All the same, the revolution that Perkin began is in some ways still just getting started. We now know that there’s more to the way a drug works than just a good fit with the biological molecule that it aims to latch onto, like a lock and key. But we still can’t always fully understand or predict how a given drug will behave: you can’t be sure of designing it at the drawing board. Instead, most drug discovery still relies on trial and error, on shuffling molecular fragments into many different shapes and then seeing which ones work best.

What’s more, synthetic chemistry still has plenty of problems to solve: scientists struggle to put together some of the complicated molecules that nature produces. And even if they succeed, the route is often too long and too expensive to be useful in industry. This is why chemical synthesis is still as much an art as a science.

But Perkin is now regarded as one of its finest early stylists: a man who first gave us a glimpse of what might be possible if we can get clever enough at molecular architecture. And for that we have to thank the colour purple.

Saturday, June 27, 2015

Against big ideas

Sam Leith’s comment on the trend in non-fiction publishing is spot-on, and Toby Mundy’s analysis of it typically insightful. (And I’m not saying that just because you’re the new director of the Samuel Johnson prize, Toby – though, you know, congratulations and all.) Sam echoes my impression, though I suppose as someone published in the UK by Bodley Head (rightly exonerated here as a noble exception) and in the US by the University of Chicago Press, I would say this. It is good to have critical reviewers around, like Steven Poole and Bryan Appleyard, who will challenge this Gladwellization of non-fiction, but I fear they’re fighting against the tide.

Sam’s complaint about the way the mainstream trade publishers seem mostly interested in books that offer a single “big idea” that explains everything about being human/history/the brain/the economy/the internet/the universe (until the next one comes along) is very well founded. Life is not just complex (in which case “complexity theory” would explain it all, right?) but complicated. So are most areas of science. So are people. We need ideas and narratives that help us unravel the threads, not ones that pretend it is all just one big rope.

This seems especially problematic in the US, where it feels ever harder – outside of the university presses – to publish a serious discussion of any topic rather than an airport book in which the subtitle tells you all you need to know. It’s very reassuring to hear that being published by a university press there is increasingly a guarantor of substance.