I’ve been thinking a lot lately about Jacques Tati’s masterful 1967 work Play Time, which depicts an antiseptic, colorless and angular world of the future, a bleak world that only relents when the characters get together for a drink. It’s always been one of my favorite films, and it occurred to me recently that it can be seen as a metaphor for today’s big pharma. This is an industry that has lost its soul. And the one thing that can save it is having time to play—because play is the only reliable path to innovation and the only way to avoid the catastrophes of hype-based science.
In my last blog, I suggested that pharma’s troubles are caused by businessmen with no understanding of the nature of the business they serve. (My key analogy was John Sculley, the man who pumped up Apple sales while systematically killing that which made Apple great.) An interesting point, however, was raised by a colleague who commented that while it was true that mismanagement had hurt the industry, a good scientist could usually find a way around management nonsense. What he found far more disruptive was management actively interfering in science—that is, managers who think they can make decisions about the content of a scientist’s work rather than the process by which it gets done. These content decisions invariably come about by buying into the hype behind a scientific fad, as opposed to a process-oriented management fad, and so this blog is about hype-based science and how to avoid it.
Because of the windfall from pharma’s profit surge from 1995–2000, there’s really no shortage of examples of scientific snake oil: combinatorial chemistry, high throughput screening (HTS), kinase-based drug discovery, bioinformatics/human genomics, fragment-based drug design (FBDD), systems biology, etc. Although investment in each has reaped very little reward, I am sure there are many reading this thinking, “Wait a minute, I use those techniques, they aren’t snake oil!” But in my opinion, each of these has largely been hype-based science, with little to show and plenty to answer for. And attention and resources have been squandered that could have actually made drug discovery more scientific and, eventually, more of a process—the “domestication” of drug discovery, a term I happily borrow from David Shaywitz and Nassim Taleb.1 How much closer are we to rational design than we were twenty years ago? Hmmm?
Am I saying that bioinformatics or, say, FBDD are useless? Not at all. Most hype-based concepts do occasionally, eventually deliver; why should hype-based science in drug discovery be any different? What I am claiming is that a dangerous confluence has occurred in our industry, a confluence of weak scientific management and, more important, the stifling of the natural tendencies of real scientists to explore and understand new ideas—in other words, to play. Play sounds terribly wasteful to management, who are generally interested in immediate results. Since starting OpenEye, I’ve seen it largely vanish from modeling groups. Yet play is how you work out substance from hype, it’s how you get new domain knowledge that, properly used by management, saves you from disaster. Without it, management is likely to make bad scientific decisions—typically because they don’t know better and because their personal interests are too easily aligned with those selling the hype-based approach. How much easier to look like you are an inspired manager by catching the next great technological wave instead of actually managing scientists? How visionary, how far-sighted and, usually, how sadly wrong.
I’m going to go through my list of hype-based approaches, one by one. Although some have actually proved useful, I would claim this is either despite the hype or because the original proposition mutated in the light of real-world frustrations. There are considerable commonalities here, enough so that I propose a set of litmus tests that should be applied to any newly proposed concept that makes extraordinary claims.
Let’s start with the worst: combinatorial chemistry. Watching the industry get wild about combinatorial chemistry was like watching one of those disaster movies where a train goes off the rails while on a bridge over a deep canyon. Wonderful spectacle if you aren’t on the train. Completely useless? Worst waste of chemical resources and talent of the last twenty years? The original concept—making libraries of millions or even billions of compounds, either separately or in mixtures (to be deconvoluted later—yes, that worked so well)—was stupid from the get-go. Why? Because it doesn’t matter how many compounds you make; what matters is how many drugs you make. What happened is that corporate collections became stuffed with literally millions of useless, barely soluble molecules. A common theme of most pharma hype debacles is illustrated here: the idea that more must be better, without any accounting for quality. Experience and expertise eventually led the field away from the concept entirely and towards parallel and robotic synthesis, and a more careful enumeration of useful reactions.
High throughput screening (HTS) was a natural counterpart to combinatorial chemistry and pharma’s recurring binges of compound acquisition. Want more drugs? Just screen more compounds. I suppose that would work if you screened more compounds that are likely to be drugs. Yet in the history of HTS only one drug has been “discovered.” “But,” you might say, “a lot of drug leads have been discovered.” True, but that’s not how the technology was sold—i.e., hyped—and then you have to compare it to alternative approaches, such as focused screening, fast followers, even computational methods like similarity searches. In addition, HTS is notoriously noisy—we’ve all seen, I suspect, the scattershot graphs of successive HTS screens on the same compound collection. Not for nothing was the phrase “HTS rescue” invented. False positives cost money. Repeated screening costs money. I’d conjecture that the real return on investment, in the billions of dollars at some companies, is hard to find. In fact, some (successful) companies, such as Lilly, have abandoned HTS altogether. So is the idea of faster, bigger screening a bad one? Of course not. We all want faster, cheaper and better. However, any engineer will tell you to pick two. HTS chose “faster and cheaper,” when it should have chosen “faster and better.” Once again, quality of information was sacrificed for quantity. How much better if the basic science of measurement had been supported, if speed and quality had evolved hand-in-hand? As happened, for example, in the chip industry. New fabrication plants cost billions (i.e., not cheaper) but they are ever faster and more accurate.
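The false-positive arithmetic behind “HTS rescue” is just Bayes’ rule. A minimal sketch, with illustrative numbers that are my assumptions rather than measured HTS statistics, shows how quickly a noisy screen of a mostly-inactive library drowns real hits in junk:

```python
# Sketch: why noisy screens drown in false positives. The prevalence,
# sensitivity, and false-positive numbers below are assumed for
# illustration, not taken from any real HTS campaign.
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Fraction of flagged 'hits' that are genuinely active (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 1,000 library compounds is truly active, the assay catches
# 80% of them, and 1% of inactives light up anyway:
ppv = positive_predictive_value(prevalence=0.001,
                                sensitivity=0.80,
                                false_positive_rate=0.01)
print(f"{ppv:.1%} of flagged hits are real")  # roughly 7% -- most follow-up is wasted
```

Under these assumed numbers, more than nine out of ten “hits” are noise, which is exactly the follow-up cost the paragraph above is pointing at.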
Kinases. I was first introduced to the idea that pharma was going to design kinase inhibitors in 1994, while visiting Glaxo. I distinctly recall thinking two things. The first was that it seemed dubious to design drugs for proteins that all had ATP in common, simply because of specificity; the second was that it was unclear what the biological case was for wanting to mess with such a central component of cellular machinery. Now, there have been some spectacular successes where the biology has been clear, such as Gleevec, but has the investment of many, many billions of dollars come close to break-even twenty years later? Many I’ve talked to do not think so. I am very familiar with one company that abandoned its traditional (and highly lucrative) areas to go “big” with kinases. In the ten years that followed they had one minor success story. Still, I’m conflicted about claiming that kinases are a total fad. The biology that has emerged, the wonderful dendritic maps of kinase families, the spectacular successes of Gleevec, possibly PLX4032 from Plexxikon for Raf kinase—these are all good, good things. But have the resources devoted to kinases as pharmaceutical targets by the industry as a whole paid off financially? Almost certainly not. Have many opportunities been lost in traditional areas? Absolutely. So what could have been done differently? Well, this is one area where support of academic research would have helped: why not pay academics to play? It would have helped to have known much more of the biochemistry and the nature of the underlying biology, but the rush to not get left behind prevents that. It’s all or nothing, go big or go home.
Remember the whole bioinformatics mania? Useful, no doubt, but it hardly revolutionized the industry. I recall seeing a presentation in the early ’90s by Bill Haseltine, founder of Human Genome Sciences. It was awesome. And yet, when the FDA actually approved their treatment for lupus last year, it came as a shock because most people had forgotten who they were. (And for the record, they are the company that so convinced SmithKline of the value of bioinformatics that SK never recovered and had to become a part of GlaxoWellcome.) And the less said about the hype surrounding the Human Genome Project the better. Personally, I found it reprehensible and scientifically disingenuous to hype the effort as a fount of new medicine instead of what it was: a foundation for our future understanding of how we might one day make new medicine. But, hey, without the hype we wouldn’t have got the funding to do the project, right? Well, actually, no, since Craig Venter was doing it anyway and more efficiently. (Yet Venter’s commercial experiment—Celera Corp.—lost money every fiscal year and is currently being purchased for less than 3% of its peak value.) As with kinases, bioinformatics has been interesting, but the return on investment has been negligible. Most bioinformatics groups in pharma disappeared or were subsumed into other groups four or five years ago. Once again, a lot of information but very little quality. The quality was wrapped up in the biological relevance, something still lacking from most of the genome. Even the much-touted efforts to associate genes with diseases have been mostly a dry well: correlations found by one group are commonly disputed by later research, and even those that persist usually offer only glimpses of the underlying etiology.
Fragment-based drug design (FBDD). Snake oil? Those are fighting words, as many love FBDD. Some companies, such as Astex, were founded on the idea. That would be Astex Therapeutics, recently merged with SuperGen for less money than the total venture capital pumped into them since 1999. Yes, starting with small molecules and building up is a legitimate method (isn’t that just the old idea of the “anchor and grow” method of Howe and Moon?2), but is it an effective method—i.e., stacked up against traditional methods? That’s quite unproven. Certain insights would probably only have been gained by screening fragments—but is it a new paradigm for drug discovery or, as one colleague admitted, what you do when nothing else works? That’s a lot to go into here. My opinion is that mid-sized molecules, around fifteen to twenty heavy atoms, make sense, but really small molecules produce almost no useful information and have too high a false positive rate. What is for sure is that the useful applications of FBDD will not be what were initially hyped. And I think a lot of basic experiments (play) have not, even today, been done. For example, it is unclear to me how often molecules bind the same way as the fragments from which they were elaborated, or what the enrichment rate is of molecules bearing active fragments, compared to base-line enrichment, or when (if) FBDD produces active compounds any faster than standard structure-based drug design. No one, other than Abbott, has succeeded in joining up multiple active fragments in an active site. That would be the same Abbott that announced they were giving up FBDD. There may well be utility in FBDD but the suggestion that it is a front-line approach to all drug discovery is hype-based science.
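One of the unanswered questions above—the enrichment of molecules bearing active fragments over baseline—is at least easy to state precisely. A sketch of the calculation, with hypothetical placeholder counts rather than data from any real screen, looks like this:

```python
# Sketch of the enrichment question raised above: do molecules containing
# an "active" fragment hit more often than random picks from the library?
# All counts here are hypothetical placeholders, not real screening data.
def enrichment_factor(hits_in_subset, subset_size, total_hits, total_size):
    """Hit rate in the fragment-bearing subset relative to the whole library."""
    subset_rate = hits_in_subset / subset_size
    baseline_rate = total_hits / total_size
    return subset_rate / baseline_rate

# e.g. 30 actives among 500 fragment-bearing molecules, versus 200 actives
# in a 100,000-compound library overall:
ef = enrichment_factor(30, 500, 200, 100_000)
print(f"enrichment = {ef:.0f}x over baseline")  # 30x
```

The point of the paragraph stands: until numbers like these are actually measured and published (i.e., until someone plays), the claim that fragments are a front-line discovery engine remains hype.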
Systems biology. I’ll keep it short: systems biology is another unfortunate attempt to rebrand biochemistry (lest anyone forget, the first being to call it “chemical biology”). Biochemistry is a real science. It is built upon the foundations of Krebs, Buchner, Sumner, Kornberg—all great scientists—and with many great contributions over the years. Such work is dedicated to understanding how the molecules of life work and work together. What exactly needs rebranding? Most of systems biology is hype and that which isn’t is just plain old biochemistry. Look it up in Stryer.3 That it should be necessary to rebrand, to re-hype, such a fundamental and wonderful science says much about current scientific leadership and granting agencies.
I could add many other examples: biologics, stapled peptides, outsourcing, workflow tools, cloud computing, RNA-based therapeutics, structural biology (does it actually help drug discovery more than the traditional development of SAR?), even computational chemistry. Perhaps especially computational chemistry! So can we, as a community, avoid hype-heartache in the future? In an attempt to be useful, not just negative, here are my five general principles:
1) Beware of technologies being sold to management, not to bench-level scientists.
It’s not (just) that management won’t understand the issues; they often have misaligned incentives. The kudos for bringing in a new and useful technology, for showing how you have “innovated” (if only by buying it) is often irresistible. And if it all goes wrong, as it often does, management has either moved on, propelled by the anticipated glory yet to arrive, or can blame the proponents for fooling them. A common excuse is that everyone was doing it or that someone with a Nobel Prize told him or her it was a good idea.
2) Ask if the fundamental scientific issues have been addressed.
And I don’t mean by those selling the hype; I mean by scientists not vested in the approach (or in a competing approach). Another common ploy of hype-sellers is paper churn. If you are attuned to this one you’ll eventually see a proponent of a method show a graph illustrating how the rate of publication is ramping up on this topic. As if this means anything other than that there are lots of people willing to believe hype. Publication quantity is no guarantee of quality. Yes, a good idea will generate lots of papers—but so do bad ones. The incentives of journals are to publish.
3) Work out if more is actually better.
A recurring theme in the examples above is that more must be better. More compounds, more screening, bigger chemical pathways, more gene sequences. No one really asks if the inevitable loss of quality is worth the increase in quantity. I find this curious because there are standard statistical techniques from information theory that answer this exact question. Yet I’ve never heard of them being used—perhaps a future blog, talk or product from OpenEye? A good rule of thumb ought to be that more is seldom better unless it is also at least roughly as accurate.
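The kind of back-of-the-envelope information theory I have in mind is standard: treat a noisy yes/no assay as a binary symmetric channel and count bits. A minimal sketch, with assumed error rates chosen purely for illustration:

```python
# Sketch: quantity vs quality in information-theoretic terms.
# A yes/no assay with symmetric error rate e is a binary symmetric
# channel with capacity 1 - H(e) bits per measurement (Shannon).
# The error rates below are illustrative assumptions, not real assay data.
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bits_per_assay(error_rate):
    """Capacity of a binary symmetric channel: 1 - H(error_rate)."""
    return 1.0 - h2(error_rate)

# A fast assay that mislabels 30% of calls versus a slow one at 5%:
fast, slow = bits_per_assay(0.30), bits_per_assay(0.05)
print(f"fast: {fast:.3f} bits/assay, slow: {slow:.3f} bits/assay")
# Even five of the noisy assays carry less information than one accurate one:
print(f"5 fast assays: {5 * fast:.2f} bits vs 1 slow: {slow:.2f} bits")
```

In other words, the quantity-versus-quality trade-off is not a matter of taste; it can be computed, which is exactly why it is strange that nobody computes it.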
4) When in doubt, buy the results, not the technology.
From a senior industry figure, and a rule to live by. If someone claims they have a great new way of producing active compounds, see if you want to buy the compounds. If someone wants to sell you a measurement technique, buy the output. Now, if you repeatedly like what you get, sure, buy the technology. But what’s wrong with first buying the results, not the hype? It’s a rhetorical question, but the answer appears to be the fear of being left behind (big pharma’s desperate need to act like lemmings). In most of the examples here, the wise company would have lost nothing by waiting until the hype wore off, and in fact would have gained much. Buying the results of the hype rather than the hype is a hedging strategy, in the original sense of the word.
5) Thou shalt always play.
The only sure way to get to the other side of Hype Hill, to get to the real utility, is to play. You have to be prepared to let talented people goof around, sometimes with substantial budgets, and develop expertise. A couple of examples from outside our industry. First, Ray Dolby, who founded the eponymous Dolby Labs and engendered a culture of experimentation that has had few parallels: his engineers could buy any equipment they liked, as long as it was less than a couple of hundred thousand dollars! Today Ray is worth $2.7 billion and his company has an enduring reputation for innovation as well as profits. Or consider when the British tried to interest the American armed forces in the Harrier JumpJet. After a few flights the American test pilots began in-flight manipulations of the adjustable thrusters—which were only supposed to be horizontal in flight and vertical in takeoff—risking expensive structural failure but learning that the plane’s real value was maneuverability. It helps to have “management” willing to buy you new toys if you break the old ones!
Play is not cheap: people playing means people not contributing to the apparent bottom line. Tati’s great Play Time, in the end, did not make money—it’s a risky business, movies and drugs. But if you want to innovate, to avoid the pitfalls of hype, you have to commit to play time—invest in constructing a climate of curiosity and experimentation. Let real science take root. And stick with it.
1. “Drug research needs serendipity”, David Shaywitz, Nassim Taleb, http://www.fooledbyrandomness.com/FT-Drugs.pdf
2. “Computer Design of Bioactive Molecules: A Method for Receptor-based de Novo Ligand Design”, J. B. Moon, W. J. Howe, PROTEINS: Structure, Function and Genetics, 11:314–328 (1991)
3. Biochemistry. Lubert Stryer. 4th Edition (the one with the GRASP pictures!)