On the Passing of Andrew Grant

By Anthony Nicholls | January 23, 2013

Of the many reasons to restart my blog, the death of a friend was the one I least expected. As many of you know, Andrew Grant of AstraZeneca, a long-time OpenEye collaborator, passed away on the 29th of December. His death was a total surprise to all who knew Andy: he was fit, ate well and was seldom sick. Yet while jogging in the Spanish hills he collapsed from a massive heart attack and could not be revived.

I first knew Andy when I was a post-doc at Columbia with Barry Honig and he was a post-doc with Harold Scheraga, way back in 1991. In those days, to get GRASP or DelPhi you downloaded an encrypted file and then asked me for the password. As the download idea was mine (before that we had to send out tapes!), I chose “newrose” as the password. “New Rose” was the debut single by the English punk band The Damned, a group renowned for doing everything “first and worst” (a habit of mine!). Andy, a keen fan of punk, was incredulous and wrote to ask me if I knew it was a song by The Damned! I really didn’t know much about Andy; I think our only other communication had been my faxing him notes on the non-linear Poisson-Boltzmann equation, so I had no idea that instant brotherhood had been struck! He took music that seriously.

I don’t recall many other communications until a couple of years later, when we actually met at a conference on molecular graphics and modeling in Leeds or York. I sat at the back of the lecture theatre and this red-headed bloke came and sat down next to me. “You don’t know me, but I know you,” were his first words to me. Before I could take him for a maniac off the streets, he quickly explained that he was J. Andrew Grant, who had begged notes from me when I was at Columbia and who had loved the GRASP password. By that time he had moved back to England and was working for the pharmaceutical division of ICI, shortly to become Zeneca. It was instant camaraderie. As any of you know who were lucky enough to form a friendship with Andy, he took such friendships very seriously, and we kept in regular contact. He invited me to meet the CADD group at Zeneca, which I still consider the best I ever saw. When I decided to leave Columbia and form OpenEye, it was with the massive encouragement and support of Andrew, and of those at Zeneca who trusted his insight.

More than just encouragement, he gave ideas. Before he had settled in Macclesfield he had spent a year in the Wilmington branch, working with Brian Masek. Brian had been playing around with the superposition of molecules represented as fused spheres. The code he had written was slow and prone to getting stuck in local minima, but when it worked it gave strikingly good overlays. While Andy was with Harold, he had been given the task of seeing how one might use Gaussians to calculate a robust and rapid estimate of molecular area—a problem he had not solved. However, he had worked with Professor Barry Pickup at Sheffield University, his PhD advisor, on the concept of representing molecular volumes with Gaussians. In fact, although it is little appreciated, I think that work with Barry and Maria Gallardo, who became his long-time partner, was one of the best ever in the modeling of molecules. It is highly unintuitive that one can use Gaussians not just to represent the volume of an atom (many had been drawn to that concept before) but, via the convolution formula for Gaussians, to represent spherical overlaps—to any order! Pure genius. And the result, that you could model the fused-sphere volume to within 0.1%, is, I do believe, the most remarkable result I have ever seen in our field.
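
For the technically curious, here is a small Python sketch of the pairwise part of that idea. It is mine and purely illustrative, not anything Andy wrote: each atom becomes a Gaussian whose integral equals its hard-sphere volume, and the overlap of two such Gaussians has a closed form that tracks the exact sphere-sphere intersection remarkably well. The full Grant-Pickup treatment carries the inclusion-exclusion series to higher-order intersections; only the two-body term is shown here, and the radii in the example are arbitrary.

```python
# A back-of-the-envelope sketch of the Gaussian atom-volume idea (pair term only).
import math

P = 2.0 * math.sqrt(2.0)                                   # Gaussian amplitude
KAPPA = math.pi * (3.0 * P / (4.0 * math.pi)) ** (2.0 / 3.0)

def alpha(radius):
    """Exponent chosen so the atom Gaussian integrates to 4/3*pi*r^3."""
    return KAPPA / radius ** 2

def gaussian_pair_overlap(r1, r2, d):
    """Overlap volume of two atom Gaussians a distance d apart (closed form)."""
    a1, a2 = alpha(r1), alpha(r2)
    return P * P * math.exp(-a1 * a2 * d * d / (a1 + a2)) * (math.pi / (a1 + a2)) ** 1.5

def hard_sphere_overlap(r1, r2, d):
    """Exact intersection volume of two hard spheres, for comparison."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        r = min(r1, r2)
        return 4.0 / 3.0 * math.pi * r ** 3
    return (math.pi * (r1 + r2 - d) ** 2 *
            (d * d + 2.0 * d * (r1 + r2) - 3.0 * (r1 - r2) ** 2)) / (12.0 * d)

if __name__ == "__main__":
    r1, r2 = 1.7, 1.5                                      # illustrative radii, Angstrom
    for d in (0.5, 1.5, 2.5, 3.5):
        print(d, gaussian_pair_overlap(r1, r2, d), hard_sphere_overlap(r1, r2, d))
```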

It was for this reason that I began seriously thinking about ROCS (a beautiful result deserves a great implementation). Andy’s initial work along those lines had improved on Brian’s, largely overcoming the problem of multiple minima and decreasing the time required for an overlap minimization from 100 seconds to one or two seconds. This was amazing in and of itself, but we both knew it had to be much faster to really have an impact. I proposed aiming for 1000 per second. I remember clearly him looking at me, not doubting we could do it; his natural modesty simply made him suggest we claim we might be able to do 100 per second, so as not to seem too brash! It was the first example of the Nicholls/Grant dichotomy (me being the brash one). Of course, we eventually achieved that mark and, with Matt Stahl’s help, made it into ROCS, a product that remains at the center of all we do at OpenEye.

If Andy had done nothing else, his work on ROCS would, in my mind, put him in the sparsely populated pantheon of those who really made a difference in molecular modeling. But in many ways it was only a minor part of his output over the next fifteen years. Here are, for me at least, some highlights, remembering that this body of work was in addition to the many things he did within AstraZeneca:

1) ZAP. Given such an accurate representation of molecular volume with Gaussians, it seemed natural to imagine using this form for the dielectric constant of molecules. With the help of two students, Paula Kitts and Christine Kitchen, we derived a functional form that worked very well, one that reproduced the results from the molecular-surface dielectric map in DelPhi yet was much more stable with respect to translation and rotation of the molecule. Both Paula and Christine earned their PhDs from Sheffield with Barry for their contributions to this work. The result was that we could use coarser grids to get the same numerical precision as DelPhi, and hence an enormous increase in speed. This was in addition to the faster convergence in ZAP, due to the use of smoother functions. Even many years later it is still not widely appreciated what a breakthrough this was: a couple of years ago we used the ZAP-based protein pKa code Andy and I wrote for the protein pKa blind challenge put together around Bertrand Garcia-Moreno’s work on SNase. People were amazed it took less than a minute to calculate accurate pKas for the whole protein. Over a decade after we wrote the code it was, and is, still the fastest way to make PB predictions. And the work did not stop there; Andy also wrote an interface to Gaussian to allow one to do iterative QM/PB calculations (QZAP) and also helped with the derivation of the first-order derivatives of the electrostatic energy—the first time anyone had done this in a robust and fast manner—in fact, he found that forces cost almost nothing compared to the original PB calculation.
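
To give a flavour of the idea, and only a flavour (this is my toy illustration, not the functional form we actually published in ZAP), here is how atom-centred Gaussians might define a smooth dielectric map on a grid, interpolating between an inner and an outer dielectric. The kappa value, the way the atomic Gaussians are combined, and the two-atom example are all placeholders.

```python
# Illustrative only: a Gaussian-smoothed dielectric map on a regular grid.
import numpy as np

EPS_IN, EPS_OUT = 2.0, 80.0   # illustrative inner/outer dielectric constants

def dielectric_grid(coords, radii, spacing=0.5, padding=4.0, kappa=2.3):
    """Atom Gaussians define a solute density in [0, 1] that interpolates
    smoothly between EPS_IN and EPS_OUT."""
    coords = np.asarray(coords, dtype=float)
    lo = coords.min(axis=0) - padding
    hi = coords.max(axis=0) + padding
    axes = [np.arange(lo[i], hi[i] + spacing, spacing) for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)   # (nx, ny, nz, 3)
    outside = np.ones(grid.shape[:3])
    for xyz, r in zip(coords, radii):
        d2 = ((grid - xyz) ** 2).sum(axis=-1)
        outside *= 1.0 - np.exp(-kappa * d2 / r ** 2)   # "outside this atom" factor
    density = 1.0 - outside
    return EPS_IN * density + EPS_OUT * (1.0 - density)

# A toy two-atom "molecule": the map runs from ~EPS_IN inside to EPS_OUT far away.
eps = dielectric_grid([(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)], [1.7, 1.5])
print(eps.shape, float(eps.min()), float(eps.max()))
```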

2) Docking. A less appreciated, but still wonderful, piece of work that grew out of the concept of modeling hard spheres with Gaussians was the Grant-Pickup formula for shape-based docking, a formula that underlies our FRED docking program. Essentially, Barry and Andy found that a weighted average of a Gaussian function and its first radial derivative gave a function that looked amazingly like the VdW energy of the approach of two spheres. A free parameter, kappa, essentially the width of the Gaussian, controlled the sharpness of this pseudo-energy function—i.e., a small value would produce a shallow minimum, a large kappa a steeper one. As such, they could “tune” their function either to the character of VdW interactions or to a much smoother, less fractal landscape. Mark McGann, while at Johnson and Johnson with Frank Brown, showed that you could tune kappa for real-world applications and illustrated this by optimizing cross-docking pose predictions across a disparate set of trypsin isoforms. Ironically, given its success, Andy was actually less interested in this than in what he called “internal docking”. By simply replacing the original function with one in which the two component functions subtracted rather than added, the modified function had partial shape-matching character—that is, if a fragment of a molecule was randomly placed within the parent molecule and this new function was optimized, the fragment could find its way back to its own position within the larger molecule. This intriguing finding remains largely unexplored.
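
A sketch of the flavour of that pair function, with made-up weights rather than the published parameterisation: the Gaussian supplies a repulsive core, its radial derivative an attractive well, and kappa sets how sharp the resulting minimum is.

```python
# Illustrative Gaussian-plus-derivative pair pseudo-energy (weights are invented).
import math

def pair_pseudo_energy(r, kappa, w_gauss=1.0, w_deriv=1.0):
    """Weighted mix of a Gaussian and its first radial derivative: repulsive at
    contact, with a soft VdW-like minimum whose sharpness is set by kappa."""
    g = math.exp(-kappa * r * r)        # the Gaussian itself
    dg_dr = -2.0 * kappa * r * g        # its first radial derivative
    return w_gauss * g + w_deriv * dg_dr

# Smaller kappa gives a shallower, smoother minimum; larger kappa a steeper one.
for kappa in (0.5, 2.0):
    samples = [(0.05 * i, pair_pseudo_energy(0.05 * i, kappa)) for i in range(1, 121)]
    r_min, e_min = min(samples, key=lambda t: t[1])
    print(f"kappa={kappa}: minimum of {e_min:.3f} at r={r_min:.2f}")
```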

3) Gaussian Accessible Area. Having succeeded in calculating the electrostatic solvent forces in ZAP, we looked again at whether we could calculate the accessible area of a molecule. Note that the word “accessible” signifies that this is not the contact, or Van der Waals, surface but the extended surface traced by the center of a water-radius sphere rolled over that surface. The problem with using Gaussians for this is that the number of overlaps of these extended spheres becomes impractically huge, approaching N factorial (where N is the heavy atom count). Instead, Andy and I went back to the oldest method for area calculation, namely that of Shrake and Rupley, which was simply to put dots on each extended sphere and determine whether each is inside or outside. The problem with this old algorithm was that it took many dots to get an accurate area, and it had no derivatives. We solved this by having the dots sample the extended Gaussians representing each atom. If this sum was above a certain threshold we removed the dot from the area calculation; otherwise we calculated a “partial” occlusion. Because the sum of extended Gaussians was so smooth we could get away with many fewer dots—the method was again amazingly fast (to my knowledge there is nothing comparable) and we could easily calculate derivatives. Both aspects were important because it meant we could add a non-polar energy term to PB, allowing us to evaluate and optimize continuum theory predictions of binding energies, usually in less than a second.
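
The skeleton of that dot scheme looks something like the following. This is a simplified sketch of mine, not the OpenEye implementation; in particular the hard occlusion test is a crude stand-in for the smooth partial-occlusion weighting that gives the real method its derivatives, and the kappa and dot count are arbitrary.

```python
# Simplified dot-based accessible area with a Gaussian occlusion test.
import math
import numpy as np

PROBE = 1.4   # water probe radius, Angstrom

def fibonacci_sphere(n):
    """Roughly uniform dots on the unit sphere."""
    i = np.arange(n) + 0.5
    phi = math.pi * (3.0 - math.sqrt(5.0)) * i
    z = 1.0 - 2.0 * i / n
    rho = np.sqrt(1.0 - z * z)
    return np.stack([rho * np.cos(phi), rho * np.sin(phi), z], axis=1)

def accessible_area(coords, radii, n_dots=250, kappa=2.3):
    coords = np.asarray(coords, dtype=float)
    ext = np.asarray(radii, dtype=float) + PROBE          # probe-extended radii
    unit = fibonacci_sphere(n_dots)
    total = 0.0
    for i in range(len(coords)):
        dots = coords[i] + ext[i] * unit                  # dots on atom i's extended sphere
        density = np.zeros(n_dots)
        for j in range(len(coords)):
            if j == i:
                continue
            d2 = ((dots - coords[j]) ** 2).sum(axis=1)
            density += np.exp(-kappa * d2 / ext[j] ** 2)  # neighbour's extended Gaussian
        # Hard cut for simplicity: a dot buried in a neighbour contributes nothing.
        # (The real method uses a smooth partial-occlusion weight instead, which is
        #  what makes the area cheaply differentiable and lets it use few dots.)
        exposed = density < math.exp(-kappa)
        total += exposed.mean() * 4.0 * math.pi * ext[i] ** 2
    return total

print(accessible_area([(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)], [1.7, 1.5]))
```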

4) Approximate Solvation Models. Because PB and surface area were now so fast, we could compare and contrast with our bête noire, GBSA. This very approximate, and to some of us profoundly distasteful, attempt to estimate the energies of PB was (and unfortunately still is in many quarters) quite rampant. The advantages claimed were that it was simpler to program (true), much faster (suspect) and gave you gradients (which ZAP now did at no cost). We examined GBSA very closely and realized that if you wanted ZAP to be as inaccurate as GBSA you had to choose such a coarse grid spacing that PB was actually faster than GB! This was widely ignored, I think because the other advantage of GB, being able to adjust internal parameters to give you the answer you wanted, was an advantage PB lacked. Eventually, more for fun than anything else, we did construct our own version of GB using, naturally, Gaussians. It was quite elegant and required only a single parameter, unlike the parameter proliferation within the literature. But we never used it in anger; we already had the real thing. The work on GB vs. PB did lead to one very startling discovery that became known as the Sheffield Model. A young post-doc, Matthew Sykes, again working for Barry Pickup and Andy, was engaged to work on variants of our Gaussian PB but instead tried an extraordinary simplification. Rather than calculating all the atom-atom distances required for the shielding and desolvation parts of GB, Matt simply used a couple of universal constants, A and B, and found that the resultant model was almost as accurate as GB (i.e., about as far from GB as GB was from PB). Considering that this approach required almost no computation, gave easy gradients and was almost as good for small molecules, it seemed a natural fit for minimization code, where it now resides in Szybki. In fact, for entropy calculations in solvent it is actually better than PB (!) because the gradients are so reliable and differentiable.
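
For readers who have never looked inside a GB model, the “shielding and desolvation” machinery Matt was simplifying is, in the standard Still et al. formulation, the pairwise screening function sketched below. This is just the textbook GB polarisation energy for context; it is not our Gaussian GB and not the Sheffield Model itself, whose published form I will not try to reproduce here. The charges, Born radii and distances in the example are invented.

```python
# Textbook (Still-style) generalized Born polarisation energy, for context only.
import math

COUL = 332.06   # kcal mol^-1 Angstrom e^-2

def gb_polarisation(charges, born_radii, dists, eps_in=1.0, eps_out=80.0):
    """dists[i][j] is the i-j distance (0 on the diagonal); returns kcal/mol."""
    tau = 1.0 / eps_in - 1.0 / eps_out
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            rij2 = dists[i][j] ** 2
            rirj = born_radii[i] * born_radii[j]
            # The pairwise "shielding" function that needs every atom-atom distance:
            f_gb = math.sqrt(rij2 + rirj * math.exp(-rij2 / (4.0 * rirj)))
            energy += charges[i] * charges[j] / f_gb
    return -0.5 * tau * COUL * energy

# Two opposite unit charges 3 A apart, Born radii of 2 A (purely illustrative):
print(gb_polarisation([1.0, -1.0], [2.0, 2.0], [[0.0, 3.0], [3.0, 0.0]]))
```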

5) Getting back to shape, I think if there was one piece of work Andy himself would most want to be remembered for, it is an approach that hasn’t even (yet) made an impact, namely the representation of molecules by Ellipsoidal Gaussians. The idea here was simple enough: instead of using a Gaussian centered at each atom, we would allow a few Gaussians to “float” around and to elongate or squash, so that they could represent sets of atoms (we imagined cigar-shaped ellipsoids for aliphatic chains and disk-shaped ones for ring systems). It proved a real challenge, both mathematically and computationally, to solve this problem in a robust and reliable manner, but I shall always recall the day we got it working. Andy had come over to Santa Fe for a couple of weeks of “vacation” and was working on the loaner SGI from Glaxo that Paul Charifson had organized. By a strange twist of fate it still had a yellow sticker that read “Mill Lambert” (Mill had been a colleague from Andy’s Scheraga days). After a frustrating week of working on the problem he called me over in his typical low-key way: “You might want to take a look at this.” There on the screen was a beautiful two-ellipsoid representation of Omeprazole (attached below). It was like seeing a goal by Eric Cantona for the first time. We subsequently improved the approach so we could construct quite elaborate, multi-ellipsoid representations, coloring their surfaces and working out the mathematics of the overlap between two such sets and of the derivatives with respect to position—but we never quite found a use for them! The mathematics was expensive, so there wasn’t much of an advantage over ROCS. There were often different (quite reasonable) representations of a single conformer. And sometimes it took a lot of them to really capture a shape. The best use we ever envisioned, and actually published on, was as an anonymization technique: i.e., you could obscure what the underlying molecular structure was but keep the shape. I hope this, or some other use, one day catches on, as I know it was Andy’s favorite piece of work.
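
If you want a feel for how a group of atoms turns into a cigar or a disk, here is a toy version, nothing like the constrained Gaussian optimization Andy actually solved: take the volume-weighted second-moment matrix of the atom positions and read an ellipsoid's axes off its eigenvectors. The scale factor is an arbitrary fudge, and the flat-ring example is made up.

```python
# Toy ellipsoid from a group of atoms via a weighted second-moment matrix.
import numpy as np

def ellipsoid_from_atoms(coords, radii, scale=1.8):
    """Return (centre, half-axis lengths, axis directions) for one atom group."""
    coords = np.asarray(coords, dtype=float)
    w = np.asarray(radii, dtype=float) ** 3              # weight atoms by volume
    w = w / w.sum()
    centre = (w[:, None] * coords).sum(axis=0)
    diff = coords - centre
    cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0)
    eigval, eigvec = np.linalg.eigh(cov)
    half_axes = scale * np.sqrt(np.maximum(eigval, 1e-6)) # 'scale' is an ad hoc fudge
    return centre, half_axes, eigvec

# A flat six-membered-ring-like arrangement comes out disk-shaped (two long axes,
# one short); a chain of atoms would come out cigar-shaped instead.
ring = [(np.cos(t), np.sin(t), 0.0) for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
print(ellipsoid_from_atoms(ring, [1.7] * 6)[1])
```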

6) Around the time he and I were perfecting ellipsoids, OpenEye hired James Haigh, ostensibly to work for Barry Pickup as a post-doc but more to help Andy with his many projects. Although initially suspicious of this free help, Andy and James quickly became very productive together. Their best work was on the exploration of Shape Space: i.e., seeing how many molecular shapes up to a given size were needed such that any subsequent molecule had to be within a certain shape distance of a molecule already seen. They discovered that for a given threshold of similarity the number of such shapes grows in an almost perfectly exponential manner with respect to size. By comparing to hypersphere results we could estimate that the effective dimensionality of shape space grows roughly as the number of heavy atoms divided by three—a remarkable result. It led to the concept of a shape fingerprint, a string of ones and zeros in which a bit was set to one if a conformer was within a threshold of similarity to a particular “reference” shape. Shape fingerprints proved a pretty good surrogate for shape, but the original concept of shape space proved the stronger one, going on to influence the design of chemical libraries, in particular in the work Andy did with Neil Hales. Another interesting aspect of the work on shape was to show that the shape Tanimoto obeys the triangle inequality. This was done as part of Huw Jones’ Sheffield PhD project, which also encompassed the examination of RMSD between conformations as a distance metric.
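
The fingerprint idea itself is simple enough to sketch in a few lines; everything here (the reference shapes, the shape-Tanimoto function, the 0.8 threshold) is a placeholder rather than what was actually used.

```python
# Schematic shape fingerprint: one bit per reference shape.
from typing import Callable, Sequence

def shape_fingerprint(conformer,
                      reference_shapes: Sequence,
                      shape_tanimoto: Callable[[object, object], float],
                      threshold: float = 0.8) -> list[int]:
    """Set bit k to 1 if the conformer is within the similarity threshold of
    the k-th reference shape."""
    return [1 if shape_tanimoto(conformer, ref) >= threshold else 0
            for ref in reference_shapes]

def fingerprint_tanimoto(fp_a: list[int], fp_b: list[int]) -> float:
    """Tanimoto on the bit strings: a cheap surrogate for the full shape comparison."""
    both = sum(a & b for a, b in zip(fp_a, fp_b))
    either = sum(a | b for a, b in zip(fp_a, fp_b))
    return both / either if either else 0.0
```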

7) Returning to electrostatics, work with David Timms on Lck kinase threw up an interesting observation that is still not widely appreciated, namely that some series show SAR in parts of the molecule that have no contact with the protein; this is purely a through-space effect between the protein potential (as calculated by PB) and the functional group being modified. It was a lovely piece of work, hampered only by the limited amount of data AZ was able to generate. This limitation in data quantity led to its rejection by J. Med. Chem., in my mind one of the poorer decisions made by that journal. The quality of the data, I think, was sufficient, and the concept compelling enough, that the work could and should have been examined by a wider audience. I still hope that happens to what Andy and I always referred to as the Timms Effect.

8) As well as pioneering the ROCS concept, Andy was a big believer in the expansion of shape comparison to electrostatic field comparison. We always assumed that having “pure” physical quantities, as opposed to ones constructed from atom typing, would be superior, and were always disappointed when virtual screening studies suggested the opposite. However, like all true believers, sometimes you have to wait a while for reality to catch up to your faith! And when it finally did, we were delighted to see examples such as Grant Churchill’s discovery of a nanomolar NAADP mimic. That this work was done by an undergraduate in Grant’s lab as a summer project only emphasized to us that this was a robust, intuitively correct concept. And, only a couple of years later, Andy had a successful “EON” hit of his own with a fibrinolysis inhibitor, discovered almost “off the shelf”, that no other method had found. Although it did not make it into development, largely because of commercial decisions, it was gratifying for Andy to see this elementary, physics-based method deliver.

9) The last piece of substantial work Andrew did was on Molecular Dipoles. For several years he had been aiding my work on solvation energy prediction, regularly computing high-quality QM charges for use in the SAMPL challenges. It was that careful work that demonstrated to us that better charges usually gave better (PB) results—a nice result but a frustrating one, as some of the QM calculations were taking months. To see what the appropriate level of calculation ought to be, we turned to a different physical property, dipole moments, where we could be sure of the experimental result (or so we thought). To get this data, Lisa Chubrilo carefully transcribed hundreds of results from the compendium of McClellan. To our surprise, the agreement with QM, even at a high level, was not that great. Although there seemed to be a core set that were well predicted, there were dozens that were not. This led to a lot of soul-searching and a lot more diligent work by Andy and by Maria, who discovered that sometimes these outliers were the result of poor proton positioning by OMEGA, sometimes the conformation was wrong, and sometimes the literature was incorrectly transcribed into McClellan (we never found an example of Lisa mis-transcribing from McClellan!). With all of these examples cleaned up we could see a very strong signal—DFT always gave superior results to Hartree-Fock calculations for the same basis set, and we could also see a real sweet spot where the calculation was both accurate and fast—something not yet fully exploited in our work. Finally, we also came to realize that the claimed experimental accuracy of 0.01 Debye was too optimistic, both because very high-level calculations never improved beyond an RMSE of 0.1 D, and because we saw enough papers by different groups on the same molecule to realize that 0.1 D was a much more realistic experimental error.
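
The bookkeeping behind those comparisons is, at least for a point-charge model, pleasingly simple: the dipole is the charge-weighted sum of the coordinates, converted to Debye. A minimal sketch follows; the charges and bond length in the example are invented, roughly HF-like, purely for illustration.

```python
# Point-charge molecular dipole in Debye.
import numpy as np

EA_TO_DEBYE = 4.80321   # 1 electron-Angstrom in Debye

def dipole_moment(coords, charges):
    """Magnitude of the point-charge dipole in Debye
    (origin-independent for a neutral molecule)."""
    coords = np.asarray(coords, dtype=float)
    charges = np.asarray(charges, dtype=float)
    mu = (charges[:, None] * coords).sum(axis=0)   # electron-Angstrom
    return float(np.linalg.norm(mu)) * EA_TO_DEBYE

# A toy HF-like diatomic: +/-0.4 e separated by 0.92 A gives roughly 1.8 D.
print(dipole_moment([(0.0, 0.0, 0.0), (0.0, 0.0, 0.92)], [0.4, -0.4]))
```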

10) I want to end with perhaps an unlikely example. I’ve not mentioned several internal pieces of work at AZ, such as his exploration of the LINGO similarity measure, or his amazing notes on quaternion adaptation, or his aid to Peter Taylor’s tautomer work, or any of a number of great talks he gave at meetings. Rather, I want to end with a small piece of science that he and I did that illustrates his love of mathematics. From my early work with GRASP I had noticed something interesting in the Effective Relative Dielectric, the uniform dielectric that would be needed to reproduce the potential from a charge at one atom measured at a second, distant, atom. If these two atoms were in a protein, the effective dielectric was small when the atoms were close, but would increase if the atoms were far enough apart for solvent to have an effect. What was surprising was that the maximum of this function was typically larger than that of water—i.e., a low-dielectric mass (protein) embedded in a high dielectric (water) could actually create over-screening. This was especially true when the second atom was on the opposite side of the protein. This was considered by some major figures in the field to be a mistake, a consequence of incomplete convergence or of a grid-spacing problem. Actually it was quite real, and it might have interesting biological consequences: proteins with a net-negative active site often have a large positive area on the opposite side of the protein, and vice versa. To prove this was a real effect, Andy implemented the analytic solution of the PB equation for a charge located just beneath the surface of a sphere and calculated the contours of effective dielectric. When plotted, this beautiful figure clearly showed an antipodal zone with an effective relative dielectric of over 110. I have appended his illustration below. This picture and the simple two-ellipsoid fit to Omeprazole are images that I shall always associate with the work of J. Andrew Grant. Simple and elegant, with an eye for the beautiful, but practical, result.
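
For anyone who wants to play with that check themselves, the classic Kirkwood series for a charge inside a dielectric sphere is enough. The sketch below is mine, not Andy's code: the sphere radius, charge depth, dielectric constants and truncation are all invented for illustration, and it simply backs out the effective dielectric, the value a uniform medium would need to give the same potential at the observation point.

```python
# Kirkwood-series potential for a charge inside a dielectric sphere, and the
# "effective dielectric" it implies at an interior observation point.
import numpy as np
from numpy.polynomial import legendre

def kirkwood_potential(q, s, obs, a, eps_in=2.0, eps_out=80.0, nmax=500):
    """Potential (Gaussian units, e/Angstrom) at 'obs' from a charge q at (0, 0, s),
    both inside a sphere of radius a (dielectric eps_in) surrounded by eps_out."""
    obs = np.asarray(obs, dtype=float)
    src = np.array([0.0, 0.0, s])
    direct = q / (eps_in * np.linalg.norm(obs - src))
    r = np.linalg.norm(obs)
    cos_t = obs[2] / r if r > 0.0 else 1.0
    reaction = 0.0
    for n in range(nmax):
        coef = (n + 1) * (eps_in - eps_out) / (n * eps_in + (n + 1) * eps_out)
        p_n = legendre.legval(cos_t, [0.0] * n + [1.0])   # Legendre polynomial P_n
        reaction += coef * (r * s / a ** 2) ** n * p_n
    return direct + reaction * q / (eps_in * a)

def effective_dielectric(q, s, obs, a, **kwargs):
    """Uniform dielectric that would reproduce the same potential at 'obs'."""
    d = np.linalg.norm(np.asarray(obs, dtype=float) - np.array([0.0, 0.0, s]))
    return q / (d * kirkwood_potential(q, s, obs, a, **kwargs))

# Unit charge 1.5 A beneath the surface of a 15 A sphere; probe a point just
# beneath the antipodal surface.
print(effective_dielectric(1.0, 13.5, (0.0, 0.0, -14.5), 15.0))
```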

I hope this illustrates why his death is such a loss for the field of molecular modeling. Even without ROCS he made such major contributions, not all of which have yet played out. But even that loss does not compare to the loss of our dear friend, someone who put up with us camping out at his house, who always had time to help with projects, to freely exchange ideas, to see the bright side in people—which he did despite severe provocation! He was tremendously witty, a natural sportsman, could write like an angel, loved music and Manchester United with a passion, was widely read and appreciated greatness, wherever it lay: in a great goal, a great song, a great piece of art or a wonderful piece of science or math. He was a universally liked man, and I only wish more of you had had the chance to get to know him.

Anthony

[Image: AG-blog-pic1]

[Image: AG-blog-pic2]

Anthony Nicholls

CEO & Founder of OpenEye Scientific Software, Inc.
