A few months ago, I finally got around to installing & learning NAMD for doing some molecular dynamics. Installation was (and has been) relatively easy; learning how to use it, though, had always been a bit too much for me for one reason or another. However, at the encouragement[1] of a new postdoc in one of the adjoining labs at work, I gave it another stab with much better results!

Thankfully, things went substantially more smoothly this time; having a pretty solid tutorial to take me from start to finish made it a hell of a lot easier[2]. With workflows that were functional in my hands, I was finally able to migrate over to my own proteins of interest (ParA, ParB, Hda, Beta clamp) and get to work using (more like troubleshooting) them.

Once I finally had the file preparation figured out[3], I was able to crank through some atomic-motion time. Unfortunately, the time it took to calculate 1 nanosecond of motion, even while maxing out my 8-core Mac Pro, was slightly obnoxious: 26 hours or so.

After taking a few weeks to get my PI on board with registering for Compute Canada access, I finally got onto some of the supercomputing clusters funded by the Canadian federal government (namely, the GPC at SciNet, operated here in Toronto). Scaling up the CPU count (over multiple 8-CPU nodes) for the molecular dynamics runs, I’ve been able to push the same 1 ns/26 hr crunch time up to 5 ns/24 hr. That’s a pretty substantial improvement, but those timeframes are mostly just useful for assessing changes in structural stability for mutants of my protein. Considering the protein takes on the order of seconds to minutes to undergo its important conformational change (the one I was hoping to get some modeling sense for), I really wanted the most bang for my buck in modelled-time per CPU-time. Even with a CUDA-capable video card in my Mac Pro, there were no CUDA-capable NAMD binaries available for Mac OS X, and I certainly didn’t want to compile it from source if I didn’t have to (plus some sources said it won’t compile with CUDA except on Linux).

Here’s where I start looking into GROMACS once again[4], which is available in the MacPorts collection that I frequently delve into for open-source and/or command-line tools. MacPorts unfortunately does not offer a CUDA-enabled version of GROMACS (and probably for good reason, I imagine). Fine. Okay. Alright. Let’s suck it up & build it from source. I already do this on my FreeBSD server all the time…how bad can it be?

Pretty miserable, apparently. I think I spent about a week straight trying to figure out why the hell the damn software kept erroring out. And amidst that: oh, the Intel compilers are best for performance on Mac OS X. Oh, and which version of gcc do I need? Oh, and nvcc (the CUDA compiler driver) refuses to talk to any compiler but clang on Mac OS X. And a slew of other obnoxious and/or cryptic errors that I could find jackshit for help on. So I gave up on it for a while and stuck to the GPC & NAMD route.

Last week, with my PI out of town for a Gordon conference, I decided to sit down again to try & learn GROMACS, but just on my laptop for now: make sure I can actually use the software before sinking too much time into attempting another CUDA-capable build. Lo & behold, I finally made it through a couple of tutorials[5] and felt comfortable enough with it that I’ve been typing up a “typical” workflow/pipeline today, the steps/commands to process a structure from start to finish. It helped that I finally got a working CUDA-enabled Mac OS X build going (I hit on the winning combination Thursday or Friday), so in the best interests of disclosure for anyone else stuck trying to build a CUDA-enabled OS X version of GROMACS, here’s my path to success:

Following the framework outlined here for installing GROMACS 5 from source, everything had traditionally worked fine for me except the make step, where builds constantly got fucked up by nvcc & an appropriate compiler refusing to talk to each other. So. To run through the first steps for brevity: download GROMACS, unpack it to a directory, and change into that directory via Terminal (or xQuartz/X11).
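
For anyone who wants those preliminary steps spelled out, they amount to something like the following (assuming the 5.0.5 tarball from the GROMACS FTP site; substitute whichever version you actually grabbed):

curl -O ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.0.5.tar.gz
tar -xzf gromacs-5.0.5.tar.gz
cd gromacs-5.0.5

…and then: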

mkdir build
cd build

This was the easy stuff…here’s the tricky part: the cmake configuration that finally did just what I wanted…

cmake .. \
-DCMAKE_C_COMPILER=/usr/bin/icc \
-DCMAKE_CXX_COMPILER=/usr/bin/icpc \
-DGMX_MPI=OFF \
-DGMX_GPU=ON \
-DCUDA_HOST_COMPILER=/usr/bin/cc \
-DCUDA_PROPAGATE_HOST_FLAGS=OFF \
-DGMX_BUILD_OWN_FFTW=ON \
-DREGRESSIONTEST_DOWNLOAD=ON

Apparently MPI has to be disabled (because seriously, who is going to have an MPI setup at home for this stuff?), and nvcc basically has to talk to the Apple clang/llvm compiler (which was 6.1.0 at the time of compile). Manually defining a newer clang from MacPorts (3.7, whereas Apple’s is based on 3.6) refused to work, and trying to get gcc (4.7, 4.8, 4.9, or even 5) or icc (the Intel 15.0 compiler) in as the CUDA host compiler just flat-out failed. Not sure why I never tried the native cc before, but that was the kicker, I believe. That, and making sure MPI was turned off. After that, the rest was easy as pasta!

make -j16                               # compile across 16 parallel jobs
make check                              # run the regression tests
sudo make install                       # installs to /usr/local/gromacs by default
source /usr/local/gromacs/bin/GMXRC     # load the GROMACS environment variables into this shell

After all that, I had a working GROMACS installation, CUDA-enabled and all (could hear my video card working its ass off during the benchmarking tests).
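
(A quick sanity check, if you want one before committing to a long run: gmx --version prints out the build configuration, and it should report GPU support as enabled if the CUDA hooks actually made it into the build.)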

With that, I plugged in one of my older structures (one that had taken the 26 hours to crunch 1 nanosecond’s worth of time) and let it go overnight to see how long it’d take. It didn’t even take the night to finish…that bad boy was done in under 5 hours! mdrun told me the system was crunching at ~5.6 ns/day. More than 5 nanoseconds per DAY!

Holy fucking shit! That’s six times as fast! Granted, I don’t know whether that’s the expected speedup (or even less than it, since GROMACS does single-precision FP calculations vs. NAMD’s double-precision), but that’s one hell of a benchmarking improvement. Some old SciNet wiki posts claimed that someone was crunching upwards of microseconds (thousands of nanoseconds) per day on the cluster with only 8 nodes (the same number I’m using to get 5-ish nanoseconds/day with NAMD). Needless to say, I’m enthusiastic about the improved timeframes I should be able to model over!
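
For reference, the run itself was nothing exotic: a plain mdrun call on the prepared run input, hypothetically along these lines (the file name is made up):

gmx mdrun -deffnm md_parA -v

The ns/day figure comes straight from the Performance line mdrun prints at the end of its log, so there’s no math to fudge on my end.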

Now, with this compilation recipe finally established & logged in case I ever forget, it’s time to get back to constructing my cheatsheet/workflow so I can (a) get some of these longer timescales crunching, (b) figure out how the hell to troubleshoot/build topology maps for cofactors (namely, ATP & ATPγS), and (c) start staging co-structures to examine potential interaction surfaces.
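
For the curious, the skeleton of that cheatsheet currently looks something like this (a sketch modeled on the lysozyme-style tutorials; the file names, .mdp inputs, and ion choices are illustrative rather than gospel):

gmx pdb2gmx -f protein.pdb -o processed.gro -water spce           # build the topology (prompts for a force field)
gmx editconf -f processed.gro -o boxed.gro -c -d 1.0 -bt cubic    # center in a cubic box with 1.0 nm padding
gmx solvate -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top    # fill the box with water
gmx grompp -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr
gmx genion -s ions.tpr -o ionized.gro -p topol.top -pname NA -nname CL -neutral    # swap waters for ions
gmx grompp -f minim.mdp -c ionized.gro -p topol.top -o em.tpr
gmx mdrun -deffnm em                                              # energy minimization
# (NVT/NPT equilibration runs go here; same grompp/mdrun pattern)
gmx grompp -f md.mdp -c em.gro -p topol.top -o md.tpr
gmx mdrun -deffnm md                                              # production run

Everything past the setup is the same grompp-then-mdrun pattern, which is what makes it so amenable to a cheatsheet in the first place.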

EDIT: Apparently the installation settings don’t stick after rebooting the system. That’s because I didn’t stop to pay attention to what the source /usr/local/gromacs/bin/GMXRC line actually did (which is only to establish the appropriate environment variables for the current shell session). Instead, you need to add that line to .profile or .bashrc in your home folder so it’s invoked whenever you start up a Terminal session. Doh!
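
In practice, a one-liner like this takes care of it (appending to ~/.profile here; point it at ~/.bashrc instead if that’s the file your shell actually reads):

echo 'source /usr/local/gromacs/bin/GMXRC' >> ~/.profile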


[1] “You seem to be pretty okay with coding, like you’re writing some Python scripts and know Perl decently enough. NAMD shouldn’t be that tricky to learn; it’s just learning how to run a bunch of scripts, really. If you really want some help sometime, I can probably help get you started.” At least, that’s the general sense of what he told me when I asked him about it; he’d worked on disordered proteins for his Ph.D.
[2] Seriously, I don’t know why I’d never had the motivation to sit down & work through their posted tutorial before, but going through it this last time, it made a hell of a lot more sense.
[3] Doing all of the solvation & ionization from the command line was a bit of a nuisance (and a bit confusing for me on chain definition/separation), so VMD’s built-in tools helped a lot with that.
[4] I’ve repeatedly attempted to dabble in using it, but with no readily available tutorial for learning it, I considered it nothing more than a pipe dream half the time: I’d install the software & have absolutely no clue how to use the damn thing.
[5] The first tutorial I ran across was rather comprehensive in making you iteratively go through and learn what each step does, but it became frustrating in its repetition. Still useful! Just not what I needed right there and then. Plus, it was outdated with respect to GROMACS 5. Instead, I found another batch of tutorials better suited to GROMACS 5, with more diversity in what they cover, and I’ve been happy with those since!
