Technical Paper
What You Should Know About NAMD and Charm++ But Were Hoping to Ignore
Event Type: Technical Paper
Time: Tuesday, July 24, 3:30pm - 3:45pm
Description: The biomolecular simulation program NAMD is used heavily at many HPC
centers. Supporting NAMD users requires knowledge of the Charm++ parallel
runtime system on which NAMD is built. Introduced in 1993, Charm++
supports message-driven, task-based, and other programming models and has
demonstrated its portability across generations of architectures,
interconnects, and operating systems. While Charm++ can use MPI as a
portable communication layer, specialized high-performance layers are
preferred for Cray, IBM, and InfiniBand networks and a new OFI layer
supports Omni-Path. NAMD binaries using some specialized layers can be
launched directly with mpiexec or its equivalent, or mpiexec can be called
by the charmrun program to leverage system job-launch mechanisms.
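The two launch paths described above might look as follows. This is an illustrative sketch only: the binary name namd2, the input file sim.conf, and the node count are assumptions, not taken from the abstract.

```shell
# Direct launch of a NAMD binary built on a layer that tolerates
# mpiexec-style startup (node count is illustrative):
mpiexec -n 4 namd2 sim.conf

# Alternatively, have charmrun call mpiexec itself, so the system's
# job-launch mechanism starts the Charm++ processes:
charmrun ++mpiexec ++n 4 namd2 sim.conf
```

The second form keeps charmrun in control of Charm++ process setup while still leveraging the site's native launcher.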
Charm++ supports multi-threaded parallelism within each process, with a
single thread dedicated to communication and the rest for computation.
The optimal balance between thread and process parallelism depends on the
size of the simulation, features used, memory limitations, node count,
and the core count and NUMA structure of each node. It is also important
to enable the Charm++ built-in CPU affinity settings to bind worker and
communication threads appropriately to processor cores. Appropriate
execution configuration and CPU affinity settings are particularly
non-intuitive on Intel KNL processors due to their high core counts and
flat NUMA hierarchy. Rules and heuristics for choosing default settings
provide good performance in most cases and dramatically reduce the
search space when optimizing for a specific simulation on a particular
machine. Upcoming Charm++ and NAMD releases will simplify and automate
launch configuration and affinity settings.
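A concrete execution configuration combining process/thread balance with the built-in affinity flags might look like the sketch below. The node layout (2 sockets, 12 cores each) and the specific core maps are assumptions chosen for illustration; the flags +setcpuaffinity, +pemap, and +commap are Charm++'s CPU affinity options.

```shell
# Hypothetical 2-socket, 24-core node: one process per NUMA domain,
# 11 worker threads plus 1 dedicated communication thread per process.
charmrun ++n 2 ++ppn 11 namd2 +setcpuaffinity \
    +pemap 1-11,13-23 +commap 0,12 sim.conf
```

Here +pemap pins the worker threads to cores within each socket, while +commap reserves one core per socket for the communication thread, keeping each process's threads within a single NUMA domain.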