Presentation
Analyzing the Human Connectome Project Datasets using GPUs: The Anatomy of a Science Engagement
Session
Facilitation: Gateways
Event Type
Technical Paper
Time
Tuesday, July 24, 11:45am - 12pm
Location
Grand Ballroom 4
Description
Research support engagements that facilitate the use of advanced computing resources are crucial for improving scientific outcomes and ensuring the efficient use and operation of those resources. The UAB Visual Brain Core was created to improve the availability of services across campus neuroimaging research groups. High-performance computing support is provided to the core through a collaboration with the campus Research Computing support organization. Opportunities for engagement often arise serendipitously through the ordinary use of these resources.
This presentation shares the experience of improving the performance of a data processing workflow for analysis of the Human Connectome 900 data set. It describes how bottlenecks in the workflow were discovered and resolved, and discusses a series of computational enhancements to the stock FSL BedpostX workflow. These enhancements migrated the workflow from slow serial execution, caused by Slurm scheduler incompatibilities, to eventual execution on GPU resources, reducing the run time from 21 days on a single core to roughly two hours on GPUs.
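For illustration only, the sketch below shows one way such a GPU run might be submitted under Slurm. bedpostx_gpu is FSL's CUDA build of BedpostX; the partition name, module name, subject path, and resource requests are assumptions for this sketch, not the site's actual configuration.

    # Minimal sketch: submit an FSL bedpostx_gpu run for one HCP subject as a
    # Slurm GPU job. Module name, paths, and resource requests are assumptions.
    import subprocess
    from pathlib import Path

    # Hypothetical diffusion directory for one HCP subject
    SUBJECT_DIR = Path("/data/hcp900/100307/T1w/Diffusion")

    job_script = f"""#!/bin/bash
    #SBATCH --job-name=bedpostx-gpu
    #SBATCH --partition=gpu          # hypothetical GPU partition name
    #SBATCH --gres=gpu:1             # request one GPU
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=04:00:00          # generous bound for a run of a few hours

    module load fsl                  # hypothetical module name
    bedpostx_gpu {SUBJECT_DIR}       # CUDA build of FSL's bedpostx
    """

    script_path = Path("bedpostx_gpu.sh")
    script_path.write_text(job_script)

    # Submit the job; sbatch prints the assigned job ID on success.
    subprocess.run(["sbatch", str(script_path)], check=True)

In practice the same pattern would be repeated, or submitted as a job array, across the hundreds of subjects in the data set.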
The presentation concludes with insights on how this workflow contributed a vital use case to the build-out of the cluster with additional GPUs and enhanced network bandwidth. It also explores potential improvements to the distribution of scientific software that would avoid stagnation in site-specific deployment decisions. The discussion highlights the advantages of popular code collaboration sites like GitHub.com for feeding contributions upstream. Additionally, local conventions for building and deploying software are considered as opportunities to coordinate efforts among downstream consumers.
Authors