SUG Talk: Globus’ Rick Wagner On Managing Large-Scale Cosmology Simulations With Parsl And Singularity
TL;DR: Globus continues to enable Big Science; here, containerization via Singularity is shown to add value in delivering scientific results at scale.
SUG Series Introduction
The inaugural meeting of the Singularity User Group (SUG) was held March 12-13, 2019, at the San Diego Supercomputer Center (SDSC). The event attracted diverse representation from the international advanced computing community as conveyed through the post-event press release issued jointly by SDSC and Sylabs.
Over the course of the two-day event, over 20 talks were presented by members of the Singularity user, developer, and provider community. Because SUG generated a significant amount of interest, even from those who were unable to attend, we are sharing online each of the talks presented.
SUG Talk Introduction
Established originally in 1995, the Globus Project pioneers the development and use of leading-edge technologies in Big Science. In this astronomically oriented use case, the Project’s Rick Wagner shares how Singularity has become a critical technology, and is helping to deliver impressive results. The abstract for Rick’s contributed talk entitled Managing large-scale cosmology simulations with Parsl and Singularity is as follows:
In preparation for the Large Synoptic Survey Telescope (LSST), we are working with dark energy researchers to simulate images that are similar to the raw exposures that will be generated from the telescope. To do so, we use the imSim software package (https://github.com/LSSTDESC/imSim) to create images based on catalogs of astronomical objects and by taking into account effects of the atmosphere, optics, and telescope. In order to produce data comparable to what the LSST will create, we must scale the imSim workflow to process tens of thousands of instance catalogs, each containing millions of astronomical objects, and to simulate the output of the LSST’s 189 CCDs, comprising 3.1 gigapixels of imaging data. To address these needs, we have developed a Parsl-based workflow that coordinates the execution of imSim on input instance catalogs and for each sensor. We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. The Parsl workflow is responsible for processing instance catalogs, determining how to pack simulation workloads onto compute nodes, and orchestrating the invocation of imSim in the Singularity containers deployed to each node. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer. The use of Singularity not only enabled efficient scaling and seamless conversion to support other container technologies, but it was also an integral part of our development process. It significantly simplified the complexity of developing and managing the execution of a workflow as part of a multi-institution collaboration, and it furthermore removed much of the difficulty associated with execution on heterogeneous supercomputers.
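To make the abstract's architecture concrete, here is a minimal sketch of the two responsibilities it assigns to the Parsl workflow: building the per-sensor `singularity exec` invocation, and packing per-sensor tasks onto compute nodes. The image name (`imsim.sif`), the script name (`run_imsim.py`), and the command-line flags are illustrative assumptions, not the actual imSim interface; in the real workflow these commands would be issued from Parsl `bash_app` functions rather than returned as plain strings.

```python
import shlex

# Hypothetical container image name, for illustration only.
IMAGE = "imsim.sif"


def imsim_command(catalog: str, sensor: str) -> str:
    """Build the shell command that simulates one sensor of one instance
    catalog inside a Singularity container (script name and flags are
    assumed, not imSim's real CLI)."""
    return (
        f"singularity exec {IMAGE} "
        f"python run_imsim.py "
        f"--instance-catalog {shlex.quote(catalog)} "
        f"--sensor {shlex.quote(sensor)}"
    )


def pack_tasks(tasks, slots_per_node):
    """Greedily group per-sensor tasks into node-sized batches,
    mirroring how the workflow fills each compute node."""
    return [tasks[i:i + slots_per_node]
            for i in range(0, len(tasks), slots_per_node)]
```

For example, simulating one catalog across all 189 CCDs would generate 189 such commands, which `pack_tasks` then groups so each node receives a full batch of sensor simulations.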
Rick’s talk from SUG can be found below and here. Enjoy!