Docker / OCI Compatibility in SingularityCE 3.9.0

By Staff

Oct 20, 2021 | Blog, News

The Open Container Initiative (OCI) image format, which grew out of Docker, is the dominant standard for cloud-focused containerized software deployments. While it’s possible to work with SingularityCE and never touch a Docker or OCI container, it’s likely you’ll want to run containers distributed through the large public OCI registries such as Docker Hub and Quay.io.

Since Singularity 2.2, the ability to directly run, or build from, most OCI containers has been a key feature of the project. SingularityCE 3.9 continues the trend of providing simpler and more complete compatibility with Docker. We’ve also heavily rewritten the “Support for Docker and OCI Containers” section of our user guide, to explain more clearly the caveats and workarounds that are sometimes required.

A new --compat flag has been added to turn on the set of options that are most frequently needed to improve compatibility with Docker images. Because SingularityCE has a different security model than Docker / OCI runtimes, not all concepts map exactly. For convenience, we also mount host directories into the container by default, and use less isolation from host devices and networking. This approach has many advantages, and makes it easier to run many parallel workloads on HPC systems, for example, but it can affect the execution of some Docker containers. The --compat flag is a quick and easy way to work around most of the differences between SingularityCE and Docker, when needed.
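
For example, if a container from Docker Hub misbehaves because it expects an isolated, writable, Docker-like environment, --compat can simply be added to the run command (the image below is only illustrative):

    singularity run --compat docker://alpine:latest

Under the hood, --compat is roughly a shorthand for a combination of existing isolation options such as --containall and --writable-tmpfs, so the individual flags remain available if you need finer control.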

We’ve also introduced a --mount flag, which accepts Docker-style syntax to bind mount files and directories into a container. This makes it easier to transition to running containers with SingularityCE, and permits mounting paths that contain certain special characters (not possible with --bind). We’ll be working to support other, non-bind, mount types in the future, for additional compatibility.
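
As a sketch, --mount takes the same key=value CSV form as docker run --mount; the paths below are placeholders:

    singularity exec \
        --mount type=bind,source=/data/inputs,destination=/inputs \
        docker://alpine:latest ls /inputs

Because each value is a CSV field, a source path containing a comma or colon can be quoted, rather than clashing with the separator characters used by --bind.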

For users with NVIDIA GPUs, we’ve provided experimental new functionality to configure the container using the nvidia-container-cli tool from NVIDIA’s libnvidia-container project. This is the same tool that is used to set up GPUs with Docker and other OCI runtimes. Our next blog post will cover this new functionality, and the workflows it permits, in depth.
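
As a quick preview, and assuming libnvidia-container and its nvidia-container-cli binary are installed on the host, the experimental behavior sits behind a new --nvccli flag used alongside the existing --nv option (image name and tag illustrative):

    singularity exec --nv --nvccli docker://nvidia/cuda:11.4.2-base nvidia-smi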

Finally, we’ve fixed some sharp edges so that Dockerfile LABELs are correctly inherited in a singularity build, and the %files section of our definition files uses the same pattern matching as Dockerfile COPY.
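
For instance, a minimal definition file along these lines now copies files matching a glob in %files just as COPY would in a Dockerfile, and any LABEL set in the base image is carried through to the built SIF (paths and image are illustrative):

    Bootstrap: docker
    From: alpine:latest

    %files
        src/*.py /opt/app/

The inherited labels can then be viewed with singularity inspect --labels on the resulting image.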

We’re looking forward to continuing this work as we move toward SingularityCE 4.0. Let us know what’s most important to you via our open SingularityCE roadmap, or our community channels.
