Limiting container resources with cgroups¶
Starting in Singularity 3.0, users have the ability to limit container resources using cgroups.
Singularity cgroups support can be configured and utilized via a TOML file. An
example file is typically installed at
/usr/local/etc/singularity/cgroups/cgroups.toml. You can copy and edit this
file to suit your needs. Then when you need to limit your container resources,
apply the settings in the TOML file by using the path as an argument to the
--apply-cgroups option like so:
$ sudo singularity shell --apply-cgroups /path/to/cgroups.toml my_container.sif
The --apply-cgroups option can only be used with root privileges.
To limit the amount of memory that your container uses to 500MB (524288000
bytes), follow this example. First, create a cgroups.toml file like this and
save it in your home directory:

[memory]
    limit = 524288000
Start your container like so:
$ sudo singularity instance start --apply-cgroups /home/$USER/cgroups.toml \
    my_container.sif instance1
After that, you can verify that the container is only using 500MB of memory.
(This example assumes that instance1 is the only running instance.)

$ cat /sys/fs/cgroup/memory/singularity/*/memory.limit_in_bytes
524288000
After you are finished with this example, be sure to clean up your instance with the following command.
$ sudo singularity instance stop instance1
Similarly, the remaining examples can be tested by starting instances and
examining the contents of the appropriate subdirectories of /sys/fs/cgroup.
Limiting CPU¶
Limit CPU resources using one of the following strategies. The [cpu] section
of the configuration file can limit CPU resources with the following:
You can enforce hard limits on the CPU cycles a cgroup can consume, so
contained processes can’t use more than the amount of CPU time set for the
cgroup. The quota allows you to configure the amount of CPU time that a cgroup
can use per period. The default period is 100ms (100000us). So if you want to
limit the amount of CPU time to 20ms during a period of 100ms:
[cpu]
    period = 100000
    quota = 20000
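Assuming the same cgroup v1 hierarchy under /sys/fs/cgroup as in the memory
example above, a quick sketch to verify the applied quota and period after
starting an instance with this file:

$ cat /sys/fs/cgroup/cpu/singularity/*/cpu.cfs_quota_us
20000
$ cat /sys/fs/cgroup/cpu/singularity/*/cpu.cfs_period_us
100000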
You can also restrict access to specific CPUs and associated memory nodes by
using the cpus and mems fields:

[cpu]
    cpus = "0-1"
    mems = "0-1"
Here the container has access only to CPU 0 and CPU 1.
It’s important to set identical values for both cpus and mems.
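Similarly, assuming the cgroup v1 layout from the memory example, the cpuset
controller should reflect these values:

$ cat /sys/fs/cgroup/cpuset/singularity/*/cpuset.cpus
0-1
$ cat /sys/fs/cgroup/cpuset/singularity/*/cpuset.mems
0-1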
For more information about limiting CPU with cgroups, see the following external links:
- Red Hat resource management guide section 3.2 CPU
- Red Hat resource management guide section 3.4 CPUSET
- Kernel scheduler documentation
Limiting IO¶
You can limit and monitor access to I/O for block devices. Use the
[blockIO] section of the configuration file to do this like so:

[blockIO]
    weight = 1000
    leafWeight = 1000
weight and leafWeight accept values between 10 and 1000.
weight is the default weight of the group on all devices, unless
overridden by a per-device rule.
leafWeight relates to weight for the purpose of deciding how heavily to
weigh tasks in the given cgroup while competing with the cgroup’s child cgroups.
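On kernels using the CFQ scheduler, and assuming the same cgroup v1 layout
as in the memory example, a sketch to confirm the default weights:

$ cat /sys/fs/cgroup/blkio/singularity/*/blkio.weight
1000
$ cat /sys/fs/cgroup/blkio/singularity/*/blkio.leaf_weight
1000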
To override weight and leafWeight for specific block devices (in this example
major 7, minor 0 and 1, i.e. the /dev/loop0 and /dev/loop1 loop devices), you
would do something like this:
[blockIO]
    [[blockIO.weightDevice]]
        major = 7
        minor = 0
        weight = 100
        leafWeight = 50
    [[blockIO.weightDevice]]
        major = 7
        minor = 1
        weight = 100
        leafWeight = 50
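Per-device overrides can be checked the same way; the kernel reports them in
"major:minor value" form (a sketch, under the same layout assumptions as above):

$ cat /sys/fs/cgroup/blkio/singularity/*/blkio.weight_device
7:0 100
7:1 100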
You could limit the IO read/write rate to 16MB per second for the /dev/loop0
block device with the following configuration. The rate is specified in bytes
per second.
[blockIO]
    [[blockIO.throttleReadBpsDevice]]
        major = 7
        minor = 0
        rate = 16777216
    [[blockIO.throttleWriteBpsDevice]]
        major = 7
        minor = 0
        rate = 16777216
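Assuming the same cgroup v1 layout, the throttle settings can be read back
from the blkio controller (a sketch):

$ cat /sys/fs/cgroup/blkio/singularity/*/blkio.throttle.read_bps_device
7:0 16777216
$ cat /sys/fs/cgroup/blkio/singularity/*/blkio.throttle.write_bps_device
7:0 16777216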
To limit the IO read/write rate to 1000 IO operations per second (IOPS) on the
/dev/loop0 block device, you can do the following. The rate is specified in IOPS.
[blockIO]
    [[blockIO.throttleReadIOPSDevice]]
        major = 7
        minor = 0
        rate = 1000
    [[blockIO.throttleWriteIOPSDevice]]
        major = 7
        minor = 0
        rate = 1000
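The IOPS limits appear in the corresponding throttle files (same assumptions
as the sketch above):

$ cat /sys/fs/cgroup/blkio/singularity/*/blkio.throttle.read_iops_device
7:0 1000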
For more information about limiting IO, see the following external links:
- Red Hat resource management guide section 3.1 blkio
- Kernel block IO controller documentation
- Kernel CFQ scheduler documentation
Limiting device access¶
You can limit read, write, or creation of devices. In this example, a container
is configured to only be able to read from or write to /dev/null (character
device, major 1, minor 3):
[[devices]]
    access = "rwm"
    allow = false

[[devices]]
    access = "rw"
    allow = true
    major = 1
    minor = 3
    type = "c"
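As a sketch of how this behaves (assuming an instance named instance1 started
with this file via --apply-cgroups), reads and writes to /dev/null succeed
while access to any other device node is denied:

$ sudo singularity exec instance://instance1 cat /dev/null    # allowed
$ sudo singularity exec instance://instance1 cat /dev/zero    # should fail: Operation not permitted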
For more information on limiting access to devices, see the Red Hat resource management guide section 3.5 DEVICES.