Ever wanted to run a container, or pod, as a systemd service on Linux? This allows the container to be started automatically and even restarted on failure. I've got a container running like this right now thanks to Podman, which makes this incredibly easy and a bit more secure. If managing your containers as services is something you're interested in, then this tutorial is for you.
This tutorial lays out the steps to manage a Podman container as a systemd service. A UniFi Controller container, derived from a Kubernetes YAML file, will be used as an example. Steps are provided for both rootless and root configurations. This tutorial continues the series on Podman. Previous tutorials include Podman Compose, Translate Docker Compose to Kubernetes With Podman, and Automatically Update Podman Containers. The target system is elementary OS 5.1, based on Ubuntu 18.04. You’ll need to have Podman installed, of course. To install Podman on an Ubuntu system, follow the instructions in Install Podman on Ubuntu. You are expected to be familiar with Linux containers, Podman, the command-line, the Kubernetes configuration format, Git, systemd, and anything else I forgot to mention…
Clone the repository with the Kubernetes YAML file for the UniFi Controller.
git clone git@github.com:jwillikers/unifi-controller.git ~/Projects/unifi-controller
Provide the generated Kubernetes YAML to podman-play-kube(1) to create and launch the pod.
podman play kube ~/Projects/unifi-controller/unifi-controller.yml
sudo podman play kube ~/Projects/unifi-controller/unifi-controller.yml
Change into the directory where you want the systemd unit files to be placed. Below are common locations for these files.
cd ~/.config/systemd/user
cd /etc/systemd/system
Generate the systemd service unit files using podman-generate-systemd(1).
The following commands use a couple of extra options.
By default, podman-generate-systemd outputs the content of the units to the console; the --files option places the output in the appropriate files instead.
In this particular situation, it creates a service unit file for the pod and a service unit file for the single container.
The --name option uses the names of the pod and containers instead of their hash IDs.
The --new option causes the pods and containers to be created each time the service starts or restarts.
When running containers as systemd services, this option is required for Podman's auto-update functionality to work.
For details on auto-update, check out Automatically Update Podman Containers.
The last argument to the command is the pod's identifier.
podman generate systemd --files --name --new unifi-controller
sudo podman generate systemd --files --name --new unifi-controller
Enable the systemd service. For the rootless configuration, the service will start upon the user logging in. For the root configuration, the service will be activated on boot.
systemctl --user enable --now pod-unifi-controller.service
Created symlink /home/jordan/.config/systemd/user/multi-user.target.wants/pod-unifi-controller.service → /home/jordan/.config/systemd/user/pod-unifi-controller.service.
Created symlink /home/jordan/.config/systemd/user/default.target.wants/pod-unifi-controller.service → /home/jordan/.config/systemd/user/pod-unifi-controller.service.
sudo systemctl enable --now pod-unifi-controller.service
Created symlink /etc/systemd/system/multi-user.target.wants/pod-unifi-controller.service → /etc/systemd/system/pod-unifi-controller.service.
Created symlink /etc/systemd/system/default.target.wants/pod-unifi-controller.service → /etc/systemd/system/pod-unifi-controller.service.
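By default, the rootless service only runs while you're logged in. If you'd like it to start at boot and persist without a login session, enable lingering for your user with loginctl(1).
loginctl enable-linger $USER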
Access the controller’s web console at https://127.0.0.1:8443/.
open https://127.0.0.1:8443
xdg-open https://127.0.0.1:8443
On Red Hat’s Enable Sysadmin publication, the article Improved systemd integration with Podman 2.0 delves into Podman’s systemd and auto-update functionality.
An article on Red Hat’s Developer Blog, How to run systemd in a container (https://developers.redhat.com/blog/2019/04/24/how-to-run-systemd-in-a-container/), describes how to run systemd from within containers.
Toolbox is a simplified wrapper for using Podman containers for development.
Given the simplicity of managing Podman containers as systemd services, why not use them yourself if they fit your use case?
Podman can automatically update your containers and hopefully make your life easier at the same time. Setting this up is actually pretty straightforward. Read on to learn how.
This tutorial will guide you through the steps to configure automatic updates for a Podman container. Specifically, the tutorial will walk through automating updates for a UniFi Controller container using a Kubernetes YAML file. It’s a continuation of the Podman Compose and Translate Docker Compose to Kubernetes With Podman posts. The target system is Ubuntu 18.04. You’ll need to have Podman installed, of course. You should also be familiar with Linux containers, Podman, the command-line, the Kubernetes configuration format, Git, and systemd.
Clone the GitHub repository with the Kubernetes configuration file for the UniFi controller.
git clone git@github.com:jwillikers/unifi-controller.git ~/Projects/unifi-controller
Inspect the YAML file.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-13T17:21:54Z"
  labels:
    app: unifi-controller
    io.containers.autoupdate: image (1)
  name: unifi-controller
1 | Add the label io.containers.autoupdate and set it to image to enable automatic updates for the containers herein. |
When using the podman create command, the --label or -l flag can be followed by the label "io.containers.autoupdate=image" to enable auto-updates for the container.
The image name must be fully qualified for auto-update to update the image. |
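For illustration, a hypothetical standalone container with auto-updates enabled might be created like this (the container name unifi is made up for the example; the fully qualified image is the one used throughout this series).
podman create --label "io.containers.autoupdate=image" --name unifi ghcr.io/linuxserver/unifi-controller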
Provide the generated Kubernetes YAML to podman-play-kube(1) to create and launch the pod.
podman play kube ~/Projects/unifi-controller/unifi-controller.yml
Check the labels attached to the UniFi Controller container with podman ps.
podman ps -a --filter name=unifi-controller --format "{{.Names}} {{.Labels}}"
unifi-controller_unifi-controller_1 map[PODMAN_SYSTEMD_UNIT:container-unifi-controller_unifi-controller_1.service build_version:Linuxserver.io version:- 6.0.45-ls100 Build-date:- 2021-03-02T04:05:16+00:00 com.docker.compose.container-number:1 com.docker.compose.service:unifi-controller io.containers.autoupdate:image io.podman.compose.config-hash:123 io.podman.compose.project:unifi-controller io.podman.compose.version:0.0.1 maintainer:aptalca]
There are quite a few labels present, but among them is the correct label, io.containers.autoupdate:image.
This confirms that the container is labelled correctly.
Enable Podman’s auto-update systemd timer. This tutorial uses the rootless runtime, but the necessary command is also provided for enabling the auto-update timer for containers run as root.
systemctl --user enable --now podman-auto-update.timer
sudo systemctl enable --now podman-auto-update.timer
When using podman-generate-systemd(1) to create systemd units for a pod, make sure to use the --new flag.
This will create, start, and remove containers as part of the systemd units, which is necessary for applying automatic updates to running containers.
To learn more about running a pod or container as a systemd service, refer to A Podman Pod as a systemd Service.
It’s also possible to trigger auto-updates manually with podman-auto-update(1).
podman auto-update
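Depending on your version of Podman, the --dry-run flag may also be available, which reports which containers have updates pending without changing anything.
podman auto-update --dry-run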
In case you’re interested in accessing the UniFi controller container, the controller’s web console is at https://127.0.0.1:8443/.
open https://127.0.0.1:8443
xdg-open https://127.0.0.1:8443
On Red Hat’s Enable Sysadmin publication, the article Improved systemd integration with Podman 2.0 delves into Podman’s auto-update functionality.
You have learned how to enable automatic updates for Podman containers.
elementary OS 5.1 doesn’t automatically update Flatpak applications. Given the arbitrary appearance of updates, it’s a bit bothersome to be nagged about updates all day. Flatpak doesn’t provide an auto-update mechanism, instead leaving this up to app stores. GNOME Software has had this functionality baked-in since GNOME 3.30, for instance, according to the Phoronix article GNOME Software 3.30 Will Automatically Update Flatpaks By Default. Since I don’t want to have multiple app stores on my machine, I opted for using systemd to update Flatpaks.
The instructions here describe how to create systemd services and timers to automate updating both user and system Flatpak installations. The system systemd units will only update the system Flatpaks, whereas the user systemd units will update both the user’s Flatpaks and the system’s. In most cases, having both user and system services to update Flatpaks is unnecessary. The system systemd units are handy for the default Flatpak behavior, which installs Flatpaks system-wide. The user systemd units are great for users who opt to install Flatpaks in their user-specific installation, such as Flatpak developers.
The tutorial uses elementary OS 5.1 as a reference operating system, but the instructions are more generally applicable to any Linux system with systemd and Flatpak. I assume you are familiar with these concepts and will keep things brief. Separate instructions are provided for the user and system Flatpak installations. The systemd units here were derived from those provided in marcelpaulo's GitHub comment.
The systemd user unit files here are placed in the directory /etc/systemd/user, which makes them available to every user; the system unit files go in /etc/systemd/system. |
Create the systemd service units to update Flatpaks: update-user-flatpaks.service for the user installation and update-system-flatpaks.service for the system installation.
[Unit]
Description=Update user Flatpaks
[Service]
Type=oneshot
ExecStart=/usr/bin/flatpak update --assumeyes --noninteractive
[Install]
WantedBy=default.target
[Unit]
Description=Update system Flatpaks
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/bin/flatpak update --assumeyes --noninteractive --system
[Install]
WantedBy=multi-user.target
Create the systemd timer units, update-user-flatpaks.timer and update-system-flatpaks.timer, to automate the updates.
[Unit]
Description=Update user Flatpaks daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
[Unit]
Description=Update system Flatpaks daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
Enable and start the systemd timers.
systemctl --user enable --now update-user-flatpaks.timer
Created symlink /home/jordan/.config/systemd/user/timers.target.wants/update-user-flatpaks.timer → /etc/systemd/user/update-user-flatpaks.timer.
sudo systemctl --system enable --now update-system-flatpaks.timer
Created symlink /etc/systemd/system/timers.target.wants/update-system-flatpaks.timer → /etc/systemd/system/update-system-flatpaks.timer.
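To confirm a timer is scheduled, list it with systemctl; shown here for the user timer, the system timer can be checked the same way with sudo and without --user.
systemctl --user list-timers update-user-flatpaks.timer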
You have removed a bit of distraction from your day. With any luck, it wasn’t even too difficult.
Suffering from obsessive updating syndrome? Are you making frequent trips to the App Center or terminal to apply updates? Do update notifications haunt you all day long? If you're on a Debian-based system, unattended upgrades can help.[1]
Debian-based systems such as Ubuntu and elementary OS can use the unattended-upgrades package to automate system updates with Aptitude. The package provides a Python script by the same name. This tutorial provides a quick run-through to install and configure the package for those familiar with Linux, Debian, Aptitude, and the command-line. The tutorial uses elementary OS 5.1 as the reference system. My configuration choices were based on my preferences for a system I use as a general desktop workstation.
This won’t update Flatpak applications for you. To do this, see Automate Flatpak Updates With systemd. |
Install the unattended-upgrades package.
sudo apt -y install unattended-upgrades
Refine the update behavior in the configuration file /etc/apt/apt.conf.d/50unattended-upgrades.
Apply updates from all repositories.
The Unattended-Upgrade::Allowed-Origins block contains the specific repositories from which to update automatically.
Lines are commented with //.
Only Ubuntu repositories are listed in this file, and some of the repositories are commented out.
Uncomment these to use them.
The following example enables "Ubuntu:bionic-updates" and "Ubuntu:bionic-backports", enabling Ubuntu updates and backports.
The lines for security updates were already uncommented, so I left those as they were.
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu:bionic";
        "Ubuntu:bionic-security";
        // Extended Security Maintenance; doesn't necessarily exist for
        // every release and this system may not have it installed, but if
        // available, the policy for updates is such that unattended-upgrades
        // should also install from here by default.
        "UbuntuESMApps:bionic-apps-security";
        "UbuntuESM:bionic-infra-security";
        "Ubuntu:bionic-updates";
        // "Ubuntu:bionic-proposed";
        "Ubuntu:bionic-backports";
};
For my particular use case, I want to allow updates from all repositories I have configured.
I could add these manually to the Allowed-Origins block, but that's more work than I'd like to do.
Instead, my configuration replaces the Allowed-Origins block with an Origins-Pattern block, which allows any origin via the * wildcard.
This is shown in the following snippet.
Unattended-Upgrade::Origins-Pattern {
        "origin=*";
};
Remove unused dependencies.
I don’t want to keep old or unused dependencies around, so I uncommented the Unattended-Upgrade::Remove-Unused-Dependencies line and set it to true.
The included comment is fairly self-explanatory.
// Do automatic removal of new unused dependencies after the upgrade
// (equivalent to apt-get autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "true";
The other options related to removing unused dependencies and kernels are already enabled by default.
Automatically reboot after upgrades when required.
I don’t think anyone appreciates their desktop suddenly rebooting immediately after it applies some updates. I rarely leave my computer on for more than a few hours at a time. However, I figured it’s good to make sure the computer reboots eventually if necessary. The configuration below reboots automatically, but does so at two in the morning to be as unobtrusive as possible. To further reduce the chance of unexpected interruptions, I’ve disallowed rebooting so long as users are logged in.
// Automatically reboot *WITHOUT CONFIRMATION*
// if the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "true";
// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
// Automatically reboot even if users are logged in
Unattended-Upgrade::Automatic-Reboot-WithUsers "false";
Configure Aptitude’s schedule for unattended-upgrades and related functions.
Aptitude has its own scheduling configuration activated by systemd timers, namely apt-daily.timer and apt-daily-upgrade.timer.
Aptitude configuration resides under the /etc/apt directory.
The unattended-upgrades script should be enabled here.
The package update-notifier-common is installed on my elementary OS system, so I simply updated the existing configuration file /etc/apt/apt.conf.d/10periodic with the appropriate settings to enable unattended-upgrades.
Alternatively, you might create a new file with a higher precedence, such as /etc/apt/apt.conf.d/20auto-upgrades, or put the configuration in /etc/apt/apt.conf.
The options shown below use numbers to indicate the frequency, in days, at which the corresponding operation is applied.
Unattended-Upgrade is set to one so that the unattended-upgrades script runs every day.
Similarly, Update-Package-Lists is set to one because the package lists should be updated from their repositories each day.
If the package lists aren't updated automatically, packages won't be upgraded because updates won't be detected, so enabling this is important.
In addition to setting these two variables, I also set AutocleanInterval to automatically clean out the package cache every week.
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Update-Package-Lists "1";
These variables and more are described in detail in the script /usr/lib/apt/apt.systemd.daily.
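You can verify that these timers are active on your system by listing them with systemctl.
systemctl list-timers apt-daily.timer apt-daily-upgrade.timer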
Test the behavior of unattended upgrades by running the script manually with the --dry-run and --debug flags.
sudo unattended-upgrades --dry-run --debug
Monitor unattended upgrades by perusing the log files in /var/log/unattended-upgrades/.
You should know everything you need to get started automating package updates on Debian systems.
Podman supports the Kubernetes YAML format for configuring pods. Unfortunately, I’m coming to the Podman scene from Docker where the Docker Compose format is common. The Docker Compose format isn’t supported by Podman. I don’t really want to invest the time in learning a new configuration file format right now, so what should I do? Use Podman Compose!
This tutorial describes how to use a Docker Compose file with Podman to create a rootless container. It uses the Docker Compose for the UniFi Controller described in the UniFi Controller post. This tutorial targets Ubuntu 18.04, and you should be familiar with Linux Containers, Docker Compose, Podman, Python, and the command-line. You’ll need to have Podman installed on your machine, which can be installed on Ubuntu 18.04 by following the instructions in the post Install Podman on Ubuntu.
Since Podman Compose is a Python tool, install Python 3 and pip.
sudo apt -y install python3 python3-pip
Now using pip, install the latest development version of Podman Compose.
pip3 install --user https://github.com/containers/podman-compose/archive/devel.tar.gz
Add ~/.local/bin to your PATH.
fish_add_path ~/.local/bin
echo "set PATH=$HOME/.local/bin:$PATH" >> ~/.zshrc; source ~/.zshrc
echo "set PATH=$HOME/.local/bin:$PATH" >> ~/.bashrc; source ~/.bashrc
Create a directory for the Docker Compose file.
mkdir -p ~/Projects/unifi-controller
Change to the new directory.
cd ~/Projects/unifi-controller
Create the Docker Compose file.
---
version: "2.1"
services:
  unifi-controller:
    image: ghcr.io/linuxserver/unifi-controller
    environment:
      - MEM_LIMIT=1024M #optional
    volumes:
      - data:/config
    ports:
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
      - 8443:8443
      - 1900:1900/udp #optional
      - 8843:8843 #optional
      - 8880:8880 #optional
      - 6789:6789 #optional
      - 5514:5514/udp #optional
    restart: unless-stopped
    labels:
      io.containers.autoupdate: image (1)
volumes:
  data:
1 | Spoiler! I’ll be describing how to automatically update container images with Podman in an upcoming blog post. |
This Docker Compose file uses the docker-unifi-controller image provided by LinuxServer.io and is very close to the provided Docker Compose file.
It uses a volume to store persistent data.
The volume dubbed data here will use a Podman volume named unifi-controller_data.
From within the project directory, run Podman Compose to create the unifi-controller pod.
Just like when using Docker Compose, the up subcommand creates and starts the container, and the -d flag backgrounds the process.
podman-compose up -d
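Verify that the pod came up with podman-pod-ps(1).
podman pod ps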
Access the controller’s web console at https://127.0.0.1:8443/.
open https://127.0.0.1:8443
xdg-open https://127.0.0.1:8443
If you’d like to learn more about using Podman Compose, check out the article Manage containers with Podman Compose from Fedora Magazine.
That was fast, wasn’t it? Love Podman yet? If you want to simplify your workflow, check out Translate Docker Compose to Kubernetes With Podman.
Podman ships with built-in support for Kubernetes configuration files but not for Docker Compose. As described in Podman Compose, the Podman Compose utility can use Docker Compose files to create Podman containers. However, you might want to migrate to the Kubernetes format, eschewing Podman Compose and Docker Compose entirely. This is what I ended up doing, and I describe the process here.
This tutorial provides the steps necessary to convert a simple Docker Compose file to an equivalent Kubernetes configuration using Podman Compose and Podman. It continues where Podman Compose left off, having created a Podman container from the Docker Compose for the UniFi Controller from the UniFi Controller post. So, complete that tutorial before following the steps below. This tutorial targets Ubuntu 18.04, and you should be familiar with Linux Containers, Docker Compose, Podman, the command-line, and the Kubernetes configuration format.
Change into the directory containing the UniFi Controller’s Docker Compose file.
cd ~/Projects/unifi-controller
Check for the previously created UniFi Controller pod with podman-pod-ps(1).
podman pod ps
POD ID        NAME               STATUS    CREATED       INFRA ID      # OF CONTAINERS
241f0bf222a3  unifi-controller   Running   2 hours ago   d5eaaf6d5625  2
Okay, it’s present and accounted for!
To generate the Kubernetes configuration from a Podman container, use podman-generate-kube(1).
Here I output the configuration to the file unifi-controller.yml using the -f flag.
The -s flag produces the necessary network service configuration.
podman generate kube -s -f unifi-controller.yml unifi-controller
Examine the generated YAML file, reproduced below.
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.0.1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-14T15:41:03Z"
  labels:
    app: unifi-controller
  name: unifi-controller
spec:
  containers:
  - command:
    - /init
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: container
      value: podman
    - name: HOME
      value: /root
    - name: LANGUAGE
      value: en_US.UTF-8
    - name: LANG
      value: en_US.UTF-8
    - name: MEM_LIMIT
      value: 1024M
    image: ghcr.io/linuxserver/unifi-controller
    name: unifi-controllerunifi-controller1
    ports:
    - containerPort: 6789
      hostPort: 6789
      protocol: TCP
    - containerPort: 3478
      hostPort: 3478
      protocol: UDP
    - containerPort: 5514
      hostPort: 5514
      protocol: UDP
    - containerPort: 8880
      hostPort: 8880
      protocol: TCP
    - containerPort: 8080
      hostPort: 8080
      protocol: TCP
    - containerPort: 8443
      hostPort: 8443
      protocol: TCP
    - containerPort: 10001
      hostPort: 10001
      protocol: UDP
    - containerPort: 8843
      hostPort: 8843
      protocol: TCP
    - containerPort: 1900
      hostPort: 1900
      protocol: UDP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /usr/lib/unifi
  dnsConfig: {}
  restartPolicy: Never
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-03-14T15:41:03Z"
  labels:
    app: unifi-controller
  name: unifi-controller
spec:
  ports:
  - name: "6789"
    nodePort: 32062
    port: 6789
    protocol: TCP
    targetPort: 0
  - name: "3478"
    nodePort: 32030
    port: 3478
    protocol: UDP
    targetPort: 0
  - name: "5514"
    nodePort: 30747
    port: 5514
    protocol: UDP
    targetPort: 0
  - name: "8880"
    nodePort: 30295
    port: 8880
    protocol: TCP
    targetPort: 0
  - name: "8080"
    nodePort: 32396
    port: 8080
    protocol: TCP
    targetPort: 0
  - name: "8443"
    nodePort: 32319
    port: 8443
    protocol: TCP
    targetPort: 0
  - name: "10001"
    nodePort: 30786
    port: 10001
    protocol: UDP
    targetPort: 0
  - name: "8843"
    nodePort: 31695
    port: 8843
    protocol: TCP
    targetPort: 0
  - name: "1900"
    nodePort: 31076
    port: 1900
    protocol: UDP
    targetPort: 0
  selector:
    app: unifi-controller
  type: NodePort
status:
  loadBalancer: {}
This generated file warrants some additional attention. Most importantly, the generated Kubernetes configuration is conspicuously lacking any volumes.
Add a section for an associated named volume that will hold the persistent data.
In the Docker Compose file, a volume was created like so.
version: "2.1"
services:
unifi-controller:
...
volumes:
- data:/config (1)
...
volumes:
data: (2)
1 | Associate the unifi-controller with the volume dubbed data which is mounted at /config inside the container. |
2 | Declare the named volume data which will be created automatically if it doesn’t exist. |
The way to accomplish the same behavior in the Kubernetes YAML is to use a Persistent Volume Claim. Podman has recently added support for using Persistent Volume Claims to associate Podman containers with named Podman volumes. See Podman pull request #8497 for details. This wasn’t in the generated YAML because the functionality to generate the corresponding YAML is still outstanding per Podman issue #5788.
For the time being, we’ll just have to add this manually.
spec:
  containers:
  - command:
    - /init
    ...
    volumeMounts: (1)
    - mountPath: /config
      name: unifi-data
  volumes:
  - name: unifi-data (2)
    persistentVolumeClaim:
      claimName: unifi-controller-data
1 | Mount the volume dubbed unifi-data at /config inside the container. |
2 | Declare the Persistent Volume Claim, unifi-data, using the claim name unifi-controller-data. Podman associates the claim name with the name of the Podman named volume to use for this particular pod. |
In an attempt to preserve what little sanity remains in my possession in this moment, I named the volume using |
Optionally, you can remove some of the environment variable cruft in the env section.
I reduced this to just the values below.
env:
- name: container
  value: podman
- name: MEM_LIMIT
  value: 1024M
If you want to allow automatic updates of the image, add the appropriate label.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-13T17:21:54Z"
  labels:
    app: unifi-controller
    io.containers.autoupdate: image (1)
  name: unifi-controller
1 | Add the label io.containers.autoupdate and set it to image to enable automatic updates for the containers herein. |
This is a bit of a tease for an upcoming blog post which will describe this in more detail. You’ll need to make sure that Podman’s auto-update systemd timer is enabled. Details forthcoming.
Before starting this pod up, use podman-compose to destroy the existing unifi-controller pod.
podman-compose down
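Taking the pod down removes its containers, but the named volume holding the controller's data should be left intact; you can confirm it's still there by listing Podman's volumes.
podman volume ls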
Provide the generated Kubernetes YAML to podman-play-kube(1) to create and launch the pod.
podman play kube ~/Projects/unifi-controller/unifi-controller.yml
Access the controller’s web console at https://127.0.0.1:8443/.
open https://127.0.0.1:8443
xdg-open https://127.0.0.1:8443
I have a GitHub repository for this Kubernetes configuration file which you might find helpful. Red Hat has several blog posts related to Podman and Kubernetes YAML, including Podman can now ease the transition to Kubernetes and CRI-O, From Docker Compose to Kubernetes with Podman, and The podman play kube command now supports deployments.
You should now have a better idea of how the Docker Compose format translates to the Kubernetes format plus how to get the conversion started with Podman and Podman Compose. This also sets the stage for transitioning to using Kubernetes for managing container deployments. Hopefully you’ve found this post helpful. Posts on automatic image updates and setting up a Podman container as a systemd service to follow.
You probably want to take advantage of the data integrity checking offered by Btrfs. Btrfs calculates checksums for all data written to disk. These checksums are used to verify the data hasn’t been unduly altered. While data is verified every time it is read, what about the data that isn’t read often? How long may bit rot go unnoticed in that case? That’s the crux of this blog post which will explain how to best preserve your data on Btrfs and detect corruption early.
To scrub your filesystem is to have all the data read from disk and validated against the stored checksums. This detects corrupt data. When coupled with redundancy, such as a RAID configuration, self-healing fully restores the damaged data on the disk. If you don’t use redundancy, then the scrub will alert you to the corruption so that you can restore the data manually from backups. Both Btrfs and ZFS handle scrubs in this manner.
To scrub a Btrfs filesystem, use btrfs-scrub(8); in case you're interested, the equivalent ZFS command is zpool-scrub(8). Both of them also offer ways to cancel, resume, and monitor scrubs. Btrfs scrubs an entire filesystem at a time, identified by a device or any directory's path on the target filesystem. I'm not exactly sure why it takes a directory path anywhere on the filesystem since that seems a bit arbitrary. You should probably use either a mount point or device path to make the intended target clear.
Even if the |
To initiate a scrub in the background, use the start subcommand followed by the path or device. Here I initiate a scrub on the device on which my root filesystem resides.
sudo btrfs scrub start (df --output=source / | tail -n 1)
scrub started on /dev/mapper/sda2_crypt, fsid 175792e7-4167-40d1-aebc-78b948d6d378 (pid=10555)
To check on the status of a scrub, use the status subcommand and the path or device. Check the status of the previous scrub like so.
sudo btrfs scrub status (df --output=source / | tail -n 1)
scrub status for 175792e7-4167-40d1-aebc-78b948d6d378
scrub started at Fri Mar 5 06:07:42 2021, running for 00:01:25
total bytes scrubbed: 26.19GiB with 0 errors
In many circumstances, you might want the scrub to block and return once it finishes.
This is ideal for people like me who don't want to repeatedly type a status command, and it's ideal for running the scrub as a command in systemd.
Use the -B flag to scrub in the foreground.
This command scrubs my boot partition and returns once the scrub is complete.
sudo btrfs scrub start -B /boot
scrub done for 264b42a6-a09c-40cc-b754-88926d43b395
scrub started at Fri Mar 5 06:13:23 2021 and finished after 00:00:01
total bytes scrubbed: 159.55MiB with 0 errors
That didn’t take long! There are also subcommands to cancel and resume scrubs as needed; cancelling effectively pauses a scrub, and resume picks it back up where it left off, as shown below.
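For example, on the root filesystem:
sudo btrfs scrub cancel /
sudo btrfs scrub resume /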
Scheduling regular scrubs is a necessary component of proper maintenance. You can run scrubs manually or automate the process, yet it's critical that you monitor the results either way. If you go to the trouble to automate your scrubs, you'll want to make sure to regularly check the results. Ideally, you'd use something like Nagios (www.nagios.org) for monitoring this aspect of your systems.
Don’t rely on alerts whether that is through email or desktop notifications. If they fail silently, you won’t realize when something has gone horribly wrong. Set aside time regularly to check your systems' status and health. |
Arch Linux provides a handy systemd.service and systemd.timer to automate scrubs.
The Btrfs maintenance toolbox provides similar functionality.
We’ll take a look at the instantiable systemd units provided by Arch Linux, which make scheduling regular scrubs a breeze.
The Arch Linux Wiki’s Btrfs Scrub section has a subsection on these systemd units, Start with a service or timer.
The systemd units here should be dropped in the standard system directory /etc/systemd/system.
Below is the Arch Linux systemd Btrfs scrub service, btrfs-scrub@.service.
[Unit]
Description=Btrfs scrub on %f
ConditionPathIsMountPoint=%f
RequiresMountsFor=%f
[Service]
Nice=19
IOSchedulingClass=idle
KillSignal=SIGINT
ExecStart=/usr/bin/btrfs scrub start -B %f
This systemd.service is an instantiated service which expects a properly escaped path between the @ and the .service extension.
systemd uses special escaping rules to map filesystem paths to unit file names.
The systemd-escape(1) tool makes it quite easy to convert a given path.
With ConditionPathIsMountPoint, this service requires that the path used to instantiate it is indeed a mount point.
The %f specifier represents the unescaped path used to instantiate this systemd unit.
Similarly, the %i specifier is the escaped version of the path used to instantiate this unit, that is, the string between @ and .service when starting the unit.
RequiresMountsFor ensures that any mount points on the given path are mounted before executing the unit.
One might opt to use BindsTo and After instead of RequiresMountsFor to define a stronger relationship to the systemd.mount unit responsible for mounting the filesystem at the given mount point.
systemd mount units are usually generated automatically from entries in /etc/fstab.
For this dependency relationship to work, a corresponding systemd mount unit needs to exist.
You'll want the filesystem you're scrubbing to have an entry in fstab or otherwise provide the mount unit in some other way.
BindsTo requires that the filesystem at the mount point be available the entire time this unit is running.
If it becomes unavailable for some reason, the mount unit fails and the scrub service is killed along with it.
The After keyword requires that the target be mounted before this service runs.
Both of these would be set to %i.mount, the name of the corresponding systemd mount unit.
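To see the escaping in action, pass a path to systemd-escape; here a hypothetical mount point is converted to its unit-name form.
systemd-escape -p /var/lib/machines
var-lib-machines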
The Nice directive sets the scheduling priority to the lowest possible value, 19, giving the scrub a very low priority to avoid hogging the system's CPU time.
The IOSchedulingClass directive is set to idle, which effectively means that the IO activity of the process shouldn't impact normal system activity: the scrub will only use the disk when no other programs are using it.
KillSignal sets the signal used to kill the process to SIGINT, i.e. Ctrl-C.
Finally, ExecStart executes the scrub command on the unescaped path used to instantiate the service, using -B so the process stays in the foreground rather than immediately returning.
The systemctl(1) command handles interacting with systemd services and units.
To start a scrub directly with the systemd service, start the systemd unit with systemctl start.
Here, I start the unit on the root path of the filesystem, which is converted by systemd to -.
sudo systemctl start btrfs-scrub@(systemd-escape -p /).service
You can then check the status of the systemd service with systemctl status as follows.
sudo systemctl status btrfs-scrub@(systemd-escape -p /).service
● btrfs-scrub@-.service - Btrfs scrub on /
Loaded: loaded (/etc/systemd/system/btrfs-scrub@.service; static; vendor preset: enabled)
Active: inactive (dead)
Below is the Arch Linux systemd Btrfs scrub timer, albeit with a small modification on my part.
The systemd.timer runs on the first and fifteenth of every month instead of only once a month.
Weekly is also a good option, which can be configured by setting OnCalendar to weekly.
[Unit]
Description=Btrfs scrub on %f twice per month
[Timer]
OnCalendar=*-*-1,15
AccuracySec=1d
RandomizedDelaySec=1w
Persistent=true
[Install]
WantedBy=timers.target
The Persistent keyword ensures the service runs even if the timer would have fired previously but the system was not available. If you miss a scrub due to your machine being powered off, the scrub will happen the next time you boot up.
Use systemctl enable to activate the timer.
Here I enable the timer that scrubs the root filesystem so that it activates at boot, while also starting it immediately with --now.
sudo systemctl enable --now btrfs-scrub@(systemd-escape -p /).timer
Created symlink /etc/systemd/system/timers.target.wants/btrfs-scrub@-.timer → /etc/systemd/system/btrfs-scrub@.timer.
As with the service, you can check the status of the systemd timer which is shown here.
sudo systemctl status btrfs-scrub@(systemd-escape -p /).timer
● btrfs-scrub@-.timer - Btrfs scrub on / twice per month
Loaded: loaded (/etc/systemd/system/btrfs-scrub@.timer; indirect; vendor preset: enabled)
That’s a scrub! Hopefully you’ve got some valuable insight into scrubbing and managing scrubs with Btrfs. Happy scrubbing!
So, you’ve got libvirt installed on your Linux box and you’re looking for a simple application for running virtual machines. Look no further than Boxes, so far as it meets your needs, of course. What’s that you ask? What do you need to figure out to run this on a Btrfs filesystem? Well, you’ve come to the right place! This post describes how to install and accommodate Boxes on Btrfs.
This tutorial describes how to install GNOME Boxes on a Btrfs filesystem on elementary OS 5.1 which is based on Ubuntu 18.04. You’ll need to have libvirt installed. Instructions for this are available in the post Install libvirt on elementary OS 5.1, which addresses Btrfs concerns. You should be familiar with installing software on Ubuntu and elementary OS, Flatpak, the command-line, and Btrfs.
For more robust configurations and anything that doesn’t just work in Boxes, try virt-manager. |
Boxes is readily available in two formats: as a Flatpak and as a deb package from Ubuntu’s repositories. You can install it in one or both ways. The Flatpak will receive updates to newer versions, whereas the deb package won’t be updated beyond the minor version provided, currently 3.28. While the Flatpak will be a much newer version, development is still needed for the Flatpak to expose and connect all the necessary system components for virtualization. Some things may not work quite right yet with the Flatpak, but I’ve found it to work well enough.
A Flatpak can be installed system-wide or for an individual user. The instructions below describe both methods.
Add the Flathub remote.
flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
sudo flatpak --system remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
Install the GNOME Boxes Flatpak.
flatpak --user install -y flathub org.gnome.Boxes
sudo flatpak --system install -y flathub org.gnome.Boxes
Install the GNOME Boxes Ubuntu package.
sudo apt -y install gnome-boxes
By default, Boxes uses the copy-on-write qcow2 disk image format. If you use Btrfs on your system like I do, then you’ll want to avoid placing these CoW disk images on a CoW Btrfs filesystem. You’ll probably want to exclude the disk images from Btrfs snapshots as well and opt to manage your disk image snapshots independently using the built-in features of qcow2. In the future, perhaps libvirt will provide a native Btrfs storage pool, making the qcow2 format unnecessary along with these workarounds.
The sections here demonstrate a couple of ways to disable CoW for the disk image directory used by Boxes and how to create a separate subvolume for that directory. The location of the Boxes disk image directory depends on whether it is installed as a Flatpak or a deb package. Refer to Where does Boxes store disk images in the Boxes documentation for more information. Commands are provided for both locations where feasible.
If you snapshot your filesystem, take care to exclude the Boxes virtual disk image directory by making the directory a subvolume. Btrfs subvolumes are automatically excluded from snapshots of their parent subvolumes. Snapshots for virtual disk images should be handled in the disk image itself; snapshot support is provided by the default qcow2 format used by Boxes. Here’s how to create the subvolume.
Delete the current images directory.
rmdir ~/.var/app/org.gnome.Boxes/data/gnome-boxes/images
rmdir ~/.local/share/gnome-boxes/images
Create a subvolume in its place.
btrfs subvolume create ~/.var/app/org.gnome.Boxes/data/gnome-boxes/images
Create subvolume '/home/jordan/.var/app/org.gnome.Boxes/data/gnome-boxes/images'
btrfs subvolume create ~/.local/share/gnome-boxes/images
Create subvolume '/home/jordan/.local/share/gnome-boxes/images'
The two most straightforward ways to disable CoW for a directory, or subvolume, are to use a file attribute or libvirt’s storage pool feature. Use whichever one you prefer.
There’s also the |
The simplest way to disable CoW on a particular directory or file is with chattr(1), as described in Can copy-on-write be turned off for data blocks?.
This makes it easy to disable CoW on the Boxes disk image directory.
To do this, add the no-copy-on-write attribute with the +C option followed by the directory.
Note that the attribute only takes effect for files created after it is set, so apply it while the images directory is still empty.
The following commands disable CoW on Boxes' image directory.
chattr +C ~/.var/app/org.gnome.Boxes/data/gnome-boxes/images
chattr +C ~/.local/share/gnome-boxes/images
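You can verify the attribute took hold with lsattr(1); the C flag should appear in the output.
lsattr -d ~/.var/app/org.gnome.Boxes/data/gnome-boxes/images
lsattr -d ~/.local/share/gnome-boxes/images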
Boxes creates a dedicated libvirt storage pool. libvirt uses the concept of storage pools to abstract the complexities involved in managing the underlying virtual machine disk images in a variety of situations. There’s a bit to it, but I’ll leave out the lengthy explanation for brevity. libvirt has fantastic documentation on its Storage Management if you wish to learn more.
elementary OS 5.1 and Ubuntu 18.04 only ship with access to libvirt 4.0.0, so you’ll need to get a newer version by some external means for this to work. |
CoW can be disabled on the libvirt storage pool by configuring the appropriate storage pool feature. libvirt stores pretty much all configuration in XML files. This is the case for storage pools, and the XML can be viewed and edited with virsh(1). The steps here walk through disabling CoW on the Boxes storage pool.
Find the Boxes storage pool with the pool-list subcommand.
virsh pool-list
 Name          State    Autostart
-------------------------------------------
 default       active   yes
 gnome-boxes   active   yes
libvirt’s default pool is simply called default while Boxes' pool is named gnome-boxes.
To view the current XML configuration for a pool, use the pool-dumpxml subcommand followed by the pool’s name. Here I output the gnome-boxes pool’s XML configuration, where you can verify the path is as expected for the Flatpak.
virsh pool-dumpxml gnome-boxes
<pool type='dir'>
  <name>gnome-boxes</name>
  <uuid>02814071-7a82-4444-80f1-295cfc6f947d</uuid>
  <capacity unit='bytes'>1999372288000</capacity>
  <allocation unit='bytes'>191017480192</allocation>
  <available unit='bytes'>1808354807808</available>
  <source>
  </source>
  <target>
    <path>/home/jordan/.var/app/org.gnome.Boxes/data/gnome-boxes/images</path>
    <permissions>
      <mode>0775</mode>
      <owner>1001</owner>
      <group>1001</group>
    </permissions>
  </target>
</pool>
To edit a pool’s configuration, use the pool-edit subcommand. To modify the Boxes pool, the command would appear as follows.
virsh pool-edit gnome-boxes
To disable CoW, set the cow feature with state='no' in the pool's XML.
The snippet here illustrates the necessary XML.
<features>
  <cow state='no'/>
</features>
For Boxes' storage pool, the resulting XML to disable CoW could appear like so.
<pool type='dir'>
  <name>gnome-boxes</name>
  <uuid>02814071-7a82-4444-80f1-295cfc6f947d</uuid>
  <capacity unit='bytes'>1999372288000</capacity>
  <allocation unit='bytes'>191017480192</allocation>
  <available unit='bytes'>1808354807808</available>
  <features>
    <cow state='no'/>
  </features>
  <source>
  </source>
  <target>
    <path>/home/jordan/.var/app/org.gnome.Boxes/data/gnome-boxes/images</path>
    <permissions>
      <mode>0775</mode>
      <owner>1001</owner>
      <group>1001</group>
    </permissions>
  </target>
</pool>
That should be everything you need to get started with GNOME Boxes on a Btrfs filesystem. Enjoy that simple virtualization.
If you want to run virtual machines on Linux, chances are you’re going to use libvirt. I make use of it all the time, especially for testing these blog posts in a clean environment. libvirt provides a common interface around various underlying tools for virtual machine management. It not only offers features for guest management but for networking and storage management as well. Its standard XML schema also makes for a powerful and versatile configuration format. On Linux, libvirt typically uses KVM, the virtualization layer in the kernel, and, in userspace, QEMU, a generic machine emulator and virtualizer.
This tutorial provides the necessary steps to verify your system supports hardware virtualization and install libvirt on elementary OS 5.1. Most of these steps are the same for Ubuntu 18.04. This tutorial takes into account special considerations for systems using the Btrfs filesystem. There is also a brief section on installing the graphical user interface for libvirt, virt-manager. It is assumed that you are familiar with installing software on Ubuntu, using the command-line, and the Btrfs filesystem.
Check that the system supports hardware virtualization.
egrep -c '(vmx|svm)' /proc/cpuinfo
8
If the output is not zero, then your CPU supports virtualization.
Install the tool for checking that your CPU is compatible with KVM.
sudo apt -y install cpu-checker
Verify that the system supports KVM.
kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
If all checks passed, then you should be able to continue installation of libvirt without issue. Otherwise, you’d better switch to some compatible hardware before proceeding.
If you want to get a more up-to-date virtualization stack, add the virtualization PPA to your system.
The software-properties-common package includes a command for easily adding PPAs.
sudo apt -y install software-properties-common
Add the virtualization PPA.
sudo add-apt-repository -uy ppa:jacob/virtualisation
Install libvirt.
sudo apt -y install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
Add the current user to the kvm and libvirt groups.
sudo usermod -aG kvm,libvirt $USER
Reload the current user’s group assignments.
newgrp -
On elementary OS 5.1, there’s a bit of a glitch after installing libvirt: a new libvirt-qemu user appears as a logon option in the Greeter. This isn’t supposed to happen, but luckily there’s a workaround. The steps here hide the libvirt-qemu login in the Greeter. They come from this solution on Stack Overflow.
Set the libvirt-qemu user account as a system account for the accountsservices package to hide it in the login menu.
printf "[User]\nSystemAccount=true\n" \
| sudo tee /var/lib/AccountsService/users/libvirt-qemu
Restart the accounts service.
sudo systemctl restart accounts-daemon.service
If you use Btrfs on your system like I do, then you’ll want to avoid CoW on CoW when using virtual machine disk images. Using the default CoW qcow2 format for virtual disk images on top of a Btrfs filesystem is asking for trouble. This section demonstrates the various ways of disabling CoW for virtual disk images on Btrfs filesystems. If you snapshot your filesystem, take care to place virtual disk images in a subvolume that is excluded from snapshots. Snapshots for virtual disk images should be handled in the disk image itself as is the case with the qcow2 format. At least, that’s the way until a Btrfs storage driver appears for libvirt. I can hope.
When creating a qcow2 image directly with qemu-img(1), the nocow option can be used to disable CoW for that file.
The following command creates a 25 gigabyte qcow2 image named my-vm-image.qcow2 with CoW disabled.
qemu-img create -f qcow2 -o nocow=on my-vm-image.qcow2 25G
In libvirt 6.6.0, Storage Pool Features were introduced, including the cow feature. This version of libvirt disabled CoW by default on Btrfs filesystems. That default behavior was quickly rescinded in libvirt 6.7.0, which re-enabled CoW by default. The change leaves the decision to disable CoW in the hands of system administrators. If you're lucky enough to be using libvirt 6.6.0 or newer, you can take advantage of this feature.
elementary OS 5.1 and Ubuntu 18.04 only ship with access to libvirt 4.0.0. Even if you use the virtualization PPA, it only goes up to version 4.7.0 for Ubuntu 18.04. You’ll need to get a newer version by some external means or use a newer version of Ubuntu for this to work. |
libvirt uses the concept of storage pools to abstract the complexities involved in managing the underlying virtual machine disk images in a variety of situations. I won’t delve into the details here. Refer to Storage Management for more information. For the purposes of this post, you should know that libvirt’s default directory for disk images is its default storage pool. This pool is a simple Directory pool. libvirt stores pretty much all configuration in XML files. This is the case for storage pools, and the XML can be viewed and edited with virsh(1). The steps here walk through disabling CoW on the default storage pool.
List storage pools with the pool-list subcommand. The default pool is just called default. No surprises here.
virsh pool-list
 Name      State    Autostart
-------------------------------------------
 default   active   yes
To simply view the XML, use the pool-dumpxml subcommand followed by the pool's name.
Here I output the default pool's XML configuration, where you can see that the path is indeed /var/lib/libvirt/images.
virsh pool-dumpxml default
<pool type='dir'>
  <name>default</name>
  <uuid>4f779eae-e312-4e4d-bf9f-fafe0e334f63</uuid>
  <capacity unit='bytes'>1999372288000</capacity>
  <allocation unit='bytes'>191017480192</allocation>
  <available unit='bytes'>1808354807808</available>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
Edit a pool’s configuration with the pool-edit subcommand. To modify the default pool’s XML, the command would appear as follows.
virsh pool-edit default
To disable CoW, set the cow feature with state='no' in the pool's XML.
The snippet here demonstrates the XML to disable CoW.
<features>
  <cow state='no'/>
</features>
For the default storage pool, the resulting XML to disable CoW could appear like so.
<pool type='dir'>
  <name>default</name>
  <uuid>4f779eae-e312-4e4d-bf9f-fafe0e334f63</uuid>
  <capacity unit='bytes'>1999372288000</capacity>
  <allocation unit='bytes'>191017480192</allocation>
  <available unit='bytes'>1808354807808</available>
  <features>
    <cow state='no'/>
  </features>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
The simplest way to disable CoW on a particular directory or file is with chattr(1), as described in Can copy-on-write be turned off for data blocks?.
To do this, add the no-copy-on-write attribute with the +C option.
Disable CoW on the /var/lib/libvirt/images directory.
sudo chattr +C /var/lib/libvirt/images
A dedicated Btrfs subvolume for /var/lib/libvirt/images is probably your best option since it excludes the disk images from snapshots.
The subvolume can have CoW disabled via chattr, but CoW can also be disabled with the mount option nodatacow when using a subvolume in a flat layout.
The steps here create a dedicated subvolume for libvirt's disk image directory and mount it with CoW disabled.
Mount the root Btrfs filesystem to create a subvolume.
sudo mount (df --output=source / | tail -n 1) /mnt
Create a dedicated Btrfs subvolume for libvirt’s virtual disk images.
sudo btrfs subvolume create /mnt/var-lib-libvirt-images
Create subvolume '/mnt/var-lib-libvirt-images'
Add the subvolume to fstab(5).
echo (df --output=source / \
| tail -n 1)" /var/lib/libvirt/images btrfs defaults,nodatacow,noatime,subvol=var-lib-libvirt-images 0 0" \
| sudo tee -a /etc/fstab
/dev/mapper/sda2_crypt /var/lib/libvirt/images btrfs defaults,nodatacow,noatime,subvol=var-lib-libvirt-images 0 0
Verify there are no errors in fstab.
sudo findmnt --verify --verbose
Now mount the subvolume according to the rule just added in fstab.
sudo mount /var/lib/libvirt/images
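Confirm the subvolume is mounted with the expected options using findmnt.
findmnt /var/lib/libvirt/images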
Don’t forget to unmount /mnt.
sudo umount /mnt
That’s it! The default storage pool for libvirt will store virtual disk images in this subvolume.
virt-manager is an application for managing virtual machines with libvirt graphically. It’s a handy one for the toolbox, though some might prefer the simplicity of Boxes.
Install virt-manager.
sudo apt -y install virt-manager
If you haven’t logged out and back in since installing libvirt, you’ll need to do that before running virt-manager.
You should now be able to get virtual machines up and running without issue. Now that you have all the components in place for virtualization, why not make your life easier with Boxes? I’ll cover all the details of installing the GNOME Boxes Flatpak on a Btrfs system in an upcoming post, so stay tuned!
Podman is a daemonless container runtime for Linux compatible with Docker. It offers several advantages over using Docker to manage and run containers. First, there is no overhead associated with running a background service as is the case with Docker. Podman also allows users to run rootless containers which provides a higher degree of protection for the system. Podman integrates deeply with Linux, taking advantage of a number of specific features. Notably, it uses namespaces for process isolation and integrates nicely with systemd. Just like Kubernetes, Podman is built on the concept of pods, groups of one or more containers, instead of individual containers.
Podman is only available in the Ubuntu repositories as of Ubuntu 20.10, making it just an apt install away there. For older Ubuntu LTS releases, Podman can be installed from the Kubic repository, as described here.
This tutorial provides the necessary steps to install Podman on elementary OS 5.1, i.e. Ubuntu 18.04, as well as Ubuntu 20.04 proper. It is assumed that you are familiar with Linux, Ubuntu, and the command-line.
Add the Kubic repository for Podman to the system’s sources list.
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/ /" \
| sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/ /
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /" \
| sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /
Import the Kubic repository’s GPG key.
wget -qO - https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/Release.key \
| gpg --dearmor \
| sudo tee /etc/apt/trusted.gpg.d/kubic_libcontainers.gpg > /dev/null
wget -qO - https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/Release.key \
| gpg --dearmor \
| sudo tee /etc/apt/trusted.gpg.d/kubic_libcontainers.gpg > /dev/null
Refresh Aptitude.
sudo apt update
Upgrade any installed packages to those from the Kubic repository.
sudo apt -y upgrade
Install Podman.
sudo apt -y install podman
On Ubuntu 18.04, restart dbus in order to use rootless containers.
systemctl --user restart dbus
If you’re using Btrfs or ZFS, now is a good time to switch over to appropriate driver. Just follow the simple steps in Podman With Btrfs and ZFS. |
You should now have the power of Podman available.
When you want to use Podman, just use the same Docker command-line but substitute podman for docker.
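For instance, a quick smoke test might run a small image, using a fully qualified name since Podman prefers unambiguous image references.
podman run --rm docker.io/library/hello-world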