Intel iGPU VAAPI in Unprivileged LXC 4.0 Container
This article was originally published on the yoursunny.com blog: https://yoursunny.com/t/2022/lxc-vaapi/
Background
I recently bought a DELL OptiPlex 7040 Micro desktop computer and wanted to operate it as a dedicated server.
I installed Debian 11 on the computer, and placed it into the closet to be accessed over SSH only.
To keep the host machine stable, I decided to run most workloads in LXC containers, which are said to be Fast-as-Metal.
Since I operate my own video streaming website, I have an LXC container for encoding the videos.
The computer comes with an Intel Core i5-6500T processor.
It has 4 physical cores running at 2.50 GHz, and belongs to the Skylake family.
FFmpeg is happily encoding my videos on this CPU.
As I read through the processor specification, I noticed this section:
Processor Graphics: Intel® HD Graphics 530
- Processor Graphics indicates graphics processing circuitry integrated into the processor, providing the graphics, compute, media, and display capabilities.
Intel® Quick Sync Video: Yes
- Intel® Quick Sync Video delivers fast conversion of video for portable media players, online sharing, and video editing and authoring.
It seems that I have a GPU!
Can I make use of this Intel GPU and accelerate video encoding workloads?
Story
If you just want the solution, skip to the TL;DR Steps to Enable VAAPI in LXC section at the end.
Testing VAAPI with Docker
I read the FFmpeg HWAccelIntro and QuickSync pages, and learned:
- FFmpeg supports hardware acceleration on various GPU brands including Intel, AMD, and NVIDIA.
- Hardware encoders typically generate outputs of significantly lower quality than good software encoders, but are generally faster and do not use much CPU resource.
On Linux, FFmpeg may access an Intel GPU through libmfx, OpenCL, or VAAPI.
Among these, encoding is possible with libmfx or VAAPI. Each generation of Intel processors has different video encoding capabilities.
For the Skylake family that I have, the integrated GPU can encode to H.264, MPEG-2, VP8, and H.265 formats.
I decided to experiment with VAAPI, because it has the shortest name 🤪.
I quickly found jrottenberg/ffmpeg Docker image.
Following the example commands on the FFmpeg VAAPI page, I verified that my GPU can successfully encode videos to the H.264 format:
docker run \
--device /dev/dri \
-v $(pwd):/data -w /data \
jrottenberg/ffmpeg:4.1-vaapi \
-loglevel info -stats \
-vaapi_device /dev/dri/renderD128 \
-i input.mov \
-vf 'hwupload,scale_vaapi=w=640:h=480:format=nv12' \
-preset ultrafast \
-c:v h264_vaapi \
-f mp4 output.mp4
The renderD128 Device
The above docker run command tells me that the /dev/dri/renderD128 device is likely the key to getting the Intel GPU to work in an LXC container.
It is a character device with major number 226 and minor number 128.
sunny@sunnyD:~$ ls -l /dev/dri
total 0
drwxr-xr-x 2 root root 80 Jan 22 11:04 by-path
crw-rw---- 1 root video 226, 0 Jan 22 11:04 card0
crw-rw---- 1 root render 226, 128 Jan 22 11:04 renderD128
Inside the container, this device does not exist.
Naively, I tried mknod, but it returned an "operation not permitted" error:
ubuntu@video:~$ ls -l /dev/dri
ls: cannot access '/dev/dri': No such file or directory
ubuntu@video:~$ sudo mkdir /dev/dri
ubuntu@video:~$ sudo mknod /dev/dri/renderD128 c 226 128
mknod: /dev/dri/renderD128: Operation not permitted
I searched this problem for several weeks and found several articles about getting the Plex or Emby media server to use VAAPI hardware encoding from LXC containers, but they use either Proxmox or LXD (unavailable on Debian), both of which differ from the plain LXC that I'm trying to use.
From these articles, I gathered enough hints on what's needed:
- An LXC container cannot mknod arbitrary devices for security reasons. To have a device inode in an LXC container, the container config must:
  - grant permission with the lxc.cgroup.devices.allow directive, and
  - mount the device with the lxc.mount.entry directive (see the sketch after this list).
- In addition to ffmpeg, it's necessary to install the vainfo and i965-va-driver packages (available on both Debian and Ubuntu).
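Concretely, the two device-related directives look like this in the container config (the same values appear in the TL;DR section at the end):

# allow the container to access character device major 226, minor 128
lxc.cgroup.devices.allow = c 226:128 rwm
# bind-mount the host's renderD128 device into the container
lxc.mount.entry = /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file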
nobody:nogroup
With these configs in place, the device showed up in the container, but it did not work:
ubuntu@video:~$ ls -l /dev/dri
total 0
crw-rw---- 1 nobody nogroup 226, 128 Jan 22 16:04 renderD128
ubuntu@video:~$ vainfo
error: can't connect to X server!
error: failed to initialize display
ubuntu@video:~$ sudo vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
error: failed to initialize display
One suspicious thing is the nobody:nogroup owner on the renderD128 device.
It differs from the root:render owner seen on the host machine.
Naively, I tried chown, but it returned an "invalid argument" error and had no effect:
ubuntu@video:~$ sudo chown root:render /dev/dri/renderD128
chown: changing ownership of '/dev/dri/renderD128': Invalid argument
ubuntu@video:~$ ls -l /dev/dri
total 0
crw-rw---- 1 nobody nogroup 226, 128 Jan 22 16:04 renderD128
A Reddit post claims that running chmod 0666 /dev/dri/renderD128 on the host machine would solve this problem.
I gave it a try and it was indeed effective.
However, I knew this wasn't a proper solution, because you are not supposed to change permissions on device inodes.
So I continued searching.
idmap
The last piece of the puzzle lies in user and group ID mappings.
In an unprivileged LXC container, user and group IDs are shifted, so that the root user (UID 0) inside the container would not gain root privilege on the host machine.
The lxc.idmap directive in the container config controls these mappings.
In my container, the relevant config was:
# map container UID 0~65535 to host UID 100000~165535
lxc.idmap = u 0 100000 65536
# map container GID 0~65535 to host GID 100000~165535
lxc.idmap = g 0 100000 65536
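From inside a running container, the active mapping can be read from /proc. A quick check (per user_namespaces(7), the three columns are the container-side ID, the host-side ID, and the range length):

ubuntu@video:~$ cat /proc/self/uid_map
         0     100000      65536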
Notably, the root user (UID 0) and the render group (GID 107) on the host machine aren't mapped to anything in the container.
The kernel uses 65534 to represent a UID/GID which is outside the container's map.
Hence, the renderD128 device, when mounted into the container, has owner UID and GID being 65534:
ubuntu@video:~$ ls -ln /dev/dri
total 0
crw-rw---- 1 65534 65534 226, 128 Jan 22 16:04 renderD128
65534 is the UID of nobody and the GID of nogroup, which is why this device appears to be owned by nobody:nogroup.
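This can be confirmed from inside the container (output typical for an Ubuntu guest):

ubuntu@video:~$ getent passwd 65534
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
ubuntu@video:~$ getent group 65534
nogroup:x:65534: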
To make renderD128 owned by the render group, the correct solution is mapping the render group inside the container to the render group on the host.
This, in turn, requires two ingredients:
- /etc/subgid must authorize the host user who starts the container to map the GID of the host's render group into child namespaces.
- The container config should have an lxc.idmap directive that maps the GID of the container's render group to the GID of the host's render group.
So I added lxc:107:1 to /etc/subgid, in which lxc is the ordinary user on the host machine that starts the containers, and 107 is the GID of the render group on the host machine.
Then I modified the container config as:
# map container UID 0-65535 to host UID 100000-165535
lxc.idmap = u 0 100000 65536
# map container GID 0-65535 to host GID 100000-165535
lxc.idmap = g 0 100000 65536
# map container GID 109 to host GID 107
lxc.idmap = g 109 107 1
However, the container failed to start:
lxc@sunnyD:~$ lxc-unpriv-start -F video
Running scope as unit: run-r611f1778b87645918a2255d44073b86b.scope
lxc-start: video: conf.c: lxc_map_ids: 2865 newgidmap failed to write mapping "newgidmap: write to gid_map failed: Invalid argument": newgidmap 5297 0 100000 65536 109 107 1
lxc-start: video: start.c: lxc_spawn: 1726 Failed to set up id mapping.
Re-reading the user_namespaces(7) manpage revealed the reason:
Defining user and group ID mappings: writing to uid_map and gid_map
- The range of user IDs (group IDs) specified in each line cannot overlap with the ranges in any other lines.
The above container config defines two group ID mappings that overlap at GID 109, which causes the failure.
Instead, it must be split into three ranges: 0-108 mapped to 100000-100108, 109 mapped to 107, and 110-65535 mapped to 100110-165535.
Another idea I had, changing the GID of the render group to a number greater than 65535 to dodge the overlap, turned out to be a bad idea, as it causes an error during system upgrades:
ubuntu@video:~$ sudo apt full-upgrade
Setting up udev (245.4-4ubuntu3.15) ...
The group `render' already exists and is not a system group. Exiting.
dpkg: error processing package udev (--configure):
installed udev package post-installation script subprocess returned error exit status 1
Hence, I had to carefully calculate the GID ranges and write three GID mapping entries.
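Spelled out, the non-overlapping mappings become (the same entries appear in the TL;DR section below):

# map container UID 0-65535 to host UID 100000-165535
lxc.idmap = u 0 100000 65536
# map container GID 0-108 to host GID 100000-100108
lxc.idmap = g 0 100000 109
# map container GID 109 to host GID 107
lxc.idmap = g 109 107 1
# map container GID 110-65535 to host GID 100110-165535
lxc.idmap = g 110 100110 65426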
With this final piece in place, success!
ubuntu@video:~$ vainfo 2>/dev/null | head -10
vainfo: VA-API version: 1.7 (libva 2.6.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Skylake - 2.4.0
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
Encoding speed comparison on one of my videos:
H.264, ultrafast, 640x480 resolution
Intel GPU VAAPI encoding:
frame= 2900 fps=201 q=-0.0 Lsize= 18208kB time=00:01:36.78 bitrate=1541.2kbits/s speed=6.71x video:16583kB audio:1528kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.533910%
Skylake CPU encoding:
frame= 2900 fps=171 q=-1.0 Lsize= 18786kB time=00:01:36.78 bitrate=1590.1kbits/s speed=5.71x video:17177kB audio:1528kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.434900%
GPU encoding was 17.5% faster than CPU encoding.
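For comparison, the CPU numbers came from a software encode. A minimal sketch of an equivalent libx264 command (the flags here are assumed, mirroring the VAAPI run above):

ffmpeg -loglevel info -stats \
  -i input.mov \
  -vf scale=640:480 \
  -c:v libx264 -preset ultrafast \
  -f mp4 output.mp4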
TL;DR Steps to Enable VAAPI in LXC
1. Confirm that the /dev/dri/renderD128 device exists on the host machine.

lxc@sunnyD:~$ ls -l /dev/dri/renderD128
crw-rw---- 1 root render 226, 128 Jan 22 11:04 /dev/dri/renderD128

If the device does not exist, you do not have an Intel GPU or it is not recognized by the kernel. You must resolve this issue before proceeding to the next step.

2. Find the GID of the render group on the host machine:

lxc@sunnyD:~$ getent group render
render:x:107:

On my computer, the GID is 107.
3. Authorize the host user who starts LXC containers to map the GID into child namespaces. Run sudoedit /etc/subgid to open the editor, then append a line:

lxc:107:1

Explanation:
- lxc refers to the host user account.
- 107 is the GID of the render group, as seen in step 2.
- 1 means authorizing just one GID.
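As a sanity check, /etc/subgid should now contain both the subordinate GID range used by unprivileged containers and the new single-GID entry (the 100000 range shown below is typical; your existing line may differ):

lxc@sunnyD:~$ grep '^lxc:' /etc/subgid
lxc:100000:65536
lxc:107:1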
4. Create and start an LXC container, and find out the GID of the container's render group. I'm using an Ubuntu 20.04 template, but the same procedure is applicable to other templates.

lxc@sunnyD:~$ export DOWNLOAD_KEYSERVER=keyserver.ubuntu.com
lxc@sunnyD:~$ lxc-create -n video -t download -- -d ubuntu -r focal -a amd64
Using image from local cache
Unpacking the rootfs
You just created an Ubuntu focal amd64 (20211228_07:42) container.
To enable SSH, run: apt install openssh-server
No default root or user password are set by LXC.
lxc@sunnyD:~$ lxc-unpriv-start video
Running scope as unit: run-re7a88541bd5d42ab92c9ea6d4cd2a19f.scope
lxc@sunnyD:~$ lxc-unpriv-attach video getent group render
Running scope as unit: run-reaad3e4a549a420bacb160fd8cbc87a8.scope
render:x:109:
5. Edit the container config. Run editor ~/.local/share/lxc/video/config to open the editor. Delete existing lines that start with lxc.idmap = g, but do not delete lines that start with lxc.idmap = u. Append these lines:

lxc.idmap = g 0 100000 109
lxc.idmap = g 109 107 1
lxc.idmap = g 110 100110 65426
lxc.cgroup.devices.allow = c 226:128 rwm
lxc.mount.entry = /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
Explanation:
- The lxc.idmap = g directives define group ID mappings. 109 is the GID of the container's render group, as seen in step 4. 107 is the GID of the host's render group, as seen in step 2.
- The lxc.cgroup.devices.allow directive exposes a device to the container. 226:128 is the major and minor number of the renderD128 device, as seen in step 1.
- The lxc.mount.entry directive mounts the host's renderD128 device into the container.
You may use the handy idmap calculator to generate the lxc.idmap directives (read the original article https://yoursunny.com/t/2022/lxc-vaapi/ to use the JavaScript calculator).

6. Restart the container and attach to its console.

lxc@sunnyD:~$ lxc-stop video
lxc@sunnyD:~$ lxc-unpriv-start video
Running scope as unit: run-r77f46b8ba5b24254a99c1ef9cb6384c3.scope
lxc@sunnyD:~$ lxc-unpriv-attach video
Running scope as unit: run-r11cf863c81e74fcfa1615e89902b1284.scope
7. Install FFmpeg and VAAPI packages in the container.

root@video:/# apt update
root@video:/# apt install --no-install-recommends ffmpeg vainfo i965-va-driver
0 upgraded, 148 newly installed, 0 to remove and 15 not upgraded.
Need to get 79.2 MB of archives.
After this operation, 583 MB of additional disk space will be used.
Do you want to continue? [Y/n]
8. Confirm that the /dev/dri/renderD128 device exists in the container and is owned by the render group.

root@video:/# ls -l /dev/dri/renderD128
crw-rw---- 1 nobody render 226, 128 Jan 22 16:04 /dev/dri/renderD128

It's normal for the owner user to show as nobody. This does not affect operation as long as the calling user is a member of the render group. The only implication is that the container's root user cannot access renderD128 unless it is added to the render group.

9. Add the container's user account(s) to the render group. These users will have access to the GPU.

root@video:/# /sbin/adduser ubuntu render
Adding user `ubuntu' to group `render' ...
Adding user ubuntu to group render
Done.
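You can verify the membership with the id command (output abridged; the new group takes effect in fresh login sessions, which is why the next step starts one with sudo -iu):

root@video:/# id ubuntu
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),109(render)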
10. Become one of these users, and verify the Intel iGPU is operational in the LXC container.

root@video:/# sudo -iu ubuntu
ubuntu@video:~$ vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.7.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: va_openDriver() returns -1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_6
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.7 (libva 2.6.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Skylake - 2.4.0
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointStats
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointStats
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointStats
VAProfileH264MultiviewHigh : VAEntrypointVLD
VAProfileH264MultiviewHigh : VAEntrypointEncSlice
VAProfileH264StereoHigh : VAEntrypointVLD
VAProfileH264StereoHigh : VAEntrypointEncSlice
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileNone : VAEntrypointVideoProc
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileVP8Version0_3 : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
Conclusion
This article explores how to make use of an Intel processor's integrated GPU in an unprivileged LXC 4.0 container, on a Debian 11 bullseye host machine, without Proxmox or LXD.
The key points include mounting the renderD128 device into the container, configuring the idmap for the render group, and verifying the setup with the vainfo command.
The result is an LXC container that can encode videos to H.264 and other formats on the GPU with the Intel Quick Sync Video feature, which is 17.5% faster than CPU encoding on this machine.