My goal was to make NVENC work on Jellyfin.
Docker
I was able to expose my GPU in Docker without libnvidia-container by doing:
sudo docker run \
-e NVIDIA_VISIBLE_DEVICES=all \
-v /usr/local/bin/nvidia-smi:/usr/local/bin/nvidia-smi \
-v /usr/local/bin/nvidia-cuda-mps-control:/usr/local/bin/nvidia-cuda-mps-control \
-v /usr/local/bin/nvidia-persistenced:/usr/local/bin/nvidia-persistenced \
-v /usr/local/bin/nvidia-cuda-mps-server:/usr/local/bin/nvidia-cuda-mps-server \
-v /usr/local/bin/nvidia-debugdump:/usr/local/bin/nvidia-debugdump \
-v /usr/lib/libnvcuvid.so:/usr/lib/libnvcuvid.so \
-v /usr/lib/libnvidia-cfg.so:/usr/lib/libnvidia-cfg.so \
-v /usr/lib/libnvidia-compiler.so:/usr/lib/libnvidia-compiler.so \
-v /usr/lib/libnvidia-eglcore.so:/usr/lib/libnvidia-eglcore.so \
-v /usr/lib/libnvidia-encode.so:/usr/lib/libnvidia-encode.so \
-v /usr/lib/libnvidia-fatbinaryloader.so:/usr/lib/libnvidia-fatbinaryloader.so \
-v /usr/lib/libnvidia-fbc.so:/usr/lib/libnvidia-fbc.so \
-v /usr/lib/libnvidia-glcore.so:/usr/lib/libnvidia-glcore.so \
-v /usr/lib/libnvidia-glsi.so:/usr/lib/libnvidia-glsi.so \
-v /usr/lib/libnvidia-ifr.so:/usr/lib/libnvidia-ifr.so \
-v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so \
-v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so.1 \
-v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so.440.44 \
-v /usr/lib/libnvidia-opencl.so:/usr/lib/libnvidia-opencl.so \
-v /usr/lib/libnvidia-ptxjitcompiler.so:/usr/lib/libnvidia-ptxjitcompiler.so \
-v /usr/lib/libnvidia-tls.so:/usr/lib/libnvidia-tls.so \
-v /usr/lib/libicuuc.so:/usr/lib/libicuuc.so \
-v /usr/lib/libcuda.so:/usr/lib/libcuda.so \
-v /usr/lib/libcuda.so.1:/usr/lib/libcuda.so.1 \
-v /usr/lib/libicudata.so:/usr/lib/libicudata.so \
--device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
--device /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools \
nvidia/cuda:11.0.3-runtime nvidia-smi
Output is:
Tue Aug 1 00:54:12 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44 Driver Version: 440.44 CUDA Version: N/A |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 106... On | 00000000:01:00.0 Off | N/A |
| 84% 89C P2 58W / 180W | 1960MiB / 3018MiB | 90% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
This should work on any platform that has the NVIDIA runtime libraries installed.
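Typing each `-v` flag by hand is tedious, so the mount list above can be generated in a loop. This is a sketch; the binary names and library paths match my DSM install, and yours may differ:

```shell
# Build the bind-mount flags for the NVIDIA userspace binaries and libraries.
NV_BINS="nvidia-smi nvidia-cuda-mps-control nvidia-persistenced nvidia-cuda-mps-server nvidia-debugdump"

MOUNTS=""
for b in $NV_BINS; do
  MOUNTS="$MOUNTS -v /usr/local/bin/$b:/usr/local/bin/$b"
done

# Mount every NVIDIA/CUDA library present on the host at the same path.
for l in /usr/lib/libnv*.so* /usr/lib/libcuda.so*; do
  [ -e "$l" ] || continue   # skip unmatched globs
  MOUNTS="$MOUNTS -v $l:$l"
done

# Then: sudo docker run -e NVIDIA_VISIBLE_DEVICES=all $MOUNTS \
#   --device /dev/nvidia0 --device /dev/nvidiactl ... nvidia/cuda:11.0.3-runtime nvidia-smi
```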
However, this still does not work with the Jellyfin Docker image. I can configure NVENC and play videos fine, but the logs do not show h264_nvenc, and I see no process running in `nvidia-smi`.
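A quick way to confirm whether a transcode really went through NVENC is to grep the ffmpeg transcode log Jellyfin writes for that session (the log path below is hypothetical; Jellyfin names its ffmpeg logs per session under its log directory):

```shell
# Return success if the given ffmpeg log shows the NVENC encoder in use.
uses_nvenc() { grep -q "h264_nvenc" "$1"; }

# Simulate the kind of line a successful NVENC transcode log contains:
printf '%s\n' "Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_nvenc))" > /tmp/sample.log
uses_nvenc /tmp/sample.log && echo "NVENC was used"
```

Watching `nvidia-smi` while playing a video (the Processes table should list the ffmpeg PID) is the other cross-check.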
The official docs point to using nvidia-container-toolkit: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
That's why I was looking at how to build it with DSM 7.2 kernel.
Running rffmpeg
My second idea was to use rffmpeg (remote ffmpeg, which offloads transcoding to another machine): run Jellyfin in Docker with rffmpeg configured, and run the hardware-accelerated ffmpeg on the DSM host.
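For reference, rffmpeg is driven by a small YAML config; this is a rough sketch of its shape from memory of the rffmpeg README, so treat the key names and paths as assumptions rather than a verified config:

```yaml
# /etc/rffmpeg/rffmpeg.yml -- sketch only; check the rffmpeg README
rffmpeg:
  remote:
    user: jellyfin                               # SSH user on the transcoding host
    args: ["-i", "/var/lib/rffmpeg/.ssh/id_rsa"] # SSH key for that user
  commands:
    ffmpeg: /usr/lib/jellyfin-ffmpeg/ffmpeg      # paths on the REMOTE host
    ffprobe: /usr/lib/jellyfin-ffmpeg/ffprobe
```

rffmpeg is then symlinked in place of ffmpeg/ffprobe so Jellyfin invokes it transparently, and hosts are registered with `rffmpeg add`.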
I downloaded the portable Linux jellyfin-ffmpeg distribution: https://github.com/jellyfin/jellyfin-ffmpeg/releases/tag/v5.1.3-2
Running it over SSH yields:
[h264_nvenc @ 0x55ce40d8c480] Driver does not support the required nvenc API version. Required: 12.0 Found: 9.1
[h264_nvenc @ 0x55ce40d8c480] The minimum required Nvidia driver for nvenc is (unknown) or newer
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!
I think this is because DSM ships an old 440.44 driver. jellyfin-ffmpeg is compiled against the latest https://github.com/FFmpeg/nv-codec-headers, while the DSM NVIDIA driver only supports NVENC API 9.1.
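The driver-to-API relationship can be sketched as a lookup. The thresholds below are approximated from the minimum Linux driver versions listed in the nv-codec-headers README, so treat them as an assumption rather than an authoritative table:

```shell
# Approximate max NVENC API version supported by a given Linux driver.
# Thresholds from the nv-codec-headers README (major version only).
nvenc_api_for_driver() {
  local major=${1%%.*}
  if   [ "$major" -ge 522 ]; then echo "12.0"   # >= 522.25
  elif [ "$major" -ge 470 ]; then echo "11.1"   # >= 470.57
  elif [ "$major" -ge 455 ]; then echo "11.0"   # >= 455.28
  elif [ "$major" -ge 445 ]; then echo "10.0"   # >= 445.87
  elif [ "$major" -ge 435 ]; then echo "9.1"    # >= 435.21
  elif [ "$major" -ge 418 ]; then echo "9.0"    # >= 418.30
  else echo "unknown"
  fi
}

nvenc_api_for_driver 440.44   # prints 9.1
```

This matches the error above: driver 440.44 tops out at API 9.1, while the stock jellyfin-ffmpeg build demands 12.0.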
Confirming NVENC works
I confirmed NVENC works with the official driver by installing Emby and trying its packaged ffmpeg:
/volume1/@appstore/EmbyServer/bin/emby-ffmpeg -i /volume1/downloads/test.mkv -c:v h264_nvenc -b:v 1000k -c:a copy /volume1/downloads/test_nvenc.mp4
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 15282 C /var/packages/EmbyServer/target/bin/ffmpeg 112MiB |
| 0 20921 C ...ceStation/target/synoface/bin/synofaced 1108MiB |
| 0 32722 C ...anceStation/target/synodva/bin/synodvad 834MiB |
+-----------------------------------------------------------------------------+
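Besides the `nvidia-smi` process table, the output file itself can be checked with ffprobe (the file paths are from my test above; adjust to yours):

```shell
# Print the codec of the first video stream in a file.
probe_codec() {
  ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name \
    -of default=noprint_wrappers=1:nokey=1 "$1"
}

# probe_codec /volume1/downloads/test_nvenc.mp4   # expect: h264
```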
Next steps
The other thing I have yet to try is recompiling jellyfin-ffmpeg against an older nv-codec-headers and using it inside the Jellyfin Docker container.
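A rough sketch of that build, wrapped in a function so nothing runs until invoked. It assumes nv-codec-headers publishes an sdk/9.1 branch matching the 440.44 driver, and that jellyfin-ffmpeg accepts plain FFmpeg-style configure flags; both are assumptions to verify:

```shell
build_with_old_headers() {
  # Install headers for the NVENC 9.1 API that driver 440.44 supports.
  git clone --branch sdk/9.1 https://github.com/FFmpeg/nv-codec-headers.git
  make -C nv-codec-headers install PREFIX=/usr/local

  # Build jellyfin-ffmpeg against those headers.
  git clone --branch v5.1.3-2 https://github.com/jellyfin/jellyfin-ffmpeg.git
  cd jellyfin-ffmpeg || return 1
  ./configure --enable-nonfree --enable-nvenc --enable-cuvid
  make -j"$(nproc)"
}
```

If that works, the resulting binary could replace the bundled ffmpeg inside the Jellyfin container, or serve as the remote ffmpeg for the rffmpeg setup above.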