XPEnology Community

Nvidia Runtime Library


disone


On 9/19/2022 at 12:01 AM, dimakv2014 said:

I did a little research on updating the GPU drivers so that more recent NVIDIA GPUs can be used with DVA3221. Some older NVIDIA Runtime Library .spk packages can still be opened and extracted with 7-Zip and similar tools, but once the drivers inside are replaced with new ones the package refuses to install. I found this guide on how to create an .spk package: https://amigotechnotes.wordpress.com/2014/05/17/how-to-create-a-spk-for-synology-dsm-to-distribute-your-lamp/ Using the Synology developer toolkit it should be possible to rebuild the package with the latest drivers: https://help.synology.com/developer-guide/release_notes.html

But the question remains whether it would actually replace the old 440.44 drivers at root level. Maybe it's possible to replace the driver files manually over SSH. QNAP updates its NVIDIA GPU drivers often, unlike Synology :(
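(For reference, older .spk packages are plain tar archives, so they can be inspected without 7-Zip as well; a minimal sketch, with the file name as an example only:)

# List and unpack an older NVIDIA Runtime Library package (plain tar; newer ones may not open this way):
tar -tvf NVIDIARuntimeLibrary.spk
mkdir spk && tar -xvf NVIDIARuntimeLibrary.spk -C spk
# The driver payload typically sits in package.tgz (sometimes package.txz) inside the archive:
tar -tzvf spk/package.tgz | head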

 

Hi @dimakv2014, did you manage to make this work? I have a GTX 770 that shows "Not supported" under nvidia-smi. I was wondering whether updating the drivers would fix this problem.


On 10/26/2022 at 11:21 AM, dookei said:

Hi @dimakv2014, did you manage to make this work? I have a GTX 770 that shows "Not supported" under nvidia-smi. I was wondering whether updating the drivers would fix this problem.

In your case it's not supported because the CUDA and NVENC architecture is far too old. NVENC on Kepler GPUs is very limited, and I think Synology checks for a minimum supported architecture, which is Pascal. Even Pascal GPUs get very hot, and if your Kepler card were supported it would probably burn up. In my case I have a newer GPU that is not included in the 440.44 driver, so nvidia-smi doesn't run at all because it doesn't find a supported NVIDIA GPU. So I don't think updating the driver will solve your issue, and your only real choice is to buy a GTX 1650 DDR5 :(

Edited by dimakv2014
  • Like 1

  • 1 month later...
1 hour ago, ITdesk said:

I have good news: on a Synology DS918+ running DSM 6.2.3, I got Jellyfin in Docker to use the NVIDIA card for transcoding.

Synology with a discrete NVIDIA card: Jellyfin successfully calls it for transcoding.
https://www.right.com.cn/forum/thread-8267275-1-1.html

 

Well done, but just one question: is there any real benefit to using a library of unknown trust instead of the DVA3221 loader?


  • 2 months later...

Hi, I installed XPEnology DS920+ on my old Atom 330 PC, which has an NVIDIA 9400M GPU built in. I was also able to install DSM for DS3622xs+. I was hoping to HW-transcode video through Plex or some other media server app, but I need NVIDIA drivers for that. So I tried to install the suggested DVA versions, with no success: after the initial setup the system never reaches the Synology login page and just shuts down. I then tried to install the drivers manually, but failed due to the lack of glibc needed to compile them. Next I tried to extract the drivers from the image, but DSM 7.0 is different from the previous version and I can't extract any files from it (possibly due to encryption).
Maybe I'll never get it to work and I should forget about using Plex on this machine, but it worked on Windows (full HD videos only), so I thought it would be nice to have. Is there any chance of that? Should I go back to DSM 6.x and try to extract the .ko files and install them manually?


16 minutes ago, Xoxer said:

Hi, I installed XPEnology DS920+ on my old Atom 330 PC, which has an NVIDIA 9400M GPU built in. I was also able to install DSM for DS3622xs+. I was hoping to HW-transcode video through Plex or some other media server app, but I need NVIDIA drivers for that. So I tried to install the suggested DVA versions, with no success: after the initial setup the system never reaches the Synology login page and just shuts down. I then tried to install the drivers manually, but failed due to the lack of glibc needed to compile them. Next I tried to extract the drivers from the image, but DSM 7.0 is different from the previous version and I can't extract any files from it (possibly due to encryption).
Maybe I'll never get it to work and I should forget about using Plex on this machine, but it worked on Windows (full HD videos only), so I thought it would be nice to have. Is there any chance of that? Should I go back to DSM 6.x and try to extract the .ko files and install them manually?

 

Does your Atom 330 have the MOVBE instruction? If it doesn't, you won't be able to run the DS918+/DVA3221 loaders.


Looks like it has it:
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm xtpr pdcm movbe lahf_lm dtherm
and I can also find it at https://www.cpu-world.com/CPUs/Atom/Intel-Atom 330 AU80587RE0251M.html
Could you tell me exactly which image I should use to test it? I used ARPL v1.1-beta2 and I'm not sure whether that version checks for MOVBE. Maybe I shouldn't use ARPL but TCRP directly? Please advise exactly which model and build I should test.
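For anyone else checking, a quick way to confirm MOVBE on the box itself (a simple sketch):

# Prints "movbe" if the CPU advertises the instruction in its flags, nothing otherwise:
grep -o movbe /proc/cpuinfo | head -1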

I looked at the logs from the USB installation (logs > jr > messages) and I think this GPU won't work unless I can somehow install an older driver version:

 

nvidia: module license 'NVIDIA' taints kernel.
Disabling lock debugging due to kernel taint
nvidia-nvlink: Nvlink Core is being initialized, major device number 248
NVRM: The NVIDIA ION GPU installed in this system is
NVRM:  supported through the NVIDIA 340.xx Legacy drivers. Please
NVRM:  visit http://www.nvidia.com/object/unix.html for more
NVRM:  information.  The 440.44 NVIDIA driver will ignore
NVRM:  this GPU.  Continuing probe...
NVRM: No NVIDIA graphics adapter found!
nvidia-nvlink: Unregistered the Nvlink Core, major device number 248


Also, when I tried to install the regular drivers from NVIDIA, it failed with this error:
 

 ERROR: Unable to find the system utility `ldconfig`; please make sure you have the package 'glibc' installed.  If you do
         have glibc installed, then please check that `ldconfig` is in your PATH.

 

Edited by Xoxer
Additional Info

3 hours ago, Orphée said:

@Peter Suh your opinion ?

 

 

Xoxer's ultimate goal is video HW transcoding.
The only platforms capable of video HW transcoding are Apollo Lake and Gemini Lake, so DS918+ or DS920+ looks like a good fit.
HW transcoding through Plex requires buying a Plex Pass.
HW transcoding in Video Station is possible without a genuine SN on DSM 7.0.1-42218.
However, I don't have a 2nd-generation Atom CPU, so I can't confirm that HW transcoding is really guaranteed there.
Choose between ARPL and TCRP (currently only M SHELL has no problems).
After finishing the build of a DS918+/DS920+ DSM 7.0.1-42218 loader, it should be easy to do HW transcoding with just the integrated graphics.
Forget NVIDIA.

 

Try adjusting Video Station's settings as instructed below.

 

1. Install DSM 7.0.1-42218.

2. Install Advanced Media Extensions from Package Center.

3. Check (enable) HEVC/AAC in Advanced Media Extensions.

4. Install Video Station.

5. Manually install the SynoCommunity ffmpeg package: https://synocommunity.com/packages

6. Apply the transcoding patch (all items):
curl -L "https://raw.githubusercontent.com/Yanom1212/synocodectool-patch/master/patch.sh" -O; sh patch.sh -p

7. Run cat /usr/syno/etc/codec/activation.conf and confirm from its contents that the transcoding patch is enabled.

8. Run ll /dev/dri/renderD128 and check that the path exists.

9. Video Station patch 1:
bash -c "$(curl "https://raw.githubusercontent.com/AlexPresso/VideoStation-FFMPEG-Patcher/master/patcher.sh")"

10. Video Station patch 2 (install "all" only):
bash -c "$(curl "https://raw.githubusercontent.com/darknebular/Wrapper_VideoStation/main/installer.sh")"

11. Enable the hardware transcoding option in Video Station's advanced settings.

12. Upload a movie or video from DS Video or Plex to check offline transcoding.
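After step 12, a rough way to confirm the patched ffmpeg is actually doing the work (a sketch, run over SSH):

# While an offline transcode is running, the transcoding ffmpeg process should show up here:
ps aux | grep [f]fmpeg
# and the render node from step 8 must still be present:
ls -l /dev/dri/renderD128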


@Peter Suh HW transcoding also works with an NVIDIA card and the DVA3221 loader, as long as you have a compatible NVIDIA card (and a >= Haswell CPU).

 

I do it myself perfectly fine:

[screenshot]

 

But I agree with you about Plex Pass (or Emby Premiere, in my case).

 

Edit: You can also do it without any subscription using Jellyfin... I did that as a proof of concept in the DVA3221 thread early on.

Edited by Orphée
  • Like 1

Thank you @Peter Suh for the response.
Re 1: I will have to downgrade to DS920+ DSM 7.0.1-42218, as I am on a higher version right now, and I assume your idea will not work on higher versions.
Re 3: since you said "Forget NVIDIA", doing HEVC/AAC with just the Intel Atom will probably not be possible, so the whole idea may be unachievable, but I will try anyway, just out of curiosity.
Re 12: you propose offline transcoding, so does that mean I will need more space to keep both the original video and the transcoded one?

Re @Orphée: it is a nice idea to use Jellyfin, but is that also offline transcoding?

It is funny to have an NVIDIA chip sitting next to the Intel processor on the same board and not use it at all, when NVIDIA could offer some extra power as a coprocessor, e.g. via CUDA.
 


8 hours ago, Xoxer said:

Thank you @Peter Suh for the response.
Re 1: I will have to downgrade to DS920+ DSM 7.0.1-42218, as I am on a higher version right now, and I assume your idea will not work on higher versions.
Re 3: since you said "Forget NVIDIA", doing HEVC/AAC with just the Intel Atom will probably not be possible, so the whole idea may be unachievable, but I will try anyway, just out of curiosity.
Re 12: you propose offline transcoding, so does that mean I will need more space to keep both the original video and the transcoded one?

Re @Orphée: it is a nice idea to use Jellyfin, but is that also offline transcoding?

It is funny to have an NVIDIA chip sitting next to the Intel processor on the same board and not use it at all, when NVIDIA could offer some extra power as a coprocessor, e.g. via CUDA.
 

 

Re 1: Downgrading without a fresh installation of DSM requires a lot of effort and is not recommended. It's virtually impossible without a genuine SN on DSM 7.1, but there are tricks.

 

Re 12: Probably. If you lower the quality, the transcoded file will be smaller than the original, but it will definitely take up extra space.


1 hour ago, Orphée said:

I don't know what offline transcoding means.

 

As shown below, "offline transcoding" is the name Video Station uses for it.

As I understand it, it is not the real-time streaming transcoding that Plex supports, but encoding in advance at a lower quality and linking the result to the existing video.
Plex has a similar feature.

 

[screenshots: Video Station offline transcoding settings]

  • Thanks 1

  • 3 months later...
  • 1 month later...

Has anyone actually got this to work for Docker? It seems the files provided work to some extent, but I think there's an issue with how it's set up (this is following the write-up by @Zowlverein).

 

When I run docker run --gpus all nvidia/cuda:11.1.1-base nvidia-smi

I get the following error:

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: input error: parse user/group failed: no such file or directory: unknown.

 

Before adding the files there was an error about no GPU or something, so at least that is gone. Also note I'm trying with cuda:11.1.1 and not cuda:10.2, as the latter is no longer available, but the idea is to at least get the GPU passed in (the same error also applies to CodeProject.AI).
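If it helps with debugging, the container CLI has a verbose info mode that usually pinpoints which lookup fails; a sketch, run on the NAS as root and assuming the binaries from the write-up are in place:

# Dump what nvidia-container-cli can see; -k loads kernel modules, -d sends debug output to the terminal:
nvidia-container-cli -k -d /dev/tty info
# The "parse user/group failed" message suggests a user/group lookup problem; review the runtime config too:
cat /etc/nvidia-container-runtime/config.toml | grep -v '^#'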

 

DVA3221 DSM 7.2

GTX1650 (Working fine with Surveillance Station etc.)

Edited by cqoute

  • 3 weeks later...
On 11/30/2021 at 8:31 PM, Zowlverein said:

Have got cuda in docker working (at least nvidia-smi with the sample container: sudo docker run --gpus all nvidia/cuda:10.2-runtime nvidia-smi)

 

Required files here: https://gofile.io/d/zYTBCP

/usr/bin/nvidia-container-toolkit (v 1.5.1 from ubuntu 16.04 build)

/usr/bin/nvidia-container-cli (custom build with syscall security removed)

also need symlink (ln -s /usr/bin/nvidia-container-toolkit /usr/bin/nvidia-container-runtime-hook)

 

/lib/libseccomp.so.2.5.1 (from ubuntu 16.04 - also ln -s /lib/libseccomp.so.2.5.1 /lib/libseccomp.so.2)

/lib/libnvidia-container.so.1.5.1 (from same custom build - also ln -s /lib/libnvidia-container.so.1.5.1 /lib/libnvidia-container.so.1)

 

ldconfig - copied to /opt/bin/ from ubuntu installation (version in entware/optware didn't work)

 

/etc/nvidia-container-runtime/config.toml (need to update file to point to local ldconfig and nvidia drive paths)

 

 

[screenshot]

 

I know this is an old post, but can anybody explain how to build `libnvidia-container` against the DSM 7.2 kernel?
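I can only offer a rough sketch of the upstream build, not DSM-specific steps; it assumes a Linux build host with the usual toolchain and the library's dependencies available, and the result would presumably still need the syscall/seccomp tweaks mentioned in the quoted post:

# Rough upstream build sketch only (untested against the DSM 7.2 kernel):
git clone https://github.com/NVIDIA/libnvidia-container.git
cd libnvidia-container
make    # builds libnvidia-container.so.* and nvidia-container-cli on a standard Linux host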


My goal was to make NVENC work on Jellyfin.

 

Docker

 

I was able to expose my GPU in docker without the libnvidia-container by doing:

sudo docker run \
        -e NVIDIA_VISIBLE_DEVICES=all \
        -v /usr/local/bin/nvidia-smi:/usr/local/bin/nvidia-smi \
        -v /usr/local/bin/nvidia-cuda-mps-control:/usr/local/bin/nvidia-cuda-mps-control \
        -v /usr/local/bin/nvidia-persistenced:/usr/local/bin/nvidia-persistenced \
        -v /usr/local/bin/nvidia-cuda-mps-server:/usr/local/bin/nvidia-cuda-mps-server \
        -v /usr/local/bin/nvidia-debugdump:/usr/local/bin/nvidia-debugdump \
        -v /usr/lib/libnvcuvid.so:/usr/lib/libnvcuvid.so \
        -v /usr/lib/libnvidia-cfg.so:/usr/lib/libnvidia-cfg.so \
        -v /usr/lib/libnvidia-compiler.so:/usr/lib/libnvidia-compiler.so \
        -v /usr/lib/libnvidia-eglcore.so:/usr/lib/libnvidia-eglcore.so \
        -v /usr/lib/libnvidia-encode.so:/usr/lib/libnvidia-encode.so \
        -v /usr/lib/libnvidia-fatbinaryloader.so:/usr/lib/libnvidia-fatbinaryloader.so \
        -v /usr/lib/libnvidia-fbc.so:/usr/lib/libnvidia-fbc.so \
        -v /usr/lib/libnvidia-glcore.so:/usr/lib/libnvidia-glcore.so \
        -v /usr/lib/libnvidia-glsi.so:/usr/lib/libnvidia-glsi.so \
        -v /usr/lib/libnvidia-ifr.so:/usr/lib/libnvidia-ifr.so \
        -v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so \
        -v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so.1 \
        -v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so.440.44 \
        -v /usr/lib/libnvidia-opencl.so:/usr/lib/libnvidia-opencl.so \
        -v /usr/lib/libnvidia-ptxjitcompiler.so:/usr/lib/libnvidia-ptxjitcompiler.so \
        -v /usr/lib/libnvidia-tls.so:/usr/lib/libnvidia-tls.so \
        -v /usr/lib/libicuuc.so:/usr/lib/libicuuc.so \
        -v /usr/lib/libcuda.so:/usr/lib/libcuda.so \
        -v /usr/lib/libcuda.so.1:/usr/lib/libcuda.so.1 \
        -v /usr/lib/libicudata.so:/usr/lib/libicudata.so \
        --device /dev/nvidia0:/dev/nvidia0 \
        --device /dev/nvidiactl:/dev/nvidiactl \
        --device /dev/nvidia-uvm:/dev/nvidia-uvm \
        --device /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools \
nvidia/cuda:11.0.3-runtime nvidia-smi

 

Output is:

 

 

> nvidia/cuda:11.0.3-runtime nvidia-smi
Tue Aug  1 00:54:12 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  On   | 00000000:01:00.0 Off |                  N/A |
| 84%   89C    P2    58W / 180W |   1960MiB /  3018MiB |     90%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

 

 

This should work on any platform that has NVIDIA runtime library installed.

 

However, this still does not seem to work with the Jellyfin Docker container. I can configure NVENC and play videos fine, but the logs do not show h264_nvenc, and I see no process running in `nvidia-smi`.
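One way to see whether Jellyfin's bundled ffmpeg can reach the encoder at all is to query it inside the container; a sketch, where the container name and the /usr/lib/jellyfin-ffmpeg path are assumptions based on the official image layout:

# List the NVENC encoders as seen by Jellyfin's own ffmpeg build:
docker exec jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -encoders | grep nvenc
# And confirm the encode library is visible where the bind mounts put it:
docker exec jellyfin sh -c 'ls -l /usr/lib/libnvidia-encode.so*'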

 

The official docs point to using nvidia-container-toolkit: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

 

That's why I was looking at how to build it with DSM 7.2 kernel.

 

Running rffmpeg

 

My second idea was to use rffmpeg (remote ffmpeg, to offload transcoding to another machine). The plan was to run Jellyfin in Docker, configure rffmpeg, and then run the hardware-accelerated ffmpeg on the DSM host.

 

I downloaded the portable linux jellyfin-ffmpeg distribution https://github.com/jellyfin/jellyfin-ffmpeg/releases/tag/v5.1.3-2

 

Running it over SSH yields:

[h264_nvenc @ 0x55ce40d8c480] Driver does not support the required nvenc API version. Required: 12.0 Found: 9.1
[h264_nvenc @ 0x55ce40d8c480] The minimum required Nvidia driver for nvenc is (unknown) or newer
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

 

I think this is because of the old 440.44 driver DSM uses. jellyfin-ffmpeg is compiled against the latest https://github.com/FFmpeg/nv-codec-headers, while the DSM NVIDIA driver only supports NVENC API 9.1.

 

Confirming NVENC works

 

I confirmed that NVENC works with the official driver by installing Emby and trying out their packaged ffmpeg:

 

/volume1/@appstore/EmbyServer/bin/emby-ffmpeg -i /volume1/downloads/test.mkv -c:v h264_nvenc -b:v 1000k -c:a copy /volume1/downloads/test_nvenc.mp4

 

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     15282      C   /var/packages/EmbyServer/target/bin/ffmpeg   112MiB |
|    0     20921      C   ...ceStation/target/synoface/bin/synofaced  1108MiB |
|    0     32722      C   ...anceStation/target/synodva/bin/synodvad   834MiB |
+-----------------------------------------------------------------------------+

 

Next steps

 

The other thing I have yet to try is recompiling jellyfin-ffmpeg with older nv-codec-headers and using it inside the Jellyfin Docker container.
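If anyone wants to try that, the general idea would be pinning nv-codec-headers to the 9.1 SDK before building ffmpeg; a sketch only, and the sdk/9.1 branch name is my assumption about the upstream repo, so verify it exists first:

# Install headers matching the 440.44 driver's NVENC API 9.1, then build ffmpeg against them:
git clone https://github.com/FFmpeg/nv-codec-headers.git
cd nv-codec-headers
git checkout sdk/9.1            # assumed branch name for the 9.1 SDK
sudo make install PREFIX=/usr/local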

Edited by jerico
Spacing

  • 1 month later...

I'm running DVA3221 with a Nvidia 1050 Ti graphics card. I'm on DSM 7.2-64570 Update 1.

 

My goal is to have Plex hardware transcoding within Docker. Hardware transcoding without Docker, in the Synology Plex app, already works. It wasn't easy, but it's working. A piece I was missing was installing the NVIDIA Runtime Library from Package Center; I didn't see that mentioned anywhere in any instructions.

 

My drivers are loaded properly on the NAS:

root@nas:/volume1/docker/appdata# nvidia-smi
Sat Sep  2 11:26:23 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  On   | 00000000:01:00.0 Off |                  N/A |
| 30%   30C    P8    N/A /  75W |      0MiB /  4039MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

 

I believe I have successfully exposed the GPU to my Plex container, confirmed by running nvidia-smi from within it:

root@nas:/volume1/docker/appdata# docker exec plex nvidia-smi
Sat Sep  2 11:26:30 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  On   | 00000000:01:00.0 Off |                  N/A |
| 30%   30C    P8    N/A /  75W |      0MiB /  4039MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I accomplished this by copying over the files necessary for the nvidia-container-runtime as outlined in this post. Maybe some of these files don't work on 7.2?

 

However, Plex hardware transcoding does not work. I get the following in my Plex logs:

DEBUG - [Req#583/Transcode] Codecs: testing h264_nvenc (encoder)
DEBUG - [Req#583/Transcode] Codecs: hardware transcoding: testing API nvenc for device '' ()
ERROR - [Req#583/Transcode] [FFMPEG] - Cannot load libcuda.so.1
ERROR - [Req#583/Transcode] [FFMPEG] - Could not dynamically load CUDA
DEBUG - [Req#583/Transcode] Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: Operation not permitted
DEBUG - [Req#583/Transcode] Could not create hardware context for h264_nvenc
DEBUG - [Req#583/Transcode] Codecs: testing h264 (decoder) with hwdevice vaapi
DEBUG - [Req#583/Transcode] Codecs: hardware transcoding: testing API vaapi for device '' ()
DEBUG - [Req#583/Transcode] Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: Generic error in an 
DEBUG - [Req#583/Transcode] Could not create hardware context for h264
DEBUG - [Req#583/Transcode] Codecs: testing h264 (decoder) with hwdevice nvdec
DEBUG - [Req#583/Transcode] Codecs: hardware transcoding: testing API nvdec for device '' ()
ERROR - [Req#583/Transcode] [FFMPEG] - Cannot load libcuda.so.1
ERROR - [Req#583/Transcode] [FFMPEG] - Could not dynamically load CUDA
DEBUG - [Req#583/Transcode] Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: Operation not permitted
DEBUG - [Req#583/Transcode] Could not create hardware context for h264
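In case it helps to narrow this down, checking whether libcuda.so.1 is actually visible and resolvable inside the container might be worthwhile; a sketch, with "plex" as the example container name:

# Is the library mounted where the mapping says it should be?
docker exec plex ls -l /usr/lib/libcuda.so.1
# Can the dynamic linker inside the container find it? (requires ldconfig in the image)
docker exec plex ldconfig -p | grep -i libcuda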

 

 

Since native Plex works, the following emby-ffmpeg command also works:

/volume1/\@appstore/EmbyServer/bin/emby-ffmpeg -i source.mp4 -c:v h264_nvenc -b:v 1000k -c:a copy destination.mp4

 

 

Any ideas on what could be wrong?

 

Thanks!

Edited by fuzzypacket

  • 3 weeks later...
On 8/2/2023 at 4:55 AM, jerico said:

My goal was to make NVENC work on Jellyfin.

 

[…]

 

Thanks for sharing this info.

 

Your docker run command works fine for me with Emby in Docker; the only issue is that hardware encoding doesn't work. The GPU shows up under hardware decoding and that works fine, but no devices are shown under hardware encoding, whereas in the Emby app downloaded from Package Center the GPU appears in both the hardware decoding and encoding sections. So close, yet so far :)
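One thing that might be worth checking: the bind mounts above map libnvidia-encode.so but not a .so.1 name, and encoders typically load the .so.1 soname. A quick look inside the container (the container name is just an example):

# See which encode/CUDA libraries the container can actually see:
docker exec emby sh -c 'ls -l /usr/lib/libnvidia-encode.so* /usr/lib/libcuda.so*'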

 

# Emby Docker

 

[screenshots: Emby in Docker — GPU listed under hardware decoding but not under encoding]

 

# Emby Package

 

[screenshot: Emby package — GPU listed under both hardware decoding and encoding]

 

 

Edited by irishj

  • 3 weeks later...

Sorry for responding to this old topic. I have a DS918+ on a bare-metal machine with a GTX 1080. My goal is transcoding with the GPU using Jellyfin in Docker. Could someone help me? I tried unpacking the SPK files in order to modify the info, but the newer ones no longer open that way.


13 hours ago, Ilfe98 said:

Sorry for responding to this old topic. I have a DS918+ on a bare-metal machine with a GTX 1080. My goal is transcoding with the GPU using Jellyfin in Docker. Could someone help me? I tried unpacking the SPK files in order to modify the info, but the newer ones no longer open that way.

The DS918+ platform uses the built-in graphics (iGPU). If your CPU has one, you can install Jellyfin directly (without Docker).
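A quick way to confirm the iGPU is exposed before installing Jellyfin is to look for the render node (the same check as step 8 earlier in the thread):

# The Intel iGPU should show up as a DRI render node, e.g. renderD128:
ls -l /dev/dri/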

