XPEnology Community

Posts posted by jerico

  1. I have DVA3219 working with a GT 1030, with both Facial Recognition and Deep Video Analytics running. I'd say it's a good low-power alternative for enabling the AI tasks.

     

    I used the arpl-i18n loader. DVA3219 is listed under the beta platforms.

     

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GT 1030     On   | 00000000:01:00.0 Off |                  N/A |
    | 58%   60C    P0    N/A /  30W |   1516MiB /  2001MiB |     74%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0     26525      C   ...ceStation/target/synoface/bin/synofaced   746MiB |
    |    0     26527      C   ...anceStation/target/synodva/bin/synodvad   760MiB |
    +-----------------------------------------------------------------------------+

     

    (Screenshots attached.)

  2. Hello all, just want to share that for Pascal-based GPUs, Facial Recognition and DVA work if the platform is DVA3219, which has a 1050 Ti bundled.

     

    I tried DVA3221 first. It "seemed" to work: the GPU was detected in Info Center and nvidia-smi showed running tasks. But when I tried Facial Recognition and Object Detection, nothing was actually being detected.

     

    DVA3219 is available on arpl-i18n under "beta" platforms.

     

    My setup is Proxmox + a GTX 1060 3 GB with GPU passthrough.

    - For Proxmox, I had to blacklist the Nvidia drivers on the host to pass the GPU through successfully (minimal sketch below).
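
    Assuming a typical Proxmox host (the file name is my choice and the exact module list can vary by setup), the blacklist looks roughly like this:

    # /etc/modprobe.d/blacklist-nvidia.conf on the Proxmox host
    blacklist nouveau
    blacklist nvidia
    blacklist nvidiafb
    blacklist nvidia_drm

    # rebuild the initramfs and reboot so the host never binds the GPU:
    # update-initramfs -u -k all && reboot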

  3. My goal was to make NVENC work in Jellyfin.

     

    Docker

     

    I was able to expose my GPU in Docker without libnvidia-container by doing:

    # bind-mount the DSM host's NVIDIA userspace tools/libraries and pass through the device nodes
    sudo docker run \
            -e NVIDIA_VISIBLE_DEVICES=all \
            -v /usr/local/bin/nvidia-smi:/usr/local/bin/nvidia-smi \
            -v /usr/local/bin/nvidia-cuda-mps-control:/usr/local/bin/nvidia-cuda-mps-control \
            -v /usr/local/bin/nvidia-persistenced:/usr/local/bin/nvidia-persistenced \
            -v /usr/local/bin/nvidia-cuda-mps-server:/usr/local/bin/nvidia-cuda-mps-server \
            -v /usr/local/bin/nvidia-debugdump:/usr/local/bin/nvidia-debugdump \
            -v /usr/lib/libnvcuvid.so:/usr/lib/libnvcuvid.so \
            -v /usr/lib/libnvidia-cfg.so:/usr/lib/libnvidia-cfg.so \
            -v /usr/lib/libnvidia-compiler.so:/usr/lib/libnvidia-compiler.so \
            -v /usr/lib/libnvidia-eglcore.so:/usr/lib/libnvidia-eglcore.so \
            -v /usr/lib/libnvidia-encode.so:/usr/lib/libnvidia-encode.so \
            -v /usr/lib/libnvidia-fatbinaryloader.so:/usr/lib/libnvidia-fatbinaryloader.so \
            -v /usr/lib/libnvidia-fbc.so:/usr/lib/libnvidia-fbc.so \
            -v /usr/lib/libnvidia-glcore.so:/usr/lib/libnvidia-glcore.so \
            -v /usr/lib/libnvidia-glsi.so:/usr/lib/libnvidia-glsi.so \
            -v /usr/lib/libnvidia-ifr.so:/usr/lib/libnvidia-ifr.so \
            -v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so \
            -v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so.1 \
            -v /usr/lib/libnvidia-ml.so.440.44:/usr/lib/libnvidia-ml.so.440.44 \
            -v /usr/lib/libnvidia-opencl.so:/usr/lib/libnvidia-opencl.so \
            -v /usr/lib/libnvidia-ptxjitcompiler.so:/usr/lib/libnvidia-ptxjitcompiler.so \
            -v /usr/lib/libnvidia-tls.so:/usr/lib/libnvidia-tls.so \
            -v /usr/lib/libicuuc.so:/usr/lib/libicuuc.so \
            -v /usr/lib/libcuda.so:/usr/lib/libcuda.so \
            -v /usr/lib/libcuda.so.1:/usr/lib/libcuda.so.1 \
            -v /usr/lib/libicudata.so:/usr/lib/libicudata.so \
            --device /dev/nvidia0:/dev/nvidia0 \
            --device /dev/nvidiactl:/dev/nvidiactl \
            --device /dev/nvidia-uvm:/dev/nvidia-uvm \
            --device /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools \
            nvidia/cuda:11.0.3-runtime nvidia-smi

     

    Output is:

     

     

    > nvidia/cuda:11.0.3-runtime nvidia-smi
    Tue Aug  1 00:54:12 2023       
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: N/A      |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 106...  On   | 00000000:01:00.0 Off |                  N/A |
    | 84%   89C    P2    58W / 180W |   1960MiB /  3018MiB |     90%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    +-----------------------------------------------------------------------------+

     

     

    This should work on any platform that has the NVIDIA runtime libraries installed.
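
    A quick sanity check before trying the bind-mounts is to confirm the host actually ships those files (same paths as used above):

    ls -l /usr/local/bin/nvidia-smi /usr/lib/libcuda.so* /dev/nvidia*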

     

    However, this still does not seem to work with the Jellyfin Docker image. I can configure NVENC and play videos fine, but the logs do not show h264_nvenc, and I see no process running in `nvidia-smi`.

     

    The official docs point to using the nvidia-container-toolkit: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
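
    For comparison, with the toolkit installed the same test collapses to the standard invocation from those docs (this is what does not currently exist for DSM):

    sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.0.3-runtime nvidia-smi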

     

    That's why I was looking into how to build it against the DSM 7.2 kernel.

     

    Running rffmpeg

     

    My second idea was to use rffmpeg (remote ffmpeg, which offloads transcoding to another machine). The plan was to run Jellyfin in Docker, configure rffmpeg, and then run the hardware-accelerated ffmpeg on the DSM host, as sketched below.
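
    From memory of the rffmpeg README (so treat the paths and commands as illustrative, not exact), the mechanism is a pair of symlinks so Jellyfin's configured ffmpeg path transparently invokes rffmpeg:

    # inside the Jellyfin container
    ln -s /usr/local/bin/rffmpeg /usr/local/bin/ffmpeg
    ln -s /usr/local/bin/rffmpeg /usr/local/bin/ffprobe
    # rffmpeg then SSHes to the configured remote host (the DSM box here)
    # and runs the real, hardware-accelerated ffmpeg there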

     

    I downloaded the portable Linux jellyfin-ffmpeg distribution: https://github.com/jellyfin/jellyfin-ffmpeg/releases/tag/v5.1.3-2
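
    The test transcode I ran was along these lines (filenames are placeholders; the flags mirror the Emby test further down):

    ./ffmpeg -i test.mkv -c:v h264_nvenc -b:v 1000k -c:a copy test_nvenc.mp4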

     

    Running it over SSH yields:

    [h264_nvenc @ 0x55ce40d8c480] Driver does not support the required nvenc API version. Required: 12.0 Found: 9.1
    [h264_nvenc @ 0x55ce40d8c480] The minimum required Nvidia driver for nvenc is (unknown) or newer
    Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
    Conversion failed!

     

    I think this is because DSM ships an old 440.44 driver. jellyfin-ffmpeg is compiled against the latest https://github.com/FFmpeg/nv-codec-headers, which requires NVENC API 12.0, while the DSM Nvidia driver only supports API 9.1.

     

    Confirming NVENC works

     

    I confirmed NVENC works with the official driver by installing Emby and trying out their packaged ffmpeg:

     

    /volume1/@appstore/EmbyServer/bin/emby-ffmpeg -i /volume1/downloads/test.mkv -c:v h264_nvenc -b:v 1000k -c:a copy /volume1/downloads/test_nvenc.mp4

     

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0     15282      C   /var/packages/EmbyServer/target/bin/ffmpeg   112MiB |
    |    0     20921      C   ...ceStation/target/synoface/bin/synofaced  1108MiB |
    |    0     32722      C   ...anceStation/target/synodva/bin/synodvad   834MiB |
    +-----------------------------------------------------------------------------+

     

    Next steps

     

    The other thing I have yet to try is recompiling jellyfin-ffmpeg against an older nv-codec-headers and using it inside the Jellyfin Docker image; a rough sketch is below.
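
    Roughly, that build would look like this (the sdk/9.1 branch name comes from the upstream nv-codec-headers repo and should match the 9.1 API the 440.44 driver reports; I haven't verified this against jellyfin-ffmpeg's own build scripts):

    git clone https://github.com/FFmpeg/nv-codec-headers.git
    cd nv-codec-headers
    git checkout sdk/9.1   # headers for NVENC API 9.1, the max driver 440.44 supports
    sudo make install      # installs the ffnvcodec pkg-config file

    # then, in the ffmpeg / jellyfin-ffmpeg source tree:
    ./configure --enable-nvenc
    make -j"$(nproc)"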

  4. On 11/30/2021 at 8:31 PM, Zowlverein said:

    Have got cuda in docker working (at least nvidia-smi with the sample container: sudo docker run --gpus all nvidia/cuda:10.2-runtime nvidia-smi)

     

    Required files here: https://gofile.io/d/zYTBCP

    /usr/bin/nvidia-container-toolkit (v 1.5.1 from ubuntu 16.04 build)

    /usr/bin/nvidia-container-cli (custom build with syscall security removed)

    also need symlink (ln -s /usr/bin/nvidia-container-toolkit /usr/bin/nvidia-container-runtime-hook)

     

    /lib/libseccomp.so.2.5.1 (from ubuntu 16.04 - also ln -s /lib/libseccomp.so.2.5.1 /lib/libseccomp.so.2)

    /lib/libnvidia-container.so.1.5.1 (from same custom build - also ln -s /lib/libnvidia-container.so.1.5.1 /lib/libnvidia-container.so.1)

     

    ldconfig - copied to /opt/bin/ from ubuntu installation (version in entware/optware didn't work)

     

    /etc/nvidia-container-runtime/config.toml (need to update file to point to local ldconfig and nvidia drive paths)

     

     


     

    I know this is an old post, but can anybody explain how to build `libnvidia-container` against the DSM 7.2 kernel?
