DVA3221 loader development thread



Hello

 

Under @yanjun's supervision, here is a dedicated thread for the DVA3221 loader.

 

Here is the block you need to add to custom_config.json to support this loader:

 

{
    "id": "denverton-7.0.1-42218",
    "platform_version": "denverton-7.0.1-42218",
    "user_config_json": "denverton_user_config.json",
    "docker_base_image": "debian:8-slim",
    "redpill_lkm_make_target": "dev-v7",
    "compile_with": "toolkit_dev",
    "downloads": {
        "kernel": {
            "url": "https://global.download.synology.com/download/ToolChain/Synology%20NAS%20GPL%20Source/7.0-41890/denverton/linux-4.4.x.txz",
            "sha256": "7fe8e92ebf0a2fd30da10867d5165ae00b10b0a316286465ae9831ed3b598f0f"
        },
        "toolkit_dev": {
            "url": "https://sourceforge.net/projects/dsgpl/files/toolkit/DSM7.0/ds.denverton-7.0.dev.txz/download",
            "sha256": "6dc6818bad28daff4b3b8d27b5e12d0565b65ee60ac17e55c36d913462079f57"
        }
    },
    "redpill_lkm": {
        "source_url": "https://github.com/dogodefi/redpill-lkm.git",
        "branch": "develop"
    },
    "redpill_load": {
        "source_url": "https://github.com/dogodefi/redpill-load.git",
        "branch": "develop"
    }
},
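
For reference, once this block is added, building the image with the docker-based toolchain that reads custom_config.json typically comes down to something like the commands below (the script name and actions are an assumption on my side, check the README of the toolchain version you use; the TinyCore rploader commands shown later in this thread are an alternative path):

# Build the builder image for this platform, then generate the loader image
# (assumed redpill-tool-chain invocation; adapt to your copy's README)
./redpill_tool_chain.sh build denverton-7.0.1-42218
./redpill_tool_chain.sh auto denverton-7.0.1-42218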

 

@yanjun and @pocopico have updated their repositories with fixes, so it should currently work as long as you respect the usual prerequisites.

 

 

Tests I have already done:

 

Surveillance Station's advanced AI features work with an Nvidia GTX 1650 GPU (the same GPU as the official DVA3221).

 

Some notes:

- I don't know whether the Surveillance Station AI features work without a real SN/MAC, nor whether they work with a GPU other than the GTX 1650.

- Surveillance Station can run without any GPU and still provides the standard camera features of a normal NAS, with 8 licences available instead of 2. However, /var/log/messages and /var/log/kern.log will be flooded with messages about the missing GPU, with a risk of heavy log growth and disk space usage.

 

I was able to run the loader on a Proxmox VE virtual machine with GPU passthrough working.

BUT there are some prerequisites: https://pve.proxmox.com/wiki/Pci_passthrough

I personally had to configure the GRUB kernel command line like this: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction video=efifb:off"

 

Update:

With Proxmox 7.2 I had to change my GRUB command line to this: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init"
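
For completeness, applying such a change on the Proxmox host usually comes down to the following (a sketch; see the PVE wiki page linked above for the authoritative steps, and note that hosts booted with systemd-boot use /etc/kernel/cmdline and proxmox-boot-tool refresh instead of GRUB):

nano /etc/default/grub            # set GRUB_CMDLINE_LINUX_DEFAULT as shown above
update-grub                       # regenerate the GRUB configuration
reboot                            # the new kernel command line only applies after a reboot
dmesg | grep -e DMAR -e IOMMU     # after reboot, check that IOMMU is active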

 

 

I did not test on VMware ESXi.

 

The loader also works on bare metal (it is currently running on my system).

 

[screenshot]

 

On bare metal you may need to request that the DVA3221 be added for missing ext/modules here:

 

Until @buggy25200 is able to update his repo, or @pocopico adds the acpid ext module, you will find it here:

 

./rploader.sh ext denverton-7.0.1-42218 add https://raw.githubusercontent.com/OrpheeGT/redpill-ext/master/acpid/rpext-index.json

 

Thanks to @yanjun, @pocopico, @buggy25200, @jumkey, @IG-88 and anyone else I may have missed/forgotten, for their work on this loader :)

 

 

 

Edited by Orphée

Hello all,

 

Thanks a lot to all who contributed to the work on a loader for the DVA3221, magnificent. I think I can answer some of the questions raised by Orphée.

In a Proxmox 7 environment I have a DVA3221 that has been running for 2 days. The motherboard is an ASRock H370 with an 8th-gen i3. The loader was built in TinyCore (and before that with the TOSSP toolchain). The serial/MAC was generated there as well.

My first test was with a GTX 1050 card, which was recognised in DSM 7.0.1-u3. The log file, however, showed a continuous fault message: segvault Synodvad, error 4 in libdvacore.dll. No AI functionality.

Replacing it with a GTX 1650 works flawlessly. No more fault messages, and AI (video deep learning) is working. It recognises people, cars and license plates. It has 8 camera licenses, and I have 2 cameras active at the moment, both with AI. Hope this gives some answers.

 

Regards, Paul

Edited by PaulEvo
typo

PaulEvo said:

Hello all,

Thanks a lot to all who contributed to the work on a loader for the DVA3221, magnificent. I think I can answer some of the questions raised by Orphée.
In a Proxmox 7 environment I have a DVA3622 that has been running for 2 days. The motherboard is an ASRock H370 with an 8th-gen i3. The loader was built in TinyCore (and before that with the TSSOP toolchain). The serial/MAC was generated there as well.
My first test was with a GTX 1050 card, which was recognised in DSM 7.0.1-u3. The log file, however, showed a continuous fault message: segvault Synodvad, error 4 in libdvacore.dll. No AI functionality.
Replacing it with a GTX 1650 works flawlessly. No more fault messages, and AI (video deep learning) is working. It recognises people, cars and license plates. It has 8 camera licenses, and I have 2 cameras active at the moment, both with AI. Hope this gives some answers.

Regards, Paul
Great!
You probably mistyped and meant DVA3221? I don't know of any DVA3622.

But from your statement:

- A generated SN/MAC is not a problem for the advanced AI features!
- It did not work with a GTX 1050, which may confirm it only works with a GTX 1650; more tests are needed with other cards (2060/3080, etc.).

Regarding the log flood without the Nvidia GPU, I remember @flyride's topic about suppressing logs:

 

 

 

Maybe with some help we could apply the same kind of solution.

It would be helpful for those who only want the 8 camera licences and don't care about the advanced AI features.
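
For illustration only, the general shape of such a "drop" rule in syslog-ng would be something like the lines below; the file path, source name and match patterns are pure assumptions and would have to be adapted to DSM's actual /etc/syslog-ng configuration (and @flyride's method may well be different):

# hypothetical /etc/syslog-ng/conf.d/drop-gpu-flood.conf (DSM may not read a conf.d at all)
filter f_drop_gpu_flood {
    message("NVRM") or message("nvidia");   # assumed patterns for the missing-GPU messages
};
log {
    source(src);                 # "src" is a placeholder for the real source name in DSM's config
    filter(f_drop_gpu_flood);
    flags(final);                # stop processing so the matched messages never reach the log files
};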


New info:

 

The DVA3221 loader is not compatible with CPUs older than Haswell (same as the DS918+ loader).

I tried to run it on an HP Gen8 Proxmox VM and, just after boot, got a hard CPU lockup.
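
That fits the usual explanation that this platform's kernel/userland is built for Haswell-era instructions (FMA3/AVX2/MOVBE); which flag exactly triggers the lockup is not confirmed, but a quick pre-check on a candidate host is easy:

# If avx2/fma/movbe are missing from the flags, expect the hard lockup described above
grep -o -E 'avx2|fma|movbe' /proc/cpuinfo | sort -u
# On a Proxmox VM, also set the virtual CPU type to "host": the default kvm64 model hides these flags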

 

I wanted to edit the first post to keep it up to date but it seems I can't... @nicoueron? Thanks

Edited by Orphée

More news:

 

I tried to run nvidia-smi / CUDA inside Docker.

 

I took this post as a reference:

 

 

I only downloaded the docker.tar.xz file and deployed it as described in the post above.

 

I updated the config.toml file:

# cat /etc/nvidia-container-runtime/config.toml 
disable-require = false
#swarm-resource = "DOCKER_RESOURCE_GPU"
#accept-nvidia-visible-devices-envvar-when-unprivileged = true
#accept-nvidia-visible-devices-as-volume-mounts = false

[nvidia-container-cli]
#root = "/var/services/homes/admin/nvidia/NVIDIA-Linux-x86_64-440.44"
path = "/usr/bin/nvidia-container-cli"
environment = []
debug = "/var/log/nvidia-container-toolkit.log"
ldcache = "/etc/ld.so.cache"
load-kmods = true
#no-cgroups = false
user = "root:videodriver"
ldconfig = "@/opt/bin/ldconfig"

[nvidia-container-runtime]
debug = "/var/log/nvidia-container-runtime.log"

 

I restarted the Docker package and tried the docker run command. The first execution below is from the DVA3221 host's own Nvidia stack, the second one is from inside Docker:

root@DVA3221:/etc/nvidia-container-runtime# nvidia-smi 
Mon Mar 14 22:46:02 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1650    On   | 00000000:01:00.0 Off |                  N/A |
| 40%   56C    P0    41W /  75W |   1957MiB /  3908MiB |     45%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     18969      C   ...anceStation/target/synodva/bin/synodvad  1004MiB |
|    0     18970      C   ...ceStation/target/synoface/bin/synofaced   942MiB |
+-----------------------------------------------------------------------------+
root@DVA3221:/etc/nvidia-container-runtime# docker run --gpus all nvidia/cuda:10.2-runtime nvidia-smi
Mon Mar 14 21:46:09 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1650    On   | 00000000:01:00.0 Off |                  N/A |
| 40%   57C    P0    31W /  75W |   1957MiB /  3908MiB |     43%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

 

It seems to work. If you have a better/real test to confirm HW acceleration works in Docker, please tell me (I don't have a Plex Pass account, so I can't test Plex HW acceleration).
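
One possible end-to-end check (the image tag and one-liner below are only a suggestion, not something validated here): run a CUDA 10.2 based PyTorch container and ask it whether it really sees the GPU.

docker run --rm --gpus all pytorch/pytorch:1.8.1-cuda10.2-cudnn7-runtime \
    python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
# Expected on a working setup: True GeForce GTX 1650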

 

Logs from the nvidia-container-toolkit:

# cat nvidia-container-toolkit.log 

-- WARNING, the following logs are for debugging purposes only --

I0314 21:55:13.260863 28027 nvc.c:372] initializing library context (version=1.5.1, build=)
I0314 21:55:13.260881 28027 nvc.c:346] using root /
I0314 21:55:13.260885 28027 nvc.c:347] using ldcache /etc/ld.so.cache
I0314 21:55:13.260887 28027 nvc.c:348] using unprivileged user 0:937
I0314 21:55:13.260898 28027 nvc.c:389] attempting to load dxcore to see if we are running under Windows Subsystem for Linux (WSL)
I0314 21:55:13.260935 28027 nvc.c:391] dxcore initialization failed, continuing assuming a non-WSL environment
I0314 21:55:13.263206 28034 nvc.c:274] loading kernel module nvidia
I0314 21:55:13.263314 28034 nvc.c:278] running mknod for /dev/nvidiactl
I0314 21:55:13.263335 28034 nvc.c:282] running mknod for /dev/nvidia0
I0314 21:55:13.263348 28034 nvc.c:286] running mknod for all nvcaps in /dev/nvidia-caps
I0314 21:55:13.263353 28034 nvc.c:292] loading kernel module nvidia_uvm
I0314 21:55:13.263405 28034 nvc.c:296] running mknod for /dev/nvidia-uvm
I0314 21:55:13.263433 28034 nvc.c:301] loading kernel module nvidia_modeset
E0314 21:55:13.266258 28034 nvc.c:303] could not load kernel module nvidia_modeset
I0314 21:55:13.266397 28036 driver.c:101] starting driver service
I0314 21:55:13.267357 28027 nvc_container.c:388] configuring container with 'compute utility supervised'
I0314 21:55:13.267493 28027 nvc_container.c:236] selecting /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/local/cuda-10.2/compat/libcuda.so.440.118.02  
I0314 21:55:13.267516 28027 nvc_container.c:236] selecting /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/local/cuda-10.2/compat/libnvidia-fatbinaryloader.so.440.118.02
I0314 21:55:13.267528 28027 nvc_container.c:236] selecting /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/local/cuda-10.2/compat/libnvidia-ptxjitcompiler.so.440.118.02
I0314 21:55:13.267588 28027 nvc_container.c:408] setting pid to 28021
I0314 21:55:13.267591 28027 nvc_container.c:409] setting rootfs to /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549
I0314 21:55:13.267594 28027 nvc_container.c:410] setting owner to 0:0
I0314 21:55:13.267597 28027 nvc_container.c:411] setting bins directory to /usr/bin
I0314 21:55:13.267599 28027 nvc_container.c:412] setting libs directory to /usr/lib/x86_64-linux-gnu
I0314 21:55:13.267602 28027 nvc_container.c:413] setting libs32 directory to /usr/lib/i386-linux-gnu
I0314 21:55:13.267605 28027 nvc_container.c:414] setting cudart directory to /usr/local/cuda
I0314 21:55:13.267607 28027 nvc_container.c:415] setting ldconfig to @/opt/bin/ldconfig (host relative)
I0314 21:55:13.267610 28027 nvc_container.c:416] setting mount namespace to /proc/28021/ns/mnt
I0314 21:55:13.267613 28027 nvc_container.c:418] setting devices cgroup to /sys/fs/cgroup/devices/docker/665bd7b6e7f2c9f452b1b9edf9bad588a2ba5b3ffcb349d35bf62cb6452af411
I0314 21:55:13.267617 28027 nvc_info.c:758] requesting driver information with ''
I0314 21:55:13.268274 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libvdpau_nvidia.so.440.44
I0314 21:55:13.268485 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-tls.so.440.44
I0314 21:55:13.268570 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-ptxjitcompiler.so.440.44
I0314 21:55:13.268648 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-opencl.so.440.44
I0314 21:55:13.268698 28027 nvc_info.c:171] selecting /usr/lib/libnvidia-ml.so.440.44
I0314 21:55:13.268751 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-ifr.so.440.44
I0314 21:55:13.268849 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-glsi.so.440.44
I0314 21:55:13.268930 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-glcore.so.440.44
I0314 21:55:13.269014 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-fbc.so.440.44
I0314 21:55:13.269094 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-fatbinaryloader.so.440.44
I0314 21:55:13.269172 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-encode.so.440.44
I0314 21:55:13.269250 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-eglcore.so.440.44
I0314 21:55:13.269329 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-compiler.so.440.44
I0314 21:55:13.269406 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-cfg.so.440.44
I0314 21:55:13.269484 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvcuvid.so.440.44
I0314 21:55:13.269632 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libcuda.so.440.44
I0314 21:55:13.269735 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libGLX_nvidia.so.440.44
I0314 21:55:13.269816 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libGLESv2_nvidia.so.440.44
I0314 21:55:13.269894 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libGLESv1_CM_nvidia.so.440.44
I0314 21:55:13.269974 28027 nvc_info.c:171] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libEGL_nvidia.so.440.44
W0314 21:55:13.270022 28027 nvc_info.c:397] missing library libnvidia-nscq.so
W0314 21:55:13.270026 28027 nvc_info.c:397] missing library libnvidia-allocator.so
W0314 21:55:13.270029 28027 nvc_info.c:397] missing library libnvidia-ngx.so
W0314 21:55:13.270031 28027 nvc_info.c:397] missing library libnvidia-opticalflow.so
W0314 21:55:13.270034 28027 nvc_info.c:397] missing library libnvidia-rtcore.so
W0314 21:55:13.270037 28027 nvc_info.c:397] missing library libnvoptix.so
W0314 21:55:13.270039 28027 nvc_info.c:397] missing library libnvidia-glvkspirv.so
W0314 21:55:13.270042 28027 nvc_info.c:397] missing library libnvidia-cbl.so
W0314 21:55:13.270044 28027 nvc_info.c:401] missing compat32 library libnvidia-ml.so
W0314 21:55:13.270047 28027 nvc_info.c:401] missing compat32 library libnvidia-cfg.so
W0314 21:55:13.270050 28027 nvc_info.c:401] missing compat32 library libnvidia-nscq.so
W0314 21:55:13.270052 28027 nvc_info.c:401] missing compat32 library libcuda.so
W0314 21:55:13.270055 28027 nvc_info.c:401] missing compat32 library libnvidia-opencl.so
W0314 21:55:13.270058 28027 nvc_info.c:401] missing compat32 library libnvidia-ptxjitcompiler.so
W0314 21:55:13.270060 28027 nvc_info.c:401] missing compat32 library libnvidia-fatbinaryloader.so
W0314 21:55:13.270063 28027 nvc_info.c:401] missing compat32 library libnvidia-allocator.so
W0314 21:55:13.270066 28027 nvc_info.c:401] missing compat32 library libnvidia-compiler.so
W0314 21:55:13.270068 28027 nvc_info.c:401] missing compat32 library libnvidia-ngx.so
W0314 21:55:13.270071 28027 nvc_info.c:401] missing compat32 library libvdpau_nvidia.so
W0314 21:55:13.270073 28027 nvc_info.c:401] missing compat32 library libnvidia-encode.so
W0314 21:55:13.270076 28027 nvc_info.c:401] missing compat32 library libnvidia-opticalflow.so
W0314 21:55:13.270079 28027 nvc_info.c:401] missing compat32 library libnvcuvid.so
W0314 21:55:13.270081 28027 nvc_info.c:401] missing compat32 library libnvidia-eglcore.so
W0314 21:55:13.270084 28027 nvc_info.c:401] missing compat32 library libnvidia-glcore.so
W0314 21:55:13.270086 28027 nvc_info.c:401] missing compat32 library libnvidia-tls.so
W0314 21:55:13.270089 28027 nvc_info.c:401] missing compat32 library libnvidia-glsi.so
W0314 21:55:13.270092 28027 nvc_info.c:401] missing compat32 library libnvidia-fbc.so
W0314 21:55:13.270094 28027 nvc_info.c:401] missing compat32 library libnvidia-ifr.so
W0314 21:55:13.270097 28027 nvc_info.c:401] missing compat32 library libnvidia-rtcore.so
W0314 21:55:13.270102 28027 nvc_info.c:401] missing compat32 library libnvoptix.so
W0314 21:55:13.270105 28027 nvc_info.c:401] missing compat32 library libGLX_nvidia.so
W0314 21:55:13.270107 28027 nvc_info.c:401] missing compat32 library libEGL_nvidia.so
W0314 21:55:13.270110 28027 nvc_info.c:401] missing compat32 library libGLESv2_nvidia.so
W0314 21:55:13.270112 28027 nvc_info.c:401] missing compat32 library libGLESv1_CM_nvidia.so
W0314 21:55:13.270115 28027 nvc_info.c:401] missing compat32 library libnvidia-glvkspirv.so
W0314 21:55:13.270118 28027 nvc_info.c:401] missing compat32 library libnvidia-cbl.so
I0314 21:55:13.270159 28027 nvc_info.c:297] selecting /usr/bin/nvidia-smi
I0314 21:55:13.270342 28027 nvc_info.c:297] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-debugdump
I0314 21:55:13.270378 28027 nvc_info.c:297] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-persistenced
I0314 21:55:13.270421 28027 nvc_info.c:297] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-cuda-mps-control
I0314 21:55:13.270459 28027 nvc_info.c:297] selecting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-cuda-mps-server
W0314 21:55:13.270462 28027 nvc_info.c:423] missing binary nv-fabricmanager
W0314 21:55:13.270472 28027 nvc_info.c:347] missing firmware path /lib/firmware/nvidia/440.44
I0314 21:55:13.270482 28027 nvc_info.c:520] listing device /dev/nvidiactl
I0314 21:55:13.270485 28027 nvc_info.c:520] listing device /dev/nvidia-uvm
I0314 21:55:13.270488 28027 nvc_info.c:520] listing device /dev/nvidia-uvm-tools
I0314 21:55:13.270491 28027 nvc_info.c:520] listing device /dev/nvidia-modeset
W0314 21:55:13.270501 28027 nvc_info.c:347] missing ipc path /var/run/nvidia-persistenced/socket
W0314 21:55:13.270509 28027 nvc_info.c:347] missing ipc path /var/run/nvidia-fabricmanager/socket
W0314 21:55:13.270517 28027 nvc_info.c:347] missing ipc path /tmp/nvidia-mps
I0314 21:55:13.270520 28027 nvc_info.c:814] requesting device information with ''
I0314 21:55:13.275986 28027 nvc_info.c:705] listing device /dev/nvidia0 (GPU-1e98e62a-69f7-80ee-a2d2-ea047ddb96d2 at 00000000:01:00.0)
I0314 21:55:13.276022 28027 nvc_mount.c:344] mounting tmpfs at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/proc/driver/nvidia
I0314 21:55:13.276256 28027 nvc_mount.c:112] mounting /usr/bin/nvidia-smi at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/bin/nvidia-smi
I0314 21:55:13.276293 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-debugdump at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/bin/nvidia-debugdump
I0314 21:55:13.276329 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-persistenced at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/bin/nvidia-persistenced
I0314 21:55:13.276390 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-cuda-mps-control at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/bin/nvidia-cuda-mps-control
I0314 21:55:13.276428 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/bin/nvidia-cuda-mps-server at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/bin/nvidia-cuda-mps-server
I0314 21:55:13.276496 28027 nvc_mount.c:112] mounting /usr/lib/libnvidia-ml.so.440.44 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.440.44
I0314 21:55:13.276531 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-cfg.so.440.44 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.440.44
I0314 21:55:13.276577 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libcuda.so.440.44 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libcuda.so.440.44
I0314 21:55:13.276610 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-opencl.so.440.44 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.440.44
I0314 21:55:13.276643 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-ptxjitcompiler.so.440.44 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.440.44
I0314 21:55:13.276676 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-fatbinaryloader.so.440.44 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libnvidia-fatbinaryloader.so.440.44
I0314 21:55:13.276724 28027 nvc_mount.c:112] mounting /volume1/@appstore/NVIDIARuntimeLibrary/nvidia/lib/libnvidia-compiler.so.440.44 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.440.44
I0314 21:55:13.276740 28027 nvc_mount.c:524] creating symlink /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/usr/lib/x86_64-linux-gnu/libcuda.so -> libcuda.so.1
I0314 21:55:13.276798 28027 nvc_mount.c:208] mounting /dev/nvidiactl at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/dev/nvidiactl
I0314 21:55:13.276815 28027 nvc_mount.c:499] whitelisting device node 195:255
I0314 21:55:13.276836 28027 nvc_mount.c:208] mounting /dev/nvidia-uvm at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/dev/nvidia-uvm
I0314 21:55:13.276847 28027 nvc_mount.c:499] whitelisting device node 246:0
I0314 21:55:13.276866 28027 nvc_mount.c:208] mounting /dev/nvidia-uvm-tools at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/dev/nvidia-uvm-tools
I0314 21:55:13.276878 28027 nvc_mount.c:499] whitelisting device node 246:1
I0314 21:55:13.276900 28027 nvc_mount.c:208] mounting /dev/nvidia0 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/dev/nvidia0
I0314 21:55:13.276940 28027 nvc_mount.c:412] mounting /proc/driver/nvidia/gpus/0000:01:00.0 at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549/proc/driver/nvidia/gpus/0000:01:00.0
I0314 21:55:13.276954 28027 nvc_mount.c:499] whitelisting device node 195:0
I0314 21:55:13.276963 28027 nvc_ldcache.c:354] executing /opt/bin/ldconfig from host at /volume1/@docker/btrfs/subvolumes/8bbb7889d383ef416dd4f81b0e02627dcbb71050b87c3161dd15ebd62b235549
W0314 21:55:13.288949 1 nvc_ldcache.c:324] seccomp is disabled, all syscalls are allowed
I0314 21:55:14.574998 28027 nvc.c:423] shutting down library context
I0314 21:55:14.575440 28036 driver.c:163] terminating driver service
I0314 21:55:14.575656 28027 driver.c:203] driver service terminated successfully

 

Edited by Orphée

Following @Zowlverein's posts regarding HW acceleration/transcoding:

 

[screenshot]

 

I installed the SynoCommunity ffmpeg package.

 

I took his modified package and extracted it.

Then I overwrote the default /var/packages/ffmpeg/target/ with his modified version.

 

I installed Jellyfin portable and the ASP.NET runtime.

 

Then I launched a movie from my Firefox browser:

[screenshot]

 

[screenshot]

 

[screenshot]

[screenshot]

 

As you can see, load on the 6 CPU cores stays quite low, the stream is being transcoded, and ffmpeg shows up in nvidia-smi!

 

I will probably never use it, but I wanted to confirm that it is possible.
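
For anyone who wants to double-check without Jellyfin, a rough command-line test of the patched ffmpeg could look like the following (the file paths are placeholders, and it assumes the modified build was compiled with NVENC support):

# Re-encode a sample with NVENC and copy the audio; while this runs, the ffmpeg
# process should appear in nvidia-smi in another shell
/var/packages/ffmpeg/target/bin/ffmpeg -hwaccel cuda -i /volume1/video/test.mkv \
    -c:v h264_nvenc -b:v 5M -c:a copy /volume1/video/test_nvenc.mkv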

Edited by Orphée

19 hours ago, PaulEvo said:

Hello all,

 

Thanks a lot to all who contributed to the work on a loader for the DVA3221, magnificent. I think I can answer some of the questions raised by Orphée.

In a Proxmox 7 environment I have a DVA3221 that has been running for 2 days. The motherboard is an ASRock H370 with an 8th-gen i3. The loader was built in TinyCore (and before that with the TOSSP toolchain). The serial/MAC was generated there as well.

My first test was with a GTX 1050 card, which was recognised in DSM 7.0.1-u3. The log file, however, showed a continuous fault message: segvault Synodvad, error 4 in libdvacore.dll. No AI functionality.

Replacing it with a GTX 1650 works flawlessly. No more fault messages, and AI (video deep learning) is working. It recognises people, cars and license plates. It has 8 camera licenses, and I have 2 cameras active at the moment, both with AI. Hope this gives some answers.

 

Regards, Paul

I wonder if the GTX 1050 failed because it has Pascal cores while the 1650 has Turing cores, or if it is a case where they somehow hardcoded the 1650 into the code.


Hello,
I have successfully launched the DVA3221. Here is what I did:

sudo ./rploader.sh update now

    "pid": "0x0001",
    "vid": "0x46f4",
./rploader.sh serialgen DVA3221
./rploader.sh ext denverton-7.0.1-42218 add https://raw.githubusercontent.com/OrpheeGT/redpill-ext/master/acpid/rpext-index.json
sudo ./rploader.sh build denverton-7.0.1-42218
Generation was OK; I recovered the image from the loader and sent it to Proxmox.

[screenshot]
The only problem is that I have no network.
From what I understood, it should rely on a VirtIO driver.
However, none of them work:

[screenshot]

No DHCP request gets through.

I managed to install a DS3615xs

 

[screenshot]

 

Any idea for the DVA network?

 

Thank you


59 minutes ago, Orphée said:

I can't do better than this...

 

Sorry, I didn't see this message.

 

Here are the commands I typed:

./rploader.sh ext denverton-7.0.1-42218 add https://raw.githubusercontent.com/OrpheeGT/redpill-ext/master/acpid/rpext-index.json

./rploader.sh ext denverton-7.0.1-42218 add https://raw.githubusercontent.com/OrpheeGT/redpill-ext/master/virtio/rpext-index.json

sudo ./rploader.sh build denverton-7.0.1-42218

 

[screenshot]

thanks for your help


Hey guys, I did a quick search on the GPU; it's kinda expensive. However, I did find this link, used and on sale for the moment: https://www.stomizef.com/product/portanyz-msi-nvidia-geforce-gtx-1650-super-4gb-gddr6-pci-express-3-0-graphics-card-black-gray/ Use at your own risk. Just trying to pass on a good deal, as I may be looking for one myself at some point.

