RedPill - the new loader for 6.2.4 - Discussion



Hey people, please don’t share loader images 🙏🏻

 

They contain software that is the property of Synology and is not open source. ThorGroup specifically designed the toolchain and build process to download the freely available software from Synology's own servers when building the images. This way no one can be accused of redistributing Synology's intellectual property and potentially getting the project and the forums shut down.
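As an illustration, a build step along these lines can pull the DSM PAT straight from Synology's public download servers at build time instead of shipping it. The mirror hostname and URL layout below are assumptions modeled on Synology's public download links, not taken from the RedPill toolchain itself, and the model, version, and build number are placeholders:

```shell
# Hypothetical sketch of a "fetch, don't redistribute" build step.
# URL layout is an assumption based on Synology's public download links;
# check the actual toolchain configuration for the real values.
MODEL="DS3615xs"      # placeholder target model
VERSION="7.0.1"       # placeholder DSM version
BUILD="42218"         # placeholder build number
PAT_URL="https://global.download.synology.com/download/DSM/release/${VERSION}/${BUILD}/DSM_${MODEL}_${BUILD}.pat"
echo "Would download: ${PAT_URL}"
# curl -fLo "DSM_${MODEL}_${BUILD}.pat" "${PAT_URL}"   # the actual fetch
```

Each user's build downloads its own copy, so nothing of Synology's ever needs to be rehosted.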

43 minutes ago, havast said:

https://skynet.zone/loader.7z

 

Here is the working loader in VMDK format. I hope the masters will do a 3617xs version; I have a real SN/MAC pair for that NAS. (I have a few real 918+ SN/MAC pairs too, but unfortunately my CPU is too old for that 😕)

Hope I can find a solution. I think it's impossible to get a real 3615xs SN and MAC :(

And it works :) Thank you very much. After struggling for a few days trying to install 918+, this one works on ESXi :) Thanks once more.


Just my feedback:

 

I'd been testing RedPill DS918 using ESXi on a new machine for HW transcoding.

 

But out of curiosity I did an install on my Gen8 MicroServer as bare metal. (This usually runs 6.2.3/Jun's 1.03b for most of my services/Dockers and VMs, plus a VDSM 7.0, as I was testing the Photos app (face recognition works).)

 

Spec:

Xeon E3-1265L V2

16GB 

H220/LSI HBA card (IT mode): disabled, as I don't believe there is SAS support yet; used the onboard B120i in AHCI mode with spare disks

Onboard Broadcom dual NIC/HP Ethernet 1Gb 2-port 332i: both working (I use different VLANs on each)

APC UPS connected via USB: working

 

So far I have not had any reboots.

Installed Docker, already disabled IPv6, and I'm about to start copying some of my Docker configs across (to check if I can replicate what others are seeing).

 

 

 

 

 

49 minutes ago, scoobdriver said:

Just my feedback:

I'd been testing RedPill DS918 using ESXi on a new machine for HW transcoding.

But out of curiosity I did an install on my Gen8 MicroServer as bare metal. (This usually runs 6.2.3/Jun's 1.03b for most of my services/Dockers and VMs, plus a VDSM 7.0, as I was testing the Photos app (face recognition works).)

Spec:

Xeon E3-1265L V2

16GB

H220/LSI HBA card (IT mode): disabled, as I don't believe there is SAS support yet; used the onboard B120i in AHCI mode with spare disks

Onboard Broadcom dual NIC/HP Ethernet 1Gb 2-port 332i: both working (I use different VLANs on each)

APC UPS connected via USB: working

So far I have not had any reboots.

Installed Docker, already disabled IPv6, and I'm about to start copying some of my Docker configs across (to check if I can replicate what others are seeing).

I really like what I read here 😛


Got a brand new Intel dual-port NIC, and finally I was able to get a web UI with DiskStation! Unfortunately the same issues from my previous attempts with other loaders still persist: no disk is found unless I'm using a loader specifically modified with an MBR partition table. Looks like I'll have to wait until the final release and for members to tweak the loaders, as I have no idea how members like Genysys patched Jun's loader to work with an MBR partition.

Bare metal Intel Core 2 Quad Q6600, 4GB RAM, 3 physical disks hooked up via SATA directly to the board, Intel Pro/1000 dual Gigabit Ethernet NIC (3 interfaces including the onboard Ethernet).
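For anyone hitting the same "no disk found" symptom, it may help to check whether a given loader image carries an MBR or GPT partition table before flashing, since BIOS-only boards of this era cannot boot GPT images. A minimal sketch, assuming a raw image file (the check relies on GPT's protective-MBR convention of partition type 0xEE at byte 450 of sector 0):

```shell
# part_table_type: print "gpt" if sector 0 of a raw image carries a GPT
# protective MBR (partition type 0xEE at byte offset 450), else "mbr/other".
part_table_type() {
    local t
    t=$(dd if="$1" bs=1 skip=450 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n')
    if [ "$t" = "ee" ]; then echo "gpt"; else echo "mbr/other"; fi
}
# Usage ("loader.img" is a placeholder name):
#   part_table_type loader.img
```

On a flashed USB stick, `fdisk -l /dev/sdX` reports the same thing as "Disklabel type: dos" (MBR) or "gpt".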


 

 

1 hour ago, scoobdriver said:

Just my feedback:

I'd been testing RedPill DS918 using ESXi on a new machine for HW transcoding.

But out of curiosity I did an install on my Gen8 MicroServer as bare metal. (This usually runs 6.2.3/Jun's 1.03b for most of my services/Dockers and VMs, plus a VDSM 7.0, as I was testing the Photos app (face recognition works).)

Spec:

Xeon E3-1265L V2

16GB

H220/LSI HBA card (IT mode): disabled, as I don't believe there is SAS support yet; used the onboard B120i in AHCI mode with spare disks

Onboard Broadcom dual NIC/HP Ethernet 1Gb 2-port 332i: both working (I use different VLANs on each)

APC UPS connected via USB: working

So far I have not had any reboots.

Installed Docker, already disabled IPv6, and I'm about to start copying some of my Docker configs across (to check if I can replicate what others are seeing).

Do I understand this correctly: you have installed DSM 7.0 with the 918 loader on bare metal on the Gen8?

Which loader and .pat exactly?

Thanks.

 

Edited by nemesis122
19 minutes ago, nemesis122 said:

Do I understand this correctly: you have installed DSM 7.0 with the 918 loader on bare metal on the Gen8?

No. Apologies if I was not clear (it's been mentioned many times that DS918 will not work on the Gen8 MicroServer; the CPU generation is too old).

I've been trying 918 on a newer machine in ESXi.
The bare metal Gen8 is running 7.0.1 DS3615xs.


I now have Docker installed on my DSM 7.0 on DS3615xs and have started an nginx container. Up and running for 50 minutes, no crash, nothing.

 

 

Adding an InfluxDB container now... I'll keep you updated.

 

 

30 minutes running the InfluxDB container, no crash, still stable.

Edited by altas
2 hours ago, WiteWulf said:

Hey people, please don’t share loader images 🙏🏻

 

They contain software that is the property of Synology and is not open source. ThorGroup specifically designed the toolchain and build process to download the freely available software from Synology's own servers when building the images. This way no one can be accused of redistributing Synology's intellectual property and potentially getting the project and the forums shut down.

 

I have removed the file, sorry. 

5 hours ago, coint_cho said:

Got a brand new Intel dual-port NIC, and finally I was able to get a web UI with DiskStation! Unfortunately the same issues from my previous attempts with other loaders still persist: no disk is found unless I'm using a loader specifically modified with an MBR partition table. Looks like I'll have to wait until the final release and for members to tweak the loaders, as I have no idea how members like Genysys patched Jun's loader to work with an MBR partition.

Bare metal Intel Core 2 Quad Q6600, 4GB RAM, 3 physical disks hooked up via SATA directly to the board, Intel Pro/1000 dual Gigabit Ethernet NIC (3 interfaces including the onboard Ethernet).

The Q6600 is too old for 918+; 918+ needs at least Haswell or later. Try 3615xs.


I found that if the system crashes, the NAS login page accessed from a mobile phone before the crash shows the old DSM login interface instead of the new DSM 7.0 one.


@WiteWulf Are you and the others having problems with Docker on 3615xs running Ryzen CPUs, by any chance?

 

My observation is that CPU soft lockups occur randomly with modern Ryzen CPUs on older kernels, which causes the system to freeze.

I made this observation when building Vagrant base boxes with Packer on a Ryzen 7 4800H CPU; especially CentOS (which has a 3.10.x kernel) tends to freeze randomly during Packer execution (which just creates a VirtualBox VM, installs the OS from the ISO, and provisions additional software). I have never had this experience on Intel CPUs.
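For comparison between setups, the kernel watchdog that produces these reports writes to the ring buffer, so a quick post-freeze check can be sketched like this (needs root on DSM; both commands are standard Linux):

```shell
# check_soft_lockup: scan kernel log text on stdin for soft-lockup
# reports from the kernel watchdog; prints matching lines,
# exits non-zero if none are found.
check_soft_lockup() { grep -i 'soft lockup'; }
# On a live system, feed it the ring buffer:
#   dmesg | check_soft_lockup
# The watchdog threshold (seconds stuck before a report) is visible at:
#   cat /proc/sys/kernel/watchdog_thresh
```

Comparing whether these lines appear only on certain CPUs would help narrow down the Ryzen-vs-Intel question.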

 

 

5 minutes ago, haydibe said:

@WiteWulf Are you and the others having problems with Docker on 3615xs running Ryzen CPUs, by any chance?

 

My observation is that CPU soft lockups occur randomly with modern Ryzen CPUs on older kernels, which causes the system to freeze.

I made this observation when building Vagrant base boxes with Packer on a Ryzen 7 4800H CPU; especially CentOS (which has a 3.10.x kernel) tends to freeze randomly during Packer execution (which just creates a VirtualBox VM, installs the OS from the ISO, and provisions additional software). I have never had this experience on Intel CPUs.

 

 

It seems I'm having soft lockups too, on Gen8 ESXi, if you look at my posts above.

[  104.022425] BUG: soft lockup - CPU#1 stuck for 41s! [runc:[2:INIT]:13695]

 

So maybe it's not because of a new CPU...

7 minutes ago, haydibe said:

@WiteWulf Are you and the others having problems with Docker on 3615xs running Ryzen CPUs, by any chance?

My observation is that CPU soft lockups occur randomly with modern Ryzen CPUs on older kernels, which causes the system to freeze.

I made this observation when building Vagrant base boxes with Packer on a Ryzen 7 4800H CPU; especially CentOS (which has a 3.10.x kernel) tends to freeze randomly during Packer execution (which just creates a VirtualBox VM, installs the OS from the ISO, and provisions additional software). I have never had this experience on Intel CPUs.

 

 

I also encountered the automatic system restart problem when using an Intel CPU (J1900).


I installed the latest Synology Photos,

 

and face detection is still not working on my VM...

 

From /var/log/messages

Quote

synofoto-bin-team-library-tool[21429]: /source/synofoto/src/lib/io/channel.cpp:79 channel[/run/synofoto/task-center.socket] construct failed: connect: No such file or directory
synofoto-bin-team-library-tool[21429]: /source/synofoto/src/lib/io/channel.cpp:79 channel[/run/synofoto/notify-center.socket] construct failed: connect: No such file or directory
coredump[22908]: Process synofoto-face-e[21742](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[22908]: Core file [/volume1/@synofoto-face-e.core.gz] size [13716626]
coredump[22988]: Process synofoto-face-e[22945](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[22988]: Core file [/volume1/@synofoto-face-e.core.gz] size [13794422]
coredump[23039]: Process synofoto-face-e[23016](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[23039]: Core file [/volume1/@synofoto-face-e.core.gz] size [13652444]
coredump[23087]: Process synofoto-face-e[23060](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[23087]: Core file [/volume1/@synofoto-face-e.core.gz] size [13652365]
coredump[23337]: Process synofoto-face-e[23105](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[23337]: Core file [/volume1/@synofoto-face-e.core.gz] size [13801590]
coredump[23385]: Process synofoto-face-e[23359](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[23385]: Core file [/volume1/@synofoto-face-e.core.gz] size [13885704]
coredump[23426]: Process synofoto-face-e[23404](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[23426]: Core file [/volume1/@synofoto-face-e.core.gz] size [13749066]
coredump[23531]: Process synofoto-face-e[23447](/volume1/@appstore/SynologyPhotos/usr/sbin/synofoto-face-extraction) dumped core on signal [8]. Core file [/volume1/@synofoto-face-e.core.gz]. Cmdline [/var/packages/SynologyPhotos/target/usr/sbin/synofoto-face-extraction ]
coredump[23531]: Core file [/volume1/@synofoto-face-e.core.gz] size [13745018]

 

From /var/log/synofoto.log

Quote

synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue_base.cpp:61 Plugin wake up pkg-SynologyPhotos-face-extraction, retry 1
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:84 send unload plugin 4
synofoto-face-extraction[23359]: /source/synophoto-plugin-face/src/face_plugin/main.cpp:22 face plugin init
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue_base.cpp:61 Plugin wake up pkg-SynologyPhotos-face-extraction, retry 2
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:84 send unload plugin 4
synofoto-face-extraction[23404]: /source/synophoto-plugin-face/src/face_plugin/main.cpp:22 face plugin init
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue_base.cpp:61 Plugin wake up pkg-SynologyPhotos-face-extraction, retry 3
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:84 send unload plugin 4
synofoto-face-extraction[23447]: /source/synophoto-plugin-face/src/face_plugin/main.cpp:22 face plugin init
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue_base.cpp:67 Plugin wake up pkg-SynologyPhotos-face-extraction over retry times, DoCleanTaskAtQueueAndDB  
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:102 pkg-SynologyPhotos-face-extraction DoCleanTaskAtQueueAndDB, clear queue
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:94 pkg-SynologyPhotos-face-extraction DoCleanTaskAtQueueAndDB, clear task in db
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:97 Delete task user_id 1, unit_id 2, type 4
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:97 Delete task user_id 1, unit_id 8, type 4
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:97 Delete task user_id 1, unit_id 7, type 4
synofoto-task-center[21631]: /source/synofoto/src/daemon/task-center/plugin-monitor/sended_task_queue.cpp:97 Delete task user_id 1, unit_id 10, type 4

 

From the console serial output:

Quote

[57242.305224] traps: synofoto-face-e[21743] trap divide error ip:7fe0e2dd3004 sp:7fe0ebf655e0 error:0 in libMKLDNNPlugin.so[7fe0e2cbd000+10e3000]
[57247.240983] traps: synofoto-face-e[22946] trap divide error ip:7f2f29987004 sp:7f2f329b95e0 error:0 in libMKLDNNPlugin.so[7f2f29871000+10e3000]
[57252.230411] traps: synofoto-face-e[23017] trap divide error ip:7f457edd3004 sp:7f4588ca65e0 error:0 in libMKLDNNPlugin.so[7f457ecbd000+10e3000]

 

I have a valid DS3615xs SN and MAC (only one of the four original MAC addresses set).


I tried again to start the InfluxDB container with the 139 Syno package.

The system became unresponsive.

 

Serial console output:

Quote

[  635.870176] device docker4ca244a entered promiscuous mode
[  635.871329] IPv6: ADDRCONF(NETDEV_UP): docker4ca244a: link is not ready
[  697.276227] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6 frozen
[  697.277520] ata5.00: failed command: WRITE FPDMA QUEUED
[  697.278431] ata5.00: cmd 61/01:00:08:00:90/00:00:00:00:00/40 tag 0 ncq 512 out
[  697.278431]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[  697.280807] ata5.00: status: { DRDY }
[  697.281440] ata5: hard resetting link
[  697.586993] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  707.579733] ata5.00: qc timeout (cmd 0xec)
[  707.580475] ata5.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  707.581524] ata5.00: revalidation failed (errno=-5)
[  707.582357] ata5: hard resetting link
[  707.887494] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  737.863786] ata5.00: qc timeout (cmd 0xec)
[  737.864542] ata5.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  737.865584] ata5.00: revalidation failed (errno=-5)
[  737.866426] ata5: limiting SATA link speed to 3.0 Gbps
[  737.867311] ata5: hard resetting link
[  738.172581] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  768.148856] ata5.00: qc timeout (cmd 0xec)
[  768.149617] ata5.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  768.150886] ata5.00: revalidation failed (errno=-5)
[  768.151711] ata5.00: disabled
[  768.152234] ata5.00: handle -EIO dev fail, detach this dev
[  768.153166] ata5.00: already disabled (class=0x2)
[  768.153970] ata5.00: device reported invalid CHS sector 0
[  768.154902] ata5: hard resetting link
[  768.459599] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  768.460701] sd 4:0:0:0: [sda]
[  768.461248] Result: hostbyte=0x00 driverbyte=0x08
[  768.462057] sd 4:0:0:0: [sda]
[  768.462602] Sense Key : 0xb [current] [descriptor]
[  768.463492] Descriptor sense data with sense descriptors (in hex):
[  768.464566]         72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
[  768.466089]         00 00 00 00
[  768.466775] sd 4:0:0:0: [sda]
[  768.467318] ASC=0x0 ASCQ=0x0
[  768.467846] sd 4:0:0:0: [sda] CDB:
[  768.468451] cdb[0]=0x2a: 2a 00 00 90 00 08 00 00 01 00
[  768.469639] end_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
[  768.470856] md: super_written_retry for error=-5
[  768.471681] ata5: EH complete
[  768.472213] sd 4:0:0:0: rejecting I/O to offline device
[  768.473103] md: super_written gets error=-5, uptodate=0
[  768.473982] syno_md_error: sda3 has been removed
[  768.474768] raid1: Disk failure on sda3, disabling device.
[  768.474768]  Operation continuing on 0 devices
[  768.476461] ata5.00: detaching (SCSI 4:0:0:0)
[  768.477225] RAID1 conf printout:
[  768.477785]  --- wd:0 rd:1
[  768.478259]  disk 0, wo:1, o:0, dev:sda3
[  768.478933] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0
[  768.480310] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
[  768.481674] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
[  768.483116] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
[  768.484521] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
[  768.485924] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
[  768.487322] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
[  768.488710] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
[  768.490098] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
[  768.491487] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
[  768.494540] BTRFS: error (device dm-2) in btrfs_commit_transaction:2381: errno=-5 IO failure (Error while writing out transaction)
[  768.496465] BTRFS info (device dm-2): forced readonly
[  768.497325] BTRFS warning (device dm-2): Skipping commit of aborted transaction.
[  768.498570] ------------[ cut here ]------------
[  768.499385] WARNING: at fs/btrfs/super.c:263 __btrfs_abort_transaction+0x11d/0x130 [btrfs]()
[  768.500795] BTRFS: Transaction aborted (error -5)
[  768.501571] Modules linked in: nfnetlink xfrm_user xfrm_algo fuse bridge stp aufs macvlan veth xt_conntrack xt_addrtype nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipt_MASQUERADE xt_REDIRECT xt_nat iptable_nat nf_nat_ipv4
nf_nat xt_recent xt_iprange xt_limit xt_state xt_tcpudp xt_multiport xt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_filter ip_tables x_tables 8021q vhost_scsi(O) vhost(O) tcm_loop(O) iscsi_target_mod(O) target_core_ep(O) target_core_multi_file(O) target_core_file(O) target_core_iblock(O) target_core_mod(O) syno_extent_pool(PO) rodsp_ep(O) cdc_acm ftdi_sio ch341(OF) cp210x(OF) usbserial udf isofs loop synoacl_vfs(PO) btrfs zstd_decompress ecryptfs zstd_compress xxhash xor raid6_pq zram(C) aesni_intel glue_helper lrw gf128mul ablk_helper bromolow_synobios(PO) hid_generic usbhid hid usblp bnx2x(O) mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) qede(O) qed(O) atlantic_v2(O) atlantic(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) i2c_algo_bit igb(O) dca e1000e(O) sg dm_snapshot crc_itu_t crc_ccitt psnap p8022 llc zlib_deflate libcrc32c hfsplus md4 hmac sit tunnel4 ipv6 flashcache_syno(O) flashcache(O) syno_flashcache_control(O) dm_mod crc32c_intel cryptd arc4 sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_powersave cpufreq_performance mperf processor thermal_sys cpufreq_stats freq_table vxlan ip_tunnel vmxnet3(F) etxhci_hcd mpt2sas(O) usb_storage xhci_hcd uhci_hcd ehci_pci ehci_hcd usbcore usb_common redpill(OF) [last unloaded: bromolow_synobios]
[  768.527815] CPU: 1 PID: 8021 Comm: btrfs-transacti Tainted: PF        C O 3.10.108 #42214
[  768.529179] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
[  768.530966]  ffffffff814a2759 ffffffff81036e56 ffff880088858298 ffff8801348f3d40
[  768.532320]  ffff880137eba000 ffffffffa0b0ed90 00000000000007c1 ffffffff81036eb7
[  768.533665]  ffffffffa0b142a8 ffff880100000020 ffff8801348f3d50 ffff8801348f3d10
[  768.535003] Call Trace:
[  768.535429]  [<ffffffff814a2759>] ? dump_stack+0xc/0x15
[  768.536314]  [<ffffffff81036e56>] ? warn_slowpath_common+0x56/0x70
[  768.537355]  [<ffffffff81036eb7>] ? warn_slowpath_fmt+0x47/0x50
[  768.538358]  [<ffffffffa0a39e3d>] ? __btrfs_abort_transaction+0x11d/0x130 [btrfs]
[  768.539614]  [<ffffffff81058430>] ? wake_atomic_t_function+0x60/0x60
[  768.540688]  [<ffffffffa0a7133b>] ? cleanup_transaction+0x6b/0x2c0 [btrfs]
[  768.541848]  [<ffffffff81058430>] ? wake_atomic_t_function+0x60/0x60
[  768.542922]  [<ffffffff8105f934>] ? __wake_up+0x34/0x50
[  768.543812]  [<ffffffffa0a72899>] ? btrfs_commit_transaction+0xb29/0xcc0 [btrfs]
[  768.545062]  [<ffffffffa0a6ce3d>] ? transaction_kthread+0x26d/0x2e0 [btrfs]
[  768.546237]  [<ffffffffa0a6cbd0>] ? btrfs_cleanup_transaction+0x5c0/0x5c0 [btrfs]
[  768.547496]  [<ffffffff81057b01>] ? kthread+0xb1/0xc0
[  768.548353]  [<ffffffff81057a50>] ? kthread_worker_fn+0x160/0x160
[  768.549392]  [<ffffffff814afe0d>] ? ret_from_fork+0x5d/0xb0
[  768.550343]  [<ffffffff81057a50>] ? kthread_worker_fn+0x160/0x160
[  768.551373] ---[ end trace fd813dc1d05f402a ]---
[  768.552158] BTRFS: error (device dm-2) in cleanup_transaction:1985: errno=-5 IO failure
[  768.553509] BTRFS info (device dm-2): delayed_refs has NO entry
[  773.565493] btrfs_dev_stat_print_on_error: 148 callbacks suppressed
[  773.566594] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 159, rd 0, flush 0, corrupt 0, gen 0
[  773.568042] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 160, rd 0, flush 0, corrupt 0, gen 0
[  773.569441] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 161, rd 0, flush 0, corrupt 0, gen 0
[  773.570880] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 162, rd 0, flush 0, corrupt 0, gen 0
[  773.572317] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 163, rd 0, flush 0, corrupt 0, gen 0
[  778.569359] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 164, rd 0, flush 0, corrupt 0, gen 0
[  778.570890] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 165, rd 0, flush 0, corrupt 0, gen 0
[  778.572297] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 166, rd 0, flush 0, corrupt 0, gen 0
[  778.573697] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 167, rd 0, flush 0, corrupt 0, gen 0
[  778.575284] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 168, rd 0, flush 0, corrupt 0, gen 0
[  783.573248] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 169, rd 0, flush 0, corrupt 0, gen 0
[  783.574716] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 170, rd 0, flush 0, corrupt 0, gen 0
[  783.576133] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 171, rd 0, flush 0, corrupt 0, gen 0
[  783.577546] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 172, rd 0, flush 0, corrupt 0, gen 0
[  783.579004] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 173, rd 0, flush 0, corrupt 0, gen 0
[  788.577168] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 174, rd 0, flush 0, corrupt 0, gen 0
[  788.578635] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 175, rd 0, flush 0, corrupt 0, gen 0
[  788.580169] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 176, rd 0, flush 0, corrupt 0, gen 0
[  788.581616] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 177, rd 0, flush 0, corrupt 0, gen 0
[  788.583042] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 178, rd 0, flush 0, corrupt 0, gen 0
[  793.581001] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 179, rd 0, flush 0, corrupt 0, gen 0
[  793.582457] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 180, rd 0, flush 0, corrupt 0, gen 0
[  793.583855] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 181, rd 0, flush 0, corrupt 0, gen 0
[  793.585248] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 182, rd 0, flush 0, corrupt 0, gen 0
[  793.586662] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 183, rd 0, flush 0, corrupt 0, gen 0
[  798.584886] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 184, rd 0, flush 0, corrupt 0, gen 0
[  798.586334] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 185, rd 0, flush 0, corrupt 0, gen 0
[  798.587737] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 186, rd 0, flush 0, corrupt 0, gen 0
[  798.589132] BTRFS: bdev /dev/mapper/cachedev_0 errs: wr 187, rd 0, flush 0, corrupt 0, gen 0
[  816.223279] systemd[1]: systemd-logind.service watchdog timeout (limit 3min)!
[  840.357424] INFO: task kworker/u4:29:3988 blocked for more than 120 seconds.
[  840.358630] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  840.359956] kworker/u4:29   D ffff88013dd12f40     0  3988      2 0x00000000
[  840.361203] Workqueue: writeback bdi_writeback_workfn (flush-9:0)
[  840.362256]  ffff880135b37770 0000000000000046 000000000000c000 ffff880135b37fd8
[  840.363619]  ffff880135b37fd8 ffff880135ce6040 0000000000000000 ffff880135b37770
[  840.364961]  ffffffff8105f934 ffff880135502000 ffff880135502000 0000000000000008
[  840.366288] Call Trace:
[  840.366766]  [<ffffffff8105f934>] ? __wake_up+0x34/0x50
[  840.367629]  [<ffffffff813c28c5>] ? md_write_start+0xa5/0x190
[  840.368619]  [<ffffffff81058430>] ? wake_atomic_t_function+0x60/0x60
[  840.369664]  [<ffffffff813ba1fe>] ? make_request+0x7e/0xe30
[  840.370625]  [<ffffffffa0a5c7ca>] ? find_free_extent+0x61a/0x1200 [btrfs]
[  840.371743]  [<ffffffffa0a943ea>] ? merge_state.part.42+0x3a/0x140 [btrfs]
[  840.372887]  [<ffffffffa0ac2349>] ? search_bitmap+0xc9/0x1a0 [btrfs]
[  840.373930]  [<ffffffff81058430>] ? wake_atomic_t_function+0x60/0x60
[  840.374970]  [<ffffffff813bdade>] ? md_handle_request+0x8e/0xe0
[  840.375941]  [<ffffffff813c66b1>] ? md_make_request+0x251/0x460
[  840.376931]  [<ffffffff812705d9>] ? generic_make_request+0xd9/0x2b0
[  840.377958]  [<ffffffff81270827>] ? submit_bio+0x77/0x190
[  840.378848]  [<ffffffff8116a8d3>] ? bio_alloc_bioset+0x83/0x220
[  840.379842]  [<ffffffff81165acd>] ? _submit_bh+0x12d/0x200
[  840.380757]  [<ffffffff811679a1>] ? __block_write_full_page+0x171/0x3b0
[  840.381842]  [<ffffffff81166530>] ? bh_submit_read+0x90/0x90
[  840.382774]  [<ffffffff8116bbf0>] ? I_BDEV+0x10/0x10
[  840.383595]  [<ffffffff810ec060>] ? __writepage+0x10/0x40
[  840.384497]  [<ffffffff810ec768>] ? write_cache_pages+0x178/0x430
[  840.385503]  [<ffffffff810ec050>] ? global_dirty_limits+0x180/0x180
[  840.386531]  [<ffffffff810eca55>] ? generic_writepages+0x35/0x60
[  840.387518]  [<ffffffff810ee052>] ? do_writepages+0x22/0x80
[  840.388450]  [<ffffffffa0a3ccda>] ? leaf_space_used+0xca/0x100 [btrfs]
[  840.389520]  [<ffffffff8115dc71>] ? __writeback_single_inode+0x41/0x2b0
[  840.390600]  [<ffffffff8115eae4>] ? writeback_sb_inodes+0x1c4/0x420
[  840.391630]  [<ffffffff8115eda1>] ? __writeback_inodes_wb+0x61/0xc0
[  840.392666]  [<ffffffff8115f03b>] ? wb_writeback+0x23b/0x320
[  840.393596]  [<ffffffff8115f97b>] ? wb_do_writeback+0x1fb/0x210
[  840.394573]  [<ffffffff8115f9fa>] ? bdi_writeback_workfn+0x6a/0x210
[  840.395574]  [<ffffffff8105f553>] ? worker_run_work+0xa3/0xf0
[  840.396502]  [<ffffffff8115f990>] ? wb_do_writeback+0x210/0x210
[  840.397461]  [<ffffffff81050fce>] ? process_one_work+0x14e/0x520
[  840.398446]  [<ffffffff81051d98>] ? worker_thread+0x108/0x420
[  840.399390]  [<ffffffff81051c90>] ? manage_workers.isra.30+0x260/0x260
[  840.400468]  [<ffffffff81057b01>] ? kthread+0xb1/0xc0
[  840.401301]  [<ffffffff81057a50>] ? kthread_worker_fn+0x160/0x160
[  840.402302]  [<ffffffff814afe0d>] ? ret_from_fork+0x5d/0xb0
[  840.403219]  [<ffffffff81057a50>] ? kthread_worker_fn+0x160/0x160



@Orphée

Photo face detection works on my system.

Do you have only one core set for your VM?

I get the divide error only with one core.

 

If yes, set more cores, then deactivate and reactivate the Person album.
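If it helps anyone verify this fix, the number of vCPUs the guest actually sees can be checked over SSH before re-enabling face extraction (a trivial sketch using standard Linux commands):

```shell
# Print how many CPUs the guest sees; the divide error discussed above
# was tied to a single-core VM, so expect 2 or more after the change.
NPROC=$(nproc)
CPUINFO=$(grep -c '^processor' /proc/cpuinfo)
echo "vCPUs visible: ${NPROC} (cpuinfo: ${CPUINFO})"
```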

Edited by dodo-dk
54 minutes ago, dodo-dk said:

@Orphée

Photo face detection works on my system.

Do you have only one core set for your VM?

I get the divide error only with one core.

If yes, set more cores, then deactivate and reactivate the Person album.


 

Edit: @dodo-dk, well played, dude. Changing to 2 cores fixed it!

Edited by Orphée
4 hours ago, haydibe said:

@WiteWulf Are you and the others having problems with Docker on 3615xs running Ryzen CPUs, by any chance?

 

My observation is that CPU soft lockups occur randomly with modern Ryzen CPUs on older kernels, which causes the system to freeze.

I made this observation when building Vagrant base boxes with Packer on a Ryzen 7 4800H CPU; especially CentOS (which has a 3.10.x kernel) tends to freeze randomly during Packer execution (which just creates a VirtualBox VM, installs the OS from the ISO, and provisions additional software). I have never had this experience on Intel CPUs.

 

 

Everyone who's contacted me so far has been using either Xeon or Celeron CPUs; I've not heard from anyone with a non-Intel CPU yet. Also, I (and others) extensively used Docker on 6.2.3 and earlier versions on Jun's boot loader. These problems have only been observed on DSM 6.2.4 and 7.x since moving to RedPill, and only on DS3615xs.

