XPEnology Community

TinyCore RedPill Loader Build Support Tool ( M-Shell )


Peter Suh

Recommended Posts

7 hours ago, Peter Suh said:

 

I don't know what `/upd@te/` contains, so I can't confirm whether it is safe to remove.

When I had the issue, I deleted the folder's contents (I kept the folder itself but cleaned everything inside it, plus another temp folder), and I was then able to relaunch the DSM junior install.

Edited by Orphée
Link to comment
Share on other sites

Thanks all - so I cleared out the directory and gave DSM a restart. No issues or errors during boot, all good. The UI no longer complains about the lack of storage on the update panel.

 

But Storage Manager still doesn't load. I suspect it's related to the missing disk from my volume 1:

(screenshot attached)

 

Any suggested fix? Would you recommend upgrading to 7.2.1?

 


Edited by Tibag
Link to comment
Share on other sites

1 hour ago, Tibag said:

Thanks all - so I cleared out the directory and gave DSM a restart. No issues or errors during boot, all good. The UI no longer complains about the lack of storage on the update panel.

 

But Storage Manager still doesn't load. I suspect it's related to the missing disk from my volume 1.


 

Any suggested fix? Would you recommend upgrading to 7.2.1?

 


 

Connect via SSH and check the health of the disks with the command below.

sudo -i
ll /dev/sata*
smartctl -H /dev/sata1
smartctl -H /dev/sata2
smartctl -H /dev/sata3
...

Link to comment
Share on other sites

5 minutes ago, Peter Suh said:

 

Connect via SSH and check the health of the disks with the command below.

sudo -i
ll /dev/sata*
smartctl -H /dev/sata1
smartctl -H /dev/sata2
smartctl -H /dev/sata3
...

 

There is nothing under "sata":

Quote

root@Diskstation:~# ll /dev/sata*
ls: cannot access '/dev/sata*': No such file or directory

 

I suspect it's all under sda, no? 

Quote

root@Diskstation:~# ll /dev/sda*
brw------- 1 root root 66,  0 Jan 15 10:54 /dev/sdag
brw------- 1 root root 66,  1 Jan 15 10:54 /dev/sdag1
brw------- 1 root root 66,  2 Jan 15 10:54 /dev/sdag2
brw------- 1 root root 66,  5 Jan 15 10:54 /dev/sdag5
brw------- 1 root root 66, 16 Jan 15 10:54 /dev/sdah
brw------- 1 root root 66, 17 Jan 15 10:54 /dev/sdah1
brw------- 1 root root 66, 18 Jan 15 10:54 /dev/sdah2
brw------- 1 root root 66, 21 Jan 15 10:54 /dev/sdah5
brw------- 1 root root 66, 32 Jan 15 10:54 /dev/sdai
brw------- 1 root root 66, 33 Jan 15 10:54 /dev/sdai1
brw------- 1 root root 66, 34 Jan 15 10:54 /dev/sdai2
brw------- 1 root root 66, 37 Jan 15 10:54 /dev/sdai5

 

Link to comment
Share on other sites

5 minutes ago, Tibag said:

 

There is nothing under "sata":

 

I suspect it's all under sda, no? 

 

 

 

Connect via SSH and check the health of the disks with the commands below.

Shouldn't there be one more disk?

ll /dev/sd*

Run this to make sure all four disks appear.
The smartctl commands then change as shown below.

 

sudo -i
smartctl -H /dev/sdag
smartctl -H /dev/sdah
smartctl -H /dev/sdai


If anything suspicious appears on the disk, check the status in more detail as shown below.


smartctl -a -d sat /dev/sdag
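For reference, a hedged convenience loop over the base devices from the ll output above (the two-letter /dev/sdXY names; adjust the glob if the names differ on your system):

# print the SMART health summary for every two-letter base device
for d in /dev/sd[a-z][a-z]; do
    echo "== $d =="
    smartctl -H "$d"
done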

Link to comment
Share on other sites

Right, so initially I had removed the disk that was no longer showing up; I added it back in ESXi, which let me see which one it is in /dev:

Quote

root@Diskstation:~# ll /dev/sd*
brw------- 1 root root 65, 224 Jan 15 13:19 /dev/sdae
brw------- 1 root root 65, 225 Jan 15 13:19 /dev/sdae1
brw------- 1 root root 65, 226 Jan 15 13:19 /dev/sdae2
brw------- 1 root root 65, 229 Jan 15 13:19 /dev/sdae5
brw------- 1 root root 66,   0 Jan 15 13:19 /dev/sdag
brw------- 1 root root 66,   1 Jan 15 13:19 /dev/sdag1
brw------- 1 root root 66,   2 Jan 15 13:19 /dev/sdag2
brw------- 1 root root 66,   5 Jan 15 13:19 /dev/sdag5
brw------- 1 root root 66,  16 Jan 15 13:19 /dev/sdah
brw------- 1 root root 66,  17 Jan 15 13:19 /dev/sdah1
brw------- 1 root root 66,  18 Jan 15 13:19 /dev/sdah2
brw------- 1 root root 66,  21 Jan 15 13:19 /dev/sdah5
brw------- 1 root root 66,  32 Jan 15 13:19 /dev/sdai
brw------- 1 root root 66,  33 Jan 15 13:19 /dev/sdai1
brw------- 1 root root 66,  34 Jan 15 13:19 /dev/sdai2
brw------- 1 root root 66,  37 Jan 15 13:19 /dev/sdai5

 

So it's sdae. I scanned them all through smartctl and they all return

Quote

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

 

So SMART, at least, doesn't seem bothered by it.

Link to comment
Share on other sites

20 minutes ago, Tibag said:

Adding a log from dmesg; there is a lot of SATA-related output. Can you see anything you are familiar with?

dmesg.log (130.19 kB)

 

 

[    0.538330] pci 0000:00:0f.0: can't claim BAR 6 [mem 0xffff8000-0xffffffff pref]: no compatible bridge window
[    0.538534] pci 0000:02:02.0: can't claim BAR 6 [mem 0xffff0000-0xffffffff pref]: no compatible bridge window
[    0.538749] pci 0000:02:03.0: can't claim BAR 6 [mem 0xffff0000-0xffffffff pref]: no compatible bridge window
[    0.538991] pci 0000:0b:00.0: can't claim BAR 6 [mem 0xffff0000-0xffffffff pref]: no compatible bridge window
[    0.539323] pci 0000:00:15.0: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
[    0.539499] pci 0000:00:15.1: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
[    0.539745] pci 0000:00:15.2: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
[    0.539963] pci 0000:00:15.3: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
[    0.540193] pci 0000:00:15.4: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
[    0.540404] pci 0000:00:15.5: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
[    0.540617] pci 0000:00:15.6: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
[    0.540849] pci 0000:00:15.7: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
[    0.541110] pci 0000:00:16.1: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
[    0.541321] pci 0000:00:16.2: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
[    0.541529] pci 0000:00:16.3: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
[    0.541740] pci 0000:00:16.4: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
[    0.541975] pci 0000:00:16.5: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
[    0.542203] pci 0000:00:16.6: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
[    0.542414] pci 0000:00:16.7: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
[    0.542639] pci 0000:00:17.0: bridge window [io  0x1000-0x0fff] to [bus 13] add_size 1000
[    0.542869] pci 0000:00:17.1: bridge window [io  0x1000-0x0fff] to [bus 14] add_size 1000
[    0.543080] pci 0000:00:17.2: bridge window [io  0x1000-0x0fff] to [bus 15] add_size 1000
[    0.543307] pci 0000:00:17.3: bridge window [io  0x1000-0x0fff] to [bus 16] add_size 1000
[    0.543541] pci 0000:00:17.4: bridge window [io  0x1000-0x0fff] to [bus 17] add_size 1000
[    0.544037] pci 0000:00:15.0: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.544225] pci 0000:00:15.0: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.544422] pci 0000:00:15.1: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.544643] pci 0000:00:15.1: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.544873] pci 0000:00:15.2: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.545067] pci 0000:00:15.2: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.545247] pci 0000:00:15.3: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.545439] pci 0000:00:15.3: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.545608] pci 0000:00:15.4: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.545811] pci 0000:00:15.4: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.546005] pci 0000:00:15.5: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.546233] pci 0000:00:15.5: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.546488] pci 0000:00:15.6: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.546711] pci 0000:00:15.6: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.546923] pci 0000:00:15.7: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.547103] pci 0000:00:15.7: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.547300] pci 0000:00:16.1: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.547513] pci 0000:00:16.1: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.547769] pci 0000:00:16.2: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.547980] pci 0000:00:16.2: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.548199] pci 0000:00:16.3: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.548379] pci 0000:00:16.3: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.548545] pci 0000:00:16.4: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.548725] pci 0000:00:16.4: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.548929] pci 0000:00:16.5: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.549122] pci 0000:00:16.5: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.549336] pci 0000:00:16.6: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.549525] pci 0000:00:16.6: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.549711] pci 0000:00:16.7: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.549928] pci 0000:00:16.7: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.550145] pci 0000:00:17.0: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.550379] pci 0000:00:17.0: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.550561] pci 0000:00:17.1: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.550747] pci 0000:00:17.1: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.550944] pci 0000:00:17.2: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.551123] pci 0000:00:17.2: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.551303] pci 0000:00:17.3: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.551482] pci 0000:00:17.3: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.551682] pci 0000:00:17.4: res[13]=[io  0x1000-0x0fff] res_to_dev_res add_size 1000 min_align 1000
[    0.551889] pci 0000:00:17.4: res[13]=[io  0x1000-0x1fff] res_to_dev_res add_size 1000 min_align 1000
[    0.552078] pci 0000:00:0f.0: BAR 6: assigned [mem 0xff100000-0xff107fff pref]
[    0.552229] pci 0000:00:15.0: BAR 13: no space for [io  size 0x1000]
[    0.552360] pci 0000:00:15.0: BAR 13: failed to assign [io  size 0x1000]
[    0.552501] pci 0000:00:15.1: BAR 13: no space for [io  size 0x1000]
[    0.552624] pci 0000:00:15.1: BAR 13: failed to assign [io  size 0x1000]
[    0.552765] pci 0000:00:15.2: BAR 13: no space for [io  size 0x1000]
[    0.552913] pci 0000:00:15.2: BAR 13: failed to assign [io  size 0x1000]
[    0.553059] pci 0000:00:15.3: BAR 13: no space for [io  size 0x1000]
[    0.553209] pci 0000:00:15.3: BAR 13: failed to assign [io  size 0x1000]
[    0.553349] pci 0000:00:15.4: BAR 13: no space for [io  size 0x1000]
[    0.553480] pci 0000:00:15.4: BAR 13: failed to assign [io  size 0x1000]
[    0.553613] pci 0000:00:15.5: BAR 13: no space for [io  size 0x1000]
[    0.553743] pci 0000:00:15.5: BAR 13: failed to assign [io  size 0x1000]
[    0.553905] pci 0000:00:15.6: BAR 13: no space for [io  size 0x1000]
[    0.554041] pci 0000:00:15.6: BAR 13: failed to assign [io  size 0x1000]
[    0.554210] pci 0000:00:15.7: BAR 13: no space for [io  size 0x1000]
[    0.554336] pci 0000:00:15.7: BAR 13: failed to assign [io  size 0x1000]
[    0.554489] pci 0000:00:16.1: BAR 13: no space for [io  size 0x1000]
[    0.554621] pci 0000:00:16.1: BAR 13: failed to assign [io  size 0x1000]
[    0.554767] pci 0000:00:16.2: BAR 13: no space for [io  size 0x1000]
[    0.554959] pci 0000:00:16.2: BAR 13: failed to assign [io  size 0x1000]
[    0.555105] pci 0000:00:16.3: BAR 13: no space for [io  size 0x1000]
[    0.555253] pci 0000:00:16.3: BAR 13: failed to assign [io  size 0x1000]
[    0.555393] pci 0000:00:16.4: BAR 13: no space for [io  size 0x1000]
[    0.555521] pci 0000:00:16.4: BAR 13: failed to assign [io  size 0x1000]
[    0.555675] pci 0000:00:16.5: BAR 13: no space for [io  size 0x1000]
[    0.555801] pci 0000:00:16.5: BAR 13: failed to assign [io  size 0x1000]
[    0.555947] pci 0000:00:16.6: BAR 13: no space for [io  size 0x1000]
[    0.556068] pci 0000:00:16.6: BAR 13: failed to assign [io  size 0x1000]
[    0.556199] pci 0000:00:16.7: BAR 13: no space for [io  size 0x1000]
[    0.556320] pci 0000:00:16.7: BAR 13: failed to assign [io  size 0x1000]
[    0.556451] pci 0000:00:17.0: BAR 13: no space for [io  size 0x1000]
[    0.556537] pci 0000:00:17.0: BAR 13: failed to assign [io  size 0x1000]
[    0.556673] pci 0000:00:17.1: BAR 13: no space for [io  size 0x1000]
[    0.556816] pci 0000:00:17.1: BAR 13: failed to assign [io  size 0x1000]
[    0.556956] pci 0000:00:17.2: BAR 13: no space for [io  size 0x1000]
[    0.557121] pci 0000:00:17.2: BAR 13: failed to assign [io  size 0x1000]
[    0.557280] pci 0000:00:17.3: BAR 13: no space for [io  size 0x1000]
[    0.557482] pci 0000:00:17.3: BAR 13: failed to assign [io  size 0x1000]
[    0.557611] pci 0000:00:17.4: BAR 13: no space for [io  size 0x1000]
[    0.557756] pci 0000:00:17.4: BAR 13: failed to assign [io  size 0x1000]
[    0.557920] pci 0000:00:17.4: BAR 13: no space for [io  size 0x1000]
[    0.558067] pci 0000:00:17.4: BAR 13: failed to assign [io  size 0x1000]
[    0.558219] pci 0000:00:17.3: BAR 13: no space for [io  size 0x1000]
[    0.558345] pci 0000:00:17.3: BAR 13: failed to assign [io  size 0x1000]
[    0.558480] pci 0000:00:17.2: BAR 13: no space for [io  size 0x1000]
[    0.558616] pci 0000:00:17.2: BAR 13: failed to assign [io  size 0x1000]
[    0.558756] pci 0000:00:17.1: BAR 13: no space for [io  size 0x1000]
[    0.558898] pci 0000:00:17.1: BAR 13: failed to assign [io  size 0x1000]
[    0.559034] pci 0000:00:17.0: BAR 13: no space for [io  size 0x1000]
[    0.559160] pci 0000:00:17.0: BAR 13: failed to assign [io  size 0x1000]
[    0.559295] pci 0000:00:16.7: BAR 13: no space for [io  size 0x1000]
[    0.559455] pci 0000:00:16.7: BAR 13: failed to assign [io  size 0x1000]
[    0.559592] pci 0000:00:16.6: BAR 13: no space for [io  size 0x1000]
[    0.559749] pci 0000:00:16.6: BAR 13: failed to assign [io  size 0x1000]
[    0.559931] pci 0000:00:16.5: BAR 13: no space for [io  size 0x1000]
[    0.560053] pci 0000:00:16.5: BAR 13: failed to assign [io  size 0x1000]
[    0.560184] pci 0000:00:16.4: BAR 13: no space for [io  size 0x1000]
[    0.560305] pci 0000:00:16.4: BAR 13: failed to assign [io  size 0x1000]
[    0.560436] pci 0000:00:16.3: BAR 13: no space for [io  size 0x1000]
[    0.560528] pci 0000:00:16.3: BAR 13: failed to assign [io  size 0x1000]
[    0.560665] pci 0000:00:16.2: BAR 13: no space for [io  size 0x1000]
[    0.560809] pci 0000:00:16.2: BAR 13: failed to assign [io  size 0x1000]
[    0.560960] pci 0000:00:16.1: BAR 13: no space for [io  size 0x1000]
[    0.561099] pci 0000:00:16.1: BAR 13: failed to assign [io  size 0x1000]
[    0.561230] pci 0000:00:15.7: BAR 13: no space for [io  size 0x1000]
[    0.561351] pci 0000:00:15.7: BAR 13: failed to assign [io  size 0x1000]
[    0.561482] pci 0000:00:15.6: BAR 13: no space for [io  size 0x1000]
[    0.561592] pci 0000:00:15.6: BAR 13: failed to assign [io  size 0x1000]
[    0.561733] pci 0000:00:15.5: BAR 13: no space for [io  size 0x1000]
[    0.561887] pci 0000:00:15.5: BAR 13: failed to assign [io  size 0x1000]
[    0.562073] pci 0000:00:15.4: BAR 13: no space for [io  size 0x1000]
[    0.562208] pci 0000:00:15.4: BAR 13: failed to assign [io  size 0x1000]
[    0.562378] pci 0000:00:15.3: BAR 13: no space for [io  size 0x1000]
[    0.562504] pci 0000:00:15.3: BAR 13: failed to assign [io  size 0x1000]
[    0.562644] pci 0000:00:15.2: BAR 13: no space for [io  size 0x1000]
[    0.562774] pci 0000:00:15.2: BAR 13: failed to assign [io  size 0x1000]
[    0.562915] pci 0000:00:15.1: BAR 13: no space for [io  size 0x1000]
[    0.563058] pci 0000:00:15.1: BAR 13: failed to assign [io  size 0x1000]
[    0.563209] pci 0000:00:15.0: BAR 13: no space for [io  size 0x1000]
[    0.563340] pci 0000:00:15.0: BAR 13: failed to assign [io  size 0x1000]

 

I suspect this part of dmesg.

I think the lack of I/O space is related to Storage Manager failing to open.

https://bugzilla.redhat.com/show_bug.cgi?id=1334867

Could something else be taking up I/O space and causing the shortage?
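A hedged way to see how the legacy 16-bit I/O port space is carved up on the box (this is standard Linux procfs, so it should be present on DSM):

# every claimed I/O port range; the whole space is only 0x0000-0xffff,
# so many bridges each wanting 0x1000 can exhaust it quickly
cat /proc/ioports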

 

 

Edited by Peter Suh
Link to comment
Share on other sites

7 minutes ago, Peter Suh said:

 

 

[dmesg excerpt quoted above]

I suspect this part of dmesg. I think the lack of I/O space is related to Storage Manager failing to open.

https://bugzilla.redhat.com/show_bug.cgi?id=1334867

Could something else be taking up I/O space and causing the shortage?

 

 

 

Hm, good catch. I will try googling around that. Regarding storage, here is my df output:

Quote

root@Diskstation:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/md0                2.3G  1.6G  591M  74% /
devtmpfs                7.8G     0  7.8G   0% /dev
tmpfs                   7.9G  248K  7.9G   1% /dev/shm
tmpfs                   7.9G   19M  7.8G   1% /run
tmpfs                   7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs                   7.9G  1.1M  7.9G   1% /tmp
/dev/mapper/cachedev_0  7.0T  5.5T  1.6T  78% /volume2

So nothing obvious here. 
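For what it's worth, a hedged check of how much the DSM update leftovers discussed in this thread contribute to the 74% usage of md0 (on a running system /dev/md0 is mounted at /, so the root-level paths below are an assumption based on the /mnt/md0 paths mentioned elsewhere in the thread):

du -sh /upd@te /@autoupdate /.log.junior 2>/dev/null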

Link to comment
Share on other sites

36 minutes ago, Orphée said:

You may want to try :

 

cd /
du -ax | sort -rn | more

 

to show which folders and files consume the most space, and to check whether they are all needed.

 

Yeah, that's a nice way of doing it - I did have a go when I wanted to free up my / mount. I can't see anything jumping out as out of the ordinary. See attached in case anything catches your eye.

 

 

du.txt

Link to comment
Share on other sites

6 minutes ago, Orphée said:

I meant for it to be done from the serial console while in the junior install mode (to check what could be consuming too much space).

Gotcha - do I do that by interrupting the boot like it suggests here:

(screenshot attached)

 

Or do I just press J? Won't it start trying to reinstall DSM?

Link to comment
Share on other sites

I actually noticed a lot of errors in my syslog, like:

Quote

[2024-01-15T10:37:44.178538] Error opening file for writing; filename='/var/packages/SynoFinder/var/log/fileindexd.log', error='File exists (17)'

 

Then, checking that folder:

Quote

root@Diskstation:~# ll /var/packages/SynoFinder/var/log
lrwxrwxrwx 1 root root 24 Mar  9  2022 /var/packages/SynoFinder/var/log -> /volume1/@SynoFinder-log

 

So that's pointing to a dead path, because my volume1 is not loading (for whatever reason). I wonder if that's preventing Storage Manager from starting.
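A hedged way to spot other package paths broken for the same reason (find's -xtype l matches symlinks whose target is missing; assuming the find shipped with DSM supports it):

# list dangling symlinks under /var/packages (broken because volume1 is gone)
find /var/packages -xtype l 2>/dev/null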

Link to comment
Share on other sites

Looks like the above is a red herring - I fixed it anyway. 

 

I am trying to understand why storage manager doesn't look happy:

Quote

root@Diskstation:/var/log# systemctl status synostoraged.service
● synostoraged.service - Synology daemon for monitoring space/disk/cache status
   Loaded: loaded (/usr/lib/systemd/system/synostoraged.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2024-01-15 16:32:54 GMT; 12min ago
  Process: 26275 ExecStart=/usr/syno/sbin/synostoraged (code=exited, status=0/SUCCESS)
 Main PID: 26282 (synostoraged)
   CGroup: /syno_dsm_storage_manager.slice/synostoraged.service
           ├─26282 synostoraged
           ├─26283 synostgd-disk
           ├─26284 synostgd-space
           ├─26285 synostgd-cache
           ├─26287 synostgd-volume
           └─26289 synostgd-external-volume

Jan 15 16:32:54 Diskstation [26291]: disk/disk_sql_cmd_valid_check.c:24 Invalid string ../../../33:0:0:0
Jan 15 16:32:54 Diskstation [26291]: disk/disk_error_status_get.c:50 Invalid serial number
Jan 15 16:32:54 Diskstation [26296]: disk/disk_sql_cmd_valid_check.c:24 Invalid string ../../../34:0:0:0
Jan 15 16:32:54 Diskstation [26296]: disk/disk_error_status_get.c:50 Invalid serial number
Jan 15 16:32:54 Diskstation [26291]: disk/disk_sql_cmd_valid_check.c:24 Invalid string ../../../33:0:0:0
Jan 15 16:32:54 Diskstation [26291]: disk/disk_error_status_get.c:50 Invalid serial number
Jan 15 16:32:54 Diskstation [26296]: disk/disk_sql_cmd_valid_check.c:24 Invalid string ../../../34:0:0:0
Jan 15 16:32:54 Diskstation [26296]: disk/disk_error_status_get.c:50 Invalid serial number
Jan 15 16:32:54 Diskstation [26296]: disk/disk_sql_cmd_valid_check.c:24 Invalid string ../../../34:0:0:0
Jan 15 16:32:54 Diskstation [26296]: disk/disk_error_status_get.c:50 Invalid serial number

 

Does that "Invalid serial number" ring a bell? 
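One hedged way to see what serial numbers the disks actually report (smartctl -i prints the drive identity data; VMware virtual disks sometimes expose no serial at all, which could explain messages like this):

for d in /dev/sdae /dev/sdag /dev/sdah /dev/sdai; do
    echo "== $d =="
    smartctl -i "$d" | grep -i 'serial'
done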

 

Link to comment
Share on other sites

12 hours ago, Orphée said:

When I had the issue, I deleted the folder's contents (I kept the folder itself but cleaned everything inside it, plus another temp folder), and I was then able to relaunch the DSM junior install.

These 3 folders can be deleted. I'm thinking about adding this to the bootloader:

mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0/
rm -rf /mnt/md0/@autoupdate/*
rm -rf /mnt/md0/upd@te/*
rm -rf /mnt/md0/.log.junior/*
umount /mnt/md0/
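A minimal sketch of the same cleanup as a single script with a mount check added (the folder names are exactly those above; everything else is an assumption about how one might package it):

#!/bin/sh
# Hedged sketch: clear DSM update leftovers from the system partition (md0).
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0 || { echo "could not mount /dev/md0"; exit 1; }
for d in @autoupdate upd@te .log.junior; do
    [ -d "/mnt/md0/$d" ] && rm -rf "/mnt/md0/$d"/*
done
umount /mnt/md0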

 

Link to comment
Share on other sites

These 3 folders can be deleted. I'm thinking about adding this to the bootloader:

mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0/
rm -rf /mnt/md0/@autoupdate/*
rm -rf /mnt/md0/upd@te/*
rm -rf /mnt/md0/.log.junior/*
umount /mnt/md0/

 


Could this script also eliminate the repeated need to recover a damaged SAN Manager?


Link to comment
Share on other sites

3 hours ago, Peter Suh said:


Could this script also eliminate the repeated need to recover a damaged SAN Manager?



Probably not, it just solves the problem of insufficient space that may occur during the installation process.

Link to comment
Share on other sites

With arc-24.1.14f.vmdk-flat I don't get SAN Manager corrupted in any way.

The SAN Manager corruption issue lasted for a while because the mac-spoof addon was forcibly included when the M-Shell loader was built between December 28, 2023 and January 3, 2024. There is no problem now.


Link to comment
Share on other sites

1 hour ago, Peter Suh said:


The SAN Manager corruption issue lasted for a while because the mac-spoof addon was forcibly included when the M-Shell loader was built between December 28, 2023 and January 3, 2024. There is no problem now.



I would venture to say something unusual is going on... Yesterday, when I decided to restart my 920+ and "build loader" so it would pull any updates you might have released, I noticed a glitch. When all 4 status windows opened, in the window that shows the SN / MAC, the SN field was blank, although at a quick glance the rest looked fine. I have a real SN / MAC. I built and restarted, and it all came back as expected, with the correct SN / MAC in Info Center. Since I was able to sign in at the static IP assigned to that machine, I assume the SN came from DSM, not the loader. There's more... but I'll share that in a bit. FYI

Edited by gericb
Link to comment
Share on other sites

@Peter Suh yesterday I upgraded XPEnology from 7.1.1 (Jun's mod) to 7.2.1 using your latest tinycore-redpill-m-shell release, and I found a HUGE issue - the modules required by ScsiTarget (SAN Manager) are not loaded, and the Virtual Machine Manager service does not work.

It's easy to reproduce (on bare metal and Proxmox):

1) Install fresh TCRP: I tested DS920+ and DS923+ with DDSML.

2) SAN Manager is installed by default (and it's working), so install Virtual Machine Manager.

3) Reboot XPEnology.

4) SAN Manager and VMM show warnings (both are stopped and cannot be repaired).

 


 

In /var/log/iscsi.log we can find:

Quote

2024-01-17T21:43:57+01:00 NAS-ITX synoiscsiep[15029]: iscsi_lun_service_start_all.cpp:90:SYNOiSCSILunServiceStartAllWithoutLock Mounting configfs on /config
2024-01-17T21:43:57+01:00 NAS-ITX synoiscsiep[15029]: iscsi_lun_service_start_all.cpp:105:SYNOiSCSILunServiceStartAllWithoutLock mkdir(/config/target/core/iblock_0, 448), err=No such file or directory
2024-01-17T21:43:57+01:00 NAS-ITX synoiscsiep[15029]: iscsi_start_all.cpp:22:SYNOiSCSIStartAllWithoutLock SYNOiSCSILunServiceStartAllWithoutLock(), err=Failed to create directory
2024-01-17T21:43:57+01:00 NAS-ITX synoiscsiep[15029]: iscsi_start_all.cpp:115:SYNOiSCSIStartAll SYNOiSCSIStartAllWithoutLock(), err=Failed to create directory
2024-01-17T21:43:57+01:00 NAS-ITX synoiscsiep[15032]: iscsi_lun_service_start_all.cpp:105:SYNOiSCSILunServiceStartAllWithoutLock mkdir(/config/target/core/iblock_0, 448), err=No such file or directory
2024-01-17T21:43:57+01:00 NAS-ITX synoiscsiep[15032]: vhost_scsi_start_all.cpp:13:SYNOiSCSIVhostStartAllWithoutLock SYNOiSCSILunServiceStartAllWithoutLock(), err=Failed to create directory
2024-01-17T21:43:57+01:00 NAS-ITX synoiscsiep[15032]: vhost_scsi_start_all.cpp:39:SYNOiSCSIVhostStartAll SYNOiSCSIVhostStartAllWithoutLock(), err=Failed to start service
2024-01-17T21:43:59+01:00 NAS-ITX synoiscsiep[15035]: iscsi_lun_service_start_all.cpp:105:SYNOiSCSILunServiceStartAllWithoutLock mkdir(/config/target/core/iblock_0, 448), err=No such file or directory
2024-01-17T21:43:59+01:00 NAS-ITX synoiscsiep[15035]: iscsi_lun_service_start_all.cpp:145:SYNOiSCSILunServiceStartAll SYNOiSCSILunServiceStartAllWithoutLock(), err=Failed to create directory
2024-01-17T21:43:59+01:00 NAS-ITX synoiscsiep[15045]: iscsi_lun_service_start_all.cpp:105:SYNOiSCSILunServiceStartAllWithoutLock mkdir(/config/target/core/iblock_0, 448), err=No such file or directory
2024-01-17T21:43:59+01:00 NAS-ITX synoiscsiep[15045]: iscsi_loopback_start_all.cpp:29:SYNOiSCSILoopbackStartAll SYNOiSCSILunServiceStartAllWithoutLock(), err=Failed to create directory
2024-01-17T21:43:59+01:00 NAS-ITX synoiscsiep[15074]: iscsi_lun_service_start_all.cpp:105:SYNOiSCSILunServiceStartAllWithoutLock mkdir(/config/target/core/iblock_0, 448), err=No such file or directory
2024-01-17T21:43:59+01:00 NAS-ITX synoiscsiep[15074]: fc_start_all.cpp:53:SYNOFCStartAll SYNOiSCSILunServiceStartAllWithoutLock(), err=Failed to create directory

 

On a working Synology NAS, I found the following in the logs:

Quote

2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'target_core_mod' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'target_core_iblock' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'target_core_file' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'target_core_multi_file' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'target_core_ep' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'target_core_user' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_lun_service_start_all.cpp:90:SYNOiSCSILunServiceStartAllWithoutLock Mounting configfs on /config
2023-12-21T13:25:39+01:00 NAS-ITX kernel: target_core_file.c:152:fd_attach_hba RODSP plugin for fileio is enabled.
2023-12-21T13:25:39+01:00 NAS-ITX kernel: target_core_file.c:159:fd_attach_hba ODX Token Manager is enabled.
2023-12-21T13:25:39+01:00 NAS-ITX kernel: target_core_multi_file.c:91:fd_attach_hba RODSP plugin for multifile is enabled.
2023-12-21T13:25:39+01:00 NAS-ITX kernel: target_core_ep.c:795:ep_attach_hba RODSP plugin for epio is enabled.
2023-12-21T13:25:39+01:00 NAS-ITX kernel: target_core_ep.c:802:ep_attach_hba ODX Token Manager is enabled.
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'iscsi_target_mod' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'tcm_loop' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12513]: iscsi_start_all.cpp:89:SYNOiSCSIStartAllWithoutLock Successfully started iSCSI service.
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12531]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'vhost' is loaded
2023-12-21T13:25:39+01:00 NAS-ITX synoiscsiep[12531]: iscsi_module_manage.cpp:56:insmod_if_module_file_exist Module 'vhost_scsi' is loaded

 

Those modules ARE NOT loaded right now. But when I load all of them manually via "modprobe" and then repair the SAN Manager package, it works!! But only until the next reboot :(
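A quick hedged check for which of these modules are actually present after a reboot (the name patterns come from the working-NAS log above):

lsmod | grep -E 'target_core|iscsi_target|tcm_loop|vhost'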

 

In /var/log/messages I found:

Quote

2024-01-17T21:43:36+01:00 NAS-ITX kernel: [   32.263150] scsi_mod: exports duplicate symbol __scsi_add_device (owned by kernel)
2024-01-17T21:43:36+01:00 NAS-ITX kernel: [   32.477481] scsi_mod: exports duplicate symbol __scsi_add_device (owned by kernel)
2024-01-17T21:43:36+01:00 NAS-ITX kernel: [   32.522720] scsi_mod: exports duplicate symbol __scsi_add_device (owned by kernel)
2024-01-17T21:43:36+01:00 NAS-ITX kernel: [   32.598157] scsi_mod: exports duplicate symbol __scsi_add_device (owned by kernel)

 

Maybe this will be a clue.

 

Then I tried building RedPill with EUDEV selected instead of DDSML, but the result was the same. With DDSML+EUDEV, still the same.

 

Another thing I didn't understand is why, under Proxmox, Synology detects the synoboot drive as the first disk. There should be 2 drives, not 3.


 


 

On bare metal this issue does not appear.

 

Then I downloaded tinycore-m-shell 1.0.0.0, mounted it as sata0, and compiled DS920+ DDSML.

SAN Manager still doesn't work, BUT at least this fixed the drive list (the synoboot disk disappeared from the list).

 



 

The last thing I did was compile DS923+ DDSML and migrate from the DS920+. On the first boot after migration, SAN Manager works (VMM too)! But after a reboot both failed.

 

I gave up 😕

 

Edited by shibby
Link to comment
Share on other sites

24 minutes ago, shibby said:

@Peter Suh yesterday I upgraded XPEnology from 7.1.1 (Jun's mod) to 7.2.1 using your latest tinycore-redpill-m-shell release, and I found a HUGE issue - the modules required by ScsiTarget (SAN Manager) are not loaded, and the Virtual Machine Manager service does not work. [full report quoted above]

 

 

Using the clues you gave, let me create a script that reloads the SAN Manager-related modules with modprobe.
I hope this helps users who are currently struggling with this issue.
Thank you.

Link to comment
Share on other sites

I used these for testing before writing the script.
SAN Manager is restored properly!!

 

modprobe target_core_mod
modprobe target_core_iblock
modprobe target_core_file
modprobe target_core_multi_file
modprobe target_core_user
modprobe iscsi_target_mod
modprobe tcm_loop
modprobe vhost
modprobe vhost_scsi

modprobe target_core_ep (modprobe: ERROR: could not insert 'target_core_ep': Operation not permitted)
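A minimal sketch of how these could be wrapped into a boot-time script - this is a hedged illustration of the idea, not the actual sanrepair.sh linked later in the thread; target_core_ep is skipped because it refused to load above:

#!/bin/sh
# Hedged sketch: reload the SAN Manager / VMM kernel modules after boot.
# target_core_ep is left out because it failed with "Operation not permitted" above.
for m in target_core_mod target_core_iblock target_core_file \
         target_core_multi_file target_core_user iscsi_target_mod \
         tcm_loop vhost vhost_scsi; do
    modprobe "$m" || echo "failed to load $m" >&2
done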

Link to comment
Share on other sites

2 hours ago, shibby said:

@Peter Suh yesterday I upgraded XPEnology from 7.1.1 (Jun's mod) to 7.2.1 using your latest tinycore-redpill-m-shell release, and I found a HUGE issue - the modules required by ScsiTarget (SAN Manager) are not loaded, and the Virtual Machine Manager service does not work. [full report quoted above]

 

 

 

It's a success!!!

 

The main script contents are as follows.

https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/sanmanager-repair/src/sanrepair.sh

If you rebuild the M-Shell loader from this point on, SAN Manager will be restored. ^^
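After rebuilding and rebooting, a hedged way to confirm the repair took effect (synopkg is DSM's package CLI; the ScsiTarget package name is taken from shibby's report):

# kernel side: the target/iSCSI modules should now be loaded
lsmod | grep -E 'target_core|iscsi_target|tcm_loop|vhost'
# package side: SAN Manager should report that it is running
synopkg status ScsiTarget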

 

Thanks @shibby !!!

 

Link to comment
Share on other sites
