PeteyNice Posted June 3, 2022 #1 (edited)

I recently used RedPill to migrate from 6.2.3. Now Storage Manager will only recognize the drive in Bay 1 (whichever drive I put in there). The other drives show up if I ssh in and check fdisk, they pass smartctl, etc., so they exist. When in Bay 1, Storage Manager reports each drive as healthy. I am ready to format one of the drives in the hope that that will cause Syno to accept it and rebuild the array, but I wanted to see if there were less drastic things I should try first.

Hardware: ASRock J5040-ITX
Drives: 2x 8 TB WD
Loader: RedPill geminilake-7.1-42661 update 2

fdisk (identifiers changed):

Disk /dev/sata2: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EDAZ-11TA3A0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: DDDDDDD-RRRR-SSSS-YYYY-XXXXXXXXX

Device         Start         End     Sectors  Size Type
/dev/sata2p1    2048     4982527     4980480  2.4G Linux RAID
/dev/sata2p2 4982528     9176831     4194304    2G Linux RAID
/dev/sata2p5 9453280 15627846239 15618392960  7.3T Linux RAID

Disk /dev/sata1: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: WD80EDAZ-11TA3A0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: OOOOOOO-PPPP-7777-8888-AAAAAAAAAAAA

Device         Start         End     Sectors  Size Type
/dev/sata1p1    2048     4982527     4980480  2.4G Linux RAID
/dev/sata1p2 4982528     9176831     4194304    2G Linux RAID
/dev/sata1p5 9453280 15627846239 15618392960  7.3T Linux RAID

Thanks!

Edited June 3, 2022 by PeteyNice
flyride Posted June 4, 2022 #2

You are saying that you installed DSM to one of your array members and went forward without a Migration install? That means that DSM told you it was going to erase your disk on install. And if so, it did. If that's true, you no longer have an array set. Is your data intact? Maybe post some screenshots of Storage Manager, and maybe a cat /proc/mdstat, if this doesn't make sense.
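For readers following along, here is a sketch of what cat /proc/mdstat reports when a two-disk Synology array is healthy. The sample below is hypothetical output written for illustration (device names follow the sataXp1/p2/p5 partition layout in the fdisk output above); it is not output from the poster's system:

```shell
# Hypothetical illustration: healthy two-disk Synology mdstat output.
# md0 = DSM system, md1 = swap, md2 = data, per the usual Synology layout.
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid1]
md2 : active raid1 sata1p5[0] sata2p5[1]
      7809196480 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sata1p2[0] sata2p2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sata1p1[0] sata2p1[1]
      2490176 blocks [2/2] [UU]
EOF

# All three md devices should show [UU]; on a real system you would run
# "cat /proc/mdstat" directly instead of grepping this sample file.
grep -c '\[UU\]' /tmp/mdstat.sample
```

A degraded array instead shows [2/1] [U_] for the affected md device, or the second member's partition is missing from the line entirely.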
PeteyNice Posted June 4, 2022 Author #3

Both drives were installed when I did the Migration. When it rebooted, only the drive in Bay 1 showed up in Storage Manager. The data is intact: whichever drive I put in Bay 1 shows me all of my data. Here are some screenshots.
flyride Posted June 4, 2022 #4

Post the results of cat /proc/cmdline. Assuming you have installed DS920+, also post the dump of your dtb:

sudo -i
cd /usr/bin
curl --location --progress-bar https://github.com/jumkey/redpill-load/raw/develop/redpill-dtb/releases/dtc -O
chmod 700 dtc
dtc -I dtb -O dts /var/run/model.dtb
PeteyNice Posted June 4, 2022 Author #5

$ cat /proc/cmdline
BOOT_IMAGE=/zImage syno_hw_version=DS920+ console=ttyS0,115200n8 netif_num=1 synoboot2 earlycon=uart8250,io,0x3f8,115200n8 mac1=00113241BBBA sn=2040SBRQ3A5FS HddEnableDynamicPower=1 intel_iommu=igfx_off DiskIdxMap=0002 vender_format_version=2 root=/dev/md0 SataPortMap=22 syno_ttyS1=serial,0x2f8 syno_ttyS0=serial,0x3f8

sh-4.4# dtc -I dtb -O dts /var/run/model.dtb
<stdout>: Warning (unit_address_vs_reg): /DX517/pmp_slot@1: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /DX517/pmp_slot@2: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /DX517/pmp_slot@3: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /DX517/pmp_slot@4: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /DX517/pmp_slot@5: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /internal_slot@1: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /internal_slot@2: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /internal_slot@3: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /internal_slot@4: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /esata_port@1: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /usb_slot@1: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /usb_slot@2: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /nvme_slot@1: node has a unit name, but no reg or ranges property
<stdout>: Warning (unit_address_vs_reg): /nvme_slot@2: node has a unit name, but no reg or ranges property

/dts-v1/;

/ {
	compatible = "Synology";
	model = "synology_geminilake_920+";
	version = <0x01>;
	syno_spinup_group = <0x02 0x01 0x01>;
	syno_spinup_group_delay = <0x0b>;
	syno_hdd_powerup_seq = "true";
	syno_cmos_reg_secure_flash = <0x66>;
	syno_cmos_reg_secure_boot = <0x68>;

	DX517 {
		compatible = "Synology";
		model = "synology_dx517";
		pmp_slot@1 { libata { EMID = <0x00>; pmp_link = <0x00>; }; };
		pmp_slot@2 { libata { EMID = <0x00>; pmp_link = <0x01>; }; };
		pmp_slot@3 { libata { EMID = <0x00>; pmp_link = <0x02>; }; };
		pmp_slot@4 { libata { EMID = <0x00>; pmp_link = <0x03>; }; };
		pmp_slot@5 { libata { EMID = <0x00>; pmp_link = <0x04>; }; };
	};

	internal_slot@1 {
		protocol_type = "sata";
		power_pin_gpio = <0x14 0x00>;
		detect_pin_gpio = <0x23 0x01>;
		led_type = "lp3943";
		ahci { pcie_root = "00:12.0"; ata_port = <0x00>; };
		led_green { led_name = "syno_led0"; };
		led_orange { led_name = "syno_led1"; };
	};

	internal_slot@2 {
		protocol_type = "sata";
		power_pin_gpio = <0x15 0x00>;
		detect_pin_gpio = <0x24 0x01>;
		led_type = "lp3943";
		ahci { pcie_root = "00:13.3,00.0"; ata_port = <0x02>; };
		led_green { led_name = "syno_led2"; };
		led_orange { led_name = "syno_led3"; };
	};

	internal_slot@3 {
		protocol_type = "sata";
		power_pin_gpio = <0x16 0x00>;
		detect_pin_gpio = <0x25 0x01>;
		led_type = "lp3943";
		ahci { pcie_root = "00:12.0"; ata_port = <0x02>; };
		led_green { led_name = "syno_led4"; };
		led_orange { led_name = "syno_led5"; };
	};

	internal_slot@4 {
		protocol_type = "sata";
		power_pin_gpio = <0x17 0x00>;
		detect_pin_gpio = <0x26 0x01>;
		led_type = "lp3943";
		ahci { pcie_root = "00:12.0"; ata_port = <0x03>; };
		led_green { led_name = "syno_led6"; };
		led_orange { led_name = "syno_led7"; };
	};

	esata_port@1 {
		ahci { pcie_root = "00:13.0,00.0"; ata_port = <0x03>; };
	};

	usb_slot@1 {
		vbus { syno_gpio = <0x1d 0x01>; };
		usb2 { usb_port = "1-3"; };
		usb3 { usb_port = "2-1"; };
	};

	usb_slot@2 {
		vbus { syno_gpio = <0x1e 0x01>; };
		usb2 { usb_port = "1-2"; };
		usb3 { usb_port = "2-2"; };
	};

	nvme_slot@1 { pcie_root = "00:14.1"; port_type = "ssdcache"; };
	nvme_slot@2 { pcie_root = "00:14.0"; port_type = "ssdcache"; };
};
sh-4.4#

I also tried other random HDDs I have lying around and see the same thing. Any disk not in Bay 1 is not shown in Storage Manager but is available in the shell.
flyride Posted June 4, 2022 #6

I can see what is probably an issue with the device tree.

Port 1: ahci { pcie_root = "00:12.0"; ata_port = <0x00>; };
Port 2: ahci { pcie_root = "00:13.3,00.0"; ata_port = <0x02>; };
Port 3: ahci { pcie_root = "00:12.0"; ata_port = <0x02>; };
Port 4: ahci { pcie_root = "00:12.0"; ata_port = <0x03>; };

You have two different SATA controllers on that motherboard, each with two ports connected. Device tree port #1 is valid, but I think the rest of the ports are bad. If I had to guess, I would configure them like this:

Port 2: ahci { pcie_root = "00:12.0"; ata_port = <0x01>; };
Port 3: ahci { pcie_root = "00:13.3,00.0"; ata_port = <0x00>; };
Port 4: ahci { pcie_root = "00:13.3,00.0"; ata_port = <0x01>; };

The PCI address of the second controller might be wrong (I think 00:13.3,00.0 is the actual DS920+ second-controller address). You might want to confirm its PCI address with:

lspci -d::106
ls -la /sys/class/ata_port

The device tree script is not very sophisticated yet, which is why the 7.x Loaders and Platforms matrix continues to suggest DS918+ as a preferred platform. The script can only provision ports that are populated at the time of the loader build. I'm guessing you had only one disk in at the time of the loader build, and then put in your second drive on the DSM install boot.

You can fix this by booting into TinyCore, then:

./rploader.sh restoresession geminilake-7.x.x-xxxxx
# edit the ds920+.dts file to reflect the port suggestions above
vi /home/tc/redpill-load/custom/extensions/redpill-dtb/ds920p_xxxxx/model_ds920p.dtb
./rploader.sh clean
./rploader.sh build geminilake-7.x.x-xxxxx

It shouldn't require a DSM reinstallation, just a loader rebuild. If it worked, you will see the second drive in Storage Manager. You'll need to add it to the array and resync.
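Pulling flyride's three suggested corrections together, the edited ahci sub-nodes in the DS920+ dts would look roughly like this. This is a sketch, not a confirmed configuration: the 00:13.3,00.0 address still needs to be verified with lspci on the actual board, and all the surrounding properties (gpio, led) stay exactly as in the dump above:

```
internal_slot@2 {
	/* ...power/detect/led properties unchanged... */
	ahci {
		pcie_root = "00:12.0";      /* second port of the first controller */
		ata_port = <0x01>;
	};
};

internal_slot@3 {
	ahci {
		pcie_root = "00:13.3,00.0"; /* unverified: assumed second controller */
		ata_port = <0x00>;
	};
};

internal_slot@4 {
	ahci {
		pcie_root = "00:13.3,00.0"; /* unverified: assumed second controller */
		ata_port = <0x01>;
	};
};
```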
PeteyNice Posted June 4, 2022 Author #7 (edited)

Actually, I found a file that looks like that at /home/tc/ds920.dts. Editing it now. It required a re-install; I will pick it up tomorrow. Thanks!

Edited June 4, 2022 by PeteyNice
flyride Posted June 4, 2022 #8

/home/tc is a reference copy. If you edit that copy, it will have no impact on your system.
PeteyNice Posted June 4, 2022 Author #9

It had some impact, because when I rebooted it came up as "Not Installed". Anyway, I abandoned DS920+/geminilake and went back to the beginning with DS918+/apollolake. That worked perfectly. Thank you for the wonderful guide and for reviewing my issue.
Bughunt Posted June 14, 2022 #10 (edited)

On 6/4/2022 at 9:57 AM, flyride said:
/home/tc is a reference copy. If you edit that copy, it will have no impact on your system.

Hi flyride. I was following this thread because I was experiencing a similar issue with my ASRock J4125: the two (ASMedia) SATA ports are not being mapped correctly using DS920+. (It works fine with DS918+, but on that build hardware transcoding fails under Jellyfin, even though I have /dev/dri and the i915.ko file installed...) Anyway, you stated:

You can fix this by booting into TinyCore, then:
./rploader.sh restoresession geminilake-7.x.x-xxxxx
# edit the ds920+.dts file to reflect the port suggestions above
vi /home/tc/redpill-load/custom/extensions/redpill-dtb/ds920p_xxxxx/model_ds920p.dtb
./rploader.sh clean
./rploader.sh build geminilake-7.x.x-xxxx

Unfortunately, when I followed this, the requested file was not in that directory (only recipes and release directories, without any .dtb files), so I'm not sure what happened there. Do I need to do a full rebuild somehow to get this? I've enclosed a .txt file with the system info I typically see requested: lspci -tvnnq, cat /proc/cmdline, ls -la /sys/class/ata_port, dtc -I dtb -O dts /var/run/model.dtb

920device info.txt

Edited June 14, 2022 by Bughunt
flyride Posted June 14, 2022 #11

There are many examples of this in the dev threads. But you can build a new loader, then, in the same session, edit the dts file and rebuild, with the same result.
Bughunt Posted June 15, 2022 #12

As suggested, I built a new DS920+ 7.0.1-42218-JUN mode build (./my.sh DS920+J jumkey), then upgraded to 7.1 via ./rploader.sh build geminilake-7.1.0-42661. After the reboot, all four disks were finally present and HW transcoding works. No editing needed.