XPEnology Community


Posts posted by lyter

  1. @IG-88, I'm really sorry about this, but for whatever reason the VM config no longer contained the passthrough of the USB controller. Even older backups of the config didn't contain it. So I've added it again and voilà, the USB drives are recognized!

    Really have no idea how this could have happened, nor how the backups to the USB drives could still have worked without it.

    But I've learned how to recover an array through a live linux, so thanks for that! And for your patience!

  2. Thanks @IG-88, you're a life saver!🤝 I reset the synoinfo.confs and set your suggested values for 20 drives.

    The result is still that the USB drives don't show up. But they also did not show up right after I reset the confs. So any idea why the USB drives are still missing?

     

    I know for a fact that the USB drives were detected before I first changed the confs to extend the disk limit, because the daily backups ran successfully right up to that point. I did not change anything in the VM settings, so that should be fine as well. I've also detached and re-attached the USB drives multiple times.

     

  3.  

    @IG-88, thanks for the help so far, mate! Really appreciated! ;)

    So I ran your recommendation:

    root@ubuntu:~# mdadm --examine /dev/sd[abcdf]1 | egrep 'Event|/dev/sd'
    mdadm: No md superblock detected on /dev/sdb1.
    mdadm: No md superblock detected on /dev/sdf1.
    /dev/sda1:
             Events : 13363196
    /dev/sdc1:
             Events : 13363196
    /dev/sdd1:
             Events : 13363196

     

    What exactly does this mean?

     

    EDIT: I just rebooted the live Ubuntu, because various commands like "cat /proc/mdstat" returned no results. Now the array seems to be active under md0:

    
    root@ubuntu:~# cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md4 : active raid1 sdb3[0]
          9761614848 blocks super 1.2 [1/1] [U]
    
    md2 : active raid1 sdf3[0]
          100035584 blocks super 1.2 [1/1] [U]
    
    md3 : active raid5 sda3[3] sdd3[2] sdc3[1]
          11711401088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
    
    md0 : active raid1 sda1[1] sdc1[3] sdd1[2]
          2490176 blocks [12/3] [_UUU________]
    
    unused devices: <none>
    root@ubuntu:~# mdadm --detail /dev/md0
    /dev/md0:
               Version : 0.90
         Creation Time : Wed Aug 28 18:21:19 2019
            Raid Level : raid1
            Array Size : 2490176 (2.37 GiB 2.55 GB)
         Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
          Raid Devices : 12
         Total Devices : 3
       Preferred Minor : 0
           Persistence : Superblock is persistent
    
           Update Time : Sun Jan  3 22:28:51 2021
                 State : clean, degraded
        Active Devices : 3
       Working Devices : 3
        Failed Devices : 0
         Spare Devices : 0
    
    Consistency Policy : resync
    
                  UUID : d81e429a:6dda9ebf:3017a5a8:c86610be
                Events : 0.13363196
    
        Number   Major   Minor   RaidDevice State
           -       0        0        0      removed
           1       8        1        1      active sync   /dev/sda1
           2       8       49        2      active sync   /dev/sdd1
           3       8       33        3      active sync   /dev/sdc1
           -       0        0        4      removed
           -       0        0        5      removed
           -       0        0        6      removed
           -       0        0        7      removed
           -       0        0        8      removed
           -       0        0        9      removed
           -       0        0       10      removed
           -       0        0       11      removed
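    The "[12/3] [_UUU________]" line in the mdstat output above can be read character by character; here's a small sketch (bash, just illustrating the notation, not anything to run against the array):

```shell
# Decode an mdstat status like "[12/3] [_UUU________]":
# 12 = configured raid slots, 3 = currently active members, and in the
# bracket pattern each character is one slot: U = in sync, _ = missing.
status='_UUU________'
slots=${#status}
active=$(( $(printf '%s' "$status" | tr -cd 'U' | wc -c) ))
echo "$active of $slots members present"   # -> 3 of 12 members present
```

    So md0 itself is fine as a degraded 12-way mirror with 3 members, which matches the "clean, degraded" state in the --detail output.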

     

    Mounting the array still doesn't seem to work, though:

    
    root@ubuntu:~# mdadm -Ee0.swap /dev/sda1 /dev/sdc1 /dev/sdd1
    mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
    mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got fc4e2ba9)
    mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got fc4e2ba9)
    
    
    
    root@ubuntu:~# mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
    mdadm: no RAID superblock on /dev/sda1
    mdadm: /dev/sda1 has no superblock - assembly aborted
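    A side note on those errors: "got fc4e2ba9" is just the expected v0.90 magic a92b4efc with its four bytes in reverse order, i.e. the superblock is being read in the opposite byte order from the one the command assumes -- which is exactly the situation the 0.swap/byteorder options are meant for. A quick sketch showing the relationship:

```shell
# Reverse the four bytes of a 32-bit hex value, as happens when a v0.90
# superblock is read in the wrong byte order.
swap32() {
  printf '%s\n' "$1" | sed 's/^\(..\)\(..\)\(..\)\(..\)$/\4\3\2\1/'
}
swap32 a92b4efc   # -> fc4e2ba9 (the "got" value in the errors above)
```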

     

  4. @IG-88, thanks for the pointers. Still, I'm not able to mount the drives.

     

    So, these are the results of several commands when running Ubuntu Live:

    root@ubuntu:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [raid1]
    md127 : inactive sdc1[3](S) sdd1[2](S) sda1[1](S)
          7470528 blocks
    
    md2 : active raid1 sdf3[0]
          100035584 blocks super 1.2 [1/1] [U]
    
    md4 : active raid1 sdb3[0]
          9761614848 blocks super 1.2 [1/1] [U]
    
    md3 : active raid5 sda3[3] sdd3[2] sdc3[1]
          11711401088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
    
    unused devices: <none>
    
    ------
      
    root@ubuntu:~# fdisk -l | grep /dev/sd
    GPT PMBR size mismatch (102399 != 106495) will be corrected by write.
    Disk /dev/sda: 5.47 TiB, 6001175126016 bytes, 11721045168 sectors
    /dev/sda1      256     4980735     4980480  2.4G Linux RAID
    /dev/sda2  4980736     9175039     4194304    2G Linux RAID
    /dev/sda3  9437184 11720840351 11711403168  5.5T Linux RAID
    Disk /dev/sdc: 5.47 TiB, 6001175126016 bytes, 11721045168 sectors
    /dev/sdc1      256     4980735     4980480  2.4G Linux RAID
    /dev/sdc2  4980736     9175039     4194304    2G Linux RAID
    /dev/sdc3  9437184 11720840351 11711403168  5.5T Linux RAID
    Disk /dev/sdd: 5.47 TiB, 6001175126016 bytes, 11721045168 sectors
    /dev/sdd1      256     4980735     4980480  2.4G Linux RAID
    /dev/sdd2  4980736     9175039     4194304    2G Linux RAID
    /dev/sdd3  9437184 11720840351 11711403168  5.5T Linux RAID
    Disk /dev/sdb: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
    /dev/sdb1     2048     4982527     4980480  2.4G Linux RAID
    /dev/sdb2  4982528     9176831     4194304    2G Linux RAID
    /dev/sdb3  9437184 19532668927 19523231744  9.1T Linux RAID
    The backup GPT table is not on the end of the device. This problem will be corrected by write.
    Disk /dev/sde: 52 MiB, 54525952 bytes, 106496 sectors
    /dev/sde1   2048  32767   30720  15M EFI System
    /dev/sde2  32768  94207   61440  30M Linux filesystem
    /dev/sde3  94208 102366    8159   4M BIOS boot
    Disk /dev/sdf: 100 GiB, 107374182400 bytes, 209715200 sectors
    /dev/sdf1          2048   4982527   4980480  2.4G fd Linux raid autodetect
    /dev/sdf2       4982528   9176831   4194304    2G fd Linux raid autodetect
    /dev/sdf3       9437184 209510399 200073216 95.4G fd Linux raid autodetect
      
    -- details for md127 (raid0 somehow does not seem right though...)
    root@ubuntu:~# mdadm --detail /dev/md127
    /dev/md127:
               Version : 0.90
            Raid Level : raid0
         Total Devices : 3
       Preferred Minor : 0
           Persistence : Superblock is persistent
    
                 State : inactive
       Working Devices : 3
    
                  UUID : d81e429a:6dda9ebf:3017a5a8:c86610be
                Events : 0.13363196
    
        Number   Major   Minor   RaidDevice
    
           -       8        1        -        /dev/sda1
           -       8       49        -        /dev/sdd1
           -       8       33        -        /dev/sdc1

     

    So, if I follow the tutorial, I get the following:

    -- 3 RAID5 disks --
    root@ubuntu:~# mdadm -Ee0.swap /dev/sda1 /dev/sdc1 /dev/sdd1
    mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
    mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got fc4e2ba9)
    mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got fc4e2ba9)
    
    -- 3 RAID5 disks + 10TB disk
    root@ubuntu:~# mdadm -Ee0.swap /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
    /dev/sdb1:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : d81e429a:6dda9ebf:3017a5a8:c86610be
      Creation Time : Wed Aug 28 18:21:19 2019
         Raid Level : raid1
      Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
         Array Size : 2490176 (2.37 GiB 2.55 GB)
       Raid Devices : 12
      Total Devices : 4
    Preferred Minor : 0
    
        Update Time : Sun Jan  3 22:28:51 2021
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 7
      Spare Devices : 0
           Checksum : a6b8c521 - correct
             Events : 13363196
    
    
          Number   Major   Minor   RaidDevice State
    this     4       8      113        4      active sync
    
       0     0       0        0        0      removed
       1     1       8       97        1      active sync
       2     2       8      145        2      active sync
       3     3       8      129        3      active sync
       4     4       8      113        4      active sync
       5     5       0        0        5      faulty removed
       6     6       0        0        6      faulty removed
       7     7       0        0        7      faulty removed
       8     8       0        0        8      faulty removed
       9     9       0        0        9      faulty removed
      10    10       0        0       10      faulty removed
      11    11       0        0       11      faulty removed
    mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got fc4e2ba9)
    mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got fc4e2ba9)

     

    The "no super block found" part doesn't seem good. Any ideads? Should i use md127 instead of md0 from the tutorial?

  5. @IG-88, thanks for the patience. Yes, totally my bad.🙃

     

    Just tried the tutorial; not sure how I need to proceed in my case. I have 4 volumes:

    - vol 1: ssd 100GB for systems stuff and documents (basic)

    - vol 2: 3x6TB disks in RAID5 (the one that should be replaced)

    - vol 3: 10TB disk for archive (basic)

    - vol 4: 3x16TB disks in RAID5 (new)

     

    So I thought I'd run steps 7 & 8 for vol 2, but I get the following message at step 8.

    root@ubuntu:# mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1
    mdadm: /dev/md0 assembled from 3 drives - need 4 to start (use --run to insist).

    Would you happen to know how I need to proceed here?

  6. @IG-88, I restored the original synoinfo.confs and then just edited the maxdisks parameter. Which of course was stupid, but I wouldn't have expected DSM to stop working entirely...

    So I'll try the tutorial in the link tomorrow. Hope I'll be able to fix it...🤔

    So what are the params you'd recommend? Don't I need to adjust all 4 params to fit 16 disks instead of 12?

    == original conf for 12 disks ==
    esataportcfg="0xff000"
    usbportcfg="0x300000"
    internalportcfg="0xfff"


     

  7. I was running an XPEnology VM with DSM 6.2.2-24922 Update 4 under Proxmox. I have 3 HDDs in RAID5, 1 HDD in Basic and 1 SSD in Basic, plus 3 external USB HDDs for various backups. This was working fine so far.

    Now I bought 3x16TB HDDs to replace the old RAID5 array, since those disks are getting old and the volume is getting too small. After I connected them, only 2 of the 3 new disks were shown in DSM. With the disk count hitting 12 in Storage Manager > HDD/SSD (even though I don't really understand why Drives 3-6 are skipped), I realised that the maximum of 12 disks was reached, and therefore I modified the synoinfo.confs (both in /etc.defaults/ and /etc/) to the following:

    maxdisks="24"
    internalportcfg="0xffffff"
    usbportcfg="0xf000000"
    esataportcfg="0x0" (since I don't have any)
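    For what it's worth, these three port-config values are plain bitmasks with one set bit per drive slot, so they can be sanity-checked by counting bits. A small sketch (bash), using the values above:

```shell
# Each synoinfo port-config value is a bitmask: one set bit = one drive
# slot. The masks must not overlap; counting the bits gives the slot
# totals. The values below are the modified ones from this post.
count_bits() {
  local n=$(( $1 )) c=0
  while [ "$n" -gt 0 ]; do c=$(( c + (n & 1) )); n=$(( n >> 1 )); done
  echo "$c"
}
count_bits 0xffffff    # internalportcfg -> 24 slots (bits 0-23)
count_bits 0xf000000   # usbportcfg      -> 4 slots  (bits 24-27)
count_bits 0x0         # esataportcfg    -> 0 slots
```

    The stock 12-disk layout decodes the same way: internalportcfg 0xfff is bits 0-11 (12 internal), esataportcfg 0xff000 is bits 12-19 (8 eSATA), usbportcfg 0x300000 is bits 20-21 (2 USB).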
    


    With these settings all the internal disks are shown correctly, but none of the external USB disks are recognised anymore in Control Panel > External Devices. I've done multiple reboots.

    What am I missing here? Thanks for any leads to solve this.

    dsm-disks.PNG

  8. Thanks. No idea what the problem is over here. I have a pretty similar config. The VM behaves as if it lost all boot devices once the controller is detected and the VM is rebooted.

    If I had the same problem with DSM 5.2 the case would be clear, but since that is still working like a charm I'm very confused.

    Would be super happy if someone had a hint.

  9. I got a Dell H310 flashed to LSI 9211-8i IT mode and passed through under Proxmox for XPEnology. It works without issue. Maybe try updating the firmware on the RAID card (I believe that 20a is the newest for this card).

    Could you post your VM.conf here? :grin:

    What firmware are you using for the controller? I'm using P19, which should be the second-to-last firmware.
