
Raising Max Disks leads to USB disks no longer being shown


Solved by lyter

Question

I'm running an Xpenology VM with DSM 6.2.2-24922 Update 4 under Proxmox. I have 3 HDDs in RAID5, 1 HDD in Basic and 1 SSD in Basic, plus 3 external USB HDDs for various backups. This was all working fine so far.

Now I bought 3x16TB HDDs to replace the old RAID5 array, since the disks are getting old and the volume too small. After I connected them, only 2 of the 3 disks were shown in DSM. With the disk count hitting 12 in Storage Manager > HDD/SSD (even though I don't really understand why Drives 3-6 are skipped), I realised that the maximum of 12 disks was reached, and therefore I modified the synoinfo.conf files (both in /etc.defaults/ and /etc/) to the following:

maxdisks="24"
internalportcfg="0xffffff"
usbportcfg="0xf000000"
esataportcfg="0x0" (since I don't have any)
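If I understand these values correctly (my assumption: bit N set in a mask means drive slot N+1 belongs to that class), the masks above decompose like this:

```shell
# Assumed interpretation: one bit per drive slot, lowest bit = slot 1.
printf 'internalportcfg: 0x%x (slots 1-24)\n' $(( (1 << 24) - 1 ))
printf 'usbportcfg:      0x%x (slots 25-28)\n' $(( 0xf << 24 ))
```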


But with these settings, while all the internal disks are now shown correctly, none of the external USB disks are recognised anymore in Control Panel > External Devices. I've done multiple reboots.

What am I missing here? Thanks for any leads to solve this.

dsm-disks.PNG

Edited by lyter

Recommended Posts

  • Solution

@IG-88, I'm really sorry about this, but for whatever reason the config of the VM did not contain the passthrough of the USB controller anymore. Older backups of the config did not contain it either. So I've added it again and voilà, the USB drives are recognized!

I really have no idea how this could have happened, or how the backups to the USB drives kept succeeding without it.

But I've learned how to recover an array through a live Linux, so thanks for that! And for your patience!

1 hour ago, lyter said:

(even though I don't really understand why Drives 3-6 are skipped)

because your virtual controller has 6 ports, and dsm will use as disk count what the controller is capable of

you could check /var/log/dmesg to see that

i wrote something about that here a few hours ago

https://xpenology.com/forum/topic/39577-lsi-hba-passthrough-error-drives-not-showing-in-dsm/?do=findComment&comment=186609

 

1 hour ago, lyter said:

I realised that the max disks of 12 was reached and therefore I've modified the synoinfo.confs (both in /etc.defaults/ and /etc/) to the following:



maxdisks="24"
internalportcfg="0xffffff"
usbportcfg="0xf000000"
esataportcfg="0x0" (since I don't have any)


But with these settings, while all the internal disks are now shown correctly, none of the external USB disks are recognised anymore in Control Panel > External Devices. I've done multiple reboots.

What am I missing here? Thanks for any leads to solve this.

 

why go to 24? just use 16 or 20 and you don't even have to touch the usb value - that's the easiest way i guess

the default in 3615/17 is usb drives above slot 20 (usbportcfg="0x300000")

your mod looks ok; i've not experimented with max 24 and usb, but you could try to extend it to more possible usb ports/drives, like

usbportcfg="0xffff000000"

the dmesg log might have information about the usb ports and what's happening, but i guess max 16 or 20 drives will solve your problem much faster
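to check which drive slots a given mask actually covers, a little decoder sketch (just bash arithmetic, not a dsm tool, and assuming one bit per slot with the lowest bit = slot 1):

```shell
# decode a portcfg bitmask into 1-based drive slot numbers
slots() {
  local mask=$(( $1 )) i=0 out=""
  while (( mask )); do
    if (( mask & 1 )); then out+=" $(( i + 1 ))"; fi
    mask=$(( mask >> 1 ))
    i=$(( i + 1 ))
  done
  echo "$1 ->$out"
}
slots 0x300000       # default 3615/17 usbportcfg: slots 21 and 22
slots 0xffff000000   # extended usb mask: slots 25-40, above 24 internal slots
```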

Edited by IG-88
1 hour ago, lyter said:

Thanks @IG-88, just tried maxdisks = 16. But now DSM is not reachable anymore (web-interface, ssh, also not on Synology Assistant). Will have to find a way to fix this first...

you did just edit maxdisks? you would need to change "internalportcfg=" too, and set the usb one back to its original value

if you want to change the system partition then you would need to mount it as a raid1 set; you could use a rescue linux and boot your vm with it

https://xpenology.com/forum/topic/7004-tutorial-how-to-access-dsms-data-system-partitions/

 

30 minutes ago, lyter said:

So what are the params you'd recommend? Don't I need to adjust all 4 params to make them fit to 16 disks instead of 12?


== original conf for 12 disks ==
esataportcfg="0xff000"
usbportcfg="0x300000"
internalportcfg="0xfff"

 

 

yes, i assumed you would know how to do it, as you did it ok in your 1st try (which did not work for an unknown reason, but the numbers were right)

you would leave usb at its original value, as it was working before, and adjust the rest around it

maxdisks="20"

esataportcfg="0x0"

usbportcfg="0x300000"

internalportcfg="0xfffff"
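a quick way to sanity-check such a set of values (my own bash sketch, under the one-bit-per-slot assumption) is to count the internal bits and make sure the masks don't overlap:

```shell
# sanity-check the suggested 20-disk values with plain bash arithmetic
maxdisks=20 internalportcfg=0xfffff usbportcfg=0x300000 esataportcfg=0x0
mask=$(( internalportcfg )); bits=0
while (( mask )); do
  bits=$(( bits + (mask & 1) ))
  mask=$(( mask >> 1 ))
done
echo "internal slots: $bits (should equal maxdisks=$maxdisks)"
echo "internal/usb overlap: $(( internalportcfg & usbportcfg )) (should be 0)"
```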

 

11 minutes ago, lyter said:

Would you happen to know how I need to proceed here?

the new ones are not used yet (not initialized, i.e. no system on them), so they will not be part of it

i would have expected it to be 5, as the system is on every disk; you can just try adding one of the other 2 disks to the command

you should check what other disks are found besides sda, sdc and sdd


 

mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
or
mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1

in theory you can force the raid to start incomplete; that would leave disks out of it, and those disks would have a lower event count than the disks in the started raid, so they would no longer be part of the raid because of the mismatch. as it's raid1 there would be no harm - you could let dsm repair it later - but try to "find" all disks so that it starts in a normal state, that's safer

Edited by IG-88

 

try this

mdadm --examine /dev/sd[abcdf]1 | egrep 'Event|/dev/sd'

that should list the event counts of all 4 disks that should form the raid1 of the system partitions

in theory all should have the same; whichever has the highest number was in your last mounted raid1
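if the list is long, something like this can pick out the highest event count automatically (the sample numbers below are made up for illustration, not from a real array):

```shell
# Pick the partition with the highest Events counter from `mdadm --examine`
# style output (sample data is illustrative only).
examine=$'/dev/sda1:\n         Events : 13363190\n/dev/sdc1:\n         Events : 13363196\n/dev/sdd1:\n         Events : 13363193'
echo "$examine" | awk '/^\/dev\// { dev = $1 } /Events/ { print $3, dev }' | sort -n | tail -1
```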

 

9 hours ago, lyter said:

Should i use md127 instead of md0 from the tutorial?

lets see what comes up from the event number

md127 would be the system partition raid of the 3 raid5 drives

 

 

 

2 hours ago, lyter said:

What exactly does this mean?

the other two drives are useless for the raid1 of the system

if you already have /dev/md[x] you can mount the /dev/md... of these three sd[x]1 partitions and make your change in the synoinfo.conf

maybe 1st restore the default you had before, test it by booting the vm, and look in the gui if there is anything to fix with the raid (if 2 disks carrying a volume are not part of the system raid1, there should be something to repair)

if everything is back to normal you can change to what i suggested above

39 minutes ago, lyter said:

why the USB drives are still missing?

as you only added 2 ports (and usual systems have many more usb ports) they might simply be beyond the default range of 2 ports; kind of the same thing as with the sata ports - if it's only 12 and you have ports beyond that, you will not see them

just make it

usbportcfg="0xffff00000"

and see what happens

12 minutes ago, H_U_L_K said:

but after a DSM upgrade the disks changed status to "not initialized" and the pool disappeared.

bigger updates/upgrades reset the synoinfo.conf, and using usb as internal drives is way beyond what the system is made for - it can break with every update. synology does not test or care about this kind of scenario; it's a risky thing to do, as you found out, and i'm sure you have been warned about this here

 

12 minutes ago, H_U_L_K said:

Maybe somebody knows what other parameter must be changed?...

i guess you need to check your synoinfo.conf and redo your configuration change

 

 

19 minutes ago, lyter said:

thanks for the suggestion. Tried these values now, but still no USB disks:

then it's not working the way i expected; maybe send a dmesg log

you could try to define fewer internal disks and have usb start lower

maxdisks="16"

esataportcfg="0x0"

usbportcfg="0xffff0000"

internalportcfg="0xffff"
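for reference, under the same one-bit-per-slot assumption these values split the slot range cleanly:

```shell
# the 16-disk split: internalportcfg 0xffff = slots 1-16, usbportcfg 0xffff0000 = slots 17-32
printf 'overlap:  0x%x\n' $(( 0xffff & 0xffff0000 ))   # masks must not overlap
printf 'combined: 0x%x\n' $(( 0xffff | 0xffff0000 ))   # together they cover slots 1-32
```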

 

Edited by IG-88
2 hours ago, lyter said:

. So I've added it again and voilà the USB drives are recognized!

good to know that it's solved and there is no unknown behavior; i guess the two usb ports were just virtual ports and for that reason not interacting with your usb drives

 


Thanks @IG-88, just tried maxdisks = 16. But now DSM is not reachable anymore (web-interface, ssh, also not on Synology Assistant). Will have to find a way to fix this first...

Edited by lyter

@IG-88, I restored the original synoinfo.confs and then just edited the maxdisks parameter. Which of course was stupid, but I wouldn't have expected DSM to stop working entirely...

So I'll try the tutorial in the link tomorrow. Hope I'll be able to fix it...🤔

So what are the params you'd recommend? Don't I need to adjust all 4 params to make them fit to 16 disks instead of 12?

== original conf for 12 disks ==
esataportcfg="0xff000"
usbportcfg="0x300000"
internalportcfg="0xfff"


 

Edited by lyter

@IG-88, thanks for the patience. Yes, totally my bad.🙃

 

Just tried the tutorial, not sure how i need to proceed in my case. I have 4 volumes:

- vol 1: ssd 100GB for systems stuff and documents (basic)

- vol 2: 3x6TB disks in RAID5 (the one that should be replaced)

- vol 3: 10TB disk for archive (basic)

- vol 4: 3x16TB disks in RAID5 (new)

 

So I thought I'd run steps 7 & 8 for vol 2, but I get this message at step 8:

root@ubuntu:# mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1
mdadm: /dev/md0 assembled from 3 drives - need 4 to start (use --run to insist).

Would you happen to know how I need to proceed here?


@IG-88, thanks for the pointers. Still I'm not able to mount the drives.

 

So, these are the results of several commands when running Ubuntu Live:

root@ubuntu:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md127 : inactive sdc1[3](S) sdd1[2](S) sda1[1](S)
      7470528 blocks

md2 : active raid1 sdf3[0]
      100035584 blocks super 1.2 [1/1] [U]

md4 : active raid1 sdb3[0]
      9761614848 blocks super 1.2 [1/1] [U]

md3 : active raid5 sda3[3] sdd3[2] sdc3[1]
      11711401088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

------
  
root@ubuntu:~# fdisk -l | grep /dev/sd
GPT PMBR size mismatch (102399 != 106495) will be corrected by write.
Disk /dev/sda: 5.47 TiB, 6001175126016 bytes, 11721045168 sectors
/dev/sda1      256     4980735     4980480  2.4G Linux RAID
/dev/sda2  4980736     9175039     4194304    2G Linux RAID
/dev/sda3  9437184 11720840351 11711403168  5.5T Linux RAID
Disk /dev/sdc: 5.47 TiB, 6001175126016 bytes, 11721045168 sectors
/dev/sdc1      256     4980735     4980480  2.4G Linux RAID
/dev/sdc2  4980736     9175039     4194304    2G Linux RAID
/dev/sdc3  9437184 11720840351 11711403168  5.5T Linux RAID
Disk /dev/sdd: 5.47 TiB, 6001175126016 bytes, 11721045168 sectors
/dev/sdd1      256     4980735     4980480  2.4G Linux RAID
/dev/sdd2  4980736     9175039     4194304    2G Linux RAID
/dev/sdd3  9437184 11720840351 11711403168  5.5T Linux RAID
Disk /dev/sdb: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
/dev/sdb1     2048     4982527     4980480  2.4G Linux RAID
/dev/sdb2  4982528     9176831     4194304    2G Linux RAID
/dev/sdb3  9437184 19532668927 19523231744  9.1T Linux RAID
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sde: 52 MiB, 54525952 bytes, 106496 sectors
/dev/sde1   2048  32767   30720  15M EFI System
/dev/sde2  32768  94207   61440  30M Linux filesystem
/dev/sde3  94208 102366    8159   4M BIOS boot
Disk /dev/sdf: 100 GiB, 107374182400 bytes, 209715200 sectors
/dev/sdf1          2048   4982527   4980480  2.4G fd Linux raid autodetect
/dev/sdf2       4982528   9176831   4194304    2G fd Linux raid autodetect
/dev/sdf3       9437184 209510399 200073216 95.4G fd Linux raid autodetect
  
-- details for md127; the reported raid0 level somehow does not seem right though...
root@ubuntu:~# mdadm --detail /dev/md127
/dev/md127:
           Version : 0.90
        Raid Level : raid0
     Total Devices : 3
   Preferred Minor : 0
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 3

              UUID : d81e429a:6dda9ebf:3017a5a8:c86610be
            Events : 0.13363196

    Number   Major   Minor   RaidDevice

       -       8        1        -        /dev/sda1
       -       8       49        -        /dev/sdd1
       -       8       33        -        /dev/sdc1

 

So, if i follow the tutorial, i get the following:

-- 3 RAID5 disks --
root@ubuntu:~# mdadm -Ee0.swap /dev/sda1 /dev/sdc1 /dev/sdd1
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got fc4e2ba9)

-- 3 RAID5 disks + 10TB disk
root@ubuntu:~# mdadm -Ee0.swap /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : d81e429a:6dda9ebf:3017a5a8:c86610be
  Creation Time : Wed Aug 28 18:21:19 2019
     Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
     Array Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jan  3 22:28:51 2021
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 7
  Spare Devices : 0
       Checksum : a6b8c521 - correct
         Events : 13363196


      Number   Major   Minor   RaidDevice State
this     4       8      113        4      active sync

   0     0       0        0        0      removed
   1     1       8       97        1      active sync
   2     2       8      145        2      active sync
   3     3       8      129        3      active sync
   4     4       8      113        4      active sync
   5     5       0        0        5      faulty removed
   6     6       0        0        6      faulty removed
   7     7       0        0        7      faulty removed
   8     8       0        0        8      faulty removed
   9     9       0        0        9      faulty removed
  10    10       0        0       10      faulty removed
  11    11       0        0       11      faulty removed
mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got fc4e2ba9)

 

The "no super block found" part doesn't seem good. Any ideas? Should I use md127 instead of md0 from the tutorial?

Edited by lyter

 

@IG-88, thanks for the help so far, mate! Really appreciated! ;)

So I ran your recommendation:

root@ubuntu:~# mdadm --examine /dev/sd[abcdf]1 | egrep 'Event|/dev/sd'
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdf1.
/dev/sda1:
         Events : 13363196
/dev/sdc1:
         Events : 13363196
/dev/sdd1:
         Events : 13363196

 

What exactly does this mean?

 

EDIT: I just rebooted the live Ubuntu, because various commands like "cat /proc/mdstat" suddenly returned no results. Now the array seems to be active under md0:


root@ubuntu:~# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md4 : active raid1 sdb3[0]
      9761614848 blocks super 1.2 [1/1] [U]

md2 : active raid1 sdf3[0]
      100035584 blocks super 1.2 [1/1] [U]

md3 : active raid5 sda3[3] sdd3[2] sdc3[1]
      11711401088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid1 sda1[1] sdc1[3] sdd1[2]
      2490176 blocks [12/3] [_UUU________]

unused devices: <none>
root@ubuntu:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 0.90
     Creation Time : Wed Aug 28 18:21:19 2019
        Raid Level : raid1
        Array Size : 2490176 (2.37 GiB 2.55 GB)
     Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
      Raid Devices : 12
     Total Devices : 3
   Preferred Minor : 0
       Persistence : Superblock is persistent

       Update Time : Sun Jan  3 22:28:51 2021
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              UUID : d81e429a:6dda9ebf:3017a5a8:c86610be
            Events : 0.13363196

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8        1        1      active sync   /dev/sda1
       2       8       49        2      active sync   /dev/sdd1
       3       8       33        3      active sync   /dev/sdc1
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed

 

Mounting the array still doesn't seem to work though:


root@ubuntu:~# mdadm -Ee0.swap /dev/sda1 /dev/sdc1 /dev/sdd1
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got fc4e2ba9)



root@ubuntu:~# mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: no RAID superblock on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted

 

Edited by lyter

Thanks @IG-88, you're a life saver!🤝 I reset the synoinfo.confs and set your suggested values for 20 drives.

The result is still that the USB drives don't show up. But they also did not show up when I reset the confs. So any idea why the USB drives are still missing?

 

I know for a fact that the USB drives were detected before I first changed the confs to extend the disk limit, because of the successful daily backups that were done right up to this point. I did not change anything on the VM settings, so this should be fine as well. Also detached the USB drives multiple times.

 

Edited by lyter

@IG-88 thanks for the suggestion. Tried these values now, but still no USB disks:

maxdisks="20"
esataportcfg="0x0"
usbportcfg="0xffff00000"
internalportcfg="0xfffff"

 

But why wouldn't the USB drives show up again when I reset the confs and disconnect the new 16TB drives? That would be the same situation as before, no?

Edited by lyter

I also have some trouble with USB. I had already changed the config and the USB devices worked well as internal drives with a pool on them, but after a DSM upgrade the disks changed status to "not initialized" and the pool disappeared.

Maybe somebody knows what other parameter must be changed?...

29 minutes ago, lyter said:

See attached the dmseglog

nothing unusual for usb (2 usb ports detected)

 

there is still another option if usb works with the default config

the sata controller has 6 ports but only uses 2 ports for disks; if you can change this in the vm config to just the two ports needed, then your sas controller will land lower and you will be within the 12 port limit of the default config

 

you could also try adding a different (usb3) controller in the vm config, or try more usb ports and assigning the hardware device (usb disk) to a higher usb port

 

23 hours ago, IG-88 said:

bigger updates/upgrades reset the synoinfo.conf, and using usb as internal drives is way beyond what the system is made for - it can break with every update. synology does not test or care about this kind of scenario; it's a risky thing to do, as you found out, and i'm sure you have been warned about this here

 

Thanks.

I already checked, but there are no changes.

esataportcfg, usbportcfg and internalportcfg have the parameters I wrote in both /etc and /etc.defaults.

But I found another mistake of mine: I didn't change the maxdisks parameter ))...

 

On 06.01.2021 at 01:17, IG-88 said:

 

After a NAS restart I get a not-initialised disk again (actually, that's the 3rd time it has happened)...

And I lost all my installed packages and the data on the USB SSD.

Now I'm thinking about changing the config back to original and using separate USB disks for media, with an internal RAID5 of 3 disks.

I'd move the SSD to an internal slot and leave the surveillance HDD inside the NAS.

It's a bit scary to use such a config of USB disks for SHR (for all my disks) and risk a crashed RAID.

1 hour ago, H_U_L_K said:

It's a bit scary to use such a config of USB disks for SHR (for all my disks) and risk a crashed RAID.

have a backup and don't restart?

i don't know your hardware, but maybe a pcie ahci card and "peeling" the usb drives out of their enclosures might be possible

4 hours ago, IG-88 said:

have a backup and don't restart?

i dont know your hardware but maybe a pcie ahci card and "peeling" the usb drives might be possible

I have a UPS, but I want to check everything, because a situation may occur where the electrical power gets disconnected.

I have an original 1019+.

8 hours ago, H_U_L_K said:

I have a UPS, but I want to check everything, because a situation may occur where the electrical power gets disconnected.

I have an original 1019+.

I tried this config:

maxdisks="24"

esataportcfg="0x0"

usbportcfg="0x0"

internalportcfg="0xffffff"

I put it in both synoinfo.conf files (in /etc/ and /etc.defaults/). And after reboot the USB disk shows as #17 and "not initialized".

(I also tried writing "0" instead of "0x0" in all positions - no change, still the same trouble.)
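If the one-bit-per-slot reading of these masks is right (my assumption), then with usbportcfg at 0x0 no slot is classified as USB at all, and slot 17 falls inside the 24-bit internal mask, which would match the disk showing up as an uninitialised internal drive #17:

```shell
# with internalportcfg=0xffffff and usbportcfg=0x0, classify slot 17
internalportcfg=0xffffff usbportcfg=0x0
slot=17
bit=$(( 1 << (slot - 1) ))
if (( internalportcfg & bit )); then echo "slot $slot: internal"; fi
if (( usbportcfg & bit )); then echo "slot $slot: usb"; fi
```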

 

 

Initial parameters in /etc/ were:

maxdisks="5"

esataportcfg="0xff000"

usbportcfg="0x300000"

internalportcfg="0xfff"

 

Initial parameters in /etc.defaults/ were:

maxdisks="5"

esataportcfg="0x20"

usbportcfg="0x300000"

internalportcfg="0xfff"

 

And for now I have no idea why it doesn't work.

The USB disk was not moved when the reboot was performed.

 

 
