
Expanding SHR, not able to add one of same smaller disks



Hi, 

 

I have DSM 6 with an SHR storage pool of 5 disks (1 TB, 1.5 TB, 1.5 TB, 2 TB, 3 TB). I am trying to add another 1 TB drive (the exact same model as the existing 1 TB), but the Add Drive command is greyed out, so I am not able to add it.

 

The Synology help located here says that a 1 TB drive should be addable to such an SHR configuration, so I am confused. Can anyone help?

 

Thanks in advance!

 

Edited by kzch

ESXi 6.7u2. All disks are connected to an LSI SAS 9707-8i PCIe card, which is passed through to the VM via PCI passthrough.

 

See the pics: drives 3, 4, 6, 9, and 10 are part of the storage pool. Drive 2 is the one I want to add to the pool; it is the same model as drive 6.

 

The Add Drive command is greyed out.

[Attachments: SP.JPG, HDDs.JPG]


Also, I have two more 3 TB drives connected to this same LSI SAS card, and with them attached the Add Drive command is available. But when adding, only those two 3 TB drives were offered for this pool, not the 1 TB drive. I later added those two drives to a different pool just to make them unavailable to this pool, and the Add Drive command became greyed out again.


admin@KCDSM:~$ sudo fdisk -l

Disk /dev/sdb: 931.5 GiB, 1000200658432 bytes, 1953516911 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0001d505

Device     Boot   Start        End    Sectors   Size Id Type
/dev/sdb1          2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sdb2       4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sdb3       9437184 1953511007 1944073824   927G  f W95 Ext'd (LBA)
/dev/sdb5       9453280 1953302175 1943848896 926.9G fd Linux raid autodetect


Disk /dev/sdf: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x77dfc2e7

Device     Boot   Start        End    Sectors   Size Id Type
/dev/sdf1          2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sdf2       4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sdf3       9437184 1953511007 1944073824   927G  f W95 Ext'd (LBA)
/dev/sdf5       9453280 1953318239 1943864960 926.9G fd Linux raid autodetect


If /dev/sdb is your new drive #2, there's your problem. While they are the "same" type (they are actually two different models of Caviar Green) and both are 1 TB drives, the number of sectors available on /dev/sdb is less than on /dev/sdf. The SHR array's base size is based on the exact size of /dev/sdf.
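
For illustration, here is a quick sanity check against the fdisk output above (a sketch; it assumes /dev/sdb is the new drive and /dev/sdf is the existing array member):

# Whole-disk sector counts reported by fdisk -l above (512-byte sectors)
echo $(( 1953525168 - 1953516911 ))   # /dev/sdf minus /dev/sdb = 8257 sectors
echo $(( 8257 * 512 ))                # 4227584 bytes, so /dev/sdb is ~4 MB smaller

The same gap shows up in the data partitions: /dev/sdf5 has 1943864960 sectors versus 1943848896 on /dev/sdb5.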

 

In other words, the new drive (again, assuming it's mapped to /dev/sdb) is too small to add to the array.

 

mdadm will let you create an array with up to a 1% size mismatch among members, because not all drives are exactly the same size, but to join an existing array, the new member has to be at least as large as the array's per-device size.
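
If you want to double-check from the shell, something like this should work (a sketch; it assumes the SHR data array is /dev/md2, which is typical on DSM, and that the data partition on the new drive is /dev/sdb5):

# Per-member size the existing array expects (run as root)
sudo mdadm --detail /dev/md2 | grep -i 'dev size'

# Size of the candidate partition
# Note: mdadm reports sizes in KiB; blockdev --getsz reports 512-byte
# sectors, so divide the sector count by 2 before comparing.
sudo blockdev --getsz /dev/sdb5

If the candidate partition comes out smaller than the array's "Used Dev Size", DSM won't offer the drive for expansion.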

Edited by flyride
