XPEnology Community

cannot expand volume on XPEnology DSM 5.2-5967


gizmomelb

Question

Hi all,

 

This is an old issue, but so far my google-fu has not been able to resolve it (the fact that some of the web pages and links mentioned in older threads no longer exist does not help).

I recently replaced a dead 4TB drive with a 10TB drive, and after the volume was repaired DSM would not let me expand the volume.  The array is a multi-disk SHR affair.

If I SSH to the NAS and run 'print devices' (in parted), the results are:

print devices
/dev/hda (12.0TB)
/dev/sda (12.0TB)
/dev/sdb (12.0TB)
/dev/sdc (10.0TB)    <- the new HDD
/dev/sdd (4001GB)
/dev/sde (4001GB)
/dev/sdf (4001GB)
/dev/md0 (2550MB)
/dev/md1 (2147MB)
/dev/md2 (15.0TB)
/dev/md3 (5001GB)
/dev/md4 (7999GB)
/dev/zram0 (591MB)
/dev/zram1 (591MB)
/dev/synoboot (8382MB)

Total storage is 25.1TB, but it should expand to 31-something TB I guess.

My parted doesn't support 'resizepart' as it is version 3.1

The LVM volume group is vg1000.

mdadm --detail /dev/md2 | fgrep /dev/
/dev/md2:
      11       8        5        0      active sync   /dev/sda5
      10       8       21        1      active sync   /dev/sdb5
       9       8       37        2      active sync   /dev/sdc5
       8       8       53        3      active sync   /dev/sdd5
       7       8       85        4      active sync   /dev/sdf5
       6       8       69        5      active sync   /dev/sde5


If I try the following, I cannot expand the array due to the 'has no superblock' message.  I've also tried /dev/md2 and /dev/md4:

01> syno_poweroff_task -d
01> mdadm -S /dev/md3
mdadm: stopped /dev/md3
01> mdadm -A /dev/md3 -U devicesize /dev/sdc
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has no superblock - assembly aborted
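
From what I've pieced together from other threads, the usual sequence for growing an mdadm + LVM setup after swapping in a bigger disk is roughly the one below. This is only a sketch of my understanding; the member partition and LV names (/dev/sdc5, /dev/vg1000/lv) are assumptions on my part, and I gather the array only grows if every member is at least that large:

# 1. enlarge the md member partition (e.g. /dev/sdc5) to use the new space -
#    my parted 3.1 can't do this, so it would probably mean gparted from a live USB
# 2. tell the array to use the larger members (only gains space if ALL members are that big)
mdadm --grow /dev/md2 --size=max
# 3. grow the LVM physical volume sitting on the array
pvresize /dev/md2
# 4. extend the logical volume and the filesystem (assuming ext4 and an LV called 'lv')
lvextend -l +100%FREE /dev/vg1000/lv
resize2fs /dev/vg1000/lv

But I'm not sure how much of that applies to an SHR volume, which is why I'm asking here.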

Can anyone please assist me with how I can expand the partition on sdc from 4TB to 10TB?

Or would booting Ubuntu from a USB stick and doing it with gparted be the easiest / quickest way, instead of trying to do it locally over SSH?

Thank you.

 


2 answers to this question


flyride

I realize you are posting in the other thread to try and recover your data.  I'm posting in this one to help with some forensic reconstruction and to explain why you potentially cannot get more storage from your SHR after replacing the 4TB drive with a 10TB one.

 

To recap, at some point in your history, you had 4TB and 1TB drives that comprised your SHR.

It would appear that you replaced all of your 1TB drives with 4TB drives, which left you with the following SHR makeup:

 

[Image: SHR layout after the 1TB drives were replaced with 4TB drives]

 

Because you had 1TB drives, each 4TB drive was split into two pieces: a 3TB slice in one RAID5 and a 1TB slice in another RAID5. These are joined together via LVM and presented to DSM as unified storage (vg1000) for your filesystem.  The 1TB RAID (md3) was required to provide redundancy for the 1TB drives, but once those drives were removed, DSM could not collapse the arrays, so it maintains the 1TB RAID indefinitely (or at least until you delete and remake the SHR).
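
If you want to see that layering for yourself, the standard md/LVM tools will show which arrays back vg1000. A quick sketch only (on DSM you may need to call these through the 'lvm' wrapper, and the logical volume name varies):

cat /proc/mdstat          # the md arrays (md2, md3, md4) and their member partitions
pvs                       # should list /dev/md2, /dev/md3, /dev/md4 as physical volumes
vgs vg1000                # total and free size of the volume group
lvs -o +devices vg1000    # the logical volume and which PVs it spans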

 

Then you added in 2x 12TB drives.  These drives mostly became their own RAID1 (since they are the only drives large enough to back each other up), but SHR does something stupid and also adds them to the existing RAID5 arrays (the 4TB and 1TB ones) in the hope that this will make more storage available.  Unfortunately it results in the same amount of space as if they had been reserved entirely for RAID1, although it does help with some combinations of drive sizes.

 

[Image: SHR layout after adding the 2x 12TB drives]

 

However, this is what I dislike about SHR: things get very complicated. Now you have replaced sdc with a 10TB drive.

 

[Image: SHR layout after replacing sdc with the 10TB drive]

 

Notice that no additional space is available and 6TB of the new drive is unused.  The reason is that the existing arrays need their redundancy restored first, and once that is done there is no way to achieve redundancy with the space remaining on sdc.  In other words, the drive gets partitioned to support /dev/md2 and /dev/md3, but it isn't big enough to also be added to /dev/md4.
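
A rough back-of-the-envelope check, using the sizes from your parted/mdadm output above (approximate numbers, purely to illustrate the point):

# approximate slice sizes in TB, taken from the earlier output
disk=10        # the new sdc
md2_slice=3    # its member of the ~3TB-per-disk RAID5 (md2)
md3_slice=1    # its member of the ~1TB-per-disk RAID5 (md3)
md4_member=8   # size of the 12TB drives' RAID1 members (md4)
leftover=$((disk - md2_slice - md3_slice))
echo "unused on sdc: ${leftover} TB (needs ${md4_member} TB to join md4)"
# -> unused on sdc: 6 TB (needs 8 TB to join md4)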

 

Had you added a 12TB drive, /dev/md4 would have been transformed into a RAID5, adding 8TB of additional space.  But as it is, SHR has nowhere to grow.
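
For what it's worth, that growth is conceptually just the kind of reshape mdadm can already do. As an illustration only (DSM drives this itself during expansion, and the member partition name /dev/sdX8 is hypothetical):

mdadm --grow /dev/md4 --level=5          # convert the 2-disk RAID1 to a 2-disk RAID5
mdadm --add /dev/md4 /dev/sdX8           # add the new drive's ~8TB member partition
mdadm --grow /dev/md4 --raid-devices=3   # reshape onto 3 disks, gaining ~8TB
pvresize /dev/md4                        # then grow the LVM PV so vg1000 sees the space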


gizmomelb

Thank you for the detailed info. Yes, that is the process I went through when updating my drives. I replaced the failed 4TB with a 10TB because I had 2x 4TB drives fail on me (WD Purples, each with about 22K hours of use) and only 1x 4TB replacement drive on hand (a Seagate.. yeah, desperate times). I didn't want to run the NAS with a degraded volume, so I moved some files around, shucked a 10TB external that hadn't seen much use, and installed that in the NAS.

 

The only HDD I could purchase was a 6TB WD, which I then found out is SMR rather than CMR, so I'll be returning it; from what I've read it would cause too many issues and most likely die early in its lifespan.

 
