XPEnology Community

Disk volume not expanding in RAID 6 setup


arrush

Question

Hello

 

I was wondering if anyone could point me in the right direction in case I am doing something wrong. I recently found out about XPEnology while researching NAS solutions and wanted to give it a try. Everything was going well until I added my 6th disk and the volume did not increase. I am running DSM 6.1.3 Update 7 on bare metal (DS3617xs) and slowly moving disks over from another RAID solution. Everything was fine up to disk 5, but when adding disk 6 the volume failed to increase. I have since run both a RAID scrub and a file system scrub with no luck (also tried a reboot). According to the RAID calculator I should have ~8 TB of available space.

 

I've done some reading on this, but most tutorials date back to DSM releases 3 and 4 and involve running Ubuntu to fix the file system, so I wanted to check in before I go wandering off into unknown territory. Thanks in advance.

[attachment: xpe1.PNG]

[attachment: xpe2.PNG]

Edited by arrush

13 answers to this question

Recommended Posts


It is because you are using RAID 6.

It takes the smallest drive in your RAID setup and multiplies that size by the number of data drives.
In your case: 1.82 TB * 4 (the other 2 are parity with RAID 6) = ~7.28 TB.

The remaining space (~5 TB) is not used.
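The smallest-disk rule above can be sketched numerically (the six-disk count and 1.82 TB figure are taken from this thread; the awk snippet is just an illustration, not anything DSM itself runs):

```shell
# RAID 6 usable capacity: smallest disk * (disk count - 2 parity disks)
awk 'BEGIN {
  n = 6; smallest = 1.82                        # six disks, smallest is 1.82 TB
  printf "usable: %.2f TB\n", smallest * (n - 2)
}'
```

Which matches the ~7.28 TB figure above, and explains why adding a bigger 6th disk only grows the volume by the smallest disk's size.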

 

You might want to look into SHR, but your DSM doesn't support it.

Perhaps you may consider switching to DS917.

Edited by emkookmer

7 minutes ago, emkookmer said:

It is because you are using RAID 6.

It takes the smallest drive in your RAID setup and multiplies that size by the number of data drives.
In your case: 1.82 TB * 4 (the other 2 are parity with RAID 6) = ~7.28 TB.

The remaining space (~5 TB) is not used.

You might want to look into SHR, but your DSM doesn't support it.

Perhaps you may consider switching to DS917.

 

I would agree, but I am only seeing 5.44 TB, which is 1.82 * 3, not * 4.



Check the resource monitor to see the disk activity. Give it more time, and if it still doesn't work I would scrap the volume and rebuild it afresh.

With RAID 6 (two parity disks) you will always lose the capacity of the two largest disks.

In a setup like yours that's too much loss for my liking. I would rethink the volume setup, maybe with some combination of different RAID levels in the RAID groups.

 



I think you should be able to cheat the system by editing /etc.defaults/synoinfo.conf: delete supportraidgroup="yes" and add support_syno_hybrid_raid="yes". This should enable SHR and remove the standard-RAID-only behaviour. You will have to do this every time you update the system, as the new PAT files replace these defaults. This is quite clearly not supported, so do it at your own risk!
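A rough sketch of that edit, shown on a throwaway copy so nothing live is touched (on a real box the file is /etc.defaults/synoinfo.conf; the second key in the demo file is hypothetical, and as above this is unsupported, so back up first):

```shell
# Demo on a scratch copy; substitute /etc.defaults/synoinfo.conf on a real box.
printf 'supportraidgroup="yes"\nsomething_else="x"\n' > /tmp/synoinfo.conf  # something_else is a made-up key for the demo

sed -i '/^supportraidgroup=/d' /tmp/synoinfo.conf            # delete the raid-group flag
echo 'support_syno_hybrid_raid="yes"' >> /tmp/synoinfo.conf  # enable SHR

cat /tmp/synoinfo.conf
```

After that, a reboot would normally be needed for DSM to pick the change up, and as noted above a full *.pat install will overwrite it again.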

The other way, if you don't want to edit config files, is to create a basic SHR array in a device that supports it and then add those disks to the array; you can then use the SHR array as normal, you just can't create new ones without the above workaround.



The code for SHR is still there and active; Synology only disabled the ability to create an SHR RAID.

So if you install a new *.pat (~200 MB, not the small update files), like 6.1.1 -> 6.1.2 -> 6.1.3, your SHR in place will still work; you just lose the ability to create new SHRs.

You write you don't have data on the disks, so just delete the RAID 6 array and, after activation of SHR, create an SHR RAID (SHR1: one disk can fail; SHR2: two disks can fail; the difference is like RAID 5 vs RAID 6).

If you create an SHR1 it might create a RAID 5 of 6 disks x 1.82 TB + a RAID 5 of 3 disks (the 3 bigger than 1.82 TB, using the 2.73 - 1.82 TB slice) + a RAID 1 of 2 disks (the 3.64 - 2.73 TB slice),

so usable space will be 5 x 1.82 TB + 2 x 0.91 + 1 x 0.91 = 11.83 TB.

With SHR2 every group in that list loses one more disk, so it will be

4 x 1.82 + 1 x 0.91 + 0 x 0.91 = 8.19 TB
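The arithmetic above can be double-checked with a quick awk snippet (the per-group data-disk counts and the 0.91 TB slice sizes are taken straight from this post; nothing here queries DSM):

```shell
# SHR usable capacity per the groups listed above: data disks per group * slice size
awk 'BEGIN {
  shr1 = 5*1.82 + 2*0.91 + 1*0.91   # SHR1: RAID5(6 disks) + RAID5(3) + RAID1(2)
  shr2 = 4*1.82 + 1*0.91 + 0*0.91   # SHR2: each group gives up one more disk
  printf "SHR1: %.2f TB  SHR2: %.2f TB\n", shr1, shr2
}'
```

Either way it comes out well above the ~7.28 TB a plain RAID 6 yields with these mixed disk sizes.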

 

 

Edited by IG-88

21 hours ago, IG-88 said:

the raid group is just a mdadm raid (afaik)

you can check what mdadm has to say about the raid


cat /proc/mdstat
and/or
mdadm --detail /dev/md2

 

and you might check the log for anything about expanding the raid

 

 

May I ask you to explain a little bit?

