XPEnology Community

You have 8 disks: will you create 1 RAID array or 2 arrays?


Marawan

Question

8 answers to this question

2 hours ago, Marawan said:

If you have 8 disks, will you create one big array or two arrays? Pros and cons of each?

Say 4 are 12 TB and 4 are 10 TB.

if "big array" == SHR, then

  • one Resource pool: total size = 76TB, but if you need to reduce the number of disks in the array , you will have to delete and recreate the entire array
  • two Resource pools (4x12 and 4x10): total size 36TB+30Tb= 66TB, but you will have more flexibility when managing arrays (for example, you can backup one of them to another when changing the configuration)

something like this :)
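
For anyone who wants to sanity-check those numbers, here is a quick back-of-the-envelope Python sketch (my own approximation of the usual SHR rule of thumb: raw disk sizes only, ignoring DSM system partitions and filesystem overhead):

# One SHR-1 pool over all 8 disks: capacity = total minus the largest disk.
disks = [12, 12, 12, 12, 10, 10, 10, 10]   # TB
print(sum(disks) - max(disks))              # 76 TB

# Two SHR-1 pools (4x12 and 4x10), each losing its largest disk to redundancy.
print((4 * 12 - 12) + (4 * 10 - 10))        # 36 + 30 = 66 TB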

 

 

 


Thanks for the info, dj_nsk.

Currently I have all disks in SHR-2 (2 parity disks).

If I create two RAID5 arrays, the capacity is similar to a single 8-disk array,

but as you said it's more flexible, with faster management operations (scrubbing, backup, etc.),

but is it less safe? 2 parity disks vs. 1 disk.

What about wear and tear? It seems more disks mean a shorter disk life?

 

1 hour ago, Marawan said:

Currently I have all disks in SHR-2 (2 parity disks).

In SHR-2:

  • 1 array: 64 TB
  • 2 arrays: 24 + 20 = 44 TB

In RAID5:

  • 1 array: 70 TB
  • 2 arrays: 36 + 30 = 66 TB
1 hour ago, Marawan said:

If I create two RAID5 arrays, the capacity is similar to a single 8-disk array

No! One SHR-1 array == 76 TB.

 

1 hour ago, Marawan said:

What about wear and tear? It seems more disks mean a shorter disk life?

I don't think there is such a direct relationship; you can ignore it.


Thanks, dj_nsk.

Yeah, what I have now is a 64 TB array; if I break it into two RAID5 arrays I will get 66 TB, which is nearly equal.

I calculate array sizes using the Synology RAID Calculator.

 

I can't see many benefits of one large 8-disk SHR-2 array over two 4-disk RAID5 arrays, except maybe that it's more resilient.

More speed? Not a priority for me. It seems arrays with fewer disks are easier to maintain but less safe.

I need more insights from experts!

15 hours ago, Marawan said:

calculate array sizes using the Synology RAID Calculator

 

It's way easier than you think: you sum up the space of all disks in the array and subtract the largest disk for SHR-1; for SHR-2 you subtract the two largest disks (see the little sketch below).

A RAID5 or RAID6 (with same-size disks) is just a special case of this.

Since DSM 7, all created volumes are SHR-1 or SHR-2 if you look closely. SHR was always mdadm software RAID sets (same-size partitions of the disks put together as a RAID set), "glued" together by LVM2 into a volume; in older DSM versions you could leave out LVM2 if you only had same-size disks.
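
Here is a little Python sketch of that rule (the shr_capacity helper is just my illustration, not anything from DSM, and it ignores system partitions and filesystem overhead):

def shr_capacity(disks_tb, parity=1):
    # Simplified SHR size: total raw space minus the `parity` largest disks.
    return sum(disks_tb) - sum(sorted(disks_tb, reverse=True)[:parity])

disks = 4 * [12] + 4 * [10]            # the 8-disk example from above, in TB
print(shr_capacity(disks, parity=1))   # one SHR-1 pool: 76
print(shr_capacity(disks, parity=2))   # one SHR-2 pool: 64
print(shr_capacity(4 * [12], 2) + shr_capacity(4 * [10], 2))  # two SHR-2 pools: 24 + 20 = 44

For a plain RAID5/RAID6 over mixed-size disks, every member only contributes the smallest disk's size, e.g. shr_capacity(8 * [10], 1) gives the 70 TB from the earlier post.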

 

15 hours ago, Marawan said:

I can't see many benefits of one large 8-disk SHR-2 array over two 4-disk RAID5 arrays, except maybe that it's more resilient.

 

Mainly it's a question of how many disks, at most, you are willing to run per redundancy disk. If you are willing to have 8 disks in one RAID5 you "lose" one disk; if you create two RAID5 sets, it will be two disks for redundancy.

If you take it as no more than 6 disks in a RAID5, then with 8 disks you need RAID6, and in that scenario there is not much capacity difference versus two RAID5 sets. RAID6 has the edge, as it does not matter which disks fail when two disks fail; with two RAID5 sets of 4 disks each, if the two failing disks land in the same 4-disk set, that set is lost.

Also, it's more convenient to have just one volume, with no juggling of space between two volumes, so that's two small arguments for putting all disks in RAID6.

(You can replace RAID5 with SHR-1 and RAID6 with SHR-2 here; with SHR there can be some differences in what is "lost" to redundancy, depending on the size difference of the biggest disks.)
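
To put numbers on that two-disk-failure comparison, here is a small Python enumeration (my own illustration, assuming exactly two disks fail and every pair of disks is equally likely):

from itertools import combinations

disks = range(8)                      # 0-3 = first RAID5 set, 4-7 = second set
pairs = list(combinations(disks, 2))  # all 28 possible two-disk failures

# An 8-disk RAID6 survives the loss of any two disks.
raid6_survived = len(pairs)

# Two 4-disk RAID5 sets survive only if the two failures hit different sets.
raid5x2_survived = sum(1 for a, b in pairs if (a < 4) != (b < 4))

print(f"RAID6:    {raid6_survived}/{len(pairs)}")     # 28/28
print(f"2x RAID5: {raid5x2_survived}/{len(pairs)}")   # 16/28, about 57%

So roughly 4 out of 10 random two-disk failures would kill one of the two RAID5 sets.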

 

15 hours ago, Marawan said:

More speed? Not a priority for me.

 

That might have been the weaker point of RAID6, as it needs more writing for its two sets of redundancy data, but there is also the argument that more disks are better for splitting the required IOPS and transfers across more drives (the old phrase "more spindles, more speed"; it gets clearer if you think of a RAID0 made of RAID1 sets, aka RAID10).

There is also some correlation with the number of disks in a RAID5 set: sets of 3 or 5 disks tend to perform better, because the number of disks carrying data (not redundancy) is 2 and 4 in those cases. A 9-disk RAID5 would be next in that line (8 disks carrying data, so it's roughly powers of two: 2, 4, 8), but that's too many disks for just one disk of redundancy, so a 10-disk RAID6 would take that place.

 

 

15 hours ago, Marawan said:

It seems arrays with fewer disks are easier to maintain but less safe.

See my comparison above: the odds of surviving a two-disk failure are better with an 8-disk RAID6 than with two 4-disk RAID5 sets. With the 8-disk RAID6 any two disks can fail; that's not the case with the two RAID5 sets.

 

There are more things you could take into account when building a system and deciding about redundancy, and a normal Linux/BSD system offers a lot more options to handle this than DSM can; DSM is pretty much limited to mdadm RAID.

Depending on how important these things are, there can be other solutions than DSM with its mdadm, like systems running ZFS or UnRAID.

Examples of other things that might be important: constant guaranteed write speed, IOPS, scaling to a larger number of disks, caching, a higher level of redundancy, ...

Edited by IG-88

Thanks for the helpful insights, IG-88! 🙂

 

What about the so-called "wear and tear"? I remember reading that the more disks in an array, the faster the wear and tear.

Not sure how accurate that is; dj_nsk thankfully suggested it's irrelevant and should be ignored.

 

On 4/28/2023 at 6:55 AM, Marawan said:

the more disks in an array, the faster the wear and tear

Not sure how accurate that is; dj_nsk thankfully suggested it's irrelevant and should be ignored.

That's about SSDs: within a single SSD, the controller evens that out by distributing write accesses across cells (wear leveling).

Something like that is not needed with conventional magnetic recording, but it might become a thing with heat- or microwave-assisted magnetic recording in the coming years.

 

When combining SSDs in a RAID5 set, you might face the effect of all disks failing at the same time, as wear-out is something that will hit an SSD at some point (there is usually a vendor tool or S.M.A.R.T. to monitor this). Synology has RAID F1 for this as an alternative to RAID5:

https://xpenology.com/forum/topic/9394-installation-faq/?tab=comments#comment-131458

https://global.download.synology.com/download/Document/Software/WhitePaper/Firmware/DSM/All/enu/Synology_RAID_F1_WP.pdf

 

On Synology systems that support is built into the kernel, and as we use Syno's original kernels, some units lack it:

https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/#comment-281190

You can also see whether it is supported by looking at the mdadm state with "cat /proc/mdstat".

"Personalities" tells you which RAID types are possible. In general, don't expect a consumer unit to be able to use RAID F1, but there is a list from Synology:

https://kb.synology.com/en-ro/DSM/tutorial/Which_Synology_NAS_models_support_RAID_F1
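
If you'd rather script that check than eyeball it, a minimal Python sketch along these lines should work on any box where /proc/mdstat is readable (the exact spelling of the F1 personality can vary by kernel, so this just looks for "f1" case-insensitively):

import re

# The first line of /proc/mdstat lists the supported personalities, e.g.:
# Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
with open("/proc/mdstat") as f:
    first_line = f.readline()

personalities = re.findall(r"\[(\w+)\]", first_line)
print("supported RAID types:", personalities)
print("RAID F1 available:", any("f1" in p.lower() for p in personalities))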

 

Edited by IG-88