
DSM 7 and Storage Pool/Array Functionality


flyride


DSM 7 handles array creation somewhat differently than DSM 6.x. This tutorial will explain the new, less flexible behavior of DSM 7, provide historical context and offer practical solutions should the reader wish to create certain array configurations no longer possible via the DSM 7 UI. The structures described herein may also be useful toward an understanding of the configuration, troubleshooting and repair of DSM arrays.

 

Be advised - over time, Synology has altered and evolved the definitions of the storage terms used in its documentation and in this tutorial.  Additionally, many of the terms have overlapping words and meanings. This can't be helped, but you should be aware of the problem.

 

Background:

 

DSM 6 offers many options as to how to set up your arrays. In DSM, arrays are broadly referred to as Storage Pools. Within a Storage Pool, you may configure one or more Volume devices, on which btrfs or ext4 filesystems are created, and within which Shared Folders are provisioned. Here's a simplified history of the different Storage Pool configurations available on the XPe-enabled DSM versions and platforms.

 

If you aren't familiar with lvm (logical volume manager), it refers to a storage meta-container able to hold multiple host arrays (i.e. SHR) and/or multiple target Volumes. Note that the word "volume" in logical volume manager has nothing to do with DSM Volumes.
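If you want to see this layering on a live system, the standard lvm tools can be used to inspect it. A minimal sketch, assuming an lvm-overlayed ("Flexible") Storage Pool already exists; device and group names will vary:

dsm:/$ sudo pvs     # physical volumes: the member md array(s) underneath the lvm container
dsm:/$ sudo vgs     # the volume group built on top of them (e.g. vg1 or vg1000)
dsm:/$ sudo lvs     # the logical volume(s) inside it, which become DSM Volumes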

 

DSM 5 provides multiple Storage Pool configurations, including "Basic" (RAID 1 array with one drive), RAID 1, RAID 5, RAID 6, RAID 10, and SHR/SHR2 (conjoined arrays with lvm overlay). These are known as "Simple" Storage Pools, with a single Volume spanning the entire array. The consumer models (i.e. DS918+/DS920+ platforms) always create Simple Storage Pools, with non-lvm arrays (RAID) designated as "Performance" and lvm-overlayed (SHR/SHR2) as "Flexible."

[image: Simple Storage Pool structure diagram]

DSM 6 adds enterprise features for DS3615xs/DS3617xs/DS3622xs+, including RAID F1 arrays and "Raid Groups." A Raid Group is a Storage Pool with an lvm overlay regardless of array type, and it permits multiple Volumes to be created within it. For DS3615xs/DS3617xs the "Performance" Storage Pool is the same as the consumer models (non-lvm), while "Flexible" refers to the Raid Group (lvm-overlayed) option.

 

Synology limits the use of SHR/SHR2 with these models by default. However, it can be enabled with a well-known modification to DSM configuration files, such that the UI is then able to create a SHR array within a "Flexible" Raid Group Storage Pool.

[image: Raid Group (SHR) Storage Pool structure diagram]

DSM 6 also offers SSD caching on all platforms. When SSD caching is enabled, the target Volume device is embedded into a "device mapper" that binds it with the physical SSD storage. The device mapper is then mounted to the root filesystem in the place of the Volume device.

[image: Storage Pool with SSD cache structure diagram]

DSM 7 supports all of the above configurations if they exist prior to upgrade. However, DSM 7 Storage Manager is no longer able to create any type of Simple Storage Pool. On all the XPe supported platforms, new Storage Pools in DSM 7 are always created within Raid Groups (therefore lvm-overlayed) and with a dummy SSD cache, even if the system does not have SSDs to support caching.

 

lvm uses additional processor, memory and disk space (Synology is apparently no longer concerned with this, assuming that the penalty is minor on modern hardware), and if you don't need SHR or cache, it creates unnecessary complexity for array troubleshooting, repair, and data recovery. Look back at the first illustration versus the last and you can see how much is added for zero benefit if the features are not required. Users might prefer not to have superfluous services involved in their Storage Pools, but DSM 7 no longer offers a choice.

 

From all these configuration options, your Storage Pool type can be determined by observing how the DSM Volume connects to the filesystem, and the presence or absence of lvm "Containers." This table shows the various permutations that you may encounter (note that all rows describe the first Storage Pool and DSM Volume on a system; additional pools and volumes increase the indices accordingly):

 

Storage Pool Type                             Array Device(s)          Container      /volume1 Device
Simple (DSM 6) "Performance"                  /dev/md2                 (none)         /dev/md2
Simple (DSM 6, DS918+/DS920+ SHR) "Flexible"  /dev/md2, /dev/md3...    /dev/vg1000    /dev/vg1000/lv
Raid Group (DSM 6, DS36nnxs SHR) "Flexible"   /dev/md2, /dev/md3...    /dev/vg1       /dev/vg1/volume_1
Simple with Cache (DSM 6)                     /dev/md2                 (none)         /dev/mapper/cachedev_0
Raid Group with Cache                         /dev/md2, [/dev/md3...]  /dev/vg1       /dev/mapper/cachedev_0

 

Example query to determine the type (this illustrates a cache-enabled (with read-only cache), lvm-overlaid Raid Group with one DSM Volume):

 

dsm:/$ df
Filesystem               1K-blocks        Used   Available Use% Mounted on
/dev/md0                   2385528     1063108     1203636  47% /
none                       1927032           0     1927032   0% /dev
/tmp                       1940084        1576     1938508   1% /tmp
/run                       1940084        3788     1936296   1% /run
/dev/shm                   1940084           4     1940080   1% /dev/shm
none                             4           0           4   0% /sys/fs/cgroup
cgmfs                          100           0         100   0% /run/cgmanager/fs
/dev/mapper/cachedev_0 33736779880 19527975964 14208803916  58% /volume1

dsm:/$ sudo dmsetup table | head -2
cachedev_0: 0 1234567 flashcache-syno conf:
        ssd dev (/dev/md3), disk dev (/dev/vg1/volume_1) cache mode(WRITE_AROUND)

dsm:/$ sudo lvs
  LV                    VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1  -wi-a----- 12.00m                                  
  volume_1              vg1  -wi-ao---- 11.00g
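For comparison, the same checks run against a hypothetical Simple "Performance" pool (no lvm, no cache) would show the md device mounted directly, with nothing else in the chain:

dsm:/$ df /volume1            # reports /dev/md2 directly, not cachedev_0 or a vg device
dsm:/$ sudo dmsetup table     # no cachedev entry for a cacheless pool
dsm:/$ sudo lvs               # returns nothing when no lvm Container exists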

 

Solutions:

 

If desired, DSM 7 can be diverted from its complex Storage Pool creation behavior. You will need to edit the /etc/synoinfo.conf and /etc.defaults/synoinfo.conf files from the shell command line.

 

/etc/synoinfo.conf is read by DSM during boot and affects various aspects of DSM's functionality. But just before this happens during boot, /etc.defaults/synoinfo.conf is copied over the file in /etc.  Generally you can just make changes in /etc.defaults/synoinfo.conf, but the copy is algorithmic and desynchronization can occur, so it is always best to check both files for the desired change if the results are not as expected.
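For example, a quick way to confirm that both copies agree on a parameter (supportraidgroup, used in solution 1 below, is shown here; it may not be present on every platform):

dsm:/$ sudo grep supportraidgroup /etc/synoinfo.conf /etc.defaults/synoinfo.conf     # both files should report the same value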

 

1. Enable SHR on DS3615xs, DS3617xs and DS3622xs+: 

Edit synoinfo.conf per above, changing the parameter supportraidgroup from "yes" to "no".

It is no longer required to comment out this line or to add "support_syno_hybrid_raid" to the configuration file. Despite what setting this parameter to "no" implies, Raid Groups continue to be supported and DSM will create Raid Group-enabled Storage Pools, with the added bonus that the SHR option becomes available in the UI.
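A minimal command-line sketch of that edit, assuming the parameter appears in key="value" form in both files (make backups first; the same pattern applies to supportssdcache in solution 2 below):

dsm:/$ sudo cp /etc/synoinfo.conf /etc/synoinfo.conf.bak
dsm:/$ sudo cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
dsm:/$ sudo sed -i 's/^supportraidgroup="yes"/supportraidgroup="no"/' /etc/synoinfo.conf /etc.defaults/synoinfo.conf

Since synoinfo.conf is read during boot, the change may not take effect until the next restart.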

 

2. Create arrays without dummy caches:

Edit synoinfo.conf per above, changing the parameter supportssdcache from "yes" to "no".

Arrays created while this option is disabled will not have the dummy cache configured. Further testing is required to determine whether SSD cache can subsequently be added using the UI, but it is likely to work.

 

3. Create a cacheless, Simple array on DSM 7:

DSM 7 can no longer create a Simple Storage Pool via the UI. The only method I've found thus far is with a DSM command-line tool. Once Storage Pool creation is complete, Volume creation and subsequent resource management can be accomplished with the UI.

 

Note that you cannot just create an array using mdadm as you would on Linux. Because DSM disks and arrays include special "space" metadata, the mdadm-created array will be rejected by DSM 7.  However, using the DSM command-line tool resolves this problem.

  1. First, you need to determine the RAID array type (raid1, raid5, raid6, raid10 or raid_f1; raid_f1 is only valid on DS3615xs/DS3617xs/DS3622xs+ and all array members must be SSDs) and the device names of all the disks that should comprise the array. The disks that you want to use should all be visible like the example:

     

    dsm:/$ ls /dev/sd?
    /dev/sda  /dev/sdb  /dev/sdc /dev/sdd

     

    And, the target disks should not be in use by existing arrays (nothing should be returned):
     
    dsm:/$ cat /proc/mdstat | egrep "sd[abcd]3"
     
  2. Then, create the array with the following command (example uses the disks from above and creates a RAID 5 array). All data will be erased from the disks specified.
     
    dsm:/$ sudo synostgpool --create -t single -l raid5 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    If successful, the array will be created along with all the appropriate DSM metadata, and you will see the new Storage Pool immediately reflected in the Storage Manager UI. If it fails, there will be no error message; the usual cause is that disks are already in use in another array, and deleting the affected Storage Pools should free them up. The events that occurred can be reviewed in /var/log/space_operation.log (see the verification sketch after this list).
     
  3. A DSM Volume can then be added using the Storage Manager UI.
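However the Storage Pool was created, a quick sanity check can be run from the shell (a sketch; md2 is assumed to be the newly created array):

dsm:/$ cat /proc/mdstat                          # the new array should be listed (and likely resyncing)
dsm:/$ sudo tail /var/log/space_operation.log    # creation (or failure) events are recorded here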

Further testing is required to determine whether SSD cache can subsequently be added using the UI, but it is likely to work.

 


On 1/12/2022 at 1:05 AM, flyride said:

3. Create a cacheless, Simple array on DSM 7:

there seem to be some small changes in DSM 7.1 (tested with dva1622 and arpl 1.1 beta2)

it's not /dev/sda, /dev/sdb, ... anymore; it's now /dev/sata1, /dev/sata2, ... and partitions are p1, p2, ...
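so the step 1 checks from the original post would, under the new naming, look something like this (hypothetical 4-disk example):

dsm:/$ ls /dev/sata?                              # e.g. /dev/sata1 /dev/sata2 /dev/sata3 /dev/sata4
dsm:/$ cat /proc/mdstat | egrep "sata[1234]p3"    # should return nothing for disks not already in use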

a cat /proc/mdstat from a 2-disk system without a volume looks like this (don't mind that it's not starting with sata1; it's from a dva1622 test and the system only goes up to two disks as it's originally a 2-disk unit)

md1 : active raid1 sata4p2[0] sata3p2[2](S) sata2p2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sata4p1[0] sata3p1[2](S) sata2p1[1]
      8388544 blocks [2/2] [UU]

 

so the above example still works, just with /dev/sataY device names

synostgpool --create -t single -l raid5 /dev/sata1 /dev/sata2 /dev/sata3 /dev/sata4 /dev/sata5

this will create a RAID 5 Storage Pool and its creation will instantly be shown in Storage Manager

 

it will look like this (this example uses only disks 2 to 5; the first disk was a different size and is not part of the RAID 5)

md2 : active raid5 sata5p3[3] sata4p3[2] sata3p3[1] sata2p3[0]
      46845465600 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  2.9% (463234168/15615155200) finish=1055.4min speed=239258K/sec

md1 : active raid1 sata2p2[2](S) sata1p2[0] sata5p2[3](S) sata4p2[4](S) sata3p2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sata2p1[2](S) sata1p1[0] sata5p1[3](S) sata4p1[4](S) sata3p1[1]
      8388544 blocks [2/2] [UU]

 

[image: Storage Manager screenshot of the new Storage Pool]

 

I think I also found a loophole in the actual GUI process (Storage Manager) allowing RAID 5 creation (the GUI only shows SHR for anything higher than RAID 1; there is no RAID 5/6 option when creating something new)

It's possible to create a Basic volume on one disk (no lvm overlay with that, I guess) and after that extend the storage pool by choosing "change RAID type"; that way a RAID 5 option is presented. But there is a downside: it will take ages to finish, as it adds disks one by one, so it converts the Basic to a RAID 1 (2 disks) and then to RAID 5 (3 disks), and so on. I only started the process to see whether it works and then destroyed the thing (there is no regular way to stop the process in the GUI). I did see the Basic changed into a RAID 1 with two disks, and I think that with enough waiting (a few days, depending on disk size and speed) it would become RAID 5 in the next step, until all disks are added. Maybe it would be better to test with some smaller disks on a hypervisor system that uses SSDs as its base.

it might be an alternative for people not wanting to mess around on the console

 

 

 

Edited by IG-88