DSM 7 and Storage Pool/Array Functionality



DSM 7 handles array creation somewhat differently than DSM 6.x. This tutorial explains the new, less flexible behavior of DSM 7, provides historical context, and offers practical solutions for readers who wish to create array configurations that are no longer possible via the DSM 7 UI. The structures described here may also be useful for understanding the configuration, troubleshooting and repair of DSM arrays.

 

Be advised - over time, Synology has altered and evolved the definitions of the storage terms used in its documentation and in this tutorial. Additionally, many of the terms have overlapping words and meanings. This can't be helped, but you should be aware of the problem.

 

Background:

 

DSM 6 offers many options as to how to set up your arrays. In DSM, arrays are broadly referred to as Storage Pools. Within a Storage Pool, you may configure one or more Volume devices, on which btrfs or ext4 filesystems are created, and within which Shared Folders are provisioned. Here's a simplified history of the different Storage Pool configurations available on the XPe-enabled DSM versions and platforms.

 

If you aren't familiar with lvm (logical volume manager), it refers to a storage meta-container able to hold multiple host arrays (e.g. SHR) and/or multiple target Volumes. Note that the word "volume" in logical volume manager has nothing to do with DSM Volumes.

 

DSM 5 provides multiple Storage Pool configurations, including "Basic" (a RAID 1 array with one drive), RAID 1, RAID 5, RAID 6, RAID 10, and SHR/SHR2 (conjoined arrays with an lvm overlay). These are known as "Simple" Storage Pools, with a single Volume spanning the entire array. The consumer models (e.g. the DS918+ platform) always create Simple Storage Pools, with non-lvm arrays (RAID) designated as "Performance" and lvm-overlaid (SHR/SHR2) as "Flexible."

[Illustration: Simple Storage Pool structure]

DSM 6 adds enterprise features for the DS3615xs/DS3617xs platforms, including RAID F1 arrays and "Raid Groups." A Raid Group is a Storage Pool with an lvm overlay regardless of array type, which permits multiple Volumes to be created within it. For DS3615xs/DS3617xs, the "Performance" Storage Pool is the same as on the consumer models (non-lvm), while "Flexible" refers to the Raid Group (lvm-overlaid) option.

 

Synology disables SHR/SHR2 by default on its enterprise models (including the DS3615xs/DS3617xs platforms). However, it can be enabled with a well-known modification to the DSM configuration files, after which the UI is able to create an SHR array within a "Flexible" Raid Group Storage Pool.

[Illustration: SHR within a Raid Group Storage Pool]

DSM 6 also offers SSD caching on all platforms. When SSD caching is enabled, the target Volume device is embedded into a "device mapper" that binds it with the physical SSD storage. The device mapper is then mounted to the root filesystem in the place of the Volume device.

[Illustration: cache-enabled Storage Pool structure]

DSM 7 supports all of the above configurations if they exist prior to the upgrade. However, DSM 7 Storage Manager is no longer able to create any type of Simple Storage Pool. On all the XPe-supported platforms, new Storage Pools in DSM 7 are always created within Raid Groups (and are therefore lvm-overlaid), and with a dummy SSD cache even if the system has no SSDs to support caching.

 

lvm uses additional processor, memory and disk space (Synology is apparently no longer concerned with this, presumably judging the penalty minor on modern hardware), and if you don't need SHR or a cache, it adds unnecessary complexity to array troubleshooting, repair, and data recovery. Compare the first illustration with the last and you can see how much is added for zero benefit when those features are not required. Users might prefer not to have superfluous services involved in their Storage Pools, but DSM 7 no longer offers a choice.

 

Given all these configuration options, your Storage Pool type can be determined by observing how the DSM Volume connects to the filesystem, and by the presence or absence of lvm "Containers." This table shows the various permutations you may encounter (each row describes the first Storage Pool and DSM Volume; with multiples, the indices increase accordingly):

 

Storage Pool Type                                  Array Device(s)          Container    /volume1 Device
Simple (DSM 6) "Performance"                       /dev/md2                 (none)       /dev/md2
Simple (DSM 6, DS918+ SHR) "Flexible"              /dev/md2, /dev/md3...    /dev/vg1000  /dev/vg1000/lv
Raid Group (DSM 6, DS3615xs/17xs SHR) "Flexible"   /dev/md2, /dev/md3...    /dev/vg1     /dev/vg1/volume_1
Simple with Cache (DSM 6)                          /dev/md2                 (none)       /dev/mapper/cachedev_0
Raid Group with Cache (DSM 7)                      /dev/md2, [/dev/md3...]  /dev/vg1     /dev/mapper/cachedev_0

 

Example queries to determine the type (this illustrates a cache-enabled, lvm-overlaid Raid Group with a read-only cache and one DSM Volume):

 

dsm:/$ df
Filesystem               1K-blocks        Used   Available Use% Mounted on
/dev/md0                   2385528     1063108     1203636  47% /
none                       1927032           0     1927032   0% /dev
/tmp                       1940084        1576     1938508   1% /tmp
/run                       1940084        3788     1936296   1% /run
/dev/shm                   1940084           4     1940080   1% /dev/shm
none                             4           0           4   0% /sys/fs/cgroup
cgmfs                          100           0         100   0% /run/cgmanager/fs
/dev/mapper/cachedev_0 33736779880 19527975964 14208803916  58% /volume1

dsm:/$ sudo dmsetup table | head -2
cachedev_0: 0 1234567 flashcache-syno conf:
        ssd dev (/dev/md3), disk dev (/dev/vg1/volume_1) cache mode(WRITE_AROUND)

dsm:/$ sudo lvs
  LV                    VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1  -wi-a----- 12.00m                                  
  volume_1              vg1  -wi-ao---- 11.00g

 

Solutions:

 

If desired, DSM 7 can be diverted from its complex Storage Pool creation behavior. You will need to edit the /etc/synoinfo.conf and /etc.defaults/synoinfo.conf files from the shell command line.

 

During boot, /etc.defaults/synoinfo.conf is copied over /etc/synoinfo.conf, and DSM then reads /etc/synoinfo.conf, which affects various aspects of DSM's functionality. Generally you can just make changes in /etc.defaults/synoinfo.conf, but the copy is algorithmic and the two files can fall out of sync. So it is always best to check for the desired change in both files if the results are not as expected.
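As a minimal sketch (assuming shell access as an administrator on the NAS), you can confirm that a given parameter reads the same in both files before and after a change; supportraidgroup is just an example parameter here:

```shell
# Print the parameter from both copies; the two values should agree.
grep '^supportraidgroup=' /etc/synoinfo.conf /etc.defaults/synoinfo.conf
```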

 

1. Enable SHR on DS3615xs and DS3617xs: 

Edit synoinfo.conf per above, changing the parameter supportraidgroup from "yes" to "no".

It is no longer necessary to comment out this line or to add "support_syno_hybrid_raid" to the configuration file. Despite what setting this parameter to "no" implies, Raid Groups continue to be supported and DSM will still create Raid Group-enabled Storage Pools, with the added bonus that SHR becomes available as an option in the UI.
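A hedged sketch of the edit from the shell (this assumes the parameter appears as supportraidgroup="yes" in both files; the .bak copies are just a precaution):

```shell
# Back up both copies, then flip supportraidgroup from "yes" to "no" in each.
sudo cp /etc/synoinfo.conf /etc/synoinfo.conf.bak
sudo cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
sudo sed -i 's/^supportraidgroup="yes"/supportraidgroup="no"/' \
    /etc/synoinfo.conf /etc.defaults/synoinfo.conf
```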

 

2. Create arrays without dummy caches:

Edit synoinfo.conf per above, changing the parameter supportssdcache from "yes" to "no".

Arrays created while this option is disabled will not have the dummy cache configured. Further testing is required to determine whether SSD cache can subsequently be added using the UI, but it is likely to work.
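As with the previous change, a sketch of the edit (assuming supportssdcache="yes" is present in both files):

```shell
# Flip supportssdcache from "yes" to "no" in both copies of synoinfo.conf.
sudo sed -i 's/^supportssdcache="yes"/supportssdcache="no"/' \
    /etc/synoinfo.conf /etc.defaults/synoinfo.conf
```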

 

3. Create a cacheless, Simple array on DSM 7:

DSM 7 can no longer create a Simple Storage Pool via the UI. The only method I've found thus far is a DSM command-line tool. Once Storage Pool creation is complete, Volume creation and subsequent resource management can be accomplished with the UI.

 

Note that you cannot just create an array using mdadm as you would on Linux. Because DSM disks and arrays include special "space" metadata, an array created with mdadm alone will be rejected by DSM 7. Using the DSM command-line tool resolves this problem.

  1. First, you need to determine the RAID array type (raid1, raid5, raid6, raid10 or raid_f1; raid_f1 is only valid on DS3615xs/DS3617xs and all array members must be SSDs) and the device names of all the disks that should comprise the array. The disks that you want to use should all be visible like the example:

     

    dsm:/$ ls /dev/sd?
    /dev/sda  /dev/sdb  /dev/sdc  /dev/sdd

     

    And, the target disks should not be in use by existing arrays (nothing should be returned):
     
    dsm:/$ cat /proc/mdstat | egrep "sd[abcd]3"
     
  2. Then, create the array with the following command (example uses the disks from above and creates a RAID 5 array). All data will be erased from the disks specified.
     
    dsm:/$ sudo synostgpool --create -t single -l raid5 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    If successful, the array will be created along with all the appropriate DSM metadata, and the new Storage Pool will immediately appear in the Storage Manager UI. If it fails, there will be no error message; the usual cause is that disks are already in use in another array, and deleting the affected Storage Pools should free them up. You can review the events that occurred in /var/log/space_operation.log.
     
  3. A DSM Volume can then be added using the Storage Manager UI.

Further testing is required to determine whether SSD cache can subsequently be added using the UI, but it is likely to work.

 

Edited by flyride
added lvs query in analysis example, added illustrations