XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 01/13/2022 in all areas

  1. DSM 7 handles array creation somewhat differently than DSM 6.x. This tutorial explains the new, less flexible behavior of DSM 7, provides historical context, and offers practical solutions should the reader wish to create certain array configurations no longer possible via the DSM 7 UI. The structures described herein may also be useful for understanding the configuration, troubleshooting and repair of DSM arrays. Be advised: over time, Synology has altered and evolved the definitions of the storage terms used in its documentation and in this tutorial, and many of the terms have overlapping words and meanings. This can't be helped, but you should be aware of the problem.

Background: DSM 6 offers many options for setting up your arrays. In DSM, arrays are broadly referred to as Storage Pools. Within a Storage Pool, you may configure one or more Volume devices, on which btrfs or ext4 filesystems are created, and within which Shared Folders are provisioned. Here's a simplified history of the different Storage Pool configurations available on the XPe-enabled DSM versions and platforms. If you aren't familiar with lvm (logical volume manager), it is a storage meta-container able to hold multiple host arrays (i.e. SHR) and/or multiple target Volumes. Note that the word "volume" in logical volume manager has nothing to do with DSM Volumes.

DSM 5 provides multiple Storage Pool configurations, including "Basic" (a RAID 1 array with one drive), RAID 1, RAID 5, RAID 6, RAID 10, and SHR/SHR2 (conjoined arrays with an lvm overlay). These are known as "Simple" Storage Pools, with a single Volume spanning the entire array. The consumer models (i.e. the DS918+/DS920+ platforms) always create Simple Storage Pools, with non-lvm arrays (RAID) designated as "Performance" and lvm-overlaid arrays (SHR/SHR2) as "Flexible."

DSM 6 adds enterprise features for DS3615xs/DS3617xs/DS3622xs+, including RAID F1 arrays and "Raid Groups." A Raid Group is a Storage Pool with an lvm overlay regardless of array type, and it permits multiple Volumes to be created within it. For DS3615xs/DS3617xs the "Performance" Storage Pool is the same as on the consumer models (non-lvm), while "Flexible" refers to the Raid Group (lvm-overlaid) option. Synology limits the use of SHR/SHR2 on these models by default. However, it can be enabled with a well-known modification to DSM configuration files, such that the UI is then able to create an SHR array within a "Flexible" Raid Group Storage Pool.

DSM 6 also offers SSD caching on all platforms. When SSD caching is enabled, the target Volume device is embedded into a "device mapper" that binds it with the physical SSD storage. The device mapper is then mounted to the root filesystem in place of the Volume device.

DSM 7 supports all of the above configurations if they exist prior to upgrade. However, DSM 7 Storage Manager is no longer able to create any type of Simple Storage Pool. On all the XPe-supported platforms, new Storage Pools in DSM 7 are always created within Raid Groups (and are therefore lvm-overlaid) and with a dummy SSD cache, even if the system does not have SSDs to support caching. lvm uses additional processor, memory and disk space (Synology is apparently no longer concerned with this, assuming the penalty is minor on modern hardware), and if you don't need SHR or a cache, it creates unnecessary complexity for array troubleshooting, repair, and data recovery. Look back at the first illustration versus the last and you can see how much is added for zero benefit if the features are not required. Users might prefer not to have superfluous services involved in their Storage Pools, but DSM 7 no longer offers a choice.
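These extra layers are easy to observe from the shell. The following is a minimal sketch of commands to inspect the storage stack on a running system; the device names (md2, vg1, cachedev_0, etc.) follow the first-pool conventions shown in the table below and will differ by configuration:

dsm:/$ cat /proc/mdstat                # md arrays backing the Storage Pool(s)
dsm:/$ sudo pvs; sudo vgs; sudo lvs    # lvm physical volumes, volume groups and logical volumes, if any
dsm:/$ sudo dmsetup ls                 # device-mapper targets, including any cachedev_* devices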
From all these configuration options, your Storage Pool type can be determined by observing how the DSM Volume connects to the filesystem, and the presence or absence of lvm "Containers." This table shows the various permutations you may encounter (note that all describe the first instance of a Storage Pool and DSM Volume; multiples will increase the indices accordingly):

Storage Pool Type                              Array Device(s)          Container      /volume1 Device
Simple (DSM 6) "Performance"                   /dev/md2                 (none)         /dev/md2
Simple (DSM 6, DS918+/DS920+ SHR) "Flexible"   /dev/md2, /dev/md3...    /dev/vg1000    /dev/vg1000/lv
Raid Group (DSM 6, DS36nnxs SHR) "Flexible"    /dev/md2, /dev/md3...    /dev/vg1       /dev/vg1/volume_1
Simple with Cache (DSM 6)                      /dev/md2                 (none)         /dev/mapper/cachedev_0
Raid Group with Cache                          /dev/md2, [/dev/md3...]  /dev/vg1       /dev/mapper/cachedev_0

Example query to determine the type (this illustrates a cache-enabled (read-only cache), lvm-overlaid Raid Group with one DSM Volume):

dsm:/$ df
Filesystem              1K-blocks        Used   Available Use% Mounted on
/dev/md0                  2385528     1063108     1203636  47% /
none                      1927032           0     1927032   0% /dev
/tmp                      1940084        1576     1938508   1% /tmp
/run                      1940084        3788     1936296   1% /run
/dev/shm                  1940084           4     1940080   1% /dev/shm
none                            4           0           4   0% /sys/fs/cgroup
cgmfs                         100           0         100   0% /run/cgmanager/fs
/dev/mapper/cachedev_0 33736779880 19527975964 14208803916  58% /volume1

dsm:/$ sudo dmsetup table | head -2
cachedev_0: 0 1234567 flashcache-syno conf:
        ssd dev (/dev/md3), disk dev (/dev/vg1/volume_1) cache mode(WRITE_AROUND)

dsm:/$ sudo lvs
  LV                    VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1  -wi-a----- 12.00m
  volume_1              vg1  -wi-ao---- 11.00g

Solutions: If desired, DSM 7 can be diverted from its complex Storage Pool creation behavior. You will need to edit the /etc/synoinfo.conf and /etc.defaults/synoinfo.conf files from the shell command line. /etc/synoinfo.conf is read by DSM during boot and affects various aspects of DSM's functionality, but just before it is read, /etc.defaults/synoinfo.conf is copied over the file in /etc. Generally you can just make changes in /etc.defaults/synoinfo.conf, but the copy is algorithmic and desynchronization can occur, so it is always best to check for the desired change in both files if the results are not as expected.
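As an illustration, a parameter can be changed in both copies from the shell with sed (a sketch, assuming the parameter appears in the key="value" form shown; substitute whichever parameter and value the solutions below call for):

dsm:/$ sudo sed -i 's/^supportraidgroup="yes"/supportraidgroup="no"/' /etc.defaults/synoinfo.conf /etc/synoinfo.conf
dsm:/$ grep ^supportraidgroup /etc.defaults/synoinfo.conf /etc/synoinfo.conf    # verify both copies now agree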
1. Enable SHR on DS3615xs, DS3617xs and DS3622xs+: Edit synoinfo.conf per above, changing the parameter supportraidgroup from "yes" to "no". It is no longer required to comment out this line or to add "support_syno_hybrid_raid" to the configuration file. Despite what setting this parameter to "no" implies, Raid Groups continue to be supported and DSM will create Raid Group-enabled Storage Pools, with the added bonus that the SHR option becomes available in the UI.

2. Create arrays without dummy caches: Edit synoinfo.conf per above, changing the parameter supportssdcache from "yes" to "no". Arrays created while this option is disabled will not have the dummy cache configured. Further testing is required to determine whether an SSD cache can subsequently be added using the UI, but it is likely to work.

3. Create a cacheless, Simple array on DSM 7: DSM 7 can no longer create a Simple Storage Pool via the UI. The only method I've found thus far is with a DSM command-line tool. Once Storage Pool creation is complete, Volume creation and subsequent resource management can be accomplished with the UI.

Note that you cannot just create an array using mdadm as you would on Linux. Because DSM disks and arrays include special "space" metadata, an mdadm-created array will be rejected by DSM 7. Using the DSM command-line tool resolves this problem.

First, determine the RAID array type (raid1, raid5, raid6, raid10 or raid_f1; raid_f1 is only valid on DS3615xs/DS3617xs/DS3622xs+ and all array members must be SSDs) and the device names of all the disks that should comprise the array. The disks that you want to use should all be visible, as in the example:

dsm:/$ ls /dev/sd?
/dev/sda  /dev/sdb  /dev/sdc  /dev/sdd

And the target disks should not be in use by existing arrays (nothing should be returned):

dsm:/$ cat /proc/mdstat | egrep "sd[abcd]3"

Then, create the array with the following command (the example uses the disks from above and creates a RAID 5 array). All data will be erased from the disks specified.

dsm:/$ sudo synostgpool --create -t single -l raid5 /dev/sda /dev/sdb /dev/sdc /dev/sdd

If successful, the array will be created along with all the appropriate DSM metadata, and you will see the new Storage Pool immediately reflected in the Storage Manager UI. If it fails, there will be no error message; this is usually because the disks are already in use in another array. Deleting the affected Storage Pools should free them up. You can review the events that have occurred in /var/log/space_operation.log. A DSM Volume can then be added using the Storage Manager UI. Further testing is required to determine whether an SSD cache can subsequently be added using the UI, but it is likely to work.
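To confirm that the result really is a Simple, cacheless Storage Pool, the same checks used earlier to determine pool type can be repeated once the Volume has been created (a sketch; the md index and volume number depend on your system):

dsm:/$ cat /proc/mdstat    # the new array should appear as a plain mdX device
dsm:/$ sudo vgs            # no new lvm volume group should exist for this pool
dsm:/$ df /volume1         # the Volume should mount directly from /dev/mdX, not an lvm or cachedev device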
    1 point
  2. MC has disappeared from the repositories, and people suggest installing it through workarounds. I downloaded and installed the .spk file instead. Everything works; use it if you need it: mc.v5.f4458[apollolake-avoton-braswell-broadwell-broadwellnk-bromolow-cedarview-denverton-dockerx64-grantley-kvmx64-x86-x86_64].spk
    1 point
  3. Hi, I'm just leaving this here in case anyone needs it. To start the telnet service once you end up on a recovery page, the install page, etc., open your browser and go to the following URL: http://<your-dsm-ip-address>:5000/webman/start_telnet.cgi
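If a browser is inconvenient, the same endpoint can be called from another machine on the network with curl and then followed by a telnet connection (a sketch using the same address placeholder and default port as the URL above):

$ curl "http://<your-dsm-ip-address>:5000/webman/start_telnet.cgi"
$ telnet <your-dsm-ip-address>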
    1 point
  4. I went from 6.2.3 with no issues.
    1 point
  5. As mentioned, an easy way is to have it redirected to a file; then, when you start up, it will write the console output to that file. When you restart the VM, it will ask whether you want to append to or overwrite the file, which is normal.
    1 point
  6. Well, good question, I never thought that someone would want to exclude an extension. I need to add that to my to-do list. Meanwhile, please comment out the following line and answer "no" when you are asked to update rploader. Line number 1126: # listmodules
    1 point
  7. I can't really say anything..... If you had described at least one configuration, it would be possible to look for something sensible on the forum. As it is.... only guesses and assumptions. I have also used quite a few different configurations, and the results varied. But only twice was I unable to win, and even then only because I gave up trying and moved on to other hardware.
    1 point
  8. After installing Synology-TorrSrver MatriX.111 I noticed bugs: the server unloads itself, even after being started manually. I rolled back to version 110.
    1 point
  9. With UEFI support: redpill-load.win.zip.001, redpill-load.win.zip.002. It can also support DSM 7.0.1.
    1 point
  10. No, go ahead and ask; it's no trouble for me to help )))
    1 point