
Installation FAQ


sbv3000


1 - Where can I download the boot loaders from and which Synology hardware platform should I use?


Boot loaders, from DSM 5.x through the current DSM 7.x, can be found here:

 

For DSM 6.x, it is strongly recommended that you read this topic (at least the OP) made by the developer of the loader, @jun:

Up to DSM 6.1:

For DSM 6.2 onward:

 

Hardware Platforms:

See the following tables for more information and a decision tree:

DSM 6.x: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/

DSM 7.x: https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/

 

For DSM 6.1.x, DS3615xs is preferable unless you have specific hardware requirements.

For DSM 6.2.x, DS918+ is preferable unless you have specific hardware requirements.

I recommend looking at Synology's platform spec data sheets for the various models and comparing them to your own hardware.

 

Information on DS916+: this hardware version of the loader is not a generic build. It is intended for boards similar to the official one; otherwise there is no point in using it. In other words, this loader is for Braswell-family processors (J3160, N3710, etc.) and its kernel is optimised for those processors. The loader was added for hardware transcoding support. For desktop/mobile processors, a 4th Gen Core processor or later is required to provide the necessary instruction features. For example, an i7-4700MQ works, but an E3-1230v2 complains about undefined instructions (source: Jun's posts of 11/04/2017, 12/04/2017 and 08/01/2018).
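If you are unsure what your CPU offers, you can list its model and feature flags from a live Linux session (e.g. Ubuntu) before choosing a loader. This is only a generic sketch for comparing your CPU against the generations discussed above; it does not name the specific flags Jun's kernel requires.

# Show the CPU model and its instruction-set flags from a live Linux environment.
lscpu | grep -i 'model name'
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort | less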


3 - How can I use SHR in DSM?

 

For DSM 6, SHR can be re-enabled by editing the 'synoinfo.conf' file located under '/etc.defaults'.
Comment out (or delete) the line:

supportraidgroup="yes"

and add:

support_syno_hybrid_raid="yes"
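A minimal command-line sketch of that edit, assuming SSH access as root; back up the file first, since a malformed synoinfo.conf can cause problems at boot:

# Back up the file, then comment out RAID Group support and enable SHR (DSM 6.x).
cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
sed -i 's/^supportraidgroup="yes"/#supportraidgroup="yes"/' /etc.defaults/synoinfo.conf
echo 'support_syno_hybrid_raid="yes"' >> /etc.defaults/synoinfo.conf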

 

For DSM 7, please refer to the following link for in-depth explanations: https://xpenology.com/forum/topic/54545-dsm-7-and-storage-poolarray-functionality/. [Thank you @flyride]


7 - After the Grub boot menu, my monitor shows a message 'booting from kernel' and nothing else - what's wrong?

 

This is normal for the XPE 6 boot loader. Connect a serial cable to see more diagnostics, or use Synology Assistant to connect to the NAS.
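If you do attach a serial cable, any standard terminal program on another machine will show the kernel output. The loader's grub.cfg configures ttyS0 at 115200 baud (see FAQ 12 below), so a session from a Linux client might look like the sketch below; /dev/ttyUSB0 is just an example device name for a USB-to-serial adapter.

# Attach to the NAS serial console at 115200 baud from another Linux machine.
screen /dev/ttyUSB0 115200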


 

10 - What MAC address should I use in the grub configuration settings?

 

It is recommended, but not a requirement, to set the grub configuration MAC address to be the same as that of the installed network adapter. You can check your NIC's MAC address by either looking in the BIOS or launching a live OS such as Ubuntu.
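For example, from a live Ubuntu session the MAC address can be read with standard tools; interface names such as eth0 or enp3s0 will vary by system.

# Show all network interfaces and their MAC (link/ether) addresses.
ip link show
# Or read a single interface's MAC directly from sysfs (replace eth0 as needed).
cat /sys/class/net/eth0/address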


11 - The HDD slot numbers in Storage Manager do not match the ports on my disk controller - why?

 

Storage Manager determines drive numbers based on the enumeration of controllers at a bus/slot level, which cannot be changed. Manually recording disk slots/serial numbers/controller ports is a recommended workaround.
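One way to record that mapping is from the DSM command line over SSH as root. smartctl ships with DSM, though its output format varies by drive and controller, so treat this as a sketch:

# List block devices, then print each drive's model and serial number.
ls /dev/sd?
for d in /dev/sd?; do
    echo "== $d =="
    smartctl -i "$d" | grep -E 'Device Model|Serial Number'
done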


12 - How can I reclaim the large amount of reserved memory shown in DSM's Resource Monitor?

 

Add

disable_mtrr_trim

to the 'set common_args_3615' line in the grub.cfg file contained in the loader (applicable to Jun's loader). It should look something like this:

set common_args_3615='disable_mtrr_trim syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS3615xs vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet'

That should give you back all the reserved RAM.

 

Note: DS3617xs uses 'set common_args_3617' and DS916+ uses 'set common_args_916'.
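One common way to make that edit from a running DSM system is sketched below. It assumes Jun's loader exposes the USB stick's first partition as /dev/synoboot1 and that grub.cfg lives under /grub on that partition; if either assumption does not hold on your build, edit grub.cfg on another computer instead, and always keep a backup of the original file.

# Mount the loader's boot partition and edit grub.cfg in place (DSM shell, as root).
mkdir -p /tmp/synoboot
mount /dev/synoboot1 /tmp/synoboot
cp /tmp/synoboot/grub/grub.cfg /tmp/synoboot/grub/grub.cfg.bak
vi /tmp/synoboot/grub/grub.cfg        # add disable_mtrr_trim to the common_args line
umount /tmp/synoboot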


14 - What is the minimum VMDK size one can have in DSM?

 

The minimum size one can allocate to a VMDK is 5GB. Below that you will get a 'Failed to format the disk (35)' error during installation.

 

It should also be noted that this error may appear on bare-metal installations where an HDD has not been properly formatted prior to being added to your RAID. Refer to this link if you are in this situation: https://www.synology.com/en-global/knowledgebase/DSM/tutorial/General/What_can_I_do_when_I_get_the_error_message_quot_Failed_to_format_disk_quot_when_formatting_data_partition_during_the_setup

 

For DSM 7, new installs reserve 8GB for /dev/md0, 2GB for swap and the minimum volume size is 10GB.  This means the practical smallest usable VMDK is now 21GB. [Thank you @flyride]
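If you want to see where that space goes on an installed system, the system and swap arrays are visible from the shell; the commands below are standard Linux tools present in DSM.

# Inspect the DSM system (md0) and swap (md1) arrays and the raw partition sizes.
cat /proc/mdstat
cat /proc/partitions      # sizes are shown in 1K blocks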


15 - What is RAIDF1 and why would I want to use it?

 

RAIDF1 is a modification of RAID5, implemented as a customization of MDRAID (the disk array manager used by DSM). It is specifically tuned to minimize the likelihood of SSDs wearing out at the same time.

 

SSDs have a finite lifespan based on the number of times they are written. This information is usually presented as a "wear indicator" or "life remaining" number from 100 (new) counting down to 0 (end of service life). Most operating systems, including DSM, monitor SSD health using SMART and will alert when devices near the end of their service lives, and prior to failure.
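You can also read that wear figure yourself with smartctl from the DSM shell. The exact attribute name varies by vendor (e.g. 'Wear_Leveling_Count', 'Percent_Lifetime_Remain', 'Media_Wearout_Indicator'), so the grep pattern below is only illustrative:

# Print an SSD's SMART attributes and look for a wear/life indicator (sda is an example).
smartctl -A /dev/sda | grep -iE 'wear|life|percent'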

 

An array of brand new SSDs is consumed evenly because of how RAID5 intentionally distributes workloads evenly across the array members. Eventually, the SSDs all wear out together, which can result in multiple, simultaneous drive failures and subsequent data loss.

 

How does RAIDF1 work?

 

RAIDF1 attempts to avoid this by biasing writes to a specific drive in the array. To understand this, consider how the DSM btrfs and ext4 filesystems store data. By default, DSM filesystems save data in 4K blocks. Even a 1-byte file uses 4K as a minimum amount of space. Modern HDDs/SSDs also store data in 4K sectors. When a byte must be changed, all the other bytes within the sector are read, then rewritten at the same time. This read/write requirement is called write amplification and it affects the performance of all parts of the storage ecosystem, from HDDs and SSDs to filesystems to RAID arrays.

 

MDRAID also works with blocks, but they are called chunks to differentiate them from filesystem blocks. The default chunk size for DSM RAID5/6/10 is 64K. A stripe is the logical grouping of adjacent chunks spanning the array members horizontally.
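You can confirm the chunk size of an existing data array from the DSM shell. /dev/md2 is typically the first data volume's array, but the md number can differ depending on your pool layout, so treat the device name as an example.

# Show the RAID level and chunk size of a data array (md2 is only an example).
cat /proc/mdstat
mdadm --detail /dev/md2 | grep -iE 'level|chunk'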

 

Using the example of a RAID5 with three drives, two of the chunks in the stripe contain data and the third chunk is parity. When DSM performs data scrubbing, it reads all three chunks, then validates all the data and parity in each stripe for mathematical consistency (and corrects if necessary).

 

Each stripe rotates the position of the parity block successively through the array members. In the three-drive example, stripe 1's parity chunk is on drive 1, stripe 2's parity chunk is on drive 2, stripe 3's parity chunk is on drive 3, stripe 4's parity chunk is back on drive 1, and so on... This results in an even distribution of data and parity across all array members.

 

Note that many files (filesystem blocks) may be stored in one chunk. The highest-density case is 16 files of 4K or smaller in a single chunk. Consider that when one of those files changes, only two of the three chunks in the stripe must be rewritten: first, the chunk containing the block containing the file, and then the parity chunk (since the parity calculation must be updated).

 

RAIDF1 subtly modifies the RAID5 implementation by picking one of the array members (let's call it the F1-drive), and sequencing two consecutive stripes in the stripe parity rotation for it. This is NOT additional parity (each stripe still only has one parity chunk), so there is no loss of space or read/write performance. The table below compares parity distribution (how much of the total parity is stored on specific array members) between RAID5 and RAIDF1:

 

Array configuration   Drive #1 parity   Drive #2 parity   Drive #3 parity     Drive #4 parity    Drive #5 parity
3-drive RAID5         33.33%            33.33%            33.33%              -                  -
4-drive RAID5         25%               25%               25%                 25%                -
3-drive RAIDF1        25%               25%               50% (F1-drive)      -                  -
4-drive RAIDF1        20%               20%               20%                 40% (F1-drive)     -
5-drive RAIDF1        16.66%            16.66%            16.66%              16.66%             33.33% (F1-drive)

 

With RAIDF1, anytime a full stripe is written, I/O is evenly distributed among the drives, just like RAID5. When a small file or file fragment (one that does not span a stripe) is written, on average the F1-drive will be used about twice as often as the other drives. Thus, the F1-drive will experience accelerated wear and will reach its life limit first. Then it can be replaced with minimal risk of one of the remaining members failing at the same time.

 

Upon replacement, DSM selects the SSD that is closest to being worn out and designates it as the new F1-drive. The array sync then rewrites the array to achieve the desired RAIDF1 parity distribution.

 

Note that the total number of write events is not increased with RAIDF1. "Total cost of ownership" does not change, as the extra writes to the F1-drive are writes avoided on the other array members, so those members last longer.

 

Caveats and other notable issues

 

As a RAID5 variant, RAIDF1 creates arrays based on the smallest member device. For best results, all the drives should be the same size and type (a larger drive can be used, but the extra space will be ignored). RAIDF1 can theoretically be “defeated” by installing dissimilar drives, with one drive having significantly higher capacity and/or a high DWPD (drive writes per day) rating. If that drive were then selected as the F1-drive, it might have enough write capacity to outlast the other array members, which could then fail together. Always using identical SSDs for the array avoids this edge case.

 

SHR (Synology Hybrid RAID) allows drives of different sizes to be used in a redundant array while maximizing space available. This is done by creating a series of arrays, including a small one compatible with the smallest drive, and a large one using the available space common to the largest drives, and possibly some in between depending upon the makeup and complexity of the SHR. The arrays are then concatenated into a single logical volume (using LVM) available for use within DSM.

 

For redundancy, the large SHR drives must be members of all the arrays. The small SHR drives contain only one array and not much of the overall data, and are accessed much less frequently than the larger drives. For RAIDF1’s algorithm to produce expected results, array write patterns must be simple and predictable. In summary, RAIDF1 and SHR array behaviors are not compatible with each other, which is reflected in the Synology DiskStation product lines. The Synology models that support RAIDF1 are the same as those that do not officially support SHR. This includes the XPEnology-enabled DS3615xs and DS3617xs platforms.  Note that SHR can be enabled on these platforms by modifying /etc.defaults/synoinfo.conf, with no impact to RAIDF1 functionality.

 

The MDRAID modifications that enable RAIDF1 are compiled into the DSM kernel. The consumer-oriented DSM platforms do not contain those changes, including the XPEnology-enabled DS916+ and DS918+ platforms. Creation and maintenance of a RAIDF1 is not possible on those systems. However, just like SHR, an established RAIDF1 array is completely functional and behaves like any other RAID5 array when migrated to a platform that does not support it. Brilliant!
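If you are unsure which level an existing array is running (useful before migrating between platforms), /proc/mdstat reports it directly; the data arrays are usually md2 and above, while md0 and md1 are the system and swap arrays.

# Each md device's line in /proc/mdstat names the RAID level it is running.
cat /proc/mdstat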

 

TRIM helps minimize the impact of write amplification on SSDs. Because the F1-drive is written to more frequently, it will be affected by write amplification more severely than the other array members, and performance of both the drive and the array will degrade over time unless TRIM support is enabled.
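TRIM itself is enabled per volume in Storage Manager when DSM supports it for your drives. A rough way to check from the shell whether the kernel sees a drive as TRIM-capable is to read its discard parameters from sysfs; sda is just an example device.

# A non-zero discard_granularity usually means the kernel can issue TRIM to the device.
cat /sys/block/sda/queue/discard_granularity
cat /sys/block/sda/queue/discard_max_bytes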

 

Finally, there is no RAID6-based, SSD-optimized choice yet. Perhaps RAIDF2 will be an option in DSM 7.0.

 

References

 

If you want to install RAIDF1 on XPEnology, you will find a simple tutorial here.

 

https://en.wikipedia.org/wiki/Standard_RAID_levels

https://global.download.synology.com/download/Document/Software/WhitePaper/Firmware/DSM/All/enu/Synology_RAID_F1_WP.pdf

http://wiki.linuxquestions.org/wiki/Block_devices_and_block_sizes

https://raid.wiki.kernel.org/index.php/RAID_setup#Chunk_sizes

https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm

https://www.synology.com/en-sg/knowledgebase/DSM/tutorial/Storage/Which_Synology_NAS_models_support_RAID_F1

https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/Which_models_have_limited_support_for_Synology_Hybrid_RAID_SHR

https://en.wikipedia.org/wiki/Trim_(computing)

