XPEnology Community

flyride

Moderator
  • Posts

    2,438
  • Joined

  • Last visited

  • Days Won

    127

Everything posted by flyride

  1. flyride

    DSM 6.2 Loader

    1.03b does not support UEFI boot. 1.04b only supports the DS918+ platform. Usually there is an option in your BIOS to configure boot order; it may look different when Legacy/CSM boot is enabled. Check your documentation carefully.
  2. I never cease to be amazed to learn new things about DSM even after years ...
  3. FWIW, I had to download a new version of Synology Assistant in order to see DSM 7 installs.
  4. DSM 7 handles array creation somewhat differently than DSM 6.x. This tutorial will explain the new, less flexible behavior of DSM 7, provide historical context, and offer practical solutions should the reader wish to create certain array configurations no longer possible via the DSM 7 UI. The structures described herein may also be useful toward an understanding of the configuration, troubleshooting and repair of DSM arrays. Be advised - over time, Synology has altered/evolved the definitions of the storage terms used in its documentation and in this tutorial. Additionally, many of the terms have overlapping words and meanings. This can't be helped, but you should be aware of the problem.

     Background: DSM 6 offers many options as to how to set up your arrays. In DSM, arrays are broadly referred to as Storage Pools. Within a Storage Pool, you may configure one or more Volume devices, on which btrfs or ext4 filesystems are created, and within which Shared Folders are provisioned. Here's a simplified history of the different Storage Pool configurations available on the XPe-enabled DSM versions and platforms. If you aren't familiar with lvm (logical volume manager), it refers to a storage meta-container able to hold multiple host arrays (i.e. SHR) and/or multiple target Volumes. Note that the word "volume" in logical volume manager has nothing to do with DSM Volumes.

     DSM 5 provides multiple Storage Pool configurations, including "Basic" (a RAID 1 array with one drive), RAID 1, RAID 5, RAID 6, RAID 10, and SHR/SHR2 (conjoined arrays with an lvm overlay). These are known as "Simple" Storage Pools, with a single Volume spanning the entire array. The consumer models (i.e. DS918+/DS920+ platforms) always create Simple Storage Pools, with non-lvm arrays (RAID) designated as "Performance" and lvm-overlayed arrays (SHR/SHR2) as "Flexible."

     DSM 6 adds enterprise features for DS3615xs/DS3617xs/DS3622xs+, including RAID F1 arrays and "Raid Groups." A Raid Group is a Storage Pool with an lvm overlay regardless of array type, and it permits multiple Volumes to be created within it. For DS3615xs/DS3617xs the "Performance" Storage Pool is the same as on the consumer models (non-lvm), while "Flexible" refers to the Raid Group (lvm-overlayed) option. Synology limits the use of SHR/SHR2 on these models by default. However, it can be enabled with a well-known modification to DSM configuration files, such that the UI is then able to create an SHR array within a "Flexible" Raid Group Storage Pool.

     DSM 6 also offers SSD caching on all platforms. When SSD caching is enabled, the target Volume device is embedded into a "device mapper" that binds it with the physical SSD storage. The device mapper is then mounted to the root filesystem in place of the Volume device.

     DSM 7 supports all of the above configurations if they exist prior to upgrade. However, DSM 7 Storage Manager is no longer able to create any type of Simple Storage Pool. On all the XPe-supported platforms, new Storage Pools in DSM 7 are always created within Raid Groups (therefore lvm-overlayed) and with a dummy SSD cache, even if the system does not have SSDs to support caching. lvm uses additional processor, memory and disk space (Synology apparently is no longer concerned with this, assuming that the penalty is minor on modern hardware) and, if you don't need SHR or cache, it creates unnecessary complexity for array troubleshooting, repair and data recovery. Look back at the first illustration vs. the last and you can see how much is added for zero benefit if the features are not required. Users might prefer not to have superfluous services involved in their Storage Pools, but DSM 7 no longer offers a choice.
     From all these configuration options, your Storage Pool type can be determined by observing how the DSM Volume connects to the filesystem, and the presence or absence of lvm "Containers." This table shows various permutations that you may encounter (note that all describe the first Storage Pool and DSM Volume; multiples will increase the indices accordingly):

     Storage Pool Type                              Array Device(s)           Container     /volume1 Device
     Simple (DSM 6) "Performance"                   /dev/md2                  (none)        /dev/md2
     Simple (DSM 6, DS918+/DS920+ SHR) "Flexible"   /dev/md2, /dev/md3...     /dev/vg1000   /dev/vg1000/lv
     Raid Group (DSM 6, DS36nnxs SHR) "Flexible"    /dev/md2, /dev/md3...     /dev/vg1      /dev/vg1/volume_1
     Simple with Cache (DSM 6)                      /dev/md2                  (none)        /dev/mapper/cachedev_0
     Raid Group with Cache                          /dev/md2, [/dev/md3...]   /dev/vg1      /dev/mapper/cachedev_0

     Example query to determine the type (this illustrates a cache-enabled (read-only cache), lvm-overlaid Raid Group with one DSM Volume):

     dsm:/$ df
     Filesystem              1K-blocks        Used   Available Use% Mounted on
     /dev/md0                  2385528     1063108     1203636  47% /
     none                      1927032           0     1927032   0% /dev
     /tmp                      1940084        1576     1938508   1% /tmp
     /run                      1940084        3788     1936296   1% /run
     /dev/shm                  1940084           4     1940080   1% /dev/shm
     none                            4           0           4   0% /sys/fs/cgroup
     cgmfs                         100           0         100   0% /run/cgmanager/fs
     /dev/mapper/cachedev_0 33736779880 19527975964 14208803916  58% /volume1

     dsm:/$ sudo dmsetup table | head -2
     cachedev_0: 0 1234567 flashcache-syno conf:
         ssd dev (/dev/md3), disk dev (/dev/vg1/volume_1) cache mode(WRITE_AROUND)

     dsm:/$ sudo lvs
       LV                    VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
       syno_vg_reserved_area vg1 -wi-a----- 12.00m
       volume_1              vg1 -wi-ao---- 11.00g

     Solutions: If desired, DSM 7 can be diverted from its complex Storage Pool creation behavior. You will need to edit the /etc/synoinfo.conf and /etc.defaults/synoinfo.conf files from the shell command line. /etc/synoinfo.conf is read by DSM during boot and affects various aspects of DSM's functionality. Just before this happens during the boot, however, /etc.defaults/synoinfo.conf is copied over the file in /etc. Generally you can just make changes in /etc.defaults/synoinfo.conf, but the copy is algorithmic and desynchronization can occur, so it is always best to check for the desired change in both files if the results are not as expected.
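     As a minimal sketch of what checking and making these edits can look like from the shell (using supportraidgroup from solution 1 below as the example; verify the exact parameter name and key="value" quoting in your own files before changing anything, and keep backup copies):

     dsm:/$ grep -E 'supportraidgroup|supportssdcache' /etc/synoinfo.conf /etc.defaults/synoinfo.conf
     dsm:/$ sudo sed -i 's/^supportraidgroup="yes"/supportraidgroup="no"/' /etc.defaults/synoinfo.conf
     dsm:/$ sudo sed -i 's/^supportraidgroup="yes"/supportraidgroup="no"/' /etc/synoinfo.conf

     Since the files are processed at boot, reboot and re-run the grep afterward to confirm that both copies still show the intended value.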
     1. Enable SHR on DS3615xs, DS3617xs and DS3622xs+: Edit synoinfo.conf per above, changing the parameter supportraidgroup from "yes" to "no". It is no longer required to comment out this line, or to add "support_syno_hybrid_raid" to the configuration file. Despite the implications of this parameter being set, Raid Groups continue to be supported and DSM will create Raid Group enabled Storage Pools, with the added bonus of the option for SHR being available in the UI.

     2. Create arrays without dummy caches: Edit synoinfo.conf per above, changing the parameter supportssdcache from "yes" to "no". Arrays created while this option is disabled will not have the dummy cache configured. Further testing is required to determine whether an SSD cache can subsequently be added using the UI, but it is likely to work.

     3. Create a cacheless, Simple array on DSM 7: DSM 7 can no longer create a Simple Storage Pool via the UI. The only method I've found thus far is with a DSM command-line tool. Once Storage Pool creation is complete, Volume creation and subsequent resource management can be accomplished with the UI. Note that you cannot just create an array using mdadm as you would on Linux. Because DSM disks and arrays include special "space" metadata, an mdadm-created array will be rejected by DSM 7. Using the DSM command-line tool resolves this problem.

     First, determine the RAID array type (raid1, raid5, raid6, raid10 or raid_f1; raid_f1 is only valid on DS3615xs/DS3617xs/DS3622xs+ and all array members must be SSDs) and the device names of all the disks that should comprise the array. The disks that you want to use should all be visible, as in the example:

     dsm:/$ ls /dev/sd?
     /dev/sda /dev/sdb /dev/sdc /dev/sdd

     And the target disks should not be in use by existing arrays (nothing should be returned):

     dsm:/$ cat /proc/mdstat | egrep "sd[abcd]3"

     Then, create the array with the following command (the example uses the disks from above and creates a RAID 5 array). All data will be erased from the disks specified.

     dsm:/$ sudo synostgpool --create -t single -l raid5 /dev/sda /dev/sdb /dev/sdc /dev/sdd

     If successful, the array will be created along with all the appropriate DSM metadata, and you will see the new Storage Pool immediately reflected in the Storage Manager UI. If it fails, there will be no error message; the usual cause is that disks are already in use in another array, and deleting the affected Storage Pools should free them up. You can review the events that have occurred in /var/log/space_operation.log. A DSM Volume can then be added using the Storage Manager UI. Further testing is required to determine whether an SSD cache can subsequently be added using the UI, but it is likely to work.
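     A quick way to sanity-check the result, once the Storage Pool and a DSM Volume exist, is to repeat the same kind of query shown earlier in this post. On a Simple, cacheless pool the DSM Volume should mount directly from the md device, with no vg or cachedev layer in between (the /dev/md2 name here is illustrative; your new array may receive a different index):

     dsm:/$ cat /proc/mdstat
     dsm:/$ df | grep volume
     dsm:/$ sudo lvs
     dsm:/$ tail /var/log/space_operation.log

     The new array should appear as active in mdstat, /volume1 should be mounted from /dev/md2 rather than a cachedev or vg device, lvs should show nothing for the new pool, and space_operation.log records the pool creation events.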
  5. AFAIK there is only one modern underlying virtual disk structure (perhaps not with vATA, haven't tested with vNVMe). You can change a virtual disk to connect to a vSATA controller, or any dialect of the virtual SCSI controller and the data remains accessible.
  6. I guess I should point out that "real" SCSI devices (as opposed to SAS) don't have SMART. While SCSI disks were functional in 1.02b/DSM 6.1.x, the install FAQs since 6.x was released have always advised virtual SATA devices. With 1.03b/1.04b and 6.2.x, virtual SCSI (and probably physical SCSI too) stopped working reliably. Some of my own analysis back in the early days of 6.2.x is here which may be informative and correlate to the experience you are currently having. Just a conjecture, but I suspect this is not a Redpill problem at all, and the configuration is using "leftover" SCSI support code in DSM which Synology doesn't test anymore, since there is no hardware to support it.
  7. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ The link has specific CPU requirements vs. versions of DSM. You can install with as little as 2GB RAM, but it will be slow. The functional minimum hard drive size is about 6GB, but there is so little space available after OS overhead that it is hardly worth it.
  8. https://xpenology.com/forum/topic/35882-new-sataahci-cards-with-more-then-4-ports-and-no-sata-multiplexer/ For the least risk you will want to select a PCIe x1 card, although if you research you may find x2 cards that work fine in an x1 slot (with the extra connector overhanging).
  9. flyride

    Backup?

    You have many, many choices. Two good options are Snapshot Replication (native to DSM, leveraging btrfs replication) and rsync (also accessible from the UI).
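    If you want to see what the rsync option looks like from the shell, a rough sketch (the paths, user and destination host here are just placeholders; most people will simply configure this through the DSM UI):
    rsync -avh --delete /volume1/share/ backupuser@backup-nas:/volume1/backup/share/
    This mirrors a shared folder to another machine over SSH; drop --delete if you don't want removals propagated to the backup copy.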
  10. Does the DC7700 support boot from USB? Very old devices may not support it at all or require a BIOS upgrade to enable USB boot. Even if supported, it may not automatically select a USB boot target and it must be explicitly configured in the BIOS.
  11. All looks good from here. If the volume is currently accessible from the network, I would go ahead and attempt the drive replacement. If the volume is not currently accessible, do a soft reboot and verify that the volume is accessible after the reboot. Also check to make sure there is no change to the mdstat. If all is well at that point, then attempt the drive replacement.
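      For the mdstat check, something along these lines works: save a copy before the reboot with
      # cat /proc/mdstat > ~/mdstat.before
      (kept in the home directory so it survives the reboot), then after the reboot run
      # diff ~/mdstat.before /proc/mdstat
      No output means nothing changed.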
  12. Yes, let's be totally certain of the array state prior to removing a disk (or disabling in the UI, or whatever). Yep. All three remaining drives (sdb, sdc, sdd) need to be current and active ("U") on both arrays prior to removing Drive 1 (sda). Is your drive bay hot-swappable, or are you powering down between disk operations?
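      A minimal way to verify that state (device names follow the earlier posts in this thread; mdadm needs root):
      # cat /proc/mdstat
      # mdadm --detail /dev/md2
      # mdadm --detail /dev/md3
      Confirm that the sdb, sdc and sdd members are present and showing "U" in the status brackets for both md2 and md3, and that mdadm --detail lists none of them as faulty or removed.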
  13. Ok, let's fix the other array.
      # mdadm /dev/md3 --manage --add /dev/sdb6
      Before you attempt to replace a drive, please post an mdstat.
  14. What you are doing is repeatedly breaking /dev/md2 and repairing it (which is no longer necessary). Your drive is failing, so it keeps replacing bad sectors. Let's stop doing that, because eventually you will run out of replacement sectors. I'm concerned that we don't know which drive is actually experiencing the issue, and that we may be repairing the wrong array; we need to identify the failing drive so that you can properly replace it. If so, we can issue a different mdadm command to repair the /dev/md3 array and try again. Please verify this by running:
      # smartctl -d sat --all /dev/sda | fgrep -i sector
  15. There isn't much posted here to work with, and you have no redundancy to fall back on. You need to know the mount device for your filesystem:
      # cat /etc/fstab
      The line with /volume1 should tell you the mount device. I expect it to be /dev/md2, but you need to verify this on your own system. If it is /dev/vgXXX, then you need to investigate the status of your LVM volume group: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=107934 If it is /dev/mdX, then you can attempt some data recovery commands by following the thread starting here (ignore the vgchange commands as they won't apply to you): https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=107979 You should be prepared to add another drive or USB-mounted device equal to or larger than your volume as a target for the recovered data.
  16. Yes, it is trying to recreate parity from the "bad" drive for that specific subarray. Once that is done, you should be able to replace the drive.
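      If you want to watch that rebuild from the shell, re-run
      # cat /proc/mdstat
      periodically; while the resync is running it shows a recovery progress line with a percentage and time estimate, and when that line disappears the subarray is rebuilt.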
  17. Sorry, I got the two arrays crossed up.
      # mdadm /dev/md2 --manage --add /dev/sda5
  18. As @IG-88 indicated, certain controllers will move connected devices with impunity. Because md writes a UUID into a superblock location on each device, it can start and operate an array without concern for physical/logical disk slot mapping. I'm not clear whether your experience is due to the controller you are using or the unusual configuration of more than 26 disks, but I don't think you should expect to find a resolution to this, and it very well may not actually be important.
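      If you are curious, that per-device metadata (including the array UUID md uses to reassemble the array regardless of which port or slot a disk appears on) can be inspected with, for example:
      # mdadm --examine /dev/sda5
      substituting any member partition of your array.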
  19. You are doing something that few, if any, other people have done successfully. You will not receive anything other than general advice for such an undertaking; anything more specific would be purely speculative, which is not very helpful. To expect that you will be successful without gaining some deep understanding of Linux, Linux devices, md and other Linux open source toolsets is a bit audacious.
  20. There isn't enough information to make a decision about this. You will need to provide information about the add-in PCI card in order for someone to offer an opinion about it. Based on the M92p specs I can't see a reason that the onboard NIC is not suitable for DSM however. In general if you are troubleshooting hardware, it is best to simplify the environment and get things working first, which might encourage me to remove the PCI card, at least for now.
  21. There are three different crashed entities on your system right now.
      1. "Disk 1" physical drive - DSM is reporting it as crashed, but it is still working, at least to an extent.
      2. /dev/md3 array within your SHR - which is missing /dev/sda6 (logical drive 1)
      3. /dev/md2 array within your SHR - which is missing /dev/sdb5 (logical drive 2)
      The #1 issue is what you think you need to fix by replacing a bad drive. The problem is that /dev/md2 is critical because of a loss of redundancy on another drive, not the one you are trying to replace. SHR needs both /dev/md2 and /dev/md3 to work in order to present your volume. When you try to replace the disk, /dev/md2 cannot start. You need to attempt to repair the /dev/md2 array with the disk you have prior to replacing it. DSM may not let you do this via the GUI because it thinks the physical disk is crashed. You can try and manually resync /dev/md2 with mdadm as root:
      # mdadm /dev/md2 --manage --add /dev/sdb5
      EDIT: the add target above should have been /dev/sda5
  22. What you have shown looks correct to me. You should not have or need the SATA disk in the boot sequence, though. The one thing that catches my eye is that your system continues to try a network boot; I would attempt to disable that, which is often found in the PCI settings or hardware device options. Also, you might try a different USB key, or create the key from scratch if BIOS changes have occurred.