DSfuchs Posted February 20 #1

In 7.2.1 Update 7 it works as well as ever. Once set up under 7.2.2, DSM shuts down as punishment. It's a real shame, as this means the most powerful disk subsystem has been deactivated. It's time to look for alternatives.
bearcat Posted Wednesday at 02:44 PM #2

Hmm, when did Synology ever support RAID-4? It was supposedly not supported in either DSM 6.2 or 7.2: https://kb.synology.com/en-global/DSM/help/DSM/StorageManager/storage_pool_what_is_raid?version=6
DSfuchs (Author) Posted Wednesday at 04:42 PM #3

1 hour ago, bearcat said:
Hmm, when did Synology ever support RAID-4? It was supposedly not supported in either DSM 6.2 or 7.2: https://kb.synology.com/en-global/DSM/help/DSM/StorageManager/storage_pool_what_is_raid?version=6

```
mdadm --create --help
Usage:  mdadm --create device --chunk=X --level=Y --raid-devices=Z devices

This usage will initialise a new md array, associate some devices with
it, and activate the array.  In order to create an array with some
devices missing, use the special word 'missing' in place of the
relevant device name.

Before devices are added, they are checked to see if they already contain
raid superblocks or filesystems.  They are also checked to see if the
variance in device size exceeds 1%.  If any discrepancy is found, the user
will be prompted for confirmation before the array is created.  The
presence of a '--run' can override this caution.

If the --size option is given then only that many kilobytes of each device
is used, no matter how big each device is.  If no --size is given, the
apparent size of the smallest drive given is used for raid level 1 and
greater, and the full device is used for other levels.

Options that are valid with --create (-C) are:
  --bitmap=          : Create a bitmap for the array with the given filename
                     : or an internal bitmap if 'internal' is given
  --chunk=      -c   : chunk size in kibibytes
  --rounding=        : rounding factor for linear array (==chunk size)
  --level=      -l   : raid level: 0,1,4,5,6,10,linear,multipath and synonyms
```
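For context, this is roughly what creating such an array looks like by hand. A minimal sketch, assuming three spare partitions; the device names, array name, and mount point are placeholders, not anything DSM sets up itself:

```sh
# Create a RAID 4 array from three partitions. With md's raid4
# personality the dedicated parity ends up on the last listed member.
mdadm --create /dev/md2 --level=4 --raid-devices=3 \
    /dev/sdb3 /dev/sdc3 /dev/sdd3

# Watch the initial parity build
cat /proc/mdstat

# Filesystem and mount (btrfs, as on typical DSM volumes)
mkfs.btrfs /dev/md2
mount /dev/md2 /mnt/volume
```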
bearcat Posted Thursday at 06:59 AM #4

Please correct me if I'm wrong, but even though mdadm is being used by DSM in the background, RAID-4 has never been promoted as an officially supported feature by Synology. There may be other limitations in the official DSM builds compared to a fully packaged Linux system, without it being related to "marketing reasons" as you claim. Just saying...
DSfuchs (Author) Posted Thursday at 07:44 AM #5

44 minutes ago, bearcat said:
Please correct me if I'm wrong, but even though mdadm is being used by DSM in the background, RAID-4 has never been promoted as an officially supported feature by Synology. There may be other limitations in the official DSM builds compared to a fully packaged Linux system, without it being related to "marketing reasons" as you claim. Just saying...

The DSM shutdown as "punishment" shows me that they know what they are doing.
DSfuchs (Author) Posted Thursday at 11:04 AM #6

No confusion: it was simply disabled, as punishment.
bearcat Posted Thursday at 11:04 AM #7

For the sake of argument:

On 2/20/2025 at 1:26 PM, DSfuchs said:
this means the most powerful disk subsystem has been deactivated.

How did you reach that conclusion? Someone else made a statement like this:

Quote:
Cons of RAID 4:
- Only a single parity disk. Unlike RAID 5, which spreads its parity data across all disks, RAID 4 writes its parity data to just one disk. Unfortunately, if the parity disk fails, all data may be lost.
- Poor random write performance. Since this storage technology uses a single disk for all parity information, random writes are slow: every write to a data disk also requires a write to the one parity disk, so one write has to wait for another to complete.
- Rarely used in production today. This storage technology is rarely used nowadays, because RAID 5 performs better.
bearcat Posted Thursday at 11:05 AM #8

Just now, DSfuchs said:
it was simply disabled, as punishment.

Screenshot?
DSfuchs (Author) Posted Thursday at 11:10 AM #9

3 minutes ago, bearcat said:
For the sake of argument: How did you reach that conclusion? Someone else made a statement like this:

Every sentence is false; the opposite is true in each case.
DSfuchs (Author) Posted Thursday at 11:11 AM #10

5 minutes ago, bearcat said:
Screenshot?

...of a powered-off DiskStation?
DSfuchs (Author) Posted Thursday at 11:24 AM #11

"Only a Single Parity Disk" => A good thing: the other hard drives are not affected.

"Unlike RAID 5 which writes its parity data across all disks..." => Bad: in RAID 5, every disk is burdened with writing other disks' parity data on top of its normal workload.

"Unfortunately, if the parity disk fails, all data may be lost." => Nonsense: the now-unprotected data disks remain intact, exactly as with a single failed disk in RAID 5.

"Poor Random Writes Performance" => Nonsense: why should RAID 5 be faster than RAID 0 plus parity on a separate disk? RAID 5 is the slower one when other write operations are taking place at the same time.

...And so on; I don't want to dig further into these false premises here. RAID 4 has nothing but advantages!
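One way to see for yourself where the parity load actually lands is to watch per-disk utilisation during a sustained write. A sketch, assuming the array members are sdb, sdc, and sdd and that the sysstat package is installed:

```sh
# Per-disk extended stats every 2 seconds while writing to the array.
# Under RAID 5 the parity I/O spreads across ALL members; under RAID 4
# it concentrates on the one dedicated parity disk.
iostat -dx sdb sdc sdd 2
```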
bearcat Posted Friday at 01:58 PM #12

On 3/13/2025 at 12:11 PM, DSfuchs said:
...of a powered-off DiskStation?

Of your "punishment".
bearcat Posted Friday at 01:59 PM #13

On 3/13/2025 at 12:24 PM, DSfuchs said:
RAID 4 has nothing but advantages!

I bow to your knowledge and well-documented sources 🙏
DSfuchs (Author) Posted Friday at 04:24 PM #14

2 hours ago, bearcat said:
I bow to your knowledge and well-documented sources 🙏

What's the point of YOUR post here claiming that if one disk in a RAID system with parity fails, all data is lost!? Please don't follow me.
IG-88 Posted 18 hours ago #15

I can't see the point of arguing this much about "what's best". RAID 5 has simply been the more common choice for a long time, and RAID 4 can have its advantages (as long as the parity disk is "fast" enough to handle all the parity data coming in from the other disks; arrays with more disks might overwhelm the parity disk?). There are scenarios where RAID 4 may be the better fit (NetApp thought so too).

On 2/20/2025 at 1:26 PM, DSfuchs said:
It's time to look for alternatives

You can boot Open Media Vault from an extra disk and use the Synology-created RAID sets and LVMs (OK, not RAID F1, as it is proprietary). Not sure if the volume-label problem still exists, but if so, it can be solved: https://xpenology.com/forum/topic/42793-hp-gen8-dsm-623-25426-update-3-failed/?do=findComment&comment=200475

And you still have DSM installed and ready to use on the disks (if you keep the USB drive for booting DSM). That's still my go-to option if anything about DSM becomes a problem: if the drive with OMV is already prepared, the switch-over is a matter of a few minutes to be up and running again (with network access over SMB and NFS). It gets more complicated if VMs, Docker, or DSM packages are involved, but basic access to the files is easy to restore.

So even if you have a RAID 4 array, upgrade to 7.2, and the system no longer starts, you just grab your OMV disk, boot up, and have access again. You can also SSH into OMV, mount the Synology system partition(s), and prepare the downgrade.
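A rough sketch of that rescue path once OMV is up — assemble the md arrays, activate LVM, and mount. The names here are assumptions (vg1000/lv is the common default on LVM-based Synology volumes, md0 the small DSM system array); check your own layout first:

```sh
# Assemble every md array the disks advertise (DSM system, swap, data)
mdadm --assemble --scan
cat /proc/mdstat

# If the data volume sits on LVM, activate the volume group
vgchange -ay

# Mount the data volume read-only first, to be safe
mkdir -p /mnt/syno /mnt/dsm-root
mount -o ro /dev/vg1000/lv /mnt/syno

# The DSM system partition is typically the first array;
# mount it read-write to prepare a downgrade
mount /dev/md0 /mnt/dsm-root
```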
DSfuchs (Author) Posted 18 hours ago #16

1 hour ago, IG-88 said:
...RAID 4 can have its advantages (as long as the parity disk is "fast" enough to handle all the parity data coming in from the other disks; arrays with more disks might overwhelm the parity disk?)... Not sure if the volume-label problem still exists, but if so, it can be solved...

The parity disk can even be slower; it is better, though, for it to be somewhat larger, e.g. an additional 25%. And a 2-slot system is sufficient, because I can easily attach the parity drive via USB 3.

Take a scenario with three hard drives: I write 140 MB/s in total to the first two, essentially RAID 0, drives. The parity disk then only needs to sustain 70 MB/s; even an SMR drive can manage that. With three fast 2-4 TB drives I wrote 440 MB/s under RAID 4.

Thanks, but I've been familiar with adjusting the volume label as a workaround for 10 years. Funnily enough, a simple read-write mount of the Btrfs-formatted RAID set under Windows doesn't need it.
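A sketch of that layout, under the assumption that md's raid4 personality places the dedicated parity on the last listed member and that the USB 3 disk shows up as /dev/sdd; both the parity placement and the device names are assumptions to verify on your own box:

```sh
# Two internal data partitions plus a USB 3 disk as the dedicated
# parity member, listed last (hypothetical device names)
mdadm --create /dev/md3 --level=4 --raid-devices=3 \
    /dev/sata1p3 /dev/sata2p3 /dev/sdd1

# Inspect the resulting member layout
mdadm --detail /dev/md3
```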
DSfuchs (Author) Posted 18 hours ago #17

Almost everything here in the community and in the NAS world is done with parity RAID, for failover reasons. The notion in #7 above that this shouldn't be done, because a single disk failure means complete data loss, is terrible and should be removed immediately.
DSfuchs (Author) Posted 16 hours ago #18

OT: How fast do you write to your parity RAID subsystem (without SSD/NVMe)? I wrote 440 MB/s under RAID 4 with three IronWolf ST4000NE001 drives.
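If anyone wants to post comparable numbers, a simple sequential test — a sketch using dd with direct I/O so the page cache doesn't inflate the result (the volume path is a placeholder):

```sh
# Write 8 GiB sequentially, bypassing the page cache
dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=8192 oflag=direct

# Clean up the test file afterwards
rm /volume1/ddtest.bin
```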