XPEnology Community
Everything posted by flyride

  1. Was the spare drive in the NAS before? You might need to zero off the partitions prior to adding it. Just take it out, connect it to another PC, delete all the partitions, reinstall and try again.
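     If the other PC is a Linux box, here is a hedged example of wiping the old DSM partition structure (sdX is a placeholder for the spare drive; triple-check the device name, since this is destructive):

       # Destroy all partition tables and filesystem signatures on the spare drive
       sgdisk --zap-all /dev/sdX      # clears GPT and protective MBR
       wipefs -a /dev/sdX             # clears any leftover RAID/filesystem signatures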
  2. Regarding zfs: IMHO, that's a no; it would require kernel support, and none of DSM's tools are designed for ZFS. If you want ZFS, I think moving to FreeNAS is the most direct solution.
  3. Well, the performance is cause enough for concern, so if you can come up with a spare drive or two, I would swap them in and see if your performance improves. One slow drive will drag down an array for sure. Another thing that would be interesting to try is to swap drives between those slots to see if the performance markers follow the drives. As long as the arrays are healthy when you shut down, there will be no issue with the system recognizing and accommodating them in their new positions.
  4. Looking at your specifications, it appears that there is an embedded port multiplier function in the AMD controller which serves the eSATA port. DSM does not support port multipliers (which would also have a negative impact on the bandwidth available to the external enclosure), so it is unlikely that it can be made to work. In theory you can remap the USB drives into the data drive assignments, but this can wreak some havoc on hiding your bootloader, and it will revert on an upgrade, which will crash your array until you can restore the settings. If you want to look into this, it would require changing internalportcfg and usbportcfg in /etc.defaults/synoinfo.conf; see the sketch below.
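     To give a sense of what that remap involves, here is a hedged sketch. The bitmask values are purely illustrative and must be computed for your own port layout, and the same edits are usually mirrored in /etc/synoinfo.conf:

       # Each setting is a hex bitmask with one bit per drive slot.
       # Moving bits out of usbportcfg and into internalportcfg makes DSM
       # treat those USB-attached slots as internal data slots.
       # Example values only - not drop-in settings:
       #   internalportcfg="0xffff"   (slots 1-16 treated as internal)
       #   usbportcfg="0xf0000"       (slots 17-20 reserved for USB devices)
       vi /etc.defaults/synoinfo.conf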
  5. No. This is a limitation of DSM, not the patch. Right now the only "safe" way to do it is to run DSM as a VM in ESXi, attach the NVMe disk to ESXi, and present it to the VM as an emulated SATA device. Yes, you would need to turn on TELNET and/or SSH capability via a checkbox in Control Panel. The apps are usually free, such as PuTTY (on Windows). You do need to learn a little about the Linux command line, and there are thousands of Internet resources to help you there.
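     As a rough sketch of that ESXi approach (the datastore name, VMDK path, and size below are hypothetical): create a datastore on the NVMe drive, carve a virtual disk out of it, and attach that disk to the DSM VM on a SATA controller.

       # On the ESXi host shell, after creating a datastore named "nvme_ds" on the NVMe device
       vmkfstools -c 400G -d thin /vmfs/volumes/nvme_ds/dsm-data.vmdk
       # Then edit the DSM VM's settings and attach dsm-data.vmdk to a SATA controller,
       # so DSM sees it as an ordinary SATA disk it can use in a storage pool.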
  6. I think it's worth explaining how this works a little more. You currently have a SHR2 of 4x 4TB drives and 5x 8TB drives. There is a 4TB array across all 9 drives. Subtract two for parity (RAID6) and you have 7 data members with 4TB each. I/O is split across all those spindles, so your performance generally should be at maximum unless there is something wrong with one of the members. 7 x 4TB = 28TB, so once that 28TB is filled up, there is no more space on the 7-spindle data array.

     There is a second 4TB array across the five 8TB drives, because SHR is trying to maximize your storage. Subtract two for RAID6 parity and you have 3 data members with 4TB each. Any I/O that goes here (because the 28TB is filled up) is limited to only 3 data spindles. That won't be as fast as the other part, both for fewer spindles and also if the first array is busy, since the arrays are competing with each other for the 8TB drives.

     If you add an 8TB drive, both arrays will add a member and gain 4TB more space each. If you add new files, eventually the 4TB of space in the 10-drive array will get used, but you don't have much control over where the filesystem puts it, so it might get spread across both arrays since there are already files there. If you modify a file in the second array, it will probably stay there. Remember this is all transparent to you - all you see is a unified filesystem. So your benefits will be sporadic and unpredictable, at a minimum. You can't really fix the problem completely until you get rid of the SHR. And I'm not completely convinced it matters that much.

     This doesn't tell us very much. You need a lot of monitoring over time to get a sense of what is going on (iostat -m 2 is a good command to see it in real time). Let me just say that I have no trouble driving my 4C/8T Skylake 3.5GHz CPU to 40% utilization on 6xRAID5 writes when running at 10Gbe wire speed. At 1Gbe it doesn't matter. But with a total of 14 array members (9 across all disks, 5 extra across the 8TB disks) and the additional overhead of RAID6, I think it is entirely possible for your 2C/4T CPU to hit thread availability/latency limits without overall utilization driving up very high.

     One thing we haven't tried is bumping up your stripe cache, which actually might be a pretty big benefit given your system configuration. But it is a hit to RAM. How much do you have installed? Post the results of the following:

       ls /sys/block/md*/md/stripe_cache_size
       cat /sys/block/md*/md/stripe_cache_size
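     For reference, a minimal sketch of checking and raising the stripe cache (the md device and the 4096 value are examples; the setting costs roughly 4KB of RAM per stripe per array member and does not persist across reboots unless scripted):

       # Current stripe cache size for every md array
       cat /sys/block/md*/md/stripe_cache_size
       # Raise it for one array (run as root; md2 and 4096 are example values)
       echo 4096 > /sys/block/md2/md/stripe_cache_size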
  7. Technically true for some low-end boards, but OP has a B365 board with six chipset-based SATA ports.
  8. Really, I don't think adding a single drive will get you too much. If we take the example of your SHR, which is 7 data spindles for the first 28TB and then 3 for the remaining space, and expand it with another disk, 50% of the new data blocks are still on the 8TB-only part of the array. So you would get 4TB of theoretically improved performance, but not for the last 4TB or for accessing any of the files already on the 8TB-only part of the array. If you want to rule out SHR, mitigate it altogether with a RAID6 (or RAID5) of 8TB drives. I'd try the CPU first and see how SHR2 responds before doing anything else, though.

     EDIT: if your intention is to REPLACE a potentially problematic drive, that makes some sense.

     Why? Onboard SATA ports are either connected directly to the CPU via the chipset, or have a direct connection to the PCIe bus. The motherboard documentation will confirm, but for mainstream Core CPU chipsets, it's at least four lanes.
  9. Never you mind, we need all the helpers we can get. You found something basically identical to what I was brain-dumping just then
  10. Yeah, this stuff can get complicated and there are a lot of tricks. The @appstore folder is not a shared folder, but it is part of the data array. This allows us to use array manipulation to do what we need and not screw up your volume location or put your data at additional risk.

      Here is what I would try. You need a spare disk larger than the new 500GB disk that you can use temporarily. The assumption is that you have a RAID0 Basic volume now (/volume1) with your app store on it. If you built a SHR with it, do not follow this guide immediately; post the results of a df and a cat /proc/mdstat and let's evaluate before you proceed.

      - Convert your RAID0 Basic volume with the 120GB disk into a RAID1, adding the new 500GB disk. Do not make a SHR. The result will be a 120GB RAID1.
      - Shutdown and remove your 120GB disk. If it was the #1 disk slot on the controller, move the new 500GB disk into it.
      - Add your spare disk, boot and repair the RAID1 array with it (again, do not make SHR). The array will be expanded automatically.
      - Once it's done, shutdown, remove the spare disk, reboot and you will have a "broken" RAID1 array (which is a RAID0).
      - Confirm which /dev/md device it is by running cat /proc/mdstat
      - Assuming it's /dev/md2 (substitute whatever you figure out from the mdstat command), issue the following command as root:

        mdadm --grow --raid-devices=1 --force /dev/md2
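      A hedged recap of that last step from the command line (md2 is carried over from the example above; substitute your actual device):

        # Confirm which array is the degraded ("broken") RAID1
        cat /proc/mdstat
        mdadm --detail /dev/md2          # should show one active member, one missing
        # Redefine the array as a single-member RAID so DSM stops flagging it as degraded
        mdadm --grow --raid-devices=1 --force /dev/md2
        cat /proc/mdstat                 # should now report a clean single-disk array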
  11. This seems like the hard way to do it, but you'll need to adapt the procedure outlined here after a clone operation: https://xpenology.com/forum/topic/14091-esxi-unable-to-expand-syno-volume-even-after-disk-space-increased/
  12. What I am not seeing here is evidence of which disk is the eSATA connected device. Can you tell me which device it is? If you can't figure it out, see if you can create a new test storage pool with the eSATA device as a single disk, and then post another cat /proc/mdstat Also, what device are you plugging the eSATA cable into on your computer?
  13. If it thread blocks for microseconds, it will hamper 10Gbe throughput. I'm not sure it's the answer but even Syno shows throughput increases between models with no changes but increases in CPU horsepower.
  14. You have not provided enough information to answer yet. What is an eSATA multiplier? If you are referring to a SATA port multiplier, that is not supported by the DSM AHCI driver. Please provide information about all the drives in your system and how they are currently connected. Some useful information would be:

      1) Platform? DS3615xs, DS3617xs or DS918+
      2) Virtualized or baremetal?
      3) Storage Manager screenshot of all your disks, your slot map, and your current Storage Pools
      4) Output of ls /dev/sd* and cat /proc/mdstat
  15. I think you need to stick with what your motherboard will support. DSM shouldn't care though.
  16. So /dev/sdk, which is a 4TB Red drive, is quite a bit slower than its peers on reads. I'd test that a little bit more, and maybe review the SMART data for it. FWIW, WD Reds are among the slowest drives out there for throughput, but I would not expect net throughput as low as you are seeing. That said, I'm not a huge fan of SHR2/RAID6. /dev/sdb, which is the Toshiba N300 with 128MB cache and on the onboard SATA port, is slower than its peers on reads, but not significantly so. I don't think it's a problem. The high utilization you observed is expected when the 8TB drives are being pushed to their performance limits, and that is a good thing. Your CPU has only 2 cores, which is probably a limiting factor given the computational requirements of SHR2/RAID6. I think your system is working correctly, but everything is at a worst-case state from a performance standpoint, and the negative impact of all the items at their performance limits is cumulative. In summary, investigate whether /dev/sdk is working correctly and change your CPU to one with four cores (a Core i3-8100 or 8300 would work great and retain your transcoding capability). Also, is write cache enabled on the individual drives?
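     A hedged way to check both of those items from the DSM command line (device name taken from this thread; smartctl and hdparm are normally present on DSM):

       # Review SMART attributes and error logs for the slow drive
       smartctl -a /dev/sdk
       # Check whether the drive's write cache is enabled (look for "write-caching = 1 (on)")
       hdparm -W /dev/sdk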
  17. The point is that you have moved out of the part of the array where all the drives are utilized, so the throughput will be slower than when the array was empty. I agree you have CMR drives, that's good. The onboard SATA might be a factor, but also that drive is a different model with half the onboard cache of your other 8TB drives. So it is going to be the slowest, and therefore the most heavily utilized. Array performance limits are defined by the slowest drive in the array. Yes, it seems low, but you are 1) using SHR2 and 2) using dissimilar drives, so you have the worst possible configuration for performance. That doesn't mean the performance should be bad. The next thing is to check the drives themselves. Use hdparm to check the raw read rates for each drive:

      # hdparm -t /dev/sdX

      where sdX is sda, sdb, sdg, sdh, sdi, sdj, sdk, sdl in sequence. My advice is to fix or accept this performance issue before worrying about that one...
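      A minimal sketch to run that test across the whole set in one pass (device list taken from the post; run as root, ideally while the array is otherwise idle):

        # Sequential raw-read benchmark on each member drive, one at a time
        for d in sda sdb sdg sdh sdi sdj sdk sdl; do
            echo "== /dev/$d =="
            hdparm -t /dev/$d
        done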
  18. This is lackluster performance, I agree. Did you remove the cache before this test? Can you confirm that you have >28TB (4*(9-2)) in use on the data volume? If so, this is illustrative of the negative impact of SHR. Your 8TB drives are part of both /dev/md3 and /dev/md4. Once the 4TB drives fill up (meaning the first 28TB used in the volume), the performance benefit of 9 spindles drops to five. This is the price paid for the additional storage enabled via SHR. You're also using SHR2/RAID6, so there is also double the write overhead, compounded by the above. I'm not convinced there is anything wrong, but the next thing I would try is a synthetic test on each of the HDDs to see if one is underperforming for some reason. Have you confirmed that your WD Reds are not the SMR versions? You didn't post the actual models, so I can't look it up for you.
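     If you want to check the models yourself, a hedged one-liner from the DSM shell (compare the reported model numbers against WD's published CMR/SMR lists):

       # Print the model string of every drive so the WD Red variants can be identified
       for d in /dev/sd?; do
           echo -n "$d: "; smartctl -i $d | grep -i "device model"
       done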
  19. Also maybe post output of cat /proc/mdstat so we can see how your SHR is constructed.
  20. What exactly do you have on your "System" disk? DSM is installed to all drives, so maybe this is a volume for Docker and other packages etc? The System disk is a Basic volume? The NVMe cache is dedicated to the data volume? To objectively evaluate each array, first remove the NVMe cache. Then run this test on each volume (again, assuming one for SATA SSD and another for spinning disk data) and let us know what you get: https://xpenology.com/forum/topic/13368-benchmarking-your-synology/?tab=comments#comment-97997
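     If you want a quick first pass before following that thread, here is a hedged sequential test per volume (paths are examples; the write creates a ~10GB file, so delete it afterward, and if your dd build rejects the direct I/O flags, drop them and use a test file larger than RAM instead):

       # Sequential write to the spinning-disk volume, bypassing the page cache
       dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=10000 oflag=direct
       # Sequential read of the same file back
       dd if=/volume1/ddtest.bin of=/dev/null bs=1M iflag=direct
       rm /volume1/ddtest.bin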
  21. I'd figure out what the bottleneck is before doing anything else; this doesn't seem to tell us much. Clearly if you get 200-300MBps you are using 10Gbe for a network connection, otherwise you would be limited to about 110 MBps of throughput. Is the SSD in the NAS or on a client device? What are the disk makes and models in the array? With this additional information we can formulate a test strategy to try and isolate the bottleneck. Depending on the drives, the SHR configuration, and the type of data you are moving, this performance is not implausible.
  22. It's hardcoded. You can ignore it, as your actual CPU is still fully utilized regardless of what is displayed.
  23. I can't say, it's only a suggestion; I've never dealt directly with your hardware. I do not mean to be overly critical, but if you are so cavalier with your data as to ask this question, then you should be prepared to lose your data. You should ALWAYS have a backup option for your data. If you insist on trying to install a brand new environment for the first time with your data in the mix, then please get another drive and do some test installs first, so that you are 100% clear on the procedure and the resolution of any problems (which there will be), and so that your next post is not "install failed, can't see it on the network, can I save my data," which is posted here most every day. Soapbox off.

     To answer your question, a migration install is fully supported from the DS918+ platform to DS3615xs, and it can be quite painless - IF you know what you are doing.

     EDIT: you found the Genesys MBR version of the 1.02b loader - this won't work for 6.2.3. Here's the link to the 1.03b build: https://xpenology.com/forum/topic/20270-dsm-62-loader-with-mbr-partition-table/
  24. The compatibility list is useful; UPS's generally connect via USB, and this works on XPe just fine. If your UPS is so cheap that it has no connectivity options - well, then it won't work in the way you imagine. The underlying software is the open source Linux NUT package, and it can be modified to support most any USB UPS connectivity if it doesn't work out of the box. Can't speak about cheap UPS's (this is an oxymoron to my ears), but I am using a CyberPower CP1500PFCLCD - a PFC (power factor corrected), sine wave UPS.
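     For reference, a hedged way to confirm that DSM's bundled NUT client actually sees a USB-connected UPS (the "ups" name is DSM's default and is an assumption here):

       # Query UPS status through the built-in NUT server; a healthy link reports
       # fields such as battery.charge and ups.status
       upsc ups@localhost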