XPEnology Community

flyride

Moderator
  • Posts: 2,438
  • Joined
  • Last visited
  • Days Won: 127

Everything posted by flyride

  1. Do you really want to run a NAS with only 100Mbps Ethernet? 100 Mbps is 12.5 MB/s raw, so that would be a maximum of roughly 11 megabytes per second of real transfer rate after protocol overhead. However, the LAN adapter is not Intel, it's an Atheros AR8132M, and that is probably where your issue lies. An Intel CT PCIe card is US$20, though, and that would solve it. https://www.msi.com/Motherboard/g41m-p26/specification
  2. Oculink is just a physical interface, I don't think you will have any problems.
  3. @billat29's assessment is correct. There are things that can only be done with ESXi from a hardware standpoint. And if you have powerful enough hardware, it cannot be fully utilized by DSM due to processor thread limits. So if you want to fully leverage your very powerful system, virtualization is required to make the unused hardware available to other workloads.
  4. Very few enterprise storage solutions recommend more than 24 drives per array. Synology has a 72-drive unit now - the FS6400 - and it can support more than 24 disks per array if you choose the "flexibility" option instead of the "performance" storage pool option, but I don't know if there is a limit below 72 drives. However, the 108TB per-volume limit applies to our XPE-enabled platforms, so that may impose a practical limit on the number of drives that can be supported as mean capacity per drive increases (yes, we can deploy multiple volumes per storage pool, but that will create performance contention). https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/Why_does_my_Synology_NAS_have_a_single_volume_size_limitation
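     To make the 108TB cap concrete, here is a rough worked example; the 18TB drive size is just an assumption for the arithmetic, not a recommendation:

         RAID6 usable capacity ≈ (n - 2) × drive size
         n = 8,  18TB drives → 6 × 18TB = 108TB   (already at the per-volume cap)
         n = 10, 18TB drives → 8 × 18TB = 144TB   (exceeds the cap; a second volume would be needed)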
  5. One should calculate startup power for PSU sizing. So that 22W number SHOULD be the motherboard power plus the maximum (spin-up) power for each drive. Running power is going to be less.
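     A rough startup-power sketch; the per-component figures are assumptions for illustration, so check your motherboard and drive datasheets for real numbers:

         motherboard + CPU at boot:           ~20-25W
         4 × 3.5" HDD spin-up at ~25W each:   ~100W
         estimated startup draw:              ~120-125W  (steady-state running power will be well below this)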
  6. No, it supports two out of the box. If you modify it to support >2 then it will revert on a big update. The likelihood of us getting another big update with DSM 7 pending is very small.
  7. Well, you can't repair a system partition when you only have one drive; there is no disk redundancy in your system. If there were, DSM would allow you to do the repair from the Control Panel. For reference, /dev/md0 is DSM/Linux, /dev/md1 is Linux swap, and /dev/md2 is your volume. "E" is a custom mdadm disk status flag that is part of Synology's data integrity enhancements to Linux. While it appears that the filesystem on /dev/md0 is operational (I assume you can still boot DSM), the one array member is flagged as bad. I agree the cause may have been the power outage and uncommitted write state, so DSM flagged it as an error. There are two ways to fix this problem:

     • Recreate the array with mdadm. Here's a reference: https://www.dsebastien.net/2015/05/19/recovering-a-raid-array-in-e-state-on-a-synology-nas/ However, this requires that you are able to stop the array in order to recreate it, and that array is your booted OS. So you will need to take the drive and install it in another Linux system to do it. The array you need to rebuild is /dev/md0, and you will have to figure out which partition is the array member (it will probably be /dev/sdb1 if you attach the DSM disk to a single-disk Linux system). As long as you don't make a mistake, this has no impact on the data inside the array. A command-level sketch follows this post.

     • Reinstall DSM. You should be able to do this from Synology Assistant without building a new loader USB. Just download and install the same PAT file you are running now. This will reset any OS customizations (save off your configuration in Control Panel first, then restore afterward), but your user data should be unaffected.

     In the future, you should consider adding another drive for redundancy so that you don't encounter this again. It really should be a non-issue.
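     Here is a minimal command-level sketch of the mdadm route, assuming the DSM disk appears as /dev/sdb when attached to the rescue Linux system; device names, array names and member counts are assumptions, so verify everything with mdadm --examine and /proc/mdstat before running anything:

         # Inspect the member; note the metadata version, RAID level and device count it reports
         mdadm --examine /dev/sdb1
         cat /proc/mdstat                 # the array may have auto-assembled under another name, e.g. /dev/md127

         # Stop the array using whatever name /proc/mdstat shows
         mdadm --stop /dev/md127

         # Recreate it in place with the SAME geometry that --examine reported
         # (Synology system partitions are typically RAID1 with 0.90 metadata;
         #  --raid-devices=1 is shown only as a placeholder for a single present member)
         mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=1 --force /dev/sdb1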
  8. addendum: USB3-connected WD Red 4TB at 5400rpm on J4105-ITX
  9. My advice is not to get creative moving drives around in your live system, as you only have one copy of your data. While you could create volumes one at a time to offload data, it would again be your only copy, distributed across multiple volumes that all have to be compatible with your new configuration. It's possible, but it increases your risk. Any solution that requires you to expand your very large volume on the new build, or change from SHR1 to SHR2, will take at least as long as the network copy. So keeping things uncomplicated and safe may be the right move here.
  10. There is something very wrong. I am not trying to talk you out of your decision (or say that your results are inaccurate), but if DSM performed like that, nobody would use it. Here's the output from my J4105-ITX and a WD whitelabel 5400 RPM drive:

         root@archive:~# hdparm -t /dev/sdb
         /dev/sdb:
          Timing buffered disk reads: 540 MB in 3.01 seconds = 179.63 MB/sec
         root@archive:~# hdparm -t /dev/sdb
         /dev/sdb:
          Timing buffered disk reads: 542 MB in 3.02 seconds = 179.59 MB/sec
         root@archive:~# hdparm -t /dev/sdb
         /dev/sdb:
          Timing buffered disk reads: 536 MB in 3.00 seconds = 178.66 MB/sec

      and from my Skylake E3 system with SATA SSDs:

         root@nas:~# hdparm -t /dev/sde
         /dev/sde:
          Timing buffered disk reads: 1540 MB in 3.00 seconds = 512.91 MB/sec
         root@nas:~# hdparm -t /dev/sde
         /dev/sde:
          Timing buffered disk reads: 1540 MB in 3.00 seconds = 513.08 MB/sec
         root@nas:~# hdparm -t /dev/sde
         /dev/sde:
          Timing buffered disk reads: 1538 MB in 3.00 seconds = 512.31 MB/sec
  11. Do you have slots for all four drives? If so, I would create them as another storage pool/volume. Assuming your current volume is Volume1, you can do this and create it as Volume2 (or Volume3 if necessary):

      • Make it a JBOD single volume (or a RAID5 if your data fits within 36TB of storage)
      • Once all data is copied over, pull the 12TB drives out and set them aside
      • Reinstall and create a new SHR2 btrfs Volume1 with your old drives
      • Once everything is up and running, reinstall the 12TB drives and Volume2 will magically appear, ready for you to copy back

      This keeps all the storage transfer in-box and doesn't use the network. The critical item for you is to make sure your volume numbering does not collide from old to new. In other words, the 12TB volume number must be larger than the number of volumes in the new configuration (a quick way to check is sketched after this post).
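     Before the reinstall, a quick way to confirm the existing volume and array numbering (assumes SSH access to DSM; output will vary by system):

         ls -d /volume*          # volumes DSM has mounted
         cat /proc/mdstat        # md arrays behind them
         df -h | grep volume     # which device maps to which volume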
  12. Your CPU and RAM are fine. btrfs will use more RAM than ext4 with your data size. At some point I'd consider adding memory given your large storage capacity, but it isn't urgent. But I'm not quite sure I understand what you are describing with the 12TB drives. Are these bare drives you are going to add to DSM? Or are they externals?
  13. On very slow hardware with limited memory, yes. You always have to trade features for performance until your hardware makes the tradeoff cost irrelevant. My 4-core system can run my RAIDF1 array with btrfs at speeds above 1 gigabyte per second, exceeding the capacity of the 10GbE interface. Should I care that ext4 is faster?
  14. You could be correct, but my point is they are running the same software (at least for core disk access). If you are unwilling to post your disk drive types, I question whether you really are seeing 100MBps from a single-drive setup on OMV; that's beyond the capabilities of the drive. Therefore, the tests may not have been comparable in some way. But if you have your mind made up that OMV is better, by all means go on back.
  15. If you are using JBOD then you are limited to the speed of the individual drives. Few desktop-class drives can produce sustained transfer rates above 100MBps. Large files (lots of sequential r/w) will be faster than lots of small ones. 60MBps sustained is pretty typical for a WD Red-class 5400rpm drive. If you want more feedback, post the actual drive model numbers. Part of the reason for RAID, and for SHR's ability to incorporate dissimilar drives into RAID, is to leverage drives in parallel to improve net transfer rate (see the example below). If you want speed, why are you configuring JBOD pools?
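     A rough illustration of the parallelism point; the throughput figures are assumptions, not measurements:

         1 drive (JBOD):          ~60 MB/s sustained
         4-drive SHR/RAID5 read:  ~3 data drives in parallel ≈ 180 MB/s
                                  (more than enough to saturate 1GbE at ~110 MB/s)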
  16. cache ssd NVMe

    You need to convert to the DS918+ DSM image with Jun's loader 1.04b, then apply the patch from the thread here: https://xpenology.com/forum/topic/13342-nvme-cache-support/?do=findComment&comment=141659
  17. These two threads are good bellwethers as to what people are successfully using: https://xpenology.com/forum/topic/12867-user-reported-compatibility-thread-for-dsm-62/ https://xpenology.com/forum/topic/29401-dsm-623-25426/ Also check post signatures; lots of people list their in-production hardware there.
  18. Yes, you can create two volumes and they can be different filesystems. But why? Snapshots, CRC integrity and bitrot healing are all features useful for all your data.
  19. Underneath the covers, DSM is mdadm, lvm and either ext4 or btrfs. OMV is mdadm and ext4 or zfs (yes, they have btrfs, but it's not suitable for RAID there so few pick it). The possible differences are your RAID layout and whether your CPU is allowed to enter burst mode. There are threads here on improving the CPU performance of the J-series processors. Post more information about your disk type, array configuration, etc. Have you verified full-duplex 1Gbps connectivity? (A quick check is sketched below.) For reference, I am using a J4105 with 4GB RAM running DSM 6.2.3 and it can easily handle full GbE (100+MBps) to and from a 4-disk RAID5. I can't speak to external drives, but your transfer rate there looks like a USB2 rate, not USB3.
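     Two quick checks from an SSH session; the interface name eth0 and the cpufreq paths are assumptions and depend on your hardware and driver:

         # Confirm the link negotiated 1000Mb/s full duplex
         ethtool eth0 | grep -E 'Speed|Duplex'

         # See whether the CPU is actually clocking up under load (requires the cpufreq driver)
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
         cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor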
  20. If your VM has unused storage controllers of any type present and active, they will use slots in DSM and skew your drive numbering. I would expect you to have only one virtual SATA controller for the loader, plus the passthrough LSI. If you still can't make sense of the numbering, and/or if your sophisticated controller isn't fully supported by the DSM Linux driver, you can let ESXi manage the controller and present the drives to the DSM VM using RDM (Raw Device Mapping). Once an RDM pointer is created for a given drive, it can be attached to a second virtual SATA controller in the VM and it will use the regular SATA driver. A minimal example follows this post.
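     A minimal sketch of creating a physical-mode RDM pointer from the ESXi shell; the device identifier and datastore path are placeholders, so list your disks with ls /vmfs/devices/disks/ first:

         # Create an RDM pointer file for one physical disk
         vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/DSM/disk1-rdm.vmdk

         # Then attach disk1-rdm.vmdk to the DSM VM as an existing disk on a second virtual SATA controller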
  21. I have 56GB assigned to my DS3617xs XPE VM (the maximum available in ESXi with 64GB installed) and it works fine. My Control Panel Info Center tells me that it has 56GB available. I don't think the RAM measurement is static, just the CPU information. htop shows what is actually available to the Linux kernel: 55.1GB. I don't know if it will use more than 64GB, however. Someone may have tried this.
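     Besides htop, a couple of ways to confirm what the kernel actually sees (assumes SSH access; free may be a busybox build on DSM, so the /proc/meminfo line is the safe fallback):

         free -m
         grep MemTotal /proc/meminfo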
  22. You haven't posted which RAID cards are in your system; that would help with providing advice. In any case, DSM runs better when you don't use hardware to provide RAID services. Many RAID cards can be configured in "IT" mode so that all drives can be addressed individually by the OS - in other words, the card is just a multiport SATA/SAS controller at that point, and that's fine. On some Dell controllers, the only way to do this is to configure each individual drive as a RAID 0. Don't configure an array and then present that array to DSM as a single disk.

      The best XPEnology configuration for your hardware is loader 1.03b, platform DS3617xs and DSM version 6.2.3. An alternative would be loader 1.02b, DS3615xs and DSM version 6.1.7.

      The bad news for you is that you cannot use all your CPU resources. DSM supports a maximum of 8 threads for DS3615xs and 16 threads for DS3617xs. You have 24 threads between your two 6-core processors, so much of your compute will be idle. The best baremetal option with that hardware is to run DS3617xs and turn off HyperThreading, resulting in 12 active CPU cores (a quick check of what DSM sees is sketched below). With ESXi, you can alternatively give the DSM VM 16 threads and reserve the rest for other VM workloads if desired. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
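     A quick way to check how many threads DSM actually sees after installation (assumes SSH access; the grep works on any Linux if nproc is not present):

         nproc
         grep -c ^processor /proc/cpuinfo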