XPEnology Community

flyride

Moderator · 2,438 posts · 127 days won

Everything posted by flyride

  1. Another one bites the dust. At some point folks should heed the warnings about this time bomb waiting to happen. What type of SSD drive were you using for cache? Were you monitoring for SSD health? How much SSD lifespan do you believe was remaining?
  2. Start here: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=107979 ignoring the lvm commands (vgchange -ay) and substituting your /dev/md2 for the lv device. I'd begin with the sudo mount commands (sketched below) and follow the work that thread's OP did.
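     A minimal sketch of what those mount attempts might look like, assuming a btrfs volume on /dev/md2 and /volume1 as its mount point (both are examples - substitute your own device and path):
     sudo mount /dev/md2 /volume1                     # try a normal mount first
     sudo mount -v -o ro /dev/md2 /volume1            # then read-only
     sudo mount -v -o recovery,ro /dev/md2 /volume1   # btrfs recovery mount, read-only
     If any of these succeed, copy your data off before attempting any repair.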
  3. You have a simple RAID5, so the logical volume manager (lvm) is probably not being used and you won't have any volume groups. You need to figure out what device your array is. Try a "df" and see if you can match /dev/md... to your volume. If that is inconclusive because the volume isn't mounting, try "cat /etc/fstab". See this thread for some options: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/#comment-108013
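     A rough sketch of that identification step (device names are examples only):
     df -h                          # a mounted volume appears as /dev/mdX on /volumeN
     cat /etc/fstab                 # if nothing is mounted, this shows which device each volume expects
     cat /proc/mdstat               # lists every md array and its member partitions
     sudo mdadm --detail /dev/md2   # level, state and members of the array you identified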
  4. https://xpenology.com/forum/topic/13862-dsm-617-15284-update-3/
  5. Since you are familiar with LAG and other network interface aggregation tech, you'll agree that it won't help a single client (i.e. a gaming machine) go any faster than a single port. To put this into perspective: a single SATA SSD (or 2 in RAID 1) will easily read faster than a 1GbE interface, 2 SATA SSDs in RAID 0 will nearly fill a 10GbE interface, and 4 SATA SSDs in RAID 5 will certainly saturate a 10GbE interface.
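     The rough numbers behind that comparison (approximate sequential-read figures):
     1GbE ≈ 125 MB/s theoretical, ~110 MB/s in practice
     one SATA SSD ≈ 500-550 MB/s
     10GbE ≈ 1,250 MB/s theoretical
     2x SATA SSD in RAID 0 ≈ 1,000-1,100 MB/s, which is close to 10GbE
     4x SATA SSD in RAID 5 ≈ 1,500 MB/s or more on reads, which is more than 10GbE can carry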
  6. https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=82391
  7. Yes, there should be 24 threads but DSM cannot support that many. To use all your cores, you must disable SMT (Simultaneous Multi-Threading, or Hyperthreading) in your motherboard BIOS.
  8. Again, DSM will only use 16 THREADS, not cores. You have 12 cores and 12 SMT (Hyper-Threading) threads, so DSM is actually only using 8 physical cores and their 8 SMT threads. You will get better performance if you disable SMT; DSM will then report 12 actual cores.
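     A quick way to check what DSM is actually using (a sketch; run over SSH):
     grep -c ^processor /proc/cpuinfo                                                # logical CPUs (threads) DSM sees, capped at 16
     grep -E '^(physical id|core id)' /proc/cpuinfo | paste - - | sort -u | wc -l    # distinct physical cores actually in use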
  9. Well, that makes sense why you want everything you can get out of that 40Gbps card, as the theoretical drive throughput is 2.5x your network bandwidth. So maybe it's not quite so critical that you get the iSCSI hardware support working natively, as that won't be the limiting factor. But good luck however it turns out. You may know this already, but:
     the DS361x images have a native maximum of 12 drives
     the DS918 image has a native maximum of 16 drives
     These can be modified, but every time that you update DSM the maximum will revert and your array will be compromised. It SHOULD come right back once you fix the MaxDisks setting (see the sketch below).
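     For reference, a hedged sketch of where that limit lives (the values shown are examples and, as noted above, get overwritten by DSM updates):
     grep maxdisks /etc.defaults/synoinfo.conf /etc/synoinfo.conf   # current limit, e.g. maxdisks="12"
     # raise it by editing maxdisks in BOTH files (e.g. maxdisks="16") and rebooting;
     # the internalportcfg bitmask may also need widening so the extra ports count as internal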
  10. How many drives will you have in your RAID F1?
  11. My system is very close in design to yours (see my signature). If you virtualize your network and storage, you may be correct. However, ESXi allows you to be selective as to what it manages and what it does not.
     I am using 2x enterprise NVMe drives that are presented to DSM via physical RDM, which is a simple command/protocol translation; the disks are not otherwise managed by ESXi. This allows me to use them as SATA or SCSI within DSM (they would be totally inaccessible otherwise). If you have a difficult-to-support storage controller, the same tactic may apply. From a performance standpoint, if there is overhead it is negligible, as I routinely see 1400MBps (that's megaBYTES) of throughput, which is very close to the stated limits of the drive.
     If the hardware is directly supported by DSM, ESXi can pass the device through and not touch it at all. I do this with my dual Mellanox 10Gbps card and can easily max out the interfaces simultaneously. In the case of SATA, I pass that through as well, so there is no possible loss of performance on that controller and its attached drives.
     The point is that ESXi can help resolve a problematic device in a very elegant way, and can still provide direct access to hardware that works well with DSM.
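     A minimal sketch of creating such a physical RDM from the ESXi shell; the device identifier, datastore and file names are placeholders:
     ls /vmfs/devices/disks/                                    # find the NVMe device's identifier
     vmkfstools -z /vmfs/devices/disks/t10.NVMe____EXAMPLE \
        /vmfs/volumes/datastore1/DSM/nvme-rdm.vmdk              # -z creates a physical (pass-through) RDM pointer
     # then attach nvme-rdm.vmdk to the DSM VM as an existing disk on a SATA or SCSI controller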
  12. ESXi assigns a random MAC the first time a VM is booted. If you are using a prebuilt VM, it probably doesn't do the MAC assignment unless you delete the virtual Ethernet card, save the VM, and then add it back in.
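     For reference, the MAC lives in entries like these in the VM's .vmx file (values are placeholders); deleting and re-adding the virtual NIC is what forces ESXi to regenerate them:
     ethernet0.addressType = "generated"
     ethernet0.generatedAddress = "00:0c:29:xx:xx:xx"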
  13. DSM representation is cosmetic and is hard coded to the DSM image you're using. Run "cat /proc/cpuinfo" if you want to see what is actually recognized in the system. There is a limit of 16 threads. You will need to disable SMT if you want to use all the cores (you are using two hexacore CPUs). https://xpenology.com/forum/topic/15022-maximum-number-of-cores/?do=findComment&comment=115359 Just a general comment on this thread (which I am following with interest): this task would be a lot easier if you ran the system as a large VM within ESXi.
  14. FYI there is almost no overhead with RDMs and you get access to the entire disk, so it should be portable.
  15. Under "Features and Services" within the TOS:
     "2. QuickConnect and Dynamic Domain Name Service (DDNS)
     Users who wish to use this service must register their Synology device to a Synology Account."
     When using XPenology, you are not using a Synology device. Therefore you aren't able to register that device to a Synology account. If you do, you are violating the TOS. This is tantamount to stealing proprietary cloud services, and is discouraged here and by the cited FAQ.
  16. https://xpenology.com/forum/topic/23473-new-to-xpenology/?do=findComment&comment=127956 Please note the post discussing the FAQ
  17. Nobody knows. The current 1.03b and 1.04b loaders seem to work with DSM 6.2.x but any new DSM patch can (and does with surprising regularity) fail to work with them. The community has found workarounds in most cases. That's the reason for this thread here: https://xpenology.com/forum/forum/78-dsm-updates-reporting/
     Look for folks with similar hardware, virtualization, loader and DSM versions being successful before attempting any DSM update. And seeing as you are planning to use ESXi, there really is no excuse not to have a test XPenology DSM instance to see if the upgrade fails or succeeds before committing the update to your production VM.
     When Synology releases DSM 7.0, it's a virtual certainty that the current loaders will not work. Someone will have to develop a new DSM 7.0 loader hack, and there is really no information about how long it might take or how difficult it may be.
  18. It's not possible for you to agree to Synology's Terms of Service. By using XPenology to connect to Synology services, you are directly violating them. Please note this from the FAQ: https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=82390
  19. Clicking the upgrade button would be unwise. You will need to burn a new boot loader, and you will need to evaluate your hardware to see what combination of loader and code to use. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Please have a backup/backout plan as it's not always straightforward depending on your system.
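     Burning the loader is just writing the image to a USB stick. A sketch, assuming a Linux machine and that the stick shows up as /dev/sdX (a placeholder - double-check the device name, dd overwrites whatever it is pointed at):
     sudo dd if=synoboot.img of=/dev/sdX bs=4M conv=fsync
     # on Windows, tools such as Rufus or Win32DiskImager perform the same step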
  20. Yes, that's your boot loader and is a required runtime filesystem for DSM. It's not an installation key.
  21. I'm not an HP expert, but I can answer the distilled-down question above. The CPU you have is an Ivy Bridge architecture, which is too old to run the DS918 version of DSM; that build is compiled to use new instructions only present in Haswell or later. So those running Ivy Bridge architecture have no choice but to run DS3615xs. Hardware transcoding requires Intel Quick Sync drivers that are only implemented on DS918 DSM. This post may help you understand the limitations further.
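     A quick, hedged check before picking an image: the DS918 build is commonly reported to need the Haswell-era MOVBE instruction, so look for it in the CPU flags from any Linux shell on that hardware:
     grep -m1 -o movbe /proc/cpuinfo && echo "Haswell-or-later instructions present" || echo "not present - stick with DS3615xs"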
  22. MBR and Legacy are two different things. If you can support a GPT partition, definitely do so.
     Loader 1.02b (for 6.1.x) can work in either Legacy or UEFI mode
     Loader 1.03b (for 6.2.x) works in Legacy mode
     Loader 1.04b (for 6.2.x) works only in UEFI mode
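     If you're unsure which mode an existing Linux install booted in, a simple check (a sketch that works on most Linux systems):
     [ -d /sys/firmware/efi ] && echo "booted UEFI" || echo "booted Legacy/BIOS"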
  23. Nothing that a VPN won't solve. If you think your "patched up" Synology box can't be hacked, you need to meet some white hat security folks.
  24. Sorry about the event, and sorry to bring you bad news. As you know, RAID 5 spans parity across the array such that all members less one must be present for data integrity. Your data may have been recoverable at one time, but once the repair operation was initiated with only 2 valid drives, the data on all four drives was irreparably lost. I've highlighted the critical items above.
  25. If the array was healthy when shut down, it should work even with the drives in a different order. But that is a last resort. I'd use some blank drives and figure out the order of the ports before installing.
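     If you do end up needing to confirm the original order, the md metadata records each member's slot; a sketch with example device names:
     sudo mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 | grep -E '/dev/sd|Device Role'   # "Active device N" is that disk's slot in the array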