flyride

Everything posted by flyride

  1. For DSM 6.1.x, download loader 1.02b. The one you are using has been deprecated. See this. Have a care with DSM versions and upgrades once you have it installed. Many folks break their systems by allowing DSM to update itself to a version that is not supported by the loader.
  2. You should be able to do a recovery installation and your volumes will stay intact. Packages, DSM settings, etc. will be lost. This would not be something to do without a good backup.
  3. You must use the 1.03b or 1.04b loaders to run 6.2. If your hardware doesn't work with those loaders, you are out of luck to upgrade.
  4. This is one of the reasons I am still using 6.1.7 on my system with 10GbE.
  5. Any memory that isn't being used by apps is used to cache writes.
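     To see this in action on the Linux layer underneath DSM (a quick sketch; exact output formatting varies by version), SSH in and look at the memory counters:

         # the buff/cache figure is RAM currently being used for caching
         free -m
         # Dirty and Writeback are write data queued in RAM, waiting to be flushed to disk
         grep -E 'Dirty|Writeback' /proc/meminfo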
  6. Again, there are plenty of examples of volumes being corrupted with SSD cache in RAID1! Check reddit, the Synology forum, etc. You have an M.2-to-PCIe adapter and a PCIe SATA controller. The bandwidth commentary is completely relevant to your configuration. Consider testing performance on a SATA SSD connected to both the chipset and Marvell controllers and prove to yourself that it is different, rather than relying on "known" information; a quick test sketch follows below. That will save you a lot of effort if the performance is the same! The example just shows that the poster planned to configure SSD cache, not th…
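     A minimal test sketch, assuming the SSD appears as /dev/sdb and hosts a volume at /volume2 (substitute your own device and path, and repeat with the drive cabled to each controller):

         # sequential read throughput straight off the drive (hdparm may need to be installed on DSM)
         hdparm -t /dev/sdb
         # sequential write throughput, bypassing the RAM cache so the controller path is actually measured
         # (requires a dd build that supports oflag=direct)
         dd if=/dev/zero of=/volume2/ddtest.bin bs=1M count=4096 oflag=direct
         rm /volume2/ddtest.bin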
  7. If your PC is sustaining 200W you must be overclocking with a configuration that does not allow the processor to idle. And +300W during games? Maybe you have a dual graphics card setup, probably also overclocked. There is no benefit to DSM from either a high performance video card or overclocking. So the example really is not comparable.
  8. You are asking for opinion, not what is technically possible. I do not place much value on the DSM SSD write cache implementation. There are many examples of data loss due to array corruption when using SSD write cache. Plus, most feasible DSM NAS write workloads can get a similar benefit from increasing RAM, with no corruption risk. You assert that the chipset controller is faster than a (M.2) PCIe-connected controller, and that is the reason for all this effort. Based on what? Have you actually benchmarked this? A SATA III SSD is limited to 550MB/s by its interface specification (the interface math is below), no matter…
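     For reference, the interface math behind that 550MB/s ceiling: SATA III signals at 6Gbit/s, and 8b/10b encoding leaves 6 x 8/10 = 4.8Gbit/s = 600MB/s of payload bandwidth; protocol overhead then brings the practical limit down to roughly 550MB/s, regardless of which controller the SSD hangs off.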
  9. Your processor is Yorkfield, which was replaced by Nehalem. The hardware the 1.02b loaders are based on (3615, 3617, 916) has post-Nehalem processors. Someone may pipe up and prove me wrong, but I'm not aware of anyone running 6.1.x on anything earlier than a Nehalem-family chip. See this: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
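     A quick sketch for checking what generation of silicon you actually have, from any Linux live environment (or from DSM itself if it is already running); as I understand it, sse4_2 first appears with Nehalem, and movbe (Haswell and later) is what the 1.04b/DS918+ combination wants:

         # identify the CPU model
         grep -m1 'model name' /proc/cpuinfo
         # look for the instruction set extensions the loaders care about
         grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse4_2|movbe)$'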
  10. You still have not answered the question: which loader (exactly), and which DSM (exactly)? Your CPU won't work with Jun's 1.04b and DS918 6.2.x. Your NIC probably won't be supported by 1.03b and 6.2.x, so you are left with 1.02b DS3615 6.1.x for your best chance of success.
  11. Generally the answer to both of your questions is yes, but it is hard to say without understanding much more about your system. Assuming your HDDs are in some sort of redundant RAID config, you can discover the answer on your own. Why not just shut down, move one drive over to your other controller, boot up and see (checking array health as sketched below)? If it works, great; shut down again and move a second drive. If it does not, move the first drive back and recover your array. Moving the drives so that they do not match the controller drive sequence can complicate a recovery from a catastrophic failure…
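     Before and after each drive move it is worth confirming the array is healthy; a quick sketch (DSM builds its arrays with mdadm, md0/md1 are the system and swap arrays, so the data array is usually md2 or higher):

         # overall view of every md array and the state of its members
         cat /proc/mdstat
         # detailed health of the data array (adjust md2 to match your system)
         mdadm --detail /dev/md2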
  12. You can always SSH in and shut down cleanly from the command line:
        sudo -i
        sync; shutdown -P now
  13. Which loader and DSM are you attempting to use?
  14. This is ambitious. It's cool that you are following my install, but please understand that it's an esoteric and complex configuration with some inherent risk. XPenology has always been a bit fiddly. Be prepared to do a lot of testing and experimentation, and always have backups of your data. Honestly, I would have jumped to FreeNAS if it were necessary to get NVMe working. What does testing and experimentation mean in this context? BIOS vs UEFI. BIOS spinup modes on your drives. BIOS virtualization configuration options. Check ESXi compatibility and stability thoroughly. Upgrade ESXi…
  15. You want to look into "physical RDM," which can only be created at the command line, not in the vSphere UI. One pRDM definition per RAID 0 device (a sketch follows below). Once the pRDM definition is built, you can add it to a VM like any other device.
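     A minimal sketch of building one from the ESXi shell (the device identifier and datastore path here are placeholders; yours will differ):

         # find the device identifier for the RAID 0 LUN you want to map
         ls /vmfs/devices/disks/
         # create a physical-mode RDM pointer vmdk on an existing datastore
         vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/xpenology/rdm-disk1.vmdk

     Once the pointer .vmdk exists, attach it to the VM as an existing hard disk in the usual way.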
  16. There are currently two options to make NVMe work as a RAIDable disk; both require virtualization. You can set up a datastore on the NVMe drive and create a virtual disk (sketched below), or, with ESXi, create a physical RDM definition for the NVMe drive.
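     For the datastore/virtual-disk route, the equivalent sketch (datastore name and size are examples only) is to carve a vmdk out of a datastore created on the NVMe drive and attach it to the XPenology VM:

         # create a thin-provisioned 400GB virtual disk on the NVMe-backed datastore
         vmkfstools -c 400G -d thin /vmfs/volumes/nvme-datastore/xpenology/data-disk1.vmdk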
  17. It looks like the H710 approach for a VSAN implementation is to set up each participating drive as a single-drive RAID 0. The same config works for XPenology. So you should be able to do your RAID 1 for scratch and RAID 0 for everything else, then RDM those devices into the XPenology VM. See this link: https://community.spiceworks.com/topic/748479-best-raid-controller-for-vmware-vsan-from-dell?page=1#entry-4213269 The more "native" approach is to obtain or convert the controller into a JBOD device. Your controller is actually an LSI 2208 card, so you might be able to flash it to stop being a RAID card and…
  18. Technically there isn't such a thing as "hardware RAID," just a primitive CPU on a controller that doesn't have much else to do. Some time in the past, that card was faster than the CPUs then available; that just isn't true any more. And your "hardware RAID" is the one in the motherboard BIOS, right? That's just software, my friend, and not nearly as sophisticated as MDRAID in Linux. The very fastest enterprise flash SANs in my own data centers are NVMe connected to the bus, driven by Xeons... totally software-based. I'm not sure why you don't want to passthrough your S…
  19. Honestly, stop running hardware RAID and let DSM do what it's supposed to.
  20. VROC is another form of hardware RAID, which displaces much of the functionality that DSM provides around storage redundancy management. Also, I think VROC is really only a very high-end Xeon product implementation... and not generally available on the hardware on which XPEnology is likely to be run. Lastly, using NVMe is currently only possible within very specific parameters using ESXi, and I'm not sure how a VROC implementation would play into that.
  21. I don't have an 8100T, but I would guess it idles somewhere between 3 and 6W. What difference does 2-3W make for you? As far as how to estimate total power consumption, I like to use https://pcpartpicker.com which lets you build systems out of spec'd parts and will tell you your max power budget. Idle consumption is usually a lot less. CPUs, memory, SSDs and drives all use quite a bit less power at idle than their max spec.
  22. An 8700 only uses about 6W at idle. On any modern processor, you only burn power when you are doing work. Any Skylake or later processor can be fixed to any TDP limit you like in the BIOS.
  23. Typically ESXi boots from a USB stick or a DOM, then runs from RAM. It needs a place for temporary ("scratch") files, and also for the VM definitions and support/patch files, so you will need some sort of storage for this (the scratch location itself can be set from the command line; sketch below). You can virtualize your NAS storage or provide it physically (passthrough controller or RDM). My configuration described above has all the storage intended for the NAS configured via passthrough or RDM. None of that is available to ESXi for scratch, so another drive is needed. I use the NVMe slot and a 128GB drive for this, and all it has on it is the VM definitions.
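     If you ever need to point scratch at a specific datastore (for example that small NVMe one), it can be set from the ESXi shell; a sketch, with the datastore path as a placeholder, and a host reboot is required for the change to take effect:

         esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s "/vmfs/volumes/nvme-datastore/.locker"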
  24. More specifically, 1.03b and DS3615/17 only support an Intel-type NIC on 6.2.1+ (e1000e on ESXi is an emulated Intel NIC). On earlier versions of DSM, or on other loaders, other NICs may be supported depending on the combination of available drivers and hardware; e.g. the Intel limitation above does not apply. A quick way to check what your install actually sees is sketched below.
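     A quick sketch for seeing which NIC the DSM kernel detected and which driver claimed it (run over SSH as root; the driver names are just common examples):

         # list PCI network devices
         lspci | grep -i ethernet
         # check whether the expected driver module is loaded (e1000e, igb, r8168, etc.)
         lsmod | grep -E 'e1000e|igb|r8168'
         # kernel messages from NIC initialization
         dmesg | grep -iE 'eth0|e1000e'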