flyride


  1. flyride

    Shutdown does work

    You can always SSH in, become root with sudo -i, and then run sync; shutdown -P now.
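    A minimal sketch of that sequence (the hostname and user are placeholders; any account in the administrators group should work):

      ssh admin@diskstation     # log in remotely
      sudo -i                   # become root
      sync                      # flush pending writes
      shutdown -P now           # power off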
  2. flyride

    USB Loader won't boot any version

    Which loader and DSM are you attempting to use?
  3. This is ambitious. It's cool that you are following my install, but please understand that it's an esoteric and complex configuration with some inherent risk. XPenology has always been a bit fiddly. Be prepared to do a lot of testing and experimentation, and always have backups of your data. Honestly, I would have jumped to FreeNAS if that were what it took to get NVMe working.

    What does testing and experimentation mean in this context? BIOS vs. UEFI. BIOS spin-up modes on your drives. BIOS virtualization configuration options. Check ESXi compatibility and stability thoroughly. Upgrade ESXi to the latest major patch level. Try various permutations of VM configurations and do a full XPenology install on each, so you know exactly what works and what doesn't. Benchmark array performance on each. Deliberately break your arrays. Reboot them before rebuilding. Fully test hot-plug in/out if your hardware supports it. Upgrade DSM. Just because some users have success upgrading a simple setup does not mean there won't be problems with a fully custom configuration. A simple test VM is not a fully adequate simulator, because once you pass through disks, you adopt the risks of a baremetal install.

    I apologize for the lecture. I just don't want you committing to hundreds or thousands of dollars of hardware without understanding what you could be getting into.

    On the equipment manifest: the motherboard should work based on the spec sheet. The case and CPU cooling combo are fine. You might want to review this FreeNAS thread. I'm not sure how the ASUS Hyper M.2 x16 card works - it must have a PCIe switch on board for it to support 4 drives? It must be supported by ESXi natively. If it is able to see all the drives using the standard ESXi NVMe driver, it should be fine.

    Performance-wise, there is no practical reason for an NVMe SSD RAID10. NVMe SSDs will read crazy fast in RAID1 (>1 gigabyte per second), but they will probably throttle on sustained writes without active cooling. You might want RAID5/6/10 for capacity, or use some lower-capacity/less expensive sticks, which will also reduce (delay) the cooling issue. This is really silly talk, though! To be clear, 1 GBps (capital B = bytes) of disk throughput is 8x the performance of 1 Gbps (small b = bits) Ethernet. If you don't have a 10GbE or 40GbE network setup, the NVMe performance is wasted. RAID10 performance (and capacity, obviously) on HDDs scales linearly with the number of drives: 4x HDD in RAID10 is roughly 2x the speed of 2x HDD in RAID1.

    Full disclosure item: I have not set up the required NVMe pRDM passthrough configuration using 1.04b/6.2.1 yet. I'm intentionally still running 1.02b/6.1.7 and intend to do some careful testing before committing to a newer DSM. I can't think of a reason it won't continue to work technically. One area needing attention is how to present the pRDM devices to DSM. I've documented how SAS emulation can be used on 6.1.7 to match up native device SMART functionality with DSM's internal smartctl query arguments. This no longer works with 6.2.1; it seems that SATA emulation is the only reliable option.
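    For reference, presenting a pRDM pointer to the guest over the virtual SATA controller comes down to a few .vmx entries along these lines (a sketch only - the file name is hypothetical, and it assumes the pointer vmdk has already been created with vmkfstools as in the next post; in the usual ESXi loader setup, sata0:0 is already occupied by the loader image):

      sata0.present = "TRUE"
      sata0:1.present = "TRUE"
      sata0:1.fileName = "nvme0-prdm.vmdk"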
  4. You want to look into "physical RDM", which can only be done at the command line, not in vSphere. One pRDM definition per RAID0. Once the pRDM definition is built, you can add it to a VM like any other device.
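    A rough sketch of the command-line steps on the ESXi host (the device identifier and datastore path below are placeholders - check yours with the first command):

      # list the raw devices the host can see and note the identifier of the RAID0 volume
      ls -l /vmfs/devices/disks/

      # create a physical-compatibility RDM pointer vmdk for that device
      vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/xpenology/raid0-a-prdm.vmdk

    The resulting vmdk is just a pointer file; attach it to the XPenology VM as an existing disk.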
  5. flyride

    Can ESXi/DSM handle VROC?

    There are currently two options to make NVMe work as a RAIDable disk; both require virtualization. You can set up a datastore and create a virtual disk. Or, with ESXi, create a physical RDM definition for the NVMe drive.
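    For the first option, once the NVMe drive is formatted as a VMFS datastore (easiest from the host client), the virtual disk can be created in the UI or from the ESXi shell with something like the following (size and paths are placeholders):

      # create a thin-provisioned virtual disk on the NVMe-backed datastore
      vmkfstools -c 200G -d thin /vmfs/volumes/nvme-datastore/xpenology/data0.vmdk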
  6. It looks like the H710 approach for a VSAN implementation is to set up each drive that will participate as its own RAID 0. Same config for XPenology. So you should be able to do your RAID 1 for scratch and RAID 0 for everything else, then RDM those devices into the XPenology VM. See this link: https://community.spiceworks.com/topic/748479-best-raid-controller-for-vmware-vsan-from-dell?page=1#entry-4213269

    The more "native" approach is to obtain or convert the controller into a JBOD device. Your controller is actually an LSI 2208 card, so you might be able to flash it to stop being a RAID card and just make it a high-performance SAS/SATA controller. See these links as a starting point:

    https://forums.servethehome.com/index.php?threads/lsi-raid-controller-hba-equivalency-mapping.19/
    https://forums.servethehome.com/index.php?threads%2Fis-there-a-way-to-restore-an-lsi-2208-after-firmware-update-failure.13237%2F
    https://www.vladan.fr/flash-dell-perc-h310-with-it-firmware/ (this is for an H310 but has lots of relevant information)

    HOWEVER, some have reported problems with your specific hardware (ref PCI slots, controller firmware availability) here: https://forums.unraid.net/topic/51057-flash-dell-perc-710-to-it-mode/

    Once you flash an "IT" BIOS, you won't be able to do the RAID 1 for the datastore. You have a few alternative options (VSAN, manually copying between two datastore drives, etc.), but I don't think you can do both RAID 1 and non-RAID drive support on the same LSI controller.
  7. Technically there isn't such a thing as "hardware RAID", just a primitive CPU on a controller that doesn't have much else to do. Some time in the past, that card was faster than the CPUs that were then available; that just isn't true any more. And your "hardware RAID" is the one in the motherboard BIOS, right? That's just software, my friend, and not nearly as sophisticated as MDRAID in Linux. The very fastest enterprise flash SANs in my own data centers are NVMe connected straight to the bus with Xeons doing the work... totally software based. I'm not sure why you don't want to pass through your SATA controller, but if you must, you can try to RDM the specific drives you want XPenology to see while leaving the controller to ESXi.
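    If you're curious, DSM's arrays are ordinary Linux md devices and you can watch them from an SSH session; a quick look (the md2 device name is a guess - DSM usually reserves md0/md1 for the system and swap partitions):

      cat /proc/mdstat            # summary of all md arrays and their sync status
      mdadm --detail /dev/md2     # detail for the first data volume (device name may differ)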
  8. Honestly, stop running hardware RAID and let DSM do what it's supposed to.
  9. flyride

    Can ESXi/DSM handle VROC?

    VROC is another form of hardware RAID, which displaces much of the functionality that DSM provides around storage redundancy management. Also, I think VROC is really only a very high-end Xeon product implementation... and not generally available on the hardware on which XPEnology is likely to be run. Lastly, using NVMe is currently only possible within very specific parameters using ESXi, and I'm not sure how a VROC implementation would play into that.
  10. flyride

    My build for XPEnology. Looking for advice.

    I don't have an 8100T, but I would guess it idles somewhere between 3 and 6W. What difference does 2-3W make for you? As far as how to estimate total power consumption, I like to use https://pcpartpicker.com, which lets you build systems out of spec parts and will tell you your max power budget. Idle consumption is usually a lot less; CPUs, memory, SSDs and drives all use quite a bit less power at idle than their max spec.
  11. flyride

    My build for XPEnology. Looking for advice.

    8700 only uses about 6W idle. On any modern processor, you only burn power if you are doing work. Any Skylake processor or later can be fixed to any TDP limit you like in the BIOS.
  12. Typically ESXi boots from a USB stick or a DOM, then runs from RAM. It needs a place for temporary ("scratch") files, and also for the VM definitions and support/patch files, so you will need some sort of storage for this. As for the NAS storage itself, you can virtualize it or provide it physically (passthrough controller or RDM).

    My configuration described above has all the storage intended for the NAS configured via passthrough or RDM. None of that is available to ESXi for scratch, so another drive is needed. I use the NVMe slot and a 128GB drive for this, and all it has on it is the VM definitions and scratch files - maybe 30GB in total, which includes some small virtual drives for XPenology testing and virtualized storage for a few other non-XPenology VMs.

    Sorry if this is overly explanatory, but it sounds like you might be setting up ESXi for the first time.
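    If it helps, the persistent scratch location can be pointed at a directory on a datastore from the ESXi shell; a rough sketch with a placeholder datastore path (takes effect after a reboot):

      # create a directory on the datastore to hold scratch files
      mkdir /vmfs/volumes/datastore1/.locker

      # point the persistent scratch location at it, then reboot the host
      esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker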
  13. More specifically, 1.03b and DS3615/17 only support an Intel-type NIC on 6.2.1+ (e1000e on ESXi is an emulated Intel NIC). On earlier versions of DSM, or with other loaders, other NICs may be supported depending on your combination of available drivers and hardware - e.g. the Intel limitation above does not apply.
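    On ESXi that just means choosing "E1000e" as the adapter type for the VM's network adapter, which corresponds to .vmx entries along these lines (the port group name is a placeholder):

      ethernet0.present = "TRUE"
      ethernet0.virtualDev = "e1000e"
      ethernet0.networkName = "VM Network"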
  14. My comment does refer to the case. The case assembly/disassembly is a bit intricate and requires some careful cable routing for airflow. And I made it somewhat more complicated by adding in the U.2 drives and doing a custom power supply. That said, my ESXi environment has a lot of tweaks in it as well - you can find the ESXi/NVMe thread on here with a little searching.