flyride

Everything posted by flyride

  1. Which loader and DSM are you attempting to use?
  2. This is ambitious. It's cool that you are following my install, but please understand that it's an esoteric and complex configuration with some inherent risk. XPenology has always been a bit fiddly. Be prepared to do a lot of testing and experimentation, and always have backups of your data. Honestly, I would have jumped to FreeNAS if that were what it took to get NVMe working. What does testing and experimentation mean in this context? BIOS vs. UEFI, BIOS spin-up modes on your drives, BIOS virtualization configuration options, thorough ESXi compatibility and stability checks. Upgrade ESX
  3. You want to look into "physical RDM," which can only be done at the command line, not in the vSphere UI. One pRDM definition per RAID0. Once the pRDM definition is built, you can add it to a VM like any other device.
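     For reference, a minimal sketch of creating a physical RDM from the ESXi shell; the device identifier and datastore path below are placeholders, so substitute your own:

     ```
     # List raw devices to find the naa/t10 identifier of the target disk
     ls -l /vmfs/devices/disks/

     # Create a physical-compatibility RDM pointer (-z) on an existing datastore;
     # the .vmdk created here is only a mapping file, not a copy of the data
     vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
       /vmfs/volumes/datastore1/xpenology/nvme-rdm.vmdk
     ```

     The resulting .vmdk can then be attached to the VM as an existing disk on one of its SCSI controllers.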
  4. There are currently two options to make NVMe work as a RAIDable disk; both require virtualization. You can set up a datastore and create a virtual disk. Or, with ESXi, create a physical RDM definition for the NVMe drive.
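     If you go the datastore route instead, the equivalent sketch is just an ordinary virtual disk created on the NVMe-backed datastore (size, format, and path below are examples):

     ```
     # Create a thin-provisioned virtual disk on a datastore that lives on the NVMe drive
     vmkfstools -c 1800G -d thin /vmfs/volumes/nvme-datastore/xpenology/nvme-vdisk.vmdk
     ```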
  5. It looks like the H710 approach for a VSAN implementation is to set up each drive that will participate as a single-drive RAID 0. The same config works for XPenology. So you should be able to do your RAID 1 for scratch and RAID 0 for everything else, then RDM those devices into the XPenology VM. See this link: https://community.spiceworks.com/topic/748479-best-raid-controller-for-vmware-vsan-from-dell?page=1#entry-4213269 The more "native" approach is to obtain or convert the controller into a JBOD device. Your controller is actually an LSI 2208 card. So you might be able to flash it to stop being a RAID card and
  6. Technically there isn't such a thing as "hardware RAID," just a primitive CPU on a dedicated controller that doesn't have much else to do. Some time in the past, that card was faster than the CPUs that were then available; that just isn't true any more. And your "hardware RAID" is the one in the motherboard BIOS, right? That's just software, my friend, and not nearly as sophisticated as MDRAID in Linux. The very fastest enterprise flash SANs in my own data centers are NVMe connected to the bus, and Xeon... totally software based. I'm not sure why you don't want to passthrough your S
  7. Honestly, stop running hardware RAID and let DSM do what it's supposed to.
  8. VROC is another form of hardware RAID, which displaces much of the functionality that DSM provides around storage redundancy management. Also, I think VROC is really only a very high-end Xeon product implementation... and not generally available on the hardware on which XPEnology is likely to be run. Lastly, using NVMe is currently only possible within very specific parameters using ESXi, and I'm not sure how a VROC implementation would play into that.
  9. I don't have an 8100T, but I would guess it is between 3-6W idle. What difference does 2-3W make for you? As far as how to estimate total power consumption, I like to use https://pcpartpicker.com which lets you build a system out of specific parts and tells you your maximum power budget. Idle consumption is usually a lot less. CPUs, memory, SSDs and drives all use quite a bit less power at idle than their max spec.
  10. The 8700 only uses about 6W at idle. On any modern processor, you only burn power if you are doing work. Any Skylake or later processor can be capped at whatever TDP limit you like in the BIOS.
  11. Typically ESXi boots from a USB stick or a DOM, then runs from RAM. It needs a place for temporary ("scratch") files, and also for the VM definitions and support/patch files, so you will need some sort of storage for this. You can virtualize your NAS storage or provide it physically (passthrough controller or RDM). My configuration described above has all the storage intended for the NAS configured via passthrough or RDM. None of that is available to ESXi for scratch, so another drive is needed. I use the NVMe slot and a 128GB drive for this, and all it has on it is the VM definitions
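     As a hedged example, the scratch location can be pointed at that small drive from the ESXi shell (the datastore name below is a placeholder, and the host needs a reboot for the change to take effect):

     ```
     # Point ESXi's scratch location at a directory on the small local datastore
     mkdir -p /vmfs/volumes/nvme-local/.locker
     esxcli system settings advanced set \
       -o /ScratchConfig/ConfiguredScratchLocation \
       -s /vmfs/volumes/nvme-local/.locker
     # Reboot the host afterwards so the new scratch location is used
     ```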
  12. More specifically, 1.03b and DS3615/17 only support an Intel-type NIC on 6.2.1+ (e1000e on ESXi is an emulated Intel NIC). On earlier versions of DSM, or on other loaders, other NICs may be supported depending upon your combination of available drivers and hardware - i.e., the Intel limitation above does not apply.
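     On ESXi that means giving the VM an E1000e adapter. As a quick sketch, you can confirm the adapter type from the host shell (the .vmx path is an example):

     ```
     # Check which virtual NIC type the XPenology VM is using
     grep ethernet0.virtualDev /vmfs/volumes/datastore1/xpenology/xpenology.vmx
     # Expected for loader 1.03b + DSM 6.2.1+:
     # ethernet0.virtualDev = "e1000e"
     ```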
  13. My comment does refer to the case. The case assembly/disassembly is a bit intricate and requires some careful cable routing for airflow. And I made it somewhat more complicated by adding in the U.2 drives and doing a custom power supply. That said, my ESXi environment has a lot of tweaks in it as well - you can find the ESXi/NVMe thread on here with a little searching.
  14. I'm running almost everything you're inquiring about:
      • 8-bay U-NAS 810A (8 hot-swap drive chassis, MicroATX)
      • SuperMicro X11SSH-F (MicroATX)
      • E3-1230V6, 64GB RAM
      • 8x 4TB in RAID 10
      • Mellanox ConnectX-3 dual 10GbE
      • 2x Intel P3500 2TB NVMe (these are U.2 drives)
      A few items of note. XPenology is running as a VM under ESXi. This allows the NVMe drives to be RDM'd as SCSI, which works fine. NVMe native doesn't work, as DSM doesn't currently support it for regular storage. The NVMe drives are attached via PCIe U.2 adapters since I don't need the slots. I'
  15. Then the PCIe enumeration (controller order) is not known - you really need to figure that out before you do anything else. Why are you trying this with a production array??
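     As a rough sketch of how to check that, an ESXi shell (or a Linux live environment on the same box) can list the storage controllers in PCI enumeration order; the grep pattern is just an example:

     ```
     # List PCI storage controllers in the order the system enumerates them
     lspci | grep -i -E 'sata|sas|raid'
     ```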
  16. I'm not 100% sure of the controller order on your system, but assuming your PCIe enumeration is SATA1x2, SATA2x4, LSI, try DiskIdxMap=080A00. You can also add SataPortMap=228, but it should not be necessary since you just want to cut off two of the SATA2 devices.
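     For context, these values go on the sata_args line of the loader's grub.cfg (a sketch assuming Jun's 1.0x loader layout; keep whatever other arguments your grub.cfg already carries):

     ```
     # Excerpt from grub.cfg on the loader image - example values from above
     set sata_args='DiskIdxMap=080A00 SataPortMap=228'
     ```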
  17. I just need to point out that you post a problem, then conclude it's not fixable, and now complain when offered the tools that will fix your issue. How about just trying it out? The tutorial does not contradict itself; you just misapplied the syntax to the first example. You must have two controllers or this wouldn't be an issue in the first place. So if you have no drives on the first controller, you could use SataPortMap=0 and it would make the drives on that controller disappear. However, I don't think this is your problem, but you don't offer any information about the loader
  18. SataPortMap, SASIdxMap, and DiskIdxMap - it's in the 6.1.x installation tutorial. Don't try it on a production Disk Group.
  19. I'm not sure SuperMicro can advise you on how DSM should be configured. In any case, using a combination of hardware (motherboard) and software (DSM) RAID seems counterproductive. You will get the most out of DSM if you present raw drives and let DSM do all the RAID operations you want. That would mean increasing MaxDrives in your case. If you want to delete an entire controller or a subset of drives from a controller, look into SataPortMap, SASIdxMap, and DiskIdxMap. This is covered in the main tutorial here.
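     As a hedged sketch of the MaxDrives change, the drive limit corresponds to the maxdisks key in synoinfo.conf (the value 16 is only an example, and DSM updates may overwrite the edit, so back both files up first):

     ```
     # Raise the DSM drive limit by editing both copies of synoinfo.conf
     sudo sed -i 's/^maxdisks=.*/maxdisks="16"/' /etc.defaults/synoinfo.conf
     sudo sed -i 's/^maxdisks=.*/maxdisks="16"/' /etc/synoinfo.conf
     ```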
  20. It's been a while since I tested this with baremetal DSM, but I don't think you can see NVMe drives at all until you go into "add cache to volume," where they will be listed. syno_hdd_util --ssd_detect will only return SSDs that are able to be used for disk groups - i.e. SATA SSDs.
  21. The bonding strategies available to DSM (e.g. LACP) use some combination of source/target IP and MAC addresses to choose which of the NICs to use. It doesn't connect them together simultaneously for twice the bandwidth. For many clients connecting to a bonded NIC destination, there is an equal chance of selecting one NIC or the other, so the traffic theoretically gets distributed across both NICs and (again, theoretically) twice the aggregate throughput is possible. For a single PC connected to a single NAS, you probably won't ever see an increase in speed. Always worth benchmarking.
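     To make the single-PC point concrete, here is a simplified sketch of a layer-2 style transmit hash (the MAC octets are made-up examples); because the hash input never changes for one client/NAS pair, every frame goes out the same physical NIC:

     ```
     # Simplified layer-2 bond hash: XOR of the last MAC octets, modulo the slave count
     SRC_MAC_LAST=0x1a   # example: last octet of the client's MAC
     DST_MAC_LAST=0x3c   # example: last octet of the NAS's MAC
     NUM_SLAVES=2        # two NICs in the bond
     echo $(( (SRC_MAC_LAST ^ DST_MAC_LAST) % NUM_SLAVES ))   # always the same index
     ```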