XPEnology Community

flyride

Moderator

Posts posted by flyride

  1. My comment does refer to the case. The case assembly/disassembly is a bit intricate and requires some careful cable routing for airflow.  I made it somewhat more complicated by adding the U.2 drives and fitting a custom power supply.

     

    That said, my ESXi environment has a lot of tweaks in it as well - you can find the ESXi/NVMe thread on here with a little searching.

  2. I'm running almost everything you're inquiring about.

    • 8-bay U-NAS 810A (8 hot-swap drive bays, MicroATX)
    • SuperMicro X11SSH-F (MicroATX)
    • E3-1230V6
    • 64GB RAM
    • 8x 4TB in RAID 10
    • Mellanox ConnectX-3 dual 10GbE
    • 2x Intel P3500 2TB NVMe (these are U.2 drives)

    A few items of note.  XPenology is running as a VM under ESXi.  This allows the NVMe drives to be RDM'd as SCSI, which works fine.  Native NVMe doesn't work because DSM doesn't currently support it for regular storage.  The NVMe drives are attached via PCIe U.2 adapters since I don't need the slots.  I'm using the motherboard M.2 slot for the ESXi scratch/VM drive.  The SATA controller and the Mellanox are passed through to the VM, so DSM is using mostly native drivers, which work fine bare-metal or in a VM with the Mellanox and the C236 chipset.
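
     In case it helps anyone replicating this, the RDM pointers are created from the ESXi shell with vmkfstools.  This is just a sketch - the device identifier, datastore and VM folder names below are placeholders, and you still attach the resulting .vmdk to the VM's virtual SCSI controller afterward (vmkfstools -r creates a virtual-compatibility RDM instead, if you prefer that):

         # list the devices to find the full NVMe identifiers
         ls /vmfs/devices/disks/
         # create a physical-compatibility RDM pointer for one of the NVMe drives
         vmkfstools -z /vmfs/devices/disks/<nvme_device_id> /vmfs/volumes/<datastore>/<vm_folder>/nvme1-rdm.vmdk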

     

    So all this fits in a package with the footprint of, and just a bit taller than, a DS1817.  I'm pretty happy with it, except that the case was the hardest to set up of any server I've ever built.

     

    I'm also running another XPenology server on an ITX board in a U-NAS 410, which is a 4-bay ITX chassis.  Works fine but it is running a low-power embedded chip. I'd pay a lot of attention to cooling and cooler compatibility if you want to run a 95W chip in an ITX case.

     

    You should know that the DSM code bases that work with XPenology only support 8 threads total (including hyperthreading), so most E5s might not be a fit.

  3. I just need to point out that you posted a problem, concluded it's not fixable, and now complain when offered the tools that will fix your issue.  How about just trying them out?

     

    The tutorial does not contradict itself; you are just misapplying the syntax from the first example.  You must have two controllers or this wouldn't be an issue in the first place.  So if you have no drives you need on the first controller, you could use SataPortMap=0 and the drives on that controller would disappear.  However, I don't think this is your problem, and you don't offer any information about the loader version or anything else about your system.  This is probably what you are trying to do, which leverages DiskIdxMap, but maybe not...

     

    The tutorials aren't exhaustive. The one you reference introduces the correct tools as a solution. There are many, many configuration examples here if you just search for the port mapping terms.  Also, I just put those same search terms into Google and the second hit takes me to this page, which has some excellent descriptions of how they work and interact.

     

    Please be assured there is a relatively simple solution.  Good luck.

     

    EDIT: I only noticed after posting that you basically asked the same question before, and I offered up the same answer.  In that thread you say you are using 6.2.1 with the synoboot on the vSATA controller.  You could use DiskIdxMap to move the enumeration of your first (SATA) controller to something high like 1F and your second controller (LSI) to 00, and your problem would probably be solved.  Alternatively, if you were using fewer than 12 drives, you could use SataPortMap=1 and the SATA controller would only use one slot for the synoboot (which is useless, but you can see how your problem could be solved).
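
     To illustrate (just a sketch - this assumes Jun's loader, where these arguments live in the sata_args line of grub.cfg, and any existing arguments on that line should be kept), the two approaches above would look something like:

         # DiskIdxMap approach: first (vSATA) controller pushed to slot 0x1F, second (LSI) controller starting at 0x00
         set sata_args='DiskIdxMap=1F00'
         # or, the SataPortMap approach: only one port mapped on the first controller (the synoboot)
         set sata_args='SataPortMap=1'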

  4. I'm not sure SuperMicro can advise you on how DSM should be configured.  In any case, using a combination of hardware (motherboard) and software (DSM) RAID seems counterproductive.  You will get the most out of DSM if you present raw drives and let DSM do all the RAID operations you want to do.  That would mean increasing MaxDrives in your case.

     

    If you want to exclude an entire controller or a subset of drives on a controller, look into SataPortMap, SasIdxMap, and DiskIdxMap.  This is covered in the main tutorial here.
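
     For reference, a rough sketch of the drive-count change (I believe the MaxDrives setting corresponds to the maxdisks parameter in /etc.defaults/synoinfo.conf on DSM 6.x; back the file up first, mirror the change to /etc/synoinfo.conf, and note that DSM updates can overwrite it).  For a 16-drive layout it would look something like:

         maxdisks="16"
         # bitmask of which ports are treated as internal - 0xffff = 16 ports (make sure it doesn't overlap the esata/usb masks)
         internalportcfg="0xffff"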

     

  5. It's been a while since I tested this with baremetal DSM, but I don't think you can see NVMe drives at all until you go into "add cache to volume," where they will be listed.

     

    syno_hdd_util --ssd_detect will only return SSDs that can be used for disk groups - i.e. SATA SSDs.
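
     If you want to sanity-check from the shell (just a sketch, assuming a stock DSM 6.x install), you can confirm the kernel sees the NVMe devices even though they won't show up as regular disks:

         # NVMe block devices the kernel can see (not usable as normal data disks in DSM)
         ls /dev/nvme*
         # only SATA SSDs eligible for disk groups are returned here
         syno_hdd_util --ssd_detect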

     

  6. The bonding strategies available to DSM (i.e. LACP) use some combination of source/destination IP and MAC addresses to choose which of the NICs to use.  They don't join the links together for twice the bandwidth on a single connection.  With many clients connecting to a bonded NIC destination, there is an equal chance of selecting one link or the other, so traffic theoretically gets distributed across both NICs and (again, theoretically) twice the throughput is possible.
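
     As a rough illustration: with a layer-2 hash policy, the outgoing link is chosen by something like (source MAC XOR destination MAC) modulo the number of links, so a given client/NAS pair always hashes to the same link and a single transfer never exceeds one NIC's bandwidth.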

     

    For a single PC connected to single NAS, you probably won't ever see an increase in speed.

     

    Always worth benchmarking your changes, and make sure you don't get fooled by cached performance.

     

  7. No, in order to run RAID 10 you have to tear down your array and rebuild from scratch.  Otherwise you will be adding a 4th drive to your existing SHR (RAID 5), or converting to SHR2 (RAID 6).

     

    If your drives are all the same size, SHR doesn't offer any advantage over the equivalent RAID level and is slightly slower.  If you want to run mismatched drive sizes, SHR/SHR2 will give you somewhat more usable storage than classic RAIDx.
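
     As a rough example: with 2x 4TB + 2x 8TB drives, classic RAID 5 treats every drive as 4TB and yields about 12TB usable, while SHR adds a mirrored layer across the extra space on the two 8TB drives for roughly 16TB usable.  With four identical drives the usable space is the same either way.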

     
