XPEnology Community

flyride

Moderator

Posts posted by flyride

  1. 1 hour ago, Ikyo said:

    Any news about a loader for the older generation CPUs?

     

    The issue isn't the loader, it's the DSM Linux kernel.  XPEnology uses DSM images direct from Synology, so you get what they build.

     

    DSM 6 on DS3615/DS3617/DS916 requires 64-bit and Nehalem.  DSM 6.2 on DS918 requires 64-bit and Haswell.

     

    If you need older than that, you will need to stay on DSM 5.
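
    If you want to check a specific CPU before committing to an image, its feature flags from any Linux live USB will tell you what generation it effectively is.  A minimal sketch follows; the flag lists are my own shorthand for the generations above (64-bit "lm" plus "sse4_2" as a Nehalem proxy, "movbe"/"fma" as a Haswell proxy), so verify against the loader threads for your DSM version.

        #!/usr/bin/env python3
        # Rough CPU-generation check from a Linux live environment.
        # The flag sets below are illustrative proxies, not an official list.
        REQUIRED = {
            "DS3615/DS3617/DS916 (DSM 6.x, Nehalem-class)": {"lm", "sse4_2"},
            "DS918 (DSM 6.2, Haswell-class)": {"lm", "sse4_2", "movbe", "fma"},
        }

        flags = set()
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    break

        for target, needed in REQUIRED.items():
            missing = needed - flags
            print(target + ":", "looks OK" if not missing else "missing " + ", ".join(sorted(missing)))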

  2. 9 minutes ago, ed_co said:

    1) That's why I was planning to do RAID 1 (mirror) with 2 SATA SSD, to avoid corruption.

    Again, there are plenty of examples of volumes being corrupted with SSD cache in RAID1!  Check Reddit, the Synology forums, etc.

     

    9 minutes ago, ed_co said:

    2) Not applicable... I am not planning to get one of these.

    You have an M.2-to-PCIe adapter and a PCIe SATA controller.  The bandwidth commentary is completely relevant to your configuration.  Consider testing performance with a SATA SSD connected to both the chipset and Marvell controllers and prove to yourself that it is different, rather than relying on "known" information.  That will save you a lot of effort if the performance turns out to be the same!
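
    If you want a quick way to run that comparison, something like the sketch below from a Linux live USB (or over SSH) gives a rough sequential-read number; the device path is a placeholder, and fio or dd will give you more rigorous results.  Drop the page cache first (echo 3 > /proc/sys/vm/drop_caches) so a cached read doesn't flatter either controller.

        #!/usr/bin/env python3
        # Crude sequential-read test: run it with the same SSD on the chipset
        # ports, then on the Marvell card, and compare the two numbers.
        import os, time

        DEVICE = "/dev/sdX"     # placeholder -- point this at the SSD under test
        BLOCK = 1 << 20         # 1 MiB per read
        TOTAL = 2 << 30         # stop after 2 GiB

        fd = os.open(DEVICE, os.O_RDONLY)
        done, start = 0, time.time()
        while done < TOTAL:
            chunk = os.read(fd, BLOCK)
            if not chunk:
                break
            done += len(chunk)
        os.close(fd)

        elapsed = time.time() - start
        print(f"{done / elapsed / 1e6:.0f} MB/s sequential read from {DEVICE}")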

     

    9 minutes ago, ed_co said:

    3) There is no other way to expand. And 6 SATA ports is enough now, but not in the future. There are other possibilities with other controllers though.

    So, summarising, you think the SSD cache isn't worth it. Curious, I thought this would improve the performance of the NAS a lot. Here is one example.

    The example just shows that the poster planned to configure SSD cache, not that it was tested to offer any measurable benefit.  He hypothesizes that writes will benefit (theoretically true).  It is equally true that RAM will give you the same benefit, on real-world workloads, without the corruption risk.  SSD cache sounds good in concept but it is not very useful in reality, unless you have a sustained enterprise workload.

     

    And now I am just repeating my opinion here.  You can do what you want to, half the fun is in experimenting!

  3. On 12/21/2018 at 6:29 AM, Konfl1kt said:

    I'm just not sure how it works. If the PC is always on standby, can it take 20-30W total? My PC uses around 200W, and around 500W while playing games.

     

    If your PC is sustaining 200W you must be overclocking with a configuration that does not allow the processor to idle.

    And +300W during games, maybe you have a dual graphics card setup, probably also overclocked?

     

    There is no benefit to DSM from either a high performance video card or overclocking.  So the example really is not comparable.

  4. You are asking for opinion, not what is technically possible.

    1. I do not think highly of the DSM SSD write cache implementation.  There are many examples of data loss due to array corruption when using SSD write cache.  Plus, most feasible DSM NAS write workloads benefit just as much from increasing RAM, with no corruption risk.
    2. You assert that the chipset controller is faster than an (M.2) PCIe-connected controller, and that this is the reason for all this effort.  Based on what? Have you actually benchmarked this?  A SATA III SSD is limited to 550MBps by its interface specification, no matter where it is connected. M.2 is essentially a PCIe x4 slot, which is about 4GBps of bandwidth to the CPU, so SATA drives connected there should run just fine; see the arithmetic sketch after this list. I don't really understand the performance advantage of moving things around.
    3. There is an inherent, latent risk in spanning volumes across controllers.  What if you upgrade and only one controller comes up?  Your array is now broken.  Unless the ports are needed for the drives you want to connect, it would be better not to run a second controller at all.  This is also true for SSD cache on a secondary controller - your array is broken if the SSD cache isn't available (i.e. you do not reduce risk by putting the SSD cache on a secondary controller).
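
    To put some numbers behind point 2, here is the interface math, assuming a PCIe 3.0 x4 M.2 slot:

        # SATA III vs. an M.2 (PCIe 3.0 x4) slot -- the SATA link is the
        # bottleneck for a SATA SSD regardless of which slot the controller uses.
        sata_link_gbps = 6.0                                        # SATA III line rate
        sata_ceiling_MBps = sata_link_gbps * (8 / 10) * 1000 / 8    # 8b/10b encoding -> ~600 MB/s raw
        pcie3_lane_GBps = 8.0 * (128 / 130) / 8                     # ~0.98 GB/s usable per lane
        m2_x4_GBps = 4 * pcie3_lane_GBps                            # ~3.9 GB/s for an x4 slot

        print(f"SATA III ceiling: ~{sata_ceiling_MBps:.0f} MB/s (about 550 MB/s in practice)")
        print(f"M.2 PCIe 3.0 x4 ceiling: ~{m2_x4_GBps:.1f} GB/s -- plenty for several SATA SSDs")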

    Summarizing, I don't think you will get much performance value from this plan, and you will incur arguably unnecessary risk, both short term and long term.  If you still want to move drives around, I recommend moving ONE drive at a time with DSM shut down.  Then boot up, let DSM update the array information, and verify that everything comes up clean.  Then shut down and move another drive; repeat until you are done with whatever you are trying to do.
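
    DSM builds its storage on Linux MDRAID, so after each single-drive move you can confirm over SSH that the arrays came back clean before shutting down for the next move.  A minimal check (cat /proc/mdstat shows the same thing by eye):

        #!/usr/bin/env python3
        # Flag any md array whose member-status string (e.g. [UU], [U_])
        # shows a missing member after a drive move.
        import re

        with open("/proc/mdstat") as f:
            mdstat = f.read()

        print(mdstat)
        if any("_" in status for status in re.findall(r"\[([U_]+)\]", mdstat)):
            print("WARNING: at least one array is degraded -- stop and investigate before moving another drive.")
        else:
            print("All md arrays report every member present.")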

  5. You still have not answered the question.  Which loader (exactly), and which DSM (exactly)?  Your CPU won't work with Jun's 1.04b and DS918 6.2.x.  Your NIC probably won't be supported by 1.03b and 6.2.x, so you are left with 1.02b DS3615 6.1.x as your best chance of success.

  6. Generally, the answer to both of your questions is yes.  But it is hard to answer without understanding much more about your system.

     

    Assuming your HDDs are in some sort of redundant RAID config, you can discover the answer on your own.  Why not just shut down, move one drive over to your other controller, boot up and see?  If it works, great; then shut down again and move a second drive.  If it does not, move the first drive back and recover your array.

     

    Moving the drives so that they do not match the controller drive sequence can complicate a recovery from a catastrophic failure where you are trying to manually reconstruct an array that won't start.

     

    Now, if you are successful moving your drives, you can still get your array corrupted by an SSD cache failure, so maybe we should not be helping you ... :-)

  7. This is ambitious. It's cool that you are following my install, but please understand that it's an esoteric and complex configuration with some inherent risk. XPenology has always been a bit fiddly. Be prepared to do a lot of testing and experimentation, and always have backups of your data. Honestly, I would have jumped to FreeNAS if it was necessary to get NVMe working.

     

    What does testing and experimentation mean in this context? BIOS vs UEFI. BIOS spinup modes on your drives. BIOS virtualization configuration options. Check ESXi compatibility and stability thoroughly. Upgrade ESXi to the latest major patch level. Try various permutations of VM configurations and do a full XPenology install on each, so you know exactly what works and what doesn't. Benchmark array performance on each. Deliberately break your arrays. Reboot them before rebuilding. Fully test hotplug in/out if your hardware supports it. Upgrade DSM. Just because some users have success upgrading a simple setup does not mean there won't be problems with a fully custom configuration.

     

    A simple test VM is not a fully adequate simulator, because once you pass through disks, you adopt the risks of a baremetal install.

     

    I apologize for the lecture.  I just don't want you committing to hundreds or thousands of dollars of hardware without understanding what you could be getting into.

     

    On the equipment manifest:

    • The motherboard should work based on the spec sheet. The case and CPU cooling combo are fine. You might want to review this FreeNAS thread.
    • I'm not sure how the ASUS Hyper M.2 x16 card works - it must have a PCI switch on-board for it to support 4 drives? It must be supported by ESXi natively. If it is able to see all the drives using the standard ESXi NVMe driver, it should be fine.
    • Performance-wise, there is no practical reason for an NVMe SSD RAID10. NVMe SSDs will read crazy fast even in RAID1 (>1 gigabyte per second), but they will probably throttle on sustained writes without active cooling. You might want RAID5/6/10 for capacity, or to use some lower-capacity/less expensive sticks, which will also reduce (delay) the cooling issue. This is really silly talk though!
    • To be clear, 1GBps (capital B = bytes) of disk throughput is 8x the performance of 1Gbps (small b = bits) Ethernet; see the arithmetic sketch after this list. If you don't have a 10GbE or 40GbE network setup, the NVMe performance is wasted.
    • RAID10 performance (and capacity, obviously) on HDDs scales linearly with the number of drives.  4xHDD in RAID10 is roughly 2x the speed of 2xHDD in RAID1.
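
    To spell out the bytes-vs-bits arithmetic from the list above:

        # Disk throughput is quoted in bytes/s, network links in bits/s.
        nvme_mirror_GBps = 1.0            # ~1 gigabyte/s read from an NVMe RAID1
        gbe_GBps = 1.0 / 8                # 1 Gbps Ethernet = 0.125 GB/s
        ten_gbe_GBps = 10.0 / 8           # 10 GbE = 1.25 GB/s

        print(f"NVMe mirror ~{nvme_mirror_GBps:.1f} GB/s vs 1GbE ~{gbe_GBps:.3f} GB/s "
              f"({nvme_mirror_GBps / gbe_GBps:.0f}x more than the network can carry)")
        print(f"10GbE tops out around {ten_gbe_GBps:.2f} GB/s -- roughly a match for one NVMe mirror")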

    Full disclosure item:

    I have not set up the required NVMe pRDM passthrough configuration using 1.04b/6.2.1 yet.  I'm intentionally still running 1.02b/6.1.7 and intend to do some careful testing before committing to a newer DSM.  I can't think of a reason it won't continue to work technically.  One area needing attention is how to present the pRDM devices to DSM.  I've documented how SAS emulation can be used on 6.1.7 to match up native device SMART functionality with DSM's internal smartctl query arguments. This no longer works with 6.2.1. It seems that SATA emulation is the only reliable option.

  8. There are currently two options to make NVMe work as a RAIDable disk; both require virtualization.

     

    You can set up a datastore and create a virtual disk.  Or, with ESXi, create a physical RDM definition for the NVMe drive.
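
    As a rough illustration of the second option: from the ESXi shell (recent ESXi builds bundle a Python interpreter), vmkfstools -z creates the physical-mode RDM pointer file that you then attach to the XPenology VM as an existing disk. The device and datastore paths below are placeholders - list yours with ls /vmfs/devices/disks/ first.

        #!/usr/bin/env python
        # Sketch only: wraps the vmkfstools call that creates a physical RDM
        # pointer file for an NVMe device.  Paths are placeholders.
        import subprocess

        NVME_DEVICE = "/vmfs/devices/disks/t10.NVMe____YOUR_DRIVE_ID"        # placeholder
        RDM_POINTER = "/vmfs/volumes/datastore1/xpenology/nvme-prdm.vmdk"    # placeholder

        subprocess.run(["vmkfstools", "-z", NVME_DEVICE, RDM_POINTER], check=True)
        print("Created {}; add it to the VM as an existing disk.".format(RDM_POINTER))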

     

  9. It looks like the way to use an H710 for a VSAN implementation is to set up each participating drive as a single-drive RAID 0.  The same config works for XPenology.  So you should be able to do your RAID 1 for scratch and RAID 0 for everything else, then RDM those devices into the XPenology VM.  See this link: https://community.spiceworks.com/topic/748479-best-raid-controller-for-vmware-vsan-from-dell?page=1#entry-4213269

     

    The more "native" approach is to obtain or convert the controller into a JBOD device.  Your controller is actually an LSI 2208 card.  So you might be able to flash it to stop being a RAID card and just make it a high performance SAS/SATA controller. See these links as a starting point:

    https://forums.servethehome.com/index.php?threads/lsi-raid-controller-hba-equivalency-mapping.19/

    https://forums.servethehome.com/index.php?threads/is-there-a-way-to-restore-an-lsi-2208-after-firmware-update-failure.13237/

    https://www.vladan.fr/flash-dell-perc-h310-with-it-firmware/ (this is for an H310 but has lots of relevant information)

     

    HOWEVER, some users have reported problems with your specific hardware (re: PCIe slots, controller firmware availability) here: https://forums.unraid.net/topic/51057-flash-dell-perc-710-to-it-mode/

     

    Once you flash an "IT" BIOS, you won't be able to do the RAID 1 for the datastore.  You have a few alternative options (VSAN, manually copying between two datastore drives, etc.), but I don't think you can do both RAID 1 and non-RAID drive support on the same LSI controller.

  10. Technically there isn't such a thing as "hardware RAID," just a primitive CPU on a controller that doesn't have much else to do.  Some time in the past, that card was faster than the CPUs that were then available.  That just isn't true any more.  And your "hardware RAID" is the one in the motherboard BIOS, right?  That's just software, my friend, and not nearly as sophisticated as MDRAID in Linux.

     

    The very fastest enterprise flash SANs in my own data centers are NVMe connected directly to the bus and driven by Xeons... totally software based.

     

    I'm not sure why you don't want to pass through your SATA controller, but if you must keep it, you can try to RDM the specific drives you want XPenology to see while leaving the controller to ESXi.

  11. VROC is another form of hardware RAID, which displaces much of the functionality that DSM provides around storage redundancy management.

     

    Also, I think VROC is really only a very high-end Xeon product implementation... and not generally available on the hardware on which XPEnology is likely to be run.

     

    Lastly, using NVMe is currently only possible within very specific parameters using ESXi, and I'm not sure how a VROC implementation would play into that.

  12. I don't have an 8100T, but I would guess it is between 3-6W idle.  What difference does 2-3W make for you?

     

    As far as estimating total power consumption, I like to use https://pcpartpicker.com, which lets you build a system from its parts' spec sheets and tells you your maximum power budget.  Idle consumption is usually a lot less.  CPUs, memory, SSDs and hard drives all use quite a bit less power at idle than their max spec.
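
    If you would rather rough it out by hand, the same exercise is just a sum of idle and maximum figures.  Every wattage below is a placeholder guess, so substitute numbers from your own parts' spec sheets:

        # Back-of-the-envelope idle vs. worst-case power for a small NAS build.
        parts = {
            #                  (idle W, max W) -- illustrative values only
            "i3-8100T CPU":    (4, 35),
            "Motherboard":     (10, 20),
            "2x 8GB DDR4":     (4, 6),
            "SATA SSD":        (1, 3),
            "4x 3.5in HDD":    (16, 32),
        }

        idle_total = sum(idle for idle, _ in parts.values())
        max_total = sum(peak for _, peak in parts.values())
        print(f"Estimated idle: ~{idle_total} W, worst case: ~{max_total} W")
        print("Size the PSU for the worst case; the power bill will track the idle number.")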

  13. Typically ESXi boots from a USB stick or a DOM, then runs from RAM.  It needs a place for temporary ("scratch") files, and also for the VM definitions and support/patch files, so you will need some sort of storage for this.  You can virtualize your NAS storage or provide it physically (passthrough controller or RDM).

     

    My configuration described above has all the storage intended for the NAS configured via passthrough or RDM.  None of that is available to ESXi for scratch, so another drive is needed.  I use the NVMe slot and a 128GB drive for this, and all it has on it is the VM definitions and scratch files - maybe 30GB in total, which includes some small virtual drives for XPenology test, and virtualized storage for a few other non-XPenology VM's.

     

    Sorry if this is overly explanatory, but it sounds like you might be setting up ESXi for the first time.
