XPEnology Community

flyride

Moderator

Posts posted by flyride

  1. Ok, this is a simple system with the following likely attributes:

    1. VMware is probably installed on a USB key and booting from that
    2. You have a small (32GB) SSD which is dedicated for the VMware scratch datastore
    3. You have created an XPEnology VM, storing the virtual boot loader and a 200GB sparsely populated virtual disk as storage for XPEnology.  These are all stored on the scratch datastore.
    4. Presumably you have installed DSM to the virtual disk but it's not clear if you have built a storage pool or a volume.
    5. You probably have a physical HDD you want to use as a second (?) disk for XPEnology. This is not explained or shown in the system pics.

    Hopefully you can see that sparse virtual disk storage will be problematic in a production environment, because your virtual disk will rapidly exceed the SSD's physical capacity once you start putting things onto it.  This is fine for testing, to simulate a larger disk, but definitely not for production.

     

    Assuming I am correct that the second disk you wish to add is an HDD, there are three ways to connect it.

    1. Create a new datastore in ESXi located on the HDD.  Then create a virtual disk for some or all of it, and attach that virtual disk to your VM; it should then be visible in DSM.
    2. Create an RDM pointer to the HDD (see my sig on how to do this, and the command-line sketch after this list).  Then attach the RDM definition to your VM.  The entire disk should then be visible in DSM.
    3. If your SSD is not on the same SATA controller as your HDD (for example, if the SSD is an NVMe drive), you can pass your SATA controller through to the VM entirely. Any attached drives will then be visible to DSM, as long as your SATA controller is supported by DSM.
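
    For option 2, here is a minimal sketch from an SSH session on the ESXi host.  The device identifier and datastore path are examples only; substitute your own.

    # list physical disks to find the HDD's device identifier
    ls -l /vmfs/devices/disks/ | grep -v vml
    # create a physical (pass-through) RDM pointer on the scratch datastore
    mkdir -p /vmfs/volumes/datastore1/rdm
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE__DISK /vmfs/volumes/datastore1/rdm/hdd1-rdm.vmdk
    # then attach hdd1-rdm.vmdk to the XPEnology VM as an existing hard disk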

    This is probably a bit overwhelming.  You seem new to ESXi so just build up and burn down some test systems and do a lot of research on configurations, until you get the hang of it.

     

    Good luck!

  2. On 11/21/2019 at 4:55 AM, polik said:

    I'm using "Quicknick’s Loaders DSM 6.1.X", downloaded from https://xpenology.club/downloads/

    Unfortunately, in both cases (XPEnology_DSM_6.1.x-quicknick-3.0.zip and ds3615xs_DSM_6.1.x-quicknick-3.0.zip), I'm getting an error message during OS install from a file (previously downloaded from the Synology site) - something about file corruption.

    My question is: what am I doing wrong?

     

    Quicknick's loader is not supported, and not supported here.

     

    • Like 1
    ESXi needs its own storage. It can boot off of a USB key, but it will also need a place for your VM definitions and any virtual disks to live.  This is called "scratch" storage.

     

    XPenology's boot loader under ESXi is a vdisk hosted on scratch. The disks that DSM manages should usually not be - one exception is a test XPenology VM.  In any case, if you use scratch to provide virtual disks for DSM to manage, the result won't be portable to a baremetal XPenology or Synology installation.

     

    As you have researched, one alternative is to define RDM definitions (essentially, virtual pointers) for physical disks attached to ESXi.  RDM disks can then be dedicated to the XPenology VM and won't be accessible by other VMs.  The reasons to do this are 1) to provide an emulated interface to a disk type not normally addressable by DSM, such as NVMe, or 2) to allow certain drives to be dedicated to DSM (and therefore portable) while others serve scratch for shared VM access - all on the same controller.

     

    If you have access to other storage for scratch (for example, an M.2 NVMe SSD), you can "pass through" your SATA controller - i.e. dedicate it and all of its attached drives to the XPenology VM.  The controller and drives will then actually be seen by the VM (and won't be virtualized at all) and will be portable. An alternative to the M.2 drive is another PCIe SATA controller, as you suggest.
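
    As a rough sketch, you can identify the controller to pass through from the ESXi shell; output formats vary by ESXi version, and the grep patterns here are just examples.

    # list PCI devices and look for the chipset SATA/AHCI controller
    esxcli hardware pci list | grep -i -B 8 ahci
    # or get a shorter listing
    lspci | grep -i sata
    # then enable passthrough for that device in the ESXi host client
    # (Manage > Hardware > PCI Devices) and reboot the host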

     

    On my own "main" XPenology system, I do all of the above.  There is a USB boot drive for ESXi, an NVMe M.2 drive for scratch, and the XPenology VM has two U.2-connected NVMe drives translated to SCSI via RDM, plus the chipset SATA controller passed through with 8 drives attached.  Other VMs run alongside XPenology, using virtual disks hosted on scratch.

  4. Yes, a migration upgrade works as long as there is no ESXi on the data disks themselves.  It's the same platform (DSM).

     

    It might be simpler just to pass through the SATA controller and ensure your drives are 100% seen by DSM.  If you must RDM, make sure it's physical RDM so that there is no encapsulation of the partition at all.  The only reason I ever found to do this was to support NVMe drives in volumes.  See my sig for details on that if you aren't familiar.
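
    As a quick sanity check from a DSM SSH session (a sketch; device names will differ, and smartctl may not be present on every build), confirm that the full physical disk is what DSM sees:

    # list the disks DSM can see
    ls /dev/sd*
    # confirm the reported capacity matches the physical drive
    sudo fdisk -l /dev/sdb
    # with controller passthrough, SMART data should also come through
    sudo smartctl -i /dev/sdb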

  5. EDIT: read through this whole thread for patch information specific to the DSM version you are running.

     

    This is nice work, and thank you for your contribution.

     

    For those who aren't familiar with patching binary files, here's a script to enable NVMe support per this research.

    It must be run as sudo and you should reboot afterward.

     

    Note that an update to DSM might overwrite this file such that it has to be patched again (and/or can't be patched due to string changes, although this is unlikely).  Your volume might appear as corrupt or not mountable until the patch is reapplied.  To be very safe, you may want to remove the cache drive from the volume prior to each update.

    #!/bin/ash
    # patchnvme for DSM 6.2.x
    # Patches /usr/lib/libsynonvme.so.1 so DSM will accept an NVMe drive as a cache device.
    #
    TARGFILE="/usr/lib/libsynonvme.so.1"
    # Strings to find (hex-encoded bytes):
    #   PCISTR     = "\0" + "0000:00:13.1" + "\0"  (a hardcoded PCI slot address)
    #   PHYSDEVSTR = "\0" + "PHYSDEVPATH" + nulls  (udev property the library matches on)
    PCISTR="\x00\x30\x30\x30\x30\x3A\x30\x30\x3A\x31\x33\x2E\x31\x00"
    PHYSDEVSTR="\x00\x50\x48\x59\x53\x44\x45\x56\x50\x41\x54\x48\x00\x00\x00\x00\x00\x00"
    # Replacement strings, null-padded to the same byte length as the originals:
    #   PCINEW     = "\0" + "nvme" + nulls           (match the nvme device name instead)
    #   PHYSDEVNEW = "\0" + "PHYSDEVDRIVER" + nulls  (match the driver name instead of the path)
    PCINEW="\x00\x6E\x76\x6D\x65\x00\x00\x00\x00\x00\x00\x00\x00\x00"
    PHYSDEVNEW="\x00\x50\x48\x59\x53\x44\x45\x56\x44\x52\x49\x56\x45\x52\x00\x00\x00\x00"
    #
    # Back up the original library once (an existing backup is left alone)
    [ -f $TARGFILE.bak ] || cp $TARGFILE $TARGFILE.bak
    if [ $? == 1 ]; then
      echo "patchnvme: can't create backup (sudo?)"
      exit
    fi
    # Make sure each target string occurs exactly once before touching anything
    COUNT=`grep -obUaP "$PCISTR" $TARGFILE | wc -l`
    if [ $COUNT == 0 ]; then
      echo "patchnvme: can't find PCI reference (already patched?)"
      exit
    fi
    if [ $COUNT -gt 1 ]; then
      echo "patchnvme: multiple PCI reference! abort"
      exit
    fi
    COUNT=`grep -obUaP "$PHYSDEVSTR" $TARGFILE | wc -l`
    if [ $COUNT == 0 ]; then
      echo "patchnvme: can't find PHYSDEV reference (already patched?)"
      exit
    fi
    if [ $COUNT -gt 1 ]; then
      echo "patchnvme: multiple PHYSDEV reference! abort"
      exit
    fi
    # Apply both substitutions (same-length replacements keep the binary layout intact)
    sed "s/$PCISTR/$PCINEW/g" $TARGFILE >$TARGFILE.tmp
    if [ $? == 1 ]; then
      echo "patchnvme: patch could not be applied (sudo?)"
      exit
    fi
    sed "s/$PHYSDEVSTR/$PHYSDEVNEW/g" $TARGFILE.tmp >$TARGFILE
    if [ $? == 1 ]; then
      echo "patchnvme: patch could not be applied (sudo?)"
      exit
    fi
    echo "patchnvme: success"
    rm $TARGFILE.tmp 2>/dev/null
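
    A minimal usage sketch, assuming you saved the script as patchnvme.sh:

    chmod +x patchnvme.sh
    sudo ./patchnvme.sh
    sudo reboot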

     

    • Like 7
    • Thanks 2
  6. 6 hours ago, mrpeabody said:

    It was btrfs 

     

    Then using an ext filesystem utility (e2fsck) is ill-advised.

     

    btrfs really doesn't have any user-accessible repair options in Synology.  It's mostly designed for self-healing, and if that doesn't work, Synology remote-access recovery.

     

    Here's a data recovery thread from a while back.  If you want better advice, post some screenshots or more information about your issue.
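
    If you do post more information, the output of a few read-only commands (a sketch; none of these modify anything) from an SSH session would help:

    # show the md RAID state underneath the volume
    cat /proc/mdstat
    # show the btrfs filesystem(s) the box knows about
    sudo btrfs filesystem show
    # check whether the volume is currently mounted
    mount | grep volume
    # look for recent btrfs errors in the kernel log
    dmesg | grep -i btrfs | tail -20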

     

  7. 5 minutes ago, Jamzor said:

    I have tried creating new disks with a SATA controller multiple times. I tried setting "dependent" and "persistent", but still, every time I boot it says no disks found...

    I don't know what I'm doing wrong. The only way I get it to actually detect disks is with a SCSI controller.

     

    I think the problem may be that you have the wrong VM hardware emulation profile.

     

    It's important that you pick the "Other Linux 3.x x64" option when you initially build the VM.  In that particular tutorial it's not very prominently shown, but it is there. 
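
    If the VM already exists, here is a sketch of how to check the profile without rebuilding it; the datastore path and VM name are examples, and the .vmx should only be edited while the VM is powered off.

    # from an ESXi SSH session, inspect the guest OS type in the VM's .vmx file
    grep -i guestOS /vmfs/volumes/datastore1/XPEnology/XPEnology.vmx
    # for "Other Linux 3.x x64" it should read something like:
    # guestOS = "other3xlinux-64"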

     

    Many folks don't know that you can get DACs, which are copper cables with embedded SFPs at each end.  That gets rid of a lot of power and heat too.  I would much rather have an SFP+ port than a 10GBASE-T copper port, just for the reduction of heat in the switch and/or NIC.

    As a strong advocate of 10GbE networking on XPenology, I am happy to finally see an affordable, passively-cooled switch on the market (even though many folks don't even need a switch - a direct-connected multi-port NIC will often suffice).

     

    To use it, you'll have to familiarize yourself with DACs and/or optical SFPs, but this is a major step forward for the price.

     

    https://www.servethehome.com/mikrotik-crs305-1g-4sin-review-4-port-must-have-10gbe-switch/

    • Like 3
    • Thanks 2
  10. Thanks!  This is helpful, but not conclusive yet.  If you don't mind iterating with me a little bit, please post the output of:

     

    1. synonvme --m2-card-model-get /dev/nvme0
    2. synodiskport -cache
    3. fdisk -l /dev/nvme0n1
    4. udevadm info /dev/nvme0n1
    If a low-power CPU appeals to you and you can cost-effectively source a "T" CPU, it can make sense, but it may be cheaper to just buy the lowest-performance "K" CPU available and underclock/undervolt it using a Z370/Z390 board.  The result will be essentially the same.

     

    It depends on which DSM hardware platform you are using.  The 1.02b/1.03b loaders and DS3615 work well with Mellanox (I'm using that myself).  With 1.03b you will need to keep an Intel GbE card in the system.  There is less empirical information available about 1.04b and 10GbE (the DS918+ hardware does not have any provision for add-on cards), but there are some drivers in the release.
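
    As a quick sketch (the grep pattern assumes a Mellanox card), you can check from a DSM SSH session whether the driver loaded and the interface appeared:

    # see whether a Mellanox (mlx4/mlx5) module is loaded
    lsmod | grep -i mlx
    # check the kernel log for the card initializing
    dmesg | grep -i mlx
    # list the network interfaces DSM can see
    ls /sys/class/net/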

     

    See these links:

     
