XPEnology Community

flyride

Moderator

Posts posted by flyride

  1. The existing loaders work, and the individual who authored them has not been active since January (and historically is not very active, period).  So I would not expect an updated loader.

     

    You have to make your own decision about when to pull the trigger on upgrades as they do take some time to sort out.  That said, 6.2.3 seems like a good release for XPEnology as many of the interim hacks that were necessary for 6.2.1 and 6.2.2 are not needed anymore (in fact, most of the problems people are having with 6.2.3 are because they still have those hacks in place and they no longer work properly).

     

    The issue was only a partial problem before, and folks lived with it.  This is a complete fix, despite the band-aid-style implementation.

     

  2. Consider the FAQ and Tutorials here instead of YouTube.

     

    https://xpenology.com/forum/forum/83-faq-start-here/

    https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/

     

    Installing 6.2 is the same as installing 6.1.  Download the version 6.2.3 PAT file for DS918+, DS3615xs, or DS3617xs from Synology, per the installation tutorial.

     

    You need loader 1.03b for DS3615xs/DS3617xs, or 1.04b for DS918+.  Use DS918+ if you want NVMe cache capability, or DS3615xs if you want RAIDF1 capability.

    The only reason to use DS3617xs is that it supports more CPU threads, but you have a 4-core CPU, so DS3615xs and DS3617xs are equivalent for your hardware.

    https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/

     

  3. If your add-on SATA card uses a port multiplier, it may not be able to address all 6 ports.  There are many threads about which cards work well and which do not.

     

    Before you go replacing hardware, test extensively with DiskIdxMap and SataPortMap options with disks that are not in production use.
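    A minimal sketch of what that testing looks like, with placeholder values only (the exact sata_args line varies by loader version, and the right values depend on your controllers): both options are set in grub.cfg on the loader's boot partition and take effect on the next reboot.

      # grub.cfg on the loader -- placeholder values, adjust for your own controllers
      # SataPortMap=44  : expose 4 ports on the first controller and 4 on the second
      # DiskIdxMap=0004 : first controller's disks start at /dev/sda, second controller's at /dev/sde
      set sata_args='DiskIdxMap=0004 SataPortMap=44'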

  4. As of yet, nobody has posted an attempt to install DS3617xs on 6.2.3 under ESXi, so I went and checked it myself.  I can confirm it works fine with the addition of FixSynoboot.sh, but it requires DiskIdxMap=0C00 in grub.cfg for the drives to start at /dev/sda (isn't this true for all ESXi installs? I don't recall seeing it in the tutorials, but I have 0C00 on all my test ESXi systems).  vmxnet3 works too!

    • Like 3
  5.  

    55 minutes ago, HerbertJ said:

    The hardware-based Dell H730 RAID card has a 1GB cache on it, so I was operating on the assumption that I would see higher performance using the hardware RAID over software RAID. Is this incorrect?

     

    Do you think I should just put the RAID card into non-RAID (HBA) mode and have ESXi pass it through to Synology to run SHR? If so, am I passing the full controller through or just individual disks?

     

    I'm assuming that if I pass through the full controller, then I can't use any disks that are connected to the controller as part of any other VMs?

     

    How much RAM do you plan to give your guest running XPEnology?  Any memory left free after loading the OS is used as cache.  All system designs are now moving the hardware interfaces closer to the CPU and PCI bus; controller logic is just getting in the way.

     

    Regarding configuration - there are two ways to do this.  You can pass through the controller (in HBA mode) in its entirety, which will preclude the use of any of its disks by other guests.

     

    The other way is with RDM, where ESXi retains ownership of the controller, but you select specific drives and build a Raw Device Mapping pointer for each, which is then attached to the XPEnology guest.  See: https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report/?tab=comments#comment-88690. Any disks you don't explicitly configure with RDM may be used for scratch volumes or any other normal ESXi purpose.
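    If you go the RDM route, the pointer is created from the ESXi command line with vmkfstools.  A minimal sketch, assuming a hypothetical datastore name, VM folder, and disk identifier (substitute your own, which you can read from the ls output):

      # ESXi shell -- placeholder names, adjust to your environment
      # list the physical disks ESXi can address
      ls -l /vmfs/devices/disks/
      # create a physical-mode RDM pointer for one disk (use -r instead of -z for virtual mode)
      vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/DSM/disk1-rdm.vmdk

    The resulting .vmdk is then added to the XPEnology guest as an existing disk on its vSATA controller.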

     

    The other issue in play is that the DSM variant you select has to support your passthrough controller.  For AHCI SATA, it's no big deal.  For LSI, you will have the best luck with DS3615xs, but some controllers aren't well supported.  With RDM, there is no need for a controller driver, as the ESXi RDM pointer just looks like a regular SATA drive to the guest and can be attached to a vSATA controller.

  6. So, a couple of comments.

     

    There isn't anything in Jun's script that has to do with the console.  It only manipulates devices that have a partition structure.  If the console is failing, it has to be either a general problem with 6.2.3 and the loader, or something else going on.  Can you reboot a few times and see if that behavior is consistent?

     

    That code from 1.04b is identical to 1.03b and is probably the same back to the first Jun loader.

     

    There are really two problems being solved here.  First, the synoboot device is being generated as a generic block device in the range reserved for esata.  If that device still exists late in the boot sequence, DSM sees it and tries to mount the share, so we want the device gone before that.  Second, we need the boot device to be present as synoboot.

     

    When you are checking on status, run ls /dev/syno* and you should see synobios and three synoboot entries if everything is working right.
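    For reference, a healthy system looks something like this (illustrative output; only the synobios and three synoboot entries matter here):

      # ls /dev/syno*
      /dev/synobios  /dev/synoboot  /dev/synoboot1  /dev/synoboot2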

     

    I was hoping not to have to recommend this, but if you either change DiskIdxMap to 1600 in grub.cfg or change the portcfg bitmasks in /etc/synoinfo.conf, the esata shares will no longer be generated, and you can keep running the synoboot fix in /usr/local/etc/rc.d.
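    For the synoinfo.conf route, the relevant keys are the port bitmasks.  A rough sketch with placeholder values for a 12-slot layout (recalculate the masks for your own disk count before changing anything):

      # /etc/synoinfo.conf -- placeholder bitmasks, not drop-in values
      esataportcfg="0x0"      # no slots mapped to esata, so no esata share is generated
      internalportcfg="0xfff" # slots 1-12 treated as internal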

     

    Then all we have left is the console problem, which I really think is unrelated here but warrants investigation.  AFAIK /dev/ttyS1 is the general-purpose I/O port in Syno hardware used to manage the lights, beeps, and fan speed, and it doesn't do anything in XPEnology.  I think the console is /dev/ttyS0.  It might be informative if you ran dmesg | fgrep tty.
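    Something like the lines below is typical on generic x86 hardware (illustrative output only; your addresses and IRQs will differ), and it shows which ttyS device the kernel is actually using as the console:

      # dmesg | fgrep tty
      console [ttyS0] enabled
      serial8250: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A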

  7. On 4/18/2020 at 3:22 AM, DerMoeJoe said:

    @The Chief

     

    Thanks for your work, but the new NVMe patch doesn't work for me with 6.2.3.

     

    I've replaced the old patch, modified the rights with chmod, started the script, and rebooted the NAS, but I'm unable to see the NVMe drives within Storage Manager.

     

    If you tried to patch with the old patch, the file won't be in the correct state for the new patch.  Restore the old file first (or just use the one @The Chief gave you).
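    A minimal sketch of that restore, assuming you kept a backup copy before patching and that the patch target is /usr/lib64/libsynonvme.so.1 (both the backup path and the target are assumptions; use whatever applies to the patch version you ran):

      # assumed backup location and patch target -- adjust to your own
      cp /volume1/backup/libsynonvme.so.1.orig /usr/lib64/libsynonvme.so.1
      # then run the new patch script and reboot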

    • Like 1
    • Thanks 1
  8. For those Linux newbs who need exact instructions on installing the script, follow the steps here.  Please be VERY careful with syntax, especially when working as root.

    1. If you have not turned on SSH remote access in Control Panel, do it now.
       

    2. Download PuTTY or another SSH terminal emulator for SSH access.
       

    3. Connect to your NAS with PuTTY and use your admin credentials.  It will give you a command-line prompt of "$", which means non-privileged.
       

    4. In File Station, upload FixSynoboot.sh to a shared folder.  If the folder name is "folder" and it's on Volume 1, the path on the command line is /volume1/folder.
       

    5. From the command line, enter "ls /volume1/folder/FixSynoboot.sh" and the filename will be returned if it was uploaded correctly.  Case always matters in Linux.

      $ ls /volume1/folder/FixSynoboot.sh
      FixSynoboot.sh

       

    6. Enter "sudo -i" which will elevate your admin to root.  Use the admin password again. Now everything you do is destructive, so be careful.  The prompt will change to "#" to tell you that you have done this.

      $ sudo -i
      Password:
      #

       

    7. Copy the file from your upload location to the target location.

      # cp /volume1/folder/FixSynoboot.sh /usr/local/etc/rc.d

       

    8. Make the script executable.

      # chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh

       

    9. Now verify the install. The important part is the first -rwx which indicates FixSynoboot.sh can be executed.

      # ls -la /usr/local/etc/rc.d/FixSynoboot.sh
      -rwxr-xr-x  1 root root 2184 May 18 17:54 FixSynoboot.sh

       

    Ensure the file configuration is correct, then reboot the NAS and FixSynoboot will be enabled.

    • Like 7
    • Thanks 11
  9. You can't exactly pass disks through.  You can pass controllers through, and the disks come with them.

     

    Or you can use RDM (Raw Device Mapping).  It takes an ESXi-addressable block storage device and creates an alias that can be added to a guest VM, where it behaves as a SATA drive.  It's a way to support controllers and disk device types that DSM cannot handle.

     

    See this: https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report/?tab=comments#comment-88690

     

  10. Arguably this is the wrong approach.  The point of DSM is to do sophisticated software RAID and not leave that task to comparatively unintelligent firmware on your hardware controller.

     

    So you really "want" to see individual disks so that those disks can be presented to your guest running DSM.  Best case passthrough the RAID controller entirely to DSM if it is supportable.  If that doesn't work, you can create an RDM profile for each drive and attach those to your guest.

     

    You'll get the best performance and array features (btrfs self-healing, dynamic RAID management, more RAID options, SHR if you want that) if you let DSM do the work.
