Everything posted by flyride

  1. Hmm, I typed in "vmtools" in the search bar and this was returned:
  2. Boggle. There are three separate threads under that link that directly list and discuss hardware successfully used with XPEnology. Another option is to peruse the upgrade threads to see exactly what hardware tends to be compatible with upgrades.
  3. The existing loaders work, and the individual who authored them has not been active since January (and historically is not very active anyway), so I would not expect an updated loader. You have to make your own decision about when to pull the trigger on upgrades, as they do take some time to sort out. That said, 6.2.3 seems like a good release for XPEnology, as many of the interim hacks that were necessary for 6.2.1 and 6.2.2 are no longer needed (in fact, most of the problems people are having with 6.2.3 are because they still have those hacks in place and they no longer work properly). The issue was partially a problem before and folks lived with it; this is a complete fix despite the band-aid style implementation.
  4. Consider the FAQ and Tutorials here instead of YouTube:
     https://xpenology.com/forum/forum/83-faq-start-here/
     https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
     Installing 6.2 is the same as installing 6.1. Download the 6.2.3 PAT file for DS918+, DS3615xs, or DS3617xs from Synology per the installation tutorial. You need loader 1.03b for DS3615xs/DS3617xs, or 1.04b for DS918+. Use DS918+ if you want NVMe cache capability, or DS3615xs if you want RAIDF1 capability. The only reason to use DS3617xs is that it supports more threads, but you have a 4-core CPU, so DS3615xs and DS3617xs are equivalent for you.
     https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
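     Not part of the original post, but as a quick illustration of the thread-count point above: you can check how many threads your CPU exposes from any Linux shell before picking a platform (the per-platform thread caps are community-reported figures, so treat them as assumptions).
       $ nproc                                 # threads visible to the kernel
       $ grep -c ^processor /proc/cpuinfo      # same figure, the long way
     If the result is already at or below the limit commonly attributed to DS3615xs, DS3617xs buys you nothing.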
  5. If your add-on SATA card uses a port multiplier, it may not be able to address all 6 ports. There are many threads on which cards work well, and which do not. Before you go replacing hardware, test extensively with DiskIdxMap and SataPortMap options with disks that are not in production use.
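     As a hedged illustration (not from the original post): on a Jun 1.03b/1.04b-style loader these options live in grub.cfg, typically on the first partition of the loader image, and the values below are placeholders to experiment with rather than recommendations.
       # grub/grub.cfg on the loader's first partition (illustrative values)
       set sata_args='DiskIdxMap=0C SataPortMap=4 SasIdxMap=0'
     SataPortMap declares how many ports DSM should expect on each controller, and DiskIdxMap sets the starting drive index (two hex digits per controller); test combinations only with disks that are not in production use, as noted above.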
  6. The motherboard and CPU will work; the GPU cannot be used.
  7. Download from Synology's support page and install manually through the Control Panel.
  8. Yes, SATA works fine for cache. Be advised more RAM is probably better than SSD cache for most workloads. R/W cache exposes your array to crash-induced data loss. R/O cache is okay.
  9. Try 6.2.3 and DS918 if you are doing a clean build with Realtek network.
  10. As yet nobody has posted an attempt to install DS3617xs on 6.2.3 under ESXi, so I went to check it myself. Confirming it works fine with the addition of FixSynoboot.sh, but it requires DiskIdxMap=0C00 in grub.cfg for drives to start at /dev/sda (isn't this true for all ESXi installs? I don't recall seeing it in the tutorials, but I have 0C00 on all my test ESXi systems). vmxnet3 works too!
  11. https://xpenology.com/forum/forum/90-user-reported-compatibility-thread-drivers-requests/
  12. You are literally posting on the tutorial thread.
  13. DSM SMART support is hit or miss in general, and more so with RDM. Since with an RDM device the drive is technically still attached to the ESXi controller, you may be able to use ESXi to report SMART status. That works on my NVMe drives, but I have never tried it with an LSI or other SATA controller.
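     If you want to try the ESXi-side route, a sketch from the ESXi shell (the device identifier is a placeholder; whether SMART data comes back depends on the controller):
       # list the device identifiers ESXi knows about
       $ esxcli storage core device list | grep -i 'Display Name'
       # query SMART data for one of them
       $ esxcli storage core device smart get -d t10.EXAMPLE_DEVICE_ID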
  14. How much RAM do you plan to give your guest running XPEnology? Any free memory after loading the OS is cache. All system designs are now moving the hardware interfaces closer to the CPU and PCI bus; controller logic is just getting in the way.
     Regarding configuration, there are two ways to do this. You can pass through the controller (in HBA mode) in its entirety, which will preclude the use of any of those disks by other guests. The other way is RDM, where ESXi retains ownership of the controller, but you select specific drives and build a Raw Device Mapping pointer for each, which is then attached to the XPEnology guest. See: https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report/?tab=comments#comment-88690. Any disks you don't explicitly configure with RDM may be used for scratch volumes or any other normal ESXi purpose.
     The other issue in play is that the DSM variant you select has to support your passthrough controller. For AHCI SATA, no big deal. For LSI, you will have the best luck with DS3615xs, but some controllers aren't well supported. With RDM, there is no need for a controller driver, because the ESXi RDM pointer just looks like a regular SATA drive to the guest and can be attached to a vSATA controller.
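     For the RDM route described above, a minimal sketch of creating the mapping file from the ESXi shell; the disk identifier and datastore path are placeholders, and -z (physical compatibility) vs -r (virtual compatibility) is a judgment call for your setup:
       # find the physical disk's identifier
       $ ls /vmfs/devices/disks/
       # create a physical-compatibility RDM pointer on a datastore
       $ vmkfstools -z /vmfs/devices/disks/t10.EXAMPLE_DISK_ID /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk
     The resulting disk1-rdm.vmdk is then added to the XPEnology guest as an existing disk.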
  15. This is a flawed approach. There is nothing that your hardware RAID can do that DSM cannot do better. Consider this thread for more information: https://xpenology.com/forum/topic/28228-lsi-esxi-and-xpenology-fujitsu-t2540-m1
  16. Waiting for an example of DS3617xs upgrade success on baremetal... (confirmed)
  17. Please post this on an appropriate thread; SMART info has nothing to do with synoboot.
  18. Script updated to gracefully remove loader partitions mounted as esata shares if they exist. No need to edit DiskIdxMap with this new version.
  19. I was able to duplicate your report with the esata shares populating before the loader device is killed. Apparently there are some differences in boot timing between 918 and 3615. The DiskIdxMap=1600 setting should solve the esata problem on 3615 for now, but I'm searching for a simpler fix to the script.
  20. So, a couple of comments. There isn't anything in Jun's script that has anything to do with the console; it only manipulates devices that have a partition structure. If the console is failing, it has to be a general problem with 6.2.3 and the loader, or something else is going on. Can you reboot a few times and see if that behavior is reliable? That code from 1.04b is identical to 1.03b and is probably the same back to the first Jun loader.
     There are really two problems being solved here. First, the synoboot device is being generated as a generic block device in the range reserved for esata. If the device still exists too late in the boot sequence, DSM sees it and tries to mount the share, so we want the device gone before that. Second, we need the boot device to be present as synoboot. When you are checking on status, run ls /dev/syno* and you should see synobios and three synoboot entries if everything is working right.
     I was hoping not to have to recommend this, but if you either change DiskIdxMap=1600 in grub.cfg, or change the portcfg bitmasks in /etc/synoinfo.conf, that will keep the esata shares from generating, and you can keep running the synoboot fix in /usr/local/etc/rc.d.
     Then all we have left is the console problem, which I really think is unrelated here but warrants investigation. AFAIK /dev/ttyS1 is the general-purpose I/O port on Syno hardware used to manage the lights, beep and fan speed, and doesn't do anything in XPEnology; I think the console is /dev/ttyS0. It might be informative if you did a dmesg | fgrep tty
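     To recap the checks from this post in one place (expected device names are the ones described above):
       $ ls /dev/syno*
       # healthy: synobios plus three synoboot entries (synoboot, synoboot1, synoboot2)
       $ dmesg | fgrep tty
       # shows which serial ports the kernel registered, to sort out ttyS0 vs ttyS1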
  21. If you tried to patch with the old patch, the file won't be in the correct state for the new patch. Restore the old file first (or just use the one @The Chief gave you).
  22. For those Linux newbs who need exact instructions on installing the script, follow this here. Please be VERY careful with syntax especially when working as root.
     If you have not turned on ssh in Control Panel remote access, do it.
     Download putty or other ssh terminal emulator for ssh access.
     Connect to your nas with putty and use your admin credentials. It will give you a command line "$" which means non-privileged.
     In File Station, upload FixSynoboot.sh to a shared folder. If the folder name is "folder" and it's on Volume 1, the path in command line is /volume1/folder
     From command line, enter "ls /volume1/folder/FixSynoboot.sh" and the filename will be returned if uploaded correctly. Case always matters in Linux.
       $ ls /volume1/folder/FixSynoboot.sh
       FixSynoboot.sh
     Enter "sudo -i" which will elevate your admin to root. Use the admin password again. Now everything you do is destructive, so be careful. The prompt will change to "#" to tell you that you have done this.
       $ sudo -i
       Password:
       #
     Copy the file from your upload location to the target location.
       # cp /volume1/folder/FixSynoboot.sh /usr/local/etc/rc.d
     Make the script executable.
       # chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
     Now verify the install. The important part is the first -rwx which indicates FixSynoboot.sh can be executed.
       # ls -la /usr/local/etc/rc.d/FixSynoboot.sh
       -rwxr-xr-x 1 root root 2184 May 18 17:54 FixSynoboot.sh
     Ensure the file configuration is correct, reboot the nas and FixSynoboot will be enabled.
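     After the reboot, a quick check that the script did its job, based on the device names discussed in the synoboot posts above:
       $ ls /dev/synoboot*
       # you should see the three synoboot entries (synoboot, synoboot1, synoboot2)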
  23. Something wrong with this one? It's pinned right in the Tutorials/Guides section. There really isn't anything different from an installation standpoint on any version of 6.x ESXi.
  24. You can't exactly pass disks through. You can pass controllers through, and the disks come with them. Or you can use RDM (Raw Device Mapping), which takes an ESXi-addressable block storage device and creates an alias that can be added to a guest VM and behaves as a SATA drive. It's a way to support controllers and disk device types that DSM cannot handle. See this: https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report/?tab=comments#comment-88690
  25. Arguably this is the wrong approach. The point of DSM is to do sophisticated software RAID and not leave that task to comparatively unintelligent firmware on your hardware controller. So you really "want" to see individual disks so that those disks can be presented to your guest running DSM. Best case, pass the RAID controller through entirely to DSM if it is supportable. If that doesn't work, you can create an RDM profile for each drive and attach those to your guest. You'll get the best performance and array features (btrfs self-healing, dynamic RAID management, more RAID options, SHR if you want that) if you let DSM do the work.