flyride

Everything posted by flyride

  1. https://xpenology.com/forum/topic/23473-new-to-xpenology/?do=findComment&comment=127956 Please note the post discussing the FAQ
  2. Nobody knows. The current 1.03b and 1.04b loaders seem to work with DSM 6.2.x but any new DSM patch can (and does with surprising regularity) fail to work with them. The community has found workarounds in most cases. That's the reason for this thread here: https://xpenology.com/forum/forum/78-dsm-updates-reporting/ Look for folks with similar hardware, virtualization, loader and DSM versions being successful before attempting any DSM update. And seeing as you are planning to use ESXi, there really is no excuse not to have a test XPenology DSM instance to see if the upgrade fails or succeeds before committing the update to your production VM. When Synology releases DSM 7.0, it's a virtual certainty that the current loaders will not work. Someone will have to develop a new DSM 7.0 loader hack, and there is really no information about how long it might take or how difficult it may be.
  3. It's not possible for you to agree to Synology's Terms of Service. By using XPenology to connect to Synology services, you are directly violating them. Please note this from the FAQ: https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=82390
  4. Clicking the upgrade button would be unwise. You will need to burn a new boot loader, and you will need to evaluate your hardware to see what combination of loader and code to use. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Please have a backup/backout plan as it's not always straightforward depending on your system.
  5. Yes, that's your boot loader and is a required runtime filesystem for DSM. It's not an installation key.
  6. I'm not an HP expert, but I can answer the distilled-down question above. The CPU you have is an Ivy Bridge architecture, which is too old to run the DS918 version of DSM, compiled to use new instructions present in Haswell or later. So those running Ivy Bridge architecture have no choice but to run DS3615xs. Hardware transcoding requires Intel Quicksync drivers that are only implemented on DS918 DSM. This post may help you understand the limitations further.
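     For reference, a minimal way to check whether a CPU meets the Haswell-or-later requirement, run from any Linux shell on the box in question (MOVBE and FMA are the Haswell-era flags commonly cited as the gating instructions; treat this as a quick sanity check, not a guarantee):

       grep -o -w -E 'movbe|fma' /proc/cpuinfo | sort -u
       # a Haswell-or-newer CPU prints both "fma" and "movbe";
       # if either is missing (as on Ivy Bridge), stick with DS3615xs/DS3617xs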
  7. MBR and Legacy are two different things. If you can support a GPT partition table, definitely do so.
     Loader 1.02b (for DSM 6.1.x) can work in either Legacy or UEFI mode.
     Loader 1.03b (for DSM 6.2.x) works in Legacy mode.
     Loader 1.04b (for DSM 6.2.x) works only in UEFI mode.
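     For reference, a minimal check of which mode a Linux environment actually booted in (works from any live-Linux shell; the EFI directory only exists under a UEFI boot):

       [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via Legacy BIOS/CSM"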
  8. Nothing that a VPN won't solve. If you think your "patched up" Synology box can't be hacked, you need to meet some white hat security folks.
  9. Sorry about the event, and sorry to bring you bad news. As you know, RAID 5 spans parity across the array such that all members, less one, must be present for data integrity. Your data may have been recoverable at one time, but once the repair operation was initiated with only 2 valid drives, the data on all four drives was irreparably lost. I've highlighted the critical items above.
  10. If the array was healthy when shut down, it should work even with the drives out of order. But that is a last resort. I'd use some blank drives to figure out the order of the ports before installing.
  11. OK, this is a simple system with the following likely attributes:
      • VMware is probably installed on a USB key and booting from that.
      • You have a small (32GB) SSD which is dedicated for the VMware scratch datastore.
      • You have created an XPEnology VM, storing the virtual boot loader and a 200GB sparsely populated virtual disk as storage for XPEnology, both on the scratch datastore. Presumably you have installed DSM to the virtual disk, but it's not clear whether you have built a storage pool or a volume.
      • You probably have a physical HDD you want to use as a second (?) disk for XPEnology. This is not explained or shown in the system pics.
      Hopefully you can see that sparse virtual disk storage will be problematic in a production environment, because your virtual disk will rapidly exceed the SSD's physical capacity once you start putting things onto it. This is fine for a test to simulate a larger disk, but definitely not for production.
      Assuming I am correct about the second disk you wish to add (assuming for now it is an HDD), there are three ways to connect it:
      1) Create a new datastore in ESXi and locate it on the HDD. Then create a virtual disk for some or all of it, and attach that virtual disk to your VM; it should then be visible in DSM.
      2) Create an RDM pointer to the HDD (see my sig on how to do this). Then attach the RDM definition to your VM. The entire disk should then be visible in DSM.
      3) If your SSD is not on the same SATA controller as your HDD (for example, if the SSD is an NVMe drive), you can pass your SATA controller through to the VM entirely. Any attached drives will then be visible to DSM, as long as the SATA controller is supported by DSM.
      This is probably a bit overwhelming. You seem new to ESXi, so just build up and burn down some test systems and do a lot of research on configurations until you get the hang of it. Good luck!
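     For option 2 in the post above, a rough sketch of creating the RDM pointer from the ESXi command line; the disk identifier and datastore path below are placeholders, so substitute your own:

       # list the physical disks ESXi can see and note the target HDD's identifier
       ls -l /vmfs/devices/disks/
       # create a physical-compatibility RDM pointer file on the scratch datastore
       vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/xpenology/hdd1-rdm.vmdk
       # (use -r instead of -z for a virtual-compatibility RDM)
       # then add hdd1-rdm.vmdk to the XPEnology VM as an existing hard disk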
  12. You'll have to be more specific about what you have now, and what you are trying to do. There isn't anything magic about ESXi, but it will matter how you are provisioning your storage.
  13. Quicknick's loader is not supported, and not supported here.
  14. Especially as, if someone bought several at once, they all fail within a few moments of each other. Unbelievable.
  15. Download 6.2.1, install it, and follow the real3x procedure. It's not just replacing extra.lzma; you also have to run the scripted commands that disable the i915 driver. Then update to the latest version.
  16. ESXi needs its own storage. It can boot off of a USB key, but it will also need a place for your VM definitions to live, and any virtual disks. This is called "scratch" storage. XPenology's boot loader under ESXi is a vdisk hosted on scratch. The disks that DSM manages should usually not be - one exception is a test XPenology VM. In any case, if you use scratch to provide virtual disks for DSM to manage, the result won't be portable to a baremetal XPenology or Synology installation.
      As you have researched, one alternative is to create RDM definitions (essentially, virtual pointers) for physical disks attached to ESXi. RDM disks can then be dedicated to the XPenology VM and won't be accessible by other VMs. The reasons to do this are 1) to provide an emulated interface to a disk type not normally addressable by DSM, such as NVMe, or 2) to allow certain drives to be dedicated to DSM (and therefore portable) and others to scratch for shared VM access - all on the same controller.
      If you have access to other storage for scratch - for example, an M.2 NVMe SSD - you can "passthrough" your SATA controller, i.e. dedicate it and all of its attached drives to the XPenology VM. The controller and drives will then actually be seen by the VM (and won't be virtualized at all) and will be portable. An alternative to the M.2 drive is another PCIe SATA controller, as you suggest.
      On my own "main" XPenology system, I do all of the above. There is a USB boot drive for ESXi, an NVMe M.2 drive for scratch, and the XPenology VM has two U.2-connected NVMe drives translated to SCSI via RDM, plus the chipset SATA controller passed through with 8 drives attached. Other VMs run alongside XPenology, using virtual disks hosted on scratch.
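     For reference, a short sketch of taking inventory from the ESXi shell before deciding what goes to scratch, RDM, or passthrough (standard esxcli/lspci tooling; output varies by host):

       # PCI storage controllers - candidates for passthrough
       lspci | grep -i -E 'sata|ahci|nvme'
       # storage adapters as ESXi sees them (vmhba numbers)
       esxcli storage core adapter list
       # physical disks and the identifiers used when building RDM pointers
       esxcli storage core device list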
  17. Yes, a migration upgrade works as long as there is no ESXi on the data disks themselves. It's the same platform (DSM). It might be simpler just to pass through the SATA controller and ensure your drives are 100% seen by DSM. If you must RDM, make sure it's physical RDM so that there is no encapsulation of the partition at all. The only reason I ever found to do this was to support NVMe drives in volumes. See my sig for details on that if you aren't familiar.
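     Whichever path is used, it's worth confirming from an SSH session inside DSM that every drive really is visible before trusting it with data; a minimal sketch (device names are examples):

       # passed-through or physical-RDM drives appear as ordinary /dev/sd* devices
       ls /dev/sd*
       # md arrays DSM has assembled and their member disks
       cat /proc/mdstat
       # partition layout of one disk, e.g. the first data drive
       sudo fdisk -l /dev/sda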
  18. Look up the real3x mod; it should fix your problem.
  19. NVMe cache support

    This is nice work, and thank you for your contribution. For those who aren't familiar with patching binary files, here's a script to enable NVMe support per this research. It must be run as sudo and you should reboot afterward. Note that an update to DSM might overwrite this file such that it has to be patched again (and/or can't be patched due to string changes, although this is unlikely). Your volume might appear as corrupt or not mountable until the patch is reapplied. To be very safe, you may want to remove the cache drive from the volume prior to each update.

    #!/bin/ash
    # patchnvme for DSM 6.2.x
    #
    TARGFILE="/usr/lib/libsynonvme.so.1"
    PCISTR="\x00\x30\x30\x30\x30\x3A\x30\x30\x3A\x31\x33\x2E\x31\x00"
    PHYSDEVSTR="\x00\x50\x48\x59\x53\x44\x45\x56\x50\x41\x54\x48\x00\x00\x00\x00\x00\x00"
    PCINEW="\x00\x6E\x76\x6D\x65\x00\x00\x00\x00\x00\x00\x00\x00\x00"
    PHYSDEVNEW="\x00\x50\x48\x59\x53\x44\x45\x56\x44\x52\x49\x56\x45\x52\x00\x00\x00\x00"
    #
    [ -f $TARGFILE.bak ] || cp $TARGFILE $TARGFILE.bak
    if [ $? == 1 ]; then
      echo "patchnvme: can't create backup (sudo?)"
      exit
    fi
    COUNT=`grep -obUaP "$PCISTR" $TARGFILE | wc -l`
    if [ $COUNT == 0 ]; then
      echo "patchnvme: can't find PCI reference (already patched?)"
      exit
    fi
    if [ $COUNT -gt 1 ]; then
      echo "patchnvme: multiple PCI reference! abort"
      exit
    fi
    COUNT=`grep -obUaP "$PHYSDEVSTR" $TARGFILE | wc -l`
    if [ $COUNT == 0 ]; then
      echo "patchnvme: can't find PHYSDEV reference (already patched?)"
      exit
    fi
    if [ $COUNT -gt 1 ]; then
      echo "patchnvme: multiple PHYSDEV reference! abort"
      exit
    fi
    sed "s/$PCISTR/$PCINEW/g" $TARGFILE >$TARGFILE.tmp
    if [ $? == 1 ]; then
      echo "patchnvme: patch could not be applied (sudo?)"
      exit
    fi
    sed "s/$PHYSDEVSTR/$PHYSDEVNEW/g" $TARGFILE.tmp >$TARGFILE
    if [ $? == 1 ]; then
      echo "patchnvme: patch could not be applied (sudo?)"
      exit
    fi
    echo "patchnvme: success"
    rm $TARGFILE.tmp 2>/dev/null
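    A minimal usage sketch, assuming the script above has been saved as patchnvme.sh (the file name is arbitrary):

      chmod +x patchnvme.sh
      sudo ./patchnvme.sh    # should end with "patchnvme: success"
      sudo reboot
      # after the reboot the NVMe drive should be visible again:
      ls /dev/nvme*

    Once the patched library is in place, the SSD cache is created as usual through Storage Manager.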
  20. Then using an ext filesystem utility (e2fsck) is ill-advised. btrfs really doesn't have any user-accessible repair options in Synology. It's mostly designed for self-healing, and if that doesn't work, Synology remote access recovery. Here's a data recovery thread from a while back. If you want better advice, post some screenshots or more information about your issue.
  21. https://www.fs.com/products/30862.html
  22. I think the problem may be that you have the wrong VM hardware emulation profile. It's important that you pick the "Other Linux 3.x x64" option when you initially build the VM. In that particular tutorial it's not very prominently shown, but it is there.
  23. For 6.2.x use all SATA. Usually I use SATA 0:0 for synoboot, and attach other virtual drives as SATA 1:x. This is in the Tutorial...
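     If you want to double-check the layout outside the GUI, the VM's .vmx file records it directly; a sketch from the ESXi shell (datastore and VM names below are placeholders):

       grep -E '^sata[01]' /vmfs/volumes/datastore1/XPEnology/XPEnology.vmx
       # expected shape, roughly:
       #   sata0.present = "TRUE"
       #   sata0:0.fileName = "synoboot.vmdk"
       #   sata1.present = "TRUE"
       #   sata1:0.fileName = "XPEnology_1.vmdk"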
  24. You might try looking at the updates threads, part of the reporting that happens there is a description of the hardware and how it is deployed. https://xpenology.com/forum/forum/78-dsm-updates-reporting/