flyride
Posts posted by flyride
-
You'll have to be more specific about what you have now, and what you are trying to do. There isn't anything magic about ESXi, but it will matter how you are provisioning your storage.
-
On 11/21/2019 at 4:55 AM, polik said:
I'm using "Quicknick’s Loaders DSM 6.1.X", downloaded from https://xpenology.club/downloads/
Unfortunately, in both cases (XPEnology_DSM_6.1.x-quicknick-3.0.zip and ds3615xs_DSM_6.1.x-quicknick-3.0.zip), I'm getting an error message during OS install from a file (previously downloaded from the Synology site) - something about file corruption.
My question is, what am I doing wrong?
Quicknick's loader is not supported, and it is not supported on this forum.
-
Especially since, if someone bought several at once, they all fail within a few moments of each other. Unbelievable.
-
Download 6.2.1, install and follow the real3x procedure. It's not just replacing extra.lzma, but you have to run the scripted commands that cause the i915 driver to be disabled.
Then update to the latest version.
-
ESXi needs its own storage. It can boot off of a USB key, but it will also need a place for your VM definitions to live, and any virtual disks. This is called "scratch" storage.
XPenology's boot loader under ESXi is a vdisk hosted on scratch. The disks that DSM manages should usually not be - one exception is a test XPenology VM. In any case, if you use scratch to provide virtual disks for DSM to manage, the result won't be portable to a baremetal XPenology or Synology installation.
As you have researched, one alternative is to define RDM definitions (essentially, virtual pointers) for physical disks attached to ESXi. RDM disks can then be dedicated to the XPenology VM and won't be accessible by other VM's. The reasons to do this are 1) to provide an emulated interface to a disk type not normally addressable by DSM, such as NVMe, or 2) allow for certain drives to be dedicated to DSM (and therefore portable) and others to scratch for VM shared access - all on the same controller.
If you have access to other storage for scratch... for example, an M.2 NVMe SSD, you can "passthrough" your SATA controller - i.e. dedicate it and all of its attached drives to the XPenology VM. The controller and drives will then actually be seen by the VM (and won't be virtualized at all) and will be portable. An alternative to the M.2 drive is another PCIe SATA controller, as you suggest.
On my own "main" XPenology system, I do all of the above. There is a USB boot drive for ESXi, an NVMe M.2 drive for scratch, and the XPenology VM has two U.2 connected NVMe drives translated to SCSI via RDM, and the chipset SATA controller passed through with 8 drives attached. Other VM's run along with XPenology, using virtual disks hosted on scratch.
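For what it's worth, a physical-mode RDM pointer is created from the ESXi host shell with vmkfstools. A minimal sketch - the device identifier, datastore, and file names below are placeholders, so substitute your own:

```
# List attached disks to find the device identifier (t10.* or naa.* name)
ls /vmfs/devices/disks/

# Create a physical-mode (-z) RDM mapping file on scratch storage;
# virtual-mode RDM would be -r instead, but physical mode avoids
# any encapsulation of the disk
vmkfstools -z /vmfs/devices/disks/t10.EXAMPLE_DISK_ID \
  /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk
```

The resulting .vmdk is then attached to the XPenology VM as an existing disk.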
-
Yes, a migration upgrade works as long as there is no ESXi on the data disks themselves. It's the same platform (DSM).
It might be simpler just to pass through the SATA controller and ensure your drives are 100% seen by DSM. If you must RDM, make sure it's physical RDM so that there is no encapsulation of the partition at all. The only reason I ever found to do this was to support NVMe drives in volumes. See my sig for details on that if you aren't familiar.
-
Done 😀
-
Look up the real3x mod; it should fix your problem.
-
EDIT: read through this whole thread for patch information specific to the DSM version you are running.
This is nice work, and thank you for your contribution.
For those who aren't familiar with patching binary files, here's a script to enable nvme support per this research.
It must be run as sudo and you should reboot afterward.
Note that an update to DSM might overwrite this file such that it has to be patched again (and/or can't be patched due to string changes, although this is unlikely). Your volume might appear as corrupt or not mountable until the patch is reapplied. To be very safe, you may want to remove the cache drive from the volume prior to each update.
#!/bin/ash
# patchnvme for DSM 6.2.x
#
TARGFILE="/usr/lib/libsynonvme.so.1"
PCISTR="\x00\x30\x30\x30\x30\x3A\x30\x30\x3A\x31\x33\x2E\x31\x00"
PHYSDEVSTR="\x00\x50\x48\x59\x53\x44\x45\x56\x50\x41\x54\x48\x00\x00\x00\x00\x00\x00"
PCINEW="\x00\x6E\x76\x6D\x65\x00\x00\x00\x00\x00\x00\x00\x00\x00"
PHYSDEVNEW="\x00\x50\x48\x59\x53\x44\x45\x56\x44\x52\x49\x56\x45\x52\x00\x00\x00\x00"
#
# Back up the target library before touching it
[ -f $TARGFILE.bak ] || cp $TARGFILE $TARGFILE.bak
if [ $? -ne 0 ]; then
  echo "patchnvme: can't create backup (sudo?)"
  exit 1
fi
# The patch should apply exactly once; abort on zero or multiple matches
COUNT=`grep -obUaP "$PCISTR" $TARGFILE | wc -l`
if [ $COUNT -eq 0 ]; then
  echo "patchnvme: can't find PCI reference (already patched?)"
  exit 1
fi
if [ $COUNT -gt 1 ]; then
  echo "patchnvme: multiple PCI references! abort"
  exit 1
fi
COUNT=`grep -obUaP "$PHYSDEVSTR" $TARGFILE | wc -l`
if [ $COUNT -eq 0 ]; then
  echo "patchnvme: can't find PHYSDEV reference (already patched?)"
  exit 1
fi
if [ $COUNT -gt 1 ]; then
  echo "patchnvme: multiple PHYSDEV references! abort"
  exit 1
fi
sed "s/$PCISTR/$PCINEW/g" $TARGFILE >$TARGFILE.tmp
if [ $? -ne 0 ]; then
  echo "patchnvme: patch could not be applied (sudo?)"
  exit 1
fi
sed "s/$PHYSDEVSTR/$PHYSDEVNEW/g" $TARGFILE.tmp >$TARGFILE
if [ $? -ne 0 ]; then
  echo "patchnvme: patch could not be applied (sudo?)"
  exit 1
fi
echo "patchnvme: success"
rm $TARGFILE.tmp 2>/dev/null
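If you want to see how the binary-safe search works before running the script against the real library, here's a harmless demonstration on a throwaway file (not /usr/lib/libsynonvme.so.1 - the file name and contents are made up for illustration). It requires GNU grep with PCRE support, which DSM ships:

```shell
# Create a scratch file containing the NUL-delimited PCI address
# string ("\x000000:00:13.1\x00") that the patch script looks for
TMP=$(mktemp)
printf 'HEAD\x000000:00:13.1\x00TAIL' > "$TMP"

# Same flags as the patch script: -o offsets, -b byte counts,
# -U/-a binary-safe, -P PCRE so \xNN escapes match raw bytes
COUNT=$(grep -obUaP '\x00\x30\x30\x30\x30\x3A\x30\x30\x3A\x31\x33\x2E\x31\x00' "$TMP" | wc -l)
echo "PCI string matches: $COUNT"

rm -f "$TMP"
```

A count of exactly 1 is what the script requires before it will patch; anything else and it aborts.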
-
6 hours ago, mrpeabody said:
It was btrfs
Then using an ext filesystem utility (e2fsck) is ill advised.
btrfs really doesn't have any user-accessible repair options in Synology. It's mostly designed for self-healing and then if that doesn't work, Synology remote access recovery.
Here's a data recovery thread from a while back. If you want better advice, post some screenshots or more information about your issue.
-
-
5 minutes ago, Jamzor said:
I have tried creating new disks with the SATA controller multiple times. I tried setting "dependent" and "persistent" but still every time I boot it says no disks found...
I don't know what I'm doing wrong. The only way I get it to actually detect disks is with the SCSI controller.
I think the problem may be that you have the wrong VM hardware emulation profile.
It's important that you pick the "Other Linux 3.x x64" option when you initially build the VM. In that particular tutorial it's not very prominently shown, but it is there.
-
For 6.2.x use all SATA. Usually I use SATA 0:0 for synoboot and attach other virtual drives as SATA 1:x.
This is in the Tutorial...
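For reference, that layout ends up looking something like this in the VM's .vmx file - a sketch only, with placeholder file names, not a drop-in config:

```
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "synoboot.vmdk"
sata1.present = "TRUE"
sata1:0.present = "TRUE"
sata1:0.fileName = "datadisk1.vmdk"
```

Keeping the loader alone on SATA 0:0 and the data disks on a separate SATA 1:x controller matches what DSM expects to see.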
-
You might try looking at the updates threads, part of the reporting that happens there is a description of the hardware and how it is deployed.
-
Many folks don't know that you can get DACs, which are copper cables with embedded SFPs at each end. That gets rid of a lot of power and heat too. I would much rather have an SFP+ port than a 10GBASE-T port, just for the reduction of heat in the switch and/or NIC.
-
As a strong advocate of 10GbE networking on XPenology, I am happy to finally see an affordable, passively-cooled switch on the market (even though many folks don't even need a switch - a direct-connected multi-port NIC will often suffice).
To use it, you'll have to familiarize yourself with DACs and/or optical SFP+ modules, but this is a major step forward for the price.
https://www.servethehome.com/mikrotik-crs305-1g-4sin-review-4-port-must-have-10gbe-switch/
-
I've had good results with Mellanox ConnectX-3 single and dual port cards.
-
There are loaders for 6.1.x (1.02b) and for 6.2.x (1.03b and 1.04b, depending upon your hardware needs), so that will govern the major versions. Beyond that, there are some compatibility factors on 6.2.x that affect your ability to upgrade, which you may discern from the links in the post above.
-
You were so close, that knowledge arrives on the 16th minute of looking.
https://xpenology.com/forum/topic/9392-general-faq/?tab=comments#comment-95507
-
Look up the real3x mod to disable the faulty i915 video driver; the combination you are proposing works fine.
-
DSM installs on all drives, unless you perform some unorthodox heroics. When a disk shows "Initialized" it means that the DSM partition structure has been built on it.
-
Thanks! This is helpful, but not conclusive yet. If you don't mind iterating with me a little bit, please post the output of:
- synonvme --m2-card-model-get /dev/nvme0
- synodiskport -cache
- fdisk -l /dev/nvme0n1
- udevadm info /dev/nvme0n1
-
If a low-power CPU appeals to you and you can cost-effectively source a "T" CPU, it can make sense. But it may be cheaper to just buy the lowest-performance "K" CPU available instead and underclock/undervolt it using a Z370/Z390 board. The result will be essentially the same.
-
It depends on which DSM hardware platform you are using. The 1.02b/1.03b loaders and DS3615 work well with Mellanox (I'm using that myself). With 1.03b you will need to keep an Intel GbE card in the system. There is less empirical information available about 1.04b and 10GbE (the DS918 hardware does not have any provision for add-on cards), but there are some drivers in the release.
See these links:
DS3615xs on esxi 6.7
Ok, this is a simple system with the following likely attributes:
Hopefully you can see that sparse virtual disk storage will be problematic in a production environment, because your virtual disk will rapidly exceed the SSD's physical storage once you start putting things onto it. This is fine for testing, to simulate a larger disk, but definitely not for production.
Assuming I am correct about the second disk (assuming for now it is an HDD) you wish to add, there are three ways to connect it.
This is probably a bit overwhelming. You seem new to ESXi so just build up and burn down some test systems and do a lot of research on configurations, until you get the hang of it.
Good luck!