XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 07/07/2021 in all areas

  1. After fiddling with it for a day and getting it to work, I thought it would be nice to share the knowledge, especially since "officially" only ESXi is supported, and ESXi is picky about supporting stuff...

The main reason for me to move to a hypervisor was that Synology has not introduced NVMe support AT ALL. Even with a kernel driver, Storage Manager will not see an NVMe drive as an SSD, as an HDD, or as anything else. Synology is silent about this, even though some have requested it on their forums (they do not have a model with an M.2 NVMe connector yet, but they do have models with full-sized x16 or x4 PCIe slots, which can be used with an NVMe adapter card). So I decided to try a hypervisor. On one hand it makes installation and upgrades easier, and I don't need a display connected (since most hypervisors provide a VNC connection to the guest OS). On the other hand, I can install the measly 2-3GB hypervisor and all its tools on the NVMe SSD, and have the rest mounted as a VMDK (or any other virtual disk file). The hard drives are passed through, of course.

I fiddled around with multiple options. XenServer simply refused to boot in UEFI mode; ESXi does not support my network adapter or my built-in SATA controller (B250 chipset); Microsoft's Hyper-V Server has issues if you do not have a domain controller on the network, and as soon as the display output goes off, the device drops its network connection. That left me with Proxmox. I had never used it, and during installation I had some issues with the bootloader (on both the 4.4 release and the 5.0 beta). Luckily there is a workaround: since Proxmox is based on Debian, you can use the Debian netinst image, create a very basic system, and install Proxmox on top. I won't bore you with the details; there are enough guides about installing it to make me think twice before writing an (n+1)th version.

So let's begin!

Requirements:
- A working install of Proxmox 5.0 - it can be 4.4 too, but I only tested this on 5.0. Follow the guide to create the bridged network interface!
- The loader you wish to use. I recommend Jun's Loader, specifically 1.02a2 at the time of writing.

Steps:

0. Edit the loader (if needed) to your liking - MAC address, serial number, etc. This is especially important if you have multiple XPE systems on the same network.

1. Create a new VM in Proxmox.

1.1 Set the name, and make sure to note down the VM ID (if it is your first VM, it should be 100). I'll be using {VM_ID} as a placeholder from now on.

1.2 OS type should be "Linux 4.x/3.x/2.6 Kernel".

1.3 Set the CD/DVD to "Do not use media" (we will remove the virtual disk drive later anyway).

1.4 For the HDD, create a new virtual disk in the format of your liking (qcow2, vmdk or raw); this will be the initial drive. I made sure it uses nearly the whole storage of the OS drive it was installed on (in my case a 256GB NVMe SSD, which, after setup and partitioning, resulted in a 226GiB root drive with 211GB free, so I set the virtual disk's size to 200GB). You can set it to any kind of bus EXCEPT VirtIO. With VirtIO I had performance issues, so I went with SCSI (it supports up to 12 devices anyway, so it is better). This applies to the virtual disk only; VirtIO works just fine for passthrough devices. Apart from the bus, size and format, you don't need to touch a thing.

1.5 For CPU, set kvm64 with as many cores as your host has (including virtual cores if your CPU supports Hyper-Threading!). In my case, with the Intel G4560, this is 4.

1.6 For RAM, leave some slack for the host OS; I went with 7.5GB of the 8GB I have. Ballooning is not required.

1.7 Networking. This is where many things can go wrong. The VirtIO paravirtualized network adapter should work, but to be safe I went with the Intel E1000. On the left select Bridged Mode, with the previously created bridge as the first choice. You can also enable the Firewall if you do not trust Syno's own. Leave the rest of the settings at their defaults.

1.8 On the Confirm page, confirm your settings and create the VM.

2. After the VM is created, the first thing to do is remove the virtual disk drive (IDE 2, if everything went right).

3. Then comes the hard part: you have to add each and every HDD you want to pass through to the config file. The command is simple:

qm set {VM_ID} -[protocol][port] /dev/disk/by-id/[disk-id]

The {VM_ID} part is obvious, but what about the rest?
- [protocol] is the connection protocol you want to use: sata, ide, scsi or virtio. I'm using SATA here, but you can use anything (IDE is not IDEal for us). SATA supports up to 6 devices (port indexes 0-5), SCSI supports up to 12 devices, and VirtIO has no limitation to my knowledge.
- [port] is the first unused port of said protocol. E.g. if you set the initial disk during setup to SATA0 and you want to keep using SATA here, you have to start numbering from 1.
- [disk-id] is the unique identifier of your HDD. Go to /dev/disk/by-id/ and list the disks you see. For most SATA devices you'll see entries like "ata-[MANUFACTURER]_[MODEL]_[SERIAL]".

So let's say I have 4 disks with the disk-ids ata-1, ata-2, ata-3 and ata-4 (yours will be a lot longer, but don't worry, you can use bash autocompletion with the Tab key). For this to work I execute the following commands:

qm set 100 -sata1 /dev/disk/by-id/ata-1
qm set 100 -sata2 /dev/disk/by-id/ata-2
qm set 100 -sata3 /dev/disk/by-id/ata-3
qm set 100 -sata4 /dev/disk/by-id/ata-4

Of course you can add further HDDs to a VM config later with the same command; just keep in mind the limitations of the protocols. A quick way to list the IDs is sketched right below.
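If it helps, here is a minimal sketch of the same disk setup as a script. The short ata-1 ... ata-4 IDs and the VM ID 100 are illustrative only (real IDs are much longer), so substitute your own:

# List stable whole-disk identifiers; entries ending in -partN are partitions, not whole disks
ls -l /dev/disk/by-id/ | grep -v -- '-part'

# Pass four disks through to VM 100 on SATA ports 1-4 (port 0 is taken by the virtual disk)
for i in 1 2 3 4; do
    qm set 100 -sata$i "/dev/disk/by-id/ata-$i"
done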
4. Now comes the hard part: we have to add the bootloader image file to the config. The config file is located under /etc/pve/qemu-server/ and is named {VM_ID}.conf. Open it with nano. This config file defines everything about the VM: disks to mount, which device to use for booting, RAM amount, CPU cores, name of the VM, et cetera. Don't touch anything other than the lines described here!

Copy the synoboot.img to somewhere on your server. If you want to be consistent with the Proxmox setup, copy it under /var/lib/vz/images/{VM_ID}/ - you'll need root for that. After that, open the conf file again. You'll enter a few things here, so make sure you pay attention! Add the following line to the conf file, and make sure you replace the parts in the path:

args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/{VM_ID}/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'

Make sure to replace the path to the synoboot.img with your own! One more thing to edit here: the boot device. Find the line that begins with "boot: " and replace it so it looks like this:

boot: synoboot

Save the file, then start the VM. It should start up without an issue, and you'll be able to use find.synology.com to find your server and install DSM.
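For reference, the finished {VM_ID}.conf might end up looking roughly like the sketch below. This is illustrative only, assuming VM ID 100, the SCSI system disk and E1000 NIC chosen above, and placeholder disk IDs and MAC address; your values will differ:

args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/100/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'
boot: synoboot
cores: 4
memory: 7680
name: xpenology
net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0
ostype: l26
sata1: /dev/disk/by-id/ata-1
sata2: /dev/disk/by-id/ata-2
sata3: /dev/disk/by-id/ata-3
sata4: /dev/disk/by-id/ata-4
scsi0: local:100/vm-100-disk-1.qcow2,size=200G

If the VM ignores the loader image at boot, double-check that the boot: line was replaced and that the args: path really points at your copy of synoboot.img.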
    1 point
  2. The main advantage of Synology boxes is that they are very, very quiet. Can anyone recommend PCs/workstations that work well with XPEnology and are very quiet?
    1 point
  3. I have an HP Microserver N54L, and the problem showed up with the fifth drive - not in the cage, it sits separately - when the drives started throwing bad sectors. Naturally, the search went down the most complicated path first: patching code, digging through all kinds of logs, and other nonsense that had nothing to do with the actual problem. Having racked my brains, I finally decided to check the voltage at the drives, and on the problematic one there was a voltage drop. It turned out to be the Molex-to-SATA adapter. I replaced it and everything has been fine for a couple of years now. The circuitry on the G8 is somewhat different, but if your problem occurs only in two specific slots, it makes sense to look at the cables, power, and the rest of the electrics. There are no miracles... ))))
    1 point
  4. Reolink RLC-410, RLC-420, and RLC-423 cameras
    1 point
  5. Should we be concerned? Long term, with the intermediate updates not working? (assuming we already have backups and have all the functionality we need)

Define "concern"? Is there anything that can be done with 6.2.4 that cannot be done with 6.2.3? The assumption has always been that any update has the possibility of breaking the loader.

Could DSM 6.2.3-25426 Update 3 possibly be the last safe XPEnology version for us using Jun's loaders?

Maybe.

Is Jun still kicking around to come up with a possible solution? Or anyone else?

Jun is the only one who can answer that question. My guess is that if they are at all interested in keeping XPe alive, they will be focusing on DSM 7 and not on a minor "last-of-the-line" patch for a DSM version that already works quite well.

How often in the past have 2 consecutive updates failed right out of the gate like the latest intermediate updates have?

25554 and 25556 are in fact the same update. Updates either work, or they don't.

Is there any particular reason, such as a major OS change, or Synology wanting to rid us "Open Sourcers" of this beautiful software, that is causing these updates not to work?

I am sure Synology would prefer that we not use DSM this way. They probably consider DSM entirely their IP, even though it is derived from GPL-licensed open source. They could probably set up a Secure Boot solution that would shut DSM off to us forever. But that would also limit backward compatibility with the hundreds of thousands or millions of units they have shipped up until now, and probably interfere with any cloud service/virtualization plans they have in the wings. So I don't think this, or the other issues that have been worked through in the past, is the result of an "anti-XPe" effort on Synology's part.

Just to be clear, DSM 6.2.3 "broke" the loader. FixSynoboot corrects the failure without modifying or updating the loader. We have also had problems with PCI hardware compatibility, and were able to work through solutions. Other than irrational FOMO, I'm not clear why getting 6.2.4 working is anyone's priority at this point. Personally, I'm far more interested in DSM 7.
    1 point
  6. No problem; sometimes it's easy to overlook the simple things.
    1 point
  7. Personally, I number the bays and record each HDD's serial number.
    1 point
  8. It's not political. The person who developed the loader lost interest in maintaining it, and he won't release the source code. If he released the source code, I am sure someone else could keep it current.
    0 points