XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 06/13/2017 in Posts

  1. After fiddling with it for a day and getting it to work, I thought it would be nice to share the knowledge, especially since "officially" only ESXi is supported, and ESXi is picky about supporting hardware...

The main reason for me to move to a hypervisor was that Synology has not introduced NVMe support at all. Even with a kernel driver, Storage Manager will not see an NVMe drive as an SSD, an HDD, or anything else. Synology is silent about this, even though some have requested it on their forums (although they do not have a model with an M.2 NVMe connector yet, they have some models with full-sized x16 or x4 PCIe slots, which can be used with an NVMe adapter card). So I decided to try a hypervisor. On one hand it makes installation and upgrades easier, and I don't need a display connected (since most hypervisors provide a VNC connection to the guest OS). On the other hand, I can install the measly 2-3 GB hypervisor and all the tools on the NVMe SSD, and mount the rest as a VMDK (or any other virtual disk format). The hard drives are passed through, of course.

I fiddled around with multiple options. XenServer simply refused to boot in UEFI mode; ESXi does not support my network adapter or my built-in SATA controller (B250 chipset); Microsoft's Hyper-V Server has issues if you do not have a domain controller on the network, and as soon as the display output goes off, the device drops its network connection. That left me with Proxmox. I had never used it, and during installation I had some issues with the bootloader (on both the 4.4 release and the 5.0 beta). Luckily there is a workaround: since it is based on Debian, one can use the Debian netinst image, create a very basic system, and install Proxmox on top. I won't bore you with the details; there are enough guides about installing it to make me think twice before writing an (n+1)th version.

So let's begin!

Requirements:
- A working install of Proxmox 5.0 - it can be 4.4 too, but I only tested this on 5.0. Follow the guide to create the bridged network interface!
- The loader you wish to use. I recommend Jun's loader, specifically 1.02a2 at the time of writing.

Steps:

0. Edit the loader (if needed) to your liking - MAC address, serial number, etc. This is especially important if you have multiple XPEnology systems on the same network.

1. Create a new VM in Proxmox.
1.1 Set the name, and make sure to note down the VM ID (if it is your first VM, it should be 100). I'll be using {VM_ID} as a placeholder from now on.
1.2 OS type should be "Linux 4.x/3.x/2.6 Kernel".
1.3 Set the CD/DVD to "Do not use media" (we will remove the virtual disk drive later anyway).
1.4 For the HDD, create a new virtual disk in the format of your liking (qcow2, vmdk or raw); this will be the initial drive. I made it use nearly the whole storage of the OS drive it was installed on (in my case a 256 GB NVMe SSD, which, after setup and partitioning, resulted in a 226 GiB root drive with 211 GB free, so I set the virtual disk's size to 200 GB). You can set it to any kind of bus EXCEPT VirtIO. With VirtIO I had performance issues, so I went with SCSI (it supports up to 12 devices anyway, so it is better). This applies to the virtual disk only; VirtIO works just fine with passthrough devices. Apart from the bus, size and format, you don't need to touch a thing.
1.5 For CPU, set kvm64 with as many cores as your host has (including virtual cores if you're on a Hyper-Threading capable CPU!). In my case, with the Intel G4560, this is 4.
1.6 For RAM, leave some slack for the host OS; I went with 7.5 GB of the 8 GB I have. Ballooning is not required.
1.7 Networking. This is where many things can go wrong. The VirtIO paravirtualized network adapter should work, but to be safe I went with the Intel E1000. On the left select Bridged Mode, with the previously created bridge as the first choice. You can also enable the firewall if you do not trust Syno's own. Leave the rest of the settings at their defaults.
1.8 On the Confirm page, confirm your settings and create the VM.

2. After the VM is created, the first thing to do is remove the virtual disk drive (IDE 2, if everything went right). Then comes the hard part.

3. You have to add every HDD you want to pass through to the config file. The command is simple:

qm set {VM_ID} -[protocol][port] /dev/disk/by-id/[disk-id]

The {VM_ID} part is obvious, but what about the rest?
[protocol] is the connection protocol you want to use. This can be sata, ide, scsi or virtio. I'm using SATA here, but you can use anything (IDE is not IDEal for us). SATA supports up to 6 devices (port indexes 0-5), SCSI supports up to 12 devices, and VirtIO does not have a limitation to my knowledge.
[port] is the first unused port of said protocol. E.g. if you set the initial disk during setup to SATA0 and you want to keep using SATA here, you have to start numbering from 1.
[disk-id] is the unique identifier of your HDD. Go to /dev/disk/by-id/ and list the disks you see. For most SATA devices, you'll see entries like "ata-[MANUFACTURER]_[MODEL]_[SERIAL]".

So let's say I have 4 disks, with the disk-ids ata-1, ata-2, ata-3 and ata-4 (yours will be a lot longer, but don't worry, you can use bash autocompletion with the Tab key). For this to work I execute the following commands:

qm set 100 -sata1 /dev/disk/by-id/ata-1
qm set 100 -sata2 /dev/disk/by-id/ata-2
qm set 100 -sata3 /dev/disk/by-id/ata-3
qm set 100 -sata4 /dev/disk/by-id/ata-4

Of course, later on you can add further HDDs to a VM config with the same command; just keep in mind the limitations of the protocols.

4. Now comes the hard part: we have to add the bootloader image file to the config. The config file is located under /etc/pve/qemu-server/ and is named {VM_ID}.conf. Open it with nano. This config file defines everything about the VM: disks to mount, which device to boot from, RAM amount, CPU cores, the name of the VM, et cetera. Don't touch anything other than the lines described here!

Copy the synoboot.img to somewhere on your server. If you want to be consistent with the Proxmox setup, copy it under /var/lib/vz/images/{VM_ID}/ - you'll need root for that. After that, go back to the conf file and open it again. You'll enter a few things here, so pay attention! Add the following line to the conf file, and make sure you replace the parts in the path:

args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/{VM_ID}/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'

Make sure to replace the path to the synoboot.img with your own! One more thing to edit here: the boot device. Find the line that begins with "boot: " and replace it so it looks like this:

boot: synoboot

Save the file, then start the VM. It should start up without an issue, and you'll be able to use find.synology.com to find your server and install DSM.
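
For reference, here is a rough sketch of how the relevant lines of /etc/pve/qemu-server/{VM_ID}.conf could end up looking once steps 3 and 4 are done. This assumes VM ID 100, the shortened example disk IDs ata-1 through ata-4 from step 3, and a 200 GB qcow2 system disk on the "local" storage; your storage name, disk IDs and sizes will differ:

args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/100/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'
boot: synoboot
scsi0: local:100/vm-100-disk-1.qcow2,size=200G
sata1: /dev/disk/by-id/ata-1
sata2: /dev/disk/by-id/ata-2
sata3: /dev/disk/by-id/ata-3
sata4: /dev/disk/by-id/ata-4

If the VM does not pick up the loader, double-check that the boot: line points at the synoboot drive defined in the args line and not at the virtual system disk.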
    2 points
  2. Hi everyone, here I'm going to explain how to run DSM 6 XPEnology on Hyper-V. Download the VHDX files (uploaded by myself) prepared for this purpose: https://1drv.ms/u/s!Anih4xuqH7tysAOHkDHOqgim2CZ_ Create a new virtual machine, following exactly the same parameters as in this screenshot: Apply the settings as in the image above. Do not add your data VHDX right away. Start the virtual machine, wait 3 to 5 minutes for it to boot, and configure it via http://find.synology.com/ Once it is up and everything is OK, shut it down and add your data VHDX disk: Start the virtual machine again, and you can begin creating your volume in Storage Manager. Do not apply DSM updates! Enjoy!
    1 point
  3. That's what I did today: copied Jun's loader 1.02a to the ESXi server, then did a fresh install and update of DS3615xs ... Now the VM is running 6.1.1-15101 Update 4, just like the physical Synologies I have here.
    1 point
  4. It would also be nice to add backward compatibility with the old links like "/forum/viewtopic.php?f=2&t=20216&start=2770#p100378", so we don't constantly run into "The requested page does not exist" - both in the topics themselves, where important direct links were posted, and in search engine results.
    1 point
  5. Yesterday I also took the step and added 3x 3 TB SATA disks to 3 of the motherboard's internal SATA connectors, and changed the internalportcfg setting: from binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the first 6 positions, since I had no disks connected to the onboard SATA ports of my Asus P7F-X) to binary 0011 1111 1111 1000 = 0x3ff8 (i.e. zeroes in the first 3 positions only, since I now have 3 SATA disks connected there and the other 8 are connected to my LSI controller). I rebooted the system and the 3 new disks on the SATA ports were visible as disks in Synology. When I started to create a separate disk group from the 3 new disks I could not choose SHR as the RAID type, but some searching here on the forum quickly enlightened me: I needed to comment out / delete support_raid_group = "yes" and add support_syno_hybrid_raid = "yes" in /etc.defaults/synoinfo.conf and /etc/synoinfo.conf, then reboot, and voilà! My 2nd disk group is now also up and running in SHR mode with the BTRFS file system. Cool!
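
For anyone who wants to script those edits, a minimal sketch could look like the following (assuming a root shell on the box and a sed that supports -i; the patterns and the 0x3ff8 value match the example above and are illustrative, not the exact commands used):

cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
cp /etc/synoinfo.conf /etc/synoinfo.conf.bak
for f in /etc.defaults/synoinfo.conf /etc/synoinfo.conf; do
    # map the 3 onboard SATA ports plus the 8 LSI ports as internal disks
    sed -i 's/^internalportcfg=.*/internalportcfg="0x3ff8"/' "$f"
    # comment out plain RAID groups and enable SHR instead
    sed -i 's/^support_raid_group=.*/#&/' "$f"
    grep -q '^support_syno_hybrid_raid=' "$f" || echo 'support_syno_hybrid_raid="yes"' >> "$f"
done
# reboot afterwards for the changes to take effect

Editing both files by hand with vi works just as well; the backup copies are only there so the change can be reverted if a DSM update overwrites or rejects it.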
    1 point