fonix232

Members
  • Content count: 323
  • Joined
  • Last visited
  • Days Won: 1

fonix232 last won the day on June 13

fonix232 had the most liked content!

Community Reputation: 2 Neutral

About fonix232

  • Rank
    Super Member
  1. Jun's loader works fine with RTL8111H.
  2. If you can hack the driver into the installer, it should work. However, I had other system incompatibilities that ended up eating a week of my time. The Proxmox setup took maybe 4 hours, 3 of which went into finding a proper way to handle the USB disk, and testing.
  3. Yes, indeed, virtio drivers are not included, so let's stick with the regular stuff. I used the USB workaround so that, for example, updates won't wipe that disk. This way it is exactly the same as booting from an actual USB stick on bare metal: harder to achieve, but you don't need to keep overwriting the boot disk. I'm not sure running a virtualized OS within another virtualized OS is a good idea. ESXi has VERY limited hardware support (e.g. my RTL8111H LAN chip is not supported, along with many other components), and in my opinion its management is hard. I'm used to Debian-like systems, so Proxmox is an instant win (especially since, if you can't install with the default installer, like me, you can just install the latest Debian and pull Proxmox in afterwards).
  4. After fiddling with it for a day and getting it to work, I thought it would be nice to share the knowledge, especially since "officially" only ESXi is supported, and ESXi is picky about what it supports...

The main reason for me to move to a hypervisor was that Synology has not yet introduced NVMe support AT ALL. Even with a kernel driver, Storage Manager will not see the drive as an SSD, as an HDD, or as anything else. Synology is silent about this, even though some have requested it on their forums (although they have no model with an M.2 NVMe connector yet, they do have some models with full-sized x16 or x4 PCIe ports, which can be used with an adapter card for NVMe).

So I decided to try a hypervisor. On one hand it makes installation and upgrading easier, and I don't need a display connected (since most hypervisors provide a VNC connection to the guest OS). On the other hand, I can install the measly 2-3GB hypervisor and all its tools on the NVMe SSD, and mount the rest as a VMDK (or any other virtual disk format). The remaining hard drives use passthrough, of course.

I fiddled around with multiple options. XenServer simply refused to boot in UEFI mode. ESXi does not support my network adapter or my built-in SATA controller (B250 chipset). Microsoft's Hyper-V Server has issues if you do not have a domain server on the network, and as soon as the display output goes off, the device drops its network connection. That left me with Proxmox. I had never used it, and during installation I had some issues with the bootloader (on both the 4.4 release and the 5.0 beta). Luckily there's a workaround: since it is based on Debian, one can use the Debian netinst image, create a very basic system, and install Proxmox on top. I won't bore you with the details; there are enough guides about installing it to make me think twice before writing an (n+1)th version.

So let's begin!

Requirements:
  • A working install of Proxmox 5.0 - it can be 4.4 too, but I only tested this on 5.0. Follow the guide to create the bridged network interface!
  • The loader you wish to use. I recommend Jun's Loader, specifically 1.02a2 at the time of writing this guide.

Steps:

0. Edit the loader (if needed) to your liking - MAC address, serial number, etc. This is especially important if you have multiple XPE systems on the same network.

1. Create a new VM in Proxmox.
1.1 Set the name, and make sure to note down the VM ID (if it is your first VM, it should be 100). I'll be using {VM_ID} as a placeholder from now on.
1.2 OS type should be "Linux 4.x/3.x/2.6 Kernel".
1.3 Set the CD/DVD to "Do not use media" (we will remove the virtual disk drive later anyway).
1.4 For the HDD, create a new virtual disk in the format of your liking (qcow2, vmdk or raw); this will be the initial drive. I made sure it uses nearly the whole storage of the OS drive it was installed on (in my case a 256GB NVMe SSD, which, after setup and partitioning, resulted in a 226GiB root drive with 211GB free, so I set the virtual disk's size to 200GB). You can set it to any kind of bus EXCEPT VirtIO. With VirtIO I had performance issues, so I went with SCSI (it supports up to 12 devices anyway, so it is better). This applies to the virtual disk only; VirtIO works just fine with passthrough devices. Apart from the bus, size and format, you don't need to touch a thing.
1.5 For CPU, set kvm64 with as many cores as your host has (including virtual cores if you're on a HyperThreading-capable CPU!). In my case, with the Intel G4560, this is 4.
1.6 For RAM, leave some slack for the host OS; I went with 7.5GB of the 8GB I have. Ballooning is not required.
1.7 Networking. This is where many things can go wrong. The VirtIO paravirtualized network adapter should work, but to be safe I went with the Intel E1000. On the left select Bridged Mode, with the previously created bridge as the first choice. You can also enable the firewall if you do not trust Syno's own. Leave the rest of the settings at their defaults.
1.8 On the Confirm page, confirm your settings and create the VM.

2. After the VM is created, the first thing to do is to remove the virtual disk drive (IDE 2, if everything went right).

3. Then comes the hard part: you have to add each and every HDD you want to pass through to the config file. The command is simple:

qm set {VM_ID} -[protocol][port] /dev/disk/by-id/[disk-id]

The {VM_ID} part is obvious, but what about the rest?
  • [protocol] is the connection protocol you want to use. This can be sata, ide, scsi or virtio. I'm using SATA here, but you can use anything (IDE is not IDEal for us). SATA supports up to 6 devices (port indexes 0-5), SCSI supports up to 12 devices, and VirtIO has no limitation to my knowledge.
  • [port] is the first unused port of said protocol. E.g. if you set the initial disk during setup to SATA0 and you want to keep using SATA here, you have to start numbering from 1.
  • [disk-id] is the unique identifier of your HDD. Go to /dev/disk/by-id/ and list the disks you see. For most SATA devices, you'll see entries like "ata-[MANUFACTURER]_[MODEL]_[SERIAL]".

So let's say I have 4 disks, with the disk-ids ata-1, ata-2, ata-3 and ata-4 (yours will be a lot longer, but don't worry, you can use bash autocompletion with the Tab key). For this to work I execute the following commands:

qm set 100 -sata1 /dev/disk/by-id/ata-1
qm set 100 -sata2 /dev/disk/by-id/ata-2
qm set 100 -sata3 /dev/disk/by-id/ata-3
qm set 100 -sata4 /dev/disk/by-id/ata-4

Of course, you can later add further HDDs to a VM config with the same command; just keep in mind the limitations of the protocols.

4. Now comes the other hard part: adding the bootloader image file to the config. The config file is located under /etc/pve/qemu-server/ and is named {VM_ID}.conf. This file defines everything about the VM: disks to mount, which device to use for booting, RAM amount, CPU cores, name of the VM, et cetera. Don't touch anything other than the lines described here!

Copy the synoboot.img to somewhere on your server. If you want to be consistent with the Proxmox setup, copy it under /var/lib/vz/images/{VM_ID}/ - you'll need root for that. After that, open the conf file with nano. You'll enter a few things here, so pay attention! Add the following line, and make sure you replace the parts in the path with your own:

args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/{VM_ID}/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'

One more thing to edit here: the boot device. Find the line that begins with "boot: " and replace it so it looks like this:

boot: synoboot

Save the file, then start the VM. It should start up without an issue, and you'll be able to use find.synology.com to find your server and install DSM.
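Under the assumptions of the guide above (VM ID 100, SATA0 taken by the virtual boot disk, and the shortened example disk IDs ata-1 through ata-4), the passthrough step can be sketched as a small loop; on a real host you would substitute the full IDs listed under /dev/disk/by-id/:

```shell
#!/bin/sh
# Sketch of step 3: attach each physical disk to VM 100 by its stable ID.
# The disk IDs below are the shortened examples from the guide; replace
# them with your real entries from /dev/disk/by-id/.
VM_ID=100
PORT=1   # sata0 already holds the virtual boot disk, so start at 1
for DISK in ata-1 ata-2 ata-3 ata-4; do
    CMD="qm set $VM_ID -sata$PORT /dev/disk/by-id/$DISK"
    echo "$CMD"   # print only; run the commands themselves on the Proxmox host
    PORT=$((PORT + 1))
done
```

Since SATA only offers ports 0-5, a fifth passthrough disk would have to move to scsi or virtio instead.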
  5. Anyone successfully installed this yet on Proxmox? Nvm, done it.
  6. Hey guys! I ended up reinstalling my server... And now I'm also getting error 13 when I'm trying to install. Serial and MAC are correct, USB VID and PID too. Has any of you got around this issue? EDIT: Using 1.02a 3615xs works, but no updates show up to 15101. 1.02a2 does not work at all, fails with error 13 even on clean disks. Now to compile the nvme driver to get it to see my SSD...
  7. If you want to back up and/or restore:
  • Use a Linux live disk. For me, GParted recognized the mdadm arrays and mounted them.
  • DO NOT make a disk image via dd or any similar block-reading tool.
  • DO use a tool made for backing up files, or simply move them (rsync, or even better, tar it all up).

To restore, just erase all files on the initial mdadm array and extract the TAR file onto it. This should restore everything stored on the system partition. However, this whole thing is largely unnecessary, since Synology stores little to no "new" data on the system partition - it is instead kept on the volumes you create (hence why you can't install apps without a volume).
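A minimal sketch of that file-level backup and restore, assuming the live system mounted the array somewhere like /mnt/md0 (the temp directories below stand in for the real mount points so the commands can be tried safely):

```shell
#!/bin/sh
# File-level backup/restore of the system partition - never a dd image.
SRC=$(mktemp -d)            # stands in for the mounted mdadm array, e.g. /mnt/md0
DST=$(mktemp -d)            # stands in for the wiped array during restore
ARCHIVE="$(mktemp -u).tar.gz"

echo "config" > "$SRC/synoinfo.conf"   # fake system file for this demo

# Backup: -p preserves permissions, -C archives relative to the mount point
tar -czpf "$ARCHIVE" -C "$SRC" .

# Restore: erase the array's contents first, then unpack in place
tar -xzpf "$ARCHIVE" -C "$DST"
```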
  8. Do you have a network connection? If not, it is possible that you ran into the upgrade issue many had, which killed the network drivers.
  9. "Your footer states your board is an MSI B250I, the manual states the NIC is a Realtek RTL8111H, and the Realtek website states that the "RTL8111B...RTL8118AS Unix (Linux)" driver applies to the RTL8111H(S)/RTL8118/RTL8119i as well. The driver source provided by Realtek (the Linux driver for kernels up to 4.7) contains the files r8168.*, so the module will be r8168.ko. That took 5 minutes of just reading websites and the manual. Your bigger problem might be that Jun's loader only provides r8169.ko, and that driver is often mentioned as being loaded and causing problems with the RTL8111H. So an r8168.ko driver compiled for DSM 6.1 is what you need (and you have to make sure r8169.ko is not loaded, but r8168.ko instead)."

Ah, you're right. I know the r8169 driver causes issues on paper, but I had none before the update. We'll see if the replacement driver from /update can fix the issue; if not, I'll compile my own.
  10. "This is what I have seen others do: try mounting the RAID array in a live Ubuntu USB, then replace /lib/modules/[yournic].ko with the one in /lib/modules/update/[yournic].ko. Let us know if that works."

Thanks for the heads-up. I'll try to do so (pity that I have literally no idea which NIC driver I'll need, but I guess I'll just push the whole modules/update folder to modules/, overwriting the old files).
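That replacement step could look roughly like this; ROOT is a stand-in for wherever the live USB mounted the array (the /lib/modules paths from the post are relative to it), and r8169.ko is just the example driver name from the posts above:

```shell
#!/bin/sh
# Sketch: overwrite the broken post-update NIC drivers in /lib/modules
# with the ones DSM staged under /lib/modules/update.
ROOT=$(mktemp -d)                  # stand-in for the mounted array's root
mkdir -p "$ROOT/lib/modules/update"
echo "broken" > "$ROOT/lib/modules/r8169.ko"           # demo stand-ins for
echo "working" > "$ROOT/lib/modules/update/r8169.ko"   # real kernel modules

# Push every staged module over the old copy, as described in the post
cp -a "$ROOT/lib/modules/update/"*.ko "$ROOT/lib/modules/"
```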
  11. So for now, there is no solution for people who accidentally upgraded to 15101? I've tried "reinstalling" (popped the 3617xs 1.02a2 loader onto my drive, booted it, the migration went fine, then bam, it disappeared from the network), to no avail.
  12. Have you had any luck fixing it? Booting with mfg forces a reinstall, so I'm not sure that would work in the long term.
  13. Neat! I can probably modify the loader images so that you can include extra modules without replacing the whole extra.lzma (you'd add another archive instead). That way there won't be any need to include all the patches Jun made, just the modules. Also, could you please compile a few drivers for wireless USB dongles? The most widespread chipsets (Intel iwl, Realtek, Atheros) should do fine for a test run.
  14. I don't download for storage; I download to watch & dump (okay, I store it if I want to re-watch it, but that's all). I don't have all that important information stored only on my NAS; I have cloud backups everywhere.

Well, if you only need 3x8TB of storage, that's perfectly fine. I suppose you don't want petabyte storage at home. That Lian Li chassis seems small, though: it only hosts 5x 3.5" disks, while the DS380 can do 8x 3.5". You should also consider the CS280 - 8x 2.5", but considerably smaller. And cooler, since 2.5" disks don't heat up as much as 3.5" ones do, and they eat less power.

The soft RAID Synology uses for its arrays is completely unaware of which controller the ports reside on. It does not need to know; all it needs is the GPT disk and partition IDs. So you can run a single array off both the onboard controller and an expansion card, and you won't see considerable performance loss (probably less than 1%, depending on the card).

Because they're a niche product. Not many people want swappable drives, not to mention that SATA backplanes are considerably harder to replace than cables, and they need to be manufactured specifically for the chassis. I prefer them too, but the selection is so limited that I went with the Bitfenix. Sure, it's a nightmare to cable, but still better than shelling out $150-200 for a chassis that I'd need to replace in a few months due to some unforeseen issue.
  15. Any 1xx chipset (110, 150, 170, regardless of class) will require a BIOS update to work with 7th-gen CPUs like the G4560. If you live in a larger town, you should find someone doing that as a service for a few dollars. But I'd rather not risk it, and just go for a 2xx motherboard.

What's wrong with the chassis you bought? Do you think it won't have enough space? Even if that happens, you can probably sell the whole shebang sans HDDs and move all your stuff to a new build. You don't necessarily need 6x SATA; you can always grab an expansion card for around $50 or even less (I'm buying an HP Smart Array P410 for $30, with SAS-SATA cables included!).

I doubt that a cache failure will result in data loss, but my "data" is mostly downloaded TV show episodes. Worst-case scenario, I have to rebuild my server (software-wise), which takes about 30 minutes, and re-download all my shite.