XPEnology Community

fonix232


Posts posted by fonix232

  1. On 1/11/2019 at 2:25 AM, jhonny said:

    Now I'm trying to install the DSM using eMMC instead of USB and it seems the box fails to boot.

    If I flash Jun's loader onto the eMMC, I see the loader boot up and the usual splash screen comes up, but beyond that there is no activity. The ODROID H2 does not get an IP and doesn't connect to the network, yet the network lights are up.

    If I put the eMMC onto an eMMC->USB adapter and boot the ODROID-H2 from that, it works perfectly.

     

    Any ideas ?

     

    The loader specifically looks for USB devices. An eMMC module installed on the system will NOT show up as a USB device, thus the loader won't work. No, you can't install the loader on the eMMC. Most likely you can't even use the eMMC for storage. Stick with a small USB drive for bootloading, and that's it. I know it's annoying, but hey, what can you do...

     

    However, when you attach the eMMC to a USB adapter, it will show up as a USB device, and thus it will boot. It's that simple.

  2. On 6/27/2017 at 2:05 PM, ordimans said:

    I have ESXi 6.5, and my RTL8111 works:

     

    
    [root@localhost:~] esxcfg-nics -l
    Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description
    vmnic0  0000:03:00.0 tg3         Up   1000Mbps   Full   00:9c:02:97:53:5d 1500   Broadcom Corporation NetXtreme BCM5723 Gigabit Ethernet
    vmnic1  0000:02:00.0 r8168       Down 0Mbps      Half   00:e0:4c:80:1a:50 9000   Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller
    [root@localhost:~]

     

     

    If you can hack the driver into the installer, sure, it will work. However, I had other system incompatibilities that ended up taking a week of my time. The Proxmox setup took maybe 4 hours, 3 of which went into finding a proper way to handle the USB boot disk, and testing.

  3. On 11/06/2017 at 11:55 PM, arm4dillo said:

    Jun's loader doesn't seem to support virtio_net and virtio_scsi. So Network: VirtIO (paravirtualized) doesn't seem to work, and HDD: SCSI doesn't seem to work either. I'm using Proxmox 5.0 beta2.

     

    Maybe someone else (a Proxmox user) can try/test getting the network and HDD (VirtIO paravirtualized and SCSI) to work?

     

    Yes, indeed, virtio drivers are not included, so let's stick with the regular stuff.

     

    On 13/06/2017 at 6:42 AM, lemon55 said:

    In fact, everything is much simpler!

    1. Create an HDD; I used SATA0, RAW format (because synology.img is a RAW disk).

    2. Replace the resulting disk-1.raw file with synology.img (in the same location), renaming it to match.

    3. In the UI, remove the disk (once!). It will show as unused. Then double-click it and reattach it as SATA0, RAW. It will be displayed as 50MB.

    4. Profit!!!

    Only e1000 - as has already been discussed, VirtIO does not work!

     

    I did the USB workaround so that, for example, updates won't wipe that disk. This way it is exactly the same as booting from an actual USB stick on bare metal. It's harder to achieve, but you don't have to keep overwriting the boot disk.

     

    On 15/06/2017 at 4:42 PM, wenlez said:

    Running XPEnology 6.1 on Proxmox 5 Beta 2. I can't get Synology's Virtualization Manager to work. When I try to start a virtual machine, the log says the following. Has anyone got it to work?

     

    
    2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: MemSize: ccc/reservation.cpp:477 Start to op(3), ba03cfbc-00ff-451a-aef5-b814a2623589: +1073741824, orig 0
    2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: core/utils.cpp:895 Failed to mkdir: /dev/virtualization/libvirt/qemu, error: No such file or directory
    2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: MemSize: ccc/reservation.cpp:537 Failed to mkdir
    2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: MemSize: ccc/reservation.cpp:488 Failed to op(1), ba03cfbc-00ff-451a-aef5-b814a2623589: 1073741824
    2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: ccc/guest.cpp:1575 Failed to edit reservation resource for [ba03cfbc-00ff-451a-aef5-b814a2623589] ret [-2], mem: 1024 / 0, cpu: 4 / 4
    2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: ccc/guest.cpp:1686 Failed to prepare for guest [ba03cfbc-00ff-451a-aef5-b814a2623589]
    2017-06-13T13:50:43-05:00 Portal synoscgi_SYNO.Virtualization.Guest.Action_1_pwr_ctl[10460]: Guest/guest.cpp:3141 Failed creating guest [ba03cfbc-00ff-451a-aef5-b814a2623589] reason [-7]

     

     

    Not sure if running a virtualized OS within a virtualized OS is a good idea.

    On 17/06/2017 at 11:59 AM, ordimans said:

    Not sure I understand - what is better about it than ESXi 6.5?
     

     

    ESXi has VERY limited support for various hardware (e.g. my RTL8111H LAN is not supported, and many other hardware components aren't either), and in my opinion management is hard. I'm used to Debian-like systems, so Proxmox is an instant win (especially since, if you can't install it with the default installer, like me, you can just install the latest Debian and pull Proxmox in afterwards).

  4. After fiddling with it for a day, and getting it to work, I thought it would be nice to share the knowledge, especially since "officially" only ESXi is supported, and ESXi is picky about supporting stuff...

     

    The main reason for me to move to a hypervisor was that Synology has not introduced NVMe support AT ALL. Even with a kernel driver, Storage Manager will not see the drive as an SSD, an HDD, or anything else. Synology is silent about this, even though some have requested it on their forums (they do not have a model with an M.2 NVMe connector yet, but they do have models with full-sized x16 or x4 PCIe slots, which can take an NVMe drive on an adapter card).

     

    So I decided to try a hypervisor. On one hand it makes installation and upgrading easier, and I don't need a display connected (since most hypervisors provide a VNC console to the guest OS). On the other hand, I can install the measly 2-3GB hypervisor and all its tools on the NVMe SSD, and mount the rest of that drive as a VMDK (or any other virtual disk format). The rest of the hard drives would use passthrough, of course.

     

    I fiddled around with multiple options. XenServer simply refused to boot in UEFI mode; ESXi does not support my network adapter or my built-in SATA controller (B250 chipset); Microsoft's Hyper-V Server has issues if you do not have a domain controller on the network, and as soon as the display output goes off, the machine drops its network connection.

     

    That left me with Proxmox. I had never used it, and during installation I had some issues with the bootloader (both on the 4.4 release and the 5.0 beta). Luckily there's a workaround: since Proxmox is based on Debian, one can use the Debian netinst image, create a very basic system, and install Proxmox on top. I won't bore you with the details; there are enough guides about installing it that way to make me think twice before writing an (n+1)th version.

     

    So let's begin!

     

    Requirements:

    • A working install of Proxmox 5.0 - it can be 4.4 too, but I only tested this on 5.0. Follow the guide to create the bridged network interface!
    • The loader you wish to use. I recommend Jun's loader, specifically 1.02a2 as of the time of writing this guide.

     

    Steps:

    0. Edit the loader (if needed) to your liking - MAC address, serial number, etc. This is especially important if you have multiple XPE systems on the same network.
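
    For reference, with Jun's loader these values typically live in the grub.cfg on the image's first partition. A rough sketch of the relevant lines - the variable names can differ between loader versions, and the values below are only placeholders:

    set vid=0xXXXX          # USB vendor ID of your boot stick
    set pid=0xXXXX          # USB product ID of your boot stick
    set sn=XXXXXXXXXXXXX    # the serial number you generated
    set mac1=XXXXXXXXXXXX   # MAC of the first NIC, without separators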

     

    1. Create a new VM in Proxmox.

     

    1.1 Set the name, and make sure to note down the VM ID (if it is your first VM, it should be 100). I'll be using {VM_ID} as a placeholder from now on.

     

    1.2 OS type should be "Linux 4.x/3.x/2.6 Kernel".

     

    1.3 Set the CD/DVD to "Do not use media" (we will remove the virtual CD/DVD drive anyway later on).

     

    1.4 For the HDD, create a new virtual disk with the format of your liking (qcow2, vmdk or raw); this will be the initial drive. I made sure it uses nearly the whole storage of the OS drive it was installed on (in my case a 256GB NVMe SSD, which, after setup and partitioning, resulted in a 226GiB root drive with 211GB free, so I set the virtual disk's size to 200GB). You can set it to any kind of bus EXCEPT VirtIO. With VirtIO I had performance issues, so I went with SCSI (it supports up to 12 devices anyway, so it is better). This applies to the virtual disk only; VirtIO works just fine with passthrough devices. So apart from the bus, size and format, you don't need to touch a thing.

     

    1.5 For CPU, set kvm64 with as many cores as your host has (including virtual cores if you're on a Hyper-Threading-capable CPU!). In my case, with the Intel G4560, this is 4.

     

    1.6 For RAM, you should leave some slack for the host OS; I went with 7.5GB of the 8GB I have. Ballooning is not required.

     

    1.7 Networking. This is where many things can go wrong. The VirtIO paravirtualized network adapter should work, but to be safe I went with the Intel E1000. On the left, select Bridged mode, with the previously created bridge as the first choice. You can also enable the firewall if you do not trust Syno's own. Leave the rest of the settings at their defaults.

     

    1.8 On the Confirm page, confirm your settings and create the VM.
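
    As a side note, roughly the same VM can be created from the shell with qm in one go. This is only a sketch mirroring my choices above - the storage name "local" and the bridge name "vmbr0" are assumptions, so adjust them to your setup:

    qm create 100 --name xpenology --ostype l26 --cpu kvm64 --cores 4 --memory 7680 --net0 e1000,bridge=vmbr0 --scsi0 local:200,format=qcow2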

     

    2. After the VM is created, the first thing to do is to remove the virtual CD/DVD drive (IDE 2, if everything went right). Then comes the hard part.

     

    3. You have to add every HDD you want to pass through to the VM's config file. The command is simple:

    qm set {VM_ID} -[protocol][port] /dev/disk/by-id/[disk-id]
     

     

    The {VM_ID} part is obvious, but what about the rest?

     

    [protocol] is the connection protocol you want to use. This can be sata, ide, scsi or virtio. I'm using SATA here, but you can use anything (IDE is not IDEal for us). SATA supports up to 6 devices (port indexes 0-5), SCSI supports up to 12 devices, and virtio has no limitation to my knowledge.

     

    [port] is the first unused port of said protocol. E.g. if you set the initial disk during setup to SATA0, and you want to use SATA further here, you have to start numbering from 1.

     

    [disk-id] is the unique identifier of your HDD. Go to /dev/disk/by-id/ and list the disks you see. For most SATA devices, you'll see entries like "ata-[MANUFACTURER]_[MODEL]_[SERIAL]".
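
    To see what you have, simply listing that directory is enough (the grep just hides the individual partition entries):

    ls -l /dev/disk/by-id/ | grep -v part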

     

    So let's say I have 4 disks, with the disk-ids ata-1, ata-2, ata-3, and ata-4 (yours will be a lot longer, but don't worry, you can use bash autocompletion with the tab key). For this to work I execute the following commands:

    qm set 100 -sata1 /dev/disk/by-id/ata-1
    qm set 100 -sata2 /dev/disk/by-id/ata-2
    qm set 100 -sata3 /dev/disk/by-id/ata-3
    qm set 100 -sata4 /dev/disk/by-id/ata-4
     

     

    Of course later on you can add further HDDs to a VM config by using the same command, just keep in mind the limitations of the protocols.

     

    4. Now comes the hard part: we'll have to add the bootloader image file to the config.

     

    The config file is located under /etc/pve/qemu-server/, and is named {VM_ID}.conf. Open it with nano.

     

    This config file defines everything about the VM: disks to mount, which device to use for booting, RAM amount, CPU cores, name of the VM, et cetera. Don't touch anything other than the lines discussed here!
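
    For orientation, a freshly created VM's conf looks roughly like this - the names and values are only placeholders based on the choices above, and yours will differ:

    bootdisk: scsi0
    cores: 4
    cpu: kvm64
    memory: 7680
    name: xpenology
    net0: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr0
    ostype: l26
    sata1: /dev/disk/by-id/ata-1,size=...
    scsi0: local:100/vm-100-disk-1.qcow2,size=200G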

     

    Copy the synoboot.img to somewhere on your server. If you want to be consistent with the Proxmox setup, copy it under /var/lib/vz/images/{VM_ID}/ - you'll need root for that.
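
    Assuming the image sits in your current directory and your VM ID is 100, that's just:

    mkdir -p /var/lib/vz/images/100
    cp synoboot.img /var/lib/vz/images/100/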

     

    After that, come back to the conf file, and open it again. You'll enter a few things here, make sure you pay attention!

     

    Enter the following line into the conf file, and make sure you replace the parts in the path!

    args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/{VM_ID}/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'
     

     

    Make sure to replace the path to the synoboot.img with your own!

     

    One more thing to edit here: the boot device. Find the line that begins with "boot: " and replace it so it looks like this:

    boot: synoboot
     

     

    Save the file, then start the VM. It should start up without an issue, and you'll be able to use find.synology.com to find your server and install it.
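
    Starting it from the shell works too, if you prefer that over the web UI (again, 100 being the example VM ID):

    qm start 100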

  5. Hey guys!

     

    I ended up reinstalling my server... And now I'm also getting error 13 when I'm trying to install. Serial and MAC are correct, USB VID and PID too.

     

    Has any of you got around this issue?

     

    EDIT:

     

    Using the 1.02a 3615xs loader works, but no update to 15101 shows up. 1.02a2 does not work at all; it fails with error 13 even on clean disks. Now to compile the NVMe driver so it can see my SSD...

  6. If you want to back up and/or restore:

     

    - Use a Linux live disk. For me, the GParted live disk recognized the mdadm arrays and mounted them.

    - DO NOT make a disk image via dd or any similar block-reading tool

    - DO use a tool made for backing up files, or simply moving them (rsync or even better, tar it all up)

     

    To restore, just erase all files on the initial mdadm array, and extract the TAR file to it. It should restore everything stored on the system partition.
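
    As a rough sketch of what I mean (the md0 device name and the mount point are only examples - use whatever your live system assembles the DSM system partition as):

    # back up the system partition to a tarball
    mkdir -p /mnt/dsm
    mount /dev/md0 /mnt/dsm
    tar -czpf dsm-system.tar.gz -C /mnt/dsm .

    # restore: wipe the old contents, then unpack the tarball
    find /mnt/dsm -mindepth 1 -delete
    tar -xzpf dsm-system.tar.gz -C /mnt/dsm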

     

     

    However, this whole thing is mostly unnecessary, since Synology stores little to nothing "new" on the system partition - data is instead kept on the volumes you create (hence why you can't install apps without a volume).

  7. Synology isn't loading anymore.

     

    I shut the system down to install another HDD, and after a reboot I can't access the control panel or any shares. I tried taking the HDD out, but still nothing.

     

    Do you have a network connection?

     

    If not, it is possible that you ran into the upgrade issue many had, which killed the network drivers.

  8. Thanks for the heads-up. I'll try to do so (pity that I have literally no idea which NIC driver I'll need, but I guess I'll just push the whole modules/update folder to modules/, overwriting old files).

     

    Your footer states your board as an MSI B250I, the manual states Realtek RTL8111H, and the Realtek website states:

    RTL8111B...RTL8118AS

    Unix (Linux)

    Apply to RTL8111H(S)/RTL8118/RTL8119i as well.

    The driver source provided by Realtek (the Linux driver for kernels up to 4.7) contains files named r8168.*, so it will be r8168.ko.

     

    That took 5 minutes of just reading the websites and the manual.

    Your bigger problem might be that Jun's loader only provides r8169.ko, and that driver is often mentioned as being loaded and causing problems with the RTL8111H.

    So an r8168.ko driver compiled for DSM 6.1 is what you need (and you have to make sure r8169.ko is not loaded, but r8168.ko instead).

     

    Ah, you're right.

     

    I know that the r8169 driver causes issues on paper, but I had none before the update. We'll see if the replacement driver from /update can fix the issue; if not, I'll compile my own.

  9. So for now, no solution for people who accidentally upgraded to 15101?

     

    I've tried "reinstalling" (popped the 3617xs 1.02a2 loader on my drive, booted it, the migration went fine, then bam, it disappeared from the network), to no avail.

    This is what I have seen others do:

     

    Try mounting the RAID array in a live Ubuntu USB. Then replace /lib/modules/[yournic].ko with the one in /lib/modules/update/[yournic].ko.

     

    Let us know if that works.

     

    Thanks for the heads-up. I'll try to do so (pity that I have literally no idea which NIC driver I'll need, but I guess I'll just push the whole modules/update folder to modules/, overwriting the old files).
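
    In practice, from a live Ubuntu session, that would be something along these lines - md0 is usually the DSM system partition, and copying the whole update folder covers the case where I don't know the exact module name:

    sudo mdadm --assemble --scan
    sudo mkdir -p /mnt/dsm
    sudo mount /dev/md0 /mnt/dsm
    sudo cp /mnt/dsm/lib/modules/update/*.ko /mnt/dsm/lib/modules/
    sudo umount /mnt/dsm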

  10. So for now, no solution for people who accidentally upgraded to 15101?

     

    I've tried "reinstalling" (popped the 3617xs 1.02a2 loader on my drive, booted it, the migration went fine, then bam, it disappeared from the network), to no avail.

  11. I used Migration with Jun's 1.02 loader (the network worked),

    but after a reboot the network does not come up!?

    I found out there is a difference between the normal boot and the install boot:

     

    With "loadlinux 3615 usb mfg" the network works.

    With the normal boot, "loadlinux 3615 usb", the network does not come up.

     

    Have you had any luck fixing it? Booting with mfg forces a reinstall, so I'm not sure whether that would work in the long term.

    I have compiled a few more network modules and firmware. This is for loader v1.02a (3615xs only) (see below for details). Compilation was done with the DSM 6.1 source code and toolchain.

     

    Neat!

     

    I can probably modify the loader images so that you can include extra modules without replacing the whole extra.lzma (instead you'd add another one). That way there won't be any need to include all the patches Jun made, just the modules.

     

    Also, could you please compile a few wireless drivers for wireless USB dongles? The most widespread models (iwl, realtek, atheros) should do fine for a test run.

    Re-downloading it all can be quite a mess :eek: ... Or do you mean just what's in the cache?

     

    I don't download for storage; I download to watch & dump (okay, I store it if I want to re-watch it, but that's all). I don't have that much important information stored only on my NAS; I have cloud backups everywhere.

     

    Yeah, I feel kind of stupid about the chassis I bought, because it takes a maximum of 4 drives... Why did I not consider it more before... You warned me, I know :smile:

    I'm now looking at either the Silverstone DS380 or the Lian Li Q35. I want a SATA backplane of some sort, because I really don't want to redo everything as soon as I want to pop in a new drive...

     

    Well, if you only need 3x8TB of storage, it's perfectly fine. I suppose you don't want petabyte storage at home :smile: That Lian Li chassis seems small though. It only hosts 5x3.5" disks, while the DS380 can do 8x3.5". You should also consider the CS280 - 8x2.5", but considerably smaller. Plus it runs cooler, since 2.5" disks don't heat up as much as 3.5" ones do, and they eat less power.

     

    I know you can add SATA ports with a card, but I think it's easier just to have all the ports directly on the board... if I want one large RAID5 volume, for example? Because I think splitting the SATA ports between a PCI card and the motherboard is not the same as having all ports on the same motherboard, right?

     

    The software RAID Synology uses for its arrays is completely unaware of which controller the ports reside on. It does not need to know. All it needs is the GPT disk and partition IDs, and that's it. So you can run a single array across both the onboard controller and an expansion card. You won't see a considerable performance loss (probably less than 1%, depending on the expansion card).

     

    Damn, this is all so frustrating... I just want to build something nice now, and then every now and then buy a new drive and pop it in... There are so few options for that... In a small case like these I don't want to be playing around with SATA cables and such every time I want to add another drive... Why aren't there more cases with SATA backplanes?

     

    Because they're a niche product. Not many people want swappable drives, not to mention that SATA backplanes are considerably harder to replace than cables. Also, they need to be manufactured specifically for the chassis. I prefer them too, but the selection is so limited that I went with the Bitfenix. Sure, it's a nightmare to cable, but still better than shelling out $150-200 for a chassis that I'd need to replace in a few months due to some unforeseen issue.

    Any 1xx chipset (110, 150, 170, regardless of class) will require a BIOS update to work with 7th gen CPUs like the G4560. If you live in a larger town, you should be able to find someone doing that as a service for a few dollars. But I'd rather not risk it, and would just go for a 2xx motherboard.

     

    What's wrong with the chassis you bought? Do you think it won't have enough space? Even if that happens, you can probably sell the whole shebang sans HDDs and move all your stuff to a new build.

     

    You don't necessarily need 6x SATA; you can always grab an expansion card for around $50 or even less (I'm buying an HP Smart Array P410 for $30, with SAS-SATA cables included!).

     

    I doubt that a cache failure would result in data loss, but my "data" is mostly TV show episodes being downloaded. Worst case scenario, I have to rebuild my server (software-wise), which takes about 30 minutes, and re-download all my shite.

    It's still a SATA M.2 SSD, not a PCIe (NVMe) SSD. Luckily the MSI B250I supports both :smile: Do tell the results you get with it! I'm planning on grabbing a 256GB Samsung P950, which has a theoretical write speed of 1500MB/s and a 3200MB/s read speed. A bit faster than what you got, but IMO it's the only one worth it (around $150, and in speed it's 3x better, plus it uses fewer resources AFAIK).

    FYI: I just finished installing DSM 6.1-15047 Update 2 on my bare-metal MicroServer Gen8 with a Xeon E3-1220v2 and 10GB RAM (2+8).

    I used viewtopic.php?p=74492#p74492

     

    However, I did not download a separate PAT file, I just used the default setup. I made a small edit to enable root login over SSH and made it possible to create SHR disk groups. A small note on the enable-SHR trick: I had to reboot before I could create SHR disk groups.

     

    When I was done, I installed the latest update that was available. Now I just have to see if this keeps running stably over the coming week.

     

    No need to re-enable the root user, seriously. Just alias "sudo su -" to something you'll remember (I use "sume"), and use that command to get root access. Which is unnecessary most of the time.
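
    In other words, just drop a line like this in your ~/.profile (or ~/.bashrc) on the box - the alias name is whatever you'll remember:

    alias sume='sudo su -'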

    Well, I bought a 400W PSU and a Bitfenix Prodigy for less than $50, so in comparison, it is expensive :smile:

     

    I dislike it because its value/price ratio is not that great, and future expansion is hard to achieve. A Silverstone case would make more sense. Also, the case makes it hard to access the motherboard once the PSU is installed.

     

    For RAM, the frequency does not matter. It will work just fine :smile:

     

    I studied in Denmark for two years, but I only speak a little Danish :sad:

    That motherboard and SSD combo is no good. The board only supports PCIe M.2 cards, while the one you chose is a SATA one. If you want to save yourself some headaches, go with the B250I PRO (and just ignore the 1xx-series chipset motherboards altogether, as they need a BIOS update to work with newer, 7th gen CPUs like the G4560). The B250I PRO is $20 or so more expensive than the H110I, and brings quite a lot to the table at that price.

     

    The rest is a perfect choice and mostly matches my config. The case I dislike, but if you want it, well, it's your NAS.

     

     

    Also, are you Swedish?
