XPEnology Community


Posts posted by dominicc

  1. Just reporting some information in the hopes that others find it useful.

     

    System:

    ASRock Z490M-ITX/ac

    Intel Core i3 10100 s1200

    4x12TB Toshiba HDDs (CMR & 5 year warranty)

    16GB Corsair Vengeance DDR4 2666 RAM

    Noctua NH-L9i low-profile cooler

    Fractal Design Node 304 case (space for 6 HDDs)

    Samsung EVO 950 512GB SSD (Proxmox, NAS boot and NAS SSD cache)

     

    Software:

    Proxmox 6.2-1 as the host OS for VMs and Docker containers

    hostapd for a wireless access point using the motherboard's built-in WiFi card (see the sketch below).
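
    For the curious, a minimal hostapd.conf sketch for that setup. The interface name, SSID and passphrase here are placeholders, and the real card may want different hw_mode/channel values:

    # /etc/hostapd/hostapd.conf - minimal WPA2 access point (example values)
    interface=wlp1s0          # placeholder - check your WiFi interface with `ip link`
    driver=nl80211
    ssid=MyHomeAP             # placeholder SSID
    hw_mode=g                 # 2.4GHz; hw_mode=a for 5GHz if the card supports AP mode there
    channel=6
    wpa=2
    wpa_key_mgmt=WPA-PSK
    rsn_pairwise=CCMP
    wpa_passphrase=changeme   # placeholder passphrase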

     

    Config:

    PCIe passthrough of the onboard SATA controller (and with it the HDDs) to the NAS VM.  48TB RAID-5.

    2x 32GB Virtual SSDs for NAS Cache

    1x 16GB Virtual SSD for NAS apps.
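
    For reference, roughly the qm commands behind that setup (VM ID 100, as in the config below; the Proxmox GUI does the same thing):

    # pass the onboard SATA controller (with the 4 HDDs) through to the VM
    qm set 100 --hostpci0 00:17,pcie=1
    # create the virtual SSDs on local-lvm: one 16G apps disk, two 32G cache disks
    qm set 100 --sata0 local-lvm:16,ssd=1
    qm set 100 --sata1 local-lvm:32,ssd=1
    qm set 100 --sata2 local-lvm:32,ssd=1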

     

    [Screenshot: Proxmox hardware view of VM 100 (xpenology1) - q35 machine, PCIe passthrough device, virtual SSDs]

     

    VM config (/etc/pve/qemu-server/100.conf):

    args: -device 'nec-usb-xhci,id=usb-ctl-synoboot,addr=0x18' -drive 'id=usb-drv-synoboot,file=/var/lib/vz/template/iso/synoboot_ds918_1.04-vm-xpenology1.img,if=none,format=raw' -device 'usb-storage,id=usb-stor-synoboot,bootindex=1,removable=off,drive=usb-drv-synoboot'
    balloon: 0
    bios: ovmf
    bootdisk: sata0
    cores: 2
    efidisk0: local-lvm:vm-100-disk-1,size=4M
    hostpci0: 00:17,pcie=1
    ide2: none,media=cdrom
    machine: q35
    memory: 2048
    name: xpenology1
    net0: e1000=D2:F2:B8:6B:D0:40,bridge=vmbr0,firewall=0
    numa: 0
    ostype: l26
    sata0: local-lvm:vm-100-disk-2,size=16G,ssd=1
    sata1: local-lvm:vm-100-disk-3,size=32G,ssd=1
    sata2: local-lvm:vm-100-disk-0,size=32G,ssd=1
    scsihw: lsi
    serial0: socket
    smbios1: uuid=3964d1cc-e350-4fb3-b133-6903f5f05d7e
    sockets: 1
    vmgenid: c7f7222e-5bc7-4928-b6d1-6e0cd94f24d2
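
    Two notes on the config above: the args line attaches the synoboot image to an emulated USB 3.0 (xHCI) controller with bootindex=1, so DSM sees the usual USB boot stick; and hostpci0 (00:17) is the board's onboard SATA controller. Worth double-checking the PCI address on your own board before passing it through:

    # confirm what sits at PCI address 00:17 (should be the AHCI/SATA controller)
    lspci -s 00:17.0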

     

    A personal note to Synology marketing: if you hadn't dropped wireless AP support from the new DSM/NAS systems I'd have upgraded my existing DS415+ to a DS920+.  That, and the C2000 CPU-of-death issue.

     

    What didn't work:

    * Single SATA virtual disk backed by a ZFS pool - It worked, i.e. you could create a simple volume on it in the running NAS, but Proxmox only showed 22TB available capacity for a RAIDZ1 pool instead of the expected 30.64TiB (48TB - 12TB parity = 36TB, minus overhead ≈ 33.6TB, i.e. 30.64TiB - https://wintelguy.com/zfs-calc.pl).  The pool itself is correct (per zpool list and zfs list below), but when you add a SATA disk to the VM it shows the wrong capacity.

     

    root@proxmox:~# zfs list
    NAME                 USED  AVAIL     REFER  MOUNTPOINT
    tank                30.7T  33.8M      140K  /tank
    tank/vm-100-disk-0  30.7T  30.6T     69.6G  -
    root@proxmox:~# zpool list
    NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    tank  43.6T  95.9G  43.5T        -         -     0%     0%  1.00x    ONLINE  -
    
    root@proxmox:/tank# zfs list -o space,compressratio,recordsize,volblocksize -r -t all
    NAME                AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  RATIO  RECSIZE  VOLBLOCK
    tank                33.8M  30.7T        0B    140K             0B      30.7T  1.11x     128K         -
    tank/vm-100-disk-0  30.6T  30.7T        0B   69.6G          30.6T         0B  1.11x        -        8K
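
    My guess (untested) is that the 8K volblocksize on the zvol is to blame: small volume blocks on RAIDZ1 carry a lot of parity and padding overhead, which would explain the inflated USED/USEDREFRESERV above. If someone wants to experiment, a sketch of recreating the zvol with a larger block size - note this destroys the virtual disk's contents:

    # wipes the virtual disk - back up anything on it first
    zfs destroy tank/vm-100-disk-0
    # recreate with a larger volblocksize (it can only be set at creation time)
    zfs create -V 30T -o volblocksize=64K tank/vm-100-disk-0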

     

    * The DS3615/DS3617 synoboot 1.03 images.  They boot in the VM and there is some disk activity, but no output on the VM console or VM serial console.  The DS918+ 1.04 image boots fine.

    * Changing the VM machine type from i440fx to q35 re-orders the HDD IDs inside the NAS and kills the RAID-5 array.  I was initially using PCI passthrough, and two of the four drives were no longer detected.  The solution appears to be to re-upload the synoboot image to the VM, delete all the VM-attached SATA disks and re-create them all (shell commands after this list)...

    * Trying to re-install the VM was a 6-hour headache.  I was getting Error Code 13 on re-install onto fresh disks; deleting and re-uploading the synoboot image fixed that - as detailed here.
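
    For the q35 switch, the disk shuffle from the shell looks roughly like this (VM 100, disk names as in my config; detached volumes show up as 'unused' and can then be removed in the GUI or with pvesm free):

    # detach the old virtual disks from the VM
    qm set 100 --delete sata0
    qm set 100 --delete sata1
    qm set 100 --delete sata2
    # recreate them fresh on local-lvm
    qm set 100 --sata0 local-lvm:16,ssd=1
    qm set 100 --sata1 local-lvm:32,ssd=1
    qm set 100 --sata2 local-lvm:32,ssd=1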


2. I also got the same error (13).  However, this was *after* I had successfully created a VM using Proxmox and then did a factory reset of the NAS running in the VM from the Synology web UI.

     

    No matter what I did I always got the error.  Things I tried:

    * removing all configured disks from the VM.

    * removing all EFI disks from the VM.

    * removing the PCI passthrough from the VM.

    * verifying the MD5 of the file I downloaded was correct (using the handy HashTab tool on Windows: http://implbits.com/products/hashtab/)

    * restoring the config of the VM to the original one that I used when I created the VM.

     

    What solved it for me was DELETING the image from the Proxmox 'local' storage (I had uploaded it as 'synoboot_ds918_1.04.img') and then re-uploading it.
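
    From the shell that boils down to the following, assuming a pristine copy of the image is still on your workstation:

    # on the proxmox host: remove the modified image from 'local' storage
    rm /var/lib/vz/template/iso/synoboot_ds918_1.04.img
    # from the workstation: copy a fresh one back in
    scp synoboot_ds918_1.04.img root@proxmox:/var/lib/vz/template/iso/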

     

    Why does this work?  The synoboot image appears to be modified once booted, and no longer works for a re-install afterwards.

     

    I checked this by comparing the MD5 sum of the image on the server before and after booting it.

     

    before:

    via HashTab, for the file synoboot_ds918_1.04.img before uploading it: B8FEE45A22B1263899F84E9030827742

    via the command line after uploading it:

    root@proxmox:/etc/pve/qemu-server# md5sum /var/lib/vz/template/iso/synoboot_ds918_1.04.img
    b8fee45a22b1263899f84e9030827742  /var/lib/vz/template/iso/synoboot_ds918_1.04.img

     

    after booting the VM:

    root@proxmox:/etc/pve/qemu-server# md5sum /var/lib/vz/template/iso/synoboot_ds918_1.04.img
    a28c0d30885684a16a8445d2adf99b20  /var/lib/vz/template/iso/synoboot_ds918_1.04.img

     

    So, I suggest the following, which I didn't see noted elsewhere:

    * keep the IMG file used to boot the VM handy in case you need to re-install it (a snippet for this below).

    * don't share IMG files between VMs.
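
    Something like this on the Proxmox host, with my paths:

    # stash a pristine copy before the first boot
    cp /var/lib/vz/template/iso/synoboot_ds918_1.04.img /root/synoboot_ds918_1.04.img.pristine
    # to re-install later, put the pristine copy back
    cp /root/synoboot_ds918_1.04.img.pristine /var/lib/vz/template/iso/synoboot_ds918_1.04.img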

     

    Hope this helps someone!  If it does, let me know.

     
