DSM 6.2 Loader



Hi,

I recently updated my USB boot loader to 1.03b and, instead of choosing Migrate, accidentally did a fresh install, and now my admin password has been reset. Foolishly, I had not enabled SSH, so the method I found will not work. Is there any way to reset it via the boot loader, or any other method I'm not aware of? My last resort is to buy a new HDD, set up a new environment, and then migrate my two existing HDDs once the new system is up. Is this the only option left? Your assistance is greatly appreciated.
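(For reference, the SSH-based method I had found - which only works if SSH was already enabled, hence my problem - is roughly the following. Treat it as a sketch: the exact commands can vary between DSM versions, and the account names and password here are just placeholders.)

# From another machine, log in over SSH with any account in the administrators group
ssh adminuser@nas-ip-address

# Become root (DSM 6.x uses sudo rather than a direct root login)
sudo -i

# Reset the password of the built-in admin account
synouser --setpw admin 'NewPasswordHere'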

 

Thanks in advance.


I have purchased a new HDD and set up a new system, connected the old HDDs and moved the files across. I will wipe the old HDDs and then be back in normal operation. I have learnt this the hard way and have enabled SSH to ensure I won't get locked out again in the future.


Just reporting some information in the hopes that others find it useful.

 

System:

ASRock Z490M-ITX/ac

Intel Core i3-10100 (socket 1200)

4x12TB Toshiba HDDs (CMR & 5 year warranty)

16GB Corsair Vengeance DDR4-2666 RAM

Noctua NH-LPi low-profile cooler

Fractal Design Node 304 case (space for 6 HDDs)

Samsung EVO 950 512GB SSD (Proxmox, NAS boot and NAS SSD cache)

 

Software:

Proxmox 6.2-1 OS for VMs and Docker containers

hostapd for a wireless access point using the built-in WiFi card on the motherboard (minimal config sketch below).
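In case it's useful, a minimal hostapd.conf along these lines is enough for a WPA2 access point bridged onto the Proxmox bridge; the interface name, SSID and passphrase are placeholders for my own values:

# /etc/hostapd/hostapd.conf - minimal 2.4GHz WPA2 AP on the onboard WiFi card
# interface: the wireless NIC (name will differ); bridge: same bridge the VMs use
interface=wlp2s0
bridge=vmbr0
ssid=MyHomeAP
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=ChangeMePlease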

 

Config:

PCIe Passthrough for the HDDs to the NAS VM.  48TB RAID-5.

2x 32GB Virtual SSDs for NAS Cache

1x 16GB Virtual SSD for NAS apps.
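If you prefer the CLI over the GUI, virtual disks like these can be allocated with qm; a sketch assuming VM ID 100 and the local-lvm storage shown in the VM config further down:

# Allocate a 16GB app disk and two 32GB cache disks on local-lvm, flagged as SSDs
qm set 100 --sata0 local-lvm:16,ssd=1
qm set 100 --sata1 local-lvm:32,ssd=1
qm set 100 --sata2 local-lvm:32,ssd=1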

 

[Screenshot: VM 100 "xpenology1" hardware summary - q35, PCIe passthrough, 1 HD, 2 SSD]

 

VM Config

args: -device 'nec-usb-xhci,id=usb-ctl-synoboot,addr=0x18' -drive 'id=usb-drv-synoboot,file=/var/lib/vz/template/iso/synoboot_ds918_1.04-vm-xpenology1.img,if=none,format=raw' -device 'usb-storage,id=usb-stor-synoboot,bootindex=1,removable=off,drive=usb-drv-synoboot'
balloon: 0
bios: ovmf
bootdisk: sata0
cores: 2
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 00:17,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 2048
name: xpenology1
net0: e1000=D2:F2:B8:6B:D0:40,bridge=vmbr0,firewall=0
numa: 0
ostype: l26
sata0: local-lvm:vm-100-disk-2,size=16G,ssd=1
sata1: local-lvm:vm-100-disk-3,size=32G,ssd=1
sata2: local-lvm:vm-100-disk-0,size=32G,ssd=1
scsihw: lsi
serial0: socket
smbios1: uuid=3964d1cc-e350-4fb3-b133-6903f5f05d7e
sockets: 1
vmgenid: c7f7222e-5bc7-4928-b6d1-6e0cd94f24d2
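The args line is the important bit: it attaches the synoboot image to the VM as a virtual USB stick, which (as far as I understand) is how the DS918+ loader expects to see its boot device. There is no GUI field for it, so it goes either straight into /etc/pve/qemu-server/100.conf or in via qm set, e.g.:

# Set the custom QEMU arguments for VM 100 (one long line)
qm set 100 --args "-device 'nec-usb-xhci,id=usb-ctl-synoboot,addr=0x18' -drive 'id=usb-drv-synoboot,file=/var/lib/vz/template/iso/synoboot_ds918_1.04-vm-xpenology1.img,if=none,format=raw' -device 'usb-storage,id=usb-stor-synoboot,bootindex=1,removable=off,drive=usb-drv-synoboot'"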

 

A personal note to Synology marketing - If you hadn't dropped Wireless AP support from the new DSM/NAS systems, I'd have upgraded my existing DS415+ to a DS920+. That, and the C2000 CPU-of-death issue.

 

What didn't work:

* A single SATA virtual disk backed by a ZFS pool - It worked, i.e. you could create a simple volume in the running NAS on the ZFS pool, but Proxmox only showed 22TB of available capacity for the RAIDZ1 pool instead of the expected ~30.6TiB (48TB - 12TB parity = 36TB, minus overhead ≈ 33.6TB ≈ 30.6TiB - see https://wintelguy.com/zfs-calc.pl). The pool itself is correct (per zpool list and zfs list), but when you add a SATA disk to the VM it shows the wrong capacity.

 

root@proxmox:~# zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
tank                30.7T  33.8M      140K  /tank
tank/vm-100-disk-0  30.7T  30.6T     69.6G  -
root@proxmox:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  43.6T  95.9G  43.5T        -         -     0%     0%  1.00x    ONLINE  -

root@proxmox:/tank# zfs list -o space,compressratio,recordsize,volblocksize -r -t all
NAME                AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  RATIO  RECSIZE  VOLBLOCK
tank                33.8M  30.7T        0B    140K             0B      30.7T  1.11x     128K         -
tank/vm-100-disk-0  30.6T  30.7T        0B   69.6G          30.6T         0B  1.11x        -        8K
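My suspicion is that the 8K volblocksize on a 4-disk RAIDZ1 is at least part of the problem - small volume blocks waste a lot of space to parity and padding on RAIDZ - but I haven't confirmed that's the whole story. The relevant properties can be checked like this (volblocksize is fixed at creation, so changing it means re-creating the virtual disk, e.g. after setting the blocksize option on the Proxmox ZFS storage):

# Inspect the zvol backing the NAS virtual disk
zfs get volsize,volblocksize,refreservation,used,referenced tank/vm-100-disk-0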

 

* The DS3615xs/DS3617xs synoboot 1.03 images. They boot in the VM and there is some disk activity, but no output on the VM console or VM serial console. The DS918+ 1.04 image boots fine.

* Changing the VM machine type from i440fx to q35 re-orders the HDD IDs inside the NAS and kills the RAID-5 array. I was initially using PCI passthrough and two of the four drives were no longer detected. The solution appears to be to re-upload the synoboot image to the VM, delete all of the VM's attached SATA disks and re-create them all...

* Trying to re-install the VM was a 6-hour headache: I was getting Error Code 13 on re-install onto fresh disks; deleting and re-uploading the synoboot image fixed that - as detailed here (rough sketch below).
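For completeness, "re-uploading the synoboot image" just means shutting the VM down, replacing the .img file that the args line points at, and starting it again; roughly (host name and paths are from my setup):

# Stop the VM, replace the loader image on the Proxmox host, start it again
qm stop 100
scp synoboot_ds918_1.04-vm-xpenology1.img root@proxmox:/var/lib/vz/template/iso/
qm start 100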

 

 

 

 

 

