lyter

  1. @IG-88, I'm really sorry about this, but for whatever reason the config of the VM no longer contained the passthrough of the USB controller. Older backups of the config didn't contain it either. So I added it again and voilà, the USB drives are recognized! I really have no idea how this could have happened, or how the backups to the USB drives could still have run. But I've learned how to recover an array through a live Linux, so thanks for that! And for your patience!
  2. @IG-88, your suggested values unfortunately have no positive effect. Still, no USB disk is recognized. See the attached dmesg log. Thanks for your patience. dmesglog.txt
  3. @IG-88, thanks for the suggestion. Tried these values now, but still no USB disks:
     maxdisks="20"
     esataportcfg="0x0"
     usbportcfg="0xffff00000"
     internalportcfg="0xfffff"
     But why wouldn't the USB drives show up again when I reset the confs and disconnect the new 16TB drives? That would be the same situation as before, no?
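(Side note, not from the thread: the three portcfg settings in synoinfo.conf are hex bitmasks over DSM's disk-slot numbers, and USB drives only show up if their slots fall inside usbportcfg. The sketch below decodes the 20-disk values quoted above to check they are internally consistent; slot numbering and the decoding itself are my assumptions, not something IG-88 posted.)

```python
# Illustrative sketch: decode the suggested 20-disk synoinfo.conf masks
# and verify the slot ranges don't overlap.  Bit i set = slot i claimed.

maxdisks        = 20
esataportcfg    = 0x0           # no eSATA slots
usbportcfg      = 0xFFFF00000   # bits 20-35 -> 16 USB slots
internalportcfg = 0xFFFFF       # bits 0-19  -> 20 internal slots

def slots(mask):
    """Return the slot indices (bit positions) claimed by a mask."""
    return [i for i in range(64) if (mask >> i) & 1]

assert len(slots(internalportcfg)) == maxdisks    # 20 internal slots
assert slots(internalportcfg) == list(range(20))  # bits 0..19
assert slots(usbportcfg) == list(range(20, 36))   # bits 20..35
assert internalportcfg & usbportcfg == 0          # ranges are disjoint
assert internalportcfg & esataportcfg == 0
```

If that decoding is right, the values themselves are coherent, which would point the missing-USB problem at something other than overlapping masks (e.g. the VM's USB passthrough, as it turned out later in the thread).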
  4. Thanks @IG-88, you're a life saver! I reset the synoinfo.confs and set your suggested values for 20 drives. The result is still that the USB drives don't show up. But they also did not show when I reset the confs. So any idea why the USB drives are still missing? I know for a fact that the USB drives were detected before I first changed the confs to extend the disk limit, because the successful daily backups ran right up to that point. I did not change anything in the VM settings, so this should be fine as well. I also detached the USB drives multiple times.
  5. @IG-88, thanks for the help so far, mate! Really appreciated! So I ran your recommendation:
     root@ubuntu:~# mdadm --examine /dev/sd[abcdf]1 | egrep 'Event|/dev/sd'
     mdadm: No md superblock detected on /dev/sdb1.
     mdadm: No md superblock detected on /dev/sdf1.
     /dev/sda1:
     Events : 13363196
     /dev/sdc1:
     Events : 13363196
     /dev/sdd1:
     Events : 13363196
     What exactly does this mean? EDIT: I just rebooted the live Ubuntu, because various commands like "cat /proc/mdstat" returned nothing. Now the array seems to be active under md0:
  6. @IG-88, thanks for the pointers. Still, I'm not able to mount the drives. These are the results of several commands when running Ubuntu Live:
     root@ubuntu:~# cat /proc/mdstat
     Personalities : [raid6] [raid5] [raid4] [raid1]
     md127 : inactive sdc1[3](S) sdd1[2](S) sda1[1](S)
           7470528 blocks
     md2 : active raid1 sdf3[0]
           100035584 blocks super 1.2 [1/1] [U]
     md4 : active raid1 sdb3[0]
           9761614848 blocks super 1.2 [1/1] [U]
     md3 : active raid5 sda3[3] sdd3[2] sdc3[1]
           11711401088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
     unused devices: <n
  7. @IG-88, thanks for the patience. Yes, totally my bad. Just tried the tutorial; not sure how I need to proceed in my case. I have 4 volumes:
     - vol 1: 100GB SSD for system stuff and documents (basic)
     - vol 2: 3x6TB disks in RAID5 (the one that should be replaced)
     - vol 3: 10TB disk for archive (basic)
     - vol 4: 3x16TB disks in RAID5 (new)
     So I thought I'd run steps 7 & 8 for vol 2, but I get this message at step 8:
     root@ubuntu:# mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1
     mdadm: /dev/md0 assembled from 3 drives - need 4 to start
  8. @IG-88, I restored the original synoinfo.confs and then just edited the maxdisks parameter. Which of course was stupid, but I wouldn't have expected DSM to stop working entirely... So I'll try the tutorial in the link tomorrow. Hope I'll be able to fix it... So what are the params you'd recommend? Don't I need to adjust all 4 params to make them fit 16 disks instead of 12?
     == original conf for 12 disks ==
     esataportcfg="0xff000"
     usbportcfg="0x300000"
     internalportcfg="0xfff"
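(For context, my own sketch of why editing only maxdisks breaks things: the three masks above partition the slot numbers, so adding internal slots without shifting the eSATA/USB ranges makes them overlap. The 16-disk values below are an illustration of that shifting, not the values IG-88 recommended.)

```python
# Sketch only: decode the original 12-disk masks quoted above, then build
# one plausible 16-disk layout by widening the internal range and shifting
# the eSATA/USB ranges up by 4 bits.  Exact values depend on your setup.

orig_internal = 0xFFF      # bits 0-11  -> 12 internal slots
orig_esata    = 0xFF000    # bits 12-19 -> 8 eSATA slots
orig_usb      = 0x300000   # bits 20-21 -> 2 USB slots

def count(mask):
    """Number of slots a mask claims (set bits)."""
    return bin(mask).count("1")

assert (count(orig_internal), count(orig_esata), count(orig_usb)) == (12, 8, 2)

extra = 16 - 12                       # 4 new internal slots
new_internal = (1 << 16) - 1          # 0xffff,    bits 0-15
new_esata    = orig_esata << extra    # 0xff0000,  bits 16-23
new_usb      = orig_usb << extra      # 0x3000000, bits 24-25

# The three ranges must stay disjoint, or DSM misassigns slots.
assert new_internal & new_esata == 0
assert new_esata & new_usb == 0
print(hex(new_internal), hex(new_esata), hex(new_usb))
# -> 0xffff 0xff0000 0x3000000
```

This also matches the symptom described: with maxdisks raised but the old 12-bit internalportcfg kept, the extra disks have no valid slot to land in.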
  9. Thanks @IG-88, just tried maxdisks = 16. But now DSM is not reachable anymore (web-interface, ssh, also not on Synology Assistant). Will have to find a way to fix this first...
  10. I was running an Xpenology VM with DSM 6.2.2-24922 Update 4 under Proxmox. I have 3 HDDs in RAID5, 1 HDD in Basic, and 1 SSD in Basic, plus 3 external USB HDDs for various backups. This was working fine so far. Now I bought 3x16TB HDDs to replace the old RAID5 array, since the disks are getting old and the volume too small. After I connected them, only 2 of the 3 disks were shown in DSM. With the count of disks hitting 12 in Storage Manager > HDD/SSD (even though I don't really understand why Drives 3-6 are skipped), I realised that the max disk limit of 12 was reached and therefo
  11. DSM 6.1.x Loader

     Thanks. No idea what the problem is over here. I have a pretty similar config. The VM behaves as if it lost all boot devices after the controller is detected and the VM rebooted. If I had the same problem with DSM 5.2 the case would be clear, but since that is still working like a charm, I'm very confused. Would be super happy if someone had a hint.
  12. DSM 6.1.x Loader

     Could you post your VM.conf here? What firmware for the controller are you using? I'm using P19, which should be the second-to-last firmware.
  13. DSM 6.1.x Loader

     I'm having trouble passing through (PCI passthrough) my LSI 9211-8i controller (flashed to IT mode) in Proxmox 4.4. The VM boots fine without the controller passed through. As soon as I pass it through, the VM detects the controller as usual (see https://www.dropbox.com/s/kert6x8q6l48n ... 2.PNG?dl=0), but then reboots and gets stuck at "Booting from Hard Disk..." (see attached screenshot).