IG-88


  1. I haven't done this very often and have not seen anything quite like it. /dev/sdc5 looked like it would be easy to force back into the array the way you already tried: stop /dev/md2 and then "force" the drive back into the RAID. That would have been my approach as well, it's the same as what you already tried:
     mdadm --stop /dev/md2
     mdadm --assemble --force /dev/md2 /dev/sd[bcd]5
     mdadm --detail /dev/md2
     Doing more advanced steps would be experimental for me, and I don't like suggesting things here that I haven't tried myself. The procedure for recreating the whole /dev/md2 instead of assembling it is here: https://raid.wiki.kernel.org/index.php/RAID_Recovery Drive 0 is missing (sda5), sdc5 is device 1 (odd, but that's what the status in examine says), sdb5 is device 2 and sdd5 is device 3. Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB) from the status; examine reports that size in 512-byte sectors while --size expects KiB, so divided by two it's 2925531648. I came up with this to do it:
     mdadm --create --assume-clean --level=5 --raid-devices=4 --size=2925531648 /dev/md2 missing /dev/sdc5 /dev/sdb5 /dev/sdd5
     It's a suggestion, nothing more (or is it less than a suggestion? what would be the name for that?).
     Edit: maybe try this before attempting a create; a short sketch for checking the result follows below:
     mdadm --assemble --force --verbose /dev/md2 /dev/sdc5 /dev/sdb5 /dev/sdd5
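     A short sketch of how I would check the state around that forced assemble before touching any data (assuming sdb5, sdc5 and sdd5 are the three surviving members, as in the examine output):
       mdadm --examine /dev/sd[bcd]5 | egrep 'Event|/dev/sd'   # compare event counters before forcing
       cat /proc/mdstat                                        # after the assemble: md2 should come up degraded, 3 of 4 members active
       mdadm --detail /dev/md2                                 # verify raid level, array size and device roles before mounting anything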
  2. What new drives? The 3617 has the same driver level as the 918+ extension, maybe minor differences due to the different kernel (3.10.105 vs. 4.4.59). Both can be done after installing: the SHR part is in the FAQ, and for more disks there are guides and YouTube videos, so no urgent need from my point of view. The >12 disks thing will be a pain when installing big updates where the system partition is replaced. The 918+ is a good choice, it has the newer kernel and NVMe support; the 3617 is just the better choice when it comes to CPU core support (8 vs. 16), and 3615/17 has RAID F1 for all-SSD RAID arrays. Both are not that usual for a power-efficient home NAS. If not done right, a wrong patch in the extra.lzma will screw up a lot of systems, and only a small minority need >12 disks. It's still in the back of my mind to do it, but as long as my own new hardware is waiting to be assembled here and the newly available 24922 source is not used for new drivers, I'm not doing anything with the patch. It's no rocket science and completely independent from the drivers, so if anyone else does it with diff and provides a new patch, I could incorporate it into a test version dedicated for desperate people to try out. (A hedged sketch of the synoinfo.conf edits those guides describe follows below.)
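     For reference only, a rough sketch of the kind of edits the FAQ/guides describe; the key names are the commonly cited ones and the values are purely illustrative, so check the FAQ before touching anything (the edits go into /etc.defaults/synoinfo.conf and get overwritten by the big updates mentioned above):
       # SHR: disable raid group support and enable hybrid raid (assumed keys from the FAQ)
       #supportraidgroup="yes"
       support_syno_hybrid_raid="yes"
       # >12 disks: raise the disk count and widen the internal port bitmask (illustrative values for 16 bays)
       maxdisks="16"
       internalportcfg="0xffff"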
  3. The Broadcom/LSI link looks like it would be the right thing for the 2108 chip, but I can't say whether it can be used with this Supermicro controller.
  4. Initiator Target mode; it's a firmware that has to be flashed when the controller comes with IR (R for RAID) firmware. The IT firmware has no RAID support. IR/IT is shown in the controller BIOS. https://nguvu.org/freenas/Convert-LSI-HBA-card-to-IT-mode/ You can also google "lsi sas it mode". (A sketch of checking the current firmware mode follows below.)
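     If it helps, a rough way to check which firmware an LSI SAS2 HBA currently runs is the sas2flash utility; treat this as a sketch from memory, and whether it applies to a 2108-based RAID controller at all is exactly the open question from the post above:
       sas2flash -listall        # list detected LSI SAS2 controllers
       sas2flash -c 0 -list      # details for controller 0; the firmware product ID shows IT or IR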
  5. Usually nothing special is needed; SataPortMap or anything like that would only be used in case of problems. The only important thing is that the controller has to use IT mode so the disks are seen as individual single devices. (If it ever becomes necessary, a hedged example of where SataPortMap goes is below.)
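     Only if problems show up later, a minimal illustration of where SataPortMap would be set; the value is a placeholder (first SATA controller with 4 ports) and the variable name assumes jun's loader grub.cfg layout:
       # grub.cfg on the loader; normally you can leave this alone
       set sata_args='SataPortMap=4'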
  6. You can unpack and repack the extra.lzma as described here: https://xpenology.com/forum/topic/7187-how-to-build-and-inject-missing-drivers-in-jun-loader-102a/ Ignore the part about chroot, just install the tools with apt-get on a normal Linux system and continue with "2. modify the "synoboot.img"". You could delete the r8168.ko, repack it and copy the new extra.lzma to your loader, replacing the "old" one. In addition you would need to delete the r8168.ko in your installed DSM system in /usr/lib/modules/update. If that does not work and you lose access to your system (e.g. r8169.ko is not working), you can replace your new extra.lzma with the "old" one and reboot the system; the r8168.ko should then be copied to your system again and be used. (A rough sketch of the unpack/repack commands is below.)
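     A rough sketch of the unpack/repack part, assuming extra.lzma is an lzma-compressed cpio archive as in the linked guide; the path of r8168.ko inside the archive is an assumption, so follow the guide for the exact layout:
       mkdir /tmp/extra && cd /tmp/extra
       lzma -dc /path/to/extra.lzma | cpio -idmv                        # unpack the archive
       rm -f usr/lib/modules/update/r8168.ko                            # assumed location of the module inside the archive
       find . | cpio -o -H newc | lzma -9 > /path/to/extra.lzma.new     # repack
       # and on the installed DSM system (path from the post above):
       rm -f /usr/lib/modules/update/r8168.ko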
  7. With IT firmware it should work with all three images, as it's "only" 6 Gbit SAS (LSI 2108 chip), nothing special, when using the latest extra.lzma's and 6.2.2. If there are problems, check whether it has IT firmware; IR firmware does not work the way we need it. Also list the vendor/device ID with lspci (see the sketch below); it should be inside this range: https://pci-ids.ucw.cz/read/PC/1000
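     A quick example of listing the vendor/device ID with lspci; 1000 is the Broadcom/LSI vendor ID from the linked page, and the grep patterns are just illustrations:
       lspci -nn | grep -i '1000:'      # numeric [vendor:device] IDs, LSI/Broadcom is vendor 1000
       lspci -nn | grep -i lsi          # or match on the device name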
  8. First enable CSM / legacy, save the setting, reboot (with CSM/legacy active) and you should then see a corresponding device; naturally the USB stick should be plugged in at that point (boot with CSM active). Depending on the BIOS you only see one device per boot type in the actual boot selection, but there is also a separate selection for the primary device of that type. That usually means the VID/PID of the USB stick has not been entered correctly (see the grub.cfg sketch below). For 6.1 there are no more security updates, for 6.2 there are, so it's not entirely unimportant.
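     For the VID/PID part, a hedged reminder of where they live in jun's loader; the values below are placeholders, use the ones your USB stick actually reports (e.g. via lsusb or the device manager):
       # grub.cfg on the loader's first partition; 0xABCD / 0x1234 are placeholders
       set vid=0xABCD
       set pid=0x1234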
  9. Since CSM is/was not active yet, there is also no boot device (yet). Save with CSM active, reboot, and look at the boot devices again; there should then be an additional one. That should also get 1.03b started.
  10. If flyride has some time to help you here, that's good for you; he's definitely better at this than I am.
  11. I already gave you commands matching your system above (abcd); the examine you did from the other thread does not contain "a", so it's missing the "/dev/sda" information. Please execute this to get the information about the state of the disks:
      mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'
      It also seems you cut some output lines at the beginning of the command (the part where it says /dev/sdb5)?
      mdadm --examine /dev/sd[bcdefklmnopqr]5 >>/tmp/raid.status
      Please be careful, sloppiness might have heavy consequences; be very careful when doing such stuff, some of the commands can't be undone so easily, so it's important to be precise. From what we have now it would be possible to do a recovery of the RAID with /dev/sdc, but let's see what /dev/sda has to offer, maybe nothing at all, because "root@DiskStation:~# ls /dev/sd*" did not show any partitions for /dev/sda; there should be /dev/sda1 /dev/sda2 /dev/sda5. But what we have from /dev/sdc might be enough; the loss would be 44 x 64k (2816 KiB, about 2.75 MiB).
  12. Did you try the recovery type extra/extra2? I did at least have one tester with a N3150 (also Braswell, like the N3160) and it did work for him.
  13. You would just do the same as using the 2nd boot option in the 6.2 loader: a fresh install of DSM while keeping your RAID as it is. So it's less effort to "keep" 6.2.2 and just do the install without knocking out your RAID. That's a usual option on original systems too: when the DSM system is wonky for some reason or does not come up after a system update, you can put the loader (internal USB on original units) into fresh-install mode and then choose either to install just the system (DSM) and keep your data, or to do a complete new install, knocking out your "old" RAID/data (wizard in the web GUI when booting the 2nd boot loader option). https://global.download.synology.com/download/Document/Software/UserGuide/Firmware/DSM/6.2/enu/Syno_UsersGuide_NAServer_enu.pdf page 25. DSM is a custom Linux appliance that handles some things its own way. If you want to find out what's wrong with your upgrade attempt it can take some time, and the more you change, the less predictable things might become, or you might end up in the same situation after a bigger update (like 6.2.2, a full 200-300 MB DSM *.pat file) where the whole system partition is overwritten and only config data is reapplied. The efficient way (imho) is a fresh install, then redoing the shares/permissions in the GUI and reinstalling the plugins.
  14. Transcoding options

    I don't know if it's confirmed that Intel QSV does not work in a DSM VM on ESXi, but what should work (if you have VT-d with your CPU/chipset) is having an NVIDIA/AMD GPU (PCIe), passing this GPU through to a different Linux VM (maybe with Docker added) and using that for transcoding. (A hedged Docker example follows below.)
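
    If Docker inside that Linux VM is the route taken, a minimal sketch of handing an NVIDIA GPU to a transcoding container could look like this; it assumes the NVIDIA driver plus nvidia-container-toolkit are installed in the VM, and jellyfin is only an example of a container that can use the GPU for transcoding:
      # example only: media server container with GPU access for hardware transcoding
      docker run -d --name jellyfin --gpus all \
        -p 8096:8096 \
        -v /srv/media:/media \
        jellyfin/jellyfin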