Everything posted by IG-88

  1. it does have a serial port, so you could use a null modem cable to have a look at the console and see what's going on. the loader in its original state comes with the dsm 6.2.0 kernel and drivers; when installing 6.2.2 it updates the kernel, and as this version has very specific changes rendering the old drivers (in extra.lzma) useless, you need to copy a new extra.lzma to the loader https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/ also try to boot a live linux to check your system and network. if you can't get things to work and need access to your data, you can install open media vault to a single empty disk, boot that one up, and the other synology disks should be recognized as a raid array (it's just the normal mdadm and lvm from linux). omv can work as a normal nas with the usual smb/cifs, and as long as you don't delete any synology-specific stuff like /volume1/@appstore you can use it to handle (or back up) your data
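     a minimal sketch of what that looks like from a shell on the omv system, assuming the data array assembles as /dev/md2 and the volume uses synology's usual vg1000/lv lvm naming (that naming is an assumption, check your own device names first):
     cat /proc/mdstat                  # the synology data array should show up here
     vgchange -ay                      # activate the lvm volume group sitting on the raid
     mount -o ro /dev/vg1000/lv /mnt   # mount read-only first, to be safe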
  2. https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/ Note 3: Please check you have the right VID/PID prior to proceeding. If you get the following error "Failed to install the file. The file is probably corrupted. (13)" it most certainly means your VID and/or PID is/are wrong. If you still have the same error message after verifying the VID/PID then try another USB drive. as a possible alternative you can try to install on another system and then move the usb and disk to your destination computer (dsm already installed on the disk)
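     to read the VID/PID off the stick, boot any live linux with the stick plugged in and run lsusb; the two hex values after "ID" are vendor:product, e.g. a line like
     Bus 001 Device 004: ID 0951:1666 Kingston Technology ...
     means vid=0x0951 and pid=0x1666 go into grub.cfg (the kingston ids here are just an example)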
  3. maybe try loader 1.04b as it comes from jun, don't change anything; it should at least boot and you should find it in the network (vid/pid gets important when installing the *.pat file). if that does not work you can try the same with 1.03b for 3615
  4. https://xpenology.com/forum/forum/83-faq-start-here/ https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
  5. ignore the gui, it can't help you with your real raid5 problem. md0 and md1 are the system and swap partitions as a raid1; as long as even one drive is working they are not going to fail, and you can repair them later within the gui. stick to your real problem, that's what's important - the raid1 problems are easy to repair later. i would not risk letting any automatic mechanism you don't know do anything. from what i've found on the internet it seems the superblocks do not match, and you might have to recreate the array by resetting the superblocks and recreating it (with the devices in the same order as before); also, in the examine output there is a Recovery Offset flag set for sdc. i'd suggest finding a second opinion. (in theory) my next steps would be like this (and that's something you can't undo, so be careful):
     mdadm --stop /dev/md2
     mdadm --zero-superblock /dev/sd[bcd]5
     mdadm --create --assume-clean --level=5 --raid-devices=4 --size=2925531648 /dev/md2 missing /dev/sdc5 /dev/sdb5 /dev/sdd5
     mdadm --detail /dev/md2
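     if the create goes through, don't write to the volume right away; a cautious check, assuming the usual synology lvm layout on top of md2 (vg1000/lv is an assumption, verify with lvs):
     cat /proc/mdstat                  # array should be up with 3 of 4 devices
     vgchange -ay                      # activate the volume group on the recreated array
     mount -o ro /dev/vg1000/lv /mnt   # read-only mount to see if the data is intact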
  6. more likely another problem than the nic. what loader and type (there are three)? if 1.04b/918+, what extra/extra2 type did you use (there are three: syno, std, recovery)? did you also copy the new zImage and rd.gz to the loader?
  7. that was what you already did with flyride, so nothing new to expect if you repeat it. in my edit from the last post i suggested a slightly different assemble try: it gives the /dev/sdX devices in the order they appear in the examine output, sdc as the 1st and the other two after it. i'm not sure if that makes a difference when using --force; it also contains --verbose, so maybe we get more information when it fails to assemble. it's just a slightly different variation of what you already tried, it can't make things worse, so you should try this next:
     mdadm --stop /dev/md2
     mdadm --assemble --force --verbose /dev/md2 /dev/sdc5 /dev/sdb5 /dev/sdd5
     mdadm --detail /dev/md2
     the --create command would only come into play if the above does not work and we can't figure out why - i would like to know why --assemble --force does not work as it should before going for --create. yes, the other try with --verbose should be ok once you are sure where your problem is located and why sdc also dropped out. the removed sda seems to be a bad drive already and is not used anymore. if the s.m.a.r.t. info of the other three drives is ok it might be safe to shut down, but if i were in your place i would leave it running in the state it's in now; if there were indications that ram, board, controller or psu are the source of the problem i would shut down - a stable system is key for a recovery. even if the assemble or create is successful you would not shut down, maybe a reboot. for me it's still unclear why sdc dropped out of the raid. did you check the logs to see when sda and sdc dropped out? did they drop at the same time, or had sda already failed a while ago without you noticing? in a more professional recovery environment (much more money involved) i guess one would make an image file from every disk and work with those (on a tested, stable system)
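     a quick way to check the s.m.a.r.t. state of the remaining drives (smartctl is present on most linux systems; on dsm the binary path may differ):
     smartctl -H /dev/sdb    # overall health verdict
     smartctl -A /dev/sdb    # attributes - watch reallocated/pending sector counts
     and repeat for sdc and sdd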
  8. i think it's more about the brand names like synology and dsm; if there were a lot of unhappy people with their own hardware (some overly cheap and already used for >5 years) and driver problems, that would reflect on the brand. i guess one of the reasons they let us do our thing here is the different brand name - "xpenology" sounds different enough. not much to gain, more to lose, when going after a free open source distribution; they could be much more unpleasant if they wanted to, it's not too bad the way it is. they need to support "older" business units like the 3617 or similar for at least 5 years if they want to compete with established server brands, so either way we will have continued (security) updates for 6.2 or 7.0. also there could be a new hack replacing 3615; 918+ was also once new and now takes a lot of the xpenology installs (i guess). we will see what happens when 7.0 is out. how will you know? it's too early, even less than speculation, but if some people feel better selling they might do it now before others do - might be a good catch for others that don't always need the latest, a Gen8 with 6.2.2 works for a lot of people. yes, and you don't know what hardware will be in favor next; i got an intel 9100. from what's inside the dsm source files they at least experiment with geminilake and coffeelake. exactly, what killer feature should 7.0 have that we need here? most of the stuff i've seen from 7.0 was for business customers. a fully working 6.2.2 (without the need of a valid serial) is pretty good, and if there were a custom kernel now that we have source for 24922 ... that might be better than having 7.0
  9. i haven't done this that often and have not seen anything like it. /dev/sdc5 looked like it would be easy to force back into the array, like you already tried, by stopping /dev/md2 and then "forcing" the drive back into the raid. that would have been my approach, it's the same as you already tried:
     mdadm --stop /dev/md2
     mdadm --assemble --force /dev/md2 /dev/sd[bcd]5
     mdadm --detail /dev/md2
     doing more advanced steps would be experimental for me and i don't like suggesting stuff i haven't tried myself before. here is the procedure for recreating the whole /dev/md2 instead of assembling it: https://raid.wiki.kernel.org/index.php/RAID_Recovery drive 0 is missing (sda5), sdc5 is device 1 (odd, but it says so in the status in examine), sdb5 is device 2 and sdd5 is device 3. Used Dev Size : 5851063296 (2790.00 GiB 2995.74 GB) from the status -> divided by two it's 2925531648. i came up with this to do it:
     mdadm --create --assume-clean --level=5 --raid-devices=4 --size=2925531648 /dev/md2 missing /dev/sdc5 /dev/sdb5 /dev/sdd5
     it's a suggestion, nothing more (or is it less than a suggestion? - what would be the name for that?) edit: maybe try this before trying a create:
     mdadm --assemble --force --verbose /dev/md2 /dev/sdc5 /dev/sdb5 /dev/sdd5
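     the divide-by-two, in case anyone wonders: --examine prints Used Dev Size in 512-byte sectors while --create --size expects kibibytes, so 5851063296 sectors x 512 B = ~2995.74 GB, and 5851063296 / 2 = 2925531648 KiB is the same amount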
  10. what new drivers? the 3617 has the same driver level as the 918+ extension, maybe minor differences because of the different kernel (3.10.105 vs. 4.4.59). both can be done after installing; the SHR is in the faq, for more disks there are guides and youtube videos, no urgent need from my point of view. the >12 disks thing will be a pain when installing big updates where the system partition is replaced. 918+ is a good choice, has a newer kernel and nvme support; the 3617 is just the better choice when it comes to cpu core support (8 vs. 16), and 3615/17 have raidf1 for all-ssd raid arrays - both not that usual for a power-efficient home nas. if not done right, a wrong patch in the extra.lzma will screw up a lot of systems, and only a small minority need >12 disks. it's still in the back of my mind to do it, but as long as my own new hardware waits to be assembled here and the newly available 24922 source is not used for new drivers, i'm not doing anything with the patch. it's no rocket science and completely independent from the drivers, so if anyone else does it with diff and provides a new patch i could incorporate it into a test version dedicated for desperate people to try out
  11. the broadcom/lsi link looks like it would be the right thing for the 2108 chip, but i can't say if it's possible to use with this supermicro controller
  12. initiator target mode. it's a firmware that has to be flashed when the controller has IR (R for RAID) firmware; in the IT firmware there is no raid support. IR/IT is shown in the controller bios https://nguvu.org/freenas/Convert-LSI-HBA-card-to-IT-mode/ you can google "lsi sas it mode"
  13. usually nothing special needed; SataPortMap or anything like that would only be used in case of problems. the only important thing is that the controller has to use IT mode so disks are seen as individual single devices
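     if you ever do need it, SataPortMap is just a kernel argument in the loader's grub.cfg; from memory a jun loader grub.cfg carries a line roughly like this (values are only an example, check your own loader's grub.cfg):
     set sata_args='DiskIdxMap=0C SataPortMap=4 SasIdxMap=0'
     where SataPortMap=4 would tell dsm the first sata controller has 4 ports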
  14. you can unpack and repack the extra.lzma like here https://xpenology.com/forum/topic/7187-how-to-build-and-inject-missing-drivers-in-jun-loader-102a/ ignore the part about chroot, just install the tools with apt-get on a normal linux system and continue with "2. modify the "synoboot.img"". you could delete the r8168.ko, repack it and copy the new extra.lzma to your loader, replacing the "old" one. in addition you would need to delete the r8168.ko in your installed dsm system in /usr/lib/modules/update. if that does not work and you lose access to your system (like r8169.ko not working), you can replace your new extra.lzma with the "old" one and reboot the system; the r8168.ko should then be copied to your system again and be used
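     the gist of the unpack/repack, assuming extra.lzma is a lzma-compressed cpio archive as described in that tutorial (paths are examples):
     mkdir extra && cd extra
     xz -dc --format=lzma ../extra.lzma | cpio -idmv                         # unpack
     rm $(find . -name r8168.ko)                                             # drop the driver
     find . | cpio -o -H newc | xz -z --format=lzma -9 > ../extra_new.lzma   # repack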
  15. with IT firmware it should work with all three images as it's "only" 6GBit SAS, lsi 2108 chip, nothing special, when using the latest extra.lzma's and 6.2.2. if there are problems, check that it has IT firmware - IR firmware does not work the way we need it - and list the vendor/device id with lspci, it should be inside this range https://pci-ids.ucw.cz/read/PC/1000
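     for example (the exact ids will differ on your card):
     lspci -nn | grep -i lsi
     prints the ids in brackets at the end of the line, something like [1000:0072]; the [1000:xxxx] part is the vendor/device id to compare against that list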
  16. first enable csm/legacy, save the settings, reboot (with csm/legacy active) and then you should see a corresponding device - and of course the usb has to be plugged in at that point (boot with active csm). depending on the bios you only see one device per boot type in the actual boot selection, but then there is also a selection for the primary device of that type. that usually means the vid/pid of the usb is not entered correctly. for 6.1 there are no more security updates, for 6.2 there are, so it's not entirely unimportant
  17. since csm is not active yet there is also no boot device (yet); activate csm, save, reboot, look at the boot devices again, there should then be an additional one. that should get 1.03b going too
  18. if flyride has some time to help you here that's good for you, he's definitely better at this than i am
  19. i already gave you commands matching your system above (abcd); the examine you did from the other thread does not contain "a", so it's missing the /dev/sda information. so please execute this to get the information about the state of the disks:
     mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'
     it also seems you cut some output lines at the beginning of the command (the part where it says /dev/sdb5)?
     mdadm --examine /dev/sd[bcdefklmnopqr]5 >>/tmp/raid.status
     please be careful, sloppiness might have heavy consequences; be very careful when doing such stuff, some of the commands can't be undone so easily, it's important to be precise. from what we have now it would be possible to do a recovery of the raid with /dev/sdc, but let's see what /dev/sda has to offer - maybe nothing at all, because root@DiskStation:~# ls /dev/sd* did not show any partitions for /dev/sda (there should be /dev/sda1 /dev/sda2 /dev/sda5). but what we have from /dev/sdc might be enough; the loss would be 44 x 64k chunks, i.e. 2.75 MByte
  20. did you try the recovery type extra/extra2? i did at least have one tester with an N3150 (also braswell, like the N3160) and it did work for him
  21. you would just do the same as using the 2nd boot option in the 6.2 loader: a fresh install of dsm while keeping your raid as it is. so it's less effort to "keep" 6.2.2 and just do the install without knocking out your raid - that's a usual option on original systems too: when the dsm system is wonky for some reason or does not come up after a system update, you can put the loader (internal usb on original units) into fresh install mode and you can choose to install just the system (dsm) and keep your data, or to do a completely new install knocking out your "old" raid/data (wizard in the web gui when booting the 2nd boot loader option) https://global.download.synology.com/download/Document/Software/UserGuide/Firmware/DSM/6.2/enu/Syno_UsersGuide_NAServer_enu.pdf page 25. dsm is a custom linux appliance that handles some things its own way; if you want to find out what's wrong with your upgrade attempt it can take some time, and the more you change, the less predictable things get - or you might end up in the same situation after a bigger update (like 6.2.2, a full 200-300MB dsm *.pat file) where the whole system partition is overwritten and only config data is reapplied. the efficient way (imho) is a fresh install, redoing the shares/permissions in the gui and reinstalling the plugins
  22. i don't know if it's confirmed that intel QSV does not work in a dsm vm on esxi, but what should work (if you have vt-d with your cpu/chipset) is having an nvidia/amd gpu (pcie), giving this gpu to a different linux vm (maybe with added docker) and using that for transcoding
  23. that is normal and it's what the author of that message wanted to convey - nothing more shows up at that point - because the output goes to the serial console and not to the monitor; if you have a null modem cable at hand, and putty, you could see what happens. not really, since it's normal that you see nothing substantial on the monitor except the message that nothing more is coming. just to find the box in the network you don't even have to change anything on the loader, that should work with the settings it already has; the important vid/pid only becomes relevant when installing (i.e. after finding it in the network), the mac is for most people only important for wol, and the sn is only checked for validity by certain plugins (pure nas operation and installing also work without a valid sn, and the sn can still be changed later). 1.03b does not work with uefi, it needs bios/csm, and with uefi you also have to pick the right (non-uefi) boot device in csm - but in theory you already ruled out that problem with loader 1.02b, which can do both uefi and bios/csm. 3615 is the safest variant since it also works with "old" processors; with your cpu, 918+ would work too.
     - check on your dhcp server whether an address was obtained
     - check the network cable and that it's seated correctly on both ends (no joke, this has come up here several times)
     - vlans active?
     - boot a live linux and see whether you have network then
  24. what version of the loader? what does "does not work properly" mean? low performance, or does it not work at all (when booting the 1st time or after installing)? what driver version does your test debian use? look with modinfo
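     for example, if it's the realtek nic in question (the module name is assumed, use whatever your card actually loads):
     modinfo r8168 | grep -i version
     or, to see which driver and version a running interface uses: ethtool -i eth0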