XPEnology Community

IG-88

Developer
  • Posts

    4,645
  • Joined

  • Last visited

  • Days Won

    212

Everything posted by IG-88

  1. DSM 6.2.3 will not work with these drivers; if you install or update you will fall back to the "native" drivers that come with DSM - like no realtek nic on 3615/17 (but on 918+), no mpt2/mpt3sas on 918+, no broadcom onboard nic on HP MicroServer or Dell servers. Read this if you want to know about "native" drivers: https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/ Synology reverted the changes made in 6.2.2, so the old drivers made for 6.2.(0) are working again, and there are new drivers made for 6.2.3 too (we got recent kernel source from synology lately): https://xpenology.com/forum/topic/28321-driver-extension-jun-104b-for-dsm623-for-918/

This is the new 2nd test version of the driver extension for loader 1.04b and 918+ DSM 6.2.2. Network drivers for intel and realtek are now all latest and the same as in 3615/17 from mid december (the broadcom tg3 driver also works). It tries to address the problems with the different GPUs by having 3 versions of the pack. Additional information and packages for 1.03b and 3615/3617 are in the lower half under a separate topic (i will unify the 918+ and 3615/17 parts later, as they are now on the same level again).

Mainly tested as fresh install with the 1.04b loader and DSM 6.2.2. There are extra.lzma and extra2.lzma in the zip file - you need both. The "extra2" file is used when booting the 1st time; under normal working conditions extra.lzma is used (i guess also for normal updates - jun left no notes about that, so i had to find out and guess).
Hardware in my test system using additional drivers: r8168, igb, e1000e, bnx2x, tn40xx, mpt2sas. The rest of the drivers just load without any comment on my system; i've seen drivers crash only when real hardware is present, so be warned. I assume any storage driver besides ahci and mpt2sas/mpt3sas to be non-working, so if you use any other storage than listed before you should do a test install with a new usb and a single empty disk to find out, before doing anything with your "production" system. I suggest testing with a new usb and an empty disk; if that's ok, then you have a good chance for updating.

For updating it's the same as below in the 3615/17 section with case 1 and 2, but you have extra.lzma and extra2.lzma and you will need to use https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS918+_24922.pat Most important is to have zImage and rd.gz from the "DSM_DS918+_24922.pat" file (it can be opened with 7zip) together with the new extra/extra2 - same procedure as for the new extra for 3615/17 (see below). All 4 files - extra.lzma, extra2.lzma (both extracted from the downloaded zip), zImage and rd.gz - go to the 2nd partition of the usb (or of the image when using osfmount), replacing the 4 files there. If you want the "old" files of the original loader back, you can always use 7zip to open jun's img file and extract the original files for copying to usb.

If you really want to test with a running 6.2.x system, you should empty /usr/lib/modules/update/ and /usr/lib/firmware/i915/ before rebooting with the new extra/extra2:

rm -rf /usr/lib/modules/update/*
rm -rf /usr/lib/firmware/i915/*

The loader will put its files in those locations when booting again; this step prevents having old incompatible drivers there, as the loader only replaces files that are listed in rc.modules, and in the case of "syno" and "recovery" there are fewer entries, leaving out i915 related files. As long as the system boots up, this cleaning can be done with the new 0.8 test version.

There are 3 types of driver package; all come with the same drivers (latest nic drivers for realtek and intel) and the same conditions/limitations as the 3615/17 driver set from mid december (mainly storage untested; ahci and mpt3sas are tested).

1. "syno" - all extended i915 stuff removed and some firmware added for max compatibility, mainly for "iGPU gen9" (Skylake, Apollo Lake and some Kaby Lake) and older, and for cases where std did not work. i915 driver source date: 20160919. Positive feedback for J3455, J1800 and N3150.

2. "std" - with jun's i915 driver from 1.04b (tested with a coffee lake cpu from q2/2018), needed for anything newer than kaby lake, like gemini lake, coffee lake, cannon lake, ice lake. i915 driver source date: 20180514. As i had no source, the i915 driver is the same binary as in jun's original extra/extra2. On my system it's working with a G5400 - not just /dev/dri present, but tested completely by really transcoding a video - so it works in general but might fail in some(?) cases. 8th/9th gen cpus like i3/i5 8100/9400 also produce a /dev/dri; tested with a 9400 and it does work.

3. "recovery" - mainly for cases where the system stops booting because of the i915 driver (seen on one N3150 braswell). It overwrites all gpu drivers and firmware with 0-size files on booting so they can't be loaded anymore. It should also work for any system above, but guarantees not having /dev/dri, as even the firmware used by dsm's own i915 driver is invalid (on purpose). If that does not work, it's most likely a network driver problem. Safe choice, but no transcoding support.

Start with syno, then std; the last resort would be recovery. Anything with a kernel driver oops in the log is "invalid", as it will result in shutdown problems - so check /var/log/dmesg. The often seen Gemini Lake GPUs might work with "std", pretty sure not with "syno"; most (all?) testers with gemini lake were unsuccessful with "std", so if you don't like experimenting and need hardware transcoding, you should wait with the version you have.

The "_mod" at the end of the loader name below is a reminder that you need to do some "modding": make sure you have zImage and rd.gz from DSM 6.2.2 on your usb for booting - the new extra.lzma will not work with older files.

0.8_syno ds918+ - extra.lzma/extra2.lzma for loader 1.04b_mod ds918+ DSM 6.2.2 v0.8_syno
https://gofile.io/d/mVBHGi
SHA256: 21B0CCC8BE24A71311D3CC6D7241D8D8887BE367C800AC97CE2CCB84B48D869A
Mirrors by @rcached
https://clicknupload.cc/zh8zm4nc762m
https://dailyuploads.net/qc8wy6b5h5u7
https://usersdrive.com/t0fgl0mkcrr0.html
https://www104.zippyshare.com/v/hPycz12O/file.html

0.8_std ds918+ - extra.lzma/extra2.lzma for loader 1.04b_mod ds918+ DSM 6.2.2 v0.8_std
https://gofile.io/d/y8neID
SHA256: F611BCA5457A74AE65ABC4596F1D0E6B36A2749B16A827087D97C1CAF3FEA89A
Mirrors by @rcached
https://clicknupload.cc/h9zrwienhr7h
https://dailyuploads.net/elgd5rqu06vm
https://usersdrive.com/peltplqkfxvj.html
https://www104.zippyshare.com/v/r9I7Tm0K/file.html

0.8_recovery ds918+ - extra.lzma/extra2.lzma for loader 1.04b_mod ds918+ DSM 6.2.2 v0.8_recovery
https://gofile.io/d/4K3WPE
SHA256: 5236CC6235FB7B5BB303460FC0281730EEA64852D210DA636E472299C07DE5E5
Mirrors by @rcached
https://clicknupload.cc/uha07uso7vng
https://dailyuploads.net/uwh710etr3hm
https://usersdrive.com/ykrt1z0ho7cm.html
https://www104.zippyshare.com/v/7gufl3yh/file.html

!!! still network limit in 1.04b loader for 918+ !!!
atm 918+ has a limit of 2 nic's (like the original hardware). If more than 2 nic's are present and you can't find your system on the network, you will have to try after boot which nic is "active" (not necessarily the onboard one), or remove the additional nic's and look into this after installation. You can change synoinfo.conf after install to support more than 2 nic's (with 3615/17 it was 8). Keep in mind that a major update will reset it to 2 and you will have to change it manually again - same as when you configure more disks than jun's default setting. More info is in the old thread about 918+ DSM 6.2.(0) and here: https://xpenology.com/forum/topic/12679-progress-of-62-loader/?do=findComment&comment=92682 I might change that later so it is set the same way as more disks are set by jun's patch - syno's max disk default for this hardware was 4 disks, but jun's patch changes it on boot to 16!!! (so if you have 6+8 sata ports, you should not have the problems when updating that you used to have with 3615/17).

Basically what is on the old page is valid, so no sata_*, pata_* drivers. Here are the drivers in the test version listed as kernel modules: The old thread as reference. !!! especially read "Other things good to know about DS918+ image and loader 1.03a2:" - it is still valid for the 1.04b loader !!!

This section is about drivers for the ds3615xs and ds3617xs image/dsm version 6.2.2 (v24922). Both use the same kernel (3.10.105) but have different kernel options set, so don't swap or mix; some drivers might work on the other system, some won't at all (kernel oops). It is a test version and it has limits in its storage support - read carefully, and only use it when you know how to recover/downgrade your system !!!
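The nic-limit change described above boils down to one value in synoinfo.conf. A hedged sketch - `maxlanport` is the field DSM uses for the nic count, but test on a copy of the file first, and remember (as noted above) the change is reset by major updates:

```shell
# Raise the nic limit in a synoinfo.conf (assumed field: maxlanport).
# On a real box the file is /etc.defaults/synoinfo.conf - work on a
# copy first and re-apply after every major DSM update.
set_maxlanport() {
  local conf="$1" count="$2"
  sed -i "s/^maxlanport=.*/maxlanport=\"$count\"/" "$conf"
}
```

Example: `cp /etc.defaults/synoinfo.conf /tmp/synoinfo.conf && set_maxlanport /tmp/synoinfo.conf 4`, then diff against the original before putting it back.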
!!! do not use this to update when you have a storage controller other than AHCI, the LSI MPT SAS 6Gb/s host adapters SAS2004/SAS2008/SAS2108/SAS2116/SAS2208/SAS2308/SSS6200 (mpt2sas) or the LSI MPT SAS 12Gb/s host adapters SAS3004/SAS3008/SAS3108 (mpt3sas - only in 3617). Instead you can try a fresh "test" install with a different usb flash drive and a single empty disk on the controller in question, to confirm whether it works (most likely it will not, reason below) !!!

The reason why the 1.03b loader from usb does not work when updating from 6.2.0 to 6.2.2 is that the kernel from 6.2.2 has different options set, which makes the drivers from before that change useless (it's not a protection or anything). The dsm update process extracts the new files for the update to HDD, writes the new kernel to the usb flash drive and then reboots - resulting (on usb) in a new kernel plus an extra.lzma (jun's original from loader 1.03b for dsm 6.2.0) that now contains incompatible drivers. The only drivers working reliably in that state are the drivers that come with dsm from synology. Besides the different kernel options there is another thing: nearly none of the newly compiled scsi and sas drivers worked. They only load as long as no drive is connected to the controller.
ATM I assume there were some changes in the kernel source about counting/indexing the drives for scsi/sas; as we only have the 2.5 year old dsm 6 beta kernel source, there is hardly a way to compensate. People with 12Gbit SAS controllers from LSI/Avago are in luck: the 6.2.2 of 3617 comes with a much newer mpt3sas driver than 6.2.0 and 6.2.1 (13.00 -> 21.00), confirmed install with a SAS3008 based controller (ds3617 loader).

Drivers not in this release: ata_piix, mptspi (aka lsi scsi), mptsas (aka lsi sas) - these are drivers for extremely old hardware and mainly important for vmware users. The vmw_pvscsi is also confirmed not to work, bad for vmware/esxi too. The only alternative scsi driver is buslogic; the "normal" choice for vmware/ESXi would be SATA/AHCI.

I removed all drivers confirmed not to work from rc.modules so they will not be loaded, but the *.ko files are still in the extra.lzma and will be copied to /usr/modules/update/, so people who want to test can load a driver manually after booting. These drivers will be loaded and are not tested yet (likely to fail when a disk is connected): megaraid, megaraid_sas, sx8, aacraid, aic94xx, 3w-9xxx, 3w-sas, 3w-xxxx, mvumi, mvsas, arcmsr, isci, hpsa, hptiop (for some explanation of what hardware this means, look into the old thread for loader 1.02b).

virtio drivers: i added virtio drivers; they will not load automatically (for now). The drivers can be tested, and when confirmed working we will see if there are any problems when they are loaded by default along with the other drivers. They should be in /usr/modules/update/ after install.

To get a working loader for 6.2.2 you need the new kernel (zImage and rd.gz) and a (new) extra.lzma containing new drivers (*.ko files). zImage and rd.gz are copied to usb when updating DSM, or they can be extracted from the 6.2.2 DSM *.pat file and copied to usb manually - and that's the point where the ways split up: case 1: update from 6.2.0 to 6.2.2; case 2: fresh install with 6.2.2 or "migration" (aka upgrade) from 6.0/6.1.

Case 1: update from 6.2.0 to 6.2.2
Basically you semi-brick your system on purpose by installing 6.2.2, and when booting fails you just copy the new extra.lzma to your usb flash drive - by plugging it into a windows system (which can only mount the 2nd partition, the one that contains the extra.lzma), or by mounting the 2nd partition of the usb on a linux system. Restart, and it will finish the update process; when internet is available it will (without asking) install the latest update (at the moment update4) and reboot. So check the DSM web interface to see what's going on, or if in doubt wait 15-20 minutes, check if the hdd led's are active, and check the web interface or synology assistant; if there is no activity for that long, power off and start the system - it should work now.

Case 2: fresh install with 6.2.2 or "migration" (aka upgrade) from 6.0/6.1
Pretty much the normal way as described in the tutorial for installing 6.x (jun's loader, osfmount, Win32DiskImager), but in addition to copying the extra.lzma to the 2nd partition of the usb flash drive, you need to copy the new kernel of dsm 6.2.2 too, so that the kernel (booted from usb) and the extra.lzma "match". You can extract the 2 files (zImage and rd.gz) from the DSM *.pat file you download from synology:
https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3615xs_24922.pat
or
https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3617xs_24922.pat
These are basically zip files, so you can extract the two files in question with 7zip (or other programs). You replace the files on the 2nd partition with the new ones and that's it; install as in the tutorial. In case of a "migration" the dsm installer will detect your former dsm installation and offer to upgrade (migrate) it; you will usually lose plugins, but keep users/shares and network settings.

DS3615: extra.lzma for loader 1.03b_mod ds3615 DSM 6.2.2 v0.5_test
https://gofile.io/d/iQuInV
SHA256: BAA019C55B0D4366864DE67E29D45A2F624877726552DA2AD64E4057143DBAF0
Mirrors by @rcached
https://clicknupload.cc/h622ubb799on
https://dailyuploads.net/wxj8tmyat4te
https://usersdrive.com/sdqib92nspf3.html
https://www104.zippyshare.com/v/Cdbnh7jR/file.html

DS3617: extra.lzma for loader 1.03b_mod ds3617 DSM 6.2.2 v0.5_test
https://gofile.io/d/blXT9f
SHA256: 4A2922F5181B3DB604262236CE70BA7B1927A829B9C67F53B613F40C85DA9209
Mirrors by @rcached
https://clicknupload.cc/0z7bf9stycr7
https://dailyuploads.net/68fdx8vuwx7y
https://usersdrive.com/jh1pkd33tmx0.html
https://www104.zippyshare.com/v/twDIrPXu/file.html
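Since all the packages above ship with SHA256 checksums, it is worth verifying a download before copying anything to the usb stick. A small sketch (function name is an example; the hashes to check against are the ones posted with each download):

```shell
# Verify a downloaded file against a posted SHA256 checksum.
# Prints OK on a match, MISMATCH (and returns 1) otherwise.
verify_sha256() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "$file" | awk '{print toupper($1)}')
  if [ "$actual" = "$(echo "$expected" | tr '[:lower:]' '[:upper:]')" ]; then
    echo "OK: $file"
  else
    echo "MISMATCH: $file"
    return 1
  fi
}
```

Usage: `verify_sha256 extra.lzma.zip <SHA256 from the post>` - only unpack the zip if it prints OK.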
  2. IG-88

    DSM 6.2 Loader

    i once had a case where i could not install dsm, but when using a disk installed on another system it did work - i just transferred usb and hdd to the box with the install problems. 2nd option would be 1.03b 3615 with the new extra.lzma, but it's not finished yet; turns out my new nas hardware runs fine with 918+ but freezes with 1.03b 3615 after starting the boot process (even the serial console does not show anything after grub), so i need to get some older hardware for testing
  3. it's rather the other way around: if esxi has its fingers on it, you can't pass it through to a vm as a pcie device. you can easily test that - take an empty disk, attach it, make it accessible to esxi as vmfs, and afterwards you should no longer see the controller as passthrough capable. RDM (raw device mapping) would in theory also work without reinstalling; every sector is mapped through, but you lose things like smart monitoring of the disk in dsm, and i don't know how it affects performance. dvb: my last approach with vdr was a network-capable tuner (4 tuners), but i never implemented it. sat>ip (dedicated device) -> tvheadend (autorec aka autotimer function) -> client (e.g. raspi), or vdr directly with sat>ip in a docker. you can also configure a piece of software to use only certain tuners, e.g. tvheadend has two and vdr uses two (each dedicated, by specifying the tuner in the software). i had bought a Telestar Digibit R1 for a bit over ~110€ (4 tuners) and booted it with a firmware more digestible for vdr from a usb stick (no change to the device needed; if present, it boots from external usb). in theory you can also turn your old vdr hardware into a sat>ip box (demote it to a network tuner) with the right software https://www.debacher.de/wiki/Fernsehen_mit_SAT-IP https://github.com/catalinii/minisatip https://www.heise.de/preisvergleich/?cat=satrecv&sort=p&xf=273_S2~275_SAT-IP-Server but i shelved all of that last year; without (live) tv and canned content for x years of tv consumption, you have a lot of time for other things
  4. the driver used, 1.3.3, is the latest available (i used it for the extra.lzma). you can try to check whether you can manually steer the speed with ethtool: ethtool --show-priv-flags - the source code shows that it can also handle 100mbit, so you might need to contact the vendor of your product or try different 1Gb nic's
  5. in theory yes: if the disks hang off a controller that is "optional" for esxi (one esxi did not boot from), you can pass that controller through to the vm (vt-d as a cpu feature is required) and use the disks as before - whether that is particularly effective is another question, since you probably want to use the disks under esxi as well. the micro boxes don't have many options there, as they are limited mechanically and by the psu when it comes to disks; so it depends on how many disks you already have and what you want to do with esxi
  6. careful when doing it from windows - it needs an editor that's aware of unix text files; notepad++ is ok. /etc.defaults/synoinfo.conf is the file. activate ssh in the dsm web gui, login with putty, give yourself root with "sudo su" and edit the file with nano or vi (both editors). if you have never used putty or nano, consider asking a friend for help - you are messing with the system at a low level, doing something wrong will have consequences, and working as root is "you asked for it, you got it", even if it damages/destroys your system
  7. disable C1E for the processor in the bios. AMD is not used by synology itself and the kernel is optimized for intel cpu's, but in most cases amd works too. the 918+ image will probably not run with it (you can read up on that in the forum), so start with the 3615 or 3617 image with loader 1.02b and dsm 6.1. keep your hands off 6.2 until the new driver package (extra.lzma) is finished; i'm not sure what network drivers are in there, and if you end up on 6.2 without the extra drivers, you may no longer be able to reach the box over the network. it continues here:
  8. just "revert" the way it's activated
  9. you should have followed him in his attempts - he gave up on it not knowing what went wrong. he was too optimistic about a lot of things and ran into one problem after another; xpenology is not easy to handle when you want something out of the ordinary, even just a kernel driver can be a problem. quicknick's statement was interesting, but he did not explain anything about how he circumvented the limits. maybe it does not need any special sauce to get it working - maybe you just have to use the info he gave about the numbers of disks? it would just take some time to create a vm, tweak the config, add 28 virtual disks and see whether a raid set can be created/degraded and rebuilt (for a complete test). if that does not work out, then quicknick's 3.0 loader will tell - it's all script, no compiled source code, so anyone can read it. at 1st glance there was nothing special, just the normal variables for synoinfo.conf and a check for the right max drive numbers (but there might be more to it somewhere):

##########################################################################################
### Max Supported Disks ##################################################################
### set maxdisks=48 will mean that XPEnology will now see 48 drives in DSM. ##############
### No changes to /etc.defaults/synoifno.conf needed. Changes are made during boot. #####
### Acceptable maxdisk values: 12,16,20,24,25,26,28,30,32,35,40,45,48,50,55,58,60,64 #####
### default value is 12. leave blank for 12 disks. ######################################
##########################################################################################
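For context on what such a loader has to touch: besides maxdisks, DSM's synoinfo.conf holds port maps as hex bitmasks, one bit per slot (fields like internalportcfg). A sketch of the arithmetic only - the numbers below are illustrative assumptions, not values taken from quicknick's loader:

```shell
# synoinfo.conf-style port map: a hex bitmask with one bit per disk slot.
# For N internal disks the mask is the lowest N bits set.
# Illustrative only - check the real fields on your own box before editing.
mask_for_disks() {
  printf '0x%x\n' $(( (1 << $1) - 1 ))
}
mask_for_disks 12   # default 12 disks -> 0xfff
mask_for_disks 16   # the value jun's 918+ patch uses -> 0xffff
```

This is why a pure-script loader can rewrite the disk limits at boot: it is just recomputing these masks and substituting them into synoinfo.conf.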
  10. besides 4-port sata cards there are meanwhile also "real" 8-port ahci cards at a reasonable price. the general rule is that there must be no ports "created" by a port multiplier https://xpenology.com/forum/topic/19854-sata-controllers-not-recognized/?do=findComment&comment=122709 so you have to look at the specs of the card (which chips) and then read up on what those chips are; if one of the chips is a multiplier, hands off (there are 4-port cards with a 2-port sata chip plus a multiplier to get to 4 ports; with 5 or 6 ports you should already be sceptical because of the odd number, since the chips are practically always 2- or 4-port sata and the remaining ports are usually tacked on with a multiplier). ahci has the advantage that you don't depend on additional drivers; synology itself has ahci ports in all its boxes and can't avoid supporting it. it depends on the model and its hardware, but if you go with ahci and an e1000e network card it should be independent of extra drivers - both are included in the currently available images. imho you also do well building something yourself; it usually ends up quieter and uses less power. whether you really need hotswap? i prefer to buy hgst disks - they fail so rarely that you can save the money for that (my personal experience); the money for hotswap hardware is better invested in good-quality disks, as you don't really want a disk to fail in the first place. i preferred building compact and quiet and skipped hotswap (with 12 disks that gets expensive too)
  11. only 4 sata ports and just one pcie slot - long term you can only have more sata ports OR 10G network. i'd suggest a flex atx board: more pcie extensions, and a board with 6 sata does not hurt. i use a gigabyte b360m hd3; in terms of depth just 1/2 inch deeper than mini-itx, just wider because of more slots. consider a "T" type cpu, max 35W - if the system is on most of the time, that saves heat (= noise) and, long term, money on the electricity bill. M.2 is not generally supported with xpenology, and xpenology/synology only uses usb as loader for the kernel; the system is on all disks as a raid1 partition, so there is no single fast boot device like on a normal linux/windows (also the system is small, the partition on every disk is just 2GB). 918+ for newer hardware, especially when you want to use hardware transcoding
  12. i'd say you can use the disks and upgrade in a 418play (it will ask you to upgrade and even warn you before overwriting and losing data), but having a backup is always a good thing
  13. can you try the usb created with 1.04b and the disk on different hardware? are you sure you set the vid/pid of the usb the right way? (i once had a typo and only saw it on the 3rd look; in some cases your brain makes you see what you expect to see) so read the vid/pid in windows again and check it against grub.cfg
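That cross-check can be scripted. A sketch, assuming the loader's grub.cfg carries `set vid=`/`set pid=` lines as jun's loaders do; the mount path and the example values are placeholders for what you read out of windows device manager:

```shell
# Compare the vid/pid in the loader's grub.cfg against the values read
# from the usb stick in windows device manager. Path is an example.
check_vidpid() {
  local cfg="$1" vid="$2" pid="$3"
  if grep -q "set vid=$vid" "$cfg" && grep -q "set pid=$pid" "$cfg"; then
    echo "match"
  else
    echo "MISMATCH - fix grub.cfg"
    return 1
  fi
}
```

Usage: `check_vidpid /mnt/usb/grub/grub.cfg 0x058F 0x6387` - a mismatch here is exactly the kind of typo described above.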
  14. if you don't want to keep any data, you can just delete the data volume (raid x or shr), use the new bootloader for 6.x and start the installation; you will be asked if you want to upgrade, answer no and you will have a fresh system. no - to install dsm 6.x you need a new image on your boot media (upgrade or not doesn't matter). as it's your first post you should read more about xpenology (faq) and the install (howto). the sd card is just used as a usb device with a bootloader; the "system" (dsm) is on the disks (on every disk as a small raid1 partition), so it's just grub and the kernel that start from usb. be warned: it's not a simple linux install/system, it's a hacked appliance that has its good sides but definitely has down sides you should know about; also have a look at open media vault as a comparison
  15. one more thing - you should check the cables, whether the 4 disks in question belong to the same 4x cable of one SFF-8087 connector
  16. that should not destroy the raid or lvm information; shr1 is made to survive 1 disk failing, and every raid set created in the shr1 process has 1 disk of redundancy. also an additional warning about putting the raid sets back together: my assumption above was that the whole volume was created in one step. if there were more steps, by adding disks and extending the volume, things will look different - the 2TB partition sizes on the 8x2TB and 5x2TB sets might be the same, so combining/forcing the wrong 2TB partitions together might end in a complete mess. look for size differences in the 2TB partitions on a disk with 4 or 8 TB, and i guess every raid set will have a kind of unique identifier (i have not looked into stuff like this for >2 years, can't remember), so look for something like this before forcing mdadm to do anything. you can do tests/training using a vm on a computer with virtual disks (thin disks that take no space), like how to repair with mdadm; there are howtos and videos about this and you can practice in a vm (like in virtual box)
  17. imho if that were the case, then disk 14 could never be seen as a "normal" part of volume 1; you would not even see it under disks, you would just see 12 disks/slots
  18. yeah you got that - and the fact that you don't have a backup of your data? also, as you have the exact time when it happened, did you check the logs (ssh login)? whatever you do, you first need to know the reason; you need to rule it out to prevent it from reoccurring - what if you do a repair/rebuild and in the middle of it it strikes again? 3rd or 4th mistake: if you are doing data recovery, never "repair" anything (at least not unless you know exactly what you are doing and can revert it). btw. when doing such screenshots, please change the system language to english - same reason you wrote the post here in english and not in your local tongue.

i can tell you at least something about how SHR works (afaik) and in what direction a recovery would have to go. SHR uses different size disks, creates partitions on all disks in the size of the smallest disk and forms a raid (usually a raid5) over all these partitions; that goes on with the left-over space of the remaining disks. that results in different raid types being used, like raid1 instead of raid5 when there are only 2 partitions of the same size. you end up with 2 or 3 (or more) raid sets, and these raid sets (mdadm) are then "glued" together with LVM.

so the first step would be to save all logs from the system; if you are lucky, you will find information about the creation of the whole volume there, helping you to put the pieces together. 2nd would be looking for LVM information in /etc - maybe it is still there, and if you piece your raid sets together (with mdadm) you might use it. after that you need to map the partition layout of all disks using fdisk. with this information you can start checking whether you can (first in your mind or on paper) stitch all partitions together into mdadm raid sets so that everything fits. if that works out, you would repair/reattach the raid sets with mdadm manually and then use lvm to put the raid volumes together in the right way, so that it becomes your original volume again. just messing around with it will make things worse, to a point where it gets nearly impossible to recover anything; if you really need your data, look for a recovery service that can do this. i hope that gives you an overview of your situation. i can't help you much further here, because i have done mdadm repairs just for practice and messing around, and an lvm repair just once (i only use plain mdadm raid5/6, to keep it simple in cases like this).

so, from my theoretical point of view, the layout of partitions would be like this: all disks contain as 1st and 2nd partitions a raid1 set for system (dsm installation) and swap, so you are looking for the 3rd partitions and further. the disks listed are, by size, 2 2 2 8 4 4 4 8. so the 3rd partition on all 8 disks should be 2TB, on the disks with size 8 4 4 4 8 there should be a 4th partition with 2TB, and on the two 8TB disks there should be one more with 4TB. so the raid sets you had might have been:
8 x 2TB raid5
5 x 2TB raid5
2 x 4TB raid1
bringing it up to a 26TB SHR1 volume - assuming you had one volume
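The capacity math for that assumed layout can be checked quickly (sizes in TB; raid5 gives n-1 disks of capacity, raid1 gives 1 copy):

```shell
# SHR1 capacity check for the assumed layout above, disks 2 2 2 8 4 4 4 8:
#   slice 1: 2TB on all 8 disks       -> raid5 -> (8-1)*2 = 14
#   slice 2: 2TB on the 5 disks >2TB  -> raid5 -> (5-1)*2 = 8
#   slice 3: 4TB on the two 8TB disks -> raid1 -> 4
echo "$(( (8-1)*2 + (5-1)*2 + 4 )) TB"   # 26 TB
```

Matching the 26TB volume size mentioned above, which is a good sanity check that no raid set has been overlooked before touching mdadm.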
  19. that was already the clue where it gets very suspicious: the ASM1061 is a 2-port pcie 1x chip !!! https://www.asmedia.com.tw/eng/e_show_products.php?cate_index=166&item=118 where do the other 2 ports come from? usually a sata port multiplier as a 2nd chip; a 2 x 2-port-chip solution (like 2x ASM1061) would need an additional pcie bridge chip, and that is in most cases too complicated and expensive - there are single-chip 4-port sata solutions that are cheaper. ASM1093 is not listed on ASMedia's website, but ASM1092 is and it's ... a port multiplier, so it's probably a typo https://www.asmedia.com.tw/eng/e_show_products.php?cate_index=138&item=141 sata ports gained through a sata port multiplier do not work in xpenology, so this card is a no-go. i'm writing this to give better insight into how to choose a card; in most cases all the information is there, you just have to read carefully and look up the chip specs
  20. even in kernel 4.4.59 of the 918+ (a newer kernel than the other two), the latest i can see in the kernel driver is SAS3108, so it would need a build from external source. i do have newer source, but we tried it with the 3615/17 (older kernel) and it did not work; maybe with the newer 4.4 kernel of the 918+ it will. atm i have a SAS2008 and my new nas hardware (not in use yet) for testing, so i can at least compile a new driver and test it with the older card. i will try this in the next hours, as i'm working on the 918+ drivers for a new 1.04b extra.lzma package

[18.12.2017] new 4.4, ... update driver mpt3sas to v23 for SAS93xx/SAS94xx support, ...
...
[14.01.2018] new 4.5, revert driver mpt3sas to jun's and in case of 916+ to kernel default as SAS94xx support did not work (kernel oops), ...

no, it's not just compiling it:

:: Loading module mpt3sas
[    4.449175] BUG: unable to handle kernel paging request at 000000010000008f
[    4.456576] IP: [<ffffffff813a42b0>] scsi_setup_command_freelist+0x80/0x280
[    4.463961] PGD 45131b067 PUD 0
[    4.467420] Oops: 0000 [#1] PREEMPT SMP
[    4.471614] Modules linked in: mpt3sas(OE+) megaraid_sas(E) megaraid(E) mptctl(E) mptspi(E) mptscsih(E) mptbase(E) raid_class(E) libsas(E) scsi_transport_sas(E) scsi_transport_spi(E) megaraid_mbox(E) megaraid_mm(E) vmw_pvscsi(E) BusLogic(E) usb_storage xhci_pci xhci_hcd usbcore usb_common imwz(OE)
[    4.499888] CPU: 1 PID: 4197 Comm: insmod Tainted: G OE 4.4.59+ #24922
[    4.507930] Hardware name: Gigabyte Technology Co., Ltd. B360M-HD3/B360M HD3, BIOS F13 06/05/2019
[    4.517334] task: ffff880456a06100 ti: ffff880450bc8000 task.ti: ffff880450bc8000
[    4.525264] RIP: 0010:[<ffffffff813a42b0>] [<ffffffff813a42b0>] scsi_setup_command_freelist+0x80/0x280
[    4.535257] RSP: 0018:ffff880450bcbac0 EFLAGS: 00010202
[    4.540881] RAX: ffff880456a06100 RBX: ffff8804579b4000 RCX: ffffffffa0187140
[    4.548422] RDX: ffff880456a5b580 RSI: 0000000000000001 RDI: ffffffff818536c0
[    4.555943] RBP: ffff880450bcbaf8 R08: 000000000000000a R09: ffff8804542d6800
[    4.563519] R10: 0000000000000001 R11: 0000000000000000 R12: 00000000024000c0
[    4.571068] R13: ffff8804579b4030 R14: 000000010000007f R15: ffffffffa0187140
[    4.578633] FS: 00007f46e7af9700(0000) GS:ffff88046e480000(0000) knlGS:0000000000000000
[    4.587206] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    4.593292] CR2: 000000010000008f CR3: 00000004578e8000 CR4: 00000000003606f0
[    4.600856] Stack:
[    4.602998] ffffffff8129a6dd 00000001579b4000 ffff8804579b4000 ffff8804579b4208
[    4.610910] ffff88045b995090 ffff88045b995090 ffff8804579b4000 ffff880450bcbb30
[    4.618831] ffffffff813a4a6f 0000000000000000 ffff8804579b4788 ffff88045b995000
[    4.626796] Call Trace:
[    4.629391] [<ffffffff8129a6dd>] ? __blk_queue_init_tags+0x3d/0x80
[    4.636001] [<ffffffff813a4a6f>] scsi_add_host_with_dma+0x9f/0x310
[    4.642646] [<ffffffffa015284a>] _scsih_probe+0x64a/0xc80 [mpt3sas]
[    4.649376] [<ffffffff812ffbcc>] pci_device_probe+0x8c/0x100
[    4.655460] [<ffffffff81387681>] driver_probe_device+0x1f1/0x310
[    4.661888] [<ffffffff81387822>] __driver_attach+0x82/0x90
[    4.667773] [<ffffffff813877a0>] ? driver_probe_device+0x310/0x310
[    4.674420] [<ffffffff81385711>] bus_for_each_dev+0x61/0xa0
[    4.680409] [<ffffffff81387119>] driver_attach+0x19/0x20
[    4.686139] [<ffffffff81386d43>] bus_add_driver+0x1b3/0x230
[    4.692128] [<ffffffffa018e000>] ? 0xffffffffa018e000
[    4.697562] [<ffffffff8138802b>] driver_register+0x5b/0xe0
[    4.703464] [<ffffffff812fe6a7>] __pci_register_driver+0x47/0x50
[    4.709898] [<ffffffffa018e1ea>] _mpt3sas_init+0x1ea/0x203 [mpt3sas]
[    4.716708] [<ffffffff810003b6>] do_one_initcall+0x86/0x1b0
[    4.722706] [<ffffffff8111703d>] ? __vunmap+0x8d/0xf0
[    4.728141] [<ffffffff810e1b48>] do_init_module+0x56/0x1be
[    4.734050] [<ffffffff810b7d8d>] load_module+0x1dfd/0x2080
[    4.739962] [<ffffffff810b51f0>] ? __symbol_put+0x50/0x50
[    4.745767] [<ffffffff811160c5>] ? map_vm_area+0x35/0x50
[    4.751469] [<ffffffff8111740c>] ? __vmalloc_node_range+0x13c/0x240
[    4.758196] [<ffffffff810b810f>] SYSC_init_module+0xff/0x110
[    4.764281] [<ffffffff810b81a9>] SyS_init_module+0x9/0x10
[    4.770096] [<ffffffff8156a74a>] entry_SYSCALL_64_fastpath+0x1e/0x92
[    4.776900] Code: 8b 8b c0 00 00 00 41 81 e4 bf 00 40 02 41 83 c4 01 8b b1 68 01 00 00 85 f6 74 71 4c 8b b1 70 01 00 00 4d 85 f6 0f 84 da 00 00 00 <41> 8b 46 10 85 c0 74 7d 83 c0 01 48 c7 c7 c0 36 85 81 41 89 46
[    4.798263] RIP [<ffffffff813a42b0>] scsi_setup_command_freelist+0x80/0x280
[    4.805751] RSP <ffff880450bcbac0>
[    4.809434] CR2: 000000010000008f
[    4.812946] ---[ end trace af6f0f37cfffe320 ]---
  21. beside the time needed to maintain such list(s), there is the fact that people have very different needs and expectations about a nas (server), so there is no best choice or one size fits all, look at the spectrum of what synology or qnap are offering, they go from 2 drives to 12+ for "home use"
some are extremely eager to use hardware transcoding (cpu choice is important here), others want a beefy docker host (RAM, cpu cores), others just want to store a huge amount of data and need a high sata port count (just for hoarding i guess)
also, as xpenology is tied to whatever synology image can be used (aka is hacked), it depends on that (atm 3615xs, 3617xs, 918+)
synology delivers a small spectrum of drivers in every image (drivers needed for that specific hardware + drivers for "official" extensions), if you want to use more drivers you depend on someone building them (extra.lzma), to keep it hassle free you might consider only using hardware that works ootb with just the synology drivers (limiting the options, might be more expensive) - doing this needs some good knowledge about hardware, dsm and linux - so it can take years
keep in mind xpenology is not a normal linux, it's a hacked appliance, you buy the comfort with some need-to-know things and uncertainties
if you want a more open approach without the driver limits, look out for open media vault
  22. only if you edit synoinfo.conf manually, and after a bigger dsm update you will be back to 12 ports after the first boot (if you forget to redo the edit), so that 1st start will result in a "broken" raid set missing all drives above 12
btw. the limit is 24 ports under the normal conditions of dsm as we use it
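the manual edit mentioned above can be sketched like this (a hedged example, not the exact commands from this thread; the 16-port values are illustrative, and the snippet works on a local demo copy so it is self-contained - on a real box the file is /etc.defaults/synoinfo.conf, which dsm copies over /etc/synoinfo.conf on boot):

```shell
# sketch only: create a demo copy of the two synoinfo.conf keys involved;
# on DSM you would edit /etc.defaults/synoinfo.conf instead
CONF=${CONF:-./synoinfo.conf}
cat > "$CONF" <<'EOF'
maxdisks="12"
internalportcfg="0xfff"
EOF
# raise the limit to 16 ports (internalportcfg is a bitmask, one bit per port)
sed -i 's/^maxdisks=.*/maxdisks="16"/' "$CONF"
sed -i 's/^internalportcfg=.*/internalportcfg="0xffff"/' "$CONF"
cat "$CONF"
```

remember: a bigger dsm update restores the 12-port defaults, so the edit has to be redone before the first reboot after the update, or drives above slot 12 will be missing.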
  23. i did some searching for alternatives to the old lsi 8 port sas controllers and 4 port ahci controllers - namely "cheap" 8 port ahci controllers without port multipliers, i found 2 candidates (using ahci makes you independent from external/additional drivers, if you fall back to just the drivers synology provides, ahci will still work in the 918+ image)

IO Crest SI-PEX40137 (https://www.sybausa.com/index.php?route=product/product&product_id=1006&search=8+Port+SATA)
1 x ASM1806 (pcie bridge) + 2 x Marvell 9215 (4 port sata), ~$100

QNINE 8 Port SATA Card (https://www.amazon.com/dp/B07KW38G9N?tag=altasra-20)
1 x ASM1806 (pcie bridge) + 4 x ASM1061 (2 port sata), ~$50

but both "only" use max 4x pcie 2.0 lanes on the asm1806, and 2 pcie lanes for 4 sata ports (marvell 9215) or 1 lane for 2 sata ports (asm1061), so they might be ok for hdd's but maybe not deliver enough throughput for ssd's on all 8 ports
the good thing is, as ahci controllers they will work with the drivers synology has in its dsm, so no dependency on additional drivers

ASM1806: https://www.asmedia.com.tw/eng/e_show_products.php?item=199&cate_index=168
Marvell 9215: https://www.marvell.com/storage/system-solutions/assets/Marvell-88SE92xx-002-product-brief.pdf
ASM1061: https://www.asmedia.com.tw/eng/e_show_products.php?cate_index=166&item=118

if they turn out to be good enough then there might be newer/better ones, there are better/newer pcie bridge chips and 2/4 port sata controllers able to use more pcie lanes (making it 1 lane per sata port)

i bought the IO Crest SI-PEX40137 as it has the better known/tested marvell 9215 (lots of 4 port sata cards use it), but i only tested it shortly to see if it's working as an ahci controller as intended - it does, and with lspci it looks like having 2 x marvell 9215 in the system - all good
the SFF-8087 connector and the delivered cables work as intended, the same cable did work on my "old" lsi sas controller, so nothing special about the connector or the cables, they are ~1m long
there are LED's for every sata port on the controller (as we are used to from the lsi controllers)
i haven't tested the performance yet, i have 3 older 50GB ssd's so even with these as raid0 i will not be able to max it out, but at least i will see if there is any unexpected bottleneck - as i'm planning to use it for old fashioned hdd's i guess that one will be ok (and if not it will switch places with the good old 8 port sas card in the system doing backups)

EDIT: after reviewing the data of the cards again my liking for the 9215 based card has vanished, it's kind of a fake with its pcie 4x interface, as the marvell 9215 is a pcie 1x chip and as there are two of them the card can only use two pcie lanes, so when it comes to performance the asm1061 based card should be the winner, as it uses four 2 port controllers and every controller uses one pcie lane, making full use of the pcie 4x interface of the card
so it's 500 MByte/s for four sata ports (max 6G !!! per port) on the 9215 based card and 500 MByte/s for two sata ports on the asm1061 based card
the asm1061 based card can be found as sa3008 under a lot of different labels - if the quality and reliability can hold up against the old trusty lsi cards is another question
better comparison of marvell chips: https://www.marvell.com/content/dam/marvell/en/public-collateral/storage/marvell-storage-88se92xx-product-brief-2012-04.pdf (conclusion: 9230 and 9235 should be the choice for a 4 port controller instead of a 9215)

edit2: the ASM1806/ASM1061 card is different but also has a design flaw, the ASM1806 pci express bridge only has 2 lanes as "upstream" port (to the pcie root aka computer chipset) and only supports pcie 2.0, so in the end it will be capped to ~1000 MB/s for all 8 drives, and both cards will end up with measly performance, unable to handle ssd's in a meaningful way
looks like two marvell 9230/9235 cards would perform better than one of these two 8 port cards
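the lane math from the two edits above as a quick sanity check (assuming roughly 500 MB/s usable per pcie 2.0 lane after 8b/10b overhead; these are rough ceilings, not benchmarks):

```shell
lane=500   # MB/s usable per pcie 2.0 lane, rough figure
# 9215 card: two pcie 1x chips, so only 2 of the slot's 4 lanes carry data
echo "2x 9215 card: $((2 * lane)) MB/s for 8 ports, $((2 * lane / 8)) MB/s per port"
# asm1061 card: the asm1806 bridge has only a 2-lane pcie 2.0 upstream port
echo "asm1061 card: $((2 * lane)) MB/s for 8 ports, $((2 * lane / 8)) MB/s per port"
# for comparison: a single marvell 9235 (pcie 2.0 2x, 4 ports) per card
echo "9235 card:    $((2 * lane)) MB/s for 4 ports, $((2 * lane / 4)) MB/s per port"
```

so both 8 port cards bottom out at ~125 MB/s per port when all ports are busy, fine for hdd's, useless for ssd's, while two 9235 cards would give ~250 MB/s per port.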
  24. synology changed the default kernel config, and as they don't publish any source we can only guess and try, the functions in question are part of PCIE_ASPM, so you will have to disable CONFIG_PCIEASPM in the kernel config when compiling modules for dsm 6.2.2, tested it and my freshly compiled modules for 918+ are loaded and work (i'm about to continue the extra.lzma's in the next days, starting with 918+)
afair jun wrote something here https://xpenology.com/forum/topic/13074-dsm-621-23824-warning/?tab=comments#comment-95497
and there is a hint in synology's changelog (ok, that one will only be noticed if you know what you are looking for) https://www.synology.com/en-global/releaseNote/DS3615xs
...
Version: 6.2-23739-2 (2018-07-12)
...
Fixed Issues
1. Adjusted power saving mechanism to improve PCIe compatibility.
...

Edit: for the old 918+ beta 4.4.x kernel the Makefile needs to be modified
VERSION = 4
PATCHLEVEL = 4
SUBLEVEL = 59
EXTRAVERSION =+

the chroot environment should also get additional packages
apt-get install openssl libssl-dev

for 6.2.2 i also added signature files
generate certs like here: https://wiki.gentoo.org/wiki/Signed_kernel_module_support
place the keyfiles in the kernel root, for kernel 4.4.x in /certs instead of the kernel root
copy one file to a new name: cp signing_key.x509 signing_key
create empty files in the kernel root: extra_certificates, extra_certificates_untrusted, trusted_certificates, untrusted_certificates
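the cert steps above as a hedged shell sketch (the subject name is made up, and the file names follow the gentoo wiki linked above; run from the kernel source root, for 4.4.x kernels move the signing_key* files into certs/ afterwards):

```shell
# generate a self-signed key + certificate pair, no passphrase (-nodes)
openssl req -new -nodes -utf8 -sha256 -days 36500 -batch -x509 \
  -subj "/CN=xpenology module signing" \
  -outform PEM -keyout signing_key.priv -out signing_key.crt
# DER copy of the certificate under the name the kernel build looks for
openssl x509 -in signing_key.crt -outform DER -out signing_key.x509
# copy one file to a new name, as described above
cp signing_key.x509 signing_key
# the four empty files in the kernel root
touch extra_certificates extra_certificates_untrusted \
      trusted_certificates untrusted_certificates
```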
  25. yes, a "normal" non synology/xpenology disk needs to be connected as an external disk, usually as a usb disk, but if you speak tech and linux fluently you might be able to convince xpenology that one of the internal sata ports is an eSATA port for external drives (like usb)
beside using a usb adapter for the disk(s) you could also connect it to your pc and copy the content over the network to the nas
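the "convince xpenology" part works via the same synoinfo.conf bitmasks as the port limit in 22.: internalportcfg and esataportcfg define which sata ports dsm treats as internal vs external, a hedged sketch of the bit juggling (example values, not from a real config):

```shell
internal=0xfff            # example: ports 1-12 flagged as internal
port=12                   # 1-based port we want dsm to treat as eSATA
bit=$(( 1 << (port - 1) ))
# move that bit from internalportcfg to esataportcfg
printf 'internalportcfg="0x%x"\n' $(( internal & ~bit ))   # 0x7ff
printf 'esataportcfg="0x%x"\n'    $(( bit ))               # 0x800
```

the printed values would then go into /etc.defaults/synoinfo.conf, with the same caveat as in 22. that a bigger dsm update resets them.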