IG-88

Everything posted by IG-88

  1. Synology provides the kernel config in the toolchain package, so it can be seen/checked. It is also documented here: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  2. You switched this into HBA mode? DSM is centered around single-disk software RAID (mdadm). Try loader 1.03b with DSM 6.2.3 and this extra.lzma for the HP Smart Array controller (it's not supported in jun's original extra.lzma): https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/

     It has an hpsa.ko driver built from the kernel Synology uses (3.10.105 for 3615/17). I'm not sure it works at all; last I remember it was a problem getting these controllers to work at all, but maybe it's better now that we have nearly recent kernel source from Synology. You can also read this for some added information: https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/?do=findComment&comment=148637

     Basically the NIC is no problem when loader and extra.lzma match (DSM 6.2.2 needed different drivers, but with 6.2.3 all is back to normal and the "old" drivers work again). The bigger problem is the P420. The older P400 and P410 were pointless as they had no HBA mode, but the P420 should at least in theory be usable in a way that makes sense with DSM (HBA mode); still, it might be much easier to get an LSI 9211-8i in IT mode (the same idea as HBA mode on HPE Smart Array).

     If you switch it to HBA mode and want to sink some time into it, try the recent 0.11 extra.lzma with DSM 6.2.3. If that does not work and you get your serial port working (see below), you can write me a PM and I might compile a newer driver from external source for testing. AFAIR in this case that's iLO with a virtual COM port (the console is switched to serial).
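A quick way to confirm over that serial console (or SSH) whether the hpsa driver actually loaded is to look at /proc/modules. A minimal sketch, assuming a standard Linux userland on the box; the helper name check_module is mine, not part of DSM:

```shell
#!/bin/sh
# check_module: report whether a kernel module appears in a modules listing
# (the format of /proc/modules). The optional second argument lets you point
# it at a saved copy for offline inspection; it defaults to the live
# /proc/modules on a running system.
check_module() {
  name="$1"
  src="${2:-/proc/modules}"
  if grep -q "^${name} " "$src"; then
    echo "${name} loaded"
  else
    echo "${name} NOT loaded"
  fi
}

# on the box itself you would run:
#   check_module hpsa
```

If the module is missing, `dmesg | grep -i hpsa` usually shows why it failed to load.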
  3. There are also alternatives for dynamic DNS (including directly in DSM), and if you have e.g. a Fritz!Box you get something like that included via the Fritz account. That said, I don't like forwarding ports into the local network; if anything, I'd rather use a VPN.
  4. The "x" is important here. If it's 6.2.2 and you used a special extra.lzma, you will need to replace it, as 6.2.2 drivers (in the extra.lzma) will not work with 6.2.3, so check your documentation ... that applies when you already have loader 1.03b and DSM 6.2.x. In the worst case you need to replace the extra.lzma with jun's original (it can be extracted from the img with 7zip or OSFMount) or use a newer one from here: https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/

     I'd use jun's extra.lzma, and if that makes any problems you can try a newer one. If you come from 6.2.0 or never replaced jun's extra.lzma, you can just update to 6.2.3. (When 6.2.2 was new there was a lack of drivers and lots of jun's drivers did not work; some people did not update, others bought "native" hardware with drivers in DSM itself, so they could update, keep jun's extra.lzma, and still have a working NIC.)
  5. DSM on Qnap NAS

     FYI, xpenology 6.x is a hacked DSM appliance, bound to the original Synology kernel (that was different with 5.x), so this might be a bigger problem to solve when it comes to different kernel power settings; there are too many controllers and custom solutions to maintain this as an additional driver, and a decent board can handle the fans on its own, so most people don't need it. My suggestion would be to try DSM 6.1, 6.0 or 5.x to find a version that can shut down properly and stays off; the fan can be solved with add-on hardware. Or try OpenMediaVault: as it's a normal Linux, there is a much higher chance of getting fan control and shutdown working. https://forum.openmediavault.org/index.php?thread/18410-inatall-openmediavault-on-qnap-ts-251-x86-guide/
  6. With this hardware your first try should be more like loader 1.04b and 918+ with DSM 6.2.3. It might be hard to find someone to support you with such an outdated loader and DSM version; there are not many packages for it either, lots of unfixed security issues, and maybe even SMB/CIFS (Samba) problems with Windows 10.
  7. The LSI SAS will have full speed on all ports; the 1000 MB/s would have been the 8-port SATA/AHCI controller with 2x miniSAS onboard, the same connectors as on the LSI SAS controller. SATA/AHCI can be an advantage as it does not need any additional drivers; AHCI support in DSM is built into the kernel, so it will always work (on 3615/17 the LSI SAS driver is part of DSM, aka a native driver, but on 918+ it's not). As you are unable to use 918+ on that hardware, the LSI SAS is a safe choice. 3615 might phase out, but 3617 will still get updates for years, and AFAIR it will also be a system getting DSM 7.0.
  8. Isn't that card marked as natively supported? So even without a working driver from the extra.lzma it should work with just the drivers Synology provides within DSM itself. I was mainly referring to systems without an additional NIC that use the onboard Broadcom: with 6.2.3 you could throw out the added Intel-based NIC and have the onboard one working with jun's original extra or with the new 0.11. The only one not working with 6.2.3 is the extra.lzma made for 6.2.2.
  9. jun's original extra.lzma is working again with 6.2.3 and already contains the tg3.ko driver. It "broke" when Synology introduced a kernel config change in 6.2.2, and they reverted it in 6.2.3 ... "don't use" means they are using jun's old/original extra.lzma; it would be more precise to ask about the "replacement" of the extra.lzma. If people did not care about the "lost" BCM NIC in 6.2.2 because they had an additional Intel NIC that still worked after the update to 6.2.2, and they did not install a 6.2.2-aware driver set, then they still have the original extra.lzma, and that one works again with 6.2.3 because of the reverted kernel config change from Synology -> tg3.ko works again, and the onboard BCM NIC is "back".
  10. That's only on 1.04b for 918+; your Gen8 can only handle 2nd and 3rd gen Intel CPUs, too old for 918+. Use the one that is already in the loader as it comes from jun: the extra.lzma contains the drivers (besides other stuff), and jun already delivers a good amount of drivers by default.
  11. As you don't have any too-special hardware, jun's original extra.lzma from loader 1.03b would also have done the job (it can be extracted with 7zip and just needs to be copied to the 2nd partition of the USB in use).
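A note on verifying the file after extracting it: extra.lzma is, as far as I know, an LZMA-compressed cpio archive, so a quick integrity test before copying it to the USB's 2nd partition can save a failed boot cycle. A minimal sketch; the helper name check_extra is mine, and it assumes the xz tool is available:

```shell
#!/bin/sh
# check_extra: test whether a file is a valid LZMA stream (the format
# extra.lzma uses) without decompressing it to disk. Prints "ok" or
# "corrupt" so it can be used in scripts.
check_extra() {
  f="$1"
  if xz --format=lzma -t "$f" 2>/dev/null; then
    echo "ok"
  else
    echo "corrupt"
  fi
}

# typical use after extracting with 7zip:
#   check_extra extra.lzma
```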
  12. By the way, Hyper-V is not suitable because of the missing support in the DSM kernel; there is no reasonable virtual hardware available, only ancient emulated hardware (e.g. a 100 MBit NIC) or the non-working special Hyper-V VM hardware. If you are currently doing this on the gaming PC, you could also use VirtualBox; that's what I use for my test VMs, and there is something about it in the tutorials as well. Besides, your old hardware (i5-650) will not run with 918+.
  13. That means the bad sectors got remapped, and your disk is still in the process of failing (those areas with bad sectors often grow larger). I usually don't trust a disk anymore once it starts to show remapped or bad sectors; that marks the end of its use for me. So at least have a backup, and check all disks (S.M.A.R.T.) for remapped or bad sectors.
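To check that counter from a shell, smartctl (from smartmontools) is the usual tool; the attribute to watch is Reallocated_Sector_Ct (ID 5). A minimal sketch; the helper realloc_count, which parses the `smartctl -A` attribute table, is my own:

```shell
#!/bin/sh
# realloc_count: pull the raw value of Reallocated_Sector_Ct out of a
# `smartctl -A` attribute table passed on stdin. Anything above 0 means
# sectors have already been remapped.
realloc_count() {
  awk '$2 == "Reallocated_Sector_Ct" { print $NF }'
}

# on a live system you would run something like:
#   smartctl -A /dev/sda | realloc_count
```

Running it in a loop over /dev/sd? gives a quick health overview of all disks.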
  14. There is more: you can get a cheap reflashed OEM version of an LSI 9211-8i, or a P420 (or newer); those do support IT/HBA mode (it has the same SAS connectors as the P222, and there are also at least two 8-port SATA controllers with these connectors, but both are limited to 1000 MB/s because of PCIe 2.0 and two lanes).

     There are also less favorable (dangerous) options, like creating a RAID0 set from every single disk and combining these "disks" in DSM into a RAID, or creating a RAID5/6 in the controller over all disks and getting this big disk as one "basic" disk in DSM. In both cases you don't have temperature or S.M.A.R.T. info in DSM, and it might be harder to see which disk failed in case of a disk error (but it should be visible in the controller BIOS or when booting an HPE ISO from USB for maintenance).

     You can look into the section where people report their update success with different DSM versions; you'll find lots of HPE Gen8 MicroServers, and some might have something about additional hardware in the comments. But why not use the internal ports? IMHO all of them should be able to deliver good performance.
  15. With VirtualBox it was important that the MAC of the NIC in the VM matches the one in the grub.cfg. Apart from that, this is basically the guide you would have to follow: https://xpenology.com/forum/topic/13061-tutorial-install-dsm-62-on-esxi-67/
  16. That's not really a RAID chipset; AFAIR it's just Intel's onboard SATA chipset with an HPE name. The Smart Array Pxxx series is different: it has different chips on it (more dedicated to RAID, like true hardware RAID), needs a special driver (hpsa.ko), and depending on the controller it can be switched to HBA/IT mode to provide single disks that can be used for Synology's software RAID. So it's the AHCI driver vs. the hpsa.ko driver, and there are no tools in DSM to manage any hardware RAID; DSM is built around mdadm software RAID. Maybe a JMB585-based 5-port SATA controller is a good alternative? It supports PCIe 3.0 and can deliver up to ~2000 MB/s over its two PCIe 3.0 lanes.
  17. First thing: the atlantic driver is not loaded in dmesg; looks like I forgot to add it in rc.modules. Edit: I checked rc.modules in the extra/extra2 and it's there, so it must be something else. I will send you a link to a new version that will load it the way it is intended, and it will be a driver built from the latest source.

     Second thing: you need to modify your synoinfo.conf to get more than two NICs working; the built-in default for 918+ is two. https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/

     !!! Still a network limit in the 1.04b loader for 918+ !!! ATM 918+ has a limit of 2 NICs (as on the original hardware). If there are more than 2 NICs present and you can't find your system in the network, then you will have to try after boot which NIC is "active" (not necessarily the onboard one), or remove the additional NICs and look into this after installation. You can change the synoinfo.conf after install to support more than 2 NICs (with 3615/17 it was 8). Keep in mind that when doing a major update it will be reset to 2 and you will have to change this manually again; the same goes when you change it for more disks than in jun's default setting. More info is already in the old thread about 918+ DSM 6.2.(0) and here: https://xpenology.com/forum/topic/12679-progress-of-62-loader/?do=findComment&comment=92682
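The NIC limit lives in the maxlanport setting of synoinfo.conf. A minimal sketch of the edit, assuming GNU sed on the box; the helper name bump_maxlanport is mine, and taking the path as an argument lets you try it on a copy first:

```shell
#!/bin/sh
# bump_maxlanport: rewrite the maxlanport="N" line in a synoinfo.conf.
# DSM keeps the file at /etc/synoinfo.conf and /etc.defaults/synoinfo.conf;
# both copies are usually edited, and a major update resets the value.
bump_maxlanport() {
  limit="$1"
  conf="$2"
  sed -i "s/^maxlanport=.*/maxlanport=\"${limit}\"/" "$conf"
}

# on the box (after making a backup) you would run something like:
#   bump_maxlanport 4 /etc/synoinfo.conf
#   bump_maxlanport 4 /etc.defaults/synoinfo.conf
```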
  18. Only 2? You can always try jun's extra.lzma when using 6.2.3 (as long as it's not hardware transcoding with 918+), and if a device is missing you can try an extra.lzma I made.
  19. We would need to find a driver that works; with 0.11 it's mpt3sas 27.0, where the mpt2sas code is included and the driver has a name alias for mpt2sas. I do have an LSI 9211-8i, so I would be able to test, but it's nothing that will be done by tomorrow. Usually when doing drivers I check whether the driver works or not; with hardware I use in my own systems I can say something about stability or long-term experience (and that's just one main NAS and one for backups).

     For my new setup with 6.2.3 and 918+ I did not use an LSI SAS controller on purpose; the idea was going more native, so I had chosen AHCI (JMB585, PCIe 3.0, 5 ports, beside the 6x onboard), and I can at least say it's not a general problem: the disks go into hibernation and come back without doing any damage to the RAIDs (system or data).

     Info System 2020-05-30 18:13:17 Internal disks woke up from hibernation.
     Info System 2020-05-30 16:20:44 Internal disks woke up from hibernation.

     That would only be a concern if one disk is slower to wake up (it should be in the log, as it would show as a failed disk). The order of the disks should be no problem; that behavior of the LSI driver was already there in 6.1.7. Anyone having problems like this should dig into the logs to see more than "raid is broken", and before repairing you would need to FIND the source.

     In general the 918+ has no native SCSI/SAS support; all these drivers are additional (there are SCSI/SAS base drivers too, not just the device-specific drivers), and it might be that things are missing from the 918+ kernel config needed to support the drivers (every platform like Bromolow, Braswell and Apollo Lake has its own config from Synology, and AFAIR there is no LSI/SCSI device in the Apollo Lake platform).

     Also a word on messages about missing fan control and temperature: Synology uses specific chips for this and has no support in kernel or drivers for chips other than the ones they use, so it is to be expected that this fails. Supporting the myriad of chips out there and incorporating them into the way DSM uses them would be too time-consuming. I use temperature control from the BIOS for the fans, or if needed I'd use some external way like voltage reduction or a controller working on its own.
  20. Sounds like you did the loader first and, after finding the system in the network, updated to 6.2.2 (migration). If that's the case we might rule out problems with UEFI/CSM, as 1.03b needs CSM/legacy mode and the USB device chosen for boot needs to be "non-UEFI" (often the UEFI devices have "UEFI" in their name). You might try jun's original loader 1.03b with just your VID/PID, MAC and SN (that would be the original extra.lzma and kernel files from DSM 6.2.0). If it's visible in the network you can try to update to the now-recent 6.2.3; that version uses the same drivers as the "old" 6.2.x before 6.2.2.
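The VID/PID, SN and MAC all sit as `set key=value` lines in the loader's grub.cfg (on the 1st partition of the USB stick). A minimal sketch of the edit, assuming GNU sed; the helper name set_grub_value and the example values are mine:

```shell
#!/bin/sh
# set_grub_value: replace one `set key=value` line in a grub.cfg. Takes the
# key, the new value and the file path, so it can be tried on a copy of the
# config before touching the real USB stick.
set_grub_value() {
  key="$1"; value="$2"; cfg="$3"
  sed -i "s/^set ${key}=.*/set ${key}=${value}/" "$cfg"
}

# typical edits (values below are placeholders, use your own):
#   set_grub_value vid 0x058f grub.cfg
#   set_grub_value pid 0x6387 grub.cfg
#   set_grub_value mac1 001132112233 grub.cfg
```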
  21. So it is recognized, but there is a problem getting it to work properly. As said: dmesg; either call it in a shell or copy the file /var/log/dmesg. BTW, the driver Synology provides in 3615/17 is only 2.0.5.0, so it's even older. I did try a newer one a while ago but got feedback that it did not work, and until now no one has had problems with 2.0.10. I could see if I can build a new one from more recent source, but I would need feedback from you when testing it.
  22. The new 0.11.2 is already up, so you could just copy it to the USB's 2nd partition; if it does not work the way you need, you can still put back jun's original extra.lzma. The loader compares the drivers from the extra.lzma with what's on disk and overwrites them when there is a difference, so by changing the extra.lzma you change the driver set you are using. If you are happy with the result (jun's extra), then just keep it that way; as long as you don't add newer hardware like a 10G NIC, you won't need the added drivers from 0.11.
  23. That would explain the problem. I will fix it later today, but you should be able to get along with the way I suggested above.