IG-88

Members
  • Content Count: 1,696
  • Joined
  • Last visited
  • Days Won: 63

IG-88 last won the day on January 27

IG-88 had the most liked content!

Community Reputation

293 Excellent

About IG-88

  • Rank: Guru Master

  1. yes, give it a try (after the reboot the status of the raid should have changed from crashed to degraded). after that you would add the 4th drive normally so that a raid rebuild runs; that can also be done in the GUI by re-adding the 4th drive to the raid group. if system partitions are still shown as faulty afterwards, have them repaired as well - that is only raid1 and there were 2 valid partitions, the others are simply added to the raid1 and overwritten in the process, so they are back in sync
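     if you prefer the command line over the GUI, a minimal sketch of the same step (the device names /dev/md2 and /dev/sdd5 are only placeholders, adjust them to your layout):
     mdadm --add /dev/md2 /dev/sdd5    # re-add the 4th drive's data partition, this starts the rebuild
     cat /proc/mdstat                  # watch the rebuild progress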
  2. IG-88

    rd.gz

    yes, you will see such devices with lspci. also, synology kernel modules are signed; i don't know if that is really active/enforced by synology, but we can load modules without a valid signature. btw, it would be a great help if you could have a look into the synology kernel source to see what's wrong with the current sas/sata drivers; it seems our old beta source from over 2 years ago differs too much from what synology uses for 6.2.2, and for a longer time now we can't compile sata drivers. if you are willing to have a look, i would collect errors and provide examples and links. also, jun made a backported drm/i915 driver for his 1.04b loader (kernel 4.4.59) to have hardware transcoding support for gpus newer than Kaby Lake, but left no documentation and source, so i can't recompile the driver for dsm 6.2.2 - i have no coding skills and it did not look like it could be done without them
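    in case you want to check the signing yourself, a small sketch (the module file name here is just an example, not a specific dsm module):
    modinfo ./mymodule.ko | grep -i sig       # shows signer / sig_key / sig_hashalgo if the module carries a signature
    dmesg | grep -i "module verification"     # the kernel logs a warning when a module loads without a valid signature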
  3. the nic is just a normal newer intel; older drivers (like in jun's original extra.lzma or from synology) might not work. the 3615/17 need CSM (BIOS) mode and you also need to set the boot device to bios mode (there might be 3 entries after enabling bios mode, two uefi and one other for the same usb device). the nic should work with the new extra.lzma. 918+ might not work as the intel drivers in the "old" 918+ release are too old; the new 0.8 will have the same new drivers as the latest extra.lzma for 3615/17. I'd suggest trying 3615/17 and looking at the bios settings, and if you want to get 918+ working without the not yet released 0.8 driver pack you can try an older nic in a pcie slot. if you still have the usb stick in the state it was in after the 6.2.2 update (don't mind the different 0.6 extra.lzma), then you could just copy a new "recovery" extra.lzma to the usb and all problems are solved - i will send you a link to that new beta version, i had one tester with an N3150 and he solved his problems with it
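     for the "copy a new extra.lzma to the usb" step, a minimal sketch from a linux machine (assuming the stick shows up as /dev/sdb; on jun's loader the driver package normally sits on the 2nd partition):
     mount /dev/sdb2 /mnt            # loader partition that holds extra.lzma
     cp extra.lzma /mnt/extra.lzma   # overwrite with the new/recovery driver package
     umount /mnt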
  4. i was thinking of something lspci would output, like this: Class 0200: Device 10ec:8168 (rev 16) for a nic. i guess it might be 1000:0097 in your case https://pci-ids.ucw.cz/read/PC/1000/0097 and this one would be ok with 918+ and 3617
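     if you can get a shell on the box, this is the quickest way to read the id (output here is only an illustration):
     lspci -nn                                  # lists all pci devices with their [vendor:device] ids
     lspci -nn | grep -i -e ethernet -e sas     # narrows it down to the nic or hba, e.g. "[1000:0097]"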
  5. maybe try this one https://www.youtube.com/watch?v=2PSGAZy7LVQ and keep in mind 24 is the max as long as you have not tested more (with a raid set); quicknick claimed there were certain higher drive counts that work, but no one tested that afaik: 12, 16, 20, 24, 25, 26, 28, 30, 32, 35, 40, 45, 48, 50, 55, 58, 60, 64 https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=87213 if you need nvme ssd on baremetal then 918+ is your goal. if you can give me the pci id i can look in the driver to see if it will work. SAS2 is no problem; SAS3 depends on how new the chip is, older SAS3 will work
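     for reference, the drive limit ends up in synoinfo.conf; a hedged sketch of checking it (changing it to untested values is at your own risk, as said above):
     grep maxdisks /etc.defaults/synoinfo.conf    # current limit, e.g. maxdisks="16"
     # raising it means changing maxdisks (and usually the internalportcfg bitmask) in both
     # /etc.defaults/synoinfo.conf and /etc/synoinfo.conf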
  6. IG-88

    DSM 6.2 Loader

    anything not working? on both systems? where does that message come from? check the other logs. the only thing that looked unusual was this:
    [ 78.439122] CIFS VFS: Error connecting to socket. Aborting operation.
    [ 78.445655] CIFS VFS: cifs_mount failed w/return code = -113
    [ 84.451133] CIFS VFS: Error connecting to socket. Aborting operation.
    [ 84.457686] CIFS VFS: cifs_mount failed w/return code = -113
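    for reference, return code -113 is EHOSTUNREACH (no route to host), so it points at the network path rather than at cifs itself; a quick sketch for digging further on the box:
    dmesg | grep -i cifs             # kernel side of the cifs mount errors
    tail -n 100 /var/log/messages    # general system log on dsm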
  7. if using the new 0.6 extra.lzma for 6.2.2 from here https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/ there shouldn't be a difference between 3615 and 3617 in terms of drivers. if you still want to use 918+ then you can try it with the new 0.8 extra.lzma (eta next weekend) - but i don't see any added value with your hardware (no gpu, no m.2 nvme), it would only be useful to test if you have problems with 3615
  8. yes, and there is one already, it was named (i did not test it) ds1019+支持6.22及以上引导.zip (with ds1019+支持6.22及以上引导.img inside). but i guess you can also get the needed files from the vm image, it will use the same files as in jun's image, just copy the files with OSFMount. the packed extra.lzma is my "old" one from 10/2019 (918+), i'd suggest replacing it with a newer one (eta next weekend). also i don't see the gain (atm): same hardware and kernel used by synology as in the 918+, only one disk slot more, and we are not bound to that anyway (jun's default is 16 and more is possible)
  9. IG-88

    DSM 6.2 Loader

    real3x's extra.lzma? i guess the problem is the changes in kernel driver settings between 6.2.0 and 6.2.2; if you install 6.2.0 you should be ok, the trouble starts with 6.2.1 or 6.2.2. there are different problems mixed up here: when a system does not show up in the network after a 6.2.2 upgrade, in most cases it's a non-working network driver, in some cases it might be a crashing i915 driver, and a missing /dev/dri might be a showstopper for some people too when they want to use hardware transcoding. maybe wait a few days: with version 0.8 of my extra.lzma there will be a split into 2 versions, one using synology's i915 driver (as it comes with dsm 6.2.2) and the other with jun's newer i915 driver. both will have new network drivers, and the variant with the synology driver will also have additional i915 firmware matching that driver, so it can use more devices than before that might not have shown /dev/dri because of missing firmware files. for hard cases there will also be a recovery version that knocks out the i915 driver (in one case with a J1800 the driver prevented the system from booting)
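    a quick way to see whether the i915 driver actually came up (run via ssh on the nas):
    ls /dev/dri              # should show card0 and renderD128 when the driver loaded
    dmesg | grep -i i915     # driver and firmware messages, useful when /dev/dri is missing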
  10. i don't use proxmox, so i can't say much. the mac in grub.cfg should be the same as in the vm config. what's the scsi controller for? can you connect a virtual com port to the vm? dsm switches to serial output shortly after start; all the interesting stuff can then only be seen with a serial port and a terminal (like putty)
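      untested from my side, but as a sketch for proxmox (the vmid 100 is a placeholder):
      qm set 100 -serial0 socket    # add a virtual serial port to the vm
      qm terminal 100               # attach to it and watch dsm's serial console output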
  11. just like here (i'm essentially only copying what is written there and adapting it to your 4 drives; in the other case it is 13 disks) https://xpenology.com/forum/topic/24525-help-save-my-55tb-shr1-or-mount-it-via-ubuntu/?do=findComment&comment=131382 the results look as expected so far, and your drives that are no longer in the raid (sdc and sdd) are both at the same state, so it doesn't matter which of the two you use to restore the raid, the loss is the same (44 blocks of 64kB each).
      mdadm --stop /dev/md2
      mdadm --assemble --force /dev/md2 /dev/sd[abc]5
      in this case you force (--force) the third disk into the raid and accept the loss of the 44 blocks. if the filesystem is btrfs, a consistency check against the checksums can at least tell you which files are not as they should be.
      cat /proc/mdstat
      mdadm --detail /dev/md2
      now check the state again; after that you would add the 4th drive to the raid to restore the redundancy (which then takes some time since the redundancy data has to be recreated from the three drives) - but that is still too early, first let's see what the assemble returns
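      for the btrfs consistency check mentioned above, a minimal sketch (the mount point /volume1 is an assumption, adjust it to where the volume is mounted):
      btrfs scrub start /volume1     # verifies the data against the checksums in the background
      btrfs scrub status /volume1    # progress and the number of checksum errors found
      dmesg | grep -i btrfs          # the kernel log shows the blocks that failed verification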
  12. it might not get the most out of a system with ssds only, but on the other hand people with that in mind would not use sata drives; m.2 nvme would be the solution for that. also there are usually still onboard sata connectors for using ssds with "full" 6Gb/s
  13. where did you read that you can use a different type than the one of the loader? DS1817 is wrong, you will have to use "DSM_DS3615xs_24922.pat" when using loader 1.03b for 3615xs. also, when using 6.2.2 aka 24922 you will have to use a different driver package aka extra.lzma https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/ (2nd half is about 3615/17), or if you don't want to do this yet you can use loader 1.03b with v23739
  14. IG-88

    DSM 6.2 Loader

    yes, in case of a migration you also keep users and other settings like ip address
  15. IG-88

    DSM 6.2 Loader

    yes it will offer a (cross) migration