XPEnology Community

Everything posted by IG-88

  1. as expected, the kernel crashes; the only difference with the extended extra is the drivers (which would be loaded after the kernel). there are two ways to install 6.2.3. you can start with loader 1.03b as it comes (the loader ships with the 6.2.0 kernel), so you effectively boot dsm 6.2.0 the first time; when you install the 6.2.3 *.pat file the kernel on the loader is updated to 6.2.3 and the next boot is with 6.2.3 (or would be - if it does not boot you can't install 6.2.3). the 2nd way would be to take zImage and rd.gz from the 6.2.3 *.pat file, copy them to the loader (overwriting the 6.2.0 files) and then boot; it's possible that this makes a difference
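     a minimal sketch of the 2nd way, assuming the 6.2.3 *.pat is a plain (unencrypted) tar archive and the loader's 2nd partition is mounted at /mnt/loader (file name and mount point are just examples, not your exact paths):
        mkdir /tmp/pat && cd /tmp/pat
        tar -xf /path/to/DSM_DS3615xs_25426.pat zImage rd.gz   # pull only the kernel and ramdisk out of the *.pat
        cp zImage rd.gz /mnt/loader/                           # overwrite the 6.2.0 files on the loader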
  2. from my point of view there is no reason to keep 6.2.2. 6.2.3 works better and is again driver-compatible with the original 6.2.0 the loader was made for; the only downside is the non-working i915 with jun's original extra/extra2, but that's fixed with my extra/extra2
  3. does not ring a bell, never seen this. where do you see the error, /var/log/...? there might be indicators before that error that could be a clue. maybe try a different usb and disk to make sure it's not storage related; the cpu and board are pretty normal (also run memtest)
  4. yes, this hardware is ~2014 and piledriver. in theory it should/might work with 1.04b 918+, but i have a hp desktop amd on that cpu level and it does not, so ... 1.03b should work, and my A8 does work with 1.04b. it crashes when there is a message about unusable ram, so try less, like 8GB or 16GB, with 1.03b. there is IDT in the errors, that's the interrupt descriptor table. in general, check the bios for any cpu related settings. there are also kernel parameters you can add to grub so the synology kernel gets them when loaded, like "disable_mtrr_trim"; you can look that up on the internet, this would be one possible source http://redsymbol.net/linux-kernel-boot-parameters/ - maybe there are parameters that fix your problem, or there are bios settings you can use (from the manual, "Advanced CPU Core Features"; the first reflex would be to disable C1E as this is known to make problems on HPE microservers with AMD). in the bios peripherals section there is also an iommu enable/disable, also worth a shot. you can also try esxi and see if a dsm vm does work
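     a hedged sketch of where such a parameter goes - the exact variable names in grub.cfg differ between loader versions and platforms, so check your own file, this is just the idea and not a drop-in line:
        # grub.cfg on the loader's 1st partition - append the parameter to the kernel argument line
        # for your platform (names like common_args_3615 and the existing arguments may differ), e.g.:
        set common_args_3615='syno_hw_version=DS3615xs console=ttyS0,115200n8 disable_mtrr_trim'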
  5. its "masking" it to show it as f400/f400 to the system - can be seen when using lsusb 1-7 f400:f400:0100 00 2.10 480MBit/s 224mA 1IF (SanDisk Ultra Fit) and just to make sure some readers of the thread don't get the wrong idea, its not just the usb vid/pid beside other things dsm also checks the presence of pci devices too to make sure its running on the "right" hardware (the loader "generates" them, can be check with lspci -k), so just having a (reprogrammed) usb with f400/f400 is of no use (but can be in some cases for original units when the original usb dom is broken) the "impossible" thing here is that it did not work with the vid/pid of the usb but with any other, as if the code logic was inverted the 6.2.4 was a recent addition in that thread we had cases where people had the right usb vid/pid and got error13, maybe they would have got it working by deliberately mismatching the usb vid/pid - for obvious reasons i never suggested something like that (maybe next time , seems to be a rare case)
  6. without any tests and without software to use it, it makes no sense (for me) to sink more into this. even plex would need to be used in an older version to support the nvidia driver synology uses. for testing it seems easiest to just use the drivers from another dsm file for 918+. it does not look as if anyone was able to use the drivers from the spk file in any way https://xpenology.com/forum/topic/22272-nvidia-runtime-library/
  7. looks like you created a new storage pool with the ssd. the drive needs to be unused, no pool or volume; there is a special ssd cache option in storage manager, down on the left https://www.thomas-krenn.com/de/wiki/Synology_NAS_SSD_Cache_Konfiguration
  8. there is a new thread in tutorials https://xpenology.com/forum/topic/42765-how-to-undo-a-unfinished-update-623-to-624-no-boot-after-1st-step-of-update/ main points, beside deleting the update files, are to reset/delete the two VERSION files on disk AND to reset the zImage and rd.gz on the loader to the 6.2.3 versions. the rd.gz on the loader also contains a VERSION file; the loader checks the VERSION files on the loader and on disk, and if one of them still says 6.2.4 it will not downgrade
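     a hedged sketch of checking the on-disk VERSION files from a live linux, assuming the usual dsm layout where the system partition is the raid1 device md0 over the first partition of every data disk (adjust device names to your box):
        mdadm --assemble --run /dev/md0 /dev/sd[abcd]1   # assemble the dsm system partition
        mount /dev/md0 /mnt
        cat /mnt/etc/VERSION /mnt/etc.defaults/VERSION   # if either still says 6.2.4 the loader refuses to go back to 6.2.3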
  9. there is a section "Tutorials and Guides" in the forum https://xpenology.com/forum/forum/36-tutorials-and-guides/ the normal howto for installing refers to extra.lzma and it's the same for 6.2; with 1.04b 918+ it's extra.lzma AND extra2.lzma you need to replace https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/ accessing the partitions got a little more complicated with newer win10 versions (but maybe use the method i added at the end of the thread) https://xpenology.com/forum/topic/29872-tutorial-mount-boot-stick-partitions-in-windows-edit-grubcfg-add-extralzma/
  10. beside the mounting with partition wizard there is a better way to use the tool to get easy access to the loader's 1st and 2nd partition: start the tool, right click on the loader's main entry where it states GPT, convert it to MBR and apply to execute the operation, then do the same again in reverse, convert it to GPT and click apply. now windows 10 can access both partitions as it's supposed to and you don't need any additional tools. with that it's also possible to get the MBR version of the loader (often needed for HP desktops), no need for a special download of the loader, just do it yourself. @jensmander maybe you can add this in the 1st post as an option?
  11. yes, you need them to get the devices back; just copy the new ones (0.13.3) to your loader. reading the first post of the thread would explain why ("... basically synology reverted the kernel ... i completely removed jun's i915 drivers ...")
  12. yes, you activate ssh in the dsm web gui and use putty. maybe install the package synocli-file and use midnight commander (mc) to access the files (F3 is view a file iirc). /var/log/ - dmesg and messages are the files to look into
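     roughly what that looks like once ssh is enabled (package name and paths as above, the commands themselves are standard):
        ssh admin@<nas-ip>            # putty does the same from windows
        sudo -i                       # become root to read the logs
        less /var/log/messages        # general system log
        dmesg | less                  # kernel messages
        mc                            # midnight commander from synocli-file, F3 views a file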
  13. it's usually suggested to test it to some extent. i would at least run memtest over night, install dsm to a single hdd, copy some data and look into the logs in /var/log/ for anything unusual
  14. if you started dsm with 1111/1111 too, then it seems like every vid/pid other than the real one does work, pretty odd as it's usually the other way around. i guess that's something only jun himself could answer as it seems closely related to how the loader works (code wise)
  15. interesting, what do you see as usb vid/pid when you use a live linux from the usb in question on the CUS (checking whether the bios of the CUS does anything unusual to the usb device)? a wrong vid/pid would also prevent dsm from starting properly (not just installing) - at least on my systems it does not show up in the network and the serial console shows "mount failed". i do remember some people having the vid/pid right and getting error 13 on install. can you check another wrong value besides 0000/0000, like 1111/1111, just to make sure 0000/0000 is not special?
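     from the live linux the ids can be read like this (the example id is a sandisk ultra fit, just for illustration):
        lsusb                                                    # the ID column shows vendor:product, e.g. 0781:5583
        lsusb -v -d 0781:5583 | grep -iE 'idVendor|idProduct'    # verbose view of one device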
  16. for sure, if the btrfs file system is damaged, just using a different or newly installed OS is not going to change that. you can keep the system and delete the raid, but finding the source of the corruption is still to do and might be the more important task for now; without fixing that, even trying to repair the file system in any way might be a fruitless effort
  17. the usb flash drive itself is just for loading the kernel, dsm even unmounts the usb after starting. i see some possible sources: 1. the usb device itself is producing some kind of interference to the system - you could check that by using the old usb again, check if the problem is there, then remove the usb and check again; as long as you don't reboot the nas the usb is not needed. 2. driver problems, maybe you used a different extra.lzma containing different driver versions (mainly nic drivers) - but i think that's unlikely. also, to make sure there are no other interferences, i would disconnect the other nas from the network when testing. btw cruzer is a sandisk brand (i'm using sandisk cruzer fit and ultra fit for my nas)
  18. wouldn't you need to check the lvm volume for a btrfs file system instead of md4? MD -> LVM -> filesystem (as there are logical volumes i guess you had a SHR) btrfs check /dev/vg2/volume_2
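     a hedged sketch of walking down that stack from an ssh/root shell before running the check (the volume has to be unmounted for btrfs check; device names are examples from above):
        cat /proc/mdstat                           # the md raid devices and their members
        lvm vgs && lvm lvs                         # volume groups and logical volumes (lvm tools are present on SHR systems)
        btrfs check --readonly /dev/vg2/volume_2   # run the check against the logical volume, not md4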
  19. the m.2 slots on these cards are sata only, so there is no gain in having m.2, it would just have the same sata limits as a normal 2.5" drive. you can get jmb585 cards cheaper when buying over amazon from china, or use aliexpress, or just look for a 88se9230/88se9235 card (one that is not a pcie 1x layout); should be the same price as your highpoint card. the onboard chipset is usually ok, and if there are bandwidth limits on an added card (like pcie 2.0 and a low pcie lane count) then it's better to have fewer drives on that added card. no use at all as it's already dead; can be a thing if there is no other choice (like an intel nuc), but adding them on purpose? no, i wouldn't. one lane pcie 2.0 is 500 MByte/s and for two non-ssd drives that's barely ok, some drives can exceed 250MB/s; it's ~1000MByte/s for one lane pcie 3.0. it's most often the chip that limits, the card can be "wider"; some vendors go for looks (a 16x card looks so powerful) or it can be for the standard, as there is not really a pcie 2x slot or card, it's 1x or 4x (but there are some two-lane layout cards too), so look at the chips (lane count and pcie standard). the 88se9215 has just one lane pcie 2.0 and four sata ports - not so good if you have more than a pcie 1x slot (and more lanes), but if you just use two ports then it's still 250MB/s per port. if it has to be marvell i'd suggest a 88se92xx chip, fewer problems to expect (there are some funny stories around the 88se9128, you might check them before going with that old chip). i'd go with a jmb585 card, it can be fully used in the 16x slot and will also work in the 4x slot (pcie 2.0) with half bandwidth (still good for 3-4 hdd's), and the unused port might end up with a 10G nic next, at least that's what it is in my system
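     the rough arithmetic behind those numbers (per-lane figures are approximations that ignore protocol overhead):
        # per-drive budget ~ lanes * MB/s-per-lane / drives   (pcie 2.0 ~ 500 MB/s, pcie 3.0 ~ 1000 MB/s per lane)
        echo $(( 1 * 500 / 2 ))   # 88se9215 (pcie 2.0 x1) with two drives            -> 250 MB/s each
        echo $(( 2 * 500 / 4 ))   # jmb585 (x2) running at pcie 2.0 in the 4x slot, four drives -> 250 MB/s each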
  20. 1.04b will work but you should read some things first (you can't install dsm exclusively to your ssd) https://xpenology.com/forum/forum/83-faq-start-here/ https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  21. if you did not use any special extra/extra2 for 6.2.2 (just the default from the loader) then 6.2.3 should at least install; to get i915 working you would need my extra/extra2 from here https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
  22. i do have all the usual broadcom drivers for dell in my extra.lzma. i guess yours is a BCM5720 and that should work with tg3.ko; if you have the pci vendor and device id i can check if it's in the driver. imho it should work, try 3615 and 918+ (the r430 seems to be haswell, so 918+ should work). any intel based nic will do, there seem to be a lot of hpe quad-port intel nic's around, maybe you can get one for cheap
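     the ids can be read from any linux on the box (a live usb is fine), e.g.:
        lspci -nn | grep -i ethernet   # the [14e4:xxxx] part is the broadcom vendor:device id to check against tg3.ko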
  23. download loader 1.02a and use the vmdk from that one; it's a text file that works as a wrapper for the img and can also be used with 1.03/1.04, just open it with an editor and you will see
  24. beside this, with just 4 disks you will use onboard sata for sure (ahci as the driver), no problems with that when using disk hibernation
  25. download loader 1.02a (zip) and in there is the vmdk; it's just a text file that can be used with any (even newer) *.img, just open the file with an editor and you will see
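     for the curious, such a descriptor looks roughly like this (values are illustrative, not jun's exact file):
        cat synoboot.vmdk
        # Disk DescriptorFile
        version=1
        parentCID=ffffffff
        createType="monolithicFlat"
        # Extent description: <access> <size in 512-byte sectors> <type> <backing file>
        RW 102400 FLAT "synoboot.img" 0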