XPEnology Community

IG-88
Developer
  • Posts: 4,639
  • Joined
  • Last visited
  • Days Won: 212

Everything posted by IG-88

  1. that hardware might need the MBR version of the loader (the default loader is GPT) https://xpenology.com/forum/topic/7968-dsm-6xx-loader-with-mbr-partition-table/ also read this: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ loader 1.02b with dsm 6.1 is less of a problem, as it works with both uefi and csm/legacy. if you go for 1.03b and 6.2 (loader in the same section as above, "member tweaked loaders") you need to find the right bios settings to get csm mode, which can be tricky with hp desktops; disabling secure boot and completely disabling uefi devices in the boot devices menu of the bios would be my advice. as a general howto use this (the usb vid/pid is most important when installing 6.x): https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/ "practice" with 1.02b (MBR), and if you get that right, look into the csm/boot thing with 1.03b (also MBR, if MBR is what worked with 1.02b). if you are confident about preparing the loader you can concentrate on that
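For the vid/pid step: on any linux machine, `lsusb` shows the stick's vendor and product ids, which then go into the loader's grub.cfg as `set vid=` / `set pid=`. A minimal sketch that extracts the ids from a canned output line (the ids 090c:1000 and the device name are made-up examples):

```shell
# on a real machine run `lsusb` and find your usb stick's line;
# here we parse a canned sample line with hypothetical ids (090c:1000)
sample='Bus 001 Device 004: ID 090c:1000 Silicon Motion Flash Drive'
# extract vid:pid -- these become `set vid=0x090c` / `set pid=0x1000` in grub.cfg
printf '%s\n' "$sample" | grep -o '[0-9a-f]\{4\}:[0-9a-f]\{4\}'
# -> 090c:1000
```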
  2. way too overpowered for just a nas aka data tank; one cpu and even half the ram would be more than enough. as the cpus are sandy bridge, 918+ is not possible, and with 8 cores + HT it's 3617. if you insist on 2x cpu you would need to disable HT to make use of all 16 cores, as 16 is the max we can use with xpenology https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ as it's presumably a uefi bios you will have to use csm mode and the non-uefi usb boot device when using loader 1.03b and dsm 6.2.3. if you have trouble, just try (practice with) loader 1.02b and dsm 6.1, as it can be run with uefi and csm/legacy. for 20 sata ports there have to be additional controllers, and not everything is working; lsi sas 92xx/93xx in IT mode might be a good choice. as 12 ports are the default max with 3615/3617, you will need to manually tweak synoinfo.conf after installing, and would need to redo that after bigger updates (which might never come for 6.2, as 7.0 is about to launch) - you should read about that before jumping into it; starting with a broken raid can make people nervous, even if it will usually be all right after fixing the synoinfo.conf. depending on the side quests you have (like vm's) it might be better to use the hardware with esxi, run dsm as a vm, and have other vm's on esxi too
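The synoinfo.conf tweak for more than 12 ports usually means raising maxdisks and widening the port bitmasks; a hypothetical sketch for 20 internal disks (these exact values are illustrative, not from the post -- read the forum threads on synoinfo.conf before touching it, and keep /etc/synoinfo.conf and /etc.defaults/synoinfo.conf in sync):

```
# /etc.defaults/synoinfo.conf (mirrored in /etc/synoinfo.conf) -- example only
maxdisks="20"
internalportcfg="0xfffff"    # 20 one-bits: slots 1-20 treated as internal
esataportcfg="0x0"           # no esata slots in this example
usbportcfg="0x300000"        # usb slots shifted above the internal range
```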
  3. https://www.hardkernel.com/shop/odroid-h2plus/ https://www.hardkernel.com/shop/odroid-h2-case-1/
  4. the current loaders for 6.2 use different kernel versions; that would probably only work with a new loader (and hack) https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/?do=findComment&comment=166009
  5. i don't think so. imho, for booting from that, the driver has to be in the kernel (zImage), and as we are bound to use the original synology kernel, and it's protected against tampering (checksums), i don't see how it could be done. it might be possible to compile the driver as a module, but that would not help booting; that way you would just be able to use the module when dsm is already running
  6. the z800 seems to have an integrated sas controller, and from what i found, and from your symptoms, it's an LSI SAS 1068E, which has a hardware limit of 2.4TB per disk. nothing you can do about it other than using another controller like an lsi sas 9211-8i or its clones (in IT mode), or an ahci sata controller (a two-lane 88se9230 or a jmb585 would work too; as it's presumably only pcie 2.0, a one-lane controller like the 88se9215 would not be that good, as it would only have max. 500MB/s on its single pcie lane). the 9211-8i has 8 sata ports and 8 pcie lanes and might be a good choice if it's planned to extend beyond 4 sata ports (edit: if there are normal sata ports onboard from the chipset you could use them instead of the sas ports, and you might not need anything additional for your two disks)
  7. before trying to boot from it you should check if it's even working with dsm; my guess is it's not. i did not find the needed kernel config option in synology's default kernel config ("CONFIG_MMC_SDHCI_PCI") or a driver ("sdhci-pci"), so it seems like the hardware is not recognised. after installing dsm with usb, try "lspci" to check if a driver is used for this hardware
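The lspci check works like this: with `lspci -k`, a device that has a driver bound shows a "Kernel driver in use:" line underneath it; if that line is missing under the SD/MMC controller, DSM has no driver for it. A sketch on a canned sample block (the device name and ids are hypothetical):

```shell
# on the DSM box, over ssh, run: lspci -k
# canned sample of one device's block, with no driver bound:
sample='02:00.0 SD Host controller: O2 Micro, Inc. SD/MMC Card Reader
	Subsystem: Device 17aa:3801'
if printf '%s\n' "$sample" | grep -q 'Kernel driver in use'; then
  echo "driver bound"
else
  echo "no driver bound"
fi
# -> no driver bound
```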
  8. FAQ: https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=82391 if that is important to you, then try this https://xpenology.com/forum/topic/13030-dsm-5x6x-cpu-name-cores-infomation-change-tool/
  9. you can use synology's KB for things like this, as you use dsm like any other synology user https://www.synology.com/en-global/knowledgebase/DSM/help/DSM/StorageManager/storage_pool_expand_add_disk ...
     Drive requirements: Please make sure the drives that you intend to add to your Synology NAS meet the following requirements:
     • RAID or SHR configuration must be created by drives of the same type. Using drives of different types will affect system reliability and performance. Mixed drive types as shown below are not supported for RAID or SHR configuration: SATA drives and SAS drives; 4K native drives and non-4K native drives.
     • For SHR: The capacity of the drive you intend to add must be equal to or larger than the largest drive in the storage pool, or equal to any of the drives in the storage pool. Example: If an SHR storage pool is composed of three drives (2 TB, 1.5 TB, and 1 TB), we recommend that the newly-added drive should be at least 2 TB for a better capacity usage. You can consider adding 1.5 TB and 1 TB drives, but please note that some capacity of the 2 TB drive will remain unused.
     • For RAID 5, RAID 6, or RAID F1: The capacity of the drive you intend to add must be equal to or larger than the smallest drive in the storage pool. Example: If a RAID 5, RAID 6, or RAID F1 storage pool is composed of three drives (2 TB, 1.5 TB, and 1 TB), then the capacity of the new drive must be at least 1 TB.
     ...
  10. that is normal; only the lsi sas driver handles it differently. with ahci it is always the same, at least on one controller; if you move controllers around, it can change "block-wise" when the order of the controllers changes
  11. if it's 918+ dsm 6.2.3 then you will need a modded extra.lzma to get rid of jun's (great, but now obsolete) i915 driver https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
  12. from the board specs, ports 3&4 would be an asm1061; no general trouble with that one. to be sure you would need to backtrack the device [01:00.1] in /var/log/dmesg to its pci vendor and device id, to see if it's an asm1061 or a 9215. maybe some kernel parameter in grub.cfg can help (like iommu=pt), but the 1st thing to try would be disabling VT-d in the bios
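Instead of digging through dmesg, `lspci -nn` prints the numeric [vendor:device] ids directly; as far as I know the ASM1061 shows up as [1b21:0612] and a Marvell 88SE9215 as [1b4b:9215]. A sketch that pulls the id out of a canned sample line (the real input would come from `lspci -nn -s 01:00.1` on the box):

```shell
# on the DSM box: lspci -nn -s 01:00.1
# canned sample of what an asm1061 line would look like:
sample='01:00.1 SATA controller [0106]: ASMedia Technology Inc. ASM1061 SATA IDE Controller [1b21:0612]'
# pull out the [vendor:device] id (the last bracketed hex pair)
printf '%s\n' "$sample" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -1
# -> [1b21:0612]
```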
  13. if it's an Asrock J5040-ITX then it would be normal, as the CPU/SoC has 2 sata ports and asrock adds a 2-port asm1061, so an added controller would start at 5/6. you can check /var/log/dmesg (or just type dmesg on the command line) to see which controller is using which port (ata1, ata2, ...). with 3/4 unused it looks more like the asm1061 is not working properly; the 88se9125 should be working as ahci https://ata.wiki.kernel.org/index.php/SATA_hardware_features on some systems where the chipset has more sata ports than are used on the board this can be seen too (4 ports onboard but the added controller starts at 7). as long as you don't come near the maximum (12 for 3615/17 and 16 for 918+) it's just a cosmetic problem
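A sketch of the dmesg check: each ahci controller announces its ports as ataN lines early in the boot log, so listing those shows the port layout. The sample lines below are hypothetical; on the real box the input would come from `dmesg` or /var/log/dmesg:

```shell
# on the DSM box: dmesg | grep -o 'ata[0-9]*' | sort -u   (or read /var/log/dmesg)
# canned sample lines in kernel-log format:
sample='[    1.234] ata1: SATA max UDMA/133 abar m2048@0xdf000000 port 0xdf000100 irq 30
[    1.235] ata2: SATA max UDMA/133 abar m2048@0xdf000000 port 0xdf000180 irq 30
[    1.301] ata5: SATA link down (SStatus 0 SControl 300)'
# list just the port numbers seen -- gaps (here ata3/ata4 missing) show
# a controller whose ports never came up
printf '%s\n' "$sample" | grep -o 'ata[0-9]*' | sort -u
```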
  14. check /var/log/dmesg and /var/log/messages for more information. is there anything in the storage manager's hdd section about the drive not being healthy? did you run an extended smart test on the drive?
  15. me too, that's good enough. isn't it a waste to just replace the usb (<10 bucks) with an m.2 device? the loader is just 50MB and the rest of the m.2 ssd will be unused (i guess)? there are some small-size usb sticks, and the loader only loads a few megabytes of kernel before switching to /dev/md0 for loading the system (aka DSM). and by waste i'm thinking of the m.2 slot: with two of them you can have a really potent cache for a 10G network (as long as it's not QLC), or you could use one as a "source" of 4 pcie lanes (with an m.2-to-pcie adapter on a ribbon cable). in a lot of cases you might be better off using esxi and having the m.2 ssd used as rdm or vmdk, as a "normal" ssd in a virtual dsm (more or less flyride's way of using an m.2 ssd for dsm). edit: no criticism, just some thoughts
  16. as a general rule, don't use drivers (*.ko) from 6.2.1 and 6.2.2 for 6.2.3. for 6.2.3 you should try stuff that worked with 6.2(.0) and the original loader back in 2018
  17. maybe try another nas distribution like omv or freenas to see if it works stably
  18. YES, 6.2.3 comes with its own newer i915 driver for intel quick sync video on gemini lake, and the kernel changes synology made break the i915 driver jun made. my extra/extra2 for 918+ takes care of this (there is some more description about that in the 1st post) https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
  19. what backplane is it? some old part that does not support SATA3's 6Gbps?
  20. there is a tutorial for doing it on linux; dd is the right tool https://xpenology.com/forum/topic/25833-tutorial-use-linux-to-create-bootable-xpenology-usb/ just for booting and finding it in the network the vid/pid does not matter; it matters when installing the *.pat file, so your problem is elsewhere. if the system is uefi you need to boot the non-uefi representation of the usb
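The dd step itself is simple; here is a safe sketch using plain files so it can be run anywhere. For the real stick you would use `of=/dev/sdX` (the whole device, not a partition -- double-check the device name, dd will happily overwrite the wrong disk) and run it as root:

```shell
# stand-in for the loader image (on the real system this is the downloaded synoboot.img)
dd if=/dev/zero of=synoboot.img bs=1M count=4 2>/dev/null
# write the image; for a real usb stick: dd if=synoboot.img of=/dev/sdX bs=1M conv=fsync
dd if=synoboot.img of=target.img bs=1M conv=fsync 2>/dev/null
# verify the copy is byte-identical
cmp -s synoboot.img target.img && echo "write verified"
```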
same kernel version as 3617; no reason to assume a change in central ahci code would not be in both. just add it at the end of the line, as an additional parameter to the others:
set common_args_918='syno_hdd_powerup_seq=1 HddHotplug=0 syno_hw_version=DS918+ vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_hdd_detect=0 syno_port_thaw=1'
a space separates entries and the line is "closed" with the '. for 3617 it's "set common_args_3617="
  22. ok, i did some more tests. nothing in terms of reconnections (that would point to interface/cable/backplane/connectors), they were still zero, but i did see something "unusual" in the dmesg log, and only for the WD disks (had two 500GB disks, one 2.5" the other 3.5"); nothing like that with HGST, Samsung, Seagate or a Crucial SSD MX300:
[ 98.256360] md: md2: current auto_remap = 0
[ 98.256363] md: requested-resync of RAID array md2
[ 98.256366] md: minimum _guaranteed_ speed: 10000 KB/sec/disk.
[ 98.256366] md: using maximum available idle IO bandwidth (but not more than 600000 KB/sec) for requested-resync.
[ 98.256370] md: using 128k window, over a total of 483564544k.
[ 184.817938] ata5.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen
[ 184.825608] ata5.00: failed command: READ FPDMA QUEUED
[ 184.830757] ata5.00: cmd 60/00:00:00:8a:cf/02:00:00:00:00/40 tag 0 ncq 262144 in
              res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 184.845546] ata5.00: status: { DRDY }
[ 184.849222] ata5.00: failed command: READ FPDMA QUEUED
[ 184.854373] ata5.00: cmd 60/00:08:00:8c:cf/02:00:00:00:00/40 tag 1 ncq 262144 in
              res 40/00:00:e0:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 184.869165] ata5.00: status: { DRDY }
[ 184.872839] ata5.00: failed command: READ FPDMA QUEUED
[ 184.877994] ata5.00: cmd 60/00:10:00:8e:cf/02:00:00:00:00/40 tag 2 ncq 262144 in
              res 40/00:00:e0:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 184.892784] ata5.00: status: { DRDY }
...
[ 185.559602] ata5: hard resetting link
[ 186.018820] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 186.022265] ata5.00: configured for UDMA/100
[ 186.022286] ata5.00: device reported invalid CHS sector 0
[ 186.022331] ata5: EH complete
[ 311.788536] ata5.00: exception Emask 0x0 SAct 0x7ffe0003 SErr 0x0 action 0x6 frozen
[ 311.796228] ata5.00: failed command: READ FPDMA QUEUED
[ 311.801372] ata5.00: cmd 60/e0:00:88:3a:8e/00:00:01:00:00/40 tag 0 ncq 114688 in
              res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 311.816151] ata5.00: status: { DRDY }
...
[ 312.171072] ata5.00: status: { DRDY }
[ 312.174841] ata5: hard resetting link
[ 312.634480] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 312.637992] ata5.00: configured for UDMA/100
[ 312.638002] ata5.00: device reported invalid CHS sector 0
[ 312.638034] ata5: EH complete
[ 572.892855] ata5.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen
[ 572.900523] ata5.00: failed command: READ FPDMA QUEUED
[ 572.905680] ata5.00: cmd 60/00:00:78:0a:ec/02:00:03:00:00/40 tag 0 ncq 262144 in
              res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 572.920462] ata5.00: status: { DRDY }
...
[ 573.630587] ata5.00: status: { DRDY }
[ 573.634262] ata5: hard resetting link
[ 574.093716] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 574.096662] ata5.00: configured for UDMA/100
[ 574.096688] ata5.00: device reported invalid CHS sector 0
[ 574.096732] ata5: EH complete
[ 668.887853] ata5.00: NCQ disabled due to excessive errors
[ 668.887857] ata5.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen
[ 668.895522] ata5.00: failed command: READ FPDMA QUEUED
[ 668.900667] ata5.00: cmd 60/00:00:98:67:53/02:00:04:00:00/40 tag 0 ncq 262144 in
              res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 668.915449] ata5.00: status: { DRDY }
...
[ 669.601057] ata5.00: status: { DRDY }
[ 669.604730] ata5.00: failed command: READ FPDMA QUEUED
[ 669.609879] ata5.00: cmd 60/00:f0:98:65:53/02:00:04:00:00/40 tag 30 ncq 262144 in
              res 40/00:00:e0:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[ 669.624748] ata5.00: status: { DRDY }
[ 669.628425] ata5: hard resetting link
[ 670.087717] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 670.090796] ata5.00: configured for UDMA/100
[ 670.090814] ata5.00: device reported invalid CHS sector 0
[ 670.090859] ata5: EH complete
[ 6108.391162] md: md2: requested-resync done.
[ 6108.646861] md: md2: current auto_remap = 0
i could shift the problem between ports by changing the port, so it's specific to the WD disks. it was on both kernels, 3617 and 918+ (3.10.105 and 4.4.59). as the error points to NCQ, and i found references on the internet, i tried to "fix" it by disabling NCQ for the kernel: i added "libata.force=noncq" to the kernel parameters in grub.cfg, rebooted, did the same procedure as before (with 918+), and i did not see the errors (there will be entries about not using ncq for every disk, so it's easy to see that the kernel parameter is used as intended). in theory it might be possible to disable ncq only for the disks that really are WD, but that would need intervention later if anything is changed on the disks. in general there was no problem with the raids i built even with the ncq errors, and btrfs had nothing to complain about. i'd suggest using this when having WD disks in the system. i'm only using HGST and Seagate on the system with the jmb585, so it was not visible before on my main nas
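With jun's loader that parameter goes inside the quotes of the common_args line in grub.cfg on the boot stick; a sketch for the 918+ line (the stock arguments shown come from the loader, with libata.force=noncq appended; the per-port form is from the kernel's libata documentation):

```
# grub.cfg on the loader: append inside the quotes, separated by a space
set common_args_918='syno_hdd_powerup_seq=1 HddHotplug=0 syno_hw_version=DS918+ vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_hdd_detect=0 syno_port_thaw=1 libata.force=noncq'
# per-device variant, e.g. only disable NCQ on ata5 device 00:
#   libata.force=5.00:noncq
```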
  23. this driver is a nasty thing: it only contains older firmware and phys, and even if you download a newer driver from a vendor, or even from the tehuti website, you don't get them all together. even if the source references them, it's checked when compiling, and only the parts available become part of the compiled driver. it took a while to find all the parts by combining drivers from 3-4 oems that used different phys. the driver in my extra.lzma should contain all used phys and should be much more "universal" (and newer) than the driver synology uses. i have two tehutis myself: one is an older qnap card that is compatible with synology's own card and works ootb with 6.2.3, and one bought later from an oem (trendnet) with an "incompatible" phy that needed some work on the driver and firmware files
  24. that would be my guess too; reconnection errors usually originate from cable problems, and the value refers directly to the s.m.a.r.t. attribute "UDMA_CRC_Error_Count". i also did a run of creating a raid5 with the 2nd jmb585 controller and 3617: no reconnection errors. that's odd, the "normal" link speed is 6.0 Gbps
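That counter can be read directly from the SMART attribute table (attribute 199); a sketch parsing a canned sample row -- on a real disk the input would come from `smartctl -A /dev/sdX`, and the raw value 17 here is made up:

```shell
# canned sample row from `smartctl -A` for attribute 199 (hypothetical raw value)
sample='199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       17'
# print attribute name and raw value (last column); a raw value that keeps
# rising during operation points at cabling/connector problems
printf '%s\n' "$sample" | awk '{print $2 ": " $NF}'
# -> UDMA_CRC_Error_Count: 17
```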