XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 04/22/2020 in all areas

  1. edit 14.05.2020: 6.2.3 is back online as v25426. For newer Coffee Lake CPUs with problems using hardware transcoding (/dev/dri present after boot), there is a new Video Station that fixes the problem: https://xpenology.com/forum/topic/28321-driver-extension-jun-104b-for-dsm623-for-918/?do=findComment&comment=144918

edit2 02.06.2020: as @richv31 pointed out here https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/?do=findComment&comment=148564 there seems to be a serious problem with 918+ and SCSI/SAS drivers, at least with mpt2sas/mpt3sas. It is not limited to 6.2.2/6.2.3; it also happens with Jun's original loader 1.04b and DSM 6.2.0 (23824). RAID sets breaking after not properly waking up from HDD hibernation means potential data loss. I had a two-disk RAID1 set on an LSI 9211-8i, and after the disks spun down only one came back up. I saw some really worrying messages on the serial console and was not able to log in to the system, neither on the web GUI nor on the serial console; the whole system was locked down and only switching it off seemed to work.

Because of the problems with not getting S.M.A.R.T. values, I had used Jun's old original raid_class.ko, scsi_transport_sas.ko and scsi_transport_spi.ko in 0.11/0.12 to get the old state back (replacing my newly built ones from the more recent Synology kernel source 24922), so those versions inherit the problem, which seems to have been present since the beginning with loader 1.04b. Anyone using mpt2sas/mpt3sas and disk hibernation on 918+ should disable hibernation for now to avoid any risk of data loss (a quick check is sketched below). The new 0.13 for 918+ will have raid_class.ko, scsi_transport_sas.ko and scsi_transport_spi.ko built from kernel source 24922; that version worked in testing on my system without breaking anything and without such alarming errors on wakeup of the disks. There will be no SMART data, but at least it seems safer than disks not waking up properly.

For "proper" LSI SAS controller support I'd suggest using 3615 or 3617, as it is "native" in those units and should work better. Maybe there are kernel options missing in the 918+ kernel that can't be fixed; if anyone finds out more, just add a comment (I might not have the time to dig into this). The other alternative is to use SATA/AHCI instead of SCSI/SAS with 918+; that works without problems on my system using 918+ (12 disks). JMB585-based controllers seem to be the best choice at the moment, as they support PCIe 3.0 and can deliver up to 2000 MByte/s across their 5 SATA ports. The older Marvell and ASM chips use only PCIe 2.0, limiting the data rate to 500 MB/s or 1000 MB/s; even 8-port controllers built from two of the older chips use a PCIe bridge chip with just two lanes, making them a terrible choice for a high port count. They might be OK with just one or two 1 GBit NICs, but they will at least limit the rebuild speed, and SSDs should be kept away from these controllers and placed on internal SATA ports.
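A minimal sketch for checking whether your disks sit behind one of the affected drivers, assuming SSH access as root:

# list loaded SCSI/SAS modules; any hit means you are potentially affected
lsmod | grep -E "mpt2sas|mpt3sas"
# if either module shows up, disable HDD hibernation in
# Control Panel > Hardware & Power until this is sorted out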
For instructions about installing or updating, please read "Driver extension jun 1.03b/1.04b for DSM6.2.2 for 3615xs / 3617xs / 918+". If I have time I will write more in this place. The new package is not well tested; I just did some tests with hardware I have at hand (ahci, e1000e, r8168, igb, bnx2x, mpt2sas/mpt3sas) and tested the update from 6.2.2 to 6.2.3.

Basically, Synology reverted the kernel config change made in 6.2.2 back to what it was before, so old drivers from the original 1.04b loader (and older drivers I made before 6.2.2) should work again. But as Synology also introduced its own new i915 driver with 6.2.3, there will be a conflict when Jun's i915 driver is loaded with 6.2.3.

There are two positive new things: Synology released a nearly recent kernel source code (24922), and 6.2.3 has a new i915 driver supporting as much GPU hardware as Jun's backported i915 driver in loader 1.04b. So there is no need for Jun's i915 driver anymore, and in theory we should have good support for Apollo Lake, Gemini Lake and other newer hardware. But it seems not all newer UHD 630 GPUs are supported, as device id "3E98" is unsupported (i5-9400, i5-9600K, i7-9700T, i7-9700). ark.intel.com and wikichip.org are usually good sources to check the id: https://ark.intel.com/content/www/us/en/ark/products/134898/intel-core-i5-9400-processor-9m-cache-up-to-4-10-ghz.html

edit: there seem to be versions of the i5-9400 with a "3E92" GPU. These versions don't need the patched driver; they will run with the default driver and /dev/dri should be present out of the box. It can also be checked in /var/log/dmesg by searching for "[8086:3e92]". https://en.wikichip.org/wiki/intel/core_i5/i5-9400 There is also a good document from Intel listing all Coffee Lakes: https://01.org/sites/default/files/documentation/intel-gfx-prm-osrc-cfl-vol01-configurations.pdf

Coffee Lake CPUs without driver support (no hardware transcoding); SKU numbers should be listed when buying and can be checked on the box:
i9 SKU S82
i7 SKU S82
i5 SKU S6f2

The new 10th gen i5-10500 / i3-10300 have device id "9BC8", and there are no "9xxx" ids in the driver we use, so don't expect any newer gen 10 CPU to work with hardware transcoding, even when it "only" has a UHD 630 iGPU.

edit: there are also even lower-end 10th gen CPUs (2 cores) with a different GPU device id "9BA8", like the G5920, G5925 and G6400 with a UHD 610 iGPU. The equivalent in the 9th gen would be a G5400, and that comes with PCI device ids 3E90/3E93, so we need to edit a different entry when patching for these.

edit: I made a modded i915 driver where the PCI device id of the 9th gen UHD 630 iGPU (3E92/3E93) is replaced with the device ids of the newer/different UHD 610/630 iGPUs that are unsupported:

8086:3E92 => iGPU UHD 630, low-end desktop 9 series (original driver) -> 8086:3E98 => iGPU UHD 630, high-end desktop 9 series (i5-9400, i5-9600K, i7-9700T, i7-9700)
8086:9BC8 => iGPU UHD 630, low-end desktop, i5-10500, i5-10600T and lower
8086:9BC5 => iGPU UHD 630, high-end desktop, i5-10600K and higher
8086:3E93 => iGPU UHD 610, low-end desktop 9 series -> 8086:9BA8 => iGPU UHD 610, low-end desktop series like G6400

The zip file contains 3 versions; in each one, 3E92 is replaced with the id we want to get working. As it's just a crude binary patch, I chose 3E92 since it seemed the most similar device. It was tested on a 3E98 iGPU and seemed to work; for the 10th gen there is at least one positive feedback with Plex. It is intended to be used with the extra/extra2 from this thread, as that removes Jun's old i915 driver (not just one file) which would prevent Synology's new driver from working properly. The patched i915.ko file is supposed to be copied to /usr/lib/modules/, replacing the original file from Synology, for 6.2.3 Update 3 (added 9BA8 support); a sketch of the install steps follows after the links.
https://gofile.io/d/4fFJA5
https://dailyuploads.net/x3e0nkxk6p0e
https://usersdrive.com/zfl9csl91xwr.html
https://www34.zippyshare.com/v/304gfbnO/file.html
SHA256: EC2447F47FEE6457FE3F409E26B83E5BF73023310E10A624575A822FDBC10642
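A minimal sketch for checking which iGPU device id your system reports and putting the patched module in place, assuming SSH access as root and that lspci is available on your build:

# the device id is printed in brackets, e.g. [8086:3e92] or [8086:3e98]
lspci -nn | grep -i vga
grep -i "8086:3e9" /var/log/dmesg
# keep a copy of synology's original module, then install the patched one
cp /usr/lib/modules/i915.ko /usr/lib/modules/i915.ko.orig
cp i915.ko /usr/lib/modules/i915.ko
# after a reboot, /dev/dri should exist if the driver matched
ls /dev/dri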
A little warning: in the worst case the system might crash or freeze when transcoding, and such undefined states and hard resets can result in data loss (cache) or damaged RAIDs (depending on the load on the system at the time). So until it's more tested, it should not be used on a system with "important" data and no recent backup. OK, I know that's a little overcautious, but I don't like the thought of someone losing data because of this nice-to-have feature (software mdadm RAIDs can be repaired in most cases if the worst happens).

-> positive feedback for an i5-9400, i5-9600K, i9-9900T (8086:3E98): fully working
-> positive feedback for a G6400 (8086:9BA8): /dev/dri present
-> one user's positive feedback for an i5-10600T (8086:9BC8): fully working with Plex
-> one user's negative feedback for an i5-10500 (8086:9BC8): /dev/dri devices present but no transcoding with Emby
-> one user's negative feedback for an i9-10900 (8086:9BC5): system does not boot anymore - seems to be a solid hands-off?

edit: there is a driver source patch for 10th gen that came up along with 7.x, and 9BC5 was confirmed to work with that new driver. So anyone with a 10th gen GPU and trouble can change to 7.x (TCRP loader) or can try these 6.2.3 modules from this link (i915_918_623.7z): https://xpenology.com/forum/topic/59909-i915ko-backported-driver-for-intel-10th-gen-ds918-ver-701-up3/?do=findComment&comment=277236

I completely removed Jun's i915 drivers from the extra/extra2 and changed/added the i915 firmware needed. I also took care of the "old" i915 drivers on the installed system in /usr/lib/modules/update/; they are now deleted on boot. So if you come from 6.2.2 and used extra/extra2 std or recovery, or you already used Jun's original 1.04b extra/extra2, it should work as soon as you boot up (when drivers in "update" are not present, the default drivers from Synology are used, and with the added i915 in place it will work on most Intel GPUs up to Coffee Lake). The driver versions are the same as in the 6.2.2 extra/extra2 but are newly compiled; as every driver from 6.2.2 is renewed, all the old drivers are overwritten and there should be no crashing drivers on boot (which can prevent proper shutdown or reboot). We now have one universal i915 driver (instead of both Jun's and Synology's), and it's back to one package for all CPUs/GPUs; if needed there will be a recovery version too.

I only tested a newly created loader from the 1.04b image file with zImage and rd.gz from "DSM_DS918+_25426.pat" and the new extra/extra2; it will also work with the 6.2.0 kernel that is in the 1.04b image by default.

If there are problems getting hardware transcoding to work, it might help to disable VT-x/VT-d in the BIOS (reported on a J5005 Gemini Lake), but there are other possible reasons related to the licensing that's needed for this to work. At least it will not hurt, as long as you don't intend to use the VMM package.

If you accidentally updated 6.2.2 to 6.2.3 and now have problems like no network after boot, no proper shutdown/reboot or missing /dev/dri (hardware transcoding), then you just copy the new extra/extra2 to your already updated USB drive (the update to 6.2.3 already installed the new kernel on it). With the latest updates of Win10 there is no drive letter anymore, but it's still possible with the tools already used for creating the USB drive: read the USB drive into an image file with "Win32DiskImager 1.0" (activate "read only allocated partitions"), mount that image with OSFMount (as in the tutorial section), overwrite the old extra/extra2.lzma, and write the image back to USB with Win32DiskImager. After downloading any of the packages below, it's a good idea to verify the checksum first (a sketch follows).
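A minimal sketch for verifying a download against the SHA256 listed with each package below (the archive file name depends on the mirror; the one shown here is just a placeholder):

# print the hash and compare it with the value listed for the package
sha256sum extra_v0.13.9.zip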
extra.lzma/extra2.lzma for loader 1.04b ds918+ DSM 6.2.3 v0.13.9
Some Intel and Realtek drivers updated, so hopefully more onboard NICs will work (like Realtek 8125). Realtek 8152 is also newer, so all 2.5G USB solutions from Realtek should work; there is still no way for the Intel 2.5G NIC, as there is no standalone driver for older kernel versions. Removed firewire, added bnxt_en and sr_mod/cdrom, NIC Killer E2500, added *vf.ko in rc.modules.
https://pixeldrain.com/u/jHa2eYrc
https://gofile.io/d/mUFYxQ
SHA256: 1CED32FCF63EB54DAA44335FA1EFBCE408D41A3E16D55771D35B0FD423F0B9CF

extra.lzma/extra2.lzma for loader 1.04b ds918+ DSM 6.2.3 v0.13.3
SCSI/SAS disks will have no S.M.A.R.T. info with LSI SAS controllers (see edit2 above). Newer atlantic.ko driver 2.3.4, r8125 added to rc.modules, latest source used for Realtek drivers r8101/r8125/r8152/r8168/r8169, bna.ko firmware corrected.
https://pixeldrain.com/u/pkBY9XjC
https://gofile.io/d/FvoFdo
SHA256: EF6F26999C006A29B3B37A7D40C694943100F0A9F53EC22D50E749F729347EC6

For special purposes and tests: extra.lzma/extra2.lzma for loader 1.04b ds918+ DSM 6.2.3 v0.12.1
This version shows S.M.A.R.T. info and serials of disks on LSI SCSI/SAS but might corrupt the RAID when disk hibernation is active (see warning above).
https://pixeldrain.com/u/kZJdPj1H
https://gofile.io/d/nsglbX
SHA256: 9089D38A4975AB212553DA7E35CE54027DE4F84D526A74A46A089FC7E88C1693

extra.lzma for loader 1.03b ds3615 DSM 6.2.3 v0.12.1_test
Added virtio/9p, CDROM drivers, NIC Killer E2500.
https://pixeldrain.com/u/Rx4tV6ay
https://gofile.io/d/RnX1QA
SHA256: E72820BF648CFD7F6075DEEB1208A3E0D8A61F38289AE17AC7E355910B9B0E0E

extra.lzma for loader 1.03b ds3615 DSM 6.2.3 v0.11_test
Same added drivers as for 6.2.2, like newer Intel drivers, 10G NICs, ...
https://pixeldrain.com/u/5aN77nWf
https://gofile.io/d/PMIDrX
SHA256: 5DE93F95841CC01F9E87EE4EE2A330084B447E44EBAA6013A575A935D227D4AF

extra.lzma for loader 1.03b ds3617 DSM 6.2.3 v0.12_test (2/2022)
Added virtio/9p, CDROM drivers, NIC Killer E2500.
https://pixeldrain.com/u/xmhCVxck
https://gofile.io/d/VRtmC1
SHA256: B9AC8705D5D9DCEED1C0315346E4F2C7C4CD07C4ED519FC9901E8E368A3AE448

extra.lzma for loader 1.03b ds3617 DSM 6.2.3 v0.11.2_test
Same added drivers as for 6.2.2, like newer Intel drivers, 10G NICs, ... (0.11.2 because I forgot the bnx2/bnx2x firmware in 0.11, and there was an mpt2/mpt3 driver problem when updating from 6.2.2)
https://pixeldrain.com/u/zwAJzKa9
https://gofile.io/d/0Tx8A7
SHA256: D467914E55582D238AC5EC4D31750F47AEB5347240F2EAE54F1866E58A8BD1C9
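As an alternative to the Win32DiskImager/OSFMount procedure above, the files can also be swapped from a Linux machine. A minimal sketch, assuming the loader USB shows up as /dev/sdb and that extra.lzma/extra2.lzma live on its second partition (verify with lsblk and ls before copying anything):

# mount the loader partition that holds extra.lzma and check its contents
mount /dev/sdb2 /mnt
ls /mnt
# overwrite with the new files (assumed to be in the current directory),
# flush, and unmount
cp extra.lzma extra2.lzma /mnt/
sync
umount /mnt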
    1 point
  2. Good day, everyone! I'm asking the community for help. The situation is as follows: I have a working XPEnology box running on an Atom D410 with 4 GB of RAM, DSM 6.1.7-15284. For my needs (it's used only as a personal cloud and a Time Machine target for Macs) it's sufficient. The problem is access to its resources from the internet. I set up KeenDNS and have access to the router's web interface. I obtained a certificate for a fourth-level domain (on the router) for the XPEnology box and forwarded port 5000. From an external network I can reach the XPEnology web interface (nas.xxxx.keenetic.pro), and I can also open Drive (nas.xxxx.keenetic.pro/drive). So via the web interface everything is fine, access works. But when I enter the connection address (nas.xxxx.keenetic.pro) in the Drive client, I get a connection failure. I've tried appending /drive and specifying the port... I cannot connect to my cloud from a client on a remote network. I also tried setting port 10002 in the settings and forwarding it on the router; that didn't help either. It's the same on both Windows and Mac clients. Surprisingly, connecting from the iOS mobile app using nas.xxxx.keenetic.pro/drive worked; it works on both the internal and the external network. At the same time, the Drive client on Windows and Mac connects via the local IP without problems. Can anyone suggest how to configure the Drive client to work from an external network? I'd be glad of any help. Thanks in advance to everyone who replies.
    1 point
  3. NOTE: This problem consistently manifests when running Jun's loader on a virtual machine with 6.2.3, but some also have problems on baremetal, and under certain conditions, on other 6.2.x versions. The fix can be implemented safely on all Jun loader installs. You can verify whether you have the issue by launching SSH and issuing the following command:

$ ls /dev/synoboot*
/dev/synoboot /dev/synoboot1 /dev/synoboot2

If these files are not present, your synoboot devices are not being configured correctly, and you should install the fix script. If synoboot3 exists, that is okay.

TL;DR: When running DSM 6.2.3 as a virtual machine (and sometimes on baremetal), Jun's 1.03b and 1.04b bootloaders fail to build /dev/synoboot
The bootloader SATA device, normally mapped beyond the MaxDisks limit, becomes visible if /dev/synoboot is not created
The DSM 6.2.3 update rewrites the synoinfo.cfg disk port bitmasks, which may break arrays of more than 12 disks and cause odd behavior with the bootloader device

Background: Setting the PID/VID for a baremetal install allows Jun's loader to pretend that the USB key is a genuine Synology flash loader. On an ESXi install, there is no USB key - instead, the loader runs a script to find its own boot device, and then remakes it as /dev/synoboot. This was very reliable on 6.1.x and Jun's loader 1.02b. But moving to DSM 6.2.x and loaders 1.03b and 1.04b, there are circumstances where /dev/synoboot is created but the original boot device is not suppressed. The result is that sometimes the loader device is visible in Storage Manager. Someone found that if the controller was mapped beyond the maximum number of disk devices (MaxDisks), any errant /dev/sd boot device was suppressed. Adjusting DiskIdxMap became an alternative way to "hide" the loader device on ESXi, and Jun's latest loaders use this technique.

Now, DSM 6.2.3: The upgrade changes at least two fundamental DSM behaviors:
SATA devices that are mapped beyond the MaxDisks limit are no longer suppressed, including the loader (appearing as /dev/sdm if DiskIdxMap is set to 0C)
The disk port configuration bitmasks in synoinfo.conf are rewritten: internalportcfg, usbportcfg and esataportcfg, and on 1.04b they no longer match the default MaxDisks=16 (or your value if you have modified MaxDisks). NOTE: If you have more than 12 disks, it will probably break your array and you will need to restore the values of those parameters.

Also, when running as a virtual machine (and sometimes on baremetal), DSM 6.2.3 breaks Jun's loader synoboot script such that /dev/synoboot is not created at all.

Negative impacts: The loader device might be accidentally configured in Storage Manager, which will crash the system
The loader partitions may inadvertently be mapped as USB or eSATA folders in File Station and become corrupted
The absence of /dev/synoboot devices may cause future upgrades to fail, when the upgrade wants to modify rd.gz in the loader (often ERROR 21 or "file corrupt")

Unpacking Jun's synoboot script reveals that it harvests the device nodes, deletes the devices altogether, and remakes them as /dev/synoboot. It tries to identify the boot device by looking for a partition smaller than the smallest array partition allowed. It's an ambiguous strategy for identifying the device, and something new in 6.2.3 is causing it to fail during the early-boot system state. There are a few technical configuration options that can cause the script to select the correct device, but they are difficult and depend on the loader version, DSM platform, and BIOS/EFI boot.
Fix: However, if Jun's script is re-run after the system is fully started, everything is as it should be. So extracting the script from the loader and adding it to the post-boot actions appears to be a solution to this problem:
Download the attached FixSynoboot.sh script (if you cannot see it attached to this post, be sure you are logged in)
Copy the file to /usr/local/etc/rc.d
chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
Thus, Jun's own code will re-run after the initial boot, once whatever system initialization state that breaks the first run of the script no longer applies. This solution works with either 1.03b or 1.04b and is simple to install (see the copy/paste sketch below). It should be considered required for a virtual system running 6.2.3, and it won't hurt anything if installed or ported to another environment.
FixSynoboot.sh
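The install steps from above as a copy/paste sketch (run as root; assumes FixSynoboot.sh was downloaded to the current directory):

cp FixSynoboot.sh /usr/local/etc/rc.d/
chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
# after the next reboot, confirm the devices exist
ls /dev/synoboot*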
    1 point
  4. The link probably just isn't opening for you. On the board you have the normal PCIe slot, but you also have a mini-PCIe slot, which is normally where the Wi-Fi card is plugged in; an Ethernet or a disk controller card can go there too. You have two options. Find a mini-PCIe card with an Intel chip, use that as the network card, and put the Syba in the standard PCIe slot:
https://www.amazon.es/dp/B0788DG7NQ/ref=cm_sw_r_wa_apa_i_2EVNEbVVTPKN1
https://www.amazon.es/dp/B00AZ9T3OU/ref=cm_sw_r_wa_apa_i_QLVNEb6NFXYVM
Or the other way around: find a storage card that works for you, for example this one, although I don't know whether the ASM1061 chip is supported (the card joakings mentions uses it, so I suppose it is):
https://www.amazon.es/dp/B084T1CFWT/ref=cm_sw_r_wa_apa_i_8RVNEbYEVMRR7
(Getting more than two ports there is pointless, because that bus only has bandwidth for two.) Then put the Intel network card we discussed in several threads into the PCIe slot.
Sent from my Mi A2 via Tapatalk
    1 point
  5. Drive Server, formerly known as Cloud Station Server, plus the client on your phone
    1 point
  6. What do you have plugged into that slot? The configuration I currently have in my machine is the following:
Software: Jun's 1.03b loader with DS3617, installed with DSM 6.2.2.
Motherboard: Q1900-ITX
Two 4 GB RAM modules
In the slot, a network card, so that DSM works correctly with that loader:
https://www.amazon.es/dp/B00BL4PQ9Y/ref=cm_sw_r_other_apa_i_zUuaEb5GMTD05
And since I needed to add more disks, I added this one in the mini PCI-Express port, which works well:
https://es.aliexpress.com/item/32973542696.html?spm=a2g0s.9042311.0.0.7d9b63c0h2ems3
I hope this helps. Regards.
    1 point
  7. The .pat files are the regular DSM OS files from Synology. You can download the DS3615xs, DS3617xs and DS918+ build 24922 versions, which are the ones most commonly referred to in polanskiman's and IG-88's installation guides, from here: https://archive.synology.com/download/DSM/release/6.2.2/24922/ For other versions you can go to the parent directory and select the one you want. Do not go for the 6.2.3 version unless you know what you are doing (e.g. you know your hardware is natively supported by that DSM version, as there isn't a working extra.lzma available for 6.2.3 yet, afaik). Make sure you download the correct one for your bootloader: DSM_DS3615xs_24922.pat or DSM_DS3617xs_24922.pat for the two Bootloader 1.03b versions, or DSM_DS918+_24922.pat for the 1.04b bootloader. Also make sure you read this thread for upgrading to/installing DSM 6.2.2 using Bootloader 1.04b/1.03b:
    1 point
  8. No, but it's a well-known board. I have a 3700-ITX with ESXi and DSM virtualized. That network card won't work for you though; it's a Realtek.
Sent from my Mi A2 via Tapatalk
    1 point
  9. With the Q1900, the 1.04 loader shouldn't work, and neither should the 918+ image, because of the CPU.
Sent from my Mi A2 via Tapatalk
    1 point
  10. I guess that's the same as when using a real 3615/17: the Mellanox drivers are not in rd.gz, only in the DSM .pat file (the system image), so that would be seen as normal behavior. When newer Mellanox drivers are added and loaded from extra.lzma, installing over that 10G connection will work, but I guess that's the kind of luxury most people won't need.
    1 point