XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 07/12/2021 in all areas

  1. According to the pci.update file in the driver source, that is just one of the main IDs of the i350; there should be sub-IDs like the ones above. The sub-IDs probably exist because of different PHY chips (I guess). The 00a3 sub-ID, which is not present in the OOTB driver, is the only important one missing, but it is present in the driver from my extra.lzma. If the card does not work OOTB, then try the extra.lzma with the newer driver.
    1 point
  2. To be sure, you would need to know the PCI vendor and product ID. These IDs are supported by the igb.ko driver (5.3.5.39) in the last published extra.lzma:

       17aa 1074  ThinkServer I350-T4 AnyFabric
       8086 0001  Ethernet Server Adapter I350-T4
       8086 0003  Ethernet Server Adapter I350-T4
       8086 00a1  Ethernet Server Adapter I350-T4
       8086 00a3  Ethernet Server Adapter I350-T4
       8086 5001  Ethernet Server Adapter I350-T4

     A more recent 2021 5.5.2 igb driver has one I350-T4 entry more:

       8086 00aa  Ethernet Network Adapter I350-T4 for OCP NIC 3.0

     but that is a special version for the OpenCompute 3.0 spec, not relevant for your problem. So I guess it should work: if not with Synology's own driver in 6.2.3 U3 (igb.ko version 5.3.5.3, source ~5 years old?), then at least with the extra.lzma's newer driver. I had v5.3.5.4 around, and that supports the following IDs:

       17aa 1074  Lenovo ThinkServer I350-T4 AnyFabric
       8086 0001  Ethernet Server Adapter I350-T4
       8086 00a1  Ethernet Server Adapter I350-T4
       8086 5001  Ethernet Server Adapter I350-T4

     That is what can be expected to work OOTB with 3617 6.2.3 U3; at least the extra.lzma should work if you are unlucky and it is an 8086:00a3.
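     The IDs above can be read off your own card with lspci from pciutils. A minimal sketch of extracting the subsystem ID (the sample line here is a made-up I350-T4 entry for illustration; on a real box you would feed `lspci -nnv` output through the same sed):

     ```shell
     #!/bin/sh
     # Pull the [subvendor:subdevice] pair out of an lspci -nnv "Subsystem:" line.
     # sample_line is a hypothetical I350-T4 entry; replace it with real output from:
     #   lspci -nnv | grep -A1 'Ethernet controller'
     sample_line='Subsystem: Intel Corporation Ethernet Server Adapter I350-T4 [8086:00a3]'
     sub_id=$(printf '%s\n' "$sample_line" \
       | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\]$/\1/p')
     echo "$sub_id"   # prints 8086:00a3
     ```

     An ID of 8086:00a3 would mean the card needs the extra.lzma driver, per the lists above.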
    1 point
  3. https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/ This ought to help with your research. Short answer: it should work fine.
    1 point
  4. I have this one: https://www.amazon.de/-/en/10Gtek®-Gigabit-PCIE-Network-I350-T4/dp/B01H6NE4X2/ref=sr_1_3?crid=1LDVHK8KG489E&dchild=1&keywords=i350-t4&qid=1626087365&sprefix=I350%2Caps%2C244&sr=8-3 I pass it through (DS918+) and have never had an issue with it. The i350 chip was launched in 2011; I see no reason for it not to be supported.
    1 point
  5. You've done what I would do. You might still try running it with two DIMMs only to see if there is any change.
    1 point
  6. Hello. Quite some time after this thread started, the same thing is happening to me again. I bought another identical server (MicroServer Gen8) and installed DSM 6.20 on it, and the problem came back: with WOL enabled in the BIOS and the "Enable WOL on LAN1" and "Enable WOL on LAN2" options activated, the server only wakes via WOL if, after shutting it down, I also switch off the power strip it is plugged into; if I don't switch the strip off, it won't wake. Well, I managed to fix it again; last time I must have left the file edited after all the testing and didn't even notice. The problem is that XPEnology does not shut the network cards down properly on poweroff. I edited the file /etc/init/poweroff.conf and added, before the line "halt -f $poweroff":

       ifconfig eth0 down
       ifconfig eth1 down

     so the file ends up as:

       description "Synology poweroff"
       start on runlevel 0 and stopped umount-root-fs and umount-root-ok
       stop on runlevel [!0]
       task
       console none
       script
           ## make sure runlevel is not 6 (reboot)
           run_level=`runlevel | awk '{ printf $2 }'` || true
           if [ "x${run_level}" = "x6" ]; then
               echo "incorrect runlevel, skip poweroff"
               exit 0
           fi
           if [ "$INIT_HALT" = "" ]; then
               INIT_HALT=POWEROFF
           fi
           # If INIT_HALT=HALT don't poweroff.
           poweroff="-p"
           if [ "$INIT_HALT" = "HALT" ]; then
               poweroff=""
           fi
           echo PCE6 > /proc/acpi/wakeup
           ifconfig eth0 down
           ifconfig eth1 down
           halt -f $poweroff
       end script
       # vim:ft=upstart

     This way I can wake the server after shutting it down, without having to cut power at the strip first. I hope this helps someone.
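     The manual edit described above can also be scripted. A minimal sketch that rehearses the sed insertion on a throwaway copy (on a real box CONF would be /etc/init/poweroff.conf, edited as root and only if the ifconfig lines are not already there; eth0/eth1 are the interface names from the post):

     ```shell
     #!/bin/sh
     # Rehearse the poweroff.conf edit on a temporary file.
     CONF=$(mktemp)
     # Stand-in for the relevant tail of the Upstart script quoted above.
     printf '%s\n' 'echo PCE6 > /proc/acpi/wakeup' 'halt -f $poweroff' > "$CONF"
     # Insert the interface-down commands immediately before the halt line
     # (GNU sed; the \$ keeps the dollar sign literal in the address pattern).
     sed -i '/halt -f \$poweroff/i\
     ifconfig eth0 down\
     ifconfig eth1 down' "$CONF"
     cat "$CONF"
     ```

     After the edit, `halt -f $poweroff` should be the last line, with both ifconfig commands directly above it.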
    1 point
  7. I have an HP MicroServer N54L. The problem appeared with the fifth drive, which is not in the cage but standing (or lying :))) separately, and the drives started throwing bad sectors. Naturally, my search went down the most complicated path: code tweaks, all sorts of logs, and other nonsense that had nothing to do with the problem. Having thoroughly worn out my brain, I finally decided to check the voltage at the drives, and on the problem drive there was indeed a voltage drop. It turned out to be the Molex-to-SATA adapter. I replaced it and everything has been fine for a couple of years now. The Gen8 circuitry is somewhat different, but if your problem is only in two specific slots, it is worth looking at the cables, the power supply and the rest of the electrics. There are no miracles... :))))
    1 point
  8. Just a recap for those who want to try NVMe cache, because the whole thread is quite messy IMHO. The shell script above no longer works with DSM 6.2.3-25426 Update 2 (on DS918+, that is). At least in my experience it leads to an incorrect state where the two NVMe drives are not recognised as identical and therefore cannot be used for the RAID 1 that a R/W cache requires. The only thing that really works at the moment is copying libsynonvme.so.1 to the right path. So put this file in a public share on your volume (as in my case) or wherever you like, and then with root privileges (sudo -i) put the lib in the right place:

       cp /volume1/public/libsynonvme.so.1 /usr/lib64
       cd /usr/lib64
       chmod 777 libsynonvme.so.1
       shutdown -r now

     and that's it. Storage Manager should then recognise your NVMe drives correctly and let you use them as cache.
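     The copy steps above can be rehearsed safely before touching the real system. A minimal sketch using throwaway directories (on the NAS itself SRC would be /volume1/public and DEST /usr/lib64, run as root; note the post uses chmod 777, though 644 is normally sufficient for a shared library):

     ```shell
     #!/bin/sh
     # Dry-run of the libsynonvme.so.1 install using temporary directories.
     SRC=$(mktemp -d)    # stands in for /volume1/public
     DEST=$(mktemp -d)   # stands in for /usr/lib64
     touch "$SRC/libsynonvme.so.1"         # stand-in for the real library file
     cp "$SRC/libsynonvme.so.1" "$DEST/"
     chmod 644 "$DEST/libsynonvme.so.1"    # the post uses 777; 644 suffices
     ls -l "$DEST/libsynonvme.so.1"
     # On the real box you would follow this with: shutdown -r now
     ```

     Once the commands behave as expected, repeat them with the real paths and reboot.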
    1 point