XPEnology Community

Everything posted by IG-88

  1. You should follow the guide below; it also covers 6.0 -> 6.1. As 6.1 has fewer drivers available, you should follow sbv3000's advice and try with an empty disk to see if the hardware works. IMHO you don't need to start with 6.1.(0); in my recent test installs I used 6.1.5 directly without problems, and it will also update the USB flash drive when installing the newer version.
  2. @zOnDeR @BonanzaCreek If it's not about the DSM 6.1 driver extension anymore, then please open a new thread in DSM Installation. @BonanzaCreek If you still want to try with DSM 6.1, you can insert a different NIC and install with that one; after installation you can check the log for what it says about the internal NIC and we can see what we can do about it.
  3. Did you download the extra.lzma from the 6.0 thread (polanskiman's extra.lzma) and insert it into jun's 1.01 loader? "did not help" is not much to go on, so it's hard to give advice.
  4. https://pci-ids.ucw.cz/read/PC/14e4/1648 NetXtreme BCM5704 Gigabit Ethernet. It should work; the fastest solution would be using a different NIC and, after the install, checking the log to see what's going on with the Broadcom driver. We have already had people using it successfully.
  5. Thanks, that was my point (but I didn't have ESXi at hand to try it; that was just from memory, as I use ESXi/VMware at work and I won't do anything DSM-related at work). VMFS datastore -> virtual disk marked as SSD -> SSD in the DSM VM. So it does work that way without any tinkering with the SSD db file in DSM; it is recognised as an SSD and can be used both ways, as a data volume or as cache.
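For reference, the "virtual disk marked as SSD" step can presumably be done per disk in the VM's .vmx file with the virtualSSD option; a fragment, assuming the disk sits at SCSI node scsi0:1 (adjust the node to your own layout):

```
scsi0:1.virtualSSD = "1"
```

With this set, the guest sees the virtual disk report itself as non-rotational, which is what DSM's SSD detection looks at.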
  6. Then revert the change and test whether error 13 disappears (it should not depend on the NIC changes).
  7. Does the vid/pid still match the USB device? How does grub.cfg look now (did you insert numbers for the MACs)?
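For reference, the relevant grub.cfg entries look roughly like this (all values below are placeholders; use the vid/pid of your own USB stick and your NICs' real or made-up MAC addresses):

```
set vid=0x058F
set pid=0x6387
set netif_num=2
set mac1=0011322CA785
set mac2=0011322CA786
```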
  8. In what way does this help with DSM/XPEnology, besides knowing the card works? The driver you have on DSM (from Synology) does support the card in question and found the card when the driver was loaded; I'd say that rules out driver problems with the plain mlx4*.ko file for the hardware.
  9. As long as no one fixes the Synology-modded kernel source (it can be downloaded), it will stay that way (at least for 6.1 and kernel 3.10.102). I'm not a coder; I can deal with diff files if someone creates a fix and is willing to help. If someone gives us a newer model based on kernel 4.4 it could be different; maybe Synology did not make that many changes there. Also, I would not count too much on DSM 6.2, as it seems to come with signed drivers. Even if the old way of loading DSM without the original hardware still works, without additional drivers it would only be useful to people whose hardware uses drivers that Synology ships too - but that's speculation; maybe there will also be a way to load additional drivers. In your case, DSM 6.0 and jun's loader 1.01 might be a solution; the extra.lzma for this version contains sata_sil.ko.
  10. It only makes sense to test this if changing grub.cfg fails. If it's about grub.cfg, then set it for one card; it will make no difference whether there are one or two more ports. From the dmesg it looks like the driver loads, so the problem might not be the driver itself.
  11. Not sure what you are referring to. VMware, or rather ESXi, is the OS that has access to the controller through its built-in driver, and through that to the disk(s). You can't give "physical access" to a disk to a VM. The only options (afaik) are to RDM the disk (not the same as direct access; it's just a sector mapping to the disk, not the same as accessing it through the controller, and I guess it will look different to the VM than a real disk) or to pass the controller through to the VM, which gives the VM full control with its own driver and full access to the disks. You don't get things like SMART with RDM inside the VM, but you do get it with controller passthrough.
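For completeness, a physical-compatibility RDM mapping is created on the ESXi shell with vmkfstools -z; a sketch with placeholder paths (the device id and datastore path below are examples, not real values):

```
vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/<datastore>/dsm/rdm.vmdk
```

Using -r instead of -z would create a virtual-compatibility mapping, which hides even more of the physical disk from the VM.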
  12. In the way that it detects the hardware present and does not crash, so maybe something else is missing. I never tried what happens if you insert more NICs and do not change these settings, so yes, try changing them; it can easily be changed back later. If you don't have the real MAC addresses, just make some up; it's not important for testing. I guess so; the model you gave was a 2-port. If they are not present, then the GUI will not show anything about more NICs.
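"Just make some up" can be done safely by generating locally-administered MAC addresses; a minimal sketch (the 02 prefix sets the locally-administered bit, so the address cannot clash with any vendor-assigned one; gen_mac is a hypothetical helper name):

```shell
#!/bin/bash
# Generate a random locally-administered MAC in the colon-free format
# that jun's grub.cfg uses (e.g. set mac2=02A1B2C3D4E5).
gen_mac() {
    printf '02%02X%02X%02X%02X%02X\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
}
gen_mac
```

Paste the result into the matching macN= line in grub.cfg.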
  13. Hi, the easy way is to first install an additional network card and install with that; later you can look in the log to see what is wrong with the other one. That way it is much easier to get at log information; if you want that in the state your box is in right now, you would have to attach a serial cable, which hardly anyone does. The NIC is presumably a Broadcom BCM5723; the tg3.ko driver it needs is in theory already included with jun's 1.02b loader. In principle the N40L does work with 6.1; you can read quite a bit about it here, e.g. https://xpenology.com/forum/topic/7527-hp-n54l-dsm-61-onwards/
  14. No, when used as cache I guess the RAID part will be done under the hood by DSM; you just have to select the drive(s). The KB mentions that only specific drives can be used: https://www.synology.com/en-global/knowledgebase/DSM/help/DSM/StorageManager/genericssdcache I know there is a text file acting as a database of SSDs (support_ssd.db), and I'm pretty sure a virtual SSD created with VMware will not be in there, so selecting virtual disks marked as SSD in an XPEnology VM might not work, or can only work when the disk is in that db. You might search the forum or the internet for further info about that SSD db.
  15. Hello, OK, if this is your introduction to VLANs then maybe this will help you: https://www.thomas-krenn.com/de/wiki/VLAN_Grundlagen
  16. In most cases the onboard NIC is a Broadcom, and jun's loader had some firmware files missing for some of those; try with my extra.lzma.
  17. That's the file format VMware (ESXi) uses when formatting a data partition (multiple VMs and the host can access it at the same time). That's documented in Synology's FAQ; afaik it needs two drives (it will run in RAID1, I guess) for use as cache.
  18. As long as there is no final release of 6.2, you should not expect a loader to be released.
  19. Looks like the driver is working. Did you change the NIC settings in grub.cfg on your USB flash drive? Like this, assuming a one-port NIC (eth0) plus a 2-port Mellanox:
      set netif_num=3
      set mac1=...
      set mac2=...
      set mac3=...
      What is in the log about ethX?
      cat /var/log/dmesg | grep eth1
      cat /var/log/dmesg | grep eth2
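A quick way to sanity-check such an edit is to verify that netif_num matches the number of macN lines. A sketch using a hypothetical sample file (point it at the real grub.cfg on your USB stick instead):

```shell
#!/bin/bash
# Write a hypothetical sample grub.cfg; replace with the real file from the USB stick.
cat > /tmp/grub_sample.cfg <<'EOF'
set netif_num=3
set mac1=0011322CA785
set mac2=0011322CA786
set mac3=0011322CA787
EOF
# Compare the declared NIC count against the number of mac lines present.
num=$(sed -n 's/^set netif_num=//p' /tmp/grub_sample.cfg)
macs=$(grep -c '^set mac[0-9]=' /tmp/grub_sample.cfg)
if [ "$num" = "$macs" ]; then
    echo "ok: $num NICs configured"
else
    echo "mismatch: netif_num=$num but $macs mac lines"
fi
```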
  20. You could format the NVMe as VMFS and use it as a fast virtual disk. In theory you can mark a virtual disk as SSD to make it look like an SSD in the VM; I don't know how DSM reacts to this. Also, you can read this
  21. Just two posts above you can read that 6.1.5 is working; that's the latest 6.1.x version.
  22. Try a live Linux from USB to prove that the chip works with "normal" Linux. In there, see what this gives you (drivers used):
      lspci -k | grep 'Kernel driver'
      At least it's consistent that jun's and my extra.lzma both fail. Try older bootloaders for 6.0/5.2 to see if it works with those (it looks like accessing the web interface before actually installing the *.pat is enough).
  23. NVMe is a completely new way of attaching a device, and it's not supported in the models we can use for XPEnology. I did include a driver for NVMe in my extra.lzma, but using it as cache in DSM needs much more, so it's not supported. You can read here and can experiment if you want to push things further.
  24. Just searched the web for the PCI device vendor: https://pci-ids.ucw.cz/read/PC/15b3
      15b3:673c - MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
      In the DSM 6.1 kernel source, /linux-3.10.x/drivers/net/ethernet/mellanox/mlx4/main.c:
      static DEFINE_PCI_DEVICE_TABLE(mlx4_pci_table) = {
      ...
      { PCI_VDEVICE(MELLANOX, 0x673c), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT25408 "Hermon" QDR PCIe gen2 */
      { PCI_VDEVICE(MELLANOX, 0x6368), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT25408 "Hermon" EN 10GigE */
      { PCI_VDEVICE(MELLANOX, 0x6750), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT25408 "Hermon" EN 10GigE PCIe gen2 */
      { PCI_VDEVICE(MELLANOX, 0x6372), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT25458 ConnectX EN 10GBASE-T 10GigE */
      { PCI_VDEVICE(MELLANOX, 0x675a), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT25458 ConnectX EN 10GBASE-T+Gen2 10GigE */
      { PCI_VDEVICE(MELLANOX, 0x6764), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT26468 ConnectX EN 10GigE PCIe gen2 */
      { PCI_VDEVICE(MELLANOX, 0x6746), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
      { PCI_VDEVICE(MELLANOX, 0x676e), MLX4_PCI_DEV_FORCE_SENSE_PORT }, /* MT26478 ConnectX2 40GigE PCIe gen2 */
      { PCI_VDEVICE(MELLANOX, 0x1002), MLX4_PCI_DEV_IS_VF }, /* MT25400 Family [ConnectX-2 Virtual Function] */
      { PCI_VDEVICE(MELLANOX, 0x1003), 0 }, /* MT27500 Family [ConnectX-3] */
      ...
      So 0x673c is in the table, and Synology seems to use even newer drivers (3.3-1.0.4), as there is also a mlx5 module that is not part of the original kernel, so it should work ootb. I guess your DSM 6.1 is running with the card plugged in, so have a look at /var/log/dmesg and see what it says about the card. The driver is natively part of DSM and should load, so there should be something in the log about it. Mellanox's officially supported cards for the 3.3-1.0.4 driver and the minimum firmware needed can be found here: http://www.mellanox.com/page/mlnx_ofed_matrix?mtag=linux_sw_drivers
  25. The network chip is a Realtek 8168. What extra.lzma did you try? Did you try the IP address in the browser? If you get it going, don't count on the eSATA port (Sil3531); the DSM 6.1 extra.lzma does not have a driver for it.