flyride Posted October 19, 2018 #1 This post was recognized by Polanskiman! flyride was awarded the badge 'Great Content' and 5 points.

When setting up an XPEnology system, you must first select a DSM platform and version. XPEnology supports a few specific DSM platforms that enable certain hardware and software features. All support a minimum of 4 CPU cores, 64GB of RAM, 10GbE network cards and 12-disk arrays.

Once you choose a platform and the desired DSM software version, you must download the correct corresponding loader, which may not be the "newest" loader available. The last 6.x version (6.2.4-25556) is functional only with the TCRP loader. TCRP is very different from Jun's loader. If you want to learn more, or if you are interested in deploying the latest 7.x versions, see the 7.x Loaders and Platforms thread. Be advised that installing 6.2.4 with TCRP is essentially the same procedure as installing 7.x.

Each of these combinations can be run "baremetal" as a stand-alone operating system, OR as a virtual machine within a hypervisor (VMware ESXi is the most popular and best documented, but other hypervisors can be used if desired). Review the table and decision tree below to help you navigate the options.
6.x Loaders and Platforms as of 16-May-2022

| Options Ranked | DSM Platform | DSM Version | Loader | Boot Methods*** | Hardware Transcode Support | NVMe Cache Support | RAIDF1 Support | Oldest CPU Supported | Max CPU Threads | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| 1, 3a | DS918+ | 6.2.0 to 6.2.3-25426 | Jun 1.04b | UEFI, BIOS/CSM | Yes | Yes | No | Haswell** | 8 | 6.2.0 and 6.2.3 OK; 6.2.1/6.2.2 not recommended for new installs* |
| 2, 3b | DS3617xs | 6.2.0 to 6.2.3-25426 | Jun 1.03b | BIOS/CSM only | No | No | Yes | any x86-64 | 16 | 6.2.0 and 6.2.3 OK; 6.2.1/6.2.2 not recommended for new installs* |
| | DS3615xs | 6.2.0 to 6.2.3-25426 | Jun 1.03b | BIOS/CSM only | No | No | Yes | any x86-64 | 8 | 6.2.0 and 6.2.3 OK; 6.2.1/6.2.2 not recommended for new installs* |
| | DS918+ | 6.2.4-25556 | TCRP 0.4.6 | UEFI, BIOS/CSM | Yes | Yes | No | Haswell** | 8 | recommend 7.x instead |
| | DS3615xs | 6.2.4-25556 | TCRP 0.4.6 | UEFI, BIOS/CSM | No | No | Yes | any x86-64 | 8 | recommend 7.x instead |
| | DS916+ | 6.0.3 to 6.1.7 | Jun 1.02b | UEFI, BIOS/CSM | Yes | No | No | Haswell** | 8 | obsolete; use DS918+ instead |
| | DS3617xs | 6.0.3 to 6.1.6 | Jun 1.02b | UEFI, BIOS/CSM | No | No | Yes | any x86-64 | 16 | 6.1.7 may kernel panic on ESXi |
| 4 | DS3615xs | 6.0.3 to 6.1.7 | Jun 1.02b | UEFI, BIOS/CSM | No | No | Yes | any x86-64 | 8 | best compatibility on 6.1.x |

\* 6.2.1 and 6.2.2 have a unique kernel signature that causes issues with most kernel driver modules, including those included in the loader. Hardware compatibility is limited.

\*\* FMA3 instruction support is required. All Haswell-or-later Core processors support it; only a select few Pentiums, and no Celerons, do. Piledriver is believed to be the minimum AMD CPU architecture to support the DS916+ and DS918+ DSM platforms.

\*\*\* If you need an MBR version of the boot loader because your system does not support a modern boot method, follow this procedure.

CURRENT LOADER/PLATFORM RECOMMENDATIONS / SAMPLE DECISION POINTS:

1. DEFAULT install: DS918+ 6.2.3. Also if hardware transcoding or NVMe cache support is desired, or if your system supports only UEFI boot.
   - Prerequisite: Intel Haswell (4th-generation) or newer CPU architecture, or AMD equivalent
   - Configuration: baremetal loader 1.04b, DSM platform DS918+ version 6.2.3
   - Compatibility troubleshooting options: extra.lzma or ESXi
2. ALTERNATE install: DS3617xs 6.2.3. If RAIDF1, 16 threads or best SAS support is desired, or your CPU is too old for DS918+.
   - Prerequisite: USB key boot mode must be set to BIOS/CSM/Legacy Boot
   - Configuration: baremetal loader 1.03b, DSM platform DS3617xs version 6.2.3
   - Compatibility troubleshooting options: extra.lzma, the DS3615xs platform, or ESXi
3. ESXi (or other hypervisor) virtual machine install. Generally, if hardware is unsupported by DSM but works with a hypervisor.
   - Prerequisites: ESXi hardware compatibility; free or full ESXi 6.x or 7.x license
   - Use case examples: virtualize an unsupported NIC; virtualize SAS/NVMe disks and present them as SATA; run other ESXi VMs instead of Synology VMM
   - Option 3a: loader 1.04b, DSM platform DS918+ version 6.2.3
   - Option 3b: loader 1.03b, DSM platform DS3617xs version 6.2.3 (VM must be set to BIOS firmware)
   - Preferred configurations: pass through the SATA controller and disks, and/or configure RDM/RAW disks
4. FALLBACK install: DS3615xs 6.1.7. If you can't get anything else to work.
   - Prerequisite: none
   - Configuration: baremetal loader 1.02b, DSM platform DS3615xs version 6.1.7

SPECIAL NOTE for Intel 8th-generation and later (Coffee Lake, Comet Lake, Ice Lake, etc.) motherboards with embedded Intel network controllers: each time Intel releases a new chipset, it updates the PCI id of the embedded NIC. This means a driver update is required to support it, which may or may not be available via an extra.lzma update. Alternatively, disable the onboard NIC and install a compatible PCIe NIC such as the Intel CT gigabit card.
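Two of the prerequisites above hinge on how the machine boots: the Jun 1.03b loader requires BIOS/CSM/Legacy boot, while a UEFI-only system forces the DS918+ path. On any Linux live environment you can tell which way the current session booted, because the kernel exposes `/sys/firmware/efi` only after a UEFI boot. A minimal sketch; the path argument exists only so the demo below can run against a mock directory (on a real system, call it as `boot_mode /sys`):

```shell
#!/bin/sh
# boot_mode: print "UEFI" if the kernel exposes firmware/efi under the given
# sysfs root, else "BIOS/CSM" (legacy boot, which Jun 1.03b requires).
boot_mode() {
    if [ -d "$1/firmware/efi" ]; then
        echo "UEFI"
    else
        echo "BIOS/CSM"
    fi
}

# Demo against a mock sysfs tree (real usage: boot_mode /sys)
mock=$(mktemp -d)
mkdir -p "$mock/firmware/efi"
boot_mode "$mock"               # prints: UEFI
rmdir "$mock/firmware/efi"
boot_mode "$mock"               # prints: BIOS/CSM
```

Note this reports how the *current* session booted, not everything the firmware supports; many boards can do both, controlled by a CSM/Legacy setting in firmware setup.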
Change log:

[16-May-2022] Added 6.2.4 support via TinyCore RedPill
[29-Dec-2020] Added options ranked to decision tree, added many links, fleshed out descriptions, cleanup
[26-Nov-2020] Removed Linux kernel version, as features are now well defined to inform the platform selection
[11-Aug-2020] Simplified notes due to 6.2.3 functionality; added recommendations/sample decision points
[15-Jan-2020] Added RAIDF1 support information
[12-Jan-2020] Added CPU threads and updated extra.lzma applications
[22-Dec-2018] Added information for Genesys loader build
[10-Dec-2018] Updated earliest processor capability to reflect Nehalem report
[22-Nov-2018] Corrected E1000 vNIC dialect to e1000e for 1.03b on ESXi
[20-Oct-2018] Updated for 1.04b loader
benok Posted November 21, 2018 #2

Hi, thank you for posting this very useful matrix.

On 10/19/2018 at 11:36 PM, flyride said:

| Loader | Platform | DSM version | Kernel | DSM /dev/dri support | DSM NVMe cache support | Boot method | NIC 6.2.1+ | CPU Requirement |
|---|---|---|---|---|---|---|---|---|
| 1.03b | DS3615 or DS3617 | 6.2+ | 3.10 | No | No | Legacy BIOS only | Intel (baremetal), e1000 (ESXi) | Sandy Bridge equivalent or later |

I noticed that the working NIC for ESXi is not e1000 but e1000e for me. Could you confirm? Kernel panic log from the serial console:

[ 8.263832] Modules linked in: e1000(F+) sfc(F) netxen_nic(F) qlge(F) qlcnic(F) qla3xxx(F) pch_gbe(F) ptp_pch(F) sky2(F) skge(F) ipg(F) uio(F) alx(F) atl1c(F) atl1e(F) atl1(F) libphy(F) mii(F) exfat(O) btrfs synoacl_vfs(PO) zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca vxlan fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd mpt3sas(F) mpt2sas(O) megaraid_sas(F) mptctl(F) mptsas(F) mptspi(F) mptscsih(F) mptbase(F) scsi_transport_spi(F) megaraid(F) megaraid_mbox(F) megaraid_mm(F) vmw_pvscsi(F) BusLogic(F) usb_storage xhci_hcd uhci_hcd ohci_hcd(F) ehci_pci ehci_hcd usbcore usb_common ceps(OF) [last unloaded: bromolow_synobios]
[ 8.280733] CPU: 0 PID: 5921 Comm: insmod Tainted: PF C O 3.10.105 #23824
[ 8.280767] Hardware name: VMware, Inc.
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/21/2015
[ 8.280800] task: ffff8800377d6800 ti: ffff88003afc0000 task.ti: ffff88003afc0000
[ 8.280834] RIP: 0010:[<ffffffff81007b22>] [<ffffffff81007b22>] dma_set_mask+0x22/0x50
[ 8.280867] RSP: 0018:ffff88003afc3c00 EFLAGS: 00010202
[ 8.280901] RAX: 00000000ffffffff RBX: ffff88003e34b098 RCX: 0000000000000000
[ 8.280935] RDX: 00000000ffffffff RSI: 00000000ffffffff RDI: ffff88003e34b098
[ 8.280968] RBP: 00000000ffffffff R08: 0000000000000002 R09: ffff88003afc3be4
[ 8.281002] R10: 0000000000000000 R11: ffffc9000b79ffff R12: 0000000000000055
[ 8.281035] R13: ffff88003d0a8a48 R14: ffff88003d0a8000 R15: 0000000000000000
[ 8.281069] FS: 00007f013e1a4700(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
[ 8.281103] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 8.281136] CR2: 00000000ffffffff CR3: 000000003cd0c000 CR4: 00000000001607f0
[ 8.281170] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 8.281203] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 8.281237] Stack:
[ 8.281271] ffff88003e34b000 ffff88003e34b098 ffffffffa0c87a13 ffffffff8116b6f1
[ 8.281304] 0000000000000000 ffff88003d0a8700 ffff88003d0c3358 ffff88003afc3c50
[ 8.281338] 000000008116c00000ffffffff ffff88003e34d748 ffff88003e34b090
[ 8.900725] Call Trace:
[ 8.900799] [<ffffffffa0c87a13>] ? e1000_probe+0x373/0xf60 [e1000]
[ 8.900868] [<ffffffff8116b6f1>] ? sysfs_add_one+0x11/0xc0
[ 8.900936] [<ffffffff8129f0d0>] ? pci_device_probe+0x60/0xa0
[ 8.901726] [<ffffffff8130a38e>] ? driver_probe_device+0x7e/0x3e0
[ 8.901793] [<ffffffff8130a7ab>] ? __driver_attach+0x7b/0x80
[ 8.901865] [<ffffffff8130a730>] ? __device_attach+0x40/0x40
[ 8.901934] [<ffffffff813083e3>] ? bus_for_each_dev+0x53/0x90
[ 8.902001] [<ffffffff81309a08>] ? bus_add_driver+0x1c
[ 9.518527] [<ffffffff8130ad98>] ? driver_register+0x68/0x150
[ 9.518577] [<ffffffffa0ca3000>] ? 0xffffffffa0ca2fff
[ 9.518617] [<ffffffffa0ca304c>] ?
e1000_init_module+0x4c/0x82 [e1000]
[ 9.518653] [<ffffffff8100038a>] ? do_one_initcall+0xca/0x180
[ 9.518693] [<ffffffff8108b38c>] ? load_module+0x1d0c/0x2360
[ 9.518730] [<ffffffff8128f770>] ? ddebug_proc_write+0xe0/0xe0
[ 9.518767] [<ffffffff810f93a3>] ? vfs_read+0xf3/0x160
[ 9.518806] [<ffffffff8108bb45>] ? SYSC_finit_module+0x75/0xa0
[ 9.519708] [<ffffffff814cadc4>] ? system_call_fastpath+0x22/0x27
[ 40] Code: 2e 0f 1f 84 00 00 00 00 00 48 83 bf 00 01 00 00 00 74 36 55 53 48 89 f5 48 89 fb e8 29 ff ff ff 85 c0 74 15 48 8b 83 00 01 00 00 <48> 89 28 31 c0 5b 5d c3 66 0f 1f 44 00 00 b8 fb ff ff ff eb f0
[ 10.140685] RIP [<ffffffff81007b22>] dma_set_mask+0x22/0x50
[ 10.140716] RSP <ffff88003afc3c00>
[ 10.140748] CR2: 00000000ffffffff
[ 10.140834] ---[ end trace dc095bc6309f2d80 ]---
flyride (Author) Posted November 22, 2018 #3

Confirmed: VMXNET and E1000 cause kernel panics; e1000e is OK. Note that this applies only to the specific combination of ESXi, the 1.03b loader, and DSM 6.2+ for DS3615/3617. Good catch, and thank you for pointing it out.
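For anyone hitting this on ESXi: the fix is to set the VM's virtual NIC type to e1000e rather than E1000 or VMXNET 3. In the VM's .vmx file that corresponds to a line like the following (`ethernet0` assumes the first virtual NIC; the same change can be made from the adapter-type dropdown in the vSphere UI):

```
ethernet0.virtualDev = "e1000e"
```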
ed_co Posted November 29, 2018 #4

It would be great to get e1000e working with 1.04b, in order to support both NICs on the H370M-ITX/ac... Is there any way to do that now?
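Whether a given onboard NIC like the H370M-ITX/ac's can work comes down to its PCI vendor:device id, which the loader's driver must claim (see the SPECIAL NOTE in the first post: Intel changes this id with each chipset). `lspci -nn` prints the id in brackets, and a small helper can pull it out for comparison against a driver's supported-id list. A sketch; the sample line below is illustrative, not a guaranteed match for this board (on real hardware, run `lspci -nn | grep -i ethernet`):

```shell
#!/bin/sh
# pci_id: extract the [vendor:device] PCI id from an `lspci -nn` output line.
# The class code (e.g. [0200]) has no colon, so only the vendor:device pair
# matches the pattern below.
pci_id() {
    echo "$1" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
}

# Illustrative sample line (assumed id, for demo only)
line='00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection [8086:15bc]'
pci_id "$line"    # prints: 8086:15bc
```

If the printed id does not appear in the driver's supported list (e.g. in a module's modules.alias entries), no extra.lzma containing that driver build will bind to the NIC.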
Benoire Posted December 3, 2018 #5

On 10/20/2018 at 3:36 AM, flyride said:

In addition to bricked boxes due to inattentive upgrades, there seems to be a surge of questions regarding how to select a DSM platform, version and loader. I created this table to help navigate the options and current state of the loaders. While situations rapidly change, this should be correct as of the listed date.

6.x Loaders and Platforms as of 20 Oct 2018

| Loader | Platform | DSM version | Kernel | DSM /dev/dri support | DSM NVMe cache support | Boot method | NIC 6.2.1+ | CPU Requirement |
|---|---|---|---|---|---|---|---|---|
| 1.03b | DS3615 or DS3617 | 6.2+ | 3.10 | No | No | Legacy BIOS only | Intel (baremetal), e1000e (ESXi) | Sandy Bridge equivalent or later |

I'm running baremetal on a Westmere Xeon L5630 using the 1.03b loader on 6.2; Westmere was the generation before Sandy Bridge.
ZZer00 Posted December 4, 2018 #6

Awesome matrix, thank you! One thing that would be really cool to add is whether each version supports VMXNET3 network adapters. :)
haydibe Posted December 5, 2018 #7

@ZZer00: every DSM version below 6.2.1 does; 6.2.1 does not.
hedwigemk Posted December 9, 2018 #8

On 10/19/2018 at 10:36 PM, flyride said:

In addition to bricked boxes due to inattentive upgrades, there seems to be a surge of questions regarding how to select a DSM platform, version and loader. I created this table to help navigate the options and current state of the loaders. While situations rapidly change, this should be correct as of the listed date.

6.x Loaders and Platforms as of 20 Oct 2018

| Loader | Platform | DSM version | Kernel | DSM /dev/dri support | DSM NVMe cache support | Boot method | NIC 6.2.1+ | CPU Requirement |
|---|---|---|---|---|---|---|---|---|
| 1.04b | DS918 | 6.2+ | 4.4 | Yes | Yes | EFI or Legacy BIOS | n/a | Haswell equivalent or later |
| 1.03b | DS3615 or DS3617 | 6.2+ | 3.10 | No | No | Legacy BIOS only | Intel (baremetal), e1000e (ESXi) | Sandy Bridge equivalent or later |
| 1.02b | DS916 | 6.0.3 to 6.1.7 | 3.10 | Yes | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later |
| 1.02b | DS3615 | 6.0.3 to 6.1.7 | 3.10 | No | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later |
| 1.02b | DS3617 | 6.0.3 to 6.1.6* | 3.10 | No | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later |
| 1.01 | DS916 or DS3615 or DS3617 | 6.0 to 6.0.2 | 3.10 | No | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later |

\* 6.1.7 on DS3617 is incompatible with ESXi installation

NOTE: Recent reports have indicated that Westmere (Xeon family development from Nehalem) is functional on loader versions prior to 1.04b. That may imply that Nehalem will work everywhere that Sandy Bridge is reported to work, but this is unverified. Note that Synology did not release a 64-bit Nehalem-derived DiskStation, which is why Sandy Bridge was selected as the earliest compatible microarchitecture. If you have a Nehalem chip running any 6.x-compatible loader, please advise.

Change log: [20/10/2018] Updated for 1.04b loader; [22/11/2018] Corrected E1000 vNIC dialect to e1000e for 1.03b on ESXi

Thank you so much for the great matrix. Does "n/a" in the NIC column mean that any type of NIC selected in ESXi is good?
Is that correct?
flyride (Author) Posted December 9, 2018 #9

More specifically, 1.03b with DS3615/17 on 6.2.1+ supports only an Intel-type NIC (e1000e on ESXi is an emulated Intel NIC). On earlier versions of DSM, or with other loaders, other NICs may be supported depending on your combination of available drivers and hardware; in those cases the Intel limitation above does not apply.
vasia911 Posted December 10, 2018 #10

On 10/19/2018 at 5:36 PM, flyride said: "NOTE: Recent reports have indicated that Westmere (Xeon family development from Nehalem) is functional on loader versions prior to 1.04b. That may imply that Nehalem will work everywhere that Sandy Bridge is reported to work, but this is unverified. Note that Synology did not release a 64-bit Nehalem-derived DiskStation, which is why Sandy Bridge was selected as the earliest compatible microarchitecture. If you have a Nehalem chip running any 6.x-compatible loader, please advise."

Xeon X5550 (Nehalem-EP) works fine (DSM 6.2).
Loader: 1.03b
Platform: DS3615 and DS3617
Hardware: Supermicro X8DTH-i, LSI 9211-8i
*** DSM 6.2.1 - brick
lejurassien Posted December 13, 2018 #11

Quoting the NOTE above: "If you have a Nehalem chip running any 6.x-compatible loader, please advise."

- Jun's loader v1.03b - DS3617xs 6.2-23739 Update 2
- Installation type: baremetal
- Motherboard: Gigabyte P55-UD3L / RAM: 8GB Patriot at 1333/1600 / CPU: Intel Core i5-760 (Lynnfield)
- NIC: Realtek RTL8168/8111 PCI-E Ethernet adapter (Realtek PCIe GbE Family Controller)
- GPU: AMD Radeon HD 7000

#### DSM 6.2.1 - brick

Thanks for all the work!
kazuni Posted December 21, 2018 #12

Xeon X5670/L5640 (Westmere-EP) works fine (DSM 6.2.1).
Loader: 1.03b
Platform: DS3615 and DS3617
Hardware: Dell PowerEdge R510/610, ESXi
#### DSM 6.2.1 - successful with e1000e and BIOS boot.
kazuni Posted December 22, 2018 #13

Xeon X5670/L5640 (Westmere-EP) works fine (DSM 6.1.7).
Loader: 1.02b
Platform: DS3617
Hardware: Dell PowerEdge R510/610, ESXi
ESXi with vmxnet works with the 6.1.7 .pat on ESXi 6.7u1.
kazuni Posted December 22, 2018 #14

Xeon X5670/L5640 (Westmere-EP) works fine (DSM 6.1.7).
Loader: 1.02b
Platform: DS3617xs
Hardware: Dell PowerEdge R510/610, baremetal
Baremetal mode works with the 6.1.7u2 .pat.
mgrobins Posted January 4, 2019 #15

Hi, I have a Xeon build similar to the DS3617 in all respects (hardware in sig). Looking at the table, I am uncertain what "DSM /dev/dri support" means. I currently run Jun's loader 1.02b with the last version of DSM 6.1. I do have it backed up, but am hoping to build a new USB key with the best loader to run the latest compatible DSM 6.2. I'd be grateful for assistance or clarification. :)
bearcat Posted January 4, 2019 #16

@mgrobins: "DSM /dev/dri support" refers to a folder on your system that will only be present if your system supports hardware transcoding of media in e.g. Video Station/Photo Station/Plex.
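That check can be scripted from an SSH session on the box. A minimal sketch; the path argument exists only so the demo can run against a mock tree (on a real DSM install, call it as `check_dri /dev`):

```shell
#!/bin/sh
# check_dri: report whether the DSM install exposes the /dev/dri device
# nodes that hardware transcoding (Video Station, Plex, etc.) depends on.
check_dri() {
    if [ -e "$1/dri/card0" ]; then
        echo "present: hardware transcoding possible"
    else
        echo "absent: no hardware transcoding on this platform"
    fi
}

# Demo against a mock /dev tree (real usage: check_dri /dev)
mock=$(mktemp -d)
check_dri "$mock"                        # prints the "absent" message
mkdir -p "$mock/dri" && touch "$mock/dri/card0"
check_dri "$mock"                        # prints the "present" message
```

Even when the node exists, transcoding also needs a supported iGPU and a platform/loader combination from the "Hardware Transcode Support: Yes" rows of the table above.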
mgrobins Posted January 4, 2019 #17

Thanks. It's 1.03b for me then, with the DS3617xs setup. :) I tried replying to your other post on root access, but sometimes the web page won't let me reply to topics... I'll work it out. :P
Eric on Fire Posted January 16, 2019 #18

If I had known about this table earlier, it would have saved me endless hours trying to get 1.03b with 6.2.1 on DS3615xs working. I incorrectly assumed that UEFI would work, and had no idea why 1.02b with 6.1.7 on DS3615xs worked perfectly. Happily, I switched to legacy booting and the 1.03b loader works like a charm. Thanks!
tariq_niazi Posted January 20, 2019 #19

I am currently using the DS3617xs DSM platform with 5 x HDD and 4 x SSD. If I switch to DS918, would there be any issues? Also, what is the importance of the recommended processor? Currently I am running 1.02b with the DS3617xs DSM platform on an Intel i7-920 (Bloomfield) processor. Tariq
flyride (Author) Posted January 20, 2019 #20

Your CPU does not support the instructions that are compiled into the DS918 kernel, so if you change to 1.04b/DS918 it will crash.
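The specific requirement behind this is FMA3 (per the table's ** footnote), which shows up as the `fma` flag in /proc/cpuinfo and which Bloomfield predates. A sketch of the check, working on a captured flags line; the two sample strings below are abbreviated, illustrative flag lists, not complete cpuinfo output (on a real machine, feed it `grep -m1 '^flags' /proc/cpuinfo`):

```shell
#!/bin/sh
# has_fma3: succeed if a /proc/cpuinfo flags line contains the `fma` flag
# (FMA3), which the DS918+ kernel requires. Whole-word match, so the
# AMD-only `fma4` flag alone does not count.
has_fma3() {
    echo "$1" | grep -qw fma
}

# Abbreviated sample flags lines (illustrative only)
haswell='flags : fpu sse sse2 avx avx2 fma aes'
bloomfield='flags : fpu sse sse2 ssse3 sse4_1 sse4_2'

for f in "$haswell" "$bloomfield"; do
    if has_fma3 "$f"; then
        echo "DS918+ capable"
    else
        echo "use DS3615xs/DS3617xs"
    fi
done
# prints: DS918+ capable
#         use DS3615xs/DS3617xs
```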
kazuni Posted January 23, 2019 #21

Does 1.04b support Mellanox ConnectX-2/3 or QLogic 10GbE SFP+ cards?
Flotho Posted January 24, 2019 #22

I have loader 1.03b with DS3617 on ESXi 6.7, running on an HP MicroServer Gen8. The newest update is working fine in my situation.
flyride (Author) Posted January 24, 2019 #23

10GbE driver support is partially removed from the DS918 image, which is what the 1.04b loader supports. I am compiling a complete driver/card inventory for DS918 on 6.2.1 and will post it when complete. I did one for DS3615 on 6.1.7 (1.02b loader) and will repeat it for DS3615 on 6.2.1 (1.03b loader). If you want to be absolutely certain that these 10GbE cards will work, use the 1.02b loader with DS3615 DSM 6.1.7. However, I would expect the 1.03b loader with DS3615 DSM 6.2.1 to work, provided there is also an Intel 1Gbps card in the system supported by the e1000e driver.
kazuni Posted January 24, 2019 #24

Thanks for the quick response. I am indeed on 6.1.7u4 with 1.02b baremetal. I have a QNAP 4-bay NAS that works fine under ESXi on 1.04b with the vmxnet3 driver, which is my current solution, since I've tested ConnectX-2/3 cards and they won't come up even with netif_num and the MAC modified. I believe the drivers are not present (I have yet to try an X520-based card, since I believe Synology uses Intel-based SFP+ cards).
flyride (Author) Posted January 24, 2019 #25

Synology has used Mellanox, Aquantia and Tehuti as OEM suppliers of 10GbE cards. However, Intel cards and a number of other vendors are also supported on the 3615 platform. FMI: https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-on-ds3615/

DS918 hardware has no provision for a PCIe add-in 10GbE card, and Synology did not include many 10GbE drivers in the DSM distro, presumably because they weren't necessary for their customers. To get those drivers online you would need an extra.lzma solution, which appears to be problematic on 1.04b right now. My own primary production XPE install uses 1.02b/DS3615/6.1.7 with Mellanox 10GbE, and I have not moved to 6.2.1 and DS918 for this reason (and because there isn't a really compelling feature reason to do so).
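One way to see what a given install actually ships, rather than guessing from card behavior, is to look for the 10GbE driver modules (.ko files) under the modules directory of a running DSM box. A sketch against a mock directory; the module-name list is the usual 10GbE suspects from the panic log earlier in the thread, not an exhaustive or authoritative inventory (on DSM itself, point it at the system's modules directory, e.g. /lib/modules):

```shell
#!/bin/sh
# list_10g: list which common 10GbE NIC modules (.ko) exist in a modules dir.
# Name list is illustrative, not exhaustive.
list_10g() {
    for m in mlx4_en mlx5_core ixgbe i40e atlantic tn40xx be2net qede; do
        [ -e "$1/$m.ko" ] && echo "$m"
    done
    return 0
}

# Demo against a mock tree (on DSM, try: list_10g /lib/modules)
mock=$(mktemp -d)
touch "$mock/ixgbe.ko" "$mock/mlx4_en.ko"
list_10g "$mock"      # prints mlx4_en and ixgbe for this mock
```

A module being present is necessary but not sufficient: it still has to load cleanly against the platform's kernel, which is exactly what breaks on 6.2.1+ with unsigned-driver issues.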