
Tutorial/Reference: 6.x Loaders and Platforms


flyride



Hi, 

Thank you for posting this very useful matrix.

 

On 10/19/2018 at 11:36 PM, flyride said:
 
Loader | Platform | DSM version | Kernel | DSM /dev/dri support | DSM NVMe cache support | Boot method | NIC 6.2.1+ | CPU Requirement
1.03b | DS3615 or DS3617 | 6.2+ | 3.10 | No | No | Legacy BIOS only | Intel (baremetal), e1000 (ESXi) | Sandy Bridge equivalent or later

 

I noticed that the working NIC for ESXi is e1000e, not e1000, for me.

Could you confirm?

 

(kernel panic log from serial console)


[    8.263832] Modules linked in: e1000(F+) sfc(F) netxen_nic(F) qlge(F) qlcnic(F) qla3xxx(F) pch_gbe(F) ptp_pch(F) sky2(F) skge(F) ipg(F) uio(F) alx(F) atl1c(F) atl1e(F) atl1(F) libphy(F) mii(F) exfat(O) btrfs synoacl_vfs(PO) zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca vxlan fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd mpt3sas(F) mpt2sas(O) megaraid_sas(F) mptctl(F) mptsas(F) mptspi(F) mptscsih(F) mptbase(F) scsi_transport_spi(F) megaraid(F) megaraid_mbox(F) megaraid_mm(F) vmw_pvscsi(F) BusLogic(F) usb_storage xhci_hcd uhci_hcd ohci_hcd(F) ehci_pci ehci_hcd usbcore usb_common ceps(OF) [last unloaded: bromolow_synobios]
[    8.280733] CPU: 0 PID: 5921 Comm: insmod Tainted: PF        C O 3.10.105 #23824
[    8.280767] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/21/2015
[    8.280800] task: ffff8800377d6800 ti: ffff88003afc0000 task.ti: ffff88003afc0000
[    8.280834] RIP: 0010:[<ffffffff81007b22>]  [<ffffffff81007b22>] dma_set_mask+0x22/0x50
[    8.280867] RSP: 0018:ffff88003afc3c00  EFLAGS: 00010202
[    8.280901] RAX: 00000000ffffffff RBX: ffff88003e34b098 RCX: 0000000000000000
[    8.280935] RDX: 00000000ffffffff RSI: 00000000ffffffff RDI: ffff88003e34b098
[    8.280968] RBP: 00000000ffffffff R08: 0000000000000002 R09: ffff88003afc3be4
[    8.281002] R10: 0000000000000000 R11: ffffc9000b79ffff R12: 0000000000000055
[    8.281035] R13: ffff88003d0a8a48 R14: ffff88003d0a8000 R15: 0000000000000000
[    8.281069] FS:  00007f013e1a4700(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
[    8.281103] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[    8.281136] CR2: 00000000ffffffff CR3: 000000003cd0c000 CR4: 00000000001607f0
[    8.281170] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    8.281203] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    8.281237] Stack:
[    8.281271]  ffff88003e34b000 ffff88003e34b098 ffffffffa0c87a13 ffffffff8116b6f1
[    8.281304]  0000000000000000 ffff88003d0a8700 ffff88003d0c3358 ffff88003afc3c50
[    8.281338]  000000008116c00000ffffffff ffff88003e34d748 ffff88003e34b090
[    8.900725] Call Trace:
[    8.900799]  [<ffffffffa0c87a13>] ? e1000_probe+0x373/0xf60 [e1000]
[    8.900868]  [<ffffffff8116b6f1>] ? sysfs_add_one+0x11/0xc0
[    8.900936]  [<ffffffff8129f0d0>] ? pci_device_probe+0x60/0xa0
[    8.901726]  [<ffffffff8130a38e>] ? driver_probe_device+0x7e/0x3e0
[    8.901793]  [<ffffffff8130a7ab>] ? __driver_attach+0x7b/0x80
[    8.901865]  [<ffffffff8130a730>] ? __device_attach+0x40/0x40
[    8.901934]  [<ffffffff813083e3>] ? bus_for_each_dev+0x53/0x90
[    8.902001]  [<ffffffff81309a08>] ? bus_add_driver+0x1c
[    9.518527]  [<ffffffff8130ad98>] ? driver_register+0x68/0x150
[    9.518577]  [<ffffffffa0ca3000>] ? 0xffffffffa0ca2fff
[    9.518617]  [<ffffffffa0ca304c>] ? e1000_init_module+0x4c/0x82 [e1000]
[    9.518653]  [<ffffffff8100038a>] ? do_one_initcall+0xca/0x180
[    9.518693]  [<ffffffff8108b38c>] ? load_module+0x1d0c/0x2360
[    9.518730]  [<ffffffff8128f770>] ? ddebug_proc_write+0xe0/0xe0
[    9.518767]  [<ffffffff810f93a3>] ? vfs_read+0xf3/0x160
[    9.518806]  [<ffffffff8108bb45>] ? SYSC_finit_module+0x75/0xa0
[    9.519708]  [<ffffffff814cadc4>] ? system_call_fastpath+0x22/0x27
[  40] Code: 2e 0f 1f 84 00 00 00 00 00 48 83 bf 00 01 00 00 00 74 36 55 53 48 89 f5 48 89 fb e8 29 ff ff ff 85 c0 74 15 48 8b 83 00 01 00 00 <48> 89 28 31 c0 5b 5d c3 66 0f 1f 44 00 00 b8 fb ff ff ff eb f0
[   10.140685] RIP  [<ffffffff81007b22>] dma_set_mask+0x22/0x50
[   10.140716]  RSP <ffff88003afc3c00>
[   10.140748] CR2: 00000000ffffffff
[   10.140834] ---[ end trace dc095bc6309f2d80 ]---

 

 


On 10/20/2018 at 3:36 AM, flyride said:

In addition to bricked boxes due to inattentive upgrades, there seems to be a surge of questions regarding how to select a DSM platform, version and loader.

I created this table to help navigate the options and current state of the loaders.  While situations rapidly change, this should be correct as of the listed date.

 

6.x Loaders and Platforms as of 20 Oct 2018
Loader | Platform | DSM version | Kernel | DSM /dev/dri support | DSM NVMe cache support | Boot method | NIC 6.2.1+ | CPU Requirement
1.03b | DS3615 or DS3617 | 6.2+ | 3.10 | No | No | Legacy BIOS only | Intel (baremetal), e1000e (ESXi) | Sandy Bridge equivalent or later

I'm running my baremetal on a Westmere Xeon L5630 using the 1.03b loader on 6.2; Westmere was the generation before Sandy Bridge.


On 10/19/2018 at 10:36 PM, flyride said:

In addition to bricked boxes due to inattentive upgrades, there seems to be a surge of questions regarding how to select a DSM platform, version and loader.

I created this table to help navigate the options and current state of the loaders.  While situations rapidly change, this should be correct as of the listed date.

 

6.x Loaders and Platforms as of 20 Oct 2018
Loader | Platform | DSM version | Kernel | DSM /dev/dri support | DSM NVMe cache support | Boot method | NIC 6.2.1+ | CPU Requirement
1.04b | DS918 | 6.2+ | 4.4 | Yes | Yes | EFI or Legacy BIOS | n/a | Haswell equivalent or later
1.03b | DS3615 or DS3617 | 6.2+ | 3.10 | No | No | Legacy BIOS only | Intel (baremetal), e1000e (ESXi) | Sandy Bridge equivalent or later
1.02b | DS916 | 6.0.3 to 6.1.7 | 3.10 | Yes | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later
1.02b | DS3615 | 6.0.3 to 6.1.7 | 3.10 | No | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later
1.02b | DS3617 | 6.0.3 to 6.1.6* | 3.10 | No | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later
1.01 | DS916 or DS3615 or DS3617 | 6.0 to 6.0.2 | 3.10 | No | No | EFI or Legacy BIOS | n/a | Sandy Bridge equivalent or later

* 6.1.7 on DS3617 is incompatible with ESXi installation

 

NOTE: Recent reports have indicated that Westmere (Xeon family development from Nehalem) is functional on loader versions prior to 1.04b. That may imply that Nehalem will work everywhere that Sandy Bridge is reported to work, but this is unverified. Note that Synology did not release a 64-bit Nehalem-derived DiskStation, which is why Sandy Bridge was selected as the earliest compatible microarchitecture.  If you have a Nehalem chip running any 6.x-compatible loader, please advise.

 

Change log

[20/10/2018] Updated for 1.04b loader

[22/11/2018] Corrected e1000 vNIC type to e1000e for 1.03b on ESXi

 

 

Thank you so much for the great matrix. Does the "n/a" in the NIC column mean that any type of NIC selected in ESXi is good? Is that correct?


More specifically, 1.03b and DS3615/17 support only an Intel-type NIC on 6.2.1+ (e1000e on ESXi is an emulated Intel NIC).

 

On earlier versions of DSM, or on other loaders, other NICs may be supported depending on your combination of available drivers and hardware; in those cases the Intel limitation above does not apply.
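For ESXi users, the vNIC type is set in the VM's configuration. Here's a minimal sketch of the relevant .vmx entries; the ethernet0 index and the network name are illustrative assumptions, so match them to your own VM:

ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000e"
ethernet0.networkName = "VM Network"

The same change can be made in the vSphere UI by setting the network adapter type to E1000E rather than E1000 or VMXNET 3.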

 


On 10/19/2018 at 5:36 PM, flyride said:

NOTE: Recent reports have indicated that Westmere (Xeon family development from Nehalem) is functional on loader versions prior to 1.04b. That may imply that Nehalem will work everywhere that Sandy Bridge is reported to work, but this is unverified. Note that Synology did not release a 64-bit Nehalem-derived DiskStation, which is why Sandy Bridge was selected as the earliest compatible microarchitecture.  If you have a Nehalem chip running any 6.x-compatible loader, please advise.

  

 

Xeon X5550 (Nehalem-EP) works fine (DSM 6.2).

Loader: 1.03b

Platform: DS3615 and DS3617

Hardware: Supermicro X8DTH-i, LSI 9211-8i

 

*** DSM 6.2.1 - brick :)

Edited by vasia911

Quote

NOTE: Recent reports have indicated that Westmere (Xeon family development from Nehalem) is functional on loader versions prior to 1.04b. That may imply that Nehalem will work everywhere that Sandy Bridge is reported to work, but this is unverified. Note that Synology did not release a 64-bit Nehalem-derived DiskStation, which is why Sandy Bridge was selected as the earliest compatible microarchitecture.  If you have a Nehalem chip running any 6.x-compatible loader, please advise.

  

- JUN'S LOADER v1.03b - DS3617xs  6.2-23739 Update 2

- Installation type: BAREMETAL - motherboard Gigabyte P55-UD3L REV -- / 8GB Patriot RAM at 1333/1600 / i5-760 (Lynnfield) / Realtek RTL8168/8111 PCI-E Ethernet adapter / Realtek PCIe GbE Family Controller / AMD Radeon HD 7000

 

####  DSM 6.2.1 - brick

 

Thanks for all the work!!

 

Edited by lejurassien

Hi,

I have a Xeon build similar to the DS3617 in all respects (hardware in sig). Looking at the table, I am uncertain what "DSM /dev/dri support" means.

 

I currently run Jun's loader 1.02b with the last version of DSM 6.1. I do have it backed up, but am hoping to build a new USB with the best loader to run the latest compatible DSM 6.2.

 

I'd be grateful for assistance or clarification :).


If I had known about this table earlier, it would have saved me endless hours trying to get 1.03b / DSM 6.2.1 / DS3615xs working. I incorrectly assumed that UEFI would work, and had no idea why 1.02b / DSM 6.1.7 / DS3615xs worked perfectly. Happily, I switched to legacy booting and the 1.03b loader works like a charm! Thanks!!!


8 minutes ago, tariq_niazi said:

I am currently using the DS3617xs DSM platform with 5 x HDD and 4 x SSD. If I switch to DS918, would there be any issues?

 

Also, what is the importance of the recommended processor? Currently, I am running the 1.02b DS3617xs DSM platform on an Intel i7-920 (Bloomfield) processor.

 

Your CPU does not support the instructions that are compiled into the DS918 kernel, so if you change to 1.04b/DS918 it will crash.
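If you want to sanity-check a CPU before attempting 1.04b, a minimal sketch along these lines can be run on any Linux box. The flag names are an assumption: movbe and fma are the Haswell-era instructions most often cited around here as the DS918 blockers, not an official Synology list.

# Minimal sketch: look for Haswell-era CPU flags in /proc/cpuinfo.
# REQUIRED_FLAGS is an assumption based on forum reports, not an
# official Synology requirement list.
REQUIRED_FLAGS = {"movbe", "fma"}

def cpu_flags(path="/proc/cpuinfo"):
    # The "flags" line lists every instruction-set feature the CPU reports.
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

missing = REQUIRED_FLAGS - cpu_flags()
if missing:
    print("Missing %s - 1.04b/DS918 will likely crash" % ", ".join(sorted(missing)))
else:
    print("Haswell-era flags present - 1.04b/DS918 may work")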


On 1/23/2019 at 7:14 AM, kazuni said:

Does 1.04b support Mellanox ConnectX-2/3 or QLogic 10GbE SFP+ cards?

 

10GbE driver support is partially removed from the DS918 image, which is what the 1.04b loader supports.  I am compiling a complete driver/card inventory for DS918 on 6.2.1 and will post it when complete.  I did one for DS3615 on 6.1.7 (1.02b loader) and will repeat it for DS3615 on 6.2.1 (using the 1.03b loader).

 

If you want to be absolutely certain these 10GbE cards will work, use the 1.02b loader with DS3615 DSM 6.1.7.  However, I would expect the 1.03b loader with DS3615 DSM 6.2.1 to work, provided that there is also an Intel 1Gbps card in the system supported by the e1000e driver.
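As a quick way to see which kernel driver, if any, has bound each NIC once the box is up, here is a minimal sketch over standard Linux sysfs (the interface names on your system will differ):

# Minimal sketch: print the kernel driver bound to each network interface,
# read from standard Linux sysfs. A card whose driver was dropped from the
# DSM image simply won't appear here as a working interface.
import os

for iface in sorted(os.listdir("/sys/class/net")):
    drv = "/sys/class/net/%s/device/driver" % iface
    if os.path.islink(drv):
        # The symlink target names the bound driver, e.g. e1000e or mlx4_core.
        print(iface, "->", os.path.basename(os.readlink(drv)))
    else:
        print(iface, "-> no device driver (virtual interface)")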


Thanks for the quick response.  I am indeed on 6.1.7u4 with 1.02b baremetal.

 

I have a QNAP 4-bay NAS that works fine with ESXi on 1.04b using the vmxnet3 driver, which is my current solution, since I've tested ConnectX-2/3 cards and they won't come up even with netif_num and mac modified.  I believe the drivers are not present (I have yet to try an X520-based card; Synology uses Intel-based SFP+ cards, I believe).


Synology has used Mellanox, Aquantia and Tehuti as OEM suppliers of 10GbE cards.  However, Intel cards and those of a number of other vendors are also supported on the 3615 platform.

FMI: https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-on-ds3615/

 

DS918 hardware has no provision for a PCIe add-in 10GbE card, and Syno did not include many 10GbE drivers in the DSM distro, presumably because they weren't necessary for its customers.  In order to get those drivers online you would need to use an extra.lzma solution, which appears to be problematic on 1.04b right now.

 

My own primary production XPE install uses 1.02b/DS3615/6.1.7 with Mellanox 10GbE, and I have not moved to 6.2.1 and DS918 for this reason (and because there isn't a really compelling reason from a feature standpoint to do so).

