XPEnology Community

DSM 6.2 Loader


jun


Just now, dp64 said:

 

Ah ok, maybe in that case I'll consider leaving it alone then; it's more of a cosmetic thing really.

Regarding hiding the 50MB drive: after some extensive testing, I noticed that if you hide the 50MB drive by moving its SATA port in the VM settings to a higher number before installing DSM fresh, the installation fails with (I think) error 13, stating that the file is probably corrupt. If it is set to a lower port number, however, the initial installation goes through smoothly, no problems. This got me thinking: if the 50MB drive is set back to a higher port number in the VM settings (where it is hidden) during future security updates, could subsequent DSM updates/security patches fail to install? Meaning I'd have to "reveal" the 50MB drive before an update and then hide it again?

Try it and see. Install some older version and then update to a newer one.


I made the mistake of upgrading from 6.1.7 to 6.2.1 bare-metal on an N54L with the 1.03b loader (DS3615xs).

 

-Hardware: HP N54L baremetal

-Network: HP NC364T 4-port NIC, OVS bond working

-DSM version: 6.1.7 

 

I updated with the 1.03b bootloader and migrated the installation, and after the reboot: nothing. The N54L doesn't have a serial port to debug with :(

 

So instead of giving up, I unplugged the disk and started a fresh installation:

-DSM: DSM_DS3617xs_23739

-Bootloader:  1.03b 

-Hardware: HP N54L baremetal

-Network: HP NC364T 4-port NIC and the integrated one.

 

Everything worked fine, OVS with the bond too. I upgraded to 6.2.1-23824; the integrated NIC disappeared, but the bond kept working fine.

 

So my hardware can handle upgrading from 6.2 to 6.2.1...

 

Now I wanted to recover my original installation, but I couldn't get anything from the NICs (I inspected traffic from them with port mirroring, Wireshark, and so on).

 

Right now I don't have a more modern box here, or one with 3.5" bays. So I installed Proxmox on the N54L and passed the drives through.

 

-DSM: DSM_DS3615xs_23824   (Migrate)

-Bootloader:  1.03b 

-Hardware: HP N54L PROXMOX VM

-Network: E1000 and VMXNET (this server doesn't have VT-d)

 

I migrated the DSM install, fired up a console, and got into DSM without a problem: I can browse my mounted volumes and check the files, but I can't get networking (the E1000 on Proxmox didn't work on 6.2.1). So I copied these files from my working DSM 6.2.1 bare-metal install onto the disk:

 

Quote

/etc/sysconfig/network-scripts --> All files in that directory
/etc/rc.network
/etc/rc.network_routing
/etc/synoinfo.conf
/usr/syno/etc/synoovs --> All files in that directory
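A sketch of that copy step, for anyone attempting the same rescue (hedged: the `copy_net_cfg` helper and the mount points are hypothetical; only the file paths come from the list above):

```shell
#!/bin/sh
# Hypothetical helper: copy the network config files listed above from a
# working DSM root ($1) onto a non-booting one ($2). Mount both roots first.
copy_net_cfg() {
    src=$1; dst=$2
    for f in etc/rc.network etc/rc.network_routing etc/synoinfo.conf; do
        cp -a "$src/$f" "$dst/$f"
    done
    # The two directories copied wholesale:
    cp -a "$src/etc/sysconfig/network-scripts/." "$dst/etc/sysconfig/network-scripts/"
    cp -a "$src/usr/syno/etc/synoovs/." "$dst/usr/syno/etc/synoovs/"
}
# e.g.: copy_net_cfg /mnt/dsm_working /mnt/dsm_broken
```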

 

But no luck getting anything from the network card, even after removing all the configuration.

 

I'm stuck trying to get DSM 6.2.1 back onto the N54L, and without a console I don't know if it's a kernel panic or what.

 

Anything to try?


Successfully ran loader 1.04b DS918+ on VMware Workstation.

 

With the default e1000 virtual NIC, no NAS device is found after booting. After changing the VM's NIC setting from ethernet0.virtualDev = "e1000" to ethernet0.virtualDev = "vlance", the NAS can be found right away and the online install of DSM 6.2.1 works normally.
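For reference, this is the .vmx setting in question (a config fragment, taken from the post; edit the VM's .vmx file while the VM is powered off):

```
ethernet0.virtualDev = "vlance"
```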

 

But the network then only runs at 10/100. If you want to test it, you can try this. Adding an extra e1000 card does not work either, so the problem is probably the combination of loader 1.04b and the preset e1000.


Dear Xpenologists,

I took 1.04b loader described at https://xpenology.com/forum/topic/13420-загрузчик-104b-для-918-для-621-dsm/

and I am trying to start it as a virtual machine under Ubuntu 18.04 LTS (QEMU/KVM v2.11, virt-manager v1.5.1, OVMF 20180205, and so on; all packages as included with Ubuntu 18.04 and its updates, without third-party repositories)

I learned how to generate MAC addresses corresponding to a DSM hardware serial number (actually I have a real s/n & MAC, just from a non-Intel-CPU box)

I learned that under QEMU/KVM, my USB boot drive PID is 0x46f4 and VID is 0x0001

I dutifully recorded that into grub.cfg (which I accessed via losetup -P && mount), as well as into the VM's .xml description (using virsh edit)
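That grub.cfg edit can be scripted; below is a hedged sketch (the `patch_vidpid` helper is hypothetical, the `set vid=`/`set pid=` key names match Jun's loader grub.cfg, and the ID values are the ones reported in this post):

```shell
#!/bin/sh
# Sketch: patch the USB vid/pid in a mounted copy of the loader's grub.cfg.
# Mounting the image (losetup -P plus mounting the first partition) is left
# to the reader.
patch_vidpid() {
    # $1: path to grub.cfg
    sed -i -e 's/^set vid=.*/set vid=0x0001/' \
           -e 's/^set pid=.*/set pid=0x46f4/' "$1"
}
# e.g.: patch_vidpid /mnt/synoboot/grub/grub.cfg
```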

I tried to boot with both SeaBIOS and OVMF/UEFI. I also tried each of the three GRUB menu entries (baremetal install, baremetal reinstall, ESXi install)

 

and despite all these efforts, I still get the same message:

"Happy hacking
Screen will stop updating shortly, please open http://find.synology.com to continue"

 

no DHCP IP is requested from this MAC (unlike when booting an Ubuntu Server 18.04 ISO image in the same VM)

 

my CPU is an Intel(R) Celeron(R) J1900 @ 1.99GHz (from /proc/cpuinfo); could this be the problem? Should I revert to loader 1.03b and the DS3615 image instead?

Thanks in advance for any useful responses from any kind soul on my topic.

Edited by Wladimir Mutel

Just now, jeannotmer said:

Thanks

 

So the only thing I can do is wait for a new version of the loader, or is it definitely dead and I need to buy a new bare-metal server?

I’d guess there will be an update for DS3615/17; it’s the most stable version, after all. You’re not really missing anything. In fact, 1.03b can work with DSM 6.2.1, at least if you have an Intel NIC. On my hardware it works just fine, for example.


Feel free to move this if it isn't in the right spot. I was running 6.2 with the 1.03b loader (DS3617) on a dual-Xeon X5600 Supermicro box (Tylersburg chipset) and recently upgraded to 6.2.1. Post-upgrade I'm getting a kernel panic around ehci-pci. Has anyone run into this, or have any ideas?

 

patching file etc/rc
patching file etc/synoinfo.conf
Hunk #1 FAILED at 261.
1 out of 1 hunk FAILED -- saving rejects to file etc/synoinfo.conf.rej
patching file linuxrc.syno
Hunk #3 succeeded at 552 (offset 1 line).
patching file usr/sbin/init.post
cat: can't open '/etc/synoinfo_override.conf': No such file or directory
START /linuxrc.syno
Insert Marvell 1475 SATA controller driver
Insert basic USB modules...
:: Loading module usb-common ... [  OK  ]
:: Loading module usbcore ... [  OK  ]
:: Loading module ehci-hcd ... [  OK  ]
:: Loading modul[    1.249619] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
[    1.257466] BUG: unable to handle kernel paging request at ffff880c02784260
[    1.264447] IP: [<ffff880c02784260>] 0xffff880c0278425f
[    1.269688] PGD 1ac2067 PUD 8000000c000000e3
[    1.274081] Oops: 0011 [#1] SMP
[    1.277346] Modules linked in: ehci_pci(F+) ehci_hcd(F) usbcore usb_common mv14xx(O) tthphahhpxqlhpx(OF)
[    1.286936] CPU: 8 PID: 3548 Comm: insmod Tainted: GF          O 3.10.105 #23824
[    1.294323] Hardware name: Supermicro X8DAH/X8DAH, BIOS 2.1        12/30/2011
[    1.301445] task: ffff880c02706800 ti: ffff880bfdb88000 task.ti: ffff880bfdb88000
[    1.308921] RIP: 0010:[<ffff880c02784260>]  [<ffff880c02784260>] 0xffff880c0278425f
[    1.316579] RSP: 0018:ffff880bfdb8bb78  EFLAGS: 00010046
[    1.321887] RAX: ffff880c0277ae10 RBX: 0000000000000006 RCX: 0000000000000002
[    1.329009] RDX: 0000000000000006 RSI: 0000000000000000 RDI: ffff880c02797800
[    1.336127] RBP: ffff880c02797800 R08: ffff880bfdb8bb84 R09: 000000000000fffb
[    1.343248] R10: 0000000000000000 R11: 0000000000000027 R12: 0000000000000000
[    1.350369] R13: ffff880bfdb8bbbe R14: 0000000000000246 R15: ffffffffa00ebb40
[    1.357491] FS:  00007f4496161700(0000) GS:ffff880c3fd00000(0000) knlGS:0000000000000000
[    1.365565] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.371298] CR2: ffff880c02784260 CR3: 0000000bfdbd8000 CR4: 00000000000207e0
[    1.378418] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.385539] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.392659] Stack:
[    1.394670]  ffffffff81292f25 00000000fdb8bbd8 0000000000000015 000000000000000a
[    1.402119]  ffff880c02796ff8 0000000000000000 ffff880c02797090 ffffffff81298046
[    1.409570]  0000000000000001 ffff880c02796ff8 ffffffff8129808b ffff880bfdabd800
[    1.417018] Call Trace:
[    1.419466]  [<ffffffff81292f25>] ? pci_bus_read_config_word+0x65/0x90
e ehci-pci[    1.425987]  [<ffffffff81298046>] ? __pci_bus_find_cap_start+0x16/0x40
[    1.433365]  [<ffffffff8129808b>] ? pci_find_capability+0x1b/0x50
[    1.439449]  [<ffffffffa00eb09d>] ? ehci_pci_setup+0x9d/0x5a0 [ehci_pci]
[    1.446145]  [<ffffffffa00aac4d>] ? usb_add_hcd+0x1bd/0x660 [usbcore]
[    1.452580]  [<ffffffffa00b99d3>] ? usb_hcd_pci_probe+0x363/0x410 [usbcore]
[    1.459538]  [<ffffffff8129bd80>] ? pci_device_probe+0x60/0xa0
[    1.465366]  [<ffffffff8130575a>] ? really_probe+0x5a/0x220
[    1.470935]  [<ffffffff813059e1>] ? __driver_attach+0x81/0x90
[    1.476676]  [<ffffffff81305960>] ? __device_attach+0x40/0x40
[    1.482416]  [<ffffffff81303a53>] ? bus_for_each_dev+0x53/0x90
[    1.488243]  [<ffffffff81304ef8>] ? bus_add_driver+0x158/0x250
[    1.494072]  [<ffffffffa00ed000>] ? 0xffffffffa00ecfff
[    1.499201]  [<ffffffff81305fe8>] ? driver_register+0x68/0x150
[    1.505031]  [<ffffffffa00ed000>] ? 0xffffffffa00ecfff
[    1.510177]  [<ffffffff810003aa>] ? do_one_initcall+0xea/0x140
[    1.516008]  [<ffffffff8108baf4>] ? load_module+0x1a04/0x2120
[    1.521751]  [<ffffffff81088cb0>] ? store_uevent+0x40/0x40
[    1.527232]  [<ffffffff8108c2a1>] ? SYSC_init_module+0x91/0xc0
[    1.533065]  [<ffffffff814c0dc4>] ? system_call_fastpath+0x22/0x27
[    1.539238]  [<ffffffff814c0d11>] ? system_call_after_swapgs+0xae/0x13f
[    1.545847] Code: 88 ff ff 00 ae 77 02 0c 88 ff ff 70 63 69 30 30 30 30 3a 30 30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <70> 63 69 30 30 30 30 3a 30 30 00 00 00 00 00 00 00 00 00 00 00
[    1.565788] RIP  [<ffff880c02784260>] 0xffff880c0278425f
[    1.571107]  RSP <ffff880bfdb8bb78>
[    1.574590] CR2: ffff880c02784260
[    1.577899] ---[ end trace 76b8d46e138e2d6c ]---
 ... [FAILED]
:: Loading modul[    1.588113] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
e ohci-hcd ... [  OK  ]
:: Loading module uhci-hcd[    1.599906] uhci_hcd: USB Universal Host Controller Interface driver
[    1.934848] tsc: Refined TSC clocksource calibration: 2533.422 MHz
[    1.941026] Switching to clocksource tsc

 


1 hour ago, Ikyo said:

Any news about a loader for the older generation CPUs?

 

The issue isn't the loader, it's the DSM Linux kernel.  XPEnology uses DSM images direct from Synology, so you get what they build.

 

DSM 6 on DS3615/DS3617/DS916 requires 64-bit and Nehalem.  DSM 6.2 on DS918 requires 64-bit and Haswell.

 

If you need older than that, you will need to stay on DSM 5.


9 minutes ago, flyride said:

 

The issue isn't the loader, it's the DSM Linux kernel.  XPEnology uses DSM images direct from Synology, so you get what they build.

 

DSM 6 on DS3615/DS3617/DS916 requires 64-bit and Nehalem.  DSM 6.2 on DS918 requires 64-bit and Haswell.

 

If you need older than that, you will need to stay on DSM 5.

 

6.2 works on the 3615xs with the 1.03b loader, but 6.2.1 does not seem to. And I have an Ivy Bridge CPU... So did they change the kernel again between 6.2 and 6.2.1, such that 6.2.1 requires Haswell?


Hi guys, 

I have looked through several pages, but I still don't understand whether it's possible to upgrade my Gen8 to 6.2. 😃

If yes, which loader should be used?

Will the setup with the 3615xs 1.03b loader work for me?

6.2.1 doesn't work with my CPU, right? It's also Ivy Bridge.

Thank you!


Hello,

 

I got quicknick's 6.1.x loader running; now I want to update to Jun's version for DSM 6.2, but I have a problem.

 

I can't find the IP when I want to install DSM.

I start with the DS3617xs 6.2 loader and it boots, but I can't get an IP address.

 

My motherboard is an MSI Z170 Pro Gaming Carbon with an I219-V chipset, which is supposed to be working.

 

Anyone got an idea why I can't get an IP?


2 hours ago, ilovepancakes said:

6.2 works on 3615xs 1.03b loader but 6.2.1 does not seem to work. And I have an Ivy Bridge CPU... So did they change the kernel again between 6.2 and 6.2.1 and 6.2.1 requires Haswell?

 

I just booted up a DS3615 image on DSM 6.2.1 patched to latest.

 

Kernel version on 6.2 on DS918 - 4.4 (known to require Haswell)

Kernel version on 6.1 on DS3615 - 3.10 (known to work on Nehalem or later)

Kernel version on 6.2.1 on DS3615 - 3.10.105

 

So Synology definitely did recompile the DS3615/17 kernels for 6.2. 3.10.105 is the LTS release of the 3.10 kernel, with many security and core driver enhancements, but it can technically run on any x86 architecture unless it is specifically compiled to use processor-specific features. (I have a DS412+ running 6.2.1 with the 3.10.105 kernel, so 6.2.1 itself doesn't inherently require a Haswell CPU.)

 

Is the FMA/AVX2 Haswell instruction requirement compiled into the DS3615 6.2.1 kernel? You're not the first person to report that earlier CPUs might not be supported.  However, it's hard to tell conclusively without ensuring it's not a network or driver problem. The most direct way to find out would be to set up a serial console and see if booting panics on your Ivy Bridge CPU.
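One cheap data point while waiting on a serial console: check whether the CPU even reports the Haswell-era flags. This is only a host-side check on a Linux box (the helper below is a sketch of mine), not proof of what the DSM kernel actually requires:

```shell
#!/bin/sh
# Report whether a cpuinfo "flags" line contains the Haswell-era
# instructions (AVX2 and FMA3) discussed above.
has_haswell_isa() {
    # $1: the "flags" line from /proc/cpuinfo
    echo "$1" | grep -qw avx2 && echo "$1" | grep -qw fma
}
flags=$(grep -m1 '^flags' /proc/cpuinfo)
if has_haswell_isa "$flags"; then
    echo "AVX2/FMA present (Haswell or later)"
else
    echo "no AVX2/FMA (pre-Haswell CPU)"
fi
```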

 

Edited by flyride

5 minutes ago, flyride said:

 

I just booted up a DS3615 image on DSM 6.2.1 patched to latest.

 

Kernel version on 6.2 on DS918 - 4.4 (known to require Haswell)

Kernel version on 6.1 on DS3615 - 3.10 (known to work on Nehalem or later)

Kernel version on 6.2.1 on DS3615 - 3.10.105

 

So Synology definitely did recompile the DS3615/17 kernels for 6.2. 3.10.105 is the LTS release of the 3.10 kernel, with many security and core driver enhancements, but it can technically run on any x86 architecture unless it is specifically compiled to use processor-specific features. (I have a DS412+ running 6.2.1 with the 3.10.105 kernel, so 6.2.1 itself doesn't inherently require a Haswell CPU.)

 

Is the FMA/AVX2 Haswell instruction requirement compiled into the DS3615 6.2.1 kernel? You're not the first person to report that earlier CPUs might not be supported.  However, it's hard to tell conclusively without ensuring it's not a network or driver problem. The most direct way to find out would be to set up a serial console and see if booting panics on your Ivy Bridge CPU.

 

 

What was your process to set up and install 6.2.1 on the 3615? I want to give it another try and see if I can narrow down the issue, so I figured I should follow instructions from someone who did it successfully. What CPU and NIC are you running?


Currently running 6.1 on an older server. I have new hardware on the way.

 

I have the following:

 

Intel i3-8100 Coffee Lake

GIGABYTE Z370 HD3

8GB HyperX Fury DDR4 2600MHz RAM

2 LSI SAS9220-8i cards flashed to IT mode

 

I am currently backing up my SHR array to external drives.

 

I have 13 HDDs that I will be formatting and putting into this. I am currently running 12, with SHR1.

 

I want to use 13 with SHR2. It's over 50TB.

 

Can I use 1.04b with the DS918 image from Synology, running the latest DSM, with more than 4 drives? Will they show up automatically, or am I going to have to make some edits to the synoinfo.conf file?

 

Will it be the same as before with SHR, having to also make the edit to get it set up? I will be spinning up all drives at once.

 

 


1 hour ago, viper359 said:

I want to use 13 with SHR2. It's over 50TB.

 

Can I use 1.04b with the DS918 image from Synology, running the latest DSM, with more than 4 drives? Will they show up automatically, or am I going to have to make some edits to the synoinfo.conf file?

 

1.04b (DS918) has 16 drives preconfigured in synoinfo.conf

1.03b (DS3615/17) has 12 drives just like earlier loaders

 

6.2.1 has less hardware support per platform (extra.lzma has limitations or is not available).  You should pre-validate that LSI works on DS918.  If it does, I would expect you to migrate without much trouble.  Otherwise you will need to use 1.03b and DS3615/17, and edit synoinfo.conf as before. Alternatively, you might be able to move to ESXi and RDM all the drives into a DS918 VM. I haven't actually tested that yet though.
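The "edit synoinfo.conf as before" step could look roughly like this. A hedged sketch only: the `set_max_disks` helper is hypothetical, the 13-drive values are examples, and DSM also keeps a copy of the file in /etc.defaults that reportedly has to match:

```shell
#!/bin/sh
# Sketch: raise the drive count in a synoinfo.conf. internalportcfg is a
# bitmask of internal SATA ports; 13 drives -> lower 13 bits set = 0x1fff.
set_max_disks() {
    # $1: synoinfo.conf path  $2: drive count  $3: internal port bitmask
    sed -i -e "s/^maxdisks=.*/maxdisks=\"$2\"/" \
           -e "s/^internalportcfg=.*/internalportcfg=\"$3\"/" "$1"
}
# e.g.:
# set_max_disks /etc/synoinfo.conf 13 0x1fff
# set_max_disks /etc.defaults/synoinfo.conf 13 0x1fff
```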

 

I guess the other question to ask is: why? If it's working well, why not stay on 6.1.x even on new hardware? It's fully supported for a number of years to come, and there isn't much new in 6.2.1.

Edited by flyride

@flyride

 

The biggest reason is hardware transcoding via Plex. I have several friends and family members who use Plex, and sometimes transcoding happens. Whether software or hardware transcoding is better, I don't know.

 

That is my biggest reason. I also just had a drive die (it crashed, actually; the disk is fine, several tests confirm it; again, another WD, go figure), so I figured it would be the perfect time to start fresh. This server is at least 6 years old.

Edited by viper359

Hi everyone!
I'm new, and I'm writing here because I've already read the topic and looked for an answer to my question, but with no luck.

I'm currently running DSM 6.1.7 "on" DS3615xs (loader 1.02b) with my Gigabyte BRIX (Celeron N3150-based mini-PC): is there any possibility of updating the loader and DSM? I know different kernels support different architectures, but there is little info about Braswell.

Thank you for any help.


16 hours ago, viper359 said:

 ...I just had a drive die (Crashed actually, the disk is fine, several tests confirm, again, another WD, go figure) ...

 

That caught my eye. I have had terrible experiences lately with WD Red drives crashing.

 

Phase 1 of my xpenology career: had 4 WD Red 1/2 TB drives in an old Intel SS4200-E NAS enclosure, and never had an issue for probably 4 years. Not a single drive crash.

Phase 2 of my xpenology career: had 4 WD Red 2/3 TB drives in a GA Z77X-based system, and dropped a drive a month. About every month DSM would complain that a drive had crashed. Third-party tests on the drives never seemed to indicate anything was really wrong with them, but from DSM's perspective they were just bad. I must have RMA'd about 8 WD drives over a 6-8 month period. When I built this system I set it up as btrfs RAID 5, but after a little while I rebuilt it as RAID 5 (non-btrfs). I re-evaluated the PSU, the SATA connections to the motherboard, the BIOS version and settings, etc. Drives would routinely crash with just a few hundred hours on them. I think I experimented with several generations of bootloader, e.g. Jun 1.01b/1.02, which means different generations of Synology-spoofed hardware (3615, 3617, 916, if I'm recalling those models right). The system lasted about 8 months.

Phase 3 of my xpenology career: 4 WD Red 2/3 TB drives in a GA Z170X-based system. So far this system has been up about 2 months, maybe 3, and it did lose a drive, but it was a legitimately old drive that could have legitimately crashed (out of warranty). It was 1.03b/3615 for a while; now it's 1.04b/918+ (RAID 5, non-btrfs). I agonized over the 3615 to 918 upgrade, but the system went unbootable and the cleanest path forward seemed to be the 1.04b/918+ bootloader. All I really do on these systems is Plex. I have a real Syno NAS that I use for everything else, but it just doesn't support Plex transcoding well.

 

I don't really have any help to offer; just, seeing your experience with a crashed drive made me recall my nightmare of drive crashes. I still don't know what was going on, or whether the problem is going to return in my current build.


Side information: I did a manual BIOS update of my ASRock J3455-ITX board to the latest two versions on their website and tried the 1.03b loader for fun. No change, still not working, "as expected". Funny: there were two new BIOS versions, both newer than the one installed, but Internet Flash claimed there was no newer BIOS available...

 

 

Edited by Robert Mitchum

Hello everyone, I need your help.

 

I was running DSM 6.1.7 with Jun's loader 1.02b DS3617xs bare-metal. I'm migrating now to DSM 6.2 with 1.03b DS3617xs. I did all the steps in the tutorial using a 2nd USB drive (setting up VID/PID, MAC address, serial number), so I can keep the original one with the old DSM, just in case. I was able to boot my server to do the migration, and it seemed to install properly and all. The last step of the migration is to restart, but unfortunately after that I cannot find my server anymore.

 

I tried with another USB drive (a 3rd one), in case the 2nd one was defective or something. I also made sure to put in the right VID/PID. Unfortunately I cannot detect the server at all to reinstall, even if I go to the bootloader and select the second option (Reinstall).

 

What should I do?

Edited by Masrawy

Thanks, it works; but is there any way to use more than 2 LAN ports?

 

I have a PCI 4-port gigabit network card (Realtek 8111), and only one of its ports plus the mainboard LAN port work, but not all 5.

I found "maxlanports=2" in the synoinfo.conf file (/etc/synoinfo.conf) and changed it to 5, but it didn't work.
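A hedged guess at why the edit didn't stick: DSM is known to restore /etc/synoinfo.conf from the copy in /etc.defaults, so the same change usually has to be made in both files. A sketch (the `set_lan_ports` helper is hypothetical; the key name is as quoted in the post):

```shell
#!/bin/sh
# Sketch: set the LAN port count in a given synoinfo.conf.
set_lan_ports() {
    # $1: synoinfo.conf path  $2: number of LAN ports
    # \{0,1\} tolerates either "maxlanport=" or "maxlanports=" spellings.
    sed -i "s/^maxlanports\{0,1\}=.*/maxlanports=\"$2\"/" "$1"
}
# e.g.:
# set_lan_ports /etc/synoinfo.conf 5
# set_lan_ports /etc.defaults/synoinfo.conf 5
```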

