XPEnology Community

DSM 6.2 Loader


jun

Recommended Posts

12 hours ago, Next said:

Hi,

 

I made the USB drive with 1.03b, but when booting I can't find the Synology; it looks like there's no network connection.

Does somebody have a solution for this? It's a bare-metal Synology.

 

I have the same problem and described it a few posts before yours: I have an N4200 board with a Realtek Gigabit PCIe Family NIC (RTL8168/8111 or RTL8169?).

It boots but never appears on the network. When I use the 918+ v1.03a2 loader it boots and is visible on the network, and I can install 6.2, but it's unstable: it needs 10 minutes to boot DSM, crashes, and then you can no longer reinstall.

It would be nice to have either an installable DS3617xs version for the very energy-efficient N4200 mini PC or a stable DS918+ version.


23 hours ago, alese said:

 

Maybe the missing extra.lzma with the additional modules I use on 6.1.7 is the problem, so that the passthrough doesn't work as expected? I think the mptsas or mpt2sas driver is missing. I think I have to wait for additional drivers for 3615.

 

I tried to compile the mpt2sas module for my LSI 2008 (verified working in DSM 6.1.7) and also the Intel e1000e driver, but now I always get an IP of 169.254.x.y. I don't know what the problem is. I used the 22259 branch for bromolow and the DSM 6.2 toolchain for bromolow. Something also happens on the disks, because I always have to recover my DSM 6.1.7 when going back to the 1.02b loader.
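As a side note, 169.254.x.y is the IPv4 link-local (APIPA) range, which a host assigns to itself when it never obtains a DHCP lease; in other words, the NIC driver loads but DHCP fails. This can be checked with Python's standard library:

```python
import ipaddress

# Addresses in 169.254.0.0/16 are link-local: the host picked one itself
# because no DHCP lease was ever received.
addr = ipaddress.ip_address("169.254.12.34")
print(addr.is_link_local)  # True

# A normal DHCP-assigned private address, for contrast:
print(ipaddress.ip_address("192.168.1.10").is_link_local)  # False
```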


On 8/5/2018 at 12:34 PM, jun said:

Hi, options not available in vanilla DSM are hidden in /proc/cmdline.
Here is an example of the disk-order-related options: SasIdxMap=0 DiskIdxMap=080C SataPortMap=4
It works like this: DiskIdxMap makes the first SATA controller start from disk 8, the second from disk 12, etc.
SataPortMap limits the number of disks a SATA controller can have (VMware's virtual SATA controller has 32 ports!).
SasIdxMap then makes the SAS controller start from disk 0; as a bonus, this option should give you stable SAS disk names.
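As an illustrative sketch of the semantics jun describes (the helper functions are hypothetical, and this assumes DiskIdxMap is a string of two-hex-digit pairs, one per SATA controller, while SataPortMap is one decimal digit per controller):

```python
def parse_disk_idx_map(value: str):
    """Each two-hex-digit pair gives the first disk index for one SATA controller."""
    return [int(value[i:i + 2], 16) for i in range(0, len(value), 2)]

def parse_sata_port_map(value: str):
    """Each decimal digit caps the number of ports exposed by one controller."""
    return [int(c) for c in value]

# jun's example: the first SATA controller starts at disk 8 (0x08),
# the second at disk 12 (0x0C), and the first controller shows 4 ports.
print(parse_disk_idx_map("080C"))   # [8, 12]
print(parse_sata_port_map("4"))     # [4]
```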

Sorry to come back to this; I'm using two SAS HBAs and no native SATA. Should I be using SataPortMap at all here and let SasIdxMap do the work? Previously I've used SataPortMap=88 to represent both controllers having 8 ports, but ideally I'd like to get the drives listed in the right order on the SAS HBAs if possible.

 

Thanks!


11 hours ago, Next said:

Can I use that one, "DS918+"? I'm using the "3615XS".

Jun's Loader 1.02b is running for the moment without any issues.

If you are using 3615xs then keep using that. I don't really see the point in using 918+ unless you have a specific set of hardware that is close to the original DS918+.

I recommend you read the FAQs and have a look at the past tutorials I made. That should provide enough information on the DOs and DON'Ts.

 

edit: I just realized there is no custom extra for 3615xs. In that case, yes, you can try DS918+ and see if that works for you with the custom extra ramdisk.


So I took the plunge and attempted to migrate my 3615-based setup to the latest image. I copied the PID/VID and serial/MAC details from the old USB stick, created the new image and booted up. Interestingly, if you're bonding Ethernet ports with LACP, you have to remove the bond at the switch before migrating, as nothing happens until you do. Once the install has reset, you have to apply the bond again, otherwise DSM cannot be found. The only case where this isn't needed is a complete reinstall, since a fresh DSM won't have a bonded config.

 

All appears to be working now, will see how it goes!


5 hours ago, Polanskiman said:

If you are using 3615xs then keep using that. I don't really see the point in using 918+ unless you have a specific set of hardware that is close to the original DS918+.

I recommend you read the FAQs and have a look at the past tutorials I made. That should provide enough information on the DOs and DON'Ts.

 

edit: I just realized there is no custom extra for 3615xs. In that case, yes, you can try DS918+ and see if that works for you with the custom extra ramdisk.

 

Agreed. It's worth pointing out that the DS3615xs is based on a regular consumer i3 architecture, whereas the DS3617xs is Xeon-based. So if you're not using a Xeon of that generation, it's recommended not to use that specific config.


12 hours ago, knopserl said:

 

I have the same problem and described it a few posts before yours: I have an N4200 board with a Realtek Gigabit PCIe Family NIC (RTL8168/8111 or RTL8169?).

It boots but never appears on the network. When I use the 918+ v1.03a2 loader it boots and is visible on the network, and I can install 6.2, but it's unstable: it needs 10 minutes to boot DSM, crashes, and then you can no longer reinstall.

It would be nice to have either an installable DS3617xs version for the very energy-efficient N4200 mini PC or a stable DS918+ version.

 

I had the same issue with a Broadcom NIC. I changed the boot type from UEFI to Legacy and activated CSM. After this, the NAS was visible. My HP Gen10 MicroServer runs loader 1.03b and DS3617xs.


On 8/7/2018 at 1:58 AM, Saoclyph said:

@extenue, @Lennartt, @wenlez, @pateretou, @sashxp, @enzo, @dodo-dk

 

- Outcome of the update: SUCCESSFUL 

- DSM version prior update: 6.1.7 Update 2 with Jun's loader v1.02b

- Loader version and model: Jun's Loader v1.03b - DS3617

- Using custom extra.lzma: NO

- Installation type: VM on Proxmox 5.2.6 - Xeon D-1537 (needed to use the kvm64 CPU type), passthrough of an LSI SAS2116 with 5 x WD Red 3TB in RAID5, 2 x WD Red 4TB in RAID1 & 2 x Intel DC S3700 200GB in RAID1

- Additional comments: SeaBIOS, loader on SATA and the ESXi boot line. Update to Update 2 OK. Had to replace/delete remnant files from older loaders in /etc/rc.* and /.xpenoboot (see last paragraph below).

 

Using the USB method I got a "mount failed" error like others on Proxmox, but it was successful when using a SATA image disk:

  • rename the loader image with a .raw extension instead of .img and place it in the VM images folder /var/lib/vz/images/100/ (the Proxmox parser does not understand .img)
  • add a sata0 disk in the VM .conf (/etc/pve/qemu-server/100.conf):

sata0: local:100/synoboot_ds3617_v1.03b.raw,size=52429K
  • choose sata0 in Option/Boot Order in the GUI
  • at start in the GUI console, choose the ESXi boot line

 

My vm ID is 100, replace it with yours.

I also had to choose the kvm64 cpu type.

 

  Bonus: easy way to edit grub.cfg

It is easy to change the loader's grub.cfg by mounting the loader image:



cd /var/lib/vz/images/100/
mkdir synoboot_mount
mount -o loop,rw,offset=$((2048*512)) synoboot_ds3617_v1.03b.raw synoboot_mount
vi synoboot_mount/grub/grub.cfg
# unmount it after editing
umount /var/lib/vz/images/100/synoboot_mount
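The `offset=$((2048*512))` in the mount command above is simply the first partition's start sector (2048, the common alignment) multiplied by the 512-byte logical sector size, i.e. 1 MiB into the image:

```python
SECTOR_SIZE = 512               # bytes per logical sector
PARTITION_START_SECTOR = 2048   # typical first-partition alignment

offset = PARTITION_START_SECTOR * SECTOR_SIZE
print(offset)  # 1048576 (= 1 MiB)
```

If the loader image's first partition starts at a different sector, adjust the multiplier accordingly.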

 

 

A serial port is also a good thing to have for debugging. You can access the serial console with the following command (type Ctrl-O to exit):


socat UNIX-CONNECT:/var/run/qemu-server/100.serial0 STDIO,raw,echo=0,escape=0x0f

 

The serial port turned out to be essential in my case.

After I first updated from 6.1 to 6.2, the VM was starting fine (Docker and SSH were OK), but I was not able to log in to DSM, and after ~5 minutes from boot everything shut down and I lost the network (as @JBark did). I thought it had completely shut down, but through the serial port I saw that it had just killed everything Synology-related, even the network config.

With a fresh VM it was working well, so I tried to find differences between the DSM filesystems.

I found that a lot of the /etc/rc.* files were referencing old binaries that no longer exist, so I replaced all the /etc/rc.* files with the ones from the fresh installation. After rebooting it was still shutting down after 5 minutes, but I think this was needed in combination with the following removals.

I also saw several /etc/*.sh scripts, and a /.xpenoboot folder, that were not there in the fresh installation.

After deleting them and cleaning up the /etc/synoinfo.conf file a little (I removed the support_syno_hybrid_raid option and some others; not sure if that had an effect), everything was working great again!

 

@jun Thanks for the loader!

 

Hello,

 

Could you please share the .conf file of your VM? I would like to create a DSM 6.2 VM on my Proxmox infrastructure.

Many thanks in advance!

 

Edited by Polanskiman
Changed .xpenology to .xpenoboot in the Additional comments line

On 8/1/2018 at 4:30 PM, Dfds said:

@jun A fresh bare-metal install on an HP MicroServer Gen7 N54L seems to be working OK. Many thanks for your hard work.

 

 

2 hours ago, bnicolae said:

Hi,

 

    Can I use this with an HP Gen7 AMD Turion? I'm stuck on 6.0.2.

 

See above, so I suppose so.

 

PS.

For the ones who know, I'm still around... :)

Edited by Poechi

18 hours ago, luchuma said:

not working on ESXi 6.7 with an HP MicroServer Gen8 and B120i RAID

disk set to SCSI (no option other than IDE)

VM machine - Linux Other x64 with BIOS mode

loader stops at find.synology.com and CPU usage is 100%

"booting the kernel." never shows up

switching the machine type to Windows 7 x64 doesn't help

 

why do you use 6.7? there is no pre-Gen9 image for 6.7.

I use 6.5u2 and it works perfectly, with the B120i set to AHCI and the whole controller passed through to DSM.


On 8/5/2018 at 3:03 PM, jun said:

yes, that is what I meant by "stable SAS disk name"

 

And what do I need to use to get it working? Or with SasIdxMap=0, should the HDDs appear in the correct position?

LSI 9211-8i in IT mode + onboard controller disabled.

 

Thanks, and sorry if I posted in the wrong section :).


6 hours ago, john_matrix said:

Hello,

 

Could you please share the .conf file of your VM? I would like to create a DSM 6.2 VM on my Proxmox infrastructure.

Many thanks in advance!

 

 

My Proxmox VM conf:

bios: seabios
boot: c
bootdisk: sata0
cores: 6
cpu: kvm64
# hostpci0 passes my HBA PCI device through
hostpci0: 02:00.0
hotplug: disk,network,usb
memory: 12000
name: dsm6-jun-v1.0.3b
net0: e1000=CE:7B:13:B7:69:82,bridge=vmbr0
net1: e1000=AE:ED:7B:C4:44:5B,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
sata0: local:100/synoboot_ds3617_v1.03b.raw,size=52429K 
serial0: socket
smbios1: uuid=7c25000f-d5d1-454d-ac71-af6a0a89bf1a
sockets: 1

 

 

 

@Polanskiman could you edit my post (ID 124 of this thread) to change "/.xpenology" to "/.xpenoboot" in the "Additional comments" line and the last paragraph? I misspelled the folder name, and it won't let me edit it myself.


3 hours ago, pigr8 said:

 

why do you use 6.7? there is no pre-Gen9 image for 6.7.

I use 6.5u2 and it works perfectly, with the B120i set to AHCI and the whole controller passed through to DSM.

 

why not use ESXi 6.7? there will be no new ESXi pre-Gen9 HPE images, and if you stay on 6.5u2 you will increase your technical debt.

HPE images are just regular ESXi images equipped with the necessary drivers for HPE hardware; the pre-Gen9 images have drivers for older hardware and the Gen9 ones don't.

when you have 6.5u2 installed you can upgrade to 6.7; it will keep some drivers from 6.5u2, and some will be replaced with new versions.

AHCI mode means you don't use hardware RAID (the B120i controller), and when a hard drive fails you can irreversibly lose your data.


8 hours ago, filippo333 said:

 

Agreed. It's worth pointing out that the DS3615xs is based on a regular consumer i3 architecture, whereas the DS3617xs is Xeon-based. So if you're not using a Xeon of that generation, it's recommended not to use that specific config.

 

I disagree with this generalization - Xeon does not offer any capabilities that desktop processors lack. ECC RAM and improved security are Xeon's two main benefits, both of which are irrelevant to DSM. So I would not worry about using DS3617xs with a non-Xeon processor as long as it supports the needed instruction set, which is a function of processor generation, not server vs. desktop. The final answer for your CPU should be discernible by reviewing the hardware capabilities described on the Intel ARK pages.

 

How to summarize all this?  Basically, the DS916/3615/3617 DSM code base, AS COMPILED BY SYNOLOGY, seems to require a Sandy Bridge or later CPU.

DSM on DS918 appears to use instructions that appeared in Haswell and later, which includes Broadwell, Skylake, Kaby Lake, Coffee Lake, Apollo Lake, Gemini Lake.  The community is still compiling information about exactly which CPU's are supported, so don't assume this list is definitive.

 

Also note that the embedded Linux kernel for DSM DS916/3615/3617 is v3.10 (even on DSM 6.2) and the DS918 uses v4.4.  This may also turn out to be a threshold for CPU selection.

 

For hardware transcoding, the QuickSync processor extension is required both for Synology native support (via Video Station) and for third-party software (e.g. Plex). QuickSync capability increases with each processor generation, so even if your processor has QuickSync support, it may not meet the specific requirements of the software you are trying to run. Also, not every modern processor has QuickSync silicon (no Atoms have it, most desktop processors do, some Xeons do).

 

In addition to the above, DSM must support QuickSync with kernel drivers, and the required components are only available with DSM on the DS916 and DS918 platforms today.
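The generation thresholds above can be sketched as a rough heuristic over the flags Linux reports in /proc/cpuinfo. This is an assumption-laden illustration, not an official check: `dsm_platform_hint` is a hypothetical helper, and it uses the commonly cited markers that AVX first appeared with Sandy Bridge, while MOVBE arrived with Haswell (and the Apollo/Gemini Lake Atom lines).

```python
def dsm_platform_hint(flags):
    """Map a set of /proc/cpuinfo flag names to the CPU-generation
    thresholds discussed above (illustrative heuristic only)."""
    if "movbe" in flags:   # Haswell-era Core CPUs, Apollo/Gemini Lake Atoms
        return "DS918+ class (Haswell-era instructions present)"
    if "avx" in flags:     # AVX arrived with Sandy Bridge
        return "DS3615/3617 class (Sandy Bridge or later)"
    return "pre-Sandy Bridge (likely unsupported by these DSM builds)"

# On a Linux host you could feed it the real flags, e.g.:
#   flags = set(open("/proc/cpuinfo").read().split())
print(dsm_platform_hint({"avx", "avx2", "movbe"}))
print(dsm_platform_hint({"avx"}))
```

Treat the result only as a starting point; as noted above, the community list of supported CPUs is not definitive.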


8 minutes ago, flyride said:

 

I disagree with this generalization - Xeon does not offer any capabilities that desktop processors lack. ECC RAM and improved security are Xeon's two main benefits, both of which are irrelevant to DSM. So I would not worry about using DS3617xs with a non-Xeon processor as long as it supports the needed instruction set, which is a function of processor generation, not server vs. desktop. The final answer for your CPU should be discernible by reviewing the hardware capabilities described on the Intel ARK pages.

 

How to summarize all this?  Basically, the DS916/3615/3617 DSM code base, AS COMPILED BY SYNOLOGY, seems to require a Sandy Bridge or later CPU.

DSM on DS918 appears to use instructions that appeared in Haswell and later, which includes Broadwell, Skylake, Kaby Lake, Coffee Lake, Apollo Lake, Gemini Lake.  The community is still compiling information about exactly which CPU's are supported, so don't assume this list is definitive.

 

Also note that the embedded Linux kernel for DSM DS916/3615/3617 is v3.10 (even on DSM 6.2) and the DS918 uses v4.4.  This may also turn out to be a threshold for CPU selection.

 

For hardware transcoding, the QuickSync processor extension is required both for Synology native support (via Video Station) and for third-party software (e.g. Plex). QuickSync capability increases with each processor generation, so even if your processor has QuickSync support, it may not meet the specific requirements of the software you are trying to run. Also, not every modern processor has QuickSync silicon (no Atoms have it, most desktop processors do, some Xeons do).

 

In addition to the above, DSM must support QuickSync with kernel drivers, and the required components are only available with DSM on the DS916 and DS918 platforms today.

 

Thanks a lot for your post. Very useful information for understanding the whole version map.


2 hours ago, luchuma said:

 

why not use ESXi 6.7? there will be no new ESXi pre-Gen9 HPE images, and if you stay on 6.5u2 you will increase your technical debt.

HPE images are just regular ESXi images equipped with the necessary drivers for HPE hardware; the pre-Gen9 images have drivers for older hardware and the Gen9 ones don't.

when you have 6.5u2 installed you can upgrade to 6.7; it will keep some drivers from 6.5u2, and some will be replaced with new versions.

AHCI mode means you don't use hardware RAID (the B120i controller), and when a hard drive fails you can irreversibly lose your data.

 

no, and no.

first, you don't have to run the latest cutting-edge version when it comes to server stuff, and HP has clearly stated it will not support older-gen hardware from now on; drivers not made for your generation may well not work correctly. as you said, Gen9+ images have drivers for newer hardware and pre-Gen9 has support for older hardware, so why would you use unsupported drivers on your server?

i ask because i've tested the 6.7 Gen9 images on my MicroServer Gen8 and some VMs weren't working correctly (read/write timeouts), so i rolled back to 6.5. if you major-upgrade your hypervisor from 6.5u2 to 6.7 and use drivers that weren't compiled for that kernel, you're going to have a bad time; and newer drivers, as HP said, will not support older hardware and could easily break your system - the same thing that's going on with the new loader images and the new kernel in Synology.

newer doesn't mean better performance and stability, especially if your hardware is unsupported.

and to be clear, the B120i is not a hardware RAID controller, it's fake RAID; and no, your statement about drive failure and data loss is wrong, since the software RAID in the OS of your choice will handle recovery.

AHCI means the physical drives attached to the SATA ports have no HP fake-RAID layer to deal with, and all of them are exposed directly to the OS. in that OS you can choose if and how to handle RAID and which filesystem to use. for example, i have my 5 disks set up in a RAID5 (actually SHR-1) array with a btrfs filesystem, but you could use another RAID setup (even ones your fake-RAID controller doesn't support, since it's handled by the OS - via mdadm) or another filesystem like ZFS.

that's why most users who have RAID controllers flash them to IT mode: to handle the array on the OS side rather than in the BIOS. for example, some LSI cards don't do RAID5/6; flashed to IT mode they pass the disks straight to the OS as a simple HBA, and the RAID array is handled in the OS, as in FreeNAS or DSM.

if you think using the B120i protects you from data loss, you are wrong; and btw, RAID is not a backup solution anyway, so it shouldn't be what protects you in the first place.


m2c.

Edited by pigr8

ok

my ESXi 6.7 works fine with all my VMs, so I don't see a problem using it

have a nice day ;)

 

On 8/8/2018 at 6:34 PM, luchuma said:

not working on ESXi 6.7 with an HP MicroServer Gen8 and B120i RAID

disk set to SCSI (no option other than IDE)

VM machine - Linux Other x64 with BIOS mode

loader stops at find.synology.com and CPU usage is 100%

"booting the kernel." never shows up

switching the machine type to Windows 7 x64 doesn't help

problem solved

new VM created as Ubuntu x64, and the SATA mode option appeared :)

SCSI controller - LSI Logic SAS

HDD on SCSI

CPU max 30% and idle below 1%

DSM 6.2-23739 Update 2

 

Edited by luchuma
