jun

DSM 6.2 Loader

Recommended Posts

3 hours ago, nicoueron said:

 


 

and if you want to deploy an OVF file you can try mine: https://file.niko44.fr/sharing/CAGHguOMW

Yes, it's working now.

In your OVA file the synoboot.vmdk drive is SATA.

I had a problem adding a SATA drive in the vSphere client, but in the vSphere web client it is possible.

Thanks!

I have ESXi 6.5.

Just now, lubland said:

vsphere client

 

That client is deprecated; don't use it anymore, especially since you are on ESXi 6.5.

On 8/3/2018 at 5:13 PM, autohintbot said:

 

You should edit grub.cfg with the MAC address you're using in the network adapter on the VM (basically, just mount synoboot.img with OSFMount and find/edit it there).

 

The other option with ESXi is to edit your Virtual Switch in networking, enable Security->MAC address changes.  DSM will use the MAC address in grub.cfg for its network card, and not the one set in your VM config.  By default with ESXi this security setting is on, which will prevent it from getting on the network if there's a mismatch.  If you're going to use multiple XPEnology installs you'll need to change the MAC address anyway, so might as well.
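For anyone on Linux instead of Windows/OSFMount, a rough sketch of the same edit (the partition number, the grub.cfg path inside the image, and the MAC value here are assumptions; check your own image):

```shell
# Mount the first partition of the loader image and edit grub.cfg in place.
# /mnt, the p1 partition and the example MAC are placeholders.
LOOP=$(sudo losetup -Pf --show synoboot.img)   # prints e.g. /dev/loop0
sudo mount "${LOOP}p1" /mnt
sudo sed -i 's/^set mac1=.*/set mac1=0011322CA785/' /mnt/grub/grub.cfg
sudo umount /mnt
sudo losetup -d "$LOOP"
```

The MAC you set here should either match the VM's adapter or you enable the "MAC address changes" security setting as described above.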

 


I have tried the modification of the MAC in grub.cfg. It doesn't work.

The only way I figured out is:

1. creating a VM in VMware Workstation

2. connecting the synoboot.vmdk using a SATA controller

3. creating a new disk using a SATA controller

4. moving it to the ESXi server with VMware Converter.

It seems that only SATA controllers are working.

 

On 8/12/2018 at 5:54 PM, marigo said:

Successfully migrated my Proxmox VM from 6.2 (918+)  to 6.2 (DS3615XS) with the new (1.03b) loader!

Great work Jun!  

 

Do you mind sharing your config? Did you have to setup anything differently?

I'm struggling to get it working with Proxmox currently.


I cannot make it work with 1.03b 3615/3617.

1.03a2 918+ works.

My setup: HP ProDesk G2 SFF / i5-6600 / 4GB RAM / Intel I219-LM network.

 

It does not appear on the network.

On 8/13/2018 at 3:09 AM, OzTechGeek said:

For the last 10 hours I've been struggling to get any SAS controller (LSI Logic SAS or VMware Paravirtual) to work correctly with ESXi 6.7 and DS3617 1.03b. Here is my current VM config:

   SATA Controller 0 : SATA(0:0) = synoboot.vmdk

   SATA Controller 1 : SATA(1:0) = Test0.vmdk (shows in Storage Manager as Disk 9) = Correct

   SATA Controller 1 : SATA(1:3) = Test3.vmdk (shows in Storage Manager as Disk 12) = Correct

   SCSI Controller 0 : SCSI(0:0) = SAS0.vmdk (shows in Storage Manager as Disk 6) = WRONG

 

Over the last 10 hours I have tried different grub.cfg changes; here is my current config:


set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C0801 SataPortMap=148 SasIdxMap=0'

DiskIdxMap=0C0801 = starting disk in DSM for each controller:

   0C = 1st controller (SATA0) starts at "Disk 13" in DSM

   08 = 2nd controller (SATA1) starts at "Disk 9" in DSM

   00 = 3rd controller (SCSI0) starts at "Disk 1" in DSM = NOT WORKING

SataPortMap=148 = ports per controller:

   1 = 1 port, SATA0:0, synoboot.vmdk, DSM Disk 13

   4 = 4 ports, SATA1:0-3, DSM Disks 9, 10, 11, 12

   8 = 8 ports, SCSI0:0-7, DSM Disks 1-8
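As a sanity check on the explanation above, the two strings can be decoded mechanically. This is only a sketch of the interpretation described in this post, not official semantics (the 0C0800 value used below matches the /proc/cmdline output quoted further down):

```shell
# Decode DiskIdxMap (one hex byte per controller, 0-based start index)
# and SataPortMap (one decimal digit per controller, port count).
decode_maps() {
  local idxmap=$1 portmap=$2 i
  for ((i = 0; i < ${#idxmap}; i += 2)); do
    local hex=${idxmap:$i:2} ports=${portmap:$((i / 2)):1}
    echo "controller $((i / 2 + 1)): ${ports:-?} port(s), first DSM disk $((16#$hex + 1))"
  done
}
decode_maps 0C0800 148
# controller 1: 1 port(s), first DSM disk 13
# controller 2: 4 port(s), first DSM disk 9
# controller 3: 8 port(s), first DSM disk 1
```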

 

The problem is that the SCSI/SAS disk SAS0.vmdk never starts at DSM Disk 1; it's currently DSM Disk 6. During some initial testing it was coming in as DSM Disk 2, but now it's Disk 6. I noticed the following in the serial console for the SAS disk:


:: Loading module mptsas[    2.087825] sd 5:0:0:0: [sdf] Assuming drive cache: write through
[    2.089707] sd 5:0:0:0: [sdf] Assuming drive cache: write through
[    2.091766] sd 5:0:0:0: [sdf] Assuming drive cache: write through
 ... [  OK  ]

Where are sda to sde? I only have 1 SCSI disk currently assigned in the VM (SAS0.vmdk on port SCSI(0:0)). I have tried different settings for "SasIdxMap=", such as "SasIdxMap=0xFFFFFFFA" (-6) and "SasIdxMap=0xFFFFFFFF" (-1), but SAS0.vmdk still shows up as Disk 6 in DSM. No matter what I do I cannot get the SCSI/SAS disk to show as Disk 1 in DSM.
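Those two SasIdxMap values are just small negative numbers written as unsigned 32-bit hex; a quick check of the arithmetic:

```shell
# Interpret an unsigned 32-bit hex value (e.g. a SasIdxMap setting)
# as a signed two's-complement integer.
to_signed32() {
  local v=$((16#${1#0x}))
  if (( v >= 1 << 31 )); then v=$((v - (1 << 32))); fi
  echo "$v"
}
to_signed32 0xFFFFFFFA   # -6
to_signed32 0xFFFFFFFF   # -1
```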

 

Also noticed the following from "cat /proc/cmdline": no mention of any "SasIdxMap" setting, if it makes a difference:


cat /proc/cmdline
syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS3617xs vender_format_version=2 console=ttyS0,115200n8 withefi quiet root=/dev/md0 sn=<removed> mac1=<removed> netif_num=1 synoboot_satadom=1 DiskIdxMap=0C0800 SataPortMap=148

I'm currently testing this in a VM in this configuration because my real production box has this same configuration, except the SAS/SCSI controller is actually an LSI 9207-8e in pass-through mode. I wanted to validate and test 6.2 in a VM before I migrate the production box to 6.2, but so far no luck with the SCSI/SAS issue.

 

The only thing I have not tested yet, which I just thought of and wonder if it would make a difference, is the "Guest OS Version" of the VM. I currently have it set to "Other 3.x Linux 64-bit", since DS3617 runs the 3.x kernel; that's why I chose it.

 

Is anybody else having trouble getting SAS/SCSI controllers in an ESXi 6.7 VM to number/show correctly, or is it just me, or did I miss some configuration setting somewhere? BTW, during this whole time trying to get it working I must have read this 10-page thread at least 8 times from start to finish, looking for something I may have missed or a catch/gotcha :)

 

Thoughts/Suggestions?

 

 

Still trying to figure out the SAS controller mapping issue; it's driving me crazy. It seems the "SataPortMap" setting is controlling the SAS addressing, but according to what I have read it's not working how I thought. Here is what I get with different settings:

SataPortMap=1
:: Loading module mptsas[    2.221996] sd 31:0:0:0: [sdaf] Assuming drive cache: write through
[    2.224469] sd 31:0:0:0: [sdaf] Assuming drive cache: write through
[    2.225998] sd 31:0:0:0: [sdaf] Assuming drive cache: write through
 ... [  OK  ]

SataPortMap=11
:: Loading module mptsas[    2.101528] sd 2:0:0:0: [sdc] Assuming drive cache: write through
[    2.103279] sd 2:0:0:0: [sdc] Assuming drive cache: write through
[    2.105453] sd 2:0:0:0: [sdc] Assuming drive cache: write through
 ... [  OK  ]

SataPortMap=111
:: Loading module mptsas[    2.100789] sd 2:0:0:0: [sdc] Assuming drive cache: write through
[    2.102862] sd 2:0:0:0: [sdc] Assuming drive cache: write through
[    2.104874] sd 2:0:0:0: [sdc] Assuming drive cache: write through
 ... [  OK  ]

SataPortMap=148
:: Loading module mptsas[    2.085214] sd 5:0:0:0: [sdf] Assuming drive cache: write through
[    2.087088] sd 5:0:0:0: [sdf] Assuming drive cache: write through
[    2.089221] sd 5:0:0:0: [sdf] Assuming drive cache: write through
 ... [  OK  ]

 "SataPortMap=1" SAS disk "sdaf" does not even appear in DSM

 "SataPortMap=111" puts the SAS disk "sdc" on Disk 2 in DSM

 "SataPortMap=148" puts the SAS disk "sdf" on Disk 6 in DSM

 

I thought "SataPortMap=148" meant 1 port on the 1st controller, 4 ports on the 2nd controller and 8 ports on the 3rd controller, but that's not what the setting is doing. To me it looks like, for the SAS controller, it takes the first two numbers of "SataPortMap", adds them together to get the SCSI ID, and then adds 1 to that number to get the starting drive, sdf.
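That hypothesis can be written down and checked against the logs above. This is purely a model of the observed behaviour, not documented SataPortMap semantics:

```shell
# Predict the first SAS device name from SataPortMap, assuming the SATA
# ports declared for the first two controllers are enumerated first.
letters=abcdefghijklmnopqrstuvwxyz
sas_start_device() {
  local map=$1
  local second=${map:1:1}
  local claimed=$(( ${map:0:1} + ${second:-0} ))
  echo "sd${letters:$claimed:1}"
}
sas_start_device 148   # sdf, as in the log
sas_start_device 11    # sdc, as in the log
sas_start_device 111   # sdc, as in the log
```

Note that "SataPortMap=1" (which produced sdaf) does not fit this model, so the hypothesis is at best incomplete.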

 

I don't know; at this point I'm just grasping at straws, trying to figure something out :)

 

13 hours ago, demiise said:

 

Do you mind sharing your config? Did you have to setup anything differently?

I'm struggling to get it working with Proxmox currently.

Not really a different setup. Just use SATA disks to set up the VM and edit grub.cfg to your needs.

Copy synoboot.img to your VM and rename it to match the SATA boot disk. Here is my config:

 

root@proxmox2:/etc/pve/qemu-server# cat 112.conf
boot: c
bootdisk: sata0
cores: 4
memory: 512
name: DSM6.2
net0: e1000=8A:9D:C6:25:F4:C3,bridge=vmbr0,tag=101
numa: 0
ostype: l26
sata0: local:112/vm-112-disk-2.raw,size=50M
sata1: local:112/vm-112-disk-1.qcow2,size=10G
scsihw: virtio-scsi-pci
smbios1: uuid=3f77ff8c-6c22-4503-8fc4-863df965f9ef
sockets: 1
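For reference, getting synoboot.img into place for a config like this might look as follows on the Proxmox host (the VM ID, storage path and file names are taken from this particular config and are assumptions for any other setup):

```shell
# Overwrite the 50 MB raw boot disk referenced by sata0 with the loader image.
cp synoboot.img /var/lib/vz/images/112/vm-112-disk-2.raw
qm rescan --vmid 112   # have Proxmox re-read the disk size
```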

 

On 8/12/2018 at 11:06 PM, andrey00188 said:

Hello everyone!

Please tell me how to update from 6.1 with loader 1.02b to 6.2 with loader 1.03b, installed in ESXi.
I tried simply replacing the bootloader, but the machine won't get an IP.

 

 

I migrated my DSM from 6.1 with loader 1.02b to 6.2 with bootloader 1.03b. To do this, I needed to remove the SCSI controller from the VM and connect a SATA controller instead.
On the boot-type selection screen, choose the 1st option.


Hello,
I am planning to upgrade from the following setup to the new loader:

 

- Loader version and model: Jun's Loader v1.02b - DS3617xs

- Installation type: Virtual on a bare-metal ESXi 6.5U1

- Installed DSM software: DSM 6.1.5-15254 Update 1

 

Can someone explain to me how I do that?

 

Kind Regards Heggeg

Edited by Heggeg

13 minutes ago, Heggeg said:

Hello,
I am planning to upgrade from the following setup to the new loader:

 

- Loader version and model: Jun's Loader v1.02b - DS3617xs

- Installation type: Virtual on a bare-metal ESXi 6.5U1

- Installed DSM software: DSM 6.1.5-15254 Update 1

 

Can someone explain to me how I do that?

 

Kind Regards Heggeg

You can create a copy of the VM and test the upgrade there.

In the new VM, delete the SCSI controller, then add a SATA controller and the old disks. Then swap the old loader for the new one. Run the VM, go to find.synology.com and migrate your DSM.

In my case, it looked like this.


I was finally able to get v1.03b to work successfully after a lot of runarounds with ESXi 6.7.

No DiskStation found within LAN:

I created a new VM and set my modified synoboot (updated VID/PID/SN/MAC) as a SATA HDD. I booted the 3rd option (VMware/ESXi) and it was detected on the network. Once it was detected, I could then migrate to 6.2.

 

After migration was completed, all of my packages would close within minutes of the system booting up. To resolve this, I had to delete the hidden .xpenoboot folder located at the root. Enable SSH and use PuTTY to remove the .xpenoboot folder:

rm -rf /.xpenoboot

 

Rebooted the system and all the packages would now stay up and running.
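Put together, the cleanup above might look like this over SSH (the address and account are placeholders; enable SSH in DSM's Control Panel first):

```shell
ssh admin@192.168.1.50    # placeholder IP; use your NAS address
sudo -i                   # become root
rm -rf /.xpenoboot        # remove the leftover loader folder at the filesystem root
reboot
```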

 

Hope this helps other users!



Hi, I successfully moved dsm 6.1 to 6.2 using Jun's Loader v1.03b - DS3617xs (thank you so much Jun)
my configuration is a microserver HP Gen 10:
Processor: AMD Opteron X3216
Memory: DDR4 8 GB
Network: Broadcom 5720

but there is a small problem: DSM uses only 3.4 GB of memory and says that 4.6 GB is reserved by the system.

Can I fix this or work around it?

Thank you.

Sorry for the Google translator.


This is a cautionary tale of migrating to 6.2, a case of the Bad, the Ugly and the Good.

 

My previous setup was as follows:

- DSM version prior update: DSM 6.1.7 15284 Update 2

- Loader version and model: Jun's Loader v1.02b - DS3615xs

- Using custom extra.lzma: NO

- Installation type: BAREMETAL - Supermicro X8DTI-F

- NIC: Intel 82576 Dual-Port Gigabit Ethernet, plus an Intel EXPI9301CTBLK PRO1000 Network Card CT PCIex

- Chipset: Dual Intel E5620 Xeon

- Disks: 6 x 2TB + 2 x 4TB WD Red, 2 x 256GB SSD

 

Having downloaded Jun's v1.03b loader (thanks, Jun) I flashed it onto a new Kingston 8GB USB stick, having first modified the VID/PID, SN and MAC addresses. I then powered down the NAS from the DSM dashboard. I have one ethernet cable connected to the onboard Intel 82576 (to control the IPMI) and another ethernet cable into the PRO1000 card (which was the DSM connection).
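For reference, writing the loader image to a USB stick on Linux can be done with dd (the device name is a placeholder; triple-check it, since writing to the wrong device destroys data):

```shell
# /dev/sdX is a placeholder for the USB stick's device node.
sudo dd if=synoboot.img of=/dev/sdX bs=4M conv=fsync status=progress
```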

 

The Bad:

Having powered down the unit I swapped out the old USB stick and loader (v1.02b) for the new one (v1.03b). I then powered up the server and after a minute my 'Synology' was found.... hooray! I was asked if I wanted to 'Migrate' or do a 'Clean Install'; I chose the former - BIG MISTAKE! The update proceeded - the usual 10-minute wait - and everything looked good. When the update was complete I manually rebooted the server and after a minute my Synology was found. Then it disappeared.

 

The Ugly:

No sweat. Check the cables - OK. Check my router for a connection - no connection to DSM, but a connection to the IPMI IP address. Log into the IPMI - everything OK, I can see Jun's completed loader page. So it must be a network issue, probably with my network card. Power down the system using the off button - not the best idea, as it is an improper shutdown and you risk corruption. Swap out the network card for a spare/new 82574 controller (changed the MAC address in the grub file). Reboot the system, wait a minute, Synology found! Quickly go into DSM Control Panel - warning that the Volume has crashed and needs repair. Start that process and then the network connection is lost again. Air turns blue! Log into the IPMI - same as before, all OK, see Jun's completed loader page. Router doesn't see DSM. WTF! Decide to wait 12 hours for the volume repair to complete - yes, it continued to repair even though the network connection was lost. Next morning, repair complete but still no network connection. Reboot the router, no change. Power down the system (yeah, data will be f*k*d again). On reboot get as far as a 'You are not authorised' message from DSM. WTF! Then the network connection is lost again. Had enough; nuclear option. Powered down. Pulled all HDDs out of the system. Had a new 4TB WD Red which I inserted in drive 1. Rebooted.

 

The Good:

Found Synology! Clean install. Install goes well. Need to reboot....fingers crossed. Reboot fine...asks if I want to update to 6.2-23739 Update 2. Not really, but I will. Update and reboot goes well. Network connection stable, visible on router as well. Clean/reformat my other 4TB drives. Load them into server - all found by DSM. Build new volume. Start transferring all my back-up data onto clean volume.......

 

The Lessons:

1. BACK-UP YOUR DATA BEFORE ANY UPDATE - Polanskiman has told you a hundred times to do it, JUST DO IT

2. Go for a Clean Install when moving from 6.1 to 6.2 - I am convinced that it was the Migration option that f*k*d up my system, and not anything to do with the v1.03b loader. The fact that all my network cards are now visible on the DSM and my router without me having to use the extra.lzma is evidence of this.

3. It has made me think about whether I should move to FreeNAS or to Windows Server 2016 or unRAID or something else - but that is a discussion for another time.

 

So I can now say:

- Update: SUCCESSFUL (eventually)

 

Good luck.


My previous setup was as follows:

- DSM version prior update: DSM 6.1.7 15284 Update 2

- Loader version and model: Jun's Loader v1.02b - DS3615xs 

- Using custom extra.lzma: NO

- Installation type: BAREMETAL HP N54L

 

Updated to Jun's 1.03b.

Experienced problems with the LAN, packages closing, etc.

I removed the .xpenoboot folder according to:

 

15 hours ago, imaleecher said:

I was finally able to get v1.03b to work successfully after a lot of runarounds with ESXi 6.7.

No DiskStation found within LAN:

I created a new VM and set my modified synoboot (updated VID/PID/SN/MAC) as a SATA HDD. I booted the 3rd option (VMware/ESXi) and it was detected on the network. Once it was detected, I could then migrate to 6.2.

After migration was completed, all of my packages would close within minutes of the system booting up. To resolve this, I had to delete the hidden .xpenoboot folder located at the root. Enable SSH and use PuTTY to remove the .xpenoboot folder.

 

Rebooted the system and all the packages would now stay up and running.

 

Hope this helps other users!

 

So I can now say:

- Update: SUCCESSFUL

 

Thanks for this loader and advice.

20 hours ago, Gigaset said:

1. BACK-UP YOUR DATA BEFORE ANY UPDATE - Polanskiman has told you a hundred times to do it, JUST DO IT

2. Go for a Clean Install when moving from 6.1 to 6.2 - I am convinced that it was the Migration option that f*k*d up my system, and not anything to do with the v1.03b loader. The fact that all my network cards are now visible on the DSM and my router without me having to use the extra.lzma is evidence of this.

3. It has made me think about whether I should move to FreeNAS or to Windows Server 2016 or unRAID or something else - but that is a discussion for another time.

Good luck.

 

Thanks for relaying your experience.

But regarding your comment #3 - let's not forget that 1.03b development and 6.2 support are still seriously in beta. If you want stability and a lack of problems, stay on 6.1 for now.


I just used your loader, but something is wrong with 1.03b 3617 or 3615:

in DSM, VMM can't run OpenWrt correctly.

It just breaks the OpenWrt kernel, which restarts over and over again.

But if I use 1.03a2 918+, that is fixed; OpenWrt runs correctly in VMM.

However, I have 8 bays, and with the 1.03a2 918+ loader DSM can't recognize all the HDDs.

So I kept switching between loader 1.03b 3615/3617 and loader 1.03a2 918+ until I found a way that works.

Forgive me for my poor English; I am Chinese.

Thank you for your dedication.

16 hours ago, gill03 said:

My previous setup was as follows:

- DSM version prior update: DSM 6.1.7 15284 Update 2

- Loader version and model: Jun's Loader v1.02b - DS3615xs 

- Using custom extra.lzma: NO

- Installation type: BAREMETAL HP N54L

 

Updated to Jun's 1.03b.

Experienced problems with the LAN, packages closing, etc.

I removed the .xpenoboot folder according to: @imaleecher

 

 

So I can now say:

- Update: SUCCESSFUL

 

Thanks for this loader and advice.

 

Right!

 

For a clean install just use the loader, but after a migration this /.xpenoboot/ folder is left over in the root directory when applying the 6.2 DSM software.

Remove this folder with:

rm -r /.xpenoboot/

and reboot the system.

Edited by marigo


My N40L has been upgraded to DSM 6.2 and it went successfully.

Everything is working, but when I try to use the web interface the NAS becomes unresponsive and I have to power it off manually to get it working again.

Does anyone have a tip for fixing the web interface?


Hey there,

 

I have an HP Gen8 with an E3-1265L V2, 16 GB of RAM, 2x6TB HDDs (RAID 0), 2x1TB HDDs (RAID 1) and several external USB HDDs ... the rest is stock. Running 5.2 it works like a charm, but some software I use recently started complaining that it would like to see a newer version of DSM.

So ... I am a pussy ... and I bought another Gen8 to try out the installation of DSM 6.2.

I downloaded the 1.03b loader and did the installation on the newly arrived Gen8, which is a Celeron machine with 2 GB of RAM and 4x2TB hard drives (RAID 0) (the rest is also stock).

I just downloaded the loader ... put the image on an SD card ... booted it and finished the default installation on bare metal.

It worked beautifully ... even the update (Update 2 I guess) worked well.

My plan is now to update my first gen8.

First I would like to migrate from 5.2 to 6.2 to avoid re-setting up some software and Docker stuff (nothing too complicated).

Second, if migration does not work, I would try a fresh installation (like on the new Gen8) and set everything up again.

The new Gen8 I would use as a backup machine beforehand ... so as not to lose any data.

 

Now to my questions :-)

 

Do I have to expect any issues or trouble during that process? Is there anything to keep in mind? Would you say the migration has no chance and I should jump directly to a fresh installation (maybe also for the sake of a cleaner system)?

 

Please help me with your experience and let me know what you think. :-)

 

Many thanks in advance

Edited by qse

1 hour ago, qse said:

Hey there,

 

I have an HP Gen8 with an E3-1265L V2, 16 GB of RAM, 2x6TB HDDs (RAID 0), 2x1TB HDDs (RAID 1) and several external USB HDDs ... the rest is stock. Running 5.2 it works like a charm, but some software I use recently started complaining that it would like to see a newer version of DSM.

So ... I am a pussy ... and I bought another Gen8 to try out the installation of DSM 6.2.

I downloaded the 1.03b loader and did the installation on the newly arrived Gen8, which is a Celeron machine with 2 GB of RAM and 4x2TB hard drives (RAID 0) (the rest is also stock).

I just downloaded the loader ... put the image on an SD card ... booted it and finished the default installation on bare metal.

It worked beautifully ... even the update (Update 2, I guess) worked well.

My plan is now to update my first Gen8.

First I would like to migrate from 5.2 to 6.2 to avoid re-setting up some software and Docker stuff (nothing too complicated).

Second, if migration does not work, I would try a fresh installation (like on the new Gen8) and set everything up again.

The new Gen8 I would use as a backup machine beforehand ... so as not to lose any data.

Now to my questions :-)

Do I have to expect any issues or trouble during that process? Is there anything to keep in mind? Would you say the migration has no chance and I should jump directly to a fresh installation (maybe also for the sake of a cleaner system)?

 

Please help me with your experience and let me know what you think. :-)

 

Many thanks in advance

 

It's a big jump from 5.2 to 6.2. A setup of mine didn't go that well and I had to do a fresh install instead of a migration (data remains intact, btw).

You shouldn't have any major problems.



Working great for me, thanks!

On first boot no HDD was found; I had to change the BIOS to AHCI mode.

- Loader: Jun's loader 1.03b, DS3615xs
- DSM Version: 6.2-23739-2
- Network: 4 x Marvell E8053 Gigabit Ethernet Controller
- Motherboard: COMMELL LV-674
- CPU: Core 2 Duo E6420
- Memory: 8GB
- SATA Drives: 4 x Serial ATA II interface

 

Edited by chris666uk1
mistake

