jun

DSM 6.1.x Loader


So Jun has confirmed that we need to identify the PCI devices present in the model we wish to emulate, so that the patcher can then be modified to run correctly.

 

Can you elaborate a bit on that? What do you mean by "establish the PCI devices"?

Just that official devices have a predetermined set of PCI devices that must be present for the system to work. On a generic machine, these either don't exist or appear differently (different IRQs, etc.). Apparently the patcher must modify the PCI devices, which means working out which PCI devices are required and then altering the loader to make them appear as they would on an official device; once that is done, I think it would load.

 

I don't know which module Jun created to do the patching, so I can't see what he has done yet; hoping to find out soon :smile:
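As a starting point for that comparison, one way to diff the PCI layout of a generic box against a real unit is to normalise `lspci -nn` output down to bare vendor:device IDs. A hedged sketch; the helper name and sample usage are my own, not from Jun:

```shell
# Extract the [vendor:device] ID pairs from `lspci -nn` output, so the
# PCI inventories of two machines can be diffed line by line.
pci_ids() {
  grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]' | sort -u
}

# On each machine one would run something like:
#   lspci -nn | pci_ids > this-box.txt
# and then diff the two files. `cat /proc/interrupts` shows the IRQ
# side of the same comparison.
```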


 

Yesterday I updated, with migration, the two servers in my signature: the first from DSM 6.0.1-7393 to DSM 6.0.2;

the second from the latest DSM 5.2 to DSM 6.0.2. The update went OK; all data and applications remain. DSM 6.0.2 then asked to update some applications (for use under DSM 6.0.2). The website with virtual hosts works fine, too.

 

I changed the VID, PID, SN and MAC addresses.

I replaced the ramdisk.lzma file on the USB drive with the one from Jun's image (second link above) at \image\DS3615xs (see the post a few pages earlier: viewtopic.php?f=2&t=20216&start=110#p73472).
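For anyone repeating this step, the swap can be done from any Linux box once the loader's boot partition is mounted. A minimal sketch; the helper name and the backup step are my own additions, and the mount point is an example:

```shell
# Copy a replacement ramdisk.lzma into place, keeping a backup of the
# original so the change is easy to undo.
replace_ramdisk() {
  src=$1
  dst=$2
  [ -f "$dst" ] && cp "$dst" "$dst.bak"  # preserve the original ramdisk
  cp "$src" "$dst"
}

# Usage, assuming the USB boot partition is mounted at /mnt/usb:
#   replace_ramdisk jun/image/DS3615xs/ramdisk.lzma /mnt/usb/image/DS3615xs/ramdisk.lzma
```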

 

my grub.cfg - server 1

 

set default='0'
set timeout='2'
set fallback='0'

menuentry 'Synology DS3615xs' --class os {
  insmod   fat
  linux   /image/DS3615xs/zImage root=/dev/md0 syno_hdd_powerup_seq=0 netif_num=1 HddHotplug=0 SataPortMap=2 syno_hw_version=DS3615xs vid=0x**** pid=0x**** console=uart,io,0x3f8,115200n8 sn=********** mac1=********** withefi elevator=your_magic_elevator quiet
  initrd   /image/DS3615xs/ramdisk.lzma
}

 

my grub.cfg - server 2

set default='0'
set timeout='2'
set fallback='0'

menuentry 'Synology DS3615xs' --class os {
  insmod   fat
  linux   /image/DS3615xs/zImage root=/dev/md0 syno_hdd_powerup_seq=0 netif_num=1 HddHotplug=0 SataPortMap=5 syno_hw_version=DS3615xs vid=0x**** pid=0x**** console=uart,io,0x3f8,115200n8 sn=********** mac1=********** withefi elevator=your_magic_elevator quiet
  initrd   /image/DS3615xs/ramdisk.lzma
}
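Since the only differences between the two configs above are the per-machine fields (VID, PID, SN, MAC, SataPortMap), those edits can also be scripted rather than done by hand. A hedged sketch; the helper is my own, and the example values are placeholders, not recommendations — use the IDs reported for your own USB stick:

```shell
# Replace one key=value pair on the kernel command line of a grub.cfg.
set_grub_field() {  # usage: set_grub_field FILE KEY VALUE
  sed -i "s/\b$2=[^ ]*/$2=$3/" "$1"
}

# Example (0x0951 / 0x1666 are made-up placeholder IDs):
#   set_grub_field grub.cfg vid 0x0951
#   set_grub_field grub.cfg pid 0x1666
#   set_grub_field grub.cfg SataPortMap 2
```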

 

Thanks aleksey_z. That is good to know. I did all the changes required to the grub.cfg file.

 

When you got to the Synology Assistant stage to update DSM, were you told that your drives came from another machine and that you needed to install, or did you see a different screen?


 


Yes, the installer shows a message that the drives have come from another system and that DSM needs to be updated. There is a choice between migration and a clean install. I chose migration and it worked.


 


And in this case, you still keep your ext4, right? What if I want to move to Btrfs after the upgrade?

Thanks.


Sounds good, thanks for the explanation. Looking forward to any news :smile:


 


I noticed you deleted the first section of the grub.cfg file.

 

serial --port=0x3F8 --speed=115200
terminal_input serial console
terminal_output serial console

 

Why so?

 

Thanks


 


No, I didn't delete anything from this file. After writing the image, the file is created without these lines:

serial --port=0x3F8 --speed=115200
terminal_input serial console
terminal_output serial console

I just changed the VID, PID, SN and MAC, and added SataPortMap=.

It's showing an i3 CPU with 2 cores, but I have an i5 with 4 cores.

Known issue. I'm sure your system really sees the i5 and all 4 cores :smile:

You can check over SSH; look for the commands earlier in this thread!
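For reference, the usual SSH check bypasses the DSM UI and asks the kernel directly; these are standard Linux commands, nothing loader-specific:

```shell
# Number of logical CPUs the kernel actually sees
# (the DSM UI just displays the real model's stock CPU instead):
grep -c '^processor' /proc/cpuinfo

# The real CPU model string:
grep -m1 'model name' /proc/cpuinfo
```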

 

Good to know! At this point, the only thing still unclear about my setup is why RAID 10 shows disks 1, 2, 3 and 5 as occupied, skipping disk 4, when I only have 4 SATA ports.

Also, what are the chances that future updates make this setup unusable again, and is QuickConnect a good idea with non-Synology devices?

Is anyone experiencing a kernel panic after enabling jumbo frames (MTU 9000) on a network interface?

I saw this on ESXi.

 

Are you running virtual or bare metal? What network adapter are you using?


 


And in this case, you still keep your ext4, right? What if I want to move to Btrfs after the upgrade?

Thanks.

Reformat your volume as Btrfs after the upgrade.


I currently have oktisme's DSM 6 running (with RAID 5, Btrfs). I wonder: will it be possible to create a new VM using the OP's boot-loader trick and keep my existing disks (without losing the data, of course)? I suspect the OP's image won't finish the install, as Synology is effectively already on the disks. But would I be able to just use the Upgrade button then?

 

Thanks.


Hi, as far as you know, can I use only one disk to test? It seems it's mandatory to have more than one disk to create a volume.


Reformat your volume as Btrfs after the upgrade.

Ouch... all my data on the old volume will be deleted, right? :sad:


 

I'm running on ESXi 6.0 U2. I tried both vmxnet3 and e1000; both gave me a kernel panic.
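When debugging this, it may help to separate whether the panic comes from the MTU change itself or from actual jumbo traffic; a don't-fragment ping exercises the full path. A sketch, with the interface name and gateway address as examples only:

```shell
# Max ICMP payload for a given MTU: subtract 20 bytes of IP header
# and 8 bytes of ICMP header.
jumbo_payload() {
  echo $(( $1 - 28 ))
}

# After raising the MTU, send full-size, non-fragmenting pings:
#   ip link set eth0 mtu 9000
#   ping -M do -s "$(jumbo_payload 9000)" -c 3 192.168.1.1
```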


Reformat your volume as Btrfs after the upgrade.

Ouch... all my data on the old volume will be deleted, right? :sad:

Copy your data to another system or to other drives first. If you don't have any, I recommend buying some; it's always good to have spare copies.
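Staging the data off the volume is the safe order of operations before any reformat. A minimal sketch; the helper and paths are examples (`rsync -a` would work equally well and is resumable):

```shell
# Copy a shared folder to a backup location, preserving attributes.
backup_share() {
  src=$1
  dst=$2
  mkdir -p "$dst"
  cp -a "$src/." "$dst/"  # -a keeps permissions, ownership, timestamps
}

# Usage:
#   backup_share /volume1/photo /volumeUSB1/usbshare/photo-backup
```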

 



People shouldn't really be trying these hacked loaders with their real data until some time has gone by and they have been tested more thoroughly... unless you don't mind the possibility of losing your data.



Can anybody confirm whether the Marvell 88SE9215 chipset is supported by this loader? I just migrated my 8-disk box from 5.2 and it only sees 4 of my disks. I'm guessing a driver needs to be added and I jumped the gun. Can anybody confirm? I've never compiled a driver, so if anybody has tips or guides on what, if anything, I can do, I would greatly appreciate it!
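One way to confirm it really is a missing driver, before trying to compile anything, is to look for PCI devices the kernel left unbound. A sketch; the helper is my own, and 1b4b:9215 is the usual vendor:device ID for the 88SE9215 (confirm yours with `lspci -nn`):

```shell
# From `lspci -nnk` output, print every device line that is NOT followed
# by a "Kernel driver in use" line -- i.e. hardware no driver claimed.
undriven() {
  awk '/^[0-9a-f]/ { if (dev != "" && !drv) print dev; dev = $0; drv = 0 }
       /Kernel driver in use/ { drv = 1 }
       END { if (dev != "" && !drv) print dev }'
}

# Usage:  lspci -nnk | undriven
# If the Marvell controller shows up here, the loader's kernel has no
# driver bound to it, which would explain the missing disks.
```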


That's so true, but the lure of an upgrade is always so exciting.



I noticed something strange after upgrading to DSM 6: the volume / disk group name changed to reuse_1. See pic: 5f68379672099bb54a7ee4d8509c2cb6.png

This only shows in the DS Finder app, not through the web interface.


I agree. I will still go ahead with this setup, but I'll also sync the crucial data (photos, docs, etc.) to an external drive or another PC.


Once it's working with LSI cards, I would happily move across, as all my data is backed up to CrashPlan and my photos/music etc. are additionally on Google Drive/OneDrive. I could suffer a complete failure and repopulate, so while it would be annoying, it wouldn't be devastating.


Yes, my two servers show the same "reuse_1" entry in DS Finder. Strange, isn't it?

