jun

DSM 6.2 Loader


22 hours ago, autohintbot said:

I spent some time experimenting tonight, and I still can't get my disks starting at 1 with the 6.2 loader, either 3615 or 3617, with an LSI passed through in an ESXi 6.5u2 VM.

 

I noticed that SasIdxMap=0 from grub.cfg doesn't actually come through in /proc/cmdline; it seems intended to start enumeration at 0 for a SAS controller:


sh-4.3# cat /proc/cmdline
syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS3617xs vender_format_version=2 console=ttyS0,115200n8 withefi quiet root=/dev/md0 sn=XXXXXXXXXXXXX mac1=XXXXXXXXXXXX netif_num=1 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1

But, it doesn't come through on my 6.1.x VM with the same card either, and that correctly enumerates disks starting with /dev/sda.  Not sure what else to try.  I increased internalportcfg for now, but that's just a landmine on the next big update (I guess it's possible to include that in the jun.patch in extra.lzma though).
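As an aside for anyone following along: internalportcfg in synoinfo.conf is commonly understood to be a hex bitmask in which bit i marks disk slot i+1 as an internal port. A minimal sketch of that interpretation (the helper name is mine, for illustration):

```python
# internalportcfg in synoinfo.conf is read as a hex bitmask:
# bit i set means DSM treats disk slot i+1 as an internal port.
def internalportcfg_for(num_slots: int) -> str:
    """Hex bitmask covering the first num_slots disk slots."""
    return hex((1 << num_slots) - 1)

print(internalportcfg_for(12))  # 12 slots -> 0xfff
print(internalportcfg_for(24))  # 24 slots -> 0xffffff
```

This is why hand-editing the value is a "landmine": a DSM update can rewrite synoinfo.conf and silently hide any slots above the stock mask.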

 

 

Hi, options not available in vanilla DSM are hidden from /proc/cmdline.
Here is an example of the disk-order-related options: SasIdxMap=0 DiskIdxMap=080C SataPortMap=4
It should work like this: DiskIdxMap makes the first SATA controller start from disk 8, the second from disk 12, and so on.
SataPortMap limits the number of disks a SATA controller can have (VMware's virtual SATA controller has 32 ports!).
Then SasIdxMap makes the SAS controller start from disk 0; as a bonus, this option should give you stable SAS disk names.
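jun's description reads as if DiskIdxMap is a string of two-hex-digit bytes, one per SATA controller, each giving that controller's zero-based starting disk index. A quick sketch of that reading (the helper name is illustrative, not part of the loader):

```python
# DiskIdxMap is a concatenation of two-hex-digit bytes, one per SATA
# controller, giving the zero-based disk index each controller starts at.
def decode_disk_idx_map(value: str) -> list[int]:
    return [int(value[i:i + 2], 16) for i in range(0, len(value), 2)]

print(decode_disk_idx_map("080C"))  # [8, 12] -> controllers start at disks 8 and 12
print(decode_disk_idx_map("0C"))    # [12]
```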

6 minutes ago, jun said:


 

Thanks for the info!  I still haven't been able to get my setup to enumerate disks starting at 1 (disk 1 in the GUI; /dev/sda from the command line).  Specs:


 

ESXi 6.5.0 Update 2 (Build 8294253)

PowerEdge R320

Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz

 

The VM has an LSI SAS9200-8e passed through, connected to a SA120 enclosure with 9 of 12 disks populated.

 

 

My grub.cfg line:

 


set sata_args='sata_uid=1 sata_pcislot=3 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1 SasIdxMap=0'

 

I changed sata_pcislot as an experiment, because I noticed the PCI slot number changed between my 6.1.x VMs (EFI Boot) and this VM (BIOS boot).

 

6.1.x VM is showing:


 

lspci -s 5

0000:02:05.0 Class 0106: Device 15ad:07e0

 

 

While this VM is showing:


 

lspci -s 3

0000:02:03.0 Class 0106: Device 15ad:07e0

0001:00:03.0 Class 0000: Device 8086:6f08 (rev ff)

0001:00:03.2 Class 0000: Device 8086:6f0a (rev ff)

 

 

The setup is otherwise the same, so my best guess is that this is just how VMware lays out its DSDT/ACPI tables under the two firmware options.  Maybe the order in which I added devices when I originally created the VM altered things, though.  Either way, I didn't notice a difference between sata_pcislot=5 and sata_pcislot=3.

 

I had to increase internalportcfg in synoinfo.conf to see all the disks.  You can see the 8-disk gap in the UI:

 

[Screenshot: DSM Storage Manager showing the 8-disk gap]

 

Or in the shell:


 

sudo fdisk -l | grep Disk\ /dev/sd

Disk /dev/sdj: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

Disk /dev/sdk: 447.1 GiB, 480103981056 bytes, 937703088 sectors

Disk /dev/sdl: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

Disk /dev/sdn: 447.1 GiB, 480103981056 bytes, 937703088 sectors

Disk /dev/sdo: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

Disk /dev/sdq: 447.1 GiB, 480103981056 bytes, 937703088 sectors

Disk /dev/sdr: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

Disk /dev/sdt: 447.1 GiB, 480103981056 bytes, 937703088 sectors

Disk /dev/sdi: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

 

 

The VM is pretty minimally configured:

 

[Screenshot: VM hardware configuration]

 

I don't see anything glaringly obvious in dmesg or early boot from the serial port.  Happy to paste those your way if you want to take a peek, though!  At this point I'm at the limits of my (admittedly poor) XPE knowledge, so just kind of hoping the EFI-fixed version makes my 6.1 VMs behave the same with a 6.2 install...

1 hour ago, autohintbot said:

  


Since you are using an external SAS enclosure, which is a setup I've never experimented with before, here is my educated guess:

Your issue is caused by my implementation of stable SAS disk naming: the disk name is simply derived from the SAS remote phy id plus SasIdxMap as the start position.

I think the ports directly connected to the HBA card are 0-7, and the ports on the expander inside your enclosure will start from 8.

You can try setting SasIdxMap=0xfffffff8, which is -8, to move the start position to 0.

If that does not work, simply remove the SasIdxMap option; SAS disk names should then occupy unused slots as before.
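The 0xfffffff8 value works because SasIdxMap is evidently read as a 32-bit quantity, so a negative offset is written in two's complement. A small sanity check of the arithmetic (helper name is mine):

```python
# Interpret a 32-bit unsigned value as a signed integer (two's complement).
def to_signed32(value: int) -> int:
    return value - (1 << 32) if value >= (1 << 31) else value

print(to_signed32(0xfffffff8))      # -8
# First enclosure phy id 8 plus the -8 offset lands on disk name index 0.
print(8 + to_signed32(0xfffffff8))  # 0
```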

 

57 minutes ago, jun said:


 

 

I will try this out and report back!  FYI, this external enclosure works correctly with the DSM 6.1 loader.  The big difference between the two is the EFI->BIOS change.  I'll try to get some info on whether changing the 6.1 loader to BIOS causes the same symptoms or not (it's a little bit of a pain with real disks, because it'll sever RAID groups and require a rebuild to fix).

 

EDIT:  Using SasIdxMap=0xfffffff8 did the trick, thanks!

 

[Screenshot: Storage Manager with disks now starting at disk 1]

Edited by autohintbot

6 hours ago, jun said:


@jun If using the SasIdxMap option, will this order the disks according to their physical slot positions instead of which one is detected first at boot?  By this I mean: presently, disk 1 may be in slot 1 of the enclosure with an LSI 9211-8i (IT) HBA, but in DSM it could show as disk 6.  Does this fix that, so that if I insert a disk into slot 8, it will show there even if it's the only disk?


Thank you very much jun😍

Working on a bare-metal AMD system. 😎

(AMD Athlon X2 250,GA-MA78GM-DS2H)

 

Edit: The first installation was successful, but after every reboot the Synology Web Assistant shows up and forces me to migrate or reinstall DSM 6.2.

 

Edited by withwolf1987


Outcome of the installation/update: SUCCESSFUL for 6.2u2

- DSM version prior update: New installation

- Loader version and model: JUN'S LOADER v1.03b - DS3617

- Using custom extra.lzma: NO

- Installation type: VMware Fusion 10.1

- Additional comments: Jun's loader .img converted to a .vmdk and attached as a SATA drive; legacy boot

Edited by fitman


OK So....

- Outcome of the installation / update Successful for DSM 6.2-23739 Update 2

- Loader version and model: JUN'S LOADER v1.03b - DS3617xs

- Using custom extra.lzma: NO

- Installation type: Baremetal - Intel Core I7 Bloomfield, Gigabyte GA-EX58-EXTREME Motherboard.

- Additional comments: Had Baremetal AMD running DSM 6.1.7-15284 Update 2 - Working Fine.

 

So I have just rebuilt my main PC, and the system above is the result of leftovers and the PSU from the AMD system.

The Intel DSM 6.2 system is running as I type this and is scrubbing some test hard drives, but the only way to get it to boot with 5 hard drives was to start with only 1 or 2 installed (still with random results), then hot-plug the additional drives.  If I try to boot with more than 2-3 drives, Jun's boot loader starts, begins reading the HDDs, and then the machine resets.

 

Even in the beginning, getting the latest DSM version installed took 2 or 3 reboots before the system was up and I could log in.

 

Once I get past the random rebooting, all is well so far and the DSM systems work as they should.

 

Any ideas on this would be helpful.

 

Tried so far: loaders v1.03b (DS3615xs, DS3617xs) and v1.02b (DS3617xs).

 

Just looked, and in DSM there are logs of "System booted up from an improper shutdown."

Edited by QuickSwitch


After the upgrade, my volume (RAID 6 with 6x3TB plus RAID 1 with 2x256GB SSD) showed as empty and the whole DSM UI hung in random places.

I removed the SSD cache and the data came back.

On 8/4/2018 at 10:02 AM, dodo-dk said:

 

Hi, with Proxmox 5.2-6 it is not working.

After the screen message "Screen will stop updating shortly,...", I get a "mount failed" error on the serial console and nothing happens.

 

Same issue here: Proxmox 5.2-3 with the DS3617 boot loader.  The 6.1 boot loader works without a problem.

On 8/3/2018 at 7:47 AM, arejaytee said:

- Outcome of the installation/update: SUCCESSFUL

- DSM version prior update: DSM 6.2-23739 UPDATE 1

- Loader version and model: JUN'S LOADER v1.03b - DS3617

- Using custom extra.lzma: NO

- Installation type: BAREMETAL - HP Microserver GEN 8 - Intel(R) Xeon(R) CPU E3-1220L V2 @ 2.30GHz 

- Additional comments: Blank install, then updated

 

Very much appreciated @jun

Did you have any problem with HDDs not being found after install? Did you have to make changes to the BIOS?

I am trying to install it on my Gen8 but am getting an "hdd not found" error.

Edited by niomar

19 minutes ago, niomar said:

Did younhave any problem with hdd not found after install? Did younhave to make changesnto bios?

i am trying to install it on my gen8 butt getting the error hdd not found

 

I disabled the onboard RAID card and set SATA to AHCI mode, from memory. Try disabling the onboard RAID first and see how you go.

14 minutes ago, CrazyCreator said:

@niomar

Can you check your posting

 

younhave???

changesnto???

 

and

butt??? ;-):) <= mean but I think

I am sorry, I was typing on an iPad, haha. Thanks!

9 minutes ago, arejaytee said:

 


Thank you for your response. How do you disable the RAID card? At the moment I don't have a hardware RAID configured.

5 hours ago, Benoire said:


Yes, that is what I mean by "stable SAS disk name".


Hi, I have an HP N40L running the old Jun loader on 3615.

Would I be able to update to 3617 and keep all my data intact? There is 12TB, with 8TB used, on an SHR RAID.

Thanks for any help!


- Outcome of the installation / update Successful for DSM 6.2-23739 Update 2

- Loader version and model: JUN'S LOADER v1.03b - DS3617xs

- Using custom extra.lzma: NO

- Installation type: Baremetal - Intel Core i5 3330 Ivy Bridge, Gigabyte GA-B75M-D3H Motherboard.

- Additional comments: -

3 hours ago, CrazyCreator said:

@niomar + @arejaytee

Please report if that worked. I have the same hardware and would like to know how it works.

Hello! Changing the SATA option in the BIOS to "Enable SATA AHCI Support" did the trick!

Thanks @arejaytee!

13 hours ago, jun said:


 

 

Is it possible to have a SasIdxMap= value for a -1 disk offset?

Thanks Jun :)
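Assuming SasIdxMap really is parsed as an unsigned 32-bit number (as jun's 0xfffffff8 = -8 example suggests), any negative offset can be encoded the same way, so -1 would come out as 0xffffffff. A sketch of that encoding (the function name is mine, for illustration):

```python
# Encode a signed start offset as the 32-bit hex value SasIdxMap appears
# to expect, per jun's 0xfffffff8 == -8 example.
def sas_idx_map(offset: int) -> str:
    return hex(offset & 0xffffffff)

print(sas_idx_map(-1))  # 0xffffffff
print(sas_idx_map(-8))  # 0xfffffff8
```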

