
Tutorial - Modifying 1.04b Loader to Default to 24 Drives



Intro/Motivation

 

This seems like hard-to-find information, so I thought I'd write up a quick tutorial.  I'm running XPEnology as a VM under ESXi, now with a 24-bay Supermicro chassis.  The Supermicro world has a lot of similar-but-different options.  In particular, I'm running an E3-1225 v3 Haswell CPU with 32GB of memory on an X10SLM+-F motherboard, in a 4U chassis with a BPN-SAS2-846EL1 backplane.  This means all 24 drives are connected to a single LSI 9211-8i-based HBA, flashed to IT mode.  That should be enough Google-juice to find everything you need for a similar setup!

 

The various Jun loaders default to 12 drive bays (3615/3617 models) or 16 drive bays (DS918+).  If you increase maxdisks after install, this presents a problem when you update: either you design your volumes around the default numbers, so whole volumes drop offline after an update until you re-apply the settings, or you deal with volumes being sliced and have to check their integrity afterwards.

 

Since my new hardware supports the 4.x kernel, I wanted to use the DS918+ loader, but update the patching so that 24 drive bays is the new default.  Here's how.  Or just grab the files attached to this post.

 

Locating extra.lzma/extra2.lzma

 

This tutorial assumes you've messed with the synoboot.img file before.  If not, a brief guide on mounting:

 

  • Install OSFMount
  • "Mount new" button, select synoboot.img
  • On the first dialog, choose "Use entire image file"
  • On the main settings dialog, select the "Mount all partitions" radio button under volume options, and uncheck "Read-only drive" under mount options
  • Click OK

 

You should now have three new drives mounted.  Exactly where will depend on your system, but if you had C/D drives before, probably E/F/G.

 

The first readable drive has an EFI/grub.cfg file.  This is what you usually customize, e.g. for the serial number.

 

The second drive should have extra.lzma and extra2.lzma files, alongside some other things.  Copy these somewhere else.

 

Unpacking, Modifying, Repacking

 

To be honest, I don't know why the patch exists in both of these files.  Maybe one is applied during updates and one at normal boot time?  I never looked into it.

 

But the patch that's applied to the max disk count lives in these files, so we'll need to unpack them first.  Some of these tools exist on macOS, and likely have Windows ports, but I just did this on a Linux system.  Spin up a VM if you need to.  On a fresh system you likely won't have lzma or cpio installed, but apt-get should suggest the right packages.
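On Ubuntu, for instance, this is probably the pair you need (the package names are my assumption; apt will suggest the right ones if I've misremembered):

sudo apt-get install cpio xz-utils    # cpio unpacks the archive; xz-utils provides the lzma command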

 

Copy extra.lzma to a new, temporary folder.  Run:

lzma -d extra.lzma

cpio -idv < extra

In the new ./etc/ directory, you should see:

 

jun.patch

rc.modules

synoinfo_override.conf

 

  • Open up jun.patch in the text editor of your choice.
    • Search for maxdisks.  There should be two instances--one in the patch delta up top, and one in a larger script below.  Change the 16 to a 24.
    • Search for internalportcfg.  Again, two instances.  Change the 0xffff to 0xffffff for 24 drives.  This is a bitmask, one bit per drive slot--more info elsewhere on the forums, and see the quick check right after this list.
  • Open up synoinfo_override.conf.
    • Change the 16 to a 24, and the 0xffff to 0xffffff.
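As a quick sanity check on those masks: internalportcfg is one bit per drive slot, so N drives means N ones in binary.  Any shell will confirm the two values:

printf '0x%x\n' $(( (1 << 16) - 1 ))    # 16 drives -> 0xffff
printf '0x%x\n' $(( (1 << 24) - 1 ))    # 24 drives -> 0xffffff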

 

To repack, in a shell at the root of the extracted files, run:

(find . -name modprobe && find . \! -name modprobe) | cpio --owner root:root -oH newc | lzma -8 > ../extra.lzma

Note that the resulting file sits one directory up (../extra.lzma).
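Before copying the result back, you can list the repacked archive to confirm the edited files made it in--just a sanity check, not part of the original procedure:

lzma -dc ../extra.lzma | cpio -itv | grep -E 'jun\.patch|synoinfo_override'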

 

Repeat the same steps for extra2.lzma.

 

Preparing synoboot.img

 

Just copy the updated extra/extra2.lzma files back where they came from, mounted under OSFMount.

 

While you're in there, you might need to update grub.cfg, especially if this is a new install.  For the hardware mentioned at the very top of the post, with a single SAS expander providing 24 drives, where synoboot.img is a SATA disk for a VM under ESXi 6.7, I use these sata_args:

# for 24-bay sas enclosure on 9211 LSI card (i.e. 24-bay supermicro)
set sata_args='DiskIdxMap=181C SataPortMap=1 SasIdxMap=0xfffffff4'
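For what it's worth, here's how I read those values--pieced together from forum threads, so treat it as my best understanding rather than a definitive reference:

# SataPortMap=1         the first (virtual) SATA controller exposes just one port: the synoboot disk
# DiskIdxMap=181C       one hex byte per SATA controller giving its starting disk index--0x18 (=24)
#                       and 0x1C (=28)--which pushes the virtual disks past the 24 physical slots
# SasIdxMap=0xfffffff4  a signed 32-bit offset (-12) applied to SAS disk indexes, so the drives
#                       behind the HBA start numbering from the first slot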

 

Close any Explorer windows or text editors, and click "Dismount all" in OSFMount.  The image is ready to use.

 

If you're using ESXi and having trouble getting the image to boot, you can attach a network serial port and telnet in to see what's happening at boot time.  You'll probably need to disable the ESXi firewall temporarily, or open port 23.  It's super useful.  Be aware that the 4.x kernel no longer supports extra hardware, so the network card will have to be officially supported.  (I gave the VM a real network card via hardware passthrough.)
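If you haven't set one up before, the rough shape is as follows (menu labels vary by ESXi version, so treat this as a sketch): edit the VM, add a serial port backed by the network with direction "Server" and a port URI like telnet://:23, then connect from another machine:

telnet <esxi-host-ip> 23    # shows the loader's console output while the VM boots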

 

Attached Files

 

I attached extra.lzma and extra2.lzma to this post.  They are both from Jun's loader 1.04b, with the above procedure applied to change the default drive count from 16 to 24.

extra2.lzma extra.lzma



Great tutorial!  I am about to buy a 24-bay Supermicro and I am planning how to install DSM on the new hardware.  I will probably use ESXi 6.7 and create a VM for XPEnology.  I will most likely use an LSI 9211-8i like you, but I read somewhere that ESXi passthrough has a 16-device limit, which took me by surprise.  From your description, it seems you managed to pass through all 24 disks, right?

 

Regarding your tutorial, should I do the same procedure every time I upgrade DSM, or just when I upgrade the boot loader?


10 hours ago, thiagocrepaldi said:

 

You'll only need to do the procedure every time you update the bootloader (which is a pretty rare event).

 

It does look like there is a 16-device limit per VM with passthrough.  This is only passing through a single device, though:  The LSI controller.  ESXi isn't managing the individual hard drives, and has no knowledge they even exist once the LSI card is passthrough-enabled.  You would need some pretty esoteric hardware to even have 16 PCIe devices available for passthrough.


8 hours ago, autohintbot said:

Thanks for the (quick) reply!  I know that a single LSI 9211-8i board is capable of connecting 24 HDDs (and more)... I was wondering why a seller[1] would sell a similar setup with three of them.  What scenarios would that cover that a single card wouldn't?  That setup probably includes a BPN-SAS-846A backplane[2], which, as far as I know, has 6 iPass connections to the HBA card, right?

 

[1] https://www.theserverstore.com/SuperMicro-848A-R1K62B-w-X9QRi-F-24x-LFF-Server

[2] https://www.supermicro.com/manuals/other/BPN-SAS-846A.pdf


30 minutes ago, thiagocrepaldi said:

 

I'm not a Supermicro expert by any means, but my understanding is that any backplane ending in A is a direct-attach backplane.  It'll have 24 individual SAS/SATA connectors.  You'll see three cards with six total x4 forward-breakout cables (3 cards * 2 ports * 4 breakouts per port = 24 drives, but literally as 24 fanned-out cables).

 

The E backplanes have built-in SAS expanders.  I have two Supermicro 4U servers here with BPN-SAS2-846EL1 backplanes.  They support larger drives, and still use a single connector each.  The EL2 backplanes have two expanders, which I guess is there for performance reasons, with 12 bays per expander.  That's likely an issue only if you're running a lot of SSDs to back a lot of VMs.  I don't have any issues saturating 10gbit for my SSD volumes here.


On 3/8/2019 at 12:46 PM, autohintbot said:

This means all 24 drives are connected to a single LSI 9211-8i based HBA, flashed to IT mode.

 

I know this is an old post, but how in the world do you get 24 HDDs on one 9211-8i card?!  I have one of these cards and I was under the impression 8 was the max.


39 minutes ago, IG-88 said:

"SAS expander" is the key word here, you can read this http://www.sasexpanders.com

or google it

Are you able to share an example?  I've read the site, and I've googled "SAS expander", but that just leads to different types of SAS cards.  My question was: how does one use a 9211-8i card, which only has two 8087 ports, to accommodate 24 or more hard drives?


Is this what I would get?  I guess I'd take an 8087-to-8088 cable to feed the 9211-8i into this card, and then I'd have 8 more 8087 ports?  Why the need for PCIe again?

I only have one x16 PCIe slot, which is used by the LSI card.

[attached image: IMG_0211.jpg]


SAS expanders are more likely to be seen as backplanes, like this:

https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/SAS-expander-backplane-reduce-to-direct-SATA-mode/td-p/5068904

 

Maybe think of an expander like a network switch connecting the server HBA to a lot more devices.

SAS is not a bus like SCSI; it's a single point-to-point connection, but since every device is unique (WWN), you can address them like in a network.


https://gugucomputing.wordpress.com/2018/11/11/experiment-on-sata_args-in-grub-cfg/

Sorry to ask here.  From that blog I learned that we can change which SATA controllers are supported, but I still don't know how to deal with the disk sequence.
I want to ask: is it possible to change the motherboard SATA controller's disk sequence, so it's not the default 1-6 but 7-12 instead?
My motherboard has 6 SATA ports by default, and the second SATA controller is a JMB585 with 5 SATA ports.
I want the disk sequence to be 1-5 (JMB585) and 6-11 (motherboard SATA ports 0-5).
I've tested:
SasIdxMap=0 DiskIdxMap=0005 SataPortMap=56
SasIdxMap=0 DiskIdxMap=0500 SataPortMap=65
SasIdxMap=0 DiskIdxMap=05 SataPortMap=65
but none of them work.


On 6/15/2020 at 1:48 PM, IG-88 said:

I get what's being laid down here, but I've already got a 16-bay hot-swap rackmount case, and there's a ridiculous amount of room inside; I've seen builds where people just populate a row of HDDs standing up inside the case.  But this is where I'm trying to figure out how to add more HDDs with my existing LSI card.


2 hours ago, merve04 said:

just populate a row of HDDs standing up inside the case.

[image: blog-60-drives-ooh-aah.jpg]

 

but they use 5-port SATA port-multiplier backplanes

 

2 hours ago, merve04 said:

But this is where I'm trying to figure out how to add more HDDs with my existing LSI card.

 

You could use one of the cards you posted a picture of, but I guess it might be about as expensive as an additional LSI controller; it would be easier to just use a second controller.

Also keep in mind that there are problems going beyond 24 drives; you should read up on that and do tests before going beyond 24 drives with XPEnology.

 

In business environments you will usually see SAS enclosures as separate units; maybe there are some old ones to be had cheap.

 

 


11 hours ago, yuyuko said:

After 4 tests:

No. 1: doesn't work, default disk sequence
set extra_args_3617='SasIdxMap=0 DiskIdxMap=0005 SataPortMap=65'
set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 SasIdxMap=0 DiskIdxMap=0005 SataPortMap=56'

No. 2: doesn't work, default disk sequence
set extra_args_3617='SasIdxMap=0 DiskIdxMap=0005 SataPortMap=65'
set sata_args='SasIdxMap=0 DiskIdxMap=0005 SataPortMap=56'

No. 3: works--motherboard SATA controller disk sequence 6-11, JMB585 disk sequence 1-5
set extra_args_3617='SasIdxMap=0 DiskIdxMap=0500 SataPortMap=65'
set sata_args='SasIdxMap=0 DiskIdxMap=0500 SataPortMap=65'

No. 4: doesn't work, and disconnects from DSM
set extra_args_3617='SasIdxMap=0 DiskIdxMap=05 SataPortMap=65'
set sata_args='SasIdxMap=0 DiskIdxMap=05 SataPortMap=65'

 

Now, the uncertain settings:
sata_uid=1
sata_pcislot=5
synoboot_satadom=1
These three settings only appear in the DS3617 grub.cfg; DS918 doesn't have them.
What is certain is that these three settings put the No. 3 config back to the default disk sequence.

I don't know what these three settings mean in DSM.



This sounds interesting.  Is there a good guide or explanation of how/where to set extra_args_3617?  I will have the 6 motherboard SATA ports and an 8-port LSI HBA (might add a second), and I would prefer to let the LSI ports take priority and use the motherboard ones last (16+6=22, so still under the 24-disk threshold).  For starters, though, I would like to put the 8 LSI ports first, then the 6 Intel ones.  (I also saw the guide for setting these numbers as the default, but I will have to read that a few more times to figure it out; I've previously edited them on the running machine, but that resets with updates.)


OK, this has got me beaten for now.
I created a new Ubuntu 20 machine just for the task.
Got the extra.lzma in and copied it to an empty folder.
Unpacked it as per the instructions.
I go into the new etc dir, but only see:
    jun.patch
    rc.modules
Not synoinfo_override.conf.
Opening jun.patch, there is no "maxdisks" or "internalportcfg" anywhere in the file (searching for them or stepping through manually).
The difference for me is that it's an extra.lzma from 1.03b/DS3617xs; does this guide not work for it?


I know this is an old thread, but...

Does anyone have the .lzma files, or just a compiled .img, with the DS918+ edited for 24 drives?  I cannot for the life of me get my .img to boot once I've made the changes.  I am 99% sure it has something to do with how I'm repacking everything after I make the edits.  Would really appreciate it!


On 7/14/2020 at 8:20 PM, Cheesemeister said:

autohintbot attached to the first post of this thread the two files you need to replace on your USB key to get 24 drives.


Is it possible to go beyond 24 disks with the Jun loader?  I have tried it, but it goes into recovery with the ability to recover?

