autohintbot

Tutorial - Modifying 1.04b Loader to Default to 24 Drives


Intro/Motivation

 

This seems like hard-to-find information, so I thought I'd write up a quick tutorial.  I'm running XPEnology as a VM under ESXi, now with a 24-bay Supermicro chassis.  The Supermicro world has a lot of similar-but-different options.  In particular, I'm running an E5-1225v3 Haswell CPU, with 32GB memory, on an X10SLM+-F motherboard in a 4U chassis using a BPN-SAS2-846EL1 backplane.  This means all 24 drives are connected to a single LSI 9211-8i based HBA, flashed to IT mode.  That should be enough Google-juice to find everything you need for a similar setup!

 

The various Jun loaders default to 12 drive bays (3615/3617 models) or 16 drive bays (DS918+).  This presents a problem if you increase maxdisks after install, because updates reset it: you either design your volumes around the default counts, so that whole volumes drop off cleanly after an update until you re-apply the settings, or you deal with volumes being sliced and check their integrity afterwards.

 

Since my new hardware supports the 4.x kernel, I wanted to use the DS918+ loader, but update the patching so that 24 drive bays would be the new default.  Here's how.  Or, just grab the files attached to the post.

 

Locating extra.lzma/extra2.lzma

 

This tutorial assumes you've messed with the synoboot.img file before.  If not, a brief guide on mounting:

 

  • Install OSFMount
  • Click the "Mount new" button and select synoboot.img
  • On the first dialog, choose "Use entire image file"
  • On the main settings dialog, select the "Mount all partitions" radio button under volume options and uncheck "Read-only drive" under mount options
  • Click OK

 

You should now have three new drives mounted.  Exactly where will depend on your system, but if you had C and D drives before, they'll probably show up as E, F, and G.

 

The first readable drive has an EFI/grub.cfg file.  This is the file you usually customize, e.g. for the serial number.

 

The second drive should have extra.lzma and extra2.lzma files, alongside some other things.  Copy these somewhere else.

 

Unpacking, Modifying, Repacking

 

To be honest, I don't know why the patch exists in both of these files.  Maybe one is applied during updates, one at normal boot time?  I never looked into it.

 

But the patch that changes the max disk count lives in these files, so we'll need to unpack them first.  Some of these tools exist on macOS, and there are likely Windows ports, but I just did this on a Linux system.  Spin up a VM if you need to.  On a fresh system you likely won't have lzma or cpio installed, but apt-get should suggest the right packages.
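
For reference, on a Debian/Ubuntu system the needed tools should come from the xz-utils and cpio packages (just a sketch; package names may differ on other distributions):

sudo apt-get install xz-utils cpio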

 

Copy extra.lzma to a new, temporary folder.  Run:

lzma -d extra.lzma

cpio -idv < extra

In the new ./etc/ directory, you should see:

 

jun.patch

rc.modules

synoinfo_override.conf

 

  • Open up jun.patch in the text editor of your choice.
    • Search for maxdisks.  There should be two instances--one in the patch delta up top, and one in a larger script below.  Change the 16 to a 24.
    • Search for internalportcfg.  Again, two instances.  Change the 0xffff to 0xffffff.  This is a bitmask with one bit per drive slot, so 16 slots is 0xffff (16 ones) and 24 slots is 0xffffff (24 ones)--more info elsewhere on the forums.
  • Open up synoinfo_override.conf.
    • Change the 16 to a 24 and the 0xffff to 0xffffff here as well (see the example just below this list).
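
For reference, a rough sketch of what the edited lines in synoinfo_override.conf should look like afterwards (the exact set of keys and quoting in your copy may differ):

maxdisks="24"
internalportcfg="0xffffff"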

 

To repack, in a shell at the root of the extracted files, run:

(find . -name modprobe && find . \! -name modprobe) | cpio --owner root:root -oH newc | lzma -8 > ../extra.lzma

Note that the resulting file ends up one directory up (../extra.lzma).
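
If you want a quick sanity check before copying it back, something along these lines should print the patched value (assuming GNU cpio, which supports --to-stdout; the wildcard is there because the archive stores paths with a leading ./):

lzma -dc ../extra.lzma | cpio -i --to-stdout '*synoinfo_override.conf' | grep maxdisks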

 

Repeat the same steps for extra2.lzma.

 

Preparing synoboot.img

 

Just copy the updated extra/extra2.lzma files back where they came from, mounted under OSFMount.

 

While you're in there, you might need to update grub.cfg, especially if this is a new install.  For the hardware mentioned at the very top of the post, with a single SAS expander providing 24 drives, where synoboot.img is a SATA disk for a VM under ESXi 6.7, I use these sata_args:

# for 24-bay sas enclosure on 9211 LSI card (i.e. 24-bay supermicro)
set sata_args='DiskIdxMap=181C SataPortMap=1 SasIdxMap=0xfffffff4'
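
As I understand these flags (double-check against your own layout before relying on this): DiskIdxMap assigns a starting disk index per controller in hex, so 181C puts the first (virtual SATA) controller at slot 0x18 = 24 and a second at 0x1C = 28, keeping the synoboot disk out of the 24 data slots; SataPortMap=1 tells DSM the first SATA controller exposes a single port; and SasIdxMap applies an offset to the SAS drive numbering (0xfffffff4 is -12 as a signed 32-bit value).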

 

Close any Explorer windows or text editors, and click "Dismount all" in OSFMount.  This image is ready to use.

 

If you're using ESXi and having trouble getting the image to boot, you can attach a network serial port to the VM and telnet in to see what's happening at boot time.  You'll probably need to disable the ESXi firewall temporarily, or open port 23.  It's super useful.  Be aware that the 4.x kernel no longer supports extra hardware, so the network card will have to be one that's officially supported.  (I gave the VM a real network card via hardware passthrough.)
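
A minimal sketch of that setup, assuming you configure the VM's serial port as a network server with the URI telnet://:23 (any free port works; adjust to whatever you configured):

# on the ESXi host, temporarily drop the firewall (re-enable it afterwards)
esxcli network firewall set --enabled false

# from your workstation, connect to the VM's network serial port
telnet <esxi-host-ip> 23

# when you're done
esxcli network firewall set --enabled true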

 

Attached Files

 

I attached extra.lzma and extra2.lzma to this post.  They are both from Jun's Loader 1.04b with the above procedure applied to change the default drive count from 16 to 24.

extra2.lzma extra.lzma



Great tutorial!  I am about to buy a 24-bay Supermicro and am planning how to install DSM on the new hardware.  I will probably use ESXi 6.7 and create a VM for XPEnology.  I will most likely use an LSI 9211-8i like you, but I think I read somewhere that ESXi passthrough has a 16-device limit, which caught me by surprise.  Looking at your description, it seems you managed to pass through all 24 disks, right?

 

Regarding your tutorial, should I do the same procedure every time that I upgrade DSM or just when I upgrade the boot loader?

10 hours ago, thiagocrepaldi said:

Great tutorial!  I am about to buy a 24-bay Supermicro and am planning how to install DSM on the new hardware.  I will probably use ESXi 6.7 and create a VM for XPEnology.  I will most likely use an LSI 9211-8i like you, but I think I read somewhere that ESXi passthrough has a 16-device limit, which caught me by surprise.  Looking at your description, it seems you managed to pass through all 24 disks, right?

 

Regarding your tutorial, should I do the same procedure every time that I upgrade DSM or just when I upgrade the boot loader?

 

You'll only need to do the procedure when you update the bootloader (which is a pretty rare event).

 

It does look like there is a 16-device limit per VM with passthrough.  But this is only passing through a single device: the LSI controller.  ESXi isn't managing the individual hard drives, and has no knowledge they even exist once the LSI card is passthrough-enabled.  You would need some pretty esoteric hardware to even have 16 PCIe devices available for passthrough.


8 hours ago, autohintbot said:

 

You'll only need to do the procedure when you update the bootloader (which is a pretty rare event).

 

It does look like there is a 16-device limit per VM with passthrough.  But this is only passing through a single device: the LSI controller.  ESXi isn't managing the individual hard drives, and has no knowledge they even exist once the LSI card is passthrough-enabled.  You would need some pretty esoteric hardware to even have 16 PCIe devices available for passthrough.

Thanks for the (quick) reply!  I know that a single LSI 9211-8i board is capable of connecting 24 HDDs (and more)... I was wondering why a seller[1] would sell a similar setup with 3 of them.  What scenarios would that cover that a single card wouldn't?  That setup probably includes a BPN-SAS-846A backplane[2], which, as far as I know, has 6 iPass connections to the HBA card, right?

 

[1] https://www.theserverstore.com/SuperMicro-848A-R1K62B-w-X9QRi-F-24x-LFF-Server

[2] https://www.supermicro.com/manuals/other/BPN-SAS-846A.pdf


30 minutes ago, thiagocrepaldi said:

Thanks for the (quick) reply!  I know that a single LSI 9211-8i board is capable of connecting 24 HDDs (and more)... I was wondering why a seller[1] would sell a similar setup with 3 of them.  What scenarios would that cover that a single card wouldn't?  That setup probably includes a BPN-SAS-846A backplane[2], which, as far as I know, has 6 iPass connections to the HBA card, right?

 

[1] https://www.theserverstore.com/SuperMicro-848A-R1K62B-w-X9QRi-F-24x-LFF-Server

[2] https://www.supermicro.com/manuals/other/BPN-SAS-846A.pdf

 

I'm not a Supermicro expert by any means, but my understanding is that any backplane whose model ends in A is a direct-attach backplane.  It'll have 24 individual SAS/SATA connectors.  You'll see three cards with six total x4 forward breakout cables (so 3 cards * 2 ports * 4 breakouts per port = 24 drives, but literally as 24 fanned-out cables).

 

The E backplanes have built-in SAS expanders.  I have two Supermicro 4U servers here with BPN-SAS2-846EL1 backplanes.  They support larger drives, and still need only a single connector each.  The EL2 backplanes have two expanders, which I guess is there for performance reasons, with 12 bays per expander.  That's likely an issue only if you're running a lot of SSDs to back a lot of VMs.  I don't have any trouble saturating 10gbit with my SSD volumes here.
