autohintbot Posted March 8, 2019 Share #1

Intro/Motivation

This seems like hard-to-find information, so I thought I'd write up a quick tutorial. I'm running XPEnology as a VM under ESXi, now with a 24-bay Supermicro chassis. The Supermicro world has a lot of similar-but-different options. In particular, I'm running an E5-1225v3 Haswell CPU with 32GB memory on an X10SLM+-F motherboard, in a 4U chassis with a BPN-SAS2-846EL1 backplane. This means all 24 drives are connected to a single LSI 9211-8i based HBA, flashed to IT mode. That should be enough Google-juice to find everything you need for a similar setup!

The various Jun loaders default to 12 drive bays (3615/3617 models) or 16 drive bays (DS918+). If you increase maxdisks after install, this presents a problem when you update: the setting reverts, so you either have to design your volumes around those default numbers (so whole volumes drop off cleanly after an update until you re-apply the settings), or just deal with volumes being sliced and check integrity afterwards. Since my new hardware supports the 4.x kernel, I wanted to use the DS918+ loader, but update the patching so that 24 drive bays is the new default. Here's how. Or just grab the files attached to the post.

Locating extra.lzma/extra2.lzma

This tutorial assumes you've messed with the synoboot.img file before. If not, a brief guide on mounting:

- Install OSFMount
- Click the "Mount new" button and select synoboot.img
- On the first dialog, choose "Use entire image file"
- On the main settings dialog, select the "Mount all partitions" radio button under volume options, and uncheck "Read-only drive" under mount options
- Click OK

You should now have three new drives mounted. Exactly where will depend on your system, but if you had a C/D drive before, probably E/F/G. The first readable drive has an EFI/grub.cfg file; this is what you usually customize, e.g. for the serial number. The second drive should have an extra.lzma and an extra2.lzma file, alongside some other things. Copy these somewhere else.
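If you'd rather do the mounting step on Linux instead of OSFMount, a loop device works too. This is only a sketch under assumptions: the partition layout (first partition holds EFI/grub.cfg, second holds extra.lzma/extra2.lzma) is inferred from the description above, and the mount points are arbitrary names.

```shell
# Sketch: mount synoboot.img on Linux instead of OSFMount (needs root).
# Assumed layout: partition 1 = grub.cfg partition, partition 2 = extra.lzma partition.
sudo losetup -Pf --show synoboot.img      # prints the loop device, e.g. /dev/loop0
sudo mkdir -p /mnt/synoboot1 /mnt/synoboot2
sudo mount /dev/loop0p1 /mnt/synoboot1    # EFI/grub.cfg lives here
sudo mount /dev/loop0p2 /mnt/synoboot2    # extra.lzma / extra2.lzma live here
cp /mnt/synoboot2/extra.lzma /mnt/synoboot2/extra2.lzma ~/work/
# ...after editing, copy the files back, then clean up:
sudo umount /mnt/synoboot1 /mnt/synoboot2
sudo losetup -d /dev/loop0
```

The `-P` flag asks losetup to scan the partition table and create the `loop0p1`/`loop0p2` device nodes, which is what lets you mount the image's partitions individually.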
Unpacking, Modifying, Repacking

To be honest, I don't know why the patch exists in both of these files. Maybe one is applied during updates and one at normal boot time? I never looked into it. But the patch that's applied to the max disks count exists in both files, so we'll need to unpack them first. Some of these tools exist on macOS, and likely as Windows ports, but I just did this on a Linux system. Spin up a VM if you need. On a fresh system you likely won't have lzma or cpio installed, but apt-get should suggest the right packages.

Copy extra.lzma to a new, temporary folder, then run:

lzma -d extra.lzma
cpio -idv < extra

In the new ./etc/ directory, you should see:

jun.patch
rc.modules
synoinfo_override.conf

Open up jun.patch in the text editor of your choice. Search for maxdisks. There should be two instances: one in the patch delta up top, and one in a larger script below. Change the 16 to 24. Search for internalportcfg. Again, two instances. Change the 0xffff to 0xffffff for 24 drives. This is a bitmask; more info elsewhere on the forums.

Open up synoinfo_override.conf. Change the 16 to 24, and 0xffff to 0xffffff.

To repack, in a shell at the root of the extracted files, run:

(find . -name modprobe && find . \! -name modprobe) | cpio --owner root:root -oH newc | lzma -8 > ../extra.lzma

Note that the resulting file sits one directory up (../extra.lzma). Repeat the same steps for extra2.lzma.

Preparing synoboot.img

Just copy the updated extra/extra2.lzma files back where they came from, mounted under OSFMount. While you're in there, you might need to update grub.cfg, especially if this is a new install. For the hardware mentioned at the very top of the post, with a single SAS expander providing 24 drives, where synoboot.img is a SATA disk for a VM under ESXi 6.7, I use these sata_args:

# for 24-bay sas enclosure on 9211 LSI card (i.e. 24-bay supermicro)
set sata_args='DiskIdxMap=181C SataPortMap=1 SasIdxMap=0xfffffff4'

Close any explorer windows or text editors, and click "Dismount all" in OSFMount. This image is ready to use.

If you're using ESXi and having trouble getting the image to boot, you can attach a network serial port to telnet in and see what's happening at boot time. You'll probably need to disable the ESXi firewall temporarily, or open port 23. It's super useful. Be aware that the 4.x kernel no longer supports extra hardware, so the network card will have to be officially supported. (I gave the VM a real network card via hardware passthrough.)

Attached Files

I attached extra.lzma and extra2.lzma to this post. They are both from Jun's loader 1.04b, with the above procedure applied to change the default drive count from 16 to 24.

extra2.lzma
extra.lzma
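The two edits described above are mechanical enough to script. The sketch below runs against a scratch copy, so it's safe to try anywhere; the exact quoting inside the real jun.patch and synoinfo_override.conf may differ from this demo fixture, so check the files before pointing sed at them. It also shows why 24 bays means 0xffffff: internalportcfg is a bitmask with one bit per internal port.

```shell
# Demo of the 16 -> 24 edits against a scratch file (the real files are
# etc/jun.patch and etc/synoinfo_override.conf in the unpacked tree;
# the quoting here is an assumption, verify it against your copies).
mkdir -p /tmp/extra-demo/etc
printf 'maxdisks="16"\ninternalportcfg="0xffff"\n' > /tmp/extra-demo/etc/synoinfo_override.conf

# internalportcfg has one bit set per internal port:
# 16 ports -> 0xffff, 24 ports -> 0xffffff
printf '24 bays -> 0x%x\n' $(( (1 << 24) - 1 ))    # prints: 24 bays -> 0xffffff

# Apply both substitutions in place.
sed -i -e 's/maxdisks="16"/maxdisks="24"/' \
       -e 's/internalportcfg="0xffff"/internalportcfg="0xffffff"/' \
       /tmp/extra-demo/etc/synoinfo_override.conf
cat /tmp/extra-demo/etc/synoinfo_override.conf
```

On a real tree you'd run the same sed expressions against both etc/jun.patch and etc/synoinfo_override.conf, then repack as shown above.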
kaktuss77 Posted March 25, 2019 Share #2

Thanks for sharing. I have tried something similar for my Jun's v1.03b 3617xs loader, to make it support 16 disks. Can you check my extra.lzma file? Thanks

extra16disks-1.03b.lzma
thiagocrepaldi Posted June 9, 2019 Share #3

Great tutorial. I am about to buy a 24-bay Supermicro and am planning how to install DSM on the new hardware. I will probably use ESXi 6.7 and create a VM for XPEnology. I will most likely use an LSI 9211-8i like you, but I read somewhere that ESXi passthrough has a 16-device limit, which caught me by surprise. Looking at your description, it seems you managed to pass through all 24 disks, right? Regarding your tutorial, should I do the same procedure every time I upgrade DSM, or just when I upgrade the boot loader?
autohintbot (Author) Posted June 9, 2019 Share #4

10 hours ago, thiagocrepaldi said:
I read somewhere that ESXi passthrough has a 16-device limit, which caught me by surprise. Looking at your description, it seems you managed to pass through all 24 disks, right? Regarding your tutorial, should I do the same procedure every time I upgrade DSM, or just when I upgrade the boot loader?

You'll only need to do the procedure when you update the bootloader (which is a pretty rare event). It does look like there is a 16-device limit per VM for passthrough, but this setup only passes through a single device: the LSI controller. ESXi isn't managing the individual hard drives, and has no knowledge they even exist once the LSI card is passthrough-enabled. You would need some pretty esoteric hardware to even have 16 PCIe devices available for passthrough.
thiagocrepaldi Posted June 9, 2019 Share #5 (edited)

8 hours ago, autohintbot said:
This is only passing through a single device, though: the LSI controller. ESXi isn't managing the individual hard drives, and has no knowledge they even exist once the LSI card is passthrough-enabled.

Thanks for the (quick) reply! I know that a single LSI 9211-8i board is capable of connecting 24 HDDs (and more)... I was wondering why a seller[1] would offer a similar setup with 3 of them. What scenarios would that cover that a single card wouldn't? That setup probably includes a BPN-SAS-846A backplane[2], which, as far as I know, has 6 iPass connections to the HBA card, right?

[1] https://www.theserverstore.com/SuperMicro-848A-R1K62B-w-X9QRi-F-24x-LFF-Server
[2] https://www.supermicro.com/manuals/other/BPN-SAS-846A.pdf

Edited June 9, 2019 by thiagocrepaldi (new information)
autohintbot (Author) Posted June 9, 2019 Share #6

30 minutes ago, thiagocrepaldi said:
I was wondering why a seller[1] would offer a similar setup with 3 of them. What scenarios would that cover that a single card wouldn't? That setup probably includes a BPN-SAS-846A backplane[2], which, as far as I know, has 6 iPass connections to the HBA card, right?

I'm not a Supermicro expert by any means, but my understanding is that any backplane whose model ends in A is a direct-attach backplane. It'll have 24 individual SAS/SATA connectors. You'll see three cards with six total x4 forward breakout cables (3 cards * 2 ports * 4 breakout lanes per port = 24 drives, but literally as 24 fanned-out cables). The E backplanes have built-in SAS expanders. I have two Supermicro 4U servers here with BPN-SAS2-846EL1 backplanes; they support larger drives, still with a single connector each. The EL2 backplanes have two expanders, which I guess is there for performance reasons, with 12 bays per expander. That's likely an issue only if you're running a lot of SSDs to back a lot of VMs. I don't have any issues saturating 10gbit for my SSD volumes here.
merve04 Posted June 14, 2020 Share #7 (edited)

On 3/8/2019 at 12:46 PM, autohintbot said:
...in a 4U chassis using a BPN-SAS2-846EL1 backplane. This means all 24 drives are connected to a single LSI 9211-8i based HBA, flashed to IT mode.

I know this is an old post, but how in the world do you get 24 HDDs on one 9211-8i card?! I have one of these cards and I was under the impression 8 was the max?

Edited June 14, 2020 by merve04
IG-88 Posted June 15, 2020 Share #8

47 minutes ago, merve04 said:
I know this is an old post, but how in the world do you get 24 HDDs on one 9211-8i card?! I have one of these cards and I was under the impression 8 was the max?

"SAS expander" is the key phrase here. You can read http://www.sasexpanders.com or google it.
merve04 Posted June 15, 2020 Share #9

39 minutes ago, IG-88 said:
"SAS expander" is the key phrase here. You can read http://www.sasexpanders.com or google it.

Are you able to share an example? I've read the site and googled "SAS expander", but that just led to different types of SAS cards. My question was: how does one use a 9211-8i card, which only has two 8087 ports, and accommodate 24 or more hard drives?
merve04 Posted June 15, 2020 Share #10 Posted June 15, 2020 (edited) This is what I would get? I guess take a 8087 to 8088 cable to feed the 9211-8i to this card and then i have 8 more 8087 ports?? Why the need for PCI-e again? I only have 1 16x pci-e slot which is used by the lsi card. Edited June 15, 2020 by merve04 Quote Link to comment Share on other sites More sharing options...
IG-88 Posted June 15, 2020 Share #11

SAS expanders are more often seen as backplanes, like this:
https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/SAS-expander-backplane-reduce-to-direct-SATA-mode/td-p/5068904

Maybe think of an expander like a network switch, connecting the server's HBA to a lot more devices. SAS is not a bus like SCSI; it's a single point-to-point connection, but as every device is unique (WWN), you can address them like in a network.
yuyuko Posted June 21, 2020 Share #12

https://gugucomputing.wordpress.com/2018/11/11/experiment-on-sata_args-in-grub-cfg/

Sorry to ask here. From that blog I learned that we can change which SATA controllers are supported, but I still don't know how to deal with the disk sequence. I want to ask: is it possible to change the motherboard SATA controller's disk sequence, so instead of the default 1-6 it uses 7-12? My motherboard has 6 SATA ports by default, and the second SATA controller is a JMB585 with 5 SATA ports. I want the disk sequence to be 1-5 (JMB585) and 6-11 (motherboard SATA ports 0-5). I've tested:

SasIdxMap=0 DiskIdxMap=0005 SataPortMap=56
SasIdxMap=0 DiskIdxMap=0500 SataPortMap=65
SasIdxMap=0 DiskIdxMap=05 SataPortMap=65

but none of them work.
merve04 Posted June 21, 2020 Share #13

On 6/15/2020 at 1:48 PM, IG-88 said:
Maybe think of an expander like a network switch, connecting the server's HBA to a lot more devices.

I get what is being laid down here, but I've already got a 16-bay hot-swap rackmount case, and there's a ridiculous amount of room inside. I've seen setups where people just populate a row of HDDs standing up inside the case. This is where I'm trying to figure out how to add more HDDs with my existing LSI card.
IG-88 Posted June 21, 2020 Share #14

2 hours ago, merve04 said:
just populate a row of HDDs standing up inside the case.

But they use 5-port SATA port-multiplier backplanes for that.

2 hours ago, merve04 said:
But this is where I'm trying to figure out how to add more HDDs with my existing LSI card.

You could use one of the cards you had a picture of, but I guess it might be as expensive as an additional LSI controller; it would be easier to just use a 2nd controller. Also keep in mind that there are problems going beyond 24 drives; you should read up on that and test before going beyond 24 drives with XPEnology. In business environments you will usually see SAS enclosures as separate units; maybe there are some old ones to get cheap.
yuyuko Posted June 22, 2020 Share #15 (edited)

11 hours ago, yuyuko said:
I want the disk sequence to be 1-5 (JMB585) and 6-11 (motherboard SATA ports 0-5).

After 4 tests:

No. 1: doesn't work; default disk sequence.
set extra_args_3617='SasIdxMap=0 DiskIdxMap=0005 SataPortMap=65'
set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 SasIdxMap=0 DiskIdxMap=0005 SataPortMap=56'

No. 2: doesn't work; default disk sequence.
set extra_args_3617='SasIdxMap=0 DiskIdxMap=0005 SataPortMap=65'
set sata_args='SasIdxMap=0 DiskIdxMap=0005 SataPortMap=56'

No. 3: works; motherboard SATA controller disks are sequenced 6-11, JMB585 disks 1-5.
set extra_args_3617='SasIdxMap=0 DiskIdxMap=0500 SataPortMap=65'
set sata_args='SasIdxMap=0 DiskIdxMap=0500 SataPortMap=65'

No. 4: doesn't work, and DSM disconnects.
set extra_args_3617='SasIdxMap=0 DiskIdxMap=05 SataPortMap=65'
set sata_args='SasIdxMap=0 DiskIdxMap=05 SataPortMap=65'

Still uncertain: the settings sata_uid=1, sata_pcislot=5, and synoboot_satadom=1 only appear in the DS3617 grub.cfg; DS918 doesn't have them. What I am sure of is that adding those three settings to No. 3 takes it back to the default disk sequence. I don't know what these three settings mean to DSM.

Edited June 22, 2020 by yuyuko
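For what it's worth, the No. 3 result above is consistent with how DiskIdxMap is usually described on these forums: a string of hex byte pairs, one per controller in enumeration order, each giving the zero-based starting disk index for that controller. A commented version of the working line follows; the annotations are my reading, derived from the gugucomputing post linked above and the No. 3 result, not from any official documentation.

```shell
# DiskIdxMap=0500 reads as two hex bytes, one per controller:
#   0x05 -> first controller  (onboard, 6 ports) starts at disk index 5 (slots 6-11)
#   0x00 -> second controller (JMB585, 5 ports)  starts at disk index 0 (slots 1-5)
# SataPortMap=65 similarly gives ports per controller: 6 for the first, 5 for the second.
set extra_args_3617='SasIdxMap=0 DiskIdxMap=0500 SataPortMap=65'
```

Under that reading, swapping the two bytes (DiskIdxMap=0005) would put the onboard controller first, which matches the default ordering yuyuko saw in tests No. 1 and No. 2.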
flyride Posted June 22, 2020 Share #16

Only modify extra_args. Leave the other settings (including sata_args) alone.
SteinerKD Posted July 9, 2020 Share #17

This sounds interesting. Is there a good guide or explanation of how/where to set extra_args_3617? I will have the 6 motherboard SATA ports and an 8-port LSI HBA (might add a second), and would prefer to let the LSI ports take priority and use the motherboard ones last (16+6=22, so still under the 24-disk threshold). For starters, though, I would like to put the 8 LSI ports first, then the 6 Intel ones. (I also saw the guide for setting these numbers as the default, but will have to read it a few more times to figure it out. I've previously edited the settings in the running machine, but that resets with updates.)
SteinerKD Posted July 10, 2020 Share #18

OK, this has got me beaten for now. I created a new Ubuntu 20 machine just for this task, got extra.lzma in, copied it to an empty folder, and unpacked it per the instructions. I go into the new etc dir but only see:

jun.patch
rc.modules

Not synoinfo_override.conf. Opening jun.patch, there is no "maxdisks" or "internalportcfg" anywhere in the file (searching for them or stepping through manually). The difference for me is that it's an extra.lzma from 1.03b/DS3617xs. Does this guide not work for it?
flyride Posted July 10, 2020 Share #19 (edited)

<deleted>

Edited July 10, 2020 by flyride
Cheesemeister Posted July 15, 2020 Share #20

I know this is an old thread, but... does anyone have the .lzma files, or just a compiled .img, with the DS918+ edited for 24 drives? I cannot for the life of me get my .img to boot once I've made the changes. I am 99% sure it has something to do with how I recompile everything once I make the edits. Would really appreciate it!
merve04 Posted September 14, 2020 Share #21

On 7/14/2020 at 8:20 PM, Cheesemeister said:
Does anyone have the .lzma files, or just a compiled .img, with the DS918+ edited for 24 drives?

autohintbot attached, in the first post of this thread, the two files you need to replace on your USB key in order to get 24 drives.
aniel Posted January 22, 2021 Share #22

@merve04 is it possible to go beyond 24 HDDs?
merve04 Posted January 23, 2021 Share #23

On 1/21/2021 at 7:25 PM, aniel said:
@merve04 is it possible to go beyond 24 HDDs?

Possibly,
IG-88 Posted January 23, 2021 Share #24

2 hours ago, merve04 said:
Possibly,

:-))), and again:
https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=122819
aniel Posted January 30, 2021 Share #25 (edited)

On 1/23/2021 at 4:17 PM, IG-88 said:
https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=122819

Is it possible to go beyond 24 disks with Jun's loader? I have tried it, but it goes into recovery with the ability to recover?

Edited January 30, 2021 by aniel