XPEnology Community

Posts posted by autohintbot

  1. 30 minutes ago, thiagocrepaldi said:

Thanks for the (quick) reply! I know that a single LSI 9211-8i board is capable of connecting 24 HDDs (and more)... I was wondering why a seller[1] would sell a similar setup with 3 of them. What are the scenarios that it would cover that a single one wouldn't? That setup probably includes a BPN-SAS-846A backplane[2], which, as far as I know, has 6 iPass connections to the HBA card, right?

     

    [1] https://www.theserverstore.com/SuperMicro-848A-R1K62B-w-X9QRi-F-24x-LFF-Server

    [2] https://www.supermicro.com/manuals/other/BPN-SAS-846A.pdf

     

I'm not a Supermicro expert by any means, but my understanding is that any backplane whose model number ends in A is a direct-attach backplane.  It'll have 24 individual SAS/SATA connectors.  You'll see three cards with six x4 forward breakout cables in total (3 cards * 2 ports per card * 4 breakout lanes per port = 24 drives, but literally as 24 fanned-out cables).

     

The E backplanes have built-in SAS expanders.  I have two Supermicro 4U servers here with BPN-SAS2-846EL1 backplanes.  They support larger drives, and still need only a single connector each.  The EL2 backplanes have two expanders, which I guess is there for performance reasons, with 12 bays per expander.  That's likely an issue only if you're running a lot of SSDs to back a lot of VMs.  I don't have any issues saturating 10gbit for my SSD volumes here.

  2. 10 hours ago, thiagocrepaldi said:

Great tutorial, I am about to buy a 24-bay Supermicro and I am planning how to install DSM on the new hardware. I will probably use ESXi 6.7 and create a VM for XPEnology. I will most likely use an LSI 9211-8i like you, but I think I read somewhere that ESXi passthrough has a 16-device limit, which caught me by surprise. Looking at your description, it seems you managed to pass through all 24 disks, right?

     

    Regarding your tutorial, should I do the same procedure every time that I upgrade DSM or just when I upgrade the boot loader?

     

You'll only need to repeat the procedure when you update the bootloader (which is a pretty rare event).

     

    It does look like there is a 16-device limit per VM with passthrough.  This is only passing through a single device, though:  The LSI controller.  ESXi isn't managing the individual hard drives, and has no knowledge they even exist once the LSI card is passthrough-enabled.  You would need some pretty esoteric hardware to even have 16 PCIe devices available for passthrough.

  3. Intro/Motivation

     

    This seems like hard-to-find information, so I thought I'd write up a quick tutorial.  I'm running XPEnology as a VM under ESXi, now with a 24-bay Supermicro chassis.  The Supermicro world has a lot of similar-but-different options.  In particular, I'm running an E5-1225v3 Haswell CPU, with 32GB memory, on a X10SLM+-F motherboard in a 4U chassis using a BPN-SAS2-846EL1 backplane.  This means all 24 drives are connected to a single LSI 9211-8i based HBA, flashed to IT mode.  That should be enough Google-juice to find everything you need for a similar setup!

     

The various Jun loaders default to 12 drive bays (3615/3617 models) or 16 drive bays (DS918+).  This presents a problem if you increase maxdisks after install, because updates reset it: you either have to design your volumes around the default numbers, so whole volumes drop off cleanly after an update until you re-apply the settings, or just deal with volumes being sliced and check their integrity afterwards.

     

    Since my new hardware supports the 4.x kernel, I wanted to use the DS918+ loader, but update the patching so that 24 drive bays was the new default.  Here's how.  Or, just grab the files attached to the post.

     

    Locating extra.lzma/extra2.lzma

     

    This tutorial assumes you've messed with the synoboot.img file before.  If not, a brief guide on mounting:

     

    • Install OSFMount
    • Click the "Mount new" button and select synoboot.img
    • On the first dialog, choose "Use entire image file"
    • On the main settings dialog, select the "Mount all partitions" radio button under volume options, and uncheck "Read-only drive" under mount options
    • Click OK
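
     

    If you'd rather skip OSFMount and poke at the image from a Linux box instead (for example the same VM used for repacking below), a loop mount should get you the same two partitions. This is an alternative sketch, not part of the original procedure; adjust the loop device to whatever losetup prints:

    sudo losetup -fP --show synoboot.img              # e.g. /dev/loop0, with partitions loop0p1 and loop0p2
    sudo mkdir -p /mnt/synoboot1 /mnt/synoboot2
    sudo mount /dev/loop0p1 /mnt/synoboot1            # grub.cfg lives under EFI/ here
    sudo mount /dev/loop0p2 /mnt/synoboot2            # extra.lzma and extra2.lzma live here
    # edit/copy files as described below, then clean up:
    sudo umount /mnt/synoboot1 /mnt/synoboot2 && sudo losetup -d /dev/loop0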

     

You should now have three new drives mounted.  Exactly where will depend on your system, but if you had a C/D drive before, probably E/F/G.

     

The first readable drive has an EFI/grub.cfg file.  This is the file you usually customize, e.g. to set the serial number.

     

The second drive should have extra.lzma and extra2.lzma files, alongside some other things.  Copy these somewhere else.

     

    Unpacking, Modifying, Repacking

     

    To be honest, I don't know why the patch exists in both of these files.  Maybe one is applied during updates, one at normal boot time?  I never looked into it.

     

    But the patch that's being applied to the max disks count exists in these files.  We'll need to unpack them first.  Some of these tools exist on macOS, and likely Windows ports, but I just did this on a Linux system.  Spin up a VM if you need.  On a fresh system you likely won't have lzma or cpio installed, but apt-get should suggest the right packages.
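
     

    On Debian/Ubuntu the packages below should cover it (package names are from memory; apt will point you at the right ones if they differ):

    sudo apt-get install xz-utils cpio    # xz-utils provides the lzma command; cpio is its own package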

     

    Copy extra.lzma to a new, temporary folder.  Run:

lzma -d extra.lzma    # decompress in place; leaves an uncompressed cpio archive named "extra"

    cpio -idv < extra     # extract the archive into the current directory

    In the new ./etc/ directory, you should see:

     

    jun.patch

    rc.modules

    synoinfo_override.conf

     

    • Open up jun.patch in the text editor of your choice.
      • Search for maxdisks.  There should be two instances--one in the patch delta up top, and one in a larger script below.  Change the 16 to a 24.
      • Search for internalportcfg.  Again, two instances.  Change the 0xffff to 0xffffff for 24.  This is a bitmask--more info elsewhere on the forums.
    • Open up synoinfo_override.conf.
• Change the 16 to 24 and the 0xffff to 0xffffff here as well (a sketch of the result is below)
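
     

    Here is roughly what that amounts to for synoinfo_override.conf, done with sed from the root of the extracted files. The key names come from jun.patch, but the exact quoting in the override file is an assumption, so check it in an editor rather than running these blindly; jun.patch itself I'd still edit by hand, since the same values also sit inside patch hunks. The last line just shows where the 0xffffff bitmask comes from (one bit per drive bay):

    sed -i 's/maxdisks="16"/maxdisks="24"/' etc/synoinfo_override.conf
    sed -i 's/internalportcfg="0xffff"/internalportcfg="0xffffff"/' etc/synoinfo_override.conf
    printf '0x%x\n' $(( (1 << 24) - 1 ))    # -> 0xffffff, i.e. 24 bits set for 24 bays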

     

    To repack, in a shell at the root of the extracted files, run:

# the two find passes put modprobe first in the archive; keep the ordering as-is
    (find . -name modprobe && find . \! -name modprobe) | cpio --owner root:root -oH newc | lzma -8 > ../extra.lzma

Note that the resulting file sits one directory up (../extra.lzma).
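
     

    If you want a quick sanity check before copying things back, listing the repacked archive should show the same tree you just extracted. This is optional and not part of the original procedure:

    lzma -dc ../extra.lzma | cpio -itv | less    # list the contents of the new archive without extracting it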

     

    Repeat the same steps for extra2.lzma.

     

    Preparing synoboot.img

     

    Just copy the updated extra/extra2.lzma files back where they came from, mounted under OSFMount.

     

    While you're in there, you might need to update grub.cfg, especially if this is a new install.  For the hardware mentioned at the very top of the post, with a single SAS expander providing 24 drives, where synoboot.img is a SATA disk for a VM under ESXi 6.7, I use these sata_args:

    # for 24-bay sas enclosure on 9211 LSI card (i.e. 24-bay supermicro)
    set sata_args='DiskIdxMap=181C SataPortMap=1 SasIdxMap=0xfffffff4'
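
     

    To unpack those values (the DiskIdxMap/SataPortMap reading comes from Jun's explanation later in this thread; the SasIdxMap line is my own interpretation of it, so treat that part as a best guess):

    printf 'first SATA controller (synoboot) starts at Disk %d\n' $(( 0x18 + 1 ))    # Disk 25, safely past the 24 bays
    printf 'second SATA controller starts at Disk %d\n' $(( 0x1C + 1 ))              # Disk 29
    printf 'SasIdxMap offset: %d\n' $(( 0xfffffff4 - 0x100000000 ))                  # -12, presumably because the expander phys start at 12 here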

     

Close any Explorer windows or text editors, and click "Dismount all" in OSFMount.  This image is ready to use.

     

If you're using ESXi and having trouble getting the image to boot, you can attach a network serial port to telnet in and see what's happening at boot time.  You'll probably need to disable the ESXi firewall temporarily, or open port 23.  It's super useful.  Be aware that the 4.x kernel loader no longer supports extra hardware, so the network card will have to be one that's officially supported.  (I gave the VM a real network card via hardware passthrough.)

     

    Attached Files

     

I attached extra.lzma and extra2.lzma to this post.  They are both from Jun's Loader 1.04b with the above procedure applied to change the default drive count from 16 to 24.

    extra2.lzma extra.lzma

  4. 12 hours ago, tdutrieux said:

    Hi,

     

    I experience one small problem.

    I'm on ESXi6.5 with synoboot as SATA0:0 and a virtualdisk as SCSI0:0

The system boots up and seems to work fine.

    The only problem is that I don't have a drive 1, and the virtual disk shows as drive 2.

    BTW, the synoboot disk isn't shown.

    I've tried with both 3615 and 3617.

Anybody experiencing that? And has anyone solved it?

    Thanks,

    TitiD

    cap.png

     

    Are you using the ESXi option on the boot menu?  It's only shown for a second or two by default.
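
     

    If the menu keeps flashing past before you can pick it, grub's standard default/timeout variables near the top of grub.cfg can pin the ESXi entry. The entry index below is a guess, so count the menuentry blocks in your own file before trusting it:

    set default='1'    # assumed index of the VMWare/ESXi menuentry; entries are numbered from 0
    set timeout='5'    # give yourself a few seconds at the menu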

     

    778161264_ScreenShot2018-08-11at2_23_17PM.thumb.png.67b470bc336b6792c513488ccaaaf425.png

  5. 6 minutes ago, alese said:

    Hi all,

     

I have a MicroServer Gen8 with an i5-3470T and 16GB RAM, ESXi 6.7.0, and an LSI2008 (cross-flashed Fujitsu D2607) with PCI passthrough, where all my data and the system live, and it's working with 6.1. I updated the loader to 1.03b and made the changes for serial and MAC (same as for 6.1).

     

The old system is found, but now I get Error 13 when I try to install the 6.2 pat file. What could be the problem?

    Does anybody have a working ESXi with PCI passthrough?

     

    When I put back the old loader, all works well again.

     

    Greets

     

    Alex

    esxi_xpenology.PNG

     

Error 13 can happen if there is a mismatch between the serial # and the model on the boot loader (i.e. using a DS3615xs serial with the DS3617xs loader or vice versa).  Maybe that?  If it isn't that, how is your synoboot.img configured in ESXi?  Can you expand the details in that screenshot to check?  It should be SATA 0:0.  I set mine to "Independent - Persistent", although I don't know if that matters.

  6. 1 hour ago, pigr8 said:


I'm trying to do something similar using a vmdk loader and a vmdk virtual drive; the loader is on SATA controller 0 and the drive on SATA controller 1. Booting and installing is fine, but in DSM the first disk is on port 2 (/dev/sdb). The loader's 50MB drive is not showing (as intended), but I wanted the first drive to be /dev/sda.

    That's just for testing, since my production VM has a 6-port SATA controller in passthrough and I've always had this problem, resolved by moving the loader to a USB drive and booting from there, so that the physical disks are sda->f. Using the loader with the vmdk method puts my drives at sdb->g.

    Any advice?

     

Change DiskIdxMap=0C to DiskIdxMap=0C00 in grub.cfg, inside sata_args.  That's the starting disk number for each controller, in hex pairs.  The 0C is an offset of 12, which is what pushes the boot drive to Disk 13 (so it doesn't show up in the GUI with the default internalportcfg in synoinfo.conf).  Adding the 00 means the second controller will start numbering at 0 instead of the default (so /dev/sda).
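
     

    As a minimal before/after sketch, using the rest of the sata_args values from my own setup (keep whatever else your line already contains):

    set sata_args='synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1'      # before: only the loader controller has an offset (0x0C -> Disk 13); the data controller defaults, so its first disk lands on Disk 2
    set sata_args='synoboot_satadom=1 DiskIdxMap=0C00 SataPortMap=1'    # after: the second hex pair 00 starts the data controller at Disk 1 (/dev/sda)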

     

    I verified it here real quick--this was Disk 2 before the change:

     

    893265536_ScreenShot2018-08-05at1_50_06PM.png.738fee0a3a33e4fbabc5df9423313dc7.png

     

    261894844_ScreenShot2018-08-05at1_50_34PM.png.e7efabd147c266be3eb005c6c26a1e57.png

     

     

  7. 57 minutes ago, jun said:

Since you are using an external SAS enclosure, which is a setup I've never experimented with before, here is my educated guess:

    Your issue is caused by my implementation of stable SAS disk naming; the disk name is simply derived from the SAS remote phy id plus SasIdxMap as the start position.

    I think ports directly connected to the HBA card are 0-7, and external expansion cards inside your enclosure will start from 8.

    You can try to set SasIdxMap=0xfffffff8, that is -8, to move the start position to 0.

    If that does not work, simply remove the SasIdxMap option, then SAS disk names should occupy unused slots as before.

     

     

    I will try this out and report back!  FYI, this external enclosure works correctly with the DSM 6.1 loader.  The big difference between the two is the EFI->BIOS change.  I'll try to get some info on whether changing the 6.1 loader to BIOS causes the same symptoms or not (it's a little bit of a pain with real disks, because it'll sever RAID groups and require a rebuild to fix).

     

    EDIT:  Using SasIdxMap=0xfffffff8 did the trick, thanks!
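
     

    For anyone else squinting at that value: 0xfffffff8 is just -8 written as an unsigned 32-bit number, which lines up with Jun's "move the start position back by 8" description. A quick bash check:

    printf '%d\n' $(( 0xfffffff8 - 0x100000000 ))    # prints -8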

     

    865020893_ScreenShot2018-08-04at9_07_35PM.png.5bd3de36d540ef49aeb199c649ddfe81.png

  8. 6 minutes ago, jun said:

Hi, options not available in vanilla DSM are hidden in /proc/cmdline.
    Here is an example of the disk-order related options: SasIdxMap=0 DiskIdxMap=080C SataPortMap=4
    It works like this: DiskIdxMap makes the first SATA controller start from disk 8, the second from disk 12, etc.
    SataPortMap limits the number of disks a SATA controller can have (VMware's virtual SATA controller has 32 ports!)
    Then SasIdxMap makes the SAS controller start from disk 0; as a plus, this option should give you a stable SAS disk name.

     

Thanks for the info!  I still haven't been able to get my setup to enumerate disks starting at 1 (Disk 1 in the GUI, /dev/sda from the command line).  Specs:

    Quote

     

    ESXi 6.5.0 Update 2 (Build 8294253)

    PowerEdge R320

    Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz

     

    The VM has a LSI SAS9200-8e passed through, connected to a SA120 enclosure with 9/12 disks populated.

     

     

    My grub.cfg line:

     

    Quote

    set sata_args='sata_uid=1 sata_pcislot=3 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1 SasIdxMap=0'

     

    I changed sata_pcislot as an experiment, because I noticed the PCI slot number changed between my 6.1.x VMs (EFI Boot) and this VM (BIOS boot).

     

    6.1.x VM is showing:

    Quote

     

    lspci -s 5

    0000:02:05.0 Class 0106: Device 15ad:07e0

     

     

    While this VM is showing:

    Quote

     

    lspci -s 3

    0000:02:03.0 Class 0106: Device 15ad:07e0

    0001:00:03.0 Class 0000: Device 8086:6f08 (rev ff)

    0001:00:03.2 Class 0000: Device 8086:6f0a (rev ff)

     

     

    The setup is the same, so my best guess is this is just how VMWare has set up their DSDT/whatever tables between the two firmware options.  Maybe the order of adding devices when I made the VM originally altered things, though.  Either way, I didn't notice a difference from sata_pcislot=5 to sata_pcislot=3.

     

I had to increase internalportcfg in synoinfo.conf to see all the disks.  You can see the 8-disk gap in the UI:

     

    691277189_ScreenShot2018-08-04at5_45_34PM.thumb.png.308c5f7dee836a80eae528cd5688124e.png

     

    Or in the shell:

    Quote

     

    sudo fdisk -l | grep Disk\ /dev/sd

    Disk /dev/sdj: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

    Disk /dev/sdk: 447.1 GiB, 480103981056 bytes, 937703088 sectors

    Disk /dev/sdl: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

    Disk /dev/sdn: 447.1 GiB, 480103981056 bytes, 937703088 sectors

    Disk /dev/sdo: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

    Disk /dev/sdq: 447.1 GiB, 480103981056 bytes, 937703088 sectors

    Disk /dev/sdr: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

    Disk /dev/sdt: 447.1 GiB, 480103981056 bytes, 937703088 sectors

    Disk /dev/sdi: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors

     

     

    The VM is pretty minimally configured:

     

    1790841821_ScreenShot2018-08-04at5_48_00PM.thumb.png.a97cd09f2f4e6d6e57d9f58038bf943e.png

     

    I don't see anything glaringly obvious in dmesg or early boot from the serial port.  Happy to paste those your way if you want to take a peek, though!  At this point I'm at the limits of my (admittedly poor) XPE knowledge, so just kind of hoping the EFI-fixed version makes my 6.1 VMs behave the same with a 6.2 install...

  9. I spent some time experimenting tonight, and I still can't get my disks starting at 1 with the 6.2 loader, either 3615 or 3617, with an LSI passed through in an ESXi 6.5u2 VM.

     

I noticed that SasIdxMap=0 from grub.cfg doesn't actually come through on /proc/cmdline, even though it's the option that should start enumeration at 0 for a SAS controller:

    sh-4.3# cat /proc/cmdline
    syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS3617xs vender_format_version=2 console=ttyS0,115200n8 withefi quiet root=/dev/md0 sn=XXXXXXXXXXXXX mac1=XXXXXXXXXXXX netif_num=1 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1

    But, it doesn't come through on my 6.1.x VM with the same card either, and that correctly enumerates disks starting with /dev/sda.  Not sure what else to try.  I increased internalportcfg for now, but that's just a landmine on the next big update (I guess it's possible to include that in the jun.patch in extra.lzma though).

     

     

  10. 6 minutes ago, zenfire said:

    What trick with assigning MAC address in grub do you mean?

     

You should edit grub.cfg with the MAC address you're using on the VM's network adapter (basically just mount synoboot.img with OSFMount and find/edit it there, as in the tutorial earlier in this thread).

     

    The other option with ESXi is to edit your Virtual Switch in networking, enable Security->MAC address changes.  DSM will use the MAC address in grub.cfg for its network card, and not the one set in your VM config.  By default with ESXi this security setting is on, which will prevent it from getting on the network if there's a mismatch.  If you're going to use multiple XPEnology installs you'll need to change the MAC address anyway, so might as well.
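
     

    For reference, the loader reads the MAC from variables in grub.cfg; the names below match the mac1/netif_num options visible in the /proc/cmdline output earlier in this thread, but the exact syntax is from memory, so compare against your own file. The value is a placeholder:

    set mac1='001132AABBCC'    # placeholder; set this to the MAC on the VM's network adapter (or relax the vSwitch security setting)
    set netif_num='1'          # number of NICs the loader should configure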

  11. 7 minutes ago, ideasman69 said:

     

    Now that you're running the new boot loader and BIOS mode - have you made sure to select ESX mode from Juns boot menu?

     

I had the same thing when upgrading my ESX install. The first option was selected by default (bare metal), which left me without seeing all my drives, but I could see the 50MB virtual boot device.

     

    After making sure ESX was selected the 50MB disappeared and all my drives were detected again.

     

Yeah, this is using the ESXi/VMWare option (I tried the other options out of curiosity, but nothing ever came up after the initial message on VGA/serial).  If I can fix this now that'd be great, but I'm in no huge rush, so hopefully Jun has time to fix the EFI boot.  There's a note that it's on his radar in the OP.

     

    Are you using a passthrough hardware HBA or virtual disks?

  12. 2 minutes ago, knopserl said:

I have also tried SCSI LSILogic, but then it does not boot at all and tries to do a network boot. I have tried all kinds of combinations of what you can do in VMware Pro 14 (HW version 12, 14, SATA, SCSI, BIOS, UEFI does not boot).

    The v1.03a2 boots immediately, it finds the disks, and installation works. So there must be something which does not work with VMs in 1.03b.

     

You should be able to modify the boot order in the VM's BIOS setup.  It's a pain to get there with the keyboard, depending on how you're controlling the remote view--the easiest is to force it into BIOS setup on the next boot from the Advanced->Firmware tab.  The SCSI drives will take priority over the SATA boot disk, and it'll only test one hard drive for boot files.  You can re-order them in BIOS setup.

  13. I did a DSM 6.2 update on one of my real ESXi VM setups today, after testing some VM-only disks overnight.

     

    ESXi 6.5.0 Update 2 (Build 8294253)

    PowerEdge R320

    Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz

     

    The VM has a LSI SAS9200-8e passed through, connected to a SA120 enclosure with 10/12 disks populated.

     

    It didn't go great.  DSM came back up with only 4 disks showing (marked disks 9-12).

     

    1623625168_ScreenShot2018-08-02at11_37_59AM.png.571d7824cb78291e7c6610c2329c95d7.png

     

    After trying some things, I was able to get the other disks to appear by increasing the *portcfg settings.  Two weird things:

     

    - The 50MB SATA boot disk is showing through to DSM, where it didn't before

    - DSM is holding open disks 1-8, which is why only 4 disks made it through with the 12-disk limit

     

    379019840_ScreenShot2018-08-02at2_05_13PM.png.912aacabcb1eeee3d69d89d2c2ed672f.png

     

    1468681164_ScreenShot2018-08-02at1_55_49PM.thumb.png.a14adc09e3d1b17139e44511cde12466.png

     

     

     

    The only difference between this and my DSM 6.1 config is the firmware going from EFI -> BIOS.  I guess with a BIOS boot the SATA controller appears, and DSM thinks it has an 8-disk capacity, and with EFI it doesn't?

     

I've had a bad history with any XPEnology setup that requires portcfg changes to synoinfo.conf, since inevitably those values reset during an update and your volumes get severed.  Although I did notice this over serial in the very early boot stages, so maybe there's a new way to inject values from the bootloader now?

     

    Quote

    patching file linuxrc.syno
    patching file usr/sbin/init.post
    cat: can't open '/etc/synoinfo_override.conf': No such file or directory

     

    Fortunately, I can repair the volume affected here (and it's not a very important install overall).  I'll hold off until EFI is working before I do more updates.

  14. 2 minutes ago, knopserl said:

    Thanks for the hint. I have always been in BIOS and not UEFI mode.

    I still get the Server Error 38 (no disk found) although I have two SATA Disks defined.

     

Keep the synoboot.img drive as SATA, but try adding the drives you'd like to use as SCSI (with the SCSI controller set to LSI Logic Parallel).
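
     

    If you'd rather sanity-check the .vmx directly instead of clicking through the GUI, the relevant entries look roughly like this. The key names are my recollection of VMware's .vmx format and the file names are placeholders, so treat it as a sketch only:

    sata0.present = "TRUE"
    sata0:0.present = "TRUE"
    sata0:0.fileName = "synoboot.vmdk"    # the loader stays on SATA 0:0
    scsi0.present = "TRUE"
    scsi0.virtualDev = "lsilogic"         # LSI Logic Parallel controller
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "datadisk1.vmdk"   # data disks attached to the SCSI controller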

  15. 2 hours ago, knopserl said:

Has anybody tried to set up a VMware host with 1.03b, either 3615 or 3617? It seems that some physical hosts are able to update or do a fresh install, but I did not manage to install it with any VMware configuration. It starts the installation, but after the network settings and starting to format the system partition, it always ends up with error code 38, or the web install window says no disk found. Whichever disk mode I select (SATA, SCSI BusLogic, LSILogic), it never finds a disk. The same setup with the 1.03a2 boot image for 918+ works.

    Anybody has any ideas?

     

    Change the firmware in advanced options tab from EFI to BIOS.

  16. 6 hours ago, autohintbot said:

    Does the Haswell-or-later requirement still apply here?  I am unable to get anything past the very first boot message (nothing shows up on network, no activity on serial port past the boot menu and initial message).

     

    Amending this post:  Turns out you just need to change Boot Options -> Firmware from EFI to BIOS.

     

I tried this earlier, but didn't pay enough attention to realize it just wasn't booting at all.  When you change the firmware in ESXi you'll get a default boot order, which is very likely wrong.  The easiest fix is to force a BIOS setup on the next boot in the same config menu, or be quick on the keyboard and mash ESC to get into the VM's virtual BIOS setup and change the boot order (probably SATA:0, depending on how you have the VM configured).

     

    My DSM 6.1 VMs are all EFI, which fails with the new loader.  I'll try an upgrade tomorrow on real disks and report back.

  17. Does the Haswell-or-later requirement still apply here?  I am unable to get anything past the very first boot message (nothing shows up on network, no activity on serial port past the boot menu and initial message).

     

    583365020_ScreenShot2018-08-01at4_00_50PM.thumb.png.ce06ae00991dff8f1d7c46cdf9fa86e9.png

     

     

    Host:

    ESXi 6.5.0 Update 2 (Build 8294253)

    2x Xeon E5-2670 on Intel S2600CP motherboard

     

    VM:

Started by copying my working 6.1.x DSM setup, changing the MAC address in the VM config and in synoboot.img.  The existing VM was hardware version 12.

     

    Tried a few other ESXi starting points from here on the forums, upgrading to VM hardware version 13, etc.  Same result across the board, both for 3615 and 3617 loaders.

     

Really hoping a Sandy Bridge-compatible boot loader happens for DSM 6.2!  I have a bunch of that hardware here in the house.  I actually have some spare hosts--I'll try a bare metal test when I get a few extra minutes.
