XPEnology Community

Migrating Baremetal to ESXi - Passthrough HDDs or Controller?


WiteWulf

Question

I'm contemplating migrating my baremetal install on an HP Gen8 Microserver to ESXi (ESXi because I use it at work and am more familiar with it than with Proxmox).

 

It seems pretty simple: just replace the xpenology USB boot stick I'm currently using with an ESXi boot stick, create a VM for DSM with a virtual boot image, pass through the existing disks and boot it up. DSM will do the "I've detected disks from another server, do you want to migrate?" thing, and I'm done, right?

 

My main question before I do this is: given that I'm running the SATA controller on the Gen8 in AHCI mode (ie. no "hardware" RAID), should I pass through the controller to the VM, or the individual disks in Raw Disk Mode? Is there any performance benefit to either?

 

The disks (4x3TB) are full with DSM data, obviously, so I'll not be able to use that set of disks for any other ESXi guests, but I'm considering getting an HBA at some point to add some extra storage.
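For what it's worth, the "Raw Disk Mode" route means creating a physical-mode RDM pointer file per disk from the ESXi shell. A rough sketch (the device identifier and datastore path below are placeholders, not your actual values):

```sh
# List the physical disks to find their device identifiers
ls -l /vmfs/devices/disks/

# Create a physical-mode (-z) RDM pointer file on an existing datastore,
# one per data disk (identifier and path are hypothetical examples):
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk
```

Each resulting .vmdk pointer is then attached to the DSM VM as an existing disk.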

Recommended Posts
37 minutes ago, WiteWulf said:

I'm contemplating migrating my baremetal install on an HP Gen8 Microserver to ESXi (ESXi because I use it at work and am more familiar with it than with Proxmox).

 

It seems pretty simple: just replace the xpenology USB boot stick I'm currently using with an ESXi boot stick, create a VM for DSM with a virtual boot image, pass through the existing disks and boot it up. DSM will do the "I've detected disks from another server, do you want to migrate?" thing, and I'm done, right?

 

My main question before I do this is: given that I'm running the SATA controller on the Gen8 in AHCI mode (ie. no "hardware" RAID), should I pass through the controller to the VM, or the individual disks in Raw Disk Mode? Is there any performance benefit to either?

 

The disks (4x3TB) are full with DSM data, obviously, so I'll not be able to use that set of disks for any other ESXi guests, but I'm considering getting an HBA at some point to add some extra storage.

To be honest, I had really poor performance with the onboard AHCI controller, and no SMART support in DSM.

This is why I chose an LSI HBA card in IT mode. It works great with Jun's loader, but you know the status with RedPill.

Let me find the tests I posted on XPEnology, if they can help you.

I can share with you my current ESXi conf.

Be warned: you can't use ESXi newer than 6.7 with an mpt2sas card, as VMware dropped its driver support. The only option with ESXi 7.0+ is to pass the card through to the VM (which is what I currently do).


And also, you won't be able to pass through the whole internal controller: you must have a datastore (HDD/SSD) where the VMs are stored. If you pass through the controller, where will the datastore live? A USB disk is not an option unless you use a "hack" to put a datastore on USB.

My setup:

- ESXi booting from USB (the OS loads into RAM)

- an SSD plugged into the ODD SATA port on the motherboard

- the SAS connector removed from the motherboard and plugged into the LSI card

- four 4TB disks installed (on the LSI card, of course)

Edited by Orphée
1 hour ago, Orphée said:

Here were my tests :

 

If you read from there, you will find why I switched to the LSI card.

 

I didn't have any issues in testing with ESXi 6.7, but I did roll back the scsi-hpvsa driver, having read elsewhere about ESXi in general on the Gen8.

 

Have a read here https://www.johandraaisma.nl/fix-vmware-esxi-6-slow-disk-performance-on-hp-b120i-controller/ 

and here https://communities.vmware.com/t5/ESXi-Discussions/Very-slow-acces-to-datastores-on-HP-MIcroserver-Gen8-Can-t-edit/td-p/2276368

 

I've since installed an LSI card for greater flexibility (and faster SATA ports); only the first two SATA ports on the Microserver Gen8 are 6Gb/s SATA III, the others are 3Gb/s, for what it's worth.

1 hour ago, Orphée said:

To be honest, I had really poor performance with the onboard AHCI controller, and no SMART support in DSM.

 

This is incorrect for AHCI controller passthrough. SMART does not work with RDM on Jun's loader (also, trying to TRIM will crash RDM'd SSDs), but it's perfectly fine with drives attached to passed-through controllers. I pass through my onboard SATA controller and it behaves exactly as on baremetal.

 

40 minutes ago, Orphée said:

And also, you won't be able to pass through the whole internal controller: you must have a datastore (HDD/SSD) where the VMs are stored. If you pass through the controller, where will you have the datastore?

 

Some folks have more than one SATA controller, or use a NVMe disk as a datastore and scratch volume.

 

The reasons to use RDM are to split drives between the ESXi datastore and DSM on one controller (Orphée's case), or to give DSM access to disks that cannot otherwise be used at all (no controller support, NVMe, etc.).

Edited by flyride
30 minutes ago, scoobdriver said:

 

I didn't have any issues in testing with ESXi 6.7, but I did roll back the scsi-hpvsa driver, having read elsewhere about ESXi in general on the Gen8.

 

Have a read here https://www.johandraaisma.nl/fix-vmware-esxi-6-slow-disk-performance-on-hp-b120i-controller/ 

and here https://communities.vmware.com/t5/ESXi-Discussions/Very-slow-acces-to-datastores-on-HP-MIcroserver-Gen8-Can-t-edit/td-p/2276368

 

I've since installed an LSI card for greater flexibility (and faster SATA ports); only the first two SATA ports on the Microserver Gen8 are 6Gb/s SATA III, the others are 3Gb/s, for what it's worth.

The driver rollback did not improve the high latency issues for me.

13 hours ago, Orphée said:

And also, you won't be able to pass through the whole internal controller: you must have a datastore (HDD/SSD) where the VMs are stored. If you pass through the controller, where will the datastore live? A USB disk is not an option unless you use a "hack" to put a datastore on USB.

As I mentioned: all four HDDs are currently full with DSM data. I can't resize them so I need to pass those through to the VM (either as raw disks or via the controller). My plan is to use an SSD on the ODD connector inside the Gen8 for datastore. There's one in there already that's configured as a read-cache for DSM, but I'm not convinced it's making a lot of difference.

 

I've read in a few places online how to keep the datastore on the same USB stick you boot from, or on the motherboard's SD card slot, but I'm loath to run from flash storage.

16 minutes ago, WiteWulf said:

As I mentioned: all four HDDs are currently full with DSM data. I can't resize them so I need to pass those through to the VM (either as raw disks or via the controller). My plan is to use an SSD on the ODD connector inside the Gen8 for datastore. There's one in there already that's configured as a read-cache for DSM, but I'm not convinced it's making a lot of difference.

 

I've read in a few places online how to keep the datastore on the same USB stick you boot from, or on the motherboard's SD card slot, but I'm loath to run from flash storage.

If you have only one controller (the internal SATA AHCI), your only choice will be to attach your four disks with the RDM feature (there is a tutorial for it).

I just meant you won't be able to pass through the controller: if you do so, the SSD on the ODD port will be passed through as well and won't be visible to the ESXi host.

PCI controller passthrough is all or nothing.

Maybe I'm wrong, in which case I've misunderstood how PCI passthrough works.

 

Using a datastore on USB is a "hack" not supported by VMware; I've never tried it, so I can't help on that subject.

 

Edit: here's mine with the LSI card passed through:

[screenshot: ESXi hardware view showing the LSI controller passed through to the VM]

 

My four data disks are not visible in ESXi.

 


Edited by Orphée
3 minutes ago, Orphée said:

I just meant you won't be able to pass through the controller: if you do so, the SSD on the ODD port will be passed through as well and won't be visible to the ESXi host.

Ah, I hadn't thought of that! I'd assumed that it was on a different controller, actually.

 

Raw disk is definitely the only option, then, unless I get a PCIe HBA and get redpill/DSM7 working with the internal NIC (which plenty of people seem to be doing now).

Just now, WiteWulf said:

Ah, I hadn't thought of that! I'd assumed that it was on a different controller, actually.

 

Raw disk is definitely the only option, then, unless I get a PCIe HBA and get redpill/DSM7 working with the internal NIC (which plenty of people seem to be doing now).

Yes.

And from my personal experience, I had "high latency" issues with disks attached as RDM on the internal controller.

This is why I bought an LSI HBA card in IT mode.

12 hours ago, flyride said:

Some folks have more than one SATA controller, or use a NVMe disk as a datastore and scratch volume.

Interesting idea; do those PCIe NVMe adapters work with xpenology, then? This sort of thing, for example:
https://www.amazon.co.uk/SupaGeek-PCIe-Express-Adapter-Card/dp/B07CBJ6RH7/ref=pd_lpo_3?pd_rd_i=B07CBJ6RH7&psc=1

2 minutes ago, WiteWulf said:

Interesting idea; do those PCIe NVMe adapters work with xpenology, then? This sort of thing, for example:
https://www.amazon.co.uk/SupaGeek-PCIe-Express-Adapter-Card/dp/B07CBJ6RH7/ref=pd_lpo_3?pd_rd_i=B07CBJ6RH7&psc=1

It should; actually, you would use it not for XPEnology but for ESXi, as a datastore.

As long as ESXi has a driver for the PCIe device (mind the Gen8's PCIe slot compatibility), you should be able to use it as a datastore and then pass the internal SATA AHCI controller through to the XPEnology VM.

9 minutes ago, WiteWulf said:

Raw disk is definitely the only option, then, unless I get PCIe HBA and get redpill/DSM7 working with the internal NIC (which plenty of people seem to be doing now).

With ESXi, it will always work with the internal NIC, as you configure the VM with an E1000e NIC (handled natively by DSM), or with VMXNET 3 if you add the vmxnet3 driver to the loader.
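For reference, the NIC model is set in the VM's .vmx file (or in the vSphere UI); a minimal fragment, assuming a single NIC on a port group named "VM Network":

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000e"
ethernet0.networkName = "VM Network"
```

Swap "e1000e" for "vmxnet3" if the loader includes the vmxnet3 driver.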

5 hours ago, WiteWulf said:

Interesting idea; do those PCIe NVMe adapters work with xpenology, then? This sort of thing, for example:
https://www.amazon.co.uk/SupaGeek-PCIe-Express-Adapter-Card/dp/B07CBJ6RH7/ref=pd_lpo_3?pd_rd_i=B07CBJ6RH7&psc=1

 

NVMe = PCIe: they are different form factors for the same interface type, so such adapters follow the same rules for ESXi and DSM as an onboard NVMe slot.

 

I use two of them on my NAS to drive enterprise NVMe disks, then RDM them into my VM for an extremely high-performance volume.

I also use an NVMe slot on the motherboard to run ESXi datastore and scratch.


I'm finally picking this up again. I've run up a few VMs with TCRP over the last week or so and feel confident with it, so am taking the following approach:

- temporarily run a low-spec DS3622xs+ VM on another machine with enough storage to migrate the Gen8's DS3615xs to

- run the migration (this is nearly complete; migrating 6TB of data over GigE to a USB datastore takes a long time at 300Mb/s!)

- shut down the Gen8

  - install a P222 card

  - connect the internal drive cage to it

  - connect a couple of SATA SSDs to the B120i's 6Gb/s ports (I have 4x SATA cable for this)

- install ESXi on the Gen8, using the SSDs for datastore and passing through the HDDs on the P222

- build a new DS3622xs+ VM and migrate all the data back again
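As an aside, that "takes a long time" is easy to quantify; a back-of-the-envelope estimate for 6TB at an effective ~300Mb/s (assuming decimal units and ignoring protocol overhead):

```shell
# 6 TB expressed in bits, divided by ~300 Mb/s, converted to hours
awk 'BEGIN { printf "%.1f\n", (6 * 10^12 * 8) / (300 * 10^6) / 3600 }'
# prints 44.4
```

So roughly two days of copying in each direction.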

 

I realise that if the passthrough works properly the first time, I may be able to simply migrate the disks to the new VM, but the migration to the temporary VM is an insurance policy in case it doesn't work and I somehow lose data.

 

Wish me luck!

BTW, there's an eBay seller in the UK doing P222 cards (albeit without cache or half-height brackets) for £9 inc. delivery:
https://www.ebay.co.uk/itm/143592506228

 

A bargain if you're looking to upgrade an HP Microserver

Edited by WiteWulf

Hmmm, trying to get the P222 configured in the Gen8 today and running into a few problems. I wanted to use Intelligent Provisioning to update its firmware, but couldn't launch it. This is typical of a corrupt NAND on the iLO, and can easily be fixed by formatting the NAND (guide here).

 

That sorted, I noticed that there's an iLO firmware update available (2.81; I'm on 2.78), but all the download links 404. 2.80 is available, so I've installed that instead.

 

Finally got into Intelligent Provisioning and it says the P222 is permanently disabled as there's no cache module present!

Well, that showed me. I thought I'd got a bargain with that card (with no cache) for £9, intending to run it without cache, but it won't run at all without one. I'll see what I can find on eBay :)


Another quick update:

- bad news: still waiting for the cache card and battery for the P222, so no progress with that, but...

- good news: I couldn't wait and decided to try passing through the HDDs in RDM mode to a new DS3622xs+ VM, and it worked (eventually!)

 

Using TCRP I created a new DS3622xs+ 7.0.1-42218 bootloader in ESXi on sata0:0, and added the four HDDs on sata1:0-3

 

The first time it booted it wanted to do a clean install and obviously hadn't grokked that there was a previous install on the four physical HDDs. I guessed that, as I hadn't taken care to configure them in any particular order, it might not be reinstating the RAID properly. So I booted the physical server off the old redpill USB stick and made a note of which HDDs (by serial number) were on which SATA port, then went back to ESXi and reconfigured them on the VM in the same order. I rebuilt the TCRP bootloader (I figured satamap might need to be refreshed at a minimum), and this time it correctly identified the disks as having come from a DS3615xs and offered to migrate them to the new virtual DS3622xs+. The migration completed after updating a few packages and setting the IP address on the NIC back to static (why do they love to revert to DHCP?).

 

Upgrading to 7.1.0-42661-1 didn't go very smoothly, though. TCRP "postupdate" identified it as the 7.1.0-42661-3 update and it got stuck in a recovery loop. I had to create a fresh 7.1.0-42661 bootloader, then reboot and run postupdate *again*. At that point it installed the update correctly (I was watching what was going on over the serial port), and I was subsequently able to update to 7.1.0-42661-4 "normally".

 

When the cache card arrives I'll move the HDDs over to the P222, configure that as a passthrough adapter, and see what breaks! :D

Edited by WiteWulf

Okay, got the cache card and battery over the weekend!

  • installed the cache and battery
  • moved the SATA cable for the internal drive cage over to the P222
  • connected a 1x4 SATA tail to the onboard B120i adapter and put the existing 60GB SSD on port 1 (which is 6Gb/s, as opposed to the 3Gb/s port it was on before)
  • booted up

I was pleasantly surprised that the P222 was immediately enabled (some reports online say it needs to be powered up for an hour or so for the battery to charge to a point where it's out of an error state and enabled). ESXi booted off the SD card on the motherboard and immediately recognised the SSD with its datastore (it was on the same HBA, just moved to a different port, so not surprising).

 

Using ssacli from the ESXi command line I put the P222 in HBA mode and checked that all the drives were visible (they were), then configured ESXi to pass it through to the xpenology VM (this was quite complex, so I'll not go into detail here) and deleted the RDM disks.
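For anyone following along, the HBA-mode switch itself is short with ssacli (the slot number below is an assumption; confirm it with the first command, and note the tool typically lives under /opt/smartstorageadmin/ssacli/bin/ on ESXi):

```sh
# Locate the controller and note its slot number
./ssacli ctrl all show

# Put the P222 into HBA (pass-through) mode -- slot=1 is a placeholder
./ssacli ctrl slot=1 modify hbamode=on

# Confirm the physical drives are now exposed directly
./ssacli ctrl slot=1 pd all show
```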

 

Now the VM boots up and says it can't find any disks, which is to be expected, as I haven't added the hpsa driver to the bootloader yet, which is my next job.


lspci output from TCRP:
 

tc@box:~$ lspci -tnnvq
-[0000:00]-+-00.0  Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge [8086:7190]
           +-01.0-[01]--
           +-07.0  Intel Corporation 82371AB/EB/MB PIIX4 ISA [8086:7110]
           +-07.1  Intel Corporation 82371AB/EB/MB PIIX4 IDE [8086:7111]
           +-07.3  Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113]
           +-07.7  VMware Virtual Machine Communication Interface [15ad:0740]
           +-0f.0  VMware SVGA II Adapter [15ad:0405]
           +-11.0-[02]----00.0  VMware SATA AHCI controller [15ad:07e0]
           +-15.0-[03]----00.0  VMware VMXNET3 Ethernet Controller [15ad:07b0]
           +-15.1-[04]--
           +-15.2-[05]--
           +-15.3-[06]--
           +-15.4-[07]--
           +-15.5-[08]--
           +-15.6-[09]--
           +-15.7-[0a]--
           +-16.0-[0b]----00.0  Hewlett-Packard Company Smart Array Gen8 Controllers [103c:323b]

So we can see the passthrough adapter at 16.0-[0b]----00.0

 

I've created a new bootloader with the hpsa driver added and it booted up okay.

 

Following the console output, I saw it detect the drives (but they all seem to be SCSI; is this right?):

 

SynologyNAS> dmesg | grep hpsa
[    6.271992] hpsa 0000:0b:00.0: MSI-X capable controller
[    6.273518] hpsa 0000:0b:00.0: Logical aborts not supported
[    6.273827] hpsa 0000:0b:00.0: HP SSD Smart Path aborts not supported
[    6.309877] scsi host1: hpsa
[    6.312591] hpsa 0000:0b:00.0: scsi 1:0:0:0: added Direct-Access     ATA      WDC WD30EFRX-68E PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.313671] hpsa 0000:0b:00.0: scsi 1:0:1:0: added Direct-Access     ATA      WDC WD30EFRX-68E PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.314855] hpsa 0000:0b:00.0: scsi 1:0:2:0: added Direct-Access     ATA      WDC WD30EFRX-68E PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.316852] hpsa 0000:0b:00.0: scsi 1:0:3:0: added Direct-Access     ATA      WDC WD30EFRX-68N PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.318856] hpsa 0000:0b:00.0: scsi 1:0:4:0: masked Enclosure         PMCSIERA SRCv8x6G         enclosure SSDSmartPathCap- En- Exp=0
[    6.320857] hpsa 0000:0b:00.0: scsi 1:3:0:0: added RAID              HP       P222             controller SSDSmartPathCap- En- Exp=1

 

The web interface gave me the "Welcome back" page, said it had found drives that had been moved, and asked if I wanted to recover, which I okayed. It rebooted, then kernel dumped. Rebooting again, it doesn't see the drives any more:

 

SynologyNAS> dmesg | grep hpsa
[    6.093779] hpsa 0000:0b:00.0: MSI-X capable controller
[    6.094499] hpsa 0000:0b:00.0: Logical aborts not supported
[    6.095484] hpsa 0000:0b:00.0: HP SSD Smart Path aborts not supported
[   29.377728] hpsa 0000:0b:00.0: failed to enter simple mode

 

Rebooting the VM doesn't seem to fix this; only rebooting the ESXi host does. This suggests to me that the P222 card has gone into an error state and needs power cycling?

 

So, after power cycling and booting it up again, it sees the disks this time and doesn't kernel dump, but asks me to recover. It looks like I'm stuck in a recovery loop now.

 

Is this a satamap issue, as the hpsa driver isn't loaded when I run satamap in TCRP and it can't see the drives?

 

satamap output from tcrp as follows:
 

Succesfully installed SCSI modules
Found "02:00.0 VMware SATA AHCI controller"
Detected 30 ports/1 drives. Mapping SATABOOT drive after maxdisks
Found SCSI/HBA "0b:00.0 Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid bus number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
Computed settings:
SataPortMap=1
DiskIdxMap=10

 

This suggests to me that TCRP is seeing it as a SCSI adapter, rather than a SATA HBA, correct?

 

Do I need to manually specify the satamap parameters?

 

Any suggestions, @Orphée/ @pocopico / @flyride / @Atlas?

 

I'm keen to learn and fix this on my own, but can't seem to find any documentation on how sataportmap actually works, only people asking for help with their configuration and others responding with a working config 🙄
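For the record, the conventional reading (pieced together from community sources, not official documentation) is: SataPortMap is one decimal digit per SATA controller giving the number of ports to map, and DiskIdxMap is one two-hex-digit byte per controller giving that controller's starting disk index. A quick sketch decoding values like those on the kernel command line elsewhere in this thread (SataPortMap=58, DiskIdxMap=0A00):

```shell
# SataPortMap=58 -> 5 ports on controller 1, 8 ports on controller 2
echo 58 | fold -w1

# DiskIdxMap=0A00 -> controller 1 starts at disk index 10, controller 2 at 0
printf '%d\n' "0x$(echo 0A00 | cut -c1-2)"
printf '%d\n' "0x$(echo 0A00 | cut -c3-4)"
```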

Edited by WiteWulf

Here's some (possibly) relevant dmesg output relating to ATA devices:


dmesg | grep -i ata
[    0.000000] Command line: BOOT_IMAGE=/zImage withefi earlyprintk syno_hw_version=DS3622xs+ console=ttyS0,115200n8 netif_num=5 pid=0xa4a5 earlycon=uart8250,io,0x3f8,115200n8 synoboot_satadom=1 syno_port_thaw=1 mac1=XXXXXXXXXXXX sn=XXXXXXXXXXXX vid=0x0525 elevator=elevator loglevel=15 HddHotplug=0 DiskIdxMap=0A00 syno_hdd_detect=0 vender_format_version=2 syno_hdd_powerup_seq=0 log_buf_len=32M root=/dev/md0 SataPortMap=58
[    0.000000] BIOS-e820: [mem 0x00000000bfee0000-0x00000000bfefefff] ACPI data
[    0.000000] Kernel command line: BOOT_IMAGE=/zImage withefi earlyprintk syno_hw_version=DS3622xs+ console=ttyS0,115200n8 netif_num=5 pid=0xa4a5 earlycon=uart8250,io,0x3f8,115200n8 synoboot_satadom=1 syno_port_thaw=1 mac1=XXXXXXXXXXXX sn=XXXXXXXXXXXX vid=0x0525 elevator=elevator loglevel=15 HddHotplug=0 DiskIdxMap=0A00 syno_hdd_detect=0 vender_format_version=2 syno_hdd_powerup_seq=0 log_buf_len=32M root=/dev/md0 SataPortMap=58
[    0.000000] Synology boot device SATADOM: 1
[    0.000000] Sata Port Map: 58
[    0.000000] Memory: 10158900K/10485240K available (5535K kernel code, 880K rwdata, 1776K rodata, 928K init, 1568K bss, 326340K reserved, 0K cma-reserved)
[    1.176755] libata version 3.00 loaded.
[    2.065813] <redpill/cmdline_delegate.c:389> Cmdline: BOOT_IMAGE=/zImage withefi earlyprintk syno_hw_version=DS3622xs+ console=ttyS0,115200n8 netif_num=5 pid=0xa4a5 earlycon=uart8250,io,0x3f8,115200n8 synoboot_satadom=1 syno_port_thaw=1 mac1=XXXXXXXXXXXX sn=XXXXXXXXXXXX vid=0x0525 elevator=elevator loglevel=15 HddHotplug=0 DiskIdxMap=0A00 syno_hdd_detect=0 vender_format_version=2 syno_hdd_powerup_seq=0 log_buf_len=32M root=/dev/md0 SataPortMap=58
[    2.089793] <redpill/cmdline_delegate.c:401> Param #8: |synoboot_satadom=1|
[    2.090820] <redpill/cmdline_delegate.c:59> Boot media SATADOM (native) requested
[    2.117824] <redpill/cmdline_delegate.c:401> Param #22: |SataPortMap=58|
[    2.118834] <redpill/cmdline_delegate.c:296> Option "SataPortMap=58" not recognized - ignoring
[    2.131827] <redpill/runtime_config.c:65> Using native SATA-DoM boot - vid= and pid= parameter values will be ignored
[    2.133821] <redpill/runtime_config.c:76> Configured boot device type to fake-SATA DOM
[    2.171836] <redpill/sata_port_shim.c:116> Registering SATA port emulator shim
[    2.172844] <redpill/sata_port_shim.c:120> Registering for new devices notifications
[    2.175839] <redpill/sata_port_shim.c:127> Iterating over existing devices
[    2.176841] <redpill/sata_port_shim.c:134> Successfully registered SATA port emulator shim
[    2.178842] <redpill/native_sata_boot_shim.c:205> Registering native SATA DOM boot device shim
[    2.181844] <redpill/native_sata_boot_shim.c:242> Successfully registered native SATA DOM boot device shim
[    2.267863] <redpill/intercept_execve.c:57> Filename /tmpData/upd@te/sas_fw_upgrade_tool will be blocked from execution
[    2.347882] <redpill/sanitize_cmdline.c:102> Sanitized cmdline to: BOOT_IMAGE=/zImage withefi syno_hw_version=DS3622xs+ console=ttyS0,115200n8 netif_num=5 earlycon=uart8250,io,0x3f8,115200n8 synoboot_satadom=1 mac1=XXXXXXXXXXXX sn=XXXXXXXXXXXX HddHotplug=0 DiskIdxMap=0A00 syno_hdd_detect=0 vender_format_version=2 syno_hdd_powerup_seq=0 root=/dev/md0 SataPortMap=58
[    2.773585] ahci 0000:02:00.0: AHCI 0001.0300 32 slots 30 ports 6 Gbps 0x3fffffff impl SATA mode
[    2.781297] ata1: SATA max UDMA/133 abar m4096@0xfd5ff000 port 0xfd5ff100 irq 56
[    2.782307] ata2: SATA max UDMA/133 abar m4096@0xfd5ff000 port 0xfd5ff180 irq 56
[    2.783019] ata3: SATA max UDMA/133 abar m4096@0xfd5ff000 port 0xfd5ff200 irq 56
[    2.783422] ata4: SATA max UDMA/133 abar m4096@0xfd5ff000 port 0xfd5ff280 irq 56
[    2.783998] ata5: SATA max UDMA/133 abar m4096@0xfd5ff000 port 0xfd5ff300 irq 56
[    6.138841] ata3: SATA link down (SStatus 0 SControl 300)
[    6.139853] ata2: SATA link down (SStatus 0 SControl 300)
[    6.140883] ata4: SATA link down (SStatus 0 SControl 300)
[    6.141859] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    6.142868] ata5: SATA link down (SStatus 0 SControl 300)
[    6.143859] ata1.00: ATA-6: VMware Virtual SATA Hard Drive, 00000001, max UDMA/100
[    6.144840] ata1.00: 2097152 sectors, multi 0: LBA48 NCQ (depth 31/32)
[    6.145839] ata1.00: SN:00000000000000000001
[    6.145895] ata1.00: configured for UDMA/100
[    6.145929] ata1.00: Find SSD disks. [VMware Virtual SATA Hard Drive]
[    6.147072] scsi 0:0:0:0: Direct-Access     VMware   Virtual SATA Hard Drive  0001 PQ: 0 ANSI: 5
[    6.149124] <redpill/native_sata_boot_shim.c:153> Found new SCSI disk vendor="VMware  Virtual         0001" model="Virtual SATA Hard Drive ": checking boot shim viability
[    6.151847] <redpill/boot_shim_base.c:34> Checking if SATA disk is a shim target - id=0 channel=0 vendor="VMware  Virtual         0001" model="Virtual SATA Hard Drive "
[    6.155822] <redpill/native_sata_boot_shim.c:124> Trying to shim SCSI device vendor="VMware  Virtual         0001" model="Virtual SATA Hard Drive "
[    6.157848] <redpill/native_sata_boot_shim.c:133> Shimming device to vendor="SATADOM-" model="TYPE D 3SE"
[    6.167875] Write protecting the kernel read-only data: 8192k
[    6.381541] hpsa 0000:0b:00.0: scsi 5:0:0:0: added Direct-Access     ATA      WDC WD30EFRX-68E PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.382613] hpsa 0000:0b:00.0: scsi 5:0:1:0: added Direct-Access     ATA      WDC WD30EFRX-68E PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.383891] hpsa 0000:0b:00.0: scsi 5:0:2:0: added Direct-Access     ATA      WDC WD30EFRX-68E PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.385892] hpsa 0000:0b:00.0: scsi 5:0:3:0: added Direct-Access     ATA      WDC WD30EFRX-68N PHYS DRV SSDSmartPathCap- En- Exp=1
[    6.396361] <redpill/native_sata_boot_shim.c:153> Found new SCSI disk vendor="WDC     WD30EFRX-68E    0A82" model="WD30EFRX-68EUZN0        ": checking boot shim viability
[    6.398910] <redpill/boot_shim_base.c:29> scsi_is_boot_dev_target: it's not a SATA disk, ignoring
[    6.416956] <redpill/native_sata_boot_shim.c:153> Found new SCSI disk vendor="WDC     WD30EFRX-68E    0A82" model="WD30EFRX-68EUZN0        ": checking boot shim viability
[    6.418917] <redpill/boot_shim_base.c:29> scsi_is_boot_dev_target: it's not a SATA disk, ignoring
[    6.438281] <redpill/native_sata_boot_shim.c:153> Found new SCSI disk vendor="WDC     WD30EFRX-68E    0A82" model="WD30EFRX-68EUZN0        o.2": checking boot shim viability
[    6.439509] <redpill/boot_shim_base.c:29> scsi_is_boot_dev_target: it's not a SATA disk, ignoring
[    6.443896] <redpill/native_sata_boot_shim.c:153> Found new SCSI disk vendor="WDC     WD30EFRX-68N    0A82" model="WD30EFRX-68N32N0        ": checking boot shim viability
[    6.443899] <redpill/boot_shim_base.c:29> scsi_is_boot_dev_target: it's not a SATA disk, ignoring

 

I found the sataportmap/diskidxmap/sataremap docs on GitHub:
https://github.com/cake654326/xpenology/blob/master/synoconfigs/Kconfig.devices

Edited by WiteWulf

So if I'm reading the settings TCRP comes up with correctly:
 

SataPortMap=1
DiskIdxMap=10

 

Does that mean to expose only one port on the first SATA controller?

 

And start naming disks from sdq (hex 10 = decimal 16, hence the 17th letter of the alphabet, "q"), right?
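That arithmetic can be checked with a quick sketch (simplified: indices of 26 or more would roll over to sdaa and beyond):

```shell
# DiskIdxMap byte 0x10 -> decimal 16 -> disks start at the 17th letter, /dev/sdq
idx=$(printf '%d' 0x10)
letter=$(echo abcdefghijklmnopqrstuvwxyz | cut -c$((idx + 1)))
echo "sd${letter}"
# prints sdq
```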

 

I think I need:
 

SataPortMap=14

 

But I'm unsure what to do about DiskIdxMap...

 

*edit*

 

I tried with SataPortMap=14 and DiskIdxMap=0A00, hoping to start the drive naming on the second adapter (the P222) at sda, but instead I get no sda, just sdb, sdc, sdd and sde.

 

It's not showing the bootloader device, at least, but it's still stuck in a recovery loop.

Edited by WiteWulf
49 minutes ago, WiteWulf said:

So if I'm reading the settings TCRP comes up with correctly:
 

SataPortMap=1
DiskIdxMap=10

 

Does that mean to expose only one port on the first SATA controller?

 

And start naming disks from sdq (hex 10 = decimal 16, hence the 17th letter of the alphabet, "q"), right?

 

I think I need:
 

SataPortMap=14

 

But I'm unsure what to do about DiskIdxMap...

 

 

Following ThorGroup's advice, as described with Pocopico in TCRP's sataport() function: can I edit user_config.json as below before building the loader? That is, leave SataPortMap and DiskIdxMap blank, like this:

      "SataPortMap": "",
      "DiskIdxMap": ""

ThorGroup advised against mapping controller ports. This can exceed the MaxDisks limit, but there is no harm in doing so unless you have additional devices. It is connected with SATABOOT; this will create a gap/empty first slot.
This topic is now closed to further replies.