XPEnology Community

HP Gen8 ESXi, passthrough controller, no disks visible?


NoFate


hi

 

I have an HP Gen8. I set the BIOS to AHCI legacy mode so I can boot from my SSD attached to the ODD port.

That works fine.

In ESXi I enabled passthrough for the B120i SATA controller and attached the PCI device (the controller) to the VM.

When I boot Synology, it doesn't see the disks attached to that controller.

 

How can I get those disks to show up?

 

I want this setup so I can get SMART status from the drives; with RDM that's not possible.
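(The idea being that with a passed-through controller I could just read SMART from the DSM shell, something like this; the device name is only an example:)

    # list the disks DSM sees, then query SMART on one of them
    ls /dev/sd*
    smartctl -a /dev/sda
    # if the drive sits behind a SAT-capable bridge, force SAT pass-through
    smartctl -a -d sat /dev/sda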

 

thnx


  • 3 weeks later...

The 6.0 and 6.1 loaders only support a limited number of add-on controller cards in the default configuration, and it's likely that the HP drivers are missing.

@polanskiman and @ig-88 have created extra.lzma files with additional drivers for various NICs and HDD controllers; @polanskiman's is currently offline while he checks the build. If you can identify the driver your card needs, you could also create your own custom extra file by following @ig-88's tutorial on injecting drivers. I did that for a card I have and it worked.
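If you go the custom route, the rough shape of it is to unpack extra.lzma, drop the .ko in, and repack. This is just a sketch; the exact directory layout and the module name (mydriver.ko is a placeholder) depend on the loader and card, so follow the tutorial for the real steps:

    # unpack the loader's extra.lzma into a working directory
    mkdir extra_work && cd extra_work
    lzma -d < ../extra.lzma | cpio -idmv

    # add the kernel module for your controller and register it for loading
    cp /path/to/mydriver.ko usr/lib/modules/
    vi etc/rc.modules

    # repack into a new extra.lzma for the loader
    find . | cpio -o -H newc | lzma -9 > ../extra_custom.lzma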


On 8/30/2017 at 1:55 PM, NoFate said:

Is no one using passthrough for the SATA controller on an HP Gen8? :-) (AHCI mode)

 

thnx

 

I use it :smile:

Where are you booting ESXi from? You can't boot ESXi from a disk attached to the ODD port, since the ODD port is part of the 6-port SATA controller. If you boot ESXi from the ODD port and then, during the hypervisor boot, tell it to detach that controller and hand it to the VM, you're gonna have a bad time :grin:

I boot ESXi (embedded install) from a USB stick on the internal USB header, have a second cheap PCIe SATA3 controller with 2 ports where my datastores (SSD) live, and have the whole Cougar Point controller passed through to the XPenology VM. Everything works fine :smile: ESXi totally ignores that controller and the disks attached to it.
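If you want to compare, the passthrough bits end up in the VM's .vmx looking roughly like this. The PCI address and device/vendor IDs below are just examples (ESXi writes the real ones when you add the PCI device in the UI), and the memory lines are there because passthrough needs all guest RAM reserved:

    pciPassthru0.present = "TRUE"
    pciPassthru0.id = "00:1f.2"
    pciPassthru0.deviceId = "0x1c02"
    pciPassthru0.vendorId = "0x8086"
    sched.mem.min = "1024"
    sched.mem.pin = "TRUE"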

 

PS: in the DSM GUI, disk 1 is the virtual boot disk (synoboot.vmdk); it's there but can't be read or written from inside DSM. Disks 2-5 are the 4 front HDD bays, and disk 6 is empty but will be populated with another WD Red.

Screenshot from 2017-09-07 14-04-31.png

Screenshot from 2017-09-07 14-08-18.png

Screenshot from 2017-09-07 14-08-15.png


Hi pigr8, thanks for the detailed explanation.

 

Well, you can boot from the SSD in the ODD port :-). In the BIOS you have selected AHCI, but you can also select AHCI legacy mode.

When you reboot, go back into the BIOS: you then have 2 Cougar controllers and can change the boot order to Cougar 2, then Cougar 1. That way you can boot ESXi from the ODD port.

 

But anyway, don't try it, it really gives bad performance. I'm going to buy an extra controller.

 

I have another question though, since you have the same hardware as me: I get terribly slow SMB speeds when running under ESXi; on bare metal it all works fine.

See my post here and have a look at the videos I created.
In my last post in that thread I solved it on my other ASRock system by lowering the memory from 4 GB to 2 GB, but I didn't try 1 GB. I see you are using 1 GB of memory in your VM?

 

Also, last question: what ESXi are you using? The HPE version? Can you post your ISO filename? Have you disabled vmw_ahci?
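(I mean the known 6.5 workaround of turning off the native AHCI driver from the ESXi shell and rebooting the host, roughly:)

    # check the AHCI driver state, then disable the native 6.5 driver
    esxcli system module list | grep ahci
    esxcli system module set --enabled=false --module=vmw_ahci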

 

 

 

Can you try that? Do you run 5.2 or 6.1? Could you maybe make an export of your VM?

 

thnx in advance!!!

 

 


Oh, I didn't know you could split the Cougar controller in half. I didn't dig that deep into it since I wanted to use 5 disks in DSM anyway, so I had to pass through the whole controller :grin:
 

Bad performance? I didn't notice any issue. I actually had DSM installed on that array (SHR1/btrfs) when it was running bare metal; when I switched to a VM the system was intact as before, it just booted and migrated in a couple of seconds. On the second reboot everything was running, and performance-wise (I use SMB and NFS for different clients) it's like bare metal: share writes at 115 MB/s, with clients and the DiskStation in different VLANs. No issues.

 

Yes, I use only 2 cores and 1 GB for XPenology since that's more than enough for it. All it has to do is run a couple of packages (SickRage and CouchPotato), the shares with their authentication, and btrfs snapshots for the array, nothing else. Memory never goes above 70% in use (e.g. at the moment it's 43%), and since the memory has to be fully preallocated for passthrough to work, giving it more than 1 GB is pointless IMHO. If I add Surveillance Station I'll see whether it needs more memory; giving it 2 GB is overkill, RAM is precious even with 16 GB in the Gen8.

I use the latest 6.5.0 U1 (build 5969303) from HPE, you can find it here: https://my.vmware.com/web/vmware/details?productId=614&downloadGroup=OEM-ESXI65U1-HPE. It's running on the Gen8 with an E3-1265L CPU and 16 GB RAM; the loader is 1.02b for the DS3615xs with the latest DSM 6.1.3-15152 Update 4 running perfectly.

 

The synoboot is a 50 MB vmdk on the SSD datastore (thick, independent, persistent); it's only for booting the system :smile: The 4 front drives are untouched.

 

The server is in production and so is the DSM VM. Next time I reboot or power off I could make an export and share it, but it's nothing fancy at all :grin:

 


OK, thanks for the details. I'm going to test it with 1 GB as well; maybe I had performance issues because of the legacy AHCI mode. I used 2 GB for Plex transcoding.

Maybe 1 GB is enough too.

But I still don't understand where you installed DSM. You have the .vmdk file (disk 1) that points to your synoboot.img, right? That's indeed just to boot the system.

But in the first DSM wizard you need to select a disk to install the .pat file on? I don't see that disk in your screenshots; that's why I thought you used one of your 4 bay drives.


I moved Plex out of XPenology and run it in a different VM in a different VLAN; I find it cleaner that way :)

DSM is installed on the drives that form the array: every disk has a system partition and it's mirrored. The vmdk only contains the loader, which is loaded into RAM at boot. When you boot a clean system for the first time, you do select where to install DSM, and of course it has to be on the disks you are passing through to the VM. I didn't install anything because DSM was already installed on those disks; I transplanted the whole system from a running bare-metal box to a running VM with minimal downtime and no need to reconfigure.

So yes, DSM is installed on all 4 disks and not only on 1 of them. If you boot a real Synology with only 1 disk it will install DSM on it; if you later add another 2-3-4 disks and DSM initialises them, it copies the system onto every disk for redundancy. That's why even if you lose the first original disk the system will boot anyway :)
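You can actually see this over SSH: the DSM system partition is a small RAID1 that spans every initialised disk. Just a quick check, device names will differ per system:

    # md0 is the DSM system partition, mirrored across all data disks
    cat /proc/mdstat
    mdadm --detail /dev/md0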


OK, it's clear now :-)

 

I'm going to make a second hard disk my datastore and use it for my DSM installation, so I can start testing without connecting the 4 drives first. That should also work.

I didn't know that Synology copies the system onto every disk though :smile:

 

thnx a lot for the help!!

 

Also, you posted that your 50 MB vmdk disk is thick, independent, persistent. Shouldn't it be non-persistent?

I know that on 5.2 it needs to be non-persistent; maybe it's different on 6.x.


How exactly is your setup configured? I mean, where is ESXi booting from, and where is your current datastore? What CPU are you using that gives you VT-d?

My setup is the cleanest I could figure out:
- the internal USB header with ESXi installed as embedded (it always installs that way on a flash drive)
- a PCIe card supported by ESXi (an ASMedia 1061) with 2x SATA3 ports, 1 SSD and 1 HDD; the SSD is the main datastore for the VMs and the HDD is for snapshots and ISO files
- the Cougar Point is passed through to the XPenology VM, the 4 front bays are populated, and the 5th (ODD) port will be too as soon as I can find a WD Red cheaply

That way, if I have to mess around with ESXi, I can swap out the USB drive and reflash it as needed without losing the VMs in the datastore if something goes wrong. That's why I didn't install ESXi on the SSD.
The USB3 Renesas controller is passed through to another VM (router). I wanted to pass through the USB2 controller as well (to XPenology) but it went badly :grin: problems with the ESXi boot.


Yes, guides say it has to be non-persistent, but I ran into trouble on every reboot of the VM where DSM wanted to repair/reinstall. After some testing I figured out that setting the synoboot to persistent fixed the problem; I don't really know if there is a cleaner way to fix this.
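For reference, the synoboot lines in my vmx look roughly like this (the controller/slot number is just an example, yours may sit on a different bus):

    sata0:0.present = "TRUE"
    sata0:0.fileName = "synoboot.vmdk"
    sata0:0.mode = "independent-persistent"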


Well, at the moment I'm still running bare metal, because I had the SMB copy speed issues only under ESXi and couldn't fix them.

But I'm going to try again now, starting with 6.1.

So I'm going to install ESXi on the SD card slot instead of the USB stick you used; that should not be an issue.

I'm going to put in this card: http://www.sybausa.com/index.php?route=product/product&product_id=156 . I have it lying around; it's a slower x1 lane card, but it might work too, otherwise I'll buy that ASMedia 1061 you have :-)

So it's going to be a similar setup.

Good tip about persistent vs non-persistent; I think I had the same issue on 6.1 on my older ASRock system too.

 

thnx for the details


  • 2 weeks later...

I did something unusual and it works fine.

 

I boot ESXi 6.5 from USB, as usual

but I don't use any local storage. I have been using NFS storage from a Synology, with the VMware plugin for snapshots; this solves my backup and replication problems. The performance is good, around 100 MB/s on a gigabit LAN (two NICs on the MicroServer, four on the Synology, with a LAG-capable switch in the middle). Because of the problems with ESXi drivers I found this setup better than using a local magnetic drive (not an SSD). As long as the Synology NFS server is running, I can reboot ESXi fine and it comes back up with the NFS datastore mounted.
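Mounting the Synology export as a datastore is a single command from the ESXi shell (the host IP, export path and datastore name below are just examples):

    # mount an NFS v3 export from the Synology as an ESXi datastore
    esxcli storage nfs add --host=192.168.1.50 --share=/volume1/esxi --volume-name=nfs-datastore
    # verify it is mounted
    esxcli storage nfs list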

 

I have two VMs using this NFS datastore (vmdk files sitting on the NFS): one Windows 10 (which I access from my Mac using Microsoft Remote Desktop Beta for Mac) and a macOS Sierra running the Server app (I use Unlocker 2.0.9 to do that), which serves cached apps, NetInstall files, and other stuff. I don't need to log in to this machine, but if I did, I'd simply use the native macOS Screen Sharing app. For my needs it works fine.

 

And of course there is an XPEnology VM running on this NFS (just the boot disk) with a 9211-8i HBA (a flashed M1015 card) passed through, which is the main purpose of this MicroServer. All four drives and the XPEnology setup have been running perfectly for years... I have a Xeon E3-1220 v2 (not the L version) with 2 cores disabled just to be safe, but I can easily enable a 3rd core without any problem; enabling a 4th core causes high temps and needs a better cooling solution. But 2 cores are fine for my needs; most of the time the 2 other VMs (Windows and Mac) are idling.

 

I think I'm going to try removing the HBA card and passing through the B120i in AHCI mode directly to XPEnology. There is no disk attached to that controller now, and I think I can move the cable from the LSI to the B120i without breaking the XPEnology setup (only adjusting the XPEnology vmx file to point at the correct PCI controller).
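The plan, roughly: look up the B120i's PCI address on the host and point the existing passthrough entry in the vmx at it (the address and IDs here are only examples):

    # on the ESXi host, find the AHCI/SATA controller's PCI address
    lspci | grep -i sata
    # then, with the VM powered off, edit the vmx so the entry references the B120i, e.g.
    #   pciPassthru0.id = "00:1f.2"
    #   pciPassthru0.deviceId = "0x1c02"
    #   pciPassthru0.vendorId = "0x8086"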

 

I expect this move to save around 10 W of power and lower the temps inside the case a little bit, without a performance penalty. These M1015s run hot...

 

It's unusual, but it works. I didn't find anybody on the web using ESXi 6.5 without a local datastore when I researched it.

 

UPDATE: 

Yep, I removed the 9211-8i (flashed M1015) controller today and passed through the native Cougar controller instead, and it works fine. AHCI mode, write cache enabled (you need to check via the terminal, the Control Panel didn't enable the option; see the quick check at the end of this post). No drive in the ODD bay. This is amazing: it boots ESXi 6.5 from USB, no local drive for ESXi, uses an NFS datastore for the 2 VMs (macOS and Windows 10) and boots XPEnology with the passed-through controller. Everything is working fine, autostart works, no problems at all. But I noticed some strange things since I moved from the LSI to the native Cougar...

 

1) I populated bays 1-4 on the MicroServer and the controller BIOS sees all the drives (including the 4 TB and 6 TB ones), but DSM shows them as drives 2 to 5. When I used the LSI controller it saw the drives in bays 1 to 4 but in reverse order (drive 1 in bay 4, etc.). Now it sees drive 1 in bay 2, drive 2 in bay 3, and so on. Strange, but it seems to be cosmetic only.

 

2) The fan speed and temps overall are a little lower now; I think the LSI controller ran hotter and affected the air inside the case.

 

3) Network performance seems to be a little better; file transfers and benchmarks are showing numbers around 2% higher than before. CPU usage looks a little lower too. This is a surprise for me, I thought the 9211-8i would be faster than the native Intel Cougar chipset controller.
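About the write cache I mentioned in the update: a quick way to check and enable it from the DSM shell (the device name is only an example):

    # query the drive's write-cache state, then enable it if it is off
    hdparm -W /dev/sdb
    hdparm -W1 /dev/sdb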

 


