XPEnology Community

New Microserver Gen10 Plus


Goliator


Hey,

so thanks to the earlier tips I went with the 3617xs version, and it's working like a champ! I tried 918+, but had CPU problems.

I want to create a Windows VM with VMM, but Windows keeps crashing with a blue screen. Has anybody had the same experience?

The VM runs on an SSD pool in RAID 1 (2 x 1 TB), and the guest is Windows Server 2019. Thanks!
 


I tried a Gen10 Plus with an Intel G5420, but it doesn't boot from a USB stick prepared with Win32 Disk Imager as described in the following guide:

https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/

I tried the 918+ and 3615xs loaders.

Do you have any advice on preparing the USB stick?

Has anyone tested booting from the internal USB port?

Are any BIOS settings necessary? (I have AHCI set and USB enabled.)

It boots up correctly if I use an external USB DVD drive with a Linux ISO.


On 5/15/2020 at 3:52 PM, flannell said:

Received my HPE Gen10 Plus yesterday (basic model P16005421) and almost have it up and running, following a similar path to MarcoH; thank you for leading the charge!

 

DSM 6.2.2-24922 Update 6 with JUN'S Loader v1.04b

 

 

Mine did not boot from the xpenology USB stick (but it did boot from another kind of USB stick).

Did you use the internal USB slot for boot?

Which loader/ramdisk did you choose?

How did you make the USB stick bootable: Rufus or Win32 Disk Imager?

Any advice on BIOS settings?

Do you have the Xeon or the G5420?

Thank you in advance.


  • 3 weeks later...
18 hours ago, choko said:

For the Gen10+ I tried the 918+ loader.

It only sees 2 of the 4 available NICs.

I edited grub.cfg, inserting 4 MAC addresses and changing the counter from 2 to 4.

It still shows only 2 NICs at boot. Any ideas?

 

https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/

 

!!! There is still a network limit in the 1.04b loader for 918+ !!!

At the moment 918+ has a limit of 2 NICs (like the original hardware).
If more than 2 NICs are present and you can't find your system on the network, you will have to try after boot which NIC is "active" (not necessarily the onboard one), or remove the additional NICs and sort this out after installation.
You can change synoinfo.conf after install to support more than 2 NICs (with 3615/17 the limit was 8); a minimal example is sketched below. Keep in mind that a major update resets it to 2 and you will have to change it manually again, the same as when you configure more disks than Jun's default setting allows. More information is already in the old thread about 918+ DSM 6.2(.0) and here:

https://xpenology.com/forum/topic/12679-progress-of-62-loader/?do=findComment&comment=92682

I might change that later so it is set the same way as the disk count is set by Jun's patch: Synology's default maximum for this hardware is 4 disks, but Jun's patch changes it on boot to 16 (so if you have 6+8 SATA ports you should not have the update problems you used to have with 3615/17).
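
(A sketch of the synoinfo.conf change over SSH on the DSM box, not part of the original post. The maxlanport key name and the value 4 are assumptions to check against your own file; DSM keeps a copy in both /etc and /etc.defaults.)

# back up, then raise the NIC limit in both copies of synoinfo.conf
cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
sed -i 's/^maxlanport=.*/maxlanport="4"/' /etc.defaults/synoinfo.conf
sed -i 's/^maxlanport=.*/maxlanport="4"/' /etc/synoinfo.conf
reboot
# note: a major DSM update resets this to 2, so it has to be re-applied afterwards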

 


  • 3 weeks later...
On 11/5/2020 at 7:34 AM, Balrog said:

I have a Microserver Gen10+ with a Xeon E-2236 CPU under ESXi 7.0b. For me, a VM with 4 CPUs as 918+ works fine and fast, with all cores.
So maybe it is an issue with bare-metal installations.

Hi Balrog, I just bought the exact same setup to replace my bare-metal setup on an HPE Microserver Gen8 with an E3-1230v2 and 16 GB of RAM. For the Gen10+ I have installed 64 GB of ECC Micron DDR4-3200 EUDIMMs (MTA18ASF4G72AZ-3G2B1).

How is your setup handling it? I have not installed the new Xeon yet (I just got it today); it is currently running with the new memory and the Pentium G54XX that came with it. So before I install the CPU, I would love to hear what your experience has been.

Also, did you pass the HDDs through to xpenology? In the past I ran some tests with VMDKs but had issues with several setups losing disks.
On one of my other servers (a Supermicro) I run ESXi 6.7 with FreeNAS as a VM: an HBA is passed through to FreeNAS, which hosts its storage over NFS for the ESXi host on a dedicated internal network bridge. Then I run an xpenology VM, mount the NFS storage in DSM, and use only one VMDK as volume1 for local installs. This is not my production xpenology, but it works great. The reason I chose this approach is that if I want to snapshot the VM, say to test a new DSM version, I don't have to deal with large amounts of data, and even if something goes wrong, the most important data is on an NFS share that can be accessed simultaneously by various systems.
For example, I can have a VM running SABnzbd storing to the NFS share and access the same data with Plex on the Synology, etc. (just an example).

So now I am wondering how to go about my next setup: the same way I did it on the Supermicro, or just pass the disks to xpenology and share some storage via iSCSI or NFS back to the ESXi host?

I am also looking at and testing Proxmox.

Anyhow, I would love to hear what you (or other members) think of it.


Hi ferno, I have passed through the whole onboard SATA AHCI controller to the xpenology VM. The local ESXi storage is a 2 TB NVMe SSD on a QNAP QM2-2P10G1TA. It is not the fastest solution for the NVMe, but I get about 6.6 Gbit/s via iperf3, which is more than the RAID 10 of 4 HDDs can deliver.

I must add that I had to cut a hole into the right side of the case to be able to attach an additional Noctua fan (otherwise the fan on the QNAP card makes a high, annoying noise). With the additional Noctua fan the Microserver is cool and quiet enough for me.
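
(Not part of Balrog's post: for anyone who wants to reproduce the throughput check, a minimal iperf3 run looks like this, assuming iperf3 is available on both ends; the IP address is a placeholder.)

# on the DSM / NAS side: start an iperf3 server
iperf3 -s
# on a client in the same network: run a 30-second test against the NAS IP
iperf3 -c 192.168.1.50 -t 30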

Hi Balrog,

How did you do this:

"I have done a passthrough of the whole onboard SATA ahci-controller to the xpenology-vm."

Does ESXi see the SATA controller when it is in AHCI mode? I thought the only way to pass the hardware through on the Microservers was by adding a RAID card like the P222 (that is the model that comes to mind, but it might be a different one) or by doing raw device mappings.

Are you satisfied with the E-2236 Xeon, no issues so far?


38 minutes ago, ferno said:

Does ESXi see the SATA controller when it is in AHCI mode? I thought the only way to pass the hardware through on the Microservers was by adding a RAID card like the P222 or by doing raw device mappings.

 

An onboard SATA controller is a PCI device just like a RAID controller, so passthrough works the same way.

Connected disks are not visible to ESXi when the controller is passed through.

I do this too: my Intel E236 chipset-based "Sunrise Point" SATA controller is passed through to my DSM VM. You may need to add the PCI device to your VM, as it won't be visible initially.

 

[screenshot: the passed-through SATA controller added to the DSM VM as a PCI device]
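
(A sketch of what the resulting passthrough entries typically look like in the VM's .vmx file after the device is added through the ESXi UI; this is an assumption for illustration, and the PCI address 0000:00:17.0 and the ID values are examples that will differ per system.)

pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:00:17.0"
pciPassthru0.vendorId = "0x8086"
pciPassthru0.deviceId = "0xa352"

In practice the UI writes these entries for you when you add the PCI device to the VM; editing the .vmx by hand is normally not needed.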


ferno said:

Does ESXi see the SATA controller when it is in AHCI mode? Are you satisfied with the E-2236 Xeon, no issues so far?
I never used raw device mapping for xpenology.
There is a little trick to enable the passthrough; I will have a look at my notes later and write it up here.

And yes, I am very satisfied with the E-2236. It has lots of power (for a home solution, of course). But the Microserver Gen10+ is at its power-supply limits with this kind of CPU, 64 GB of RAM, the QNAP card with NVMe SSD, and 4 HDDs. Then again, I never run any kind of hard-disk benchmark in parallel with a Prime95 run across all CPU cores for hours, so this is no problem for me.
It is a pretty powerful little server and so far very reliable, but of course WITH the additional Noctua fan. Without it, the temperatures were a little too high for my taste with the additional QNAP card (plus the annoying high-pitched fan noise from the QNAP card once it gets hotter than about 38-40 °C).

Hi Balrog,

 

Yep, I am going to install it tomorrow; the 64 GB is already in.

I will just run it with 4 HDDs, a USB stick, and a Seagate One Touch attached to the USB 3 port, like they showed on the STH site.

The only expansion I would love is an Nvidia P400 for transcoding, but I don't want to stress the PSU, and since this server will run 24/7 I don't want it to draw more than 70 W when not handling heavy workloads. My Gen8 now draws 50 W with 2 DIMMs, 4 internal drives, the internal USB stick, and an external USB drive.

My ESXi ML310 Gen8 is about the same with 32 GB and 4 x 10 TB drives, some 7200 rpm Seagates (noisy bastards compared to the WD Reds in my other servers).

If you have the notes, it would be greatly appreciated!


6 hours ago, ferno said:

If you have the notes, it would be greatly appreciated!

As promised, here are some notes I chose to keep:

 

Enable passthrough of the onboard Cannon Lake AHCI controller:

 

- enable SSH access in ESXi and log in as root

- edit the file "/etc/vmware/passthru.map" (note: it is NOT "passthrough.map"):

vi /etc/vmware/passthru.map

- add this at the end of the file:

# Intel Cannon Lake PCH-H Controller [AHCI mode]
8086  a352  d3d0     false

- reboot ESXi

- log in to the ESXi GUI and enable passthrough of the Cannon Lake AHCI controller

[screenshot: the Cannon Lake AHCI controller marked for passthrough in the ESXi PCI devices list]

- reboot ESXi again

- now you can attach the Cannon Lake AHCI controller to a VM (as seen in the screenshots from @flyride)
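
(Not part of Balrog's notes: a quick way to confirm the controller's vendor/device IDs from the ESXi shell before editing passthru.map. A minimal sketch; the grep pattern is just an example and the IDs can differ on other boards.)

# list PCI devices and find the onboard AHCI controller; its Vendor ID and
# Device ID fields should match the 8086 / a352 values used above
esxcli hardware pci list | grep -i -A 15 "Cannon Lake"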

 

Pictures of an additional Noctua Fan

- these pictures are my template for my mods (not my pictures!): Noctua NF-A8-5V Fan Pictures on HPE Microserver 10+

- I added an additional fan controller like this one: 5V USB Fan Controller

This allows me to adjust the Noctua steplessly from 0 rpm to maximum rpm to hit the sweet spot of maximum cooling with minimum noise. :D

 

I think that without an additional PCIe card (specifically the QNAP QM2-2P10G1TA, whose tiny fan is loud under load but quiet at idle) it is not really necessary to add the extra fan, but I am including this information for the sake of completeness.

 


Hi Balrog, which loader did you use? I saw that you have an NVMe card from QNAP, so it seems it should be the 918+.

Any issues with the CPU? As only one core should be seen by the 918+?

I switched from 918+ to 3617xs in order to use all cores of the Xeon processor, but I can't use an NVMe card, as the 3617xs will not recognize it.

Hi @sebg35! I think I have to make some points clearer:
- I use the 2 TB NVMe SSD as local storage for the ESXi host, as I have a bunch of VMs and not only the xpenology VM itself.

- I use a 256 GB part of the NVMe as a VMDK for "volume1" (without redundancy and formatted with btrfs) in xpenology, e.g. for Docker containers.

- I do regular backups of the xpenology VM with Veeam, so I have some sort of redundancy/backup of "volume1" (not in real time, but that's okay for me as the data of my Docker containers does not change that much).

- The main data is on the 4 x 14 TB HDDs in RAID 10. The risk of data loss during a rebuild with RAID 5 is way too high for me with such high-density HDDs, and the speed is awesome.

- So I use NO SSD cache for now.

- I use 4 CPU cores of the Xeon E-2236 for the xpenology VM.

- All 4 cores are used at full power and I have not seen slowdowns anywhere.

- It does not make sense to give all 12 CPU threads (6 physical cores + 6 hyperthreads) to the xpenology VM, as I also need free CPU resources for my other VMs and have to make sure the "CPU ready times" do not climb. High ready times under ESXi occur when CPU cores are over-provisioned across multiple VMs and some VMs want the CPU but do not get the resources, which are blocked by other VMs (see the esxtop sketch after this list).
In short: more CPU cores for a VM does not necessarily mean more speed or more power in every case; there is a break-even point beyond which attaching more cores actually makes the VM slower.

- Yes, I use the 918+ image.
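
(Not part of Balrog's post: one way to keep an eye on CPU ready time is esxtop on the ESXi host. A minimal sketch; the output path is just an example.)

# interactive: run esxtop, press "c" for the CPU view and watch the %RDY column
esxtop
# batch mode: capture 10 samples at 5-second intervals for later analysis
esxtop -b -d 5 -n 10 > /tmp/esxtop-cpu.csv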

Hi Balrog, this is pretty much what I am trying to accomplish with my setup.

But I don't think I need the redundancy of RAID 10; most data is pretty much backed up on other NAS drives (TrueNAS gets synced every Sunday and the important files are synced with my OneDrive for Business in near real time).

Thank you for the how-to. I am still in discovery mode, switching back and forth between Proxmox and ESXi, and in the meantime fiddling with bhyve on TrueNAS and KVM on OMV.

I will probably end up going back to ESXi. I want to run ESXi from an SSD attached to the USB port and keep the actual VMs on it as well. I am having some trouble making the rest of the SSD that ESXi runs from (500 GB) available to ESXi, but I found a guide that might work.

The tricks and guides that involve stopping the USB arbitrator etc. are not what I am looking for; that works and I have already played with it. I just want the USB SSD that the ESXi 7 install resides on to also be available as a datastore.

Anyway, I installed the E-2236 today and it seems to be running fine with the 64 GB of RAM etc.

Can I ask why you installed the Noctua? Was the server running too hot? I have an E3-1230v2 in each of my two Gen8s and never had any issues with high temps on the stock cooling.

I have a couple of these lying around and might put them in front of the grill to push some extra air through:

https://www.amazon.nl/Infinity-dual-USB-ventilator-UL-gecertificeerd-Playstation-computerbehuizing/dp/B00IJ2J2K0/ref=asc_df_B00IJ2J2K0/?tag=nlshogostdde-21&linkCode=df0&hvadid=472122620630&hvpos=&hvnetw=g&hvrand=13756350533054949332&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9064941&hvtargid=pla-457643462363&psc=1

 

One other question: is the E-2236 powerful enough to transcode two 4K streams with Plex, or will I need to add a P400? And if so, do you know whether that will use too much power? The TDP of the P400 is 30 W.

Thank you for all your info so far!


The ESXi boot device is read-only except when updating, so it is ideal for a pen drive or cheap USB flash storage.

The scratch partition gets a lot of read/write activity, so you are rolling the dice if you don't use an SSD-class device for it.

Best practice is to boot from a USB pen drive or DOM and save the NVMe exclusively for scratch; then you don't have to fuss with the USB arbitrator etc. or do anything nonstandard with your scratch volume.
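
(A sketch of how a persistent scratch location can be pointed at an SSD datastore from the ESXi shell; the datastore name is an example and a reboot is needed for the change to take effect.)

# create a scratch directory on the SSD datastore and point ESXi at it
mkdir /vmfs/volumes/nvme-datastore/.locker
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/nvme-datastore/.locker
reboot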

