XPEnology Community

GnoBoot 4.3.3827 ESXi Performance


kinaholm


Hi Xpenology Users

 

I have been using Trantor's XPEnology build directly on my N54L for some time now, but I've decided to switch to ESXi 5.5 with GnoBoot 4.3.3827 installed.

However, even though I'm passing my 3TB WD Red and 3TB WD Green through via RDM, performance is very sluggish: around 50 MB/s for reads and writes, even when I'm transferring files within DSM itself.

 

What performance do you get?

Are there any tweaks I can try to make it perform faster?


 

What version are you using? Was it the same setup before you switched? How many green drives do you have? IMHO, green drives don't perform well.

 

I'm getting 90 to 100 MB/s on a 5.1 VM using the virtual LSI parallel SCSI controller, though it's running on SSDs (RAID 0) with a 9211-8i RAID controller. I even tried the paravirtual driver, but it doesn't perform as well as the virtual LSI controller.


It was performing fine running DSM 4.3.3810 bare metal on my N54L.

I was running 2x2TB WD Greens in RAID 0 and getting more than 100 MB/s,

and the same with the 3TB WD Green.

Now I have gotten a 3TB WD Red, but it's much slower than what I expected.

I've tried both the virtual SAS and parallel controllers, but to no avail.

It seems like I am getting a huge amount of IO wait, as you can see from the screenshot:

gnobootesxislow.PNG

 

I'm running GnoBoot alpha 8 with DSM 4.3-3827 on ESXi 5.5, N54L custom edition.


You were running fine on bare metal before, and now you're on a virtual machine, so there are a lot of factors you need to consider. Here are a few that I know of:

 

  • Number of VMs running on host
  • VM vCPU vs pCPU count
  • Proper VM sizing
  • Host swapping
  • VM network (e1000X vs vmxnet3, jumbo frames)

 

There's also a bug in alpha8 that incorrectly sets the IO scheduler, which is fixed in the next release. Try manually switching back to the cfq IO scheduler for all your attached disks and see if that helps:

 

echo cfq > /sys/block/sdXX/queue/scheduler
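To apply that to every attached disk in one go, something like the following sketch should work from the DSM shell (run as root; the `SYS` variable is only there so the loop can be dry-run against a scratch directory, and the `sd*` glob assumes your disks show up as sda, sdb, and so on):

```shell
# Switch every attached sd* disk back to the cfq scheduler and print
# the result; the active scheduler is shown in [brackets].
SYS="${SYS:-/sys/block}"
for f in "$SYS"/sd*/queue/scheduler; do
    [ -e "$f" ] || continue      # skip if no sd* disks matched the glob
    echo cfq > "$f"              # select cfq for this disk
    echo "$f: $(cat "$f")"
done
```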

 

If you can get gnoboot running on bare metal, then we can compare the performance. You can also troubleshoot from the ESXi host by running esxtop.

 

Note: VCAP-DCA 4/5 here :wink:

 

 

EDIT:

 

You are using iSCSI, right?

 

iSCSI doesn't perform well on writes if you are using a file backend, and iSCSI performance has been an issue for many DSM users; try googling it.


I'm not running any other VMs on the machine at the moment.

I'm using default settings for everything else (I think); if you have any tips, I'll gladly try them out.

I'm not using jumbo frames (and never have).

 

I will try the scheduler fix and see if it changes anything.

 

I'm not using iSCSI right now; I tried it, but it was even slower than FTP, NFS, and Samba.

 

And by the way, gnoboot: thanks for the quick answers and support! Your work is highly appreciated! :smile:


I'm only getting ~50 MB/s too, with paravirtual. I used to get 100 MB/s with paravirtual on releases prior to gnoboot. Then again, that was on DSM 4.2 and ESXi 5.1; I switched to ESXi 5.5, gnoboot, and the DSM 5 beta at the same time, so I'm not sure which is the culprit.


I'm using a dual-port Intel NIC, which is compatible.

The N54L should also be compatible; a lot of users are running it, at least.

I just tried ESXi 5.1 U2, which didn't change anything.

 

I will now try GnoBoot bare metal and see how that works. If it performs as it should, I'll just run DSM bare metal and not worry about VMs.


I have compared GnoBoot 4.3.3827 performance with a Win2008 server, both running on ESXi 5.5 (storage on SSD).

 

Client: FileZilla client running on macOS (native), over a 1 Gb network.

 

I ran esxtop on the ESXi host while transferring a 4.6 GB ISO image via FTP to DSM and to the Win2008 server running FileZilla Server.

 

DSM:

KAVG/cmd = 82

DAVG/cmd = 85

QUED = 33

DQLEN = 31

 

Win2008:

KAVG/cmd = 0.01

DAVG/cmd = 6.79

QUED = 0

DQLEN = 31

 

Win2008 maxes out the 1 Gbit network connection. With DSM I get approx. 75 MB/s.

There seems to be some kind of storage bottleneck with DSM.

 

Both are running pvscsi.
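A quick way to tell whether the bottleneck is the DSM storage stack or the network path is to benchmark the volume locally, bypassing FTP entirely. A minimal sketch, assuming the usual `/volume1` DSM volume mount (run via SSH as root; adjust the path for your setup):

```shell
# Write a 1 GB test file with an fsync at the end, so the figure
# reflects data actually reaching the disks, not just the page cache.
dd if=/dev/zero of=/volume1/ddtest bs=1M count=1024 conv=fsync
# Drop the page cache so the read test hits the disks, not RAM.
sync; echo 3 > /proc/sys/vm/drop_caches
# Read the file back to measure sequential read throughput.
dd if=/volume1/ddtest of=/dev/null bs=1M
rm -f /volume1/ddtest
```

If these local numbers are also far below what the same disks did on bare metal, the problem is in the virtual storage path rather than the network.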



Try using a different controller. I was able to saturate a 1 Gb network link using the virtual LSI parallel controller on alpha5. My worst was a 1 MB/s CIFS transfer using an LSI 9211-8i via VT-d on alpha10.2. You should be getting more than that considering it's an SSD; I will do more tests next week. :sad:



 

Yes, with the LSI parallel controller I get better results, but not as good as with pvscsi on Win2008.

 

esxtop:

KAVG/cmd = 2.60

DAVG/cmd = 79.80

QUED = 1

 

approx. 85MB/s (FTP)

 

I also tried a virtual machine running Ubuntu (pvscsi) and got approximately the same results as with Win2008.

 

So it seems there is something fishy with the pvscsi module in gnoboot?

I'm still using alpha 10. Can I get access to alpha 10.2?

 

Thanks,

Nicklas


Is your Ubuntu using MD Raid and LVM as well? Did you change its IO scheduler to noop?

 

The vendor has customized a lot of the underlying kernel subsystems for their product. That might be the reason (bugs???) why we're not getting the same performance results as other Linux distros. DSM's disk power-saving feature will also contribute to performance degradation with drivers (e.g. pvscsi) that don't support it.

 

Do you have copies of any older boot images? You might as well compare their results, if that's OK with you. :wink:


I'm at work but will check the Ubuntu installation when I get home. However, I have not changed the IO scheduler, so it's the default one.

 

I don't have any older copies, but I can try one if you send me an image.

Also, I would like to try alpha10.2 if I can get the password. :wink:

 

I will check the power-saving settings in DSM. Also, I noticed there is a newer vmw_pvscsi version available. It seems you are running 1.0.1.0-k? Perhaps it would be interesting to try the 1.0.2.0 version?
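For reference, the loaded driver version can be read from inside the guest; this assumes the module is named `vmw_pvscsi`, as it is in mainline kernels (paths and tooling may differ on DSM):

```shell
# Report the version string of the loaded pvscsi module;
# fall back to sysfs if modinfo is not available on the box.
modinfo vmw_pvscsi 2>/dev/null | grep '^version' \
    || cat /sys/module/vmw_pvscsi/version
```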


2 weeks later...

 

Hi gnoboot,

 

Can I have the password for 10.4? I will continue troubleshooting the performance on ESXi.

 

Thanks

