kinaholm | Posted February 27, 2014 | #1

Hi Xpenology users,

I have been using Trantor's Xpenology build directly on my N54L for some time now, but I've decided to switch to ESXi 5.5 with gnoBoot 4.3.3827 installed. However, even though I'm using RDM directly on my 3TB WD Red and 3TB WD Green, performance is very sluggish: around 50 MB/s for reads and writes, even when I'm transferring files within DSM.

What performance do you get? Are there any tweaks I can try to make it faster?
gnoboot | Posted February 27, 2014 | #2

> kinaholm: "... performance is very sluggish.. Around 50 MB/s for reads and writes ..."

What version are you using? Was it the same setup before you switched? How many Green drives do you have? IMHO, Green drives don't perform well.

I'm getting 90+ to 100 MB/s on a 5.1 VM using the virtual LSI parallel SCSI controller, though it's running on SSDs (RAID 0) and a 9211-8i RAID controller. I even tried the paravirtual driver, but it doesn't perform as well as the virtual LSI controller.
kinaholm | Posted February 28, 2014 | #3

It was performing fine running DSM 4.3-3810 bare metal on my N54L. I was running 2x 2TB WD Greens in RAID 0 and getting more than 100 MB/s, and the same with the 3TB WD Green. Now I have gotten a 3TB WD Red, but it's much slower than what I expected. I've tried both the virtual SAS and parallel controllers, but to no avail. It seems like I am getting a huge amount of IO wait, as you can see from the screenshot.

I'm running gnoBoot alpha8 with 4.3-3827 on ESXi 5.5, N54L custom edition.
gnoboot | Posted February 28, 2014 | #4

Since you were running fine on bare metal before and are now on a virtual machine, there are a lot of factors you need to consider. Here are a few that I know of:

- number of VMs running on the host
- VM vCPU vs. pCPU count
- proper VM sizing
- host swapping
- VM network (e1000x vs. vmxnet3, jumbo frames)

There's also a bug in alpha8 that incorrectly sets the IO scheduler; it is fixed in the next release. Try manually switching all your attached disks back to the cfq IO scheduler and see if that helps:

echo cfq > /sys/block/sdXX/queue/scheduler

If you can get gnoboot onto bare metal, then we can possibly compare the performance. You can also troubleshoot from the ESXi host by running esxtop.

note: VCAP-DCA 4/5 here

EDIT: You are using iSCSI, right? iSCSI doesn't perform well on writes if you are using a file backend, and iSCSI performance has been an issue for many DSM users; try to google it.
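The one-liner above targets a single disk. A minimal sketch that sweeps every attached sd* disk could look like this (an illustration, not gnoboot's fix: it assumes a standard Linux sysfs layout, needs root to write, and skips disks whose kernel does not offer cfq):

```shell
#!/bin/sh
# Hypothetical sketch: switch every attached SCSI disk (sda, sdb, ...)
# to the cfq IO scheduler, then print the active scheduler per disk.
for sched in /sys/block/sd*/queue/scheduler; do
    [ -e "$sched" ] || continue          # no sd* disks on this system
    grep -q cfq "$sched" || continue     # this kernel does not provide cfq
    echo cfq > "$sched"                  # requires root
    echo "$sched: $(cat "$sched")"       # active scheduler is in [brackets]
done
```

Note that this setting does not survive a reboot; it would have to be reapplied (e.g. from a startup script) until the fixed release lands.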
kinaholm | Posted February 28, 2014 | #5

Not running any other VMs on the machine at the moment. I'm using default settings for everything else (I think); if you have any tips, I'll gladly try them out. I'm not using jumbo frames (and have never used them). I will try the scheduler fix and see if it changes anything. Not using iSCSI right now; I tried it, but it was even slower than FTP, NFS, and Samba.

And BTW, gnoboot: thanks for the quick answers and support! Your work is highly appreciated!
kinaholm | Posted February 28, 2014 | #6

Tried the scheduler fix. It's not really doing anything other than letting me transfer the initial ~300 MB at around 120 MB/s; then it gets very slow and shows crazy amounts of IO wait. Here is a screenshot of esxtop while transferring files over FTP.
gnoboot | Posted February 28, 2014 | #7

Try this: VMware KB 1008205. BTW, are you using an officially supported ESXi 5.5 NIC and disk controller? You might be using unsupported desktop drivers, which were dropped starting with the 5.5 release and could be causing performance issues on your host.
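As a starting point for that question, something like the following lists which drivers ESXi has actually bound to the NICs and storage adapters, so they can be checked against the VMware 5.5 HCL. This is a sketch meant for the ESXi host's shell; `esxcli` only exists there, so it bails out gracefully anywhere else:

```shell
#!/bin/sh
# Sketch: show the NIC and HBA drivers in use on an ESXi host,
# for comparison against the VMware Hardware Compatibility List.
if ! command -v esxcli >/dev/null 2>&1; then
    echo "esxcli not found -- run this on the ESXi host"
    exit 0
fi
esxcli network nic list           # NIC name, driver, link state
esxcli storage core adapter list  # HBA name, driver, description
```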
Diverge | Posted February 28, 2014 | #8

I'm only getting ~50 MB/s too, with paravirtual. I used to get 100 MB/s with paravirtual on releases prior to gnoboot. Then again, that was on DSM 4.2 and ESXi 5.1. I switched to ESXi 5.5, gnoboot, and the DSM 5 beta at the same time, so I'm not sure which is the culprit.
kinaholm | Posted February 28, 2014 | #9

I'm using a dual-port Intel NIC which is compatible. The N54L should also be compatible; at least a lot of users are using it. I just tried ESXi 5.1 U2, which didn't change anything. I will now try gnoBoot on bare metal and see how that works. If it's as fast as it should be, I will just run DSM bare metal and not worry about VMs.
gnoboot | Posted March 1, 2014 | #10

Are you using the tg3 (Broadcom) driver on your ESXi host?
nylund | Posted March 9, 2014 | #11

I have compared gnoBoot 4.3.3827 performance with a Win2008 server, both running on ESXi 5.5 (storage on SSD). Client: FileZilla running natively on macOS, over a 1 Gbit network. I ran esxtop on the ESXi host while transferring a 4.6 GB ISO image via FTP to DSM and to the Win2008 server running FileZilla Server.

DSM:     KAVG/cmd = 82,   DAVG/cmd = 85,   QUED = 33, DQLEN = 31
Win2008: KAVG/cmd = 0.01, DAVG/cmd = 6.79, QUED = 0,  DQLEN = 31

Win2008 is maxing out the 1 Gbit network connection; with DSM I get approx. 75 MB/s. There seems to be some kind of storage bottleneck with DSM. Both are running pvscsi.
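For anyone reading those counters: DAVG is latency at the device, KAVG is time spent queued inside the VMkernel, and the guest-visible latency GAVG is their sum. A tiny illustrative calculation using the DSM numbers above (GAVG itself is derived here, not taken from the esxtop capture):

```shell
# GAVG = KAVG + DAVG; with KAVG = 82 ms and DAVG = 85 ms, the guest
# sees roughly 167 ms per command -- pathological for SSD-backed storage,
# and the high KAVG points at queuing in the virtualization layer.
awk 'BEGIN { kavg = 82; davg = 85; printf "GAVG = %d ms\n", kavg + davg }'
```

A healthy KAVG is normally well under 1 ms (as in the Win2008 row), so KAVG = 82 is the standout anomaly here rather than the device latency alone.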
nylund | Posted March 9, 2014 | #12

Now I tried copying the file locally, within the same filesystem, on both Win2008 and DSM. Almost the same results in esxtop.
gnoboot | Posted March 9, 2014 | #13

> nylund: "Win2008 is maxing out the 1 Gbit network connection ... There seems to be some kind of storage bottleneck with DSM."

Try another controller; I was able to saturate a 1 Gbit network link using the virtual LSI parallel controller on alpha5. My worst result was a 1 MB/s CIFS transfer using an LSI 9211-8i via VT-d on alpha10.2. You should be getting more than that considering it's SSD; I will do more tests next week.
nylund | Posted March 9, 2014 | #14

> gnoboot: "Try another controller; I was able to saturate a 1 Gbit network link using the virtual LSI parallel controller on alpha5."

Yes, with the virtual LSI parallel controller I get better results, but not as good as with pvscsi on Win2008. esxtop: KAVG/cmd = 2.60, DAVG/cmd = 79.80, QUED = 1; approx. 85 MB/s (FTP).

I also tried a virtual machine running Ubuntu (pvscsi) and got approximately the same results as with Win2008. So it seems there is something fishy with the pvscsi module in gnoBoot? I'm still using alpha10. Can I get access to alpha10.2?

Thanks, Nicklas
gnoboot | Posted March 9, 2014 | #15

Is your Ubuntu using MD RAID and LVM as well? Did you change its IO scheduler to noop? The vendor has customized a lot of the underlying kernel subsystems for their product; that might be the reason (bugs?) why we're not getting the same performance results as other Linux distros. DSM's disk power-saving feature will also contribute to performance degradation with drivers (e.g. pvscsi) that don't implement it. Do you have copies of any older boot images? You might as well compare their results, if that's OK with you.
nylund | Posted March 10, 2014 | #16

I'm at work, but I will check the Ubuntu installation when I get home. However, I have not changed the IO scheduler, so it's the default one. I don't have any older copies, but I can try an older one if you send me an image. Also, I would like to try alpha10.2 if I can get the password. I will check the power-saving settings in DSM.

Also, I noticed there is a newer vmw_pvscsi version available. It seems you are running 1.0.1.0-k? Perhaps it would be interesting to try the 1.0.2.0 version?
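An easy way to confirm which vmw_pvscsi version a guest kernel actually ships is to ask the module itself. A small sketch (modinfo is standard on Linux; the module only exists in VMware guests, so it falls back cleanly elsewhere):

```shell
#!/bin/sh
# Sketch: report the vmw_pvscsi driver version, or say so if the module
# (or modinfo itself) is absent on this kernel.
modinfo vmw_pvscsi 2>/dev/null | grep -i '^version' \
    || echo "vmw_pvscsi not available on this kernel"
```

On DSM you could run this from an SSH session; a `version: 1.0.1.0-k` line would confirm the driver build nylund is asking about.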
nylund | Posted March 10, 2014 | #17

> nylund: "Also, I would like to try alpha10.2 if I can get the password."

Ahh, I just saw that I already have it. Thanks!
nylund | Posted March 24, 2014 | #18

Hi gnoboot, can I have the password for 10.4? I will continue to troubleshoot the performance in ESXi. Thanks!
tsygam | Posted March 24, 2014 | #19

@nylund, there was quite a discussion re: DSM on ESXi in http://xpenology.com/forum/viewtopic.php?f=2&t=558 about a year ago. Maybe it could be of interest to you (not sure, though).