frank_zero

[SOLVED] Install on vmware ESXI


What are folks seeing for transfer speeds between VMs? I finally got most of my VMs set up and am not sure I'm seeing the best speeds when "writing" to the Synology. Could this be the hard drives? I'm using Samsung 1.5TB 5400RPM drives through RDM.

 

Synology VM --> WinServer2008R2 ~ 75MB/s

W2K8R2 VM --> Synology VM ~ 25MB/s
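A quick way to tell whether the bottleneck is the disks or the network is to time a sequential write inside the DSM VM itself, taking the LAN out of the picture. A minimal sketch (the file name is a stand-in; on DSM you would write somewhere under /volume1):

```shell
# Write 256 MiB sequentially; conv=fdatasync forces the data to disk
# before dd exits, so the reported rate reflects disk speed, not cache.
dd if=/dev/zero of=testfile bs=1M count=256 conv=fdatasync
```

If the local rate is well above 25MB/s, the slow writes are more likely a network or protocol issue than the 5400RPM drives.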


Hi Folks,

 

I was using DSM on my ESXi server for about half a year, but last week I migrated everything to my new server and, stupidly, I had already deleted the old datastore.

When I now start the moved DSM VM, I need to reinstall it. The problem is that doing so will erase my 4TB drive, and all data on it will be lost.

The drive was mapped to the VM as an RDM, and the only thing I can see is the RDM file on the drive.

Does anybody know if it's possible to access the drive so I can restore the data in the RDM file? I hope someone will tell me it's possible, but I don't think so.

I already tried mapping it to a Linux VM and a Windows VM, but that didn't get me anywhere.

 

Your help would be appreciated.

 

MarQuez

Just follow this guide http://forum.synology.com/enu/viewtopic.php?f=160&t=51393, and you should be able to restore everything.

I used it not long ago, and it worked perfectly.

 

Good luck.

Hi, many thanks to all who contributed to DSM on PC and on ESXi!

 

Just wanted to share my small experience so far.

 

CPU: Xeon E3-1265L-v2

MB: Jetway NF9E-Q77

HDD: 500GB Hitachi

SAS/SATA: Adaptec 7805; so far only a 1TB WD SATA disk is connected for testing (the disk has an MBR)

 

- compiled aacraid.ko for DSM from the source code available on Adaptec's site

- enabled PCI passthrough for the 7805 in ESXi

- added PCI controller to DSM's VM

- added pciPassthru0.msiEnabled = "FALSE" to my .vmx file

- copied aacraid.ko and arcconf (command-line tool for linux x64) to DSM

- insmod aacraid.ko
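For reference, after adding the PCI device in the vSphere client, the passthrough part of the .vmx ends up looking roughly like the fragment below (ESXi fills in the device/vendor ID lines itself; only the msiEnabled line was added by hand, as listed in the steps above):

```
pciPassthru0.present = "TRUE"
pciPassthru0.msiEnabled = "FALSE"
```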

 

I don't have dmesg and arcconf output to post right now, but both aacraid.ko and arcconf seem to work.

DSM does see the disk as eSATA, but unlike arcconf, DSM does not display SMART in its web UI - a "SMART is not supported" message pops up.

 

Will keep playing :smile:

 

Hi tsygam,

 

Do you confirm that the Jetway NF9E-Q77 is working flawlessly with the Adaptec 7805?

I have tested this mobo with an Adaptec 2405, and any OS freezes at startup.



Hi!

For me the Jetway NF9E-Q77 works with the 7805 without any issues. The card is not that fast at boot, but that seems to be normal. I'm running the latest Adaptec firmware & drivers. Note that you will not be able to see the Adaptec's health status in the ESXi client, i.e. the Adaptec is not recognised as storage in the health section; I've read somewhere that this is possible for LSI cards.

 

PS: The only thing I could not get working on the NF9E-Q77 is booting via AMT after a power outage with the BIOS set to restore the previous power state - the box is shut down by the UPS, and I want to boot it from another server only if the power has been back long enough and the UPS battery is above, say, 30%. Have you played with these things?

 

BR


Just for reference: the new ESXi 5.5 now includes a virtual SATA controller. I haven't tried it yet, but as soon as I upgrade my ESXi server to 5.5 I will give it a try...
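For the curious, a hedged sketch of what that virtual SATA controller would look like in .vmx terms (untested; the disk file name is a placeholder, and the VM needs hardware version 10):

```
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "dsm-data.vmdk"
```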


There seems to be some issue with passing through the H77 onboard SATA controller. It was working normally with 5.1. Once I upgraded to ESXi 5.5, the XPEnology VM would start and hang at [8.3###]. Along the way, it kept complaining with something like: AHCI failed to stop engine.

Then the VM just hung and couldn't start.


Hello people,

 

I have the following problem: I have installed DSM 4.2 on a VMware ESXi 5.1 server. Everything runs fine, but the NFS service alone seems not to work.

The ports for the NFS server are open, but I can't connect to it (I have tried *, IP/NETMASK, ... nothing works)...

 

The service runs, but I cannot connect to the DSM 4.2 NFS service...

 

Does anybody have a solution for this?

 

Many greets!!

Modify line 19 of /usr/syno/etc/rc.d/S83nfsd.sh to add auth_rpcgss just before rpcsec_gss_krb5:

 

KERNELMODULEV4="auth_rpcgss rpcsec_gss_krb5"

 

and then run /usr/syno/etc/rc.d/S83nfsd.sh restart

 

While starting, it should look like:

 

Starting NFS server...

:: Loading module auth_rpcgss ... [ OK ]

:: Loading module rpcsec_gss_krb5 ... [ OK ]

:: Loading module exportfs ... [ OK ]

:: Loading module nfsd ... [ OK ]

 

The idea is from here: http://xpenology.com/forum/viewtopic.php?f=2&t=6&p=4872#p4872
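If you'd rather script the change than edit line 19 by hand, a sed one-liner does it. The sketch below runs against a stand-in copy so it is safe to try anywhere; on the real box, point it at /usr/syno/etc/rc.d/S83nfsd.sh and then run the restart as described above:

```shell
# Stand-in file containing the stock KERNELMODULEV4 line from S83nfsd.sh.
printf 'KERNELMODULEV4="rpcsec_gss_krb5"\n' > S83nfsd.sh
# Prepend auth_rpcgss so it is loaded before rpcsec_gss_krb5.
sed -i 's/^KERNELMODULEV4="/KERNELMODULEV4="auth_rpcgss /' S83nfsd.sh
cat S83nfsd.sh
```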


Hey guys,

 

I just joined this forum as I'm planning to build the setup above (and would like to discuss the idea with you).

 

HP N36L microserver

16GB (2x8GB) Micron/Spectek DDR3 1600MHz CL11 (non-ECC) -> Dell 2x8GB ECC is on the way :smile:

2x128GB Samsung PM810 SSDs (on the ODD and eSATA ports after TheBay's (thanks!) BIOS patch)

2x2TB Seagate ST2000DL003 (could be another 2 added later from my current Synology DS209)

HP NC360T dual port NIC (cabling both ports when I get a proper managed 8 port gigabit switch later like Cisco SRW2008 or Netgear GS108T)

My router is a Linksys E3000 running TomatoUSB 1.28.110 by Shibby

 

Running ESXi 5.5 and vCenter Appliance (I have to set up my own homelab for 5.5)

Enabling vSphere Flash Read Cache (this requires Enterprise Plus licensing though, sorry)

I'll create standard VMFS5 datastores on all drives.

 

The DSM VM will have 1 vCPU and 2GB of memory configured.

I'm going to use thin-provisioned virtual disks for DSM: 1x20GB (on SSD), 2x1.8TB (on HDDs).

 

I hope that the new SSD caching feature will correct the performance drop of the standard virtual disks vs. RDM :smile:


I used Trantor's repack v1.2, and the install went well.

But when I added my RDM to the VM, the VM shuts down during boot.

When I remove the RDM, the VM boots up fine.

 

Does anybody know how to solve this?

Replying to the HP N36L build plan above:

Unlikely. The vSphere caching feature will help with the swap files and read operations; it will do nothing to increase write performance.

 

Thin-provisioned disks aren't the best for performance either; go with eager-zeroed thick VMDKs instead if you don't want to use RDMs in this case.

The controller in the N36L is meh under vSphere; don't expect anything major from it. You will get better performance just installing XPEnology on the N36L itself and getting a decent server (an ML110/150 G6/G7, or the newer MicroServer) to use for your lab instead.

If you want a proper managed switch for your homelab, get yourself an SG300-10P L3 switch :smile:

Enjoy


Well, I have been running 5 servers on my N40L with 16GB RAM without any performance issues for about a year now.

If you don't have 8 servers at 100% load, the N36L will do just fine.

 

But yes, you can get a ProLiant ML series or Dell PowerEdge series or similar, but it is overkill and not very power-efficient if you don't need a power server.


I know there was some discussion in the first few pages about getting IPKG installed with open-vm-tools. I was wondering if anybody has been able to get it to work and enable ESXi to gracefully shut down their XPEnology in case of a power failure.

 

I am putting together an ESXi server for myself, and I think I have figured out what I want to do.

- CyberPower UPS set up with Business Edition (run by CentOS) to send a shutdown command to ESXi

- ESXi then gracefully shuts down all the VMs.

 

The last piece of the puzzle for me to figure out is installing VMware Tools (or possibly open-vm-tools) on the XPEnology box. The last I saw, didi was working on getting open-vm-tools working and said there was some work to be done. Is there an update on getting it working?

 

I appreciate all the work many of you have put into this project to make it a reality.

I did this exact thing, but when I installed the VM tools, the system crashed, and it ended up marking the volume as "Crashed" within Synology. I had to back up my data through an SSH session and start over. Instead, what I did was create a script on the CyberPower VM that SSHes to the Synology VM, shuts it down, and then SSHes to the ESXi instance and shuts it down. The script works well, but even though it looks like I'm getting a clean shutdown, I still get a "Crashed" volume. Even if I do a clean shutdown from the GUI, my volume gets marked as "Crashed." Fortunately, this time the volume went back to "Normal" after a couple of reboots, but all my folders and settings were gone. Thankfully I had backed up the configuration and just did a restore.

 

I had actually visited the forum today to ask this very question - does anyone else get a "Crashed" volume when they shut down their VM?
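For anyone wanting to copy the approach, the shutdown chain can be sketched as a small script like this (the IP addresses, the sleep, and the ESXi-side commands are placeholders/assumptions; it presumes key-based SSH from the CyberPower VM to both boxes):

```shell
# Write the hypothetical shutdown-chain script, then syntax-check it.
cat > shutdown-chain.sh <<'EOF'
#!/bin/sh
# 1) Halt the DSM VM first so its volume is unmounted cleanly.
ssh root@192.168.1.50 'poweroff'
# 2) Give DSM time to flush and power off.
sleep 120
# 3) Then stop the remaining VMs and halt the ESXi host itself.
ssh root@192.168.1.10 '/sbin/shutdown.sh && /sbin/poweroff'
EOF
chmod +x shutdown-chain.sh
sh -n shutdown-chain.sh && echo "syntax OK"
```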

May I ask which CyberPower UPS you have? I am looking at buying one and am unsure which one to pick up.

I had a CP1285AVRLCD that I purchased years ago. I was using it as the UPS for my main system but repurposed it for my ESXi box. Be sure to download the AGENT appliance, not the CLIENT one.

As requested, I've created an Idiot's Guide document which details the installation steps and options required in order to install DSM 4.2 on ESXi 5.1. A complete screen-shot example of all configuration steps is provided, with a fully virtual configuration for testing purposes. Details are also included about creating RDM VMDK disk images instead of VMFS based virtual disks.

 

Thanks again go to jukolaut and odie82544 for making DSM 4.2 on ESXi 5.1 possible.

 

http://depositfiles.com/files/virzefc1a

 

@ Tuatara: I just wanted to say "Thank you!" for the guide you wrote. @ jukolaut and odie82544: "Thank you!" for the work you did.

 

Actually, "esxi_install_3202_v2" is the only version of DSM working on ESXi, with the VM configured with hard disk 2 as SCSI(0:0) on a Paravirtual SCSI controller.


So, a small tidbit of information for those who repeatedly get stuck with the Grub 22 error after installing the .pat file.

 

When you convert the VID file to a VMDK file, MAKE SURE you convert it to IDE, NOT SATA. Mine defaulted to SATA every time.

 

To fix this, when the conversion is done, edit the SMALL 1KB .vmdk file (it's really a descriptor file), and change:

 

FROM : ddb.adapterType = "lsilogic"

 

TO : ddb.adapterType = "ide"

 

This will then allow you to import the VMDK as an IDE drive. Create your second drive as an IDE drive as well, and then proceed with the boot and install. Once the install is done, shut down your VM and replace the Disk 1 IDE VMDK with the original VMDK file. Reboot, and you should be good to go!
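The descriptor edit is also a one-liner if you prefer the command line. Shown here against a stand-in descriptor file so it is safe to try anywhere; point it at your own small ~1KB .vmdk instead:

```shell
# Stand-in descriptor containing the line the converter emits by default.
printf 'ddb.adapterType = "lsilogic"\n' > synoboot.vmdk
# Switch the adapter type so the disk imports as an IDE drive.
sed -i 's/^ddb.adapterType = .*/ddb.adapterType = "ide"/' synoboot.vmdk
cat synoboot.vmdk
```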

 

I just spent hours stuck on this stupid little detail; figured I would save some other people some hair loss.

 

To help people out, here is a link to the ESXi VMDK in IDE format, to save those going through the conversion hassle:

 

https://www.dropbox.com/s/6ptsu961r1nijwm/Synoboot-Trantor-4.2-3211-v1.2_FLAT-ESXi-IDE_VMDK.7z

 

or :

 

https://db.tt/slEmjgQv


I get a server error 38 when I try to install the .pat file v2.1.

If I access the IP through a browser, it says that there are no hard drives connected to the NAS.

I followed the guide to the letter. I even downgraded ESXi from 5.5 to 5.1, but that didn't help.

 

What to do? Please :smile:

 

Never mind.... my mistake :grin:


It seems the Grub 22 error of DSM 4.2 3211 upon restart after installing in ESXi cannot be solved.

Repeatedly uploading the flat.vmdk is not practical.

Plus, requiring the secondary drive to be an IDE drive is not a solution.

 

So I can only stick with 3202 for now.


Why?

Replacing the disk is only needed once, and IDE is only used for the boot device and 1 (small) disk.

All other RDM / passthrough devices should be able to use non-IDE controllers.

 

And in the other thread, Trantor said he was already working on a fix for the ESX/Grub 22 issue ;)

