[SOLVED] Install on vmware ESXI



Hi! How can I change the MAC address on the virtual machine? I tried to mount /dev/sdaX, but I keep getting the error "no such file or directory". Any ideas?

Edit: never mind, I solved it by mounting the synoboot image from another Linux virtual machine.
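For anyone hitting the same wall: mounting a partition out of the raw synoboot image from another Linux VM needs a byte offset into the image file. A minimal sketch, assuming the first partition starts at sector 63 and the usual 512-byte sectors (both assumptions; read the real start sector from `fdisk -l synoboot.img`):

```shell
#!/bin/sh
# Compute the byte offset of the first partition inside a raw disk image.
# START_SECTOR=63 is an assumption -- read the real value from `fdisk -l`.
START_SECTOR=63
OFFSET=$((START_SECTOR * 512))   # 63 * 512 = 32256 bytes
# Printed rather than executed, since the actual mount needs root:
echo "mount -o loop,offset=${OFFSET} synoboot.img /mnt/synoboot"
```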

  • 2 weeks later...
Hi!

Very strange: I can't get the VPN working the way it did on DSM 4.1...

OK, my problem comes from ovpn_o1360059439.conf; I had to change a few things manually.

One more question: how can I make it always boot from the synology_1 GRUB entry? Because when I boot into synology_2 I run into trouble...
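In case it helps anyone else asking the same thing, here is a sketch of the relevant boot-loader fragment, assuming the boot image uses a legacy GRUB menu.lst and synology_1 is the first entry (index 0) -- the entry position is a guess, so verify it against your own file:

```
# menu.lst fragment (sketch): always boot the first menu entry
default 0    # 0 = first entry, i.e. synology_1 here (verify in your file)
timeout 5
```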

Has anyone tried VMDirectPath I/O for direct access to a SATA/RAID controller?

 

Not yet. I was going to use RDM (Raw Device Mapping) for the drives, but I do have a box running ESXi with VT-d support, so I could give it a go that way too. What SATA controllers are supported?


Some people have reported that RDM performance is similar to a "classic" virtual hard drive: poor. Regarding the supported SATA controllers, you should check this post. VMDirectPath I/O is tricky; everything has to support PCI passthrough (motherboard, BIOS, CPU, and of course the controllers), but it could be the most practical way to use XPEnology in terms of performance and reliability (snapshots).

 

I wish I could test this but none of my current hardware supports XPEnology in standalone mode, let alone VT-d virtualization.


I have tested XPEnology with RDM on ESXi installed on an AMD E350 CPU (1.6 GHz); HD transfer speed was about 50 MB/s.

Running it now on an HP N54L (2.2 GHz) with ESXi/XPEnology and the same drives connected via RDM, the HD transfer speed is 100-105 MB/s, limited by the 1 Gb LAN. The faster CPU did the trick; the 1.6 GHz AMD just wasn't enough for both ESXi and XPEnology on top of it.
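For reference, the RDM mapping files used in these tests are created on the ESXi host with vmkfstools. This sketch only prints the command for review -- the device path and datastore layout are hypothetical (list your real devices under /vmfs/devices/disks/):

```shell
#!/bin/sh
# Build the vmkfstools command for a virtual-mode RDM (-r); use -z instead
# for a physical (pass-through) RDM. DISK and RDM are made-up example paths.
DISK=/vmfs/devices/disks/t10.ATA_____EXAMPLE_DRIVE
RDM=/vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk
CMD="vmkfstools -r ${DISK} ${RDM}"
echo "${CMD}"   # run this on the ESXi shell, not here
```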

I've found RDM performance to be very good - better than any VMFS configuration (either iSCSI or local), but not quite up to a native or VMDirectPath I/O setup. The performance loss in my (informal) testing was less than 5% (not 50% as in the link) - but I was using an i7 CPU. I've only used VT-d and VMDirectPath I/O in a high-I/O database system; otherwise the performance loss has not been noticeable for anything else.

 

For me, RDM works quite well - I had used it for my storage with FreeNAS (RAID-Z) without issues. Synology has since taken over my storage needs, but having an ESXi VM would provide a backup and also a virtualised testing/deployment environment - especially quick on an i7. :grin:

 

Here's a link to a whole bunch of test/reviews - probably less biased than a 'personal' experience.

http://communities.vmware.com/docs/DOC-10799

 

Because I have a machine with VMDirectPath support, I'll give it a try (soon I hope), and will report back. I've got a Silicon Image 3124 lying around somewhere, and Intel ICH on the motherboard ... possibly have a Marvell in hiding somewhere.


Thanks to XPEH and Tuatara for sharing your experiences. I've ordered a PRO/1000 MT and a 3ware 9650SE-2LP on eBay (my motherboard is an Asus P5QPL-AM; nothing is currently working on it natively, XPEnology or ESXi). I can't wait to try an ESXi setup!

Also, could you test whether HDD hibernation is working? Thanks.

I've only got Intel SSDs and WD Caviar Green drives available at the moment, which are low-power and auto-hibernate (why I got them). I'll see what I can get to test, though ... I must have another old drive around I could use. What drive(s) do you want to test? Seagates?

 

Note: I'm assuming you mean on VMDirectPath I/O & Synology controlled spin-down time.
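As a side note, spin-down can also be tested manually from a Linux shell with hdparm's -S flag, whose timeout encoding is odd (values 1-240 are units of 5 seconds, 241-251 are units of 30 minutes). A sketch, assuming hdparm is installed and the drive shows up as /dev/sdb (both assumptions):

```shell
#!/bin/sh
# Decode an hdparm -S value into minutes (values 241-251 = 30-minute units).
VALUE=242
MINUTES=$(( (VALUE - 240) * 30 ))   # 242 -> 60 minutes
# Printed for review, since running it needs root and a real drive:
echo "hdparm -S ${VALUE} /dev/sdb   # spin down after ${MINUTES} min idle"
```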

Oh, I didn't know that some hard drives can auto-hibernate. That's good news; I was afraid my fake DSM box would leave my hard drives spinning 24/7, which can't be a good thing unless they're server-class, super-expensive drives. I only have two 500 GB WDs and one 500 GB Samsung for now, for testing purposes, and I was planning to buy 2x 3 TB WD Green or Red if I can get the firmware running smoothly on my box.

So, yes, could you please test Synology hibernation with your WD drives in VMDirectPath I/O? I don't see how Synology hibernation could work with VHD or RDM, but if that's actually the case, please let us know :smile:

AFAIK any VHD or RDM drive will not be controllable through Synology hibernation, as in my (limited) testing none of the SMART controls or diagnostics are available to the VM OS. Therefore, I believe that only a VMDirectPath I/O attached drive would have SMART attributes available.

I've found a 1 TB Seagate I could wipe & use, and a SIL 3124 for VMDirectPath I/O testing. I'll see when I can do this ... but it won't be for a while - at least until the weekend, maybe sometime next week. Although, it would be nice to get an ESXi Synology up and humming ... :mrgreen:
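A quick way to check whether SMART data actually reaches the guest is smartctl. This sketch only prints the probe command; smartmontools being installed in the guest and the drive appearing as /dev/sdb are both assumptions:

```shell
#!/bin/sh
# Print the smartctl probe to run inside the guest. On a VMDK or RDM disk
# it typically reports no SMART support; via VMDirectPath I/O it should work.
CMD="smartctl -i -A /dev/sdb"
echo "${CMD}"
```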


For reference, I've found the answer to my earlier question:

(http://blogs.vmware.com/vsphere/tag/pvscsi)

 

I'm hoping to be able to use 10 drives (Raw Device Mapping) in a SHR-2 (two drive data redundancy) combination under ESXi. Do you know if your version will support this many (physical pass-through PVSCSI) drives?

 

The following is true for all the Virtual SCSI Controllers that are supported on VMs at this time (LSILogic, BUSLogic & PVSCSI).

  1. VMDK (Virtual Machine Disks) approach. VMDKs have a maximum size of 2 TB – 512 bytes. The maximum amount of storage that can be assigned to a VM using VMDKs is: 4 controllers x 15 disks each x 2 TB (– 512 bytes) = ~120 TB.
  2. Virtual (non-pass-thru) RDM approach. vRDMs also have a maximum size of 2 TB – 512 bytes (same as VMDK). Therefore, the maximum amount of storage that can be assigned to a VM using vRDMs is: 4 controllers x 15 disks each x (2 TB – 512 bytes) = ~120 TB.
  3. Physical (pass-thru) RDM approach. The maximum size of a pRDM since vSphere 5.0 is ~64 TB. Hence, the maximum amount of storage that can be assigned to a VM using pRDMs (assuming vSphere 5.0) is: 4 controllers x 15 disks each x 64 TB = ~3.75 PB.

Of course, these are theoretical maximums & should be considered as such. I personally don't know of any customers who are close to this maximum size.
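Multiplying those quoted limits out (4 controllers, 15 disks each, with the per-disk caps rounded to whole terabytes):

```shell
#!/bin/sh
# Theoretical per-VM storage maximums from the figures quoted above.
CONTROLLERS=4
DISKS=15
VMDK_TB=2     # VMDK / virtual RDM cap (2 TB - 512 B, rounded down)
PRDM_TB=64    # physical RDM cap since vSphere 5.0
echo "VMDK/vRDM: $((CONTROLLERS * DISKS * VMDK_TB)) TB"   # 120 TB
echo "pRDM:      $((CONTROLLERS * DISKS * PRDM_TB)) TB"   # 3840 TB, ~3.75 PB
```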

 

So ... my goal of 10 drives, using PVSCSI controllers with RDM passthrough should in theory be possible. Now I just need the time to set all this up!


Is there an idiot's guide on how to install this on ESXi 5.1, please?

 

I've tried with IDE disks, with SCSI disks, with E1000, and with VMXNET, and each time the VM boots and is found by Synology Assistant, but I get an Error 38 when the installer tries to format/install.

 

Ideally I'd like to get a pair of these up and running to be able to test the HA feature.

Same here...

Error 38 means that the motherboard cannot communicate with the HDD. Basically something is wrong. [Ref: Synology Forums]

 

From the initial instructions, create the virtual machine in ESXi and add hardware:

- An IDE controller with a single hard disk (the VMDK file in the archive). Boot from this.

- A PVSCSI controller and raw hard disks or VMDK files for data. I've tested with 3 disks so far.

- A VMXNET3 network adapter. Only a single interface in bridged mode has been tested so far. The MAC address can be anything.

 

My recommendation would be to use RDM and PVSCSI controllers.
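To make that concrete, here is a .vmx fragment sketch matching the setup above -- the file names and datastore layout are my own assumptions, not from the original instructions:

```
ide0:0.present = "TRUE"
ide0:0.fileName = "synoboot.vmdk"       # boot disk from the archive, on IDE
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"             # paravirtual SCSI controller
scsi0:0.present = "TRUE"
scsi0:0.fileName = "disk1-rdm.vmdk"     # RDM mapping file (or a plain VMDK)
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
```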

 

I'll take notes during my attempts (ESXi 5.1, VT-d & VMDirectPath I/O & SIL controller (SMART / spin-down), RDM using PVSCSI ...) and post back a write-up when I'm successful.

How do you add an IDE controller in ESXi, though?

 

The only controller (not disk) options are SCSI.

 

An IDE controller is built into the VM by default. You cannot add or remove it ... just set the new hard disk you're adding to use IDE (0:0) and you're good to go.


As requested, I've created an Idiot's Guide document which details the installation steps and options required in order to install DSM 4.2 on ESXi 5.1. A complete screen-shot example of all configuration steps is provided, with a fully virtual configuration for testing purposes. Details are also included about creating RDM VMDK disk images instead of VMFS based virtual disks.

 

Thanks again go to jukolaut and odie82544 for making DSM 4.2 on ESXi 5.1 possible.

 

http://depositfiles.com/files/virzefc1a

Idiot's Guide to DSM 4.2 and ESXi 5.1.pdf

Many thanks for this excellent guide, Tuatara :wink:


Thanks for the guide :smile:

 

I've followed it to the letter (the only thing I hadn't done the first time around was use a PV adaptor).

 

The thing boots and is recognised by the Assistant, but when I connect, the background comes up and then, after a minute or so, I just get an "Unable to connect" error message.

 

It seems to be using a lot of CPU considering it isn't doing anything.

Did you install DSM, and the timeout is in the manager?

Or are you timing out pre-install of the pat file?

 

What ESXi hardware are you running this on? Intel / AMD, memory, disk, motherboard, etc.

Are there any other VMs on the same machine?

Are you using a VMFS Disk or a RDM Disk? You may have faulty hardware (fails to virtualise).

 

Follow the guide: use a VMFS disk first to test that it installs and works, then delete/disconnect the disk and redo it properly with the desired drive. All else being the same, the guide setup will work, since it is purely virtual.

The guide is very good. I had the same problem with error code 38: DSM doesn't recognize the HDD. Put the VMDK file (200 MB) on the paravirtual SCSI controller and it will work perfectly and fast enough.

I got 40-60 MB/s on my gigabit home network :grin::grin:

Same thing here...

I followed the guide to the letter, but when connecting to the Synology, it says it cannot find any hard drive...

Any thoughts on this?

I'm using a VMFS disk.
