XPEnology Community

[SOLVED] Install on VMware ESXi



Another issue with DSM on ESXi:

I have 5 USB HDDs that have to be connected to my DSM

 

... the 4 I first added were set as SATASHARE in DSM, and the last one as a USBSHARE

If I unplug those HDDs and plug them back in, they're all recognized as USBSHARE.

 

Does anyone have the same issue, or know a way to fix it?

 

This is probably due to the Synology DSM kernel thinking the drives are physical drives at boot time, and not USB (removable) drives. As soon as they are removed, DSM "recognises" that they are removable (probably due to the forceful removal - make sure you're not losing data!). Upon re-inserting them, they become USB drives only, with the removable flag set correctly. [This is a simplification, but you get the point.]

 

The only way around this I can think of is to have none of the USB devices available at boot time, and then attach them once booted. Not the best solution, but the only one I can see. It may be possible to automate this with scripting ...

 

The best way around this would be to use VMDirectPathIO. Get a PCI/PCI-e USB card and put that in the machine. Map the physical PCI device into the VM, so that the VM sees the USB controller directly. Plug your drives into this USB controller card. DSM will see the USB hardware, spin up the drives connected, read their parameters directly, etc. You will also have SMART functionality available, and can fully monitor the drives. If you're using USB 3.0, then you shouldn't notice much of a difference between that and SATA drives.
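For reference, once a PCI USB controller has been mapped through with VMDirectPathIO, the VM's .vmx file ends up with entries roughly like the following (the IDs here are placeholders - the vSphere GUI fills in the real values when you add the PCI device, so you normally never edit these by hand):

pciPassthru0.present = "TRUE"
pciPassthru0.id = "00:1d.0"
pciPassthru0.deviceId = "0x1c26"
pciPassthru0.vendorId = "0x8086"
pciPassthru0.systemId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"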


Thanks for the tutorial. Do the VMware tools actually make any difference? I installed everything with VMFS HDDs. Can I do the RDM VMDK without reinstalling the Synology stuff? I enabled SSH on the ESXi server, but I do not understand this: "NOTE: Instead of immediately adding an additional drive to the VM, first Finish creating the VM (but do not start it). This will create the VM Directory on the Datastore, into which you can then create the new RDM VMDK files for your disks." Ahh, I'm confused. Thanks!


OK, there is almost certainly a bug in this version where your storage does not persist across a reboot.

 

The VMDK is still attached but if you go into Storage Manager, there are no volumes.

 

I'm not sure what's happening for you, but the RDMs I'm using are all fine and survive reboots without any issues. There may be something at issue in your ESXi installation. Are you using any 3rd party drivers on your ESXi whitebox? Anything special in your configuration or hardware?

 

This VM host has been running for > 12 months with an array of different guests, so I don't think there is anything wrong with it.

 

I've just ordered the latest HP MicroServer, so I will use that as a physical host rather than a virtualised one and see how I get on.

 

Thanks for taking the time to respond.


As promised, the kernel modules compiled for the Synology ESXi build :smile:

 

Still some work to do, but the main goal is reached: we can compile and load open-vm-tools in the vSynology.

 

When I have some time this week, I'll try to package everything into an IPKG, which will make it much easier to install.
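Once it's packaged, installing should boil down to something like this (the package name is hypothetical until the IPKG actually exists):

ipkg update
ipkg install open-vm-tools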

 

Cheers

 

 

Thank you for the great work. Did you have the chance to create an IPKG package?


Thanks for the tutorial. Do the VMware tools actually make any difference? I installed everything with VMFS HDDs. Can I do the RDM VMDK without reinstalling the Synology stuff? I enabled SSH on the ESXi server, but I do not understand this: "NOTE: Instead of immediately adding an additional drive to the VM, first Finish creating the VM (but do not start it). This will create the VM Directory on the Datastore, into which you can then create the new RDM VMDK files for your disks." Ahh, I'm confused. Thanks!

 

VMware Tools will allow you to control the VM (Synology) through the vSphere control panel - shutdown being the most important one, for those planned (or unplanned / UPS-directed) shutdowns.

 

If you've used VMFS drives, then your HDs are virtualised (i.e. files on the VMFS filesystem under ESXi). Performance will be lower, but you can move the files around as you see fit. The biggest downfall is that the data on the drive is not directly usable (i.e. you can't just plug it into a Linux machine or another Synology and read it as EXT4).

 

SSH needs to be enabled so you can use the console (CLI) to enter the commands that make an RDM (Raw Device Mapping) for a physical drive. There is no way to create this through the vSphere GUI. An RDM VMDK will then map the physical HDD 1:1 into the VM, so that it maps exactly and is not a VMFS (file system) based VMDK.

 

By my last comment, I meant to make the VM first (complete the wizard); the VM directory will then be created for you in the Datastore. Then SSH in and create the RDM files in that directory, alongside your VM. Once the RDMs have been created, add them to your VM and fire it up!
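For what it's worth, the RDM creation step might look roughly like this from the ESXi console (the vml identifier, datastore and folder names are placeholders - substitute your own):

ls -l /vmfs/devices/disks/                    # find the vml identifier of your physical disk
cd /vmfs/volumes/datastore1/Synology_ESXi/    # the VM directory created by the wizard
vmkfstools -z /vmfs/devices/disks/vml.<your-disk-id> RDM1.vmdk -a lsilogic    # physical compatibility RDM
vmkfstools -r /vmfs/devices/disks/vml.<your-disk-id> RDM2.vmdk -a lsilogic    # or a virtual compatibility RDM instead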

 

Apologies for the confusion.


I reinstalled everything. My server has two datastores. My first datastore has all of the VMs, such as Ubuntu, OS X, Debian, etc., and I am trying to put Synology_ESXi on the second, or at least use the entire drive via an RDM VMDK.

 

If I put Synology_ESXi on datastore1 and use the command vmkfstools -z /vmfs/devices/disks/vml.010000000020202020202020202020202039514a313345345a535433313030 RDM1.vmdk -a lsilogic, it works fine, but the identifier is for the drive with datastore2.

 

If I put Synology_ESXi on datastore2 and use the same command above in the Synology_ESXi folder, I get "Failed to reopen virtual disk: Failed to lock the file (16392)".

 

Right now I have it on datastore1 because it worked without giving me a problem, but I might be running out of space on datastore1 since it has all of the other VMs. How would I get it to go onto the other hard drive? Thanks!


I reinstalled everything. My server has two datastores. My first datastore has all of the VMs, such as Ubuntu, OS X, Debian, etc., and I am trying to put Synology_ESXi on the second, or at least use the entire drive via an RDM VMDK.

 

If I put Synology_ESXi on datastore1 and use the command vmkfstools -z /vmfs/devices/disks/vml.010000000020202020202020202020202039514a313345345a535433313030 RDM1.vmdk -a lsilogic, it works fine, but the identifier is for the drive with datastore2.

 

If I put Synology_ESXi on datastore2 and use the same command above in the Synology_ESXi folder, I get "Failed to reopen virtual disk: Failed to lock the file (16392)".

 

Right now I have it on datastore1 because it worked without giving me a problem, but I might be running out of space on datastore1 since it has all of the other VMs. How would I get it to go onto the other hard drive? Thanks!

 

Quick answer ... an RDM is not a file on a Datastore! It must have complete, exclusive access to the hardware, since it is using the RAW DISK - the WHOLE DISK.

 

To use the drive exclusively for Synology, you must remove it as a Datastore from ESXi. Then you can use an RDM and map the entire disk to the Synology VM.

 

What you probably want/intend is a standard VMDK - a virtual disk. Use the GUI to make one - just make sure you use a Thick disk. Thin Provisioning will just cause you immense grief. This type of disk will have lower performance (about 5%+), but then you can have the Synology boot, VM, files, and disks on the single-drive ESXi Datastore.
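If you prefer the CLI over the GUI, creating a thick virtual disk on the datastore looks roughly like this (size and paths are placeholders):

vmkfstools -c 800G -d eagerzeroedthick /vmfs/volumes/datastore2/Synology_ESXi/data1.vmdk

Then add the existing VMDK to the VM through the GUI as a new hard disk.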


That makes a lot more sense! Thanks, I appreciate it. What does the bootstrapper do? I thought the performance loss was a lot more, so I decided to go with the physical RDM VMDK. Would you recommend a physical or virtual RDM VMDK? The drive is 1 terabyte.


That makes a lot more sense! Thanks, I appreciate it. What does the bootstrapper do? I thought the performance loss was a lot more, so I decided to go with the physical RDM VMDK. Would you recommend a physical or virtual RDM VMDK? The drive is 1 terabyte.

 

1. Bootstrapping will allow you to add more packages to your Synology VM - ipkg will be available, and you can install any ipkg packages available for the architecture (x64). You can then (manually, for now) install the open-vm-tools, which will allow you to control startup/shutdown/IP reporting/etc. through vSphere.

 

2. Performance loss in my limited testing of VMFS based VMDKs was at least 5% ... sometimes more, depending on hardware. It is perfect for quick testing, but not for something "robust" or for performance testing. VMDirectPathIO (physically mapped hardware) was the fastest ... but it had less than a 1% improvement over using RDM.

 

i.e.: VMDirectPathIO (fastest) > RDM (1% slower) > VMDK (5%+ slower)

 

Do NOT use Thin Provisioned disks for a NAS server. Ever. Just think about it carefully. :wink:

 

3. For physical vs. virtual ... google is your friend! :geek:

RDM Virtual and Physical Compatibility Modes
You can use RDMs in virtual compatibility or physical compatibility modes. Virtual mode specifies full virtualization of the mapped device. Physical mode specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software.
In virtual mode, the VMkernel sends only READ and WRITE to the mapped device. The mapped device appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real hardware characteristics are hidden. If you are using a raw disk in virtual mode, you can realize the benefits of VMFS such as advanced file locking for data protection and snapshots for streamlining development processes. Virtual mode is also more portable across storage hardware than physical mode, presenting the same behavior as a virtual disk file.
In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized so that the VMkernel can isolate the LUN to the owning virtual machine. Otherwise, all physical characteristics of the underlying hardware are exposed. Physical mode is useful to run SAN management agents or other SCSI target-based software in the virtual machine. Physical mode also allows virtual-to-physical clustering for cost-effective high availability.
VMFS5 supports greater than 2TB disk size for RDMs in physical compatibility mode only. The following restrictions apply:
- You cannot relocate larger than 2TB RDMs to datastores other than VMFS5.
- You cannot convert larger than 2TB RDMs to virtual disks, or perform other operations that involve RDM to virtual disk conversion. Such operations include cloning.

 

Only one caveat:

Creating RDMs on SATA Drives

Once you have created the physical RDM you can add it to a VM. This is where I ran into my first problem with SATA RDMs. I found that a VM would not boot from a physical RDM (it would hang after the main BIOS screen) but it would be just fine if I recreated a virtual RDM with the same physical disk. I was able to add the physical RDM as a data disk to existing servers and those would boot without any issues. In the first image below you'll notice that the Compatibility Mode for the RDM is listed as Physical. In the second image, you can see that the VM sees the actual hard drive model (Seagate - ATA ST3500630AS) instead of as a VMware virtual disk, as was the case in the virtual RDM example above.

 

My physical RDM mappings are all working fine, however I'm using a ServeRAID M1015 SAS/SATA Controller, and didn't really do too much testing with the other ones (Sil 3114, Sil 3124, Sil3132, and gave a crappy JMicron a go with a self-compiled/installed driver). All appeared to work, but I only tested for a short time - for my own interest.

 

Translation: Your mileage may vary (YMMV). Try the Physical, as then you will get the physical drive exposed. If that's not stable, go for virtual, where ESXi manages the drive, and only the READ & WRITE operations are passed directly to the controller you're using.

 

1 Terabyte drives are fine with RDM. There were issues with newer drives >=3TB, but I believe these issues have now been resolved in the latest ESXi updates.


Thanks, it works well now. I guess I will install the bootstrapper and VMware Tools later, when there are guides out. Has anyone had problems adding third-party packages and updating the list? It won't show new third-party applications. Do I have to change my MAC address? Thanks.


Hi, many thanks to all who contributed to DSM on PC and on ESXi!

 

Just wanted to share my small experience so far.

 

CPU: Xeon E3-1265L-v2

MB: Jetway NF9E-Q77

HDD: 500GB Hitachi

SAS/SATA: Adaptec 7805; so far only a 1TB WD SATA disk is connected for testing (the disk has MBR)

 

- compiled aacraid.ko for DSM from the source code available at Adaptec's site

- enabled pcipassthru for 7805 in ESXi

- added PCI controller to DSM's VM

- added pciPassthru0.msiEnabled = "FALSE" to my .vmx file

- copied aacraid.ko and arcconf (command-line tool for linux x64) to DSM

- insmod aacraid.ko (see the sketch below)
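For reference, the guest-side part of those steps might look roughly like this once the files are copied over (paths and controller number are illustrative):

insmod /root/aacraid.ko          # load the compiled driver
lsmod | grep aacraid             # confirm the module loaded
chmod +x /root/arcconf
/root/arcconf getconfig 1        # query the first Adaptec controller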

 

I do not have dmesg and arcconf outputs to post now, but both aacraid.ko and arcconf seem to work.

DSM does see the disk as eSATA but, unlike arcconf, DSM does not display SMART in its web shell - a "SMART is not supported" message pops up.

 

Will keep playing :smile:


I am wondering if anyone here can help me out. I would like to install VMware Tools on my Synology, but I'm not sure where to go. I have seen open-vm-tools mentioned, but I'm not sure what to do. I have ipkg up and running and have successfully installed nails. Can someone give me a hand?


If your ESXi supports PCI passthrough, then you'll have the option to assign PCI cards to your VM. This requires a number of hardware devices to be compatible (processor, motherboard, BIOS, PCI card, etc.). If and only if all those are compatible, and the PCI card you want to pass through to the Synology is recognised by it (and its hardware drivers), then you're good to go. I had to change out a fair few hardware controllers before I found one which was stable and worked in VMDirectPathIO mode. My advice: use RDM unless you really, really need SMART monitoring.

How are you doing? I came across this post and I was wondering what controller you found to be the most stable. I just prefer controller passthrough instead of RDM.


If your ESXi supports PCI passthrough, then you'll have the option to assign PCI cards to your VM. This requires a number of hardware devices to be compatible (processor, motherboard, BIOS, PCI card, etc.). If and only if all those are compatible, and the PCI card you want to pass through to the Synology is recognised by it (and its hardware drivers), then you're good to go. I had to change out a fair few hardware controllers before I found one which was stable and worked in VMDirectPathIO mode. My advice: use RDM unless you really, really need SMART monitoring.

How are you doing? I came across this post and I was wondering what controller you found to be the most stable. I just prefer controller passthrough instead of RDM.

I didn't find any real-world advantages to using VMDirectPathIO over the physical RDM capabilities using the PVSCSI driver. Using everything in an abstracted/virtualised environment just "feels" better to me, even if there is a (very slight - imperceptible) performance loss. My feeling is, should anything ever happen to this (old) hardware, I can quite readily replace any of it without affecting anything else operational on the machine.

 

Also (somewhat unfortunately) in my testing I've found that this "home-brew" NAS under ESXi is performing better for me than my real Synology boxes. I'm moving over to using XPEnology personally - which I'm more than just a bit sad about. (Yes, I'm keeping my other Synologies!) But it's not surprising in a way, as I can use an old motherboard with crap-tons of memory and a good CPU in it, to have something perform better (Plex transcoding, I'm looking at you!) than any box I could afford. It's obviously more power hungry, but through using ESXi, I've moved the firewall, mail server, web server, development platform, and an internal XP administration VM onto the one machine. Is it really that power-expensive now? Not when you total up all the machines.

 

My current ESXi testing/storage system: (with 4 other VMs on the same box - Ubuntu, pfSense, Windows XP, and SME Server)

[22TB Storage (RAW), 360GB ESXi Datastore (RAW) --- It's old gear re-purposed well - most everything is already in ark.intel.com! :grin:]

I did give a few other cards a (short) try - Sil 3114, Sil 3124, Sil 3132, and a crappy JMicron JMB363 in ESXi. For the JMicron I don't think I tested it in passthrough (I used a self-compiled/installed ESXi driver). All appeared to work fine in RDM, but I only tested passthrough for a short time - for my own interest. I believe it was the Sil 3132 I used for my performance testing & comparison.

 

In short - most stable ... PVSCSI physical RDM - Hardware as above. Very happy to date.

[Edit - fixed RAM - looked up the wrong one!]


Thanks for the reply. I'm in the process of building another whitebox and was going to test this out.

 

*My Whitebox Specs*

 

*Supermicro X9SRE-3F

*Intel Xeon E5-2430L

*64GB Kingston ECC RAM

*4x 300GB WD Velociraptor for Datastores

*M1015 that I was going to flash to IT Mode

*NORCO SS-500 5-Bay Hot Swap

*5x WD 2TB RED hard drives

 

I really wanted to use the M1015 for the SATA 3 port and that's why I was asking about the RAID card.


Thanks for the reply. I'm in the process of building another whitebox and was going to test this out.

 

*My Whitebox Specs*

A serious kick-ass box you're building there! Personally though ... I went SSD for datastores, and I'm not going back. Ever. 'Nuff Sed.

So, if you've got the 'spare change', put the core VMs on SSD ... just brilliant.

 

I really wanted to use the M1015 for the SATA 3 port and that's why I was asking about the RAID card.

I've had no issues with the M1015 in IT mode ... PVSCSI RDMs all work perfectly.


Thanks for the reply. I'm in the process of building another whitebox and was going to test this out.

 

*My Whitebox Specs*

A serious kick-ass box you're building there! Personally though ... I went SSD for datastores, and I'm not going back. Ever. 'Nuff Sed.

So, if you've got the 'spare change', put the core VMs on SSD ... just brilliant.

Thanks!

 

I wanted SSDs but I still can't justify the cost per GB. The Velociraptors are also from my previous whitebox so I'm saving money on that front...lol. And that's good to hear regarding the M1015.


Hi,

 

I installed everything using the idiot's guide for ESXi and Synology on my HP N40L and it is working fine, except for a little problem when transferring files. As soon as I copy a file, the transfer rate goes up and down and comes to a halt every few seconds.

 

[Attached screenshot: datatransfer.PNG - transfer rate repeatedly dropping and stalling]

 

Even small transfers take forever because of this. Does anyone have a clue what causes this?


I installed everything using the idiot's guide for ESXi and Synology on my HP N40L and it is working fine

Another successful installation!

 

jukolaut said he was surprised at how many people downloaded his modified patch ... I'm now curious as to how many XPEnology ESXi installations are out there?

With the next source release, it might be time to put some real effort into it: update the idiot's guide and make a few installation helpers. Thinking about it ... :ugeek:

 

... except for a little problem when transferring files. As soon as I copy a file, the transfer rate goes up and down and comes to a halt every few seconds.

Even small transfers take forever because of this. Does anyone have a clue what causes this?

 

This looks suspiciously like the standard networking problems:

Isolate, isolate, isolate. Separate and eliminate everything else.

Disconnect all other devices. Use a separate local switch if needed. Do this first.

Check the negotiated link speed (is it 1Gb/s everywhere?)

Check other traffic on the LAN - congestion due to some machine constantly TCP/UDP spamming? (NETBIOS, Torrent, eMule, Skype, other crap)

Replace network cables. Bad cables/connections happen very often.

Ensure you have the latest/best drivers for your network card and hard drive.

Check hard drive firmware - especially if it's a Seagate (SeaCrate). If it's not a recent drive, it could have issues.

Check Transfer Type - 1000's of small files will take a much longer time than one file 1000 times larger

Check Machine usage - Windows Defender, other high CPU use applications

Check for any Virus / Command & Control / Backdoor / Ransomware applications

Since you're probably running Windows 8 - Ensure that ALL 3rd party software is actually compatible!

Check and disable/uninstall/remove anything integrating into Explorer (including WinZIP, WinRAR, any explorer extensions, anti-virus, anti-adware, etc.)

Otherwise, it could be the Synology Setup you have:

Check the Disks. Seriously. They could be bad and the Synology is retrying ... endlessly

If you have a SHR RAID (at least 2 disks) - if one disk is bad, performance will be crap, but it will keep working and trucking along. This is why I F**KING LOVE SYNOLOGY!

Stop the Synology VM. Stop ESXi and reboot. Put Ubuntu or any other Linux flavour you love - Mint with Cinnamon, say - into the CD/DVD and fire that puppy up. Check the SMART log of the physical drives. Do a drive test. Do an FSCK and ensure everything is good on the drive itself (a couple of these checks are sketched below).

You can also configure a new VM in ESXi for testing. This is why we have ESXi, right? Again, choose your flavour - Linux preferred - and test data copies from that VM on the ESXi box itself. This will show up any local/physical hardware problems (other than external network card/congestion) in a jiffy!

You can also use this new VM (if Linux) and map in the RDM disks. Install and boot your Linux flavour in the VM and away you go. Do an FSCK, read your data - just don't EVER map the same RDMs into two different running VMs at the same time. I'll let you figure out what would happen!
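A couple of the checks above, run from a Linux live environment or test VM, would look roughly like this (device and interface names are placeholders - adjust for your setup):

ethtool eth0 | grep -i speed     # negotiated link speed
smartctl -a /dev/sdb             # SMART health and error counters for a drive
fsck.ext4 -n /dev/sdb3           # read-only filesystem check of a data partition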

 

If checking all these things doesn't help, and you've eliminated ALL software problems (trust me - odds are you have some crap installed - keep looking there first!) then throw out and replace the network cards, switches and cables ... one of them will be bad.

 

And if ALL else fails, and you don't want to listen to me and don't do anything I wrote above - try using Teracopy. This copies files using a buffered approach and bypasses Explorer during the copy process. Give it a go - it might not solve it, but it will point you in another direction.

 

IT guy ... :ugeek: ... trust me.


Great work jukolaut

 

The installation was done without any difficulty on ESXi 5.1.

 

But performance isn't very high: only 5MB/s with CIFS (the same performance as DSM 4.1).

In comparison, FreeNAS (aka NAS4Free) transfer speeds are about 25MB/s in the same environment.

 

I use a MicroServer N40L for these tests.

 

Do you have any idea why it is so slow?

Wow, you are better than me.

I tested on two machines and got 5MB/s =_="


But performance isn't very high: only 5MB/s with CIFS (the same performance as DSM 4.1).

In comparison, FreeNAS (aka NAS4Free) transfer speeds are about 25MB/s in the same environment.

I use a MicroServer N40L for these tests.

Wow, you are better than me.

I tested on two machines and got 5MB/s =_="

So very, very strange.

 

I'm getting 50MB/s+ on 4 disk SHR RDM PVSCSI drives for random file copies (mixed file sizes).

I'll do some more checking (sometime), and test it on another (really crap) ESXi box and see if I encounter anything.
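If anyone wants to compare numbers in the meantime, a couple of rough throughput checks (assuming dd on the DSM box and iperf available somewhere, e.g. via ipkg; paths and IPs are placeholders):

dd if=/dev/zero of=/volume1/test/zero.bin bs=1M count=1024 conv=fdatasync    # sequential write to the volume
iperf -s                    # on the NAS
iperf -c <nas-ip> -t 30     # from a client, to check raw network throughput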
