XPEnology - Virtualized vs Standalone



Hi,

 

I want to move to XPEnology, but I have some doubts... Which setup do you think is better?

 

Hardware: N40L with 8GB RAM ECC + 4x3TB HDs + 1xSSD 64GB for OS

 

**Note: Please consider that I might be wrong in some of these statements.

 

Scenario 1) Virtualized

 

PROS:

=====

- NO USB flash drive needed to boot.
- Hardware independent: if the server motherboard dies, you can move to different hardware as long as you keep the same HDs.
- Can use both SOFTWARE / HARDWARE RAID, as long as the hypervisor is compatible.
- Snapshots.
- Possibility to install other virtual machines.
- I can install the main OS (such as Debian + headless VirtualBox, or VMware ESXi) on the 64GB SSD drive.

 

CONS:

=====

- Virtualized HDs are really files stored in another filesystem, so they are slower.
** Edit: 50MB/s reported in VMware even with RAW DEVICE MAPPING. That's 50% of the performance of a standalone installation.
- No USB??? (not sure about this).
** Edit: VirtualBox has instructions for passing USB devices through to guest machines.
- More failure points: hosting XPEnology inside another OS makes it less reliable, since both systems plus the virtualization software (VirtualBox in this case) may have bugs.
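A quick way to check the virtualization penalty yourself is a sequential-transfer test with dd, run once on the host and once inside the DSM guest on the same physical disk. This is only a rough sketch: the `/volume1/ddtest` path and the 256 MiB size are placeholders, `conv=fsync` assumes GNU dd (as shipped with Debian/DSM), and dropping caches requires root:

```shell
# Rough sequential-write benchmark: write 256 MiB and let dd report the rate.
# /volume1/ddtest is a placeholder path - point it at the storage under test.
# conv=fsync (GNU dd) makes sure the data actually hits the disk.
dd if=/dev/zero of=/volume1/ddtest bs=1M count=256 conv=fsync

# Read it back (drop the page cache first so you measure the disk, not RAM).
echo 3 > /proc/sys/vm/drop_caches
dd if=/volume1/ddtest of=/dev/null bs=1M

rm /volume1/ddtest
```

Comparing the two numbers gives a concrete idea of how much the VM layers cost on your particular box.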

 

Scenario 2) Standalone

 

PROS:

=====

- SHR RAID.
- Performance should be slightly better. Again, feedback is appreciated.
** Edit: Reported transfers up to 100MB/s, limited by the Gigabit network card.
- USB (updating the modules for the N40L).
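For reference, that ~100MB/s figure is essentially wire speed: a gigabit link tops out at 125MB/s raw, and after Ethernet/TCP/SMB overhead roughly 110-118MB/s is the practical ceiling. A one-liner to check the raw number:

```shell
# Gigabit Ethernet: 1,000,000,000 bits/s divided by 8 bits/byte, in MB/s.
awk 'BEGIN { printf "%.0f MB/s raw ceiling\n", 1e9 / 8 / 1e6 }'
# prints: 125 MB/s raw ceiling
```

So a standalone box hitting 100MB/s is already close to the maximum the network will allow.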

 

CONS:

=====

- USB flash drive required.
- If the HW crashes... it's a bit of a mess to bring everything back online.
**Edit: XPEH explained that the Synology OS is stored on all hard drives, and even booting with different synoboot images, it worked OK.
- Adding the SSD drive would provide no benefit unless I create it as a separate volume (maybe use it as SSD cache?).

 

 

 

This is all I can think about, but I'm sure that you know more PROs/CONs.

 

My Needs (if anyone asks):

- PLEX Media Server

- Transmission (or another BT Client).

- Emule client

- Sickbeard

- Samba sharing

- AFP sharing

- (Planned) Fog Clone server (will install in Debian-Chroot) or nClone.

- OwnCloud

- Photostation

- DynDNS

 

 

Thank you in advance!!


This is an old question in the IT field... virtual vs real...

I'd say it really depends on your environment. I have a real Synology 712+ at home, and a major PRO for me is the power consumption: it uses about 15W. Most PC hardware uses around 100W and in some cases even more (in my gaming PC, the VGA card alone needs around 275W). That can add up on your monthly bill. The other thing is availability: if it's virtual on a home machine that tends to get rebooted, the Synology goes offline too. So at home I'd go for a real box running XPEnology. In my company, which has an existing VMware platform based on 5 clustered hosts, I'd go for the VMware image of XPEnology, because the platform is already there and needs to be up 24/7 anyway (and the company pays the electricity bill).
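To put the consumption argument in rough numbers (a sketch only; the 85W difference and the 0.20 EUR/kWh price are my assumptions, plug in your own figures):

```shell
# Yearly cost of running a ~100W PC 24/7 instead of a ~15W NAS.
# 85 W x 8760 h/year = ~745 kWh/year extra.
awk 'BEGIN {
    watts_diff = 100 - 15                          # extra draw of the PC, in watts
    kwh_year   = watts_diff * 24 * 365 / 1000      # kWh per year
    printf "%.0f kWh/year, ~%.0f EUR/year at 0.20 EUR/kWh\n", kwh_year, kwh_year * 0.20
}'
```

Even with cheap electricity, the difference is on the order of 100+ EUR per year, which is why the dedicated low-power box wins at home.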

 

An ideal home platform would consist of a mobo with a dual-core Atom, 4GB RAM, and 2x or 4x 2TB drives spinning at 5400 RPM.

My Synology DS712+ has 1GB RAM and a single-core 1.8GHz Atom. It sits behind a 50/5Mbit line. I use PPTP VPN, Download Station, file server, media server, Video Station, ...

The CPU hits 100% only when I add new pics to Photo Station (because they are 12MP, roughly 6MB average in size). Video Station streams flawlessly without any hiccups while my downloads run in the background. So my normal scenario consists of my son watching movies on the PS3 over the Syno media server (LAN), my Download Station downloading torrents (WAN), my wife watching a TV show over a CIFS share (LAN), my friend downloading my stuff over File Station (LAN), and me watching a movie over Video Station (WAN). My 1.8GHz CPU is at 7% and memory usage is at 21%... so I think a strong CPU like an i3/i5/i7 is a waste of money and massive overkill (for home users, not for those who will put a Syno in a gigabit LAN environment with >50 users accessing it).

 

Also, one thing I've noticed is people wanting to use an SSD for the OS. People, it is not a gaming machine that needs to load a BF3 map within seconds. Any 5400/7200RPM SATA HDD will do. My first Synology, a DS107, is still up and running today with a 250GB Samsung SATA2 drive. It runs 24/7 without hibernation or standby and works like a charm.


Thank you for your replies...

 

In my case, both virtual/real options will fit in the same server (HP ProLiant N40L with 4x3TB drives + 64GB SSD), so the power consumption difference isn't the issue. It will be running 24/7.

 

The SSD drive came from my HTPC; the GOAL was to generate as little noise as possible, but the HTPC crashed, so I reused the SSD for the OS in the server, since 64GB is not enough to store much of anything :grin:

 

Any comment is welcome in this thread, but my goal was to discuss which environment would be "better" for home use (storing movies and photos), and whether anyone finds more PROs/CONs for each scenario.

 

My biggest concern is how I can restore the information from the SHR RAID if the hardware (server) fails. Assuming the HDDs are OK, it would be easier to move them to a different server if it's virtualized, but I still have some doubts about the performance and failure points; there are too many layers (Debian >> VirtualBox >> XPEnology >> Data).

On the other hand, if it was installed as the main system, I don't know if recovery onto different hardware would be feasible.

 

For now it looks like virtualization has more advantages, but I always like to hear other users' opinions :grin:

 

I appreciate your performance feedback.


I have tested XPEnology in the virtual environment with VMWare ESXi 5.1

There is a big performance penalty when using a VM, even with drives connected via RDM (Raw Device Mapping): about half the speed achievable with a direct DSM installation. Also, in ESXi it currently works with IDE-emulated drives only, and you are limited to 4 drives (I have 6 that I would like to use).

Configured drives can be migrated to another system without losing data. On the direct hardware install, I was able to replace the USB synoboot with different versions and still see and use the existing volumes.


 

Half speed? Wow, that's far worse than I expected. I don't know if VirtualBox has the same IDE limitation, but this is a good reason to go "standalone".

 

Glad to hear that replacing the USB worked OK!

 

Does this mean that the Synology "OS" is distributed among the existing hard drives?



VirtualBox can emulate more controllers. You can add more SATA adapters and drives, but then you are creating files for your virtual drives, which are even slower than RDM.
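For what it's worth, VirtualBox can also hand a physical disk straight to the guest via a raw-disk VMDK, which avoids the file-backed image overhead. A sketch, not a tested recipe: the VM name "XPEnology" and the device /dev/sdb are placeholders, and the commands need access to the raw device:

```shell
# Create a small VMDK descriptor that points at the physical drive (/dev/sdb is a placeholder).
VBoxManage internalcommands createrawvmdk -filename ~/sdb-raw.vmdk -rawdisk /dev/sdb

# Add a SATA controller to the VM (avoids the 4-drive IDE limit) and attach the raw disk.
VBoxManage storagectl "XPEnology" --name "SATA" --add sata --controller IntelAHCI
VBoxManage storageattach "XPEnology" --storagectl "SATA" \
    --port 0 --device 0 --type hdd --medium ~/sdb-raw.vmdk
```

This is roughly VirtualBox's equivalent of ESXi's RDM, so the same CPU-bound performance caveats discussed above likely apply.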

In my tests on ESXi, RDM read/write speeds were hovering around 50MB/s. On the direct, standalone installation, over 100MB/s, limited by the 1Gbps LAN speed.

 

Synology DSM is distributed and installed on all "initialized" internal drives. If any of them survive the move to another system, they will have migratable or degraded status, but they can be repaired or updated without erasing data. The system partition can be upgraded to a different version of DSM during the repair, keeping the data volumes safe.

I've moved 4 drives from a DS412+ to an XPEnology box, and it didn't even require a repair. The volume just showed up with all the data.
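The "OS on every drive" layout can be inspected from an SSH shell on the box: DSM keeps its system partition as a small RAID1 mirror (typically /dev/md0) spanning all initialized disks, which is why any surviving drive can boot a repairable system. Inspection only, nothing here changes the arrays:

```shell
# List all md arrays; on DSM, md0 (system) and md1 (swap) normally span every disk.
cat /proc/mdstat

# Details of the system mirror - each member disk carries a full copy of DSM.
mdadm --detail /dev/md0
```

The data volumes live on separate, higher-numbered arrays, which is what makes the system partition replaceable without touching the data.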


Thats great!

 

At the beginning of this thread I was 95% sure about configuring XPEnology virtualized, but after your comments I think I'll try the native installation.

 

Thank you for the info!

 

PS: Don't worry, if anything goes wrong I will not blame you hehehehe :mrgreen:


It depends on what synoboot image you use.

System does not need the synoboot image after boot.

Images built by "Andy928" identify the USB synoboot as an internal hard drive; while not used by the system after boot, it is still visible and can be overwritten by the volume-creation process if you are not careful to deselect it.

The DSM 4.2 (beta) repack images by "odie82544" identify the USB synoboot as a USB drive, and it can be safely ejected after boot. You can still leave it plugged in for the next reboot.

A single internal SSD drive will have no use in this native installation. In the HP MicroServer I would use the 4 drives as the main RAID storage, and one large drive in the CD-ROM bay as a second volume for backups. Disable the built-in RAID in the BIOS and let the software handle the disks directly.

  • 3 weeks later...

I would like to reopen this thread with some more testing results.

The previous tests were done on the Asus 35M1-I motherboard with an AMD E-350 1.6GHz CPU.

The new tests were done with an HP N54L MicroServer (AMD Turion II, 2.2GHz), ESXi 5.1, DSM build 3202. Single-drive speed is no more than 100MB/s on both a virtual drive (file-based) and RDM. Apparently a faster CPU makes a big difference in performance, and now, with DSM under ESXi almost saturating the 1Gb network link, it's much more usable.

Slower CPUs just don't have enough power to make it usable in a virtual environment. Natively installed on direct hardware, both work great at full speed.

  • 4 weeks later...

I would like to share my experience here.

 

I installed the 4.1++ virtual machine on VMware; my box is a refurbished computer with an ASUS P5QPL-AM motherboard, a Vertex 3 128GB SSD, and some 500GB drives to toy around with different configurations under Windows Server 2012 (you know, RAID 5, storage pools...). My local network runs at 1Gb/s.

 

I tried to run the VM on the SSD, on a RAID 5 array, and on the storage-pool thingy, and each time the results were the same: max 30MB/s when copying large files to the virtual machine. In a previous try with the OpenMediaVault distribution, I was hovering around 90MB/s with a RAID 5 array (SMB shares in both cases).

 

So I found a cheap Intel server NIC on eBay, and when I receive it I will try the standalone firmware.

 

A big thanks to the XPEnology team!
