Maxhawk

Hardware and overall system/software topology questions


I currently have a 1511+ with 5x 4TB WD RED drives in SHR. With the demise of Crashplan's Home backup service and the current business version not playing nicely in headless installations, I've started looking into backing up to another Synology at a friend's house (we both have AT&T gigabit fiber and older 151x+ boxes). At first we thought about each buying a DX513 to use as a volume for remote backup but I found that the CPU in my 1511+ gets into the 90+% load range while the other is backing up, leaving little headroom if I need to do something myself with the Synology. Plus it's been announced that future DSM versions won't be supported with my aging hardware.

 

With new 1517+/1817+ units being quite expensive, my new thought was to use Xpenology on a used Dell server such as an R510, R520, or R710 and build my own RS3617xs. There are some nice DIY solutions using mini-ITX boards, but I want the external hot-swap bays that I'm used to having with my 1511+. Since the CPU in these servers is overkill for just Xpenology, I figured I could use ESXi to run Xpenology along with a handful of other Linux VMs for my Ubiquiti controller, Ubiquiti NVR, Pi-hole, OpenVPN server, etc. I understand that I need a drive controller that supports HBA mode, so for these Dell models I would need either the H200 or H310.

 

Some of these questions get into how ESXi works, but I figure there are folks here who may be familiar:

 

1. Is this a feasible system topology, or is it better to have just Xpenology running on the hardware, and if so, is bare metal the preferred option? I think I read that bare metal has consequences for driver support, such as the standard Broadcom NICs, RAID controller, etc.?

2. With a 12-bay server, could I dedicate 6 drives to Xpenology, 1 to the NVR, and 1 to various Linux VMs, with the ability to add 4 more drives to my Synology share in the future for expansion (do I need SHR, or can RAID 6 do this too)?

3. Among these 3 Dell models, is one easier to configure/maintain? The R710 is cheapest but least preferred since it only has six 3.5" bays.

4. How much memory does Xpenology need? Do you think 32 GB of total RAM would be enough to do what I have in mind?

5. If doing multiple VMs, do I need to dedicate a drive for ESXi or can all this be done from a USB stick/drive?

6. Is there something I'm forgetting? Is this a stupid idea?

 

Thanks in advance for your attention and responses.

 


4. Synology DSM is able to run with 512 MB of RAM... If you are not using an SSD cache, 4 GB is more than enough!

 


From my experience and general playing around:

1) With this level of hardware I'd go with ESXi and run all the VMs under it

2) All possible in ESXi, depending on how you set up the drives/controllers/storage; some experimenting may be needed to get the best setup for performance/resilience. SHR is not the standard in DSM 6 (it now uses RAID Groups), but there's little difference unless you have lots of mismatched drive sizes

3) Dell has its 'quirks', but I'd say no more or less than the others once you get to know the hardware

5) ESXi boots from a USB stick

6) There are quite a few forum members running Dell hardware similar to yours, so you should be OK for any help. Be prepared to spend some time getting the best setup


Thanks for the responses. I've installed ESXi and Xpenology 6.1 alpha on a machine I built with old spare parts just to get my feet wet. Due to the old CPU (Core 2 Duo E6750) I have to use ESXi 6.0, as this CPU is no longer supported in 6.5. I've got 5 WD 1 TB Green drives and a 60 GB SSD. The motherboard is a Gigabyte GA-P35-DS3R and has 8 built-in SATA ports. I've installed two HP NC360T cards (Intel 82571) for a total of 4 gigabit ports.

 

1. I notice I'm not able to create a datastore as RDM. Is this because I don't have a separate drive controller? Will RDM become an option if I have an H200/H310 in IT mode?
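For reference, ESXi normally creates RDM mappings from the host shell with vmkfstools rather than through the datastore wizard; a minimal sketch, with placeholder device and datastore paths:

```shell
# List local disks to find the device identifier (name below is a placeholder):
ls /vmfs/devices/disks/

# Create a physical-mode RDM pointer file (-z) on an existing datastore;
# use -r instead of -z for virtual-mode RDM:
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD10EZRX_EXAMPLE \
  /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk
```

The resulting .vmdk can then be attached to the Xpenology VM as an existing disk.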

 

2. Since I can't use RDM, I'm simply creating a virtual disk to present to Xpenology. I notice that DSM can't read the drive temperature, but the S.M.A.R.T. status says OK. Will temperature readings work when RDM is used? Is the S.M.A.R.T. "OK" status a false positive the way I've connected the drives?

 

3. What's the proper way to do Ethernet link aggregation with ESXi and Xpenology? I found that within ESXi I can create a vSwitch that does load balancing between two NICs, and Xpenology sees only LAN 1. Alternatively, I can present Xpenology with two NICs and let DSM create an 802.3ad bond between LAN 1 and LAN 2. Is there any difference?
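For what it's worth, the ESXi-side option can be sketched with esxcli; the vSwitch and vmnic names are placeholders, and IP-hash load balancing assumes a static LAG/EtherChannel configured on the physical switch:

```shell
# Team two uplinks on a standard vSwitch and balance by IP hash:
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 --load-balancing=iphash
```

Note that a standard vSwitch does not speak LACP, so an 802.3ad bond created inside DSM over two vNICs won't actually negotiate with the physical switch the way it would on real hardware.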

 

4. There are folks who consider Xpenology to be a hack and don't think it's reliable and stable. However, I'm using a DS3617xs .PAT file that I downloaded directly from Synology, so the DSM software certainly is "authentic". Is it the boot image that's considered the "hack"?

 

5. I'm now considering a Dell R720xd because it seems every generation of hardware comes with significant improvement in power efficiency. Two to three years of power bill savings will pay for the difference in hardware cost from the R510 I was eyeing before. I don't expect there to be any issues, but are there any known issues with using Xpenology with the R720xd?

 

6. I've seen some ESXi/Xpenology tutorials that say the boot drive should be set up as IDE (0:0). However, when I use the 1.02b boot image and the 1.01 .VMDK file, ESXi only lets me choose SCSI. Is that because the .VMDK file is set up to use SCSI instead of IDE?
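For context, the controller type is declared in the small text descriptor of the .VMDK itself, so ESXi offers whatever the descriptor requests. An illustrative excerpt of such a descriptor:

```
# synoboot.vmdk descriptor (excerpt; values illustrative)
createType="vmfs"
ddb.adapterType = "ide"
```

If the 1.01 descriptor ships with a SCSI adapter type (e.g. "lsilogic"), editing that line and re-adding the disk is a commonly cited workaround, though that's an assumption about this particular image.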

 

7. My boot drive (SSD) shows up as one of the drives in DSM. Is there any harm in leaving it there? Is there a way to prevent DSM from seeing it?
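For context, the workaround commonly discussed for stray devices is the loader's grub.cfg, which accepts SataPortMap/DiskIdxMap kernel arguments that remap where DSM expects disks; the values below are illustrative, not a drop-in:

```
# grub.cfg excerpt (illustrative): tell DSM the first (synoboot) SATA
# controller exposes 1 port, and index its disks starting at slot 0x0C,
# pushing them past the 12 visible bays of a DS3617xs.
set sata_args='SataPortMap=1 DiskIdxMap=0C'
```

Whether this applies depends on whether the drive being shown is actually the synoboot device or a virtual disk living on the SSD's datastore.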

 

Thanks again for bearing with my noob questions.

 

 

 

On 1/7/2018 at 11:27 PM, Maxhawk said:

5. I'm now considering a Dell R720xd because it seems every generation of hardware comes with significant improvement in power efficiency. Two to three years of power bill savings will pay for the difference in hardware cost from the R510 I was eyeing before. I don't expect there to be any issues, but are there any known issues with using Xpenology with the R720xd?

 

 

I'm looking into one of these as well. Have you purchased yours, and have you gotten Xpenology loaded with minimal issues? I've got a Supermicro 2U X8DTU-F 24-bay running about 12 drives now with 10 Gb iSCSI to my VMware cluster, but I'm looking to upgrade so I can get better speeds off more SSDs in RAID 0; I'm capped at about 800 MB/s read on 4 SSDs. I'm hoping PCIe 3.0 and an Adaptec 1000-series card, in addition to a couple more SSDs, will bring me to the 1,500 MB/s mark.


I've had my R720xd since mid January and have had Xpenology running on ESXi since then. I've had zero issues and I'm very happy with the setup. I'm using the H310 Mini to control two 2.5" SSDs in the rear flex bays for VM storage, and an IT-flashed H310 for the front 12 bays. The H310 is passed through in ESXi so Xpenology can control the drives directly, with access to the SMART data and temperature readings.

 

I can't comment on whether this would be an upgrade to your Supermicro. I can only say that everything has been 100% stable with 9 total VMs and it seems I'm barely taxing my dual Xeon 2630L CPUs.

 

 

 

12 hours ago, Maxhawk said:

I've had my R720xd since mid January and have had Xpenology running on ESXi since then. I've had zero issues and I'm very happy with the setup. I'm using the H310 Mini to control two 2.5" SSDs in the rear flex bays for VM storage, and an IT-flashed H310 for the front 12 bays. The H310 is passed through in ESXi so Xpenology can control the drives directly, with access to the SMART data and temperature readings.

 

I can't comment on whether this would be an upgrade to your Supermicro. I can only say that everything has been 100% stable with 9 total VMs and it seems I'm barely taxing my dual Xeon 2630L CPUs.

 

 

 

 

Have you tried running bare metal? I don't want to use ESXi on this system.

6 hours ago, ccxpenologyxcc said:

 

Have you tried running bare metal? I don't want to use ESXi on this system.

 

Sorry, I've not.

