Start adding disks with plan to expand storage



Hi guys,

I am new to the XPEnology game.

 

My System:

Server with ESXi 6.7

16 GB RAM

Quad Core AMD CPU

1 TB SSD for VMs

4 Bay

4x4 TB WD Red Plus (WD40EFRX)

Raid5

 

I followed the instructions in the thread "Tutorial: Install DSM 6.2 on ESXi 6.7".

Worked great.

 

Now I am struggling with hard disk management.

 

Because it's an ESXi server platform, I don't want to give the DSM VM the whole 1 TB of datastore storage.

 

I played a little with the storage pool and volume settings but can't find the right method.

 

I only need/want one volume so that the shares aren't scattered across many volumes.

 

How do I proceed if I only want 3 TB at the beginning and want to add more storage later if I need it?

I tried to expand the disk in the ESXi VM settings but can't expand the disk in DSM.

 

So, can someone explain a good or best method for my target scenario?


For me, I used SHR instead of the typical RAIDx configurations. Any time I need to increase the storage pool, I just add another drive to the pool.  SHR is no longer a choice in the defaults, so you'll need to SSH in and modify the /etc.defaults/synoinfo.conf file.
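A minimal sketch of that synoinfo.conf edit, assuming DSM 6.x: the two keys below (`supportraidgroup`, `support_syno_hybrid_raid`) are the ones commonly reported to re-enable SHR, but key names can differ between DSM builds, so check your own file first. The demo below works on a scratch copy in /tmp; on a real box you would point CONF at /etc.defaults/synoinfo.conf instead, and reboot afterwards before Storage Manager offers SHR again.

```shell
# Demo on a scratch copy; on a real DSM box set CONF=/etc.defaults/synoinfo.conf
CONF=/tmp/synoinfo.conf.demo
printf 'supportraidgroup="yes"\n' > "$CONF"   # stand-in for the real file

# 1. Back the file up before touching it.
cp "$CONF" "$CONF.bak"

# 2. Comment out RAID Groups support (which hides SHR in the Storage Manager UI).
sed -i 's/^supportraidgroup="yes"/#supportraidgroup="yes"/' "$CONF"

# 3. Explicitly enable SHR if the key is not already present.
grep -q '^support_syno_hybrid_raid=' "$CONF" \
  || echo 'support_syno_hybrid_raid="yes"' >> "$CONF"

cat "$CONF"
```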


@seb0p how you are trying to allocate your storage isn't exactly how DSM intends for you to use it.  DSM's purpose is to manage physical disks for redundancy and performance.  Instead you are tasking ESXi with the storage management role and only using DSM for access.

 

The preferable solution is to pass through your storage controller to the DSM VM, or RDM certain drives on the storage controller to DSM.  Then everything works as expected.
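For the RDM route, the mapping file is created from the ESXi shell with `vmkfstools`. A hedged sketch: the device identifier and datastore path below are placeholders for illustration, not values from this thread; list your own devices with `ls -l /vmfs/devices/disks/`.

```shell
# Create a physical-mode raw device mapping (RDM) for one physical disk.
# -z = physical-mode RDM (most SCSI commands pass through to the guest);
# use -r instead for a virtual-mode RDM.
# Device ID and datastore path are placeholders -- substitute your own.
vmkfstools -z \
  /vmfs/devices/disks/t10.ATA_____WDC_WD40EFRX_SERIALNUMBER \
  /vmfs/volumes/datastore1/DSM/wd-red-1-rdm.vmdk
```

The resulting .vmdk mapping file is then attached to the DSM VM as an existing disk; repeat once per physical drive so DSM sees each disk individually and can manage redundancy itself.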

 

If your use case requires that you use scratch storage and virtual disks, you have a couple of choices: 1) create multiple identical vdisks as you want to grow, then RAID them in DSM (again, this is suboptimal from a performance and data redundancy perspective), or 2) allocate a single vdisk and grow it. Unfortunately, the disk management tools in DSM do not anticipate such a scenario (it is only possible when running XPEnology in a virtual environment), so the procedure to grow the vdisk is manual.
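For choice 2), the manual grow runs roughly as outlined below. This is a hedged outline, not a tested recipe: the device names (/dev/sdb3, /dev/md2) are examples, the layout assumes a simple ext4 Basic volume, and every step should be checked against `/proc/mdstat` and `df` on the actual system before running anything.

```shell
# Hedged outline of growing a single vdisk-backed DSM volume; assumes the
# data partition is /dev/sdb3 inside md array /dev/md2 with ext4 on top.
# These commands modify live storage -- verify device names and back up first.

# 1. In ESXi: enlarge the vmdk, then reboot the DSM VM so it sees the new size.
# 2. Extend the data partition into the new space:
parted /dev/sdb resizepart 3 100%
# 3. Grow the md device to fill the enlarged partition:
mdadm --grow /dev/md2 --size=max
# 4. Grow the filesystem (a btrfs volume would use 'btrfs filesystem resize max'):
resize2fs /dev/md2
```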

 

This thread will help, but I really recommend you consider giving DSM access to your physical disks.


Hello There,

 

My system:

HP ProLiant DL380e Gen8 / 2 x Intel Xeon 8-core E5-2450L @ 1.8 GHz, 20 MB cache / 192 GB DDR3 RAM / Intel C600 chipset / 14 x 3.5" HDD bays / P820 RAID controller with 1 GB RAM / 4 x 1 GbE NIC

 

Installed the ESXi 6.5.0U3 host on an SD card.

 

Created a HW RAID1 SAS SSD mirror for VMs.

 

VMs so far are:

-pfSense

-Plex Media Server on FreeBSD (will access data via vnetwork)

-Roon Server on Ubuntu Server (will access data via vnetwork)

-Xpenology 6.2.1

 

I need a helping hand related to configuring the storage for data.

 

I'd like to start with 3 x 3 TB disks in a HW RAID5 volume managed by the P820 RAID controller and expand it to 6 x 3 TB after the first data migration.

 

As far as I understand I've got three options:

-creating a datastore on the RAID5 volume under the ESXi and add this datastore as a vdisk to the Xpenology VM (already tried, working, can create a volume in the DSM)

-passthrough the disks (it appears as a single disk from the XPEnology point of view) (already tried, working, can create a volume in the DSM)

-passthrough the controller (not tried yet)

 

My plan is to expand the HW RAID5 to 12 x 3 TB.

 

Which solution from above is the best one?

With which one will XPEnology be able to manage the expansion (I would not like to create separate volumes)?

What kind of volume (basic etc.) do I need to create?

 

Thank you in advance!

 


Again, if you read the feedback, using the RAID controller to manage the volume is counterproductive in a DSM environment.

 

3 hours ago, Zsolo said:

As far as I understand I've got three options:

-creating a datastore on the RAID5 volume under the ESXi and add this datastore as a vdisk to the Xpenology VM (already tried, working, can create a volume in the DSM)

-passthrough the disks (it appears as a single disk from the XPEnology point of view) (already tried, working, can create a volume in the DSM)

-passthrough the controller (not tried yet)

 

#3 is preferable, but your controller needs to be able to address and present the drives individually, which isn't typical for a RAID controller.  If it is not an AHCI-compliant controller, you will need driver support in DSM, which may be problematic.

 

#2 is probably the next best choice.  I'm not 100% sure how you are doing this, however.  RDM?  I am not aware that you can "passthrough" individual disks.

 

#1 is the least beneficial to you. The RAID5 would be managed by the controller, not ESXi.  So any problems with your array cannot be solved by DSM or ESXi, nor is there an easy path to add disks.

 


Thanks for your reply!
 

5 hours ago, flyride said:

Again, if you read the feedback, using the RAID controller to manage the volume is counterproductive in a DSM environment.

 

I'd like to clarify: I don't want to argue, I'm just trying to understand the "why".

I'm trying to understand why it isn't beneficial to use a controller designed for this purpose, with its own chip and memory, instead of burning CPU resources on software RAID.

All my options listed above are based on the idea of using the controller to manage the RAID5 array.

 

If I skip the controller-based HW RAID5 (or 6), I can:

-add many datastores and vdisks to the XPEnology VM - as I understand, this is not the best way

-pass through the controller - I think this could be impossible, as DSM itself cannot be set up on this hardware bare metal due to driver issues

-pass through the disks - yes, I'd use RDM for this

 

I did the last one already, but as I configured the RAID5 at the HW level, ESXi saw it as one RAID5 block and DSM saw it as a single disk and was able to create a volume on it.

The big question for me is whether I'll be able to expand the volume size within DSM after I expand the RAID5 block at the HW level (by adding physical drives to it).

 

But again, I'm not trying to argue; I'd rather learn the best possible way to create, manage and later expand the storage pool.


Meanwhile I discussed this with a datacenter/virtualisation expert and a data recovery expert.

From their point of view (and based on their experience), software RAID is the worst option, but they don't work with XPEnology; they work with Windows clusters etc.

 

So it still gives me a headache to understand and choose the best possible option.

 

 

