RDM or standard ESXi VM disk?



Proliant Microserver Gen8

P222 RAID array controller

ESXi 6.0 (HP image)

2 x 1TB in RAID1 for VMs

2 x 4TB in RAID1 for XPEnology storage (data shared on my network)

 

Going to do a clean install of XPEnology (DSM 6.2.3)

 

My question:

Should I configure the 4TB storage for XPEnology as RDM or as standard ESXi VM disk?
What is the advantage of RDM?

 

Also: is this tutorial still relevant?

 

I'd appreciate any advice.

 

Thanks.

 

13 minutes ago, flyride said:

 

Thank you for the link.

 

Unfortunately I still don't know the difference and therefore cannot decide what's best for me.

Either way, I have a physical RAID array controller, so I'm not letting DSM manage my physical disks. My choice is between a standard VMDK and RDM; I just don't know which...

 


Again, you are losing much of the benefit of DSM if you insist on using a physical RAID controller in a hardware RAID mode.  Most can be put into AHCI mode or a bunch of RAID 0 disks and then can provide direct access to the drives via RDM or passthrough.

 

RDM simply translates the native drive instructions into whatever virtual controller dialect you want to use in your VM, otherwise providing direct access to the disk. This has benefits for disk portability to bare metal, etc., in addition to giving DSM the direct disk access it needs to do what it does best.
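For context, this is roughly how an RDM pointer file is created from an ESXi shell. This is only a sketch; the device identifier, datastore, and folder names below are illustrative placeholders, not values from this thread:

```shell
# List the raw device nodes ESXi exposes for physical disks
ls /vmfs/devices/disks/

# Create a virtual-compatibility RDM pointer file for one of the 4TB drives.
# The device name and datastore path are placeholders; substitute your own
# from the listing above.
vmkfstools -r /vmfs/devices/disks/t10.ATA_____EXAMPLE_4TB_DISK \
  /vmfs/volumes/vm-datastore/xpenology/rdm-4tb-1.vmdk

# Using -z instead of -r creates a physical-compatibility RDM, which passes
# SCSI commands straight through to the drive.
```

The pointer .vmdk is then attached to the VM as an existing disk; the data itself stays on the raw drive, which is what keeps it portable to a bare-metal install.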

 

But if you insist on using the hardware RAID, there is no need to RDM. Just create a virtual disk and be done with it.

Edited by flyride

You don't want to use the RAID controller's RAID and the DSM RAID together, because it leaves DSM unable to manage the disks. I had this happen with my Dell R720: I made a virtual disk on the RAID controller with 5 x 4TB HDDs in RAID 5, but my system shut down unexpectedly. The Dell's RAID controller could not do anything, and DSM's RAID could not do anything either, because it only saw a single 16TB VMDK. Now I RDM all my disks to DSM and DSM handles the RAID fine. I tested it by pulling out a hard drive and plugging it back in, and I did not have a problem at all.

 

So use RDM; trust me, it is better.

 

Also, if you build a RAID with the physical RAID controller and add it as a datastore in ESXi, it gets a VMFS format, and if you install DSM it puts btrfs or ext4 on top of the VMFS, which also makes it impossible to recover anything if something gets corrupted.

 

Also, don't use btrfs.

 

Greetz Moi

4 hours ago, flyride said:

Again, you are losing much of the benefit of DSM if you insist on using a physical RAID controller in a hardware RAID mode.  Most can be put into AHCI mode or a bunch of RAID 0 disks and then can provide direct access to the drives via RDM or passthrough.

 

RDM simply translates the native drive instructions into whatever virtual controller dialect you want to use in your VM, otherwise providing direct access to the disk. This has benefits for disk portability to bare metal, etc., in addition to giving DSM the direct disk access it needs to do what it does best.

 

But if you insist on using the hardware RAID, there is no need to RDM. Just create a virtual disk and be done with it.

 

Thank you!

This is very helpful.

 

Started going through the process. Created the new VM as per tutorial. 

... or tried to, at least...

Created the VM on my VM datastore. Removed the devices as per tutorial.

Attempted to create a drive on my 4TB datastore, but no matter what I do it always gets created on the VM datastore. Tried to move it, to no avail.
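In case it helps, the disk can also be created by hand on the target datastore from an ESXi shell and then attached to the VM as an existing disk, bypassing the GUI's placement. A sketch only; the datastore and folder names are illustrative placeholders:

```shell
# Create a folder for the disk on the 4TB datastore
# (datastore and folder names here are placeholders)
mkdir -p /vmfs/volumes/4tb-datastore/xpenology

# Create a thin-provisioned virtual disk directly on that datastore
vmkfstools -c 3600G -d thin /vmfs/volumes/4tb-datastore/xpenology/data.vmdk
```

Afterwards, in the VM's settings, add an "Existing hard disk" and point it at the .vmdk created above; the GUI then has no say in where the file lives.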

2 hours ago, Rihc0 said:

You don't want to use the RAID controller and the DSM RAID

 

Thanks.

I wasn't going to use the DSM RAID. Only the hardware RAID controller.

3 hours ago, yud said:

 

Thanks.

I wasn't going to use the DSM RAID. Only the hardware RAID controller.

You still get the problem of the btrfs or ext4 filesystem sitting on top of VMFS. Trust me, don't use the physical RAID controller; use the DSM one. Been there.

On 10/15/2020 at 2:09 AM, Rihc0 said:

You still get the problem of the btrfs or ext4 filesystem sitting on top of VMFS. Trust me, don't use the physical RAID controller; use the DSM one. Been there.

Thank you.

But I have to use the RAID controller. I have other VMs running on the ESXi host, and all of them use storage attached to the array controller.

1 hour ago, flyride said:

Why use XPEnology and DSM then? A regular Linux server with Samba will suffice... In any case, the answer to your original question is to just allocate a virtual disk.

 

 

You are right, no reason...

...except that I like DSM...

Been using XPEnology and DSM for quite a few years now. First on my N54L, then on my Gen8; I ran DSM 5.x and didn't update much. It was rock solid and I simply like it. I know a plain Linux server would do the job, perhaps even more efficiently.

 

I run a few services on my XPEnology and I like the ease of installing packages in DSM. Sure, all can run on plain Linux as well.

 

I really appreciate your help and advice.

 

