
Proxmox: best practice decisions for best performance


pipsen


Hi,

 

Currently I have DSM 7.1 installed via ARPL on a bare-metal HP ProDesk 600 G4 (Intel i5-8500, 16 GB RAM).

My hard disk setup is:

  • 256 GB SATA SSD: Volume 1 for packages and Docker containers' appdata
  • 8 TB SATA HDD: Volume 2 for file storage

Currently I can read and write continuously at 113 MB/s over Gigabit Ethernet on both volumes.

 

  • Problem: Every DSM update is a bit risky, since it is uncertain whether the bootloader and the installation will survive.
  • Idea: Use the Proxmox hypervisor so I can take a snapshot before each update, just in case (see the sketch below).
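
For reference, here is roughly the workflow I have in mind, as a sketch only (the VM ID 100 and the snapshot name are just placeholders):

    # Take a snapshot of the DSM VM right before applying a DSM update
    qm snapshot 100 pre-dsm-update

    # If the update bricks DSM, roll back to the pre-update state
    qm rollback 100 pre-dsm-update

    # Once the update is confirmed working, drop the snapshot
    qm delsnapshot 100 pre-dsm-update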

 

I now have a few topics where I would like to discuss which decisions make sense:

 

Host file system

What would you recommend? ZFS? ext4? These two seem to be most popular.

https://lowendspirit.com/discussion/2786/poll-which-type-of-filesystem-you-prefer-on-proxmox-host-node

Any suggestions in terms of Xpenology?

 

Passthrough vs. virtual disks

I have read in some threads here that passing through the hard disks would be the best approach in terms of performance.

Question: I assume that I will then lose the main benefit of snapshots, right?

I mean, the bootloader disk can be snapshotted, but I have seen that DSM installs two OS partitions on each HDD and SSD it uses.

In case DSM is bricked after an update, I would restore an old snapshot taken before the update. The bootloader would be downgraded, but the passed-through HDD would be unaffected, leaving a version mismatch

=> DSM will automatically upgrade the bootloader to the same version => bricked again.

 

Any ideas how to get the benefits of both worlds?

 

Passthrough of NVMe cache

I have an unused M.2 slot where I could add an NVMe SSD as a cache device. From what I have seen, this can be passed through as well.

Question: If I decide to use the SSD and HDD as virtual disks (for snapshots), is it still possible to pass through an NVMe SSD to attach as a cache SSD?
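
For context, from what I have read the passthrough side would look roughly like this sketch; the VM ID and PCI address are placeholders, and IOMMU/VT-d has to be enabled on the host first:

    # Find the PCI address of the NVMe controller on the Proxmox host
    lspci | grep -i nvme

    # Pass the whole NVMe controller into the DSM VM (address is only an example)
    qm set 100 -hostpci0 0000:01:00.0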

 

Virtual Disk parameters

In case I decide on virtual disks: how should I configure the parameters for maximum write performance without running into write-cache problems?

  • SCSI controller: VirtIO SCSI
  • Bus: SATA (some say SCSI?)
  • Cache: none (some say write back?)
  • DSM write cache setting: enabled?
  • Model: VirtIO (some say others?)
  • SSD emulation on for the SSD disk?

 

Problem: With my test installation using the parameters above, when I copy a 20 GB file I get 113 MB/s for the first few seconds (up to about 5 GB), and then the write speed drops to around 70 MB/s. With the cache setting "Write back", the effect was even worse.

 

Encryption

I would like to have all my personal data encrypted. What would you recommend (in terms of performance)?

  • Encryption at the host level? If yes: what's the best approach?
  • Encryption at the DSM level: encrypt each Btrfs share?

 

Synology Model

For my setup here, I assume the DS920+ should be the best choice, correct?

 

Anything else?

Anything else you have in mind that is important for maximum performance?

Thank you!

 

Thank you very much in advance for your tips, input, and the discussion!

 


A lot to unpack here. I went through a similar path when I set up DSM years ago. Here are my thoughts:

 

On 12/23/2022 at 2:15 AM, pipsen said:

Currently I can read and write continuously at 113 MB/s over Gigabit Ethernet on both volumes.

 

Have you benchmarked the Gigabit Ethernet when it's running under Proxmox yet? I'd be curious to see if it's a bottleneck.
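
A quick way to rule the network in or out would be a raw iperf3 run, something like this sketch (assuming iperf3 is available on both ends, e.g. via a Docker container on DSM; the IP address is just a placeholder):

    # On the DSM VM: start the iperf3 server
    iperf3 -s

    # From another machine on the LAN: run the client for 30 seconds
    iperf3 -c 192.168.1.50 -t 30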

 

On 12/23/2022 at 2:15 AM, pipsen said:

Host file system

What would you recommend? ZFS? ext4? These two seem to be most popular.

https://lowendspirit.com/discussion/2786/poll-which-type-of-filesystem-you-prefer-on-proxmox-host-node

Any suggestions in terms of Xpenology?

 

I have a cluster of 3 HP EliteDesk 800 G3 Minis that I run Proxmox on. Two of the systems have 16 GB of RAM and one has 32 GB, for a total of 64 GB. All three systems use a single M.2 NVMe SSD with ZFS.

 

The main issue with ZFS is that it will use more RAM than ext4, but ZFS adds a lot of cool features as well. I mention this because you only have 16 GB of RAM; if you only run a DSM VM with 4 GB of RAM you might be alright using ZFS, but keep that in mind. You can limit the RAM ZFS uses, but since you have more storage than I do it might not help: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage
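
For what it's worth, capping the ARC is just a module option, roughly like this (following the wiki page above; the 4 GiB value is only an example and should be tuned to your setup):

    # /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 4 GiB (value in bytes)
    options zfs zfs_arc_max=4294967296

    # Rebuild the initramfs so the option applies at boot, then reboot
    update-initramfs -u -k all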

 

TLDR: If you have the RAM use ZFS.

 

On 12/23/2022 at 2:15 AM, pipsen said:

Passthrough vs. virtual disks

I have read in some threads here that passing through the hard disks would be the best approach in terms of performance.

Question: I assume that I will then lose the main benefit of snapshots, right?

I mean, the bootloader disk can be snapshotted, but I have seen that DSM installs two OS partitions on each HDD and SSD it uses.

In case DSM is bricked after an update, I would restore an old snapshot taken before the update. The bootloader would be downgraded, but the passed-through HDD would be unaffected, leaving a version mismatch

=> DSM will automatically upgrade the bootloader to the same version => bricked again.

 

Any ideas how to get the benefits of both worlds?

 

You are correct that you will lose the snapshot feature if you pass the disks through. I don't think there is a way to get both benefits, but I'd love to actually see benchmarks comparing passthrough vs. virtual disks.
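
If you (or anyone) ever do that comparison, running the same fio job inside DSM on both setups would make the numbers comparable. A sketch, assuming fio can be run on DSM (e.g. from a Docker container) and /volume2 is the volume under test:

    # Sequential 1M writes, bypassing the page cache
    fio --name=seqwrite --filename=/volume2/fio-test --rw=write --bs=1M --size=10G --direct=1 --end_fsync=1

    # Random 4K reads for 60 seconds to compare IOPS
    fio --name=randread --filename=/volume2/fio-test --rw=randread --bs=4k --size=10G --direct=1 --runtime=60 --time_based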

 

On 12/23/2022 at 2:15 AM, pipsen said:

Passthrough of NVMe cache

I have an unused M.2 slot where I could add an NVMe SSD as a cache device. From what I have seen, this can be passed through as well.

Question: If I decide to use the SSD and HDD as virtual disks (for snapshots), is it still possible to pass through an NVMe SSD to attach as a cache SSD?

 

I think you can do that, although I'm not sure what the pros/cons of passthrough + cache are. Keep in mind you can create a cache SSD with ZFS as well; more info here: https://pve.proxmox.com/wiki/ZFS_on_Linux
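
On the host side, adding an NVMe SSD as an L2ARC read cache to a ZFS pool is a one-liner, roughly like this ('rpool' and the device path are placeholders for your actual pool and disk):

    # Add the NVMe SSD as a read cache (L2ARC) to the host pool
    zpool add rpool cache /dev/disk/by-id/nvme-EXAMPLE-SSD

    # Check that the cache device shows up
    zpool status rpool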

 

On 12/23/2022 at 2:15 AM, pipsen said:

Virtual Disk parameters

In case I decide on virtual disks: how should I configure the parameters for maximum write performance without running into write-cache problems?

  • SCSI controller: VirtIO SCSI
  • Bus: SATA (some say SCSI?)
  • Cache: none (some say write back?)
  • DSM write cache setting: enabled?
  • Model: VirtIO (some say others?)
  • SSD emulation on for the SSD disk?

 

Problem: With my test installation using the parameters above, when I copy a 20 GB file I get 113 MB/s for the first few seconds (up to about 5 GB), and then the write speed drops to around 70 MB/s. With the cache setting "Write back", the effect was even worse.

 

These are all great questions that I don't have the answers to. Side-by-side benchmarks on the same hardware would be what I'd want before making decisions.

 

In general I've heard that SCSI is better than SATA in Proxmox, but I think that is just because you can have more virtual disks with SCSI. I also know that you cannot boot TCRP off SCSI; you have to boot off either USB or SATA. But you could probably boot TCRP off SATA and have additional virtual disks as SCSI.

 

I've heard VIRTIO is what you want to use, but again I haven't seen data to support this.
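
For reference, here is a minimal sketch of how those settings would end up in the VM config; the VM ID, storage name, sizes and MAC address are all placeholders, not a recommendation:

    # /etc/pve/qemu-server/100.conf (excerpt, illustrative only)
    # VirtIO SCSI controller for the data disks
    scsihw: virtio-scsi-pci
    # TCRP/ARPL bootloader image on SATA (TCRP won't boot from SCSI)
    sata0: local-zfs:vm-100-disk-0,size=1G
    # "SSD" data disk: no host cache, SSD emulation on
    scsi0: local-zfs:vm-100-disk-1,cache=none,ssd=1,size=256G
    # "HDD" data disk
    scsi1: local-zfs:vm-100-disk-2,cache=none,size=8T
    # VirtIO NIC
    net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0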

On 12/23/2022 at 2:15 AM, pipsen said:

 

Encryption

I would like to have all my personal data encrypted. What would you recommend (in terms of performance)?

  • Encryption at the host level? If yes: what's the best approach?
  • Encryption at the DSM level: encrypt each Btrfs share?

 

From a security standpoint I would think you should have encryption at the host level. I'm not sure about it from a performance standpoint.
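
If the host storage is ZFS, one option would be native ZFS encryption on the dataset that holds the VM disks. A sketch only ('rpool/encrypted-vmdata' is just an example name; you would then point a Proxmox storage entry at that dataset):

    # Create an encrypted dataset on the host for the VM disks (prompts for a passphrase)
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/encrypted-vmdata

    # After a host reboot, the key has to be loaded before the VM can start
    zfs load-key rpool/encrypted-vmdata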

On 12/23/2022 at 2:15 AM, pipsen said:

 

Synology Model

For my setup here, I assume the DS920+ should be the best choice, correct?

 

The DS920+ will not work with the default Proxmox CPU type (kvm64), as it doesn't support FMA3. It seems like a lot of people either change the CPU type to host (assuming the host CPU supports FMA3) or go with the DS3622xs+ instead.
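
If you go the CPU-type route, it is a one-liner (VM ID 100 is a placeholder):

    # Expose the host CPU, including FMA3, to the DSM VM
    qm set 100 --cpu host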

 

On 12/23/2022 at 2:15 AM, pipsen said:

 

Anything else?

Anything else you have in mind that is important for maximum performance?

Thank you!

 

Thank you very much in advance for your tips, input, and the discussion!

 

 

I'm very curious what you find out, as I have a lot of the same questions. Another question I had, which led me to this post, is: if you go with virtual disks, what is the best setup if you want to start with a small disk and grow it later? Currently I have a 32 GB virtual disk attached to my DSM setup, and when I run out of space I add another 32 GB virtual disk and add it to the storage pool in DSM. My preference would be to just increase the virtual disk and grow the disk/storage pool in DSM, but I haven't had very good luck with that.
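
For the grow-later case, the Proxmox side at least is a single command; it is the expansion inside DSM that has been unreliable for me. A sketch (the VM ID and disk name are placeholders):

    # Grow an existing virtual disk by 32 GiB on the Proxmox side
    qm resize 100 scsi1 +32G

    # The volume/storage pool then still has to be expanded inside DSM's Storage Manager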

 

Btw, here is a benchmark of one of my virtual disks that is on an NVMe SSD. I'm not 100% sure why the write benchmark is missing, but my read benchmark is much better than yours:

 

[Image: benchmark screenshot of a virtual disk on an NVMe SSD (read result only)]



I have been working for days trying to get VirtIO SCSI disks detected in TCRP/ARPL, as SATA-attached NVMe speeds are so slow.

 

The NVMe disks are detected at the OS level in dmesg and lspci, but they don't appear in the ARPL menu (only SATA disks show green). In the RedPill satamap the controller is detected, but it throws an error and cannot map them.

 

I could have sworn I previously had it working fine, but after trying different TCRP and ARPL versions, switching from q35 to i440fx, and different Proxmox SCSI controllers, nothing I have done seems to work. I may just have to use SATA, but before I do:

 

Can someone please confirm virtio-scsi is supposed to work? 

 

(I can't get it working with passthrough disks or with Proxmox disk images.)


For anyone else who goes down this rabbit hole: I looked at the code of ARPL and TCRP, and all the satamap code looks only for SATA controllers, not SCSI controllers, so I'm not sure how SCSI ever worked (perhaps it never did?).

 

I ended up dumping Proxmox for ESXi. IOPS performance is better, but nowhere near native.

 

Native NVMe drive passthrough:

 

[Image: benchmark screenshot of the natively passed-through NVMe drive]

 


VMware SATA disk on an identical NVMe disk:

[Image: benchmark screenshot of the VMware SATA disk on an identical NVMe disk]

 

I was getting some inconsistent results, which did not make a lot of sense, as there was no other I/O on the NVMe disk in ESXi. Possibly there was trimming or internal SSD load balancing running at the time.

 

TL;DR: I spent too many hours trying to get the right performance out of these devices. I have to go buy a real Synology NAS.

