nyxynyx Posted September 16, 2020 (edited)

Hi, I am new to Xpenology and Proxmox, but I just managed to install Xpenology DSM 6.2.3 on Proxmox 6.2-4. DSM was assigned a single virtual disk backed by a Proxmox RAID10 (mirrored striped) ZFS pool. Since this disk already has redundancy from the underlying ZFS storage, which also offers BTRFS-like features such as snapshots, replication, quotas, and integrity protection, is it redundant to use BTRFS instead of ext4 for a new volume in DSM? Should the storage pool in DSM be 'Basic' or 'JBOD'? DSM only sees a single disk here. Thank you for any guidance on this issue!
flyride Posted September 16, 2020

This is not ideal. DSM is intended to have access to the physical disks for the features you mention. Why use DSM over a regular Linux server if you aren't going to take advantage of the disk optimizations? The storage pool should be Basic for a single virtual disk. btrfs is still superior if you intend to run the Docker client in DSM. Otherwise the filesystem type doesn't really matter all that much in this neutered state.
nyxynyx Posted September 16, 2020 (edited)

Thanks. I had an overkill of a system for just running DSM, so I decided to install Proxmox on bare metal and run DSM on it alongside other VMs (mainly Docker containers). I will be using DSM mainly for file serving over the LAN, downloading BT using Download Station, and remote backups using Hyper Backup and maybe rsync. Maybe even Surveillance Station once I get my cameras set up. For my use case, should I set up HDD passthrough on Proxmox (if this is possible) so DSM can access the drives directly?
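For reference, Proxmox does support attaching whole physical disks to a VM with `qm set`. A minimal sketch, assuming VM ID 100 and a SATA drive; the disk identifier shown is a placeholder you would replace with your own drive's `by-id` path:

```shell
# List stable disk identifiers (ignore partition entries)
ls -l /dev/disk/by-id/ | grep -v part

# Attach a whole physical disk to VM 100 as a SCSI device
# (ata-EXAMPLE_SERIAL is a placeholder for your drive's actual ID)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL

# Verify the disk now appears in the VM configuration
qm config 100 | grep scsi1
```

Note this is disk passthrough at the block-device level; passing through the whole SATA controller as a PCIe device (as discussed below in the thread) gives DSM even more direct access, including SMART data.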
flyride Posted September 16, 2020

Not sure what else you are using the virtualized environment and storage for... but if it is intended mostly to run DSM, then yes, passthrough of physical drives is preferable. DSM will provide RAID, RAIDF1 (if you are using SSDs), and SHR (for dissimilar drives). btrfs on DSM, combined with array duties, will offer inline bitrot and file corruption repair, plus snapshots and snapshot replication. I have a very similar system to yours running ESXi: one NVMe SSD hosting the VM configurations and virtualized storage for non-DSM VMs, with the SATA controller and 10GbE NIC passed through to DSM for full management, since that's the primary workload.
asdfaeeee Posted October 8, 2020

Quoting flyride's post above: did you do anything special to get PCI passthrough to work? Mine doesn't show up on my 3617.
flyride Posted October 8, 2020

Add a PCIe device and pick from the list; it should be there. Does any passthrough work? Is hardware support (VT-d) enabled?
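The VT-d check above can be done from the hypervisor host's shell. A hedged sketch for a Proxmox (Intel) host; AMD hosts use `amd_iommu` and report `AMD-Vi` instead:

```shell
# Confirm the IOMMU is active (Intel reports DMAR, AMD reports AMD-Vi)
dmesg | grep -e DMAR -e IOMMU

# Intel hosts also need intel_iommu=on on the kernel command line,
# e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then apply with: update-grub && reboot

# List IOMMU groups to see which devices are isolated enough to pass through
find /sys/kernel/iommu_groups/ -type l
```

VT-d/AMD-Vi must also be enabled in the motherboard firmware (BIOS/UEFI); the kernel flag alone is not sufficient.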
asdfaeeee Posted October 10, 2020

On 10/8/2020 at 2:41 PM, flyride said: Add a PCIe device, pick from the list, it should be there. Does any passthrough work? Is hardware support (VT-d) enabled?

Of course I did all that. When I pass the controller through to my FreeNAS VM, the drives show up, but not on my Xpenology VM. I'm lost.
IG-88 Posted October 10, 2020

7 hours ago, asdfaeeee said: I'm lost.

dmesg would be your friend here. The PCIe device should at least be visible there, and it will also show whether the mpt SAS drivers don't recognize the hardware or throw errors when loading.
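IG-88's dmesg check can be run from an SSH session inside the DSM VM. A minimal sketch of the diagnostics, assuming an LSI-style SAS HBA handled by the mpt2sas/mpt3sas drivers:

```shell
# Inside the DSM VM: check whether the passed-through controller
# is visible on the PCI bus at all
lspci | grep -i -e sas -e sata -e lsi

# Look for the mpt SAS driver attaching to it, or failing with errors
dmesg | grep -i -e mpt -e sas

# If a driver module is expected but absent, check whether it is loaded
lsmod | grep -i mpt
```

If the device shows up in `lspci` but no driver attaches, the DSM build (or loader extra drivers) may simply lack a driver for that controller; if it does not appear in `lspci` at all, the problem is on the hypervisor side.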
asdfaeeee Posted October 10, 2020

1 hour ago, IG-88 said: dmesg would be your friend here.

Oh *******, it never crossed my mind to check that first, lol. I forgot that Synology is just Linux.