cwiggs Posted December 29, 2022 #1

Hello! I'm looking at creating a new Xpenology VM in Proxmox 7 and am looking into the best virtual disk setup to use. Currently my DSM 6 install uses JBOD, and when I need to expand I just add a new virtual disk to the VM and add that new vdisk to the JBOD "array". However, I would rather just increase the vdisk size in Proxmox and then expand it in DSM. Searching this forum I haven't found much info, so maybe if I ask the questions here someone can answer. Here are my questions:

1. Is there a benefit to using SCSI vs SATA for your storage disks?
2. If you are already using ZFS in Proxmox there isn't a need to use RAID or Btrfs in DSM, so which RAID type/filesystem should I use?
3. How can I expand a virtual disk in DSM after things are installed on it, without data loss?
4. The vdisks live on an SSD; should I check the "SSD emulation" option in Proxmox?
5. Why is there so much storage loss with two 16G vdisks in JBOD mode? I would think usable space would be slightly less than 32G, but it seems to be ~10G usable. Why?
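For question 3, the Proxmox half of the job is easy: growing the vdisk itself is a one-liner on the host. A minimal sketch, assuming VM ID 100 and the disk attached as sata1 (both are placeholders for your setup):

# On the Proxmox host: grow the virtual disk by 10G (VM ID and disk slot are hypothetical)
qm resize 100 sata1 +10G

The open question is how to make DSM actually use the new space afterwards, which is what I dig into below.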
cwiggs Posted December 29, 2022 Author #2

Looking through the Synology docs, it doesn't seem like there is a good way to grow a virtual disk via the DSM GUI. Not being able to grow a vdisk makes sense, since official Synology systems don't run DSM on vdisks. There is some talk on the forum about how to grow a vdisk (usually with ESXi), but those threads look older.

These tests are on DVA3221 7.1.1-42962 with 1 vdisk in a "basic" storage pool. Here is some info on growing a virtual disk:

# Check which volume we are dealing with (in this e.g. it's a basic storage pool)
df | fgrep volume1
/dev/mapper/cachedev_0 21543440 107312 21317344 1% /volume1

# Looks like we are using LVM, let's check the physical volume
sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               21.77 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5573
  Free PE               194
  Allocated PE          5379
  PV UUID               TmLYpo-38Ky-3Ycm-hUqn-n2M7-u19B-ooOJpo

# Now we can check the "raid" array
sudo mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Wed Dec 28 14:25:33 2022
     Raid Level : raid1
     Array Size : 22830080 (21.77 GiB 23.38 GB)
  Used Dev Size : 22830080 (21.77 GiB 23.38 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent
    Update Time : Thu Dec 29 09:35:04 2022
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
           Name : nas04:2 (local to host nas04)
           UUID : d0a76bee:7701a3b8:9b3649e9:9f0dafc2
         Events : 13
    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3

# Stop the DSM services using the volume (syno_poweroff_task was used for this on older DSM versions)
sudo synostgvolume --unmount -p /volume1

# Resize the partition
sudo parted /dev/sdb resizepart 3 100%

# Then resize the raid array
sudo mdadm --grow /dev/md2 --size=max

# Not sure if a reboot is strictly needed, but I don't know how else to bring the services and volume back online.
sudo reboot

# Once DSM is back up you can log in, go to Storage Manager > Storage Pool and expand it there; however, we will stick with the CLI.
# Expand the physical volume first
sudo pvresize /dev/md2

# Then extend the logical volume
sudo lvextend -l +100%FREE /dev/vg1/volume_1

# Finally, resize the actual filesystem (ext4).
# I got an error saying the device was busy; I tried using synostgvolume again, but I still wasn't able to resize the filesystem.
# Trying to resize via the GUI also gave an error and asked me to submit a support ticket.
sudo resize2fs -f /dev/vg1/volume_1
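As a sanity check between the lvextend and the filesystem resize (standard LVM tooling, nothing DSM-specific, so treat it as a sketch), you can confirm the new sizes actually took before touching the filesystem:

# Confirm the PV grew and the VG has no stranded free space
sudo pvdisplay /dev/md2
sudo vgdisplay vg1

# Confirm the LV now spans the new space
sudo lvdisplay /dev/vg1/volume_1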
cwiggs Posted December 29, 2022 Author #3

After failing to resize a virtual disk that was using the "basic" RAID level and ext4, I tried with the "basic" RAID level and Btrfs. The interesting thing I've found is that the `btrfs` command seems to be missing from the CLI on 7.1.1; however, I was able to expand the filesystem using the GUI. Here is how:

# Not 100% sure if we need this command
sudo synostgvolume --unmount -p volume1

sudo parted /dev/sda resizepart 3 100%
sudo mdadm --grow /dev/md2 --size=max
sudo reboot

Then after it reboots go into DSM > Storage Manager > Storage Pool > Expand.

I assume we could do the same thing with ext4 but I haven't tried it.
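For reference, on a stock Linux box with btrfs-progs installed, the equivalent CLI step after growing the array would be a filesystem resize. I can't verify this on DSM 7.1.1 since the binary is missing there, so this is just the standard upstream usage:

# Grow the mounted Btrfs filesystem to fill the (now larger) device
sudo btrfs filesystem resize max /volume1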
cwiggs Posted December 29, 2022 Author #4

More findings: the JBOD type in DSM is just a "linear" mdadm array when you SSH in and check. It seems you can't grow an mdadm array that is linear:

sudo mdadm --grow /dev/md2 --size=max
mdadm: Cannot set device size in this type of array.

So with JBOD mode the only way to add storage space is to attach another vdisk and add it to the JBOD array (see the sketch below). That is currently how I do it with DSM 6, but IMO it isn't ideal.
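For completeness, appending a member to a linear array is what mdadm's `--grow --add` mode is for. A minimal sketch, assuming the new vdisk shows up as /dev/sdc with its data partition at /dev/sdc3 (device names are hypothetical, and in practice DSM's GUI handles the partitioning and this step for you):

# Append the new member to the end of the linear (JBOD) array
sudo mdadm --grow /dev/md2 --add /dev/sdc3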
cwiggs Posted December 29, 2022 Author #5

More findings: I rebuilt the storage pool as a "basic" array with ext4 as the filesystem. Here is what I then did to grow the vdisk:

sudo parted /dev/sdb resizepart 3 100%
sudo mdadm --grow /dev/md2 --size=max
sudo reboot

After the reboot, log in to DSM > Storage Manager > Storage Pool > Expand.

So it looks like you don't need to take any DSM service offline (synostgvolume --unmount -p volume1) in order to grow a virtual disk. It would be nice to be able to do all of this in the CLI, or all of it in the GUI, but it looks like for now that isn't possible.
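Putting the Proxmox side and the DSM side together, the whole procedure for a "basic" pool ends up looking like this. The VM ID (100), disk slot (sata1), and device names are assumptions for illustration; adjust them to your setup:

# On the Proxmox host: grow the vdisk (hypothetical VM ID and disk slot)
qm resize 100 sata1 +20G

# Inside DSM over SSH: grow the partition and the md array, then reboot
sudo parted /dev/sdb resizepart 3 100%
sudo mdadm --grow /dev/md2 --size=max
sudo reboot

# After the reboot: DSM > Storage Manager > Storage Pool > Expand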