XPEnology Community

nyxynyx

Everything posted by nyxynyx

  1. Oh no, how do I clone it into a new vdisk with a GPT partition table? Will switching to the new vdisk, with everything copied/mirrored over, be transparent to DSM? I'm pretty new to Proxmox; I hope there is an existing guide out there for this. (See the GPT conversion sketch below this list.)

     admin@NAS:/$ sudo fdisk -l /dev/sdc
     Password:
     Disk /dev/sdc: 3.9 TiB, 4294967296000 bytes, 8388608000 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x9694dbae

     Device     Boot   Start        End    Sectors  Size Id Type
     /dev/sdc1          2048    4982527    4980480  2.4G fd Linux raid autodetect
     /dev/sdc2       4982528    9176831    4194304    2G fd Linux raid autodetect
     /dev/sdc3       9437184 4194303999 4184866816    2T fd Linux raid autodetect

     admin@NAS:/$ sudo parted /dev/sdc print | grep -i '^Partition Table'
     Partition Table: msdos
  2. Hi, I have Xpenology DSM running inside a Proxmox VM. I previously resized the Proxmox disk used by Xpenology from 1 TB to 2 TB successfully. However, when I now try the same set of commands to expand from 2 TB to 4 TB, I get an error:

     Error: partition length of 8379170816 sectors exceeds the msdos-partition-table-imposed maximum of 4294967295

     What can we do to resize the drive? Here is the set of commands I'm using and their output. (See the sector arithmetic below this list for why the limit is hit.)

     admin@NAS:/$ df -Th
     Filesystem Type      Size  Used Avail Use% Mounted on
     /dev/md0   ext4      2.3G  1.1G  1.2G  47% /
     none       devtmpfs  486M     0  486M   0% /dev
     /tmp       tmpfs     500M  928K  499M   1% /tmp
     /run       tmpfs     500M  2.7M  498M   1% /run
     /dev/shm   tmpfs     500M  4.0K  500M   1% /dev/shm
     none       tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
     cgmfs      tmpfs     100K     0  100K   0% /run/cgmanager/fs
     /dev/md2   btrfs     1.9T  1.6T  328G  83% /volume1

     admin@NAS:/$ sudo mdadm --detail /dev/md2
     /dev/md2:
             Version : 1.2
       Creation Time : Tue Sep 15 20:31:02 2020
          Raid Level : raid1
          Array Size : 2092432384 (1995.50 GiB 2142.65 GB)
       Used Dev Size : 2092432384 (1995.50 GiB 2142.65 GB)
        Raid Devices : 1
       Total Devices : 1
         Persistence : Superblock is persistent

         Update Time : Sun Jun 6 23:58:06 2021
               State : clean
      Active Devices : 1
     Working Devices : 1
      Failed Devices : 0
       Spare Devices : 0

                Name : NAS:2  (local to host NAS)
                UUID : d87ce388:3e525dd5:cad9ca50:fed7642c
              Events : 30

         Number   Major   Minor   RaidDevice State
            0       8       35        0      active sync   /dev/sdc3

     admin@NAS:/$ syno_poweroff_task -d
     admin@NAS:/$ sudo umount /volume1 -f -k
     admin@NAS:/$ sudo mdadm --stop /dev/md2
     mdadm: stopped /dev/md2
     admin@NAS:/$ sudo parted /dev/sdc resizepart 3 100%
     Error: partition length of 8379170816 sectors exceeds the msdos-partition-table-imposed maximum of 4294967295

     Proxmox Xpenology VM (after resizing the disk to 4000G)
     Proxmox VM image:
       Originally: vzdump-qemu-xpenology-3617xs-6.2_23739.clean.vma.lzo.iso
       Renamed to: vzdump-qemu-100-2020_09_15-12_32_12.vma.lzo
     Source:
  3. Thanks. My system was overkill for just running DSM, so I decided to install Proxmox on bare metal and run DSM on it along with other VMs (mainly Docker containers). I think I will be using DSM mainly for file serving over the LAN, for BitTorrent downloads with Download Station, and for remote backups with Hyper Backup and maybe rsync. Maybe even Surveillance Station once I get my cameras set up. For my use case, should I set up HDD passthrough on Proxmox (if that is possible) so DSM can access the drives directly? (See the passthrough sketch below this list.)
  4. Hi, I am new to Xpenology and Proxmox, but I just managed to install Xpenology DSM 6.2.3 on Proxmox 6.2-4. DSM was assigned a single disk created by Proxmox from RAID10 (striped mirrors) ZFS storage. Since this disk already has redundancy from the underlying ZFS storage, and ZFS offers features similar to BTRFS such as snapshots, replication, quotas, and integrity protection, is it redundant to use BTRFS instead of ext4 for a new volume in DSM? Should we use 'Basic' or 'JBOD' for the storage pool in DSM? DSM only sees a single disk here. (See the zpool check below this list.) Thank you for any guidance on this issue!
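A note on the error in post 2: an msdos (MBR) partition table stores partition start and length as 32-bit sector counts, so with 512-byte sectors no single partition can exceed 4294967295 sectors (about 2 TiB), no matter how large the vdisk itself is. Checking the figures from the fdisk output and the parted error:

    admin@NAS:/$ echo $(( 4294967295 * 512 ))     # MBR ceiling in bytes, roughly 2 TiB
    2199023255040
    admin@NAS:/$ echo $(( 8388608000 - 9437184 )) # sectors from sdc3's start to the end of the 4 TB vdisk
    8379170816
    admin@NAS:/$ echo $(( 8379170816 * 512 ))     # requested sdc3 length in bytes, roughly 3.9 TiB
    4290135457792

So resizepart fails not because of free space but because the requested length cannot be represented in the msdos label; growing sdc3 past roughly 2 TiB needs either a GPT label or an additional partition/vdisk.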
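For post 1, cloning to a new vdisk is not the only option: sgdisk can rewrite an msdos label as GPT in place while keeping the existing partitions (the mdadm superblocks live inside the partitions, so they are untouched by the label change). The sequence below is an untested sketch, assuming sgdisk is available (it may be easier to run from a rescue ISO or against the raw vdisk from the Proxmox host); take a Proxmox backup or snapshot of the VM disk first.

    # Untested sketch: convert the label, then repeat the resize that previously failed.
    admin@NAS:/$ sudo sgdisk --mbrtogpt /dev/sdc             # rewrite MBR as GPT, partitions kept
    admin@NAS:/$ sudo parted /dev/sdc resizepart 3 100%      # no longer capped at 4294967295 sectors
    admin@NAS:/$ sudo partprobe /dev/sdc                     # or reboot, so the kernel sees the new size
    admin@NAS:/$ sudo mdadm --assemble /dev/md2 /dev/sdc3
    admin@NAS:/$ sudo mdadm --grow /dev/md2 --size=max       # grow the RAID1 member to the new partition size
    admin@NAS:/$ sudo btrfs filesystem resize max /volume1   # once the volume is mounted again

Whether DSM itself is happy with a GPT label on a data disk that started out as msdos is worth verifying on a test VM before trusting the result.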
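On the passthrough question in post 3: Proxmox can attach a whole physical disk to a VM with qm set, which keeps ZFS out of the picture for that disk and lets DSM manage it directly. A minimal sketch, assuming the DSM VM has ID 100 and using a placeholder device name:

    # Run on the Proxmox host. The by-id name below is a placeholder; list the disks to find yours.
    root@proxmox:~# ls -l /dev/disk/by-id/ | grep -v part
    root@proxmox:~# qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL

The disk then appears inside DSM as an additional drive that can be added to a storage pool. For file serving, Download Station, and Hyper Backup this works fine; just note that a passed-through disk is no longer protected by the host's ZFS RAID10, so redundancy has to come from DSM or from backups.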
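For post 4: since redundancy and checksumming already come from the host's ZFS pool, the choice inside DSM is less about protection and more about features. Several DSM functions (Snapshot Replication, shared-folder snapshots and quotas) require a BTRFS volume, while ext4 avoids a second layer of checksumming; with only one virtual disk, a 'Basic' storage pool is the natural fit. Either way, the host-side redundancy can be confirmed from Proxmox; the pool name rpool below is an assumption, substitute your own:

    # Run on the Proxmox host; "rpool" is assumed.
    root@proxmox:~# zpool status rpool   # shows the striped-mirror (RAID10) layout and its health
    root@proxmox:~# zpool list rpool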