
ESXi: Unable to expand Syno Volume, even after disk space increased


munchgeil1


I have an issue in ESXi 6.6 and XPEnology (DS3615xs, with Jun's 1.02 loader).

I allocated a 16 GB data disk when I installed XPEnology on ESXi.

I later expanded the disk in ESXi from 16 to 20 GB (same virtual disk).

In the Storage Pool on Synology I can see 20 GB, but I cannot increase the size of the Volume. See the screenshot below.

Does anyone know how to expand the Volume size? Through terminal/SSH, maybe?

 

[Screenshot: Storage Manager showing the expanded 20 GB Storage Pool, with no option to enlarge the Volume]


The issue is that DSM only does an automatic expansion when a drive add/replace event occurs. This isn't triggered when you deliberately expand a basic volume using virtualized storage, a user action that would never occur on a "real" Synology hardware system.  It also underscores the fact that DSM is really intended to manage the drives directly, rather than having ESXi or a RAID controller provide disk redundancy.

 

EDIT: One limitation of this strategy is if the disk was initially partitioned as MBR, which is possible if the initial size of the disk was less than 2TB.  In order to increase the size of an MBR disk to more than 2TB, the partition table will need to be converted to GPT, which is beyond the scope of this advice.

 

To grow a volume manually, the general tasks are: expand the partition(s) on the drive(s) hosting the array, expand the array, and then expand the volume on the array.  This has varying levels of complexity depending upon the array type, whether you are using LVM, whether you are using SHR, and whether you are using btrfs or ext4.  Your situation is probably the least complicated - a Basic, single disk ext4.  Note that even that configuration is implemented within DSM as an array - in your case a 1-disk RAID1.
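For orientation, the whole procedure for this simplest case condenses to the commands below (a sketch only; the device names are the ones used in the worked example that follows and must be verified against your own system first):

$ df | fgrep volume1                          # identify the array (e.g. /dev/md2)
$ sudo mdadm --detail /dev/md2 | fgrep /dev/  # identify the host partition (e.g. /dev/sda3)
$ sudo syno_poweroff_task -d                  # stop services and unmount volumes
$ sudo mdadm --stop /dev/md2                  # stop the array
$ sudo fdisk /dev/sda                         # delete/recreate partition 3 at the same start sector
$ sudo shutdown -r now                        # reboot so the kernel re-reads the partition table
$ sudo mdadm --grow /dev/md2 --size=max       # grow the array
$ sudo resize2fs -f /dev/md2                  # grow the ext4 filesystem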

 

Needless to say, have a backup before you try this.  A lot of things can go wrong.

 

The sequence is thus:

 

1. Sign in via SSH, and determine your volume array device (should be /dev/md2 using your example screenshots)

$ df | fgrep volume1
/dev/md2       22490509088 16838480532 5652028556  75% /volume1

NOTE: If df returns something like /dev/vgX/volume_X instead of a /dev/md device, then your system is using LVM.  Verify the /dev/mdX array device under the LVM with the pvdisplay command, and use that for the commands below.  If multiple devices are returned for the specific volume to be resized, you have a multi-disk SHR array and should stop now.
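For example, on an LVM system pvdisplay output looks something like this (illustrative values; the PV Name line identifies the underlying /dev/mdX device to use):

$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               11.40 GiB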

 

2. Determine the host disk servicing the array (might be /dev/sda or /dev/sdb given your specific example).

Note in the example below that /dev/sda3 is the 3rd partition on /dev/sda

$ sudo mdadm --detail /dev/md2 | fgrep /dev/
/dev/md2:
       0       8        3        0      active sync   /dev/sda3

3. Stop Synology services.

This also unmounts the array volumes, but does not stop the arrays.

$ sudo syno_poweroff_task -d

4. Stop the array (reference array device from step #1)

$ sudo mdadm --stop /dev/md2

5. Delete partition 3 from array host disk and create a new one to use all the available space (referencing array host device from step #2).

IMPORTANT: The start sector of the new partition must be the same as the old one.

$ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa60f300d

Device     Boot   Start      End Sectors  Size Id Type
/dev/sda1          2048  4982527 4980480  2.4G fd Linux raid autodetect
/dev/sda2       4982528  9176831 4194304    2G fd Linux raid autodetect
/dev/sda3       9437184 16572415 7135232  3.4G fd Linux raid autodetect   <<-- this is the one we are interested in, note the start sector

Command (m for help): d
Partition number (1-3, default 3): 3

Partition 3 has been deleted.

Command (m for help): n
Partition type
   p   primary (2 primary, 0 extended, 2 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3): 3
First sector (9176832-41943039, default 9177088): 9437184
Last sector, +sectors or +size{K,M,G,T,P} (9437184-41943039, default 41943039):

Created a new partition 3 of type 'Linux' and of size 15.5 GiB.

Command (m for help): p
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa60f300d

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sda1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sda2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sda3       9437184 41943039 32505856 15.5G 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).

6. Reboot (sudo shutdown -r now)

 

7. OPTIONAL: Verify that your array is still intact

(alternatively, check cat /proc/mdstat)

$ df | fgrep volume1
/dev/md2       22490509088 16838480532 5652028556  75% /volume1
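A healthy single-disk array in /proc/mdstat looks something like this (example adapted from output later in this thread; the [1/1] means all members are present):

$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid1 sda3[0]
      11955200 blocks super 1.2 [1/1]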

8. Expand the array to use all the new space in the host disk partition

$ sudo mdadm --grow /dev/md2 --size=max
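If it succeeds, mdadm reports the new component size; for example, from a similar run later in this thread:

mdadm: component size of /dev/md2 has been set to 16251904K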

 

9. If your system is using LVM, follow the steps in post #10:

https://xpenology.com/forum/topic/14091-esxi-unable-to-expand-syno-volume-even-after-disk-space-increased/?do=findComment&comment=105841

 

10. Otherwise, expand the volume to use all the new space in the array (this is for ext4, btrfs has a different command)

$ sudo resize2fs -f /dev/md2
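For a btrfs volume the equivalent command, as given later in this thread, is run with the volume mounted:

$ sudo btrfs filesystem resize max /volume1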

11. And reboot one last time (sudo shutdown -r now)

Edited by flyride

syno_poweroff_task -d is a Synology script that runs at shutdown, and something in the script generated an internal error.  I've seen that message on occasion, but it doesn't matter as long as the services are shut down and the volumes are unmounted.

 

You could run the df | fgrep volume1 command again to verify that the volume is unmounted, and if it is, move on to the next step.  You could also repeat the syno_poweroff_task -d command and it would probably run without error.


With LVM, the procedure you need to use to access the additional physical storage is different.

 

It looks like this:

 

/dev/sdb3 -> /dev/md2 -> /dev/vg1 -> filesystem

 

So you must resize the partition on the physical device sdb3, then the array, then the LVM, then the filesystem.

 

After step 8, follow the instructions in this link to "pvresize" and "lvextend"

https://ma.ttias.be/increase-a-vmware-disk-size-vmdk-formatted-as-linux-lvm-without-rebooting/

 

Once you get to resizing the filesystem (resize2fs), resume with step #9 above.
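In condensed form, the extra LVM steps look like this (using the vg1/volume_1 names from this system; post #10 below walks through them with example output):

$ sudo pvresize /dev/md2
$ sudo lvextend -l +100%FREE /dev/vg1/volume_1
$ sudo resize2fs -f /dev/vg1/volume_1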

 

Just to explain what is happening: in DSM 6.2.1, Synology now allows you to choose whether or not to use LVM (Logical Volume Manager) when a Storage Group is created.  It's actually the setting where you choose "performance" (native) vs. "flexibility" (LVM), and that also enables or disables the option for RAID or SHR as your redundancy mode.

 

[Screenshot: Storage Pool creation dialog showing the "performance" vs. "flexibility" choice]

Edited by flyride

I went with the terminal route... and have an issue.

 

I successfully went through step 5, then tried lvextend (from the hyperlink) and couldn't proceed; the error stated:

  Device /dev/sdb3 not found (or ignored by filtering)

 

See the full code below, but bottom line: I noticed you said "/dev/sdb3 -> /dev/md2 -> /dev/vg1 -> filesystem",

so I tried to expand md2 first... and it worked (see fdisk -l). Then I tried "lvextend" again and got the same error:

  Device /dev/sdb3 not found (or ignored by filtering)

 

Then I ran "df" and noticed that in addition to volume_1 I see other volumes, because of the "docker" app that was installed within Synology.

I think the array is corrupt...

See the full code below.

 

Is there any way out of this, or should I just recreate the VM or even reinstall XPEnology?

(Before making changes, I exported the entire VM using the "Export as OVF template" function in ESXi (check the box next to the VM, then choose "Export as OVF template" from the menu) into the following files: disk-0.vmdk, disk-1.vmdk, Xpeno6.2.ovf. I don't know how/where to put them back into ESXi, however.)

 

 

Here is the terminal code:

(I made a mistake with fdisk, but stopped in time)

 

START OF TERMINAL CODE********

 

admins-iMac:~ admin$ ssh adminsyno@192.168.1.123

adminsyno@192.168.1.123's password: 
adminsyno@Syno:~$ df|fgrep volume1
/dev/vg1/volume_1  11621336 8114632   3387920  71% /volume1
adminsyno@Syno:~$ sudo mdadm --detail /dev/md2 | fgrep /dev/
Password: 
/dev/md2:
       0       8       19        0      active sync   /dev/sdb3
adminsyno@Syno:~$ cd /
adminsyno@Syno:/$ sudo syno_poweroff_task -d
adminsyno@Syno:/$ df|fgrep volume1
adminsyno@Syno:/$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/md0         2385528 1009088   1257656  45% /
none             2013752       0   2013752   0% /dev
/tmp             2027544     592   2026952   1% /tmp
/run             2027544    2392   2025152   1% /run
/dev/shm         2027544      12   2027532   1% /dev/shm
none                   4       0         4   0% /sys/fs/cgroup
cgmfs                100       0       100   0% /run/cgmanager/fs
adminsyno@Syno:/$ sudo mdadm --detail /dev/md2 | fgrep /dev/
/dev/md2:
       0       8       19        0      active sync   /dev/sdb3
adminsyno@Syno:/$ sudo mdadm --stop /dev/md2
mdadm: stopped /dev/md2
adminsyno@Syno:/$ sudo fdisk /dev/sdb3

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

/dev/sdb3: device contains a valid 'linux_raid_member' signature; it is strongly recommended to wipe the device with wipefs(8) if this is unexpected, in order to avoid possible collisions

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xe05944f1.

Command (m for help): m

Help:

  DOS (MBR)
   a   toggle a bootable flag
   b   edit nested BSD disklabel
   c   toggle the dos compatibility flag

  Generic
   d   delete a partition
   l   list known partition types
   n   add a new partition
   p   print the partition table
   t   change a partition type
   v   verify the partition table

  Misc
   m   print this menu
   u   change display/entry units
   x   extra functionality (experts only)

  Script
   I   load disk layout from sfdisk script file
   O   dump disk layout to sfdisk script file

  Save & Exit
   w   write table to disk and exit
   q   quit without saving changes

  Create a new label
   g   create a new empty GPT partition table
   G   create a new empty SGI (IRIX) partition table
   o   create a new empty DOS partition table
   s   create a new empty Sun partition table


Command (m for help): q

adminsyno@Syno:/$ sudo fdisk /dev/sdb
Password: 

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf8c48628

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 33349631 23912448 11.4G fd Linux raid autodetect

Command (m for help): d
Partition number (1-3, default 3): 3

Partition 3 has been deleted.

Command (m for help): n
Partition type
   p   primary (2 primary, 0 extended, 2 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3): 3
First sector (9176832-41943039, default 9177088): 9437184
Last sector, +sectors or +size{K,M,G,T,P} (9437184-41943039, default 41943039): 

Created a new partition 3 of type 'Linux' and of size 15.5 GiB.

Command (m for help): p
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf8c48628

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 41943039 32505856 15.5G 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).

adminsyno@Syno:/$ lvgdisplay
-sh: lvgdisplay: command not found
adminsyno@Syno:/$ vgdisplay
  WARNING: Running as a non-root user. Functionality may be unavailable.
  /dev/md2: open failed: Permission denied
adminsyno@Syno:/$ sudo -i
Password: 
root@Syno:~# lvgdisplay
-ash: lvgdisplay: command not found
root@Syno:~# vgdisplay
root@Syno:~# lvextend /dev/vg1/volume_1 /dev/sdb3
  Volume group "vg1" not found
root@Syno:~# partprobe -s
-ash: partprobe: command not found
root@Syno:~# vgdisplay
root@Syno:~# sudo shutdown -r now

Broadcast message from adminsyno@Syno
    (/dev/pts/12) at 20:19 ...

The system is going down for reboot NOW!
root@Syno:~# Connection to 192.168.1.123 closed by remote host.
Connection to 192.168.1.123 closed.
admins-iMac:~ admin$ ssh adminsyno@192.168.1.123
adminsyno@192.168.1.123's password: 
adminsyno@Syno:~$ cd /
adminsyno@Syno:/$ df
Filesystem        1K-blocks    Used Available Use% Mounted on
/dev/md0            2385528 1010660   1256084  45% /
none                2013752       0   2013752   0% /dev
/tmp                2027544     584   2026960   1% /tmp
/run                2027544    3504   2024040   1% /run
/dev/shm            2027544     188   2027356   1% /dev/shm
none                      4       0         4   0% /sys/fs/cgroup
cgmfs                   100       0       100   0% /run/cgmanager/fs
/dev/vg1/volume_1  11621336 8114640   3387912  71% /volume1
adminsyno@Syno:/$ lvextend /dev/vg1/volume_1 /dev/sdb3
  WARNING: Running as a non-root user. Functionality may be unavailable.
  /var/lock/lvm/V_vg1:aux: open failed: Permission denied
  Can't get lock for vg1
adminsyno@Syno:/$ sudo -i
Password: 
root@Syno:~# lvextend /dev/vg/volume_1 /dev/sdb3
  Volume group "vg" not found
root@Syno:~# lvextend /dev/vg1/volume_1 /dev/sdb3
  Physical Volume "/dev/sdb3" not found in Volume Group "vg1".
root@Syno:~# vgextend vg1 /dev/sdb3
  Device /dev/sdb3 not found (or ignored by filtering).
  Unable to add physical volume '/dev/sdb3' to volume group 'vg1'.
root@Syno:~# sudo syno_poweroff_task -d
root@Syno:~# lvextend /dev/vg1/volume_1 /dev/sdb3
  Physical Volume "/dev/sdb3" not found in Volume Group "vg1".
root@Syno:~# fdisk -l
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf8c48628

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 41943039 32505856 15.5G 83 Linux


Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/zram0: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram1: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram2: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram3: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md2: 11.4 GiB, 12242124800 bytes, 23910400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@Syno:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1] 
md2 : active raid1 sdb3[0]
      11955200 blocks super 1.2 [1/1]
      
md1 : active raid1 sdb2[0]
      2097088 blocks [12/1] [U___________]
      
md0 : active raid1 sdb1[0]
      2490176 blocks [12/1] [U___________]
      
unused devices: <none>
root@Syno:~# sudo mdadm --grow /dev/md2 --size=max
mdadm: component size of /dev/md2 has been set to 16251904K
root@Syno:~# fdisk -l
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf8c48628

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 41943039 32505856 15.5G 83 Linux


Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/zram0: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram1: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram2: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram3: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md2: 15.5 GiB, 16641949696 bytes, 32503808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@Syno:~# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               11.40 GiB
  PE Size               4.00 MiB
  Total PE              2918
  Alloc PE / Size       2918 / 11.40 GiB
  Free  PE / Size       0 / 0   
  VG UUID               f8NayU-uRhd-Yo2G-XcEC-8I4A-H0C3-kidAwF
   
root@Syno:~# pvsacn
-ash: pvsacn: command not found
root@Syno:~# pvscan
-ash: pvscan: command not found
root@Syno:~# vgextend vg1 /dev/sdb3
  Device /dev/sdb3 not found (or ignored by filtering).
  Unable to add physical volume '/dev/sdb3' to volume group 'vg1'.
root@Syno:~# pvcreate /dev/sdb3
  Device /dev/sdb3 not found (or ignored by filtering).
root@Syno:~# cat/proc/mdstat
-ash: cat/proc/mdstat: No such file or directory
root@Syno:~# sudo shutdown -r now

Broadcast message from adminsyno@Syno
    (/dev/pts/20) at 21:03 ...

The system is going down for reboot NOW!
root@Syno:~# Connection to 192.168.1.123 closed by remote host.
Connection to 192.168.1.123 closed.
admins-iMac:~ admin$ ssh adminsyno@192.168.1.123
adminsyno@192.168.1.123's password: 
adminsyno@Syno:~$ cd /
adminsyno@Syno:/$ ls
bin     dev  etc.defaults  lib    lib64       mnt   root  sbin  tmp      usr  var.defaults  volume2
config  etc  initrd        lib32  lost+found  proc  run   sys   tmpRoot  var  volume1       volumeSATA1
adminsyno@Syno:/$ sudo -i
Password: 
root@Syno:~# cat/proc/mdstat
-ash: cat/proc/mdstat: No such file or directory
root@Syno:~# df
Filesystem        1K-blocks    Used Available Use% Mounted on
/dev/md0            2385528 1011432   1255312  45% /
none                2013752       0   2013752   0% /dev
/tmp                2027544     584   2026960   1% /tmp
/run                2027544    3516   2024028   1% /run
/dev/shm            2027544     188   2027356   1% /dev/shm
none                      4       0         4   0% /sys/fs/cgroup
cgmfs                   100       0       100   0% /run/cgmanager/fs
/dev/vg1/volume_1  11621336 8114660   3387892  71% /volume1
none               11621336 8114660   3387892  71% /volume1/@docker/aufs/mnt/0b8b66d8e2b10914a6643256da1ea74ab016460a54f05643fc4af4f1be4cd964
shm                   65536       0     65536   0% /volume1/@docker/containers/b42390ab0cfc666bdcb983b238c1ad4109e438f9802c0131daa86696e8fe3b38/shm
none               11621336 8114660   3387892  71% /volume1/@docker/aufs/mnt/f16c7e6ef763086e5c50c9886229067a2c48ee30e4cbac7f608db994475c0725
shm                   65536       0     65536   0% /volume1/@docker/containers/f6d2fcf17ab7404e632944f60531401cbf605603937af131e5f2b3a5a9dc3850/shm
none               11621336 8114660   3387892  71% /volume1/@docker/aufs/mnt/c944b65166871552ea9747d5b4ed7fe5b7dc56b9d7fea44a2f09c6d684351650
shm                   65536       0     65536   0% /volume1/@docker/containers/6636385e08e6756f28304f4d0a751e5c2d7e5ac807112827d0f77a21066ca225/shm
root@Syno:~# df|fgrep volume1
/dev/vg1/volume_1  11621336 8114800   3387752  71% /volume1
none               11621336 8114800   3387752  71% /volume1/@docker/aufs/mnt/0b8b66d8e2b10914a6643256da1ea74ab016460a54f05643fc4af4f1be4cd964
shm                   65536       0     65536   0% /volume1/@docker/containers/b42390ab0cfc666bdcb983b238c1ad4109e438f9802c0131daa86696e8fe3b38/shm
none               11621336 8114800   3387752  71% /volume1/@docker/aufs/mnt/f16c7e6ef763086e5c50c9886229067a2c48ee30e4cbac7f608db994475c0725
shm                   65536       0     65536   0% /volume1/@docker/containers/f6d2fcf17ab7404e632944f60531401cbf605603937af131e5f2b3a5a9dc3850/shm
none               11621336 8114800   3387752  71% /volume1/@docker/aufs/mnt/c944b65166871552ea9747d5b4ed7fe5b7dc56b9d7fea44a2f09c6d684351650
shm                   65536       0     65536   0% /volume1/@docker/containers/6636385e08e6756f28304f4d0a751e5c2d7e5ac807112827d0f77a21066ca225/shm
root@Syno:~# sudo syno_poweroff_task -d
root@Syno:~# 
 

 

 

Edited by munchgeil1

From this posting, your array does not appear corrupt.  You have extended the base host disk partition and your array, but nothing else has been done. You should be able to use the system with no ill effect as it is now.

 

I'll try to simulate your exact scenario later today and post an updated command list.

Edited by flyride

Update: 

 

I rebooted XPEnology and I can log in to DSM normally. Nothing is missing; everything looks normal.

I stopped Docker within DSM.

I SSHed back into Synology via terminal, and when I run "df" the array looks normal.

 

I see the volume is OK both mounted and unmounted.

I see sdb3 and md2 are correctly expanded.

 

So I think everything is normal... I just cannot expand the Volume Group and Logical Volume...

😞

 

 

With Volume mounted:

 

root@Syno:~# df
Filesystem        1K-blocks    Used Available Use% Mounted on
/dev/md0            2385528 1016796   1249948  45% /
none                2013752       0   2013752   0% /dev
/tmp                2027544     612   2026932   1% /tmp
/run                2027544    3136   2024408   1% /run
/dev/shm            2027544      12   2027532   1% /dev/shm
none                      4       0         4   0% /sys/fs/cgroup
cgmfs                   100       0       100   0% /run/cgmanager/fs
/dev/vg1/volume_1  11621336 8114668   3387884  71% /volume1

 

 

Volume Unmounted:

 

root@Syno:~# sudo syno_poweroff_task -d
Unknown format [ pre-stop process 20270
], parse failed
root@Syno:~# sudo syno_poweroff_task -d
root@Syno:~# df | fgrep volume1
root@Syno:~# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/md0         2385528 1016028   1250716  45% /
none             2013752       0   2013752   0% /dev
/tmp             2027544     604   2026940   1% /tmp
/run             2027544    2392   2025152   1% /run
/dev/shm         2027544      12   2027532   1% /dev/shm
none                   4       0         4   0% /sys/fs/cgroup
cgmfs                100       0       100   0% /run/cgmanager/fs

 

 

And I can see that sdb3 and md2 are correctly expanded:

 

root@Syno:~# fdisk -l
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf8c48628

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 41943039 32505856 15.5G 83 Linux


Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/zram0: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram1: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram2: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram3: 594 MiB, 622854144 bytes, 152064 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md2: 15.5 GiB, 16641949696 bytes, 32503808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

And cat /proc/mdstat shows this:

 

root@Syno:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1] 
md2 : active raid1 sdb3[0]
      16251904 blocks super 1.2 [1/1]
      
md1 : active raid1 sdb2[0]
      2097088 blocks [12/1] [U___________]
      
md0 : active raid1 sdb1[0]
      2490176 blocks [12/1] [U___________]
      
unused devices: <none>

 

 

 

But the Volume Group is NOT expanded:

 

root@Syno:~# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               11.40 GiB
  PE Size               4.00 MiB
  Total PE              2918
  Alloc PE / Size       2918 / 11.40 GiB
  Free  PE / Size       0 / 0   
  VG UUID               f8NayU-uRhd-Yo2G-XcEC-8I4A-H0C3-kidAwF

Edited by munchgeil1

Ok, I think this will do it for you.

 

You grew the partition on your host disk and extended the /dev/md2 array that sits on top of that.

We still need to tell LVM that its "physical" device (/dev/md2) has changed.

We need to extend the logical volume

and we need to resize the filesystem.

The good news is that all this can be done without shutting down services.

 

Right now, vgdisplay will show that all the space available to the volume group is completely used, even though we have free space on /dev/md2.
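For reference, these are the telltale lines from your earlier vgdisplay output, showing no free extents:

  VG Size               11.40 GiB
  Alloc PE / Size       2918 / 11.40 GiB
  Free  PE / Size       0 / 0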

 

So, we resume essentially at step #9 of the prior guide. 

 

9. Inform LVM that the physical device got bigger.

$ sudo pvresize /dev/md2
  Physical volume "/dev/md2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

If you re-run vgdisplay now, you should see some free space.
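Something like this (illustrative figures; the exact amount depends on how much the partition grew):

$ sudo vgdisplay | fgrep Free
  Free  PE / Size       1048 / 4.09 GiB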

 

10. Extend the LV

$ sudo lvextend -l +100%FREE /dev/vg1/volume_1
  Size of logical volume vg1/volume_1 changed from 15.39 GiB (3939 extents) to 20.48 GiB (5244 extents).
  Logical volume volume_1 successfully resized.

11. Finally, extend the filesystem (this is for ext4, there is a different command for btrfs)

$ sudo resize2fs -f /dev/vg1/volume_1
resize2fs 1.42.6 (21-Sep-2012)
Filesystem at /dev/vg1/volume_1 is mounted on /volume1; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/vg1/volume_1 is now 5369856 blocks long.

12. And reboot one last time (sudo shutdown -r now)

Edited by flyride

For those who want to use Synology DSM to "expand" a Volume in DSM, and do not want to use the terminal:

  1. In ESXi, add a new VMDK to the existing XPEnology VM:
    1. Add a new HDD.
    2. Use the existing SATA controller (the one tied to the HDD where the Synology data is stored), but a different bus node. For example, if the existing HDD is using 1:0, the new HDD should use 1:1.
  2. Log into DSM and open the Storage Manager app:
    1. Create a new Storage Group.
    2. Create a new Volume within that Storage Group.
    3. Copy all items from the existing volume (e.g. Volume 1) to the new volume (Volume 2):
      1. Copy the data from Shared Folders.
      2. Some apps require being opened and the "new storage" specified within the app itself (for example, MariaDB).
      3. You may need to re-install some apps on the new volume (stated by some on the Synology forums).
    4. Go back to Storage Manager:
      1. Delete the original Volume and the associated Storage Group.
      2. In ESXi, delete the old VMDK.

Hey guys, I am trying this on another VM and can't seem to make it work; I am seeing the same problems as munchgeil1 did.  I am running 6.2.1 on Jun's 1.03b bootloader using ESXi.  I am trying to expand my storage from 2 TB to 3 TB, which has been set in ESXi.

 

When I try step 9, to inform LVM that the physical device got bigger, it says this:

$ sudo pvresize /dev/md2
  Failed to find physical volume "/dev/md2".
  0 physical volume(s) resized / 0 physical volume(s) not resized

 

but /dev/md2 is there when I do df -l:

$ df -l | fgrep volume1
/dev/md2       2109000848 1284698592 824183472  61% /volume1

 

If I do sudo vgdisplay nothing appears.

 

If I run vgdisplay as non-root I get this:

$ vgdisplay
  WARNING: Running as a non-root user. Functionality may be unavailable.
  /dev/md2: open failed: Permission denied

 

 

When I stop services and start the fdisk steps, you can see the 3 TB there:

# sudo fdisk /dev/sdb

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The size of this disk is 3 TiB (3298534883328 bytes). DOS partition table format can not be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).

Command (m for help): p

Disk /dev/sdb: 3 TiB, 3298534883328 bytes, 6442450944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x66624a2c

Device     Boot   Start        End    Sectors  Size Id Type
/dev/sdb1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 4294967294 4285530111    2T 83 Linux

 

Any ideas?  

 

I think it could be because you can't use 3 TB since the disklabel type is DOS (MBR)?

 

Edited by bagheera

On 2/1/2019 at 12:50 AM, bagheera said:

If I do sudo vgdisplay nothing appears.

 

That's a pretty good sign that the Storage Pool is not using LVM.  Also when you df the volume and see that the host device is the /dev/md2 array and not the logical volume, you may conclude the same thing. Therefore you can just follow the plan from the very first post in this thread - in other words, the only step left should be to expand the filesystem.  Do you know if you are running btrfs or ext4?  If it is btrfs, the command is different and it is preferable to have the volume mounted:

$ sudo btrfs filesystem resize max /volume1

 

Edited by flyride

12 hours ago, flyride said:

[quotes the reply above]

 

 

My filesystem is ext4 so that command won't work.

 

When I do the last few steps to resize the filesystem, it doesn't expand; it remains unchanged. This is what I get:

 

admin@thor:/$ sudo mdadm --grow /dev/md2 --size=max
Password: 
mdadm: component size of /dev/md2 unchanged at 2142763968K
admin@thor:/$ sudo resize2fs -f /dev/md2 
resize2fs 1.42.6 (21-Sep-2012)
The filesystem is already 535690992 blocks long.  Nothing to do! (Device size is 535690992 blocks long) 

 

 

Here is the full output from what I did:

root@thor:~# df | fgrep volume1
/dev/md2       2109000848 1304561392 804320672  62% /volume1
none           2109000848 1304561392 804320672  62% /volume1/@docker/aufs/mnt/27f59591e3903ea64b59648ab812eaf52cd162c3ccf2f48c64fb074bafe26779
shm                 65536          0     65536   0% /volume1/@docker/containers/cd6258556db17d6036ce4d38d038d10e3a704ad2d26dd54f8caa2cfbfb40c249/shm
root@thor:~# sudo mdadm --detail /dev/md2 | fgrep /dev/
/dev/md2:
       0       8       19        0      active sync   /dev/sdb3
root@thor:~# sudo syno_poweroff_task -d
Unknown format [ pre-stop process 15577
], parse failed
root@thor:~# sudo syno_poweroff_task -d
root@thor:~# 
root@thor:~# sudo mdadm --stop /dev/md2
mdadm: stopped /dev/md2
root@thor:~# 
root@thor:~# sudo fdisk /dev/sdb

Welcome to fdisk (util-linux 2.26.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The size of this disk is 3 TiB (3298534883328 bytes). DOS partition table format can not be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).

Command (m for help): p

Disk /dev/sdb: 3 TiB, 3298534883328 bytes, 6442450944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x66624a2c

Device     Boot   Start        End    Sectors  Size Id Type
/dev/sdb1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 4294967294 4285530111    2T 83 Linux

Command (m for help): d
Partition number (1-3, default 3): 3

Partition 3 has been deleted.

Command (m for help): n
Partition type
   p   primary (2 primary, 0 extended, 2 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3): 3
First sector (9176832-4294967295, default 9177088): 9437184
Last sector, +sectors or +size{K,M,G,T,P} (9437184-4294967294, default 4294967294): 

Created a new partition 3 of type 'Linux' and of size 2 TiB.

Command (m for help): p
Disk /dev/sdb: 3 TiB, 3298534883328 bytes, 6442450944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x66624a2c

Device     Boot   Start        End    Sectors  Size Id Type
/dev/sdb1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 4294967294 4285530111    2T 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).

root@thor:~# sudo shutdown -r now

Broadcast message from admin@thor
(/dev/pts/9) at 12:36 ...

The system is going down for reboot NOW!
root@thor:~# Connection to thor closed by remote host.
Connection to thor closed.
it-vpn-152-188:~ Pranay$ ssh -l admin thor
Could not chdir to home directory /var/services/homes/admin: No such file or directory
admin@thor:/$ 
admin@thor:/$ df | fgrep volume1
/dev/md2       2109000848 1306859020 802023044  62% /volume1
admin@thor:/$ sudo mdadm --grow /dev/md2 --size=max
Password: 
mdadm: component size of /dev/md2 unchanged at 2142763968K
admin@thor:/$ sudo resize2fs -f /dev/md2 
resize2fs 1.42.6 (21-Sep-2012)
The filesystem is already 535690992 blocks long.  Nothing to do! (Device size is 535690992 blocks long) 

 


On 2/1/2019 at 12:50 AM, bagheera said:

I think it could be because you can't use a 3TB since the disk type is Dos?

 

You are probably correct; in this particular example you need to switch the partition table to GPT.  gdisk can do this, but you'll have to load a binary onto your system; it's not there now.  Again, this problem would never actually happen with a real Synology (since disks cannot change sizes), so they probably don't have a utility/method for this.  Please note that changing partition table types is a pretty high-risk operation, and you might be better served just creating a new store that is large enough, then deleting your old one.

 

If you want to pursue it, start here: https://askubuntu.com/questions/84501/how-can-i-change-convert-a-ubuntu-mbr-drive-to-a-gpt-and-make-ubuntu-boot-from
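For reference, the conversion session itself is short. This is only an illustrative sketch (gdisk is not present in DSM, so you would have to supply a static binary yourself, and a mistake here can destroy the disk): launching gdisk against an MBR disk converts the table to GPT in memory, and writing it makes the change permanent.

$ sudo ./gdisk /dev/sdb
Found invalid GPT and valid MBR; converting MBR to GPT format in memory.
...
Command (? for help): w

After a reboot, partition 3 could then be deleted and recreated at the same start sector to cover the full disk, as in step 5 of the guide above.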


  • 3 weeks later...

Hi again,

 

Well, I hit the max size again, and wanted to follow the same method to extend the volume via terminal, just like last time.

However, I ran into a problem this time and I cannot get around it.

 

The problem is at Steps 3 and 4, but really at Step 4.

 

  • Step 3:
    • When I SSH in as the "admin" user, then switch to root (sudo -i), then run syno_poweroff_task -d, the connection to the server is closed (even though -d is used).
    • When I SSH in as the "admin" user and run syno_poweroff_task -d, the command executes correctly (success).
      • So I proceed to step 4 as the "admin" user.
  • Step 4:
    • Running sudo mdadm --stop /dev/md2 produces an error:
      • mdadm: Cannot get exclusive access to /dev/md2: Perhaps a running process, mounted filesystem or active volume group?
        • When I run sudo umount /dev/md2 I get "target is busy".
        • When I run vgdisplay, I can see all the Volume Group details.
        • When I run cat /proc/mounts, I can see the volume group and volume still mounted!
    • How can I get past this error?
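One untested possibility, based only on the outputs above: since vgdisplay shows the volume group still active, it may be what is holding /dev/md2 open. Unmounting by mount point rather than by md device, deactivating the volume group, and then stopping the array might get past it:

$ sudo umount /volume1
$ sudo vgchange -an vg1
$ sudo mdadm --stop /dev/md2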

 

If this is not workable, then I'll try adding a new datastore and moving everything to that datastore.

Edited by munchgeil1

Moving everything (apps and associated data) from one volume to another isn't as easy as I thought.

 

Does anyone have experience with how to do it?

 

I currently have data on Volume 1, and I created a new Volume 3. I would like to move all apps and associated data from Volume 1 to Volume 3. Then I would delete Volume 1 (since Volume 1 and 3 are on the same SSD, and I need the space).

 

I have 30 installed apps, and some of them allow changing the "target volume", but most do NOT. Uninstalling, reinstalling, and reconfiguring would just take too much time.

 

Is there an easier way?


  • 1 year later...
On 1/28/2019 at 1:40 PM, flyride said:

[quotes flyride's step-by-step expansion guide from the first post in this thread]

 

Thanks a lot. I followed your guide and was able to expand the hard disk.

FYI, I was not able to do step 4, but I was still able to extend my disk size.


  • 2 months later...
On 1/29/2019 at 3:00 PM, flyride said:

[quotes flyride's LVM resize steps (post #10) from earlier in this thread]

This works well! You are a hero.

