XPEnology Community

Expand volume error GDT



Hello everybody,


I'm running XPEnology 5.0-4458 Update 2 with gnoboot on an ASUS C60M1-I motherboard.

I already have five 4 TB disks in SHR-1, and XPEnology has shown 14.42 TB in volume1 for a few months.


A few days ago I installed the sixth and last 4 TB disk. The system tried for 24 hours to expand the volume to 18607 GB, but it failed.

http://www.hostingpics.net/viewer.php?i ... isques.png

http://www.hostingpics.net/viewer.php?i ... Volume.png


Now I can't modify the disk group at all; the option is greyed out.


I searched the log file and found this:


Jun 12 07:44:03 Xpenology kernel: [126572.462253] EXT4-fs warning (device dm-1): ext4_resize_fs:1997: No reserved GDT blocks, can't resize

Jun 12 07:44:03 Xpenology volumehandler.cgi: (fs_vol_expand.c)ExtFSExpand(88):Failed to '/sbin/resize2fs -fpF /dev/vg1/volume_1 > /dev/null 2>&1', WEXITSTATUS(r) = 1

Jun 12 07:44:03 Xpenology volumehandler.cgi: volume_manage_with_temp_dev.c:279 Failed to expand file system on /dev/vg1/volume_1

Jun 12 07:44:03 Xpenology volumehandler.cgi: volume_lib.cpp:952 Failed to expand file system on /dev/vg1/volume_1

Jun 12 07:44:03 Xpenology volumehandler.cgi: volumehandler.cpp:331 failed to expand unallocated file system: /volume1



Xpenology> tune2fs -l /dev/vg1/volume_1

tune2fs 1.42.6 (21-Sep-2012)

Filesystem volume name: 1.42.6-3810

Last mounted on: /volume1

Filesystem UUID: c543f116-3e98-4d00-a7d6-e0944fea5d97

Filesystem magic number: 0xEF53

Filesystem revision #: 1 (dynamic)

Filesystem features: has_journal ext_attr resize_inode filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

Filesystem flags: signed_directory_hash

Default mount options: user_xattr acl

Filesystem state: clean

Errors behavior: Continue

Filesystem OS type: Linux

Inode count: 487788544

Block count: 3902279680

Reserved block count: 25600

Free blocks: 1151150548

Free inodes: 487482583

First block: 0

Block size: 4096

Fragment size: 4096

Reserved GDT blocks: 94

Blocks per group: 32768

Fragments per group: 32768

Inodes per group: 4096

Inode blocks per group: 256

Flex block group size: 16

Filesystem created: Tue Dec 10 19:47:22 2013

Last mount time: Fri Jun 13 19:22:26 2014

Last write time: Fri Jun 13 19:22:26 2014

Mount count: 21

Maximum mount count: -1

Last checked: Tue Dec 10 19:47:22 2013

Check interval: 0 (<none>)

Lifetime writes: 11 TB

Reserved blocks uid: 0 (user root)

Reserved blocks gid: 0 (group root)

First inode: 11

Inode size: 256

Required extra isize: 28

Desired extra isize: 28

Journal inode: 8

Default directory hash: half_md4

Directory Hash Seed: 9e545dcd-ea13-4832-83ac-26576f3f7007

Journal backup: inode blocks
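From the tune2fs output above, I tried to estimate the maximum size the resize_inode reservation actually allows. A back-of-the-envelope sketch (the 64-byte group-descriptor size is an assumption on my part, since the filesystem has the 64bit feature and tune2fs 1.42.6 doesn't print it):

```python
import math

# Figures taken from the tune2fs output above.
block_size       = 4096          # bytes
block_count      = 3902279680    # current filesystem size in blocks
blocks_per_group = 32768
reserved_gdt     = 94            # "Reserved GDT blocks"

# Assumption: with the 64bit feature each group descriptor is 64 bytes,
# so one GDT block holds descriptors for this many block groups:
desc_size = 64
groups_per_gdt_block = block_size // desc_size          # 64

groups_now     = math.ceil(block_count / blocks_per_group)
gdt_blocks_now = math.ceil(groups_now / groups_per_gdt_block)

# The resize_inode reservation caps growth at this many groups/bytes:
max_groups = (gdt_blocks_now + reserved_gdt) * groups_per_gdt_block
max_bytes  = max_groups * blocks_per_group * block_size

print(f"current size : {block_count * block_size / 1e12:.2f} TB")
print(f"resize cap   : {max_bytes / 1e12:.2f} TB")
print(f"target       : {18607e9 / 1e12:.2f} TB")
```

If that descriptor-size assumption holds, the cap comes out around 16.8 TB, below the 18607 GB target, so the 94 reserved GDT blocks wouldn't be enough to reach the new size anyway.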


Thanks in advance. I suppose I may have to recreate the SHR-1 array from scratch, but I'm afraid the problem won't disappear.
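If an offline resize can work around this, something like the following sketch might be worth trying first (device and mount point taken from the logs above; the existence check makes it a no-op on any other system, and whether the bundled resize2fs 1.42.6 can actually grow a 64bit filesystem offline is a separate question — a newer e2fsprogs may be needed):

```shell
#!/bin/sh
# Hypothetical offline-resize attempt; device name from the log above.
DEV=/dev/vg1/volume_1

if [ -e "$DEV" ]; then
    umount /volume1        # offline resize requires an unmounted filesystem
    e2fsck -f "$DEV"       # resize2fs insists on a freshly checked filesystem
    resize2fs "$DEV"       # grow to fill the already-extended logical volume
    mount /volume1
else
    echo "device $DEV not present; nothing to do"
fi
```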
