NeoID Posted June 25, 2014 #1 (edited) The guide has been moved so it can be kept up to date: http://idmedia.no/general/expand-volume ... esnt-want/ Edited February 10, 2015 by Guest
Werter Posted June 27, 2014 #2 I just encountered the same problem, but with my 5th drive. I press the expand button and it starts, but it stops almost right away and then goes back to the line saying "you can expand to about 14886 GB".
NeoID Posted June 27, 2014 Author #3 @Werter: Do you also use Hyper-V? I'm thinking about giving ESXi a try... My best bet is that Hyper-V's SCSI controller is messing something up, but I'm not completely sure...
Werter Posted June 27, 2014 #4 No, I am using a PC with a USB stick running NanoBoot 5.0.3.1, DSM 5.0-4493 Update 1 x64.
NeoID Posted June 27, 2014 Author #5 That's very interesting! If you also experience this problem on a physical PC running NanoBoot, then I would suggest it's either something NanoBoot does not handle correctly or a problem with drivers/controllers.
manfriday Posted June 27, 2014 #6 Thinking out loud: it might help if we post our configurations to aid the troubleshooting process. VMs are obviously different from physical boxes. I have swapped out smaller disks for larger ones, swapped bad disks for good, added SATA cards and added disks to those cards, and added and removed disks while the system is running, across versions of gnoBoot and NanoBoot, all without issues on Intel-based systems.
System: ZOTAC Atom Dual-Core 1.6 GHz / MCP7A-ION / DVI & HDMI / A&V&GbE / Mini-ITX motherboard IONITX-G-E x2
2x 1 GB DDR2
SATA card: IO Crest 2-port SATA III PCI-Express x1 card (SY-PEX40039)
HDDs of all types
I have had mixed results with other boards/chipsets, some OK, others not so much. I tend to avoid AMD systems; Intel-based systems seem to provide the most stable experience. No experience with VIA, ARM or VMs, even though I run VMs for other purposes. The ZOTAC Atoms are 24x7 systems.
stanza Posted June 29, 2014 #7 Does the 9th drive show up in HDD manager? If not, edit your synoinfo.conf file to allow more than 8 drives.
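For reference, a minimal sketch of the kind of synoinfo.conf edit usually meant here; the values below are examples only and depend on how many internal ports your build actually has (the usual advice is to edit both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, then reboot):
maxdisks="12"              # number of drive slots DSM will show in Storage Manager
internalportcfg="0xfff"    # bitmask with one bit per internal SATA port (0xfff = 12 ports)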
Werter Posted July 1, 2014 #8 I tried a test build with 6 disks. I started with 2 disks, then added one disk at a time and it worked. Then I built the real rig with 2x 4 TB as one volume and added 3 disks one by one. With the last disk, number five, it says it is ready to be expanded to 14 TB something, but when I start it, it stops before it even gets to 0.01 percent. I have filled the disks with media and don't want to lose it. My rig is: i7 960, GIGABYTE GA-X58A-UD3R V2.0 motherboard (Intel X58 Express, LGA 1366, DDR3), 5x 4 TB Seagate NAS HDDs, 6 GB Corsair memory, Corsair 650 W power supply.
Werter Posted July 1, 2014 #9 Found this, but I don't know if it will help. I'm not any good at commands and editing XPEnology: http://www.mauchle.name/blog/?p=235
NeoID Posted July 1, 2014 Author #10 I've tried that and it didn't go very well. The volume got marked as crashed, set to read-only, and I could not get it to rebuild itself. DSM asked me to reboot in order to fix issues with the volume, but it didn't do anything on the next boot (I guess GnoBoot/NanoBoot does not support this).
Werter Posted July 1, 2014 #11 [quoting NeoID above] Okay, that's too bad. I wonder what it could be. I was so happy that my test build worked, and now with the real setup it doesn't work, and I don't have anywhere to back up my data.
Werter Posted July 1, 2014 #12 I also found this thread, have you looked into that? http://59-124-41-244.hinet-ip.hinet.net ... =7&t=56468
Werter Posted July 2, 2014 #13 Can someone help me with the commands in the text below? When I log in with PuTTY as root I get into my NAS, but when I enter the command "umount /volume1" and press Enter, it says "umount/volume1: not found".
Make sure all users on all PCs are logged out of all shares on the volume (also make sure all network drives on all PCs are disconnected!)
Log in as root (with the admin password)
Via SSH, try "umount /volume1". If that fails, shut down all connected network shares.
If that still fails, check with "smbstatus" whether there are any locked connections => if you see locked connections, note the PID (process ID) of the smb process and issue a "kill <pid>" per process
"umount /volume1" again
Issue a "fsck.ext4 /dev/md2" (this can take some time); for an unattended check, issue "fsck.ext4 -y /dev/md2" <= in this case all fix operations are executed automatically without prompting you (= the user) for yes/no
After checking/fixing is complete, "mount /volume1"
Your volume should now come back online and you should be able to successfully expand the volume via the Storage Manager
If you keep having problems unmounting the /volume1 directory, do "mount -o remount,ro /volume1" instead of "umount /volume1". This will make the volume read-only; you can then fsck /dev/md2.
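For what it's worth, the error "umount/volume1: not found" usually just means the space between the command and the path got lost when it was typed or pasted. Typed as separate commands over SSH, the sequence quoted above would look roughly like this (a sketch, assuming the data volume is /volume1 on /dev/md2 as in those instructions):
umount /volume1          # fails with "device is busy" if something still has files open
smbstatus                # list SMB sessions and locks; note the PID of each process holding a lock
kill <pid>               # repeat for each listed smbd PID
umount /volume1          # try again once the locks are gone
fsck.ext4 -y /dev/md2    # check and auto-repair the filesystem (can take a long time)
mount /volume1           # bring the volume back online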
Werter Posted July 4, 2014 #14 I get this when I type the command "vgdisplay -v". How do I change the size of "Total PE / Free PE 3810823 / 0"?
TVIX> vgdisplay -v
Finding all volume groups
Finding volume group "vg1000"
--- Volume group ---
VG Name               vg1000
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  9
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                1
Act PV                1
VG Size               14.54 TB
PE Size               4.00 MB
Total PE              3810823
Alloc PE / Size       3810823 / 14.54 TB
Free PE / Size        0 / 0
VG UUID               Md51rm-ke80-E104-cN2s-fuJT-Dbqo-WiOIN3
--- Logical volume ---
LV Name                /dev/vg1000/lv
VG Name                vg1000
LV UUID                YiBJnA-k8yD-F2rq-Aj5t-rgwL-rkmz-UQG2SW
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                14.54 TB
Current LE             3810823
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     4096
Block device           253:0
--- Physical volumes ---
PV Name               /dev/md2
PV UUID               H5YF1N-iNUc-CmJM-fjbZ-yKyp-Srz8-QdcwQJ
PV Status             allocatable
Total PE / Free PE    3810823 / 0
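A sketch of how the Free PE count normally grows on an LVM-backed volume like this one, assuming the underlying RAID device /dev/md2 has already been expanded first (if md2 itself has not grown, pvresize will simply report that there is nothing to do):
pvresize /dev/md2                       # let LVM see the new size of the physical volume
vgdisplay vg1000                        # Free PE should now be greater than 0
lvextend -l +100%FREE /dev/vg1000/lv    # grow the logical volume into the free extents
resize2fs /dev/vg1000/lv                # grow the ext4 filesystem to match the new LV size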
goldserve Posted July 5, 2014 #15 Check your log messages; they may show that you have file system errors. Find a guide online on how to unmount the volume and run a disk check.
Werter Posted July 6, 2014 #16 [quoting goldserve above] Where do I find the log file?
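For reference, on a Linux-based DSM box the relevant logs can usually be read over SSH along these lines (exact paths can differ between DSM versions):
dmesg | tail -n 50                          # recent kernel messages; md/ext4 errors show up here
tail -n 100 /var/log/messages               # general system log used by DSM services
grep -i error /var/log/messages | tail      # quick scan for error lines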
NeoID Posted July 7, 2014 Author #17 Even after upgrading to ESXi and having no issues whatsoever, I got the same problem again after inserting disks no. 11 and 12. My next question is: might this issue be related only to SHR/SHR-2, or also to traditional RAID 5 and 6? Is anyone having this problem with RAID 5/6? There has to be a "magical" ~14 TB limit...
NeoID Posted July 13, 2014 Author #19 Yeah, that's true. The strange thing is that Synology patched that issue in DSM 3 or so, so in theory it should not be present anymore...
manfriday Posted July 14, 2014 #20 These last 2 weeks I replaced 2x 2 TB with 2x 3 TB and had a similar issue regarding expansion of capacity. The volumes were all hot-swapped. As soon as I rebooted, the option to expand appeared. I start the expansion and it stops straight away. Running 4.3-3810 with SHR-1. Followed the instructions from further up the page, but no joy. Tried "umount /volume1" but get "device busy or in use"; I cannot see what has a hold of the volume, all shares are shut down and no PCs are accessing the shares. Next step is to power off any media devices or smart TVs attached to the shares via the media server.
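A sketch of how to see what is still holding the mount when umount reports "device is busy" (assuming the usual Linux tools are present; BusyBox builds of fuser may not support -v, and lsof is not always installed on DSM):
fuser -mv /volume1              # list processes with open files on the mounted volume
lsof /volume1                   # alternative, if lsof is available
mount -o remount,ro /volume1    # fallback from the instructions above: remount read-only instead of unmounting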
NeoID Posted July 14, 2014 Author #21 Actually, this is not a 16 TB limit problem... I've made an 18 TB disk group without any problems, but when I try to create a 14.5 TB volume it doesn't work: http://i.imgur.com/ieat9SB.png
manfriday Posted July 14, 2014 #22 Another option: http://www.naschenweng.info/2012/03/25/ ... ology-nas/ And why not another: http://forum.synology.com/wiki/index.ph ... sic_faults
Werter Posted July 16, 2014 #23 [quoting NeoID above] If you put another drive in when you try to create the 14.5 TB volume, i.e. the volume plus the extra disk, would that fix the problem?
manfriday Posted July 16, 2014 #24 The 16 TB limit applies to volumes (not disk groups); the disk group limit = size of wallet. The issue on my Trantor 4.3-3810 system is a bad file system stopping the expansion. While the various how-tos point in the right direction, I cannot correct the FS problem and therefore cannot expand the volume. A little more light reading after a hard crash of the server:
- Plugged in a monitor
- Reboot
- Log in to DSM
- Storage Manager > Expand volume, and watch the output on the monitor
- The expansion fails and the last line on the console is "no reserved gdt blocks can't resize"
https://bugs.launchpad.net/ubuntu/+sour ... bug/656115
This is outside my pay grade, so it's a clean install from here.
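For context, that message comes from resize2fs: an ext filesystem created without reserved GDT blocks cannot be grown while it is mounted, but an offline resize does not need them. A sketch, assuming the filesystem sits directly on /dev/md2 as in the earlier posts (on an LVM/SHR setup the device would be /dev/vg1000/lv instead):
dumpe2fs -h /dev/md2 | grep -i "reserved gdt"    # shows whether any reserved GDT blocks exist
umount /volume1                                  # offline resize requires the volume to be unmounted
e2fsck -f /dev/md2                               # resize2fs wants a freshly checked, clean filesystem
resize2fs /dev/md2                               # grows to fill the device; works without reserved GDT blocks
mount /volume1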
Werter Posted July 17, 2014 #25 [quoting manfriday above] So the volume1 that I have has a 16 TB limit? I found something about making the volume bigger than 16 TB from the start in order to be able to expand it further, and if you make a volume smaller than that, you can only expand it up to 16 TB.
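For what it's worth, the classic 16 TB ceiling comes from ext4's default 32-bit block addressing: with 4 KB blocks, 2^32 blocks x 4 KB/block = 16 TiB, and a filesystem created without the 64bit feature cannot simply be resized past that point. Whether a given DSM/XPEnology build enables that feature at creation time is an assumption to verify, but it would explain why a volume created below 16 TB can only ever grow up to 16 TB.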