gizmomelb

Members
  • Content Count: 21
  • Joined
  • Last visited

Community Reputation
0 Neutral

About gizmomelb
  • Rank: Junior Member

  1. thank you for the detailed info, yes that is the process I followed when updating my drives. I replaced the failed 4TB with a 10TB as I had 2x 4TB drives fail on me (WD Purples, each with 22K hours of usage) and I only had 1x 4TB replacement drive (a Seagate.. yeah, desperate times). I didn't want to run the NAS with a degraded volume, so I moved some files around, shucked a 10TB external I had which hadn't had much use and installed that in the NAS. The only HDD I could purchase was a 6TB WD, which I then found out was SMR and not CMR, so I'll be returning that as I've read it'll
  2. I know a backup would be best, but I don't have the storage space to do that (the data is non-essential, but a pain to have to re-rip all my DVDs, CDs, Blu-rays etc., and at least another few months of work). If it's possible to expand the rebuilt 4TB partition to the 10TB capacity of the actual replacement drive, it'd be a nice win. But also many, many thanks for sharing your time and knowledge, helping me out and letting me learn a little more about how mdadm and LVM handle the LVs and VGs.
  3. good news!! yes, I rebooted and the volume mounts and my data is there (whether it is intact is another thing, but it should be!). I don't know if this is too early, but thank you so much for your help recovering the volume.
  4. GIZNAS01> vgcfgrestore vg1000
     Restored volume group vg1000
     GIZNAS01> lvm vgscan
     Reading all physical volumes. This may take a while...
     Found volume group "vg1000" using metadata type lvm2
     GIZNAS01> lvm lvscan
     inactive '/dev/vg1000/lv' [25.45 TB] inherit
     GIZNAS01>
     GIZNAS01> pvs
     PV        VG     Fmt  Attr PSize  PFree
     /dev/md2  vg1000 lvm2 a-   13.62T 0
     /dev/md3  vg1000 lvm2 a-   4.55T  0
     /dev/md4  vg1000 lvm2 a-   7.28T  0
     GIZNAS01> vgs
     VG     #PV #LV #SN Attr VSize VFree
     vg1000   3   1   0 w
  5. where is the automatic backup and how do I restore it please? Thank you. (see the LVM backup sketch after this list)
  6. GIZNAS01> lvm vgscan
     Reading all physical volumes. This may take a while...
     GIZNAS01>
     GIZNAS01> lvm lvscan
     GIZNAS01>
     nothing
  7. Ahh, I just found a screenshot I made last night - it doesn't appear to be destructive though, just testing the filesystem for errors. This is what I had typed:
     syno_poweroff_task -d
     vgchange -ay
     fsck.ext4 -pvf -C 0 /dev/vg1000/lv
     then I executed these commands:
     vgchange -an vg1000
     sync
     init 6
     That looks to be it.. I deactivated the VG, which explains why there isn't a VG when I execute vgs or vgdisplay etc. To re-activate the VG I need to execute 'vgchange -ay vg1000' - but I'll wait until it is confirmed (see the reactivation sketch after this list). Thank you
  8. Hi Flyride, thank you for continuing to assist me - it's been a busy day, but I will make the time to read up more on how mdadm works (it already makes a little sense to me). I think the damage I caused was in step 10 or 11 as detailed here (the full expansion sequence is sketched after this list):
     9. Inform lvm that the physical device got bigger.
     $ sudo pvresize /dev/md2
     Physical volume "/dev/md2" changed
     1 physical volume(s) resized / 0 physical volume(s) not resized
     If you re-run vgdisplay now, you should see some free space.
     10. Extend the lv
     $ sudo lvextend -l +100%FREE /dev/vg1/volu
  9. Hi, yes that was my post about expanding the volume before it crashed - I tried expanding the volume following your instructions here: My apologies for not saying earlier that this was DSM 5.2 - I don't know what I need to post, which is why I was asking for assistance. I was looking at similar issues across many forums, but most solutions involved earlier versions of DSM where resize was available.
  10. Hi, yes I am not a *nix expert.. I ran the vgchange -ay command and it literally does nothing - displays nothing and goes to a new command line. I tried vgscan but I do not have that command. OK, maybe I'm going in the wrong direction (please tell me if I am), but looking at this thread - https://community.synology.com/enu/forum/17/post/84956 - the array seems to still be intact but has a size of '0'.
      fdisk -l
      fdisk: device has more than 2^32 sectors, can't use all of them
      Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
      255 heads, 63 sectors/trac
  11. Hi Flyride, vgchange -ay displays nothing, it just goes to the next command line. I followed all the steps in the thread you mentioned and posted the results of the questions asked in that thread as well.
      mount /dev/vg1000/lv /volume1
      mount: open failed, msg:No such file or directory
      mount: mounting /dev/vg1000/lv on /volume1 failed: No such device
      mount -o clear_cache /dev/vg1000/lv /volume1
      mount: open failed, msg:No such file or directory
      mount: mounting /dev/vg1000/lv on /volume1 failed: No such device
      mount -o recovery /dev/vg100
  12. hi Flyride, I know I definitely have an ext4 filesystem, and if I SSH in I have no volumes listed. I do not have the 'dump' command.
      vi /etc/fstab:
      none /proc proc defaults 0 0
      /dev/root / ext4 defaults 1 1
      /dev/vg1000/lv /volume1 ext4 0 0
      cat /proc/mdstat
      Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
      md3 : active raid5 sdf6[9] sda6[6] sdb6[7] sdc6[10] sdd6[2] sde6[8]
      4883714560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      md4 : active raid1 sda7[0] sdb7[1]
      7811854208 blocks super 1.2 [
  13. sigh... my NAS is now reporting a crashed volume. If I run 'cat /proc/mdstat' it reports the following (see the health-check sketch after this list):
      cat /proc/mdstat
      Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
      md3 : active raid5 sdf6[9] sda6[6] sdb6[7] sdc6[10] sdd6[2] sde6[8]
      4883714560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      md4 : active raid1 sda7[0] sdb7[1]
      7811854208 blocks super 1.2 [2/2] [UU]
      md2 : active raid5 sda5[11] sde5[6] sdf5[7] sdd5[8] sdc5[9] sdb5[10]
      14627177280 blocks super 1.2 level 5, 64k chunk,
  14. Hi all, an old issue but so far my google-fu has not been able to resolve it (some of the mentioned web pages and links no longer existing does not help). I recently replaced a dead 4TB drive with a 10TB drive, and after the volume was repaired it would not let me expand the volume. The array is a multi-disk SHR affair. If I SSH to the NAS and execute 'print devices', the results are:
      print devices
      /dev/hda (12.0TB)
      /dev/sda (12.0TB)
      /dev/sdb (12.0TB)
      /dev/sdc (10.0TB) <- the new HDD
      /dev/sdd (4001GB)
      /dev/sde (4001GB)
      /dev/sdf (4001GB)
      /dev/md0 (2550M
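
A note on the question in post 5 about the automatic backup: stock LVM keeps text copies of the volume group metadata on disk and restores them with vgcfgrestore, which is the command shown succeeding in post 4. A minimal sketch, assuming the standard LVM locations /etc/lvm/backup and /etc/lvm/archive exist on this DSM build (Synology firmware may keep them elsewhere); the archive file name below is only an illustrative example:

    # list the metadata backups and archives LVM knows about for vg1000
    vgcfgrestore --list vg1000
    # restore the most recent automatic backup of the VG metadata
    vgcfgrestore vg1000
    # or restore a specific archived copy by file name (example name only)
    vgcfgrestore -f /etc/lvm/archive/vg1000_00001.vg vg1000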
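
On the reactivation step mentioned in post 7: after the metadata is restored, the usual sequence is to activate the volume group, check the filesystem, and only then mount it. A sketch using the device names from these posts (/dev/vg1000/lv mounted at /volume1); it is an outline of the commands already discussed in the thread, not a guaranteed recovery procedure:

    # activate every logical volume in vg1000
    vgchange -ay vg1000
    # confirm the LV now shows as ACTIVE rather than inactive
    lvm lvscan
    # check the ext4 filesystem before mounting (preen mode, force, progress bar)
    fsck.ext4 -pvf -C 0 /dev/vg1000/lv
    # mount the volume back at its normal location
    mount /dev/vg1000/lv /volume1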
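
On the expansion steps quoted in post 8: once a member disk has been replaced with a larger one and the md array rebuilt, the remaining growth happens at the LVM and filesystem layers. A sketch of that sequence, assuming /dev/md2 is the array that gained space and /dev/vg1000/lv is the ext4 volume; the mdadm --grow step is the usual precursor and is not part of the quoted excerpt, and on DSM the supported route is Storage Manager rather than the command line:

    # grow the md array to use all space on the larger replacement disk
    mdadm --grow /dev/md2 --size=max
    # tell LVM that the physical volume underneath has grown
    pvresize /dev/md2
    # hand all of the newly freed extents to the logical volume
    lvextend -l +100%FREE /dev/vg1000/lv
    # grow the ext4 filesystem to fill the enlarged LV
    resize2fs /dev/vg1000/lv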
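
On the status shown in posts 12 and 13: /proc/mdstat reporting [UUUUUU] and [UU] means every member of each md array is present, so the "crashed volume" sits at the LVM or filesystem layer rather than the RAID layer. A few read-only checks that go with that diagnosis, using the device names from the posts:

    # overall md status; a U for every member means the arrays are healthy
    cat /proc/mdstat
    # per-array detail, including state and any failed or missing devices
    mdadm --detail /dev/md2
    # LVM view: physical volumes, volume groups and logical volumes
    pvs
    vgs
    lvs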