Storage Pool Crashed



root@DS415:~# pvcreate --uuid bocvSr-hmj0-LUH0-BM8g-BXBS-TicT-LbjYQ9 --restorefile /etc/lvm/backup/vg1000 /dev/md3
  Couldn't find device with uuid bocvSr-hmj0-LUH0-BM8g-BXBS-TicT-LbjYQ9.
  Physical volume "/dev/md3" successfully created


root@DS415:~# vgcfgrestore vg1000
  Restored volume group vg1000


root@DS415:~# pvs
  PV         VG     Fmt  Attr PSize PFree
  /dev/md2   vg1000 lvm2 a--  3.63t    0
  /dev/md3   vg1000 lvm2 a--  1.82t    0
  /dev/md4   vg1000 lvm2 a--  1.82t    0
  /dev/md5   vg1001 lvm2 a--  1.81t    0
 

I assume that's good?


root@DS415:~# vgchange -ay
  1 logical volume(s) in volume group "vg1001" now active
  1 logical volume(s) in volume group "vg1000" now active


root@DS415:~# mount
/dev/md0 on / type ext4 (rw,relatime,journal_checksum,barrier,data=ordered)
none on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=1013044k,nr_inodes=253261,mode=755)
none on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
none on /proc type proc (rw,nosuid,nodev,noexec,relatime)
none on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
/tmp on /tmp type tmpfs (rw,relatime)
/run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
/dev/shm on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
none on /sys/fs/cgroup type tmpfs (rw,relatime,size=4k,mode=755)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu,release_agent=/run/cgmanager/agents/cgm-release-agent.cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory,release_agent=/run/cgmanager/agents/cgm-release-agent.memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices,release_agent=/run/cgmanager/agents/cgm-release-agent.devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer,release_agent=/run/cgmanager/agents/cgm-release-agent.freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio,release_agent=/run/cgmanager/agents/cgm-release-agent.blkio)
none on /proc/bus/usb type devtmpfs (rw,nosuid,noexec,relatime,size=1013044k,nr_inodes=253261,mode=755)
none on /sys/kernel/debug type debugfs (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,relatime)
/dev/mapper/vg1001-lv on /volume2 type btrfs (rw,relatime,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50)
none on /config type configfs (rw,relatime)
none on /proc/fs/nfsd type nfsd (rw,relatime)
 


You're a little ahead of me.  It doesn't look like it mounted /volume1 (but if it did, please confirm).  Also let me know if you see anything different in the Storage Manager UI.

 

I just want to make sure nothing else has changed.

 

# pvs

# vgs

# lvs

# pvdisplay

# vgdisplay

# lvdisplay


Sorry, got carried away with the excitement.

No, /volume1 didn't mount.

 

root@DS415:~# pvs
  PV         VG     Fmt  Attr PSize PFree
  /dev/md2   vg1000 lvm2 a--  3.63t    0
  /dev/md3   vg1000 lvm2 a--  1.82t    0
  /dev/md4   vg1000 lvm2 a--  1.82t    0
  /dev/md5   vg1001 lvm2 a--  1.81t    0


root@DS415:~# vgs
  VG     #PV #LV #SN Attr   VSize VFree
  vg1000   3   1   0 wz--n- 7.27t    0
  vg1001   1   1   0 wz--n- 1.81t    0


root@DS415:~# lvs
  LV   VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg1000 -wi-a----- 7.27t
  lv   vg1001 -wi-ao---- 1.81t


root@DS415:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg1001
  PV Size               1.81 TiB / not usable 3.19 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              475752
  Free PE               0
  Allocated PE          475752
  PV UUID               p7cJsO-la6l-vXp7-ga51-4ugu-he7H-SeoHgZ

  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1000
  PV Size               3.63 TiB / not usable 1.44 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              951505
  Free PE               0
  Allocated PE          951505
  PV UUID               2NkG9U-9Rh6-5xFW-M1iM-GA0f-nbOd-aJHEUS

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg1000
  PV Size               1.82 TiB / not usable 128.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476925
  Free PE               0
  Allocated PE          476925
  PV UUID               nolEeP-0392-QvMt-ZOkW-1JDr-COn4-QybVK7

  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               vg1000
  PV Size               1.82 TiB / not usable 1.31 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476927
  Free PE               0
  Allocated PE          476927
  PV UUID               bocvSr-hmj0-LUH0-BM8g-BXBS-TicT-LbjYQ9

 

root@DS415:~# vgdisplay
  --- Volume group ---
  VG Name               vg1001
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.81 TiB
  PE Size               4.00 MiB
  Total PE              475752
  Alloc PE / Size       475752 / 1.81 TiB
  Free  PE / Size       0 / 0
  VG UUID               pHhunz-cg0H-Fkcg-na1y-AAcT-D9fU-gdDTet

  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  13
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               7.27 TiB
  PE Size               4.00 MiB
  Total PE              1905357
  Alloc PE / Size       1905357 / 7.27 TiB
  Free  PE / Size       0 / 0
  VG UUID               kPgiVt-X4fO-Eoxr-f0GL-rsKm-s4fE-Zl6u4Z

 

root@DS415:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1001/lv
  LV Name                lv
  VG Name                vg1001
  LV UUID                Pl33so-ldeW-HS2w-QGeE-3Zwh-QLuG-pqC1TE
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                1.81 TiB
  Current LE             475752
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg1000/lv
  LV Name                lv
  VG Name                vg1000
  LV UUID                Zqg7q0-2u5X-oQcl-ejyL-zh1Y-iUA5-531Ls9
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                7.27 TiB
  Current LE             1905357
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     512
  Block device           253:1


 


Ok, stop looking at the Storage Manager UI, it will only stress you out.

 

I'm not sure you can get to your files via the peer network protocols (CIFS, AppleTalk, etc.).  Don't use the UI to configure anything.  Do not reboot the NAS.

 

You should be able to access your files in one of the following ways:

  • rcp/rsh
  • NFS mount from command line to another server
  • File Station
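The NFS route above can be sketched from another Linux server. The NAS address, export path, and mountpoint below are placeholders, not values from this thread; the snippet only prints the command so nothing touches the damaged pool by accident:

```shell
# Hypothetical values -- substitute your NAS address and shared folder.
NAS_IP=192.168.1.50
EXPORT=/volume1/share
MNT=/mnt/nas
# Mount read-only (ro) so the offload cannot accidentally write to the
# damaged volume.
CMD="mount -t nfs -o ro $NAS_IP:$EXPORT $MNT"
# Printed rather than executed; run it as root on the destination server
# after 'mkdir -p /mnt/nas', and after enabling NFS for the share on the NAS.
echo "$CMD"
```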

I strongly, strongly advise you to offload all your files with no further changes to the system.  When that is done, delete your Volume 1 / Storage Pool 2 and recreate it.  I'll leave it to you whether you want to configure SHR again; it's not something I would choose, but it works well for many thousands of people.


BTW, if you have btrfs checksum on, btrfs will flag you in the UI when it tries to access files that may be corrupted. There is no way to fix the files since your filesystems have no redundancy now. But at least you will know exactly which files are compromised, if any.
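If you'd rather enumerate the damage up front than wait for the UI to flag files, a scrub forces btrfs to re-read and verify every checksum. Here /volume1 is assumed to be the mounted btrfs volume; the commands are only echoed rather than run, since a real scrub generates hours of read load:

```shell
VOL=/volume1   # assumed mount point of the btrfs volume -- adjust to yours
# 'btrfs scrub start -B' re-reads every block, verifies checksums, waits for
# completion, and prints error counts.  With no redundancy left, errors are
# reported but cannot be repaired.
echo "btrfs scrub start -B $VOL"
# The kernel log names the affected files:
echo "dmesg | grep -i checksum"
```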


I don't know how to thank you, to say you are a Guru Master is an understatement.

If I could I would give you a PhD ... Dr. Flyride.

 

Now the million dollar question ... WHY?

What should I do next?

Can I power off or use the web interface?  Should I scrap the HDDs and replace them with 3 x 8TB (non-SMR) in RAID 5?


I think we are crossing paths again.  File Station is the only web interface tool that I would use at this point.

 

If it were my data, I would use RAID5 with non-SMR drives and then a remote backup copy (RAID is not backup), but I don't intend to soapbox you until you actually get your data recovered :-)

13 hours ago, pdavey said:

Can I use a USB drive and copy files to that?

If so how can I copy ALL files in a directory to the USB?

 

If you attach a large USB-connected disk, it will be accessible from the command line as /USBVolume1 (or something similar to that).  You may have to format it from the Control Panel UI.  rsync is probably the best way to get everything over intact if you have direct-connected storage.

Edited by flyride

One last point, if you offload the files and find that critical items are corrupted, we still have the secondary option to use the /dev/sda7 copy of /dev/md3, repeat the process, and manually extract those specific corrupted files (in hopes that they are intact on the other copy).

 

Obviously that's a little more heavy lifting, but do keep it in mind to evaluate before you do anything destructive with your Storage Pools, drives, etc.

