XPEnology Community

Critical storage pools and crashed volumes



Hello all,

 

Just updated the loader to the latest TCRP v0.9.4.9c in order to update DSM to 7.2.

The box got updated, and everything seemed fine after the first reboot until I noticed all package icons in Package Center were missing, replaced by a Synology placeholder icon. I proceeded to reboot, and that's where it all went sideways. I wasn't able to log back into my box after the reboot; I was stuck on "System is getting ready. Please log in later." forever. I eventually got back in by running the following via SSH:

 

> synobootseq --set-boot-done    # marks the boot sequence as finished, so DSM stops holding the login page
> synobootseq --is-ready         # reports whether the system considers itself ready

 

only to discover that both Storage Pools 1 and 2 were critical and both Volumes 1 and 2 had crashed. I had 2 drives in each pool/volume. Strangely, the box can't be reached through Synology Assistant either.

 

I tried a repair of the main Storage Pool 1, but that didn't seem to solve anything and the volume was still crashed. I can't see any of the data in Volume 1, and all apps in Package Center show an error.

 

I deleted Storage Pool 2, as it didn't contain any important data anyway, so I wasn't bothered by it.

 

I am running the repair again on Storage pool 1.

 

Any help would be appreciated on how to get Volume 1 running.

 

Thank you.

 

Screen Shot 2023-08-12 at 16.24.00.jpg

 

Screen Shot 2023-08-12 at 16.31.36.jpg

 

Screen Shot 2023-08-12 at 16.32.49.jpg

 


Here are the results of what I have run so far:

 

root@Home:~# cat /proc/mdstat
Personalities : [raid1] 
md2 : active raid1 sda5[0] sdb5[2]
      1948683456 blocks super 1.2 [2/2] [UU]
      
md1 : active raid1 sda2[0] sdb2[1]
      2097088 blocks [12/2] [UU__________]
      
md0 : active raid1 sda1[0] sdb1[1]
      2490176 blocks [12/2] [UU__________]
      
unused devices: <none>
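
For what it's worth, the [12/2] on md0 and md1 looks normal for DSM: the system partitions are mirrored across the maximum number of drive slots (12 here), so only 2 of 12 members being present is expected with 2 disks. md2, the data array, reports [2/2] [UU], so the mdraid layer itself appears healthy. A closer look at the array state, if needed (a diagnostic sketch using standard mdadm):

# print detailed state, member roles, and any sync/rebuild progress for the data array
sudo mdadm --detail /dev/md2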

 

root@Home:~# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/md0         2385528 1490492    776252  66% /
devtmpfs         8097044       0   8097044   0% /dev
tmpfs            8121504       4   8121500   1% /dev/shm
tmpfs            8121504    7816   8113688   1% /run
tmpfs            8121504       0   8121504   0% /sys/fs/cgroup
tmpfs            8121504     316   8121188   1% /tmp
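
Note that /volume1 does not appear in the df output at all, i.e. the volume simply isn't mounted. A quick way to confirm (a sketch):

# check whether anything is mounted at /volume1
grep volume1 /proc/mounts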

 

root@Home:~# lvdisplay --verbose
    Using logical volume(s) on command line.
  --- Logical volume ---
  LV Path                /dev/vg1000/lv
  LV Name                lv
  VG Name                vg1000
  LV UUID                iy0Y2t-bn9a-n7iV-fMfg-hijA-LOXz-psaH8c
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                1.81 TiB
  Current LE             475752
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:0

 

root@Home:~# vgdisplay --verbose
    Using volume group(s) on command line.
  --- Volume group ---
  VG Name               vg1000
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.81 TiB
  PE Size               4.00 MiB
  Total PE              475752
  Alloc PE / Size       475752 / 1.81 TiB
  Free  PE / Size       0 / 0   
  VG UUID               VSVCpH-ae5U-h4MG-XaQj-fn8A-BJ88-ckQrxm
   
  --- Logical volume ---
  LV Path                /dev/vg1000/lv
  LV Name                lv
  VG Name                vg1000
  LV UUID                iy0Y2t-bn9a-n7iV-fMfg-hijA-LOXz-psaH8c
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                1.81 TiB
  Current LE             475752
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:0
   
  --- Physical volumes ---
  PV Name               /dev/md2     
  PV UUID               Nonb8L-VX5t-Cg7H-lgKL-Urvn-zJH2-yArTHE
  PV Status             allocatable
  Total PE / Free PE    475752 / 0
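
The LVM layer looks intact: a single PV on /dev/md2, fully allocated to vg1000/lv. To double-check which device backs the LV (a sketch using standard lvm2 tooling):

# list logical volumes with their backing devices; vg1000/lv should show /dev/md2
sudo lvs -a -o +devices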

 

root@Home:~# sudo btrfs rescue super /dev/vg1000/lv
All supers are valid, no need to recover

 

root@Home:~# sudo btrfs insp dump-s -f /dev/vg1000/lv
superblock: bytenr=65536, device=/dev/vg1000/lv
---------------------------------------------------------
csum            0xa61301c7 [match]
bytenr            65536
flags            0x1
            ( WRITTEN )
magic            _BHRfS_M [match]
fsid            c95df4c1-94c8-4154-827a-fb89c0af1f27
metadata_uuid        c95df4c1-94c8-4154-827a-fb89c0af1f27
label            2017.09.17-16:34:16 v15152
generation        5717282
root            916065419264
sys_array_size        129
chunk_root_generation    5705541
root_level        1
chunk_root        1211914829824
chunk_root_level    1
log_root        0
log_root_transid    0
log_root_level        0
log tree reserve bg    0
rbd_mapping_table_first_offset    0
total_bytes        1995448516608
bytes_used        884257968128
sectorsize        4096
nodesize        16384
leafsize        16384
stripesize        4096
root_dir        6
num_devices        1
compat_flags        0x8000000000000000
compat_ro_flags        0x0
incompat_flags        0x16b
            ( MIXED_BACKREF |
              DEFAULT_SUBVOL |
              COMPRESS_LZO |
              BIG_METADATA |
              EXTENDED_IREF |
              SKINNY_METADATA )
syno_capability_flags        0x0
syno_capability_generation    5717282
csum_type        0
csum_size        4
cache_generation    9
uuid_tree_generation    5717282
dev_item.uuid        2aeb6bc5-aebc-469c-871f-a6f660eb61a2
dev_item.fsid        c95df4c1-94c8-4154-827a-fb89c0af1f27 [match]
dev_item.type        0
dev_item.total_bytes    1995448516608
dev_item.bytes_used    1080251383808
dev_item.io_align    4096
dev_item.io_width    4096
dev_item.sector_size    4096
dev_item.devid        1
dev_item.dev_group    0
dev_item.seek_speed    0
dev_item.bandwidth    0
dev_item.generation    0
sys_chunk_array[2048]:
    item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1211914780672)
        chunk length 33554432 owner 2 stripe_len 65536
        type SYSTEM|DUP num_stripes 2
            stripe 0 devid 1 offset 34397487104
            dev uuid: 2aeb6bc5-aebc-469c-871f-a6f660eb61a2
            stripe 1 devid 1 offset 34431041536
            dev uuid: 2aeb6bc5-aebc-469c-871f-a6f660eb61a2
backup_roots[4]:
    backup 0:
        backup_tree_root:    916118945792    gen: 5717279    level: 1
        backup_chunk_root:    1211914829824    gen: 5705541    level: 1
        backup_extent_root:    916104200192    gen: 5717279    level: 2
        backup_fs_root:        29573120    gen: 7    level: 0
        backup_dev_root:    916066844672    gen: 5716928    level: 0
        backup_csum_root:    916068270080    gen: 5717280    level: 2
        backup_total_bytes:    1995448516608
        backup_bytes_used:    884257628160
        backup_num_devices:    1
    backup 1:
        backup_tree_root:    916121993216    gen: 5717280    level: 1
        backup_chunk_root:    1211914829824    gen: 5705541    level: 1
        backup_extent_root:    916086849536    gen: 5717280    level: 2
        backup_fs_root:        29573120    gen: 7    level: 0
        backup_dev_root:    916066844672    gen: 5716928    level: 0
        backup_csum_root:    916066435072    gen: 5717281    level: 2
        backup_total_bytes:    1995448516608
        backup_bytes_used:    884258029568
        backup_num_devices:    1
    backup 2:
        backup_tree_root:    916136837120    gen: 5717281    level: 1
        backup_chunk_root:    1211914829824    gen: 5705541    level: 1
        backup_extent_root:    916129677312    gen: 5717281    level: 2
        backup_fs_root:        29573120    gen: 7    level: 0
        backup_dev_root:    916066844672    gen: 5716928    level: 0
        backup_csum_root:    916118732800    gen: 5717281    level: 2
        backup_total_bytes:    1995448516608
        backup_bytes_used:    884257968128
        backup_num_devices:    1
    backup 3:
        backup_tree_root:    916065419264    gen: 5717282    level: 1
        backup_chunk_root:    1211914829824    gen: 5705541    level: 1
        backup_extent_root:    916065583104    gen: 5717282    level: 2
        backup_fs_root:        29573120    gen: 7    level: 0
        backup_dev_root:    916066844672    gen: 5716928    level: 0
        backup_csum_root:    916118732800    gen: 5717281    level: 2
        backup_total_bytes:    1995448516608
        backup_bytes_used:    884257968128
        backup_num_devices:    1
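
Since all superblocks check out, a read-only integrity pass would be a reasonable non-destructive next step (a sketch; run it while the filesystem is unmounted, and note it can take a while on a 2 TB volume):

# read-only check of the btrfs metadata; makes no changes to the filesystem
sudo btrfs check --readonly /dev/vg1000/lv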

 

root@Home:~# sudo mount /dev/vg1000/lv /volume1
mount: /volume1: /dev/vg1000/lv already mounted or mount point busy.

root@Home:~# sudo mount -o clear_cache /dev/vg1000/lv /volume1
mount: /volume1: /dev/vg1000/lv already mounted or mount point busy.

root@Home:~# sudo mount -o recovery,ro /dev/vg1000/lv /volume1
mount: /volume1: /dev/vg1000/lv already mounted or mount point busy.
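
The "already mounted or mount point busy" error usually means something else has already claimed the device. A couple of commands that should reveal what is sitting on top of the LV (a diagnostic sketch):

# show any block devices stacked on the LV (holders appear as children)
sudo lsblk /dev/vg1000/lv
# list all device-mapper targets; on DSM 7 a cachedev_N entry is expected here
sudo dmsetup ls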

 

 

Attachments: dmesg.txt, sudo btrfs-find-root :dev:vg1000:lv.txt


16 hours ago, Polanskiman said:

Any help would be appreciated on how to get Volume 1 running.

From which version of DSM did you upgrade to 7.2? If from version 6, how exactly?

Perhaps the simplest and most reliable approach now would be to make a full backup (all data and settings) to an external disk using Hyper Backup, delete the storage pool, install DSM again, and restore the data and settings from the backup.


14 minutes ago, dj_nsk said:

From which version of DSM did you upgrade to 7.2? If from version 6, how exactly?

Perhaps the simplest and most reliable approach now would be to make a full backup (all data and settings) to an external disk using Hyper Backup, delete the storage pool, install DSM again, and restore the data and settings from the backup.

From 7.1.1.

I can't make any backup. Most packages show an error, and DSM is not letting me install anything, saying there is no volume 😬

As I said, I can't even see my data. Volume 1 shows as empty, as if it weren't mounted. I tried mounting it, but it didn't let me.


34 minutes ago, dj_nsk said:

After the words "The repair of the Storage pool just finished", I assumed the data had become available to you. If not :( then most likely things look bad.

Perhaps someone with knowledge of low-level RAID work can help.

Once the repair was over, the Storage Pool still showed 'Warning' and the volume still showed 'Crashed'. I am hoping Flyride can give me a hand, as he is the godfather in these types of situations!

Thank you for your answer though.


I just realized I was using the pre-DSM 7 mount point, which is why it was not working. This time I used:

sudo mount -v /dev/mapper/cachedev_0 /volume1

and it worked. Volume 1 and Storage Pool 1 instantly became healthy, and all my data reappeared. The strange thing, though, is that I can only access the data through SSH. In the GUI, File Station is empty, all shared folders in the Control Panel are greyed out, and DSM says Volume 1 is missing.

Looks like DSM system files are all screwed up.
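
For anyone who lands here later: since DSM 7, volumes appear to be mounted through a device-mapper layer (/dev/mapper/cachedev_N) rather than through the LV directly, which would also explain the "busy" errors earlier. A quick way to confirm the mapping (a sketch):

# DSM 7 exposes each volume as a cachedev_N device-mapper target
ls /dev/mapper/
# the table for cachedev_0 should map back onto the LV (referenced by major:minor, 249:0 above)
sudo dmsetup table cachedev_0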

 

Screen Shot 2023-08-13 at 12.09.03.jpg

 

Screen Shot 2023-08-13 at 12.09.18.jpg


7 hours ago, TJemuran said:

A few weeks ago I had a similar problem. What I did was reinstall using another loader, and the data was still safe. Back then I was on ARPL and switched to the Arc loader. As long as you don't touch the partitions, the data is still there.

I was able to back up all my data through SSH first, as I wanted to make sure it was off these drives prior to trying anything.
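
For reference, a minimal sketch of what pulling a volume off-box over SSH can look like (the host and destination path are placeholders, not from this thread):

# copy everything under /volume1 to another machine, preserving attributes and hard links
rsync -aH --progress /volume1/ user@backup-host:/backups/volume1/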

I used another loader as you suggested and was able to reinstall DSM. Now everything is working as it should!

 

I suspect that during the initial DSM upgrade two days ago, something went wrong when I rebooted and some system files got corrupted somehow.

 

Anyway, now I am back on track.

 

Thank you all.

