
Help save my 55TB SHR1! Or mount it via Ubuntu :(



 

2 minutes ago, flyride said:

So far, so good.

 

Whoops :D

 

root@homelab:~# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1/volume_1 /volume1 btrfs  0 0
root@homelab:~# vgchange -ay
  Refusing activation of partial LV vg1/syno_vg_reserved_area.  Use '--activationmode partial' to override.
  Refusing activation of partial LV vg1/volume_1.  Use '--activationmode partial' to override.
  0 logical volume(s) in volume group "vg1" now active
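
For anyone following along: a quick way to see which PV LVM considers missing (a sketch using standard LVM tools, not a command from this session) is

# pvs -o pv_name,vg_name,pv_attr

where an 'm' in the third character of the Attr column marks a PV that LVM has flagged as missing.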

 


1 minute ago, flyride said:

# pvdisplay -a

Is this normal?

 

root@homelab:~# pvdisplay -a
  Incompatible options selected
  Run 'pvdisplay --help' for more information.
root@homelab:~# pvdisplay --help
  pvdisplay: Display various attributes of physical volume(s)

pvdisplay
        [-c|--colon]
        [--commandprofile ProfileName]
        [-d|--debug]
        [--foreign]
        [-h|--help]
        [--ignorelockingfailure]
        [--ignoreskippedcluster]
        [-m|--maps]
        [--nosuffix]
        [--readonly]
        [-S|--select Selection]
        [-s|--short]
        [--units hHbBsSkKmMgGtTpPeE]
        [-v|--verbose]
        [--version]
        [PhysicalVolumePath [PhysicalVolumePath...]]

pvdisplay --columns|-C
        [--aligned]
        [-a|--all]
        [--binary]
        [--commandprofile ProfileName]
        [-d|--debug]
        [--foreign]
        [-h|--help]
        [--ignorelockingfailure]
        [--ignoreskippedcluster]
        [--noheadings]
        [--nosuffix]
        [-o|--options [+]Field[,Field]]
        [-O|--sort [+|-]key1[,[+|-]key2[,...]]]
        [-S|--select Selection]
        [--readonly]
        [--separator Separator]
        [--unbuffered]
        [--units hHbBsSkKmMgGtTpPeE]
        [-v|--verbose]
        [--version]
        [PhysicalVolumePath [PhysicalVolumePath...]]


 


Just now, flyride said:

# pvdisplay -v

Here ya go

 

pvdisplay -v
    Using physical volume(s) on command line.
    Wiping cache of LVM-capable devices
    There are 1 physical volumes missing.
    There are 1 physical volumes missing.
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               32.69 TiB / not usable 2.19 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8570611
  Free PE               0
  Allocated PE          8570611
  PV UUID               xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg1
  PV Size               10.92 TiB / not usable 128.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2861569
  Free PE               0
  Allocated PE          2861569
  PV UUID               f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh

  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg1
  PV Size               3.64 TiB / not usable 1.38 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              953588
  Free PE               229
  Allocated PE          953359
  PV UUID               U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF

 


@C-Fu

What's the controller arrangement in your system? (2x 8-port SAS controllers and 1x 4-port onboard?)

The SSD seemed to be first; on which controller was that? In what order does DSM use the controllers?

The driver of the SAS Fusion controllers presents disks in the order it finds them, with no gaps between the disks in the way the driver presents them. That has consequences for disk names like /dev/sdr: if you take out (or add) a disk in hardware (manually or through a hardware fault), the order of the disks within the 8 positions of the SAS controller changes for the software, and the "letter" changes. (That's one of the reasons Linux in general moved to UUIDs for handling disks.)
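
To illustrate that point about stable identifiers (a generic Linux sketch, not DSM-specific): the kernel also exposes persistent names that don't depend on detection order, e.g.

# ls -l /dev/disk/by-id      (names built from model and serial number)
# ls -l /dev/disk/by-uuid    (filesystem UUIDs, what modern distros put in fstab)

so a disk can still be identified even after its /dev/sdX letter shifts.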

 

You might have power supply troubles; more disks, more problems.

Also check the power cables and reduce cascaded splitters if possible, and look for any relation between splitters and failing disks; a single splitter can be faulty (I had that once with a 4x SATA power splitter).

The "temporary" clicking of a disk that is otherwise OK might also be an indicator of power problems. There might be unusual noises from the spindle motor too, but those will be hard to hear over all the other disks; they won't stand out as much as the clicking.

 

If you get things running in the state it's in now (access to data), you might consider making no hardware changes at all: no additional disks, nothing inside the case/housing, maybe not even rebooting.

If you want your data, buy disks and offload it over the network or USB to external media or a second system on the network (your new NAS?). You could then use your "old" system (after fixing it) as a backup destination later.

If you want to change hardware instead of offloading (e.g. no money for new disks), leave everything else behind: take only the working disks to a new system, and test that hardware before using it.


4 hours ago, IG-88 said:

What's the controller arrangement in your system? (2x 8-port SAS controllers and 1x 4-port onboard?)

The SSD seemed to be first; on which controller was that? In what order does DSM use the controllers?

2x 4-port SAS controllers, 1x 6-port onboard.

The SSD is onboard.

The WD Reds and some WD Purples are onboard (5 in total, all 3TB).

Some 2TB WD Purples, the 10TB WD White, and the 3x5 IronWolfs are on the SAS.

I also have a SATA port multiplier card, just in case.

4 hours ago, IG-88 said:

You might have power supply troubles; more disks, more problems.

That's true. But the PSU I use is a reliable Cooler Master 750W. I switched to a 1600W PSU while troubleshooting the hardware issues, but even 14 drives x 20W (at peak) = 280W. Even including the i7-4770, the SAS card, and 4 sticks of RAM, I'm well within 80% load.

 

I too figured it might (slight chance, unlikely) be a power issue with the bad 10TB drive, but right now my priority is to salvage what's left and figure out what to do about the 10TB drive later :)

4 hours ago, IG-88 said:

If you get things running in the state it's in now (access to data), you might consider making no hardware changes at all: no additional disks, nothing inside the case/housing, maybe not even rebooting.

I still can't access any data yet; I believe there are still steps I need @flyride to help me with, but I don't even want to touch the rig at the moment.

 

Thanks for the insight. I'm gonna need all the help I can get.


I'm at work and can't help for a while...

 

A few things have to be sorted out logically: is /dev/md2 in a valid state? If not, the LVM probably won't come up correctly. 2 of 3 PVs are active, but I'm not quite sure yet how to prove whether or not the array is the cause. We don't want to make any changes to the array yet, since it could be in a bad order and we still have some options to try (like pushing the 10TB into slot 11 instead of 10).

Anyone who wants to investigate some way to understand the PV down state, and/or how to check a degraded array for quality, please be my guest!
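
For a starting point, the usual read-only ways to inspect an md array's health (a sketch assuming standard mdadm; none of these write to the array) are

# mdadm --detail /dev/md2        (array state, which slots are active or missing)
# mdadm --examine /dev/sdb5      (per-member superblock: event counter, device role, update time; repeat per member)
# cat /sys/block/md2/md/array_state

Comparing the event counters from --examine across members is the usual way to judge whether a member is stale.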

 

I agree with not touching anything or rebooting right now, since device remapping is not our friend at the moment. If we do get data off it, I advise copying everything off, burning it down, and rebuilding.

 

Back in a while.


OK, sorry for being away so long. I will have some time in the next 24 hours to work on this. I was kinda hoping someone might have come up with some ideas about investigating LVM in my absence. But I built up a test environment to see if it can help at all. In the meantime, please run and post:

 

# pvdisplay -m

# cat /etc/lvm/backup/vg1
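
If anyone wants to rehearse these LVM commands safely first, a throwaway loopback sandbox looks like this (a generic sketch; the file names and printed loop devices are illustrative):

# truncate -s 1G /tmp/pv0.img /tmp/pv1.img
# losetup -f --show /tmp/pv0.img     (prints the loop device it picked, e.g. /dev/loop0)
# losetup -f --show /tmp/pv1.img
# pvcreate /dev/loop0 /dev/loop1
# vgcreate vgtest /dev/loop0 /dev/loop1
# lvcreate -l 100%FREE -n lvtest vgtest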


4 hours ago, flyride said:

OK, sorry for being away so long. I will have some time in the next 24 hours to work on this. I was kinda hoping someone might have come up with some ideas about investigating LVM in my absence. But I built up a test environment to see if it can help at all. In the meantime, please run and post:

 

# pvdisplay -m

# cat /etc/lvm/backup/vg1

 

It's OK dude. Like you said, if the data is there, it will wait :D I'm just taking the time to read and understand the commands and what everything did.

 

# pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               32.69 TiB / not usable 2.19 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8570611
  Free PE               0
  Allocated PE          8570611
  PV UUID               xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf

  --- Physical Segments ---
  Physical extent 0 to 2:
    Logical volume      /dev/vg1/syno_vg_reserved_area
    Logical extents     0 to 2
  Physical extent 3 to 6427957:
    Logical volume      /dev/vg1/volume_1
    Logical extents     0 to 6427954
  Physical extent 6427958 to 8570610:
    Logical volume      /dev/vg1/volume_1
    Logical extents     9289524 to 11432176

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg1
  PV Size               10.92 TiB / not usable 128.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2861569
  Free PE               0
  Allocated PE          2861569
  PV UUID               f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh

  --- Physical Segments ---
  Physical extent 0 to 2861568:
    Logical volume      /dev/vg1/volume_1
    Logical extents     6427955 to 9289523

  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg1
  PV Size               3.64 TiB / not usable 1.38 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              953588
  Free PE               229
  Allocated PE          953359
  PV UUID               U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF

  --- Physical Segments ---
  Physical extent 0 to 953358:
    Logical volume      /dev/vg1/volume_1
    Logical extents     11432177 to 12385535
  Physical extent 953359 to 953587:
    FREE
# cat /etc/lvm/backup/vg1
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Fri Jan 17 17:13:56 2020

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing '/sbin/vgchange -ay /dev/vg1'"

creation_host = "homelab"       # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1579252436      # Fri Jan 17 17:13:56 2020

vg1 {
        id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
        seqno = 13
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
                        device = "/dev/md2"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = ["MISSING"]
                        dev_size = 70210449792  # 32.6943 Terabytes
                        pe_start = 1152
                        pe_count = 8570611      # 32.6943 Terabytes
                }

                pv1 {
                        id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
                        device = "/dev/md4"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 23441974144  # 10.916 Terabytes
                        pe_start = 1152
                        pe_count = 2861569      # 10.916 Terabytes
                }

                pv2 {
                        id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 7811795712   # 3.63765 Terabytes
                        pe_start = 1152
                        pe_count = 953588       # 3.63765 Terabytes
                }
        }

        logical_volumes {

                syno_vg_reserved_area {
                        id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 3        # 12 Megabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }

                volume_1 {
                        id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 4

                        segment1 {
                                start_extent = 0
                                extent_count = 6427955  # 24.5207 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 3
                                ]
                        }
                        segment2 {
                                start_extent = 6427955
                                extent_count = 2861569  # 10.916 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 9289524
                                extent_count = 2142653  # 8.17357 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 6427958
                                ]
                        }
                        segment4 {
                                start_extent = 11432177
                                extent_count = 953359   # 3.63678 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                }
        }
}

 


10 minutes ago, flyride said:

Thanks.  Look in the archive folder (/etc/lvm/archive) and post the file with the newest timestamp, and also one with a date stamp from BEFORE the crash.
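
A convenient way to line those archives up by time and see which operation each one preceded (a sketch; the description text is a field LVM writes into every archive file):

# ls -lt /etc/lvm/archive/vg1_*.vg | head
# grep -H description /etc/lvm/archive/vg1_*.vg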

/etc/lvm/archive# ls -l
total 80
-rw-r--r-- 1 root root 2157 Sep 22 21:05 vg1_00000-1668747838.vg
-rw-r--r-- 1 root root 2157 Sep 22 21:44 vg1_00001-583193481.vg
-rw-r--r-- 1 root root 1459 Sep 22 21:44 vg1_00002-1264921506.vg
-rw-r--r-- 1 root root 1153 Sep 22 21:55 vg1_00003-185086001.vg
-rw-r--r-- 1 root root 1147 Sep 22 21:55 vg1_00004-805154513.vg
-rw-r--r-- 1 root root 1478 Sep 22 21:55 vg1_00005-448325956.vg
-rw-r--r-- 1 root root 2202 Sep 26 17:43 vg1_00006-1565525435.vg
-rw-r--r-- 1 root root 2202 Sep 26 17:43 vg1_00007-368672770.vg
-rw-r--r-- 1 root root 2200 Sep 26 17:43 vg1_00008-1121218288.vg
-rw-r--r-- 1 root root 2242 Sep 26 20:06 vg1_00009-1448678039.vg
-rw-r--r-- 1 root root 2569 Dec  8 15:45 vg1_00010-478377468.vg
-rw-r--r-- 1 root root 2569 Dec  8 15:45 vg1_00011-945038746.vg
-rw-r--r-- 1 root root 2570 Dec  8 15:45 vg1_00012-109591933.vg
-rw-r--r-- 1 root root 2589 Jan 17 03:26 vg1_00013-1309520600.vg
-rw-r--r-- 1 root root 2589 Jan 17 03:26 vg1_00014-1824124453.vg
-rw-r--r-- 1 root root 2583 Jan 17 17:13 vg1_00015-451330715.vg
-rw-r--r-- 1 root root 2583 Jan 17 17:13 vg1_00016-1144631688.vg
-rw-r--r-- 1 root root 1534 Sep 22 21:05 vg2_00000-531856239.vg
-rw-r--r-- 1 root root 1534 Sep 29 13:30 vg2_00001-629071759.vg
-rw-r--r-- 1 root root 1207 Sep 29 13:31 vg2_00002-739831571.vg
# cat vg1_00016-1144631688.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Fri Jan 17 17:13:56 2020

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/vgchange -ay /dev/vg1'"

creation_host = "homelab"       # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1579252436      # Fri Jan 17 17:13:56 2020

vg1 {
        id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
        seqno = 13
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
                        device = "/dev/md2"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = ["MISSING"]
                        dev_size = 70210449792  # 32.6943 Terabytes
                        pe_start = 1152
                        pe_count = 8570611      # 32.6943 Terabytes
                }

                pv1 {
                        id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
                        device = "/dev/md4"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 23441974144  # 10.916 Terabytes
                        pe_start = 1152
                        pe_count = 2861569      # 10.916 Terabytes
                }

                pv2 {
                        id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 7811795712   # 3.63765 Terabytes
                        pe_start = 1152
                        pe_count = 953588       # 3.63765 Terabytes
                }
        }

        logical_volumes {

                syno_vg_reserved_area {
                        id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 3        # 12 Megabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }

                volume_1 {
                        id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 4

                        segment1 {
                                start_extent = 0
                                extent_count = 6427955  # 24.5207 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 3
                                ]
                        }
                        segment2 {
                                start_extent = 6427955
                                extent_count = 2861569  # 10.916 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 9289524
                                extent_count = 2142653  # 8.17357 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 6427958
                                ]
                        }
                        segment4 {
                                start_extent = 11432177
                                extent_count = 953359   # 3.63678 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                }
        }
}

That's the newest; two logs at Jan 17 17:13. As for when the crash happened, I think it should be Dec 8? I don't really remember the exact date, though. But I checked my chat logs with friends, and on Dec 12 I could still download stuff (it wasn't read-only yet).

 

3 logs at Dec 8:

vg1_00010-478377468.vg

# cat vg1_00010-478377468.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Dec  8 15:45:06 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/pvresize /dev/md5'"

creation_host = "homelab"       # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1575791106      # Sun Dec  8 15:45:06 2019

vg1 {
        id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
        seqno = 8
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
                        device = "/dev/md2"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 70210449792  # 32.6943 Terabytes
                        pe_start = 1152
                        pe_count = 8570611      # 32.6943 Terabytes
                }

                pv1 {
                        id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
                        device = "/dev/md4"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 23441974144  # 10.916 Terabytes
                        pe_start = 1152
                        pe_count = 2861569      # 10.916 Terabytes
                }

                pv2 {
                        id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 7811795712   # 3.63765 Terabytes
                        pe_start = 1152
                        pe_count = 953588       # 3.63765 Terabytes
                }
        }

        logical_volumes {

                syno_vg_reserved_area {
                        id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 3        # 12 Megabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }

                volume_1 {
                        id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 4

                        segment1 {
                                start_extent = 0
                                extent_count = 6427955  # 24.5207 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 3
                                ]
                        }
                        segment2 {
                                start_extent = 6427955
                                extent_count = 2861569  # 10.916 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 9289524
                                extent_count = 2142653  # 8.17357 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 6427958
                                ]
                        }
                        segment4 {
                                start_extent = 11432177
                                extent_count = 953359   # 3.63678 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                }
        }
}

 

 

vg1_00011-945038746.vg

# cat vg1_00011-945038746.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Dec  8 15:45:06 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/pvresize /dev/md4'"

creation_host = "homelab"       # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1575791106      # Sun Dec  8 15:45:06 2019

vg1 {
        id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
        seqno = 9
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
                        device = "/dev/md2"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 70210449792  # 32.6943 Terabytes
                        pe_start = 1152
                        pe_count = 8570611      # 32.6943 Terabytes
                }

                pv1 {
                        id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
                        device = "/dev/md4"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 23441974144  # 10.916 Terabytes
                        pe_start = 1152
                        pe_count = 2861569      # 10.916 Terabytes
                }

                pv2 {
                        id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 7811795712   # 3.63765 Terabytes
                        pe_start = 1152
                        pe_count = 953588       # 3.63765 Terabytes
                }
        }

        logical_volumes {

                syno_vg_reserved_area {
                        id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 3        # 12 Megabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }

                volume_1 {
                        id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 4

                        segment1 {
                                start_extent = 0
                                extent_count = 6427955  # 24.5207 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 3
                                ]
                        }
                        segment2 {
                                start_extent = 6427955
                                extent_count = 2861569  # 10.916 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 9289524
                                extent_count = 2142653  # 8.17357 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 6427958
                                ]
                        }
                        segment4 {
                                start_extent = 11432177
                                extent_count = 953359   # 3.63678 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                }
        }
}

 

 

vg1_00012-109591933.vg

# cat vg1_00012-109591933.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Dec  8 15:45:07 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/pvresize /dev/md2'"

creation_host = "homelab"       # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1575791107      # Sun Dec  8 15:45:07 2019

vg1 {
        id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
        seqno = 10
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
                        device = "/dev/md2"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 70210449792  # 32.6943 Terabytes
                        pe_start = 1152
                        pe_count = 8570611      # 32.6943 Terabytes
                }

                pv1 {
                        id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
                        device = "/dev/md4"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 23441974144  # 10.916 Terabytes
                        pe_start = 1152
                        pe_count = 2861569      # 10.916 Terabytes
                }

                pv2 {
                        id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 7811795712   # 3.63765 Terabytes
                        pe_start = 1152
                        pe_count = 953588       # 3.63765 Terabytes
                }
        }

        logical_volumes {

                syno_vg_reserved_area {
                        id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 3        # 12 Megabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }

                volume_1 {
                        id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 4

                        segment1 {
                                start_extent = 0
                                extent_count = 6427955  # 24.5207 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 3
                                ]
                        }
                        segment2 {
                                start_extent = 6427955
                                extent_count = 2861569  # 10.916 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                        segment3 {
                                start_extent = 9289524
                                extent_count = 2142653  # 8.17357 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 6427958
                                ]
                        }
                        segment4 {
                                start_extent = 11432177
                                extent_count = 953359   # 3.63678 Terabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv2", 0
                                ]
                        }
                }
        }
}

 

 

 

Sept 29 - vg2_00002-739831571.vg

# cat vg2_00002-739831571.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Sep 29 13:31:56 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/vgremove -f /dev/vg2'"

creation_host = "homelab"       # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1569735116      # Sun Sep 29 13:31:56 2019

vg2 {
        id = "RJWo7j-o1qi-oYT7-jZP2-Ig5k-8ZOg-znuyZj"
        seqno = 4
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "mPVTql-CJyu-b3mA-GZmd-B6OD-ItR9-6pEOfB"
                        device = "/dev/md3"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 459199872    # 218.964 Gigabytes
                        pe_start = 1152
                        pe_count = 56054        # 218.961 Gigabytes
                }
        }

        logical_volumes {

                syno_vg_reserved_area {
                        id = "cjfGiU-1Hnb-MNte-P2Fk-u7ZA-g8QH-7jfTWY"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        segment_count = 1

                        segment1 {
                                start_extent = 0
                                extent_count = 3        # 12 Megabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                }
        }
}

 


FWIW, I am no LVM expert; I don't use SHR (and therefore LVM) on any of my systems. The problem we need to solve isn't very complicated. But if you don't like the advice (or someone more knowledgeable about LVM wants to jump in), please respond accordingly.

 

Here's where we are now. The system's logical volumes (syno_vg_reserved_area and volume_1) are allocated from a volume group (vg1), which is composed of the three physical arrays /dev/md2, /dev/md4 and /dev/md5. We know that /dev/md4 and /dev/md5 are in good shape, albeit degraded (because of the missing 10TB drive). /dev/md2 is also activated in degraded mode, but we aren't totally sure it is good, because we had to recover a non-optimal drive into the array and also make an educated guess at its position within it. Also, the integrity of the data on that drive is not completely known. When and if we do get the share accessible, we will want to mount it read-only and verify that files are accessible and not corrupted.

 

The system was booted many times while DSM tried to start the LV with /dev/md2 unavailable, so somewhere along the line the PV was flagged as "MISSING." Now it is potentially NOT missing (if it is functional), or effectively still missing (if it is corrupted). I think we should avoid booting the system as much as possible, which means trying to correct LVM from the command line.

 

So, here are some next steps:

 

# vgextend --restoremissing vg1 /dev/md2

# lvm vgscan

# vgdisplay
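
If --restoremissing were ever to refuse, the heavier fallback would be restoring the VG metadata from one of those archive files with vgcfgrestore — hedged, because it rewrites LVM metadata and should only be pointed at an archive you have read and trust:

# vgcfgrestore --list vg1                                 (lists the available backup/archive metadata)
# vgcfgrestore --file /etc/lvm/archive/<archive>.vg vg1   (<archive> is a placeholder, not a real file name)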


6 minutes ago, flyride said:

# vgextend --restoremissing vg1 /dev/md2

# lvm vgscan

# vgdisplay

Many thanks for the in-depth explanation. My question is: where's /dev/md3?

 

# vgextend --restoremissing vg1 /dev/md2
  Volume group "vg1" successfully extended
# lvm vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg1" using metadata type lvm2
# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  14
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               47.25 TiB
  PE Size               4.00 MiB
  Total PE              12385768
  Alloc PE / Size       12385539 / 47.25 TiB
  Free  PE / Size       229 / 916.00 MiB
  VG UUID               2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp

I mean, is md3 intentionally missing, or is it because my VG is broken?


3 minutes ago, flyride said:

I think /dev/md3 is your cache, which we really don't care about at this point.

 

LVM is reporting that all its parts are present. Now we want to activate it.

 

# vgchange -ay

I see. As I understand it, SHR/Syno's cache implementation is basically a copy of selected HDDs' files, right?

 

root@homelab:/# vgchange -ay
  2 logical volume(s) in volume group "vg1" now active
root@homelab:/# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

md2 : active raid5 sdb5[0] sdk5[12] sdq5[10] sdp5[9] sdo5[8] sdn5[7] sdm5[6] sdl5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUUU_U]

md5 : active raid1 sdo7[2]
      3905898432 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdk2[5] sdl2[6] sdm2[7] sdn2[8] sdo2[9] sdp2[10] sdq2[11]
      2097088 blocks [24/12] [UUUUUUUUUUUU____________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5]
      2490176 blocks [12/4] [_UUU_U______]

unused devices: <none>
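
A quick legend for reading that mdstat output (an annotation, not part of the session): in a line like "[13/12] [UUUUUUUUUUU_U]", the first pair means 13 device slots are defined and 12 are active; in the bracketed string, U is an up member and _ is a missing or failed one. So md2 and md4 are each running degraded by one member, and md5 is a RAID1 with only one of its two mirrors present.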

 


Synology's cache is just the open source flashcache: https://en.wikipedia.org/wiki/Flashcache

 

All right. Now we want to try to mount the filesystem and, if successful, have you start pulling files off of it. It might not mount, and/or those files might be garbage when they get to the part of the filesystem serviced by /dev/md2, so you need to verify the files that you copy off. Since you are using btrfs, it should detect corruption and alert you through the DSM web interface, so please monitor that too. Note that even if you are getting a lot of good data off the filesystem, some files are very likely to be corrupted.

 

# mount -o ro,norecovery /dev/vg1/volume_1 /volume1
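
Once it mounts, a simple way to catch corruption while copying (a hedged sketch; the share path is illustrative) is to watch the kernel log, since btrfs verifies checksums on every read and logs failures there:

# cp -a /volume1/someshare /path/to/backup/
# dmesg | grep -i csum           (btrfs prints "csum failed ..." for each corrupted read)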


3 minutes ago, flyride said:

# mount -o ro,norecovery /dev/vg1/volume_1 /volume1

 

Uh oh.

 

# mount -o ro,norecovery /dev/vg1/volume_1 /volume1
mount: wrong fs type, bad option, bad superblock on /dev/vg1/volume_1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
# dmesg | tail
[16079.284996] init: dhcp-client (eth4) main process (16851) killed by TERM signal
[16079.439991] init: nmbd main process (17405) killed by TERM signal
[16083.428550] alx 0000:05:00.0 eth4: NIC Up: 100 Mbps Full
[16084.471931] iSCSI:iscsi_target.c:520:iscsit_add_np CORE[0] - Added Network Portal: 192.168.0.83:3260 on iSCSI/TCP
[16084.472048] iSCSI:iscsi_target.c:520:iscsit_add_np CORE[0] - Added Network Portal: [fe80::d250:99ff:fe26:36a8]:3260 on iSCSI/TCP
[16084.498404] init: dhcp-client (eth4) main process (17245) killed by TERM signal
[27893.370010] usb 3-13: usbfs: USBDEVFS_CONTROL failed cmd blazer_usb rqt 33 rq 9 len 8 ret -110
[33089.867131] bio: create slab <bio-2> at 2
[33485.021943] hfsplus: unable to parse mount options
[33485.026980] UDF-fs: bad mount option "norecovery" or missing value

 

DSM's Storage Manager shows that I can repair, though.

[screenshot: DSM Storage Manager showing the degraded volume with a Repair option]

