XPEnology Community

no LVM information - VG lost



Hi to everybody here at the forum.

I'm new to this forum and I hope you can help me with my problem, so thanks in advance to everybody.

 

A "short" description of my setup.

I have XPEnology running on DSM 5.2 Build 5644 in a Hyper-V VM, with a RAID 5 across 5 data disks.

The disks are mapped to the VM via passthrough, which means I cannot use snapshots on this VM.

 

My problem is that after a power outage DSM ran a data consistency check, and during that check errors on 2 of the disks caused those disks to go offline.

So the volume was in a failed state with only 3 of 5 disks active.

I rebooted DSM hoping it would fix the problem automatically, but that did not work - I guess because those disks got marked as failed.

After googling around for a solution, I decided to recreate the array, since from what I read it should detect the existing RAID on the disks and rebuild the LVM on top of it.
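(For anyone in a similar spot: before and after such a recreate you can inspect what the kernel actually sees on each member partition without writing anything. The device names below are taken from the mdadm output further down; adjust for your own box.)

```shell
# Read-only: print each RAID member's md superblock - device role,
# array UUID, and event counter. A recreated array shows a new
# creation time and a very low event count.
mdadm --examine /dev/sd[cdefh]3
```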

 

After I did that, the array was active again, but I am not able to see my volumes. The used size of the "new" array, however, is similar to what I last knew it to be.

Here are some outputs from my box that might be helpful in finding a solution.

 

Current status of the array:

mdadm --detail /dev/md2
/dev/md2:
       Version : 1.2
 Creation Time : Sun Apr 23 11:25:47 2017
    Raid Level : raid5
    Array Size : 7551430656 (7201.61 GiB 7732.66 GB)
 Used Dev Size : 1887857664 (1800.40 GiB 1933.17 GB)
  Raid Devices : 5
 Total Devices : 5
   Persistence : Superblock is persistent

   Update Time : Tue Apr 25 19:42:26 2017
         State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
 Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 64K

          Name : vmstation:2  (local to host vmstation)
          UUID : fa906565:71190489:ea521509:a7999784
        Events : 2

   Number   Major   Minor   RaidDevice State
      0       8       67        0      active sync   /dev/sde3
      1       8       83        1      active sync   /dev/sdf3
      2       8      115        2      active sync   /dev/sdh3
      3       8       35        3      active sync   /dev/sdc3
      4       8       51        4      active sync   /dev/sdd3

 

"vgchange -ay" - no output

"pvs" - no output

"lvdisplay" - no output

"lvscan" - no output

"vgscan" - no output

"vgs" - no output
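(No output from pvs/vgs/lvscan usually means the LVM on-disk metadata - the PV label at the start of /dev/md2 - is gone, most likely overwritten when the array was recreated. A safe, read-only way to confirm this, sketched here under the assumption that /dev/md2 is the PV device as in the archive below:)

```shell
# Non-destructive: dump the first few sectors of the array and look
# for the LVM PV label ("LABELONE", normally in sector 1) and the
# "LVM2" metadata magic. If neither appears, the label was wiped,
# which would explain why pvs/vgs report nothing.
dd if=/dev/md2 bs=512 count=8 2>/dev/null | strings | grep -E 'LABELONE|LVM2'
```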

 

I have something from /etc/lvm/archive/

 

vmstation> cat vg1_00004.vg
# Generated by LVM2 version 2.02.38 (2008-06-11): Mon Mar  6 07:59:38 2017

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/lvextend --alloc inherit /dev/vg1/volume_1 --size 6289408M'"

creation_host = "vmstation"     # Linux vmstation 3.10.35 #1 SMP Tue Feb 2 17:44:24 CET 2016 x86_64
creation_time = 1488783578      # Mon Mar  6 07:59:38 2017

vg1 {
       id = "V7LhPs-YK34-rCUy-TAE9-eNJP-5t9M-exZx3C"
       seqno = 11
       status = ["RESIZEABLE", "READ", "WRITE"]
       extent_size = 8192              # 4 Megabytes
       max_lv = 0
       max_pv = 0

       physical_volumes {

               pv0 {
                       id = "DKv9e6-mh0q-6SZ5-VNS6-bXHw-hSkL-wFPFSK"
                       device = "/dev/md2"     # Hint only

                       status = ["ALLOCATABLE"]
                       dev_size = 15102858624  # 7.03282 Terabytes
                       pe_start = 1152
                       pe_count = 1843610      # 7.03281 Terabytes
               }
       }

       logical_volumes {

               syno_vg_reserved_area {
                       id = "Uc1Q6V-f2vY-xKLI-kH0n-OS1f-6P6Q-fWQD5I"
                       status = ["READ", "WRITE", "VISIBLE"]
                       segment_count = 1

                       segment1 {
                               start_extent = 0
                               extent_count = 3        # 12 Megabytes

                               type = "striped"
                               stripe_count = 1        # linear

                               stripes = [
                                       "pv0", 0
                               ]
                       }
               }

               volume_1 {
                       id = "6WOtW4-g3mq-796S-sAcm-bqpI-HUzk-OOE3Kc"
                       status = ["READ", "WRITE", "VISIBLE"]
                       segment_count = 2

                       segment1 {
                               start_extent = 0
                               extent_count = 1281280  # 4.8877 Terabytes

                               type = "striped"
                               stripe_count = 1        # linear

                               stripes = [
                                       "pv0", 3
                               ]
                       }
                       segment2 {
                               start_extent = 1281280
                               extent_count = 118528   # 463 Gigabytes

                               type = "striped"
                               stripe_count = 1        # linear

                               stripes = [
                                       "pv0", 1290243
                               ]
                       }
               }

               iscsi_0 {
                       id = "MoikYK-1lu2-qV2L-VsH8-FreO-W01t-htNHt8"
                       status = ["READ", "WRITE", "VISIBLE"]
                       segment_count = 1

                       segment1 {
                               start_extent = 0
                               extent_count = 3840     # 15 Gigabytes

                               type = "striped"
                               stripe_count = 1        # linear

                               stripes = [
                                       "pv0", 1281283
                               ]
                       }
               }

               iscsi_1 {
                       id = "VOl2iq-NFNs-eNXv-uCi5-BmC1-al9s-88f8eg"
                       status = ["READ", "WRITE", "VISIBLE"]
                       segment_count = 1

                       segment1 {
                               start_extent = 0
                               extent_count = 5120     # 20 Gigabytes

                               type = "striped"
                               stripe_count = 1        # linear

                               stripes = [
                                       "pv0", 1285123
                               ]
                       }
               }
       }
}
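(This archive is exactly what vgcfgrestore needs. Assuming the data area of the array is intact and only the LVM metadata was lost, the VG can be rebuilt in place: recreate the PV label with its original UUID, then restore the VG config. The UUID and archive path below come straight from the file above - note it was taken *before* an lvextend (seqno 11), so if a newer archive exists under /etc/lvm/archive/, prefer that one. This is a sketch, not a guaranteed fix; back up the start of the device first.)

```shell
# Safety net: save the first few MB of the array before touching metadata.
dd if=/dev/md2 of=/root/md2-head.bak bs=1M count=4

# Recreate the PV label with its ORIGINAL UUID (pv0 id from the archive),
# using the archive as restorefile so pe_start etc. match the old layout.
pvcreate --uuid "DKv9e6-mh0q-6SZ5-VNS6-bXHw-hSkL-wFPFSK" \
         --restorefile /etc/lvm/archive/vg1_00004.vg /dev/md2

# Restore the VG metadata from the same archive, then activate and list.
vgcfgrestore -f /etc/lvm/archive/vg1_00004.vg vg1
vgchange -ay vg1
lvs vg1
```

If this works, the LVs (volume_1, iscsi_0, iscsi_1) should reappear; the filesystem on volume_1 still needs a check before mounting.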

 

cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1

 

e2fsck /dev/md2
Bad magic number in super-block while trying to open /dev/md2
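(That error is actually expected: per the archive above, /dev/md2 holds an LVM physical volume, not an ext4 filesystem, so e2fsck has no superblock to find there. The filesystem lives on the logical volume. Once vg1 is active again, the check would look like this - a sketch assuming the standard DSM LV path:)

```shell
# Check the filesystem on the LV, not on the raw array.
# -n opens it read-only and answers "no" to all prompts - no changes made.
e2fsck -n /dev/vg1/volume_1
```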

 

This is everything I currently know that could be useful for you to help.

 

I hope somebody can help me reassemble the volumes.

 

As I said, thanks in advance.

 

KR

Thomas


I would try mounting the array with Linux first. Sometimes Synology will mark the array as bad when it's not really bad. How to undo that I'm not sure. But if your array is working (even if Synology says it's bad), you should be able to mount it in Linux and get to your data to back it up, then start over with Synology.
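(A rough sketch of that approach on a rescue Linux - a live CD, or another box with the disks attached. This assumes the LVM metadata is still readable; in this thread it currently isn't, so the vgcfgrestore step above would have to come first. Everything here mounts read-only so the data stays untouched:)

```shell
# Reassemble the md array from its member superblocks.
mdadm --assemble --scan

# Activate any volume groups found on the assembled array.
vgchange -ay

# Mount the data volume read-only and copy the data off.
mkdir -p /mnt/recover
mount -o ro /dev/vg1/volume_1 /mnt/recover
```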

 

I went through my own experience years ago, which I documented here: https://forum.cgsecurity.org/phpBB3/fou ... t2600.html

 

Good luck!

