Chavelle

New Members
  • Content count: 3

Community Reputation

0 Neutral

About Chavelle

  • Rank: Newbie
  1. Hi everyone, I'm trying my luck here in the German section as well; maybe someone here can help me. I have a problem with my Xpenology system: the LVM no longer comes online cleanly. A short description of the setup: my Xpenology system runs DSM 5.2 Build 5644 as a Hyper-V VM with a RAID 5 across 5 data disks. The disks are assigned to the VM via passthrough, so I cannot use snapshots either. How the problem came about: after a power outage, DSM tried to run a data consistency check. During that check, disk errors occurred, and those disks were marked as faulty by the LVM. Afterwards only 3 of the 5 disks were still active, and the RAID was therefore in a failed state. I hoped the LVM could be rebuilt by rebooting DSM, but that was not the case. After some googling for a suitable solution, I decided to recreate the RAID, since the known RAID information on the disks should be read during that process. After the LVM was recreated, I was still unable to see the volumes. However, the "used size" now displayed does match the last known utilization. I have collected some possibly useful information and am including it here.
Current LVM status:

mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Apr 23 11:25:47 2017
     Raid Level : raid5
     Array Size : 7551430656 (7201.61 GiB 7732.66 GB)
  Used Dev Size : 1887857664 (1800.40 GiB 1933.17 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Tue Apr 25 19:42:26 2017
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : vmstation:2  (local to host vmstation)
           UUID : fa906565:71190489:ea521509:a7999784
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       83        1      active sync   /dev/sdf3
       2       8      115        2      active sync   /dev/sdh3
       3       8       35        3      active sync   /dev/sdc3
       4       8       51        4      active sync   /dev/sdd3

"vgchange -ay" - no output
"pvs" - no output
"lvdisplay" - no output
"lvscan" - no output
"vgscan" - no output
"vgs" - no output

I have something from /etc/lvm/archive/:

vmstation> cat vg1_00004.vg
# Generated by LVM2 version 2.02.38 (2008-06-11): Mon Mar 6 07:59:38 2017

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/lvextend --alloc inherit /dev/vg1/volume_1 --size 6289408M'"

creation_host = "vmstation"    # Linux vmstation 3.10.35 #1 SMP Tue Feb 2 17:44:24 CET 2016 x86_64
creation_time = 1488783578    # Mon Mar 6 07:59:38 2017

vg1 {
    id = "V7LhPs-YK34-rCUy-TAE9-eNJP-5t9M-exZx3C"
    seqno = 11
    status = ["RESIZEABLE", "READ", "WRITE"]
    extent_size = 8192    # 4 Megabytes
    max_lv = 0
    max_pv = 0

    physical_volumes {
        pv0 {
            id = "DKv9e6-mh0q-6SZ5-VNS6-bXHw-hSkL-wFPFSK"
            device = "/dev/md2"    # Hint only
            status = ["ALLOCATABLE"]
            dev_size = 15102858624    # 7.03282 Terabytes
            pe_start = 1152
            pe_count = 1843610    # 7.03281 Terabytes
        }
    }

    logical_volumes {
        syno_vg_reserved_area {
            id = "Uc1Q6V-f2vY-xKLI-kH0n-OS1f-6P6Q-fWQD5I"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 3    # 12 Megabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }

        volume_1 {
            id = "6WOtW4-g3mq-796S-sAcm-bqpI-HUzk-OOE3Kc"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 2

            segment1 {
                start_extent = 0
                extent_count = 1281280    # 4.8877 Terabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 3
                ]
            }
            segment2 {
                start_extent = 1281280
                extent_count = 118528    # 463 Gigabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 1290243
                ]
            }
        }

        iscsi_0 {
            id = "MoikYK-1lu2-qV2L-VsH8-FreO-W01t-htNHt8"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 3840    # 15 Gigabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 1281283
                ]
            }
        }

        iscsi_1 {
            id = "VOl2iq-NFNs-eNXv-uCi5-BmC1-al9s-88f8eg"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 5120    # 20 Gigabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 1285123
                ]
            }
        }
    }
}

cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1

e2fsck /dev/md2
Bad magic number in super-block while trying to open /dev/md2

That's everything so far... I hope one of you can help me. Thank you. Regards, Thomas
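A quick plausibility check before attempting any restore: the recreated array should be at least as large as the PV recorded in the archived metadata. A minimal shell sketch using the two sizes quoted above (mdadm's "Array Size" is in 1 KiB blocks, the archive's dev_size is in 512-byte sectors):

```shell
# Compare the recreated md2 size against the archived PV size.
# Numbers are copied from the mdadm output and the vg1_00004.vg archive above.
array_kib=7551430656      # mdadm "Array Size" (1 KiB blocks)
pv_sectors=15102858624    # archive "dev_size" (512-byte sectors)

array_bytes=$((array_kib * 1024))
pv_bytes=$((pv_sectors * 512))

if [ "$array_bytes" -ge "$pv_bytes" ]; then
    echo "OK: recreated array ($array_bytes B) covers the archived PV ($pv_bytes B)"
else
    echo "MISMATCH: array is smaller than the archived PV - a restore would be unsafe"
fi
```

With these numbers the check prints the "OK" line, so the array geometry at least does not rule out a metadata restore.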
  2. Hi to everybody here at the forum. I'm new here, and I hope you can help me with my problem, so thanks in advance to everybody. A "short" description of my setup: I have Xpenology running on DSM 5.2 Build 5644 in a Hyper-V VM with a RAID 5 and 5 data disks. The disks are mapped to the VM via passthrough, which means I cannot use snapshots on this VM. My problem is that after a power outage, DSM tried to check the data consistency, and during that check there were errors on 2 of the disks that caused those disks to go offline. So the LVM was in a failed state with only 3 of 5 disks active. I rebooted DSM to try to fix the problem automatically, but that did not work, I guess because those disks had been marked as failed. After googling around for a solution, I decided to recreate the LVM, since from what I read it should detect the active RAID on the disks and rebuild the LVM. After I did that, the LVM was active again, but I'm not able to see my volumes. The used size of the "new" LVM does, however, show a size similar to what I last knew it to be.
Here are some outputs from my box that might be helpful in finding a solution.

Current LVM status:

mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Apr 23 11:25:47 2017
     Raid Level : raid5
     Array Size : 7551430656 (7201.61 GiB 7732.66 GB)
  Used Dev Size : 1887857664 (1800.40 GiB 1933.17 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Tue Apr 25 19:42:26 2017
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : vmstation:2  (local to host vmstation)
           UUID : fa906565:71190489:ea521509:a7999784
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       83        1      active sync   /dev/sdf3
       2       8      115        2      active sync   /dev/sdh3
       3       8       35        3      active sync   /dev/sdc3
       4       8       51        4      active sync   /dev/sdd3

"vgchange -ay" - no output
"pvs" - no output
"lvdisplay" - no output
"lvscan" - no output
"vgscan" - no output
"vgs" - no output

I have something from /etc/lvm/archive/:

vmstation> cat vg1_00004.vg
# Generated by LVM2 version 2.02.38 (2008-06-11): Mon Mar 6 07:59:38 2017

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/lvextend --alloc inherit /dev/vg1/volume_1 --size 6289408M'"

creation_host = "vmstation"    # Linux vmstation 3.10.35 #1 SMP Tue Feb 2 17:44:24 CET 2016 x86_64
creation_time = 1488783578    # Mon Mar 6 07:59:38 2017

vg1 {
    id = "V7LhPs-YK34-rCUy-TAE9-eNJP-5t9M-exZx3C"
    seqno = 11
    status = ["RESIZEABLE", "READ", "WRITE"]
    extent_size = 8192    # 4 Megabytes
    max_lv = 0
    max_pv = 0

    physical_volumes {
        pv0 {
            id = "DKv9e6-mh0q-6SZ5-VNS6-bXHw-hSkL-wFPFSK"
            device = "/dev/md2"    # Hint only
            status = ["ALLOCATABLE"]
            dev_size = 15102858624    # 7.03282 Terabytes
            pe_start = 1152
            pe_count = 1843610    # 7.03281 Terabytes
        }
    }

    logical_volumes {
        syno_vg_reserved_area {
            id = "Uc1Q6V-f2vY-xKLI-kH0n-OS1f-6P6Q-fWQD5I"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 3    # 12 Megabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }

        volume_1 {
            id = "6WOtW4-g3mq-796S-sAcm-bqpI-HUzk-OOE3Kc"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 2

            segment1 {
                start_extent = 0
                extent_count = 1281280    # 4.8877 Terabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 3
                ]
            }
            segment2 {
                start_extent = 1281280
                extent_count = 118528    # 463 Gigabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 1290243
                ]
            }
        }

        iscsi_0 {
            id = "MoikYK-1lu2-qV2L-VsH8-FreO-W01t-htNHt8"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 3840    # 15 Gigabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 1281283
                ]
            }
        }

        iscsi_1 {
            id = "VOl2iq-NFNs-eNXv-uCi5-BmC1-al9s-88f8eg"
            status = ["READ", "WRITE", "VISIBLE"]
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 5120    # 20 Gigabytes
                type = "striped"
                stripe_count = 1    # linear
                stripes = [
                    "pv0", 1285123
                ]
            }
        }
    }
}

cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1

e2fsck /dev/md2
Bad magic number in super-block while trying to open /dev/md2

This is everything I currently know of that could be useful information for helping. I hope somebody can help me reassemble the volumes. As I said, thanks in advance. KR Thomas
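Since an archived copy of the VG metadata survived in /etc/lvm/archive/, the usual suggestion for this situation is to recreate the PV label with its old UUID and then restore the VG configuration from the archive. This is a hedged sketch only, printed as a dry run so nothing is written: the UUID and archive path are taken from the vg1_00004.vg listing above, and it assumes /dev/md2 really is the original PV. (The "Bad magic number" from e2fsck /dev/md2 is expected either way, because /dev/md2 would carry an LVM label rather than an ext4 superblock; the filesystem lives on the logical volume.)

```shell
# Dry-run sketch of an LVM metadata restore (assumptions: /dev/md2 is the
# original PV and the archived metadata still matches the on-disk layout).
# Each command is only echoed; drop the leading "echo" only after the raw
# array has been imaged somewhere safe.
PV_UUID="DKv9e6-mh0q-6SZ5-VNS6-bXHw-hSkL-wFPFSK"   # pv0 id from the archive
ARCHIVE="/etc/lvm/archive/vg1_00004.vg"

echo pvcreate --uuid "$PV_UUID" --restorefile "$ARCHIVE" /dev/md2
echo vgcfgrestore -f "$ARCHIVE" vg1
echo vgchange -ay vg1
echo fsck.ext4 -n /dev/vg1/volume_1   # read-only filesystem check afterwards
```

If the restore succeeds, /dev/vg1/volume_1 should reappear under lvscan before anything is mounted.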