XPEnology Community

Any LVM experts out there?


Diverge


I've started the process of swapping out the disks one by one and repairing. I'm up to the 3rd disk now, but have a couple of questions.

 

I noticed that I now have an md3, and that my new disks are partitioned to match the older 2TB disks, with the balance in a 4th partition that becomes md3. When the process is complete, will my one volume always be split across 2 partitions per disk, or will it resize at the end and leave me with just md0 (DSM), md1 (swap), and md2 (volume1)?
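Best I can tell from the output below, DSM seems to be doing something like the following with the leftover space on the replaced disks. This is just my guess at the equivalent commands, not what DSM actually runs, so treat it as a sketch:

# second RAID5 built on the extra 6th partitions of the disks replaced so far
mdadm --create /dev/md3 --level=5 --raid-devices=2 /dev/sda6 /dev/sdb6
pvcreate /dev/md3                       # turn it into a second physical volume
vgextend vg1000 /dev/md3                # add it to the existing volume group
lvextend -l +100%FREE /dev/vg1000/lv    # grow the logical volume over the new PV
resize2fs /dev/vg1000/lv                # and grow the filesystem to match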

 

sda, sdb, sdc are the new 3TB drives, sdd is an old 2TB:

DiskStation> sfdisk -l
/dev/sda1                   256         4980735         4980480  fd
/dev/sda2               4980736         9175039         4194304  fd
/dev/sda5               9453280      3907015007      3897561728  fd
/dev/sda6            3907031104      5860519007      1953487904  fd


/dev/sdb1                   256         4980735         4980480  fd
/dev/sdb2               4980736         9175039         4194304  fd
/dev/sdb5               9453280      3907015007      3897561728  fd
/dev/sdb6            3907031104      5860519007      1953487904  fd


/dev/sdc1                   256         4980735         4980480  fd
/dev/sdc2               4980736         9175039         4194304  fd
/dev/sdc5               9453280      3907015007      3897561728  fd
/dev/sdc6            3907031104      5860519007      1953487904  fd


/dev/sdd1                   256         4980735         4980480  fd
/dev/sdd2               4980736         9175039         4194304  fd
/dev/sdd3               9437184      3907015007      3897577824   f
/dev/sdd5               9453280      3907015007      3897561728  fd


/dev/md01                     0         4980351         4980352   0


/dev/md11                     0         4194175         4194176   0


Error: /dev/md2: unrecognised disk label
get disk fail


Error: /dev/md3: unrecognised disk label
get disk fail


DiskStation>

 

DiskStation> pvdisplay
 --- Physical volume ---
 PV Name               /dev/md2
 VG Name               vg1000
 PV Size               5.44 TB / not usable 3.38 MB
 Allocatable           yes (but full)
 PE Size (KByte)       4096
 Total PE              1427328
 Free PE               0
 Allocated PE          1427328
 PV UUID               UDpJBS-unrB-wzi8-c52i-mqd2-54e1-SWpsVL

 --- Physical volume ---
 PV Name               /dev/md3
 VG Name               vg1000
 PV Size               931.49 GB / not usable 2.38 MB
 Allocatable           yes (but full)
 PE Size (KByte)       4096
 Total PE              238462
 Free PE               0
 Allocated PE          238462
 PV UUID               YFpKHI-4UDi-8J2M-y7s4-CRrI-Stlb-1xJ2oi

DiskStation> lvdisplay
 --- Logical volume ---
 LV Name                /dev/vg1000/lv
 VG Name                vg1000
 LV UUID                RD3nVc-LiPu-CsGZ-ZqA1-K4LA-OD72-GshZUa
 LV Write Access        read/write
 LV Status              available
 # open                 1
 LV Size                6.35 TB
 Current LE             1665790
 Segments               2
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     4096
 Block device           253:0

DiskStation> vgdisplay
 --- Volume group ---
 VG Name               vg1000
 System ID
 Format                lvm2
 Metadata Areas        2
 Metadata Sequence No  4
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                1
 Open LV               1
 Max PV                0
 Cur PV                2
 Act PV                2
 VG Size               6.35 TB
 PE Size               4.00 MB
 Total PE              1665790
 Alloc PE / Size       1665790 / 6.35 TB
 Free  PE / Size       0 / 0
 VG UUID               a7L11z-Pukv-f052-tv0N-CdNP-qQou-SFjIjC

DiskStation>

 

DiskStation> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid5 sda6[1] sdb6[0]
      976742784 blocks super 1.2 level 5, 64k chunk, algorithm 2 [2/2] [UU]

md2 : active raid5 sdc5[6] sda5[4] sdd5[2] sdb5[5]
      5846338944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
     [==============>......]  recovery = 70.9% (1383486592/1948779648) finish=121.0min speed=77818K/sec

md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sdc1[3] sda1[1] sdb1[2] sdd1[0]
      2490176 blocks [12/4] [UUUU________]

unused devices: <none>
DiskStation>

 

DiskStation> pvs
 PV         VG     Fmt  Attr PSize   PFree
 /dev/md2   vg1000 lvm2 a-     5.44T    0
 /dev/md3   vg1000 lvm2 a-   931.49G    0
DiskStation> lvs
 LV   VG     Attr   LSize Origin Snap%  Move Log Copy%  Convert
 lv   vg1000 -wi-ao 6.35T
DiskStation> lvs --segments
 LV   VG     Attr   #Str Type   SSize
 lv   vg1000 -wi-ao    1 linear   5.44T
 lv   vg1000 -wi-ao    1 linear 931.49G
DiskStation>

 

I have my doubts that it will automatically resize md2 at the end of the process, rather than leaving the extra space in md3 as it has done so far. Is there any way to manually fix this without breaking DSM? I just want my data volume on a single physical and logical partition per disk.
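For reference, the generic LVM way to collapse back to a single PV would be to migrate the extents off md3 and drop it, roughly as sketched below. It needs free extents to move into, though, and pvs shows both PVs with PFree 0, so it can't work here unless md2 were grown first. Hypothetical, not something to run as-is on DSM:

pvmove /dev/md3             # migrate all extents off md3 (needs enough free PE on the remaining PVs)
vgreduce vg1000 /dev/md3    # remove the now-empty PV from the volume group
mdadm --stop /dev/md3       # tear down the extra RAID device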


Yes, it will end up as multiple partitions.

 

If you want one big md2 then you have only one choice...

 

.

 

Thanks. Looks like I'll be forced into that choice. After swapping out the last disk, the volume shows as crashed in DSM with no option to repair. I did some digging: all the md partitions are there, but the logical volume and volume group are missing. Put the old disk back, same result. Going to revert to my old array (hope it still works), back up all my data, and start fresh :cry:
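For anyone hitting the same state, what I mean by "md partitions there, but no logical volume or volume group" is what these standard commands show (nothing DSM-specific about them):

cat /proc/mdstat         # are md0/md1/md2 (and md3) assembled?
pvscan                   # is /dev/md2 still recognised as a physical volume?
vgscan                   # is vg1000 detected at all?
lvscan                   # is /dev/vg1000/lv listed?
vgchange -ay vg1000      # if the VG is found, try to activate it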

 

edit: looks like re-inserting the old disks has issues too. One disk was missing from the array; booting into DSM, it listed the volume as crashed, but gave a popup saying the RAID was disassembled and offered to run a scan at reboot... I picked that option. Now I just have to wait and hope it fixes itself :cry::cry:

 

For whatever reason, only 1 disk is now listed in md0.

 

edit2: I think I figured out why. It's booting off the last disk that was left from the new array, which now has the new array's volume group data on it. I removed that disk and booted; DSM still said the volume was crashed, but it was mounted in the terminal and I could see my data... Going to see if I can reverse the order of the disks and get it to boot from the first disk that was swapped out rather than the last.
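A way to double-check which disk is carrying the stale metadata (just the standard commands, not output I captured at the time):

mdadm --examine /dev/sd[abcd]1    # compare array UUIDs and event counts on the md0 members
mdadm --examine /dev/sd[abcd]5    # same for the data-array members
pvs -o pv_name,vg_name,pv_uuid    # see which volume group each PV header claims to belong to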

 

edit3: Couldn't fix it from the DSM machine. Moved all 4 original disks to a Linux machine; it automatically assembled my array there, but 1 disk wasn't in the array for whatever reason. Added it back with mdadm /dev/md2 -a /dev/sdc5 and it took. Now it's doing a recovery... *crosses fingers* hope it works :cry:
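To keep an eye on the rebuild:

cat /proc/mdstat          # shows recovery percentage, speed and ETA
mdadm --detail /dev/md2   # per-member state of the array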

 

If it works I'll back up the data off it, then dd the sdX1 partition from the first disk that was swapped out to the other 3 drives so they all have the same data there.
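Roughly what I have in mind for that dd step, with placeholder device names (double-check them before running anything like this; letting mdadm rebuild the system partition is the more conventional route):

dd if=/dev/sda1 of=/dev/sdb1 bs=1M    # clone the DSM system partition from the good disk (placeholder names)
# alternative: add the partition back to md0 and let it resync
# mdadm /dev/md0 -a /dev/sdb1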


After a full day of playing around, I think I've figured it out, and I'm now in the process of fixing the array of new disks that failed on the last disk swap. (The data is on my old array after fixing it, but DSM was complaining about missing inodes or something like that in the console, probably because data changed while I was swapping disks out one by one, so I didn't feel safe using it.)

 

In /etc/lvm/archive there are copies of the LVM config from each stage of changes to the array; I had 8-9 of them. After looking at them all and trying to narrow down the config that last worked before the volume crashed, I thought I'd found the correct one.
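If anyone needs to do the same digging: each file in /etc/lvm/archive has a description and creation_time field near the top saying which command produced it, and LVM can list them for you:

vgcfgrestore --list vg1000    # shows every archived metadata version with its description and timestamp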

 

While searching the internet I found this page on fixing LVM arrays, and it had the command I needed.

 

http://www.novell.com/coolsolutions/appnote/19386.html

 

vgcfgrestore

 

Except it wouldn't let me restore files by path, and seemed hardcoded to only work with the /etc/lvm/backup/ folder. So I renamed the config there from vg1000 to vg1000.old, and copied the archived config I thought might fix it into /etc/lvm/backup/, renaming it vg1000.
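For anyone following along, the workaround boiled down to something like this (the archive filename below is just an example, pick whichever one matches the last-known-good state; on a stock LVM2 build, vgcfgrestore -f should accept the archive path directly, but that didn't seem to work on DSM's build):

mv /etc/lvm/backup/vg1000 /etc/lvm/backup/vg1000.old
cp /etc/lvm/archive/vg1000_00007-1234567890.vg /etc/lvm/backup/vg1000    # example filename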

 

Then I ran:

 

vgcfgrestore vg1000

 

It said it was restored, or something like that. So I rebooted the system (leaving out the last disk I had swapped out/in), and it booted to a degraded array. I added the disk back and am now in the process of a repair :mrgreen:

 

edit: gnoboot, if you read this: since my array was fucked (prior to my last boot), I figured I'd move to gnoboot 10.2 and try passing through my LSI 9201-8i card (I was getting tired of deleting and adding RDM devices). So far, while repairing my array, I'm getting 103MB/s :grin:

 

edit2: now 118MB/s :ugeek:

