
Hi all,

During a RAID expansion I had a failure and an unexpected restart. Since then my data volume has been showing as crashed. The last LVM archive file is from Jan 2017 and contains: description = "Created *before* executing '/sbin/vgremove -f /dev/vg1000'".

 

Has anything changed in the DSM disk layout? Am I on the wrong track with LVM?

 cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid6 sdh3[8] sdd3[7] sdf3[6] sde3[3] sdi3[9] sdc3[10]
      9743961280 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
md2 : active raid0 sda1[0] sdb1[1]
      390712704 blocks super 1.2 64k chunks [2/2] [UU]
md1 : active raid1 sdf2[6] sdj2[12](F) sdi2[5] sdh2[4] sdg2[3] sde2[2] sdd2[1] sdc2[0]
      522048 blocks [12/7] [UUUUUUU_____]
md0 : active raid1 sdc1[1] sdd1[5] sde1[0] sdf1[4] sdg1[2] sdh1[6] sdi1[3]
      2489920 blocks [12/7] [UUUUUUU_____]
unused devices: <none>
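For anyone reading the output: the [7/6] [UUUUUU_] on md4 means seven members expected, six active, with the trailing underscore marking the missing one. A quick way to flag degraded arrays is to look for that underscore; a minimal sketch, fed here with the mdstat lines quoted above (on the NAS itself you would run it against /proc/mdstat instead of the here-document):

```shell
# Sketch: flag degraded md arrays by spotting a '_' in the [UU...] status
# field of each "blocks" line. Sample input is the mdstat quoted above.
awk '$2 == "blocks" && $NF ~ /_/ { print arr, "degraded:", $NF }
     /^md/ { arr = $1 }' <<'EOF'
md4 : active raid6 sdh3[8] sdd3[7] sdf3[6] sde3[3] sdi3[9] sdc3[10]
      9743961280 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
md2 : active raid0 sda1[0] sdb1[1]
      390712704 blocks super 1.2 64k chunks [2/2] [UU]
EOF
```

Run against the full output above it would also flag md0 and md1, which only show 7 of 12 slots filled.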


 

The volume showing as crashed in the GUI is Volume3, yet the lvm/archive files all reference md3, which does not appear in the mdstat output above at all?
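For what it's worth, each file in the LVM archive records which command it was written before, so listing them can reconstruct the history. A minimal sketch, assuming the stock /etc/lvm/archive path and the vg1000 name from the quoted description (DSM may keep them elsewhere):

```shell
# Sketch: list LVM metadata archives for vg1000 and show which operation each
# snapshot was taken before. Path and VG name are assumptions from the thread.
for f in /etc/lvm/archive/vg1000_*.vg; do
  if [ ! -e "$f" ]; then echo "no archives found"; break; fi
  echo "== $f"
  grep -E '^(description|creation_time)' "$f"
done
```

`vgcfgrestore --list vg1000` gives a similar listing through the LVM tools themselves.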

 

Any guidance/pointers appreciated.

 

Cheers,

Tony


Any ideas, guys? Or is there any further information I could post that would help here?

I appreciate any pointers... :)


There seems to be an sdg3 missing; in the second picture there is a free (green) disk in that position.

That disk is part of md0/md1 (the DSM system and swap partitions).

There also seems to be a second, completely unused disk, sdj.

 

What did you have before you started extending (how many disks), and what did you try to add (one disk?)

The first two disks (sda/sdb) seem to be SSD cache; from the third disk (sdc) onward seems to be your RAID array(s).

Is there anything in the logs about what went wrong when DSM tried to repair?

 

A RAID 5 or RAID 6 in this state should still be usable (for making a backup) when booting a live Linux.

The safe way is: make a backup, try to repair with a live Linux, and if that does not work, kill the data Volume3, insert all disks that will be part of the new volume, and create it fresh.
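The live-Linux route can be sketched as a read-only sequence. It is printed here as a dry run rather than executed, since the exact device and LV names are assumptions; check with vgs/lvs on the actual system before running anything:

```shell
# Dry run: print the read-only recovery steps instead of executing them.
# vg1000/lv is a typical DSM data LV name; verify with 'vgs' and 'lvs' first.
cat <<'EOF'
mdadm --assemble --scan          # let mdadm find and start the arrays
vgchange -ay vg1000              # activate the volume group
mount -o ro /dev/vg1000/lv /mnt  # mount read-only and copy the data off
EOF
```

Mounting read-only keeps the degraded array untouched while the backup is taken.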

Keep in mind that it is advisable to find the reason for the crashed RAID extension; it could be a defective disk, or a problem with cabling or power. If that is not sorted out you will carry the problem over to the new Volume3 (disks falling out of the RAID again), so if you cannot find a cause and want to go on anyway, you should do thorough testing before trusting the new Volume3.
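On the testing point, SMART extended self-tests on every member disk are a reasonable baseline. A dry-run sketch that only prints the commands (smartctl comes from smartmontools; the sdc..sdi names are taken from the mdstat earlier in the thread):

```shell
# Dry run: print SMART burn-in commands for each data disk rather than
# running them. Disk names are from this thread's mdstat; adjust to yours.
for d in sdc sdd sde sdf sdg sdh sdi; do
  echo "smartctl -t long /dev/$d    # start an extended self-test"
done
echo "smartctl -a /dev/sdc          # read results once the test finishes"
```

An extended self-test reads the whole surface, so a disk that dropped out of the array during the expansion will usually show errors here.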

 

 

 

