teepee Posted February 12, 2018 #1

Hi all,

During a RAID expansion I had a drive failure and an unexpected restart. Since then my data volume shows as crashed. The most recent LVM archive file is from Jan 2017 and contains: description = "Created *before* executing '/sbin/vgremove -f /dev/vg1000'". Has anything changed in the DSM disk layout? Am I on the wrong track with LVM?

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid6 sdh3[8] sdd3[7] sdf3[6] sde3[3] sdi3[9] sdc3[10]
      9743961280 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
md2 : active raid0 sda1[0] sdb1[1]
      390712704 blocks super 1.2 64k chunks [2/2] [UU]
md1 : active raid1 sdf2[6] sdj2[12](F) sdi2[5] sdh2[4] sdg2[3] sde2[2] sdd2[1] sdc2[0]
      522048 blocks [12/7] [UUUUUUU_____]
md0 : active raid1 sdc1[1] sdd1[5] sde1[0] sdf1[4] sdg1[2] sdh1[6] sdi1[3]
      2489920 blocks [12/7] [UUUUUUU_____]
unused devices: <none>

The volume showing as crashed in the GUI is Volume3, yet the lvm/archive files all reference md3?!? Any guidance/pointers appreciated.

Cheers, Tony
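For readers unfamiliar with the mdstat fields above: the `[7/6]` counter means 7 members are expected but only 6 are active, and each `_` in `[UUUUUU_]` marks a missing member. A minimal sketch of spotting degraded arrays this way (the `MDSTAT` variable here holds a sample of the output above; on a real box you would read /proc/mdstat directly):

```shell
#!/bin/sh
# Sketch: flag degraded md arrays by comparing the [total/active] counters
# in /proc/mdstat-style text. MDSTAT is sample input copied from the post;
# replace with `cat /proc/mdstat` on a live system.
MDSTAT='md4 : active raid6 sdh3[8] sdd3[7] sdf3[6] sde3[3] sdi3[9] sdc3[10]
      9743961280 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
md2 : active raid0 sda1[0] sdb1[1]
      390712704 blocks super 1.2 64k chunks [2/2] [UU]'

result=$(printf '%s\n' "$MDSTAT" | awk '
  /^md/ { dev = $1 }                        # remember current array name
  match($0, /\[[0-9]+\/[0-9]+\]/) {         # find the [total/active] field
    split(substr($0, RSTART + 1, RLENGTH - 2), n, "/")
    if (n[1] != n[2])
      print dev ": degraded (" n[2] " of " n[1] " members active)"
  }')
printf '%s\n' "$result"
```

Run against the sample above this prints `md4: degraded (6 of 7 members active)`, matching the one-disk-short RAID6 that teepee reports.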
Polanskiman Posted February 13, 2018 #2

Repairing the volume does not solve the issue?
teepee Posted February 13, 2018 (Author) #3

Hi @Polanskiman, unfortunately not. This is what I get in the GUI if I try to repair: [screenshot attached]
teepee Posted February 14, 2018 (Author) #4

Any ideas, guys? Or is there any further information I can provide that would help here? I appreciate any pointers...
IG-88 Posted February 17, 2018 #5

There seems to be an sdg3 missing. In the second picture there is a free (green) disk in that position, and that disk is part of md0/md1 (the DSM system and swap partitions). There also seems to be a second, completely unused disk, sdj.

What did you have before you started expanding (how many disks), and what did you try to add (one disk?)? The first two disks appear to be SSD cache; your RAID array(s) seem to start from the third disk (sdc).

Is there anything in the log about what went wrong when DSM tried to repair?

A degraded RAID5 or RAID6 should still be usable (for making a backup) when booting a live Linux. The safe way is to take a backup, try to repair from the live Linux, and, if that does not work, kill the data Volume3, insert all disks that will be part of a new volume, and create it fresh.

Keep in mind that it is advisable to find the reason the RAID expansion crashed. It could be a defective disk, or problems with cabling or power; if that is not sorted out, you will carry the problem over to the new Volume3 (e.g. disks falling out of the RAID). So if you do not find a cause and want to go on anyway, you should test thoroughly before trusting the new Volume3.
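IG-88's "back up from a live Linux" step could look roughly like the sketch below. The device names are guesses taken from the mdstat output earlier in the thread, and `vg1000`/`lv` follow the usual Synology naming seen in the poster's LVM archive; verify everything with `mdadm --examine` and `vgscan` first. The script defaults to a dry run that only prints the plan:

```shell
#!/bin/sh
# Sketch (not a tested recovery procedure): assemble the degraded array
# read-only from a live Linux and copy data off before any repair attempt.
# All device and volume names below are assumptions from the thread.
# With DRY_RUN=1 (the default) commands are only printed, never executed.
DRY_RUN=${DRY_RUN:-1}

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# A [7/6] RAID6 is one member short but still has enough disks to run.
# Assemble it read-only so nothing is written to the surviving members.
run mdadm --assemble --readonly --run /dev/md4 \
    /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdh3 /dev/sdi3

# Activate the LVM volume group and mount the logical volume read-only.
run vgchange -ay vg1000
run mount -o ro /dev/vg1000/lv /mnt

# Copy everything to separate storage before attempting any repair.
run rsync -a /mnt/ /path/to/backup/
```

Only after the backup succeeds would you retry the DSM repair or, failing that, destroy and recreate Volume3 as IG-88 suggests.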