Everything posted by teepee

  1. Hi all,

     During a RAID expansion I had a drive failure and an unexpected restart. Since then my data volume has been showing as crashed. The last LVM archive file is from Jan 2017 and contains:

         description = "Created *before* executing '/sbin/vgremove -f /dev/vg1000'"

     Has anything changed in the DSM disk layout? Am I on the wrong track with LVM?

         cat /proc/mdstat
         Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
         md4 : active raid6 sdh3[8] sdd3[7] sdf3[6] sde3[3] sdi3[9] sdc3[10]
               9743961280 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
         md2 : active raid0 sda1[0] sdb1[1]
               390712704 blocks super 1.2 64k chunks [2/2] [UU]
         md1 : active raid1 sdf2[6] sdj2[12](F) sdi2[5] sdh2[4] sdg2[3] sde2[2] sdd2[1] sdc2[0]
               522048 blocks [12/7] [UUUUUUU_____]
         md0 : active raid1 sdc1[1] sdd1[5] sde1[0] sdf1[4] sdg1[2] sdh1[6] sdi1[3]
               2489920 blocks [12/7] [UUUUUUU_____]
         unused devices: <none>

     The volume showing as crashed in the GUI is Volume3, yet the lvm/archive files all reference md3, which doesn't appear in mdstat at all. Any guidance/pointers appreciated.

     Cheers, Tony
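For what it's worth, the degraded state is readable straight out of that mdstat dump: whenever the `[n/m]` counts differ, the status string marks each missing member with an underscore. A minimal sketch of checking for that mechanically (the sample embeds the md4/md2 portion of the output above so it is self-contained; on a live box you would read `/proc/mdstat` instead):

```shell
#!/bin/sh
# Flag degraded md arrays in an mdstat dump.
# Sample data is the md4/md2 portion of the mdstat output quoted
# above; on a real system, pipe in /proc/mdstat instead.
mdstat=$(cat <<'EOF'
md4 : active raid6 sdh3[8] sdd3[7] sdf3[6] sde3[3] sdi3[9] sdc3[10]
      9743961280 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
md2 : active raid0 sda1[0] sdb1[1]
      390712704 blocks super 1.2 64k chunks [2/2] [UU]
EOF
)
# A status string like [UUUUUU_] carries one underscore per
# missing member; report any array whose status contains one.
degraded=$(printf '%s\n' "$mdstat" | awk '
  /^md[0-9]/ { name = $1 }
  /\[[U_]*_[U_]*\]/ { print name }
')
printf '%s\n' "$degraded"   # md4
```

Here that reports only md4 (the data array); md2 shows `[2/2] [UU]` and is healthy, which matches the GUI showing just the one crashed volume.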
  2. Any ideas, guys? Or is there any further information I could provide that would help here? I appreciate any pointers...
  3. Hi @Polanskiman, unfortunately not. This is what I get in the GUI if I try to repair:
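For anyone retracing this thread: the archive header quoted in the first post is plain text, so the command each snapshot was taken before can be pulled out mechanically when deciding which archive to inspect. A minimal sketch, assuming the usual `/etc/lvm/archive/` location on DSM (the sample header below is illustrative, not a real file):

```shell
#!/bin/sh
# Illustrative only: extract the command recorded in an LVM
# archive header. The sample mimics the header format of files
# under /etc/lvm/archive/ (path is an assumption for DSM);
# real archive files carry many more fields.
sample=$(cat <<'EOF'
contents = "Text Format Volume Group"
description = "Created *before* executing '/sbin/vgremove -f /dev/vg1000'"
EOF
)
# The description line quotes the command in single quotes;
# pull out just that quoted command.
cmd=$(printf '%s\n' "$sample" | sed -n "s/^description = .*'\(.*\)'.*$/\1/p")
printf '%s\n' "$cmd"   # /sbin/vgremove -f /dev/vg1000
```

If the right archive turns out to be intact, `vgcfgrestore` is the stock LVM tool for rolling VG metadata back to a snapshot, but on a crashed volume that is a step to take only with the arrays assembled read-only and backups in hand.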