
Hi,

I updated to DSM 6 using Jun's loader.
Shortly after, I tried to add another HDD to my NAS. DSM wasn't recognizing the new drive, so I powered the unit off and used the SATA cables from one of the working drives to rule out a cabling problem. This is where everything went wrong. When DSM booted, I saw the drive I needed, but DSM reported an error, as expected. After powering the unit off and swapping the cables back, it still said the array needed repair, so I pressed Repair in Storage Manager. Everything seemed fine. After another reboot, it said the volume had crashed. I let the parity check run overnight, and the RAID is now reporting healthy, as is each individual disk, but the volume has crashed.

It's running SHR-1.

I believe the mdstat results below show that the array can be recovered without data loss, but I'm not sure where to go from here. Another thread fixed a similar problem by adding a removed /dev/sdf2 back into active sync, but I'm not sure which letters are assigned where on my box.
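If I'm reading that thread right, the fix there boiled down to something like this (a sketch only; the /dev/md1 and /dev/sdf2 names are from that thread and may not map to my box):

sudo mdadm --examine /dev/sdf2           # read-only: check the member's superblock and event count first
sudo mdadm /dev/md1 --re-add /dev/sdf2   # non-destructive attempt to return the member to active sync
cat /proc/mdstat                         # watch the resync

As I understand it, --re-add only works while the member's superblock still matches the array; otherwise mdadm refuses and nothing is written.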

 

 

/proc/mdstat:
admin@DiskStationNAS:/usr$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md3 : active raid1 sdc6[0] sdd6[1]
      976742784 blocks super 1.2 [2/2] [UU]
      
md2 : active raid5 sde5[0] sdd5[3] sdf5[2] sdc5[1]
      8776305792 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md1 : active raid1 sdc2[1] sdd2[2] sde2[0] sdf2[3]
      2097088 blocks [12/4] [UUUU________]
      
md0 : active raid1 sdc1[0] sdd1[1] sde1[2] sdf1[3]
      2490176 blocks [12/4] [UUUU________]
      
unused devices: <none>
admin@DiskStationNAS:/usr$ mdadm --detail /dev/md2
mdadm: must be super-user to perform this action
admin@DiskStationNAS:/usr$ sudo mdadm --detail /dev/md2
Password: 
/dev/md2:
        Version : 1.2
  Creation Time : Sat Aug 29 05:40:53 2015
     Raid Level : raid5
     Array Size : 8776305792 (8369.74 GiB 8986.94 GB)
  Used Dev Size : 2925435264 (2789.91 GiB 2995.65 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Aug 27 09:38:44 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : DiskStationNAS:2  (local to host DiskStationNAS)
           UUID : d97694ec:e0cb31e2:f22b36f2:86cfd4eb
         Events : 17027

    Number   Major   Minor   RaidDevice State
       0       8       69        0      active sync   /dev/sde5
       1       8       37        1      active sync   /dev/sdc5
       2       8       85        2      active sync   /dev/sdf5
       3       8       53        3      active sync   /dev/sdd5
admin@DiskStationNAS:/usr$ sudo mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Thu Jun  8 22:33:42 2017
     Raid Level : raid1
     Array Size : 976742784 (931.49 GiB 1000.18 GB)
  Used Dev Size : 976742784 (931.49 GiB 1000.18 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun Aug 27 00:07:01 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : DiskStationNAS:3  (local to host DiskStationNAS)
           UUID : 4976db98:081bd234:e07be759:a005082b
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8       38        0      active sync   /dev/sdc6
       1       8       54        1      active sync   /dev/sdd6
admin@DiskStationNAS:/usr$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sun Aug 27 00:10:09 2017
     Raid Level : raid1
     Array Size : 2097088 (2048.28 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2048.28 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sun Aug 27 09:38:38 2017
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : a56a9bcf:e721db67:060f5afc:c3279ded (local to host DiskStationNAS)
         Events : 0.44

    Number   Major   Minor   RaidDevice State
       0       8       66        0      active sync   /dev/sde2
       1       8       34        1      active sync   /dev/sdc2
       2       8       50        2      active sync   /dev/sdd2
       3       8       82        3      active sync   /dev/sdf2
       4       0        0        4      removed
       5       0        0        5      removed
       6       0        0        6      removed
       7       0        0        7      removed
       8       0        0        8      removed
       9       0        0        9      removed
      10       0        0       10      removed
      11       0        0       11      removed
admin@DiskStationNAS:/usr$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Fri Dec 31 17:00:25 1999
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Aug 27 10:28:58 2017
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : 147ea3ff:bddf1774:3017a5a8:c86610be
         Events : 0.9139

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
       4       0        0        4      removed
       5       0        0        5      removed
       6       0        0        6      removed
       7       0        0        7      removed
       8       0        0        8      removed
       9       0        0        9      removed
      10       0        0       10      removed
      11       0        0       11      removed

 


So, my problem may be related to the disks being out of order. Checking /etc/space for an earlier .xml log shows they were originally in alphabetical order (sdc5, sdd5, sde5, sdf5).
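For anyone who wants to repeat the check, this is roughly what I looked at (assuming DSM keeps the snapshots as space_history_*.xml under /etc/space, which mine does):

ls -lt /etc/space/                                   # newest snapshot first
sudo grep "sd[c-f]5" /etc/space/space_history_*.xml  # member paths recorded in each snapshot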

I tried using:

syno_poweroff_task -d 
mdadm --stop /dev/md2
mdadm -Cf /dev/md2 -e1.2 -n4 -l5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5 -ud97694ec:e0cb31e2:f22b36f2:86cfd4eb

but it kept resulting in the error "mdadm: failed to stop array /dev/md2: Device or resource busy"
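From what I've read, SHR layers LVM on top of the md arrays, so the volume group probably has to be deactivated before md2 can be stopped. This is what I plan to try (vg1000 is DSM's usual volume group name, an assumption on my part):

sudo dmsetup ls              # device-mapper targets sitting on top of the md arrays
sudo vgs && sudo lvs         # confirm the volume group / logical volume names
sudo vgchange -an vg1000     # deactivate the VG so md2 is no longer busy
sudo mdadm --stop /dev/md2   # should now succeed

And a warning for anyone following along: mdadm -C rewrites the superblocks, so the device order, metadata version (-e1.2), and chunk size (64k here) have to match the original exactly, or the data is gone.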


Are you sure you didn't change the drive order when plugging them back in? They need to be plugged in in exactly the same order as they were originally.

18 minutes ago, Polanskiman said:

Are you sure you didn't change the drive order when plugging them back in? They need to be plugged in in exactly the same order as they were originally.

 

Wouldn't I have had to make two swaps to get the result below?

 Number   Major   Minor   RaidDevice State
       0       8       69        0      active sync   /dev/sde5
       1       8       37        1      active sync   /dev/sdc5
       2       8       85        2      active sync   /dev/sdf5
       3       8       53        3      active sync   /dev/sdd5

They were originally ordered c5, d5, e5, f5.

 

If that's the case, then it wasn't the physical connection of the drives that changed things, because I only unplugged one drive.
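One way to settle which physical disk is behind each letter: serial numbers follow the disk rather than the SATA port, so they can be compared against an old record (assuming smartctl is available on DSM, which it normally is):

for d in /dev/sd[c-f]; do echo "$d"; sudo smartctl -i "$d" | grep -i serial; done

The Minor numbers in the mdadm output encode the same mapping: for sd devices, minor = 16 x (letter index) + partition number, so 69 = sde5, 37 = sdc5, 85 = sdf5, and 53 = sdd5.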


I just checked the /etc/space logs, and the order currently appears alphabetical there as well, so the information I found earlier doesn't prove that the disks are out of order. :(

13 hours ago, Balrog said:

I hope I'm wrong, but I saw the same thread today on the official Synology forum. Even with "...updated with Jun's loader". It is more than silly to post an XPEnology issue on an official Synology forum. Just my 2 cents.

https://forum.synology.com/enu/viewtopic.php?f=39&t=134716

 

The thread has now been deleted from the Synology forum, for obvious reasons...
