I started a repair, as I usually do after installing the latest DSM 6.1 patch/update. (See this post where I discuss my problem with the RAID always being degraded after an update, due to my 8-disk SHR-2 RAID starting at disk no. 7 and going up to no. 14.)
This time, as soon as the first repair finished, a second repair started immediately. Any idea why this is happening?
I don't think I have seen this before, and I tried searching the forum but can't find any other posts about this issue.
I see this in the kern.log:
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50877.963147] md: md2: recovery done.
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157924] md: md2: set sdn5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157931] md: md2: set sdm5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157934] md: md2: set sdg5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157936] md: md2: set sdl5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157939] md: md2: set sdk5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157941] md: md2: set sdi5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157944] md: md2: set sdj5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157946] md: md2: set sdh5 to auto_remap [0]
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.261429] md: md2: current auto_remap = 0
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.261433] md: md2: flushing inflight I/O
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.289637] md: recovery of RAID array md2
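For reference, the back-to-back repair can be confirmed directly from the log excerpt above: a new "recovery of RAID array md2" line appears right after "recovery done." for the same array. A minimal sketch of that check (the two log lines below are copied from the excerpt; on a live system you would grep /var/log/kern.log instead):

```shell
# Count recovery start/finish events for md2 in a log excerpt.
# A start count higher than expected (or a start logged immediately
# after "recovery done.") indicates a second repair kicked off.
log='2017-07-18T05:55:02+02:00 CrazyServer kernel: [50877.963147] md: md2: recovery done.
2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.289637] md: recovery of RAID array md2'

started=$(printf '%s\n' "$log" | grep -c 'recovery of RAID array md2')
finished=$(printf '%s\n' "$log" | grep -c 'recovery done')
echo "started=$started finished=$finished"
```

While the second repair is running, `cat /proc/mdstat` shows the resync progress for md2, and `mdadm --detail /dev/md2` (run as root) reports the array state and which members are rebuilding.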
CrazyFin