XPEnology Community

Help save my 55TB SHR1! Or mount it via Ubuntu :(


Recommended Posts

7 minutes ago, flyride said:

Something isn't quite right.  Do you have 13 drives plus a cache drive, or 12 plus cache?  Which drive is your cache drive now?  Please post fdisk -l

 

It was originally 13 drives + cache. The cache is not initialized in DSM now for some reason. On top of that, I stupidly deactivated one 9.10TB (10TB WD) drive.

 

 

 

root@homelab:~# fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x696935dc

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1        2048 468857024 468854977 223.6G fd Linux raid autodetect


Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37

Device       Start        End    Sectors  Size Type
/dev/sdb1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304    2G Linux RAID
/dev/sdb5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C

Device       Start        End    Sectors  Size Type
/dev/sdc1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdc2  4982528    9176831    4194304    2G Linux RAID
/dev/sdc5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8

Device       Start        End    Sectors  Size Type
/dev/sdd1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdd2  4982528    9176831    4194304    2G Linux RAID
/dev/sdd5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA

Device       Start        End    Sectors  Size Type
/dev/sde1     2048    4982527    4980480  2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304    2G Linux RAID
/dev/sde5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87

Device       Start        End    Sectors  Size Type
/dev/sdf1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304    2G Linux RAID
/dev/sdf5  9453280 5860326239 5850872960  2.7T Linux RAID


GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite).
Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F

Device         Start    End Sectors Size Type
/dev/synoboot1  2048  32767   30720  15M EFI System
/dev/synoboot2 32768  94207   61440  30M Linux filesystem
/dev/synoboot3 94208 102366    8159   4M BIOS boot


Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507

Device       Start        End    Sectors  Size Type
/dev/sdk1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdk2  4982528    9176831    4194304    2G Linux RAID
/dev/sdk5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F

Device          Start         End    Sectors  Size Type
/dev/sdl1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdl2     4982528     9176831    4194304    2G Linux RAID
/dev/sdl5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdl6  5860342336 11720838239 5860495904  2.7T Linux RAID


Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C

Device          Start         End    Sectors  Size Type
/dev/sdm1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdm2     4982528     9176831    4194304    2G Linux RAID
/dev/sdm5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdm6  5860342336 11720838239 5860495904  2.7T Linux RAID


Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983

Device           Start         End    Sectors  Size Type
/dev/sdn1         2048     4982527    4980480  2.4G Linux RAID
/dev/sdn2      4982528     9176831    4194304    2G Linux RAID
/dev/sdn5      9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdn6   5860342336 11720838239 5860495904  2.7T Linux RAID
/dev/sdn7  11720854336 19532653311 7811798976  3.7T Linux RAID


Disk /dev/sdo: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00

Device          Start         End    Sectors  Size Type
/dev/sdo1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdo2     4982528     9176831    4194304    2G Linux RAID
/dev/sdo5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdo6  5860342336 11720838239 5860495904  2.7T Linux RAID


Disk /dev/sdp: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071

Device       Start        End    Sectors  Size Type
/dev/sdp1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdp2  4982528    9176831    4194304    2G Linux RAID
/dev/sdp5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A

Device       Start        End    Sectors  Size Type
/dev/sdq1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdq2  4982528    9176831    4194304    2G Linux RAID
/dev/sdq5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdr: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device           Start         End    Sectors  Size Type
/dev/sdr1         2048     4982527    4980480  2.4G Linux RAID
/dev/sdr2      4982528     9176831    4194304    2G Linux RAID
/dev/sdr5      9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdr6   5860342336 11720838239 5860495904  2.7T Linux RAID
/dev/sdr7  11720854336 19532653311 7811798976  3.7T Linux RAID


Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md5: 3.7 TiB, 3999639994368 bytes, 7811796864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md4: 10.9 TiB, 12002291351552 bytes, 23441975296 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 262144 bytes


Disk /dev/md2: 32.7 TiB, 35947750883328 bytes, 70210450944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 786432 bytes

 

Edited by C-Fu

OK, I just remembered: I replaced an old 3TB drive because it was showing increasing bad sectors. I replaced the drive, resynced the array, and everything went well for a few months at least.

 

I believe the old data is still there. Sorry about that, it's 6am right now and I only just remembered this :D but nothing went wrong during the few months after the replacement.

 

Should I plug it in and do mdstat?


On 1/13/2020 at 8:36 PM, C-Fu said:

Story begins with this:

13 drives in total, including one cache SSD.

 

Ok, the above from your very first post is what threw me.  We are where we ought to be.

 

Now, let's add your 10TB drive back in:

 

# mdadm --manage /dev/md2 --add /dev/sdr5

# mdadm --manage /dev/md4 --add /dev/sdr6

# mdadm --manage /dev/md5 --add /dev/sdr7

# cat /proc/mdstat
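
To keep an eye on progress without retyping that last command, something like watch should work (assuming the watch utility is present on this build; if not, just re-run cat /proc/mdstat periodically):

# watch -n 60 cat /proc/mdstat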


1 minute ago, flyride said:

Ok, the above from your very first post is what threw me.  We are where we ought to be.

 

Aaah, I see now what you mean. My mistake, sorry! :D

 

root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdr5[14] sdb5[0] sdp5[10] sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]
      [>....................]  recovery =  0.0% (2322676/2925435456) finish=739.2min speed=65905K/sec

md4 : active raid5 sdr6[5] sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
        resync=DELAYED

md5 : active raid1 sdr7[2] sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
        resync=DELAYED

md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>

md2 looks... promising, am I right? 


2 minutes ago, flyride said:

Things are promising, yes.  All three arrays are set to repair.  Don't do anything from Storage Manager.  You can monitor progress with cat /proc/mdstat.

 

Post another mdstat when it's all done.  It might be a while.

 

I will! Many thanks for spending your time on a stranger!!! I can go to sleep now thanks to you! :D 


hi,

 

good read, especially

mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'

was interesting

 

in post #11 there are 14 drives, and in the /dev/sd* list there are only /dev/sda and /dev/sda1, and the first drive in the picture is the ssd

how do you find out which files are broken?

 


 

2 hours ago, IG-88 said:

in post #11 there are 14 drives, and in the /dev/sd* list there are only /dev/sda and /dev/sda1, and the first drive in the picture is the ssd

how do you find out which files are broken?

 

All the devices got remapped three times: first with the Ubuntu mount, then with the migration install, and then with the change to the bitmask in synoinfo.conf.  mdraid tried to initialize the broken array three different ways and I don't know how, or if, that affects the metadata.  I also wasn't sure how the drive deactivation would affect the metadata.

 

Once all drives were online, mdstat showed degraded but clean /dev/md4 and /dev/md5, so I focused on /dev/md2 only.  SHR always has an array built from the smallest partition on every drive, and here that was /dev/md2, so it made inventory simple (just look for any missing device, cross-reference the device list and fdisk, try to make sense of mdadm --detail, and don't count the SSD cache).  But I got confused about the total number of disks, so even after forcing sdq into the array, the array still would not start.  That made me worry about crosslinked disk device pointers, though I'm not sure that could even happen with all the device thrashing.
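
For anyone following along, the SHR layout on this box, pieced together from the fdisk and mdstat output above, is roughly:

sdX1 / sdX2  (small partitions, every drive)          -> /dev/md0 (DSM system) / /dev/md1 (swap), RAID1
sdX5  (2.7T slice, all 13 data drives)                -> /dev/md2, RAID5
sdX6  (2.7T slice, the three 6TB and two 10TB drives) -> /dev/md4, RAID5
sdX7  (3.7T slice, the two 10TB drives)               -> /dev/md5, RAID1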

 

But finally I drew up the UUID list to be sure, and it also makes it easy to map each device to its array:

(screenshot: table mapping each device UUID to its array)
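
A rough way to pull that same information straight off the superblocks, using nothing beyond the mdadm --examine already shown in this thread:

# mdadm --examine /dev/sd[bcdefklmnopqr]5 /dev/sd[lmnor]6 /dev/sd[nr]7 | egrep '/dev/sd|Array UUID|Device Role'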

 

Now everything matches up, which gives confidence that we had the right inventory to start /dev/md2 in degraded mode.

 

There might be data loss from /dev/sdq5, but we have to accept that in order to do anything at all, so this is still the best outcome.  If btrfs is in use, its checksums will probably flag any affected files as corrupted and unrecoverable.  If the filesystem is ext4, then affected files will just contain garbage data with no error report.
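
If the volume does come back as btrfs, a scrub is the simplest way to get that report once everything is healthy again. A minimal sketch, assuming the volume ends up mounted at /volume1:

# btrfs scrub start /volume1
# btrfs scrub status /volume1
# dmesg | egrep -i 'csum|checksum error'

The checksum failures land in the kernel log; depending on kernel version they reference the inode and sometimes the path of each affected file.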

 

I don't know if I answered your question, but this was how I was trying to figure out the missing devices.

 

Also, we still aren't done until LVM does its thing, but better to have our redundancy back first.

Edited by flyride

38 minutes ago, flyride said:

But finally I drew up the UUID list to be sure, and it also makes it easy to map each device to its array:

i did draw up the same thing in parallel while you were working on it, but i forgot to look at the uuids

 

41 minutes ago, flyride said:

I don't know if I answered your question, but this was how I was trying to figure out the missing devices.

yes, thanks. i did not read further into it, but i guess the 100 difference in the sequence number of sdq5 might be 64k each. i was wondering if there is anything else besides btrfs checksums that could be done; at least with that it should be possible to see which files have taken some damage, and most videos will still work after some loss

 

i did not get the part where the 10TB disk was removed ("I ... clicked deactivate with the 9.10/10TB drive"). where can you remove a redundancy disk and disable the raid?

 

57 minutes ago, flyride said:

Also, we still aren't done until LVM does its thing,

 

i'm looking forward to it, kind of risk-free as a spectator (i have only recovered lvm once)


5 hours ago, IG-88 said:

yes, thanks. i did not read further into it, but i guess the 100 difference in the sequence number of sdq5 might be 64k each. i was wondering if there is anything else besides btrfs checksums that could be done; at least with that it should be possible to see which files have taken some damage, and most videos will still work after some loss

 

Minus 1/13 average parity chunks, so 93 x 64K perhaps.  But odds are also that the same chunks were rewritten or retried during the failure several times, so probably less.
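
Written out, that back-of-the-envelope figure (which rests entirely on the one-chunk-per-event guess above) is just:

100 events x 12/13 (data share in a 13-disk RAID5) ≈ 93 chunks
93 chunks x 64 KiB ≈ 5.8 MiB of affected data, worst case, before allowing for rewrites and retries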

 

That is the dirty secret of most filesystems: corruption may happen and nobody will ever know.  Syno's btrfs implementation is pretty darn good in that way.  You might find this an interesting article in that regard. It will make you a btrfs believer if you weren't already.

 

5 hours ago, IG-88 said:

i did not get the part where the 10TB disk was removed ("I ... clicked deactivate with the 9.10/10TB drive"). where can you remove a redundancy disk and disable the raid?

 

In Storage Manager under the HDD/SSD panel, there is a neat Action drop-down to deactivate a drive.  I have never bothered to try it, but I guess it works.  If I wanted to fail out a drive I always just ran mdadm /dev/md -f /dev/sd
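
For reference, the long form of that is along these lines (device names here are placeholders, not from this system):

# mdadm --manage /dev/mdX --fail /dev/sdXN
# mdadm --manage /dev/mdX --remove /dev/sdXN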

Edited by flyride

28 minutes ago, flyride said:

In Storage Manager under the HDD/SSD panel, there is a neat Action drop-down to deactivate a drive.  I have never bothered to try it, but I guess it works.  If I wanted to fail out a drive I always just ran mdadm /dev/md -f /dev/sd

 

I foolishly thought that if I deactivated a drive, unplugged it, and replugged it (the bays are hot-swap capable), DSM would recognize it, reactivate the 10TB, and rebuild the array. That's obviously not the case.

 

Current status: md2 has finished its resync.

md2 : active raid5 sdb5[0] sdp5[10](E) sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UE]

this means that sdp5 has a system partition error, and one sd?5 partition is missing, right?

md5 is 95% complete.

 

6 hours ago, flyride said:

All the devices got remapped three times: first with the Ubuntu mount, then with the migration install, and then with the change to the bitmask in synoinfo.conf.  mdraid tried to initialize the broken array three different ways and I don't know how, or if, that affects the metadata.  I also wasn't sure how the drive deactivation would affect the metadata.

 

 

Wow, I didn't know just booting into Ubuntu had such a large effect (the drives got auto-remapped). Or did they get remapped because of a command I ran, like mdadm --assemble --scan?

 

28 minutes ago, flyride said:

You might find this an interesting article in that regard. It will make you a btrfs believer if you weren't already.

 

That's.... awesome. I never would've come across that article on my own.

Edited by C-Fu

/dev/md2 has a drive in an error state, so do an mdadm --detail for each array when it's done, along with /proc/mdstat.  We aren't out of the woods yet.

 

7 minutes ago, C-Fu said:

Wow, I didn't know just booting into Ubuntu had such a large effect (the drives got auto-remapped). Or did they get remapped because of a command I ran, like mdadm --assemble --scan?

 

If everything is healthy, it probably doesn't matter.  The point of booting into Ubuntu is for when there is a boot problem on DSM - you can get to the files to fix whatever's wrong with DSM.  I haven't ever heard of someone transporting a broken array over to Ubuntu for the purpose of fixing it.  Not sure how that would be better than working with the same MDRAID in DSM.

Edited by flyride

12 minutes ago, C-Fu said:

this means that sdp5 has a system partition error, and one sd?5 partition is missing, right?

 

System Partition error just means there are missing members of /dev/md0 (root) or /dev/md1 (swap).  Those are RAID1 arrays and you have lots of copies of them, which is why we don't really care too much about them right now.  I'm not sure what the problem is with sdp5 yet, and sdr5 seems to be missing; I'll look at both when the resync is complete.
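
If any of those don't heal on their own once everything else settles, putting a missing member back is the same kind of mdadm add we used above, just against the small partitions (sketch only, with a placeholder drive letter):

# mdadm --manage /dev/md0 --add /dev/sdX1
# mdadm --manage /dev/md1 --add /dev/sdX2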


Well, a power trip just happened, and obviously I freaked out lol. Hopefully nothing damaging happened.

 

But a drive in md2 doesn't show [E] anymore.

 

6 hours ago, flyride said:

 

System Partition error just means there are missing members of /dev/md0 (root) or /dev/md1 (swap).  Those are RAID1 arrays and you have lots of copies of them, which is why we don't really care too much about them right now.  I'm not sure what the problem is with sdp5 yet, and sdr5 seems to be missing; I'll look at both when the resync is complete.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10] sdo5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]

md4 : active raid5 sdl6[0] sdo6[3] sdr6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md5 : active raid1 sdo7[0] sdr7[2]
      3905898432 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[8] sdp2[10] sdq2[11] sdr2[12]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdo1[0] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>

yup, sdr5 is still missing. :( 


9 minutes ago, flyride said:

mdadm --detail /dev/md2

# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 13
  Total Devices : 12
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 17:14:06 2020
          State : clean, degraded
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : homelab:2  (local to host homelab)
           UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
         Events : 370940

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       3       8       69        3      active sync   /dev/sde5
       4       8       85        4      active sync   /dev/sdf5
       5       8      165        5      active sync   /dev/sdk5
      13      65        5        6      active sync   /dev/sdq5
       7       8      181        7      active sync   /dev/sdl5
       8       8      197        8      active sync   /dev/sdm5
       9       8      213        9      active sync   /dev/sdn5
       -       0        0       10      removed
      11       8      229       11      active sync   /dev/sdo5
      10       8      245       12      active sync   /dev/sdp5

 

 

10 minutes ago, flyride said:

mdadm --examine /dev/sd[bcdefklmnpoqr]5

 

# mdadm --examine /dev/sd[bcdefklmnpoqr]5
/dev/sdb5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : a8109f74:46bc8509:6fc3bca8:9fddb6a7

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : b3409c8 - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 8dfdc601:e01f8a98:9a8e78f1:a7951260

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : 2877dd7b - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : f98bc050:a4b46deb:c3168fa0:08d90061

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : 7a59cc36 - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 1e2742b7:d1847218:816c7135:cdf30c07

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : 48cea80b - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : ce60c47e:14994160:da4d1482:fd7901f2

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : 8fb75d89 - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdk5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 706c5124:d647d300:733fb961:e5cd8127

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : eafb5d6d - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdl5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 6993b9eb:8ad7c80f:dc17268f:a8efa73d

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : 4ec0e4a - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 7
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdm5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 2f1247d1:a536d2ad:ba2eb47f:a7eaf237

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : 735bfe09 - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 8
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdn5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 1b4ab27d:bb7488fa:a6cc1f75:d21d1a83

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : ad104523 - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 9
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdo5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 73610f83:fb3cf895:c004147e:b4de2bfe

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : d1e31db9 - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 11
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdp5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : a64f01c2:76c56102:38ad7c4e:7bce88d1

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : 21042cad - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 12
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdq5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 5cc6456d:bfc950bf:1baf6fef:aabec947

    Update Time : Fri Jan 17 17:14:06 2020
       Checksum : a06b99b9 - correct
         Events : 370940

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 6
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdr5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
Recovery Offset : 79355560 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : bfc0d160:b7147b64:ca088295:9c6ab3b2

    Update Time : Fri Jan 17 07:11:38 2020
       Checksum : 4ed7b628 - correct
         Events : 370924

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 10
   Array State : AAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
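
So sdr5 is the odd one out: its superblock is stuck at Events 370924 with a Recovery Offset, while every other member is at 370940 and clean, which looks like the reason md keeps kicking it as non-fresh. A quick way to eyeball just that difference:

# mdadm --examine /dev/sdr5 | egrep 'Events|Recovery Offset|Array State'
# mdadm --detail /dev/md2 | grep Events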

 

 

11 minutes ago, flyride said:

dmesg | fgrep "md:"

# dmesg | fgrep "md:"
[    0.895403] md: linear personality registered for level -1
[    0.895405] md: raid0 personality registered for level 0
[    0.895406] md: raid1 personality registered for level 1
[    0.895407] md: raid10 personality registered for level 10
[    0.895776] md: raid6 personality registered for level 6
[    0.895778] md: raid5 personality registered for level 5
[    0.895779] md: raid4 personality registered for level 4
[    0.895780] md: raidF1 personality registered for level 45
[    9.002596] md: Autodetecting RAID arrays.
[    9.007616] md: invalid raid superblock magic on sda1
[    9.012674] md: sda1 does not have a valid v0.90 superblock, not importing!
[    9.075202] md: invalid raid superblock magic on sdb5
[    9.080258] md: sdb5 does not have a valid v0.90 superblock, not importing!
[    9.131736] md: invalid raid superblock magic on sdc5
[    9.136787] md: sdc5 does not have a valid v0.90 superblock, not importing!
[    9.184688] md: invalid raid superblock magic on sdd5
[    9.189741] md: sdd5 does not have a valid v0.90 superblock, not importing!
[    9.254542] md: invalid raid superblock magic on sde5
[    9.259597] md: sde5 does not have a valid v0.90 superblock, not importing!
[    9.310317] md: invalid raid superblock magic on sdf5
[    9.315372] md: sdf5 does not have a valid v0.90 superblock, not importing!
[    9.370415] md: invalid raid superblock magic on sdk5
[    9.375468] md: sdk5 does not have a valid v0.90 superblock, not importing!
[    9.423869] md: invalid raid superblock magic on sdl5
[    9.428919] md: sdl5 does not have a valid v0.90 superblock, not importing!
[    9.468250] md: invalid raid superblock magic on sdl6
[    9.473300] md: sdl6 does not have a valid v0.90 superblock, not importing!
[    9.519960] md: invalid raid superblock magic on sdm5
[    9.525015] md: sdm5 does not have a valid v0.90 superblock, not importing!
[    9.556049] md: invalid raid superblock magic on sdm6
[    9.561101] md: sdm6 does not have a valid v0.90 superblock, not importing!
[    9.614718] md: invalid raid superblock magic on sdn5
[    9.619773] md: sdn5 does not have a valid v0.90 superblock, not importing!
[    9.642163] md: invalid raid superblock magic on sdn6
[    9.647220] md: sdn6 does not have a valid v0.90 superblock, not importing!
[    9.689354] md: invalid raid superblock magic on sdo5
[    9.694404] md: sdo5 does not have a valid v0.90 superblock, not importing!
[    9.711917] md: invalid raid superblock magic on sdo6
[    9.716972] md: sdo6 does not have a valid v0.90 superblock, not importing!
[    9.731387] md: invalid raid superblock magic on sdo7
[    9.736444] md: sdo7 does not have a valid v0.90 superblock, not importing!
[    9.793088] md: invalid raid superblock magic on sdp5
[    9.798143] md: sdp5 does not have a valid v0.90 superblock, not importing!
[    9.845631] md: invalid raid superblock magic on sdq5
[    9.850684] md: sdq5 does not have a valid v0.90 superblock, not importing!
[    9.895380] md: invalid raid superblock magic on sdr5
[    9.900435] md: sdr5 does not have a valid v0.90 superblock, not importing!
[    9.914093] md: invalid raid superblock magic on sdr6
[    9.919143] md: sdr6 does not have a valid v0.90 superblock, not importing!
[    9.938110] md: invalid raid superblock magic on sdr7
[    9.943161] md: sdr7 does not have a valid v0.90 superblock, not importing!
[    9.943162] md: Scanned 47 and added 26 devices.
[    9.943163] md: autorun ...
[    9.943163] md: considering sdb1 ...
[    9.943165] md:  adding sdb1 ...
[    9.943166] md: sdb2 has different UUID to sdb1
[    9.943168] md:  adding sdc1 ...
[    9.943169] md: sdc2 has different UUID to sdb1
[    9.943170] md:  adding sdd1 ...
[    9.943171] md: sdd2 has different UUID to sdb1
[    9.943172] md:  adding sde1 ...
[    9.943173] md: sde2 has different UUID to sdb1
[    9.943174] md:  adding sdf1 ...
[    9.943175] md: sdf2 has different UUID to sdb1
[    9.943176] md:  adding sdk1 ...
[    9.943177] md: sdk2 has different UUID to sdb1
[    9.943178] md:  adding sdl1 ...
[    9.943179] md: sdl2 has different UUID to sdb1
[    9.943181] md:  adding sdm1 ...
[    9.943182] md: sdm2 has different UUID to sdb1
[    9.943183] md:  adding sdn1 ...
[    9.943184] md: sdn2 has different UUID to sdb1
[    9.943185] md:  adding sdo1 ...
[    9.943186] md: sdo2 has different UUID to sdb1
[    9.943187] md:  adding sdp1 ...
[    9.943188] md: sdp2 has different UUID to sdb1
[    9.943189] md:  adding sdq1 ...
[    9.943190] md: sdq2 has different UUID to sdb1
[    9.943191] md:  adding sdr1 ...
[    9.943192] md: sdr2 has different UUID to sdb1
[    9.943203] md: kicking non-fresh sdr1 from candidates rdevs!
[    9.943203] md: export_rdev(sdr1)
[    9.943205] md: kicking non-fresh sdq1 from candidates rdevs!
[    9.943205] md: export_rdev(sdq1)
[    9.943207] md: kicking non-fresh sde1 from candidates rdevs!
[    9.943207] md: export_rdev(sde1)
[    9.943208] md: created md0
[    9.943209] md: bind<sdp1>
[    9.943214] md: bind<sdo1>
[    9.943220] md: bind<sdn1>
[    9.943223] md: bind<sdm1>
[    9.943226] md: bind<sdl1>
[    9.943229] md: bind<sdk1>
[    9.943232] md: bind<sdf1>
[    9.943235] md: bind<sdd1>
[    9.943238] md: bind<sdc1>
[    9.943241] md: bind<sdb1>
[    9.943244] md: running: <sdb1><sdc1><sdd1><sdf1><sdk1><sdl1><sdm1><sdn1><sdo1><sdp1>
[    9.981355] md: considering sdb2 ...
[    9.981356] md:  adding sdb2 ...
[    9.981357] md:  adding sdc2 ...
[    9.981358] md:  adding sdd2 ...
[    9.981360] md:  adding sde2 ...
[    9.981361] md:  adding sdf2 ...
[    9.981362] md:  adding sdk2 ...
[    9.981363] md:  adding sdl2 ...
[    9.981364] md:  adding sdm2 ...
[    9.981365] md:  adding sdn2 ...
[    9.981367] md:  adding sdo2 ...
[    9.981368] md:  adding sdp2 ...
[    9.981369] md: md0: current auto_remap = 0
[    9.981369] md:  adding sdq2 ...
[    9.981370] md:  adding sdr2 ...
[    9.981372] md: resync of RAID array md0
[    9.981504] md: created md1
[    9.981505] md: bind<sdr2>
[    9.981511] md: bind<sdq2>
[    9.981515] md: bind<sdp2>
[    9.981520] md: bind<sdo2>
[    9.981525] md: bind<sdn2>
[    9.981530] md: bind<sdm2>
[    9.981535] md: bind<sdl2>
[    9.981540] md: bind<sdk2>
[    9.981544] md: bind<sdf2>
[    9.981549] md: bind<sde2>
[    9.981554] md: bind<sdd2>
[    9.981559] md: bind<sdc2>
[    9.981565] md: bind<sdb2>
[    9.981574] md: running: <sdb2><sdc2><sdd2><sde2><sdf2><sdk2><sdl2><sdm2><sdn2><sdo2><sdp2><sdq2><sdr2>
[    9.989470] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[    9.989470] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[    9.989474] md: using 128k window, over a total of 2490176k.
[   10.052110] md: ... autorun DONE.
[   10.052124] md: md1: current auto_remap = 0
[   10.052126] md: resync of RAID array md1
[   10.060221] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[   10.060222] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[   10.060224] md: using 128k window, over a total of 2097088k.
[   29.602277] md: bind<sdr7>
[   29.602651] md: bind<sdo7>
[   29.679014] md: md2 stopped.
[   29.803959] md: bind<sdc5>
[   29.804024] md: bind<sdd5>
[   29.804084] md: bind<sde5>
[   29.804145] md: bind<sdf5>
[   29.804218] md: bind<sdk5>
[   29.804286] md: bind<sdq5>
[   29.828033] md: bind<sdl5>
[   29.828589] md: bind<sdm5>
[   29.828942] md: bind<sdn5>
[   29.829120] md: bind<sdr5>
[   29.854008] md: bind<sdo5>
[   29.855384] md: bind<sdp5>
[   29.857913] md: bind<sdb5>
[   29.857922] md: kicking non-fresh sdr5 from array!
[   29.857925] md: unbind<sdr5>
[   29.865755] md: export_rdev(sdr5)
[   29.993600] md: bind<sdm6>
[   29.993748] md: bind<sdn6>
[   29.993917] md: bind<sdr6>
[   29.994084] md: bind<sdo6>
[   29.994230] md: bind<sdl6>
[   30.034680] md: md4: set sdl6 to auto_remap [1]
[   30.034681] md: md4: set sdo6 to auto_remap [1]
[   30.034681] md: md4: set sdr6 to auto_remap [1]
[   30.034682] md: md4: set sdn6 to auto_remap [1]
[   30.034682] md: md4: set sdm6 to auto_remap [1]
[   30.034684] md: delaying recovery of md4 until md1 has finished (they share one or more physical units)
[   30.222237] md: md2: set sdb5 to auto_remap [0]
[   30.222238] md: md2: set sdp5 to auto_remap [0]
[   30.222238] md: md2: set sdo5 to auto_remap [0]
[   30.222239] md: md2: set sdn5 to auto_remap [0]
[   30.222239] md: md2: set sdm5 to auto_remap [0]
[   30.222240] md: md2: set sdl5 to auto_remap [0]
[   30.222241] md: md2: set sdq5 to auto_remap [0]
[   30.222241] md: md2: set sdk5 to auto_remap [0]
[   30.222242] md: md2: set sdf5 to auto_remap [0]
[   30.222242] md: md2: set sde5 to auto_remap [0]
[   30.222243] md: md2: set sdd5 to auto_remap [0]
[   30.222244] md: md2: set sdc5 to auto_remap [0]
[   30.222244] md: md2 stopped.
[   30.222246] md: unbind<sdb5>
[   30.228152] md: export_rdev(sdb5)
[   30.228157] md: unbind<sdp5>
[   30.231173] md: export_rdev(sdp5)
[   30.231178] md: unbind<sdo5>
[   30.236190] md: export_rdev(sdo5)
[   30.236205] md: unbind<sdn5>
[   30.239180] md: export_rdev(sdn5)
[   30.239183] md: unbind<sdm5>
[   30.244169] md: export_rdev(sdm5)
[   30.244172] md: unbind<sdl5>
[   30.247189] md: export_rdev(sdl5)
[   30.247192] md: unbind<sdq5>
[   30.252207] md: export_rdev(sdq5)
[   30.252211] md: unbind<sdk5>
[   30.255196] md: export_rdev(sdk5)
[   30.255200] md: unbind<sdf5>
[   30.259408] md: export_rdev(sdf5)
[   30.259411] md: unbind<sde5>
[   30.271235] md: export_rdev(sde5)
[   30.271242] md: unbind<sdd5>
[   30.280228] md: export_rdev(sdd5)
[   30.280233] md: unbind<sdc5>
[   30.288234] md: export_rdev(sdc5)
[   30.680068] md: md2 stopped.
[   30.731994] md: bind<sdc5>
[   30.732110] md: bind<sdd5>
[   30.732258] md: bind<sde5>
[   30.732340] md: bind<sdf5>
[   30.732481] md: bind<sdk5>
[   30.732606] md: bind<sdq5>
[   30.737432] md: bind<sdl5>
[   30.748124] md: bind<sdm5>
[   30.748468] md: bind<sdn5>
[   30.748826] md: bind<sdr5>
[   30.749254] md: bind<sdo5>
[   30.763073] md: bind<sdp5>
[   30.776215] md: bind<sdb5>
[   30.776229] md: kicking non-fresh sdr5 from array!
[   30.776231] md: unbind<sdr5>
[   30.780828] md: export_rdev(sdr5)
[   60.552383] md: md1: resync done.
[   60.569174] md: md1: current auto_remap = 0
[   60.569202] md: delaying recovery of md4 until md0 has finished (they share one or more physical units)
[  109.601864] md: md0: resync done.
[  109.615133] md: md0: current auto_remap = 0
[  109.615149] md: md4: flushing inflight I/O
[  109.618280] md: recovery of RAID array md4
[  109.618282] md: minimum _guaranteed_  speed: 600000 KB/sec/disk.
[  109.618283] md: using maximum available idle IO bandwidth (but not more than 800000 KB/sec) for recovery.
[  109.618296] md: using 128k window, over a total of 2930246912k.
[  109.618297] md: resuming recovery of md4 from checkpoint.
[17557.409842] md: md4: recovery done.
[17557.601751] md: md4: set sdl6 to auto_remap [0]
[17557.601753] md: md4: set sdo6 to auto_remap [0]
[17557.601754] md: md4: set sdr6 to auto_remap [0]
[17557.601754] md: md4: set sdn6 to auto_remap [0]
[17557.601755] md: md4: set sdm6 to auto_remap [0]

 


 

2 minutes ago, flyride said:

Also, just out of curiosity, is your volume back since there was a reboot?

 

Nope, ls /volume1 is still empty. But something weird happened.

 

While running

23 minutes ago, flyride said:

# mdadm --zero-superblock /dev/sdr5

# mdadm --manage /dev/md2 --add /dev/sdr5

# cat /proc/mdstat

 

It just terminated my SSH session. Twice. And it's not resyncing anymore. Should I redo those three commands?

Current /proc/mdstat:

 

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10](E) sdo5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UE]

md4 : active raid5 sdl6[0] sdo6[3] sdr6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md5 : active raid1 sdo7[0] sdr7[2]
      3905898432 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[8] sdp2[10] sdq2[11] sdr2[12]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdo1[0] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>
# dmesg | fgrep "md:"
[...]
[24035.085886] md: bind<sdr5>
[24035.123679] md: md2: set sdr5 to auto_remap [1]
[24035.123681] md: md2: set sdb5 to auto_remap [1]
[24035.123682] md: md2: set sdp5 to auto_remap [1]
[24035.123682] md: md2: set sdo5 to auto_remap [1]
[24035.123683] md: md2: set sdn5 to auto_remap [1]
[24035.123684] md: md2: set sdm5 to auto_remap [1]
[24035.123685] md: md2: set sdl5 to auto_remap [1]
[24035.123685] md: md2: set sdq5 to auto_remap [1]
[24035.123697] md: md2: set sdk5 to auto_remap [1]
[24035.123697] md: md2: set sdf5 to auto_remap [1]
[24035.123698] md: md2: set sde5 to auto_remap [1]
[24035.123698] md: md2: set sdd5 to auto_remap [1]
[24035.123699] md: md2: set sdc5 to auto_remap [1]
[24035.123700] md: md2: flushing inflight I/O
[24035.154625] md: recovery of RAID array md2
[24035.154628] md: minimum _guaranteed_  speed: 600000 KB/sec/disk.
[24035.154629] md: using maximum available idle IO bandwidth (but not more than 800000 KB/sec) for recovery.
[24035.154646] md: using 128k window, over a total of 2925435456k.
[24523.174858] md: md2: recovery stop due to MD_RECOVERY_INTR set.
[24544.258476] md: md2: set sdr5 to auto_remap [0]
[24544.258488] md: md2: set sdb5 to auto_remap [0]
[24544.258489] md: md2: set sdp5 to auto_remap [0]
[24544.258489] md: md2: set sdo5 to auto_remap [0]
[24544.258490] md: md2: set sdn5 to auto_remap [0]
[24544.258491] md: md2: set sdm5 to auto_remap [0]
[24544.258491] md: md2: set sdl5 to auto_remap [0]
[24544.258492] md: md2: set sdq5 to auto_remap [0]
[24544.258493] md: md2: set sdk5 to auto_remap [0]
[24544.258494] md: md2: set sdf5 to auto_remap [0]
[24544.258494] md: md2: set sde5 to auto_remap [0]
[24544.258495] md: md2: set sdd5 to auto_remap [0]
[24544.258496] md: md2: set sdc5 to auto_remap [0]
[24545.106266] md: unbind<sdr5>
[24545.117398] md: export_rdev(sdr5)

 


Do you have another SATA port you can plug /dev/sdp into? Maybe take out your SSD and use its port? At the very least, change the SATA cable. We are encountering a hardware failure of some sort.

 

We need to get the array resynced because you still have two drives affected; otherwise we will probably lose data. But it would be nice if the array stopped self-immolating.

 

If you can do this, boot and post a new mdstat.
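
For reference, a minimal sketch of read-only checks to run after the reboot (assuming mdadm and smartctl are available in this DSM shell; none of these commands write to the array):

cat /proc/mdstat                      # overall array state after the port/cable change
mdadm --detail /dev/md2               # which member slot is missing and whether a rebuild is running
smartctl -a /dev/sdp                  # SMART health and error log for the suspect drive
dmesg | tail -n 100 | grep -i sdp     # any fresh link resets or I/O errors since boot

Posting the output of the first two is usually enough to see whether the array came back in the same degraded state or lost another member.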


 

10 minutes ago, flyride said:

Do you have another SATA port you can plug /dev/sdp into? Maybe take out your SSD and use its port? At the very least, change the SATA cable. We are encountering a hardware failure of some sort.

 

 

I'll do that and reboot.

The last few log lines from dmesg:

[24543.896520] sd 10:0:5:0: [sdp] CDB:
[24543.896521] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8c 90 00 00 00 08 00 00
[24543.896530] sd 10:0:5:0: [sdp] Unhandled error code
[24543.896530] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5
[24543.896531] sd 10:0:5:0: [sdp]
[24543.896532] Result: hostbyte=0x0b driverbyte=0x00
[24543.896532] sd 10:0:5:0: [sdp] CDB:
[24543.896533] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8e 10 00 00 00 08 00 00
[24543.896541] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5
[24543.896541] sd 10:0:5:0: [sdp] Unhandled error code
[24543.896542] sd 10:0:5:0: [sdp]
[24543.896542] Result: hostbyte=0x0b driverbyte=0x00
[24543.896543] sd 10:0:5:0: [sdp] CDB:
[24543.896543] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8c a0 00 00 00 08 00 00
[24543.896551] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5
[24543.896551] sd 10:0:5:0: [sdp]
[24543.896552] Result: hostbyte=0x00 driverbyte=0x08
[24543.896553] sd 10:0:5:0: [sdp]
[24543.896554] Sense Key : 0xb [current]
[24543.896555] sd 10:0:5:0: [sdp]
[24543.896555] ASC=0x0 ASCQ=0x0
[24543.896556] sd 10:0:5:0: [sdp] CDB:
[24543.896556] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8d e8 00 00 00 08 00 00
[24543.896564] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5
[24544.104089] SynoCheckRdevIsWorking (10283): remove active disk sdr5 from md2 raid_disks 13 mddev->degraded 1 mddev->level 5
[24544.104092] syno_hot_remove_disk (10183): cannot remove active disk sdr5 from md2 ... rdev->raid_disk 10 pending 0
[24544.258476] md: md2: set sdr5 to auto_remap [0]
[24544.258488] md: md2: set sdb5 to auto_remap [0]
[24544.258489] md: md2: set sdp5 to auto_remap [0]
[24544.258489] md: md2: set sdo5 to auto_remap [0]
[24544.258490] md: md2: set sdn5 to auto_remap [0]
[24544.258491] md: md2: set sdm5 to auto_remap [0]
[24544.258491] md: md2: set sdl5 to auto_remap [0]
[24544.258492] md: md2: set sdq5 to auto_remap [0]
[24544.258493] md: md2: set sdk5 to auto_remap [0]
[24544.258494] md: md2: set sdf5 to auto_remap [0]
[24544.258494] md: md2: set sde5 to auto_remap [0]
[24544.258495] md: md2: set sdd5 to auto_remap [0]
[24544.258496] md: md2: set sdc5 to auto_remap [0]
[24544.414606] RAID conf printout:
[24544.414609]  --- level:5 rd:13 wd:12
[24544.414610]  disk 0, o:1, dev:sdb5
[24544.414611]  disk 1, o:1, dev:sdc5
[24544.414612]  disk 2, o:1, dev:sdd5
[24544.414612]  disk 3, o:1, dev:sde5
[24544.414613]  disk 4, o:1, dev:sdf5
[24544.414632]  disk 5, o:1, dev:sdk5
[24544.414633]  disk 6, o:1, dev:sdq5
[24544.414634]  disk 7, o:1, dev:sdl5
[24544.414634]  disk 8, o:1, dev:sdm5
[24544.414635]  disk 9, o:1, dev:sdn5
[24544.414636]  disk 10, o:0, dev:sdr5
[24544.414636]  disk 11, o:1, dev:sdo5
[24544.414637]  disk 12, o:1, dev:sdp5
[24544.422985] RAID conf printout:
[24544.422987]  --- level:5 rd:13 wd:12
[24544.422988]  disk 0, o:1, dev:sdb5
[24544.422989]  disk 1, o:1, dev:sdc5
[24544.422990]  disk 2, o:1, dev:sdd5
[24544.422991]  disk 3, o:1, dev:sde5
[24544.422992]  disk 4, o:1, dev:sdf5
[24544.422992]  disk 5, o:1, dev:sdk5
[24544.422993]  disk 6, o:1, dev:sdq5
[24544.422994]  disk 7, o:1, dev:sdl5
[24544.422994]  disk 8, o:1, dev:sdm5
[24544.422995]  disk 9, o:1, dev:sdn5
[24544.422996]  disk 11, o:1, dev:sdo5
[24544.422997]  disk 12, o:1, dev:sdp5
[24545.106250] SynoCheckRdevIsWorking (10283): remove active disk sdr5 from md2 raid_disks 13 mddev->degraded 1 mddev->level 5
[24545.106266] md: unbind<sdr5>
[24545.117398] md: export_rdev(sdr5)
[24545.153580] init: synowsdiscoveryd main process (17065) killed by TERM signal
[24545.439468] init: ddnsd main process (12284) terminated with status 1
[24546.186194] init: smbd main process (17423) killed by TERM signal
[24546.729176] nfsd: last server has exited, flushing export cache
[24548.636153] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[24548.656468] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[24548.656489] NFSD: starting 90-second grace period (net ffffffff81854f80)
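
For what it's worth, if I'm reading the SCSI result codes correctly, hostbyte=0x0b is DID_SOFT_ERROR and a sense key of 0xb is ABORTED COMMAND, i.e. transport-level failures rather than media errors, which would fit the bad-cable/port theory. A rough way to cross-check from the shell (assuming smartctl is present; the -d sat passthrough flag may be needed while the drive sits behind the LSI HBA):

smartctl -x /dev/sdp                     # full SMART dump; the SATA Phy Event Counters near the end show CRC/link errors
smartctl -x -d sat /dev/sdp              # same, with SAT passthrough if the plain call fails behind the HBA
dmesg | grep -c 'disk error on sdp5'     # count of md-level errors logged against sdp5 so far

Rising UDMA CRC error counts with clean reallocated/pending sector counts would point at the cable or port rather than the drive itself.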

 


45 minutes ago, flyride said:

If you can do this, boot and post a new mdstat.

 

I moved the WD Purple drive from my LSI SAS card to a SATA port multiplier card.

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdg5[10] sdp5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]

md4 : active raid5 sdl6[0] sdp6[3] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md5 : active raid1 sdp7[0] sdo7[2]
      3905898432 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[10] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[12] sdp2[8] sdq2[11]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdp1[0]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>
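
A couple more read-only checks could help pin down the remaining degraded member of md2 before the next resync attempt. I'm guessing the relocated drive now enumerates as /dev/sdg after the port change; that letter is an assumption, so substitute whatever device name the moved drive actually gets:

mdadm --detail /dev/md2                                                # shows which raid slot in the 13-disk set is marked 'removed'
mdadm --examine /dev/sdg5 | grep -E 'Events|Array State|Device Role'   # event count and role recorded on the moved drive
mdadm --examine /dev/sdb5 | grep -E 'Events|Array State'               # reference event count from a known-good member

Comparing the event counts shows how far the dropped member has fallen behind the rest of the array, which is useful to know before anyone decides how to re-add it.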

 

Edited by C-Fu