flyride, Posted January 17, 2020 (#51)

# mdadm --examine /dev/sd[gp]5

Let's confirm the drives look the same despite remapping, then we'll try and resync again.
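When comparing `mdadm --examine` output across members like this, the fields that must agree are the Array UUID (same array) and the Events counter (equally up to date). A minimal sketch of that check, with the values pasted in by hand from the output below; on the live system you would capture them with something like `mdadm --examine /dev/sdg5 | awk -F': *' '/Array UUID|Events/ {print $2}'`:

```shell
# Sketch only: verify two member superblocks belong to the same array
# and have seen the same number of events. Values are hard-coded samples
# taken from the --examine output in this thread.
sdg5_uuid="43699871:217306be:dc16f5e8:dcbe1b0d"; sdg5_events=370955
sdp5_uuid="43699871:217306be:dc16f5e8:dcbe1b0d"; sdp5_events=370955

if [ "$sdg5_uuid" = "$sdp5_uuid" ] && [ "$sdg5_events" -eq "$sdp5_events" ]; then
    echo "superblocks agree"
else
    echo "MISMATCH - do not resync yet"
fi
```

If the Events counters differ, the member with the lower count is stale and adding it back without thought can lose data; that is why the check comes before the resync attempt.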
C-Fu (Author), Posted January 17, 2020 (#52, edited)

7 minutes ago, flyride said:
    # mdadm --examine /dev/sd[gp]5
    Let's confirm the drives look the same despite remapping, then we'll try and resync again.

Seems like it's not sdp anymore.

mdadm --examine /dev/sd[gp]5
/dev/sdg5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : a64f01c2:76c56102:38ad7c4e:7bce88d1

    Update Time : Sat Jan 18 01:03:16 2020
       Checksum : 21049ab2 - correct
         Events : 370955

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 12
    Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdp5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 73610f83:fb3cf895:c004147e:b4de2bfe

    Update Time : Sat Jan 18 01:03:16 2020
       Checksum : d1e38bbe - correct
         Events : 370955

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 11
    Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)

hdparm -I /dev/sdg
/dev/sdg:

ATA device, with non-removable media
        Model Number:       WDC WD30PURX-64P6ZY0
        Serial Number:      WD-WMC4N0H64Z0C
        Firmware Revision:  80.00A80
        Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
        Supported: 9 8 7 6 5
        Likely used: 9
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:    16514064
        LBA    user addressable sectors:   268435455
        LBA48  user addressable sectors:  5860533168
        Logical  Sector size:                   512 bytes
        Physical Sector size:                  4096 bytes
        Logical Sector-0 offset:                  0 bytes
        device size with M = 1024*1024:     2861588 MBytes
        device size with M = 1000*1000:     3000592 MBytes (3000 GB)
        cache/buffer size  = unknown
        Nominal Media Rotation Rate: 5400

Edit: OK, it just closed my SSH connection again. Last few dmesg log lines:

[ 1274.095438]  disk 0, wo:1, o:0, dev:sdp1
[ 1274.095439]  disk 1, wo:0, o:1, dev:sdb1
[ 1274.095440]  disk 2, wo:0, o:1, dev:sdc1
[ 1274.095440]  disk 3, wo:0, o:1, dev:sdd1
[ 1274.095441]  disk 4, wo:0, o:1, dev:sdg1
[ 1274.095441]  disk 5, wo:0, o:1, dev:sdf1
[ 1274.095442]  disk 6, wo:0, o:1, dev:sdk1
[ 1274.095443]  disk 7, wo:0, o:1, dev:sdl1
[ 1274.095443]  disk 8, wo:0, o:1, dev:sdm1
[ 1274.095444]  disk 10, wo:0, o:1, dev:sdn1
[ 1274.099973] syno_hot_remove_disk (10183): cannot remove active disk sdp7 from md5 ... rdev->raid_disk 0 pending 0
[ 1274.100009] SynoCheckRdevIsWorking (10283): remove active disk sdp2 from md1 raid_disks 24 mddev->degraded 11 mddev->level 1
[ 1274.100011] raid1: Disk failure on sdp2, disabling device. Operation continuing on 12 devices
[ 1274.110125] syno_hot_remove_disk (10183): cannot remove active disk sdp2 from md1 ... rdev->raid_disk 8 pending 0
[ 1274.110998] RAID1 conf printout:
[ 1274.110999]  --- wd:9 rd:12
[ 1274.111000]  disk 1, wo:0, o:1, dev:sdb1
[ 1274.111001]  disk 2, wo:0, o:1, dev:sdc1
[ 1274.111001]  disk 3, wo:0, o:1, dev:sdd1
[ 1274.111002]  disk 4, wo:0, o:1, dev:sdg1
[ 1274.111002]  disk 5, wo:0, o:1, dev:sdf1
[ 1274.111003]  disk 6, wo:0, o:1, dev:sdk1
[ 1274.111004]  disk 7, wo:0, o:1, dev:sdl1
[ 1274.111004]  disk 8, wo:0, o:1, dev:sdm1
[ 1274.111005]  disk 10, wo:0, o:1, dev:sdn1
[ 1274.151975] RAID1 conf printout:
[ 1274.151976]  --- wd:1 rd:2
[ 1274.151977]  disk 0, wo:1, o:0, dev:sdp7
[ 1274.151978]  disk 1, wo:0, o:1, dev:sdo7
[ 1274.159045] RAID1 conf printout:
[ 1274.159046]  --- wd:1 rd:2
[ 1274.159047]  disk 1, wo:0, o:1, dev:sdo7
[ 1274.171175] RAID conf printout:
[ 1274.171176]  --- level:5 rd:5 wd:4
[ 1274.171177]  disk 0, o:1, dev:sdl6
[ 1274.171177]  disk 1, o:1, dev:sdm6
[ 1274.171178]  disk 2, o:1, dev:sdn6
[ 1274.171178]  disk 3, o:1, dev:sdo6
[ 1274.171179]  disk 4, o:0, dev:sdp6
[ 1274.179062] SynoCheckRdevIsWorking (10283): remove active disk sdp1 from md0 raid_disks 12 mddev->degraded 3 mddev->level 1
[ 1274.179078] RAID conf printout:
[ 1274.179080] md: unbind<sdp1>
[ 1274.179095]  --- level:5 rd:5 wd:4
[ 1274.179095]  disk 0, o:1, dev:sdl6
[ 1274.179096]  disk 1, o:1, dev:sdm6
[ 1274.179097]  disk 2, o:1, dev:sdn6
[ 1274.179097]  disk 3, o:1, dev:sdo6
[ 1274.182067] md: export_rdev(sdp1)
[ 1274.205352] RAID conf printout:
[ 1274.205353]  --- level:5 rd:13 wd:11
[ 1274.205354]  disk 0, o:1, dev:sdb5
[ 1274.205355]  disk 1, o:1, dev:sdc5
[ 1274.205355]  disk 2, o:1, dev:sdd5
[ 1274.205356]  disk 3, o:1, dev:sde5
[ 1274.205356]  disk 4, o:1, dev:sdf5
[ 1274.205357]  disk 5, o:1, dev:sdk5
[ 1274.205358]  disk 6, o:1, dev:sdq5
[ 1274.205358]  disk 7, o:1, dev:sdl5
[ 1274.205359]  disk 8, o:1, dev:sdm5
[ 1274.205359]  disk 9, o:1, dev:sdn5
[ 1274.205360]  disk 11, o:0, dev:sdp5
[ 1274.205360]  disk 12, o:1, dev:sdg5
[ 1274.208680] RAID1 conf printout:
[ 1274.208680]  --- wd:12 rd:24
[ 1274.208681]  disk 0, wo:0, o:1, dev:sdb2
[ 1274.208682]  disk 1, wo:0, o:1, dev:sdc2
[ 1274.208682]  disk 2, wo:0, o:1, dev:sdd2
[ 1274.208683]  disk 3, wo:0, o:1, dev:sde2
[ 1274.208683]  disk 4, wo:0, o:1, dev:sdf2
[ 1274.208684]  disk 5, wo:0, o:1, dev:sdk2
[ 1274.208685]  disk 6, wo:0, o:1, dev:sdl2
[ 1274.208685]  disk 7, wo:0, o:1, dev:sdm2
[ 1274.208686]  disk 8, wo:1, o:0, dev:sdp2
[ 1274.208686]  disk 9, wo:0, o:1, dev:sdn2
[ 1274.208687]  disk 10, wo:0, o:1, dev:sdg2
[ 1274.208687]  disk 11, wo:0, o:1, dev:sdq2
[ 1274.208688]  disk 12, wo:0, o:1, dev:sdo2
[ 1274.215123] RAID conf printout:
[ 1274.215124]  --- level:5 rd:13 wd:11
[ 1274.215125]  disk 0, o:1, dev:sdb5
[ 1274.215126]  disk 1, o:1, dev:sdc5
[ 1274.215127]  disk 2, o:1, dev:sdd5
[ 1274.215127]  disk 3, o:1, dev:sde5
[ 1274.215128]  disk 4, o:1, dev:sdf5
[ 1274.215128]  disk 5, o:1, dev:sdk5
[ 1274.215129]  disk 6, o:1, dev:sdq5
[ 1274.215141]  disk 7, o:1, dev:sdl5
[ 1274.215141]  disk 8, o:1, dev:sdm5
[ 1274.215142]  disk 9, o:1, dev:sdn5
[ 1274.215142]  disk 12, o:1, dev:sdg5
[ 1274.219116] RAID1 conf printout:
[ 1274.219117]  --- wd:12 rd:24
[ 1274.219117]  disk 0, wo:0, o:1, dev:sdb2
[ 1274.219118]  disk 1, wo:0, o:1, dev:sdc2
[ 1274.219119]  disk 2, wo:0, o:1, dev:sdd2
[ 1274.219119]  disk 3, wo:0, o:1, dev:sde2
[ 1274.219120]  disk 4, wo:0, o:1, dev:sdf2
[ 1274.219120]  disk 5, wo:0, o:1, dev:sdk2
[ 1274.219121]  disk 6, wo:0, o:1, dev:sdl2
[ 1274.219121]  disk 7, wo:0, o:1, dev:sdm2
[ 1274.219122]  disk 9, wo:0, o:1, dev:sdn2
[ 1274.219134]  disk 10, wo:0, o:1, dev:sdg2
[ 1274.219134]  disk 11, wo:0, o:1, dev:sdq2
[ 1274.219135]  disk 12, wo:0, o:1, dev:sdo2
[ 1275.080123] SynoCheckRdevIsWorking (10283): remove active disk sdp5 from md2 raid_disks 13 mddev->degraded 2 mddev->level 5
[ 1275.080139] md: unbind<sdp5>
[ 1275.088132] md: export_rdev(sdp5)
[ 1275.091120] SynoCheckRdevIsWorking (10283): remove active disk sdp6 from md4 raid_disks 5 mddev->degraded 1 mddev->level 5
[ 1275.091124] md: unbind<sdp6>
[ 1275.096135] md: export_rdev(sdp6)
[ 1275.102133] SynoCheckRdevIsWorking (10283): remove active disk sdp7 from md5 raid_disks 2 mddev->degraded 1 mddev->level 1
[ 1275.102137] md: unbind<sdp7>
[ 1275.112157] SynoCheckRdevIsWorking (10283): remove active disk sdp2 from md1 raid_disks 24 mddev->degraded 12 mddev->level 1
[ 1275.112161] md: unbind<sdp2>
[ 1275.118702] md: export_rdev(sdp7)
[ 1275.124168] md: export_rdev(sdp2)
[ 1276.321330] init: synowsdiscoveryd main process (16584) killed by TERM signal
[ 1276.595188] init: ddnsd main process (12353) terminated with status 1
[ 1277.569230] init: smbd main process (16670) killed by TERM signal
[ 1278.234862] nfsd: last server has exited, flushing export cache
[ 1280.264875] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 1280.284991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 1280.285012] NFSD: starting 90-second grace period (net ffffffff81854f80)

Edited January 17, 2020 by C-Fu
flyride, Posted January 17, 2020 (#53)

3 minutes ago, C-Fu said:
    Seems like it's not sdp anymore.

Yes, /dev/sdp moved to /dev/sdg, and /dev/sdo moved to /dev/sdp. They are the same drives, show clean, and are still in the right order. So now let's try and resync again:

# mdadm --zero-superblock /dev/sdr5
# mdadm --manage /dev/md2 --add /dev/sdr5
# cat /proc/mdstat
flyride, Posted January 17, 2020 (#54, edited)

Whoops, /dev/sdr got remapped too, to /dev/sdo. Those commands won't do anything. I don't advise changing it now, but your Idx mapping could use some work.

# mdadm --zero-superblock /dev/sdo5
# mdadm --manage /dev/md2 --add /dev/sdo5
# cat /proc/mdstat

Edited January 17, 2020 by flyride
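Because the kernel hands out /dev/sdX letters in probe order, they can shuffle on every reboot, which is exactly what keeps happening in this thread. A safer habit is to address drives by serial number. A minimal sketch of that idea, with the serial-to-device mapping hard-coded as a sample (serials taken from the hdparm output later in the thread); on the live system you would instead look at `ls -l /dev/disk/by-id/` or `hdparm -i /dev/sdX | grep SerialNo`:

```shell
# Sketch only: resolve a drive by its serial number rather than its
# current letter. The table below is a hypothetical hand-built sample,
# not a live query.
lookup_by_serial() {
    case "$1" in
        WD-WMC4N0H64Z0C) echo /dev/sdg ;;   # WD30PURX, per hdparm -I above
        WD-WMC1T1324120) echo /dev/sdb ;;   # WD30EFRX, per hdparm -i below
        *) echo unknown ;;
    esac
}

lookup_by_serial WD-WMC4N0H64Z0C
```

Checking the serial before a destructive step like `--zero-superblock` guards against zeroing the wrong member after a remap.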
C-Fu (Author), Posted January 17, 2020 (#55)

1 minute ago, flyride said:
    Whoops, /dev/sdr got remapped too, to /dev/sdo. Those commands won't do anything. I don't advise changing it now but your Idx mapping could use some work.

How do I change that? Anyways...

root@homelab:~# mdadm --zero-superblock /dev/sdr5
mdadm: Couldn't open /dev/sdr5 for write - not zeroing
root@homelab:~# mdadm --zero-superblock /dev/sdo5
root@homelab:~# mdadm --manage /dev/md2 --add /dev/sdo5
mdadm: /dev/md2 has failed so using --add cannot work and might destroy
mdadm: data on /dev/sdo5. You should stop the array and re-assemble it.
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdg5[10] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/11] [UUUUUUUUUU__U]

md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

md5 : active raid1 sdo7[2]
      3905898432 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[10] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[12] sdq2[11]
      2097088 blocks [24/12] [UUUUUUUU_UUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdn1[10]
      2490176 blocks [12/9] [_UUUUUUUU_U_]

unused devices: <none>
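The bracketed status in each mdstat line is the key diagnostic here: "[13/11]" means 13 configured members with only 11 present, and each "_" in the member map marks a missing slot. A small sketch of reading that map, using md2's string from the output above as a hard-coded sample:

```shell
# Sketch only: count missing members in an mdstat status map.
# "U" = member up, "_" = member missing.
status="[UUUUUUUUUU__U]"   # md2's map, pasted from the mdstat above

# Keep only the underscores and count them.
missing=$(printf '%s' "$status" | tr -cd '_' | wc -c)
echo "missing members: $missing"
```

With two members missing from a RAID5, md2 is past its single-disk redundancy, which is why mdadm refuses a plain `--add` and asks for a stop-and-reassemble instead.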
flyride, Posted January 17, 2020 (#56)

Between the last mdstat and your current one, your /dev/sdp went offline - that is one of your 10TB drives. Check all your connections and cables; if they are "stretched" or not stable, secure them. Reboot. Post another mdstat.

If you can't get your hardware stable, this is a lost cause. Standing by for status.
C-Fu (Author), Posted January 17, 2020 (#57, edited)

16 minutes ago, flyride said:
    Between the last mdstat and your current one, your /dev/sdp went offline - that is one of your 10TB drives. Check all your connections and cables, if they are "stretched" or not stable, secure them. Reboot. Post another mdstat. If you can't get your hardware stable, this is a lost cause. Standing by for status.

Changed the cable, and I believe it's secure enough.

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdg5[10] sdp5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]

md4 : active raid5 sdl6[0] sdn6[5] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

md5 : active raid1 sdn7[2]
      3905898432 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdq2[12] sdp2[11] sdo2[10] sdn2[9] sdm2[8] sdl2[7] sdk2[6] sdg2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
      [=================>...]  resync = 86.4% (1813056/2097088) finish=0.0min speed=56421K/sec

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdo1[10]
      2490176 blocks [12/9] [_UUUUUUUU_U_]

unused devices: <none>

# fdisk -l | grep 9.1
GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite).
Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors

Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983

Device          Start         End     Sectors  Size Type
/dev/sdp1        2048     4982527     4980480  2.4G Linux RAID
/dev/sdp2     4982528     9176831     4194304    2G Linux RAID
/dev/sdp5     9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdp6  5860342336 11720838239  5860495904  2.7T Linux RAID
/dev/sdp7 11720854336 19532653311  7811798976  3.7T Linux RAID

Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device          Start         End     Sectors  Size Type
/dev/sdn1        2048     4982527     4980480  2.4G Linux RAID
/dev/sdn2     4982528     9176831     4194304    2G Linux RAID
/dev/sdn5     9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdn6  5860342336 11720838239  5860495904  2.7T Linux RAID
/dev/sdn7 11720854336 19532653311  7811798976  3.7T Linux RAID

Edited January 17, 2020 by C-Fu
flyride, Posted January 17, 2020 (#58, edited)

# mdadm --manage /dev/md5 --add /dev/sdp7
# mdadm --manage /dev/md4 --add /dev/sdp6
# cat /proc/mdstat

Edited January 17, 2020 by flyride
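After a successful `--add`, md starts a recovery and mdstat grows a progress line like the one seen for md1 earlier ("resync = 86.4% ... finish=0.0min"). A small sketch of pulling the percentage out of such a line, using that earlier line as a hard-coded sample; on the live system you would read `/proc/mdstat` directly:

```shell
# Sketch only: extract the completion percentage from a resync/recovery
# progress line. The sample line is pasted from the mdstat output above.
line="      [=================>...]  resync = 86.4% (1813056/2097088) finish=0.0min speed=56421K/sec"

pct=$(printf '%s\n' "$line" | sed -n 's/.*resync = \([0-9.]*\)%.*/\1/p')
echo "resync at ${pct}%"
```

Polling this in a loop (e.g. `watch cat /proc/mdstat`) is the usual way to confirm a rebuild is actually progressing rather than stalling.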
C-Fu (Author), Posted January 17, 2020 (#59, edited)

Oh crap. Heard something clicking. Rebooted, and cat /proc/mdstat keeps hanging. 😫

# fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x696935dc

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1        2048 468857024 468854977 223.6G fd Linux raid autodetect

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37

Device       Start        End    Sectors  Size Type
/dev/sdb1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304    2G Linux RAID
/dev/sdb5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C

Device       Start        End    Sectors  Size Type
/dev/sdc1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdc2  4982528    9176831    4194304    2G Linux RAID
/dev/sdc5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8

Device       Start        End    Sectors  Size Type
/dev/sdd1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdd2  4982528    9176831    4194304    2G Linux RAID
/dev/sdd5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA

Device       Start        End    Sectors  Size Type
/dev/sde1     2048    4982527    4980480  2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304    2G Linux RAID
/dev/sde5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87

Device       Start        End    Sectors  Size Type
/dev/sdf1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304    2G Linux RAID
/dev/sdf5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdg: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071

Device       Start        End    Sectors  Size Type
/dev/sdg1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdg2  4982528    9176831    4194304    2G Linux RAID
/dev/sdg5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507

Device       Start        End    Sectors  Size Type
/dev/sdk1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdk2  4982528    9176831    4194304    2G Linux RAID
/dev/sdk5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F

Device          Start         End     Sectors  Size Type
/dev/sdl1        2048     4982527     4980480  2.4G Linux RAID
/dev/sdl2     4982528     9176831     4194304    2G Linux RAID
/dev/sdl5     9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdl6  5860342336 11720838239  5860495904  2.7T Linux RAID

GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite).
Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F

Device          Start    End Sectors Size Type
/dev/synoboot1   2048  32767   30720  15M EFI System
/dev/synoboot2  32768  94207   61440  30M Linux filesystem
/dev/synoboot3  94208 102366    8159   4M BIOS boot

Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C

Device          Start         End     Sectors  Size Type
/dev/sdm1        2048     4982527     4980480  2.4G Linux RAID
/dev/sdm2     4982528     9176831     4194304    2G Linux RAID
/dev/sdm5     9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdm6  5860342336 11720838239  5860495904  2.7T Linux RAID

Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device          Start         End     Sectors  Size Type
/dev/sdn1        2048     4982527     4980480  2.4G Linux RAID
/dev/sdn2     4982528     9176831     4194304    2G Linux RAID
/dev/sdn5     9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdn6  5860342336 11720838239  5860495904  2.7T Linux RAID
/dev/sdn7 11720854336 19532653311  7811798976  3.7T Linux RAID

Disk /dev/sdo: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00

Device          Start         End     Sectors  Size Type
/dev/sdo1        2048     4982527     4980480  2.4G Linux RAID
/dev/sdo2     4982528     9176831     4194304    2G Linux RAID
/dev/sdo5     9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdo6  5860342336 11720838239  5860495904  2.7T Linux RAID

Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983

Device          Start         End     Sectors  Size Type
/dev/sdp1        2048     4982527     4980480  2.4G Linux RAID
/dev/sdp2     4982528     9176831     4194304    2G Linux RAID
/dev/sdp5     9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdp6  5860342336 11720838239  5860495904  2.7T Linux RAID
/dev/sdp7 11720854336 19532653311  7811798976  3.7T Linux RAID

Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A

Device       Start        End    Sectors  Size Type
/dev/sdq1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdq2  4982528    9176831    4194304    2G Linux RAID
/dev/sdq5  9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

fdisk command can't complete. Ugh. And to think the two 10TB drives were new!

Edit: oh, and the Web UI doesn't work anymore.

Edited January 17, 2020 by C-Fu
flyride, Posted January 17, 2020 (#60)

See if you can sort out what's going on. Maybe a power problem? Cabling, drive physical stability: it can all factor in. Advise when you have made a decision.

Sorry, this might be going down the tubes. I was pretty confident of our success until very recently!
C-Fu Posted January 17, 2020 Author Share #61 Posted January 17, 2020 (edited) 25 minutes ago, flyride said: See if you can sort out what's going on. Maybe a power problem? Cabling, drive physical stability, it can all factor. Advise when you have made a decision. Sorry this might be going down the tubes. I was pretty confident of our success until very recently! I think it might just be a loose data and/or power cable. I connected the 2x10TB to a LSI SAS card btw. No more clicking. And mdstat didn't hang, just took a few mins. # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1] md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1] 11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_] md2 : active raid5 sdb5[0] sdg5[10] sdp5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1] 35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU] md5 : active raid1 sdp7[3] 3905898432 blocks super 1.2 [2/0] [__] md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[5] sdk2[6] sdl2[7] sdm2[8] sdn2[10] sdo2[9] sdp2[11] sdq2[12] 2097088 blocks [24/13] [UUUUUUUUUUUUU___________] md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdn1[10] 2490176 blocks [12/9] [_UUUUUUUU_U_] unused devices: <none> I'm going to try and wait until fdisk -l finishes. Currently stops at Disk /dev/zram3. 
# fdisk -l Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x696935dc Device Boot Start End Sectors Size Id Type /dev/sda1 2048 468857024 468854977 223.6G fd Linux raid autodetect Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37 Device Start End Sectors Size Type /dev/sdb1 2048 4982527 4980480 2.4G Linux RAID /dev/sdb2 4982528 9176831 4194304 2G Linux RAID /dev/sdb5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C Device Start End Sectors Size Type /dev/sdc1 2048 4982527 4980480 2.4G Linux RAID /dev/sdc2 4982528 9176831 4194304 2G Linux RAID /dev/sdc5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8 Device Start End Sectors Size Type /dev/sdd1 2048 4982527 4980480 2.4G Linux RAID /dev/sdd2 4982528 9176831 4194304 2G Linux RAID /dev/sdd5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes 
Disklabel type: gpt Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA Device Start End Sectors Size Type /dev/sde1 2048 4982527 4980480 2.4G Linux RAID /dev/sde2 4982528 9176831 4194304 2G Linux RAID /dev/sde5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87 Device Start End Sectors Size Type /dev/sdf1 2048 4982527 4980480 2.4G Linux RAID /dev/sdf2 4982528 9176831 4194304 2G Linux RAID /dev/sdf5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdg: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071 Device Start End Sectors Size Type /dev/sdg1 2048 4982527 4980480 2.4G Linux RAID /dev/sdg2 4982528 9176831 4194304 2G Linux RAID /dev/sdg5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507 Device Start End Sectors Size Type /dev/sdk1 2048 4982527 4980480 2.4G Linux RAID /dev/sdk2 4982528 9176831 4194304 2G Linux RAID /dev/sdk5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F Device Start End Sectors Size Type /dev/sdl1 
2048 4982527 4980480 2.4G Linux RAID /dev/sdl2 4982528 9176831 4194304 2G Linux RAID /dev/sdl5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdl6 5860342336 11720838239 5860495904 2.7T Linux RAID GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite). Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F Device Start End Sectors Size Type /dev/synoboot1 2048 32767 30720 15M EFI System /dev/synoboot2 32768 94207 61440 30M Linux filesystem /dev/synoboot3 94208 102366 8159 4M BIOS boot Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C Device Start End Sectors Size Type /dev/sdm1 2048 4982527 4980480 2.4G Linux RAID /dev/sdm2 4982528 9176831 4194304 2G Linux RAID /dev/sdm5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdm6 5860342336 11720838239 5860495904 2.7T Linux RAID Disk /dev/sdn: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00 Device Start End Sectors Size Type /dev/sdn1 2048 4982527 4980480 2.4G Linux RAID /dev/sdn2 4982528 9176831 4194304 2G Linux RAID /dev/sdn5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdn6 5860342336 11720838239 5860495904 2.7T Linux RAID Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 
4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device            Start          End     Sectors  Size Type
/dev/sdo1          2048      4982527     4980480  2.4G Linux RAID
/dev/sdo2       4982528      9176831     4194304    2G Linux RAID
/dev/sdo5       9453280   5860326239  5850872960  2.7T Linux RAID
/dev/sdo6    5860342336  11720838239  5860495904  2.7T Linux RAID
/dev/sdo7   11720854336  19532653311  7811798976  3.7T Linux RAID

Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983

Device            Start          End     Sectors  Size Type
/dev/sdp1          2048      4982527     4980480  2.4G Linux RAID
/dev/sdp2       4982528      9176831     4194304    2G Linux RAID
/dev/sdp5       9453280   5860326239  5850872960  2.7T Linux RAID
/dev/sdp6    5860342336  11720838239  5860495904  2.7T Linux RAID
/dev/sdp7   11720854336  19532653311  7811798976  3.7T Linux RAID

Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A

Device         Start          End     Sectors  Size Type
/dev/sdq1       2048      4982527     4980480  2.4G Linux RAID
/dev/sdq2    4982528      9176831     4194304    2G Linux RAID
/dev/sdq5    9453280   5860326239  5850872960  2.7T Linux RAID

Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I was pretty confident too until recently! However, it turns out otherwise. You have my appreciation for sticking with me. dmesg is full of this, btw:

[ 1663.470519] md: md5: set sdp7 to auto_remap [1]
[ 1663.470520] md: recovery of RAID array md5
[ 1663.470523] md: minimum _guaranteed_ speed: 600000 KB/sec/disk.
[ 1663.470523] md: using maximum available idle IO bandwidth (but not more than 800000 KB/sec) for recovery.
[ 1663.470525] md: using 128k window, over a total of 3905898432k.
[ 1663.470800] md: md5: set sdp7 to auto_remap [0]
[ 1663.496370] RAID1 conf printout:
[ 1663.496372]  --- wd:0 rd:2
[ 1663.496373]  disk 0, wo:1, o:1, dev:sdp7
[ 1663.500414] RAID1 conf printout:
[ 1663.500415]  --- wd:0 rd:2
[ 1663.500420] RAID1 conf printout:
[ 1663.500420]  --- wd:0 rd:2
[ 1663.500421]  disk 0, wo:1, o:1, dev:sdp7

Edited January 17, 2020 by C-Fu
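The dmesg excerpt above shows md5 entering recovery and immediately dropping out again (auto_remap flips from [1] back to [0]). A quick way to tell whether the resync is looping rather than progressing is to count how many times recovery starts in a captured log. This is just a sketch; `dmesg.log` below is a hypothetical stand-in for `dmesg > dmesg.log` run on the NAS, with two sample loop iterations.

```shell
#!/bin/sh
# Count md5 recovery starts in a saved dmesg capture.
# A healthy resync starts once; a loop like the one above starts repeatedly.
# The here-doc is sample data standing in for the real `dmesg` output.
cat > dmesg.log <<'EOF'
[ 1663.470519] md: md5: set sdp7 to auto_remap [1]
[ 1663.470520] md: recovery of RAID array md5
[ 1663.470800] md: md5: set sdp7 to auto_remap [0]
[ 1675.123456] md: md5: set sdp7 to auto_remap [1]
[ 1675.123457] md: recovery of RAID array md5
[ 1675.123900] md: md5: set sdp7 to auto_remap [0]
EOF
starts=$(grep -c 'recovery of RAID array md5' dmesg.log)
echo "md5 recovery started $starts times"
```

If the count keeps climbing while `cat /proc/mdstat` never shows resync progress, the rebuild is being aborted each time rather than running.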
C-Fu Posted January 17, 2020 Author Share #62 Posted January 17, 2020 (edited)

# hdparm -i /dev/sd?

/dev/sda:

 Model=KINGSTON SV300S37A240G, FwRev=580ABBF0, SerialNo=50026B724C0476CA
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=unknown, MaxMultSect=1, MultSect=1
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=468862128
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown: ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode

/dev/sdb:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1324120
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdc:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T0889051
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdd:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1064335
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sde:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1020714
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdf:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1342187
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdg:

 Model=WDC WD30PURX-64P6ZY0, FwRev=80.00A80, SerialNo=WD-WMC4N0H64Z0C
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdk:
 HDIO_GET_IDENTITY failed: Invalid argument

hdparm stops at /dev/sdk.

# smartctl -i /dev/sdk
smartctl 6.5 (build date Jun 8 2018) [x86_64-linux-3.10.105] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Purple
Device Model:     WDC WD30PURX-64P6ZY0
Serial Number:    WD-WMC4N0H7LYL1
LU WWN Device Id: 5 0014ee 6afcf440c
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 18 03:59:18 2020 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Edited January 17, 2020 by C-Fu
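Since `hdparm -i` fails with HDIO_GET_IDENTITY on some of these drives while `smartctl -i` still reads them, the serial numbers from smartctl are the reliable way to track which physical disk is behind each /dev/sdX letter as they get remapped. Here is a hedged sketch of pulling the model and serial out of saved smartctl output; the here-doc is sample data standing in for `smartctl -i /dev/sdk`, and the field names match the output shown above.

```shell
#!/bin/sh
# Extract model and serial from `smartctl -i` output so a physical drive
# can be matched up even when its /dev/sdX letter moves between boots.
# The here-doc is sample data, not live device output.
cat > smart.txt <<'EOF'
Device Model:     WDC WD30PURX-64P6ZY0
Serial Number:    WD-WMC4N0H7LYL1
Firmware Version: 80.00A80
EOF
model=$(awk -F': *' '/^Device Model/{print $2}' smart.txt)
serial=$(awk -F': *' '/^Serial Number/{print $2}' smart.txt)
echo "$model $serial"
```

Running this per device (e.g. in a `for d in /dev/sd?` loop) yields a letter-to-serial map that survives the kind of remapping seen in this thread.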
C-Fu Posted January 17, 2020 Author Share #63 Posted January 17, 2020 @flyride If this helps, this is the output of mdadm --examine /dev/sd?? # mdadm --examine /dev/sd?? /dev/sda1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 10743092:68743fb9:59e82c9a:24dcf27b Name : homelab:3 (local to host homelab) Creation Time : Sun Sep 29 13:33:05 2019 Raid Level : raid0 Raid Devices : 1 Avail Dev Size : 468852864 (223.57 GiB 240.05 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=65 sectors State : clean Device UUID : 515b66a6:1281d06f:01f2f8a0:26f16b69 Update Time : Sat Jan 11 20:19:40 2020 Checksum : a281b55f - correct Events : 30 Chunk Size : 64K Device Role : Active device 0 Array State : A ('A' == active, '.' == missing, 'R' == replacing) /dev/sdb1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd674406 - correct Events : 589489 Number Major Minor RaidDevice State this 1 8 17 1 active sync /dev/sdb1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdb2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid 
Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3bb - correct Events : 77 Number Major Minor RaidDevice State this 0 8 18 0 active sync /dev/sdb2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdb5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : a8109f74:46bc8509:6fc3bca8:9fddb6a7 Update Time : Sat Jan 18 03:53:54 2020 Checksum : b349fec - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 0 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdc1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd674418 - correct Events : 589489 Number Major Minor RaidDevice State this 2 8 33 2 active sync /dev/sdc1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdc2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3cd - correct Events : 77 Number Major Minor RaidDevice State this 1 8 34 1 active sync /dev/sdc2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdc5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 8dfdc601:e01f8a98:9a8e78f1:a7951260 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 2878739f - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 1 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdd1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd67442a - correct Events : 589489 Number Major Minor RaidDevice State this 3 8 49 3 active sync /dev/sdd1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdd2: Magic : a92b4efc Version : 0.90.00 UUID : 
192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3df - correct Events : 77 Number Major Minor RaidDevice State this 2 8 50 2 active sync /dev/sdd2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdd5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : f98bc050:a4b46deb:c3168fa0:08d90061 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 7a5a625a - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 2 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sde1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 11 Preferred Minor : 0 Update Time : Sat Jan 11 17:05:52 2020 State : active Active Devices : 11 Working Devices : 11 Failed Devices : 1 Spare Devices : 0 Checksum : dd547182 - correct Events : 507168 Number Major Minor RaidDevice State this 11 8 65 11 active sync /dev/sde1 0 0 8 209 0 active sync /dev/sdn1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 225 4 active sync /dev/sdo1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 241 10 active sync /dev/sdp1 11 11 8 65 11 active sync /dev/sde1 /dev/sde2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3f1 - correct Events : 77 Number Major Minor RaidDevice State this 3 8 66 3 active sync /dev/sde2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 
faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sde5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 1e2742b7:d1847218:816c7135:cdf30c07 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 48cf3e2f - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 3 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdf1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd67444e - correct Events : 589489 Number Major Minor RaidDevice State this 5 8 81 5 active sync /dev/sdf1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdf2: Magic : a92b4efc Version : 0.90.00 UUID : 
192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca403 - correct Events : 77 Number Major Minor RaidDevice State this 4 8 82 4 active sync /dev/sdf2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdf5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : ce60c47e:14994160:da4d1482:fd7901f2 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 8fb7f3ad - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 4 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdg1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd67445c - correct Events : 589489 Number Major Minor RaidDevice State this 4 8 97 4 active sync /dev/sdg1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdg2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca415 - correct Events : 77 Number Major Minor RaidDevice State this 5 8 98 5 active sync /dev/sdg2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdg5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : a64f01c2:76c56102:38ad7c4e:7bce88d1 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 2104c2d1 - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 12 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdk1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744a0 - correct Events : 589489 Number Major Minor RaidDevice State this 6 8 161 6 active sync /dev/sdk1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdk2: Magic : a92b4efc Version : 0.90.00 UUID : 
192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca457 - correct Events : 77 Number Major Minor RaidDevice State this 6 8 162 6 active sync /dev/sdk2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdk5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 706c5124:d647d300:733fb961:e5cd8127 Update Time : Sat Jan 18 03:53:54 2020 Checksum : eafbf391 - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 5 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdl1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744b2 - correct Events : 589489 Number Major Minor RaidDevice State this 7 8 177 7 active sync /dev/sdl1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdl2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca469 - correct Events : 77 Number Major Minor RaidDevice State this 7 8 178 7 active sync /dev/sdl2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdl5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 6993b9eb:8ad7c80f:dc17268f:a8efa73d Update Time : Sat Jan 18 03:53:54 2020 Checksum : 4eca46e - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 7 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdl6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 7012016d:a3255ddf:3f30807e:1f591523 Update Time : Sat Jan 18 03:53:54 2020 Checksum : aac46c0 - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 0 Array State : AAAA. ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdm1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744c4 - correct Events : 589489 Number Major Minor RaidDevice State this 8 8 193 8 active sync /dev/sdm1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdm2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca47b - correct Events : 77 Number Major Minor RaidDevice State this 8 8 194 8 active sync /dev/sdm2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdm5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 2f1247d1:a536d2ad:ba2eb47f:a7eaf237 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 735c942d - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 8 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdm6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : a8d3f92e:69942435:fc88a07d:fed5cf67 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 66474c5a - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 1 Array State : AAAA. ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdn1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 13 Preferred Minor : 0 Update Time : Fri Jan 3 02:47:02 2020 State : clean Active Devices : 12 Working Devices : 13 Failed Devices : 0 Spare Devices : 1 Checksum : dd4170cb - correct Events : 2053 Number Major Minor RaidDevice State this 12 8 177 12 spare /dev/sdl1 0 0 8 193 0 active sync /dev/sdm1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 65 4 active sync /dev/sde1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 129 6 active sync 7 7 8 145 7 active sync 8 8 8 161 8 active sync /dev/sdk1 9 9 8 241 9 active sync /dev/sdp1 10 10 8 225 10 active sync /dev/sdo1 11 11 8 209 11 active sync /dev/sdn1 12 12 8 177 12 spare /dev/sdl1 /dev/sdn2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca48d - correct Events : 77 Number Major Minor RaidDevice State this 9 8 210 9 active sync /dev/sdn2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 
13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed mdadm: No md superblock detected on /dev/sdn5. /dev/sdn6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 063b1204:f6d34bd3:84076416:c4d99e6f Update Time : Sat Jan 18 03:53:54 2020 Checksum : 7a197597 - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 3 Array State : AAAA. ('A' == active, '.' == missing, 'R' == replacing) /dev/sdn7: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747 Name : homelab:5 (local to host homelab) Creation Time : Tue Sep 24 19:36:08 2019 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 7811796928 (3724.96 GiB 3999.64 GB) Array Size : 3905898432 (3724.96 GiB 3999.64 GB) Used Dev Size : 7811796864 (3724.96 GiB 3999.64 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=64 sectors State : clean Device UUID : 43162b40:db6c93b3:98025029:72b7e3d4 Update Time : Sat Jan 18 01:48:45 2020 Checksum : 2940157e - correct Events : 223913 Device Role : Active device 1 Array State : .A ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdo1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744e8 - correct Events : 589489 Number Major Minor RaidDevice State this 10 8 225 10 active sync /dev/sdo1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdo2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca49f - correct Events : 77 Number Major Minor RaidDevice State this 10 8 226 10 active sync /dev/sdo2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 
14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdo5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 1b4ab27d:bb7488fa:a6cc1f75:d21d1a83 Update Time : Sat Jan 18 03:53:54 2020 Checksum : ad10db47 - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 9 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdo6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 856be4c3:8a458aaf:f0051c80:c8969855 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 65dbd318 - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 2 Array State : AAAA. ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdp1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 10 Preferred Minor : 0 Update Time : Sat Jan 18 01:23:44 2020 State : clean Active Devices : 10 Working Devices : 10 Failed Devices : 2 Spare Devices : 0 Checksum : dd66c00d - correct Events : 579348 Number Major Minor RaidDevice State this 0 8 241 0 active sync /dev/sdp1 0 0 8 241 0 active sync /dev/sdp1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 209 10 active sync /dev/sdn1 11 11 0 0 11 faulty removed /dev/sdp2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca4b1 - correct Events : 77 Number Major Minor RaidDevice State this 11 8 242 11 active sync /dev/sdp2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty 
removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdp5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 73610f83:fb3cf895:c004147e:b4de2bfe Update Time : Sat Jan 18 03:53:54 2020 Checksum : d1e3b3dd - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 11 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdp6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 8a4f9f1d:b7041df1:dd74acd4:2dbb4f4b Update Time : Sat Jan 18 01:03:16 2020 Checksum : 4b76e17f - correct Events : 7134 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 4 Array State : AAAAA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdp7: Magic : a92b4efc Version : 1.2 Feature Map : 0x2 Array UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747 Name : homelab:5 (local to host homelab) Creation Time : Tue Sep 24 19:36:08 2019 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 7811796928 (3724.96 GiB 3999.64 GB) Array Size : 3905898432 (3724.96 GiB 3999.64 GB) Used Dev Size : 7811796864 (3724.96 GiB 3999.64 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Recovery Offset : 3 sectors Unused Space : before=1968 sectors, after=64 sectors State : clean Device UUID : 2be1c9ab:04ddd9c3:56f1a702:c429de92 Update Time : Sat Jan 18 05:16:27 2020 Checksum : 3c894d60 - correct Events : 1168826 Device Role : Active device 0 Array State : A. ('A' == active, '.' == missing, 'R' == replacing) /dev/sdq1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 12 Preferred Minor : 0 Update Time : Fri Jan 10 19:58:06 2020 State : clean Active Devices : 11 Working Devices : 11 Failed Devices : 1 Spare Devices : 0 Checksum : dd594c7c - correct Events : 450677 Number Major Minor RaidDevice State this 9 65 1 9 active sync /dev/sdq1 0 0 8 225 0 active sync /dev/sdo1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 0 0 4 faulty removed 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 65 1 9 active sync /dev/sdq1 10 10 8 209 10 active sync /dev/sdn1 11 11 8 65 11 active sync /dev/sde1 /dev/sdq2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) 
   Raid Devices : 24
  Total Devices : 13
Preferred Minor : 1

    Update Time : Sat Jan 18 03:54:16 2020
          State : active
 Active Devices : 13
Working Devices : 13
 Failed Devices : 11
  Spare Devices : 0
       Checksum : 37bca3fc - correct
         Events : 77

      Number   Major   Minor   RaidDevice   State
this    12      65       2        12        active sync   /dev/sdq2

   0     0       8      18         0        active sync   /dev/sdb2
   1     1       8      34         1        active sync   /dev/sdc2
   2     2       8      50         2        active sync   /dev/sdd2
   3     3       8      66         3        active sync   /dev/sde2
   4     4       8      82         4        active sync   /dev/sdf2
   5     5       8      98         5        active sync   /dev/sdg2
   6     6       8     162         6        active sync   /dev/sdk2
   7     7       8     178         7        active sync   /dev/sdl2
   8     8       8     194         8        active sync   /dev/sdm2
   9     9       8     210         9        active sync   /dev/sdn2
  10    10       8     226        10        active sync   /dev/sdo2
  11    11       8     242        11        active sync   /dev/sdp2
  12    12      65       2        12        active sync   /dev/sdq2
  13    13       0       0        13        faulty removed
  14    14       0       0        14        faulty removed
  15    15       0       0        15        faulty removed
  16    16       0       0        16        faulty removed
  17    17       0       0        17        faulty removed
  18    18       0       0        18        faulty removed
  19    19       0       0        19        faulty removed
  20    20       0       0        20        faulty removed
  21    21       0       0        21        faulty removed
  22    22       0       0        22        faulty removed
  23    23       0       0        23        faulty removed

/dev/sdq5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 5cc6456d:bfc950bf:1baf6fef:aabec947

    Update Time : Sat Jan 18 03:53:54 2020
       Checksum : a06c2fdd - correct
         Events : 370988

         Layout : left-symmetric
     Chunk Size : 64K

    Device Role : Active device 6
    Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)

cat /proc/mdstat hangs indefinitely now. I didn't do anything else other than cat /proc/mdstat, fdisk -l, and whatever I've written above.
flyride Posted January 17, 2020 Share #64 Posted January 17, 2020 (edited)

From my first post: "your data will patiently wait for you." Please try to isolate and fix your hardware. Have you replaced every SATA cable? Are you SURE your power is good? You have a LOT of drives. Vibration? Cooling? Or just a drive failure? The fact that multiple drives have gone down suggests that you have some fundamental problem. If mdstat hangs, there's a reason. If there really is a bad drive, narrow it down and take it out of the system. We'll try to work with what's left. But the hardware has to work, otherwise time is being wasted.

Edited January 17, 2020 by flyride
C-Fu Posted January 17, 2020 Author Share #65 Posted January 17, 2020

1 hour ago, flyride said: Please, try and isolate and fix your hardware. Have you replaced every SATA cable? Are you SURE your power is good? You have a LOT of drives. Vibration? Cooling? Or just a drive failure? The fact that multiple drives have gone down suggests that you have some fundamental problem. If mdstat hangs, there's a reason. If there is really a bad drive, narrow it down and take it out of the system. We'll try to work with what's left. But the hardware has to work, otherwise time is being wasted.

The SATA cables for the 2x10TB drives have been replaced. All drives have their own fans, in groups of 4 or 5. The PSU is a 750W Bronze unit. I'm using an i7 4770 with one SATA port-multiplier card and one SAS HBA card. The reason I posted the past few messages is to get your opinion on the potential bad drives that I need to isolate. But I'll keep trying and let you know, thanks!!
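When chasing an intermittent drive like this, counting kernel I/O errors per device is a quick way to see which disk the controller is actually complaining about. A minimal sketch, run against a pasted log excerpt rather than the live system; the log lines below are illustrative, not from this box:

```shell
# Count I/O-error lines per device in a kernel log dump.
# Sample log text (hypothetical, for illustration only):
log='[100.1] blk_update_request: I/O error, dev sdp, sector 1234
[100.2] blk_update_request: I/O error, dev sdp, sector 5678
[100.3] ata7.00: exception Emask 0x0 SAct 0x0
[100.4] blk_update_request: I/O error, dev sdk, sector 42'

# Extract the "dev sdX" tokens and tally them, most-afflicted device first.
printf '%s\n' "$log" \
  | grep -o 'dev sd[a-z]*' \
  | sort | uniq -c | sort -rn
```

On the live box the same pipeline can be pointed at `dmesg` output directly; the device with the dominant error count is the first candidate to pull.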
C-Fu Posted January 17, 2020 Author Share #66 Posted January 17, 2020

@flyride I believe I've pinpointed the "bad" drive: it's one of the 10TB ones.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/9] [UUUUU_UUUU___]

md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

md5 : active raid1 sdo7[2]
      3905898432 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/12] [UUUUUUUUUUUU____________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5]
      2490176 blocks [12/4] [_UUU_U______]

unused devices: <none>

# fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x696935dc

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1        2048 468857024 468854977 223.6G fd Linux raid autodetect

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37

Device       Start        End    Sectors Size Type
/dev/sdb1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304   2G Linux RAID
/dev/sdb5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C

Device       Start        End    Sectors Size Type
/dev/sdc1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdc2  4982528    9176831    4194304   2G Linux RAID
/dev/sdc5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8

Device       Start        End    Sectors Size Type
/dev/sdd1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdd2  4982528    9176831    4194304   2G Linux RAID
/dev/sdd5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA

Device       Start        End    Sectors Size Type
/dev/sde1     2048    4982527    4980480 2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304   2G Linux RAID
/dev/sde5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87

Device       Start        End    Sectors Size Type
/dev/sdf1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304   2G Linux RAID
/dev/sdf5  9453280 5860326239 5850872960 2.7T Linux RAID

GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite).
Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F

Device          Start    End Sectors Size Type
/dev/synoboot1   2048  32767   30720  15M EFI System
/dev/synoboot2  32768  94207   61440  30M Linux filesystem
/dev/synoboot3  94208 102366    8159   4M BIOS boot

Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071

Device       Start        End    Sectors Size Type
/dev/sdk1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdk2  4982528    9176831    4194304   2G Linux RAID
/dev/sdk5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F

Device          Start         End    Sectors Size Type
/dev/sdl1        2048     4982527    4980480 2.4G Linux RAID
/dev/sdl2     4982528     9176831    4194304   2G Linux RAID
/dev/sdl5     9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdl6  5860342336 11720838239 5860495904 2.7T Linux RAID

Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C

Device          Start         End    Sectors Size Type
/dev/sdm1        2048     4982527    4980480 2.4G Linux RAID
/dev/sdm2     4982528     9176831    4194304   2G Linux RAID
/dev/sdm5     9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdm6  5860342336 11720838239 5860495904 2.7T Linux RAID

Disk /dev/sdn: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00

Device          Start         End    Sectors Size Type
/dev/sdn1        2048     4982527    4980480 2.4G Linux RAID
/dev/sdn2     4982528     9176831    4194304   2G Linux RAID
/dev/sdn5     9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdn6  5860342336 11720838239 5860495904 2.7T Linux RAID

Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device           Start         End    Sectors Size Type
/dev/sdo1         2048     4982527    4980480 2.4G Linux RAID
/dev/sdo2      4982528     9176831    4194304   2G Linux RAID
/dev/sdo5      9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdo6   5860342336 11720838239 5860495904 2.7T Linux RAID
/dev/sdo7  11720854336 19532653311 7811798976 3.7T Linux RAID

Disk /dev/sdp: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507

Device       Start        End    Sectors Size Type
/dev/sdp1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdp2  4982528    9176831    4194304   2G Linux RAID
/dev/sdp5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A

Device       Start        End    Sectors Size Type
/dev/sdq1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdq2  4982528    9176831    4194304   2G Linux RAID
/dev/sdq5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md5: 3.7 TiB, 3999639994368 bytes, 7811796864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md4: 10.9 TiB, 12002291351552 bytes, 23441975296 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 262144 bytes
flyride Posted January 17, 2020 Share #67 Posted January 17, 2020

Do you have another 10TB drive available? Assuming not, we'll just have to recover in a degraded state - the next step will be to double-check that we can force the md2 array online reasonably safely with what is left. If there is an unexpected reboot or an additional device failure please let me know. Here's a map of the disks as just presented:

# mdadm --detail /dev/md2
# mdadm --detail /dev/md4
# mdadm --detail /dev/md5
# mdadm --examine /dev/sd[bcdefklmnopq]5 | egrep 'Event|/dev/sd'
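The `egrep 'Event|/dev/sd'` filter in the last command works because `mdadm --examine` prints one `/dev/sdX5:` header per member followed by its fields; the alternation keeps just the header and the `Events` line. A small demonstration on sample text that mirrors that layout (the values below are made up, not from this system):

```shell
# Sample two-member excerpt shaped like mdadm --examine output (illustrative values).
sample='/dev/sdb5:
          Magic : a92b4efc
         Events : 371031
/dev/sdc5:
          Magic : a92b4efc
         Events : 371028'

# Keep only the device headers and their Events lines.
printf '%s\n' "$sample" | grep -E 'Event|/dev/sd'
```

The same idea applies to the `'Role|/dev/sd'` variant used later in the thread: it pairs each device with its slot in the array instead of its event count.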
C-Fu Posted January 17, 2020 Author Share #68

6 minutes ago, flyride said:
If there is an unexpected reboot or an additional device failure please let me know.

Nothing happened after I took out the power and SATA cables from the 10TB. I would have to get another 10TB from Amazon if I need to replace it. Shipping's gonna take at most two weeks. I had two 8TB drives on standby, ready to be shucked, right before this happened.

root@homelab:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 13
  Total Devices : 9
    Persistence : Superblock is persistent
    Update Time : Sat Jan 18 07:23:02 2020
          State : clean, FAILED
 Active Devices : 9
Working Devices : 9
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : homelab:2 (local to host homelab)
           UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
         Events : 371031

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       3       8       69        3      active sync   /dev/sde5
       4       8       85        4      active sync   /dev/sdf5
       -       0        0        5      removed
      13      65        5        6      active sync   /dev/sdq5
       7       8      181        7      active sync   /dev/sdl5
       8       8      197        8      active sync   /dev/sdm5
       9       8      213        9      active sync   /dev/sdn5
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed

# mdadm --detail /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:04 2019
     Raid Level : raid5
     Array Size : 11720987648 (11178.00 GiB 12002.29 GB)
  Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB)
   Raid Devices : 5
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sat Jan 18 07:23:02 2020
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : homelab:4 (local to host homelab)
           UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0
         Events : 7200

    Number   Major   Minor   RaidDevice State
       0       8      182        0      active sync   /dev/sdl6
       1       8      198        1      active sync   /dev/sdm6
       2       8      214        2      active sync   /dev/sdn6
       5       8      230        3      active sync   /dev/sdo6
       -       0        0        4      removed

# mdadm --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Tue Sep 24 19:36:08 2019
     Raid Level : raid1
     Array Size : 3905898432 (3724.96 GiB 3999.64 GB)
  Used Dev Size : 3905898432 (3724.96 GiB 3999.64 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent
    Update Time : Sat Jan 18 07:22:58 2020
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
           Name : homelab:5 (local to host homelab)
           UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747
         Events : 223918

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       2       8      231        1      active sync   /dev/sdo7

# mdadm --examine /dev/sd[bcdefklmnopq]5 | egrep 'Event|/dev/sd'
/dev/sdb5:
         Events : 371031
/dev/sdc5:
         Events : 371031
/dev/sdd5:
         Events : 371031
/dev/sde5:
         Events : 371031
/dev/sdf5:
mdadm: No md superblock detected on /dev/sdo5.
         Events : 371031
/dev/sdk5:
         Events : 370998
/dev/sdl5:
         Events : 371031
/dev/sdm5:
         Events : 371031
/dev/sdn5:
         Events : 371031
/dev/sdp5:
         Events : 370988
/dev/sdq5:
         Events : 371031
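The event counters above are the tell: members whose count lags the majority (here sdk5 and sdp5) missed writes and are stale. A minimal sketch of that comparison in shell, run against hard-coded sample lines modeled on the output above rather than a live mdadm run, so the device names and counts are purely illustrative:

```shell
# Sketch: flag stale md members by comparing event counters.
# "sample" mimics `mdadm --examine ... | egrep 'Event|/dev/sd'` output with
# each device name joined onto its Events line; values are illustrative.
sample='/dev/sdb5: Events : 371031
/dev/sdk5: Events : 370998
/dev/sdp5: Events : 370988
/dev/sdq5: Events : 371031'

# Highest event count seen across all members.
max=$(echo "$sample" | awk '{print $NF}' | sort -n | tail -1)

# Any member below the maximum missed writes and is out of sync.
echo "$sample" | awk -v max="$max" '$NF < max {print $1, "is", max - $NF, "events behind"}'
# prints: /dev/sdk5: is 33 events behind
#         /dev/sdp5: is 43 events behind
```

A small lag (tens of events) is usually survivable with a forced assemble; a large one means the member's data is meaningfully behind the rest of the array.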
flyride Posted January 18, 2020 Share #69

Well, that's fun.

# mdadm --examine /dev/sd[bcdefklmnopq]5 | egrep 'Role|/dev/sd'
C-Fu Posted January 18, 2020 Author Share #70

2 minutes ago, flyride said:
Well, that's fun.

# mdadm --examine /dev/sd[bcdefklmnopq]5 | egrep 'Role|/dev/sd'
/dev/sdb5:
   Device Role : Active device 0
/dev/sdc5:
   Device Role : Active device 1
/dev/sdd5:
   Device Role : Active device 2
/dev/sde5:
   Device Role : Active device 3
/dev/sdf5:
mdadm: No md superblock detected on /dev/sdo5.
   Device Role : Active device 4
/dev/sdk5:
   Device Role : Active device 12
/dev/sdl5:
   Device Role : Active device 7
/dev/sdm5:
   Device Role : Active device 8
/dev/sdn5:
   Device Role : Active device 9
/dev/sdp5:
   Device Role : Active device 5
/dev/sdq5:
   Device Role : Active device 6
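Those Device Role lines are what determine the member order for any later forced re-create: sorting by role number yields the device list in slot order, with gaps showing where "missing" must go. A sketch of that sort, using a few hard-coded sample lines (each device name folded onto its Role line; a real run prints them on separate lines):

```shell
# Sketch: sort "Device Role" output into slot order for a forced re-create.
# "roles" mimics `mdadm --examine ... | egrep 'Role|/dev/sd'` with each
# device name folded onto its Role line; only a few devices are shown.
roles='/dev/sdb5: Device Role : Active device 0
/dev/sdk5: Device Role : Active device 12
/dev/sdp5: Device Role : Active device 5
/dev/sdq5: Device Role : Active device 6'

# Emit "role device" pairs, sort numerically by role, then print the devices.
echo "$roles" | awk '{gsub(":", "", $1); print $NF, $1}' | sort -n | awk '{print $2}'
# prints: /dev/sdb5
#         /dev/sdp5
#         /dev/sdq5
#         /dev/sdk5
```

The point is that the create-order comes from the superblocks, not from current kernel device names, which have been renumbering throughout this thread.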
flyride Posted January 18, 2020 Share #71

Well, it's hero or jackass time. It's been fun trying to figure this out, but that's over now.

The huge problem with this recovery is that the ongoing hardware failures have caused the disk devices to renumber on a frequent basis. That makes matching the current state against saved history virtually impossible and has basically created a basket case.

To recap, we've done a total of two "irrevocable" things. First, we forced /dev/md2 to accept a slightly out-of-sequence (3TB) disk, which initially made the array mountable. Then, since we were having difficulty resyncing the 10TB disk, we zeroed the superblock (the second irrevocable thing) on its /dev/md2 array member, to force it completely out of the array and resync. The resync problem was due to a 6TB disk apparently failing and recovering itself, and we finally saw it fail completely across all the arrays.

Neither of these irrevocable things is too much of a problem by itself, except that you have now determined that the "good" 10TB drive is failed and removed it from the system, which leaves the "bad" superblock-zeroed 10TB drive as the only path to recover /dev/md2. It is important to be correct about that, as it is possible the "bad" 10TB drive holds corrupted data from the failed resync. And we can't tell what state it is in at all, because there is no superblock. With an array this large and complex, we would normally rely on the array and member metadata to keep things straight. We cannot do that either, because of the zeroed superblock. We have to force the "bad" array member back into the correct position in the array, because it no longer knows where it goes.
If we force-create the array now and the device addressing isn't correct (meaning it doesn't match the RAID order extracted from the device numbering), we are virtually certain to lose the array. We can try a couple of obvious permutations, but with 13 disk slots, the number of combinations makes guessing hopeless. So, before you execute the commands below, be ABSOLUTELY sure that everything is totally stable and that the active drive names match the drive map, which was derived from your most recent fdisk. Research, cross-check, ask questions, validate anything you see fit. At the end of the day, this is your data, not mine. Frankly, I have never put an array into this position before and cannot easily simulate or test a recovery. If something doesn't match up in your mind, if you want a second opinion, or if you just don't want to for any reason, do NOT do the following:

# mdadm --stop /dev/md2
# mdadm -Cf /dev/md2 -e1.2 -n13 -l5 --verbose --assume-clean /dev/sd[bcdefpqlmno]5 missing /dev/sdk5 -u43699871:217306be:dc16f5e8:dcbe1b0d
# cat /proc/mdstat

But if everything goes well, you should have a mounted, degraded /dev/md2. Under no circumstances attempt to add or replace a disk to resync it.

Edited January 18, 2020 by flyride
C-Fu Posted January 18, 2020 Author Share #72

29 minutes ago, flyride said:
But if everything goes well, you should have a mounted, degraded /dev/md2. Under no circumstances attempt to add or replace a disk to resync it.

I understand what you're trying to say, and whatever happens I accept 😁 I've tried multiple times to plug and replug the "bad" 10TB drive, but mdstat always hangs. Anyway, I've done the 3 commands.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdk5[12] sdq5[10] sdp5[9] sdo5[8] sdn5[7] sdm5[6] sdl5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1] sdb5[0]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUUU_U]
md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
md5 : active raid1 sdo7[2]
      3905898432 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdk2[5] sdl2[6] sdm2[7] sdn2[8] sdo2[9] sdp2[10] sdq2[11]
      2097088 blocks [24/12] [UUUUUUUUUUUU____________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5]
      2490176 blocks [12/4] [_UUU_U______]
unused devices: <none>
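The [13/12] and [5/4] pairs in that mdstat are the quick health check: configured members versus members currently up. A sketch of reading those counters in shell, against a trimmed hard-coded sample rather than the real /proc/mdstat:

```shell
# Sketch: spot degraded arrays in /proc/mdstat via their [configured/up] pairs.
# "mdstat" is a trimmed, hard-coded stand-in for the real /proc/mdstat.
mdstat='md2 : active raid5 sdk5[12] sdb5[0]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUUU_U]
md4 : active raid5 sdl6[0]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]'

# Remember the array name from the "mdN :" line, then compare the two
# numbers inside the [n/m] counter on the following "blocks" line.
echo "$mdstat" | awk '
  /^md/ { name = $1 }
  /blocks/ {
    match($0, /\[[0-9]+\/[0-9]+\]/)
    split(substr($0, RSTART + 1, RLENGTH - 2), a, "/")
    if (a[1] != a[2]) print name, "degraded:", a[2] "/" a[1], "members up"
  }'
# prints: md2 degraded: 12/13 members up
#         md4 degraded: 4/5 members up
```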
flyride Posted January 18, 2020 Share #73

Neat.

# pvs
# vgs
# lvs
# pvdisplay
# vgdisplay
# lvdisplay
C-Fu Posted January 18, 2020 Author Share #74

@flyride

root@homelab:~# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/md2   vg1  lvm2 a-m  32.69t       0
  /dev/md4   vg1  lvm2 a--  10.92t       0
  /dev/md5   vg1  lvm2 a--   3.64t 916.00m

root@homelab:~# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg1    3   2   0 wz-pn- 47.25t 916.00m

root@homelab:~# lvs
  LV                    VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1  -wi-----p- 12.00m
  volume_1              vg1  -wi-----p- 47.25t

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               32.69 TiB / not usable 2.19 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8570611
  Free PE               0
  Allocated PE          8570611
  PV UUID               xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg1
  PV Size               10.92 TiB / not usable 128.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2861569
  Free PE               0
  Allocated PE          2861569
  PV UUID               f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh

  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg1
  PV Size               3.64 TiB / not usable 1.38 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              953588
  Free PE               229
  Allocated PE          953359
  PV UUID               U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF

# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  13
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                2
  VG Size               47.25 TiB
  PE Size               4.00 MiB
  Total PE              12385768
  Alloc PE / Size       12385539 / 47.25 TiB
  Free PE / Size        229 / 916.00 MiB
  VG UUID               2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp

# lvdisplay
  --- Logical volume ---
  LV Path               /dev/vg1/syno_vg_reserved_area
  LV Name               syno_vg_reserved_area
  VG Name               vg1
  LV UUID               OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3
  LV Write Access       read/write
  LV Creation host, time ,
  LV Status             NOT available
  LV Size               12.00 MiB
  Current LE            3
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto

  --- Logical volume ---
  LV Path               /dev/vg1/volume_1
  LV Name               volume_1
  VG Name               vg1
  LV UUID               SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv
  LV Write Access       read/write
  LV Creation host, time ,
  LV Status             NOT available
  LV Size               47.25 TiB
  Current LE            12385536
  Segments              4
  Allocation            inherit
  Read ahead sectors    auto
flyride Posted January 18, 2020 Share #75

So far, so good.

# cat /etc/fstab
# vgchange -ay