Posts posted by C-Fu
-
10 minutes ago, flyride said:
Thanks. Look in the archive folder (/etc/lvm/archive) and post the file with the newest timestamp, and also one with a date stamp from BEFORE the crash.
/etc/lvm/archive# ls -l
total 80
-rw-r--r-- 1 root root 2157 Sep 22 21:05 vg1_00000-1668747838.vg
-rw-r--r-- 1 root root 2157 Sep 22 21:44 vg1_00001-583193481.vg
-rw-r--r-- 1 root root 1459 Sep 22 21:44 vg1_00002-1264921506.vg
-rw-r--r-- 1 root root 1153 Sep 22 21:55 vg1_00003-185086001.vg
-rw-r--r-- 1 root root 1147 Sep 22 21:55 vg1_00004-805154513.vg
-rw-r--r-- 1 root root 1478 Sep 22 21:55 vg1_00005-448325956.vg
-rw-r--r-- 1 root root 2202 Sep 26 17:43 vg1_00006-1565525435.vg
-rw-r--r-- 1 root root 2202 Sep 26 17:43 vg1_00007-368672770.vg
-rw-r--r-- 1 root root 2200 Sep 26 17:43 vg1_00008-1121218288.vg
-rw-r--r-- 1 root root 2242 Sep 26 20:06 vg1_00009-1448678039.vg
-rw-r--r-- 1 root root 2569 Dec  8 15:45 vg1_00010-478377468.vg
-rw-r--r-- 1 root root 2569 Dec  8 15:45 vg1_00011-945038746.vg
-rw-r--r-- 1 root root 2570 Dec  8 15:45 vg1_00012-109591933.vg
-rw-r--r-- 1 root root 2589 Jan 17 03:26 vg1_00013-1309520600.vg
-rw-r--r-- 1 root root 2589 Jan 17 03:26 vg1_00014-1824124453.vg
-rw-r--r-- 1 root root 2583 Jan 17 17:13 vg1_00015-451330715.vg
-rw-r--r-- 1 root root 2583 Jan 17 17:13 vg1_00016-1144631688.vg
-rw-r--r-- 1 root root 1534 Sep 22 21:05 vg2_00000-531856239.vg
-rw-r--r-- 1 root root 1534 Sep 29 13:30 vg2_00001-629071759.vg
-rw-r--r-- 1 root root 1207 Sep 29 13:31 vg2_00002-739831571.vg
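For anyone following along: the newest snapshot can be picked out without eyeballing dates by sorting on mtime. A minimal sketch on a scratch directory (the filenames are copied from the listing above purely for illustration, and `touch -d` assumes GNU coreutils, which DSM's busybox may not provide):

```shell
# Build a scratch stand-in for /etc/lvm/archive and sort it by mtime,
# newest first -- the same `ls -t` trick works on the real directory.
dir=$(mktemp -d)
touch -d '2019-12-08 15:45' "$dir/vg1_00010-478377468.vg"
touch -d '2020-01-17 17:13' "$dir/vg1_00016-1144631688.vg"
ls -t "$dir" | head -n 1   # prints the most recently modified archive
```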
# cat vg1_00016-1144631688.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Fri Jan 17 17:13:56 2020

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/vgchange -ay /dev/vg1'"

creation_host = "homelab"    # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1579252436    # Fri Jan 17 17:13:56 2020

vg1 {
  id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
  seqno = 13
  format = "lvm2"    # informational
  status = ["RESIZEABLE", "READ", "WRITE"]
  flags = []
  extent_size = 8192    # 4 Megabytes
  max_lv = 0
  max_pv = 0
  metadata_copies = 0

  physical_volumes {

    pv0 {
      id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
      device = "/dev/md2"    # Hint only
      status = ["ALLOCATABLE"]
      flags = ["MISSING"]
      dev_size = 70210449792    # 32.6943 Terabytes
      pe_start = 1152
      pe_count = 8570611    # 32.6943 Terabytes
    }

    pv1 {
      id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
      device = "/dev/md4"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 23441974144    # 10.916 Terabytes
      pe_start = 1152
      pe_count = 2861569    # 10.916 Terabytes
    }

    pv2 {
      id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
      device = "/dev/md5"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 7811795712    # 3.63765 Terabytes
      pe_start = 1152
      pe_count = 953588    # 3.63765 Terabytes
    }
  }

  logical_volumes {

    syno_vg_reserved_area {
      id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 1

      segment1 {
        start_extent = 0
        extent_count = 3    # 12 Megabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 0 ]
      }
    }

    volume_1 {
      id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 4

      segment1 {
        start_extent = 0
        extent_count = 6427955    # 24.5207 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 3 ]
      }

      segment2 {
        start_extent = 6427955
        extent_count = 2861569    # 10.916 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv1", 0 ]
      }

      segment3 {
        start_extent = 9289524
        extent_count = 2142653    # 8.17357 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 6427958 ]
      }

      segment4 {
        start_extent = 11432177
        extent_count = 953359    # 3.63678 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv2", 0 ]
      }
    }
  }
}
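As a quick sanity check on that metadata (plain shell arithmetic, nothing here touches the array): the four volume_1 segments should add up to the LV's total extent count, and at 4 MiB per extent that should land on the 47.25 TiB the volume reports elsewhere in this thread.

```shell
# Sum the extent_count of volume_1's four segments (values copied
# from the metadata above) and convert 4 MiB extents to whole TiB.
total=$((6427955 + 2861569 + 2142653 + 953359))
echo "total extents: $total"                      # 12385536
echo "approx size in TiB: $((total * 4 / 1024 / 1024))"   # 47
```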
That's the newest; there are two archives at Jan 17 17:13. As for when the crash happened, I think it was around Dec 8? I don't remember the exact date, but I checked my chat logs with friends, and on Dec 12 I could still download stuff (the volume wasn't read-only yet).
Three archives at Dec 8:
vg1_00010-478377468.vg
# cat vg1_00010-478377468.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Dec 8 15:45:06 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/pvresize /dev/md5'"

creation_host = "homelab"    # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1575791106    # Sun Dec 8 15:45:06 2019

vg1 {
  id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
  seqno = 8
  format = "lvm2"    # informational
  status = ["RESIZEABLE", "READ", "WRITE"]
  flags = []
  extent_size = 8192    # 4 Megabytes
  max_lv = 0
  max_pv = 0
  metadata_copies = 0

  physical_volumes {

    pv0 {
      id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
      device = "/dev/md2"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 70210449792    # 32.6943 Terabytes
      pe_start = 1152
      pe_count = 8570611    # 32.6943 Terabytes
    }

    pv1 {
      id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
      device = "/dev/md4"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 23441974144    # 10.916 Terabytes
      pe_start = 1152
      pe_count = 2861569    # 10.916 Terabytes
    }

    pv2 {
      id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
      device = "/dev/md5"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 7811795712    # 3.63765 Terabytes
      pe_start = 1152
      pe_count = 953588    # 3.63765 Terabytes
    }
  }

  logical_volumes {

    syno_vg_reserved_area {
      id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 1

      segment1 {
        start_extent = 0
        extent_count = 3    # 12 Megabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 0 ]
      }
    }

    volume_1 {
      id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 4

      segment1 {
        start_extent = 0
        extent_count = 6427955    # 24.5207 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 3 ]
      }

      segment2 {
        start_extent = 6427955
        extent_count = 2861569    # 10.916 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv1", 0 ]
      }

      segment3 {
        start_extent = 9289524
        extent_count = 2142653    # 8.17357 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 6427958 ]
      }

      segment4 {
        start_extent = 11432177
        extent_count = 953359    # 3.63678 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv2", 0 ]
      }
    }
  }
}
vg1_00011-945038746.vg
# cat vg1_00011-945038746.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Dec 8 15:45:06 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/pvresize /dev/md4'"

creation_host = "homelab"    # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1575791106    # Sun Dec 8 15:45:06 2019

vg1 {
  id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
  seqno = 9
  format = "lvm2"    # informational
  status = ["RESIZEABLE", "READ", "WRITE"]
  flags = []
  extent_size = 8192    # 4 Megabytes
  max_lv = 0
  max_pv = 0
  metadata_copies = 0

  physical_volumes {

    pv0 {
      id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
      device = "/dev/md2"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 70210449792    # 32.6943 Terabytes
      pe_start = 1152
      pe_count = 8570611    # 32.6943 Terabytes
    }

    pv1 {
      id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
      device = "/dev/md4"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 23441974144    # 10.916 Terabytes
      pe_start = 1152
      pe_count = 2861569    # 10.916 Terabytes
    }

    pv2 {
      id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
      device = "/dev/md5"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 7811795712    # 3.63765 Terabytes
      pe_start = 1152
      pe_count = 953588    # 3.63765 Terabytes
    }
  }

  logical_volumes {

    syno_vg_reserved_area {
      id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 1

      segment1 {
        start_extent = 0
        extent_count = 3    # 12 Megabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 0 ]
      }
    }

    volume_1 {
      id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 4

      segment1 {
        start_extent = 0
        extent_count = 6427955    # 24.5207 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 3 ]
      }

      segment2 {
        start_extent = 6427955
        extent_count = 2861569    # 10.916 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv1", 0 ]
      }

      segment3 {
        start_extent = 9289524
        extent_count = 2142653    # 8.17357 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 6427958 ]
      }

      segment4 {
        start_extent = 11432177
        extent_count = 953359    # 3.63678 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv2", 0 ]
      }
    }
  }
}
vg1_00012-109591933.vg
# cat vg1_00012-109591933.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Dec 8 15:45:07 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/pvresize /dev/md2'"

creation_host = "homelab"    # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1575791107    # Sun Dec 8 15:45:07 2019

vg1 {
  id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
  seqno = 10
  format = "lvm2"    # informational
  status = ["RESIZEABLE", "READ", "WRITE"]
  flags = []
  extent_size = 8192    # 4 Megabytes
  max_lv = 0
  max_pv = 0
  metadata_copies = 0

  physical_volumes {

    pv0 {
      id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
      device = "/dev/md2"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 70210449792    # 32.6943 Terabytes
      pe_start = 1152
      pe_count = 8570611    # 32.6943 Terabytes
    }

    pv1 {
      id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
      device = "/dev/md4"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 23441974144    # 10.916 Terabytes
      pe_start = 1152
      pe_count = 2861569    # 10.916 Terabytes
    }

    pv2 {
      id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
      device = "/dev/md5"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 7811795712    # 3.63765 Terabytes
      pe_start = 1152
      pe_count = 953588    # 3.63765 Terabytes
    }
  }

  logical_volumes {

    syno_vg_reserved_area {
      id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 1

      segment1 {
        start_extent = 0
        extent_count = 3    # 12 Megabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 0 ]
      }
    }

    volume_1 {
      id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 4

      segment1 {
        start_extent = 0
        extent_count = 6427955    # 24.5207 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 3 ]
      }

      segment2 {
        start_extent = 6427955
        extent_count = 2861569    # 10.916 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv1", 0 ]
      }

      segment3 {
        start_extent = 9289524
        extent_count = 2142653    # 8.17357 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 6427958 ]
      }

      segment4 {
        start_extent = 11432177
        extent_count = 953359    # 3.63678 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv2", 0 ]
      }
    }
  }
}
Sept 29 - vg2_00002-739831571.vg
# cat vg2_00002-739831571.vg
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Sun Sep 29 13:31:56 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing '/sbin/vgremove -f /dev/vg2'"

creation_host = "homelab"    # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1569735116    # Sun Sep 29 13:31:56 2019

vg2 {
  id = "RJWo7j-o1qi-oYT7-jZP2-Ig5k-8ZOg-znuyZj"
  seqno = 4
  format = "lvm2"    # informational
  status = ["RESIZEABLE", "READ", "WRITE"]
  flags = []
  extent_size = 8192    # 4 Megabytes
  max_lv = 0
  max_pv = 0
  metadata_copies = 0

  physical_volumes {

    pv0 {
      id = "mPVTql-CJyu-b3mA-GZmd-B6OD-ItR9-6pEOfB"
      device = "/dev/md3"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 459199872    # 218.964 Gigabytes
      pe_start = 1152
      pe_count = 56054    # 218.961 Gigabytes
    }
  }

  logical_volumes {

    syno_vg_reserved_area {
      id = "cjfGiU-1Hnb-MNte-P2Fk-u7ZA-g8QH-7jfTWY"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 1

      segment1 {
        start_extent = 0
        extent_count = 3    # 12 Megabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 0 ]
      }
    }
  }
}
-
4 hours ago, flyride said:
Ok, sorry for being away so long. Will have some time in the next 24 hours to work on this. I was kinda hoping someone might have come up with some ideas about investigating lvm in my absence. But I built up a test environment to see if it can help at all. In the meantime, please run and post:
# pvdisplay -m
# cat /etc/lvm/backup/vg1
It's ok dude. Like you said, if the data is there, it will wait. I'm just taking the time to read and understand the commands and what everything did.
# pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               32.69 TiB / not usable 2.19 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8570611
  Free PE               0
  Allocated PE          8570611
  PV UUID               xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf
  --- Physical Segments ---
  Physical extent 0 to 2:
    Logical volume      /dev/vg1/syno_vg_reserved_area
    Logical extents     0 to 2
  Physical extent 3 to 6427957:
    Logical volume      /dev/vg1/volume_1
    Logical extents     0 to 6427954
  Physical extent 6427958 to 8570610:
    Logical volume      /dev/vg1/volume_1
    Logical extents     9289524 to 11432176

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg1
  PV Size               10.92 TiB / not usable 128.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2861569
  Free PE               0
  Allocated PE          2861569
  PV UUID               f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh
  --- Physical Segments ---
  Physical extent 0 to 2861568:
    Logical volume      /dev/vg1/volume_1
    Logical extents     6427955 to 9289523

  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg1
  PV Size               3.64 TiB / not usable 1.38 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              953588
  Free PE               229
  Allocated PE          953359
  PV UUID               U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF
  --- Physical Segments ---
  Physical extent 0 to 953358:
    Logical volume      /dev/vg1/volume_1
    Logical extents     11432177 to 12385535
  Physical extent 953359 to 953587:
    FREE
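The pvdisplay -m map can be cross-checked the same way as the archive metadata: on /dev/md2, the 3 reserved-area extents plus segments 1 and 3 of volume_1 should account for every allocated extent on the PV (plain shell arithmetic on the numbers above, nothing touches the array):

```shell
# /dev/md2: 3 reserved extents + volume_1 segment1 + volume_1 segment3
# should equal the PV's 8570611 allocated extents.
md2_mapped=$((3 + 6427955 + 2142653))
echo "md2 extents mapped: $md2_mapped"   # 8570611
```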
# cat /etc/lvm/backup/vg1
# Generated by LVM2 version 2.02.132(2)-git (2015-09-22): Fri Jan 17 17:13:56 2020

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing '/sbin/vgchange -ay /dev/vg1'"

creation_host = "homelab"    # Linux homelab 3.10.105 #23739 SMP Tue Jul 3 19:50:10 CST 2018 x86_64
creation_time = 1579252436    # Fri Jan 17 17:13:56 2020

vg1 {
  id = "2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp"
  seqno = 13
  format = "lvm2"    # informational
  status = ["RESIZEABLE", "READ", "WRITE"]
  flags = []
  extent_size = 8192    # 4 Megabytes
  max_lv = 0
  max_pv = 0
  metadata_copies = 0

  physical_volumes {

    pv0 {
      id = "xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf"
      device = "/dev/md2"    # Hint only
      status = ["ALLOCATABLE"]
      flags = ["MISSING"]
      dev_size = 70210449792    # 32.6943 Terabytes
      pe_start = 1152
      pe_count = 8570611    # 32.6943 Terabytes
    }

    pv1 {
      id = "f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh"
      device = "/dev/md4"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 23441974144    # 10.916 Terabytes
      pe_start = 1152
      pe_count = 2861569    # 10.916 Terabytes
    }

    pv2 {
      id = "U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF"
      device = "/dev/md5"    # Hint only
      status = ["ALLOCATABLE"]
      flags = []
      dev_size = 7811795712    # 3.63765 Terabytes
      pe_start = 1152
      pe_count = 953588    # 3.63765 Terabytes
    }
  }

  logical_volumes {

    syno_vg_reserved_area {
      id = "OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 1

      segment1 {
        start_extent = 0
        extent_count = 3    # 12 Megabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 0 ]
      }
    }

    volume_1 {
      id = "SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv"
      status = ["READ", "WRITE", "VISIBLE"]
      flags = []
      segment_count = 4

      segment1 {
        start_extent = 0
        extent_count = 6427955    # 24.5207 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 3 ]
      }

      segment2 {
        start_extent = 6427955
        extent_count = 2861569    # 10.916 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv1", 0 ]
      }

      segment3 {
        start_extent = 9289524
        extent_count = 2142653    # 8.17357 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv0", 6427958 ]
      }

      segment4 {
        start_extent = 11432177
        extent_count = 953359    # 3.63678 Terabytes
        type = "striped"
        stripe_count = 1    # linear
        stripes = [ "pv2", 0 ]
      }
    }
  }
}
-
4 hours ago, IG-88 said:
whats the controller arrangement in your system? (2 x 8 port sas controller and 1 x 4 port onboard?)
ssd seemed to be 1st, on what controller was that? in what order does dsm use the controllers?
2x 4-port SAS controllers, 1x 6-port onboard.
SSD is onboard.
The WD Reds and some WD Purples are onboard (5 in total, all 3TB).
Some 2TB WD Purples, the 10TB WD White, and the 3x5 IronWolfs are on the SAS.
I also have a sata multiplier card just in case.
4 hours ago, IG-88 said:
you might have power supply troubles, more disks more problems
That's true. But I have a reliable Cooler Master 750W that I use, and I switched to a 1600W PSU while troubleshooting the hardware issues. Even so, 14 drives x 20W (at peak) = 280W; if I include the i7-4770, the SAS cards, and 4 sticks of RAM, I'm still well within 80% load.
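The back-of-envelope load math holds up. As a sketch (the 20 W per-drive peak figure is from the post above; the ~10 W per HBA and ~10 W total for the DIMMs are rough assumptions; 84 W is the i7-4770's rated TDP):

```shell
# Rough peak-load estimate for a 14-drive box on a 750 W supply.
drives=$((14 * 20))    # ~20 W per spinning drive at peak
cpu=84                 # i7-4770 TDP
hbas=$((2 * 10))       # two SAS HBAs at ~10 W each (assumption)
ram=10                 # four DIMMs, rough figure (assumption)
total=$((drives + cpu + hbas + ram))
echo "estimated peak: ${total} W, budget: $((750 * 80 / 100)) W (80% of 750 W)"
```

Even with generous fudge factors the estimate stays well under the 600 W budget, which supports the "probably not the PSU" read.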
I too figured it might (slight chance, unlikely) be a power issue with the bad 10TB drive, but right now my priority is salvaging what's left; I'll figure out what to do about the 10TB drive later.
4 hours ago, IG-88 said:if you get things running in the state as it is now (access to data) you might think about not doing any hardware changes, no additional disks, nothing inside the case/housing, maybe even not rebooting
I still can't access any data yet; I believe there are still steps I need @flyride to help me with, but I don't even want to touch the rig at the moment.
Thanks for the insight, I'm going to need all the help I can get.
-
Just now, flyride said:
# pvdisplay -v
Here ya go
# pvdisplay -v
    Using physical volume(s) on command line.
    Wiping cache of LVM-capable devices
  There are 1 physical volumes missing.
  There are 1 physical volumes missing.
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               32.69 TiB / not usable 2.19 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8570611
  Free PE               0
  Allocated PE          8570611
  PV UUID               xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg1
  PV Size               10.92 TiB / not usable 128.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2861569
  Free PE               0
  Allocated PE          2861569
  PV UUID               f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh

  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg1
  PV Size               3.64 TiB / not usable 1.38 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              953588
  Free PE               229
  Allocated PE          953359
  PV UUID               U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF
-
1 minute ago, flyride said:
# pvdisplay -a
Is this normal?
root@homelab:~# pvdisplay -a
  Incompatible options selected
  Run 'pvdisplay --help' for more information.
root@homelab:~# pvdisplay --help
  pvdisplay: Display various attributes of physical volume(s)

  pvdisplay [-c|--colon] [--commandprofile ProfileName] [-d|--debug] [--foreign] [-h|--help] [--ignorelockingfailure] [--ignoreskippedcluster] [-m|--maps] [--nosuffix] [--readonly] [-S|--select Selection] [-s|--short] [--units hHbBsSkKmMgGtTpPeE] [-v|--verbose] [--version] [PhysicalVolumePath [PhysicalVolumePath...]]

  pvdisplay --columns|-C [--aligned] [-a|--all] [--binary] [--commandprofile ProfileName] [-d|--debug] [--foreign] [-h|--help] [--ignorelockingfailure] [--ignoreskippedcluster] [--noheadings] [--nosuffix] [-o|--options [+]Field[,Field]] [-O|--sort [+|-]key1[,[+|-]key2[,...]]] [-S|--select Selection] [--readonly] [--separator Separator] [--unbuffered] [--units hHbBsSkKmMgGtTpPeE] [-v|--verbose] [--version] [PhysicalVolumePath [PhysicalVolumePath...]]
-
1 minute ago, flyride said:
# lvm vgscan
That was quick. The command didn't take any time at all.
# lvm vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "vg1" using metadata type lvm2
-
4 minutes ago, flyride said:
# pvscan -v -v
Weird. Command not found
root@homelab:~# find / -name pvscan
root@homelab:~#
-
2 minutes ago, flyride said:
So far, so good.
Whoops
root@homelab:~# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1/volume_1 /volume1 btrfs  0 0
root@homelab:~# vgchange -ay
  Refusing activation of partial LV vg1/syno_vg_reserved_area.  Use '--activationmode partial' to override.
  Refusing activation of partial LV vg1/volume_1.  Use '--activationmode partial' to override.
  0 logical volume(s) in volume group "vg1" now active
-
root@homelab:~# pvs
  PV       VG   Fmt  Attr PSize  PFree
  /dev/md2 vg1  lvm2 a-m  32.69t      0
  /dev/md4 vg1  lvm2 a--  10.92t      0
  /dev/md5 vg1  lvm2 a--   3.64t 916.00m
root@homelab:~# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg1    3   2   0 wz-pn- 47.25t 916.00m
root@homelab:~# lvs
  LV                    VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1  -wi-----p- 12.00m
  volume_1              vg1  -wi-----p- 47.25t
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               32.69 TiB / not usable 2.19 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              8570611
  Free PE               0
  Allocated PE          8570611
  PV UUID               xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg1
  PV Size               10.92 TiB / not usable 128.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2861569
  Free PE               0
  Allocated PE          2861569
  PV UUID               f8dzdz-Eb43-Q7PD-6Vxx-qCJT-4okI-R7ffIh

  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg1
  PV Size               3.64 TiB / not usable 1.38 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              953588
  Free PE               229
  Allocated PE          953359
  PV UUID               U5BW8z-Pm2a-x0hj-5BpO-8NCp-nocX-icNciF
# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  13
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                2
  VG Size               47.25 TiB
  PE Size               4.00 MiB
  Total PE              12385768
  Alloc PE / Size       12385539 / 47.25 TiB
  Free  PE / Size       229 / 916.00 MiB
  VG UUID               2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp
# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                OJfeP6-Rnd9-2TgX-wPFd-P3pk-NDt5-pPOhr3
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                SfGkye-GcMM-HrO2-z9xK-oGwY-cqmm-XZnHMv
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                47.25 TiB
  Current LE             12385536
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
-
29 minutes ago, flyride said:
But if everything goes well, you should have a mounted, degraded /dev/md2. Under no circumstances attempt to add or replace a disk to resync it.
I understand what you're trying to say, and whatever happens I accept 😁
I've tried multiple times to plug and replug the "bad" 10TB drive, but mdstat always hangs.
Anyway, I've run the three commands.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdk5[12] sdq5[10] sdp5[9] sdo5[8] sdn5[7] sdm5[6] sdl5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1] sdb5[0]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUUU_U]
md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
md5 : active raid1 sdo7[2]
      3905898432 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdk2[5] sdl2[6] sdm2[7] sdn2[8] sdo2[9] sdp2[10] sdq2[11]
      2097088 blocks [24/12] [UUUUUUUUUUUU____________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5]
      2490176 blocks [12/4] [_UUU_U______]

unused devices: <none>
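For readers decoding the mdstat status strings: in each bracket a U is a healthy member and a _ an empty slot, so md2's [13/12] [UUUUUUUUUUU_U] means 13 slots with one member missing. A tiny shell sketch of the same count:

```shell
# Count missing members in an mdstat status string: every '_' is a
# slot with no active device behind it.
status='UUUUUUUUUUU_U'    # md2 as shown above: 13 slots, 12 up
missing=$(printf '%s' "$status" | tr -cd '_' | wc -c)
echo "missing members: $missing"   # 1
```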
-
2 minutes ago, flyride said:
Well, that's fun.
# mdadm --examine /dev/sd[bcdefklmnopq]5 | egrep 'Role|/dev/sd'
# mdadm --examine /dev/sd[bcdefklmnopq]5 | egrep 'Role|/dev/sd'
/dev/sdb5:
   Device Role : Active device 0
/dev/sdc5:
   Device Role : Active device 1
/dev/sdd5:
   Device Role : Active device 2
/dev/sde5:
   Device Role : Active device 3
/dev/sdf5:
mdadm: No md superblock detected on /dev/sdo5.
   Device Role : Active device 4
/dev/sdk5:
   Device Role : Active device 12
/dev/sdl5:
   Device Role : Active device 7
/dev/sdm5:
   Device Role : Active device 8
/dev/sdn5:
   Device Role : Active device 9
/dev/sdp5:
   Device Role : Active device 5
/dev/sdq5:
   Device Role : Active device 6
-
6 minutes ago, flyride said:
If there is an unexpected reboot or an additional device failure please let me know.
Nothing happened after I took out the power and SATA cables from the 10TB. I'd have to get another 10TB from Amazon if I need to replace it; shipping will take up to two weeks. I also have two 8TB drives on standby that were ready to be shucked right before this happened.
root@homelab:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 13
  Total Devices : 9
    Persistence : Superblock is persistent
    Update Time : Sat Jan 18 07:23:02 2020
          State : clean, FAILED
 Active Devices : 9
Working Devices : 9
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : homelab:2  (local to host homelab)
           UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
         Events : 371031

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       3       8       69        3      active sync   /dev/sde5
       4       8       85        4      active sync   /dev/sdf5
       -       0        0        5      removed
      13      65        5        6      active sync   /dev/sdq5
       7       8      181        7      active sync   /dev/sdl5
       8       8      197        8      active sync   /dev/sdm5
       9       8      213        9      active sync   /dev/sdn5
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed
# mdadm --detail /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:04 2019
     Raid Level : raid5
     Array Size : 11720987648 (11178.00 GiB 12002.29 GB)
  Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB)
   Raid Devices : 5
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sat Jan 18 07:23:02 2020
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : homelab:4  (local to host homelab)
           UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0
         Events : 7200

    Number   Major   Minor   RaidDevice State
       0       8     182        0      active sync   /dev/sdl6
       1       8     198        1      active sync   /dev/sdm6
       2       8     214        2      active sync   /dev/sdn6
       5       8     230        3      active sync   /dev/sdo6
       -       0       0        4      removed
# mdadm --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Tue Sep 24 19:36:08 2019
     Raid Level : raid1
     Array Size : 3905898432 (3724.96 GiB 3999.64 GB)
  Used Dev Size : 3905898432 (3724.96 GiB 3999.64 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent
    Update Time : Sat Jan 18 07:22:58 2020
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
           Name : homelab:5  (local to host homelab)
           UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747
         Events : 223918

    Number   Major   Minor   RaidDevice State
       -       0       0        0      removed
       2       8     231        1      active sync   /dev/sdo7
# mdadm --examine /dev/sd[bcdefklmnopq]5 | egrep 'Event|/dev/sd'
/dev/sdb5:
         Events : 371031
/dev/sdc5:
         Events : 371031
/dev/sdd5:
         Events : 371031
/dev/sde5:
         Events : 371031
/dev/sdf5:
mdadm: No md superblock detected on /dev/sdo5.
         Events : 371031
/dev/sdk5:
         Events : 370998
/dev/sdl5:
         Events : 371031
/dev/sdm5:
         Events : 371031
/dev/sdn5:
         Events : 371031
/dev/sdp5:
         Events : 370988
/dev/sdq5:
         Events : 371031
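Those event counters tell the story: a member whose counter lags the maximum was ejected earlier and holds stale data, and the size of the lag hints at how much the array changed after it dropped out. A small awk sketch over values transcribed from the output above (only the interesting devices included):

```shell
# Compare md member event counters; anything below the maximum is a
# stale member that fell out of the array earlier.
awk '{ ev[$1] = $2; if ($2 > max) max = $2 }
     END { for (d in ev) if (ev[d] < max) print d, "lags by", max - ev[d] }' <<'EOF' | sort
sdb5 371031
sdk5 370998
sdp5 370988
sdq5 371031
EOF
```

Here sdk5 and sdp5 are the laggards, matching the slots mdadm --detail reports as removed.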
-
@flyride I believe I've pinpointed the "bad" drive; it's one of the 10TB ones.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/9] [UUUUU_UUUU___]
md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
md5 : active raid1 sdo7[2]
      3905898432 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/12] [UUUUUUUUUUUU____________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5]
      2490176 blocks [12/4] [_UUU_U______]

unused devices: <none>
# fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x696935dc

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048  468857024  468854977 223.6G fd Linux raid autodetect

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37

Device       Start        End    Sectors Size Type
/dev/sdb1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304   2G Linux RAID
/dev/sdb5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C

Device       Start        End    Sectors Size Type
/dev/sdc1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdc2  4982528    9176831    4194304   2G Linux RAID
/dev/sdc5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8

Device       Start        End    Sectors Size Type
/dev/sdd1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdd2  4982528    9176831    4194304   2G Linux RAID
/dev/sdd5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA

Device       Start        End    Sectors Size Type
/dev/sde1     2048    4982527    4980480 2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304   2G Linux RAID
/dev/sde5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87

Device       Start        End    Sectors Size Type
/dev/sdf1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdf2  4982528    9176831    4194304   2G Linux RAID
/dev/sdf5  9453280 5860326239 5850872960 2.7T Linux RAID

GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite).
Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F

Device          Start    End Sectors Size Type
/dev/synoboot1   2048  32767   30720  15M EFI System
/dev/synoboot2  32768  94207   61440  30M Linux filesystem
/dev/synoboot3  94208 102366    8159   4M BIOS boot

Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071

Device       Start        End    Sectors Size Type
/dev/sdk1     2048    4982527    4980480 2.4G Linux RAID
/dev/sdk2  4982528    9176831    4194304   2G Linux RAID
/dev/sdk5  9453280 5860326239 5850872960 2.7T Linux RAID

Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F

Device          Start         End    Sectors Size Type
/dev/sdl1        2048     4982527    4980480 2.4G Linux RAID
/dev/sdl2     4982528     9176831    4194304   2G Linux RAID
/dev/sdl5     9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdl6  5860342336 11720838239 5860495904 2.7T Linux RAID

Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C

Device          Start         End    Sectors Size Type
/dev/sdm1        2048     4982527    4980480 2.4G Linux RAID
/dev/sdm2     4982528     9176831    4194304   2G Linux RAID
/dev/sdm5     9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdm6  5860342336 11720838239 5860495904 2.7T Linux RAID

Disk /dev/sdn: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00

Device          Start         End    Sectors Size Type
/dev/sdn1        2048     4982527    4980480 2.4G Linux RAID
/dev/sdn2     4982528     9176831    4194304   2G Linux RAID
/dev/sdn5     9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdn6  5860342336 11720838239 5860495904 2.7T Linux RAID

Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device           Start         End    Sectors Size Type
/dev/sdo1         2048     4982527    4980480 2.4G Linux RAID
/dev/sdo2      4982528     9176831    4194304   2G Linux RAID
/dev/sdo5      9453280  5860326239 5850872960 2.7T Linux RAID
/dev/sdo6   5860342336 11720838239 5860495904 2.7T Linux RAID
/dev/sdo7  11720854336 19532653311 7811798976 3.7T Linux RAID

Disk /dev/sdp: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507 Device Start End Sectors Size Type /dev/sdp1 2048 4982527 4980480 2.4G Linux RAID /dev/sdp2 4982528 9176831 4194304 2G Linux RAID /dev/sdp5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A Device Start End Sectors Size Type /dev/sdq1 2048 4982527 4980480 2.4G Linux RAID /dev/sdq2 4982528 9176831 4194304 2G Linux RAID /dev/sdq5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes 
Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/md5: 3.7 TiB, 3999639994368 bytes, 7811796864 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/md4: 10.9 TiB, 12002291351552 bytes, 23441975296 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 65536 bytes / 262144 bytes
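One thing that stands out in this fdisk output is which md arrays actually came up: md0, md1, md4 and md5 are listed, but md2 (the PV that LVM flagged as MISSING) is not. A rough sketch to list the arrays fdisk sees, assuming the output above was saved to a file (fdisk.txt is a hypothetical name):

```shell
# Save the listing first:  fdisk -l > fdisk.txt
# Then extract every md device mentioned, deduplicated:
grep -o '/dev/md[0-9]*' fdisk.txt | sort -u
```

Comparing that list against the PV device hints in the LVM archive (/dev/md2, /dev/md4, /dev/md5) shows immediately which array is absent.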
-
1 hour ago, flyride said:
Please, try and isolate and fix your hardware. Have you replaced every SATA cable? Are you SURE your power is good? You have a LOT of drives. Vibration? Cooling? Or just a drive failure? The fact that multiple drives have gone down suggests that you have some fundamental problem.
If mdstat hangs, there's a reason. If there is really a bad drive, narrow it down and take it out of the system. We'll try to work with what's left. But the hardware has to work, otherwise time is being wasted.
The SATA cables for the 2x 10TB drives have been replaced.
All drives have their own fans, in groups of 4 or 5.
The PSU is a 750W 80 Plus Bronze unit.
I'm using an i7-4770 with one SATA port-multiplier card and one SAS HBA card.
The reason I posted the past few messages was to get your opinion on which potentially bad drives I need to isolate. But I'll keep trying and will let you know, thanks!!
-
@flyride In case it helps, this is the output of mdadm --examine /dev/sd??
# mdadm --examine /dev/sd?? /dev/sda1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 10743092:68743fb9:59e82c9a:24dcf27b Name : homelab:3 (local to host homelab) Creation Time : Sun Sep 29 13:33:05 2019 Raid Level : raid0 Raid Devices : 1 Avail Dev Size : 468852864 (223.57 GiB 240.05 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=65 sectors State : clean Device UUID : 515b66a6:1281d06f:01f2f8a0:26f16b69 Update Time : Sat Jan 11 20:19:40 2020 Checksum : a281b55f - correct Events : 30 Chunk Size : 64K Device Role : Active device 0 Array State : A ('A' == active, '.' == missing, 'R' == replacing) /dev/sdb1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd674406 - correct Events : 589489 Number Major Minor RaidDevice State this 1 8 17 1 active sync /dev/sdb1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdb2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 
Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3bb - correct Events : 77 Number Major Minor RaidDevice State this 0 8 18 0 active sync /dev/sdb2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdb5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : a8109f74:46bc8509:6fc3bca8:9fddb6a7 Update Time : Sat Jan 18 03:53:54 2020 Checksum : b349fec - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 0 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdc1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd674418 - correct Events : 589489 Number Major Minor RaidDevice State this 2 8 33 2 active sync /dev/sdc1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdc2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3cd - correct Events : 77 Number Major Minor RaidDevice State this 1 8 34 1 active sync /dev/sdc2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdc5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 8dfdc601:e01f8a98:9a8e78f1:a7951260 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 2878739f - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 1 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdd1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd67442a - correct Events : 589489 Number Major Minor RaidDevice State this 3 8 49 3 active sync /dev/sdd1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdd2: Magic : a92b4efc Version : 0.90.00 UUID : 
192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3df - correct Events : 77 Number Major Minor RaidDevice State this 2 8 50 2 active sync /dev/sdd2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdd5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : f98bc050:a4b46deb:c3168fa0:08d90061 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 7a5a625a - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 2 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sde1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 11 Preferred Minor : 0 Update Time : Sat Jan 11 17:05:52 2020 State : active Active Devices : 11 Working Devices : 11 Failed Devices : 1 Spare Devices : 0 Checksum : dd547182 - correct Events : 507168 Number Major Minor RaidDevice State this 11 8 65 11 active sync /dev/sde1 0 0 8 209 0 active sync /dev/sdn1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 225 4 active sync /dev/sdo1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 241 10 active sync /dev/sdp1 11 11 8 65 11 active sync /dev/sde1 /dev/sde2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3f1 - correct Events : 77 Number Major Minor RaidDevice State this 3 8 66 3 active sync /dev/sde2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 
faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sde5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 1e2742b7:d1847218:816c7135:cdf30c07 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 48cf3e2f - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 3 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdf1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd67444e - correct Events : 589489 Number Major Minor RaidDevice State this 5 8 81 5 active sync /dev/sdf1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdf2: Magic : a92b4efc Version : 0.90.00 UUID : 
192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca403 - correct Events : 77 Number Major Minor RaidDevice State this 4 8 82 4 active sync /dev/sdf2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdf5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : ce60c47e:14994160:da4d1482:fd7901f2 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 8fb7f3ad - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 4 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdg1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd67445c - correct Events : 589489 Number Major Minor RaidDevice State this 4 8 97 4 active sync /dev/sdg1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdg2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca415 - correct Events : 77 Number Major Minor RaidDevice State this 5 8 98 5 active sync /dev/sdg2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdg5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : a64f01c2:76c56102:38ad7c4e:7bce88d1 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 2104c2d1 - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 12 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdk1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744a0 - correct Events : 589489 Number Major Minor RaidDevice State this 6 8 161 6 active sync /dev/sdk1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdk2: Magic : a92b4efc Version : 0.90.00 UUID : 
192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca457 - correct Events : 77 Number Major Minor RaidDevice State this 6 8 162 6 active sync /dev/sdk2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdk5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 706c5124:d647d300:733fb961:e5cd8127 Update Time : Sat Jan 18 03:53:54 2020 Checksum : eafbf391 - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 5 Array State : AAAAAAAAAA.AA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdl1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744b2 - correct Events : 589489 Number Major Minor RaidDevice State this 7 8 177 7 active sync /dev/sdl1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdl2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca469 - correct Events : 77 Number Major Minor RaidDevice State this 7 8 178 7 active sync /dev/sdl2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdl5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 6993b9eb:8ad7c80f:dc17268f:a8efa73d Update Time : Sat Jan 18 03:53:54 2020 Checksum : 4eca46e - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 7 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdl6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 7012016d:a3255ddf:3f30807e:1f591523 Update Time : Sat Jan 18 03:53:54 2020 Checksum : aac46c0 - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 0 Array State : AAAA. ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdm1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744c4 - correct Events : 589489 Number Major Minor RaidDevice State this 8 8 193 8 active sync /dev/sdm1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdm2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca47b - correct Events : 77 Number Major Minor RaidDevice State this 8 8 194 8 active sync /dev/sdm2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 
faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdm5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 2f1247d1:a536d2ad:ba2eb47f:a7eaf237 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 735c942d - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 8 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdm6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : a8d3f92e:69942435:fc88a07d:fed5cf67 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 66474c5a - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 1 Array State : AAAA. ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdn1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 13 Preferred Minor : 0 Update Time : Fri Jan 3 02:47:02 2020 State : clean Active Devices : 12 Working Devices : 13 Failed Devices : 0 Spare Devices : 1 Checksum : dd4170cb - correct Events : 2053 Number Major Minor RaidDevice State this 12 8 177 12 spare /dev/sdl1 0 0 8 193 0 active sync /dev/sdm1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 65 4 active sync /dev/sde1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 129 6 active sync 7 7 8 145 7 active sync 8 8 8 161 8 active sync /dev/sdk1 9 9 8 241 9 active sync /dev/sdp1 10 10 8 225 10 active sync /dev/sdo1 11 11 8 209 11 active sync /dev/sdn1 12 12 8 177 12 spare /dev/sdl1 /dev/sdn2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca48d - correct Events : 77 Number Major Minor RaidDevice State this 9 8 210 9 active sync /dev/sdn2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 
13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed mdadm: No md superblock detected on /dev/sdn5. /dev/sdn6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 063b1204:f6d34bd3:84076416:c4d99e6f Update Time : Sat Jan 18 03:53:54 2020 Checksum : 7a197597 - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 3 Array State : AAAA. ('A' == active, '.' == missing, 'R' == replacing) /dev/sdn7: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747 Name : homelab:5 (local to host homelab) Creation Time : Tue Sep 24 19:36:08 2019 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 7811796928 (3724.96 GiB 3999.64 GB) Array Size : 3905898432 (3724.96 GiB 3999.64 GB) Used Dev Size : 7811796864 (3724.96 GiB 3999.64 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=64 sectors State : clean Device UUID : 43162b40:db6c93b3:98025029:72b7e3d4 Update Time : Sat Jan 18 01:48:45 2020 Checksum : 2940157e - correct Events : 223913 Device Role : Active device 1 Array State : .A ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdo1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 9 Preferred Minor : 0 Update Time : Sat Jan 18 05:16:23 2020 State : clean Active Devices : 9 Working Devices : 9 Failed Devices : 2 Spare Devices : 0 Checksum : dd6744e8 - correct Events : 589489 Number Major Minor RaidDevice State this 10 8 225 10 active sync /dev/sdo1 0 0 0 0 0 removed 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 225 10 active sync /dev/sdo1 11 11 0 0 11 faulty removed /dev/sdo2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca49f - correct Events : 77 Number Major Minor RaidDevice State this 10 8 226 10 active sync /dev/sdo2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 
14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdo5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 1b4ab27d:bb7488fa:a6cc1f75:d21d1a83 Update Time : Sat Jan 18 03:53:54 2020 Checksum : ad10db47 - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 9 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdo6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 856be4c3:8a458aaf:f0051c80:c8969855 Update Time : Sat Jan 18 03:53:54 2020 Checksum : 65dbd318 - correct Events : 7165 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 2 Array State : AAAA. ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdp1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 10 Preferred Minor : 0 Update Time : Sat Jan 18 01:23:44 2020 State : clean Active Devices : 10 Working Devices : 10 Failed Devices : 2 Spare Devices : 0 Checksum : dd66c00d - correct Events : 579348 Number Major Minor RaidDevice State this 0 8 241 0 active sync /dev/sdp1 0 0 8 241 0 active sync /dev/sdp1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 8 97 4 active sync /dev/sdg1 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 0 0 9 faulty removed 10 10 8 209 10 active sync /dev/sdn1 11 11 0 0 11 faulty removed /dev/sdp2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca4b1 - correct Events : 77 Number Major Minor RaidDevice State this 11 8 242 11 active sync /dev/sdp2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty 
removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdp5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 73610f83:fb3cf895:c004147e:b4de2bfe Update Time : Sat Jan 18 03:53:54 2020 Checksum : d1e3b3dd - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 11 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing) /dev/sdp6: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Name : homelab:4 (local to host homelab) Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Raid Devices : 5 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB) Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=32 sectors State : clean Device UUID : 8a4f9f1d:b7041df1:dd74acd4:2dbb4f4b Update Time : Sat Jan 18 01:03:16 2020 Checksum : 4b76e17f - correct Events : 7134 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 4 Array State : AAAAA ('A' == active, '.' 
== missing, 'R' == replacing) /dev/sdp7: Magic : a92b4efc Version : 1.2 Feature Map : 0x2 Array UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747 Name : homelab:5 (local to host homelab) Creation Time : Tue Sep 24 19:36:08 2019 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 7811796928 (3724.96 GiB 3999.64 GB) Array Size : 3905898432 (3724.96 GiB 3999.64 GB) Used Dev Size : 7811796864 (3724.96 GiB 3999.64 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Recovery Offset : 3 sectors Unused Space : before=1968 sectors, after=64 sectors State : clean Device UUID : 2be1c9ab:04ddd9c3:56f1a702:c429de92 Update Time : Sat Jan 18 05:16:27 2020 Checksum : 3c894d60 - correct Events : 1168826 Device Role : Active device 0 Array State : A. ('A' == active, '.' == missing, 'R' == replacing) /dev/sdq1: Magic : a92b4efc Version : 0.90.00 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Array Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 12 Preferred Minor : 0 Update Time : Fri Jan 10 19:58:06 2020 State : clean Active Devices : 11 Working Devices : 11 Failed Devices : 1 Spare Devices : 0 Checksum : dd594c7c - correct Events : 450677 Number Major Minor RaidDevice State this 9 65 1 9 active sync /dev/sdq1 0 0 8 225 0 active sync /dev/sdo1 1 1 8 17 1 active sync /dev/sdb1 2 2 8 33 2 active sync /dev/sdc1 3 3 8 49 3 active sync /dev/sdd1 4 4 0 0 4 faulty removed 5 5 8 81 5 active sync /dev/sdf1 6 6 8 161 6 active sync /dev/sdk1 7 7 8 177 7 active sync /dev/sdl1 8 8 8 193 8 active sync /dev/sdm1 9 9 65 1 9 active sync /dev/sdq1 10 10 8 209 10 active sync /dev/sdn1 11 11 8 65 11 active sync /dev/sde1 /dev/sdq2: Magic : a92b4efc Version : 0.90.00 UUID : 192347bc:9d076f47:cc8c244d:4f76664d (local to host homelab) Creation Time : Sat Jan 18 02:07:12 2020 Raid Level : raid1 Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Array Size : 2097088 (2047.94 MiB 2147.42 MB) 
Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Update Time : Sat Jan 18 03:54:16 2020 State : active Active Devices : 13 Working Devices : 13 Failed Devices : 11 Spare Devices : 0 Checksum : 37bca3fc - correct Events : 77 Number Major Minor RaidDevice State this 12 65 2 12 active sync /dev/sdq2 0 0 8 18 0 active sync /dev/sdb2 1 1 8 34 1 active sync /dev/sdc2 2 2 8 50 2 active sync /dev/sdd2 3 3 8 66 3 active sync /dev/sde2 4 4 8 82 4 active sync /dev/sdf2 5 5 8 98 5 active sync /dev/sdg2 6 6 8 162 6 active sync /dev/sdk2 7 7 8 178 7 active sync /dev/sdl2 8 8 8 194 8 active sync /dev/sdm2 9 9 8 210 9 active sync /dev/sdn2 10 10 8 226 10 active sync /dev/sdo2 11 11 8 242 11 active sync /dev/sdp2 12 12 65 2 12 active sync /dev/sdq2 13 13 0 0 13 faulty removed 14 14 0 0 14 faulty removed 15 15 0 0 15 faulty removed 16 16 0 0 16 faulty removed 17 17 0 0 17 faulty removed 18 18 0 0 18 faulty removed 19 19 0 0 19 faulty removed 20 20 0 0 20 faulty removed 21 21 0 0 21 faulty removed 22 22 0 0 22 faulty removed 23 23 0 0 23 faulty removed /dev/sdq5: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Name : homelab:2 (local to host homelab) Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Raid Devices : 13 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB) Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Unused Space : before=1968 sectors, after=0 sectors State : clean Device UUID : 5cc6456d:bfc950bf:1baf6fef:aabec947 Update Time : Sat Jan 18 03:53:54 2020 Checksum : a06c2fdd - correct Events : 370988 Layout : left-symmetric Chunk Size : 64K Device Role : Active device 6 Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
cat /proc/mdstat hangs indefinitely now. I didn't do anything else other than cat /proc/mdstat, fdisk -l, and whatever I've written above.
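In case it's useful to anyone following along: a read of /proc/mdstat can block forever when an md kernel thread is stuck, so I've been wrapping it so my shell survives. A minimal sketch, assuming coreutils `timeout` is available (it is on DSM):

```shell
# Minimal sketch: don't let a stuck md thread hang the shell forever.
# If the read blocks, coreutils 'timeout' kills it after 10 seconds.
out=$(timeout 10 cat /proc/mdstat 2>/dev/null) || out="mdstat read timed out"
echo "$out"
```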
-
# hdparm -i /dev/sd?

/dev/sda:

 Model=KINGSTON SV300S37A240G, FwRev=580ABBF0, SerialNo=50026B724C0476CA
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=unknown, MaxMultSect=1, MultSect=1
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=468862128
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown: ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode

/dev/sdb:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1324120
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdc:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T0889051
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdd:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1064335
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sde:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1020714
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdf:

 Model=WDC WD30EFRX-68AX9N0, FwRev=80.00A80, SerialNo=WD-WMC1T1342187
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdg:

 Model=WDC WD30PURX-64P6ZY0, FwRev=80.00A80, SerialNo=WD-WMC4N0H64Z0C
 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=5860533168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7

 * signifies the current active mode

/dev/sdk:
 HDIO_GET_IDENTITY failed: Invalid argument
hdparm stops at /dev/sdk.
# smartctl -i /dev/sdk
smartctl 6.5 (build date Jun 8 2018) [x86_64-linux-3.10.105] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Purple
Device Model:     WDC WD30PURX-64P6ZY0
Serial Number:    WD-WMC4N0H7LYL1
LU WWN Device Id: 5 0014ee 6afcf440c
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 18 03:59:18 2020 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
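Since hdparm dies on /dev/sdk but smartctl still answers, I'm mapping serial numbers to slots so I can trace cables. A small sketch that pulls the fields I care about out of `smartctl -i` text (the sample below is copied from the sdk output above; against live drives you'd pipe `smartctl -i /dev/sdX` in instead):

```shell
# Sketch: extract model and serial from 'smartctl -i' output.
# Sample text copied from the /dev/sdk output above.
info='Device Model:     WDC WD30PURX-64P6ZY0
Serial Number:    WD-WMC4N0H7LYL1
Firmware Version: 80.00A80'
# Split on "colon + spaces" and keep only the two identifying fields.
echo "$info" | awk -F': *' '/Device Model|Serial Number/ { print $1 "=" $2 }'
```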
-
25 minutes ago, flyride said:
See if you can sort out what's going on. Maybe a power problem? Cabling, drive physical stability, it can all factor. Advise when you have made a decision.
Sorry this might be going down the tubes. I was pretty confident of our success until very recently!
I think it might just be a loose data and/or power cable. I connected the 2x10TB drives to an LSI SAS card btw. No more clicking.
And mdstat didn't hang this time; it just took a few minutes.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

md2 : active raid5 sdb5[0] sdg5[10] sdp5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]

md5 : active raid1 sdp7[3]
      3905898432 blocks super 1.2 [2/0] [__]

md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[5] sdk2[6] sdl2[7] sdm2[8] sdn2[10] sdo2[9] sdp2[11] sdq2[12]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdn1[10]
      2490176 blocks [12/9] [_UUUUUUUU_U_]

unused devices: <none>
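Reading the [n/m] counters: md4 has 4 of 5 members, md2 has 12 of 13, and md5 has 0 of 2 working even though sdp7 is listed. To tally the missing slots mechanically, here's a sketch over status strings copied from the mdstat output above (each '_' is a missing member):

```shell
# Sketch: count missing members ('_') per array. The status strings
# below are copied from the mdstat output above.
mdstat='md4 [5/4] [UUUU_]
md2 [13/12] [UUUUUUUUUU_UU]
md5 [2/0] [__]
md1 [24/13] [UUUUUUUUUUUUU___________]
md0 [12/9] [_UUUUUUUU_U_]'
# gsub() returns the number of substitutions, i.e. the underscore count.
echo "$mdstat" | awk '{ print $1, "missing:", gsub(/_/, "_", $3) }'
```

which confirms md5 is missing both of its two members while the two raid5 arrays are each down one.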
I'm going to try to wait until fdisk -l finishes. It currently stops at Disk /dev/zram3.
# fdisk -l Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x696935dc Device Boot Start End Sectors Size Id Type /dev/sda1 2048 468857024 468854977 223.6G fd Linux raid autodetect Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37 Device Start End Sectors Size Type /dev/sdb1 2048 4982527 4980480 2.4G Linux RAID /dev/sdb2 4982528 9176831 4194304 2G Linux RAID /dev/sdb5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C Device Start End Sectors Size Type /dev/sdc1 2048 4982527 4980480 2.4G Linux RAID /dev/sdc2 4982528 9176831 4194304 2G Linux RAID /dev/sdc5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8 Device Start End Sectors Size Type /dev/sdd1 2048 4982527 4980480 2.4G Linux RAID /dev/sdd2 4982528 9176831 4194304 2G Linux RAID /dev/sdd5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes 
Disklabel type: gpt Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA Device Start End Sectors Size Type /dev/sde1 2048 4982527 4980480 2.4G Linux RAID /dev/sde2 4982528 9176831 4194304 2G Linux RAID /dev/sde5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87 Device Start End Sectors Size Type /dev/sdf1 2048 4982527 4980480 2.4G Linux RAID /dev/sdf2 4982528 9176831 4194304 2G Linux RAID /dev/sdf5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdg: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071 Device Start End Sectors Size Type /dev/sdg1 2048 4982527 4980480 2.4G Linux RAID /dev/sdg2 4982528 9176831 4194304 2G Linux RAID /dev/sdg5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507 Device Start End Sectors Size Type /dev/sdk1 2048 4982527 4980480 2.4G Linux RAID /dev/sdk2 4982528 9176831 4194304 2G Linux RAID /dev/sdk5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F Device Start End Sectors Size Type /dev/sdl1 
2048 4982527 4980480 2.4G Linux RAID /dev/sdl2 4982528 9176831 4194304 2G Linux RAID /dev/sdl5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdl6 5860342336 11720838239 5860495904 2.7T Linux RAID GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite). Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F Device Start End Sectors Size Type /dev/synoboot1 2048 32767 30720 15M EFI System /dev/synoboot2 32768 94207 61440 30M Linux filesystem /dev/synoboot3 94208 102366 8159 4M BIOS boot Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C Device Start End Sectors Size Type /dev/sdm1 2048 4982527 4980480 2.4G Linux RAID /dev/sdm2 4982528 9176831 4194304 2G Linux RAID /dev/sdm5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdm6 5860342336 11720838239 5860495904 2.7T Linux RAID Disk /dev/sdn: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00 Device Start End Sectors Size Type /dev/sdn1 2048 4982527 4980480 2.4G Linux RAID /dev/sdn2 4982528 9176831 4194304 2G Linux RAID /dev/sdn5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdn6 5860342336 11720838239 5860495904 2.7T Linux RAID Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 
4096 bytes Disklabel type: gpt Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F Device Start End Sectors Size Type /dev/sdo1 2048 4982527 4980480 2.4G Linux RAID /dev/sdo2 4982528 9176831 4194304 2G Linux RAID /dev/sdo5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdo6 5860342336 11720838239 5860495904 2.7T Linux RAID /dev/sdo7 11720854336 19532653311 7811798976 3.7T Linux RAID Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983 Device Start End Sectors Size Type /dev/sdp1 2048 4982527 4980480 2.4G Linux RAID /dev/sdp2 4982528 9176831 4194304 2G Linux RAID /dev/sdp5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdp6 5860342336 11720838239 5860495904 2.7T Linux RAID /dev/sdp7 11720854336 19532653311 7811798976 3.7T Linux RAID Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A Device Start End Sectors Size Type /dev/sdq1 2048 4982527 4980480 2.4G Linux RAID /dev/sdq2 4982528 9176831 4194304 2G Linux RAID /dev/sdq5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes 
/ 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes
I was pretty confident too until recently! However this turns out, you have my appreciation for sticking with me.
dmesg is full of this btw.
[ 1663.470519] md: md5: set sdp7 to auto_remap [1]
[ 1663.470520] md: recovery of RAID array md5
[ 1663.470523] md: minimum _guaranteed_ speed: 600000 KB/sec/disk.
[ 1663.470523] md: using maximum available idle IO bandwidth (but not more than 800000 KB/sec) for recovery.
[ 1663.470525] md: using 128k window, over a total of 3905898432k.
[ 1663.470800] md: md5: set sdp7 to auto_remap [0]
[ 1663.496370] RAID1 conf printout:
[ 1663.496372] --- wd:0 rd:2
[ 1663.496373] disk 0, wo:1, o:1, dev:sdp7
[ 1663.500414] RAID1 conf printout:
[ 1663.500415] --- wd:0 rd:2
[ 1663.500420] RAID1 conf printout:
[ 1663.500420] --- wd:0 rd:2
[ 1663.500421] disk 0, wo:1, o:1, dev:sdp7
-
Oh crap. Heard something clicking.
Rebooted and cat /proc/mdstat keeps hanging. 😫
# fdisk -l Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x696935dc Device Boot Start End Sectors Size Id Type /dev/sda1 2048 468857024 468854977 223.6G fd Linux raid autodetect Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37 Device Start End Sectors Size Type /dev/sdb1 2048 4982527 4980480 2.4G Linux RAID /dev/sdb2 4982528 9176831 4194304 2G Linux RAID /dev/sdb5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C Device Start End Sectors Size Type /dev/sdc1 2048 4982527 4980480 2.4G Linux RAID /dev/sdc2 4982528 9176831 4194304 2G Linux RAID /dev/sdc5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8 Device Start End Sectors Size Type /dev/sdd1 2048 4982527 4980480 2.4G Linux RAID /dev/sdd2 4982528 9176831 4194304 2G Linux RAID /dev/sdd5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes 
Disklabel type: gpt Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA Device Start End Sectors Size Type /dev/sde1 2048 4982527 4980480 2.4G Linux RAID /dev/sde2 4982528 9176831 4194304 2G Linux RAID /dev/sde5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87 Device Start End Sectors Size Type /dev/sdf1 2048 4982527 4980480 2.4G Linux RAID /dev/sdf2 4982528 9176831 4194304 2G Linux RAID /dev/sdf5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdg: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071 Device Start End Sectors Size Type /dev/sdg1 2048 4982527 4980480 2.4G Linux RAID /dev/sdg2 4982528 9176831 4194304 2G Linux RAID /dev/sdg5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507 Device Start End Sectors Size Type /dev/sdk1 2048 4982527 4980480 2.4G Linux RAID /dev/sdk2 4982528 9176831 4194304 2G Linux RAID /dev/sdk5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F Device Start End Sectors Size Type /dev/sdl1 
2048 4982527 4980480 2.4G Linux RAID /dev/sdl2 4982528 9176831 4194304 2G Linux RAID /dev/sdl5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdl6 5860342336 11720838239 5860495904 2.7T Linux RAID GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite). Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F Device Start End Sectors Size Type /dev/synoboot1 2048 32767 30720 15M EFI System /dev/synoboot2 32768 94207 61440 30M Linux filesystem /dev/synoboot3 94208 102366 8159 4M BIOS boot Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C Device Start End Sectors Size Type /dev/sdm1 2048 4982527 4980480 2.4G Linux RAID /dev/sdm2 4982528 9176831 4194304 2G Linux RAID /dev/sdm5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdm6 5860342336 11720838239 5860495904 2.7T Linux RAID Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F Device Start End Sectors Size Type /dev/sdn1 2048 4982527 4980480 2.4G Linux RAID /dev/sdn2 4982528 9176831 4194304 2G Linux RAID /dev/sdn5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdn6 5860342336 11720838239 5860495904 2.7T Linux RAID /dev/sdn7 11720854336 19532653311 7811798976 3.7T Linux RAID Disk /dev/sdo: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 
bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00 Device Start End Sectors Size Type /dev/sdo1 2048 4982527 4980480 2.4G Linux RAID /dev/sdo2 4982528 9176831 4194304 2G Linux RAID /dev/sdo5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdo6 5860342336 11720838239 5860495904 2.7T Linux RAID Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983 Device Start End Sectors Size Type /dev/sdp1 2048 4982527 4980480 2.4G Linux RAID /dev/sdp2 4982528 9176831 4194304 2G Linux RAID /dev/sdp5 9453280 5860326239 5850872960 2.7T Linux RAID /dev/sdp6 5860342336 11720838239 5860495904 2.7T Linux RAID /dev/sdp7 11720854336 19532653311 7811798976 3.7T Linux RAID Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A Device Start End Sectors Size Type /dev/sdq1 2048 4982527 4980480 2.4G Linux RAID /dev/sdq2 4982528 9176831 4194304 2G Linux RAID /dev/sdq5 9453280 5860326239 5850872960 2.7T Linux RAID Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 
4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors Units: sectors of 1 * 4096 = 4096 bytes Sector size (logical/physical): 4096 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes
The fdisk command can't complete. Ugh. And to think the two 10TB drives were new!
Edit: oh and the Web UI doesn't work anymore.
-
16 minutes ago, flyride said:
Between the last mdstat and your current one, your /dev/sdp went offline - that is one of your 10TB drives. Check all your connections and cables, if they are "stretched" or not stable, secure them. Reboot. Post another mdstat.
If you can't get your hardware stable, this is a lost cause. Standing by for status.
Changed the cable, and I believe it's secure enough.
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdg5[10] sdp5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]
md4 : active raid5 sdl6[0] sdn6[5] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
md5 : active raid1 sdn7[2]
      3905898432 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sdq2[12] sdp2[11] sdo2[10] sdn2[9] sdm2[8] sdl2[7] sdk2[6] sdg2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
      [=================>...]  resync = 86.4% (1813056/2097088) finish=0.0min speed=56421K/sec
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdo1[10]
      2490176 blocks [12/9] [_UUUUUUUU_U_]

unused devices: <none>
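The [13/12] counter and the [UUUUUUUUUU_UU] map both say one member of md2 is missing. If you'd rather script that check than count underscores by eye, here's a minimal sketch (plain POSIX shell + awk); it parses a sample status line copied from the output above rather than reading the live /proc/mdstat:

```shell
#!/bin/sh
# Count missing members in an mdstat status map like [UUUUUUUUUU_UU].
# Sample line copied from the mdstat above; on a live box you would
# pull the line out of /proc/mdstat instead.
line='35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]'

missing=$(printf '%s\n' "$line" | awk '{
  map = $NF                 # last field is the status map, e.g. [UUUUUUUUUU_UU]
  gsub(/[^_]/, "", map)     # keep only the underscores (missing slots)
  print length(map)
}')
echo "missing members: $missing"
```

For the md2 line above this prints `missing members: 1`, matching the single `_` in the map.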
# fdisk -l | grep 9.1
GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite).
Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors

Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983

Device             Start         End     Sectors  Size Type
/dev/sdp1           2048     4982527     4980480  2.4G Linux RAID
/dev/sdp2        4982528     9176831     4194304    2G Linux RAID
/dev/sdp5        9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdp6     5860342336 11720838239  5860495904  2.7T Linux RAID
/dev/sdp7    11720854336 19532653311  7811798976  3.7T Linux RAID

Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device             Start         End     Sectors  Size Type
/dev/sdn1           2048     4982527     4980480  2.4G Linux RAID
/dev/sdn2        4982528     9176831     4194304    2G Linux RAID
/dev/sdn5        9453280  5860326239  5850872960  2.7T Linux RAID
/dev/sdn6     5860342336 11720838239  5860495904  2.7T Linux RAID
/dev/sdn7    11720854336 19532653311  7811798976  3.7T Linux RAID
-
1 minute ago, flyride said:
Woops, /dev/sdr got remapped too, to /dev/sdo Those commands won't do anything. I don't advise changing it now but your Idx mapping could use some work.
How do I change that?
Anyways..
root@homelab:~# mdadm --zero-superblock /dev/sdr5
mdadm: Couldn't open /dev/sdr5 for write - not zeroing
root@homelab:~# mdadm --zero-superblock /dev/sdo5
root@homelab:~# mdadm --manage /dev/md2 --add /dev/sdo5
mdadm: /dev/md2 has failed so using --add cannot work and might destroy
mdadm: data on /dev/sdo5. You should stop the array and re-assemble it.
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdg5[10] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/11] [UUUUUUUUUU__U]
md4 : active raid5 sdl6[0] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
md5 : active raid1 sdo7[2]
      3905898432 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[10] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[12] sdq2[11]
      2097088 blocks [24/12] [UUUUUUUU_UUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdn1[10]
      2490176 blocks [12/9] [_UUUUUUUU_U_]

unused devices: <none>
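mdadm is refusing --add because the whole array is marked failed, and its message spells out the alternative: stop the array and re-assemble it. A dry-run sketch of that sequence, which only prints the commands instead of running them (the md2 member list here is my assumption pieced together from the mdstat above; verify every device with mdadm --examine before actually executing anything):

```shell
#!/bin/sh
# Dry-run: print the stop/force-reassemble sequence mdadm is suggesting.
# MD and MEMBERS are assumptions taken from this thread's mdstat output;
# device letters shuffle between reboots here, so double-check first.
MD=/dev/md2
MEMBERS="/dev/sd[bcdefgklmnopq]5"

cat <<EOF
mdadm --stop $MD
mdadm --assemble --force $MD $MEMBERS
cat /proc/mdstat
EOF
```

Piping the printed lines through a careful pair of eyes (and only then through a shell) is the point of the dry run; --assemble --force on mismatched members can do real damage.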
-
7 minutes ago, flyride said:
# mdadm --examine /dev/sd[gp]5
Let's confirm the drives look the same despite remapping, then we'll try and resync again.
Seems like it's not sdp anymore.
mdadm --examine /dev/sd[gp]5
/dev/sdg5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : a64f01c2:76c56102:38ad7c4e:7bce88d1

    Update Time : Sat Jan 18 01:03:16 2020
       Checksum : 21049ab2 - correct
         Events : 370955

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 12
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdp5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 73610f83:fb3cf895:c004147e:b4de2bfe

    Update Time : Sat Jan 18 01:03:16 2020
       Checksum : d1e38bbe - correct
         Events : 370955

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 11
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
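The thing to compare in those two dumps is the Events counter (plus Update Time): both members report 370955, so they are in sync with each other regardless of what letter the kernel gave them this boot. A small sketch of that comparison, fed sample values copied from the output above instead of calling mdadm itself:

```shell
#!/bin/sh
# Check whether two mdadm --examine Events counters agree.
# Sample values copied from the /dev/sd[gp]5 dumps above; on a live box
# you would feed this `mdadm --examine ... | grep Events` instead.
examine='/dev/sdg5 Events : 370955
/dev/sdp5 Events : 370955'

distinct=$(printf '%s\n' "$examine" | awk '{print $NF}' | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "event counters match"
else
  echo "event counters differ"
fi
```

Matching counters are the usual precondition for a forced re-assembly being safe; members whose counters lag behind the rest were kicked earlier and hold stale data.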
hdparm -I /dev/sdg

/dev/sdg:

ATA device, with non-removable media
        Model Number:       WDC WD30PURX-64P6ZY0
        Serial Number:      WD-WMC4N0H64Z0C
        Firmware Revision:  80.00A80
        Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
        Supported: 9 8 7 6 5
        Likely used: 9
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:    16514064
        LBA    user addressable sectors:   268435455
        LBA48  user addressable sectors:  5860533168
        Logical  Sector size:                    512 bytes
        Physical Sector size:                   4096 bytes
        Logical Sector-0 offset:                   0 bytes
        device size with M = 1024*1024:      2861588 MBytes
        device size with M = 1000*1000:      3000592 MBytes (3000 GB)
        cache/buffer size  = unknown
        Nominal Media Rotation Rate: 5400
Edit: OK it just closed my SSH connection again.
Last few dmesg log entries:
[ 1274.095438] disk 0, wo:1, o:0, dev:sdp1 [ 1274.095439] disk 1, wo:0, o:1, dev:sdb1 [ 1274.095440] disk 2, wo:0, o:1, dev:sdc1 [ 1274.095440] disk 3, wo:0, o:1, dev:sdd1 [ 1274.095441] disk 4, wo:0, o:1, dev:sdg1 [ 1274.095441] disk 5, wo:0, o:1, dev:sdf1 [ 1274.095442] disk 6, wo:0, o:1, dev:sdk1 [ 1274.095443] disk 7, wo:0, o:1, dev:sdl1 [ 1274.095443] disk 8, wo:0, o:1, dev:sdm1 [ 1274.095444] disk 10, wo:0, o:1, dev:sdn1 [ 1274.099973] syno_hot_remove_disk (10183): cannot remove active disk sdp7 from md5 ... rdev->raid_disk 0 pending 0 [ 1274.100009] SynoCheckRdevIsWorking (10283): remove active disk sdp2 from md1 raid_disks 24 mddev->degraded 11 mddev->level 1 [ 1274.100011] raid1: Disk failure on sdp2, disabling device. Operation continuing on 12 devices [ 1274.110125] syno_hot_remove_disk (10183): cannot remove active disk sdp2 from md1 ... rdev->raid_disk 8 pending 0 [ 1274.110998] RAID1 conf printout: [ 1274.110999] --- wd:9 rd:12 [ 1274.111000] disk 1, wo:0, o:1, dev:sdb1 [ 1274.111001] disk 2, wo:0, o:1, dev:sdc1 [ 1274.111001] disk 3, wo:0, o:1, dev:sdd1 [ 1274.111002] disk 4, wo:0, o:1, dev:sdg1 [ 1274.111002] disk 5, wo:0, o:1, dev:sdf1 [ 1274.111003] disk 6, wo:0, o:1, dev:sdk1 [ 1274.111004] disk 7, wo:0, o:1, dev:sdl1 [ 1274.111004] disk 8, wo:0, o:1, dev:sdm1 [ 1274.111005] disk 10, wo:0, o:1, dev:sdn1 [ 1274.151975] RAID1 conf printout: [ 1274.151976] --- wd:1 rd:2 [ 1274.151977] disk 0, wo:1, o:0, dev:sdp7 [ 1274.151978] disk 1, wo:0, o:1, dev:sdo7 [ 1274.159045] RAID1 conf printout: [ 1274.159046] --- wd:1 rd:2 [ 1274.159047] disk 1, wo:0, o:1, dev:sdo7 [ 1274.171175] RAID conf printout: [ 1274.171176] --- level:5 rd:5 wd:4 [ 1274.171177] disk 0, o:1, dev:sdl6 [ 1274.171177] disk 1, o:1, dev:sdm6 [ 1274.171178] disk 2, o:1, dev:sdn6 [ 1274.171178] disk 3, o:1, dev:sdo6 [ 1274.171179] disk 4, o:0, dev:sdp6 [ 1274.179062] SynoCheckRdevIsWorking (10283): remove active disk sdp1 from md0 raid_disks 12 mddev->degraded 3 mddev->level 1 [ 
1274.179078] RAID conf printout: [ 1274.179080] md: unbind<sdp1> [ 1274.179095] --- level:5 rd:5 wd:4 [ 1274.179095] disk 0, o:1, dev:sdl6 [ 1274.179096] disk 1, o:1, dev:sdm6 [ 1274.179097] disk 2, o:1, dev:sdn6 [ 1274.179097] disk 3, o:1, dev:sdo6 [ 1274.182067] md: export_rdev(sdp1) [ 1274.205352] RAID conf printout: [ 1274.205353] --- level:5 rd:13 wd:11 [ 1274.205354] disk 0, o:1, dev:sdb5 [ 1274.205355] disk 1, o:1, dev:sdc5 [ 1274.205355] disk 2, o:1, dev:sdd5 [ 1274.205356] disk 3, o:1, dev:sde5 [ 1274.205356] disk 4, o:1, dev:sdf5 [ 1274.205357] disk 5, o:1, dev:sdk5 [ 1274.205358] disk 6, o:1, dev:sdq5 [ 1274.205358] disk 7, o:1, dev:sdl5 [ 1274.205359] disk 8, o:1, dev:sdm5 [ 1274.205359] disk 9, o:1, dev:sdn5 [ 1274.205360] disk 11, o:0, dev:sdp5 [ 1274.205360] disk 12, o:1, dev:sdg5 [ 1274.208680] RAID1 conf printout: [ 1274.208680] --- wd:12 rd:24 [ 1274.208681] disk 0, wo:0, o:1, dev:sdb2 [ 1274.208682] disk 1, wo:0, o:1, dev:sdc2 [ 1274.208682] disk 2, wo:0, o:1, dev:sdd2 [ 1274.208683] disk 3, wo:0, o:1, dev:sde2 [ 1274.208683] disk 4, wo:0, o:1, dev:sdf2 [ 1274.208684] disk 5, wo:0, o:1, dev:sdk2 [ 1274.208685] disk 6, wo:0, o:1, dev:sdl2 [ 1274.208685] disk 7, wo:0, o:1, dev:sdm2 [ 1274.208686] disk 8, wo:1, o:0, dev:sdp2 [ 1274.208686] disk 9, wo:0, o:1, dev:sdn2 [ 1274.208687] disk 10, wo:0, o:1, dev:sdg2 [ 1274.208687] disk 11, wo:0, o:1, dev:sdq2 [ 1274.208688] disk 12, wo:0, o:1, dev:sdo2 [ 1274.215123] RAID conf printout: [ 1274.215124] --- level:5 rd:13 wd:11 [ 1274.215125] disk 0, o:1, dev:sdb5 [ 1274.215126] disk 1, o:1, dev:sdc5 [ 1274.215127] disk 2, o:1, dev:sdd5 [ 1274.215127] disk 3, o:1, dev:sde5 [ 1274.215128] disk 4, o:1, dev:sdf5 [ 1274.215128] disk 5, o:1, dev:sdk5 [ 1274.215129] disk 6, o:1, dev:sdq5 [ 1274.215141] disk 7, o:1, dev:sdl5 [ 1274.215141] disk 8, o:1, dev:sdm5 [ 1274.215142] disk 9, o:1, dev:sdn5 [ 1274.215142] disk 12, o:1, dev:sdg5 [ 1274.219116] RAID1 conf printout: [ 1274.219117] --- wd:12 rd:24 [ 1274.219117] 
disk 0, wo:0, o:1, dev:sdb2 [ 1274.219118] disk 1, wo:0, o:1, dev:sdc2 [ 1274.219119] disk 2, wo:0, o:1, dev:sdd2 [ 1274.219119] disk 3, wo:0, o:1, dev:sde2 [ 1274.219120] disk 4, wo:0, o:1, dev:sdf2 [ 1274.219120] disk 5, wo:0, o:1, dev:sdk2 [ 1274.219121] disk 6, wo:0, o:1, dev:sdl2 [ 1274.219121] disk 7, wo:0, o:1, dev:sdm2 [ 1274.219122] disk 9, wo:0, o:1, dev:sdn2 [ 1274.219134] disk 10, wo:0, o:1, dev:sdg2 [ 1274.219134] disk 11, wo:0, o:1, dev:sdq2 [ 1274.219135] disk 12, wo:0, o:1, dev:sdo2 [ 1275.080123] SynoCheckRdevIsWorking (10283): remove active disk sdp5 from md2 raid_disks 13 mddev->degraded 2 mddev->level 5 [ 1275.080139] md: unbind<sdp5> [ 1275.088132] md: export_rdev(sdp5) [ 1275.091120] SynoCheckRdevIsWorking (10283): remove active disk sdp6 from md4 raid_disks 5 mddev->degraded 1 mddev->level 5 [ 1275.091124] md: unbind<sdp6> [ 1275.096135] md: export_rdev(sdp6) [ 1275.102133] SynoCheckRdevIsWorking (10283): remove active disk sdp7 from md5 raid_disks 2 mddev->degraded 1 mddev->level 1 [ 1275.102137] md: unbind<sdp7> [ 1275.112157] SynoCheckRdevIsWorking (10283): remove active disk sdp2 from md1 raid_disks 24 mddev->degraded 12 mddev->level 1 [ 1275.112161] md: unbind<sdp2> [ 1275.118702] md: export_rdev(sdp7) [ 1275.124168] md: export_rdev(sdp2) [ 1276.321330] init: synowsdiscoveryd main process (16584) killed by TERM signal [ 1276.595188] init: ddnsd main process (12353) terminated with status 1 [ 1277.569230] init: smbd main process (16670) killed by TERM signal [ 1278.234862] nfsd: last server has exited, flushing export cache [ 1280.264875] Installing knfsd (copyright (C) 1996 okir@monad.swb.de). [ 1280.284991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory [ 1280.285012] NFSD: starting 90-second grace period (net ffffffff81854f80)
-
45 minutes ago, flyride said:
If you can do this, boot and post a new mdstat.
I moved the WD Purple drive from my LSI SAS card to a SATA port multiplier card.
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdg5[10] sdp5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]
md4 : active raid5 sdl6[0] sdp6[3] sdo6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md5 : active raid1 sdp7[0] sdo7[2]
      3905898432 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[10] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[12] sdp2[8] sdq2[11]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdg1[4] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdp1[0]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>
-
10 minutes ago, flyride said:
Do you have another SATA port you can plug /dev/sdp into? Maybe take out your SSD and put it there? At the very least, change the SATA cable. There is a hardware failure of some sort we are encountering.
I'll do that and reboot.
The last few lines from dmesg:
[24543.896520] sd 10:0:5:0: [sdp] CDB: [24543.896521] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8c 90 00 00 00 08 00 00 [24543.896530] sd 10:0:5:0: [sdp] Unhandled error code [24543.896530] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5 [24543.896531] sd 10:0:5:0: [sdp] [24543.896532] Result: hostbyte=0x0b driverbyte=0x00 [24543.896532] sd 10:0:5:0: [sdp] CDB: [24543.896533] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8e 10 00 00 00 08 00 00 [24543.896541] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5 [24543.896541] sd 10:0:5:0: [sdp] Unhandled error code [24543.896542] sd 10:0:5:0: [sdp] [24543.896542] Result: hostbyte=0x0b driverbyte=0x00 [24543.896543] sd 10:0:5:0: [sdp] CDB: [24543.896543] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8c a0 00 00 00 08 00 00 [24543.896551] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5 [24543.896551] sd 10:0:5:0: [sdp] [24543.896552] Result: hostbyte=0x00 driverbyte=0x08 [24543.896553] sd 10:0:5:0: [sdp] [24543.896554] Sense Key : 0xb [current] [24543.896555] sd 10:0:5:0: [sdp] [24543.896555] ASC=0x0 ASCQ=0x0 [24543.896556] sd 10:0:5:0: [sdp] CDB: [24543.896556] cdb[0]=0x88: 88 00 00 00 00 00 05 7e 8d e8 00 00 00 08 00 00 [24543.896564] drivers/md/raid5.c[3418]:syno_error_for_internal: disk error on sdp5 [24544.104089] SynoCheckRdevIsWorking (10283): remove active disk sdr5 from md2 raid_disks 13 mddev->degraded 1 mddev->level 5 [24544.104092] syno_hot_remove_disk (10183): cannot remove active disk sdr5 from md2 ... 
rdev->raid_disk 10 pending 0 [24544.258476] md: md2: set sdr5 to auto_remap [0] [24544.258488] md: md2: set sdb5 to auto_remap [0] [24544.258489] md: md2: set sdp5 to auto_remap [0] [24544.258489] md: md2: set sdo5 to auto_remap [0] [24544.258490] md: md2: set sdn5 to auto_remap [0] [24544.258491] md: md2: set sdm5 to auto_remap [0] [24544.258491] md: md2: set sdl5 to auto_remap [0] [24544.258492] md: md2: set sdq5 to auto_remap [0] [24544.258493] md: md2: set sdk5 to auto_remap [0] [24544.258494] md: md2: set sdf5 to auto_remap [0] [24544.258494] md: md2: set sde5 to auto_remap [0] [24544.258495] md: md2: set sdd5 to auto_remap [0] [24544.258496] md: md2: set sdc5 to auto_remap [0] [24544.414606] RAID conf printout: [24544.414609] --- level:5 rd:13 wd:12 [24544.414610] disk 0, o:1, dev:sdb5 [24544.414611] disk 1, o:1, dev:sdc5 [24544.414612] disk 2, o:1, dev:sdd5 [24544.414612] disk 3, o:1, dev:sde5 [24544.414613] disk 4, o:1, dev:sdf5 [24544.414632] disk 5, o:1, dev:sdk5 [24544.414633] disk 6, o:1, dev:sdq5 [24544.414634] disk 7, o:1, dev:sdl5 [24544.414634] disk 8, o:1, dev:sdm5 [24544.414635] disk 9, o:1, dev:sdn5 [24544.414636] disk 10, o:0, dev:sdr5 [24544.414636] disk 11, o:1, dev:sdo5 [24544.414637] disk 12, o:1, dev:sdp5 [24544.422985] RAID conf printout: [24544.422987] --- level:5 rd:13 wd:12 [24544.422988] disk 0, o:1, dev:sdb5 [24544.422989] disk 1, o:1, dev:sdc5 [24544.422990] disk 2, o:1, dev:sdd5 [24544.422991] disk 3, o:1, dev:sde5 [24544.422992] disk 4, o:1, dev:sdf5 [24544.422992] disk 5, o:1, dev:sdk5 [24544.422993] disk 6, o:1, dev:sdq5 [24544.422994] disk 7, o:1, dev:sdl5 [24544.422994] disk 8, o:1, dev:sdm5 [24544.422995] disk 9, o:1, dev:sdn5 [24544.422996] disk 11, o:1, dev:sdo5 [24544.422997] disk 12, o:1, dev:sdp5 [24545.106250] SynoCheckRdevIsWorking (10283): remove active disk sdr5 from md2 raid_disks 13 mddev->degraded 1 mddev->level 5 [24545.106266] md: unbind<sdr5> [24545.117398] md: export_rdev(sdr5) [24545.153580] init: 
synowsdiscoveryd main process (17065) killed by TERM signal [24545.439468] init: ddnsd main process (12284) terminated with status 1 [24546.186194] init: smbd main process (17423) killed by TERM signal [24546.729176] nfsd: last server has exited, flushing export cache [24548.636153] Installing knfsd (copyright (C) 1996 okir@monad.swb.de). [24548.656468] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory [24548.656489] NFSD: starting 90-second grace period (net ffffffff81854f80)
-
2 minutes ago, flyride said:
Also, just out of curiosity, is your volume back since there was a reboot?
Nope, ls /volume1 is still empty. But something weird happened.
While running
23 minutes ago, flyride said:# mdadm --zero-superblock /dev/sdr5
# mdadm --manage /dev/md2 --add /dev/sdr5
# cat /proc/mdstat
It just terminated my SSH session. Twice. And it's not resyncing anymore. Should I redo those three commands?
Current /proc/mdstat:
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10](E) sdo5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UE]
md4 : active raid5 sdl6[0] sdo6[3] sdr6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md5 : active raid1 sdo7[0] sdr7[2]
      3905898432 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[8] sdp2[10] sdq2[11] sdr2[12]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdo1[0] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>
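That (E) next to sdp5[10] looks like Synology's own per-device error mark; mainline mdstat has no such flag, so take that reading as an assumption rather than a mainline-kernel fact. A one-liner sketch that picks flagged members out of an mdstat-style line, run here against a sample copied from the output above:

```shell
#!/bin/sh
# List array members carrying the Synology-style (E) error flag.
# Sample line copied from the mdstat above; on a live DSM box you would
# grep /proc/mdstat instead.
line='md2 : active raid5 sdb5[0] sdp5[10](E) sdo5[11] sdn5[9]'

printf '%s\n' "$line" | grep -Eo '[a-z]+[0-9]+\[[0-9]+\]\(E\)'
```

For the sample line this prints only `sdp5[10](E)`, which matches the trailing `E` in the [UUUUUUUUUU_UE] map.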
# dmesg | fgrep "md:"
[    0.895403] md: linear personality registered for level -1
[    0.895405] md: raid0 personality registered for level 0
[    0.895406] md: raid1 personality registered for level 1
[    0.895407] md: raid10 personality registered for level 10
[    0.895776] md: raid6 personality registered for level 6
[    0.895778] md: raid5 personality registered for level 5
[    0.895779] md: raid4 personality registered for level 4
[    0.895780] md: raidF1 personality registered for level 45
[    9.002596] md: Autodetecting RAID arrays.
[    9.007616] md: invalid raid superblock magic on sda1
[    9.012674] md: sda1 does not have a valid v0.90 superblock, not importing!
[    9.075202] md: invalid raid superblock magic on sdb5
[    9.080258] md: sdb5 does not have a valid v0.90 superblock, not importing!
[    9.131736] md: invalid raid superblock magic on sdc5
[    9.136787] md: sdc5 does not have a valid v0.90 superblock, not importing!
[    9.184688] md: invalid raid superblock magic on sdd5
[    9.189741] md: sdd5 does not have a valid v0.90 superblock, not importing!
[    9.254542] md: invalid raid superblock magic on sde5
[    9.259597] md: sde5 does not have a valid v0.90 superblock, not importing!
[    9.310317] md: invalid raid superblock magic on sdf5
[    9.315372] md: sdf5 does not have a valid v0.90 superblock, not importing!
[    9.370415] md: invalid raid superblock magic on sdk5
[    9.375468] md: sdk5 does not have a valid v0.90 superblock, not importing!
[    9.423869] md: invalid raid superblock magic on sdl5
[    9.428919] md: sdl5 does not have a valid v0.90 superblock, not importing!
[    9.468250] md: invalid raid superblock magic on sdl6
[    9.473300] md: sdl6 does not have a valid v0.90 superblock, not importing!
[    9.519960] md: invalid raid superblock magic on sdm5
[    9.525015] md: sdm5 does not have a valid v0.90 superblock, not importing!
[    9.556049] md: invalid raid superblock magic on sdm6
[    9.561101] md: sdm6 does not have a valid v0.90 superblock, not importing!
[    9.614718] md: invalid raid superblock magic on sdn5
[    9.619773] md: sdn5 does not have a valid v0.90 superblock, not importing!
[    9.642163] md: invalid raid superblock magic on sdn6
[    9.647220] md: sdn6 does not have a valid v0.90 superblock, not importing!
[    9.689354] md: invalid raid superblock magic on sdo5
[    9.694404] md: sdo5 does not have a valid v0.90 superblock, not importing!
[    9.711917] md: invalid raid superblock magic on sdo6
[    9.716972] md: sdo6 does not have a valid v0.90 superblock, not importing!
[    9.731387] md: invalid raid superblock magic on sdo7
[    9.736444] md: sdo7 does not have a valid v0.90 superblock, not importing!
[    9.793088] md: invalid raid superblock magic on sdp5
[    9.798143] md: sdp5 does not have a valid v0.90 superblock, not importing!
[    9.845631] md: invalid raid superblock magic on sdq5
[    9.850684] md: sdq5 does not have a valid v0.90 superblock, not importing!
[    9.895380] md: invalid raid superblock magic on sdr5
[    9.900435] md: sdr5 does not have a valid v0.90 superblock, not importing!
[    9.914093] md: invalid raid superblock magic on sdr6
[    9.919143] md: sdr6 does not have a valid v0.90 superblock, not importing!
[    9.938110] md: invalid raid superblock magic on sdr7
[    9.943161] md: sdr7 does not have a valid v0.90 superblock, not importing!
[    9.943162] md: Scanned 47 and added 26 devices.
[    9.943163] md: autorun ...
[    9.943163] md: considering sdb1 ...
[    9.943165] md:  adding sdb1 ...
[    9.943166] md: sdb2 has different UUID to sdb1
[    9.943168] md:  adding sdc1 ...
[    9.943169] md: sdc2 has different UUID to sdb1
[    9.943170] md:  adding sdd1 ...
[    9.943171] md: sdd2 has different UUID to sdb1
[    9.943172] md:  adding sde1 ...
[    9.943173] md: sde2 has different UUID to sdb1
[    9.943174] md:  adding sdf1 ...
[    9.943175] md: sdf2 has different UUID to sdb1
[    9.943176] md:  adding sdk1 ...
[    9.943177] md: sdk2 has different UUID to sdb1
[    9.943178] md:  adding sdl1 ...
[    9.943179] md: sdl2 has different UUID to sdb1
[    9.943181] md:  adding sdm1 ...
[    9.943182] md: sdm2 has different UUID to sdb1
[    9.943183] md:  adding sdn1 ...
[    9.943184] md: sdn2 has different UUID to sdb1
[    9.943185] md:  adding sdo1 ...
[    9.943186] md: sdo2 has different UUID to sdb1
[    9.943187] md:  adding sdp1 ...
[    9.943188] md: sdp2 has different UUID to sdb1
[    9.943189] md:  adding sdq1 ...
[    9.943190] md: sdq2 has different UUID to sdb1
[    9.943191] md:  adding sdr1 ...
[    9.943192] md: sdr2 has different UUID to sdb1
[    9.943203] md: kicking non-fresh sdr1 from candidates rdevs!
[    9.943203] md: export_rdev(sdr1)
[    9.943205] md: kicking non-fresh sdq1 from candidates rdevs!
[    9.943205] md: export_rdev(sdq1)
[    9.943207] md: kicking non-fresh sde1 from candidates rdevs!
[    9.943207] md: export_rdev(sde1)
[    9.943208] md: created md0
[    9.943209] md: bind<sdp1>
[    9.943214] md: bind<sdo1>
[    9.943220] md: bind<sdn1>
[    9.943223] md: bind<sdm1>
[    9.943226] md: bind<sdl1>
[    9.943229] md: bind<sdk1>
[    9.943232] md: bind<sdf1>
[    9.943235] md: bind<sdd1>
[    9.943238] md: bind<sdc1>
[    9.943241] md: bind<sdb1>
[    9.943244] md: running: <sdb1><sdc1><sdd1><sdf1><sdk1><sdl1><sdm1><sdn1><sdo1><sdp1>
[    9.981355] md: considering sdb2 ...
[    9.981356] md:  adding sdb2 ...
[    9.981357] md:  adding sdc2 ...
[    9.981358] md:  adding sdd2 ...
[    9.981360] md:  adding sde2 ...
[    9.981361] md:  adding sdf2 ...
[    9.981362] md:  adding sdk2 ...
[    9.981363] md:  adding sdl2 ...
[    9.981364] md:  adding sdm2 ...
[    9.981365] md:  adding sdn2 ...
[    9.981367] md:  adding sdo2 ...
[    9.981368] md:  adding sdp2 ...
[    9.981369] md: md0: current auto_remap = 0
[    9.981369] md:  adding sdq2 ...
[    9.981370] md:  adding sdr2 ...
[    9.981372] md: resync of RAID array md0
[    9.981504] md: created md1
[    9.981505] md: bind<sdr2>
[    9.981511] md: bind<sdq2>
[    9.981515] md: bind<sdp2>
[    9.981520] md: bind<sdo2>
[    9.981525] md: bind<sdn2>
[    9.981530] md: bind<sdm2>
[    9.981535] md: bind<sdl2>
[    9.981540] md: bind<sdk2>
[    9.981544] md: bind<sdf2>
[    9.981549] md: bind<sde2>
[    9.981554] md: bind<sdd2>
[    9.981559] md: bind<sdc2>
[    9.981565] md: bind<sdb2>
[    9.981574] md: running: <sdb2><sdc2><sdd2><sde2><sdf2><sdk2><sdl2><sdm2><sdn2><sdo2><sdp2><sdq2><sdr2>
[    9.989470] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[    9.989470] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[    9.989474] md: using 128k window, over a total of 2490176k.
[   10.052110] md: ... autorun DONE.
[   10.052124] md: md1: current auto_remap = 0
[   10.052126] md: resync of RAID array md1
[   10.060221] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[   10.060222] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[   10.060224] md: using 128k window, over a total of 2097088k.
[   29.602277] md: bind<sdr7>
[   29.602651] md: bind<sdo7>
[   29.679014] md: md2 stopped.
[   29.803959] md: bind<sdc5>
[   29.804024] md: bind<sdd5>
[   29.804084] md: bind<sde5>
[   29.804145] md: bind<sdf5>
[   29.804218] md: bind<sdk5>
[   29.804286] md: bind<sdq5>
[   29.828033] md: bind<sdl5>
[   29.828589] md: bind<sdm5>
[   29.828942] md: bind<sdn5>
[   29.829120] md: bind<sdr5>
[   29.854008] md: bind<sdo5>
[   29.855384] md: bind<sdp5>
[   29.857913] md: bind<sdb5>
[   29.857922] md: kicking non-fresh sdr5 from array!
[   29.857925] md: unbind<sdr5>
[   29.865755] md: export_rdev(sdr5)
[   29.993600] md: bind<sdm6>
[   29.993748] md: bind<sdn6>
[   29.993917] md: bind<sdr6>
[   29.994084] md: bind<sdo6>
[   29.994230] md: bind<sdl6>
[   30.034680] md: md4: set sdl6 to auto_remap [1]
[   30.034681] md: md4: set sdo6 to auto_remap [1]
[   30.034681] md: md4: set sdr6 to auto_remap [1]
[   30.034682] md: md4: set sdn6 to auto_remap [1]
[   30.034682] md: md4: set sdm6 to auto_remap [1]
[   30.034684] md: delaying recovery of md4 until md1 has finished (they share one or more physical units)
[   30.222237] md: md2: set sdb5 to auto_remap [0]
[   30.222238] md: md2: set sdp5 to auto_remap [0]
[   30.222238] md: md2: set sdo5 to auto_remap [0]
[   30.222239] md: md2: set sdn5 to auto_remap [0]
[   30.222239] md: md2: set sdm5 to auto_remap [0]
[   30.222240] md: md2: set sdl5 to auto_remap [0]
[   30.222241] md: md2: set sdq5 to auto_remap [0]
[   30.222241] md: md2: set sdk5 to auto_remap [0]
[   30.222242] md: md2: set sdf5 to auto_remap [0]
[   30.222242] md: md2: set sde5 to auto_remap [0]
[   30.222243] md: md2: set sdd5 to auto_remap [0]
[   30.222244] md: md2: set sdc5 to auto_remap [0]
[   30.222244] md: md2 stopped.
[   30.222246] md: unbind<sdb5>
[   30.228152] md: export_rdev(sdb5)
[   30.228157] md: unbind<sdp5>
[   30.231173] md: export_rdev(sdp5)
[   30.231178] md: unbind<sdo5>
[   30.236190] md: export_rdev(sdo5)
[   30.236205] md: unbind<sdn5>
[   30.239180] md: export_rdev(sdn5)
[   30.239183] md: unbind<sdm5>
[   30.244169] md: export_rdev(sdm5)
[   30.244172] md: unbind<sdl5>
[   30.247189] md: export_rdev(sdl5)
[   30.247192] md: unbind<sdq5>
[   30.252207] md: export_rdev(sdq5)
[   30.252211] md: unbind<sdk5>
[   30.255196] md: export_rdev(sdk5)
[   30.255200] md: unbind<sdf5>
[   30.259408] md: export_rdev(sdf5)
[   30.259411] md: unbind<sde5>
[   30.271235] md: export_rdev(sde5)
[   30.271242] md: unbind<sdd5>
[   30.280228] md: export_rdev(sdd5)
[   30.280233] md: unbind<sdc5>
[   30.288234] md: export_rdev(sdc5)
[   30.680068] md: md2 stopped.
[   30.731994] md: bind<sdc5>
[   30.732110] md: bind<sdd5>
[   30.732258] md: bind<sde5>
[   30.732340] md: bind<sdf5>
[   30.732481] md: bind<sdk5>
[   30.732606] md: bind<sdq5>
[   30.737432] md: bind<sdl5>
[   30.748124] md: bind<sdm5>
[   30.748468] md: bind<sdn5>
[   30.748826] md: bind<sdr5>
[   30.749254] md: bind<sdo5>
[   30.763073] md: bind<sdp5>
[   30.776215] md: bind<sdb5>
[   30.776229] md: kicking non-fresh sdr5 from array!
[   30.776231] md: unbind<sdr5>
[   30.780828] md: export_rdev(sdr5)
[   60.552383] md: md1: resync done.
[   60.569174] md: md1: current auto_remap = 0
[   60.569202] md: delaying recovery of md4 until md0 has finished (they share one or more physical units)
[  109.601864] md: md0: resync done.
[  109.615133] md: md0: current auto_remap = 0
[  109.615149] md: md4: flushing inflight I/O
[  109.618280] md: recovery of RAID array md4
[  109.618282] md: minimum _guaranteed_ speed: 600000 KB/sec/disk.
[  109.618283] md: using maximum available idle IO bandwidth (but not more than 800000 KB/sec) for recovery.
[  109.618296] md: using 128k window, over a total of 2930246912k.
[  109.618297] md: resuming recovery of md4 from checkpoint.
[17557.409842] md: md4: recovery done.
[17557.601751] md: md4: set sdl6 to auto_remap [0]
[17557.601753] md: md4: set sdo6 to auto_remap [0]
[17557.601754] md: md4: set sdr6 to auto_remap [0]
[17557.601754] md: md4: set sdn6 to auto_remap [0]
[17557.601755] md: md4: set sdm6 to auto_remap [0]
[24035.085886] md: bind<sdr5>
[24035.123679] md: md2: set sdr5 to auto_remap [1]
[24035.123681] md: md2: set sdb5 to auto_remap [1]
[24035.123682] md: md2: set sdp5 to auto_remap [1]
[24035.123682] md: md2: set sdo5 to auto_remap [1]
[24035.123683] md: md2: set sdn5 to auto_remap [1]
[24035.123684] md: md2: set sdm5 to auto_remap [1]
[24035.123685] md: md2: set sdl5 to auto_remap [1]
[24035.123685] md: md2: set sdq5 to auto_remap [1]
[24035.123697] md: md2: set sdk5 to auto_remap [1]
[24035.123697] md: md2: set sdf5 to auto_remap [1]
[24035.123698] md: md2: set sde5 to auto_remap [1]
[24035.123698] md: md2: set sdd5 to auto_remap [1]
[24035.123699] md: md2: set sdc5 to auto_remap [1]
[24035.123700] md: md2: flushing inflight I/O
[24035.154625] md: recovery of RAID array md2
[24035.154628] md: minimum _guaranteed_ speed: 600000 KB/sec/disk.
[24035.154629] md: using maximum available idle IO bandwidth (but not more than 800000 KB/sec) for recovery.
[24035.154646] md: using 128k window, over a total of 2925435456k.
[24523.174858] md: md2: recovery stop due to MD_RECOVERY_INTR set.
[24544.258476] md: md2: set sdr5 to auto_remap [0]
[24544.258488] md: md2: set sdb5 to auto_remap [0]
[24544.258489] md: md2: set sdp5 to auto_remap [0]
[24544.258489] md: md2: set sdo5 to auto_remap [0]
[24544.258490] md: md2: set sdn5 to auto_remap [0]
[24544.258491] md: md2: set sdm5 to auto_remap [0]
[24544.258491] md: md2: set sdl5 to auto_remap [0]
[24544.258492] md: md2: set sdq5 to auto_remap [0]
[24544.258493] md: md2: set sdk5 to auto_remap [0]
[24544.258494] md: md2: set sdf5 to auto_remap [0]
[24544.258494] md: md2: set sde5 to auto_remap [0]
[24544.258495] md: md2: set sdd5 to auto_remap [0]
[24544.258496] md: md2: set sdc5 to auto_remap [0]
[24545.106266] md: unbind<sdr5>
[24545.117398] md: export_rdev(sdr5)
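For what it's worth, here's the rough check I can run over that dump to see which arrays md autorun actually created (the heredef below is just a few sample lines copied from the log above, not the full dump):

```shell
#!/bin/sh
# Sample lines copied from the dmesg dump above; autorun only ever
# reports "md: created md0" and "md: created md1" -- md2/md4 are
# assembled later by userspace, and md3 never shows up at all.
log='
[    9.943208] md: created md0
[    9.981504] md: created md1
[   29.679014] md: md2 stopped.
[  109.618280] md: recovery of RAID array md4
'

# Extract the array name from each "md: created mdN" line.
created=$(printf '%s\n' "$log" | sed -n 's/.*md: created \(md[0-9]*\).*/\1/p' | xargs)
echo "autorun created: $created"
```

Running it against the sample prints `autorun created: md0 md1`, which matches what I see when I grep the real log on the box.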
Help save my 55TB SHR1! Or mount it via Ubuntu :(
in General Post-Installation Questions/Discussions (non-hardware specific)
Posted · Edited by C-Fu
Many thanks for the in-depth explanation. My question is: where's /dev/md3?
I mean, is md3 intentionally missing, or is it missing because my vg is broken?
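To show what I mean, here's a little sketch that takes the arrays that do appear in my logs and LVM metadata (md0/md1 are the DSM system arrays, and the vg backup references md2, md4, and md5 as pv0/pv1/pv2) and reports any gap in the numbering:

```shell
#!/bin/sh
# Arrays actually seen on my system: md0/md1 (system) plus the three
# LVM PVs from the vg1 metadata backup (md2, md4, md5).
present="md0 md1 md2 md4 md5"
max=5

# Walk md0..md5 and collect any index that never appears.
missing=""
for i in $(seq 0 "$max"); do
    case " $present " in
        *" md$i "*) ;;                      # array exists, nothing to do
        *) missing="$missing md$i" ;;       # gap in the numbering
    esac
done
echo "missing:$missing"
```

This prints `missing: md3`, which is exactly the hole I'm asking about: whether DSM simply skipped the md3 index when it carved up the SHR volume, or whether an array that should exist failed to assemble.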