Posts posted by C-Fu
-
Well, a power trip just happened, and obviously I freaked out lol. Hopefully nothing damaging happened.
But a drive in md2 doesn't show [E] anymore.
6 hours ago, flyride said:
System Partition error is just if there are missing members of /dev/md0 (root) or /dev/md1 (swap). Those are RAID1 arrays and you have lots of copies of those, which is why we don't really care too much about them right now. I'm not sure what the problem is with sdp5 yet, and sdr5 seems to be missing, will look at it when resync is complete.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10] sdo5[11] sdn5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]
md4 : active raid5 sdl6[0] sdo6[3] sdr6[5] sdn6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md5 : active raid1 sdo7[0] sdr7[2]
      3905898432 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdk2[5] sdl2[6] sdm2[7] sdn2[9] sdo2[8] sdp2[10] sdq2[11] sdr2[12]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdo1[0] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
Yup, sdr5 is still missing.
-
28 minutes ago, flyride said:
In Storage Manager under HDD/SDD panel, there is a neat Action drop down to deactivate a drive. I have never bothered to try it, but I guess it works. If I wanted to fail out a drive I always just mdadm /dev/md -f /dev/sd
I foolishly thought that if I deactivated a drive, unplugged it, and replugged it (the bays are hot-swap capable), DSM would recognize it, reactivate the 10TB, and rebuild the array. That's obviously not the case.
Current status: md2 has finished its resync.
md2 : active raid5 sdb5[0] sdp5[10](E) sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UE]
This means that sdp5 has a system partition error and one sd?5 partition is missing, right?
md5 is 95% complete.
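Side note for anyone reading along: the two bracket groups at the end of an md line can be decoded mechanically. Here's a little Python sketch I put together (my own helper, not part of DSM or mdadm; the (E)/E error flag is a Synology-specific extension):

```python
import re

def decode_md_status(line):
    """Decode the '[total/active] [UUU_U...]' part of a /proc/mdstat line.

    Returns (total, active, missing_slots, error_slots), where the slot
    lists hold 0-based member positions: '_' marks a missing member and
    'E' (a Synology extension) marks a member flagged with an error.
    """
    m = re.search(r"\[(\d+)/(\d+)\]\s*\[([U_E]+)\]", line)
    if not m:
        raise ValueError("no md status brackets found")
    total, active = int(m.group(1)), int(m.group(2))
    flags = m.group(3)
    missing = [i for i, c in enumerate(flags) if c == "_"]
    errors = [i for i, c in enumerate(flags) if c == "E"]
    return total, active, missing, errors

# The md2 status line from the post above:
line = "35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UE]"
print(decode_md_status(line))  # (13, 12, [10], [12])
```

So for md2 here: 13 members expected, 12 active, slot 10 missing, and slot 12 (sdp5) flagged with an error.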
6 hours ago, flyride said:
All the devices got remapped three times, first with the Ubuntu mount, then with the migration install, and then with the change to the bitmask in synoinfo.conf. mdraid tried to initialize the broken array three different ways and I don't know how or if it affected the metadata. Also I was not sure how the drive deactivation would affect metadata either.
Wow, I didn't know just booting into Ubuntu had such a large effect (the drives got auto-remapped). Or did they get remapped because of a command I ran, like mdadm --assemble --scan?
28 minutes ago, flyride said:
You might find this an interesting article in that regard. It will make you a btrfs believer if you weren't already.
That's.... awesome. I never would've come across that article on my own.
-
2 minutes ago, flyride said:
Things are promising, yes. All three arrays are set to repair. Don't do anything from Storage Manager. You can monitor progress with cat /proc/mdstat.
Post another mdstat when it's all done. It might be awhile.
I will! Many thanks for spending your time on a stranger!!! I can go to sleep now thanks to you!
-
1 minute ago, flyride said:
Ok, the above from your very first post is what threw me. We are where we ought to be.
Aaah, I see now what you mean. My mistake, sorry!
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdr5[14] sdb5[0] sdp5[10] sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]
      [>....................]  recovery =  0.0% (2322676/2925435456) finish=739.2min speed=65905K/sec
md4 : active raid5 sdr6[5] sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
      resync=DELAYED
md5 : active raid1 sdr7[2] sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
      resync=DELAYED
md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
md2 looks... promising, am I right?
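The recovery line even checks out arithmetically: the finish estimate is just the remaining 1K blocks divided by the reported speed (numbers copied from the mdstat output above):

```python
# Values from the md2 recovery line:
#   recovery = 0.0% (2322676/2925435456) finish=739.2min speed=65905K/sec
done, total = 2322676, 2925435456   # progress, in units of 1K blocks
speed = 65905                       # K/sec

eta_min = (total - done) / speed / 60
print(round(eta_min, 1))  # 739.2, matching the reported finish time
```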
-
OK, I just remembered: I replaced an old 3TB drive because it was showing increasing bad sectors. I swapped in the new drive, resynced the array, and everything went well for a few months at least.
I believe the old data is still on it. Sorry about that; it's 6am right now and I only just remembered this, but nothing went wrong in the months after the replacement.
Should I plug it in and do mdstat?
-
7 minutes ago, flyride said:
Something isn't quite right. Do you have 13 drives plus a cache drive, or 12 plus cache? Which drive is your cache drive now? Please post fdisk -l
It was originally 13 drives + cache. For some reason the cache is now not initialized in DSM. And I also stupidly deactivated one 9.1TB (10TB WD) drive.
root@homelab:~# fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x696935dc
Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048  468857024  468854977 223.6G fd Linux raid autodetect

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37
Device        Start        End    Sectors  Size Type
/dev/sdb1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2   4982528    9176831    4194304    2G Linux RAID
/dev/sdb5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C
Device        Start        End    Sectors  Size Type
/dev/sdc1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdc2   4982528    9176831    4194304    2G Linux RAID
/dev/sdc5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8
Device        Start        End    Sectors  Size Type
/dev/sdd1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdd2   4982528    9176831    4194304    2G Linux RAID
/dev/sdd5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E5FD9CDA-FE14-4F95-B776-B176E7130DEA
Device        Start        End    Sectors  Size Type
/dev/sde1      2048    4982527    4980480  2.4G Linux RAID
/dev/sde2   4982528    9176831    4194304    2G Linux RAID
/dev/sde5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87
Device        Start        End    Sectors  Size Type
/dev/sdf1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2   4982528    9176831    4194304    2G Linux RAID
/dev/sdf5   9453280 5860326239 5850872960  2.7T Linux RAID

GPT PMBR size mismatch (102399 != 30277631) will be corrected by w(rite).
Disk /dev/synoboot: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3CAAA25-3CA1-48FA-A5B6-105ADDE4793F
Device          Start    End Sectors Size Type
/dev/synoboot1   2048  32767   30720  15M EFI System
/dev/synoboot2  32768  94207   61440  30M Linux filesystem
/dev/synoboot3  94208 102366    8159   4M BIOS boot

Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507
Device        Start        End    Sectors  Size Type
/dev/sdk1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdk2   4982528    9176831    4194304    2G Linux RAID
/dev/sdk5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdl: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F
Device           Start         End    Sectors  Size Type
/dev/sdl1         2048     4982527    4980480  2.4G Linux RAID
/dev/sdl2      4982528     9176831    4194304    2G Linux RAID
/dev/sdl5      9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdl6   5860342336 11720838239 5860495904  2.7T Linux RAID

Disk /dev/sdm: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C
Device           Start         End    Sectors  Size Type
/dev/sdm1         2048     4982527    4980480  2.4G Linux RAID
/dev/sdm2      4982528     9176831    4194304    2G Linux RAID
/dev/sdm5      9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdm6   5860342336 11720838239 5860495904  2.7T Linux RAID

Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983
Device            Start         End    Sectors  Size Type
/dev/sdn1          2048     4982527    4980480  2.4G Linux RAID
/dev/sdn2       4982528     9176831    4194304    2G Linux RAID
/dev/sdn5       9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdn6    5860342336 11720838239 5860495904  2.7T Linux RAID
/dev/sdn7   11720854336 19532653311 7811798976  3.7T Linux RAID

Disk /dev/sdo: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00
Device           Start         End    Sectors  Size Type
/dev/sdo1         2048     4982527    4980480  2.4G Linux RAID
/dev/sdo2      4982528     9176831    4194304    2G Linux RAID
/dev/sdo5      9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdo6   5860342336 11720838239 5860495904  2.7T Linux RAID

Disk /dev/sdp: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071
Device        Start        End    Sectors  Size Type
/dev/sdp1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdp2   4982528    9176831    4194304    2G Linux RAID
/dev/sdp5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A
Device        Start        End    Sectors  Size Type
/dev/sdq1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdq2   4982528    9176831    4194304    2G Linux RAID
/dev/sdq5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdr: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F
Device            Start         End    Sectors  Size Type
/dev/sdr1          2048     4982527    4980480  2.4G Linux RAID
/dev/sdr2       4982528     9176831    4194304    2G Linux RAID
/dev/sdr5       9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdr6    5860342336 11720838239 5860495904  2.7T Linux RAID
/dev/sdr7   11720854336 19532653311 7811798976  3.7T Linux RAID

Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram0: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram1: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram2: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zram3: 2.3 GiB, 2488270848 bytes, 607488 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md5: 3.7 TiB, 3999639994368 bytes, 7811796864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md4: 10.9 TiB, 12002291351552 bytes, 23441975296 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 262144 bytes

Disk /dev/md2: 32.7 TiB, 35947750883328 bytes, 70210450944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 786432 bytes
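As a sanity check on those numbers: the md2 size fdisk reports is exactly what a 13-member RAID5 built from those 2.7T partitions should give. A quick sketch (values copied from the output above; the 2048-sector data offset is mdadm's super 1.2 offset as shown later by --examine):

```python
# Values taken from the fdisk output above.
part_sectors = 5850872960   # each sd?5 partition, in 512-byte sectors
data_offset = 2048          # mdadm super 1.2 data offset, in sectors
members = 13                # RAID5: one member's worth of space goes to parity

avail_kib = (part_sectors - data_offset) // 2   # usable KiB per member
array_kib = avail_kib * (members - 1)           # RAID5 capacity in KiB
print(array_kib)            # 35105225472, matching /proc/mdstat
print(array_kib * 1024)     # 35947750883328 bytes, matching fdisk's md2 line
```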
-
14 minutes ago, flyride said:
# mdadm --assemble --run /dev/md2 /dev/sd[bcdefklmnopq]5
# cat /proc/mdstat
root@homelab:~# mdadm --assemble --run /dev/md2 /dev/sd[bcdefklmnopq]5
mdadm: /dev/md2 has been started with 12 drives (out of 13).
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10] sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
I think it's a (partial) success... maybe?
Thanks for the lengthy layman explanation, I honestly appreciate that!!! I was wondering about the event ID thing; I thought it corresponded to some system log somewhere.
-
1 minute ago, flyride said:
# mdadm --assemble /dev/md2 --uuid 43699871:217306be:dc16f5e8:dcbe1b0d
# cat /proc/mdstat
Here's the output:
root@homelab:~# mdadm --assemble /dev/md2 --uuid 43699871:217306be:dc16f5e8:dcbe1b0d
mdadm: ignoring /dev/sdc5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdd5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sde5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdf5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdk5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdl5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdm5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdo5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdn5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdp5 as it reports /dev/sdq5 as failed
mdadm: /dev/md2 assembled from 2 drives - not enough to start the array.
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
🤔
-
10 minutes ago, flyride said:
# mdadm --assemble --scan /dev/md2
# cat /proc/mdstat
mdadm --assemble --scan /dev/md2
mdadm: /dev/md2 not identified in config file.
and mdstat doesn't show md2.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
In post #15 it did say:
mdadm: /dev/md2 assembled from 12 drives - not enough to start the array.
-
7 minutes ago, flyride said:
# mdadm --detail /dev/md2
# mdadm --detail /dev/md2
/dev/md2:
        Version :
     Raid Level : raid0
  Total Devices : 0
          State : inactive

    Number   Major   Minor   RaidDevice
-
6 minutes ago, flyride said:
Ok good. Here comes a critical part. Note that the assemble command does NOT have the "r" in it.
That means it's not running, right? Just trying to learn and understand at the same time 😁
root@homelab:~# mdadm --stop /dev/md2
mdadm: stopped /dev/md2
root@homelab:~# mdadm --assemble --force /dev/md2 /dev/sd[bcdefklmnopq]5
mdadm: forcing event count in /dev/sdq5(6) from 370871 upto 370918
mdadm: clearing FAULTY flag for device 11 in /dev/md2 for /dev/sdq5
mdadm: Marking array /dev/md2 as 'clean'
mdadm: /dev/md2 assembled from 12 drives - not enough to start the array.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
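If I understand --force correctly, it rescues a member whose event counter is only slightly behind the newest by bumping it up (like sdq5 here, 370871 → 370918), while a member that is far behind stays excluded. A rough sketch of that idea (my own illustration with a made-up cutoff, not mdadm's actual acceptance logic):

```python
def force_candidates(events, tolerance=100):
    """Split members into fresh, plausibly-rescuable, and too-stale.

    events: dict of device -> Events counter from `mdadm --examine`.
    tolerance is an illustrative fixed cutoff; real mdadm decides
    case by case, but the principle is "small event gap = forceable".
    """
    newest = max(events.values())
    fresh = {d for d, e in events.items() if e == newest}
    rescuable = {d for d, e in events.items()
                 if e != newest and newest - e <= tolerance}
    stale = set(events) - fresh - rescuable
    return fresh, rescuable, stale

# Event counts seen in this thread:
events = {"sdb5": 370917, "sdq5": 370871, "sdr5": 370454}
fresh, rescuable, stale = force_candidates(events)
print(rescuable)  # sdq5: only 46 events behind, worth forcing
print(stale)      # sdr5: 463 events behind
```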
If it matters, the Kingston SSD is my cache drive.
-
6 minutes ago, flyride said:
# mdadm --examine /dev/sd[bcdefklmnopqr]5 >>/tmp/raid.status
# cat /tmp/raid.status
# cat /tmp/raid.status
/dev/sdb5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : a8109f74:46bc8509:6fc3bca8:9fddb6a7
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : b2b47a7 - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 0   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 8dfdc601:e01f8a98:9a8e78f1:a7951260
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : 286f1b5a - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 1   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : f98bc050:a4b46deb:c3168fa0:08d90061
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : 7a510a15 - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 2   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 1e2742b7:d1847218:816c7135:cdf30c07
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : 48c5e5ea - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 3   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : ce60c47e:14994160:da4d1482:fd7901f2
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : 8fae9b68 - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 4   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdk5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 706c5124:d647d300:733fb961:e5cd8127
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : eaf29b4c - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 5   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdl5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 6993b9eb:8ad7c80f:dc17268f:a8efa73d
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : 4e34c29 - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 7   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdm5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 2f1247d1:a536d2ad:ba2eb47f:a7eaf237
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : 73533be8 - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 8   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdn5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 73610f83:fb3cf895:c004147e:b4de2bfe
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : d1da5b98 - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 11   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdo5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 1b4ab27d:bb7488fa:a6cc1f75:d21d1a83
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : ad078302 - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 9   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdp5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : a64f01c2:76c56102:38ad7c4e:7bce88d1
  Update Time : Fri Jan 17 03:26:11 2020   Checksum : 20fb6a8c - correct   Events : 370917   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 12   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdq5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : clean   Device UUID : 5cc6456d:bfc950bf:1baf6fef:aabec947
  Update Time : Fri Jan 10 19:44:10 2020   Checksum : a0630221 - correct   Events : 370871   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 6   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdr5:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x2   Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d   Name : homelab:2 (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019   Raid Level : raid5   Raid Devices : 13   Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)   Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Data Offset : 2048 sectors   Super Offset : 8 sectors   Recovery Offset : 78633768 sectors   Unused Space : before=1968 sectors, after=0 sectors   State : active   Device UUID : d0a4607c:b970d906:02920f5c:ad5204d1
  Update Time : Fri Jan 3 03:01:29 2020   Checksum : f185b28c - correct   Events : 370454   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 10   Array State : AAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
Quote:
# mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'
# mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'
/dev/sdb5:  Events : 370917
/dev/sdc5:  Events : 370917
/dev/sdd5:  Events : 370917
/dev/sde5:  Events : 370917
/dev/sdf5:  Events : 370917
/dev/sdk5:  Events : 370917
/dev/sdl5:  Events : 370917
/dev/sdm5:  Events : 370917
/dev/sdn5:  Events : 370917
/dev/sdo5:  Events : 370917
/dev/sdp5:  Events : 370917
/dev/sdq5:  Events : 370871
/dev/sdr5:  Events : 370454
-
35 minutes ago, flyride said:
Don't be concerned about disks and volumes being crashed for now, let's just get the disks addressable in the way they were at the beginning.
Editing synoinfo.conf works! Not sure why it only took effect after a reboot, but meh, doesn't matter 😁
Three drives show "System Partition Failed" now.
Anyway, here are md2, md4, md5, md1, and md0.
# ls /dev/sd*
/dev/sda   /dev/sda1  /dev/sdb   /dev/sdb1  /dev/sdb2  /dev/sdb5  /dev/sdc   /dev/sdc1  /dev/sdc2  /dev/sdc5
/dev/sdd   /dev/sdd1  /dev/sdd2  /dev/sdd5  /dev/sde   /dev/sde1  /dev/sde2  /dev/sde5  /dev/sdf   /dev/sdf1
/dev/sdf2  /dev/sdf5  /dev/sdk   /dev/sdk1  /dev/sdk2  /dev/sdk5  /dev/sdl   /dev/sdl1  /dev/sdl2  /dev/sdl5
/dev/sdl6  /dev/sdm   /dev/sdm1  /dev/sdm2  /dev/sdm5  /dev/sdm6  /dev/sdn   /dev/sdn1  /dev/sdn2  /dev/sdn5
/dev/sdn6  /dev/sdn7  /dev/sdo   /dev/sdo1  /dev/sdo2  /dev/sdo5  /dev/sdo6  /dev/sdp   /dev/sdp1  /dev/sdp2
/dev/sdp5  /dev/sdq   /dev/sdq1  /dev/sdq2  /dev/sdq5  /dev/sdr   /dev/sdr1  /dev/sdr2  /dev/sdr5  /dev/sdr6
/dev/sdr7
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10] sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/11] [UUUUUU_UUU_UU]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
/dev/md2:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 13
  Total Devices : 11
    Persistence : Superblock is persistent
    Update Time : Fri Jan 17 03:26:11 2020
          State : clean, FAILED
 Active Devices : 11
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : homelab:2 (local to host homelab)
           UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
         Events : 370917

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       3       8       69        3      active sync   /dev/sde5
       4       8       85        4      active sync   /dev/sdf5
       5       8      165        5      active sync   /dev/sdk5
       -       0        0        6      removed
       7       8      181        7      active sync   /dev/sdl5
       8       8      197        8      active sync   /dev/sdm5
       9       8      229        9      active sync   /dev/sdo5
       -       0        0       10      removed
      11       8      213       11      active sync   /dev/sdn5
      10       8      245       12      active sync   /dev/sdp5
/dev/md4: Version : 1.2 Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB) Raid Devices : 5 Total Devices : 4 Persistence : Superblock is persistent Update Time : Fri Jan 17 03:26:11 2020 State : clean, degraded Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K Name : homelab:4 (local to host homelab) UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Events : 7052 Number Major Minor RaidDevice State 0 8 182 0 active sync /dev/sdl6 1 8 198 1 active sync /dev/sdm6 2 8 230 2 active sync /dev/sdo6 - 0 0 3 removed 3 8 214 4 active sync /dev/sdn6
/dev/md5: Version : 1.2 Creation Time : Tue Sep 24 19:36:08 2019 Raid Level : raid1 Array Size : 3905898432 (3724.96 GiB 3999.64 GB) Used Dev Size : 3905898432 (3724.96 GiB 3999.64 GB) Raid Devices : 2 Total Devices : 1 Persistence : Superblock is persistent Update Time : Fri Jan 17 03:26:06 2020 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 Name : homelab:5 (local to host homelab) UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747 Events : 223792 Number Major Minor RaidDevice State 0 8 215 0 active sync /dev/sdn7 - 0 0 1 removed
/dev/md1: Version : 0.90 Creation Time : Fri Jan 17 03:25:58 2020 Raid Level : raid1 Array Size : 2097088 (2047.94 MiB 2147.42 MB) Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 24 Total Devices : 13 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Fri Jan 17 03:26:49 2020 State : active, degraded Active Devices : 13 Working Devices : 13 Failed Devices : 0 Spare Devices : 0 UUID : 846f27e4:bf628296:cc8c244d:4f76664d (local to host homelab) Events : 0.20 Number Major Minor RaidDevice State 0 8 18 0 active sync /dev/sdb2 1 8 34 1 active sync /dev/sdc2 2 8 50 2 active sync /dev/sdd2 3 8 66 3 active sync /dev/sde2 4 8 82 4 active sync /dev/sdf2 5 8 162 5 active sync /dev/sdk2 6 8 178 6 active sync /dev/sdl2 7 8 194 7 active sync /dev/sdm2 8 8 210 8 active sync /dev/sdn2 9 8 226 9 active sync /dev/sdo2 10 8 242 10 active sync /dev/sdp2 11 65 2 11 active sync /dev/sdq2 12 65 18 12 active sync /dev/sdr2 - 0 0 13 removed - 0 0 14 removed - 0 0 15 removed - 0 0 16 removed - 0 0 17 removed - 0 0 18 removed - 0 0 19 removed - 0 0 20 removed - 0 0 21 removed - 0 0 22 removed - 0 0 23 removed
/dev/md0: Version : 0.90 Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Array Size : 2490176 (2.37 GiB 2.55 GB) Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 10 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Fri Jan 17 03:42:39 2020 State : active, degraded Active Devices : 10 Working Devices : 10 Failed Devices : 0 Spare Devices : 0 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Events : 0.539389 Number Major Minor RaidDevice State 0 8 209 0 active sync /dev/sdn1 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 4 8 241 4 active sync /dev/sdp1 5 8 81 5 active sync /dev/sdf1 6 8 161 6 active sync /dev/sdk1 7 8 177 7 active sync /dev/sdl1 8 8 193 8 active sync /dev/sdm1 - 0 0 9 removed 10 8 225 10 active sync /dev/sdo1 - 0 0 11 removed
-
2 hours ago, flyride said:
I am getting lost in your explanation. If your arrays are crashed, they will be read-only or unavailable, so that can be expected.
You keep saying "pop the xpe usb back." I'm assuming you did that only once and it is still installed. Right? So is the following true?
1. You tried to boot up DSM and it asked to migrate.
2. You decided to perform the migration installation
3. The system then booted up to the state you post above
4. You don't remove the boot loader USB, it stays in place
Please confirm whether the above list is true. And when you reboot, does it try to migrate again or come back to the state you posted?
All are true.
It didn't ask to migrate again after rebooting. But this happened.
Before rebooting, all disks showed Normal in Disk Allocation Status.
mdstat after rebooting (md2 and md0 are different than before reboot):
# cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1] md2 : active raid5 sdb5[0] sdl5[7] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1] 35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/7] [UUUUUU_U_____] md4 : active raid5 sdl6[0] 11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/1] [U____] md1 : active raid1 sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0] 2097088 blocks [12/7] [UUUUUUU_____] md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4] 2490176 blocks [12/10] [UUUUUUUUU_U_] unused devices: <none> root@homelab:~# mdadm --detail /dev/md2 /dev/md2: Version : 1.2 Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB) Raid Devices : 13 Total Devices : 7 Persistence : Superblock is persistent Update Time : Fri Jan 17 02:11:28 2020 State : clean, FAILED Active Devices : 7 Working Devices : 7 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K Name : homelab:2 (local to host homelab) UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Events : 370911 Number Major Minor RaidDevice State 0 8 21 0 active sync /dev/sdb5 1 8 37 1 active sync /dev/sdc5 2 8 53 2 active sync /dev/sdd5 3 8 69 3 active sync /dev/sde5 4 8 85 4 active sync /dev/sdf5 5 8 165 5 active sync /dev/sdk5 - 0 0 6 removed 7 8 181 7 active sync /dev/sdl5 - 0 0 8 removed - 0 0 9 removed - 0 0 10 removed - 0 0 11 removed - 0 0 12 removed root@homelab:~# mdadm --detail /dev/md0 /dev/md0: Version : 0.90 Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Array Size : 2490176 (2.37 GiB 2.55 GB) Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 10 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Fri Jan 17 02:18:37 2020 State : clean, degraded Active Devices : 10 Working Devices : 10 Failed Devices : 0 
Spare Devices : 0 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Events : 0.536737 Number Major Minor RaidDevice State 0 8 209 0 active sync /dev/sdn1 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 4 8 241 4 active sync /dev/sdp1 5 8 81 5 active sync /dev/sdf1 6 8 161 6 active sync /dev/sdk1 7 8 177 7 active sync /dev/sdl1 8 8 193 8 active sync /dev/sdm1 - 0 0 9 removed 10 8 225 10 active sync /dev/sdo1 - 0 0 11 removed
OK lemme rephrase.
Post #4 says I used the ubuntu usb.
Post #5 you asked me to use the xpe usb and run those commands.
Post #6 states that when I put the xpe usb back, it asked me to migrate. I migrated, and hence lost my edited synoinfo.conf; it defaulted to 6 usable disks, etc. I can't make any changes since everything is in read-only mode.
xpe usb is the same one that I've been using since the very beginning.
-
1 hour ago, flyride said:
"Note that when I pop the synoboot usb back, it asks me to migrate, and since all files are read-only, I can't restore my synoinfo.conf back to the original - 20 drives, 4 usb, 2 esata."
Can you explain this further? You said at the beginning you had 12 data drives and one cache drive. synoinfo.conf is stored on the root filesystem, which is /dev/md0 and is still functional. Why can't you modify synoinfo.conf directly as needed?
If you are saying drives are not accessible, we need to fix that before anything else.
It means that when this whole thing started (with the two crashed drives), everything went into read-only mode. I have no idea why. My synoinfo.conf was still as it was; I hadn't changed anything since the first installation.
Now when I pop the xpe usb back, it asks me to migrate, thus wiping out my synoinfo.conf and reverting to the default DS3617xs settings.
-
11 hours ago, flyride said:
I apologize if this may seem unkind, but you need to get very methodical and organized, and resist the urge to do something quickly.
If your data is intact, it will patiently wait for you. I don't know why you decided to boot up Ubuntu but you must understand that all the device ID's are probably different and nothing will match up. It actually looks like some of the info you posted is from DSM and some of it is from Ubuntu. So pretty much we have to ignore it and start over to have any chance of figuring things out.
Thing is, as I said earlier
First, nothing out of the ordinary happened.
Then two drives out of nowhere started crashing. Some (most?) of the data on the drives was still accessible. My mistake was deactivating the 10TB. So I backed up whatever I could via rclone from a different machine, accessing via SMB and NFS.
Then I rebooted. It presented the option to repair. I clicked repair, and when everything was done one drive was labelled clean, and a different one was labelled as crashed. I freaked out, and continued the backup.
Soon after all shares are gone. I did mdadm -Asf && vgchange -ay in DSM.
I booted an Ubuntu live USB because I read multiple times that you can just pop your drives into Ubuntu and (easily?) mount them to access whatever data is there. That's all. So if I can mount any/all of the drives in Ubuntu and back up what's left, at this stage I'd be more than happy 😁
I didn't physically alter anything on the server other than changing USB to ubuntu.
I will post whatever commands you asked, and unkind or not, at this stage I appreciate any reply 😁
/volume1$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdl5[7] sdk5[5] sdf5[4] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/6] [UUU_UU_U_____]
md4 : active raid5 sdl6[0]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/1] [U____]
md1 : active raid1 sdl2[6] sdk2[5] sdf2[4] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [12/6] [UUU_UUU_____]
md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdo1[0] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]
unused devices: <none>
root@homelab:~# mdadm --detail /dev/md2 /dev/md2: Version : 1.2 Creation Time : Sun Sep 22 21:55:03 2019 Raid Level : raid5 Array Size : 35105225472 (33478.95 GiB 35947.75 GB) Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB) Raid Devices : 13 Total Devices : 6 Persistence : Superblock is persistent Update Time : Thu Jan 16 15:36:17 2020 State : clean, FAILED Active Devices : 6 Working Devices : 6 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K Name : homelab:2 (local to host homelab) UUID : 43699871:217306be:dc16f5e8:dcbe1b0d Events : 370905 Number Major Minor RaidDevice State 0 8 21 0 active sync /dev/sdb5 1 8 37 1 active sync /dev/sdc5 2 8 53 2 active sync /dev/sdd5 - 0 0 3 removed 4 8 85 4 active sync /dev/sdf5 5 8 165 5 active sync /dev/sdk5 - 0 0 6 removed 7 8 181 7 active sync /dev/sdl5 - 0 0 8 removed - 0 0 9 removed - 0 0 10 removed - 0 0 11 removed - 0 0 12 removed root@homelab:~# mdadm --detail /dev/md4 /dev/md4: Version : 1.2 Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB) Raid Devices : 5 Total Devices : 1 Persistence : Superblock is persistent Update Time : Thu Jan 16 15:36:17 2020 State : clean, FAILED Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K Name : homelab:4 (local to host homelab) UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Events : 7040 Number Major Minor RaidDevice State 0 8 182 0 active sync /dev/sdl6 - 0 0 1 removed - 0 0 2 removed - 0 0 3 removed - 0 0 4 removed root@homelab:~# mdadm --detail /dev/md1 /dev/md1: Version : 0.90 Creation Time : Thu Jan 16 15:35:53 2020 Raid Level : raid1 Array Size : 2097088 (2047.94 MiB 2147.42 MB) Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB) Raid Devices : 12 Total Devices : 6 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Thu Jan 16 15:36:44 2020 State : active, 
degraded Active Devices : 6 Working Devices : 6 Failed Devices : 0 Spare Devices : 0 UUID : 000ab602:c505dcb2:cc8c244d:4f76664d (local to host homelab) Events : 0.25 Number Major Minor RaidDevice State 0 8 18 0 active sync /dev/sdb2 1 8 34 1 active sync /dev/sdc2 2 8 50 2 active sync /dev/sdd2 - 0 0 3 removed 4 8 82 4 active sync /dev/sdf2 5 8 162 5 active sync /dev/sdk2 6 8 178 6 active sync /dev/sdl2 - 0 0 7 removed - 0 0 8 removed - 0 0 9 removed - 0 0 10 removed - 0 0 11 removed root@homelab:~# mdadm --detail /dev/md0 /dev/md0: Version : 0.90 Creation Time : Sun Sep 22 21:01:46 2019 Raid Level : raid1 Array Size : 2490176 (2.37 GiB 2.55 GB) Used Dev Size : 2490176 (2.37 GiB 2.55 GB) Raid Devices : 12 Total Devices : 10 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Jan 16 15:48:30 2020 State : clean, degraded Active Devices : 10 Working Devices : 10 Failed Devices : 0 Spare Devices : 0 UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be Events : 0.522691 Number Major Minor RaidDevice State 0 8 225 0 active sync /dev/sdo1 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 4 8 241 4 active sync /dev/sdp1 5 8 81 5 active sync /dev/sdf1 6 8 161 6 active sync /dev/sdk1 7 8 177 7 active sync /dev/sdl1 8 8 193 8 active sync /dev/sdm1 - 0 0 9 removed 10 8 209 10 active sync /dev/sdn1 - 0 0 11 removed
root@homelab:~# ls /dev/sd* /dev/sda /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sdf2 /dev/sdk2 /dev/sdl2 /dev/sdm1 /dev/sdn /dev/sdn6 /dev/sdo5 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sdr7 /dev/sda1 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sdf5 /dev/sdk5 /dev/sdl5 /dev/sdm2 /dev/sdn1 /dev/sdo /dev/sdo6 /dev/sdp2 /dev/sdq2 /dev/sdr2 /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sdk /dev/sdl /dev/sdl6 /dev/sdm5 /dev/sdn2 /dev/sdo1 /dev/sdo7 /dev/sdp5 /dev/sdq5 /dev/sdr5 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sdf1 /dev/sdk1 /dev/sdl1 /dev/sdm /dev/sdm6 /dev/sdn5 /dev/sdo2 /dev/sdp /dev/sdq /dev/sdr /dev/sdr6 root@homelab:~# ls /dev/md* /dev/md0 /dev/md1 /dev/md2 /dev/md4 root@homelab:~# ls /dev/vg* /dev/vga_arbiter
Note that when I pop the synoboot usb back, it asks me to migrate, and since all files are read-only, I can't restore my synoinfo.conf back to the original - 20 drives, 4 usb, 2 esata.
Again I appreciate any reply, thank you!
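As a reference for restoring that "20 drives, 4 usb, 2 esata" layout: the three portcfg values in synoinfo.conf are bitmasks with one bit per drive slot. Assuming the usual ordering (internal ports in the low bits, then eSATA, then USB; this ordering is an assumption, so verify against your model's defaults), the values can be derived like this:

```shell
# Sketch: derive portcfg bitmasks for 20 internal + 2 eSATA + 4 USB slots.
# Bit ordering (internal low bits, then eSATA, then USB) is an assumption;
# compare with the defaults in /etc.defaults/synoinfo.conf before using these.
internal=$(( (1 << 20) - 1 ))          # bits 0-19  -> internal disks
esata=$(( ((1 << 2) - 1) << 20 ))      # bits 20-21 -> eSATA ports
usb=$(( ((1 << 4) - 1) << 22 ))        # bits 22-25 -> USB ports
printf 'internalportcfg=0x%x\n' "$internal"
printf 'esataportcfg=0x%x\n' "$esata"
printf 'usbportcfg=0x%x\n' "$usb"
```

That works out to internalportcfg=0xfffff, esataportcfg=0x300000 and usbportcfg=0x3c00000 for a 26-slot layout; adjust the shift amounts if the slot ordering on your build differs.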
-
Nobody? 😪
I have no idea what this means, but I think it's important. HELP!
sdn WDC WD100EMAZ-00 10TB
    homelab:4 10.92TB
    homelab:5 3.64TB
sdk ST6000VN0033-2EE
    homelab:4 10.92TB
sdj WDC WD100EMAZ-00 10TB
    homelab:4 10.92TB
    homelab:5 3.64TB
sdi ST6000VN0041-2EL
    homelab:4 10.92TB
sdh ST6000VN0041-2EL
    homelab:4 10.92TB
sda KINGSTON SV300S3 [SSD CACHE]
    homelab:3 223.57GB

# fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SV300S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x696935dc
Device     Boot Start      End        Sectors    Size   Id Type
/dev/sda1       2048       468857024  468854977  223.6G fd Linux raid autodetect

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37
Device     Start    End         Sectors     Size  Type
/dev/sdb1  2048     4982527     4980480     2.4G  Linux RAID
/dev/sdb2  4982528  9176831     4194304     2G    Linux RAID
/dev/sdb5  9453280  5860326239  5850872960  2.7T  Linux RAID

Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87
Device     Start    End         Sectors     Size  Type
/dev/sde1  2048     4982527     4980480     2.4G  Linux RAID
/dev/sde2  4982528  9176831     4194304     2G    Linux RAID
/dev/sde5  9453280  5860326239  5850872960  2.7T  Linux RAID

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C
Device     Start    End         Sectors     Size  Type
/dev/sdc1  2048     4982527     4980480     2.4G  Linux RAID
/dev/sdc2  4982528  9176831     4194304     2G    Linux RAID
/dev/sdc5  9453280  5860326239  5850872960  2.7T  Linux RAID

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8
Device     Start    End         Sectors     Size  Type
/dev/sdd1  2048     4982527     4980480     2.4G  Linux RAID
/dev/sdd2  4982528  9176831     4194304     2G    Linux RAID
/dev/sdd5  9453280  5860326239  5850872960  2.7T  Linux RAID

[-----------------------THE LIVE USB DEBIAN/UBUNTU------------]
Disk /dev/sdf: 14.9 GiB, 16008609792 bytes, 31266816 sectors
Disk model: Cruzer Fit
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf85f7a50
Device     Boot Start     End       Sectors   Size   Id Type
/dev/sdf1  *    2048      31162367  31160320  14.9G  83 Linux
/dev/sdf2       31162368  31262719  100352    49M    ef EFI (FAT-12/16/32)

Disk /dev/loop0: 2.2 GiB, 2326040576 bytes, 4543048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdj: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: WDC WD100EMAZ-00
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983
Device     Start        End          Sectors     Size  Type
/dev/sdj1  2048         4982527      4980480     2.4G  Linux RAID
/dev/sdj2  4982528      9176831      4194304     2G    Linux RAID
/dev/sdj5  9453280      5860326239   5850872960  2.7T  Linux RAID
/dev/sdj6  5860342336   11720838239  5860495904  2.7T  Linux RAID
/dev/sdj7  11720854336  19532653311  7811798976  3.7T  Linux RAID

Disk /dev/sdg: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30PURX-64P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507
Device     Start    End         Sectors     Size  Type
/dev/sdg1  2048     4982527     4980480     2.4G  Linux RAID
/dev/sdg2  4982528  9176831     4194304     2G    Linux RAID
/dev/sdg5  9453280  5860326239  5850872960  2.7T  Linux RAID

Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: WDC WD100EMAZ-00
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F
Device     Start        End          Sectors     Size  Type
/dev/sdn1  2048         4982527      4980480     2.4G  Linux RAID
/dev/sdn2  4982528      9176831      4194304     2G    Linux RAID
/dev/sdn5  9453280      5860326239   5850872960  2.7T  Linux RAID
/dev/sdn6  5860342336   11720838239  5860495904  2.7T  Linux RAID
/dev/sdn7  11720854336  19532653311  7811798976  3.7T  Linux RAID

Disk /dev/sdl: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30PURX-64P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071
Device     Start    End         Sectors     Size  Type
/dev/sdl1  2048     4982527     4980480     2.4G  Linux RAID
/dev/sdl2  4982528  9176831     4194304     2G    Linux RAID
/dev/sdl5  9453280  5860326239  5850872960  2.7T  Linux RAID

Disk /dev/sdm: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30PURX-64P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A
Device     Start    End         Sectors     Size  Type
/dev/sdm1  2048     4982527     4980480     2.4G  Linux RAID
/dev/sdm2  4982528  9176831     4194304     2G    Linux RAID
/dev/sdm5  9453280  5860326239  5850872960  2.7T  Linux RAID

Disk /dev/sdh: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0041-2EL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F
Device     Start       End          Sectors     Size  Type
/dev/sdh1  2048        4982527      4980480     2.4G  Linux RAID
/dev/sdh2  4982528     9176831      4194304     2G    Linux RAID
/dev/sdh5  9453280     5860326239   5850872960  2.7T  Linux RAID
/dev/sdh6  5860342336  11720838239  5860495904  2.7T  Linux RAID

Disk /dev/sdk: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00
Device     Start       End          Sectors     Size  Type
/dev/sdk1  2048        4982527      4980480     2.4G  Linux RAID
/dev/sdk2  4982528     9176831      4194304     2G    Linux RAID
/dev/sdk5  9453280     5860326239   5850872960  2.7T  Linux RAID
/dev/sdk6  5860342336  11720838239  5860495904  2.7T  Linux RAID

Disk /dev/sdi: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0041-2EL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C
Device     Start       End          Sectors     Size  Type
/dev/sdi1  2048        4982527      4980480     2.4G  Linux RAID
/dev/sdi2  4982528     9176831      4194304     2G    Linux RAID
/dev/sdi5  9453280     5860326239   5850872960  2.7T  Linux RAID
/dev/sdi6  5860342336  11720838239  5860495904  2.7T  Linux RAID

Disk /dev/md126: 3.7 TiB, 3999639994368 bytes, 7811796864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md125: 10.9 TiB, 12002291351552 bytes, 23441975296 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 262144 bytes

Disk /dev/md124: 223.6 GiB, 240052666368 bytes, 468852864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md124 : active raid0 sda1[0]
      234426432 blocks super 1.2 64k chunks
md125 : active (auto-read-only) raid5 sdk6[2] sdj6[3] sdi6[1] sdh6[0]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
md126 : active (auto-read-only) raid1 sdj7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
unused devices: <none>
# mdadm --detail /dev/md125 /dev/md125: Version : 1.2 Creation Time : Sun Sep 22 21:55:04 2019 Raid Level : raid5 Array Size : 11720987648 (11178.00 GiB 12002.29 GB) Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB) Raid Devices : 5 Total Devices : 4 Persistence : Superblock is persistent Update Time : Sat Jan 11 20:50:35 2020 State : clean, degraded Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K Consistency Policy : resync Name : homelab:4 UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0 Events : 7035 Number Major Minor RaidDevice State 0 8 118 0 active sync /dev/sdh6 1 8 134 1 active sync /dev/sdi6 2 8 166 2 active sync /dev/sdk6 - 0 0 3 removed 3 8 150 4 active sync /dev/sdj6
# lvm vgscan Reading all physical volumes. This may take a while... Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf. Found volume group "vg1" using metadata type lvm2 lvm> vgs vg1 Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf. VG #PV #LV #SN Attr VSize VFree vg1 3 2 0 wz-pn- <47.25t 916.00m lvm> lvmdiskscan /dev/loop0 [ <2.17 GiB] /dev/sdf1 [ <14.86 GiB] /dev/sdf2 [ 49.00 MiB] /dev/md124 [ <223.57 GiB] /dev/md125 [ <10.92 TiB] LVM physical volume /dev/md126 [ <3.64 TiB] LVM physical volume 0 disks 4 partitions 0 LVM physical volume whole disks 2 LVM physical volumes lvm> pvscan Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf. PV [unknown] VG vg1 lvm2 [32.69 TiB / 0 free] PV /dev/md125 VG vg1 lvm2 [<10.92 TiB / 0 free] PV /dev/md126 VG vg1 lvm2 [<3.64 TiB / 916.00 MiB free] Total: 3 [<47.25 TiB] / in use: 3 [<47.25 TiB] / in no VG: 0 [0 ] lvm> lvs Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert syno_vg_reserved_area vg1 -wi-----p- 12.00m volume_1 vg1 -wi-----p- <47.25t lvm> vgdisplay Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf. --- Volume group --- VG Name vg1 System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 12 VG Access read/write VG Status resizable MAX LV 0 Cur LV 2 Open LV 0 Max PV 0 Cur PV 3 Act PV 2 VG Size <47.25 TiB PE Size 4.00 MiB Total PE 12385768 Alloc PE / Size 12385539 / <47.25 TiB Free PE / Size 229 / 916.00 MiB VG UUID 2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp
-
I'm at a loss here, dunno what to do.
tl;dr version: I need to either recreate my SHR1 pool, or at least let me mount the drives so I can transfer my files.
Story begins with this:
13 drives in total, including one cache SSD.
Two crashes at the same time?? That's like.... a very rare possibility, right? Well whatever. Everything's read-only now. Fine, I'll just back up whatever's important via rclone + gdrive. That's gonna take me a while.
Then something weird happened... I could repair, and the 2.73/3TB drive came up normal. But a different drive crashed now. Weird.
And stupid of me, I didn't think and clicked deactivate on the 9.10/10TB drive. And now I have no idea how to reactivate the drive again.
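In hindsight, the usual way back from a deactivate is mdadm's re-add on the disk's data partition. This is a sketch only, written to a file for review rather than executed; sdX5 is a deliberate placeholder for the 10TB's data partition, and whether a quick re-add or a full rebuild happens depends on the event counts:

```shell
# Sketch, saved for review rather than run: re-adding a deactivated array member.
# sdX5 is a placeholder; identify the real partition with `mdadm --examine` first.
cat > /tmp/readd-disk.sh <<'EOF'
#!/bin/sh -e
mdadm --examine /dev/sdX5           # compare its event count with the active members
mdadm /dev/md2 --re-add /dev/sdX5   # if re-add is refused, --add triggers a full rebuild
EOF
chmod +x /tmp/readd-disk.sh
echo "saved: /tmp/readd-disk.sh"
```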
After a few restarts, this happened.
The crashed 10TB is not there anymore, which is understandable, but everything's... normal...? But all shares are gone.
I took out the USB and plugged in the Ubuntu live USB.
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md4 : active (auto-read-only) raid5 sdj6[0] sdl6[3] sdo6[2] sdk6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
md2 : inactive sdm5[10](S) sdj5[7](S) sdd5[0](S) sde5[1](S) sdp5[14](S) sdo5[9](S) sdl5[11](S) sdn5[13](S) sdi5[5](S) sdf5[2](S) sdh5[4](S) sdk5[8](S)
      35105225472 blocks super 1.2
md5 : active (auto-read-only) raid1 sdl7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
md3 : active raid0 sdc1[0]
      234426432 blocks super 1.2 64k chunks
unused devices: <none>
# mdadm -Asf && vgchange -ay
mdadm: Found some drive for an array that is already active: /dev/md/5
mdadm: giving up.
mdadm: Found some drive for an array that is already active: /dev/md/4
mdadm: giving up.
Drive list:
250GB SSD sdc: /dev/sdc1 223.57GB
3TB  sdd: /dev/sdd1 2.37GB, /dev/sdd2 2.00GB, /dev/sdd5 2.72TB
3TB  sde: /dev/sde1 2.37GB, /dev/sde2 2.00GB, /dev/sde5 2.72TB
3TB  sdf: /dev/sdf1 2.37GB, /dev/sdf2 2.00GB, /dev/sdf5 2.72TB
3TB  sdh: /dev/sdh1 2.37GB, /dev/sdh2 2.00GB, /dev/sdh5 2.72TB
3TB  sdi: /dev/sdi1 2.37GB, /dev/sdi2 2.00GB, /dev/sdi5 2.72TB
5TB  sdj: /dev/sdj1 2.37GB, /dev/sdj2 2.00GB, /dev/sdj5 2.72TB, /dev/sdj6 2.73TB
5TB  sdk: /dev/sdk1 2.37GB, /dev/sdk2 2.00GB, /dev/sdk5 2.72TB, /dev/sdk6 2.73TB
10TB sdl: /dev/sdl1 2.37GB, /dev/sdl2 2.00GB, /dev/sdl5 2.72TB, /dev/sdl6 2.73TB, /dev/sdl7 3.64TB
3TB  sdm: /dev/sdm1 2.37GB, /dev/sdm2 2.00GB, /dev/sdm5 2.72TB
3TB  sdn: /dev/sdn1 2.37GB, /dev/sdn2 2.00GB, /dev/sdn5 2.72TB
5TB  sdo: /dev/sdo1 2.37GB, /dev/sdo2 2.00GB, /dev/sdo5 2.72TB, /dev/sdo6 2.73TB
10TB sdp: /dev/sdp1 2.37GB, /dev/sdp2 2.00GB, /dev/sdp5 2.72TB, /dev/sdp6 2.73TB, /dev/sdp7 3.64TB
Can anybody help me? I just want to access my data, how can I do that?
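For the "just let me mount the drives in Ubuntu" goal, the usual sequence is: stop the partially auto-assembled arrays, force-assemble everything read-only, then activate the volume group and mount the volume read-only. This is a sketch only, saved to a file for review rather than run directly; the md/vg/lv names (md2/md4/md5, vg1, volume_1) come from the outputs above, and whether md2 has enough members left to assemble at all is a separate question:

```shell
# Sketch only: review before running as root on the Ubuntu live system.
# Names (md2/md4/md5, vg1, volume_1) are taken from the posted mdstat/lvm output.
cat > /tmp/assemble-ro.sh <<'EOF'
#!/bin/sh -e
mdadm --stop /dev/md2 /dev/md4 /dev/md5     # stop partial auto-assembled arrays
mdadm --assemble --scan --force --readonly  # reassemble everything read-only
vgchange -ay vg1                            # activate the LVM volume group
mkdir -p /mnt/volume1
mount -o ro /dev/vg1/volume_1 /mnt/volume1  # mount without writing to the disks
EOF
chmod +x /tmp/assemble-ro.sh
echo "saved: /tmp/assemble-ro.sh"
```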
Relevant thread of mine (tl;dr version: it worked, for a few months at least)
-
On 10/6/2019 at 3:06 AM, IG-88 said:
you should have followed him in his attempts
he gave up on that, not knowing what went wrong; he was too optimistic about a lot of things and ran into one problem after another. xpenology is not as easy to handle when you want something out of the ordinary - even just a kernel driver can be a problem
quicknick's statement was interesting, but he did not explain anything about how he circumvented the limits
maybe it does not need any special sauce to get it working; maybe you just have to use the info he gave about the numbers of disks?
it would need just some time to create a vm, tweak the config, add 28 virtual disks and see if a raid set can be created/degraded and rebuilt (for a complete test); if that does not work out then quicknick's 3.0 loader will tell - it's all script, no compiled source code, so anyone can read it
at 1st glance there was nothing special, just the normal variables for synoinfo.conf and a check for the right max drive numbers (but there might be more to it somewhere)
Yeah after I posted my question, I did follow up with him personally and he confirmed that he had tons of issues with his setup.
About quicknick, AFAIK he pulled back his loader so I suppose only those who got his loader earlier would know more I guess. Oh well. One can only dream.
-
How would you go about doing this? I just want to see what maximum write speed I can achieve, so I can upgrade my network infrastructure accordingly.
Example:
If my 13 disk SHR can achieve 400MB/s write speed, then I'll attach a quad gigabit or a 10G mellanox PCI-e card to it, with LACP-capable hardware and all.
Preferably a tool that won't destroy existing data in the SHR.
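One low-risk way to get a ballpark number without touching existing data is to write a scratch file with dd and force it to disk, then delete it. A sketch; the TESTDIR path is an assumption (it defaults to /tmp here for safety and should be pointed at a share on the SHR volume, e.g. /volume1/speedtest), and fio would give more rigorous numbers if it's available:

```shell
# Rough sequential write test: writes a scratch file, prints the MB/s line, deletes it.
# TESTDIR is an assumption; point it at a share on the SHR volume (e.g. /volume1/tmp).
TESTDIR="${TESTDIR:-/tmp}"
OUT="$TESTDIR/dd-speedtest.bin"
# conv=fdatasync makes dd include the flush in its timing, so the number
# reflects the array rather than the page cache
dd if=/dev/zero of="$OUT" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1
rm -f "$OUT"
```

Run it a few times and with a larger count for a steadier figure; a single small run is still optimistic.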
-
Weird: I clicked on add disk, and I can somehow add all of the ex-volume3 disks (1x3TB + 2x10TB). I thought I couldn't add the 3TB disk since the new /volume1 already has 3x6TB disks?
With the exception of Note Station contents, everything seems to be working right up to the point where I left. 😁
-
I took a (calculated) chance and did it. What I did:
Mental note: with SHR, you can only add drives equal to or bigger than the biggest drive in your current RAID setup (in my case 6TB, when I wanted to add 3x3TB).
Basic idea (guess):
- backup/clone the files
- take out the drives that contain the backup files
- destroy/format/wipe/fresh install DSM into all drives
- recreate SHR including the newer smaller empty drives
- restore backup
- reinstall apps, apps will automagically use the data(base) from the previously backed up files
/volume1 (~20TB) - 5x3TB + 3x6TB
/volume3 (~20TB) - 1x3TB + 2x10TB
Empty drives: 3x3TB, each created as a Basic disk (volume4, volume5, volume6) so DSM's system data is saved on them (I didn't know DSM is installed on all initialized disks as RAID1; this isn't needed if you didn't take out /volume3).
Back up /volume1 using HyperBackup to /volume3. All apps, all configs, all (Just Shared Folders or all raid contents? Not sure. Need clarification) folders.
Take out /volume3 drives so DSM reinstall doesn't format/empty these backed up drives (not needed if you know which physical drives that contain the backup files, and I didn't want to copy the hdd serials just to be sure)
Reboot. Reinstall DSM from boot menu from fresh, same DSM version (apparently not needed to reformat/fresh install, didn't really know).
SSH into DSM. sudo vi /etc.defaults/synoinfo.conf and set maxdisks=24, usbportcfg (0x0), esataportcfg (0x0) and internalportcfg (0xffffff), as well as support_syno_hybrid_raid="yes" (and comment out supportraidgroup="yes") to re-enable SHR and increase the max disks to 24.
Install HyperBackup.
Shutdown. Plug in the 3 drives for /volume3. Turn on your pc.
Remove /volume1 as well as Storage Pool 1. Remove the 3x3TB Basic disks' volumes and storage pools as the DSM data is still available in the 3 disks in /volume3. Reboot (just to be safe?).
Create a new SHR out of the empty 3x3TB drives as well as 5x3TB + 3x6TB. It automatically creates /volume1 with a bigger SHR than the original.
HyperBackup will detect the .hbk automagically in /volume3 when you click restore (didn't know how or why). It also restores the configuration of your previous installation (didn't know! awesome!).
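The synoinfo.conf step above can also be done non-interactively with sed instead of vi. A sketch practiced on a scratch copy; the starting values below are typical DS3617xs defaults and may differ on your box, and on DSM the real file is /etc.defaults/synoinfo.conf:

```shell
# Sketch: apply the synoinfo.conf edits from the steps above with sed,
# rehearsed on a scratch copy. On DSM the real file is /etc.defaults/synoinfo.conf;
# the seed values here are typical DS3617xs defaults (an assumption).
CONF=/tmp/synoinfo.conf
cat > "$CONF" <<'EOF'
maxdisks="12"
usbportcfg="0x300000"
esataportcfg="0xff000"
internalportcfg="0xfff"
EOF
sed -i \
  -e 's/^maxdisks=.*/maxdisks="24"/' \
  -e 's/^usbportcfg=.*/usbportcfg="0x0"/' \
  -e 's/^esataportcfg=.*/esataportcfg="0x0"/' \
  -e 's/^internalportcfg=.*/internalportcfg="0xffffff"/' "$CONF"
# add the SHR switch if it is not already present
grep -q '^support_syno_hybrid_raid=' "$CONF" || \
  echo 'support_syno_hybrid_raid="yes"' >> "$CONF"
cat "$CONF"
```

Diff the result against a backup of the original before copying it into place; a bad mask here can hide disks from DSM.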
Gonna wait it out to see if the app databases, like Office, Drive, Note Station, Plex are restored as well when I install these apps back.
Planned steps afterwards:
Reinstall the apps after restore is done
Hopefully the apps find the database files and use them instead of recreating new databases
-
19 hours ago, bluesnow said:
The problem is no the data, You can move/destroy/restore all volumes as you want. But apps/shares etc must be installed/configured again.
I don't mind reinstalling, as the data is backed up via HyperBackup, like the Plex database. Unless you're telling me HyperBackup doesn't back up data on my /volume1 for things like Moments, Drive, the Plex database folder, etc.
Help save my 55TB SHR1! Or mount it via Ubuntu :(
in General Post-Installation Questions/Discussions (non-hardware specific)
Posted