C-Fu

Help save my 55TB SHR1! Or mount it via Ubuntu :(


 

I'm at a loss here and don't know what to do :(

 

tl;dr version: I need to either recreate my SHR1 pool, or at least mount the drives so I can transfer my files off.

 

Story begins with this:

[screenshot: Storage Manager showing the crashed drives]

 

13 drives in total, including one cache SSD.

 

Two crashes at the same time?? That's, like, a very rare possibility, right? Well, whatever. Everything's read-only now. Fine, I'll just back up whatever's important via rclone + gdrive. That's gonna take me a while.

 

Then something weird happened... I could repair, and the 2.73/3TB drive came up normal. But then a different drive crashed. Weird.

 

And stupidly, without thinking, I clicked Deactivate on the 9.10/10TB drive. Now I have no idea how to reactivate it.

 

After a few restarts, this happened.

[screenshot: Storage Manager after the restarts]

 

The crashed 10TB isn't listed anymore, which is understandable, but everything else looks... normal...? Yet all my shares are gone :(

 

I took out the boot USB and plugged in an Ubuntu live USB.

 

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md4 : active (auto-read-only) raid5 sdj6[0] sdl6[3] sdo6[2] sdk6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
      
md2 : inactive sdm5[10](S) sdj5[7](S) sdd5[0](S) sde5[1](S) sdp5[14](S) sdo5[9](S) sdl5[11](S) sdn5[13](S) sdi5[5](S) sdf5[2](S) sdh5[4](S) sdk5[8](S)
      35105225472 blocks super 1.2
       
md5 : active (auto-read-only) raid1 sdl7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
      
md3 : active raid0 sdc1[0]
      234426432 blocks super 1.2 64k chunks
      
unused devices: <none>
# mdadm -Asf && vgchange -ay
mdadm: Found some drive for an array that is already active: /dev/md/5
mdadm: giving up.
mdadm: Found some drive for an array that is already active: /dev/md/4
mdadm: giving up.
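For anyone decoding the mdstat output above: the `[5/4]` field means 5 member slots with 4 active, and each `_` in the `[UUU_U]` bitmap marks a missing slot. A throwaway sketch (illustrative only, nothing here touches the disks) to make the notation concrete:

```shell
# Pull the member bitmap out of an mdstat status line and report
# which slots are missing. Purely illustrative; read-only.
mdstat_line='11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]'
bitmap=$(echo "$mdstat_line" | grep -o '\[[U_]*\]' | tr -d '[]')
missing=$(echo "$bitmap" | awk '{ for (i = 1; i <= length($0); i++)
  if (substr($0, i, 1) == "_") printf "%d ", i - 1 }')
echo "bitmap=$bitmap, missing slot(s): $missing"
```

So md4 above is degraded (slot 3 absent) but running, while md2 never assembled at all; its members are all marked (S), sitting as spares.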

Drive list:

250GB SSD sdc
/dev/sdc1 223.57GB

3TB sdd
/dev/sdd1 2.37GB
/dev/sdd2 2.00GB
/dev/sdd5 2.72TB

3TB sde
/dev/sde1 2.37GB
/dev/sde2 2.00GB
/dev/sde5 2.72TB

3TB sdf
/dev/sdf1 2.37GB
/dev/sdf2 2.00GB
/dev/sdf5 2.72TB

3TB sdh
/dev/sdh1 2.37GB
/dev/sdh2 2.00GB
/dev/sdh5 2.72TB

3TB sdi
/dev/sdi1 2.37GB
/dev/sdi2 2.00GB
/dev/sdi5 2.72TB

5TB sdj
/dev/sdj1 2.37GB
/dev/sdj2 2.00GB
/dev/sdj5 2.72TB
/dev/sdj6 2.73TB

5TB sdk
/dev/sdk1 2.37GB
/dev/sdk2 2.00GB
/dev/sdk5 2.72TB
/dev/sdk6 2.73TB

10TB sdl
/dev/sdl1 2.37GB
/dev/sdl2 2.00GB
/dev/sdl5 2.72TB
/dev/sdl6 2.73TB
/dev/sdl7 3.64TB

3TB sdm
/dev/sdm1 2.37GB
/dev/sdm2 2.00GB
/dev/sdm5 2.72TB

3TB sdn
/dev/sdn1 2.37GB
/dev/sdn2 2.00GB
/dev/sdn5 2.72TB

5TB sdo
/dev/sdo1 2.37GB
/dev/sdo2 2.00GB
/dev/sdo5 2.72TB
/dev/sdo6 2.73TB

10TB sdp
/dev/sdp1 2.37GB
/dev/sdp2 2.00GB
/dev/sdp5 2.72TB
/dev/sdp6 2.73TB
/dev/sdp7 3.64TB

Can anybody help me? I just want to access my data. How can I do that?

 

 

Relevant thread of mine (tl;dr version: it worked, for a few months at least)

 


Nobody? 😪

 

I have no idea what this means, but I think it's important. HELP!

sdn WDC WD100EMAZ-00 10TB
homelab:4 10.92TB
homelab:5 3.64TB

 

sdk ST6000VN0033-2EE
homelab:4 10.92TB

 

sdj WDC WD100EMAZ-00 10TB
homelab:4 10.92TB
homelab:5 3.64TB

 

sdi ST6000VN0041-2EL
homelab:4 10.92TB

 

sdh ST6000VN0041-2EL
homelab:4 10.92TB

 

sda KINGSTON SV300S3 [SSD CACHE]
homelab:3 223.57GB

 

# fdisk -l
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SV300S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x696935dc

Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1        2048 468857024 468854977 223.6G fd Linux raid autodetect


Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37

Device       Start        End    Sectors  Size Type
/dev/sdb1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2  4982528    9176831    4194304    2G Linux RAID
/dev/sdb5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48A13430-10A1-4050-BA78-723DB398CE87

Device       Start        End    Sectors  Size Type
/dev/sde1     2048    4982527    4980480  2.4G Linux RAID
/dev/sde2  4982528    9176831    4194304    2G Linux RAID
/dev/sde5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0600DFFC-A576-4242-976A-3ACAE5284C4C

Device       Start        End    Sectors  Size Type
/dev/sdc1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdc2  4982528    9176831    4194304    2G Linux RAID
/dev/sdc5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 58B43CB1-1F03-41D3-A734-014F59DE34E8

Device       Start        End    Sectors  Size Type
/dev/sdd1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdd2  4982528    9176831    4194304    2G Linux RAID
/dev/sdd5  9453280 5860326239 5850872960  2.7T Linux RAID

[-----------------------THE LIVE USB DEBIAN/UBUNTU------------]
Disk /dev/sdf: 14.9 GiB, 16008609792 bytes, 31266816 sectors
Disk model: Cruzer Fit      
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf85f7a50

Device     Boot    Start      End  Sectors  Size Id Type
/dev/sdf1  *        2048 31162367 31160320 14.9G 83 Linux
/dev/sdf2       31162368 31262719   100352   49M ef EFI (FAT-12/16/32)


Disk /dev/loop0: 2.2 GiB, 2326040576 bytes, 4543048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdj: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: WDC WD100EMAZ-00
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983

Device           Start         End    Sectors  Size Type
/dev/sdj1         2048     4982527    4980480  2.4G Linux RAID
/dev/sdj2      4982528     9176831    4194304    2G Linux RAID
/dev/sdj5      9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdj6   5860342336 11720838239 5860495904  2.7T Linux RAID
/dev/sdj7  11720854336 19532653311 7811798976  3.7T Linux RAID


Disk /dev/sdg: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30PURX-64P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1D5B8B09-8D4A-4729-B089-442620D3D507

Device       Start        End    Sectors  Size Type
/dev/sdg1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdg2  4982528    9176831    4194304    2G Linux RAID
/dev/sdg5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: WDC WD100EMAZ-00
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EA537505-55B5-4C27-A7CA-C7BBB7E7B56F

Device           Start         End    Sectors  Size Type
/dev/sdn1         2048     4982527    4980480  2.4G Linux RAID
/dev/sdn2      4982528     9176831    4194304    2G Linux RAID
/dev/sdn5      9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdn6   5860342336 11720838239 5860495904  2.7T Linux RAID
/dev/sdn7  11720854336 19532653311 7811798976  3.7T Linux RAID


Disk /dev/sdl: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30PURX-64P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A3E39D34-4297-4BE9-B4FD-3A21EFC38071

Device       Start        End    Sectors  Size Type
/dev/sdl1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdl2  4982528    9176831    4194304    2G Linux RAID
/dev/sdl5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdm: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30PURX-64P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54D81C51-AB85-4DE2-AA16-263DF1C6BB8A

Device       Start        End    Sectors  Size Type
/dev/sdm1     2048    4982527    4980480  2.4G Linux RAID
/dev/sdm2  4982528    9176831    4194304    2G Linux RAID
/dev/sdm5  9453280 5860326239 5850872960  2.7T Linux RAID


Disk /dev/sdh: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0041-2EL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 849E02B2-2734-496B-AB52-A572DF8FE63F

Device          Start         End    Sectors  Size Type
/dev/sdh1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdh2     4982528     9176831    4194304    2G Linux RAID
/dev/sdh5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdh6  5860342336 11720838239 5860495904  2.7T Linux RAID


Disk /dev/sdk: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 09CB7303-C2E7-46F8-ADA0-D4853F25CB00

Device          Start         End    Sectors  Size Type
/dev/sdk1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdk2     4982528     9176831    4194304    2G Linux RAID
/dev/sdk5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdk6  5860342336 11720838239 5860495904  2.7T Linux RAID


Disk /dev/sdi: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0041-2EL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 423D33B4-90CE-4E34-9C40-6E06D1F50C0C

Device          Start         End    Sectors  Size Type
/dev/sdi1        2048     4982527    4980480  2.4G Linux RAID
/dev/sdi2     4982528     9176831    4194304    2G Linux RAID
/dev/sdi5     9453280  5860326239 5850872960  2.7T Linux RAID
/dev/sdi6  5860342336 11720838239 5860495904  2.7T Linux RAID


Disk /dev/md126: 3.7 TiB, 3999639994368 bytes, 7811796864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md125: 10.9 TiB, 12002291351552 bytes, 23441975296 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 262144 bytes


Disk /dev/md124: 223.6 GiB, 240052666368 bytes, 468852864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] 
md124 : active raid0 sda1[0]
      234426432 blocks super 1.2 64k chunks
      
md125 : active (auto-read-only) raid5 sdk6[2] sdj6[3] sdi6[1] sdh6[0]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]
      
md126 : active (auto-read-only) raid1 sdj7[0]
      3905898432 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>
# mdadm --detail /dev/md125
/dev/md125:
           Version : 1.2
     Creation Time : Sun Sep 22 21:55:04 2019
        Raid Level : raid5
        Array Size : 11720987648 (11178.00 GiB 12002.29 GB)
     Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB)
      Raid Devices : 5
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Jan 11 20:50:35 2020
             State : clean, degraded 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : resync

              Name : homelab:4
              UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0
            Events : 7035

    Number   Major   Minor   RaidDevice State
       0       8      118        0      active sync   /dev/sdh6
       1       8      134        1      active sync   /dev/sdi6
       2       8      166        2      active sync   /dev/sdk6
       -       0        0        3      removed
       3       8      150        4      active sync   /dev/sdj6
# lvm vgscan
  Reading all physical volumes.  This may take a while...
  Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf.
  Found volume group "vg1" using metadata type lvm2
  
lvm> vgs vg1
  Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf.
  VG  #PV #LV #SN Attr   VSize   VFree  
  vg1   3   2   0 wz-pn- <47.25t 916.00m
  
lvm> lvmdiskscan 
  /dev/loop0 [      <2.17 GiB] 
  /dev/sdf1  [     <14.86 GiB] 
  /dev/sdf2  [      49.00 MiB] 
  /dev/md124 [    <223.57 GiB] 
  /dev/md125 [     <10.92 TiB] LVM physical volume
  /dev/md126 [      <3.64 TiB] LVM physical volume
  0 disks
  4 partitions
  0 LVM physical volume whole disks
  2 LVM physical volumes
  
lvm> pvscan
  Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf.
  PV [unknown]    VG vg1             lvm2 [32.69 TiB / 0    free]
  PV /dev/md125   VG vg1             lvm2 [<10.92 TiB / 0    free]
  PV /dev/md126   VG vg1             lvm2 [<3.64 TiB / 916.00 MiB free]
  Total: 3 [<47.25 TiB] / in use: 3 [<47.25 TiB] / in no VG: 0 [0   ]
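A quick sanity check on that pvscan: the missing PV is reported as 32.69 TiB, and the two found PVs are 10.92 and 3.64 TiB; together they account exactly for the "VSize <47.25t" that vgs shows. In other words, the lost PV is almost certainly the big 13-disk md2 array that never assembled:

```shell
# The three PV sizes from pvscan above should sum to the VG size.
total=$(awk 'BEGIN { printf "%.2f", 32.69 + 10.92 + 3.64 }')
echo "unknown PV + md125 + md126 = ${total} TiB"   # matches VSize <47.25t
```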

  
lvm> lvs
  Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf.
  LV                    VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1 -wi-----p-  12.00m                                               
  volume_1              vg1 -wi-----p- <47.25t  
  
lvm> vgdisplay
  Couldn't find device with uuid xreQ41-E5FU-YC9V-cTHA-QBb0-Cr3U-tcvkZf.
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                2
  VG Size               <47.25 TiB
  PE Size               4.00 MiB
  Total PE              12385768
  Alloc PE / Size       12385539 / <47.25 TiB
  Free  PE / Size       229 / 916.00 MiB
  VG UUID               2n0Cav-enzK-3ouC-02ve-tYKn-jsP5-PxfYQp

 

Edited by C-Fu

Share this post


Link to post
Share on other sites

I apologize if this may seem unkind, but you need to get very methodical and organized, and resist the urge to do something quickly. There's a pattern here: folks throw themselves at this problem and end up potentially doing damage without really doing research. That leads to a low likelihood of data recovery, and at best recovery with a lot of corruption and data loss.

 

If your data is intact, it will patiently wait for you. I don't know why you decided to boot up Ubuntu, but you must understand that all the device IDs are probably different and nothing will match up. It also looks like some of the info you posted is from DSM and some of it is from Ubuntu, so we pretty much have to ignore it and start over to have any chance of figuring things out.

 

If you want help, PUT EVERYTHING BACK EXACTLY THE WAY IT WAS, boot up in DSM, and get shell access. Then:

  1. Summarize exactly what potentially destructive steps you took.  You posted that you clicked Repair, and then Deactivate.  Specifically where, affecting what?  Anything else?
  2. cat /proc/mdstat
  3. Using each of the arrays returned by mdstat: mdadm --detail /dev/<array>
  4. ls /dev/sd*
  5. ls /dev/md*
  6. ls /dev/vg*

Please post the results (that is, if you want help; on your own you can do what you wish, of course). There is no guarantee that your data is recoverable; there is no concrete evidence of anything yet.
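The checklist above can be gathered in one pass with a small script (hypothetical filename, run as root on DSM; anything unavailable is simply skipped):

```shell
# collect_diag.sh - sketch of the diagnostic collection described above.
OUT=dsm_diag.txt
{
  echo "== /proc/mdstat =="
  cat /proc/mdstat 2>/dev/null || echo "(unavailable)"
  echo "== mdadm --detail =="
  for md in /dev/md*; do
    [ -b "$md" ] && mdadm --detail "$md" 2>/dev/null
  done
  echo "== device nodes =="
  ls /dev/sd* /dev/md* /dev/vg* 2>/dev/null
} > "$OUT"
echo "wrote $OUT"
```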


One more thing, did you just walk up to the system and see the two crashed drives?  Or was there some system operation happening at the time?

If something caused the problem, it would help to know what it was.

11 hours ago, flyride said:

I apologize if this may seem unkind, but you need to get very methodical and organized, and resist the urge to do something quickly

 

If your data is intact, it will patiently wait for you. I don't know why you decided to boot up Ubuntu but you must understand that all the device ID's are probably different and nothing will match up.  It actually looks like some of the info you posted is from DSM and some of it is from Ubuntu.  So pretty much we have to ignore it and start over to have any chance of figuring things out.

 

Thing is, as I said earlier

 

First, nothing out of the ordinary happened.

Then two drives started crashing out of nowhere. Some (or all?) of the drives' data was still accessible. My mistake was deactivating the 10TB. So I backed up whatever I could via rclone from a different machine, accessing via SMB and NFS.

 

Then I rebooted. It presented the option to repair. I clicked Repair, and when everything was done one drive was labelled clean, and a different one was labelled crashed. I freaked out and continued the backup.

 

Soon after, all the shares were gone. I ran mdadm -Asf && vgchange -ay in DSM.

 

I booted an Ubuntu live USB because I read multiple times that you can just pop your drives into Ubuntu and (easily?) mount them to access whatever data is there. That's all. So if I can mount any or all of the drives in Ubuntu and back up what's left, at this stage I'd be more than happy 😁
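For what it's worth, the "mount it from Ubuntu" recipe people refer to is roughly the following (a hedged sketch only; it assumes mdadm and lvm2 are installed, uses the vg1/volume_1 names from the LVM output in this thread, and will not work while md2 is missing too many members):

```shell
# Read-only Synology volume mount from a live environment. Defined as a
# function here so nothing runs by accident; on the NAS you would call it.
mount_syno_ro() {
  mdadm --assemble --scan --readonly           # assemble whatever can assemble
  vgchange -ay vg1                             # activate the volume group
  mkdir -p /mnt/volume1
  mount -o ro /dev/vg1/volume_1 /mnt/volume1   # mount the data volume read-only
}
# e.g. on the NAS: mount_syno_ro && ls /mnt/volume1
```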

 

I didn't physically alter anything on the server other than swapping the boot USB for the Ubuntu one.

 

I will post whatever commands you ask for, and unkind or not, at this stage I appreciate any reply 😁

 

/volume1$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdl5[7] sdk5[5] sdf5[4] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/6] [UUU_UU_U_____]

md4 : active raid5 sdl6[0]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/1] [U____]

md1 : active raid1 sdl2[6] sdk2[5] sdf2[4] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [12/6] [UUU_UUU_____]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[10] sdo1[0] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>
root@homelab:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 13
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu Jan 16 15:36:17 2020
          State : clean, FAILED
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : homelab:2  (local to host homelab)
           UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
         Events : 370905

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       -       0        0        3      removed
       4       8       85        4      active sync   /dev/sdf5
       5       8      165        5      active sync   /dev/sdk5
       -       0        0        6      removed
       7       8      181        7      active sync   /dev/sdl5
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed
root@homelab:~# mdadm --detail /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:04 2019
     Raid Level : raid5
     Array Size : 11720987648 (11178.00 GiB 12002.29 GB)
  Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB)
   Raid Devices : 5
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Jan 16 15:36:17 2020
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : homelab:4  (local to host homelab)
           UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0
         Events : 7040

    Number   Major   Minor   RaidDevice State
       0       8      182        0      active sync   /dev/sdl6
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed
       -       0        0        4      removed
root@homelab:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Thu Jan 16 15:35:53 2020
     Raid Level : raid1
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Jan 16 15:36:44 2020
          State : active, degraded
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

           UUID : 000ab602:c505dcb2:cc8c244d:4f76664d (local to host homelab)
         Events : 0.25

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       2       8       50        2      active sync   /dev/sdd2
       -       0        0        3      removed
       4       8       82        4      active sync   /dev/sdf2
       5       8      162        5      active sync   /dev/sdk2
       6       8      178        6      active sync   /dev/sdl2
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
root@homelab:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Sun Sep 22 21:01:46 2019
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 10
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Jan 16 15:48:30 2020
          State : clean, degraded
 Active Devices : 10
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 0

           UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be
         Events : 0.522691

    Number   Major   Minor   RaidDevice State
       0       8      225        0      active sync   /dev/sdo1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8      241        4      active sync   /dev/sdp1
       5       8       81        5      active sync   /dev/sdf1
       6       8      161        6      active sync   /dev/sdk1
       7       8      177        7      active sync   /dev/sdl1
       8       8      193        8      active sync   /dev/sdm1
       -       0        0        9      removed
      10       8      209       10      active sync   /dev/sdn1
       -       0        0       11      removed
root@homelab:~# ls /dev/sd*
/dev/sda   /dev/sdb2  /dev/sdc2  /dev/sdd2  /dev/sdf2  /dev/sdk2  /dev/sdl2  /dev/sdm1  /dev/sdn   /dev/sdn6  /dev/sdo5  /dev/sdp1  /dev/sdq1  /dev/sdr1  /dev/sdr7
/dev/sda1  /dev/sdb5  /dev/sdc5  /dev/sdd5  /dev/sdf5  /dev/sdk5  /dev/sdl5  /dev/sdm2  /dev/sdn1  /dev/sdo   /dev/sdo6  /dev/sdp2  /dev/sdq2  /dev/sdr2
/dev/sdb   /dev/sdc   /dev/sdd   /dev/sdf   /dev/sdk   /dev/sdl   /dev/sdl6  /dev/sdm5  /dev/sdn2  /dev/sdo1  /dev/sdo7  /dev/sdp5  /dev/sdq5  /dev/sdr5
/dev/sdb1  /dev/sdc1  /dev/sdd1  /dev/sdf1  /dev/sdk1  /dev/sdl1  /dev/sdm   /dev/sdm6  /dev/sdn5  /dev/sdo2  /dev/sdp   /dev/sdq   /dev/sdr   /dev/sdr6
root@homelab:~# ls /dev/md*
/dev/md0  /dev/md1  /dev/md2  /dev/md4
root@homelab:~# ls /dev/vg*
/dev/vga_arbiter
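To make the two "clean, FAILED" states above concrete: RAID5 tolerates exactly one missing member. md2 has 6 of 13 active and md4 has 1 of 5, so neither can start; the md125 seen earlier from the live USB (4 of 5) is degraded but alive. As a trivial classifier:

```shell
# Classify a RAID5 array from its device counts.
raid5_state() {  # usage: raid5_state <raid_devices> <active_devices>
  missing=$(( $1 - $2 ))
  if   [ "$missing" -eq 0 ]; then echo clean
  elif [ "$missing" -eq 1 ]; then echo degraded   # running, no redundancy left
  else echo FAILED                                # more losses than parity covers
  fi
}
raid5_state 13 6   # md2   -> FAILED
raid5_state 5 1    # md4   -> FAILED
raid5_state 5 4    # md125 -> degraded
```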

Note that when I pop the synoboot USB back in, it asks me to migrate, and since all files are read-only, I can't restore my synoinfo.conf to the original settings: 20 drives, 4 USB, 2 eSATA.

 

Again I appreciate any reply, thank you!


"Note that when I pop the synoboot usb back, it asks me to migrate, and since all files are read-only, I can't restore my synoinfo.conf back to the original - 20 drives, 4 usb, 2 esata."

 

Can you explain this further?  You said at the beginning  you had 12 data drives and one cache drive.  synoinfo.conf is stored on the root filesystem, which is /dev/md0 and is still functional.  Why can't you modify synoinfo.conf directly as needed?

 

If you are saying drives are not accessible, we need to fix that before anything else.

1 hour ago, flyride said:

"Note that when I pop the synoboot usb back, it asks me to migrate, and since all files are read-only, I can't restore my synoinfo.conf back to the original - 20 drives, 4 usb, 2 esata."

 

Can you explain this further?  You said at the beginning  you had 12 data drives and one cache drive.  synoinfo.conf is stored on the root filesystem, which is /dev/md0 and is still functional.  Why can't you modify synoinfo.conf directly as needed?

 

If you are saying drives are not accessible, we need to fix that before anything else.

 

It means that when this whole thing started (with the 2 crashed drives), everything went into read-only mode. I have no idea why. My synoinfo.conf was still as it was; I hadn't changed anything since the first installation.

 

Now when I pop the XPE USB back in, it asks me to migrate, thus wiping out my synoinfo.conf and reverting to the default ds3617xs settings.

[screenshot attached]


I am getting lost in your explanation. If your arrays are crashed, they will be read-only or unavailable, so that is to be expected.

 

You keep saying "pop the xpe usb back." I'm assuming you did that only once and it is still installed.  Right?  So is the following true?

 

1. You tried to boot up DSM and it asked to migrate.

2. You decided to perform the migration installation.

3. The system then booted up to the state you posted above.

4. You didn't remove the boot loader USB; it stays in place.

 

Please confirm whether the above list is true.  And when you reboot, does it try to migrate again or come back to the state you posted?

2 hours ago, flyride said:

I am getting lost in your explanation.  If your arrays are crashed, they will be read-only or unavailable, so that can be expected.

 

You keep saying "pop the xpe usb back." I'm assuming you did that only once and it is still installed.  Right?  So is the following true?

 

1. You tried to boot up DSM and it asked to migrate.

2. You decided to perform the migration installation

3. The system then booted up to the state you post above

4. You don't remove the boot loader USB, it stays in place

 

Please confirm whether the above list is true.  And when you reboot, does it try to migrate again or come back to the state you posted?

All are true.

It didn't ask to migrate again after rebooting. But this happened.

[screenshot attached]

Before rebooting, all disks showed Normal in Disk Allocation Status.

 

mdstat after rebooting (md2 and md0 are different from before the reboot):

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdl5[7] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/7] [UUUUUU_U_____]

md4 : active raid5 sdl6[0]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/1] [U____]

md1 : active raid1 sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [12/7] [UUUUUUU_____]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>

  
root@homelab:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 13
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 02:11:28 2020
          State : clean, FAILED
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : homelab:2  (local to host homelab)
           UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
         Events : 370911

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       3       8       69        3      active sync   /dev/sde5
       4       8       85        4      active sync   /dev/sdf5
       5       8      165        5      active sync   /dev/sdk5
       -       0        0        6      removed
       7       8      181        7      active sync   /dev/sdl5
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed

      
root@homelab:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Sun Sep 22 21:01:46 2019
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 10
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 02:18:37 2020
          State : clean, degraded
 Active Devices : 10
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 0

           UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be
         Events : 0.536737

    Number   Major   Minor   RaidDevice State
       0       8      209        0      active sync   /dev/sdn1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8      241        4      active sync   /dev/sdp1
       5       8       81        5      active sync   /dev/sdf1
       6       8      161        6      active sync   /dev/sdk1
       7       8      177        7      active sync   /dev/sdl1
       8       8      193        8      active sync   /dev/sdm1
       -       0        0        9      removed
      10       8      225       10      active sync   /dev/sdo1
       -       0        0       11      removed

 

OK lemme rephrase.

Post #4 says I used the ubuntu usb.

Post #5 you asked me to use the xpe usb and run those commands.

Post #6 states that when I put the xpe usb back, it asked me to migrate. I migrated, and hence lost my edited synoinfo.conf; it defaults back to 6 usable disks, etc. I can't make any changes since everything is in read-only mode.

 

xpe usb is the same one that I've been using since the very beginning.
 

Edited by C-Fu

Share this post


Link to post
Share on other sites

Got it, thanks for the clarification. It's unfortunate that a migration installation occurred, but as long as it doesn't keep happening we can continue.

 

According to the mdstat you posted, the root filesystem is NOT in read-only mode (in fact I don't think DSM will boot successfully if it is in read-only mode).  So you should be able to edit /etc.defaults/synoinfo.conf and restore your disk mask configuration.  Are you editing as sudo?
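
If it helps, here's a rough sketch of that edit. The maxdisks value below is just an example (not your exact config), and it works on a scratch copy so nothing real is touched until you're sure:

```shell
# On the real box you would first confirm / is mounted read-write
# (mount | grep ' / ') and then edit with: sudo vi /etc.defaults/synoinfo.conf
# This sketch operates on a throwaway copy instead.

conf_copy=$(mktemp)
cat > "$conf_copy" <<'EOF'
maxdisks="6"
EOF

# Example only: restore a custom disk count of 16 (adjust to your layout)
sed -i 's/^maxdisks=.*/maxdisks="16"/' "$conf_copy"
grep '^maxdisks' "$conf_copy"
```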

 

Once you can boot DSM "normally" with no migration installation and your disks are back in the layout they were at the beginning, please come back and post these again:

  1. cat /proc/mdstat
  2. Using each of the arrays returned by mdstat: mdadm --detail /dev/<array>
  3. ls /dev/sd*
  4. ls /dev/md*

Don't be concerned about disks and volumes being crashed for now, let's just get the disks addressable in the way they were at the beginning.
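
For step 2, a quick way to generate the per-array commands from step 1's output (sketch only; the sample mdstat lines are trimmed from earlier in this thread, and on the NAS you would read /proc/mdstat directly):

```shell
# Turn mdstat array names into the mdadm --detail commands to run.
# On the NAS itself: awk '/^md/ { print "mdadm --detail /dev/" $1 }' /proc/mdstat
mdstat_sample='md2 : active raid5 sdb5[0] sdc5[1]
md4 : active raid5 sdl6[0] sdm6[1]
md0 : active raid1 sdb1[1] sdc1[2]'

printf '%s\n' "$mdstat_sample" | awk '/^md/ { print "mdadm --detail /dev/" $1 }'
```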

35 minutes ago, flyride said:

Don't be concerned about disks and volumes being crashed for now, let's just get the disks addressable in the way they were at the beginning.

 

synoinfo.conf editing works! Not sure why it works after reboot, but meh doesn't matter 😁

 

3 drives have System Partition Failed now. 

image.thumb.png.d4af0be5159bbe7f27e567b748e0f596.png

 

Anyway... the arrays are md2, md4, md5, md1, md0.

# ls /dev/sd*
/dev/sda   /dev/sdb2  /dev/sdc2  /dev/sdd2  /dev/sde2  /dev/sdf2  /dev/sdk2  /dev/sdl2  /dev/sdm1  /dev/sdn   /dev/sdn6  /dev/sdo2  /dev/sdp1  /dev/sdq1  /dev/sdr1  /dev/sdr7
/dev/sda1  /dev/sdb5  /dev/sdc5  /dev/sdd5  /dev/sde5  /dev/sdf5  /dev/sdk5  /dev/sdl5  /dev/sdm2  /dev/sdn1  /dev/sdn7  /dev/sdo5  /dev/sdp2  /dev/sdq2  /dev/sdr2
/dev/sdb   /dev/sdc   /dev/sdd   /dev/sde   /dev/sdf   /dev/sdk   /dev/sdl   /dev/sdl6  /dev/sdm5  /dev/sdn2  /dev/sdo   /dev/sdo6  /dev/sdp5  /dev/sdq5  /dev/sdr5
/dev/sdb1  /dev/sdc1  /dev/sdd1  /dev/sde1  /dev/sdf1  /dev/sdk1  /dev/sdl1  /dev/sdm   /dev/sdm6  /dev/sdn5  /dev/sdo1  /dev/sdp   /dev/sdq   /dev/sdr   /dev/sdr6
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10] sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/11] [UUUUUU_UUU_UU]

md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]

md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>
/dev/md2:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 13
  Total Devices : 11
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 03:26:11 2020
          State : clean, FAILED
 Active Devices : 11
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : homelab:2  (local to host homelab)
           UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
         Events : 370917

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       37        1      active sync   /dev/sdc5
       2       8       53        2      active sync   /dev/sdd5
       3       8       69        3      active sync   /dev/sde5
       4       8       85        4      active sync   /dev/sdf5
       5       8      165        5      active sync   /dev/sdk5
       -       0        0        6      removed
       7       8      181        7      active sync   /dev/sdl5
       8       8      197        8      active sync   /dev/sdm5
       9       8      229        9      active sync   /dev/sdo5
       -       0        0       10      removed
      11       8      213       11      active sync   /dev/sdn5
      10       8      245       12      active sync   /dev/sdp5
/dev/md4:
        Version : 1.2
  Creation Time : Sun Sep 22 21:55:04 2019
     Raid Level : raid5
     Array Size : 11720987648 (11178.00 GiB 12002.29 GB)
  Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB)
   Raid Devices : 5
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 03:26:11 2020
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : homelab:4  (local to host homelab)
           UUID : 648fc239:67ee3f00:fa9d25fe:ef2f8cb0
         Events : 7052

    Number   Major   Minor   RaidDevice State
       0       8      182        0      active sync   /dev/sdl6
       1       8      198        1      active sync   /dev/sdm6
       2       8      230        2      active sync   /dev/sdo6
       -       0        0        3      removed
       3       8      214        4      active sync   /dev/sdn6
/dev/md5:
        Version : 1.2
  Creation Time : Tue Sep 24 19:36:08 2019
     Raid Level : raid1
     Array Size : 3905898432 (3724.96 GiB 3999.64 GB)
  Used Dev Size : 3905898432 (3724.96 GiB 3999.64 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 03:26:06 2020
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : homelab:5  (local to host homelab)
           UUID : ae55eeff:e6a5cc66:2609f5e0:2e2ef747
         Events : 223792

    Number   Major   Minor   RaidDevice State
       0       8      215        0      active sync   /dev/sdn7
       -       0        0        1      removed
/dev/md1:
        Version : 0.90
  Creation Time : Fri Jan 17 03:25:58 2020
     Raid Level : raid1
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 24
  Total Devices : 13
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 03:26:49 2020
          State : active, degraded
 Active Devices : 13
Working Devices : 13
 Failed Devices : 0
  Spare Devices : 0

           UUID : 846f27e4:bf628296:cc8c244d:4f76664d (local to host homelab)
         Events : 0.20

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       2       8       50        2      active sync   /dev/sdd2
       3       8       66        3      active sync   /dev/sde2
       4       8       82        4      active sync   /dev/sdf2
       5       8      162        5      active sync   /dev/sdk2
       6       8      178        6      active sync   /dev/sdl2
       7       8      194        7      active sync   /dev/sdm2
       8       8      210        8      active sync   /dev/sdn2
       9       8      226        9      active sync   /dev/sdo2
      10       8      242       10      active sync   /dev/sdp2
      11      65        2       11      active sync   /dev/sdq2
      12      65       18       12      active sync   /dev/sdr2
       -       0        0       13      removed
       -       0        0       14      removed
       -       0        0       15      removed
       -       0        0       16      removed
       -       0        0       17      removed
       -       0        0       18      removed
       -       0        0       19      removed
       -       0        0       20      removed
       -       0        0       21      removed
       -       0        0       22      removed
       -       0        0       23      removed
/dev/md0:
        Version : 0.90
  Creation Time : Sun Sep 22 21:01:46 2019
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 10
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jan 17 03:42:39 2020
          State : active, degraded
 Active Devices : 10
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 0

           UUID : f36dde6e:8c6ec8e5:3017a5a8:c86610be
         Events : 0.539389

    Number   Major   Minor   RaidDevice State
       0       8      209        0      active sync   /dev/sdn1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8      241        4      active sync   /dev/sdp1
       5       8       81        5      active sync   /dev/sdf1
       6       8      161        6      active sync   /dev/sdk1
       7       8      177        7      active sync   /dev/sdl1
       8       8      193        8      active sync   /dev/sdm1
       -       0        0        9      removed
      10       8      225       10      active sync   /dev/sdo1
       -       0        0       11      removed

 


11 minutes ago, C-Fu said:

3 drives have System Partition Failed now. 

 

Don't worry about that, it's not important.  This is a better state than before.

 

First, let's grab all the critical information about the broken array:

 

# mdadm --examine /dev/sd[bcdefklmnopqr]5 >>/tmp/raid.status

 

Then,

 

# mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'

 

and post the result.

6 minutes ago, flyride said:

# mdadm --examine /dev/sd[bcdefklmnopqr]5 >>/tmp/raid.status

# cat /tmp/raid.status
/dev/sdb5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : a8109f74:46bc8509:6fc3bca8:9fddb6a7

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : b2b47a7 - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 8dfdc601:e01f8a98:9a8e78f1:a7951260

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : 286f1b5a - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : f98bc050:a4b46deb:c3168fa0:08d90061

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : 7a510a15 - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 1e2742b7:d1847218:816c7135:cdf30c07

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : 48c5e5ea - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : ce60c47e:14994160:da4d1482:fd7901f2

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : 8fae9b68 - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdk5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 706c5124:d647d300:733fb961:e5cd8127

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : eaf29b4c - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdl5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 6993b9eb:8ad7c80f:dc17268f:a8efa73d

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : 4e34c29 - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 7
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdm5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 2f1247d1:a536d2ad:ba2eb47f:a7eaf237

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : 73533be8 - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 8
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdn5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 73610f83:fb3cf895:c004147e:b4de2bfe

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : d1da5b98 - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 11
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdo5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 1b4ab27d:bb7488fa:a6cc1f75:d21d1a83

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : ad078302 - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 9
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdp5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : a64f01c2:76c56102:38ad7c4e:7bce88d1

    Update Time : Fri Jan 17 03:26:11 2020
       Checksum : 20fb6a8c - correct
         Events : 370917

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 12
   Array State : AAAAAA.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdq5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 5cc6456d:bfc950bf:1baf6fef:aabec947

    Update Time : Fri Jan 10 19:44:10 2020
       Checksum : a0630221 - correct
         Events : 370871

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 6
   Array State : AAAAAAAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdr5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
           Name : homelab:2  (local to host homelab)
  Creation Time : Sun Sep 22 21:55:03 2019
     Raid Level : raid5
   Raid Devices : 13

 Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
     Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
Recovery Offset : 78633768 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : active
    Device UUID : d0a4607c:b970d906:02920f5c:ad5204d1

    Update Time : Fri Jan  3 03:01:29 2020
       Checksum : f185b28c - correct
         Events : 370454

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 10
   Array State : AAAAAAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)

 

# mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'
/dev/sdb5:
         Events : 370917
/dev/sdc5:
         Events : 370917
/dev/sdd5:
         Events : 370917
/dev/sde5:
         Events : 370917
/dev/sdf5:
         Events : 370917
/dev/sdk5:
         Events : 370917
/dev/sdl5:
         Events : 370917
/dev/sdm5:
         Events : 370917
/dev/sdn5:
         Events : 370917
/dev/sdo5:
         Events : 370917
/dev/sdp5:
         Events : 370917
/dev/sdq5:
         Events : 370871
/dev/sdr5:
         Events : 370454

 


Ok good.  Here comes a critical part.  Note that the assemble command does NOT have the "r" in it.

 

# mdadm --stop /dev/md2

 

Then

 

# mdadm --assemble --force /dev/md2 /dev/sd[bcdefklmnopq]5

 

Then

 

# cat /proc/mdstat

 

and post the result.

6 minutes ago, flyride said:

Ok good.  Here comes a critical part.  Note that the assemble command does NOT have the "r" in it.

That means it's not running, right? Just trying to learn and understand at the same time 😁

 

root@homelab:~# mdadm --stop /dev/md2
mdadm: stopped /dev/md2
root@homelab:~#
root@homelab:~# mdadm --assemble --force /dev/md2 /dev/sd[bcdefklmnopq]5
mdadm: forcing event count in /dev/sdq5(6) from 370871 upto 370918
mdadm: clearing FAULTY flag for device 11 in /dev/md2 for /dev/sdq5
mdadm: Marking array /dev/md2 as 'clean'
mdadm: /dev/md2 assembled from 12 drives - not enough to start the array.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]

md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>

If it matters, the Kingston SSD is my cache drive.


Just to make totally sure we are not making a mistake, let's do this one more time.

 

# mdadm --detail /dev/md2

12 minutes ago, C-Fu said:

It means that it's not running right? Just trying to learn and understand at the same time 😁

 

 

I meant that the assemble command deliberately omitted /dev/sdr5

7 minutes ago, flyride said:

# mdadm --detail /dev/md2

# mdadm --detail /dev/md2
/dev/md2:
        Version :
     Raid Level : raid0
  Total Devices : 0

          State : inactive

    Number   Major   Minor   RaidDevice

 


Arg, well that wasn't helpful.  Sorry.  Let's do this:

 

# mdadm --assemble --scan /dev/md2

 

# cat /proc/mdstat

10 minutes ago, flyride said:

# mdadm --assemble --scan /dev/md2

 

# cat /proc/mdstat

mdadm --assemble --scan /dev/md2
mdadm: /dev/md2 not identified in config file.
 

and mdstat doesn't show md2.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]

md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>

Post #15 did say:

 

mdadm: /dev/md2 assembled from 12 drives - not enough to start the array.



Ok, let's do this instead:

 

# mdadm --assemble /dev/md2 --uuid 43699871:217306be:dc16f5e8:dcbe1b0d

 

# cat /proc/mdstat

1 minute ago, flyride said:

# mdadm --assemble /dev/md2 --uuid 43699871:217306be:dc16f5e8:dcbe1b0d

 

# cat /proc/mdstat

 

Here's the output:

root@homelab:~# mdadm --assemble /dev/md2 --uuid 43699871:217306be:dc16f5e8:dcbe1b0d
mdadm: ignoring /dev/sdc5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdd5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sde5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdf5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdk5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdl5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdm5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdo5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdn5 as it reports /dev/sdq5 as failed
mdadm: ignoring /dev/sdp5 as it reports /dev/sdq5 as failed
mdadm: /dev/md2 assembled from 2 drives - not enough to start the array.
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]

md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>

🤔


Ok, let me explain what's happening just a bit.

 

According to mdstat, all the arrays that comprise your SHR are degraded but intact, except for /dev/md2, which is the small array that spans every one of your disks.

When you had two concurrently crashed drives, this is what took your SHR down (since a RAID5 needs all but one device to be working).

 

Once all the disks were online and addressable, we examined them in detail.  All array members have an event ID associated with the I/O of the array.  10 (EDIT: 11, actually) of the drives were in sync, 1 was "just" out of sync, and 1 was a bit more out of sync.  So the first --force command attempted to start the array, omitting the worst out of sync drive.  A force command will cause the array to make irreversible changes to itself, so we want to do so very sparingly.
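
As a sketch of that comparison (the sample lines below are copied from the --examine output earlier in the thread; the awk just pairs each member with its event count and sorts so stragglers stand out first):

```shell
# Pair each array member with its event count, lowest (most out of sync) first.
# On the NAS: mdadm --examine /dev/sd[bcdefklmnopqr]5 | egrep 'Event|/dev/sd'
examine_sample='/dev/sdb5:
         Events : 370917
/dev/sdq5:
         Events : 370871
/dev/sdr5:
         Events : 370454'

printf '%s\n' "$examine_sample" | awk '
/^\/dev\// { dev = $1 }
/Events/   { print dev, $NF }' | sort -n -k2
```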

 

I don't want to start the "bad" /dev/sdr5 with the array yet, and we are struggling with the exact syntax to do so.  And it is erroring out on the "just" out of sync drive so despite our first "force," the view of that drive is not fully consistent throughout the array.  Basically we are trying to start it in a degraded state, but not to force if not safe to do so.

 

So.... let's first try:

 

# mdadm --assemble --run /dev/md2 /dev/sd[bcdefklmnopq]5

 

# cat /proc/mdstat

Edited by flyride

14 minutes ago, flyride said:

# mdadm --assemble --run /dev/md2 /dev/sd[bcdefklmnopq]5

 

# cat /proc/mdstat

root@homelab:~# mdadm --assemble --run /dev/md2 /dev/sd[bcdefklmnopq]5
mdadm: /dev/md2 has been started with 12 drives (out of 13).
root@homelab:~#
root@homelab:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[0] sdp5[10] sdn5[11] sdo5[9] sdm5[8] sdl5[7] sdq5[13] sdk5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
      35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]

md4 : active raid5 sdl6[0] sdn6[3] sdo6[2] sdm6[1]
      11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUU_U]

md5 : active raid1 sdn7[0]
      3905898432 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdr2[12] sdq2[11] sdp2[10] sdo2[9] sdn2[8] sdm2[7] sdl2[6] sdk2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1] sdb2[0]
      2097088 blocks [24/13] [UUUUUUUUUUUUU___________]

md0 : active raid1 sdb1[1] sdc1[2] sdd1[3] sdf1[5] sdk1[6] sdl1[7] sdm1[8] sdn1[0] sdo1[10] sdp1[4]
      2490176 blocks [12/10] [UUUUUUUUU_U_]

unused devices: <none>

I think it's a (partial) success... maybe? :D

 

Thanks for the lengthy layman's explanation, I honestly appreciate it!!! I was wondering about the event ID thing; I thought it corresponded to some system log somewhere.



Something isn't quite right.  Do you have 13 drives plus a cache drive, or 12 plus cache?  Which drive is your cache drive now?  Please post fdisk -l

