XPEnology Community

bughatti

Member
  • Posts

    14
  • Joined

  • Last visited

Posts posted by bughatti

  1. 2 hours ago, pocopico said:

    Although I understand the concept for the sake of experimentation, I truly think it's a waste, since all the great tech behind Nimble arrays is wasted with DSM. Maybe a VM accessing the Nimble arrays would be a better approach?

     

    Anyway, I would also be interested in getting my hands on one of these :D How much do they sell for?

    I get what you are saying.  We had one lying around in our datacenter that a customer left behind.  I decided to see if I could get it to work, and it turned out to be easy.  I have installed DSM on many different kinds of hardware over the years, and this appliance was by far the easiest and appears to be the fastest.  We are only using it for Veeam backups over 10Gb iSCSI.  I guess, in retrospect, it really comes down to which management interface you prefer.  We have a bunch of EqualLogics that I am replacing with these because I hate the administrative nightmare of the EQ software.

     

    I just found a few on eBay that we are interested in getting.  From the little research I could find, the plan is to get a CS300 and put 4 Intel 480GB SSDs and 12 x 12TB SATA NAS drives in it.  We will probably buy 2 of them configured exactly the same and test out DSM's cluster software.

  2. 4 minutes ago, IG-88 said:

     

    No, I just used a search engine to find that in a few minutes; this time you may try it yourself.

    The only other thing I found out is that later versions use an M.2 module instead of USB to boot.

    I appreciate the information.  I have been searching for "nimble array gen 4", and nothing turns up except that Gen5 was announced in 2018, so I can safely assume anything made prior to 2018 would be usable, but I have found no document that says the CS220 is gen X or the CS460 is gen X.  I will do more digging, though.

  3. 2 hours ago, IG-88 said:

     

    I found this (and as DSM relies on USB boot ...):

    "...

    All previous generations before Gen5 arrays ship with the USBs in the controllers. They have the bootloader loaded on them. We do not have FRU or parts for the NimbleOS USBs.

    ---"

    Thank you for this information.  Could you provide any model numbers of controller arrays that are Gen4, and also, what is the largest SATA hard drive size it will recognize?

  4. Just curious if anyone else has tried this.  We had an old Nimble CS220 lying around at work; I was able to get DSM 6.2 Update 2 loaded on it, and it runs like a champ with SSD cache and two 10Gb Ethernet links.  I am interested in buying more used ones, maybe newer models, and curious whether anything has changed in the newer ones that would prevent me from installing.

  5. 1 hour ago, Polanskiman said:

    Just a quick question, did you open port 80 on your router?

     

    Yes, 80 and 443 are both open on my router to my XPEnology box.  I have verified this with an open port checker, and Web Station responds with a page on both from outside my network.

     

    root@LiquidXPe:~# sudo syno-letsencrypt new-cert -d domain.com -m email@gmail.com -v
    DEBUG: ==== start to new cert ====
    DEBUG: Server: https://acme-v01.api.letsencrypt.org/directory
    DEBUG: Email:email@gmail.com
    DEBUG: Domain:  domain.com
    DEBUG: ==========================
    DEBUG: setup acme url https://acme-v01.api.letsencrypt.org/directory
    DEBUG: GET Request: https://acme-v01.api.letsencrypt.org/directory
    DEBUG: Not found registed account. do reg-new.
    DEBUG: Post JWS Request: https://acme-v01.api.letsencrypt.org/acme/new-reg
    DEBUG: Post Request: https://acme-v01.api.letsencrypt.org/acme/new-reg
    {"error":200,"file":"client.cpp","msg":"new-req, unexpect httpcode"}

  6. All, I am trying to issue a Let's Encrypt certificate on my NAS, and it does not want to work.  Below is the error:

     

    2019-12-09T14:57:58-06:00 LiquidXPe synoscgi_SYNO.Core.Certificate.LetsEncrypt_1_create[5038]: certificate.cpp:957 syno-letsencrypt failed. 200 [new-req, unexpect httpcode]
    2019-12-09T14:57:58-06:00 LiquidXPe synoscgi_SYNO.Core.Certificate.LetsEncrypt_1_create[5038]: certificate.cpp:1359 Failed to create Let'sEncrypt certificate. [200][new-req, unexpect httpcode]

    I am running DSM 6.1.7-15284 Update 3.

     

    I have found a few articles and tried all the fixes that worked for others, but no luck.

    I have my domain at Namecheap, with A records pointing the hostname to my IP.

    I have Web Station installed using nginx and PHP 7.3, a virtual host set up, and ports forwarded.  I have validated that I can reach http://host.domain.com and https://host.domain.com.
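
    As a sanity check on the A record, resolving the hostname from outside and comparing the result to my WAN IP looks roughly like this (host.domain.com standing in for the real hostname; nslookup or dig from any machine will do):

    nslookup host.domain.com
    dig +short host.domain.com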

     

    When requesting the Let's Encrypt cert, I have tried with "Set as default" both checked and unchecked.  For the domain name I am using the domain at Namecheap, the email is admin@domain, and the subject alternative name is host.domain.com; the subject alternative name and the Web Station virtual host are exactly the same.

     

     

    Any help would be greatly appreciated.

     

     

  7. Hello all, and thanks in advance for any help or assistance.  I think I am pretty much screwed, but I figured I would ask first before I make things worse.  I have a system with 12 drives: one RAID 6 that corresponds to volume 2, and one RAID 5 that corresponds to volume 1.  I moved my setup a few days ago, and when I plugged it back in, my RAID 5 had lost 2 of its 4 drives.  One drive was completely hosed, not readable in anything else.  The other drive seemed to just be empty and no longer in the RAID like it was previously.  I think part of the reason a drive would just remove itself from the RAID is that I use 6 onboard SATA connections plus an 8-port LSI SAS card.  It has actually happened before a few times, but when a drive dropped out I still had 3 of the 4 drives working, so I could just add it back in, repair, and I was good until the next outage.  This time, with 2 bad drives, it just got hosed.  Either I could not, or I did not know how to, add the working drive back into the RAID properly so it would go from crashed to degraded, and then replace the bad drive and rebuild.

     

    Honestly, I think my first mistake was moving drives around to see if it was a bad drive, a bad cable, or a bad SAS card.  While moving drives around, I figured I would just put all the RAID 5 drives on the internal SATA connections and all the RAID 6 drives on the LSI SAS card.  The RAID 6 had 2 drives that removed themselves from the array, but I was able to put 2 drives back in and repair it, and volume 2 is good with no data loss.  I tried a lot of commands (I apologize, but I do not remember them all) to get the RAID 5 back.  In the end I just replaced the bad drive, so at that point I had the 2 original good RAID 5 drives and 2 other drives that did not show in the RAID 5.

     

    I ended up running

    mdadm --create /dev/md2 --assume-clean --level=5 --verbose --raid-devices=4 /dev/sda3 missing /dev/sdc3 /dev/sdd3

    This put the RAID back in a degraded state, which allowed me to repair it using the newly replaced drive.  The repair completed, but now volume1, which previously showed up under Volumes as crashed, is missing from Volumes entirely.  I have tried to follow a few guides to check things out.  None of the lv/vg commands show anything at all.  The closest I am able to get to anything is trying to run

    :~# vgcfgrestore vg1000
      Couldn't find device with uuid h448fL-VaTW-5n9w-W7FY-Gb4O-50Jb-l0ADjn.
      Couldn't find device with uuid Ppyi69-5Osn-gJtL-MTxB-aGAd-cLYJ-7hy199.
      Couldn't find device with uuid 8NeE7P-Bmf5-ErdT-zZKB-jMJ3-LspS-9C3uLg.
      Cannot restore Volume Group vg1000 with 3 PVs marked as missing.
      Restore failed.

     

    :~# e2fsck -pvf /dev/md2
    e2fsck: Bad magic number in super-block while trying to open /dev/md2
    /dev/md2:
    The superblock could not be read or does not describe a correct ext2
    filesystem.  If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
        e2fsck -b 8193 <device>
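
    I have not yet tried the alternate-superblock suggestion from that error.  My understanding is that the usual approach is roughly the sketch below (mke2fs -n is a dry run that only prints where the backup superblocks would sit; I am not even sure this applies here if the volume was LVM on top of md2 rather than plain ext4):

    mke2fs -n /dev/md2          # dry run only, prints the backup superblock locations
    e2fsck -b 32768 /dev/md2    # then point e2fsck at one of the printed offsets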

     

    :~# cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
    md3 : active raid6 sdd3[6] sdl3[5] sdk3[10] sdh3[7] sdj3[9] sdg3[8]
          7794770176 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
    
    md2 : active raid5 sdf3[4] sdb3[3] sde3[2] sda3[1]
          11706589632 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
    
    md1 : active raid1 sda2[0] sdb2[1] sdc2[11] sdd2[2] sde2[3] sdf2[9] sdg2[4] sdh2[5] sdi2[10] sdj2[6] sdk2[7] sdl2[8]
          2097088 blocks [12/12] [UUUUUUUUUUUU]
    
    md0 : active raid1 sda1[1] sdb1[5] sdc1[11] sdd1[3] sde1[4] sdf1[6] sdg1[9] sdh1[7] sdi1[10] sdj1[8] sdk1[2] sdl1[0]
          2490176 blocks [12/12] [UUUUUUUUUUUU]

     

    parted -l
    Model: WDC WD40EZRX-00SPEB0 (scsi)
    Disk /dev/hda: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  4001GB  3996GB                        raid
    
    
    Model: WDC WD40EZRX-00SPEB0 (scsi)
    Disk /dev/sda: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  4001GB  3996GB                        raid
    
    
    Model: WDC WD40EZRZ-00GXCB0 (scsi)
    Disk /dev/sdb: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  4001GB  3996GB                        raid
    
    
    Model: ATA ST3000DM001-1CH1 (scsi)
    Disk /dev/sdc: 3001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  3000GB  2996GB                        raid
    
    
    Model: ATA ST2000DM001-1CH1 (scsi)
    Disk /dev/sdd: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  2551MB  2550MB  primary               raid
     2      2551MB  4699MB  2147MB  primary               raid
     3      4832MB  2000GB  1995GB  primary               raid
    
    
    Model: ATA ST4000DM005-2DP1 (scsi)
    Disk /dev/sde: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  4001GB  3996GB                        raid
    
    
    Model: WDC WD40EZRZ-00GXCB0 (scsi)
    Disk /dev/sdf: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  4001GB  3996GB                        raid
    
    
    Model: Linux Software RAID Array (md)
    Disk /dev/md0: 2550MB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    Disk Flags:
    
    Number  Start  End     Size    File system  Flags
     1      0.00B  2550MB  2550MB  ext4
    
    
    Model: Linux Software RAID Array (md)
    Disk /dev/md1: 2147MB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    Disk Flags:
    
    Number  Start  End     Size    File system     Flags
     1      0.00B  2147MB  2147MB  linux-swap(v1)
    
    
    Error: /dev/md2: unrecognised disk label
    Model: Linux Software RAID Array (md)
    Disk /dev/md2: 12.0TB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:
    
    Model: Linux Software RAID Array (md)
    Disk /dev/md3: 7982GB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    Disk Flags:
    
    Number  Start  End     Size    File system  Flags
     1      0.00B  7982GB  7982GB  ext4
    
    
    Model: WDC WD2003FYYS-02W0B (scsi)
    Disk /dev/sdg: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  2551MB  2550MB  primary               raid
     2      2551MB  4699MB  2147MB  primary               raid
     3      4832MB  2000GB  1995GB  primary               raid
    
    
    Model: WDC WD2003FYYS-02W0B (scsi)
    Disk /dev/sdh: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  2551MB  2550MB  primary               raid
     2      2551MB  4699MB  2147MB  primary               raid
     3      4832MB  2000GB  1995GB  primary               raid
    
    
    Model: ATA ST3000DM001-1E61 (scsi)
    Disk /dev/sdi: 3001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  3001GB  2996GB                        raid
    
    
    Model: ATA ST2000DM001-1CH1 (scsi)
    Disk /dev/sdj: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  2551MB  2550MB  primary               raid
     2      2551MB  4699MB  2147MB  primary               raid
     3      4832MB  2000GB  1995GB  primary               raid
    
    
    Model: WDC WD30EZRX-00MMMB0 (scsi)
    Disk /dev/sdk: 3001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system     Name  Flags
     1      1049kB  2551MB  2550MB  ext4                  raid
     2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
     3      4832MB  3001GB  2996GB                        raid
    
    
    Model: WDC WD2003FYYS-02W0B (scsi)
    Disk /dev/sdl: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  2551MB  2550MB  primary               raid
     2      2551MB  4699MB  2147MB  primary               raid
     3      4832MB  2000GB  1995GB  primary               raid
    
    
    Model: Unknown (unknown)
    Disk /dev/zram0: 2499MB
    Sector size (logical/physical): 4096B/4096B
    Partition Table: loop
    Disk Flags:
    
    Number  Start  End     Size    File system     Flags
     1      0.00B  2499MB  2499MB  linux-swap(v1)
    
    
    Model: Unknown (unknown)
    Disk /dev/zram1: 2499MB
    Sector size (logical/physical): 4096B/4096B
    Partition Table: loop
    Disk Flags:
    
    Number  Start  End     Size    File system     Flags
     1      0.00B  2499MB  2499MB  linux-swap(v1)
    
    
    Model: Unknown (unknown)
    Disk /dev/zram2: 2499MB
    Sector size (logical/physical): 4096B/4096B
    Partition Table: loop
    Disk Flags:
    
    Number  Start  End     Size    File system     Flags
     1      0.00B  2499MB  2499MB  linux-swap(v1)
    
    
    Model: Unknown (unknown)
    Disk /dev/zram3: 2499MB
    Sector size (logical/physical): 4096B/4096B
    Partition Table: loop
    Disk Flags:
    
    Number  Start  End     Size    File system     Flags
     1      0.00B  2499MB  2499MB  linux-swap(v1)
    
    
    Model: SanDisk Cruzer Fit (scsi)
    Disk /dev/synoboot: 8003MB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system  Name    Flags
     1      1049kB  16.8MB  15.7MB  fat16        boot    boot, esp
     2      16.8MB  48.2MB  31.5MB  fat16        image
     3      48.2MB  52.4MB  4177kB               legacy  bios_grub

     

    :~# mdadm --detail /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Fri Nov 29 14:05:37 2019
         Raid Level : raid5
         Array Size : 11706589632 (11164.27 GiB 11987.55 GB)
      Used Dev Size : 3902196544 (3721.42 GiB 3995.85 GB)
       Raid Devices : 4
      Total Devices : 4
        Persistence : Superblock is persistent
    
        Update Time : Mon Dec  2 23:52:06 2019
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 64K
    
               Name : LiquidXPe:2  (local to host LiquidXPe)
               UUID : 2e3bde16:7a255483:e4de0929:70dc3562
             Events : 137
    
        Number   Major   Minor   RaidDevice State
           4       8       83        0      active sync   /dev/sdf3
           1       8        3        1      active sync   /dev/sda3
           2       8       67        2      active sync   /dev/sde3
           3       8       19        3      active sync   /dev/sdb3

     

    :~# cat /etc/lvm/backup/vg1000
    # Generated by LVM2 version 2.02.38 (2008-06-11): Sun Sep 25 16:25:42 2016
    
    contents = "Text Format Volume Group"
    version = 1
    
    description = "Created *after* executing '/sbin/lvextend --alloc inherit /dev/vg1000/lv -l100%VG'"
    
    creation_host = "LiquidXPe"     # Linux LiquidXPe 3.10.35 #1 SMP Sat Dec 12 17:01:14 MSK 2015 x86_64
    creation_time = 1474838742      # Sun Sep 25 16:25:42 2016
    
    vg1000 {
            id = "dJc33I-psOe-q3Nu-Qdt6-lKUr-KGB3-gHOdGz"
            seqno = 19
            status = ["RESIZEABLE", "READ", "WRITE"]
            extent_size = 8192              # 4 Megabytes
            max_lv = 0
            max_pv = 0
    
            physical_volumes {
    
                    pv0 {
                            id = "h448fL-VaTW-5n9w-W7FY-Gb4O-50Jb-l0ADjn"
                            device = "/dev/md2"     # Hint only
    
                            status = ["ALLOCATABLE"]
                            dev_size = 19438624128  # 9.05181 Terabytes
                            pe_start = 1152
                            pe_count = 2372878      # 9.05181 Terabytes
                    }
    
                    pv1 {
                            id = "Ppyi69-5Osn-gJtL-MTxB-aGAd-cLYJ-7hy199"
                            device = "/dev/md3"     # Hint only
    
                            status = ["ALLOCATABLE"]
                            dev_size = 17581371264  # 8.18696 Terabytes
                            pe_start = 1152
                            pe_count = 2146163      # 8.18696 Terabytes
                    }
    
                    pv2 {
                            id = "8NeE7P-Bmf5-ErdT-zZKB-jMJ3-LspS-9C3uLg"
                            device = "/dev/md4"     # Hint only
    
                            status = ["ALLOCATABLE"]
                            dev_size = 1953484672   # 931.494 Gigabytes
                            pe_start = 1152
                            pe_count = 238462       # 931.492 Gigabytes
                    }
    
                    pv3 {
                            id = "RM205l-f2bw-BBbm-OYyg-sKK8-VHRv-4Mv9OX"
                            device = "/dev/md5"     # Hint only
    
                            status = ["ALLOCATABLE"]
                            dev_size = 9767427968   # 4.54831 Terabytes
                            pe_start = 1152
                            pe_count = 1192312      # 4.54831 Terabytes
                    }
            }
    
            logical_volumes {
    
                    lv {
                            id = "g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5"
                            status = ["READ", "WRITE", "VISIBLE"]
                            segment_count = 8
    
                            segment1 {
                                    start_extent = 0
                                    extent_count = 237287   # 926.902 Gigabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv0", 0
                                    ]
                            }
                            segment2 {
                                    start_extent = 237287
                                    extent_count = 715387   # 2.72898 Terabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv1", 0
                                    ]
                            }
                            segment3 {
                                    start_extent = 952674
                                    extent_count = 949152   # 3.62073 Terabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv0", 237287
                                    ]
                            }
                            segment4 {
                                    start_extent = 1901826
                                    extent_count = 238463   # 931.496 Gigabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv1", 715387
                                    ]
                            }
                            segment5 {
                                    start_extent = 2140289
                                    extent_count = 238462   # 931.492 Gigabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv2", 0
                                    ]
                            }
                            segment6 {
                                    start_extent = 2378751
                                    extent_count = 1192312  # 4.54831 Terabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv3", 0
                                    ]
                            }
                            segment7 {
                                    start_extent = 3571063
                                    extent_count = 1192313  # 4.54831 Terabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv1", 953850
                                    ]
                            }
                            segment8 {
                                    start_extent = 4763376
                                    extent_count = 1186439  # 4.52591 Terabytes
    
                                    type = "striped"
                                    stripe_count = 1        # linear
    
                                    stripes = [
                                            "pv0", 1186439
                                    ]
                            }
                    }
            }
    }

    The LVM backup data all seems to be old, from 2016, and I have rebuilt both volumes since then.  I used to be on SHR but moved to a plain RAID setup.
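
    Since the lv/vg commands show nothing and e2fsck cannot find a superblock, one thing I can still do is peek at the start of /dev/md2 and see whether any recognizable signature is left at all: "LABELONE" in the second 512-byte sector would mean an LVM PV, while a plain ext4 filesystem would show the magic bytes 53 ef at offset 0x438.  Assuming hexdump is available on DSM (od works too):

    dd if=/dev/md2 bs=512 count=9 2>/dev/null | hexdump -C | less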

     

     

    Again, any help would be greatly appreciated.

  8. Here is some more info.  It seems something is up with the superblocks; any idea how to fix it?

     

    LiquidXPe> mdadm --assemble --scan --verbose
    mdadm: looking for devices for further assembly
    mdadm: cannot open device /dev/sdu1: Device or resource busy
    mdadm: cannot open device /dev/sdu: Device or resource busy
    mdadm: no recogniseable superblock on /dev/dm-0
    mdadm: cannot open device /dev/md5: Device or resource busy
    mdadm: cannot open device /dev/md3: Device or resource busy
    mdadm: cannot open device /dev/md2: Device or resource busy
    mdadm: cannot open device /dev/md4: Device or resource busy
    mdadm: cannot open device /dev/zram3: Device or resource busy
    mdadm: cannot open device /dev/zram2: Device or resource busy
    mdadm: cannot open device /dev/zram1: Device or resource busy
    mdadm: cannot open device /dev/zram0: Device or resource busy
    mdadm: cannot open device /dev/md1: Device or resource busy
    mdadm: cannot open device /dev/md0: Device or resource busy
    mdadm: cannot open device /dev/sdh7: Device or resource busy
    mdadm: cannot open device /dev/sdh6: Device or resource busy
    mdadm: cannot open device /dev/sdh5: Device or resource busy
    mdadm: cannot open device /dev/sdh2: Device or resource busy
    mdadm: cannot open device /dev/sdh1: Device or resource busy
    mdadm: cannot open device /dev/sdh: Device or resource busy
    mdadm: cannot open device /dev/sdi7: Device or resource busy
    mdadm: cannot open device /dev/sdi6: Device or resource busy
    mdadm: cannot open device /dev/sdi5: Device or resource busy
    mdadm: cannot open device /dev/sdi2: Device or resource busy
    mdadm: cannot open device /dev/sdi1: Device or resource busy
    mdadm: cannot open device /dev/sdi: Device or resource busy
    mdadm: cannot open device /dev/sdl5: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sdl3
    mdadm: cannot open device /dev/sdl2: Device or resource busy
    mdadm: cannot open device /dev/sdl1: Device or resource busy
    mdadm: cannot open device /dev/sdl: Device or resource busy
    mdadm: cannot open device /dev/sdj6: Device or resource busy
    mdadm: cannot open device /dev/sdj5: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sdj3
    mdadm: cannot open device /dev/sdj2: Device or resource busy
    mdadm: cannot open device /dev/sdj1: Device or resource busy
    mdadm: cannot open device /dev/sdj: Device or resource busy
    mdadm: cannot open device /dev/sdg6: Device or resource busy
    mdadm: cannot open device /dev/sdg5: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sdg3
    mdadm: cannot open device /dev/sdg2: Device or resource busy
    mdadm: cannot open device /dev/sdg1: Device or resource busy
    mdadm: cannot open device /dev/sdg: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sdk8
    mdadm: cannot open device /dev/sdk7: Device or resource busy
    mdadm: cannot open device /dev/sdk6: Device or resource busy
    mdadm: cannot open device /dev/sdk5: Device or resource busy
    mdadm: cannot open device /dev/sdk2: Device or resource busy
    mdadm: cannot open device /dev/sdk1: Device or resource busy
    mdadm: cannot open device /dev/sdk: Device or resource busy
    mdadm: cannot open device /dev/sdf8: Device or resource busy
    mdadm: cannot open device /dev/sdf7: Device or resource busy
    mdadm: cannot open device /dev/sdf6: Device or resource busy
    mdadm: cannot open device /dev/sdf5: Device or resource busy
    mdadm: cannot open device /dev/sdf2: Device or resource busy
    mdadm: cannot open device /dev/sdf1: Device or resource busy
    mdadm: cannot open device /dev/sdf: Device or resource busy
    mdadm: cannot open device /dev/sde8: Device or resource busy
    mdadm: cannot open device /dev/sde7: Device or resource busy
    mdadm: cannot open device /dev/sde6: Device or resource busy
    mdadm: cannot open device /dev/sde5: Device or resource busy
    mdadm: cannot open device /dev/sde2: Device or resource busy
    mdadm: cannot open device /dev/sde1: Device or resource busy
    mdadm: cannot open device /dev/sde: Device or resource busy
    mdadm: cannot open device /dev/sdd6: Device or resource busy
    mdadm: cannot open device /dev/sdd5: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sdd3
    mdadm: cannot open device /dev/sdd2: Device or resource busy
    mdadm: cannot open device /dev/sdd1: Device or resource busy
    mdadm: cannot open device /dev/sdd: Device or resource busy
    mdadm: cannot open device /dev/sdc6: Device or resource busy
    mdadm: cannot open device /dev/sdc5: Device or resource busy
    mdadm: no recogniseable superblock on /dev/sdc3
    mdadm: cannot open device /dev/sdc2: Device or resource busy
    mdadm: cannot open device /dev/sdc1: Device or resource busy
    mdadm: cannot open device /dev/sdc: Device or resource busy
    mdadm: cannot open device /dev/sdb7: Device or resource busy
    mdadm: cannot open device /dev/sdb6: Device or resource busy
    mdadm: cannot open device /dev/sdb5: Device or resource busy
    mdadm: cannot open device /dev/sdb2: Device or resource busy
    mdadm: cannot open device /dev/sdb1: Device or resource busy
    mdadm: cannot open device /dev/sdb: Device or resource busy
    mdadm: No arrays found in config file or automatically
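
    For what it is worth, the individual member partitions can also be inspected directly; comparing one of the partitions flagged above against one that is still assembled should show whether the md metadata is really gone or just not being picked up.  A rough example using devices from my box:

    mdadm --examine /dev/sdb5    # a member that is still assembled
    mdadm --examine /dev/sdc3    # one of the partitions reporting no superblock above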

     

  9. Thanks for the reply.  I am running SHR on it.  Unfortunately I have not had much luck yet.  As far as I got today: after adding all the partitions from the new drive back into each /dev/mdX and letting each rebuild complete, I got a "Healthy" at the top right under System Health in the GUI, but the volume still says crashed.  When I reboot, the new drive goes back to being an available spare; looking at mdadm --detail for /dev/md0 and md1, the partitions from the new drive are in them, but the new drive's sdk5, sdk6 and sdk7 are not in md2, md3 and md5.  I can add them back in (the commands are shown a bit further down), but as soon as I reboot they fall back out again.  When I run

     

    fsck.ext4 -pvf /dev/md3

     I get

    fsck.ext4: Bad magic number in super-block while trying to open /dev/md3
    /dev/md3:
    The superblock could not be read or does not describe a correct ext2
    filesystem.  If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
        e2fsck -b 8193 <device>

     

    I did some research on the error, found the backup superblock locations, and tried a few of them, but I get the same error.
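
    For reference, re-adding the new drive's partitions after a reboot looks roughly like this (device names are from my box):

    mdadm --manage /dev/md2 --add /dev/sdk5
    mdadm --manage /dev/md3 --add /dev/sdk6
    mdadm --manage /dev/md5 --add /dev/sdk7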

     

    I have yet to try fixing the issue from a live Ubuntu image.

     

    Below are my details

    LiquidXPe> cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md5 : active raid5 sdb7[0] sdi7[4] sdh7[3] sdf7[2] sde7[1]
          4883714560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
    md3 : active raid5 sdk6[10] sdb6[0] sde6[6] sdf6[7] sdh6[8] sdi6[9] sdj6[5] sdg6[4] sdd6[2] sdc6[1]
          8790686208 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/9] [UUUUU_UUUU]
          [>....................]  recovery =  0.0% (646784/976742912) finish=150.9min speed=107797K/sec
    md2 : active raid5 sdb5[2] sde5[8] sdf5[9] sdh5[10] sdi5[11] sdg5[7] sdc5[4] sdd5[5] sdj5[6] sdl5[3]
          9719312640 blocks super 1.2 level 5, 64k chunk, algorithm 2 [11/10] [UUUUUU_UUUU]
    md4 : active raid1 sde8[0] sdf8[1]
          976742912 blocks super 1.2 [2/2] [UU]
    md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3] sdf2[4] sdg2[5] sdh2[6] sdi2[7] sdj2[8] sdk2[10] sdl2[9]
          2097088 blocks [11/11] [UUUUUUUUUUU]
    md0 : active raid1 sdb1[1] sdc1[9] sdd1[8] sde1[0] sdf1[7] sdg1[3] sdh1[10] sdi1[6] sdj1[5] sdk1[2] sdl1[4]
          2490176 blocks [11/11] [UUUUUUUUUUU]
    
    unused devices: <none>
    ~ # lvm vgdisplay
      --- Volume group ---
      VG Name               vg1000
      System ID
      Format                lvm2
      Metadata Areas        4
      Metadata Sequence No  19
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                4
      Act PV                4
      VG Size               22.70 TB
      PE Size               4.00 MB
      Total PE              5949815
      Alloc PE / Size       5949815 / 22.70 TB
      Free  PE / Size       0 / 0
      VG UUID               dJc33I-psOe-q3Nu-Qdt6-lKUr-KGB3-gHOdGz
    ~ # lvm lvdisplay
      --- Logical volume ---
      LV Name                /dev/vg1000/lv
      VG Name                vg1000
      LV UUID                g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                22.70 TB
      Current LE             5949815
      Segments               8
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     4096
      Block device           253:0
    LiquidXPe> mdadm --detail /dev/md*
    /dev/md0:
            Version : 0.90
      Creation Time : Fri Dec 31 18:00:05 1999
         Raid Level : raid1
         Array Size : 2490176 (2.37 GiB 2.55 GB)
      Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
       Raid Devices : 11
      Total Devices : 11
    Preferred Minor : 0
        Persistence : Superblock is persistent
        Update Time : Thu Jul 27 20:49:01 2017
              State : clean
     Active Devices : 11
    Working Devices : 11
     Failed Devices : 0
      Spare Devices : 0
               UUID : 249cd984:e79b4c51:3017a5a8:c86610be
             Events : 0.15252995
        Number   Major   Minor   RaidDevice State
           0       8       65        0      active sync   /dev/sde1
           1       8       17        1      active sync   /dev/sdb1
           2       8      161        2      active sync   /dev/sdk1
           3       8       97        3      active sync   /dev/sdg1
           4       8      177        4      active sync   /dev/sdl1
           5       8      145        5      active sync   /dev/sdj1
           6       8      129        6      active sync   /dev/sdi1
           7       8       81        7      active sync   /dev/sdf1
           8       8       49        8      active sync   /dev/sdd1
           9       8       33        9      active sync   /dev/sdc1
          10       8      113       10      active sync   /dev/sdh1
    /dev/md1:
            Version : 0.90
      Creation Time : Wed Jul 26 20:10:31 2017
         Raid Level : raid1
         Array Size : 2097088 (2048.28 MiB 2147.42 MB)
      Used Dev Size : 2097088 (2048.28 MiB 2147.42 MB)
       Raid Devices : 11
      Total Devices : 11
    Preferred Minor : 1
        Persistence : Superblock is persistent
        Update Time : Thu Jul 27 18:02:00 2017
              State : clean
     Active Devices : 11
    Working Devices : 11
     Failed Devices : 0
      Spare Devices : 0
               UUID : 7f14e91e:ed74d57f:9b23d1f3:72b7d250 (local to host LiquidXPe)
             Events : 0.40
        Number   Major   Minor   RaidDevice State
           0       8       18        0      active sync   /dev/sdb2
           1       8       34        1      active sync   /dev/sdc2
           2       8       50        2      active sync   /dev/sdd2
           3       8       66        3      active sync   /dev/sde2
           4       8       82        4      active sync   /dev/sdf2
           5       8       98        5      active sync   /dev/sdg2
           6       8      114        6      active sync   /dev/sdh2
           7       8      130        7      active sync   /dev/sdi2
           8       8      146        8      active sync   /dev/sdj2
           9       8      178        9      active sync   /dev/sdl2
          10       8      162       10      active sync   /dev/sdk2
    /dev/md2:
            Version : 1.2
      Creation Time : Thu Dec 17 10:21:31 2015
         Raid Level : raid5
         Array Size : 9719312640 (9269.06 GiB 9952.58 GB)
      Used Dev Size : 971931264 (926.91 GiB 995.26 GB)
       Raid Devices : 11
      Total Devices : 11
        Persistence : Superblock is persistent
        Update Time : Thu Jul 27 18:07:06 2017
              State : clean, degraded
     Active Devices : 10
    Working Devices : 11
     Failed Devices : 0
      Spare Devices : 1
             Layout : left-symmetric
         Chunk Size : 64K
               Name : XPenology:2
               UUID : c88d79ed:5575471a:d6d4e7aa:282ecf4c
             Events : 7316850
        Number   Major   Minor   RaidDevice State
           2       8       21        0      active sync   /dev/sdb5
           3       8      181        1      active sync   /dev/sdl5
           6       8      149        2      active sync   /dev/sdj5
           5       8       53        3      active sync   /dev/sdd5
           4       8       37        4      active sync   /dev/sdc5
           7       8      101        5      active sync   /dev/sdg5
          12       8      165        6      spare rebuilding   /dev/sdk5
          11       8      133        7      active sync   /dev/sdi5
          10       8      117        8      active sync   /dev/sdh5
           9       8       85        9      active sync   /dev/sdf5
           8       8       69       10      active sync   /dev/sde5
    /dev/md3:
            Version : 1.2
      Creation Time : Thu Aug 11 17:02:44 2016
         Raid Level : raid5
         Array Size : 8790686208 (8383.45 GiB 9001.66 GB)
      Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
       Raid Devices : 10
      Total Devices : 10
        Persistence : Superblock is persistent
        Update Time : Thu Jul 27 20:46:56 2017
              State : clean, degraded, recovering
     Active Devices : 9
    Working Devices : 10
     Failed Devices : 0
      Spare Devices : 1
             Layout : left-symmetric
         Chunk Size : 64K
     Rebuild Status : 75% complete
               Name : XPenology:3
               UUID : 3cef14a9:214bd5de:c71c244c:e59eb342
             Events : 2504149
        Number   Major   Minor   RaidDevice State
           0       8       22        0      active sync   /dev/sdb6
           1       8       38        1      active sync   /dev/sdc6
           2       8       54        2      active sync   /dev/sdd6
           4       8      102        3      active sync   /dev/sdg6
           5       8      150        4      active sync   /dev/sdj6
          10       8      166        5      spare rebuilding   /dev/sdk6
           9       8      134        6      active sync   /dev/sdi6
           8       8      118        7      active sync   /dev/sdh6
           7       8       86        8      active sync   /dev/sdf6
           6       8       70        9      active sync   /dev/sde6
    /dev/md4:
            Version : 1.2
      Creation Time : Sat Sep 24 22:30:44 2016
         Raid Level : raid1
         Array Size : 976742912 (931.49 GiB 1000.18 GB)
      Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
        Update Time : Wed Jul 26 22:32:04 2017
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
               Name : LiquidXPe:4  (local to host LiquidXPe)
               UUID : d3a426d3:fafd9c0a:e0393702:79750b47
             Events : 2
        Number   Major   Minor   RaidDevice State
           0       8       72        0      active sync   /dev/sde8
           1       8       88        1      active sync   /dev/sdf8
    /dev/md5:
            Version : 1.2
      Creation Time : Sat Sep 24 22:30:45 2016
         Raid Level : raid5
         Array Size : 4883714560 (4657.47 GiB 5000.92 GB)
      Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
       Raid Devices : 6
      Total Devices : 6
        Persistence : Superblock is persistent
        Update Time : Thu Jul 27 18:07:23 2017
              State : clean, degraded
     Active Devices : 5
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 1
             Layout : left-symmetric
         Chunk Size : 64K
               Name : LiquidXPe:5  (local to host LiquidXPe)
               UUID : cec93e77:b4134947:fea5cfba:eee99979
             Events : 2402456
        Number   Major   Minor   RaidDevice State
           0       8       23        0      active sync   /dev/sdb7
           1       8       71        1      active sync   /dev/sde7
           2       8       87        2      active sync   /dev/sdf7
           3       8      119        3      active sync   /dev/sdh7
           4       8      135        4      active sync   /dev/sdi7
           6       8      167        5      spare rebuilding   /dev/sdk7

     

    LiquidXPe> sfdisk -l /dev/sd*
    /dev/sdb1                  2048         4982527         4980480  fd
    /dev/sdb2               4982528         9176831         4194304  fd
    /dev/sdb5               9453280      1953318239      1943864960  fd
    /dev/sdb6            1953334336      3906822239      1953487904  fd
    /dev/sdb7            3906838336      5860326239      1953487904  fd
    
    [/dev/sdb1] is a partition
    [/dev/sdb2] is a partition
    [/dev/sdb5] is a partition
    [/dev/sdb6] is a partition
    [/dev/sdb7] is a partition
    /dev/sdc1                  2048         4982527         4980480  fd
    /dev/sdc2               4982528         9176831         4194304  fd
    /dev/sdc3               9437184      3907015007      3897577824   f
    /dev/sdc5               9453280      1953318239      1943864960  fd
    /dev/sdc6            1953334336      3906822239      1953487904  fd
    
    [/dev/sdc1] is a partition
    [/dev/sdc2] is a partition
    [/dev/sdc3] is a partition
    [/dev/sdc5] is a partition
    [/dev/sdc6] is a partition
    /dev/sdd1                  2048         4982527         4980480  fd
    /dev/sdd2               4982528         9176831         4194304  fd
    /dev/sdd3               9437184      3907015007      3897577824   f
    /dev/sdd5               9453280      1953318239      1943864960  fd
    /dev/sdd6            1953334336      3906822239      1953487904  fd
    
    [/dev/sdd1] is a partition
    [/dev/sdd2] is a partition
    [/dev/sdd3] is a partition
    [/dev/sdd5] is a partition
    [/dev/sdd6] is a partition
    /dev/sde1                  2048         4982527         4980480  fd
    /dev/sde2               4982528         9176831         4194304  fd
    /dev/sde5               9453280      1953318239      1943864960  fd
    /dev/sde6            1953334336      3906822239      1953487904  fd
    /dev/sde7            3906838336      5860326239      1953487904  fd
    /dev/sde8            5860342336      7813830239      1953487904  fd
    
    [/dev/sde1] is a partition
    [/dev/sde2] is a partition
    [/dev/sde5] is a partition
    [/dev/sde6] is a partition
    [/dev/sde7] is a partition
    [/dev/sde8] is a partition
    /dev/sdf1                  2048         4982527         4980480  fd
    /dev/sdf2               4982528         9176831         4194304  fd
    /dev/sdf5               9453280      1953318239      1943864960  fd
    /dev/sdf6            1953334336      3906822239      1953487904  fd
    /dev/sdf7            3906838336      5860326239      1953487904  fd
    /dev/sdf8            5860342336      7813830239      1953487904  fd
    
    [/dev/sdf1] is a partition
    [/dev/sdf2] is a partition
    [/dev/sdf5] is a partition
    [/dev/sdf6] is a partition
    [/dev/sdf7] is a partition
    [/dev/sdf8] is a partition
    /dev/sdg1                  2048         4982527         4980480  fd
    /dev/sdg2               4982528         9176831         4194304  fd
    /dev/sdg3               9437184      3907015007      3897577824   f
    /dev/sdg5               9453280      1953318239      1943864960  fd
    /dev/sdg6            1953334336      3906822239      1953487904  fd
    
    [/dev/sdg1] is a partition
    [/dev/sdg2] is a partition
    [/dev/sdg3] is a partition
    [/dev/sdg5] is a partition
    [/dev/sdg6] is a partition
    /dev/sdh1                  2048         4982527         4980480  fd
    /dev/sdh2               4982528         9176831         4194304  fd
    /dev/sdh5               9453280      1953318239      1943864960  fd
    /dev/sdh6            1953334336      3906822239      1953487904  fd
    /dev/sdh7            3906838336      5860326239      1953487904  fd
    
    [/dev/sdh1] is a partition
    [/dev/sdh2] is a partition
    [/dev/sdh5] is a partition
    [/dev/sdh6] is a partition
    [/dev/sdh7] is a partition
    /dev/sdi1                  2048         4982527         4980480  fd
    /dev/sdi2               4982528         9176831         4194304  fd
    /dev/sdi5               9453280      1953318239      1943864960  fd
    /dev/sdi6            1953334336      3906822239      1953487904  fd
    /dev/sdi7            3906838336      5860326239      1953487904  fd
    
    [/dev/sdi1] is a partition
    [/dev/sdi2] is a partition
    [/dev/sdi5] is a partition
    [/dev/sdi6] is a partition
    [/dev/sdi7] is a partition
    /dev/sdj1                  2048         4982527         4980480  fd
    /dev/sdj2               4982528         9176831         4194304  fd
    /dev/sdj3               9437184      3907015007      3897577824   f
    /dev/sdj5               9453280      1953318239      1943864960  fd
    /dev/sdj6            1953334336      3906822239      1953487904  fd
    
    [/dev/sdj1] is a partition
    [/dev/sdj2] is a partition
    [/dev/sdj3] is a partition
    [/dev/sdj5] is a partition
    [/dev/sdj6] is a partition
    /dev/sdk1                  2048         4982783         4980736  fd
    /dev/sdk2               4982784         9177087         4194304  fd
    /dev/sdk5               9451520      1953318239      1943866720  fd
    /dev/sdk6            1953333248      3906822239      1953488992  fd
    /dev/sdk7            3906836480      5860326239      1953489760  fd
    /dev/sdk8            5860341760      7813830239      1953488480  fd
    
    [/dev/sdk1] is a partition
    [/dev/sdk2] is a partition
    [/dev/sdk5] is a partition
    [/dev/sdk6] is a partition
    [/dev/sdk7] is a partition
    [/dev/sdk8] is a partition
    /dev/sdl1                  2048         4982527         4980480  fd
    /dev/sdl2               4982528         9176831         4194304  fd
    /dev/sdl3               9437184      1953511007      1944073824   f
    /dev/sdl5               9453280      1953318239      1943864960  fd
    
    [/dev/sdl1] is a partition
    [/dev/sdl2] is a partition
    [/dev/sdl3] is a partition
    [/dev/sdl5] is a partition
    /dev/sdu1                    63           49151           49089   e
    
    [/dev/sdu1] is a partition

     

    I partitioned the new drive as /dev/sdk because, looking at the last known good config under /etc/space, sdk was the only one missing.

    LiquidXPe> cat /etc/space/space_history_20170311_233820.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <spaces>
            <space path="/dev/vg1000/lv" reference="/volume1" uuid="g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5" device_type="1" container_type="2">
                    <device>
                            <lvm path="/dev/vg1000" uuid="dJc33I-psOe-q3Nu-Qdt6-lKUr-KGB3-gHOdGz" designed_pv_counts="4" status="normal" total_size="24955332853760" free_size="0" pe_size="4194304" expansible="0" max_size="24370456320">
                                    <raids>
                                            <raid path="/dev/md5" uuid="cec93e77:b4134947:fea5cfba:eee99979" level="raid5" version="1.2">
                                                    <disks>
                                                            <disk status="normal" dev_path="/dev/sdb7" model="WD30EZRX-00MMMB0        " serial="WD-WCAWZ1343231" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="0">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sde7" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="1">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdf7" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="2">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdh7" model="ST3000DM001-1E6166      " serial="" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="3">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdi7" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="4">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdk7" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="3906838336" partition_size="1953487904" slot="5">
                                                            </disk>
                                                    </disks>
                                            </raid>
                                            <raid path="/dev/md2" uuid="c88d79ed:5575471a:d6d4e7aa:282ecf4c" level="raid5" version="1.2">
                                                    <disks>
                                                            <disk status="normal" dev_path="/dev/sdb5" model="WD30EZRX-00MMMB0        " serial="WD-WCAWZ1343231" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="0">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdc5" model="WD2003FYYS-02W0B1       " serial="WD-WMAY04632022" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="4">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdd5" model="WD2003FYYS-02W0B0       " serial="WD-WMAY02585893" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="3">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sde5" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="10">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdf5" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="9">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdg5" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="5">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdh5" model="ST3000DM001-1E6166      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="8">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdi5" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="7">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdj5" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="2">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdk5" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="6">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdl5" model="ST1000DM003-1CH162      " serial="" partition_version="8" partition_start="9453280" partition_size="1943864960" slot="1">
                                                            </disk>
                                                    </disks>
                                            </raid>
                                            <raid path="/dev/md3" uuid="3cef14a9:214bd5de:c71c244c:e59eb342" level="raid5" version="1.2">
                                                    <disks>
                                                            <disk status="normal" dev_path="/dev/sdb6" model="WD30EZRX-00MMMB0        " serial="WD-WCAWZ1343231" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="0">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdc6" model="WD2003FYYS-02W0B1       " serial="WD-WMAY04632022" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="1">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdd6" model="WD2003FYYS-02W0B0       " serial="WD-WMAY02585893" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="2">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sde6" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="9">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdf6" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="8">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdg6" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="3">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdh6" model="ST3000DM001-1E6166      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="7">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdi6" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="6">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdj6" model="ST2000DM001-1CH164      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="4">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdk6" model="ST3000DM001-1CH166      " serial="" partition_version="8" partition_start="1953334336" partition_size="1953487904" slot="5">
                                                            </disk>
                                                    </disks>
                                            </raid>
                                            <raid path="/dev/md4" uuid="d3a426d3:fafd9c0a:e0393702:79750b47" level="raid1" version="1.2">
                                                    <disks>
                                                            <disk status="normal" dev_path="/dev/sde8" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E7ZAL0ZX" partition_version="8" partition_start="5860342336" partition_size="1953487904" slot="0">
                                                            </disk>
                                                            <disk status="normal" dev_path="/dev/sdf8" model="WD40EZRX-00SPEB0        " serial="WD-WCC4E2LRRLFA" partition_version="8" partition_start="5860342336" partition_size="1953487904" slot="1">
                                                            </disk>
                                                    </disks>
                                            </raid>
                                    </raids>
                            </lvm>
                    </device>
                    <reference>
                            <volumes>
                                    <volume path="/volume1" dev_path="/dev/vg1000/lv" uuid="g5hc5i-t2eR-Wj1v-MTwg-3EHX-APQe-sDLOe5">
                                    </volume>
                            </volumes>
                    </reference>
            </space>
    </spaces>

     

    I know it's a lot of data, but I would much appreciate anyone willing to take a look at my issue.

  10. So I wanted to post this here as I have spent 3 days trying to fix my volume.  

    I am running xpenology on a JBOD nas with 11 drives.

    DS3615xs  DSM 5.2-5644 

    So, back a few months ago I had a drive go bad and the volume went into degraded mode.  I failed to replace the bad drive at the time because the volume still worked.  A few days ago I had a power outage and the NAS came back up as crashed.  I searched many Google pages on what to do to fix it and nothing worked.  The bad drive was not recoverable at all.  I am no Linux guru, but I had similar issues before on this NAS with other drives, so I tried to focus on mdadm commands.  The problem was that I could not copy any data over from the old drive.

    I found a post here https://forum.synology.com/enu/viewtopic.php?f=39&t=102148#p387357 that talked about finding the last known configs of the md RAIDs.  I was able to determine that the bad drive was /dev/sdk.  After trying fdisk and gparted, and realizing I could not use gdisk since it is not native in XPEnology and my drive was 4TB and GPT, I plugged the drive into a USB hard drive bay on a separate Linux machine.  There I was able to use another working 4TB drive and copy its partition table almost identically using gdisk.  Don't try to do this on Windows; I did not find a worthy tool to partition it correctly.

    After validating my partition numbers, start/end sizes and file system type (FD00), I stuck the drive back in my NAS.  I was able to do mdadm --manage /dev/md3 --add /dev/sdk6 (and the same for the other partitions), and as soon as they showed under cat /proc/mdstat I could see the RAIDs rebuilding.  I have 22TB of space and the bad drive was a member of md2, md3 and md5, so it will take a while.  I am hoping my volume comes back up after they are done.
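
    As a side note, if sgdisk is available on the Linux machine, copying the partition table from a known-good drive of the same size can be scripted instead of recreating it by hand in gdisk.  A rough sketch, with sdX as the good source drive and sdY as the replacement (the last step randomizes the GUIDs so the two disks do not end up with identical GPT identifiers):

    sgdisk --backup=table.bin /dev/sdX      # save the good drive's GPT to a file
    sgdisk --load-backup=table.bin /dev/sdY # write that layout onto the replacement
    sgdisk -G /dev/sdY                      # randomize disk and partition GUIDs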

  11. All right, so here is where I am at.  I decided to attach the drives to the 5644 installation.  On first boot-up, DSM came up and complained about degraded volumes and wanted to rescan and re-import, I am guessing because the drives sit in different SATA slots on the new board than on the old board.  I rebooted, and after that DSM would not come online, but I could SSH in, and from a few commands I noticed I still had all my volumes, even though they are degraded.  It took about 30 minutes before DSM would let me log in.  I now see only 2 drives missing, and I think a reboot can fix that.  I am now able to see all my shares from my network, and my NFS VMware share is back online.

  12. So here are my thoughts.  I have disabled all unnecessary hardware on the new motherboard.  Still no good.  I just took a new USB stick and an extra HD and installed 5644 perfectly fine.

     

    Looking at settings on the old mb compared to new

     

    Old MB: was not UEFI, SATA was set to Legacy rather than Native, and it was using the onboard NIC for management.

     

    New MB: is UEFI, and the only options for SATA are AHCI or RAID, no Legacy.  Also, the old install does not detect the new MB's onboard NIC.

     

    Since I have 5644 running, if I power off and plug all the hard drives in to the new install, is it possible to import the data?

  13. Well, I swapped out my motherboard, processor and memory, and my NAS will not come back online.  I am able to ping the IP and I can log in as root on the console, but the web admin and SSH are not operating.  Before I get too crazy with this, does anyone know some console commands I can try to get things working?  If not, I will have to put it back on the original MB combo.

     

    Forgot to add that XPEnoboot does come up at boot.

  14. Team, I have been using XPEnology for some time now, the 5565 version.  It is very stable and great.  I have around 18TB of usable storage on it.  My question is: can I change out the motherboard and CPU and still keep all my settings?
