XPEnology Community

Recommended Posts

Posted

I need help recovering a crashed SHR pool. Following a power outage and a subsequent drive issue, the pool is reported as crashed, and the DSM web interface is inaccessible when all drives are installed. I still have SSH access.

 

System Specifications
DSM Version: 7.1.1-42962 Update 4
RAID Type: SHR with 1-drive fault tolerance
Drive Configuration: 12 drives total

 

Sequence of Events
    1. A brief power outage caused one drive to show as corrupted.
    2. I used Storage Manager to deactivate the corrupted drive and then rebooted the NAS, intending to add the drive back as I normally do in this situation.
    3. I had to step away for the weekend. Upon my return, the pool was reported as "crashed."
    4. All Docker services relying on the volume stopped, and the shared folders on it were no longer visible in File Station.

 

Current Status & Symptoms
    • Web Interface (DSM): I get an "ERR_CONNECTION_FAILED" error when trying to access the web UI with all drives connected.
    • SSH Access: I can still connect to the NAS via SSH without any issues.
    • Temporary Workaround: If I physically disconnect one of the drives from the storage pool, the DSM web interface loads successfully; however, the volume is obviously not present. Reconnecting the drive causes the web UI to become inaccessible again.


Goal
My primary goal is to recover the volume. If that's not possible, I need to copy the data from the crashed volume to a safe location.


I've gathered outputs from several commands over SSH based on other support threads, but I'm concerned about running any command that could make the situation worse. I'm looking for expert guidance on the safest way to proceed with recovery. Any and all advice is welcome.
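For reference, these are the additional read-only checks I'm planning to run first, since none of them should change any on-disk state (this assumes smartctl is available on this DSM build, and the device names are just my current mapping):

smartctl -d sat -a /dev/sdaq        # SMART health of the drive flagged (E); plain "smartctl -a" if -d sat isn't needed
mdadm --detail /dev/md4             # array-level view of the degraded pool
mdadm --examine /dev/sdaq5          # per-member superblock of the suspect partition
dmesg | tail -100                   # recent kernel messages (I/O errors, link resets)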

 

Quote

ash-4.4# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1] 
md4 : active raid5 sdi5[3] sdam5[12] sdao5[13] sdan5[11] sdaq5[10](E) sdh5[9] sdap5[7] sdk5[6] sdg5[5] sdl5[4] sdj5[2]
      107377588736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUU_EUUU]
      
md3 : active raid1 sde3[0]
      966038208 blocks super 1.2 [1/1] [U]
      
md2 : active raid1 sdb3[0]
      477662208 blocks super 1.2 [1/1] [U]
      
md1 : active raid1 sdb2[5] sde2[7]
      2097088 blocks [16/2] [_____U_U________]
      
md0 : active raid1 sdb1[5] sde1[7]
      2490176 blocks [16/2] [_____U_U________]
      
unused devices: <none>

 

Quote

 ash-4.4# ls /dev/sd*
/dev/sdam   /dev/sdan1    /dev/sdao2  /dev/sdap5    /dev/sdar   /dev/sdb1  /dev/sde2  /dev/sdg5  /dev/sdi    /dev/sdj1  /dev/sdk2  /dev/sdl5
/dev/sdam1  /dev/sdan2    /dev/sdao5  /dev/sdaq    /dev/sdar1  /dev/sdb2  /dev/sde3  /dev/sdh   /dev/sdi1    /dev/sdj2  /dev/sdk5
/dev/sdam2  /dev/sdan5    /dev/sdap   /dev/sdaq1    /dev/sdar2  /dev/sdb3  /dev/sdg   /dev/sdh1  /dev/sdi2    /dev/sdj5  /dev/sdl
/dev/sdam5  /dev/sdao    /dev/sdap1  /dev/sdaq2    /dev/sdar5  /dev/sde   /dev/sdg1  /dev/sdh2  /dev/sdi5    /dev/sdk   /dev/sdl1
/dev/sdan   /dev/sdao1    /dev/sdap2  /dev/sdaq5    /dev/sdb    /dev/sde1  /dev/sdg2  /dev/sdh5  /dev/sdj    /dev/sdk1  /dev/sdl2

 

Quote

ash-4.4# ls /dev/md*
/dev/md0  /dev/md1  /dev/md2  /dev/md3    /dev/md4

 

Quote

ash-4.4# ls /dev/vg*
/dev/vga_arbiter

/dev/vg1:
syno_vg_reserved_area  volume_1

/dev/vg1001:
lv

/dev/vg2:
syno_vg_reserved_area  volume_3

 

Quote

ash-4.4# mdadm --detail /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Tue Oct 29 14:24:31 2019
     Raid Level : raid5
     Array Size : 107377588736 (102403.25 GiB 109954.65 GB)
  Used Dev Size : 9761598976 (9309.39 GiB 9995.88 GB)
   Raid Devices : 12
  Total Devices : 11
    Persistence : Superblock is persistent

    Update Time : Tue Jun 10 10:34:14 2025
          State : clean 
 Active Devices : 11
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : DiskStation:4
           UUID : 4fa9acaa:eb658006:d157b35e:2e2cbe20
         Events : 704088

    Number   Major   Minor   RaidDevice State
       3       8      133        0      active sync   /dev/sdi5
       2       8      149        1      active sync   /dev/sdj5
       4       8      181        2      active sync   /dev/sdl5
       5       8      101        3      active sync   /dev/sdg5
       6       8      165        4      active sync   /dev/sdk5
       7      66      149        5      active sync   /dev/sdap5
       9       8      117        6      active sync   /dev/sdh5
       -       0        0        7      removed
      10      66      165        8      faulty active sync   /dev/sdaq5
      11      66      117        9      active sync   /dev/sdan5
      13      66      133       10      active sync   /dev/sdao5
      12      66      101       11      active sync   /dev/sdam5

 

Quote

ash-4.4# mdadm --examine /dev/sdaq5 
/dev/sdaq5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4fa9acaa:eb658006:d157b35e:2e2cbe20
           Name : DiskStation:4
  Creation Time : Tue Oct 29 14:24:31 2019
     Raid Level : raid5
   Raid Devices : 12

 Avail Dev Size : 19523198016 (9309.39 GiB 9995.88 GB)
     Array Size : 107377588736 (102403.25 GiB 109954.65 GB)
  Used Dev Size : 19523197952 (9309.39 GiB 9995.88 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=64 sectors
          State : clean
    Device UUID : 424c4da1:84e41825:86840f88:c251e661

    Update Time : Tue Jun 10 10:34:14 2025
       Checksum : 81f080a9 - correct
         Events : 704088

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 8
   Array State : AAAAAAA.AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

Quote

ash-4.4# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/mapper/cachedev_0 /volume2 btrfs auto_reclaim_space,ssd,synoacl,noatime,nodev 0 0
/dev/mapper/cachedev_1 /volume3 btrfs auto_reclaim_space,ssd,synoacl,relatime,nodev 0 0
/dev/mapper/cachedev_2 /volume1 btrfs auto_reclaim_space,ssd,synoacl,relatime,nodev 0 0

 

Quote

ash-4.4# lvdisplay -v
    Using logical volume(s) on command line.
  --- Logical volume ---
  LV Path                /dev/vg1001/lv
  LV Name                lv
  VG Name                vg1001
  LV UUID                Bni54N-SbQB-Il1f-9RR5-LH1f-Uo2f-er32wI
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                100.00 TiB
  Current LE             26215231
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2816
  Block device           249:0
   
  --- Logical volume ---
  LV Path                /dev/vg2/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg2
  LV UUID                IZQ4el-aJLv-s1ui-ZA3Q-sb8c-ZFRw-jkyRPz
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:3
   
  --- Logical volume ---
  LV Path                /dev/vg2/volume_3
  LV Name                volume_3
  VG Name                vg2
  LV UUID                KNynWL-riAu-eTeA-ZCXG-PWnS-W0qy-LbO30y
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                921.00 GiB
  Current LE             235776
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:4
   
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                jPRfAb-MsNt-4XpF-Il27-5kPm-BE4G-L5hZA8
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:1
   
  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                2OfdeC-6fKn-cGxM-ydAm-Bb8N-vW6c-2HDyH2
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                455.00 GiB
  Current LE             116480
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:2

 

Quote

ash-4.4# vgdisplay -v
    Using volume group(s) on command line.
  --- Volume group ---
  VG Name               vg1001
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  16
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               100.00 TiB
  PE Size               4.00 MiB
  Total PE              26215231
  Alloc PE / Size       26215231 / 100.00 TiB
  Free  PE / Size       0 / 0   
  VG UUID               vcCPuH-fiPF-9s0B-NTOY-vVyd-OdfI-fvEhUE
   
  --- Logical volume ---
  LV Path                /dev/vg1001/lv
  LV Name                lv
  VG Name                vg1001
  LV UUID                Bni54N-SbQB-Il1f-9RR5-LH1f-Uo2f-er32wI
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                100.00 TiB
  Current LE             26215231
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2816
  Block device           249:0
   
  --- Physical volumes ---
  PV Name               /dev/md4     
  PV UUID               G60w7O-Mcxn-olQ8-9bs4-0B02-F51E-19nhOW
  PV Status             allocatable
  Total PE / Free PE    26215231 / 0
   
  --- Volume group ---
  VG Name               vg2
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               921.29 GiB
  PE Size               4.00 MiB
  Total PE              235849
  Alloc PE / Size       235779 / 921.01 GiB
  Free  PE / Size       70 / 280.00 MiB
  VG UUID               81uM1G-U8Kd-cKyk-2rF1-y1NM-EXq3-G6W2HH
   
  --- Logical volume ---
  LV Path                /dev/vg2/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg2
  LV UUID                IZQ4el-aJLv-s1ui-ZA3Q-sb8c-ZFRw-jkyRPz
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:3
   
  --- Logical volume ---
  LV Path                /dev/vg2/volume_3
  LV Name                volume_3
  VG Name                vg2
  LV UUID                KNynWL-riAu-eTeA-ZCXG-PWnS-W0qy-LbO30y
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                921.00 GiB
  Current LE             235776
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:4
   
  --- Physical volumes ---
  PV Name               /dev/md3     
  PV UUID               R7PKCN-80zW-Yrjl-KVyi-Goi4-OV87-p0BeGW
  PV Status             allocatable
  Total PE / Free PE    235849 / 70
   
  --- Volume group ---
  VG Name               vg1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               455.53 GiB
  PE Size               4.00 MiB
  Total PE              116616
  Alloc PE / Size       116483 / 455.01 GiB
  Free  PE / Size       133 / 532.00 MiB
  VG UUID               z3FQa0-1NY0-7cGh-1XCE-Z67V-WZFO-c3RmOF
   
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                jPRfAb-MsNt-4XpF-Il27-5kPm-BE4G-L5hZA8
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:1
   
  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                2OfdeC-6fKn-cGxM-ydAm-Bb8N-vW6c-2HDyH2
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                455.00 GiB
  Current LE             116480
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           249:2
   
  --- Physical volumes ---
  PV Name               /dev/md2     
  PV UUID               zOTbqz-shoH-jnrh-Fwcw-oxCV-PWLe-VNLtdi
  PV Status             allocatable
  Total PE / Free PE    116616 / 133

 

Quote

ash-4.4# lvm pvscan
  PV /dev/md4   VG vg1001   lvm2 [100.00 TiB / 0    free]
  PV /dev/md3   VG vg2      lvm2 [921.29 GiB / 280.00 MiB free]
  PV /dev/md2   VG vg1      lvm2 [455.53 GiB / 532.00 MiB free]
  Total: 3 [101.35 TiB] / in use: 3 [101.35 TiB] / in no VG: 0 [0   ]

 

[Screenshots attached]

Posted
1 hour ago, Trabalhador Anonimo said:

Download M-shell here, write it to a new USB drive, boot from it, and use the function to mount and recover your data.

Look here at post #1788 to see how it works.

 

Thank you very much! I will give that a look and try later today. Do I risk losing any data with this approach?

Curious if you have run into this same issue before with SHR-1 disk fault tolerance?

Posted

Adding a little more information - note that the last drive (sdar5) is the one that was deactivated a couple of days before the crash. From what I'm gathering in various posts (some of them fairly dated now), this is a good sign that my data is intact?

 

Quote

ash-4.4# mdadm --examine /dev/sd[ghijkl]5 /dev/sdam5 /dev/sdan5 /dev/sdao5 /dev/sdap5 /dev/sdaq5 /dev/sdar5 | egrep 'Event|/dev/sd'

/dev/sdg5:
         Events : 704097
/dev/sdh5:
         Events : 704097
/dev/sdi5:
         Events : 704097
/dev/sdj5:
         Events : 704097
/dev/sdk5:
         Events : 704097
/dev/sdl5:
         Events : 704097
/dev/sdam5:
         Events : 704097
/dev/sdan5:
         Events : 704097
/dev/sdao5:
         Events : 704097
/dev/sdap5:
         Events : 704097
/dev/sdaq5:
         Events : 704097
/dev/sdar5:
         Events : 703612
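Based on those event counters (the 11 active members all at 704097, and only the previously deactivated sdar5 behind at 703612), the step that keeps coming up in older threads is stopping md4 and reassembling it read-only from the 11 in-sync members. To be clear, this is only a sketch of what I understand that step to be; I have not run it and would like confirmation that it is safe first:

mdadm --stop /dev/md4                        # only after everything using the volume is stopped/unmounted
mdadm --assemble --readonly --force /dev/md4 \
    /dev/sdg5 /dev/sdh5 /dev/sdi5 /dev/sdj5 /dev/sdk5 /dev/sdl5 \
    /dev/sdam5 /dev/sdan5 /dev/sdao5 /dev/sdap5 /dev/sdaq5
cat /proc/mdstat                             # should show md4 active (read-only) with 11 of 12 members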

 

Posted

@Trabalhador Anonimo - I created the bootable USB and added the menu entry for recovery. Unfortunately it doesn't seem to work out of the box, and I'm not sure where to go from here. As far as I can tell the disk is fine; it's just reported in a failed state, which is preventing things from coming up as expected.

[Photo of the recovery attempt attached]

Posted
11 minutes ago, Peter Suh said:

Here is Perplexity's answer. Please refer to it.

 

[Two screenshots of Perplexity's answer attached]

 

Thank you @Peter Suh for taking the time to review and give it a whirl in Perplexity. Unfortunately that doesn't get me anywhere, as I can't access the web GUI, and syno_poweroff_task does not appear to exist in DSM 7.

Posted
17 minutes ago, c_c_b said:

 

Thank you @Peter Suh for taking the time to review and give it a whirl in Perplexity. Unfortunately that doesn't get me anywhere, as I can't access the web GUI, and syno_poweroff_task does not appear to exist in DSM 7.

[Screenshot of the commands attached]

 

These commands have been verified to actually work.

 

Posted
6 minutes ago, Peter Suh said:

[Screenshot of the commands attached]

 

These commands have been verified to actually work.

 

Thank you again - I did actually come across a post for DSM 7 with these commands and tried them, with mixed, unsuccessful results: either the command hangs indefinitely, or it simply does not unmount and the volume still shows as active in lvscan.
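For reference, the general shape of the sequence I attempted (my paraphrase of that DSM 7 post, not the exact commands from the screenshot, and assuming /volume2 is the volume that lives on vg1001) was:

umount /volume2            # this is where it either hung or refused to unmount
vgchange -an vg1001        # deactivate the volume group before touching the array
lvscan                     # the LV still showed as ACTIVE at this point
mdadm --stop /dev/md4      # intended final step before any reassembly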

Posted
22 hours ago, c_c_b said:

Thank you very much! I will give that a look and try later today. Do I risk losing any data with this approach?

Curious if you have run into this same issue before with SHR-1 disk fault tolerance?

1 - I don't know, but Synology has made a lot of improvements to avoid data loss;

2 - The disks in question are my oldest ones, and their life may be about to end. I got new ones and I'll replace them next weekend.

Posted
9 hours ago, djvas335 said:

I would suggest booting the server into GParted Live and seeing there if you can mount the RAID.

Trying this won't put anything at risk of data loss?

Posted

@Peter Suh - I have tried your suggestions without success. From what I can tell from Synology Assistant, when I have all the drives attached the server goes into a "checking quota" status, and checking top I see a kworker consuming 100% CPU. I suspect this is when the GUI is unresponsive and why most commands won't finish, and SSH eventually stops connecting. I've looked around but am not coming across anything to kill that kworker or whatever launches the quota check. I tried your new M-shell recovery tool as well, but it gave me what's pictured below. Any chance you have some guidance from here? I know the drive has not failed; it's just reporting that way, and it seems to me that I'm just fighting some incorrect status somewhere that needs to be reset to get things going again.
 

[Screenshot of the recovery tool's output attached]
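In case it helps pin down the quota check, these are the read-only commands I've been using to try to see what that kworker is doing (nothing here should change any state):

ps -eo pid,stat,wchan:32,comm | grep kworker    # which kernel function the busy worker is waiting in
dmesg | tail -100                               # btrfs / md messages logged around the quota scan
cat /proc/mdstat                                # confirm the array state hasn't changed
mount | grep volume                             # confirm whether any volume actually got mounted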

Posted

It will not, as long as you know what you are doing. It will actually be possible to fix and mount the array if the damage is not severe, and then copy your data out. I had an issue with a server once and copied the data out this way.

 

5 hours ago, c_c_b said:

Trying this won't put anything at risk of data loss?

 

Posted
2 hours ago, djvas335 said:

It will not, as long as you know what you are doing. It will actually be possible to fix and mount the array if the damage is not severe, and then copy your data out. I had an issue with a server once and copied the data out this way.

 

 

That is great to hear. I have found this to be a bit trickier than a simple drive mount, given that it's 12 drives and it's using Synology SHR-1... Can you provide me with the steps needed using GParted Live?

Posted

I was able to make slightly more progress today, I think. I discovered that if I boot my server with one of the remaining 11 drives disconnected (the pool is 12 drives, 1 was deactivated) to get the GUI working, I can then connect that 11th drive while the system is running, and Storage Manager picks it up with no problem. I'm also presented with an online assemble option (seems good?). I have tried that, but it does not appear that anything is happening: system resources are under 5% across the board, I don't see anything obvious running, and the status has been "Assembling... Waiting" for hours. It did, however, make volume 2 show up again under the storage pool in Storage Manager, although volume 2 is not mounted (verified in /etc/fstab). Has anyone who has done an online assemble seen what status should appear and how long it normally takes? Am I on the right track for recovery?

 

[Storage Manager screenshots attached]

Posted (edited)
3 hours ago, c_c_b said:

Has anyone who has done an online assemble seen what status should appear and how long it normally takes? Am I on the right track for recovery?

 

You already know how to find out how long it takes to rebuild RAID.

 

cat /proc/mdstat

 

Edited by Peter Suh
Posted
2 minutes ago, Peter Suh said:

 

You already know how to find out how long it takes to rebuild RAID.

 

cat /proc/mdstat

 

@Peter Suh - That's part of why I don't think anything is happening, as it shows nothing there:

 

Quote

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1] 
md4 : active raid5 sdi5[3] sdam5[12] sdao5[13] sdan5[11] sdaq5[10](E) sdh5[9] sdap5[7] sdk5[6] sdg5[5] sdl5[4] sdj5[2]
      107377588736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUU_EUUU]
      
md3 : active raid1 sde3[0]
      966038208 blocks super 1.2 [1/1] [U]
      
md2 : active raid1 sdb3[0]
      477662208 blocks super 1.2 [1/1] [U]
      
md1 : active raid1 sdb2[5] sde2[7]
      2097088 blocks [16/2] [_____U_U________]
      
md0 : active raid1 sdb1[5] sde1[7]
      2490176 blocks [16/2] [_____U_U________]
      
unused devices: <none>
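For completeness, I also plan to check the md sync state directly (assuming DSM exposes the standard md sysfs attributes), to confirm whether any resync or recovery is actually queued:

cat /sys/block/md4/md/sync_action     # expected values: idle, resync, recover, check
cat /sys/block/md4/md/degraded        # 1 if the array is running degraded
mdadm --detail /dev/md4 | grep -E 'State|Rebuild|Events'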

 

Posted
1 hour ago, c_c_b said:

@Peter Suh - That's part of why I don't think anything is happening, as it shows nothing there:

 

 

I have 12 disks in a VM and use 11 of them with the same SHR configuration as you, with 1 as a hot spare. (Please ignore my system partition error; it is a separate issue.)

 

I disabled 1 disk that had an error, and the hot spare kicked in automatically.

 

[Screenshot attached]

 

It seems difficult to reproduce an error during hot-spare auto-recovery like yours.

The disk that errored during hot-spare auto-recovery is the one marked E instead of U, and it appears to be stuck mid-recovery:

[UUUUUUU_EUUU]

A problem during hot-spare auto-recovery like this seems to be very rare.

 

SHR-1 provides data protection against the loss of only 1 disk.

The disk that had the error was already removed, and I expected the hot spare disk to rebuild automatically, but it seems that it did not work as expected.

In that case, shouldn't the hot spare disk be initialized and added back for the recovery?

It seems that the disk in the E (error) state is preventing even reassembly.

Since this is a very rare case, I think the know-how is with Synology's technical team.

Posted

md4 : active raid5 sdi5[3] sdam5[12] sdao5[13] sdan5[11] sdaq5[10](E) sdh5[9] sdap5[7] sdk5[6] sdg5[5] sdl5[4] sdj5[2]
      107377588736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUU_EUUU]

 

However, it seems that I analyzed the status of md4 incorrectly.
md4 has effectively lost 2 disks: one removed (_) and one in the error (E) state.
As mentioned above, SHR-1 only allows the recovery of 1 lost disk.
Data cannot be recovered when 2 disks are lost.
The hot spare I mentioned was also the result of incorrect analysis and should be ignored.

Posted

Unfortunately there is no easy way to recover from this. At this point I would stop trying to recover the RAID in DSM, image each drive, and try to reassemble the RAID from the images for data-recovery purposes. And again, if you value your data: backup, backup, backup.
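Roughly, the workflow looks like this (a sketch only: device names, paths, and mount points below are placeholders, and it assumes a separate Linux machine with GNU ddrescue installed and enough space to hold one image per member disk):

ddrescue -d /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map   # one image + map file per member disk
losetup -f --show -r -P /mnt/backup/sdX.img                    # attach each image read-only, with partition scan
mdadm --assemble --readonly /dev/md4 /dev/loop*p5              # assemble from the data partitions (assumes only these images are attached as loops)
vgchange -ay vg1001                                            # activate the LVM volume group on top of the array
mkdir -p /mnt/recovered
mount -o ro /dev/vg1001/lv /mnt/recovered                      # mount the btrfs volume read-only and copy data out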

 

Posted

Drive 2 - system partition failed. You can restore it: click on the disk, and a "repair system partition" button will show up. A new window will appear asking you to select the disk. I had this problem last week: 2 disks were at fault. One crashed and another one had "system partition failed". I repaired the first one and disabled the second one. Because there were 2 faults, the hot spare did not kick in. After repairing the first one (the one with the system partition failure), the hot spare went online and recovered the other one. I rebooted even before everything was OK, and the second disk came back online. I'm backing up now so I can replace both disks.

FYI: it is an 11-disk box.
