
RAID0 recovery


Pegasus

Question

Hey! My XPEnology box has two disks (4TB and 500GB) combined into a single RAID0 volume (4TB + 500GB). I decided to update from DSM 6.1.7-15284 with Jun's loader to 7.1 with ARPL. The update itself succeeded, but afterwards the RAID0 volume became unavailable, although it still showed up in Storage Manager. Seeing this with horror, I decided to roll back (my mistake) to 6.1.7-15284. I swapped the USB stick, booted with 6.1.7-15284, and XPEnology offered either a migration or a reinstall (screenshot). The migration failed because it would only accept a DSM version higher than the one already installed, 7.1. So I chose the reinstall option (the dialog said the data would remain unchanged after the reinstall).

 

Now, when I boot with both drives connected (4TB on port 1, 500GB on port 5), I no longer see the broken RAID0 at all:

[screenshot: Storage Manager, 2023-01-10]

 

$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md1 : active raid1 sde2[1] sda2[0]
      2097088 blocks [12/2] [UU__________]

md0 : active raid1 sda1[1]
      2490176 blocks [12/1] [_U__________]

unused devices: <none>
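
Interesting: with both drives attached, md2 (the RAID0) is not assembled at all, and neither data partition shows up in any array. I figure the next step is to dump the md superblock of each data partition directly, to see whether the 500GB disk still carries any RAID0 metadata. Something like this (I'm guessing the partition names sda3/sde3 from the sda/sde members of md0/md1 above):

$ sudo mdadm --examine /dev/sda3   # data partition of the 4TB disk
$ sudo mdadm --examine /dev/sde3   # data partition of the 500GB disk (name guessed)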

 

 

When I boot with only the 4TB drive connected, I see:

[screenshot: Storage Manager, 2023-01-10]

$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid0 sda3[0]
      3902196544 blocks super 1.2 64k chunks [2/1] [U_]

md1 : active raid1 sda2[0]
      2097088 blocks [12/1] [U___________]

md0 : active raid1 sda1[1]
      2490176 blocks [12/1] [_U__________]

unused devices: <none>

 

$ sudo mdadm -D /dev/md2

/dev/md2:
        Version : 1.2
  Creation Time : Fri Jan  1 00:51:57 2021
     Raid Level : raid0
     Array Size : 3902196544 (3721.42 GiB 3995.85 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Nov 28 02:38:27 2022
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           Name : NL_SERVER:2  (local to host NL_SERVER)
           UUID : 077dbff9:732a2154:186bcfa5:6bde5b9e
         Events : 10

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       -       0        0        1      removed
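
If I understand the 1.2 metadata format correctly, the superblock sits 4 KiB from the start of each member partition, so it should also be possible to check the raw bytes for the md magic number (0xa92b4efc, stored little-endian). A rough sketch of what I mean, again assuming /dev/sde3 is the 500GB data partition:

$ # an intact v1.2 superblock begins with the bytes fc 4e 2b a9 at offset 4096
$ sudo hexdump -C -s 4096 -n 16 /dev/sde3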

 

S.M.A.R.T. data for both disks is perfect, so this looks like a purely software problem. My theory is that when I rolled back to DSM 6.1.7 in a panic, the reinstall erased the RAID0 metadata on the 500GB disk. Is the metadata that describes the RAID0 array identical on both disks, or is it different on each member? Could it perhaps be copied over from the 4TB HDD?

 

I'd be very grateful for any help.
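
P.S. One idea I've come across for RAID0 specifically: since RAID0 has no parity and no initial sync, re-creating the array with exactly the same parameters and device order supposedly just rewrites the superblocks and leaves the data intact. I have not dared to try it; the device names and order below are guesses from the -D output above, and the data offset would also have to match the old one (which mdadm --examine on /dev/sda3 should show):

$ # DANGER: untested sketch; wrong order, chunk size, or data offset means data loss
$ sudo mdadm --create /dev/md2 --metadata=1.2 --level=0 --raid-devices=2 --chunk=64K /dev/sda3 /dev/sde3

Does that sound right, or is there a safer way?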
