Hey! In XPEnology I have two disks (4TB and 500GB) in a RAID0 (4TB + 500GB). I decided to update from DSM 6.1.7-15284 with Jun's loader to 7.1 with ARPL. The update itself succeeded, but I found that the RAID0 had become unavailable, although it still appeared in Storage Manager. Seeing this with horror, I decided to roll back (my bad) to 6.1.7-15284. I swapped the USB stick and booted with 6.1.7-15284; XPEnology offered a migration or a reinstall (screenshot). Migration failed because it required downloading a version higher than the current one, 7.1. So I chose the reinstall option (the text said the data would remain unchanged after the reinstall).
Now, when I boot with both drives connected (4TB on port 1, 500GB on port 5), I don't see the broken RAID0 at all. When I boot with only the 4TB drive connected, I see:
$ sudo mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Fri Jan  1 00:51:57 2021
     Raid Level : raid0
     Array Size : 3902196544 (3721.42 GiB 3995.85 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Nov 28 02:38:27 2022
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           Name : NL_SERVER:2  (local to host NL_SERVER)
           UUID : 077dbff9:732a2154:186bcfa5:6bde5b9e
         Events : 10

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       -       0        0        1      removed
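The "removed" slot at RaidDevice 1 makes me think the superblock on the 500GB disk's data partition is gone. I plan to check it read-only first (assuming the 500GB disk shows up as /dev/sdb when both drives are connected; the letter may differ on DSM, so this device name is a guess):

$ sudo mdadm --examine /dev/sda3   # surviving 4TB member, for reference
$ sudo mdadm --examine /dev/sdb3   # 500GB member; "No md superblock detected" would confirm the metadata is gone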
S.M.A.R.T. data for both disks is perfect, so this is purely a software problem. I suspect that when I rolled back in panic to DSM 6.1.7, I erased the RAID0 metadata on the 500GB disk. Is the metadata that describes the RAID0 the same on both disks, or is it different? Could it perhaps be transferred from the 4TB HDD?
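From what I've read about mdadm v1.2 metadata, each member disk carries its own superblock: the array-wide fields (UUID, level, chunk size) are identical, but each superblock also records that member's own role and device UUID, so it can't simply be copied byte-for-byte from the 4TB disk. The usual last-resort technique when one RAID0 superblock is lost is to re-create the array with exactly the original parameters, which rewrites the superblocks without touching the data area, provided the device order, chunk size, metadata version, and data offset all match the original. I'm considering something like this, only after imaging both disks, and with /dev/sdb3 assumed as the 500GB data partition:

# Rewrites superblocks only; the data blocks stay untouched as long as
# every parameter (device order, chunk, metadata version, data offset)
# matches the original array. All values below are taken from the
# `mdadm -D` output above; 4TB member first, 500GB second.
$ sudo mdadm --create /dev/md2 --verbose \
      --level=raid0 --chunk=64 --metadata=1.2 \
      --raid-devices=2 /dev/sda3 /dev/sdb3

# Then verify the filesystem read-only before mounting anything:
$ sudo fsck.ext4 -n /dev/md2    # or btrfs check, depending on the volume

Does that sound like the right approach, or is there a safer way?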
I will be very grateful for any help.