How to assemble raidF1 on non-DSM Linux?


I've set up a test instance with raidF1 to test how data can be recovered in case of a failure.

 

I've attached the three virtual disks inside my Debian system (Proxmox) under the following device names:

/dev/nbd0
/dev/nbd1
/dev/nbd2
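
The nbd device names come from exporting the disk images, roughly like this (assuming qemu-nbd was used; the image paths are placeholders):

# load the nbd driver and attach each image to an nbd device
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /path/to/disk0.qcow2
qemu-nbd --connect=/dev/nbd1 /path/to/disk1.qcow2
qemu-nbd --connect=/dev/nbd2 /path/to/disk2.qcow2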

 

Output of lsblk:

# lsblk -f
NAME                         FSTYPE            FSVER    LABEL  UUID                                   FSAVAIL FSUSE% MOUNTPOINT
nbd0
├─nbd0p1                     linux_raid_member 0.90.0          53e00d47-f1a3-66c2-05d9-49f7b0bbaec7
├─nbd0p2                     linux_raid_member 0.90.0          f7d63632-735b-1692-05d9-49f7b0bbaec7
└─nbd0p3                     linux_raid_member 1.2      test:2 c63ed2c8-39a8-7ff0-2c43-0f774cb603c9
nbd1
├─nbd1p1                     linux_raid_member 0.90.0          53e00d47-f1a3-66c2-05d9-49f7b0bbaec7
├─nbd1p2                     linux_raid_member 0.90.0          f7d63632-735b-1692-05d9-49f7b0bbaec7
└─nbd1p3                     linux_raid_member 1.2      test:2 c63ed2c8-39a8-7ff0-2c43-0f774cb603c9
nbd2
├─nbd2p1                     linux_raid_member 0.90.0          53e00d47-f1a3-66c2-05d9-49f7b0bbaec7
├─nbd2p2                     linux_raid_member 0.90.0          f7d63632-735b-1692-05d9-49f7b0bbaec7
└─nbd2p3                     linux_raid_member 1.2      test:2 c63ed2c8-39a8-7ff0-2c43-0f774cb603c9
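
The p1/p2 partitions are DSM's usual system and swap RAID1 arrays (metadata 0.90); p3 carries the data volume. To check that all three p3 members agree, the key superblock fields can be compared with a quick loop:

# compare the key superblock fields of all three data members
for d in /dev/nbd0p3 /dev/nbd1p3 /dev/nbd2p3; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Raid Level|Device Role|Events'
done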

 

Output of mdadm --examine. Here the problems begin, as the Raid Level is reported as "-unknown-":

# mdadm --misc --query --examine /dev/nbd0p3
/dev/nbd0p3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c63ed2c8:39a87ff0:2c430f77:4cb603c9
           Name : test:2
  Creation Time : Mon Sep 19 20:56:50 2022
     Raid Level : -unknown-
   Raid Devices : 3

 Avail Dev Size : 45660160 (21.77 GiB 23.38 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 426bd125:fbf2912a:c8e7d1ec:2fe2e307

    Update Time : Mon Sep 19 21:48:24 2022
       Checksum : 27a0eb8a - correct
         Events : 23


   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
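
Vanilla mdadm prints "-unknown-" because Synology stores a proprietary level number for RAID F1 in the superblock, which mainline md does not know. The raw value can be inspected directly: in a v1.2 superblock (here at Super Offset 8 sectors, i.e. 4096 bytes into the partition) the level is a little-endian 32-bit integer at offset 72, followed by the layout field at offset 76. The level is reportedly 45 for RAID F1 in Synology's kernel patches, but treat that number as an assumption.

# dump the level (offset 72) and layout (offset 76) fields of the v1.2 superblock
dd if=/dev/nbd0p3 bs=1 skip=$((8*512 + 72)) count=8 2>/dev/null | xxd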

 

Then I tried to assemble the array:

# mdadm --assemble --force --uuid=c63ed2c8:39a87ff0:2c430f77:4cb603c9 /dev/md127
mdadm: /dev/md127 assembled from 3 drives - not enough to start the array.
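
The half-assembled array is left inactive and keeps the member devices busy, so before any further attempt it has to be stopped:

cat /proc/mdstat           # md127 shows up as inactive here
mdadm --stop /dev/md127    # releases the members for the next attempt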

 

Output of mdadm --detail now claims the array is a raid0:

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 3
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 3

              Name : test:2
              UUID : c63ed2c8:39a87ff0:2c430f77:4cb603c9
            Events : 23

    Number   Major   Minor   RaidDevice

       -      43        3        -        /dev/nbd0p3
       -      43       51        -        /dev/nbd2p3
       -      43       19        -        /dev/nbd1p3
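
The "raid0" here is presumably just a placeholder reported for an inactive array whose level the kernel cannot recognize, not the real level. Before experimenting any further (e.g. rewriting superblock fields), I would work on copy-on-write overlays so the original images stay untouched; a sketch with hypothetical file names:

# qcow2 overlay backed by the original image, then attach the overlay instead
qemu-img create -f qcow2 -b disk0.qcow2 -F qcow2 overlay0.qcow2
qemu-nbd --connect=/dev/nbd0 overlay0.qcow2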

 

What do I have to do differently to assemble it successfully? I assumed it would be assembled as a raid5, but this does not seem to be the case.
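
I'm aware that one fallback would be to attach the disks to a DSM/XPEnology instance, whose patched kernel and mdadm understand RAID F1, and assemble there, roughly like this (device names are illustrative):

# on a DSM/XPEnology system; sdX3 are the data partitions of the three disks
mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3

But I'd like to know whether it can be done on a plain, non-DSM Linux system.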
