XPEnology Community

Tutorial: How to access DSM's Data & System partitions


Polanskiman

Recommended Posts

  • 1 month later...
  • 1 month later...

I installed Ubuntu 18.04.2 LTS (Bionic Beaver) and Ubuntu 18.10 (Cosmic Cuttlefish) and installed mdadm on both. Looking at the mdadm help, the -AU (--assemble --update) options seem to be available, and the byteorder option under --update seems to be available as well. The version of mdadm used is v4.1-rc1.

 

So I am not sure what you did or perhaps I am missing something.
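For anyone who wants to verify this on their own machine, here is a quick sketch (assuming a stock Ubuntu install; package and man page names as shipped by Ubuntu):

# Install mdadm and confirm which version is in use
sudo apt-get install -y mdadm
mdadm --version
# Check that the byteorder keyword is documented for --update in this build
man mdadm | grep -n -A2 "byteorder"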


  • 1 year later...
On 13.03.2017 at 07:20, Polanskiman said:

mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1

Hi, I can't mount the system partition... (I need to edit the /etc/default.....

I have 5 disks named sda, sdb, sdc, sdd and sde, each a single disk without a RAID config.... SimpleRAID

After the command mdadm -AU byteorder /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
the system answers "mdadm: device /dev/sda1 exists but is not an md array."

But after the command sudo fdisk -l | grep /dev/sd the output is:
 

Disk /dev/sda: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sda1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sda2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sda3       9437184 3906824351 3897387168  1.8T fd Linux raid autodetect
Disk /dev/sdc: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sdc1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sdc2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sdc3       9437184 3906824351 3897387168  1.8T fd Linux raid autodetect
Disk /dev/sdb: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sdb1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 3906824351 3897387168  1.8T fd Linux raid autodetect
Disk /dev/sdf: 111.81 GiB, 120034123776 bytes, 234441648 sectors
/dev/sdf1          2048   4982527   4980480   2.4G fd Linux raid autodetect
/dev/sdf2       4982528   9176831   4194304     2G fd Linux raid autodetect
/dev/sdf3       9437184 234236831 224799648 107.2G fd Linux raid autodetect
Disk /dev/sdd: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sdd1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sdd2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sdd3       9437184 3906824351 3897387168  1.8T fd Linux raid autodetect
Disk /dev/sde: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sde1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sde2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sde3       9437184 3906824351 3897387168  1.8T fd Linux raid autodetect

After mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 ...

mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: no RAID superblock on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted

What am I doing wrong?
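A hedged observation for later readers (not from the original posts): the first command above leaves out the array device, so mdadm treats /dev/sda1 itself as the array to assemble, which is why it reports "exists but is not an md array". In the second attempt, "Expected magic a92b4efc, got fc4e2ba9" is the v0.90 magic with its bytes swapped, which is precisely the situation --update=byteorder is meant to handle; it can still fail if a half-assembled array is already holding the members. A minimal diagnostic sketch, with device names as examples only:

# Look at the raw superblock of each system partition (sdf1 may also belong to the system RAID1)
sudo mdadm --examine /dev/sda1
# Stop anything udev may have auto-assembled from these members
cat /proc/mdstat
sudo mdadm --stop /dev/md0 2>/dev/null
# Retry, naming the array device (/dev/md0) first and then every member partition
sudo mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1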

Edited by loveburn

  • 5 weeks later...
On 4/10/2021 at 1:21 PM, loveburn said:

I can't mount the system partition... After mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 I get:

mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: no RAID superblock on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted

What am I doing wrong?

I'm having the exact same issue - did you find a solution?


  • 2 months later...

Hello,

 

I'm trying to mount my system partition using only 1 of the 4 disks from my Synology. The reason is that I need to fix a specific directory on the system partition which I accidentally corrupted on my NAS.

 

If I run the command you mention though:

mdadm -AU byteorder /dev/md0 /dev/sda1

 

It shows me:

mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted

 

I'm really confused about what's going on here. I hope someone can help me :)


  • 2 weeks later...
On 7/21/2021 at 2:03 PM, Devedse said:

I'm trying to mount my system partition using only 1 of the 4 disks from my Synology... If I run mdadm -AU byteorder /dev/md0 /dev/sda1 it shows me:

mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got 00000000)
mdadm: /dev/sda1 has no superblock - assembly aborted

 

I tried this at first and it only worked out for me when all disks from the array were in place.
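A hedged aside for the single-disk case (not something stated in the tutorial): a RAID1 member can normally be assembled degraded as long as its own superblock is intact, but "got 00000000" above suggests that particular partition carries no v0.90 superblock at all, so it is worth examining each member before assembling. A minimal sketch; device names are examples:

# Check whether the partition carries an md superblock, and which metadata version
sudo mdadm --examine /dev/sda1
# If a valid superblock is reported, try a degraded (forced-start) assembly of just that member
sudo mdadm --assemble --run /dev/md0 /dev/sda1
# Mount the degraded system partition read-only first, to look around without changing anything
sudo mount -o ro /dev/md0 /mnt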

 

Thanks for this post OP.

It saved my ass as I forgot to delete the .xpenoboot folder when updating and SSH wasn't enabled.

 


@Polanskiman 

First of all I would like to thank you for the tutorial! Also I want to thank you for giving your time to solve my problems!

I can not get the system partition mounted and would be happy if you could give me a tip on how to proceed.

 

The following scenario and my system:

 

DSM 6.2.3-25426 as a virtual machine on an unraid system

120 GB SSD passthrough for Volume 1 (Systemapps)

LSI 9211-8i in IT Mode Passthrough with 6x6TB WD in RAID5 (DATA)

 

Stupidly, I believed a post in another forum and tried to update to 6.2.4 -> brick.

Since I was still running bare metal last year, and after switching to a VM I was able to take over the RAID5 on the LSI controller completely from the bare-metal install, I wanted to undo my update mistake, with that background, as follows.

 

I removed the RAID5 with the LSI controller and set up the virtual machine again with only the SSD, on version 6.2.3.
Unfortunately I found that the RAID5 is not taken over now, because the system partitions still contain information from 6.2.4, and the 6 hard disks are displayed as external hard disks.

 

I then searched for a solution here in the forum and found this thread, as well as the one by @IG-88 on downgrading from 6.2.4 to 6.2.3.


The problem I have now is that I have already reformatted the SSD and its system partition, so it no longer matches the information on the system partitions of the RAID5 disks. The result of following your tutorial, in my case, is that it is not possible to assemble the RAID of the system partition.

 

[Screenshot 1: failed assembly of the system-partition RAID]

...

[Screenshot 2: failed assembly of the system-partition RAID]

 

Then I had the idea of copying the content of one of the system partitions, which must all be identical (RAID1), from the six RAID5 hard drives and writing it to the system partition of the SSD, so that the original RAID1 of the system partition again consists of 7 disks. Unfortunately, this does not work either, because I cannot mount a single system partition with the "mount" command...

 

[Screenshot 3: mount command failing on a single system partition]

 

Would it be a good idea to format (not delete) the system partitions of all the HDDs and use an unused boot image to force a new installation of 6.2.3? My primary concern is only that the data on the RAID5 is preserved. I can quickly set up all apps and system settings again.

 

Or is there possibly a better way I can go?

 

The most inelegant way would be to mount the RAID5 of the data partition and back up all the data, then rebuild the RAID5 and import the data again. This would definitely work, since I was already able to mount the RAID5 successfully.

 

Thanks for reading my problems!

Edited by WowaDriver
forgot one point...

I have just been able to test the following:

 

I unhooked the RAID5 and started DSM on the SSD with version 6.2.3. There I created a Volume 1 and placed test data on it. Then I again forced an update to 6.2.4.

 

Next I mounted the bricked system in Linux and deleted the system partition. Then, with a new boot image, 6.2.3 was installed again, and voilà: the system runs and Volume 1 on the SSD is still present with the test data.

 

Now the question is whether the same approach will also work with my RAID5?


Since no one answers me here I have taken the matter into my own hands. 

 

The tutorials are certainly all useful, but in the end the easiest way, if you have performed an update and the system is no longer bootable, is to delete the system partitions (the 2.4 GB Linux RAID1 members: sda1, sdb1, sdc1, ..., sdx1) of all installed hard disks. Then simply boot an unused synoboot.img and the system is migrated without touching the data partitions.
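As an illustration only (this is not WowaDriver's exact procedure, just a hedged sketch of one way to do it from a live Linux system): wipe only the first, roughly 2.4 GB, partition of each DSM disk and leave partition 2 (swap) and partition 3 (data) untouched. Device names below are examples; verify them with lsblk/fdisk first, because wiping the wrong partition destroys data.

# Verify which sdX1 partitions belong to DSM's system RAID1 before touching anything
lsblk -o NAME,SIZE,TYPE,FSTYPE
# Stop any auto-assembled array using these members
sudo mdadm --stop /dev/md0 2>/dev/null
# Clear the md superblock and filesystem signatures on the system partitions only
for p in /dev/sda1 /dev/sdb1 /dev/sdc1; do
    sudo mdadm --zero-superblock "$p"
    sudo wipefs -a "$p"
done
# sdX2 (swap) and sdX3 (data) stay untouched; then boot a fresh synoboot.img and reinstall DSM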

 

The disadvantage, of course, is that all installed apps, configurations and settings have to be set up again.

 

Thanks anyway for all the information written down here. I would only wish that in future the community would answer a bit more actively.

 


  • 2 weeks later...
On 11/08/2021 at 11:22, WowaDriver said:

Since no one answers me here I have taken the matter into my own hands. [...] the easiest way is to delete the system partitions (2.4 GB Linux RAID1 members: sda1, sdb1, sdc1, ..., sdx1) of all installed hard disks, then simply boot an unused synoboot.img; the system is migrated without touching the data partitions.

 

It seems that this does not work. A new install requires formatting the disks. There is no possibility to recover the data partition this way.


On 8/7/2021 at 1:38 PM, WowaDriver said:

??????

 

On 8/11/2021 at 2:22 AM, WowaDriver said:

I would only wish that in future the community would answer a bit more actively.

 

You might get a better result if you did not threadjack someone else's post, and instead posted your own question in the correct forum.  I'm not really looking for new information on DSM partition access so I didn't see this, and I am sure that many others didn't either.

 

https://xpenology.com/forum/terms/

https://xpenology.com/forum/guidelines/

 - review the part about Forum Guideline/Netiquette


On 25/08/2021 at 19:16, Bento59 said:

It seems that this does not work. A new install requires formatting the disks. There is no possibility to recover the data partition this way.

I am correcting my comment: only the system partition gets formatted. Data partitions are preserved when you erase the system partition. The reinstallation does not erase the data.


  • 3 months later...
  • 2 months later...
  • 3 months later...

My box is in the recoverable state.

I've booted into Linux, but on the system partition I have only two folders:

@autoupdate

lost+found

 

Inside @autoupdate I found a .pat file (now deleted).

 

I cannot find any other folders (like etc).


  • 2 months later...
On 3/13/2017 at 10:20 AM, Polanskiman said:

8 - Finally, assemble the array and fix the byte order (this is for my case, with 2 drives):

mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1
 

Same comment as previous command; add any additional system partitions that you may have. Beware of the /dev/md0. It's normal, not a mistake.


It is important to note that -U updates the preferred minor from the /dev/mdN argument, and this must be the same as it was on the Synology, otherwise DSM won't boot! If you mistakenly changed the preferred minor, you can recover it by running:
 

mdadm -AU super-minor /dev/md0 /dev/sda1 /dev/sdb1

 

In my case DSM boots correctly after -U byteorder with preferred minor 0. DSM automatically converts the superblock back to its own endianness at the first boot, so you have to run -AU byteorder again if you want to mount it on another architecture later.

The preferred-minor and endianness problems are specific to superblock version 0.90. DSM cannot boot system volumes converted to superblock 1.2 (maybe because of the same dependence on the /dev/md0 name). More on the Linux RAID superblock can be found here and here. I have written a script to back up and restore the superblock.
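midenok's script itself is not reproduced in the thread; purely as a hedged illustration of the idea, here is a minimal sketch of saving and restoring a v0.90 superblock with dd. It assumes the v0.90 layout, where the 4 KiB superblock sits 64 KiB from the end of the member partition, rounded down to a 64 KiB boundary; the device and file names are examples.

#!/bin/bash
# Hypothetical helper: back up (and optionally restore) the md v0.90 superblock of one member.
DEV=/dev/sda1           # example member partition
BACKUP=sda1-sb090.bin   # example backup file
SIZE=$(blockdev --getsize64 "$DEV")
# v0.90 superblock: a 4 KiB block 64 KiB from the end, aligned down to a 64 KiB boundary
OFFSET=$(( SIZE / 65536 * 65536 - 65536 ))
# Back up the superblock
dd if="$DEV" of="$BACKUP" bs=4096 skip=$(( OFFSET / 4096 )) count=1
# Restore it later (uncomment only when you are sure the backup matches this partition):
# dd if="$BACKUP" of="$DEV" bs=4096 seek=$(( OFFSET / 4096 )) count=1 conv=notrunc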

Edited by midenok

  • 1 month later...

You just saved me from my 2FA problem. But sadly, when trying to access the DATA volume I used the following command: btrfs check --repair /dev/partitionhere - it screwed the volume up and I lost all my data anyway.
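A hedged word of caution for later readers (not part of the original post): btrfs check --repair is widely regarded as a last resort on a damaged filesystem, so it is usually safer to start with a read-only check and a read-only/recovery mount and copy the data off first. A minimal sketch; the device path is only an example (on DSM the data volume may be an md device or sit under LVM):

# Read-only check: reports problems without writing anything to the device
sudo btrfs check --readonly /dev/md2
# Try a read-only mount; usebackuproot falls back to an older tree root on recent kernels
sudo mount -o ro,usebackuproot /dev/md2 /mnt
# Copy off everything you care about before even considering --repair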

 

But the main thing is that your guide gave me access to the system partition, which was exactly where I needed to go. I could look for the file related to 2FA, and after removing/renaming it and booting back into normal mode I could log in to my DSM again, without the 2FA misconfiguration error I got after upgrading from 7.1.0 to 7.1.1.

 

So next time, if I happen to run into the same problem again, I will know what to do and will use your guide again!


  • 1 month later...

Could this method be used to re-enable a disabled SSH service? I can't get into DSM due to "permission denied"! Everything else is working as normal and is functional, I just can't log in to DSM.

Or maybe to add a new admin account, or enable an existing one?

I appreciate all the help.

Thanks!


  • 2 weeks later...
On 10/16/2022 at 10:07 PM, opty said:

Could this method be used to re-enable a disabled SSH service? I can't get into DSM due to "permission denied"! Or maybe to add a new admin account, or enable an existing one?

Do you mean you disabled SSH in the service tab? Disabling SSH there does not affect DSM itself - I have had SSH disabled there and DSM still works fine for me.

 

But I would say this method won't help much unless you are able to get the server to activate SSH on boot somehow.
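For completeness, a hedged sketch of how one might at least inspect the relevant configuration offline using the method from this thread. The exact file that controls SSH startup differs between DSM versions, so the grep is only a way to locate candidate files, not a confirmed fix; paths and device names are assumptions based on typical DSM layouts.

# Assemble and mount the DSM system partition from a live Linux system
sudo mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1
sudo mkdir -p /mnt/dsmroot
sudo mount /dev/md0 /mnt/dsmroot
# Look for configuration files that mention the ssh service before editing anything
sudo grep -ril "ssh" /mnt/dsmroot/etc /mnt/dsmroot/etc.defaults | head -n 20
# Unmount cleanly when finished
sudo umount /mnt/dsmroot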


Yes, SSH is disabled in DSM and it's not reachable; I am still on 6.2.

I am guessing my only choice is to upgrade to DSM 7, start over, and just keep my data. I do have a backup of my config, which is about 6 months old; that should probably help me get back with the least downtime and loss of services.

 

Thanks!
