Polanskiman

Tutorial: How to access DSM's Data & System partitions


If you can't access your Xpenology box but you still wish to try and 'fix' some configuration files, or perhaps you want to finally make that backup you should have made before fiddling with the root user, then you can access the contents of the system and data partitions through a Live Ubuntu CD (or whatever Unix-flavoured OS you desire). Here is how:

 

1 - Make a Live Ubuntu USB drive. A persistent Live Ubuntu USB drive is more convenient, but it is not required for this tutorial and would just complicate things unnecessarily.

 

2 - Once you're done burning Ubuntu to the USB flash drive, plug it into your Xpenology box and boot from it.

 

3 - Once in Ubuntu, launch Terminal. You will need to be root first, so type:

sudo -i
 

4 - Now update the package lists and install mdadm and lvm2. On a live session the package lists are usually empty, so run the update first or the install will fail with a 'could not find host' error:

apt-get update
apt-get install mdadm lvm2
You should get the following Postfix Configuration menu:

[Screenshots: Postfix Configuration menu selections]

Select as shown in the pictures above.

 

 

If you wish to mount the data partition alone then proceed with the following command:

 

5 - To mount the data partition, assemble the array and activate the volume group with this single command:

mdadm -Asf && vgchange -ay
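On most DSM setups the data volume sits on LVM, so after the arrays are assembled you still need to mount the logical volume. A minimal sketch, assuming the usual DSM default names `vg1000`/`lv` (check the output of `lvs` for the actual names on your system; on a basic volume without LVM you would mount the `/dev/mdX` device directly):

```shell
# Mount the activated DSM data volume read-only.
# /dev/vg1000/lv is the typical DSM default; verify with `lvs`.
VOL=/dev/vg1000/lv
MNT=/mnt/dsm-data
mkdir -p "$MNT"
if [ -e "$VOL" ]; then
    mount -o ro "$VOL" "$MNT"   # read-only is safest for recovery
else
    echo "volume $VOL not found - check the output of lvs"
fi
```

Mounting read-only avoids accidental writes while you copy your data off.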
 

 

If you also wish to mount the system partition then proceed with the following commands (adapt to your case accordingly):

 

6 - Next, you need to check your RAID array and the partitioning of your drives:

fdisk -l | grep /dev/sd
 

In my case I see the following. Note that I only have 2 drives, /dev/sda and /dev/sdb:

root@server:/etc.defaults# fdisk -l | grep /dev/sd
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sda1           256    4980735    4980480  2.4G fd Linux raid autodetect
/dev/sda2       4980736    9175039    4194304    2G fd Linux raid autodetect
/dev/sda3       9437184 3907015007 3897577824  1.8T  f W95 Ext'd (LBA)
/dev/sda5       9453280 3907015007 3897561728  1.8T fd Linux raid autodetect
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sdb1           256    4980735    4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4980736    9175039    4194304    2G fd Linux raid autodetect
/dev/sdb3       9437184 3907015007 3897577824  1.8T  f W95 Ext'd (LBA)
/dev/sdb5       9453280 3907015007 3897561728  1.8T fd Linux raid autodetect
 

System partitions are the ones labeled sda1 and sdb1. If you have more drives in the array, subsequent system partitions will probably be called sdc1, sdd1, and so on. You get the point.
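Since the system partition is the first, roughly 2.4G, 'Linux raid autodetect' partition on each drive, you can pull the candidate device names out of the fdisk listing programmatically. A small sketch using a sample in the same format as the output above (on a live system you would pipe the real `fdisk -l` output instead of the here-string):

```shell
# Extract the first partition of each drive from fdisk-style output.
# The sample mirrors the listing above; adjust to your drives.
fdisk_sample='
/dev/sda1           256    4980735    4980480  2.4G fd Linux raid autodetect
/dev/sda2       4980736    9175039    4194304    2G fd Linux raid autodetect
/dev/sdb1           256    4980735    4980480  2.4G fd Linux raid autodetect
/dev/sdb2       4980736    9175039    4194304    2G fd Linux raid autodetect
'
printf '%s\n' "$fdisk_sample" | awk '$1 ~ /^\/dev\/sd[a-z]1$/ {print $1}'
```

This prints one system-partition device per drive, ready to paste into the mdadm commands below.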

 

7 - Once you have identified all the system partitions, you can examine the foreign-endian array members by issuing (this is for my case, with 2 drives):

mdadm -Ee0.swap /dev/sda1 /dev/sdb1
 

If you have 3 drives then add /dev/sdc1 as well. You get the idea.

 

8 - Finally, assemble the array and fix the byte order (this is for my case, with 2 drives):

mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1
 

Same comment as for the previous command: add any additional system partitions that you may have. Note the /dev/md0 device name; that's correct, not a mistake.
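Note that `mdadm -AU` only assembles `/dev/md0`; you still have to mount it before you can browse the files. A minimal sketch, with `/mnt/dsm-system` as an arbitrary mount-point name:

```shell
# Mount the assembled system array and peek at the DSM VERSION file.
MNT=/mnt/dsm-system
mkdir -p "$MNT"
if mount /dev/md0 "$MNT" 2>/dev/null; then
    cat "$MNT/etc.defaults/VERSION"
else
    echo "/dev/md0 is not assembled (or is already mounted)"
fi
```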

 

Your system partition should now be accessible and you can navigate through the system files. Simply unmount the drives and shut down the machine when you are done. If for some reason you need to reboot and want to access the partitions again, you will need to re-install mdadm and lvm2 because the Live Ubuntu USB is not persistent.
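The clean-up before shutting down can be sketched as follows (the mount points are the example names used in this session; the `|| true` guards just keep the script going when a step does not apply):

```shell
# Unmount anything we mounted, deactivate LVM and stop the md arrays.
umount /mnt/dsm-system 2>/dev/null || true
umount /mnt/dsm-data 2>/dev/null || true
vgchange -an 2>/dev/null || true         # deactivate volume groups
mdadm --stop --scan 2>/dev/null || true  # stop all assembled arrays
msg="clean-up done, safe to power off"
echo "$msg"
```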

 

---------

Reference:

https://www.synology.com/en-global/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC

http://xpenology.com/forum/viewtopic.php?f=2&t=22100&p=83631&hilit=version#p83631

http://xpenology.com/forum/viewtopic.php?f=2&t=20216&p=74659&hilit=mdadm#p74659 >> Thanks to Jun



Amazing write-up!

 

Thank you for taking the time to put it together for the community.


Hi Polanskiman,

 

Thanks for your tutorial on how to recover the data. It saved me. I wonder if you know how to mount the root or system partition?

 

Thanks in advance

Hi Polanskiman,

 

Thanks for your tutorial on how to recover the data. It saved me. I wonder if you know how to mount the root or system partition?

 

Thanks in advance

 

I think it's all in the tutorial.


Sorry Polanskiman. I was confused by your link https://www.synology.com/en-global/know ... using_a_PC. I thought it was the same as your write-up. I am so happy that I managed to recover the data. I am new to Xpenology and trying to fix my first bricked Xpenology box. I've been googling and reading a lot of threads. Since I have already successfully copied the data, I am now trying to fix the box by downgrading DSM. I will try to mount the system partition following your tutorial. Sorry for the confusion.


So I attempted to do this, and there are a few points missing.

 

1. apt-get install mdadm lvm2 failed with a 'could not find host' error.

2. To fix that, you must run sudo apt-get update first, before attempting anything in this article, or else it will not work.

3. When running these commands from a live Ubuntu USB, sudo was needed, or else a read-only failure happens every single time. Once the disk was mountable, I made changes to @updates/VERSION and switched it from major 6, minor 1 to major 6, minor 0.


Just doing

sudo su

once after login and typing the password will make you root, and you can do all the stuff without typing sudo before every command.

So I attempted to do this, and there are a few points missing.

 

1. apt-get install mdadm lvm2 failed with a 'could not find host' error.

2. To fix that, you must run sudo apt-get update first, before attempting anything in this article, or else it will not work.

3. When running these commands from a live Ubuntu USB, sudo was needed, or else a read-only failure happens every single time. Once the disk was mountable, I made changes to @updates/VERSION and switched it from major 6, minor 1 to major 6, minor 0.

 

I thought that being root was an obvious prerequisite before anything else. I have edited the tutorial accordingly for the layman.


Is this also possible with Btrfs?

Does Ubuntu support that filesystem, or is it only supported by DSM/Synology?

15 hours ago, nevusZ said:

Is this also possible with Btrfs?

Does Ubuntu support that filesystem, or is it only supported by DSM/Synology?

 

Yes, Btrfs should be fine too.

Here is a quote from the Synology link provided in the references:

Quote

Note: Please make sure the file system running on the hard drives of your Synology NAS are EXT4 or Btrfs.

 


Dear all!

I tried to remount SHR volumes 1 and 2 on my N40L using a live Linux USB key.

(Because after successfully migrating from 5.2 to DSM 6.0.2-8451 Update 11 and running it for 2 weeks, I tried to add an RTL8111 NIC card, which broke DSM.)

(A fresh install with this additional NIC card works without any trouble with the same loader and DSM version.)

 

So I was able to mount volume 1 (two 1TB HDDs of the same size) following the Synology tutorial. No particular issue.

 

But with volume 2 (made of 2 x 2TB HDDs + 1 x 1TB HDD = 3TB of data in SHR), it looks too hard to recreate the LVM by simply running the command "mdadm -Asf && vgchange -ay".

I get the message

"warning device for pv not found or rejected by a filter"

and the LVM stays "inactive".

 

Do you have any idea how to mount drives the way SHR does when the HDDs are not the same size?

 

I tried many things but I'm not enough of an expert to identify what to mount.
Maybe a friend of mine will help me this weekend, but your help would be a big plus.


Thanks,


Hi!

 

I've made an update and played with VirtualBox and the virtual machine package. After a reboot, I lost my network connection.

 

I'm trying to mount the system partition to fix this problem.

With Ubuntu 17.04, I ran these commands:

root@ubuntu:~# fdisk -l | grep /dev/sd
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/sda1       2048    4982527    4980480  2.4G Linux RAID
/dev/sda2    4982528    9176831    4194304    2G Linux RAID
/dev/sda3    9437184 7813832351 7804395168  3.6T Linux RAID
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/sdb1       2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2    4982528    9176831    4194304    2G Linux RAID
/dev/sdb3    9437184 7813832351 7804395168  3.6T Linux RAID
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/sdc1       2048    4982527    4980480  2.4G Linux RAID
/dev/sdc2    4982528    9176831    4194304    2G Linux RAID
/dev/sdc3    9437184 7813832351 7804395168  3.6T Linux RAID
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/sdd1       2048    4982527    4980480  2.4G Linux RAID
/dev/sdd2    4982528    9176831    4194304    2G Linux RAID
/dev/sdd3    9437184 7813832351 7804395168  3.6T Linux RAID
Disk /dev/sde: 1.8 TiB, 2000397852160 bytes, 3907027055 sectors
/dev/sde1       2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sde2    4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sde3    9437184 3906822239 3897385056  1.8T fd Linux raid autodetect
Disk /dev/sdf: 14.6 GiB, 15640592384 bytes, 30548032 sectors
/dev/sdf1    *     0    3142655    3142656  1.5G  0 Empty
/dev/sdf2    3118960    3123567       4608  2.3M ef EFI (FAT-12/16/32)
root@ubuntu:~# mdadm -Ee0.swap /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdc1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got fc4e2ba9)
/dev/sde1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : aaeda6d3:938f2ebb:15fbf0a3:140e7d59
  Creation Time : Sat Jan  1 00:00:05 2000
     Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
     Array Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 5
Preferred Minor : 0

    Update Time : Sat Dec  2 09:08:38 2017
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 7
  Spare Devices : 0
       Checksum : a47f6fae - correct
         Events : 759060


      Number   Major   Minor   RaidDevice State
this     4       8       81        4      active sync   /dev/sdf1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       0        0        5      faulty removed
   6     6       0        0        6      faulty removed
   7     7       0        0        7      faulty removed
   8     8       0        0        8      faulty removed
   9     9       0        0        9      faulty removed
  10    10       0        0       10      faulty removed
  11    11       0        0       11      faulty removed
root@ubuntu:~# mdadm -AU byteorder /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got fc4e2ba9)
mdadm: no RAID superblock on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted

Please, could you help me?

On 12/2/2017 at 5:09 PM, cinpou said:

Hi!

 

I've made an update and played with VirtualBox and the virtual machine package. After a reboot, I lost my network connection.

 

I'm trying to mount the system partition to fix this problem.

With Ubuntu 17.04, I ran these commands:

Please, could you help me?

 

What are those first 4 drives (sda, sdb, sdc and sdd)? Are they part of a RAID configuration?


@cinpou I ran into this same issue, and it seems to be specific to 'Linux RAID' partitions. Although I wasn't able to figure out how to mount the first system partition on every disk, I was able to zero out (wipe) the system partition, which effectively removes the configuration file that specifies the DSM version you have to use.

 

Say you have /dev/sda with the 3 partitions described above; use the following command to zero out the first (system) partition:

 

dd if=/dev/zero of=/dev/sda1

WARNING: You ONLY want to zero out the first partition; pointing dd at the base disk (e.g. /dev/sda) will zero out the whole disk.

 

Once you boot back into your Synology NAS OS it will show a warning that you have a degraded disk, all you need to do is repair it at that point and everything is good to go. 
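To see what the zero-fill does without risking a real disk, you can rehearse it on a scratch file first; only the `of=` target differs from the real command above (the file name here is arbitrary):

```shell
# Rehearse the wipe on a 1 MiB scratch file instead of a partition.
scratch=/tmp/fake-system-partition
printf 'pretend DSM system data' > "$scratch"
truncate -s 1M "$scratch"                     # pad to 1 MiB
dd if=/dev/zero of="$scratch" bs=1M count=1 conv=notrunc 2>/dev/null
# every byte is now zero; cmp against /dev/zero exits 0 if so
cmp -n 1048576 "$scratch" /dev/zero && echo "fully zeroed"
```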

 


Hello,

 

First, thanks for the tutorial.

 

To access /etc.defaults/VERSION, do I need to mount the data partition or the system partition?

 

Thanks

29 minutes ago, arkim said:

Hello,

 

First, thanks for the tutorial.

 

To access /etc.defaults/VERSION, do I need to mount the data partition or the system partition?

 

Thanks

 

If what you want is to modify the VERSION file, then you only need to mount the system partition.

10 minutes ago, Polanskiman said:

 

If what you want is to modify the VERSION file, then you only need to mount the system partition.

 

Yes, I need to downgrade. I didn't pay enough attention to your warning ;-)

 

Merci 


I was looking at this because I had the thought of using Clonezilla to clone the system partition before a DSM update; then I'd have an easy way to roll back in case of a failure or a bricked installation.

 

So looking at all the partitions, I assume sdb1, sdc1, sdd1, etc. are the system partitions in my system here? The 2.4G partition on each drive, basically?

 

I've only got 1 volume across 4 x 5TB drives, so why do I have a 1.8TB and a 931GB partition on every drive?

 

What's the 2GB partition for?

 

Thanks

 

[Screenshot: partition layout]

11 hours ago, captainfred said:

So looking at all the partitions, I assume sdb1, sdc1, sdd1, etc. are the system partitions in my system here? The 2.4G partition on each drive, basically?

Correct. See FAQs.

 

11 hours ago, captainfred said:

I've only got 1 volume across 4 x 5TB drives, so why do I have a 1.8TB and a 931GB partition on every drive?

I believe the max partition size under ext3 is 2TB. Therefore the free space is split into 3 partitions: 2 x 1.8TB and 1 x 931.5GB.

 

11 hours ago, captainfred said:

What's the 2GB partition for?

That is the swap partition. See FAQs

5 hours ago, Polanskiman said:

Correct. See FAQs.

 

I believe the max partition size under ext3 is 2TB. Therefore the free space is split into 3 partitions: 2 x 1.8TB and 1 x 931.5GB.

 

That is the swap partition. See FAQs

 

Thanks. So if I clone the 2.4G partition, I have a rollback if things go wrong? Would I also need to clone the 2G swap partition?

