XPEnology Community

Trouble accessing my volumes from Ubuntu


vic1707

Question

Hi, 

First, thanks to all the devs for XPEnology. Second, sorry for my English, I'm French (I couldn't find an answer on the French hub). And finally, thanks in advance for helping me.

 

So 3 days ago my NAS suddenly became unreachable over the network (the Synology GUI, Plex, and JDownloader via Docker).

I posted about it and the answer was that DSM had pushed an update and the loader was getting stuck. They said I had to update the loader: I tried it, but my NAS no longer showed up on the network (and of course it was unavailable on find.synology.com). So I tried to reinstall my original loader. Finally, the NAS showed up with the Repair option. After a reboot, find.synology.com said I had to reinstall DSM, and under that it said: "We detected that the hard drives of your current DS3617xs were moved from a previous DS3617xs, and installing a newer DSM is required before continuing" (thanks Google Translate). That message really scared me, and the person trying to help me suggested accessing the drives from Ubuntu.

 

After that I booted a live Ubuntu and installed mdadm and lvm2.
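From memory, installing them was just the standard Ubuntu packages, something like:

root@ubuntu:~# apt-get update
root@ubuntu:~# apt-get install -y mdadm lvm2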

root@ubuntu:~# mdadm -Asf && vgchange -ay
mdadm: /dev/md0 has been started with 3 drives (out of 12).
mdadm: /dev/md/2 has been started with 3 drives.
mdadm: /dev/md/3 has been started with 3 drives.

root@ubuntu:~# fdisk -l | grep /dev/sd
Disk /dev/sda: 149.1 GiB, 160041885696 bytes, 312581808 sectors
/dev/sda1          2048   4982527   4980480   2.4G fd Linux raid autodetect
/dev/sda2       4982528   9176831   4194304     2G fd Linux raid autodetect
/dev/sda3       9437184 312376991 302939808 144.5G fd Linux raid autodetect
Disk /dev/sdb: 232.9 GiB, 250000000000 bytes, 488281250 sectors
/dev/sdb1          2048   4982527   4980480   2.4G fd Linux raid autodetect
/dev/sdb2       4982528   9176831   4194304     2G fd Linux raid autodetect
/dev/sdb3       9437184 488076447 478639264 228.2G fd Linux raid autodetect
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
/dev/sdc1          2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sdc2       4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sdc3       9437184 1953320351 1943883168 926.9G fd Linux raid autodetect
Disk /dev/sdd: 465.8 GiB, 500107862016 bytes, 976773168 sectors
/dev/sdd1          2048   4982527   4980480   2.4G fd Linux raid autodetect
/dev/sdd2       4982528   9176831   4194304     2G fd Linux raid autodetect
/dev/sdd3       9437184 976568351 967131168 461.2G fd Linux raid autodetect
Disk /dev/sde: 149.1 GiB, 160041885696 bytes, 312581808 sectors
/dev/sde1          2048   4982527   4980480   2.4G fd Linux raid autodetect
/dev/sde2       4982528   9176831   4194304     2G fd Linux raid autodetect
/dev/sde3       9437184 312376991 302939808 144.5G fd Linux raid autodetect
Disk /dev/sdf: 232.9 GiB, 250000000000 bytes, 488281250 sectors
/dev/sdf1          2048   4982527   4980480   2.4G fd Linux raid autodetect
/dev/sdf2       4982528   9176831   4194304     2G fd Linux raid autodetect
/dev/sdf3       9437184 488076447 478639264 228.2G fd Linux raid autodetect
Disk /dev/sdg: 7.6 GiB, 8178892800 bytes, 15974400 sectors
/dev/sdg1  *     2048 15974399 15972352  7.6G  c W95 FAT32 (LBA)
Disk /dev/sdh: 1.8 TiB, 2000365289472 bytes, 3906963456 sectors
/dev/sdh1         256 3906959804 3906959549  1.8T  b W95 FAT32

Looking at this, drives g and h are the live Ubuntu USB stick and an external hard drive.

After this I have access to /dev/md0, which seems to be the system partition, and to /dev/md/3, which is my second volume.
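To see what is actually sitting on each assembled array before trying to mount it, I can run read-only checks like these from the live session (device names are the ones from the messages above; cat /proc/mdstat lists the real ones if they differ):

root@ubuntu:~# lsblk -f /dev/md0 /dev/md/2 /dev/md/3
root@ubuntu:~# blkid /dev/md0 /dev/md/2 /dev/md/3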

/dev/md/2 is inaccessible from the Files app, with this message:

Error mounting /dev/md2 at /media/ubuntu/2019.01.08-17:38:26v15217: wrong fs type, bad superblock on /dev/md2, missing codepage or helper program, or other error

So I tried the other commands to mount it (drives b, c, and d are my first RAID, holding volume1 and the system partition). The only partitions of these drives with superblocks are the second ones.
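For completeness, the same examine command could also be pointed at the third partitions, which is where the data arrays normally live on a Synology layout (I am only showing the command here, not its output):

root@ubuntu:~# mdadm --examine /dev/sdb3 /dev/sdc3 /dev/sdd3

Here is what I get on the second partitions: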

 

root@ubuntu:~# mdadm -Ee0.swap /dev/sdb2 /dev/sdc2 /dev/sdd2
/dev/sdb2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 616f70ec:fe0b3da8:8d2eb468:7e40dd84
  Creation Time : Tue Jan  8 17:03:35 2019
     Raid Level : raid1
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 6
Preferred Minor : 1

    Update Time : Mon Jan 21 18:44:35 2019
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 6
  Spare Devices : 0
       Checksum : ccb07e32 - correct
         Events : 162


      Number   Major   Minor   RaidDevice State
this     5       8       18        5      active sync   /dev/sdb2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       66        1      active sync   /dev/sde2
   2     2       8       50        2      active sync   /dev/sdd2
   3     3       8        2        3      active sync   /dev/sda2
   4     4       8       82        4      active sync   /dev/sdf2
   5     5       8       18        5      active sync   /dev/sdb2
   6     6       0        0        6      faulty removed
   7     7       0        0        7      faulty removed
   8     8       0        0        8      faulty removed
   9     9       0        0        9      faulty removed
  10    10       0        0       10      faulty removed
  11    11       0        0       11      faulty removed
/dev/sdc2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 616f70ec:fe0b3da8:8d2eb468:7e40dd84
  Creation Time : Tue Jan  8 17:03:35 2019
     Raid Level : raid1
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 6
Preferred Minor : 1

    Update Time : Mon Jan 21 18:44:35 2019
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 6
  Spare Devices : 0
       Checksum : ccb07e4c - correct
         Events : 162


      Number   Major   Minor   RaidDevice State
this     2       8       50        2      active sync   /dev/sdd2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       66        1      active sync   /dev/sde2
   2     2       8       50        2      active sync   /dev/sdd2
   3     3       8        2        3      active sync   /dev/sda2
   4     4       8       82        4      active sync   /dev/sdf2
   5     5       8       18        5      active sync   /dev/sdb2
   6     6       0        0        6      faulty removed
   7     7       0        0        7      faulty removed
   8     8       0        0        8      faulty removed
   9     9       0        0        9      faulty removed
  10    10       0        0       10      faulty removed
  11    11       0        0       11      faulty removed
/dev/sdd2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 616f70ec:fe0b3da8:8d2eb468:7e40dd84
  Creation Time : Tue Jan  8 17:03:35 2019
     Raid Level : raid1
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 6
Preferred Minor : 1

    Update Time : Mon Jan 21 18:44:35 2019
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 6
  Spare Devices : 0
       Checksum : ccb07e38 - correct
         Events : 162


      Number   Major   Minor   RaidDevice State
this     0       8       34        0      active sync   /dev/sdc2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       66        1      active sync   /dev/sde2
   2     2       8       50        2      active sync   /dev/sdd2
   3     3       8        2        3      active sync   /dev/sda2
   4     4       8       82        4      active sync   /dev/sdf2
   5     5       8       18        5      active sync   /dev/sdb2
   6     6       0        0        6      faulty removed
   7     7       0        0        7      faulty removed
   8     8       0        0        8      faulty removed
   9     9       0        0        9      faulty removed
  10    10       0        0       10      faulty removed
  11    11       0        0       11      faulty removed
root@ubuntu:~# mdadm -AU byteorder /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
root@ubuntu:~# 

Then I get this. I really don't know what to do to recover the data.
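If I understand the "busy - skipping" messages correctly, those partitions are probably already claimed by an array that is running, which I should be able to confirm with read-only commands like these (checking whichever md devices /proc/mdstat actually lists):

root@ubuntu:~# cat /proc/mdstat
root@ubuntu:~# mdadm --detail /dev/md0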

Please help me!


6 answers to this question



So I would like to know: am I missing something on the Ubuntu side?

If I install a new DSM on another hard drive and I reconnect the three disks of my volume 1, will it be detected?

Should I just install a new DSM without worrying about the message that scared me?

I just need to recover the data on the RAID; I don't care about the configuration or the system files.



Thank you for your answer. The importance of the data is what makes me ask: nothing should happen to it as long as I don't click on delete or something like that, right? You'll laugh, or you won't believe me, it's that big: the 2 TB disk is my backup disk, and I literally formatted it 8 hours before the DSM crash to switch it from exFAT to NTFS, telling myself "I have time, I'll put the backup back tomorrow."

