
Broke DSM by removing a disk while in use.


haldi

Question

Howdy,

 

I recently added a 5th drive, initialised it and tried to create a volume. Due to too many disk failures, the volume couldn't be created.

Adding the drive during operation worked quite well.

Unplugging the drive during usage without removing it in DSM first didn't go so well.

I'm now stuck with a running NAS that has SSH enabled, but the web interface is down and I can't access it via Samba... I assume the DSM system partition, which is cloned onto each disk, is somehow ******* up. (Seriously... isn't it cloned onto ALL disks precisely so this DOESN'T happen? -.-)

 

 

So, does anyone have a disaster recovery guide handy that I could try, to get this stuff working again?

Replugging the 5th HDD did not help.

 

Any other ideas, apart from reinstalling DSM via the loader?

 

Here's the output from fdisk... just in case.

 

fdisk output  

 

ash-4.3# fdisk -l
Disk /dev/sda: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 6E64C596-1EEE-4E32-8713-2D8BE2212912

Device       Start         End     Sectors  Size Type
/dev/sda1     2048     4982527     4980480  2.4G Linux RAID
/dev/sda2  4982528     9176831     4194304    2G Linux RAID
/dev/sda3  9437184 11720840351 11711403168  5.5T Linux RAID


Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C40B80C5-1B54-4C4B-9DB8-BB7358FE965C

Device       Start         End     Sectors  Size Type
/dev/sdb1     2048     4982527     4980480  2.4G Linux RAID
/dev/sdb2  4982528     9176831     4194304    2G Linux RAID
/dev/sdb3  9437184 11720840351 11711403168  5.5T Linux RAID


Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FE50AFAD-F6F8-46D4-B69E-DBAFA5A43D39

Device       Start         End     Sectors  Size Type
/dev/sdc1     2048     4982527     4980480  2.4G Linux RAID
/dev/sdc2  4982528     9176831     4194304    2G Linux RAID
/dev/sdc3  9437184 11720840351 11711403168  5.5T Linux RAID


Disk /dev/sdd: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B7A01EEE-916D-4DE3-B17C-109626623B0D

Device       Start         End     Sectors  Size Type
/dev/sdd1     2048     4982527     4980480  2.4G Linux RAID
/dev/sdd2  4982528     9176831     4194304    2G Linux RAID
/dev/sdd3  9437184 11720840351 11711403168  5.5T Linux RAID


Disk /dev/sde: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xdc5adc5a

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sde1          2048   4982527   4980480   2.4G fd Linux raid autodetect
/dev/sde2       4982528   9176831   4194304     2G fd Linux raid autodetect
/dev/sde3       9437184 976568351 967131168 461.2G fd Linux raid autodetect


GPT PMBR size mismatch (102399 != 7860223) will be corrected by w(rite).
Disk /dev/synoboot: 3.8 GiB, 4024434688 bytes, 7860224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C94E55EA-A4D2-4E78-9D73-46CBAE7A03EF

Device         Start    End Sectors Size Type
/dev/synoboot1  2048  32767   30720  15M EFI System
/dev/synoboot2 32768  94207   61440  30M Linux filesystem
/dev/synoboot3 94208 102366    8159   4M BIOS boot


Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram0: 1.1 GiB, 1194328064 bytes, 291584 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/zram1: 1.1 GiB, 1194328064 bytes, 291584 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
ash-4.3#

 


 

P.S. This is running on an HP MicroServer Gen8.


10 answers to this question


If SSH is working, the system partition should be up?

You can use mdadm or

cat /proc/mdstat

to check the state of the RAID.
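For example (just a sketch; md0 and sda1 below are only placeholders, use whatever device names your own /proc/mdstat and fdisk output show):

mdadm --detail /dev/md0      # full state of one array (md0 is the DSM system partition)
mdadm --examine /dev/sda1    # what the superblock on a single member partition reports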

 

https://raid.wiki.kernel.org/index.php/Detecting,_querying_and_testing

 

I suggest reading a little before starting to repair anything, and a little practicing beforehand in a VM with VirtualBox can also help: just install an Ubuntu, add some (thin) disks, create a RAID with them, shut down, remove one or two disks, and try to recreate and repair your scenario before messing with your real disks/data.
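Something like this, for example (an untested sketch; loop-device image files stand in for the extra virtual disks VirtualBox would give you, and /dev/md9 plus the file names are just placeholders):

sudo apt-get install mdadm
for i in 0 1 2 3; do truncate -s 1G disk$i.img; done            # four fake 1 GB disks
LOOPS=$(for i in 0 1 2 3; do sudo losetup --find --show disk$i.img; done)
sudo mdadm --create /dev/md9 --level=5 --raid-devices=4 $LOOPS  # throwaway RAID5 for practice
cat /proc/mdstat                                                # wait until it shows [UUUU]
sudo mdadm --manage /dev/md9 --fail /dev/loop0                  # simulate a dying member (adjust to your loop device)
sudo mdadm --stop /dev/md9                                      # "reboot"
sudo mdadm --assemble --scan                                    # practice finding and reassembling the array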

 

 

Edited by IG-88


I've disconnected the 5th drive now. It shows this:


Haldi@NAS:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      2490176 blocks [12/4] [UUUU________]

unused devices: <none>

 

 



I think I've been unclear: DSM is still working in the background, and I just got a mail saying the NAS has suffered an improper shutdown. But I can't access anything:

 

[screenshot]

 

Using the top command in PuTTY doesn't really tell me anything, does it?

 

Top command  

 

top - 06:46:03 up 28 min,  1 user,  load average: 0.00, 0.01, 0.05
Tasks: 171 total,   1 running, 170 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
GiB Mem :    3.708 total,    3.267 free,    0.124 used,    0.318 buff/cache
GiB Swap:    4.225 total,    4.225 free,    0.000 used.    3.349 avail Mem

  PID USER      PR  NI    VIRT    RES  %CPU %MEM     TIME+ S COMMAND
 8395 root      20   0  229.9m   2.6m   0.7  0.1   0:01.09 S /usr/sbin/ovs-vswitchd --pi+
    1 root      20   0   23.6m   4.1m   0.0  0.1   0:02.68 S /sbin/init
    2 root      20   0    0.0m   0.0m   0.0  0.0   0:00.00 S [kthreadd]
    3 root      20   0    0.0m   0.0m   0.0  0.0   0:00.12 S [ksoftirqd/0]
    5 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [kworker/0:0H]
    7 root      rt   0    0.0m   0.0m   0.0  0.0   0:00.63 S [migration/0]
    8 root      20   0    0.0m   0.0m   0.0  0.0   0:00.00 S [rcu_bh]
    9 root      20   0    0.0m   0.0m   0.0  0.0   0:00.37 S [rcu_sched]
   10 root      rt   0    0.0m   0.0m   0.0  0.0   0:00.12 S [watchdog/0]
   11 root      rt   0    0.0m   0.0m   0.0  0.0   0:00.00 S [watchdog/1]
   12 root      rt   0    0.0m   0.0m   0.0  0.0   0:00.32 S [migration/1]
   13 root      20   0    0.0m   0.0m   0.0  0.0   0:00.16 S [ksoftirqd/1]
   15 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [kworker/1:0H]
   16 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [khelper]
   17 root      20   0    0.0m   0.0m   0.0  0.0   0:00.12 S [kdevtmpfs]
   18 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [netns]
  176 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [writeback]
  179 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [kintegrityd]
  180 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [bioset]
  181 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [crypto]
  183 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [kblockd]
  274 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [ata_sff]
  284 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [md]
  383 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [rpciod]
  384 root      20   0    0.0m   0.0m   0.0  0.0   0:00.06 S [kworker/1:1]
  429 root      20   0    0.0m   0.0m   0.0  0.0   0:00.00 S [khungtaskd]
  453 root      20   0    0.0m   0.0m   0.0  0.0   0:00.00 S [kswapd0]
  455 root      25   5    0.0m   0.0m   0.0  0.0   0:00.00 S [ksmd]
  460 root      20   0    0.0m   0.0m   0.0  0.0   0:00.00 S [fsnotify_mark]
  462 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [nfsiod]
 2875 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [iscsi_eh]
 2899 root       0 -20    0.0m   0.0m   0.0  0.0   0:00.00 S [kmpath_rdacd]

 


I can't access any folders in volume1.

ls command  

 


Haldi@NAS:/$ ls
1       dev           initrd  lib64       mnt   run     sys      usr           volume1
bin     etc           lib     lost+found  proc  sbin    tmp      var           volume2
config  etc.defaults  lib32   Media       root  Server  tmpRoot  var.defaults
Haldi@NAS:/$ cd /volume1/
Haldi@NAS:/volume1$ ls
@database

 


So something seems broken here, right?



The RAID device of the data volume is missing completely; there should be a /dev/md2 (your 5.5 TB array).

As expected, /dev/md0 (aka the DSM system partition) is there and working.

As the 5th disk was smaller than the other 4, initialising it in DSM should only have extended the RAID1 system arrays md0 and md1 onto it.

After that, the creation of a new simple (non-RAID) volume was tried and failed?

That should have no effect on md2 at all; maybe DSM tossed the complete mdadm configuration and only rebuilt the system devices md0 (DSM) and md1 (swap).

There are mdadm commands to auto-assemble and find "lost" devices, or to repair them.
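For example, roughly like this (only a sketch; md2 and the sdX3 partitions match the fdisk output above, but double-check everything against your own system first, and never use --create on existing data):

mdadm --examine --scan                                              # list every md superblock found on the partitions
mdadm --examine /dev/sda3                                           # inspect one member of the missing data array
mdadm --assemble --scan                                             # try to auto-assemble any array that is not running yet
mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3   # or assemble it explicitly from its members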

 

You can read up on this (or Google it):

https://raid.wiki.kernel.org/index.php/RAID_Recovery

https://www.thomas-krenn.com/en/wiki/Mdadm_recovery_and_resync

 

As I don't have much hands-on practice with this, I don't feel comfortable suggesting any specific way to go.

 



Oh, thanks... so I finally know what's wrong. Now to find a solution :smile:

 

 

On 9/16/2017 at 2:58 AM, sbv3000 said:

Read and work through these solutions and see if they help.

You can also try disconnecting your RAID drives, installing DSM on an unpartitioned drive attached to SATA channel 1, then reconnecting your RAID drives to SATA 2-n; DSM should find the volume and repair the failed system partitions.

 

I tried. The disks do show up as "Not Initialized" in Storage Manager; I was afraid this might delete data / format the disks if I tried to initialise them.

 

Seems like that also changed md0 -.o

mdstat  

 

root@NAS:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sda1[0]
      2490176 blocks [12/1] [U___________]

unused devices: <none>

 


 

 

mdadm  

 


root@NAS:~# mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 01200368:b851871a:3017a5a8:c86610be
  Creation Time : Tue Jun 20 21:27:38 2017
     Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
     Array Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 1
Preferred Minor : 0

    Update Time : Sat Sep 16 15:51:49 2017
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 11
  Spare Devices : 0
       Checksum : e8a17e1 - correct
         Events : 2191824


      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       0        0        4      faulty removed
   5     5       0        0        5      faulty removed
   6     6       0        0        6      faulty removed
   7     7       0        0        7      faulty removed
   8     8       0        0        8      faulty removed
   9     9       0        0        9      faulty removed
  10    10       0        0       10      faulty removed
  11    11       0        0       11      faulty removed

 


 

But the other part of md0 still seems to be here on sd[b-d]:

mdadm  

 


root@NAS:~# mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 01200368:b851871a:3017a5a8:c86610be
  Creation Time : Tue Jun 20 21:27:38 2017
     Raid Level : raid1
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
     Array Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sat Sep 16 02:44:37 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 8
  Spare Devices : 0
       Checksum : e8959e3 - correct
         Events : 2191052


      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
   4     4       0        0        4      faulty removed
   5     5       0        0        5      faulty removed
   6     6       0        0        6      faulty removed
   7     7       0        0        7      faulty removed
   8     8       0        0        8      faulty removed
   9     9       0        0        9      faulty removed
  10    10       0        0       10      faulty removed
  11    11       0        0       11      faulty removed

 


root@NAS:~# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Jun 20 21:27:38 2017
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 12
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Sep 16 16:05:55 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 01200368:b851871a:3017a5a8:c86610be
         Events : 0.2191868

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
root@NAS:~# mdadm -D /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat Sep 16 06:17:47 2017
     Raid Level : raid1
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Sep 16 15:38:13 2017
          State : active, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : ff236cd2:8e9de823:cced5de7:ca715931 (local to host NAS)
         Events : 0.37

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed

 


 

Whatever.... here comes the interesting part:

mdadm  

 

root@NAS:~# mdadm --examine /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 27a22ad9:7a626d92:32b81e19:046fae05
           Name : NAS:2  (local to host NAS)
  Creation Time : Tue Jun 20 22:03:51 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 11711401120 (5584.43 GiB 5996.24 GB)
     Array Size : 17567101632 (16753.29 GiB 17988.71 GB)
  Used Dev Size : 11711401088 (5584.43 GiB 5996.24 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : f3c2c38b:e1282260:aa46f570:4d15998f

    Update Time : Fri Sep 15 17:39:06 2017
       Checksum : 813f1b3a - correct
         Events : 345

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@NAS:~# mdadm --examine /dev/sdb3
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 27a22ad9:7a626d92:32b81e19:046fae05
           Name : NAS:2  (local to host NAS)
  Creation Time : Tue Jun 20 22:03:51 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 11711401120 (5584.43 GiB 5996.24 GB)
     Array Size : 17567101632 (16753.29 GiB 17988.71 GB)
  Used Dev Size : 11711401088 (5584.43 GiB 5996.24 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 5abdd8b2:18dc944d:c856dde5:291bc147

    Update Time : Fri Sep 15 17:39:06 2017
       Checksum : c2d6ded3 - correct
         Events : 345

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@NAS:~# mdadm --examine /dev/sdc3
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 27a22ad9:7a626d92:32b81e19:046fae05
           Name : NAS:2  (local to host NAS)
  Creation Time : Tue Jun 20 22:03:51 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 11711401120 (5584.43 GiB 5996.24 GB)
     Array Size : 17567101632 (16753.29 GiB 17988.71 GB)
  Used Dev Size : 11711401088 (5584.43 GiB 5996.24 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : da180d92:638383c3:1b831ee0:6973934b

    Update Time : Fri Sep 15 17:39:06 2017
       Checksum : 160d6633 - correct
         Events : 345

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@NAS:~# mdadm --examine /dev/sdd3
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 27a22ad9:7a626d92:32b81e19:046fae05
           Name : NAS:2  (local to host NAS)
  Creation Time : Tue Jun 20 22:03:51 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 11711401120 (5584.43 GiB 5996.24 GB)
     Array Size : 17567101632 (16753.29 GiB 17988.71 GB)
  Used Dev Size : 11711401088 (5584.43 GiB 5996.24 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : c7641066:6446c666:3fafd1df:cce42ab5

    Update Time : Fri Sep 15 17:39:06 2017
       Checksum : f69e12a8 - correct
         Events : 345

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 


So it's still there....

 

According to https://www.linuxquestions.org/questions/linux-general-1/recovering-mdadm-superblocks-713234/

mdadm --assemble /dev/md# --uuid=<UUID>

Might be a starting point :smile:

mdadm  

 

root@NAS:~# mdadm --assemble /dev/md2 --uuid=27a22ad9:7a626d92:32b81e19:046fae05
mdadm: /dev/md2 has been started with 4 drives.
root@NAS:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      17567101632 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sda1[0]
      2490176 blocks [12/1] [U___________]

unused devices: <none>
root@NAS:~# mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Jun 20 22:03:51 2017
     Raid Level : raid5
     Array Size : 17567101632 (16753.29 GiB 17988.71 GB)
  Used Dev Size : 5855700544 (5584.43 GiB 5996.24 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Sep 15 17:39:06 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : NAS:2  (local to host NAS)
           UUID : 27a22ad9:7a626d92:32b81e19:046fae05
         Events : 345

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3

 


 

The thing is... it does not persist after a reboot -.-

Does anyone have an idea?



root@NAS:~# btrfs check -P /dev/md2
Syno caseless feature on.
Checking filesystem on /dev/md2
UUID: 7e770343-bfb9-464e-9041-fd9859982905
checking extents
   complete...
   complete...
checking free space cache
cache and super generation don't match, space cache will be invalidated
checking fs roots
   complete...
checking csums
checking root refs
found 8894990880768 bytes used err is 0
total csum bytes: 8641506676
total tree bytes: 9901031424
total fs tree bytes: 765607936
total extent tree bytes: 140083200
btree space waste bytes: 402215712
file data blocks allocated: 9699169488896
 referenced 8974377103360

Seems like the filesystem is not defective. Or at least not volume1 with the RAID 5...

The problem must be somewhere else. It seems like it's not even loading the correct RAID configuration...
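For what it's worth, on a plain Linux mdadm setup you would normally make the assembled array persistent by recording it in mdadm.conf; whether DSM keeps or regenerates that file at boot is an assumption here, so treat this only as a generic-Linux sketch:

mdadm --assemble /dev/md2 --uuid=27a22ad9:7a626d92:32b81e19:046fae05   # bring the data array back
mdadm --detail --scan | grep /dev/md2 >> /etc/mdadm.conf               # record it (path may be /etc/mdadm/mdadm.conf on other distros)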

 

