andyrav

Space is Degraded (DSM 4.3-3827 update 1)


Hi, I need some urgent help please.

I have replaced a faulty disk with a new one, and after it has finished its repair/parity checks etc. it returns to

"The Space is Degraded."

All my drives' status shows Normal and the SMART checks are OK.

It then does not offer the option to repair again, as Manage is grayed out.

Running on an HP N54L.

Any ideas?

Cheers


XPEnology always has difficulty sorting out the proper disk slots; take a look at them, but proceed with caution, since rearranging your disks may leave your DATA not accessible at all.

Also bear in mind that the new disk has to be equal to or bigger than the old one.

It is also possible that another old disk is not stable; running a full SMART test on each disk, one by one, may reveal the problem.
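For the one-by-one SMART tests, something like this sketch can generate the commands to paste in. The disk names sda..sdd match this thread's box, and the `-d ata` flag is an assumption that often helps on DSM's drivers:

```shell
# Sketch: print the long-SMART-test command for each member disk, so they can
# be run one at a time. Names and the -d ata flag are assumptions; adjust to
# your own layout.
smart_long_cmds() {
    for d in "$@"; do
        echo "smartctl -d ata -t long /dev/$d"
    done
}
smart_long_cmds sda sdb sdc sdd
# Later, read each result with: smartctl -a /dev/<disk>
```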


Thanks, running a full SMART test now. Any idea what command I need to run to fix it?

I have a backup, but it would be a lot easier to fix than to reinstall.

cheers


Could you please log in to your DSM via SSH as user root (the password is the same as the admin account's).

Then execute the following commands and post the output here using a CODE BBCode:

 

# fdisk -l
# cat /proc/mdstat

(The hashes just indicate the commands are executed as root. Do not type them!)

 

Please also tell us which HDDs you are using and in which configuration (which disk in which slot).

 

DSM sometimes has a problem with rebuilds: the Web UI says it is ready, but mdstat shows it is still resyncing.
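A quick way to check the real rebuild state, independent of the Web UI, is a small helper like this sketch (the function name is mine; the file argument defaults to the live /proc/mdstat but accepts a saved copy):

```shell
# Report whether any md array is still resyncing/recovering.
# Exits 0 and prints the matching line(s) if a rebuild is running,
# otherwise prints a note and exits 1.
rebuild_running() {   # usage: rebuild_running [mdstat-file]
    grep -E 'resync|recovery' "${1:-/proc/mdstat}" && return 0
    echo "no rebuild running"
    return 1
}
```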


DiskStation> fdisk -l

 

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes

255 heads, 63 sectors/track, 243201 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sda1 1 311 2490240 fd Linux raid autodetect

Partition 1 does not end on cylinder boundary

/dev/sda2 311 572 2097152 fd Linux raid autodetect

Partition 2 does not end on cylinder boundary

/dev/sda3 588 243201 1948788912 f Win95 Ext'd (LBA)

/dev/sda5 589 30401 239464864 fd Linux raid autodetect

/dev/sda6 30402 182401 1220931952 fd Linux raid autodetect

/dev/sda7 182402 243201 488367952 fd Linux raid autodetect

 

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes

255 heads, 63 sectors/track, 182401 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdb1 1 311 2490240 fd Linux raid autodetect

Partition 1 does not end on cylinder boundary

/dev/sdb2 311 572 2097152 fd Linux raid autodetect

Partition 2 does not end on cylinder boundary

/dev/sdb3 588 182401 1460412912 f Win95 Ext'd (LBA)

/dev/sdb5 589 30401 239464864 fd Linux raid autodetect

/dev/sdb6 30402 182401 1220931952 fd Linux raid autodetect

 

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes

255 heads, 63 sectors/track, 243201 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdc1 1 311 2490240 fd Linux raid autodetect

Partition 1 does not end on cylinder boundary

/dev/sdc2 311 572 2097152 fd Linux raid autodetect

Partition 2 does not end on cylinder boundary

/dev/sdc3 588 243201 1948788912 f Win95 Ext'd (LBA)

/dev/sdc5 589 30401 239464864 fd Linux raid autodetect

/dev/sdc6 30402 182401 1220931952 fd Linux raid autodetect

/dev/sdc7 182402 243201 488367952 fd Linux raid autodetect

 

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes

255 heads, 63 sectors/track, 243201 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdd1 1 311 2490240 fd Linux raid autodetect

Partition 1 does not end on cylinder boundary

/dev/sdd2 311 572 2097152 fd Linux raid autodetect

Partition 2 does not end on cylinder boundary

/dev/sdd3 588 243201 1948788912 f Win95 Ext'd (LBA)

/dev/sdd5 589 30401 239464864 fd Linux raid autodetect

/dev/sdd6 30402 182401 1220931952 fd Linux raid autodetect

/dev/sdd7 182402 243201 488367952 fd Linux raid autodetect

 

Disk /dev/sdu: 1929 MB, 1929379840 bytes

255 heads, 63 sectors/track, 234 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdu1 * 1 2 16033+ 83 Linux

 

Disk /dev/sdv: 1000.2 GB, 1000204886016 bytes

255 heads, 63 sectors/track, 121601 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Device Boot Start End Blocks Id System

/dev/sdv1 1 121602 976760832 7 HPFS/NTFS

Note: sector size is 4096 (not 512)

 

Disk /dev/sdw: 1500.3 GB, 1500301905920 bytes

255 heads, 63 sectors/track, 22800 cylinders

Units = cylinders of 16065 * 4096 = 65802240 bytes

 

Device Boot Start End Blocks Id System

/dev/sdw1 ? 120528 234814 3049098368 7 HPFS/NTFS

Partition 1 does not end on cylinder boundary

/dev/sdw2 ? 119381 153271 2177748372 73 Unknown

Partition 2 does not end on cylinder boundary

/dev/sdw3 ? 113202 147075 2176700544 2b Unknown

Partition 3 does not end on cylinder boundary

/dev/sdw4 ? 177064 177067 219896 61 Unknown

Partition 4 does not end on cylinder boundary

 

Partition table entries are not in disk order

DiskStation>

 

 

DiskStation> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]

md4 : active raid1 sda7[0] sdc7[1]

488366784 blocks super 1.2 [2/2] [UU]

 

md3 : active raid5 sdb6[0] sdc6[3] sda6[1]

3662792256 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

 

md2 : active raid5 sda5[0] sdc5[4] sdd5[5] sdb5[1]

718391040 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

 

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]

2097088 blocks [12/4] [UUUU________]

 

md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]

2490176 blocks [12/4] [UUUU________]

 

unused devices:

DiskStation>



Disk 1 = ST2000DL003-9VT166 2TB

Disk 2 = ST31500341AS 1.5TB

Disk 3 = ST2000DM001-1E6164 2TB

Disk 3 = ST2000DM001-1E6164 2TB (this was the last disk I added; it was working fine until I replaced it)

If you can fix it, you are a life saver!


Well

 

md3 : active raid5 sdb6[0] sdc6[3] sda6[1]
3662792256 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

 

This is your degraded volume.

As you can see, sdd6 is missing (4th slot).

 

I would first try a reboot and then a repair via the menu.

If that does not work, or if it is not possible at all, you can issue the following command over SSH:

# mdadm /dev/md3 --add /dev/sdd6

then wait a minute and issue

# cat /proc/mdstat

It should now show a rebuild/resync in progress
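If you are ever unsure which member dropped out, a helper along these lines can parse the status string (e.g. [UUU_]) from mdstat. The function name and the optional file argument are my own additions for illustration:

```shell
# Report which member slots of an array are down, by reading the [UUU_]-style
# status string on the line after "mdX : ..." in mdstat.
missing_slots() {   # usage: missing_slots <mdX> [mdstat-file]
    awk -v arr="$1" '
        $1 == arr {
            getline                      # status string is on the next line
            s = $NF
            gsub(/[\[\]]/, "", s)        # strip the brackets
            for (i = 1; i <= length(s); i++)
                if (substr(s, i, 1) == "_")
                    printf "slot %d down\n", i - 1
        }' "${2:-/proc/mdstat}"
}
# For the md3 shown in this thread: missing_slots md3  ->  slot 3 down
```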

 

Always remember: any change could harm your data! Always, and I really mean ALWAYS, have a backup!

 

I guess you meant Disk 4 with your second Disk 3 :smile:

 

One more note:

For RAID it is always a good idea to use identical disk types.

At the very least they should have the same speed.

But fortunately md is very flexible, so mixed-disk setups work well in most cases.


Good Luck :smile:

 

One last hint: you can issue the command

cat /proc/mdstat

from time to time to see how the rebuild progresses. There is also an estimated remaining time :wink:
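As a sketch, the progress and ETA can be pulled out with a one-line filter like this (the function name is mine; it accepts a saved mdstat copy for testing, defaulting to the live file):

```shell
# Extract just the "recovery = X% (done/total) finish=Nmin" part of mdstat;
# the finish= field is the kernel's own estimate of remaining minutes.
rebuild_progress() {   # usage: rebuild_progress [mdstat-file]
    grep -oE 'recovery = [0-9.]+% \([0-9/]+\) finish=[0-9.]+min' "${1:-/proc/mdstat}" \
        || echo "no rebuild running"
}
```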


DiskStation> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]

md4 : active raid1 sda7[0] sdc7[1]

488366784 blocks super 1.2 [2/2] [UU]

 

md3 : active raid5 sdd6[4] sdb6[0] sdc6[3] sda6[1]

3662792256 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

[=====>...............] recovery = 25.9% (316227328/1220930752) finish=348.7min speed=43232K/sec

 

md2 : active raid5 sda5[0] sdc5[4] sdd5[5] sdb5[1]

718391040 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

 

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]

2097088 blocks [12/4] [UUUU________]

 

md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]

2490176 blocks [12/4] [UUUU________]

 

unused devices:




Hi,

it now reports the volume as Normal, with an option to expand the size to about 5109 GB.

When I try, I get the following error.

Any ideas? Cheers again

 

Apr 14 10:04:23 DiskStation volumehandler.cgi: space_internal_lib.c:309 Failed to migrate '/dev/md4' from RAID 1 to RAID 5

Apr 14 10:04:23 DiskStation volumehandler.cgi: space_internal_lib.c:568 Failed to migrate RAID 1 to RAID 5: [/dev/md4] with disk [/dev/sda7]

Apr 14 10:04:23 DiskStation volumehandler.cgi: space_expand_unfinished_shr.c:46 failed to add partition to space: /dev/vg1000/lv [0x0D00 string_sep_pair.c:54]

Apr 14 10:04:23 DiskStation volumehandler.cgi: space_lib.cpp:1453 failed to expand unfinished space: /dev/vg1000/lv [0x0D00 string_sep_pair.c:54]

Apr 14 10:04:23 DiskStation volumehandler.cgi: volumehandler.cpp:633 failed to expand unfinished space: /volume1


If I were you, I would move my data to a safe place and reinstall cleanly with all disks, using SHR as the volume type.

 

Then move data back.

 

It will protect you against a lot of trouble.

 

Something went wrong, and I do not know what Syno's problem is.

We can go on guessing, but without any warranty!

 

Try posting the output of the following commands:

# cat /proc/mdstat
# df -h
# lvdisplay

 

Then I can guess what the problem is.

Or you could create port forwardings for SSH and the DSM web UI and PM me your user data; then I could have a quick look over it.

Also tell me whether or not you want me to try to fix it, which might cause full loss of data.


cheers

login as: root

root@192.168.0.4's password:

 

 

BusyBox v1.16.1 (2014-02-11 20:19:16 CST) built-in shell (ash)

Enter 'help' for a list of built-in commands.

 

DiskStation> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]

md4 : active raid1 sda7[0] sdc7[1]

488366784 blocks super 1.2 [2/2] [UU]

 

md3 : active raid5 sdd6[4] sdb6[0] sdc6[3] sda6[1]

3662792256 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

 

md2 : active raid5 sda5[0] sdc5[4] sdd5[5] sdb5[1]

718391040 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

 

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]

2097088 blocks [12/4] [UUUU________]

 

md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]

2490176 blocks [12/4] [UUUU________]

 

unused devices:

DiskStation> df -h

Filesystem Size Used Available Use% Mounted on

/dev/md0 2.3G 674.6M 1.6G 29% /

/dev/md0 2.3G 674.6M 1.6G 29% /proc/bus/pci

/tmp 1.9G 972.0K 1.9G 0% /tmp

/dev/sdw1 15.2M 10.0M 4.4M 70% /volumeUSB1/usbshare

/dev/sdv1 931.5G 752.4G 179.2G 81% /volumeUSB2/usbshare

/dev/sdu 1.4T 955.6G 441.7G 68% /volumeUSB3/usbshare

/var 2.3G 674.6M 1.6G 29% /usr/local/zarafa-licensed/var

/dev/vg1000/lv 4.5T 3.1T 1.4T 69% /volume1

DiskStation> lvdisplay

--- Logical volume ---

LV Name /dev/vg1000/lv

VG Name vg1000

LV UUID ySDjQz-e5FC-02Oo-tvE6-RaM8-gX98-JAR2GV

LV Write Access read/write

LV Status available

# open 1

LV Size 4.54 TB

Current LE 1188854

Segments 5

Allocation inherit

Read ahead sectors auto

- currently set to 4096

Block device 253:0

 

DiskStation>


Sorry, I forgot:

 

# lvm pvdisplay
# lvm vgdisplay

 

I just want to see the current setup, to find something that DSM is probably seeing...


DiskStation> lvm pvdisplay

--- Physical volume ---

PV Name /dev/md2

VG Name vg1000

PV Size 685.11 GB / not usable 1.19 MB

Allocatable yes (but full)

PE Size (KByte) 4096

Total PE 175388

Free PE 0

Allocated PE 175388

PV UUID FzLv7t-DA16-xeBu-HoKr-Ye0S-2i1F-9mSCxv

 

--- Physical volume ---

PV Name /dev/md3

VG Name vg1000

PV Size 3.41 TB / not usable 1.00 MB

Allocatable yes (but full)

PE Size (KByte) 4096

Total PE 894236

Free PE 0

Allocated PE 894236

PV UUID NLJAYw-rf6e-eUHi-z9Xw-xmLI-hzi0-PmuOLK

 

--- Physical volume ---

PV Name /dev/md4

VG Name vg1000

PV Size 465.74 GB / not usable 448.00 KB

Allocatable yes (but full)

PE Size (KByte) 4096

Total PE 119230

Free PE 0

Allocated PE 119230

PV UUID YzXNLl-m1YV-v400-3w3p-eS5o-ZX00-BfyWPT

 

DiskStation> lvm vgdisplay

--- Volume group ---

VG Name vg1000

System ID

Format lvm2

Metadata Areas 3

Metadata Sequence No 11

VG Access read/write

VG Status resizable

MAX LV 0

Cur LV 1

Open LV 1

Max PV 0

Cur PV 3

Act PV 3

VG Size 4.54 TB

PE Size 4.00 MB

Total PE 1188854

Alloc PE / Size 1188854 / 4.54 TB

Free PE / Size 0 / 0

VG UUID oLPI1t-z20M-RUm0-rQ5K-OrUB-0w85-5yjEgB
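As a sanity check on the LVM output above, the three PVs' extent counts (at the 4 MiB PE size shown) should sum to the VG total from vgdisplay:

```shell
# Cross-check: PE counts from pvdisplay for md2 + md3 + md4, times the 4 MiB
# PE size, should match vgdisplay's "Total PE 1188854" / "VG Size 4.54 TB".
total_pe=$(( 175388 + 894236 + 119230 ))
echo "total PE: $total_pe, VG size: $(( total_pe * 4 )) MiB"
```

This confirms the whole 4.54 TB volume really is stitched together from the three separate md arrays, with no free extents left over.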


It looks like there are three disk groups with one volume over them.

Did you do that manually, or is that a feature of SHR?

I am sorry, but I have no idea at the moment...


Don't know how that happened.

Do you think it is best to start again? I guess this way I could use DSM 5.


Clean is always better.

 

I have a strange problem, too...

My Photo Station stopped working at some point while I was installing some 3rd-party packages.

I uninstalled them all again, but it didn't solve my problem. It is not accessible via the web anymore...

I am currently thinking about reinstalling cleanly...


Upgraded to DSM 5; all data is still there, and it looks to have fixed the RAID issue. But the web service and Photo Station won't run.

Any ideas?


Got web and Photo Station working, from the GUI:

created a new group called 'http' with read/write on the web folder,

restarted web services,

reinstalled Photo Station.

All working... no settings/photos lost in the gallery.
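The same fix could presumably be done from the shell. This is only a sketch: the /volume1/web path and the 'http' group name are assumptions based on the GUI steps above, so check your own share location first.

```shell
# Give a group recursive read/write on a directory tree, mirroring the
# GUI permission fix above. Group name and path are assumptions.
grant_group_rw() {   # usage: grant_group_rw <group> <dir>
    chgrp -R "$1" "$2" && chmod -R g+rw "$2"
}
# On the NAS this would be:  grant_group_rw http /volume1/web
```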
