giveortake

6 DRIVES SHR1 on N54L : Crashed Volume (should be degraded)

Not sure if I am posting in the right section; it looked appropriate here.

Hi there,

 

 

I have been using XPEnology on this box for the last 4 years, currently running DSM 5.2 (I know; an upgrade to DSM 6 was planned this month).

 

My setup is: four bays with 4 TB drives, a 6 TB drive on the CD-ROM SATA port, and a 6 TB drive on the eSATA port, all set up in SHR.

 

Everything was working perfectly until I got a SMART quick-test failure on drive 5 (CD-ROM port). I ordered a new drive (still waiting for it), but after a reboot yesterday the volume showed as Crashed.

This is weird, because I know that 5 of the 6 drives are working, and I can still see all 6 drives in the BIOS.

 

Before I receive my new drive, I would like to perform the steps needed to ensure that:

- disk 5 is indeed working but was just somehow disabled
- I can remount volume1 (in a degraded state) and wait until I receive my new drive to rebuild the array
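For anyone checking the same things, here is a sketch of the read-only commands I plan to start with. The device names (/dev/sde, /dev/sdf) are assumptions taken from the space history XML further down; verify them against your own system before running anything:

```shell
# Read-only checks only; nothing here writes to the disks.

# 1. SMART health of the suspect drive
smartctl -a /dev/sde | grep -iE 'result|reallocated|pending'

# 2. md superblocks on the members that dropped out of the arrays
mdadm --examine /dev/sde5 /dev/sde6 /dev/sdf5 /dev/sdf6

# 3. Compare event counters against a healthy member; if they are
#    close, the dropped members are probably still usable
mdadm --examine /dev/sda5 | grep Events
mdadm --examine /dev/sde5 | grep Events
```

If the event counters are close together, the arrays can usually be brought back degraded without data loss; if they differ wildly, stop and ask before writing anything.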

 

I am not sure which info I should share with you, but your help would be much appreciated. I have backups of the important data, but given the amount of data on this volume there are a lot of other things it would be nice to recover. I have SSH access to the box, so I can provide additional info if needed.

 

The space history XML from a time when everything was working is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<spaces>
	<space path="/dev/vg1" reference="@storage_pool" device_type="1" container_type="2" >
		<device>
			<lvm path="/dev/vg1" uuid="aHEhJg-V1xG-l4Ld-AKp3-IWaE-XWQx-7pNMU6" designed_pv_counts="3" status="normal" total_size="21979914567680" free_size="0" pe_size="4194304">
				<raids>
					<raid path="/dev/md4" uuid="3e315753:507984cd:74a45244:e4d3c78c" level="raid5" version="1.2">
						<disks>
							<disk status="normal" dev_path="/dev/sda6" model="HUS724040ALE640         " serial="PK1331PAHERR4S" partition_version="7" partition_start="488399104" partition_size="7325623904" slot="3">
							</disk>
							<disk status="normal" dev_path="/dev/sdb6" model="HDS724040ALE640         " serial="PK1311PAGLN2US" partition_version="7" partition_start="488399104" partition_size="7325623904" slot="1">
							</disk>
							<disk status="normal" dev_path="/dev/sdc6" model="HDN724040ALE640         " serial="PK2334PBHB01KR" partition_version="7" partition_start="488399104" partition_size="7325623904" slot="0">
							</disk>
							<disk status="normal" dev_path="/dev/sdd6" model="HUS724040ALE640         " serial="PK1334PBHT7XJX" partition_version="7" partition_start="488399104" partition_size="7325623904" slot="2">
							</disk>
							<disk status="normal" dev_path="/dev/sde6" model="HUS726060ALA640         " serial="AR11001EV14T6B" partition_version="8" partition_start="488399104" partition_size="7325623904" slot="5">
							</disk>
							<disk status="normal" dev_path="/dev/sdf6" model="HUS726060ALA640         " serial="AR11001EV13QYB" partition_version="8" partition_start="488399104" partition_size="7325623904" slot="4">
							</disk>
						</disks>
					</raid>
					<raid path="/dev/md2" uuid="ba61cca6:a912275b:c627df66:b73318e0" level="raid5" version="1.2">
						<disks>
							<disk status="normal" dev_path="/dev/sda5" model="HUS724040ALE640         " serial="PK1331PAHERR4S" partition_version="7" partition_start="9453280" partition_size="478929728" slot="0">
							</disk>
							<disk status="normal" dev_path="/dev/sdb5" model="HDS724040ALE640         " serial="PK1311PAGLN2US" partition_version="7" partition_start="9453280" partition_size="478929728" slot="1">
							</disk>
							<disk status="normal" dev_path="/dev/sdc5" model="HDN724040ALE640         " serial="PK2334PBHB01KR" partition_version="7" partition_start="9453280" partition_size="478929728" slot="2">
							</disk>
							<disk status="normal" dev_path="/dev/sdd5" model="HUS724040ALE640         " serial="PK1334PBHT7XJX" partition_version="7" partition_start="9453280" partition_size="478929728" slot="3">
							</disk>
							<disk status="normal" dev_path="/dev/sde5" model="HUS726060ALA640         " serial="AR11001EV14T6B" partition_version="8" partition_start="9453280" partition_size="478929728" slot="5">
							</disk>
							<disk status="normal" dev_path="/dev/sdf5" model="HUS726060ALA640         " serial="AR11001EV13QYB" partition_version="8" partition_start="9453280" partition_size="478929728" slot="4">
							</disk>
						</disks>
					</raid>
					<raid path="/dev/md3" uuid="cead1a51:3a67d840:75e3e70f:f6db4c1e" level="raid1" version="1.2">
						<disks>
							<disk status="normal" dev_path="/dev/sde7" model="HUS726060ALA640         " serial="AR11001EV14T6B" partition_version="8" partition_start="7814039104" partition_size="3906799104" slot="0">
							</disk>
							<disk status="normal" dev_path="/dev/sdf7" model="HUS726060ALA640         " serial="AR11001EV13QYB" partition_version="8" partition_start="7814039104" partition_size="3906799104" slot="1">
							</disk>
						</disks>
					</raid>
				</raids>
			</lvm>
		</device>
		<reference>
			<volumes>
				<volume path="/volume1" dev_path="/dev/vg1/volume_1">
				</volume>
			</volumes>
			<iscsitrgs>
			</iscsitrgs>
		</reference>
	</space>
</spaces>

 cat /proc/mdstat:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid5 sdc6[0] sda6[3] sdd6[2] sdb6[1]
      18314053760 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/4] [UUUU__]

md2 : active raid5 sda5[4] sdd5[3] sdc5[2] sdb5[1]
      1197318400 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/4] [UUUU__]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      2097088 blocks [6/4] [UUUU__]

md0 : active raid1 sda1[0] sdb1[4] sdc1[3] sdd1[2]
      2490176 blocks [12/4] [U_UUU_______]
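To make the degraded state explicit, the [total/active] fields in the mdstat output above can be parsed. A small sketch over a saved copy of the two data arrays (content taken verbatim from the paste above; on the live box you would read /proc/mdstat directly):

```shell
# Save the relevant mdstat lines to a file (copied from the output above)
cat > /tmp/mdstat.txt <<'EOF'
md4 : active raid5 sdc6[0] sda6[3] sdd6[2] sdb6[1]
      18314053760 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/4] [UUUU__]

md2 : active raid5 sda5[4] sdd5[3] sdc5[2] sdb5[1]
      1197318400 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/4] [UUUU__]
EOF

# For each array, extract the [total/active] field and report it
# when fewer members are active than designed
awk '/^md/ { dev = $1 }
     /blocks/ && match($0, /\[[0-9]+\/[0-9]+\]/) {
         s = substr($0, RSTART + 1, RLENGTH - 2)
         split(s, a, "/")
         if (a[1] != a[2])
             printf "%s: %d of %d members active\n", dev, a[2], a[1]
     }' /tmp/mdstat.txt
# → md4: 4 of 6 members active
# → md2: 4 of 6 members active
```

So both data arrays are running, but degraded: drives 5 and 6 (sde/sdf) are missing from each, which matches the two 6 TB drives not showing up.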

parted -l:

Model: Hitachi HUS724040ALE640 (scsi)
Disk /dev/hda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      131kB   2550MB  2550MB  ext4                  raid
 2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
 5      4840MB  250GB   245GB                         raid
 6      250GB   4001GB  3751GB                        raid


Model: Hitachi HUS724040ALE640 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      131kB   2550MB  2550MB  ext4                  raid
 2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
 5      4840MB  250GB   245GB                         raid
 6      250GB   4001GB  3751GB                        raid


Model: Hitachi HDS724040ALE640 (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      131kB   2550MB  2550MB  ext4                  raid
 2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
 5      4840MB  250GB   245GB                         raid
 6      250GB   4001GB  3751GB                        raid


Model: HGST HDN724040ALE640 (scsi)
Disk /dev/sdc: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      131kB   2550MB  2550MB  ext4                  raid
 2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
 5      4840MB  250GB   245GB                         raid
 6      250GB   4001GB  3751GB                        raid


Model: HGST HUS724040ALE640 (scsi)
Disk /dev/sdd: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      131kB   2550MB  2550MB  ext4                  raid
 2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
 5      4840MB  250GB   245GB                         raid
 6      250GB   4001GB  3751GB                        raid


Model: Linux Software RAID Array (md)
Disk /dev/md0: 2550MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  2550MB  2550MB  ext4


Model: Linux Software RAID Array (md)
Disk /dev/md1: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2147MB  2147MB  linux-swap(v1)


Error: /dev/md2: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md2: 1226GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Error: /dev/md4: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md4: 18.8TB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: General UDisk (scsi)
Disk /dev/sdu: 2014MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2014MB  2013MB  primary               boot, lba


Error: /dev/zram0: unrecognised disk label
Model: Unknown (unknown)
Disk /dev/zram0: 1206MB
Sector size (logical/physical): 4096B/4096B
Partition Table: unknown
Disk Flags:

Error: /dev/zram1: unrecognised disk label
Model: Unknown (unknown)
Disk /dev/zram1: 1206MB
Sector size (logical/physical): 4096B/4096B
Partition Table: unknown
Disk Flags:

fdisk -l (note: this older fdisk does not understand GPT, hence the 2^32-sector warnings and the protective-MBR "ee EFI GPT" entries; the parted -l output above is the authoritative view of the partitions):

fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1      267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdb: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1      267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdc: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdc1               1      267350  2147483647+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/sdd: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdd1               1      267350  2147483647+ ee EFI GPT

Disk /dev/sdu: 2014 MB, 2014314496 bytes
255 heads, 63 sectors/track, 244 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdu1   *           1         245     1966080   c Win95 FAT32 (LBA)
Partition 1 has different physical/logical endings:
     phys=(243, 254, 63) logical=(244, 227, 47)

Thanks so much for taking the time to read this; your help is REALLY appreciated. I am worried I will break the whole thing and lose a lot of historical data.


I received my replacement hard drive, and while following the tutorial above I noticed that all my drives were visible in the BIOS but two were still missing in DSM. It turns out the settings in my modified BIOS for the N54L had changed. I fixed it by following these steps:

From the main screen go to ‘Chipset > Southbridge Configuration > SB SATA Configuration’ and make sure your settings are the same as below:

- OnChip SATA Channel = Enabled
- SATA PORTS 4/5 IDE mode = Disabled
- SATA EPS on all PORT = Enabled
- SATA Power on all PORT = Enabled

Return to the main screen, then go to ‘Advanced > IDE Configuration’ and again make sure your settings are the same as below:

- Embedded SATA Link Rate = 3.0Gbps MAX

With that, all drives show up, my volume is in degraded mode (expected), and I am now rebuilding using the new drive.
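While the rebuild runs, progress can be watched from the shell (a generic sketch; md2/md4 are the array names from the earlier mdstat output, adjust to your own):

```shell
# Refresh the overall rebuild status every 10 seconds
watch -n 10 cat /proc/mdstat

# Or query one array directly for its state and percent complete
mdadm --detail /dev/md2 | grep -E 'State|Rebuild Status'
```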

Glad it all worked out; it has been a few tense days ;) Thanks again, XPEnology!


@giveortake Congrats on your findings; I hope the rebuild goes OK.

I wonder how/why the settings changed, as that is an unusual thing to happen.

What BIOS are you using, the "TheBay"?


Hi there, yes, I am using TheBay's BIOS. This is the first time it has happened, and I didn't remember these steps from 4 years ago. Not sure why/how the settings changed; there is no keyboard connected, and the time and other BIOS settings are all fine (so the battery is OK). The rebuild is almost 10% through. 😴

