XPEnology Community

Inx192

Member, 10 posts

Posts posted by Inx192

  1. Update:

    I have partially backed my server up, and today most of my stuff started disappearing, so I thought it best to remove the volume and start again. At only about 0.2% of rebuilding a new volume, drives started crashing (the notification also says 'I/O error occurred to hard disk 1 on Server1').

    Is there a way to see what is causing this?

    The drives are all showing as healthy, so could it be hardware related, or even my XPEnology build?
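
    (For anyone digging into the same issue: assuming you can SSH into the box and that smartctl is available, as it is on stock DSM builds, the kernel log and the drive's SMART data are the first places to look. The /dev/sda name below is only a guess; check which /dev/sdX corresponds to "hard disk 1" first.)

    dmesg | grep -i error | tail -n 50    # recent disk/controller errors in the kernel log
    smartctl -a /dev/sda                  # SMART health, attributes and error counters for the suspect drive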

  2. Backing up that amount is not possible for me; I have the non-replaceable items backed up, but the rest will be a right ball ache to replace.

    I am completely puzzled as to what caused this to happen; the drives all show as healthy, but at any one time between 3 and 5 drives show as crashed.

    Could you possibly explain what might have caused this to happen? At present, even with redundancy, I cannot see how I can stop this from happening again in the future.

  3. Can anyone possibly help me with my problem?

     

    My server was working fine yesterday morning, and while I was copying photos across the network to store on the server, it popped up a message saying that my volume has crashed (screenshot attached).

    It states that four drives have crashed and that I have to remove the volume (screenshot attached).

     

    When I look at the disk group it only shows two drives as crashed, even though it says four have crashed; also, sometimes after a reboot it states only three have crashed (screenshot attached).

     

    I can still access the data on the server, but when copying my photos across to a laptop it says I don't have permission for some of them.

    I am running SHR with two-drive fault tolerance.

     

    I just don't have enough spare drives to back up everything; is there any way to fix this without losing any data?
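
    (For anyone in the same spot: over SSH, the md layer will report exactly which arrays and members it considers failed, which is more precise than the Storage Manager summary. mdadm ships with DSM; the md2 name below is only an example, take the actual array names from /proc/mdstat.)

    cat /proc/mdstat            # degraded arrays show "_" gaps in the [UUUU_] member map
    mdadm --detail /dev/md2     # per-member state for one array (active / faulty / removed)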

  4. Is it possible that a 6 TB drive is not supported by your controller, so it's defaulting to a lower value because it can't read beyond the drive parameters?

    I remember a few years back, when large drives started appearing, they would only show up on some controllers at a lower value, 1 TB showing as 640 GB for example, because that was the CHS limit based on binary/hex and 8/16/32 bits etc.

    Could you try a 2 or 3 TB drive? (It's asking for a 2 TB minimum.)

     

    I agree this is most likely the issue here.

     

    Most older controllers can't read beyond 2 TB, for example.

     

    Even some of the not-so-old PCI SATA II controllers max out at 3 TB or 4 TB.

     

    The 6 TB and 8 TB drives are fairly "new"; you'll probably need a newer PCIe SATA III controller with the latest firmware to read them properly.

     

    Also, very important: there are now these new HDDs called Archive drives from Seagate (WD has similar ones) that don't work like normal HDDs. If you purchased the Archive version, they are not going to work well with any known OS; those drives were developed for data centres and for programmers writing directly to the cluster location.

    Not something the average Windows or Linux OS does as of yet.

     

    I have attached a couple of images in the previous reply.

     

    The 6 TB drives are WD Reds, and the following are the hardware parts in the server.

     

    LSI 9211-4i 4-port SAS/SATA RAID controller

    Chenbro CK23601 36-port SAS expander

    ASUS B85M-G (LGA1150, mATX, USB 3.0, 32 GB DDR3, 6x SATA 6 Gb/s)

    Intel 4th-generation Core i5-4460, 3.2 GHz quad-core, 6 MB L3 cache (boxed)
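
    (One quick way to rule the controller or expander in or out, assuming SSH access: read the sizes the kernel itself reports for the drives, which sidesteps any userland tool limits. /dev/sdf is just an example device; a 6 TB WD Red should come back as roughly 11721045168 sectors.)

    cat /sys/block/sdf/size          # size in 512-byte sectors as seen by the kernel
    blockdev --getsize64 /dev/sdf    # the same figure in bytes, if blockdev is in the busybox build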

  5. Is it possible that a 6 TB drive is not supported by your controller, so it's defaulting to a lower value because it can't read beyond the drive parameters?

    I remember a few years back, when large drives started appearing, they would only show up on some controllers at a lower value, 1 TB showing as 640 GB for example, because that was the CHS limit based on binary/hex and 8/16/32 bits etc.

    Could you try a 2 or 3 TB drive? (It's asking for a 2 TB minimum.)

     

    From what I have seen, all my 6 TB drives have shown up fine.

     

    (two screenshots attached: Capture.jpg)

  6. Can someone please help me?

    Have you tried running SMART checks on the drives? Sometimes fake errors get cleared...

     

    Sent from my Nexus 10 using Tapatalk

     

    I have tried this and the error still shows.

    Also, the quick check runs daily and the full check weekly, without any failures.

     

    Thanks anyway.
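
    (If it helps anyone later: the raw SMART attributes and the drive's internal error log can also be pulled manually over SSH, which sometimes shows pending or reallocated sectors even when the scheduled self-tests pass. /dev/sdX is a placeholder for the drive reporting the error.)

    smartctl -A /dev/sdX        # raw attribute table (reallocated / pending sectors etc.)
    smartctl -l error /dev/sdX  # the drive's internal ATA error log, if any entries exist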

  7. If anyone could help me out, it would be much appreciated.

     

    A couple of months back just after completing my build, I did something stupid.

    I took one of my drives out of the server while it was powered on, to swap the order of the drives in the hot-swap bays (I don't know why I did this while it was powered on), and after I realised my stupidity I swapped them back once it was shut down.

    After this it showed one of my 2 TB drives as faulty, so I replaced it with a 6 TB, but ever since then I have had the same degraded error.

     

    "The space is degraded. We suggest you replace the failing hard disks with healthy ones for repair (The disk size is equal or bigger than "1862 GB".) Please refer to the status field in Disk Info below to find out the failing hard disks.

     

    All drives are showing as healthy.

     

    Can anyone help me out without the need to rebuild? (I cannot wipe and restore, as I do not have enough spare drives to back everything up.)

     

    Server1> fdisk -l

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sde: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sde1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sdg: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdg1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sdh: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdh1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sdi: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdi1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sdj: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdj1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sdl: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdl1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sdk: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdk1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sdf: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdf1 1 267350 2147483647+ ee EFI GPT

    fdisk: device has more than 2^32 sectors, can't use all of them

     

    Disk /dev/sda: 2199.0 GB, 2199023255040 bytes

    255 heads, 63 sectors/track, 267349 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sda1 1 267350 2147483647+ ee EFI GPT

     

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes

    255 heads, 63 sectors/track, 121601 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdc1 1 121602 976759808 7 HPFS/NTFS

     

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes

    255 heads, 63 sectors/track, 121601 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdb1 1 121602 976759808 7 HPFS/NTFS

     

    Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes

    255 heads, 63 sectors/track, 243201 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdd1 1 311 2490240 fd Linux raid autodetect

    Partition 1 does not end on cylinder boundary

    /dev/sdd2 311 572 2097152 fd Linux raid autodetect

    Partition 2 does not end on cylinder boundary

    /dev/sdd3 588 243201 1948788912 f Win95 Ext'd (LBA)

    /dev/sdd5 589 121589 971932480 fd Linux raid autodetect

    /dev/sdd6 121590 243189 976743952 fd Linux raid autodetect

     

    Disk /dev/sdu: 62.5 GB, 62518853632 bytes

    4 heads, 32 sectors/track, 953962 cylinders

    Units = cylinders of 128 * 512 = 65536 bytes

     

    Device Boot Start End Blocks Id System

    /dev/sdu1 * 1 384 24544+ e Win95 FAT16 (LBA)
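
    (The 2199.0 GB figures above are not the real drive sizes; the old fdisk bundled with DSM stops at its 2^32-sector limit, i.e. 4,294,967,296 x 512 bytes, which is about 2.2 TB, hence the warning it prints for every larger disk.)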

     

     

     

     

    Server1> cat /proc/mdstat

    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]

    md4 : active raid6 sdf6[0] sdi6[10] sde6[9] sda6[6] sdl6[7] sdj6[5] sdk6[4] sdh6[3] sdg6[2] sdd6[1]

    8790686208 blocks super 1.2 level 6, 64k chunk, algorithm 2 [11/10] [UUUUUUUUU_U]

     

    md2 : active raid6 sdj8[0] sdi8[4] sdl8[3] sda8[2] sdk8[1]

    8790740736 blocks super 1.2 level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]

     

    md5 : active raid6 sdf7[0] sdi7[8] sde7[7] sda7[5] sdl7[6] sdj7[4] sdk7[3] sdh7[2] sdg7[1]

    6837200384 blocks super 1.2 level 6, 64k chunk, algorithm 2 [9/9] [UUUUUUUUU]

     

    md3 : active raid6 sdf5[0] sde5[10] sda5[7] sdl5[8] sdj5[6] sdk5[5] sdh5[4] sdg5[3] sdi5[11] sdd5[1]

    8747383104 blocks super 1.2 level 6, 64k chunk, algorithm 2 [11/10] [UUUUUUUUUU_]

     

    md1 : active raid1 sda2[0] sdd2[6] sde2[3] sdf2[7] sdg2[5] sdh2[4] sdi2[1] sdj2[8] sdk2[9] sdl2[10]

    2097088 blocks [12/10] [UU_UUUUUUUU_]

     

    md0 : active raid1 sda1[0] sdd1[8] sde1[10] sdf1[2] sdg1[6] sdh1[5] sdi1[7] sdj1[3] sdk1[4] sdl1[1]

    2490176 blocks [12/10] [UUUUUUUUU_U_]

     

    unused devices: <none>
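
    (From the mdstat output above, md3 and md4 are each one member short, which matches the degraded warning; md2 and md5 are complete. Assuming SSH access, mdadm can show which slot is missing and whether the corresponding partition still has a usable superblock. /dev/sdX5 below is a placeholder, not a device taken from this output, and this is only a sketch of the usual approach, not something to run blindly; copy off anything irreplaceable first.)

    mdadm --detail /dev/md3       # lists each member and which slot is "removed"
    mdadm --examine /dev/sdX5     # inspect a candidate partition's md superblock and event count
    # if the superblock is intact and the event counts are close, a re-add is sometimes all that's needed:
    # mdadm /dev/md3 --add /dev/sdX5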
