XPEnology Community

URGENT!!! - 6 DISK VOLUME CRASHED -- HELP!!!


amuletxheart


I have an ESXi server running DSM as one of the VMs. The hard disks are passed through to the DSM VM using virtual RDM. I started with a 3-disk SHR volume and everything was fine. I then expanded the volume by adding another disk, making it a 4-disk volume; again, everything was fine. Finally I added my remaining 2 disks to make it a 6-disk volume. This expansion went wrong: one of the new disks now shows up as "not initialized" and the other as "crashed".

[screenshots of the disk status]

 

I believe my data is still fine, as I can still access the files over the network. When I double-click the files they open fine, without any signs of corruption. However, when I try to copy the same files to my local hard drive, Windows gives me this error. This happened with a lot of the files, not just one or two.

[screenshot of the Windows error]

 

I tried downloading the same files through the Synology web interface. It gives me a zip file of the folder, but upon extraction some (not all) of the files are corrupted. The log in the web interface shows a lot of errors.

[screenshot of the log errors]

Files without thumbnails are corrupted:

[screenshots]

 

I also tried attaching another drive directly to the DSM VM and backing the data up to it using the built-in DSM backup application. I left it running overnight, but it failed after a few hours of progress.

 

I built the server from old parts with the following specs: i7 920 @ 4 GHz OC on a Gigabyte EX58-UD5 with 24 GB of RAM. I'm using a Crucial M500 240 GB SSD for ESXi and the DSM VM, plus six 5400 rpm 2 TB hard disks for the data storage.

 

At this point I'm desperate so any help would be much appreciated.


The hard disks are passed through to the DSM VM using virtual RDM.

Ahh, another one bites the dust. I can't help you, sorry; maybe someone else can.

 

Warning for the future and for other users: RDM does not pass SMART data along, so XPEnology can never know when a disk is failing.

Use an ESXi virtual disk instead, or use controller passthrough.
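
For anyone wondering what "virtual RDM" refers to here: from the ESXi shell the two RDM modes are created roughly like this (the device ID and datastore paths below are only placeholders):

# virtual compatibility mode RDM (the mode this thread is about)
vmkfstools -r /vmfs/devices/disks/<device-id> /vmfs/volumes/datastore1/DSM/disk1-rdm.vmdk
# physical compatibility mode RDM
vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/datastore1/DSM/disk1-rdmp.vmdk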


I took the disks out and connected them directly to another PC. There's nothing wrong with them and the data is still there. I think this is a logical problem: somehow DSM has corrupted the partition table or something similar.



We can argue all day about the use of RDM, but Nindustries offers no real alternative: an ESXi virtual disk doesn't pass SMART data to XPEnology either, and passthrough requires at least two controllers, one for the ESXi host's datastore and one to pass through to the VM. Most people here won't have a dedicated controller, nor would Nanoboot/Gnoboot necessarily have a driver for it if it were passed through. Remember that Nanoboot doesn't have the vast driver support of FreeNAS or VMware; that's one of the reasons we virtualize in the first place. If Nanoboot does have drivers for your machine, you are of course better off running it bare metal.

 

Anyway, back to amuletxheart's problem. First off, I don't think you should ever expand a volume by more than one drive at a time if you only have single-drive redundancy (RAID 5/SHR), because the drives you are adding are usually untested and can cause an unrecoverable error if more than one of them fails. I know it's tempting to save a rebuild, but think of it like this: if you expand from 4 to 5 drives and one drive fails during the expansion, you can still recover, because data is being written to 5 drives but only 4 are needed for recovery. If you expand from 4 to 6 drives in one step, data is written to all 6 drives and 5 are needed for recovery; if both new (untested) drives fail, you are left with only 4 good drives and the data cannot be recovered.

 

I'm trying to figure out your problem, and I'm wondering whether you might have mapped two RDMs to the same physical drive. Is that a possibility? Also, if you change your VM's SCSI controller to the LSI Logic SAS controller, DSM will show the real drive model and serial number instead of VMware's virtual drive. That won't solve your problem by itself, but it's an easy way to check: if two drives show the same serial number, you know two RDMs are mapped to one physical drive. Lastly, if you go back to your original 4-drive configuration from before the second expansion, does DSM see the volume and let you repair it?
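
If you'd rather do it by hand than through the vSphere client, the controller type is just a line in the VM's .vmx file; something along these lines, assuming the controller is scsi0:

scsi0.present = "TRUE"
scsi0.virtualDev = "lsisas1068"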


I initially expanded 1 drive at a time but I got impatient and added the remaining 2 drives together. Now I deeply regret this decision.

 

I don't think I have two RDMs mapped to a single drive. I double-checked the serial numbers, and the two drives also have different model numbers: one is an original Samsung HD204UI, and the other is a Seagate RMA unit, model ST2000DL004.

 

One interesting thing is that my motherboard has an unusual SATA port layout. Six SATA 2 ports come off the Intel X58 chipset, while four more SATA 2 ports come off the "GIGABYTE SATA2 chip", which turns out to be a JMB363 SATA/IDE controller after some digging into the PCI ID. The JMicron controller only supports two ports, so a JMB322 port multiplier is connected to each channel to split it into four SATA 2 ports. The two new drives were initially connected to two ports on the same channel. I don't know whether this is related to my problem, but I have since moved all the hard drives onto the Intel chipset ports and the problem persists.

 

With only the original four drives connected, the volume still appears as "crashed": the files are completely unavailable in File Station and the capacity shows up as "-GB/-GB". The same thing happens when only the "not initialized" drive is attached to the DSM VM and the "crashed" drive is not. When only the "crashed" drive is attached and the other is not, the files become available again, as described in the first post.

 

I have tried running e2fsck -v -y -f on the volume in the DSM Linux terminal, following this article, but the problem persists: http://shankerbalan.net/blog/rescue-cra ... gy-volume/
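
For anyone following along, the rough shape of that procedure is something like the following (assuming the usual /dev/vg1000/lv volume and that DSM's syno_poweroff_task helper is available; the linked article has the exact steps):

# stop services and unmount the volume so fsck can run
syno_poweroff_task -d
# make sure the RAID and LVM volume are active
vgchange -ay
# check and repair the ext4 filesystem on the logical volume
e2fsck -v -y -f /dev/vg1000/lv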

 

I've now installed Ubuntu on another computer and I'm in the process of connecting the drives to it to see how Ubuntu sees them: http://www.synology.com/en-global/support/faq/579
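
That FAQ boils down to roughly the following on an Ubuntu machine (device names will differ; mounting read-only so nothing gets written to the disks):

sudo apt-get install -y mdadm lvm2
# assemble whatever md arrays the Synology disks advertise
sudo mdadm --assemble --scan
cat /proc/mdstat
# activate the LVM volume group used by SHR and mount it read-only
sudo vgchange -ay
sudo mkdir -p /mnt/syno
sudo mount -o ro /dev/vg1000/lv /mnt/syno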


I had a similar problem happen to me when expanding arrays a while back. My data was accessible, but DSM thought it was crashed. After enough playing I eventually fixed it, but in the end I just backed up all my data and started over.

 

my similar issue: viewtopic.php?f=2&t=2529

 

A good page for understanding Linux software RAID: https://wiki.archlinux.org/index.php/So ... ID_and_LVM

 

And one of my older experiences, where my partition tables got erased while adding a disk. I was eventually able to assemble the array without partition tables, with a lot of help from a guy named Remy: http://forum.cgsecurity.org/phpBB3/foun ... t2600.html
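
From memory, the core of it was reading the surviving md superblocks and then assembling from the raw members; very roughly, and with made-up device names (only reach for --force once you understand what --examine reports):

# read the md superblock on each member partition
mdadm --examine /dev/sdb3 /dev/sdc3 /dev/sdd3
# try a normal assemble first; force it only if the members disagree slightly on event counts
mdadm --assemble /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3
mdadm --assemble --force /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3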

 

As I said before, if you can access your data you should just back it up and start over. Either way, back it up first; that way, if you want to play around and pull your hair out trying to fix it, you have a fallback. Good luck :smile:


ALL HOPE WAS NOT LOST!!! HUGE SUCCESS!!!

 

Thanks to your post on cgsecurity and https://raid.wiki.kernel.org/index.php/RAID_Recovery, I managed to regain access to my files. I've just signed up for a one-year CrashPlan subscription while it's on promotion, and I also bought a 4 TB Seagate drive for local backup; I don't want a tragedy like this to happen again. I spent the last week frantically trying to recover my files, crying myself to sleep every night (just kidding), because I would have lost my entire anime/music/movie/personal photo collection.

 

Thanks everyone once again for all your help.



I'm glad it helped! I know your pain; I'd done the same with my movie and software collections :lol:

 

It took me a long time and a lot of reading to learn about Linux software RAID and how it works, and I'm still kind of a noob. If it weren't for that guy Remy, I'd have been clueless about where to start.


  • 4 months later...

Hi,

 

I am in dire need of direction on how to recover my crashed SHR volume. I have XPEnology installed with Nanoboot on a VMware Workstation virtual machine. I had two 2 TB WD Caviar Green drives and added a third; the expansion was successful. However, after a power failure it seems the drives were mounted in a different order, and that caused the volume to crash. I have seen the links in this thread, but since I am a newbie I don't understand some of the steps. I have created the raid.status file and run the fdisk -l command; the results are copied below. After that I am lost as to what to do. Your help is greatly appreciated, as I have years of pictures saved here with no backup elsewhere. Thanks.
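
(The raid.status file below is mdadm --examine output; it was gathered roughly like this, one entry per member partition, with device names as fdisk reports them:)

mdadm --examine /dev/sdd1 /dev/sde1 /dev/sdf1 > raid.status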

 

fdisk -l

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks  Id System
/dev/sdc1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdc2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sdc3             588        1305     5759296   f Win95 Ext'd (LBA)
/dev/sdc5             589        1305     5751248  fd Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000227396608 bytes
255 heads, 63 sectors/track, 121604 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks  Id System
/dev/sdd1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdd2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sdd3             588      243201  1948788912   f Win95 Ext'd (LBA)
/dev/sdd5             589      243201  1948780864  fd Linux raid autodetect

Disk /dev/sde: 1000.2 GB, 1000227396608 bytes
255 heads, 63 sectors/track, 121604 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks  Id System
/dev/sde1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sde2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sde3             588      243201  1948788912   f Win95 Ext'd (LBA)
/dev/sde5             589      243201  1948780864  fd Linux raid autodetect

Disk /dev/sdf: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks  Id System
/dev/sdf1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdf2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sdf3             588      243201  1948788912   f Win95 Ext'd (LBA)
/dev/sdf5             589      243201  1948780864  fd Linux raid autodetect

 

 

vi raid.status
/dev/sdd1:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : 26993c55:eba54617:cb3c5fda:937d006b (local to host MKHOME)
 Creation Time : Fri Dec 31 19:00:05 1999
    Raid Level : raid1
 Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
    Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
 Total Devices : 4
Preferred Minor : 0

   Update Time : Sun Nov 16 22:19:57 2014
         State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 8
 Spare Devices : 0
      Checksum : a7675561 - correct
        Events : 2341491


     Number   Major   Minor   RaidDevice State
this     3       8       49        3      active sync   /dev/sdd1

  0     0       8       33        0      active sync   /dev/sdc1
  1     1       8       81        1      active sync   /dev/sdf1
  2     2       8       65        2      active sync   /dev/sde1
  3     3       8       49        3      active sync   /dev/sdd1
  4     4       0        0        4      faulty removed
  5     5       0        0        5      faulty removed
  6     6       0        0        6      faulty removed
  7     7       0        0        7      faulty removed
  8     8       0        0        8      faulty removed
  9     9       0        0        9      faulty removed
 10    10       0        0       10      faulty removed
 11    11       0        0       11      faulty removed
/dev/sde1:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : 26993c55:eba54617:cb3c5fda:937d006b (local to host MKHOME)
 Creation Time : Fri Dec 31 19:00:05 1999
    Raid Level : raid1
 Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
    Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
 Total Devices : 4
Preferred Minor : 0
Update Time : Sun Nov 16 22:19:57 2014
         State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 8
 Spare Devices : 0
      Checksum : a767556f - correct
        Events : 2341491


     Number   Major   Minor   RaidDevice State
this     2       8       65        2      active sync   /dev/sde1

  0     0       8       33        0      active sync   /dev/sdc1
  1     1       8       81        1      active sync   /dev/sdf1
  2     2       8       65        2      active sync   /dev/sde1
  3     3       8       49        3      active sync   /dev/sdd1
  4     4       0        0        4      faulty removed
  5     5       0        0        5      faulty removed
  6     6       0        0        6      faulty removed
  7     7       0        0        7      faulty removed
  8     8       0        0        8      faulty removed
  9     9       0        0        9      faulty removed
 10    10       0        0       10      faulty removed
 11    11       0        0       11      faulty removed
/dev/sdf1:
         Magic : a92b4efc
       Version : 0.90.00
          UUID : 26993c55:eba54617:cb3c5fda:937d006b (local to host MKHOME)
 Creation Time : Fri Dec 31 19:00:05 1999
    Raid Level : raid1
 Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
    Array Size : 2490176 (2.37 GiB 2.55 GB)
  Raid Devices : 12
 Total Devices : 2
Preferred Minor : 0

   Update Time : Mon Nov 17 00:05:34 2014
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 10
 Spare Devices : 0
      Checksum : a7677a2a - correct
        Events : 2343081


     Number   Major   Minor   RaidDevice State
this     1       8       81        1      active sync   /dev/sdf1

  0     0       8       33        0      active sync   /dev/sdc1
  1     1       8       81        1      active sync   /dev/sdf1
  2     2       0        0        2      faulty removed
  3     3       0        0        3      faulty removed
  4     4       0        0        4      faulty removed
  5     5       0        0        5      faulty removed
  6     6       0        0        6      faulty removed
  7     7       0        0        7      faulty removed
  8     8       0        0        8      faulty removed
  9     9       0        0        9      faulty removed
 10    10       0        0       10      faulty removed
 11    11       0        0       11      faulty removed

 

vi messages

Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: RSDP 00000000000f6b80 00024 (v02 PTLTD )
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: XSDT 000000003feeda33 0005C (v01 INTEL  440BX    06040000 VMW  01324272)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: FACP 000000003fefee73 000F4 (v04 INTEL  440BX    06040000 PTL  000F4240)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: DSDT 000000003feee8eb 10588 (v01 PTLTD  Custom   06040000 MSFT 03000001)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: FACS 000000003fefffc0 00040
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: BOOT 000000003feee8c3 00028 (v01 PTLTD  $SBFTBL$ 06040000  LTP 00000001)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: APIC 000000003feee501 003C2 (v01 PTLTD  ? APIC   06040000  LTP 00000000)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: MCFG 000000003feee4c5 0003C (v01 PTLTD  $PCITBL$ 06040000  LTP 00000001)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: SRAT 000000003feedb2f 004D0 (v02 VMWARE MEMPLUG  06040000 VMW  00000001)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: HPET 000000003feedaf7 00038 (v01 VMWARE VMW HPET 06040000 VMW  00000001)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] ACPI: WAET 000000003feedacf 00028 (v01 VMWARE VMW WAET 06040000 VMW  00000001)
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Zone PFN ranges:
Nov 16 23:46:00 MKHOME kernel: [    0.000000]   DMA      0x00000010 -> 0x00001000
Nov 16 23:46:00 MKHOME kernel: [    0.000000]   DMA32    0x00001000 -> 0x00100000
Nov 16 23:46:00 MKHOME kernel: [    0.000000]   Normal   empty
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Movable zone start PFN for each node
Nov 16 23:46:00 MKHOME kernel: [    0.000000] early_node_map[3] active PFN ranges
Nov 16 23:46:00 MKHOME kernel: [    0.000000]     0: 0x00000010 -> 0x0000009f
Nov 16 23:46:00 MKHOME kernel: [    0.000000]     0: 0x00000100 -> 0x0003fee0
Nov 16 23:46:00 MKHOME kernel: [    0.000000]     0: 0x0003ff00 -> 0x00040000
Nov 16 23:46:00 MKHOME kernel: [    0.000000] 64 Processors exceeds NR_CPUS limit of 8
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 258410
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Internal HD num: 0
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Internal netif num: 4
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Synology Hardware Version: DS3612xs-j
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Serial Number: B3J4N01003
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Synoboot VID: ea0
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Synoboot PID: 2168
Nov 16 23:46:00 MKHOME kernel: [    0.000000] Detected 3333.429 MHz processor.
Nov 16 23:46:00 MKHOME kernel: [    0.176009] raid6: int64x1   2367 MB/s
Nov 16 23:46:00 MKHOME kernel: [    0.192982] raid6: int64x2   3164 MB/s
Nov 16 23:46:00 MKHOME kernel: [    0.210003] raid6: int64x4   2445 MB/s
Nov 16 23:46:00 MKHOME kernel: [    0.313904] raid6: sse2x4    9656 MB/s
Nov 16 23:46:00 MKHOME kernel: [    0.313905] raid6: using algorithm sse2x4 (9656 MB/s)
Nov 16 23:46:00 MKHOME kernel: [    1.277635] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 16 23:46:00 MKHOME kernel: [    3.936014] Enable Synology sata fast booting
Nov 16 23:46:00 MKHOME kernel: [    3.936016] ata1: COMRESET failed (errno=-95)
Nov 16 23:46:00 MKHOME kernel: [    3.936017] ata2: COMRESET failed (errno=-95)
Nov 16 23:46:00 MKHOME kernel: [    4.106677] ata2.00: Find SSD disks. [VMware Virtual IDE CDROM Drive]
Nov 16 23:46:00 MKHOME kernel: [    4.476982] sd 2:0:0:0: [sdc] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.482969] sd 2:0:0:0: [sdc] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.518021] sd 2:0:1:0: [sdd] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.524690] sd 2:0:0:0: [sdc] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.596225] sd 2:0:1:0: [sdd] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.597453] sdd: p5 size 3897561728 extends beyond EOD, enabling native capacity
Nov 16 23:46:00 MKHOME kernel: [    4.621019] sd 2:0:1:0: [sdd] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.621337] sdd: p5 size 3897561728 extends beyond EOD, truncated
Nov 16 23:46:00 MKHOME kernel: [    4.638676] sd 2:0:2:0: [sde] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.653418] sd 2:0:2:0: [sde] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.668463] sde: p5 size 3897561728 extends beyond EOD, enabling native capacity
Nov 16 23:46:00 MKHOME kernel: [    4.696433] sd 2:0:1:0: [sdd] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.721060] sd 2:0:2:0: [sde] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.721407] sde: p5 size 3897561728 extends beyond EOD, truncated
Nov 16 23:46:00 MKHOME kernel: [    4.748619] sd 2:0:3:0: [sdf] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.769442] sd 2:0:3:0: [sdf] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.828097] sd 2:0:2:0: [sde] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    4.857569] sd 2:0:3:0: [sdf] Assuming drive cache: write through
Nov 16 23:46:00 MKHOME kernel: [    5.118243] e100: Unknown symbol mii_ethtool_sset (err 0)
Nov 16 23:46:00 MKHOME kernel: [    5.118252] e100: Unknown symbol mii_link_ok (err 0)
Nov 16 23:46:00 MKHOME kernel: [    5.118282] e100: Unknown symbol mii_check_link (err 0)
Nov 16 23:46:00 MKHOME kernel: [    5.118290] e100: Unknown symbol mii_nway_restart (err 0)
Nov 16 23:46:00 MKHOME kernel: [    5.118310] e100: Unknown symbol generic_mii_ioctl (err 0)
Nov 16 23:46:00 MKHOME kernel: [    5.118318] e100: Unknown symbol mii_ethtool_gset (err 0)
Nov 16 23:46:00 MKHOME kernel: [    5.703398] md: invalid raid superblock magic on sdc5
Nov 16 23:46:00 MKHOME kernel: [    5.703401] md: sdc5 does not have a valid v0.90 superblock, not importing!
Nov 16 23:46:00 MKHOME kernel: [    5.703624] md: could not open unknown-block(8,49).
Nov 16 23:46:00 MKHOME kernel: [    5.703872] md: could not open unknown-block(8,50).
Nov 16 23:46:00 MKHOME kernel: [    5.704048] md: invalid raid superblock magic on sdd5
Nov 16 23:46:00 MKHOME kernel: [    5.704049] md: sdd5 does not have a valid v0.90 superblock, not importing!
Nov 16 23:46:00 MKHOME kernel: [    5.704240] md: could not open unknown-block(8,65).
Nov 16 23:46:00 MKHOME kernel: [    5.704422] md: could not open unknown-block(8,66).
Nov 16 23:46:00 MKHOME kernel: [    5.704573] md: invalid raid superblock magic on sde5
Nov 16 23:46:00 MKHOME kernel: [    5.704574] md: sde5 does not have a valid v0.90 superblock, not importing!
Nov 16 23:46:00 MKHOME kernel: [    5.779063] md: invalid raid superblock magic on sdf5
Nov 16 23:46:00 MKHOME kernel: [    5.779070] md: sdf5 does not have a valid v0.90 superblock, not importing!
Nov 16 23:46:00 MKHOME kernel: [    5.779146] md: sdc2 has different UUID to sdc1
Nov 16 23:46:00 MKHOME kernel: [    5.779154] md: sdd2 has different UUID to sdc1
Nov 16 23:46:00 MKHOME kernel: [    5.779159] md: sde2 has different UUID to sdc1
Nov 16 23:46:00 MKHOME kernel: [    5.779163] md: sdf2 has different UUID to sdc1
Nov 16 23:46:00 MKHOME kernel: [    5.779228] md: kicking non-fresh sdd1 from array!
Nov 16 23:46:00 MKHOME kernel: [    5.779340] md: kicking non-fresh sde1 from array!
Nov 16 23:46:00 MKHOME kernel: [    5.779340] md: kicking non-fresh sde1 from array!
Nov 16 23:46:00 MKHOME kernel: [    5.779945] md: sdd2 has different UUID to sdc2
Nov 16 23:46:00 MKHOME kernel: [    5.779946] md: sde2 has different UUID to sdc2
Nov 16 23:46:00 MKHOME kernel: [    5.787783] md: md1 already running, cannot run sdd2
Nov 16 23:46:00 MKHOME kernel: [    5.849504] bromolow_synobios: module license 'Synology Inc.' taints kernel.
Nov 16 23:46:00 MKHOME kernel: [    5.849506] Disabling lock debugging due to kernel taint
Nov 16 23:46:00 MKHOME kernel: [    5.870611] 2014-11-17 4:45:55 UTC
Nov 16 23:46:00 MKHOME kernel: [    5.870621] Brand: Synology
Nov 16 23:46:00 MKHOME kernel: [    5.870622] Model: DS-3612xs
Nov 16 23:46:00 MKHOME kernel: [    5.870624] set group disks wakeup number to 4, spinup time deno 7
Nov 16 23:46:00 MKHOME kernel: [    6.389980] synobios: unload
Nov 16 23:46:02 MKHOME rc: defined swap disk is not identical
Nov 16 23:46:02 MKHOME rc:     defined disks: sdc2 sdd2 sde2 sdf2
Nov 16 23:46:02 MKHOME rc:      online disks: sdc2 sdf2
Nov 16 23:46:02 MKHOME kernel: [   13.528854] md: md1: set sdc2 to auto_remap [0]
Nov 16 23:46:02 MKHOME kernel: [   13.528855] md: md1: set sdf2 to auto_remap [0]
Nov 16 23:46:03 MKHOME kernel: [   14.420030] zram: module is from the staging directory, the quality is unknown, you have been warned.
insmod: can't insert '/lib/modules/acpi-cpufreq.ko': Input/output error
insmod: can't insert '/lib/modules/ixgbe.ko': unknown symbol in module, or unknown parameter
Nov 16 23:46:04 MKHOME kernel: [   15.432569] ixgbe: Unknown symbol mdio_mii_ioctl (err 0)
Nov 16 23:46:04 MKHOME kernel: [   15.432588] ixgbe: Unknown symbol mdio45_probe (err 0)
Nov 16 23:46:05 MKHOME kernel: [   15.577573] 2014-11-17 4:46:5 UTC
Nov 16 23:46:05 MKHOME kernel: [   15.577579] Brand: Synology
Nov 16 23:46:05 MKHOME kernel: [   15.577580] Model: DS-3612xs
Nov 16 23:46:05 MKHOME kernel: [   15.577581] set group disks wakeup number to 4, spinup time deno 7
Nov 16 23:46:05 MKHOME interface-catcher: eth0 () is added
Nov 16 23:46:06 MKHOME kernel: [   16.624781] init: nginx main process (11174) terminated with status 1
Nov 16 23:46:06 MKHOME kernel: [   16.624815] init: nginx faild on spawn stage, stopped
Nov 16 23:46:06 MKHOME interface-catcher: lo () is added
Nov 16 23:46:06 MKHOME kernel: [   17.002217] md: sde5 does not have a valid v1.2 superblock, not importing!
Nov 16 23:46:06 MKHOME kernel: [   17.002238] md: md_import_device returned -22
Nov 16 23:46:06 MKHOME kernel: [   17.002363] md: sdd5 does not have a valid v1.2 superblock, not importing!
Nov 16 23:46:06 MKHOME kernel: [   17.002367] md: md_import_device returned -22
Nov 16 23:46:06 MKHOME kernel: [   17.015379] md/raid:md3: not enough operational devices (2/3 failed)
Nov 16 23:46:06 MKHOME kernel: [   17.015381] md/raid:md3: raid level 5 active with 1 out of 3 devices, algorithm 2
Nov 16 23:46:06 MKHOME spacetool.shared: spacetool.c:1013 Try to force assemble RAID [/dev/md3].[0x2000 file_get_key_value.c:108]
Nov 16 23:46:06 MKHOME kernel: [   17.030243] md: md3: set sdf5 to auto_remap [0]
Nov 16 23:46:06 MKHOME kernel: [   17.172368] md: sde5 does not have a valid v1.2 superblock, not importing!
Nov 16 23:46:06 MKHOME kernel: [   17.172374] md: md_import_device returned -22
Nov 16 23:46:06 MKHOME kernel: [   17.173999] md: sdd5 does not have a valid v1.2 superblock, not importing!
Nov 16 23:46:06 MKHOME kernel: [   17.174004] md: md_import_device returned -22
Nov 16 23:46:06 MKHOME kernel: [   17.198537] md/raid:md3: not enough operational devices (2/3 failed)
Nov 16 23:46:06 MKHOME kernel: [   17.198539] md/raid:md3: raid level 5 active with 1 out of 3 devices, algorithm 2
Nov 16 23:46:06 MKHOME spacetool.shared: spacetool.c:2588 [info] Old vg path: [/dev/vg1000], New vg path: [/dev/vg1000], UUID: [zz5UPe-B5St-uYZc-F6cI-U0KH-GCIO-zMmbtU]
Nov 16 23:46:06 MKHOME spacetool.shared: spacetool.c:2588 [info] Old vg path: [/dev/vg1], New vg path: [/dev/vg1], UUID: [NwsASG-qr7N-EZ14-h9xp-YJEM-Ga3U-46POVC]
Nov 16 23:46:06 MKHOME spacetool.shared: spacetool.c:2595 [info] Activate all VG
Nov 16 23:46:07 MKHOME spacetool.shared: spacetool.c:2606 Activate LVM [/dev/vg1000]
Nov 16 23:46:07 MKHOME spacetool.shared: lvm_vg_activate.c:25 Failed to do '/sbin/vgchange -ay /dev/vg1 > /dev/null 2>&1'
Nov 16 23:46:07 MKHOME spacetool.shared: spacetool.c:2604 Failed to activate LVM [/dev/vg1]
Nov 16 23:46:07 MKHOME spacetool.shared: spacetool.c:2629 space: [/dev/vg1000]
Nov 16 23:46:07 MKHOME spacetool.shared: spacetool.c:2657 space: [/dev/vg1000], ndisk: [1]
Nov 16 23:46:07 MKHOME spacetool.shared: spacetool.c:2629 space: [/dev/vg1]
Nov 16 23:46:07 MKHOME spacetool.shared: spacetool.c:2657 space: [/dev/vg1], ndisk: [3]
Nov 16 23:46:07 MKHOME spacetool.shared: space_pool_meta_open.c:63 Failed to open path: /dev/vg1/syno_vg_reserved_area, errno=No such file or directory
Nov 16 23:46:07 MKHOME spacetool.shared: space_pool_meta_enum.c:40 Failed to read meta operator: /dev/vg1, errno=No such file or directory
Nov 16 23:46:07 MKHOME spacetool.shared: space_map_file_dump.c:1030 failed to enum pool meta of '/dev/vg1'
Nov 16 23:46:07 MKHOME spacetool.shared: SmartDataRead(107) read value /dev/sdf fail
Nov 16 23:46:07 MKHOME spacetool.shared: disk_temperature_get.c:71 read value /dev/sdf fail
Nov 16 23:46:07 MKHOME spacetool.shared: SmartFirmAndSerialRead(156) AtaSmartFirmAndSerialRead fail
Nov 16 23:46:07 MKHOME spacetool.shared: space_int_xml.h:216 SmartFirmAndSerialRead failed
Nov 16 23:46:07 MKHOME spacetool.shared: SmartDataRead(107) read value /dev/sdc fail
Nov 16 23:46:07 MKHOME spacetool.shared: disk_temperature_get.c:71 read value /dev/sdc fail
Nov 16 23:46:07 MKHOME spacetool.shared: SmartFirmAndSerialRead(156) AtaSmartFirmAndSerialRead fail
Nov 16 23:46:07 MKHOME spacetool.shared: space_int_xml.h:216 SmartFirmAndSerialRead failed
Nov 16 23:46:07 MKHOME synovspace: virtual_space_conf_check.c:74 [iNFO] No implementation, skip checking configuration of virtual space [HA]
Nov 16 23:46:07 MKHOME synovspace: virtual_space_conf_check.c:74 [iNFO] No implementation, skip checking configuration of virtual space [sNAPSHOT_ORG]
Nov 16 23:46:07 MKHOME synovspace: virtual_space_conf_check.c:78 [iNFO] "PASS" checking configuration of virtual space [FCACHE], app: [1]
Nov 16 23:46:07 MKHOME synovspace: vspace_wrapper_load_all.c:76 [iNFO] No virtual layer above space: [/volume1] / [/dev/vg1000/lv]
Nov 16 23:46:07 MKHOME s00_synocheckfstab: SmartDataRead(107) read value /dev/sdc fail
Nov 16 23:46:07 MKHOME s00_synocheckfstab: disk_temperature_get.c:71 read value /dev/sdc fail
Nov 16 23:46:07 MKHOME s00_synocheckfstab: SmartDataRead(107) read value /dev/sdc fail
Nov 16 23:46:07 MKHOME s00_synocheckfstab: disk_temperature_get.c:71 read value /dev/sdc fail
Nov 16 23:46:07 MKHOME s00_synocheckfstab: SmartDataRead(107) read value /dev/sdc fail
Nov 16 23:46:07 MKHOME s00_synocheckfstab: disk_temperature_get.c:71 read value /dev/sdc fail
Re-generating missing device nodes...

