XPEnology Community

Hard disk not detected and pool degraded after restart



Hi Marius, 

 

I own a Synology DS920+ currently running DSM 7.2.1-69057 Update 5. The NAS had been running continuously, but after shutting it down for maintenance yesterday, I noticed on restarting it today that one HDD has degraded and disconnected.

 

This is the second time I have experienced this issue; it has happened twice in the past two years, and the HDD is new.

I suspect it may not be an HDD problem; the OS may be affecting it.

 

Thank You

 


ash-4.4# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sata1p5[1]
      5855691456 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sata1p2[1]
      2097088 blocks [4/1] [_U__]

md0 : active raid1 sata1p1[1]
      2490176 blocks [4/1] [_U__]

unused devices: <none>
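For reference, the `[2/1] [_U]` notation above means each mirror expects 2 members but only 1 is active, and the `_` marks the missing slot. A quick, hedged sketch of how that status can be spotted programmatically (the parsing assumes the `/proc/mdstat` layout shown above; the sample string is taken from this post):

```python
import re

def degraded_arrays(mdstat_text):
    """Return (md device, member mask) pairs where the mask shows a missing disk."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        # status line looks like: "5855691456 blocks super 1.2 [2/1] [_U]"
        s = re.search(r"\[(\d+)/(\d+)\]\s*\[([U_]+)\]", line)
        if s and current:
            total, active, mask = int(s.group(1)), int(s.group(2)), s.group(3)
            if active < total or "_" in mask:
                degraded.append((current, mask))
    return degraded

sample = """md2 : active raid1 sata1p5[1]
      5855691456 blocks super 1.2 [2/1] [_U]
"""
print(degraded_arrays(sample))  # [('md2', '_U')]
```

All three arrays in the output above would be flagged this way, since every mask contains at least one `_`.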

 

ash-4.4# vgdisplay -v
    Using volume group(s) on command line.
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.45 TiB
  PE Size               4.00 MiB
  Total PE              1429612
  Alloc PE / Size       1427971 / 5.45 TiB
  Free  PE / Size       1641 / 6.41 GiB
  VG UUID               blwEd7-I01i-6gIO-i41d-Y4U4-cPOI-ctVwtA

  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                N2aCBI-8Cos-RX8d-Bp2K-FbDI-GixS-mxQgvA
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           248:0

  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                O8iKF2-Sdsc-pceS-Gvyh-B3Gs-VflC-ABlEOA
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                5.45 TiB
  Current LE             1427968
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           248:1

  --- Physical volumes ---
  PV Name               /dev/md2
  PV UUID               kqtebU-C0N9-YXHb-4lam-exZA-CD4l-DLdIBA
  PV Status             allocatable
  Total PE / Free PE    1429612 / 1641
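As a sanity check on the vgdisplay figures above, the VG size follows directly from the extent counts: 1429612 PEs at 4 MiB each is about 5.45 TiB, and the 1641 free PEs come to about 6.41 GiB. A small arithmetic sketch using the numbers from this output:

```python
PE_MIB = 4                      # PE Size from vgdisplay
total_pe, free_pe = 1429612, 1641

vg_tib = total_pe * PE_MIB / 1024**2   # MiB -> TiB
free_gib = free_pe * PE_MIB / 1024     # MiB -> GiB

print(round(vg_tib, 2), round(free_gib, 2))  # 5.45 6.41
```

This confirms the LVM layer still sees the full pool; the degradation is at the md (RAID) layer underneath /dev/md2, not in the volume group itself.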

 

ash-4.4# lvdisplay -v
    Using logical volume(s) on command line.
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                N2aCBI-8Cos-RX8d-Bp2K-FbDI-GixS-mxQgvA
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           248:0

  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                O8iKF2-Sdsc-pceS-Gvyh-B3Gs-VflC-ABlEOA
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                5.45 TiB
  Current LE             1427968
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           248:1

 


ash-4.4# lvm pvscan
  PV /dev/md2   VG vg1   lvm2 [5.45 TiB / 6.41 GiB free]
  Total: 1 [5.45 TiB] / in use: 1 [5.45 TiB] / in no VG: 0 [0   ]

 


ash-4.4# cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/mapper/cachedev_0 /volume1 btrfs auto_reclaim_space,ssd,synoacl,relatime,relatime_period=30,nodev 0 0

 
