Question
ccfc1986
Hi All!
I have an 1812+ 8-bay system. I recently added another 20TB drive... so my drive makeup is now the following:
3 x 20TB HDD
5 x 8TB HDD
After the array (SHR-1) rebuilt for 5 days, I received a notification that system errors had occurred. The rebuild did finish and the status said 'Healthy', but with a 'Warning' that there were some system errors. The recommendation was to reboot and run a system scan, which I allowed it to do.
After the reboot, that scan ran e2fsck, which took about 5 to 6 hours and has since finished, based on checking 'top'.
Another process is now running and has been for about 90 minutes. This process is:
/sbin/debugfs -q /.remap.vg1000.lv /dev/vg1000/lv
Should I let this process finish? I assume the answer is yes, but please confirm.
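One way to check whether that debugfs process is still making progress rather than hanging (this assumes DSM exposes the standard Linux /proc interface; the PID below is just a placeholder for the one shown in 'top') is to compare its I/O counters a minute or so apart:

# replace 12345 with the debugfs PID reported by 'top'
cat /proc/12345/io
sleep 60
cat /proc/12345/io
# if rchar/wchar and read_bytes/write_bytes keep increasing,
# the process is still working through the volume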
I have run some other system checks/commands, and the outputs are below. It LOOKS to me that the volume is still generally OK... but I do not see it when I do a 'df -h'.
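My understanding is that 'df -h' only lists mounted filesystems, so the volume would not show up there while the check is still running. Assuming the usual Linux tools are present on DSM, a quick way to confirm whether it is currently mounted would be:

grep volume1 /proc/mounts
mount | grep vg1000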
Here is the output:
lvm vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg1000" using metadata type lvm2
lvm lvscan
ACTIVE '/dev/vg1000/lv' [70.92 TiB] inherit
pvs
PV VG Fmt Attr PSize PFree
/dev/md2 vg1000 lvm2 a-- 19.07t 0
/dev/md3 vg1000 lvm2 a-- 19.10t 0
/dev/md4 vg1000 lvm2 a-- 10.92t 0
/dev/md5 vg1000 lvm2 a-- 21.83t 0
vgs
VG #PV #LV #SN Attr VSize VFree
vg1000 4 1 0 wz--n- 70.92t 0
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv vg1000 -wi-ao---- 70.92t
pvdisplay
--- Physical volume ---
PV Name /dev/md2
VG Name vg1000
PV Size 19.07 TiB / not usable 2.94 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4999687
Free PE 0
Allocated PE 4999687
PV UUID 39SMYw-LvD9-csns-Ibx0-Xk6L-NcRN-gH5z57
--- Physical volume ---
PV Name /dev/md3
VG Name vg1000
PV Size 19.10 TiB / not usable 1.31 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 5007581
Free PE 0
Allocated PE 5007581
PV UUID zigxlk-rzBz-w71G-XG0c-T5NK-Kdty-2asJUO
--- Physical volume ---
PV Name /dev/md4
VG Name vg1000
PV Size 10.92 TiB / not usable 2.69 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2861564
Free PE 0
Allocated PE 2861564
PV UUID gaMsmu-3noo-SQlO-A2Ol-9irc-OeQP-PAsC7H
--- Physical volume ---
PV Name /dev/md5
VG Name vg1000
PV Size 21.83 TiB / not usable 3.94 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 5721574
Free PE 0
Allocated PE 5721574
PV UUID cMM5qC-lxe1-ET5H-uUjR-1X0R-9oqd-2mf5pX
vgdisplay
--- Volume group ---
VG Name vg1000
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 57
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 4
Act PV 4
VG Size 70.92 TiB
PE Size 4.00 MiB
Total PE 18590406
Alloc PE / Size 18590406 / 70.92 TiB
Free PE / Size 0 / 0
VG UUID R3FyH1-QjrW-UoEu-xDM6-Ihzq-AuQ0-45PC2M
lvdisplay
--- Logical volume ---
LV Path /dev/vg1000/lv
LV Name lv
VG Name vg1000
LV UUID 7dAjuW-c4q8-223V-L2QF-DmFw-dfJv-UR5ZND
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 70.92 TiB
Current LE 18590406
Segments 4
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:0
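As a sanity check on the LVM metadata above, the per-PV extent counts add up exactly to the volume group and logical volume totals:

echo $((4999687 + 5007581 + 2861564 + 5721574))
# prints 18590406, which matches Total PE in vgdisplay and Current LE in lvdisplay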
cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
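For reference, if the checks complete and the volume still does not come back on its own, I assume a manual mount matching that fstab entry would look roughly like the following (not something to attempt while e2fsck or the debugfs job is still running):

# use the existing /etc/fstab entry
mount /volume1
# or spell it out explicitly
mount -t ext4 -o usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime /dev/vg1000/lv /volume1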
I would just like someone to confirm that this debugfs run is an EXPECTED part of the reboot/scan process following the completion of the e2fsck scan. If so, I will be PATIENT and wait for it to finish, after which I assume the NAS will reboot and let me log back in to the web GUI... If it is NOT expected to run, any recommended next steps?