XPEnology Community

Help I lost access to ~45 Tb of data ...


Recommended Posts

I apologize, you said you were on 6.2 and the last XML file referenced was for 6.1.x.  I haven't yet had to recover a flagged drive status on 6.2.  If you want, try renaming those two files with a ".bak" extension and reboot, then see if the drive status changes.  In any case, I think the error is cosmetic.  This drive (/dev/sdm) has valid data on it and is present in all your arrays.
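For reference, a quick read-only way to confirm that from an SSH shell, assuming the standard DSM md numbering, is something like:

cat /proc/mdstat                          # partitions of /dev/sdm should be listed in every mdX array
mdadm --detail /dev/md2 | grep -i sdm     # e.g. the first data array; repeat for md3 etc.

Nothing above writes anything; it only shows array membership.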

 

Do not try to fix the array with the shell commands you found.  e2fsck has no business being run against a btrfs file system, and those commands attempt a brute-force re-creation of an array on top of an existing array, which is a high-risk scenario.
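By way of contrast, the only shell commands worth running at this stage are read-only status checks, for example (a sketch; nothing here modifies the array or the filesystem):

cat /proc/mdstat                 # overall array state
mdadm --detail /dev/md2          # clean/degraded state and which member, if any, is missing
btrfs filesystem show            # btrfs view of the volume, informational only

If a btrfs filesystem ever needs checking, the tool is "btrfs check" on the unmounted device, never e2fsck; and "mdadm --create" should never be pointed at disks that already belong to an array.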

 

Rebuilding the array should be done from Storage Manager, especially when you have an SHR array.  You only need to add the 5TB disk /dev/sdl for the repair; again, the IronWolf drive is already present and working.

 


On 12/27/2019 at 5:29 AM, flyride said:


Many thanks, and no worries about the 6.1/6.2 confusion; I have provided a lot of info in this topic (and a few mix-ups ;) ) ... Ok, understood concerning the array rebuild etc.

 

I am with you: the more I check, the more it looks like a "cosmetic" issue as you said ... but renaming the two files to .bak changed nothing :-(

 

I assume those log files are not what drives the faulty-disk identification on 6.2 ... and I have no idea which config file the setting would have to be changed in ...

 

Interesting point: even with the files renamed to .bak, DSM still lists the tests performed and their status in the history ... so the /var/log files are apparently not what it reads.

 

Is there something to look at in /var/log/synolog?

 

root@Diskstation:/var/log/synolog# ll
total 296
drwx------  2 system log    4096 Dec 27 02:01 .
drwxr-xr-x 19 root   root   4096 Dec 26 20:51 ..
-rw-r--r--  1 root   root  26624 Dec 27 02:01 .SYNOACCOUNTDB
-rw-r--r--  1 system log  114688 Dec 27 01:47 .SYNOCONNDB
-rw-r--r--  1 system log   32768 Dec 27 02:01 .SYNOCONNDB-shm
-rw-r--r--  1 system log   20992 Dec 27 02:01 .SYNOCONNDB-wal
-rw-r--r--  1 system log   12288 Dec 25 18:10 .SYNODISKDB
-rw-rw-rw-  1 root   root   3072 Dec 27 01:50 .SYNODISKHEALTHDB
-rw-r--r--  1 system log    8192 Dec 27 01:47 .SYNODISKTESTDB
-rw-r--r--  1 root   root   2048 Dec 22 15:40 .SYNOISCSIDB
-rw-r--r--  1 system log   14336 Dec 27 01:50 .SYNOSYSDB
-rw-r--r--  1 system log   32768 Dec 27 01:51 .SYNOSYSDB-shm
-rw-r--r--  1 system log    1080 Dec 27 01:51 .SYNOSYSDB-wal


None of them are editable as plain text ...
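They are SQLite databases rather than text logs, so they can at least be inspected read-only. A sketch, assuming the sqlite3 client is present on the box (the table names are not documented, so list them first):

cd /var/log/synolog
sqlite3 .SYNODISKTESTDB '.tables'                        # list the tables in the DB
sqlite3 .SYNODISKHEALTHDB '.schema'                      # show the table definitions
sqlite3 .SYNODISKTESTDB 'SELECT * FROM sqlite_master;'   # generic view of what the file contains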

 

Thanks 

 

PS: It is 2am and I will stop investigating for the next two days as I am away from home; I will continue when back :)

Edited by Polanskiman
Added code tag.

Some news:

 

A few other tests performed:

- Installed the IronWolf directly on the Gen8 onboard SATA interface and forced an install/migration with another boot card = IronWolf listed as "Normal/OK", but the 8TB SATA WD installed in the external enclosure with the other SAS HDDs was not recognized (disks missing)

- Updated the synoconf to remove the eSATA ports etc. (see the sketch after this list)

- the 8TB WD is now detected etc.

=> the IronWolf becomes "Faulty" and is automatically dropped out of the pool in DSM

 

=> IronWolf HDDs are managed specifically in DSM, enabling a few additional functions such as monitoring, IronWolf-specific SMART, etc. BUT it is an issue in my case: to support SAS + SATA HDDs in the same enclosure, the parameter changes modify the way DSM detects the IronWolf ... potentially the eSATA support is involved in how the IronWolf is interfaced ...

=> not the same behaviour under DSM 6.1, where it was working perfectly; the issue appears with 6.2.2
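For anyone trying to reproduce this: the port-mapping change mentioned above presumably means the usual XPEnology edit of the port bitmasks in synoinfo.conf (an assumption, since the exact file was not named). A read-only way to inspect the relevant values before touching anything:

grep -E 'maxdisks|internalportcfg|esataportcfg|usbportcfg' /etc.defaults/synoinfo.conf
grep -E 'maxdisks|internalportcfg|esataportcfg|usbportcfg' /etc/synoinfo.conf
# /etc.defaults holds the template, /etc the active copy; the hex bitmasks decide
# which ports DSM treats as internal, eSATA or USB.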

 

Current status:

- installed a new 12TB WD SATA drive on the Gen8 AHCI SATA port (in addition to the 8 disks attached via the LSI H310 card)

- the DSM pool manager accepted the disk and initiated a recovery: 1st disk check (took ~12h) = OK

- recovery now in progress ... estimated at 24h (progress is also visible from the shell, see below)
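A minimal way to follow that rebuild from an SSH session, assuming the usual md device names:

cat /proc/mdstat                                    # "finish=" gives the per-array ETA
mdadm --detail /dev/md2 | grep -E 'State|Rebuild'   # state and rebuild percentage of one array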

 

to be continued ...

Edited by satdream
Link to comment
Share on other sites

Thinking back, there is new IronWolf-specific code in 6.2; Synology was touting it as a feature upon release. It seems like this may be related to that.  If you do figure out which file(s) are locking your drive out, please post, as I doubt anyone here has encountered this before.
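A rough starting point for that hunt might be nothing more than (a sketch, no DSM-specific paths assumed beyond the obvious ones):

find / -iname '*ironwolf*' 2>/dev/null                      # binaries/configs with IronWolf in the name
grep -ril 'ironwolf' /etc.defaults /usr/syno 2>/dev/null    # files that mention it in their contents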


50 minutes ago, flyride said:


I ran a basic grep -rnw from root on the IronWolf serial number, which returned a limited number of files ... I understood that the displayed disk details are now stored in SQLite databases, which I was able to edit with DB Browser ... it is not difficult to remove the SMART test results etc. Those files are also where the account, connection etc. logs are stored ... plugging the drive on AHCI/SATA also generated a specific disk_latency file (the Gen8 internal AHCI is a 3Gb link while the Dell H310 has two 6Gb links, so DSM is able to determine a latency difference in the accesses).
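For reference, the search was essentially of this form (the serial number below is a made-up placeholder):

grep -rnw / -e 'ZA2XXXXX' 2>/dev/null                          # full filesystem walk, slow
grep -rl 'ZA2XXXXX' /var/log /etc /etc.defaults 2>/dev/null    # narrower, much faster pass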

 

It is those .SYNOxxxx files listed previously, plus a disk_latency_tmp.db. I cleaned them by removing the SQLite records where the IronWolf serial was identified, but there was no change in the disk status ... apart from wiping the log/trace/history of the SMART tests (quick and extended), nothing changed for the disk itself.

 

But now the issue seems more precisely linked to the pool management, as the disk health status is "Normal" while the allocation status is "Faulty" ... the question is how the pool considers disks in its structure ... and why the eSATA settings have an impact (as does the IronWolf management) ...
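One read-only way to see how the arrays themselves regard each member, independently of what the DSM GUI reports (the partition name below is a placeholder for the IronWolf's data partition):

mdadm --detail /dev/md2              # per-array view: state of every member as md sees it
mdadm --detail /dev/md3
mdadm --examine /dev/sdX5            # per-disk view: what the on-disk md superblock says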

 

 

Edited by satdream

Resync finalized with the new 12TB WD, removed the 10TB IronWolf = all data available ... resync successful ...

 

But the pool is still shown as failed ... so I removed the unused 5TB Toshiba (since the beginning of these issues I have not understood why this HDD's status was changed to "Initialized" in DSM, given that the RAID manager considered it OK in the RAID volume). As DSM was asking for a disk of 8TB minimum, I plugged in a new 8TB Seagate NAS HDD ...

 

Resync initiated, estimated at 8h ... for the 1st round ...

 

Note: the resync duration prediction is a bit imprecise and only considers the resync in progress, not the fact that two resyncs are requested: md3 and then md2 have to be resynced (SHR treats md0/md1 as system partitions duplicated on all disks, the data is on md2/md3 and the parity/error correction on md4/md5; the bulk of the volume to sync is on md3 and md2, which is why DSM shows two consecutive repairing/resync actions but cannot give a cumulative duration estimate).
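The two passes and their individual ETAs can be watched from the shell, for example:

cat /proc/mdstat                                    # shows the pass currently running and its "finish=" estimate
mdadm --detail /dev/md3 | grep -Ei 'state|status'   # per-array state; md2 starts its own pass once md3 completes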

Edited by satdream

Finally got the pool working and all statuses at Normal by removing the IronWolf, then replacing it, and adding another disk too, after a long (long) resync. The config is now fully working with DSM 6.2.2-24922 Update 4 ...

 

But the IronWolf is really an issue; I will do other tests (not with my data, but with a fake config ;) ) to try to understand it, but for the moment I have to reconfigure all the data accesses etc.

 

Thanks all for the support!

Edited by satdream

Tests done with a fresh install: I confirm that the IronWolf works fine in a new fresh install in a mixed SAS/SATA environment, but when I then tried a migration the same issue appeared ... the IronWolf support in DSM 6.2.2-24922 Update 4 has a bug ... the disk is put out of the pool as faulty even though its status is normal ...

 

Closing the topic / my contribution, and thanks again to all of you who sent me suggestions (and private messages), and especially to @flyride!

 

FINISH

