satdream

Members
  • Content Count

    69
  • Joined

  • Last visited

Community Reputation

3 Neutral

About satdream

  • Rank
    Regular Member


  1. SMR official lists
  2. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.2-24922 Update 4
     - Loader version and model: JUN'S LOADER v1.03b - DS3617xs
     - Using custom extra.lzma: Yes
     - Installation type: BAREMETAL - HP Microserver Gen8
     - Additional comments:
       1) Manual update.
       2) Once the update finished, the Gen8 started to reboot automatically; I switched it off quickly (during the Gen8's long boot self-test, without issue).
       3) Updated extra.lzma (a simple copy/paste under W10) on the SD card from which the Gen8 boots.
       4) Booted the Gen8 from the updated SD card: no issue with the integrated NIC, the Gen8 is fully visible on the network!
       5) Logged in via SSH and updated /etc.defaults/synoinfo.conf if your configuration is non-standard (e.g. more HDDs). The update reset the values, which had to be restored, in my case to:
          - esataportcfg = 0x00000 (instead of the default 0xff000)
          - usbportcfg = 0x7C000 (instead of the default 0x300000) = 5 USB ports on the Gen8
          - internalportcfg = 0xFFFFFF (instead of the default 0xfff) = 21 disks supported in my case
          (see the synoinfo.conf sketch after this list)
  3. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426
     - Loader version and model: JUN'S LOADER v1.03b - DS3617xs
     - Using custom extra.lzma: Yes
     - Installation type: BAREMETAL - HP Microserver Gen8
     - Additional comments: REBOOT REQUIRED -- Manual update, then update /etc.defaults/synoinfo.conf if your configuration is non-standard (e.g. more HDDs). The update reset the values, which had to be restored, in my case to:
       - esataportcfg = 0x00000 (instead of the default 0xff000)
       - usbportcfg = 0x7C000 (instead of the default 0x300000) = 5 USB ports on the Gen8
       - internalportcfg = 0xFFFFFF (instead of the default 0xfff) = 21 disks supported in my case
       (same synoinfo.conf sketch after this list)
  4. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: disks migrated from a DS3615xs / 6.1.2 / loader 1.02b
     - Loader version and model: JUN'S LOADER v1.03b - DS3617xs
     - Using custom extra.lzma: YES
     - Installation type: BAREMETAL - HP Microserver Gen8, 12 GB RAM, i5-3470T - new install - Dell H310 with LSI 9211-8i P20 IT-mode firmware + extension rack of mixed SAS/SATA HDDs
     - Additional comments: the Gen8 onboard dual Broadcom NIC works (no need for an additional NIC thanks to the native drivers from IG-88)
  5. Ran tests with a fresh install: I confirm that the IronWolf works fine in a brand-new install in a mixed SAS/SATA environment, but as soon as I tried a migration the same issue came back ... IronWolf support in DSM 6.2.2-24922 Update 4 has a bug: the disk is dropped from the pool as faulty even though its status is normal ... Closing the topic / my contribution, and thanks again to everyone who sent me suggestions (and private messages), and especially to @flyride! FINISHED
  6. Finally got the pool working with every status at Normal by removing the IronWolf, then putting it back and adding another disk as well, after a long (long) resync. The config is now fully working with DSM 6.2.2-24922 Update 4 ... But the IronWolf really is an issue; I will run further tests (not with my data, but on a throwaway config) to try to understand it, but for the moment I have to reconfigure all the data accesses etc. Thanks all for the support!
  7. Resync finished with the new 12TB WD, and I pulled the 10TB IronWolf = all data available ... resync successful ... But the pool was still failed ... so I removed the unused 5TB Toshiba (since the beginning of this trouble I do not understand why that HDD's status changed to "Initialized" in DSM, given that the RAID manager considers it OK in the RAID volume). As DSM asked for a disk of at least 8TB, I plugged in a new 8TB Seagate NAS HDD ... Resync initiated, estimated at 8h ... for the first round ...
     Note: the resync duration estimate is imprecise and only covers the pass in progress, not the fact that two resyncs are required: md3 and then md2 have to resync (SHR keeps md0/md1 as the system partition duplicated on all disks, the data on md2/md3 and the parity/error correction on md4/md5; the bulk of the volume to sync is on md3 and md2, so DSM shows two consecutive repairing/resync passes but cannot give a cumulative duration estimate). See the resync-monitoring sketch after this list.
  8. I ran a basic grep -rnw from root on the IronWolf serial number, which returned a limited number of files (see the serial-number search sketch after this list) ... From that I understood that the disk details displayed are now stored in SQLite databases, which I was able to edit with DB Browser ... not difficult to remove the SMART test results etc.; those files are also where the account, connection etc. logs are stored. Plugging the disk into AHCI/SATA also generated a specific disk_latency file (the Gen8 internal AHCI is a 3Gb/s link, while the Dell H310 provides two 6Gb/s links, so DSM is able to measure access latency). These are the .SYNOxxxx files listed previously, plus a disk_latency_tmp.db. I then cleaned up the SQLite records where the IronWolf serial appeared, but no change in the disk status ... apart from removing the log/trace/history of the SMART tests (including the extended one), no change to the disk itself. The issue now seems to be tied more precisely to pool management, since the disk health status is "Normal" but its allocation status is "Faulty" ... so the question is how the pool tracks disks in its structure ... and why the eSATA setting has an impact (on how the IronWolf is managed) ...
  9. Some news. A few more tests performed:
     - Installed the IronWolf directly on the Gen8's onboard SATA and forced an install/migration with another boot card = the IronWolf is listed as "Normal/OK", but the 8TB SATA WD installed in the external enclosure with the other SAS HDDs is not recognized / disks missing.
     - Updated synoinfo.conf, removing eSATA etc.: the 8TB WD is detected, but the IronWolf becomes "Faulty" and is automatically dropped from the pool in DSM.
     => IronWolf HDDs are managed specially by DSM, which enables a few extra functions (monitoring, IronWolf-specific SMART, etc.). BUT this is a problem in my case: to support SAS + SATA HDDs in the same enclosure, the modified parameters change the way DSM detects the IronWolf ... potentially the eSATA support is used when interfacing with the IronWolf ...
     => Not the same behaviour under DSM 6.1, where it worked perfectly; the issue is with 6.2.2.
     Current status:
     - Installed a new 12TB WD SATA on the Gen8 AHCI SATA port (in addition to the 8 disks attached via the H310 LSI card).
     - The DSM pool manager accepted the disk and started a recovery: first disk check (took ~12h) = OK, then recovery in progress ... estimated at 24h ... to be continued ...
  10. The integrated NIC now works using the driver extension published two weeks ago. I run the Gen8 with 6.2.2-24922 Update 4, but I performed a fresh install; no idea whether an update from a previous release will work (see the specific instructions for using the driver extension).
  11. Many thanks, and no worries about the 6.1/6.2 confusion; I have posted a lot of info in this topic (and a few mix-ups) ... OK, understood regarding the array rebuild etc., and I agree with you: the more I check, the more it looks like a "cosmetic" issue as you said ... but archiving the two files as .bak changed nothing, so I assume those log files do not drive the identification of the faulty disk on 6.2 ... and I have no idea where the setting could be changed in the config files ... Interesting point: even with the files renamed to .bak, DSM still lists the tests performed and their status in the history ... so the /var/log files are not what it reads. Is there something to do in /var/log/synolog?
      root@Diskstation:/var/log/synolog# ll
      total 296
      drwx------  2 system log     4096 Dec 27 02:01 .
      drwxr-xr-x 19 root   root    4096 Dec 26 20:51 ..
      -rw-r--r--  1 root   root   26624 Dec 27 02:01 .SYNOACCOUNTDB
      -rw-r--r--  1 system log   114688 Dec 27 01:47 .SYNOCONNDB
      -rw-r--r--  1 system log    32768 Dec 27 02:01 .SYNOCONNDB-shm
      -rw-r--r--  1 system log    20992 Dec 27 02:01 .SYNOCONNDB-wal
      -rw-r--r--  1 system log    12288 Dec 25 18:10 .SYNODISKDB
      -rw-rw-rw-  1 root   root    3072 Dec 27 01:50 .SYNODISKHEALTHDB
      -rw-r--r--  1 system log     8192 Dec 27 01:47 .SYNODISKTESTDB
      -rw-r--r--  1 root   root    2048 Dec 22 15:40 .SYNOISCSIDB
      -rw-r--r--  1 system log    14336 Dec 27 01:50 .SYNOSYSDB
      -rw-r--r--  1 system log    32768 Dec 27 01:51 .SYNOSYSDB-shm
      -rw-r--r--  1 system log     1080 Dec 27 01:51 .SYNOSYSDB-wal
      None of them are editable as text (they are SQLite databases; see the SQLite sketch after this list) ... Thanks
      PS: It is 2am and I will stop investigating for the next two days as I am away from home; I will continue when back.
  12. Sorry for the wrong wording, I mixed up the SMART test and the SMART status:
      - the SMART Extended test reported No Errors, everything "Normal" = the final disk health status is "Normal";
      - but strangely (see the screenshot of the SMART details) a few SMART attributes show what look like errors (granted, SMART details are sometimes a bit "complex" to analyse): Raw_Read_Error_Rate, Seek_Error_Rate, Hardware_ECC_Recovered. Checking a few forums, this seems common with IronWolf drives and not a real issue ... given that the reallocated sector count is at 0 ...
      I feel pretty confident in the SMART Extended test since it is a non-destructive physical test (read/write of each disk sector) ... In addition the IronWolf has a specific test integrated in DSM, which I ran and which reports No Error (000, Normal).
      Found no smart_test_log.xml, but:
      /var/log/healthtest/dhm_<IronWolfSerialNo>.xz
      /var/log/smart_result/2019-12-25_<longnum>.txz
      I am keeping them while waiting for a recommendation!
      What about rebuilding the array? The sequence to rebuild the array is the well-known one:
      1. umount /opt
      2. umount /volume1
      3. syno_poweroff_task -d
      4. mdadm --stop /dev/mdX
      5. mdadm -Cf /dev/mdxxxx -e1.2 -n1 -l1 /dev/sdxxxx -u<id number>
      6. e2fsck -pvf -C0 /dev/mdxxxx
      7. cat /proc/mdstat
      8. reboot
      but my array looks correct:
      cat /proc/mdstat
      Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
      md2 : active raid5 sdk5[1] sdh5[7] sdi5[6] sdm5[8] sdn5[4] sdg5[3] sdj5[2]
            27315312192 blocks super 1.2 level 5, 64k chunk, algorithm 2 [8/7] [_UUUUUUU]
      md3 : active raid5 sdk6[1] sdm6[7] sdh6[6] sdi6[5] sdn6[4] sdg6[3] sdj6[2]
            6837200384 blocks super 1.2 level 5, 64k chunk, algorithm 2 [8/7] [_UUUUUUU]
      md5 : active raid5 sdi8[0] sdh8[1]
            3904788864 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      md4 : active raid5 sdn7[0] sdm7[3] sdh7[2] sdi7[1]
            11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      md1 : active raid1 sdg2[0] sdh2[5] sdi2[4] sdj2[1] sdk2[2] sdl2[3] sdm2[7] sdn2[6]
            2097088 blocks [14/8] [UUUUUUUU______]
      md0 : active raid1 sdg1[0] sdh1[5] sdi1[4] sdj1[1] sdk1[2] sdl1[3] sdm1[6] sdn1[7]
            2490176 blocks [12/8] [UUUUUUUU____]
      unused devices: <none>
      Only the IronWolf disk is considered faulty ... and I am not sure a rebuild of the array would reset the disk error (see the mdadm inspection sketch after this list). It is completely crazy: the disks are all normal, the array is fully accessible, but DSM considers one disk faulty and blocks any action (including adding a drive etc.). Thx
  13. No change; the regenerated disk_overview.xml no longer contains the disconnected SSD, and the entry for the IronWolf has exactly the same structure as for every other disk:
      <SN_xxxxxxxx model="ST10000VN0004-1ZD101">
        <path>/dev/sdm</path>
        <unc>0</unc>
        <icrc>0</icrc>
        <idnf>0</idnf>
        <retry>0</retry>
      </SN_xxxxxxxx>
      So where is DSM storing the disk status? I remember something about Synology shipping a customized version of the md driver and mdadm toolset that adds a 'DriveError' flag to the rdev->flags structure in the kernel ... but I don't know how to change it (see the md state sketch after this list) ... Thx
  14. Done, removed the XML tag of the IronWolf, but after a reboot the status is still the same ... and I see that the former SSDs I removed from the install (disconnected) are still listed in disk_overview.xml. What about performing a reinstall? Or changing the serial ID of the 3615xs in order to trigger a migration? Would that reset the status? Since a migration does not mean reinstalling the applications/config etc., it would not be a big issue to perform one ... Thanks a lot
  15. Here is the screenshot, sorry it is in French ...
      - The IronWolf is displayed as "En panne" (= "Failed" or "Broken") but with 0 bad sectors (although the SMART Extended test returned a few errors).
      - The pool status is "Degraded", with the failed drive shown with a "Failed" allocation status and the rest of the disks as normal.
      - The list of disks (the unused Toshiba is shown as "Initialized").
      - And the SMART status of the IronWolf.
      Many thanks!
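
Sketch for the synoinfo.conf fix in items 2 and 3 above: a minimal shell snippet, assuming the stock DSM key names and quoting in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf; the hex values are the ones from those posts and only make sense for this particular Gen8 + H310 build.

    # run as root over SSH; back up, then restore the port bitmasks in both copies
    for f in /etc/synoinfo.conf /etc.defaults/synoinfo.conf; do
        cp -a "$f" "$f.bak"
        sed -i 's/^esataportcfg=.*/esataportcfg="0x00000"/'        "$f"
        sed -i 's/^usbportcfg=.*/usbportcfg="0x7C000"/'            "$f"
        sed -i 's/^internalportcfg=.*/internalportcfg="0xFFFFFF"/' "$f"
    done
    # each set bit in a mask enables one port of that type (internal, eSATA or USB);
    # a reboot is normally needed before the new port layout is visible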
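
Sketch for monitoring the double resync described in item 7: standard md tooling only, nothing DSM-specific is assumed; md2/md3 are the array names shown later in item 12.

    # overall progress; each mdX resyncs in turn, so the ETA only covers the current pass
    while true; do cat /proc/mdstat; sleep 60; done

    # per-array state and rebuild percentage
    mdadm --detail /dev/md2 | grep -E 'State|Rebuild Status'
    mdadm --detail /dev/md3 | grep -E 'State|Rebuild Status'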
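
Sketch of the serial-number search from item 8: the directories searched and the placeholder serial are assumptions, not the exact command used in that post.

    SERIAL="ZA2XXXXX"   # hypothetical IronWolf serial number
    # search the configuration and log trees only; grepping all of / would also scan the data volumes
    grep -rnw -e "$SERIAL" /etc /etc.defaults /var/log /var/lib 2>/dev/null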
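
Sketch for inspecting the .SYNO*DB files listed in item 11: they are SQLite databases rather than plain text, which is why they cannot be edited directly. This assumes the sqlite3 CLI is available on the box; if not, copy the files off and open them with DB Browser for SQLite as in item 8. No table names are assumed, so the dump is simply grepped.

    cd /var/log/synolog
    SERIAL="ZA2XXXXX"                     # hypothetical IronWolf serial number
    sqlite3 .SYNODISKDB '.tables'         # list the tables of the disk database
    # dump each database as SQL and look for rows mentioning the serial
    for db in .SYNO*DB; do
        echo "== $db =="
        sqlite3 "$db" '.dump' | grep -i "$SERIAL"
    done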
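
Sketch for checking how md itself sees the IronWolf before attempting the rebuild sequence in item 12: standard mdadm options only; sdm5 is a placeholder for the IronWolf's data partition.

    # array-level view: which slot is missing and whether any member is flagged faulty
    mdadm --detail /dev/md2

    # member-level view of the superblock on the suspect partition
    mdadm --examine /dev/sdm5

    # event counters of all md2 members; a member that fell out of sync shows a lower count
    mdadm --examine /dev/sd[g-n]5 | grep -E '^/dev|Events'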
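
Sketch related to the 'DriveError' question in item 13: the mainline md driver exposes a per-member state file in sysfs (values such as in_sync or faulty). Whether Synology's customized md surfaces its extra flag there is an open assumption; this only shows where the stock kernel keeps the equivalent information.

    # per-member state of every md array, as the md driver sees it
    for md in /sys/block/md*/md; do
        echo "== ${md%/md} =="
        for dev in "$md"/dev-*; do
            printf '%s: %s\n' "${dev##*/}" "$(cat "$dev/state")"
        done
    done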