
SSD cache missing


ADMiNZ

Question

Hello. After restoring the system, the SSD cache disappeared. Switching to a new version was also unsuccessful and did not help.

What can be done about it? The cache disks are there, but the system does not attach them.

 

(screenshots 001-006 attached)

 

ash-4.3# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md2 : active raid5 sda3[5] sdd3[4] sde3[3] sdc3[2] sdb3[1]
      7794770176 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5] sdg2[6]
      2097088 blocks [16/7] [UUUUUUU_________]
      
md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3] sde1[4] sdf1[5] sdg1[6]
      2490176 blocks [16/7] [UUUUUUU_________]

 

ash-4.3# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Jun 22 19:18:07 2021
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 16
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jun 25 18:59:43 2021
          State : clean, degraded 
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

           UUID : bc535ace:18245e6d:3017a5a8:c86610be
         Events : 0.11337

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
       6       8       97        6      active sync   /dev/sdg1
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed
       -       0        0       13      removed
       -       0        0       14      removed
       -       0        0       15      removed

 

ash-4.3# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Tue Jun 22 19:18:10 2021
     Raid Level : raid1
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 16
  Total Devices : 7
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jun 25 17:50:04 2021
          State : clean, degraded 
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

           UUID : 6b7352a0:2dd09c09:3017a5a8:c86610be
         Events : 0.17

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
       4       8       66        4      active sync   /dev/sde2
       5       8       82        5      active sync   /dev/sdf2
       6       8       98        6      active sync   /dev/sdg2
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed
       -       0        0       13      removed
       -       0        0       14      removed
       -       0        0       15      removed

 

ash-4.3# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Wed Dec  9 02:15:41 2020
     Raid Level : raid5
     Array Size : 7794770176 (7433.67 GiB 7981.84 GB)
  Used Dev Size : 1948692544 (1858.42 GiB 1995.46 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Fri Jun 25 17:50:15 2021
          State : clean 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : memedia:2  (local to host memedia)
           UUID : ae85cc53:ecc1226b:0b6f21b5:b81b58c5
         Events : 34755

    Number   Major   Minor   RaidDevice State
       5       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       67        3      active sync   /dev/sde3
       4       8       51        4      active sync   /dev/sdd3

 


4 answers to this question

ash-4.3# spacetool --synoblock-enum
****** Syno-Block of /dev/sda ******
         Version: 5
         Space Type: Storage Pool
         Space Path: @storage_pool
         Is EBox Cross: FALSE
         PV count: 1
         VG path: /dev/vg1
         VG UUID: GC3cSi-vZTv-SuF0-cS7c-CCrg-0riS-OKeyEs
         RAID UUID: [ae85cc53:ecc1226b:0b6f21b5:b81b58c5]

****** Syno-Block of /dev/sdb ******
         Version: 5
         Space Type: Storage Pool
         Space Path: @storage_pool
         Is EBox Cross: FALSE
         PV count: 1
         VG path: /dev/vg1
         VG UUID: GC3cSi-vZTv-SuF0-cS7c-CCrg-0riS-OKeyEs
         RAID UUID: [ae85cc53:ecc1226b:0b6f21b5:b81b58c5]

****** Syno-Block of /dev/sdc ******
         Version: 5
         Space Type: Storage Pool
         Space Path: @storage_pool
         Is EBox Cross: FALSE
         PV count: 1
         VG path: /dev/vg1
         VG UUID: GC3cSi-vZTv-SuF0-cS7c-CCrg-0riS-OKeyEs
         RAID UUID: [ae85cc53:ecc1226b:0b6f21b5:b81b58c5]

****** Syno-Block of /dev/sdd ******
         Version: 5
         Space Type: Storage Pool
         Space Path: @storage_pool
         Is EBox Cross: FALSE
         PV count: 1
         VG path: /dev/vg1
         VG UUID: GC3cSi-vZTv-SuF0-cS7c-CCrg-0riS-OKeyEs
         RAID UUID: [ae85cc53:ecc1226b:0b6f21b5:b81b58c5]

****** Syno-Block of /dev/sde ******
         Version: 5
         Space Type: Storage Pool
         Space Path: @storage_pool
         Is EBox Cross: FALSE
         PV count: 1
         VG path: /dev/vg1
         VG UUID: GC3cSi-vZTv-SuF0-cS7c-CCrg-0riS-OKeyEs
         RAID UUID: [ae85cc53:ecc1226b:0b6f21b5:b81b58c5]

****** Syno-Block of /dev/sdf ******
         Not found!!

****** Syno-Block of /dev/sdg ******
         Not found!!

 


The two SSDs are shown as "initialized", which means system and swap were copied to them, as if in the process of building a RAID.

Under normal circumstances a new disk in the system is not touched and stays empty; cache disks don't hold system/swap partitions the way disks that make up a data volume do.

My conclusion would be that the two cache disks can't be reverted to get their cache content back, because roughly 4.4 GB on each has already been overwritten by the added system and swap partitions.

If there was cache content that had not yet been written to the volume, it is lost. It's hard to say how much is missing; that would need some diagnostics on the file system of the RAID5 volume. The safe way would be to use your backup and restore the data of the RAID5 volume.
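A first, non-destructive look could be a read-only file system check. A minimal sketch, assuming the data volume is unmounted and is the usual /dev/vg1/volume_1 (the logical volume name is an assumption, confirm it with lvs); whether it is ext4 or btrfs depends on how the volume was created:

lvs                                        # list logical volumes in vg1; volume_1 below is an assumption
e2fsck -n /dev/vg1/volume_1                # if the volume is ext4: check only, change nothing
btrfs check --readonly /dev/vg1/volume_1   # if the volume is btrfs instead, also non-destructive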

The partitions that were added to the two SSDs can be removed, and then it should be possible to add them as cache again; a sketch of that follows below.
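A minimal sketch of removing them from the shell, assuming the two SSDs are /dev/sdf and /dev/sdg as in the mdstat output above (double-check the device names on your system before running anything):

mdadm /dev/md0 --fail /dev/sdf1 --remove /dev/sdf1                # drop the SSDs from the DSM system partition
mdadm /dev/md0 --fail /dev/sdg1 --remove /dev/sdg1
mdadm /dev/md1 --fail /dev/sdf2 --remove /dev/sdf2                # drop the SSDs from the swap partition
mdadm /dev/md1 --fail /dev/sdg2 --remove /dev/sdg2
mdadm --zero-superblock /dev/sdf1 /dev/sdf2 /dev/sdg1 /dev/sdg2   # forget the md metadata on those partitions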

 

I'm not a recovery expert, so you might want to get a second opinion on that.

4 hours ago, ADMiNZ said:

All data is lost, because there was nothing to recover from the RAID5 with all the possible options

Not necessarily. If there was no cache content waiting to be written to the SSDs, the state of the RAID5 (or more likely SHR1) volume might not be affected.

The first step is to find out what happened. If there are hardware problems, you will not have much success with recovery procedures, or even with a clean start over without the data.

I'm not sure what "all possible options" means in your case. Also, the description of what happened so far

22 hours ago, ADMiNZ said:

After restoring the system, the SSD cache disappeared. Switching to a new version was also unsuccessful and did not help.

raises more questions than it answers; it starts with a restore ...?

The step where you installed a new DSM version might be the one where the system/swap partitions ended up on the former cache disks.

This step usually does not touch the data volume, so the SHR1 should still be in the same state as before.

19 hours ago, ADMiNZ said:

VG path: /dev/vg1

That indicates an SHR1 volume. Also, all 5 disks are still in the RAID5, so you should be able to assemble the RAID and mount the volume to analyze the file structure (a sketch follows below), or send the disks to a recovery service to salvage the data.
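A minimal sketch of read-only access, assuming nothing is assembled yet and the data volume is the usual /dev/vg1/volume_1 (on a booted DSM the md2 array is normally assembled already, so only the later steps may be needed; the logical volume name is an assumption):

mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3   # reassemble the RAID5 from the data partitions
vgchange -ay vg1                     # activate the LVM volume group seen in the synoblock output
lvs                                  # confirm the logical volume name; volume_1 below is an assumption
mount -o ro /dev/vg1/volume_1 /mnt   # mount read-only to inspect the file structure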

If you don't need any data back, just delete the two SSDs (Storage Manager > HDD/SSD > Action > Deactivate might do that, so you can use them as cache disks again; otherwise attach them to another system and delete all partitions, as sketched below) and remove the SHR1 volume and storage pool, then create a new pool and volume. To exclude hardware problems, do some tests and check the logs in /var/log/ for unusual events.
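If you go the route of attaching the SSDs to another Linux system, a minimal sketch of wiping them (the device names /dev/sdX and /dev/sdY are placeholders; identify the right disks with lsblk before touching anything):

lsblk -o NAME,SIZE,MODEL     # find the two SSDs by size and model
wipefs --all /dev/sdX        # remove partition table and RAID/file system signatures
wipefs --all /dev/sdY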

 

Depending on the hardware, maybe don't use an SSD cache at all. If it's just a 1 GBit NIC, there is usually no gain from an SSD cache, as the network already limits your speed to ~110 MB/s. When handling a lot of smaller files (like in a multi-user office environment) there might be some gain, but in 1-3 user home use there is not much that can't be handled by the RAM that DSM already uses as cache. When using a cache, make sure to have a backup, and maybe look into options to flush the cache as often as possible (I don't use SSD cache, so I can't remember the options for that).

 

The two SSDs can also be used as a RAID1 data volume to hold data from VMs, or data that consists of a lot of small files or gets fragmented a lot; that is less dangerous than an SSD cache.

