XPEnology Community

Data loss after migration to DSM 7.1 (my fault!) HELP PLEASE!!


vinceg77

Question

I had been running a Volume1 in SHR-2 (two-disk redundancy) under DSM 6.x on Proxmox for years without too many problems.
For the last few months I was no longer able to install DSM 6.x updates: the .pat file came back as corrupted every time I tried (I did not investigate further).
More recently, 2 of my 8 disks started to show bad health, but I had no time to deal with it. Then one day, after a reboot, my DSM 6.2.??? came up with 'Configuration lost'!

 

Rather than trying to repair this old version, I took the opportunity to migrate to 7.1 with TCRP.

 

TCRP did not recognize my 2nd passthrough SATA controller, so I rearranged the disks to use only one controller card, the 4 other disks being handled by the onboard controller (4 passthrough motherboard disks + 1 passthrough 4-slot Marvell SATA controller with 4 disks). One of my two 4 TB WD Red HDDs was throwing massive I/O errors (I have sent it back under warranty).

 

After several tries, I had to switch to DS3622xs+ to make it boot. I configured user_config.json with SataPortMap=44 and DiskIdxMap=0004.
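For reference, these values live in TCRP's user_config.json. The relevant fragment looks something like the sketch below (other keys such as vid, pid, serial and MAC are omitted, and the exact layout may differ between loader versions, so treat this as an assumption rather than a verbatim copy):

```json
{
  "extra_cmdline": {
    "SataPortMap": "44",
    "DiskIdxMap": "0004"
  }
}
```

SataPortMap=44 tells the loader each of the two controllers exposes 4 ports, and DiskIdxMap=0004 maps their drives to slots 1-4 and 5-8 respectively.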

 

This is what I get:

[Screenshot: NAS-Maison Synology NAS, 2022-07-08 14:50]

 

 

[Screenshot: NAS-Maison Synology NAS, 2022-07-08 14:51]

[Screenshot: NAS-Maison Synology NAS, 2022-07-08 14:53]

[Screenshot: NAS-Maison Synology NAS, 2022-07-08 15:17]

Proxmox disks (these are the ones on the controller card; the other 4 are not visible because they are passed through to the running XPEnology VM):

[Screenshot: Proxmox disk list]

 

VM configuration :

[Screenshot: Proxmox Virtual Environment, VM configuration, 2022-07-08 14:55]

 

 

 

 

All the disks are recognized (except the one sent back under warranty), and I naively thought that thanks to SHR-2, DSM would be able to rebuild the volume.

I am really not an IT expert, especially when it comes to partitioning, LVM, disk management...

My wife has a lot of sensitive data on that volume, and I have been very busy these last few months, so I have not been very serious about backups (I know :( !...).
She will probably ask for a divorce if I do not manage to recover those files 🤔!

 

Any help recovering my data (and my marriage 😀) would be much appreciated.

Thanks in advance.


2 answers to this question


I have mounted all my NAS disks in an Ubuntu VM:


admin@VM-ubuntu$ sudo fdisk -l | grep /dev/sd
Disk /dev/sda: 64 GiB, 68719476736 bytes, 134217728 sectors
/dev/sda1       2048   1050623   1048576   512M EFI System
/dev/sda2    1050624 134215679 133165056  63.5G Linux filesystem

Disk /dev/sdb: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
/dev/sdb1          2048    4982527    4980480   2.4G Linux RAID
/dev/sdb2       4982528    9176831    4194304     2G Linux RAID
/dev/sdb5       9453280 1953318239 1943864960 926.9G Linux RAID
/dev/sdb6    1953334336 3906806143 1953471808 931.5G Linux RAID
/dev/sdb7    3906822240 5860326207 1953503968 931.5G Linux RAID

Disk /dev/sdc: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/sdc1          2048    4982527    4980480   2.4G Linux RAID
/dev/sdc2       4982528    9176831    4194304     2G Linux RAID
/dev/sdc5       9453280 1953318239 1943864960 926.9G Linux RAID
/dev/sdc6    1953334336 3906806143 1953471808 931.5G Linux RAID
/dev/sdc7    3906822240 5860326207 1953503968 931.5G Linux RAID

Disk /dev/sdd: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
/dev/sdd1          2048    4982527    4980480   2.4G Linux RAID
/dev/sdd2       4982528    9176831    4194304     2G Linux RAID
/dev/sdd5       9453280 1953318239 1943864960 926.9G Linux RAID
/dev/sdd6    1953334336 3906806143 1953471808 931.5G Linux RAID
/dev/sdd7    3906822240 5860326207 1953503968 931.5G Linux RAID

Disk /dev/sde: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sde1                   2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sde2                4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sde3                9437184 3907015007 3897577824   1.8T  f W95 Ext'd (LBA)
/dev/sde5                9453280 1953318239 1943864960 926.9G fd Linux raid autodetect
/dev/sde6             1953334336 3906806143 1953471808 931.5G fd Linux raid autodetect

Disk /dev/sdf: 1.82 TiB, 2000397852160 bytes, 3907027055 sectors
/dev/sdf1                   2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sdf2                4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sdf3                9437184 3907015007 3897577824   1.8T  f W95 Ext'd (LBA)
/dev/sdf5                9453280 1953318239 1943864960 926.9G fd Linux raid autodetect
/dev/sdf6             1953334336 3906806143 1953471808 931.5G fd Linux raid autodetect

Disk /dev/sdg: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sdg1                   2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sdg2                4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sdg3                9437184 3907015007 3897577824   1.8T  f W95 Ext'd (LBA)
/dev/sdg5                9453280 1953318239 1943864960 926.9G fd Linux raid autodetect
/dev/sdg6             1953334336 3906806143 1953471808 931.5G fd Linux raid autodetect

Disk /dev/sdh: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sdh1                   2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sdh2                4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sdh3                9437184 3907015007 3897577824   1.8T  f W95 Ext'd (LBA)
/dev/sdh5                9453280 1953318239 1943864960 926.9G fd Linux raid autodetect
/dev/sdh6             1953334336 3906806143 1953471808 931.5G fd Linux raid autodetect

 

 

admin@VM-ubuntu$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md3 : active raid6 sdf6[8] sdd6[1] sdb6[0] sde6[3] sdg6[6] sdh6[7]
      5860409088 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/6] [UUUUUU__]

md2 : active raid6 sdb5[0] sdf5[7] sdd5[1] sdg5[8] sde5[3] sdh5[10]
      5831588736 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/6] [UUUUUU__]

md4 : active raid6 sdb7[0] sdd7[1]
      1953501824 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/2] [UU__]
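If it helps anyone reading: the `[8/6] [UUUUUU__]` notation means 8 slots with only 6 active members. RAID 6 survives up to two missing disks, so md2 and md3 are degraded but should still be readable (md4 is a 4-slot array, also missing 2). A small sketch that parses a saved copy of this mdstat output and reports the missing-member count (the /tmp path is just an example):

```shell
# Save the mdstat content above to a file, then count missing members per
# array. RAID 6 tolerates at most 2 missing members, so "2 missing" is the
# worst readable state.
cat > /tmp/mdstat.txt <<'EOF'
md3 : active raid6 sdf6[8] sdd6[1] sdb6[0] sde6[3] sdg6[6] sdh6[7]
      5860409088 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/6] [UUUUUU__]
md2 : active raid6 sdb5[0] sdf5[7] sdd5[1] sdg5[8] sde5[3] sdh5[10]
      5831588736 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/6] [UUUUUU__]
md4 : active raid6 sdb7[0] sdd7[1]
      1953501824 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/2] [UU__]
EOF

# Split status lines on '[', ']' and '/', so $2 = total slots, $3 = active.
awk -F'[][/]' '
  /^md/    { split($0, a, " "); name = a[1] }   # remember the array name
  /blocks/ {                                    # status line: ... [total/active] [UU..]
    printf "%s: %d/%d active, %d missing\n", name, $3, $2, $2 - $3
  }' /tmp/mdstat.txt
```

This prints "2 missing" for all three arrays, i.e. every array is exactly at the RAID 6 redundancy limit: readable, but one more failure loses data.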

 

admin@VM-ubuntu$ sudo lvs
  WARNING: PV /dev/md2 in VG vg1 is using an old PV header, modify the VG to update.
  WARNING: PV /dev/md3 in VG vg1 is using an old PV header, modify the VG to update.
  WARNING: PV /dev/md4 in VG vg1 is using an old PV header, modify the VG to update.
  LV                    VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  syno_vg_reserved_area vg1 -wi-a-----  12.00m
  volume_1              vg1 -wi-a----- <12.71t
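Since lvs already sees vg1/volume_1, the usual next step would be to activate the volume group and mount the logical volume read-only, so files can be copied off without writing anything to the damaged arrays. A hedged sketch (the mount point is an assumption; run as root on the Ubuntu VM, and note the 'noload' option only applies if the volume is ext4, not btrfs):

```shell
# Sketch: activate vg1 and mount volume_1 read-only for data rescue.
# Names are taken from the lvs output; /mnt/recovery is a made-up path.
recover_mount() {
  VG=vg1 LV=volume_1 MNT=/mnt/recovery
  if vgs "$VG" >/dev/null 2>&1; then
    vgchange -ay "$VG"                    # activate the volume group
    mkdir -p "$MNT"
    mount -o ro "/dev/$VG/$LV" "$MNT"     # read-only; add ',noload' if ext4 refuses a dirty journal
    echo "mounted $VG/$LV read-only at $MNT"
  else
    echo "volume group $VG not visible; check pvs and cat /proc/mdstat first"
  fi
}
recover_mount
```

Mounting read-only is the safest move at this point: it lets you copy the data off before attempting any repair that writes to the disks.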

 

Does this help? I am not sure I fully understand those outputs.



After trying ReclaiMe file recovery software in a Windows 10 VM, which did not work because it did not recognize all my drives, I restarted DSM 7 and, very surprisingly, my volume is now seen as critical and read-only!

 

[Screenshot]

 

I tried to FTP into the NAS to back up the files, but the connection keeps dropping, probably due to data corruption.
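If FTP keeps dropping, a resumable copy tool tends to be more forgiving: rsync skips files it has already transferred, so a dropped session can simply be re-run. A minimal local sketch (the paths are throwaway examples; in practice you would point it at the volume mount point or run it over SSH):

```shell
# Demonstrate a resumable copy: re-running rsync only transfers what is
# missing. Paths here are made-up demo directories.
mkdir -p /tmp/demo-src /tmp/demo-dst
echo "important family photos" > /tmp/demo-src/photo-list.txt

if command -v rsync >/dev/null 2>&1; then
  # -a preserves attributes; --partial keeps half-copied files for resume
  rsync -a --partial /tmp/demo-src/ /tmp/demo-dst/
else
  cp -r /tmp/demo-src/. /tmp/demo-dst/   # fallback if rsync is absent
fi
cat /tmp/demo-dst/photo-list.txt
```

Against a flaky connection you would just re-run the same rsync command until it completes with nothing left to copy.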

 

What can I do now?

Should I try converting the volume to read/write, as proposed under the three-dots menu?

 

What would be the best way to recover my data?

 

