I restored my /volume1 myself! Hurray!
Okay, so after a week of investigation I tried different tools to extract the data. Here's what happened.
1) The only tools that claim to extract data from Synology LUNs are ReclaiMe and ReclaiMe Pro (which is 4 times more expensive).
However, I tried and it failed. I was talking to their support, and one of their senior developers started investigating why it's not working (and is still doing so; they are very interested in this case).
2) The funny thing is, I could actually have restored it a week ago if I'd kept a cool head :) (Actually no, I didn't know one command-line option yet.)
So, previously I had already disassembled my 2-disk RAID1 with:
```
# Stop the RAID1
mdadm -S /dev/md2
# Recreate a RAID1 with just one disk keeping the data as is
mdadm --create --assume-clean --level=1 --force --raid-devices=1 /dev/md2 /dev/sdd5
```
At this point I stopped last time, since LVM couldn't find the volume groups and volumes...
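In hindsight, the likely culprit was the superblock version: different mdadm metadata versions store the superblock at different places on the member device, which also shifts where the data (and therefore the LVM PV label) begins, so recreating the array with the wrong version makes LVM see nothing. A rough sketch of the layout difference, using a hypothetical 8 TiB member partition for the arithmetic:

```shell
# Illustrative layout arithmetic only - the partition size is hypothetical.
part_bytes=$((8 * 1024**4))                      # 8 TiB member partition

# v0.90: superblock lives in the last 64 KiB-aligned block near the end of
# the device; the data starts at byte 0, so the PV label sits right at the
# start of the partition.
v090_sb=$(( part_bytes / 65536 * 65536 - 65536 ))

# v1.2: superblock sits 4 KiB from the start, and the data begins after a
# data offset, so the PV label is shifted away from the partition start.
v12_sb=4096

echo "v0.90 superblock at byte $v090_sb, data at byte 0"
echo "v1.2  superblock at byte $v12_sb, data after the data offset"
```

So if the array is recreated with a metadata version different from the one DSM originally used, the data region moves and LVM stops finding its physical volume; forcing `--metadata=1.2` puts the data offset back where it was.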
Now to the happy finish:
```
# Stop the RAID1 again
mdadm --stop /dev/md2
# Recreate the RAID1, this time with the option "--metadata=1.2"
mdadm --create --level=1 --force --raid-devices=1 /dev/md2 /dev/sdd5 --metadata=1.2
# Reload LVM configuration from backup, where vg1000 is the name of my volume group
vgcfgrestore --test -f /etc/lvm/backup/vg1000 vg1000
# That was a dry run to check that it's okay; now the real run
vgcfgrestore -f /etc/lvm/backup/vg1000 vg1000
#Output: Restored volume group vg1000
# Check that it is in the list now
vgs -v
# Finally, make it active
vgchange -ay vg1000
# After this my /volume1 is restored as a RAID1 mirror with only one HDD,
# but DSM doesn't pick it up properly - a reboot is needed.
```
After the reboot everything works fine. Storage Manager shows my RAID, however it has a 'Failed system partition' warning. That's an easy fix.
Looking at the disks with lsblk, it's apparent that /dev/md0 (the system RAID1) doesn't include the partition from my HDD (/dev/sdd1).
```
mdadm --manage /dev/md0 --add /dev/sdd1
```
And everything is okay after a few minutes.
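While md0 re-syncs you can watch the progress in `/proc/mdstat`. A small sketch of pulling the completion percentage out of a recovery line (the sample line and its numbers are made up; on a live system you would read the real line from /proc/mdstat):

```shell
# Hypothetical /proc/mdstat recovery line for the rebuilding array
sample='[=>...................]  recovery =  7.9% (193280/2490176) finish=0.4min speed=96640K/sec'

# Extract the first percentage value from the line
pct=$(echo "$sample" | grep -o '[0-9.]*%' | head -1)
echo "rebuild ${pct} complete"    # -> rebuild 7.9% complete
```

`mdadm --detail /dev/md0` also reports the rebuild status in a more verbose form.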