XPEnology Community
Snapshot replication - broken connection



Quick overview of my problem/question.
I have two XPEnology servers: a main data server and its backup on a second machine. The backup is exactly that: it wakes up twice a day, receives data via Snapshot Replication, and goes back to sleep. Literally nothing more.
Snapshot Replication is my go-to app. It makes all snapshots and backups flawlessly, and I can access snapshots through Windows Explorer. A win-win, up to a point.

The problem starts when, for some reason, the connection between the servers breaks. It has now happened to me twice; this time the SAS controller went crazy and (temporarily) crashed the volume. I replaced the controller, all the data is back, and the disks are fine, so no problem there. Unfortunately, the Snapshot Replication jobs on the backup server are gone. Nada.
On the main server the jobs are still there, but flagged as a "broken/damaged connection" (my servers are set to Polish, so I don't know the exact English label).
There is no way to repair that connection; I can only delete the job. There are no other buttons. I can open the edit dialog, but everything is greyed out.
Recreate it, you say? Remaking the job is no problem, but even though all the data is safely on both the source and destination servers, the program insists on sending all the job data over again: tens of TBs in my case.
On the destination server the data is read-only, and I cannot override the permissions on those folders (or I don't know how) just to move the correct data into new folders. Everything has to go over the network between the servers again. That is hard on the HDDs and terribly slow (many days) when all the data is already in its original place; I just don't know how to repair the connection.

Has anyone had this problem and solved it?


4 answers to this question

Recommended Posts


If you delete the job you will orphan the target snapshot, as you say.


If the crash resulted in the loss of snapshot jobs on the target, then something additional probably happened that you are not reporting. Did you have to reinstall the Snapshot Replication package? Regardless, the metadata about your backup jobs was somehow lost, perhaps during that reinstall.

I have faced this issue many times in the past, and do not know of any way to correct it once metadata is lost.  If the metadata is still intact at both source and target, then editing the replication job with new authentication will relink a broken pair in many cases.  But I try very hard to protect the replication pair against problems and changes for the reasons you cite.


To say it is not healthy for HDDs is a bit of misinformation, however. It's inconvenient, yes, but sustained transfers are exactly what HDDs are built to do. Also, if you are not using the external media option to spawn your large replicas, you should consider doing so.



I didn't delete the job on either the source or the destination.
The server lost two drives (I don't know why; after reconnecting them everything is fine and all HDDs are healthy), crashed the volume, and that is when the snapshot job metadata described above disappeared.
After the volume repair all the data is OK (the repair assigned new HDD numbers, don't ask me why, all the cables are in the same places), all shared folders are in place, and they are all still write-protected by the Snapshot Replication rules. But the snapshot metadata is gone. Maybe there is a way to back it up somewhere and restore it from the console?
I've tried remaking the snapshot jobs and moving the data from folder to folder by hand, since it is all already there, but to no avail, because of permissions I don't know how to override.

I also tried to rename a shared folder on the backup server: it crashed, did not change the name, and broke the job with a lost connection. The metadata is on both ends, but they cannot reconnect.

It's weird, it's stupid, but most importantly it is the best program on DSM for an overall backup system.

What do you mean by the external media option for spawning large replicas? A third, off-site backup? I do have one on a paid server.
These two are at my house, both with redundancy.

[Attachment: 2020-11-11 16_48_13-studio-backup - Synology DiskStation.jpg]



I have not found where the metadata is stored, so I cannot attempt a manual edit. However, I can see why Synology would try very hard to limit access: the permutations and inter-snapshot dependencies that occur with copy-on-write and snapshot replication are incredibly complex.


Realistically, there is no expectation of relinking jobs except in the simplest of circumstances, and one would have to just assume they were correct, which would instantly lead to massive corruption if the filesystem states were not an exact match.


I am sorry that I don't have better information about managing replication metadata.  If you learn anything new about how it works, please post as I am very interested.


Re: spawning large replicas: I am referring to the ability to use an external disk to prime (start) a new replica rather than waiting for it to go over the wire. My data sets are also too large to replicate over the Internet, and if I break a pair, I always have to carry a start-up copy to my remote NASes.



Ah, now I see what you mean about priming a replica. Good idea, and I will think about it, but in my case it wouldn't help much. I have a 10 Gbit connection, so it's not that slow. An external HDD means first populating the drive on the source server and then copying it onto the destination, which looks like a lot of work in my attic, where the servers sit in separate steel anti-theft cages.

When the priming job finishes I'll try another idea: I'll stop all the snapshot services and try to override the permissions so that I can add files to the shared folder. Maybe it will work.
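For what it's worth, the reason normal permission changes don't work is that DSM usually keeps replica shared folders as read-only btrfs subvolumes, a flag that sits below file permissions. The sketch below shows how one might inspect (and, at your own risk, flip) that flag over SSH as root. This is an assumption about how the protection is implemented, not documented Synology behavior; `/volume1/studio-backup` is a placeholder path, and making the subvolume writable will almost certainly desynchronize the replication pair, so treat it as exploratory only.

```shell
# Show subvolume details for the replica folder
# (run as root; /volume1/studio-backup is a placeholder path).
btrfs subvolume show /volume1/studio-backup

# Check the read-only property; a replica typically reports "ro=true".
btrfs property get -ts /volume1/studio-backup ro

# DANGEROUS: make the subvolume writable. This bypasses Snapshot
# Replication's protection and will likely break the replication pair,
# so only consider it after the jobs are already broken beyond repair.
# btrfs property set -ts /volume1/studio-backup ro false
```

If the folder reports `ro=true`, that explains why even root cannot write into it until the property is cleared.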

