Currently I am working on a project where I have to migrate data from one system to another. This is a one-time process. The original intention of the project was to scrap the old legacy content management store and move to the new content management application.
Now, due to certain contingencies, we have to keep the old legacy datastore running in parallel with the new datastore for at least a year.
So now I have the issue of keeping both datastores in sync with the latest up-to-date data. How do I send/receive updates between the datastores? Also, if both datastores modify the same piece of data, how do I resolve the conflict in such a situation?
One idea is to publish web services from both apps to accept incoming updates.
I am totally new to this area and am looking for some input to get a head start in this direction.
This is an interesting challenge. There are several approaches, but which to use depends a lot on the usage patterns of the different systems.
For example, if the "legacy" repository is only needed for read access, but write access can be done through the new system, then it makes sense for writes to the new system to "write through" to the legacy system. That way the new system is the master repository, and the old one becomes a mirror which can simply be switched off when it becomes obsolete.
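To make the "write through" idea concrete, here is a minimal sketch in Python. The class names (DictStore, WriteThroughStore) and the in-memory dict stores are my own illustrations, not anything from either real system; the real backends would be Postgres and MSSQL behind some data-access layer.

```python
class DictStore:
    """Stand-in for a real datastore, just to make the pattern runnable."""
    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key)

    def write(self, key, value):
        self.data[key] = value


class WriteThroughStore:
    """The new system is the master; every write is mirrored to the legacy store."""
    def __init__(self, master, mirror):
        self.master = master
        self.mirror = mirror

    def read(self, key):
        # Reads are served by the master only; the mirror exists for legacy readers.
        return self.master.read(key)

    def write(self, key, value):
        self.master.write(key, value)   # authoritative write
        self.mirror.write(key, value)   # keep the legacy copy current
```

The nice property is the retirement path: when the legacy system is switched off, you delete the mirror line and nothing else changes.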
If the first approach is not possible, consider if it is possible to add some sort of wrapper or redirection layer around both the systems, which catches write attempts and duplicates them to both systems, but passes read attempts through directly.
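A rough sketch of that redirection layer, assuming each system gets its own proxy in front of its store (plain dicts stand in for the real databases here; the class name DuplicatingProxy is hypothetical):

```python
class DuplicatingProxy:
    """Wraps one datastore: reads pass straight through, writes are
    duplicated to the other system's datastore as well."""
    def __init__(self, local, remote):
        self.local = local      # the store this system's clients normally use
        self.remote = remote    # the other system's store

    def read(self, key):
        return self.local.get(key)   # reads are untouched

    def write(self, key, value):
        self.local[key] = value      # the normal write
        self.remote[key] = value     # duplicate it to the other system
```

Whether this is feasible depends on whether you can actually interpose on all write paths in both applications; if either app writes straight to its database from many places, the wrapper is hard to make airtight.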
If this sort of approach is not possible, then you might have to resort to two-way propagation (both systems allow writes, but have to tell the other system). This is a much more complex and risky proposition, with all sorts of subtle failure modes. Some of these were discussed in a recent thread on caching.
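One of those failure modes is conflicting edits to the same item. The simplest (and lossiest) policy is last-write-wins; here is a sketch, where each version is a (value, timestamp) pair. Note the caveats in the comments: it silently discards the losing edit, it assumes both systems' clocks are reasonably synchronized, and in practice you would want a tiebreaker (e.g. a system id) for equal timestamps.

```python
def merge(local, incoming):
    """Last-write-wins conflict resolution.

    local and incoming are (value, timestamp) pairs; the newer one survives.
    Caveats: the losing edit is silently discarded, and comparing timestamps
    from two machines assumes their clocks agree. Ties favor the local copy,
    so a real implementation needs a deterministic tiebreaker as well.
    """
    return incoming if incoming[1] > local[1] else local
```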
Can you tell us any more about the usage patterns of the systems you are working with?
Thanks a lot for the response. From your response, I reckon the one you mentioned about a wrapper fits best: "If the first approach is not possible, consider if it is possible to add some sort of wrapper or redirection layer around both the systems, which catches write attempts and duplicates them to both systems, but passes read attempts through directly." Can you elaborate a bit on how to go about implementing this?
To give some context on the scenario I am encountering: I have a homegrown content management system using Postgres as the backend, and the new migrated content management system is Vignette using MSSQL Server 2000 as the database. The migration was pretty smooth, but now the problem is that instead of scrapping the homegrown app, we need to continue with it for some more time. Both systems will receive writes, and these updates need to be synchronized with each other.
You mentioned usage patterns. What exactly are usage patterns, and how can they help? The idea we initially proposed is to build two wrapper web services which will accept updates. On receipt of any update event, each system will invoke the other's web service to pass on the data. This is just a preliminary thought, and we are still in the process of conceptualizing it.
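One thing to watch with two web services calling each other: when system A pushes an update to B, B's own change event must not get pushed back to A, or the two services will loop forever. A common way to handle this is to give each update event a unique id and have each endpoint ignore events it has already applied, which also makes redelivery safe. A minimal sketch (the class and field names are my own illustration, not Vignette's API):

```python
class SyncEndpoint:
    """Receives update events from the other system's web service and
    applies them to the local store. Tracking event ids makes the endpoint
    idempotent and stops updates from echoing back and forth."""
    def __init__(self, store):
        self.store = store
        self.seen = set()           # event ids already applied

    def receive(self, event):
        # event is a dict like {"id": ..., "key": ..., "value": ...}
        if event["id"] in self.seen:
            return False            # duplicate delivery or an echo; ignore it
        self.seen.add(event["id"])
        self.store[event["key"]] = event["value"]
        return True                 # applied for the first time
```

In a real deployment the seen-id set would have to be persisted (and eventually pruned) rather than held in memory, but the shape of the logic is the same.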
subject: Strategy for synchronization of diverse systems