On 22/10/2017 19:39, Roy Teeuwen wrote:
>
>   * One could create a JCR content package of production and copy it
>     over to the different environments daily. The downside is that
>     when the package becomes big, it gets problematic (for example,
>     when a lot of images/videos are involved).
>   * One could use VLT RCP. I have tried this, but it seems to have a
>     bug[1]: when an error occurs during the rcp, it retries an
>     infinite number of times, making the actual sync impossible from
>     the moment an error pops up for a specific node. A second
>     problem, and also something of a dealbreaker, is that for VLT
>     RCP to work you would have to make the production environment
>     network-accessible to all the other environments, or, somewhat
>     better, use a proxy that has access to both production and the
>     test environments.
>   * Time Warner seems to have run into the same issue and created
>     Grabbit[2] for this, but their tool also has two major
>     drawbacks. The first is the same as with VLT RCP: you would need
>     access from production to all the other environments, and even
>     worse, you can't put a proxy in between, meaning the instances
>     really need direct access to each other. A second drawback that
>     I really dislike is that their tool is written with Spring on
>     OSGi. I have no idea why one would build an OSGi tool and then
>     use Spring for it, but they bring in some 30 bundles, mainly
>     outdated Spring 3.x ones, just to do a content sync between
>     environments.
>   * Lastly, of course, there is the oak migration tool, which is
>     super fast at doing an export and import. It is the one we
>     currently use, but the drawback is that you have to bring the
>     environment down.
>

Hmmm, so off the top of my head I would say.

vlt rcp is a very good candidate, but as you said it requires access
rights between production and the other environments. I have used it
more than once to sync content around, and it works fairly nicely
without consuming too many resources. I don't know its current status,
as it has been many years since I last used it, but you could have an
observation listener that collects the paths that changed, craft a
list from those, and then have an agent that pulls that list and moves
the content around. The good bit is that it doesn't require the
instance to be shut down.
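A rough sketch of that observation/agent idea, assuming vlt is on the
path and default CRX-style repository URLs; hosts, credentials and
paths are placeholders, and the loop only echoes the commands (a dry
run) rather than executing them:

```shell
# Sketch of "observation collects changed paths, an agent copies them".
# changed-paths.txt would be produced by a JCR observation listener on
# production; here it is seeded with example paths. SRC/DST hosts and
# credentials are placeholders.
SRC="http://admin:admin@prod:4502/crx/-/jcr:root"
DST="http://admin:admin@test:4502/crx/-/jcr:root"
printf '%s\n' /content/mysite/en /content/dam/mysite > changed-paths.txt

# Dry run: print one vlt rcp command per changed path
# (-b batch size, -r recursive, -u update existing nodes on the target).
while read -r path; do
  echo "vlt rcp -b 1024 -r -u ${SRC}${path} ${DST}${path}"
done < changed-paths.txt
```

Dropping the echo would run the copies for real; the -b batch size is
worth tuning for large trees.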

Content packages are good candidates as well. Same approach as above:
craft a list of paths you want to include, then build a package and
download it from somewhere. A con is that building a content package
will probably tend to be more resource-intensive than vlt rcp.
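For the package route, the crafted path list would end up in the
package's filter.xml; a minimal FileVault example (the paths and the
exclude pattern are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<workspaceFilter version="1.0">
    <filter root="/content/mysite"/>
    <filter root="/content/dam/mysite">
        <!-- skip renditions to keep the package smaller -->
        <exclude pattern="/content/dam/mysite/.*/renditions/.*"/>
    </filter>
</workspaceFilter>
```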

Finally, another approach I have used, though that was on AEM, is to
leverage Sling replication to distribute content around as it is made
public.
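If this runs on plain Sling rather than AEM, Sling Content
Distribution can play the same role; a forward agent on the source
instance could be configured with a factory configuration along these
lines (agent name, endpoint URL and service user are placeholders, and
the property names are from memory, so double-check them against the
bundle's metatype):

```
# org.apache.sling.distribution.agent.impl.ForwardDistributionAgentFactory-publish.config
name="publish"
enabled=B"true"
serviceName="distribution-agent-user"
packageImporter.endpoints=["http://testhost:4503/libs/sling/distribution/services/importers/default"]
```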

The big question mark I have here is that I don't know how you
currently make content live.

Another aspect you may look at, orthogonal to the above, is the
deployment topology. You could opt for a MongoDB-clustered deployment,
where Mongo uses its own replication mechanism to keep a non-live
cluster in sync, and then rely on Mongo backups to move content
around. You would have a production copy to work with; being an
additional node, it can stay outside of live traffic and therefore
not impact performance, and it also gives you a place that could act
as recovery in case of disaster.
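Moving content around then becomes ordinary Mongo tooling rather than
anything JCR-specific. As a sketch, assuming Oak's DocumentNodeStore
lives in a database called oak (database name and hostnames are
placeholders):

```shell
# Dump the Oak document store from the non-live standby node and
# restore it into the test environment's Mongo, replacing what's there.
# Hostnames and the database name are placeholders.
mongodump --host standby.example.com --db oak --out /backup/oak-dump
mongorestore --host test-mongo.example.com --db oak --drop /backup/oak-dump/oak
```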

HTH
Davide

