On Tue, Feb 03, 2015 at 01:48:52PM +0100, Jan Danielsson wrote:
>    In terms of the type of data, our data and fossil's data are very
> different, but in terms of the time it takes to synchronize large data
> stores/repositories, we're in the exact same situation.  We don't expect
> synchronizations to fail; they rarely will, but it will happen sooner or
> later, so we were forced to find a way to skip work that has already
> been done.

Correct. One possible approach is to mark new artefacts in a separate
"to parse" table, appending to it after every round trip and perhaps
even committing, to guard against power loss. Afterwards, process that
table. It wouldn't even require many changes to the code, since the
approach taken is essentially the same, just using in-memory storage.

Joerg
_______________________________________________
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
