Hello Patrick,

On Dec 8, 2010, at 12:22 , Patrick Ohly wrote:

> The Synthesis engine has the feature that its backends are allowed to
> use IDs of arbitrary length. The engine will translate into IDs shorter
> than the maximum ID size supported by the peer.

That translation only works in the server role: the server makes sure it does not 
send IDs longer than what the client allows. The server itself must be able to 
handle whatever ID size a client might use.

This is a plain SyncML requirement: when the standard was defined, the idea was 
that there might be clients with resources so limited that they could not store 
the often longer server IDs. The opposite case, a server with a too-short 
remoteID DB field, was considered unlikely, and the standard does not provide a 
mechanism for it (and neither does libsynthesis).

> TLocalEngineDS::adjustLocalIDforSize() creates these temporary IDs,
> using:
>  fTempGUIDMap.size() + 1
>  // as list only grows, we have unique tempuids for sure
> 
> I'm currently (involuntarily ;-) stress-testing this code by running
> SyncEvolution<->SyncEvolution syncs with lots of iCalendar 2.0 items,
> which happen to have very long IDs.
> 
> I see failures where the server assigns the same temporary ID to
> different items in the same sync.
> 
> I've added debug logging. It shows that the following happens:
>     1. fTempGUIDMap is restored from the map file such that it has 105
>        entries, *but* these are non-contiguous (from #1 to #124).
>     2. fTempGUIDMap.size() + 1 is #106, which already exists in
>        fTempGUIDMap.
>     3. Overwriting that entry does not increase fTempGUIDMap.size().
>     4. A second item is assigned the same #106 temp ID.
> 
> I don't know exactly how I arrived at this state. Is the on-disk dump of
> fTempGUIDMap really meant to preserve all temporary IDs in perpetuity?

No. The life span of a temporary ID is one complete sync, with the possibility
of "pending mappings" being carried over to the beginning of the next session.
But before that next session actually starts, all existing tempIDs become
invalid (line 5852 in localengineds.cpp), and new ones needed in the course of
that sync can reuse the same values.
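
To make the failure mode concrete, here is a small standalone sketch of why
"size()+1" collides as soon as the restored map is non-contiguous. This is not
engine code - the map type, key format and gap pattern are made up purely for
illustration:

  // Illustration only: fTempGUIDMap modelled as a plain std::map with
  // numeric keys; the real container and key format in libsynthesis differ.
  #include <cstdio>
  #include <map>
  #include <string>

  int main() {
    std::map<size_t, std::string> tempGUIDMap;
    // Pretend the map was restored from the map file with gaps,
    // e.g. keys 1..124 with some of them missing.
    for (size_t i = 1; i <= 124; ++i)
      if (i % 6 != 0)
        tempGUIDMap[i] = "long-local-id-" + std::to_string(i);

    size_t next = tempGUIDMap.size() + 1;   // the "list only grows" assumption
    std::printf("entries=%zu, next=#%zu\n", tempGUIDMap.size(), next);
    if (tempGUIDMap.count(next))            // a gap means "next" already exists
      std::printf("collision: #%zu already maps to %s\n",
                  next, tempGUIDMap[next].c_str());
    return 0;
  }

As long as the map really starts out empty and only grows within one session,
size()+1 is always a free slot; with gaps it is not.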

> I don't think so, because that would fill up the disk and there is a
> "delete map item" operation.
> 
> Unless there is some additional constraint, then the assumption that
> "the list only grows" is wrong.

I still think the assumption is correct. I admit the way this works is a bit 
fragile, so no objections to making it more robust!

Still, the thing is that there's never a deletion of a single tempGUID map 
entry (of course, there ARE deletions of map entries of other types, but not of 
mapentry_tempidmap). The sync starts with an empty fTempGUIDMap container, and 
then for each <Add> sent to the client where the actual localID is longer than 
the client's maxGUIDsize, a new tempGUID map entry is created. A session might 
get suspended several times, which means that all map entries, including the 
contents of fTempGUIDMap, might need to be made persistent in the DB and 
restored into a new session instance several times as well, but during all this 
time IMHO no map entry of type mapentry_tempidmap is deleted.
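
For reference, here is roughly that mechanism as a stripped-down sketch. The
struct and member names are hypothetical stand-ins, not the engine's actual
ones - the point is just that a temp ID is only created when the localID does
not fit, and that nothing is ever erased during a session:

  #include <cstddef>
  #include <map>
  #include <string>

  // Hypothetical stand-in for the relevant bit of TLocalEngineDS; the real
  // engine keeps this in fTempGUIDMap and persists it as mapentry_tempidmap.
  struct TempIdSketch {
    std::map<std::string, std::string> tempToLocal;  // tempID -> long localID
    size_t maxGUIDsize;

    // ID sent to the client: the localID itself if it fits, otherwise a new
    // "#<n>" temp ID. Entries are only ever added here, never removed.
    std::string idForClient(const std::string &localID) {
      if (localID.size() <= maxGUIDsize)
        return localID;
      std::string tempID = "#" + std::to_string(tempToLocal.size() + 1);
      tempToLocal[tempID] = localID;
      return tempID;
    }
  };

If the persisted entries come back unchanged on resume, the numbering stays
contiguous and size()+1 keeps pointing at a free slot.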

If you see the number of mapentry_tempidmap entries decreasing during a sync 
session (counted from that anchor point at line 5852 in localengineds.cpp), 
there must be a problem with making them persistent and loading them later on 
(or some other not-yet-discovered bug, either in the engine or in the DB 
plugins).


> I have added a workaround which checks
> for ID collisions in TLocalEngineDS::adjustLocalIDforSize(). Is that the
> right solution or do I need to search for the reason why the mapping has
> gaps?

It makes the whole thing more robust against the case that some future change 
in the engine invalidates the "increasing list" assumption, but for now I 
suspect the workaround will just hide another problem elsewhere. So I'd add a 
big fat red DBG_ERROR type message when you detect a collision, so we'll see 
whether this happens regularly and, if so, find out why.
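
Something along these lines, perhaps - a sketch only, with a placeholder
logging macro and map type; in the engine the check would sit in
TLocalEngineDS::adjustLocalIDforSize():

  #include <cstdio>
  #include <map>
  #include <string>

  // Placeholder for whatever "big fat red" error logging the engine uses.
  #define SKETCH_DBG_ERROR(...) std::fprintf(stderr, "ERROR: " __VA_ARGS__)

  size_t nextTempId(const std::map<size_t, std::string> &tempGUIDMap) {
    size_t candidate = tempGUIDMap.size() + 1;   // the original assumption
    if (tempGUIDMap.count(candidate)) {
      // Assumption violated: complain loudly so the real cause (map entries
      // lost or garbled during persist/restore?) gets investigated.
      SKETCH_DBG_ERROR("tempGUID collision for #%zu - map has gaps\n",
                       candidate);
      while (tempGUIDMap.count(candidate))
        ++candidate;                             // first free slot as fallback
    }
    return candidate;
  }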

Best Regards,

Lukas


