I hope so. When I read the GO-1 spec it seemed like it was designed
with a full in-memory rendering model. I've seen ideas like keeping
the feature ids in memory and reloading by id when the cache no
longer has the feature, but this is obviously a performance disaster
with any JDBC datasource. Some kind of tiled design is needed instead,
imho, something that allows gathering the elements that are not in
the memory cache in a single call (network delays will kill
any approach that needs hundreds of separate accesses).
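The single-call idea could be sketched roughly as below. All names here (BatchedFetch, fetchBatch, the String stand-ins for features) are illustrative, not GeoTools API; the point is simply that all cache misses are collected and resolved in one batched datastore query, e.g. a single `WHERE id IN (...)` over JDBC, rather than one round trip per feature.

```java
import java.util.*;

// Illustrative sketch: resolve features against a cache, then fetch all
// misses from the backing store in a single batched call instead of one
// round trip per feature.
class BatchedFetch {
    static Map<String, String> cache = new HashMap<>();        // id -> feature (stand-in)
    static Map<String, String> backingStore = new HashMap<>(); // simulated datastore

    static List<String> getFeatures(List<String> ids) {
        List<String> result = new ArrayList<>();
        List<String> misses = new ArrayList<>();
        for (String id : ids) {
            String f = cache.get(id);
            if (f != null) result.add(f); else misses.add(id);
        }
        if (!misses.isEmpty()) {
            // ONE call for all misses, instead of hundreds of separate accesses.
            for (Map.Entry<String, String> e : fetchBatch(misses).entrySet()) {
                cache.put(e.getKey(), e.getValue());
                result.add(e.getValue());
            }
        }
        return result;
    }

    // Stand-in for a single batched datastore query (e.g. "WHERE id IN (...)").
    static Map<String, String> fetchBatch(List<String> ids) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String id : ids) out.put(id, backingStore.get(id));
        return out;
    }
}
```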
Gasp, it seems that was a bad idea of mine!
Thinking about it, you are definitely right.
My concern was that the cache should be able to forget features when it
reaches its maximum capacity, and since in my design the relationships
between queries and features on one hand, and the features themselves on
the other, are tracked separately, there could be cases where the cache
would not remember a given feature, especially with a random eviction
strategy for the cache feature storage.
But, as you outlined, this would likely not perform well.
This is mainly a matter of eviction strategy.
I store the query bounds in a tree (an R-tree for instance, but it could be
any spatial index).
Then, the cache feature storage should not decide on its own to evict
features.
Rather, when the cache is full, it should pick an appropriate node in the
query-bounds tree and remove all the corresponding features from the cache
storage, unconditionally.
To me, this is equivalent to your tiled proposal, where you propose using a
quadtree as the spatial index. What I could do is try different kinds of
index. I suppose R-trees are more efficient (R* or R+ trees, actually), but
they cost more to build, which may not be optimal since the cache should be
highly dynamic. So you may be right again about using a quadtree rather
than an R-tree.
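A minimal sketch of this group-wise eviction, with made-up names (tile keys here stand in for nodes of the quadtree/R-tree of query bounds): the storage never drops an individual feature; eviction removes whole query groups, so any tile the cache still holds is known to be complete.

```java
import java.util.*;

// Illustrative sketch: features are grouped by the query tile that loaded
// them, and eviction removes entire tile groups rather than single features.
class TileGroupedCache {
    Map<String, Set<String>> tileToIds = new LinkedHashMap<>(); // tile key -> feature ids
    Map<String, String> features = new HashMap<>();             // feature id -> feature

    void putTile(String tileKey, Map<String, String> tileFeatures) {
        tileToIds.put(tileKey, new HashSet<>(tileFeatures.keySet()));
        features.putAll(tileFeatures);
    }

    boolean hasTile(String tileKey) { return tileToIds.containsKey(tileKey); }

    // Evict whole tile groups (oldest first here; a real index could pick
    // the least recently used node of the spatial tree instead).
    void shrinkTo(int targetSize) {
        Iterator<Map.Entry<String, Set<String>>> it = tileToIds.entrySet().iterator();
        while (features.size() > targetSize && it.hasNext()) {
            Map.Entry<String, Set<String>> e = it.next();
            for (String id : e.getValue()) features.remove(id);
            it.remove();
        }
    }
}
```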
This approach implies that the cache size limit should not be strict, but
rather a hint, so the cache knows when to make room for new features. For
example, when the cache exceeds its capacity, it will try to shrink back
to 70% of that capacity.
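The capacity-as-a-hint behavior could look like this (a toy sketch; the class name and the flat id-to-feature map are made up, and a real cache would evict by query group as described above rather than entry by entry):

```java
import java.util.*;

// Illustrative sketch: the cache may temporarily exceed its nominal
// capacity; once it does, it shrinks back to 70% of that capacity.
class SoftLimitCache {
    private final int capacity;
    private final LinkedHashMap<String, String> entries = new LinkedHashMap<>();

    SoftLimitCache(int capacity) { this.capacity = capacity; }

    void put(String id, String feature) {
        entries.put(id, feature);
        if (entries.size() > capacity) {            // capacity is only a hint
            int target = (int) (capacity * 0.7);    // shrink target: 70% of capacity
            Iterator<String> it = entries.keySet().iterator();
            while (entries.size() > target && it.hasNext()) {
                it.next();
                it.remove();                        // drop oldest entries first
            }
        }
    }

    int size() { return entries.size(); }
}
```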
Moreover, I don't see a clear way to know the size of the cached features
when working in memory or with another datastore backend. This will be
different when serializing features to disk.
Cheers,
Christophe
_______________________________________________
Geotools-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/geotools-devel