Thanks for the information. 

I never had the intention to generalize on the fly. I want to implement the 
same principle I used with the imagemosaic-jdbc module, storing pyramid 
levels in the datastore. 

All the pre-generalized geometries have to be valid, because WFS should also 
be supported. 

As you pointed out, I think there are two, perhaps three, interesting cases: 

1) The native store contains multiple generalizations of the same geometry.
   In the case of a shapefile, each generalized shapefile has the same
   number of records and the same attributes, but different geometries. In
   the case of a JDBC database one could make it more intelligent by using
   views (I did it this way).
2) The native store has the possibility to generalize before sending the
   data (e.g. DB2 has an SQL function st_generalize(...) using
   Douglas-Peucker).
3) Another idea would be to generalize a FeatureSource at system startup and
   put the generalized geometries in a temp directory. These generalizations
   MUST be executed in a background thread at low priority and should not
   influence the response time of the system. If a generalization is
   finished, use it; otherwise fall back to an already finished
   generalization or to the base geometries. At any point in time, the
   FeatureSource is fully functional, only the returned geometries may
   differ.
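
For 3), a rough sketch of what I have in mind (class and names are made up,
nothing of this exists in GeoTools yet):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicReference;

/**
 * Sketch for case 3: compute a generalization level in a low priority
 * background thread and publish it only when complete. Until then
 * callers simply keep using the base geometries.
 */
public class BackgroundGeneralizer {

    private final ExecutorService executor =
            Executors.newSingleThreadExecutor(new ThreadFactory() {
                public Thread newThread(Runnable r) {
                    Thread t = new Thread(r, "generalization-worker");
                    t.setPriority(Thread.MIN_PRIORITY); // must not hurt response times
                    t.setDaemon(true);
                    return t;
                }
            });

    // distance of the best finished level; null means "use the base geometries"
    private final AtomicReference<Double> finished = new AtomicReference<Double>();

    public void generalizeAsync(final double distance) {
        executor.submit(new Runnable() {
            public void run() {
                computeLevel(distance);  // the expensive part, at low priority
                finished.set(distance);  // publish only when done
            }
        });
    }

    /** Distance of the best finished level, or null to fall back. */
    public Double bestAvailable() {
        return finished.get();
    }

    private void computeLevel(double distance) {
        // simplify each geometry and write the result to a temp directory
    }
}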

But anyway, I do not want any generalization on the fly, and 1) is my first 
target. 

Passing the difference/offset in the native CRS as a hint would make me 
happy :-)
If the hint is missing, you always get the base geometries; the same holds 
true if the value of the hint is 0. 
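
To make the semantics concrete, the selection inside the FeatureSource could
look roughly like this (GEOMETRY_DISTANCE is a made-up hint key, and the two
private methods are placeholders for the real delegation):

import org.geotools.data.Query;
import org.geotools.factory.Hints;
import org.geotools.feature.FeatureCollection;

public class PreGeneralizedFeatureSource {

    // made-up hint key; the real name and home are up for discussion
    public static final Hints.Key GEOMETRY_DISTANCE = new Hints.Key(Double.class);

    public FeatureCollection getFeatures(Query query) {
        Double distance = (Double) query.getHints().get(GEOMETRY_DISTANCE);
        if (distance == null || distance.doubleValue() == 0.0) {
            return baseFeatures(query); // no hint or 0 -> base geometries
        }
        return generalizedFeatures(query, distance.doubleValue());
    }

    private FeatureCollection baseFeatures(Query query) {
        throw new UnsupportedOperationException("delegate to the base store");
    }

    private FeatureCollection generalizedFeatures(Query query, double distance) {
        throw new UnsupportedOperationException("pick the best finished level");
    }
}

On the caller side the renderer would then only need a one-liner like
query.getHints().put(GEOMETRY_DISTANCE, distance).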

I would like to avoid CRS transformations, to be consistent with other 
FeatureSources. 

Please give me a hint and I will start. If possible in 2.5.x and 2.6.x; I 
need this feature in GeoServer. 

christian 

 

Andrea Aime writes: 

> Christian Müller ha scritto:
>> Last week I posed a question about having a FeatureSource which also has 
>> some generalizations of the vector data, to speed up response time and 
>> reduce data transfer.  
>> 
>> While it is surely not a problem to implement such a FeatureSource, 
>> I need the ScaleDenominator from the caller.  
>> 
>> I studied the source and I want to ask the specialists about the 
>> following assumptions:  
>> 
>> 1) StreamingRenderer and ShapeFileRenderer use 
>> FeatureSource>>getFeatures(queryObject)
>> 2) The Query interface offers the possibility to pass Hints  
>> 
>> My idea would be to pass the scaleDenominator as a hint to the 
>> queryObject. This makes sense since the scaleDenominator is only useful 
>> for some use cases and should not be part of the FeatureSource API.  
>> 
>> I am not sure here if the ScaleDenominator is enough; perhaps a unit of 
>> measure is also required. On the other hand, the Query interface offers 
>> a getter for the CRS, so I think I can get the unit from the CRS, yes or 
>> no?  
>> 
>> Anyway, if my assumptions come close to the truth, I would do such an 
>> implementation and insert one line in the mentioned renderer classes to 
>> test it.  
>> 
>> Opinions?
> 
> If you're trying to speed up the rendering process by doing generalization 
> in memory inside a FeatureSource wrapper, hem, sorry,
> you're on the wrong path, at least for WMS. 
> 
> The renderer is already doing that, and in a very efficient way, so
> if you implemented it only in memory, as a wrapper, you would at
> best get a small slowdown, because you'd be doing the generalization
> inside the wrapper, and the renderer would try to do it again (uselessly
> since you already did it). 
> 
> Different is the case in which you're not using a wrapper, but you have
> some native way to access pre-decimated data in your datastore, for 
> example if:
> - your native store contains multiple versions of the same geometry at
>   different generalization levels
> - your native store has a way to efficiently generalize the data on the
>   fly, and this can reduce network traffic and the associated data
>   encoding/decoding path 
> 
> Algorithm-wise, there are many, but I've found that using generic and
> well behaved algorithms like Douglas-Peucker results in a _slowdown_
> when performed inside the renderer: the recursive nature of the
> algorithm makes generalization more expensive than the speedup gained
> by rendering generalized data (if you are also reprojecting, maybe
> you still get a speedup, as reprojection might be more expensive
> than generalization; I don't know exactly because I did not try this
> case out). So the current renderer uses a very simple one-pass algorithm
> that avoids expensive calculations, and it's based on pure offset:
> a point is skipped if its deltaX and deltaY are less than one pixel size
> from the last chosen point. 
> 
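> In code the idea is roughly this (an illustrative sketch, not the
> actual renderer source):
>
>   import java.util.ArrayList;
>   import java.util.List;
>   import com.vividsolutions.jts.geom.Coordinate;
>
>   // one-pass offset decimation: keep a point only when it moved more
>   // than the generalization distance away from the last kept point
>   static Coordinate[] decimate(Coordinate[] coords, double distance) {
>       if (coords.length <= 2) return coords; // nothing to decimate
>       List<Coordinate> kept = new ArrayList<Coordinate>();
>       Coordinate last = coords[0];
>       kept.add(last);
>       for (int i = 1; i < coords.length - 1; i++) {
>           Coordinate c = coords[i];
>           if (Math.abs(c.x - last.x) > distance
>                   || Math.abs(c.y - last.y) > distance) {
>               kept.add(c);
>               last = c;
>           }
>       }
>       kept.add(coords[coords.length - 1]); // always keep the end point
>       return kept.toArray(new Coordinate[kept.size()]);
>   }
>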
> So, if you have some kind of native support it's worth experimenting
> with it. I can add one hint, the generalization distance/offset,
> that will be specified in the same unit as the native data, and you
> can leverage it inside your datastore to decide how you want to
> generalize. 
> 
> When mixing generalization and transformation things get more complex.
> As a rule of thumb, the renderer now never asks the datastore to
> transform data, for two reasons:
> - the datastores have been historically very unreliable at reprojecting
>   data
> - the renderer does reprojection after generalizing (this is
>   very important to get good performance, as reprojection is expensive)
>   and datastores so far did not have any generalization capability 
> 
> Jody mentioned generalizing before/after reprojection.
> This is an interesting topic too. The renderer always transforms
> after generalization to get a good speedup of the whole rendering
> process; this is good in the common case, but not in less
> well behaved ones.
> The issue is related to how the generalization distance is picked.
> Right now the renderer gets the pixel size in the rendering CRS, and
> back-transforms it into the native CRS by placing a small segment in
> the middle of the rendered area (see the sketch after the examples
> below). If the transformation is within its "well behaved" area the
> result is going to be a good distance for the whole rendering area,
> but if the linear deformation varies wildly along the rendered area
> you may over-generalize the input data, resulting in bad rendering
> afterwards.
> Two examples I can make are:
> - rendering a polar stereographic so that the pole is in the middle
>   and the rendered area goes well below 80° latitude. In this case
>   you'll see ruined polygons (e.g., they were touching in the
>   native data, but they will be rendered as disconnected) at
>   the borders of the map
> - rendering a UTM zone well beyond 6° from the central meridian,
>   with the same effect 
> 
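> The distance computation is conceptually something like this (names
> are illustrative, not the renderer's own code):
>
>   import org.geotools.geometry.jts.ReferencedEnvelope;
>   import org.geotools.referencing.CRS;
>   import org.opengis.referencing.crs.CoordinateReferenceSystem;
>   import org.opengis.referencing.operation.MathTransform;
>
>   // back-transform a segment one pixel long, placed in the middle of
>   // the rendered area, into the native CRS, and use its length as
>   // the generalization distance
>   double nativeDistance(ReferencedEnvelope renderingArea,
>           double pixelSize, CoordinateReferenceSystem nativeCrs)
>           throws Exception {
>       MathTransform toNative = CRS.findMathTransform(
>               renderingArea.getCoordinateReferenceSystem(), nativeCrs, true);
>       double cx = renderingArea.getMedian(0);
>       double cy = renderingArea.getMedian(1);
>       double[] segment = { cx, cy, cx + pixelSize, cy };
>       toNative.transform(segment, 0, segment, 0, 2);
>       double dx = segment[2] - segment[0];
>       double dy = segment[3] - segment[1];
>       return Math.sqrt(dx * dx + dy * dy);
>   }
>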
> If you value accuracy more than speed even in those badly behaved
> cases you should do the reprojection before the generalization.
> A compromise could be performing some sampling of the area being
> rendered and picking the lowest generalization distance, or
> generating a grid of generalization distances so that each area uses
> a more appropriate one.
> Both of these approaches would fail anyway if the projection has
> singularities, as there is no grid sampling good enough to cope with
> a linear deformation that diverges close to a line or a point inside
> (or at the border of) your map.
> To be fair though, you should not render such a map to start with,
> as you've gone way past any reasonable standard of good cartographic
> representation. 
> 
> Jody's point about the choice of algorithm is another interesting
> one. The algorithm I described above is good for rendering, assuming
> the renderer can cope with invalid geometries, as the algorithm gives
> no guarantees that the result will be topologically correct. However,
> for the rendering case that is good enough. 
> 
> JTS has a topology-preserving Douglas-Peucker implementation, and
> that's what I would choose if I had to deal with WFS output, where
> speed is not such an issue (XML encoding is so expensive that the
> difference in generalization algorithm speed will be unnoticeable
> anyway) but you want to give your clients good data. 
> 
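> For the record, the JTS call is a one-liner (assuming the usual JTS
> simplify package):
>
>   import com.vividsolutions.jts.geom.Geometry;
>   import com.vividsolutions.jts.simplify.TopologyPreservingSimplifier;
>
>   // Douglas-Peucker based simplification that keeps the result
>   // topologically valid, which is what WFS clients need
>   Geometry simplified = TopologyPreservingSimplifier.simplify(geometry, distance);
>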
> Hope this helps
> Cheers
> Andrea 
> 
> 
> -- 
> Andrea Aime
> OpenGeo - http://opengeo.org
> Expert service straight from the developers.
 

