Nitro wrote:
> As you mention below there are some cards which do this through an old GL  
> extension. The very newest ATI cards support freely programmable  
> tesselation shaders, see:  
> http://www.youtube.com/watch?v=bkKtY2G3FbU&feature=related (the  
> interesting part starts at 0:21).

We've seen that extension - but yes, it's a recent extension, so no go 
for us.

> You might be able to abuse  
> geometry shaders for tesselation though.

I've looked around for some examples of this, but not found them -- 
you'd think it would be a really obvious thing to do.

One of our issues is that there really isn't much out there for 2-d. In 
theory, it's just a special case of 3-d, but in practice you don't have 
code optimized for 2-d.

> There are other workarounds such as this  
> http://users.belgacom.net/gc610902/technical.htm . Depending on your use  
> case it might work.

I don't think I quite follow that, but that, and others, seem to be 
focused on how to get more or less detailed tessellation depending on 
zoom level, which would certainly be something you might want to do in 
hardware.

In our case, we're trying to do something really simple: draw a bunch of 
filled polygons: they have a lot of vertexes, and there are a lot of 
them -- it's that simple. Those polygons may be edited by the user, and 
thus need to be re-tessellated when points are moved, added and deleted.

Tessellating with glu in python is painfully slow -- we probably can get 
fine performance by moving that to C, but on the GPU would be even better.
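For what it's worth, the classic CPU-side alternative to GLU is ear clipping. Here's a minimal pure-Python sketch (simple polygons only, no holes, CCW winding, O(n^2) -- a C version of the same idea is what you'd actually want for our point counts):

```python
def triangulate(poly):
    """Ear-clipping triangulation of a simple polygon (CCW, no holes).
    Returns a list of index triples into poly. A sketch only -- O(n^2),
    and no handling of degenerate or self-intersecting input."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def in_tri(p, a, b, c):
        # point-in-triangle test via the sign of three cross products
        d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        return (d1 >= 0) == (d2 >= 0) == (d3 >= 0)

    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for i in range(len(idx)):
            ia, ib, ic = idx[i - 1], idx[i], idx[(i + 1) % len(idx)]
            a, b, c = poly[ia], poly[ib], poly[ic]
            if cross(a, b, c) <= 0:       # reflex vertex -- not an ear
                continue
            if any(in_tri(poly[j], a, b, c)
                   for j in idx if j not in (ia, ib, ic)):
                continue                  # another vertex inside -- not an ear
            tris.append((ia, ib, ic))
            idx.pop(i)                    # clip the ear
            break
    tris.append(tuple(idx))
    return tris
```

A simple polygon with n points always comes out as n-2 triangles, so the output size is predictable for buffer allocation.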

>> can be an issue when you have thousands of small polygons, for instance.
>> note: I'm not writing the code, I may have this a bit wrong.
> 
> Yes, you are right. Batching is very very important. Try to keep the  
> OpenGL commands called per frame to a minimum. Everything else swamps the  
> graphics card driver on the CPU side. It's hard to tell more without  
> knowing what exactly you are drawing.

lots and lots of polygons (concave, maybe with holes) -- 10s of 
thousands of polygons, made up of a total of 100s of thousands of points.
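To illustrate the batching point: once the polygons are triangulated, all of them can be flattened into one interleaved vertex array and drawn with a single glDrawArrays(GL_TRIANGLES, ...) call, rather than one call per polygon. A rough sketch (pure Python; colors and the actual GL upload omitted):

```python
def batch_triangles(polygons, triangulations):
    """Flatten many triangulated polygons into one interleaved x,y vertex
    list, ready to upload as a single vertex buffer so one draw call
    covers everything. A sketch -- per-polygon colors etc. omitted."""
    verts = []
    for pts, tris in zip(polygons, triangulations):
        for tri in tris:
            for i in tri:
                verts.extend(pts[i])   # append x, y for this corner
    return verts
```

The win is that the per-call driver overhead is paid once for tens of thousands of polygons instead of tens of thousands of times.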

Any suggestions you have would be great!

>> If you know of a good 2-d OpenGL-based library, please let us know!
> 
> I am a bit hesitant to suggest anything here, because it's hard to do  
> without knowing the things you want to do and the minimum hardware  
> configuration you target. In general I think you don't gain a lot from  
> engines which are specifically geared towards 2d. Most 3d engines have  
> means to do 2d text rendering.

sure, but not that well. We've only tried VTK -- but it converts text to 
polygons in 3-d space, and renders those, which is fine for a little 
text, but put a small label on 1000 or so points, and it's painfully slow.

> Yes, then it's costly. I'd be interested though which application requires  
> lots of nodes simultaneously moving.

good point -- you're usually only moving one at a time.

> In a mapping context I'd also expect  
> most of them to move in a way where you can exploit temporal coherence  
> which should allow to recalculate the transforms lazily in respect to the  
> current viewport.

I think those are the tricks you'd need: only re-calculating the 
transform for those points that move, rather than the whole object -- 
that just gets messier to code -- and you need to cache.
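That caching idea can be sketched with a per-object dirty set -- hypothetical names, and a stand-in scale-and-offset transform rather than whatever FloatCanvas really does:

```python
class CachedTransform:
    """Cache screen coordinates per point; recompute only the points
    marked dirty (moved or added) instead of re-transforming the whole
    object. A sketch -- the world-to-screen transform here is a simple
    scale-and-offset, standing in for the real one."""
    def __init__(self, points, scale=1.0, offset=(0.0, 0.0)):
        self.points = list(points)
        self.scale, self.offset = scale, offset
        self.cache = [None] * len(points)
        self.dirty = set(range(len(points)))   # everything dirty at first

    def move_point(self, i, new_pt):
        self.points[i] = new_pt
        self.dirty.add(i)                      # only this point needs work

    def screen_points(self):
        for i in self.dirty:
            x, y = self.points[i]
            self.cache[i] = (x * self.scale + self.offset[0],
                             y * self.scale + self.offset[1])
        self.dirty.clear()
        return self.cache
```

The messy part is exactly what you'd expect: any zoom or pan changes scale/offset and invalidates the whole cache, so the bookkeeping only pays off while the viewport holds still.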

>>>> 4) There are also issues with stuff that you don't want to scale: like
>>>> text  and objects that stay the same size as you zoom.

> Not using vertex shaders, you could use an  
> http://en.wikipedia.org/wiki/Orthographic_projection , no? It doesn't do  
> the "gets smaller with increasing depth" perspective projection. So you  
> input the four vertices of the icon in the size you want them to be (only  
> have to be calculated once) and it will be output that way.

Can you switch between projections mid-scene? i.e. we'd want to draw 
polygons, etc., all scaled appropriately, but a bitmap unscaled, though 
located at the correct position.

Even with an orthographic projection -- which I think we're using anyway 
(2-d, remember) -- don't you also have to specify the viewport, which 
changes the scale of the objects?
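On the projection question: yes, in fixed-function GL you can reload the projection matrix between draw calls within a frame (glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(...)) while the viewport stays fixed. The matrix glOrtho builds is simple enough to sketch in pure Python, following the formula in the glOrtho man page:

```python
def ortho(left, right, bottom, top, near=-1.0, far=1.0):
    """The 4x4 orthographic projection matrix glOrtho builds
    (column-vector convention), mapping the given box to [-1, 1] NDC."""
    tx = -(right + left) / (right - left)
    ty = -(top + bottom) / (top - bottom)
    tz = -(far + near) / (far - near)
    return [[2.0 / (right - left), 0.0, 0.0, tx],
            [0.0, 2.0 / (top - bottom), 0.0, ty],
            [0.0, 0.0, -2.0 / (far - near), tz],
            [0.0, 0.0, 0.0, 1.0]]

def project(m, pt):
    """Apply the matrix to a 2-d point (z=0, w=1); return NDC x, y."""
    v = (pt[0], pt[1], 0.0, 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(2))
```

So you'd draw the polygons under a world-coordinate ortho, then load a pixel-coordinate ortho (say ortho(0, width, 0, height)) and draw each icon at its projected screen position: a 16-pixel icon then always spans 2*16/width of NDC, no matter how far the world matrix is zoomed.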


> One way is to calculate the spline on the cpu, put the vertices into a  
> vertex buffer and then render that as a line strip (assuming 1d splines  
> for e.g. pathes on terrain).

yup -- again, on the CPU, and pre-determining the level of detail. 
Probably not that big a deal, but still.

> Other alternative is to make a vertex buffer  
> with many vertices having a single coordinate from 0 to 1 (e.g. 0, 0.01,  
> 0.02, ... 0.99, 1.00) and then do the math in the vertex shader (or maybe  
> also geometry shader). How easy it is to implement directly on the gpu  
> depends a bit on the exact kind of spline you want to use.

A basic bezier spline -- I guess we need to look a bit more to find 
examples of that. Still pre-determining the level of detail, but not too 
bad.
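For a cubic bezier, the per-vertex math is just De Casteljau's algorithm -- the same arithmetic a vertex shader would do given the four control points as uniforms and the 0..1 value as the per-vertex attribute. A CPU sketch:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t by De Casteljau:
    repeated linear interpolation between the control points."""
    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    return lerp(r0, r1, t)

def bezier_strip(p0, p1, p2, p3, n=100):
    """The line strip you'd get from a vertex buffer of n+1 t values
    (0, 1/n, 2/n, ... 1) -- i.e. the fixed level of detail."""
    return [bezier_point(p0, p1, p2, p3, i / n) for i in range(n + 1)]
```

On the GPU version, only the t-value buffer is static; editing a control point means updating one uniform, not re-uploading the whole strip.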

> Well, imo it's a discussion of FC3 ;)

Yes, I suppose it is!

Thanks,
   -Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

[email protected]
_______________________________________________
FloatCanvas mailing list
[email protected]
http://paulmcnett.com/cgi-bin/mailman/listinfo/floatcanvas
