On 05.12.2009, at 01:39, Christopher Barker  
<[email protected]> wrote:

> Nitro wrote:
>>> 2) A lot of OpenGL, or at least GLU, requires lots of individual calls
>>> to the library, often one call per vertex: for polygon tesselation, for
>>> instance. Doing this in Python is a performance killer. We're looking  
>>> at
>>> using Cython to write C version of some of this stuff.
>>
>> If you make proper use of OpenGL this should not happen. It sounds like
>> you use OpenGL like it was used in version 1.0.
>
> no -- but we do need to support some cards without modern extensions.
> But anyway, yes we are using vertex buffers, etc -- I didn't mean to
> imply that you needed to make a call per vertex when drawing, but you do
> need to for things like polygon tessellation. This is the only thing
> that's killing us now.

As you mention below, there are some cards which expose this through an old  
GL extension. The very newest ATI cards support freely programmable  
tessellation shaders, see:  
http://www.youtube.com/watch?v=bkKtY2G3FbU&feature=related (the  
interesting part starts at 0:21). That lets you subdivide into millions  
of polygons in hardware.
Judging by the pace of the OpenGL standards board, though, they won't  
incorporate this in the near future... only Direct3D 11 has it. You might  
be able to abuse geometry shaders, which OpenGL already supports, for  
tessellation, but that also requires newer hardware.

There are other workarounds, such as this one:  
http://users.belgacom.net/gc610902/technical.htm . Depending on your use  
case it might work. If you are tessellating terrain, there are various  
other techniques as well.

> You also apparently need to make a call per item you want to draw, which
> can be an issue when you have thousands of small polygons, for instance.
> note: I'm not writing the code, I may have this a bit wrong.

Yes, you are right. Batching is very, very important. Try to keep the  
number of OpenGL commands issued per frame to a minimum; everything else  
swamps the graphics card driver on the CPU side. It's hard to say more  
without knowing what exactly you are drawing. There is also  
(pseudo-)instancing, which you can use if you have to draw the same  
objects over and over.
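
To make the idea concrete, here is a rough CPU-side sketch of what  
batching means (the function name is mine, and the actual VBO upload and  
glMultiDrawArrays call are left out): instead of one draw call per small  
polygon, you pack all vertices into one flat array plus per-polygon spans,  
so the whole batch goes to the driver in a single call.

```python
from array import array

def batch_polygons(polygons):
    """Flatten many small polygons (lists of (x, y) tuples) into one
    packed float array plus per-polygon (first_vertex, count) spans.

    With one packed array you can upload a single vertex buffer and
    issue one glMultiDrawArrays call for the whole batch, instead of
    one draw call per polygon.
    """
    packed = array('f')   # interleaved x, y, x, y, ...
    spans = []            # (first vertex index, vertex count) per polygon
    first = 0
    for poly in polygons:
        for x, y in poly:
            packed.append(x)
            packed.append(y)
        spans.append((first, len(poly)))
        first += len(poly)
    return packed, spans
```

The spans are exactly the `first`/`count` arrays glMultiDrawArrays wants,  
so thousands of small polygons become one driver round-trip.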

>> This also relates to point 1). Maybe you should look into 3d engines  
>> which
>> have an api to do this stuff, and they do it as fast as possible.  It
>> sounds like you're reinventing the wheel. Not meant offensive,
>
> I know -- I keep telling people on the wxPython list to stop trying to
> write a drawing engine from scratch!
>
> Anyway, the issue is that we're not doing anything 3-d this is strictly
> 2-d, and the 3-d engines seem to have a LOT of overhead. We use VTK for
> some other stuff, and it's really pretty big and ugly. When we started
> with VTK, I thought: great! We get 2-d drawing, and then 3-d for free!
> But it turns out that it's actually pretty painful for 2-d.
>
> Also, it still does a crappy job of much of what we need -- try
> rendering a lot of text, for instance.
>
> If you know of a good 2-d OpenGL-based library, please let us know!

I am a bit hesitant to suggest anything here, because it's hard to do  
without knowing what you want to do and the minimum hardware  
configuration you target. In general I don't think you gain a lot from  
engines which are specifically geared towards 2D; most 3D engines have  
means to do 2D text rendering.

>> The full-featured transforms don't have much cost really (as long as you
>> don't move lots of nodes simultaneously).
>   - but if you do...

Yes, then it's costly. I'd be interested, though, in which applications  
require lots of nodes moving simultaneously. In a mapping context I'd  
expect most of them to move in a way that lets you exploit temporal  
coherence, which should allow you to recalculate the transforms lazily  
with respect to the current viewport.
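
The usual way to exploit that coherence is a dirty flag on each node:  
moving a node marks it and its children dirty, and world transforms are  
only recomputed on demand. A minimal sketch (translation-only, class and  
method names are mine):

```python
class Node:
    """Scene node with a lazily recomputed world position.

    Moving a node only marks it (and its subtree) dirty; the world
    transform is recalculated on demand, so nodes that did not move
    since the last frame cost nothing at render time.
    """
    def __init__(self, parent=None):
        self.parent = parent
        self.local = (0.0, 0.0)    # local offset (translation only)
        self.children = []
        self._world = (0.0, 0.0)
        self._dirty = True
        if parent is not None:
            parent.children.append(self)

    def move(self, dx, dy):
        x, y = self.local
        self.local = (x + dx, y + dy)
        self._mark_dirty()

    def _mark_dirty(self):
        if not self._dirty:            # stop early if subtree is dirty
            self._dirty = True
            for child in self.children:
                child._mark_dirty()

    def world(self):
        if self._dirty:
            px, py = self.parent.world() if self.parent else (0.0, 0.0)
            lx, ly = self.local
            self._world = (px + lx, py + ly)
            self._dirty = False
        return self._world
```

If only a few nodes move per frame, only their subtrees pay for the  
recalculation; the rest return the cached value.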

>> In OpenGL you can
>> always use a vertex shader for arbitrary functions on your input data.
>
> Maybe that's what we need to learn to do -- when I've read about it,
> it's not clear how to do arbitrary functions. For instance, I'd love to
> use the GPU to do tessellation, but I haven't seen any examples of that
> (except with cards that support it as a GL extension)

Regarding the tessellation, see the part about tessellation shaders  
above. In general you have to think of a GPU as a stream processor: you  
put a thing in and it gives a thing out, with lots of things processed in  
parallel. So you put a vertex in, it gets manipulated, and a vertex comes  
out; or you put a pixel in and a pixel comes out. Geometry shaders and  
tessellation shaders allow you to output more than one thing, but that's  
fairly recent.
If you don't have these capabilities you have to be creative on the input  
side, as on that trilinear displacement mapping page linked above.
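
If it helps, the stream model can be mimicked on the CPU: a vertex shader  
is just a pure function applied independently to every vertex, with  
shared read-only "uniform" parameters. A toy sketch (all names mine, no  
actual GL involved):

```python
import math

def apply_vertex_shader(shader, vertices, uniforms):
    """Mimic the GPU stream model: apply one pure per-vertex function
    independently to every input vertex. On hardware this runs in
    parallel; the key restriction is that each vertex is transformed
    in isolation, one in, one out."""
    return [shader(v, uniforms) for v in vertices]

def rotate2d(vertex, uniforms):
    # Example "shader": rotate a 2D point by uniforms['angle'].
    x, y = vertex
    c = math.cos(uniforms['angle'])
    s = math.sin(uniforms['angle'])
    return (c * x - s * y, s * x + c * y)
```

Anything you can phrase as such a per-vertex function (scaling, offsets,  
animation) maps straight onto a real vertex shader; tessellation does not  
fit, because it has to emit more vertices than it consumes.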

>>> 4) There are also issues with stuff that you don't want to scale: like
>>> text  and objects that stay the same size as you zoom. You have to
>>> change that size on each render -- you get some help 'cause you can
>>> often use the same data that's already been passed to GL, though, but
>>> you do end up making a lot of calls in a python loop on each render.
>>
>> You could simply split your objects into world-space objects and
>> screen-space objects. Render the world space objects with your regular
>> transform and the screen-space objects with a special matrix/vertex  
>> shader.
>
> hmm -- I'm not sure I get this -- if we want to draw a small bitmap
> (like an icon) always at the same pixel size, how would we do that? it
> seems we have to make a drawing call for the texture each time, scaling
> it appropriately into world coords. Is there an easier way?

Without vertex shaders, you could use an orthographic projection  
(http://en.wikipedia.org/wiki/Orthographic_projection), no? It doesn't  
apply the "gets smaller with increasing depth" perspective projection, so  
you input the four vertices of the icon at the size you want them to be  
(they only have to be calculated once) and they come out at that size.
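
Concretely, you set a projection whose units are pixels, so icon vertices  
specified in pixels land exactly where you want regardless of the world  
zoom. A small sketch of the matrix (same formula as glOrtho with the  
depth row dropped; function names are mine):

```python
def ortho(left, right, bottom, top):
    """Row-major 2D orthographic projection mapping the box
    [left, right] x [bottom, top] to normalized device coords
    [-1, 1] x [-1, 1]. Pass the window size in pixels to get a
    pixel-space coordinate system."""
    sx = 2.0 / (right - left)
    sy = 2.0 / (top - bottom)
    tx = -(right + left) / (right - left)
    ty = -(top + bottom) / (top - bottom)
    return [[sx, 0.0, tx],
            [0.0, sy, ty],
            [0.0, 0.0, 1.0]]

def project(m, x, y):
    """Apply the projection to one 2D point."""
    return (m[0][0] * x + m[0][2], m[1][1] * y + m[1][2])
```

Render world-space objects with the zooming view transform, then switch  
to this pixel-space matrix for icons and text; the icon quad never needs  
rescaling.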

With vertex shaders there are endless opportunities to do this of course.

> While I've got your attention: Do you know how to draw a spline in  
> OpenGL?

One way is to calculate the spline on the CPU, put the vertices into a  
vertex buffer, and then render that as a line strip (assuming  
one-parameter splines, e.g. paths on terrain). Another alternative is to  
make a vertex buffer with many vertices, each holding a single coordinate  
from 0 to 1 (e.g. 0, 0.01, 0.02, ..., 0.99, 1.00), and then do the math  
in a vertex shader (or maybe also a geometry shader). How easy it is to  
implement directly on the GPU depends a bit on the exact kind of spline  
you want to use.
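
The CPU route might look like this, using a Catmull-Rom spline as one  
concrete choice (the spline type and function names are my assumption;  
the buffer upload and GL_LINE_STRIP draw are left out):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment at t in [0, 1].
    The curve interpolates p1 at t=0 and p2 at t=1."""
    t2, t3 = t * t, t * t * t
    def blend(a, b, c, d):
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t2
                      + (-a + 3 * b - 3 * c + d) * t3)
    return (blend(p0[0], p1[0], p2[0], p3[0]),
            blend(p0[1], p1[1], p2[1], p3[1]))

def tessellate_spline(points, steps=16):
    """CPU-side spline evaluation: sample each segment `steps` times
    and return a vertex list ready for a GL_LINE_STRIP buffer."""
    verts = []
    for i in range(1, len(points) - 2):
        for s in range(steps):
            verts.append(catmull_rom(points[i - 1], points[i],
                                     points[i + 1], points[i + 2],
                                     s / steps))
    verts.append(points[-2])  # finish exactly on the last control point
    return verts
```

Since the control points rarely change every frame, the tessellated  
buffer can usually be cached and redrawn cheaply.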

> Though this is getting a bit OT for the FC list.

Well, imo it's a discussion of FC3 ;)

-Matthias
_______________________________________________
FloatCanvas mailing list
[email protected]
http://paulmcnett.com/cgi-bin/mailman/listinfo/floatcanvas
