Matt,

Thanks for your thoughts -- you clearly have extensive experience with some of this.
Nitro wrote:
>> 2) A lot of OpenGL, or at least GLU, requires lots of individual calls
>> to the library, often one call per vertex: for polygon tesselation, for
>> instance. Doing this in Python is a performance killer. We're looking at
>> using Cython to write C versions of some of this stuff.
>
> If you make proper use of OpenGL this should not happen. It sounds like
> you use OpenGL like it was used in version 1.0.

No -- but we do need to support some cards without modern extensions. In any case, yes, we are using vertex buffers, etc. I didn't mean to imply that you need to make a call per vertex when drawing, but you do for things like polygon tessellation -- that's the only thing that's killing us now. You also apparently need to make a call per item you want to draw, which can be an issue when you have thousands of small polygons, for instance. (Note: I'm not writing the code, so I may have this a bit wrong.)

> If you use Direct3D 10 or 11

We need to support OS-X and other platforms.

> This also relates to point 1). Maybe you should look into 3d engines which
> have an api to do this stuff, and they do it as fast as possible. It
> sounds like you're reinventing the wheel. Not meant offensive,

I know -- I keep telling people on the wxPython list to stop trying to write a drawing engine from scratch! Anyway, the issue is that we're not doing anything 3-d; this is strictly 2-d, and the 3-d engines seem to have a LOT of overhead. We use VTK for some other stuff, and it's really pretty big and ugly. When we started with VTK, I thought: great! We get 2-d drawing, and then 3-d for free! But it turns out that it's actually pretty painful for 2-d. It also still does a crappy job of much of what we need -- try rendering a lot of text, for instance. If you know of a good 2-d OpenGL-based library, please let us know!
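For what it's worth, the usual fix for the thousands-of-small-polygons case is to batch them into one vertex array so the per-frame cost is one (or a few) draw calls rather than one per polygon. A minimal pure-Python sketch of the batching step (the names `first` and `count` are chosen to match the argument lists that a call like glMultiDrawArrays expects; the flattening idea itself doesn't depend on any particular GL binding):

```python
# Sketch: batch many small polygons into one flat vertex list so they can
# be drawn with a single multi-draw call instead of one call per polygon.

def batch_polygons(polygons):
    """Flatten a list of polygons (each a list of (x, y) tuples) into one
    interleaved vertex list plus per-polygon (first, count) lists."""
    vertices = []   # interleaved x, y floats, ready for a single VBO upload
    first = []      # starting vertex index of each polygon
    count = []      # number of vertices in each polygon
    for poly in polygons:
        first.append(len(vertices) // 2)
        count.append(len(poly))
        for x, y in poly:
            vertices.extend((float(x), float(y)))
    return vertices, first, count
```

With PyOpenGL you would then upload `vertices` once into a VBO and issue something like `glMultiDrawArrays(GL_TRIANGLE_FAN, first, count, len(count))` each frame -- the Python loop happens once at build time, not per render.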
> but I've
> worked on an engine like that for 6 years now, so I know how much work you
> can put into this kind of thing before you have it working all the way you
> want it to work.

So true -- I am disappointed with our progress. In the early testing phase it was so quick and easy to draw lots of points and big polylines -- we didn't even have to think about optimizing the rendering -- that it looked like a pretty decent way to go. As it turns out, the easy stuff is easy, and everything else is a pain. So it _may_ have been easier to use a full-featured 2-d drawing engine and then spend our time on optimizing.

> The full-featured transforms don't have much cost really (as long as you
> don't move lots of nodes simultaneously).

-- but if you do...

> The final transforms are
> computed and then cached and only re-evaluated when something in the
> node's chain changes.

Right -- that's the way to go, and what Chaco, for instance, doesn't do -- which I think hurts its performance for fast zooming and panning.

> The only thing that happens each render is
> multiplying the camera transform with the world transform (in OpenGL
> terms: compute the model-view matrix), but this has acceptable speed and
> is not a major problem.

Sure -- that's no big deal. I didn't realize you were caching that -- it's the same approach we came up with: you cache some linear coordinates so the back-end can do the final shift+scale (or other affine transform) to pixel coords.

>> However it doesn't allow the arbitrary nested transforms that FC2 has.
>
> From what I can see they're not too useful anyway.

I agree, actually -- that's why I never put anything like that in FC2.

> In OpenGL you can
> always use a vertex shader for arbitrary functions on your input data.

Maybe that's what we need to learn to do -- from what I've read, it's not clear how to do arbitrary functions.
For instance, I'd love to use the GPU to do tessellation, but I haven't seen any examples of that (except with cards that support it as a GL extension).

>> 4) There are also issues with stuff that you don't want to scale: like
>> text and objects that stay the same size as you zoom. You have to
>> change that size on each render -- you get some help 'cause you can
>> often use the same data that's already been passed to GL, though, but
>> you do end up making a lot of calls in a python loop on each render.
>
> You could simply split your objects into world-space objects and
> screen-space objects. Render the world space objects with your regular
> transform and the screen-space objects with a special matrix/vertex shader.

Hmm -- I'm not sure I get this. If we want to draw a small bitmap (like an icon) always at the same pixel size, how would we do that? It seems we have to make a drawing call for the texture each time, scaling it appropriately into world coords. Is there an easier way?

>> 5) how stuff is rendered is somewhat left up to the video card, some
>> make prettier results than others.
>
> You should be able to control all of that. E.g. you can specify which
> multisampling (antialiasing) level to use, mipmap bias, texture filtering
> (e.g. anisotropic) and so on.

You can specify a lot, but it's still up to the card/driver to take the hint or not -- we definitely get different results with different cards: better anti-aliasing on some, etc. The nice thing about the projects that use the Agg back-end is that they get really nice rendering, and it looks exactly the same everywhere. Frankly, that's not a big deal for us -- we need fast interaction more than pretty rendering -- but it does matter to some folks.

While I've got your attention: do you know how to draw a spline in OpenGL? Though this is getting a bit OT for the FC list.

-Chris

--
Christopher Barker, Ph.D.
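On the spline question, in case it helps anyone on the list: legacy GL does have 1-d evaluators (glMap1f/glEvalMesh1) and GLU has NURBS objects, but the approach that works everywhere is to evaluate the curve on the CPU into a polyline and draw it as a line strip. A minimal sketch for a cubic Bezier segment via de Casteljau (pure Python; `segments` would be chosen from the on-screen size of the curve):

```python
# Evaluate a cubic Bezier into a polyline suitable for GL_LINE_STRIP.

def bezier_polyline(p0, p1, p2, p3, segments=32):
    """Return (segments + 1) points on the cubic Bezier defined by four
    2-d control points, using repeated linear interpolation (de Casteljau)."""
    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    points = []
    for i in range(segments + 1):
        t = i / segments
        a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
        d, e = lerp(a, b, t), lerp(b, c, t)
        points.append(lerp(d, e, t))
    return points
```

Other spline types (B-splines, Catmull-Rom) reduce to the same pattern: convert each span to Bezier (or evaluate its basis directly), tessellate to a polyline, upload once, redraw cheaply.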
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

[email protected]

_______________________________________________
FloatCanvas mailing list
[email protected]
http://paulmcnett.com/cgi-bin/mailman/listinfo/floatcanvas
