Hi all -

>
> I am using OSGGeometry nodecores that have ~ 1000 points in them. I 
> setDListCache(true) for these cores, as these points are replaced 
> periodically, but after I modify the GeoPositions field it seems that 
> the old points are displayed along with the new ones. Is there a way 
> to tell OpenSG to dump the old Display list for a node core and 
> generate a new one?
>   

OpenGL display lists are write-once: you can't read them and you can't edit 
them, so they need to be recreated from scratch every time something changes.

So if you have old points that aren't being deleted, it can't be related to 
display lists, as those are scrapped. Do you clear out the OpenSG fields 
that contain the data before setting the new points, or do you overwrite 
all of the existing data when setting the new stuff?
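In OpenSG 1.x the overwrite would typically happen inside a beginEditCP()/endEditCP() pair on the GeoPositions field; here is a library-neutral sketch (the Vec3 and replacePoints names are mine, not OpenSG's) of why clearing before refilling matters when the new set is smaller than the old one:

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for a GeoPositions-style field. The point of the sketch:
// if the new point set is smaller than the old one and you only overwrite
// a prefix of the field, the leftover tail keeps being drawn. Clearing
// first (or resizing to the new count) avoids that.
struct Vec3 { float x, y, z; };

void replacePoints(std::vector<Vec3>& field, const std::vector<Vec3>& fresh)
{
    field.clear();                                      // drop ALL old points
    field.insert(field.end(), fresh.begin(), fresh.end());
}
```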

====================================================================================
======= my comments here. ==========================================


Dirk - this was due to my own mistake. I created a new field container 
and core which I use to represent a ground plane. The node core 
resembles Switch, except it has any number of children which can be 
turned on and off, not just one.

My mistake was in setting the DrawAction and RenderAction's default 
enter/exit functions in the Node instead of the Core. Instead of the 
draw function being called (where I call action->useNodeList(), then add 
the nodes which have been selected for display), ALL the children were 
being drawn.

Now I've corrected this and the behavior is as you describe. Sorry for 
the confusion!



=============================================================

> Another question along those lines: is there a way to pre-compile 
> display lists like this and then directly use the precompiled lists in 
> my node cores? This way I could quickly replace one display list 
> (which might contain 1000s of points) with another that already sits 
> in the video card's memory. I've got a lot of unused graphics memory 
> just sitting there...
>   

So you have a number of sets of points that you want to show in 
alternation? The easiest way to do that is to have separate Geometry 
nodes and either use a Switch to select one of them or use traversal 
masks to select subsets of them.
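The idea behind that suggestion can be sketched as follows: build every point set once, up front, so that showing a different one is only an index change (much like setting a Switch core's choice), with no point data re-uploaded. PointSetSwitch is an illustrative name, not an OpenSG class:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// All point sets are pre-built; switching between them is O(1).
struct PointSetSwitch {
    std::vector<std::vector<float>> sets;   // pre-built xyz data per set
    std::size_t active = 0;

    void select(std::size_t i) { active = i; }          // cheap swap
    const std::vector<float>& current() const { return sets[active]; }
};
```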

Can you give us a better idea of your application? That might help us 
understand what it is you're trying to do and come up with more specific 
hints.

Yours

   Dirk

================================================================



Here's a bit more about what I'm doing.

My application simulates motion over a ground plane. The ground plane is 
populated with random dots. I've modelled the ground plane as a set of 
grids. Every time the camera beacon is moved I update the list of grids 
which are to be drawn by projecting the viewing volume onto the plane 
(there are other constraints which make this a good approximation). As 
grids fall out of the view they are returned to the pool. As new grids 
are required they are drawn from the pool, re-populated with random 
dots, and flagged for drawing on the next frame.
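The pool scheme above can be sketched as a simple free list; GridPool and the integer grid handles are illustrative stand-ins, the real pool holding pre-built node cores:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal free-list pool: grids that fall out of view are released back,
// newly visible grids are acquired and then re-populated with fresh dots.
// All allocation happens once, up front.
class GridPool {
public:
    explicit GridPool(int n) { for (int i = 0; i < n; ++i) free_.push_back(i); }

    int  acquire()      { int g = free_.back(); free_.pop_back(); return g; }
    void release(int g) { free_.push_back(g); }
    std::size_t idle() const { return free_.size(); }

private:
    std::vector<int> free_;   // handles of pre-allocated, unused grids
};
```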

This application is used in the study of visual neuroscience, and as 
such there are specific requirements which complicate our ground plane. 
One requirement is that we wish to limit the depth cues that the subject 
sees. This means that we use GL points with a fixed point size rather 
than drawn objects or textures. Such objects or textures would be drawn 
larger as they move closer to the camera, smaller as they move away, 
etc., and that's a depth cue we want to eliminate. Using GL points means 
the dots are drawn as fixed-size squares of pixels regardless of their 
distance from the viewer.

The other complication is that of frame-by-frame control. We would like 
to update the content of the scene on each frame and ensure that the 
specified scene is drawn - without fail - for minutes at a time. That's 
been another battle with drivers, the operating system, and video cards 
that we needn't go into right now ;) I can say that this basic goal has 
been met with rather pedestrian video cards and CPUs, and we've achieved 
100 FPS with some reliability (though it seems to make the video cards 
rather unhappy).

The frame-by-frame control requirement has led me to exclude any 
time-consuming procedures from the update of each scene. Memory 
allocation is one of those, so I've arranged to pre-allocate 
all the node cores needed for the ground plane and gathered them into one 
node core, which I call OSGMultiSelect. The individual cores in the 
multiselect each represent a grid rectangle on the plane, and each 
actually consists of three nodes/cores:
1.  a ComponentTransform that scales and translates the grid to a 
location on the plane. This transform has two children,
2.  a colored rectangle which is the "ground", and
3.  a collection of dots that are the "texture" on the ground. This is 
an OSGGeometry core with an attached PointChunk where the pointsize is set.

For various scientific reasons the dots must be random, and we don't 
want to re-use dots. Hence the requirement that I re-populate these dots 
with some frequency. Not every frame, though! Thus I leave display lists 
ON, as the dots on a given grid will remain on the screen for a second 
or so before they are dumped and re-populated. The grid sizes and dot 
density are controlled by the experimenter, so a wide range of 
values must be supported.

Sorry for the long-winded explanation.

[ BTW, I'll post some stuff in the applications gallery one of these 
days or weeks. There's a big conference coming up, however, and we've 
got a lot to do so people can get preliminary data to show. I suppose 
you know how that goes ;) ]

Now, back to the question. I am concerned that under some circumstances 
this re-population of dots may take too much time. Unfortunately I can't 
yet discern where that time would be spent - in the generation 
of random numbers, the movement of the dots across the bus to the video 
card, or in the rendering on the card itself. We haven't yet pushed our 
application to this threshold, but as our scenes become more complex I 
cannot be sure there won't be trouble.

So, my thought becomes: why not pre-populate the card's memory with 
generated display lists of points? I'm guessing that a list of 1000 
points will take roughly 1000 * 4 bytes * 3 (xyz) = 12 KB of memory. On 
a card with 64 MB there ought to be room for thousands of these. When 
the time comes to repopulate the points for a grid rectangle, I'd simply 
change the display list ID that the node core uses, with no cost in 
transporting that information to the card, compiling the list, etc. All 
that cost would be paid before the start of an experimental trial, where 
there's plenty of time for such things.
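The back-of-envelope estimate above checks out; a quick sketch of the arithmetic (note that real display lists also carry driver-side overhead, so treat the count as an upper bound):

```cpp
#include <cassert>
#include <cstddef>

// 1000 points * 3 floats (xyz) * 4 bytes = 12000 bytes per list.
constexpr std::size_t bytesPerList(std::size_t points)
{
    return points * 3 * sizeof(float);
}

// How many such lists fit in a given memory budget?
constexpr std::size_t listsThatFit(std::size_t budgetBytes, std::size_t points)
{
    return budgetBytes / bytesPerList(points);
}
```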

Don't games do this type of thing when loading textures onto the card?


Anyway, that's a longer explanation of what I'm doing. I'd appreciate 
any input on this last point, or perhaps some guidance on measuring the 
time required for some of these operations. I'm sure that the card 
specs will tell me roughly how long it'll take to move data there, 
generate lists, etc., but that's a little beyond my current knowledge.
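On measuring where the time goes: before reaching for card specs, it may be enough to time each suspect stage separately with a steady clock. A minimal sketch, here timing just the random generation for one grid's worth of dots (the same wrapper would go around the field update, and around the draw if completion is forced with glFinish() first - otherwise GL calls only measure command submission):

```cpp
#include <cassert>
#include <chrono>
#include <random>
#include <vector>

// Returns the wall-clock milliseconds a piece of work took.
template <typename Work>
double millis(Work&& work)
{
    auto t0 = std::chrono::steady_clock::now();
    work();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```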


Dan

_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
