Hi Dominik,
Dominik Rau wrote:
Yes and no. With Xinerama and MultiView-SLI, it would be possible to
span one window over all screens. This works, but everything right of
pixel 4096 stays black.
Ok, I didn't think of Xinerama. I'm not sure how well nVidia supports
it, but in principle I would try to avoid it, as it will force some
amount of serialization. If you have two cards you will probably want to
feed them in parallel.
Well, the target is to keep it open for as many configurations as
possible. So, again, yes and no. It should work in both cases.
OK.
Ok, maybe 2 out of 25 frames per second are really enough. ;)
That sounds good. ;)
Err, ok, sorry for that. Would it be possible to compress the data?
(Although it would be a bit dumb to decompress an MPEG stream only to
compress the pictures again to send them over the network...)
Not easily. Networks are getting pretty fast; in some of our tests it
turned out to be faster to just send the data instead of trying to
compress and decompress it. That was for pixel data and a relatively
slow CPU, though; things might be different today and for your system.
This seems to be complicated but easier (for me) than writing a video
chunk. But it raises some new questions:
* How can I access the SceneGraph on the server side? For the client, I
set the RootNode for every Viewport with setRoot(NodePtr xy). Is
there something like getRoot on the other side?
Not directly. The easiest way is to add a Name to the instances you're
looking for and to walk through the created FieldContainer list.
Alternatively you can create a new type and add a filter for that
specific type.
* How can I identify a specific texture or texture chunk? There
is an AttachmentsField - can I use getName/setName here?
Yup.
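To make the two answers above concrete, here is a rough sketch in
OpenSG 1.x style. The name "VideoTex" and the helper function are made
up for illustration, and the exact factory/accessor signatures may
differ slightly in your OpenSG version, so treat this as a starting
point, not a verified implementation:

```cpp
// Sketch, assuming OpenSG 1.x. "VideoTex" and findVideoChunk() are
// illustrative names, not part of the library.
#include <cstring>
#include <OpenSG/OSGSimpleAttachments.h>     // osg::setName / osg::getName
#include <OpenSG/OSGFieldContainerFactory.h>
#include <OpenSG/OSGTextureChunk.h>

// Client side: tag the chunk so the server can find it later.
void tagChunk(osg::TextureChunkPtr tex)
{
    osg::setName(tex, "VideoTex");
}

// Server side: walk the FieldContainer store and match by name.
osg::TextureChunkPtr findVideoChunk(void)
{
    const std::vector<osg::FieldContainerPtr> *store =
        osg::FieldContainerFactory::the()->getFieldContainerStore();

    for(osg::UInt32 i = 0; i < store->size(); ++i)
    {
        osg::FieldContainerPtr fc = (*store)[i];
        if(fc == osg::NullFC)
            continue;

        // Names live on AttachmentContainers, so downcast first.
        osg::AttachmentContainerPtr ac =
            osg::AttachmentContainerPtr::dcast(fc);

        if(ac != osg::NullFC && osg::getName(ac) != NULL &&
           strcmp(osg::getName(ac), "VideoTex") == 0)
        {
            return osg::TextureChunkPtr::dcast(fc);
        }
    }
    return osg::NullFC;
}
```

The same setName/getName pair works for Nodes, Images and any other
AttachmentContainer, so one lookup helper covers both questions.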
I hope that someday I will understand how to implement something like
that in OpenSG. At this very moment, I don't even try.
It's not that hard, really. Just try it.
Ok, let me see if I get this right: You mean that I should decode the
whole video if I enable the chunk and just select the frame afterwards?
Ah, no, for the reasons you cite. I was thinking about not fixing the
playback to the rendering rate, but to real time instead, which might
necessitate skipping a couple frames for complex scenes. But only
incrementally, not decoding the whole thing.
This seems to be impractical for all but very short videos (or machines
with lots of RAM), but as a start this would be an option (my videos
are quite short and there's plenty of memory left).
Hm, ok. That's unexpected, but it could help.
Assuming I decode the whole video at startup into an OSG::Image with
multiple frames and select only the needed frame (I don't know whether
this is possible; I only see methods to adjust the frame change time,
but this should be easier to add than a VideoChunk), then the data
is sent only once? In that case I could go with a simple TextureChunk
for now...
Yes, that is definitely worth a try. You just need to create a
multi-frame Image (essentially a stack of images), and to set the frame
number in the TextureChunk. Make sure to use a correct and minimal
fieldMask in the begin/endEdit calls, or the performance gain will be
lost.
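A sketch of that, again assuming OpenSG 1.x; the variable names
(width, decodedPixels, currentFrame, etc.) are placeholders, and the
exact Image::set() parameter order should be checked against your
headers:

```cpp
// Sketch, assuming OpenSG 1.x. All lowercase variables are
// placeholders you would fill in from your decoder.
#include <OpenSG/OSGImage.h>
#include <OpenSG/OSGTextureChunk.h>

// Build one multi-frame Image: all decoded frames back to back
// in a single data block.
osg::ImagePtr img = osg::Image::create();
beginEditCP(img);
img->set(osg::Image::OSG_RGB_PF, width, height, 1 /* depth */,
         1 /* mipmaps */, frameCount, 0.0 /* frameDelay */,
         decodedPixels /* frameCount frames, contiguous */);
endEditCP(img);

osg::TextureChunkPtr tex = osg::TextureChunk::create();
beginEditCP(tex);
tex->setImage(img);
endEditCP(tex);

// Per rendered frame: touch ONLY the frame field. With a minimal
// fieldMask the cluster syncs just this small change instead of
// re-sending the whole image data.
beginEditCP(tex, osg::TextureChunk::FrameFieldMask);
tex->setFrame(currentFrame);
endEditCP(tex, osg::TextureChunk::FrameFieldMask);
```

The key point is the last block: an unrestricted endEditCP would mark
the whole container dirty, which is exactly the "gain will be lost"
case mentioned above.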
Hope it helps
Dirk
_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users