Re: [osg-users] osgFX and RTT?

2007-12-21 Thread Robert Hansford
Ben,
That sounds like exactly what I'm trying to do.  I'd be interested in seeing
how you set up the post effect class.  Could you send me a link to the
Delta3D thread?

Rob.

On Dec 19, 2007 11:00 PM, Ben Cain [EMAIL PROTECTED] wrote:

  Rob,



 I've written some classes that implement RTT and post-filtering using OSG
 and GLSL shaders.  Let me know if you're interested.  I've thought about
 putting them on the Wiki when I get time.



 You can take the output of one RTT stage and apply it as an input to the
 next stage, ganging them together and allowing as many filters as needed
 (e.g. for creating a night vision scene: RTT of main scene -> color
 amplification stage -> bright pass stage -> horizontal Gaussian blurring ->
 vertical Gaussian blurring -> composition with original RTT of main scene ->
 noise -> change of resolution).



 The technique is based on an idea from one of the users on the Delta3D forum.



 Cheers,

   Ben




  --

 *From:* [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] *On Behalf Of *Robert Hansford
 *Sent:* Wednesday, December 19, 2007 3:26 PM
 *To:* osg-users@lists.openscenegraph.org
 *Subject:* [osg-users] osgFX and RTT?



 Hi all,
 For one of the projects I'm working on I need to have several render
 passes which operate on the rendered scene as a texture.  I was wondering if
 I could base the implementations on the osgFX::Effect class.  I know the
 existing effects are all based around simply using different statesets, but
 would it be an inappropriate use of the Effect class to write Techniques
 which contain a render-to-texture camera and then operate on that texture?

 A few examples of the sort of thing I'm doing are as follows:
 1) blurring the image using a Gaussian kernel.
 2) computing the min and max pixel value in the image in order to
 downgrade HDR imagery to get the expected effects when an unusually bright
 object appears in the field of view.
 3) adding noise to the image to simulate a poor quality camera.

 I am a little concerned that this may not be a proper use for the Effect
 class, as this sort of process probably has to be done at the root of the
 scene graph, whereas the existing effects can be applied to any subgraph.
 However, I do like the idea of implementing these processes as nodes in the
 graph that can be saved out to .osg files etc., but I can probably achieve
 this by writing my own class rather than extending Effect (though that would
 take me more time).

 I would greatly appreciate hearing people's thoughts on this matter.

 Thanks in advance,

 Rob.




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgFX and RTT?

2007-12-21 Thread Robert Hansford
Robert,
I guess that answers my question then.  I was looking at osgFX::Effect as it
seemed to have a lot in common with what I want to do, but you are right,
it's not really an appropriate use for it.

Thanks

Rob.

On Dec 20, 2007 9:24 AM, Robert Osfield [EMAIL PROTECTED] wrote:

 Hi Rob,

 I don't see any reason why one would implement an osgFX::Effect using
 RTT.  The idea of osgFX is that you decorate the subgraph that you
 want the effect on, this could be the whole scene, or just a single
 object.

 Robert.

 On Dec 19, 2007 9:25 PM, Robert Hansford [EMAIL PROTECTED] wrote:
  [...]
 



[osg-users] osgFX and RTT?

2007-12-19 Thread Robert Hansford
Hi all,
For one of the projects I'm working on I need to have several render passes
which operate on the rendered scene as a texture.  I was wondering if I
could base the implementations on the osgFX::Effect class.  I know the
existing effects are all based around simply using different statesets, but
would it be an inappropriate use of the Effect class to write Techniques
which contain a render-to-texture camera and then operate on that texture?

A few examples of the sort of thing I'm doing are as follows:
1) blurring the image using a Gaussian kernel.
2) computing the min and max pixel value in the image in order to
downgrade HDR imagery to get the expected effects when an unusually bright
object appears in the field of view.
3) adding noise to the image to simulate a poor quality camera.
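
For example (1), the blur is usually made separable: compute a 1-D Gaussian
kernel once and apply it in a horizontal and then a vertical pass.  A small
sketch of the weight computation (gaussianWeights is an illustrative helper,
not OSG API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Normalised weights for a 1-D Gaussian kernel of the given radius and
// sigma.  A separable blur uploads these as shader uniforms and applies
// them once along x and once along y.
std::vector<float> gaussianWeights(int radius, float sigma)
{
    std::vector<float> w(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i)
    {
        w[i + radius] = std::exp(-0.5f * i * i / (sigma * sigma));
        sum += w[i + radius];
    }
    // Normalise so the blurred image keeps its overall brightness.
    for (std::size_t i = 0; i < w.size(); ++i) w[i] /= sum;
    return w;
}
```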

I am a little concerned that this may not be a proper use for the Effect
class, as this sort of process probably has to be done at the root of the
scene graph, whereas the existing effects can be applied to any subgraph.
However, I do like the idea of implementing these processes as nodes in the
graph that can be saved out to .osg files etc., but I can probably achieve
this by writing my own class rather than extending Effect (though that would
take me more time).

I would greatly appreciate hearing people's thoughts on this matter.

Thanks in advance,

Rob.


[osg-users] OSG and DIS

2007-12-13 Thread Robert Hansford
Hi all,
Does anyone have any experience working with the DIS protocol (
http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation) or
something similar?  I have to rewrite our OSG application to render a scene
described/updated via DIS and I'm unsure of the best way to go about it
(from a design point of view).

DIS provides the state information for the various objects in the scene (e.g.
their position, attitude and any degree-of-freedom or other attributes), so
I will have to query that information before each frame is rendered.
Presumably the best way to do that is to use the update callbacks for each
object, but at what resolution should I do it?  Should I have a single
callback for each entity that updates all of its data, or should each node
in the graph have its own callback that fetches the appropriate datum?

Either way, should I create classes that extend the existing OSG classes
(e.g. a DIS-compatible PAT node, a DIS-compatible DoF node, a DIS-compatible
LoD node, etc.) and then find some way to replace the appropriate instances
of those nodes within the models?  Or is there some other way to do it?  I've
never really looked at this side of OSG before.

If anyone has done anything similar and wouldn't mind giving some advice, it
would be greatly appreciated.

Thanks in advance

Rob.


[osg-users] Setting window options

2007-12-10 Thread Robert Hansford
Hi all,

I'm writing a little application which renders images to texture and passes
them out to a Matlab program.  So far it all works fine, but I was wondering
if there is any way I can get rid of the window that appears when the
application is running.  It doesn't display anything, as the only camera in
the scene graph is set up for RTT, so it's just a bit untidy.  I'm running
Windows XP, if that makes any difference.

Thanks

Rob.


[osg-users] OSG and Threads

2007-11-21 Thread Robert Hansford
Hi all,

I'm in the process of redesigning our OSG application (the current
version is little better than a proof-of-principle prototype, and
wasn't written by me) and I'm trying to sort out how the threads will
interact.  The application has to render imagery from one or more
views of a common scene which is updated via a network connection.
The rendered imagery will probably need to be delivered back over the
network, either by reading back each frame or by using a separate
video capture card.

My current plan is to have a thread which interacts with the network
and converts the packets into updates to the scene graph, then have a
second thread which does the rendering, and possibly a third thread to
readback and compress the imagery (though this may be done separately
via a video capture card).  I am also considering a fourth thread to
do the scene management stuff, like applying updates from the
network and loading models in the background, but I'm not sure how
necessary this is.

So my questions are as follows:

Is my architecture sensible?
 -   Is there a 'better' way to do it?
 -   How closely should I integrate my application with OSG?  From a
design point of view I'm tempted to create lots of adapters or facades
(i.e. do it properly), but as I will also be the one doing the
implementation I'm not sure I can be bothered.

How easy is it to use OSG in a multi-threaded environment?
 -   Do I need to provide my own mutual exclusion code when modifying
the scene graph or is it already taken care of?

I think I will need each of the viewpoints to be rendered to a
separate window, what is the best mechanism for doing this?
 -   Each view should use the same scene graph except they will each
have a different set of render passes and some different shaders and
uniforms, is this achievable, and how?
 -   Will each view need to run in a separate thread, or can they all
run in one thread?  Are there performance implications either way?

Thanks in advance for any help.

Rob.