Hello Tom!
Looks like there was an issue in uploading the screenshot?
Could you post a minimal/complete/reproducible example of the code that you
use? Maybe someone will be able to spot the issue just by looking at the
code.
-- Vaillancourt
On Tuesday, 27 October 2020 18:58:06 UTC-4, Tom
Finally had some time to look at it in more detail and it was a problem on my
side; all is working now after setting the default FBO id. I've made a pull
request, thanks again for the pointer.
--
Read this topic online here:
Thank you, that looks helpful. I have quickly tried implementing it, but it does
not seem to fix the problem I'm having. However, it now does react to
keyboard inputs again, so it looks like it is a step in the right direction. I
will try a bit more later today.
gwaldron wrote:
> Read this -
Read this - it might help:
http://forum.osgearth.org/solved-black-screen-with-drape-mode-in-a-QOpenGLWidget-td7592420.html#a7592421
Glenn Waldron / osgEarth
On Mon, Sep 16, 2019 at 2:03 PM Wouter Roos wrote:
> Hi all,
> I'm really struggling with getting RTT to work under the latest version
Hi all,
I'm really struggling with getting RTT to work under the latest version of
osgQt and using osgQOpenGL. I am aware of the discussion around adding the
setDrawBuffer(GL_BACK) and setReadBuffer(GL_BACK) to the camera for double
buffered contexts, but no matter what settings I set for the
I found the problem. The HUD camera does not work when I create an OpenGL 3.3 core
context; only the clear color is visible. If I don't create that context, everything works.
Code:
const int width(1920), height(1080);
const std::string version("3.3");
osg::ref_ptr< osg::GraphicsContext::Traits > traits = new
Hi, Robert.
All right, now I see. Probably something is wrong with my code.
I'm using the new OSG 3.5.6, trying to port a deferred renderer with a light
system to OpenGL 3.3.
Thank you for your answer.
Hi Nickolai,
There are no differences between RTT setup in the OSG for GL2 and GL3,
or any other GL/GLES combinations for that matter. The osgprerender
or osgprerendercubemap examples are decent places to start to learn
what you need to do.
Robert.
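For reference, the osgprerender-style setup Robert mentions boils down to a few calls. The following is a minimal configuration sketch only (it assumes an existing `osg::Node* subgraph` and a 1024x1024 target; it is not a complete program and not the example's exact code):

```cpp
#include <osg/Camera>
#include <osg/Texture2D>

// Texture that will receive the rendered image.
osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
tex->setTextureSize(1024, 1024);
tex->setInternalFormat(GL_RGBA);

// Camera that renders the subgraph into the texture via an FBO.
osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);        // draw before the main pass
rttCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
rttCamera->setViewport(0, 0, 1024, 1024);
rttCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
rttCamera->attach(osg::Camera::COLOR_BUFFER, tex.get());   // render into tex
rttCamera->addChild(subgraph);
// add rttCamera to the scene graph, and bind tex via a StateSet where it is used
```

The same calls apply for GL2 and GL3 contexts, which is the point of the reply above.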
On 23 May 2017 at 11:33, Nickolai Medvedev
Hi, community!
How to correctly create a render to texture in GL3 context?
What is the difference between RTT in GL2 and GL3 in OSG?
Thank you!
Cheers,
Nickolai
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=70962#70962
One more thing: rendering to a pbuffer does not automatically give you the
option to access your rendered content as a texture.
The technique to render to texture with pbuffers is called pbuffer-rtt and is
implemented in several OSG samples with the --pbuffer-rtt command line option.
This
On Windows, create a graphics context with the pbuffer flag set to true
and windowDecoration set to false.
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new
osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = 640;
traits->height = 480;
traits->red =
Hi,
OK, I based my initial integration into my app on osgteapot.cpp. As with all
the other examples, it is run via
viewer.run();
And this creates an output window on OSX (and I am assuming any other OS it's
run on). And that's the issue I have: I need OSG to run "headless", that is to
say,
Hi,
https://github.com/xarray/osgRecipes
Chapter 6 - it's all you need.
Cheers,
Nickolai
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68418#68418
___
osg-users mailing list
Hi Chris,
Take a look at the osgprerender example. It shows you how to render to
a framebuffer object.
The bound texture can then be displayed later on.
Cheers
Sebastian
Hi,
I have an existing app I am developing, which itself is based on OpenGL. It
uses an API that provides a 3D
have a look at the osgprerender example. That one renders to texture first,
and then uses the contents of this texture for rendering to screen.
The osgscreencapture and osgposter examples also have options to render
off-screen to FBO or pbuffers.
2016-08-18 13:19 GMT+02:00 Chris Thomas
Hi,
I have an existing app I am developing, which itself is based on OpenGL. It
uses an API that provides a 3D windowing system, with different media being
displayed on planes, within this 3D space. All good...
Except its API does not offer anything near the flexibility and ease of use
of
Hi,
I was able to figure out the issue.
For everyone wondering, I was missing the following line:
textureImage->setInternalTextureFormat(GL_RGBA16F_ARB);
In other words, one needs to set the format on the image as well as on the
texture for everything to work properly. Hope this helps someone
Hi,
I did some more testing and it turns out that I can set a texel to a color with
values > 1.0 just fine in the C++ code.
When using image->setColor(osg::Vec4(1,2,3,4),x,y,0) before reading it with
getColor, I can get results > 1.0.
Does that mean that the shader itself is clamping the
Hi,
for my current project I need to do some computations in the fragment shader
and retrieve the values within my application. For that I am using the render
to texture feature together with a float texture.
I'm having some trouble reading values > 1.0 though. It seems like the values
are
Hi Nicolas,
On 23 February 2015 at 09:35, Nicolas Baillard nicolas.baill...@gmail.com
wrote:
By sharing of GL objects between contexts I assume you mean the
sharedContext member of the GraphicsContext::Traits structure correct ?
Yes, this is how one sets up shared contexts. This means
robertosfield wrote:
The OSG doesn't mutex lock GL objects for a single context as the costs would
be prohibitive, and it's only required for shared contexts usage so it's not
a penalty that is worth paying.
If I didn't use render to texture at all (or if I didn't try to share the
On 23 February 2015 at 13:09, Nicolas Baillard nicolas.baill...@gmail.com
wrote:
robertosfield wrote:
The OSG doesn't mutex lock GL objects for a single context as the costs
would be prohibitive, and it's only required for shared contexts usage so
it's not a penalty that is worth paying.
Hi Nicolas,
The OSG by default will use separate OpenGL objects and associated buffers
for each graphics context. If you enable sharing of GL objects between
contexts then you'll need to run the application single threaded to avoid
these shared GL objects and associated buffers being contended.
If
Thank you Robert.
By sharing of GL objects between contexts I assume you mean the sharedContext
member of the GraphicsContext::Traits structure correct ?
I do set this member for all my contexts. If I don't set it then my windows
don't display the texture generated by my master camera, they
Hello everyone.
I have a view with a master camera rendering to a texture. Then I have two
slave cameras that display this texture into two different windows (so two
different rendering contexts). When I use the DrawThreadPerContext threading
model I get a crash into
wwwanghao wrote:
I use the render to texture method to get a float image, then I need to
process the image so that the data is in the range 0-255
I'm not sure I understand why, but anyway -
How did you customize the image you render to?
That's likely in the vicinity of
Code:
hello hao,
the following code frame may help you:
osg::Image* p_image;
p_image = your image address;
int width = p_image->s();
int height = p_image->t();
int totalbytes = width * height * 4;
// Directly copy the osg image buffer to your memptr:
memptr = your allocated
Hi,
I use the render to texture method to get a float image, then I need to process
the image so that the data is in the range 0-255; after that I want to
display the image. But I don't know how to handle it. Does anyone know how
to do it? Thank you.
Thank you!
Cheers,
Hao
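A CPU-side way to do the 0-255 mapping, once the float data has been read back into a buffer, looks like the sketch below. The function name and the min-max scaling choice are my own illustration, not what the original poster used:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: min-max normalize a float buffer (e.g. read back from a float
// RTT texture) into the 0-255 byte range for display.
std::vector<uint8_t> normalizeTo8Bit(const std::vector<float>& data)
{
    if (data.empty()) return {};
    auto [mn, mx] = std::minmax_element(data.begin(), data.end());
    const float lo = *mn;
    const float range = (*mx - lo) > 0.0f ? (*mx - lo) : 1.0f;  // avoid /0

    std::vector<uint8_t> out(data.size());
    for (std::size_t i = 0; i < data.size(); ++i)
        out[i] = static_cast<uint8_t>((data[i] - lo) / range * 255.0f + 0.5f);
    return out;
}
```

The resulting bytes can then be put into an ordinary GL_RGBA/GL_UNSIGNED_BYTE image for display.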
Hi Ethan,
Thanks, that makes sense that it would just be rendering a quad and that the
original scene geometry would be lost. However, the GLSL geometry shader only
accepts primitives of type point, line, or triangle; is it perhaps
rendering two triangles to the geometry shader to make
Thanks Sebastian,
I have in fact looked through every geometry shader tutorial I could find and
have tried to implement a simple pass-through shader identical to the one you
posted, but when I add the geometry shader I just get a black screen with no
OpenGL error messages, and if I remove the
Hi Ethan
Thanks Sebastian,
I have in fact looked through every geometry shader tutorial I could find and
have tried to implement a simple pass-through shader identical to the one you
posted, but when I add the geometry shader I just get a black screen with no
OpenGL error messages, and if I
Thanks again. It looks like I need to get up to speed with using in and out
vs attribute and varying, since I cut my teeth on older tutorials and apparently
attribute and varying are officially deprecated and only supported through
compatibility mode. I'm not used to needing historical context
Hello Sebastian,
I read up on the differences between GLSL 1.2 and 1.5 and then skimmed through
the official GLSL 1.5 specification document. I then grepped the osg src and
examples directories to see if I could find any #version 150 shaders (I could
not). Are there any reference/example
Hi Ethan,
Hello Sebastian,
I read up on the differences between GLSL 1.2 and 1.5 and then skimmed through
the official GLSL 1.5 specification document. I then grepped the osg src and
examples directories to see if I could find any #version 150 shaders (I could
not). Are there any
If I use #version 150 compatibility, do I still have to explicitly do the in
out specifications, such as declaring out gl_FragColor in the frag shader?
SMesserschmidt wrote:
Hi Ethan,
Hello Sebastian,
I read up on the differences between GLSL 1.2 and 1.5 and then skimmed
through the
Also, is there any good reason to use #version 150 compatibility vs using
#version 120 with the extension required for geometry shaders, other
than that #version 150 compatibility is more forward-looking syntactically?
Sorry Ethan,
Personally I try to take the profile which doesn't require the
extension. Simply go ahead and try.
Concerning your other question: Please check the web for answers. The
OpenGL/GLSL Specification is freely available and there might be some
tutorials for this.
For reference I use:
If I use #version 150 compatibility, do I still have to explicitly do the in
out specifications, such as declaring out gl_FragColor in the frag shader?
No, as I already said: You can use the old syntax and mix it with the
new one.
You absolutely don't have to use the layout, in, out things in
Hello,
Preface: I realize this question comes about because I've never really learned
OpenGL/GLSL from the ground up and am likely missing some simple concepts, but
I mostly have been coasting by at the osg middleware level and have been doing
OK so far.
If I want to do some simple
On 18.11.2013 15:32, Ethan Fahy wrote:
Hello,
Preface: I realize this question comes about because I've never really learned
OpenGL/GLSL from the ground up and am likely missing some simple concepts, but
I mostly have been coasting by at the osg middleware level and have been doing
OK so
Thanks, that makes sense that it would just be rendering a quad and that the
original scene geometry would be lost. However, the GLSL geometry shader only
accepts primitives of type point, line, or triangle; is it perhaps
rendering two triangles to the geometry shader to make up the quad?
Is the vertex shader totally about primitives,
such as triangles, lines, points? Or about the whole surface that is assembled
by primitives?
From: ethanf...@gmail.com
Date: Mon, 18 Nov 2013 16:50:53 +0100
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] Render
: Mon, 18 Nov 2013 16:50:53 +0100
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] Render-To-Texture GLSL program question
Thanks, that makes sense that it would just be rendering a quad and
that the original scene geometry would be lost. However, the GLSL
geometry shader only
Ok, found it... I just had to write to the image in a post-draw camera callback...
nothing more.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
Hey again. As I posted before, I managed to render a scene to a texture and
projected it onto a Geometry. But it is just an image of a scene and the RTT
camera can't move around my object (cessna.osg).
Is there a way to move the camera freely around, like you would with the viewer's
camera?
lg
Hi Christian,
Have a look at the osgdistortion example.
Robert.
On 25 June 2012 11:20, Christian Rumpf ru...@student.tugraz.at wrote:
Hey again. As I posted before, I managed to render a scene to a texture and
projected it onto a Geometry. But it is just an image of a scene and the RTT
camera
robertosfield wrote:
Hi Christian,
thank you
Have a look at the osgdistortion example.
Robert.
On 25 June 2012 11:20, Christian Rumpf wrote:
Hey again. As I posted before, I managed to render a scene to a texture and
projected it onto a Geometry. But it is just an image of a
This really sounds helpful, Robert, but for some reason the examples you
recommended aren't understandable for me. They all tell about reflecting,
printing it onto a flag and so on. Isn't there an example which just renders a
single node, a single box or something else into a texture and
Hi Christian,
I'm afraid I don't have time to walk you through the baby steps of this
type of problem. The OSG has lots of examples, there are several
books, lots of resources you can call upon to learn about the OSG.
Robert.
On 11 June 2012 16:26, Christian Rumpf ru...@student.tugraz.at wrote:
No need anymore, Robert. I finally found a homepage which explains everything about
render to texture:
http://beefdev.blogspot.de/2012/01/render-to-texture-in-openscenegraph.html
It really helped me, and I finally could load my shader files with texture as
uniform sampler2D. Nevertheless thank you
Hey!
I read nearly everything about render to texture techniques and what you can do
with them, but not how to SIMPLY display it. My intention is to pre-render the
scene into a texture and send this texture into my fragment shader (GLSL) to
simulate blur effects.
But before I can do this
Hi Christian,
You simply need to create a geometry and assign the RTT Texture to it via
a StateSet, and then render this as part of the main scene graph. Have
a look at the osgprerender or osgdistortion examples.
Robert.
On 7 June 2012 20:52, Christian Rumpf ru...@student.tugraz.at wrote:
Hey!
Hi,
I've run into a problem which I suspect is just a gap in my understanding of
osg and RTT. Nevertheless, I'm a bit stumped.
My goal is to do a small RTT test where I take a source texture, render it to a
quad using an RTT camera and then apply the output to another quad. In other
words,
Hi,
I made an application with a camera that rendered into a texture and everything
worked.
Now I want to render into the texture from 2 cameras. I want cam1 to render
into the left side, and cam2 to render into the right side of the texture.
Basically this is my code:
Code:
cam1 = new
Hi, Martin
You should either use clear only on the camera that renders the first half, or use
a scissor test (osg::Scissor and GL_SCISSOR_TEST) set to the region of each camera's
viewport, so that one camera does not clear the other camera's render result.
Cheers,
Sergey.
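The two regions can be computed like this (a small sketch; the struct and function names are made up for illustration). Each camera would use its half as both its viewport and its scissor region:

```cpp
struct Rect { int x, y, width, height; };

// Left/right halves of a shared render target: cam1 gets the left half,
// cam2 the right half. Using each half as both viewport and scissor region
// means clearing one camera cannot wipe out the other camera's result.
Rect leftHalf(int texWidth, int texHeight)
{
    return { 0, 0, texWidth / 2, texHeight };
}

Rect rightHalf(int texWidth, int texHeight)
{
    // the right half absorbs the odd pixel when texWidth is odd
    return { texWidth / 2, 0, texWidth - texWidth / 2, texHeight };
}
```

In OSG terms each rectangle would be handed to `setViewport(x, y, width, height)` and to an `osg::Scissor` with the same values.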
05.08.2011, 18:51, Martin Haffner str...@gmx.net:
Thanks Robert for taking the time to reply. Also, I want to say thanks to you
and other main developers of OSG, because I have found OSG quite easy to work
with so far. I haven’t had to post to the forum thus far because of the
learning environment that the forum content has provided, along
Hi Glen,
Thanks for the explanation about Scaleform. Given that it's doing
OpenGL calls for you, you'll need to be mindful of the effect of OSG
state on Scaleform and vice versa. The issue of integrating 3rd
party OpenGL code with the OSG is something that has been discussed a
number of
Robert,
Just what I was looking for...Thanks!
I do have a prototype working using an RTT camera which updates a texture on an
object in the scene. On your suggestion about minding the state, it did take
me a while to work through the interaction between OSG and Scaleform on the
state since
Hi Glen,
Have a look at the osgXI project on sourceforge.net. It has an
osgFlash abstraction layer and two different implementations using
Scaleform and gameswf, written by one of my collaborators. It is not
written in a uniform format at present, so you may have to ignore many
Chinese comments in
Hi,
I have some questions concerning the best approach to update textures using a
pre-render FBO camera. First I want to provide a little background before my
questions.
For the project I'm working on I've been integrating a 3rd party vendor product
(Scaleform) into OpenSceneGraph to
Hi Glen,
The solution you are explaining sounds far too complicated for what is
actually needed. I can't see why a pre-render FBO camera would be
required, unless Scaleform is using OpenGL.
The normal way to implement video textures with the OSG is to subclass
from osg::ImageStream as the
hi, Sergey,
Thank you for your help. I used Texture2D, without mipmaps.
I need to render the camera frame at the full size of the viewer.
Do you think using mipmaps would improve my program's efficiency?
TANG
hybr wrote:
Hi, Tang
First thing that comes to mind - check if you disabled resizing
Hi, Tang
The camera frame texture is likely not power-of-two sized. OSG by default
resizes textures to a power-of-two size, and this can take a lot of time each
frame in your case. You can disable the resizing by calling
setResizeNonPowerOfTwoHint(false) on your texture. Also use linear filtering as
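To illustrate the cost: a 640x480 camera frame is not power-of-two, so by default it would be rescaled every frame, e.g. to 1024x512 if rounding up (a sketch; the helper names are mine):

```cpp
// True if v is a power of two (for v > 0).
bool isPowerOfTwo(int v)
{
    return v > 0 && (v & (v - 1)) == 0;
}

// Smallest power of two >= v, i.e. the size a texture would be rescaled
// to if the implementation rounds upward.
int nextPowerOfTwo(int v)
{
    int p = 1;
    while (p < v) p <<= 1;
    return p;
}
```

With the resize hint disabled, the 640x480 frame is uploaded as-is and the per-frame rescale disappears.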
Hi Tang,
just some tips:
* disable the auto-resizing of NPOT images via
texture->setResizeNonPowerOfTwoHint(false);
* set the internal format of the image to GL_RGBA and the format of the
image to GL_BGRA, so the conversion is done by the hardware/driver:
image->setInternalTextureFormat(GL_RGBA);
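The conversion the driver saves you from is a per-pixel channel swap like the following sketch (it assumes tightly packed 4-byte BGRA pixels; doing this on the CPU every frame is exactly the cost being avoided):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>

// CPU-side BGRA -> RGBA conversion: swap the blue and red byte of every
// 4-byte pixel. Letting the driver handle the format avoids this loop.
void bgraToRgba(uint8_t* pixels, std::size_t pixelCount)
{
    for (std::size_t i = 0; i < pixelCount; ++i)
        std::swap(pixels[4 * i + 0], pixels[4 * i + 2]);   // B <-> R
}
```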
Hi, Stephan and Sergey
Thank you for your help!!!
Finally I got a good fps on my iPhone 4 thanks to your advice. It is so
great!!! Now I can go on working on my mobile AR project.
I appreciate it very much, again!
Cheers,
Tang
Hi,
I also ran into the same problem with rendering to texture on iPhone. I tried to
render the video frame, captured continually from the iPhone's camera, as a 2D
texture for the viewer's background, but it is very slow.
How can I fix it?
Thank you for any help!
Cheers,
Tang
Hi,
On 28.02.11 11:30, Tang Yu wrote:
I also ran into the same problem with rendering to texture on iPhone. I tried to
render the video frame, captured continually from the iPhone's camera, as a 2D
texture for the viewer's background, but it is very slow.
How can I fix it?
Without seeing
Hi, Tang
First thing that comes to mind - check whether you disabled resizing of non power of
two textures on the texture holding the camera image, as well as mipmap generation.
Cheers, Sergey.
28.02.2011, 13:30, Tang Yu tangy...@yahoo.com.cn:
Hi,
I also ran into the same problem with rendering to
Hi Phummipat,
I think it is not intended that, within the first viewer.frame(), a statement causes
an abort of viewer.frame(). However most people call viewer.frame() in a loop
and wouldn't notice it at all.
Best regards.
Dietmar Funck
pumdo575 wrote:
Hi Dietmar Funck
Thank you very much for
Hi Sergey,
your proposal works very well.
Thank you very much,
Dietmar Funck
hybr wrote:
Hi, Dietmar Funck.
In order to get another texture attached you can use something like
_cam->setCullCallback( new fboAttachmentCullCB( this ) );
void fboAttachmentCullCB::operator()(osg::Node*
Hi everyone,
I have some questions about render to texture. The code that I give below
works, but I have some questions.
1. Why do I have to call viewer.frame() two times for it to work? If I call
viewer.frame() just one time it doesn't work; the written image shows just a
blank screen.
2. How to run
Hi,
I noticed the problem with the first call of viewer.frame() too. This happens
because during the viewer initialization - which is triggered by the first call of
viewer.frame() - a call to glGetString (SceneView::init() ->
osg::isGLExtensionSupported(_renderInfo.getState()->getContextID(),); )
Hi Dietmar Funck
Thank you very much for your reply. Do you mean the first viewer.frame() is
used for initialization, and that's why nothing is rendered in the first frame?
Best regards,
Phummipat
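The initialization behaviour described in this thread can be modelled with a toy sketch (this illustrates lazy initialization in general; it is not OSG's actual code):

```cpp
// Toy model of a renderer whose first frame() call only performs
// initialization: nothing reaches the target until the second call.
struct LazyRenderer
{
    bool initialized = false;

    // Returns true if the call actually rendered something.
    bool frame()
    {
        if (!initialized) { initialized = true; return false; }
        return true;
    }
};
```

This is why calling frame() in a loop hides the issue: only the very first iteration produces nothing.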
On Wed, Feb 16, 2011 at 2:48 PM, Dietmar Funck
dietmar.fu...@student.hpi.uni-potsdam.de wrote:
Hi,
Hi, Dietmar Funck.
In order to get another texture attached you can use something like
_cam->setCullCallback( new fboAttachmentCullCB( this ) );
void fboAttachmentCullCB::operator()(osg::Node* node, osg::NodeVisitor* nv)
{
osg::Camera* fboCam = dynamic_cast<osg::Camera*>( node );
Hello,
I would like to use render to texture in every render step. My texture resolution
is 2048 x 2048 and it is very slow. Are there tips and tricks to speed up
render to texture?
With 2048 x 2048 I get around 15 FPS and with 1024 x 1024 I get 45 FPS.
Thanks
Martin
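Those numbers are consistent with fill cost: doubling the edge length quadruples the pixel count, so the per-frame work roughly quadruples too. A trivial sketch of the arithmetic:

```cpp
// Pixel count of a square render target. 2048x2048 is 4x the work of
// 1024x1024, matching the observed 15 FPS vs 45 FPS order of magnitude.
long long pixelCount(int size)
{
    return static_cast<long long>(size) * size;
}
```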
Hi Martin
What is your Hardware/Software configuration?
Which osg::Camera::RenderTargetImplementation did you use in your code ?
Try the osgprerendercubemap example to test performance of your hardware.
HTH
David Callu
2011/2/2 Martin Großer grosser.mar...@gmx.de
Hello,
I would like to use
Date: Wed, 2 Feb 2011 13:56:09 +0100
From: David Callu led...@gmail.com
To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] Render To Texture is very slow
Hi Martin
What is your Hardware/Software configuration?
Which osg::Camera
Hi Martin,
On 02.02.2011 14:35, Martin Großer wrote:
Hello David,
So I use the FRAME_BUFFER_OBJECT and I have an NVIDIA GTX 470 graphics card.
I tried the osgprerendercubemap, but I cannot print out the frame rate.
Additionally I tried the osgprerender example and I get a frame rate of
-computing.de
To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] Render To Texture is very slow
Hi Martin,
On 02.02.2011 14:35, Martin Großer wrote:
Hello David,
So I use the FRAME_BUFFER_OBJECT and I have an NVIDIA GTX 470 graphics
card.
I tried
Thanks a lot ! It works :D
Frederic Bouvier wrote:
I used
camera->setImplicitBufferAttachmentMask(
osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT,
osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT );
to avoid having a depth buffer attached.
HTH
Regards,
-Fred
- Julien
Thanks for your answer:
I've managed to make it work in pure GL3 without osg, and I see that your tweak in
osg is the right thing to do.
However it still doesn't work...
Here are the different GL calls for FBO creation for the 2 cases:
- working case (only one slice)
cam->attach(
I used
camera->setImplicitBufferAttachmentMask(
osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT,
osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT );
to avoid having a depth buffer attached.
HTH
Regards,
-Fred
- Julien Valentin a écrit :
Thanks for your answer:
I've managed to make it work in
Hi Julien,
It's me that submitted this change at
http://www.mail-archive.com/osg-submissions@lists.openscenegraph.org/msg05568.html
It's hard to tell what's going wrong without the full code of your camera setup.
In http://www.opengl.org/registry/specs/ARB/geometry_shader4.txt
error 0x8da8
Hi Julien,
I haven't personally tested this feature yet, but having merged the
submissions I know that the FACE_CONTROLLED_BY_GEOMETRY_SHADER control
is only available on recent hardware and drivers so check whether this
feature is available on your hardware.
Robert.
On Sat, Dec 18, 2010 at
Hi,
I'm trying to make an efficient fluid simulation with osg.
I've just found this page:
http://www.mail-archive.com//msg05568.html
It looks pretty great, as my 1-camera-per-slice code is very CPU-time consuming.
I developed a geometry shader that changes the gl_Layer value per primitive.
It works, so I
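A per-primitive gl_Layer write looks roughly like the sketch below; the shader source is kept here as a C++ string constant, and the modulo-by-layer-count routing is my own illustration, not the poster's actual shader:

```cpp
#include <cstring>

// Pass-through geometry shader that routes each incoming triangle to a
// slice of a layered render target by writing gl_Layer (GLSL 1.50 core).
const char* layerGeomShader = R"(
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
uniform int numLayers;
void main()
{
    int layer = gl_PrimitiveIDIn % numLayers;   // per-primitive slice choice
    for (int i = 0; i < 3; ++i) {
        gl_Layer = layer;                        // must be set before EmitVertex
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
)";

// Small helper so the shader text can be sanity-checked.
bool shaderMentions(const char* token)
{
    return std::strstr(layerGeomShader, token) != nullptr;
}
```

This only works when the camera's attachment is a layered target (3D texture or texture array) on hardware that supports layered rendering, as noted in Robert's reply above.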
Hi Delport,
I am getting the several passes with the chain effect within the dynamic scene.
But the scene freezes after the third pass. And the scene remains frozen (I mean
the animation); I can only see the blurred scene.
What should I look into to debug this?
Thanks for all the useful hints
Hi Delport,
Thanks for the support. At least I have a little bit of improvement. I can go
up to the second pass, which makes the blurred scene blurrier.
1. Initial scene - no blur effect.
2. First key press - shader activated, scene blurred and visualized.
3. Second key press - the scene is
Hi Delport,
I am attaching the code I put inside the keyboard handler. I believe it will
provide you with more insight for your suggestions.
'
class BlurPassHandler : public osgGA::GUIEventHandler
{
public:
BlurPassHandler(int
Hi,
you cannot make loops in the graph, so the more passes you need, the
more passes you will have to insert into the scene graph.
jp
On 14/12/10 13:53, Sajjadul Islam wrote:
Hi Delport,
Thanks for the support. At least I have a little bit of improvement. I can go
up to the second pass, which
Hi Delport ,
Is something like that happening in osggameoflife, without adding the pass to the
scenegraph?
Please correct me if I'm wrong.
They have multiple passes and they just flip-flop the two output textures
...
Thank you!
Cheers,
Sajjadul
Hi,
in the case of osggameoflife, the state of a single image is updated
during every call to viewer.frame(). In other words the processing
starts with a single state and this is updated - the input is not
dynamic. The flip-flop is only used because one cannot read from and
write to the same
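The flip-flop bookkeeping itself is tiny; a sketch (the processing step is only a comment here, standing in for the shader pass):

```cpp
#include <utility>

// Ping-pong between two buffers: each pass reads buf[src] and writes
// buf[dst], then the roles swap, because GL cannot read from and write
// to the same texture in one pass. Returns the index holding the result.
int finalBufferIndex(int passes)
{
    int src = 0, dst = 1;
    for (int p = 0; p < passes; ++p) {
        // process(buf[src]) -> buf[dst] would happen here
        std::swap(src, dst);
    }
    return src;
}
```

After an odd number of passes the result sits in buffer 1, after an even number in buffer 0, which is why the viewer must be pointed at the right texture of the pair.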
Hi Delport,
I have added one more pass branch to the graph and I can see a new behavior.
The code snippet for it is as follows:
//the first pass in the scene; with the key press the following does the blur on
the
Hi,
sorry, I can't quite follow the code, but do something like this:
input_texture -> ProcessPass[0] -> Out[0]
Out[0] -> Process[1] -> Out[1]
Out[1] -> Process[2] -> Out[2]
Just make a chain...
Depending on how many passes you have enabled, view one of the Out[]
textures.
jp
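On the CPU, the same chain structure can be sketched with an ordinary array, where each pass reads the previous pass's output; a 3-tap box blur stands in for the real shader pass (the function name and blur choice are mine):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// CPU sketch of a pass chain: Out[p] feeds Process[p+1], exactly like
// chaining RTT cameras, with a 3-tap box blur as the per-pass operation.
std::vector<float> boxBlurChain(std::vector<float> in, int passes)
{
    for (int p = 0; p < passes; ++p) {
        std::vector<float> out(in.size());
        for (std::size_t i = 0; i < in.size(); ++i) {
            float sum = in[i];
            float n = 1.0f;
            if (i > 0)             { sum += in[i - 1]; n += 1.0f; }
            if (i + 1 < in.size()) { sum += in[i + 1]; n += 1.0f; }
            out[i] = sum / n;   // average over the available neighbours
        }
        in = std::move(out);    // Out[p] becomes the input of pass p+1
    }
    return in;
}
```

Each extra pass widens the blur, which mirrors adding another camera/quad stage to the scene-graph chain.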
On 15/12/10 02:40,
Hi Robert Delport,
I have a setup with 4 cameras:
1. 1st camera - the slave camera that inherits the master camera's relative frame
and renders the scene to 2 textures with color attachments. One of the textures
remains as it is and the other texture is left for further operation.
2. 2nd camera -
Hi,
On 13/12/10 10:15, Sajjadul Islam wrote:
Hi Robert Delport,
I have the setup with 4 cameras:
1st camera - the slave camera that inherits the master camera's
relative frame and renders the scene to 2 textures with color
attachments. One of the textures remains as it is and the other texture is
Hi Delport,
Reasons for rendering the scene to two textures:
With a key press event I may want to show, via the HUD, the texture that has not
gone through any operation. If I do any post processing on the texture, the
initial texture value is lost. I want to preserve it. This is why I render the
Hi,
On 13/12/10 12:26, Sajjadul Islam wrote:
Hi Delport,
Reasons for rendering the scene to two textures:
With a key press event I may want to show, via the HUD, the texture
that has not gone through any operation. If I do any post processing
on the texture, the initial texture value is lost. I
Hi Delport,
I am sorry, but I did not get much from your last reply asking how I am
changing the texture when I switch the branches. Do I have to explicitly specify
the texture? Even so, I believe that it is done as follows:
stateset->setTextureAttributeAndModes(0,
Hi Delport,
I have created a new class inheriting from osg::Drawable::UpdateCallback. The
class structure is as follows:
class BlurCallback : public osg::Drawable::UpdateCallback
{
public:
BlurCallback(BlurPass
Hi,
On 14/12/10 02:46, Sajjadul Islam wrote:
Hi Delport,
I have created a new class inheriting from osg::Drawable::UpdateCallback. The
class structure is as follows:
class BlurCallback : public