Re: [osg-users] Help: Data accuracy from Frame buffer is not very good

2011-11-29 Thread J.P. Delport

Hi,

On 29/11/2011 23:46, wang shuiying wrote:

Hello, J.P. Delport

I followed your advice but it still doesn't work well.

And I also changed the internal format of the Image to GL_RGBA32F_ARB,
but the points remain the same.

What is the name of "the osgprerender example with the --hdr switch"?
The example is called "osgprerender"; have a look at its code. It has a 
command-line switch that enables HDR rendering to texture, but I'm not 
sure this is your problem anymore.


What resolution are you expecting from the FBO, and in what range is the 
vertex data you are using? Are you sure you are not converting to 
integers somewhere along the line? Are you using the osg::Image at all, 
or are you using the texture directly?


rgds
jp



Thank you very much!

Shuiying
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



--
This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. 
The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.


This message has been scanned for viruses and dangerous content by MailScanner, 
and is believed to be clean.




Re: [osg-users] Help: Data accuracy from Frame buffer is not very good

2011-11-29 Thread wang shuiying

Hello, J.P. Delport

I followed your advice but it still doesn't work well.

And I also changed the internal format of the Image to GL_RGBA32F_ARB, 
but the points remain the same.


What is the name of "the osgprerender example with the --hdr switch"?

Thank you very much!

Shuiying


Re: [osg-users] Help: Data accuracy from Frame buffer is not very good

2011-11-29 Thread J.P. Delport

Hi,

I'd suggest attaching a texture with a GL_RGBA32F_ARB internal format to 
the camera as well (in addition to the image). I'm not sure you are 
getting a float FBO. See the osgprerender example with the --hdr switch.
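One way to force a float FBO is the one the osgprerender --hdr path uses: allocate the readback image as float and also give it a float internal texture format before attaching it. A sketch, reusing the variable names from your code:

```cpp
// Sketch based on the osgprerender --hdr approach: allocate the readback
// image as float AND set a float internal texture format, so the FBO the
// camera creates is 32-bit float rather than the default 8-bit RGBA.
osg::Image* image = new osg::Image();
image->allocateImage(xRes, yRes, 1, GL_RGBA, GL_FLOAT);
image->setInternalTextureFormat(GL_RGBA32F_ARB); // float FBO attachment
camera->attach(osg::Camera::COLOR_BUFFER, image);
```

Without the setInternalTextureFormat() call the driver is free to give you an 8-bit target even though the image data type is GL_FLOAT.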


jp

On 29/11/2011 16:12, wang shuiying wrote:

Hello,

I want to simulate a laser scanner using an osg camera. In the fragment
shader, color information is converted into position information, which
is then read back and used to draw points in a view. But the points are
not very accurate, as shown in the attached picture, which is composed
from 11 cameras. The important code is:

osg::Program * program = new osg::Program();
program->addShader(osg::Shader::readShaderFile(osg::Shader::FRAGMENT,
fragmentShaderPath));
program->addShader(osg::Shader::readShaderFile(osg::Shader::VERTEX,
vertexShaderPath));

osg::Image * image = new osg::Image();
image->allocateImage(xRes, yRes, 1, GL_RGBA, GL_FLOAT);

osg::ref_ptr<osg::CameraNode> camera(new osg::CameraNode());
camera->setComputeNearFarMode(osg::Camera::DO_NOT_COMPUTE_NEAR_FAR);
camera->setViewport(0, 0, xRes, yRes);
camera->setClearColor(osg::Vec4(1000.0f, 1000.0f, 1000.0f, 1000.0f));
camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
camera->setProjectionMatrixAsFrustum(-tan( viewYaw), tan(viewYaw),
-tan(viewPitch), tan(viewPitch), zNear,zFar);
camera->setViewMatrix(osg::Matrix::lookAt(osg::Vec3d(0, 0, 1.0f),
osg::Vec3d(-10.0f, 0.0f, 0), osg::Vec3d(0, 0, 1)));
camera->setRenderOrder(osg::CameraNode::PRE_RENDER);
camera->setRenderTargetImplementation(renderTargetImplementation);
camera->attach(osg::CameraNode::COLOR_BUFFER, image);
camera->getOrCreateStateSet()->setAttribute(program,
osg::StateAttribute::ON);

In the vertex shader, I write:

varying float x, y, z; // assumed varyings; the declarations were elided in the post
void main()
{
    vec4 eye = gl_ModelViewMatrix * gl_Vertex; // the "- vec4(0.0)" term was a no-op
    x = eye.x;
    y = eye.y;
    z = eye.z;
    gl_Position = ftransform();
}

In the fragment shader, I write:

gl_FragColor = vec4(abs(z/100),abs(x/17),abs(y/4),0.2);

When I read back the position information, I multiply the values by the
corresponding factors.

However, the points are not very accurate. It seems that the fragment
shader does not compute each point individually, but assigns the same
value to a whole group of points.

Can anybody give some advice?

Thanks in advance!

Shuiying









[osg-users] Help: Data accuracy from Frame buffer is not very good

2011-11-29 Thread wang shuiying

Hello,

I want to simulate a laser scanner using an osg camera. In the fragment
shader, color information is converted into position information, which
is then read back and used to draw points in a view. But the points are
not very accurate, as shown in the attached picture, which is composed
from 11 cameras. The important code is:


osg::Program * program = new osg::Program();
program->addShader(osg::Shader::readShaderFile(osg::Shader::FRAGMENT,
fragmentShaderPath));
program->addShader(osg::Shader::readShaderFile(osg::Shader::VERTEX,
vertexShaderPath));

osg::Image * image = new osg::Image();
image->allocateImage(xRes, yRes, 1, GL_RGBA, GL_FLOAT);

osg::ref_ptr<osg::CameraNode> camera(new osg::CameraNode());
camera->setComputeNearFarMode(osg::Camera::DO_NOT_COMPUTE_NEAR_FAR);
camera->setViewport(0, 0, xRes, yRes);
camera->setClearColor(osg::Vec4(1000.0f, 1000.0f, 1000.0f, 1000.0f));
camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
camera->setProjectionMatrixAsFrustum(-tan(viewYaw), tan(viewYaw),
-tan(viewPitch), tan(viewPitch), zNear, zFar);
camera->setViewMatrix(osg::Matrix::lookAt(osg::Vec3d(0, 0, 1.0f),
osg::Vec3d(-10.0f, 0.0f, 0), osg::Vec3d(0, 0, 1)));
camera->setRenderOrder(osg::CameraNode::PRE_RENDER);
camera->setRenderTargetImplementation(renderTargetImplementation);
camera->attach(osg::CameraNode::COLOR_BUFFER, image);
camera->getOrCreateStateSet()->setAttribute(program,
osg::StateAttribute::ON);


In the vertex shader, I write:

varying float x, y, z; // assumed varyings; the declarations were elided in the post
void main()
{
    vec4 eye = gl_ModelViewMatrix * gl_Vertex; // the "- vec4(0.0)" term was a no-op
    x = eye.x;
    y = eye.y;
    z = eye.z;
    gl_Position = ftransform();
}

In the fragment shader, I write:

gl_FragColor = vec4(abs(z/100),abs(x/17),abs(y/4),0.2);

When I read back the position information, I multiply the values by the 
corresponding factors.

However, the points are not very accurate. It seems that the fragment 
shader does not compute each point individually, but assigns the same 
value to a whole group of points.


Can anybody give some advice?

Thanks in advance!

Shuiying
