Hi All,
I have a very simple query: I have a 3D scene that I want to render as a monochrome image. In OpenGL I used to achieve this as follows:
1. Render the scene in RGB color.
2. Use the GL_LUMINANCE pixel format with glReadPixels to read back the monochrome image.
3. Draw this buffer back on screen using glDrawPixels.
Do I have to follow the same route in OSG, or is there a mechanism by which I can render the luminance component directly?
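
For reference, this is roughly how I imagine wrapping the glReadPixels route in OSG, using a final-draw callback on the viewer's master camera. The class name GrabLuminanceCallback and the fixed width/height are placeholders of my own, not an existing OSG construct, so please correct me if there is a better way:

#include <osg/Camera>
#include <osg/Image>
#include <osgViewer/Viewer>

// Reads the framebuffer back as single-channel luminance after the camera
// has finished drawing, mirroring the glReadPixels step above.
class GrabLuminanceCallback : public osg::Camera::DrawCallback
{
public:
    GrabLuminanceCallback(int width, int height)
        : _image(new osg::Image), _width(width), _height(height) {}

    virtual void operator()(osg::RenderInfo& /*renderInfo*/) const
    {
        // The driver converts the RGB framebuffer to luminance, just as
        // glReadPixels(..., GL_LUMINANCE, GL_UNSIGNED_BYTE, ...) did.
        _image->readPixels(0, 0, _width, _height, GL_LUMINANCE, GL_UNSIGNED_BYTE);
    }

    osg::Image* getImage() const { return _image.get(); }

private:
    osg::ref_ptr<osg::Image> _image;
    int _width, _height;
};

// Usage:
//   viewer.getCamera()->setFinalDrawCallback(
//       new GrabLuminanceCallback(windowWidth, windowHeight));

The captured image then still has to be put back on screen somehow (a textured quad, or osgDB::writeImageFile for inspection), so this is really the same three-step route; I am hoping there is something more direct.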
Another question: is there any construct in OSG to model the defocusing / blurring effect of a camera, or do we have to apply the usual image-processing techniques (e.g. a Gaussian filter on the captured image) to model this?
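
To make that question concrete, this is the kind of image-space treatment I was thinking of: render the scene into a texture with a pre-render osg::Camera (that setup is omitted here), then display the texture on a full-screen quad whose fragment shader applies a small Gaussian kernel. The names blurFragSource, createBlurQuad, sceneTex and texelSize are placeholders of my own:

#include <osg/Geometry>
#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>
#include <osg/Texture2D>
#include <osg/Uniform>
#include <osg/Vec2>
#include <osg/Vec3>

// 3x3 Gaussian kernel (1 2 1 / 2 4 2 / 1 2 1) / 16 applied in image space.
static const char* blurFragSource =
    "uniform sampler2D sceneTex;\n"
    "uniform vec2 texelSize;\n"
    "void main()\n"
    "{\n"
    "    vec4 sum = vec4(0.0);\n"
    "    for (int dy = -1; dy <= 1; ++dy)\n"
    "        for (int dx = -1; dx <= 1; ++dx)\n"
    "        {\n"
    "            float w = (dx == 0 ? 2.0 : 1.0) * (dy == 0 ? 2.0 : 1.0);\n"
    "            sum += w * texture2D(sceneTex,\n"
    "                gl_TexCoord[0].st + vec2(float(dx), float(dy)) * texelSize);\n"
    "        }\n"
    "    gl_FragColor = sum / 16.0;\n"
    "}\n";

// Builds a unit quad textured with the captured scene and blurred in the
// fragment shader. sceneTexture must already contain the rendered scene.
osg::Geometry* createBlurQuad(osg::Texture2D* sceneTexture, int texWidth, int texHeight)
{
    osg::Geometry* quad = osg::createTexturedQuadGeometry(
        osg::Vec3(0.0f, 0.0f, 0.0f),
        osg::Vec3(1.0f, 0.0f, 0.0f),
        osg::Vec3(0.0f, 1.0f, 0.0f));

    osg::StateSet* ss = quad->getOrCreateStateSet();
    ss->setTextureAttributeAndModes(0, sceneTexture, osg::StateAttribute::ON);

    // Attach the Gaussian-blur fragment shader.
    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, blurFragSource));
    ss->setAttributeAndModes(program, osg::StateAttribute::ON);

    ss->addUniform(new osg::Uniform("sceneTex", 0));
    ss->addUniform(new osg::Uniform("texelSize",
        osg::Vec2(1.0f / texWidth, 1.0f / texHeight)));

    return quad;
}

The missing piece would be the pre-render camera attached to sceneTexture via camera->attach(osg::Camera::COLOR_BUFFER, texture), which I have left out to keep the sketch short. If OSG already has a ready-made construct for this, I would rather use that.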
I am sorry if the queries are too simplistic, but OSG is so vast that it is difficult to learn everything in a short time. I am still a novice, and at the same time I don't want to reimplement something that already exists.
Regards
Harash