[osg-users] Measure Render-Time prior to SwapBuffers with Multithreaded Rendering

2016-10-25 Thread Philipp Meyer
Hi,

I'm trying to measure the render time of my application. The render time is 
defined as the time between the start of a new frame (begin of frame loop) and 
the time where all operations on CPU and GPU for that frame have finished. 

However, I want to exclude swapBuffers from the measurement because I can not 
turn off VSYNC on my test system and swapBuffers will block until the VSYNC 
signal arrives, invalidating my measurements.

I have already successfully implemented this measurement by using a custom 
SwapCallback and attaching it to my graphicsContext. Within the SwapCallback, I 
first issue a call to glFinish() to make sure all GPU operations are finished, 
then measure the time and then call swapBuffers afterwards.
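In code, the callback is essentially this (a simplified sketch, not copied verbatim from my sources; the timestamp handling is reduced to one placeholder member):

Code:

#include <osg/GraphicsContext>
#include <osg/Timer>
#include <osg/GL>

struct TimedSwapCallback : public osg::GraphicsContext::SwapCallback
{
    virtual void swapBuffersImplementation(osg::GraphicsContext* gc)
    {
        glFinish();                                      // wait until all GPU work of this frame is done
        frameEndTick = osg::Timer::instance()->tick();   // "render time" endpoint, before vsync
        gc->swapBuffersImplementation();                 // the real swap, which may block on vsync
    }
    osg::Timer_t frameEndTick;
};

// attached once, after the viewer has been realized:
graphicsContext->setSwapCallback(new TimedSwapCallback);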

This works fine as long as the threading mode of the viewer is set to 
SingleThreaded. However, I'm using multiple cameras viewing the same scene from 
different angles, so for maximum performance I've set the threading mode to 
ThreadPerCamera. Now I'm getting measurements that no longer make sense (the 
duration is way too short), and I'm uncertain what actually causes this or how 
to fix it.

How do I best go about measuring this time?
In pseudocode, it would look something like this:

for (;;)
{
    tp1 = timepoint();
    // OSG renders the scene (multithreaded)
    synchronizeWithOSGThreads();   // <-- the missing piece: wait for all OSG threads
    glFinish();
    tp2 = timepoint();
    renderTime = tp2 - tp1;
    swapBuffers();
}

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=69155#69155







[osg-users] Rendering Textured Quad no longer working after moving to different View

2016-09-12 Thread Philipp Meyer
Hi,

I have an annoying little issue here and fail to understand why it happens or 
what goes wrong.

I'm using a shader to compute various output textures, some for direct display, 
others for post processing.
To be able to easily display the rendered textures, I created a little helper 
class which derives from osg::Camera and internally uses an orthographic 
projection and a quad to render the texture to the screen.
(Source code below.) That way, I can simply add the TextureDisplay to any scene 
graph to view my rendered texture.

Code:

/*
 * TextureView.cpp
 *
 *  Created on: Jun 8, 2016
 *  Author: ubuntu
 */

#include "TextureView.h"

#include "osgHelper.h"
#include "MDRTErrorHandling.h"

#include 

namespace MDRT {

TextureView::TextureView() {
setViewMatrix(osg::Matrix::identity());
setProjectionMatrix(osg::Matrix::ortho2D(0, 1, 0, 1));
setClearColor(osg::Vec4(1, 0, 0, 1));

//  NOTE: the template arguments in this file were stripped by the forum archive;
//  the types below are reconstructed from context.
//  auto mt = osgHelper::make_osgref<osg::MatrixTransform>();
//  mt->setMatrix(osg::Matrix::identity());
//  mt->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
//  addChild(mt);

auto geode = osgHelper::make_osgref<osg::Geode>();
geometry = osgHelper::make_osgref<osg::Geometry>();
geode->addDrawable(geometry);
addChild(geode);

auto quadVertices = osgHelper::make_osgref<osg::Vec3Array>();
quadVertices->push_back(osg::Vec3(0, 0, 0));
quadVertices->push_back(osg::Vec3(1, 0, 0));
quadVertices->push_back(osg::Vec3(1, 1, 0));
quadVertices->push_back(osg::Vec3(0, 1, 0));
geometry->setVertexArray(quadVertices);

auto quadPrimitiveSet = osgHelper::make_osgref<osg::DrawElementsUInt>(
osg::PrimitiveSet::QUADS, 0);
quadPrimitiveSet->push_back(0);
quadPrimitiveSet->push_back(1);
quadPrimitiveSet->push_back(2);
quadPrimitiveSet->push_back(3);
geometry->addPrimitiveSet(quadPrimitiveSet);

auto texCoords = osgHelper::make_osgref<osg::Vec2Array>();
texCoords->push_back(osg::Vec2(0, 0));
texCoords->push_back(osg::Vec2(1, 0));
texCoords->push_back(osg::Vec2(1, 1));
texCoords->push_back(osg::Vec2(0, 1));
geometry->setTexCoordArray(0, texCoords, osg::Array::BIND_PER_VERTEX);

auto ss = geode->getOrCreateStateSet();

auto program = osgHelper::make_osgref<osg::Program>();
auto vertShader = osg::Shader::readShaderFile(osg::Shader::Type::VERTEX,
"res/CustomShaders/FrameHeader.vert");
auto fragShader = 
osg::Shader::readShaderFile(osg::Shader::Type::FRAGMENT,
"res/CustomShaders/FrameHeader.frag");

if (!vertShader || !fragShader) {
throw MDRTExceptionBase("error loading textureview shaders");
}

bool ok = true;

ok = ok && program->addShader(vertShader);
ok = ok && program->addShader(fragShader);

if (!ok) {
throw MDRTExceptionBase(
"error adding textureview shaders to program obj");
}

ss->setAttributeAndModes(program);
ss->getOrCreateUniform("tex", osg::Uniform::Type::SAMPLER_2D_RECT, 0);

}

TextureView::~TextureView() {
// TODO Auto-generated destructor stub
}

void TextureView::setTexture(osg::Texture* tex) {
auto texRec = dynamic_cast<osg::TextureRectangle*>(tex);
assert(texRec);

float texWidth = static_cast<float>(texRec->getImage()->s());
float texHeight = static_cast<float>(texRec->getImage()->t());

auto texCoords = osgHelper::make_osgref<osg::Vec2Array>();
texCoords->push_back(osg::Vec2(0, 0));
texCoords->push_back(osg::Vec2(1 * texWidth, 0));
texCoords->push_back(osg::Vec2(1 * texWidth, 1 * texHeight));
texCoords->push_back(osg::Vec2(0, 1 * texHeight));
geometry->setTexCoordArray(0, texCoords, osg::Array::BIND_PER_VERTEX);
geometry->getOrCreateStateSet()->setTextureAttributeAndModes(0, tex);
geometry->dirtyDisplayList();

}

} /* namespace MDRT */




This works fine and I have used it for several months. Today, I decided that 
it would be nicer to view the textures in a separate window, so I switched to a 
CompositeViewer and created a new View and window for the textures. After that, 
I add my TextureViews to the view's main camera.



Code:
//create texture views for radar shader output
constexpr int radarCameraTextureViewWidth = 800;
constexpr int radarCameraTextureViewHeight = radarCameraTextureViewWidth / 4;
constexpr int radarTextureViewSize = radarCameraTextureViewHeight;

// type reconstructed from context; the template argument was stripped by the archive
auto radarCameraTextureView = osgHelper::make_osgref<osgViewer::View>();
radarCameraTextureView->setCameraManipulator(nullptr);

radarCameraTextureView->getCamera()->setGraphicsContext(
masterCamera->getGraphicsContext());
radarCameraTextureView->getCamera()->setViewMatrix(

Re: [osg-users] Custom Graphics Context not applied?

2016-08-30 Thread Philipp Meyer
Hi,

Ah nice, that's even better. I will experiment with it.

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68489#68489







Re: [osg-users] Custom Graphics Context not applied?

2016-08-30 Thread Philipp Meyer
Hi,

I want to measure the frame time without waiting for vsync. On my test system I 
have no way to turn vsync off, but I still need to benchmark.

So the plan was to call glFinish() after the rendering traversal is over, but 
BEFORE swapBuffers() (because that's the call that actually blocks until the 
vsync signal arrives). That way, in theory, I should be able to obtain frame 
times without vsync.

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68487#68487







Re: [osg-users] Custom Graphics Context not applied?

2016-08-30 Thread Philipp Meyer
How do I provide a graphics context for a viewer? I see there is a method 
returning all graphics contexts, but I can't find any method to set the 
context, and the constructor of osgViewer doesn't take a graphics context either.

Maybe there is an easier way to accomplish what I need? I want to store a 
timestamp BEFORE the call to swapBuffers, but after the rendering traversal.
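
From looking at the osgViewer examples, I would guess it works along these lines (untested sketch; the traits values are arbitrary):

Code:

osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->x = 0; traits->y = 0;
traits->width = 1280; traits->height = 720;
traits->windowDecoration = true;
traits->doubleBuffer = true;

osg::ref_ptr<osg::GraphicsContext> gc =
    osg::GraphicsContext::createGraphicsContext(traits.get());

// the context is set per camera, not on the viewer itself:
viewer.getCamera()->setGraphicsContext(gc.get());
viewer.getCamera()->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));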

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68485#68485







[osg-users] Custom Graphics Context not applied?

2016-08-30 Thread Philipp Meyer
Hi,

I'm trying to create a custom graphics context to replace the default X11 
Graphics context (I need to add some additional code).

To do this, I created a new "CustomGraphicsContextX11" class, deriving from 
"PixelBufferX11". I reimplemented the virtual methods that I need to adjust.

Then, I apply the custom graphics context to my master camera and start up the 
viewer.

However, to my own surprise the swapBuffersImplementation() method of my custom 
context is never executed. Same goes for "realizeImplementation()".

Is OSG still using another graphics context behind the scenes for some reason? 
The method described above worked fine for me on another system without the X11 
windowing system.
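
For reference, the skeleton of the custom context looks roughly like this (heavily trimmed sketch; the actual additional code is omitted and the header path is from memory):

Code:

#include <osgViewer/api/X11/PixelBufferX11>

class CustomGraphicsContextX11 : public osgViewer::PixelBufferX11
{
public:
    CustomGraphicsContextX11(osg::GraphicsContext::Traits* traits)
        : osgViewer::PixelBufferX11(traits) {}

    virtual bool realizeImplementation()
    {
        // ... additional code ...
        return osgViewer::PixelBufferX11::realizeImplementation();
    }

    virtual void swapBuffersImplementation()
    {
        // ... additional code ...
        osgViewer::PixelBufferX11::swapBuffersImplementation();
    }
};

// applied to the master camera before realizing the viewer:
viewer.getCamera()->setGraphicsContext(new CustomGraphicsContextX11(traits.get()));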

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68483#68483







Re: [osg-users] Strange problem with QT, OSG and osgdb_dae.so

2016-08-10 Thread Philipp Meyer
Hi,

fixed it by calling


Code:
std::locale::global(std::locale());



Just after QT/GTK initialization.
Thanks guys.

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68342#68342







Re: [osg-users] Strange problem with QT, OSG and osgdb_dae.so

2016-08-10 Thread Philipp Meyer
Hi,

I've made further tests. It does indeed seem to be a locale issue. When replacing 
all "." with "," for floating point numbers, I get an almost correct result.

So I guess now I need to figure out how to enforce a certain locale for the 
loader?

Thank you!

Cheers,
Philipp

PS: My system language is English, but I'm from Germany. So it probably somehow 
detected a German locale, which messes up the float reads from the Collada file.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68336#68336







Re: [osg-users] Strange problem with QT, OSG and osgdb_dae.so

2016-08-10 Thread Philipp Meyer

robertosfield wrote:
> Hi Philipp,
> 
> Is there any chance that the COLLADA_DOM assumes a certain locale
> while Qt is changing it?
> 
> Robert.


Hi Robert,

Interesting idea. However, the loader seems to parse texture paths correctly. 
Wouldn't that also be messed up if the issue were caused by locales?

Also, this issue just got even weirder:

Instead of using QT, I tried using GTK with osgviewerGTK. I got it to work, but 
I'm facing the exact same issue: the model fails to load vertices properly.

When keeping the entire source code the same and just commenting out 
"gtk_init()", the loader works again.

I fail to understand this.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68333#68333







[osg-users] Strange problem with QT, OSG and osgdb_dae.so

2016-08-09 Thread Philipp Meyer
Hi,

I'm trying to build a Collada model viewer using OSG, the osgdb_dae plugin and 
QT for the user interface.

For integrating osg into QT, I have followed the example implementation here:
https://github.com/openscenegraph/OpenSceneGraph/blob/master/examples/osgviewerQt/osgviewerQt.cpp

I can see the graphics window, I'm able to change the clear color of it and I 
also successfully created some basic shapes and used shaders.

However, when loading a Collada file using osgDB and the osgdb_dae plugin, I get 
very weird results. For example, when loading a car model, I only see a flat 
rectangle. I have written my whole scene graph into a file to debug it, and it 
seems like it loaded the model correctly; however, all vertices are set to 
(0, 0, 0) instead of their proper values.

To make sure I did not build the library incorrectly I also created another 
program without QT and the exact same source code for loading the model, and it 
works perfectly.

I have experimented further, and it appears that the call to

Code:

QApplication(argc, argv); 



prior to loading the Collada model is causing the issue. When removing the QT 
initialization, everything works fine.

So, it seems like QT is messing with some OpenGL stuff in the background? I 
still don't really get why that would mess up the Collada loader though.
Does anyone have an idea what the issue could be?

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68323#68323







Re: [osg-users] Explicitly synchronize all viewer cameras?

2016-07-06 Thread Philipp Meyer
Hi,

thank you for your input. I've resolved the problem by marking a couple more 
StateSets as dynamic.

Maybe it would be a good idea to include the threading + dynamic nodes hint in 
the official documentation? Personally I wasn't aware that data variance 
settings influence threading behavior. I was under the impression that they were 
mainly for the osgOptimizer class.
Adding it to the documentation may save other beginners some headaches.
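
For anyone finding this later, the fix itself is a one-liner per object (the variable name is just a placeholder for whichever StateSets/Drawables are modified during the frame):

Code:

// any StateSet or Drawable that is modified while the draw threads may still be
// running needs to be flagged as DYNAMIC:
myStateSet->setDataVariance(osg::Object::DYNAMIC);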

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68022#68022







Re: [osg-users] Get current Billboard ModelView matrix?

2016-07-05 Thread Philipp Meyer
Hi,

I want to track the delta movement of every fragment in my scene. Only tracking 
on a per-object basis would not be enough, because an object may, for example, 
rotate (so that one side of the object approaches the camera while the other 
side doesn't) or it may have moving parts.

You are talking about writing data to a texture. I would be very interested in 
how that technically works. I'm still a beginner with shaders, but I thought that 
the fragment shader can only write data to a very specific point on a texture 
(where the current fragment would appear), and depending on the alpha blending 
and other settings the final pixel color is determined by the hardware. How 
would I go about writing actual information to a texture via a shader, as you 
suggest?

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68005#68005







Re: [osg-users] Explicitly synchronize all viewer cameras?

2016-07-05 Thread Philipp Meyer
Hi,

thanks for your (as always) really quick response!
I'm trying to achieve various things. As a simple example, I want to measure 
the time it takes to render 1 frame (CPU and GPU time).
For that, I used something along the lines of:


> //..
> start = highPrecisionTimer.now();
> viewer->advance();
>   viewer->eventTraversal();
>   viewer->updateTraversal();
>   viewer->renderingTraversals();
> 
> //at this point, I would need some sort of barrier (like joining all threads?)
> 
> glFinish() //make sure GPU is done, too.
> end = highPrecisionTimer.now();
> //...


But since I switched to ThreadPerCamera, to my understanding, this will no 
longer work. How would I go about something simple like this? Is there any 
example code I could look at regarding the graphicsOperation and barrier?
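
To make the question more concrete, the kind of thing I am after would look something like this (untested sketch, and I am not sure how it interacts with ThreadPerCamera):

Code:

#include <osg/Camera>
#include <osg/Timer>
#include <osg/GL>

// runs in the graphics thread after the camera has finished drawing; glFinish()
// makes sure the GPU work of this camera is done before the timestamp is taken
struct GpuFinishCallback : public osg::Camera::DrawCallback
{
    virtual void operator()(osg::RenderInfo& /*renderInfo*/) const
    {
        glFinish();
        tick = osg::Timer::instance()->tick();
    }
    mutable osg::Timer_t tick;
};

// attached per camera; with several cameras the latest tick of all callbacks
// would mark the end of the frame's GPU work:
camera->setFinalDrawCallback(new GpuFinishCallback);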

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68003#68003







[osg-users] Explicitly synchronize all viewer cameras?

2016-07-04 Thread Philipp Meyer
Hi,

I'm using multiple cameras and want them to render the scene in parallel to 
increase GPU load. For that, I set the threading model of my Viewer to 
"ThreadPerCamera".

That all works fine; however, I'm facing the issue that the viewer seems to 
begin the next frame before the current frame is completed (or, in other 
words, viewer.renderingTraversals() does not seem to block long enough for my 
needs).

I cannot have that happen because of some additional logic I'm performing in 
my program's main loop. Is there any way I can wait until all cameras have 
completed their frame? I've messed with the viewer's end barrier and frame 
policies with no success. Some older tutorials also mention a "sync()" method; 
however, it does not seem to exist any longer. I also can't find any other 
methods within the viewer related to synchronization.

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67983#67983







Re: [osg-users] Get current Billboard ModelView matrix?

2016-07-01 Thread Philipp Meyer
Hi,


> If you only need the movement towards/away from you, you can use the
> previous frames depth and perform difference computation based on the
> difference of the linear depth. 


I'm not exactly sure what you mean here. Are you talking about rendering the 
depth of each frame to a texture and comparing the values with the next frame 
to compute the delta?

If so, wouldn't that be a pixel-based approach again, suffering the same problems 
I mentioned earlier in my response to mp3butcher?


> I'd simply pass the view matrix inverse (pre frame) and calculate the
> modelmatrix via inverse_view * model_view_matrix.


That works, but only gives me the current modelMatrix. I need the modelMatrix 
of the previous frame though. Because objects can move in my scene, I cannot 
calculate the old vertex position by simply using the current modelMatrix with 
the old viewMatrix.

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67951#67951







Re: [osg-users] Get current Billboard ModelView matrix?

2016-07-01 Thread Philipp Meyer
Hi,

I'm using shaders to do some pre-computations for a real-time radar simulation 
(some further processing is done with CUDA). The delta distance is required for 
the calculation of the Doppler effect.

So yes, I guess I'm using shaders in a weird way. Unfortunately it would be 
very difficult to migrate to CUDA, because that would require me to do all 
graphics calculations myself in the CUDA kernel (Z-test etc.).

So, to get back to my original question, is there any way to get a current 
Billboard's modelMatrix? How do I use billboard->computeMatrix() properly?

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67946#67946







Re: [osg-users] Get current Billboard ModelView matrix?

2016-06-30 Thread Philipp Meyer
Hi Robert,

unfortunately some objects move in my scene, so it's not enough to only hold the 
old view matrix.

The camera AND any object can move.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67932#67932







Re: [osg-users] Get current Billboard ModelView matrix?

2016-06-30 Thread Philipp Meyer
Hi,

that was my first idea as well, however, then I realized that this approach 
does not work.

The problem is that there is no way to know if a certain pixel still shows the 
same fragment. For example, if the camera view angle changes by 180 degrees in 
one frame, the pixel at the 0,0 texture coordinate would no longer refer to the 
same fragment. Therefore, the program would compute the delta between two 
unrelated fragments, yielding a wrong result.

However, with my approach, the program would calculate the old fragment 
position by using the old view matrix (pre 180 degree change) and the model 
matrix (unchanged in this example), compute the distance to the camera and 
compare it to the old distance, resulting in 0. 

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67929#67929







Re: [osg-users] Get current Billboard ModelView matrix?

2016-06-30 Thread Philipp Meyer
My goal is to create a fragment shader that computes the delta distance to each 
fragment compared to the previous frame.

In the shader, I can easily calculate the current distance to a fragment by 
using built in functions, the problem is that I also need access to the 
fragment position in the PREVIOUS frame in order to compute the delta and 
generate the output.

As the vertex and fragment shaders have no native access to data of the 
previous frame, my idea was to use uniform variables to pass the viewMatrix and 
the modelMatrix of the PREVIOUS frame to the vertex shader.

I can easily retrieve the old viewMatrix by setting a uniform variable in the 
program's main render loop after the current frame has completed.
However, it is very difficult to obtain the old modelMatrix, because the 
modelMatrix is unique to every primitive in the scene (or, in other words, there 
are many modelView matrices and not just one).

So, the only solution I found was to attach a uniform to every transform node 
in my scene graph and store the previous modelMatrix in it.
This is very CPU-intensive but works okay; unfortunately, I do not yet take 
Billboards into account, and this is why I need to calculate the modelMatrix of 
a Billboard.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67926#67926







[osg-users] Get current Billboard ModelView matrix?

2016-06-29 Thread Philipp Meyer
Hi,

is it possible to retrieve the current modelView matrix of a billboard node?
For matrix transforms, one can simply use ->getMatrix() and multiply that with 
the current view matrix.

I noticed that there is a method called "computeMatrix", but I'm a little bit 
confused about how to use it. What exactly are the "pos_local" and "eye_local" 
parameters?

My goal is to retrieve the final modelView matrix of the billboard node 
(including all parent transforms).

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67896#67896







Re: [osg-users] Render to osg::TextureRectangle fails?

2016-06-28 Thread Philipp Meyer
Never mind, I forgot to change my texture sampler from sampler2D to 
sampler2DRect...

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67874#67874







[osg-users] Render to osg::TextureRectangle fails?

2016-06-28 Thread Philipp Meyer
Hi,

I'm using several RTT cameras in my scene. So far, I have only rendered to 
osg::Texture2D and that worked fine; however, I now need to render to NPOT 
(non-power-of-two) textures, so that will no longer work.

For that, I changed my render target to an osg::TextureRectangle and left 
everything else pretty much unchanged. I'm displaying the texture using a 
custom quad.

Unfortunately I only see a black texture when using TextureRectangle instead of 
Texture2D. I checked the osgprerender example and noticed that I need to use 
un-normalized texture coordinates for my quad in this case; however, to my own 
surprise, that didn't fix the issue either.

Does anyone have an idea what's wrong?

Some relevant code bits:

Texture creation:


Code:
// NOTE: the template arguments in this function were stripped by the forum
// archive; the types below are reconstructed from context.
osg::ref_ptr<osg::Texture> osgHelper::createDefaultRttTexture(int width,
int height, GLenum sourceFormat, GLenum sourceType,
GLint internalFormat) {

osg::ref_ptr<osg::Texture> tex;

auto texImg = osgHelper::make_osgref<osg::Image>();
texImg->setInternalTextureFormat(internalFormat);
texImg->allocateImage(width, height, 1, sourceFormat, sourceType);

if (isPowerOfTwo(width) && isPowerOfTwo(height)) {
auto specTex = make_osgref<osg::Texture2D>();
specTex->setTextureSize(width, height);
specTex->setImage(texImg);

tex = specTex;
} else {
auto specTex = make_osgref<osg::TextureRectangle>();
specTex->setTextureSize(width, height);
specTex->setImage(texImg);

tex = specTex;
}

tex->setInternalFormat(internalFormat);
tex->setSourceFormat(sourceFormat);
tex->setSourceType(sourceType);
tex->setFilter(osg::Texture::FilterParameter::MIN_FILTER,
osg::Texture::FilterMode::NEAREST);
tex->setFilter(osg::Texture::FilterParameter::MAG_FILTER,
osg::Texture::FilterMode::NEAREST);
tex->setDataVariance(osg::Object::DYNAMIC);
tex->setMaxAnisotropy(0);

return tex;
}



RTT Camera setup:


Code:
RTTCamera::RTTCamera(osg::Texture *dest, osg::Viewport *vp) :
osg::Camera() {

// set clear the color and depth buffer
this->setClearColor(osg::Vec4(1, 0, 0, 1));

//matrices get set properly later.
setProjectionMatrix(osg::Matrix::identity());
setViewMatrix(osg::Matrix::identity());

setComputeNearFarMode(ComputeNearFarMode::DO_NOT_COMPUTE_NEAR_FAR);

// set viewport
this->setViewport(vp);

// set the camera to render before the main camera.
this->setRenderOrder(osg::Camera::RenderOrder::PRE_RENDER);

// tell the camera to use OpenGL frame buffer object where supported.

this->setRenderTargetImplementation(osg::Camera::RenderTargetImplementation::FRAME_BUFFER_OBJECT);

// attach the texture and use it as the color buffer.
this->attach(osg::Camera::COLOR_BUFFER0, dest);

}



Texture display:

Code:

TextureView::TextureView() {
setViewMatrix(osg::Matrix::identity());
setProjectionMatrix(osg::Matrix::ortho2D(0, 1, 0, 1));
setClearColor(osg::Vec4(1, 0, 0, 1));

// NOTE: the template arguments below are reconstructed; they were stripped by the archive.
auto mt = osgHelper::make_osgref<osg::MatrixTransform>();
mt->setMatrix(osg::Matrix::identity());
mt->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
addChild(mt);

auto geode = osgHelper::make_osgref<osg::Geode>();
geometry = osgHelper::make_osgref<osg::Geometry>();
geode->addDrawable(geometry);
mt->addChild(geode);

auto quadVertices = osgHelper::make_osgref<osg::Vec3Array>();
quadVertices->push_back(osg::Vec3(0, 0, 0));
quadVertices->push_back(osg::Vec3(1, 0, 0));
quadVertices->push_back(osg::Vec3(1, 1, 0));
quadVertices->push_back(osg::Vec3(0, 1, 0));
geometry->setVertexArray(quadVertices);

auto quadPrimitiveSet = osgHelper::make_osgref<osg::DrawElementsUInt>(
osg::PrimitiveSet::QUADS, 0);
quadPrimitiveSet->push_back(0);
quadPrimitiveSet->push_back(1);
quadPrimitiveSet->push_back(2);
quadPrimitiveSet->push_back(3);
geometry->addPrimitiveSet(quadPrimitiveSet);

auto texCoords = osgHelper::make_osgref<osg::Vec2Array>();
texCoords->push_back(osg::Vec2(0, 0));
texCoords->push_back(osg::Vec2(1, 0));
texCoords->push_back(osg::Vec2(1, 1));
texCoords->push_back(osg::Vec2(0, 1));
geometry->setTexCoordArray(0, texCoords, osg::Array::BIND_PER_VERTEX);

auto ss = geode->getOrCreateStateSet();

auto program = osgHelper::make_osgref<osg::Program>();
auto vertShader = osg::Shader::readShaderFile(osg::Shader::Type::VERTEX,
"res/CustomShaders/FrameHeader.vert");
auto fragShader = 
osg::Shader::readShaderFile(osg::Shader::Type::FRAGMENT,
"res/CustomShaders/FrameHeader.frag");

if (!vertShader || !fragShader) {
throw MDRTExceptionBase("error loading textureview shaders");
   

Re: [osg-users] Pass an osg::Texture2D to CUDA driver api

2016-06-27 Thread Philipp Meyer
Hi,

setting useDisplayLists to false indeed fixed both issues. Thank you very much.

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67835#67835







Re: [osg-users] Pass an osg::Texture2D to CUDA driver api

2016-06-24 Thread Philipp Meyer
Hi Robert,

thanks for the input!
When thinking about this, I really want to find an approach other than the "main 
loop", because I need to execute the CUDA code at a very specific point in time 
(after an RTT camera texture has been written, but prior to rendering another 
texture).

Therefore, I think I cannot use a camera postDrawCallback, because that doesn't 
solve the problem of displaying the CUDA output after it has been produced in 
the same frame.

My idea was to create a "fake" drawable node and use a DrawCallback together 
with render bins to make my custom CUDA code execute at a very specific time 
during the rendering traversal.

However, I'm having trouble getting that approach to work. If I create a very 
simple scene graph, consisting only of a root node and 10 of my custom drawable 
nodes attached to it, each with a different render bin number, the execution order 
of my drawCallbacks does not follow the render bin number. Instead, the 
drawCallbacks always execute in the same order in which I added the drawables 
to the root node.

Does anyone know why that happens? From my understanding, the render bin number 
should determine the order of draw (and therefore also drawCallback?) 
operations.
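
For reference, the setup I am describing looks roughly like this (untested sketch; the CUDA call is a placeholder):

Code:

// runs custom (CUDA) work at the point in the draw traversal where this drawable is drawn
struct CudaHookCallback : public osg::Drawable::DrawCallback
{
    virtual void drawImplementation(osg::RenderInfo& renderInfo,
            const osg::Drawable* drawable) const
    {
        runMyCudaKernel();                          // placeholder for the actual CUDA work
        drawable->drawImplementation(renderInfo);   // keep the normal drawing, if any
    }
};

// attach to a dummy drawable and try to order it via its render bin:
osg::ref_ptr<osg::Geometry> hook = new osg::Geometry;
hook->setUseDisplayList(false);                     // so the callback runs every frame
hook->setDrawCallback(new CudaHookCallback);
hook->getOrCreateStateSet()->setRenderBinDetails(10, "RenderBin");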


robertosfield wrote:
> Hi Philipp
> 
> On 15 June 2016 at 14:48, Philipp Meyer <> wrote:
> 
> > figured it out.
> > One needs to use
> > 
> > 
> > Code:
> > viewer->setReleaseContextAtEndOfFrameHint(false);
> > 
> > 
> > 
> > to prevent the context from getting released after a frame is rendered.
> > That way, its resources, like textures, can still be accessed after the 
> > frame completes.
> > 
> 
> I don't have CUDA experience so can't comment on this specifically.
> 
> On the OSG side the setReleaseContextAtEndOfFrameHint() is only useful
> in when you have a single graphics context and are running your
> application SingleThreaded.
> 
> There will be other ways to integrate CUDA rather than via the main
> loop.  You should be able to create a custom GraphicsOperation and
> attach this to a GraphicsWidnow or Camera draw callback to invoke the
> CUDA side from within a thread that has the graphics context current.
> 
> Robert.


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67782#67782







Re: [osg-users] Most efficient way to get the gl_ModelViewMatrix of PREVIOUS frame?

2016-06-23 Thread Philipp Meyer
Hi,

so here is a little update on my current progress. I have a working solution, 
but I'm not 100% happy with it, as it is pretty messy and offers bad performance.

The basic idea is to assign a uniform variable to each and every transform node 
of the scene graph, storing its total modelMatrix so it can be accessed by the 
shader.

1) After building the scene graph, traverse through all nodes. Once a transform 
node is found, create a uniform and attach it to it. Also add an entry into a 
global map variable, linking the created uniform with a list of transform nodes 
(all parents of the current node).
2) Then, with every frame, iterate through the uniform map and calculate the 
current model matrices based on the matrix transform list.

This works, but there are several drawbacks:

1) If the scene graph is modified in any way after building the map, it will 
yield wrong results. So with every change of the scene graph, the map needs to 
be rebuilt.
2) If multiple parents share the same transform node and it is not an identity 
matrix transform (with no effect), the approach won't work at all, because there 
is no unique StateSet for that transform.
3) Somewhat bad performance, lots of CPU load and bottlenecking.

If anyone has a better idea about how to obtain the model matrix of the 
PREVIOUS frame in the vertex shader, please let me know.


Code:
void MDRT::MotionDeskRT::updatePreviousFrameModelMatrices(
const osg::Matrix& viewMatrix) { // parameter name and template arguments below reconstructed; stripped by the archive
if (modelMatrixUniformMap.empty()) {
buildModelMatrixUniformMap(sceneRoot,
std::vector<osg::MatrixTransform*>());
}
}

osg::Matrix modelMatrix;

for (const auto & pair : modelMatrixUniformMap) {
osg::Uniform *mmUniform = pair.first;
const auto& mtList = pair.second;
const size_t mtListLen = mtList.size();

modelMatrix = mtList[0]->getMatrix();
for (size_t i = 1; i < mtListLen; ++i) {
modelMatrix = mtList[i]->getMatrix() * modelMatrix;
}
//modelViewMatrix = modelViewMatrix * viewMatrix;

mmUniform->set(modelMatrix);
}
}

void MDRT::MotionDeskRT::buildModelMatrixUniformMap(osg::Node* root,
const std::vector<osg::MatrixTransform*>& matrixTransforms) {

//all of the below only works if no matrix transforms with a matrix != identity are shared.
//otherwise, all use the same state set, which will make it impossible
//to have a unique uniform storing the individual overall transform of a MT node.
osg::Group *group = dynamic_cast<osg::Group*>(root);
if (!group) {
return; //is leaf and not MT
}

const unsigned int childCount = group->getNumChildren();
osg::MatrixTransform *mt = dynamic_cast<osg::MatrixTransform*>(group);

if (mt && mt->getMatrix() != osg::Matrix::identity()) {
//is matrix transform, update modelview matrix
auto modifiedMatrixTransforms = matrixTransforms;
if (mt->getReferenceFrame()
!= osg::Transform::ReferenceFrame::RELATIVE_RF) 
{
//this MT doesn't use a relative reference frame, and therefore
//ignores all parent MTs when calculating the final model matrix.
modifiedMatrixTransforms.clear();
}
modifiedMatrixTransforms.push_back(mt);

osg::Uniform *mmUniform = 
mt->getOrCreateStateSet()->getOrCreateUniform(
"oldModelMatrix", 
osg::Uniform::Type::FLOAT_MAT4, 1);

modelMatrixUniformMap[mmUniform] = modifiedMatrixTransforms;

for (unsigned int cid = 0; cid < childCount; ++cid) {
buildModelMatrixUniformMap(group->getChild(cid),
modifiedMatrixTransforms);
}

return;
}

for (unsigned int cid = 0; cid < childCount; ++cid) {
buildModelMatrixUniformMap(group->getChild(cid), 
matrixTransforms);
}

}




Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67747#67747







[osg-users] Most efficient way to get the gl_ModelViewMatrix of PREVIOUS frame?

2016-06-21 Thread Philipp Meyer
Hi,

I am currently working on a Shader that is supposed to color fragments 
approaching the camera red and fragments departing the camera green.
So for example, if an object in the scene is traveling towards the camera, it 
should be rendered red, otherwise green.

For that, my basic idea was to compute the distance of each fragment to the 
camera, and then subtract the distance of the same fragment from the previous 
frame. If the result is negative, the fragment would approach the camera and 
therefore be red, otherwise green.

The challenge here is to get the fragment position from the previous frame 
though.
From my understanding, the fragment's position relative to the camera (0,0,0,1) 
is calculated by interpolating
gl_Vertex * gl_ModelViewMatrix, where the ModelViewMatrix is calculated as 
(viewMatrix * modelMatrix).

So, what I would need is something like "gl_ModelViewMatrixPreviousFrame".
While it is relatively easy to pass the previous viewmatrix to the shader by 
setting a uniform on the parent camera in OSG and updating it in the program 
render loop, how would one go about passing the old modelMatrix? The model 
matrix can be different for every primitive in the scene, so wouldn't I need a 
million uniforms for that? How would that even work?
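
For completeness, the view-matrix half that I called "easy" above looks roughly like this in the main loop (sketch; the uniform name is arbitrary):

Code:

osg::StateSet* ss = camera->getOrCreateStateSet();

while (!viewer->done())
{
    viewer->frame();
    // remember the view matrix that was just used, so the shaders of the NEXT
    // frame can read it as the "previous" view matrix
    ss->getOrCreateUniform("previousViewMatrix", osg::Uniform::FLOAT_MAT4)
            ->set(osg::Matrixf(camera->getViewMatrix()));
}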

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67713#67713







Re: [osg-users] Pass an osg::Texture2D to CUDA driver api

2016-06-15 Thread Philipp Meyer
Hi,

figured it out.
One needs to use


Code:
viewer->setReleaseContextAtEndOfFrameHint(false);



to prevent the context from getting released after a frame is rendered.
That way, its resources, like textures, can still be accessed after the frame 
completes.

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67640#67640







[osg-users] Pass an osg::Texture2D to CUDA driver api

2016-06-15 Thread Philipp Meyer
Hi,

I'm currently facing some issues passing a Texture2D created via OSG to the 
CUDA low-level driver API. I'm trying to run a CUDA kernel on a texture after 
calling viewer->renderingTraversals();

As far as I have understood, all that's required is getting the underlying 
OpenGL texture ID and passing it to CUDA for further processing. However, I 
immediately get a segmentation fault when calling cuGraphicsGLRegisterImage 
(note: NOT a CUDA error).
This in itself is already sort of weird, because to my understanding a CUDA 
call should either work or return a proper error code. Anyway...

Using the debugger, I validated the following:

"openGLContextID" is set to 0, which seems correct. Im not sure if I am 
retrieving the context ID in the correct way tough. Im using multiple cameras 
and let OSG manage the graphics context by itself (I never explicitly create 
it).

"texid" is set to 2.
"texid2" is set to 78.

I've also double checked the texture type, getTextureObject()->target() returns 
the same numerical value as GL_TEXTURE_2D, so it can't be that either.

I have tested all my code previously in an OpenGL only program, and it worked 
perfectly.

Does anyone know what's wrong? Am I executing the CUDA stuff at the wrong place 
(after rendering)?

Relevant code (error checking boilerplate removed for readability):


Code:
unsigned int openGLContextID =

viewer->getCamera()->getGraphicsContext()->getState()->getContextID();
GLenum texid = radarShaderOutputTexture->getTextureObject(
openGLContextID)->id();
GLenum texid2 =

cudaOutputTexture->getTextureObject(openGLContextID)->id();

CUgraphicsResource cudaInputTex, cudaOutputTex;
cuGraphicsGLRegisterImage(&cudaInputTex, texid,   // output-pointer args reconstructed; stripped by the archive
GL_TEXTURE_2D, CU_GRAPHICS_REGISTER_FLAGS_READ_ONLY);
cuGraphicsGLRegisterImage(&cudaOutputTex, texid2,
GL_TEXTURE_2D, CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);



Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67637#67637







Re: [osg-users] Render to Texture without clamping values

2016-06-14 Thread Philipp Meyer
Hi,

I was able to figure out the issue.
For everyone wondering, I was missing the following line:

textureImage->setInternalTextureFormat(GL_RGBA16F_ARB);

In other words, one needs to set the format on the image as well as on the 
texture for everything to work properly. Hope this helps someone in the future!
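
Putting both pieces together, the float texture setup then looks roughly like this (a condensed sketch of the code from my original post, with the missing line added):

Code:

radarTexture = new osg::Texture2D;
radarTexture->setInternalFormat(GL_RGBA16F_ARB);        // float format on the texture...
radarTexture->setSourceFormat(GL_RGBA);
radarTexture->setSourceType(GL_FLOAT);

auto textureImage = osgHelper::make_osgref<osg::Image>();
textureImage->setInternalTextureFormat(GL_RGBA16F_ARB); // ...and on the attached image
textureImage->allocateImage(16, 16, 1, GL_RGBA, GL_FLOAT);
radarTexture->setImage(textureImage);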

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67607#67607







Re: [osg-users] Render to Texture without clamping values

2016-06-14 Thread Philipp Meyer
Hi,

I did some more testing and it turns out that I can set a texel to a color with 
values > 1.0 just fine in the C++ code.
When using image->setColor(osg::Vec4(1,2,3,4),x,y,0) before reading it with 
getColor, I can get results > 1.0.

Does that mean that the shader itself is clamping the values somehow? Or does 
it have to do with the internal texture copy from GPU to host memory?

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67600#67600







[osg-users] Render to Texture without clamping values

2016-06-13 Thread Philipp Meyer
Hi,

for my current project I need to do some computations in the fragment shader 
and retrieve the values within my application. For that I am using the render 
to texture feature together with a float texture.

I'm having some trouble reading values > 1.0 though. It seems like the values 
are getting clamped to 0..1, even though I followed the osgprerender HDR setup. 
Besides the code below, I have also tried GL_RGBA32F (ARB and not ARB) for the 
internal texture format, tried double for the image and source type and tried 
using osg::ClampColor to disable clamping for the RTT camera, all without 
success.

When reading the texture, it returns (0.123, 0.5, 1, 1) for every texel.

Code for texture setup:


Code:
radarTexture = new osg::Texture2D;
radarTexture->setInternalFormat(GL_RGBA16F_ARB);
radarTexture->setSourceFormat(GL_RGBA);
radarTexture->setSourceType(GL_FLOAT);

auto textureImage = osgHelper::make_osgref<osg::Image>();  // template argument reconstructed; stripped by the archive
textureImage->allocateImage(16,16,1,GL_RGBA, GL_FLOAT);
//  textureImage->setImage(128, 128, 1, GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE,
//  nullptr, osg::Image::AllocationMode::NO_DELETE);
radarTexture->setImage(textureImage);
radarTexture->setMaxAnisotropy(0);
radarTexture->setWrap(osg::Texture::WRAP_S, 
osg::Texture::CLAMP_TO_EDGE);
radarTexture->setWrap(osg::Texture::WRAP_T, 
osg::Texture::CLAMP_TO_EDGE);
radarTexture->setFilter(osg::Texture::FilterParameter::MIN_FILTER,
osg::Texture::FilterMode::NEAREST);
radarTexture->setFilter(osg::Texture::FilterParameter::MAG_FILTER,
osg::Texture::FilterMode::NEAREST);
radarTexture->setDataVariance(osg::Object::DYNAMIC);



RTT Camera setup (some)

Code:

// set the camera to render before the main camera.
this->setRenderOrder(osg::Camera::PRE_RENDER);

// tell the camera to use OpenGL frame buffer object where supported.
this->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

// attach the texture and use it as the color buffer.
this->attach(osg::Camera::COLOR_BUFFER0, dest->getImage());



GLSL Fragment Shader Code (simplified):


Code:
void main()
{
gl_FragColor = vec4(0.123,0.5,3,4);
}





Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67593#67593







Re: [osg-users] Custom GraphicsContext Segmentation Fault when using Multithreading

2016-04-28 Thread Philipp Meyer
Hi,

so after countless more hours of debugging I have identified the issue.
Within the "setUpEGL" function I already set the EGL context to be current. So 
once "makeCurrentImplementation" is called, the context is already current. 
For some reason, when using single-threaded rendering, this works 
without issues and "eglMakeCurrent" returns EGL_TRUE even if the context is 
already active.

However, when using multithreading, eglMakeCurrent fails when called on an 
active context and also apparently invalidates the context, so that all 
following OpenGL calls fail.

I managed to fix the issue completely by just removing the "eglMakeCurrent" 
call within my "setUpEGL" function, so that the context does not become current 
before "makeCurrentImplementation" is executed by OSG. I can now use the 
application without errors on the real-time machine. :)
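
In other words, the only place where the context is made current is now the implementation that OSG calls from its graphics thread, roughly like this (sketch; the member names are made up):

Code:

bool EGLGraphicsContext::makeCurrentImplementation()
{
    // the context must not already be current on another thread (e.g. made current
    // by the setup code on the main thread), otherwise eglMakeCurrent fails here
    return eglMakeCurrent(_eglDisplay, _eglSurface, _eglSurface, _eglContext) == EGL_TRUE;
}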

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=66995#66995







Re: [osg-users] Custom GraphicsContext Segmentation Fault when using Multithreading

2016-04-22 Thread Philipp Meyer
Hi,

I modelled the EGLGraphicsContext on "PixelBufferX11", 
and I noticed that PixelBufferX11 calls the "init()" method in the constructor 
as well as in the "realizeImplementation" method, if necessary. So my call to 
"realizeImpl" in the constructor pretty much just mirrors that structure.

It does indeed seem like there is no graphics context when the graphics thread 
starts; however, I fail to understand why this only happens in 
multithreading mode. I studied the PixelBufferX11 source code but couldn't find 
anything that would explain the error in my implementation. Unfortunately I 
cannot test the original GraphicsContext on my real-time machine because it can't 
run it; however, it works fine on my Ubuntu desktop machine that I use for 
programming.

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=66925#66925







Re: [osg-users] Custom GraphicsContext Segmentation Fault when using Multithreading

2016-04-22 Thread Philipp Meyer
Hi,

I added the source code for the custom graphicsContext. Sorry for the delay.

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=66918#66918







[osg-users] Custom GraphicsContext Segmentation Fault when using Multithreading

2016-04-21 Thread Philipp Meyer
DISCLAIMER: I'm not a graphics or OpenGL expert, so if something is dumb or 
doesn't make sense, please let me know.

Hi,

I am trying to use OSG to create an application for a real-time Linux system 
without a windowing system.

To get OSG to work properly, I create my own GraphicsContext and assign it to 
each Camera I'm using. Within the GraphicsContext I set up EGL and DRM.

My GraphicsContext source code:

This works fine and I can run my application perfectly on the real-time 
machine. However, if I switch from single-threaded rendering mode to a 
multithreaded mode, I get a segmentation fault and I'm having trouble 
understanding why.

I debugged the application via remote debugger and the segmentation fault 
happens here:


Code:
Shader::PerContextShader::PerContextShader(const Shader* shader, unsigned int 
contextID) :
osg::Referenced(),
_contextID( contextID )
{
_shader = shader;
_extensions = GLExtensions::Get( _contextID, true );
_glShaderHandle = _extensions->glCreateShader( shader->getType() );
requestCompile();
}




The function pointer "glCreateShader" is set to 0x0. After double-checking, it seems 
like the method that assigns the function pointers fails because no valid OpenGL 
context can be found. The application also prints 


> Error: OpenGL version test failed, requires valid graphics context.


So now I'm wondering why this only happens if I enable multithreaded rendering, 
and not for single-threaded rendering. What exactly do I need to change so that 
multithreading works on the real-time machine?

Thank you!

Cheers,
mille25

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=66895#66895







[osg-users] [osgPlugins] [VRML plugin] Performance issue ...

2010-05-12 Thread Tangi Meyer
Hi All,

I've been adapting/updating a piece of software from:
- OSG 2.0.0 + Coin 2.5 + IV plugin + Visual C++ 2005
to
- OSG 2.9.5 + OpenVRML 0.14.3 + Visual C++ 2008.

There seems to be a serious drop in performance with the newer version 
regarding the loading of VRML files through the following call:
osgDB::readNodeFile(Filename)

Is that really the case, or do you have an idea of where I should look?

Thank you very much in advance!

Cheers,
Glouglou

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=27453#27453







Re: [osg-users] Blender/Maya importer

2009-04-30 Thread Meyer
Thanks,

I'll try this one.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=11082#11082







[osg-users] Blender/Maya importer

2009-04-29 Thread Meyer
Hi,

To be honest, I switched from OSG to OGRE some time ago, but hopefully you will 
forgive me and help me with my problem.

I need to establish communication between an OGRE visualisation and an 
OSG framework that produces .ive files. Is it possible to convert these 
.ive files into OGRE .meshes and .materials directly, or at least into a file 
that Blender, Maya or 3D Studio Max could read?
In your wiki I found a Blender exporter, which transforms into OSG, but not the 
other way round. And the links to osgmaya (an importer for Maya) and a similar 
program for 3DS didn't work. As Blender is freeware, it would be really great 
to be able to import and visualize .ive scenes in it (I could then export them to 
.mesh for OGRE).
Does any of you know a (working) importer for Blender?

Thank you a lot!

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=11036#11036




