Re: [osg-users] Oculus+OSG

2015-09-08 Thread Jan Ciger
On Mon, Sep 7, 2015 at 9:37 PM, Björn Blissing  wrote:
>
> Jan Ciger wrote:
>> That blog post I remember, but the conclusion there seems to be that 
>> positional timewarp introduces occlusion artifacts and a lot of extra 
>> complexity. It didn't sound like something they were intending to implement at 
>> the time.
>>
>
>
> Hi Jan,
>
> One Reddit user made two short videos to show the effect of positional 
> timewarp: both a normal case and one extreme case (pushed to fail).
>
> In the first video the frame rate is artificially dropped. Then he toggles 
> between positional timewarp on/off, to show how this trick can alleviate a 
> frame rate drop and still allow some minor movement without stutter:
> http://zippy.gfycat.com/JealousMeagerKestrel.webm
>
> The second video shows what happens if you move too much. Here rendering is 
> frozen and the HMD user continues to move until the effect fails miserably:
> http://giant.gfycat.com/AgileThatGraysquirrel.webm
>
> The author wrote the following description:
> "Timewarp is a way of distorting the image the HMD receives in the event that 
> the simulation can't keep up with the target frame rate or refresh rate of 
> the device in order to compensate. So, motion-to-photon latency and 
> headtracking will appear to remain consistent, even if your framerate is 
> fluctuating. This helps to alleviate jitter. This, however, was only 
> previously possible for rotational tracking in a prior Oculus SDK. Now the 
> distortion is possible for positional tracking as well. Both have their 
> limits of course -- you can only move so much before the scene becomes 
> terribly warped. It's meant to cover for momentary or tiny dips in frame rate 
> -- not for massive ones that last a while. "

Oh careful there.

What he is actually showing is the effect of the asynchronous timewarp
- if you can't hit the framerate, you reproject/warp the previous
frame. That's a fairly new thing which required driver support -
you need to preempt the current rendering command stream if you aren't
going to hit the target and re-project the old frame.

The original idea of timewarping was to reduce the apparent tracking
latency by warping the rendered image to match the tracking data late
in the frame. That is what I was referring to - I doubt the latency
reduction in the case of positional tracking is worth the effort,
because the tracker is running at a comparable/same speed as your
application.
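
To make that concrete, here is a minimal sketch (illustration only, not Oculus SDK
code) of what the orientation-only correction boils down to - the delta between the
head orientation the frame was rendered with and the orientation sampled late in the
frame, which the warp pass then folds into the lookup of the already-rendered image:

#include <osg/Quat>

// Hypothetical helper, not SDK code; the exact composition order depends on how
// the warp pass applies the correction.
osg::Quat timewarpCorrection(const osg::Quat& renderOrientation,  // pose used at render time
                             const osg::Quat& lateOrientation)    // pose sampled late in the frame
{
    return renderOrientation.inverse() * lateOrientation;
}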

I have recently spent a crazy amount of time debugging framerate
problems where the application (both OSG/OpenGL & Unity/D3D) is locked
to VSYNC but you get horrible framerate jitter every once in a while
for no apparent reason. This produces horrid artifacts when doing
frame-sequential stereo, because suddenly the stereo flips (I am
trying to do stereo on a GeForce with my own synchronisation hw). My
hypothesis is that the Nvidia driver is silently dropping frames
from the pre-rendered frame queue if you are rendering too quickly
(the scene was really simple, rendering at thousands of fps without
vsync) instead of blocking or something. And it happens only on
GeForce hardware, not on our old Quadro, so it seems to be a driver
"optimization". If the same thing happens with the Rift connected too,
then these timewarping kludges are going to have a hard time keeping
up. Or maybe they switch to a different code path in the driver when
the Rift is detected ...

I can't say I am a big fan of this - it is a kludge depending on the
GPU vendor's good will and your good relationship with them (the SDK
is NDA-ed, for example). Basically we are getting to the point where it is
going to be impossible to deliver a properly working 3D application
without working with the GPU vendor (good luck if you are a small
indie guy).

J.


Re: [osg-users] osgTerrain and CLAMP_TO_EDGE in ImageLayers?

2015-09-08 Thread Christian Buchner
> The current CLAMP_TO_EDGE approach requires me to modify the edge
> pixels of that terrain tile to have a transparent color.

Sorry, I meant to say "edge pixels of that image". I should proof-read my
postings better.



2015-09-08 13:52 GMT+02:00 Christian Buchner :

> Hi all,
>
> I am questioning the hardcoded use of CLAMP_TO_EDGE texture wrapping
> modes in osgTerrain/GeometryTechnique.cpp.
>
> Let's say that I wanted to add a high-res texture patch on some small
> location on a terrain tile. The current CLAMP_TO_EDGE approach requires me
> to modify the edge pixels of that terrain tile to have a transparent color.
> So I basically lose these pixels.
>
> I think it would be desirable to have the texture wrapping mode as a
> configurable option in the osgTerrain::Layer base class, similar to how
> currently the minLevel, maxLevel, minFilter and magFilter variables are
> configurable there.
>
> Preferable to me would be an approach using CLAMP_TO_BORDER where I can
> define my own border color, for example to be fully transparent.
>
> If this is an acceptable approach, I could submit a patch.
>
> Christian
>
>
>
>


[osg-users] osgTerrain and CLAMP_TO_EDGE in ImageLayers?

2015-09-08 Thread Christian Buchner
Hi all,

I am questioning the hardcoded use of CLAMP_TO_EDGE texture wrapping modes
in osgTerrain/GeometryTechnique.cpp.

Let's say that I wanted to add a high-res texture patch on some small
location on a terrain tile. The current CLAMP_TO_EDGE approach requires me
to modify the edge pixels of that terrain tile to have a transparent color.
So I basically lose these pixels.

I think it would be desirable to have the texture wrapping mode as a
configurable option in the osgTerrain::Layer base class, similar to how
currently the minLevel, maxLevel, minFilter and magFilter variables are
configurable there.

Preferable to me would be an approach using CLAMP_TO_BORDER where I can
define my own border color, for example to be fully transparent.
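
For reference, a minimal sketch (not the actual patch, just the desired end state
on a plain osg::Texture2D, assuming the wrap mode and border colour were configurable):

#include <osg/Texture2D>

// Hypothetical helper: clamp to a fully transparent border colour instead of the
// hardcoded CLAMP_TO_EDGE.
void applyTransparentBorder(osg::Texture2D* texture)
{
    texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_BORDER);
    texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_BORDER);
    texture->setBorderColor(osg::Vec4d(0.0, 0.0, 0.0, 0.0)); // fully transparent
}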

If this is an acceptable approach, I could submit a patch.

Christian


Re: [osg-users] Oculus+OSG

2015-09-08 Thread Jan Ciger

On 08/09/15 19:48, Björn Blissing wrote:
> I concur with your conclusion that it is highly likely that Oculus 
> are in fact using the asynchronous solution in the SDK0.6 and 0.7, 
> BUT they have not stated officially that they are using this. In 
> theory positional re-projection could be done in synchronous 
> rendering (although unlikely).

Positional reprojection could certainly be done synchronously (modulo
artifacts, etc.), but the Reddit video you have posted explicitly shows
framerate "smoothing".

That wouldn't happen if you do the warping at the end of each frame
(as before), regardless of whether only rotational warp is done or both
positional and rotational. The framerate will be the same or even a
bit worse than before because more calculation has to be done per
frame. However, the tracking latency will be perceived as lower,
because the time between the movement and the change making it to the
screen will be shorter.

That's why I have concluded that the video is showing the new
asynchronous timewarping technique - that one actually makes it possible
to fill in for missing frames and to smooth over framerate jitter and
the occasional missed vsync deadline. The "old style" timewarping doesn't
do that (and was not advertised as a tool for that either).

J.





[osg-users] how to get the handle of opengl texture that corresponds to a OSG::Texture

2015-09-08 Thread Qingjie Zhang
Hi,
I generated a Texture2D with OSG. To use the texture in CUDA, I have to get the 
corresponding "OpenGL texture" handle (GLuint), and then access the texture 
by that handle in CUDA.
I don't know how to get the handle. 
Many thanks for your reply!!!
... 

Thank you!

Cheers,
Qingjie

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=65061#65061







[osg-users] Change picked points' color ---why can't work in the pick function?

2015-09-08 Thread Yexin W
Hi,

Guys, I have a picking problem. What I want to do is load 10 points with their 
locations and colors. I want to use poly pick, and use another color to show 
the picked points.
The point loading process is as follows:


Code:

osg::ref_ptr<osg::Geometry> geom = new osg::Geometry();
osg::ref_ptr<osg::Vec3Array> v = new osg::Vec3Array;
osg::ref_ptr<osg::Vec4Array> clr = new osg::Vec4Array;
v->push_back(osg::Vec3(0.0, 0.0, 0.0));
... // 10 points in total
clr->push_back(osg::Vec4(1.0, 0.0, 1.0, 1.0f));
... // set the color
geom->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0,
                                          v->size()));
// after setting the color and normal, add the geometry to a Geode node
geode->addDrawable(geom.get());





Then I want to use poly pick. The problem is: when I put the following code 
in the pick() function of the PickHandler class, I found that the color of the 
picked primitive set has been changed; however, nothing changes in the view 
window. The following code is in the pick function:


Code:

if (picker->containsIntersections())
{
    osgUtil::PolytopeIntersector::Intersections intersections =
        picker->getIntersections();
    osgUtil::PolytopeIntersector::Intersections::iterator iter;
    for (iter = intersections.begin(); iter != intersections.end(); iter++)
    {
        osg::NodePath nodepath = (*iter).nodePath;
        node = (nodepath.size() >= 1) ? nodepath[nodepath.size() - 1] : 0;
        int pointIndex = (*iter).primitiveIndex;
        osg::Geode* geode = dynamic_cast<osg::Geode*>(node);
        osg::Geometry* geom = dynamic_cast<osg::Geometry*>(geode->getDrawable(0));
        osg::Vec4Array* clrary = dynamic_cast<osg::Vec4Array*>(geom->getColorArray());
        clrary->operator[](pointIndex) = osg::Vec4(0.0, 1.0, 0.0, 1.0f);
        geom->setColorBinding(osg::Geometry::BIND_PER_VERTEX);
        node->addUpdateCallback(new CessnaCallback()); // not helpful
        viewer->updateTraversal(); // not helpful
        //viewer->run(); // not helpful
    }
}





First, I thought maybe the colors had not been changed, so I tested changing 
several points' colors in the main function, like this:


Code:

osg::Geode* geode1 = dynamic_cast<osg::Geode*>(nodePt.get());
osg::Geometry* geom = dynamic_cast<osg::Geometry*>(geode1->getDrawable(0));
osg::Vec4Array* clrary = dynamic_cast<osg::Vec4Array*>(geom->getColorArray());
clrary->operator[](10) = osg::Vec4(0.0, 1.0, 0.0, 1.0f);
clrary->operator[](6) = osg::Vec4(0.0, 1.0, 0.0, 1.0f);
clrary->operator[](2) = osg::Vec4(0.0, 1.0, 0.0, 1.0f);
geom->setColorBinding(osg::Geometry::BIND_PER_VERTEX);




It changed the colors! So it confuses me why it doesn't work in the pick 
function. Anybody have any ideas?
Any ideas and suggestions would be appreciated! 


Thank you!

Cheers,
Yexin

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=64994#64994







Re: [osg-users] Accessing georegistered images via GDAL plug-in - too many limitations...

2015-09-08 Thread Robert Osfield
Hi Christian,

The GDAL plugin was written to read basic image and height field data via
GDAL rather than expose all the possible data.  Potentially this could be
added, but I think you are the first I recall asking for it.

Robert.

On 8 September 2015 at 15:12, Christian Buchner  wrote:

> Hi,
>
> while height fields loaded via the GDAL plug-in get their coordinate
> origin and x/y extents as well as rotation set correctly via the following
> code in ReaderWriterGDAL.cpp
>
> hf->setOrigin(osg::Vec3(BottomLeft[0],BottomLeft[1],0));
> hf->setXInterval(sqrt(geoTransform[1]*geoTransform[1] +
> geoTransform[2]*geoTransform[2]));
> hf->setYInterval(sqrt(geoTransform[4]*geoTransform[4] +
> geoTransform[5]*geoTransform[5]));
> hf->setRotation(osg::Quat(rotation, osg::Vec3d(0.0, 0.0, 1.0)));
>
> there appears to be no comparable facility in the GDAL plugin's image
> loader. All that will be returned is an osg::Image without any geo
> referencing data - if I understand the code correctly.
>
> I would propose that we set named properties like Origin, XInterval,
> YInterval, Rotation using the osg::Object->setUserValue()
> member function as osg::Vec2 objects.
>
> Also it would appear that the GDALDataset (which is a derived class of
> osgTerrain::Layer) is not accessible to the caller (user) of the GDAL
> plug-in. It would be nice if there was a facility to somehow access this
> dataset pointer through an API - as this object contains all the georeferencing
> information required to slap this piece of data onto an osgTerrain::Terrain.
>
> Christian
>
>


Re: [osg-users] Oculus+OSG

2015-09-08 Thread Björn Blissing

Jan Ciger wrote:
> 
> Oh careful there.
> 


Not my words - I was only quoting the author of the videos.


Jan Ciger wrote:
> What he is actually showing is the effect of the asynchronous timewarp - if 
> you can't hit the framerate, you reproject/warp the previous frame. That's a 
> fairly new thing which required the driver support - you need to preempt the 
> current rendering command stream if you aren't going to hit the target and 
> re-project the old frame.
> 
> The original idea of timewarping was to reduce the apparent tracking latency 
> by warping the rendered image to match the tracking data late in the frame. 
> That is what I was referring to - I doubt the latency reduction in the case 
> of positional tracking is worth the effort, because the tracker is running at 
> a comparable/same speed as your application.
> 


In the "Oculus Rift Developer Guide" you will not find a word about 
asynchronous time warp. They are only saying that the compositing framework is 
handling distortion, timewarp, and GPU synchronization, whatever that incurs... 

I don't know if this means that Oculus are currently doing pure "original 
timewarp", pure "asynchronous timewarp" or a combination of the two (since it's 
all happening inside the closed-source part of the Oculus SDK). But the feature 
shown in the videos is currently available.

/Björn

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=65062#65062







[osg-users] Accessing georegistered images via GDAL plug-in - too many limitations...

2015-09-08 Thread Christian Buchner
Hi,

while height fields loaded via the GDAL plug-in get their coordinate origin
and x/y extents as well as rotation set correctly via the following code in
ReaderWriterGDAL.cpp

hf->setOrigin(osg::Vec3(BottomLeft[0],BottomLeft[1],0));
hf->setXInterval(sqrt(geoTransform[1]*geoTransform[1] +
geoTransform[2]*geoTransform[2]));
hf->setYInterval(sqrt(geoTransform[4]*geoTransform[4] +
geoTransform[5]*geoTransform[5]));
hf->setRotation(osg::Quat(rotation, osg::Vec3d(0.0, 0.0, 1.0)));

there appears to be no comparable facility in the GDAL plugin's image
loader. All that will be returned is an osg::Image without any geo
referencing data - if I understand the code correctly.

I would propose that we set named properties like Origin, XInterval,
YInterval, Rotation using the osg::Object->setUserValue()
member function as osg::Vec2 objects.
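
A rough sketch of what that could look like (the property names and the helper are
only illustrative, not existing ReaderWriterGDAL code):

#include <osg/Image>
#include <osg/ValueObject>
#include <osg/Vec2>

// Hypothetical helper: tag the returned image with its geo-referencing values so
// callers can query them via getUserValue().
void tagGeoReferencing(osg::Image* image,
                       const osg::Vec2& origin,
                       const osg::Vec2& interval,   // x/y sample spacing
                       double rotation)
{
    image->setUserValue("Origin", origin);
    image->setUserValue("Interval", interval);
    image->setUserValue("Rotation", rotation);
}

// Caller side:
//   osg::Vec2 origin;
//   if (image->getUserValue("Origin", origin)) { /* use the geo referencing */ }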

Also it would appear that the GDALDataset (which is a derived class of
osgTerrain::Layer) is not accessible to the caller (user) of the GDAL
plug-in. It would be nice if there was a facility to somehow access this
dataset pointer through an API - as this object contains all the georeferencing
information required to slap this piece of data onto an osgTerrain::Terrain.

Christian


Re: [osg-users] automatically merge close-by geometries to reduce cull/draw overhead?

2015-09-08 Thread Christian Buchner
My buildings aren't (yet) textured, so that removes the need for a texture
atlas. However they are individually shaped (being created from an ESRI
shape file essentially), so instancing is ruled out.

I might try to group the buildings into tiles of equal size, trying to run
the osgUtil::Optimizer with MERGE_GEOMETRY and MERGE_DRAWABLES individually
on each tile. Let's see how that goes. If it is too slow, I will have to
refactor my code that generates buildings.
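
A rough sketch of that tiling idea (tile size, container types and the helper are my
own assumptions, not code from this thread):

#include <osg/Geode>
#include <osg/Group>
#include <osg/ref_ptr>
#include <osgUtil/Optimizer>
#include <map>
#include <utility>
#include <vector>

// Hypothetical helper: bin building Geodes into fixed-size tiles, then merge the
// geometry within each tile so cull/draw cost scales with the tile count rather
// than the building count.
osg::ref_ptr<osg::Group> buildTiledScene(const std::vector< osg::ref_ptr<osg::Geode> >& buildings,
                                         double tileSize)
{
    typedef std::map<std::pair<int, int>, osg::ref_ptr<osg::Group> > TileMap;
    TileMap tiles;
    osg::ref_ptr<osg::Group> root = new osg::Group;

    for (size_t i = 0; i < buildings.size(); ++i)
    {
        osg::Vec3 c = buildings[i]->getBound().center();
        std::pair<int, int> key((int)(c.x() / tileSize), (int)(c.y() / tileSize));
        osg::ref_ptr<osg::Group>& tile = tiles[key];
        if (!tile) { tile = new osg::Group; root->addChild(tile.get()); }
        tile->addChild(buildings[i].get());
    }

    osgUtil::Optimizer optimizer;
    for (TileMap::iterator it = tiles.begin(); it != tiles.end(); ++it)
    {
        optimizer.optimize(it->second.get(),
                           osgUtil::Optimizer::MERGE_GEODES | osgUtil::Optimizer::MERGE_GEOMETRY);
    }
    return root;
}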

Also I am wondering what the SPATIALIZE_GROUPS feature of the optimizer
does.

Christian


2015-09-07 12:32 GMT+02:00 Robert Osfield :

> Hi Christian,
>
> Since you are creating the buildings yourself I would recommend that you
> build them grouped to start off with rather than post-process them.
>
> The first step I'd take would be to create a texture atlas from the wall
> and roof textures and then just create a single osg::Geometry and
> associated osg::StateSet.   This will halve the number of Drawables and
> state changes.
>
> If you have a set of roof and wall textures then see if you can create a
> single texture atlas from them so that you can then reuse the same
> osg::StateSet between separate Drawables.
>
> The final step would be to merge groups of Drawables that are in the same
> geographical location.  The osgforest example does grouping of randomly
> placed trees so have a look at ways of doing this.
>
> Another approach you could take would be to create a single osg::Geometry
> and then use instancing to repeat the building geometry and provide a size
> for a shader to scale the geometry and place it in its final position.
> Again, the osgforest example has a code path that does this so have a look at this.
>
> Robert.
>
>
>
> On 7 September 2015 at 10:50, Christian Buchner <
> christian.buch...@gmail.com> wrote:
>
>> Hi,
>>
>> we're using code that loads some buildings (outline and height), creating a
>> Geode with two drawables per individual building - one drawable for the
>> walls, one for the roof polygon. This has served us well to display a few
>> hundred to a few thousand buildings.
>>
>> Fast forward to the current date. Our client has sent us a new geo data set
>> containing 55000 building polygons. Once you zoom out the camera to show
>> most of these buildings, frame rates drop into the low single digits, mostly
>> due to all the culling effort done by the CPU (maybe also from the large
>> number of draw calls). D'oh!
>>
>> Are there any specific features within OSG to group close by geodes, and
>> to merge their drawables?
>>
>> I know the osgUtil::Optimizer has flags for merging geodes and drawables,
>> but I guess it would not automatically merge only very close objects.
>>
>> What path should I try to take for tackling this problem, if possible
>> using built-in OSG features?
>>
>> Christian
>>
>>
>>


[osg-users] ffmpeg library version expected for OSG 3.2.3

2015-09-08 Thread sam
What version of ffmpeg is required to use the plugin for OSG 3.2.3? When
trying to use the latest build of ffmpeg; I get some undeclared identifier
errors which seem to be caused by deprecation of some functions within
ffmpeg.

Thanks, Sam


Re: [osg-users] automatically merge close-by geometries to reduce cull/draw overhead?

2015-09-08 Thread Robert Osfield
Hi Christian,

On 8 September 2015 at 15:26, Christian Buchner  wrote:

>
>
> My buildings aren't (yet) textured, so that removes the need for a
> texture atlas. However they are individually shaped (being created from an
> ESRI shape file essentially), so instancing is ruled out.
>
> I might try to group the buildings into tiles of equal size, trying to run
> the osgUtil::Optimizer with MERGE_GEOMETRY and MERGE_DRAWABLES individually
> on each tile. Let's see how that goes. If it is too slow, I will have to
> refactor my code that generates buildings.
>
> Also I am wondering what the SPATIALIZE_GROUPS feature of the optimizer
> does.
>

All these merge/spatialize visitors work on flat osg::Geode and osg::Group
nodes respectively.

While you could potentially try to co-opt these classes to help you, they
are bandages that can be applied to crappy scene graphs to try to prevent
their worst offences from affecting performance.  With 3rd party scene
graphs created by modelling tools with little clue of real-time scene
graph needs this is the best you can get.  Running the Optimizer classes might
improve bad scene graphs but it can't avoid creating more fragmented memory,
and it can't produce the most optimal solutions for all types of data.

Creating efficient scene graphs from the start is ABSOLUTELY the most
efficient way to use the OSG and OpenGL.  In your case you should be
creating an efficient scene graph right from the start and there should be
no need to run *any* of the osgUtil::Optimizer classes.   By continuing to ask
about using the Optimizer you are effectively saying: no, I want to
create crappy scene graphs and let other code do its best to fix them up.

I don't want to help you make bad scene graphs.  I want to help you make
efficient scene graphs.

Robert





Re: [osg-users] how to get the handle of opengl texture that corresponds to a OSG::Texture

2015-09-08 Thread Qingjie Zhang
Thank you Robert,
well, I've checked the OSG source code and found what you said, but I still 
don't understand the contextID. What value should I give to that parameter?
Thanks so much!
Qingjie.


robertosfield wrote:
> Hi Qingjie,
> 
> 
> The osg::Texture::TextureObject* osg::Texture::getTextureObject(uint contextID) 
> method can be used to get OSG's wrapper for the OpenGL texture object.  
> The TextureObject::id() method returns the OpenGL texture object id.
> 
> 
> Robert.
> 


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=65067#65067







Re: [osg-users] how to get the handle of opengl texture that corresponds to a OSG::Texture

2015-09-08 Thread Qingjie Zhang
Hi Robert,
I know that RenderInfo->getContextID() can give me the contextID to 
use in getTextureObject(uint contextID), but where can I get the 
RenderInfo? I searched the Texture source code but did not find it.
Many thanks.

Qingjie
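
For reference, one place a RenderInfo is handed to user code is a camera draw
callback - a rough sketch (the callback class and usage are only illustrative, not
from this thread):

#include <osg/Camera>
#include <osg/RenderInfo>
#include <osg/Texture2D>

// Hypothetical callback: runs on the draw thread after the camera has drawn, so the
// texture object for this graphics context already exists.
class FetchTextureIdCallback : public osg::Camera::DrawCallback
{
public:
    FetchTextureIdCallback(osg::Texture2D* texture) : _texture(texture) {}

    virtual void operator()(osg::RenderInfo& renderInfo) const
    {
        unsigned int contextID = renderInfo.getState()->getContextID();
        osg::Texture::TextureObject* textureObject = _texture->getTextureObject(contextID);
        if (textureObject)
        {
            GLuint id = textureObject->id();  // hand this id over to CUDA
            (void)id;
        }
    }

private:
    osg::ref_ptr<osg::Texture2D> _texture;
};

// Usage (sketch): camera->setFinalDrawCallback(new FetchTextureIdCallback(texture.get()));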


robertosfield wrote:
> Hi Qingjie,
> 
> 
> The osg::Texture::TextureObject* osg::Texture::getTextureObject(uint contextID) 
> method can be used to get OSG's wrapper for the OpenGL texture object.  
> The TextureObject::id() method returns the OpenGL texture object id.
> 
> 
> Robert.
> 
> 
> On 8 September 2015 at 15:20, Qingjie Zhang < ()> wrote:
> 
> > Hi,
> > I generated a texture2d with osg, to use the texture in CUDA, I have to get 
> > the corresponding "opengl texture" handle(GLuint), and then access to the 
> > texture by the handle in CUDA.
> > Now I don't know how to get the handle.
> > Many thanks for your reply!!!
> > ...
> > 
> > Thank you!
> > 
> > Cheers,
> > Qingjie
> > 
> > --
> > Read this topic online here:
> > http://forum.openscenegraph.org/viewtopic.php?p=65061#65061 
> > (http://forum.openscenegraph.org/viewtopic.php?p=65061#65061)
> > 
> > 
> > 
> > 
> > 
> 
> 
>  --
> Post generated by Mail2Forum


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=65073#65073







Re: [osg-users] Oculus+OSG

2015-09-08 Thread Jan Ciger
On Tue, Sep 8, 2015 at 4:26 PM, Björn Blissing  wrote:

> In the "Oculus Rift Developer Guide" you will not find a word about 
> asynchronous time warp. They are only saying that the compositing framework 
> is handling distortion, timewarp, and GPU synchronization, whatever that 
> incurs...

It is described in this post (that you have mentioned too):
https://developer.oculus.com/blog/asynchronous-timewarp-examined/
It was one of the reasons they had to go to Nvidia (the other being
the direct mode support), because it is not possible to implement it
without driver support.

>
> I don't know if this means that Oculus currently are doing pure "original 
> timewarp", pure "asynchronous timewarp" or a combination of the two (since 
> its all happening inside the closed source part of the Oculus SDK). But the 
> feature shown in the videos is currently available.

Yes, that is the asynchronous timewarping in the new SDK. The
"original timewarp" does not have any effect on framerate (you still
need to render every frame + you are rendering the warping at the
end), only on perceived tracking latency. The difference is explained
in that blog article by their chief software architect. The original
timewarping by J. Carmack is described here:
https://web.archive.org/web/20140719085135/http://www.altdev.co/2013/02/22/latency-mitigation-strategies/


Regards,

J.


Re: [osg-users] how to get the handle of opengl texture that corresponds to a OSG::Texture

2015-09-08 Thread Robert Osfield
Hi Qingjie,

The osg::Texture::TextureObject* osg::Texture::getTextureObject(uint
contextID) method can be used to get OSG's wrapper for the OpenGL
texture object.  The TextureObject::id() method returns the OpenGL texture
object id.

Robert.
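
As a minimal sketch of that (assuming a single graphics context, so a contextID of 0;
the helper name is illustrative):

#include <osg/Texture2D>

// Hypothetical helper: returns the OpenGL id once the texture has been applied at
// least once on the given graphics context; 0 means it has not been compiled yet.
GLuint getGLTextureID(osg::Texture2D* texture, unsigned int contextID = 0)
{
    osg::Texture::TextureObject* textureObject = texture->getTextureObject(contextID);
    return textureObject ? textureObject->id() : 0;
}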

On 8 September 2015 at 15:20, Qingjie Zhang <305479...@qq.com> wrote:

> Hi,
> I generated a texture2d with osg, to use the texture in CUDA, I have to
> get the corresponding "opengl texture" handle(GLuint), and then access to
> the texture by the handle in CUDA.
> Now I don't know how to get the handle.
> Many thanks for your reply!!!
> ...
>
> Thank you!
>
> Cheers,
> Qingjie
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=65061#65061
>
>
>
>
>


Re: [osg-users] Oculus+OSG

2015-09-08 Thread Björn Blissing

Jan Ciger wrote:
> It is described in this post (that you have mentioned too):
> https://developer.oculus.com/blog/asynchronous-timewarp-examined/
> It was one of the reasons they had to go to Nvidia (the other being
> the direct mode support), because it is not possible to implement it
> without driver support.


I feel that you are misunderstanding what I am trying to say. I will try to 
reformulate myself.

First of all, we have two types of warping that could be performed:

1. Rotational - A pure 2D transformation, we only need a color texture for this.

2. Positional - A 3D re-projection where every pixel has to be transformed with 
its depth taken into account. This requires us to have a color texture, a depth 
texture and the projection matrices used.

Both techniques can be used either in sync with rendering (as was done 
with rotational timewarp in SDK 0.5 and earlier) or in a separate thread. The 
benefit of having it in a separate thread is that we can handle the case where the 
rendering is not done when the HMD requires a new image, but with the drawback 
that it requires low-level support in the OS/driver.
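
To make the positional case concrete, a rough sketch of the per-pixel math
(illustration only, using OSG's row-vector conventions, not SDK code):

#include <osg/Matrixd>
#include <osg/Vec3d>

// Hypothetical helper: given a pixel of the rendered frame in normalised device
// coordinates (x, y, depth) and the view-projection it was rendered with, compute
// where that pixel lands under the late head pose's view-projection.
osg::Vec3d reprojectPixel(const osg::Vec3d& ndcRendered,
                          const osg::Matrixd& renderViewProjection,
                          const osg::Matrixd& lateViewProjection)
{
    // NDC -> world using the render-time matrices (osg::Matrixd applies the
    // perspective divide when multiplying a Vec3d).
    osg::Vec3d world = ndcRendered * osg::Matrixd::inverse(renderViewProjection);
    // World -> NDC of the warped frame using the late head pose.
    return world * lateViewProjection;
}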

I concur with your conclusion that it is highly likely that Oculus are in fact 
using the asynchronous solution in SDK 0.6 and 0.7, BUT they have not stated 
officially that they are using this. In theory positional re-projection could be 
done in synchronous rendering (although it is unlikely). 

/Björn

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=65069#65069




