They are kept in a separate repo:
https://github.com/openscenegraph/OpenSceneGraph-Data
Do keep in mind the license for said data is different.
From GitHub:
If not otherwise specified all files in the OpenSceneGraph-Data are provided
free for non-commercial usage. Commercial users may use the
I did a similar project with adaptive headlamps a couple years ago.
The way I would do this is to think of it as a spot light with a
projector slide; equivalently, a spot light with a shadow.
So what I did was to set up a "slide" scene in which I would render
Thank you for your reply.
In general my CPU load is pretty good; what I am really concerned about is
reducing the number of OpenGL calls and driver overhead. I have 200 signs, all
under their own switch, each sign only 4-64 triangles, and all the
signs share the same texture mosaic. Havi
I have some scenes with tons of osgSim::MultiSwitch nodes; for some of my
scenes it can be in the low 1000s. These are typically set up once for a scene
and then rarely changed. From what I understand, a lot of the optimizer
operations do not work across switches, which makes sense. I would th
I currently have a hardware instancing system that is set up for cars, with
the transform data stored in a TBO (texture buffer object).
What I want to do is switch over to an SSBO (shader storage buffer
object), and copy only the minimum amount of data I need per frame.
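As a rough sketch of the shader-side difference (assuming GLSL 4.3+ / ARB_shader_storage_buffer_object; the block name and binding point are illustrative, not from the original post):

```glsl
// Old TBO path: reassemble a mat4 from four RGBA32F texels per instance.
// uniform samplerBuffer transforms;
// mat4 m = mat4(texelFetch(transforms, gl_InstanceID*4 + 0),
//               texelFetch(transforms, gl_InstanceID*4 + 1),
//               texelFetch(transforms, gl_InstanceID*4 + 2),
//               texelFetch(transforms, gl_InstanceID*4 + 3));

// SSBO path: a plain unsized array of matrices, indexed directly.
layout(std430, binding = 0) buffer InstanceTransforms {
    mat4 transforms[];
};

void main()
{
    mat4 m = transforms[gl_InstanceID];
    // ... transform the vertex with m as before
}
```

On the C++ side, glBufferSubData (or a persistently mapped buffer) lets you upload just the dirty range each frame instead of re-uploading the whole table.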
GPU affinity is really only available with Quadro cards. AMD has their own
system that is a little more complicated and has a few more features, but I
am not really experienced with those, and I believe they are only usable on the
FirePro/Radeon Pro cards.
We have found setting the GPU affin
I am not running it in Quadro Mosaic mode. I do have a slightly older version
of the program running on a render cluster with five 2-GPU nodes driving 16
displays that runs fine on Windows 7.
I am starting to suspect it has something to do with the Windows Desktop
Window Manager (DWM) compositi
I currently have an odd problem I am stuck on. I have a system with two Quadro
P5000 cards driving 3 displays that are warped and blended on a 120-degree
dome, running Windows 10. The application is a ground vehicle simulation, so I
have pretty high rates of optical flow. Each display is running it
Yes, this is doable. You need to set the internal format to something like
GL_RGBA32F
and the source type to float on your texture.
You might look at the OSG forest example and its use of osg::TextureBuffer.
Basically, the forest example does instancing where the transforms
are load
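A minimal sketch of that texture setup, assuming an osg::Texture2D whose osg::Image you fill yourself (the width/height names are illustrative):

```cpp
// Float texture for arbitrary per-texel data rather than color.
osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);   // source type: float

osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D(image.get());
texture->setInternalFormat(GL_RGBA32F);                      // full 32-bit float storage
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST); // don't filter data texels
```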
We do have a city scene at ground level with lots of cars, pedestrians, and
construction zone clutter. The cars tend to be the worst, as they can have up to
8 transparent objects each. Large parking lots can have 500+ vehicles.
OK, so it sounds like what we need to do is try to combine oc
I am using OccluderNode:

// for each "building" in the config file:
for (size_t i = 0; i < vecs.size(); ++i) {
    auto& pnt_pair = vecs[i];
    osg::OccluderNode* onode = new osg::OccluderNode();
    osg::ConvexPlanarOccluder* occl = new osg::ConvexPlanarOccluder();
    osg::ConvexPlanarPolygon& poly = occl->getOccluder();
I have a scene with a large number of occluder nodes. It's a large city scene,
and most of the nodes are relatively small: 10-50 meters wide, 10-30
meters tall, in a database that is maybe 20 km x 20 km. I have 1000s of these.
They are basically extracted semi-automatically. Most of th
I am using OSG 3.4.0, but I am in a position where I could switch to a newer build.
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=72392#72392
___
osg-users mailing list
osg-users@lists.openscenegraph.org
Right now I am having problems setting unique uniforms and instance counts on a
per-view basis (per camera).
I am using hardware instancing, with a scene with 1000s of
relatively complex objects but only a handful of actual models. I was able to
encode indices to textures + position and rot
Instead of multiplying the modelview matrix by the inverse view matrix in the
vertex shader, does anyone have a workable solution to get the model matrix
set up as a uniform to begin with?
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=72239#72239
Is the GPU time spiking when not running the debugger? Basically, are you
getting something like 99% of your frames hitting your frame time, then 1% that
take 4-5 times longer than they are supposed to?
I suspect you may have some kind of threading issue. It's easy to see how
slightly changing the timing of
FYI, I forgot to mention this was with OSG 3.4.0; looking at the git repo, it
does not look like anything in the plugin has changed since.
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=71674#71674
I have an issue: I have some nodes with day, night, and dusk flags set. I need
this data for time-of-day control, as nodes set for night time have emissive
properties that do not work for day time, and the converse is true; even
worse is if both are active at the same time. I traced down into t
I have a number of scenes with a large number of old-fashioned light maps.
These light maps are basically textures that are set with blending, and I have
a number of issues with them.
I have 100s of these lights in the scene, but only a handful should be
active in a scene at a time. At
Yes, this is possible, but I am afraid it's likely not going to be simple,
depending on how much integration between the two you are looking for.
Most likely the easiest path for you is to just create an OSG program,
create your Python program, and use sockets
(https://docs.python.org/2/howto
I would try to look at instancing; the OSG forest example is a good one.
Instancing is where you tell the GPU to draw an object X number of times, and
in the vertex shader you get a built-in variable, gl_InstanceID, that tells you
which item it is drawing (i.e. if you are drawing 100 versions, this wil
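A minimal sketch of a forest-style instancing vertex shader, assuming the per-instance transforms were packed into a samplerBuffer as four RGBA32F texels each (the uniform name is illustrative, and gl_Vertex/gl_ModelViewProjectionMatrix assume a compatibility profile):

```glsl
#version 130
uniform samplerBuffer transforms;   // one mat4 = 4 texels per instance

void main()
{
    int base = gl_InstanceID * 4;   // which instance this vertex belongs to
    mat4 instance = mat4(texelFetch(transforms, base + 0),
                         texelFetch(transforms, base + 1),
                         texelFetch(transforms, base + 2),
                         texelFetch(transforms, base + 3));
    gl_Position = gl_ModelViewProjectionMatrix * (instance * gl_Vertex);
}
```

The whole batch is then issued with a single draw call by calling setNumInstances() on the geometry's PrimitiveSet.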
Thanks, this seems to mostly fix my issue. The texture filtering should have
already been set for the model, but it looks like along the way, things that
were supposed to be NEAREST_MIPMAP_LINEAR were getting set to LINEAR.
I am having an issue with rendering models with mipmaps: when I render my scene
I get some strange aliasing on text in the texture.
I am mostly using DDS files; some textures have mipmaps, some do not. If they
do not have mipmaps defined I am setting:
texture->allocateMipmapLevels();
So to t
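For reference, the filter modes discussed in this thread are set per texture; a sketch assuming an existing osg::Texture2D:

```cpp
// Mipmapped minification (nearest mip level, linear within it);
// plain linear magnification.
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST_MIPMAP_LINEAR);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
```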
Basically, what you want is the situation where the peak of the cone is at
(0,0,0) and the base extends in your negative direction, pointed towards the
ground. You may have to put a transform between, say, the DOF node and the
cone. You might also just forgo the OSG cone and use a model where the peak of
How are you planning on doing the searchlight? Are you planning on doing it as
a semi-transparent cone? If so, I would basically recommend looking at the DOF
nodes (osgSim::DOFTransform): attach the DOF node to your aircraft,
and then place your cone under the DOF node.
As per l
What threading model are you using?
One thing you need to be conscious of is when you update/modify your scene graph
and who is using it at that time. Some operations might be better served by
different callback mechanisms, such as viewer->addUpdateOperation.
You might try switching you
Yeah, basically you want to use un-warped coordinates for your blending image
and warped coordinates for your scene image. Otherwise you would have to warp
your blending image, which may or may not be easier depending on your process.
As for passing your texture width, you can use the function textureSize
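Using textureSize is a one-liner (GLSL 1.30+); the sampler name here is illustrative:

```glsl
ivec2 size  = textureSize(sceneTexture, 0);   // width/height of mip level 0
vec2  texel = 1.0 / vec2(size);               // size of one texel in UV space
```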
OK, I see it now. Taking a quick look at the code, it seems what you want to do
is add a vertex shader and pass the texture coordinates from your Mesh into
your fragment shader. Then in your fragment shader use the texture coordinates
in this line:
"vec4 sceneColor = texture2D(sceneTexture,
Can you post your shader code? For testing you might try just rendering the
texture coordinates for the scene texture and the texture coordinates for your
blending texture; they should be different.
If you are still using this:
" vec4 sceneColor = texture2D(sceneTexture, texCoord);\n"
" vec4
I am currently using a uniform buffer object (UBO) to set up bindless textures:
I have a UBO that I am using to bind an array of texture
handles. I added the extensions glGetTextureHandle and
glMakeTextureHandleResident to set up the texture handles in the UBO.
I have a class I hav
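The shader side of that setup might look roughly like this (assuming ARB_bindless_texture; the block name, binding point, array size, and materialIndex are illustrative, not from the original post):

```glsl
#version 450
#extension GL_ARB_bindless_texture : require

// Each sampler2D here is a 64-bit handle uploaded into the UBO on the CPU side.
layout(std140, binding = 1) uniform TextureHandles {
    sampler2D textures[64];
};

flat in int materialIndex;   // e.g. fetched per instance or per draw
in vec2 uv;
out vec4 fragColor;

void main()
{
    fragColor = texture(textures[materialIndex], uv);
}
```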
Can you send them to me as well?
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=69287#69287
Two places where I have personally seen such issues: one from code ported
from an older version of OSG, the other from shader/driver differences between
computers.
If I remember correctly, some methods for setting color were removed moving from
version 3.0 to 3.2, and I had some code where the inco
The function I posted was similar to a function I called from my shader in my
final pass. What you have is functionally similar.
As for the inout, this states that the parameter to the function is both an
input and an output. I set the version for my shader to something like 4.5 in
comp
Taking a quick look at the code, yes, it seems like a good example.
Basically, on a quick read of this: you render your scene to a texture,
then render a dome set up to do your warping, with the results of
the render-to-texture as its texture. You basically want to attach
You should be able to grab the state set in the pre-render callback and bind
your image for blending, plus add any uniforms you need there. You will want to
pick a texture unit that you are not going to use when rendering your scene.
You will also need to bind your shader to the scene as well.
One t
It's kind of tough to say without a good idea of what exactly you can
do with your plugin. I can say that what you most likely need to do is bind a
texture with your blend map to your scene somehow.
Ideally you would do this in a second rendering pass. Basically you want to
perform blendin
I think looking at the osgblenddrawbuffers or osgmultiplerendertargets examples
might be a good start. Neither of these does exactly what you are aiming for,
but they show you how to add the color buffers and shaders.
You might also look at the osgPPU project. I think it has some more relevant
examp
There are a few ways of doing this. What you most likely want to do is look
up information on high dynamic range rendering. This involves multi-pass
rendering where you use some kind of blur filter to make the brightest parts of
your scene bleed out.
This is a reasonable example:
http://learnop
Thanks for posting the code; it helped a lot. I did eventually get the code to
work. What I found was I needed to make sure the shader was active first, then
do the texture binds/handle fetches. So I needed a top-level
node with the shader activated in its state set, then a chil
If you are using Windows you can use EnumDisplayMonitors; this will go through
the list of physical displays attached to your computer.
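A minimal Windows-only sketch of that enumeration (the callback name and what it prints are illustrative):

```cpp
#include <windows.h>
#include <cstdio>

// Called once per attached display.
static BOOL CALLBACK monitorProc(HMONITOR hMon, HDC, LPRECT rect, LPARAM)
{
    MONITORINFOEXA info;
    info.cbSize = sizeof(info);
    if (GetMonitorInfoA(hMon, &info))
        std::printf("%s: %ldx%ld\n", info.szDevice,
                    rect->right - rect->left, rect->bottom - rect->top);
    return TRUE;   // keep enumerating
}

int main()
{
    EnumDisplayMonitors(nullptr, nullptr, monitorProc, 0);
    return 0;
}
```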
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68145#68145
What you can do is run the simplifier, then use osgDB::writeNodeFile() to save
the simplified result, then load that the next time you run.
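A sketch of that simplify-once, load-later workflow (the 0.5 sample ratio and file name are arbitrary choices, not from the original post):

```cpp
// One-time pass: simplify, then cache the result to disk.
osgUtil::Simplifier simplifier(0.5f);            // keep ~50% of the vertices
node->accept(simplifier);
osgDB::writeNodeFile(*node, "simplified.osgb");

// Subsequent runs: just load the cached file.
osg::ref_ptr<osg::Node> cached = osgDB::readNodeFile("simplified.osgb");
```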
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68144#68144
Has anyone implemented bindless textures? I am trying to add this to my app to
improve batching. I have a large scene with lots of unique draw calls. We do
use texture atlases, so the number of small objects is limited. I would like to
further improve batching without having to redo textures. I d
This is a pretty broad topic. It largely depends on what you are trying to
display.
If you are displaying a lot of the same thing (say, a warehouse filled with
1000s of identical boxes) you can use hardware instancing; there are a lot of
sources you can look up on the subject. The OSG forest exam
I have two separate issues with DDS textures. First, and most annoying: after
moving from OSG 3.2.x to OSG 3.4.0 my grayscale textures now show up
red. I did not change the source textures or models between versions.
I have another problem with mipmapped DDS textures. I have a number of
I work at the National Advanced Driving Simulator at the University of Iowa.
We are running OSG on a 16-projector system on one of the world's largest
motion bases; I think we are #2 at this point. We also use OSG in single-PC
simulators that we provide to a number of universities and other e
When I run the OSG shadow example, it works. I also modified the example to use
a composite viewer, and that works as well.
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=63792#63792
I have added a shadow scene (using the shadow map technique) to my scene graph,
and the shadow texture I am getting does not seem to make much sense; see:
https://www.nads-sc.uiowa.edu/userweb/dheitbri/shadowScene_small.jpg
I turned on the debug HUD; the scene has 3 cars on top of a white rectangle,
If I remember from my conversations with people at NVidia, the least amount of
latency you can get is 2-3 frames; I cannot remember the exact number. If you
select additional pre-rendered frames, this will increase it. Also
remember your display will add some additional latency as wel
Just to give you some perspective, the way we do this where I work is with
hardware frame lock. We have a single computer that is responsible for
generating our frame data (i.e. object position and animation state), which in
turn is broadcast out to all our render nodes. Each frame is tagged
I tried playing around with the different threading modes; I was only able to
get this to work in single-threaded mode. I also get periodic hangs; I seem to
drop 1 frame every second or so.
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=58369#58369
I am having a problem setting up a swap group using OSG 3.2 on Windows 7. I
am running the program on a system with two Quadro K5000s with a sync card. I am
setting the swap group to 1 and setting swapGroupEnabled to true on the
graphics context traits. I have a single graphics context with