Re: [osg-users] SingleThreaded leading to whole application just running on one core

2016-09-25 Thread Fabian Wiesel
Hi Robert,

> Have you tried setting the affinity of the threads that are created?
> Have you tried creating the threads before the call to viewer.realize()?

Yes, both cause the threads to be distributed across the cores. That is
probably also why initialising TBB early in main helps, as it creates a
pool of worker threads. For my app, you can consider it solved.
But don't you see a difficulty for OSG, if you cannot use any threading
library without additional setup code?
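
For reference, this is roughly the kind of extra setup code I mean (untested
sketch, Linux/glibc assumed; the worker thread handle and function are just
placeholders):

  #include <pthread.h>
  #include <sched.h>
  #include <unistd.h>

  // Give a freshly created worker thread the full CPU set again, undoing the
  // narrowed mask it inherited from the main thread after viewer.realize().
  static void resetAffinityToAllCpus(pthread_t thread)
  {
      cpu_set_t cpus;
      CPU_ZERO(&cpus);
      const long n = sysconf(_SC_NPROCESSORS_ONLN);
      for (long i = 0; i < n; ++i) CPU_SET(i, &cpus);
      pthread_setaffinity_np(thread, sizeof(cpus), &cpus);
  }

  // usage: pthread_create(&worker, 0, workFunc, 0);
  //        resetAffinityToAllCpus(worker);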

> The way things are behaving looks to be down to the way that Linux
> threading forces the inheritance of the main thread's affinity by child
> threads.
> I don't know if there is a setting on the Linux threads side that can
> change this behaviour so it's more consistent with other platforms.

I was looking for that, but my search was fruitless.
It also does not seem to be Linux-specific. FreeBSD appears to do the same,
as does Windows:
https://msdn.microsoft.com/es-es/library/windows/desktop/ms686223(v=vs.85).aspx
> Process affinity is inherited by any child process or newly instantiated
> local process

It looks more like OS X (and QNX) is the isolated case.

Cheers,

Fabian


Re: [osg-users] SingleThreaded leading to whole application just running on one core

2016-09-25 Thread Fabian Wiesel
Hi,

I can confirm the behaviour with the following test case:
https://github.com/fwiesel/vertexarrayfunctest/blob/threads/main.cpp#L92-L103
All threads run on CPU 0.
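
The effect can also be demonstrated in isolation, without the rest of the test
program (reduced sketch, Linux/glibc assumed; not the linked code verbatim):

  #include <OpenThreads/Thread>
  #include <sched.h>
  #include <cstdio>
  #include <thread>
  #include <vector>

  int main()
  {
      // Pin the current (main) thread the same way realize() ends up doing.
      OpenThreads::SetProcessorAffinityOfCurrentThread(0);

      // Every thread created afterwards inherits the mask and reports CPU 0.
      std::vector<std::thread> workers;
      for (int i = 0; i < 8; ++i)
          workers.emplace_back([]{ std::printf("CPU %d\n", sched_getcpu()); });
      for (auto& w : workers) w.join();
      return 0;
  }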

That clears up a mystery which had baffled me and two of my colleagues:
after upgrading to a new Ubuntu version, our application, which makes heavy
use of the Intel Threading Building Blocks, suddenly failed to scale across
the cores.
Explicitly initialising TBB early in the program solved the issue, so we
blamed some change in TBB, and I didn't investigate further.
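
For anyone else hitting this, "initialising TBB early" boils down to something
like the following (sketch only, assuming the tbb::task_scheduler_init
interface that TBB provided at the time):

  #include <tbb/task_scheduler_init.h>

  int main(int argc, char** argv)
  {
      // Create the TBB worker pool *before* any OSG call narrows the main
      // thread's affinity mask; the workers then keep the full CPU set.
      tbb::task_scheduler_init tbbInit;  // default: one worker per core

      // ... set up the osgViewer::Viewer, realize(), run(), etc. afterwards ...
      return 0;
  }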

It looks like OSG was previously packaged with QtThreads instead of pthreads,
which made the affinity operations a no-op, while the newer package does not.

Given that it affects every child thread, can I ask you to reconsider the
affinity handling, and/or perhaps rename
osgViewer::ViewerBase::SingleThreaded to
osgViewer::ViewerBase::SingleThreadedCpuLocked or something similar?

I understand that it is possible to override the behaviour (which I have now
done), but that requires some internal knowledge of the library, which you
obviously have.
For me as a user, though, where OSG is simply one of the libraries I use, I
would not expect it to effectively change the behaviour of another one.
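
For completeness, one way to override it looks roughly like this (untested
sketch; it assumes ViewerBase::setUpThreading() is virtual and Linux/glibc
for the affinity call):

  #include <pthread.h>
  #include <sched.h>
  #include <unistd.h>
  #include <osgViewer/Viewer>

  // A viewer that undoes the CPU-0 pinning done by setUpThreading() in
  // SingleThreaded mode.
  class UnpinnedViewer : public osgViewer::Viewer
  {
  public:
      virtual void setUpThreading()
      {
          osgViewer::Viewer::setUpThreading();  // let OSG do its usual setup

          // Restore the full CPU set on the calling (main) thread so that
          // threads created later do not inherit the narrowed mask.
          cpu_set_t cpus;
          CPU_ZERO(&cpus);
          const long n = sysconf(_SC_NPROCESSORS_ONLN);
          for (long i = 0; i < n; ++i) CPU_SET(i, &cpus);
          pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus);
      }
  };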

Thanks,
  Fabian
  


Re: [osg-users] SingleThreaded leading to whole application just running on one core

2016-09-24 Thread Fabian Wiesel
Hi,

>  OSG is setting the affinity for some of its own threads, which is totally 
> legitimate.

So far I have not been able to confirm it on my Mac, but I think I observed
such behaviour in my application on Linux.
Take that with a grain of salt, though, as it could be the result of some
other side effect.
I will try to test it in the coming days with a simple test program, unless
you can point out a mistake and save me the trouble.

The following scenario sounds plausible to me:

If you call osgViewer::Viewer::setThreadingModel(SingleThreaded) and then
Viewer::realize(), the latter will in turn call ViewerBase::setUpThreading() ->
OpenThreads::SetProcessorAffinityOfCurrentThread(0) ->
pthread_setaffinity_np(...)

pthread_setaffinity_np is called on the main thread, and it is debatable
whether that counts as one of "OSG's own threads".

On Linux, the side effect arises from the following (man page):
> A new thread created by pthread_create(3) inherits a copy of its  creator's 
> CPU affinity mask.

So all threads created either by the viewer or after Viewer::realize() will
run only on CPU 0.

Given the following (pseudo-)program, I would expect the threads to run in
parallel on all processors, but on Linux they likely will not.

  #include <pthread.h>
  #include <vector>
  #include <osg/Group>
  #include <osgViewer/Viewer>

  void* work(void*) {  // CPU-bound placeholder worker
    volatile double x = 0; for (long i = 0; i < 100000000L; ++i) x += i; return 0;
  }

  int main(int argc, char **argv) {
    std::vector<double> myvector(1024);
    osg::ref_ptr<osg::Node> node = new osg::Group;  // stand-in scene data
    osgViewer::Viewer viewer;
    viewer.setSceneData( node.get() );
    viewer.setThreadingModel(osgViewer::ViewerBase::SingleThreaded);
    viewer.realize(); // calls ViewerBase::setUpThreading()
                      //   -> OpenThreads::SetProcessorAffinityOfCurrentThread(0)
                      //   -> pthread_setaffinity_np(...)

    // Create threads *after* realize(); they inherit the narrowed affinity mask.
    pthread_t threads[100];
    for (int i = 0; i < 100; ++i) pthread_create(&threads[i], 0, work, 0);

    viewer.run();

    for (int i = 0; i < 100; ++i) pthread_join(threads[i], 0);
    return 0;
  }

Cheers,
  Fabian




> On 24 Sep 2016, at 10:33, Sebastian Messerschmidt 
>  wrote:
> 
> Hi,
> 
> Wow, before this escalates: OSG is setting the affinity for some of its own
> threads, which is totally legitimate. And I totally agree that it would be
> nice to have an interface to control the core / whether affinity is used in
> single-threaded mode, other than having to subclass the viewer.
> 
> If all other threads are forced onto one core (as reported) by setting the
> affinity of the OSG threads, it is clearly a bug and needs further
> inspection. I've been working with OSG in a multi-threading environment for
> several years, however, and haven't experienced problems so far.
> So creating a minimal example might help to find the problem, if there is one.
> 
> Cheers
> Sebastian 
>>> Affinity is set by default because it will provide the best
>>> performance for the majority of OSG applications. This might be a
>>> "terrible" reason for you, but OSG development is not motivated by
>>> focusing on just one class of users' needs or preferences; with the
>>> default settings we try to do what is best for most OSG applications.
>>> 
>> I have no particular desire to repeat the last discussion, but I'll say it
>> again.
>> 
>> Hardcoding CPU affinity is:
>> a) unexpected
>> b) a premature optimisation
>> c) not consistent across platforms
>> d) not easily reversible
>> e) a performance killer outside of one specific application model
>> f) in conflict with other libraries linked into the application that
>> expect to set CPU affinity
>> 
>> 
>> It is a terrible idea, and doing it in the context of a library is just
>> plain wrong.
>> 
>> PS. Reason f) doesn't really exist, because other libraries don't do this,
>> for reasons a, b, c, d and e.
>> 
>> --
>> Read this topic online here:
>> 
>> http://forum.openscenegraph.org/viewtopic.php?p=68716#68716


[osg-users] Lazy Disabling without VertexFuncsAvailable

2016-09-21 Thread Fabian Wiesel
Hi,

I have converted my fixed-function OSG program to a shader pipeline
(using setVertexAttribArray and shaders instead of setVertexPointer et al.)
and ran into some issues when "UseVertexAttributeAliasing" is disabled and
OSG is compiled with OPENGL_PROFILE=GLCORE
(so OSG_GL_VERTEX_FUNCS_AVAILABLE is unset). I tested the same code on Linux
with OSG compiled for the fixed-function pipeline, and there the code works
as expected.
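
The conversion itself is roughly of this shape (sketch only, not the actual
test program; the attribute name is a placeholder, and vertexShaderSource is
assumed to be a core-profile vertex shader reading attribute 0 as the
position):

  #include <osg/Geometry>
  #include <osg/Program>

  extern const char* vertexShaderSource;  // assumed, not shown here

  osg::ref_ptr<osg::Geometry> makePoints()
  {
      osg::ref_ptr<osg::Vec3Array> verts = new osg::Vec3Array;
      verts->push_back(osg::Vec3(0.0f, 0.0f, 0.0f));
      verts->setBinding(osg::Array::BIND_PER_VERTEX);

      osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
      geom->setUseDisplayList(false);
      geom->setUseVertexBufferObjects(true);
      // Generic attribute 0 instead of setVertexArray()/setVertexPointer().
      geom->setVertexAttribArray(0, verts.get());
      geom->addPrimitiveSet(
          new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, verts->size()));

      osg::ref_ptr<osg::Program> program = new osg::Program;
      program->addBindAttribLocation("vertexPosition", 0);
      program->addShader(new osg::Shader(osg::Shader::VERTEX, vertexShaderSource));
      geom->getOrCreateStateSet()->setAttributeAndModes(program.get());
      return geom;  // (fragment shader omitted for brevity)
  }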

A simple test program is here: https://github.com/fwiesel/vertexarrayfunctest

An API trace of the failing case shows the following interesting part:
...
glEnableVertexAttribArray(0)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL) # NULL is okay, the 
Array has been bound before
glDisableVertexAttribArray(0)
glDisableVertexAttribArray(1)
glDisableVertexAttribArray(2)
glDisableVertexAttribArray(11)
glDisableVertexAttribArray(12)
glDrawArrays(GL_POINTS, 0, 500)
...

No prior calls to enable arrays 1, 2, 11, or 12 are issued.
The glDisableVertexAttribArray calls come from
osg::State::applyDisablingOfVertexAttributes()
https://github.com/openscenegraph/OpenSceneGraph/blob/master/src/osg/State.cpp#L1296-L1304
because _useVertexAttributeAliasing is false and each "._lazy_disable" flag is
true. The "._enabled" state is never checked, as "disablePointer" is
unconditionally mapped to the aliased "disableVertexAttribArray".

I think the bug lies in the lazy disabling's assumption that if the aliasing
is not used, there must be a fixed-function pipeline.
But when OSG_GL_VERTEX_FUNCS_AVAILABLE is unset, the functions are
unconditionally mapped to aliased vertex attributes anyway.

I think the whole lazy disabling of aliased attributes is superfluous in that
context, as each vertex attribute already tracks its own state, and I have
proposed a patch accordingly:
https://github.com/openscenegraph/OpenSceneGraph/pull/125
With the patch applied, the code runs as expected.
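
To illustrate what I mean by each attribute tracking its own state (paraphrased
sketch, not a verbatim copy of State.cpp; the member names are approximations):

  // The per-attribute disable path already checks its own enabled flag before
  // touching GL state, so an extra lazy-disable pass over attributes that were
  // never enabled adds nothing.
  void State::disableVertexAttribPointer(unsigned int index)
  {
      EnabledArrayPair& eap = _vertexAttribArrayList[index];
      if (eap._enabled)
      {
          eap._enabled = false;
          _glExtensions->glDisableVertexAttribArray(index);
      }
  }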

Does anyone have a different explanation or a better proposal for solving the
issue? I am not sure mine is the acceptable solution.

Cheers,
  Fabian


