On 2015-12-30 17:16-0000 Phil Rosenberg wrote:

> Hi Alan
> Thanks for the test results. I will look to see if I can do a similar test on 
> windows, perhaps using Cygwin bash. Then I will see where we stand.
> Just to be sure are you building an optimised or debug build on Linux?

Hi Phil:

I would like to keep this discussion on list in case someone else here
can comment on the IPC question.

Good question above.  By historical accident (I have been debugging C and
Fortran issues a lot for the new Fortran binding), my compile flags were

software@raven> printenv |grep FL
CXXFLAGS=-O3 -fvisibility=hidden -Wuninitialized
CFLAGS=-g
FFLAGS=-g

So assuming -O3 gives roughly a factor of ~2 speed increase over -g, the
C++-based qtwidget and wxwidgets devices should have had a two-fold
CPU-speed advantage compared to the C-based xwin and xcairo devices.

But the important point is that if you look at the timings for all the
devices, the sum of the user and system timings is normally roughly
equal to the real time required to run the test.  That is not so for
wxwidgets, where the real time is an order of magnitude larger than
that sum; in other words, the bottleneck is not CPU cycles, so the -O3
flag above won't make much difference for the wxwidgets case.
Instead, the problem is that each example and/or its associated
wxPLViewer spends large amounts of time waiting while the CPU is
completely idle.  My hypothesis to explain those long waits is that
both processes wait too long for communications from each other
because of some issue with the way you have set up the IPC method
(perhaps only on Linux, but maybe on Windows also).

My IPC expertise is confined to light skimming of
<https://en.wikipedia.org/wiki/Inter-process_communication> and
<https://en.wikipedia.org/wiki/Shared_memory_(interprocess_communication)>.
I took a look at the latter since it appears you are using the POSIX
shared memory method of IPC on Linux.  So I am definitely no expert,
and the above articles only talk about using shared memory as a very
fast way to communicate data between processes.  So that part of the
efficiency considerations should be fine.  But there must be more to
it than that, since the processes must also communicate between
themselves (presumably with some sort of signal) about what data has
been written to that shared memory and how one process wants the
other to service the request implied by the signal that was sent.

That is, my mental model of the ideal IPC is this: the -dev wxwidgets
code launches a wxPLViewer application (WA), and the two processes set
up IPC between them.  Thereafter, whichever process has control goes
about its business until it needs the other to service a request.  At
that point it simply writes the data needed for that request to a
shared memory area and sends a specific signal to the other process;
the other process receives that signal, deals with the data, and
continues until it in turn needs a request serviced.  Thus, with this
ideal IPC model, the CPU is busy 100 per cent of the time running
real code (not a wait loop) in either the device or WA.  That is,
when either the device or WA is waiting while the other is busy, it
is waiting for an explicit signal from the other to proceed, and is
not sitting in some inefficient "sleep x amount of time, wake up, and
check for activity from the IPC partner" loop.

If you do have such an inefficient "sleep x amount of time" loop in
your code for the Windows case as well, I think you will find a
similar inefficiency for test_c_wxwidgets compared to test_c_wingcc.
So I will be most interested in your results for that comparison.

Alan
__________________________
Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state
implementation for stellar interiors (freeeos.sf.net); the Time
Ephemerides project (timeephem.sf.net); PLplot scientific plotting
software package (plplot.sf.net); the libLASi project
(unifont.org/lasi); the Loads of Linux Links project (loll.sf.net);
and the Linux Brochure Project (lbproject.sf.net).
__________________________

Linux-powered Science
__________________________

_______________________________________________
Plplot-devel mailing list
Plplot-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/plplot-devel