Should be done on trunk with r21826 - would you please give it a try
and let me know if that meets requirements? If so, I'll move it to
1.3.4.
Thanks
Ralph
On Aug 17, 2009, at 6:42 AM, Greg Watson wrote:
Hi Ralph,
Yes, you'd just need to issue the start tag prior to any other XML
output, then the end tag when it's guaranteed all other XML output has
been sent.
Hi Chris
The devel trunk has all of this in it - you can get that tarball from
the OMPI web site (take the nightly snapshot).
I plan to work on cpuset support beginning Tues morning.
Ralph
On Aug 17, 2009, at 7:18 PM, Chris Samuel wrote:
- "Eugene Loh" wrote:
Hi Eugene,
[...]
It
- "Eugene Loh" wrote:
Hi Eugene,
[...]
> It would be even better to have binding selections adapt to other
> bindings on the system.
Indeed!
This touches on the earlier thread about making OMPI aware
of its cpuset/cgroup allocation on the node (for those sites
that are using it); it might [...]
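To make the cpuset/cgroup idea concrete: on Linux, a process can ask the kernel which CPUs its cpuset actually leaves it, and a launcher could then bind only within that set. A minimal sketch of the query (plain Linux API; the mechanism only, not Open MPI's implementation):

/* Sketch: discover the CPUs our cpuset/cgroup allows, via Linux's
 * sched_getaffinity(2). Not Open MPI code -- just the mechanism. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t allowed;
    CPU_ZERO(&allowed);

    /* pid 0 = the calling process; the kernel reports the mask already
     * clipped to any enclosing cpuset. */
    if (sched_getaffinity(0, sizeof(allowed), &allowed) != 0) {
        perror("sched_getaffinity");
        return 1;
    }

    printf("allowed cpus:");
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &allowed))
            printf(" %d", cpu);
    printf("\n");
    return 0;
}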
On Aug 17, 2009, at 5:59 PM, Chris Samuel wrote:
- "Jeff Squyres" wrote:
An important point to raise here: the 1.3 series is *not* the super
stable series. It is the *feature* series. Specifically: it is not
out of scope to introduce or change features within the 1.3 series.
Ah, I t
- "Jeff Squyres" wrote:
> An important point to raise here: the 1.3 series is *not* the super
> stable series. It is the *feature* series. Specifically: it is not
> out of scope to introduce or change features within the 1.3 series.
Ah, I think I've misunderstood the website then. :-(
On Aug 17 2009, Paul H. Hargrove wrote:
+ I wonder if one can do any "introspection" with the dynamic linker to
detect hybrid OpenMP (no "I") apps and avoid pinning them by default
(examining OMP_NUM_THREADS in the environment is no good, since that
variable may have a site default value [...])
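Paul's introspection idea could in principle be prototyped with the dynamic linker: probe the running image for an OpenMP runtime symbol instead of trusting OMP_NUM_THREADS. A speculative sketch -- the symbol choice and the policy are assumptions, not anything Open MPI does (link with -ldl on older glibc):

/* Speculative sketch of runtime OpenMP detection via the dynamic
 * linker. omp_get_max_threads is exported by common OpenMP runtimes
 * (libgomp, libiomp5); RTLD_DEFAULT searches everything the process
 * has mapped. The "skip pinning" policy is hypothetical. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

static int app_looks_like_openmp(void)
{
    return dlsym(RTLD_DEFAULT, "omp_get_max_threads") != NULL;
}

int main(void)
{
    printf("OpenMP runtime %s\n",
           app_looks_like_openmp() ? "found: skip pinning by default"
                                   : "not found: pinning looks safe");
    return 0;
}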
Jeff,
Jeff Squyres wrote:
[...] ignored it whenever presenting competitive data. The 1,000,000th time I
saw this, I gave up arguing that our competitors were not being fair and
simply changed our defaults to always leave memory pinned for
OpenFabrics-based networks.
Instead, you should have told [...]
Some more thoughts in this thread that I've not seen expressed yet
(perhaps I missed them):
+ Some argue that this change in the middle of a stable series may, to
some users, appear to be a performance regression when they update.
However, I would argue that if the alternative is to delay this [...]
Some very good points in this thread all round.
On Mon, 2009-08-17 at 09:00 -0400, Jeff Squyres wrote:
>
> This is probably not too surprising (i.e., allowing the OS to move
> jobs around between cores on a socket can probably involve a little
> cache thrashing, resulting in that 5-10% loss)
On Aug 17, 2009, at 3:23 PM, N.M. Maclaren wrote:
> Yes, BUT... We had a similar option to this for a long, long time.
Sorry, perhaps I should have spelled out what I meant by "mandatory".
The system would not build (or run, depending on where it was set)
without such a value being specified.
On Aug 17 2009, Jeff Squyres wrote:
Yes, BUT... We had a similar option to this for a long, long time.
Sorry, perhaps I should have spelled out what I meant by "mandatory".
The system would not build (or run, depending on where it was set)
without such a value being specified. There would [...]
On Aug 17, 2009, at 12:11 PM, N.M. Maclaren wrote:
1) To have a mandatory configuration option setting the default, which
would have a name like 'performance' for the binding option. YOU could
then beat up anyone who benchmarks without it for being biased. This
is a better solution [...]
On Aug 17 2009, Ralph Castain wrote:
At issue for us is that other MPIs -do- bind by default, thus creating an
apparent performance advantage for themselves compared to us on standard
benchmarks run "out-of-the-box". We repeatedly get beaten up in papers and
elsewhere over our performance, when [...]
I don't disagree with your statements. However, I was addressing the
specific question of two OpenMPI programs conflicting on process placement,
not the overall question you are raising.
The issue of when/if to bind has been debated for a long time. I agree that
having more options (bind-to-socket [...]
In some of the experiments I've run and studied on exclusive binding to
specific cores, the performance metrics (which have yielded both excellent
gains as well as phases of reduced performance) have depended upon the
nature of the experiment being run (a task partitioning problem) and how the [...]
On Aug 17 2009, Ralph Castain wrote:
The problem is that the two mpiruns don't know about each other, and
therefore the second mpirun doesn't know that another mpirun has
already used socket 0.
We hope to change that at some point in the future.
It won't help. The problem is less likely [...]
Jeff Squyres wrote:
On Aug 16, 2009, at 11:02 PM, Ralph Castain wrote:
UNLESS you have a threaded application, in which case -any- binding
can be highly detrimental to performance.
I'm not quite sure I understand this statement. Binding is not
inherently contrary to multi-threaded applications [...]
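One way to read Jeff's point: binding need not mean one core per process. Binding a process to a whole socket's mask still lets its threads float within the socket. A sketch, assuming (hypothetically) that cores 0-3 make up socket 0:

/* Sketch: bind a process (and all threads it spawns) to a socket-wide
 * mask rather than a single core. Core numbering is hypothetical. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t socket_mask;
    CPU_ZERO(&socket_mask);

    /* Assume cores 0-3 form socket 0 on this (made-up) machine. */
    for (int cpu = 0; cpu < 4; cpu++)
        CPU_SET(cpu, &socket_mask);

    /* Threads may migrate among cores 0-3 but never off the socket. */
    if (sched_setaffinity(0, sizeof(socket_mask), &socket_mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    puts("bound to socket 0 (cores 0-3); threads float within it");
    return 0;
}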
On Aug 17 2009, Jeff Squyres wrote:
On Aug 16, 2009, at 11:02 PM, Ralph Castain wrote:
I think the problem here, Eugene, is that performance benchmarks are
far from the typical application. We have repeatedly seen this -
optimizing for benchmarks frequently makes applications run less
efficiently. [...]
On Aug 16, 2009, at 8:56 PM, George Bosilca wrote:
I tend to agree with Chris. Changing the behavior of the 1.3 series in
the middle of the stable release cycle will be very confusing for our
users.
An important point to raise here: the 1.3 series is *not* the super
stable series. It is the *feature* series. Specifically: it is not
out of scope to introduce or change features within the 1.3 series.
On Aug 16, 2009, at 11:02 PM, Ralph Castain wrote:
I think the problem here, Eugene, is that performance benchmarks are
far from the typical application. We have repeatedly seen this -
optimizing for benchmarks frequently makes applications run less
efficiently. So I concur with Chris on this [...]
On Aug 16, 2009, at 6:39 PM, Graham, Richard L. wrote:
A question about library dependencies in the ompi build system. I
am creating a new ompi component that uses routines out of
ompi/common/a and ompi/common/b. How do I get routines from
ompi/common/a to pick up the symbols in ompi/common/b?
Hi Ralph,
Yes, you'd just need to issue the start tag prior to any other XML
output, then the end tag when it's guaranteed all other XML output has
been sent.
Greg
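A toy illustration of the ordering Greg describes -- one start tag before any output, the matching end tag only after everything has drained. The tag names are invented for the example, not mpirun's actual XML schema:

/* Toy sketch of wrapping all job output in a single tag pair.
 * Tag names are made up; the point is only the ordering. */
#include <stdio.h>

int main(void)
{
    printf("<mpirun>\n");  /* start tag first, before any other XML output */

    /* ... everything the job emits flows here, itself well-formed ... */
    printf("  <stdout rank=\"0\">hello</stdout>\n");

    printf("</mpirun>\n"); /* end tag once all other output is guaranteed sent */
    return 0;
}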
On Aug 17, 2009, at 7:44 AM, Ralph Castain wrote:
All things are possible - some just a tad more painful than others.
It looks like you want the mpirun tags to flow around all output
during the run - i.e., there is only one pair of mpirun tags that
surround anything that might come out of the job. True?
If so, that would be trivial.
The problem is that the two mpiruns don't know about each other, and
therefore the second mpirun doesn't know that another mpirun has
already used socket 0.
We hope to change that at some point in the future.
Ralph
On Aug 17, 2009, at 4:02 AM, Lenny Verkhovsky wrote:
In the multi-job environment, can't we just start binding processes on the
first available and unused socket?
I mean the first job/user will start binding itself from socket 0,
the next job/user will start binding itself from socket 2, for instance.
Lenny.
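A back-of-the-envelope sketch of Lenny's scheme, with made-up socket/core counts and a hypothetical per-job index that some node-level arbiter would have to hand out -- exactly the mpirun-to-mpirun coordination Ralph says is missing today:

/* Sketch of round-robin starting sockets for concurrent jobs.
 * All names and numbers are hypothetical, nothing like Open MPI's
 * real mapper. */
#include <stdio.h>

#define NUM_SOCKETS       4
#define CORES_PER_SOCKET  4

/* job_index would come from a node-level arbiter that every mpirun
 * consults -- the coordination that does not exist yet. */
static int first_core_for_job(int job_index, int procs_per_job)
{
    int sockets_per_job = (procs_per_job + CORES_PER_SOCKET - 1)
                          / CORES_PER_SOCKET;
    int start_socket = (job_index * sockets_per_job) % NUM_SOCKETS;
    return start_socket * CORES_PER_SOCKET;
}

int main(void)
{
    /* Two 8-process jobs on a 4-socket, 16-core node: job 0 starts on
     * socket 0, job 1 on socket 2 -- matching Lenny's example. */
    for (int job = 0; job < 2; job++)
        printf("job %d binds starting at core %d\n",
               job, first_core_for_job(job, 8));
    return 0;
}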