Rolf,
I think it is not a good idea to increase the default value to 2G. You
have to keep in mind that there are not many people who have a machine
with 128 or more cores on a single node. Most people will have nodes
with 2, 4, or maybe 8 cores, and therefore it is not necessary
to set thi
Maybe a clarification of the SM BTL implementation is needed. Does the
SM BTL not set a limit based on np, using the max allowable as a
ceiling? If not, and all jobs are allowed to use up to the max allowable,
I see the reason for not wanting to raise the max allowable.
That being said it seems to
There are 3 parameters that control how much memory is used by the SM BTL.
  MCA mpool: parameter "mpool_sm_max_size" (current value: "536870912")
             Maximum size of the sm mpool shared memory file
  MCA mpool: parameter "mpool_sm_min_size" (current value: "134217728")
             Minimum siz
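To make the interplay of these parameters concrete, here is a minimal sketch (not the Open MPI source) of how an sm mpool could size its shared memory file: scale with np by some per-peer amount, then clamp into [min, max]. The per-peer value below is an assumed illustrative number; only the min/max defaults appear in the excerpt above.

```python
# Sketch of sm mpool file sizing -- assumptions, not Open MPI's code.
# Defaults for min/max are taken from the ompi_info output above; the
# per-peer amount is a hypothetical illustrative value.

MPOOL_SM_MIN_SIZE = 134217728   # 128 MB, from the excerpt
MPOOL_SM_MAX_SIZE = 536870912   # 512 MB, from the excerpt
PER_PEER_SIZE = 33554432        # 32 MB -- assumed for illustration

def sm_file_size(np, per_peer=PER_PEER_SIZE,
                 min_size=MPOOL_SM_MIN_SIZE, max_size=MPOOL_SM_MAX_SIZE):
    """Scale the shared memory file with np, clamped to [min_size, max_size]."""
    return max(min_size, min(np * per_peer, max_size))

# Under these assumptions, a 4-process job sits at the 128 MB floor,
# while np=128 would want 4 GB but is capped at the 512 MB ceiling --
# consistent with the SM BTL running out of room at large np.
print(sm_file_size(4))     # -> 134217728
print(sm_file_size(128))   # -> 536870912
```

If the sizing really works this way, raising only the ceiling changes nothing for small jobs at the floor, which is the crux of the disagreement above.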
On Mon, 2007-08-27 at 15:10 -0400, Rolf vandeVaart wrote:
> We are running into a problem when running on one of our larger SMPs
> using the latest Open MPI v1.2 branch. We are trying to run a job
> with np=128 within a single node. We are seeing the following error:
>
> "SM failed to send messa
On Aug 28, 2007, at 9:05 AM, Li-Ta Lo wrote:
On Mon, 2007-08-27 at 15:10 -0400, Rolf vandeVaart wrote:
We are running into a problem when running on one of our larger SMPs
using the latest Open MPI v1.2 branch. We are trying to run a job
with np=128 within a single node. We are seeing the fol
On Aug 27, 2007, at 10:04 PM, Jeff Squyres wrote:
On Aug 27, 2007, at 2:50 PM, Greg Watson wrote:
Until now I haven't had to worry about the opal/orte thread model.
However, there are now people who would like to use ompi that has
been configured with --with-threads=posix and --with-enable-mp
On Tue, 2007-08-28 at 10:12 -0600, Brian Barrett wrote:
> On Aug 28, 2007, at 9:05 AM, Li-Ta Lo wrote:
>
> > On Mon, 2007-08-27 at 15:10 -0400, Rolf vandeVaart wrote:
> >> We are running into a problem when running on one of our larger SMPs
> >> using the latest Open MPI v1.2 branch. We are tryin
On 8/27/07 7:30 AM, "Tim Prins" wrote:
> Ralph,
>
> Ralph H Castain wrote:
>> Just returned from vacation...sorry for delayed response
> No problem. Hope you had a good vacation :) And sorry for my super
> delayed response. I have been pondering this a bit.
>
>> In the past, I have expressed
Attached is a patch for the PHP side of things that does the following:
* Creates a config.inc file for centralization of various user-
settable parameters:
* HTTP username/password for curl (passwords still protected; see
code)
* MTT database name/username/password
* HTML header /
@#$%@#$%
Sorry; I keep sending to devel instead of mtt-devel.
On Aug 28, 2007, at 2:48 PM, Jeff Squyres wrote:
Attached is a patch for the PHP side of things that does the
following:
* Creates a config.inc file for centralization of various user-
settable parameters:
* HTTP username/pa
I'm having a problem with the UD BTL and hoping someone might have some
input to help solve it.
What I'm seeing is hangs when running alltoall benchmarks with nbcbench
or an LLNL program called mpiBench -- both hang exactly the same way.
With the code on the trunk running nbcbench on IU's odin
The first step will be to figure out which version of the alltoall
you're using. I suppose you are using the default parameters, and then the
decision function in the tuned component says it is using the linear
alltoall. As the name states, this means that every node will
post one receive from
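The resource pressure of a linear alltoall can be sketched in a few lines: each rank posts a receive from every other rank up front, so np-1 receives are outstanding at once per rank. This is an assumed schedule for illustration, not the tuned component's source.

```python
# Sketch (assumption, not Open MPI's tuned component): in a linear
# alltoall, each rank posts one receive from every other rank before
# sending, so np-1 receives are outstanding simultaneously.

def linear_alltoall_schedule(rank, np):
    """Return (recv_peers, send_peers) for one rank in a linear alltoall."""
    recv_peers = [p for p in range(np) if p != rank]
    send_peers = [p for p in range(np) if p != rank]
    return recv_peers, send_peers

recvs, sends = linear_alltoall_schedule(0, 128)
print(len(recvs), len(sends))   # -> 127 127
```

At np=128 that is 127 outstanding receives per rank, the kind of load that can exhaust a BTL's receive-side resources and produce hangs like the ones reported with nbcbench and mpiBench.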