Hi everyone,
I noticed a very minor issue with sattach: if you pass an option it
doesn't understand, it asks you to look at "sbatch --help", which is a
little confusing:
$ sattach -X
sattach: invalid option -- X
Try "sbatch --help" for more information
I didn't find the right place in the source
Did you mean to send this to the SLURM list?
:-)
On Aug 27, 2007, at 4:46 AM, Manuel Prinz wrote:
> Hi everyone,
> I noticed a very minor issue with sattach: If you pass an option it
> doesn't understand, it asks you to look at "sbatch --help" which is a
> little confusing:
> $ sattach -X
> sattach: in
On Monday, 27.08.2007 at 08:07 -0400, Jeff Squyres wrote:
> Did you mean to send this to the SLURM list?
> :-)
Yes, I did. Sorry! It's one of those days... :-/
Best regards
Manuel
Yo folks
Just checked out a fresh copy of the trunk and tried to build it using my
usual configure:
./configure --prefix=/Users/rhc/openmpi --with-devel-headers
--disable-shared --enable-static --disable-mpi-f77 --disable-mpi-f90
--enable-mem-debug --without-memory-manager --enable-debug
--disabl
Ralph,
Ralph H Castain wrote:
> Just returned from vacation... sorry for the delayed response
No problem. Hope you had a good vacation :) And sorry for my super
delayed response. I have been pondering this a bit.
In the past, I have expressed three concerns about the RSL.
My bottom line recommen
Yes, if you're using --disable-dlopen, then libltdlc should not be
linked in (because it [rightfully] won't exist).
I can reproduce the problem on my MBP.
Brian -- did something change here recently?
On Aug 27, 2007, at 9:23 AM, Ralph H Castain wrote:
Yo folks
Just checked out a fresh cop
Just wanted to let everyone know that the server upgrade went well.
It is currently up and running. Feel free to submit your MTT tests as
usual.
Cheers,
Josh
On Aug 24, 2007, at 1:45 PM, Jeff Squyres wrote:
FYI. The MTT database will be down for a few hours on Monday
morning. It'll be re
Hi,
Until now I haven't had to worry about the opal/orte thread model.
However, there are now people who would like to use ompi that has
been configured with --with-threads=posix and --enable-mpi-threads.
Can someone give me some pointers as to what I need to do in
order to make sure
We are running into a problem when running on one of our larger SMPs
using the latest Open MPI v1.2 branch. We are trying to run a job
with np=128 within a single node. We are seeing the following error:
"SM failed to send message due to shortage of shared memory."
We then increased the allowa
Rolf,
Would it be better to put this parameter in the system configuration
file rather than change the compile-time option?
Rich
On 8/27/07 3:10 PM, "Rolf vandeVaart" wrote:
> We are running into a problem when running on one of our larger SMPs
> using the latest Open MPI v1.2 branch. We
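For reference, MCA parameters can indeed be changed without
recompiling. A sketch of both styles, assuming the parameter involved
is the shared-memory mmap size (the exact name, e.g.
mpool_sm_max_size, and a sensible value may differ by version and
system, so check "ompi_info --param all all" first):

```
# Per job, on the command line:
mpirun --mca mpool_sm_max_size 536870912 -np 128 ./a.out

# Or system-wide, in $prefix/etc/openmpi-mca-params.conf:
mpool_sm_max_size = 536870912
```

The system configuration file applies to every job launched from that
installation, which is what Rich is suggesting here.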
Hello,
* Jeff Squyres wrote on Mon, Aug 27, 2007 at 04:07:22PM CEST:
> On Aug 27, 2007, at 9:23 AM, Ralph H Castain wrote:
> >
> > Making all in mca/timer/darwin
> > make[2]: Nothing to be done for `all'.
> > Making all in .
> > make[2]: *** No rule to make target `../opal/libltdl/libltdlc.la',
Ethan --
You said to me in IM:
"i'm getting stuck trying to use MTT::Functions::find. it's returning
EVERY file under the directory i give it."
Can you cite a specific example? Is this on the jms-new-parser branch?
Keep in mind that you need to supply a *perl* regexp (not a shell
regexp)
Whoops -- wrong list; meant to send this to mtt-devel... sorry
folks... nothing to see here...
On Aug 27, 2007, at 7:38 PM, Jeff Squyres wrote:
Ethan --
You said to me in IM:
"i'm getting stuck trying to use MTT::Functions::find. it's returning
EVERY file under the directory i give it."
Ca
On Aug 27, 2007, at 2:50 PM, Greg Watson wrote:
Until now I haven't had to worry about the opal/orte thread model.
However, there are now people who would like to use ompi that has
been configured with --with-threads=posix and --enable-mpi-threads.
Can someone give me some pointers as to w
On Aug 24, 2007, at 11:05 PM, Josh Aune wrote:
Hmm. If you compile Open MPI with no memory manager, then it
*shouldn't* be Open MPI's fault (unless there's a leak in the mvapi
BTL...?). Verify that you did not actually compile Open MPI with a
memory manager by running "ompi_info | grep ptmalloc