On 26 Jul 2011, at 19:59, Jack Bryan wrote:
> Any help is appreciated.
Your best option is to distill this down to a short example program which shows
what's happening versus what you think should be happening.
Ashley.
--
Ashley Pittman, Bath, UK.
Padb - A parallel job inspection tool for cluster computing
http://padb.pittman.org.uk
ry the internal
data structures it uses were corrupt.
> In valgrind,
>
> there are some invalid reads and writes but no errors about this
> free(): invalid next size.
You need to fix the invalid write errors; the above error is almost certainly a
symptom of those.
Ashley.
you go
public, I have many thousands of hours of EC2 time to my name and have spent
much of it configuring and testing MPI libraries within them to allow me to test
my debugger which sits on top of them.
Ashley.
n the same region.
As you correctly notice, not all of your hosts are on the same network, which
means that they won't all be able to contact each other over the network;
without this Open MPI is not going to be able to work.
Ashley.
rmanently from
the next boot; obviously you should check with your network administrator
before doing this.
Ashley.
my own record. Why would you say I shouldn't be doing so?
>
> Regards,
>
> Tena
>
>
> On 2/13/11 1:29 PM, "Ashley Pittman" <ash...@pittman.co.uk> wrote:
>
>> On 12 Feb 2011, at 14:06, Ralph Castain wrote:
>>
>>> Have you searched the
people.
Ashley.
Ps, I would recommend reading up on sudo and su, "sudo su" is not a command you
should be typing.
e been using the
--prefix option to mpirun or configuring OpenMPI with the
--enable-mpirun-prefix-by-default option.
See:
http://www.open-mpi.org/faq/?category=running#run-prereqs
http://www.open-mpi.org/faq/?category=running#mpirun-prefix
cast and the other collective operations are just that, "collective", and
have to be called from all ranks in a communicator with the same parameters and
in the same order.
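As an illustrative sketch (generic MPI in C, with made-up function and variable names; it assumes an MPI installation and is not taken from the original thread), every rank must reach the same collectives with matching parameters and in the same order:

```c
#include <mpi.h>

/* Every rank in comm executes this identical sequence.  If rank 0 called
 * MPI_Bcast while rank 1 went straight to MPI_Allreduce, the collectives
 * would mismatch and the job would deadlock or corrupt data. */
void exchange(MPI_Comm comm)
{
    int value = 0, sum = 0;
    /* All ranks call MPI_Bcast with the same root, count and datatype... */
    MPI_Bcast(&value, 1, MPI_INT, 0, comm);
    /* ...and then all ranks call MPI_Allreduce, in the same order. */
    MPI_Allreduce(&value, &sum, 1, MPI_INT, MPI_SUM, comm);
}
```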
Ashley.
can't see any difference between the two.
Ashley.
On 10 Dec 2010, at 18:25, Ralph Castain wrote:
>
> So if you wanted to get your own local rank, you would call:
>
> my_local_rank = orte_ess.proc_get_local_rank(ORTE_PROC_MY_NAME);
you can expect it to do.
http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded
Ashley.
ormation, it won't allow you to point-and-click through the
> > > > source or single step through the code but it is lightweight and will
> > > > show you the information which you need to know.
> > > >
> > > > Padb needs to integrate with the resou
but only if you work with me and provide details of the integration. In
particular, I've sent you a version which has a small patch and some debug
printfs added; if you could send me the output from this I'd be able to tell
you whether it is likely to work and how to go about making it do so.
Ashley.
Or - find the node where the PBS script is being executed, check that the
ompi-ps command is returning the jobid and then run
padb -Ormgr=orte -Q
Ashley,
"padb -axt" for the stack traces and send the output to this
list.
The web-site is in my signature or there is a new beta release out this week at
http://padb.googlecode.com/files/padb-3.2-beta1.tar.gz
Ashley.
ms of
software; if this isn't the case for your systems then the easiest way might be
to compile Open MPI from source (on the older of the two machines would be
best) and to install it to a common directory on both machines.
Ashley.
ranks would be started simultaneously; you'll find this easier than having one
single-rank job spawn more ranks as required.
Ashley,
s under any programming paradigm.
However if you mean "execution threads" or in MPI parlance "ranks" then yes,
under OpenMPI each "rank" will be a separate process on one of the nodes in the
host list; as Jody says, look at MPI_Comm_spawn for this.
Ashley,
m not familiar enough with OMPI to be able to tell you, I'm
sure somebody can though. If my suspicion above is correct, I doubt that
knowing what this value is would help you at all in terms of application
performance.
Ashley.
implemented and played around with in the past; however, it's
not yet available to users today, but I believe it will be shortly, and as
you'll have read, my belief is that it's going to be a very useful addition to
the MPI offering.
Ashley,
d case it doesn't cause any process to block,
ever, so the cost is only that of the CPU cycles the code takes itself; in the
bad case, where it has to delay a rank, this tends to have a positive impact
on performance.
> Would it be application/communicator pattern dependent?
Absolutely.
Ashley
+25 and immediately starting
another one, again waiting for it 25 steps later.
Ashley.
could help; in
theory it should have the effect of being able to keep processes in sync without
any additional overhead in the case that they are already well synchronised.
Ashley,
had a single rank receive all
messages and keep them in a queue and then use MPI_Ssend() to forward messages
to your "consumer" ranks. Substitute ranks for threads in the above text as
you feel is appropriate.
Ashley,
rte integration (pdsh runs out of file
descriptors eventually) but is more generic and might get you to somewhere that
works. If your job spans more than 32 nodes you may need to set the FANOUT
variable for pdsh to work.
Ashley,
connecting)
> Unexpected EOF from Inner stderr (connecting)
> Unexpected exit from parallel command (state=connecting)
> Bad exit code from parallel command (exit_code=131)
and is open-source
Ashley (padb developer)
fully fledged release, so you
should try this to see if it makes a difference to your problem. The website
for padb (containing links to its own mailing lists) is in my signature.
Ashley (the padb developer)
m gets picked by the library.
Ashley.
ow in on problems quickly.
Also, uniquely to MPI, it's possible to see the "message queues" for ranks
within an MPI application, which can help with programming.
Ashley.
an rpm-tmp
> file is executed, but that file has disappeared so I don't really know what
> it does. I thought it might be apparent in the spec file, but it's certainly
> not apparent to me! Any help or advice would be appreciated.
y seen it once or twice in the last six
months, and not on installations I've installed myself; I've never been able to
find out the underlying cause, or why some machines report this error and some
don't.
Ashley,
source/detail?r=355
Verify the type information if present:
http://code.google.com/p/padb/source/detail?r=386
> However,
> some users prefer the classic launch with -tv, and this seems to be failing
> with
> the latest builds I've done on Darwin.
I've seen this 'problem' on Linux as
On 11 Jan 2010, at 06:20, Jed Brown wrote:
> On Sun, 10 Jan 2010 19:29:18 +0000, Ashley Pittman <ash...@pittman.co.uk>
> wrote:
>> It'll show you parallel stack traces but won't let you single step for
>> example.
>
> Two lightweight options if you want step
support for parallel programs,
I've not used it myself, however, so can't comment on its features.
Ashley.
r configuration or starting of daemons is required. No effort is
made to prevent multiple jobs from starting on the same nodes and no
effort is made to maintain a "queue" of jobs waiting for nodes to become
free. Each job is independent, and runs where you tell it to
immediately.
Ash
e SVN version of padb for this; the "orte-job-step"
option tells it to attach to the first spawned job, and use orte-ps to see
the list of job steps.
Ashley,
*
Shouldn't this be MYARGS=$@ ? It'll change the way quoted args are
forwarded to the parallel job.
Ashley,
and see these queues. As far as I know there are three
tools which use this: TotalView, DDT, or my own tool, padb.
TotalView and DDT are both full-featured graphical debuggers and
commercial products; padb is an open-source text-based tool.
Ashley,
ther programs want to use the CPU the MPI
processes will not hog it but rather let the other processes use as much
CPU time as they want and just spin when the CPU would otherwise be
idle. This is something I use daily and greatly increases the
responsiveness of systems which are mixing idle MPI with
on't help either; process
binding is about squeezing the last 15% of performance out of a system
and making performance reproducible, and it has no bearing on correctness or
scalability. If you're not running on a dedicated machine, which with
Firefox running I guess you aren't, then there would be a good c
On Wed, 2009-12-02 at 13:11 -0500, Brock Palen wrote:
> On Dec 1, 2009, at 11:15 AM, Ashley Pittman wrote:
> > On Tue, 2009-12-01 at 10:46 -0500, Brock Palen wrote:
> >> The attached code, is an example where openmpi/1.3.2 will lock up, if
> >> ran on 48 cor
this information would be useful.
http://padb.pittman.org.uk/full-report.html
Ashley Pittman.
es for messages from the OOM killer?
Ashley,
angs in a parallel job take a look at the tool
linked to below (padb), it should be able to give you a parallel stack
trace and the message queues for the job.
http://padb.pittman.org.uk/full-report.html
Ashley,
double) so I wouldn't rule out this theory.
Also, you are mallocing at least 4GB per process and quite possibly a
large amount for buffering in the MPI library as well, it could be that
you are simply running out of memory.
Ashley,
any software and particularly a library IMHO.
https://svn.open-mpi.org/trac/ompi/ticket/1720
Ashley,
rarely a good
solution.
Ashley.
On Wed, 2009-10-07 at 18:42 +0300, Roman Cheplyaka wrote:
> As a slight modification, you can write a wrapper script
>
> #!/bin/sh
> my_exe < inputs.txt
>
> and pass it to mpirun.
each node and
that executable is then executed locally.
> Is the implication correct or is there some way around.
Typically some kind of a shared filesystem would be used, NFS for
example.
Ashley,
ou need
which should be fast in all cases.
Ashley,
ing to the manpage.
> posix_memalign is the one to use.
> > > https://svn.open-mpi.org/trac/ompi/changeset/21744
so suggest that, as you are seeing random hangs and crashes,
running your code under Valgrind might be advantageous.
Ashley Pittman.
On Sun, 2009-09-27 at 02:05 +0800, guosong wrote:
> Yes, I know there should be a bug. But I do not know where and why.
> The strange thing was som
ogram did start and has really hung then you can get more
in-depth information about it using padb which is linked to in my
signature.
Ashley,
vially however.
The problem being embarrassingly parallel is of no consequence beyond
the fact that if it were then you wouldn't need either MPI or MapReduce.
Ashley.
tation of classical computing and
one that people have learned to live with.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Ashley,
eck with the local admins for a definitive answer.
Ashley,
running
will tell you the hostname where every rank is running or if you want
more information (load, cpu usage etc) you can use padb, the link for
which is in my signature.
Ashley,
On Wed, 2009-07-08 at 15:43 -0400, Michael Di Domenico wrote:
> On Wed, Jul 8, 2009 at 3:33 PM, Ashley Pittman<ash...@pittman.co.uk> wrote:
> >> When i run tping i get:
> >> ELAN_EXCEOPTIOn @ --: 6 (Initialization error)
> >> elan_init: Can't get capabi
On Wed, 2009-07-08 at 15:09 -0400, Michael Di Domenico wrote:
> On Wed, Jul 8, 2009 at 12:33 PM, Ashley Pittman<ash...@pittman.co.uk> wrote:
> > Is the machine configured correctly to allow non OpenMPI QsNet programs
> > to run, for example tping?
> >
> > Which re
its a good place
> to start
Is the machine configured correctly to allow non OpenMPI QsNet programs
to run, for example tping?
Which resource manager are you running? I think Slurm compiled for RMS
is essential.
Ashley,
de faster if you run the patches.
Ashley,
t trying to outrun it by setting the
> > $PATH variable to point first at my local installation.
> >
> >
> > Catalin
> >
I don't have access to hardware either currently.
Ashley,
ression file as George said. As the
error is from MPI_Init() you can safely ignore it from an end-user
perspective.
Ashley.
ring MPI_Init() so it's possible for memory to be allocated on the
wrong quad, the discussion was about moving the binding to the orte
process as I recall?
From my testing of process affinity you tend to get much more consistent
results with it on and much more unpredictable results with it off; I'd
question whether it's working properly if you are seeing an 88-93% range in
the results.
Ashley Pittman.
On Tue, 2009-05-19 at 14:01 -0400, Noam Bernstein wrote:
I'm glad you got to the bottom of it.
> With one of them, apparently, CP2K will silently go on if
> the
> file is missing, but then lock up in an MPI call (maybe it leaves
> some
> variables uninitialized, and then uses them in the
On Tue, 2009-05-19 at 11:01 -0400, Noam Bernstein wrote:
> I'd suspect the filesystem too, except that it's hung up in an MPI
> call. As I said
> before, the whole thing is bizarre. It doesn't matter where the
> executable is,
> just what CWD is (i.e. I can do mpirun /scratch/exec or mpirun
I've always used the compile options to specify max message size and repetition
count; the -msglen option is not one I've seen before.
Ashley Pittman.
On Wed, 2009-04-22 at 12:40 +0530, vkm wrote:
> The same amount of memory required for recvbuf. So at the least each
> node should have 36GB of memory.
>
> Am I calculating right ? Please correct.
Your calculation looks correct, the conclusion is slightly wrong
however. The Application
"all done" messages depending on whether the message
> indicates a graph operation or signals "all done".
Exactly, that way you have a defined number of messages which can be
calculated locally for each process and hence there is no need to use
Probe and you can get rid of the MPI_Barrier call.
Ashley Pittman.
On 23 Mar 2009, at 23:36, Shaun Jackman wrote:
loop {
    MPI_Ibsend (for every edge of every leaf node)
    MPI_Barrier
    MPI_Iprobe/MPI_Recv (until no messages pending)
    MPI_Allreduce (number of nodes removed)
} until (no nodes removed by any node)
Previously, I attempted to use a single MPI_Allreduce
On 23 Mar 2009, at 21:11, Ralph Castain wrote:
Just one point to emphasize - Eugene said it, but many times people
don't fully grasp the implication.
On an MPI_Allreduce, the algorithm requires that all processes -enter-
the call before anyone can exit.
It does -not- require that they all
challenge is to benchmark MPI_Barrier, it's not as
easy as you might think...
Ashley Pittman.
On 11 Feb 2009, at 14:13, Prentice Bisbal wrote:
Douglas Guptill wrote:
Thanks. I did end up building for all the compilers under separate
trees. It looks like the --exec-prefix option is only of use if your
compiling 32-bit and 64-bit versions using the same compiler.
This is what I decided
e and AlltoAll also have an implicit barrier by
virtue of the dataflow required; all processes need input from all other
processes before they can return.
Ashley Pittman.
On Mon, 2009-01-19 at 12:50 +0530, gaurav gupta wrote:
> Hello,
>
> I want to know that which task is running on which node. Is there any
> way to know this.
From where? From the command line outside of a running job, the new
ompi-ps command in v1.3 will give you this information. In 1.2
On Wed, 2008-10-08 at 09:46 -0400, Jeff Squyres wrote:
> - Have you tried compiling Open MPI with something other than GCC?
> Just this week, we've gotten some reports from an OMPI member that
> they are sometimes seeing *huge* performance differences with OMPI
> compiled with GCC vs. any
On Sat, 2008-08-16 at 08:03 -0400, Jeff Squyres wrote:
> - large all to all operations are very stressful on the network, even
> if you have very low latency / high bandwidth networking such as DDR IB
>
> - if you only have 1 IB HCA in a machine with 8 cores, the problem
> becomes even more
One tip is to use the --log-file=valgrind.out.%q{OMPI_MCA_ns_nds_vpid}
option to valgrind, which will name the output
file according to rank. In the 1.3 series the variable has changed from
OMPI_MCA_ns_nds_vpid to OMPI_COMM_WORLD_RANK.
Ashley.
On Tue, 2008-08-05 at 17:51 +0200, George Bosilca
On Wed, 2008-07-30 at 10:45 -0700, Scott Beardsley wrote:
> I'm attempting to move to OpenMPI from another MPICH-derived
> implementation. I compiled openmpi 1.2.6 using the following configure:
>
> ./configure --build=x86_64-redhat-linux-gnu
> --host=x86_64-redhat-linux-gnu
On Sun, 2008-07-13 at 09:16 -0400, Jeff Squyres wrote:
> On Jul 13, 2008, at 9:11 AM, Tom Riddle wrote:
>
> > Does anyone know if this feature has been incorporated yet? I did a
> > ./configure --help but do not see the enable-ptmalloc2-internal
> > option.
> >
> > - The ptmalloc2 memory
libopen-pal library is preventing valgrind from intercepting these
functions in glibc and hence dramatically reducing the benefit which
valgrind brings.
Ashley Pittman.
is given local_rank=0
>
> If there are others that would be useful, now is definitely the time to
> speak up!
The only other one I'd like to see is some kind of global identifier for
the job, but as far as I can see I don't believe that Open MPI has such a
concept.
Ashley Pittman.
On Fri, 2008-07-11 at 07:59 -0600, Ralph H Castain wrote:
> Not until next week's meeting, but I would guess we would simply prepend the
> rank. The issue will be how often to tag the output since we write it in
> fragments to avoid blocking - so do we tag the fragment, look for newlines
> and tag
On Fri, 2008-07-11 at 07:20 -0600, Ralph H Castain wrote:
> This variable is only for internal use and has no applicability to a user.
> Basically, it is used by the local daemon to tell an application process its
> rank when launched.
>
> Note that it disappears in v1.3...so I wouldn't recommend
ibVEX rev 1658, a library for dynamic binary
> translation.
> ==17839== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP.
> ==17839== Using valgrind-3.2.1, a dynamic binary instrumentation
> framework.
> ==17839== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et
> al.
> ==17839== For more details, rerun with: -v
Ashley Pittman.
the shell which is launching the program.
Ashley Pittman.
sccomp@demo4-sles-10-1-fe:~/benchmarks/IMB_3.0/src> mpirun -H comp00,comp01
./IMB-MPI1
/opt/openmpi-1.2.6/intel/bin/orted: error while loading shared libraries:
libimf.so: cannot open shared object file: No such file or directory
/opt/
option to mpirun.
Or do you mean static linking of the tools? I could go for that if
there is a configure option for it.
Ashley Pittman.
On Mon, 2008-06-09 at 08:27 -0700, Doug Reeder wrote:
> Ashley,
>
> It could work but I think you would be better off to try and
> statically li
with icc?
Yours,
Ashley Pittman,
I notice on the download page all file sizes are listed as 0KB, this is
presumably an error somewhere.
http://www.open-mpi.org/software/ompi/v1.2/
Ashley,
On Tue, 2008-03-25 at 20:56 -0400, Jeff Squyres wrote:
> > Except that you should never do that. First off, RPMs should never
> > install in /opt by default.
>
> The community Open MPI projects distributes SRPMs which, when built,
> do not install into /opt by default -- you have to request
On Thu, 2008-03-20 at 10:27 -0700, Dave Grote wrote:
> After reading the previous discussion on AllReduce and AlltoAll, I
> thought I would ask my question. I have a case where I have data
> unevenly distributed among the processes (unevenly means that the
> processes have differing amounts of
On Wed, 2008-03-12 at 18:05 -0400, Aurélien Bouteiller wrote:
> If you can avoid them it is better to avoid them. However it is always
> better to use a MPI_Alltoall than coding your own all to all with
> point to point, and in some algorithms you *need* to make a all to all
>
efault to
> http://www.open-mpi.org/faq/?category=running#mpirun-prefix
>
> Tim
>
> Ashley Pittman wrote:
> > That looks like just what I need, thank you for the quick response.
> >
> > The closest I could find in the FAQ is this entry which has a broken
>
where mpirun is run from path?
http://www.open-mpi.org/community/lists/users/2006/01/0480.php
Yours, Ashley Pittman.
I can volunteer myself as a beta-tester if that's OK. If there is
anything specific you want help with either drop me a mail directly or
mail supp...@quadrics.com
We are not aware of any other current project of this nature.
Ashley,
On Mon, 2007-03-19 at 18:48 -0400, George Bosilca wrote:
>