[OMPI users] error during execution

2009-09-12 Thread Luis Vitorio Cargnini

Hi,
Could someone please help me with this error:
[node11][0,1,7][/SourceCache/openmpi/openmpi-5/openmpi/ompi/mca/btl/ 
tcp/btl_tcp_frag.c:202:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv:  
readv failed with errno=54
[node28][0,1,22][/SourceCache/openmpi/openmpi-5/openmpi/ompi/mca/btl/ 
tcp/btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect]  
connect() failed with errno=61


Any idea what this could be, how to solve it, and how to avoid it?
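For reference, a minimal sketch (not part of the original message) that just prints what those errno values mean on the local system. On Mac OS X / BSD errno numbering, which the /SourceCache/openmpi paths above suggest, 54 is ECONNRESET ("Connection reset by peer") and 61 is ECONNREFUSED ("Connection refused"), i.e. a TCP peer closed or refused the connection:

#include <cstdio>
#include <cstring>

int main()
{
    /* print the local text for the two errno values from the log above */
    std::printf("errno 54: %s\n", std::strerror(54));
    std::printf("errno 61: %s\n", std::strerror(61));
    return 0;
}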






Re: [OMPI users] ompi_info segmentation fault with Snow Leopard

2009-09-01 Thread Luis Vitorio Cargnini

Did you try using the Apple compiler too ?

On 09-09-01 at 19:31, Marcus Herrmann wrote:


Hi,
I am trying to install openmpi 1.3.3 under OSX 10.6 (Snow Leopard)  
using the 11.1.058 intel compilers. Configure and build seem to work  
fine. However, trying to run ompi_info after installation directly causes a segmentation fault, without any additional information being printed.

Did anyone have success in using 1.3.3 under Snow Leopard?

Thanks
Marcus






Re: [OMPI users] Xgrid and choosing agents...

2009-07-12 Thread Luis Vitorio Cargnini

Did you see that, maybe, just maybe, using:
xserve01.local slots=8 max-slots=8
xserve02.local slots=8 max-slots=8
xserve03.local slots=8 max-slots=8
xserve04.local slots=8 max-slots=8

you can set the number of processes specifically for each node? The "slots" entry does exactly this, configuring the number of slots for each node. Try it with the old Xgrid configuration and also test it with your new Xgrid configuration.


Regards.
Vitorio.


On 09-07-11 at 18:11, Klymak Jody wrote:

If anyone else is using xgrid, there is a mechanism to limit the  
processes per machine:


sudo defaults write /Library/Preferences/com.apple.xgrid.agent  
MaximumTaskCount 8


on each of the nodes and then restarting xgrid tells the controller  
to only send 8 processes to that node.  For now that is a fine solution for my needs.  I'll try and figure out how to specify hosts
via xgrid and get back to the list...


Thanks for everyone's help,

Cheers, Jody

On 11-Jul-09, at 12:42 PM, Ralph Castain wrote:

Looking at the code, you are correct in that the Xgrid launcher is  
ignoring hostfiles. I'll have to look at it to determine how to  
correct that situation - I didn't write that code, nor do I have a  
way to test any changes I might make to it.


For now, though, if you add --bynode to your command line, you  
should get the layout you want. I'm not sure you'll get the rank  
layout you'll want, though...or if that is important to what you  
are doing.
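As a hedged illustration only (the path and executable are taken from Jody's messages quoted below, and are not re-verified here), that command line might look like:

/usr/local/openmpi/bin/mpirun -n 16 --hostfile hostfile --bynode ../build/mitgcmuv

where --bynode asks mpirun to place ranks round-robin across the nodes instead of filling one node before moving to the next.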


Ralph

On Jul 11, 2009, at 1:18 PM, Klymak Jody wrote:


Hi Vitorio,

Thanks for getting back to me!  My hostfile is

xserve01.local max-slots=8
xserve02.local max-slots=8
xserve03.local max-slots=8
xserve04.local max-slots=8

I've now checked, and this seems to work fine just using ssh.   
i.e. if I turn off the Xgrid queue manager I can submit jobs  
manually to the appropriate nodes using --hosts.


However, I'd really like to use Xgrid as my queue manager as it is  
already set up (though I'll happily take hints on how to set up  
other queue managers on an OS X cluster).


So you have 4 nodes, each one with 2 processors, each processor quad-core.

So you have capacity for 32 processes in parallel.


The new Xeon chips expose two hardware threads per core, though at a reduced clock rate.  This means that Xgrid believes I have 16 processors/node.  For large jobs I expect that to be useful, but for my more modest jobs I really only want 8 processes/node.


It appears that the default way xgrid assigns jobs is to fill all 16 slots on one node before moving to the next.  OpenMPI doesn't appear to look at the hostfile configuration when using Xgrid, which makes it hard for me to prevent this behaviour.


Thanks,  Jody



I think that using only the hostfile is enough; that is how I use it. If you want to specify a specific host or a different sequence, mpirun will obey the host sequence in your hostfile to start the processes. Also, can you post how you configured your hostfile? I'm asking this because you should have something like:

# This is an example hostfile. Comments begin with
# #
# The following node is a single processor machine:
foo.example.com
# The following node is a dual-processor machine:
bar.example.com slots=2
# The following node is a quad-processor machine, and we absolutely
# want to disallow over-subscribing it:
yow.example.com slots=4 max-slots=4
so in your case like mine you should have something like:
your.hostname.domain slots=8 max-slots=8 # for each node

I hope this will help you.
Regards.
Vitorio.


On 09-07-11 at 10:56, Klymak Jody wrote:


Hi all,

Sorry in advance if these are naive questions - I'm not  
experienced in running a grid...


I'm using openMPI on 4 dual quad-core Xeon Xserves.  The 8 cores mimic 16 cores and show up in xgrid as each agent having 16 processors.  However, the processing speed goes down as the number of used processors exceeds 8, so if possible I'd prefer to not have more than 8 processors working on each machine at a time.


Unfortunately, if I submit a 16-processor job to xgrid it all goes to "xserve03".  Or even worse, it does so if I submit two separate 8-processor jobs.  Is there any way to steer jobs to less-busy agents?


I tried making a hostfile and then specifying the host, but I get:

/usr/local/openmpi/bin/mpirun -n 8 --hostfile hostfile --host  
xserve01.local ../build/mitgcmuv


Some of the requested hosts are not included in the current  
allocation for the

application:
../build/mitgcmuv
The requested hosts were:
xserve01.local

so I assume --host doesn't work with xgrid?

Is a reasonable alternative simply to not use xgrid and rely on ssh?


Thanks,  Jody

--
Jody Klymak
http://web.uvic.ca/~jklymak





Re: [OMPI users] Xgrid and choosing agents...

2009-07-11 Thread Luis Vitorio Cargnini

Hi,

So you have 4 nodes, each one with 2 processors, each processor quad-core.

So you have capacity for 32 processes in parallel.
I think that using only the hostfile is enough; that is how I use it. If you want to specify a specific host or a different sequence, mpirun will obey the host sequence in your hostfile to start the processes. Also, can you post how you configured your hostfile? I'm asking this because you should have something like:

# This is an example hostfile. Comments begin with
# #
# The following node is a single processor machine:
foo.example.com
 # The following node is a dual-processor machine:
bar.example.com slots=2
# The following node is a quad-processor machine, and we absolutely
# want to disallow over-subscribing it:
 yow.example.com slots=4 max-slots=4
so in your case like mine you should have something like:
your.hostname.domain slots=8 max-slots=8 # for each node

I hope this will help you.
Regards.
Vitorio.


On 09-07-11 at 10:56, Klymak Jody wrote:


Hi all,

Sorry in advance if these are naive questions - I'm not experienced  
in running a grid...


I'm using openMPI on 4 dual quad-core Xeon Xserves.  The 8 cores mimic 16 cores and show up in xgrid as each agent having 16 processors.  However, the processing speed goes down as the number of used processors exceeds 8, so if possible I'd prefer to not have more than 8 processors working on each machine at a time.


Unfortunately, if I submit a 16-processor job to xgrid it all goes to "xserve03".  Or even worse, it does so if I submit two separate 8-processor jobs.  Is there any way to steer jobs to less-busy agents?


I tried making a hostfile and then specifying the host, but I get:

/usr/local/openmpi/bin/mpirun -n 8 --hostfile hostfile --host  
xserve01.local ../build/mitgcmuv


Some of the requested hosts are not included in the current  
allocation for the

application:
 ../build/mitgcmuv
The requested hosts were:
 xserve01.local

so I assume --host doesn't work with xgrid?

Is a reasonable alternative simply to not use xgrid and rely on ssh?

Thanks,  Jody

--
Jody Klymak
http://web.uvic.ca/~jklymak







Re: [OMPI users] MPI and C++ (Boost)

2009-07-07 Thread Luis Vitorio Cargnini
OK, after all the considerations, I'll try Boost today, make some experiments, and see whether I can use it or whether I'll still avoid it.


But as Raymond said, I think, the problem is being dependent on a rich, incredible, amazing toolset that still implements only MPI-1 and does not implement all the MPI functions (the main drawbacks of Boost), although the set of functions it does implement does not compromise the functionality. I don't know how the MPI-1, MPI-2, and future MPI-3 specifications, and their implementations, will affect Boost and the developer using Boost, with OpenMPI of course.


Continuing: if something changes in Boost, how can I guarantee it won't affect my code in the future? It is impossible.


Anyway, I'll test with it and without it today and choose my direction. Thanks for all the replies, suggestions, and solutions that you all pointed me to; I really appreciate all your help and comments about whether or not to use Boost in my code.


Thanks and Regards.
Vitorio.


On 09-07-07 at 08:26, Jeff Squyres wrote:


I think you face a common trade-off:

- use a well-established, debugged, abstraction-rich library
- write all of that stuff yourself

FWIW, I think the first one is a no-brainer.  There's a reason they  
wrote Boost.MPI: it's complex, difficult stuff, and is perfect as  
middleware for others to use.


If having users perform a 2nd step is undesirable (i.e., install  
Boost before installing your software), how about embedding Boost in  
your software?  Your configure/build process can certainly be  
tailored to include Boost[.MPI].  Hence, users will only perform 1  
step, but it actually performs "2" steps under the covers (configures 
+installs Boost.MPI and then configures+installs your software,  
which uses Boost).


FWIW: Open MPI does exactly this.  Open MPI embeds at least 5  
software packages: PLPA, VampirTrace, ROMIO, libltdl, and libevent.   
But 99.9% of our users don't know/care because it's all hidden in  
our configure / make process.  If you watch carefully, you can see  
the output go by from each of those configure sections when running  
OMPI's configure.  But no one does.  ;-)


Sidenote: I would echo that the Forum is not considering including  
Boost.MPI at all.  Indeed, as mentioned in different threads, the  
Forum has already voted once to deprecate the MPI C++ bindings,  
partly *because* of Boost.  Boost.MPI has shown that the C++  
community is better at making C++ APIs for MPI than the Forum is.   
Hence, our role should be to make the base building blocks and let  
the language experts make their own preferred tools.





On Jul 7, 2009, at 5:03 AM, Matthieu Brucher wrote:

> IF boost is attached to MPI 3 (or whatever), AND it becomes part  
of the
> mainstream MPI implementations, THEN you can have the discussion  
again.


Hi,

At the moment, I think that Boost.MPI only supports MPI1.1, and even
then, some additional work may be done, at least regarding the  
complex

datatypes.

Matthieu
--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92

LinkedIn: http://www.linkedin.com/in/matthieubrucher




--
Jeff Squyres
Cisco Systems







Re: [OMPI users] MPI and C++ (Boost)

2009-07-06 Thread Luis Vitorio Cargnini

Hi Raymond, thanks for your answer
On 09-07-06 at 21:16, Raymond Wan wrote:


I've used Boost MPI before and it really isn't that bad and  
shouldn't be seen as "just another library".  Many parts of Boost  
are on their way to being part of the standard and are discussed and  
debated on.  And so, it isn't the same as going to some random  
person's web page and downloading their library/template. Of course,  
it takes time to make it into the standard and I'm not entirely sure  
if everything will (probably not).


(One "annoying" thing about Boost MPI is that you have to compile  
it...if you are distributing your code, end-users might find that  
bothersome...oh, and serialization as well.)




We have a common factor: I'm not exactly distributing, but I'll be adding a dependency to my code, something that bothers me.


One suggestion might be to make use of Boost and, once you get your code working, start changing it back.  At least you will have a working program to compare against.  Kind of like writing a prototype first...




Your suggestion is a great and interesting idea. My only fear is getting used to Boost and then not being able to get rid of it anymore, because one thing is sure: the abstraction added by Boost is impressive. It makes things like implementing MPI in C++ much less painful, and the serialization that Boost::MPI already provides in order to use MPI is astonishingly attractive, as is of course the possibility of adding new types, such as classes, so that objects can be sent through Boost's MPI send. This is certainly attractive, but again, I do not want to become dependent on a library; as I said, this is my major concern.
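For reference, a minimal sketch (not from this thread) of what that serialization support looks like; the Point class and its members are hypothetical examples, and it assumes Boost.MPI and Boost.Serialization are installed:

#include <boost/mpi.hpp>
#include <boost/serialization/access.hpp>

namespace mpi = boost::mpi;

class Point {
    friend class boost::serialization::access;
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/)
    {
        ar & x & y;                      // describe the members once
    }
public:
    double x, y;
};

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    if (world.rank() == 0) {
        Point p;  p.x = 1.0;  p.y = 2.0;
        world.send(1, 0, p);             // whole objects go through send()
    } else if (world.rank() == 1) {
        Point p;
        world.recv(0, 0, p);
    }
    return 0;
}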

.

Ray




Re: [OMPI users] MPI and C++ - now Send and Receive of Classes and STL containers

2009-07-06 Thread Luis Vitorio Cargnini

Thanks, but I really do not want to use Boost.
Is it easier? It certainly is, but I want to do it using only MPI itself and not be dependent on a library, or on templates like the majority of Boost, a huge set of templates and wrappers for different libraries implemented in C, supplying a wrapper for C++.
I admit Boost is a valuable tool, but in my case, the more independent I can be from additional libraries, the better.


On 09-07-06 at 04:49, Number Cruncher wrote:


I strongly suggest you take a look at boost::mpi, 
http://www.boost.org/doc/libs/1_39_0/doc/html/mpi.html

It handles serialization transparently and has some great natural  
extensions to the MPI C interface for C++, e.g.


bool global = all_reduce(comm, local, logical_and<bool>());

This sets "global" to "local_0 && local_1 && ... && local_N-1"


Luis Vitorio Cargnini wrote:
Thank you very much, John; the explanation of &v[0] was the kind of thing that I was looking for. Thank you very much.

This kind of approach solves my problems.
On 09-07-05 at 22:20, John Phillips wrote:

Luis Vitorio Cargnini wrote:

Hi,
So, after some explanation I started to use the C bindings inside my C++ code; then comes my new doubt:
How do I send an object through MPI's Send and Recv? Because the types are CHAR, int, double, long double, and so on.

Does anyone have any suggestion?
Thanks.
Vitorio.


Vitorio,

If you are sending collections of built in data types (ints,  
doubles, that sort of thing), then it may be easy, and it isn't  
awful. You want the data in a single stretch of continuous memory.  
If you are using an STL vector, this is already true. If you are  
using some other container, then no guarantees are provided for  
whether the memory is continuous.


Imagine you are using a vector, and you know the number of entries  
in that vector. You want to send that vector to processor 2 on the  
world communicator with tag 0. Then, the code snippet would be;


std::vector<double> v;

... code that fills v with something ...

int send_error;

send_error = MPI_Send(&v[0], v.size(), MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);


The &v[0] part provides a pointer to the first member of the array  
that holds the data for the vector. If you know how long it will  
be, you could use that constant instead of using the v.size()  
function. Knowing the length also simplifies the send, since the  
remote process also knows the length and doesn't need a separate  
send to provide that information.
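A matching receive on rank 2 might look like this (a hedged sketch, not from the original message; "known_length" and the source rank 0 are hypothetical values both sides agree on in advance):

std::vector<double> v(known_length);
MPI_Status status;
int recv_error = MPI_Recv(&v[0], v.size(), MPI_DOUBLE, 0, 0,
                          MPI_COMM_WORLD, &status);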


It is also possible to provide a pointer to the start of storage  
for the character array that makes up a string. Both of these  
legacy friendly interfaces are part of the standard, and should be  
available on any reasonable implementation of the STL.


If you are using a container that is not held in continuous  
memory, and the data is all of a single built in data type, then  
you need to first serialize the data into a block of continuous  
memory before sending it. (If the data block is large, then you  
may actually have to divide it into pieces and send them  
separately.)


If the data is not a block of all a single built in type, (It may  
include several built in types, or it may be a custom data class  
with complex internal structure, for example.) then the  
serialization problem gets harder. In this case, look at the MPI  
provided facilities for dealing with complex data types and  
compare to the boost provided facilities. There is an initial  
learning curve for the boost facilities, but in the long run it  
may provide a substantial development time savings if you need to  
transmit and receive several complex types. In most cases, the run  
time cost is small for using the boost facilities. (according to  
the tests run during library development and documented with the  
library)
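For reference, a minimal sketch (not from the original message) of the MPI facilities mentioned here, describing a hypothetical struct with an int and a double as an MPI derived datatype:

struct Particle { int id; double mass; };

Particle     p;
MPI_Datatype particle_type;
int          blocklengths[2] = { 1, 1 };
MPI_Datatype types[2]        = { MPI_INT, MPI_DOUBLE };
MPI_Aint     displacements[2], base;

MPI_Get_address(&p,      &base);
MPI_Get_address(&p.id,   &displacements[0]);
MPI_Get_address(&p.mass, &displacements[1]);
displacements[0] -= base;
displacements[1] -= base;

MPI_Type_create_struct(2, blocklengths, displacements, types, &particle_type);
MPI_Type_commit(&particle_type);

/* a Particle can now be sent with MPI_Send(&p, 1, particle_type, dest, tag, comm) */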


   John Phillips







Re: [OMPI users] MPI and C++ - now Send and Receive of Classes and STL containers

2009-07-06 Thread Luis Vitorio Cargnini

Just one additional question: if I have

vector< vector<double> > x;

how do I use MPI_Send?

MPI_Send(&x[0][0], x[0].size(), MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);

?

On 09-07-05 at 22:20, John Phillips wrote:


Luis Vitorio Cargnini wrote:

Hi,
So, after some explanation I started to use the C bindings inside my C++ code; then comes my new doubt:
How do I send an object through MPI's Send and Recv? Because the types are CHAR, int, double, long double, and so on.

Does anyone have any suggestion?
Thanks.
Vitorio.


 Vitorio,

 If you are sending collections of built in data types (ints,  
doubles, that sort of thing), then it may be easy, and it isn't  
awful. You want the data in a single stretch of continuous memory.  
If you are using an STL vector, this is already true. If you are  
using some other container, then no guarantees are provided for  
whether the memory is continuous.


 Imagine you are using a vector, and you know the number of entries  
in that vector. You want to send that vector to processor 2 on the  
world communicator with tag 0. Then, the code snippet would be;


std::vector<double> v;

... code that fills v with something ...

int send_error;

send_error = MPI_Send(&v[0], v.size(), MPI_DOUBLE, 2, 0,
  MPI_COMM_WORLD);

 The &v[0] part provides a pointer to the first member of the array  
that holds the data for the vector. If you know how long it will be,  
you could use that constant instead of using the v.size() function.  
Knowing the length also simplifies the send, since the remote  
process also knows the length and doesn't need a separate send to  
provide that information.


 It is also possible to provide a pointer to the start of storage  
for the character array that makes up a string. Both of these legacy  
friendly interfaces are part of the standard, and should be  
available on any reasonable implementation of the STL.


 If you are using a container that is not held in continuous memory,  
and the data is all of a single built in data type, then you need to  
first serialize the data into a block of continuous memory before  
sending it. (If the data block is large, then you may actually have  
to divide it into pieces and send them separately.)


 If the data is not a block of all a single built in type, (It may  
include several built in types, or it may be a custom data class  
with complex internal structure, for example.) then the  
serialization problem gets harder. In this case, look at the MPI  
provided facilities for dealing with complex data types and compare  
to the boost provided facilities. There is an initial learning curve  
for the boost facilities, but in the long run it may provide a  
substantial development time savings if you need to transmit and  
receive several complex types. In most cases, the run time cost is  
small for using the boost facilities. (according to the tests run  
during library development and documented with the library)


John Phillips
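Following that reasoning, one hedged sketch (not from the thread) for the vector< vector<double> > question above: the rows of such a container are separate allocations, so either send each row on its own or first flatten everything into one contiguous buffer, e.g.

std::vector< std::vector<double> > x;        /* filled elsewhere */
std::vector<double> flat;
for (std::size_t r = 0; r < x.size(); ++r)
    flat.insert(flat.end(), x[r].begin(), x[r].end());

MPI_Send(&flat[0], flat.size(), MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);

The receiver then needs to know the row sizes (through an agreed layout or a separate message) to rebuild the nested structure.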







[OMPI users] MPI and C++ - now Send and Receive of Classes and STL containers

2009-07-05 Thread Luis Vitorio Cargnini

Hi,

So, after some explanation I started to use the C bindings inside my C++ code; then comes my new doubt:
How do I send an object through MPI's Send and Recv? Because the types are CHAR, int, double, long double, and so on.

Does anyone have any suggestion?

Thanks.
Vitorio.



Re: [OMPI users] MPI and C++

2009-07-04 Thread Luis Vitorio Cargnini

Thanks Jeff.

On 09-07-04 at 08:24, Jeff Squyres wrote:

There is a proposal that has passed one vote so far to deprecate the  
C++ bindings in MPI-2.2 (meaning: still have them, but advise  
against using them).  This opens the door for potentially removing  
the C++ bindings in MPI-3.0.


As has been mentioned on this thread already, the official MPI C++  
bindings are fairly simplistic -- they take advantage of a few  
language features, but not a lot.  They are effectively a 1-to-1  
mapping to the C bindings.  The Boost.MPI library added quite a few  
nice C++-friendly abstractions on top of MPI.  But if Boost is  
unattractive for you, then your best bet is probably just to use the  
C bindings.
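For illustration, a minimal sketch (not from this message) of the kind of OO wrapping asked about earlier in the thread, done over the plain C bindings; the MpiSession class name is hypothetical:

#include <mpi.h>

class MpiSession {
public:
    MpiSession(int& argc, char**& argv) { MPI_Init(&argc, &argv); }
    ~MpiSession()                       { MPI_Finalize(); }

    int rank() const { int r; MPI_Comm_rank(MPI_COMM_WORLD, &r); return r; }
    int size() const { int s; MPI_Comm_size(MPI_COMM_WORLD, &s); return s; }
};

int main(int argc, char* argv[])
{
    MpiSession mpi(argc, argv);   // MPI_Init in the constructor,
                                  // MPI_Finalize in the destructor
    // ... application code using mpi.rank() and mpi.size() ...
    return 0;
}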






Re: [OMPI users] MPI and C++

2009-07-04 Thread Luis Vitorio Cargnini
Thanks for your answers; I'll use normal C-style MPI then. I checked Boost, but it seems it only supplies me with a shared communication interface among the nodes, making it a little difficult to parallelize the processes themselves; also, Boost obliges me to have an MPI installation too. Boost works like a giant wrapper bringing many non-OO things to C++, and it seems that to use Boost I have to install a lot of additional things.


Thanks.
Regards.
Vitorio.


On 09-07-03 at 19:44, Dorian Krause wrote:


I'm sorry. I meant boost.mpi ...



Luis Vitorio Cargnini wrote:

Hi,
Please, I'm writing a C++ application that will use MPI. My problem is that I want to use the C++ bindings, and then come my doubts. In all the examples that I found, people use them almost like C, except for adding the MPI:: namespace before the procedure calls.
For example, I want to apply MPI to OO code, for instance calling MPI::Init() inside my object's constructor...


Please, if someone could advise me on this, thanks.








[OMPI users] MPI and C++

2009-07-03 Thread Luis Vitorio Cargnini

Hi,
Please, I'm writing a C++ application that will use MPI. My problem is that I want to use the C++ bindings, and then come my doubts. In all the examples that I found, people use them almost like C, except for adding the MPI:: namespace before the procedure calls.
For example, I want to apply MPI to OO code, for instance calling MPI::Init() inside my object's constructor...


Please, if someone could advise me on this, thanks.




Re: [OMPI users] Please help me with this simple setup. i am stuck

2009-05-09 Thread Luis Vitorio Cargnini

Maybe add slots=1, for example, to your first node.
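For illustration (using only the IPs already given in Venu's message below), the hostfile on 10.0.3.1 could then list both machines, for example:

10.0.3.1 slots=1
10.0.3.3 slots=2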

On 09-05-09 at 11:42, Venu Gopal wrote:


I am Venu.

I have tried to set up a simple 2-node Open MPI system

on two machines: one is running Debian Lenny (IP 10.0.3.1),
the other is running Ubuntu Hardy (IP 10.0.3.3).

I am getting an error when I try to execute a file using mpiexec; I am sure the password is correct, as ssh is working, and the file pi3 is in the directory "code", which in turn is in my home directory "venu".


the file pi.c is below



/* To run this program:                                              */
/*--------------------------------------------------------------------*/
/*                                                                    */
/*    Issue:   time  mpirun  -np  [nprocs]  ./pi    (SGI, Beowulf)    */
/*                                                                    */
/* ------------------------------------------------------------------ */


#include <stdio.h>
#include <math.h>

#include "mpi.h"

int main(int argc, char *argv[])
{
  int    i, n;
  double h, pi, x;

  int    me, nprocs;
  double piece;

/* --- */

  MPI_Init (&argc, &argv);

  MPI_Comm_size (MPI_COMM_WORLD, &nprocs);
  MPI_Comm_rank (MPI_COMM_WORLD, &me);

/* --- */

  if (me == 0)
  {
     printf("%s", "Input number of intervals:\n");
     scanf ("%d", &n);
  }

/* --- */

  MPI_Bcast (&n, 1, MPI_INT,
             0, MPI_COMM_WORLD);

/* --- */

  h = 1. / (double) n;

  piece = 0.;

  for (i=me+1; i <= n; i+=nprocs)
  {
     x = (i-1)*h;

     piece = piece + ( 4/(1+(x)*(x)) + 4/(1+(x+h)*(x+h)) ) / 2 * h;
  }

  printf("%d: pi = %25.15f\n", me, piece);

/* --- */

  MPI_Reduce (&piece, &pi, 1, MPI_DOUBLE,
              MPI_SUM, 0, MPI_COMM_WORLD);

/* --- */

  if (me == 0)
  {
     printf("pi = %25.15f\n", pi);
  }

/* --- */

  MPI_Finalize();

  return 0;
}



The code directory is NFS-shared and mounted on the client system, which is 10.0.3.3.

The server system is 10.0.3.1.

I can ping the client from the server and also the server from the client. ssh is working both ways.


The /etc/openmpi/openmpi-default-hostfile on the first node (i.e. 10.0.3.1) has the line:


10.0.3.3 slots=2


The other node's file is just empty; I mean, only comments are there.


This is the error I get when I execute:


venu@mainframe:~$ mpiexec -np 3 ./code/pi3
venu@10.0.3.3's password:
--
Could not execute the executable "./code/pi3": Exec format error

This could mean that your PATH or executable name is wrong, or that  
you do not
have the necessary permissions.  Please ensure that the executable  
is able to be

found and executed.

--
--
Could not execute the executable "./code/pi3": Exec format error

This could mean that your PATH or executable name is wrong, or that  
you do not
have the necessary permissions.  Please ensure that the executable  
is able to be

found and executed.

--
--
Could not execute the executable "./code/pi3": Exec format error

This could mean that your PATH or executable name is wrong, or that  
you do not
have the necessary permissions.  Please ensure that the executable  
is able to be

found and executed.

--



Now, when I remove that line from /etc/openmpi/openmpi-default-hostfile on the first node,


the program compiles and executes on the first node.

Same when I compile it and execute it on the second node: it works.

The only problem is when I try to run it on both;

I get the error message as above.


Someone, please help me, as I am trying to set up this system for the first time,


and I am stuck.

I am fairly good with Linux, so I know my way around Linux, but I am stuck with Open MPI.

--

Regards,

Venu Gopal




Re: [OMPI users] How do I compile OpenMPI in Xcode 3.1

2009-05-06 Thread Luis Vitorio Cargnini
This problem is occurring because the Fortran compiler wasn't built with debug symbols:
warning: Could not find object file "/Users/admin/build/i386-apple- 
darwin9.0.0/libgcc/_udiv_w_sdiv_s.o" - no debug information available  
for "../../../gcc-4.3-20071026/libgcc/../gcc/libgcc2.c".


It is the same problem for anyone using LLVM in Xcode: there are no debug symbols to create a debug build. Try creating a release build and see if it compiles at all, and try the gfortran from MacPorts; it works smoothly.



On 09-05-05 at 17:33, Jeff Squyres wrote:


I agree; that is a bummer.  :-(

Warner -- do you have any advice here, perchance?


On May 4, 2009, at 7:26 PM, Vicente Puig wrote:


But it doesn't work well.

For example, I am trying to debug a program, "floyd" in this case,  
and when I make a breakpoint:


No line 26 in file "../../../gcc-4.2-20060805/libgfortran/fmain.c".

I am getting disappointed and frustrated that I cannot work well with openmpi on my Mac. There should be a way to make it run in Xcode, uff...


2009/5/4 Jeff Squyres 
I get those as well.  I believe that they are (annoying but) harmless -- an artifact of how the freeware gcc/gfortran that I use was built.




On May 4, 2009, at 1:47 PM, Vicente Puig wrote:

Maybe I had to open a new thread, but if you have any idea why I  
receive it when I use gdb for debugging an openmpi program:


warning: Could not find object file "/Users/admin/build/i386-apple- 
darwin9.0.0/libgcc/_umoddi3_s.o" - no debug information available  
for "../../../gcc-4.3-20071026/libgcc/../gcc/libgcc2.c".



warning: Could not find object file "/Users/admin/build/i386-apple- 
darwin9.0.0/libgcc/_udiv_w_sdiv_s.o" - no debug information  
available for "../../../gcc-4.3-20071026/libgcc/../gcc/libgcc2.c".



warning: Could not find object file "/Users/admin/build/i386-apple- 
darwin9.0.0/libgcc/_udivmoddi4_s.o" - no debug information  
available for "../../../gcc-4.3-20071026/libgcc/../gcc/libgcc2.c".



warning: Could not find object file "/Users/admin/build/i386-apple- 
darwin9.0.0/libgcc/unwind-dw2_s.o" - no debug information available  
for "../../../gcc-4.3-20071026/libgcc/../gcc/unwind-dw2.c".



warning: Could not find object file "/Users/admin/build/i386-apple- 
darwin9.0.0/libgcc/unwind-dw2-fde-darwin_s.o" - no debug  
information available for "../../../gcc-4.3-20071026/libgcc/../gcc/ 
unwind-dw2-fde-darwin.c".



warning: Could not find object file "/Users/admin/build/i386-apple- 
darwin9.0.0/libgcc/unwind-c_s.o" - no debug information available  
for "../../../gcc-4.3-20071026/libgcc/../gcc/unwind-c.c".

...



There is no 'admin' so I don't know why it happen. It works well  
with a C program.


Any idea??.

Thanks.


Vincent





2009/5/4 Vicente Puig 
I can run openmpi perfectly from the command line, but I wanted a graphical interface for debugging because I was having problems.


Thanks anyway.

Vincent

2009/5/4 Warner Yuen 

Admittedly, I don't use Xcode to build Open MPI either.

You can just compile Open MPI from the command line and install  
everything in /usr/local/. Make sure that gfortran is set in your  
path and you should just be able to do a './configure --prefix=/usr/local'


After the installation, just make sure that your path is set  
correctly when you go to use the newly installed Open MPI. If you  
don't set your path, it will always default to using the version of  
OpenMPI that ships with Leopard.



Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On May 4, 2009, at 9:13 AM, users-requ...@open-mpi.org wrote:

Send users mailing list submissions to
 us...@open-mpi.org

To subscribe or unsubscribe via the World Wide Web, visit
 http://www.open-mpi.org/mailman/listinfo.cgi/users
or, via email, send a message with subject or body 'help' to
 users-requ...@open-mpi.org

You can reach the person managing the list at
 users-ow...@open-mpi.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of users digest..."


Today's Topics:

1. Re: How do I compile OpenMPI in Xcode 3.1 (Vicente Puig)


--

Message: 1
Date: Mon, 4 May 2009 18:13:45 +0200
From: Vicente Puig 
Subject: Re: [OMPI users] How do I compile OpenMPI in Xcode 3.1
To: Open MPI Users 
Message-ID:
 <3e9a21680905040913u3f36d3c9rdcd3413bfdcd...@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

If I cannot make it work with Xcode, which one could I use? Which one do you use to compile and debug OpenMPI?
Thanks

Vincent


2009/5/4 Jeff Squyres 

Open MPI comes pre-installed in Leopard; as Warner noted, since  
Leopard
doesn't ship with a Fortran compiler, the Open MPI that Apple ships  
has

non-functional mpif77 and mpif90 wrapper compilers.

So the 

Re: [OMPI users] How do I compile OpenMPI in Xcode 3.1

2009-05-06 Thread Luis Vitorio Cargnini

$0.02 of contribution: try MacPorts

On 09-05-04 at 11:42, Jeff Squyres wrote:

FWIW, I don't use Xcode, but I use the precompiled gcc/gfortran from  
here with good success:


   http://hpc.sourceforge.net/



On May 4, 2009, at 11:38 AM, Warner Yuen wrote:


Have you installed a Fortran compiler? Mac OS X's developer tools do
not come with a Fortran compiler, so you'll need to install one if  
you

haven't already done so. I routinely use the Intel IFORT compilers
with success. However, I hear many good things about the gfortran
compilers on Mac OS X, you can't beat the price of gfortran!


Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
Tel: 408.718.2859




On May 4, 2009, at 7:28 AM, users-requ...@open-mpi.org wrote:

> Send users mailing list submissions to
>   us...@open-mpi.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>   http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
>   users-requ...@open-mpi.org
>
> You can reach the person managing the list at
>   users-ow...@open-mpi.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
>   1. How do I compile OpenMPI in Xcode 3.1 (Vicente)
>   2. Re: 1.3.1 -rf rankfile behaviour ?? (Ralph Castain)
>
>
>  
--

>
> Message: 1
> Date: Mon, 4 May 2009 16:12:44 +0200
> From: Vicente 
> Subject: [OMPI users] How do I compile OpenMPI in Xcode 3.1
> To: us...@open-mpi.org
> Message-ID: <1c2c0085-940f-43bb-910f-975871ae2...@gmail.com>
> Content-Type: text/plain; charset="windows-1252"; Format="flowed";
>   DelSp="yes"
>
> Hi, I've seen the FAQ "How do I use Open MPI wrapper compilers in
> Xcode", but it's only for MPICC. I am using MPIF90, so I did the  
same,
> but changing MPICC for MPIF90, and also the path, but it did not  
work.

>
> Building target "fortran" of project "fortran" with configuration
> "Debug"
>
>
> Checking Dependencies
> Invalid value 'MPIF90' for GCC_VERSION
>
>
> The file "MPIF90.cpcompspec" looks like this:
>
>   1 /**
>   2 Xcode Coompiler Specification for MPIF90
>   3
>   4 */
>   5
>   6 {   Type = Compiler;
>   7 Identifier = com.apple.compilers.mpif90;
>   8 BasedOn = com.apple.compilers.gcc.4_0;
>   9 Name = "MPIF90";
>  10 Version = "Default";
>  11 Description = "MPI GNU C/C++ Compiler 4.0";
>  12 ExecPath = "/usr/local/bin/mpif90";  // This gets
> converted to the g++ variant automatically
>  13 PrecompStyle = pch;
>  14 }
>
> and is located in "/Developer/Library/Xcode/Plug-ins"
>
> and when I do mpif90 -v on terminal it works well:
>
> Using built-in specs.
> Target: i386-apple-darwin8.10.1
> Configured with: /tmp/gfortran-20090321/ibin/../gcc/configure --
> prefix=/usr/local/gfortran --enable-languages=c,fortran --with- 
gmp=/

> tmp/gfortran-20090321/gfortran_libs --enable-bootstrap
> Thread model: posix
> gcc version 4.4.0 20090321 (experimental) [trunk revision 144983]
> (GCC)
>
>
> Any idea??
>
> Thanks.
>
> Vincent
> -- next part --
> HTML attachment scrubbed and removed
>
> --
>
> Message: 2
> Date: Mon, 4 May 2009 08:28:26 -0600
> From: Ralph Castain 
> Subject: Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??
> To: Open MPI Users 
> Message-ID:
>   <71d2d8cc0905040728h2002f4d7s4c49219eee29e...@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Unfortunately, I didn't write any of that code - I was just  
fixing the
> mapper so it would properly map the procs. From what I can tell,  
the

> proper
> things are happening there.
>
> I'll have to dig into the code that specifically deals with parsing
> the
> results to bind the processes. Afraid that will take awhile  
longer -

> pretty
> dark in that hole.
>
>
> On Mon, May 4, 2009 at 8:04 AM, Geoffroy Pignot
>  wrote:
>
>> Hi,
>>
>> So, there are no more crashes with my "crazy" mpirun command.  
But the
>> paffinity feature seems to be broken. Indeed I am not able to  
pin my

>> processes.
>>
>> Simple test with a program using your plpa library :
>>
>> r011n006% cat hostf
>> r011n006 slots=4
>>
>> r011n006% cat rankf
>> rank 0=r011n006 slot=0   > bind to CPU 0 , exact ?
>>
>> r011n006% /tmp/HALMPI/openmpi-1.4a/bin/mpirun --hostfile hostf --
>> rankfile
>> rankf --wdir /tmp -n 1 a.out
> PLPA Number of processors online: 4
> PLPA Number of processor sockets: 2
> PLPA Socket 0 (ID 0): 2 cores
> PLPA Socket 1 (ID 3): 2 cores
>>
>> Ctrl+Z
>> r011n006%bg
>>
>> r011n006% ps axo stat,user,psr,pid,pcpu,comm | grep gpignot
>> R+   gpignot3  9271 97.8 a.out
>>
>> In fact whatever the slot number I put in my rankfile , a.out
>> always runs
>> on the CPU 3. I was looking 

Re: [OMPI users] few Problems

2009-04-23 Thread Luis Vitorio Cargnini
] top: openmpi-sessions-lvcargnini@cluster-srv2_0
[cluster-srv2:23282] tmp: /tmp
[cluster-srv2:23281] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv2_0/44411/1/20
[cluster-srv2:23281] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv2_0/44411/1

[cluster-srv2:23281] top: openmpi-sessions-lvcargnini@cluster-srv2_0
[cluster-srv2:23281] tmp: /tmp
[cluster-srv2:23284] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv2_0/44411/1/23
[cluster-srv2:23284] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv2_0/44411/1

[cluster-srv2:23284] top: openmpi-sessions-lvcargnini@cluster-srv2_0
[cluster-srv2:23284] tmp: /tmp
[cluster-srv2:23283] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv2_0/44411/1/22
[cluster-srv2:23283] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv2_0/44411/1

[cluster-srv2:23283] top: openmpi-sessions-lvcargnini@cluster-srv2_0
[cluster-srv2:23283] tmp: /tmp
[cluster-srv3:20276] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/26
[cluster-srv3:20276] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20276] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20276] tmp: /tmp
[cluster-srv3:20274] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/24
[cluster-srv3:20274] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20274] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20274] tmp: /tmp
[cluster-srv3:20277] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/27
[cluster-srv3:20277] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20277] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20277] tmp: /tmp
[cluster-srv3:20280] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/30
[cluster-srv3:20280] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20280] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20280] tmp: /tmp
[cluster-srv3:20275] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/25
[cluster-srv3:20275] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20275] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20275] tmp: /tmp
[cluster-srv3:20279] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/29
[cluster-srv3:20279] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20279] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20279] tmp: /tmp
[cluster-srv3:20278] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/28
[cluster-srv3:20278] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20278] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20278] tmp: /tmp
[cluster-srv3:20281] procdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1/31
[cluster-srv3:20281] jobdir: /tmp/openmpi-sessions-lvcargnini@cluster- 
srv3_0/44411/1

[cluster-srv3:20281] top: openmpi-sessions-lvcargnini@cluster-srv3_0
[cluster-srv3:20281] tmp: /tmp
[cluster-srv4:09059] [[44411,1],32] node[0].name cluster-srv0 daemon 0  
arch ffc91200
[cluster-srv4:09059] [[44411,1],32] node[1].name cluster-srv1 daemon 1  
arch ffc91200
[cluster-srv4:09059] [[44411,1],32] node[2].name cluster-srv2 daemon 2  
arch ffc91200
[cluster-srv4:09059] [[44411,1],32] node[3].name cluster-srv3 daemon 3  
arch ffc91200
[cluster-srv4:09059] [[44411,1],32] node[4].name cluster-srv4 daemon 4  
arch ffc91200



On 09-04-22 at 17:06, Gus Correa wrote:


Hi Luis, list

To complement Jeff's recommendation,
see if this recipe to setup passwordless ssh connections helps.
If you use RSA keys instead of DSA, replace all "dsa" by "rsa":

http://www.sshkeychain.org/mirrors/SSH-with-Keys-HOWTO/SSH-with-Keys-HOWTO-4.html#ss4.3

I hope this helps.

Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-


Jeff Squyres wrote:

It looks like you need to fix your password-less ssh problems first:
> Permission denied, please try again.
>lvcargnini@cluster-srv2's password:
As I mentioned earlier, you need to be able to be able to run
   ssh cluster-srv2 uptime
without being prompted for a password before Open MPI will work  
properly.
If you're still having problems after fixing this, please send all  
the information from the "help" URL I sent earlier.

Thanks!
On Apr 22, 2009, at 3:24 PM, Luis Vitorio Cargnini wrote:

OK, this is the debug information, running on 5 nodes (trying at least); the process is locked until now:

Each node is composed of two quad-core microprocessors.

(It doesn't finish.) One node still asked me for the password. I have the home
partition mounted (the same) in a

Re: [OMPI users] few Problems

2009-04-22 Thread Luis Vitorio Cargnini
Thank you all. I'll try to fix this ASAP; after that I'll make a new test round and then answer back. Thanks to you all so far.


On 09-04-22 at 17:06, Gus Correa wrote:


Hi Luis, list

To complement Jeff's recommendation,
see if this recipe to setup passwordless ssh connections helps.
If you use RSA keys instead of DSA, replace all "dsa" by "rsa":

http://www.sshkeychain.org/mirrors/SSH-with-Keys-HOWTO/SSH-with-Keys-HOWTO-4.html#ss4.3

I hope this helps.

Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-


Jeff Squyres wrote:

It looks like you need to fix your password-less ssh problems first:
> Permission denied, please try again.
> AH72000@cluster-srv2's password:
As I mentioned earlier, you need to be able to be able to run
   ssh cluster-srv2 uptime
without being prompted for a password before Open MPI will work  
properly.
If you're still having problems after fixing this, please send all  
the information from the "help" URL I sent earlier.

Thanks!
On Apr 22, 2009, at 3:24 PM, Luis Vitorio Cargnini wrote:

OK, this is the debug information, running on 5 nodes (trying at least); the process is locked until now:

Each node is composed of two quad-core microprocessors.

(It doesn't finish.) One node still asked me for the password. I have the same home partition mounted on all nodes, so logging in on cluster-srv[0-4] is the same thing. I generated the key on each node, in different files, and added all of them to authorized_keys, so it should be working.

That is it; all help is welcome.


this is the code been executed:

#include <stdio.h>
#include <mpi.h>


int main (int argc, char *argv[])
{
   int rank, size;

   MPI_Init (&argc, &argv);      /* starts MPI */
   MPI_Comm_rank (MPI_COMM_WORLD, &rank);   /* get current process id */
   MPI_Comm_size (MPI_COMM_WORLD, &size);   /* get number of processes */

   printf( "Hello world from process %d of %d\n", rank, size );
   MPI_Finalize();
   return 0;
}


--
debug:

-bash-3.2$ mpirun -v -d -hostfile chosts -np 32 /export/cluster/ 
home/

AH72000/mpi/hello

[cluster-srv0:21606] procdir: /tmp/openmpi-sessions-AH72000@cluster-
srv0_0/35335/0/0
[cluster-srv0:21606] jobdir: /tmp/openmpi-sessions-AH72000@cluster-
srv0_0/35335/0
[cluster-srv0:21606] top: openmpi-sessions-AH72000@cluster-srv0_0
[cluster-srv0:21606] tmp: /tmp
[cluster-srv0:21606] mpirun: reset PATH: /export/cluster/appl/ 
x86_64/

llvm/bin:/bin:/sbin:/export/cluster/appl/x86_64/llvm/bin:/usr/local/
llvm/bin:/usr/local/bin:/usr/bin:/usr/sbin:/home/GTI420/AH72000/oe/
bitbake/bin
[cluster-srv0:21606] mpirun: reset LD_LIBRARY_PATH: /export/cluster/
appl/x86_64/llvm/lib:/lib64:/lib:/export/cluster/appl/x86_64/llvm/ 
lib:/

usr/lib64:/usr/lib:/usr/local/lib64:/usr/local/lib
AH72000@cluster-srv1's password: AH72000@cluster-srv2's password:
AH72000@cluster-srv3's password:
[cluster-srv1:07406] procdir: /tmp/openmpi-sessions-AH72000@cluster-
srv1_0/35335/0/1
[cluster-srv1:07406] jobdir: /tmp/openmpi-sessions-AH72000@cluster-
srv1_0/35335/0
[cluster-srv1:07406] top: openmpi-sessions-AH72000@cluster-srv1_0
[cluster-srv1:07406] tmp: /tmp


Permission denied, please try again.
AH72000@cluster-srv2's password:
[cluster-srv3:14230] procdir: /tmp/openmpi-sessions-AH72000@cluster-
srv3_0/35335/0/3
[cluster-srv3:14230] jobdir: /tmp/openmpi-sessions-AH72000@cluster-
srv3_0/35335/0
[cluster-srv3:14230] top: openmpi-sessions-AH72000@cluster-srv3_0
[cluster-srv3:14230] tmp: /tmp

Permission denied, please try again.
AH72000@cluster-srv2's password:
[cluster-srv2:17092] procdir: /tmp/openmpi-sessions-AH72000@cluster-
srv2_0/35335/0/2
[cluster-srv2:17092] jobdir: /tmp/openmpi-sessions-AH72000@cluster-
srv2_0/35335/0
[cluster-srv2:17092] top: openmpi-sessions-AH72000@cluster-srv2_0
[cluster-srv2:17092] tmp: /tmp
[cluster-srv0:21606] [[35335,0],0] node[0].name cluster-srv0  
daemon 0

arch ffc91200
[cluster-srv0:21606] [[35335,0],0] node[1].name cluster-srv1  
daemon 1

arch ffc91200
[cluster-srv0:21606] [[35335,0],0] node[2].name cluster-srv2  
daemon 2

arch ffc91200
[cluster-srv0:21606] [[35335,0],0] node[3].name cluster-srv3  
daemon 3

arch ffc91200
[cluster-srv0:21606] [[35335,0],0] node[4].name cluster-srv4 daemon
INVALID arch ffc91200
[cluster-srv1:07406] [[35335,0],1] node[0].name cluster-srv0  
daemon 0

arch ffc91200
[cluster-srv1:07406] [[35335,0],1] node[1].name cluster-srv1  
daemon 1

arch ffc91200
[cluster-srv1:07406] [[35335,0],1] node[2].name cluster-srv2  
daemon 2

arch ffc91200
[cluster-srv1:07406] [[35335,0],1] node[3].name cluster-srv3  
daemon 3

arch ffc91200
[cluster-srv1:07406] [[35335,0],1] node[4].name cluster-srv4 daemon
INVALID arch ffc91200
[cluster-srv2:17092] [[35335,0],2] node[0].name cluster-srv0  
daemon 0

arch ffc91200
[cluster-srv2:17092] [[35335,0],2] node[1].name c

[OMPI users] Problems with SSH

2009-04-21 Thread Luis Vitorio Cargnini

Hi,
Please, I did as mentioned in the FAQ for password-less SSH, but mpirun is still requesting the password?


-bash-3.2$ mpirun -d -v -hostfile chosts -np 16  ./hello
[cluster-srv0.logti.etsmtl.ca:31929] procdir: /tmp/openmpi-sessions-AH72000@cluster-srv0.logti.etsmtl.ca_0 
/41688/0/0
[cluster-srv0.logti.etsmtl.ca:31929] jobdir: /tmp/openmpi-sessions-AH72000@cluster-srv0.logti.etsmtl.ca_0 
/41688/0

[cluster-srv0.logti.etsmtl.ca:31929] top: 
openmpi-sessions-AH72000@cluster-srv0.logti.etsmtl.ca_0
[cluster-srv0.logti.etsmtl.ca:31929] tmp: /tmp
[cluster-srv0.logti.etsmtl.ca:31929] mpirun: reset PATH: /export/ 
cluster/appl/x86_64/llvm/bin:/bin:/sbin:/export/cluster/appl/x86_64/ 
llvm/bin:/usr/local/llvm/bin:/usr/local/bin:/usr/bin:/usr/sbin:/home/ 
GTI420/AH72000/oe/bitbake/bin
[cluster-srv0.logti.etsmtl.ca:31929] mpirun: reset LD_LIBRARY_PATH: / 
export/cluster/appl/x86_64/llvm/lib:/lib64:/lib:/export/cluster/appl/ 
x86_64/llvm/lib:/usr/lib64:/usr/lib:/usr/local/lib64:/usr/local/lib
ah72...@cluster-srv1.logti.etsmtl.ca's password: ah72...@cluster-srv2.logti.etsmtl.ca 
's password: ah72...@cluster-srv3.logti.etsmtl.ca's password:
[cluster-srv1.logti.etsmtl.ca:02621] procdir: /tmp/openmpi-sessions-AH72000@cluster-srv1.logti.etsmtl.ca_0 
/41688/0/1
[cluster-srv1.logti.etsmtl.ca:02621] jobdir: /tmp/openmpi-sessions-AH72000@cluster-srv1.logti.etsmtl.ca_0 
/41688/0

[cluster-srv1.logti.etsmtl.ca:02621] top: 
openmpi-sessions-AH72000@cluster-srv1.logti.etsmtl.ca_0
[cluster-srv1.logti.etsmtl.ca:02621] tmp: /tmp


Permission denied, please try again.
ah72...@cluster-srv2.logti.etsmtl.ca's password:
[cluster-srv3.logti.etsmtl.ca:09730] procdir: /tmp/openmpi-sessions-AH72000@cluster-srv3.logti.etsmtl.ca_0 
/41688/0/3
[cluster-srv3.logti.etsmtl.ca:09730] jobdir: /tmp/openmpi-sessions-AH72000@cluster-srv3.logti.etsmtl.ca_0 
/41688/0

[cluster-srv3.logti.etsmtl.ca:09730] top: 
openmpi-sessions-AH72000@cluster-srv3.logti.etsmtl.ca_0
[cluster-srv3.logti.etsmtl.ca:09730] tmp: /tmp

Permission denied, please try again.
ah72...@cluster-srv2.logti.etsmtl.ca's password:
[cluster-srv2.logti.etsmtl.ca:12802] procdir: /tmp/openmpi-sessions-AH72000@cluster-srv2.logti.etsmtl.ca_0 
/41688/0/2
[cluster-srv2.logti.etsmtl.ca:12802] jobdir: /tmp/openmpi-sessions-AH72000@cluster-srv2.logti.etsmtl.ca_0 
/41688/0

[cluster-srv2.logti.etsmtl.ca:12802] top: 
openmpi-sessions-AH72000@cluster-srv2.logti.etsmtl.ca_0
[cluster-srv2.logti.etsmtl.ca:12802] tmp: /tmp





[OMPI users] few Problems

2009-04-21 Thread Luis Vitorio Cargnini

Hi,
Could someone please tell me what this problem could be?
 daemon INVALID arch ffc91200




the debug output:
[[41704,1],14] node[4].name cluster-srv4 daemon INVALID arch ffc91200
[cluster-srv3:09684] [[41704,1],13] node[0].name cluster-srv0 daemon 0  
arch ffc91200
[cluster-srv3:09684] [[41704,1],13] node[1].name cluster-srv1 daemon 1  
arch ffc91200
[cluster-srv3:09684] [[41704,1],13] node[2].name cluster-srv2 daemon 2  
arch ffc91200
[cluster-srv3:09684] [[41704,1],13] node[3].name cluster-srv3 daemon 3  
arch ffc91200
[cluster-srv3:09684] [[41704,1],13] node[4].name cluster-srv4 daemon  
INVALID arch ffc91200


ORTE_ERROR_LOG: A message is attempting to be sent to a process whose  
contact information is unknown in file rml_oob_send.c at line 105
