FWIW, I filed https://svn.open-mpi.org/trac/ompi/ticket/2241 about this.
Thanks Jed!
On Feb 6, 2010, at 10:56 AM, Jed Brown wrote:
> On Fri, 5 Feb 2010 14:28:40 -0600, Barry Smith wrote:
> > To cheer you up, when I run with openMPI it runs forever sucking down
> > 100% CPU trying to send the
You shouldn't need to do this in a tarball build.
Did you run autogen manually, or did you just untar the OMPI tarball and
configure / make?
On Feb 6, 2010, at 10:49 AM, Caciano Machado wrote:
> Hi,
>
> You can solve this by installing libtool 2.2.6b and running autogen.sh.
>
> Regards,
>
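For reference, the suggested workaround amounts to regenerating the build system with the newer libtool and rebuilding; a sketch, assuming an SVN-style checkout of Open MPI (the install prefix is illustrative):

```shell
# With libtool 2.2.6b installed and first on PATH, from the top of the
# Open MPI source tree:
./autogen.sh
./configure --prefix=$HOME/ompi
make all install
```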
Correction on a correction: I did not goof; however, zombies remaining
is not a reproducible problem, but it can occur.
On Mon, Feb 8, 2010 at 2:34 PM, Laurence Marks wrote:
> I goofed, openmpi does trap these errors but the system I tested them
> on had a very sluggish response. However, an end-of-
I goofed, openmpi does trap these errors but the system I tested them
on had a very sluggish response. However, an end-of-file is NOT
trapped.
On Mon, Feb 8, 2010 at 1:29 PM, Laurence Marks wrote:
> This was "Re: [OMPI users] Trapping fortran I/O errors leaving zombie
> mpi processes", but it is
On Feb 8, 2010, at 2:34 PM, Lubomir Klimes wrote:
> I am new in the world of MPI and I would like to ask you for help. In my
> diploma thesis I need to write a program in C++ using MPI that will execute
> another external program, an optimization software called GAMS. My question is
> whether it is s
On Mon, 08 Feb 2010 14:42:15 -0500, Prentice Bisbal wrote:
> I'll give that a try, too. IMHO, MPI_Pack/Unpack looks easier and less
> error prone, but Pacheco advocates using derived types over
> MPI_Pack/Unpack.
I would recommend using derived types for big structures, or perhaps for
long-lived
Prentice Bisbal wrote:
> I hit send too early on my last reply, please forgive me...
>
> Jed Brown wrote:
>> On Mon, 08 Feb 2010 13:54:10 -0500, Prentice Bisbal wrote:
>>> but I don't have that book handy
>> The standard has lots of examples.
>>
>> http://www.mpi-forum.org/docs/docs.html
>
>
I hit send too early on my last reply, please forgive me...
Jed Brown wrote:
> On Mon, 08 Feb 2010 13:54:10 -0500, Prentice Bisbal wrote:
>> but I don't have that book handy
>
> The standard has lots of examples.
>
> http://www.mpi-forum.org/docs/docs.html
Thanks, I'll check out those examples
Hi,
I am new in the world of MPI and I would like to ask you for help. In my
diploma thesis I need to write a program in C++ using MPI that will execute
another external program, an optimization software called GAMS. My question is
whether it is sufficient to simply use the command system(); for executi
This was "Re: [OMPI users] Trapping fortran I/O errors leaving zombie
mpi processes", but it is more severe than this.
Sorry, but it appears that at least with ifort most run-time errors
and signals will leave zombie processes behind with openmpi if they
only occur on some of the processors and no
On Mon, 08 Feb 2010 13:54:10 -0500, Prentice Bisbal wrote:
> but I don't have that book handy
The standard has lots of examples.
http://www.mpi-forum.org/docs/docs.html
You can do this, but for small structures, you're better off just
packing buffers. For large structures containing variable
Hello again, MPI users:
This question is similar to my earlier one about MPI_Pack/Unpack.
I'm trying to send the following structure, which has a dynamically
allocated array in it, as an MPI derived type using MPI_Type_create_struct():
typedef struct {
    int index;
    int* coords;
} point;
I woul
I'm using ClusterTools 8.2.1 on Solaris 10, and according to the HPC docs:
"Open MPI includes a commented default hostfile at
/opt/SUNWhpc/HPC8.2/etc/openmpi-default-hostfile. Unless you specify
a different hostfile at a different location, this is the hostfile
that OpenMPI uses."
I have added my
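For reference, the hostfile format itself is just one host per line with an optional slot count; a sketch with illustrative hostnames:

```
node0 slots=4
node1 slots=4
```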
Jed Brown wrote:
> On Sun, 07 Feb 2010 22:40:55 -0500, Prentice Bisbal wrote:
>> Hello, everyone. I'm having trouble packing/unpacking this structure:
>>
>> typedef struct {
>>     int index;
>>     int* coords;
>> } point;
>>
>> The size of the coords array is not known a priori, so it needs to be a
>>
You can use the 'checkpoint to local disk' example to checkpoint and restart
without access to a globally shared storage device. There is an example on the
website that does not use a globally mounted file system:
http://www.osl.iu.edu/research/ft/ompi-cr/examples.php#uc-ckpt-local
What versi
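For reference, the local-disk variant on that page boils down to pointing the per-node snapshot directory at local storage via MCA parameters and checkpointing by mpirun's PID; a sketch assuming an Open MPI build with checkpoint/restart support (paths and the PID placeholder are illustrative):

```shell
# Run with C/R enabled, staging per-node snapshots on local disk
mpirun -np 4 -am ft-enable-cr \
       -mca crs_base_snapshot_dir /tmp/ckpt \
       ./my_app

# From another shell: checkpoint by mpirun's PID, then restart later
ompi-checkpoint <mpirun_pid>
ompi-restart ompi_global_snapshot_<mpirun_pid>.ckpt
```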
I asked this question because checkpointing to NFS is successful, but
checkpointing without a mounted filesystem or shared storage throws this
warning and error:
WARNING: Could not preload specified file: File already exists.
Fileset: /home/andreea/checkpoints/global/ompi_global_snapshot_7426.ckp
On Sun, 07 Feb 2010 22:40:55 -0500, Prentice Bisbal wrote:
> Hello, everyone. I'm having trouble packing/unpacking this structure:
>
> typedef struct {
>     int index;
>     int* coords;
> } point;
>
> The size of the coords array is not known a priori, so it needs to be a
> dynamic array. I'm trying
Hello,
It does work with version 1.4. This is the hello world that hangs with
1.4.1:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int node, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello World from Node %d of %d\n", node, size);
    MPI_Finalize();
    return 0;
}