MPI_IN_PLACE is defined in both mpif.h and the "mpi" Fortran module.
Does the subroutine in question have "include 'mpif.h'" or "use mpi"?
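For reference, a minimal, self-contained sketch of such an in-place reduction with the "mpi" module; the program name, the rho array, its size, and MPI_COMM_WORLD are illustrative stand-ins (Octopus reduces into rho(1, ispin) over st%mpi_grp%comm), not the actual Octopus code:

  program inplace_sum
    use mpi                      ! defines MPI_IN_PLACE, MPI_DOUBLE_PRECISION, MPI_SUM
    implicit none
    integer :: ierr, np
    double precision :: rho(8)

    call MPI_Init(ierr)
    rho = 1.0d0
    np  = size(rho)
    ! In-place: each rank's rho is both input and output of the global sum.
    call MPI_Allreduce(MPI_IN_PLACE, rho, np, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
    call MPI_Finalize(ierr)
  end program inplace_sum

The same call compiles with include 'mpif.h' in place of "use mpi", since mpif.h also declares MPI_IN_PLACE; if neither is visible in the subroutine, an "implicit none" routine fails with exactly the error #6404 quoted below.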
On Aug 16, 2010, at 3:55 PM, Richard Walsh wrote:
>
> All,
>
> I have a Fortran code (Octopus 3.2) that is bombing during a build in a
> routine that uses:
Thanks Richard,
Actually, I am not sure how to try what you suggested in RAxML. I don't have
much experience with these programs.
Thanks again.
On Mon, Aug 16, 2010 at 5:40 PM, Richard Walsh wrote:
>
> Hey Gokhan,
>
> The following worked for me with OpenMPI 1.4.1 with the latest Intel
> compiler
All,
I have a Fortran code (Octopus 3.2) that is bombing during a build in a routine
that uses:
call MPI_Allreduce(MPI_IN_PLACE, rho(1, ispin), np, MPI_DOUBLE_PRECISION,
MPI_SUM, st%mpi_grp%comm, mpi_err)
with the error message:
states.F90(1240): error #6404: This name does not have a type, and must have an explicit type.   [MPI_IN_PLACE]
Hey Gokhan,
The following worked for me with OpenMPI 1.4.1 with the latest Intel compiler
(May release), although there have been reports that with full vectorization
there are some unexplained in-flight failures:
#
# Parallel Version
#
service0:/share/apps/raxml/7.0.4/build # make -f Makefile.MP
You might want to start by contacting someone from that software package - this
is the Open MPI mailing list.
On Aug 16, 2010, at 3:43 PM, Gokhan Kir wrote:
> Hi,
> I am currently using RAxML 7.0, and recently I ran into a problem. Even though I
> Googled it, I couldn't find a satisfying answer.
Hi,
I am currently using RAxML 7.0, and recently I ran into a problem. Even though I
Googled it, I couldn't find a satisfying answer.
I got this message in the BATCH_ERRORs file: "raxmlHPC-MPI: topologies.c:179:
restoreTL: Assertion `n >= 0 && n < rl->max' failed. "
Any help is appreciated,
Thanks,
--
The value of hdr->tag seems wrong.
In ompi/mca/pml/ob1/pml_ob1_hdr.h
#define MCA_PML_OB1_HDR_TYPE_MATCH (MCA_BTL_TAG_PML + 1)
#define MCA_PML_OB1_HDR_TYPE_RNDV (MCA_BTL_TAG_PML + 2)
#define MCA_PML_OB1_HDR_TYPE_RGET (MCA_BTL_TAG_PML + 3)
#define MCA_PML_OB1_HDR_TYPE_ACK (MCA_BTL_TAG_PML + 4)
Hi Jeff,
I've reproduced your test here, with the same results. Moreover, if I
put the nodes with rank>0 into a blocking MPI call (MPI_Bcast or
MPI_Barrier) I still get the same behavior; namely, rank 0's calling
abort() generates a core file and leads to termination, which is the
behavior I want.
On Aug 16, 2010, at 10:05 AM, Eloi Gaudry wrote:
> I did run our application through valgrind but it couldn't find any "Invalid
> write": there is a bunch of "Invalid read" (I'm using 1.4.2 with the
> suppression file), "Use of uninitialized bytes" and "Conditional jump
> depending on uninitialized bytes"
Hi Jeff,
Thanks for your reply.
I did run our application through valgrind but it couldn't find any
"Invalid write": there is a bunch of "Invalid read" (I'm using 1.4.2
with the suppression file), "Use of uninitialized bytes" and
"Conditional jump depending on uninitialized bytes" in differe
FWIW, I'm unable to replicate your behavior. This is with Open MPI 1.4.2 on
RHEL5:
[9:52] svbu-mpi:~/mpi % cat abort.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (0 == rank) {
        /* rank 0 aborts; the remainder of the original listing was cut off,
           so the lines from here down are a reconstruction */
        abort();
    }
    MPI_Finalize();
    return 0;
}
Josh,
I have one more update on my observation while analyzing this issue.
Just to refresh, I am using openmpi-trunk release 23596 with
mpi4py-1.2.1 and BLCR 0.8.2. When I checkpoint the Python script written
using mpi4py, the program doesn't progress after the checkpoint is taken
successfully.
I've tried both--as you said, MPI_Abort doesn't drop a core file, but
does kill off the entire MPI job. abort() drops core when I'm running
on 1 processor, but not in a multiprocessor run. In addition, a node
calling abort() doesn't lead to the entire run being killed off.
David
Sorry for the delay in replying.
Odd; the values of the callback function pointer should never be 0. This seems
to suggest some kind of memory corruption is occurring.
I don't know if it's possible, because the stack trace looks like you're
calling through Python, but can you run this application through valgrind?
On Aug 10, 2010, at 3:59 PM, Gus Correa wrote:
> Thank you for opening a ticket and taking care of this.
Sorry -- I missed your inline questions when I first read this mail...
> > That being said, we didn't previously find any correctness
> > issues with using an alignment of 1.
>
> Does it aff
On Aug 13, 2010, at 12:53 PM, David Ronis wrote:
> I'm using mpirun and the nodes are all on the same machine (an 8-CPU box
> with an Intel i7). The core size is unlimited:
>
> ulimit -a
> core file size (blocks, -c) unlimited
That looks good.
In reviewing the email thread, it's not entirely
sun...@chem.iitb.ac.in wrote:
Dear Open-mpi users,
I installed openmpi-1.4.1 in my user area and then set the path for
openmpi in the .bashrc file as follows. However, I am still getting the
following error message whenever I start the parallel molecular dynamics
simulation
Hi,
Sorry for the late answer.
I've checked your source code and I didn't find anything wrong; everything
works just fine with the Open MPI trunk version. Could you tell me which
version you used, so that I can debug with your generated MPI libs?
By the way, I noticed that you put MPI_Init, MP
Try
env | grep LD_LIBRARY_PATH
Does it show /home/sunitap/soft/openmpi/lib in your library path?
I have a similar installation. This is how my LD_LIBRARY_PATH looks.
LD_LIBRARY_PATH=/lustre/work/apps/gromacs-testgar/lib:/lustre/work/apps/gromacs-mkl/lib:/lustre/work/apps/openmpi-testgar/lib:/o
> Hello Sunitha,
> If you have admin privileges on this system, add the library path to
> /etc/ld.so.conf
I don't have admin privileges.
>
> e.g.: echo "/home/sunitap/soft/openmpi/lib" >> /etc/ld.so.conf
>
> ldconfig
>
> Rangam
>
Hi,
> sun...@chem.iitb.ac.in wrote:
>> Dear Open-mpi users,
>>
>> I installed openmpi-1.4.1 in my user area and then set the path for
>> openmpi in the .bashrc file as follows. However, I am still getting the
>> following error message whenever I start the parallel molecular dynamics
>> simulation
Hello Sunitha,
If you have admin privileges on this system, add the library path to
/etc/ld.so.conf, e.g.:
echo "/home/sunitap/soft/openmpi/lib" >> /etc/ld.so.conf
ldconfig
Rangam
Hi Sunita,
have you tried running "ldconfig"?
Manik Mayur
2010/8/16 :
> Hi,
>
>> hello Sunita,
>>
>> what linux distribution is this?
> The linux distribution is Red Hat Enterprise Linux Server release 5.5
> (Tikanga)
>>
>> On Fri, Aug 13, 2010 at 1:57 AM, wrote:
>>
> Thanks,
> Sunita
>
Hi,
> hello Sunita,
>
> what linux distribution is this?
The Linux distribution is Red Hat Enterprise Linux Server release 5.5
(Tikanga).
>
> On Fri, Aug 13, 2010 at 1:57 AM, wrote:
>
Thanks,
Sunita
>> Dear Open-mpi users,
>>
>> I installed openmpi-1.4.1 in my user area and then set the path for
Josh,
I tried running the mpi4py program with the latest trunk version of
Open MPI. I have compiled openmpi-1.7a1r23596 from trunk and recompiled
mpi4py to use this library. Unfortunately, I see the same behavior as I
have seen with Open MPI 1.4.2, i.e., the checkpoint will be successful but the
program does not progress.