Hi Gilles,
thanks for the prompt response and assistance.

Both systems use Intel CPUs.

The problem originally comes from a coupler, yac, used in climate science.

There are several reported instances where the coupling tests fail.
The problem occurs often enough that a workaround was added: a
compile-time switch to use MPI_Pack and MPI_Unpack instead of
MPI_Pack_external and MPI_Unpack_external (sketched below).
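
To make the workaround concrete, here is a minimal sketch of such a
compile-time switch, in the style of the test program quoted further down
this thread; the macro name YAC_USE_MPI_PACK and the helper function are
illustrative, not yac's actual code:

#include <mpi.h>

/* Pack two ints either portably (external32) or in the native
   representation, selected at compile time. */
static MPI_Aint pack_two_ints(const int *data, char *buf, MPI_Aint bufsize)
{
#ifdef YAC_USE_MPI_PACK
  /* Workaround path: native-format pack, no external32 conversion. */
  int pos = 0;
  MPI_Pack((void *) data, 2, MPI_INT, buf, (int) bufsize,
           &pos, MPI_COMM_WORLD);
  return (MPI_Aint) pos;
#else
  /* Default path: the portable, big-endian external32 representation,
     which is where the byte-swapping problem shows up. */
  MPI_Aint pos = 0;
  MPI_Pack_external("external32", (void *) data, 2, MPI_INT,
                    buf, bufsize, &pos);
  return pos;
#endif
}

Building with -DYAC_USE_MPI_PACK then avoids the external32 path entirely.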

How do I determine how OpenMPI was configured for the package installed on
Ubuntu 14.04?
Is there some way to determine from the OpenMPI headers or other installed
files whether --enable-heterogeneous was enabled or disabled on either
system when I do not have access to the ./configure logs?
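
The closest I have found so far is ompi_info; assuming it reports the
build configuration (I have not confirmed exactly which fields the 1.6.5
package prints), something like this might show it:

  ompi_info | grep -i hetero
  ompi_info --all | grep -i 'configure command'

But a definitive method would be appreciated.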

Since I have one installation that works and a similar installation that
fails, I would like to determine what is causing the problem.

I will try the following and send back the details:
1: Tonight, try the later versions of gcc and OpenMPI supplied with Ubuntu
15.10.
2: Tomorrow, download and install OpenMPI 1.10.2 on my Ubuntu 14.04
workstation.

kindest regards
Mike

On 11 February 2016 at 14:48, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Michael,
>
> do your two systems have the same endianness?
>
> do you know how openmpi was configure'd on both systems?
> (is --enable-heterogeneous enabled or disabled on both systems?)
>
> fwiw, openmpi 1.6.5 is old now and no longer maintained.
> I strongly encourage you to use openmpi 1.10.2.
>
> Cheers,
>
> Gilles
>
> On Thursday, February 11, 2016, Michael Rezny <michael.re...@monash.edu>
> wrote:
>
>> Hi,
>> I am running Ubuntu 14.04 LTS with OpenMPI 1.6.5 and gcc 4.8.4
>>
>> In a single-rank program that just packs and unpacks two ints using
>> MPI_Pack_external and MPI_Unpack_external,
>> the unpacked ints come back in the wrong byte order.
>>
>> However, on an HPC system (not Ubuntu), also using OpenMPI 1.6.5 and gcc
>> 4.8.4, the unpacked ints are correct.
>>
>> Is it possible to get some assistance to track down what is going on?
>>
>> Here is the output from the program:
>>
>>  ~/tests/mpi/Pack test1
>> send data 000004d2 0000162e
>> MPI_Pack_external: 0
>> buffer size: 8
>> MPI_unpack_external: 0
>> recv data d2040000 2e160000
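>>
>> (For clarity: external32 is defined to be big-endian, so 1234 =
>> 0x000004d2 should pack as the bytes 00 00 04 d2 and unpack back to
>> 0x000004d2 on any platform; getting 0xd2040000 means the values were
>> byte-swapped somewhere in the round trip.)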
>>
>> And here is the source code:
>>
>> #include <stdio.h>
>> #include <mpi.h>
>>
>> int main(int argc, char *argv[]) {
>>   int numRanks, myRank, error;
>>
>>   int send_data[2] = {1234, 5678};
>>   int recv_data[2];
>>
>>   /* Generously sized scratch buffer for the packed data. */
>>   MPI_Aint buffer_size = 1000;
>>   char buffer[buffer_size];
>>
>>   MPI_Init(&argc, &argv);
>>   MPI_Comm_size(MPI_COMM_WORLD, &numRanks);
>>   MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
>>
>>   printf("send data %08x %08x \n", send_data[0], send_data[1]);
>>
>>   /* Pack the two ints into the portable "external32" representation,
>>      which the MPI standard defines as big-endian. */
>>   MPI_Aint position = 0;
>>   error = MPI_Pack_external("external32", (void*) send_data, 2, MPI_INT,
>>           buffer, buffer_size, &position);
>>   printf("MPI_Pack_external: %d\n", error);
>>
>>   printf("buffer size: %d\n", (int) position);
>>
>>   /* Unpack from "external32" back to the native representation; the
>>      round trip should reproduce the original values. */
>>   position = 0;
>>   error = MPI_Unpack_external("external32", buffer, buffer_size, &position,
>>           recv_data, 2, MPI_INT);
>>   printf("MPI_unpack_external: %d\n", error);
>>
>>   printf("recv data %08x %08x \n", recv_data[0], recv_data[1]);
>>
>>   MPI_Finalize();
>>
>>   return 0;
>> }
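>>
>> (To reproduce: build and run with the MPI compiler wrapper, e.g.
>>
>>   mpicc test1.c -o test1
>>   ./test1
>>
>> where the file name is illustrative.)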
>>
>>
>>
