Lisandro,
Thanks for the tester. I pushed a fix to the trunk (r32613) and requested a
CMR for 1.8.3.
George.
On Tue, Aug 26, 2014 at 6:53 AM, Lisandro Dalcin wrote:
> I've just installed 1.8.2; something is still wrong with
> HINDEXED_BLOCK datatypes.
>
> Please note the example below
The proposed patch has several issues, all of them detailed on the ticket.
A correct patch, as well as a broader tester, is provided.
George.
On Tue, Aug 26, 2014 at 8:21 PM, Jeff Squyres (jsquyres) wrote:
> Good catch.
>
> I filed https://svn.open-mpi.org/trac/ompi/ticket/4876 with a patch
On Aug 26, 2014, at 6:09 PM, Andrej Prsa wrote:
> Hi Ralph,
>
>> I don't know what version of OMPI you're working with, so I can't
>> precisely pinpoint the line in question. However, it looks likely to
>> be an error caused by not finding the PBS nodefile.
>
> This is openmpi 1.6.5.
>
>> We
Hi Ralph,
> I don't know what version of OMPI you're working with, so I can't
> precisely pinpoint the line in question. However, it looks likely to
> be an error caused by not finding the PBS nodefile.
This is openmpi 1.6.5.
> We look in the environment for PBS_NODEFILE to find the directory
>
Good catch.
I filed https://svn.open-mpi.org/trac/ompi/ticket/4876 with a patch for the
fix; I want to get more eyeballs on it before I commit.
On Aug 26, 2014, at 7:07 AM, Lisandro Dalcin wrote:
> While I agree that the code below is rather useless, I'm not
> sure it should actually
I don't know what version of OMPI you're working with, so I can't precisely
pinpoint the line in question. However, it looks likely to be an error caused
by not finding the PBS nodefile.
We look in the environment for PBS_NODEFILE to find the directory where the
file should be found, and then l
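In rough terms, the lookup works like the following sketch (illustrative
only, not the actual ORTE source; only the PBS_NODEFILE variable name comes
from the description above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* The launcher consults the environment that Torque/PBS sets up. */
    const char *nodefile = getenv("PBS_NODEFILE");
    if (NULL == nodefile) {
        fprintf(stderr, "PBS_NODEFILE is not set -- not running under PBS?\n");
        return 1;
    }

    /* This is the failure mode described above: the variable is set,
       but the file it names cannot be opened on this node. */
    FILE *fp = fopen(nodefile, "r");
    if (NULL == fp) {
        perror(nodefile);
        return 1;
    }

    char host[256];
    while (NULL != fgets(host, sizeof(host), fp)) {
        printf("allocated node: %s", host);
    }
    fclose(fp);
    return 0;
}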
If you have reproducers, yes, that would be most helpful -- thanks.
On Aug 26, 2014, at 12:26 PM, Lisandro Dalcin wrote:
> I'm getting a bunch of the following messages. Are they signaling some
> easy-to-fix internal issue? Do you need code to reproduce each one?
>
> malloc debug: Request for 0
Hi all,
I asked this question on the torque mailing list, and I found several
similar issues on the web, but no definitive solutions. When we run our
MPI programs via torque/maui, at random times, in ~50-70% of all cases,
the job will fail with the following error message:
[node1:51074] [[36074,0
Lisandro,
You rely on a feature clearly prohibited by the MPI standard. Please read
the entire section I pointed you to (8.7.1).
There are two key sentences in that section.
1. When MPI_FINALIZE is called, it will first execute the equivalent of an
MPI_COMM_FREE on MPI_COMM_SELF.
2. The freeing
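A minimal sketch of the pattern the standard does sanction (the printf
payload is just a placeholder): attach a delete callback to MPI_COMM_SELF,
and MPI_Finalize runs it before anything else is torn down.

#include <stdio.h>
#include <mpi.h>

/* Delete callback: invoked when MPI_FINALIZE executes the equivalent of
   an MPI_COMM_FREE on MPI_COMM_SELF, i.e. before the rest of the library
   shuts down. */
static int cleanup_fn(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    printf("finalize-time cleanup running\n");
    return MPI_SUCCESS;
}

int main(int argc, char *argv[])
{
    int keyval;
    MPI_Init(&argc, &argv);
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, cleanup_fn, &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);
    MPI_Finalize();   /* frees MPI_COMM_SELF first, firing cleanup_fn */
    return 0;
}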
Good catch. I will take a look and see how best to fix this.
-Nathan
On Tue, Aug 26, 2014 at 07:03:24PM +0300, Lisandro Dalcin wrote:
> I finally managed to track down some issues in mpi4py's test suite
> using Open MPI 1.8+. The code below should be enough to reproduce the
> problem. Run it und
Hey folks
I've had a few questions lately about how ORCM plans to support fast MPI
startup, so I figured I'd pass along some notes on the matter. Nothing secret
or hush-hush about it - these are things we've discussed in the OMPI world a
few times, and have simply adopted/implemented in ORCM. I
>
> libtoolize: putting libltdl files in LT_CONFIG_LTDL_DIR, `opal/libltdl'.
> libtoolize: `COPYING.LIB' not found in `/usr/share/libtool/libltdl'
> autoreconf: libtoolize failed with exit status: 1
>
>
The error message is from libtoolize about a file missing from the libtool
installation directory
On Aug 26, 2014, at 10:53 AM, Lisandro Dalcin wrote:
> On 26 August 2014 19:27, Ralph Castain wrote:
>> Do you know if this works in the trunk? If so, then it may just be a missing
>> commit that should have come across to 1.8.2 and we can chase it down
>>
>
> $ ./autogen.pl
> Open MPI autogen
On 26 August 2014 21:29, George Bosilca wrote:
> The MPI standard clearly states (in 8.7.1 Allowing User Functions at Process
> Termination) that the mechanism you describe is only allowed on
> MPI_COMM_SELF. The most relevant part starts at line 14.
>
IMHO, you are misinterpreting the standard.
The MPI standard clearly states (in 8.7.1 Allowing User Functions at
Process Termination) that the mechanism you describe is only allowed on
MPI_COMM_SELF. The most relevant part starts at line 14.
George.
On Tue, Aug 26, 2014 at 11:20 AM, Lisandro Dalcin wrote:
> Another issue while testing
On 26 August 2014 19:27, Ralph Castain wrote:
> Do you know if this works in the trunk? If so, then it may just be a missing
> commit that should have come across to 1.8.2 and we can chase it down
>
$ ./autogen.pl
Open MPI autogen (buckle up!)
1. Checking tool versions
Searching for autoconf
Do you know if this works in the trunk? If so, then it may just be a missing
commit that should have come across to 1.8.2 and we can chase it down
On Aug 26, 2014, at 3:53 AM, Lisandro Dalcin wrote:
> I've just installed 1.8.2; something is still wrong with
> HINDEXED_BLOCK datatypes.
>
> Ple
I'm getting a bunch of the following messages. Are they signaling some
easy-to-fix internal issue? Do you need code to reproduce each one?
malloc debug: Request for 0 bytes (coll_libnbc_ireduce_scatter_block.c, 67)
...
malloc debug: Request for 0 bytes (nbc_internal.h, 496)
...
malloc debug: Request for 0 bytes
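The warnings come from the --enable-mem-debug allocation hooks flagging
zero-byte requests. A sketch of the kind of guard that usually silences
them (illustrative only, not the libnbc source):

#include <stdio.h>
#include <stdlib.h>

/* malloc(0) is legal C, but the debug hooks flag it; skipping the
   zero-byte case changes nothing, since such a buffer is never
   dereferenced. */
static void *alloc_maybe(size_t size)
{
    if (0 == size) {
        return NULL;
    }
    return malloc(size);
}

int main(void)
{
    void *p = alloc_maybe(0);   /* e.g. an empty reduce_scatter block */
    printf("zero-byte request -> %p\n", p);
    free(p);                    /* free(NULL) is a no-op */
    return 0;
}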
I finally managed to track down some issues in mpi4py's test suite
using Open MPI 1.8+. The code below should be enough to reproduce the
problem. Run it under valgrind to make sense of the diagnostics that
follow.
In this code I'm creating a 2D, periodic Cartesian topology out of
COMM_SELF. In this c
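The listing is cut off in the archive; a minimal reconstruction from the
description above (the dims/periods values and the final free are
assumptions):

#include <mpi.h>

int main(int argc, char *argv[])
{
    /* A 2D, fully periodic Cartesian topology built from COMM_SELF. */
    MPI_Comm cart;
    int dims[2]    = {1, 1};
    int periods[2] = {1, 1};
    MPI_Init(&argc, &argv);
    MPI_Cart_create(MPI_COMM_SELF, 2, dims, periods, 0, &cart);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}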
Another issue while testing 1.8.2 (./configure --enable-debug
--enable-mem-debug).
Please look at the following code. I'm duplicating COMM_WORLD and
caching the dupe on it as an attribute. The attribute free function is written to
Comm_free the duped comm and deallocate memory. However, the run fails
with the e
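That listing is cut off as well; a reconstruction of the pattern described
(where the attribute is cached, and the exact cleanup payload, are
assumptions):

#include <stdlib.h>
#include <mpi.h>

/* Attribute free function: releases the cached duplicate, then the heap
   cell holding its handle. */
static int del_fn(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    MPI_Comm *dup = (MPI_Comm *)attr;
    MPI_Comm_free(dup);
    free(dup);
    return MPI_SUCCESS;
}

int main(int argc, char *argv[])
{
    int keyval;
    MPI_Comm *dup = malloc(sizeof(MPI_Comm));
    MPI_Init(&argc, &argv);
    MPI_Comm_dup(MPI_COMM_WORLD, dup);
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, del_fn, &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, dup);
    MPI_Finalize();   /* the delete callback fires during finalize */
    return 0;
}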
Theoretically, we could make it functional (with good performance) even
without hwloc.
As it stands today, I would suggest disabling ML if hwloc is disabled.
Best,
Pasha
> -----Original Message-----
> From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Gilles
> Gouaillardet
> Sent: Tuesday,
While I agree that the code below is rather useless, I'm not
sure it should actually fail:
$ cat comm_split_type.c
#include <mpi.h>
#include <assert.h>
int main(int argc, char *argv[])
{
  MPI_Comm comm;
  MPI_Init(&argc, &argv);
  MPI_Comm_split_type(MPI_COMM_SELF, MPI_UNDEFINED, 0, MPI_INFO_NULL, &comm);
  a
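The listing is cut off at the end; a plausible completion (the assert is an
assumption) that states the expectation, namely that MPI_UNDEFINED should
yield MPI_COMM_NULL rather than an error:

#include <mpi.h>
#include <assert.h>

int main(int argc, char *argv[])
{
    MPI_Comm comm;
    MPI_Init(&argc, &argv);
    /* With split_type == MPI_UNDEFINED the call should simply return
       MPI_COMM_NULL, not fail. */
    MPI_Comm_split_type(MPI_COMM_SELF, MPI_UNDEFINED, 0, MPI_INFO_NULL, &comm);
    assert(comm == MPI_COMM_NULL);
    MPI_Finalize();
    return 0;
}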
I've just installed 1.8.2; something is still wrong with
HINDEXED_BLOCK datatypes.
Please note the example below: it should print "ni=2", but I'm getting "ni=7".
$ cat type_hindexed_block.c
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
  MPI_Datatype datatype;
  MPI_Aint disps[] = {0,2,4,
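This listing is also cut off; a reconstruction consistent with the reported
numbers (the six displacements and the MPI_BYTE oldtype are assumptions).
Here ni is the num_integers value from MPI_Type_get_envelope: a
HINDEXED_BLOCK type carries just {count, blocklength}, so ni should be 2,
whereas a type wrongly decoded as plain HINDEXED with count 6 reports
count + 1 = 7.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Datatype datatype;
    MPI_Aint disps[] = {0, 2, 4, 6, 8, 10};
    int ni, na, nd, combiner;
    MPI_Init(&argc, &argv);
    /* Six single-element blocks at the byte displacements above. */
    MPI_Type_create_hindexed_block(6, 1, disps, MPI_BYTE, &datatype);
    MPI_Type_get_envelope(datatype, &ni, &na, &nd, &combiner);
    printf("ni=%d\n", ni);   /* expected 2; the bug reports 7 */
    MPI_Type_free(&datatype);
    MPI_Finalize();
    return 0;
}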
Folks,
I just committed r32604 in order to fix compilation (pmix) when ompi is
configured with --without-hwloc.
Now, even a trivial hello world program issues the following output
(which is non-fatal and could even be reported as a warning):
[soleil][[32389,1],0][../../../../../../src/ompi-tru
Folks,
The test_shmem_zero_get.x from the openshmem-release-1.0d test suite is
currently failing.
I looked at the test itself, compared it to test_shmem_zero_put.x (which
passes), and I am very puzzled...
The test calls several flavors of shmem_*_get (a minimal sketch follows the
excerpt) where:
- the destination is in the
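A minimal sketch of the kind of zero-length transfer being exercised
(illustrative only; the real test lives in the openshmem-release-1.0d
suite): a get of zero elements should be a harmless no-op.

#include <stdio.h>
#include <shmem.h>

int main(void)
{
    static long src = 42;   /* symmetric variable on every PE */
    long dst = -1;
    start_pes(0);
    /* Zero elements requested: dst must stay untouched and the call
       must not fail. */
    shmem_long_get(&dst, &src, 0, 0);
    printf("dst after zero-length get: %ld\n", dst);
    return 0;
}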