All the platforms that failed over the weekend have passed today.
-Paul
On Mon, Feb 10, 2014 at 2:34 PM, Paul Hargrove wrote:
> The fastest of my systems that failed over the weekend (a ppc64) has
> completed tests successfully.
> I will report on the ppc32 and SPARC results when they have all passed or failed.
sqrt(2^31)/log(sqrt(2^31)) * (1 + 1.2762/log(sqrt(2^31))) / 1024 * 4 byte =
18.850133965051 kbyte should do it. ;)
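A quick standalone check of that arithmetic (my own sketch, not from the
thread; 1.2762 is the constant from the cited prime-counting bound):

    #include <math.h>
    #include <stdio.h>

    /* pi(x) <= x/ln(x) * (1 + 1.2762/ln(x)) bounds the number of primes
     * below x, so at 4 bytes per prime a table of all primes below
     * sqrt(2^31) stays under ~19 kbyte. */
    int main(void)
    {
        double x  = sqrt(pow(2.0, 31.0));   /* sqrt(2^31), ~46341 */
        double lx = log(x);
        double n  = x / lx * (1.0 + 1.2762 / lx);
        printf("~%.0f primes, ~%.2f kbyte\n", n, n * 4.0 / 1024.0);
        return 0;
    }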
Amazing - I think our systems are still *too small* - let's go for MPI with
int64 types. ^^
- Original Message -
From: "Jeff Squyres (jsquyres)"
To: "Open MPI Developers"
Sent:
The Fortran programs in the oshmem test suite don't compile because my_pe and
num_pes are already declared in OMPI's shmem.fh.
To be fair, I asked Mellanox to put those declarations in shmem.fh because I
thought it was crazy that all applications would have to declare them.
Apparently, the shme
Hi Andreas,
As mentioned in my previous mail, I did not touch the factorization code.
But to figure out that a number n is *not* prime it is sufficient to
check divisors up to \sqrt(n).
Proof:
let n = p*q with q > \sqrt(n)
--> p = n/q < n/\sqrt(n) = \sqrt(n)
So we have already found the factor p before reaching \sqrt(n) and can
conclude that n is not prime.
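In code form that argument gives the usual trial-division test (a minimal
sketch of the idea, not the code from the patch):

    #include <stdbool.h>

    /* A composite n must have a factor p <= sqrt(n) (see the proof above),
     * so testing divisors up to sqrt(n) is sufficient. */
    static bool is_prime(unsigned int n)
    {
        if (n < 2) return false;
        for (unsigned int p = 2; (unsigned long long)p * p <= n; p++) {
            if (n % p == 0) return false;   /* factor p <= sqrt(n) found */
        }
        return true;
    }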
On Feb 11, 2014, at 01:05, Nathan Hjelm wrote:
> On Tue, Feb 11, 2014 at 12:29:57AM +0100, George Bosilca wrote:
>> Nathan,
>>
>> While this sounds like an optimization for highly specific application
>> behavior, it is justifiable under some usage scenarios. I have several
>> issues with the patch. Here are the minor ones:
Cool.
See the other thread where I'm wondering if we shouldn't just pre-generate all
the primes, hard-code them into a table, and be done with this issue.
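Something along these lines, roughly (a sketch of the idea only; the real
table would hold all ~4800 primes below sqrt(2^31)):

    /* Precomputed primes up to sqrt(2^31) ~= 46341; enough to test
     * primality of, or factorize, any int32 value. Abbreviated here. */
    static const int primes[] = { 2, 3, 5, 7, 11, 13, /* ..., last prime below 46341 */ };
    static const int num_primes = (int)(sizeof(primes) / sizeof(primes[0]));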
On Feb 10, 2014, at 5:19 PM, Andreas Schäfer wrote:
> Jeff-
>
> I've seen that you've reverted the patch as it was faulty. Sorry about
> that! I've attached a new patch, which applies against the current trunk.
On Feb 10, 2014, at 7:22 PM, Christoph Niethammer wrote:
> 2.) Interesting idea: Using the approximation from the cited paper we should
> only need around 400 MB to store all primes in the int32 range. Potential for
> applying compression techniques still present. ^^
Per Andreas' last mail, we
Hello,
If you mean the current version in the ompi-tests/ibm svn repository, I can
confirm that it passes the topology/dimscreate test without errors. :)
The difference in the patches is as follows: The patch from Andreas only
generated a table of prime numbers of up to sqrt(freeprocs) while my p
WHAT: On trunk, force MPI_Count/MPI_Offset to be 32 bits when building in 32
bit mode (they are currently 64 bit, even in a 32 bit build). On v1.7, leave
the sizes at 64 bit (for ABI reasons), but put error checking in the MPI API
layer to ensure we won't over/underflow 32 bits.
WHY: See ticke
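For v1.7 the API-layer guard could look roughly like this (a hypothetical
sketch; the helper name and error path are my assumptions, not the actual
commit):

    #include <stdint.h>
    #include <mpi.h>

    /* MPI_Count stays 64-bit for ABI reasons, but in a 32-bit build we
     * reject values that cannot be represented in 32 bits. */
    static int check_count_fits_32(int64_t count)
    {
        if (count > INT32_MAX || count < INT32_MIN) {
            return MPI_ERR_ARG;   /* caller invokes the error handler */
        }
        return MPI_SUCCESS;
    }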
On Tue, Feb 11, 2014 at 12:29:57AM +0100, George Bosilca wrote:
> Nathan,
>
> While this sounds like an optimization for highly specific application
> behavior, it is justifiable under some usage scenarios. I have several issues
> with the patch. Here are the minor ones:
>
> 1. It does modifications that are not necessary to the patch itself
Nathan,
While this sounds like an optimization for highly specific application
behavior, it is justifiable under some usage scenarios. I have several issues
with the patch. Here are the minor ones:
1. It does modifications that are not necessary to the patch itself (as an
example removal of th
The fastest of my systems that failed over the weekend (a ppc64) has
completed tests successfully.
I will report on the ppc32 and SPARC results when they have all passed or
failed.
-Paul
On Mon, Feb 10, 2014 at 1:52 PM, Ralph Castain wrote:
> Tarball is now posted
>
> On Feb 10, 2014, at 1:31
Christoph-
your patch has the same problem as my original patch: indeed there may
be a prime factor p of n with p > sqrt(n). What's important is that
there may only be at most one. I've submitted an updated patch (see my
previous mail) which catches this special case.
Best
-Andreas
On 19:30 Mon
Jeff-
I've seen that you've reverted the patch as it was faulty. Sorry about
that! I've attached a new patch, which applies against the current
trunk. The problem with the last patch was that it didn't catch a
special case: of all prime factors of n, there may be at most one
larger than sqrt(n). T
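The special case is easy to see in condensed form (an illustrative sketch,
not the submitted patch):

    /* Trial-divide n by candidates up to sqrt(n). Whatever is left at the
     * end (if > 1) is prime: it is the at-most-one prime factor larger
     * than sqrt(n), since two such factors would multiply to more than n. */
    static void factorize(int n, void (*emit)(int prime))
    {
        for (int p = 2; (long long)p * p <= n; p++) {
            while (n % p == 0) {
                emit(p);
                n /= p;
            }
        }
        if (n > 1) {
            emit(n);   /* the single prime factor above sqrt(n) */
        }
    }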
Tarball is now posted
On Feb 10, 2014, at 1:31 PM, Ralph Castain wrote:
> Generating it now - sorry for my lack of response, my OMPI email was down for
> some reason. I can now receive it, but still haven't gotten the backlog from
> the down period.
>
>
> On Feb 10, 2014, at 1:23 PM, Paul Hargrove
Generating it now - sorry for my lack of response, my OMPI email was down for
some reason. I can now receive it, but still haven't gotten the backlog from
the down period.
On Feb 10, 2014, at 1:23 PM, Paul Hargrove wrote:
> Ralph,
>
> If you give me a heads-up when this makes it into a tarball, I will retest
Ralph,
If you give me a heads-up when this makes it into a tarball, I will retest
my failing ppc and sparc platforms.
-Paul
On Mon, Feb 10, 2014 at 1:13 PM, Rolf vandeVaart wrote:
> I have tracked this down. There is a missing commit that affects
> ompi_mpi_init.c causing it to initialize bml
Done - thanks Rolf!!
On Feb 10, 2014, at 1:13 PM, Rolf vandeVaart wrote:
> I have tracked this down. There is a missing commit that affects
> ompi_mpi_init.c causing it to initialize bml twice.
> Ralph, can you apply r30310 to 1.7?
>
> Thanks,
> Rolf
>
> From: devel [mailto:devel-boun...@
I have tracked this down. There is a missing commit that affects
ompi_mpi_init.c causing it to initialize bml twice.
Ralph, can you apply r30310 to 1.7?
Thanks,
Rolf
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Rolf vandeVaart
Sent: Monday, February 10, 2014 12:29 PM
To: Open MP
Nice! Can you verify that it passes the ibm test? I didn't look closely, and
to be honest, I'm not sure why the previous improvement broke the IBM test
because it hypothetically did what you mentioned (stopped at sqrt(freenodes)).
I think patch 1 is a no-brainer. I'm not sure about #2 because
Hello,
I noticed some effort in improving the scalability of
MPI_Dims_create(int nnodes, int ndims, int dims[])
Unfortunately there were some issues with the first attempt (r30539 and r30540)
which were reverted.
So I decided to give it a short review based on r30606
https://svn.open-mpi.org/tra
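For orientation, the call under review balances nnodes across a Cartesian
grid; standard usage looks like this (not taken from the review itself):

    int dims[3] = {0, 0, 0};         /* zero entries are chosen by MPI */
    MPI_Dims_create(24, 3, dims);    /* a balanced result, e.g. {4, 3, 2} */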
I have seen this same issue although my core dump is a little bit different. I
am running with tcp,self. The first entry in the list of BTLs is garbage, but
then there is tcp and self in the list. Strange. This is my core dump. Line
208 in bml_r2.c is where I get the SEGV.
Program termina
Note that we have removed all CMake support from Open MPI starting with v1.7.
Is there a reason you're using the CMake support instead of the Autotools
support? We only had the CMake support there for MS Windows support, which has
been removed (which is why the CMake support was removed).
On
It is a compilation flag passed through the Makefile (when automake is used). I
guess you will have to modify the CMake build to pass it as well. You need it
for the compilation of ompi/debuggers/ompi_debuggers.c, and it should point to
the location of the installed libraries.
George.
On Feb 10, 2
$ /scrap/jenkins/scrap/workspace/hpc-ompi-shmem/label/hpc-test-node/ompi_install1/bin/mpirun
-np 8 -mca pml ob1 -mca btl self,tcp
/scrap/jenkins/scrap/workspace/hpc-ompi-shmem/label/hpc-test-node/ompi_install1/examples/hello_usempi
[vegas12:12724] *** Process received signal ***
[vegas12:12724] Sig