Dear Ralph, that would be great if you could give it a try. We have been
hoping for this for a year now, and it would greatly benefit us if it is
fixed!! :-)
Thanks!
Suraj
On Fri, Sep 13, 2013 at 5:39 PM, Ralph Castain wrote:
> It has been a low-priority issue, and hence
Here are some more compile errors from outside of the f77 directory. Do we
need to turn off the shmem build on the nightlies until these compile errors
are fixed?
-
make[1]: Entering directory
`/nfs/deep-thought/home/data/jsquyres/scratch/svn/ompi/oshmem/mca/memheap'
CC
I did a manual build on eddie (the OMPI build server); here are all the
failures from the f77 directory. Please fix -- this is preventing nightly
builds from occurring...
-
[14:03] eddie:~/svn/ompi/oshmem/shmem/f77 % make -k |& tee ../../../make.out
CC start_pes_f.lo
CC
It has been a low-priority issue, and hence is not resolved yet. I doubt it
will make 1.7.3, though if you need it, I'll give it a try.
On Sep 13, 2013, at 7:21 AM, Suraj Prabhakaran
wrote:
> Hello,
>
> Is there a plan to fix the problem with MPI_Intercomm_merge
Yes, it appears the send_requests list is the one that is growing. This list
holds the send request structures in use; after a send completes, its request
is supposed to be returned to the list and then re-used. With 7 processes, it
had reached a size of 16,324 send
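(For reference, a minimal sketch of the free-list pattern described above,
with hypothetical names and types, not Open MPI's actual internals: requests
are taken from the list for each send and returned on completion, so if
completions stop returning them, the total allocation grows without bound.)

    /* Hypothetical free-list sketch; not Open MPI's actual code. */
    #include <stdlib.h>

    typedef struct send_request {
        struct send_request *next;   /* link while on the free list */
        /* payload fields (buffer, peer, tag, ...) omitted */
    } send_request_t;

    typedef struct {
        send_request_t *head;        /* available (returned) requests */
        size_t total_allocated;      /* grows only when head is NULL */
    } free_list_t;

    send_request_t *free_list_get(free_list_t *fl) {
        if (fl->head) {              /* re-use a returned request */
            send_request_t *req = fl->head;
            fl->head = req->next;
            return req;
        }
        fl->total_allocated++;       /* nothing returned: allocate anew */
        return malloc(sizeof(send_request_t));
    }

    void free_list_return(free_list_t *fl, send_request_t *req) {
        req->next = fl->head;        /* must happen on send completion, */
        fl->head = req;              /* or total_allocated keeps climbing */
    }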
Hello,
Is there a plan to fix the problem with MPI_Intercomm_merge in 1.7.3, as
stated in this ticket? We are really in need of this at the moment. Any hints?
We face the following problem:
Parents (x and y) spawn a child (z). (All of them execute on separate nodes.)
x is the root.
x, y, and z do
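(For context, a minimal sketch of the spawn-and-merge pattern described above;
the executable name and process counts are illustrative, not taken from the
original report.)

    /* Sketch of parents spawning a child and all ranks merging the
     * resulting intercommunicator; illustrative values only. */
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Comm parent, inter, merged;
        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);

        if (MPI_COMM_NULL == parent) {
            /* Parents x and y; rank 0 (x) is the root of the spawn. */
            MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                           0, MPI_COMM_WORLD, &inter, MPI_ERRCODES_IGNORE);
        } else {
            /* Child z: its parent communicator is the intercommunicator. */
            inter = parent;
        }

        /* x, y, and z all call the merge; the child passes high = 1. */
        MPI_Intercomm_merge(inter, MPI_COMM_NULL != parent, &merged);

        MPI_Comm_free(&merged);
        MPI_Finalize();
        return 0;
    }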
The OpenSHMEM Fortran interface is quite definitely not a Fortran 77 interface
(there have not been Fortran 77 compilers in over 30 years).
Can we rename the oshmem/shmem/f77 directory to be oshmem/shmem/fortran?
Also, there should be no shmemf77 and shmemf90 wrappers -- there should only be
Hi Rolf,
I applied your patch. The full output is rather big (even gzipped it is over
10 MB), which is not good for the mailing list, but the head and tail are
below for a 7 and an 8 processor run.
It seems that the send requests are growing fast: 4,000 times in just 10
minutes. Do you know of a method to bound the
Bah -- I forwarded the wrong build failure. This is the shmem failure that has
occurred for the last two nights.
Begin forwarded message:
> From: MPI Team
> Subject: === CREATE FAILURE (trunk) ===
> Date: September 13, 2013 3:48:47 AM GMT+02:00
> To:
>