Hi all,
Is there a single function call that components can use to check that the progress thread is up and running?
Thanks,
Graham.
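For illustration only, here is a minimal sketch of what such a single call could look like. This is hypothetical -- the flag and both function names below are made up, not an existing Open MPI interface -- and it simply assumes the progress thread sets a shared flag once its event loop has started.

/* Hypothetical sketch only -- not an existing Open MPI call.  Assumes the
 * progress thread sets a shared flag (progress_thread_running, a made-up
 * name) once its event loop has actually started, so a component needs
 * just one call to check. */
#include <stdbool.h>

static volatile bool progress_thread_running = false;  /* hypothetical flag */

/* Set by the progress thread itself, right before it enters its loop. */
void progress_thread_mark_running(void)
{
    progress_thread_running = true;
}

/* The single call a component would use. */
bool progress_thread_is_running(void)
{
    return progress_thread_running;
}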
On Thu, 19 Jan 2006, Rainer Keller wrote:
> And yes, when I run with the basic-coll, we also hang ;-]
In the first case you're running:
#8 0x407307a4 in ompi_coll_tuned_bcast_intra_basic_linear (buff=0x80c9c58,
which is actually the basic collective anyway... it just got there via a different path.
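A schematic sketch (not Open MPI's actual code; all names below are invented for illustration) of why the hang shows up in the linear broadcast either way: the tuned component's decision logic can select the same linear algorithm that the basic component always uses, just reached through a different path.

/* Schematic illustration only -- not Open MPI's real dispatch code. */
typedef int (*bcast_fn_t)(void *buf, int count);

static int bcast_linear(void *buf, int count)   { (void)buf; (void)count; return 0; }
static int bcast_binomial(void *buf, int count) { (void)buf; (void)count; return 0; }

/* "basic"-style component: always the linear algorithm. */
static bcast_fn_t basic_bcast = bcast_linear;

/* "tuned"-style component: chooses by size, but may still pick linear,
 * so both components end up in the same broadcast routine. */
static bcast_fn_t tuned_bcast_decision(int count, int comm_size)
{
    if (comm_size <= 4 || count < 1024) {
        return bcast_linear;   /* same algorithm as the basic path */
    }
    return bcast_binomial;
}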
trunk/ompi/mca/btl/tcp/btl_tcp_frag.h 2006-01-14 20:21:44 UTC (rev 8692)
@@ -49,7 +49,7 @@
struct mca_btl_base_endpoint_t *endpoint;
struct mca_btl_tcp_module_t* btl;
mca_btl_tcp_hdr_t hdr;
-struct iovec iov[MCA_BTL_TCP_FRAG_IOVEC_NUMBER];
+struct iovec iov[MCA_BTL_TCP_FRAG_IOVEC_NUM
if I select the basic coll component. Anyway, here is the output you requested. The full output is about 140 MB, so I killed it before it finished...
Tim
Quoting Graham E Fagg:
> Hi Tim,
> Nope, can you rerun with mpirun -np 4 -mca coll_base_verbose 1 and email me the output?
> Thanks
> G
> On Tue, 10
But thinking about this today, I have no idea what MPI_REPLACE is supposed to do in a collective reduction. Specifically -- what value should end up in the target buffer? It doesn't make sense.
I think that this is a grey area in the MPI standard -- MPI_REPLACE is an MPI_Op, but it should *only* be used with MPI_Accumulate (the one-sided operations), not with the collective reduction calls.
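For reference, a minimal sketch of the one context where MPI_REPLACE is unambiguously defined: MPI_Accumulate on an RMA window. The choice of rank 0 as the target and the single-int window are just for illustration, and error checking is omitted.

/* Minimal sketch: MPI_REPLACE used with MPI_Accumulate on an RMA window.
 * When several ranks accumulate with MPI_REPLACE into the same location
 * in one epoch, the target ends up with one rank's value (which one is
 * unspecified) -- well defined per element, but with no sensible meaning
 * as a reduction over all ranks, which is the question raised above. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int val = 0, mine;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &mine);

    MPI_Win_create(&val, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    /* Every rank "replaces" the int exposed by rank 0. */
    MPI_Accumulate(&mine, 1, MPI_INT, 0, 0, 1, MPI_INT, MPI_REPLACE, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}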