Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-10-29 Thread Jeff Squyres
I have fixes for this, but they're .m4 changes (stupid VPATH stuff; sorry) -- 
so I'll commit them tonight after 6pm US Eastern.
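For readers hitting the same failure: `make distcheck` unpacks the dist tarball and configures it in a separate VPATH build tree, so every template that `config.status` processes has to actually ship in the tarball. A hypothetical Makefile.am fragment showing the usual shape of such a fix; the path comes from the error below, and the actual Open MPI commit may well differ:

```makefile
# Illustrative only -- not the actual commit.  `make distcheck` builds from
# the dist tarball in a fresh VPATH tree, so a template named in
# AC_CONFIG_FILES (here, libevent's event-config.h.in from the error below)
# must be distributed, or config.status cannot find it:
EXTRA_DIST = libevent/include/event2/event-config.h.in
```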



On Oct 28, 2010, at 9:16 PM, MPI Team wrote:

> 
> ERROR: Command returned a non-zero exit status (trunk):
>   make distcheck
> 
> Start time: Thu Oct 28 21:00:05 EDT 2010
> End time:   Thu Oct 28 21:16:19 EDT 2010
> 
> ===
> [... previous lines snipped ...]
> checking for OPAL CXXFLAGS... -pthread 
> checking for OPAL CXXFLAGS_PREFIX...  
> checking for OPAL LDFLAGS...   
> checking for OPAL LIBS... -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for OPAL extra include dirs... 
> checking for ORTE CPPFLAGS... 
> checking for ORTE CXXFLAGS... -pthread 
> checking for ORTE CXXFLAGS_PREFIX...  
> checking for ORTE CFLAGS... -pthread 
> checking for ORTE CFLAGS_PREFIX...  
> checking for ORTE LDFLAGS...
> checking for ORTE LIBS...  -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for ORTE extra include dirs... 
> checking for OMPI CPPFLAGS... 
> checking for OMPI CFLAGS... -pthread 
> checking for OMPI CFLAGS_PREFIX...  
> checking for OMPI CXXFLAGS... -pthread 
> checking for OMPI CXXFLAGS_PREFIX...  
> checking for OMPI FFLAGS... -pthread 
> checking for OMPI FFLAGS_PREFIX...  
> checking for OMPI FCFLAGS... -pthread 
> checking for OMPI FCFLAGS_PREFIX...  
> checking for OMPI LDFLAGS... 
> checking for OMPI LIBS...   -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for OMPI extra include dirs... 
> 
> *** Final output
> configure: creating ./config.status
> config.status: creating ompi/include/ompi/version.h
> config.status: creating orte/include/orte/version.h
> config.status: creating opal/include/opal/version.h
> config.status: creating opal/mca/backtrace/Makefile
> config.status: creating opal/mca/backtrace/printstack/Makefile
> config.status: creating opal/mca/backtrace/execinfo/Makefile
> config.status: creating opal/mca/backtrace/darwin/Makefile
> config.status: creating opal/mca/backtrace/none/Makefile
> config.status: creating opal/mca/carto/Makefile
> config.status: creating opal/mca/carto/auto_detect/Makefile
> config.status: creating opal/mca/carto/file/Makefile
> config.status: creating opal/mca/compress/Makefile
> config.status: creating opal/mca/compress/gzip/Makefile
> config.status: creating opal/mca/compress/bzip/Makefile
> config.status: creating opal/mca/crs/Makefile
> config.status: creating opal/mca/crs/none/Makefile
> config.status: creating opal/mca/crs/self/Makefile
> config.status: creating opal/mca/crs/blcr/Makefile
> config.status: creating opal/mca/event/Makefile
> config.status: creating opal/mca/event/libevent207/Makefile
> config.status: error: cannot find input file: 
> `opal/mca/event/libevent207/libevent/include/event2/event-config.h.in'
> make: *** [distcheck] Error 1
> ===
> 
> Your friendly daemon,
> Cyrador
> ___
> testing mailing list
> test...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/testing


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




[OMPI devel] Cost for AllGatherV() Operation

2010-10-29 Thread Tim Stitt

Dear OpenMPI Developers,

I would be grateful if someone could briefly describe the cost 
(complexity) of the MPI_Allgatherv() collective operation in the current 
release of Open MPI.


For MPICH2 I believe the cost is ceiling(lg p). Can anyone comment on 
the algorithms and cost used in the Open MPI implementation?


Thanks in advance,

Tim.


Re: [OMPI devel] Cost for AllGatherV() Operation

2010-10-29 Thread George Bosilca
Tim,

The collectives in Open MPI work differently than in MPICH: they are 
dynamically selected based on the number of processes involved and the amount 
of data to be exchanged. It is therefore difficult to answer your question 
without knowing that information.

There are 4 algorithms for MPI_Allgather in Open MPI:
- recursive doubling
- Bruck
- ring
- neighbor exchange

I think their complexity is described in "Performance analysis of MPI 
collective operations" (http://www.springerlink.com/content/542207241006p64h/).
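As a rough illustration of how these algorithms differ in their latency (step-count) terms, here is a small Python sketch. The step counts are the ones commonly cited in the literature for these algorithm families; this is not Open MPI's actual selection logic, and bandwidth terms (which also drive the selection) are omitted:

```python
import math

# Step counts (latency terms) commonly cited for these allgather
# algorithm families; an illustrative sketch, not Open MPI's decision
# logic.  p = number of processes.
def allgather_steps(algorithm: str, p: int) -> int:
    if algorithm == "recursive_doubling":
        return math.ceil(math.log2(p))  # pairwise exchange, doubling distance
    if algorithm == "bruck":
        return math.ceil(math.log2(p))  # also handles non-power-of-two p
    if algorithm == "ring":
        return p - 1                    # each step passes one block to a neighbor
    if algorithm == "neighbor_exchange":
        return p // 2                   # pairs exchange with alternating neighbors
    raise ValueError(f"unknown algorithm: {algorithm}")

for alg in ("recursive_doubling", "bruck", "ring", "neighbor_exchange"):
    print(alg, allgather_steps(alg, 16))
```

For small messages the logarithmic algorithms win on latency; for large messages the ring and neighbor-exchange variants are typically preferred for their bandwidth behavior, which is one reason the selection depends on both process count and message size.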

  george.

On Oct 29, 2010, at 15:42, Tim Stitt wrote:

> Dear OpenMPI Developers,
> 
> I would be grateful if someone could briefly describe the cost (complexity) 
> of the MPI_Allgatherv() collective operation in the current release of Open MPI.
> 
> For MPICH2 I believe the cost is ceiling(lg p). Can anyone comment on the 
> algorithms and cost used in the Open MPI implementation?
> 
> Thanks in advance,
> 
> Tim.
> ___
> devel mailing list
> de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/devel