[OMPI devel] Build failure on FreeBSD 7

2008-04-04 Thread Karol Mroz
Hello everyone... it's been some time since I posted here. I pulled the latest svn revision (18079) and had some trouble building Open MPI on a FreeBSD 7 machine (i386). Make failed when compiling opal/event/kqueue.c. It appears that FreeBSD needs sys/types.h, sys/ioctl.h, termios.h and libuti...
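A minimal sketch, under assumptions, of the kind of include ordering the report points at for kqueue code on FreeBSD 7; in the real tree these would sit behind configure-generated HAVE_* guards, and this is illustrative rather than the actual Open MPI fix:

    /* Hypothetical include order for kqueue code on FreeBSD 7; the real
     * source would wrap these in configure-generated HAVE_* guards. */
    #include <sys/types.h>   /* u_short/u_int needed by <sys/event.h> */
    #include <sys/ioctl.h>
    #include <termios.h>
    #include <sys/event.h>   /* kqueue()/kevent() */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int kq = kqueue();          /* smoke test that the headers suffice */
        if (kq < 0) { perror("kqueue"); return 1; }
        printf("kqueue fd = %d\n", kq);
        close(kq);
        return 0;
    }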

Re: [OMPI devel] MPI_Comm_connect/Accept

2008-04-04 Thread Ralph H Castain
Okay, I have a partial fix in there now. You'll have to use -mca routed unity as I still need to fix it for routed tree. Couple of things: 1. I fixed the --debug flag so it automatically turns on the debug output from the data server code itself. Now ompi-server will tell you when it is accessed.

Re: [OMPI devel] MPI_Comm_connect/Accept

2008-04-04 Thread Ralph H Castain
Well, something got borked in here - will have to fix it, so this will probably not get done until next week. On 4/4/08 12:26 PM, "Ralph H Castain" wrote: > Yeah, you didn't specify the file correctly... plus I found a bug in the code when I looked (out-of-date a little in orterun). I am...

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread Edgar Gabriel
Actually, we used LZO a looong time ago with PACX-MPI; it was indeed faster than zlib. Our findings at that time were, however, similar to what George mentioned, namely that a benefit from compression was only visible if the network latency was really high (e.g. multiple ms)... Thanks, Edgar. Roland D...

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread Jeff Squyres
LZO looks cool, but it's unfortunately GPL (Open MPI is BSD). Bummer. On Apr 4, 2008, at 2:29 PM, Roland Dreier wrote: Based on some discussion on this list, I integrated a zlib-based compression ability into ORTE. Since the launch message sent to the orteds and the modex between the applica...

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread Roland Dreier
> Based on some discussion on this list, I integrated a zlib-based compression ability into ORTE. Since the launch message sent to the orteds and the modex between the application procs are the only places where messages of any size are sent, I only implemented compression for those two e...

Re: [OMPI devel] MPI_Comm_connect/Accept

2008-04-04 Thread Ralph H Castain
Yeah, you didn't specify the file correctly... plus I found a bug in the code when I looked (out-of-date a little in orterun). I am updating orterun (commit soon) and will include a better help message about the proper format of the orterun cmd-line option. The syntax is: -ompi-server uri or -omp...

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread George Bosilca
Ralph, There are several studies about compression and data exchange. A few years ago we integrated such a mechanism (adaptive compression of communication) into one of the projects here at ICL (called GridSolve). The idea was to optimize the network traffic for sending large matrices used for...

Re: [OMPI devel] MPI_Comm_connect/Accept

2008-04-04 Thread Aurélien Bouteiller
Ralph, I've not been very successful at using ompi-server. I tried this:
xterm1$ ompi-server --debug-devel -d --report-uri test
[grosse-pomme.local:01097] proc_info: hnp_uri NULL daemon uri NULL
[grosse-pomme.local:01097] [[34900,0],0] ompi-server: up and running!
xterm2$ mpirun -ompi...
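For context, a minimal sketch (an assumed reproducer, not Aurélien's actual test code) of the accept side of the connect/accept pattern this thread exercises; the service name "ompi-server-test" is a placeholder:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm client = MPI_COMM_NULL;

        MPI_Init(&argc, &argv);

        /* Open a port and publish it; with mpirun pointed at an ompi-server
         * URI, publish/lookup goes through that data server, so a second,
         * independent mpirun can find the port.  Name is a placeholder. */
        MPI_Open_port(MPI_INFO_NULL, port);
        MPI_Publish_name("ompi-server-test", MPI_INFO_NULL, port);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        printf("server side: client connected\n");

        MPI_Unpublish_name("ompi-server-test", MPI_INFO_NULL, port);
        MPI_Close_port(port);
        MPI_Comm_disconnect(&client);
        MPI_Finalize();
        return 0;
    }

The connecting job would call MPI_Lookup_name("ompi-server-test", ...) and then MPI_Comm_connect on the returned port; both mpirun invocations have to be told about the same ompi-server URI for the lookup to succeed.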

[OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread Ralph H Castain
Hello all. Based on some discussion on this list, I integrated a zlib-based compression ability into ORTE. Since the launch message sent to the orteds and the modex between the application procs are the only places where messages of any size are sent, I only implemented compression for those two ex...
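A rough sketch, under assumptions, of what zlib-level compression of a payload such as the launch or modex message could look like; the function name and framing are illustrative, not ORTE's actual implementation:

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    /* Compress a message buffer with zlib before sending it.  A real sender
     * would prepend the uncompressed length to the wire message so the
     * receiver can size its uncompress() buffer.  Names are illustrative. */
    static unsigned char *compress_msg(const unsigned char *msg, uLong len,
                                       uLong *out_len)
    {
        uLongf clen = compressBound(len);
        unsigned char *out = malloc(clen);
        if (out == NULL)
            return NULL;
        if (compress2(out, &clen, msg, len, Z_BEST_SPEED) != Z_OK) {
            free(out);
            return NULL;
        }
        *out_len = clen;
        return out;
    }

    int main(void)
    {
        const char msg[] = "fake modex payload: rank=0 node=a.b.c ... (repetitive in practice)";
        uLong clen = 0;
        unsigned char *c = compress_msg((const unsigned char *)msg, sizeof(msg), &clen);
        if (c != NULL) {
            printf("original %lu bytes, compressed %lu bytes\n",
                   (unsigned long)sizeof(msg), (unsigned long)clen);
            free(c);
        }
        return 0;
    }

Whether this pays off comes down to compression time versus bytes saved on the wire, which is what the measurements discussed in this thread are getting at.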

Re: [OMPI devel] init_thread + spawn error

2008-04-04 Thread Tim Prins
Thanks for the report. As Ralph indicated, the threading support in Open MPI is not good right now, but we are working to make it better. I have filed a ticket (https://svn.open-mpi.org/trac/ompi/ticket/1267) so we do not lose track of this issue, and attached a potential fix to the ticket.
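For reference, a minimal sketch of the MPI_Init_thread + MPI_Comm_spawn combination the ticket covers (an assumed reproducer shape; the spawned executable "./child" is a placeholder):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided = MPI_THREAD_SINGLE;
        MPI_Comm child = MPI_COMM_NULL;
        int errcodes[2];

        /* Ask for full thread support, then spawn -- the combination the
         * ticket tracks.  "./child" is a placeholder executable name. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        printf("requested MPI_THREAD_MULTIPLE, got level %d\n", provided);

        MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &child, errcodes);

        MPI_Comm_disconnect(&child);
        MPI_Finalize();
        return 0;
    }

Printing the provided thread level is worthwhile because, at this point, Open MPI may return a level lower than MPI_THREAD_MULTIPLE even when it is requested.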