[OMPI devel] Enabling debugging and profiling in openMPI (make "CFLAGS=-pg -g")

2009-06-12 Thread Leo P.
Hi everyone,

I am trying to understand the Open MPI code, so I was trying to enable debugging and
profiling by issuing

$ make "CFLAGS=-pg -g"

But I am getting this error.

libtool: link: ( cd ".libs" && rm -f "mca_paffinity_linux.la" && ln -s 
"../mca_paffinity_linux.la" "mca_paffinity_linux.la" )
make[3]: Leaving directory 
`/home/Desktop/openmpi-1.3.2/opal/mca/paffinity/linux'
make[2]: Leaving directory 
`/home/Desktop/openmpi-1.3.2/opal/mca/paffinity/linux'
Making all in tools/wrappers
make[2]: Entering directory `/home/Desktop/openmpi-1.3.2/opal/tools/wrappers'
depbase=`echo opal_wrapper.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
gcc "-DEXEEXT=\"\"" -I. -I../../../opal/include -I../../../orte/include 
-I../../../ompi/include -I../../../opal/mca/paffinity/linux/plpa/src/libplpa   
-I../../..-pg -g -MT opal_wrapper.o -MD -MP -MF $depbase.Tpo -c -o 
opal_wrapper.o opal_wrapper.c &&\
mv -f $depbase.Tpo $depbase.Po
/bin/bash ../../../libtool --tag=CC   --mode=link gcc  -pg -g  -export-dynamic  
 -o opal_wrapper opal_wrapper.o ../../../opal/libopen-pal.la -lnsl -lutil  -lm 
libtool: link: gcc -pg -g -o .libs/opal_wrapper opal_wrapper.o 
-Wl,--export-dynamic  ../../../opal/.libs/libopen-pal.so -ldl -lnsl -lutil -lm
../../../opal/.libs/libopen-pal.so: undefined reference to `pthread_key_create'
../../../opal/.libs/libopen-pal.so: undefined reference to `pthread_getspecific'
../../../opal/.libs/libopen-pal.so: undefined reference to `pthread_create'
../../../opal/.libs/libopen-pal.so: undefined reference to `pthread_atfork'
../../../opal/.libs/libopen-pal.so: undefined reference to `pthread_setspecific'
../../../opal/.libs/libopen-pal.so: undefined reference to `pthread_join'
collect2: ld returned 1 exit status
make[2]: *** [opal_wrapper] Error 1
make[2]: Leaving directory `/home//Desktop/openmpi-1.3.2/opal/tools/wrappers'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/Desktop/openmpi-1.3.2/opal'
make: *** [all-recursive] Error 1

Is there any other way of enabling debugging and profiling in Open MPI?

Leo



Re: [OMPI devel] Enabling debugging and profiling in openMPI (make "CFLAGS=-pg -g")

2009-06-12 Thread Leo P.
Thank you Ralph and Samuel. 

Sorry for the complete newbie question. 

The reason I want to study Open MPI is that I want to make Open MPI support
nodes that are behind NAT or a firewall. If you could give me some pointers on
how to go about doing this, I would appreciate it a lot. I am considering this
for my thesis project.

Sincerely,
LEO





From: Ralph Castain 
To: Open MPI Developers 
Sent: Friday, 12 June, 2009 9:56:16 PM
Subject: Re: [OMPI devel] Enabling debugging and profiling in openMPI (make 
"CFLAGS=-pg -g")

If you do a "./configure --help" you will get a complete list of the configure 
options. You may want to turn on more things than just enable-debug, though 
that is the critical first step.
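For what it's worth, the original link failure is most likely a side effect of overriding CFLAGS on the make command line, which replaces the flags configure detected (including the thread flags, hence the `pthread_*` undefined references). A sketch of the usual fix, passing the flags at configure time instead (the exact flags here are an example):

```shell
# Pass debug/profiling flags to configure so libtool keeps the
# thread flags it detected; overriding CFLAGS at "make" time drops them.
./configure --enable-debug CFLAGS="-pg -g"
make all
```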



On Jun 12, 2009, at 8:31 AM, Samuel K. Gutierrez wrote:

Hi,

Let me begin by stating that I'm at most an Open MPI novice - but you may want 
to try the addition of the --enable-debug configure option.  That is, for 
example:

./configure --enable-debug; make

Hope this helps.

Samuel K. Gutierrez
 


[OMPI devel] complete newbie question regarding --enable-mpi-profile option

2009-06-14 Thread Leo P.
Hi Everyone, 

I have been trying to enable profiling of the Open MPI code.

Earlier I also saw a thread
[http://www.open-mpi.org/community/lists/users/2008/04/5369.php] which talks
about using the --enable-mpi-profile configure option, which I have done. But I
have not been able to get hold of any profiling data. I tried installing Vampir
from https://computing.llnl.gov/code/vgv.html#installations but I was not able
to install it.

So I wanted to know how people profile the core Open MPI code.

I am a complete newbie and would appreciate any information. 

Also, I was wondering whether gdb can be used with Open MPI. I know about the
-d option of mpirun, but I need to use gdb if possible. I think I have done all
the necessary things to enable profiling and debugging, but I am missing
something here.

Currently I am configuring Open MPI with the following parameters:
 ./configure -enable-debug --with-devel-headers --enable-trace 
--enable-mpi-profile --enable-mem-debug 


Leo P.




Re: [OMPI devel] complete newbie question regarding --enable-mpi-profile option

2009-06-14 Thread Leo P.
Also i was wondering whether gdb could be used with openMPI. I know
about -d option in mpirun but i need to use gdb if its possible. I
think i have done all the necessary things to enable profiling and
debuging but i am missing something here. 

Sorry guys, I forgot I could debug shared-library functions in gdb. :) So
currently I am using

$ mpirun -np 1 xterm -e gdb hello

to debug the Open MPI source.

If only I could get the profiling information, it would help me a lot.

Leo :)




Re: [OMPI devel] complete newbie question regarding --enable-mpi-profile option

2009-06-15 Thread Leo P.
Hi Nik,

Thanks for the information you have provided. It helps me immensely.


Anjin





From: Nikolay Molchanov 
To: leo_7892...@yahoo.co.in
Cc: Open MPI Developers 
Sent: Monday, 15 June, 2009 12:18:50 PM
Subject: Re: [OMPI devel] complete newbie question regarding 
--enable-mpi-profile option

Hi Leo,

If you want to get the profiling information, you can try Sun Studio
Performance Analyzer. You can download SS12.1 EA release -
here is a pointer to the web page:

http://developers.sun.com/sunstudio/downloads/express/index.jsp

The final version will be available soon, but EA should be good enough 
to try :-) I suggest you download EA as a tar file, extract it, 
set your PATH, and run the following commands:

$ collect  -M  OPENMPI  mpirun  -np  2  --  hello 

Note: it is necessary to add "--" after mpirun arguments.
This command will create a "test.1.er" directory (experiment).
To view the experiment, run "analyzer" (Java GUI tool):

$ analyzer  test.1.er

If everything works properly you will see MPI Timeline and other tabs,
that show profiling information. Please, make sure you have java 1.5 or
newer in your PATH.
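(For reference, the `hello` above is just an ordinary MPI program; a minimal version consistent with the output shown later in this thread might look like the following. Compile it with `mpicc -g -o hello hello.c`.)

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */
    MPI_Get_processor_name(host, &len);
    printf("Hello, world.  I am %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
```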

Thanks,
Nik


Re: [OMPI devel] complete newbie question regarding --enable-mpi-profile option

2009-06-15 Thread Leo P.
Hi Nik,

I tried the Sun Studio Performance Analyzer, and it was able to profile the
application but not the Open MPI source code; the source code was empty in the
Analyzer. I tried, but I was not able to get the profiling information for
Open MPI.

So I installed VampirTrace and, as suggested, I was able to get OTF files with
the profiling information. But I fail to understand what I should do now.
1. Am I supposed to download additional software for trace visualization? Is
it this: https://www.ssl-id.net/www.vampir.eu/index.html
2. If not, how can I visualize the trace information I got from VampirTrace?

Sorry for bugging everyone so much about this, but I have spent a lot of time
on this alone and I am not getting output.

Leo P.






Re: [OMPI devel] complete newbie question regarding --enable-mpi-profile option

2009-06-15 Thread Leo P.
rchives/A1790244868
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_coll_inter.so.08_UaWfHvW0 
-> ../../archives/A2775785482
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_coll_self.so.08_UaWfHvW0 -> 
../../archives/A1454987564
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 mca_coll_sm.so.08_UaWfHvW0 -> 
../../archives/A154432543
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_coll_sync.so.08_UaWfHvW0 -> 
../../archives/A3252039816
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_coll_tuned.so.08_UaWfHvW0 
-> ../../archives/A3648561707
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_dpm_orte.so.08_UaWfHvW0 -> 
../../archives/A3219106871
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_ess_env.so.08_UaWfHvW0 -> 
../../archives/A1437804225
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 mca_grpcomm_bad.so.08_UaWfHvW0 
-> ../../archives/A191452387
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 
mca_grpcomm_basic.so.08_UaWfHvW0 -> ../../archives/A4287186513
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_mpool_fake.so.08_UaWfHvW0 
-> ../../archives/A3760992247
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 mca_mpool_rdma.so.08_UaWfHvW0 
-> ../../archives/A576932308
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_mpool_sm.so.08_UaWfHvW0 -> 
../../archives/A2901485964
lrwxrwxrwx 1 st105788 anjin 22 2009-06-15 13:39 
mca_notifier_syslog.so.08_UaWfHvW0 -> ../../archives/A791485
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_oob_tcp.so.08_UaWfHvW0 -> 
../../archives/A2708436963
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_osc_pt2pt.so.08_UaWfHvW0 -> 
../../archives/A1267378599
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_osc_rdma.so.08_UaWfHvW0 -> 
../../archives/A2369457763
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 
mca_paffinity_linux.so.08_UaWfHvW0 -> ../../archives/A3261151085
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 mca_pml_cm.so.08_UaWfHvW0 -> 
../../archives/A651907835
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_pml_csum.so.08_UaWfHvW0 -> 
../../archives/A1877586533
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_pml_ob1.so.08_UaWfHvW0 -> 
../../archives/A1096690429
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 mca_pml_v.so.08_UaWfHvW0 -> 
../../archives/A2927666762
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 mca_pubsub_orte.so.08_UaWfHvW0 
-> ../../archives/A607593425
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 mca_rcache_vma.so.08_UaWfHvW0 
-> ../../archives/A844609745
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 mca_rml_oob.so.08_UaWfHvW0 -> 
../../archives/A396208440
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 
mca_routed_binomial.so.08_UaWfHvW0 -> ../../archives/A3003421601
lrwxrwxrwx 1 st105788 anjin 25 2009-06-15 13:39 
mca_routed_direct.so.08_UaWfHvW0 -> ../../archives/A331661572
lrwxrwxrwx 1 st105788 anjin 26 2009-06-15 13:39 
mca_routed_linear.so.08_UaWfHvW0 -> ../../archives/A3804511592
--


collect  -M OPENMPI  mpirun  -np  2  --  hello 
Creating experiment database test.4.er ...
Hello, world.  I am 1 of 2 on anjin-IBM-31
Hello, world.  I am 0 of 2 on anjin-IBM-31

analyzer  test.1.er
I have included a screenshot (DIR=screenshots) of the profiler in the folder
attached to this email.

The application itself (source code or binary): everything is included in the
folder. BTW, I am using an Ubuntu 8.04 32-bit machine.

And again, thanks for helping me.

Regards,
Leo P.





From: Nikolay Molchanov 
To: Leo P. 
Cc: de...@open-mpi.org
Sent: Tuesday, 16 June, 2009 12:34:59 AM
Subject: Re: [OMPI devel] complete newbie question regarding 
--enable-mpi-profile option

Hi Leo,

I think there is something wrong in the way the application is built,
or in the way you run collect. We run MPI tests every night, so at
least simple tests should work just fine. Could you, please, send me 
more details about your MPI and your application?

1. MPI version strings

which mpicc

mpicc -version

which mpirun

mpirun -version

2. Analyzer version strings

collect -V

analyzer -V

3. Experiment listing

ls -lR test.1.er

4. Log file (starting from the collect command):

collect  -M OPENMPI  mpirun  -np  2  --  hello 

...

analyzer  test.1.er

...

5. The application itself (source code or binary).

I'll run this application on our system, and let you know 
the result.

Thanks,
Nik 


Re: [OMPI devel] complete newbie question regarding --enable-mpi-profile option

2009-06-15 Thread Leo P.
Hi Eugene,

Thanks for the information. I had already clicked the "Show All" button in the
profiler before I sent the email to the group, but it did not work. :(

Also, Eugene, can you please help me understand what turning on the -g option
means? Currently I am building with the following options:

./configure --with-devel-headers --enable-trace --enable-mpi-profile 
--enable-mem-debug --enable-debug

Do I need to add something else here?

Also, I don't understand what you mean by a tool ecosystem. [I am a complete
newbie.]

BTW, if you are sending Nik's phone number, I would like to get yours also,
just in case Nik is not picking up his phone. :)

Anyway, if there is anything I can do to contribute, please do let me know. I
would love to be a part of this great community.

Regards,
Leo.P 




From: Eugene Loh 
To: Open MPI Developers 
Cc: nikolay.molcha...@sun.com
Sent: Tuesday, 16 June, 2009 1:11:15 AM
Subject: Re: [OMPI devel] complete newbie question regarding 
--enable-mpi-profile option


It's probably fine to bug people about some of this.  OMPI would
benefit from having a tool ecosystem around it.  There's VampirTrace
and PERUSE instrumentation and stuff, but some more activity/attention
in this area would be better.

I don't know that VampirTrace will give what you're looking for.  You
seem to want to profile the internals of OMPI.  VT basically just
instruments entry into and exit out of MPI.  In contrast, PERUSE
instruments MPI internals.
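(For context: the PMPI profiling interface that --enable-mpi-profile relates to is the mechanism tools like VampirTrace interpose on. Roughly speaking, a tool defines its own MPI_* entry points that log and then forward to the real, name-shifted PMPI_* ones. A minimal sketch of such an interposer, assuming it is compiled with mpicc and linked ahead of the application:)

```c
#include <stdio.h>
#include <mpi.h>

/* PMPI interposition: the application's call to MPI_Send lands here;
 * after logging, the wrapper forwards to the real implementation via
 * the name-shifted PMPI_Send entry point. */
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest,
             int tag, MPI_Comm comm)
{
    fprintf(stderr, "MPI_Send: %d elements to rank %d (tag %d)\n",
            count, dest, tag);
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
```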

Sun Studio Performance Analyzer should also work.  I know I've used it
to profile both MPI apps and the internals of OMPI.

One of the problems...  I mean, one of the *features* of Sun
Performance Analyzer is that it *HIDES* the internals of the MPI
library.  There is a concept of user and expert models and stuff.  Most
users just want to see their program the way they wrote it (whether for
Java, OpenMP, MPI, etc.).  So, Performance Analyzer hides the "black
box" stuff (internals of Java, OpenMP, MPI, etc.).  But, *you* want
"expert" capabilities.  You want to see what's under the hood.  So,
after you have collected data and have started the Analyzer GUI, choose
"View" -> "Show/Hide Functions..." -> "Show All".  Maybe there
are other things you're encountering, but for me that changes MPI calls
from being black boxes to exposing where OMPI is spending its time: 
PML functions, BTL functions, etc.

To get source code information, you also need to build OMPI with -g
turned on.  That will include debugging information.  With some
compilers, turning -g on might turn off optimizations... I just don't
know.  With Sun Studio compilers, -g will not change your optimizations
-- it will only add debugging/symbolic information, compiler commentary
on optimizations, etc.

If you want to ask Nik or me other questions, feel free.  I'll send you
Nik's home phone number!  :^)




[OMPI devel] Just a suggestion about a formation of new openMPI student mailing list

2009-06-17 Thread Leo P.
Hi everyone, 

I have found the Open MPI community filled with cooperative and helpful
people, and I would like to thank them through this email [Nik, Eugene, Ralph,
Mitchel and others].

Also, I would like to suggest one or maybe two things.

1. First of all, I would like to suggest a separate mailing list for students
like me who want to learn about Open MPI. Since questions from someone like me
are going to be simpler than those of professional developers, maybe the
students on the student mailing list could answer them; if not, we could post
to the developers mailing list. I think this would limit the email on the
developers list.

2. Secondly, if developers could volunteer to become mentors for students
(particularly thesis students like me :) ), I think they would benefit a lot.


Regards,
Leo P.



Re: [OMPI devel] Just a suggestion about a formation of new openMPI student mailing list

2009-06-17 Thread Leo P.
Hi Eugene,

I was just thinking about Ubuntu's MOTU initiative
[https://wiki.ubuntu.com/MOTU/Mentoring] when I talked about a mentoring
program for Open MPI.

Also, I thought the user mailing list was for talking about user-level
programs, not things related to the core Open MPI functions and so on.

And yes, I have observed how ad hoc relationships spring up in the Open MPI
community. :)

Regards,
Leo. P





From: Eugene Loh 
To: Open MPI Developers 
Sent: Wednesday, 17 June, 2009 8:44:07 PM
Subject: Re: [OMPI devel] Just a suggestion about a formation of new openMPI 
student mailing list

Leo P. wrote:
I found openMPI community filled with co-operative and helpful people. And
would like to thank them through this email [Nik, Eugene, Ralph, Mitchel and
others].

You are very gracious.

Also I would like to suggest one or may be two things.

1. First of all i would like to suggest a different mailing list for students
like me who wants to learn about openMPI. Since questions from someone like is
going to be simpler than those of other professional developers. Maybe the
students from the student mailing list can solve it. If not we can post in the
developers mailing list. I think this will limit the email in the developers
list.

I think there is already such a list.  It's the "users" (rather than "devel")
list.

2. Secondly if the developer could volunteer to become mentors for student
(particularly thesis student like me :) ). I think they would benefit a lot.

Perhaps some of those relationships spring up "ad hoc" on the mail list, as
you have already observed.




[OMPI devel] some question about OMPI communication infrastructure

2009-06-18 Thread Leo P.

Hi Everyone, 

I wanted to ask some questions about things I am having trouble understanding.

1. From my understanding of the MPI_INIT function, I assumed MPI_INIT
typically procures the resources required, including sockets. But I now
understand from the documentation that Open MPI only allocates a socket when a
process has to send a message to a peer. If someone could tell me where
exactly in the code this happens, I would appreciate it a lot. I guess it
happens in the ORTE layer, so I am spending time looking at that; but if
someone could tell me in which function it happens, it would help me a lot.

2. Also, I think most MPI implementations embed the source and destination
address in the communication protocol. Am I right to assume Open MPI does the
same thing? Is this also happening in the ORTE layer? Is there documentation
about this on the Open MPI site? If so, can someone please let me know where
it is.



Sincerely,
Leo.P 



Re: [OMPI devel] some question about OMPI communication infrastructure

2009-06-18 Thread Leo P.
Hi Ralph,

Thanks for the response. And yes, this gives me a good starting point. Thanks.

Leo.P





From: Ralph Castain 
To: Open MPI Developers 
Sent: Thursday, 18 June, 2009 9:26:46 PM
Subject: Re: [OMPI devel] some question about OMPI communication infrastructure

Hi Leo

The MPI communication code is contained in the ompi/mca/btl area. The BTLs
(Byte Transfer Layers) actually move the message data. Each BTL is responsible
for opening its own connections; ORTE has nothing to do with it, except to
transport out-of-band (OOB) messages to support creating the connection if
that specific BTL requires it.

If you are interested in TCP communications, you will find all of that code in
ompi/mca/btl/tcp. It can be confusing down there, so expect to spend a little
time trying to understand it. I believe Jeff has some documentation about it
on the Open MPI web site (perhaps a video?).

The source/destination is embedded in the message, again done by each BTL,
since the receiver must be a BTL of the same type. Again, this has nothing to
do with ORTE; it is purely up to the BTL. MPI communications are also
coordinated by the PML, which is responsible for matching messages with posted
receives. You might want to look at the ompi/mca/pml/ob1 code to understand
how that works.
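(The code areas mentioned in this message map to directories in the 1.3.2 source tree; paths are taken from this thread:)

```shell
# Relative to the openmpi-1.3.2 tarball root:
ls ompi/mca/btl/tcp   # TCP BTL: connection setup and data movement
ls ompi/mca/pml/ob1   # OB1 PML: message matching, posted receives
ls orte/mca/oob/tcp   # ORTE out-of-band (OOB) messaging over TCP
```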

Hope that gives you a starting point.
Ralph






Re: [OMPI devel] some question about OMPI communication infrastructure

2009-06-19 Thread Leo P.
Hi Jeff,

All the information provided here helps me a lot.

Thank you; I really, really appreciate it. :)

Regards,
Leo P.





From: Jeff Squyres 
To: Open MPI Developers 
Sent: Friday, 19 June, 2009 5:05:59 AM
Subject: Re: [OMPI devel] some question about OMPI communicationinfrastructure

A few addendums in no particular order...

1. The ompi/ tree is the MPI layer.  It's the top layer in the stack.  It uses 
ORTE and OPAL for various things.

2. The PML (point-to-point messaging layer) is the stuff right behind 
MPI_SEND, MPI_RECV, and friends.  We have two main PMLs: OB1 and CM (and some 
other similar ones, but they are not important here).  OB1 is probably the only 
one you care about.

3. OB1 implements the majority of the MPI rules and behavior.  It creates 
MPI_Requests, processes them, potentially segments and re-assembles individual 
messages, etc.

4. OB1 uses BTLs (Byte Transfer Layers) to actually move bytes between 
processes.  Each BTL is for a different kind of transport; OB1 uses the BML 
(BTL multiplexing layer; "layer" is a generous term here; think of it as 
trivial BTL pointer array management functionality) to manage all the BTLs that 
it is currently using.

5. OB1 and some of the BTLs use the ORTE layer for "out of band" 
communications, usually for initialization and finalization.  The "OOB" ORTE 
framework is more-or-less equivalent to the BTL framework, but it's *only* used 
for ORTE-level communications (not MPI communications).  The RML (routing 
message layer) ORTE framework is a layer on top of the OOB that has the 
potential to route messages as necessary.  To be clear, the OMPI layer always 
uses the RML, not the OOB directly (the RML uses the OOB underneath).

6. A bunch of OOB connections are made during the startup of the MPI job.  BTL 
connections are generally made on an "as needed" basis (e.g., during the first 
MPI_SEND to a given peer).  Ralph will have to fill you in on the details of 
how/when/where OOB connections are made.

7. There is unfortunately little documentation on the OMPI source code except 
comments in the code.  :-\  However, there was a nice writeup recently that may 
be helpful to you:

http://www.open-mpi.org/papers/trinity-btl-2009/

8. Once TCP BTL connections are made, IP addressing is no longer necessary in 
the OMPI-level messages that are sent, because the sockets are connected 
point-to-point -- i.e., the peer process is already known because we have a 
socket to it.  The MPI-level message headers instead contain things like the 
communicator ID, tag, etc.

Hope that helps!


On Jun 18, 2009, at 10:26 AM, Ralph Castain wrote:

> Hi Leo
> 
> The MPI communications are contained in the ompi/mca/btl code area. The BTLs 
> (Byte Transfer Layers) actually move the message data. Each BTL is 
> responsible for opening its own connections - ORTE has nothing to do with it, 
> except to transport out-of-band (OOB) messages to support creating the 
> connection if that specific BTL requires it.
> 
> If you are interested in TCP communications, you will find all of that code 
> in ompi/mca/btl/tcp. It can be confusing down there, so expect to spend a 
> little time trying to understand it. I believe Jeff has some documentation on 
> the OMPI web site about it (perhaps a video?).
> 
> The source/destination is embedded in the message, again done by each BTL 
> since the receiver must be a BTL of the same type. Again, this has nothing to 
> do with ORTE - it is purely up to the BTL. MPI communications are also 
> coordinated by the PML, which is responsible for matching messages with 
> posted receives. You might need to look at the ompi/mca/pml/ob1 code to 
> understand how that works.
> 
> Hope that gives you a starting point
> Ralph
> 

[OMPI devel] How is a MPI process launched ?

2010-04-26 Thread Leo P.
Hi everyone, 

I wanted to know how Open MPI launches an MPI process in a cluster environment. 
I am assuming that, if there is no process lifecycle manager, it will use rsh.


Any help would be greatly appreciated. 




Re: [OMPI devel] How is a MPI process launched ?

2010-04-26 Thread Leo P.
Hi Ralph, 

Thank you for your response. Really appreciate it, as usual. :)

It depends - if you have an environment like slurm, sge, or torque, then we use 
that to launch our daemons on each node. Otherwise, we default to using ssh.


Once the daemons are launched, we then tell the daemons what processes each is 
to run. So it is a two-stage launch procedure.

Ralph, after starting the orted daemon:
1. What is the role of ssh then?
2. Also, I am assuming the HNP is created before ssh is used. Am I right?
3. Also, Ralph, I would like to know how I can tell the daemon to run a 
process.
Ralph, I am trying to run a simple experiment where I can create a simple 
process between two computers using the SSH module, without using mpirun. I 
would like to hack the MPI library so that I can send a simple "Hello World" 
from process A running on computer A to process B running on computer B. I 
would create both processes myself. I hope I am being clear. 

Basically, what I am saying is that I would like to create an MPI_COMM_WORLD 
comprising two processes, process A and process B. For that, I would like to 
create functions called Create_Process_A, Create_Process_B, and Send_Message 
by utilizing the Open MPI source code.

Also, I know I should be looking into the PLM, RMAPS, ODLS, and ORTED 
subsystems. But Ralph, if you guide me a bit, I can finish the experiment with 
fewer sleepless nights, headaches, and stress.

Leo P






Re: [OMPI devel] How is a MPI process launched ?

2010-04-26 Thread Leo P.
Hi Ralph,

Is there some reason why you don't just use MPI_Comm_spawn? This is precisely 
what it was created to do. You can still execute it from a singleton, if you 
don't want to start your first process via mpirun (and is there some reason why 
you don't use mpirun???).

The reason why I am using MPI_Comm_spawn and a singleton is that I am going to 
route the MPI communication (BTL and OOB) through another computer before it 
reaches its intended destination. :)

Yes, you -could- hack the MPI code to do this. Starting from scratch, with 
little knowledge of the code base - figure on taking awhile. I could probably 
come up with a way to do it, but this would have to be a very low priority for 
me.

I am trying to learn the Open MPI code base, and I know it is going to take 
time. Now I need to understand how the processes are started and made part of 
MPI_COMM_WORLD. I really want to do this, but I need help. If you can suggest 
how this can be done, I would really appreciate it.

Leo





From: Ralph Castain 
To: Open MPI Developers 
Sent: Tue, 27 April, 2010 6:44:49 AM
Subject: Re: [OMPI devel] How is a MPI process launched ?

I sincerely hope you are kidding :-)

Is there some reason why you don't just use MPI_Comm_spawn? This is precisely 
what it was created to do. You can still execute it from a singleton, if you 
don't want to start your first process via mpirun (and is there some reason why 
you don't use mpirun???).

Yes, you -could- hack the MPI code to do this. Starting from scratch, with 
little knowledge of the code base - figure on taking awhile. I could probably 
come up with a way to do it, but this would have to be a very low priority for 
me.







[OMPI devel] Is there a way to knit multiple ompi-servers into a broader network ?

2010-04-27 Thread Leo P.
I tested this on my machines and it worked, so hopefully it will meet your 
needs. You only need to run one "ompi-server" period, so long as you locate it 
where all of the processes can find the contact file and can open a TCP socket 
to the daemon. There is a way to knit multiple ompi-servers into a broader 
network (e.g., to connect processes that cannot directly access a server due to 
network segmentation), but it's a tad tricky - let me know if you require it 
and I'll try to help. 

In one of your replies 
(http://www.open-mpi.org/community/lists/users/2010/04/12763.php) you said the 
above.

I am actually very much interested in doing that. Can you please let me know 
how it can be done?

If anyone has done this before, can you please help? I would appreciate any help.



Re: [OMPI devel] How is a MPI process launched ?

2010-04-27 Thread Leo P.
Hi Jeff,


> The reason  why i am  using MPI_Comm_spawn and singleton is i am going to 
> route the MPI Communication (btl and OOB) from another computer before it 
> reaches it intended destination. :)

Ralph has talked about the other parts already; so I'll ask about the BTL: what 
type of network are you looking to route via the BTL?

I am talking about two different networks using private IPs, with all the 
communication being routed through a NAT router. 






From: Jeff Squyres 
To: Open MPI Developers 
Sent: Tue, 27 April, 2010 5:16:02 PM
Subject: Re: [OMPI devel] How is a MPI process launched ?

On Apr 26, 2010, at 11:05 PM, Leo P. wrote:

> The reason  why i am  using MPI_Comm_spawn and singleton is i am going to 
> route the MPI Communication (btl and OOB) from another computer before it 
> reaches it intended destination. :)

Ralph has talked about the other parts already; so I'll ask about the BTL: what 
type of network are you looking to route via the BTL?

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/






Re: [OMPI devel] How is a MPI process launched ?

2010-04-27 Thread Leo P.
Hi Jeff, 

Sorry, I can't use IPv6 right now, but maybe in the future. 

When you're talking to someone behind NAT (or any type of firewall), how do you 
know to whom you're actually talking?

If machine A can talk to machine C in front of the NAT, and machine C can 
relay the data packets to machine B behind the NAT, then from machine A's 
perspective won't it be just like talking to machine B? Maybe iptables could be 
used to specify the route on the port range. 

There are ways, of course, but it's quite complicated if connection initiation 
can effectively only flow in one direction. 

Jeff, can you tell me the simplest way? It does not have to be perfect. 

Thanks




From: Jeff Squyres 
To: Open MPI Developers 
Sent: Tue, 27 April, 2010 9:12:07 PM
Subject: Re: [OMPI devel] How is a MPI process launched ?

On Apr 27, 2010, at 10:06 AM, Leo P. wrote:

> Ralph has talked about the other parts already; so I'll ask about the BTL: 
> what type of network are you looking to route via the BTL?
> 
> I am talking about two different network using a private IP and all the 
> communication being routed through a NAT router 

There's a bunch of issues with this; I know that the U. Tennessee and INRIA 
folks have dug into at least some of them.

When you're talking to someone behind NAT (or any type of firewall), how do you 
know to whom you're actually talking?  There are ways, of course, but it's 
quite complicated if connection initiation can effectively only flow in one 
direction.

Can you just use IPv6?

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/






[OMPI devel] Error (mpirun: symbol lookup error: /usr/local/lib/openmpi/mca_rmaps_load_balance.so: undefined symbol: orte_rmaps_base_get_starting_point )

2010-06-14 Thread Leo P.
Hi everyone, 
I am getting this error when I run 
$ mpirun -np 10 myapp
mpirun: symbol lookup error: /usr/local/lib/openmpi/mca_rmaps_load_balance.so: 
undefined symbol: orte_rmaps_base_get_starting_point
Any help would be greatly appreciated.
Thank you



Re: [OMPI devel] Error (mpirun: symbol lookup error: /usr/local/lib/openmpi/mca_rmaps_load_balance.so: undefined symbol: orte_rmaps_base_get_starting_point )

2010-06-15 Thread Leo P.
Hi Ralph,
I am using mpirun (Open MPI) 1.3.2 on Ubuntu 8.04
Leo
--- On Mon, 14/6/10, Ralph Castain  wrote:

From: Ralph Castain 
Subject: Re: [OMPI devel] Error (mpirun: symbol lookup error: 
/usr/local/lib/openmpi/mca_rmaps_load_balance.so: undefined symbol: 
orte_rmaps_base_get_starting_point )
To: "Open MPI Developers" 
Date: Monday, 14 June, 2010, 7:46 PM

What OMPI version? On what system?
On Jun 14, 2010, at 3:35 AM, Leo P. wrote:
HI everyone, 
I am getting this error when i am running 
$ mpirun -np 10 myapp
mpirun: symbol lookup error: /usr/local/lib/openmpi/mca_rmaps_load_balance.so: 
undefined symbol: orte_rmaps_base_get_starting_point
Any help would be greatly appreciated 
Thank you