Re: [OMPI devel] 0.9.1rc2 is available

2009-10-21 Thread Chris Samuel
- "Jeff Squyres" wrote: > Sweet! :-) > And -- your reply tells me that, for the 2nd time in a single day, I > posted to the wrong list. :-) Ah well, if you'd posted to the right list I wouldn't have seen this. > I'll forward your replies to the hwloc-devel list. Not a problem - I'll g

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-21 Thread Tony Breeds
On Thu, Oct 22, 2009 at 10:29:36AM +1100, Chris Samuel wrote: > Dual socket, dual core Power5 (SMT disabled) running SLES9 (2.6.9 based kernel): > System(15GB) Node#0(7744MB) P#0 P#2 Node#1(8000MB) P#4 P#6 PowerPC kernels that old do not have the topology info...

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-21 Thread Jeff Squyres
Sweet! And -- your reply tells me that, for the 2nd time in a single day, I posted to the wrong list. :-) I'll forward your replies to the hwloc-devel list. Thanks! On Oct 21, 2009, at 7:37 PM, Chris Samuel wrote: - "Chris Samuel" wrote: > Some sample results below for configs no...

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-21 Thread Chris Samuel
- "Chris Samuel" wrote: > Some sample results below for configs not represented > on the current website. A final example of a more convoluted configuration with a Torque job requesting 5 CPUs on a dual Shanghai node and has been given a non-contiguous configuration. [csamuel@tango069 ~]$

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-21 Thread Chris Samuel
- "Jeff Squyres" wrote: > Give it a whirl: Nice - built without warnings with GCC 4.4.2. Some sample results below for configs not represented on the current website. Dual socket Shanghai: System(31GB) Node#0(15GB) + Socket#0 + L3(6144KB) L2(512KB) + L1(64KB) + Core#0 + P#0 L2

[OMPI devel] MPI_Group_{incl|excl} with nranks=0 and ranks=NULL

2009-10-21 Thread Lisandro Dalcin
Currently (trunk, just svn update'd), the following call fails (because of the ranks=NULL pointer): MPI_Group_{incl|excl}(group, 0, NULL, &newgroup). BTW, MPI_Group_translate_ranks() has similar issues... Provided that Open MPI accepts the combination (int_array_size=0, int_array_ptr=NULL) in othe...
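
A minimal reproducer along the lines described (the error-handler setup and printed text here are illustrative additions, not taken from the report):

    /* Reproducer sketch: ask for a zero-size group with a NULL ranks array. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Group world_group, empty_group;
        int rc;

        MPI_Init(&argc, &argv);
        /* Return error codes instead of aborting so the failure is visible. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);

        /* n = 0 with ranks = NULL: one would expect an empty group back,
         * but the call is reportedly rejected by the trunk's parameter check. */
        rc = MPI_Group_incl(world_group, 0, NULL, &empty_group);
        printf("MPI_Group_incl(n=0, ranks=NULL) returned %d\n", rc);

        if (rc == MPI_SUCCESS) {
            MPI_Group_free(&empty_group);
        }
        MPI_Group_free(&world_group);
        MPI_Finalize();
        return 0;
    }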

Re: [OMPI devel] why mx_forget in mca_btl_mx_prepare_dst?

2009-10-21 Thread Scott Atchley
On Oct 21, 2009, at 3:32 PM, Brice Goglin wrote: George Bosilca wrote: On Oct 21, 2009, at 13:42, Scott Atchley wrote: On Oct 21, 2009, at 1:25 PM, George Bosilca wrote: Because MX doesn't provide a real RMA protocol, we created a fake one on top of point-to-point. The two peers have to agre...

Re: [OMPI devel] why mx_forget in mca_btl_mx_prepare_dst?

2009-10-21 Thread Brice Goglin
George Bosilca wrote: > On Oct 21, 2009, at 13:42, Scott Atchley wrote: >> On Oct 21, 2009, at 1:25 PM, George Bosilca wrote: >>> Because MX doesn't provide a real RMA protocol, we created a fake one on top of point-to-point. The two peers have to agree on a unique tag, then the receiver p...

[OMPI devel] 0.9.1rc2 is available

2009-10-21 Thread Jeff Squyres
Give it a whirl: http://www.open-mpi.org/software/hwloc/v0.9/ I updated the docs, too: http://www.open-mpi.org/projects/hwloc/doc/ -- Jeff Squyres jsquy...@cisco.com
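
For anyone giving it a whirl programmatically as well as through lstopo, a minimal sketch of using the library (assuming the current hwloc C API; exact names may differ slightly in the 0.9 series):

    /* Smoke test for the library behind lstopo. */
    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topology;

        hwloc_topology_init(&topology);   /* allocate a topology context */
        hwloc_topology_load(topology);    /* discover the current machine */

        printf("topology depth: %d\n", hwloc_topology_get_depth(topology));
        printf("cores found:    %d\n",
               hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE));

        hwloc_topology_destroy(topology);
        return 0;
    }

Build with something like gcc hwloc-test.c -lhwloc (the file name is arbitrary).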

Re: [OMPI devel] Trunk is broken?

2009-10-21 Thread Ralph Castain
Thanks - it's impossible to know what explicit includes are required for every environment. We have been building the trunk without problem on our systems. Appreciate the fix! On Oct 21, 2009, at 10:30 AM, Pavel Shamis (Pasha) wrote: It was broken :-( I fixed it - r22119 Pasha Pavel Shamis (P...

Re: [OMPI devel] why mx_forget in mca_btl_mx_prepare_dst?

2009-10-21 Thread George Bosilca
On Oct 21, 2009, at 13:42, Scott Atchley wrote: On Oct 21, 2009, at 1:25 PM, George Bosilca wrote: Brice, Because MX doesn't provide a real RMA protocol, we created a fake one on top of point-to-point. The two peers have to agree on a unique tag, then the receiver posts it before the se...

Re: [OMPI devel] trac ticket emails

2009-10-21 Thread Jeff Squyres
Blah; wrong list -- sorry! On Oct 21, 2009, at 2:03 PM, Jeff Squyres wrote: The IU sysadmins fixed something with trac today such that we should now get mails for trac ticket actions (to the hwloc-bugs list). -- Jeff Squyres jsquy...@cisco.com

[OMPI devel] trac ticket emails

2009-10-21 Thread Jeff Squyres
The IU sysadmins fixed something with trac today such that we should now get mails for trac ticket actions (to the hwloc-bugs list). -- Jeff Squyres jsquy...@cisco.com

Re: [OMPI devel] why mx_forget in mca_btl_mx_prepare_dst?

2009-10-21 Thread Scott Atchley
On Oct 21, 2009, at 1:25 PM, George Bosilca wrote: Brice, Because MX doesn't provide a real RMA protocol, we created a fake one on top of point-to-point. The two peers have to agree on a unique tag, then the receiver posts it before the sender starts the send. However, as this is integrat...

Re: [OMPI devel] why mx_forget in mca_btl_mx_prepare_dst?

2009-10-21 Thread George Bosilca
Brice, Because MX doesn't provide a real RMA protocol, we created a fake one on top of point-to-point. The two peers have to agree on a unique tag, then the receiver posts it before the sender starts the send. However, as this is integrated with the real RMA protocol, where only one side...
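
A conceptual sketch of that fake-RMA handshake, written with plain MPI point-to-point calls rather than the actual MX BTL internals (the tag layout and function names below are illustrative assumptions):

    /* Emulating a one-sided "put" over two-sided point-to-point:
     * both peers agree on a tag, the target pre-posts the receive,
     * and the origin then sends straight into it. Not MX BTL code. */
    #include <mpi.h>

    #define FAKE_RMA_TAG_BASE 1000   /* assumed per-transfer tag space */

    /* Target side: pre-post the receive that acts as the "window". */
    void fake_rma_target(void *recv_buf, int count, int origin,
                         int xfer_id, MPI_Request *req)
    {
        int tag = FAKE_RMA_TAG_BASE + xfer_id;   /* both sides must agree */
        MPI_Irecv(recv_buf, count, MPI_BYTE, origin, tag, MPI_COMM_WORLD, req);
        /* xfer_id would then travel to the origin over the normal
         * rendezvous/control path so it can start the transfer. */
    }

    /* Origin side: once told the agreed tag, send into the pre-posted
     * receive, which behaves like a put into remote memory. */
    void fake_rma_origin(const void *send_buf, int count, int target,
                         int xfer_id)
    {
        int tag = FAKE_RMA_TAG_BASE + xfer_id;
        MPI_Send((void *)send_buf, count, MPI_BYTE, target, tag, MPI_COMM_WORLD);
    }

The crash described in the original post of this thread is about what happens when such a pre-posted receive is forgotten and its buffer released while data can still arrive.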

Re: [OMPI devel] Trunk is broken?

2009-10-21 Thread Pavel Shamis (Pasha)
It was broken :-( I fixed it - r22119 Pasha Pavel Shamis (Pasha) wrote: On my systems I see the following error: gcc -DHAVE_CONFIG_H -I. -I../../../../opal/include -I../../../../orte/include -I../../../../ompi/include -I../../../../opal/mca/paffinity/linux/plpa/src/libplpa -I../../../.. -O3 -DNDEB...

[OMPI devel] Trunk is broken?

2009-10-21 Thread Pavel Shamis (Pasha)
On my systems I see the following error: gcc -DHAVE_CONFIG_H -I. -I../../../../opal/include -I../../../../orte/include -I../../../../ompi/include -I../../../../opal/mca/paffinity/linux/plpa/src/libplpa -I../../../.. -O3 -DNDEBUG -Wall -Wundef -Wno-long-long -Wsign-compare -Wmissing-prototypes -Wstri...

[OMPI devel] why mx_forget in mca_btl_mx_prepare_dst?

2009-10-21 Thread Brice Goglin
Hello, I am debugging a crash with the OMPI 1.3.3 MX BTL over Open-MX. It's crashing while trying to store incoming data in the OMPI receive buffer, but OMPI seems to have already freed the buffer even though the MX request is not complete yet. It looks like this is caused by mca_btl_mx_prepare_dst() posting...