- "Jeff Squyres" wrote:
> Sweet!
:-)
> And -- your reply tells me that, for the 2nd time in a single day, I
> posted to the wrong list. :-)
Ah well, if you'd posted to the right list I wouldn't
have seen this.
> I'll forward your replies to the hwloc-devel list.
Not a problem - I'll g[...]
On Thu, Oct 22, 2009 at 10:29:36AM +1100, Chris Samuel wrote:
> Dual socket, dual core Power5 (SMT disabled) running SLES9
> (2.6.9 based kernel):
>
> System(15GB)
>   Node#0(7744MB)
>     P#0
>     P#2
>   Node#1(8000MB)
>     P#4
>     P#6
PowerPC kernels that old do not have the topology info [...]
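A minimal sketch of how this shows up through the hwloc API (written
against a recent hwloc API, which differs in places from the 0.9 API
discussed in this thread): on kernels that don't expose
/sys/devices/system/cpu/*/topology, hwloc has no core/socket info to
discover, so only bare processing units are reported.

/* Minimal sketch, assuming a recent hwloc API. */
#include <hwloc.h>
#include <stdio.h>

int main(void)
{
    hwloc_topology_t topo;

    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    /* On old kernels without sysfs topology info there are no
     * core objects, only PUs directly under the machine. */
    int ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
    int npus   = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);

    if (ncores <= 0)
        printf("no core info from the kernel (%d PUs)\n", npus);
    else
        printf("%d cores, %d PUs\n", ncores, npus);

    hwloc_topology_destroy(topo);
    return 0;
}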
Sweet!
And -- your reply tells me that, for the 2nd time in a single day, I
posted to the wrong list. :-)
I'll forward your replies to the hwloc-devel list.
Thanks!
On Oct 21, 2009, at 7:37 PM, Chris Samuel wrote:
- "Chris Samuel" wrote:
> Some sample results below for configs not represented
> on the current website.
- "Chris Samuel" wrote:
> Some sample results below for configs not represented
> on the current website.
A final example of a more convoluted configuration: a Torque
job requesting 5 CPUs on a dual Shanghai node that has been
given a non-contiguous set of cores.
[csamuel@tango069 ~]$ [...]
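For reference, a process can see such a non-contiguous allocation
directly, independent of hwloc; a minimal Linux-only sketch using
sched_getaffinity(2):

/* Print the CPUs this process is allowed to run on, e.g. the
 * non-contiguous set a Torque cpuset job was handed. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);

    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_getaffinity");
        return 1;
    }

    printf("allowed CPUs:");
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf(" %d", cpu);
    printf("\n");
    return 0;
}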
- "Jeff Squyres" wrote:
> Give it a whirl:
Nice - built without warnings with GCC 4.4.2.
Some sample results below for configs not represented
on the current website.
Dual socket Shanghai:
System(31GB)
  Node#0(15GB) + Socket#0 + L3(6144KB)
    L2(512KB) + L1(64KB) + Core#0 + P#0
    L2[...]
Currently (trunk, just svn update'd), the following call fails
(because of the ranks=NULL pointer)
MPI_Group_{incl|excl}(group, 0, NULL, &newgroup)
BTW, MPI_Group_translate_ranks() has similar issues...
Provided that Open MPI accepts the combination (int_array_size=0,
int_array_ptr=NULL) in other [...]
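A minimal reproducer for the zero-rank corner case (the point being
that with n = 0 the ranks array is never dereferenced, so NULL should
arguably be accepted and the result simply be MPI_GROUP_EMPTY):

/* Reproducer sketch for MPI_Group_incl(group, 0, NULL, &newgroup). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Group world_group, newgroup;

    MPI_Init(&argc, &argv);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    /* The call reported to fail on trunk: */
    MPI_Group_incl(world_group, 0, NULL, &newgroup);

    printf("got MPI_GROUP_EMPTY: %s\n",
           newgroup == MPI_GROUP_EMPTY ? "yes" : "no");

    if (newgroup != MPI_GROUP_EMPTY)
        MPI_Group_free(&newgroup);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}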
Give it a whirl:
http://www.open-mpi.org/software/hwloc/v0.9/
I updated the docs, too:
http://www.open-mpi.org/projects/hwloc/doc/
--
Jeff Squyres
jsquy...@cisco.com
Thanks - it's impossible to know what explicit includes are required
for every environment. We have been building the trunk without problems
on our systems.
Appreciate the fix!
On Oct 21, 2009, at 10:30 AM, Pavel Shamis (Pasha) wrote:
It was broken :-(
I fixed it - r22119
Pasha
Pavel Shamis (Pasha) wrote: [...]
Blah; wrong list -- sorry!
On Oct 21, 2009, at 2:03 PM, Jeff Squyres wrote:
The IU sysadmins fixed something with trac today such that we should
now get mails for trac ticket actions (to the hwloc-bugs list).
--
Jeff Squyres
jsquy...@cisco.com
The IU sysadmins fixed something with trac today such that we should
now get mails for trac ticket actions (to the hwloc-bugs list).
--
Jeff Squyres
jsquy...@cisco.com
Brice,
Because MX doesn't provide a real RMA protocol, we created a fake one
on top of point-to-point. The two peers have to agree on a unique tag,
then the receiver posts it before the sender starts the send. However,
as this is integrated with the real RMA protocol, where only one side [...]
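A hedged sketch of the general pattern being described; the actual MX
BTL code is not shown in this thread, so this is written with plain MPI
point-to-point purely as illustration, and make_rma_tag() is a
hypothetical stand-in for however the two peers agree on the unique tag:

#include <mpi.h>

/* Hypothetical: derive a tag both peers compute identically. */
static int make_rma_tag(int op_id)
{
    return 1000 + op_id;
}

/* Target side: post the receive *before* the origin starts the send,
 * so the incoming data lands directly in the exposed buffer. */
void rma_target(void *buf, int count, int origin, int op_id,
                MPI_Comm comm)
{
    MPI_Request req;
    MPI_Irecv(buf, count, MPI_BYTE, origin, make_rma_tag(op_id),
              comm, &req);
    /* ... later ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

/* Origin side: a "put" is then just a send on the agreed tag. */
void rma_origin(const void *buf, int count, int target, int op_id,
                MPI_Comm comm)
{
    MPI_Send((void *)buf, count, MPI_BYTE, target,
             make_rma_tag(op_id), comm);
}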
It was broken :-(
I fixed it - r22119
Pasha
Pavel Shamis (Pasha) wrote:
On my systems I see the following error: [...]
On my systems I see the following error:
gcc -DHAVE_CONFIG_H -I. -I../../../../opal/include
-I../../../../orte/include -I../../../../ompi/include
-I../../../../opal/mca/paffinity/linux/plpa/src/libplpa -I../../../..
-O3 -DNDEBUG -Wall -Wundef -Wno-long-long -Wsign-compare
-Wmissing-prototypes -Wstri[...]
Hello,
I am debugging a crash with the OMPI 1.3.3 BTL over Open-MX. It's
crashing while trying to store incoming data in the OMPI receive
buffer, but OMPI seems to have already freed the buffer even though
the MX request is not complete yet. It looks like this is caused by
mca_btl_mx_prepare_dst() posting [...]
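The lifetime rule being violated here is generic to any nonblocking
receive; a minimal sketch of the same bug pattern, shown with a generic
MPI nonblocking receive rather than the MX BTL internals:

#include <mpi.h>
#include <stdlib.h>

void broken(int src, MPI_Comm comm)
{
    char *buf = malloc(4096);
    MPI_Request req;

    MPI_Irecv(buf, 4096, MPI_BYTE, src, 0, comm, &req);
    free(buf);               /* BUG: the request is still in flight; the
                              * library writes into freed memory when
                              * the data arrives */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

void fixed(int src, MPI_Comm comm)
{
    char *buf = malloc(4096);
    MPI_Request req;

    MPI_Irecv(buf, 4096, MPI_BYTE, src, 0, comm, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete first... */
    free(buf);                          /* ...then release the buffer */
}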