Chris Samuel wrote:
- "Ashley Pittman" wrote:
> $ grep Cpus_allowed_list /proc/$$/status
Useful, ta!
> Does this imply the default is to report on processes
> in the current cpuset rather than the entire system?
> Does anyone else feel that violates the principle of
> least surprise?
Not really, I feel that
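As an aside (a hedged sketch, not from the thread): the affinity mask that Ashley's `grep` one-liner reads out of procfs is the same information the `sched_getaffinity(2)` syscall returns, which Python's standard library wraps:

```python
# Sketch (illustration only): read the calling process's allowed-CPU set,
# the same data "grep Cpus_allowed_list /proc/$$/status" reports on Linux.
import os

allowed = os.sched_getaffinity(0)  # 0 = the calling process
print("Cpus_allowed_list equivalent:", sorted(allowed))
```

In an unrestricted shell this prints every CPU index on the machine; inside a cpuset (e.g. a batch job) it shrinks to the allowed subset.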
I added tickets #21 and #22 about these features.
https://svn.open-mpi.org/trac/hwloc/ticket/21
https://svn.open-mpi.org/trac/hwloc/ticket/22
Thanks!
On Oct 22, 2009, at 5:54 AM, Ashley Pittman wrote:
On Thu, 2009-10-22 at 11:05 +0200, Brice Goglin wrote:
> Ashley Pittman wrote:
> > Does this imply the default is to report on processes in the current
> > cpuset rather than the entire system? Does anyone else feel that
> > violates the principle of least surprise?
> Yes, by default, it's the c
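The distinction being debated can be made concrete with a small sketch (mine, not from the thread): the CPUs this process is allowed to use, i.e. its cpuset/affinity mask, versus the machine's full complement.

```python
# Sketch: allowed CPUs (the current cpuset/affinity mask) versus the
# whole system -- the gap between the two is what the thread is debating.
import os

allowed = len(os.sched_getaffinity(0))
total = os.cpu_count() or allowed  # cpu_count() may return None
print(f"reporting on {allowed} allowed CPUs out of {total} on the system")
```

A tool that silently reports only the first number is the "surprise" Ashley is describing.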
Ashley Pittman wrote:
>> [csamuel@tango069 ~]$ ~/local/hwloc/0.9.1rc2/bin/lstopo
>> System(31GB)
>>   Node#0(15GB) + Socket#0 + L3(6144KB) + L2(512KB) + L1(64KB) + Core#0 + P#0
>>   Node#1(16GB) + Socket#1 + L3(6144KB)
>>     L2(512KB) + L1(64KB) + Core#0 + P#4
>>     L2(512KB) + L1(64KB) + Core#1
On Thu, 2009-10-22 at 10:37 +1100, Chris Samuel wrote:
> - "Chris Samuel" wrote:
>
> > Some sample results below for configs not represented
> > on the current website.
>
> A final example of a more convoluted configuration with
> a Torque job requesting 5 CPUs on a dual Shanghai node
> and has been given a non-contiguous configuration.
- "Tony Breeds" wrote:
> Powerpc kernels that old do not have the topology information needed
> (in /sys or /proc/cpuinfo), so for the short term that's the best we
> can do.
That's fine, I quite understand. I'm trying to get that
cluster replaced anyway... ;-)
> FWIW I'm looking at how we
- "Jeff Squyres" wrote:
> Sweet!
:-)
> And -- your reply tells me that, for the 2nd time in a single day, I
> posted to the wrong list. :-)
Ah well, if you'd posted to the right list I wouldn't
have seen this.
> I'll forward your replies to the hwloc-devel list.
Not a problem - I'll g
On Thu, Oct 22, 2009 at 10:29:36AM +1100, Chris Samuel wrote:
> Dual socket, dual core Power5 (SMT disabled) running SLES9
> (2.6.9 based kernel):
>
> System(15GB)
>   Node#0(7744MB)
>     P#0
>     P#2
>   Node#1(8000MB)
>     P#4
>     P#6
Powerpc kernels that old do not have the topology information needed
(in /sys or /proc/cpuinfo), so for the short term that's the best we
can do.
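The situation Tony describes can be checked directly (a sketch of mine, assuming the standard Linux sysfs layout): on old kernels the per-CPU topology directory simply isn't there, so only a flat list of processors (P#0, P#2, ...) can be reported.

```python
# Sketch: detect whether this kernel exposes per-CPU topology in sysfs.
# On kernels as old as SLES9's 2.6.9 the directory is absent, which is
# why only flat P# entries can be shown for that Power5 box.
from pathlib import Path

topo = Path("/sys/devices/system/cpu/cpu0/topology")
has_topology = topo.is_dir()
if has_topology:
    print("topology files present:", sorted(p.name for p in topo.iterdir())[:4])
else:
    print("no per-CPU topology; falling back to a flat processor list")
```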
Sweet!
And -- your reply tells me that, for the 2nd time in a single day, I
posted to the wrong list. :-)
I'll forward your replies to the hwloc-devel list.
Thanks!
On Oct 21, 2009, at 7:37 PM, Chris Samuel wrote:
- "Chris Samuel" wrote:
> Some sample results below for configs not represented
> on the current website.
- "Chris Samuel" wrote:
> Some sample results below for configs not represented
> on the current website.
A final example of a more convoluted configuration with
a Torque job requesting 5 CPUs on a dual Shanghai node
and has been given a non-contiguous configuration.
[csamuel@tango069 ~]$
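A non-contiguous cpuset like the one above shows up in `Cpus_allowed_list` as a comma/range string such as `0,2-4,7`. A small hypothetical parser for that syntax (my illustration, not part of hwloc):

```python
# Sketch: parse the kernel's "Cpus_allowed_list" range syntax into a set
# of CPU indices, e.g. for a non-contiguous Torque cpuset.
def parse_cpu_list(s):
    cpus = set()
    for part in s.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.update(range(lo, hi + 1))
        else:
            cpus.add(int(part))
    return cpus

print(sorted(parse_cpu_list("0,2-4,7")))  # → [0, 2, 3, 4, 7]
```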
- "Jeff Squyres" wrote:
> Give it a whirl:
Nice - built without warnings with GCC 4.4.2.
Some sample results below for configs not represented
on the current website.
Dual socket Shanghai:
System(31GB)
  Node#0(15GB) + Socket#0 + L3(6144KB)
    L2(512KB) + L1(64KB) + Core#0 + P#0
    L2
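On a modern Linux kernel, a rough socket/core/PU mapping like the tree lstopo prints can be recovered straight from sysfs. This is a sketch of mine assuming the standard sysfs paths, not hwloc's actual implementation:

```python
# Sketch: group online CPUs by (physical_package_id, core_id) from sysfs,
# approximating lstopo's Socket/Core/P# tree without caches or NUMA nodes.
from collections import defaultdict
from pathlib import Path

layout = defaultdict(set)  # (socket, core) -> set of PU indices
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    topo = cpu / "topology"
    if not topo.is_dir():
        continue  # offline CPU or a pre-topology kernel
    pkg = int((topo / "physical_package_id").read_text())
    core = int((topo / "core_id").read_text())
    layout[(pkg, core)].add(int(cpu.name[3:]))

for (pkg, core), pus in sorted(layout.items()):
    print(f"Socket#{pkg} Core#{core} PUs {sorted(pus)}")
```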
Give it a whirl:
http://www.open-mpi.org/software/hwloc/v0.9/
I updated the docs, too:
http://www.open-mpi.org/projects/hwloc/doc/
--
Jeff Squyres
jsquy...@cisco.com