On Feb 17, 2012, at 11:54 AM, Ralph Castain wrote:
> All that said, I think using the WHOLE_SYSTEM flag is actually incorrect.
I think we want to continue using WHOLE_SYSTEM. There are definite uses for it
(being able to look around the machine beyond where you may or may not be
bound, such
On Fri, Feb 17, 2012 at 8:47 AM, Brice Goglin wrote:
> On 17/02/2012 14:59, Jeff Squyres wrote:
> > On Feb 17, 2012, at 8:21 AM, Ralph Castain wrote:
> >
> >>> I didn't follow this entire thread in detail, but I am feeling that
> something is wrong here. The flag fixes
On 17/02/2012 14:59, Jeff Squyres wrote:
> On Feb 17, 2012, at 8:21 AM, Ralph Castain wrote:
>
>>> I didn't follow this entire thread in detail, but I am feeling that
>>> something is wrong here. The flag fixes your problem indeed, but I think it
>>> may break binding too. It's basically
I took a closer look at this, and I think we're getting ourselves confused
by the rather large differences between what is on the trunk vs the 1.5
branch. The trunk is doing the "am I bound" calculation correctly - it gets
the cpubind bitmask and compares it to the allowed/available cpus.
The 1.5
On Feb 16, 2012, at 8:16 AM, nadia.der...@bull.net wrote:
> Could you please move it to v1.5 (do I need to fill a CMR)?
Just to clarify - you're asking for the patch to set WHOLE_SYSTEM when we load
the hwloc topology, right?
If so, please file a CMR. Note that there's some differences
On Feb 17, 2012, at 8:21 AM, Ralph Castain wrote:
>> I didn't follow this entire thread in detail, but I am feeling that
>> something is wrong here. The flag fixes your problem indeed, but I think it
>> may break binding too. It's basically making all "unavailable resources"
>> available. So
On Thu, Feb 16, 2012 at 11:36 PM, Brice Goglin wrote:
> On 16/02/2012 14:16, nadia.der...@bull.net wrote:
>
> Hi Jeff,
>
> Sorry for the delay, but my victim with 2 ib devices had been stolen ;-)
>
> So, I ported the patch on the v1.5 branch and finally could test
devel-boun...@open-mpi.org wrote on 02/17/2012 08:36:54 AM:
> From: Brice Goglin
> To: de...@open-mpi.org
> Date: 02/17/2012 08:37 AM
> Subject: Re: [OMPI devel] btl/openib: get_ib_dev_distance doesn't see
> processes as bound if the job has been launched by srun
> Sent
On 16/02/2012 14:16, nadia.der...@bull.net wrote:
> Hi Jeff,
>
> Sorry for the delay, but my victim with 2 ib devices had been stolen ;-)
>
> So, I ported the patch on the v1.5 branch and finally could test it.
>
> Actually, there is no opal_hwloc_base_get_topology() in v1.5 so I had
> to set
>
Hi Jeff,
Sorry for the delay, but my victim with 2 ib devices had been stolen ;-)
So, I ported the patch on the v1.5 branch and finally could test it.
Actually, there is no opal_hwloc_base_get_topology() in v1.5 so I had to
set
the hwloc flags in ompi_mpi_init() and orte_odls_base_open() (i.e.
That's pretty much what I had in mind too - will have to play with it a bit
until we find the best solution, but it shouldn't be all that hard.
On Feb 9, 2012, at 2:23 PM, Brice Goglin wrote:
> Here's what I would do:
> During init, walk the list of hwloc PCI devices (hwloc_get_next_pcidev())
On 2/9/2012 1:19 PM, Brice Goglin wrote:
So you can find out that you are "bound" by a Linux cgroup (I am not
saying Linux "cpuset" to avoid confusion) by comparing root->cpuset and
root->online_cpuset.
If I understood the problem as stated earlier in this thread, the current
code was
Here's what I would do:
During init, walk the list of hwloc PCI devices
(hwloc_get_next_pcidev()) and keep an array of pointers to the
interesting ones + their locality (the hwloc cpuset of the parent
non-IO object).
When you want the I/O device near a core, walk the array and find one
whose
On 09/02/2012 14:00, Ralph Castain wrote:
> There is another aspect, though - I had missed it in the thread, but the
> question Nadia was addressing is: how to tell I am bound? The way we
> currently do it is to compare our cpuset against the local cpuset - if we are
> on a subset, then we
Hmmm... guess we'll have to play with it. Our need is to start with a core or
some similar object, and quickly determine the closest IO device of a certain
type. We wound up having to write "summarizer" code to parse the hwloc tree
into a more OMPI-usable form, so we can always do that with the
That doesn't really work with the hwloc model unfortunately. Also, when
you get to smaller objects (cores, threads, ...) there are multiple
"closest" objects at each depth.
We have one "closest" object at some depth (usually Machine or NUMA
node). If you need something higher, you just walk the
Nadia --
I committed the fix in the trunk to use HWLOC_WHOLE_SYSTEM and IO_DEVICES.
Do you want to revise your patch to use hwloc APIs with opal_hwloc_topology
(instead of paffinity)? We could use that as a basis for the other places you
identified that are doing similar things.
On Feb 9,
Ah, okay - in that case, having the I/O device attached to the "closest" object
at each depth would be ideal from an OMPI perspective.
On Feb 9, 2012, at 6:30 AM, Brice Goglin wrote:
> The bios usually tells you which numa location is close to each host-to-pci
> bridge. So the answer is yes.
>
The bios usually tells you which numa location is close to each host-to-pci
bridge. So the answer is yes.
Brice
Ralph Castain wrote:
I'm not sure I understand this comment. A PCI device is attached to the node,
not to any specific location within the node, isn't it? Can
I'm not sure I understand this comment. A PCI device is attached to the node,
not to any specific location within the node, isn't it? Can you really say that
a PCI device is "attached" to a specific NUMA location, for example?
On Feb 9, 2012, at 6:15 AM, Jeff Squyres wrote:
> That doesn't
Yeah, I think that's the right solution. We'll have to check the impact on the
rest of the code, but I -think- it will be okay - else we'll have to make some
tweaks here and there. Either way, it's still the right answer, I think.
On Feb 9, 2012, at 6:14 AM, Jeff Squyres wrote:
> Should we
On Feb 9, 2012, at 8:06 AM, Brice Goglin wrote:
>> What if my cpuset is only on Socket P#0? What exactly will be reported
>> via (WHOLE_SYSTEM | HWLOC_TOPOLOGY_FLAG_WHOLE_IO)?
>
> I actually fixed something related to this case in 1.3.2. The device will be
> attached to the root object in
Should we just do this, then:
Index: mca/hwloc/base/hwloc_base_util.c
===================================================================
--- mca/hwloc/base/hwloc_base_util.c    (revision 25885)
+++ mca/hwloc/base/hwloc_base_util.c    (working copy)
@@ -173,6 +173,9 @@
Jeff Squyres wrote:
>On Feb 9, 2012, at 7:50 AM, Chris Samuel wrote:
>
>>> Just so that I understand this better -- if a process is bound in a
>>> cpuset, will tools like hwloc's lstopo only show the Linux
>>> processors *in that cpuset*? I.e., does it not have any
>>>
Yes, I missed that point before - too early in the morning :-/
As I said in my last note, it would be nice to either have a flag indicating we
are bound, or see all the cpu info so we can compute that we are bound. Either
way, we still need to have a complete picture of all I/O devices so you
There is another aspect, though - I had missed it in the thread, but the
question Nadia was addressing is: how to tell I am bound? The way we currently
do it is to compare our cpuset against the local cpuset - if we are on a
subset, then we know we are bound.
So if all hwloc returns to us is
devel-boun...@open-mpi.org wrote on 02/09/2012 01:32:31 PM:
> From: Ralph Castain
> To: Open MPI Developers
> Date: 02/09/2012 01:32 PM
> Subject: Re: [OMPI devel] btl/openib: get_ib_dev_distance doesn't see
> processes as bound if the job has been
On Feb 9, 2012, at 7:50 AM, Chris Samuel wrote:
>> Just so that I understand this better -- if a process is bound in a
>> cpuset, will tools like hwloc's lstopo only show the Linux
>> processors *in that cpuset*? I.e., does it not have any
>> visibility of the processors outside of its cpuset?
>
On Thursday 09 February 2012 22:18:20 Jeff Squyres wrote:
> Just so that I understand this better -- if a process is bound in a
> cpuset, will tools like hwloc's lstopo only show the Linux
> processors *in that cpuset*? I.e., does it not have any
> visibility of the processors outside of its
On Feb 9, 2012, at 7:15 AM, nadia.der...@bull.net wrote:
> > By default, hwloc only shows what's inside the current cpuset. There's
> > an option to show everything instead (topology flag).
>
> So maybe using that flag inside opal_paffinity_base_get_processor_info()
> would be a better fix
Hi Nadia
I'm wondering what value there is in showing the full topology, or using it in
any of our components, if the process is restricted to a specific set of cpus?
Does it really help to know that there are other cpus out there that are
unreachable?
On Feb 9, 2012, at 5:15 AM,
devel-boun...@open-mpi.org wrote on 02/09/2012 12:20:41 PM:
> From: Brice Goglin
> To: Open MPI Developers
> Date: 02/09/2012 12:20 PM
> Subject: Re: [OMPI devel] btl/openib: get_ib_dev_distance doesn't see
> processes as bound if the job has been
devel-boun...@open-mpi.org wrote on 02/09/2012 12:18:20 PM:
> From: Jeff Squyres
> To: Open MPI Developers
> Date: 02/09/2012 12:18 PM
> Subject: Re: [OMPI devel] btl/openib: get_ib_dev_distance doesn't see
> processes as bound if the job has been
By default, hwloc only shows what's inside the current cpuset. There's
an option to show everything instead (topology flag).
Brice
On 09/02/2012 12:18, Jeff Squyres wrote:
> Just so that I understand this better -- if a process is bound in a cpuset,
> will tools like hwloc's lstopo only
Just so that I understand this better -- if a process is bound in a cpuset,
will tools like hwloc's lstopo only show the Linux processors *in that cpuset*?
I.e., does it not have any visibility of the processors outside of its cpuset?
On Jan 27, 2012, at 11:38 AM, nadia.derbey wrote:
> Hi,
>
Resending, as I didn't get any answer...
Regards,
Nadia
--
Nadia Derbey
devel-boun...@open-mpi.org wrote on 01/27/2012 05:38:34 PM:
> From: "nadia.derbey"
> To: Open MPI Developers
> Date: 01/27/2012 05:35 PM
> Subject: [OMPI devel] btl/openib:
Hi,
If a job is launched using "srun --resv-ports --cpu_bind:..." and slurm
is configured with:
TaskPlugin=task/affinity
TaskPluginParam=Cpusets
each rank of that job is in a cpuset that contains a single CPU.
Now, if we use carto on top of this, the following happens in