On Tue, 03 Apr 2007 15:13:09 -0700
Ulrich Drepper <[EMAIL PROTECTED]> wrote:

> Andrew Morton wrote:
> > Did we mean to go off-list?
> 
> Oops, no, pressed the wrong button.
> 
> >> Andrew Morton wrote:
> >>> So I'd have thought that in general an application should be querying its
> >>> present affinity mask - something like sched_getaffinity()?  That fixes
> >>> the CPU hotplug issues too, of course.
> >> Does it really?
> >>
> >> My recollection is that the affinity masks of running processes are not
> >> updated on hotplugging.  Is this addressed?
> > 
> > ah, yes, you're correct.
> > 
> > Inside a cpuset:
> > 
> >   sched_setaffinity() is constrained to those CPUs which are in the
> >   cpuset.
> > 
> >   If a cpu is on/offlined we update each cpuset's cpu mask appropriately,
> >   but we do not update all the tasks presently running in the cpuset.
> > 
> > Outside a cpuset:
> > 
> >   sched_setaffinity() is constrained to all possible cpus.
> > 
> >   We don't update each task's cpus_allowed when a CPU is removed.
> > 
> > 
> > I think we trivially _could_ update each task's cpus_allowed mask when a
> > CPU is removed, actually.
> 
> I think it has to be done.  But that's not so trivial.  What happens if
> all the CPUs a process was supposed to be runnable on vanish?
> Shouldn't, if no affinity mask is defined, new processors be added?  I
> agree that if the process has a defined affinity mask no new processors
> should be added _automatically_.
> 

Yes, some policy decision needs to be made there.

But whatever we decide to do, the implementation will be relatively
straightforward, because hot-unplug uses stop_machine_run() and later, we
hope, will use the process freezer.  This setting of the whole machine into
a known state means (I think) that we can avoid a whole lot of fuss which
happens when affinity is altered.

Anyway.  It's not really clear who maintains CPU hotplug nowadays.  <adds a
few cc's>.  But yes, I do think we should do <something sane> with process
affinity when CPU hot[un]plug happens.

Now it could be argued that the current behaviour is that sane thing: we
allow the process to "pin" itself to not-present CPUs and just handle it in
the CPU scheduler.

Paul, could you please describe what cpusets' policy is in the presence of
CPU addition and removal?

> 
> >> If yes, sched_getaffinity is a solution until the NUMA topology
> >> framework can provide something better.  Even without a popcnt
> >> instruction in the CPU (albeit 64-bit) it's twice as fast as the
> >> stat() method proposed.
> > 
> > I'm surprised - I'd have expected sched_getaffinity() to be vastly quicker
> > than doing filesystem operations.
> 
> You mean because it's only a factor of two?  Well, it's not once you
> count the whole overhead.

Is it kernel overhead, or userspace?  The overhead of counting the bits?

Because sched_getaffinity() could be easily sped up in the case where
it is operating on the current process.


Anyway, where do we stand?  Assuming we can address the CPU hotplug issues,
does sched_getaffinity() look like it will be suitable?
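
For concreteness, the sort of thing I'd expect an application to do is
below - a minimal sketch, counting the bits by hand since we can't rely on
a popcnt instruction:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/*
 * Query the calling process's current affinity mask and count the set
 * bits: the number of CPUs the process can actually run on.
 */
static int available_cpus(void)
{
	cpu_set_t set;
	int i, n = 0;

	if (sched_getaffinity(0, sizeof(set), &set) != 0)
		return -1;
	for (i = 0; i < CPU_SETSIZE; i++)
		if (CPU_ISSET(i, &set))
			n++;
	return n;
}

int main(void)
{
	printf("runnable on %d CPUs\n", available_cpus());
	return 0;
}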