On Monday, 11 December 2006 07:52, Dipankar Sarma wrote:
> On Sun, Dec 10, 2006 at 03:18:38PM +0100, Rafael J. Wysocki wrote:
> > On Sunday, 10 December 2006 13:16, Andrew Morton wrote:
> > > On Sun, 10 Dec 2006 12:49:43 +0100
> >
> > Hm, currently we're using the CPU hotplug to disable the nonboot CPUs before
> > the freezer is called. ;-)
On Sun, Dec 10, 2006 at 03:18:38PM +0100, Rafael J. Wysocki wrote:
> On Sunday, 10 December 2006 13:16, Andrew Morton wrote:
> > On Sun, 10 Dec 2006 12:49:43 +0100
>
> Hm, currently we're using the CPU hotplug to disable the nonboot CPUs before
> the freezer is called. ;-)
>
> However, we're now [...]
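For context, a minimal sketch of the ordering Rafael describes, with names
from the 2.6.19-era kernel (simplified; error handling and console juggling
omitted, and not the verbatim kernel/power/ source):

	/*
	 * Sketch: suspend preparation takes the nonboot CPUs down via
	 * CPU hotplug *before* freezing tasks.
	 */
	static int prepare_processes_sketch(void)
	{
		int error;

		error = disable_nonboot_cpus();	/* CPU hotplug first... */
		if (error)
			return error;

		if (freeze_processes()) {	/* ...then the freezer */
			thaw_processes();
			enable_nonboot_cpus();
			return -EBUSY;
		}
		return 0;
	}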
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > > > > > {
> > > > > > 	int cpu = raw_smp_processor_id();
> > > > > >
> > > > > > 	/*
> > > > > > 	 * Interrupts/softirqs are hotplug-safe:
> > > > > > 	 */
> > > > > > 	if (in_interrupt())
> > > > > > [...]
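The fragment above is the fast path of a hotplug-lock primitive: per the
quoted comment, interrupt/softirq context is already safe against hotplug,
so the lock can return early there. A hedged sketch of that shape (the
function and mutex names here are illustrative, not the actual patch):

	static DEFINE_MUTEX(example_hotplug_mutex);

	static void example_lock_cpu_hotplug(void)
	{
		/*
		 * Interrupts/softirqs are hotplug-safe: no need to take
		 * the lock from those contexts.
		 */
		if (in_interrupt())
			return;

		/* Process context takes the real lock (illustrative). */
		mutex_lock(&example_hotplug_mutex);
	}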
> On Mon, 11 Dec 2006 11:15:45 +0530 Srivatsa Vaddagiri <[EMAIL PROTECTED]>
> wrote:
> On Sun, Dec 10, 2006 at 04:16:00AM -0800, Andrew Morton wrote:
> > One quite different way of addressing all of this is to stop using
> > stop_machine_run() for hotplug synchronisation and switch to the swsusp
> > freezer infrastructure: all kernel threads and user processes need to stop
> > and park themselves [...]
On Sun, Dec 10, 2006 at 04:16:00AM -0800, Andrew Morton wrote:
> One quite different way of addressing all of this is to stop using
> stop_machine_run() for hotplug synchronisation and switch to the swsusp
> freezer infrastructure: all kernel threads and user processes need to stop
> and park themselves [...]
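A rough sketch of what Andrew is proposing, schematically: park every kernel
thread and user process at a safe point, do the unplug, then let everything
run again. take_cpu_down_sketch() below is a hypothetical stand-in for the
actual tear-down work; this is illustrative only, not an implementation:

	/* Hypothetical stand-in for the real unplug work. */
	static int take_cpu_down_sketch(unsigned int cpu);

	static int freezer_based_cpu_down(unsigned int cpu)
	{
		int err;

		if (freeze_processes()) {	/* park all tasks */
			thaw_processes();
			return -EBUSY;
		}

		err = take_cpu_down_sketch(cpu);

		thaw_processes();		/* unpark everything */
		return err;
	}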
On Sun, Dec 10, 2006 at 09:26:16AM +0100, Ingo Molnar wrote:
> something like the pseudocode further below - when applied to a data
> structure it has semantics and scalability close to that of
> preempt_disable(), but it is still preemptible and the lock is specific.
Ingo,
The pseudo-code [...]
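The construct Ingo describes amounts to a per-CPU reference count: the read
side bumps a cheap local counter, so it scales like preempt_disable() but
remains preemptible, while the rare writer waits for the counts to drain.
A hedged illustration of the idea, not Ingo's actual pseudo-code; memory
barriers and fairness toward the writer (blocking new readers) are omitted:

	static DEFINE_PER_CPU(int, example_ref);
	static DEFINE_MUTEX(example_writer_mutex);

	static void example_ref_lock(void)
	{
		get_cpu_var(example_ref)++;	/* preemption off only here */
		put_cpu_var(example_ref);
		/* the protected section may now sleep and migrate */
	}

	static void example_ref_unlock(void)
	{
		get_cpu_var(example_ref)--;	/* may be a different CPU */
		put_cpu_var(example_ref);
	}

	static void example_writer_lock(void)
	{
		int cpu, sum;

		mutex_lock(&example_writer_mutex);
		/*
		 * Readers can migrate between lock and unlock, so a single
		 * CPU's count may go negative; only the global sum is
		 * meaningful.  Spin until it drains.
		 */
		do {
			sum = 0;
			for_each_possible_cpu(cpu)
				sum += per_cpu(example_ref, cpu);
			cpu_relax();
		} while (sum);
	}

	static void example_writer_unlock(void)
	{
		mutex_unlock(&example_writer_mutex);
	}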
On Sat, 9 Dec 2006 11:26:52 +0100
Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > > > +	if (cpu != -1)
> > > > +		mutex_lock(&workqueue_mutex);
> > >
> > > events/4 thread itself wanting the same mutex above?
> >
> [...]
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > > +	if (cpu != -1)
> > > +		mutex_lock(&workqueue_mutex);
> >
> > events/4 thread itself wanting the same mutex above?
>
> Could do, not sure. I'm planning on converting all the locking around
> here to preempt_disable() though. [...]
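What the preempt_disable() conversion would look like on the read side,
schematically: a non-preemptible section excludes CPU removal, because
stop_machine_run() needs every CPU to schedule its stop thread before the
unplug can proceed. Illustrative only, and note the section must not sleep:

	static void example_read_side(void)
	{
		preempt_disable();
		/* ...touch hotplug-sensitive per-CPU state here... */
		preempt_enable();
	}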
On Thu, Dec 07, 2006 at 08:54:07PM -0800, Andrew Morton wrote:
> Could do, not sure.
AFAICS it will deadlock for sure.
> I'm planning on converting all the locking around here
> to preempt_disable() though.
Will look forward to that patch. It's hard to dance around w/o a
lock_cpu_hotplug() ..:)
On Fri, 8 Dec 2006 08:23:01 +0530
Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 07, 2006 at 11:37:00AM -0800, Andrew Morton wrote:
> > -static void flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
> > +/*
> > + * If cpu == -1 it's a single-threaded workqueue and the caller does not hold
> > + * workqueue_mutex
> > + */
> > [...]
On Thu, Dec 07, 2006 at 11:37:00AM -0800, Andrew Morton wrote:
> -static void flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
> +/*
> + * If cpu == -1 it's a single-threaded workqueue and the caller does not hold
> + * workqueue_mutex
> + */
> +static void flush_cpu_workq[...]
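A hedged sketch of the locking shape the new comment describes, not the
actual patch body (wait_for_cwq_to_drain() is a hypothetical stand-in for
the real flushing loop): per-CPU callers arrive holding workqueue_mutex and
must drop it while waiting so the worker thread can make progress, while
single-threaded callers (cpu == -1) never held it:

	/* Hypothetical helper standing in for the real drain loop. */
	static void wait_for_cwq_to_drain(struct cpu_workqueue_struct *cwq);

	static void flush_cpu_workqueue_sketch(struct cpu_workqueue_struct *cwq,
					       int cpu)
	{
		if (cpu != -1)
			mutex_unlock(&workqueue_mutex);	/* let the worker run */

		wait_for_cwq_to_drain(cwq);

		if (cpu != -1)
			mutex_lock(&workqueue_mutex);	/* restore caller's hold */
	}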
On Thu, 7 Dec 2006 10:51:48 -0800
Andrew Morton <[EMAIL PROTECTED]> wrote:
> +	if (!cpu_online(cpu))	/* oops, CPU got unplugged */
> +		goto bail;
hm, actually we can pull the same trick with flush_scheduled_work().
Should fix quite a few things...
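The trick in question: after any window where workqueue_mutex was dropped,
re-check cpu_online() and bail out if the CPU was unplugged in the meantime.
A minimal illustration (the function name and flush step are illustrative):

	static void per_cpu_flush_sketch(int cpu)
	{
		mutex_lock(&workqueue_mutex);
		if (!cpu_online(cpu))	/* oops, CPU got unplugged */
			goto bail;

		/* ...flush this CPU's queue... */
	bail:
		mutex_unlock(&workqueue_mutex);
	}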
On Wed, 6 Dec 2006 17:26:14 -0700
Bjorn Helgaas <[EMAIL PROTECTED]> wrote:
> I'm seeing a workqueue-related deadlock. This is on an ia64
> box running SLES10, but it looks like the same problem should
> be possible in current upstream on any architecture.
>
> Here are the two tasks involved:
> [...]
On Thu, Dec 07, 2006 at 11:47:01AM +0530, Srivatsa Vaddagiri wrote:
> - Make it rw-sem
I think rw-sems were also shown to hit deadlocks (a recursive read-lock
attempt deadlocks when a writer comes between the two read attempts by the
same thread). So the below suggestion only seems to make sense [...]
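The recursive read-lock deadlock, as a self-contained illustration:
rw-semaphores queue a second down_read() from the same task behind any
writer that arrived in between, so the task ends up waiting on a writer
that is itself waiting on the task's own first read hold:

	static DECLARE_RWSEM(example_sem);

	static void recursive_reader(void)
	{
		down_read(&example_sem);
		/*
		 * If another task calls down_write(&example_sem) right
		 * here, it blocks behind our read hold -- and the
		 * down_read() below then queues behind *it*: deadlock.
		 */
		down_read(&example_sem);
		up_read(&example_sem);
		up_read(&example_sem);
	}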
On Wed, Dec 06, 2006 at 05:26:14PM -0700, Bjorn Helgaas wrote:
> loadkeys is holding the cpu_hotplug lock (acquired in flush_workqueue())
> and waiting in flush_cpu_workqueue() until the cpu_workqueue drains.
>
> But events/4 is responsible for draining it, and it is blocked waiting
> to acquire the cpu_hotplug lock. [...]
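Putting the reported cycle together schematically (task names as in the
report):

	/*
	 * loadkeys:  lock_cpu_hotplug()  -- acquired in flush_workqueue()
	 *            then waits for events/4's cpu_workqueue to drain
	 *
	 * events/4:  runs a queued work function that calls
	 *            lock_cpu_hotplug() -- blocks: loadkeys holds it
	 *
	 * Each task waits on the other; neither can proceed: deadlock.
	 */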