On Thu, 19 Sep 2013, Linus Torvalds wrote:

> On Thu, Sep 19, 2013 at 2:51 PM, Frederic Weisbecker <fweis...@gmail.com> 
> wrote:
> >
> > It fixes stacks overruns reported by Benjamin Herrenschmidt:
> > http://lkml.kernel.org/r/1378330796.4321.50.camel%40pasglop
> 
> So I don't really dislike this patch-series, but isn't "irq_exit()"
> (which calls the new softirq_on_stack()) already running in the
> context of the irq stack? And it's run at the very end of the irq
> processing, so the irq stack should be empty too at that point.

Right, but most of the implementations are braindamaged.

      irq_enter();
      handle_irq_on_hardirq_stack();
      irq_exit();

instead of doing:
        
      switch_stack();
      irq_enter();
      handle_irq();
      irq_exit();
      restore_stack();

So in the case of softirq processing (the likely case) we end up doing:

   switch_to_hardirq_stack()
   ...
   restore_original_stack()
   switch_to_softirq_stack()
   ...
   restore_original_stack()

Two avoidable stack switch operations for no gain.
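
For illustration, a minimal sketch of the second ordering as arch entry
code; switch_to_irq_stack() and restore_original_stack() are hypothetical
placeholders for the arch specific (usually assembly) stack switch, not
existing kernel interfaces:

	/*
	 * Sketch only: switch to the hardirq stack once and keep it in
	 * place across irq_exit(), so the softirq processing triggered
	 * from there runs on the same stack and no second switch is
	 * needed.
	 */
	void arch_do_IRQ(struct pt_regs *regs, unsigned int irq)
	{
		unsigned long saved_sp;

		saved_sp = switch_to_irq_stack();	/* hypothetical helper */
		irq_enter();
		generic_handle_irq(irq);		/* hard interrupt handling */
		irq_exit();				/* softirqs run here, same stack */
		restore_original_stack(saved_sp);	/* hypothetical helper */
	}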

> I'm assuming that the problem is that since we're already on the irq
> stack, if *another* irq comes in, now that *other* irq doesn't get yet
> another irq stack page. And I'm wondering whether we shouldn't just
> fix that (hopefully unlikely) case instead? So instead of having a
> softirq stack, we'd have just an extra irq stack for the case where
> the original irq stack is already in use.

Why not have a single irq_stack large enough to accommodate interrupt
handling during softirq processing? We have no interrupt nesting so
the maximum stack depth necessary is

  max(softirq_stack_usage) + max(irq_stack_usage)

Today we allocate THREAD_SIZE_ORDER worth of pages for each of the hard
and the soft context, so a single allocation of twice THREAD_SIZE
(i.e. THREAD_SIZE_ORDER + 1) should be sufficient.
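
Roughly, as a sketch only (COMBINED_IRQ_STACK_ORDER, irq_stack_ptr and
alloc_irq_stack are illustrative names, not an existing interface):

	/* One per-cpu irq stack, sized for softirq plus one nested hardirq. */
	#define COMBINED_IRQ_STACK_ORDER	(THREAD_SIZE_ORDER + 1)
	#define COMBINED_IRQ_STACK_SIZE		(2 * THREAD_SIZE)

	static DEFINE_PER_CPU(void *, irq_stack_ptr);

	static int alloc_irq_stack(int cpu)
	{
		struct page *page;

		page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL,
					COMBINED_IRQ_STACK_ORDER);
		if (!page)
			return -ENOMEM;

		/* Stacks grow down, so point at the top of the area. */
		per_cpu(irq_stack_ptr, cpu) = page_address(page) +
					      COMBINED_IRQ_STACK_SIZE;
		return 0;
	}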

Thanks,

        tglx