Re: [PATCH 1/5] ACPI / PM: Move references to pm_flags into sleep.c

2011-02-08 Thread Linus Torvalds
On Tue, Feb 8, 2011 at 1:20 PM, Rafael J. Wysocki <r...@sisk.pl> wrote:

> If direct references to pm_flags are moved from bus.c to sleep.c,
> CONFIG_ACPI will not need to depend on CONFIG_PM any more.

The patch may _work_, but I really hate it. That function naming is insane:

>   #ifdef CONFIG_ACPI_SLEEP
>   #else
>  +static inline bool acpi_pm_enabled(void) { return true; }

acpi_pm_enabled() returns true if ACPI_SLEEP is _not_ enabled? That's
just crazy.

... followed by more crazy:

> +bool acpi_pm_enabled(void)
> +{
> +       if (!(pm_flags & PM_APM)) {
> +               pm_flags |= PM_ACPI;
> +               return true;
> +       }
> +       return false;
> +}

IOW, that function doesn't do anything _remotely_ like what the naming
indicates.

Any sane person would expect that a function called
'acpi_pm_enabled()' would return true if ACPI PM was enabled, and
false otherwise. But that's not what it does at all. Instead, what it
does is to say "if APM isn't enabled, let's enable ACPI and return
true". Except then for the non-ACPI sleep case, we just return true
regardless, which still looks damn odd, wouldn't you say?

That isn't good. Maybe it all does what you want it to do, but from a
code readability standpoint, it's just one honking big "WTF?". Please
don't do that. Call it something else. Make the naming actually follow
what the semantics are. Appropriate naming should also make it
sensible to return true when ACPI_SLEEP is disabled, and should make
the caller look sane.

Now, I don't know what that particular naming might be, but maybe it
would be about APM being enabled. Which is what the caller actually
seems to care about and talks about for the failure case. Maybe you
need separate functions for the "is APM enabled" case for the naming
to make sense. Hmm?
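
Purely as an illustration, a hypothetical sketch of that kind of split
(the names apm_enabled()/acpi_enable_pm() are invented here, not taken
from any actual patch):

	/* Hypothetical sketch only.  Separating the "is APM already
	 * enabled?" test from the "claim PM for ACPI" action makes
	 * each caller read the way its function name promises. */
	static inline bool apm_enabled(void)
	{
		return pm_flags & PM_APM;
	}

	static bool acpi_enable_pm(void)
	{
		if (apm_enabled())
			return false;	/* APM got there first: refuse */
		pm_flags |= PM_ACPI;	/* claim PM for ACPI */
		return true;
	}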

  Linus


Re: [PATCH 5/5] PM: Clean up Kconfig dependencies

2011-02-08 Thread Linus Torvalds
Ack on patches 2-5 in this series. It's just patch 1/5 that I think is
too ugly/odd to live.

Linus


Re: [PATCH 1/5] ACPI / PM: Move references to pm_flags into sleep.c

2011-02-08 Thread Linus Torvalds
On Tue, Feb 8, 2011 at 4:37 PM, Rafael J. Wysocki <r...@sisk.pl> wrote:

> > but maybe it would be about APM being enabled. Which is what the caller
> > actually seems to care about and talks about for the failure case. Maybe
> > you need separate functions for the "is APM enabled" case for the naming
> > to make sense. Hmm?
>
> That sounds like a good idea.  What about the following patch?

This patch I have no problems with.

Linus


Re: Wait for console to become available, v3.2

2009-04-21 Thread Linus Torvalds


On Tue, 21 Apr 2009, David VomLehn wrote:
 
> What in the world are users going to do when they see a message about
> output being lost? There is no way to recover the data and no way to
> prevent it in the future. I don't think this is a good approach.

Sure there is. The console messages are saved too, so doing 'dmesg' will 
get you all the data that was generated before the console went on-line.

We _already_ lose data in that sense (although we could replay it for the 
first console connected - maybe we even do, I'm too lazy to check).
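
For reference, a minimal userspace sketch of what dmesg(1) does under
the hood (klogctl() is the glibc wrapper for syslog(2); the READ_ALL
action number is part of the kernel ABI, defined by hand here):

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/klog.h>

	#define SYSLOG_ACTION_READ_ALL 3

	int main(void)
	{
		static char buf[1 << 17];	/* static: keep it off the stack */
		int n = klogctl(SYSLOG_ACTION_READ_ALL, buf, sizeof(buf));

		if (n < 0) {
			perror("klogctl");
			return EXIT_FAILURE;
		}
		/* Everything still in the ring buffer, including messages
		 * logged before the console came up. */
		fwrite(buf, 1, n, stdout);
		return 0;
	}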

Linus


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Linus Torvalds


On Wed, 27 Aug 2008, Paul Mackerras wrote:
 
> I think your memory is failing you.  In 2.4 and earlier, the kernel
> stack was 8kB minus the size of the task_struct, which sat at the
> start of the 8kB.

Yup, you're right.
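
From memory, the 2.4-era layout in include/linux/sched.h looked roughly
like this (exact names and sizes varied by version and architecture):

	/* The task_struct lived at the bottom of the 8kB region and the
	 * stack grew down from the top, so the usable stack really was
	 * 8kB minus sizeof(struct task_struct). */
	union task_union {
		struct task_struct task;
		unsigned long stack[2048];	/* 8kB total on 32-bit */
	};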

Linus


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-26 Thread Linus Torvalds


On Tue, 26 Aug 2008, Adrian Bunk wrote:
 
> If you think we have too many stack-size problems I'd suggest considering
> removing the choice of 4k stacks on i386, sh and m68knommu instead of
> using -fno-inline-functions-called-once:

Don't be silly. That makes the problem _worse_.

We're much better off with a 1% code-size reduction than forcing big 
stacks on people. The 4kB stack option is also a good way of saying "if it 
works with this, then 8kB is certainly safe".

And embedded people (the ones that might care about 1% code size) are the 
ones that would also want smaller stacks even more!

Linus


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-26 Thread Linus Torvalds


On Tue, 26 Aug 2008, Parag Warudkar wrote:
 
> This is something I never understood - embedded devices are not going
> to run more than a few processes, and 4K*(few processes) IMHO is not
> worth saving nowadays even in the embedded world given falling memory
> prices. Or do I misunderstand?

Well, by that argument, 1% of kernel size doesn't matter either..

1% of a kernel for an embedded device is roughly 10-30kB or so depending 
on how small you make the configuration. 

If that matters, then so should the difference of 3-8 processes' kernel 
stack usage when you have a 4k/8k stack choice (3-8 processes times the 
4kB saved per stack is 12-32kB, the same ballpark).

And they _all_ will have at least 3-8 processes on them. Even the simplest 
ones will tend to have many more.

Linus


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-26 Thread Linus Torvalds


On Wed, 27 Aug 2008, Adrian Bunk wrote:
  
> > We're much better off with a 1% code-size reduction than forcing big
> > stacks on people. The 4kB stack option is also a good way of saying "if it
> > works with this, then 8kB is certainly safe".
>
> You implicitly assume both would solve the same problem.

I'm just saying that your logic doesn't hold water.

If we can save kernel stack usage, then a 1% increase in kernel size is 
more than worth it.

> While 4kB stacks are something we anyway never got 100% working

What? Don't be silly. 

Linux _historically_ always used 4kB stacks.

No, they are likely not usable on x86-64, but dammit, they should be more 
than usable on x86-32 still.

> But I do not think the problem you'd solve with
> -fno-inline-functions-called-once is big enough to warrant the size
> increase it causes.

You continually try to see the inlining change as being about one 
single problem (debuggability, stack, whatever).

The biggest problem with gcc inlining has always been that it has been 
_unpredictable_. It causes problems in many different ways. It has caused 
stability issues due to gcc versions doing random things. It causes the 
stack expansion. It makes stack traces harder for debugging, etc.

If it was any one thing, I wouldn't care. But it's exactly the fact that 
it causes all these problems in different areas.

Linus


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-26 Thread Linus Torvalds


On Tue, 26 Aug 2008, Parag Warudkar wrote:

> And although you said in your later reply that Linux x86 with 4K
> stacks should be more than usable - my experiences running an untainted
> desktop/file server with 4K stacks have always been disastrous, XFS or
> not.  It _might_ work for some well-defined workloads but you would
> not want to risk 4K stacks otherwise.

Umm. How long?

4kB used to be the _only_ choice. And no, there weren't even irq stacks. 
So that 4kB was not just the whole kernel call-chain, it was also all the 
irq nesting above it.

And yes, we've gotten much worse over time, and no, I can't really suggest 
going back to that in general. The code bloat has certainly been 
accompanied by a stack bloat too.

But part of it is definitely gcc. Some versions of gcc used to be 
absolutely _horrid_ when it came to stack usage, especially with some 
flags, and especially with the crazy inlining that unit-at-a-time 
caused.

But I'd be really happy if some embedded people tried to take some of that 
bloat back, and aim for 4kB stacks. Because it's definitely not 
unrealistic. At least it _shouldn't_ be. And a lot of the cases of us 
having structures on the stack is actually not worth it, and tends to be 
about being lazy rather than anything else.
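
Purely illustrative (the struct name and its size are invented), the
lazy pattern vs. the stack-friendly one:

	#include <linux/errno.h>
	#include <linux/slab.h>

	struct big_config { char buf[512]; };	/* hypothetical */

	/* Lazy: ~512 bytes of kernel stack gone in a single frame. */
	static int setup_lazy(void)
	{
		struct big_config cfg = {};

		/* ... fill in and use cfg ... */
		return 0;
	}

	/* Frugal: put the structure on the heap instead. */
	static int setup_frugal(void)
	{
		struct big_config *cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);

		if (!cfg)
			return -ENOMEM;
		/* ... fill in and use *cfg ... */
		kfree(cfg);
		return 0;
	}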

Linus


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-26 Thread Linus Torvalds


On Tue, 26 Aug 2008, Parag Warudkar wrote:
 
> What about deep call chains? The problem with the uptake of 4K stacks
> seems to be that it is not reliably provable that it will work under all
> circumstances.

Umm. Neither is 8k stacks. Nobody proved anything.

But yes, some subsystems have insanely deep call chains. And yes, things 
like the VFS recursion (for symlinks) makes that deeper yet for 
filesystems, although only on the lookup path. And that is exactly the 
kind of thing that can exacerbate the problem of the compiler artificially 
making for a bigger stack footprint of a function (*).

For things like the VFS layer, right now we allow a nesting level of 8, I 
think. If I remember correctly, it was 5 historically. Part of raising 
that depth, though, was that we actually moved the recursive part into 
fs/namei.c, and the nesting stack-depth was something pretty damn small 
when the filesystem used follow_link properly and let the VFS do it for 
it (ie the callchain to actually look up the link could be deep, but it 
would not recurse back, and instead just return a pointer, so that the 
actual _recursive_ part was just __do_follow_link(), which took just a 
few words on the stack).
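
As a sketch of that pattern (going from memory of the VFS API of this
era, with i_private standing in for wherever the link body lives):

	/* The filesystem does NOT recurse: it hands the link text back
	 * to fs/namei.c via nd_set_link(), and only the VFS's own
	 * __do_follow_link() recurses, at a cost of a few words of
	 * stack per nesting level. */
	static void *foo_follow_link(struct dentry *dentry,
				     struct nameidata *nd)
	{
		nd_set_link(nd, (char *)dentry->d_inode->i_private);
		return NULL;	/* cookie handed back to put_link() */
	}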

So yes, we do have some deep callchains, but they tend to be pretty well 
managed for _good_ code. The problems tend to be the areas with lots of 
indirection layers, and yeah, XFS, MD and ACPI all have those kinds of 
things.

In an embedded world, many of those should be a non-issue, though. 

Linus

(*) ie the function that _is_ on the deep chain doesn't actually need much 
of a stack footprint at all itself, but it may call a helper function that 
is _not_ in the deep chain, and if it gets inlined it may give its 
excessive stack footprint to the deep chain - and this is _exactly_ the 
problem that happened with inlining load_module().
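
A hypothetical illustration of (*), with invented names (load_module()'s
actual helper was different):

	/* The helper runs once and wants a big buffer; noinline keeps
	 * that buffer out of deep_chain_function()'s frame.  Without
	 * it (or -fno-inline-functions-called-once), gcc may merge the
	 * two frames, so the 1kB stays live for the entire call chain
	 * below deep_chain_function(), not just while the helper runs. */
	static noinline int parse_args_once(const char *args)
	{
		char buf[1024];

		strlcpy(buf, args, sizeof(buf));
		/* ... parse buf ... */
		return 0;
	}

	int deep_chain_function(const char *args)
	{
		/* tiny stack footprint of its own */
		return parse_args_once(args);
	}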