On Friday August 10, [EMAIL PROTECTED] wrote:
> On 8/1/07, Neil Brown <[EMAIL PROTECTED]> wrote:
>
> > No, this does not use indefinite stack.
> >
> > loop will schedule each request to be handled by a kernel thread, so
> > requests to 'loop' are serialised, never stacked.
> >
> > In 2.6.22,
On 8/1/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> No, this does not use indefinite stack.
>
> loop will schedule each request to be handled by a kernel thread, so
> requests to 'loop' are serialised, never stacked.
>
> In 2.6.22, generic_make_request detects and serialises recursive calls,
> so
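The serialisation Neil describes works by queueing nested submissions on a
per-task list and replaying them iteratively, so stacked block devices add
list entries rather than stack frames. A simplified sketch of the 2.6.22
pattern (after block/ll_rw_blk.c, reconstructed from memory):

    void generic_make_request(struct bio *bio)
    {
        if (current->bio_tail) {
            /* a make_request is already active on this task:
             * append the bio and return instead of recursing */
            bio->bi_next = NULL;
            *current->bio_tail = bio;
            current->bio_tail = &bio->bi_next;
            return;
        }
        /* top-level call: drain the per-task list until empty */
        do {
            current->bio_list = bio->bi_next;
            if (bio->bi_next == NULL)
                current->bio_tail = &current->bio_list;
            else
                bio->bi_next = NULL;
            __generic_make_request(bio);    /* may queue more bios */
            bio = current->bio_list;
        } while (bio);
        current->bio_tail = NULL;
    }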
On 8/1/07, Alan Cox <[EMAIL PROTECTED]> wrote:
> On Wed, 1 Aug 2007 15:33:58 +0200
> Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
> > Tweaking kernel ptes is prohibitive during clone() because that's
> > kernel memory and it would require a flush tlb all with IPIs that
> > won't scale (IPIs are
On Wed, 1 Aug 2007 15:33:58 +0200
Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 01, 2007 at 04:11:23AM -0400, Dan Merillat wrote:
> > How expensive would it be to allocate two pages, then use the MMU to
> > mark the second page unwritable? Hardware-wise it should be possible (for
>
>
On Wed, Aug 01, 2007 at 04:11:23AM -0400, Dan Merillat wrote:
> How expensive would it be to allocate two pages, then use the MMU to mark
> the second page unwritable? Hardware-wise it should be possible (for
Tweaking kernel ptes is prohibitive during clone() because that's
kernel memory and it would
On Wednesday August 1, [EMAIL PROTECTED] wrote:
>
> The other issue is with the layered IO design - no matter what we
> configure the stack size to, it is still possible to create a set of
> translation layers that will cause it to crash regularly: XFS on
> dm_crypt on loop on XFS on dm_crypt on
On 7/31/07, Eric Sandeen <[EMAIL PROTECTED]> wrote:
> No, what I had did only that, so it was still a matter of probabilities...
How expensive would it be to allocate two pages, then use the MMU to mark the
second page unwritable? Hardware-wise it should be possible (for
constant 4k pagesizes, I have
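A hypothetical sketch of what Dan is proposing, and of the cost Andrea
points out above (illustrative only; on i386 of that era the tools would be
change_page_attr() and global_flush_tlb(), and this ignores that thread_info
actually lives at the bottom of the stack):

    /* allocate two contiguous pages and write-protect the lower one,
     * so an overflowing stack faults instead of corrupting memory */
    unsigned long alloc_guarded_stack(void)
    {
        struct page *pg = alloc_pages(GFP_KERNEL, 1);   /* two pages */

        if (!pg)
            return 0;
        change_page_attr(pg, 1, PAGE_KERNEL_RO);        /* guard page */
        /* kernel mappings are shared by all CPUs, so the permission
         * change must be flushed everywhere - the IPI broadcast that
         * makes this prohibitive on every clone() */
        global_flush_tlb();
        return (unsigned long)page_address(pg + 1);     /* usable page */
    }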
Satyam Sharma wrote:
> On 7/27/07, Alan Cox <[EMAIL PROTECTED]> wrote:
>>> Maybe I should resurrect it & send it out...
>
> Hmm, something that hooks in not only at do_IRQ time (as the present
> in-mainline stackoverflow check thing)?
No, what I had did only that, so it was still a matter of
On 7/27/07, Alan Cox <[EMAIL PROTECTED]> wrote:
> > Maybe I should resurrect it & send it out...
Hmm, something that hooks in not only at do_IRQ time (as the present
in-mainline stackoverflow check thing)?
> > (FWIW I think I recall that the warning itself sometimes tipped the
> > scales enough
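The "in-mainline stackoverflow check thing" Satyam refers to is the
CONFIG_DEBUG_STACKOVERFLOW test at the top of do_IRQ on i386, roughly:

    #ifdef CONFIG_DEBUG_STACKOVERFLOW
        /* Debugging check for stack overflow: is there less than 1KB free? */
        {
            long esp;

            __asm__ __volatile__("andl %%esp,%0" :
                        "=r" (esp) : "0" (THREAD_SIZE - 1));
            if (unlikely(esp < (sizeof(struct thread_info) + STACK_WARN))) {
                printk("do_IRQ: stack overflow: %ld\n",
                    esp - sizeof(struct thread_info));
                dump_stack();
            }
        }
    #endif

Note that the printk and dump_stack themselves consume stack, which is the
"warning tipped the scales" failure mode just described.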
> Maybe I should resurrect it & send it out...
>
> (FWIW I think I recall that the warning itself sometimes tipped the
> scales enough on 4k stacks to bring the box down)
You can always switch stack for the printk and it probably should panic
at that point and give a trace then die as that is
Eric Sandeen <[EMAIL PROTECTED]> writes:
>> 8K stacks without IRQ stacks are not "safer" so I don't understand your
>> comment?
>
> Hmm was it SuSE or RH kernels (or mainline?) I saw which had a test to
> defer soft IRQs if they occurred too deep in the stack for the current
> thread.
Perhaps
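The check Andi remembers would look something like this (hypothetical
sketch, not the actual SuSE/RH patch; irq-disable details omitted):

    /* bytes of stack remaining below the current stack pointer */
    static inline unsigned long stack_left(void)
    {
        unsigned long sp;

        asm("movl %%esp, %0" : "=r" (sp));
        return sp & (THREAD_SIZE - 1);
    }

    asmlinkage void do_softirq(void)
    {
        if (in_interrupt())
            return;
        if (stack_left() < THREAD_SIZE / 4) {
            /* too deep already: defer to the ksoftirqd thread
             * rather than running softirqs on this stack */
            wakeup_softirqd();
            return;
        }
        __do_softirq();
    }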
Alan Cox wrote:
>> I don't think they're necessarily bugs. IMHO the WARN_ON is better off
>> at 7k level like it is today with the current STACK_WARN. 4k for a
>> stack for common code really is small. I doubt you're going to find
>
> You want the limit settable. On a production system you want
Alan Cox wrote:
>> About 4k stacks I was generally against them, much better to fail in
>> fork than to risk corruption. The per-irq stack part is a great feature
>> instead (too bad it wasn't enabled for the safer 8k stacks).
>
> 8K stacks without IRQ stacks are not "safer" so I don't understand
On Thu, 19 Jul 2007, Denis Vlasenko wrote:
> On Tuesday 17 July 2007 00:42, Bodo Eggert wrote:
> > > b) make 4K stacks the default option in vanilla kernel.org kernels as
> > > a gentle nudge towards getting people to start fixing the code paths
> > > that are not 4K stack safe.
> >
> > That's
On Tuesday 17 July 2007 00:42, Bodo Eggert wrote:
> > Please note that I was not trying to remove the 8K stack option right
> > now - heck, I didn't even add anything to feature-removal-schedule.txt
> > - all I wanted to accomplish with the patch that started this thread
> > was: a) indicate that
On Tue, 17 Jul 2007, Arjan van de Ven wrote:
> > 1) It all can be reduced to 4K + 4K by assuming all IRQs happen on one CPU.
>
> no it's separate stacks for soft and hard irqs, so it's really 4+4+4
Thanks, I missed that information. Unfortunately this change still does
not help if one of these
On Wed, 18 Jul 2007, Rene Herman wrote:
> On 07/18/2007 01:19 AM, Bodo Eggert wrote:
> > Please post a list of things you have designed, so I can avoid them.
>
> - The ability to read
> - The ability to understand
>
> You're doing a hell of a job already.
If you designed them like you design
Alan Cox <[EMAIL PROTECTED]> wrote:
> On Thu, 19 Jul 2007 03:33:58 +0200
> Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
>> > 8K stacks without IRQ stacks are not "safer" so I don't understand your
>> > comment?
>>
>> Ouch, see the reports about 4k stack crashes. I agree they're not
>> safe w/o
> I don't think they're necessarily bugs. IMHO the WARN_ON is better off
> at 7k level like it is today with the current STACK_WARN. 4k for a
> stack for common code really is small. I doubt you're going to find
You want the limit settable. On a production system you want to set the
limit to
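Making the threshold settable, as Alan suggests, would mean replacing the
compile-time STACK_WARN (THREAD_SIZE/8, i.e. 1K free on 8K stacks) with a
tunable; a hypothetical sketch (sysctl_stack_warn is an invented name, not
mainline):

    int sysctl_stack_warn = THREAD_SIZE / 8;  /* bytes free before warning */

    /* the do_IRQ overflow check then compares against the variable:
     * a debug box can set it strict, a production box can set it high
     * enough that the warning fires long before the stack is gone */
    if (unlikely(esp < sizeof(struct thread_info) + sysctl_stack_warn))
        dump_stack();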
On Wed, Jul 18, 2007 at 08:37:25PM -0500, Matt Mackall wrote:
> Turn on irqstacks when using 8k stacks
Indeed.
> Detect when usage with 8k stacks would overrun a 4k stack when doing
> our stack switch and do a WARN_ONCE
> Fix up the damn bugs
I don't think they're necessarily bugs. IMHO the
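The detection step of Matt's list could sit at the irq-stack switch; a
hypothetical sketch (check_4k_overrun is an invented helper):

    /* called from do_IRQ before switching to the per-CPU irq stack:
     * warn once if the interrupted task has already used more than 4K
     * of its 8K stack, i.e. it would not have fit under CONFIG_4KSTACKS */
    static void check_4k_overrun(void)
    {
        unsigned long sp, used;

        asm("movl %%esp, %0" : "=r" (sp));
        used = THREAD_SIZE - (sp & (THREAD_SIZE - 1));
        WARN_ON_ONCE(used > 4096);
    }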
On Thu, Jul 19, 2007 at 10:23:59AM +0100, Alan Cox wrote:
> Still don't follow. How is "exceeds stack space but less likely to be
> noticed" safer?
Statistically speaking it clearly is. The reason is probably that the
irq theoretical issue happens only on large boxes with lots of
reentrant irqs.
On Thu, 19 Jul 2007 03:33:58 +0200
Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
> > 8K stacks without IRQ stacks are not "safer" so I don't understand your
> > comment?
>
> Ouch, see the reports about 4k stack crashes. I agree they're not
> safe w/o irq stacks (like on x86-64), but they're
On 07/19/2007 03:37 AM, Matt Mackall wrote:
Here's a way to make forward progress on this whole thing:
Turn on irqstacks when using 8k stacks
WLI: are you submitting? Makes great sense regardless of anything and
they've been tested silly with 4KSTACKS already...
Detect when usage with 8k
On Thu, Jul 19, 2007 at 03:33:58AM +0200, Andrea Arcangeli wrote:
> On Thu, Jul 19, 2007 at 01:39:55AM +0100, Alan Cox wrote:
> > > About 4k stacks I was generally against them, much better to fail in
> > > fork than to risk corruption. The per-irq stack part is a great feature
> > > instead (too
On Thu, Jul 19, 2007 at 01:39:55AM +0100, Alan Cox wrote:
> > About 4k stacks I was generally against them, much better to fail in
> > fork than to risk corruption. The per-irq stack part is a great feature
> > instead (too bad it wasn't enabled for the safer 8k stacks).
>
> 8K stacks without IRQ
On Thu, Jul 19, 2007 at 02:48:37AM +0200, Rene Herman wrote:
> On 07/19/2007 02:41 AM, Matt Mackall wrote:
>
> >On Thu, Jul 19, 2007 at 02:15:39AM +0200, Andrea Arcangeli wrote:
>
> >>Using kmalloc(8k) instead of alloc_page() doesn't sound like too big a deal
> >>and that will solve the problem.
> >
On 07/19/2007 02:41 AM, Matt Mackall wrote:
On Thu, Jul 19, 2007 at 02:15:39AM +0200, Andrea Arcangeli wrote:
Using kmalloc(8k) instead of alloc_page() doesn't sound like too big a deal
and that will solve the problem.
How do you figure?
If you're saying that soft pages helps our 8k stack
On Thu, Jul 19, 2007 at 02:15:39AM +0200, Andrea Arcangeli wrote:
> On Mon, Jul 16, 2007 at 06:27:55PM -0500, Matt Mackall wrote:
> > So it's absolutely no help in fixing our order-1 allocation problem
> > because we don't want to force large pages on people.
>
> Using kmalloc(8k) instead of
> About 4k stacks I was generally against them, much better to fail in
> fork than to risk corruption. The per-irq stack part is a great feature
> instead (too bad it wasn't enabled for the safer 8k stacks).
8K stacks without IRQ stacks are not "safer" so I don't understand your
comment?
On Mon, Jul 16, 2007 at 06:27:55PM -0500, Matt Mackall wrote:
> So it's absolutely no help in fixing our order-1 allocation problem
> because we don't want to force large pages on people.
Using kmalloc(8k) instead of alloc_page() doesn't sound like too big a deal
and that will solve the problem. The
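Concretely, the suggestion amounts to changing how the i386 thread_info is
allocated; a hypothetical patch sketch (macro name and location from
memory, and free_thread_info would switch from free_pages to kfree to
match):

    -#define alloc_thread_info(tsk) ((struct thread_info *) \
    -        __get_free_pages(GFP_KERNEL, get_order(THREAD_SIZE)))
    +#define alloc_thread_info(tsk) ((struct thread_info *) \
    +        kmalloc(THREAD_SIZE, GFP_KERNEL))

The point being that the slab satisfies the 8K request out of its larger,
batched page allocations, so fork() rarely hits buddy-allocator
fragmentation directly.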
On 07/18/2007 07:19 PM, Phillip Susi wrote:
Why do the two pages have to be physically contiguous? The stack just
needs to be two contiguous pages in virtual memory, but they can map to
any two pages anywhere in physical memory.
As far as I'm aware that's just a consequence of the way linux
Alan Cox wrote:
Why do the two pages have to be physically contiguous? The stack just
needs to be two contiguous pages in virtual memory, but they can map to
any two pages anywhere in physical memory.
Historically we allowed DMA off the stack on old x86 systems. Removing
that while a good
> Why do the two pages have to be physically contiguous? The stack just
> needs to be two contiguous pages in virtual memory, but they can map to
> any two pages anywhere in physical memory.
Historically we allowed DMA off the stack on old x86 systems. Removing
that while a good idea would
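What Phillip describes is essentially a vmalloc'd stack; an illustrative
sketch only (mainline does not do this, for the DMA reason Alan gives):

    #include <linux/vmalloc.h>

    /* a stack contiguous only in virtual memory: vmalloc maps any two
     * free pages into one contiguous kernel-virtual range, so there is
     * no order-1 requirement */
    struct thread_info *alloc_virtual_stack(void)
    {
        return vmalloc(THREAD_SIZE);
    }
    /* costs: DMA straight off the stack breaks, virt_to_phys() is
     * invalid on vmalloc addresses, and each stack now eats extra TLB
     * entries and page-table pages */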
Matt Mackall wrote:
As far as I'm aware, the actual reason for 4K stacks is that after the
system has been up and running for some time, getting "1 physically
contiguous page" becomes significantly easier than 2, which wouldn't be
arbitrary.
If there are exactly two free pages in the system,
On 07/18/2007 06:54 PM, Matt Mackall wrote:
You can expect the distribution of file sizes to follow a gamma
distribution, with a large hump towards the small end of the spectrum
around 1-10K, dropping off very rapidly as file sizes grow.
Okay.
Not too sure then that 8K wouldn't be something
On Wed, Jul 18, 2007 at 04:38:19AM +0200, Rene Herman wrote:
> On 07/17/2007 01:27 AM, Matt Mackall wrote:
>
> >Larger soft pages waste tremendous amounts of memory (mostly in page
> >cache) for minimal benefit on, say, the typical desktop. While there
> >are workloads where it's a win, it's
Nick Craig-Wood <[EMAIL PROTECTED]> wrote:
> Zan Lynx <[EMAIL PROTECTED]> wrote:
> > There *are* crashes from LVM and ext3. I had to change kernels to avoid
> > them.
> >
> > I had crashes with ext3 on LVM snapshot on DM mirror on SATA.
>
> We've noticed these too... ext3/LVM/raid0/sata
Zan Lynx <[EMAIL PROTECTED]> wrote:
> There *are* crashes from LVM and ext3. I had to change kernels to avoid
> them.
>
> I had crashes with ext3 on LVM snapshot on DM mirror on SATA.
We've noticed these too... ext3/LVM/raid0/sata seems fine. If you add
snapshot in that mix then it becomes
On 7/17/07, Alan Cox <[EMAIL PROTECTED]> wrote:
On Mon, 16 Jul 2007 16:15:28 -0700
"Ray Lee" <[EMAIL PROTECTED]> wrote:
> Heh :-). No, it's not a question of trust. First and foremost, it's
> that there are still users who say that they can crash a current
> 4k+interrupt stacks kernel, while the
On 07/17/2007 01:27 AM, Matt Mackall wrote:
Larger soft pages waste tremendous amounts of memory (mostly in page
cache) for minimal benefit on, say, the typical desktop. While there
are workloads where it's a win, it's probably on a small percentage of
machines.
So it's absolutely no help in
On 07/18/2007 01:39 AM, Jesper Juhl wrote:
On 17/07/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
At hch's suggestion I rewrote the separate IRQ stack configurability
patch into one making IRQ stacks mandatory and unconfigurable, and
hence enabled with 8K stacks.
For what it's
> I can't speak for Fedora, but RHEL disables XFS in their kernel likely
> because it is known to cause problems with 4K stacks.
It -was- - the SGI folks submitted patches to deal with some gcc problems
with stack usage.
On Mon, 16 Jul 2007 16:15:28 -0700
"Ray Lee" <[EMAIL PROTECTED]> wrote:
> On 7/16/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> > Yes but it's also an argument that the 4K stacks don't make the _current_
> > situation without CONFIG_4KSTACKS selected worse and given that you trust
> > that current
On 07/18/2007 01:19 AM, Bodo Eggert wrote:
Please post a list of things you have designed, so I can avoid them.
- The ability to read
- The ability to understand
You're doing a hell of a job already.
Rene.
On Tue, 2007-07-17 at 10:45 -0400, John Stoffel wrote:
> utz> I have to recompile the fedora kernel rpms (fc6, f7) with 8k
> utz> stacks on my i686 server. It's using NFS -> XFS -> DM -> MD
utz> (raid1) -> IDE disks. With 4k stacks it crashes (hangs) within
> utz> minutes after using NFS. With 8k
On 17/07/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
At some point in the past, I wrote:
>> If at some point one of the pro-4k stacks crowd can prove that all
>> code paths are safe, or introduce another viable alternative (such as
>> Matt's idea for extending the stack dynamically),
> 1) It all can be reduced to 4K + 4K by assuming all IRQs happen on one CPU.
no it's separate stacks for soft and hard irqs, so it's really 4+4+4
another angle is that while correctness rules, userspace correctness
rules as well. If you can't fork enough threads for what you need the
machine
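The 4+4+4 split Arjan describes is visible in arch/i386/kernel/irq.c, where
each CPU gets its own hard-irq and soft-irq stack, roughly:

    #ifdef CONFIG_4KSTACKS
    /* per-CPU interrupt stacks, each THREAD_SIZE (4K) and separate
     * from every task's 4K stack */
    union irq_ctx {
        struct thread_info tinfo;
        u32                stack[THREAD_SIZE/sizeof(u32)];
    };

    static union irq_ctx *hardirq_ctx[NR_CPUS] __read_mostly;
    static union irq_ctx *softirq_ctx[NR_CPUS] __read_mostly;
    #endif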
On Tue, 17 Jul 2007, Rene Herman wrote:
> On 07/17/2007 12:06 PM, Bodo Eggert wrote:
> > On Tue, 17 Jul 2007, Rene Herman wrote:
> >> On 07/17/2007 01:45 AM, Bodo Eggert wrote:
> >>> You claim 4k+4k is safe, therefore 8k must be safe, too.
> >>
> >> No, I most certainly do not. I claim proving
On Tue, 2007-07-17 at 18:52 +0200, Rene Herman wrote:
> On 07/17/2007 06:14 PM, Shawn Bohrer wrote:
>
> > I can't speak for Fedora, but RHEL disables XFS in their kernel likely
> > because it is known to cause problems with 4K stacks.
>
> Okay. So is it fair to say it's largely XFS that's the
At some point in the past, I wrote:
>> If at some point one of the pro-4k stacks crowd can prove that all
>> code paths are safe, or introduce another viable alternative (such as
>> Matt's idea for extending the stack dynamically), then removing the 8k
>> stacks option makes sense.
On Mon, Jul
On 07/17/2007 06:14 PM, Shawn Bohrer wrote:
I can't speak for Fedora, but RHEL disables XFS in their kernel likely
because it is known to cause problems with 4K stacks.
Okay. So is it fair to say it's largely XFS that's the problem? No problems
with LVM/MD and say plain ext? If that's the
On Tue, Jul 17, 2007 at 02:57:45AM +0200, Rene Herman wrote:
> True enough. I'm rather wondering though why RHEL is shipping with it if
> it's a _real_ problem. Scribbling junk all over kernel memory would be the
kind of thing I'd imagine you'd mightily piss off enterprise customers with.
utz> On Tue, 2007-07-17 at 00:28 +0200, Rene Herman wrote:
>> Given that as Arjan stated Fedora and even RHEL have been using 4K stacks
>> for some time now, and certainly the latter being a distribution which I
>> would expect to both host a relatively large number of lvm/md/xfs and what
>>
On 07/17/2007 12:06 PM, Bodo Eggert wrote:
On Tue, 17 Jul 2007, Rene Herman wrote:
On 07/17/2007 01:45 AM, Bodo Eggert wrote:
You claim 4k+4k is safe, therefore 8k must be safe, too.
No, I most certainly do not. I claim proving that 4K and separate (per cpu)
interrupt stacks are safe are
On 07/17/2007 01:38 AM, Matt Mackall wrote:
On Sun, Jul 15, 2007 at 12:19:15AM +0200, Rene Herman wrote:
Quite. Of course, saying "our stacks are 1 page" would be by far the
easiest solution to that. Personally, I've been running with 4K stacks
exclusively on a variety of machines for quite
On Tue, 17 Jul 2007, Rene Herman wrote:
> On 07/17/2007 01:45 AM, Bodo Eggert wrote:
> > On Tue, 17 Jul 2007, Rene Herman wrote:
> >> On 07/17/2007 12:37 AM, Ray Lee wrote:
> >>> If at some point one of the pro-4k stacks crowd can prove that all
> >>> code paths are safe
> >> I'll do that the