On Wed, Oct 17, 2007 at 01:05:16AM +0200, Mikulas Patocka wrote:
> > > I see, AMD says that WC memory loads can be out-of-order.
> > >
> > > There is very little usability to it --- framebuffer and AGP aperture is
> > > the only piece of memory that is WC and no kernel structures are placed
> > > there, so it
On Tue, Oct 16, 2007 at 12:33:54PM +0200, Mikulas Patocka wrote:
>
>
> On Tue, 16 Oct 2007, Nick Piggin wrote:
>
> > > > The cpus also have an explicit set of instructions that deliberately do
> > > > unordered stores/loads, and s/lfence etc are mostly designed for those.
> I know about unordered
On Wednesday 17 October 2007 07:28, Theodore Tso wrote:
> On Tue, Oct 16, 2007 at 05:47:12PM +1000, Nick Piggin wrote:
> > + /*
> > +* ram device BLKFLSBUF has special semantics, we want to actually
> > +* release and destroy the ramdisk data.
> > +*/
We won't be able to fix completely
On Wednesday 17 October 2007 05:06, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Tuesday 16 October 2007 08:42, Eric W. Biederman wrote:
> >> I have not observed this case but it is possible to get a dirty page
> >> cache with clean buffer heads if we get a clean ramdisk page
On Tuesday 16 October 2007 18:17, Jan Engelhardt wrote:
> On Oct 16 2007 18:07, Nick Piggin wrote:
> >Changed. But it will hopefully just completely replace rd.c,
> >so I will probably just rename it to rd.c at some point (and
> >change .config options to stay compatible). Unless someone
> >sees a problem
On Tuesday 16 October 2007 08:42, Eric W. Biederman wrote:
> I have not observed this case but it is possible to get a dirty page
> cache with clean buffer heads if we get a clean ramdisk page with
> buffer heads generated by a filesystem calling __getblk and then write
> to that page from user space
Good idea. Was this causing the reiserfs problems?
If so, I think we should be concentrating on what the real problem
is with reiserfs... (or at least why this so obviously correct
looking patch is wrong).
Acked-by: Nick Piggin <[EMAIL PROTECTED]>
>
> Signed-off-by: Eric W. Biederman <[EMAIL PROTECTED]>
> ---
>  fs/buffer.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
On Tuesday 16 October 2007 17:52, Jan Engelhardt wrote:
> On Oct 16 2007 17:47, Nick Piggin wrote:
> >Here's a quick first hack...
>
> Inline patches preferred ;-)
Thanks for reviewing it anyway ;)
> >+config BLK_DEV_BRD
> >+tristate "RAM block device support"
> >+---help---
> >+  This is a new
On Tuesday 16 October 2007 18:08, Nick Piggin wrote:
> On Tuesday 16 October 2007 14:57, Eric W. Biederman wrote:
> > > What magic restrictions on page allocations? Actually we have
> > > fewer restrictions on page allocations because we can use
> > > highmem!
>
With the proposed rewrite yes.
Here's
On Wednesday 17 October 2007 09:48, Eric W. Biederman wrote:
Nick Piggin [EMAIL PROTECTED] writes:
On Wednesday 17 October 2007 07:28, Theodore Tso wrote:
On Tue, Oct 16, 2007 at 05:47:12PM +1000, Nick Piggin wrote:
+/*
+ * ram device BLKFLSBUF has special semantics, we
On Wednesday 17 October 2007 11:13, Eric W. Biederman wrote:
Nick Piggin [EMAIL PROTECTED] writes:
We have 2 problems. First is that, for testing/consistency, we
don't want BLKFLSBUF to throw out the data. Maybe hardly anything
uses BLKFLSBUF now, so it could be just a minor problem
On Tuesday 16 October 2007 14:57, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> >> make_page_uptodate() is most hideous part I have run into.
> >> It has to know details about other layers to now what not
> >> to stomp. I think my incorrect simplification of this is what messed
On Tuesday 16 October 2007 14:38, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Tuesday 16 October 2007 13:55, Eric W. Biederman wrote:
> > I don't follow your logic. We don't need SWAP > RAM in order to swap
> > effectively, IMO.
> The steady state of a system that is heavily
On Tuesday 16 October 2007 13:55, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > How much swap do you have configured? You really shouldn't configure
> > so much unless you do want the kernel to actually use it all, right?
>
> No.
>
> There are three basic swapping scenarios
On Tuesday 16 October 2007 13:14, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Monday 15 October 2007 19:16, Andrew Morton wrote:
> >> On Tue, 16 Oct 2007 00:06:19 +1000 Nick Piggin <[EMAIL PROTECTED]>
> >
> > wrote:
On Mon, Oct 15, 2007 at 11:10:00AM +0200, Jarek Poplawski wrote:
> On Mon, Oct 15, 2007 at 10:09:24AM +0200, Nick Piggin wrote:
> ...
> > Has performance really been much problem for you? (even before the
> > lfence instruction, when you theoretically had to use a locked op)?
> > I mean, I'd struggle
On Tue, Oct 16, 2007 at 12:08:01AM +0200, Mikulas Patocka wrote:
> > On Mon, 15 Oct 2007 22:47:42 +0200 (CEST)
> > Mikulas Patocka <[EMAIL PROTECTED]> wrote:
> >
> > > > According to latest memory ordering specification documents from
> > > > Intel and AMD, both manufacturers are committed to in-order loads
On Tuesday 16 October 2007 00:17, Ingo Molnar wrote:
> Linus, please pull the latest scheduler git tree from:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git
>
> It contains lots of scheduler updates from lots of people - hopefully
> the last big one for quite some time.
On Tuesday 16 October 2007 00:06, David Howells wrote:
> Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > I get funny SIGBUS' like so:
> >
> > fault
> > if (->page_mkwrite() < 0)
> > nfs_vm_page_mkwrite()
> > nfs_write_begin()
> > nfs_flush_incompatible()
> > nfs_wb_page()
On Monday 15 October 2007 21:07, Andi Kleen wrote:
> On Tue, Oct 16, 2007 at 12:56:46AM +1000, Nick Piggin wrote:
> > Is this true even if you don't write through those old mappings?
>
> I think it happened for reads too. It is a little counter intuitive
> because in theory the CPU doesn't need
On Monday 15 October 2007 19:16, Andrew Morton wrote:
> On Tue, 16 Oct 2007 00:06:19 +1000 Nick Piggin <[EMAIL PROTECTED]>
wrote:
> > On Monday 15 October 2007 18:28, Christian Borntraeger wrote:
> > > Andrew, this is a resend of a bugfix patch. Ramdisk seems a bit
> > > unmaintained, so decided to sent
On Monday 15 October 2007 19:52, Rob Landley wrote:
> On Monday 15 October 2007 8:37:44 am Nick Piggin wrote:
> > > Virtual memory isn't perfect. I've _always_ been able to come up with
> > > examples where it just doesn't work for me. This doesn't mean VM
> > > overcommit should be abolished, because
On Monday 15 October 2007 19:36, Andi Kleen wrote:
> David Chinner <[EMAIL PROTECTED]> writes:
> > And yes, we delay unmapping pages until we have a batch of them
> > to unmap. vmap and vunmap do not scale, so this is batching helps
> > alleviate some of the worst of the problems.
>
> You're keeping vmaps
On Monday 15 October 2007 19:05, Christian Borntraeger wrote:
> Am Montag, 15. Oktober 2007 schrieb Nick Piggin:
> > On Monday 15 October 2007 18:28, Christian Borntraeger wrote:
> > > Andrew, this is a resend of a bugfix patch. Ramdisk seems a bit
> > > unmaintained, so decided to sent the patch to you
On Monday 15 October 2007 18:28, Christian Borntraeger wrote:
> Andrew, this is a resend of a bugfix patch. Ramdisk seems a bit
> unmaintained, so decided to sent the patch to you :-).
> I have CCed Ted, who did work on the code in the 90s. I found no current
> email address of Chad Page.
This
On Monday 15 October 2007 18:04, Rob Landley wrote:
> On Sunday 14 October 2007 8:45:03 pm Theodore Tso wrote:
> > > excuse for conflating different categories of devices in the first
> > > place.
> >
> > See the thinkpad Ultrabay drive example above.
>
> Last week I drove my laptop so deep into swap (with a
On Mon, Oct 15, 2007 at 09:44:05AM +0200, Jarek Poplawski wrote:
> On Fri, Oct 12, 2007 at 08:13:52AM -0700, Linus Torvalds wrote:
> >
> >
> > On Fri, 12 Oct 2007, Jarek Poplawski wrote:
> ...
> > So no, there's no way a software person could have afforded to say "it
> > seems to work on my setup" even
On Monday 15 October 2007 16:54, Alok kataria wrote:
> Hi,
>
> Looking at the tlb_flush code path and its co-relation with
> ARCH_FREE_PTE_NR, on x86-64 architecture. I think we still don't use
> the ARCH_FREE_PTE_NR of 5350 as the caching value for the mmu_gathers
> structure, instead fallback to
On Monday 15 October 2007 12:01, Al Viro wrote:
> AFAICS, videobuf-vmalloc use of mem->vma and mem->vmalloc is
> bogus.
>
> You obtain the latter with vmalloc_user(); so far, so good. Then you have
> retval=remap_vmalloc_range(vma, mem->vmalloc,0);
> where vma is given to you by mmap();
On Monday 15 October 2007 10:57, Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > Yes, as Dave said, vmap (more specifically: vunmap) is very expensive
> > because it generally has to invalidate TLBs on all CPUs.
>
> I see.
>
> > I'm looking at some more general solutions to this (already have some
On Monday 15 October 2007 09:12, Jeremy Fitzhardinge wrote:
> David Chinner wrote:
> > You mean xfs_buf.c.
>
> Yes, sorry.
>
> > And yes, we delay unmapping pages until we have a batch of them
> > to unmap. vmap and vunmap do not scale, so this is batching helps
> > alleviate some of the worst of
On Sun, Oct 14, 2007 at 09:25:23AM +0200, Nick Piggin wrote:
> Here are a couple of fixes for the hdaps driver. I have kind of been
> blocking out the bug traces caused by these (the 2nd patch, actually)
> thinking that it's one of those transient / churn things... but it's
> getting annoying now
produces warnings, but I don't actually know if it
does the right thing (because I don't really know what the driver
does or how to test it anyway!).
---
hdaps was using incorrect mutex_trylock return code.
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
---
Index: linux-2.6/drivers/hwmon/hdaps.c
On Friday 12 October 2007 20:50, Peter Zijlstra wrote:
> On Fri, 2007-10-12 at 04:14 +1000, Nick Piggin wrote:
> > On Friday 12 October 2007 20:37, Peter Zijlstra wrote:
> > > The pages will still be read-only due to dirty tracking, so the first
> > > write will still do page_mkwrite().
On Friday 12 October 2007 20:37, Peter Zijlstra wrote:
> On Fri, 2007-10-12 at 02:57 +1000, Nick Piggin wrote:
> > On Friday 12 October 2007 19:03, Peter Zijlstra wrote:
> > > Subject: mm: avoid dirtying shared mappings on mlock
> > >
> > > Suleiman noticed t
On Fri, Oct 12, 2007 at 11:55:05AM +0200, Jarek Poplawski wrote:
> On Fri, Oct 12, 2007 at 10:57:33AM +0200, Nick Piggin wrote:
> >
> > I don't know quite what you're saying... the CPUs could probably get
> > performance by having weakly ordered loads, OTOH I think the Intel
> > ones might already do
On Fri, Oct 12, 2007 at 11:12:13AM +0200, Jarek Poplawski wrote:
> On Fri, Oct 12, 2007 at 10:42:34AM +0200, Helge Hafting wrote:
> > Jarek Poplawski wrote:
> > >On 04-10-2007 07:23, Nick Piggin wrote:
> > >
> > >>According to latest memory ordering specification documents from Intel and
> > >>AMD, both
On Friday 12 October 2007 02:57, Nick Piggin wrote:
> On Friday 12 October 2007 19:03, Peter Zijlstra wrote:
> > Subject: mm: avoid dirtying shared mappings on mlock
> >
> > Suleiman noticed that shared mappings get dirtied when mlocked.
> > > Avoid this by teaching make_pages_present about this case.
On Friday 12 October 2007 19:03, Peter Zijlstra wrote:
> Subject: mm: avoid dirtying shared mappings on mlock
>
> Suleiman noticed that shared mappings get dirtied when mlocked.
> Avoid this by teaching make_pages_present about this case.
>
> Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
>
On Friday 12 October 2007 19:07, David Howells wrote:
> Hi Linus,
>
> Here's a set of patches that remove all calls to iget() and all
> read_inode() functions. They should be removed for two reasons: firstly
> they don't lend themselves to good error handling, and secondly their
> presence is a
On Fri, Oct 12, 2007 at 10:25:34AM +0200, Jarek Poplawski wrote:
> On 04-10-2007 07:23, Nick Piggin wrote:
> > According to latest memory ordering specification documents from Intel and
> > AMD, both manufacturers are committed to in-order loads from cacheable
> > memory for the x86 architecture
On Friday 12 October 2007 15:46, Ingo Molnar wrote:
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > ;) I think you snipped the important bit:
> >
> > "the peak is terrible but it has virtually no dropoff and performs
> > better under load than the default 2.6.21 scheduler." (verbatim)
>
> hm, i understood
On Friday 12 October 2007 02:23, Mr. Berkley Shands wrote:
> With DEBUG_SLAB on, I can run only a very short time under 2.6.23
> before a kernel panic.
>
> [ 626.028180] eth0: too many iterations (6) in nv_nic_irq.
> [ 626.167583] eth0: too many iterations (6) in nv_nic_irq.
> [ 626.206729]
On Wednesday 10 October 2007 20:14, Ingo Molnar wrote:
> * Nicholas Miell <[EMAIL PROTECTED]> wrote:
> > Does CFS still generate the following sysbench graphs with 2.6.23, or
> > did that get fixed?
> >
> > http://people.freebsd.org/~kris/scaling/linux-pgsql.png
> >
On Friday 12 October 2007 10:56, Berkley Shands wrote:
> 100% reproducible on the two motherboards in question.
> Does not happen on any other motherboard I have in my possession
> (not tyan, not uniwide, not socket 940...)
>
> No errors, no dmesg, nothing with debug_spinlock set.
> sysrq shows lots
On Thursday 11 October 2007 01:33, Berkley Shands wrote:
> 2.6.23 with CONFIG_DEBUG_SPINLOCK on does not hang under very high write
> loads to either an LSIELP (write rate 1.1GB/Sec) or to a highpoint
> RR2340 (write rate 1.0GB/Sec). With CONFIG_DEBUG_SPINLOCK off however, the
> system hangs
On Wednesday 10 October 2007 15:20, Linus Torvalds wrote:
> On Wed, 10 Oct 2007, Hugh Dickins wrote:
> > On Tue, 9 Oct 2007, Nick Piggin wrote:
> > > by it ;) To prove my point: the *first* approach I posted to fix this
> > > problem was exactly a patch to special-case the zero_page refcounting
> > > which
On Tuesday 09 October 2007 23:50, Michael Stiller wrote:
> Hi list,
>
> i'm developing an application (in C) which needs to write about
> 1Gbit/s (125Mb/s) to a disk array attached via U320 SCSI.
> It runs on Dual Core 2 Xeons @2Ghz utilizing kernel 2.6.22.7.
>
> I buffer the data in (currently 4)
On Wednesday 10 October 2007 12:22, Linus Torvalds wrote:
> On Tue, 9 Oct 2007, Nick Piggin wrote:
> > Where do you suggest I go from here? Is there any way I can
> > convince you to try it? Make it a config option? (just kidding)
>
> No, I'll take the damn patch, but quite
On Wednesday 10 October 2007 11:26, Christoph Lameter wrote:
> On Tue, 9 Oct 2007, Nick Piggin wrote:
> > > We already use 32k stacks on IA64. So the memory argument fail there.
> >
> > I'm talking about generic code.
>
> The stack size is set in arch code not in ge
On Wednesday 10 October 2007 00:52, Linus Torvalds wrote:
> On Tue, 9 Oct 2007, Nick Piggin wrote:
> > I have done some tests which indicate a couple of very basic common tools
> > don't do much zero-page activity (ie. kbuild). And also combined with
> > some logical argument
On Wednesday 10 October 2007 04:39, Christoph Lameter wrote:
> On Mon, 8 Oct 2007, Nick Piggin wrote:
> > The tight memory restrictions on stack usage do not come about because
> > of the difficulty in increasing the stack size :) It is because we want
> > to k
On Tuesday 09 October 2007 18:55, Huang, Ying wrote:
> On Tue, 2007-10-09 at 02:06 +1000, Nick Piggin wrote:
> > I'm just wondering whether you really need to access highmem in
> > boot code...
>
> Because the zero page (boot_parameters) of i386 boot protocol has 4k
> l
On Tuesday 09 October 2007 18:22, Huang, Ying wrote:
> On Tue, 2007-10-09 at 01:25 +1000, Nick Piggin wrote:
> > On Tuesday 09 October 2007 16:40, Huang, Ying wrote:
> > > +unsigned long copy_from_phys(void *to, unsigned long from_phys,
> > > + unsi
On Tuesday 09 October 2007 16:40, Huang, Ying wrote:
> +unsigned long copy_from_phys(void *to, unsigned long from_phys,
> + unsigned long n)
> +{
> + struct page *page;
> + void *from;
> + unsigned long remain = n, offset, trunck;
> +
> + while (remain) {