Can you retain cc list, please?
On Friday 26 October 2007 07:42, David Schwartz wrote:
> I asked a collection of knowledgeable people I know about the issue. The
> consensus is that the optimization is not permitted in POSIX code but that
> it is permitted in pure C code. The basic argument
On Thursday 25 October 2007 17:15, Andi Kleen wrote:
> On Thursday 25 October 2007 05:24, Nick Piggin wrote:
> > Basically, what the gcc developers are saying is that gcc is
> > free to load and store to any memory location, so long as it
> > behaves as if the instructions were executed in sequence
On Thu, Oct 25, 2007 at 09:07:36PM +0200, Jan Kara wrote:
> Hi,
>
> > This is overdue, sorry. Got a little complicated, and I've been away from
> > my filesystem test setup so I didn't want to send it (lucky, coz I found
> > a bug after more substantial testing).
> >
> > Anyway, RFC?
> Hmm,
On Friday 26 October 2007 02:55, Randy Dunlap wrote:
> > Hmm, can we simply do
> >
> > static inline int test_and_set_bit_lock(int nr, volatile unsigned long *addr)
> > {
> > 	return test_and_set_bit(nr, addr);
> > }
> >
> > please ?
>
> Certainly. That does look better.
Thanks!
>
> ---
>
>
On Thursday 25 October 2007 15:45, Greg KH wrote:
> On Thu, Oct 25, 2007 at 12:31:06PM +1000, Nick Piggin wrote:
> > On Wednesday 24 October 2007 21:12, Kay Sievers wrote:
> > > On 10/24/07, Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > It was intended to be something like /proc/sys/kernel/ only.
On Friday 26 October 2007 09:09, Andi Kleen wrote:
On Friday 26 October 2007 00:49:42 Nick Piggin wrote:
Marking volatile I think is out of the question. To start with,
volatile creates really poor code (and most of the time we actually
do want the code in critical sections to be as tight
On Friday 26 October 2007 09:55, Andi Kleen wrote:
But we don't actually know what it is, and it could change with
different architectures or versions of gcc. I think the sanest thing
is for gcc to help us out here, seeing as there is this very well
defined requirement that we want.
If
Hi,
Just out of interest, I did a grep for files containing test_and_set_bit
as well as clear_bit (excluding obvious ones like include/asm-*/bitops.h).
Quite a few interesting things. There is a lot of stuff in drivers/* that
could be suspect, WRT memory barriers, including lots I didn't touch.
On Friday 26 October 2007 13:35, Benjamin Herrenschmidt wrote:
[acks]
Thanks for those...
Index: linux-2.6/include/asm-powerpc/mmu_context.h
===
--- linux-2.6.orig/include/asm-powerpc/mmu_context.h
+++
Hi David,
[BTW. can you retain cc lists, please?]
On Thursday 25 October 2007 14:29, David Schwartz wrote:
> > Well that's exactly right. For threaded programs (and maybe even
> > real-world non-threaded ones in general), you don't want to be
> > even _reading_ global variables if you don't need
On Thursday 25 October 2007 14:11, Andrew Morton wrote:
> On Wed, 24 Oct 2007 08:24:57 -0400 Matthew Wilcox <[EMAIL PROTECTED]> wrote:
> > and associated infrastructure such as sync_page_killable and
> > fatal_signal_pending. Use lock_page_killable in
> > do_generic_mapping_read() to allow us to
On Thursday 25 October 2007 13:46, Arjan van de Ven wrote:
> On Thu, 25 Oct 2007 13:24:49 +1000
>
> Nick Piggin <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > Andi spotted this exchange on the gcc list. I don't think he's
> > brought it up here yet,
On Friday 19 October 2007 08:25, Matthew Wilcox wrote:
> This series of patches introduces the facility to deliver only fatal
> signals to tasks which are otherwise waiting uninterruptibly.
This is pretty nice I think. It also is a significant piece of
infrastructure required to fix some of the
On Friday 19 October 2007 08:26, Matthew Wilcox wrote:
> Use TASK_KILLABLE to allow wait_on_retry_sync_kiocb to return -EINTR.
> All callers then check the return value and break out of their loops.
>
> Signed-off-by: Matthew Wilcox <[EMAIL PROTECTED]>
> ---
> fs/read_write.c | 17
On Friday 19 October 2007 08:25, Matthew Wilcox wrote:
> Abstracting away direct uses of TASK_ flags allows us to change the
> definitions of the task flags more easily.
>
> Also restructure do_wait() a little
>
> Signed-off-by: Matthew Wilcox <[EMAIL PROTECTED]>
> ---
>
Hi,
Andi spotted this exchange on the gcc list. I don't think he's
brought it up here yet, but it worries me enough that I'd like
to discuss it.
Starts here
http://gcc.gnu.org/ml/gcc/2007-10/msg00266.html
Concrete example here
http://gcc.gnu.org/ml/gcc/2007-10/msg00275.html
Basically, what the
On Thursday 25 October 2007 12:43, Christoph Lameter wrote:
> On Thu, 25 Oct 2007, Nick Piggin wrote:
> > > Ummm... all unreclaimable is set! Are you mlocking the pages in memory?
> > > Or what causes this? All pages under writeback? What is the dirty ratio
> > > set to?
On Thursday 25 October 2007 12:15, Christoph Lameter wrote:
> On Wed, 24 Oct 2007, Alexey Dobriyan wrote:
> > [12728.701398] DMA free:8032kB min:32kB low:40kB high:48kB active:2716kB
> > inactive:2208kB present:12744kB pages_scanned:9299 all_unreclaimable?
> > yes [12728.701567] lowmem_reserve[]:
On Wednesday 24 October 2007 21:12, Kay Sievers wrote:
> On 10/24/07, Nick Piggin <[EMAIL PROTECTED]> wrote:
> > On Tuesday 23 October 2007 10:55, Takenori Nagano wrote:
> > > Nick Piggin wrote:
> > > > One thing I'd suggest is not to use debugfs, if it is g
On Thursday 25 October 2007 11:14, Andrew Morton wrote:
> On Wed, 24 Oct 2007 18:13:06 +1000 [EMAIL PROTECTED] wrote:
> > Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
> >
> > ---
> > kernel/wait.c |2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
On Wednesday 24 October 2007 15:09, Randy Dunlap wrote:
> From: Randy Dunlap <[EMAIL PROTECTED]>
>
> Can we expand this macro definition, or should I look for a way to
> fool^W teach kernel-doc about this?
>
> scripts/kernel-doc says:
> Error(linux-2.6.24-rc1//include/asm-x86/bitops_32.h:188):
On Tuesday 23 October 2007 10:55, Takenori Nagano wrote:
> Nick Piggin wrote:
> > One thing I'd suggest is not to use debugfs, if it is going to
> > be a useful end-user feature.
>
> Is /sys/kernel/notifier_name/ an appropriate place?
Hi list,
I'm curious about the /sys/kernel/ namespace.
On Monday 22 October 2007 14:28, dean gaudet wrote:
> On Sun, 21 Oct 2007, Jeremy Fitzhardinge wrote:
> > dean gaudet wrote:
> > > On Mon, 15 Oct 2007, Nick Piggin wrote:
> > >> Yes, as Dave said, vmap (more specifically: vunmap) is very expensive
> > >> because it generally has to invalidate TLBs on all CPUs
On Monday 22 October 2007 04:39, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Sunday 21 October 2007 18:23, Eric W. Biederman wrote:
> >> Christian Borntraeger <[EMAIL PROTECTED]> writes:
> >>
> >> Let me put it another w
On Monday 22 October 2007 03:56, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > OK, I missed that you set the new inode's aops to the ramdisk_aops
> > rather than the bd_inode. Which doesn't make a lot of sense because
> > you just have a lo
On Thursday 18 October 2007 18:52, Takenori Nagano wrote:
> Vivek Goyal wrote:
> > > My stance is that _all_ the RAS tools (kdb, kgdb, nlkd, netdump, lkcd,
> > > crash, kdump etc.) should be using a common interface that safely puts
> > > the entire system in a stopped state and saves the state
On Sunday 21 October 2007 18:23, Eric W. Biederman wrote:
> Christian Borntraeger <[EMAIL PROTECTED]> writes:
> Let me put it another way. Looking at /proc/slabinfo I can get
> 37 buffer_heads per page. I can allocate 10% of memory in
> buffer_heads before we start to reclaim them. So it
On Sunday 21 October 2007 18:55, David Woodhouse wrote:
> On Fri, 2007-10-19 at 17:16 +1000, Nick Piggin wrote:
> > if (writtenlen) {
> > - if (inode->i_size < (pg->index << PAGE_CACHE_SHIFT) +
> > start + writtenlen) { -
On Sunday 21 October 2007 16:48, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > Yes it does. It is exactly breaking the coherency between block
> > device and filesystem metadata coherency that Andrew cared about.
> > Whether or not that mat
On Sunday 21 October 2007 14:53, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Saturday 20 October 2007 07:27, Eric W. Biederman wrote:
> >> Andrew Morton <[EMAIL PROTECTED]> writes:
> >> > I don't think we little angels want to tread here.
On Sunday 21 October 2007 15:10, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Saturday 20 October 2007 08:51, Eric W. Biederman wrote:
> >> Currently the ramdisk tries to keep the block device page cache pages
> >> from being mar
On Saturday 20 October 2007 08:51, Eric W. Biederman wrote:
> Currently the ramdisk tries to keep the block device page cache pages
> from being marked clean and dropped from memory. That fails for
> filesystems that use the buffer cache because the buffer cache is not
> an ordinary buffer cache
On Saturday 20 October 2007 07:27, Eric W. Biederman wrote:
> Andrew Morton <[EMAIL PROTECTED]> writes:
> > I don't think we little angels want to tread here. There are so many
> > weirdo things out there which will break if we bust the coherence between
> > the fs and /dev/hda1.
>
> We broke
On Friday 19 October 2007 17:03, Nick Piggin wrote:
> On Friday 19 October 2007 16:05, Erez Zadok wrote:
> > David,
> >
> > I'm testing unionfs on top of jffs2, using 2.6.24 as of linus's commit
> > 4fa4d23fa20de67df919030c1216295664866ad7. All of my unionfs tests pass
On Friday 19 October 2007 16:05, Erez Zadok wrote:
> David,
>
> I'm testing unionfs on top of jffs2, using 2.6.24 as of linus's commit
> 4fa4d23fa20de67df919030c1216295664866ad7. All of my unionfs tests pass
> when unionfs is stacked on top of jffs2, other than my truncate test --
> which tries to
removes the mfence from __clear_bit_unlock (which is already a useful
primitive for SLUB).
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
---
Index: linux-2.6/include/asm-x86/bitops_32.h
===
--- linux-2.6.orig/include/asm-x86/bitop
On Friday 19 October 2007 13:28, Herbert Xu wrote:
> Nick Piggin <[EMAIL PROTECTED]> wrote:
> >> First of all let's agree on some basic assumptions:
> >>
> >> * A pair of spin lock/unlock subsumes the effect of a full mb.
> >
> > Not unless you mean
On Friday 19 October 2007 12:32, Herbert Xu wrote:
> First of all let's agree on some basic assumptions:
>
> * A pair of spin lock/unlock subsumes the effect of a full mb.
Not unless you mean a pair of spin lock/unlock as in
2 spin lock/unlock pairs (4 operations).
*X = 10;
spin_lock();
/* *Y
On Friday 19 October 2007 12:01, Christoph Lameter wrote:
> On Fri, 19 Oct 2007, Nick Piggin wrote:
> > > Yes that is what I attempted to do with the write barrier. To my
> > > knowledge there are no reads that could bleed out and I wanted to avoid
> > > a full fence
On Friday 19 October 2007 11:21, Christoph Lameter wrote:
> On Fri, 19 Oct 2007, Nick Piggin wrote:
> > Ah, thanks, but can we just use my earlier patch that does the
> > proper __bit_spin_unlock which is provided by
> > bit_spin_lock-use-lock-bitops.patch
>
> Ok.
>
On Friday 19 October 2007 08:05, Richard Jelinek wrote:
> Hello guys,
>
> I'm not subscribed to this list, so if you find this question valid
> enough to answer it, please cc me. Thanks.
>
> This is what the top-output looks like on my machine after having
> copied about 550GB of data from a
so you might have been confused by looking at x86's spinlocks
into thinking this will work. However on powerpc and sparc, I
don't think it gives you the right types of barriers.
Slub can use the non-atomic version to unlock because other flags will not
get modified with the lock held.
Signed-off-by:
On Thursday 18 October 2007 16:16, Andrew A. Razdolsky wrote:
> Hello!
>
> In attachments i did pick all info i know about this failure.
Hi,
Does this actually cause problems for your system? Occasional
page allocation failures from interrupt context are expected.
If you are getting a lot of
On Thursday 18 October 2007 17:14, Vasily Averin wrote:
> Nick Piggin wrote:
> > Hi,
> >
> > On Thursday 18 October 2007 16:24, Vasily Averin wrote:
> >> Hi all,
> >>
> >> could anybody explain how "inactive" may be much greater than "
Hi,
On Thursday 18 October 2007 16:24, Vasily Averin wrote:
> Hi all,
>
> could anybody explain how "inactive" may be much greater than "cached"?
> stress test (http://weather.ou.edu/~apw/projects/stress/) that writes into
> removed files in cycle puts the node to the following state:
>
>
the non-atomic version to unlock because other flags will not
get modified with the lock held.
Signed-off-by: Nick Piggin [EMAIL PROTECTED]
---
mm/slub.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6/mm/slub.c
On Thursday 18 October 2007 13:59, Eric W. Biederman wrote:
> If filesystems care at all they want absolute control over the buffer
> cache. Controlling which buffers are dirty and when. Because we
> keep the buffer cache in the page cache for the block device we have
> not quite been giving
On Thursday 18 October 2007 04:45, Eric W. Biederman wrote:
> At this point my concern is what makes a clean code change in the
> kernel. Because user space can currently play with buffer_heads
> by way of the block device and cause lots of havoc (see the recent
Well if userspace is writing to
On Wednesday 17 October 2007 20:30, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Tuesday 16 October 2007 18:08, Nick Piggin wrote:
> >> On Tuesday 16 October 2007 14:57, Eric W. Biederman wrote:
> >> > > What magic restriction
On Wed, Oct 17, 2007 at 01:51:17PM +0800, Herbert Xu wrote:
> Nick Piggin <[EMAIL PROTECTED]> wrote:
> >
> > Also, for non-wb memory. I don't think the Intel document referenced
> > says anything about this, but the AMD document says that loads can pas
On Wed, Oct 17, 2007 at 02:30:32AM +0200, Mikulas Patocka wrote:
> > > You already must not place any data structures into WC memory --- for
> > > example, spinlocks wouldn't work there.
> >
> > What do you mean "already"?
>
> I mean "in current kernel" (I checked it in 2.6.22)
Ahh, that's not
On Wednesday 17 October 2007 11:13, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > We have 2 problems. First is that, for testing/consistency, we
> > don't want BLKFLSBUF to throw out the data. Maybe hardly anything
> > uses BLKFLSBUF now,
On Wednesday 17 October 2007 09:48, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Wednesday 17 October 2007 07:28, Theodore Tso wrote:
> >> On Tue, Oct 16, 2007 at 05:47:12PM +1000, Nick Piggin wrote:
> >> > +/*
> &g