On Mon, May 21, 2007 at 10:04:10PM -0700, William Lee Irwin III wrote:
>> The size isn't the advantage being cited; I'd actually expect the net
>> result to be larger. It's the control over the layout of the metadata
>> for cache locality and even things like having enough flags, folding
>>
On Mon, 21 May 2007, Christoph Lameter wrote:
> On Tue, 22 May 2007, KAMEZAWA Hiroyuki wrote:
> > For i386(32bit arch), there is not enough space for vmemmap.
>
> I thought 32 bit would use flatmem? Is memory really sparse on 32
> bit? Likely difficult due to lack of address space?
Throwing in
On Mon, May 21, 2007 at 10:04:10PM -0700, William Lee Irwin III wrote:
> On Mon, May 21, 2007 at 06:39:51PM -0700, William Lee Irwin III wrote:
> >> address (virtual and physical are trivially inter-convertible), mock
> >> up something akin to what filesystems do for anonymous pages, etc.
> >> The
On Mon, May 21, 2007 at 06:39:51PM -0700, William Lee Irwin III wrote:
>> address (virtual and physical are trivially inter-convertible), mock
>> up something akin to what filesystems do for anonymous pages, etc.
>> The real objection everyone's going to have is that driver writers
>> will stain
On Mon, May 21, 2007 at 06:39:51PM -0700, William Lee Irwin III wrote:
> On Mon, May 21, 2007 at 11:27:42AM +0200, Nick Piggin wrote:
> >> ... yeah, something like that would bypass
>
> On Mon, May 21, 2007 at 05:43:16PM -0500, Matt Mackall wrote:
> > As long as we're throwing out crazy
On Mon, May 21, 2007 at 11:27:42AM +0200, Nick Piggin wrote:
>> ... yeah, something like that would bypass
On Mon, May 21, 2007 at 05:43:16PM -0500, Matt Mackall wrote:
> As long as we're throwing out crazy unpopular ideas, try this one:
> Divide struct page in two such that all the most
On Tue, 22 May 2007, Nick Piggin wrote:
> That would be unpopular with pagecache, because that uses pretty well
> all fields.
SLUB also uses all fields
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info
On Mon, May 21, 2007 at 05:43:16PM -0500, Matt Mackall wrote:
> On Mon, May 21, 2007 at 11:27:42AM +0200, Nick Piggin wrote:
> >
> > ... yeah, something like that would bypass
>
> As long as we're throwing out crazy unpopular ideas, try this one:
>
> Divide struct page in two such that all the
On Mon, 21 May 2007 17:38:58 -0700 (PDT)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Tue, 22 May 2007, KAMEZAWA Hiroyuki wrote:
>
> > For i386(32bit arch), there is not enough space for vmemmap.
>
> I thought 32 bit would use flatmem? Is memory really sparse on 32
> bit? Likely difficult
On Mon, May 21, 2007 at 04:26:03AM -0700, William Lee Irwin III wrote:
> On Mon, May 21, 2007 at 01:08:13AM -0700, William Lee Irwin III wrote:
> >> Choosing k distinct integers (mem_map array indices) from the interval
> >> [0,n-1] results in k(n-k+1)/n non-adjacent intervals of contiguous
> >>
On Tue, 22 May 2007, KAMEZAWA Hiroyuki wrote:
> For i386(32bit arch), there is not enough space for vmemmap.
I thought 32 bit would use flatmem? Is memory really sparse on 32
bit? Likely difficult due to lack of address space?
> For 64bit arch, page flags are not exhausted yet.
Right.
On Mon, 21 May 2007 10:08:06 -0700 (PDT)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Sun, 20 May 2007, Andi Kleen wrote:
>
> > Besides with the scarcity of pageflags it might make sense to do "64 bit
> > only"
> > flags at some point.
>
> There is no scarcity of page flags. There is
>
>
On Mon, May 21, 2007 at 11:27:42AM +0200, Nick Piggin wrote:
> On Mon, May 21, 2007 at 01:08:13AM -0700, William Lee Irwin III wrote:
> > On Sun, May 20, 2007 at 01:46:47AM -0700, William Lee Irwin III wrote:
> > >> The lack of consideration of the average case. I'll see what I can smoke
> > >>
On Sun, 20 May 2007, Andi Kleen wrote:
> Besides with the scarcity of pageflags it might make sense to do "64 bit only"
> flags at some point.
There is no scarcity of page flags. There is
1. Hoarding by Andrew
2. Waste by Sparsemem (section flags no longer necessary with
virtual memmap)
2
On Sun, 20 May 2007, Nick Piggin wrote:
> I _am_ considering the average case, and I consider the aligned structure
> is likely to win on average :) I just don't have numbers for it yet.
I'd be glad if you could get some numbers, too. I did some benchmarking a
few weeks ago on x86_64 and I found
On Mon, May 21, 2007 at 01:08:13AM -0700, William Lee Irwin III wrote:
>> Choosing k distinct integers (mem_map array indices) from the interval
>> [0,n-1] results in k(n-k+1)/n non-adjacent intervals of contiguous
>> array indices on average. The average interval length is
>> (n+1)/(n-k+1) - 1/C(n,k).
On Mon, May 21, 2007 at 11:12:59AM +0200, Helge Hafting wrote:
> Andrew Morton wrote:
> >On Sat, 19 May 2007 11:15:01 -0700 William Lee Irwin III
> ><[EMAIL PROTECTED]> wrote:
> >
> >
> >>Much the same holds for the atomic_t's; 32 + PAGE_SHIFT is
> >>44 bits or more, about as much as is
On Mon, 21 May 2007 01:08:13 -0700
William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> Now that I've been informed of the ->_count and ->_mapcount issues,
> I'd say that they're grave and should be corrected even at the cost
> of sizeof(struct page).
As long as we handle 4 KB pages, adding 64 bits
On Mon, May 21, 2007 at 01:08:13AM -0700, William Lee Irwin III wrote:
> On Sun, May 20, 2007 at 01:46:47AM -0700, William Lee Irwin III wrote:
> >> The lack of consideration of the average case. I'll see what I can smoke
> >> out there.
>
> On Sun, May 20, 2007 at 11:25:52AM +0200, Nick Piggin
Andrew Morton wrote:
On Sat, 19 May 2007 11:15:01 -0700 William Lee Irwin III <[EMAIL PROTECTED]>
wrote:
Much the same holds for the atomic_t's; 32 + PAGE_SHIFT is
44 bits or more, about as much as is possible, and one reference per
page per page is not even feasible. Full-length
On Sun, May 20, 2007 at 01:46:47AM -0700, William Lee Irwin III wrote:
>> The lack of consideration of the average case. I'll see what I can smoke
>> out there.
On Sun, May 20, 2007 at 11:25:52AM +0200, Nick Piggin wrote:
> I _am_ considering the average case, and I consider the aligned structure
On Fri, 18 May 2007 13:37:09 -0700
"Luck, Tony" <[EMAIL PROTECTED]> wrote:
> > I wonder if there are other uses for the free space?
>
> unsigned long moreflags;
>
> Nick and Hugh were just sparring over adding a couple (or perhaps 8)
> flag bits. This would supply 64 new bits ... maybe
On Mon, May 21, 2007 at 11:27:42AM +0200, Nick Piggin wrote:
... yeah, something like that would bypass
On Mon, May 21, 2007 at 05:43:16PM -0500, Matt Mackall wrote:
As long as we're throwing out crazy unpopular ideas, try this one:
Divide struct page in two such that all the most commonly
On Mon, May 21, 2007 at 06:39:51PM -0700, William Lee Irwin III wrote:
On Mon, May 21, 2007 at 11:27:42AM +0200, Nick Piggin wrote:
... yeah, something like that would bypass
On Mon, May 21, 2007 at 05:43:16PM -0500, Matt Mackall wrote:
As long as we're throwing out crazy unpopular ideas,
On Mon, May 21, 2007 at 06:39:51PM -0700, William Lee Irwin III wrote:
address (virtual and physical are trivially inter-convertible), mock
up something akin to what filesystems do for anonymous pages, etc.
The real objection everyone's going to have is that driver writers
will stain their
On Sat, May 19, 2007 at 10:53:20AM -0700, William Lee Irwin III wrote:
> On Fri, May 18, 2007 at 04:42:10PM +0100, Hugh Dickins wrote:
> > Sooner rather than later, don't we need those 8 bytes to expand from
> > atomic_t to atomic64_t _count and _mapcount? Not that we really need
> > all 64 bits
On Fri, May 18, 2007 at 06:08:54AM +0200, Nick Piggin wrote:
> If we add 8 bytes to struct page on 64-bit machines, it becomes 64 bytes,
> which is quite a nice number for cache purposes.
We have had such hardware alignment for many data structures where it
was only wasting memory (i.e. vmas).
On Sunday 20 May 2007 06:10:16 Eric Dumazet wrote:
> Christoph Lameter a écrit :
> > On Sat, 19 May 2007, William Lee Irwin III wrote:
> >
> >> However, there are numerous optimizations and features made possible
> >> with flag bits, which might as well be made cheap by padding struct
> >> page
On Sun, May 20, 2007 at 01:46:47AM -0700, William Lee Irwin III wrote:
> On Sat, May 19, 2007 at 11:15:01AM -0700, William Lee Irwin III wrote:
> >> The cache cost argument is specious. Even misaligned, smaller is
> >> smaller.
>
> On Sun, May 20, 2007 at 07:22:29AM +0200, Nick Piggin wrote:
> >
On Sat, May 19, 2007 at 11:15:01AM -0700, William Lee Irwin III wrote:
>> The cache cost argument is specious. Even misaligned, smaller is
>> smaller.
On Sun, May 20, 2007 at 07:22:29AM +0200, Nick Piggin wrote:
> Of course smaller is smaller ;) Why would that make the cache cost
> argument
On Sat, 19 May 2007 11:15:01 -0700 William Lee Irwin III <[EMAIL PROTECTED]>
wrote:
>> Much the same holds for the atomic_t's; 32 + PAGE_SHIFT is
>> 44 bits or more, about as much as is possible, and one reference per
>> page per page is not even feasible. Full-length atomic_t's are just
>> not
On Sat, May 19, 2007 at 11:15:01AM -0700, William Lee Irwin III wrote:
> On Fri, May 18, 2007 at 11:14:26AM -0700, Christoph Lameter wrote:
> >> Right. That would simplify the calculations.
>
> On Sat, May 19, 2007 at 03:25:30AM +0200, Nick Piggin wrote:
> > It isn't the calculations I'm worried
Christoph Lameter a écrit :
On Sat, 19 May 2007, William Lee Irwin III wrote:
However, there are numerous optimizations and features made possible
with flag bits, which might as well be made cheap by padding struct
page up to the next highest power of 2 bytes with space for flag bits.
Well
On Sat, 19 May 2007, William Lee Irwin III wrote:
> However, there are numerous optimizations and features made possible
> with flag bits, which might as well be made cheap by padding struct
> page up to the next highest power of 2 bytes with space for flag bits.
Well the last time I tried to
On Fri, May 18, 2007 at 11:14:26AM -0700, Christoph Lameter wrote:
>> Right. That would simplify the calculations.
On Sat, May 19, 2007 at 03:25:30AM +0200, Nick Piggin wrote:
> It isn't the calculations I'm worried about, although they'll get simpler
> too. It is the cache cost.
The cache cost
On Fri, 18 May 2007, Nick Piggin wrote:
>> If we add 8 bytes to struct page on 64-bit machines, it becomes 64 bytes,
>> which is quite a nice number for cache purposes.
>> However we don't have to let those 8 bytes go to waste: we can use them
>> to store the virtual address of the page, which
Christoph Lameter wrote:
> On Sat, 19 May 2007, Nick Piggin wrote:
>
>> Hugh points out that we should make _count and _mapcount atomic_long_t's,
>> which would probably be a better use of the space once your vmemmap goes
>> in.
>
> Well Andy was going to merge it:
>
>
On Sat, 19 May 2007, Nick Piggin wrote:
> Hugh points out that we should make _count and _mapcount atomic_long_t's,
> which would probably be a better use of the space once your vmemmap goes
> in.
Well Andy was going to merge it:
http://marc.info/?l=linux-kernel&m=117620162415620&w=2
Andy when are
On Fri, May 18, 2007 at 10:42:30AM +0100, David Howells wrote:
> Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > I'd like to be the first to propose an increase to the size of struct page
> > just for the sake of increasing it!
>
> Heh. I'm surprised you haven't got more adverse reactions.
>
> >
On Fri, May 18, 2007 at 11:14:26AM -0700, Christoph Lameter wrote:
> On Fri, 18 May 2007, Nick Piggin wrote:
>
> > However we don't have to let those 8 bytes go to waste: we can use them
> > to store the virtual address of the page, which kind of makes sense for
> > 64-bit, because they can
On Fri, May 18, 2007 at 04:42:10PM +0100, Hugh Dickins wrote:
> On Fri, 18 May 2007, Nick Piggin wrote:
> >
> > If we add 8 bytes to struct page on 64-bit machines, it becomes 64 bytes,
> > which is quite a nice number for cache purposes.
> >
> > However we don't have to let those 8 bytes go to
> I wonder if there are other uses for the free space?
unsigned long moreflags;
Nick and Hugh were just sparring over adding a couple (or perhaps 8)
flag bits. This would supply 64 new bits ... maybe that would keep
them happy for a few more years.
-Tony
On Fri, 18 May 2007, Nick Piggin wrote:
> However we don't have to let those 8 bytes go to waste: we can use them
> to store the virtual address of the page, which kind of makes sense for
> 64-bit, because they are likely to use complicated memory models.
That is not a valid consideration anymore.
On Fri, 18 May 2007, Nick Piggin wrote:
> The page->virtual thing is just a bonus (although have you seen what
> sort of hoops SPARSEMEM has to go through to find page_address?! It
> will definitely be a win on those architectures).
That is on the way out. See the discussion on virtual memmap
On Fri, 18 May 2007, Nick Piggin wrote:
>
> If we add 8 bytes to struct page on 64-bit machines, it becomes 64 bytes,
> which is quite a nice number for cache purposes.
>
> However we don't have to let those 8 bytes go to waste: we can use them
> to store the virtual address of the page, which
>
> I'd say all up this is going to decrease overall cache footprint in
> fastpaths, both by reducing text and data footprint of page_address and
> related operations, and by reducing cacheline footprint of most batched
> operations on struct pages.
I suspect the cache line footprint is not the
Nick Piggin <[EMAIL PROTECTED]> wrote:
> I'd like to be the first to propose an increase to the size of struct page
> just for the sake of increasing it!
Heh. I'm surprised you haven't got more adverse reactions.
> If we add 8 bytes to struct page on 64-bit machines, it becomes 64 bytes,
>
On Fri, May 18, 2007 at 12:43:04AM -0700, Andrew Morton wrote:
> On Fri, 18 May 2007 09:32:23 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > On Fri, May 18, 2007 at 12:19:05AM -0700, Andrew Morton wrote:
> > > On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
> > >
>
On Fri, 18 May 2007 09:32:23 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
> On Fri, May 18, 2007 at 12:19:05AM -0700, Andrew Morton wrote:
> > On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
> >
> > > Many batch operations on struct page are completely random,
> >
> >
On Fri, May 18, 2007 at 12:19:05AM -0700, Andrew Morton wrote:
> On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > Many batch operations on struct page are completely random,
>
> But they shouldn't be: we should aim to place physically contiguous pages
> into
On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
> Many batch operations on struct page are completely random,
But they shouldn't be: we should aim to place physically contiguous pages
into logically contiguous pagecache slots, for all the reasons we
discussed.
If/when
On Thu, May 17, 2007 at 10:22:17PM -0700, David Miller wrote:
> From: Nick Piggin <[EMAIL PROTECTED]>
> Date: Fri, 18 May 2007 07:12:38 +0200
>
> > The page->virtual thing is just a bonus (although have you seen what
> > sort of hoops SPARSEMEM has to go through to find page_address?! It
> > will
From: Nick Piggin <[EMAIL PROTECTED]>
Date: Fri, 18 May 2007 07:12:38 +0200
> The page->virtual thing is just a bonus (although have you seen what
> sort of hoops SPARSEMEM has to go through to find page_address?! It
> will definitely be a win on those architectures).
If you set the bit ranges