There is no user of local_t remaining after the cpu ops patchset. local_t
always suffered from the problem that the operations it generated were not
able to perform the relocation of a pointer to the target processor and the
atomic update at the same time. There was a need to disable preemption and
The module subsystem cannot handle symbols that are zero. It prints out
a message that these symbols are unresolved. Define a constant
UNRESOLVED
that is used to hold the value used for unresolved symbols. Set it to 1
(it's hopefully unlikely that a symbol will have the value 1). This is necessary
Use the CPU_xx operations to deal with the per cpu data.
Avoid a loop to NR_CPUS here. Use the possible map instead.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/module.h | 13 +
kernel/module.c| 17 +++--
2 files changed, 12 inserti
The use of CPU ops here avoids the offset calculations that we used to have
to do with per cpu ops. The result of this patch is that event counters are
coded with a single instruction the following way:
incq %gs:offset(%rip)
Without these patches this was:
mov %gs:0x8,%rdx
mov %eax,0x38(
Get rid of one of the leftover pda accessors and cut out some more of pda.h.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/asm-x86/pda.h | 35 +--
include/asm-x86/percpu_64.h | 30 ++
2 files changed, 31 in
It is useless now since gs can always stand in for data_offset.
Move active_mm into the available slot in order to not upset the
established offsets.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
arch/x86/kernel/asm-offsets_64.c |1 -
arch/x86/kernel/entry_64.S |7 ++---
There needs to be a way to determine the offset for the CPU ops of per cpu
variables. The offset is simply the address of the variable. But we do not
want to code ugly things like
CPU_READ(per_cpu__statistics)
in the core. So define a new helper per_cpu_var(var) that simply adds
the per_c
If we move the pda to the beginning of the cpu area then the gs segment will
also point to the beginning of the cpu area. After this patch we can use gs
on any percpu variable or cpu_alloc pointer from cpu 0 to get to the active
processor's variables. There is no longer a need to add a per cpu offset
Support fast cpu ops in x86_64 by providing a series of functions that
generate the proper instructions. Define CONFIG_FAST_CPU_OPS so that core code
can exploit the availability of fast per cpu operations.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
arch/x86/Kconfig|
Replace all uses of __per_cpu_offset with CPU_PTR. This will avoid a lot
of lookups for per cpu offset calculations.
Keep per_cpu_offset() itself because lockdep uses it.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
arch/x86/kernel/smpboot_64.c |8 +++-
include/asm-x86/percp
Use boot_cpu_alloc to allocate a cpu area chunk that is needed to store the
statically declared per cpu data and then point the per_cpu_offset pointers
to the cpu area.
The per cpu area is moved to a ZERO offset using some linker scripting.
All per cpu variable addresses become true offsets into a
Declare the pda as a per cpu variable. This will have the effect of moving
the pda data into the cpu area managed by cpu alloc.
The boot_pdas are only needed in head64.c so move the declaration
over there and make it static.
Remove the code that allocates special pda data structures.
Signed-off-
These are critical fast paths. Using a segment override instead of an address
calculation reduces overhead.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
arch/x86/kernel/nmi_64.c |8
1 file changed, 4 insertions(+), 4 deletions(-)
Index: linux-2.6/arch/x86/kernel/nmi_
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
crypto/async_tx/async_tx.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
Index: linux-2.6/crypto/async_tx/async_tx.c
===
--- linux-2.6.orig/crypto/asy
There is no user of allocpercpu left after all the earlier patches were
applied. Remove the code that realizes allocpercpu.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/percpu.h | 80 --
mm/Makefile|1
mm/allocpercpu.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
drivers/infiniband/hw/ehca/ehca_irq.c | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
Index: linux-2.6/drivers/infiniband/hw/ehca/ehca_irq.c
==
Use the cpu alloc functions for the mib handling functions in the net
layer. The API for snmp_mib_free() is changed to add a size parameter
since cpu_free requires that.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/net/ip.h|2 +-
include/net/snmp.h | 15 +++--
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
net/core/sock.c |8
1 file changed, 4 insertions(+), 4 deletions(-)
Index: linux-2.6/net/core/sock.c
===
--- linux-2.6.orig/net/core/sock.c 2007-11-18 14:38:2
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
drivers/net/loopback.c | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-)
Index: linux-2.6/drivers/net/loopback.c
===
--- linux-2.6.orig/drivers/net/loopbac
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
drivers/net/veth.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
Index: linux-2.6/drivers/net/veth.c
===
--- linux-2.6.orig/drivers/net/veth.c 2007-11-15
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
drivers/net/chelsio/sge.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
Index: linux-2.6/drivers/net/chelsio/sge.c
===
--- linux-2.6.orig/drivers/net/ch
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
net/ipv4/ipcomp.c | 26 +-
net/ipv6/ipcomp6.c | 26 +-
2 files changed, 26 insertions(+), 26 deletions(-)
Index: linux-2.6/net/ipv4/ipcomp.c
==
Convert DMA engine to use CPU_xx operations. This also removes the use of
local_t
from the dmaengine.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
drivers/dma/dmaengine.c | 38 ++
include/linux/dmaengine.h | 16 ++--
2 files chang
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
net/ipv4/tcp.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
Index: linux-2.6/net/ipv4/tcp.c
===
--- linux-2.6.orig/net/ipv4/tcp.c 2007-11-15 21:17:2
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/net/neighbour.h |6 +-
net/core/neighbour.c| 11 ++-
2 files changed, 7 insertions(+), 10 deletions(-)
Index: linux-2.6/include/net/neighbour.h
==
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
block/blktrace.c |8
1 file changed, 4 insertions(+), 4 deletions(-)
Index: linux-2.6/block/blktrace.c
===
--- linux-2.6.orig/block/blktrace.c 2007-11-15 21:17
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/nfs/iostat.h |8
fs/nfs/super.c |2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
Index: linux-2.6/fs/nfs/iostat.h
===
--- linux-2.6.orig/fs/nfs/iost
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
kernel/rcutorture.c |4 ++--
kernel/srcu.c | 20
2 files changed, 10 insertions(+), 14 deletions(-)
Index: linux-2.6/kernel/rcutorture.c
===
--
Also remove the useless zeroing after allocation. Allocpercpu already
zeroed the objects.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/xfs/xfs_mount.c | 24
1 file changed, 8 insertions(+), 16 deletions(-)
Index: linux-2.6/fs/xfs/xfs_mount.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
arch/x86/kernel/acpi/cstate.c |9 +
arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c |7 ---
drivers/acpi/processor_perflib.c |4 ++--
3 files changed, 11 insertions(+), 9 deletions(-)
Index: linux-2
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/genhd.h | 16 ++--
1 file changed, 6 insertions(+), 10 deletions(-)
Index: linux-2.6/include/linux/genhd.h
===
--- linux-2.6.orig/include/linux/genh
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
arch/ia64/kernel/crash.c |2 +-
drivers/base/cpu.c |2 +-
kernel/kexec.c |4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
Index: linux-2.6/arch/ia64/kernel/crash.c
===
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
kernel/workqueue.c | 27 ++-
1 file changed, 14 insertions(+), 13 deletions(-)
Index: linux-2.6/kernel/workqueue.c
===
--- linux-2.6.orig/kernel/workq
Typical use of per cpu memory for a small system of 8G 8p 4node is less than
64k per cpu memory. This is increasing rapidly for larger systems where we can
get up to 512k or 1M of memory used for cpu storage.
The maximum size allowed of the cpu area is 128MB of memory.
The cpu area is placed in r
Use the new cpu_alloc functionality to avoid per cpu arrays in struct zone.
This drastically reduces the size of struct zone for systems with a large
number of processors and allows placement of critical variables of struct
zone in one cacheline even on very large systems.
Another effect is that
Enable a simple virtual configuration with 32MB available per cpu so that
we do not use a static area on sparc64.
[Not tested. I have no sparc64]
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
arch/sparc64/Kconfig | 15 +++
arch/sparc64/kernel/vmlinux.lds.S
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
lib/percpu_counter.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
Index: linux-2.6/lib/percpu_counter.c
===
--- linux-2.6.orig/lib/percpu_counter.c 2007
Virtually map the cpu areas. This allows bigger maximum sizes and to only
populate the virtual mappings on demand.
In order to use the virtual mapping capability the arch must setup some
configuration variables in arch/xxx/Kconfig:
CONFIG_CPU_AREA_VIRTUAL to y
CONFIG_CPU_AREA_ORDER
to th
Remove the fields in kmem_cache_cpu that were used to cache data from
kmem_cache when they were in different cachelines. The cacheline that holds
the per cpu array pointer now also holds these values. We can cut down the
kmem_cache_cpu size to almost half.
The get_freepointer() and set_freepointer
64 bit:
Set up a cpu area that allows the use of up to 16MB for each processor.
Cpu memory use can grow a bit. F.e. if we assume that a pageset
occupies 64 bytes of memory and we have 3 zones in each of 1024 nodes
then we need 3 * 1k * 16k = 50 million pagesets or 3072 pagesets per
processor. This r
The core portion of the cpu allocator.
The per cpu allocator allows dynamic allocation of memory on all
processors simultaneously. A bitmap is used to track used areas.
The allocator implements tight packing to reduce the cache footprint
and increase speed since cacheline contention is typically no
Using cpu alloc removes the needs for the per cpu arrays in the kmem_cache
struct.
These could get quite big if we have to support systems of up to thousands of
cpus.
The use of alloc_percpu means that:
1. The size of kmem_cache for SMP configuration shrinks since we will only
need 1 pointer i
ACPI uses NR_CPUS in various loops and in some it accesses per cpu
data of processors that are not present(!) and that will never be present.
The pointers to per cpu data are typically not initialized for processors
that are not present. So we seem to be reading something here from offset 0
in memo
Currently the per cpu subsystem is not able to use the atomic capabilities
of the processors we have.
This adds new functionality that allows the optimizing of per cpu variable
handling. In particular it provides a simple way to exploit atomic operations
to avoid having to disable interrupts or a
This is a pretty early draft stage of the patch. It works on
x86_64 only. It's a bit massive so I'd like to have some feedback
before proceeding (and maybe some help)?
The support for other arches was not tested yet.
The patch establishes a new set of cpu operations that allow exploiting
single i
Simplify page cache zeroing of segments of pages through 3 functions
zero_user_segments(page, start1, end1, start2, end2)
Zeros two segments of the page. It takes the positions where to
start and end the zeroing, which avoids length calculations.
zero_user_segment(page, start, end)
Use page_cache_xxx in mm/rmap.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/rmap.c | 13 +
1 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 41ac397..d6a1771 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -188,9 +188,14 @@ static v
Use page_cache_xxx in mm/truncate.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/truncate.c | 35 ++-
1 files changed, 18 insertions(+), 17 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index bf8068d..8c3d32e 100644
--- a/mm/truncate.c
+
Use page_cache_xxx in fs/sync.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/sync.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/sync.c b/fs/sync.c
index 7cd005e..f30d7eb 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -260,8 +260,8 @@ int do_sync_map
Use page_cache_xxx in mm/mpage.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/mpage.c | 28
1 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/fs/mpage.c b/fs/mpage.c
index a5e1385..2843ed7 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -13
Use page_cache_xxx in fs/libfs.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/libfs.c | 12 +++-
1 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/fs/libfs.c b/fs/libfs.c
index 53b3dc5..e90f894 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -16,7 +16,8 @@ int si
Use page_cache_xxx in mm/filemap_xip.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/filemap_xip.c | 28 ++--
1 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
index ba6892d..5237e53 100644
--- a/mm/filemap
Use page_cache_xxx functions in fs/ext2
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/ext2/dir.c | 40 +++-
1 files changed, 23 insertions(+), 17 deletions(-)
diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c
index 2bf49d7..d72926f 100644
--- a/fs/ext
Use page_cache_xxx for fs/xfs
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/xfs/linux-2.6/xfs_aops.c | 55 ++
fs/xfs/linux-2.6/xfs_lrw.c |6 ++--
2 files changed, 32 insertions(+), 29 deletions(-)
diff --git a/fs/xfs/linux-2.6/xfs_aops
Use page_cache_xxx in drivers/block/rd.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
drivers/block/rd.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/block/rd.c b/drivers/block/rd.c
index 65150b5..e148b3b 100644
--- a/drivers/block/rd.c
+++ b
Allow the freeing of compound pages via pagevec.
In release_pages() we currently special case for compound pages in order to
be sure to always decrement the page count of the head page and not the
tail page. However that redirection to the head page is only necessary for
tail pages. So we can actu
compound_pages(page)-> Determines base pages of a compound page
compound_shift(page)-> Determine the page shift of a compound page
compound_size(page) -> Determine the size of a compound page
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/mm.h | 15
Use page_cache_xxx in fs/ext4
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/ext4/dir.c |3 ++-
fs/ext4/inode.c | 31 ---
2 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
index 3ab01c0..9d6cd51 100644
-
Provide an alternate definition for the page_cache_xxx(mapping, ...)
functions that can determine the current page size from the mapping
and generate the appropriate shifts, sizes and mask for the page cache
operations. Change the basic functions that allocate pages for the
page cache to be able to
The only change needed to enable Large Block I/O in XFS is to remove
the check for a too large blocksize ;-)
Signed-off-by: Dave Chinner <[EMAIL PROTECTED]>
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/xfs/xfs_mount.c | 13 -
1 files changed, 0 insertions(+), 13 delet
The simplest file system to use for large blocksize support is ramfs.
Note that ramfs does not use the lower layers (buffer I/O etc) so this
case is useful for initial testing of changes to large buffer size
support if one just wants to exercise the higher layers.
The patch adds the ability to sp
We may now have to zero and flush higher order pages. Implement
clear_mapping_page and flush_mapping_page to do that job. Replace
the flushing and clearing at some key locations for the pagecache.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/libfs.c |4 ++--
includ
We use the macros PAGE_CACHE_SIZE PAGE_CACHE_SHIFT PAGE_CACHE_MASK
and PAGE_CACHE_ALIGN in various places in the kernel. Many times
common operations like calculating the offset or the index are coded
using shifts and adds. This patch provides inline functions to
get the calculations accomplished w
mapping_set_gfp_mask only works on order 0 page cache operations. Reiserfs
can use 8k pages (order 1). Replace the mapping_set_gfp_mask with
mapping_setup to make this work properly.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/reiserfs/xattr.c |3 ++-
1 files changed, 2 insert
Before allowing different page orders it may be wise to get some checkpoints
in at various places. Checkpoints will help debugging whenever a wrong order
page shows up in a mapping. This helps when converting new filesystems to utilize
larger pages.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Compound pages of an arbitrary order may now be on the LRU and
may be reclaimed.
Adjust the counting in vmscan.c to count the number of base
pages.
Also change the active and inactive accounting to do the same.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/mm_inline.h |
Fix up readahead for large I/O operations.
Only calculate the readahead until the 2M boundary, then fall back to
one page.
Signed-off-by: Fengguang Wu <[EMAIL PROTECTED]>
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
===
---
i
Use page_cache_xxx in fs/buffer.c.
We have a special situation in set_bh_page() since reiserfs calls that
function before setting up the mapping. So retrieve the page size
from the page struct rather than the mapping.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/buffer.c | 110 ++
Use page_cache_xxx in fs/splice.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/splice.c | 27 ++-
1 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/fs/splice.c b/fs/splice.c
index c010a72..7910f32 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@
This patch enhances the handling of compound pages in the VM. It may also
be important for the antifrag patches that need to manage a set of
higher order free pages and also for other uses of compound pages.
For now it simplifies accounting for SLUB pages but the groundwork here is
important
Use the new dec/inc functions to simplify SLUB's accounting
of pages.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/slub.c | 13 -
1 files changed, 4 insertions(+), 9 deletions(-)
Index: linux-2.6/mm/slub.c
=
This adds support for a block size of up to 64k on any platform.
It enables the mounting of filesystems that have a larger blocksize
than the page size.
F.e. the following is possible on x86_64 and i386 that have only a 4k page
size:
mke2fs -b 16384 /dev/hdd2
mount /dev/hdd2 /media
ls -l /me
Use page_cache_xxx in fs/ext3
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/ext3/dir.c |3 ++-
fs/ext3/inode.c | 34 +-
2 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/fs/ext3/dir.c b/fs/ext3/dir.c
index c00723a..a65b5a7 100644
This is needed by slab defragmentation. The refcount of a page head
may be incremented to ensure that a compound page will not go away under us.
It also may be needed for defragmentation of higher order pages. The
moving of compound pages may require the establishment of a reference
before the use
Use page_cache_xxx in fs/reiserfs
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/reiserfs/file.c| 83 ++---
fs/reiserfs/inode.c | 33 ++--
fs/reiserfs/ioctl.c |2 +-
fs/reiserfs/stree.c
Add support for compound pages so that
inc_ and dec_xxx
will increment the ZVCs by the number of base pages of the compound page.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/vmstat.h |5 ++---
mm/vmstat.c| 18 +-
2 files changed,
Fix PAGE SIZE assumption in miscellaneous places.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
kernel/futex.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/futex.c b/kernel/futex.c
index a124250..c6102e8 100644
--- a/kernel/futex.c
+++ b/kernel/futex
Use page_cache_xxx in mm/fadvise.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/fadvise.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/fadvise.c b/mm/fadvise.c
index 0df4c89..804c2a9 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -79,8 +79,8 @
Convert the uses of PAGE_CACHE_xxx to use page_cache_xxx instead.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/filemap.c | 56
1 files changed, 28 insertions(+), 28 deletions(-)
Index: linux-2.6/mm/filemap.c
===
[An update before the Kernel Summit because of the numerous requests that I
have had for this patchset. Please speak up if you feel that we need something
like this.]
This patchset modifies the Linux kernel so that larger block sizes than
page size can be supported. Larger block sizes are handled
Use page_cache_xxx in mm/migrate.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/migrate.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 37c73b9..4949927 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -195,7 +195,7 @@ st
Use page_cache_xxx in mm/page-writeback.c
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/page-writeback.c |6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 63512a9..ebe76e3 100644
--- a/mm/page-writeback.c
++
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/rmap.c |8
1 file changed, 4 insertions(+), 4 deletions(-)
Index: linux-2.6.22-rc4-mm2/mm/rmap.c
===
--- linux-2.6.22-rc4-mm2.orig/mm/rmap.c 2007-06-14 10:35:4
Allow the freeing of compound pages via pagevec.
In release_pages() we currently special case for compound pages in order to
be sure to always decrement the page count of the head page and not the
tail page. However that redirection to the head page is only necessary for
tail pages. So use PageTai
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/mpage.c | 28
1 file changed, 16 insertions(+), 12 deletions(-)
Index: vps/fs/mpage.c
===
--- vps.orig/fs/mpage.c 2007-06-11 22:33:07.000
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/migrate.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: vps/mm/migrate.c
===
--- vps.orig/mm/migrate.c 2007-06-11 15:56:37.0 -0700
+++ vps/m
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/fadvise.c |8
1 file changed, 4 insertions(+), 4 deletions(-)
Index: vps/mm/fadvise.c
===
--- vps.orig/mm/fadvise.c 2007-06-04 17:57:25.0 -0700
+
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/ext2/dir.c | 40 +++-
1 file changed, 23 insertions(+), 17 deletions(-)
Index: linux-2.6.22-rc4-mm2/fs/ext2/dir.c
===
--- linux-2.6.22
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/reiserfs/bitmap.c |7 ++-
fs/reiserfs/file.c|5 +++--
fs/reiserfs/inode.c | 37 ++---
fs/reiserfs/ioctl.c |2 +-
fs/reiserfs/stree.c |
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
kernel/container.c |4 ++--
kernel/futex.c |2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
Index: vps/kernel/futex.c
===
--- vps.orig/kernel/futex.c 20
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
drivers/block/loop.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
Index: linux-2.6.22-rc4-mm2/drivers/block/loop.c
===
--- linux-2.6.22-rc4-mm2.orig/d
Add support for compound pages so that
inc_ and dec_xxx
will increment the ZVCs by the number of base pages of the compound page.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/vmstat.h |5 ++---
mm/vmstat.c| 18 +-
2 files changed, 15 i
This patch enhances the handling of compound pages in the VM. It may also
be important for the antifrag patches that need to manage a set of
higher order free pages and also for other uses of compound pages.
For now it simplifies accounting for SLUB pages but the groundwork here is
important
Before we start allowing different page orders we better get checkpoints in
at various places in the VM. Checkpoints will help debugging whenever a
wrong order page shows up in a mapping. This will be helpful for converting
new filesystems to utilize larger pages.
Signed-off-by: Christoph Lameter
Provide an alternate definition for the page_cache_xxx(mapping, ...)
functions that can determine the current page size from the mapping
and generate the appropriate shifts, sizes and mask for the page cache
operations. Change the basic functions that allocate pages for the
page cache to be able to
We may now have to zero and flush higher order pages. Implement
clear_mapping_page and flush_mapping_page to do that job. Replace
the flushing and clearing at some key locations for the pagecache.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/libfs.c |4 ++--
inclu
The simplest file system to use for large blocksize support is ramfs.
Add a mount parameter that specifies the page order of the pages
that ramfs should use.
Note that ramfs does not use the lower layers (buffer I/O etc) so this
case is useful for initial testing of changes to large buffer size
sup
This adds support for a block size of up to 64k on any platform.
It enables the mounting of filesystems that have a larger blocksize
than the page size.
F.e. the following is possible on x86_64 and i386 that have only a 4k page
size.
mke2fs -b 16384 /dev/hdd2
mount /dev/hdd2 /media
ls -l /me
Fix up readahead for large I/O operations.
Only calculate the readahead until the 2M boundary, then fall back to
one page.
Signed-off-by: Fengguang Wu <[EMAIL PROTECTED]>
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
===
---
i
mapping_set_gfp_mask only works on order 0 page cache operations. Reiserfs
can use 8k pages (order 1). Replace the mapping_set_gfp_mask with
mapping_setup to make this work properly.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
fs/reiserfs/xattr.c |3 ++-
1 file changed, 2 insert