[PATCH 1/3] arm64: armv8_deprecated: Fix swp_handler() signal generation
arm64_notify_segfault() was written to decide on the si_code for the
assembly emulation in swp_handler(), but was also used to generate
signals for failed access_ok() checks and unaligned instructions.  When
access_ok() fails, there is no need to search for the offending address
in the VMA space.  Instead, simply set the error to SIGSEGV with si_code
SEGV_ACCERR.

Change the return code from emulate_swpX() when there is an unaligned
pointer so the caller can differentiate it from -EFAULT.  It is
unnecessary to search the VMAs in the case of an unaligned pointer.
This change uses SIGSEGV and SEGV_ACCERR instead of SIGBUS to keep with
what was returned before.

Fixes: bd35a4adc413 ("arm64: Port SWP/SWPB emulation support from arm")
Signed-off-by: Liam R. Howlett
---
 arch/arm64/kernel/armv8_deprecated.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 0e86e8b9cedd..f424082b3455 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -324,7 +324,7 @@ static int emulate_swpX(unsigned int address, unsigned int *data,
 	if ((type != TYPE_SWPB) && (address & 0x3)) {
 		/* SWP to unaligned address not permitted */
 		pr_debug("SWP instruction on unaligned pointer!\n");
-		return -EFAULT;
+		return -ENXIO;
 	}
 
 	while (1) {
@@ -406,15 +406,17 @@ static int swp_handler(struct pt_regs *regs, u32 instr)
 	user_ptr = (const void __user *)(unsigned long)(address & ~3);
 	if (!access_ok(user_ptr, 4)) {
 		pr_debug("SWP{B} emulation: access to 0x%08x not allowed!\n",
			 address);
-		goto fault;
+		goto e_access;
 	}
 
 	res = emulate_swpX(address, &data, type);
-	if (res == -EFAULT)
-		goto fault;
-	else if (res == 0)
+	if (!res)
 		regs->user_regs.regs[destreg] = data;
+	else if (res == -EFAULT)
+		goto e_fault;
+	else if (res == -ENXIO)	/* Unaligned pointer */
+		goto e_align;
 
 ret:
 	if (type == TYPE_SWPB)
@@ -428,10 +430,14 @@ static int swp_handler(struct pt_regs *regs, u32 instr)
 	arm64_skip_faulting_instruction(regs, 4);
 	return 0;
 
-fault:
+e_fault:
 	pr_debug("SWP{B} emulation: access caused memory abort!\n");
 	arm64_notify_segfault(address);
+	return 0;
 
+e_align:
+e_access:
+	force_signal_inject(SIGSEGV, SEGV_ACCERR, address, 0);
 	return 0;
 }
-- 
2.30.2
[PATCH 2/3] arm64: signal: sigreturn() and rt_sigreturn() sometimes return the wrong signal
arm64_notify_segfault() was used to force a SIGSEGV in all error cases
in sigreturn() and rt_sigreturn() to avoid writing a new signal handler.
There is now a better handler to use which neither searches the VMA
address space nor returns a slightly incorrect error code.

Restore the older and correct si_code of SI_KERNEL by using
arm64_notify_die().  In the case of !access_ok(), simply return SIGSEGV
with si_code SEGV_ACCERR.

This change requires exporting arm64_notify_die() via the arm64 traps.h
header.

Fixes: f71016a8a8c5 ("arm64: signal: Call arm64_notify_segfault when failing to deliver signal")
Signed-off-by: Liam R. Howlett
---
 arch/arm64/include/asm/traps.h |  2 ++
 arch/arm64/kernel/signal.c     |  8 ++++++--
 arch/arm64/kernel/signal32.c   | 18 ++++++++++++++----
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
index 54f32a0675df..9b76144fcba6 100644
--- a/arch/arm64/include/asm/traps.h
+++ b/arch/arm64/include/asm/traps.h
@@ -29,6 +29,8 @@ void arm64_notify_segfault(unsigned long addr);
 void arm64_force_sig_fault(int signo, int code, unsigned long far, const char *str);
 void arm64_force_sig_mceerr(int code, unsigned long far, short lsb, const char *str);
 void arm64_force_sig_ptrace_errno_trap(int errno, unsigned long far, const char *str);
+void arm64_notify_die(const char *str, struct pt_regs *regs, int signo,
+		      int sicode, unsigned long far, int err);
 
 /*
  * Move regs->pc to next instruction and do necessary setup before it
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 6237486ff6bb..9fde6dc760c3 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -544,7 +544,7 @@ SYSCALL_DEFINE0(rt_sigreturn)
 	frame = (struct rt_sigframe __user *)regs->sp;
 
 	if (!access_ok(frame, sizeof (*frame)))
-		goto badframe;
+		goto e_access;
 
 	if (restore_sigframe(regs, frame))
 		goto badframe;
@@ -555,7 +555,11 @@ SYSCALL_DEFINE0(rt_sigreturn)
 	return regs->regs[0];
 
 badframe:
-	arm64_notify_segfault(regs->sp);
+	arm64_notify_die("Bad frame", regs, SIGSEGV, SI_KERNEL, regs->sp, 0);
+	return 0;
+
+e_access:
+	force_signal_inject(SIGSEGV, SEGV_ACCERR, regs->sp, 0);
 	return 0;
 }
 
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 2f507f565c48..af8b6c0eb8aa 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -248,7 +248,7 @@ COMPAT_SYSCALL_DEFINE0(sigreturn)
 	frame = (struct compat_sigframe __user *)regs->compat_sp;
 
 	if (!access_ok(frame, sizeof (*frame)))
-		goto badframe;
+		goto e_access;
 
 	if (compat_restore_sigframe(regs, frame))
 		goto badframe;
@@ -256,7 +256,12 @@ COMPAT_SYSCALL_DEFINE0(sigreturn)
 	return regs->regs[0];
 
 badframe:
-	arm64_notify_segfault(regs->compat_sp);
+	arm64_notify_die("Bad frame", regs, SIGSEGV, SI_KERNEL,
+			 regs->compat_sp, 0);
+	return 0;
+
+e_access:
+	force_signal_inject(SIGSEGV, SEGV_ACCERR, regs->compat_sp, 0);
 	return 0;
 }
 
@@ -279,7 +284,7 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
 	frame = (struct compat_rt_sigframe __user *)regs->compat_sp;
 
 	if (!access_ok(frame, sizeof (*frame)))
-		goto badframe;
+		goto e_access;
 
 	if (compat_restore_sigframe(regs, &frame->sig))
 		goto badframe;
@@ -290,7 +295,12 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
 	return regs->regs[0];
 
 badframe:
-	arm64_notify_segfault(regs->compat_sp);
+	arm64_notify_die("Bad frame", regs, SIGSEGV, SI_KERNEL,
+			 regs->compat_sp, 0);
+	return 0;
+
+e_access:
+	force_signal_inject(SIGSEGV, SEGV_ACCERR, regs->compat_sp, 0);
 	return 0;
 }
-- 
2.30.2
[PATCH 3/3] arch/arm64/kernel/traps: Use find_vma_intersection() in traps for setting si_code
find_vma() will continue to search upwards until the end of the virtual
memory space.  This means the si_code would almost never be set to
SEGV_MAPERR even when the address falls outside of any VMA.  The result
is that the si_code is not reliable as it may or may not be set to the
correct result, depending on where the address falls in the address
space.

Using find_vma_intersection() allows for what is intended by only
returning a VMA if it falls within the range provided, in this case a
window of 1.

Fixes: bd35a4adc413 ("arm64: Port SWP/SWPB emulation support from arm")
Signed-off-by: Liam R. Howlett
---
 arch/arm64/kernel/traps.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index a05d34f0e82a..a44007904a64 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -383,9 +383,10 @@ void force_signal_inject(int signal, int code, unsigned long address, unsigned i
 void arm64_notify_segfault(unsigned long addr)
 {
 	int code;
+	unsigned long ut_addr = untagged_addr(addr);
 
 	mmap_read_lock(current->mm);
-	if (find_vma(current->mm, untagged_addr(addr)) == NULL)
+	if (find_vma_intersection(current->mm, ut_addr, ut_addr + 1) == NULL)
 		code = SEGV_MAPERR;
 	else
 		code = SEGV_ACCERR;
-- 
2.30.2
Re: [PATCH] arch/arm64/kernel/traps: Use find_vma_intersection() in traps for setting si_code
* Catalin Marinas [210413 14:00]:
> On Tue, Apr 13, 2021 at 04:52:34PM +0000, Liam Howlett wrote:
> > * Catalin Marinas [210412 13:44]:
> > > On Wed, Apr 07, 2021 at 03:11:06PM +0000, Liam Howlett wrote:
> > > > find_vma() will continue to search upwards until the end of the virtual
> > > > memory space.  This means the si_code would almost never be set to
> > > > SEGV_MAPERR even when the address falls outside of any VMA.  The result
> > > > is that the si_code is not reliable as it may or may not be set to the
> > > > correct result, depending on where the address falls in the address
> > > > space.
> > > >
> > > > Using find_vma_intersection() allows for what is intended by only
> > > > returning a VMA if it falls within the range provided, in this case a
> > > > window of 1.
> > > >
> > > > Signed-off-by: Liam R. Howlett
> > > > ---
> > > >  arch/arm64/kernel/traps.c | 3 ++-
> > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> > > > index a05d34f0e82a..a44007904a64 100644
> > > > --- a/arch/arm64/kernel/traps.c
> > > > +++ b/arch/arm64/kernel/traps.c
> > > > @@ -383,9 +383,10 @@ void force_signal_inject(int signal, int code, unsigned long address, unsigned i
> > > >  void arm64_notify_segfault(unsigned long addr)
> > > >  {
> > > >  	int code;
> > > > +	unsigned long ut_addr = untagged_addr(addr);
> > > >
> > > >  	mmap_read_lock(current->mm);
> > > > -	if (find_vma(current->mm, untagged_addr(addr)) == NULL)
> > > > +	if (find_vma_intersection(current->mm, ut_addr, ut_addr + 1) == NULL)
> > > >  		code = SEGV_MAPERR;
> > > >  	else
> > > >  		code = SEGV_ACCERR;
> [...]
> > > I don't think your change is entirely correct either. We can have a
> > > fault below the vma of a stack (with VM_GROWSDOWN) and
> > > find_vma_intersection() would return NULL but it should be a SEGV_ACCERR
> > > instead.
> >
> > I'm pretty sure I am missing something.  From what you said above, I
> > think this means that there can be a user cache fault below the stack
> > which should notify the user application that they are not allowed to
> > expand the stack by sending SEGV_ACCERR in the si_code?  Is this
> > expected behaviour or am I missing a code path to this function?
>
> My point was that find_vma() may return a valid vma where addr < vm_end
> but also addr < vm_start. It's the responsibility of the caller to check
> that that vma can be expanded (VM_GROWSDOWN) and we do something like
> this in __do_page_fault(). find_vma_intersection(), OTOH, requires
> addr >= vm_start.

Right.  The find_vma() interface is not clear from the function name;
returning a VMA that does not include the address of interest is
surprising.  I think this is why we ended up with the bug in the first
place.

> If we hit this case (addr < vm_start), normally we'd first need to check
> whether it's expandable and, if not, return MAPERR. If it's expandable,
> it should be ACCERR since something else caused the fault.
>
> Now, I think at least for user_cache_maint_handler(), we can assume that
> __do_page_fault() handled any expansion already, so we don't need to
> check it here. In this case, your find_vma_intersection() check should
> work.
>
> Are there other cases where we invoke arm64_notify_segfault() without a
> prior fault? I think in swp_handler() we can bail out early before we
> even attempted the access so we may report MAPERR but ACCERR is a better
> indication.

swp_handler() is also buggy.  It currently gets ACCERR as long as the
address being checked is below mm->highest_vm_end.  If access_ok()
fails, it should return ACCERR and not search the VMAs for the address
at all.

...

> Also in sys_rt_sigreturn() we always call it as
> arm64_notify_segfault(regs->sp). I'm not sure that's correct in all
> cases, see restore_altstack().

Ditto for sys_rt_sigreturn() and sys_sigreturn(); they both suffer the
same bug as swp_handler() outlined above.

In the case of restore_sigframe() or restore_altstack() failing, it
seems that the signal shouldn't depend on where the address falls within
the VMA at all.  Should the signal still be SIGSEGV or something else?
Going by the comments, I would have thought SIGBUS with an si_code of
BUS_ADRALN?

> I guess this code ne
Re: [PATCH] arch/arm64/kernel/traps: Use find_vma_intersection() in traps for setting si_code
* Catalin Marinas [210412 13:44]:
> On Wed, Apr 07, 2021 at 03:11:06PM +0000, Liam Howlett wrote:
> > find_vma() will continue to search upwards until the end of the virtual
> > memory space.  This means the si_code would almost never be set to
> > SEGV_MAPERR even when the address falls outside of any VMA.  The result
> > is that the si_code is not reliable as it may or may not be set to the
> > correct result, depending on where the address falls in the address
> > space.
> >
> > Using find_vma_intersection() allows for what is intended by only
> > returning a VMA if it falls within the range provided, in this case a
> > window of 1.
> >
> > Signed-off-by: Liam R. Howlett
> > ---
> >  arch/arm64/kernel/traps.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> > index a05d34f0e82a..a44007904a64 100644
> > --- a/arch/arm64/kernel/traps.c
> > +++ b/arch/arm64/kernel/traps.c
> > @@ -383,9 +383,10 @@ void force_signal_inject(int signal, int code, unsigned long address, unsigned i
> >  void arm64_notify_segfault(unsigned long addr)
> >  {
> >  	int code;
> > +	unsigned long ut_addr = untagged_addr(addr);
> >
> >  	mmap_read_lock(current->mm);
> > -	if (find_vma(current->mm, untagged_addr(addr)) == NULL)
> > +	if (find_vma_intersection(current->mm, ut_addr, ut_addr + 1) == NULL)
> >  		code = SEGV_MAPERR;
> >  	else
> >  		code = SEGV_ACCERR;

Thank you for taking the time to thoroughly review this patch.

> I don't think your change is entirely correct either. We can have a
> fault below the vma of a stack (with VM_GROWSDOWN) and
> find_vma_intersection() would return NULL but it should be a SEGV_ACCERR
> instead.

I'm pretty sure I am missing something.  From what you said above, I
think this means that there can be a user cache fault below the stack
which should notify the user application that they are not allowed to
expand the stack by sending SEGV_ACCERR in the si_code?  Is this
expected behaviour or am I missing a code path to this function?

> Maybe this should employ similar checks as __do_page_fault() (with
> expand_stack() and VM_GROWSDOWN).

You mean the code needs to detect the direction the stack grows and
check if this is an attempt to expand the stack for both cases?

Thanks,
Liam
Re: [PATCH] arch/m68k/kernel/sys_m68k: Add missing mmap_read_lock() to sys_cacheflush()
Forgot the fixes line.

* Liam Howlett [210407 16:00]:
> When the superuser flushes the entire cache, the mmap_read_lock() is not
> taken, but mmap_read_unlock() is called.  Add the missing
> mmap_read_lock() call.
>
> Signed-off-by: Liam R. Howlett
> ---
>  arch/m68k/kernel/sys_m68k.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
> index 1c235d8f53f3..f55bdcb8e4f1 100644
> --- a/arch/m68k/kernel/sys_m68k.c
> +++ b/arch/m68k/kernel/sys_m68k.c
> @@ -388,6 +388,8 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
>  		ret = -EPERM;
>  		if (!capable(CAP_SYS_ADMIN))
>  			goto out;
> +
> +		mmap_read_lock(current->mm);
>  	} else {
>  		struct vm_area_struct *vma;
>
> --
> 2.30.0

From aeee71b15f54426f02f41a4408afbd0b5acab7ec Mon Sep 17 00:00:00 2001
From: "Liam R. Howlett"
Date: Wed, 7 Apr 2021 11:39:06 -0400
Subject: [PATCH] arch/m68k/kernel/sys_m68k: Add missing mmap_read_lock() to
 sys_cacheflush()

When the superuser flushes the entire cache, the mmap_read_lock() is not
taken, but mmap_read_unlock() is called.  Add the missing
mmap_read_lock() call.

Fixes: cd2567b6850b ("m68k: call find_vma with the mmap_sem held in sys_cacheflush()")
Signed-off-by: Liam R. Howlett
---
 arch/m68k/kernel/sys_m68k.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
index 1c235d8f53f3..f55bdcb8e4f1 100644
--- a/arch/m68k/kernel/sys_m68k.c
+++ b/arch/m68k/kernel/sys_m68k.c
@@ -388,6 +388,8 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
 		ret = -EPERM;
 		if (!capable(CAP_SYS_ADMIN))
 			goto out;
+
+		mmap_read_lock(current->mm);
 	} else {
 		struct vm_area_struct *vma;
 
-- 
2.30.0
[PATCH] arch/m68k/kernel/sys_m68k: Add missing mmap_read_lock() to sys_cacheflush()
When the superuser flushes the entire cache, the mmap_read_lock() is not
taken, but mmap_read_unlock() is called.  Add the missing
mmap_read_lock() call.

Signed-off-by: Liam R. Howlett
---
 arch/m68k/kernel/sys_m68k.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
index 1c235d8f53f3..f55bdcb8e4f1 100644
--- a/arch/m68k/kernel/sys_m68k.c
+++ b/arch/m68k/kernel/sys_m68k.c
@@ -388,6 +388,8 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
 		ret = -EPERM;
 		if (!capable(CAP_SYS_ADMIN))
 			goto out;
+
+		mmap_read_lock(current->mm);
 	} else {
 		struct vm_area_struct *vma;
 
-- 
2.30.0
[PATCH] arch/arm64/kernel/traps: Use find_vma_intersection() in traps for setting si_code
find_vma() will continue to search upwards until the end of the virtual
memory space.  This means the si_code would almost never be set to
SEGV_MAPERR even when the address falls outside of any VMA.  The result
is that the si_code is not reliable as it may or may not be set to the
correct result, depending on where the address falls in the address
space.

Using find_vma_intersection() allows for what is intended by only
returning a VMA if it falls within the range provided, in this case a
window of 1.

Signed-off-by: Liam R. Howlett
---
 arch/arm64/kernel/traps.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index a05d34f0e82a..a44007904a64 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -383,9 +383,10 @@ void force_signal_inject(int signal, int code, unsigned long address, unsigned i
 void arm64_notify_segfault(unsigned long addr)
 {
 	int code;
+	unsigned long ut_addr = untagged_addr(addr);
 
 	mmap_read_lock(current->mm);
-	if (find_vma(current->mm, untagged_addr(addr)) == NULL)
+	if (find_vma_intersection(current->mm, ut_addr, ut_addr + 1) == NULL)
 		code = SEGV_MAPERR;
 	else
 		code = SEGV_ACCERR;
-- 
2.30.0
Re: [PATCH] alpha/kernel/traps: Use find_vma_intersection() in traps for setting si_code
* Michel Lespinasse [210401 16:25]:
> You are correct that find_vma is insufficient for what's intended
> here, and that find_vma_intersection fixes it.
>
> I'll let the arch maintainers speak of what the consequences of the
> changed si_code would be - the bug has been here so long, that I would
> worry some userspace might have come to depend on it (the old "common
> law feature" issue).

Fair point.  Is this a valid concern given that the result already
varies, although uncommonly, based on the address passed in?  A user
would see different behaviour depending on where the address lands in
the virtual address space.

> Just a concern I have, with 0 evidence behind it, so I hope it turns
> out not to be an actual issue.
>
> Acked-by: Michel Lespinasse
>
> On Thu, Apr 1, 2021 at 12:51 PM Liam Howlett wrote:
> >
> > find_vma() will continue to search upwards until the end of the virtual
> > memory space.  This means the si_code would almost never be set to
> > SEGV_MAPERR even when the address falls outside of any VMA.
> >
> > Using find_vma_intersection() allows for what is intended by only
> > returning a VMA if it falls within the range provided, in this case a
> > window of 1.
> >
> > Signed-off-by: Liam R. Howlett
> > ---
> >  arch/alpha/kernel/traps.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
> > index 921d4b6e4d95..7f51386c06d0 100644
> > --- a/arch/alpha/kernel/traps.c
> > +++ b/arch/alpha/kernel/traps.c
> > @@ -957,8 +957,10 @@ do_entUnaUser(void __user * va, unsigned long opcode,
> >                 si_code = SEGV_ACCERR;
> >         else {
> >                 struct mm_struct *mm = current->mm;
> > +               unsigned long addr = (unsigned long)va;
> > +
> >                 mmap_read_lock(mm);
> > -               if (find_vma(mm, (unsigned long)va))
> > +               if (find_vma_intersection(mm, addr, addr + 1))
> >                         si_code = SEGV_ACCERR;
> >                 else
> >                         si_code = SEGV_MAPERR;
> > --
> > 2.30.0
>
> --
> Michel "Walken" Lespinasse
> A program is never fully debugged until the last user dies.
[PATCH] alpha/kernel/traps: Use find_vma_intersection() in traps for setting si_code
find_vma() will continue to search upwards until the end of the virtual
memory space.  This means the si_code would almost never be set to
SEGV_MAPERR even when the address falls outside of any VMA.

Using find_vma_intersection() allows for what is intended by only
returning a VMA if it falls within the range provided, in this case a
window of 1.

Signed-off-by: Liam R. Howlett
---
 arch/alpha/kernel/traps.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
index 921d4b6e4d95..7f51386c06d0 100644
--- a/arch/alpha/kernel/traps.c
+++ b/arch/alpha/kernel/traps.c
@@ -957,8 +957,10 @@ do_entUnaUser(void __user * va, unsigned long opcode,
 		si_code = SEGV_ACCERR;
 	else {
 		struct mm_struct *mm = current->mm;
+		unsigned long addr = (unsigned long)va;
+
 		mmap_read_lock(mm);
-		if (find_vma(mm, (unsigned long)va))
+		if (find_vma_intersection(mm, addr, addr + 1))
 			si_code = SEGV_ACCERR;
 		else
 			si_code = SEGV_MAPERR;
-- 
2.30.0
[PATCH] i915_vma: Rename vma_lookup to i915_vma_lookup
Use the i915 prefix to avoid a name collision with the future
vma_lookup() in mm.

Signed-off-by: Liam R. Howlett
Reviewed-by: Matthew Wilcox (Oracle)
---
 drivers/gpu/drm/i915/i915_vma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index caa9b041616b..ee0028c697f6 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -230,7 +230,7 @@ vma_create(struct drm_i915_gem_object *obj,
 }
 
 static struct i915_vma *
-vma_lookup(struct drm_i915_gem_object *obj,
+i915_vma_lookup(struct drm_i915_gem_object *obj,
 	   struct i915_address_space *vm,
 	   const struct i915_ggtt_view *view)
 {
@@ -278,7 +278,7 @@ i915_vma_instance(struct drm_i915_gem_object *obj,
 	GEM_BUG_ON(!atomic_read(&vm->open));
 
 	spin_lock(&obj->vma.lock);
-	vma = vma_lookup(obj, vm, view);
+	vma = i915_vma_lookup(obj, vm, view);
 	spin_unlock(&obj->vma.lock);
 
 	/* vma_create() will resolve the race if another creates the vma */
-- 
2.30.0
[PATCH v3] mm/mmap: Don't unlock VMAs in remap_file_pages()
Since this call uses MAP_FIXED, do_mmap() will munlock the necessary
range.  There is also an error in the loop test expression: it always
evaluates as false, so the loop body never executes.

Signed-off-by: Liam R. Howlett
Acked-by: Hugh Dickins
---
 mm/mmap.c | 18 +-----------------
 1 file changed, 1 insertion(+), 17 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index dc7206032387c..e22b048733269 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3025,25 +3025,9 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 	flags &= MAP_NONBLOCK;
 	flags |= MAP_SHARED | MAP_FIXED | MAP_POPULATE;
 
-	if (vma->vm_flags & VM_LOCKED) {
-		struct vm_area_struct *tmp;
+	if (vma->vm_flags & VM_LOCKED)
 		flags |= MAP_LOCKED;
 
-		/* drop PG_Mlocked flag for over-mapped range */
-		for (tmp = vma; tmp->vm_start >= start + size;
-				tmp = tmp->vm_next) {
-			/*
-			 * Split pmd and munlock page on the border
-			 * of the range.
-			 */
-			vma_adjust_trans_huge(tmp, start, start + size, 0);
-
-			munlock_vma_pages_range(tmp,
-					max(tmp->vm_start, start),
-					min(tmp->vm_end, start + size));
-		}
-	}
-
 	file = get_file(vma->vm_file);
 	ret = do_mmap(vma->vm_file, start, size, prot, flags, pgoff,
		      &populate, NULL);
-- 
2.30.0
Re: [rcu:willy-maple 189/202] mm/mmap.c:2830:18: warning: variable 'ma_lock' set but not used
* kernel test robot [210202 22:08]:
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git willy-maple
> head:   7e346d2845b4bd77663394f39fa70456e0084c86
> commit: e40a951e09ed0e66dbd646f938df19c876915b9d [189/202] mm: Remove vma linked list.
> config: alpha-defconfig (attached as .config)
> compiler: alpha-linux-gcc (GCC) 9.3.0
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git/commit/?id=e40a951e09ed0e66dbd646f938df19c876915b9d
>         git remote add rcu https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
>         git fetch --no-tags rcu willy-maple
>         git checkout e40a951e09ed0e66dbd646f938df19c876915b9d
>         # save the attached .config to linux build tree
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=alpha
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot

Hello!

Yes, this can be safely dropped.  I will fix this in my next patch
revision.

Thank you,
Liam

> All warnings (new ones prefixed by >>):
>
>    mm/mmap.c:2366:5: warning: no previous prototype for 'do_mas_align_munmap' [-Wmissing-prototypes]
>     2366 | int do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
>          |     ^~~
>    mm/mmap.c: In function '__do_sys_remap_file_pages':
> >> mm/mmap.c:2830:18: warning: variable 'ma_lock' set but not used [-Wunused-but-set-variable]
>     2830 |  struct ma_state ma_lock;
>          |                  ^~~
>
>
> vim +/ma_lock +2830 mm/mmap.c
>
>   2824
>   2825		struct mm_struct *mm = current->mm;
>   2826		struct vm_area_struct *vma;
>   2827		unsigned long populate = 0;
>   2828		unsigned long ret = -EINVAL;
>   2829		struct file *file;
> > 2830		struct ma_state ma_lock;
>   2831		MA_STATE(mas, &mm->mm_mt, start, start);
>   2832
>   2833		pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.rst.\n",
>   2834			     current->comm, current->pid);
>   2835
>   2836		if (prot)
>   2837			return ret;
>   2838
>   2839		start = start & PAGE_MASK;
>   2840		size = size & PAGE_MASK;
>   2841		if (start + size <= start)
>   2842			return ret;
>   2843
>   2844		/* Does pgoff wrap? */
>   2845		if (pgoff + (size >> PAGE_SHIFT) < pgoff)
>   2846			return ret;
>   2847
>   2848		if (mmap_write_lock_killable(mm))
>   2849			return -EINTR;
>   2850
>   2851		mas_set(&mas, start);
>   2852		vma = mas_walk(&mas);
>   2853		ma_lock = mas;
>   2854
>   2855		if (!vma || !(vma->vm_flags & VM_SHARED))
>   2856			goto out;
>   2857
>   2858		if (!vma->vm_file)
>   2859			goto out;
>   2860
>   2861		if (start + size > vma->vm_end) {
>   2862			struct vm_area_struct *prev, *next;
>   2863
>   2864			prev = vma;
>   2865			mas_for_each(&mas, next, start + size) {
>   2866				/* hole between vmas ? */
>   2867				if (next->vm_start != prev->vm_end)
>   2868					goto out;
>   2869
>   2870				if (next->vm_file != vma->vm_file)
>   2871					goto out;
>   2872
>   2873				if (next->vm_flags != vma->vm_flags)
>   2874					goto out;
>   2875
>   2876				if (start + size <= next->vm_end)
>   2877					break;
>   2878
>   2879				prev = next;
>   2880			}
>   2881
>   2882			if (!next)
>   2883				goto out;
>   2884		}
>   2885
>   2886		prot |= vma->vm_flags & VM_READ ? PROT_READ : 0;
>   2887		prot |= vma->vm_flags & VM_WRITE ? PROT_WRITE : 0;
>   2888		prot |= vma->vm_flags & VM_EXEC ? PROT_EXEC : 0;
>   2889
>   2890		flags &= MAP_NONBLOCK;
>   2891		flags |= MAP_SHARED | MAP_FIXED | MAP_POPULATE;
>   2892
>   2893		file = get_file(vma->vm_file);
>
Re: [rcu:willy-maple 134/202] mm/mmap.c:2919 do_brk_munmap() error: we previously assumed 'vma->anon_vma' could be null (see line 2884)
Hello,

These are two valid issues.  I had noticed one, but both need to be
addressed.

Thank you Dan.

Regards,
Liam

* Dan Carpenter [210203 08:15]:
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git willy-maple
> head:   7e346d2845b4bd77663394f39fa70456e0084c86
> commit: 5b05486ddd0127e852616630ef547dba96a7abad [134/202] mm/mmap: Change do_brk_flags() to expand existing VMA and add do_brk_munmap()
> config: x86_64-randconfig-m001-20210202 (attached as .config)
> compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot
> Reported-by: Dan Carpenter
>
> smatch warnings:
> mm/mmap.c:2919 do_brk_munmap() error: we previously assumed 'vma->anon_vma' could be null (see line 2884)
> mm/mmap.c:3039 do_brk_flags() error: we previously assumed 'vma->anon_vma' could be null (see line 2980)
>
> vim +2919 mm/mmap.c
>
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2855  static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2856  			 unsigned long newbrk, unsigned long oldbrk,
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2857  			 struct list_head *uf)
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2858  {
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2859  	struct mm_struct *mm = vma->vm_mm;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2860  	struct vm_area_struct unmap;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2861  	unsigned long unmap_pages;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2862  	int ret = 1;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2863  
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2864  	arch_unmap(mm, newbrk, oldbrk);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2865  
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2866  	if (likely(vma->vm_start >= newbrk)) { // remove entire mapping(s)
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2867  		mas_set(mas, newbrk);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2868  		if (vma->vm_start != newbrk)
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2869  			mas_reset(mas); // cause a re-walk for the first overlap.
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2870  		ret = __do_munmap(mm, newbrk, oldbrk - newbrk, uf, true);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2871  		goto munmap_full_vma;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2872  	}
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2873  
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2874  	vma_init(&unmap, mm);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2875  	unmap.vm_start = newbrk;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2876  	unmap.vm_end = oldbrk;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2877  	ret = userfaultfd_unmap_prep(&unmap, newbrk, oldbrk, uf);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2878  	if (ret)
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2879  		return ret;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2880  	ret = 1;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2881  
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2882  	// Change the oldbrk of vma to the newbrk of the munmap area
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2883  	vma_adjust_trans_huge(vma, vma->vm_start, newbrk, 0);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21 @2884  	if (vma->anon_vma) {
>                                                  	    ^
> This code assumes "vma->anon_vma" can be NULL.
>
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2885  		anon_vma_lock_write(vma->anon_vma);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2886  		anon_vma_interval_tree_pre_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2887  	}
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2888  
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2889  	vma->vm_end = newbrk;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2890  	if (vma_mas_remove(&unmap, mas))
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2891  		goto mas_store_fail;
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2892  
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2893  	vmacache_invalidate(vma->vm_mm);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2894  	if (vma->anon_vma) {
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2895  		anon_vma_interval_tree_post_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett 2020-09-21  2896
NFS Killable tasks request comments on patch
Signed-off-by: Liam R. Howlett <[EMAIL PROTECTED]>

This patch builds on willy's TASK_INTERRUPTIBLE and my own TASK_KILLABLE
patches that are currently in the mm branch (see
http://lkml.org/lkml/2007/10/18/423 and http://lkml.org/lkml/2007/11/28/127).

This patch removes the rpc sigmask code and changes the
out_of_line_wait_on_bit and wait_on_bit calls in the sched.c file to use
TASK_KILLABLE. The result of this patch is the ability to kill commands
issued to a dead NFS mount by the normal ctrl+c method.

I am looking for help getting this to work with commands that use the
stat (and friends) system calls. These system calls seem to use spinlocks
and are not killable-friendly due to the atomic operations involved.

Does anyone have any thoughts on the patch so far, or on the remaining
issues I am facing? Please CC me on any responses.

Thanks,
Liam R. Howlett
---
 fs/nfs/direct.c          |  8
 fs/nfs/inode.c           |  4
 fs/nfs/nfs3proc.c        |  3 ---
 fs/nfs/nfs4proc.c        |  9 -
 fs/nfs/pagelist.c        |  4
 fs/nfs/read.c            |  5 -
 fs/nfs/write.c           |  5 -
 include/linux/nfs_fs.h   |  2 --
 net/sunrpc/clnt.c        | 41 -
 net/sunrpc/sched.c       |  4 ++--
 net/sunrpc/sunrpc_syms.c |  2 --
 11 files changed, 2 insertions(+), 85 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index afcab00..30eefa1 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -358,9 +358,7 @@ static ssize_t nfs_direct_read_schedule(struct nfs_direct_req *dreq, unsigned lo
 static ssize_t nfs_direct_read(struct kiocb *iocb, unsigned long user_addr,
 			size_t count, loff_t pos)
 {
 	ssize_t result = 0;
-	sigset_t oldset;
 	struct inode *inode = iocb->ki_filp->f_mapping->host;
-	struct rpc_clnt *clnt = NFS_CLIENT(inode);
 	struct nfs_direct_req *dreq;

 	dreq = nfs_direct_req_alloc();
@@ -373,11 +371,9 @@ static ssize_t nfs_direct_read(struct kiocb *iocb, unsigned long user_addr, size
 	dreq->iocb = iocb;

 	nfs_add_stats(inode, NFSIOS_DIRECTREADBYTES, count);
-	rpc_clnt_sigmask(clnt, &oldset);
 	result = nfs_direct_read_schedule(dreq, user_addr, count, pos);
 	if (!result)
 		result = nfs_direct_wait(dreq);
-	rpc_clnt_sigunmask(clnt, &oldset);
 	nfs_direct_req_release(dreq);

 	return result;
@@ -700,9 +696,7 @@ static ssize_t nfs_direct_write_schedule(struct nfs_direct_req *dreq, unsigned l
 static ssize_t nfs_direct_write(struct kiocb *iocb, unsigned long user_addr,
 			size_t count, loff_t pos)
 {
 	ssize_t result = 0;
-	sigset_t oldset;
 	struct inode *inode = iocb->ki_filp->f_mapping->host;
-	struct rpc_clnt *clnt = NFS_CLIENT(inode);
 	struct nfs_direct_req *dreq;
 	size_t wsize = NFS_SERVER(inode)->wsize;
 	int sync = 0;
@@ -722,11 +716,9 @@ static ssize_t nfs_direct_write(struct kiocb *iocb, unsigned long user_addr, siz

 	nfs_add_stats(inode, NFSIOS_DIRECTWRITTENBYTES, count);
-	rpc_clnt_sigmask(clnt, &oldset);
 	result = nfs_direct_write_schedule(dreq, user_addr, count, pos, sync);
 	if (!result)
 		result = nfs_direct_wait(dreq);
-	rpc_clnt_sigunmask(clnt, &oldset);
 	nfs_direct_req_release(dreq);

 	return result;
diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
index db5d96d..7fbf610 100644
--- a/fs/nfs/inode.c
+++ b/fs/nfs/inode.c
@@ -433,15 +433,11 @@ static int nfs_wait_schedule(void *word)
  */
 static int nfs_wait_on_inode(struct inode *inode)
 {
-	struct rpc_clnt *clnt = NFS_CLIENT(inode);
 	struct nfs_inode *nfsi = NFS_I(inode);
-	sigset_t oldmask;
 	int error;

-	rpc_clnt_sigmask(clnt, &oldmask);
 	error = wait_on_bit_lock(&nfsi->flags, NFS_INO_REVALIDATING,
 					nfs_wait_schedule, TASK_INTERRUPTIBLE);
-	rpc_clnt_sigunmask(clnt, &oldmask);

 	return error;
 }
diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
index 4cdc236..cf98de9 100644
--- a/fs/nfs/nfs3proc.c
+++ b/fs/nfs/nfs3proc.c
@@ -27,9 +27,7 @@ static int
 nfs3_rpc_wrapper(struct rpc_clnt *clnt, struct rpc_message *msg, int flags)
 {
-	sigset_t oldset;
 	int res;

-	rpc_clnt_sigmask(clnt, &oldset);
 	do {
 		res = rpc_call_sync(clnt, msg, flags);
 		if (res != -EJUKEBOX)
@@ -37,7 +35,6 @@ nfs3_rpc_wrapper(struct rpc_clnt *clnt, struct rpc_message *msg, int flags)
 		schedule_timeout_interruptible(NFS_JUKEBOX_RETRY_TIME);
 		res = -ERESTARTSYS;
 	} while (!signalled());
-	rpc_clnt_sigunmask(clnt, &oldset);
 	return res;
 }
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index f03d9d5..c7d3955 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -316,12 +316,9 @@ static void nfs4_opendata_put(struct nfs4_opendata *p)
 static int nfs4_wait_for_completion_rpc_task(struct rpc_task *task)
 {
-
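The net/sunrpc/sched.c side of the conversion is not visible in the (truncated) diff above. As a rough illustration of what the description implies — a sketch only, assuming willy's TASK_KILLABLE infrastructure (the TASK_KILLABLE state and fatal_signal_pending()), not the actual patch — the wait-bit action and its caller might look like:

	/* Sketch only -- not the actual patch.  The wait-bit action passed to
	 * out_of_line_wait_on_bit() bails out with -ERESTARTSYS when a fatal
	 * signal (SIGKILL) is pending, and otherwise just sleeps. */
	static int rpc_wait_bit_killable(void *word)
	{
		if (fatal_signal_pending(current))
			return -ERESTARTSYS;
		schedule();
		return 0;
	}

	/* Caller side: the wait state changes from TASK_INTERRUPTIBLE to
	 * TASK_KILLABLE, so ordinary signals no longer need to be masked
	 * around the wait with rpc_clnt_sigmask()/rpc_clnt_sigunmask(). */
	status = out_of_line_wait_on_bit(&task->tk_runstate, RPC_TASK_ACTIVE,
					 rpc_wait_bit_killable, TASK_KILLABLE);

With the wait itself only wakeable by fatal signals, the sigmask save/restore dance removed throughout the diff above becomes unnecessary.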
[Patch 0/2] Kernel: mutex_lock_killable
Hello,

This series of patches adds the ability to make mutex locks killable
instead of uninterruptible. This patch set builds on willy's 5 patches for
TASK_KILLABLE that are currently in mm (see
http://lkml.org/lkml/2007/10/18/423 for more details). I have used
2.6.24-rc3 with the addition of willy's patches.

The first patch adds mutex_lock_killable to kernel/mutex.c and
kernel/mutex.h. The second patch converts fs/readdir.c to use the new
mutex_lock_killable. This was enough for a small test application to be
killable once the ethernet was pulled on an NFS mount (please note that
this does not yet allow a normal ls to be killed).

akpm: can you take these for merge in 2.6.25?

A big thanks to Matthew for all the help on these patches. Please CC me on
any responses.

Thanks,
Liam R. Howlett
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
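As an illustration of the two pieces the cover letter describes — a sketch under the assumption of willy's TASK_KILLABLE patches, not the patches themselves; `__mutex_lock_killable_slowpath` is a hypothetical helper name standing in for the real slow path:

	/* kernel/mutex.c -- like mutex_lock_interruptible(), but the sleep is
	 * in TASK_KILLABLE, so only a fatal signal (SIGKILL) interrupts the
	 * wait.  Returns 0 on success, or -EINTR if the task was killed
	 * while waiting for the lock. */
	int __sched mutex_lock_killable(struct mutex *lock)
	{
		might_sleep();
		return __mutex_lock_killable_slowpath(lock); /* hypothetical */
	}

	/* fs/readdir.c -- the inode mutex taken for getdents() and friends
	 * becomes killable, so a process stuck in readdir on a dead NFS
	 * mount can be removed with kill -9. */
	res = mutex_lock_killable(&inode->i_mutex);
	if (res)
		goto out;

The point of the -EINTR return is that, unlike mutex_lock_interruptible(), an ordinary ctrl+c does not abort the lock attempt; only a fatal signal does, which is why a hung ls is not yet killable with plain SIGINT.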