Re: [PATCH v2] slub: Do not assert not having lock in removing freed partial
On Thu, 6 Feb 2014, Vladimir Davydov wrote:

> > @@ -2906,12 +2916,10 @@ static void early_kmem_cache_node_alloc(
> >  	inc_slabs_node(kmem_cache_node, node, page->objects);
> >
> >  	/*
> > -	 * the lock is for lockdep's sake, not for any actual
> > -	 * race protection
> > +	 * No locks need to be taken here as it has just been
> > +	 * initialized and there is no concurrent access.
> >  	 */
> > -	spin_lock(&n->list_lock);
> > -	add_partial(n, page, DEACTIVATE_TO_HEAD);
> > -	spin_unlock(&n->list_lock);
> > +	__add_partial(n, page, DEACTIVATE_TO_HEAD);
> >  }

Ahh.. Much better.

Acked-by: Christoph Lameter <c...@linux.com>
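[For context on the trade-off being acked here, a hedged sketch rather than
anything from the thread: n->list_lock is taken with interrupts disabled
elsewhere in SLUB, which is why the changelog notes that quieting lockdep
with a lock "requires disabling of interrupts". The names are the ones from
the quoted hunk.]

	/* v1 approach (rejected): take the lock purely to satisfy lockdep.
	 * Done properly on an irq-safe lock, that also means toggling
	 * interrupts on a path that cannot race with anyone.
	 */
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	add_partial(n, page, DEACTIVATE_TO_HEAD);	/* runs lockdep_assert_held() */
	spin_unlock_irqrestore(&n->list_lock, flags);

	/* v2 approach (this patch): the node was just initialized and is not
	 * yet visible to any other context, so skip the lock and the assert.
	 */
	__add_partial(n, page, DEACTIVATE_TO_HEAD);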
Re: [PATCH v2] slub: Do not assert not having lock in removing freed partial
On 02/06/2014 07:21 AM, Steven Rostedt wrote:
> Vladimir reported the following issue:
>
> Commit c65c1877bd68 ("slub: use lockdep_assert_held") requires
> remove_partial() to be called with n->list_lock held, but free_partial()
> called from kmem_cache_close() on cache destruction does not follow this
> rule, leading to a warning:
>
> WARNING: CPU: 0 PID: 2787 at mm/slub.c:1536 __kmem_cache_shutdown+0x1b2/0x1f0()
> Modules linked in:
> CPU: 0 PID: 2787 Comm: modprobe Tainted: G W 3.14.0-rc1-mm1+ #1
> Hardware name:
>  0600 88003ae1dde8 816d9583 0600
>  88003ae1de28 8107c107 880037ab2b00 88007c240d30
>  ea0001ee5280 ea0001ee52a0
> Call Trace:
>  [816d9583] dump_stack+0x51/0x6e
>  [8107c107] warn_slowpath_common+0x87/0xb0
>  [8107c145] warn_slowpath_null+0x15/0x20
>  [811c7fe2] __kmem_cache_shutdown+0x1b2/0x1f0
>  [811908d3] kmem_cache_destroy+0x43/0xf0
>  [a013a123] xfs_destroy_zones+0x103/0x110 [xfs]
>  [a0192b54] exit_xfs_fs+0x38/0x4e4 [xfs]
>  [811036fa] SyS_delete_module+0x19a/0x1f0
>  [816dfcd8] ? retint_swapgs+0x13/0x1b
>  [810d2125] ? trace_hardirqs_on_caller+0x105/0x1d0
>  [81359efe] ? trace_hardirqs_on_thunk+0x3a/0x3f
>  [816e8539] system_call_fastpath+0x16/0x1b
>
> His solution was to add a spinlock in order to quiet lockdep. Although
> there would be no contention in adding the lock, that lock also requires
> disabling of interrupts, which will have a larger impact on the system.
>
> Instead of adding a spinlock to a location where it is not needed for
> lockdep, make a __remove_partial() function that does not test whether
> the list_lock is held, as no one should be holding it while the cache is
> being freed.
>
> Also add a __add_partial() function that does not do the lock validation
> either, as it is not needed for the creation of the cache.
>
> Suggested-by: David Rientjes <rient...@google.com>
> Reported-by: Vladimir Davydov <vdavy...@parallels.com>
> Signed-off-by: Steven Rostedt <rost...@goodmis.org>
>
> Index: linux-trace.git/mm/slub.c
> ===================================================================
> --- linux-trace.git.orig/mm/slub.c
> +++ linux-trace.git/mm/slub.c
> @@ -1520,11 +1520,9 @@ static void discard_slab(struct kmem_cac
>  /*
>   * Management of partially allocated slabs.
>   */
> -static inline void add_partial(struct kmem_cache_node *n,
> -				struct page *page, int tail)
> +static inline void
> +__add_partial(struct kmem_cache_node *n, struct page *page, int tail)
>  {
> -	lockdep_assert_held(&n->list_lock);
> -
>  	n->nr_partial++;
>  	if (tail == DEACTIVATE_TO_TAIL)
>  		list_add_tail(&page->lru, &n->partial);
> @@ -1532,15 +1530,27 @@ static inline void add_partial(struct km
>  		list_add(&page->lru, &n->partial);
>  }
>
> -static inline void remove_partial(struct kmem_cache_node *n,
> -					struct page *page)
> +static inline void add_partial(struct kmem_cache_node *n,
> +				struct page *page, int tail)
>  {
>  	lockdep_assert_held(&n->list_lock);
> +	__add_partial(n, page, tail);
> +}
>
> +static inline void
> +__remove_partial(struct kmem_cache_node *n, struct page *page)
> +{
>  	list_del(&page->lru);
>  	n->nr_partial--;
>  }
>
> +static inline void remove_partial(struct kmem_cache_node *n,
> +					struct page *page)
> +{
> +	lockdep_assert_held(&n->list_lock);
> +	__remove_partial(n, page);
> +}
> +
>  /*
>   * Remove slab from the partial list, freeze it and
>   * return the pointer to the freelist.
> @@ -2906,12 +2916,10 @@ static void early_kmem_cache_node_alloc(
>  	inc_slabs_node(kmem_cache_node, node, page->objects);
>
>  	/*
> -	 * the lock is for lockdep's sake, not for any actual
> -	 * race protection
> +	 * No locks need to be taken here as it has just been
> +	 * initialized and there is no concurrent access.
>  	 */
> -	spin_lock(&n->list_lock);
> -	add_partial(n, page, DEACTIVATE_TO_HEAD);
> -	spin_unlock(&n->list_lock);
> +	__add_partial(n, page, DEACTIVATE_TO_HEAD);
>  }
>
>  static void free_kmem_cache_nodes(struct kmem_cache *s)
> @@ -3197,7 +3205,7 @@ static void free_partial(struct kmem_cac
>
>  	list_for_each_entry_safe(page, h, &n->partial, lru) {
>  		if (!page->inuse) {
> -			remove_partial(n, page);
> +			__remove_partial(n, page);
>  			discard_slab(s, page);
>  		} else {
>  			list_slab_objects(s, page,

Looks neat. FWIW,

Acked-by: Vladimir Davydov <vdavy...@parallels.com>

Thanks.
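[Aside: the WARNING at mm/slub.c:1536 in the quoted trace is the assertion
itself firing from the inlined remove_partial(). At the time,
lockdep_assert_held() in <linux/lockdep.h> was roughly the macro below, so a
violation produces a one-shot WARN rather than a crash, and only when lockdep
is compiled in and still active:]

	#define lockdep_assert_held(l)	do {				\
			WARN_ON(debug_locks && !lockdep_is_held(l));	\
		} while (0)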
Re: [PATCH v2] slub: Do not assert not having lock in removing freed partial
On Wed, 5 Feb 2014, Steven Rostedt wrote:

> Vladimir reported the following issue:
>
> Commit c65c1877bd68 ("slub: use lockdep_assert_held") requires
> remove_partial() to be called with n->list_lock held, but free_partial()
> called from kmem_cache_close() on cache destruction does not follow this
> rule, leading to a warning:
>
> WARNING: CPU: 0 PID: 2787 at mm/slub.c:1536 __kmem_cache_shutdown+0x1b2/0x1f0()
> Modules linked in:
> CPU: 0 PID: 2787 Comm: modprobe Tainted: G W 3.14.0-rc1-mm1+ #1
> Hardware name:
>  0600 88003ae1dde8 816d9583 0600
>  88003ae1de28 8107c107 880037ab2b00 88007c240d30
>  ea0001ee5280 ea0001ee52a0
> Call Trace:
>  [816d9583] dump_stack+0x51/0x6e
>  [8107c107] warn_slowpath_common+0x87/0xb0
>  [8107c145] warn_slowpath_null+0x15/0x20
>  [811c7fe2] __kmem_cache_shutdown+0x1b2/0x1f0
>  [811908d3] kmem_cache_destroy+0x43/0xf0
>  [a013a123] xfs_destroy_zones+0x103/0x110 [xfs]
>  [a0192b54] exit_xfs_fs+0x38/0x4e4 [xfs]
>  [811036fa] SyS_delete_module+0x19a/0x1f0
>  [816dfcd8] ? retint_swapgs+0x13/0x1b
>  [810d2125] ? trace_hardirqs_on_caller+0x105/0x1d0
>  [81359efe] ? trace_hardirqs_on_thunk+0x3a/0x3f
>  [816e8539] system_call_fastpath+0x16/0x1b
>
> His solution was to add a spinlock in order to quiet lockdep. Although
> there would be no contention in adding the lock, that lock also requires
> disabling of interrupts, which will have a larger impact on the system.
>
> Instead of adding a spinlock to a location where it is not needed for
> lockdep, make a __remove_partial() function that does not test whether
> the list_lock is held, as no one should be holding it while the cache is
> being freed.
>
> Also add a __add_partial() function that does not do the lock validation
> either, as it is not needed for the creation of the cache.
>
> Suggested-by: David Rientjes <rient...@google.com>
> Reported-by: Vladimir Davydov <vdavy...@parallels.com>
> Signed-off-by: Steven Rostedt <rost...@goodmis.org>

Acked-by: David Rientjes <rient...@google.com>

Thanks Steven!
[PATCH v2] slub: Do not assert not having lock in removing freed partial
Vladimir reported the following issue:

Commit c65c1877bd68 ("slub: use lockdep_assert_held") requires
remove_partial() to be called with n->list_lock held, but free_partial()
called from kmem_cache_close() on cache destruction does not follow this
rule, leading to a warning:

WARNING: CPU: 0 PID: 2787 at mm/slub.c:1536 __kmem_cache_shutdown+0x1b2/0x1f0()
Modules linked in:
CPU: 0 PID: 2787 Comm: modprobe Tainted: G W 3.14.0-rc1-mm1+ #1
Hardware name:
 0600 88003ae1dde8 816d9583 0600
 88003ae1de28 8107c107 880037ab2b00 88007c240d30
 ea0001ee5280 ea0001ee52a0
Call Trace:
 [816d9583] dump_stack+0x51/0x6e
 [8107c107] warn_slowpath_common+0x87/0xb0
 [8107c145] warn_slowpath_null+0x15/0x20
 [811c7fe2] __kmem_cache_shutdown+0x1b2/0x1f0
 [811908d3] kmem_cache_destroy+0x43/0xf0
 [a013a123] xfs_destroy_zones+0x103/0x110 [xfs]
 [a0192b54] exit_xfs_fs+0x38/0x4e4 [xfs]
 [811036fa] SyS_delete_module+0x19a/0x1f0
 [816dfcd8] ? retint_swapgs+0x13/0x1b
 [810d2125] ? trace_hardirqs_on_caller+0x105/0x1d0
 [81359efe] ? trace_hardirqs_on_thunk+0x3a/0x3f
 [816e8539] system_call_fastpath+0x16/0x1b

His solution was to add a spinlock in order to quiet lockdep. Although
there would be no contention in adding the lock, that lock also requires
disabling of interrupts, which will have a larger impact on the system.

Instead of adding a spinlock to a location where it is not needed for
lockdep, make a __remove_partial() function that does not test whether
the list_lock is held, as no one should be holding it while the cache is
being freed.

Also add a __add_partial() function that does not do the lock validation
either, as it is not needed for the creation of the cache.

Suggested-by: David Rientjes <rient...@google.com>
Reported-by: Vladimir Davydov <vdavy...@parallels.com>
Signed-off-by: Steven Rostedt <rost...@goodmis.org>

Index: linux-trace.git/mm/slub.c
===================================================================
--- linux-trace.git.orig/mm/slub.c
+++ linux-trace.git/mm/slub.c
@@ -1520,11 +1520,9 @@ static void discard_slab(struct kmem_cac
 /*
  * Management of partially allocated slabs.
  */
-static inline void add_partial(struct kmem_cache_node *n,
-				struct page *page, int tail)
+static inline void
+__add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
-	lockdep_assert_held(&n->list_lock);
-
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
 		list_add_tail(&page->lru, &n->partial);
@@ -1532,15 +1530,27 @@ static inline void add_partial(struct km
 		list_add(&page->lru, &n->partial);
 }

-static inline void remove_partial(struct kmem_cache_node *n,
-					struct page *page)
+static inline void add_partial(struct kmem_cache_node *n,
+				struct page *page, int tail)
 {
 	lockdep_assert_held(&n->list_lock);
+	__add_partial(n, page, tail);
+}

+static inline void
+__remove_partial(struct kmem_cache_node *n, struct page *page)
+{
 	list_del(&page->lru);
 	n->nr_partial--;
 }

+static inline void remove_partial(struct kmem_cache_node *n,
+					struct page *page)
+{
+	lockdep_assert_held(&n->list_lock);
+	__remove_partial(n, page);
+}
+
 /*
  * Remove slab from the partial list, freeze it and
  * return the pointer to the freelist.
@@ -2906,12 +2916,10 @@ static void early_kmem_cache_node_alloc(
 	inc_slabs_node(kmem_cache_node, node, page->objects);

 	/*
-	 * the lock is for lockdep's sake, not for any actual
-	 * race protection
+	 * No locks need to be taken here as it has just been
+	 * initialized and there is no concurrent access.
 	 */
-	spin_lock(&n->list_lock);
-	add_partial(n, page, DEACTIVATE_TO_HEAD);
-	spin_unlock(&n->list_lock);
+	__add_partial(n, page, DEACTIVATE_TO_HEAD);
 }

 static void free_kmem_cache_nodes(struct kmem_cache *s)
@@ -3197,7 +3205,7 @@ static void free_partial(struct kmem_cac

 	list_for_each_entry_safe(page, h, &n->partial, lru) {
 		if (!page->inuse) {
-			remove_partial(n, page);
+			__remove_partial(n, page);
 			discard_slab(s, page);
 		} else {
 			list_slab_objects(s, page,
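[The shape of the fix generalizes beyond slub: keep the assertion in the
public helper, and give provably single-threaded contexts (early init,
teardown) a bare __ variant. A minimal, self-contained sketch of the
pattern, using simplified stand-in types rather than the real slub
structures:]

	#include <linux/list.h>
	#include <linux/lockdep.h>
	#include <linux/spinlock.h>

	struct example_node {			/* stand-in for kmem_cache_node */
		spinlock_t	list_lock;
		unsigned long	nr_partial;
		struct list_head partial;
	};

	/*
	 * No assertion: only for callers that provably cannot race, e.g.
	 * before the node is published or while the cache is being freed.
	 */
	static inline void __example_add(struct example_node *n, struct list_head *e)
	{
		n->nr_partial++;
		list_add(e, &n->partial);
	}

	/* Normal path: documents the locking rule and checks it under lockdep. */
	static inline void example_add(struct example_node *n, struct list_head *e)
	{
		lockdep_assert_held(&n->list_lock);
		__example_add(n, e);
	}

The double-underscore prefix follows the usual kernel convention of "the
caller has already taken care of the locking (or proven it unnecessary)".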