Re: [PATCH v2] netlink: Return unsigned value for nla_len()

2023-12-04 Thread Nicolas Dichtel
Le 04/12/2023 à 23:21, Kees Cook a écrit :
[snip]
>>> diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h
>>> index f87aaf28a649..270feed9fd63 100644
>>> --- a/include/uapi/linux/netlink.h
>>> +++ b/include/uapi/linux/netlink.h
>>> @@ -247,7 +247,7 @@ struct nlattr {
>>>  
>>>  #define NLA_ALIGNTO		4
>>>  #define NLA_ALIGN(len)		(((len) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))
>>> -#define NLA_HDRLEN ((int) NLA_ALIGN(sizeof(struct nlattr)))
>>> +#define NLA_HDRLEN ((__u16) NLA_ALIGN(sizeof(struct nlattr)))
>> I wonder if this may break the compilation of some userspace tools with 
>> errors
>> like comparing signed and unsigned values.
> 
> Should I drop this part, then?
> 
Yes please.


Thank you,
Nicolas



Re: [PATCH 2/5] pstore: inode: Convert mutex usage to guard(mutex)

2023-12-04 Thread Dave Chinner
On Sat, Dec 02, 2023 at 01:22:12PM -0800, Kees Cook wrote:
> Replace open-coded mutex handling with cleanup.h guard(mutex) and
> scoped_guard(mutex, ...).
> 
> Cc: "Guilherme G. Piccoli" 
> Cc: Tony Luck 
> Cc: linux-hardening@vger.kernel.org
> Signed-off-by: Kees Cook 
> ---
>  fs/pstore/inode.c | 76 +++
>  1 file changed, 31 insertions(+), 45 deletions(-)

This doesn't really feel like an improvement. We've gone from
clearly defined lock/unlock pairings to macro-wrapped code that
hides scoping from the reader.

I'm going to have to continually remind myself that this weird
looking code doesn't actually leak locks just because it returns
from a function with a lock held. That's 20 years of logic design
and pattern matching practice I'm going to have to break, and I feel
that's going to make it harder for me to maintain and review code
sanely as a result.

> diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
> index 20f3452c8196..0d89e0014b6f 100644
> --- a/fs/pstore/inode.c
> +++ b/fs/pstore/inode.c
> @@ -180,25 +180,21 @@ static int pstore_unlink(struct inode *dir, struct 
> dentry *dentry)
>  {
>   struct pstore_private *p = d_inode(dentry)->i_private;
>   struct pstore_record *record = p->record;
> - int rc = 0;
>  
>   if (!record->psi->erase)
>   return -EPERM;
>  
>   /* Make sure we can't race while removing this file. */
> -	mutex_lock(&records_list_lock);
> -	if (!list_empty(&p->list))
> -		list_del_init(&p->list);
> -	else
> -		rc = -ENOENT;
> -	p->dentry = NULL;
> -	mutex_unlock(&records_list_lock);
> -	if (rc)
> -		return rc;
> -
> -	mutex_lock(&record->psi->read_mutex);
> -	record->psi->erase(record);
> -	mutex_unlock(&record->psi->read_mutex);
> +	scoped_guard(mutex, &records_list_lock) {
> +		if (!list_empty(&p->list))
> +			list_del_init(&p->list);
> +		else
> +			return -ENOENT;
> +		p->dentry = NULL;
> +	}
> +
> +	scoped_guard(mutex, &record->psi->read_mutex)
> +		record->psi->erase(record);

And now we combine the new-fangled shiny with a mechanical change
that lacks critical thinking about the logic of the code.  Why drop
the mutex only to have to pick it back up again when the scoping
handles the error case automatically? i.e.:

	scoped_guard(mutex, &records_list_lock) {
		if (!list_empty(&p->list))
			list_del_init(&p->list);
		else
			return -ENOENT;
		p->dentry = NULL;
		record->psi->erase(record);
	}

Not a fan of the required indenting just for critical sections;
this will be somewhat nasty when multiple locks need to be taken.
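[Editor's note: for readers unfamiliar with cleanup.h, the guard()/scoped_guard() behavior being debated can be modeled in plain C with the compiler's cleanup attribute (a GCC/Clang extension). The names below are illustrative, not the kernel's; the "lock" is just a counter so the automatic unlock is observable.]

```c
#include <assert.h>

/* Minimal userspace model of cleanup.h's guard(mutex). */
static int lock_depth;

static void fake_lock(int *l)    { ++*l; }
/* Invoked automatically when the guard variable leaves scope. */
static void fake_unlock(int **l) { --**l; }

/* Lock on entry; the cleanup attribute arranges the unlock at the
 * end of the enclosing scope, even on early return. */
#define GUARD(l) \
	__attribute__((cleanup(fake_unlock))) int *_guard = (fake_lock(l), (l))

static int remove_entry(int list_nonempty)
{
	GUARD(&lock_depth);
	assert(lock_depth == 1);	/* inside the critical section */
	if (!list_nonempty)
		return -2;	/* -ENOENT: the unlock still runs here */
	return 0;		/* ...and here, with no unlock call in sight */
}
```

This is exactly the pattern-matching hazard described above: both return statements leave the function "with the lock held" as far as visual inspection goes, yet no lock is leaked.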

> @@ -317,19 +310,19 @@ int pstore_put_backend_records(struct pstore_info *psi)
>   if (!root)
>   return 0;
>  
> -	mutex_lock(&records_list_lock);
> -	list_for_each_entry_safe(pos, tmp, &records_list, list) {
> -		if (pos->record->psi == psi) {
> -			list_del_init(&pos->list);
> -			rc = simple_unlink(d_inode(root), pos->dentry);
> -			if (WARN_ON(rc))
> -				break;
> -			d_drop(pos->dentry);
> -			dput(pos->dentry);
> -			pos->dentry = NULL;
> +	scoped_guard(mutex, &records_list_lock) {
> +		list_for_each_entry_safe(pos, tmp, &records_list, list) {
> +			if (pos->record->psi == psi) {
> +				list_del_init(&pos->list);
> +				rc = simple_unlink(d_inode(root), pos->dentry);
> +				if (WARN_ON(rc))
> +					break;
> +				d_drop(pos->dentry);
> +				dput(pos->dentry);
> +				pos->dentry = NULL;
> +			}
>  		}
>  	}
> -	mutex_unlock(&records_list_lock);

This doesn't even save a line of code - there's no actual scoping
needed here because there is no return from within the loop. But
with a scoped guard we have to add an extra layer of indentation.
Not a fan of that, either.

> @@ -449,9 +439,8 @@ static int pstore_fill_super(struct super_block *sb, void 
> *data, int silent)
>   if (!sb->s_root)
>   return -ENOMEM;
>  
> -	mutex_lock(&pstore_sb_lock);
> -	pstore_sb = sb;
> -	mutex_unlock(&pstore_sb_lock);
> +	scoped_guard(mutex, &pstore_sb_lock)
> +		pstore_sb = sb;
>  
>   pstore_get_records(0);
>  
> @@ -466,17 +455,14 @@ static struct dentry *pstore_mount(struct 
> file_system_type *fs_type,
>  
>  static void pstore_kill_sb(struct super_block *sb)
>  {
> -	mutex_lock(&pstore_sb_lock);
> +	guard(mutex)(&pstore_sb_lock);
>  	WARN_ON(pstore_sb && pstore_sb != sb);
>  
>  	kill_litter_super(sb);
>  	pstore_sb = NULL;
>  
> -	mutex_lock(&records_list_lock);
> +	guard(mutex)(&records_list_lock);
> 

Re: [PATCH v2 00/18] sysctl: constify sysctl ctl_tables

2023-12-04 Thread Luis Chamberlain
On Mon, Dec 04, 2023 at 08:52:13AM +0100, Thomas Weißschuh wrote:
> Tested by booting and with the sysctl selftests on x86.

Can I trouble you to rebase on sysctl-next?

https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git/log/?h=sysctl-next

  Luis



Re: [PATCH v2 03/21] KASAN: remove code paths guarded by CONFIG_SLAB

2023-12-04 Thread Hyeonggon Yoo
On Tue, Dec 5, 2023 at 1:27 PM Hyeonggon Yoo <42.hye...@gmail.com> wrote:
>
> On Mon, Nov 20, 2023 at 07:34:14PM +0100, Vlastimil Babka wrote:
> > With SLAB removed and SLUB the only remaining allocator, we can clean up
> > some code that was depending on the choice.
> >
> > Reviewed-by: Kees Cook 
> > Reviewed-by: Marco Elver 
> > Signed-off-by: Vlastimil Babka 
> > ---
> >  mm/kasan/common.c | 13 ++---
> >  mm/kasan/kasan.h  |  3 +--
> >  mm/kasan/quarantine.c |  7 ---
> >  3 files changed, 3 insertions(+), 20 deletions(-)
> >
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index 256930da578a..5d95219e69d7 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -153,10 +153,6 @@ void __kasan_poison_object_data(struct kmem_cache 
> > *cache, void *object)
> >   * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be
> >   *accessed after being freed. We preassign tags for objects in these
> >   *caches as well.
> > - * 3. For SLAB allocator we can't preassign tags randomly since the 
> > freelist
> > - *is stored as an array of indexes instead of a linked list. Assign 
> > tags
> > - *based on objects indexes, so that objects that are next to each other
> > - *get different tags.
> >   */
> >  static inline u8 assign_tag(struct kmem_cache *cache,
> >   const void *object, bool init)
> > @@ -171,17 +167,12 @@ static inline u8 assign_tag(struct kmem_cache *cache,
> >   if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))
> >   return init ? KASAN_TAG_KERNEL : kasan_random_tag();
> >
> > - /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: 
> > */
> > -#ifdef CONFIG_SLAB
> > - /* For SLAB assign tags based on the object index in the freelist. */
> > - return (u8)obj_to_index(cache, virt_to_slab(object), (void *)object);
> > -#else
> >   /*
> > -  * For SLUB assign a random tag during slab creation, otherwise reuse
> > +  * For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU,
> > +  * assign a random tag during slab creation, otherwise reuse
> >* the already assigned tag.
> >*/
> >   return init ? kasan_random_tag() : get_tag(object);
> > -#endif
> >  }
> >
> >  void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index 8b06bab5c406..eef50233640a 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -373,8 +373,7 @@ void kasan_set_track(struct kasan_track *track, gfp_t 
> > flags);
> >  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t 
> > flags);
> >  void kasan_save_free_info(struct kmem_cache *cache, void *object);
> >
> > -#if defined(CONFIG_KASAN_GENERIC) && \
> > - (defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
> > +#ifdef CONFIG_KASAN_GENERIC
> >  bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
> >  void kasan_quarantine_reduce(void);
> >  void kasan_quarantine_remove_cache(struct kmem_cache *cache);
> > diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> > index ca4529156735..138c57b836f2 100644
> > --- a/mm/kasan/quarantine.c
> > +++ b/mm/kasan/quarantine.c
> > @@ -144,10 +144,6 @@ static void qlink_free(struct qlist_node *qlink, 
> > struct kmem_cache *cache)
> >  {
> >   void *object = qlink_to_object(qlink, cache);
> >   struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);
> > - unsigned long flags;
> > -
> > - if (IS_ENABLED(CONFIG_SLAB))
> > - local_irq_save(flags);
> >
> >   /*
> >* If init_on_free is enabled and KASAN's free metadata is stored in
> > @@ -166,9 +162,6 @@ static void qlink_free(struct qlist_node *qlink, struct 
> > kmem_cache *cache)
> >   *(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREE;
> >
> >   ___cache_free(cache, object, _THIS_IP_);
> > -
> > - if (IS_ENABLED(CONFIG_SLAB))
> > - local_irq_restore(flags);
> >  }
> >
> >  static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)
>
> Looks good to me,
> Reviewed-by: Hyeonggon Yoo <42.hye...@gmail.com>

nit: Some KASAN tests depend on SLUB, but as it's now the only allocator,
  the KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB); in
  mm/kasan/kasan_test.c can be removed.

>
> >
> > --
> > 2.42.1
> >
> >



Re: [PATCH v2 16/21] mm/slab: move kfree() from slab_common.c to slub.c

2023-12-04 Thread Hyeonggon Yoo
On Mon, Nov 20, 2023 at 07:34:27PM +0100, Vlastimil Babka wrote:
> This should result in better code. Currently kfree() makes a function
> call between compilation units to __kmem_cache_free() which does its own
> virt_to_slab(), throwing away the struct slab pointer we already had in
> kfree(). Now it can be reused. Additionally kfree() can now inline the
> whole SLUB freeing fastpath.
> 
> Also move over free_large_kmalloc() as the only callsites are now in
> slub.c, and make it static.
> 
> Reviewed-by: Kees Cook 
> Signed-off-by: Vlastimil Babka 
> ---
>  mm/slab.h|  4 
>  mm/slab_common.c | 45 -
>  mm/slub.c| 51 ++-
>  3 files changed, 46 insertions(+), 54 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 5ae6a978e9c2..35a55c4a407d 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -395,8 +395,6 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags, 
> unsigned long caller);
>  void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
> int node, size_t orig_size,
> unsigned long caller);
> -void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
> -
>  gfp_t kmalloc_fix_flags(gfp_t flags);
>  
>  /* Functions provided by the slab allocators */
> @@ -559,8 +557,6 @@ static inline int memcg_alloc_slab_cgroups(struct slab 
> *slab,
>  }
>  #endif /* CONFIG_MEMCG_KMEM */
>  
> -void free_large_kmalloc(struct folio *folio, void *object);
> -
>  size_t __ksize(const void *objp);
>  
>  static inline size_t slab_ksize(const struct kmem_cache *s)
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index bbc2e3f061f1..f4f275613d2a 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -963,22 +963,6 @@ void __init create_kmalloc_caches(slab_flags_t flags)
>   slab_state = UP;
>  }
>  
> -void free_large_kmalloc(struct folio *folio, void *object)
> -{
> - unsigned int order = folio_order(folio);
> -
> - if (WARN_ON_ONCE(order == 0))
> - pr_warn_once("object pointer: 0x%p\n", object);
> -
> - kmemleak_free(object);
> - kasan_kfree_large(object);
> - kmsan_kfree_large(object);
> -
> - mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
> -   -(PAGE_SIZE << order));
> - __free_pages(folio_page(folio, 0), order);
> -}
> -
>  static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
>  static __always_inline
>  void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long 
> caller)
> @@ -1023,35 +1007,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t 
> flags,
>  }
>  EXPORT_SYMBOL(__kmalloc_node_track_caller);
>  
> -/**
> - * kfree - free previously allocated memory
> - * @object: pointer returned by kmalloc() or kmem_cache_alloc()
> - *
> - * If @object is NULL, no operation is performed.
> - */
> -void kfree(const void *object)
> -{
> - struct folio *folio;
> - struct slab *slab;
> - struct kmem_cache *s;
> -
> - trace_kfree(_RET_IP_, object);
> -
> - if (unlikely(ZERO_OR_NULL_PTR(object)))
> - return;
> -
> - folio = virt_to_folio(object);
> - if (unlikely(!folio_test_slab(folio))) {
> - free_large_kmalloc(folio, (void *)object);
> - return;
> - }
> -
> - slab = folio_slab(folio);
> - s = slab->slab_cache;
> - __kmem_cache_free(s, (void *)object, _RET_IP_);
> -}
> -EXPORT_SYMBOL(kfree);
> -
>  /**
>   * __ksize -- Report full size of underlying allocation
>   * @object: pointer to the object
> diff --git a/mm/slub.c b/mm/slub.c
> index cc801f8258fe..2baa9e94d9df 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4197,11 +4197,6 @@ static inline struct kmem_cache *cache_from_obj(struct 
> kmem_cache *s, void *x)
>   return cachep;
>  }
>  
> -void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller)
> -{
> -	slab_free(s, virt_to_slab(x), x, NULL, &x, 1, caller);
> -}
> -
>  /**
>   * kmem_cache_free - Deallocate an object
>   * @s: The cache the allocation was from.
> @@ -4220,6 +4215,52 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>  }
>  EXPORT_SYMBOL(kmem_cache_free);
>  
> +static void free_large_kmalloc(struct folio *folio, void *object)
> +{
> + unsigned int order = folio_order(folio);
> +
> + if (WARN_ON_ONCE(order == 0))
> + pr_warn_once("object pointer: 0x%p\n", object);
> +
> + kmemleak_free(object);
> + kasan_kfree_large(object);
> + kmsan_kfree_large(object);
> +
> + mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
> +   -(PAGE_SIZE << order));
> + __free_pages(folio_page(folio, 0), order);
> +}
> +
> +/**
> + * kfree - free previously allocated memory
> + * @object: pointer returned by kmalloc() or kmem_cache_alloc()
> + *
> + * If @object is NULL, no operation is performed.

Re: [PATCH v2 03/21] KASAN: remove code paths guarded by CONFIG_SLAB

2023-12-04 Thread Hyeonggon Yoo
On Mon, Nov 20, 2023 at 07:34:14PM +0100, Vlastimil Babka wrote:
> With SLAB removed and SLUB the only remaining allocator, we can clean up
> some code that was depending on the choice.
> 
> Reviewed-by: Kees Cook 
> Reviewed-by: Marco Elver 
> Signed-off-by: Vlastimil Babka 
> ---
>  mm/kasan/common.c | 13 ++---
>  mm/kasan/kasan.h  |  3 +--
>  mm/kasan/quarantine.c |  7 ---
>  3 files changed, 3 insertions(+), 20 deletions(-)
> 
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 256930da578a..5d95219e69d7 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -153,10 +153,6 @@ void __kasan_poison_object_data(struct kmem_cache 
> *cache, void *object)
>   * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be
>   *accessed after being freed. We preassign tags for objects in these
>   *caches as well.
> - * 3. For SLAB allocator we can't preassign tags randomly since the freelist
> - *is stored as an array of indexes instead of a linked list. Assign tags
> - *based on objects indexes, so that objects that are next to each other
> - *get different tags.
>   */
>  static inline u8 assign_tag(struct kmem_cache *cache,
>   const void *object, bool init)
> @@ -171,17 +167,12 @@ static inline u8 assign_tag(struct kmem_cache *cache,
>   if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))
>   return init ? KASAN_TAG_KERNEL : kasan_random_tag();
>  
> - /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
> -#ifdef CONFIG_SLAB
> - /* For SLAB assign tags based on the object index in the freelist. */
> - return (u8)obj_to_index(cache, virt_to_slab(object), (void *)object);
> -#else
>   /*
> -  * For SLUB assign a random tag during slab creation, otherwise reuse
> +  * For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU,
> +  * assign a random tag during slab creation, otherwise reuse
>* the already assigned tag.
>*/
>   return init ? kasan_random_tag() : get_tag(object);
> -#endif
>  }
>  
>  void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 8b06bab5c406..eef50233640a 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -373,8 +373,7 @@ void kasan_set_track(struct kasan_track *track, gfp_t 
> flags);
>  void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t 
> flags);
>  void kasan_save_free_info(struct kmem_cache *cache, void *object);
>  
> -#if defined(CONFIG_KASAN_GENERIC) && \
> - (defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
> +#ifdef CONFIG_KASAN_GENERIC
>  bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
>  void kasan_quarantine_reduce(void);
>  void kasan_quarantine_remove_cache(struct kmem_cache *cache);
> diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
> index ca4529156735..138c57b836f2 100644
> --- a/mm/kasan/quarantine.c
> +++ b/mm/kasan/quarantine.c
> @@ -144,10 +144,6 @@ static void qlink_free(struct qlist_node *qlink, struct 
> kmem_cache *cache)
>  {
>   void *object = qlink_to_object(qlink, cache);
>   struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);
> - unsigned long flags;
> -
> - if (IS_ENABLED(CONFIG_SLAB))
> - local_irq_save(flags);
>  
>   /*
>* If init_on_free is enabled and KASAN's free metadata is stored in
> @@ -166,9 +162,6 @@ static void qlink_free(struct qlist_node *qlink, struct 
> kmem_cache *cache)
>   *(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREE;
>  
>   ___cache_free(cache, object, _THIS_IP_);
> -
> - if (IS_ENABLED(CONFIG_SLAB))
> - local_irq_restore(flags);
>  }
>  
>  static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache)

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hye...@gmail.com>

> 
> -- 
> 2.42.1
> 
> 



Re: [PATCH v2 02/21] mm/slab: remove CONFIG_SLAB from all Kconfig and Makefile

2023-12-04 Thread Hyeonggon Yoo
On Mon, Nov 20, 2023 at 07:34:13PM +0100, Vlastimil Babka wrote:
> Remove CONFIG_SLAB, CONFIG_DEBUG_SLAB, CONFIG_SLAB_DEPRECATED and
> everything in Kconfig files and mm/Makefile that depends on those. Since
> SLUB is the only remaining allocator, remove the allocator choice, make
> CONFIG_SLUB a "def_bool y" for now and remove all explicit dependencies
> on SLUB or SLAB as it's now always enabled. Make every option's verbose
> name and description refer to "the slab allocator" without referring to
> the specific implementation. Do not rename the CONFIG_ option names yet.
> 
> Everything under #ifdef CONFIG_SLAB, and mm/slab.c is now dead code, all
> code under #ifdef CONFIG_SLUB is now always compiled.
> 
> Reviewed-by: Kees Cook 
> Reviewed-by: Christoph Lameter 
> Signed-off-by: Vlastimil Babka 
> ---
>  arch/arm64/Kconfig |  2 +-
>  arch/s390/Kconfig  |  2 +-
>  arch/x86/Kconfig   |  2 +-
>  lib/Kconfig.debug  |  1 -
>  lib/Kconfig.kasan  | 11 +++--
>  lib/Kconfig.kfence |  2 +-
>  lib/Kconfig.kmsan  |  2 +-
>  mm/Kconfig | 68 
> --
>  mm/Kconfig.debug   | 16 -
>  mm/Makefile|  6 +
>  10 files changed, 28 insertions(+), 84 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7b071a00425d..325b7140b576 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -154,7 +154,7 @@ config ARM64
>   select HAVE_MOVE_PUD
>   select HAVE_PCI
>   select HAVE_ACPI_APEI if (ACPI && EFI)
> - select HAVE_ALIGNED_STRUCT_PAGE if SLUB
> + select HAVE_ALIGNED_STRUCT_PAGE
>   select HAVE_ARCH_AUDITSYSCALL
>   select HAVE_ARCH_BITREVERSE
>   select HAVE_ARCH_COMPILER_H
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index 3bec98d20283..afa42a6f2e09 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -146,7 +146,7 @@ config S390
>   select GENERIC_TIME_VSYSCALL
>   select GENERIC_VDSO_TIME_NS
>   select GENERIC_IOREMAP if PCI
> - select HAVE_ALIGNED_STRUCT_PAGE if SLUB
> + select HAVE_ALIGNED_STRUCT_PAGE
>   select HAVE_ARCH_AUDITSYSCALL
>   select HAVE_ARCH_JUMP_LABEL
>   select HAVE_ARCH_JUMP_LABEL_RELATIVE
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 3762f41bb092..3f460f334d4e 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -169,7 +169,7 @@ config X86
>   select HAS_IOPORT
>   select HAVE_ACPI_APEI   if ACPI
>   select HAVE_ACPI_APEI_NMI   if ACPI
> - select HAVE_ALIGNED_STRUCT_PAGE if SLUB
> + select HAVE_ALIGNED_STRUCT_PAGE
>   select HAVE_ARCH_AUDITSYSCALL
>   select HAVE_ARCH_HUGE_VMAP  if X86_64 || X86_PAE
>   select HAVE_ARCH_HUGE_VMALLOC   if X86_64
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index cc7d53d9dc01..e1765face106 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1985,7 +1985,6 @@ config FAULT_INJECTION
>  config FAILSLAB
>   bool "Fault-injection capability for kmalloc"
>   depends on FAULT_INJECTION
> - depends on SLAB || SLUB
>   help
> Provide fault-injection capability for kmalloc.
>  
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index fdca89c05745..97e1fdbb5910 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -37,7 +37,7 @@ menuconfig KASAN
>(HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS)) && \
>   CC_HAS_WORKING_NOSANITIZE_ADDRESS) || \
>  HAVE_ARCH_KASAN_HW_TAGS
> - depends on (SLUB && SYSFS && !SLUB_TINY) || (SLAB && !DEBUG_SLAB)
> + depends on SYSFS && !SLUB_TINY
>   select STACKDEPOT_ALWAYS_INIT
>   help
> Enables KASAN (Kernel Address Sanitizer) - a dynamic memory safety
> @@ -78,7 +78,7 @@ config KASAN_GENERIC
>   bool "Generic KASAN"
>   depends on HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC
>   depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
> - select SLUB_DEBUG if SLUB
> + select SLUB_DEBUG
>   select CONSTRUCTORS
>   help
> Enables Generic KASAN.
> @@ -89,13 +89,11 @@ config KASAN_GENERIC
> overhead of ~50% for dynamic allocations.
> The performance slowdown is ~x3.
>  
> -   (Incompatible with CONFIG_DEBUG_SLAB: the kernel does not boot.)
> -
>  config KASAN_SW_TAGS
>   bool "Software Tag-Based KASAN"
>   depends on HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS
>   depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
> - select SLUB_DEBUG if SLUB
> + select SLUB_DEBUG
>   select CONSTRUCTORS
>   help
> Enables Software Tag-Based KASAN.
> @@ -110,12 +108,9 @@ config KASAN_SW_TAGS
> May potentially introduce problems related to pointer casting and
> comparison, as it embeds a tag into the top byte of each pointer.
>  
> -   (Incompatible with CONFIG_DEBUG_SLAB: the kernel does not boot.)
> -
>  config KASAN_HW_TAGS
>   

Re: [PATCH v2 01/21] mm/slab, docs: switch mm-api docs generation from slab.c to slub.c

2023-12-04 Thread Hyeonggon Yoo
On Mon, Nov 20, 2023 at 07:34:12PM +0100, Vlastimil Babka wrote:
> The SLAB implementation is going to be removed, and mm-api.rst currently
> uses mm/slab.c to obtain kerneldocs for some API functions. Switch it to
> mm/slub.c and move the relevant kerneldocs of exported functions from
> one to the other. The rest of kerneldocs in slab.c is for static SLAB
> implementation-specific functions that don't have counterparts in slub.c
> and thus can be simply removed with the implementation.
> 
> Signed-off-by: Vlastimil Babka 
> ---
>  Documentation/core-api/mm-api.rst |  2 +-
>  mm/slab.c | 21 -
>  mm/slub.c | 21 +
>  3 files changed, 22 insertions(+), 22 deletions(-)
> 
> diff --git a/Documentation/core-api/mm-api.rst 
> b/Documentation/core-api/mm-api.rst
> index 2d091c873d1e..af8151db88b2 100644
> --- a/Documentation/core-api/mm-api.rst
> +++ b/Documentation/core-api/mm-api.rst
> @@ -37,7 +37,7 @@ The Slab Cache
>  .. kernel-doc:: include/linux/slab.h
> :internal:
>  
> -.. kernel-doc:: mm/slab.c
> +.. kernel-doc:: mm/slub.c
> :export:
>  
>  .. kernel-doc:: mm/slab_common.c
> diff --git a/mm/slab.c b/mm/slab.c
> index 9ad3d0f2d1a5..37efe3241f9c 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3491,19 +3491,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t 
> flags, size_t size,
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_bulk);
>  
> -/**
> - * kmem_cache_alloc_node - Allocate an object on the specified node
> - * @cachep: The cache to allocate from.
> - * @flags: See kmalloc().
> - * @nodeid: node number of the target node.
> - *
> - * Identical to kmem_cache_alloc but it will allocate memory on the given
> - * node, which can improve the performance for cpu bound structures.
> - *
> - * Fallback to other node is possible if __GFP_THISNODE is not set.
> - *
> - * Return: pointer to the new object or %NULL in case of error
> - */
>  void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int 
> nodeid)
>  {
>   void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, 
> cachep->object_size, _RET_IP_);
> @@ -3564,14 +3551,6 @@ void __kmem_cache_free(struct kmem_cache *cachep, void 
> *objp,
>   __do_kmem_cache_free(cachep, objp, caller);
>  }
>  
> -/**
> - * kmem_cache_free - Deallocate an object
> - * @cachep: The cache the allocation was from.
> - * @objp: The previously allocated object.
> - *
> - * Free an object which was previously allocated from this
> - * cache.
> - */
>  void kmem_cache_free(struct kmem_cache *cachep, void *objp)
>  {
>   cachep = cache_from_obj(cachep, objp);
> diff --git a/mm/slub.c b/mm/slub.c
> index 63d281dfacdb..3e01731783df 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3518,6 +3518,19 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, 
> gfp_t gfpflags,
>  caller, orig_size);
>  }
>  
> +/**
> + * kmem_cache_alloc_node - Allocate an object on the specified node
> + * @s: The cache to allocate from.
> + * @gfpflags: See kmalloc().
> + * @node: node number of the target node.
> + *
> + * Identical to kmem_cache_alloc but it will allocate memory on the given
> + * node, which can improve the performance for cpu bound structures.
> + *
> + * Fallback to other node is possible if __GFP_THISNODE is not set.
> + *
> + * Return: pointer to the new object or %NULL in case of error
> + */
>  void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
>  {
>   void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, 
> s->object_size);
> @@ -3822,6 +3835,14 @@ void __kmem_cache_free(struct kmem_cache *s, void *x, 
> unsigned long caller)
>   slab_free(s, virt_to_slab(x), x, NULL, , 1, caller);
>  }
>  
> +/**
> + * kmem_cache_free - Deallocate an object
> + * @s: The cache the allocation was from.
> + * @x: The previously allocated object.
> + *
> + * Free an object which was previously allocated from this
> + * cache.
> + */
>  void kmem_cache_free(struct kmem_cache *s, void *x)
>  {
>   s = cache_from_obj(s, x);
> 

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hye...@gmail.com>

> -- 
> 2.42.1
> 
> 



Re: [PATCH 5/5] pstore: inode: Use cleanup.h for struct pstore_private

2023-12-04 Thread Kees Cook
On Sat, Dec 02, 2023 at 10:27:06PM +, Al Viro wrote:
> On Sat, Dec 02, 2023 at 01:22:15PM -0800, Kees Cook wrote:
> 
> >  static void *pstore_ftrace_seq_start(struct seq_file *s, loff_t *pos)
> >  {
> > @@ -338,9 +339,8 @@ int pstore_mkfile(struct dentry *root, struct 
> > pstore_record *record)
> >  {
> > 	struct dentry		*dentry;
> > 	struct inode		*inode __free(iput) = NULL;
> > -	int			rc = 0;
> > 	char			name[PSTORE_NAMELEN];
> > -	struct pstore_private	*private, *pos;
> > +	struct pstore_private	*private __free(pstore_private) = NULL, *pos;
> > 	size_t			size = record->size + record->ecc_notice_size;
> >  
> > 	if (WARN_ON(!inode_is_locked(d_inode(root))))
> > @@ -356,7 +356,6 @@ int pstore_mkfile(struct dentry *root, struct 
> > pstore_record *record)
> > return -EEXIST;
> > }
> >  
> > -   rc = -ENOMEM;
> > inode = pstore_get_inode(root->d_sb);
> > if (!inode)
> > return -ENOMEM;
> > @@ -373,7 +372,7 @@ int pstore_mkfile(struct dentry *root, struct 
> > pstore_record *record)
> >  
> > dentry = d_alloc_name(root, name);
> > if (!dentry)
> > -   goto fail_private;
> > +   return -ENOMEM;
> >  
> > private->dentry = dentry;
> > private->record = record;
> > @@ -386,13 +385,9 @@ int pstore_mkfile(struct dentry *root, struct 
> > pstore_record *record)
> >  
> > d_add(dentry, no_free_ptr(inode));
> >  
> > -	list_add(&private->list, &records_list);
> > +	list_add(&(no_free_ptr(private))->list, &records_list);
> 
> That's really brittle.  It critically depends upon having no failure
> exits past the assignment to ->i_private; once you've done that,
> you have transferred the ownership of that thing to the inode
> (look at your ->evict_inode()).

I guess so, but it was already that way, and it isn't assignment to
i_private until everything else is "done", apart from adding to the
dentry and the pstore internal list.

> But you can't say
> inode->i_private = no_free_ptr(private);
> since you are using private past that point.

True. How about this reordering:

	if (record->time.tv_sec)
		inode_set_mtime_to_ts(inode,
				      inode_set_ctime_to_ts(inode, record->time));

	list_add(&private->list, &records_list);
	inode->i_private = no_free_ptr(private);

d_add(dentry, no_free_ptr(inode));


But none of this gets into how you'd like to see the iput handled. If
you'd prefer, I can just define a pstore_iput handler and you don't have
to look at it at all. :)

-Kees

-- 
Kees Cook
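[Editor's note: the __free()/no_free_ptr() ownership transfer discussed above can be modeled in userspace C (GCC/Clang extensions: cleanup attribute and statement expressions). The names are illustrative, not the kernel's cleanup.h API: the allocation is freed on any early return unless ownership is explicitly stolen first.]

```c
#include <assert.h>
#include <stdlib.h>

/* Cleanup handler: free whatever the guarded pointer still holds. */
static void auto_free(void *p)
{
	free(*(void **)p);
}
#define AUTO_FREE __attribute__((cleanup(auto_free)))

/* Steal the pointer: the cleanup handler then sees NULL and frees nothing. */
#define NO_FREE_PTR(p) ({ __typeof__(p) _val = (p); (p) = NULL; _val; })

static int *make_record(int v, int simulate_failure)
{
	int *rec AUTO_FREE = malloc(sizeof(*rec));

	if (!rec)
		return NULL;
	*rec = v;
	if (simulate_failure)
		return NULL;		/* rec is freed automatically here */
	return NO_FREE_PTR(rec);	/* ownership transferred to the caller */
}
```

Al Viro's brittleness point maps onto this model directly: once NO_FREE_PTR() has run, any failure exit after it would leak unless the new owner's teardown path frees the object.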



Re: [PATCH v2] netlink: Return unsigned value for nla_len()

2023-12-04 Thread Kees Cook
On Mon, Dec 04, 2023 at 10:22:25AM +0100, Nicolas Dichtel wrote:
> Le 02/12/2023 à 21:25, Kees Cook a écrit :
> > The return value from nla_len() is never expected to be negative, and can
> > never be more than struct nlattr::nla_len (a u16). Adjust the prototype
> > on the function. This will let GCC's value range optimization passes
> > know that the return can never be negative, and can never be larger than
> > u16. As recently discussed[1], this silences the following warning in
> > GCC 12+:
> > 
> > net/wireless/nl80211.c: In function 'nl80211_set_cqm_rssi.isra':
> > net/wireless/nl80211.c:12892:17: warning: 'memcpy' specified bound 
> > 18446744073709551615 exceeds maximum object size 9223372036854775807 
> > [-Wstringop-overflow=]
> > 12892 | memcpy(cqm_config->rssi_thresholds, thresholds,
> >   | ^~~
> > 12893 |flex_array_size(cqm_config, rssi_thresholds,
> >   |
> > 12894 |n_thresholds));
> >   |~~
> > 
> > A future change would be to clamp the subtraction to make sure it never
> > wraps around if nla_len is somehow less than NLA_HDRLEN, which would
> > have the additional benefit of being defensive in the face of nlattr
> > corruption or logic errors.
> > 
> > Reported-by: kernel test robot 
> > Closes: 
> > https://lore.kernel.org/oe-kbuild-all/202311090752.hwcjwahl-...@intel.com/ 
> > [1]
> > Cc: Jakub Kicinski 
> > Cc: "David S. Miller" 
> > Cc: Eric Dumazet 
> > Cc: Paolo Abeni 
> > Cc: Johannes Berg 
> > Cc: Jeff Johnson 
> > Cc: Michael Walle 
> > Cc: Max Schulze 
> > Cc: net...@vger.kernel.org
> > Cc: linux-wirel...@vger.kernel.org
> > Signed-off-by: Kees Cook 
> > ---
> >  v2:
> >  - do not clamp return value (kuba)
> >  - adjust NLA_HDRLEN to be u16 also
> >  v1: https://lore.kernel.org/all/20231130200058.work.520-k...@kernel.org/
> > ---
> >  include/net/netlink.h| 2 +-
> >  include/uapi/linux/netlink.h | 2 +-
> >  2 files changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/net/netlink.h b/include/net/netlink.h
> > index 83bdf787aeee..7678a596a86b 100644
> > --- a/include/net/netlink.h
> > +++ b/include/net/netlink.h
> > @@ -1200,7 +1200,7 @@ static inline void *nla_data(const struct nlattr *nla)
> >   * nla_len - length of payload
> >   * @nla: netlink attribute
> >   */
> > -static inline int nla_len(const struct nlattr *nla)
> > +static inline u16 nla_len(const struct nlattr *nla)
> >  {
> > return nla->nla_len - NLA_HDRLEN;
> >  }
> > diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h
> > index f87aaf28a649..270feed9fd63 100644
> > --- a/include/uapi/linux/netlink.h
> > +++ b/include/uapi/linux/netlink.h
> > @@ -247,7 +247,7 @@ struct nlattr {
> >  
> >  #define NLA_ALIGNTO		4
> >  #define NLA_ALIGN(len)		(((len) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))
> > -#define NLA_HDRLEN		((int) NLA_ALIGN(sizeof(struct nlattr)))
> > +#define NLA_HDRLEN		((__u16) NLA_ALIGN(sizeof(struct nlattr)))
> I wonder if this may break the compilation of some userspace tools with errors
> like comparing signed and unsigned values.

Should I drop this part, then?

-- 
Kees Cook



Re: [PATCH] md/md-multipath: Convert "struct mpconf" to flexible array

2023-12-04 Thread Kees Cook
On Sun, Dec 03, 2023 at 08:48:06PM +0100, Christophe JAILLET wrote:
> The 'multipaths' field of 'struct mpconf' can be declared as a flexible
> array.
> 
> The advantages are:
> - 1 less indirection when accessing the 'multipaths' array
> - save 1 pointer in the structure
> - improve memory usage
> - give the opportunity to use __counted_by() for additional safety
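As an illustration of the pattern under review (the names below are simplified stand-ins, not the actual md structures), a flexible-array tail allocated in a single shot looks roughly like:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for "struct mpconf" after such a conversion: the trailing
 * flexible array replaces a separately allocated pointer. In-kernel
 * the member would also carry __counted_by(raid_disks). */
struct mpconf_like {
	int raid_disks;
	struct { int dev; } multipaths[];
};

static struct mpconf_like *conf_alloc(int n)
{
	/* One allocation covers header and array; the kernel would use
	 * kzalloc(struct_size(conf, multipaths, n), GFP_KERNEL). */
	struct mpconf_like *conf =
		calloc(1, sizeof(*conf) + (size_t)n * sizeof(conf->multipaths[0]));

	if (conf)
		conf->raid_disks = n;
	return conf;
}
```

This is what buys the "1 less indirection" above: `conf->multipaths[i]` is a direct offset from `conf` rather than a pointer chase.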
> 
> Signed-off-by: Christophe JAILLET 

This looks like a really nice conversion. I haven't run-tested this, but
it reads correct to me.

Reviewed-by: Kees Cook 

-- 
Kees Cook



Re: [PATCH v2 12/18] sysctl: treewide: constify the ctl_table argument of handlers

2023-12-04 Thread Kees Cook
On Mon, Dec 04, 2023 at 08:52:25AM +0100, Thomas Weißschuh wrote:
> In a future commit the sysctl core will only use
> "const struct ctl_table". As a preparation for that adapt all the proc
> handlers.
> 
> Signed-off-by: Thomas Weißschuh 

Reviewed-by: Kees Cook  # security/*

-- 
Kees Cook



Re: [PATCH v2 08/18] stackleak: don't modify ctl_table argument

2023-12-04 Thread Kees Cook
On Mon, Dec 04, 2023 at 08:52:21AM +0100, Thomas Weißschuh wrote:
> In a future commit the proc_handlers will change to
> "const struct ctl_table".
> As a preparation for that adapt the logic to work with a temporary
> variable, similar to how it is done in other parts of the kernel.
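The temporary-variable pattern described above can be sketched like this (a simplified model with made-up names, not the actual stackleak or sysctl code):

```c
#include <assert.h>

/* Simplified stand-in for struct ctl_table. */
struct ctl_table_model {
	void *data;
	int maxlen;
};

/* The handler receives a const table and mutates only a local copy,
 * so the caller's (soon-to-be const) table is never written to. */
static int handler_model(const struct ctl_table_model *table, int new_len,
			 int *result_maxlen)
{
	struct ctl_table_model tmp = *table;

	tmp.maxlen = new_len;
	*result_maxlen = tmp.maxlen;
	return 0;
}
```

The copy is what makes the `const` qualifier on the argument honest: the shared table registered with the core stays untouched.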
> 
> Signed-off-by: Thomas Weißschuh 

Looks good -- thanks for catching the table-modification cases.

Acked-by: Kees Cook 

-- 
Kees Cook



Re: [PATCH v2 05/18] seccomp: constify ctl_table arguments of utility functions

2023-12-04 Thread Kees Cook
On Mon, Dec 04, 2023 at 08:52:18AM +0100, Thomas Weißschuh wrote:
> In a future commit the proc_handlers themselves will change to
> "const struct ctl_table". As a preparation for that adapt the internal
> helpers.
> 
> Signed-off-by: Thomas Weißschuh 

Acked-by: Kees Cook 

-- 
Kees Cook



Re: [PATCH v2 0/2] qnx4: Avoid confusing compiler about buffer lengths

2023-12-04 Thread Kees Cook
On Tue, Dec 05, 2023 at 01:46:27AM +1000, Ronald Monthero wrote:
> Cheers Kees,
> BR,
> ronald

Is this a "Tested-by"? :)

-Kees

> 
> 
> On Fri, Dec 1, 2023 at 6:51 AM Kees Cook  wrote:
> >
> > Hi,
> >
> > This attempts to fix the issue Ronald Monthero found[1]. Avoids using a
> > too-short struct buffer when reading the string, by using the existing
> > struct union.
> >
> > -Kees
> >
> > [1] 
> > https://lore.kernel.org/lkml/20231112095353.579855-1-debug.pengui...@gmail.com/
> >
> > v2:
> >  - Use BUILD_BUG_ON() instead of _Static_assert()
> > v1: https://lore.kernel.org/all/20231118032638.work.955-k...@kernel.org/
> >
> > Kees Cook (2):
> >   qnx4: Extract dir entry filename processing into helper
> >   qnx4: Use get_directory_fname() in qnx4_match()
> >
> >  fs/qnx4/dir.c   | 52 ++
> >  fs/qnx4/namei.c | 29 +---
> >  fs/qnx4/qnx4.h  | 60 +
> >  3 files changed, 78 insertions(+), 63 deletions(-)
> >
> > --
> > 2.34.1
> >

-- 
Kees Cook



Re: [PATCH v2] scsi: zfcp: Replace strlcpy() with strscpy()

2023-12-04 Thread Benjamin Block
Hello Kees, Martin, James,

On Thu, Nov 30, 2023 at 12:41:00PM -0800, Kees Cook wrote:
> strlcpy() reads the entire source buffer first. This read may exceed
> the destination size limit. This is both inefficient and can lead
> to linear read overflows if a source string is not NUL-terminated[1].
> Additionally, it returns the size of the source string, not the
> resulting size of the destination string. In an effort to remove strlcpy()
> completely[2], replace strlcpy() here with strscpy().
> 
> Overflow should be impossible here, but actually check for buffer sizes
> being identical with BUILD_BUG_ON(), and include a run-time check as
> well.
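For context, the behavioral difference being relied on here can be modeled in userspace roughly as follows (this is a sketch of `strscpy()` semantics, not the kernel implementation; `E2BIG_ERR` stands in for the kernel's `-E2BIG`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define E2BIG_ERR (-7L)  /* stand-in for the kernel's -E2BIG */

/* Model: copy at most size-1 bytes, always NUL-terminate, never read
 * src beyond what fits, return bytes copied or E2BIG_ERR on truncation. */
static long strscpy_model(char *dst, const char *src, size_t size)
{
	size_t i;

	if (size == 0)
		return E2BIG_ERR;
	for (i = 0; i < size - 1 && src[i] != '\0'; i++)
		dst[i] = src[i];
	dst[i] = '\0';
	return src[i] == '\0' ? (long)i : E2BIG_ERR;
}
```

Hence the `WARN_ON(len < 0)` above: a negative return is the truncation signal, whereas `strlcpy()` returned the (possibly larger) source length.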
> 
> Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strlcpy [1]
> Link: https://github.com/KSPP/linux/issues/89 [2]

> ---
>  drivers/s390/scsi/zfcp_fc.c | 15 +--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
> index 4f0d0e55f0d4..d6516ab00437 100644
> --- a/drivers/s390/scsi/zfcp_fc.c
> +++ b/drivers/s390/scsi/zfcp_fc.c
> @@ -900,8 +900,19 @@ static void zfcp_fc_rspn(struct zfcp_adapter *adapter,
> 	zfcp_fc_ct_ns_init(&rspn_req->ct_hdr, FC_NS_RSPN_ID,
> 			   FC_SYMBOLIC_NAME_SIZE);
>   hton24(rspn_req->rspn.fr_fid.fp_fid, fc_host_port_id(shost));
> - len = strlcpy(rspn_req->rspn.fr_name, fc_host_symbolic_name(shost),
> -   FC_SYMBOLIC_NAME_SIZE);
> +
> + BUILD_BUG_ON(sizeof(rspn_req->name) !=
> + sizeof(fc_host_symbolic_name(shost)));
> + BUILD_BUG_ON(sizeof(rspn_req->name) !=
> + type_max(typeof(rspn_req->rspn.fr_name_len)) + 1);
> + len = strscpy(rspn_req->name, fc_host_symbolic_name(shost),
> +   sizeof(rspn_req->name));
> + /*
> +  * It should be impossible for this to truncate (see BUILD_BUG_ON()
> +  * above), but be robust anyway.
> +  */
> + if (WARN_ON(len < 0))
> + len = sizeof(rspn_req->name) - 1;

Looks good to me.


Acked-by: Benjamin Block 


>   rspn_req->rspn.fr_name_len = len;

Martin, James, can you please pick this up for the v6.8 queue?


-- 
Best Regards, Benjamin Block/Linux on IBM Z Kernel Development
IBM Deutschland Research & Development GmbH/   https://www.ibm.com/privacy
Vors. Aufs.-R.: Gregor Pillen / Geschäftsführung: David Faller
Sitz der Ges.: Böblingen /Registergericht: AmtsG Stuttgart, HRB 243294



Re: [PATCH v2 0/2] qnx4: Avoid confusing compiler about buffer lengths

2023-12-04 Thread Ronald Monthero
Cheers Kees,
BR,
ronald


On Fri, Dec 1, 2023 at 6:51 AM Kees Cook  wrote:
>
> Hi,
>
> This attempts to fix the issue Ronald Monthero found[1]. Avoids using a
> too-short struct buffer when reading the string, by using the existing
> struct union.
>
> -Kees
>
> [1] 
> https://lore.kernel.org/lkml/20231112095353.579855-1-debug.pengui...@gmail.com/
>
> v2:
>  - Use BUILD_BUG_ON() instead of _Static_assert()
> v1: https://lore.kernel.org/all/20231118032638.work.955-k...@kernel.org/
>
> Kees Cook (2):
>   qnx4: Extract dir entry filename processing into helper
>   qnx4: Use get_directory_fname() in qnx4_match()
>
>  fs/qnx4/dir.c   | 52 ++
>  fs/qnx4/namei.c | 29 +---
>  fs/qnx4/qnx4.h  | 60 +
>  3 files changed, 78 insertions(+), 63 deletions(-)
>
> --
> 2.34.1
>



Re: [PATCH v2] netlink: Return unsigned value for nla_len()

2023-12-04 Thread Nicolas Dichtel
Le 02/12/2023 à 21:25, Kees Cook a écrit :
> The return value from nla_len() is never expected to be negative, and can
> never be more than struct nlattr::nla_len (a u16). Adjust the prototype
> on the function. This will let GCC's value range optimization passes
> know that the return can never be negative, and can never be larger than
> u16. As recently discussed[1], this silences the following warning in
> GCC 12+:
> 
> net/wireless/nl80211.c: In function 'nl80211_set_cqm_rssi.isra':
> net/wireless/nl80211.c:12892:17: warning: 'memcpy' specified bound 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
> 12892 |                 memcpy(cqm_config->rssi_thresholds, thresholds,
>       |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 12893 |                        flex_array_size(cqm_config, rssi_thresholds,
>       |                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 12894 |                                        n_thresholds));
>       |                                        ~~~~~~~~~~~~~~
> 
> A future change would be to clamp the subtraction to make sure it never
> wraps around if nla_len is somehow less than NLA_HDRLEN, which would
> have the additional benefit of being defensive in the face of nlattr
> corruption or logic errors.
> 
> Reported-by: kernel test robot 
> Closes: https://lore.kernel.org/oe-kbuild-all/202311090752.hwcjwahl-...@intel.com/ [1]
> Cc: Jakub Kicinski 
> Cc: "David S. Miller" 
> Cc: Eric Dumazet 
> Cc: Paolo Abeni 
> Cc: Johannes Berg 
> Cc: Jeff Johnson 
> Cc: Michael Walle 
> Cc: Max Schulze 
> Cc: net...@vger.kernel.org
> Cc: linux-wirel...@vger.kernel.org
> Signed-off-by: Kees Cook 
> ---
>  v2:
>  - do not clamp return value (kuba)
>  - adjust NLA_HDRLEN to be u16 also
>  v1: https://lore.kernel.org/all/20231130200058.work.520-k...@kernel.org/
> ---
>  include/net/netlink.h| 2 +-
>  include/uapi/linux/netlink.h | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/net/netlink.h b/include/net/netlink.h
> index 83bdf787aeee..7678a596a86b 100644
> --- a/include/net/netlink.h
> +++ b/include/net/netlink.h
> @@ -1200,7 +1200,7 @@ static inline void *nla_data(const struct nlattr *nla)
>   * nla_len - length of payload
>   * @nla: netlink attribute
>   */
> -static inline int nla_len(const struct nlattr *nla)
> +static inline u16 nla_len(const struct nlattr *nla)
>  {
>   return nla->nla_len - NLA_HDRLEN;
>  }
> diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h
> index f87aaf28a649..270feed9fd63 100644
> --- a/include/uapi/linux/netlink.h
> +++ b/include/uapi/linux/netlink.h
> @@ -247,7 +247,7 @@ struct nlattr {
>  
>  #define NLA_ALIGNTO		4
>  #define NLA_ALIGN(len)		(((len) + NLA_ALIGNTO - 1) & ~(NLA_ALIGNTO - 1))
> -#define NLA_HDRLEN		((int) NLA_ALIGN(sizeof(struct nlattr)))
> +#define NLA_HDRLEN		((__u16) NLA_ALIGN(sizeof(struct nlattr)))
I wonder if this may break the compilation of some userspace tools with errors
like comparing signed and unsigned values.


Regards,
Nicolas



Re: [PATCH RFC 0/7] sysctl: constify sysctl ctl_tables

2023-12-04 Thread Joel Granados
Hey

I see that you sent a V2. I'll try to get to it at the end of the week.

On Sun, Dec 03, 2023 at 04:37:01PM +0100, Thomas Weißschuh wrote:
> Hi Joel,
> 
> On 2023-12-01 17:31:20+0100, Joel Granados wrote:
> > Hey Thomas.
> > 
> > Thx for the clarifications. I did more of a deep dive into your set and
> > have additional comments (in line). I think const-ing all this is a good
> > approach. The way forward is to be able to see the entire patch set of
> > changes in a V1 or a shared repo somewhere to have a better picture of
> > what is going on. By the "entire patchset" I mean all the changes that
> > you described in the "full process".
> 
> All the changes will be a lot. I don't think the incremental value of
> migrating all proc_handlers, versus the work involved, is useful for the discussion.
> I can however write up my proposed changes for the sysctl core properly
> and submit them as part of the next revision.
Looking forward to seeing them in V2

> 
> > On Tue, Nov 28, 2023 at 09:18:30AM +0100, Thomas Weißschuh wrote:
> > > Hi Joel,
> > > 
> > > On 2023-11-27 11:13:23+0100, Joel Granados wrote:
> > > > In general I would like to see more clarity with the motivation and I
> > > > would also expect some system testing. My comments inline:

<--- snip --->

> > is all sysctl code and cannot be chunked up because of dependencies,
> > then it should be ok to do it in one go.
> > 
> > > > > * Migrate all other sysctl handlers to proc_handler_new.
> > > > > * Drop the old proc_handler_field.
> > > > > * Fix the sysctl core to not modify the tables anymore.
> > > > > * Adapt public sysctl APIs to take "const struct ctl_table *".
> > > > > * Teach checkpatch.pl to warn on non-const "struct ctl_table"
> > > > >   definitions.
> 
> > Have you considered how to ignore the cases where the ctl_tables are
> > supposed to be non-const when they are defined (like in the network
> > code that we were discussing earlier)?
> 
> As it would be a checkpatch warning, it can be ignored while writing the
> patch, and it won't trigger afterwards.
I mentioned coccinelle because it is able to identify const vs non-const uses
of the ctl_table and only warn in the cases where it makes sense. This
would remove false negatives when pushing patches through.

> 
> > > > > * Migrate definitions of "struct ctl_table" to "const" where 
> > > > > applicable.
> > These migrations are treewide and are usually reviewed by a wider
> > audience. You might need to chunk it up to make the review more palatable
> > for the other maintainers.
> 
> Ack.
> 
> > > > >  
> > > > > 
> > > > > Notes:
> > > > > 
> > > > > Just casting the function pointers around would trigger
> > > > > CFI (control flow integrity) warnings.
> > > > > 
> > > > > The name of the new handler "proc_handler_new" is a bit too long 
> > > > > messing
> > > > > up the alignment of the table definitions.
> > > > > Maybe "proc_handler2" or "proc_handler_c" for (const) would be better.
> > > 
> > > > indeed the name does not say much. "_new" loses its meaning quite fast
> > > > :)
> > > 
> > > Hopefully somebody comes up with a better name!
> 
> > I would like to avoid this all together and just do add the const to the
> > existing "proc_handler"
> 
> Ack.
> 
> > > 
> > > > In my experience these tree wide modifications are quite tricky. Have 
> > > > you
> > > > run any tests to see that everything is as it was? sysctl selftests and
> > > > 0-day come to mind.
> > > 
> > > I managed to miss one change in my initial submission:
> > > With the hunk below selftests and typing emails work.
> > > 
> > > --- a/fs/proc/proc_sysctl.c
> > > +++ b/fs/proc/proc_sysctl.c
> > > @@ -1151,7 +1151,7 @@ static int sysctl_check_table(const char *path, struct ctl_table_header *header)
> > > 			else
> > > 				err |= sysctl_check_table_array(path, entry);
> > > 		}
> > > -		if (!entry->proc_handler)
> > > +		if (!entry->proc_handler && !entry->proc_handler_new)
> > > 			err |= sysctl_err(path, entry, "No proc_handler");
> > >  
> > > 		if ((entry->mode & (S_IRUGO|S_IWUGO)) != entry->mode)
> > > 
> > > > [..]
> > > 
> > > [0] 43a7206b0963 ("driver core: class: make class_register() take a const 
> > > *")
> > > [1] 
> > > https://lore.kernel.org/lkml/20230930050033.41174-1-wedso...@gmail.com/
> 
> Thanks for the feedback!
> 
> Thomas

-- 

Joel Granados




Re: [PATCH][next] wifi: mt76: mt7996: Use DECLARE_FLEX_ARRAY() and fix -Warray-bounds warnings

2023-12-04 Thread Kalle Valo
Kees Cook  writes:

> On Thu, Nov 16, 2023 at 02:57:24PM -0600, Gustavo A. R. Silva wrote:
>
>> Transform zero-length arrays `adm_stat` and `msdu_cnt` into proper
>> flexible-array members in anonymous union in `struct
>> mt7996_mcu_all_sta_info_event` via the DECLARE_FLEX_ARRAY()
>> helper; and fix multiple -Warray-bounds warnings:
>> 
>> drivers/net/wireless/mediatek/mt76/mt7996/mcu.c:483:61: warning:
>> array subscript  is outside array bounds of 'struct
>> [0]' [-Warray-bounds=]
>> drivers/net/wireless/mediatek/mt76/mt7996/mcu.c:490:58: warning:
>> array subscript  is outside array bounds of 'struct
>> [0]' [-Warray-bounds=]
>> drivers/net/wireless/mediatek/mt76/mt7996/mcu.c:492:58: warning:
>> array subscript  is outside array bounds of 'struct
>> [0]' [-Warray-bounds=]
>> drivers/net/wireless/mediatek/mt76/mt7996/mcu.c:469:61: warning:
>> array subscript  is outside array bounds of 'struct
>> [0]' [-Warray-bounds=]
>> drivers/net/wireless/mediatek/mt76/mt7996/mcu.c:477:66: warning:
>> array subscript  is outside array bounds of 'struct
>> [0]' [-Warray-bounds=]
>> drivers/net/wireless/mediatek/mt76/mt7996/mcu.c:479:66: warning:
>> array subscript  is outside array bounds of 'struct
>> [0]' [-Warray-bounds=]
>> 
>> This results in no differences in binary output, helps with the ongoing
>> efforts to globally enable -Warray-bounds.
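For readers unfamiliar with the helper: a flexible array member is not directly allowed inside a union, so the kernel wraps it in a struct next to a zero-sized placeholder. A userspace approximation (not the actual kernel macro, and with stand-in member types and names):

```c
#include <assert.h>
#include <stddef.h>

/* Approximation of DECLARE_FLEX_ARRAY(): wrap the flexible array in an
 * anonymous struct with a zero-sized leading member so it is legal as a
 * union member (relies on GNU C extensions, as the kernel does). */
#define DECLARE_FLEX_ARRAY_MODEL(TYPE, NAME) \
	struct { \
		struct { } __empty_ ## NAME; \
		TYPE NAME[]; \
	}

struct sta_event_like {
	unsigned short tag;
	union {
		DECLARE_FLEX_ARRAY_MODEL(unsigned int, adm_stat);
		DECLARE_FLEX_ARRAY_MODEL(unsigned long, msdu_cnt);
	};
};
```

Because both arrays are proper flexible-array members rather than zero-length arrays, the compiler's bounds analysis no longer flags accesses to them, which is what silences the -Warray-bounds warnings quoted above.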
>> 
>> Signed-off-by: Gustavo A. R. Silva 
>
> *thread ping*
>
> Can wireless folks please pick this patch up?

Ok, I assigned this to me on patchwork now. Felix, please let me know if
you prefer to take this to your tree instead.

-- 
https://patchwork.kernel.org/project/linux-wireless/list/

https://wireless.wiki.kernel.org/en/developers/documentation/submittingpatches