[linux-yocto] [PATCH v5.15-rt 00/11] Uprev from -rt61 to -rt72

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

Bruce,

So, I knew that the i915 fix we dealt with on v6.1 was actually present in
the v5.15-rt branch and not just floating out in space.

So I figured I'd help out and also look at getting this older version
caught up with where the linux-stable-rt repository is today.

We are currently at -rt61 and upstream is at -rt72, so it looks like:

linux-stable-rt$ git log --oneline --reverse --no-merges ^v5.15.96-rt61 ^v5.15.141 v5.15.141-rt72
22006998ccd1 Linux 5.15.107-rt62
03aa894822a7 Linux 5.15.111-rt63
05e341da6364 Linux 5.15.113-rt64
b850a37ed7e1 Linux 5.15.119-rt65
bc88aa9e2737 Linux 5.15.125-rt66
9157353f3ecf Linux 5.15.129-rt67
d535006be892 Linux 5.15.132-rt68
27b3564da988 Linux 5.15.133-rt69
f1bd52382dce io-mapping: don't disable preempt on RT in io_mapping_map_atomic_wc().
47364f671cbe locking/rwbase: Mitigate indefinite writer starvation
e94601d32f4d Revert "softirq: Let ksoftirqd do its job"
ccf6bfd49a8a debugobject: Ensure pool refill (again)
1992720dff25 debugobjects,locking: Annotate debug_object_fill_pool() wait type violation
20616d2c54d5 sched: avoid false lockdep splat in put_task_struct()
ef42a60e55bf locking/seqlock: Do the lockdep annotation before locking in do_write_seqcount_begin_nested()
f9fec545dea4 mm/page_alloc: Use write_seqlock_irqsave() instead write_seqlock() + local_irq_save().
87572f0f6aa8 bpf: Remove in_atomic() from bpf_link_put().
71c09b1d6b07 posix-timers: Ensure timer ID search-loop limit is valid
e6eb0105c206 drm/i915: Do not disable preemption for resets
594a4d80f59c Linux 5.15.133-rt70
0508c80fa70d Linux 5.15.137-rt71
e9e280348657 Linux 5.15.141-rt72

It was all pretty seamless, except for two commits:

1) We've already got ef42a60e55bf via linux-stable commit 07b569051f6e

2) f9fec545dea4 is all about the printk_deferred stuff.  I had to stare
   at it for a while to figure out what was going on, and look at the
   history.  It turns out that we (Yocto) gutted all of the
   printk_deferred stuff:

   Author: John Ogness 
   AuthorDate: Mon Nov 30 01:42:08 2020 +0106
   Commit: Bruce Ashfield 
   CommitDate: Thu Nov 4 16:06:31 2021 -0400

printk: remove deferred printing

Since printing occurs either atomically or from the printing
kthread, there is no need for any deferring or tracking possible
recursion paths. Remove all printk defer functions and context
tracking.

Signed-off-by: John Ogness 
Signed-off-by: Thomas Gleixner 

...and then a stable cleanup fix from Kevin in 63a865cbbc8a to remove
the new instance added by linux-stable commit 0431e1323f42 [5.15.109].

I'm not sure where the Ogness commit (the printk_deferred() removal)
came from, but it largely does not matter, as I'm not going to restore
it on this old release.  Leave well enough alone!

With that knowledge, the f9fec545dea4 page_alloc commit in the list
above essentially becomes a trivial no-op.  But I left it in anyway as
a self-documenting commit, with an appropriate annotation.

I didn't bother with an individual commit per version tag; I just moved
localversion-rt forward from 61 to 72 in a single commit.

Testing:
   -sanity boot test on x86_64 (old AMD quad core)
   -sanity boot test on arm64 (qemu)
   -check /proc/version for -rt72
   -check that dmesg is clean
   -run some other random commands
   -test merge out to all BSPs (no conflicts)

If it makes life easier for you, I can push a branch to contrib.

Paul
---

Joseph Salisbury (1):
  Linux 5.15.141-rt72

Paolo Abeni (1):
  Revert "softirq: Let ksoftirqd do its job"

Peter Zijlstra (1):
  debugobjects,locking: Annotate debug_object_fill_pool() wait type
violation

Sebastian Andrzej Siewior (4):
  io-mapping: don't disable preempt on RT in io_mapping_map_atomic_wc().
  locking/rwbase: Mitigate indefinite writer starvation
  mm/page_alloc: Use write_seqlock_irqsave() instead write_seqlock() +
local_irq_save().
  bpf: Remove in_atomic() from bpf_link_put().

Thomas Gleixner (2):
  debugobject: Ensure pool refill (again)
  posix-timers: Ensure timer ID search-loop limit is valid

Tvrtko Ursulin (1):
  drm/i915: Do not disable preemption for resets

Wander Lairson Costa (1):
  sched: avoid false lockdep splat in put_task_struct()

 drivers/gpu/drm/i915/gt/intel_reset.c | 12 +-
 include/linux/io-mapping.h            | 20 +
 include/linux/lockdep.h               | 14 
 include/linux/lockdep_types.h         |  1 +
 include/linux/sched/signal.h          |  2 +-
 include/linux/sched/task.h            | 18 +++
 kernel/bpf/syscall.c                  | 29 +---
 kernel/locking/lockdep.c              | 28 +--
 kernel/locking/rwbase_rt.c            |  9 
 kernel/softirq.c                      | 22 ++
 kernel/time/posix-timers.c            | 31 +++---
 lib/debugobjects.c                    | 32 ++-
 

[linux-yocto] [PATCH v5.15-rt 10/11] drm/i915: Do not disable preemption for resets

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Tvrtko Ursulin 

commit e6eb0105c20694af5642c77bf63c9509e4f9bb28 in linux-stable-rt

[commit 40cd2835ced288789a685aa4aa7bc04b492dcd45 in linux-rt-devel]

Commit ade8a0f59844 ("drm/i915: Make all GPU resets atomic") added a
preempt disable section over the hardware reset callback to prepare the
driver for being able to reset from atomic contexts.

In retrospect I can see that the work item at the time was to remove
the struct mutex from the reset path. The code base also briefly
entertained the idea of doing the reset under stop_machine in order to
serialize userspace mmap and a temporary glitch in the fence registers
(see eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on
struct_mutex")), but that never materialized and was soon removed in
2caffbf11762 ("drm/i915: Revoke mmaps and prevent access to fence
registers across reset") and replaced with a SRCU based solution.

As such, as far as I can see, today we still have a requirement that
resets must not sleep (invoked from submission tasklets), but no need to
support invoking them from a truly atomic context.

Given that the preemption section is problematic on RT kernels, since the
uncore lock becomes a sleeping lock and so is invalid in such a section,
let's try to remove it. The potential downside is that our short waits on
the GPU to complete the reset may get extended if CPU scheduling
interferes, but in practice that probably isn't a deal breaker.

In terms of mechanics, since the preemption disabled block is being
removed, we just need to replace a few of the wait_for_atomic macros with
busy-looping versions which will work (and not complain) when called from
non-atomic sections.
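The replacement semantics can be sketched in plain userspace C. This is a
hypothetical tick-based model (wait_for_reset and reset_complete are
invented names; the real i915 macros poll wall-clock microseconds):
nothing in the polling loop sleeps, so it is valid from both atomic and
preemptible callers.

```c
#include <assert.h>
#include <stdbool.h>

static int poll_ticks;   /* stands in for elapsed polling time */
static int ready_at;     /* tick at which the "hardware" signals done */

/* Models reading a status register: becomes true once enough polls pass. */
static bool reset_complete(void)
{
    return ++poll_ticks >= ready_at;
}

/* Returns 0 on success, -ETIMEDOUT (-110) when the budget is exhausted.
 * Nothing here sleeps; with the patch applied, the caller simply
 * tolerates being preempted between polls instead of forbidding it. */
static int wait_for_reset(int budget_ticks)
{
    for (int t = 0; t < budget_ticks; t++) {
        if (reset_complete())
            return 0;
    }
    return -110;
}
```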

Signed-off-by: Tvrtko Ursulin 
Cc: Chris Wilson 
Cc: Paul Gortmaker 
Cc: Sebastian Andrzej Siewior 
Acked-by: Sebastian Andrzej Siewior 
Link: https://lore.kernel.org/r/20230705093025.3689748-1-tvrtko.ursu...@linux.intel.com
Signed-off-by: Sebastian Andrzej Siewior 
[PG: backport from v6.4-rt; minor context fixup caused by b7d70b8b06ed]
Signed-off-by: Paul Gortmaker 
Signed-off-by: Clark Williams 
(cherry picked from commit 1a80b572f783a15327663bf9e7d71163976e8d6a
v6.1-rt)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 drivers/gpu/drm/i915/gt/intel_reset.c | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 9dc244b70ce4..06ab730dc9a8 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -167,13 +167,13 @@ static int i915_do_reset(struct intel_gt *gt,
/* Assert reset for at least 20 usec, and wait for acknowledgement. */
pci_write_config_byte(pdev, I915_GDRST, GRDOM_RESET_ENABLE);
udelay(50);
-   err = wait_for_atomic(i915_in_reset(pdev), 50);
+   err = _wait_for_atomic(i915_in_reset(pdev), 50, 0);
 
/* Clear the reset request. */
pci_write_config_byte(pdev, I915_GDRST, 0);
udelay(50);
if (!err)
-   err = wait_for_atomic(!i915_in_reset(pdev), 50);
+   err = _wait_for_atomic(!i915_in_reset(pdev), 50, 0);
 
return err;
 }
@@ -193,7 +193,7 @@ static int g33_do_reset(struct intel_gt *gt,
struct pci_dev *pdev = to_pci_dev(gt->i915->drm.dev);
 
pci_write_config_byte(pdev, I915_GDRST, GRDOM_RESET_ENABLE);
-   return wait_for_atomic(g4x_reset_complete(pdev), 50);
+   return _wait_for_atomic(g4x_reset_complete(pdev), 50, 0);
 }
 
 static int g4x_do_reset(struct intel_gt *gt,
@@ -210,7 +210,7 @@ static int g4x_do_reset(struct intel_gt *gt,
 
pci_write_config_byte(pdev, I915_GDRST,
  GRDOM_MEDIA | GRDOM_RESET_ENABLE);
-   ret =  wait_for_atomic(g4x_reset_complete(pdev), 50);
+   ret =  _wait_for_atomic(g4x_reset_complete(pdev), 50, 0);
if (ret) {
GT_TRACE(gt, "Wait for media reset failed\n");
goto out;
@@ -218,7 +218,7 @@ static int g4x_do_reset(struct intel_gt *gt,
 
pci_write_config_byte(pdev, I915_GDRST,
  GRDOM_RENDER | GRDOM_RESET_ENABLE);
-   ret =  wait_for_atomic(g4x_reset_complete(pdev), 50);
+   ret =  _wait_for_atomic(g4x_reset_complete(pdev), 50, 0);
if (ret) {
GT_TRACE(gt, "Wait for render reset failed\n");
goto out;
@@ -736,9 +736,7 @@ int __intel_gt_reset(struct intel_gt *gt, intel_engine_mask_t engine_mask)
intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL);
for (retry = 0; ret == -ETIMEDOUT && retry < retries; retry++) {
GT_TRACE(gt, "engine_mask=%x\n", engine_mask);
-   preempt_disable();
ret = reset(gt, engine_mask, retry);
-   preempt_enable();
}
intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL);
 
-- 
2.40.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13410): 

[linux-yocto] [PATCH v5.15-rt 11/11] Linux 5.15.141-rt72

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Joseph Salisbury 

commit e9e280348657bf29b5f35e37e34e4da26821116c in linux-stable-rt

Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 9b7de9345ef4..2c95a3cdbcb8 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt61
+-rt72
-- 
2.40.0





[linux-yocto] [PATCH v5.15-rt 09/11] posix-timers: Ensure timer ID search-loop limit is valid

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Thomas Gleixner 

commit 71c09b1d6b07a7e4d8d1c686b53f8d1442f8ec14 in linux-stable-rt

posix_timer_add() tries to allocate a posix timer ID by starting from the
cached ID which was stored by the last successful allocation.

This is done in a loop searching the ID space for a free slot one by
one. The loop has to terminate when the search wrapped around to the
starting point.

But that's racy vs. establishing the starting point. That is read out
lockless, which leads to the following problem:

CPU0                              CPU1
posix_timer_add()
  start = sig->posix_timer_id;
  lock(hash_lock);
  ...                             posix_timer_add()
  if (++sig->posix_timer_id < 0)
                                    start = sig->posix_timer_id;
  sig->posix_timer_id = 0;

So CPU1 can observe a negative start value, i.e. -1, and the loop break
never happens because the condition can never be true:

  if (sig->posix_timer_id == start)
 break;

While this is unlikely to ever turn into an endless loop as the ID space is
huge (INT_MAX), the racy read of the start value caught the attention of
KCSAN and Dmitry unearthed that incorrectness.

Rewrite it so that all id operations are under the hash lock.
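A hypothetical userspace model of the fixed allocator (locking elided,
and a tiny in_use bitmap standing in for the timer hash lookup; alloc_id,
next_id, and used are invented names) shows how clamping with "& INT_MAX"
keeps the search index from ever going negative:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

static unsigned int next_id;   /* only touched under hash_lock upstream */
static bool used[8];           /* tiny stand-in for the timer hash table */

static bool in_use(unsigned int id)
{
    return id < 8 && used[id];
}

static long alloc_id(void)
{
    unsigned int cnt, id;

    /* Bounded search: at most INT_MAX + 1 probes, one per possible ID. */
    for (cnt = 0; cnt <= INT_MAX; cnt++) {
        id = next_id;
        /* Write the next ID back, clamped into the positive space. */
        next_id = (id + 1) & INT_MAX;
        if (!in_use(id))
            return id;
    }
    return -11;   /* -EAGAIN: every possible ID is taken */
}
```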

Reported-by: syzbot+5c54bd3eb218bb595...@syzkaller.appspotmail.com
Reported-by: Dmitry Vyukov 
Signed-off-by: Thomas Gleixner 
Reviewed-by: Frederic Weisbecker 
Link: https://lore.kernel.org/r/87bkhzdn6g.ffs@tglx

(cherry picked from commit 8ce8849dd1e78dadcee0ec9acbd259d239b7069f)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 include/linux/sched/signal.h |  2 +-
 kernel/time/posix-timers.c   | 31 ++-
 2 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 5f0e8403e8ce..9743f7d173a0 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -125,7 +125,7 @@ struct signal_struct {
 #ifdef CONFIG_POSIX_TIMERS
 
/* POSIX.1b Interval Timers */
-   int posix_timer_id;
+	unsigned int		next_posix_timer_id;
struct list_headposix_timers;
 
/* ITIMER_REAL timer for the process */
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index ed3c4a954398..2d6cf93ca370 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -140,25 +140,30 @@ static struct k_itimer *posix_timer_by_id(timer_t id)
 static int posix_timer_add(struct k_itimer *timer)
 {
 	struct signal_struct *sig = current->signal;
-	int first_free_id = sig->posix_timer_id;
 	struct hlist_head *head;
-	int ret = -ENOENT;
+	unsigned int cnt, id;
 
-	do {
+	/*
+	 * FIXME: Replace this by a per signal struct xarray once there is
+	 * a plan to handle the resulting CRIU regression gracefully.
+	 */
+	for (cnt = 0; cnt <= INT_MAX; cnt++) {
 		spin_lock(&hash_lock);
-		head = &posix_timers_hashtable[hash(sig, sig->posix_timer_id)];
-		if (!__posix_timers_find(head, sig, sig->posix_timer_id)) {
+		id = sig->next_posix_timer_id;
+
+		/* Write the next ID back. Clamp it to the positive space */
+		sig->next_posix_timer_id = (id + 1) & INT_MAX;
+
+		head = &posix_timers_hashtable[hash(sig, id)];
+		if (!__posix_timers_find(head, sig, id)) {
 			hlist_add_head_rcu(&timer->t_hash, head);
-			ret = sig->posix_timer_id;
+			spin_unlock(&hash_lock);
+			return id;
 		}
-		if (++sig->posix_timer_id < 0)
-			sig->posix_timer_id = 0;
-		if ((sig->posix_timer_id == first_free_id) && (ret == -ENOENT))
-			/* Loop over all possible ids completed */
-			ret = -EAGAIN;
 		spin_unlock(&hash_lock);
-	} while (ret == -ENOENT);
-	return ret;
+	}
+	/* POSIX return code when no timer ID could be allocated */
+	return -EAGAIN;
 }
 
 static inline void unlock_timer(struct k_itimer *timr, unsigned long flags)
-- 
2.40.0





[linux-yocto] [PATCH v5.15-rt 08/11] bpf: Remove in_atomic() from bpf_link_put().

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Sebastian Andrzej Siewior 

commit 87572f0f6aa8cbb3c69a7085fae70786cd217653 in linux-stable-rt

bpf_free_inode() is invoked as an RCU callback. Usually RCU callbacks are
invoked within softirq context. By setting the rcutree.use_softirq=0 boot
option, the RCU callbacks will instead be invoked in a per-CPU kthread
with bottom halves disabled, which implies an RCU read section.

On PREEMPT_RT the context remains fully preemptible. The RCU read
section however does not allow schedule() invocation. The latter happens
in mutex_lock() performed by bpf_trampoline_unlink_prog() originated
from bpf_link_put().

It was pointed out that the bpf_link_put() invocation should not be
delayed if originated from close(). It was also pointed out that other
invocations from within a syscall should also avoid the workqueue.
Everyone else should use workqueue by default to remain safe in the
future (while auditing the code, every caller was preemptible except for
the RCU case).

Let bpf_link_put() use the worker unconditionally. Add
bpf_link_put_direct() which will directly free the resources and is used
by close() and from within __sys_bpf().
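The resulting split can be modeled in a small userspace sketch
(hypothetical names throughout; a one-slot pending pointer stands in for
the kernel workqueue): both puts drop a reference, but only the direct
variant frees synchronously.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct link {
    _Atomic long refcnt;
    bool freed;
};

static struct link *pending;   /* stands in for the system workqueue */

static void link_free(struct link *l)
{
    l->freed = true;
}

/* Safe from any context: the actual free is always deferred. */
static void link_put(struct link *l)
{
    if (atomic_fetch_sub(&l->refcnt, 1) != 1)
        return;
    pending = l;               /* schedule_work() upstream */
}

/* Only for paths known to be sleepable, e.g. close() and syscalls. */
static void link_put_direct(struct link *l)
{
    if (atomic_fetch_sub(&l->refcnt, 1) != 1)
        return;
    link_free(l);
}

static void run_worker(void)   /* the deferred work executing later */
{
    if (pending) {
        link_free(pending);
        pending = NULL;
    }
}
```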

Signed-off-by: Sebastian Andrzej Siewior 
Signed-off-by: Andrii Nakryiko 
Link: https://lore.kernel.org/bpf/20230614083430.oenaw...@linutronix.de
(cherry picked from commit ab5d47bd41b1db82c295b0e751e2b822b43a4b5a)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 kernel/bpf/syscall.c | 29 -
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index ad41b8230780..bcc01f9881cf 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2454,27 +2454,30 @@ static void bpf_link_put_deferred(struct work_struct *work)
 	bpf_link_free(link);
 }
 
-/* bpf_link_put can be called from atomic context, but ensures that resources
- * are freed from process context
+/* bpf_link_put might be called from atomic context. It needs to be called
+ * from sleepable context in order to acquire sleeping locks during the process.
  */
 void bpf_link_put(struct bpf_link *link)
 {
 	if (!atomic64_dec_and_test(&link->refcnt))
 		return;
 
-	if (in_atomic()) {
-		INIT_WORK(&link->work, bpf_link_put_deferred);
-		schedule_work(&link->work);
-	} else {
-		bpf_link_free(link);
-	}
+	INIT_WORK(&link->work, bpf_link_put_deferred);
+	schedule_work(&link->work);
+}
+
+static void bpf_link_put_direct(struct bpf_link *link)
+{
+	if (!atomic64_dec_and_test(&link->refcnt))
+		return;
+	bpf_link_free(link);
 }
 
 static int bpf_link_release(struct inode *inode, struct file *filp)
 {
struct bpf_link *link = filp->private_data;
 
-   bpf_link_put(link);
+   bpf_link_put_direct(link);
return 0;
 }
 
@@ -4351,7 +4354,7 @@ static int link_update(union bpf_attr *attr)
if (ret)
bpf_prog_put(new_prog);
 out_put_link:
-   bpf_link_put(link);
+   bpf_link_put_direct(link);
return ret;
 }
 
@@ -4374,7 +4377,7 @@ static int link_detach(union bpf_attr *attr)
else
ret = -EOPNOTSUPP;
 
-   bpf_link_put(link);
+   bpf_link_put_direct(link);
return ret;
 }
 
@@ -4425,7 +4428,8 @@ static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
 
fd = bpf_link_new_fd(link);
if (fd < 0)
-   bpf_link_put(link);
+   bpf_link_put_direct(link);
 
return fd;
 }
@@ -4502,7 +4505,7 @@ static int bpf_iter_create(union bpf_attr *attr)
return PTR_ERR(link);
 
err = bpf_iter_new_fd(link);
-   bpf_link_put(link);
+   bpf_link_put_direct(link);
 
return err;
 }
-- 
2.40.0





[linux-yocto] [PATCH v5.15-rt 05/11] debugobjects,locking: Annotate debug_object_fill_pool() wait type violation

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Peter Zijlstra 

commit 1992720dff250e9d7d99696588ab1b197160c6b6 in linux-stable-rt

There is an explicit wait-type violation in debug_object_fill_pool()
for PREEMPT_RT=n kernels which allows them to more easily fill the
object pool and reduce the chance of allocation failures.

Lockdep's wait-type checks are designed to check the PREEMPT_RT
locking rules even for PREEMPT_RT=n kernels and object to this, so
create a lockdep annotation to allow this to stand.

Specifically, create a 'lock' type that overrides the inner wait-type
while it is held -- allowing one to temporarily raise it, such that
the violation is hidden.
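As a very loose sketch of the idea (this is not lockdep's real algorithm;
every name here is invented), the context tracks the "sleepiest" lock
type it may still acquire; taking a spinlock lowers that bound, and an
override map raises it again while held, hiding an intentional
violation from the checker:

```c
#include <assert.h>
#include <stdbool.h>

enum wait_type { WAIT_SPIN = 1, WAIT_CONFIG = 2, WAIT_SLEEP = 3 };

static enum wait_type ctx = WAIT_SLEEP;  /* task context may sleep */

static bool may_acquire(enum wait_type inner)
{
    return inner <= ctx;
}

/* Taking a lock constrains everything nested under it. */
static enum wait_type acquire(enum wait_type inner)
{
    enum wait_type prev = ctx;

    assert(may_acquire(inner));
    ctx = inner;
    return prev;
}

/* The override "lock" takes no part in deadlock chains; it only raises
 * the tolerated inner wait type for its critical section. */
static enum wait_type override(enum wait_type forced)
{
    enum wait_type prev = ctx;

    ctx = forced;
    return prev;
}

static void release(enum wait_type prev)
{
    ctx = prev;
}
```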

Reported-by: Vlastimil Babka 
Reported-by: Qi Zheng 
Signed-off-by: Peter Zijlstra (Intel) 
Tested-by: Qi Zheng 
Link: https://lkml.kernel.org/r/20230429100614.ga1489...@hirez.programming.kicks-ass.net
(cherry picked from commit 0cce06ba859a515bd06224085d3addb870608b6d)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 include/linux/lockdep.h   | 14 ++
 include/linux/lockdep_types.h |  1 +
 kernel/locking/lockdep.c  | 28 +---
 lib/debugobjects.c| 15 +--
 4 files changed, 49 insertions(+), 9 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1935e4b24359..155a4947d870 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -342,6 +342,16 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 #define lockdep_repin_lock(l,c)	lock_repin_lock(&(l)->dep_map, (c))
 #define lockdep_unpin_lock(l,c)	lock_unpin_lock(&(l)->dep_map, (c))
 
+/*
+ * Must use lock_map_aquire_try() with override maps to avoid
+ * lockdep thinking they participate in the block chain.
+ */
+#define DEFINE_WAIT_OVERRIDE_MAP(_name, _wait_type)\
+   struct lockdep_map _name = {\
+   .name = #_name "-wait-type-override",   \
+   .wait_type_inner = _wait_type,  \
+   .lock_type = LD_LOCK_WAIT_OVERRIDE, }
+
 #else /* !CONFIG_LOCKDEP */
 
 static inline void lockdep_init_task(struct task_struct *task)
@@ -429,6 +439,9 @@ extern int lockdep_is_held(const void *);
 #define lockdep_repin_lock(l, c)		do { (void)(l); (void)(c); } while (0)
 #define lockdep_unpin_lock(l, c)		do { (void)(l); (void)(c); } while (0)
 
+#define DEFINE_WAIT_OVERRIDE_MAP(_name, _wait_type)\
+   struct lockdep_map __maybe_unused _name = {}
+
 #endif /* !LOCKDEP */
 
 enum xhlock_context_t {
@@ -571,6 +584,7 @@ do {						\
 #define rwsem_release(l, i)			lock_release(l, i)
 
 #define lock_map_acquire(l)			lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_)
+#define lock_map_acquire_try(l)		lock_acquire_exclusive(l, 0, 1, NULL, _THIS_IP_)
 #define lock_map_acquire_read(l)	lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_tryread(l)	lock_acquire_shared_recursive(l, 0, 1, NULL, _THIS_IP_)
 #define lock_map_release(l)			lock_release(l, _THIS_IP_)
diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h
index 3e726ace5c62..a5f1519489df 100644
--- a/include/linux/lockdep_types.h
+++ b/include/linux/lockdep_types.h
@@ -33,6 +33,7 @@ enum lockdep_wait_type {
 enum lockdep_lock_type {
LD_LOCK_NORMAL = 0, /* normal, catch all */
LD_LOCK_PERCPU, /* percpu */
+   LD_LOCK_WAIT_OVERRIDE,  /* annotation */
LD_LOCK_MAX,
 };
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 44b78f63d5fe..f4f4593949a4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2210,6 +2210,9 @@ static inline bool usage_match(struct lock_list *entry, void *mask)
 
 static inline bool usage_skip(struct lock_list *entry, void *mask)
 {
+   if (entry->class->lock_type == LD_LOCK_NORMAL)
+   return false;
+
/*
 * Skip local_lock() for irq inversion detection.
 *
@@ -2236,14 +2239,16 @@ static inline bool usage_skip(struct lock_list *entry, void *mask)
 	 * As a result, we will skip local_lock(), when we search for irq
 	 * inversion bugs.
 	 */
-	if (entry->class->lock_type == LD_LOCK_PERCPU) {
-		if (DEBUG_LOCKS_WARN_ON(entry->class->wait_type_inner < LD_WAIT_CONFIG))
-			return false;
+	if (entry->class->lock_type == LD_LOCK_PERCPU &&
+	    DEBUG_LOCKS_WARN_ON(entry->class->wait_type_inner < LD_WAIT_CONFIG))
+		return false;
 
-		return true;
-	}
+	/*
+	 * Skip WAIT_OVERRIDE for irq inversion detection -- it's not actually
+	 * a lock and only used to override the wait_type.
+	 */
 
-	return false;
+	return true;
 }
 
 /*
@@ -4710,7 +4715,8 @@ static int 

[linux-yocto] [PATCH v5.15-rt 06/11] sched: avoid false lockdep splat in put_task_struct()

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Wander Lairson Costa 

commit 20616d2c54d5db199f983ca9515630f361d5c995 in linux-stable-rt

In put_task_struct(), a spin_lock is indirectly acquired under the stock
kernel. When running the kernel in real-time (RT) configuration, the
operation is dispatched to a preemptible context call to ensure
guaranteed preemption. However, if PROVE_RAW_LOCK_NESTING is enabled
and __put_task_struct() is called while holding a raw_spinlock, lockdep
incorrectly reports an "Invalid lock context" in the stock kernel.

This false splat occurs because lockdep is unaware of the different
route taken under RT. To address this issue, override the inner wait
type to prevent the false lockdep splat.

Suggested-by: Oleg Nesterov 
Suggested-by: Sebastian Andrzej Siewior 
Suggested-by: Peter Zijlstra 
Signed-off-by: Wander Lairson Costa 
Signed-off-by: Peter Zijlstra (Intel) 
Link: https://lore.kernel.org/r/20230614122323.37957-3-wan...@redhat.com
(cherry picked from commit 893cdaaa3977be6afb3a7f756fbfd7be83f68d8c)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 include/linux/sched/task.h | 18 ++
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 0c2d00809915..75d52a9e7620 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -115,6 +115,19 @@ static inline void put_task_struct(struct task_struct *t)
 	if (!refcount_dec_and_test(&t->usage))
 		return;
 
+	/*
+	 * In !RT, it is always safe to call __put_task_struct().
+	 * Under RT, we can only call it in preemptible context.
+	 */
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible()) {
+		static DEFINE_WAIT_OVERRIDE_MAP(put_task_map, LD_WAIT_SLEEP);
+
+		lock_map_acquire_try(&put_task_map);
+		__put_task_struct(t);
+		lock_map_release(&put_task_map);
+		return;
+	}
+
 	/*
 	 * under PREEMPT_RT, we can't call put_task_struct
 	 * in atomic context because it will indirectly
@@ -135,10 +148,7 @@ static inline void put_task_struct(struct task_struct *t)
 	 * when it fails to fork a process. Therefore, there is no
 	 * way it can conflict with put_task_struct().
 	 */
-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
-		call_rcu(&t->rcu, __put_task_struct_rcu_cb);
-	else
-		__put_task_struct(t);
+	call_rcu(&t->rcu, __put_task_struct_rcu_cb);
 }
 
 static inline void put_task_struct_many(struct task_struct *t, int nr)
-- 
2.40.0





[linux-yocto] [PATCH v5.15-rt 07/11] mm/page_alloc: Use write_seqlock_irqsave() instead write_seqlock() + local_irq_save().

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Sebastian Andrzej Siewior 

commit f9fec545dea4aac71dfb54e3a6d187cc92af9ea4 in linux-stable-rt

__build_all_zonelists() acquires zonelist_update_seq by first disabling
interrupts via local_irq_save() and then acquiring the seqlock with
write_seqlock(). This is troublesome and leads to problems on
PREEMPT_RT. The problem is that the inner spinlock_t becomes a sleeping
lock on PREEMPT_RT and must not be acquired with disabled interrupts.

The API provides write_seqlock_irqsave() which does the right thing in
one step.
printk_deferred_enter() has to be invoked in a non-migratable context to
ensure that deferred printing is enabled and disabled on the same CPU.
This is the case after zonelist_update_seq has been acquired.

There was discussion on the first submission that the order should be:
local_irq_disable();
printk_deferred_enter();
write_seqlock();

to avoid pitfalls like having an unaccounted printk() coming from
write_seqlock_irqsave() before printk_deferred_enter() is invoked. The
only origin of such a printk() can be a lockdep splat because the
lockdep annotation happens after the sequence count is incremented.
This is exceptional and subject to change.

It was also pointed out that PREEMPT_RT can be affected by the printk
problem since its write_seqlock_irqsave() does not really disable
interrupts. This isn't the case because PREEMPT_RT's printk
implementation differs from the mainline implementation in two important
aspects:
- Printing happens in a dedicated thread and not during the
  invocation of printk().
- In emergency cases where synchronous printing is used, a different
  driver is used which does not use tty_port::lock.

Acquire zonelist_update_seq with write_seqlock_irqsave() and then defer
printk output.
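A hypothetical single-CPU model of the API change (real seqlocks also
pair a spinlock and memory barriers, omitted here; the _model names are
invented) shows how the _irqsave variant folds the two open-coded steps
into one call:

```c
#include <assert.h>

static unsigned int seq;        /* odd while a writer is in progress */
static int irqs_enabled = 1;    /* models the CPU's local interrupt flag */

/* One call replaces local_irq_save() followed by write_seqlock(). */
static unsigned long write_seqlock_irqsave_model(void)
{
    unsigned long flags = (unsigned long)irqs_enabled;

    irqs_enabled = 0;   /* the local_irq_save() half */
    seq++;              /* the write_seqlock() half: seq goes odd */
    return flags;
}

static void write_sequnlock_irqrestore_model(unsigned long flags)
{
    seq++;              /* seq even again: readers may proceed */
    irqs_enabled = (int)flags;
}
```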

Fixes: 1007843a91909 ("mm/page_alloc: fix potential deadlock on zonelist_update_seq seqlock")
Acked-by: Michal Hocko 
Reviewed-by: David Hildenbrand 
Link: https://lore.kernel.org/r/20230623201517.yw286...@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior 
(cherry picked from commit 4d1139baae8bc4fff3728d1d204bdb04c13dbe10)
Signed-off-by: Joseph Salisbury 
[PG: this basically becomes a trivial no-op because Yocto has stripped
out all the printk_deferred stuff in 25f13bd1d07b and 63a865cbbc8a]
Signed-off-by: Paul Gortmaker 
---
 mm/page_alloc.c | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 30be96ae9a34..189f097253e2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6423,8 +6423,7 @@ static void __build_all_zonelists(void *data)
 * to prevent any IRQ handler from calling into the page allocator
 * (e.g. GFP_ATOMIC) that could hit zonelist_iter_begin and livelock.
 */
-	local_irq_save(flags);
-	write_seqlock(&zonelist_update_seq);
+	write_seqlock_irqsave(&zonelist_update_seq, flags);
 
 #ifdef CONFIG_NUMA
memset(node_load, 0, sizeof(node_load));
@@ -6457,8 +6456,7 @@ static void __build_all_zonelists(void *data)
 #endif
}
 
-	write_sequnlock(&zonelist_update_seq);
-	local_irq_restore(flags);
+	write_sequnlock_irqrestore(&zonelist_update_seq, flags);
 }
 
 static noinline void __init
-- 
2.40.0





[linux-yocto] [PATCH v5.15-rt 04/11] debugobject: Ensure pool refill (again)

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Thomas Gleixner 

commit ccf6bfd49a8a7d25bacc8e84ec5dbdfe513c29c3 in linux-stable-rt

The recent fix to ensure atomicity of lookup and allocation inadvertently
broke the pool refill mechanism.

Prior to that change debug_object_activate() and debug_object_assert_init()
invoked debug_object_init() to set up the tracking object for statically
initialized objects. That's no longer the case and debug_object_init() is
now the only place which does pool refills.

Depending on the number of statically initialized objects this can be
enough to actually deplete the pool, which was observed by Ido via a
debugobjects OOM warning.

Restore the old behaviour by adding explicit refill opportunities to
debug_object_activate() and debug_object_assert_init().
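A toy userspace model of the fix (hypothetical names and pool sizes;
upstream the refill helper additionally checks preemptible() on
PREEMPT_RT): every tracking entry point now gets an explicit refill
opportunity, so a burst of consumers can no longer drive the pool dry.

```c
#include <assert.h>

#define POOL_MIN 4
#define POOL_MAX 8

static int pool_cnt = POOL_MAX;

static void fill_pool(void)
{
    while (pool_cnt < POOL_MAX)
        pool_cnt++;            /* kmem_cache_alloc() upstream */
}

/* Shared helper: refill whenever the pool dips below its watermark. */
static void debug_objects_fill_pool(void)
{
    if (pool_cnt < POOL_MIN)
        fill_pool();
}

/* With the fix, activate() refills before consuming, just like init();
 * returning 0 would correspond to the depletion/OOM-warning case. */
static int object_activate(void)
{
    debug_objects_fill_pool();
    return --pool_cnt >= 0;
}
```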

Fixes: 63a759694eed ("debugobject: Prevent init race with static objects")
Reported-by: Ido Schimmel 
Signed-off-by: Thomas Gleixner 
Tested-by: Ido Schimmel 
Link: https://lore.kernel.org/r/871qk05a9d.ffs@tglx

(cherry picked from commit 0af462f19e635ad522f28981238334620881badc)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 lib/debugobjects.c | 21 +++--
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 579406c1e9ed..4c39678c03ee 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -590,6 +590,16 @@ static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket
return NULL;
 }
 
+static void debug_objects_fill_pool(void)
+{
+   /*
+* On RT enabled kernels the pool refill must happen in preemptible
+* context:
+*/
+   if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
+   fill_pool();
+}
+
 static void
__debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
 {
@@ -598,12 +608,7 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
struct debug_obj *obj;
unsigned long flags;
 
-   /*
-* On RT enabled kernels the pool refill must happen in preemptible
-* context:
-*/
-   if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
-   fill_pool();
+   debug_objects_fill_pool();
 
db = get_bucket((unsigned long) addr);
 
@@ -688,6 +693,8 @@ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
if (!debug_objects_enabled)
return 0;
 
+   debug_objects_fill_pool();
+
db = get_bucket((unsigned long) addr);
 
	raw_spin_lock_irqsave(&db->lock, flags);
@@ -897,6 +904,8 @@ void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr)
if (!debug_objects_enabled)
return;
 
+   debug_objects_fill_pool();
+
db = get_bucket((unsigned long) addr);
 
	raw_spin_lock_irqsave(&db->lock, flags);
-- 
2.40.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13405): 
https://lists.yoctoproject.org/g/linux-yocto/message/13405
Mute This Topic: https://lists.yoctoproject.org/mt/103055693/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [PATCH v5.15-rt 01/11] io-mapping: don't disable preempt on RT in io_mapping_map_atomic_wc().

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Sebastian Andrzej Siewior 

commit f1bd52382dcefb82cdc243575ab81f3966165b47 in linux-stable-rt

io_mapping_map_atomic_wc() disables preemption and pagefaults for
historical reasons.  The conversion to io_mapping_map_local_wc(), which
only disables migration, cannot be done wholesale because quite a few call
sites need to be updated to accommodate the changed semantics.

On PREEMPT_RT enabled kernels the io_mapping_map_atomic_wc() semantics are
problematic due to the implicit disabling of preemption which makes it
impossible to acquire 'sleeping' spinlocks within the mapped atomic
sections.

PREEMPT_RT has replaced the preempt_disable() with a migrate_disable() for
more than a decade.  It could be argued that this is a justification to do
this unconditionally, but PREEMPT_RT covers only a limited number of
architectures and it disables some functionality which limits the coverage
further.

Limit the replacement to PREEMPT_RT for now.  This is also done for
kmap_atomic().

Link: https://lkml.kernel.org/r/20230310162905.o57pj...@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior 
Reported-by: Richard Weinberger 
  Link: 
https://lore.kernel.org/caflxgvw0wmxamqyqj5wgvvsbkhq2d2xcxtogmcpgq9ndc-m...@mail.gmail.com
Cc: Thomas Gleixner 
Signed-off-by: Andrew Morton 
(cherry picked from commit 7eb16f23b9a415f062db22739e59bb144e0b24ab)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 include/linux/io-mapping.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/linux/io-mapping.h b/include/linux/io-mapping.h
index e9743cfd8585..b0f196e51dca 100644
--- a/include/linux/io-mapping.h
+++ b/include/linux/io-mapping.h
@@ -69,7 +69,10 @@ io_mapping_map_atomic_wc(struct io_mapping *mapping,
 
BUG_ON(offset >= mapping->size);
phys_addr = mapping->base + offset;
-   preempt_disable();
+   if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+   preempt_disable();
+   else
+   migrate_disable();
pagefault_disable();
return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot);
 }
@@ -79,7 +82,10 @@ io_mapping_unmap_atomic(void __iomem *vaddr)
 {
kunmap_local_indexed((void __force *)vaddr);
pagefault_enable();
-   preempt_enable();
+   if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+   preempt_enable();
+   else
+   migrate_enable();
 }
 
 static inline void __iomem *
@@ -168,7 +174,10 @@ static inline void __iomem *
 io_mapping_map_atomic_wc(struct io_mapping *mapping,
 unsigned long offset)
 {
-   preempt_disable();
+   if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+   preempt_disable();
+   else
+   migrate_disable();
pagefault_disable();
return io_mapping_map_wc(mapping, offset, PAGE_SIZE);
 }
@@ -178,7 +187,10 @@ io_mapping_unmap_atomic(void __iomem *vaddr)
 {
io_mapping_unmap(vaddr);
pagefault_enable();
-   preempt_enable();
+   if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+   preempt_enable();
+   else
+   migrate_enable();
 }
 
 static inline void __iomem *
-- 
2.40.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13401): 
https://lists.yoctoproject.org/g/linux-yocto/message/13401
Mute This Topic: https://lists.yoctoproject.org/mt/103055689/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [PATCH v5.15-rt 02/11] locking/rwbase: Mitigate indefinite writer starvation

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Sebastian Andrzej Siewior 

commit 47364f671cbed35071551bd911dc7b89a1761804 in linux-stable-rt

On PREEMPT_RT, rw_semaphore and rwlock_t locks are unfair to writers.
Readers can indefinitely acquire the lock unless the writer fully acquired
the lock, which might never happen if there is always a reader in the
critical section owning the lock.

Mel Gorman reported that since LTP-20220121 the dio_truncate test case
went from having 1 reader to having 16 readers and that number of readers
is sufficient to prevent the down_write ever succeeding while readers
exist. Eventually the test is killed after 30 minutes as a failure.

Mel proposed a timeout to limit how long a writer can be blocked until
the reader is forced into the slowpath.

Thomas argued that there is no added value by providing this timeout.  From
a PREEMPT_RT point of view, there are no critical rw_semaphore or rwlock_t
locks left where the reader must be preferred.

Mitigate indefinite writer starvation by forcing the READER into the
slowpath once the WRITER attempts to acquire the lock.

Reported-by: Mel Gorman 
Signed-off-by: Sebastian Andrzej Siewior 
Signed-off-by: Thomas Gleixner 
Signed-off-by: Ingo Molnar 
Acked-by: Mel Gorman 
Link: https://lore.kernel.org/877cwbq4cq.ffs@tglx
Link: https://lore.kernel.org/r/20230321161140.hmcqe...@linutronix.de
Cc: Linus Torvalds 
(cherry picked from commit 286deb7ec03d941664ac3ffaff58814b454adf65)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 kernel/locking/rwbase_rt.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 88191f6e252c..a28148a05383 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -73,15 +73,6 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
int ret;
 
	raw_spin_lock_irq(&rwb->wait_lock);
-   /*
-* Allow readers, as long as the writer has not completely
-* acquired the semaphore for write.
-*/
-   if (atomic_read(&rwb->readers) != WRITER_BIAS) {
-   atomic_inc(&rwb->readers);
-   raw_spin_unlock_irq(&rwb->wait_lock);
-   return 0;
-   }
 
/*
 * Call into the slow lock path with the rtmutex->wait_lock
-- 
2.40.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13402): 
https://lists.yoctoproject.org/g/linux-yocto/message/13402
Mute This Topic: https://lists.yoctoproject.org/mt/103055690/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [PATCH v5.15-rt 03/11] Revert "softirq: Let ksoftirqd do its job"

2023-12-08 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paolo Abeni 

commit e94601d32f4d7fdc28da15a72fe5262c63a5755a in linux-stable-rt

This reverts the following commits:

  4cd13c21b207 ("softirq: Let ksoftirqd do its job")
  3c53776e29f8 ("Mark HI and TASKLET softirq synchronous")
  1342d8080f61 ("softirq: Don't skip softirq execution when softirq thread is parking")

in a single change to avoid known bad intermediate states introduced by a
patch series reverting them individually.

Due to the mentioned commit, when the ksoftirqd threads take charge of
softirq processing, the system can experience high latencies.

In the past a few workarounds have been implemented for specific
side-effects of the initial ksoftirqd enforcement commit:

commit 1ff688209e2e ("watchdog: core: make sure the watchdog_worker is not deferred")
commit 8d5755b3f77b ("watchdog: softdog: fire watchdog even if softirqs do not get to run")
commit 217f69743681 ("net: busy-poll: allow preemption in sk_busy_loop()")
commit 3c53776e29f8 ("Mark HI and TASKLET softirq synchronous")

But the latency problem still exists in real-life workloads, see the link
below.

The reverted commit intended to solve a live-lock scenario that can now be
addressed with the NAPI threaded mode, introduced with commit 29863d41bb6e
("net: implement threaded-able napi poll loop support"), which is nowadays
in a pretty stable status.

While a complete solution to put softirq processing under nice resource
control would be preferable, that has proven to be a very hard task. In
the short term, remove the main pain point, and also simplify a bit the
current softirq implementation.

Signed-off-by: Paolo Abeni 
Signed-off-by: Thomas Gleixner 
Tested-by: Jason Xing 
Reviewed-by: Jakub Kicinski 
Reviewed-by: Eric Dumazet 
Reviewed-by: Sebastian Andrzej Siewior 
Cc: "Paul E. McKenney" 
Cc: Peter Zijlstra 
Cc: net...@vger.kernel.org
Link: 
https://lore.kernel.org/netdev/305d7742212cbe98621b16be782b0562f1012cb6.ca...@redhat.com
Link: 
https://lore.kernel.org/r/57e66b364f1b6f09c9bc0316742c3b14f4ce83bd.1683526542.git.pab...@redhat.com
(cherry picked from commit d15121be7485655129101f3960ae6add40204463)
Signed-off-by: Joseph Salisbury 
Signed-off-by: Paul Gortmaker 
---
 kernel/softirq.c | 22 ++--------------------
 1 file changed, 2 insertions(+), 20 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 41f470929e99..398951403331 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -80,21 +80,6 @@ static void wakeup_softirqd(void)
wake_up_process(tsk);
 }
 
-/*
- * If ksoftirqd is scheduled, we do not want to process pending softirqs
- * right now. Let ksoftirqd handle this at its own rate, to get fairness,
- * unless we're doing some of the synchronous softirqs.
- */
-#define SOFTIRQ_NOW_MASK ((1 << HI_SOFTIRQ) | (1 << TASKLET_SOFTIRQ))
-static bool ksoftirqd_running(unsigned long pending)
-{
-   struct task_struct *tsk = __this_cpu_read(ksoftirqd);
-
-   if (pending & SOFTIRQ_NOW_MASK)
-   return false;
-   return tsk && task_is_running(tsk) && !__kthread_should_park(tsk);
-}
-
 #ifdef CONFIG_TRACE_IRQFLAGS
 DEFINE_PER_CPU(int, hardirqs_enabled);
 DEFINE_PER_CPU(int, hardirq_context);
@@ -236,7 +221,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
goto out;
 
pending = local_softirq_pending();
-   if (!pending || ksoftirqd_running(pending))
+   if (!pending)
goto out;
 
/*
@@ -419,9 +404,6 @@ static inline bool should_wake_ksoftirqd(void)
 
 static inline void invoke_softirq(void)
 {
-   if (ksoftirqd_running(local_softirq_pending()))
-   return;
-
if (!force_irqthreads() || !__this_cpu_read(ksoftirqd)) {
 #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
/*
@@ -455,7 +437,7 @@ asmlinkage __visible void do_softirq(void)
 
pending = local_softirq_pending();
 
-   if (pending && !ksoftirqd_running(pending))
+   if (pending)
do_softirq_own_stack();
 
local_irq_restore(flags);
-- 
2.40.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13404): 
https://lists.yoctoproject.org/g/linux-yocto/message/13404
Mute This Topic: https://lists.yoctoproject.org/mt/103055692/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [v6.1/standard/preempt-rt/x86] Add drm/i915 reset -rt fix

2023-12-06 Thread Paul Gortmaker via lists.yoctoproject.org
Bruce,

This commit has lingered on a dead end branch in linux-stable-rt since August:

linux-stable-rt$git branch --contains 1a80b572f783a
  v6.1-rt-next
linux-stable-rt$

I will contact the maintainer, but the updates are pretty slow over there.

Nothing wrong with the commit - in fact it is now upstream with just a
trivial comment change added to it.

https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git/commit/?id=1a80b572f783a1

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1e975e591a

I put x86 in the Subject 'cause it will only be functional there, but it
applies to v6.1/standard/preempt/base and so put it on one board or on
all -rt boards; whatever makes your maintenance life easier.

It was added in v6.7 so I checked v6.5 and the -rt maintainer already
added it there (832fa067488), so nothing to do on v6.5 (yay!)

Thanks,
Paul
--

From 1a80b572f783a15327663bf9e7d71163976e8d6a Mon Sep 17 00:00:00 2001
From: Tvrtko Ursulin 
Date: Fri, 18 Aug 2023 22:45:25 -0400
Subject: [PATCH] drm/i915: Do not disable preemption for resets

[commit 40cd2835ced288789a685aa4aa7bc04b492dcd45 in linux-rt-devel]

Commit ade8a0f59844 ("drm/i915: Make all GPU resets atomic") added a
preempt disable section over the hardware reset callback to prepare the
driver for being able to reset from atomic contexts.

In retrospect I can see that the work item at the time was about removing
the struct mutex from the reset path. The code base also briefly entertained
the idea of doing the reset under stop_machine in order to serialize
userspace mmap and temporary glitch in the fence registers (see
eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on struct_mutex"),
but that never materialized and was soon removed in 2caffbf11762
("drm/i915: Revoke mmaps and prevent access to fence registers across
reset") and replaced with a SRCU based solution.

As such, as far as I can see, today we still have a requirement that
resets must not sleep (invoked from submission tasklets), but no need to
support invoking them from a truly atomic context.

Given that the preemption section is problematic on RT kernels, since the
uncore lock becomes a sleeping lock and so is invalid in such section,
lets try and remove it. Potential downside is that our short waits on GPU
to complete the reset may get extended if CPU scheduling interferes, but
in practice that probably isn't a deal breaker.

In terms of mechanics, since the preemption disabled block is being
removed we just need to replace a few of the wait_for_atomic macros into
busy looping versions which will work (and not complain) when called from
non-atomic sections.

Signed-off-by: Tvrtko Ursulin 
Cc: Chris Wilson 
Cc: Paul Gortmaker 
Cc: Sebastian Andrzej Siewior 
Acked-by: Sebastian Andrzej Siewior 
Link: 
https://lore.kernel.org/r/20230705093025.3689748-1-tvrtko.ursu...@linux.intel.com
Signed-off-by: Sebastian Andrzej Siewior 
[PG: backport from v6.4-rt ; minor context fixup caused by b7d70b8b06ed]
Signed-off-by: Paul Gortmaker 
Signed-off-by: Clark Williams 

diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 10b930eaa8cb..6108a449cd19 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -174,13 +174,13 @@ static int i915_do_reset(struct intel_gt *gt,
/* Assert reset for at least 20 usec, and wait for acknowledgement. */
pci_write_config_byte(pdev, I915_GDRST, GRDOM_RESET_ENABLE);
udelay(50);
-   err = wait_for_atomic(i915_in_reset(pdev), 50);
+   err = _wait_for_atomic(i915_in_reset(pdev), 50, 0);
 
/* Clear the reset request. */
pci_write_config_byte(pdev, I915_GDRST, 0);
udelay(50);
if (!err)
-   err = wait_for_atomic(!i915_in_reset(pdev), 50);
+   err = _wait_for_atomic(!i915_in_reset(pdev), 50, 0);
 
return err;
 }
@@ -200,7 +200,7 @@ static int g33_do_reset(struct intel_gt *gt,
struct pci_dev *pdev = to_pci_dev(gt->i915->drm.dev);
 
pci_write_config_byte(pdev, I915_GDRST, GRDOM_RESET_ENABLE);
-   return wait_for_atomic(g4x_reset_complete(pdev), 50);
+   return _wait_for_atomic(g4x_reset_complete(pdev), 50, 0);
 }
 
 static int g4x_do_reset(struct intel_gt *gt,
@@ -217,7 +217,7 @@ static int g4x_do_reset(struct intel_gt *gt,
 
pci_write_config_byte(pdev, I915_GDRST,
  GRDOM_MEDIA | GRDOM_RESET_ENABLE);
-   ret =  wait_for_atomic(g4x_reset_complete(pdev), 50);
+   ret =  _wait_for_atomic(g4x_reset_complete(pdev), 50, 0);
if (ret) {
GT_TRACE(gt, "Wait for media reset failed\n");
goto out;
@@ -225,7 +225,7 @@ static int g4x_do_reset(struct intel_gt *gt,
 
pci_write_config_byte(pdev, I915_GDRST,
  GRDOM_RENDER | GRDOM_RESET_ENABLE);
-   ret =  wait_for_atomic(g4x_reset_complete(pdev), 50);
+   ret =  

[linux-yocto] [PATCH] features/ima: drop now retired IMA_TRUSTED_KEYRING option

2023-12-06 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

Unfortunately linux-stable backported this:

  Subject: ima: Remove deprecated IMA_TRUSTED_KEYRING Kconfig

  From: Nayna Jain 

  [ Upstream commit 5087fd9e80e539d2163accd045b73da64de7de95 ]

  Time to remove "IMA_TRUSTED_KEYRING".

...to all releases still being maintained.

stable-queue$git grep -l 5087fd9e80e539
releases/5.10.195/ima-remove-deprecated-ima_trusted_keyring-kconfig.patch
releases/5.15.132/ima-remove-deprecated-ima_trusted_keyring-kconfig.patch
releases/5.4.257/ima-remove-deprecated-ima_trusted_keyring-kconfig.patch
releases/6.1.53/ima-remove-deprecated-ima_trusted_keyring-kconfig.patch
releases/6.4.16/ima-remove-deprecated-ima_trusted_keyring-kconfig.patch
releases/6.5.3/ima-remove-deprecated-ima_trusted_keyring-kconfig.patch

So now when someone uses the feature, it triggers a do_kernel_configcheck
warning when the audit runs.

We added this file way back in 2019 so this fix will be needed on all
active branches that are using an LTS linux-stable kernel listed above.

Signed-off-by: Paul Gortmaker 

diff --git a/features/ima/ima.cfg b/features/ima/ima.cfg
index acb5fd02986f..5fd3288e1986 100644
--- a/features/ima/ima.cfg
+++ b/features/ima/ima.cfg
@@ -13,7 +13,6 @@ CONFIG_IMA_APPRAISE_SIGNED_INIT=y
 CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
 CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
 CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT=y
-CONFIG_IMA_TRUSTED_KEYRING=y
 CONFIG_IMA_KEYRINGS_PERMIT_SIGNED_BY_BUILTIN_OR_SECONDARY=y
 CONFIG_SIGNATURE=y
 CONFIG_IMA_WRITE_POLICY=y
-- 
2.40.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13387): 
https://lists.yoctoproject.org/g/linux-yocto/message/13387
Mute This Topic: https://lists.yoctoproject.org/mt/103013154/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for number of CPUs

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
[RE: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for number of CPUs] On 
30/11/2023 (Thu 21:43) Liu, Yongxin wrote:

> > -Original Message-
> > From: Gortmaker, Paul 
> > Sent: Friday, December 1, 2023 10:27
> > To: Liu, Yongxin 
> > Cc: Bruce Ashfield ; linux-
> > yo...@lists.yoctoproject.org
> > Subject: Re: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for number
> > of CPUs
> > 
> > [RE: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for number of CPUs]
> > On 30/11/2023 (Thu 20:12) Liu, Yongxin wrote:
> > 
> > > > -Original Message-
> > > > From: linux-yocto@lists.yoctoproject.org  > > > yo...@lists.yoctoproject.org> On Behalf Of Paul Gortmaker via
> > > > lists.yoctoproject.org
> > > > Sent: Friday, December 1, 2023 03:08
> > > > To: Bruce Ashfield 
> > > > Cc: linux-yocto@lists.yoctoproject.org
> > > > Subject: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for
> > > > number of CPUs
> > > >
> > > > From: Paul Gortmaker 
> > > >
> > > > The x86-64 BSP isn't quite the same as the "more specific" BSP like
> > > > a Beaglebone Black or the (now deleted) Edgerouter.  Where we have
> > > > exact hardware specifics for boards like those, the x86-64 BSP is
> > > > more of a "generic" thing used as the baseline across an endless sea
> > of boards.
> > > >
> > > > To that end, this is somewhat a revert of commit bd77e1f904f6
> > > > ("bsp/intel-x86: change the supported maximum number of CPUs to 512
> > > > in 64- bit bsp")
> > > >
> > > > It is great that a handful of people out there are using Yocto on
> > > > these huge server machines, but that doesn't reflect 99% of the rest
> > > > of us who continue to lean towards the original "embedded theme" of
> > Yocto.
> > > >
> > > > That means a whole bunch of extra per-CPU jumping through hoops;
> > > > some can be mitigated by booting with "nr_cpus=4" (or whatever the
> > > > core count
> > > > is) but I guarantee largely nobody out there is doing that.
> > > >
> > > > Let those users with the crazy CPU count own that config
> > > > customization locally.  The default is 64 which still seems way too
> > > > large IMHO, but at least we are moving in the right direction.
> > >
> > >
> > > This intel-x86-64 BSP is a generic one used from mobile to server.
> > >
> > > Customers need to customize not only the CPU number config but also
> > > other configs, like, removing unused drivers or adding debug options.
> > > From this point of view, there is no difference between 64 or 512.
> 
> I changed 64 to 512. Because we have server machines with more than 64 CPU.
> I want the BSP support those machines by default.

But you still miss the point.  It doesn't matter what you or any company
"want" in this case.  Like it or not, it is a shared resource and so the
defaults have to be what is good for the Yocto project and not for *you*.

> 
> > 
> > So you've basically argued my case for me.  If changes are inevitable,
> > then why do we change the default?
> > 
> > > But it changes the "rule" that intel-x86-64 works for all supported
> > platforms.
> > > We need to do extra work for servers with large CPU number.
> > 
> > No.  There is no "rule" in Yocto like that.  That is nonsense because
> > there is no way Yocto can commit to "support" all the crazy different
> > x86-64 variants out there.
> 
> 
> I think this "bsp/intel-x86" is used only by Wind River.
> So bsp/intel-x86 should work for all supported machines claimed by Wind River.

No. That is where you are dead wrong.  Wind River does not own Yocto.
Think for a minute.  A new Yocto user comes along and sees "intel-x86"
and because that name is so generic -- thinks "I'll build that for my old PC."

> If we need to do some local change to support some machine. That's not good.
> Because people usually build image with default configs and then complain 
> something doesn't work.

Again, it is NOT the problem of the Yocto project what isn't good for YOU.
If you need EDAC and NUMA and 500+ CPU support, then make a proper BSP
with those settings and submit it as "bsp/mega-server-2000" or whatever.

Don't just be using intel-x86 as a dumping ground for whatever random
setting you need today.  That isn't fair to all the oth

Re: [linux-yocto] [PATCH 3/5] x86-64: don't force EDAC support on everyone

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
[RE: [linux-yocto] [PATCH 3/5] x86-64: don't force EDAC support on everyone] On 
30/11/2023 (Thu 19:29) Liu, Yongxin wrote:

> > -Original Message-
> > From: linux-yocto@lists.yoctoproject.org  > yo...@lists.yoctoproject.org> On Behalf Of Paul Gortmaker via
> > lists.yoctoproject.org
> > Sent: Friday, December 1, 2023 03:08
> > To: Bruce Ashfield 
> > Cc: linux-yocto@lists.yoctoproject.org
> > Subject: [linux-yocto] [PATCH 3/5] x86-64: don't force EDAC support on
> > everyone
> > 
> > From: Paul Gortmaker 
> > 
> > Similar to the argument of why we shouldn't force NUMA on everyone, the
> > 9 chip registered ECC RAM type stuff also tends to be found mostly on
> > larger server type stuff and less so on embedded targets.
> > 
> > We already have a skeleton EDAC feature, so move the features over there.
> > One could argue that we might want to separate into arch specific config
> > fragments, but to me - that seems overkill at this point in time.
> > 
> > Signed-off-by: Paul Gortmaker 
> > ---
> >  bsp/intel-x86/intel-x86-64.cfg | 13 -
> >  features/edac/edac.cfg |  8 
> >  2 files changed, 8 insertions(+), 13 deletions(-)
> > 
> > diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-
> > 64.cfg index f31711e73181..58b0fed637e8 100644
> > --- a/bsp/intel-x86/intel-x86-64.cfg
> > +++ b/bsp/intel-x86/intel-x86-64.cfg
> > @@ -3,19 +3,6 @@
> >  # General setup
> >  #
> > 
> > -# EDAC
> > -CONFIG_EDAC=y
> > -CONFIG_EDAC_DEBUG=y
> > -CONFIG_EDAC_SBRIDGE=m
> > -CONFIG_ACPI_APEI=y
> > -CONFIG_ACPI_APEI_EINJ=m
> > -CONFIG_ACPI_APEI_GHES=y
> > -CONFIG_EDAC_PND2=m
> > -CONFIG_EDAC_SKX=m
> > -CONFIG_EDAC_I10NM=m
> > -CONFIG_EDAC_IGEN6=m
> > -
> > -
> >  # ISH
> >  CONFIG_INTEL_ISH_HID=m
> > 
> > diff --git a/features/edac/edac.cfg b/features/edac/edac.cfg index
> > 9b3d3fc59eae..4f75d2f825ee 100644
> > --- a/features/edac/edac.cfg
> > +++ b/features/edac/edac.cfg
> > @@ -15,3 +15,11 @@
> >  CONFIG_RAS=y
> >  CONFIG_EDAC=y
> >  CONFIG_EDAC_DEBUG=y
> > +CONFIG_EDAC_SBRIDGE=m
> > +CONFIG_ACPI_APEI=y
> > +CONFIG_ACPI_APEI_EINJ=m
> > +CONFIG_ACPI_APEI_GHES=y
> > +CONFIG_EDAC_PND2=m
> > +CONFIG_EDAC_SKX=m
> > +CONFIG_EDAC_I10NM=m
> 
> Other arch/bsp may include edac.scc. They clearly don't want EDAC drivers for 
> x86 platform.
> And since CONFIG_EDAC_I10NM depends on X86_64, won't it cause warnings when 
> doing kernel_configcheck for other arch?

Did you read the 0/5 or the commit log?  I explicitly said we do this in
master and then as we have the cushion of time, we see if there is
demand for making an arch separation.  At this point in time, my
experience tells me we don't need it.

Paul.
--

> 
> 
> Thanks,
> Yongxin
> 
> 
> > +CONFIG_EDAC_IGEN6=m
> > --
> > 2.40.0
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13344): 
https://lists.yoctoproject.org/g/linux-yocto/message/13344
Mute This Topic: https://lists.yoctoproject.org/mt/102900653/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for number of CPUs

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
[RE: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for number of CPUs] On 
30/11/2023 (Thu 20:12) Liu, Yongxin wrote:

> > -Original Message-
> > From: linux-yocto@lists.yoctoproject.org  > yo...@lists.yoctoproject.org> On Behalf Of Paul Gortmaker via
> > lists.yoctoproject.org
> > Sent: Friday, December 1, 2023 03:08
> > To: Bruce Ashfield 
> > Cc: linux-yocto@lists.yoctoproject.org
> > Subject: [linux-yocto] [PATCH 4/5] x86-64: use the defaults for number of
> > CPUs
> > 
> > From: Paul Gortmaker 
> > 
> > The x86-64 BSP isn't quite the same as the "more specific" BSP like a
> > Beaglebone Black or the (now deleted) Edgerouter.  Where we have exact
> > hardware specifics for boards like those, the x86-64 BSP is more of a
> > "generic" thing used as the baseline across an endless sea of boards.
> > 
> > To that end, this is somewhat a revert of commit bd77e1f904f6
> > ("bsp/intel-x86: change the supported maximum number of CPUs to 512 in 64-
> > bit bsp")
> > 
> > It is great that a handful of people out there are using Yocto on these
> > huge server machines, but that doesn't reflect 99% of the rest of us who
> > continue to lean towards the original "embedded theme" of Yocto.
> > 
> > That means a whole bunch of extra per-CPU jumping through hoops; some can
> > be mitigated by booting with "nr_cpus=4" (or whatever the core count
> > is) but I guarantee largely nobody out there is doing that.
> > 
> > Let those users with the crazy CPU count own that config customization
> > locally.  The default is 64 which still seems way too large IMHO, but at
> > least we are moving in the right direction.
> 
> 
> This intel-x86-64 BSP is a generic one used from mobile to server.
> 
> Customers need to customize not only the CPU number config but also other 
> configs,
> like, removing unused drivers or adding debug options.
> From this point of view, there is no difference between 64 or 512.

So you've basically argued my case for me.  If changes are inevitable,
then why do we change the default?

> But it changes the "rule" that intel-x86-64 works for all supported platforms.
> We need to do extra work for servers with large CPU number.

No.  There is no "rule" in Yocto like that.  That is nonsense because
there is no way Yocto can commit to "support" all the crazy different
x86-64 variants out there.

If a re-seller/integrator wants to take Yocto and tune it for platform
XYZ because there is customer demand and claim it is then "supported" by
them, then fine.  But then to expect the Yocto project to own that?  No.

Paul.
--

> 
> Thanks,
> Yongxin
> 
> > 
> > Signed-off-by: Paul Gortmaker 
> > ---
> >  bsp/intel-x86/intel-x86-64.cfg | 3 ---
> >  1 file changed, 3 deletions(-)
> > 
> > diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-
> > 64.cfg index 58b0fed637e8..da9bc7b57eca 100644
> > --- a/bsp/intel-x86/intel-x86-64.cfg
> > +++ b/bsp/intel-x86/intel-x86-64.cfg
> > @@ -31,6 +31,3 @@ CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
> > 
> >  # x86 CPU resource control support
> >  CONFIG_X86_CPU_RESCTRL=y
> > -
> > -# Processor type and features
> > -CONFIG_NR_CPUS=512
> > --
> > 2.40.0
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13343): 
https://lists.yoctoproject.org/g/linux-yocto/message/13343
Mute This Topic: https://lists.yoctoproject.org/mt/102900654/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [PATCH 5/5] BSP: remove from all - latencytop feature inclusion

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

Consider this 5+ year old commit

commit bcbc7bbc4fb967d8d4ae6333f71b73491a80b94e
Author: Alexander Kanavin 
Date:   Thu Mar 1 16:00:41 2018 +0200

latencytop: remove recipe

Last commit and release were in 2009; website is down; it's a dead project.

(From OE-Core rev: 36aae56e7f86a4d5ce93e4528e7dcc42f60c705e)

Signed-off-by: Alexander Kanavin 
Signed-off-by: Ross Burton 
Signed-off-by: Richard Purdie 

Given that, it seems sensible to drop it from default inclusion across
the BSPs.  I've left the feature itself, so anyone who still cares can
easily manually add it still.

Signed-off-by: Paul Gortmaker 
---
 bsp/amd-x86/amd-x86-64.scc   | 1 -
 bsp/bcm-2xxx-rpi/bcm-2xxx-rpi.scc| 1 -
 bsp/common-pc/common-pc-preempt-rt.scc   | 1 -
 bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-preempt-rt.scc | 1 -
 bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-standard.scc   | 1 -
 bsp/intel-common/intel-developer-drivers.scc | 1 -
 bsp/intel-x86/intel-x86.scc  | 1 -
 bsp/minnow/minnow-preempt-rt.scc | 1 -
 bsp/minnow/minnow-standard.scc   | 1 -
 bsp/mti-malta32/mti-malta32.scc  | 1 -
 bsp/mti-malta64/mti-malta64-be-developer.scc | 1 -
 bsp/qemu-ppc32/qemu-ppc32.scc| 1 -
 bsp/qemu-ppc64/qemu-ppc64-standard.scc   | 3 ---
 bsp/qemumicroblaze/qemumicroblazeeb-standard.scc | 1 -
 bsp/qemumicroblaze/qemumicroblazeel-standard.scc | 1 -
 bsp/xilinx/zynq-standard.scc | 1 -
 16 files changed, 18 deletions(-)

diff --git a/bsp/amd-x86/amd-x86-64.scc b/bsp/amd-x86/amd-x86-64.scc
index 8080eadcb462..87f23b51db70 100644
--- a/bsp/amd-x86/amd-x86-64.scc
+++ b/bsp/amd-x86/amd-x86-64.scc
@@ -9,7 +9,6 @@ include cfg/efi-ext.scc
 include cfg/virtio.scc
 include cfg/boot-live.scc
 include cfg/usb-mass-storage.scc
-include features/latencytop/latencytop.scc
 include features/profiling/profiling.scc
 
 include features/netfilter/netfilter.scc
diff --git a/bsp/bcm-2xxx-rpi/bcm-2xxx-rpi.scc b/bsp/bcm-2xxx-rpi/bcm-2xxx-rpi.scc
index 42b9c6917593..8c654b99736f 100755
--- a/bsp/bcm-2xxx-rpi/bcm-2xxx-rpi.scc
+++ b/bsp/bcm-2xxx-rpi/bcm-2xxx-rpi.scc
@@ -3,7 +3,6 @@ kconf hardware bcm-2xxx-rpi.cfg
 
 include cfg/usb-mass-storage.scc
 include features/profiling/profiling.scc
-include features/latencytop/latencytop.scc
 
 include features/hostapd/hostapd.scc
 include features/mac80211/mac80211.scc
diff --git a/bsp/common-pc/common-pc-preempt-rt.scc b/bsp/common-pc/common-pc-preempt-rt.scc
index cdba3bd014cc..7044022de9b9 100644
--- a/bsp/common-pc/common-pc-preempt-rt.scc
+++ b/bsp/common-pc/common-pc-preempt-rt.scc
@@ -12,6 +12,5 @@ include bsp/common-pc/common-pc.scc
 # default policy for preempt-rt kernels
 include cfg/boot-live.scc
 include cfg/usb-mass-storage.scc
-include features/latencytop/latencytop.scc
 include features/profiling/profiling.scc
 include cfg/virtio.scc
diff --git a/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-preempt-rt.scc b/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-preempt-rt.scc
index 4f8bcf253f21..231d56542b7e 100644
--- a/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-preempt-rt.scc
+++ b/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-preempt-rt.scc
@@ -9,5 +9,4 @@ include ktypes/preempt-rt/preempt-rt.scc
 include fsl-mpc8315e-rdb.scc
 
 # default policy for preempt-rt kernels
-include features/latencytop/latencytop.scc
 include features/profiling/profiling.scc
diff --git a/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-standard.scc b/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-standard.scc
index 0f00d23ed784..fa797badf622 100644
--- a/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-standard.scc
+++ b/bsp/fsl-mpc8315e-rdb/fsl-mpc8315e-rdb-standard.scc
@@ -10,5 +10,4 @@ branch fsl-mpc8315e-rdb
 include fsl-mpc8315e-rdb.scc
 
 # default policy for standard kernels
-include features/latencytop/latencytop.scc
 include features/profiling/profiling.scc
diff --git a/bsp/intel-common/intel-developer-drivers.scc b/bsp/intel-common/intel-developer-drivers.scc
index 5bb73e3e1da2..090d05ed7d72 100644
--- a/bsp/intel-common/intel-developer-drivers.scc
+++ b/bsp/intel-common/intel-developer-drivers.scc
@@ -1,4 +1,3 @@
 # SPDX-License-Identifier: MIT
 # Additional features for developer bsps
-include features/latencytop/latencytop.scc
 include features/profiling/profiling.scc
diff --git a/bsp/intel-x86/intel-x86.scc b/bsp/intel-x86/intel-x86.scc
index a747961fdbd1..7825075d5dcc 100644
--- a/bsp/intel-x86/intel-x86.scc
+++ b/bsp/intel-x86/intel-x86.scc
@@ -29,7 +29,6 @@ include features/usb/uhci-hcd.scc
 include features/usb/ehci-hcd.scc
 include features/usb/xhci-hcd.scc
 include features/hostapd/hostapd.scc
-include features/latencytop/latencytop.scc
 include features/uio/uio.scc
 include features/spi/spi.scc
 include features/mtd/mtd.scc
diff --git a/bsp/minnow/minnow-preempt-rt.scc b/bsp/minnow/minnow-preempt-rt.scc
index 

[linux-yocto] [PATCH 3/5] x86-64: don't force EDAC support on everyone

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

Similar to the argument of why we shouldn't force NUMA on everyone, the
9 chip registered ECC RAM type stuff also tends to be found mostly on
larger server type stuff and less so on embedded targets.

We already have a skeleton EDAC feature, so move the features over
there.  One could argue that we might want to separate into arch
specific config fragments, but to me - that seems overkill at this
point in time.

Signed-off-by: Paul Gortmaker 
---
 bsp/intel-x86/intel-x86-64.cfg | 13 -
 features/edac/edac.cfg |  8 
 2 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-64.cfg
index f31711e73181..58b0fed637e8 100644
--- a/bsp/intel-x86/intel-x86-64.cfg
+++ b/bsp/intel-x86/intel-x86-64.cfg
@@ -3,19 +3,6 @@
 # General setup
 #
 
-# EDAC
-CONFIG_EDAC=y
-CONFIG_EDAC_DEBUG=y
-CONFIG_EDAC_SBRIDGE=m
-CONFIG_ACPI_APEI=y
-CONFIG_ACPI_APEI_EINJ=m
-CONFIG_ACPI_APEI_GHES=y
-CONFIG_EDAC_PND2=m
-CONFIG_EDAC_SKX=m
-CONFIG_EDAC_I10NM=m
-CONFIG_EDAC_IGEN6=m
-
-
 # ISH
 CONFIG_INTEL_ISH_HID=m
 
diff --git a/features/edac/edac.cfg b/features/edac/edac.cfg
index 9b3d3fc59eae..4f75d2f825ee 100644
--- a/features/edac/edac.cfg
+++ b/features/edac/edac.cfg
@@ -15,3 +15,11 @@
 CONFIG_RAS=y
 CONFIG_EDAC=y
 CONFIG_EDAC_DEBUG=y
+CONFIG_EDAC_SBRIDGE=m
+CONFIG_ACPI_APEI=y
+CONFIG_ACPI_APEI_EINJ=m
+CONFIG_ACPI_APEI_GHES=y
+CONFIG_EDAC_PND2=m
+CONFIG_EDAC_SKX=m
+CONFIG_EDAC_I10NM=m
+CONFIG_EDAC_IGEN6=m
-- 
2.40.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13338): 
https://lists.yoctoproject.org/g/linux-yocto/message/13338
Mute This Topic: https://lists.yoctoproject.org/mt/102900653/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [PATCH 2/5] x86-64: separate out the NUMA features to our existing NUMA scc/cfg

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

A user reported getting NUMA warnings like the ones reported here:

https://www.suse.com/support/kb/doc/?id=21040

"Fail to get numa node for CPU:0 bus:0 dev:0 fn:1"

...and repeated for every core on the platform.  Distracting.

When I asked if it was a crazy big server system with multiple CPU
sockets and localized RAM near each socket - the answer was "no".

Turns out they didn't choose NUMA support - rather we did it for them.

Yocto has been and still remains more "embedded leaning".  That is not
to say we can't support NUMA.  We just shouldn't be enabling it by
default in the base x86-64 config fragment that everyone uses.

Move the two NUMA settings that were not in our existing numa.cfg
feature out of the BSP and into the feature.

Signed-off-by: Paul Gortmaker 
---
 bsp/intel-x86/intel-x86-64.cfg | 7 ---
 features/numa/numa.cfg | 2 ++
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-64.cfg
index a8de30cae983..f31711e73181 100644
--- a/bsp/intel-x86/intel-x86-64.cfg
+++ b/bsp/intel-x86/intel-x86-64.cfg
@@ -2,13 +2,6 @@
 #
 # General setup
 #
-CONFIG_NUMA_BALANCING=y
-CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
-
-#
-# ACPI NUMA
-#
-CONFIG_X86_64_ACPI_NUMA=y
 
 # EDAC
 CONFIG_EDAC=y
diff --git a/features/numa/numa.cfg b/features/numa/numa.cfg
index cc550c4c3c96..e925f90ea148 100644
--- a/features/numa/numa.cfg
+++ b/features/numa/numa.cfg
@@ -1,5 +1,7 @@
 # SPDX-License-Identifier: MIT
 CONFIG_NUMA=y
+CONFIG_NUMA_BALANCING=y
+CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
 CONFIG_X86_64_ACPI_NUMA=y
 CONFIG_NUMA_EMU=y
 CONFIG_NODES_SHIFT=6
-- 
2.40.0





[linux-yocto] [PATCH 0/5] kernel-cache: config cleanups

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

Bruce,

Here are a few things that have bugged me and finally added up to make a
series possibly worthwhile.

We've been forcing NUMA and EDAC and 512 CPU support on everyone: even
people building for a point-of-sale terminal using an Intel Atom.  It
doesn't make sense.  Let those "big iron" server folks opt-in as they
see fit using our features support.
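
Opting in would then be a one-liner in a BSP .bbappend or local.conf -
a sketch, assuming the features keep their current scc paths in the
kernel-cache:

```
KERNEL_FEATURES:append = " features/numa/numa.scc features/edac/edac.scc"
```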

I didn't touch x86-32 in regards to the above changes, because I
consider that "legacy EOL" -- which generally means "leave alone".

Also, latencytop "expired" years ago, so we catch up accordingly.

Target branch is master so we have time to see if anyone complains.
I don't see a lot of value in backporting to already released branches.

Further details are in the commit logs.

Thanks,
Paul.
--

Paul Gortmaker (5):
  x86-64: consolidate crypto options
  x86-64: separate out the NUMA features to our existing NUMA scc/cfg
  x86-64: don't force EDAC support on everyone
  x86-64: use the defaults for number of CPUs
  BSP: remove from all - latencytop feature inclusion

 bsp/amd-x86/amd-x86-64.scc|  1 -
 bsp/bcm-2xxx-rpi/bcm-2xxx-rpi.scc |  1 -
 bsp/common-pc/common-pc-preempt-rt.scc|  1 -
 .../fsl-mpc8315e-rdb-preempt-rt.scc   |  1 -
 .../fsl-mpc8315e-rdb-standard.scc |  1 -
 bsp/intel-common/intel-developer-drivers.scc  |  1 -
 bsp/intel-x86/intel-x86-64.cfg| 31 +++
 bsp/intel-x86/intel-x86.scc   |  1 -
 bsp/minnow/minnow-preempt-rt.scc  |  1 -
 bsp/minnow/minnow-standard.scc|  1 -
 bsp/mti-malta32/mti-malta32.scc   |  1 -
 bsp/mti-malta64/mti-malta64-be-developer.scc  |  1 -
 bsp/qemu-ppc32/qemu-ppc32.scc |  1 -
 bsp/qemu-ppc64/qemu-ppc64-standard.scc|  3 --
 .../qemumicroblazeeb-standard.scc |  1 -
 .../qemumicroblazeel-standard.scc |  1 -
 bsp/xilinx/zynq-standard.scc  |  1 -
 features/edac/edac.cfg|  8 +
 features/numa/numa.cfg|  2 ++
 19 files changed, 14 insertions(+), 45 deletions(-)

-- 
2.40.0





[linux-yocto] [PATCH 1/5] x86-64: consolidate crypto options

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

No functional change - just makes further reorganization and
refactoring easier to review/parse.

Signed-off-by: Paul Gortmaker 
---
 bsp/intel-x86/intel-x86-64.cfg | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-64.cfg
index 682d5f9d125f..a8de30cae983 100644
--- a/bsp/intel-x86/intel-x86-64.cfg
+++ b/bsp/intel-x86/intel-x86-64.cfg
@@ -9,10 +9,6 @@ CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
 # ACPI NUMA
 #
 CONFIG_X86_64_ACPI_NUMA=y
-CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
-CONFIG_CRYPTO_SHA1_SSSE3=m
-CONFIG_CRYPTO_SHA256_SSSE3=m
-CONFIG_CRYPTO_SHA512_SSSE3=m
 
 # EDAC
 CONFIG_EDAC=y
@@ -38,6 +34,10 @@ CONFIG_PCI_IOV=y
 CONFIG_CRYPTO=y
 CONFIG_CRYPTO_SHA1=y
 CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
+CONFIG_CRYPTO_SHA1_SSSE3=m
+CONFIG_CRYPTO_SHA256_SSSE3=m
+CONFIG_CRYPTO_SHA512_SSSE3=m
 CONFIG_CRYPTO_AES_NI_INTEL=m
 
 # For different QAT devices
-- 
2.40.0





[linux-yocto] [PATCH 4/5] x86-64: use the defaults for number of CPUs

2023-11-30 Thread Paul Gortmaker via lists.yoctoproject.org
From: Paul Gortmaker 

The x86-64 BSP isn't quite the same as "more specific" BSPs like a
Beaglebone Black or the (now deleted) Edgerouter.  Where we have exact
hardware specifics for boards like those, the x86-64 BSP is more of a
"generic" thing used as the baseline across an endless sea of boards.

To that end, this is somewhat a revert of commit bd77e1f904f6
("bsp/intel-x86: change the supported maximum number of CPUs to 512 in 64-bit bsp")

It is great that a handful of people out there are using Yocto on these
huge server machines, but that doesn't reflect 99% of the rest of us who
continue to lean towards the original "embedded theme" of Yocto.

That means a whole bunch of extra per-CPU jumping through hoops; some
can be mitigated by booting with "nr_cpus=4" (or whatever the core count
is), but I'd guarantee that almost nobody out there is doing that.

Let those users with the crazy CPU count own that config customization
locally.  The default is 64 which still seems way too large IMHO, but
at least we are moving in the right direction.
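
For anyone who does own that customization locally, it amounts to a
one-line fragment carried in their own layer - a sketch (the file name
is made up; the value is whatever the hardware actually has):

```
# local-nr-cpus.cfg - hypothetical local fragment for big-iron machines
CONFIG_NR_CPUS=512
```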

Signed-off-by: Paul Gortmaker 
---
 bsp/intel-x86/intel-x86-64.cfg | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/bsp/intel-x86/intel-x86-64.cfg b/bsp/intel-x86/intel-x86-64.cfg
index 58b0fed637e8..da9bc7b57eca 100644
--- a/bsp/intel-x86/intel-x86-64.cfg
+++ b/bsp/intel-x86/intel-x86-64.cfg
@@ -31,6 +31,3 @@ CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
 
 # x86 CPU resource control support
 CONFIG_X86_CPU_RESCTRL=y
-
-# Processor type and features
-CONFIG_NR_CPUS=512
-- 
2.40.0





Re: [linux-yocto] [V2-revised] Microchip polarfire SoC - yocto-kernel-cache & linux-yocto V2 patch.

2023-11-20 Thread Paul Gortmaker via lists.yoctoproject.org
[Re: [linux-yocto] [V2-revised] Microchip polarfire SoC - yocto-kernel-cache & 
linux-yocto V2 patch.] On 16/11/2023 (Thu 18:24) Kadambathur Subramaniyam, 
Saravanan via lists.yoctoproject.org wrote:

> Hi Bruce,
> We have two requests, first one is for linux-yocto and the second one is for
> Yocto-kernel-cache.  I received your email about patch corruption and to 
> resend
> V3 for the yocto-kernel-cache but this email is meant for linux-yocto. 
> 
> For the linux-yocto, since it has around 120+ patches i generated PR using 
> "git
> request-pull" command and sent through email client(outlook).

Saravanan.K.S,

I am a bit concerned about your workflow, even if Bruce (or git) finally did 
manage
to pipe your submission through w3m or ???  Firstly - "outlook" and then...

> For the yocto-kernel-cache today i resent the patch twice to you through "git
> send-email" but still its not delivered to you / linux-yocto mailing list...

E-mail is not instant.  There are moderators involved when there are
people sending who may not be subscribed - or if someone sends 100+
patches at once, a moderator might need to confirm it is not spam.  If
you don't wait, check, and confirm - then you risk filling the
maintainer's mailbox with multiple copies, which is not nice.

> PFB the log which i got from my git send-email command. I checked our build
> server by sending git send-email to my email id and i received the email

For the record, git send-email doesn't really do a lot beyond looking
for Subject, Date, and From lines.  Beyond that, you can fill the
"to-send" files with completely random noise and they will send.

That said, it really isn't for Bruce to review your process and find
your issues.  All this stuff has been solved on-line in hundreds
(thousands?) of forums.  But he gave you a giant hint - he wanted v3
in "plain text" - see here:

https://lists.yoctoproject.org/g/linux-yocto/message/13293

There are pages in the kernel covering this:

"No MIME, no links, no compression, no attachments.  Just plain text"

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst#n280

When I opened your e-mail in gmail, this is what I saw:

   --_000_SJ0PR11MB82969FC0AEA199B8A587ADA1F1B0ASJ0PR11MB8296namp_
   Content-Type: text/plain; charset="utf-8"
   Content-Transfer-Encoding: base64
   
   SGkgQnJ1Y2UsDQpXZSBoYXZlIHR3byByZXF1ZXN0cywgZmlyc3Qgb25lIGlzIGZvciBsaW51eC15
   b2N0byBhbmQgdGhlIHNlY29uZCBvbmUgaXMgZm9yIFlvY3RvLWtlcm5lbC1jYWNoZS4gIEkgcmVj
   ZWl2ZWQgeW91ciBlbWFpbCBhYm91dCBwYXRjaCBjb3JydXB0aW9uIGFuZCB0byByZXNlbmQgVjMg
   Zm9yIHRoZSB5b2N0by1rZXJuZWwtY2FjaGUgYnV0IHRoaXMgZW1haWwgaXMgbWVhbnQgZm9yIGxp
   [thousands more similar unreadable lines...]

See that "base64" line?  That means it is not plain text.  It is encoded.
Once again, the kernel has some info on e-mail clients (like "outlook").

Even without that link, just scroll down your own v3 send.  Notice the
font size change between your mail and the one from Bruce?  That tells
you that you did not send plain text.
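
A quick self-check is cheap before sending.  Something like the sketch
below (the sample message and path are made up; a real check would loop
over every file git format-patch wrote):

```shell
# Create a stand-in outgoing message, then flag it if it is not plain
# text.  base64 or quoted-printable transfer encoding means your mailer
# mangled it and the list will see unreadable noise.
cat > /tmp/msg.eml <<'EOF'
From: dev@example.com
Subject: [PATCH] demo
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Signed-off-by: dev
EOF

if grep -qi '^Content-Transfer-Encoding: *base64' /tmp/msg.eml; then
    verdict="NOT plain text - fix your mailer before sending"
else
    verdict="plain text - OK to send"
fi
echo "$verdict"
```

Sending the series to yourself or a co-worker first, then running this
over what actually arrived, catches the mailer-side damage too.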
   
> Let me check and retry sending the yocto-kernel-cache patches to you. 
> 
> In the meantime could you please help to merge the below pull request for
> linux-yocto?. 

I'd suggest you do some more test sends with an internal co-worker
before sending any more large patch series externally.  Make 100% sure
they are plain text - not HTML-mail or base64-mail.  Bruce has been
generous with his time by "unwinding" the base64 stuff this time, but we
need to do better with future submissions.

Thanks,
Paul.
--

> 
> Log:
> skadamba@blr-linux-engg1$git send-email --annotate --subject-prefix="v3]
> [yocto-kernel-cache yocto-6.1][PATCH" --to bruce.ashfi...@gmail.com
> --suppress-cc=all --cc linux-yocto@lists.yoctoproject.org -M yocto-6.1
> /tmp/KrQspiQqYJ/0001-microchip-polarfire-soc-add-configure-file-for-micro.patch
> OK. Log says:
> Sendmail: /usr/sbin/sendmail -i bruce.ashfi...@gmail.com
> linux-yocto@lists.yoctoproject.org
> From: "Saravanan.K.S" 
> To: bruce.ashfi...@gmail.com
> Cc: linux-yocto@lists.yoctoproject.org
> Subject: [v3][yocto-kernel-cache yocto-6.1][PATCH] microchip-polarfire-soc: 
> add
> configure file for microchip-polarfire-soc BSP in kernel-cache
> Date: Thu, 16 Nov 2023 17:49:19 +
> Message-Id:
> <20231116174919.3056255-1-saravanan.kadambathursubramani...@windriver.com>
> X-Mailer: git-send-email 2.40.0
> MIME-Version: 1.0
> Content-Transfer-Encoding: 8bit
> 
> Result: OK
> 
> ---
> From: Bruce Ashfield 
> Sent: Thursday, November 16, 2023 11:30 PM
> To: Kadambathur Subramaniyam, Saravanan
> 
> Cc: linux-yocto@lists.yoctoproject.org 
> Subject: Re: [linux-yocto] [V2-revised] Microchip polarfire SoC -
> yocto-kernel-cache & linux-yocto V2 patch.
>  
> CAUTION: This email comes from a non Wind River email account!
> Do not click links or open attachments unless you recognize the sender 

Re: [linux-yocto][v5.10/standard/preempt-rt/base][PATCH] fix linux-yocto-rt compile error

2023-10-23 Thread Paul Gortmaker via lists.yoctoproject.org
[[linux-yocto][v5.10/standard/preempt-rt/base][PATCH] fix linux-yocto-rt 
compile error] On 22/10/2023 (Sun 19:21) Li Wang via lists.yoctoproject.org 
wrote:

> kernel-source/include/net/sch_generic.h:198:17: error: implicit
> declaration of function 'raw_write_seqcount_t_begin'; did you mean
> 'raw_write_seqcount_begin'? [-Werror=implicit-function-declaration]

Your commit seems reasonable, but it is missing one simple step:
running "git blame" on the unpatched file.

It isn't so much about the "blame" -- but knowing where the issue
originated from, so we can direct it to other development streams if
appropriate.

Doing so, I see afe3f03a84d51:

 -
commit afe3f03a84d5119b8a8af700e8360e4e4e2dc33c
Author: Sebastian Andrzej Siewior 
AuthorDate: Tue Sep 8 16:57:11 2020 +0200
Commit: Bruce Ashfield 
CommitDate: Thu Dec 17 12:35:26 2020 -0500

net: Properly annotate the try-lock for the seqlock
 -

So now we have more questions.

This is an old commit from 2020.  Why is it showing up as compile
breakage today?

Does the commit added to v5.10-yocto match the original in the
linux-stable-rt repo, or did Bruce do a compile tweak for it on the fly
back 3y ago and now upstream fixed the function name to not look like a
typedef?

Are we going to encounter the same issue on v5.15 in another 24 hours?

Your job as submitter is not just to provide the "raw" fix to Bruce,
but to ALSO provide the "how did we get here" story, so he has a better
idea of the scope of impact, and can react better in the future because
he knows what happened here and how it can be prevented next time.
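
That first step can be scripted.  Here is a sketch using a throwaway
repo as a stand-in for the real tree (the file name and commit subject
just mirror the case above):

```shell
# Build a one-commit stand-in repo, then ask git which commit introduced
# the offending line - the same question "git blame" answered above.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo 'raw_write_seqcount_t_begin(s);' > sch_generic.h
git add sch_generic.h
git commit -q -m 'net: Properly annotate the try-lock for the seqlock'

# --line-porcelain carries a "summary" field: the originating subject.
summary=$(git blame --line-porcelain -L1,1 sch_generic.h | sed -n 's/^summary //p')
echo "$summary"
```

With the subject in hand, `git log --grep` against linux-stable-rt tells
you whether the carried commit matches the upstream one.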

Thanks,
Paul.
--

> 
> Signed-off-by: Li Wang 
> ---
>  include/net/sch_generic.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
> index 72be68652bb8..4574dd262efd 100644
> --- a/include/net/sch_generic.h
> +++ b/include/net/sch_generic.h
> @@ -195,7 +195,7 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
>* Variant of write_seqcount_t_begin() telling lockdep that a
>* trylock was attempted.
>*/
> - raw_write_seqcount_t_begin(s);
> + raw_write_seqcount_begin(s);
>   seqcount_acquire(&s->dep_map, 0, 1, _RET_IP_);
>   return true;
>   }
> -- 
> 2.25.1
> 

> 
> 
> 





Re: [linux-yocto] [PATCH] kernel/sched: Fix double free on invalid isolcpus/nohz_full params

2023-08-18 Thread Paul Gortmaker via lists.yoctoproject.org
[[linux-yocto] [PATCH] kernel/sched: Fix double free on invalid 
isolcpus/nohz_full params] On 17/08/2023 (Thu 10:56) Adrian Cinal via 
lists.yoctoproject.org wrote:

> A previous patch left behind a redundant call to free_bootmem_cpumask_var
> possibly leading to a double free (once in the if-branch and once in the
> unwind code at the end of the function) if the isolcpus= or nohz_full=
> kernel command line parameters failed validation, cf.:
> 
> https://lists.yoctoproject.org/g/linux-yocto/message/12797

Once again, I wanted to know exactly how we got here. So here is how.

The key to this issue is this commit from v5.18:

commit 0cd3e59de1f53978873669c7c8225ec13e88c3ae
Author: Frederic Weisbecker 
Date:   Mon Feb 7 16:59:08 2022 +0100

sched/isolation: Consolidate error handling

Centralize the mask freeing and return value for the error path. This
makes potential leaks more visible.

Simple and common enough; it contains multiple instances of:

-   free_bootmem_cpumask_var(non_housekeeping_mask);
-   return 0;
+   goto free_non_housekeeping_mask;

But from a kernel version perspective, it means we have an "old style"
free and a "new style" free.

The patch sent to lkml was for post-5.18 kernels, and used the new style,
and was even documented as such in the 0/N (see asterisks):

 -
This is a rebase and retest of two fixes I'd sent earlier[1].

The rebase is required due to conflicts in my patch #1 and where Frederic
updated the unwind code in housekeeping_setup in his series[2] and that series
is now in sched/core of tip[3].

So this update is against a baseline of ed3b362d54f0 found in sched/core as
"sched/isolation: Split housekeeping cpumask per isolation features" in tip.

   **
Changes amount to "return 0" ---> "goto out_free" and adding a nod to PaulM's
observation that nohz_full w/o a cpuset is coming someday into the commit log.
   **

[1] 
https://lore.kernel.org/all/20211206145950.10927-1-paul.gortma...@windriver.com/
[2] https://lore.kernel.org/all/20220207155910.527133-1-frede...@kernel.org/
[3] git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
 -

https://lore.kernel.org/lkml/20220221182009.1283-1-paul.gortma...@windriver.com/

Similarly, the submission for yocto for the v5.15 kernel (which was current
at the time) used the old style and everything was fine.

When yocto moved to v6.1, the uprev generally carries forward commits that
have not been merged to mainline or declared obsolete.

So the v5.15 old style commit was carried forward to v6.1, resulting in a
mix of old and new style free.  Not ideal, but still functionally correct.

The double free risk comes from your change which deleted the "return 0"
underneath the old free:

https://lists.yoctoproject.org/g/linux-yocto/topic/99772129

It shouldn't have done that, and I missed it in review.
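
As an aside for anyone following along, the bug class is small enough to
sketch outside the kernel.  A minimal userspace illustration (plain C,
not the kernel code) of the consolidated style the fix restores - error
paths reach the label with no extra free left in front of the goto:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns 0 on success, -1 on invalid input.  Every path frees the
 * buffer exactly once, via the single unwind label.  The double free
 * described above came from an explicit free() left in place right
 * before a goto into unwind code that frees the same buffer again. */
static int setup(const char *str)
{
	int ret = -1;
	char *mask = malloc(16);

	if (!mask)
		return -1;
	if (str[0] == '\0') {
		puts("no valid CPUs");
		goto free_mask;	/* right: no free(mask) before the goto */
	}
	snprintf(mask, 16, "%s", str);
	printf("mask=%s\n", mask);
	ret = 0;
free_mask:
	free(mask);		/* the one, centralized free */
	return ret;
}
```

setup("") takes the error path and setup("0-3") the normal one; both end
at the same free, which is the whole point of the consolidation.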

Bruce: the executive summary is that this delta fix is correct, and should
be placed on the 6.1/6.4 branches that got the previous commit from Adrian.

The yocto-kernel-cache can and should ignore all this churn, and simply
contain the post v5.18 "new style" version of the commit sent to lkml:

https://lore.kernel.org/lkml/20220221182009.1283-2-paul.gortma...@windriver.com/

Thanks,
Paul.




> 
> Signed-off-by: Adrian Cinal 
> ---
>  kernel/sched/isolation.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index b97d6e05013d..7bebfdc42486 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -133,7 +133,6 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
>  
>   if (cpumask_empty(non_housekeeping_mask)) {
>   pr_info("housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' has no valid CPUs.\n");
> - free_bootmem_cpumask_var(non_housekeeping_mask);
>   goto free_non_housekeeping_mask;
>   }
>  
> -- 
> 2.41.0
> 

> 
> 
> 





Re: [linux-yocto] [PATCH] kernel/sched: Fix uninitialized read in nohz_full/isolcpus setup

2023-08-14 Thread Paul Gortmaker via lists.yoctoproject.org
[Re: [linux-yocto] [PATCH] kernel/sched: Fix uninitialized read in 
nohz_full/isolcpus setup] On 26/06/2023 (Mon 11:05) Paul Gortmaker wrote:

> [[linux-yocto] [PATCH] kernel/sched: Fix uninitialized read in 
> nohz_full/isolcpus setup] On 25/06/2023 (Sun 18:50) Adrian Cinal via 
> lists.yoctoproject.org wrote:
> 
> > Fix reading uninitialized cpumask and using it to validate the nohz_full=
> > and isolcpus= kernel command line parameters.
> > 
> > An older version of a patch from lkml was incorporated into linux-yocto,
> > whereas a newer, rebased version was later published. See:
> > https://lore.kernel.org/lkml/20220221182009.1283-1-paul.gortma...@windriver.com/
> 
> Let me remind myself of what got merged upstream and what didn't and
> why, and I'll follow up shortly with a Yocto specific update.

Sorry for the delayed reply.  The commit log kind of confused me for a
while until I had a quiet moment to get the cobwebs out of my head and
realize what happened.

Your fix is correct. The v6.1 (and v6.4) kernels are performing the
sanity tests on uninitialized memory and hence isolcpus= can randomly
reject perfectly valid inputs.  Same for nohz_full= it seems.

I'd suggest we augment the commit log with this:

 --
PG: To be more clear as to what happened here - it isn't a broken older
patch from lkml integrated into linux-yocto.  It is a carry forward of
a correct commit from the v5.15 linux-yocto kernel:

https://git.yoctoproject.org/linux-yocto/commit/?id=97c96388922

...in which case the sanity checks are properly *after* the allocation
and processing of the bootargs into the cpumask.

However, it seems patch (or wiggle?) decided to put the
sanity checks *before* the population of the cpumask during the
carry-forward and generation of the new v6.1 kernel.  Meaning they are
validating uninitialized memory and hence nohz_full= and isolcpus= are
subject to random failures even for valid input ranges.

Acked-by: Paul Gortmaker 
 --

Bruce - both carry-forwards -- the v6.1 [d81fac6e842] and v6.4 kernels
[23b162bc3058] have this issue.  The commit IDs above are in their
respective standard/base version and hence this fix will have to also
land there and be merged out to -rt and all BSPs etc.

The copies in the yocto-kernel-cache also have the sanity checks above
the actual cpulist_parse(str, non_housekeeping_mask) which populates the
cpumask with the data to be validated and hence are also broken.

https://git.yoctoproject.org/yocto-kernel-cache/tree/features/clear_warn_once/sched-isolation-really-align-nohz_full-with-rcu_nocb.patch?h=yocto-6.1
https://git.yoctoproject.org/yocto-kernel-cache/tree/features/clear_warn_once/sched-isolation-really-align-nohz_full-with-rcu_nocb.patch?h=yocto-6.4

Thanks to Adrian for tracking this down and sending the fix!

Paul.
--

> 
> Thanks,
> Paul.
> --
> 
> > 
> > Signed-off-by: Adrian Cinal 
> > ---
> >  kernel/sched/isolation.c | 12 ++--
> >  1 file changed, 6 insertions(+), 6 deletions(-)
> > 
> > diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> > index 73386019efcb..b97d6e05013d 100644
> > --- a/kernel/sched/isolation.c
> > +++ b/kernel/sched/isolation.c
> > @@ -119,6 +119,12 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> > }
> > }
> >  
> > +   alloc_bootmem_cpumask_var(&non_housekeeping_mask);
> > +   if (cpulist_parse(str, non_housekeeping_mask) < 0) {
> > +   pr_warn("Housekeeping: nohz_full= or isolcpus= incorrect CPU range\n");
> > +   goto free_non_housekeeping_mask;
> > +   }
> > +
> > if (!cpumask_subset(non_housekeeping_mask, cpu_possible_mask)) {
> > pr_info("housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' contains nonexistent CPUs.\n");
> > cpumask_and(non_housekeeping_mask, cpu_possible_mask,
> > @@ -128,12 +134,6 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
> > if (cpumask_empty(non_housekeeping_mask)) {
> > pr_info("housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' has no valid CPUs.\n");
> > free_bootmem_cpumask_var(non_housekeeping_mask);
> > -   return 0;
> > -   }
> > -
> > -   alloc_bootmem_cpumask_var(&non_housekeeping_mask);
> > -   if (cpulist_parse(str, non_housekeeping_mask) < 0) {
> > -   pr_warn("Housekeeping: nohz_full= or isolcpus= incorrect CPU range\n");
> > goto free_non_housekeeping_mask;
> > }
> >  
> > -- 
> > 2.41.0
> > 
> 
> > 
> > 
> > 
> 




Re: [linux-yocto] [PATCH] kernel/sched: Fix uninitialized read in nohz_full/isolcpus setup

2023-06-26 Thread Paul Gortmaker via lists.yoctoproject.org
[[linux-yocto] [PATCH] kernel/sched: Fix uninitialized read in 
nohz_full/isolcpus setup] On 25/06/2023 (Sun 18:50) Adrian Cinal via 
lists.yoctoproject.org wrote:

> Fix reading uninitialized cpumask and using it to validate the nohz_full=
> and isolcpus= kernel command line parameters.
> 
> An older version of a patch from lkml was incorporated into linux-yocto,
> whereas a newer, rebased version was later published. See:
> https://lore.kernel.org/lkml/20220221182009.1283-1-paul.gortma...@windriver.com/

Let me remind myself of what got merged upstream and what didn't and
why, and I'll follow up shortly with a Yocto specific update.

Thanks,
Paul.
--

> 
> Signed-off-by: Adrian Cinal 
> ---
>  kernel/sched/isolation.c | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index 73386019efcb..b97d6e05013d 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -119,6 +119,12 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
>   }
>   }
>  
> + alloc_bootmem_cpumask_var(&non_housekeeping_mask);
> + if (cpulist_parse(str, non_housekeeping_mask) < 0) {
> + pr_warn("Housekeeping: nohz_full= or isolcpus= incorrect CPU range\n");
> + goto free_non_housekeeping_mask;
> + }
> +
>   if (!cpumask_subset(non_housekeeping_mask, cpu_possible_mask)) {
>   pr_info("housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' contains nonexistent CPUs.\n");
>   cpumask_and(non_housekeeping_mask, cpu_possible_mask,
> @@ -128,12 +134,6 @@ static int __init housekeeping_setup(char *str, unsigned long flags)
>   if (cpumask_empty(non_housekeeping_mask)) {
>   pr_info("housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' has no valid CPUs.\n");
>   free_bootmem_cpumask_var(non_housekeeping_mask);
> - return 0;
> - }
> -
> - alloc_bootmem_cpumask_var(&non_housekeeping_mask);
> - if (cpulist_parse(str, non_housekeeping_mask) < 0) {
> - pr_warn("Housekeeping: nohz_full= or isolcpus= incorrect CPU range\n");
>   goto free_non_housekeeping_mask;
>   }
>  
> -- 
> 2.41.0
> 



View/Reply Online (#12803): 
https://lists.yoctoproject.org/g/linux-yocto/message/12803



Re: [linux-yocto] Trial merge of v5.15.111 v6.1.28 for linux-yocto

2023-05-11 Thread Paul Gortmaker via lists.yoctoproject.org
[[linux-yocto] Trial merge of v5.15.111 v6.1.28 for linux-yocto] On 12/05/2023 
(Fri 10:19) Kevin Hao via lists.yoctoproject.org wrote:

> Hi Bruce,
> 
> This is a trial merge of the stable kernel v5.15.111 v6.1.28 for the 
> following branches in the linux-yocto.

[...]

> This is a much bigger stable cycle. There are 372 patches in v5.15.111 and 
> 611 patches in v6.1.28.

That is a lot of conflicts. I hope most were semi-trivial. It sure would
be nice if the stable cycles had the smaller footprint they had ten years ago.

> So we got more merge conflicts than usual. There is one merge conflict in 
> kernel/workqueue.c on
> v5.15 rt kernel, all the others are for various drivers. I believe I have 
> fixed all the mess and 
> all the merges have passed my build test. I have pushed these branches to:
> https://github.com/haokexin/linux

Thanks for helping out and doing this non-trivial amount of work.

Paul.
--

> 
> You can use this as a reference for the linux-yocto stable kernel bump.
> 
> Thanks,
> Kevin
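For reference, a trial merge like the one Kevin describes boils down to merging each stable tag into a scratch copy of the branch and checking the result. A hypothetical sketch of that workflow, demonstrated on a throwaway repository (the branch and tag names `yocto-branch`/`stable-tag` are illustrative, not real linux-yocto refs):

```shell
#!/bin/sh
# Sketch of a trial-merge check on a throwaway repo; the names yocto-branch
# and stable-tag stand in for a real linux-yocto branch and a stable tag
# such as v5.15.111.
set -e
work=$(mktemp -d)
cd "$work"
git init -q -b main
git config user.email trial@example.com
git config user.name trial

# Common ancestor shared by the "stable" line and the "yocto" branch.
echo v1 > base.txt
git add base.txt && git commit -qm "base"

# The yocto branch diverges with its own feature work.
git checkout -qb yocto-branch
echo feature > feature.txt
git add feature.txt && git commit -qm "yocto feature"

# Meanwhile the stable line advances and gets tagged.
git checkout -q main
echo v2 >> base.txt
git add base.txt && git commit -qm "stable update"
git tag -f stable-tag

# Trial merge: merge the stable tag into a scratch copy of the branch,
# so the real branch is untouched if the merge turns out messy.
git checkout -q yocto-branch
git checkout -qb trial/yocto-branch
if git merge -q --no-edit stable-tag; then
    echo "merge clean"
else
    echo "merge conflicts:"
    git diff --name-only --diff-filter=U
fi
```

After a clean trial merge, a build test of the scratch branch (as Kevin did) is what confirms the conflict resolutions are actually correct, not just textually mergeable.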



View/Reply Online (#12498): 
https://lists.yoctoproject.org/g/linux-yocto/message/12498