From: Markus Elfring
Date: Fri, 29 Dec 2023 09:28:09 +0100
A few update suggestions, derived from static source code analysis,
were taken into account.
Markus Elfring (2):
Improve three size determinations
Improve error handling in virtio_fs_get_tree()
fs/fuse/virtio_fs.c | 19 +++
1
From: Markus Elfring
Date: Fri, 29 Dec 2023 08:42:04 +0100
Use a pointer dereference instead of the data structure name as the
operand of the "sizeof" operator, to make the corresponding size
determinations a bit safer according to the Linux coding-style convention.
This issue was detected by using the Coccinelle software.
From: Markus Elfring
Date: Fri, 29 Dec 2023 09:15:07 +0100
The kfree() function was called in two cases by
the virtio_fs_get_tree() function during error handling
even if the passed variable contained a null pointer.
This issue was detected by using the Coccinelle software.
* Thus use another la
On Fri, Dec 29, 2023 at 09:38:47AM +0100, Markus Elfring wrote:
> From: Markus Elfring
> Date: Fri, 29 Dec 2023 09:15:07 +0100
>
> The kfree() function was called in two cases by
> the virtio_fs_get_tree() function during error handling
> even if the passed variable contained a null pointer.
So
crypto: crypto@1dfa000 {
---
base-commit: 39676dfe52331dba909c617f213fdb21015c8d10
change-id: 20231229-sc7280-cryptobam-fixup-fb5f94a5572f
Best regards,
--
Luca Weiss
>> The kfree() function was called in two cases by
>> the virtio_fs_get_tree() function during error handling
>> even if the passed variable contained a null pointer.
>
> So what? kfree(NULL) is perfectly acceptable.
I suggest reconsidering the usefulness of such a special function call.
> Are
On 28/12/2023 11:41, Ulf Hansson wrote:
Let's avoid some of the boilerplate code to manage the vcodec PM domains
by converting to dev_pm_domain_attach|detach_list().
Cc: Mauro Carvalho Chehab
Cc: Stanimir Varbanov
Cc: Vikash Garodia
Cc: "Bryan O'Donoghue"
Cc: Bjorn Andersson
Cc: Ko
On Thu, 28 Dec 2023 23:05:21 -0500
Steven Rostedt wrote:
> On Wed, 27 Dec 2023 07:57:08 +0900
> Masami Hiramatsu (Google) wrote:
>
> > On Tue, 26 Dec 2023 12:59:02 -0500
> > Steven Rostedt wrote:
> >
> > > From: "Steven Rostedt (Google)"
> > >
> > > The tracefs file "buffer_percent" is to a
base-commit: 39676dfe52331dba909c617f213fdb21015c8d10
change-id: 20231229-fp5-reserved-mem-b88e822b1127
Best regards,
--
Luca Weiss
On 29.12.2023 13:53, Luca Weiss wrote:
> It seems we also need to reserve a region of 81 MiB called "removed_mem"
> otherwise we can easily hit the following error with higher RAM usage:
>
> [ 1467.809274] Internal error: synchronous external abort: 9610
> [#2] SMP
>
> Fixes: eee96
On Thu, 28 Dec 2023 09:51:49 -0500
Steven Rostedt wrote:
> From: "Steven Rostedt (Google)"
>
> If an application blocks on the snapshot or snapshot_raw files, expecting
> to be woken up when a snapshot occurs, it will not happen. Or it may
> happen with an unexpected result.
>
> That result is
On Fri, Dec 29, 2023 at 10:10:08AM +0100, Markus Elfring wrote:
> >> The kfree() function was called in two cases by
> >> the virtio_fs_get_tree() function during error handling
> >> even if the passed variable contained a null pointer.
> >
> > So what? kfree(NULL) is perfectly acceptable.
>
> I
On Wed, 27 Dec 2023 21:38:25 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> If ftrace_register_direct() is called with a large number of target
There's no function called "ftrace_register_direct()", I guess you meant
register_ftrace_direct()?
> functions (e.g. 65)
From: "Steven Rostedt (Google)"
Masami Hiramatsu reported a memory leak in register_ftrace_direct() that
occurs when the number of new entries added is large enough to cause two
allocations in the loop:
for (i = 0; i < size; i++) {
hlist_for_each_entry(entry, &hash->buckets[i
From: Vincent Donnefort
In preparation for allowing the user-space to map a ring-buffer, add
a set of mapping functions:
ring_buffer_{map,unmap}()
ring_buffer_map_fault()
And controls on the ring-buffer:
ring_buffer_map_get_reader() /* swap reader and head */
Mapping the ring-buffer al
I'm sending this to a wider audience, as I want to hear more
feedback on this before I accept it.
Vincent has been working on allowing the ftrace ring buffer to be
memory mapped into user space. This has been going on since
last year, where we talked at the 2022 Tracing Summit in London.
Vincen
From: Vincent Donnefort
Currently, user-space extracts data from the ring-buffer via splice,
which is handy for storage or network sharing. However, due to splice
limitations, it is impossible to do real-time analysis without a copy.
A solution for that problem is to let the user-space map the ri
On Fri, 29 Dec 2023 13:40:50 -0500
Steven Rostedt wrote:
> I'm sending this to a wider audience, as I want to hear more
> feedback on this before I accept it.
>
I forgot to mention that this can be applied on top of:
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git for-n
On 12/28/2023 3:41 AM, Ulf Hansson wrote:
Attaching/detaching of a device to multiple PM domains has started to
become a common operation for many drivers, typically during ->probe() and
->remove(). In most cases, this has led to lots of boilerplate code in the
drivers.
To fix up the situat
this_cpu_ptr() is rather more expensive than raw_cpu_read() since
the latter can use an 'offset from register' (%gs for x86-64).
Add a 'self' field to 'struct optimistic_spin_node' that can be
read with raw_cpu_read(), initialise on first call.
Signed-off-by: David Laight
---
kernel/locking/osq
osq_wait_next() is passed 'prev' from osq_lock() and NULL from osq_unlock()
but only needs the 'cpu' value to write to lock->tail.
Just pass prev->cpu or OSQ_UNLOCKED_VAL instead.
Also directly return NULL or 'next' instead of breaking the loop.
Should have no effect on the generated code since g
Zeng Heng noted that heavy use of the osq (optimistic spin queue) code
used rather more cpu than might be expected. See:
https://lore.kernel.org/lkml/202312210155.wc2huk8c-...@intel.com/T/#mcc46eedd1ef22a0d668828b1d088508c9b1875b8
Part of the problem is there is a pretty much guaranteed cache line
The vcpu_is_preempted() test stops osq_lock() spinning if a virtual
cpu is no longer running.
Although patched out for bare-metal the code still needs the cpu number.
Reading this from 'prev->cpu' is pretty much guaranteed to cause a cache miss
when osq_unlock() is waking up the next cpu.
Instead
struct optimistic_spin_node is private to the implementation.
Move it into the C file to ensure nothing is accessing it.
Signed-off-by: David Laight
---
include/linux/osq_lock.h | 5 -
kernel/locking/osq_lock.c | 7 +++
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/incl
osq_lock() starts by setting node->next to NULL and node->locked to 0.
Careful analysis shows that node->next is always NULL on entry.
node->locked is set non-zero by another cpu to force a wakeup.
This can only happen after the 'prev->next = node' assignment,
so locked can be set to zero just bef
Masami and Jiri,
This patch made it through all my tests. If I can get an Acked-by by
Sunday, I'll include it in my push to Linus (I have a couple of other fixes
to send him).
-- Steve
On Fri, 29 Dec 2023 11:51:34 -0500
Steven Rostedt wrote:
> From: "Steven Rostedt (Google)"
>
> Masami Hi
osq_lock() starts by setting node->next to NULL and node->locked to 0.
Careful analysis shows that node->next is always NULL on entry.
node->locked is set non-zero by another cpu to force a wakeup.
This can only happen after the 'prev->next = node' assignment,
so locked can be set to zero just bef
On Fri, 29 Dec 2023 at 12:56, David Laight wrote:
>
> osq_wait_next() is passed 'prev' from osq_lock() and NULL from osq_unlock()
> but only needs the 'cpu' value to write to lock->tail.
> Just pass prev->cpu or OSQ_UNLOCKED_VAL instead.
>
> Also directly return NULL or 'next' instead of breaking
On 12/29/23 15:53, David Laight wrote:
struct optimistic_spin_node is private to the implementation.
Move it into the C file to ensure nothing is accessing it.
Signed-off-by: David Laight
---
include/linux/osq_lock.h | 5 -
kernel/locking/osq_lock.c | 7 +++
2 files changed, 7 inse
On 12/29/23 15:56, David Laight wrote:
osq_wait_next() is passed 'prev' from osq_lock() and NULL from osq_unlock()
but only needs the 'cpu' value to write to lock->tail.
Just pass prev->cpu or OSQ_UNLOCKED_VAL instead.
Also directly return NULL or 'next' instead of breaking the loop.
Should h
On 12/29/23 15:57, David Laight wrote:
this_cpu_ptr() is rather more expensive than raw_cpu_read() since
the latter can use an 'offset from register' (%gs for x86-64).
Add a 'self' field to 'struct optimistic_spin_node' that can be
read with raw_cpu_read(), initialise on first call.
Signed-off-
On 12/29/23 15:58, David Laight wrote:
The vcpu_is_preempted() test stops osq_lock() spinning if a virtual
cpu is no longer running.
Although patched out for bare-metal the code still needs the cpu number.
Reading this from 'prev->cpu' is pretty much guaranteed to cause a cache miss
when osq_u
On 12/29/23 17:11, David Laight wrote:
osq_lock() starts by setting node->next to NULL and node->locked to 0.
Careful analysis shows that node->next is always NULL on entry.
node->locked is set non-zero by another cpu to force a wakeup.
This can only happen after the 'prev->next = node' assign
Hi Luis,
On 12/22/23 21:10, Luis Chamberlain wrote:
On Fri, Dec 22, 2023 at 01:13:26PM +0100, Helge Deller wrote:
On 12/22/23 06:59, Luis Chamberlain wrote:
On Wed, Nov 22, 2023 at 11:18:12PM +0100, del...@kernel.org wrote:
On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
(