On Mon, Jun 16, 2025 at 10:25 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> t
This patch implements in-order support for both split virtqueue and
packed virtqueue.
Benchmark with KVM guest + testpmd on the host shows:
For split virtqueue: no obvious differences were noticed
For packed virtqueue:
1) RX gets 3.1% PPS improvements from 6.3 Mpps to 6.5 Mpps
2) TX gets 4.6
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER for
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features;
in-order support is then implemented on top.
Tests show a 3%-5% improvement in packed virtqueue PPS
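The virtqueue-ops indirection the series description refers to can be sketched as a per-layout function table. All names below are illustrative only, not the kernel's actual virtio_ring.c API:

```c
#include <stddef.h>

/* Illustrative ops table: one set of helpers per virtqueue layout
 * (split vs. packed), so in-order variants can be plugged in without
 * scattering feature checks through the hot path. */
struct vq_ops {
	int (*add_buf)(void *vq, const void *buf, unsigned int len);
	void *(*get_buf)(void *vq, unsigned int *len);
};

/* A trivial stand-in "split" implementation. */
static int split_add_buf(void *vq, const void *buf, unsigned int len)
{
	(void)vq; (void)buf; (void)len;
	return 0; /* success */
}

static void *split_get_buf(void *vq, unsigned int *len)
{
	(void)vq;
	*len = 0;
	return NULL; /* no buffer pending in this sketch */
}

static const struct vq_ops split_ops = {
	.add_buf = split_add_buf,
	.get_buf = split_get_buf,
};

/* Callers dispatch through the table chosen at setup time. */
static int vq_add_buf(const struct vq_ops *ops, void *vq,
		      const void *buf, unsigned int len)
{
	return ops->add_buf(vq, buf, len);
}
```

An in-order variant would supply its own table with only the helpers that differ replaced.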
Previously, the order for acquiring the locks required for the migration
function move_enc_context_from() was: 1) memslot lock 2) vCPU lock. This
can trigger a deadlock warning because a vCPU IOCTL modifying memslots
will acquire the locks in reverse order: 1) vCPU lock 2) memslot lock.
This
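The inversion described above is the classic ABBA pattern. A minimal user-space sketch, with pthread mutexes standing in for KVM's memslot and vCPU locks (all names hypothetical):

```c
#include <pthread.h>

static pthread_mutex_t memslot_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vcpu_lock = PTHREAD_MUTEX_INITIALIZER;

/* Old migration path: memslot lock first, then vCPU lock. */
static void migration_path_old(void)
{
	pthread_mutex_lock(&memslot_lock);
	pthread_mutex_lock(&vcpu_lock);
	/* ... move_enc_context_from() work ... */
	pthread_mutex_unlock(&vcpu_lock);
	pthread_mutex_unlock(&memslot_lock);
}

/* vCPU ioctl path: vCPU lock first, then memslot lock. Run concurrently
 * with the path above, each thread can hold one lock and wait forever
 * for the other (ABBA deadlock). */
static void vcpu_ioctl_path(void)
{
	pthread_mutex_lock(&vcpu_lock);
	pthread_mutex_lock(&memslot_lock);
	/* ... memslot update ... */
	pthread_mutex_unlock(&memslot_lock);
	pthread_mutex_unlock(&vcpu_lock);
}

/* Fix: make every path take the locks in one agreed order. */
static int migration_path_fixed(void)
{
	pthread_mutex_lock(&vcpu_lock);
	pthread_mutex_lock(&memslot_lock);
	/* ... migration work ... */
	pthread_mutex_unlock(&memslot_lock);
	pthread_mutex_unlock(&vcpu_lock);
	return 0;
}
```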
we can
> > implement separate helpers for different virtqueue layout/features
> > in-order support is then implemented on top.
> >
> > Tests show a 3%-5% improvement in packed virtqueue PPS with KVM guest
> > testpmd on the host.
>
> ok this looks quite clean. We are i
On Wed, May 28, 2025 at 02:42:15PM +0800, Jason Wang wrote:
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then
On Wed, May 28, 2025 at 8:42 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then the in-o
>
> > > > Tested-by: Lei Yang
> > > >
> > > > On Mon, Mar 24, 2025 at 1:45 PM Jason Wang wrote:
> > > > >
> > > > > Hello all:
> > > > >
> > > > > This series tries to implement the VIRTIO_F_IN_OR
Thanks
Lei
>
>
> > > Tested-by: Lei Yang
> > >
> > > On Mon, Mar 24, 2025 at 1:45 PM Jason Wang wrote:
> > > >
> > > > Hello all:
> > > >
> > > > This series tries to implement VIRTIO_F_IN_ORDER for
> > >
to implement VIRTIO_F_IN_ORDER for
> > > virtio_ring. This is done by introducing virtqueue ops so we can
> > > implement separate helpers for different virtqueue layout/features
> > > in-order support is then implemented on top.
> > >
> > > Tests sh
On Sat, May 17, 2025 at 07:27:51PM +0200, Konrad Dybcio wrote:
> From: Konrad Dybcio
>
> Certain /soc@0 subnodes are very out of order. Reshuffle them.
>
> Signed-off-by: Konrad Dybcio
> ---
> arch/arm64/boot/dts/qcom/sc8280xp.dtsi | 574
> ---
From: Konrad Dybcio
Certain /soc@0 subnodes are very out of order. Reshuffle them.
Signed-off-by: Konrad Dybcio
---
arch/arm64/boot/dts/qcom/sc8280xp.dtsi | 574 -
1 file changed, 287 insertions(+), 287 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom
The TI-SCI processor control handle, 'tsp', will be refactored from
k3_r5_core struct into k3_r5_rproc struct in a future commit. So, the
'tsp' pointer will be initialized inside k3_r5_cluster_rproc_init() now.
Move the k3_r5_release_tsp() function, which releases the tsp handle,
above k3_r5_clust
invoked by the latter.
While at it, also re-order the k3_r5_core_of_get_sram_memories() to keep
all the internal memory initialization functions at one place.
Signed-off-by: Beleswar Padhi
Tested-by: Judith Mendez
Reviewed-by: Andrew Davis
---
v12: Changelog:
1. Carried R/B tag.
Link to
Kdamond.update_schemes_tried_regions() reads and stores tried regions
information out of address order. It makes debugging a test failure
difficult. Change the behavior to do the reading and writing in
address order.
Signed-off-by: SeongJae Park
---
tools/testing/selftests/damon
The TI-SCI processor control handle, 'tsp', will be refactored from
k3_r5_core struct into k3_r5_rproc struct in a future commit. So, the
'tsp' pointer will be initialized inside k3_r5_cluster_rproc_init() now.
Move the k3_r5_release_tsp() function, which releases the tsp handle,
above k3_r5_clust
invoked by the latter.
While at it, also re-order the k3_r5_core_of_get_sram_memories() to keep
all the internal memory initialization functions at one place.
Signed-off-by: Beleswar Padhi
Tested-by: Judith Mendez
---
v11: Changelog:
1. Carried T/B tag.
Link to v10:
https://lore.kern
he legacy interface, the device formatting these as
> > little endian when the guest is big endian would surprise me more
> > than
> > it using guest native byte order (which would make it compatible with
> > the current implementation). Nevertheless somebody trying to
>
On Thu, 17 Apr 2025 11:01:54 -0700 Dan Williams
wrote:
> Darrick J. Wong wrote:
> > On Thu, Apr 10, 2025 at 12:12:33PM -0700, Alison Schofield wrote:
> > > On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> > > > Alison reports an issue with fsdax when large extents end up usin
invoked by the latter.
While at it, also re-order the k3_r5_core_of_get_sram_memories() to keep
all the internal memory initialization functions at one place.
Signed-off-by: Beleswar Padhi
---
v10: Changelog:
1. Re-ordered both core_of_get_{internal/sram}_memories() together.
2. Moved releas
The TI-SCI processor control handle, 'tsp', will be refactored from
k3_r5_core struct into k3_r5_rproc struct in a future commit. So, the
'tsp' pointer will be initialized inside k3_r5_cluster_rproc_init() now.
Move the k3_r5_release_tsp() function, which releases the tsp handle,
above k3_r5_clust
Darrick J. Wong wrote:
> On Thu, Apr 10, 2025 at 12:12:33PM -0700, Alison Schofield wrote:
> > On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> > > Alison reports an issue with fsdax when large extents end up using
> > > large ZONE_DEVICE folios:
> > >
> >
> > Passes the ndctl/
On Thu, Apr 10, 2025 at 12:12:33PM -0700, Alison Schofield wrote:
> On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> > Alison reports an issue with fsdax when large extents end up using
> > large ZONE_DEVICE folios:
> >
>
> Passes the ndctl/dax unit tests.
>
> Tested-by: Aliso
> by the letter of the spec virtio_le_to_cpu() would have been
> sufficient.
> But when the legacy interface is not used, it boils down to the same.
>
> And when using the legacy interface, the device formatting these as
> little endian when the guest is big endian would surprise me more
>
On Sun, 13 Apr 2025 17:52:12 -0500
Ira Weiny wrote:
> Device partitions have an implied order which is made more complex by
> the addition of a dynamic partition.
>
> Remove the ram special case information calls in favor of generic calls
> with a check ahead of time to ensure t
/fs/dax.c
> > > +++ b/fs/dax.c
> > > @@ -396,6 +396,7 @@ static inline unsigned long dax_folio_put(struct
> > > folio *folio)
> > > order = folio_order(folio);
> > > if (!order)
> > > return 0;
> > > + folio_rese
Device partitions have an implied order which is made more complex by
the addition of a dynamic partition.
Remove the ram special case information calls in favor of generic calls
with a check ahead of time to ensure the preservation of the implied
partition order.
Signed-off-by: Ira Weiny
(adding CC list again, because I assume it was dropped by accident)
diff --git a/fs/dax.c b/fs/dax.c
index af5045b0f476e..676303419e9e8 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -396,6 +396,7 @@ static inline unsigned long dax_folio_put(struct folio
*folio)
order = folio_order(folio
order)
folio_set_order(folio, new_order);
else
- ClearPageCompound(&folio->page);
+ folio_reset_order(folio);
}
I think that's wrong. We're splitting this folio into order-0 folios,
but folio_reset_order() is going to modify folio->
__split_folio_to_order(struct folio *folio,
> int old_order,
> if (new_order)
> folio_set_order(folio, new_order);
> else
> - ClearPageCompound(&folio->page);
> + folio_reset_order(folio);
> }
I think that's wrong. We
Matthew Wilcox wrote:
> On Thu, Apr 10, 2025 at 01:15:07PM -0700, Dan Williams wrote:
> > For consistency and clarity what about this incremental change, to make
> > the __split_folio_to_order() path reuse folio_reset_order(), and use
> > typical bitfield helpers for manipulating _flags_1?
>
> I d
On Thu, Apr 10, 2025 at 01:15:07PM -0700, Dan Williams wrote:
> For consistency and clarity what about this incremental change, to make
> the __split_folio_to_order() path reuse folio_reset_order(), and use
> typical bitfield helpers for manipulating _flags_1?
I dislike this intensely. It obfusca
31/0x180
> [ 417.817859] __handle_mm_fault+0xee1/0x1a60
> [ 417.818325] ? debug_smp_processor_id+0x17/0x20
> [ 417.818844] handle_mm_fault+0xe1/0x2b0
> [...]
>
> The issue is that when we split a large ZONE_DEVICE folio to order-0
> ones, we don't reset the order/_nr_p
On Thu, Apr 10, 2025 at 11:10:20AM +0200, David Hildenbrand wrote:
> Alison reports an issue with fsdax when large extents end up using
> large ZONE_DEVICE folios:
>
Passes the ndctl/dax unit tests.
Tested-by: Alison Schofield
snip
[ 417.817424] __do_fault+0x31/0x180
[ 417.817859] __handle_mm_fault+0xee1/0x1a60
[ 417.818325] ? debug_smp_processor_id+0x17/0x20
[ 417.818844] handle_mm_fault+0xe1/0x2b0
[...]
The issue is that when we split a large ZONE_DEVICE folio to order-0
ones, we don't reset the order/_nr_pages. As
On 07/04/25 18:59, Andrew Davis wrote:
On 3/17/25 7:05 AM, Beleswar Padhi wrote:
The core's internal memory data structure will be refactored to be part
of the k3_r5_rproc structure in a future commit. As a result, internal
memory initialization will need to be performed inside
k3_r5_cluster_r
On 3/17/25 7:05 AM, Beleswar Padhi wrote:
The core's internal memory data structure will be refactored to be part
of the k3_r5_rproc structure in a future commit. As a result, internal
memory initialization will need to be performed inside
k3_r5_cluster_rproc_init() after rproc_alloc().
Therefor
erent virtqueue layout/features
> in-order support is then implemented on top.
>
> Tests show a 5% improvement in RX PPS with KVM guest + testpmd on the
> host.
>
> Please review.
>
> Thanks
>
> Jason Wang (19):
> virtio_ring: rename virtqueue_reinit_xxx to virt
> > Fixes: 8345adbf96fc1 ("virtio: console: Accept console size along with
> > resize control message")
> > Signed-off-by: Halil Pasic
> > Cc: sta...@vger.kernel.org # v2.6.35+
> > ---
> >
> > @Michael: I think it would be nice to add a clarification on t
> Signed-off-by: Halil Pasic
> Cc: sta...@vger.kernel.org # v2.6.35+
> ---
>
> @Michael: I think it would be nice to add a clarification on the byte
> order to be used for cols and rows when the legacy interface is used to
> the spec, regardless of what we decide the right byte or
On Mon, Mar 24, 2025 at 1:45 PM Jason Wang wrote:
> >
> > Hello all:
> >
> > This series tries to implement VIRTIO_F_IN_ORDER for
> > virtio_ring. This is done by introducing virtqueue ops so we can
> > implement separate helpers for different virtqueue layo
> > - __virtio16 rows;
> > __virtio16 cols;
> > + __virtio16 rows;
> > } size;
>
> The order of the fields after the patch matches the spec, so from that
> perspective, looks fine:
> Reviewed-by: Daniel V
e
> *vdev,
> break;
> case VIRTIO_CONSOLE_RESIZE: {
> struct {
> - __virtio16 rows;
> __virtio16 cols;
> + __virtio16 rows;
> } size;
The order of the fields after th
> by the letter of the spec virtio_le_to_cpu() would have been
> sufficient.
> But when the legacy interface is not used, it boils down to the same.
>
> And when using the legacy interface, the device formatting these as
> little endian when the guest is big endian would surprise me more
>
According to section 5.3.6.2 (Multiport Device Operation) of the virtio
spec (version 1.2), a control buffer with the event VIRTIO_CONSOLE_RESIZE
is followed by a virtio_console_resize struct containing cols then rows.
The kernel implements this the wrong way around (rows then cols), resulting
in the
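The spec's field order (cols first, then rows) can be modeled as below. The struct name mirrors the one in the spec; `uint16_t` stands in for the kernel's `__virtio16`, which additionally encodes the byte-order convention:

```c
#include <stdint.h>
#include <stddef.h>

/* Resize payload per virtio spec 5.3.6.2: cols comes first, then rows.
 * In the kernel these fields are __virtio16 (little-endian when
 * VERSION_1 is negotiated); plain uint16_t is used here to show the
 * layout only. */
struct virtio_console_resize {
	uint16_t cols;
	uint16_t rows;
};
```

With the buggy rows-then-cols layout, a spec-conforming device's cols value lands in rows and vice versa, which is exactly the swap the patch above corrects.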
irtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> in-order support is then implemented on top.
>
> Tests show a 5% improvement in RX PPS with KVM guest + testpmd on the
> host.
>
> Please review.
>
> Thanks
>
> Jason Wang (19):
This patch implements in-order support for both split virtqueue and
packed virtqueue. Dedicated virtqueue ops are introduced for the packed
virtqueue. Most of the ops are reused, except the ones that differ
significantly.
KVM guest + testpmd on the host shows 5% improvement in packed
virtqueue TX
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER for
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features;
in-order support is then implemented on top.
Tests show a 5% improvement in RX PPS with KVM guest
used, it boils down to the same.
And when using the legacy interface, the device formatting these as
little endian when the guest is big endian would surprise me more than
it using guest native byte order (which would make it compatible with
the current implementation). Nevertheless somebody trying to
The core's internal memory data structure will be refactored to be part
of the k3_r5_rproc structure in a future commit. As a result, internal
memory initialization will need to be performed inside
k3_r5_cluster_rproc_init() after rproc_alloc().
Therefore, move the internal memory initialization f
On 3/14/25 10:17 AM, Luca Weiss wrote:
> During upstreaming the order of clocks was adjusted to match the
> upstream sort order, but mistakenly freq-table-hz wasn't re-ordered
> with the new order.
>
> Fix that by moving the entry for the ICE clk to the last place.
>
During upstreaming the order of clocks was adjusted to match the
upstream sort order, but mistakenly freq-table-hz wasn't re-ordered
with the new order.
Fix that by moving the entry for the ICE clk to the last place.
Fixes: 5a814af5fc22 ("arm64: dts: qcom: sm6350: Add UFS nodes")
The execution order of constructors is undefined and depends on the
toolchain. While recent toolchains seem to have a stable order, it
doesn't work for older ones and may also change at any time.
Stop validating the order and instead only validate that all
constructors are executed.
Rep
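The approach described above, validating that every constructor ran rather than in what order, can be sketched like this (GCC/Clang attribute syntax; all names invented for the sketch):

```c
/* Three constructors that record that they ran, without recording when. */
static int ctor_ran[3];

__attribute__((constructor)) static void ctor_a(void) { ctor_ran[0] = 1; }
__attribute__((constructor)) static void ctor_b(void) { ctor_ran[1] = 1; }
__attribute__((constructor)) static void ctor_c(void) { ctor_ran[2] = 1; }

/* Order-agnostic check: every constructor executed, in whatever order
 * the toolchain chose to run them before main(). */
static int all_ctors_ran(void)
{
	return ctor_ran[0] && ctor_ran[1] && ctor_ran[2];
}
```

Asserting on `all_ctors_ran()` passes on any toolchain, whereas asserting on a specific sequence only passes on toolchains that happen to emit that order.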
On Thu, Mar 06, 2025 at 10:52:39PM +0100, Thomas Weißschuh wrote:
> The execution order of constructors is undefined and depends on the
> toolchain. While recent toolchains seem to have a stable order, it
> doesn't work for older ones and may also change at any time.
>
>
The core's internal memory data structure will be refactored to be part
of the k3_r5_rproc structure in a future commit. As a result, internal
memory initialization will need to be performed inside
k3_r5_cluster_rproc_init() after rproc_alloc().
Therefore, move the internal memory initialization f
,
struct list_head *list,
return -EINVAL;
}
} else if (new_order) {
- /* Split shmem folio to non-zero order not supported */
- if (shmem_mapping(folio->mapping)) {
- VM_WARN_ONCE(1,
- "Cannot
Now split_huge_page*() supports shmem THP split to any lower order.
Test it.
The test now reads file content out after split to check if the split
corrupts the file data.
Signed-off-by: Zi Yan
Reviewed-by: Baolin Wang
Tested-by: Baolin Wang
---
tools/testing/selftests/mm
} else if (new_order) {
- /* Split shmem folio to non-zero order not supported */
- if (shmem_mapping(folio->mapping)) {
- VM_WARN_ONCE(1,
- "Cannot split shmem folio to non-0 order");
return -EINVAL;
}
} else if (new_order) {
- /* Split shmem folio to non-zero order not supported */
- if (shmem_mapping(folio->mapping)) {
- VM_WARN_ONCE(1,
- "Cannot
Now split_huge_page*() supports shmem THP split to any lower order.
Test it.
The test now reads file content out after split to check if the split
corrupts the file data.
Signed-off-by: Zi Yan
Reviewed-by: Baolin Wang
Tested-by: Baolin Wang
---
.../selftests/mm/split_huge_page_test.c
} else if (new_order) {
- /* Split shmem folio to non-zero order not supported */
- if (shmem_mapping(folio->mapping)) {
- VM_WARN_ONCE(1,
- "Cannot split shmem folio to non-0 order");
On 2025/1/17 05:10, Zi Yan wrote:
Commit 4d684b5f92ba ("mm: shmem: add large folio support for tmpfs") has
added large folio support to shmem. Remove the restriction in
split_huge_page*().
Agree.
Signed-off-by: Zi Yan
LGTM. Thanks.
Reviewed-by: Baolin Wang
On 2025/1/17 05:10, Zi Yan wrote:
Now split_huge_page*() supports shmem THP split to any lower order.
Test it.
The test now reads file content out after split to check if the split
corrupts the file data.
Signed-off-by: Zi Yan
LGTM.
Reviewed-by: Baolin Wang
Tested-by: Baolin Wang
/* Split shmem folio to non-zero order not supported */
- if (shmem_mapping(folio->mapping)) {
- VM_WARN_ONCE(1,
- "Cannot split shmem folio to non-0 order");
Now split_huge_page*() supports shmem THP split to any lower order.
Test it.
The test now reads file content out after split to check if the split
corrupts the file data.
Signed-off-by: Zi Yan
---
.../selftests/mm/split_huge_page_test.c | 30 ++-
1 file changed, 23
On Thu, 19 Dec 2024 17:10:32 -0500, Maxim Levitsky wrote:
> Reverse the order in which
> the PML log is read to align more closely to the hardware. It should
> not affect regular users of the dirty logging but it fixes a unit test
> specific assumption in the dirty_log_test dir
Intel's PRM specifies that the CPU writes to the PML log 'backwards',
or in other words, it first writes entry 511, then entry 510 and so on.
I also confirmed on the bare metal that the CPU indeed writes the entries
in this order.
KVM, on the other hand, reads the entries in the
Reverse the order in which
the PML log is read to align more closely to the hardware. It should
not affect regular users of the dirty logging but it fixes a unit test
specific assumption in the dirty_log_test dirty-ring mode.
Best regards,
Maxim Levitsky
Maxim Levitsky (2):
KVM: VMX
On Fri, Dec 13, 2024, Maxim Levitsky wrote:
> On Thu, 2024-12-12 at 22:19 -0800, Sean Christopherson wrote:
> > On Thu, Dec 12, 2024, Maxim Levitsky wrote:
> > > On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
> > > > But, I can't help but wonder why KVM bothers emulating PML. I can
On Thu, 2024-12-12 at 22:19 -0800, Sean Christopherson wrote:
> On Thu, Dec 12, 2024, Maxim Levitsky wrote:
> > On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
> > > But, I can't help but wonder why KVM bothers emulating PML. I can
> > > appreciate
> > > that avoiding exits to L1 wo
On Thu, Dec 12, 2024, Maxim Levitsky wrote:
> On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
> > But, I can't help but wonder why KVM bothers emulating PML. I can
> > appreciate
> > that avoiding exits to L1 would be beneficial, but what use case actually
> > cares
> > about dirty
ites entry 511, then entry 510 and so on,
> > until it writes entry 0, after which the 'PML log full' VM exit happens.
> >
> > I also confirmed on the bare metal that the CPU indeed writes the entries
> > in this order.
> >
> > KVM on the other ha
og full' VM exit happens.
>
> I also confirmed on the bare metal that the CPU indeed writes the entries
> in this order.
>
> KVM, on the other hand, reads the entries in the opposite order, from the
> last written entry towards entry 511, and dumps them in this order
the entries
in this order.
KVM, on the other hand, reads the entries in the opposite order, from the
last written entry towards entry 511, and dumps them in this order to
the dirty ring.
Usually this doesn't matter, except for one complex nesting case:
KVM retries the instructions that cau
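Reading the log in the hardware's write direction, as the series does, looks roughly like the sketch below. The buffer layout follows the description above (entry 511 written first); the helper name and index convention are invented:

```c
#define PML_ENTITY_NUM 512

/* The CPU fills the PML buffer from entry 511 downward; in this sketch
 * pml_index points one below the last entry written, so a completely
 * full buffer has pml_index == -1.
 *
 * Copy the valid entries out in the order the CPU wrote them:
 * 511, 510, ..., pml_index + 1. Returns the number of entries copied. */
static int pml_read_in_write_order(const unsigned long *pml, int pml_index,
				   unsigned long *out)
{
	int n = 0;

	for (int i = PML_ENTITY_NUM - 1; i > pml_index; i--)
		out[n++] = pml[i];
	return n;
}
```

Iterating from the other end instead (lowest valid index upward) yields the same set of GPAs but in reverse of the order the CPU logged them, which is the mismatch the patch addresses.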
Hello,
On Wed, 11 Dec 2024, David Laight wrote:
> From: Dan Carpenter
> > Sent: 11 December 2024 13:17
> >
> > We recently added some build time asserts to detect incorrect calls to
> > clamp and it detected this bug which breaks the build. The variable
> > in this clamp is "max_avail
We recently added some build time asserts to detect incorrect calls to
clamp and it detected this bug which breaks the build. The variable
in this clamp is "max_avail" and it should be the first argument. The
code currently is the equivalent to max = min(max_avail, max).
There probably aren't ve
On Wed, Dec 11, 2024 at 02:27:06PM +, David Laight wrote:
> From: Dan Carpenter
> > Sent: 11 December 2024 13:17
> >
> > We recently added some build time asserts to detect incorrect calls to
> > clamp and it detected this bug which breaks the build. The variable
> > in this clamp is "max_ava
From: Dan Carpenter
> Sent: 11 December 2024 13:17
>
> We recently added some build time asserts to detect incorrect calls to
> clamp and it detected this bug which breaks the build. The variable
> in this clamp is "max_avail" and it should be the first argument. The
> code currently is the equi
On Wed, Dec 11, 2024 at 2:16 PM Dan Carpenter wrote:
>
> We recently added some build time asserts to detect incorrect calls to
> clamp and it detected this bug which breaks the build. The variable
> in this clamp is "max_avail" and it should be the first argument. The
> code currently is the eq
We recently added some build time asserts to detect incorrect calls to
clamp and it detected this bug which breaks the build. The variable
in this clamp is "max_avail" and it should be the first argument. The
code currently is the equivalent to max = max(max_avail, max).
Reported-by: Linux Kerne
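The argument-order pitfall that the new build-time assert catches can be demonstrated with a stand-alone clamp using the same val/lo/hi order as the kernel macro:

```c
/* clamp(val, lo, hi): constrain val to [lo, hi], same argument order
 * as the kernel's clamp() macro. */
static long clamp_long(long val, long lo, long hi)
{
	if (val < lo)
		return lo;
	if (val > hi)
		return hi;
	return val;
}
```

Called correctly, `clamp_long(max_avail, lo, hi)` bounds max_avail. With max_avail accidentally passed as the second argument, the expression degenerates into a plain min()/max() of the other operands and can even return a value outside the intended range, which is the kind of misuse the assert flags.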
Hi,
On Mon, Aug 19, 2024 at 4:40 PM Doug Anderson wrote:
>
> Hi,
>
> On Mon, Aug 19, 2024 at 12:30 AM Sibi Sankar wrote:
> >
> > Any write access to the IMEM region when the Q6 is setting up XPU
> > protection on it will result in a XPU violation. Fix this by ensuring
> > IMEM writes related to
Hi! Petr!
> On Sep 24, 2024, at 19:27, Petr Mladek wrote:
>
> This does not work well. It uses the order on the stack when
> the livepatch is being loaded. It is not updated when any livepatch gets
> removed. It might create wrong values.
>
> I have even tr
On Tue 2024-09-24 13:27:58, Petr Mladek wrote:
> On Fri 2024-09-20 17:04:03, Wardenjohn wrote:
> > This feature can provide livepatch patch order information.
> > With the order of sysfs interface of one klp_patch, we can
> > use patch order to find out which function of
On Fri 2024-09-20 17:04:03, Wardenjohn wrote:
> This feature can provide livepatch patch order information.
> With the order of sysfs interface of one klp_patch, we can
> use patch order to find out which function of the patch is
> now active.
>
> After the discussion, we
Update description of klp_patch order sysfs interface to livepatch
ABI documentation.
Signed-off-by: Wardenjohn
diff --git a/Documentation/ABI/testing/sysfs-kernel-livepatch
b/Documentation/ABI/testing/sysfs-kernel-livepatch
index 3735d868013d..14218419b9ea 100644
--- a/Documentation/ABI
This feature can provide livepatch patch order information.
Via the order sysfs interface of one klp_patch, we can
use the patch order to find out which function of the patch is
now active.
After the discussion, we decided that patch-level sysfs
interface is the only acceptable way to introduce
As discussed previously, maintainers think that a patch-level sysfs
interface is the only acceptable way to maintain the information about
the order in which a klp_patch is applied to the system.
However, the previous patch, which introduces klp_ops into klp_func, is
an optimization method of the patch introducing
Fixes a race between parent and child threads in futex_requeue.
Similar to fbf4dec70277 ("selftests/futex: Order calls to
futex_lock_pi"), which fixed a flake in futex_lock_pi due to racing
between the parent and child threads.
The same issue can occur in the futex_requeue test,
Hi Edward,
Thanks for your patch!
Em 03/09/2024 17:39, Edward Liaw escreveu:
Similar to fbf4dec70277 ("selftests/futex: Order calls to
futex_lock_pi"), which fixed a flake in futex_lock_pi due to racing
between the parent and child threads.
The same issue can occur in the futex_re
Similar to fbf4dec70277 ("selftests/futex: Order calls to
futex_lock_pi"), which fixed a flake in futex_lock_pi due to racing
between the parent and child threads.
The same issue can occur in the futex_requeue test, because it expects
waiterfn to make progress to futex_wait before
Hi,
On Mon, Aug 19, 2024 at 12:30 AM Sibi Sankar wrote:
>
> Any write access to the IMEM region when the Q6 is setting up XPU
> protection on it will result in a XPU violation. Fix this by ensuring
> IMEM writes related to the MBA post-mortem logs happen before the Q6
> is brought out of reset.
>
Any write access to the IMEM region when the Q6 is setting up XPU
protection on it will result in a XPU violation. Fix this by ensuring
IMEM writes related to the MBA post-mortem logs happen before the Q6
is brought out of reset.
Fixes: 318130cc9362 ("remoteproc: qcom_q6v5_mss: Add MBA log extract
On Mon, Jul 15, 2024 at 03:04:12PM -0400, Steven Rostedt wrote:
>
> [ Adding sched maintainers, as this is a scheduling trace event ]
>
> On Wed, 3 Jul 2024 11:33:53 +0800
> Tio Zhang wrote:
>
> > Switch the order of prev_comm and next_comm in sched_switch's code t
[ Adding sched maintainers, as this is a scheduling trace event ]
On Wed, 3 Jul 2024 11:33:53 +0800
Tio Zhang wrote:
> Switch the order of prev_comm and next_comm in sched_switch's code to
> align with its printing order.
I'm going to pick this up in my tree, as it is pretty
Switch the order of prev_comm and next_comm in sched_switch's code to
align with its printing order.
Signed-off-by: Tio Zhang
Reviewed-by: Madadi Vineeth Reddy
---
include/trace/events/sched.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/trace/events/sc
until you
have agreed on the changes to the specification.
On Fri, May 17, 2024 at 10:46:03PM GMT, Xuewei Niu wrote:
The "order" field determines the location of the device in the linked list,
the device with CID 4, having the smallest order, is in the first place, and
so forth.
Do we
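The ordering rule quoted above (smallest "order" first) amounts to a sorted insert into the device list. A sketch with invented type and field names:

```c
#include <stddef.h>

/* A vsock device entry ordered by its "order" field; the entry with the
 * smallest order sits at the head of the list. Types and names are
 * invented for this sketch. */
struct vsock_dev {
	unsigned int cid;
	unsigned int order;
	struct vsock_dev *next;
};

/* Insert keeping the list sorted by ascending order. */
static void dev_insert_sorted(struct vsock_dev **head, struct vsock_dev *dev)
{
	struct vsock_dev **p = head;

	while (*p && (*p)->order <= dev->order)
		p = &(*p)->next;
	dev->next = *p;
	*p = dev;
}
```

Inserting devices with orders 3, 1, 2 in that sequence leaves the list sorted 1, 2, 3, so the smallest-order device is always found first during lookup.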
On Fri, May 17, 2024 at 10:46:03PM +0800, Xuewei Niu wrote:
> The "order" field determines the location of the device in the linked list,
> the device with CID 4, having the smallest order, is in the first place, and
> so forth.
>
> Rules:
>
> * It doesn’