This last patch of the series adds a new tracepoint - mmu_vm_stack_fault -
which, when enabled, allows one to see how a particular app triggers
stack page faults. The tracepoint captures the stack fault address,
the thread id, and the number of the page (0 being the 1st page).
Please note this does not c
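For illustration, the tracepoint could be declared with OSv's TRACEPOINT macro
roughly as below; the format string, argument order and the page-number
arithmetic (fault_addr, stack_top) are assumptions, not copied from the patch:

    // hypothetical sketch - the real declaration in the patch may differ
    TRACEPOINT(trace_mmu_vm_stack_fault, "thread=%d addr=%p page_no=%d",
               unsigned, uintptr_t, unsigned);

    // at the stack-fault handling site:
    trace_mmu_vm_stack_fault(sched::thread::current()->id(), fault_addr,
                             (stack_top - fault_addr) / mmu::page_size);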
This patch adds a new mmap flag - mmap_stack - that is used when mmapping
a stack while creating a new pthread. This new flag is only used when
the build parameter CONF_lazy_stack is enabled.
Signed-off-by: Waldemar Kozaczuk
---
include/osv/mmu-defs.hh | 1 +
libc/mman.cc            | 7 +--
libc/
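As a rough illustration only (the call and flag spellings other than
mmap_stack and CONF_lazy_stack are assumptions, not the actual diff in
libc/mman.cc):

    // hypothetical sketch of allocating a pthread stack with the new flag
    unsigned flags = mmu::mmap_populate;      // old behaviour: pre-populate the stack
    #if CONF_lazy_stack
    flags = mmu::mmap_stack;                  // new: populate stack pages lazily on fault
    #endif
    void* sp = mmu::map_anon(nullptr, stack_size, flags, mmu::perm_rw);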
This patch modifies the last set of call sites where we do not know statically
if either interrupts or preemption are enabled. In those cases we
dynamically check if both preemption and interrupts are enabled
and only then pre-fault the stack.
Most of these places are in the tracepoint and sampler
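A minimal sketch of the dynamic variant, assuming sched::preemptable() and
arch::irq_enabled() are the checks used (the helper name is made up):

    // pre-fault one page of the stack only when it is actually safe to fault
    inline void pre_fault_stack_dynamic()
    {
        if (sched::preemptable() && arch::irq_enabled()) {
            char* sp = (char*)__builtin_frame_address(0);
            (void)*(volatile char*)(sp - 4096);   // touch the page below the current frame
        }
    }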
This patch makes all functions in core/mmu.cc that take the vma_list_mutex
for write pre-fault the stack two pages deep before taking the mutex.
This is necessary to prevent any follow-up stack faults down the call stack
after the vma_list_mutex is taken for write, as this would lead to a deadlock
exp
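A sketch of the idea, with an assumed helper name; only the two-page depth
and vma_list_mutex come from the patch description:

    inline void pre_fault_stack_two_pages()
    {
        char* sp = (char*)__builtin_frame_address(0);
        (void)*(volatile char*)(sp - 4096);   // 1st page below the current frame
        (void)*(volatile char*)(sp - 8192);   // 2nd page below the current frame
    }

    // used roughly like:
    //   pre_fault_stack_two_pages();
    //   WITH_LOCK(vma_list_mutex.for_write()) { ... modify the vma list ... }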
This patch modifies a number of critical places, mostly in the scheduler
code, to dynamically pre-fault the stack if preemption is enabled. In all
these places we can be statically sure that interrupts are enabled,
but not so sure about preemption (maybe in the future we can prove it is enabled
at least in
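The preemption-only variant could look roughly like this (helper name assumed):

    inline void pre_fault_stack_if_preemptable()
    {
        // interrupts are known to be enabled at these call sites,
        // so only preemption needs to be checked dynamically
        if (sched::preemptable()) {
            char* sp = (char*)__builtin_frame_address(0);
            (void)*(volatile char*)(sp - 4096);
        }
    }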
This patch modifies all relevant places where we statically know
that both interrupts and preemption should be enabled, to unconditionally
pre-fault the stack; a sketch of this variant follows the list below.
These include places in code before:
- WITH_LOCK(preemption_lock) block
- WITH_LOCK(rcu_read_lock) block
- sched::preempt_disable() call
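A sketch of the unconditional variant (assumed helper name, illustrative call site):

    inline void pre_fault_stack()
    {
        // both interrupts and preemption are known to be enabled here,
        // so the page below the current frame can simply be touched
        char* sp = (char*)__builtin_frame_address(0);
        (void)*(volatile char*)(sp - 4096);
    }

    // e.g. right before entering such a block:
    //   pre_fault_stack();
    //   WITH_LOCK(preemption_lock) { ... }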
This patch annotates the relevant call sites with invariant assert
expressions to validate the assumptions that let us do "nothing" in all these
cases. We also reorganize some code in the scheduler to help differentiate
between cases when a given function/method is called with interrupts or preemptio
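The asserted invariant is presumably along these lines (the exact expression
is an assumption): at a call site where we deliberately do nothing, interrupts
or preemption must already be disabled, so no further stack fault can be taken
there:

    assert(!arch::irq_enabled() || !sched::preemptable());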
This is the 1st of a total of eight patches that implement optional
support of the so-called "lazy stack" feature. The lazy stack is well explained
in issue #143 and allows saving a substantial amount of memory if an
application spawns many pthreads with large stacks, by letting the stack grow
dynamically
On Tue, Aug 30, 2022 at 6:59 PM Commit Bot wrote:
> From: Waldemar Kozaczuk
> Committer: Waldemar Kozaczuk
> Branch: master
>
> trace: do not use malloc/free when interrupts are disabled
>
> This patch fixes a subtle bug found when working on lazy stack changes
> and trying to establish some in
What a difference one character makes :-)
--
Nadav Har'El
n...@scylladb.com
On Tue, Aug 30, 2022 at 6:59 PM Commit Bot wrote:
> From: Waldemar Kozaczuk
> Committer: Waldemar Kozaczuk
> Branch: master
>
> aarch64 trace: make compiler pick a register instead of x0
>
> This subtle 1-character p
From: Waldemar Kozaczuk
Committer: Waldemar Kozaczuk
Branch: master
aarch64: download boost program-options library needed by some tests
Signed-off-by: Waldemar Kozaczuk
---
diff --git a/scripts/download_aarch64_packages.py
b/scripts/download_aarch64_packages.py
--- a/scripts/download_aarch6
From: Waldemar Kozaczuk
Committer: Waldemar Kozaczuk
Branch: master
trace: do not use malloc/free when interrupts are disabled
This patch fixes a subtle bug found when working on lazy stack changes
and trying to establish some invariants in relevant places in the kernel code.
One of the finding
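The shape of the fix is presumably to keep malloc/free outside the
interrupts-disabled region; a hedged sketch with hypothetical names (the
buffer, sizes and surrounding function are made up, only the rule "no
malloc/free with interrupts disabled" comes from the commit message):

    void resize_trace_buffer(size_t new_size)
    {
        char* new_buf = static_cast<char*>(malloc(new_size));  // allocate first...
        char* old_buf;
        WITH_LOCK(irq_lock) {           // ...swap pointers with interrupts disabled...
            old_buf = current_trace_buffer;
            current_trace_buffer = new_buf;
        }
        free(old_buf);                  // ...and free only after they are re-enabled
    }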
From: Waldemar Kozaczuk
Committer: Waldemar Kozaczuk
Branch: master
aarch64 trace: make compiler pick a register instead of x0
This subtle 1-character patch fixes a nasty bug that caused interrupts
to be enabled instead of being correctly restored to the state they were
in when the state was saved. This bug w
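For illustration, the underlying idea on aarch64 is to let the compiler pick
the register holding the saved DAIF flags instead of hard-coding x0; this is
a generic sketch of that pattern, not the actual one-character diff:

    // restore the saved interrupt mask from a compiler-chosen register
    inline void irq_flags_restore(unsigned long daif)
    {
        asm volatile("msr daif, %0" : : "r"(daif) : "memory");
    }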