* Andi Kleen wrote:
> Various optimizations related to CONFIG_PREEMPT_VOLUNTARY and x86
> uaccess
Looks mostly good.
Does this patchset change the number of cond_resched() preemption points
on CONFIG_PREEMPT_VOLUNTARY=y, or does it keep scheduling behavior invariant?
Thanks,
Ingo
Various optimizations related to CONFIG_PREEMPT_VOLUNTARY
and x86 uaccess
- Optimize copy_*_inatomic on x86-64 to handle 1-8 bytes
without string instructions
- Inline might_sleep and other preempt code
to optimize various preemption paths
This costs about 10k text size, but generates far