On Thu, Nov 19 2020 at 13:12, Peter Zijlstra wrote:
> On Thu, Nov 19, 2020 at 12:51:32PM +0100, Peter Zijlstra wrote:
>> > +void __kmap_local_sched_in(void)
>> > +{
>> > + struct task_struct *tsk = current;
>> > + pte_t *kmap_pte = kmap_get_pte();
>> > + int i;
>> > +
>> > + /* Restore kmaps */
On Thu, Nov 19, 2020 at 12:51:32PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 18, 2020 at 08:48:43PM +0100, Thomas Gleixner wrote:
>
> > @@ -4073,6 +4089,7 @@ prepare_task_switch(struct rq *rq, struc
> > perf_event_task_sched_out(prev, next);
> > rseq_preempt(prev);
> > fire_sched_out_preempt_notifiers(prev, next);
On Wed, Nov 18, 2020 at 08:48:43PM +0100, Thomas Gleixner wrote:
> @@ -4073,6 +4089,7 @@ prepare_task_switch(struct rq *rq, struc
> perf_event_task_sched_out(prev, next);
> rseq_preempt(prev);
> fire_sched_out_preempt_notifiers(prev, next);
> + kmap_local_sched_out();
>
From: Thomas Gleixner
Instead of storing the map per CPU, provide and use per-task storage. That
prepares for local kmaps which are preemptible.
The context switch code is preparatory and not yet in use because
kmap_atomic() still runs with preemption disabled. It will be made usable in
the next step.