Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Philippe Gerum
On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> >> And when looking at the holders of rpilock, I think one issue could be
> >> that we hold that lock while calling into xnpod_renice_root [1], ie.
> >> doing a potential context switch. Was this checked to be safe?
> > 
> > xnpod_renice_root() does no reschedule immediately on purpose, we would
> > never have been able to run any SMP config more than a couple of seconds
> > otherwise. (See the NOSWITCH bit).
> 
> OK, then it's not the cause.
> 
> > 
> >> Furthermore, that code path reveals that we take nklock nested into
> >> rpilock [2]. I haven't found a spot for the other way around (and I hope
> >> there is none)
> > 
> > xnshadow_start().
> 
> Nope, that one is not holding nklock.

Indeed, but this only works because those of its callers that may hold
this lock do not activate shadow threads so far. This looks so
fragile... I'll add some comment about this in the doc.

> But I found an offender...
> 
> > 
> >> , but such nesting is already evil per se...
> > 
> > Well, nesting spinlocks only falls into evilness when you get a circular
> > graph, but since the rpilock is a rookie in the locking team, I'm going
> > to check this.
> 
> Take this one: gatekeeper_thread calls into rpi_pop with nklock
> acquired. So we have a classic ABAB locking bug. Bang!
> 

Damnit.

The fix needs some thought and attention; we are running against the
deletion path here.

PS: Time to switch to -core.

> Jan
> 
-- 
Philippe.



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Philippe Gerum
On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> >> And when looking at the holders of rpilock, I think one issue could be
> >> that we hold that lock while calling into xnpod_renice_root [1], ie.
> >> doing a potential context switch. Was this checked to be safe?
> > 
> > xnpod_renice_root() does no reschedule immediately on purpose, we would
> > never have been able to run any SMP config more than a couple of seconds
> > otherwise. (See the NOSWITCH bit).
> 
> OK, then it's not the cause.
> 
> > 
> >> Furthermore, that code path reveals that we take nklock nested into
> >> rpilock [2]. I haven't found a spot for the other way around (and I hope
> >> there is none)
> > 
> > xnshadow_start().
> 
> Nope, that one is not holding nklock. But I found an offender...

Gasp. xnshadow_renice() kills us too.

-- 
Philippe.





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Jan Kiszka
Philippe Gerum wrote:
> On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
>> Philippe Gerum wrote:
 And when looking at the holders of rpilock, I think one issue could be
 that we hold that lock while calling into xnpod_renice_root [1], ie.
 doing a potential context switch. Was this checked to be safe?
>>> xnpod_renice_root() does no reschedule immediately on purpose, we would
>>> never have been able to run any SMP config more than a couple of seconds
>>> otherwise. (See the NOSWITCH bit).
>> OK, then it's not the cause.
>>
 Furthermore, that code path reveals that we take nklock nested into
 rpilock [2]. I haven't found a spot for the other way around (and I hope
 there is none)
>>> xnshadow_start().
>> Nope, that one is not holding nklock. But I found an offender...
> 
> Gasp. xnshadow_renice() kills us too.

Looks like we are approaching mainline "qualities" here - but they have
at least lockdep (and still face nasty races regularly).

As long as you can't avoid nesting or the inner lock only protects
really, really trivial code (list manipulation etc.), I would say there
is one lock too much... Did I mention that I consider nesting to be
evil? :-> Besides correctness, there is also an increasing worst-case
behaviour issue with each additional nesting level.

Jan





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Philippe Gerum
On Thu, 2007-07-19 at 17:35 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
> >> Philippe Gerum wrote:
>  And when looking at the holders of rpilock, I think one issue could be
>  that we hold that lock while calling into xnpod_renice_root [1], ie.
>  doing a potential context switch. Was this checked to be safe?
> >>> xnpod_renice_root() does no reschedule immediately on purpose, we would
> >>> never have been able to run any SMP config more than a couple of seconds
> >>> otherwise. (See the NOSWITCH bit).
> >> OK, then it's not the cause.
> >>
>  Furthermore, that code path reveals that we take nklock nested into
>  rpilock [2]. I haven't found a spot for the other way around (and I hope
>  there is none)
> >>> xnshadow_start().
> >> Nope, that one is not holding nklock. But I found an offender...
> > 
> > Gasp. xnshadow_renice() kills us too.
> 
> Looks like we are approaching mainline "qualities" here - but they have
> at least lockdep (and still face nasty races regularly).
> 

We only have a 2-level locking depth at most, that barely qualifies for
being compared to the situation with mainline. Most often, the more
radical the solution, the less relevant it is: simple nesting on very
few levels is not bad, a bogus nesting sequence is.

> As long as you can't avoid nesting or the inner lock only protects
> really, really trivial code (list manipulation etc.), I would say there
> is one lock too much... Did I mention that I consider nesting to be
> evil? :-> Besides correctness, there is also an increasing worst-case
> behaviour issue with each additional nesting level.
> 

In this case, we do not want the RPI manipulation to affect the
worst-case of all other threads by holding the nklock. This is
fundamentally a migration-related issue, a situation that must
not impact all other contexts relying on the nklock. Given this, you
need to protect the RPI list and prevent the scheduler data from being
altered at the same time; there is no cheap trick to avoid this.

We need to keep the rpilock, otherwise we would have significantly large
latency penalties, especially when domain migrations are frequent, and
yes, we do need RPI, otherwise the sequence for emulated RTOS services
would be plain wrong (e.g. task creation).

Ok, the rpilock is local, the nesting level is bearable, let's focus on
putting this thingy straight.

> Jan
> 
-- 
Philippe.





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Jan Kiszka
Philippe Gerum wrote:
> On Thu, 2007-07-19 at 17:35 +0200, Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
 Philippe Gerum wrote:
>> And when looking at the holders of rpilock, I think one issue could be
>> that we hold that lock while calling into xnpod_renice_root [1], ie.
>> doing a potential context switch. Was this checked to be safe?
> xnpod_renice_root() does no reschedule immediately on purpose, we would
> never have been able to run any SMP config more than a couple of seconds
> otherwise. (See the NOSWITCH bit).
 OK, then it's not the cause.

>> Furthermore, that code path reveals that we take nklock nested into
>> rpilock [2]. I haven't found a spot for the other way around (and I hope
>> there is none)
> xnshadow_start().
 Nope, that one is not holding nklock. But I found an offender...
>>> Gasp. xnshadow_renice() kills us too.
>> Looks like we are approaching mainline "qualities" here - but they have
>> at least lockdep (and still face nasty races regularly).
>>
> 
>>> We only have a 2-level locking depth at most, that barely qualifies for
>>> being compared to the situation with mainline. Most often, the more
>>> radical the solution, the less relevant it is: simple nesting on very
>>> few levels is not bad, a bogus nesting sequence is.
> 
>> As long as you can't avoid nesting or the inner lock only protects
>> really, really trivial code (list manipulation etc.), I would say there
>> is one lock too much... Did I mention that I consider nesting to be
>> evil? :-> Besides correctness, there is also an increasing worst-case
>> behaviour issue with each additional nesting level.
>>
> 
> In this case, we do not want the RPI manipulation to affect the
> worst-case of all other threads by holding the nklock. This is
> fundamentally a migration-related issue, which is a situation that must
> not impact all other contexts relying on the nklock. Given this, you
> need to protect the RPI list and prevent the scheduler data from being
> altered at the same time; there is no cheap trick to avoid this.
> 
> We need to keep the rpilock, otherwise we would have significantly large
> latency penalties, especially when domain migrations are frequent, and
> yes, we do need RPI, otherwise the sequence for emulated RTOS services
> would be plain wrong (e.g. task creation).

If rpilock is known to protect potentially costly code, you _must not_
hold other locks while taking it. Otherwise, you do not win a dime by
using two locks, rather make things worse (overhead of taking two locks
instead of just one). That all relates to the worst case, of course, the
one thing we are worried about most.

In that light, the nesting nklock->rpilock must go away, independently
of the ordering bug. The other way around might be a different thing,
though I'm not sure if there is actually so much difference between the
locks in the worst case.

What is the actual _combined_ lock holding time in the longest
nklock/rpilock nesting path? Is that one really larger than any other
pre-existing nklock path? Only in that case, it makes sense to think
about splitting, though you will still be left with precisely the same
(rather a few cycles more) CPU-local latency. Is there really no chance
to split the lock paths?

> Ok, the rpilock is local, the nesting level is bearable, let's focus on
> putting this thingy straight.

The whole RPI thing, though required for some scenarios, remains ugly
and error-prone (including worst-case latency issues). I can only
underline my recommendation to switch off complexity in Xenomai when one
doesn't need it - which often includes RPI. Sorry, Philippe, but I think
we have to be honest with the users here. RPI remains problematic, at
least /wrt your beloved latency.

Jan





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Jan Kiszka
Philippe Gerum wrote:
> Ok, the rpilock is local, the nesting level is bearable, let's focus on
> putting this thingy straight.

Well, redesigning things may not necessarily improve the situation, but
reducing the amount of special RPI code might be worth a thought:

What is so special about RPI compared to standard prio inheritance? What
about [wild idea ahead!] modelling RPI as a virtual mutex that is
permanently held by the ROOT thread and which relaxed threads try to
acquire? They would never get it, rather drop the request (and thus the
inheritance) once they are to be hardened again or Linux starts to
schedule around.

*If* that is possible, we would
 A) reuse existing code heavily,
 B) lack any argument for separate locking,
 C) make things far easier to understand and review.

Sounds too beautiful to work, I'm afraid...

Jan





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Philippe Gerum
On Thu, 2007-07-19 at 19:18 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Thu, 2007-07-19 at 17:35 +0200, Jan Kiszka wrote:
> >> Philippe Gerum wrote:
> >>> On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
>  Philippe Gerum wrote:
> >> And when looking at the holders of rpilock, I think one issue could be
> >> that we hold that lock while calling into xnpod_renice_root [1], ie.
> >> doing a potential context switch. Was this checked to be safe?
> > xnpod_renice_root() does no reschedule immediately on purpose, we would
> > never have been able to run any SMP config more than a couple of seconds
> > otherwise. (See the NOSWITCH bit).
>  OK, then it's not the cause.
> 
> >> Furthermore, that code path reveals that we take nklock nested into
> >> rpilock [2]. I haven't found a spot for the other way around (and I 
> >> hope
> >> there is none)
> > xnshadow_start().
>  Nope, that one is not holding nklock. But I found an offender...
> >>> Gasp. xnshadow_renice() kills us too.
> >> Looks like we are approaching mainline "qualities" here - but they have
> >> at least lockdep (and still face nasty races regularly).
> >>
> > 
> > We only have a 2-level locking depth at most, that barely qualifies for
> > being compared to the situation with mainline. Most often, the more
> > radical the solution, the less relevant it is: simple nesting on very
> > few levels is not bad, a bogus nesting sequence is.
> > 
> >> As long as you can't avoid nesting or the inner lock only protects
> >> really, really trivial code (list manipulation etc.), I would say there
> >> is one lock too much... Did I mention that I consider nesting to be
> >> evil? :-> Besides correctness, there is also an increasing worst-case
> >> behaviour issue with each additional nesting level.
> >>
> > 
> > In this case, we do not want the RPI manipulation to affect the
> > worst-case of all other threads by holding the nklock. This is
> > fundamentally a migration-related issue, which is a situation that must
> > not impact all other contexts relying on the nklock. Given this, you
> > need to protect the RPI list and prevent the scheduler data from being
> > altered at the same time; there is no cheap trick to avoid this.
> > 
> > We need to keep the rpilock, otherwise we would have significantly large
> > latency penalties, especially when domain migrations are frequent, and
> > yes, we do need RPI, otherwise the sequence for emulated RTOS services
> > would be plain wrong (e.g. task creation).
> 
> If rpilock is known to protect potentially costly code, you _must not_
> hold other locks while taking it. Otherwise, you do not win a dime by
> using two locks, rather make things worse (overhead of taking two locks
> instead of just one).

I guess that by now you already understood that holding such an outer
lock is what should not be done, and what should be fixed, right? So
let's focus on the real issue here: holding two locks is not the
problem; holding them in the wrong sequence is.

>  That all relates to the worst case, of course, the
> one thing we are worried about most.
> 
> In that light, the nesting nklock->rpilock must go away, independently
> of the ordering bug. The other way around might be a different thing,
> though I'm not sure if there is actually so much difference between the
> locks in the worst case.
> 
> What is the actual _combined_ lock holding time in the longest
> nklock/rpilock nesting path?

It is short.

>  Is that one really larger than any other
> pre-existing nklock path?

Yes. Look, could you please assume for one second that I did not choose
this implementation randomly? :o)

>  Only in that case, it makes sense to think
> about splitting, though you will still be left with precisely the same
> (rather a few cycles more) CPU-local latency. Is there really no chance
> to split the lock paths?
> 

The answer to your question lies in the dynamics of migrating tasks
between domains, and how this relates to the overall dynamics of the
system. Migration needs priority tracking, and priority tracking
requires almost the same amount of work as updating the scheduler data.
Since we can reduce the pressure on the nklock during migration, which
is a thread-local action additionally involving the root thread, it is
_good_ to do so. Even if this costs a few brain cycles more.

> > Ok, the rpilock is local, the nesting level is bearable, let's focus on
> > putting this thingy straight.
> 
> The whole RPI thing, though required for some scenarios, remains ugly
> and error-prone (including worst-case latency issues).
>  I can only
> underline my recommendation to switch off complexity in Xenomai when one
> doesn't need it - which often includes RPI.
>  Sorry, Philippe, but I think
> we have to be honest to the users here. RPI remains problematic, at
> least /wrt your beloved latency.

The best way to be honest to users is to depict things as they are:

1) RPI is there because we c

Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Jan Kiszka
Philippe Gerum wrote:
> On Thu, 2007-07-19 at 19:18 +0200, Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> On Thu, 2007-07-19 at 17:35 +0200, Jan Kiszka wrote:
 Philippe Gerum wrote:
> On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
>> Philippe Gerum wrote:
 And when looking at the holders of rpilock, I think one issue could be
 that we hold that lock while calling into xnpod_renice_root [1], ie.
 doing a potential context switch. Was this checked to be safe?
>>> xnpod_renice_root() does no reschedule immediately on purpose, we would
>>> never have been able to run any SMP config more than a couple of seconds
>>> otherwise. (See the NOSWITCH bit).
>> OK, then it's not the cause.
>>
 Furthermore, that code path reveals that we take nklock nested into
 rpilock [2]. I haven't found a spot for the other way around (and I 
 hope
 there is none)
>>> xnshadow_start().
>> Nope, that one is not holding nklock. But I found an offender...
> Gasp. xnshadow_renice() kills us too.
 Looks like we are approaching mainline "qualities" here - but they have
 at least lockdep (and still face nasty races regularly).

>>> We only have a 2-level locking depth at most, that barely qualifies for
>>> being compared to the situation with mainline. Most often, the more
>>> radical the solution, the less relevant it is: simple nesting on very
>>> few levels is not bad, a bogus nesting sequence is.
>>>
 As long as you can't avoid nesting or the inner lock only protects
 really, really trivial code (list manipulation etc.), I would say there
 is one lock too much... Did I mention that I consider nesting to be
 evil? :-> Besides correctness, there is also an increasing worst-case
 behaviour issue with each additional nesting level.

>>> In this case, we do not want the RPI manipulation to affect the
>>> worst-case of all other threads by holding the nklock. This is
>>> fundamentally a migration-related issue, which is a situation that must
>>> not impact all other contexts relying on the nklock. Given this, you
>>> need to protect the RPI list and prevent the scheduler data from being
>>> altered at the same time; there is no cheap trick to avoid this.
>>>
>>> We need to keep the rpilock, otherwise we would have significantly large
>>> latency penalties, especially when domain migrations are frequent, and
>>> yes, we do need RPI, otherwise the sequence for emulated RTOS services
>>> would be plain wrong (e.g. task creation).
>> If rpilock is known to protect potentially costly code, you _must not_
>> hold other locks while taking it. Otherwise, you do not win a dime by
>> using two locks, rather make things worse (overhead of taking two locks
>> instead of just one).
> 
> I guess that by now you already understood that holding such outer lock
> is what should not be done, and what should be fixed, right? So let's
> focus on the real issue here: holding two locks is not the problem,
> holding them in the wrong sequence is.

Holding two locks in the right order can still be wrong /wrt latency
as I pointed out. If you can avoid holding both here, I would be much
happier immediately.

> 
>>  That all relates to the worst case, of course, the
>> one thing we are worried about most.
>>
>> In that light, the nesting nklock->rpilock must go away, independently
>> of the ordering bug. The other way around might be a different thing,
>> though I'm not sure if there is actually so much difference between the
>> locks in the worst case.
>>
>> What is the actual _combined_ lock holding time in the longest
>> nklock/rpilock nesting path?
> 
> It is short.
> 
>>  Is that one really larger than any other
>> pre-existing nklock path?
> 
> Yes. Look, could you please assume for one second that I did not choose
> this implementation randomly? :o)

For sure not randomly, but I still don't understand the motivations
completely.

> 
>>  Only in that case, it makes sense to think
>> about splitting, though you will still be left with precisely the same
>> (rather a few cycles more) CPU-local latency. Is there really no chance
>> to split the lock paths?
>>
> 
> The answer to your question lies in the dynamics of migrating tasks
> between domains, and how this relates to the overall dynamics of the
> system. Migration needs priority tracking, and priority tracking
> requires almost the same amount of work as updating the scheduler data.
> Since we can reduce the pressure on the nklock during migration, which
> is a thread-local action additionally involving the root thread, it is
> _good_ to do so. Even if this costs a few brain cycles more.

So we are trading off average performance against worst-case spinning
time here?

> 
>>> Ok, the rpilock is local, the nesting level is bearable, let's focus on
>>> putting this thingy straight.
>> The whole RPI thing, though required for some scenarios, remains ugly
>> and error-prone (includi

Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-19 Thread Philippe Gerum
On Thu, 2007-07-19 at 22:15 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Thu, 2007-07-19 at 19:18 +0200, Jan Kiszka wrote:
> >> Philippe Gerum wrote:
> >>> On Thu, 2007-07-19 at 17:35 +0200, Jan Kiszka wrote:
>  Philippe Gerum wrote:
> > On Thu, 2007-07-19 at 14:40 +0200, Jan Kiszka wrote:
> >> Philippe Gerum wrote:
>  And when looking at the holders of rpilock, I think one issue could 
>  be
>  that we hold that lock while calling into xnpod_renice_root [1], ie.
>  doing a potential context switch. Was this checked to be safe?
> >>> xnpod_renice_root() does no reschedule immediately on purpose, we 
> >>> would
> >>> never have been able to run any SMP config more than a couple of 
> >>> seconds
> >>> otherwise. (See the NOSWITCH bit).
> >> OK, then it's not the cause.
> >>
>  Furthermore, that code path reveals that we take nklock nested into
>  rpilock [2]. I haven't found a spot for the other way around (and I 
>  hope
>  there is none)
> >>> xnshadow_start().
> >> Nope, that one is not holding nklock. But I found an offender...
> > Gasp. xnshadow_renice() kills us too.
>  Looks like we are approaching mainline "qualities" here - but they have
>  at least lockdep (and still face nasty races regularly).
> 
> >>> We only have a 2-level locking depth at most, that barely qualifies for
> >>> being compared to the situation with mainline. Most often, the more
> >>> radical the solution, the less relevant it is: simple nesting on very
> >>> few levels is not bad, a bogus nesting sequence is.
> >>>
>  As long as you can't avoid nesting or the inner lock only protects
>  really, really trivial code (list manipulation etc.), I would say there
>  is one lock too much... Did I mention that I consider nesting to be
>  evil? :-> Besides correctness, there is also an increasing worst-case
>  behaviour issue with each additional nesting level.
> 
> >>> In this case, we do not want the RPI manipulation to affect the
> >>> worst-case of all other threads by holding the nklock. This is
> >>> fundamentally a migration-related issue, which is a situation that must
> >>> not impact all other contexts relying on the nklock. Given this, you
> >>> need to protect the RPI list and prevent the scheduler data from being
> >>> altered at the same time; there is no cheap trick to avoid this.
> >>>
> >>> We need to keep the rpilock, otherwise we would have significantly large
> >>> latency penalties, especially when domain migrations are frequent, and
> >>> yes, we do need RPI, otherwise the sequence for emulated RTOS services
> >>> would be plain wrong (e.g. task creation).
> >> If rpilock is known to protect potentially costly code, you _must not_
> >> hold other locks while taking it. Otherwise, you do not win a dime by
> >> using two locks, rather make things worse (overhead of taking two locks
> >> instead of just one).
> > 
> > I guess that by now you already understood that holding such outer lock
> > is what should not be done, and what should be fixed, right? So let's
> > focus on the real issue here: holding two locks is not the problem,
> > holding them in the wrong sequence is.
> 
> Holding two locks in the right order can still be wrong /wrt latency
> as I pointed out. If you can avoid holding both here, I would be much
> happier immediately.
> 

The point is not about making you happier, I'm afraid, but only to get
things right. If a nested lock has to be held for a short time in order
to maintain consistency while an outer lock must be held for a longer
time, then it's ok, provided the locking sequence is correct.

> > 
> >>  That all relates to the worst case, of course, the
> >> one thing we are worried about most.
> >>
> >> In that light, the nesting nklock->rpilock must go away, independently
> >> of the ordering bug. The other way around might be a different thing,
> >> though I'm not sure if there is actually so much difference between the
> >> locks in the worst case.
> >>
> >> What is the actual _combined_ lock holding time in the longest
> >> nklock/rpilock nesting path?
> > 
> > It is short.
> > 
> >>  Is that one really larger than any other
> >> pre-existing nklock path?
> > 
> > Yes. Look, could you please assume for one second that I did not choose
> > this implementation randomly? :o)
> 
> For sure not randomly, but I still don't understand the motivations
> completely.
> 

My description of why I want RPI to be available was clear though.

> > 
> >>  Only in that case, it makes sense to think
> >> about splitting, though you will still be left with precisely the same
> >> (rather a few cycles more) CPU-local latency. Is there really no chance
> >> to split the lock paths?
> >>
> > 
> > The answer to your question lies in the dynamics of migrating tasks
> > between domains, and how this relates to the overall dynamics of the
> > system. Migration ne

Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-20 Thread Jan Kiszka
Philippe Gerum wrote:
...
> Read my mail, without listening to your own grumble at the same time,
> you should see that this is not a matter of being right or wrong, it is
> a matter of who needs what, and how one will use Xenomai. Your grumble
> does not prove anything unfortunately, otherwise everything would be
> fixed since many moons.

Why things are unfixed has something to do with their complexity. RPI is
a complex thing AND it is a mechanism separate from the core (that's why
I was suggesting to reuse the PI code if possible - something that has
already been integrated for many moons).

> What I'm suggesting now, so that you can't tell the rest of the world
> that I'm such an old and deaf cranky meatball, is that we do place RPI
> under strict observation until the latest 2.4-rc is out, and we would
> decide at this point whether we should change the default value for the
> skins for which it makes sense (both for v2.3.x and 2.4). Obviously,
> this would only make sense if key users actually give hell to the 2.4
> testing releases (Mathias, the world is watching you).

OK, let's go through this another time, this time under the motto "get
the locking right". As a start (and a help for myself), here comes an
overview of the scheme the final version may expose - as long as there
are separate locks:

gatekeeper_thread / xnshadow_relax:
rpilock, followed by nklock
(while xnshadow_relax puts both under irqsave...)

xnshadow_unmap:
nklock, then rpilock nested

xnshadow_start:
rpilock, followed by nklock

xnshadow_renice:
nklock, then rpilock nested

schedule_event:
only rpilock

setsched_event:
nklock, followed by rpilock, followed by nklock again

And then there is xnshadow_rpi_check which has to be fixed to:
nklock, followed by rpilock (here was our lock-up bug)

That's a scheme which /should/ be safe. Unfortunately, I see no way to
get rid of the remaining nestings.

And I still doubt we are gaining much by the lock split-up on SMP (it's
pointless for UP due to xnshadow_relax). In case there is heavy
migration activity on multiple cores/CPUs, we now regularly contend for
two locks in the hot paths instead of just the one everyone has to go
through anyway. And while we obviously don't win a dime for the worst
case, the average reduction of spinning times trades off against more
atomic (cache-line bouncing) operations. Were you able to measure some
improvement?

Jan





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-20 Thread Philippe Gerum
On Fri, 2007-07-20 at 16:20 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> ...
> > Read my mail, without listening to your own grumble at the same time,
> > you should see that this is not a matter of being right or wrong, it is
> > a matter of who needs what, and how one will use Xenomai. Your grumble
> > does not prove anything unfortunately, otherwise everything would be
> > fixed since many moons.
> 
> Why things are unfixed has something to do with their complexity. RPI is
> a complex thing AND it is a mechanism separate from the core (that's why
> I was suggesting to reuse the PI code if possible - something that has
> already been integrated for many moons).
> 

I'm afraid RPI and PI are very different beasts. The purpose of RPI is
to track real-time priority for the _pseudo_ root thread, PI deals with
Linux tasks. Moreover, RPI does no priority propagation beyond the first
level (i.e. the root thread one), and only has to handle backtracking in
a trivial way. For this reason, the PI implementation is way more
complex, zillion times beyond RPI, so the effort would be absolutely
counter-productive.

I understand your POV, the whole RPI thing seems baroque to you, and I
can only agree with you here, it is. However, we still need RPI for
proper behaviour in a lot of cases, at least with a co-kernel technology
under our feet. So, I'm going to submit fixes for this issue, and agree
to change the default knob from enabled to disabled for the native and
POSIX skins if need be, if the observation period tells us so.

Now, within the RPI issue, there is the double locking one: I'm going to
be very pragmatic here. If this is logically possible to keep the double
locking, I will keep it. The point being that people running real-time
applications on SMP configs tend in fact to prefer asymmetry to symmetry
when building their design. I mean that separate CPUs are usually
dedicated to different application tasks; in such a common pattern, if
one of the CPUs is running a frequent (mode) switching task, it may put a
serious pressure on the nklock for all others (imagine a fast periodic
timeline on one CPU sending data to a secondary mode logger on a second
CPU, both being synchronized on a Xenomai synch). This is what I don't
want, if possible. If it is not possible to define a proper locking
scheme without resorting to 1) hairy and overly complex constructs, or
2) voodoo spells, then I will put everyone under the nklock, albeit I
think this is a sub-optimal solution.

Ok, let's move on. The main focus is -rc1, and beyond that 2.4 final. We
are damned late already.

-- 
Philippe.



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-21 Thread Philippe Gerum
On Fri, 2007-07-20 at 16:20 +0200, Jan Kiszka wrote:

> OK, let's go through this another time, this time under the motto "get
> the locking right". As a start (and a help for myself), here comes an
> overview of the scheme the final version may expose - as long as there
> are separate locks:
> 
> gatekeeper_thread / xnshadow_relax:
>   rpilock, followed by nklock
>   (while xnshadow_relax puts both under irqsave...)
> 

The relaxing thread must not be preempted in primary mode before it
schedules out but after it has been linked to the RPI list, otherwise
the root thread would benefit from a spurious priority boost. This said,
in the UP case, we have no lock to contend for anyway, so the point of
discussing whether we should have the rpilock or not is moot here.

> xnshadow_unmap:
>   nklock, then rpilock nested
> 

This one is the hardest to solve.

> xnshadow_start:
>   rpilock, followed by nklock
> 
> xnshadow_renice:
>   nklock, then rpilock nested
> 
> schedule_event:
>   only rpilock
> 
> setsched_event:
>   nklock, followed by rpilock, followed by nklock again
> 
> And then there is xnshadow_rpi_check which has to be fixed to:
>   nklock, followed by rpilock (here was our lock-up bug)
> 

rpilock -> nklock in fact. The last lockup was rather likely due to the
gatekeeper's dangerous nesting of nklock -> rpilock -> nklock.

> That's a scheme which /should/ be safe. Unfortunately, I see no way to
> get rid of the remaining nestings.
> 

There is one, which consists of getting rid of the rpilock entirely. The
purpose of such lock is to protect the RPI list when fixing the
situation after a task migration in secondary mode triggered from the
Linux side. Addressing the latter issue differently may solve the
problem more elegantly than figuring out how to combine the two locks,
or hammering the hot path with the nklock. Will look at this.

-- 
Philippe.





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-21 Thread Philippe Gerum
On Thu, 2007-07-19 at 19:57 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > Ok, the rpilock is local, the nesting level is bearable, let's focus on
> > putting this thingy straight.
> 

Sorry, I missed this one, which in fact explains why you were referring
to Xenomai PI and not PREEMPT_RT PI (yeah, I thought for a while that
you were nuts enough to ask me to model RPI after RT-PI... so I must be
nuts myself).

> Well, redesigning things may not necessarily improve the situation, but
> reducing the amount of special RPI code might be worth a thought:
> 
> What is so special about RPI compared to standard prio inheritance?

Basically, boost propagation and priority backtracking, as I previously
answered in the wrong context. This said, I still think that the
complexity of PI (the Xenomai one) is much higher than that of RPI in
its current form.

>  What
> about [wild idea ahead!] modelling RPI as a virtual mutex that is
> permanently held by the ROOT thread and which relaxed threads try to
> acquire? They would never get it, rather drop the request (and thus the
> inheritance) once they are to be hardened again or Linux starts to
> schedule around.
> 
> *If* that is possible, we would
>  A) reuse existing code heavily,
>  B) lack any argument for separate locking,
>  C) make things far easier to understand and review.
> 
> Sounds too beautiful to work, I'm afraid...
> 

It would be more elegant than RPI currently is, no question. This is how
message passing works in the native API, for instance, in order to let
the server inherit the client's priority.

The main problem with PI is that everything starts from xnsynch_sleep_on.
Since we could not use this interface to activate PI, we would have to
craft another one. Additionally, some Linux activities may change the
RPI state (e.g. sched_setscheduler()), so we would have to create a
parallel path to fix this state without resorting to the normal PI
mechanism, which is meant to be used over a blockable context,
Xenomai-wise. That is a lot of changes for the purpose of solely
recycling the basics of a PI implementation.

> Jan
> 
-- 
Philippe.





Re: [Xenomai-core] [Xenomai-help] Sporadic PC freeze after rt_task_start

2007-07-22 Thread Jan Kiszka
Philippe Gerum wrote:
> On Fri, 2007-07-20 at 16:20 +0200, Jan Kiszka wrote:
> 
>> OK, let's go through this another time, this time under the motto "get
>> the locking right". As a start (and a help for myself), here comes an
>> overview of the scheme the final version may expose - as long as there
>> are separate locks:
>>
>> gatekeeper_thread / xnshadow_relax:
>>  rpilock, followed by nklock
>>  (while xnshadow_relax puts both under irqsave...)
>>
> 
> The relaxing thread must not be preempted in primary mode before it
> schedules out but after it has been linked to the RPI list, otherwise
> the root thread would benefit from a spurious priority boost. This said,
> in the UP case, we have no lock to contend for anyway, so the point of
> discussing whether we should have the rpilock or not is moot here.
> 
>> xnshadow_unmap:
>>  nklock, then rpilock nested
>>
> 
> This one is the hardest to solve.
> 
>> xnshadow_start:
>>  rpilock, followed by nklock
>>
>> xnshadow_renice:
>>  nklock, then rpilock nested
>>
>> schedule_event:
>>  only rpilock
>>
>> setsched_event:
>>  nklock, followed by rpilock, followed by nklock again
>>
>> And then there is xnshadow_rpi_check which has to be fixed to:
>>  nklock, followed by rpilock (here was our lock-up bug)
>>
> 
> rpilock -> nklock in fact.

Yes, I meant it the other way around: the invocation of
xnpod_renice_root() must be moved outside of the nklock - which should
be trivial, correct?

> The last lockup was rather likely due to the
> gatekeeper's dangerous nesting of nklock -> rpilock -> nklock.

This path - as one of three with this ordering - surely triggered the
bug. But given that the other two nestings of this kind cannot be
resolved yet, while our reversely ordered nesting in xnshadow_rpi_check
can, it is clear that the latter is the weak point. So far we only have
a fix for Mathias' test case, which stresses just a subset of all the
rpilock paths.

> 
>> That's a scheme which /should/ be safe. Unfortunately, I see no way to
>> get rid of the remaining nestings.
>>
> 
> There is one, which consists of getting rid of the rpilock entirely. The
> purpose of such lock is to protect the RPI list when fixing the
> situation after a task migration in secondary mode triggered from the
> Linux side. Addressing the latter issue differently may solve the
> problem more elegantly than figuring out how to combine the two locks,
> or hammering the hot path with the nklock. Will look at this.

All the better! Looking forward to it.

Jan


