On 10/10/2012 11:01 AM, Jan Kiszka wrote:
> On 2012-10-10 10:58, Gilles Chanteperdrix wrote:
>> On 10/10/2012 10:10 AM, Jan Kiszka wrote:
>>> On 2012-10-10 10:04, Gilles Chanteperdrix wrote:
>>>> On 10/10/2012 09:56 AM, Jan Kiszka wrote:
>>>>> On 2012-10-10 09:51, Gilles Chanteperdrix wrote:
>>>>>> On 10/10/2012 09:38 AM, Thierry Bultel wrote:
>>>>>>> Hi Gilles,
>>>>>>>
>>>>>>> Many thanks,
>>>>>>> The first patch does not work, the second does.
>>>>>>> I think the reason why the 1st patch does not work is that in rtcan_virt, we have
>>>>>>>
>>>>>>> rtdm_lock_get_irqsave(&rtcan_recv_list_lock, lock_ctx);
>>>>>>> rtdm_lock_get(&rtcan_socket_lock);
>>>>>>>
>>>>>>> ...
>>>>>>> ---> rtcan_rcv(rx_dev, &skb);
>>>>>>> ....
>>>>>>>
>>>>>>> rtdm_lock_put(&rtcan_socket_lock);
>>>>>>> rtdm_lock_put_irqrestore(&rtcan_recv_list_lock, lock_ctx);
>>>>>>>
>>>>>>> and rtcan_rcv->rtcan_rcv_deliver->rtdm_sem_up(&sock->recv_sem);
>>>>>>>
>>>>>>> thus the same re-scheduling stuff with interrupts locked.
>>>>>>>
>>>>>>> Aren't you afraid of side effects with the second patch,
>>>>>>> since it changes the overall behaviour?
>>>>>>> Wouldn't you prefer a change local to rtcan_virt only?
>>>>>>
>>>>>> We should ask Jan's opinion. In any case, if we adopt the second patch,
>>>>>> we might want to try and reduce the overhead of xnpod_unlock_sched.
>>>>>>
>>>>>
>>>>> We were signaling the semaphore while holding a spin lock? That's a
>>>>> clear bug. Your patch is aligning rtcan to the pattern we are also using
>>>>> in RTnet. We just need to make sure (haven't looked at the full context
>>>>> yet) that sock remains valid even after dropping the lock(s).
>>>>
>>>> The second patch idea was to lock the scheduler while spinlocks are
>>>> held, so that posting a semaphore while holding a spin lock is no longer
>>>> a bug.
>>>
>>> Sounds a bit hacky,
>>
>> Well, that is what the linux kernel does.
>>
>>> but I think we have this pattern
>>> (RTDM_EXECUTE_ATOMICALLY)
>>
>> RTDM_EXECUTE_ATOMICALLY is a bit of a misnomer: if you do
>> RTDM_EXECUTE_ATOMICALLY(foo(); rtdm_sem_up(); bar());
>> foo() and bar() are not executed atomically if sem_up wakes up another
>> thread.
>>
>> So, I do not see how RTDM_EXECUTE_ATOMICALLY solves the issue we are
>> talking about.
>
> RTDM_EXECUTE_ATOMICALLY holds the nucleus lock across the encapsulated
> code, executing it atomically as rescheduling is postponed until the end
> of the block.
Err... no. Absolutely not. Calling xnpod_schedule while holding the
nucleus lock does not change anything: the rescheduling takes place at
that point. What you describe only works if RTDM_EXECUTE_ATOMICALLY is
called from secondary domain context, where xnpod_schedule needs the
escalation virq, and the escalation virq is indeed only handled once the
Xenomai domain is unstalled.
--
Gilles.
_______________________________________________
Xenomai mailing list
[email protected]
http://www.xenomai.org/mailman/listinfo/xenomai