On 10/10/2012 09:56 AM, Jan Kiszka wrote:
> On 2012-10-10 09:51, Gilles Chanteperdrix wrote:
>> On 10/10/2012 09:38 AM, Thierry Bultel wrote:
>>> Hi Gilles,
>>>
>>> Many thanks,
>>> The first patch does not work, the second does.
>>> I think the reason the 1st patch fails is that in rtcan_virt, we have
>>>
>>> rtdm_lock_get_irqsave(&rtcan_recv_list_lock, lock_ctx);
>>> rtdm_lock_get(&rtcan_socket_lock);
>>>
>>> ...
>>> --->        rtcan_rcv(rx_dev, &skb);
>>> ....
>>>
>>> rtdm_lock_put(&rtcan_socket_lock);
>>> rtdm_lock_put_irqrestore(&rtcan_recv_list_lock, lock_ctx);
>>>
>>> and rtcan_rcv->rtcan_rcv_deliver->rtdm_sem_up(&sock->recv_sem);
>>>
>>> thus the same re-scheduling stuff with interrupts locked.
>>>
>>> Aren't you afraid of side effects with the second patch,
>>> since it changes the overall behaviour?
>>> Wouldn't you prefer a change local to rtcan_virt?
>>
>> We should ask Jan's opinion. In any case, if we adopt the second patch,
>> we might want to try and reduce the overhead of xnpod_unlock_sched.
>>
> 
> We were signaling the semaphore while holding a spin lock? That's a
> clear bug. Your patch is aligning rtcan to the pattern we are also using
> in RTnet. We just need to make sure (haven't looked at the full context
> yet) that sock remains valid even after dropping the lock(s).

The idea of the second patch was to lock the scheduler while spinlocks
are held, so that posting a semaphore while holding a spinlock is no
longer a bug.

-- 
                                            Gilles.

_______________________________________________
Xenomai mailing list
[email protected]
http://www.xenomai.org/mailman/listinfo/xenomai
