Hi Ken Shirriff,
could you please give a code example, or a link, showing how to manage 
the interrupt handling on the Linux user-code side?
Thanks in advance.
RoSchmi

On Friday, November 10, 2017 at 06:19:47 UTC+1, Ken Shirriff wrote:

> Thanks everyone for the suggestions. I used Dimitar's approach; it works 
> reliably and has made my code more comprehensible.
>
> I now have a single event loop that does the wait/clear/process, rather 
> than trying to handle things semi-synchronously and expecting to get an 
> interrupt event in response to a particular PRU request. I also made 
> "ownership" of each buffer explicit between the PRU and the ARM. When the 
> ARM has a buffer ready for the PRU, it marks the owner as "PRU". When the 
> PRU is done with a buffer, it marks the owner as "ARM" and sends an 
> interrupt. So when the ARM gets an interrupt, it doesn't assume anything 
> is done, but checks the owner tags to see what it should do.
>
> The shorter explanation is that before I was using the interrupt event to 
> indicate a particular task was done, which was a race condition mess. Now I 
> use the interrupt event to indicate that something has (probably) changed 
> and then check to see what changed.
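>
> A minimal sketch of that ownership scheme on the ARM side (the struct 
> layout, the OWNER_* values and process_buffer() are illustrative 
> placeholders, not the actual code):
>
> #include <stdint.h>
>
> /* Descriptor kept in PRU shared memory, mirrored by the PRU firmware. */
> #define OWNER_ARM 0   /* ARM may touch the buffer */
> #define OWNER_PRU 1   /* PRU may touch the buffer */
>
> struct buf_desc {
>     volatile uint32_t owner;    /* current owner of data[] */
>     volatile uint32_t length;   /* valid bytes in data[] */
>     uint8_t data[1536];
> };
>
> /* Run after each PRU interrupt: don't assume which operation completed,
>  * just rescan the ownership tags and handle whatever the PRU has handed
>  * back to the ARM. */
> static void scan_buffers(struct buf_desc *bufs, int nbufs)
> {
>     for (int i = 0; i < nbufs; i++)
>         if (bufs[i].owner == OWNER_ARM)
>             process_buffer(&bufs[i]);   /* placeholder handler */
> }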
>
> Ken
>
> On Tuesday, November 7, 2017 at 9:42:58 AM UTC-8, din...@gmail.com wrote:
>>
>> Hi,
>>
>> FYI, recent remoteproc RPMSG versions have moved from mailboxes to 
>> interrupts for communication: 
>> https://git.ti.com/pru-software-support-package/pru-software-support-package/commit/69805828df0f262fb60363c2db189d1b8d0b693c
>>
>> A race-free algorithm would use the interrupts simply to wake the peer, 
>> and rely on a shared-memory FIFO for the actual events. AFAIK, that's the 
>> idea used by virtio/RPMSG. In pseudo-code (a C sketch follows the list):
>>
>> 1. Wait for interrupt.
>> 2. Clear interrupt.
>> 3. Drain the events-FIFO located in shared memory.
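>>
>> On the Linux/UIO side, with the prussdrv library (assuming a version 
>> with the two-argument prussdrv_pru_clear_event()), that loop might look 
>> roughly like this; the FIFO helpers are placeholders for whatever 
>> shared-memory queue you use:
>>
>> #include <prussdrv.h>
>> #include <pruss_intc_mapping.h>
>>
>> static void event_loop(void)
>> {
>>     for (;;) {
>>         /* 1. Wait for interrupt (blocks on the PRU_EVTOUT_0 fd). */
>>         prussdrv_pru_wait_event(PRU_EVTOUT_0);
>>         /* 2. Clear interrupt so the next one can be delivered. */
>>         prussdrv_pru_clear_event(PRU_EVTOUT_0, PRU0_ARM_INTERRUPT);
>>         /* 3. Drain the events-FIFO in shared memory; zero, one or
>>          *    several events may have been queued by the PRU. */
>>         while (fifo_nonempty())           /* placeholder */
>>             handle_event(fifo_pop());     /* placeholder */
>>     }
>> }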
>>
>> Regards,
>> Dimitar
>>
>> On Tuesday, November 7, 2017 at 4:09:33 AM UTC+2, Ken Shirriff wrote:
>>>
>>> I'm trying to send information back and forth between the processor and 
>>> the PRU, and I'm looking for suggestions on the best way to do this.
>>>
>>> Currently I'm using PRU_EVTOUT0 to send events from the PRU. The 
>>> processor code does a select() on the PRU_EVTOUT_0 fd to find out when an 
>>> event has happened. Then I do a prussdrv_pru_wait_event() and 
>>> prussdrv_pru_clear_event() to get rid of the event. (The select is because 
>>> I also want to wait for network data.)
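>>>
>>> Roughly, the loop looks like the sketch below (pru_fd is the UIO fd 
>>> behind PRU_EVTOUT_0, net_fd stands in for the network socket, and the 
>>> handlers are placeholders):
>>>
>>> #include <sys/select.h>
>>> #include <prussdrv.h>
>>> #include <pruss_intc_mapping.h>
>>>
>>> static void poll_loop(int pru_fd, int net_fd)
>>> {
>>>     for (;;) {
>>>         fd_set rfds;
>>>         FD_ZERO(&rfds);
>>>         FD_SET(pru_fd, &rfds);
>>>         FD_SET(net_fd, &rfds);
>>>         int maxfd = (pru_fd > net_fd ? pru_fd : net_fd) + 1;
>>>         if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
>>>             continue;
>>>         if (FD_ISSET(pru_fd, &rfds)) {
>>>             /* The race: another PRU event can fire anywhere between
>>>              * select() returning and the clear below. */
>>>             prussdrv_pru_wait_event(PRU_EVTOUT_0);
>>>             prussdrv_pru_clear_event(PRU_EVTOUT_0, PRU0_ARM_INTERRUPT);
>>>             handle_pru_event();   /* placeholder */
>>>         }
>>>         if (FD_ISSET(net_fd, &rfds))
>>>             handle_network();     /* placeholder */
>>>     }
>>> }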
>>>
>>> However, this is kind of a mess of race conditions, since an event can 
>>> come in between the select and the clear. Or two events can happen before 
>>> the select. So I have various status flags that the PRU sets in memory. But 
>>> that leads to other race conditions.
>>>
>>> So, I'm wondering if there's a better way to handle events back and 
>>> forth. Other people must have dealt with this and come up with good 
>>> solutions.
>>>
>>> I've seen stuff about Remoteproc - is that the cool new technology? Its 
>>> mailboxes seem like a good model. However, I'd rather stick with the UIO 
>>> model instead of moving to a new kernel and rewriting everything if 
>>> possible. 
>>>
>>> My application, in case it's relevant: I'm building a network gateway 
>>> with the PRU bit-banging a 3 megabit/second Ethernet. So the processor 
>>> sends packets to the PRU to transmit, and the PRU tells the processor about 
>>> incoming packets. The PRU needs to tell the processor when a send is 
>>> completed, or when a packet has arrived. 
>>>
>>> Thanks for any help,
>>> Ken
>>>
>>
