Jassi,
>>> See how mailbox_startup() tries to balance mbox->ops->startup(), and
>>> mailbox_fini() the mbox->ops->shutdown(). That's very fragile and the
>>> cause of imbalance between rpm enable/disable if your clients are
>>> buggy.
>>
>> Yeah, it is kinda messed up in the existing code, the
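
For readers following along: the fragility under discussion is pairing
ops->startup()/ops->shutdown() (and any runtime PM calls inside them)
across multiple clients. A minimal sketch of reference-counted balancing;
the names (mbox_link, use_count) are illustrative, not taken from the
code under review:

#include <linux/mutex.h>

struct mbox_link;

struct mbox_link_ops {
	int (*startup)(struct mbox_link *link);
	void (*shutdown)(struct mbox_link *link);
};

/* Illustrative only: count users so that startup()/shutdown() -- and
 * hence any rpm enable/disable done inside them -- always stay
 * balanced, no matter how many clients come and go. */
struct mbox_link {
	struct mutex lock;
	int use_count;
	const struct mbox_link_ops *ops;
};

static int mbox_link_get(struct mbox_link *link)
{
	int ret = 0;

	mutex_lock(&link->lock);
	if (!link->use_count)
		ret = link->ops->startup(link);	/* first user powers up */
	if (!ret)
		link->use_count++;
	mutex_unlock(&link->lock);
	return ret;
}

static void mbox_link_put(struct mbox_link *link)
{
	mutex_lock(&link->lock);
	if (!--link->use_count)
		link->ops->shutdown(link);	/* last user powers down */
	mutex_unlock(&link->lock);
}
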
Hello Suman,
On 10 May 2013 05:48, Suman Anna wrote:
>> No, please. The controller driver should not implement any policy (of
>> allowing/disallowing requests). It should simply try to do as
>> directed. If the client screwed up even after getting info from
>> platform_data/DT, let it suffer.
>
Hi Jassi,
>
> On 9 May 2013 06:55, Suman Anna wrote:
>
>>> so it can't be driven by the controller. We could make it a Kconfig option.
>>> What do you suggest?
>>
>> I am saying controller/link because they are the ones that know what the
>> physical transport is, and it may vary from one to another.

Hi Suman,
On 9 May 2013 06:55, Suman Anna wrote:
>> so it can't be driven by the controller. We could make it a Kconfig option.
>> What do you suggest?
>
> I am saying controller/link because they are the ones that know what the
> physical transport is, and it may vary from one to another. I wou
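
A sketch of what "driven by the controller/link" could mean in practice,
as opposed to a build-wide Kconfig switch: the controller declares a
property of its transport and the core adapts per channel. All names
here (rx_atomic, mbox_chan_received_data(), ...) are assumptions for
illustration, not the patchset's actual API:

#include <linux/workqueue.h>

struct mbox_chan {
	bool rx_atomic;			/* transport property, set by controller */
	void (*rx_callback)(void *data);	/* client's handler */
	void *rx_data;
	struct work_struct rx_work;	/* INIT_WORK'd at channel setup */
};

static void mbox_rx_work(struct work_struct *work)
{
	struct mbox_chan *chan = container_of(work, struct mbox_chan, rx_work);

	chan->rx_callback(chan->rx_data);	/* sleepable context */
}

/* The controller calls this from its IRQ handler on message arrival. */
static void mbox_chan_received_data(struct mbox_chan *chan, void *data)
{
	if (chan->rx_atomic) {
		chan->rx_callback(data);	/* client accepts IRQ context */
	} else {
		chan->rx_data = data;
		schedule_work(&chan->rx_work);	/* defer to process context */
	}
}
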
Hi Jassi,
>
> The client(s) can always generate TX requests at a rate greater than
> the API could transmit on the physical link. So as much as we dislike
> it, we have to buffer TX requests, otherwise N clients would have to.
The current code doesn't support N clients today anyway
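
Since the disagreement is about where that buffering lives, here is a
minimal sketch of a core-side TX ring. The depth, the names, and the
-ENOBUFS policy are assumptions for illustration, not the patch's
actual behaviour:

#include <linux/spinlock.h>
#include <linux/errno.h>

#define MBOX_TX_QUEUE_LEN 20		/* assumed depth; could be per-link */

struct mbox_tx_queue {
	spinlock_t lock;		/* callable from atomic context */
	void *msg[MBOX_TX_QUEUE_LEN];
	unsigned int head, count;
};

/* Queue one message; the core drains the ring as the controller
 * signals TX-done, so N clients share one buffer instead of each
 * implementing its own. */
static int mbox_queue_tx(struct mbox_tx_queue *q, void *msg)
{
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(&q->lock, flags);
	if (q->count == MBOX_TX_QUEUE_LEN)
		ret = -ENOBUFS;		/* producers outpaced the link */
	else
		q->msg[(q->head + q->count++) % MBOX_TX_QUEUE_LEN] = msg;
	spin_unlock_irqrestore(&q->lock, flags);
	return ret;
}
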
On 8 May 2013 03:18, Suman Anna wrote:
> Hi Jassi,
>
>> On 7 May 2013 05:15, Suman Anna wrote:
The client(s) can always generate TX requests at a rate greater than
the API could transmit on the physical link. So as much as we dislike
it, we have to buffer TX requests, otherwise N clients would have to.

Hi Jassi,
> On 7 May 2013 05:15, Suman Anna wrote:
>>>
>>> The client(s) can always generate TX requests at a rate greater than
>>> the API could transmit on the physical link. So as much as we dislike
>>> it, we have to buffer TX requests, otherwise N clients would have to.
>>
>> The current code doesn't support N clients today anyway

Hi Suman,
On 7 May 2013 05:15, Suman Anna wrote:
>>
>> The client(s) can always generate TX requests at a rate greater than
>> the API could transmit on the physical link. So as much as we dislike
>> it, we have to buffer TX requests, otherwise N clients would have to.
>
> The current code doesn't support N clients today anyway

Hi Jassi,
On 05/04/2013 02:08 PM, Jassi Brar wrote:
> Hi Suman,
>
>> Anyway, here is a summary of the open points that we have:
>> 1. Atomic Callbacks:
>> The current code provides some sort of buffering on Tx, but imposes the
>> restriction that the clients do the buffering on Rx. This is main
>
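
To make that Rx-side restriction concrete: with atomic callbacks, every
client that cannot handle a message in IRQ context ends up open-coding
roughly the following (a hypothetical client, not code from the series):

#include <linux/kfifo.h>
#include <linux/workqueue.h>

/* Hypothetical client-side Rx buffering: stash messages from the
 * atomic callback, process them later from a workqueue. rx_fifo and
 * rx_work must be set up with INIT_KFIFO()/INIT_WORK() at probe. */
struct my_client {
	DECLARE_KFIFO(rx_fifo, u32, 32);
	struct work_struct rx_work;
};

static void my_rx_callback(struct my_client *cl, u32 msg)	/* IRQ context */
{
	if (kfifo_in(&cl->rx_fifo, &msg, 1) != 1)
		pr_warn("rx fifo full, message dropped\n");
	schedule_work(&cl->rx_work);
}

static void my_rx_work(struct work_struct *work)
{
	struct my_client *cl = container_of(work, struct my_client, rx_work);
	u32 msg;

	while (kfifo_out(&cl->rx_fifo, &msg, 1) == 1)
		;	/* process msg in sleepable context */
}
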
Hi Suman,
On 4 May 2013 07:50, Suman Anna wrote:
> Hi Jassi,
>
> On 04/27/2013 01:14 PM, jassisinghb...@gmail.com wrote:
>> From: Jassi Brar
>>
>> Introduce common framework for client/protocol drivers and
>> controller drivers of Inter-Processor-Communication (IPC).
>>
>> Client driver developers should have a look at
>> include/linux/mailbox_client.h to understand the part of
>> the API exposed to client drivers.

Hi Jassi,
On 04/27/2013 01:14 PM, jassisinghb...@gmail.com wrote:
> From: Jassi Brar
>
> Introduce common framework for client/protocol drivers and
> controller drivers of Inter-Processor-Communication (IPC).
>
> Client driver developers should have a look at
> include/linux/mailbox_client.h to understand the part of
> the API exposed to client drivers.

From: Jassi Brar
Introduce common framework for client/protocol drivers and
controller drivers of Inter-Processor-Communication (IPC).
Client driver developers should have a look at
include/linux/mailbox_client.h to understand the part of
the API exposed to client drivers.
Similarly, controller driver developers should have a look at
include/linux/mailbox_controller.h.
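
For a feel of the client-side API being introduced, here is a
hypothetical user. The names follow the mailbox client API as it was
eventually merged (mbox_request_channel(), mbox_send_message(), ...);
treat them as assumptions for the exact revision posted in this thread:

#include <linux/mailbox_client.h>
#include <linux/device.h>
#include <linux/err.h>

static void demo_rx(struct mbox_client *cl, void *msg)
{
	dev_info(cl->dev, "remote replied: 0x%x\n", *(u32 *)msg);
}

static int demo_ping_remote(struct device *dev)
{
	struct mbox_client cl = {
		.dev		= dev,
		.rx_callback	= demo_rx,
		.tx_block	= true,	/* sleep until TX completes */
		.tx_tout	= 500,	/* ms */
	};
	struct mbox_chan *chan;
	u32 cmd = 0x1;
	int ret;

	chan = mbox_request_channel(&cl, 0);	/* channel 0 per DT binding */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	ret = mbox_send_message(chan, &cmd);
	mbox_free_channel(chan);
	return ret < 0 ? ret : 0;
}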