> Let me ask a question about your centralized and pre-cooked buffering 
> approach.
> 
> As far as I see, even then the kernel API must notify the driver at the right 
> moment
> that a new block has arrived. Right?

The low level driver queues words (data byte, flag byte). The buffer
processing workqueue picks those bytes off, atomically empties the
queue, and invokes the receive handler.

> But how does the kernel API know how long such a block is?

It's as long as the data that has arrived in that time.

> Usually there is a start byte/character, sometimes a length indicator, then 
> payload data,
> some checksum and finally a stop byte/character. For NMEA it is $, no length, 
> * and \r\n.
> For other serial protocols it might be AT, no length, and \r. Or something 
> different.
> HCI seems to use 2 byte op-code or 1 byte event code and 1 byte parameter 
> length.

It doesn't look for any kind of protocol block headers. The routine
invoked by the work queue does any frame recovery.

> So I would even conclude that you usually can't even use DMA based UART 
> receive
> processing for arbitrary and not well-defined protocols. Or have to assume 
> that the

We do, today, for Bluetooth and other protocols just fine - it's all
about data flows, not about framing in the protocol sense.

Alan
