>> 
>> You see where PIPO loses its time?
> 
> Not really, I mean the data transfers wind up being slower because
> they're bigger than they looked from the timings (you haven't mentioned
> how big they are...) but I don't really see why PIO (assuming that's
> what you mean by "PIPO" which is an acronym I don't recall hearing
> before) impacts the time taken to raise /CS.

The transfers are: 1 + 1CS + 2 + 3CS + 3 (numbers = bytes, CS = cs_change set)

But what you are ignoring is that, under load, spi-bcm2835 runs
at 90+% system CPU, while the DMA driver runs at 80-85%.
(Not to mention the number of interrupts, context switches, ...)

And the DMA path is not fully optimized yet - so do not compare it
against code that only served as a first baseline to get the DMA
issues solved before optimizing all the way.

You are essentially comparing "production" code (in-kernel)
to R&D code (the DMA version).

Take the DMA case as a prototype to see how it does with the
one-message interface, and then let us see how pipelined DMA
performs by comparison...

Then add the "prepare" message into the mix as a further
possible optimization, to see how far this takes us from the
original driver with its high system CPU, interrupt, and
context-switch counts.

Ciao,
        Martin


--
To unsubscribe from this list: send the line "unsubscribe linux-spi" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html