John Lazzaro wrote:

> Steve Harris <[EMAIL PROTECTED]> writes:
>
>> SAOL is still block based AFAIK.
>
> See:
>
> http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/wemp01.pdf
>
> Sfront does no block-based optimizations. And for many
> purposes, sfront is fast enough to do the job. It may
> very well be that sfront could go even faster with
> blocking, although the analysis is quite subtle -- in
> a machine with a large cache, and a moderate-sized SAOL
> program, you're running your code and your data in the
> cache most of the time.

I totally agree that there's no intrinsic performance difference
between blockless and block-based processing other than what sits
better in the cache. Block-based processing might very well go
faster because of this, although it might not.
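
To make the trade-off concrete, here's a minimal sketch (hypothetical
functions in C, not any particular plugin API) of the same one-pole
filter written per-sample and per-block; the only structural
difference is how long the loop body and the filter state stay hot
in the cache:

/* Per-sample ("blockless") form: trivial per-call work, but the call
 * and the plugin's state are touched once for every sample. */
static inline float onepole_run_sample(float *state, float coeff, float in)
{
    *state += coeff * (in - *state);
    return *state;
}

/* Block form: the same filter applied to a buffer; the loop body stays
 * hot in the instruction cache and the state stays in a register for
 * the whole block. */
static void onepole_run_block(float *state, float coeff,
                              const float *in, float *out,
                              unsigned long nframes)
{
    float s = *state;
    for (unsigned long i = 0; i < nframes; i++) {
        s += coeff * (in[i] - s);
        out[i] = s;
    }
    *state = s;
}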

But the plugin scheme we've been discussing under the title of
"blockless processing" raises a few more issues.

By doing just-in-time compilation of a whole bunch of connected
plugins, which may have been written separately but are now being
connected into a graph by the host application, you get the
following opportunities:

+ Code optimisation across plugin boundaries (especially powerful
if there are unconnected inputs and outputs, or constant control
values - see the sketch after this list).

+ Compile-time parameterisation of plugins (ok, you can do that
with any plugin, but "compile time" is actually instantiation time
in this scheme, so it becomes something the host can specify).

+ One-sample latency between connected plugins (the original
reason this whole thing got discussed IIRC).
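
As a toy example of the first two points (everything here is invented
for illustration, in C, not any real plugin API): if the host fixes a
gain control at 1.0 and leaves one mixer input unconnected, a compiler
that can see across the plugin boundaries can fold both plugin bodies
away entirely:

/* Hypothetical per-sample plugin bodies as the graph compiler might
 * see them (names and signatures invented for illustration): */
static inline float gain_run(float gain, float in) { return gain * in; }
static inline float mix_run(float a, float b)      { return a + b; }

/* What it could emit for the graph
 *     input -> gain (control fixed at 1.0) -> mix (2nd input unconnected)
 * once it knows the gain is 1.0 and the unconnected mix input is 0.0: */
static inline float fused_graph_run(float in)
{
    /* after inlining and constant folding this is just "return in" */
    return mix_run(gain_run(1.0f, in), 0.0f);
}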

The compiled result might well be partially or fully block-processed
- subject to the host tolerating that, and to internal feedback loops
being processed sample-by-sample - but the compiler would be able to
base its optimisation decisions on how the *entire* graph to be
processed fits into the *actual* available cache.
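
For what it's worth, here's a rough sketch (invented names, C, not the
output of any existing compiler) of how such a generated kernel might
mix the two styles: block-processing the feed-forward edges of the
graph and dropping to sample-by-sample only where a feedback edge
forces it:

/* "Plugins" A and B sit on a feed-forward edge, so A can fill a whole
 * block before B reads it; the last loop closes a feedback edge, so it
 * runs sample-by-sample with one sample of latency around the loop. */
#define BLOCK 64

static void graph_run(const float *in, float *out, unsigned long nframes,
                      float *a_state, float *fb_state, float fb_gain)
{
    float bus[BLOCK];

    for (unsigned long off = 0; off < nframes; off += BLOCK) {
        unsigned long n = nframes - off < BLOCK ? nframes - off : BLOCK;
        unsigned long i;

        /* "plugin A" then "plugin B", a block at a time */
        for (i = 0; i < n; i++) {
            *a_state += 0.1f * (in[off + i] - *a_state);
            bus[i] = *a_state;
        }
        for (i = 0; i < n; i++)
            bus[i] *= 0.5f;

        /* feedback section: each output depends on the previous one,
         * so this part cannot be blocked */
        for (i = 0; i < n; i++) {
            float y = bus[i] + fb_gain * *fb_state;
            *fb_state = y;
            out[off + i] = y;
        }
    }
}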

Simon Jenkins
(Bristol, UK)

