On 14 May 2013, at 23:24, Sébastien Bourdeauducq 
<sebastien.bourdeaud...@lekernel.net> wrote:

> On 05/14/2013 09:34 PM, toby @ tobyz.net wrote:
>> - This function has a sampler available to bring in arbitrary pixels from 
>> sources
> 
> Hmm, that might work with a pipeline like this:
> 
> "PFPU" generating addresses -> memory controller -> "PFPU" processing the 
> data -> video DAC
> 
> plus a compiler that splits the code into address generation and processing 
> parts. Memory addresses that depend on memory content cannot work with an 
> architecture like that, but it should not be a problem for many graphics 
> processing algos :)

Hmmm.
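Just to make sure I follow the split you're describing, here's a rough Python sketch of the two-phase model (function names and structure are mine, purely illustrative, not the PFPU ISA): addresses are generated in one pass from output coordinates only, the memory controller fetches them, and a second pass processes the fetched data. The restriction you mention falls out naturally: phase 1 never sees fetched pixel values.

```python
# Illustrative two-phase pipeline: address generation -> fetch -> processing.
# Addresses may depend only on output coordinates, never on memory content.

def phase1_addresses(width, height, warp):
    """Pass 1: compute a source address for every output pixel."""
    return [warp(x, y) for y in range(height) for x in range(width)]

def phase2_process(fetched, shade):
    """Pass 2: turn the prefetched pixel data into output values."""
    return [shade(p) for p in fetched]

def render(width, height, warp, fetch, shade):
    addrs = phase1_addresses(width, height, warp)
    data = [fetch(a) for a in addrs]   # stands in for the memory controller
    return phase2_process(data, shade)
```

Is that roughly the model the compiler would target?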

The arbitrary sampling is there for 
- basics: resizing inputs, handling mismatched aspect ratios
- advanced: display wall controller, soft edge blending, warping for projection 
mapping
- crazy: art, i.e. as d-fuse we do a lot of live 2D cut-up and manipulation of 
sources with a load of OpenGL patches I've written.
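To pin down what I mean by "arbitrary sampling", even the basic resize case needs it: each output pixel samples the source at a computed, non-1:1 position. A nearest-neighbour sketch in Python (names are mine, for illustration only):

```python
def sample_nearest(src, src_w, src_h, u, v):
    """Sample a row-major source image at normalised coords u, v in [0, 1)."""
    x = min(int(u * src_w), src_w - 1)
    y = min(int(v * src_h), src_h - 1)
    return src[y * src_w + x]

def resize(src, src_w, src_h, dst_w, dst_h):
    """Resize by sampling the source at an arbitrary position per output pixel."""
    return [sample_nearest(src, src_w, src_h,
                           (x + 0.5) / dst_w, (y + 0.5) / dst_h)
            for y in range(dst_h) for x in range(dst_w)]
```

The warping and blending cases are the same idea with a fancier mapping from output coordinates to source coordinates.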

And I'll be upfront that my interest is in what could be done with the M^3 
platform and a software-definable pipeline between multiple inputs and outputs.

Which leads to two questions: is the present M1 implementation limited to 1:1 
pixel mapping between inputs and output, and is there an alternative route you 
intend to implement that doesn't take the pixel-shader-with-sampler approach?

> (Of course, "memory controller" is an oversimplification, we'll need 
> multi-ported prefetch caches and what not in order to obtain halfway decent 
> performance from the Slowtan-6. This paper describes an interesting solution: 
> https://graphics.stanford.edu/papers/texture_prefetch/)
> 
> Sebastien
> 
> _______________________________________________
> http://lists.milkymist.org/listinfo.cgi/devel-milkymist.org
> IRC: #milkymist@Freenode
