On Tue, 13 Dec 2005 16:04:30 +0900 Carsten writes:
> 
> 
> for now the only things i have done are copy (blit) and alpha blend
> (with or without destination alpha).
> 
> for C, mmx, sse and sse2 i have:
> 
> done:
> 
> * solid pixel copy forwards
> * pixel blend
> * pixel blend dst alpha
> 
> to do:
> 
> * solid pixel copy backwards
> * solid color copy
> * solid color blend
> * solid color blend dst alpha
> * color mul pixel copy
> * color mul pixel blend
> * color mul pixel blend dst alpha
> * alpha mask color copy
> * alpha mask color blend
> * alpha mask color blend dst alpha
> * alpha mask mul pixel copy
> * alpha mask mul pixel blend
> * alpha mask mul pixel blend dst alpha
> * pixel argb mask mul pixel copy
> * pixel argb mask mul pixel blend
> * pixel argb mask mul pixel blend dst alpha
> * yuv(yv12) to rgb
> * yuva(yv12+a plane) to argb
> * scale image (nearest)
> * scale image (filtered)
> * scale & rotate (transform) image non-repeat (nearest)
> * scale & rotate (transform) image repeat (nearest)
> * scale & rotate (transform) image non-repeat (smooth)
> * scale & rotate (transform) image repeat (smooth)
> * pixel box filter blur copy
> * pixel gaussian blur copy
> * alpha mask box filter blur copy
> * alpha mask gaussian blur copy
> 
> ok - so why all of these separately? build a set of really really really
> really fast routines to build "evas 2" software on top of. some of these
> listed are not in current evas. i think we can remove the cmod routines
> as they aren't used and are just a pain. some of the above (yuv->rgb)
> exist already in highly optimised form - it'll be hard to beat them -
> and an altivec one to boot.

        Having these set apart from any given lib, plus a test suite,
is a good step.
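
        Just to make the discussion concrete, here's roughly what one of the
simpler items in that list ("scale image (nearest)") looks like in plain C.
This is only an illustration, not evas code, and the names are made up:

#include <stdint.h>

/* nearest-neighbour scale of a 32-bit ARGB image; a fast version would
 * precompute a 16.16 fixed-point step instead of dividing per pixel */
static void
scale_nearest(const uint32_t *src, int sw, int sh,
              uint32_t *dst, int dw, int dh)
{
   int x, y;

   for (y = 0; y < dh; y++)
     {
        const uint32_t *srow = src + ((y * sh) / dh) * sw;  /* nearest src row */
        uint32_t *drow = dst + (y * dw);

        for (x = 0; x < dw; x++)
          drow[x] = srow[(x * sw) / dw];                    /* nearest src column */
     }
}
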
        While we're on the subject of 'blend' functions, let me remark
on a couple of things.

        I've actually sat down on a couple of occasions and written a few
more of the ones listed above, pre-mul and non pre-mul alpha versions..
and so have you (eg. you have gaussian blurring in e). It's likely that
nearly everyone out there has seen fit at some time to play around with
such things....
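
        For reference, the two flavours look something like this per pixel
on 0xAARRGGBB data. This is just a sketch in plain C with exact divides,
not anyone's actual implementation:

#include <stdint.h>

/* premultiplied "over": d = s + d * (255 - sa) / 255 per channel;
 * assumes valid premultiplied input (channel <= alpha), so no clamp */
static uint32_t
blend_premul(uint32_t s, uint32_t d)
{
   uint32_t ia = 255 - (s >> 24), r = 0;
   int sh;

   for (sh = 0; sh <= 24; sh += 8)
     r |= (((s >> sh) & 0xff) + ((((d >> sh) & 0xff) * ia) / 255)) << sh;
   return r;
}

/* non-premultiplied "over", ignoring dst alpha:
 * d = (s * sa + d * (255 - sa)) / 255 per colour channel */
static uint32_t
blend_nonpremul(uint32_t s, uint32_t d)
{
   uint32_t sa = s >> 24, r = d & 0xff000000;
   int sh;

   for (sh = 0; sh <= 16; sh += 8)
     r |= ((((s >> sh) & 0xff) * sa +
            ((d >> sh) & 0xff) * (255 - sa)) / 255) << sh;
   return r;
}

Fast versions typically trade the exact divides for multiply/shift
approximations, which also ties into the "quality" point further down.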

        Having fast c/mmx/sse/...xyz algorithms implementing compositing
ops is great.. and doubtless there are already plenty of excellent
implementations out there.

        But there are other things involved in 'common practice' use of
gfx libs.

        This has been the thrust of my experimenting with these, and
eg. the recent blend funcs I sent you were not really meant to provide
particularly better algos for doing c or mmx compositing... What they
were meant to do (besides adding mmx support to dst-alpha, in a truly
banal way) was to continue with that experiment.

        It was to see how useful it would be - in "common use" - to split
the functions into 'cases', as suitable... Whether it's testing per-pixel
alpha triviality, choosing when to multiply alphas and/or colors, or
picking 'specialized' functions that assume certain things about their
inputs, etc.
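
        As a trivial illustration of the first of those (names invented;
blend_premul() is assumed to be a per-pixel blend like the one sketched
earlier), a span blend that special-cases the two trivial per-pixel alphas:

#include <stdint.h>

extern uint32_t blend_premul(uint32_t s, uint32_t d);  /* as sketched above */

static void
blend_span(const uint32_t *src, uint32_t *dst, int len)
{
   for (; len > 0; len--, src++, dst++)
     {
        uint32_t sa = *src >> 24;

        if (sa == 255) *dst = *src;                      /* opaque: plain copy */
        else if (sa)   *dst = blend_premul(*src, *dst);  /* general blend */
        /* sa == 0: fully transparent source, skip the pixel */
     }
}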

        You see, just providing two sets of (mask x color -> dst) functions,
one for the case where the color's alpha = 255 and another for alpha < 255,
gives large gains in solid color text drawing over having one general-case
function. Gains can go up by something like 30% if one can assume that the
color is opaque -- and this is very frequently the case for a large amount
of text.
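
        In sketch form, that split could look like the following (all names
are invented, and this is the "no destination alpha" flavour): the
opaque-color span never scales the mask by the color's alpha at all, and
the choice between the two is made once per draw rather than per pixel:

#include <stdint.h>

/* blend one 8-bit channel of color into dst with weight a (0..255) */
#define BLEND_CH(c, d, a, sh) \
   ((((((c) >> (sh)) & 0xff) * (a) + \
      (((d) >> (sh)) & 0xff) * (255 - (a))) / 255) << (sh))

/* color alpha == 255: the mask value alone drives the blend */
static void
span_mask_color_opaque(const uint8_t *mask, uint32_t color,
                       uint32_t *dst, int len)
{
   for (; len > 0; len--, mask++, dst++)
     {
        uint32_t a = *mask;

        if (a == 255) *dst = color;
        else if (a)
          *dst = (*dst & 0xff000000) | BLEND_CH(color, *dst, a, 16) |
                 BLEND_CH(color, *dst, a, 8) | BLEND_CH(color, *dst, a, 0);
     }
}

/* color alpha < 255: same loop, but each mask value is first scaled
 * by the color's alpha (body left out here) */
extern void span_mask_color_general(const uint8_t *mask, uint32_t color,
                                    uint32_t *dst, int len);

typedef void (*mask_span_func)(const uint8_t *, uint32_t, uint32_t *, int);

static mask_span_func
pick_mask_span_func(uint32_t color)
{
   /* decided once per glyph run / span batch, not once per pixel */
   return ((color >> 24) == 0xff) ? span_mask_color_opaque
                                  : span_mask_color_general;
}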

        These kinds of things will apply, in varying degrees, independent
of the actual algos used for implementing the compositing.

        One other thing I'd mention here regarding compositing/transforming/etc
image data is the issue of "quality".. Speed is necessary for real-time
gfx use, but there are also uses that don't need speed and instead need
the highest accuracy possible. Hence, I'd say that an option to set the
level of "quality" (to say, "best") for rendering would be a good idea,
and this in turn requires a similar set of such functions implemented
so as to give the best agreement with the 'ideal' case.
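
        One hypothetical way to expose that (names invented) is a table of
routines per quality level, selected when the rendering context is set up
rather than on every call:

#include <stdint.h>

typedef void (*blend_func)(const uint32_t *src, uint32_t *dst, int len);

typedef enum { RENDER_QUALITY_FAST, RENDER_QUALITY_BEST } render_quality;

typedef struct
{
   blend_func blend;            /* pixel blend */
   blend_func blend_dst_alpha;  /* pixel blend dst alpha */
   /* ... one slot per op in the list above ... */
} render_ops;

extern const render_ops fast_ops;  /* shift/approximation arithmetic, mmx/sse */
extern const render_ops best_ops;  /* exact, highest-precision arithmetic */

static const render_ops *
pick_ops(render_quality q)
{
   return (q == RENDER_QUALITY_BEST) ? &best_ops : &fast_ops;
}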

        Other aspects are relevant as well, such as the question of
supporting other compositing ops...

> some other routines i want to create massively optimal subsystems for:
> 
> * detecting blit regions (this means making a very fast rect list region
> implementation that merges rects quickly on the fly and can do boolean
> logic (set, get, union, intersection, difference/cut) WITH motion vector
> tags.
> * better gradient fills (jose - you have this well in hand)
> 
> this combined with the basic routines as above should be enough to
> implement much more like arbitrary clipping, in-canvas blur filters
> (filter objects to blur anything they "filter" like clip objects clip
> anything they clip). we should put these all into an external test
> harness and make it work then work on merging it in later. :)
> 
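        For what it's worth, a bare-bones sketch of what such a rect-list
region type could hold (names invented; the interesting part - the fast
on-the-fly merging - is not shown, and "union" here just appends):

#include <stdlib.h>

typedef struct
{
   int x, y, w, h;
   int mx, my;        /* motion vector tag for this rect */
} region_rect;

typedef struct
{
   region_rect *rects;
   int          count, alloc;
} region;

/* union: append a rect; a real implementation would merge/split
 * overlapping rects here rather than just growing the list */
static int
region_union_rect(region *r, int x, int y, int w, int h, int mx, int my)
{
   if (r->count >= r->alloc)
     {
        int na = (r->alloc > 0) ? (r->alloc * 2) : 8;
        region_rect *tmp = realloc(r->rects, na * sizeof(region_rect));

        if (!tmp) return 0;
        r->rects = tmp;
        r->alloc = na;
     }
   r->rects[r->count++] = (region_rect){ x, y, w, h, mx, my };
   return 1;
}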

        One thing that can also be done, besides setting up such a CVS
"gfx-routines" project, is to have an 'experimental' branch of evas
which can be used to throw in this or that for testing in 'real use'.


