Gustavo wrote:

>>> Well, what we've discussed at BossaConference, raster, cedric and
>>> others might complement/correct me here: add a mechanism similar to
>>> clip as it would accumulate various filters (so rotation + shear + ...
>>> will look fine). Make a function available for the filter to query for
>>> the output window (given these objects, what's the output bounding
>>> box), then allocate a semi-transparent buffer and blit them all there,
>>> apply the filter to this buffer when going to the end destination
>>> (output buffer). This has a couple of "issues", like you cannot have
>>> filtered and non-filtered objects interleaved, but I think this is
>>> acceptable given the ease it will bring to the implementation, and it
>>> should cover most of the cases. Someone needs to think about how to
>>> apply it to smart objects, whether we can do an automatic
>>> apply-to-all-members or provide a specific smart API for it... for
>>> clippers, usually the smart object creates an internal clipper and
>>> all members that should be clipped are clipped to it (it's like that
>>> in Edje, for example). But if we create a "dummy" filter for the
>>> smart object, then we might have lots of overhead if the
>>> implementation is naive :-/
>>>   Summary:
>>>      - similar to clip;
>>>      - filters provide a way to get the output window (bounding box)
>>> given a set of 'filtered' objects;
>>>      - filters allocate a temporary ARGB buffer, all objects are
>>> rendered there in order, then this buffer is filtered and the output
>>> is placed on the screen (outbuf). Maybe the implementation will be
>>> smart enough that filters can also return whether the image should be
>>> ARGB or RGB (i.e. rotating a JPEG) and whether the output has holes
>>> that should be handled as transparent or not (rotating a JPEG =
>>> transparent area, blurring a JPEG = opaque area). These buffers can
>>> be GL Framebuffer Objects...
>>>      - filters should work based on the output window; this will
>>> avoid "holes" in the output for some filters (i.e. rotation). Maybe
>>> it can be flexible enough to support the other way? Is it worth
>>> having both?
>>>      - not clear on how to go with smart objects api, needs evaluation.
>>>
>>>       
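The buffer-based pipeline summarized above could be sketched roughly as below. This is an illustrative sketch only: every name in it (obj_rect_t, filter_output_window, filter_buffer_new, filter_apply_invert) is hypothetical, not part of any real Evas API, and a trivial invert stands in for a real filter.

```c
/* Hypothetical sketch of the clip-like filter pipeline described above.
 * None of these names exist in Evas; they only illustrate the steps. */
#include <stdint.h>
#include <stdlib.h>

typedef struct { int x, y, w, h; } obj_rect_t;

/* Step 1: the filter reports its output window. Here it is simply the
 * union bounding box of all 'filtered' objects; a real filter such as
 * a blur would grow the box by its radius. */
static obj_rect_t
filter_output_window(const obj_rect_t *objs, int n)
{
   obj_rect_t bb = objs[0];
   for (int i = 1; i < n; i++)
     {
        int x2 = bb.x + bb.w, y2 = bb.y + bb.h;
        int ox2 = objs[i].x + objs[i].w, oy2 = objs[i].y + objs[i].h;
        if (objs[i].x < bb.x) bb.x = objs[i].x;
        if (objs[i].y < bb.y) bb.y = objs[i].y;
        if (ox2 > x2) x2 = ox2;
        if (oy2 > y2) y2 = oy2;
        bb.w = x2 - bb.x;
        bb.h = y2 - bb.y;
     }
   return bb;
}

/* Step 2: allocate a temporary ARGB scratch buffer covering that
 * window, zeroed so unrendered pixels stay fully transparent. */
static uint32_t *
filter_buffer_new(obj_rect_t win)
{
   return calloc((size_t)win.w * win.h, sizeof(uint32_t));
}

/* Step 3 (stand-in): after the objects are rendered into the scratch
 * buffer in order, the filter runs over it before the result is
 * blitted to the output buffer. A trivial pointwise "filter": invert
 * RGB, keep alpha. */
static void
filter_apply_invert(uint32_t *buf, int w, int h)
{
   for (int i = 0; i < w * h; i++)
     buf[i] = (buf[i] & 0xff000000u) | (~buf[i] & 0x00ffffffu);
}
```

The same three steps map directly onto a GL path, where the scratch buffer would be a Framebuffer Object instead of a malloc'd ARGB region.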
>>      This is exactly what I don't like: it's complex, slow, and
>>  I feel not really needed or warranted.
>>      Most of what people really want is fairly simple - transform
>>  an object and possibly mask it with an image or gradient (itself
>>  possibly transformed), possibly with some 'effects' filter applied
>>  (pointwise or spatial), and composite with the dst surface (image
>>  objs can also have a separate fill-transform set on them much like
>>  grad objs now allow for fill rotations).
>>      This can be done easily and efficiently via separate transforms,
>>  masks, and a certain set of 'effects' filters (blur, cmods, etc),
>>  if need be.. and avoid complexities of modifiers of modifiers of
>>  modifiers of .... with no clear mechanism for optimization except
>>  happy buffer land.
>>
>>      But if everyone feels that the generalized 'clip' mechanism
>>  is the way to go.. then fine, please do carry on.
>>     
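The simpler per-object model argued for here (mask, pointwise effect, composite) could be sketched per pixel as below. All function names are hypothetical stand-ins, not Evas calls; premultiplied ARGB is assumed, and channel-modulation tables ('cmods') stand in for the 'effects' filter.

```c
/* Illustrative per-pixel version of the transform/mask/effect/composite
 * model. Hypothetical names; assumes premultiplied ARGB pixels. */
#include <stdint.h>

/* Mask: scale the source pixel by the mask's alpha (the mask itself
 * being an image or gradient, possibly transformed). */
static uint32_t
pixel_mask(uint32_t src, uint8_t mask_a)
{
   uint32_t out = 0;
   for (int shift = 0; shift <= 24; shift += 8)
     out |= (((src >> shift) & 0xff) * mask_a / 255) << shift;
   return out;
}

/* A pointwise 'effects' filter: per-channel modulation via lookup
 * tables (a 'cmod'); alpha is passed through unchanged. */
static uint32_t
pixel_cmod(uint32_t src, const uint8_t rmod[256],
           const uint8_t gmod[256], const uint8_t bmod[256])
{
   return (src & 0xff000000u) |
          ((uint32_t)rmod[(src >> 16) & 0xff] << 16) |
          ((uint32_t)gmod[(src >> 8) & 0xff] << 8) |
          (uint32_t)bmod[src & 0xff];
}

/* Composite with the dst surface: premultiplied 'over', where the
 * same src + dst * (1 - src_alpha) rule applies to every channel. */
static uint32_t
pixel_over(uint32_t src, uint32_t dst)
{
   uint32_t sa = src >> 24;
   uint32_t out = 0;
   for (int shift = 0; shift <= 24; shift += 8)
     {
        uint32_t sc = (src >> shift) & 0xff;
        uint32_t dc = (dst >> shift) & 0xff;
        uint32_t oc = sc + dc * (255 - sa) / 255;
        if (oc > 255) oc = 255;
        out |= oc << shift;
     }
   return out;
}
```

The point of this shape is that each stage is a fixed, independently optimizable operation per scanline, rather than an open-ended chain of modifiers over temporary buffers.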
>
> If _I_ would do it, I'd do it for usage with OpenGL or other
> hw-accelerated systems, so this would map easily to them and would be
> fast.
>   

      That remains to be seen - i.e. just how fast a good antialias
pipeline would be. Not only that though, but also whether such an
approach is warranted as a basis for all engines, maps well to common
use cases, and has no 'surprises'.
      It's tempting to have a single mechanism, a powerful one that
generalizes... but it can also be unsuitable for some things. I'd see
something like that more as built-in to an immediate mode pipeline,
perhaps even via the 'evas imaging' route... I just don't see such
a generic method as satisfactory or necessary for the kinds of uses
that evas objects would be called upon in most real-time rendering.
      As I mentioned, I actually did most of this before and didn't like
it - unwarranted complexity that was difficult to optimize for most
common cases that I could foresee.. But who knows.
> However, as I said, I have no time to work on this ATM, so if you like
> to try an alternative approach, please do it. Keep it as a branch
> somewhere and share your results, someone may test it and see how well
> it works, maybe it would suffice and this would be integrated,
> everyone is happy :-)
>   

      I've already tried both approaches, and others as well. There's
nothing here that's new, though there's certainly more than one way to
do anything. As to some 'branch' somewhere... maybe it's best to wait
and see how your approach works out and, if that would suffice,
integrate it as you say. :)


_______________________________________________
enlightenment-devel mailing list
enlightenment-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
