> >     At the lowest level, this can be made fairly efficient --
> > For each pair of evas objects A and B, write a function for drawing
> > A clipped by B (onto a dst image), as efficient as possible.
> >     Since 'A clipped by B' equals 'B clipped by A', one'd
> > need n(n + 1)/2 functions, where n = num-of-evas-objects.
> 
> aaah but what about:
> 
> A clip B clip C clip D
> 
> A is image
> B is rect
> C is rect
> D is polygon
> 
> we can actually do this clip with NO intermediate buffers, but what you
> have requires an intermediate buffer at each stage. we need to
> intelligently punch through clips until you NEED a buffer. the problem
> here becomes clip buffer management. when a clip buffer is simple (an
> image, not scaled etc.) there is no point making a buffer - simply use
> the original. but now let's say you want to clip by a SCALED image - and
> the same scaled image is used to clip multiple objects - you don't want
> to go rescale it every time, so you need to intelligently manage this as
> a clip buffer.
> 
> i think you can do it by generating a clipped "scanline" then
> recursively applying the clip transforms, but you first need to
> generate a minimal scanline... :) anyway - doing this fast and
> efficiently and intelligently isn't a walk in the park :)
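
        (On the clip-buffer management for scaled clip images -- the
sort of thing I'd picture is roughly the following, though I mostly set
the caching question aside further down.  Just a sketch: none of these
names or types are actual evas internals.)

/* Sketch only -- hypothetical names, not real evas internals.
 * Cache a pre-scaled alpha mask keyed on (source image, scaled size),
 * so an image that clips many objects gets rescaled only once. */
#include <stdlib.h>

typedef struct _Clip_Mask Clip_Mask;
struct _Clip_Mask
{
   const void    *src;    /* the original (unscaled) clip image */
   int            w, h;   /* the scaled size this mask was built at */
   unsigned char *alpha;  /* w * h pre-scaled alpha values */
   Clip_Mask     *next;
};

static Clip_Mask *mask_cache = NULL;

/* look up a cached mask, or build (and cache) it if missing */
Clip_Mask *
clip_mask_get(const void *src, int w, int h,
              void (*scale_alpha)(const void *src, unsigned char *dst,
                                  int w, int h))
{
   Clip_Mask *m;

   for (m = mask_cache; m; m = m->next)
     if ((m->src == src) && (m->w == w) && (m->h == h)) return m;

   m = malloc(sizeof(Clip_Mask));
   if (!m) return NULL;
   m->src = src;
   m->w = w;
   m->h = h;
   m->alpha = malloc((size_t)w * h);
   if (!m->alpha) { free(m); return NULL; }
   scale_alpha(src, m->alpha, w, h);   /* rescale once, reuse many times */
   m->next = mask_cache;
   mask_cache = m;
   return m;
}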
> 
> >     Now, if you want multiple clipping: A clipped by B,
> > clipped by C, ... -- then it may depend on just how this 'exists'
> > in the lib.. ie. how many objects can be 'set' as a clipping object
> > at a time, etc.. Ultimately, it seems you're going to need image
> > buffers of some type if you want complex lists/iterations of this
> > sort of thing..
> 
> it's unlimited. you can clip by as many objects as you want :) and
> yes - i will ultimately need a buffer of some sort... it's managing
> these pre-calculated clip mask buffers that is where the pain lies

        I see.  Well, if you want this kind of generality then it can
be done with one function -- with the object that's to be clipped by
the others treated as just another entry, first or last, on the clip
list..  But it won't be as fast for most cases... so it may be best
to keep the faster, special-case functions, as you have now.
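
        (Something like this is what I mean by 'one function' --
purely illustrative; these aren't existing evas types or calls:)

/* Illustrative sketch -- hypothetical types, not the existing evas API.
 * One generic entry point: the object being drawn is simply pushed onto
 * the front of the clip list, so a single routine (the scanline core
 * declared below) covers every combination of clippers. */
typedef enum { CLIP_RECT, CLIP_IMAGE, CLIP_POLY } Clip_Type;

typedef struct _Clip_Src Clip_Src;
struct _Clip_Src
{
   Clip_Type  type;
   void      *obj;    /* the rect / image / polygon itself */
   Clip_Src  *next;
};

void draw_clipped_list(void *dst, Clip_Src *srcs);  /* the scanline core */

void
draw_object_clipped(void *dst, Clip_Src *obj, Clip_Src *clips)
{
   obj->next = clips;            /* the drawn object is just another  */
   draw_clipped_list(dst, obj);  /* entry on the list of clip sources */
}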

        Setting aside the question of clipping by transformed
images and how to best cache such ...  the basic way I see
of doing this is as a generalization of the polygon drawing
algorithm.
        First, intersect all the clip rects to get a minimal such.
Then, intersect the bounding rects of the clip images with the
above.  Next, get a bounding rect for the clip polygons (I'd suggest
that the polygon structure keep such bounds data; it can be
easily updated whenever a vertex is added/deleted) and intersect
with the previous, etc ...  So we now have a minimal bounding rect
to work from.
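
        (In code, the bounding-rect step is just repeated rect
intersection -- roughly like this; the Rect struct is made up, not an
evas one:)

/* Sketch -- hypothetical Rect struct, not evas internals.  Fold every
 * clip object's bounds into one minimal rect; an empty result means
 * there is nothing to draw at all. */
typedef struct { int x, y, w, h; } Rect;

/* intersect 'r' with 'c' in place; returns 0 if the result is empty */
static int
rect_intersect(Rect *r, const Rect *c)
{
   int x1 = (r->x > c->x) ? r->x : c->x;
   int y1 = (r->y > c->y) ? r->y : c->y;
   int x2 = ((r->x + r->w) < (c->x + c->w)) ? (r->x + r->w) : (c->x + c->w);
   int y2 = ((r->y + r->h) < (c->y + c->h)) ? (r->y + r->h) : (c->y + c->h);

   if ((x2 <= x1) || (y2 <= y1)) return 0;
   r->x = x1;  r->y = y1;  r->w = x2 - x1;  r->h = y2 - y1;
   return 1;
}

/* start from the dst bounds, then fold in each clip object's bounds */
int
minimal_bound(Rect *out, const Rect *dst_bounds, const Rect *bounds, int n)
{
   int i;

   *out = *dst_bounds;
   for (i = 0; i < n; i++)
     if (!rect_intersect(out, &bounds[i])) return 0;  /* fully clipped away */
   return 1;
}
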
        Now, we need to intersect the polys -- do this scanline
by scanline, as you suggest, getting the active edges for each clip
polygon and the spans given by intersecting the left-right edges
with the bounding rect (for aa-polys this is a little more involved
but not a problem); when an edge intersects a span and it's aa,
it has to be carried along as part of the state.  Do this over
the list of clip polygons, intersecting spans as we go along, and
thus obtain a minimal set of spans (plus possibly aa edges)
on that scanline.
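
        (The per-scanline span intersection could look something like
this -- the Span layout is an assumption, not an evas structure:)

/* Sketch -- intersect two sorted, non-overlapping span lists for one
 * scanline; run this once per clip object to whittle the spans down
 * to the minimal set. */
typedef struct { int x1, x2; } Span;   /* covers [x1, x2) on the scanline */

/* writes the intersection of a[] and b[] into out[] (room for na + nb
 * spans is always enough) and returns how many spans were produced */
int
span_intersect(const Span *a, int na, const Span *b, int nb, Span *out)
{
   int i = 0, j = 0, n = 0;

   while ((i < na) && (j < nb))
     {
        int x1 = (a[i].x1 > b[j].x1) ? a[i].x1 : b[j].x1;
        int x2 = (a[i].x2 < b[j].x2) ? a[i].x2 : b[j].x2;

        if (x1 < x2)
          {
             out[n].x1 = x1;
             out[n].x2 = x2;
             n++;
          }
        /* advance whichever input span ends first */
        if (a[i].x2 < b[j].x2) i++;
        else j++;
     }
   return n;
}
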
        Finally, start computing the alpha/color multiplications of
the clip masks/images (and of the polys' edges -- if aa) over each
span and 'draw' the result to the dst image..  either point by point
or by allocating rgba-spans and using the appropriate blending
function.
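
        (And the per-span composite, very roughly -- the pixel layout
and 'mask row' arguments here are assumptions, not the real evas
structures:)

/* Sketch -- multiply the clip masks' alpha over one span and blend the
 * result into the dst scanline.  Assumes ARGB32 dst, a non-premultiplied
 * ARGB src colour, and one 8-bit alpha row per mask, each already
 * offset to the current scanline. */
void
span_blend(unsigned int *dst_row, int x1, int x2, unsigned int src_color,
           const unsigned char **mask_rows, int nmasks)
{
   int x, m;

   for (x = x1; x < x2; x++)
     {
        int a  = (src_color >> 24) & 0xff;
        int sr = (src_color >> 16) & 0xff;
        int sg = (src_color >> 8) & 0xff;
        int sb = src_color & 0xff;
        int dr, dg, db;
        unsigned int d;

        /* multiply in every clip mask's alpha at this pixel */
        for (m = 0; m < nmasks; m++)
          a = (a * mask_rows[m][x]) / 255;
        if (a == 0) continue;

        /* simple src-over blend of src_color at the combined alpha */
        d = dst_row[x];
        dr = (d >> 16) & 0xff;
        dg = (d >> 8) & 0xff;
        db = d & 0xff;
        dr += ((sr - dr) * a) / 255;
        dg += ((sg - dg) * a) / 255;
        db += ((sb - db) * a) / 255;
        dst_row[x] = 0xff000000u | (dr << 16) | (dg << 8) | db;
     }
}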

        I've neglected gradients, lines, text, and ellipses in
the discussion...  But for things like gradients we can get the pixels
by either actually mapping them or 'virtually' doing so..
Text objects are glyph alpha masks..  Ellipse objects would
be handled much as polygons, and similarly for line objects.

        But yeah, I think it's quite doable :)

> 
> i'd love to see these features eventually:
> 
        Just a few things, eh?  :) :)

> * "thick" options for line objects
> * polyline objects
> * filled spline objects
> * circle/oval/ark objects
> * outline options for rect, poly, spline and circle objects
> * video objects (can load .mpg, .avi files, spool files or streams 
> themselves)
> * gradient object enhancements to be able to do more advanced 
> gradients (not
> just angle, but endpoints, radial gradients etc.)
        What are "endpoints"?

> * anti-alias options for all vector objects
> * right-to-left text handling for text objects
> * paragraph objects (can handle full paragraph formatting for all
>   languages (left-to-right, right-to-left, top-to-bottom languages like
>   hebrew, arabic, chinese, japanese etc.) so you can do multiple lines,
>   wrapping etc. all properly for the language)
> * fontset handling
> * greymap pre-rasterised font support (to avoid massive .ttf files for
>   large charset languages)
        I'm all fonted-out for several years...  :(
> 
> * filter objects
> *  blur filter
> *  sharpen filter
> *  bump map filter
> *  color map filter (map arbitrary rgb values to a gradient of other values)
> 
        What do you mean by this last one?  How would you use it?

> * clipping of any object to any object
> * recursive evas objects (canvas within a canvas object - so you can
>   nest canvases)
> * multiple pixel formats supported by image objects (YUV420, YUV444,
>   RGB332, RGB565, RGB444, RGB8 indexed etc.)
> * xrender engine
> * abstracted pixel-sources for image objects (eg - you can set an image
>   object to follow a particular X pixmap, or some other abstract source
>   as its data source. this would allow us, if we had an xrender engine,
>   to use an xrender primitive directly and build a compositing manager
>   using evas directly - with the caveat that it's the xrender engine -
>   which doesn't exist as of right now).
> * much better keyboard handling api (input methods)
> * modular loader system (shared with imlib2 - merge the 2 loader apis)
        'Shared' with imlib2?  As in a separate lib that both evas
and imlib2 can use to do image loading?  Or just a similar one
to imlib2's, so evas can use the existing imlib2 loaders?..

> * optimisations (yes there are ways of speeding evas up)

> consider the above a "todo" list for evas for now. the above could
> keep me (personally) busy as a fulltime job for several years. so
> there is a LOT to do.
> 
> :)
> 
        I doubt it :)  It's just that you work on so many different
projects at the same time, and review others', etc..

        Well, we'll see what we can do with some of the above...  :)


