Gustavo wrote:
>> PS.
>> This is fairly easy to do right now in evas, except with the
>> gl engine, since one'd need to draw to gl textures and there's no
>> code in evas right now to do that. So, your work on the gl filters
>> stuff could actually be very useful for that (among other things).
>>
>
> Yes, that's something good to have and it's basically a code refactor
> of the evas_render.c. As usual, I'm not sure of the impact this would
No need for much of that, though what would be best is to redo most of
the image internals, and other obj rendering calls.. needs to be done
anyway.

> cause on NATIVE surfaces and their mixing... it would be interesting
> to know the results of rendering with XRender to a GL surface, I
> wonder if it's even possible. Maybe we should have a way to say "use
> native" or "use software" and the user would be responsible for doing
> that.
>

This wouldn't really have any impact on the use of 'native surfaces' -
well, it depends on how wide the interpretation of such is. All the
engines - with the singular exception of the gl one - use native
surfaces to do the rendering to, in one way or another. The update
buffer images that objs are rendered to then get put on the dst
display target. For the gl engine, image obj data is still held in gl
textures, so one'd need to be able to render to those.

Mixing gl and xrender? Well, it depends how.. If you mean using gl to
do filters or such, then you can get a texture from the pixmap
associated with the xrender pictures that are used internally by the
xrender engine to hold image data, and use gl to draw to that texture.
Or do everything in software and put the result on the picture's
pixmap, as is now done for some things. Xrender supports projective
transforms, and allows for certain filters - convolution matrices -
which can even be used to, badly, mimic blurs for example. I can't
begin to imagine how the idea of "rendering with xrender to a gl
surface" could possibly come up.. except in a really wild
interpretation of 'native surfaces'.

>> PPS.
>> You mentioned that "Filters would be rotation, shear and blur
>> since they're easier to work and can do lots of the simulation."
>> Ummm... rotation and shear are both just specific examples of affine
>> transformations.. but anyway, what do you mean by 'can do lots of
>> the simulation'? Simulation of what?
>>
>
> simulation of cooperative and non-cooperative filters.
> As you said,
> shear and rotation could use the same affine transformation, thus
> being cooperative (avoiding an intermediate buffer), while shear and
> blur wouldn't. We could do tests like:
> - shear + rotation;
> - shear + blur;
> - shear + blur + rotation;
>
> that's the lots of simulation I'm talking about... sometimes I
> exaggerate :-)
>

Ahhh.. Ok, I see what you mean.

_______________________________________________
enlightenment-devel mailing list
enlightenment-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/enlightenment-devel