Around 12 o'clock on Sep 12, Owen Taylor wrote:
> If you want to have any support for hardware acceleration, filter
> specification should support "intent based" instead of precisely
> specified. (None, Fast-but-filtered, best-tradeoff, best-looks).

Hmm.  Yes, I think we can do this.  My current plan is to advertise
filters as plain strings; the query would return the list of supported
filter names.  Those names could include aliases as well, and each
alias would indicate which real filter it mapped to.  Define a set of
required real filters and a set of required alias names, and
applications should be able to avoid round trips most of the time.
Oh, and the mapping would be per-screen.

> The invariant that needs to be preserved is that the alpha values for a
> transformation of a solid image should be identical to the clip region
> transformed as a polygon then rendered via the polygon rendering
> rules.

Please remember that the transformed pixels form a virtual source
image and don't represent the final rendered data.  Rendering is
always constrained by the mask operand, whether implicit (for
polygons) or explicit (for Composite and the Glyph operations).  If
you want AA edges, draw trapezoids.

The polygon rules shouldn't apply here -- 'nearest neighbor' will have
alpha values of either 0 or 1 depending on whether the nearest pixel
falls within the image or without.  Similarly, bilinear interpolated
data should average the alpha values of the surrounding pixels.  By
defining the pixels outside of the image as transparent, you average
the interior alpha values with transparency, yielding a nice
alpha-blended edge.  Any other definition makes a lot less sense, and
is really expensive to boot.

> But to my knowledge, the only valid use for graphics expose events is
> scrolling a single drawable.  So, I think you could very easily
> disable graphics expose events for non-identity transformations.
> Why make things hard for yourself?
It's also valid when copying data from a window to an off-screen
pixmap; it's nice to know which regions of the pixmap contain valid
data and which contain garbage.  One case that springs to mind is a
mini-application view where the contents are a scaled-down version of
the input.

I suspect I'll need to define this as a region covering the projection
of the undefined areas in the source.  That will require inverting the
transformation matrix, but is otherwise relatively straightforward to
implement.  What I won't do is fill that area with the background;
I'll leave it up to the application to draw whatever it likes there.

> Edge conditions are definitely the hard part of the specification; consider
> a straight scale of a solid rectangle from 100x100 pixels by a ratio
> of 75/100.  The desired result is obvious ... a 75x75 pixel solid square
> with hard edges.

Both nearest neighbor and bilinear interpolation will yield this
result.

> Then get the final values by something like:
>
>   (SOURCE_transformed IN SOURCE_boundary) IN (MASK_transformed IN MASK_boundary) OP DEST

This seems excessively complicated; I don't see the utility of
anti-aliasing the edges of a nearest-neighbor resampling operation.

> Isn't this trivial?  I think you just return FALSE out of
>
>       SetupForCPUToScreenAlphaTexture

The driver can, but I'm going to fix XAADoComposite to do the check so
that existing drivers will just work.  This looks easy.

Keith Packard        XFree86 Core Team        HP Cambridge Research Lab

_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render
