After thinking about how projective image transformations should fit into 
the Render extension, I've come up with a simple definition.

A Transform is associated with each Picture and is used to map coordinates 
when the Picture is used as a source or mask in any operation.  A
Transform is specified by nine fixed-point numbers (the Render Fixed 
type) representing a 3x3 homogeneous matrix.
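As a sketch of what those nine values mean, here's how a homogeneous
matrix would be applied to a point, assuming Render's Fixed type is
16.16 fixed point; the helper names are illustrative, not part of the
protocol:

```c
#include <stdint.h>

/* Render's Fixed is a 16.16 fixed-point number; a Transform is nine
 * of them, read as a 3x3 homogeneous matrix. */
typedef int32_t Fixed;

#define ONE_FIXED (1 << 16)

static Fixed fixed_mul(Fixed a, Fixed b) {
    return (Fixed)(((int64_t)a * b) >> 16);
}

static Fixed fixed_div(Fixed a, Fixed b) {
    return (Fixed)(((int64_t)a << 16) / b);
}

/* Apply m to the column vector (x, y, 1), then divide through by the
 * resulting w to project back into 2D. */
static void transform_point(Fixed m[3][3], Fixed x, Fixed y,
                            Fixed *out_x, Fixed *out_y)
{
    Fixed tx = fixed_mul(m[0][0], x) + fixed_mul(m[0][1], y) + m[0][2];
    Fixed ty = fixed_mul(m[1][0], x) + fixed_mul(m[1][1], y) + m[1][2];
    Fixed w  = fixed_mul(m[2][0], x) + fixed_mul(m[2][1], y) + m[2][2];
    *out_x = fixed_div(tx, w);
    *out_y = fixed_div(ty, w);
}
```

An affine transform simply has the bottom row (0, 0, 1), making the
final divide a no-op.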

The compositing operation then constructs an intermediate source and mask 
operand formed by transforming the original source and mask operands by 
their transformation matrices.  It is these intermediate operands which 
form the basis for the final compositing operation.

Because the transformation is a property of the source and mask pictures, 
image transformation is now orthogonal to the operation being performed; 
text, polygons and rectangles all take advantage of the transformation.

The transformation matrix specifies the mapping from this intermediate 
surface back to the original surface.
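In other words, rendering walks destination pixels and pulls samples
from the source, rather than pushing source pixels forward.  A
floating-point sketch of that direction (the function name is
illustrative):

```c
/* The per-picture matrix maps an intermediate (destination-aligned)
 * coordinate back into the original source surface; the renderer
 * evaluates this once per destination pixel to find where to sample. */
static void dest_to_source(double m[3][3], double dx, double dy,
                           double *sx, double *sy)
{
    double x = m[0][0]*dx + m[0][1]*dy + m[0][2];
    double y = m[1][0]*dx + m[1][1]*dy + m[1][2];
    double w = m[2][0]*dx + m[2][1]*dy + m[2][2];
    *sx = x / w;
    *sy = y / w;
}
```

Storing the back-mapping (rather than its inverse) is convenient
precisely because this is the direction the compositing loop needs.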

Some questions:

 +      How should I specify filters?  I'd like to avoid round trips,
        so using atoms seems like a bad idea.  Is there any good reason
        to not just use a simple enumeration of the obvious common
        filter types?  A query operation would permit apps to find out
        which filters were supported, a default filter would allow 
        applications which didn't care to avoid even that round trip.
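A hypothetical sketch of that round-trip-free scheme; the filter names
and the query function are illustrative, not proposed protocol:

```c
/* A small, fixed enumeration of the obvious common filters.  The
 * default (nearest) needs no round trip at all; a single query request
 * returns a bitmask of what the server actually supports. */
typedef enum {
    FilterNearest  = 0,   /* default */
    FilterBilinear = 1,
    FilterGaussian = 2
} PictureFilter;

/* Server-side sketch: this example server supports nearest and
 * bilinear but not gaussian. */
static unsigned query_filters(void)
{
    return (1u << FilterNearest) | (1u << FilterBilinear);
}
```

A client that cares can issue the query once at startup; one that
doesn't just renders with the default.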

 +      What about expose events?  Because the source image now forms
        an arbitrary quadrilateral in the destination, missing pieces
        from the source don't form nice clean rectangles in the dest.

        I could compute the actual expose region and send the whole mess
        off to the application.  That could be a few rectangles, but
        presumably most apps would never do something that stupid.

        I could implicitly disable graphics expose events for
        non-identity transformations.

        Separately, I'm thinking here of changing the existing Render
        semantics to specify that pixels beyond the border of the source
        or mask are transparent.  This gives a nice clean semantic for
        the edges of these transformed sources.  The current semantics
        call for clipping to the source; clipping to the source is 
        equivalent to pretending that the source is transparent for
        the Over operator.
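        A sketch of that edge semantic, with an illustrative fetch
        routine (ARGB32 pixels assumed):

```c
#include <stdint.h>

/* Proposed semantic: coordinates beyond the border of the source (or
 * mask) read as fully transparent, rather than being clipped away.
 * For the Over operator a transparent source pixel leaves the
 * destination untouched, so this matches the old clipping behaviour
 * there, while giving other operators a well-defined edge. */
static uint32_t fetch_pixel(const uint32_t *src, int w, int h,
                            int x, int y)
{
    if (x < 0 || x >= w || y < 0 || y >= h)
        return 0x00000000;   /* transparent black */
    return src[y * w + x];
}
```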

 +      Existing accelerated drivers will all need to check for
        the presence of a transformation in the source or mask
        pictures and fall back to software rendering until
        acceleration is added.  Can I do this easily in XAA?

Keith Packard        XFree86 Core Team        HP Cambridge Research Lab


_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render
