Around 20:00 on Sep 12, Owen Taylor wrote:

> Originally we have alpha values:
> 
>   1
> 
> When scaled to 3x3 with bilinear interpolation, using
> the "all pixels outside the image are transparent rule",
> we get:
> 
>  0.44 0.66 0.44 
> 
>  0.66   1  0.66 
> 
>  0.44 0.66 0.44 

From a sampling-theory standpoint, this makes a lot of sense; you've
reduced the sample rate by a factor of three, so the rise time of your 
waveform is naturally increased by a factor of three.
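
Owen's numbers fall out of pixel-center bilinear sampling with the
transparent-outside rule; here's a minimal Python sketch (helper names are
mine, not Render code, and it assumes output pixel centers map back to
source space as (i + 0.5)/scale - 0.5). The 0.66 in the quote is 2/3
truncated; the corners are (2/3)^2 = 4/9:

```python
import math

def src_sample(alpha, x, y, w, h):
    # "All pixels outside the image are transparent."
    if 0 <= x < w and 0 <= y < h:
        return alpha[y][x]
    return 0.0

def bilinear_scale(alpha, w, h, scale):
    out = []
    for j in range(h * scale):
        row = []
        for i in range(w * scale):
            # Map the output pixel center back into source space.
            sx = (i + 0.5) / scale - 0.5
            sy = (j + 0.5) / scale - 0.5
            x0, y0 = math.floor(sx), math.floor(sy)
            fx, fy = sx - x0, sy - y0
            row.append(src_sample(alpha, x0,     y0,     w, h) * (1 - fx) * (1 - fy)
                     + src_sample(alpha, x0 + 1, y0,     w, h) * fx       * (1 - fy)
                     + src_sample(alpha, x0,     y0 + 1, w, h) * (1 - fx) * fy
                     + src_sample(alpha, x0 + 1, y0 + 1, w, h) * fx       * fy)
        out.append(row)
    return out

result = bilinear_scale([[1.0]], 1, 1, 3)
# result ~ [[0.44, 0.67, 0.44], [0.67, 1.00, 0.67], [0.44, 0.67, 0.44]]
```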

However, many people do "expect" that enlarging an image by an integer
ratio will yield an image with sharp edges.  That expectation is wrong
in a theoretical sense, but right for the user.

Essentially, the question boils down to how the filter used to generate 
the output samples handles missing input samples.  In this case, the 
missing input samples are beyond the edge of the available data when the
filter lies across the boundary.

Existing image manipulation systems have several ways of "filling in" this
missing data:

 +      Use a constant value (transparent is a "good choice")
 +      Use the nearest available image pixel (clamping coordinates)
 +      Treat the source as a tile (wrap coordinates)
 +      Reflect the image across the edge (mirror coordinates)
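
The four policies above can be sketched as sample-fetch functions over a
1-D source; the function names are illustrative, not from the Render API:

```python
def fetch_constant(line, i, const=0.0):
    # Constant value (e.g. transparent) outside the source.
    return line[i] if 0 <= i < len(line) else const

def fetch_clamp(line, i):
    # Nearest available image pixel (clamp coordinates).
    return line[max(0, min(i, len(line) - 1))]

def fetch_wrap(line, i):
    # Treat the source as a tile (wrap coordinates).
    return line[i % len(line)]

def fetch_mirror(line, i):
    # Reflect the image across the edge (mirror coordinates).
    n = len(line)
    i %= 2 * n
    return line[i] if i < n else line[2 * n - 1 - i]

line = [10, 20, 30]
constant = [fetch_constant(line, i) for i in range(-2, 5)]
clamp    = [fetch_clamp(line, i)    for i in range(-2, 5)]
wrap     = [fetch_wrap(line, i)     for i in range(-2, 5)]
mirror   = [fetch_mirror(line, i)   for i in range(-2, 5)]
```

Sampling two pixels past each edge makes the differences concrete:
constant pads with 0, clamp repeats the edge pixels, wrap cycles through
the tile, and mirror reflects back into the image.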

We've already got tiling via the repeat setting; my original suggestion 
was to just use 'transparent' in the non-repeat mode.

Owen makes a strong case for adding a third option -- clamping the 
coordinates.  This gives sharp results in the above case, matching
the user's expectation.  I'm not very interested in the mirroring case; it's
a weird operation when extended far beyond the edge of the image and 
doesn't significantly affect the results when clipped to the shape of the 
image.

So, we've got three modes now:

 +      Constant value (I'll make it settable, transparent default)
 +      Tiling (use 'repeat')
 +      Extend (use nearest available source pixel)

This changes the semantics of all existing operations -- currently 
operations are clipped to the bounds of the source and mask; I'm 
suggesting that we ignore source and mask boundaries and use just these
rules to synthesize pixel values beyond their edges.  For Over, the effect 
is equivalent to the current semantics.  For other operations, the effects 
will vary.
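
For Over specifically, the equivalence is easy to see: a transparent
sample composited Over the destination leaves it untouched, so
synthesizing transparent pixels beyond the source edge gives the same
result as clipping the operation to the source bounds.  A sketch with
premultiplied-alpha pixels (my own helper, not the server code):

```python
def over(src_px, dst_px):
    # Porter-Duff Over on premultiplied (r, g, b, a) tuples:
    # result = src + (1 - src.alpha) * dst
    sr, sg, sb, sa = src_px
    dr, dg, db, da = dst_px
    return (sr + (1 - sa) * dr,
            sg + (1 - sa) * dg,
            sb + (1 - sa) * db,
            sa + (1 - sa) * da)

transparent = (0.0, 0.0, 0.0, 0.0)
dst = (0.2, 0.4, 0.6, 1.0)
# A transparent source sample is a no-op under Over.
assert over(transparent, dst) == dst
```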

I think the result will be a more consistent rendering model.

Keith Packard        XFree86 Core Team        HP Cambridge Research Lab



_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render
