Hi Laurent,

On 7/2/15 2:19 PM, Laurent Bourgès wrote:
I meant dda with fixed point maths (integer) to get rid of
floating-point maths in endRendering () => no cast, ceil, floor anymore.
But of course addLine () must still round float coords to integer and
deal with dx/dy and error...
Do you have experienced such approach in the past ?

I've done fixed point iteration in ShapeSpanIterator.c with a bump for X and then another error term to determine when we need to bump one more beyond that, similar to DDA. The "floor" functionality was achieved by separating the sub-pixel error from the pixel-based coordinate:

x0 += seg->bumpx;
err += seg->bumperr;
x0 -= (err >> 31);
err &= 0x7fffffff;

x0 is always the pixel number of the crossing we want and there are no conditional statements required for the "extra +1" when the sub-pixel error accumulates an additional step.
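A minimal Java sketch of that branch-free step may help make it concrete (the bumpx/bumperr split and the 2^31 error scale are my assumptions about the C code's conventions, not taken verbatim from ShapeSpanIterator.c):

```java
import java.util.Arrays;

public class FixedDda {
    /**
     * Branch-free fixed-point DDA step, sketched in Java.
     * Assumptions: the slope dx/dy is split into an integer part (bumpx)
     * and a fractional part scaled to 2^31 (bumperr); err stays in [0, 2^31).
     */
    static int[] crossings(int x0, int bumpx, int bumperr, int steps) {
        int err = 0;
        int[] xs = new int[steps];
        for (int i = 0; i < steps; i++) {
            x0  += bumpx;
            err += bumperr;
            x0  -= (err >> 31);   // adds 1 when the sign bit is set, i.e. err reached 2^31
            err &= 0x7fffffff;    // drop the carry, keep the sub-pixel error
            xs[i] = x0;
        }
        return xs;
    }

    public static void main(String[] args) {
        // Edge starting at x = 10 with slope dx/dy = 2.75:
        // bumpx = 2, bumperr = 0.75 * 2^31
        int[] xs = crossings(10, 2, (int) (0.75 * (1L << 31)), 4);
        System.out.println(Arrays.toString(xs));
        // xs tracks floor(10 + 2.75*y): [12, 15, 18, 21]
    }
}
```

The arithmetic right shift of the sign bit yields -1 or 0, so the "extra +1" happens without any conditional, exactly as described above.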

2/ Determine edgeBucket used range: correct too.

- Floor: it is used by NormalizingPathIterator to compute the x/y adjust
values = 0.5 - fractional_part(coord).
Here the possible values may span the full float domain !

Probably I should use a correct Floor impl as before or propose a new
method to compute the fractional part directly.
Fract = coord - floor (coord) = coord % 1f


What do you mean by a correct Floor impl?

My previous impl supported the complete float input range, i.e. not
limited to the integer range.

Yes, when processing the raw path data before you've narrowed it down to just the segments in range of our output, you'll need to use math functions with better range.
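For illustration, a full-range floor could look like the following (a hypothetical helper, not the actual Marlin code); it also shows why `coord % 1f` is not a drop-in replacement for `coord - floor(coord)` on negative inputs:

```java
public class FloorFull {
    /**
     * Floor over the full float range mapped to int (hypothetical helper).
     * A plain (int) cast truncates toward zero, which is wrong for negative
     * non-integers; Math.floor fixes that but round-trips through double.
     * Note: the cast saturates at Integer.MIN/MAX_VALUE for huge inputs and
     * yields 0 for NaN, so NaN still needs explicit rejection elsewhere.
     */
    static int floorInt(final float x) {
        final int i = (int) x;        // truncates toward zero
        return (x < i) ? i - 1 : i;   // step down for negative non-integers
    }

    /**
     * fract via the floor above. Beware that (x % 1f) differs for x < 0:
     * -0.25f % 1f == -0.25f, while -0.25f - floor(-0.25f) == 0.75f.
     */
    static float fract(final float x) {
        return x - floorInt(x);
    }

    public static void main(String[] args) {
        System.out.println(floorInt(1.5f));   // 1
        System.out.println(floorInt(-1.5f));  // -2
        System.out.println(fract(-0.25f));    // 0.75
    }
}
```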

I'd have to see how the fract method would be used there to see if it makes 
sense.  One thing I noticed when I read through that code is that we duplicate 
some calculations.  In particular, we have:

coord = coords[N];
adjust = <round calc> - coord;
... then after the if statement:
coords[N] += adjust;

This is equivalent to:

coord = coords[N];
adjust = <round> - coord;
val = coords[N];  // This is the same as coord, right?
coords[N] = val + adjust;  // This is the same as <round>, right?

We should just do it all together as in:

coord = coords[N];
val = <round>;
coords[N] = val;
adjust = val - coord;

It saves a memory load and having to undo a subtract with an add.
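For concreteness, the fused form might look something like this in Java (the specific rounding policy here, snapping to floor + 0.5, is only an illustrative assumption, not necessarily the one NormalizingPathIterator uses):

```java
public class CombinedRound {
    /**
     * Fused normalize step: compute the rounded value once, store it once,
     * and derive the adjust from it, rather than computing adjust first
     * and adding it back afterward.
     */
    static float snapAndAdjust(float[] coords, int n) {
        final float coord = coords[n];
        final float val = (float) Math.floor(coord) + 0.5f;  // <round>
        coords[n] = val;       // one store, no add-back
        return val - coord;    // adjust, carried to subsequent points
    }

    public static void main(String[] args) {
        float[] coords = { 0.25f };
        float adjust = snapAndAdjust(coords, 0);
        System.out.println(coords[0] + " " + adjust);  // 0.5 0.25
    }
}
```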

The fract method itself may also avoid the subtraction ...

Sounds good.

Any idea?
Again how to deal with NaN / Inf values in this case ?


This is interesting because if we ever encounter a NaN then the adjust 
parameters will tend to carry it forward in all of the calculations and so the 
rest of the path could become all NaN values.  Hopefully the code I pointed out 
above can provide an idea of how to deal with this. NormalizingPathIterator 
could serve both as a normalizer and a NaN rejecter.  We'd need some form of 
NaN rejection for the case where STROKE_NORMALIZE is turned off, though.

Seems definitely a good idea, but it also looks like a sort of
clipping algo ... (again).

Clipping? It does eliminate segments, but it doesn't require things like interpolating where the clip boundary is intersected like other clipping facilities. We just stop passing through segments until we see non-imaginary geometry again.
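A rough sketch of that skip-until-finite idea (the method names and the string representation of path ops are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class NanReject {
    /**
     * Hypothetical filter sketching the idea above: drop segments whose
     * coordinates are NaN/Inf, and restart with a moveTo when finite
     * geometry reappears, so nothing interpolates across the gap.
     */
    static List<String> filter(float[][] pts) {
        List<String> ops = new ArrayList<>();
        boolean penDown = false;
        for (float[] p : pts) {
            if (Float.isNaN(p[0]) || Float.isInfinite(p[0])
             || Float.isNaN(p[1]) || Float.isInfinite(p[1])) {
                penDown = false;   // entering "imaginary" geometry: stop emitting
                continue;
            }
            ops.add((penDown ? "lineTo " : "moveTo ") + p[0] + "," + p[1]);
            penDown = true;
        }
        return ops;
    }

    public static void main(String[] args) {
        float[][] pts = { {0, 0}, {10, 0}, {Float.NaN, 5}, {10, 10}, {0, 10} };
        System.out.println(filter(pts));
        // [moveTo 0.0,0.0, lineTo 10.0,0.0, moveTo 10.0,10.0, lineTo 0.0,10.0]
    }
}
```

No intersection with a boundary is ever computed, which is what distinguishes this from a true clipper.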

PS: Could you give me an example when the rendering clip boundaries may
be negative ?

I can't think of one since we tend to start with [0, 0, drawableW, drawableH] and only intersect from there (if we're talking about the composite clip, not the raw user clip). So, by the time the data reaches this mechanism, the clip should be all non-negative...

                        ...jim
