Hello Jim.

> I'll take a look at this if I can, but last minute JavaOne issues (like 
> a rewrite of my slides - gulp) are starting to take over my life for the 
> next couple of weeks.

That's OK. That webrev wasn't as polished as I thought. I've done a lot of
testing since then and fixed many bugs. I ran Java2D without any problems.
The frame rate for the animations isn't noticeably different, but that's
because they don't draw many curves. I put timers in pisces, and the time
spent in it (including line/move/quad/curve calls to the output
PathConsumer2D) is at least 3-4 times smaller than in pisces without this
changeset. When many curves are widened, the improvement goes up to a
factor of 17-20. Anti-aliasing was also a bit faster (and this one did
come with a frame rate improvement).

I also ran a 2D graphics test suite: http://icedtea.classpath.org/hg/gfx-test/
It generates thousands of images using combinations of different strokes,
colours, and shapes. It is fairly exhaustive, in that it uses all caps, all
joins, many different line widths, different dashing patterns, different
colours, different shapes, and so on. It does this for a Java
implementation to be tested and for a reference implementation, and
compares the generated images against each other. I've been using the
closed-source Java as the reference. Compared to icedtea6 version 1.8,
openjdk7 with my patch does much better: the number of generated images
that are identical to closed Java has more than doubled, no test suite
performs worse, and every image I've seen is closer to the reference
images. I have not put any of the test results online yet because that
would be 400 megabytes and I'm not sure I'm allowed to. I'll try tomorrow.

I've also rendered at least 50000 random curves using this changeset, just
to make sure there are no catastrophic failures (crashes, half the screen
going black, and the like). Everything looked good.

I should give a high-level overview of how things have changed:
1) Anti-aliasing uses adaptive forward differencing. I now rasterize each
curve as soon as I receive it and store the crossings, instead of storing
the curves and computing the crossings as needed. This is faster, but not
as memory friendly, so I'm a bit uneasy about this decision. What do you
think? (There is a sketch of plain forward differencing after this list.)
2) In Dasher.java I implemented the buffer to store initial segments.
3) For dashing, I compute curve lengths using the algorithm you told me about.
4) Transforms are handled differently. We used to transform everything
before feeding it to pisces, and since pisces has to compute offsets, it
needed to know about these transforms, which made things very confusing.
I have introduced a class that Stroker and Dasher use for IO and that
knows about the transform. So, when a shape is being rendered, its path
iterator transforms the points; these transformed points are then
normalized (if normalization is on); then they are passed through this IO
handler, which untransforms the points and feeds them to Dasher or
Stroker, whose output goes through the IO handler again to be transformed
back (the round trip is sketched after this list). Unfortunately, this
will do many more transformations than before. The reason I did this is
to avoid going back and forth between user space and device space
coordinates in the same file.
5) In Stroker, we used to keep variables that stored the previous point
(px0, py0) and the second point (sx1 and sy1, I think). I eliminated
these. They were misleading and unnecessary, and they just don't make
sense once we have curves: the only way they were ever used was to get
tangent vectors at the start and current position in the path. I replaced
them with variables that hold these tangents directly. This is much
clearer and saves us many subtractions. Because of this, some of the
methods' parameters have changed. The computeMiter parameters are a good
example of ones that should have changed but didn't, because I didn't
have time to refactor it. This should be done, because if we use vectors
it will be clearer and will help with 9).
6) 0-length curves and lines were being ignored. This isn't what
proprietary Java does: it draws caps/joins as if a tiny horizontal line
going right had just been drawn. I "fixed" this to match the behaviour of
proprietary Java. Because of the change above, this turned out to be a
three-liner.
7) I put the code that draws joins into its own method (drawJoin). This
made the close and finish methods much clearer and allowed me to fix this
createStrokedShape issue: http://bugs.openjdk.java.net/show_bug.cgi?id=100046
8) To compute cubic offset curves, I first subdivide the original curve
at points where it starts bending too much (i.e. more than 90 degrees
since the last subdivision), at inflection points, and at points where
one of the offset curves has cusps (see the comments in the file for more
on this). Finding the offset cusps was the main reason for 4): if we
worked with transformed coordinates there could be shears, and the line
width would not be constant. We need the line width because offset cusps
occur where the radius of curvature is equal to the width (this condition
is sketched after the list). Then, for each of these curves, I find a
left offset and a right offset using the constraints that the curve and
the offsets must have parallel tangents at t=0 and t=1, and that the
computed offsets at t=0.5 must be equal to the ideal true offsets at
t=0.5.
9) To compute quadratic offsets, I subdivide as above (except at
inflection points, of which a quadratic has none). Then, for each
subdivision, I compute the start and end points in the obvious manner,
and I compute the control point as the intersection of the lines that go
through the end points and are parallel to the tangents at the end points
(also sketched after the list).
10) I have included my old patch for drawing round joins using bezier curves.
11) The side and isCCW methods have changed and are now almost exactly
the same as each other. I still keep them both so that round joins can be
drawn in cases such as: p.moveTo(x,y);p.lineTo(x+c,y);p.lineTo(x,y).
Proprietary Java does not do this, but I think it should, since joins are
meant to join different segments in a path. In the example above there
are 2 segments, and it is possible to draw a round join, so why not? Does
the specification say anything about this case? I changed the side()
algorithm because I think what I use now also works (at least it
certainly works for all the cases in which we use it), it is faster, and
it is more accurate (the kind of test I mean is sketched after the list).
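
To make a few of these concrete, here are some quick sketches. None of
them are the code from the webrev (names and details are made up); they
only show the shape of each idea.

For 1), plain, non-adaptive forward differencing of one coordinate
polynomial x(t) = a*t^3 + b*t^2 + c*t + d; the adaptive part (doubling or
halving the step when the differences get too large or too small) is
omitted to keep it short:

    // advance x(t) from t=0 to t=1 in n uniform steps using only additions
    static double forwardDiff(double a, double b, double c, double d, int n) {
        double h = 1.0 / n;
        double x  = d;                        // x(0)
        double d1 = a*h*h*h + b*h*h + c*h;    // first difference, x(h) - x(0)
        double d2 = 6*a*h*h*h + 2*b*h*h;      // second difference at t=0
        double d3 = 6*a*h*h*h;                // third difference (constant)
        for (int i = 0; i < n; i++) {
            x  += d1;   // x is now x((i+1)*h); a real rasterizer would
            d1 += d2;   // record a scanline crossing here
            d2 += d3;
        }
        return x;       // x(1); equals a + b + c + d up to rounding
    }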
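
For 4), the round trip the IO handler does. In this sketch the inverse is
computed per call; the real thing would of course compute it once:

    import java.awt.geom.AffineTransform;
    import java.awt.geom.NoninvertibleTransformException;

    static float[] roundTrip(AffineTransform at, float devX, float devY)
            throws NoninvertibleTransformException {
        float[] pt = { devX, devY };     // from the transformed path iterator
        at.createInverse().transform(pt, 0, pt, 0, 1); // device -> user space
        // ... Dasher/Stroker work here, where the pen width is constant
        // even when 'at' shears ...
        at.transform(pt, 0, pt, 0, 1);                 // user -> device space
        return pt;
    }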
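
For 8), the cusp condition: an offset at distance w has a cusp where the
radius of curvature of the source cubic equals w. Squaring both sides
avoids the square root and leaves a polynomial whose sign changes at a
cusp parameter (the root finding itself is not shown):

    // x(t) = ax*t^3 + bx*t^2 + cx*t + dx, and likewise for y.
    // Returns (x'^2 + y'^2)^3 - w^2 * (x'*y'' - y'*x'')^2, which is zero
    // exactly where the radius of curvature equals w.
    static double cuspFn(double t, double ax, double bx, double cx,
                         double ay, double by, double cy, double w) {
        double xp  = 3*ax*t*t + 2*bx*t + cx;   // x'(t)
        double yp  = 3*ay*t*t + 2*by*t + cy;   // y'(t)
        double xpp = 6*ax*t + 2*bx;            // x''(t)
        double ypp = 6*ay*t + 2*by;            // y''(t)
        double v = xp*xp + yp*yp;              // squared speed
        double k = xp*ypp - yp*xpp;            // curvature numerator
        return v*v*v - w*w*k*k;
    }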
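
For 9), the control point construction, given the offset end points p0
and p2 and tangent directions t0 and t2 (parallel to the source quad's
tangents at t=0 and t=1):

    // Solves p0 + s*t0 == p2 + u*t2 for s and returns p0 + s*t0.
    // If det is ~0 the tangents are (nearly) parallel and the caller
    // would have to subdivide more instead.
    static float[] quadCtrl(float p0x, float p0y, float t0x, float t0y,
                            float p2x, float p2y, float t2x, float t2y) {
        float det = t2x*t0y - t0x*t2y;
        float s = (t2x*(p2y - p0y) - t2y*(p2x - p0x)) / det;
        return new float[] { p0x + s*t0x, p0y + s*t0y };
    }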
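
For 11), the flavour of test I mean is a cross product sign test (again,
just a sketch to illustrate, not the literal code in Stroker):

    // On which side of the directed line (x0,y0)->(x1,y1) is (px,py)?
    // Returns 1 for left, -1 for right, 0 for collinear.
    static int side(float px, float py,
                    float x0, float y0, float x1, float y1) {
        float cross = (x1 - x0)*(py - y0) - (y1 - y0)*(px - x0);
        return cross > 0f ? 1 : (cross < 0f ? -1 : 0);
    }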

I think the above are all the important changes.
Some things I wanted to discuss:
1. Should I change antialiasing and make it store the curves instead of
the crossings?
2. I'm not completely comfortable with the way special cases are handled
in the somethingTo, computeOffsetQuad, and computeOffsetCubic methods in
Stroker.java ("special cases" being very tiny curves).
3. From tests, it looks to me like the offsets for quads might not be
good enough. It's barely visible, but I don't know if I'm comfortable
with it. Fixing this would be pretty simple: either add some
subdivisions, or approximate quad curves with cubics. The latter might be
especially nice, since we could then use the same code for both curve
types (the conversion is sketched below).
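
To be clear about why the cubic option in 3. is attractive: the
conversion is exact, since every quadratic is also a cubic. A sketch of
the degree elevation:

    // Returns {x0,y0, c1x,c1y, c2x,c2y, x1,y1}: the cubic with the same
    // shape as the quad (x0,y0)-(cx,cy)-(x1,y1). Each cubic control point
    // sits 2/3 of the way from an end point to the quad's control point.
    static float[] quadToCubic(float x0, float y0,
                               float cx, float cy,
                               float x1, float y1) {
        return new float[] {
            x0, y0,
            x0 + 2f/3f*(cx - x0), y0 + 2f/3f*(cy - y0),
            x1 + 2f/3f*(cx - x1), y1 + 2f/3f*(cy - y1),
            x1, y1
        };
    }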

>> 3. Should line 862 in Stroker be enabled (I give some reasons for wanting to
>> in a comment below it).
> That'll take me a day or two just to understand and figure out... 
> <crosseyed smiley>
I'm sorry, I should have explained it. That line was a check for
lineWidth2 > 1.5 && subdivisionsSoFar < 5. It was meant to avoid trying
to find points where the offset curves have cusps if we already had
"enough" subdivisions. Now I always try to find these points, because I
don't think the procedure is expensive enough to justify skipping it and
jeopardizing rendering quality.

I would very much appreciate any comments or suggestions.

Thank you,
Denis.

----- "Jim Graham" <james.gra...@oracle.com> wrote:

> On 9/14/2010 10:48 AM, Denis Lila wrote:
> > Hello Jim.
> >
> >> sx1,sy1 have lost their way along the way somewhere.
> >
> > I think that was my fault:
> 
> Not entirely - see below.
> 
> >> I don't think you actually need sx1, sy1 - instead you need to buffer
> >> and save aside the first set of dashes and I don't see that buffer
> >> anywhere...
> >
> > sx1, sy1 were the buffer themselves, because we used to do flattening
> > so there were no curves, and no need for an array. I did not see what
> > they were used for, so I did not implement a buffer to replace them.
> > I have now. I also fixed a couple of bugs (one related to drawing
> > square caps and the other related to drawing joins), and I have tried
> > to fix the whitespace (please tell me if I missed anything).
> > The link is the same:
> > http://icedtea.classpath.org/~dlila/webrevs/noflatten/webrev/
> 
> > There are a few smallish issues I wanted to discuss:
> > 1. To subdivide at arbitrary t, I use equations like (1-t)*p1 + t*p2.
> > Should I use the more intuitive p1+t*(p2-p1)? It's clearer and should
> > be faster since it has 1 fewer multiplication.
> 
> I like that other equation.  Both are fine and equally understandable,
> so use whatever makes the code flow or perform better for your cases.
> 
> > 2. When computing offset curves, I check for degenerate curves (i.e.
> > equal control and end points). I don't use == for comparisons. I use
> > a function that returns true if 2 floats are close to each other, for
> > fear of catastrophic cancellation. Is this really a good idea? If so,
> > how close should the numbers be before I return true?
> 
> That's a great question.  In device space I think a very small
> difference (like 1/256 of a pixel) could be enough to ignore the curve,
> but in user space it's harder to make a judgment call without knowing
> how much they'll be scaled up.  In cases like that I've sometimes used
> the Math.ulp() method to figure out how many "bits of error" we have.
> That method gives you the smallest increment for a given float or
> double, so what it is telling you is the magnitude of an "off by 1 bit"
> error.  If you allow differences less than 3*ulp(one of the values)
> then that is allowing 1 or 2 bits of precision error.  I'd use more
> than 2*ulp because the numbers you are comparing might be close to a
> cutoff where the binary exponent is increased.  You can also choose the
> largest (magnitude) of the values you are comparing and the ulp() of
> that value gives you a safer estimate of being "off by a bit or two",
> but that requires you to do a max() (and possibly an abs()) test just
> to find out how big of a difference to allow, so I usually use a
> multiplier of at least 3 or so.
> 
>                       ...jim
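
P.S. Based on your ulp() suggestion, the comparison helper I have in mind
would look something like this (the name and the multiplier of 3 are just
my starting point):

    // True if a and b differ by at most 'ulps' units in the last place,
    // measured at the larger magnitude so the allowance is not
    // underestimated near a binary-exponent boundary.
    static boolean within(float a, float b, int ulps) {
        float u = Math.ulp(Math.max(Math.abs(a), Math.abs(b)));
        return Math.abs(a - b) <= ulps * u;
    }

So, for example, within(px, cx, 3) would decide whether a control point
coincides with an end point.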
