Hi Denis,

Sorry for being so distracted (on the other hand, it was for JavaOne ;-), but it looks like you made some incredible progress in the meantime!

On 9/22/2010 2:22 PM, Denis Lila wrote:
I also ran a 2D graphics test suite: http://icedtea.classpath.org/hg/gfx-test/
Compared to icedtea6 version 1.8, openjdk7 with my patch does much better. The
number of generated images that are identical to closed java has more than
doubled.
Woohoo!

I should give a high level overview of how things have changed:
1) Antialiasing uses adaptive forward differencing. Now I rasterize each curve
as soon as I receive it and store the crossings, instead of storing curves and
computing the crossings as needed. This is faster, but not as memory friendly,
so I'm a bit uneasy about this decision. What do you think?

How much faster? I'm worried about this, especially given our tiled approach to requesting the data. What was the bottleneck before? (It's been a while since I visited the code - we weren't computing the crossings for every curve in the path for every tile being generated, were we?)
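
For concreteness, here is a rough sketch of the forward differencing idea on a
quadratic, with a step that halves when a segment gets too long (hypothetical
names and thresholds; the real code tracks per-scanline crossings rather than
building a point list, and a fuller version would also double the step back
when segments get very short):

    import java.util.ArrayList;
    import java.util.List;

    final class AfdSketch {
        // Flatten a quadratic Bezier (p0, control c, p1) into a polyline by
        // forward differencing, halving the parameter step when a segment
        // would be longer than maxSegLen.
        static List<float[]> flattenQuad(float x0, float y0,
                                         float cx, float cy,
                                         float x1, float y1,
                                         float maxSegLen) {
            // Power-basis coefficients: p(t) = a*t^2 + b*t + p0
            float ax = x0 - 2 * cx + x1, bx = 2 * (cx - x0);
            float ay = y0 - 2 * cy + y1, by = 2 * (cy - y0);

            float h = 1f;                       // current parameter step
            float dx = ax * h * h + bx * h;     // first forward difference
            float dy = ay * h * h + by * h;
            float ddx = 2 * ax * h * h;         // second difference (constant for fixed h)
            float ddy = 2 * ay * h * h;

            List<float[]> pts = new ArrayList<>();
            float x = x0, y = y0, t = 0f;
            pts.add(new float[] { x, y });
            while (t < 1f) {
                // Halve the step while the next segment would be too long.
                while (Math.hypot(dx, dy) > maxSegLen && h > 1f / 1024f) {
                    dx = dx / 2 - ddx / 8;  ddx /= 4;
                    dy = dy / 2 - ddy / 8;  ddy /= 4;
                    h /= 2;
                }
                x += dx;  dx += ddx;
                y += dy;  dy += ddy;
                t += h;
                pts.add(new float[] { x, y });
            }
            return pts;
        }
    }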

2) In Dasher.java I implemented the buffer to store initial segments.
3) For dashing, I compute curve lengths using the algorithm you told me about.

Cool. Keep in mind that I never did an exhaustive search for "best way to measure curve length" so we may very well find a better algorithm if we need to.
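
For reference, one common way to estimate arc length by adaptive subdivision
(not necessarily the algorithm in use here): the chord is a lower bound and
the control polygon an upper bound, so subdivide until the two agree and sum
the averages. A rough sketch with hypothetical names:

    final class CurveLengthSketch {
        // p = {x0,y0, x1,y1, x2,y2, x3,y3}
        static double cubicLength(double[] p, double eps) {
            double chord = dist(p[0], p[1], p[6], p[7]);
            double poly  = dist(p[0], p[1], p[2], p[3])
                         + dist(p[2], p[3], p[4], p[5])
                         + dist(p[4], p[5], p[6], p[7]);
            if (poly - chord <= eps) {
                return (poly + chord) / 2;
            }
            double[] left = new double[8], right = new double[8];
            subdivideAtHalf(p, left, right);   // de Casteljau at t = 0.5
            return cubicLength(left, eps / 2) + cubicLength(right, eps / 2);
        }

        private static void subdivideAtHalf(double[] p, double[] l, double[] r) {
            double x01 = (p[0]+p[2])/2, y01 = (p[1]+p[3])/2;
            double x12 = (p[2]+p[4])/2, y12 = (p[3]+p[5])/2;
            double x23 = (p[4]+p[6])/2, y23 = (p[5]+p[7])/2;
            double x012 = (x01+x12)/2,  y012 = (y01+y12)/2;
            double x123 = (x12+x23)/2,  y123 = (y12+y23)/2;
            double xm = (x012+x123)/2,  ym = (y012+y123)/2;
            l[0]=p[0]; l[1]=p[1]; l[2]=x01;  l[3]=y01;  l[4]=x012; l[5]=y012; l[6]=xm;   l[7]=ym;
            r[0]=xm;   r[1]=ym;   r[2]=x123; r[3]=y123; r[4]=x23;  r[5]=y23;  r[6]=p[6]; r[7]=p[7];
        }

        private static double dist(double x0, double y0, double x1, double y1) {
            return Math.hypot(x1 - x0, y1 - y0);
        }
    }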

4) Transforms are handled differently. We used to transform everything before
feeding it to pisces. Since pisces has to compute offsets, it needed to know
about these transforms. This made things very confusing. I have introduced a
class which Stroker and Dasher use for IO that knows about transformations. So,
when a shape is being rendered, its path iterator transforms the points. These
transformed points are then normalized (if normalization is on). Then they are
passed through this IO handler, which untransforms the points and feeds them to
Dasher or Stroker, which pass their output through the IO handler again, which
transforms them. Unfortunately, this will do many more transformations than
before. The reason I did this is to avoid going back and forth between user
space and device space coordinates in the same file.

I can see your points here. I think there are solutions to avoid much of the untransforming we can consider, but your solution works well so let's get it in and then we can look at optimizations if we feel they are causing a measurable problem later.
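
A rough sketch of the transform-aware IO idea, with a hypothetical one-method
interface standing in for the real path consumer (not the actual pisces
classes): incoming device-space points are mapped back to user space before
Stroker/Dasher see them, and their output is mapped forward to device space
again.

    import java.awt.geom.AffineTransform;
    import java.awt.geom.NoninvertibleTransformException;

    interface PointSink {
        void lineTo(float x, float y);
    }

    // Wraps the final consumer: user space in, device space out.
    final class TransformingSink implements PointSink {
        private final PointSink out;
        private final AffineTransform at;   // user space -> device space
        TransformingSink(PointSink out, AffineTransform at) {
            this.out = out; this.at = at;
        }
        @Override public void lineTo(float x, float y) {
            float[] pt = { x, y };
            at.transform(pt, 0, pt, 0, 1);
            out.lineTo(pt[0], pt[1]);
        }
    }

    // Wraps Stroker/Dasher: device space in, user space out to the wrappee.
    final class UntransformingSink implements PointSink {
        private final PointSink out;        // e.g. the Stroker or Dasher
        private final AffineTransform inv;  // device space -> user space
        UntransformingSink(PointSink out, AffineTransform at)
                throws NoninvertibleTransformException {
            this.out = out; this.inv = at.createInverse();
        }
        @Override public void lineTo(float x, float y) {
            float[] pt = { x, y };
            inv.transform(pt, 0, pt, 0, 1);
            out.lineTo(pt[0], pt[1]);
        }
    }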

5) In Stroker, we used to keep variables that stored the previous point (px0, py0)
and the second point (sx1 and sy1, I think). I eliminated these. They were
misleading and unnecessary, and they just don't make sense if we have curves.
They were misleading because the only way they were ever used was to get tangent
vectors at the start and current position in the path. I replaced them with
variables that hold these tangents. This is much clearer and saves us many
subtractions. Because of this, some of the methods' parameters have changed. The
computeMiter parameters are a good example of ones that should have changed but
didn't, because I didn't have time to refactor it. This should be done, because
if we use vectors it will be clearer and will help with 9).

Cool!
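
A tiny illustration of why tangents generalize better than stored points
(hypothetical helper, not the actual Stroker code): for a line the outgoing
tangent is simply p1 - p0, while for a cubic it is the first nonzero
control-point delta.

    final class TangentSketch {
        // Returns {dx, dy}, the (unnormalized) tangent at the start of a cubic.
        static float[] startTangentCubic(float x0, float y0, float x1, float y1,
                                         float x2, float y2, float x3, float y3) {
            float dx = x1 - x0, dy = y1 - y0;
            if (dx == 0f && dy == 0f) { dx = x2 - x0; dy = y2 - y0; }
            if (dx == 0f && dy == 0f) { dx = x3 - x0; dy = y3 - y0; }
            return new float[] { dx, dy };
        }
    }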

6) Zero-length curves and lines were being ignored. This isn't what proprietary
java does (which is drawing caps/joins as if a tiny horizontal line going right
had just been drawn). I "fixed" this to match the behaviour of proprietary java.
Because of the change above, this turned out to be a 3 liner.

I'm glad these restructuring changes are starting to pay off. Hopefully a lot more tweaks will get easier in the future.
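
The zero-length fix could look something like this sketch (hypothetical
helper): when a segment is degenerate, substitute a tangent pointing right so
the cap/join code still has a direction to work with.

    final class DegenerateSegmentSketch {
        // Tangent to use for caps/joins; (dx, dy) is the segment delta.
        static float[] capJoinTangent(float dx, float dy) {
            if (dx == 0f && dy == 0f) {
                // Pretend a tiny horizontal line going right was just drawn.
                return new float[] { 1f, 0f };
            }
            return new float[] { dx, dy };
        }
    }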

7) I put code that draws joins in its own method (drawJoin). This made the close
and finish methods much clearer and allowed me to fix this createStrokedShape
issue: http://bugs.openjdk.java.net/show_bug.cgi?id=100046

Saner outlines are a good indication of cleaner internal processes.

8) To compute cubic offset curves, I first subdivide the original curve at points
where it starts bending too much (i.e. more than 90 degrees since the last
subdivision), at inflection points, and at points where one of the offset curves
has cusps (see comments in the file for more on this). Finding the offset cusps
was the main reason for 4), since if we worked with transformed coordinates there
could be shears and the line width would not be constant (we need the line width
because offset cusps occur when the radius of curvature is equal to the width).
Then, for each of these curves, I find a left offset and a right offset using the
constraints that the curve and the offsets must have parallel tangents at t=0
and t=1, and that the computed offsets at t=0.5 must be equal to the ideal true
offsets at t=0.5.

Note that these kinds of heuristics are the kinds of things that people could probably create dissertations on, so my hat is off to you. Hopefully I'll be able to wrap my head around it without too much trouble.
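
The cusp condition reduces to a curvature test; a sketch with hypothetical
names (the offset curve at distance d has a cusp where the radius of curvature
of the source cubic equals d):

    final class OffsetCuspSketch {
        // Signed curvature of a cubic Bezier at parameter t;
        // p = {x0,y0, x1,y1, x2,y2, x3,y3}.
        static double curvature(double[] p, double t) {
            double u = 1 - t;
            double dx  = 3 * (u*u*(p[2]-p[0]) + 2*u*t*(p[4]-p[2]) + t*t*(p[6]-p[4]));
            double dy  = 3 * (u*u*(p[3]-p[1]) + 2*u*t*(p[5]-p[3]) + t*t*(p[7]-p[5]));
            double ddx = 6 * (u*(p[4]-2*p[2]+p[0]) + t*(p[6]-2*p[4]+p[2]));
            double ddy = 6 * (u*(p[5]-2*p[3]+p[1]) + t*(p[7]-2*p[5]+p[3]));
            double speed2 = dx*dx + dy*dy;
            return (dx*ddy - dy*ddx) / (speed2 * Math.sqrt(speed2));
        }

        // True (up to tolerance) if the offset at distance d has a cusp at t.
        static boolean offsetHasCuspAt(double[] p, double t, double d) {
            double k = Math.abs(curvature(p, t));
            return k > 0 && Math.abs(1 / k - d) < 1e-6;   // radius of curvature == d
        }
    }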

9) To compute quadratic offsets I subdivide as above (except for the inflection
points, of which there are none). Then for each subdivision I compute the start
and end point in the obvious manner, and I compute the control point as the
intersection of the lines that go through the end points and are parallel to
the tangents at the end points.

I'm not sure I understand the reasoning of the control point calculation. I'll have to look at the code to register an opinion.
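
The construction described above is roughly the following (hypothetical
helper, ignoring the degenerate and parallel-tangent cases the real code has
to handle): offset the endpoints along their normals, then place the control
point at the intersection of the two offset tangent lines.

    final class QuadOffsetSketch {
        // q = {x0,y0, cx,cy, x1,y1}; d > 0 offsets to the left of travel.
        static float[] offsetQuad(float[] q, float d) {
            float t0x = q[2] - q[0], t0y = q[3] - q[1];   // tangent at t = 0
            float t1x = q[4] - q[2], t1y = q[5] - q[3];   // tangent at t = 1
            float l0 = (float) Math.hypot(t0x, t0y);
            float l1 = (float) Math.hypot(t1x, t1y);
            float n0x = -t0y / l0, n0y = t0x / l0;        // left-hand unit normals
            float n1x = -t1y / l1, n1y = t1x / l1;
            float p0x = q[0] + d * n0x, p0y = q[1] + d * n0y;   // offset start point
            float p2x = q[4] + d * n1x, p2y = q[5] + d * n1y;   // offset end point
            // Intersect p0 + s*t0 with p2 + u*t1 (assumes the tangents are not parallel).
            float det = t0x * t1y - t0y * t1x;
            float s = ((p2x - p0x) * t1y - (p2y - p0y) * t1x) / det;
            float p1x = p0x + s * t0x, p1y = p0y + s * t0y;
            return new float[] { p0x, p0y, p1x, p1y, p2x, p2y };
        }
    }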

10) I have included my old patch for drawing round joins using bezier curves.

Yay!

11) The side and isCCW methods have changed and are now almost exactly the same
as each other. I still keep them both so that round joins can be drawn in cases
such as: p.moveTo(x,y); p.lineTo(x+c,y); p.lineTo(x,y). Proprietary java does not
do this, but I think it should, since joins are meant to join different segments
in a path. In the example above there are 2 segments, and it is possible to
draw a round join, so why not? Does the specification say anything about this
case? I changed the side() algorithm because I think what I use now also works
(or at least it certainly works for all cases in which we use it), it is faster,
and it is more accurate.

It sounds like you are correct here.  What does the closed source code draw?
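
The shared core of side()/isCCW() is presumably a cross-product orientation
test along these lines (hypothetical helper, not the actual code):

    final class OrientationSketch {
        // True if turning from direction (dx0, dy0) to (dx1, dy1) is counter-clockwise.
        static boolean isCCW(float dx0, float dy0, float dx1, float dy1) {
            return dx0 * dy1 - dy0 * dx1 > 0;
        }
    }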

I think the above are all the important changes.
Some things I wanted to discuss:
1. Should I change antialiasing and make it store the curves instead of the
crossings?

It's a topic for discussion. I'm wary of the memory footprint, but I don't want to kill performance. Is it a simple change to switch? I'm also a fan of "get what you have in now as a stake in the ground and then deal with some of these issues as follow-on optimizations".

2. I'm not completely comfortable with the way special cases are handled in
the somethingTo, computeOffsetQuad, and computeOffsetCubic methods in
Stroker.java ("special cases" being very tiny curves).

I'll take a look and report my comfort level too.

3. From tests, it looks to me like offsets for quads might not be good enough.
It's barely visible, but I don't know if I'm comfortable with it (fixing this
would be pretty simple - either add some subdivisions, or approximate quad
curves with cubics; the latter might be especially nice, since we could use the
same code for both curve types).

I'll need to look at code here too before I can comment.
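
If quads do end up being approximated by cubics, the conversion itself is
exact via degree elevation; a small sketch (hypothetical helper):

    final class QuadToCubicSketch {
        // q = {x0,y0, cx,cy, x1,y1}; returns {x0,y0, c1x,c1y, c2x,c2y, x1,y1}.
        static float[] toCubic(float[] q) {
            float c1x = q[0] + 2f / 3f * (q[2] - q[0]);
            float c1y = q[1] + 2f / 3f * (q[3] - q[1]);
            float c2x = q[4] + 2f / 3f * (q[2] - q[4]);
            float c2y = q[5] + 2f / 3f * (q[3] - q[5]);
            return new float[] { q[0], q[1], c1x, c1y, c2x, c2y, q[4], q[5] };
        }
    }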

4. Should line 862 in Stroker be enabled? (I give some reasons for wanting to
in a comment below it.)

That'll take me a day or two just to understand and figure out...
<crosseyed smiley>

I'm sorry, I should have explained it. That line was a check for
lineWidth2 > 1.5 && subdivisionsSoFar < 5. It was meant to avoid trying to find
points where the offset curves have cusps if we already have "enough"
subdivisions. Now I always try to find these points, because I don't think this
procedure is expensive enough to jeopardize rendering quality.

Are you saying that I can ignore that question now?

I would very much appreciate any comments or suggestions.

I'm afraid to make too many more before you get a chance to check in what you have now and at least get us several concrete steps towards a quality implementation.

I don't think I've given much in the way of counter-opinions above so if you are comfortable with my answers above then let's work on reviewing the code with an eye towards getting it in as an important milestone before we muck with it too much more...

                        ...jim
