Hi Denis,
I think the key point is when you say "it's better to have too many
roots than too few".
Also, I'm not mathematically familiar with what the case of "D close to
zero" means. Generally, in floating point math, when you do a test and
change the approach based on the results of that
Hi Jim.
I've attached another modification of your CubicSolver.java.
I tried many different things, but I think what's in the
attachment is the only satisfactory implementation. The logic
is somewhat similar to what you suggested, in that it computes
3 roots for D < 0. However, once roots are comp
Hi Jim.
> If you want to include these rendering tests as extra verification
> along with your other changes, then that is fine.
Ok, thanks. So, I'll update the performance webrev to include them.
> Also, I think we might have a script that forcibly checks the value
> of the @bug tag and ensure
Hi Jim.
> What about logic like this:
>
> boolean checkRoots = false;
> if (D < 0) {
> // 3 solution form is possible, so use it
> checkRoots = (D > -TINY); // Check them if we were borderline
> // compute 3 roots as before
> } else {
> double u = ...;
> double v = ...;
> res[0] = u+v; // should
Hi Denis,
What about logic like this:
boolean checkRoots = false;
if (D < 0) {
// 3 solution form is possible, so use it
checkRoots = (D > -TINY); // Check them if we were borderline
// compute 3 roots as before
} else {
double u = ...;
double
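The truncated logic quoted above follows the standard Cardano/trigonometric root formulas for a depressed cubic. The sketch below is a hedged reconstruction, not the thread's actual CubicSolver code: the class name, the value of TINY, and the depressed-cubic signature are all assumptions, and the "checkRoots" refinement step discussed in the thread is left as a flag only.

```java
// Hedged sketch of the quoted branch logic for the depressed cubic
// t^3 + p*t + q = 0.  TINY's value is an assumption; the root
// verification pass that checkRoots would trigger is omitted.
final class CubicSketch {
    static final double TINY = 1e-12;   // assumed borderline threshold

    static int solveDepressedCubic(double p, double q, double[] res) {
        double D = (q / 2) * (q / 2) + (p / 3) * (p / 3) * (p / 3);
        boolean checkRoots = false;
        if (D < 0) {
            // 3-solution (trigonometric) form is possible, so use it.
            // D < 0 implies p < 0, so the square roots below are safe.
            checkRoots = (D > -TINY);   // borderline: roots may be spurious
            double m = 2 * Math.sqrt(-p / 3);
            double arg = 3 * q / (2 * p) * Math.sqrt(-3 / p);
            // clamp against rounding pushing |arg| just past 1 near D == 0
            double phi = Math.acos(Math.max(-1, Math.min(1, arg))) / 3;
            for (int k = 0; k < 3; k++) {
                res[k] = m * Math.cos(phi - 2 * Math.PI * k / 3);
            }
            return 3;
        } else {
            double sq = Math.sqrt(D);
            double u = Math.cbrt(-q / 2 + sq);
            double v = Math.cbrt(-q / 2 - sq);
            res[0] = u + v;             // the single real root
            return 1;
        }
    }
}
```

For t^3 - 7t + 6 (roots 2, 1, -3) this takes the D < 0 branch; for t^3 + t - 2 (single real root 1) it takes the Cardano branch.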
Hi Jim.
> The test as it is has a test case (I just chose random numbers to check
> and got lucky - d'oh!) that generates 1 solution from the new code even
> though the equation had 2 distinct solutions that weren't even near each
> other...
I figured out why this happens. It's because of c
Correction...
On 12/28/2010 3:00 PM, Jim Graham wrote:
math. (Though it begs the question - is "-q/sqrt(-p^3)" more accurate
than "-q/(p*sqrt(-p)"? If p is < 1 then the cube is an even smaller
number, does that matter?)
Make that "-q/(-p*sqrt(-p))", or "q/(p*sqrt(-p))"...
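The accuracy question above can be probed numerically: for p < 0 the two forms are algebraically identical (both compute the acos argument in the trigonometric branch). The sample values below are arbitrary; this only shows that for a small |p| the two forms agree to within rounding here, not that they always do.

```java
// Two algebraically-equal forms of the acos argument for p < 0.
final class FormComparison {
    static double formA(double p, double q) {
        return -q / Math.sqrt(-(p * p * p));    // -q / sqrt(-p^3)
    }
    static double formB(double p, double q) {
        return q / (p * Math.sqrt(-p));         // q / (p * sqrt(-p))
    }
}
```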
Hi Denis,
I'm attaching a test program I wrote that compares the old and new
algorithms.
Obviously the old one missed a bunch of solutions because it classified
all solutions as 1 or 3, but the new one also sometimes misses a
solution. You might want to turn this into an automated test for
Aha!
I finally figured out what I was missing.
On 12/24/2010 4:43 PM, Denis Lila wrote:
Line 1133 - I don't understand why that term has -q in it. The above
link and the original code both computed essentially the arccos of
this
Basically, the negative comes in when you push all of the p term
Hi Jim.
> Unfortunately, this means that the names here
> and the values assigned to them and the comment above them conflict.
> If the variables could be named "p/3" and "q/2" then all would be clear,
> but I don't know how to do that naming very easily. Perhaps the
> comment could be simply rewo
The regression tests for this bug do not call the method directly. They
may exercise the function indirectly in some pipelines, but not all
pipelines will use this method (the current version of Pisces in OpenJDK
doesn't even use it until you integrate your other changes as far as I
know).
I
Hi Denis,
Line 1099 - I decided to check out Cardano's method and noticed a
discrepancy. The comment here says we are calculating the p and q for
this equation, but the values assigned to the p and q variables in lines
1102,1103 happen to be p/3 and q/2. That's fine because almost all of
th
Hi Jim.
> Lines 1094-1096, they could also be NaN if any of the numerators were
> also zero and these tests might fail (but only for the case of all of
> them being zero I guess, otherwise one of the other divisions would
> result in infinity). Are accidental infinities (caused by overflow
> rathe
Hi Denis,
Lines 1094-1096, they could also be NaN if any of the numerators were
also zero and these tests might fail (but only for the case of all of
them being zero I guess, otherwise one of the other divisions would
result in infinity). Are accidental infinities (caused by overflow
rather
Hi Jim.
> Also, I wrote new hit testing code in jdk6 that used bezier recursion
> to compute the values and it ran way faster than any root-finding methods
> (mainly because all you have to worry about is subdividing enough that
> the curve can be classified as above, below, or to the left or righ
Hi Denis,
That sounds like some very good ideas for making this method very accurate.
On the other hand, we're starting to get into the territory where an
advanced math package should be catering to these requirements. The
solveCubic was an internal helper function for implementing the hit
t
Hi Jim.
> How big are these errors expressed as multiples of the ULP of the
> coefficients? Obviously 1e-17 is a lot smaller than 1e-4, but was 1e-17
> representing "just a couple of bits of error" or was it still way off
> with respect to the numbers being used? And were these fairly obscure
Hi Denis,
On 12/14/2010 5:11 PM, Denis Lila wrote:
I have one question though: how fast does this have to be? I can come
up with fairly reasonable examples for which both CubicCurve2D.solveCubic
and the implementation I found give very inaccurate results
The existing method was not very accura
Hi Jim.
> You might want to submit it as a separate push and get credit for
> fixing 4645692 (solveCubic doesn't return all answers),
Sure, that sounds good. Reading through the code I found, I spotted
a few things that might have been problematic in some extremely
rare cases. I've been working o
Hi Denis,
Those sound like just the kind of problems I believed existed in the
CC2D algorithm.
You might want to submit it as a separate push and get credit for fixing
4645692 (solveCubic doesn't return all answers), and maybe even the
following failures in the containment methods (which cou
> Very nice! How does it compare to CubicCurve.solveCubic() (which I know
> has issues with the 2 root case, but I got it from a "reliable source" -
> some textbook on Numerical Recipes)?
I wrote a test that generated 2559960 polynomials, and in 2493075 of
those, the computed roots were i
Very nice! How does it compare to CubicCurve.solveCubic() (which I know
has issues with the 2 root case, but I got it from a "reliable source" -
some textbook on Numerical Recipes)?
Also, one area that I had issues with the version I used in CC2D was
that it chose a hard cutoff to classify th
On 12/13/2010 10:54 AM, Denis Lila wrote:
Hi Jim.
With respect to finding a cubic root, currently you are doing that in
2 dimensions, but what if we converted to 1 dimension?
Consider that the control polygon is "fairly linear". What if we
rotated our perspective so that it was horizontal and
Hi again.
I found an implementation of a closed form cubic root solver
(from graphics gems):
http://read.pudn.com/downloads21/sourcecode/graph/71499/gems/Roots3And4.c__.htm
I did some micro benchmarks, and it's about 25% slower than the one I have.
I'm thinking we should use it anyway because it'
Hi Jim.
> With respect to finding a cubic root, currently you are doing that in
> 2 dimensions, but what if we converted to 1 dimension?
> Consider that the control polygon is "fairly linear". What if we
> rotated our perspective so that it was horizontal and then squashed it
> flat? Consider i
Hi Jim.
> Woohoo!
:)
> >> How often do we end up needing getTCloseTo in practice?
> >
> > It depends on the ratios of the lengths of the sides of the control
> > polygon. The closer they are to 1, the less we need it. I'm not sure
> > how to answer more precisely - for that I would need a re
Hi Denis,
The example I gave was intended to be very crude - I was simply
describing the technique, but as I said it would require better math to
really know what the right formula would be.
With respect to finding a cubic root, currently you are doing that in 2
dimensions, but what if we co
On 12/10/2010 8:27 AM, Denis Lila wrote:
Hi Jim.
Yes. The improvement shown by the benchmarks is substantial.
Then this is great news!
Indeed :-)
Woohoo!
How often do we end up needing getTCloseTo in practice?
It depends on the ratios of the lengths of the sides of the control
polygon
> Of course, all this is useless
> if I've done something to make things look horrible, so I'm going to
> run the gfx tests again.
I just ran them. All is good. The only change compared to the old test
result is that the number of dashed round rectangles that are identical
to what is produced by t
Hi Jim.
> By "without this optimization" do you mean back when you did a full
> scan for the proper T?
Yes. The improvement shown by the benchmarks is substantial.
> Then this is great news!
Indeed :-)
> How often do we end up needing getTCloseTo in practice?
It depends on the ratios of the
Hi Jim.
> Actually, even if the lengths aren't close the lengths may give you
> enough information about the acceleration along the curve that you can
> do a decent approximation of the accelerated T value. The T could be
> biased by some formula that is weighted by the ratios of the control
>
By "without this optimization" do you mean back when you did a full scan
for the proper T? Then this is great news!
How often do we end up needing getTCloseTo in practice?
...jim
On 12/8/2010 1:54 PM, Denis Lila wrote:
Hi Jim.
How about "if the 3 segments of the con
Hi Denis,
On 12/8/2010 12:04 PM, Denis Lila wrote:
I'm not sure how the closed interval is awkward. Isn't it just proper
choice of ">= and <= vs. > and <" in the testing method?
In the filtering function, yes, but I was referring to cubicRootsInAB in
Helpers:122-133 where we iterate through int
Hi Jim.
> > How about "if the 3 segments of the control polygon are all close to
> > each other in length and angle", then the optimization applies. Is
> > that easy to test?
>
> Hmm, that would actually be extremely easy to test and it would cost
> almost nothing. We already compute the control
> I'm not sure how the closed interval is awkward. Isn't it just proper
> choice of ">= and <= vs. > and <" in the testing method?
In the filtering function, yes, but I was referring to cubicRootsInAB in
Helpers:122-133 where we iterate through intervals. For each interval,
we have the values of
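The interval iteration being described for cubicRootsInAB can be sketched as follows. This is a hedged reconstruction of the general technique (split the range at the derivative's roots so each piece is monotonic, then bisect where the endpoint signs differ), not the actual Helpers code; all names are assumptions.

```java
// Hedged sketch: cubic roots inside [a, b] via monotonic pieces.
final class CubicRootsSketch {
    // c[0] + c[1]*t + c[2]*t^2 + c[3]*t^3 (Horner form)
    static double eval(double[] c, double t) {
        return ((c[3] * t + c[2]) * t + c[1]) * t + c[0];
    }

    static int rootsInAB(double[] c, double a, double b, double[] res) {
        // split [a, b] at the roots of the derivative 3*c3*t^2 + 2*c2*t + c1
        double[] split = new double[4];
        int n = 0;
        split[n++] = a;
        double qa = 3 * c[3], qb = 2 * c[2], qc = c[1];
        double disc = qb * qb - 4 * qa * qc;
        if (qa != 0 && disc > 0) {
            double s = Math.sqrt(disc);
            double t1 = (-qb - s) / (2 * qa), t2 = (-qb + s) / (2 * qa);
            if (t1 > t2) { double t = t1; t1 = t2; t2 = t; }
            if (t1 > a && t1 < b) split[n++] = t1;
            if (t2 > a && t2 < b) split[n++] = t2;
        }
        split[n++] = b;

        int num = 0;
        for (int i = 0; i + 1 < n; i++) {
            double lo = split[i], hi = split[i + 1];
            double flo = eval(c, lo), fhi = eval(c, hi);
            if (flo == 0) { res[num++] = lo; continue; }
            if (flo * fhi > 0) continue;       // no sign change: no root here
            for (int it = 0; it < 60; it++) {  // plain bisection
                double mid = (lo + hi) / 2, fmid = eval(c, mid);
                if (flo * fmid <= 0) { hi = mid; } else { lo = mid; flo = fmid; }
            }
            res[num++] = (lo + hi) / 2;
        }
        return num;
    }
}
```

Note that this sketch sidesteps the closed-versus-half-open question by treating an exact zero at a split point as a root of the piece that starts there.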
On 12/8/2010 9:37 AM, Denis Lila wrote:
Shouldn't it be [A, B]?
I thought about this when implementing it, but I don't think it mattered
whether it was closed or half open, and the closed interval would have been
somewhat more awkward to implement.
I'm not sure how the closed interval is awkw
Hi Jim.
> The main problem is that "must" doesn't exist for IEEE floating point
> numbers. You can find the root for one of the endpoints and it may
> return "t = -.1" even though the value exactly matched the
> endpoint, but after all the math was said and done the answer
> it came up had t
Hi Denis,
On 12/7/2010 10:47 AM, Denis Lila wrote:
Hi Jim.
I'm sure you will likely find a root, but the method you are using is
"roots*inAB*" which may throw the root out because it is out of range,
no?
I'm sure some roots will be thrown out, but I think in every call of
getTCloseTo there w
Hi Jim.
> I'm sure you will likely find a root, but the method you are using is
> "roots*inAB*" which may throw the root out because it is out of range,
> no?
I'm sure some roots will be thrown out, but I think in every call of
getTCloseTo there will be at least one root that isn't thrown out. Th
Hi Denis,
On 12/6/2010 4:21 PM, Denis Lila wrote:
Hi Jim.
line 134 - what if numx or numy == 0 because only roots outside [0,1]
were found?
In this case lines 151-162 will execute, and nothing is wrong. The only
problem is when both numx and numy are 0. This is certainly possible in
the gene
Hi Jim.
> line 134 - what if numx or numy == 0 because only roots outside [0,1]
> were found?
In this case lines 151-162 will execute, and nothing is wrong. The only
problem is when both numx and numy are 0. This is certainly possible in
the general case (but only with quadratic curves), but the
Hi Denis,
Yes, I remember this now that you reminded me. I'm sorry for having let
it slide the first time... :-(
Curve.java:
line 134 - what if numx or numy == 0 because only roots outside [0,1]
were found?
line 145 - what if d0 and d1 are both 0? NaN results. What if you just
used a s
Hi Jim.
About two weeks or so ago I replied to one of your very old e-mails
about dashing performance and I included a link to the webrev:
http://icedtea.classpath.org/~dlila/webrevs/perfWebrev/webrev/
I suspect you might have missed it, so that's why I'm writing this.
If you haven't, I apologize
A few corrections/completions:
> I tried to do this. I used the netbeans compiler
netbeans *profiler*.
> I tried to implement something like this. What I did was: I reduce the
> length of the buckets array to have only one bucket per pixel row. I removed
> the array that kept track of how many li
Hi Jim.
I have a new webrev:
http://icedtea.classpath.org/~dlila/webrevs/perfWebrev/webrev/
> How about looking more at the stroking end of the process and I'll dig
> a little more into optimal rasterization code. I have a lot of
> experience with optimizing rasterizer code (and JNI if it comes
FYI - we have a bug to integrate some optimizations from JDK6 for wide
lines and transformed rectangles. In 6 I did some work a year ago or so
to detect simple wide lines and transformed rectangles and to issue a
"fillParallelogram" internal method. OGL and D3D can then implement
these direct
Hi Jim.
> - get rid of edgeMxy in all methods but addLine()
> - addLine computes min/max of first/lastScanline
> - addLine also computes min/max of x1,x2 values
>
> this turned out to be just about the same speed for my FX rendering
> version (which I believe is more sensitive than the way it is
Hi Jim.
> Did you have to modify the AFD code for this (in terms of changing
> their limit constants to get good results)?
No, I didn't. By handling non monotonic curves, the AFD algorithm
is going through more iterations, but the only way in which this
could be a problem is through accumulation
I ended up going with:
- get rid of edgeMxy in all methods but addLine()
- addLine computes min/max of first/lastScanline
- addLine also computes min/max of x1,x2 values
this turned out to be just about the same speed for my FX rendering
version (which I believe is more sensitive than the way i
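The three bullet points above can be sketched as a single accumulation site. This is a hedged illustration of the idea (addLine as the one place where scanline and x bounds are tracked); the class, field, and parameter names are assumptions, not pisces' actual code.

```java
// Hedged sketch: fold all bounds bookkeeping into addLine().
final class BoundsAccumulator {
    int firstScanline = Integer.MAX_VALUE, lastScanline = Integer.MIN_VALUE;
    float edgeMinX = Float.POSITIVE_INFINITY, edgeMaxX = Float.NEGATIVE_INFINITY;

    void addLine(float x1, float y1, float x2, float y2) {
        // min/max of the first/last scanline this edge touches
        firstScanline = Math.min(firstScanline, (int) Math.ceil(Math.min(y1, y2)));
        lastScanline  = Math.max(lastScanline,  (int) Math.floor(Math.max(y1, y2)));
        // min/max of the x endpoint values
        edgeMinX = Math.min(edgeMinX, Math.min(x1, x2));
        edgeMaxX = Math.max(edgeMaxX, Math.max(x1, x2));
        // ...store the edge itself here...
    }
}
```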
Hi Denis,
On 11/9/2010 3:06 PM, Denis Lila wrote:
I see. In that case, I think it's a good idea if we don't make curves
"monotonic". I already did this, by moving the edgeMin/axX/Y handling
and orientation computations in addLine. This did make it slower compared
to the file you sent me, but onl
Hi again.
I just thought of this: if we're really concerned about the accuracy
of the edgeMinX edgeMaxX variables, we could find the curves'
critical points and use them to compute the min/max X values. After all,
we're creating (or rather "setting") the Curve objects anyway. This isn't
as fast a
Hi Jim.
> All lines generated from a given "allegedly monotonic" curve are
> recorded with the same "or" (orientation) value. But, if the curves
> are not truly monotonic then it might be theoretically possible to
> generate a line that is backwards with respect to the expected orientation.
>
Hi Jim.
> The problem is that pruning complicates the inner loop that advances
> the scanline and you can't tell without scanning every segment in play
> whether you need to do it in its own loop. Thus, for one of my test
> cases it was really expensive without some up front record of whether
>
Hi Denis,
The problem is that pruning complicates the inner loop that advances the
scanline and you can't tell without scanning every segment in play
whether you need to do it in its own loop. Thus, for one of my test
cases it was really expensive without some up front record of whether or
n
Hi Denis,
On 11/8/2010 2:39 PM, Denis Lila wrote:
Finally, I discovered (while testing for other problems) that the
curves are not truly monotonic after slicing them. I realized this years ago
when I was writing my Area code (see sun.awt.geom.Curve) and put in
tweaking code to make them monoton
Hi Jim.
I like the new Renderer, but I have a question about edgeBucketCount.
As far as I can see it is only used so that we can tell whether
any given bucket contains only edges that go all the way to the last
(or beyond) scanline. Is this really common enough that we gain by
treating it as a sp
Hi Jim.
> A couple of questions about the code that I haven't touched...
> Is there some reason why the AFD for cubics doesn't have any tests for
> dddxy (the "constants" for its equations), but the AFD for quads is
> testing the ddxy on every loop? I know that these values do change
> when the
Also, some things to note in the new version - things I had to "fix" not
related to performance.
In endRendering the pixel bounds of the edges needed to be computed and
passed along to the next stage. Unfortunately the code in endRendering
computed the sub-pixel bounds and then simply shifted
A couple of questions about the code that I haven't touched...
Is there some reason why the AFD for cubics doesn't have any tests for
dddxy (the "constants" for its equations), but the AFD for quads is
testing the ddxy on every loop? I know that these values do change when
the AFD variables a
It's still a work in progress, but I've cleaned up a lot of logic and
made it faster in a number of ways. Note that I've abstracted out the
cache stuff and created an "AlphaConsumer" interface which may evolve
over time.
In FX we actually consume alphas in larger chunks than the code in JDK
On 11/8/2010 6:34 AM, Denis Lila wrote:
Hi Clemens.
I've only followed your discussion with Jim but skipped all the
in-depth discussion.
From my prior experiences usually JNI is not worth the trouble, if
you don't have a serious reason why using native code would have
advantages (like the poss
On Mon, 08/11/2010 at 09:34 -0500, Denis Lila wrote:
> > Have you had a look at the Netbeans profiler? It supports sampling
> > based testing to keep the influence of the profiler at a minimum.
>
> No, I don't use netbeans - it doesn't render underscores properly (although
> I think
Hi Clemens.
> I've only followed your discussion with Jim but skipped all the
> in-depth discussion.
> From my prior experiences usually JNI is not worth the trouble, if
> you don't have a serious reason why using native code would have
> advantages (like the possibility of using SIMD or when valu
Hi Jim.
> Also, I've gotten another 20% improvement out of the design with a few
> more tweaks. (Though I measured the 20% in the stripped down version
> that I'm prototyping with FX so I'm not sure how much of that 20%
> would show up through the layers of the 2D code. Overall, I've about
> dou
Hi Denis,
Also, I've gotten another 20% improvement out of the design with a few
more tweaks. (Though I measured the 20% in the stripped down version
that I'm prototyping with FX so I'm not sure how much of that 20% would
show up through the layers of the 2D code. Overall, I've about doubled
Hi Denis,
> It's not obvious to me why this happened, so I think now I will put
> this type of optimization aside and convert to JNI,
I've only followed your discussion with Jim but skipped all the
in-depth discussion.
From my prior experiences usually JNI is not worth the trouble, if you
don't
Hi Denis,
I had a bit of luck with the following next() method:
private int next() {
// TODO: make function that convert from y value to bucket idx?
int bucket = nextY - boundsMinY;
for (int ecur = edgeBuckets[bucket]; ecur != NULL; ecur =
(int)edges[
Hi Jim.
I implemented a middle ground between what I sent yesterday and
what we have now: using the array of linked lists only to replace
the quicksort (I think this was your original suggestion).
Unfortunately, this didn't work out (at least according to the
benchmarks). Curves were no different
Hi Denis,
A generic suggestion - make all of your classes final - that gives the
compiler the maximum flexibility to inline any methods you write.
With respect to the algorithm choices:
I think the key is that the X sorting rarely has any work to do. The
first test of "does this edge need
Hi Jim.
I implemented your bucket sort idea. I'm not just using the buckets
to remove the y-sort. I use them in the iteration through the scanlines
too. What happens is that on any iteration, the active list is the
doubly linked list buckets[nextY-boundsMinY]. I did this because I thought
less mem
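The bucket-sort idea being discussed can be sketched as below. This is a hedged illustration only: the real code under discussion packed a doubly linked list into primitive arrays, while this sketch uses a simple singly linked chain per bucket, and all names are assumptions.

```java
// Hedged sketch: bucket edges by starting scanline so the scanline
// loop picks up newly-starting edges in O(1), replacing the y-sort.
final class EdgeBuckets {
    static final class Edge {
        final int firstScanline;
        Edge next;                       // per-bucket chain
        Edge(int firstScanline) { this.firstScanline = firstScanline; }
    }

    final Edge[] buckets;                // one bucket per scanline row
    final int boundsMinY;

    EdgeBuckets(int boundsMinY, int boundsMaxY) {
        this.boundsMinY = boundsMinY;
        this.buckets = new Edge[boundsMaxY - boundsMinY + 1];
    }

    void addEdge(Edge e) {               // O(1) "sort" by startY
        int b = e.firstScanline - boundsMinY;
        e.next = buckets[b];
        buckets[b] = e;
    }

    Edge edgesStartingAt(int y) {        // consumed by the scanline loop
        return buckets[y - boundsMinY];
    }
}
```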
Hi Denis,
Good news!
On 10/28/2010 3:27 PM, Denis Lila wrote:
If we moved to a Curve class or some other way to
consolidate the 3 lists (may be easier in native code), this might win
in more cases...
Does that mean you no longer think we should flatten every curve as soon
as we get it?
No,
Hi Jim.
I removed the cubic and quadratic lists and am now flattening
everything as it comes in, like you suggested. I ran some AA
benchmarks and the curves test was about 15% faster.
Then I started using your insertion sort in the only edges version
and re ran the benchmarks. Curves improved fro
Hi Denis,
On 10/26/2010 6:58 AM, Denis Lila wrote:
90% (guesstimate) of the time edges do not cross each other, thus if
you sort the crossings without reordering the active edges then you just
end up doing the same sorting work (same swaps) on the next scanline. My
SpanShapeIterator code actual
On 10/26/2010 6:58 AM, Denis Lila wrote:
If we are really worried about the y-sort, then how about creating a
bunch of buckets and doing a bucket sort of the edges? As they are
added to the list of segments, we accumulate their indices in a row
list based on their startY so that each step of the
Hi Jim.
> Just to be certain - you are still planning on putting the existing
> stuff back and we're talking about future work, right? I'd love to
> get a stake in the ground here.
Yes, I'll push today.
> If we are really worried about the y-sort, then how about creating a
> bunch of buckets
Hi Denis,
Just to be certain - you are still planning on putting the existing
stuff back and we're talking about future work, right? I'd love to get
a stake in the ground here.
On 10/25/2010 3:30 PM, Denis Lila wrote:
- Create a curve class and store an array of those so you don't have
to i
Hi Jim.
> - Create a curve class and store an array of those so you don't have
> to iterate 3 different arrays of values and use array accesses to grab
> the data (each array access is checked for index OOB exceptions).
I actually implemented something like this in my first draft of the current
v
Hi Denis,
On 10/25/2010 7:34 AM, Denis Lila wrote:
(and I have some ideas on further optimizations to consider if you are still
game after this goes in)...
I'd love to hear what they are.
Here are my thoughts:
- Currently Renderer has more stages than we probably should have:
for (each
> That's great. I will be pushing today.
About that: you wrote the TransformingPathConsumer2D file,
so how should you be given credit? Should I put your name in
"Contributed-by"? Should I put an @author tag in the file?
Or does the "reviewed-by" suffice?
Regards,
Denis.
- "Denis Lila" wrote
Hi Jim.
> How about this:
>
> (Math.abs(len-leftLen) < err*len)
>
> (noting that err*len can be calculated outside of the loop).
This is what I use now.
> Note that a custom shape can send segments in any order that it wants
> so close,close can happen from a custom shape even if Path2D
Hi Jim.
> It's interesting to note that non-AA dashed ovals took a
> much bigger hit than AA dashed ovals so we need to see which code is
> fielding those and see what its issue is.
I think that's because when AA is used the tests take more time, so
a smaller proportion of time is being spent in
On 10/22/2010 12:22 PM, Denis Lila wrote:
Because the error is meant to be relative. What I use is supposed to be
equivalent to difference/AverageOfCorrectValueAndApproximation < err,
where, in our case, AverageOfCorrectValueAndApproximation=(len+leafLen)/2,
so that multiplication by 2 should hav
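The relative-error test being discussed can be written out explicitly; the sample values below are arbitrary. Multiplying through by the average is what removes the division and introduces the factor of 2.

```java
// Two equivalent forms of the relative-error predicate
// |len - leafLen| / ((len + leafLen) / 2) < err.
final class RelErr {
    static boolean relTest(double len, double leafLen, double err) {
        double avg = (len + leafLen) / 2;
        return Math.abs(len - leafLen) / avg < err;
    }
    static boolean fastTest(double len, double leafLen, double err) {
        // same predicate with the division removed
        return 2 * Math.abs(len - leafLen) < err * (len + leafLen);
    }
}
```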
Hi Denis,
Interesting results. Note that sun-jdk7 is already showing some
regressions so the more interesting results are the old-to-new pisces
comparisons. It's interesting to note that non-AA dashed ovals took a
much bigger hit than AA dashed ovals so we need to see which code is
fielding t
Hi Jim.
> I was going to run these today, but fixing the dashing bug above and
> rerunning the tests took a while and it's already 8:30 pm here and I
> have to go home. I'll run them tomorrow morning.
I ran the benchmarks. I've attached the options file. I ran benchmarks
of my icedtea installatio
Hi Jim.
>>> http://icedtea.classpath.org/~dlila/webrevs/noflatten2/webrev/
> lines 279 and 303 - maybe add Helpers.nearZero(v, nulp) for these?
I've introduced this method and am using it in 303. However, I think
line 279 is good as it is, since we're comparing Float.MIN_VALUE and a double.
>
Hi Jim.
> Does closed JDK emit something on moveto,close? (Or close,close?)
Surprisingly, it does. On moveto,close it draws caps just as if a 0
length line had been drawn. Isn't this a bug? It doesn't make much sense
to me that something should be drawn because of a moveTo.
So, if we want to re
Hi Denis,
This should be the last batch of comments, most of which may require a
10 second answer. Most of them could just be ignored as they are minor
optimizations. There are only a couple where I think something is "off"...
PiscesRenderingEngine.java:
lines 279 and 303 - maybe add Helpe
This comment may make more sense if I explain that the condition for
when finish() is executed in closePath is the reverse of the condition
in move and done. I agree that it should return if prev!=OP_TO, but I'm
not so sure about doing a finish(). Does closed JDK emit something on
moveto,clos
Hi Denis,
I saw something in the latest webrev that reminded me of an earlier comment.
On 10/18/2010 2:21 PM, Denis Lila wrote:
line 389 - The test here is different from closePath. What if they
were both "prev == DRAWING_OP_TO"?
I am now using prev!=DRAWING_OP_TO (not ==, since it is suppos
Hi Jim.
> In the meantime, can you outline the tests that you ran?
I ran Java2D without any problems. There's also been an
icedtea bug http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=450
related to a look and feel (tinylaf:
http://www.muntjak.de/hans/java/tinylaf/tinylaf-1_4_0.zip
to run a
Hi Denis,
I'll be focusing on this later today, just a last proofread.
In the meantime, can you outline the tests that you ran?
Also, have you used J2DBench at all? I know you ran some of your own
benchmarks, but didn't know if you were familiar with this tool:
{OpenJDK}/src/share/d
Hi Jim.
> When one talks about curves and being parallel, my mind
> tends to think of the tangents of the curves being parallel and
> tangents are directed by the first derivative.
That's what I was trying to express. The tangents of curves A and B
at t are parallel if and only if there's some c
Hello Jim.
> So either it is always on the right side, always on the wrong side
> (i.e. just reverse the rotation in the math), or always on the right/wrong
> side depending on the CWness of the join angle - which would be
> reflected in rev... No?
That's a good point. I've changed the test to
OK, I can see how your terminology works now, but it seems odd to me. I
never consider re-expressing the coordinates on a curve as a vector and
basing geometric properties on those constructed vectors. I either
consider the points on the curve, or its tangent or its normal - none of
which is
Right, but it seemed to me that if omxy was the "from" vector and mxy
was the "to" vector, that the computed mmxy should always be predictably
on the same side of it, no? If it was on the wrong side then it
wouldn't be a random occurrence, it must be related to the input data.
So either it is a
> Also, how is A(t) and B(t) are parallel not the same as "the curves A
> and B are parallel at t"?
Well, suppose A and B are lines with endpoints (0,0), (2,0) for A
and (0,1),(2,1) for B. Obviously, for all t, A and B are parallel at t.
However let t = 0.5. Then A(t) = (1,0) and B(t) = (1, 1). Th
> Cool, but above I was also asking the same question about line 231,
> and you provided a lot of information about line 231 (and a test to verify
> it), but didn't answer if the test in line 231 also tracks rev the
> same way...?
Oh, no, line 231 isn't meant to be related to rev at all. It just ch
On 10/20/10 7:54 AM, Denis Lila wrote:
In #2, you have a bunch of "I'() || B'()" which I read as "the slope
of the derivative (i.e. acceleration) is equal", don't you really mean
"I() || B()" which would mean the original curves should be parallel?
Otherwise you could say "I'() == B'()", but I th
Hi Denis,
One clarification:
On 10/20/10 7:11 AM, Denis Lila wrote:
When would the isCW test trigger? Does it track "rev"? What happens
at 180 degrees (is that test reliable for the randomization that might
happen when omxy are directly opposite mxy)?
isCw is used for computing the arc bise
Hi Jim.
> I wasn't sure why you isolated that term out there instead of just
> grouping it with the rest of the numerator - is there a danger of
> overflow if you multiply it before you do the division? If so, then
> that is fine since it doesn't actually affect the number of fp ops so
> it sh