On Thu, Apr 20, 2017 at 02:51:12PM +0200, Timon Gehr via Digitalmars-d wrote:
> On 20.04.2017 03:00, H. S. Teoh via Digitalmars-d wrote:
> > On Thu, Apr 20, 2017 at 02:01:20AM +0200, Timon Gehr via Digitalmars-d 
> > wrote:
> > [...]
> > > Yes, there is in fact a beautifully simple way to do better. :)
> > > ...
> > 
> > Ahh, so *that's* what it's all about. I figured that's what I was
> > missing. :-D  Thanks, I'll include this in QRat soon.

Update: QRat now supports ^^. :-) Integral exponents only, of course.  I
also implemented negative exponents, because QRat supports division and
the same algorithm can be easily reused for that purpose.

Interestingly enough, std.math has an overload of pow() that pretty much
does exactly the same thing, except that its sig constraints require a
built-in floating-point type. I'm half-tempted to submit a Phobos PR to
relax the sig constraints so that we could actually use it for QRat
without essentially duplicating the code.
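
For reference, the algorithm in question is just the usual exponentiation
by squaring.  A rough generic sketch of the idea (made-up names, and
assuming the type can be constructed from 1 -- not the actual QRat or
Phobos code):

// Rough sketch of exponentiation by squaring for a generic type T.
// Assumes T(1) yields the multiplicative identity and that T supports
// '*' and '/' -- hypothetical, not the actual QRat/Phobos code.
T powBySquaring(T)(T base, long exp)
{
    // Negative exponents: reuse the same loop on the reciprocal.
    if (exp < 0)
        return powBySquaring(T(1) / base, -exp);

    T result = T(1);
    while (exp > 0)
    {
        if (exp & 1)
            result = result * base;
        base = base * base;
        exp >>= 1;
    }
    return result;
}

unittest
{
    import std.bigint;
    assert(powBySquaring(BigInt(3), 13) == BigInt(1594323));
}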


[...]
> BTW, you are right that with std.bigint, computation using a linear
> number of additions is actually faster for my example (100000th
> Fibonacci number).  The asymptotic running time of the version with
> pow on QRats is lower though, so there ought to be a crossover point.
> (It is Θ(n^2) vs.  O(n^log₂(3)·log(n)). std.bigint does not implement
> anything that is asymptotically faster than Karatsuba.)

Yeah, probably there is a crossover point. But it might be quite large.
I suppose one could make a graph of the running times for increasing n,
and either find the crossover point that way or extrapolate using the
known curve shapes.

Having said that, I haven't scrutinized the performance characteristics
of QRat too carefully just yet -- there is probably room for
optimization.


[...]
> > > That would certainly be nice. Note that QRat is basically already
> > > there for different quadratic radicals, the only reason it does
> > > not work already is that we cannot use a QRat as the base field
> > > instead of ℚ (because ℚ is hardcoded).
> > 
> > Oh?  I didn't try it myself, but if QRat itself passes
> > isArithmeticType, I'd venture to say QRat!(n, QRat!m) ought to
> > work... There are some hidden assumptions about properties of the
> > rationals, though, but I surmise none that couldn't be replaced by
> > prerequisites about the relative linear dependence of the mixed
> > radicals over Q.  ...
> 
> The issue is that gcd does not work on QRats. If QRat had two
> coefficients from an arbitrary (possibly ordered) field instead of
> encoding rationals explicitly, I think it would work.

You're right, without gcd it won't work.  The current implementation is
a bit overzealous in its use of gcd (cf. the division algorithm), mainly
because I'm concerned about integer overflow with native types. Some of
these uses could probably be dispensed with where BigInt is involved,
but normalize() still needs gcd, otherwise sgn() and opCmp() may produce
wrong results.
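
To illustrate why: the sign of (a + b√r)/c can be determined exactly with
integer arithmetic alone, but the comparison involves squaring the
coefficients, which is exactly where overflow bites if they aren't kept
reduced.  Roughly like this (a standalone sketch, not QRat's actual code):

// Standalone sketch: sign of (a + b*√r)/c using integer arithmetic only
// (assumes r > 0 and c != 0).  Note the a*a and b*b*r terms -- without
// gcd-based reduction these can overflow native ints even when a, b, c
// themselves are only moderately large.
int quadSign(long a, long b, long r, long c)
{
    int num;
    if (a >= 0 && b >= 0)
        num = (a == 0 && b == 0) ? 0 : 1;
    else if (a <= 0 && b <= 0)
        num = -1;
    else
    {
        // a and b have opposite signs: compare a^2 against b^2*r.
        const lhs = a * a, rhs = b * b * r;
        num = (lhs == rhs) ? 0 : (((lhs > rhs) == (a > 0)) ? 1 : -1);
    }
    return (c > 0) ? num : -num;
}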

One thought is that if there is a QInt base type (i.e., implementing
numbers of the form a+b√r, without the denominator), then we could
implement a gcd algorithm for it, and we'd be able to instantiate
QRat!(r, QInt).
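
Roughly what I have in mind, as a sketch only (the actual coefficient
type, the full operator set, and the gcd algorithm itself would still
need to be worked out):

import std.bigint;

// Hypothetical sketch of the QInt idea: numbers of the form a + b*√r
// with integer components and no denominator ("quadratic integers"
// over r).  Illustration only -- not QRat's actual code.
struct QInt(long r)
{
    BigInt a, b;    // represents a + b*√r

    QInt opBinary(string op : "+")(QInt rhs) const
    {
        return QInt(a + rhs.a, b + rhs.b);
    }

    QInt opBinary(string op : "*")(QInt rhs) const
    {
        // (a + b√r)(c + d√r) = (ac + bd·r) + (ad + bc)√r
        return QInt(a*rhs.a + b*rhs.b*r, a*rhs.b + b*rhs.a);
    }

    // The norm N(a + b√r) = a² - r·b² would be the natural "size"
    // measure for a Euclidean-style gcd on QInt.
    BigInt norm() const
    {
        return a*a - b*b*r;
    }
}

unittest
{
    alias Q5 = QInt!5;
    auto x = Q5(BigInt(2), BigInt(1));   // 2 + √5
    assert((x * x).a == BigInt(9));      // (2 + √5)² = 9 + 4√5
    assert((x * x).b == BigInt(4));
}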


> > What I had in mind, though, was a more direct approach that perhaps
> > may reduce the total number of operations, since if the code is
> > aware that multiple radicals are involved, it could potentially
> > factor out some commonalities to minimize recomputations.  ...
> 
> This is probably the case.

Upon reviewing the algorithms I've come up with in the past, it appears
that QRat!(r, QInt) may in fact produce essentially the same code.


> > Also, the current implementation of QRat fixes the radical at
> > compile-time; I wanted to see if I could dynamically handle
> > arbitrary radicals at runtime. It would have to be restricted by
> > only allowing operations between two QRats of the same extension, of
> > course, but if the code could handle arbitrary extensions
> > dynamically, then that restriction could be lifted and we could
> > (potentially, anyway) support arbitrary combinations of expressions
> > involving radicals. That would be far more useful than QRat, for
> > some of the things I'd like to use it for.  ...
> 
> What applications do you have in mind? Computational geometry?

Yes. In particular, manipulating the coordinates of certain kinds of
polytopes. I currently do have code that can do this with
floating-point, but I'd like to be able to deal with exact coordinates
rather than floating-point approximations.


[...]
> > > All your conjectures are true, except the last one. (Galois theory
> > > is not an obstacle, because here, we only need to consider
> > > splitting fields of particularly simple polynomials that are
> > > solvable in radicals.)
> > 
> > That's nice to know. But I suppose Galois theory *would* become an
> > obstacle if I wanted to implement, for example, Q(x) for an
> > arbitrary algebraic x?  ...
> 
> All that the result about the quintic really says is that you will
> not, in general, be able to express x using field operations on
> radicals. It is still possible to compute the roots to arbitrary
> precision.

Oh, I know that; I'm not really concerned with computing roots to
arbitrary precision here, though, but rather with implementing exact
arithmetic on expressions involving said roots.


> Computing the field operations in ℚ(x) will actually still be quite
> straightforward but you'd have to think about what to do with toString
> and opCmp. (Or more generally, you'd have to think about how to pick
> one of the roots of a given polynomial.)

Hmm, good point.  I haven't really thought through the consequences of
implementing a reciprocation algorithm for an algebraic number k whose
defining polynomial has multiple roots. At some point, assumptions about
which root is being used have to come into play.  How to encode this in
the API and internally in the code is an interesting question.


> > > You can even mix radicals of different degrees.
> > 
> > Yes, I've thought about that before. So it should be possible to
> > represent Q(r1,r2,...rn) using deg(r1)*deg(r2)*...*deg(rn)+1
> > coefficients?
> > ...
> 
> Yes, at most, except you don't need "+1". (For each radical ri, you
> will at most need to pick a power between 0 to deg(ri)-1 to index into
> the coefficients.)
[...]

The +1 is for the denominator, assuming integer coefficients: having 2^n
rational coefficients is equivalent to having 2^n integer coefficients
(each half the size of a rational coefficient,
computer-representation-wise) plus one common denominator.
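
Concretely, for n = 2, i.e. Q(√2,√3): an element would be stored as
(a + b√2 + c√3 + d√6)/e with five integers a..e, instead of four
independent rational coefficients, which take eight integers.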

Though it's arguable whether this really saves much once you get out of
the realm of native machine integer types and into BigInt.  If two or
more of the rational coefficients have denominators that are relatively
prime to each other, merging them into a single common denominator may
just cause all of the numerators to explode in size, so keeping separate
rational coefficients could in fact be more compact in both space and
CPU time.

But, at least in theory, if you're dealing with relatively small elements
of the field, you could fit everything into native int types, and then
2^n + 1 integer coefficients may well perform better than 2^n rational
coefficients.  I suspect the applicability of this is rather narrow,
however: once you get past a small handful of radicals, the coefficients
(esp. the intermediate coefficients computed during reciprocation) easily
overflow native machine int types, necessitating BigInt coefficients
pretty quickly.

The last time I attempted an implementation with 3-4 separate radicals
many years ago, I found that even small starting coefficients (i.e., 1-2
digits) quickly exploded in internal algorithms due to repeated
multiplication, so that after just a small number of operations I was
already running into integer overflows. This was back when I was still
doing it in C/C++... I did attempt a re-implementation using libgmp, but
never finished.


T

-- 
"Real programmers can write assembly code in any language. :-)" -- Larry Wall
