> > > Don't forget bigrats.
> > 
> > I'm not too familiar with the concept of rational numbers in a computing
> > context. What's your definition of a (big)rat? Fixed point??
> 
> bigint1 / bigint2.  Possibly represented as a triad of bigints,
> bigint1 + bigint2 / bigint3.

I'm tempted to suggest that bigrats, like complex numbers, fall into
the category of 'only mostly compatible', i.e. (see the sketch below):

1) ($bigrat1 op $bigrat2) accurately gives $bigrat3
2) ($bigrat1 op [any numeric scalar that can be accurately extracted
        using get_realn()]) accurately gives $bigrat3
3) ($bigrat1 op $daves_different_but_smaller_bigrat_implementation2)
        gives a bigrat, but its accuracy depends on (2)
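
To make (2) and (3) concrete, here's a toy pure-Perl model. All the names
here - Toy::BigRat, get_realn() as a method - are made up for illustration,
not part of any proposed API: same-type operands get exact rational
arithmetic, anything else goes through get_realn() and inherits its accuracy.

package Toy::BigRat;

sub new {
    my ($class, $n, $d) = @_;
    return bless { n => $n, d => $d }, $class;
}

# lossy extraction to a standard real - this is the accuracy bottleneck
sub get_realn { my $self = shift; return $self->{n} / $self->{d}; }

sub add {
    my ($self, $other) = @_;
    if (ref($other) && $other->isa(__PACKAGE__)) {
        # case (1): same implementation, so do exact rational arithmetic
        return __PACKAGE__->new(
            $self->{n} * $other->{d} + $other->{n} * $self->{d},
            $self->{d} * $other->{d},
        );
    }
    # cases (2) and (3): all we can ask of a foreign type is get_realn(),
    # so the result is only as accurate as that extraction
    # (the toy just stores the approximation over a denominator of 1)
    my $r = ref($other) ? $other->get_realn() : $other;
    return __PACKAGE__->new($self->get_realn() + $r, 1);
}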

> Remember also abnormalities like NaN.

I think this depends largely on how we standardise the arbitrary
real representation as retrieved by get_realn() - if our format allows
NaN et al, then get_realn() on a NaN should return a NaN; otherwise an
exception should be thrown. Of course, get_intn() on a NaN should always
throw an exception.
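
A sketch of that rule, with get_intn() built on top of get_realn()
(both names are assumptions from this thread, not an existing API):

sub get_intn {
    my $sv = shift;
    my $r  = $sv->get_realn();   # may legitimately be a NaN
    die "get_intn: NaN has no integer value" if $r != $r;   # only NaN != itself
    return int($r);
}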

> 
> > I.e. should bigint_subtract(bigint1,bigint2)
> > 
> > 1) always return a new bigint SV
> > 2) a bigint normally, but an int if it happens to know that its result
> >    will fit in a standard int, or
> > 3) a bigint or an int depending on some as yet undefined external context?
> > 
> > My own feeling is that it should just stick with (1) - if someone has
> > some code that uses bigints, the chances are that the results of bigint
> > expressions are most likely to be fed into further bigint expressions,
> > so demoting to int then promoting again would probably be less
> > efficient; also if you're working with bigints in the first place, then
> > I'd expect cases where the result of an op fits in an IV would be the
> > minority. But never having worked with bigints myself, I could be
> > speaking from my derriere ;-)
> 
> Likewise.  But having such reduction/collapsing code is very natural
> because we need to have the 'inverse' of that logic to know when to stop
> using ints and start using bigints;

Hmmm, 2**1000 raises some interesting issues.
Currently the perl language itself has no builtin support for big numbers,
and all I have been proposing so far is a scheme for the numerical
part of the vtable API that in principle allows other people to write their
own large types which (mostly) interoperate with standard perl numeric types.

Also, the current Perl language has no general mechanism for telling an op
what type it should return, and I've sort of come to the conclusion that
it would be hard/messy to do so.

Perl currently has the syntax

my Bigint $b = ...;

but that is a compiler hint that $b is a reference to an object of
type Bigint - not that $b is a scalar with vtable type bigint.

I had vaguely assumed that if someone wrote a Bigint scalar type, they would
also provide a perl-level constructor.
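
For example, something like this (the module layout is just a guess at
what such an author might write, and new_bigint_sv() is purely
hypothetical):

package Bigint;
require Exporter;
our @ISA    = qw(Exporter);
our @EXPORT = qw(bigint);

sub bigint {
    my $n = shift;
    # the real work would be an XS call returning an SV whose
    # vtable is the bigint one
    return Bigint::XS::new_bigint_sv($n);
}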

So,

$x = 2**1000;

would evaluate 2**1000 at compile time, and if it didn't fit into an NV
(or IV if 'use integer' is in effect), create a compile-time error.
The compiler has no way of knowing that you want a compile-time bigint
constant.

To do proper bigint arithmetic, you might do

use Bigint;
$x = bigint(2)**1000;

where bigint is a function imported from Bigint that returns a bigint scalar
(as opposed to a Bigint object ref).

This code would be evaluated as follows:

a standard SV with value 2 is pushed on the stack.
bigint() is called, and returns a new SV of type bigint, with value 2.
This is pushed on the stack.
a standard SV with value 1000 is pushed on the stack.
pp_exp is called. It examines its 2 args; then, as defined by precision(),
the bigint is 'bigger' than the std scalar, so
bigint_power() is called.
This evaluates 2**1000 and returns the result to pp_exp as a new bigint SV.
pp_exp then pushes this value on the stack.
pp_assign is then called, which blows away the current contents of $x,
and replaces them with the return value from pp_exp.
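
The dispatch step in the middle is the interesting one. As a toy model
(precision() and vtable_of() are assumed names, following this thread,
not anything that exists today):

# how pp_exp might choose an implementation: defer to whichever
# operand's type reports the greater precision()
sub pp_exp_model {
    my ($left, $right) = @_;
    my $winner = precision($left) >= precision($right) ? $left : $right;
    # with a bigint on either side, this ends up in bigint_power()
    return vtable_of($winner)->{power}->($left, $right);
}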

Without language extensions, perl XS functions are essentially the only
way to create values of a user-defined type. As long as this remains the
case, it might be wise for ops not to 'degrade' their results.

For example,

$b1 = bigint(9999999999999999999999999999999999999999999999);
$b2 = bigint(9999999999999999999999999999999999999999999900);
$b3 = $b1 - $b2; # happens to be 99, but could just as easily be 10**100
$b4 = $b3 ** 100000;

In this case, $b3 *sometimes* gets downgraded to a std int, which means
that *sometimes* the last line will die (once $b3 is a plain int, the **
no longer dispatches to bigint_power(), so the huge result overflows
standard arithmetic).

On the other hand, if ops never downgrade, there needs to be a manual
way of converting a bigint to a standard one for the odd occasion
when the programmer needs it. We could argue that it is up to the author
of Bigint to provide a perl-level function that does this, eg
big2int(), which might be used in a context like this:

if ($b >= 0 and $b < 2**32) {
        freeze(big2int($b));
} else {
        freeze($b);
}
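
big2int() itself could then be a thin wrapper over the integer-extraction
method discussed above (again, all names assumed):

sub big2int {
    my $b = shift;
    # get_intn() should throw if the value is NaN or won't fit in an IV,
    # so the explicit range check in the caller above is belt-and-braces
    return $b->get_intn();
}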

(Of course, a lot of the time, extracting the integer value from a bigint
would be automatic, eg $array[$b4] - the get_int method of $b4 would
automatically get called here.)

On the other hand, we already have a perl builtin called int() - the
semantics of this could be extended to mean it returns
        round_to_integer(sv->get_real(sv))
for any SV type, and we could define a new perl built-in called float()
or real(), that just returns
        sv->get_real(sv)
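
In Perl terms, a sketch only - these would really be core builtins, and
the get_real() method call here just stands in for the C-level vtable
slot above:

sub my_int  { my $sv = shift; return round_to_integer($sv->get_real()) }
sub my_real { my $sv = shift; return $sv->get_real() }

# one possible round_to_integer(); nothing here pins down a rounding policy
sub round_to_integer { my $r = shift; return int($r + ($r < 0 ? -0.5 : 0.5)) }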

But this is all mainly wild hand-waving....



* Dave Mitchell, Operations Manager,
* Fretwell-Downing Facilities Ltd, UK.  [EMAIL PROTECTED]
* Tel: +44 114 281 6113.                The usual disclaimers....
*
* Standards (n). Battle insignia or tribal totems

