On Wed, 25 Aug 2004 08:40:32 -0400, "Gay, Jerry" <[EMAIL PROTECTED]> said:

> Leopold Toetsch <[EMAIL PROTECTED]> wrote:
> > BigNums grow on demand. It depends on value and precision.
>
> can BigNum then start at sizeof(int)? overflow would auto-grow the BigNum
> to the appropriate size, and most integer math operations will keep space
> usage as low as possible.
>
> in fact, then int is just a degenerate case of BigNum, one that doesn't
> grow and throws an exception instead. or, maybe that's the case already,
> i should probably read the docs.
>
> ~jerry
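[The "int as a degenerate BigNum that auto-grows on overflow" model Jerry describes is not specific to Parrot; Python's integers work exactly this way natively, so they make a handy sketch of the behavior. This is an illustration of the concept, not Parrot's actual BigNum implementation.]

```python
# Sketch of the auto-grow model: an int that starts at machine-word size
# and is promoted to arbitrary precision on overflow, transparently.
# Python's built-in int does this natively -- no exception, no wraparound.
import sys

word = sys.maxsize        # largest value fitting in a machine word
grown = word + 1          # silently promoted past the word boundary
print(grown > word)       # True -- the value kept growing
print(2 ** 128)           # 340282366920938463463374607431768211456
```

In the model Jerry proposes, a plain int would behave the same way except that the promotion step would instead raise an overflow exception.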
What is the most reasonable paradigm for scientific/high-precision applications? It seems to me that this type of thing has been hashed out before, and it should be designed in a way that makes it attractive and sellable to scientists, engineers, etc.

One handicap Perl has (by reputation only) in the sciences is that it is supposedly not good for precision math. I know this is not true, and you all know this is not true, but the communities at large do not know - they are stuck in the land of Fortran, and in my experience people bypass Perl for things like Python when they do venture out.

Just out of curiosity, is BigNum like a "double" (about 16 significant decimal digits), or is it only limited by the precision of the machine, i.e. 32 or 64 bit?

Thanks,
Brett

Perl6 ToDo: http://www.parrotcode.org/todo
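[As a point of reference for the distinction Brett is asking about - again using Python as a stand-in, not Parrot's BigNum: a C "double" is 64 bits wide but carries only about 15-16 significant decimal digits, whereas an arbitrary-precision type can be configured to whatever precision the application needs.]

```python
# A double's ~16 significant digits show up as rounding error in even
# simple sums; an arbitrary-precision decimal type does not have a
# fixed machine-imposed limit.
from decimal import Decimal, getcontext

print(0.1 + 0.2)                 # 0.30000000000000004 (double rounding)

getcontext().prec = 50           # ask for 50 significant digits
print(Decimal(1) / Decimal(7))   # 0.14285714285714285714...
```

So the answer a BigNum design usually aims for is the second behavior: precision bounded by value and demand, not by the machine word.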