On Friday, 21 February 2014 at 07:42:36 UTC, francesco cattoglio
wrote:
On Friday, 21 February 2014 at 05:21:53 UTC, Frustrated wrote:

I think, though, that adding a "repeating" bit would make it even
more accurate, so that repeating decimals within the bounds of the
maximum bits used could be represented perfectly. E.g., 1/3 =
0.3333... could be represented perfectly with such a bit and a
sliding fp type. With proper CPU support one could have
0.3333... * 3 = 1 exactly.
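No mainstream FPU has such a repeating bit, but the effect being asked for can be sketched in software with exact rational arithmetic (Python's stdlib fractions module here, purely as an illustration of the idea, not of any hardware design):

```python
from fractions import Fraction

# Binary floating point cannot represent 0.1 exactly, so rounding leaks out:
print(0.1 + 0.2 == 0.3)   # False

# An exact rational type represents 1/3 perfectly, so 1/3 * 3 == 1 holds:
third = Fraction(1, 3)
print(third * 3 == 1)     # True
print(third)              # 1/3
```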

By having two extra bits one could represent constants to any
degree of accuracy. E.g., the last bit says the value represents
the ith constant in some list. This would allow very common
irrational constants to be used: e, pi, sqrt(2), etc.
Unfortunately, maths (real-world maths) isn't made of "common" constants. Moreover, such a "repeating bit" would have to become a "repeating counter", since you usually get a run of several repeating digits, not just a single one.


I disagree. Many computations done by computers involve constants
in the calculation... pi, e, sqrt(2), sqrt(-1), many physical
constants, etc. The CPU could store these constants at a higher
bit resolution than standard, say 128 bits or whatever.

For floating-point values, the third-to-last bit could mark a
repeating decimal, or it could be used in the constants table for
common repeating decimals (since chances are, repeated
calculations would not produce repeating decimals).
Things like those are cool and might have their applications (I'm thinking mostly of message passing via TCP/IP), but they have no real use in scientific computation. If you want good precision, you are better off with bignum numbers.
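The bignum route mentioned here can be sketched with Python's stdlib decimal module; note that even arbitrary precision still truncates a repeating expansion at some chosen number of digits:

```python
from decimal import Decimal, getcontext

# Raise the working precision to 50 significant digits.
getcontext().prec = 50

one_third = Decimal(1) / Decimal(3)
print(one_third)      # 0.3333... truncated at 50 digits

# Even at high precision this is a truncation, not an exact 1/3,
# so multiplying back by 3 yields 0.9999...9 rather than 1:
print(one_third * 3 == 1)   # False
```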

Simply not true. Maple, for example, uses constants and can
compute with them. It might speed up the calculations
significantly if one could compute with constants at the CPU
level. E.g., pi*sqrt(2) is just another constant; 2*pi is another
constant. Obviously there is a limit to the number of constants
one can store, but one can encode many constants easily without
extra storage (where the "index" itself is used as the value).
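The "index is the value" encoding can be sketched as a tagged value: one bit (the tag) selects between an ordinary float payload and an index into a table of precomputed constants. The table contents and the decode function below are hypothetical illustrations, not any real hardware format:

```python
import math

# Hypothetical constant table; a real design could store these at
# higher precision than a 64-bit double. 2*pi (tau) is stored directly.
CONSTANTS = [math.pi, math.e, math.sqrt(2), math.tau]

def decode(tag, payload):
    """tag == 0: payload is a plain float value.
    tag == 1: payload is an index into the constant table."""
    return CONSTANTS[payload] if tag == 1 else payload

print(decode(1, 0))    # 3.141592653589793 (pi, looked up by index)
print(decode(0, 1.5))  # 1.5 (plain float passed through)
```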

Also, the CPU could reduce values to constants... so if you have
an fp value that is 3.14159265... or whatever, the CPU could
convert it to the constant pi, since for all practical purposes it
is pi (even if it is just an approximation, say pi - 1/10^10).
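The "reduce a value to a constant" step amounts to snapping a float to the nearest entry of a known-constant table when it falls within some tolerance. A minimal sketch (the table, function name, and tolerance are all illustrative assumptions):

```python
import math

# Hypothetical table of recognizable constants.
KNOWN = {"pi": math.pi, "e": math.e, "sqrt(2)": math.sqrt(2)}

def snap_to_constant(x, tol=1e-10):
    """Return (name, value) if x lies within tol of a known constant,
    otherwise (None, x) unchanged. Tolerance choice is arbitrary here."""
    for name, value in KNOWN.items():
        if abs(x - value) <= tol:
            return name, value
    return None, x

print(snap_to_constant(3.14159265358979))  # ('pi', 3.141592653589793)
print(snap_to_constant(2.5))               # (None, 2.5)
```

A real implementation would have to weigh the risk of snapping a value that only coincidentally lands near a constant.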

Basically, the benefit (and potential) of this outweighs the cost
of one bit out of 64. I doubt anyone would miss it. In fact, there
would probably be no real loss: NaN could be a constant, in which
case you would have only one rather than the 2 billion or whatever
you have now.
