Roger Glover wrote:
>You are confusing accuracy with precision. You cannot get penny-level accuracy
>if your number scheme only supports 14 decimal digits of precision (like IEEE
>double) and the number to be represented is one hundred trillion (US:
>10-to-the-14th) dollars (US).
>
Depends on which calculations you are performing. If tracking the
expenditures within any particular agency, you are working at the penny
level but the total amount is significantly lower than 100 trillion
dollars. If working with total budget values, such as the breakdown of
the entire Federal budget, the values are in increments of $1000. So a
$100 trillion budget would be 1.0e11 instead of 1.0e14.
Besides, the entire Federal budget is, what, something around $5 or $6
trillion. It will probably be another year or two before it hits $100
trillion.
>However, when dealing with normal (significantly
>smaller) amounts you should be able to get the desired accuracy, unless the
>number of calculations begins to cause rounding error to creep into the pennies
>digit...
>
Rounding error starts creeping in at the very first calculation, unless
you happen to hit those specific values that are exactly representable
in the available bits.
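For illustration (my example, not from the thread): even the very first
addition of two everyday decimal amounts picks up error, while values
that happen to be exact in binary do not:

    double sum = 0.1 + 0.2;                  // neither term is exact in base 2
    System.out.println(sum);                 // prints 0.30000000000000004
    System.out.println(sum == 0.3);          // prints false
    System.out.println(0.25 + 0.5 == 0.75);  // prints true -- all exact in base 2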
>Err... no. The padded cell crowd, if they are serious about these ridiculous
>precisions, have many alternatives to Java that are more appropriate for this
>kind of work.
>
Possibly. That doesn't mean the developers of Java weren't trying to
win over some of them.
>A long will hold the first 92 terms of the classic Fibonacci sequence. Am I to
>understand that you needed more terms than that?
>
I don't know -- I only wrote it, I don't use it. However, I did set a
limit of 256 (double the long limit, rounded up to the next power of
two). It was a nice, round (binarily speaking) value, and I figured any
value larger than that was an error and attention should be brought to
it. Besides, I didn't even want to test to see how long it would take
to generate fib(65536)!
>Besides that, if you use doubles you will lose precision sooner than you will
>with longs. A double can represent a larger absolute value than a long, but
>doubles have *less* precision than longs:
> double: 14 decimal digits of precision.
> long: 18 decimal digits of precision.
>
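To make that gap concrete (an illustrative snippet, not the poster's):
round-trip an 18-digit long through a double and the low-order digits
are lost to the 53-bit mantissa:

    long big = 123456789012345678L;  // 18 significant digits
    double d = (double) big;         // neighboring doubles are 16 apart out here
    System.out.println((long) d);    // prints 123456789012345680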
True. As a matter of fact, I wrote it originally using longs, but I
couldn't get a guarantee that even that limit would never, ever be
exceeded. Maybe I'll insert some code to keep track of the largest value
it is asked to generate and check it periodically. Then again, it works
-- I've gotten no complaints -- so maybe I'll leave it be. Besides, it
stores the values it generates in an array. If called for a term it has
already generated (or an earlier one), it simply indexes into the array
and returns the cached result. So even though it uses BigInteger, the
overall performance is excellent.
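For what it's worth, a minimal sketch of that scheme (the class name,
field names, and exact bookkeeping are my guesses; the original code
wasn't posted):

    import java.math.BigInteger;

    // Hypothetical reconstruction of the cached generator described above.
    public class Fib {
        private static final int MAX_TERM = 256;  // the sanity limit mentioned
        private static final BigInteger[] cache = new BigInteger[MAX_TERM + 1];
        private static int highest = 1;           // highest term computed so far

        static {
            cache[0] = BigInteger.ZERO;
            cache[1] = BigInteger.ONE;
        }

        public static synchronized BigInteger fib(int n) {
            if (n < 0 || n > MAX_TERM)
                throw new IllegalArgumentException("term out of range: " + n);
            // Terms at or below 'highest' are pure array lookups; anything
            // beyond extends the cache once and stays there for later calls.
            for (int i = highest + 1; i <= n; i++)
                cache[i] = cache[i - 1].add(cache[i - 2]);
            if (n > highest) highest = n;
            return cache[n];
        }
    }

Since no term is ever computed twice, the BigInteger overhead barely
shows after the first call.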
>That is why most statistical calculations which require division of factorials
>are based on the algebraic reduction of the ratio to least (algebraic) terms.
>So in the following calculation:
> (n!)/((n-r)!)
>The computation would be reduced to something like this:
> long n = whatever;
> long r = somethinglessthanwhatever;
> long result = 1;
> for( long i = 0; i < r; i++ )
> {
>     result *= n - i;
> }
>
>In this case, if we are sure the result will be "reasonable", then we can be
>sure that each of the intermediate terms is "reasonable" as well.
>
True, again. Maybe statistics was not the best example to use. I was
just examining the real-world uses of BigInteger and BigDecimal,
including the possibility of calculations that generate extremely large
interim values -- ones that can't be so conveniently canceled out.
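For example (my sketch, not from the thread): when the factorial itself
is the answer, nothing cancels, and the interim values leave long behind
almost immediately:

    import java.math.BigInteger;

    // 20! = 2432902008176640000 is the last factorial that fits in a long;
    // 21! already overflows, so BigInteger is the only exact route.
    BigInteger f = BigInteger.ONE;
    for (int i = 1; i <= 25; i++)
        f = f.multiply(BigInteger.valueOf(i));
    System.out.println(f);  // prints 15511210043330985984000000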
>If you had taken the time to read the Javadoc before asking this question, ...
>
Had I been working with BigDecimal, I would have. It was just idle curiosity.
But thanks for the answer, anyway. 8^)
Tomm