[boost] Re: Fixed-Point Decimal Formal Review

2003-07-25 Thread Mike Cowlishaw
Dirk Schreib wrote:
> I completely agree with your statement.
Always nice to hear :-).

> We used the proposed IEEE754 layouts for our own number class
> with some minor changes. The proposed format seems to be ideal
> for hardware but was a little bit too slow for a pure software
> implementation.
>
> For every feature in the specification we asked "Is there a
> performance penalty even if I don't use this feature?" and removed
> the feature if so.
> The resulting class is quite fast (though not as fast as a fixed-point
> decimal class).

Any documentation of what you ended up with?

Mike Cowlishaw



___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


[boost] Re: Fixed-Point Decimal Formal Review

2003-07-24 Thread Dirk Schreib
Hello Mike,

I completely agree with your statement.

> The IEEE 754 revision committee has added decimal formats (32, 64,
> and 128 bits) to the proposed new Floating-point standard, along
> with full DFP arithmetic.

> May I suggest that the class be changed to implement the proposed IEEE
> 754 arithmetic?  This would be a tremendous contribution, and it could
> even use the existing representation (perhaps expanded to 128 bits), or
> it could use the proposed IEEE 754 layouts.  The latter offers the
> possibility of changing the class later to take advantage of the
> hardware DFP when that becomes available.

We used the proposed IEEE754 layouts for our own number class
with some minor changes. The proposed format seems to be ideal
for hardware but was a little bit too slow for a pure software
implementation.

For every feature in the specification we asked "Is there a performance
penalty even if I don't use this feature?" and removed the feature if so.
The resulting class is quite fast (though not as fast as a fixed-point
decimal class).

Dirk





[boost] Re: Fixed-Point Decimal Formal Review

2003-07-22 Thread Ilya Buchkin
"Bill Seymour" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Despite the several "yes" votes (thanks), I think I have to agree
> with Ilya and vote "no" for my own library.  Of the problems
> that have been mentioned, two, IMO, are show-stoppers:
> the problem with the scale being immutable resulting in
> inequality after assignment (if the scale isn't part of
> the type), and the need for a floating-point decimal type
> to hold intermediate results.
>
> (Ilya, you're right that I haven't responded to the latter
> issue.  It's not that I haven't read your posts, but rather
> that I didn't think I had anything intelligent to say yet.
> Please don't take it personally.)

Bill,
No problem at all; I mentioned your non-response in my review
only to state the current status and to explain my conclusions.
I hope that you do not take any of my notes personally either;
my only desire has been to give an honest professional critique
of the material and to support the high standards of Boost.

> Also, I got a note this morning from Raymond Mak of IBM in
> Toronto who will be proposing a floating-point decimal TR
> at the J11/WG14 session in Kona.  It's based on the proposed
> IEEE 754 that Mike Cowlishaw mentioned in another Boost post.
>
> So even if the current version of the library is accepted
> into Boost, I'll be going back to the drawing board anyway.
> Maybe acceptance is premature at this point.
>
> --Bill Seymour

I believe your decision is the most responsible and commendable
choice at this point, and that it will benefit everybody in the
long run.
As for myself, I want to thank you for your effort to propose
a solution -- I have learned a lot about the issue since I
posted my first questions on your material; I did not realize
half of its complexity.

Regards.
Ilya Buchkin
MetaCommunications Engineering






[boost] Re: Fixed-Point Decimal Formal Review

2003-07-22 Thread Bill Seymour
Despite the several "yes" votes (thanks), I think I have to agree
with Ilya and vote "no" for my own library.  Of the problems
that have been mentioned, two, IMO, are show-stoppers:
the problem with the scale being immutable resulting in
inequality after assignment (if the scale isn't part of
the type), and the need for a floating-point decimal type
to hold intermediate results.

(Ilya, you're right that I haven't responded to the latter
issue.  It's not that I haven't read your posts, but rather
that I didn't think I had anything intelligent to say yet.
Please don't take it personally.)

Also, I got a note this morning from Raymond Mak of IBM in
Toronto who will be proposing a floating-point decimal TR
at the J11/WG14 session in Kona.  It's based on the proposed
IEEE 754 that Mike Cowlishaw mentioned in another Boost post.

So even if the current version of the library is accepted
into Boost, I'll be going back to the drawing board anyway.
Maybe acceptance is premature at this point.

--Bill Seymour



[boost] Re: Fixed-Point Decimal Formal Review

2003-07-21 Thread Fernando Cacciola
In spite of the issues still to be resolved, I vote to ACCEPT the
library, provided that suitable test suites are added (which could be
along the lines Bill proposed plus Jens's latest suggestion).

Fernando Cacciola





Re: [boost] Re: Fixed-Point Decimal Formal Review

2003-07-19 Thread Beman Dawes
At 10:00 PM 7/18/2003, Ilya Buchkin wrote:

>"Bill Seymour" (Friday, 18 July, 2003 12:43) says:
>
>> Ilya seems to be giving me mutually contradictory requirements.
>
>I have use cases/requests for TWO distinctly different needs
>that I encountered in financial applications:
>  1. storage (representation of financial data),
Internal or external? It can be an important distinction; the requirements 
are often so different that they have to be different types.

>  2. calculations.

Are the requirements that different between an internal data storage type 
and a calculation type? That would be a shame.

--Beman



[boost] Re: Fixed-Point Decimal Formal Review

2003-07-19 Thread Keith Burton
On the question of scale as part of the type:

My experience with my own libraries (i.e. several iterations) is that:
a) scale (and precision) is required as part of the type;
b) a common base for decimals which allows forwarding is required;
c) a common base for decimals which allows some read-only functions,
especially output, is desirable.

But my libraries used a format where the scale and precision were part
of the (base) data as well as part of the type.
They are not a candidate for Boost, as they are written in assembler.


On the question of the scale of intermediate results:

To make decimals useful, some built-in support for maintaining the
precision of intermediate results is vital. I used multiply and divide
functions that returned only a very-high-precision result, which was
then rounded on the final assignment. This, at least, allowed currency
conversions to be implemented with the precision mandated by some
European countries without taking special measures in the user source.


With apologies for not having time to review the decimal library.

Keith Burton






[boost] Re: Fixed-Point Decimal Formal Review

2003-07-18 Thread Ilya Buchkin
"Bill Seymour" (Friday, 18 July, 2003 12:43) says:

> Ilya seems to be giving me mutually contradictory requirements.

I have use cases/requests for TWO distinctly different needs
that I encountered in financial applications:
  1. storage (representation of financial data),
  2. calculations.
They are complementary to each other, not contradictory.
I further suggest that they be implemented using TWO
different classes.

> On the one hand, he wants objects with a footprint of no more
> than eight bytes; and he correctly points out that that's
> technically feasible if the scale is a template parameter.

This is correct.
For representation of data, it is most natural that precision/scale
is specified explicitly and bound to the type. Templates implement
this most adequately in C++.

> On the other hand, he seems to want to be able to do any
> arbitrary calculation without rounding which, in general,
> would require objects of infinite precision.

This is not correct, I have not asked for infinite precision.

I did suggest that all calculations should be done with the *maximum*
possible precision, which would guarantee exactness for ADD/SUB and,
where possible, for MUL/DIV/MOD. I also suggest that calculations
be implemented using *another*, non-template class, where the scale
would not need to be specified and would be variable (not kept on
assignment).

>  I'm afraid I'll have to
> disappoint him on the second count. I haven't finally
> decided about the first yet.

So what is your current position?
I understood from your rationale that you are trying to address the
needs of financial applications, and yet you do not plan to cover
the use cases I described. Please provide *your* detailed examples
of what you think those needs are. Let's discuss them specifically.


>  Daryle Walker and Ilya Buchkin are raising some important
> issues that probably go away if the scale is part of the type.
> I'll address that in another message; ...

Let's be specific again.

One of the issues that Daryle Walker raised is that the current
implementation is INCOMPATIBLE WITH THE STANDARD LIBRARY
(because a != b after a = b), i.e. it would produce unexpected
results when used with std::vector<>. I have not seen your
response to it.

One of the issues I raised concerns the design and semantics of
rounding. I think most of this behavior is unnatural, and the last
line (modification of an argument) is simply dangerous:

decimal const a( 1, "1.5" );  decimal const b( 1, "1.5" );
decimal a2( 1, "1.5" ), c1( 2 ), c2( 2 ), c3( 2 ), c4( 2 );
c1 = a * b;            // expected 2.25, but c1 == 2.20
c2 [round_up]= a * b;  // expected 2.25, but c2 == 2.20
c3 = a [round_up]* b;  // expected 2.25, but c3 == 2.30
c4 = a2 [round_up]* b; // expected 2.25, but c4 == 2.30, AND a2 == 2.3!!!

I will certainly stay tuned for your comments on how you
plan to address these.

--
Thanks & Best Regards.

Ilya Buchkin
MetaCommunications Engineering
mailto:[EMAIL PROTECTED]






[boost] Re: Fixed-Point Decimal Formal Review

2003-07-18 Thread Fernando Cacciola

Bill Seymour <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Paul Bristow has convinced me that I need longer, clearer names
> for the I/O manipulators.  First draft:
>
>   pure_number
>   national_currency
>   international_currency
>
>   number_stream_frac_digits
>   number_decimal_frac_digits
>
>   money_stream_frac_digits
>   money_decimal_frac_digits
>   money_locale_frac_digits
>
Those look fine to me.

>
> I'll also take the suggestion to put frac_digits() in the public
> interface and "fraction digits" in the text of the documentation.
>
fine

>
> Daryle Walker and Ilya Buchkin are raising some important issues
> that probably go away if the scale is part of the type.

I'm not sure how to decide if making the scale part of the type is a good
idea.
Do users really want that?
Suppose we apply the LSP (*1) to decimals of different scales, that is,
we substitute a decimal of one scale with a decimal of a different
scale: how much does the change affect the program behaviour?
If the answer is "not much", it would indicate that type equivalence
between decimals of different scales is important, so making the scale
part of the type would be more of a burden than a gain.

OTOH, if it is important not to mix/substitute decimals of different
scales without explicit intervention, then the scale should be part of
the type.

If the motivation is _only_ to allow for more efficient algorithms and more
compact representations, then I don't think coupling the scale with the type
is the right choice.

(*1) Liskov Substitution Principle
http://www.objectmentor.com/resources/articles/lsp.pdf


>
> Daryle suggests operations that increment and decrement by 1 ULP;
> and I think that's a good idea.  The names I like are next_value()
> and previous_value();

Good.

>  and I'll probably include versions with
> dummy int arguments to parallel the built-in postfix operators.
>
???

> Daryle also suggests opening up the internal representation
> with accessors.  How about:
>
>   int_type raw_value() const;   // value in ULPs
>   int_type unity_value() const; // representation of 1.0
>   int frac_digits() const;  // formerly scale()
>
Hmm.
Why not: significand() (or mantissa()), and one()?

Fernando Cacciola



