Re: [GENERAL] Rounding Problems?

2016-11-29 Thread Albe Laurenz
elbriga wrote:
> Thanks for the detailed answer!
> 
> Changing the function signature seems to have solved the problem:
> CREATE OR REPLACE FUNCTION ceilDecimal(num numeric) RETURNS float AS
> $BODY$
> BEGIN
>   RETURN CEIL(num * 100) / 100;
> END
> $BODY$
> LANGUAGE 'plpgsql';
> 
> 
> SELECT ceilDecimal(0.07);
>  ceildecimal
> -------------
> 0.07

Yes, because "numeric" stores the exact number, so it will be
exactly 0.07 after conversion.
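The same behaviour is easy to reproduce outside the database; here is a sketch using Python's Decimal type as a stand-in for Postgres "numeric" (the analogy is mine, not the list's code):

```python
from decimal import Decimal
import math

# Decimal is an exact base-10 type, much like Postgres "numeric":
# 0.07 is stored as exactly 0.07, not as a nearby binary fraction.
num = Decimal("0.07")
result = math.ceil(num * 100) / 100   # ceil(7.00) / 100
print(result)  # 0.07
```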

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rounding Problems?

2016-11-29 Thread elbriga
Thanks for the detailed answer!

Changing the function signature seems to have solved the problem:
CREATE OR REPLACE FUNCTION ceilDecimal(num numeric) RETURNS float AS
$BODY$
BEGIN
RETURN CEIL(num * 100) / 100;
END
$BODY$
LANGUAGE 'plpgsql';


SELECT ceilDecimal(0.07);
 ceildecimal 
-------------
0.07




--
View this message in context: 
http://postgresql.nabble.com/Rounding-Problems-tp5932388p5932397.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Rounding Problems?

2016-11-29 Thread Albe Laurenz
elbriga wrote:
> Hi,
>   I have this pl function:
> CREATE OR REPLACE FUNCTION ceilDecimal(num float) RETURNS float AS
> $BODY$
> BEGIN
>   RETURN CEIL(num * 100) / 100;
> END
> $BODY$
> LANGUAGE 'plpgsql';
> 
>   It is supposed to do a "decimal ceil" for 2 decimal places.
>   But when I do "SELECT ceilDecimal(0.07)"
> It will return
>  ceildecimal
> -------------
> 0.08
> 
>   For other numbers the result is as expected:
> SELECT ceilDecimal(0.17);
>  ceildecimal
> -------------
> 0.17
> 
> WHY? rsrsr

Let's do your calculation step by step.

100 is (implicitly) an integer value.

When a "real" and an "integer" are multiplied, PostgreSQL casts them to
"double precision" before the operation. That would be necessary anyway,
because "ceil()" only operates on "double precision" (or "numeric").

test=> SELECT CAST (REAL '0.07' AS double precision);
       float8
--------------------
 0.0700000002980232
(1 row)

The weird digits appear because 0.07 can never be represented exactly
as a base-2 floating point number.
They become visible because "double precision" has greater precision than "real".
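The effect can be sketched outside psql; the struct round-trip below is a stand-in for storing the value in a float4 ("real") column and reading it back as a double (Python used here for illustration):

```python
import struct

# Store 0.07 as a 4-byte IEEE float ("real"), then widen it back to a
# Python double ("double precision"); the extra digits become visible.
as_real = struct.unpack("f", struct.pack("f", 0.07))[0]
print(as_real)  # 0.07000000029802322
```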

test=> SELECT REAL '0.07' * 100;
     ?column?
------------------
 7.00000002980232
(1 row)

test=> SELECT ceil(REAL '0.07' * 100);
 ceil
------
    8
(1 row)

The value is rounded up correctly, because it is greater than 7.
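A sketch of the same arithmetic, with Python doubles standing in for the Postgres computation:

```python
import math
import struct

# The float4 value 0.07, widened to a double, sits slightly above 0.07,
# so multiplying by 100 lands slightly above 7, and ceil() rounds up.
as_real = struct.unpack("f", struct.pack("f", 0.07))[0]
product = as_real * 100
print(product > 7)         # True
print(math.ceil(product))  # 8
```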


For some "real" values, the representation will be slightly less
than the correct value:

test=> SELECT CAST (REAL '0.47' AS double precision);
       float8
-------------------
 0.469999998807907
(1 row)

For such values, your function will work as you expect.

You could work around the problem by subtracting a small delta
from the value after converting it to "double precision".
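A minimal sketch of that delta workaround (the epsilon of 1e-6 is my assumption; it must exceed float4's representation error at this scale, roughly 3e-8 here, without swallowing real differences):

```python
import math
import struct

def ceil_decimal(num: float, eps: float = 1e-6) -> float:
    # Subtract a small delta before the ceiling so that values sitting
    # just above an integer (due to float4 error) are not rounded up.
    return math.ceil(num * 100 - eps) / 100

as_real = struct.unpack("f", struct.pack("f", 0.07))[0]  # float4 0.07
print(ceil_decimal(as_real))  # 0.07
print(ceil_decimal(0.071))    # 0.08
```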

Yours,
Laurenz Albe



[GENERAL] Rounding Problems?

2016-11-29 Thread elbriga
Hi,
  I have this pl function:
CREATE OR REPLACE FUNCTION ceilDecimal(num float) RETURNS float AS
$BODY$
BEGIN
RETURN CEIL(num * 100) / 100;
END
$BODY$
LANGUAGE 'plpgsql';

  It is supposed to do a "decimal ceil" for 2 decimal places.
  But when I do "SELECT ceilDecimal(0.07)"
It will return
 ceildecimal 
-------------
0.08

  For other numbers the result is as expected:
SELECT ceilDecimal(0.17);
 ceildecimal 
-------------
0.17

WHY? rsrsr

Thanks in advance!



--
View this message in context: 
http://postgresql.nabble.com/Rounding-Problems-tp5932388.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Rounding problems

2009-05-03 Thread Paolo Saudin
Hi,

 

I have a problem with a query which simply aggregates values. In the sample
below I have two values, 1.3 and 1.4. Rounding their average to one
decimal place should give 1.4.

The first query, with cast(tables_seb.tbl_arvier_chamencon.id_1 AS numeric) AS value,
gives the expected result, while the second one, with
tables_seb.tbl_arvier_chamencon.id_1 AS value, gives 1.3.

What could be the reason?

 

 

-- data
fulldate timestamp; tables_seb.tbl_arvier_chamencon.id_1 - real
2009-03-29 00:00:00;   1.3
2009-03-29 00:30:00;   1.4

-- Good query
SELECT date_trunc('hour', data) AS data,
       round(cast(avg(value) AS numeric), 1) AS value
FROM (
    SELECT _master_30.fulldate AS data,
           cast(tables_seb.tbl_arvier_chamencon.id_1 AS numeric) AS value
           --tables_seb.tbl_arvier_chamencon.id_1 AS value
    FROM _master_30
    LEFT JOIN tables_seb.tbl_arvier_chamencon
           ON _master_30.fulldate = tables_seb.tbl_arvier_chamencon.fulldate
    WHERE _master_30.fulldate BETWEEN '2009-03-29 00:00:00' AND '2009-03-29 00:59:59'
    ORDER BY data
) foo
GROUP BY 1 ORDER BY 1;
-- value = 1.4 OK

 

 

-- Wrong query
SELECT date_trunc('hour', data) AS data,
       round(cast(avg(value) AS numeric), 1) AS value
FROM (
    SELECT _master_30.fulldate AS data,
           --cast(tables_seb.tbl_arvier_chamencon.id_1 AS numeric) AS value
           tables_seb.tbl_arvier_chamencon.id_1 AS value
    FROM _master_30
    LEFT JOIN tables_seb.tbl_arvier_chamencon
           ON _master_30.fulldate = tables_seb.tbl_arvier_chamencon.fulldate
    WHERE _master_30.fulldate BETWEEN '2009-03-29 00:00:00' AND '2009-03-29 00:59:59'
    ORDER BY data
) foo
GROUP BY 1 ORDER BY 1;
-- value = 1.3 NOT OK

 

-- test
select round(cast( (1.3 + 1.4)::real / 2 as numeric), 1);
-- value = 1.4 OK
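The difference between the two queries can be sketched outside Postgres; Python's Decimal stands in for numeric, and a struct round-trip for a real column (my analogy, not part of the original posts):

```python
from decimal import Decimal
import struct

def f32(x: float) -> float:
    # round a Python double to float4 ("real") precision
    return struct.unpack("f", struct.pack("f", x))[0]

# First query: each value is cast to numeric (exact) before averaging.
dec_avg = (Decimal("1.3") + Decimal("1.4")) / 2   # exactly 1.35
print(round(dec_avg, 1))  # 1.4

# Second query: the sum is computed in float4, which lands just under
# 2.7, so the average is just under 1.35 and rounds down.
flt_avg = f32(f32(1.3) + f32(1.4)) / 2
print(round(flt_avg, 1))  # 1.3
```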

 

 

Using PostgreSQL 8.3.7 on Windows Server 2008

 

 

Thanks in advance,

Paolo Saudin

 



Re: [GENERAL] Rounding problems

2009-05-03 Thread Justin






Paolo Saudin wrote:
> Hi,
>
> I have a problem with a query which simply aggregates values.
> In the sample below I have two values, 1.3 and 1.4. Rounding their
> average to one decimal place should give 1.4.
>
> The first query with cast(tables_seb.tbl_arvier_chamencon.id_1 AS numeric)
> AS value gives the expected result, while the second one with
> tables_seb.tbl_arvier_chamencon.id_1 AS value gives 1.3.
>
> What could be the reason?

My first thought is: what's with all the casting?

Casting is most likely the cause of your problem.  What
is tbl_arvier_chamencon.id_1's data type?

I'm guessing it's something other than numeric. All other floating-point
data types will have problems caused by binary floating-point arithmetic.

The numeric data type uses different functions to do its math, for the
stated purpose of being exact, at the cost of being a lot slower.

In one query casting is done prior to avg(), yet in the other casting is
done after avg(). This causes Postgres to use different functions
to calculate the average, giving an unexpected result.




Re: [GENERAL] Rounding problems

2009-05-03 Thread Tom Lane
Paolo Saudin pa...@ecometer.it writes:
 I have a problem with a query which simply aggregates values. In the sample
 below I have two values, 1.3 and 1.4. Rounding their average to one
 decimal place should give 1.4.

You seem way overoptimistic about float4 values being exact.  They are
not.  The actual computation being done here is more like

regression=# select (1.3::real + 1.4::real) / 2 ;
     ?column?
------------------
 1.34999990463257
(1 row)
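The computation can be reproduced step by step; this Python sketch uses a struct round-trip as a stand-in for float4 arithmetic (an illustration, not Tom's code):

```python
import struct

def f32(x: float) -> float:
    # round a Python double to float4 ("real") precision
    return struct.unpack("f", struct.pack("f", x))[0]

# (1.3::real + 1.4::real) / 2 -- the addition happens at float4 precision
avg = f32(f32(1.3) + f32(1.4)) / 2
print(avg)  # 1.3499999046325684
```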

If you want an exact sum with no roundoff error you should be storing
all your values as numeric (and taking the consequent speed and space
hit :-().

regards, tom lane



R: [GENERAL] Rounding problems

2009-05-03 Thread Paolo Saudin
Paolo Saudin wrote:
> Hi,
>
> I have a problem with a query which simply aggregates values. In the sample
> below I have two values, 1.3 and 1.4. Rounding their average to one
> decimal place should give 1.4.
>
> The first query with cast(tables_seb.tbl_arvier_chamencon.id_1 AS numeric)
> AS value gives the expected result, while the second one with
> tables_seb.tbl_arvier_chamencon.id_1 AS value gives 1.3.
>
> What could be the reason?

Justin wrote:
> My first thought is: what's with all the casting?
>
> Casting is most likely the cause of your problem.  What is
> tbl_arvier_chamencon.id_1's data type?
>
> I'm guessing it's something other than numeric. All other floating-point
> data types will have problems caused by binary floating-point arithmetic.
>
> The numeric data type uses different functions to do its math, for the
> stated purpose of being exact, at the cost of being a lot slower.
>
> In one query casting is done prior to avg(), yet in the other casting is
> done after avg(). This causes Postgres to use different functions to
> calculate the average, giving an unexpected result.

 

 

Here is the table layout 

 

CREATE TABLE tables_seb.tbl_arvier_chamencon
(
  fulldate timestamp without time zone NOT NULL
      DEFAULT '2000-01-01 00:00:00'::timestamp without time zone,
  id_1 real,
  id_1_cod smallint,
  id_2 real,
  id_2_cod smallint,
  id_3 real,
  id_3_cod smallint,
  id_4 real,
  id_4_cod smallint,
  CONSTRAINT tbl_arvier_chamencon_pkey PRIMARY KEY (fulldate)
) WITH (OIDS=FALSE);

 

Thanks,

Paolo Saudin

 



R: [GENERAL] Rounding problems

2009-05-03 Thread Paolo Saudin
Paolo Saudin pa...@ecometer.it writes:
 I have a problem with a query which simply aggregates values. In the sample
 below I have two values, 1.3 and 1.4. Rounding their average to one
 decimal place should give 1.4.

You seem way overoptimistic about float4 values being exact.  They are
not.  The actual computation being done here is more like

regression=# select (1.3::real + 1.4::real) / 2 ;
     ?column?
------------------
 1.34999990463257
(1 row)

If you want an exact sum with no roundoff error you should be storing
all your values as numeric (and taking the consequent speed and space
hit :-().

   regards, tom lane



I converted all the fields to numeric instead of real, and now both
queries return the same result!
Now I need to test performance ...

Thank you very much !!
Paolo Saudin




Re: [GENERAL] rounding problems

2008-05-20 Thread glene77is
On May 14, 3:27 pm, [EMAIL PROTECTED] (Sam Mason) wrote:
> On Wed, May 14, 2008 at 02:08:47PM -0400, Justin wrote:
>> Sam Mason wrote:
>>> What does foxpro use for storing numbers? or is it just that you never
>>> pushed it hard enough for the abstractions to show through.
>
>> I know i pushed it.  Foxpro for the most part has only 4 basic data types:
>> Numeric (similar to Postgresql numeric), Boolean, Date, Text aka
>> (string).  The foxpro tables supported far more data types but whenever
>> it was dumped to a variable it acted like one of the 4.
>
> I really meant how much did you check the results, or did you accept
> that they were correct?
>
>> Foxpro did not suffer floating point math errors.  I normally used 8 to
>> 10 points precision.  Foxpro was limited to 15 points of precision
>> period.  No more and no less, once you hit that was it.
>
> 15 places seems very similar to what a 64bit IEEE floating point number
> will give you, i.e. a double in C/C++.
>
>> My problem is we calculate resistance of parts in a Foxpro app that we
>> want to move because we want to bring all the custom apps into one
>> framework and single database.
>
>> Take this calculation (0.05/3 * 1.0025) which is used to calculate
>> parts resistance and Tolerance. (its Ohms Law)  The value returned from
>> C++ = .016708 which is wrong
>> it should be .0167418.  We just shrank the tolerance on the part we
>> make
>
> Why are you so sure about the FoxPro result?  I've just checked a few
> calculators and get results consistent with your C++ version.
>
>   Justin C: 0.016708
>   J FoxPro: 0.0167418
>       My C: 0.01670833
>      bc[1]: 0.01670832
>      PG[2]: 0.016708336675
>  Google[3]: 0.016708 (actually gives 1.6708e-6)
>
> Both bc and Postgres use their own code (i.e. not the CPU's FPU) to do
> the math, and as they all agree I'm thinking FoxPro is incorrect!  Next
> I tried doing it accurately (in Haskell if it makes any difference) and
> get an answer of 401/24000 out, which would agree with everything
> but FoxPro.  If I calculate the ratio back out for FoxPro I get
> 401/239520242 which is a little way out.
>
>> The documentation from MS says 15 points of precision but the results say
>> otherwise.
>
> The docs for what? FoxPro or their C compiler?
>
> If you mean FoxPro, I think this is another case of MS screwing up.
>
>> I'm glad you and others are taking the time to explain to me
>> the odd results before i get into redoing that application.
>
> Welcome to the PG community, lots of people to get interested in lots of
> things!
>
>> Why oh why did MS kill Foxpro. :'(  I understood it, knew its quirks
>> and it worked very well with Postgresql
>
> Are you sure you want to stay with it if its answers are wrong?
>
>   Sam

*
This is fun, at 0400 AM.  I enjoy reading Experts having serious fun!

VFP 6.0, using my defaults
? (0.05/3* 1.00250)
displays 0.0167083000

SET DECIMALS TO 15
? ((0.05/3)* 1.0025)
displays 0.0167083

and a frivolous example:
SET DECIMALS TO 18
? ((0.05/3.)* 1.0025)
displays 0.0167083000

Anybody tried to reckon this math
the way we used to do it with a Slide-Rule ???
(In VFP of course)

glene77is



Re: [GENERAL] rounding problems

2008-05-20 Thread Justin



glene77is wrote:

On May 14, 3:27 pm, [EMAIL PROTECTED] (Sam Mason) wrote:
  

On Wed, May 14, 2008 at 02:08:47PM -0400, Justin wrote:


Sam Mason wrote:
  

What doesfoxprouse for storing numbers? or is it just that you never
pushed it hard enough for the abstractions to show through.


I know i pushed it.  Foxpro for the most has only  4 basic data types
Numeric (similar to Posgresql numeric), Boolean, Date, Text aka
(string)  Thefoxprotables supported far more data types but when every
it was dumped to variable it acted like one of the 4.
  

I really meant how much did you check the results, or did you accept
that they were correct?



Foxprodid not suffer floating point math errors.  I normally used 8 to
10 points precision.  Foxprowas limited to 15 points of precision
period.   No more and no less, once you hit that was it.
  

15 places seems very similar to what a 64bit IEEE floating point number
will give you, i.e. a double in C/C++.



My problem is we calculate resistance of parts in aFoxproapp that we
want to move because we want to bring all the custom apps into one
framework and single database.
  
Take this calculation  (0.05/3* 1.0025) which is used to calculate

parts resistance and Tolerance. (its Ohms Law)  The value returned  from
C++ = .016708 which is wrong
it should be .0167418.  We just shrank the tolerance on the part we
make
  

Why are you so sure about theFoxProresult?  I've just checked a few
calculators and get results consistent with your C++ version.

  Justin C: 0.016708
  JFoxPro: 0.0167418
  My C: 0.01670833
 bc[1]: 0.01670832
 PG[2]: 0.016708336675
 Google[3]: 0.016708 (actually gives 1.6708e-6)

Both bc and Postgres use their own code (i.e. not the CPU's FPU) to do
the math, and as they all agree I'm thinkingFoxProis incorrect!  Next
I tried doing it accurately (in Haskell if it makes any difference) and
get an answer of 401/24000 out, which would agree with everything
butFoxPro.  If I calculate the ratio back out forFoxProI get
401/239520242 which is a little way out.



The Documentation from MS says 15 points of precision but the result say
otherwise.
  

The docs for what?FoxProor their C compiler?

If you meanFoxPro, I think this is another case of MS screwing up.



I'm glad You and others are taking the time to explain to me
the odd results before i get into redoing that application.
  

Welcome to the PG community, lots of people to get interested in lots of
things!



Why oh Why did MS killFoxpro. :'(   I understood it, knew its quirks
and it worked very well with Postgresql
  

Are you sure you want to stay with it if its answers are wrong?

  Sam



*
This is fun, at 0400 AM.  I enjoy reading  Experts having serious fun!

VFP 6.0, using my defaults
? (0.05/3* 1.00250)
displays  0l.0167083000

SET DECIMALS TO 15
? ((0.05/3)* 1.0025)
displays 0.0167083

and a frivolous example:
SET DECIMALS TO 18
? ((0.05/3.)* 1.0025)
displays 0.0167083000
  
Foxpro always stops at 15 decimals points,  Even though some of the 
documentation says 20  and 22 points of precision depending on the 
version.  I have versions 5 to 9

Anybody tried to reckon this math
the way we used to do it with a Slide-Rule ???
(In VFP of course)
  
A slide what??.  I have never touched one or seen a slide rule in real 
life, just pretty pictures  :-)

glene77is
  







Re: [GENERAL] rounding problems

2008-05-14 Thread Sam Mason
On Tue, May 13, 2008 at 02:36:18PM -0400, Justin wrote:
 Double holds 15 places which is the highest value of precision it can
 maintain before rounding occurs.

 Is it limitless? No, but what is?

 Practically speaking, taking a value 0.000,000,000,000,001 aka
 1 trillionth of anything,

But remember that if you add this value onto a large number and then
take off the large number the result will be zero.

  (0.000,000,000,01 + 1,000,000) - 1,000,000  ==  0
  0.000,000,000,01 + (1,000,000 - 1,000,000)  ==  0.000,000,000,01

In general, operations on floating point numbers will increase their
errors.
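A sketch of that absorption effect in doubles (Python used for illustration; the specific constants are mine):

```python
# Near 1,000,000 a double's spacing (ulp) is about 1.2e-10, so adding
# 1e-11 to it is completely absorbed; the order of operations decides
# whether the small value survives.
tiny = 1e-11
big = 1e6
print((tiny + big) - big)  # 0.0
print(tiny + (big - big))  # 1e-11
```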

 i view the problem solved for 98% of problems.

Floating point math is good for most problems, hence why most languages
expose the abstraction.


  Sam



Re: [GENERAL] rounding problems

2008-05-14 Thread Decibel!

On May 12, 2008, at 10:42 PM, Craig Ringer wrote:

Personally I'd be tempted to use a `double precision' (float8) for
things like materials consumption.



Or you could just use an un-bounded numeric...
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] rounding problems

2008-05-14 Thread Decibel!

On May 13, 2008, at 1:36 PM, Justin wrote:

Is is limit less no, but what is?



numeric is limitless, unless you specifically bound it. Or you run  
out of space...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] rounding problems

2008-05-14 Thread Justin

I have forgotten how much I hate C++.

It's not doing what you say it would, but it did do other odd ball
things.  I miss my foxpro :-(

Plus it's not holding 15 precision points:

#include <stdio.h>
#include <cmath>

int main()
{
    double a = 0.1;
    //double b = 1000;
    double c = 1;
    double d;  //= a * b;

    for (int i = 1; i < 10; i++)
    {
        d = pow(a, i) + c;
        printf("%.10f\n", d);

        d = d - c;
        printf("%.10f\n", d);
    }

    return 0;
}

Sam Mason wrote:
> On Tue, May 13, 2008 at 02:36:18PM -0400, Justin wrote:
>> Double holds 15 places which is the highest value of precision it can
>> maintain before rounding occurs.
>>
>> Is it limitless? No, but what is?
>>
>> Practically speaking, taking a value 0.000,000,000,000,001 aka
>> 1 trillionth of anything,
>
> But remember that if you add this value onto a large number and then
> take off the large number the result will be zero.
>
>   (0.000,000,000,01 + 1,000,000) - 1,000,000  ==  0
>   0.000,000,000,01 + (1,000,000 - 1,000,000)  ==  0.000,000,000,01
>
> In general, operations on floating point numbers will increase their
> errors.
>
>> i view the problem solved for 98% of problems.
>
> Floating point math is good for most problems, hence why most languages
> expose the abstraction.
>
>   Sam


Re: [GENERAL] rounding problems

2008-05-14 Thread Sam Mason
On Wed, May 14, 2008 at 11:47:52AM -0400, Justin wrote:
 I have forgotten how much i hate C++

What we're talking about doesn't have much to do with C++, it's floating
point maths in general.

 Its not doing what you say it would but it did do other odd ball 
 things.  I miss my foxpro :-(. 

What does foxpro use for storing numbers? or is it just that you never
pushed it hard enough for the abstractions to show through.

 Plus its not holding 15 precision points

after changing the output to be:

  printf("%.10f %.10f\n", d, d-c);

I get:

  1.099940 0.099940
  1.010054 0.010054
  1.001020 0.001020
  1.000117 0.000117
  1.099987 0.099987
  1.009984 0.009984
  1.001043 0.001043
  1.000149 0.000149
  1.00 0.00

Which looks reasonable.  Remember that floating point numbers store
their state in base two, not base ten.  All of those numbers look good
to 15 decimal digits.
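The point about base-2 storage and roughly 15 good decimal digits can be seen by printing more digits than the %.10f format shows (Python doubles use the same IEEE format as C's double):

```python
# Ten decimals hide the representation error of 0.1; twenty reveal it.
print(f"{0.1:.10f}")  # 0.1000000000
print(f"{0.1:.20f}")  # 0.10000000000000000555
```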


  Sam



Re: [GENERAL] rounding problems

2008-05-14 Thread Justin



Sam Mason wrote:
> On Wed, May 14, 2008 at 11:47:52AM -0400, Justin wrote:
>> I have forgotten how much i hate C++
>
> What we're talking about doesn't have much to do with C++, it's floating
> point maths in general.
>
>> Its not doing what you say it would but it did do other odd ball
>> things.  I miss my foxpro :-(
>
> What does foxpro use for storing numbers? or is it just that you never
> pushed it hard enough for the abstractions to show through.

I know i pushed it.  Foxpro for the most part has only 4 basic data types:
Numeric (similar to Postgresql numeric), Boolean, Date, Text aka
(string).  The foxpro tables supported far more data types but whenever
it was dumped to a variable it acted like one of the 4.

Foxpro did not suffer floating point math errors.  I normally used 8 to
10 points precision.  Foxpro was limited to 15 points of precision
period.  No more and no less, once you hit that was it.

>> Plus its not holding 15 precision points
>
> after changing the output to be:
>
>   printf("%.10f %.10f\n", d, d-c);
>
> I get:
>
>   1.099940 0.099940
>   1.010054 0.010054
>   1.001020 0.001020
>   1.000117 0.000117
>   1.099987 0.099987
>   1.009984 0.009984
>   1.001043 0.001043
>   1.000149 0.000149
>   1.00 0.00
>
> Which looks reasonable.  Remember that floating point numbers store
> their state in base two, not base ten.  All of those numbers look good
> to 15 decimal digits.

My problem is we calculate resistance of parts in a Foxpro app that we
want to move because we want to bring all the custom apps into one
framework and single database.

Take this calculation (0.05/3 * 1.0025) which is used to calculate
parts resistance and tolerance (it's Ohm's Law).  The value returned from
C++ = .016708 which is wrong;
it should be .0167418.  We just shrank the tolerance on the part we
make.

Take the other side: (0.05/3 * .9975) = .016625 from C++; this is
way wrong and the tolerance just grew: .0166583.  Guess what, this
could result in shipping a $12,000+ part out to a customer wrong.

The documentation from MS says 15 points of precision but the results say
otherwise.  I'm glad you and others are taking the time to explain to me
the odd results before i get into redoing that application.

Why oh why did MS kill Foxpro. :'(  I understood it, knew its quirks
and it worked very well with Postgresql


Re: [GENERAL] rounding problems

2008-05-14 Thread Sam Mason
On Wed, May 14, 2008 at 02:08:47PM -0400, Justin wrote:
 Sam Mason wrote:
 What does foxpro use for storing numbers? or is it just that you never
 pushed it hard enough for the abstractions to show through.
 
 I know i pushed it.  Foxpro  for the most has only  4 basic data types   
 Numeric (similar to Posgresql numeric), Boolean, Date, Text aka 
 (string)  The foxpro tables supported far more data types but when every 
 it was dumped to variable it acted like one of the 4. 

I really meant how much did you check the results, or did you accept
that they were correct?

 Foxpro did not suffer floating point math errors.  I normally used 8 to 
 10 points precision.  Foxpro was limited to 15 points of precision  
 period.   No more and no less, once you hit that was it.

15 places seems very similar to what a 64bit IEEE floating point number
will give you, i.e. a double in C/C++.

 My problem is we calculate resistance of parts in a Foxpro app that we 
 want to move because we want to bring all the custom apps into one 
 framework and single database.
 
 Take this calculation  (0.05/3* 1.0025) which is used to calculate 
 parts resistance and Tolerance. (its Ohms Law)  The value returned  from 
 C++ = .016708 which is wrong
 it should be .0167418.  We just shrank the tolerance on the part we 
 make

Why are you so sure about the FoxPro result?  I've just checked a few
calculators and get results consistent with your C++ version.

  Justin C: 0.016708
  J FoxPro: 0.0167418
  My C: 0.01670833
 bc[1]: 0.01670832
 PG[2]: 0.016708336675
 Google[3]: 0.016708 (actually gives 1.6708e-6)

Both bc and Postgres use their own code (i.e. not the CPU's FPU) to do
the math, and as they all agree I'm thinking FoxPro is incorrect!  Next
I tried doing it accurately (in Haskell if it makes any difference) and
get an answer of 401/24000 out, which would agree with everything
but FoxPro.  If I calculate the ratio back out for FoxPro I get
401/239520242 which is a little way out.

 The Documentation from MS says 15 points of precision but the result say 
 otherwise.

The docs for what? FoxPro or their C compiler?

If you mean FoxPro, I think this is another case of MS screwing up.

 I'm glad You and others are taking the time to explain to me 
 the odd results before i get into redoing that application.

Welcome to the PG community, lots of people to get interested in lots of
things!

 Why oh Why did MS kill Foxpro. :'(   I understood it, knew its quirks 
 and it worked very well with Postgresql

Are you sure you want to stay with it if its answers are wrong?


  Sam

 [1] http://www.gnu.org/software/bc/manual/html_mono/bc.html
 [2] http://doxygen.postgresql.org/backend_2utils_2adt_2numeric_8c-source.html
 [3] http://www.google.com/search?q=0.05/3*1.0025



Re: [GENERAL] rounding problems

2008-05-14 Thread Andy Anderson

On Wed, May 14, 2008 at 02:08:47PM -0400, Justin wrote:
>> My problem is we calculate resistance of parts in a Foxpro app that we
>> want to move because we want to bring all the custom apps into one
>> framework and single database.
>>
>> Take this calculation (0.05/3 * 1.0025) which is used to calculate
>> parts resistance and Tolerance. (its Ohms Law)  The value returned
>> from C++ = .016708 which is wrong it should be .0167418.
>
> Why are you so sure about the FoxPro result?  I've just checked a few
> calculators and get results consistent with your C++ version.
>
>   Justin C: 0.016708
>   J FoxPro: 0.0167418
>       My C: 0.01670833
>      bc[1]: 0.01670832
>      PG[2]: 0.016708336675
>  Google[3]: 0.016708 (actually gives 1.6708e-6)
>
> Both bc and Postgres use their own code (i.e. not the CPU's FPU) to do
> the math, and as they all agree I'm thinking FoxPro is incorrect!  Next
> I tried doing it accurately (in Haskell if it makes any difference) and
> get an answer of 401/24000 out, which would agree with everything
> but FoxPro.  If I calculate the ratio back out for FoxPro I get
> 401/239520242 which is a little way out.

I'll add my Casio scientific calculator to the list, which also gives
the non-FoxPro result.


We can also write this exactly as:

   0.05 * 1.0025 / 3
 = (5 * 10025 / 3) * 10^-6
 = (50,125 / 3) * 10^-6
 = (16,708 + 1/3) * 10^-6

which you can verify in your head, so the correct answer is clear.
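Exact rational arithmetic confirms both this expansion and Sam's Haskell result of 401/24000; a sketch with Python's Fraction (my choice of tool):

```python
from fractions import Fraction

# 0.05 * 1.0025 / 3 computed exactly, with each decimal as a fraction.
exact = Fraction(5, 100) * Fraction(10025, 10000) / 3
print(exact)         # 401/24000
print(float(exact))  # approximately 0.0167083333
```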

Is it possible that the FoxPro calculation isn't what you describe?  
If you use the tolerance 1.0045, a single digit off, then you get  
0.0167417, very close to your result.


-- Andy




Re: [GENERAL] rounding problems

2008-05-14 Thread Justin



Sam Mason wrote:
> On Wed, May 14, 2008 at 02:08:47PM -0400, Justin wrote:
>> Sam Mason wrote:
>>> What does foxpro use for storing numbers? or is it just that you never
>>> pushed it hard enough for the abstractions to show through.
>
>> I know i pushed it.  Foxpro for the most part has only 4 basic data types:
>> Numeric (similar to Postgresql numeric), Boolean, Date, Text aka
>> (string).  The foxpro tables supported far more data types but whenever
>> it was dumped to a variable it acted like one of the 4.
>
> I really meant how much did you check the results, or did you accept
> that they were correct?
>
>> Foxpro did not suffer floating point math errors.  I normally used 8 to
>> 10 points precision.  Foxpro was limited to 15 points of precision
>> period.  No more and no less, once you hit that was it.
>
> 15 places seems very similar to what a 64bit IEEE floating point number
> will give you, i.e. a double in C/C++.
>
>> My problem is we calculate resistance of parts in a Foxpro app that we
>> want to move because we want to bring all the custom apps into one
>> framework and single database.
>>
>> Take this calculation (0.05/3 * 1.0025) which is used to calculate
>> parts resistance and tolerance (it's Ohm's Law).  The value returned from
>> C++ = .016708 which is wrong
>> it should be .0167418.  We just shrank the tolerance on the part we
>> make
>
> Why are you so sure about the FoxPro result?  I've just checked a few
> calculators and get results consistent with your C++ version.
>
>   Justin C: 0.016708
>   J FoxPro: 0.0167418
This .0167418 came off my TI-89 calculator; going back I noticed that I
accidentally rounded it to .0167, which gives a bad result.

So what I typed in after that point is wrong.  OOPS.

But losing the 3 will put it out of tolerance, since it's the last
significant digit that's displayed on the measurement devices.  So
if the 3 becomes a 4 you're out of tolerance.

>       My C: 0.01670833
>      bc[1]: 0.01670832
>      PG[2]: 0.016708336675
>  Google[3]: 0.016708 (actually gives 1.6708e-6)

Foxpro agrees with what you have: 0.016708.
The code looks like this:

SET DECIMALS TO 15
? ((0.05/3)* 1.0025)

When I wrote the application like 10 years ago I spent a lot of time making
sure the numbers were correct, even doing some by hand.

If I'd gotten it wrong, there are a lot of national labs, universities, and big
companies that are generating a lot of bad results in their QC departments.

> Both bc and Postgres use their own code (i.e. not the CPU's FPU) to do
> the math, and as they all agree I'm thinking FoxPro is incorrect!

Here is the foxpro documentation:

  Integers or decimal numbers
  For example, the quantity of items ordered
  8 bytes in memory; 1 to 20 bytes in table
  - .99E+19 to .99E+20



> Next
> I tried doing it accurately (in Haskell if it makes any difference) and
> get an answer of 401/24000 out, which would agree with everything
> but FoxPro.  If I calculate the ratio back out for FoxPro I get
> 401/239520242 which is a little way out.
>
>> The documentation from MS says 15 points of precision but the results say
>> otherwise.
>
> The docs for what? FoxPro or their C compiler?

From the MS document, here is the copied text:

*Microsoft Specific ---*

The double type contains 64 bits: 1 for sign, 11 for the exponent, and
52 for the mantissa. Its range is +/- 1.7E308 with at least 15 digits of
precision.


*END Microsoft Specific*
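The "at least 15 digits" figure follows directly from the significand width (a quick check, not part of the quoted documentation):

```python
import math

# 52 stored bits plus the implicit leading 1 give a 53-bit significand;
# 53 * log10(2) ~= 15.95 decimal digits, hence "at least 15 digits".
print(53 * math.log10(2))
```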



If you mean FoxPro, I think this is another case of MS screwing up.
  
FoxPro normally did not suffer from other MS screw-ups.
  
I'm glad you and others are taking the time to explain the odd results 
to me before I get into redoing that application.



Welcome to the PG community, lots of people to get interested in lots of
things!

  
Why oh why did MS kill FoxPro? :'( I understood it, knew its quirks, 
and it worked very well with PostgreSQL.



Are you sure you want to stay with it if its answers are wrong?


  Sam

 [1] http://www.gnu.org/software/bc/manual/html_mono/bc.html
 [2] http://doxygen.postgresql.org/backend_2utils_2adt_2numeric_8c-source.html
 [3] http://www.google.com/search?q=0.05/3*1.0025

  


Re: [GENERAL] rounding problems

2008-05-14 Thread Justin



Andy Anderson wrote:

On Wed, May 14, 2008 at 02:08:47PM -0400, Justin wrote:

My problem is we calculate resistance of parts in a Foxpro app that we
want to move because we want to bring all the custom apps into one
framework and single database.

Take this calculation  (0.05/3* 1.0025) which is used to calculate
parts resistance and Tolerance. (its Ohms Law)  The value returned  
from

C++ = .016708 which is wrong it should be .0167418.


Why are you so sure about the FoxPro result?  I've just checked a few
calculators and get results consistent with your C++ version.

  Justin C: 0.016708
  J FoxPro: 0.0167418
  My C: 0.01670833
 bc[1]: 0.01670832
 PG[2]: 0.016708336675
 Google[3]: 0.016708 (actually gives 1.6708e-6)

Both bc and Postgres use their own code (i.e. not the CPU's FPU) to do
the math, and as they all agree I'm thinking FoxPro is incorrect!  Next
I tried doing it accurately (in Haskell if it makes any difference) and
get an answer of 401/24000 out, which would agree with everything
but FoxPro.  If I calculate the ratio back out for FoxPro I get
401/239520242 which is a little way out.


I'll add my Casio scientific calculator to the list, which also gives 
the non-FoxPro result.


We can also write this exactly as:

(0.05 / 3) * 1.0025
= (0.05 * 1.0025) / 3
= (5 * 10025 / 3) * 10^-6
= (50,125 / 3) * 10^-6
= (16,708 + 1/3) * 10^-6
= 0.0167083...

which you can verify in your head, so the correct answer is clear.

Is it possible that the FoxPro calculation isn't what you describe? If 
you use the tolerance 1.0045, a single digit off, then you get 
0.0167417, very close to your result.
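The typo hypothesis is easy to check numerically (a sketch, not from the thread; the 1.0045 figure is Andy's guess):

```python
from fractions import Fraction

# The intended expression, tolerance 1.0025, computed exactly
intended = Fraction(5, 100) / 3 * Fraction(10025, 10000)
print(intended == Fraction(401, 24000))   # True

# Hypothesized typo: tolerance 1.0045 instead of 1.0025
guessed = Fraction(5, 100) / 3 * Fraction(10045, 10000)
print(float(guessed))   # ~0.0167417, very close to FoxPro's 0.0167418
```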


-- Andy


I had a typo in my calculator and did not catch it. Oops.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] rounding problems

2008-05-14 Thread Andy Anderson

Sam Mason wrote:


If you mean FoxPro, I think this is another case of MS screwing up.

On May 14, 2008, at 4:08 PM, Justin wrote:

Foxpro normally did not suffer form other MS screw ups.



That's because MS bought it from a third-party developer.
(And so, of course, they couldn't allow that to stand, and had to  
develop their own product, Access. What a hunk of junk that was for  
the first couple of years.) :-(

-- Andy



Re: [GENERAL] rounding problems

2008-05-14 Thread Justin



Andy Anderson wrote:

Sam Mason wrote:

If you mean FoxPro, I think this is another case of MS screwing up.

On May 14, 2008, at 4:08 PM, Justin wrote:
Foxpro normally did not suffer form other MS screw ups.  

That's because MS bought it from a third-party developer.
(And so, of course, they couldn't allow that to stand, and had to develop their 
own product, Access. What a hunk of junk that was for the first couple of 
years. :-(
-- Andy
  
I loved FoxPro; it's the best Xbase language. MS killed it because they 
did not want to move it to 64-bit, which would have made all the 
limitations it suffered from due to 32-bit integers go away.

What annoys me is that everyone told me how crappy Xbase languages 
were/are, but today most of the popular programming languages are just 
now getting tools that we Xbase developers have had forever.


Don't get me started on the Access Jet engine, blue or red. They both 
suck and are prone to failure. I still have a hard time believing MS uses 
Jet Blue for Exchange and AD. I have spent a lot of time recovering 
from corrupt databases, and the way MS shoehorned Exchange into that 
flawed database design is amazing.





Re: [GENERAL] rounding problems

2008-05-13 Thread Tomasz Ostrowski
On 2008-05-12 20:49, Justin wrote:

 We take (List Price * discount Percent) * Number of Pieces = net
 price.

This is wrong. You should do this in Excel:
( price * amount ) * discount
Otherwise any small error in the representation of price*discount would
be multiplied by a usually high amount.

I'd do it this way to get it right:
round( round(price * amount, 2) * discount, 2)
This way every sum should match. There could be errors, but in pennies,
not dollars. These errors occur because Excel does not have a decimal
type.

It is a lost cause to break Postgres so it will match Excel. It would be
much easier to correct the Excel spreadsheet, and that is the right thing to do.
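The round-at-each-step rule can be sketched like this (hypothetical figures; Python's decimal module standing in for a true decimal type):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_cents(x: Decimal) -> Decimal:
    # Round to 2 decimal places, half away from zero
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

price = Decimal("19.99")      # hypothetical list price
amount = 1000                 # hypothetical quantity
discount = Decimal("0.9975")  # hypothetical discount multiplier

# Round the extended price to cents first, then apply the discount,
# so both sides of a reconciliation work at the same precision
total = to_cents(to_cents(price * amount) * discount)
print(total)   # 19940.03
```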

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh




Re: [GENERAL] rounding problems

2008-05-13 Thread Justin

Tomasz Ostrowski wrote:

On 2008-05-12 20:49, Justin wrote:

  

We take (List Price * discount Percent) * Number of Pieces = net
price.



This is wrong. You should do in Excel:
( price * amount ) * discount
As otherwise any small error in representation of price*discount would
be multiplied by usually high amount.
  


You're saying that in Excel multiplication is not commutative??? That 
sends shudders down my back.

I'd do this way to get it right:
round( round(price * amount, 2) * discount, 2)
This way every sum should match. There could be errors, but in pennies,
not dollars. These errors will be because Excel does not have a decimal
type.
  
That makes sense; you are keeping the precision the same through the 
calculation.

It is a lost case to break Postgres so it will match Excel. Much easier
would be to correct Excel spreadsheet. And this is a right thing to do.

Regards
Tometzky
  


Re: [GENERAL] rounding problems

2008-05-13 Thread Douglas McNaught
2008/5/13 Justin [EMAIL PROTECTED]:

  Your saying in Excel,  Multiplication is not  Commutativity???  that sends
 shudders  down my back

The word you want in this case is associative.  Since floating point
math is not exact, sometimes the associativity (and other) properties
of some operations are not preserved.  This is true for floating point
in general, not just Excel.
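A one-liner demonstrates the point in any language with IEEE doubles (Python here; the values are arbitrary):

```python
# Regrouping a floating-point sum changes which rounding errors occur,
# so associativity is not preserved even though each operation is correct.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                  # 0.6000000000000001
print(a + (b + c))                  # 0.6
print((a + b) + c == a + (b + c))   # False
```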

-Doug



Re: [GENERAL] rounding problems

2008-05-13 Thread Justin
Thanks to Andy's C code, here is the pl/pgSQL function which does the 
same thing.

I added a tiny bit so that instead of returning a numeric value with 20 
trailing zeros, it returns a value with the desired precision.



-Begin Code -

create or replace function roundup(pToRound numeric, pPrecision integer) returns numeric as

$Body$
declare
    factor numeric;
    poweredup numeric;
    trunced numeric;
    fraction numeric;
Begin
    factor := power(10.0, pPrecision);
    poweredup := pToRound * factor;
    trunced := trunc(poweredup);
    fraction := poweredup - trunced;

    if (fraction >= 0.5) then
        return trunc(((trunced + 1)/factor), pPrecision);
    end if;
    if (fraction <= -0.5) then
        return trunc(((trunced - 1)/factor), pPrecision);
    end if;

    return trunc((trunced/factor), pPrecision);

END;
$Body$
 LANGUAGE 'plpgsql' VOLATILE
 COST 100;
ALTER FUNCTION roundup(pToRound numeric, pPrecision integer) owner to 
postgres;



---End Code ---
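For testing the logic outside the database, the same round-half-away-from-zero scheme can be sketched in Python with exact decimals (a port for illustration, not part of the thread's code):

```python
from decimal import Decimal

def roundup(to_round: Decimal, precision: int) -> Decimal:
    """Round half away from zero to `precision` places, mirroring
    the pl/pgSQL function above."""
    factor = Decimal(10) ** precision
    powered = to_round * factor
    trunced = int(powered)          # int() truncates toward zero
    fraction = powered - trunced
    if fraction >= Decimal("0.5"):
        return Decimal(trunced + 1) / factor
    if fraction <= Decimal("-0.5"):
        return Decimal(trunced - 1) / factor
    return Decimal(trunced) / factor

print(roundup(Decimal("2.345"), 2))    # 2.35
print(roundup(Decimal("-2.345"), 2))   # -2.35
```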



function roundup(n, i)
{
    factor = power(10.0, i);
    nd = n * factor;
    ni = trunc(nd);
    fraction = nd - ni;
    if (fraction >= 0.5)
        return (ni + 1)/factor;
    if (fraction <= -0.5)
        return (ni - 1)/factor;
    return ni/factor;
}

Apologies for using C and warnings that I haven't thoroughly tested this.

P.S. You could also write a round-even function for Excel and get them 
to use it on their next printout! :-)


-- Andy





Re: [GENERAL] rounding problems

2008-05-13 Thread Justin


Tomasz Ostrowski wrote:

On 2008-05-13 17:29, Justin wrote:
  

Tomasz Ostrowski wrote:


On 2008-05-12 20:49, Justin wrote:

  
  

We take (List Price * discount Percent) * Number of Pieces = net
price.



This is wrong. You should do in Excel:
( price * amount ) * discount
As otherwise any small error in representation of price*discount would
be multiplied by usually high amount.
  
  

Your saying in Excel,  Multiplication is not  Commutativity???  that
sends shudders  down my back



On floats it is not in any language. Try this C program:

#include <stdio.h>

int main()
{
    float a = 0.1;
    float b = 10;
    float c = 10;
    float d = a * b;
    printf("%.10f\n", d*c);
    d = b * c;
    printf("%.10f\n", a*d);
    return 0;
}

On my Intel 32-bit I get:
10.0000000000
10.0000001490
  

I ran this on Windows using MSVC.

The results are the same when using float, but the MS compiler throws 
warnings crying about truncation and rounding problems:

10.0000000000
10.0000001490

If I change it to double, the problem goes away:
10.0000000000
10.0000000000

I have always avoided floating point. In all the work I have done thus 
far, the problems with floating math would cause scrap parts or make 
quality control worthless.


Accuracy trumps pretty much every concern around here.



Pozdrawiam
Tometzky
  


Re: [GENERAL] rounding problems

2008-05-13 Thread Justin
Double holds 15 places, which is the highest precision it can maintain 
before rounding occurs.

Is it limitless? No, but what is?

Practically speaking, with a value of 0.000,000,000,000,001, aka one 
quadrillionth of anything, I view the problem as solved for 98% of problems.

Does it hide the problem? Yes, but it hides it very well. :-P

Tomasz Ostrowski wrote:

On 2008-05-13 19:21, Justin wrote:


I change it to double  problem goes away.


No, it does not. It only hides it deeper. double is also a floating 
point type and has the same problems.


Regards
Tometzky




[GENERAL] rounding problems

2008-05-12 Thread Justin
I have a very annoying problem that I would like to get a workaround in 
place for, so the data entry people stop trying to kill me.


Normally people give quotes out of the price book, which was done in 
Excel like 15 years ago and has just been updated over the years. The 
problem is Excel is rounding differently than Postgres 8.3.1 (yes, I know 
Excel rounds incorrectly), which results in quotes normally being pennies 
off, but on large quantities it's usually under a few bucks on the 
PostgreSQL side. We internally don't care, but those annoying customers 
scream bloody murder if the quote doesn't agree to the penny with the 
invoice, even when it's to their benefit.

Has anyone ever gotten PostgreSQL and Excel to agree on rounding?

I have checked Excel up to Office XP and it's still wrong. (OpenOffice 
was looked at, and the people screamed really loudly NO.)


Another annoying thing is that the calculators on everyone's desks get it 
wrong too if the rounding is set to 2 places.

Although my TI-89 and TI-36X calculators agree perfectly with 
PostgreSQL.







Re: [GENERAL] rounding problems

2008-05-12 Thread Lincoln Yeoh

At 01:48 AM 5/13/2008, Justin wrote:
I have very annoying problem that i would like to get a work 
around  in place so the data entry people stop trying to kill me.


Normally people give quotes out of the price book which was done in 
Excel like 15 years ago and just has been updated over the 
years.  the problem is excel is rounding differently than postgres 
8.3.1 (Yes i know Excel rounds incorrectly) which results in 
normally being pennies off but on large qty its usually under a few 
bucks on the postgresql side.
We internally don't  care but those annoying customers scream bloody 
murder if the quote don't agree to the penny on the invoice  Even 
when its to their benefit .


Has anyone every got  Postgresql and Excel to agree on rounding.
I have checked excel up to Office XP and its still wrong.  (open 
office was looked out and the people  screamed really loudly NO )


Another annoying thing is the calculators on everyones desk get it 
wrong to if the rounding is turned to 2 places.


Although my TI-89, and TI-36X calculators agree perfectly with postgresql .


Bad news, the Excel thing is probably doing math very wrong.

Also, my guess is you're treating one penny as 0.01, which is also wrong.

When you do financial calculations you should avoid floating point 
where possible. Floating point is really tricky to get right. There 
are scary books on it.


I'm no expert in financial calculations and floating point stuff, my 
_guess_ is a good start is probably treating one penny as 1, instead 
of 0.01. But better wait for the experts to chime in.
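The penny-as-1 idea looks like this in practice (hypothetical figures; any integer type works):

```python
# Store money as integer cents so every addition and multiplication is
# exact; formatting back to dollars happens only at display time.
price_cents = 1999   # $19.99
qty = 3
total_cents = price_cents * qty
print(f"${total_cents // 100}.{total_cents % 100:02d}")   # $59.97
```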


That said, if you're going to insist on using the wrong numbers from 
the Excel invoice, can't you work out some way of getting them into 
PostgreSQL and storing them as is, rather than having PostgreSQL 
calculate them differently? (I suspect you're using floating point in 
PostgreSQL and so it'll be wrong too, just maybe a bit less wrong 
than Excel ;) ).


Regards,
Link.








Re: [GENERAL] rounding problems

2008-05-12 Thread Andy Anderson
Can you be more explicit about the rounding that's wrong in Excel?  
Are you talking about the n.5 round-up to n+1 that Excel uses  
vs. the n.5 round-to-even n (sometimes called Banker's Rounding)?


-- Andy

On May 12, 2008, at 1:48 PM, Justin wrote:

I have very annoying problem that i would like to get a work  
around  in place so the data entry people stop trying to kill me.


Normally people give quotes out of the price book which was done in  
Excel like 15 years ago and just has been updated over the years.   
the problem is excel is rounding differently than postgres 8.3.1  
(Yes i know Excel rounds incorrectly) which results in normally  
being pennies off but on large qty its usually under a few bucks on  
the postgresql side.   We internally don't  care but those annoying  
customers scream bloody murder if the quote don't agree to the  
penny on the invoice  Even when its to their benefit .

Has anyone every got  Postgresql and Excel to agree on rounding.
I have checked excel up to Office XP and its still wrong.  (open  
office was looked out and the people  screamed really loudly NO )


Another annoying thing is the calculators on everyones desk get it  
wrong to if the rounding is turned to 2 places.
Although my TI-89, and TI-36X calculators agree perfectly with  
postgresql .






Re: [GENERAL] rounding problems

2008-05-12 Thread Justin



Andy Anderson wrote:
Can you be more explicit about the rounding that's wrong in Excel? Are 
you talking about the n.5 round-up to n+1 that Excel uses vs. 
the n.5 round-to-even n (sometimes called Banker's Rounding)?


Yes, I'm talking about the difference between banker's rounding versus
Excel's crappy math. I have dealt with Excel's crappy math skills in scientific
measurements dumped from A/D cards; the simple solution was to increase the
decimal range to 1 more than I needed. But in this case it won't work,
since this published material will disagree with how PostgreSQL rounds.

We take (List Price * Discount Percent) * Number of Pieces = Net
Price. List price is stored as numeric(16,4), discount is stored as
numeric(10,4),
and the result is numeric(16,4). On the UI it's rounded to 2 places, displays
correctly, and agrees with my TI-89.

The problem is that the price book, which is used for quotes, is almost always
0.01 to 0.015 pennies higher. Net result: the invoices are almost always
lower than the quoted price (yet customers still throw a fit).




-- Andy

On May 12, 2008, at 1:48 PM, Justin wrote:

I have very annoying problem that i would like to get a work around  
in place so the data entry people stop trying to kill me.


Normally people give quotes out of the price book which was done in 
Excel like 15 years ago and just has been updated over the years.  
the problem is excel is rounding differently than postgres 8.3.1 (Yes 
i know Excel rounds incorrectly) which results in normally being 
pennies off but on large qty its usually under a few bucks on the 
postgresql side.   We internally don't  care but those annoying 
customers scream bloody murder if the quote don't agree to the penny 
on the invoice  Even when its to their benefit .

Has anyone every got  Postgresql and Excel to agree on rounding.
I have checked excel up to Office XP and its still wrong.  (open 
office was looked out and the people  screamed really loudly NO )


Another annoying thing is the calculators on everyones desk get it 
wrong to if the rounding is turned to 2 places.
Although my TI-89, and TI-36X calculators agree perfectly with 
postgresql .






Re: [GENERAL] rounding problems

2008-05-12 Thread Justin



Lincoln Yeoh wrote:

At 01:48 AM 5/13/2008, Justin wrote:
I have very annoying problem that i would like to get a work around  
in place so the data entry people stop trying to kill me.


Normally people give quotes out of the price book which was done in 
Excel like 15 years ago and just has been updated over the years.  
the problem is excel is rounding differently than postgres 8.3.1 (Yes 
i know Excel rounds incorrectly) which results in normally being 
pennies off but on large qty its usually under a few bucks on the 
postgresql side.
We internally don't  care but those annoying customers scream bloody 
murder if the quote don't agree to the penny on the invoice  Even 
when its to their benefit .


Has anyone every got  Postgresql and Excel to agree on rounding.
I have checked excel up to Office XP and its still wrong.  (open 
office was looked out and the people  screamed really loudly NO )


Another annoying thing is the calculators on everyones desk get it 
wrong to if the rounding is turned to 2 places.


Although my TI-89, and TI-36X calculators agree perfectly with 
postgresql .


Bad news, the Excel thing is probably doing math very wrong.

Also, my guess is you're treating one penny as 0.01, which is also wrong.
The fields are numeric(12,4) and numeric(10,2). I'm in the process of 
extending the precision on the accounting side because it's causing 
problems with inventory costing, as we have raw material priced at $50 
to $100 a pound but only consume .000235 lbs per part, so we can 
get some funky results.

I did not lay out the database. The person who laid out the database 
knows even less math than I do; we have numeric fields from (20,10) to (10,4) 
and everything in between. It creates some funky results due to 
truncating and rounding in the different fields. With raw material 
priced as high as things are today, it starts adding up to some major 
issues; multiply that by thousands of transactions and it's just way wrong.


I learned long ago to make sure every field in the database has the same 
precision and to deal with the rounding on the UI side. I learned this 
from my work in low-resistance measurements taken at the ppm scale.


When you do financial calculations you should avoid floating point 
where possible. Floating point is really tricky to get right. There 
are scary books on it.


I know this and have experienced it before. Again, someone did not know what 
they were doing and I got left picking up the pieces. Not to say I did 
not make all kinds of mistakes my first time through, but I fixed mine.


To further muddy the water for the users, our last ERP package used 
round-to-next-highest-number, which trashed cost accounting as it used 
more raw material than it should have.




I'm no expert in financial calculations and floating point stuff, my 
_guess_ is a good start is probably treating one penny as 1, instead 
of 0.01. But better wait for the experts to chime in.


That said, if you're going to insist on using the wrong numbers from 
the Excel Invoice, can't you work some way of getting them into 
Postgresql and stored as is, rather than having Postgresql calculate 
them differently ( I suspect you're using floating point in postgresql 
and so it'll be wrong too, just maybe a bit less wrong than Excel ;) ).


No floating point is being used; every variable is declared as numeric on 
the PostgreSQL side. In the C++, which is the UI side, everything is 
double.


Regards,
Link.









Re: [GENERAL] rounding problems

2008-05-12 Thread Justin

That's how I loaded the price list to start with. The problem is when
sales orders are entered and the automatic pricing kicks in (the
discounts are calculated times the number of pieces ordered), it goes
down the tubes.

I could just rewrite the pricing stored procedures to call a rounding
procedure that would make the results agree with stupid Excel. :-\

Not the preferred way, but it would make the data entry people leave me
alone. Thanks for the idea.


Christophe wrote:
Rather than try to recreate Excel's rounding algorithm, perhaps use 
Excel to create a table of input values and results, and load that 
into the database?  It might be easier than trying to back-engineer 
Excel's broken math.






Re: [GENERAL] rounding problems

2008-05-12 Thread Andy Anderson

Andy Anderson wrote:
Can you be more explicit about the rounding that's wrong in Excel?  
Are you talking about the n.5 round-up to n+1 that Excel uses  
vs. the n.5 round-to-even n (sometimes called Banker's Rounding)?


On May 12, 2008, at 2:38 PM, Justin wrote:
Yes i'm taking about difference between bankers rounding verse  
Excels crappy math.   I have dealt with excels crappy math skills  
in scientific measurements dumped from AD cards the simply solution  
was increase the decimal range to 1 more than i needed.  But in  
this case it won't work sense this published material will disagree  
with how postgresql rounds.



Well, I won't call it crappy, just different; it depends on your  
purpose. I learned round-even in grade school, but I've seen many  
college students in the last two decades who learned round-up.  
Microsoft actually explains these two and several other ways to  
implement rounding on this page:


http://support.microsoft.com/kb/196652

(But they don't justify their choice for Excel, very odd given its  
extensive financial use.)


Anyway, I would imagine you could implement a custom function to  
replace Postgres' round(n, i) along the lines of:


function roundup(n, i)
{
    factor = power(10.0, i);
    nd = n * factor;
    ni = trunc(nd);
    fraction = nd - ni;
    if (fraction >= 0.5)
        return (ni + 1)/factor;
    if (fraction <= -0.5)
        return (ni - 1)/factor;
    return ni/factor;
}

Apologies for using C and warnings that I haven't thoroughly tested  
this.


P.S. You could also write a round-even function for Excel and get  
them to use it on their next printout! :-)


-- Andy



Re: [GENERAL] rounding problems

2008-05-12 Thread Craig Ringer

Justin wrote:

No floating point is being used every variable is declared as numeric on 
the Postgresql side and in the C++  which is the UI side everything is 
double.


`double' in C++ refers to double precision floating point. `double' is 
subject to all the usual fun with rational decimals being irrational 
binary floats (and vice versa).


One of the reasons I chose Java for my current work is that it has a 
built-in decimal type (like `numeric') called BigDecimal . This makes 
working with exact quantities a lot easier as there's no conversion and 
rounding occurring each time data goes to/from the database.


Are there any particular decimal/numeric libraries people here like to 
use with C++ ? Or do you just use double precision floats and a good 
deal of caution?


I'd expect that using double would be OK so long as the scale of your 
numeric values never approaches the floating point precision limit of 
the double type. I'm far from sure about that, though, and it'd be handy 
to hear from people who're doing it. Personally I like to stick to 
numeric/decimal types.


--
Craig Ringer



Re: [GENERAL] rounding problems

2008-05-12 Thread Justin



Craig Ringer wrote:

Justin wrote:

No floating point is being used every variable is declared as numeric 
on the Postgresql side and in the C++  which is the UI side 
everything is double.


`double' in C++ refers to double precision floating point. `double' is 
subject to all the usual fun with rational decimals being irrational 
binary floats (and vice versa).


One of the reasons I chose Java for my current work is that it has a 
built-in decimal type (like `numeric') called BigDecimal . This makes 
working with exact quantities a lot easier as there's no conversion 
and rounding occurring each time data goes to/from the database.

Not according to the MS specifics, if I'm reading it correctly:

*Microsoft Specific *

The double type contains 64 bits: 1 for sign, 11 for the exponent, and 
52 for the mantissa. Its range is +/-1.7E308 with at least 15 digits of 
precision.


Are there any particular decimal/numeric libraries people here like to 
use with C++ ? Or do you just use double precision floats and a good 
deal of caution?


I'd expect that using double would be OK so long as the scale of your 
numeric values never approaches the floating point precision limit of 
the double type. I'm far from sure about that, though, and it'd be 
handy to hear from people who're doing it. Personally I like to stick 
to numeric/decimal types.


--
Craig Ringer


Re: [GENERAL] rounding problems

2008-05-12 Thread Justin
As I'm playing around with rounding and the numeric field precision, I ran 
into an odd set of results I don't understand.

Here is the SQL I wrote. The first few inserts are calculations we run 
every day and they make sense, but when division is used the results are 
not right, or am I missing something?


create table test_num (
    num1 numeric(20,1),
    num2 numeric(20,2),
    num3 numeric(20,3),
    num4 numeric(20,4),
    num5 numeric(20,5),
    num6 numeric(20,6),
    num7 numeric(20,7),
    num8 numeric(20,8),
    num9 numeric(20,9));

delete from test_num;

insert into test_num values ( (0.70 *1.05), (0.70 *1.05), (0.70 *1.05),
    (0.70 *1.05), (0.70 *1.05), (0.70 *1.05),
    (0.70 *1.05), (0.70 *1.05), (0.70 *1.05));

insert into test_num values ( (0.709 *1.05), (0.709 *1.05), (0.709 *1.05),
    (0.709 *1.05), (0.709 *1.05), (0.709 *1.05),
    (0.709 *1.05), (0.709 *1.05), (0.709 *1.05));

insert into test_num values ( (.5/.03), (.5/.3), (.5/3),
    (.5/30), (.5/300), (.5/3000),
    (.5/3), (.5/3), (.5/3));

insert into test_num values ( (.5/.03)*.9975, (.5/.3)*.9975, (.5/3)*.9975,
    (.5/30)*.9975, (.5/300)*.9975, (.5/3000)*.9975,
    (.5/3)*.9975, (.5/3)*.9975, (.5/3)*.9975);

insert into test_num values ( (9*.1), (9*.01), (9*.001),
    (9*.0001), (9*.1), (9*.01),
    (9*.001), (9*.0001), (9*.1));

insert into test_num values ( (9/10), (9/100), (9/1000),
    (9/1), (9/10), (9/100),
    (9/1000), (9/1), (9/10));

insert into test_num values ( (1*.1), (1*.01), (1*.001),
    (1*.0001), (1*.1), (1*.01),
    (1*.001), (1*.0001), (1*.1));

insert into test_num values ( (1/10), (1/100), (1/1000),
    (1/1), (1/10), (1/100),
    (1/1000), (1/1), (1/10));

select * from test_num;
  
  




Re: [GENERAL] rounding problems

2008-05-12 Thread Justin

I tried casting them to numeric and it was still wrong.

OK, I just added a decimal point after the 9 and 1; it works at that point.

That's an odd result; I would not have expected it to do that.

This prompts another question: how does Postgres figure out the data
types passed in an SQL string?

Andy Anderson wrote:
I would guess the issue is that 9/10 is an integer calculation, with 
result 0. Use instead 9./10 or 9/10. or 9./10. with result 0.9.
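The same pitfall exists in most languages; Python's floor-division operator shows it (a sketch for illustration, not SQL):

```python
# Untyped literals 9 and 10 are integers in SQL, so 9/10 is integer
# division. Python's // operator behaves the same way.
print(9 // 10)   # 0, like SELECT 9/10 in PostgreSQL
print(9 / 10)    # 0.9, like SELECT 9./10
```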


-- Andy

On May 12, 2008, at 5:09 PM, Justin wrote:

As i'm playing around with rounding and the numeric field precision 
ran into a odd set of results i don't understand


here is the sql i wrote the first four inserts are calculations we 
run everyday and they make sense but if  division is used the results 
are not right or am i missing something


create table test_num (
   num1 numeric(20,1),
   num2 numeric(20,2),
   num3 numeric(20,3),
   num4 numeric(20,4),
   num5 numeric(20,5),
   num6 numeric(20,6),
   num7 numeric(20,7),
   num8 numeric(20,8),
   num9 numeric(20,9));

delete from test_num;

insert into test_num values ( (0.70 *1.05), (0.70 *1.05), (0.70 *1.05),
 (0.70 *1.05), (0.70 *1.05), (0.70 *1.05),
 (0.70 *1.05), (0.70 *1.05), (0.70 *1.05));

insert into test_num values ( (0.709 *1.05), (0.709 *1.05), (0.709 
*1.05),

 (0.709 *1.05), (0.709 *1.05), (0.709 *1.05),
 (0.709 *1.05), (0.709 *1.05), (0.709 *1.05));
insert into test_num values( (.5/.03), (.5/.3), (.5/3),
(.5/30), (.5/300), (.5/3000),
(.5/3), (.5/3), (.5/3));


insert into test_num values( (.5/.03)*.9975, (.5/.3)*.9975, 
(.5/3)*.9975,

(.5/30)*.9975, (.5/300)*.9975, (.5/3000)*.9975,
(.5/3)*.9975, (.5/3)*.9975, (.5/3)*.9975);
insert into test_num values( (9*.1),
(9*.01),
(9*.001),
(9*.0001),
(9*.1),
(9*.01),
(9*.001),
(9*.0001),
(9*.1));

insert into test_num values ( (9/10),
 (9/100),
 (9/1000),
 (9/1),
 (9/10),
 (9/100),
 (9/1000),
 (9/1),
 (9/10));
insert into test_num values( (1*.1),
(1*.01),
(1*.001),
(1*.0001),
(1*.1),
(1*.01),
(1*.001),
(1*.0001),
(1*.1));
   insert into test_num values ( (1/10),
 (1/100),
 (1/1000),
 (1/1),
 (1/10),
 (1/100),
 (1/1000),
 (1/1),
 (1/10));

select * from test_num ;









Re: [GENERAL] rounding problems

2008-05-12 Thread Craig Ringer

Justin wrote:


Craig Ringer wrote:


`double' in C++ refers to double precision floating point. `double' is 
subject to all the usual fun with rational decimals being irrational 
binary floats (and vice versa).



Not according to the MS-specific documentation, if I'm reading it correctly:

*Microsoft Specific *

The double type contains 64 bits: 1 for sign, 11 for the exponent, and 
52 for the mantissa. Its range is +/-1.7E308 with at least 15 digits of 
precision.


I take it you're referring to:

http://msdn.microsoft.com/en-us/library/e02ya398(VS.80).aspx ?

See how it says "The format is similar to the float format"?

As you can see from:

http://msdn.microsoft.com/en-us/library/hd7199ke(VS.80).aspx

the `double' type is a binary floating point representation, just like 
float. It just has a bigger exponent and a bigger mantissa, so it can 
represent more extreme values and do so with more precision.


Being a binary floating point representation it's subject to all the 
usual problems with comparison for equality, rounding oddities, etc.


Here's one of the many explanations out there on the 'net. I haven't 
read this particular one, it's just a viable looking Google hit:


http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point.html



By the way, there was at least a proposal for a numeric/decimal type for 
C++0x . It doesn't seem to have made the cut.


http://209.85.173.104/search?q=cache:D0Iqhgz7X1QJ:www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2041.pdf+%22c%2B%2B0x%22+decimal+OR+numerichl=enct=clnkcd=1gl=auclient=firefox-a
http://en.wikipedia.org/wiki/C%2B%2B0x
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/

It looks like ISO C might adopt a decimal type or library though:

http://www2.hursley.ibm.com/decimal/

Note in particular the support in gcc 4.2 or newer.

There's also a library:

http://www2.hursley.ibm.com/decimal/dfpal/

that might be useful.

--
Craig Ringer



Re: [GENERAL] rounding problems

2008-05-12 Thread Craig Ringer

Justin wrote:

I tried casting them to numeric and it was still wrong


How do the results differ from what you expect? You've posted a bunch of 
code, but haven't explained what you think is wrong with the results.


Can you post a couple of SMALL examples and explain how the results are 
different from what you expect them to be?


Try the example using the following formats for the literals in your test:

   2.0
   '2.0'::numeric  (this is a BCD decimal)
   '2.0'::float4 (this is a C++/IEEE float)
   '2.0'::float8 (this is a C++/IEEE double)

and see how the results differ.
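Craig's experiment can be run directly; a sketch (the aliases are mine, and the float output may vary with server version and extra_float_digits):

```sql
-- Same value, four representations:
SELECT 2.0            AS plain_literal,  -- parsed as numeric
       '2.0'::numeric AS as_numeric,     -- exact decimal
       '2.0'::float4  AS as_float4,      -- single-precision binary float
       '2.0'::float8  AS as_float8;      -- double-precision binary float

-- Where they diverge: binary floats cannot represent 0.1 or 0.2 exactly.
SELECT 0.1::numeric + 0.2::numeric AS exact_sum,   -- 0.3
       0.1::float8  + 0.2::float8  AS binary_sum;  -- 0.30000000000000004 on modern servers
```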

--
Craig Ringer



Re: [GENERAL] rounding problems

2008-05-12 Thread Justin

I guess I have not been very clear.

lets take this
   select (9/10), (9/10)::numeric, (9::numeric/10::numeric), (9./10),
(9*.1)

With the given select statement I expected the results all to be the same,
especially since 4 of the 5 are cast to numeric, either with an explicit cast
or by containing a decimal point.  Instead PostgreSQL treats the first 2
calculations as integer, so it uses integer math and the result is 0.

To add further confusion to my small brain, there is an explicit type
cast on the second calculation, but it still returned 0.  Not what I
would have expected.  After thinking about it for, say, 10 seconds, I see
that PostgreSQL is following the order of operations in the 2nd
calculation: it does integer math first and then casts the result to numeric.

I made the incorrect assumption that PostgreSQL would cast all the
arguments to numeric and then do the math.  After thinking this through
for a short bit, I see why PostgreSQL keeps the arguments as integers:
numeric/floating-point math can be a pretty heavy hit performance-wise.

So this prompts the question: how does PostgreSQL decide what types to
cast arguments to?  It seems, thus far, that if a decimal point is found in the
argument it is numeric, and anything without a decimal point is assumed
to be an integer.




Craig Ringer wrote:

Justin wrote:

I tried casting them to numeric and it was still wrong


How do the results differ from what you expect? You've posted a bunch 
of code, but haven't explained what you think is wrong with the results.


Can you post a couple of SMALL examples and explain how the results 
are different from what you expect them to be?


Try the example using the following formats for the literals in your 
test:


   2.0
   '2.0'::numeric  (this is a BCD decimal)
   '2.0'::float4 (this is a C++/IEEE float)
   '2.0'::float8 (this is a C++/IEEE double)

and see how the results differ.

--
Craig Ringer







Re: [GENERAL] rounding problems

2008-05-12 Thread Christophe
Yet another option, of course, is to simply not do any calculations  
in PostgreSQL, and accept the results from Excel as definitive...  
which seems to be what is desired, anyway.




Re: [GENERAL] rounding problems

2008-05-12 Thread Andy Anderson

On May 12, 2008, at 6:37 PM, Justin wrote:

lets take this
   select (9/10), (9/10)::numeric, (9::numeric/10::numeric), (9./10),
(9*.1)

With the given select statement i  expected the results all to be  
same,
especially sense it cast 4 of the 5 to numeric either with explicit  
cast

or by containing a decimal.  Instead postgresql cast the  first 2
calculations to integer, it then uses integer math so the result is 0.


Putting a decimal on a string of digits is the standard way to  
specify that it's numeric rather than integer; see 4.1.2.4. Numeric  
Constants:


	http://www.postgresql.org/docs/8.3/interactive/sql-syntax-lexical.html#AEN1276


In other words, 9. is equivalent to 9::numeric, though the latter  
involves an operation on an integer.


If a calculation contains a numeric value, any integers involved will  
be cast to a numeric value first, and then the calculation will  
proceed numerically.


9/10 = 0                                (a purely integer calculation; division truncates the fractional part)
(9/10)::numeric = 0::numeric = 0.       (the parentheses force the integer calculation to occur *before* the cast)
9::numeric/10::numeric = 9./10. = 0.9   (one or two casts force a numeric calculation)
9./10 = 9./10. = 0.9                    (specifying a numeric value forces the integer to be cast to numeric)

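Andy's four cases can be checked side by side; a sketch (the column names are mine):

```sql
SELECT 9/10                   AS int_div,         -- 0: integer division truncates
       (9/10)::numeric        AS cast_after,      -- 0: the cast runs after the integer division
       9::numeric/10::numeric AS cast_before,     -- 0.9...: numeric division
       9./10                  AS numeric_literal; -- 0.9...: 9. is a numeric constant
```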


To Add further conversion to my small brain there is a specific type
cast to the second calculation but it still returned 0.  Not what i
would have expected.  After thinking about it for say 10 seconds, i  
see

that Postgresql is following the order of operation in the 2nd
calculation where it does integer math then cast the results to  
numeric.


I made the incorrect assumption Postgresql would have casted all the
arguments to numeric then done the math.


Not when you change the order of evaluation by using parentheses. See  
the precedence table in 4.1.6. Lexical Precedence:


	http://www.postgresql.org/docs/8.3/interactive/sql-syntax-lexical.html#SQL-PRECEDENCE



After thinking this through
for a short bit i see why postgresql is casting the arguments to  
integer

type as numeric/floating point math can be a pretty heavy hit
performance wise.

So this prompts the question how does postgresql decide what types to
cast arguments to.


It starts with operator precedence to determine the order of  
operation, and then for each operator it decides how it will cast  
arguments for the best results.


-- Andy




Re: [GENERAL] rounding problems

2008-05-12 Thread Sam Mason
On Mon, May 12, 2008 at 06:37:02PM -0400, Justin wrote:
 I guess i have not been very clear.
 
 lets take this
select (9/10), (9/10)::numeric, (9::numeric/10::numeric), (9./10),
 (9*.1)
 
 With the given select statement i  expected the results all to be same,
 especially sense it cast 4 of the 5 to numeric either with explicit cast
 or by containing a decimal.  Instead postgresql cast the  first 2
 calculations to integer, it then uses integer math so the result is 0.
 
 To Add further conversion to my small brain there is a specific type
 cast to the second calculation but it still returned 0.  Not what i
 would have expected.  After thinking about it for say 10 seconds, i see
 that Postgresql is following the order of operation in the 2nd
 calculation where it does integer math then cast the results to numeric.

PG does very similar things to what C does.  '9' is an integer literal,
and so is '10', there is a '/' operator that takes two integers and
returns an integer and this gets used, resulting in an integer.  If you
happen to cast the thing to a value of numeric type this will happen
after the division (i.e. it follows the syntax, like C does).  Casting
the integers to values of numeric type is similar, just the numeric
version of the division operator gets used.  The last example exercises
a different code path, in that '9.' is a value of numeric type and '10'
is still of integer type.  There's some magic somewhere in PG that says
that values of numeric type are more expressive than values of integer
type causing the parser (I'm guessing here) to insert a cast to numeric
type.  The types now unify and one value can be divided by the other.

The magic seems somewhat arbitrary; what if I wanted to go to the less
precise type, or generally be told when things didn't unify?

 I made the incorrect assumption Postgresql would have casted all the
 arguments to numeric then done the math.  After thinking this through
 for a short bit i see why postgresql is casting the arguments to integer
 type as numeric/floating point math can be a pretty heavy hit
 performance wise.

I don't think it's accurate to say the behaviour is there because
of performance reasons, it's just evaluating your code as you've
written it.  The behaviour you describe is closer to an untyped (i.e.
dynamically checked, or, as they seem to be popularly known, "weakly
typed") scripting language.  Either that or something like Haskell which
treats types much more rigorously than PG, where the expression (9.0 /
(10::Int)) would fail to type check, and 9.0/10 (or even 9/10) would do
what you wanted and parse 10 as any value that implements the fractional
type class (probably a floating point number).

The easiest way to understand what's going on is generally playing with
a single expression, then changing the literals to represent values
of different types and seeing how the result changes.  You may get
some mileage out of using EXPLAIN VERBOSE (you can see the cast being
inserted in the 9./10 case, when compared to 9/10---function OID 1740
takes an int4 and returns a numeric) but it's somewhat difficult to
read.
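The inserted cast Sam describes can also be made visible with pg_typeof (added in PostgreSQL 8.4; on 8.3, EXPLAIN VERBOSE is the only option, as he notes); a sketch:

```sql
SELECT pg_typeof(9/10)            AS t1,  -- integer: int / int stays int
       pg_typeof(9./10)           AS t2,  -- numeric: the integer operand is promoted
       pg_typeof((9/10)::numeric) AS t3;  -- numeric, but the division already ran as int
```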


  Sam



Re: [GENERAL] rounding problems

2008-05-12 Thread Justin
That's what I'm trying to get a grasp on: what PostgreSQL is doing with 
the calculation as it truncates or rounds the number when committing the 
records to the physical table.


I just started digging into this because we are having problems where some 
fields in the database have a precision of 2 and others go all the way to 10 
decimal places. 

The table layout we have is not consistent, and the results are hundredths to 
thousandths of pennies off, but those pennies start to become dollars every 
100 to 1000 transactions.  It seems the PG rounding favors the 
lower side of the number when it is committed to the table.  I've been 
going over transactions in WIP and comparing them to values in the General 
Ledger; I'm off 6 cents, and that's only on the 36 transactions that I have 
hand-checked.  The GL has a lower value compared to the records in the WIP 
tables, which have 4 and 6 decimals of precision versus the GL's 2-decimal 
precision in the tables.


I am going through the tables and making all the numeric fields the 
same.  I have run into problems because some of the columns are referenced by 
views and other constraints, and it's not letting me change them.  :'(


We have several columns in tables defined as numeric(20,10); that is 
just insanity, unless you're doing scientific calculations, which we do, 
do.  Having that many decimal places for an accounting package is just 
nonsense, and then it's rounded to 4 or 6 in the Inventory and WIP tables, 
then to 2 when the numbers finally hit the GL tables.  Whoever laid 
these tables out has never had to try to get numbers to balance and 
agree across tables :-( .  Every time I dig a little deeper I keep 
finding stupid things like this.


Some people may think I'm crazy for trying to track this down, but when 
you consume only 0.003186 lbs of a metal per part, the metal costs 22.7868 
per lb, and the work order calls for 1148 parts, how the machine rounds 
becomes a big problem: (0.00318611 * 1148) = 3.65765 lbs consumed * 
22.7868 = $83.3462.  Since the GL tables are 2-decimal, the value is stored 
as $83.35.


But the problem is far worse than that.  BOM allows for greater 
precision of 8 wip Inventory Movements shows only 6, Wip tables has 6  
and 4.


The question quickly becomes: which number is the correct number?  WIP 
truncates the material consumed, so 0.003186 * 1148 = 3.6575, * 22.7868 = 
83.3434, which is rounded to 83.34.
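The two results differ only in where the intermediate quantity was rounded; a sketch using the figures from the example:

```sql
-- BOM keeps 8 decimals; WIP truncates the quantity first.
SELECT round(0.00318611 * 1148 * 22.7868, 2) AS bom_precision,  -- 83.35
       round(0.003186   * 1148 * 22.7868, 2) AS wip_precision;  -- 83.34
```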


Multiply this by 1000 transactions a day and we start having major problems.




Sam Mason wrote:

On Mon, May 12, 2008 at 06:37:02PM -0400, Justin wrote:
  

I guess i have not been very clear.

lets take this
   select (9/10), (9/10)::numeric, (9::numeric/10::numeric), (9./10),
(9*.1)

With the given select statement i  expected the results all to be same,
especially sense it cast 4 of the 5 to numeric either with explicit cast
or by containing a decimal.  Instead postgresql cast the  first 2
calculations to integer, it then uses integer math so the result is 0.

To Add further conversion to my small brain there is a specific type
cast to the second calculation but it still returned 0.  Not what i
would have expected.  After thinking about it for say 10 seconds, i see
that Postgresql is following the order of operation in the 2nd
calculation where it does integer math then cast the results to numeric.



PG does very similar things to what C does.  '9' is an integer literal,
and so is '10', there is a '/' operator that takes two integers and
returns an integer and this gets used, resulting in an integer.  If you
happen to cast the thing to a value of numeric type this will happen
after the division (i.e. it follows the syntax, like C does).  Casting
the integers to values of numeric type is similar, just the numeric
version of the division operator gets used.  The last example exercises
a different code path, in that '9.' is a value of numeric type and '10'
is still of integer type.  There's some magic somewhere in PG that says
that values of numeric type are more expressive than values of integer
type causing the parser (I'm guessing here) to insert a cast to numeric
type.  The types now unify and one value can be divided by the other.

The magic seems somewhat arbitrary; what if I wanted to go to the less
precise type or generally be told when things didn't unify.

  

I made the incorrect assumption Postgresql would have casted all the
arguments to numeric then done the math.  After thinking this through
for a short bit i see why postgresql is casting the arguments to integer
type as numeric/floating point math can be a pretty heavy hit
performance wise.



I don't think it's accurate to say the behaviour is there because
of performance reasons, it's just evaluating your code as you've
written it.  The behaviour you describe is closer to an untyped (i.e.
dynamically checked, or as they seem to be popularly known weakly
typed) scripting language.  Either that or something like Haskell which
treats types much more rigorously than PG, where the expression (9.0 /
(10::Int)) would fail to type check [...]

Re: [GENERAL] rounding problems

2008-05-12 Thread Craig Ringer
Justin wrote:

 WE have several columns in table defined with numeric (20,10) thats is
 just insanity.

Not necessarily. I have a few places where a monetary value is
mulitiplied by a ratio quantity. For some of the historical data
imported from another system the ratio can be irrational or at least not
representable in any small precision value.

I ended up needing a precision of 8 numeric digits to acceptably
represent these ratios, resulting in a numeric(16,8) type to permit
ratio values up to 99999999.99999999. I probably could've got away with
numeric(13,8) or even numeric(12,8) but as space and performance aren't
utterly critical it didn't seem to be worth the risk of hitting limits
and overflows later. As it is I'm tempted to go to 10 digits of
precision, as there's still a 3 cent difference between the totals from
the old system and the same data imported into the new system.

You'll encounter similar situations in your materials consumption
tracking (as you detailed below) and other places. So don't discount the
 use of high precision numeric values just yet.

Personally I'd be tempted to use a `double precision' (float8) for
things like materials consumption. Materials consumed in huge quantities
will have lower price rates, and materials consumed in tiny quantities
will often be priced higher. With wally-numbers: You're not going to
care about the 0.0003 kg of steel consumed at a price of $0.1, but
the same amount of something valuable might have a detectable (if still
sub-cent) value. Floating point numbers are IMO better for that than BCD
numeric. However, since the float will just get converted to numeric
during multiplication with a numeric price-per-mass ratio it may well
not be worth worrying about it.

There's a use for that numeric(20,10).


   Unless your doing scientific calculations which we do,
 do.   Having that many decimal points for an accounting package is just
 nonsense and then its rounded to 4 or 6 in Inventory  and Wip tables
 then 2 when the numbers finally hit the GL tables. Who ever laid
 these tables out has never had to try and get numbers to balance and
 agree across tables :-( .   Every time i dig a little deeper i keep
 finding stupid things like this.

It sounds like you might have quite a bit of compounded rounding error
from the successive stages of rounding as data moves through the system.
Maybe you're rounding too aggressively?

I like to store a bit more precision than I have to, unless there's a
business rule that requires rounding to a particular precision. For
example, if your invoice items are rounded to whole cents you'd probably
round the calculated invoice item price when inserting into an invoice
item table.

Of course, that means that
   sum(calculation of invoice item price)
      !=
   sum(rounded price of invoice items)
because of rounding. That's fine; you can't balance the two things
exactly because they're actually subtly different things. If you're
using an appropriate rounding method for financial data, like
round-to-even, you'll only ever get a couple of cents difference and
that should be expected and ignored.
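One caveat about "round-to-even": PostgreSQL's round() on numeric breaks ties away from zero; only the double precision variant typically rounds ties to even, and that behavior is platform-dependent. A sketch of the difference:

```sql
SELECT round(0.125::numeric, 2) AS numeric_tie,  -- 0.13: ties go away from zero
       round(0.5::float8)       AS float_tie;    -- 0 on most platforms: ties to even
```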

 Some people may think i'm crazy trying to track this down but when
 you're only consume 0.003186 lbs of a metal per part that cost 22.7868
 per  lb and the work order calls fro 1148 parts.  how the machine rounds
 becomes a big problem   (.00318611*1148) = 3.65765 lbs consumed *
 22.7868 = $83.3462 . Sense GL tables are 2 decimal the value is stored
 as $83.35

Thinking about correct rounding and precision is very important, and far
from crazy.

 The question quickly becomes what number is the correct number.

Sometimes the answer is both of them - even though they are different.
See the example above with rounded invoice items.

--
Craig Ringer



Re: [GENERAL] rounding problems

2008-05-12 Thread Justin



Craig Ringer wrote:

Justin wrote:

  

WE have several columns in table defined with numeric (20,10) thats is
just insanity.



Not necessarily. I have a few places where a monetary value is
mulitiplied by a ratio quantity. For some of the historical data
imported from another system the ratio can be irrational or at least not
representable in any small precision value.

I ended up needing a precision of 8 numeric digits to acceptably
represent these ratios, resulting in a numeric(16,8) type to permit
ratio values up to 99999999.99999999. I probably could've got away with
numeric(13,8) or even numeric(12,8) but as space and performance aren't
utterly critical it didn't seem to be worth the risk of hitting limits
and overflows later. As it is I'm tempted to go to 10 digits of
precision, as there's still a 3 cent difference between the totals from
the old system and the same data imported into the new system.
  
That 3 cent difference is over how many transactions???

The differences I'm seeing are getting into the hundreds of dollars in one 
quarter within this stupid application.


The person/persons who laid this database out do not or did not 
understand the compound rounding errors. I'm just trying to figure out 
how best to fix it. 




You'll encounter similar situations in your materials consumption
tracking (as you detailed below) and other places. So don't discount the
 use of high precision numeric values just yet.

Personally I'd be tempted to use a `double precision' (float8) for
things like materials consumption. Materials consumed in huge quantities
will have lower price rates, and materials consumed in tiny quantities
will often be priced higher. With wally-numbers: You're not going to
care about the 0.0003 kg of steel consumed at a price of $0.1 , but
the same amount of something valuable might have a detectable (if still
sub-cent) value. Floating point numbers are IMO better for that than BCD
numeric. However, since the float will just get converted to numeric
during multiplication with a numeric price-per-mass ratio it may well
not be worth worrying about it.

There's a use for that numeric(20,10).
  


I'm moving all the numeric fields to numeric(20,8).  I feel it's pretty 
safe with that scale setting.  I agree data storage and performance 
aren't the critical concerns they once were.


  

  Unless your doing scientific calculations which we do,
do.   Having that many decimal points for an accounting package is just
nonsense and then its rounded to 4 or 6 in Inventory  and Wip tables
then 2 when the numbers finally hit the GL tables. Who ever laid
these tables out has never had to try and get numbers to balance and
agree across tables :-( .   Every time i dig a little deeper i keep
finding stupid things like this.



It sounds like you might have quite a bit of compounded rounding error
from the successive stages of rounding as data moves through the system.
Maybe you're rounding too aggressively?
  


That's the problem: the database layout is crap. 

I like to store a bit more precision than I have to, unless there's a
business rule that requires rounding to a particular precision. For
example, if your invoice items are rounded to whole cents you'd probably
round the calculated invoice item price when inserting into an invoice
item table.

Of course, that means that
   sum(calculation of invoice item price)
      !=
   sum(rounded price of invoice items)
  
because of rounding. That's fine; you can't balance the two things

exactly because they're actually subtly different things. If you're
using an appropriate rounding method for financial data, like
round-to-even, you'll only ever get a couple of cents difference and
that should be expected and ignored.
  
I normally would, but given that all the tables show different values 
when summed over an accounting period, it's adding up to significant 
differences between the tables. 
  

Some people may think i'm crazy trying to track this down but when
you're only consume 0.003186 lbs of a metal per part that cost 22.7868
per  lb and the work order calls fro 1148 parts.  how the machine rounds
becomes a big problem   (.00318611*1148) = 3.65765 lbs consumed *
22.7868 = $83.3462 . Sense GL tables are 2 decimal the value is stored
as $83.35



Thinking about correct rounding and precision is very important, and far
from crazy.

  

The question quickly becomes what number is the correct number.



Sometimes the answer is both of them - even though they are different.
See the example above with rounded invoice items.

--
Craig Ringer