Re: infinite (fractional) precision

2002-10-10 Thread David Lester


Dear All,

A really, really simple version in Haskell 1.2 has been available from

ftp://ftp.cs.man.ac.uk/pub/arithmetic/Haskell/Era1/Era.hs

for some considerable time.

Of course the only reason for producing it was to show that the language
designers didn't get it right. Take it from me, they never do.

In particular, although (<) and (==) are not computable functions over the
computable reals, min and max are! Similarly, signum isn't computable, but
abs is. The classes Ord and Num are a theoretician's nightmare.

[For the mathematically sophisticated, the problem with these operations
revolves around their continuity (in a weird numeric _and_ information
theoretic sense) properties given the discrete nature of their range.]

Then there's the rounding operation in Haskell 1.2. I must have wasted
more hours fixing bugs caused by my naive understanding of _that_
operation than any other single misapprehension about a programming
language construct in any language!

In other files hanging off of

http://www.cs.man.ac.uk/arch/dlester/exact.html

you'll find a PVS development that shows that (most of) the implementation
is correct. A paper on this theorem-proving work has been accepted for
TCS, but I don't know whether it'll be published in my lifetime; and I'm
unsure about the ethics involved in putting up a copy of the paper on my
website.

However, a summary is: 4 bugs found, 3 only becoming manifest on doing the
PVS work. I guess there's something to this formal methods lark after all.

I'd planned to do a Functional Pearl about this library/package, but the
theoretical development is embarrassingly inelegant when compared to
Richard Bird's classic functional pearls.

If you think it'd be worth it, I'll see what I can do.

I trust that this will save anyone wasting too much more time over this
topic.

David Lester.



On Thu, 10 Oct 2002, Ashley Yakeley wrote:

 At 2002-10-10 01:29, Ketil Z. Malde wrote:
 
 I realize it's probably far from trivial, e.g. comparing two equal
 numbers could easily not terminate, and memory exhaustion would
 probably arise in many other cases.
 
 I considered doing something very like this for real (computable) 
 numbers, but because I couldn't properly make the type an instance of Eq, 
 I left it. Actually it was worse than that. Suppose I'm adding two 
 numbers, both of which are actually 1, but I don't know that:
 
  1.000000000... +
  0.999999999...
 
 The trouble is, as far as I know with a finite number of digits, the 
 answer might be
 
  1.99937425
 
 or it might be
 
  2.00013565
 
 ...so I can't actually generate any digits at all. So I can't even make 
 the type an instance of my Additive class. Not very useful...
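 
The deadlock described above can be made concrete (hypothetical helper names):
from finite prefixes of the two digit streams, the interval bounds on the sum
straddle 2 at every precision, so no first digit is ever determined.

```haskell
import Data.Ratio

-- lower/upper bounds on a number given its integer part and the
-- first k decimal digits of its fractional part
bounds :: Integer -> [Integer] -> Int -> (Rational, Rational)
bounds ip ds k = (lo, lo + 1 % 10 ^ k)
  where lo = fromInteger ip
             + sum [ d % 10 ^ i | (d, i) <- zip (take k ds) [(1 :: Int) ..] ]

one, one' :: (Integer, [Integer])
one  = (1, repeat 0)   -- 1.000000...
one' = (0, repeat 9)   -- 0.999999...

-- bounds on the sum after inspecting k digits of each operand
sumBounds :: Int -> (Rational, Rational)
sumBounds k = (l1 + l2, h1 + h2)
  where (l1, h1) = uncurry bounds one  k
        (l2, h2) = uncurry bounds one' k
-- for every k, fst (sumBounds k) < 2 < snd (sumBounds k):
-- a digit-by-digit adder can never commit to a leading "1." or "2."
```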
 
 


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: infinite (fractional) precision

2002-10-10 Thread David Lester


On Thu, 10 Oct 2002, Jerzy Karczmarczuk wrote:

 Ashley Yakeley wrote:
 
  I considered doing something very like this for real (computable)
  numbers, but because I couldn't properly make the type an instance of Eq,
  I left it. Actually it was worse than that. Suppose I'm adding two
  numbers, both of which are actually 1, but I don't know that:
  
   1.000000000... + 0.999999999...
  
  The trouble is, as far as I know with a finite number of digits, the
  answer might be   1.99937425   or it might be  2.00013565
  
  ...so I can't actually generate any digits at all. So I can't even make
  the type an instance of my Additive class.
 
 You can, unless you are so ambitious that you want to have an ideal solution.
 Doing the stuff lazily means that you will have a thunk used in further
 computations, and the digits will be generated according to your needs.
 
 You *MAY* generate these digits physically ('1' or '2' in your case) if you
 permit yourself to engage in a possibly bottom-less recursive pit, which
 in most interesting cases actually *has* a bottom, and the process stops.
 Please look at my Pi crazy essay. Once the decision concerning the carry is
 taken, the process becomes sane, generative, co-recursive, until the next
 ambiguity.
 
 There are of course more serious approaches: intervals, etc. Infinite-
 precision arithmetic is a mature domain, developed by many people. Actually
 the Gosper arithmetic of continued fractions is also based on co-recursive
 expansion, although I have never seen anybody implementing it using a lazy
 language, and a lazy protocol.

I submitted a paper to JFP about lazy continued fractions in about 1997, 
but got side-tracked into answering the reviewers' comments.

It _is_ possible to do continued fractions lazily, but proving that it's 
correct involves a proof with several thousand cases. A discussion of
that proof can be found in 15th IEEE Symposium on Computer Arithmetic,
Vail 2001. I ought to get around to a journal publication someday.
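
For the curious, the homographic (one-argument) half of Gosper's algorithm is
small enough to sketch lazily in Haskell. This is a toy with hypothetical
function names, and it ignores the sign and termination subtleties that
generate the proof cases mentioned above.

```haskell
-- apply x |-> (a*x + b) / (c*x + d) to a regular continued fraction,
-- given as its (possibly infinite) list of terms
hom :: (Integer, Integer, Integer, Integer) -> [Integer] -> [Integer]
hom (a, b, c, d) xs
  | c /= 0, c + d /= 0, q == q' =              -- both interval ends agree:
      q : hom (c, d, a - q * c, b - q * d) xs  -- emit an output term
  where q  = a `div` c
        q' = (a + b) `div` (c + d)
hom (a, b, c, d) (x : xs) =                    -- otherwise absorb a term
  hom (a * x + b, a, c * x + d, c) xs
hom (a, _, c, _) [] = ratCF a c                -- input exhausted: value is a/c

-- continued fraction of a rational a/c
ratCF :: Integer -> Integer -> [Integer]
ratCF _ 0 = []
ratCF a c = q : ratCF c (a - q * c) where q = a `div` c

sqrt2 :: [Integer]
sqrt2 = 1 : repeat 2                           -- [1; 2, 2, 2, ...]
```

For instance, take 5 (hom (2, 0, 0, 1) sqrt2) gives [2,1,4,1,4], the start of
the continued fraction of 2*sqrt 2 = sqrt 8, produced lazily from an infinite
input; and hom (2, 1, 0, 1) [1, 2] gives [4], i.e. 2*(3/2) + 1.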

David Lester.






 Anybody wants to do it with me? (Serious offers only...)
 
 Jerzy Karczmarczuk




RE: java program

2002-02-27 Thread David Lester

On Tue, 26 Feb 2002, Peter White wrote:

 Since you did not mention the name of your professor, I have forwarded your
 query to both the undergraduate and graduate secretaries at the University
 of Manchester. I am sure they will be able to deal more effectively with
 your homework and your complaints than I can. I am sorry to hear that you
 find yourself embarrassed. You can thank me for my help when you have the
 time.

Though, of course, sending it to Manchester Metropolitan University
might have had a slightly greater effect!

www.mmu.ac.uk

is probably what you'd want.

David Lester, Manchester University.


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: rounding in haskell

2000-02-08 Thread David Lester

I had at first thought that this might be a discussion of the US style
definition of round in the standard prelude that always bites me...
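
For reference, a sketch of the Haskell 98 behaviour (Haskell 1.2 differed in
details I won't vouch for here): the Prelude round picks the even neighbour
when the argument is exactly halfway, rather than always rounding halves up.

```haskell
-- round-half-to-even in the Haskell 98 Prelude:
halves :: [(Double, Integer)]
halves = [ (x, round x) | x <- [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5] ]
-- -2.5 -> -2, -1.5 -> -2, -0.5 -> 0, 0.5 -> 0, 1.5 -> 2, 2.5 -> 2
```

Note that 0.5 and 2.5 both round towards the nearest even integer, not away
from zero as the school rule would have it.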

... Then that it might be the bug in floor within standard-issue
GMP.

Instead it's a discussion of FP problems.

(1) My friend Jean-Michel Muller at Lyon has published a number of
papers on the extension of IEEE 754 to cover transcendentals to
within 1ULP. He has heuristic data to indicate that you need
`about' twice the mantissa size for the internal registers
to make the gaurantee possible.

(2) If anyone is interested (then to 40dp):

tan(10^100)
0.4012319619908143541857543436532949583239

sin(10^100)
-0.3723761236612766882620866955531642957197

(3) The above are from a small library of Exact Arithmetic functions that
my PhD student and I have been working on. Rather gratifyingly,
Norbert Muller (Trier) and I agree on our results to 1000 decimal
places.

Just let me know if you need more digits...

---
David Lester



Re: Rambling on numbers in Haskell

1998-08-13 Thread David Lester

Simon says:

 [...]

 Bottom line: Haskell 2 is pretty wide open.  If some group of you
 out there get excited about this and design and implement a coherent
 set of numeric libraries then I think you'd be welcomed with open
 arms, even if it wasn't compatible with Haskell's current numeric
 class structure.  Sergey's work is moving in that direction, which
 is great.

Perhaps we could have a chat about this sometime this summer. As you
know, I have had a few ideas about this for some time.

 Above all don't leave it to us compiler hackers!  We aren't numeric
 experts like some of you are, and are most unlikely to do a good job
 of revising Haskell's numeric libraries.

All I ask of compiler hackers is that they do not treat (a+b)+c as
a+(b+c)!
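
The request is not idle: with IEEE doubles the two groupings can differ by
the full magnitude of the answer.

```haskell
-- with Double, (a + b) + c and a + (b + c) need not agree:
big :: Double
big = 1e100

left, right :: Double
left  = (big + (-big)) + 1   -- exact cancellation happens first: 1.0
right = big + ((-big) + 1)   -- the 1 is absorbed into -1e100:    0.0
```

A compiler that "re-associates" here silently turns a correct answer into
zero, which is why user-placed parentheses must be respected.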

---
David Lester.





Re: vagaries of computer arithmetic

1997-03-10 Thread David Lester

 What about `=='?

A very good question!

The answer to which is that `==' doesn't really exist for real numbers
and nor does `<=', but max and min do. Even for Floats and Doubles,
sensible numeric code ought to have overlapping domains: i.e.

f(x) = if x < k +/- epsilon then g(x) else h(x)
                ^^^
[the tolerance of the comparison]

then g(x) ought to be defined up to k+epsilon, and h(x) ought to be
defined from k-epsilon.
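
One way to realise such a tolerant test against a computable real (a sketch
with hypothetical names, assuming approximations accurate to within 1/2^n):
keep refining until the approximation interval clears one of the two
overlapping thresholds. Because the branches overlap by epsilon, this always
terminates, unlike an exact comparison.

```haskell
import Data.Ratio

-- approx n is assumed to be within 1/2^n of the number it denotes
type Approx = Int -> Rational

-- softLT approx k eps: True guarantees x < k + eps,
-- False guarantees x > k - eps; terminates whenever eps > 0.
softLT :: Approx -> Rational -> Rational -> Bool
softLT approx k eps = go 0
  where
    go n | hi < k + eps = True
         | lo > k - eps = False
         | otherwise    = go (n + 1)      -- refine and retry
      where a   = approx n
            err = 1 % (2 ^ n)
            lo  = a - err
            hi  = a + err

third :: Approx
third = const (1 % 3)   -- a (trivially exact) approximation of 1/3
```

So softLT third (1 % 2) (1 % 100) is True and softLT third (1 % 4) (1 % 100)
is False; near k itself either answer may come back, which is exactly why
g and h must both be defined on the overlap.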

   Is it guaranteed that if `x == y' is true then
 `f x == f y' is true for all `f' (presuming that `f x == f y' is
 type-correct and does not evaluate to bottom)?

In general this isn't possible.

 For that matter, are functions required to always return the same results
 for the same arguments?

Exercise:

Without using non-computable operations and concepts such as
`==', cast the above into mathematics.

[This can be done, see my paper in Glasgow FP workshop 1991, section 1]

Provided you accept the limitation of computable continuity on the
function and mumble about what it means to be the `same' then yes, we
can ensure that functions give the `same' answer for the `same'
arguments.

 If that were the case, then it would prevent certain optimizations,
 e.g. replacing `(x + 0.1) + 0.2' with `x + 0.3', because such
 optimizations can change the results returned.  Does Haskell allow
 these sort of optimizations?

What I would hope is that user-placed parentheses would prevent
constant-folding. The alternative is that the Haskell compiler becomes
a skilled Numerical Analyst.

---
David Lester.






Re: function denotations

1993-11-17 Thread David Lester



Greg,

Full abstraction: there is junk in the standard denotational domains,
in particular a parallel or function (por). We can write two new
functions f1 and f2, such that f1 por = 1 and f2 por = 2, (and both
give bottom for other boolean functions of the correct type).

So: does f1 = f2? Using an extensional definition of function equality
(applying the functions to the same argument gives the same answer),
we conclude that denotationally f1 /= f2, and operationally f1 = f2.
(Because por is not a sequentially definable function it doesn't appear
in the operational domain.)
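
A toy rendering of the example, modelling bottom by Nothing (a blunt
simplification of genuine non-termination, but it shows the shape of the
argument; all names here are hypothetical):

```haskell
-- parallel or: defined whenever either argument is True
por :: Maybe Bool -> Maybe Bool -> Maybe Bool
por (Just True) _             = Just True
por _ (Just True)             = Just True
por (Just False) (Just False) = Just False
por _ _                       = Nothing

-- a left-sequential or: it must inspect its first argument
lor :: Maybe Bool -> Maybe Bool -> Maybe Bool
lor (Just True)  _ = Just True
lor (Just False) b = b
lor Nothing      _ = Nothing

-- f k g tests whether g behaves like por on three probes
f :: Integer -> (Maybe Bool -> Maybe Bool -> Maybe Bool) -> Maybe Integer
f k g | g (Just True) Nothing       == Just True
      , g Nothing (Just True)       == Just True
      , g (Just False) (Just False) == Just False = Just k
      | otherwise                                 = Nothing

f1, f2 :: (Maybe Bool -> Maybe Bool -> Maybe Bool) -> Maybe Integer
f1 = f 1
f2 = f 2
-- f1 por = Just 1 and f2 por = Just 2, yet f1 g = f2 g = Nothing for
-- every sequential g such as lor: denotationally distinct,
-- operationally indistinguishable.
```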

Using Ong and Abramsky's fully abstract model of the lazy lambda
calculus will circumvent this problem (by equating f1 and f2) at `some
minor conceptual inconvenience'.

The destruction of the congruence arises because the proof makes
explicit use of the extensional definition of equality of functions.
A fudge is possible if we restrict attention at output to non-function
values.

See: Part I of my DPhil thesis, PRG-73, Oxford [1]; or JE Stoy,
"Congruence of two programming language definitions", TCS 13(2), 1981.

---
David Lester, Informationsbehandling, Chalmers TH, 412-96 Goteborg, Sweden.
[but usually: CS Department, Manchester University, Manchester M13 9PL, UK.]

[1] In case anyone is interested, for FPCA this year, Sava Mintchev and
I ran my congruence proof through Isabelle, and concluded that the
inductions for the proofs of Lemmata 3.18 and 3.19 are mutual, not
separate.
The embarrassment of theorem provers!