Re: Implement the "unum" representation in D ?

2015-09-17 Thread Anthony Di Franco via Digitalmars-d

On Tuesday, 15 September 2015 at 05:16:53 UTC, deadalnix wrote:

On Saturday, 11 July 2015 at 03:02:24 UTC, Nick B wrote:

On Thursday, 20 February 2014 at 10:10:13 UTC, Nick B wrote:

...


If you are at all interested in computer arithmetic or 
numerical methods, read this book. It is destined to be a 
classic.


To be honest, that sounds like a snake-oil salesman's pitch to me 
rather than science. It's all hand-waving; nothing concrete is 
provided, and the whole thing is wrapped in far too many 
superlatives.


The guy seems to have good credentials. Why should I read that 
book?


I read the whole book and did not regret it at all, but I was 
already looking for good interval arithmetic implementations. I 
found that the techniques are not too different (though improved 
in important ways) from what is mainstream in verified computing. 
I found signs that techniques like this are standard in beam 
physics. (Caveat, I am not a beam physicist, but my friend at 
CERN is.) And the bibliography for MSU's COSY INFINITY, a 
verified computing tool from the beam physics community, provided 
a lot of interesting background information: 
http://www.bt.pa.msu.edu/pub/


What is perhaps most surprising is not that the techniques work 
well but that they can be implemented efficiently in a sensible 
amount of hardware, little more expensive than bare floating-point 
hardware. Even perhaps the most surprising results, those about 
search-based global optimization with unums, have precedent; see 
e.g. "Rigorous Global Search using Taylor Models," M. Berz, K. 
Makino, Symbolic Numeric Computation 2009, (2009) 11-19, 
http://bt.pa.msu.edu/cgi-bin/display.pl?name=GOSNC09 (Taylor 
model techniques are similar to direct computation with intervals 
but add a bit more sophistication).
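To make the connection concrete, here is a toy sketch (in Python, for comparison with the pyunum port mentioned elsewhere in this thread) of plain interval arithmetic and the kind of branch-and-bound global search that the Berz/Makino paper makes rigorous. All names are invented for illustration, and a real implementation must round endpoints outward, which this sketch omits:

```python
class Interval:
    """Closed interval [lo, hi]; a toy stand-in for rigorous
    interval arithmetic (real implementations round outward)."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = hi if hi is not None else lo

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def global_min(f, domain, tol=1e-6):
    """Branch and bound: discard subintervals whose guaranteed lower
    bound exceeds the best verified upper bound found so far; returns
    a verified upper bound on the global minimum of f over domain."""
    best_hi = f(domain).hi
    work = [domain]
    while work:
        box = work.pop()
        img = f(box)
        if img.lo > best_hi:
            continue  # this box cannot contain the global minimum
        best_hi = min(best_hi, img.hi)
        if box.hi - box.lo < tol:
            continue  # narrow enough; stop splitting
        mid = (box.lo + box.hi) / 2
        work += [Interval(box.lo, mid), Interval(mid, box.hi)]
    return best_hi

# The classic "dependency problem": x*x over [-1, 2] evaluates to
# [-2, 4], wider than the true range [0, 4].  Taylor models exist
# to reduce exactly this kind of overestimation.
x = Interval(-1.0, 2.0)
print(x * x)                                  # [-2.0, 4.0]
print(global_min(lambda v: v * v, x) < 1e-3)  # True
```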


So, from my perspective, I think a unum library would at least be 
an interesting and useful library, and roughly in the style of 
the Mathematica / Python libraries could reduce unum interval 
computations to floating point computations with a modest amount 
of overhead. There is a sense in which we might expect the small 
overhead up front to be well worth it in the overall system: less 
haste, more speed. Hacks to try to compensate for incorrect 
behavior after the fact may end up being more costly overall, 
certainly to the programmer but perhaps also to the machine. One 
example is the common need to stick a loop over an entire vector 
or matrix into the inner loop of an iterative method, 
renormalizing to keep floating point rounding errors from 
accumulating.
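As a hedged illustration of that renormalization pattern (a sketch, not anything from the book): in power iteration, every step carries an extra O(n) pass over the vector purely to renormalize, largely so that floating-point magnitudes do not drift out of range as the iteration proceeds.

```python
import math

def power_iteration(matvec, v, steps=50):
    """Estimate the dominant eigenvalue of the operator `matvec`.
    The per-step renormalization (dividing by the norm) is the extra
    inner loop over the whole vector discussed above."""
    for _ in range(steps):
        w = matvec(v)
        norm = math.sqrt(sum(x * x for x in w))  # extra O(n) pass per step
        v = [x / norm for x in w]
    return norm, v

# 2x2 symmetric matrix with eigenvalues 3 and 1.
A = [[2.0, 1.0], [1.0, 2.0]]
mv = lambda vec: [sum(a * x for a, x in zip(row, vec)) for row in A]
lam, vec = power_iteration(mv, [1.0, 0.0])
print(round(lam, 6))  # 3.0
```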


Whether this library should be part of the standard library, I 
don't know. It would seem to depend on how much people want the 
standard library to support verified numerical computing. If it 
is clear that verified numerical computing needs good support in 
the standard library, something like unums should be there, maybe 
even with some other techniques built on top of them (Taylor 
model or Levi-Civita for example).


Anthony


Re: Implement the "unum" representation in D ?

2015-09-14 Thread Anthony Di Franco via Digitalmars-d

On Wednesday, 22 July 2015 at 20:41:42 UTC, jmh530 wrote:
On Wednesday, 22 July 2015 at 19:28:41 UTC, Andrei Alexandrescu 
wrote:

On 7/13/15 1:20 AM, Nick B wrote:

All we can do now, with our limited resources, is to keep an 
eye on developments and express cautious interest. If someone 
able and willing comes along with a unum library for D, that 
would be great.




The book has quite a bit of Mathematica code at the end. A 
first pass at a unum library could probably just involve 
porting that to D.


Here is a Python port of the Mathematica code which is that much 
closer to D:

https://github.com/jrmuizel/pyunum

Regardless of whether the representation follows the unum bitwise 
format proposed in the book, interval arithmetic semantics that 
keep proper account of the amount of precision, and that 
distinguish exact values from open/closed intervals on the 
extended real line, would be quite valuable for verified 
computation and for the simple root finding, optimization, and 
differential equation solving via search that they enable. Such a 
library could build on hardware floats directly and/or on an 
extended-precision float library such as MPFR to keep performance 
close to raw floats.
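To illustrate the exact-vs-inexact distinction (a sketch with invented names, not the book's encoding): each endpoint carries an open/closed flag, so "exactly 1.0" and "somewhere strictly between 1.0 and the next float up" are different values, which is the distinction the unum ubit tracks.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Bound:
    value: float  # may be float('inf') or float('-inf')
    open: bool    # True means the endpoint itself is excluded

@dataclass(frozen=True)
class UInterval:
    lo: Bound
    hi: Bound

    def is_exact(self):
        """An exact value is a degenerate closed interval [v, v]."""
        return (self.lo.value == self.hi.value
                and not self.lo.open and not self.hi.open)

# Exactly 1.0 ...
exact = UInterval(Bound(1.0, False), Bound(1.0, False))
# ... versus an unknown real strictly between 1.0 and the next float.
inexact = UInterval(Bound(1.0, True),
                    Bound(math.nextafter(1.0, 2.0), True))
print(exact.is_exact(), inexact.is_exact())  # True False
```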


The bitwise format is intended to fit as much data as possible 
through the time-energy bottleneck between main memory and the 
processor. This would clearly be of great value if the format 
were supported in hardware but of less clear value otherwise. The 
semantic improvements would be quite welcome regardless. They go 
much further than best practices with floats to put error bounds 
on computations, handle over/underflow and infinity correctly, 
and keep algorithms correct by default. We might also consider 
small semantic improvements if not rigidly bound by the proposed 
formats and the need to implement in hardware. For example, 
division by an interval that includes zero could return the union 
of the results from the intervals on either side of the zero, as 
well as NaN. (In the proposed format, 1-D intervals are strictly 
represented as the convex hull of at most two points, so an 
interval like this, a sentinel value unified with two disjoint 
intervals, is not supported even though it has better semantics.)
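A minimal sketch of that division rule, assuming plain closed intervals as (lo, hi) tuples and a dividend that does not itself contain zero (all function names here are invented for illustration):

```python
import math

def mul(a, b):
    """Interval multiplication: min/max over the endpoint products."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def recip(lo, hi):
    """Reciprocal of an interval whose interior excludes zero."""
    if lo == 0.0:
        return (1.0 / hi, math.inf)
    if hi == 0.0:
        return (-math.inf, 1.0 / lo)
    return (min(1.0 / lo, 1.0 / hi), max(1.0 / lo, 1.0 / hi))

def divide(a, b):
    """a / b as a *list* of intervals.  A divisor spanning zero
    yields the two disjoint pieces on either side of zero rather
    than their convex hull (which would be all of (-inf, inf))."""
    b_lo, b_hi = b
    if b_lo < 0.0 < b_hi:
        return [mul(a, recip(b_lo, 0.0)), mul(a, recip(0.0, b_hi))]
    return [mul(a, recip(b_lo, b_hi))]

print(divide((1.0, 2.0), (-1.0, 2.0)))
# [(-inf, -1.0), (0.5, inf)]
```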


Anthony