Thank you, all of you, for engaging with my off-the-wall problem. You've been tracking my thoughts closely, which is both good and bad: good because it reassures me I'm still on this planet, and able to express myself to people still on the leading edge; bad because it may mean I'm dropping so far behind that I'm no longer aware of it.
But I'm benefiting from it all by being handed so many novel (to me) ideas, and learning about so many primary sources that I'd never have discovered for myself, or arrived at in any other way. I'm still very much in the early stages of my investigations, but already I see provocative results.

A Newton's method algorithm for which I foresaw problems, which duly arrived on schedule, now runs for an appreciable time (but not too long), and hits the target squarely, which it never did before. And this before I'd even begun to "rationalize" the code, if I dare use that term. It's based on (g^:_) and, as you could predict, I've loaded up (g) with timeouts, diagnostics, heuristics, stopping rules, fancy traces (it even plots a graph), all the time developing notions of "close enough" to reinforce or replace J's tolerant comparison, and so to snatch empirical success from the jaws of (theoretical) failure. All redundant, because it's turned into a totally different ball-game. (A toy sketch of the bare (g^:_) pattern appears below.)

Raul offers me an interesting idea which I must find time to explore: to replace my traditional notions based on Gaussian error with ones expressing asymmetrical regrets, e.g. where the cost of overshooting far exceeds the cost of falling short (again, a toy sketch appears below). I read a book recently which made the case for optical illusions (notably Poggendorff) having evolved due to just such a system of regrets, arising out of swinging through the branches of trees. (Which solves the problem: if evolution is so wonderful, why doesn't the visual cortex compute the correct solution?) One might consider something analogous to an optical illusion to overcome certain sorts of instability in an N-R algorithm.

William invites me to expect phenomena, some of which have already unfolded before my eyes, including the runaway rational: the program feeling its way towards a solution which is precisely 1. I read once that the physical universe takes its form and being from the interplay of ratios of small numbers. Back when the Great Museum of Alexandria was in its heyday, the neo-Platonists tried to work out the consequences of that insight, and indeed it motivates their preoccupation with Diophantine equations. Perhaps the idea goes right back to Pythagoras and the Pythagoreans. Anyway, what with string theory and charmed quarks, that particular line of research is still alive and kicking, and possibly never really let up.

One idea to come out of all this, which may or may not be original, but is impossible to contemplate addressing with floating-point numbers, is that inside a tool which is working with real-world observations, the numbers chiefly of interest will be small ones, in terms of the amount of storage they occupy, so that when designing iterative methods there may be some traction in steering away from large numbers into neighbouring small ones. In opposition to that idea, it's worth remembering that between any two rational numbers there is an uncountable infinity of irrational ones, and any attempt to avoid them might reintroduce the very sort of noise I'm aiming to eliminate.

But the overwhelming sensation I have at the moment is one of admiration plus gratitude for the originators (and improvers) of J, for all that hidden work where nobody imagined it would matter: to seamlessly integrate rational numbers with all the existing precisions, rather than telling themselves "only a fool would want to do that" and simply leaving loose ends. Clearly a labour of love, a commodity in all too short supply in the programming shops of large organisations.
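To make the (g^:_) shape concrete, here is a minimal sketch: a toy example only, not the algorithm described above. It packages one Newton-Raphson step for x^2 = 2 as a verb g and drives it to a fixed point with ^:_ .

   g =: ] - (_2&+@*:) % +:   NB. one Newton step: x - (x^2 - 2) % (2*x)
   g^:_ ] 1                  NB. floats: settles at 1.41421... under tolerant comparison
   g^:(i.6) 1x               NB. exact rationals: 1 3r2 17r12 577r408 ...
                             NB. here g^:_ 1x would never settle, since no iterate is
                             NB. exactly a fixed point: hence the need for a "close
                             NB. enough" stopping rule of one's own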
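And a toy sketch of the asymmetrical-regret idea, purely illustrative (the factor of ten is arbitrary): score an error e not by its square, which treats overshoot and undershoot alike, but by a cost that penalises overshoot far more heavily.

   regret =: (10 * 0&>.) - 0&<.   NB. overshoot (e > 0) costs 10 per unit of error,
                                  NB. undershoot (e < 0) costs 1 per unit
   regret 3 _3 0.5 _0.5 0
30 3 5 0.5 0
   *: 3 _3 0.5 _0.5 0             NB. squared (Gaussian-style) error is symmetric by contrast
9 9 0.25 0.25 0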
It seems I'm now really reaping the benefits of this labour of love. Instead of having to hack through the undergrowth whenever I leave the beaten track, which has ever been my experience with Objective-C, I find I can take a bold leap of the imagination and within a day or two come up with code which works first go (well, almost). Nothing whets the appetite for blue-skies research more than that. And nothing kills it faster than having to use a kludge built by an army of 9-to-5ers to do tasks which have not been clearly foreseen and worried over for at least the preceding year. (I look at what I've written and I say to myself: here's a man who no longer has to work to deadlines!)

Ian Clark

On Fri, 29 Mar 2019 at 18:07, Raul Miller <[email protected]> wrote:

> On Fri, Mar 29, 2019 at 1:57 PM Don Guinn <[email protected]> wrote:
> > There is a new IEEE decimal float standard kicking around. It could address
> > many problems we run into when trying to do calculations which are critical
> > in decimal using binary floats. Money is a big one. I always do money
> > calculations in pennies for that reason. But I believe conversions in
> > monetary exchanges are given in decimal fractions, not binary fractions. I
> > might be wrong. But we either use floating point to do the calculations or
> > go to a lot of trouble to do them in decimal. But when making such
> > conversions legally they must be exact as described in the conversion
> > method. Can't be off even a penny.
>
> This depends on context. Some contexts allow estimates and/or have other
> issues.
>
> For example, USA tax forms typically allow you to round money amounts
> to the nearest dollar.
>
> For example, international monetary exchange conversions are time
> sensitive.
>
> That said, it does indeed look like IEEE may eventually catch up with
> ancient IBM practice:
>
> https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_754-2008_encoding
>
> Thanks,
>
> --
> Raul

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
