Here's an example of using this kind of thing to find a rational
approximation of a floating point approximation of an irrational number:

f2r=: 4 : 0  NB. x mediant steps toward positive target y
  r=. ''
  q=. y
  'a b c d'=. 0 1 1 0x   NB. bounds: a%b is 0, c%d is infinity
  for. i.x do.
    r=. r, M=. (a+c)%(b+d)   NB. mediant of the current bounds
    if. M < q do.
      'a b'=. (a+c),(b+d)    NB. mediant too small: raise the lower bound
    elseif. M > q do.
      'c d'=. (a+c),(b+d)    NB. mediant too big: lower the upper bound
    elseif. do. r return.    NB. tolerant match: stop early
    end.
  end.
  r
)
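For non-J readers, the same mediant walk can be sketched in Python. This is a rough analogue, not the J verb itself: the explicit eps argument is my stand-in for J's default comparison tolerance of 2^_44 (which the J code gets implicitly from tolerant < and >, and which is relative rather than absolute), and fractions.Fraction plays the role of J's extended rationals.

```python
from fractions import Fraction
import math

def f2r(steps, target, eps=Fraction(1, 2**44)):
    """Stern-Brocot mediant walk toward a positive target.

    Rough analogue of the J verb f2r; eps approximates J's default
    comparison tolerance, here applied as an absolute bound.
    """
    t = Fraction(target)             # exact rational value of the float
    a, b, c, d = 0, 1, 1, 0         # bounds: a/b = 0, c/d = "infinity"
    out = []
    for _ in range(steps):
        m = Fraction(a + c, b + d)  # mediant of the current bounds
        out.append(m)
        if abs(m - t) <= eps:
            return out              # tolerant match: stop early
        if m < t:
            a, b = a + c, b + d     # mediant too small: raise lower bound
        else:
            c, d = a + c, b + d     # mediant too big: lower upper bound
    return out

mediants = f2r(1000, math.pi)
print(len(mediants), mediants[-1])  # 328 5419351/1725033
```

With target math.pi this stops after 328 mediants at 5419351/1725033, agreeing with the J session below.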

Technically, what I want is just the last value of M, but I'm also
interested in how fast this converges, and I also want to compare this
mechanism with J's x: implementation:

   #1000 f2r 1p1
328
   {:1000 f2r 1p1
5419351r1725033
   x:1p1
1285290289249r409120605684
   {:0.1,5419351r1725033-1285290289249r409120605684
1.71261e_13

But which approximation carries the most error?

It's simple to look up more precise approximations of pi. So (beware
email-induced line wrap):

pi=: (10^100x)%~(10^10x)#.3 1415926535 8979323846 2643383279
5028841971 6939937510 5820974944 5923078164 0628620899 8628034825
3421170679x

With that:
   }.0.1,5419351r1725033 1285290289249r409120605684-pi
2.21448e_14 _1.49116e_13
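The same error check can be reproduced in Python with exact rational arithmetic, using the 100 digits of pi given above (the variable names here are mine, for illustration):

```python
from fractions import Fraction

# pi to 100 decimal places, digit groups as in the definition above
PI_100 = Fraction(int(
    "3" "1415926535" "8979323846" "2643383279" "5028841971" "6939937510"
    "5820974944" "5923078164" "0628620899" "8628034825" "3421170679"
), 10**100)

mediant_err = Fraction(5419351, 1725033) - PI_100        # mediant walk's result
x_err = Fraction(1285290289249, 409120605684) - PI_100   # x:'s result

print(float(mediant_err), float(x_err))  # about 2.21e-14 and -1.49e-13
```

So the mediant walk's error is roughly an order of magnitude smaller here.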

So it looks like this approach *might be* a more precise (but
presumably more computationally expensive) alternative to x:

We need a significantly larger sample set, though, before we can have
much confidence in this kind of test result. (It would be important to
consider different epsilon values - we're using J's default here - and
it would be important to consider different target irrational numbers.
Also, of course, this f2r assumes a positive target.)
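As a cheap way of widening the sample, Python's Fraction.limit_denominator (a best-rational-under-a-denominator-bound search, so only a loose stand-in for x:) can be pointed at a few different targets. Everything here - the denominator bound, the choice of targets - is my own scaffolding for illustration, not part of the J experiment:

```python
import math
from fractions import Fraction

# best rational with denominator <= 2_000_000 for a few float targets;
# the bound and the targets are arbitrary choices for illustration
for name, target in [("pi", math.pi), ("e", math.e), ("sqrt 2", math.sqrt(2))]:
    t = Fraction(target)               # exact rational value of the float
    r = t.limit_denominator(2_000_000) # best bounded-denominator rational
    print(name, r, float(r - t))
```

For pi this recovers 5419351/1725033, the same fraction the mediant walk found.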

There would not be much point going with a less efficient approach if
it's not reliably an order of magnitude more accurate...

Thanks,

-- 
Raul



On Wed, Jan 31, 2018 at 3:12 PM, William Tanksley, Jr
<[email protected]> wrote:
> The Stern-Brocot tree is _incredible_; I've been trying to grasp the
> implications for a while. It entirely includes the ability to
> programmatically search all of the rationals, to automatically find the
> best rational approximation to any testable real (for several excellent and
> useful definitions of "best"), and so on. It's been expanded to include the
> n-tuples of rationals as well, including limited support for complex
> numbers (although that theory isn't _quite_ as simple).
>
> You don't need to "know" your target (and think about it, an irrational
> real number cannot possibly be _known_ without computing it); you only need
> to be able to test whether you're below, above, or at your target (for each
> dimension you're considering). So you have to know something _about_ your
> target. I suspect this means your target must be computable -- Chaitin's
> Omega is one of aleph-1 examples of such uncomputable reals. (Wikipedia: "a
> Chaitin constant (Chaitin omega number)[1] or halting probability is a real
> number that, informally speaking, represents the probability that a
> randomly constructed program will halt.")
>
> It's an incredible way to look at the deep structure of the rationals,
> which we're often tempted to mistake as being a simple, flat, and boring
> continuum.
>
> The following is a good video introduction -- not multimedia, just a good
> clear lecture. The next few episodes add some more recent research in the
> tree as well.
> https://www.youtube.com/watch?v=CiO8iAYC6xI
>
> -Wm
>
> On Wed, Jan 31, 2018 at 3:56 AM Martin Kreuzer <[email protected]> wrote:
>
>> After some reading I now accept that going through the tree I could
>> approximate any real number with fractions (taking mediants on the
>> way). But doesn't that mean I have to know the target in the first place..?
>>
> ----------------------------------------------------------------------
> For information about J forums see http://www.jsoftware.com/forums.htm