On Mon, Oct 2, 2017 at 9:10 PM, Aaron Meurer <asmeu...@gmail.com> wrote:

> That's a great blog post Fredrik. Since your blog doesn't seem to have
> a comments box, I will add my comments here.
>

Thanks!


> One thing I would add to the mpmath history is the SymPy 1.0 release
> (March 2016), which officially made mpmath an external dependency of
> SymPy. Prior to that, a copy of mpmath shipped as sympy.mpmath.
>
>
Good point. I've shamelessly copied this into the post.


> I've been using mpmath (via SymPy) myself quite a bit in my own recent
> research (computing the CRAM approximation to exp(-t) on [0, oo) to
> arbitrary precision). I'm always amazed at how stable mpmath is. It
> always gives what seem to be correct answers, or fails nicely if it
> can't. While I did find some minor holes in mpmath (I had to tweak the
> maxsteps and tol parameters of findroot (via sympy.nsolve), see
> https://github.com/fredrik-johansson/mpmath/issues/339), it was quite
> easy to work around them.
>
> Regarding Arb, I would love to see Python bindings. I would suggest
> writing some ArbPy wrapper library, so that people can use it in
> Python on its own, and then we can use that to improve mpmath and
> SymPy. There's been some interest in using something like Arb for code
> generation. The idea is this: you can use SymPy to create a model for
> something, and then use the codegen module to generate fast machine
> code to compute it. But the problem is that you don't necessarily know
> how precise that machine code is. What if there are numerical issues
> that lead to highly inaccurate results? So the idea is to swap out the
> backend for the code generator to something like Arb, and perform the
> same computation with guaranteed bounds. This will obviously be slower
> than the machine code, so you wouldn't use it in practice, but instead
> you'd use it to get some assurance on the accuracy of your results
> with machine floats. If the accuracy is bad, you might have to look
> into modifying the algorithm. Or in the worst case, you just have to
> use a slower arbitrary precision library to get the precision you
> need. But critically, since everything is code generated, the whole
> thing would (in theory at least) be as simple as changing some flag in
> the code generator.
>

As I wrote in the post, python-flint (
https://github.com/fredrik-johansson/python-flint) already exists:

>>> from flint import arb, good
>>> arb(3).sqrt()
[1.73205080756888 +/- 3.03e-15]
>>> good(lambda: arb(1) + arb("1e-1000") - arb(1), maxprec=10000)
[1.00000000000000e-1000 +/- 3e-1019]

It still needs work on the setup code, documentation, tests, general code
cleanup, and interface tweaks... volunteers are welcome.
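To make the accuracy-checking idea from the quoted paragraph concrete, here is a minimal pure-Python sketch. Fraction stands in for an Arb-style exact backend (illustrative only; in a real setup you would swap in python-flint's arb so the comparison carries rigorous error bounds):

```python
# Evaluate the same "generated" expression with machine floats and with
# an exact backend, then compare the two results. Fraction plays the
# role of the arbitrary-precision/interval backend here.
from fractions import Fraction

def f(one, tiny):
    # The generated code: (1 + tiny) - 1, prone to catastrophic
    # cancellation when tiny is below the float64 epsilon.
    return (one + tiny) - one

# Machine-float evaluation: the tiny term is lost entirely.
float_result = f(1.0, 1e-20)
print(float_result)  # 0.0

# The same code run through the exact backend recovers the true value.
exact_result = f(Fraction(1), Fraction(1, 10**20))
print(exact_result == Fraction(1, 10**20))  # True
```

If the two disagree badly, as here, that flags the generated float code as numerically unreliable for those inputs, without the exact backend ever being needed in production.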

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sympy/CAJdUXT%2Bh3Kz8N3Z3KXMZ0XhwUH%3Dm2zZeW_Ri-C1Jpiy3wgYh%2BQ%40mail.gmail.com.
