[sympy] Re: fastlog name

2009-07-08 Thread Fredrik Johansson

On Wed, Jul 8, 2009 at 9:09 PM, smichr wrote:
>
>
>
> On Jul 8, 10:23 pm, Ondrej Certik  wrote:
>
>> Ah, yes, I think it should be called fastlog2. Fredrik, what do you think?
>>
> And someone who has used this should probably comment on the need to
> have the sign ignored: the fastlog for both 16 and -16 would be 4 as
> it is written now. *Should* the sign be ignored or should a value
> error be raised or should the docstring say that the fastlog of the
> absolute value of x is being computed?

It's intended to compute an upper bound for the base-2 logarithm of
the absolute value of a number. This is used to determine how much
precision is required for various calculations. Whether it's exact or too
high by 1 or 2 bits doesn't really matter.

It's also strictly intended as an internal function, so it probably
doesn't need much more elaborate documentation or a more precise name
(unless its present role is unclear to someone trying to understand
the internals of the evalf module).
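
The idea, for an ordinary Python float, is roughly the following (just an
illustration of the principle, not the actual helper, which operates on
mpmath's raw mpf values):

from math import frexp

def fastlog2_bound(x):
    # crude upper bound for log2(|x|); never more than one bit too high,
    # which is all that precision estimates need
    if x == 0:
        return float('-inf')
    m, e = frexp(abs(x))   # |x| = m * 2**e with 0.5 <= m < 1
    return e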

Fredrik




[sympy] Re: e.n(3667) is imprecise

2009-07-08 Thread Fredrik Johansson

On Wed, Jul 8, 2009 at 10:07 PM, Ondrej Certik wrote:

> so I am confused about those digits at the end. But apparently they
> can't be trusted.

This is arguably not a bug. It's hard to determine whether a given
decimal expansion that goes "999..." or "000..." should be rounded up
or down. The only way to do it is to continue calculating until a
digit other than 9 or 0 appears, and in general it's impossible to
tell whether trying will get you stuck in an infinite loop.

You could write a function (or add an evalf option) that tries to find n
digits and have it look for trailing 9's or 0's, but you'd have to
implement some stopping criterion.
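
Something along these lines (untested sketch; it assumes the value prints
as a plain decimal string):

from sympy import E

def ndigits(expr, n, max_prec=1000):
    # bump the working precision until the guard digits just past position
    # n are not a solid run of 9s or 0s, or give up at max_prec
    prec = n + 10
    while prec <= max_prec:
        digits = str(expr.evalf(prec)).replace('-', '').replace('.', '')
        guard = digits[n:]
        if guard.strip('9') and guard.strip('0'):
            return expr.evalf(n)
        prec *= 2
    return expr.evalf(n)   # stopping criterion hit; last digit may be off

print(ndigits(E, 30))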

Fredrik




[sympy] Re: subs without error

2009-08-06 Thread Fredrik Johansson

On Thu, Aug 6, 2009 at 11:08 PM, Vinzent
Steinberg wrote:
>
> On 6 Aug., 21:35, "Aaron S. Meurer"  wrote:
>> Yes, you should avoid using the names I, E, S, N, C, or O (uppercase
>> only) for Symbols because they are predefined by SymPy.  
>> See http://docs.sympy.org/gotchas.html#id2
>
> The new assumptions system (yet to be released) also uses Q.
>
>> By the way, you can use the mnemonic COSINE to remember these.
>
> Nice one! But it will be significantly harder to find another mnemonic
> with a Q. :)

Q-COSINE?

http://mathworld.wolfram.com/q-Cosine.html

Fredrik




[sympy] Re: NameError with nsolve

2009-08-12 Thread Fredrik Johansson

On Tue, Aug 11, 2009 at 10:58 PM, Vinzent
Steinberg wrote:
>
> # mpmath has a solver for polynomials, but we have to convert it to a
> # list of coefficients (please note that the results are not very accurate,
> # you can refine them using an iterative solver)

polyroots should give full accuracy unless there are repeated roots.
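
For instance (with an arbitrary example polynomial, not the one from this
thread):

from mpmath import mp, polyroots

mp.dps = 50
# coefficients from the leading one down; this solves x**3 - 3*x + 1 = 0
print(polyroots([1, 0, -3, 1]))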

> Indeed it's strange that the imaginary parts don't vanish, even if you
> use higher precision for evaluating. This smells like a bug (assuming
> the roots are really real). Fredrik, what do you think?
> Please note that the second variant is somewhat inaccurate, you can
> however use it as starting points for the first variant (see
> documentation).

It should work if you pass myEqConstants.n(50) instead of
myEqConstants as input. I think it assumes that the input is only
15-digit accurate.

Fredrik




[sympy] ANN: mpmath 0.13 released

2009-08-13 Thread Fredrik Johansson

Hi all,

Version 0.13 of mpmath is now available from the website:
http://code.google.com/p/mpmath/

It can also be downloaded from the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.13

Mpmath is a pure-Python library for arbitrary-precision floating-point
arithmetic that implements an extensive set of mathematical functions.
It can be used as a standalone library or via SymPy
(http://code.google.com/p/sympy/), and is also available as a
component of Sage (http://sagemath.org/).

Version 0.13 implements about 30 new special functions, including
Kelvin, Struve, Coulomb, Whittaker, associated Legendre, Meijer G,
Appell, incomplete beta, generalized exponential integral, Hurwitz
zeta and Clausen functions. The algorithms for hypergeometric-type
functions have been greatly improved to robustly handle arbitrarily
large arguments and limit cases of the parameters. Other new features
and bug fixes are included as well.
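
A few of these in action (example arguments picked at random):

from mpmath import mp, ber, whitm, betainc, expint

mp.dps = 25
print(ber(2, 3.5))            # Kelvin function ber_2(3.5)
print(whitm(1, 0.5, 2))       # Whittaker function M_{1,1/2}(2)
print(betainc(2, 3, 0, 0.4))  # incomplete beta integral
print(expint(3, 2.5))         # generalized exponential integral E_3(2.5)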

For a more comprehensive changelog, see:
http://mpmath.googlecode.com/svn/tags/0.13/CHANGES

For development tidbits and demonstrations of the new features, see
the blog: http://fredrik-j.blogspot.com/

Extensive documentation is available at:
http://mpmath.googlecode.com/svn/tags/0.13/doc/build/index.html

Bug reports and other comments are welcome on the issue tracker at
http://code.google.com/p/mpmath/issues/list or the mpmath mailing
list: http://groups.google.com/group/mpmath

My work on mpmath 0.13 was done with the goal of bringing
arbitrary-precision evaluation of special functions in Sage up to par
with Mathematica and Maple, and was kindly sponsored by the American
Institute of Mathematics under the support of National Science
Foundation Grant No. 0757627. Special thanks to Sage's lead developer
William Stein for offering his grant resources to support this
project, and for providing much encouragement. The new version of
mpmath will soon be available in Sage.

Enjoy,

Fredrik Johansson




[sympy] Re: Sympy benchmarks

2009-08-17 Thread Fredrik Johansson

On Tue, Aug 18, 2009 at 12:15 AM, fijal wrote:
>
> Hi.
>
> I've been looking for some sympy benchmarks as a potential target for
> pypy's jit. I've found this: http://wiki.sympy.org/wiki/Symbench
>
> What's the reasonable small, yet telling something benchmark that
> makes
> sense? We're basically trying to collect some that are both simple and
> yet
> real-world enough. Any quick thoughts?
>
> Cheers,
> fijal
>
> PS. I really like the fact that sympy is pure python, that makes it a
> good
> target.

I think SymPy is an excellent benchmark target. The nature of SymPy
(or any computer algebra system) is such that any high-level operation
will exercise most parts of the system. For example
"integrate(x**3*exp(x)*sin(x), x)" performs ~4 million function calls
to some 200 functions all over SymPy, and it's a calculation that
you'd use SymPy for in practice, so it would be a good real-world test
case.
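
A minimal timing sketch of that exact call:

import time
from sympy import symbols, integrate, exp, sin

x = symbols('x')
t0 = time.time()
print(integrate(x**3*exp(x)*sin(x), x))
print("elapsed: %.2f s" % (time.time() - t0))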

Also, mpmath might be a good target (mpmath is a subpackage of SymPy).
There are some microbenchmarks at [1] although I could come up with
some slightly more complex "real world" calculation if you are
interested. Mpmath heavily depends on long integer performance in
particular, but if you use low precision, it will exercise general
Python performance. For myself, I would be interested in whether
PyPy's new JIT can beat psyco, which all around makes mpmath ~2x
faster on top of CPython.

(If you *are* interested in testing long integer performance in
particular, then mpmath should be an especially good choice ;)

[1] http://mpmath.googlecode.com/svn/bench/mpbench.html
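
A trivial single-function microbenchmark would look something like this
(arguments picked arbitrarily):

from timeit import timeit
from mpmath import mp, mpf, exp

mp.dps = 15
x = mpf('1.2345')
print(timeit(lambda: exp(x), number=100000))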

Fredrik




[sympy] Re: Sympy benchmarks

2009-08-18 Thread Fredrik Johansson

On Tue, Aug 18, 2009 at 8:50 AM, Maciej Fijalkowski wrote:
>
> Hi.
>
>>
>> I think SymPy is an excellent benchmark target. The nature of SymPy
>> (or any computer algebra system) is such that any high-level operation
>> will exercise most parts of the system. For example
>> "integrate(x**3*exp(x)*sin(x), x)" performs ~4 million function calls
>> to some 200 functions all over SymPy, and it's a calculation that
>> you'd use SymPy for in practice, so it would be a good real-world test
>> case.
>>
>> Also, mpmath might be a good target (mpmath is a subpackage of SymPy).
>> There are some microbenchmarks at [1] although I could come up with
>> some slightly more complex "real world" calculation if you are
>> interested. Mpmath heavily depends on long integer performance in
>> particular, but if you use low precision, it will exercise general
>> Python performance. For myself, I would be interested in whether
>> PyPy's new JIT can beat psyco, which all around makes mpmath ~2x
>> faster on top of CPython.
>
> Long integer performance is not *exactly* on top of my list of stuff to look 
> to.
> About PyPy JIT beating psyco, yes, but not exactly right now :-)
>
> I was also wondering what *does not* exercise most of the system and yet
> still makes some sort of sense.

By wondering, do you mean that you are looking for this (i.e. you are
looking for a benchmark that invokes a relatively small amount of
code), or are you implicating me as not making sense? :-)

> Cheers,
> fijal
>




[sympy] Re: Sympy benchmarks

2009-08-19 Thread Fredrik Johansson

On Tue, Aug 18, 2009 at 9:31 AM, Maciej Fijalkowski wrote:
>
> On Tue, Aug 18, 2009 at 1:12 AM, Fredrik
> Johansson wrote:
>>
>> On Tue, Aug 18, 2009 at 8:50 AM, Maciej Fijalkowski wrote:
>>>
>>> Hi.
>>>
>>>>
>>>> I think SymPy is an excellent benchmark target. The nature of SymPy
>>>> (or any computer algebra system) is such that any high-level operation
>>>> will exercise most parts of the system. For example
>>>> "integrate(x**3*exp(x)*sin(x), x)" performs ~4 million function calls
>>>> to some 200 functions all over SymPy, and it's a calculation that
>>>> you'd use SymPy for in practice, so it would be a good real-world test
>>>> case.
>>>>
>>>> Also, mpmath might be a good target (mpmath is a subpackage of SymPy).
>>>> There are some microbenchmarks at [1] although I could come up with
>>>> some slightly more complex "real world" calculation if you are
>>>> interested. Mpmath heavily depends on long integer performance in
>>>> particular, but if you use low precision, it will exercise general
>>>> Python performance. For myself, I would be interested in whether
>>>> PyPy's new JIT can beat psyco, which all around makes mpmath ~2x
>>>> faster on top of CPython.
>>>
>>> Long integer performance is not *exactly* on top of my list of stuff to 
>>> look to.
>>> About PyPy JIT beating psyco, yes, but not exactly right now :-)
>>>
>>> I was also wondering what *does not* exercise most of the system and yet
>>> still makes some sort of sense.
>>
>> By wondering, do you mean that you are looking for this (i.e. you are
>> looking for a benchmark that invokes a relatively small amount of
>> code), or are you implicating me as not making sense? :-)
>>
>
> Heh :-) I suppose I'm looking for a benchmark that invokes relatively
> small amount
> of code, but still a bit more than a single loop.
>
> Cheers,
> fijal

A very simple, and equally important, benchmark would be the basic
performance of the Integer and Rational classes.

Integer is just a wrapper for Python ints with some extra type
checking; a JIT should theoretically be able to make

def foo(N):
    while N > 0:
        N -= 1

run as fast if called with foo(100) as with foo(sympy.Integer(100)).
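
For example (N and the repeat count picked arbitrarily):

import timeit
import sympy

def foo(N):
    while N > 0:
        N -= 1

print(timeit.timeit(lambda: foo(10**5), number=10))
print(timeit.timeit(lambda: foo(sympy.Integer(10**5)), number=10))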

Fredrik




[sympy] Re: dealing with small parameters

2009-10-11 Thread Fredrik Johansson

On Sun, Oct 11, 2009 at 11:33 PM, klmn  wrote:
>
> 0) I have recently discovered Sympy to myself and am pleased using it.
> It is already rich with many interesting capabilities. Thanks a lot
> for it!
>
> 1) I am trying to solve symbolically an eigen-problem depending on a
> small parameter. For my 2x2 matrix this takes couple of hours. For a
> 3x3 matrix the eigen-problem becomes an ewig-problem: I have never
> managed to wait till the end.
> Q1) Is there a way how to simplify the problem by retaining only a few
> leading terms in the small parameter expansion?
>
> 2) I tried to use .series() function for simplifying eigenvalues and
> eigenvectors of a 2x2 matrix. It takes very long. In the example below
> I would expect an immediate result for the expansion, not several
> minutes. It seems I am using Sympy not in the best way.
> Q2) How can I better deal with small parameters? Do you have docs/
> examples?
>
> Thank you in advance!
>
> Example: **
> d, na, nb, e, dna, dnb, nanb= symbols('d na nb e dna dnb nanb')
> d2, na2, nb2= symbols('d2 na2 nb2', positive=True)
>
> aaa = d2 + dna*e + dnb*e - (8*d2*dna*e + 8*d2*dnb*e + 4*d2**2 +
> e**4*na2**2 + e**4*nb2**2 + 4*e**4*nanb**2 + 8*dna**2*e**2 +
> 8*dnb**2*e**2 + 8*d2*nanb*e**2 - 4*dna*nb2*e**3 - 4*dnb*na2*e**3 +
> 4*dna*na2*e**3 + 4*dnb*nb2*e**3 + 8*dna*nanb*e**3 + 8*dnb*nanb*e**3 -
> 2*na2*nb2*e**4)**(sympify(1)/2)/2 + na2*e**2/2 + nb2*e**2/2
>
> aaa.series(e,0,3)
> ** the last command takes several minutes to complete.

SymPy's series function is known to be slow in many cases. It's often
faster to use Taylor's formula directly. For your expression this
takes less than a second:

>>> sum = __builtins__.sum
>>> sum(aaa.diff(e,k).subs(e,0)*e**k/factorial(k) for k in range(3)).expand()
na2*e**2/2 + nb2*e**2/2 - nanb*e**2 - dna**2*e**2/(2*d2) -
dnb**2*e**2/(2*d2) + dna*dnb*e**2/d2
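
The same trick as a small helper (just a sketch; it assumes expr is
sufficiently differentiable at x0):

from sympy import factorial

def taylor(expr, x, x0, n):
    # direct Taylor expansion of expr around x0, keeping terms below order n
    return sum(expr.diff(x, k).subs(x, x0) * (x - x0)**k / factorial(k)
               for k in range(n)).expand()

# taylor(aaa, e, 0, 3) reproduces the expansion above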

Fredrik




[sympy] ANN: mpmath 0.14 released

2010-02-05 Thread Fredrik Johansson
Hi all,

Version 0.14 of mpmath is now available on the website:
http://code.google.com/p/mpmath/

It can also be downloaded from the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.14

Mpmath is a pure-Python library for arbitrary-precision floating-point
arithmetic that implements an extensive set of mathematical functions. It
can be used as a standalone library or via SymPy (
http://code.google.com/p/sympy/), and is also available as a standard
component of Sage (http://sagemath.org/).

For a list of new features, see the blog post and changelog:
http://fredrik-j.blogspot.com/2010/02/mpmath-014-released.html
http://mpmath.googlecode.com/svn/tags/0.14/CHANGES

For a brief summary, the new features in 0.14 include support for using a
Cython-based backend soon to be added to Sage (giving a large speedup of
mpmath in Sage); support for 3D plotting; fast low-precision functions
(using Python's builtin float/complex types); an implementation of the
Riemann-Siegel expansion for the Riemann zeta function; many improvements to
evaluation of hypergeometric functions; miscellaneous new special functions;
matrix functions; and several bugfixes and optimizations.

Extensive documentation is available at:
http://mpmath.googlecode.com/svn/trunk/doc/build/index.html (or
equivalently)
http://mpmath.googlecode.com/svn/tags/0.14/doc/build/index.html

Bug reports and other comments are welcome on the issue tracker at
http://code.google.com/p/mpmath/issues/list or the mpmath mailing list:
http://groups.google.com/group/mpmath

Enjoy, and extra thanks to Juan Arias de Reyna, Vinzent Steinberg, Jorn
Baayen and Chris Smith who contributed to this version.

Fredrik Johansson




Re: [sympy] tutorial from the Ohio Supercomputer Center

2010-02-07 Thread Fredrik Johansson
On Mon, Feb 8, 2010 at 12:10 AM, Ondrej Certik  wrote:

> Hi,
>
> I just discovered this nice tutorial about sympy:
>
> https://www.osc.edu/cms/sip/node/26
>
> it's essentially an introduction, done on Windows. Some things I noticed:
>
> * the building Plot function looks ugly, we should use matplotlib for
> our default plotting, and only use pyglet if the user wants
>

Just use mpmath's plotting functions; it already solves the problem of
wrapping matplotlib in a convenient way :-) It supports 3D plotting now as
well.
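
For example (this assumes matplotlib is installed):

from mpmath import plot, splot, sin, cos

plot([sin, cos], [-5, 5])                              # 2D plot of sin and cos
splot(lambda x, y: sin(x) * cos(y), [-3, 3], [-3, 3])  # 3D surface plot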

Fredrik




Re: [sympy] Re: Is SymPy project interested in this

2010-03-09 Thread Fredrik Johansson
On Tue, Mar 9, 2010 at 9:45 AM, Freddie Witherden wrote:


> So far as symbolic manipulation goes I am unsure how useful a GPU (or
> similar device) would be. However, the mpmath project may very well be
> interested (and is very closely related to SymPy and equally as awesome!).
> Of course there is only really a benefit when large data-sets are being
> processed (as there is a not-insignificant overhead).
>

Nearly everything in mpmath boils down to operations on big integers. The
right place to do this kind of optimization is in MPIR/GMP.

In fact, if you look at http://mpir.org/#projects, "There are a number of
important development directions for MPIR at present: [...] Parallel
processing, including CUDA development and OpenMP pragmas."

Fredrik




Re: [sympy] Re: cot(0)=0? and x * cot(x) evaluated near 0

2010-06-02 Thread Fredrik Johansson
On Wed, Jun 2, 2010 at 8:02 PM, Scott  wrote:

> Aaron
>
> Thanks for the tips.
>
> Where are the "issues" located?
>
> I am numerically evaluating x*cos(x)/sin(x) on [-pi/2,pi/2] and the
> spurious singularity at x= 0 is giving me grief. x/sin(x)=1 at x=0.
>
> After looking at my problem it seems that I should have asked if there
> is an efficient way to embed sin(x)/x or x/sin(x) in a function that
> is evaluated at 0. I will probably use a 7th order Taylor series
> unless there is another clever option.
>
> The series for x/sin(x) has much better convergence than the series
> for x*cot(x) in my range of interest (+- pi/2).
>
> In [41]: (x/sin(x)).series(x, 0, 8)
> Out[41]: 1 + x**2/6 + 7*x**4/360 + 31*x**6/15120 + O(x**7)
>


SymPy is missing the sinc function. I created an issue:
http://code.google.com/p/sympy/issues/detail?id=1952

If you want to have a go at implementing this function (it shouldn't be too
hard), see sympy/functions/elementary/trigonometric.py
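
A rough sketch of what such a function could look like (not a complete
implementation; a real one would also need differentiation, series
expansion, and so on):

from sympy import Function, S, sin

class sinc(Function):
    # unnormalized sinc: sin(x)/x with sinc(0) = 1
    @classmethod
    def eval(cls, x):
        if x.is_zero:
            return S.One
        if x.is_Number:
            return sin(x)/x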

Fredrik




[sympy] ANN: mpmath 0.15 released

2010-06-06 Thread Fredrik Johansson
Hi all,

Version 0.15 of mpmath is now available on the website:
http://code.google.com/p/mpmath/

It can also be downloaded from the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.15

Mpmath is a pure-Python library for arbitrary-precision floating-point
arithmetic that implements an extensive set of mathematical functions. It
can be used as a standalone library or via SymPy
(http://code.google.com/p/sympy/), and is also available as a standard
component of Sage (http://sagemath.org/). The versions in Sage
and SymPy will be updated soon.

For details about the new features in this version, see the following
blog post and the changelog:
http://fredrik-j.blogspot.com/2010/06/announcing-mpmath-015.html
http://mpmath.googlecode.com/svn/tags/0.15/CHANGES

Briefly, besides many small fixes, 0.15 includes large performance
improvements for transcendental functions, new code for computing the
nontrivial zeros of the Riemann zeta function (contributed by
Juan Arias de Reyna), and many new special functions (including
generalized 2D hypergeometric series, q-functions, and new
elliptic functions). Support for complex interval arithmetic
has also been added.
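
Quick usage example (arguments picked at random):

from mpmath import mp, zetazero, qp

mp.dps = 20
print(zetazero(1))    # first nontrivial zero of the Riemann zeta function
print(qp(0.5, 0.5))   # q-Pochhammer symbol (1/2; 1/2)_infinity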

Extensive documentation is available at:
http://mpmath.googlecode.com/svn/trunk/doc/build/index.html (or
equivalently)
http://mpmath.googlecode.com/svn/tags/0.15/doc/build/index.html

Bug reports and other comments are welcome on the issue tracker at
http://code.google.com/p/mpmath/issues/list or the mpmath mailing list:
http://groups.google.com/group/mpmath

Enjoy,
Fredrik Johansson




Re: [sympy] Re: ANN: mpmath 0.15 released

2010-06-06 Thread Fredrik Johansson
On Sun, Jun 6, 2010 at 7:12 PM, Ondrej Certik  wrote:

>
> Congratulations! Also good luck for the summer. What kind of special
> functions will you be working on?
>
> Ondrej
>

Thanks, and the same to you! (I read on your blog that you're going to
LLNL.)

Much of the work will be of a general nature. I will write more Cython code
to speed up mpmath in Sage. I will also continue to polish the
hypergeometric code and add more special hypergeometric functions. There are
some other missing special functions to be added, but it's a rather mixed
list. I may be able to take requests, in case someone's favorite special
function is missing :)

I also expect to do some work on symbolic special functions in Sage.

Fredrik


(Note: replying only to the sympy list since you set this as the reply-to
address)




Re: [sympy] Re: ANN: mpmath 0.15 released

2010-06-07 Thread Fredrik Johansson
On Mon, Jun 7, 2010 at 4:18 AM, Ondrej Certik  wrote:

> Will it be possible to also use it outside of Sage? E.g. I guess if
> you do "setup.py install" in mpmath, that it would compile the Cython
> codes and install it? That'd be really cool.
>

It can be done in theory. The main problem is that the Cython extension code
requires MPIR and the MPIR interface code in Sage.

Fredrik




Re: [sympy] Re: ANN: mpmath 0.15 released

2010-06-07 Thread Fredrik Johansson
On Mon, Jun 7, 2010 at 6:26 PM, Ondrej Certik  wrote:

> On Mon, Jun 7, 2010 at 1:00 AM, Fredrik Johansson
>  wrote:
> > On Mon, Jun 7, 2010 at 4:18 AM, Ondrej Certik  wrote:
> >>
> >> Will it be possible to also use it outside of Sage? E.g. I guess if
> >> you do "setup.py install" in mpmath, that it would compile the Cython
> >> codes and install it? That'd be really cool.
> >
> >
> > It can be done in theory. The main problem is that the Cython extension
> code
> > requires MPIR and the MPIR interface code in Sage.
>
> How many things are needed from MPIR? Looking at
> mpmath/libmp/backend.py, do you need only MPZ (=sage.Integer), or do
> you need more things?
>
> Also, where in Sage will the Cython code be? Or will it be in the
> mpmath repository?
>

The Cython code is here:
http://hg.sagemath.org/sage-main/file/2cffe66bd642/sage/libs/mpmath

Fredrik




Re: [sympy] Re: reclaim forbidden characters?

2010-08-26 Thread Fredrik Johansson
On Fri, Aug 27, 2010 at 6:08 AM, Aaron S. Meurer  wrote:
> from sympy.abc import *
>
> vs
>
> var('a b c d …')
>
> Plus, I also like abc for doctests.

What if var() is changed to do the equivalent of "from sympy.abc
import *"? (It currently just raises an exception when called with no
arguments.)
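
A rough sketch of what the no-argument behavior could look like
(hypothetical helper, not the actual var()):

import sympy.abc
from sympy import Symbol

def var_all(namespace):
    # inject every symbol defined in sympy.abc into the given namespace
    for name in dir(sympy.abc):
        obj = getattr(sympy.abc, name)
        if isinstance(obj, Symbol):
            namespace[name] = obj

# var_all(globals()) would then behave like "from sympy.abc import *"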

Fredrik




[sympy] ANN: mpmath 0.16 released

2010-09-24 Thread Fredrik Johansson
Hi all,

Version 0.16 of mpmath is now available on the project website:
http://code.google.com/p/mpmath/

It can also be downloaded from the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.16

Mpmath is a pure-Python library for arbitrary-precision floating-point
arithmetic that implements an extensive set of mathematical functions. It
can be used as a standalone library or via SymPy
(http://code.google.com/p/sympy/), and is also available as a standard
component of Sage (http://sagemath.org/). The versions in Sage
and SymPy will be updated soon.

For details about the new features in this version, see the following
blog post and the changelog:
http://fredrik-j.blogspot.com/2010/09/announcing-mpmath-016.html
http://mpmath.googlecode.com/svn/tags/0.16/CHANGES

Changes in 0.16 include new special functions (incomplete elliptic integrals,
inhomogeneous Bessel functions, Bessel function zeros, parabolic cylinder
functions), rewritten functions (Lambert W function, Airy functions), and
various other fixes and improvements. Support has also been added for new
extension code that will make mpmath 0.16 much faster in Sage,
particularly affecting elementary and hypergeometric functions.

My work on mpmath 0.16 was funded using resources from NSF grant DMS-0757627,
whose support is gratefully acknowledged. Special thanks to William Stein
for enabling this.

Extensive documentation is available at:
http://mpmath.googlecode.com/svn/trunk/doc/build/index.html or
http://mpmath.googlecode.com/svn/tags/0.16/doc/build/index.html

Bug reports and other comments are welcome on the issue tracker at
http://code.google.com/p/mpmath/issues/list or the mpmath mailing list:
http://groups.google.com/group/mpmath

Enjoy,
Fredrik Johansson




Re: [sympy] Higher Quality Logo

2010-12-08 Thread Fredrik Johansson
On Thu, Dec 9, 2010 at 6:32 AM, Aaron S. Meurer  wrote:
> Do we have a higher quality logo than the one at 
> https://github.com/sympy/sympy/blob/master/doc/logo/sympy-160px.png?  I am 
> doing a report on my work on the Risch Algorithm this summer for my Technical 
> Writing class, and I would like to include the SymPy logo in my presentation, 
> but this one looks kind of pixelated at the size I want to have it.
>
> Aaron Meurer

The original svg is here:
http://code.google.com/p/sympy/source/browse/#svn/materials/logo

I don't remember, did anyone create an updated version?

Fredrik




[sympy] Re: Updating mpmath to 0.16 in SymPy

2011-01-05 Thread Fredrik Johansson
On Wed, Jan 5, 2011 at 1:09 AM, Aaron S. Meurer  wrote:
> I figured it out.  The docstrings are all buried in function_docs.py.  I'm 
> not sure how good of an idea that is, but it's not my project.

Originally, there was a single functions.py and it was getting far too
large, with docstrings making up more than 2/3 of the space. Since
many of the functions are generated dynamically, their docstrings were
listed separately anyway, so an easy fix was to move all docstrings to
a separate file.

Now that functions.py has been split into several submodules, space
shouldn't be a problem anymore, so I should really move the docstrings
back where the code is (except for the generated functions).

> Anyway, this branch is ready to be fully reviewed and merged in.

Excellent, thanks! I've also removed the itertools.product use in the
mpmath svn trunk.

Fredrik




Re: [sympy] Re: series reviewer needed

2011-01-21 Thread Fredrik Johansson
On Wed, Jan 19, 2011 at 4:44 AM, smichr  wrote:
> When requesting the series of cos(x) I wouldn't expect to see cos()
> terms in the result, but the default behavior does this. I think what
> is happening is that it is returning Taylor's series terms rather than
> terms corresponding to the series expansion of cos(x). Which do you
> think should be returned for the first two terms of cos(x) at x=1:
>
>    3/2 - x
> or
>    cos(1) + sin(1) - x*sin(1) ~= 1.38 - 0.84*x
>
> Shouldn't the first expression be returned?

Definitely not. I would expect series() to return a truncated series
(Taylor, Laurent, Puiseux...) expansion of the function. In
particular, the initial terms should be independent of the order of
truncation.

Generally, composing truncations as you propose does not even give
series that converge to the initial function. Consider for example
log(1+x).subs(x,4+x).series(x,1,5) vs
log(1+x).series(x,1,5).subs(x,4+x). Put differently, the expansion
should always be local and not depend on some implicit point of
reference (e.g. 0) that may be outside of the radius of convergence.
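
To make the comparison concrete, one can just evaluate the two expressions
above:

from sympy import log, symbols

x = symbols('x')
print(log(1 + x).subs(x, 4 + x).series(x, 1, 5))
print(log(1 + x).series(x, 1, 5).subs(x, 4 + x))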

Fredrik




Re: [sympy] Re: series reviewer needed

2011-01-21 Thread Fredrik Johansson
On Fri, Jan 21, 2011 at 1:56 PM, smichr  wrote:
>> >    3/2 - x
>> > or
>> >    cos(1) + sin(1) - x*sin(1) ~= 1.38 - 0.84*x
>>
>> > Shouldn't the first expression be returned?
>>
>> Definitely not. I would expect series() to return a truncated series
>> (Taylor, Laurent, Puiseux...) expansion of the function. In
>> particular, the initial terms should be independent of the order of
>> truncation.
>
> But aren't we talking about different types of series? There is a
> power-series expansion of cos() involving only powers of x and a
> Taylor's series form (involving the functions being evaluated at the
> x0 value: f(x0), f'(x0), etc... AND powers of x. In Alexey's note he
> refers to these as the coefficients and the basis functions. In the
> power series the coefficients are rational; in Taylors (for a
> function) they are not.

No, you are mistaken thinking that there is "a power-series expansion
of cos() involving only powers of x" which is distinct from a Taylor
series.  The series you're thinking of is just the Taylor series of
cos(x) around x = 0, and there is nothing special about this point,
except that the values cos(0), cos'(0) etc happen to be rational
numbers. But this is just an accident. Another function might be
transcendental at x = 0 and rational at x = 1/pi, say. As another
example, consider erf(x) = (2/sqrt(pi))*(x - x^3/3 + ...). How would
you choose rational convergents to sqrt(pi)?

>>
>> Generally, composing truncations as you propose does not even give
>> series that converge to the initial function. Consider for example
>> log(1+x).subs(x,4+x).series(x,1,5) vs
>> log(1+x).series(x,1,5).subs(x,4+x).
>
> I'm not sure I'm following, but yes, if you plot the results the first
> one will match up with log(5+x) at x=1 while the second will be what
> log(1+x) looked like at x=1 shifted to the left 4 units.

They will not line up because the series of log(1+x) converges only
for abs(x) < 1.

>> Put differently, the expansion
>> should always be local and not depend on some implicit point of
>> reference (e.g. 0) that may be outside of the radius of convergence.
>>
>
> I'm not sure I understand the point you are trying to make. (My
> understanding of series is very rudimentary.) If I say that I want the
> series that represents what cos(x) looks like at x=1 I want a function
> returned that will be a pretty good approximation of cos(x) at x=1.
> That's what both of the forms of the series I presented above do: at
> x=1 the first one gives 0.5 and the second 0.540302305868140.

The best approximation for cos(x) at x = 1 is cos(1). If you want to
approximate it using a series around some other point p, then you
should compute that other series and plug 1-p into the resulting
polynomial. There is no canonical choice of expansion point (it just
happens that x = 0 is useful for some functions, like cos, but for
other functions other points are better, e.g. x = 1 for sqrt assuming
0 < x < 2).
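
In other words (a small sketch of "expand around p, then plug in 1 - p"):

from sympy import cos, factorial, symbols

x, h = symbols('x h')
p = 1
# Taylor polynomial of cos around p, written in h = x - p
poly = sum(cos(x).diff(x, k).subs(x, p) * h**k / factorial(k) for k in range(4))
print(poly)
print(poly.subs(h, 1 - p))   # evaluating at x = 1, i.e. h = 0: exactly cos(1)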

Fredrik




[sympy] ANN: mpmath 0.17 (Python 3 support and more)

2011-02-01 Thread Fredrik Johansson
Hi all,

Version 0.17 of mpmath is now available on the project website:
http://code.google.com/p/mpmath/

It can also be downloaded from the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.17

Mpmath is a pure-Python library for arbitrary-precision floating-point
arithmetic that implements an extensive set of mathematical functions. It
can be used as a standalone library or via SymPy
(http://code.google.com/p/sympy/), and is also available as a standard
component of Sage (http://sagemath.org/).

The major news in 0.17 is that mpmath now works with Python 3. To
support both Python 2.x and 3.x with the same codebase, compatibility with
Python 2.4 has been dropped (mpmath now requires 2.5 or higher). New
functionality in mpmath 0.17 includes an implementation of the Lerch
transcendent, Riemann zeta zero counting, and improved support for
evaluating derivatives of the Hurwitz zeta function and related functions.
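
For example (values chosen arbitrarily):

from mpmath import mp, lerchphi, nzeros

mp.dps = 20
print(lerchphi(0.5, 2, 1))   # Lerch transcendent Phi(z, s, a)
print(nzeros(1000))          # number of zeta zeros with 0 < Im(s) <= 1000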

Many thanks to Juan Arias de Reyna and Case Vanhorsen who contributed
to this version.

For more details, see the changelog:
http://mpmath.googlecode.com/svn/tags/0.17/CHANGES

Extensive documentation is available at:
http://mpmath.googlecode.com/svn/trunk/doc/build/index.html or
http://mpmath.googlecode.com/svn/tags/0.17/doc/build/index.html

Bug reports and other comments are welcome on the issue tracker at
http://code.google.com/p/mpmath/issues/list or the mpmath mailing list:
http://groups.google.com/group/mpmath

Enjoy,
Fredrik Johansson




Re: [sympy] ANN: mpmath 0.17 (Python 3 support and more)

2011-02-02 Thread Fredrik Johansson
Hi Chris,

On Wed, Feb 2, 2011 at 6:18 PM, Chris Smith  wrote:
> Something I noticed in mpmath files (in sympy) was the use of _ in files. For 
> some reason I recall that this should probably be removed with a "del _" 
> command. This use was located in libmpf and libintmath.

Why is this a problem? Names starting with "_" don't get imported
through "import *" (and there aren't any "import *"'s left for those
modules anyway, I think).

Fredrik




Re: [sympy] Re: ANN: mpmath 0.17 (Python 3 support and more)

2011-02-02 Thread Fredrik Johansson
On Wed, Feb 2, 2011 at 2:17 AM, Aaron S. Meurer  wrote:
> We need to update this after the 0.7.0 release because of the dropped Python 
> 2.4 support.  See http://code.google.com/p/sympy/issues/detail?id=2176.

Thanks for creating the issue!

Fredrik




Re: [sympy] GSoC 2011 Results

2011-04-26 Thread Fredrik Johansson
2011/4/25 Aaron S. Meurer :
> Hi everyone.  As many of you may have noticed, Google has announced the 
> results for Google Summer of Code.  I am proud to announce that we got nine 
> slots from Google.  The following projects have been accepted:

An impressive list. Congrats to all :-)

Fredrik




Re: [sympy] Assumptions

2011-05-02 Thread Fredrik Johansson
Most of the problems with assumptions in SymPy are symptoms of larger
design problems which also affect performance, code clarity, and
customizability. The (semi-)new polynomial code, which supports
explicit coefficient rings (and term orders, etc), goes a long way to
address such issues, although its scope is limited.

For general symbolics, the analog would be to allow constructing
symbolic algebras, of which particular expressions would be elements
(compare with Parent/Element in Sage). Then simplification routines
would just be methods of the algebra class, and assumptions would be
properties attached to the algebra. For example, one could disable all
(or some) simplifications through subclassing. I'm almost convinced
that this is the cleanest object-oriented way to do it, since the
algebra would encapsulate all state and more.

Everything else is then just a matter of syntax (e.g. one can have
"global" assumptions by using a mutable global default algebra, and
one can have "local" assumptions by creating a local/temporary algebra
instance for this purpose). Caches could also be properties of
algebras. This would also allow algorithms to construct new algebras
for internal use,  use any assumptions, caching, etc., and be sure
that this wouldn't have an effect on the outside world.

I did a similar thing with the "contexts" in mpmath, although I didn't
really go all the way (creating multiple instances of the same context
class is still a bit flaky, and doesn't work in Sage, but the fp and
iv contexts show how powerful this approach is). This helped
*tremendously* with writing the Cython backend in Sage, anyhow.
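
As a toy illustration of the parent/element idea (nothing to do with the
actual SymPy or SympyCore classes):

class Algebra:
    # the "parent": owns assumptions and caches for all of its elements
    def __init__(self, **assumptions):
        self.assumptions = assumptions
        self.cache = {}

    def symbol(self, name):
        return Element(self, name)

class Element:
    # the "element": all state is looked up through the parent algebra
    def __init__(self, algebra, name):
        self.algebra = algebra
        self.name = name

    @property
    def is_real(self):
        return self.algebra.assumptions.get('real', False)

R = Algebra(real=True)   # an algebra of real-valued expressions
x = R.symbol('x')
print(x.is_real)         # True, by virtue of the algebra, not the symbol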

Fredrik




Re: [sympy] Re: Assumptions

2011-05-02 Thread Fredrik Johansson
On Mon, May 2, 2011 at 5:46 PM, Haz  wrote:
> Tom: Apologies, but I'm having trouble inferring what you github username is
> -- where is the branch?
> Fredrik: I like the idea, but that seems like a massive shift in the SymPy
> core that I don't feel is possible in the time frame that the assumptions
> need to be fixed in.

I don't think it's necessarily that much work. The only essential
thing that needs to be done is to provide every expression object with
a reference to its parent algebra, and to overwrite sympification to
convert inputs to have the same parent. Everything else can be
implemented gradually. The major obstacles ought to be caching and the
current assumptions -- which need to go anyway.

Fredrik




Re: [sympy] Re: Assumptions

2011-05-03 Thread Fredrik Johansson
On Tue, May 3, 2011 at 3:00 PM, Tom Bachmann  wrote:
> Actually how does this relate to the following wiki page:
>
> https://github.com/sympy/sympy/wiki/Algebras-in-SymPyCore

It's roughly the same thing.

Fredrik




Re: [sympy] Re: Assumptions

2011-05-04 Thread Fredrik Johansson
On Tue, May 3, 2011 at 11:24 PM, Ronan Lamy  wrote:
> Le mardi 03 mai 2011 à 22:02 +0100, Tom Bachmann a écrit :
>> On 03.05.2011 21:47, Ondrej Certik wrote:
>> > On Tue, May 3, 2011 at 8:44 AM, Fredrik Johansson
>> >   wrote:
>> >> On Tue, May 3, 2011 at 3:00 PM, Tom Bachmann  wrote:
>> >>> Actually how does this relate to the following wiki page:
>> >>>
>> >>> https://github.com/sympy/sympy/wiki/Algebras-in-SymPyCore
>> >>
>> >> It's roughly the same thing.
>> >
>> > I also think that what Fredrik says might be a good idea. I don't have
>> > much experience with this to have a clear opinion though. The reason I
>> > have just used Add/Mul/Pow classes for everything in SymPy (long time
>> > ago) is that it is conceptually a super simple idea, and it got us
>> > very far. E.g. from the Zen of Python:
>> >
>> > 1) Simple is better than complex (Add/Mul/Pow is simpler than all the
>> > algebras+other machinery)
>> > 2) Complex is better than complicated (the algebras are probably
>> > better than the complicated entangled assumptions+cache)
>> >
>> >
>> > As such, I now that we can get very fast just with Add/Mul/Pow (see
>> > the csympy code: https://github.com/certik/csympy), and when using
>> > Refine() and other things, we should be able to have core not using
>> > assumptions nor cache, be fast, and using the new assumptions in
>> > refine(). That fixes the current sympy, using pretty much the same
>> > architecture.
>> >
>>
>> I don't find that a very convincing argument (which is not saying you
>> are wrong, of course). Given a specific problem everyone (given enough
>> time, energy, and general cleverness) can come up with a nice and clean
>> solution that is also fast. The problem with comparing this to current
>> sympy is that current sympy does *a lot* more. E.g. all of the core
>> classes (Mul, Pow etc) treat orders, non-commutative symbols, etc etc.
>> Now you may rightly argue that this should not be in core, but I suppose
>> you do not want to throw it away either...
>>
>> This is why I think the algebras approach is better: there different
>> algebras can manage expressions of different complexity. So lots of
>> things that are in current core and slow us down can just become part of
>> more specialised algebras. Note also that csympy would/could then become
>> the "core" algebra, achieving a final synthesis of approaches.
>
> I don't understand that argument. You could just as well say, with
> sympy's current design, that different expressions can be implemented by
> different classes, etc. The big issue I see with these algebras is that
> it creates a design that's more functional than object-oriented and
> destroys the identity of objects that belong to several structures (e.g.
> in Sage, Integer(1) are different objects).

I don't see how this is "more functional than object-oriented". On the
contrary, using a class to encapsulate the notion of an algebra is
more object-oriented than spreading the equivalent code across various
methods in an ad-hoc fashion (for example, having Mul know about lots
of different mathematical objects that can be multiplied) and choosing
between options mostly by passing flags to functions. The code in Sage
is quite clean, and very easy to extend (often easier than doing the
same thing in SymPy).

That's not to say SymPy should necessarily adopt an identical
approach, but it's worth thinking about using explicit objects
(whether they are called "algebras", "rings", "contexts", etc) to
distinguish between different classes of mathematical objects and to
store options.

Whether singleton objects are used for special values like 1 is rather
a trivial issue and completely orthogonal to other design
considerations. AFAIK, the current design exists only for performance
reasons ('is' is much faster than '==' in Python, but this is mostly
irrelevant with C-based types, and it can be worked around fairly
easily in Python anyhow).

Fredrik




Re: [sympy] Re: Assumptions

2011-05-04 Thread Fredrik Johansson
On Wed, May 4, 2011 at 7:53 PM, Ronan Lamy  wrote:
> Le mercredi 04 mai 2011 à 10:37 +0100, Tom Bachmann a écrit :
>> On 02.05.2011 19:57, Aaron S. Meurer wrote:
>> > I agree that Frederik's idea is an interesting one, but we would need to
>> > have other people who understand it well if we were to attempt to
>> > implement it. If you could write something up on the wiki, it would go a
>> > long way towards this.
>>
>> I wrote up my view of the algebras model. Obviously the typical
>> disclaimers apply: I don't know sympy very well, I don't really know
>> sympycore at all, bla bla. Please comment.
>>
>> Ondrej, Ronan: I hope this answers your questions as well.
>>
> Thanks for the write-up. It does confirm what I had been thinking: this
> model basically amounts to rewriting sympy in a Lispish rather than
> Pythonic style - consider, for instance, (ADD, (x, y, 5)) vs Add(x, y,
> 5). Besides that, I don't see anything that couldn't be done with the
> current design, replacing "the object's algebra" with "the object's
> class" and with the equivalences Verbatim == Basic, Calculus == Expr,
> Algebra == BasicMeta, CachingAlgebra == AssumeMeths, etc. but I'm
> probably overlooking something.

These are just implementation details of SympyCore, and they probably
don't belong in the assumptions writeup. How SymPy represents
expressions internally is largely irrelevant to whether one adopts a
parent-element model. In fact much of the point is to allow different
ways to represent data (as SymPy already does with the new polynomial
code).

The starting point is just that all elements have a reference to a
parent, and one way to implement assumptions then would be to make
assumptions a (mutable) property of the parent. The reason the
parent-element model makes sense to discuss in the context of
assumptions is that it provides a natural way to define domains for
symbols (as a "first level" of assumptions) -- for example, if one
wants symbols representing elements of R rather than C by default, one
can use an algebra for this purpose.

Fredrik




Re: [sympy] SymPy 0.7.0 Released

2011-06-28 Thread Fredrik Johansson
On Tue, Jun 28, 2011 at 9:37 AM, Aaron Meurer  wrote:
> Hi everyone.  After more than a year, we're finally releasing SymPy 0.7.0.

Congratulations all!

Looks like http://docs.sympy.org hasn't been updated?

Fredrik




Re: [sympy] Handling of branch cuts

2011-07-05 Thread Fredrik Johansson
On Tue, Jul 5, 2011 at 1:49 AM, Tom Bachmann  wrote:
> My real questions are then as follows, I guess:
>
> 1. Does anyone see a better way around the issue?

I'm afraid not.

It's possible to some extent to avoid branch cuts simply by using
'better' special functions, e.g. loggamma(z) (the nice version)
instead of gamma(z) and log(gamma(z)). But you will probably not be
able to do that everywhere.
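
For example, with mpmath (loggamma being the continuous version):

from mpmath import mp, loggamma, gamma, log

mp.dps = 15
z = mp.mpc(-3.5, 2)
print(loggamma(z))    # the analytically continued log-gamma
print(log(gamma(z)))  # agrees with the above only modulo 2*pi*i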

> 2. Any suggestions for a better interface?

My first instinct is that a class/symbolic function Polar(Abs(z),
Arg(z)) would be a nicer representation, though in other contexts one
would probably rather want such a type to normalize the argument to
the standard range.

Fredrik




[sympy] Python wrapper for FLINT 2

2011-07-05 Thread Fredrik Johansson
Hi,

I have created the beginning of an easy-to-use Python wrapper for
FLINT 2: http://fredrik-johansson.github.com/python-flint/

It currently provides numbers, (dense univariate) polynomials and
(dense) matrices over Z/nZ (for word-size n), Z and Q.

In [1]: import flint
In [2]: A = flint.fmpz_poly(range(10000)); B = A + 1
In [3]: %timeit A * B
100 loops, best of 3: 4.68 ms per loop
In [4]: from random import randint
In [5]: M = flint.fmpz_mat(100,100,[randint(-3,3) for i in range(10000)])
In [6]: %timeit M.det()
100 loops, best of 3: 11.7 ms per loop

I'm posting this here as there might be some interest in using the
FLINT types as faster ground types in SymPy (assuming that this is
technically possible with SymPy's current code structure). The fmpz
and fmpq types don't offer any advantage over gmpy's mpz and mpq types
(although the nmod type probably does), but the FLINT polynomials and
matrices of course are massively faster than Python
polynomials/matrices of such numbers.

The code is in a very early state, and so far only a small set of the
functionality in FLINT 2 is wrapped. But it could potentially be of
interest to some people already.

Fredrik




Re: [sympy] Handling of branch cuts

2011-07-06 Thread Fredrik Johansson
Oh, by the way, you're in good company if you're having trouble
getting the branch cuts right.

Today I found the following interesting result in Mathematica:

N[Integrate[1/z, {z, 1, I, -1, -I, 1}]]
0. + 6.28319 I

N[Integrate[Cos[z]/z, {z, 1, I, -1, -I, 1}]]
0. + 6.28319 I

N[Integrate[Sin[z]/z^2, {z, 1, I, -1, -I, 1}]]
0. + 6.28319 I

N[Integrate[Exp[z]/z, {z, 1, I, -1, -I, 1}]]
1.66533*10^-16 + 0. I

Or more simply:

{NIntegrate[Exp[z]/z, {z, 1, I}], N[Integrate[Exp[z]/z, {z, 1, I}]]}
{-1.55771 + 2.51688 I, -1.55771 + 0.946083 I}

This is curious because even the naive computation Ei(I) - Ei(1) gives
the expected thing...
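
For what it's worth, mpmath gives the expected value both ways (quad
integrates along the straight segment when given complex endpoints):

from mpmath import mp, ei, exp, quad

mp.dps = 15
print(ei(1j) - ei(1))                     # naive endpoint difference
print(quad(lambda z: exp(z)/z, [1, 1j]))  # integral along the segment from 1 to i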

Fredrik




Re: [sympy] Python wrapper for FLINT 2

2011-07-11 Thread Fredrik Johansson
On Mon, Jul 11, 2011 at 9:20 PM, Ondrej Certik  wrote:
>> The code is in a very early state, and so far only a small set of the
>> functionality in FLINT 2 is wrapped. But it could potentially be of
>> interest to some people already.
>
> Thanks for sharing it. Btw, the link to FLINT from your page:
> http://www.flintlib.org/ doesn't seem to be working.

Right, it's hosted on sagemath which is down at the moment.

Fredrik




Re: [sympy] Changing function - Fibonacci

2011-08-02 Thread Fredrik Johansson
On Tue, Aug 2, 2011 at 9:47 PM, Hector  wrote:
> Hello folk,
>
> I was browsing through the code and found that Fibonacci numbers are
> calculated by a recursion method. So the time complexity is O(n) and I
> don't expect the space complexity to be optimal. It gave me the following
> results.

> There exists a algorithm which can calculate n^th fibonacci number in
> O(ln(n)) times. It only requires to calculate few fibonacci numbers (
> approximately ln(n)/ln(2) ) to calculate F_n. I implemented it and it shows
> the following results.

Just to nitpick: the recursive method has time complexity O(n^2) (the
SymPy version is terrible for large n due to also using O(n^2) memory)
and the fast algorithm has time complexity about O(n^1.6) (with Python
arithmetic).

I'm also not sure why your code is so complex (and it's not entirely
obvious that it's correct). Something like this is much nicer:

def fib(n):
    a, b, p, q = 1, 0, 0, 1
    while n:
        if n % 2:
            a, b = (a+b)*q + a*p, b*p + a*q
            n -= 1
        else:
            t = q*q
            p, q = p*p + t, t + 2*p*q
            n //= 2
    return b
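
A quick sanity check against known values:

assert fib(10) == 55
assert fib(100) == 354224848179261915075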

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



Re: [sympy] Changing function - Fibonacci

2011-08-03 Thread Fredrik Johansson
On Wed, Aug 3, 2011 at 12:06 AM, Aaron Meurer  wrote:
> This doesn't work, as you have to take care of precision (try
> computing fib(10)).  I suspect that it's also slower when you do
> that, though Fredrik would have to comment.

It's a bit slower than the fast integer methods, but only really by a
constant factor.

Come to think of it, there already is a fast integer Fibonacci
function in mpmath, mpmath.libmp.ifib. The simplest (and probably
fastest) solution would be to just call that.
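
For instance (using the copy bundled with sympy; the standalone package
works the same way with "from mpmath.libmp import ifib"):

from sympy.mpmath.libmp import ifib
print(ifib(100))  # 354224848179261915075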

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



[sympy] github repository

2011-12-07 Thread Fredrik Johansson
Hi everybody,

I've created a git repository for mpmath on github:
https://github.com/fredrik-johansson/mpmath

This should hopefully make it easier for people to contribute to
mpmath. In particular, sympy developers (who all already live on
github) might find it more convenient to submit patches upstream this
way.

I have not updated the project website or readme to make this
"official" yet, but I expect to deprecate the svn repository on Google
Code. I will just have to think about the best way to manage the
documentation.

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



Re: [sympy] Why is evalf so complicated?

2012-04-13 Thread Fredrik Johansson
On Fri, Apr 13, 2012 at 12:36, krastanov.ste...@gmail.com
 wrote:
> I was looking at evalf.py. and I don't understand how it come to be so
> complicated. I thought that all the evaluation is automatically done
> by mpmath, but there is much pre- and post-processing that I do not
> understand. Can someone explain the general idea to me?

The pre- and post-processing is mostly tracking accuracy, but there
are lots of subtle special cases that need handling.

Now that mpmath has much better support for interval arithmetic, it
wouldn't be unreasonable to rewrite evalf using interval functions.

You'd traverse the expression tree and do interval arithmetic,
starting at the target prec plus a few extra bits, then check if the
resulting interval is precise enough, and if not try again at
geometrically increasing precision (say adding 20, 40, 80, 160, ...
bits). The evalf code does basically this plus a few more tricks that
could be incorporated (this would add back some of the complexity, but
it would still probably be cleaner and less bug-prone than the present
code).

If the internal interval functions (mpi_ and mpci_ prefix in
mpmath.libmp) are used, performance should be ok, probably a bit
slower than the current evalf code but not terrible (and some of the
slowdown could probably be recovered with future improvements to the
interval functions in mpmath). And for arithmetic operations, floor
function, etc. the error tracking will be completely rigorous.
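
Purely as an illustration of the adaptive idea (this is just a toy loop
with mpmath's interval context, not what evalf itself would look like):

from mpmath import iv

def evaluate(prec):
    iv.prec = prec
    x = iv.exp(iv.mpf(100))
    return (x + 1) - x          # exact value is 1

for prec in (53, 93, 173, 333):
    # the enclosing interval only becomes tight once the working
    # precision is high enough to survive the cancellation
    print(prec, evaluate(prec))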

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



Re: [sympy] Why is evalf so complicated?

2012-04-13 Thread Fredrik Johansson
On Fri, Apr 13, 2012 at 13:12, krastanov.ste...@gmail.com
 wrote:
> There is a hypsum function in evalf that calls lambdify in a way that
> uses python-math or numpy instead of mpmath. Doesn't that hurt
> precision?

Yes. There are several functions that don't track precision.

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



Re: [sympy] Airy functions

2012-05-11 Thread Fredrik Johansson
On Fri, May 11, 2012 at 3:16 PM, someone  wrote:
> What names should the Airy functions get in sympy?
> The following list shows what other CAS and similar
> software does:
>
> Maxima:
>
>   airy_ai(...)
>   airy_dai(...)
>
> Sage:
>
>   airy_ai(...)
>   airy_ai_prime(...)
>
> mpmath:
>
>   airyai(...)
>   No primed functions?

airyai(..., derivative=k) for the kth derivative

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



Re: [sympy] Roots of Legendre polynomials

2012-08-08 Thread Fredrik Johansson
On Thu, Aug 9, 2012 at 12:44 AM, Ondřej Čertík  wrote:
> Hi,
>
> I want to have a simple script to obtain the Gaussian quadrature
> points and weights.
> I started with points, that are just roots of Legendre polynomials [1, 2]:

You could perhaps adapt the code mpmath uses internally to compute
weights for Gaussian quadrature:
https://github.com/fredrik-johansson/mpmath/blob/master/mpmath/calculus/quadrature.py#L428

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



Re: [sympy] Roots of Legendre polynomials

2012-08-08 Thread Fredrik Johansson
On Thu, Aug 9, 2012 at 2:00 AM, Fredrik Johansson
 wrote:
> On Thu, Aug 9, 2012 at 12:44 AM, Ondřej Čertík  
> wrote:
>> Hi,
>>
>> I want to have a simple script to obtain the Gaussian quadrature
>> points and weights.
>> I started with points, that are just roots of Legendre polynomials [1, 2]:
>
> You could perhaps adapt the code mpmath uses internally to compute
> weights for Gaussian quadrature:
> https://github.com/fredrik-johansson/mpmath/blob/master/mpmath/calculus/quadrature.py#L428

Or, even easier, just use the obvious mpmath functions directly:

>>> from mpmath import mp, legendre, findroot
>>> mp.dps = 200
>>> print findroot(lambda x: legendre(64, x), 0.02435029)
0.024350292663424432508955842853715661426887109314975809163453166396056696516629
52885298530616571168948823704930136717175604799266794080688526173425869681909194
43025679363843727751902756254975073084367
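
Going one step further, the corresponding Gauss-Legendre weight
w_k = 2/((1 - x_k**2)*P_n'(x_k)**2) can be computed the same way,
taking the derivative numerically with diff:

from mpmath import mp, legendre, findroot, diff

mp.dps = 30
n = 64
x = findroot(lambda t: legendre(n, t), 0.02435029)
w = 2 / ((1 - x**2) * diff(lambda t: legendre(n, t), x)**2)
print(x)
print(w)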

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



Re: [sympy] Integer square root function

2012-09-20 Thread Fredrik Johansson
On Thu, Sep 20, 2012 at 10:57 PM, M H  wrote:
> Hello sympy group,
>
> I'm looking for a python function, perhaps named isqrt(), that can find the
> truncated square root of an integer quickly.
>

> Does sympy have a faster one somewhere? Thanks.

sympy.mpmath.libmp.isqrt

It's highly optimized (as far as a pure Python implementation goes).
See 
https://github.com/fredrik-johansson/mpmath/blob/master/mpmath/libmp/libintmath.py
for the source code.
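
For example:

from sympy.mpmath.libmp import isqrt
print(isqrt(10**100))  # exactly 10**50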

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



[sympy] Re: Rational Chebyshev approximations for the special functions

2012-10-04 Thread Fredrik Johansson
On Wed, Oct 3, 2012 at 2:12 AM, Ondřej Čertík  wrote:
> Hi,
>
> Does anyone have experience with implementing rational function approximations
> to a given special function of one variable? This would be extremely
> useful addition
> to sympy. Here is an example for the error function from the standard
> gfortran library:

This is something I've needed quite frequently, but I've never been
bothered enough to code it myself. I usually just use Mathematica
(EconomizedRationalApproximation or MiniMaxApproximation). It would be
great to have in sympy or mpmath.
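
Not minimax, but mpmath does already have Pade approximants, which give
a quick rational approximation from Taylor coefficients (exp is used
here just as a simple example):

from mpmath import mp, mpf, exp, taylor, pade, polyval

mp.dps = 20
a = taylor(exp, 0, 8)     # Taylor coefficients c0..c8 of exp at 0
p, q = pade(a, 4, 4)      # [4/4] numerator and denominator coefficients
x = mpf('0.3')
approx = polyval(p[::-1], x) / polyval(q[::-1], x)
print(approx)
print(exp(x))             # the two should agree closely near x = 0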

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.



[sympy] Towards mpmath-0.18

2013-12-26 Thread Fredrik Johansson
Hi all,

The release of mpmath-0.18 is long overdue. Although there are still
some bugs that would be nice to fix first, they are not too critical,
and there are some nice new features (such as the eigenvalue code by
Timo Hartmann). Besides, 0.17 still has some compatibility problems
and I'm tired of pointing people to the git version to work around
them.

I've put up a preliminary source package (based on the current git
revision) here:
http://sage.math.washington.edu/home/fredrik/mpmath-0.18-a1.tar.gz

I would appreciate any help testing it with as many systems/Python
versions/gmpy versions(etc.) as possible. Testing that SymPy works
cleanly with it would be nice. I'd also appreciate if someone could
test that it works if one installs it in the latest Sage (I will be
doing this myself, but it's good to get more than one data point).

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sympy+unsubscr...@googlegroups.com.
To post to this group, send email to sympy@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
For more options, visit https://groups.google.com/groups/opt_out.


[sympy] Re: Towards mpmath-0.18

2013-12-26 Thread Fredrik Johansson
On Fri, Dec 27, 2013 at 3:50 AM, François
 wrote:
> There is a ticket to update sympy to 0.7.4 in sage and I think it is in the
> latest beta.
> I know latest is 0.7.4.1 but it should be close enough.
> But sage's sympy uses the bundled mpmath that comes with sympy.
> There was a long thread about mpmath and packaging on the sympy mailing list
> a couple of month ago. That's when I learnt that the version of mpmath in
> sympy
> was several commits ahead of mpmath-0.17. Will this release catch up all
> these
> commits?

The mpmath bundled with sympy has some patches for paths and doctests
as described on https://github.com/sympy/sympy/wiki/Update-mpmath.
AFAIK, these patches are not relevant for the standalone mpmath.

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sympy+unsubscr...@googlegroups.com.
To post to this group, send email to sympy@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
For more options, visit https://groups.google.com/groups/opt_out.


[sympy] Re: Towards mpmath-0.18

2013-12-28 Thread Fredrik Johansson
On Sat, Dec 28, 2013 at 5:01 AM, Aaron Meurer  wrote:
> I haven't tested the docs yet, but the code seems to work just fine in
> SymPy, in my testing so far. I'll push a branch up and let you know if
> Travis catches anything.

Thanks!

> One request for the final release: could you make it so that every
> file is valid syntax in both Python 2 and Python 3? You need to use a
> trick like 
> https://github.com/sympy/sympy/blob/master/sympy/core/compatibility.py#L128.
> A lot of people complain about the syntax errors that come up when
> installing.

Is there anything specific that still needs to be fixed? I just tried
installing mpmath in python3 (with setup.py) and did not get any error
messages.

Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sympy+unsubscr...@googlegroups.com.
To post to this group, send email to sympy@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
For more options, visit https://groups.google.com/groups/opt_out.


[sympy] mpmath 0.18

2013-12-31 Thread Fredrik Johansson
Hi all,

In order not to delay the release by another year (at least in my own
time zone!), I've marked the current revision of mpmath as 0.18.

Repository: https://github.com/fredrik-johansson/mpmath/tree/0.18

Source download:
http://sage.math.washington.edu/home/fredrik/mpmath/mpmath-0.18.tar.gz

Documentation source:
http://sage.math.washington.edu/home/fredrik/mpmath/mpmath-docsrc-0.18.tar.gz

Built documentation:
http://sage.math.washington.edu/home/fredrik/mpmath/doc/0.18/

Highlights of this release include major new linear algebra
functionality contributed by Timo Hartmann and Ken Allen: functions
for eigendecomposition, singular value decomposition, and QR
factorization of real and complex matrices. This release also includes
a number of bug fixes and compatibility improvements. For a longer
list of changes, see:
https://github.com/fredrik-johansson/mpmath/blob/0.18/CHANGES

Happy New Year,
Fredrik

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sympy+unsubscr...@googlegroups.com.
To post to this group, send email to sympy@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Introducing SymPy Plot

2007-07-21 Thread Fredrik Johansson

On 7/22/07, Brian Jorgensen <[EMAIL PROTECTED]> wrote:
> Was there any opinion on something like Basic.approx(), which would fall
> back on python's built-ins and math lib for speed? I could implement this,
> if given the okay. Given the design of the core, I think it would
> unnecessarily more difficult and harder to maintain if we did something like
> create an approximating wrapper class for each slow function, and then
> 'compiled' a version of a sympy expression which could use these
> approximations. Instead, I would just do:

There's still a lot of overhead from moving around SymPy objects. The
best solution would be to implement an alternative to __str__ that
prints a Python-compatible expression. For example,
Rational(1,5)*x**2+sin(x) would print as "0.2*x**2+sin(x)". The two
primary issues are to pad with parentheses (SymPy tends to print too
few of them) and to ensure that float division is used. Then you
create the function for plotting as f = eval("lambda x: " + str). To
handle functions that aren't in the math module, pass them to eval via
its context argument.
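
A bare-bones version of that idea (with math.sin supplied through
eval's globals):

import math

expr_str = "0.2*x**2+sin(x)"
f = eval("lambda x: " + expr_str, {"sin": math.sin})
print(f(1.0))  # 0.2 + sin(1.0) = 1.0414709848078965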

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Introducing SymPy Plot

2007-07-21 Thread Fredrik Johansson

On 7/22/07, Brian Jorgensen <[EMAIL PROTECTED]> wrote:
> Correct me if I'm way off base, but wouldn't building that string require
> walking the hierarchy in much the same way? I'm not quite sure I understand
> what you mean by 'moving around SymPy objects.' Thanks for the input.

You walk the hierarchy only once to create a Python function. Then you
call that function 5000 times (or whatever number of evaluations is
needed for the plot), instead of walking the hierarchy each of those
5000 times.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: difficulty with polynomials.roots() function

2007-08-21 Thread Fredrik Johansson

On 8/21/07, jv <[EMAIL PROTECTED]> wrote:
> I'm having difficulty using polynomials.roots().  It returns an empty
> list when scipy.poly1d(...) finds 3 real roots.
> Below is an example:

roots() only returns exact roots. The roots of your polynomial
probably cannot be expressed in explicit form. There should eventually
be a way in SymPy to work symbolically with implicit representations
of polynomial roots (like RootOf() in Maple and Root[] in
Mathematica), but it hasn't been implemented yet.

There is a function polyroots that gives numerical values:

>>> p = x**5 + 500*x**4 + (R(125)/1024)*Pi**8 + ...

>>> from sympy.numerics import Float
>>> from sympy.numerics.optimize import polyroots
>>> roots, error = polyroots(p)
>>> for root in roots: print root
...
(0.0137019084707057 + -3.08148791101958E-33*I)
(5.32152018552940 + -1.65660790096412E-29*I)
(-2.66727282648048 + 4.96547898874866*I)
(-2.66727282648048 + -4.96547898874866*I)
(-500.000676441039 + 5.37072623928395E-27*I)
>>> error
Float('4.7804080567565852E-15')

However, if you're thinking of using it for anything serious, I should
note that this function is *much* slower than SciPy's polynomial
root-finder and less well tested. Its main advantage is that it can
work at arbitrary precision (which could be useful for dealing with
some very ill-conditioned polynomials):

>>> Float.setdps(50)
>>> print polyroots(p)[0][0]
(0.013701908470705722659587288208422234247370024971454 + 1.142632740433154356093
8217050216228308336415112217E-84*I)

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: [sage-devel] Re: Python bindings for Ginac

2007-09-09 Thread Fredrik Johansson

On 9/9/07, Pablo De Napoli <[EMAIL PROTECTED]> wrote:
>
> Simpy is indeed an interesting package and could be useful in a future
> for rewriting the
> calculus package (replacing maxima)
>
> However. rather than incorporating it into Sage as a package, I feel
> that we will need to take some of it code and re-write it to fit well
> into Sage.
>
> This is because, Sage already has faster alternatives to do the
> computations in many places
> (for example: factorization of polynomials that are needed in the
> symbolic computations)

It is possible to speed up basic symbolic arithmetic in SymPy by at
least a factor of 10 (memory usage, which I've noticed is a problem in
SymPy for some calculations due to excessive caching, should be
possible to reduce by at least a factor 10 at the same time). I'm
working on a rewrite of the core that achieves this, but it is not yet
in the SymPy SVN. This still won't make it competitive with a C core
in terms of speed, but most things should be a lot smoother.

On 9/9/07, Pablo De Napoli <[EMAIL PROTECTED]> wrote:
> (For example, I've seen that the version of Simpy in svn includes a
> function for computing the number of partitions, but Sage has a faster
> function for that)

I'm happy that someone noticed the partition function in SymPy
(motivated by the awesome recent work in SAGE), even though it doesn't
stand up to the competition, and obviously no amount of optimization
will get it close as long as SymPy is written in pure Python. But our
goal for SymPy should be to make it able to solve small problems
correctly without leaving Python. Users who need to solve big problems
should switch to SAGE (take the car instead of the bicycle? :-), and
our documentation should point them that way.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Separate project for floating-point arithmetic

2007-09-25 Thread Fredrik Johansson

Hi all,

I've set up a separate project http://code.google.com/p/mpmath/ for
the Float code. I'm basically going to make this a very light-weight
library that will essentially just provide arithmetic and the standard
math and cmath functions. There will only be one or two .py modules,
and the interface will be simple so that SymPy can easily import the
functions for its Float class (this is already being implemented in
branches/sympy-sandbox). Much of the current numeric code in SymPy,
like integration and optimization code, will not be moved over to this
project, since that code is most useful when combined with the
symbolic functions in SymPy.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Separate project for floating-point arithmetic

2007-09-25 Thread Fredrik Johansson

On 9/25/07, Ondrej Certik <[EMAIL PROTECTED]> wrote:

> I think we can simply copy the latest files from mpmath to sympy, the
> same way we do with pyglet. So that users don't have to care.

Agreed. And that's why it's also a good idea to keep the project small.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: sympycore project

2007-10-16 Thread Fredrik Johansson

On 10/16/07, Pearu Peterson <[EMAIL PROTECTED]> wrote:

> I have moved the development of sympy/sandbox to a new project
> sympycore:
>
>   http://code.google.com/p/sympycore/
>
> as there is lots of work ahead to improve the assumptions model among
> other
> things and much faster core in sympycore is already usable for many
> applications.
>
> The plan is to keep copying code from sympy project minimal so that it
> would be easier
> to keep track on the new improved features in sympycore and not to mix
> old coding techniques with new ones.

Seems like a viable approach. Could you add me as a developer?

Fredrik Johansson

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Convert fractions with common denominator

2007-11-03 Thread Fredrik Johansson

On 11/3/07, friedrich82 <[EMAIL PROTECTED]> wrote:
> Could you expand the polynomial modul to convert fractions with a
> common
> denominator
> e.g.
>
>   1/R1 + 1/R2 <==> (R1+R2)/
> (R1*R2)

Use the function together():

>>> R1 = Symbol('R1')
>>> R2 = Symbol('R2')
>>> a = 1/R1 + 1/R2
>>> a
1/R1 + 1/R2
>>> together(a)
1/R1/R2*(R1 + R2)

The printing of that last expression is a bit weird (I'm not sure if
that has been fixed in the latest development version), but the value
is correct.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: SymPy code sprint

2007-11-04 Thread Fredrik Johansson

On 11/3/07, Robert Schwarz <[EMAIL PROTECTED]> wrote:
>
> Hey,
>
> what about a code sprint at the end of this year? I'm probably going to
> visit my parents for Christmas, Prague isn't too far from there.
>
> Any plans?
>
> Robert

I might be able to attend a sprint in early January. Late December
would be more difficult.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Substitution in polynomials

2007-11-04 Thread Fredrik Johansson

On 11/4/07, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
>
> Hi there! :-)
>
> First thank you very much for your work on sympy it is a really
> usefull program.
>
> I want to substitute variables in a polynomial, something similar
> to what is possible with matrices:
> ---
> >>> M
> x 0 0
> 0 x 0
> 0 0 x
> >>> M.subs(x, 4)
> 4 0 0
> 0 4 0
> 0 0 4
> ---
>
> Suppose you have a polynomial q(x,y,z) and i want to substitute the
> x,y,z
> with other expressions, namely other polynomials. Suppose all used
> variables
> are already declared as symbols.
> q = xy + y**2 + zx
> x = x1 + x2**2
> y = y1 + y3
> z = z2*z3
> now i want q to become
> q =  (x1 + x2**2)*(y1 + y3) + (y1 + y3)**2 + (x1 + x2**2)*(z2*z3)
> I know i can achive this by declairing x,y,z first but that is not
> possible
> in this case.
>
> How can I do it? :)
>
> Thank you very much and greetings
> Christian Brumm

Hi Christian,

You have to be careful to distinguish between Python variables and
SymPy symbols.

I would suggest naming the expressions

  x_ = x1 + x2**2
  y_ = y1 + y3
  z_ = z2*z3

and doing

  q.subs(x, x_).subs(y, y_).subs(z, z_).

Alternatively, you could do it like this:

  x = x1 + x2**2
  y = y1 + y3
  z = z2*z3

  x_ = Symbol('x')
  y_ = Symbol('y')
  z_ = Symbol('z')

  q.subs(x_, x).subs(y_, y).subs(z_, z)

but I think it's generally better to name the variable referring to a
symbol exactly the same thing as the symbol.

This is a rather subtle issue in SymPy and we probably need more
documentation about it.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Substitution in polynomials

2007-11-05 Thread Fredrik Johansson

On 11/5/07, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
>
> Hallo Ondrej & Fredrik, thanks a lot for your answers.
>
> Fredrik was right, i was a bit confused with variables and symbols.
> subs() worked fine for me with a new set of symbols.
>
> However things start to get messy when you need hundreds of variables
> and polynomials.
>
> For example if you want to initalize variables x1 ... x100 (with
> symbols),
> i did it with the following rather "dirty" and ineffizient code
>  (i guess exec causes the process to fork?):
> ---
> vars = ['x'+str(i) for i in range(1,101)]
> for i, value in enumerate(vars):
> exec("x%d = Symbol(value)" %(i+1))
> ---
>
> I used code similar to the above to perform substitutions on hundreds
> of
> variables, but all becomes pretty messy and unreadable. (I want to
> implement
> a cryptosystem that relies on multivariate polynomials.)
>
> Is there a better way to handle these larger scales? Or do I need
> something
> like Maple or Mathematica here? I would really love to do it in Python

Why do you need separate named variables? It seems natural to index
the symbols in a list or dict:

x = dict([(i, Symbol('x' + str(i))) for i in range(1, 101)])

gives

x[1] --> Symbol('x1')
x[2] --> Symbol('x2')

etc.
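
With that dict, the indexed symbols can be used and substituted like
any other symbols, for example:

q = x[1]*x[2] + x[2]**2 + x[3]*x[1]
q = q.subs(x[1], x[4] + x[5]**2)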

By the way, the exec statement just compiles and interprets the given
code like an ordinary statement. As far as I know, it does not fork
the process.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Easy syntax for substitution

2007-11-06 Thread Fredrik Johansson

On 11/6/07, Gael Varoquaux <[EMAIL PROTECTED]> wrote:
> For instance if I want to define a function as a solution of an equation,
> using sympy to solve the equation symbolicaly, but afterwards doing
> numerical work with the solution, I have to do something like:
>
> ++
> from sympy import solve, Symbol
> x = Symbol('x')
> y = Symbol('y')
> f = solve(x+y+1, x) # f gives you the symbolic result
> F = lambda v: f.subs(y, v)
> ++
>

SymPy has a function Lambda that you should be able to use like this:

F = Lambda(f[0], y)
F(3)

It doesn't seem to work with my copy of SymPy; not sure why. It does
work in the sympycore (http://code.google.com/p/sympycore/) branch,
though there the syntax has been changed to

F = Lambda(y, expr)

to match the argument order of Python's lambda.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: mpmath and MPFR

2007-11-18 Thread Fredrik Johansson

On Nov 18, 2007 2:18 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:

> Both MPFR and SymPy (mpmath) is in SAGE, so you can play with both of
> them from Python in SAGE. So it might be interesting for you Fredrik
> to compare mpmath to mpfr.

I'm well aware of MPFR :-) The goals of mpmath are very similar:
arbitrary precision, rigorous rounding so you can implement interval
arithmetic strong enough for testing inequalities in a computer
algebra system.

Unfortunately, the rigorous rounding only works for the +, -, *, /
operations so far in mpmath; getting it right for more advanced
functions will require (1) careful error analysis of the truncation
and rounding errors in all algorithms, (2) using Ziv's method (as
described in Paul Zimmerman's presentation) to handle rounding in
special cases.

(2) should be fairly easy to implement generically. (1) is much
harder. The MPFR documentation is very helpful here, but I can't use
many of its error bounds directly since MPFR often uses a different
algorithm. In mpmath, I prefer (with a few exceptions) to use the
simplest possible algorithm that is reasonably fast, not only because
I want to keep the code simple, but also because simpler algorithms
are often faster when implemented in Python. I also don't mind using
looser error bounds when they are simpler, and that will simplify
things.

Advantages of mpmath:
* It runs in Python, without compilation (or chance of miscompilation).
* Shorter and easier-to-read code.
* Supports arbitrary-sized exponents.
* Has some support for complex numbers.

Advantages of MPFR:
* Much faster.
* Overall has more features.
* Is mature, well-tested, and should be nearly bug-free (this can
definitely not be said of mpmath yet).
* Has a bigger and more skilled development team.

Perhaps you'll be interested in a speed comparison as well. I haven't
yet had the opportunity to test both MPFR and mpmath on the same
computer, but the numbers from the presentation could be used as
reference. Those were computed using a 1.8 GHz Athlon, and my computer
is a 2.2 GHz Athlon. Here are my results for mpmath (I hopefully
didn't make any stupid mistakes):

op, digits, time in ms / time with psyco (times slower than MPFR / with psyco)

x*y, 100, 0.0093 / 0.0048 (19x / 10x)
x*y, 1, 5.25 (11x)

x/y, 100, 0.017 / 0.010 (17x / 10x)
x/y, 1, 40 (33x)

sqrt(x), 100, 0.033 / 0.015 (24x / 11x)
sqrt(x), 1, 26 (32x)

exp(x), 100, 0.095 / 0.070 (6x / 4x)
exp(x), 1, 1204 (22x)

log(x), 100, 0.21 / 0.14 (7x / 5x)
log(x), 1, 1319 (39x)

sin(x), 100, 0.20 / 0.12 (9x / 5x)
sin(x), 1, 5983 (77x)

Note that due to the different processor speeds, mpmath is actually
probably a bit worse than these numbers indicate. Adding correct
rounding to exp, log and sin will also make them a bit slower in
mpmath (probably not much). But on average, in the area of 100 digits
which is typically most relevant for CAS operations, mpmath is fairly
consistently only about 10x slower than MPFR, and that is not too bad
for pure Python. (Interestingly, mpmath even seems to be faster than
NTL for exp and log.) From a practical point of view, with 1K-100K
ops/s at 100 digits (including elementary functions), mpmath is fast
enough to evaluate any reasonably complicated single expression at
interactive speed, which covers most use cases in SymPy.

The relatively good speed in mpmath is due to the fact that you can
implement arbitrary-precision arithmetic using Python ints instead of
digit arrays, and most of the work is done in the Python C core. For
comparison, I once tried to write an FFT in Python and never even got
it within 100x of the speed of a C FFT. Matrix multiplication is
similar. Although I generally think SymPy should be self-contained,
and I think mpmath will be good enough for arbitrary-precision
numerical calculations that we won't need an interface to MPFR (that
is of course my biased opinion), we should make sure that it is easy
to interact with SciPy for low-precision array computations.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: number theory

2007-11-19 Thread Fredrik Johansson

On Nov 18, 2007 7:49 PM, Goutham <[EMAIL PROTECTED]> wrote:
>
> hi,
> Iam new to this community. I came across SymPy while going through
> GSOC-07 entries.
> I was wondering if u are looking at incorporating some number
> theoretic stuff into sympy?
>
> Goutham

Hi Goutham,

SymPy has a basic number theory module that mainly implements prime
number generation. If you'd like to contribute more features, you'd be
most welcome.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: efficiency

2007-11-25 Thread Fredrik Johansson

On Nov 25, 2007 10:07 PM, kent-and <[EMAIL PROTECTED]> wrote:
>
>
> Hi, I just implemented a small toolbox for finite element calculations
> based on Lagrangian elements
> and compared it to similar code using GiNaC. The difference in
> efficiency is remarkable. My sympy
> code takes about 20 seconds where corresponding GiNaC code is below
> one second.
>
> Any comments on this ?

From profiling your code, it appears that nearly all the time is spent
in integrate().
This shows quite clearly that we need to optimize the integration code for
polynomials.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: efficiency

2007-11-25 Thread Fredrik Johansson

On Nov 25, 2007 10:51 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:

> But generally, Python is like 200 slower than C++ in my experience, so
> this is what has to be expected.

I'd say it's closer to 20x slower for most code. (In fact, the
Computer Language Benchmarks Game lists Python as 17x slower than C++
on average.) The gap is larger for number crunching, but symbolic
computing doesn't suffer so badly since operations like comparisons and
slicing can be delegated to Python built-in functions.

Did anyone compare sympycore with a C/C++ computer algebra system yet?
IIRC we compared some operation to Maxima and sympycore was 7x or so
slower.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: mailinglist for mpmath?

2007-12-08 Thread Fredrik Johansson

On Dec 8, 2007 9:57 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
>
> Hi Fredrik,
>
> do you think you could create a mailinglist for mpmath?

Sure; I've set up http://groups.google.com/group/mpmath

I have forwarded your message to that list and will reply there.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Wrong series() expansion

2008-01-03 Thread Fredrik Johansson

On 1/3/08, Ondrej Certik <[EMAIL PROTECTED]> wrote:
>
> Hi Fabian!
>
> On Jan 3, 2008 5:22 PM, Fabian Steiner <[EMAIL PROTECTED]> wrote:
> >
> > Hello!
> >
> > Attempting to obtain the Tayor series of sqrt(x) and 1/x gives the
> > following:
> >
> > >>> x = Symbol('x')
> > >>> sqrt(x).series(x, 4)
> > x**(1/2)
> > >>> (1/x).series(x, 4)
> > 1/x
> >
> > But these are definitely no Taylor polynoms so that sympy should throw
> > an exception or inform the user that both expressions have no taylor
> > expansion at x = 0 (just like Maple does).
>
>
> The series does Laurent (or generalized) series expansion, so
>
> 1/x  is expanded to 1/x, which is correct
>
> sqrt(x) cannot be expanded, so it is left as is. Maybe we can consider
> raising an exception,
> but I think returning the expression is also fine.
>
> Does maple really raise an exception for 1/x? (I don't think so)
>
> and for sqrt(x)? This one could be fixed if all other CAS raise an exception.

IMO, series() should return an asymptotic series, necessarily
containing nonpolynomial terms around singular points. There could
perhaps be a function taylor() that returns a series but raises an
exception when the series is not a Taylor series.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: logarithm expansion

2008-04-14 Thread Fredrik Johansson
On Mon, Apr 14, 2008 at 11:04 AM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
>  I think log(x**2) shouldn't be automatically expanded to 2*log(x).

It shouldn't, because it's wrong. For example, if x = -1, log(x**2) =
0 but 2*log(x) = 2*pi*I.

>  >  There is an issue about this [2] but I don't see a clear decision made.
>
>  I think a conclusion hasn't been made. BTW Sage also collects
>  exponentials automatically:
>
>  sage: exp(x)*exp(x)
>  e^(2*x)
>
>  the same way we do:
>
>  In [1]: exp(x)*exp(x)
>  Out[1]:
>   2*x
>  ℯ

This is right because the identity exp(x)*exp(x) == exp(2*x) holds for all x.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: pPrint an equation

2008-04-17 Thread Fredrik Johansson

Why does it make sense to cover all equalities and inequalities by
this one operator Eq? The present syntax is to me like spelling x*y+z
as Add(Add(x,'*',y), '+', z). Doesn't it make more sense to define
separate Eq, Ne, Lt, Le, Gt, Ge operators?

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Function evaluation confusion (and numerics)

2008-04-17 Thread Fredrik Johansson

This is why floats should be contagious, meaning that sin(2) -> sin(2)
but sin(2.0) -> 0.909297426825682. Asking for a numerical evaluation
of a function then becomes simple and intuitive. In particular,
map(sin, <an array of floats>) neatly gives a floating-point array
back.
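
A toy sketch of that rule in plain Python (illustrative only; the tuple
just stands in for an unevaluated symbolic sin):

import math

def contagious_sin(arg):
    if isinstance(arg, float):
        return math.sin(arg)  # numeric in, numeric out
    return ('sin', arg)       # anything else stays symbolic

print(contagious_sin(2))      # ('sin', 2)
print(contagious_sin(2.0))    # 0.9092974268256817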

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Mpmath 0.8 has been released

2008-04-20 Thread Fredrik Johansson

Hi all,

Mpmath 0.8 is now available and can be downloaded from the website:
http://code.google.com/p/mpmath/

It is also available in the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.8

This version adds methods for oscillatory quadrature, accelerated
summation of infinite series, limit computation, integration of ODEs,
and constant recognition (similar to the Inverse Symbolic Calculator).
There are several new mathematical functions, including the Lambert W
function, elliptic integrals and various hypergeometric functions,
plus a few new mathematical constants such as the golden ratio and
Khinchin's constant. Various existing functions have also been tuned
for accuracy and speed.

A number of important bugfixes have been committed, including a
workaround for a core bug in some versions of Python 2.4 that would
break mpmath. The tests for machine float compatibility have also been
edited to work on versions of Python compiled to use x87 float
instructions. Finally, the documentation has been improved, and can
now be converted to pretty HTML (as available on the website) using
Sphinx.

Many thanks to Mario Pernici for contributing high quality
enhancements especially to the complex arithmetic and elementary
functions (but also other parts of the code), and for helping out with
debugging. Thanks also to Ondrej for his ODE solver code and debugging
assistance, and to several other people who reported bugs, posted
comments to the issue tracker, or provided other feedback.

As usual, there are likely bugs, and the quicker they can be reported,
the better. Bug reports can be sent to the issue tracker at
http://code.google.com/p/mpmath/issues/list or the mpmath mailing
list.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: Mpmath 0.8 has been released

2008-04-22 Thread Fredrik Johansson

On Tue, Apr 22, 2008 at 7:31 PM, Vinzent Steinberg
<[EMAIL PROTECTED]> wrote:
>  Why is sympy so much slower? Printing 1000 digits of pi takes nothing
>  with mpmath and forever with sympy...
>
>  I thought sympy uses mpmath? Why the difference?

SymPy contains mpmath as a third-party module, but does not actually
use it anywhere. Fixing that will be (part of) my GSoC project.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



Re: mercurial on windows

2008-05-02 Thread Fredrik Johansson

On Wed, Apr 30, 2008 at 5:32 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
>  Fredrik, you told me you just call mercurial from the "cmd" console,
>  right? And how do you edit python files and how do you run them?
>  E.g. do you use your favourite editor (in my case it'd be vim) and
>  then go to the "cmd" console and run python/hg from there?

Yes, pretty much. As a unix user coming to Windows, you really should
consider cygwin though.

I have tried TortoiseHg, but it is not as mature as TortoiseSVN.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



[sympy] Re: unit step (heaviside) function in sympy?

2008-05-05 Thread Fredrik Johansson

On Mon, May 5, 2008 at 11:07 PM, Friedrich Hagedorn <[EMAIL PROTECTED]> wrote:
>
>  On Mon, May 05, 2008 at 10:39:23PM +0200, Ondrej Certik wrote:
>  >
>  > On Mon, May 5, 2008 at 8:22 PM, Reckoner <[EMAIL PROTECTED]> wrote:
>  > >
>  > > is there a unit step (heaviside) function in sympy?
>  > >
>  > > I need to work a conditional into a symbolic expression.
>  >
>  > We have sign which is basically the same thing:
>  >
>  > In [1]: sign(x)
>  > Out[1]: sign(x)
>  >
>  > In [2]: sign(1)
>  > Out[2]: 1
>  >
>  > In [3]: sign(-5)
>  > Out[3]: -1
>
>  And so you have the heaviside function:
>
>  In [1]: H=Lambda(x, (sign(x)+1)/2)
>
>  In [2]: H(-1)
>  Out[2]: 0
>
>  In [3]: H(0)
>  Out[3]: 1
>
>  In [4]: H(1)
>  Out[4]: 1

I think there is a bug here. sign(0) should be 0 (and as a consequence
H(0) should be 1/2).

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



[sympy] Re: 'vectorize' with mpc?

2008-05-18 Thread Fredrik Johansson

On May 18, 1:07 pm, Zoho <[EMAIL PROTECTED]> wrote:
> I am using an LU decomposition routine and wanted to 'vectorize' a
> relation but mpmath borks when used with complex numbers. Is this a
> known problem?

It is now (thanks). I've opened 
http://code.google.com/p/mpmath/issues/detail?id=40

As a workaround, you can put the scalar on the right hand side:

b[0,0:2] -= b[0,0:2]*pivot

I should note that almost no effort has been made so far to make
mpmath compatible with numpy. What does work, more or less works by
accident.

Fredrik
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



[sympy] Re: online sympy shell at live.sympy.org

2008-05-19 Thread Fredrik Johansson

On Mon, May 19, 2008 at 4:35 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I've setup:
>
> http://live.sympy.org/
>
> which is a python shell (see the link in the app for sources) and I've
> included a sympy module in it. Sample session:

Very nice!

Can you set it up to import sympy.interactive automatically?

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



[sympy] Re: should we allow using implicit imports?

2008-06-02 Thread Fredrik Johansson

On Mon, Jun 2, 2008 at 11:48 AM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> What do you think Fredrik? Let's use explicit in mpmath as well, at
> least in SymPy?

When a function name changes, you have to change not just the function
and the code that refers to it, but also lots of imports. You can
catch errors this way, but those imports should be checked via unit
tests anyway. In many ways I think explicit imports are a bit like
(explicit) static type declarations, which I don't like :-)

As I've said, it adds a lot of clutter when there are 50 items to
import. A possible solution is to keep the namespace of the entire
module (requiring just one import), but that results in even more
clutter if the imported objects are used in 100 places in the code.

Note that some mpmath modules define __all__, which I think is a good
compromise, as it prevents "leakage" via subsequent imports. So it
doesn't really matter to SymPy what mpmath does to import items
between modules internally.

The argument that things get clearer is valid. But it might be even
better with a comment such as

# Here we import all the low level functions fadd, fmul, etc, which in
this module will be wrapped to operate on mpf instances
from lib import *

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---



[sympy] Re: Identifying repeated subexpressions in systems of equations

2008-06-18 Thread Fredrik Johansson
I've implemented an evaluate=False option for Add, Mul, Pow and
functions (see attachment). This could be useful to suppress default
behavior like Sub(x,y) -> Add(x,Mul(-1,y)) for code generation etc. As
it happens, I need something like this for evalf testing as well.

Fredrik

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sympy?hl=en
-~--~~~~--~~--~--~---

# HG changeset patch
# User Fredrik Johansson <[EMAIL PROTECTED]>
# Date 1213795035 -7200
# Node ID 809a34695ade804a9d3efce6ad8f100f05dafe73
# Parent  0842457787a92e2132778d3a379dd0f40dea6333
implement evaluate=False option for Add, Mul, Pow and functions

diff -r 0842457787a9 -r 809a34695ade sympy/core/function.py
--- a/sympy/core/function.py	Tue Jun 17 18:26:55 2008 +0200
+++ b/sympy/core/function.py	Wed Jun 18 15:17:15 2008 +0200
@@ -126,6 +126,8 @@
             if opt in options:
                 del options[opt]
         # up to here.
+        if options.get('evaluate') is False:
+            return Basic.__new__(cls, *args, **options)
         r = cls.canonize(*args, **options)
         if isinstance(r, Basic):
             return r
diff -r 0842457787a9 -r 809a34695ade sympy/core/operations.py
--- a/sympy/core/operations.py	Tue Jun 17 18:26:55 2008 +0200
+++ b/sympy/core/operations.py	Wed Jun 18 15:17:15 2008 +0200
@@ -21,6 +21,8 @@
 
     @cacheit
     def __new__(cls, *args, **assumptions):
+        if assumptions.get('evaluate') is False:
+            return Basic.__new__(cls, *map(_sympify, args), **assumptions)
         if len(args)==0:
             return cls.identity()
         if len(args)==1:
diff -r 0842457787a9 -r 809a34695ade sympy/core/power.py
--- a/sympy/core/power.py	Tue Jun 17 18:26:55 2008 +0200
+++ b/sympy/core/power.py	Wed Jun 18 15:17:15 2008 +0200
@@ -60,6 +60,8 @@
     def __new__(cls, a, b, **assumptions):
         a = _sympify(a)
         b = _sympify(b)
+        if assumptions.get('evaluate') is False:
+            return Basic.__new__(cls, a, b, **assumptions)
         if b is S.Zero:
             return S.One
         if b is S.One:
diff -r 0842457787a9 -r 809a34695ade sympy/core/tests/test_arit.py
--- a/sympy/core/tests/test_arit.py	Tue Jun 17 18:26:55 2008 +0200
+++ b/sympy/core/tests/test_arit.py	Wed Jun 18 15:17:15 2008 +0200
@@ -1,5 +1,5 @@
 from sympy import Symbol, sin, cos, exp, O, sqrt, Rational, Real, re, pi, \
-    sympify, sqrt
+    sympify, sqrt, Add, Mul, Pow
 from sympy.utilities.pytest import XFAIL
 
 x = Symbol("x")
@@ -941,3 +941,17 @@
     e = 2*a + b
     f = b + 2*a
     assert e == f
+
+def test_suppressed_evaluation():
+    a = Add(1,3,2,evaluate=False)
+    b = Mul(1,3,2,evaluate=False)
+    c = Pow(3,2,evaluate=False)
+    assert a != 6
+    assert a.func is Add
+    assert a.args == (1,3,2)
+    assert b != 6
+    assert b.func is Mul
+    assert b.args == (1,3,2)
+    assert c != 9
+    assert c.func is Pow
+    assert c.args == (3,2)
diff -r 0842457787a9 -r 809a34695ade sympy/core/tests/test_functions.py
--- a/sympy/core/tests/test_functions.py	Tue Jun 17 18:26:55 2008 +0200
+++ b/sympy/core/tests/test_functions.py	Wed Jun 18 15:17:15 2008 +0200
@@ -277,3 +277,10 @@
     assert diff(x**3, x) == 3*x**2
     assert diff(x**3, x, evaluate=False) != 3*x**2
     assert diff(x**3, x, evaluate=False) == Derivative(x**3, x)
+
+def test_suppressed_evaluation():
+    a = sin(0,evaluate=False)
+    assert a != 0
+    assert str(a) == "sin(0)"
+    assert a.func is sin
+    assert a.args == (0,)


[sympy] Re: __str__ and __repr__ confusion

2008-06-27 Thread Fredrik Johansson

On Fri, Jun 27, 2008 at 10:56 PM, Kirill Smelkov
<[EMAIL PROTECTED]> wrote:

> I quote http://docs.python.org/ref/customization.html#l2h-183
>
> __repr__(self):
>
>  Called by the repr() built-in function and by string conversions (reverse
>  quotes) to compute the ``official'' string representation of an object. If at
>  all possible, this should look like a valid Python expression that could be
>  used to recreate an object with the same value (given an appropriate
>  environment). If this is not possible, a string of the form "<...some useful
>  description...>" should be returned. The return value must be a string 
> object.
>  If a class defines __repr__() but not __str__(), then __repr__() is also used
>  when an ``informal'' string representation of instances of that class is
>  required.

Note "given an appropriate environment". The appropriate environment
is use within SymPy so I think it is actually sufficient if
sympify(repr(x)) == x holds, sympify taking the role of eval. It is
fine in my opinion if repr displays rationals as 2/3, etc.

>  This is typically used for debugging, so it is important that the
>  representation is information-rich and unambiguous.
>
>
> So, say about Symbol('x') -> 'x' -- symbols could be Dummy, Temporary, they
> could have non-default assumption, etc.

Yes, and this is something I have pointed out before. It is very
problematic that expressions can have "hidden" properties that don't
show up in the output of either repr() or str(). My suggested solution
is to remove assumptions from expressions and get rid of
Dummy/Temporary/Wild, keeping just plain symbols.
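
A minimal example of the problem (assuming current SymPy behavior;
exact printing may vary by version):

>>> from sympy import Symbol
>>> a = Symbol('x', positive=True)
>>> b = Symbol('x')
>>> str(a), str(b)      # both print identically...
('x', 'x')
>>> a == b              # ...yet they are different objects
False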

Fredrik




[sympy] Re: milion digits of pi benchmarks (sympy vs Sage)

2008-07-02 Thread Fredrik Johansson

Note: I just implemented the Chudnovsky algorithm in mpmath SVN. This
makes computing pi about 2.5x faster.
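
For anyone who wants to try it (a small sketch; the digit count here is
kept modest, and timings will of course vary):

>>> from mpmath import mp, pi
>>> mp.dps = 1000         # decimal digits; raise this for the big runs
>>> s = str(+pi)          # unary + evaluates pi at the current precision
>>> s[:12]
'3.1415926535'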

Fredrik




[sympy] Re: milion digits of pi benchmarks (sympy vs Sage)

2008-07-02 Thread Fredrik Johansson

On Wed, Jul 2, 2008 at 3:56 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> Ah, that's him, great. Fredrik, you should make your time in August
> and come to scipy2008. You can apply for a sponsorship to Enthought. I
> am sure you will not regret.

I have exams then, unfortunately.

Fredrik




[sympy] Re: [newb] Whither N?

2008-07-08 Thread Fredrik Johansson
On Tue, Jul 8, 2008 at 3:47 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> On Tue, Jul 8, 2008 at 3:27 PM, Neal Becker <[EMAIL PROTECTED]> wrote:
>>
>> I'm reading:
>> http://planet.sympy.org/
>>
>> There, I see this example:
>>>>> from sympy import *
>>>>> var('x')
>> x
>>>>> gauss = Integral(exp(-x**2), (x, -oo, oo))
>>>>> N(gauss, 15)
>> '1.77245385090552'
>>
>> But I get this:
>>>>> from sympy import *
>>>>> var ('x')
>> x
>>>>> gauss = Integral (exp(-x**2), (x, -oo, oo))
>>>>> N(gauss, 15)
>> Traceback (most recent call last):
>>  File "<stdin>", line 1, in <module>
>> NameError: name 'N' is not defined
>
> Thanks for noticing. Fredrik, what is your plan to put it in?
>
> Currently, you can do this:
>
> wget http://www.dd.chalmers.se/~frejohl/code/evalf.py
>
> Apply this patch:
>
> --- evalf.orig  2008-07-02 17:12:10.0 +0200
> +++ evalf.py2008-07-08 15:44:31.446086994 +0200
> @@ -20,11 +20,11 @@
>
>  """
>
> -from mpmath.lib import (from_int, from_rational, fpi, fzero, fcmp,
> +from sympy.mpmath.lib import (from_int, from_rational, fpi, fzero, fcmp,
> normalize, bitcount, round_nearest, to_str, fpow, fone, fpowi, fe,
> fnone, fhalf, fcos, fsin, flog, fatan, fmul, fneg, to_float, fshift)
>
> -from mpmath import mpf, mpc, quadts, mp
> +from sympy.mpmath import mpf, mpc, quadts, mp
>
>  import math
>  import sympy
>
>
> Then start python, or bin/isympy and:
>
> In [1]: from evalf import N
>
> In [2]: gauss = Integral(exp(-x**2), (x, -oo, oo))
>
> In [3]: gauss
> Out[3]:
> ∞
> ⌠
> ⎮     2
> ⎮   -x
> ⎮  ℯ    dx
> ⌡
> -∞
>
> In [4]: N(gauss, 15)
> Out[4]: 1.77245385090552
>
>
> Please let us know if you find any other similar problems,
> Thanks,
> Ondrej
>
> >
>



-- 
Fredrik Johansson




[sympy] Re: [newb] Whither N?

2008-07-08 Thread Fredrik Johansson

On Tue, Jul 8, 2008 at 11:53 PM, Fredrik Johansson
<[EMAIL PROTECTED]> wrote:
>> Thanks for noticing. Fredrik, what is your plan to put it in?
>>
>> Currently, you can do this:
>>
>> wget http://www.dd.chalmers.se/~frejohl/code/evalf.py

Yep, the function N is implemented in that file and not yet available
in SymPy. My plan is to put it in when it is ready :-)

(Sorry for the empty mail; I clicked "Send" by accident.)

Fredrik




[sympy] Re: high school factoring

2008-07-21 Thread Fredrik Johansson

On Mon, Jul 21, 2008 at 8:12 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:

> Not bad. However the time varies a lot on my laptop if I repeat it in
> another ipython session -- from 0.7s (!) to 8s. Can anyone verify this
> please? You need the latest hg sympy. Why is that?

I would guess because factor uses a randomized algorithm. No doubt the
worst case time can be improved though.

Fredrik




[sympy] ANN: mpmath 0.9 released

2008-08-23 Thread Fredrik Johansson

Hi,

Mpmath version 0.9 is now available from the website:
http://code.google.com/p/mpmath/

It can also be downloaded from the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.9

Mpmath is a pure-Python library for arbitrary-precision
floating-point arithmetic that implements an extensive set of
mathematical functions. It can be used as a standalone library
or via SymPy (http://code.google.com/p/sympy/).

The most significant change in 0.9 is that mpmath now transparently
uses GMPY (http://code.google.com/p/gmpy/) integers instead of
Python's builtin integers if GMPY is installed. This makes mpmath
much faster at high precision. Computing 1 million digits of pi,
for example, now only takes ~10 seconds.

Extensive benchmarks (with and without GMPY) are available here:
http://mpmath.googlecode.com/svn/bench/mpbench.html

Credit goes to Case Van Horsen for implementing GMPY support.

There are many new functions, including Jacobi elliptic functions
(contributed by Mike Taschuk), various exponential integrals,
Airy functions, Fresnel integrals, etc. Several missing basic
utility functions have also been added, and Mario Pernici has
taken great care to optimize the implementations of various
elementary functions.

For a more complete changelog, see:
http://mpmath.googlecode.com/svn/trunk/CHANGES

Bug reports and other comments are welcome at the issue tracker:
http://code.google.com/p/sympy/issues/list

or the mpmath mailing list:
http://groups.google.com/group/mpmath

Thanks to all who contributed code or provided feedback for
this release!

Fredrik




[sympy] Re: Improving sympy/Sage integration

2008-08-27 Thread Fredrik Johansson

On Wed, Aug 27, 2008 at 11:56 AM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> It is sometimes slow if the nsimplify() cannot make it, so it probably
> should not be default, but it should be optional.
>
> I know many people (including myself) have burned themselves by
> writing 1/2*x**2, so this might help.

If one is just interested in matching fractions, continued fractions
can be used. This is extremely fast. See
http://en.wikipedia.org/wiki/Continued_fraction#Best_rational_approximations

Still, I think this is too much magic to use by default in sympify().
And most users won't have imported division from __future__ anyway, so
it won't help when 1/2 == 0.
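
A minimal sketch of that idea using the standard library (Python 2.6+):
Fraction.limit_denominator is precisely a best-rational-approximation
search driven by the continued fraction expansion (the helper name and
denominator bound below are made up for the sketch):

>>> from fractions import Fraction
>>> def match_fraction(x, max_den=1000):
...     return Fraction(x).limit_denominator(max_den)
...
>>> match_fraction(0.5)
Fraction(1, 2)
>>> match_fraction(0.33333333)
Fraction(1, 3)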

Fredrik




[sympy] Re: make sympy more consistent, learn from Mathematica?

2008-08-30 Thread Fredrik Johansson

+1 for:

* canonize -> eval
* doit -> eval
* Integral = integrate, and suppressed evaluation handled the same as
for any other object

Maybe add evaluate=False support to sympify? Sympycore also implements
suppressed evaluation via the Verbatim algebra, by the way.

I don't think Hold() is possible, at least not without some major
interpreter hackage. The problem is that when Python evaluates
Hold(2+2), it evaluates 2+2 before it even calls Hold.

As for Head, I think that is x.func. In sympycore, we have x.func and
x.args representing the mathematical structure (to be compatible with
sympy), and x.pair = (x.head, x.data) for the internal representation.
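
For reference, roughly how the func/args view looks in SymPy today
(a sketch; printing and argument order may differ between versions):

>>> from sympy import Symbol, Add
>>> x = Symbol('x')
>>> e = x**2 + 1
>>> e.func is Add
True
>>> sorted(map(str, e.args))
['1', 'x**2']
>>> Add(2, 2, evaluate=False)    # stays unevaluated, cf. the evaluate=False patch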

Fredrik




[sympy] Re: make sympy more consistent, learn from Mathematica?

2008-08-31 Thread Fredrik Johansson

> I don't think Hold() is possible, at least not without some major
> interpreter hackage. The problem is that when Python evaluates
> Hold(2+2), it evaluates 2+2 before it even calls Hold.

Come to think of it, it can be done with 'with':

with no_evaluation:
z = x + y

though unfortunately we lose the ability to do it inline in an
expression. Maybe ask the Python devs what they would think about
adding support for the syntax

z = x+y with no_evaluation
z = 3*(abs(x+y) with assumptions(x > 0, y > 0)) + 4*x

?

On a related note, I think __pos__ should be defined to force
reevaluation (like Decimal and mpmath.mpf). Thus:

a = abs(x)

with no_evaluation:
b = integrate(x**2, (x, 1, 2))

b = +b

with assumptions(x > 0):
a = +a
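
The mpmath precedent, for concreteness (a small sketch):

>>> from mpmath import mp, pi
>>> mp.dps = 30
>>> x = +pi          # pi evaluated and stored at 30 digits
>>> mp.dps = 10
>>> print(+x)        # unary plus re-rounds to the current precision
3.141592654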


On Sun, Aug 31, 2008 at 2:42 AM, Ondrej Certik <[EMAIL PROTECTED]> wrote:

> Another thing with this --- the x.data in sympycore is implementation
> dependend, right?
>
> I believe the following is simpler:
>
> x.head
> x.args

That is just the current system but with different names.

> Because that way we can plug in as our core basically anything, be it
> sympycore, or Sage's pynac, or anything way better, if someones
> manages to write. What do you think?

For the expression itself, we should really just need a tree where the
nodes are numbers, symbols, add, mul, pow, and a few more standard
items. Then additional information like "the symbol x represents a
complex number" or "the symbol x represents a 2x2 matrix over a finite
field" (or what have you). can be passed on the side as an assumption.

Fredrik




[sympy] Re: preliminary Mathematica like docs

2008-09-04 Thread Fredrik Johansson

On Thu, Sep 4, 2008 at 6:47 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> as I promised, I started to rewrite the Mathematica docs to sphinx
> using Python+SymPy. You can pull from the doc branch here:

Careful. While creating documentation in a similar style seems
worthwhile, copying their selection of examples arguably constitutes
copyright infringement.

Fredrik




[sympy] Re: bug in evalf

2008-10-01 Thread Fredrik Johansson

On Wed, Oct 1, 2008 at 10:50 PM, william ratcliff
<[EMAIL PROTECTED]> wrote:
> I believe there is a glitch in evalf when complex numbers are involved:
>
> import sympy
> import numpy as N
> import scipy.linalg
>
> pi=N.pi
> g=sympy.exp(2*pi*sympy.I)
> A=N.matrix([[g,0],[0,g]])
>
> scipy.linalg.eig(A)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "c:\python25\lib\site-packages\scipy\linalg\decomp.py", line 149, in
> eig
> overwrite_a=overwrite_a)
>   File "c:\python25\lib\site-packages\sympy\core\basic.py", line 1860, in
> __float__
> result = self.evalf()
>   File "c:\python25\Lib\site-packages\sympy\core\evalf.py", line 950, in
> Basic_evalf
> prec = dps_to_prec(n)
>   File "c:\python25\Lib\site-packages\sympy\mpmath\lib.py", line 102, in
> dps_to_prec
> return max(1, int(round((int(n)+1)*3.3219280948873626)))
>   File "c:\python25\lib\site-packages\sympy\core\basic.py", line 1864, in
> __float__
> raise ValueError("Symbolic value, can't compute")
> ValueError: Symbolic value, can't compute
>
>
> I think that the problem is that if there is a complex number that should
> result from the evaluation of the exponential, the imaginary portion is left
> as a symbol instead of being converted to a complex number.   Any ideas?
>
>
> Thanks,
> William

This does not look like a bug in evalf. It seems that numpy is calling
float(), which fails because g actually has nonzero imaginary part:

>>> complex(g)
(1-2.4286128663675299e-016j)

Is there a good reason for using numpy.pi instead of sympy.pi? With
sympy.pi, the complex exponential will simplify symbolically to a real
number.

Generally, I think using numpy linear algebra functions with sympy
elements is unlikely to work well (if at all). You should probably
convert all elements to float or complex first.
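
Something along these lines should work (a sketch, using sympy.pi so
the exponential collapses to 1 symbolically):

>>> import numpy as np
>>> import scipy.linalg
>>> import sympy
>>> g = sympy.exp(2*sympy.pi*sympy.I)     # simplifies to 1 exactly
>>> A = np.array([[complex(g), 0], [0, complex(g)]], dtype=complex)
>>> w, v = scipy.linalg.eig(A)            # plain numerical linear algebra now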

Fredrik




[sympy] Re: Heard on Wikipedia

2008-10-03 Thread Fredrik Johansson

On Fri, Oct 3, 2008 at 11:09 AM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> Wow. Big credit goes to Mateusz for this.
>
> Ondrej

Indeed. And let's add a test for this.

On the other hand, SymPy is unable to do the integral
x/sqrt(x^4+10x^2-96x-71), also mentioned on the Wikipedia page.
Apparently Axiom (but no other systems) can do it.
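
For reference, the integral in SymPy syntax (just restating the
expression from the Wikipedia page; as said above, SymPy currently
gives up on it):

>>> from sympy import Symbol, integrate, sqrt
>>> x = Symbol('x')
>>> integrate(x/sqrt(x**4 + 10*x**2 - 96*x - 71), x)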

Fredrik




[sympy] Heard on Wikipedia

2008-10-03 Thread Fredrik Johansson

From http://en.wikipedia.org/wiki/Risch_algorithm, "The following is a
more complex example, which no software (as of March 2008) is known to
find an antiderivative for: [...]" But yesterday an anonymous user
pointed out on the talk page that SymPy is actually able to calculate
the integral. The user posted the interactive example here:
http://dpaste.com/81951/

I tried the integral in Mathematica, which indeed is unable to do it.

Fredrik




[sympy] Re: Virasoro algebra in sympy

2008-10-05 Thread Fredrik Johansson

On Sun, Oct 5, 2008 at 3:31 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> In the past, we played with having a NCMul like ginac, then we
> switched to just having one Mul and the commutative=False assumption,
> currently attached to the symbols directly, in the future this will be
> handled by the assumptions system, just like in Mathematica. If you
> have a better idea how this could/should be handled, please share it.

But note that Mathematica handles noncommutative multiplication with a
separate "class", not with assumptions.  I think a NCMul class would
be a better solution for SymPy too.
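
For context, the current assumption-based approach in a nutshell:

>>> from sympy import Symbol
>>> A = Symbol('A', commutative=False)
>>> B = Symbol('B', commutative=False)
>>> A*B == B*A        # products of noncommutative symbols keep their order
False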

Fredrik




[sympy] Re: ANN: mpmath 0.10 released

2008-10-15 Thread Fredrik Johansson

On Wed, Oct 15, 2008 at 3:56 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:

> Indeed, me too. Could you please Fredrik (or anyone else) prepare a patch?

I don't have time at the moment (I made the release so I could take a
break for the next few days).

A few updates to SymPy will be needed, mainly due to the fact that
many of the functions used by evalf.py have been renamed. This should
be straightforward to fix by search-and-replace (the use of explicit
imports helps here :-). Some SymPy functions should also be updated to
use new mpmath features (see e.g. issue 1094, Bernoulli numbers).

Fredrik




[sympy] ANN: mpmath 0.10 released

2008-10-15 Thread Fredrik Johansson

Hi,

Mpmath version 0.10 is now available from the website:
http://code.google.com/p/mpmath/

It can also be downloaded from the Python Package Index:
http://pypi.python.org/pypi/mpmath/0.10

Mpmath is a pure-Python library for arbitrary-precision floating-point
arithmetic that implements an extensive set of mathematical functions.
It can be used as a standalone library or via SymPy
(http://code.google.com/p/sympy/).

Additions in 0.10 include plotting support, matrices and linear
algebra functions, new root-finding and quadrature algorithms,
enhanced interval arithmetic, and some new special functions. Many
speed improvements have been committed (a few functions are an order
of magnitude faster than in 0.9), and as usual various bugs have been
fixed. Importantly, this release fixes mpmath to work with Python 2.6.

For a more complete changelog, see:
http://mpmath.googlecode.com/svn/trunk/CHANGES

Special thanks go to Vinzent Steinberg who contributed the new linear
algebra and root-finding code, and to everybody else who made a
contribution or provided feedback.

Bug reports and other comments are welcome at the issue tracker at
http://code.google.com/p/sympy/issues/list or the mpmath mailing list:
http://groups.google.com/group/mpmath

Fredrik




[sympy] Re: ANN: mpmath 0.10 released

2008-10-16 Thread Fredrik Johansson

On Thu, Oct 16, 2008 at 4:43 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> Unfortunately, I now encountered a more serious problem with x[0] and
> x._mpf_[0]. Here is how to reproduce (Kirill, is there an easier way
> to checkout the remote branch?):

> I don't like the way I fixed that. Fredrik, do you think you could
> please look into that? I think we'll release with our old mpmath, as
> that is well tested and works. And import the new mpmath into our next
> release, so that we have enough time to discover and fix bugs.

I don't like that fix either :-) There should be no non-tuples in there.

The fix is that the mpmath functions mpf_gamma, mpf_pi and mpf_e
should be used instead of functions.gamma, functions.pi, functions.e.

Fredrik




[sympy] Re: x.simplify() or simplify(x) or both

2008-10-17 Thread Fredrik Johansson

On Fri, Oct 17, 2008 at 10:50 AM, Fabian Seoane <[EMAIL PROTECTED]> wrote:
> Other related things: we should agree on how to name assumptions, I've
> seen in the code is_integer and is_Integer, is_number and
> is_Number ... we really need some
> rules on coding style ...

x.is_Integer means x is an Integer instance with a definite value;
x.is_integer means x is any symbolic expression known to represent an
integer.
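
Concretely (a quick sanity check in a SymPy session):

>>> from sympy import Integer, Symbol
>>> n = Symbol('n', integer=True)
>>> Integer(3).is_Integer, Integer(3).is_integer
(True, True)
>>> n.is_Integer, n.is_integer
(False, True)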

Fredrik




[sympy] Re: ANN: mpmath 0.10 released

2008-10-17 Thread Fredrik Johansson

On Fri, Oct 17, 2008 at 8:40 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:

The first is due to quadts(f, a, b) changing syntax to quadts(f, [a,
b]) (a trivial fix in evalf.py), and the second is due to findpoly
reversing order (see the "XXX:" comment in nsimplify; also trivially
fixed).
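
For example, with mpmath 0.10 (a quick sketch of the new call form):

>>> from mpmath import quadts, sin, pi
>>> print(quadts(sin, [0, pi]))    # interval now given as a list
2.0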

Fredrik



