Ah, sorry for the spam. It's computing the reduced normal forms.

On 17.04.2012 20:47, Tom Bachmann wrote:
Actually, solving the systems is quite fast. Not sure what the problem is
so far...

On 17.04.2012 20:39, Tom Bachmann wrote:
Hi all,

so recently we (I ...) pushed the function ratsimpmodprime, created by
Mateusz's GSoC student from last year. It can be used to simplify
fractions modulo prime ideals, and it was suggested we try to
incorporate it into trigsimp. I tried basically that [with little
success, see below]. First of all, the code is at the end of the email
:D. Second, I apologize that this is long; I couldn't manage to explain
it more succinctly.

Examples
--------

>>> mytrigsimp(sin(x)*cos(x)/sin(2*x))
1/2
>>> mytrigsimp((-sin(x) + 1)/cos(x) + cos(x)/(-sin(x) + 1))
1/cos(x)
>>> mytrigsimp(sin(x)*cos(x)/sin(5*x))
(-sin(x)**2 + 1)/(-6*sin(x)*sin(2*x) - 2*sin(x)*sin(4*x) + 5*cos(x))

[is that any good?]

>>> mytrigsimp((sin(x)*cos(x)-sin(2*x))/(sin(x**2)*cos(x**2)-sin(2*x**2)))

sin(2*x)/sin(2*x**2)



Problems (practical)
--------------------

For one thing, ratsimpmodprime seems to be buggy. For example,

mytrigsimp((-sin(x) + 1)/cos(x) + cos(x)/(-sin(x*2) + 1), order=grlex)

seems to compute a wrong answer, whereas lex order works [but perhaps my
code could also be wrong, who knows]. I fixed one bug (where
ratsimpmodprime would return infinities), but this seems to be
non-trivial (but should of course be fixable).

The real problem is that ratsimpmodprime solves "large" linear systems.
Here large means, for example, 20 variables and 20 equations. Mind you,
these are homogeneous linear equations with integer / rational
coefficients. My gut feeling is that solving these should really be a
piece of cake, but then again my gut feeling about computational linear
algebra could be wrong :D.

This is the main problem which stops me from further experimenting. Does
anyone have experience in this regard? What performance in solving such
systems is attainable?
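For reference, here is the kind of toy benchmark I have in mind (just a
sketch; the random system below is made up and only roughly matches the
shape of the systems ratsimpmodprime produces):

import random, time
from sympy import symbols, Rational, solve

n = 20
xs = symbols('x0:%d' % n)
random.seed(0)
# n homogeneous equations in n unknowns with small rational coefficients
eqs = [sum(Rational(random.randint(-5, 5), random.randint(1, 5))*v for v in xs)
       for _ in range(n)]

t = time.time()
sol = solve(eqs, list(xs))
print('solved in %.2fs' % (time.time() - t))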

Another problem I had expected is that Groebner basis computations get
slow quickly, but I did not hit that because of the above.


Theory
------

ratsimpmodprime basically solves the following problem: let k be a
field, R = k[x_1, ..., x_n], and I a prime ideal of R. Let
p(x) = p(x_1, .., x_n), q(x) = q(x_1, ..., x_n) be polynomials with q(x)
not in I. Then p(x)/q(x) represents an element of the field of fractions
of R/I. Find polynomials c(x), d(x) of minimal total degree such that
c(x)/d(x) represents the same element as p(x)/q(x).

This is quite helpful for trigonometric simplification. For example,
taking only the variables c and s, with I generated by the single
relation c**2 + s**2 - 1 ("cos(x)**2 + sin(x)**2 = 1"), allows this
algorithm to reduce (-sin(x) + 1)/cos(x) + cos(x)/(-sin(x) + 1) to
1/cos(x) -- something asked on the mailing list earlier today.
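To make that concrete, this is roughly what the substitution looks like
when done by hand (a sketch only; s and c are stand-in symbols, and I am
assuming the ratsimpmodprime(expr, G, *gens, order=...) call signature):

from sympy import *

x = symbols('x')
s, c = symbols('s c')                        # stand-ins for sin(x), cos(x)
expr = (-sin(x) + 1)/cos(x) + cos(x)/(-sin(x) + 1)
expr_sc = expr.subs({sin(x): s, cos(x): c})
I = [s**2 + c**2 - 1]                        # the ideal of relations
res = ratsimpmodprime(expr_sc, I, s, c, order='lex')
print(res.subs({s: sin(x), c: cos(x)}))      # the simplified result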

We can extend the procedure considerably (ignoring practical problems
:D). We start with the following lemma:

Let k be a field and R_1, R_2 geometrically integral k-algebras (that
is, R_i are rings containing k, and R_i (x) k^alg is an integral domain,
where (x) denotes tensor product and k^alg is the algebraic closure of
k). Then R_1 (x) R_2 is an integral domain.

In particular, if we have two sets of variables x_1, .., x_n and y_1,
.., y_n, together with ideals I_1 and I_2 (I_1 only involving the x-s,
I_2 only the y-s) such that k[x]/I_1 and k[y]/I_2 are geometrically
integral, then the ideal I = I_1 + I_2 of k[x, y] is prime. I.e. under
reasonable conditions we can "amalgamate" relations on different
variables.
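Here is a quick sanity check of this amalgamation (again just a sketch,
with sx, cx, sy, cy as stand-in symbols for sin(x), cos(x), sin(y),
cos(y)):

from sympy import *

sx, cx, sy, cy = symbols('sx cx sy cy')
I1 = [sx**2 + cx**2 - 1]
I2 = [sy**2 + cy**2 - 1]
I = I1 + I2                    # prime by the tensor product lemma
# sin(x + y)**2 + cos(x + y)**2, written out in products:
expr = (sx*cy + cx*sy)**2 + (cx*cy - sx*sy)**2
print(ratsimpmodprime(expr, I, sx, cx, sy, cy, order='lex'))   # should be 1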

So suppose we have an expression involving only sin(x) and cos(x). Then
we just work with generators sin(x), cos(x) and the ideal generated by
sin(x)**2 + cos(x)**2 - 1. (Btw this polynomial is irreducible over
k^alg so the quotient is geometrically integral.)

Now suppose the expression also involves sin(2*x) - or perhaps we think
a simpler expression would involve that. Note that sin(2*x) =
2*sin(x)*cos(x). Hence if we take the ideal I'=[sin(2*x) -
2*sin(x)*cos(x), sin(x)**2 + cos(x)**2 - 1], then k[sin(x), cos(x),
sin(2*x)]/I' is isomorphic to the ring in the previous paragraph (in
particular, also geometrically integral).

In this way, we can "introduce" sin and cos of arbitrary fixed integral
multiples of x to ratsimpmodprime, and it will automatically minimize
the total degree in all variables. Note how conveniently the degree of
sin(2*x) is less than that of sin(x)*cos(x)...
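Concretely, the sketch below (s, c, s2 standing in for sin(x), cos(x),
sin(2*x)) is what I mean:

from sympy import *

s, c, s2 = symbols('s c s2')
I = [s2 - 2*s*c, s**2 + c**2 - 1]
G = groebner(I, s, c, s2, order='lex')
# sin(x)*cos(x) has total degree 2, sin(2*x)/2 only degree 1, so the
# minimal-degree representative should pick up s2:
print(ratsimpmodprime(s*c, list(G), s, c, s2, order='lex'))    # should be s2/2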

By the tensor product lemma above, we can just add generators for e.g.
sin(y) as well, and in this way treat as many variables as we like. Also
"ordinary" variables (a free variable x, or perhaps gamma(y), or
whatever) can all just be taken as generators of the ring and will not
influence ratsimpmodprime (except for making it slower...).
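For example (a sketch; a is just an unrelated symbol thrown in as an
extra generator):

from sympy import *

a, s, c = symbols('a s c')       # s, c standing in for sin(x), cos(x)
I = [s**2 + c**2 - 1]
# a rides along untouched; (1 - s**2)/c still collapses to c:
print(ratsimpmodprime(a*(1 - s**2)/c, I, a, s, c, order='lex'))   # should be a*c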

So mytrigsimp basically does this: it takes the expression and converts
it into a fraction of polynomials. It examines the generators, and
groups the trigonometric terms in such a way that if x is a rational
multiple of y, sin(x) and sin(y) [and cos(x) and cos(y)] end up in the
same group. It then figures out the gcd of the prefactors, introduces a
new variable z, and builds all the relations described above.

[For example, if the expression is sin(x**2)+sin(2*x)+cos(x/3), then the
groups are [sin(x**2)] and [sin(2*x), cos(x/3)], and in the second case
the new variable z is just x/3, and we build relations for sin(z),
cos(z), sin(2*z), cos(2*z), ... up to at least sin(6*z), cos(6*z).]
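The grouping step looks roughly like this (sketch only; the variable
names are made up):

from sympy import *

x = symbols('x')
expr = sin(x**2) + sin(2*x) + cos(x/3)
groups = {}
for g in expr.atoms(sin, cos):
    coeff, rest = g.args[0].as_coeff_mul()   # e.g. 2*x -> (2, (x,))
    groups.setdefault(rest, []).append(coeff)
print(groups)   # roughly {(x**2,): [1], (x,): [2, 1/3]}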

Note that we are quite flexible in what additional generators to
introduce; my original idea was to use, for starters, all multiples of z
up to twice what we need - but that got shot down by the linear algebra
problems.


Problems in Theory
------------------

I'm not sure how to incorporate e.g. tan into the picture.


The Code
--------

This fixes a bug (I think ...) and adds a debug statement.

diff --git a/sympy/simplify/simplify.py b/sympy/simplify/simplify.py
index 336a098..954ae6c 100644
--- a/sympy/simplify/simplify.py
+++ b/sympy/simplify/simplify.py
@@ -854,6 +854,7 @@ def _ratsimpmodprime(a, b, N=0, D=0):
r = reduced(a * d_hat - b * c_hat, G, opt.gens, order=opt.order,
polys=True)[1]

S = r.coeffs()
+ print len(S), len(Cs + Ds)
sol = solve(S, Cs + Ds)

# If nontrivial solutions exist, solve will give them
@@ -863,7 +864,7 @@ def _ratsimpmodprime(a, b, N=0, D=0):
for key in sol.keys():
sol[key] = sol[key].subs(dict(zip(Cs + Ds, [1] * (len(Cs) + len(Ds)))))

- if sol and not all([s == 0 for s in sol.itervalues()]):
+ if sol and not all((s == 0 or var in Cs) for var, s in sol.iteritems()):
c = c_hat.subs(sol)
d = d_hat.subs(sol)


This is my test file:
from sympy import *

def build_ideal(x, n):
    # Relations among sin(k*x), cos(k*x) for k = 1, ..., n: the
    # Pythagorean identity plus the multiple-angle expansions.
    I = [cos(x)**2 + sin(x)**2 - 1]
    y = Dummy('y')
    for k in range(2, n + 1):
        cn = cos(k*y).expand(trig=True).subs(y, x)
        sn = sin(k*y).expand(trig=True).subs(y, x)
        I.append(cos(k*x) - cn)
        I.append(sin(k*x) - sn)
    return I

def analyse_gens(gens):
    # Group the trig generators by the non-coefficient part of their
    # argument, then build one ideal per group.
    trigterms = [g.args[0].as_coeff_mul() for g in gens
                 if g.func in [sin, cos]]
    trigdict = {}
    for val, key in trigterms:
        trigdict.setdefault(key, []).append(val)
    res = []
    for key, val in trigdict.iteritems():
        gcd = reduce(igcd, val)
        n = max(x / gcd for x in val)  # * 2
        res.extend(build_ideal(gcd*Mul(*key), n))
    return res

def mytrigsimp(expr, **opts):
    num, denom = cancel(expr).as_numer_denom()
    polys, opt = parallel_poly_from_expr([num, denom], **opts)
    G = groebner(analyse_gens(opt.gens))
    return ratsimpmodprime(expr, list(G), **opts)


Best,
Tom


