On Thu, Apr 19, 2012 at 1:24 AM, Tom Bachmann <e_mc...@web.de> wrote:
> On 18.04.2012 23:25, Aaron Meurer wrote:
>>>
>>> Opts is mostly passed through to polys. You probably shouldn't use the
>>> gens=.. option. It is probably a good idea to pass order=grlex or
>>> order=grevlex -- for reasons that are not clear to me the default order in
>>> polys is lex. In any case, unless you pass a graded order (i.e. grlex or
>>> grevlex), the result will *not* be of minimal total degree, in the naive
>>> sense.
>>
>>
>> I guess just because lex is the default for printing.
>>
>
> Maybe we should consider changing that (but it's not high priority I
> suppose). It is fairly well-known that lex is "typically" the slowest order
> for groebner basis computations.
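As a quick illustration (a toy ideal of my own, not from the patch) of how much the order matters for the shape of the basis:

```python
from sympy import symbols, groebner

x, y = symbols('x y')
F = [x**2 + y, x*y - 1]

# Same ideal, two monomial orders.  A graded order (grlex/grevlex) is
# what makes "reduce to minimal total degree" meaningful; lex instead
# tries to eliminate variables.
G_lex = groebner(F, x, y, order='lex')
G_grevlex = groebner(F, x, y, order='grevlex')

print(G_lex.exprs)
print(G_grevlex.exprs)
```

Membership tests agree either way, of course; only the basis itself (and the cost of computing it) changes.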
>
>
>>> Ratsimpmodprime simplifies f/g mod I, essentially by writing out a general
>>> fraction p/q of degree n in the generators (say (a + b*x + c*y)/(e + f*x +
>>> g*y) for n=1, generators x, y), and then checking if there exists a choice of
>>> coefficients such that f*q - p*g is in I. What the original code did was to
>>> change the ground field from QQ to QQ(a, b, c, e, f, g) and then reduce
>>> f*q - p*g modulo the groebner basis (one may easily see that changing the
>>> ground field in this way does indeed preserve groebner bases). This will
>>> result in an expression linear in the new coefficients, and finding a
>>> solution is a matter of linear algebra. The problem is that computations
>>> over this field of fractions are slow.
>>>
>>> My new method adds a, b, c, e, f, g to the generators of our polynomial
>>> ring, that is we now work in k[x, y, a, .., g] instead of k[x, y]. Again
>>> groebner bases are preserved under this operation, and I believe a (the)
>>> "completely reduced normal form" has exactly the properties we want, as
>>> outlined above.
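If I follow the method correctly, a stripped-down run of it for the ideal <s**2 + c**2 - 1> with a degree-1 ansatz looks roughly like this (s, c stand for sin(x), cos(x); the coefficient names a, b, e, f0 are mine):

```python
from sympy import symbols, groebner, Poly, solve

s, c = symbols('s c')              # stand-ins for sin(x), cos(x)
a, b, e, f0 = symbols('a b e f0')  # ansatz coefficients (names are mine)

# Work in k[s, c, a, b, e, f0]; the groebner basis of the relation
# ideal is unchanged by the extra variables.
G = groebner([s**2 + c**2 - 1], s, c, a, b, e, f0, order='grevlex')

f, g = s, 1 - c**2        # f/g = s/s**2, i.e. 1/s modulo the ideal
p, q = a + b*s, e + f0*s  # general degree-1 candidate p/q

_, r = G.reduce(f*q - p*g)    # completely reduced normal form
eqs = Poly(r, s, c).coeffs()  # each (s, c)-coefficient must vanish
sol = solve(eqs, [a, b, e])   # linear algebra; f0 stays free (scaling)
```

Here sol comes out as a = f0, b = 0, e = 0, i.e. p/q = f0/(f0*s) = 1/s, which is the simplification one hopes for.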
>>
>>
>> I'm still trying to wrap my head around this stuff, so excuse me if
>> this doesn't make sense, but doesn't this change the ordering, and the
>> total degree of each term?
>>
>
> It absolutely changes the ring (as does the previous method), the idea is
> that everything works nonetheless :D. The first thing to check is that a
> Groebner basis over the old ring is still one over the new ring. That's
> actually quite obvious. The second thing to check is that reduced normal
> form over the new ring has the properties you want.
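The first claim is easy to spot-check in sympy with a toy ideal of mine (the reduced basis is unique once the order is fixed, and grevlex on the big ring restricts to grevlex on the old one):

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')
F = [x**2 + y**2 - 1, x*y - 1]

# Reduced groebner basis over k[x, y] ...
G_old = groebner(F, x, y, order='grevlex')
# ... and over k[x, y, t], with an unused extra variable.
G_new = groebner(F, x, y, t, order='grevlex')
```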
>
>
>>>
>>>
>>> Discussion and Future
>>> ---------------------
>>>
>>> It seems to me that this method is quite powerful. I believe the most
>>> important thing now that this proof of concept code exists is to investigate
>>> what it can and cannot do. So please test it thoroughly and report all
>>> surprises :-). I believe we can get a fairly good trigsimp along the
>>> following lines:
>>>
>>> 1. Replace all trig expressions by sin and cos equivalents.
>>> 2. Run mytrigsimp, probably some clever improved version.
>>> 3. Run some clever heuristics to get tan and similar back.
>>> 4. Run the old trigsimp.
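For step 1, even a plain substitution pass would do as a first cut (a sketch, assuming for simplicity a single fixed argument x):

```python
from sympy import symbols, sin, cos, tan, cot, sec, csc

x = symbols('x')

def to_sin_cos(expr):
    # Step 1 sketch: rewrite tan/cot/sec/csc at the fixed argument x
    # into sin and cos, so the polynomial ideal only needs the
    # sin/cos relation.
    return expr.subs({
        tan(x): sin(x)/cos(x),
        cot(x): cos(x)/sin(x),
        sec(x): 1/cos(x),
        csc(x): 1/sin(x),
    })

expr_sc = to_sin_cos(tan(x) + cot(x))
```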
>>
>>
>> Why do we still need the old trigsimp?
>>
>
> Being the pessimistic guy I am ;).
>
>
>>> expr = (4*sin(x)*cos(x) + 12*sin(x) + 5*cos(x)**3 + 21*cos(x)**2 +
>>> 23*cos(x) + 15)/(-sin(x)*cos(x)**2 + 2*sin(x)*cos(x) + 15*sin(x) +
>>> 7*cos(x)**3 + 31*cos(x)**2 + 37*cos(x) + 21)
>>
>>
>> I tried this for n=2 and got the same thing as for n=1.  For n=3 I got
>> AttributeError: 'NoneType' object has no attribute 'iteritems' (coming
>> from line 231).
>>
>
> The idea is that you should get the same thing with every n, that's what
> findsol is for. I cannot reproduce the bug but the change in your patch
> makes sense.
>
>
>>> Regarding (2), I don't see much to do about that. We could try to
>>> precompute groebner bases, or at least cache them intelligently. This would
>>> probably speed up common cases. As the expression gets larger (in particular
>>> higher degree or more variables), the reduction computations also quickly
>>> become slow.
>>
>>
>> Also we could add heuristics, to make things faster in the common case
>> where it would otherwise be slow.  Something like the Fu et al.
>> algorithm could be used as a fast preprocessor, for example.
>>
>> What slows it down more, adding multiple independent generators (like
>> sin(x) and sin(y)), or multiple dependent generators (like sin(2*x),
>> ..., sin(10*x))?
>>
>
> I'll investigate.
>
>
>>>
>>> Regarding (1), there are actually some possibilities. ratsimpmodprime is
>>> quite general, so various simplification strategies are possible. The
>>> problem is that we have to produce a prime ideal of polynomial relations,
>>> which is not quite trivial...
>>
>>
>> Is the problem that the equations have to be independent of one another?
>>
>
> The ideal you generate has to be prime :). In particular, if we want to say
> add tan(x), we would like to prove that the ideal
> <c**2 + s**2 - 1, c*t - s> of CC[s, t, c] is a prime ideal. This sort of
> problem can be non-trivial (In fact it's not at all obvious how to proceed
> here, at least to me. Algebraic geometry probably suggests some clever
> methods for thinking about this. The half-angle formulae involving tan come
> to mind...). If I interpret Macaulay2 right, then this ideal is prime over
> QQ, which means it is almost certainly prime over CC (and a rigorous
> argument can probably be made).
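Primality itself is the hard part, but sympy can at least verify consequences of that ideal mechanically (s, c, t standing for sin, cos, tan):

```python
from sympy import symbols, groebner

s, c, t = symbols('s c t')
G = groebner([s**2 + c**2 - 1, c*t - s], s, c, t, order='grevlex')

# tan(x)**2 * cos(x)**2 == sin(x)**2 is a consequence of the
# generators, while tan(x) == sin(x) is (correctly) not:
assert G.contains(t**2 * c**2 - s**2)
assert not G.contains(t - s)
```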

My algebra is still too weak to be of much help here.  My intuition
tells me that as long as all the identities we add are true, they
should generate a prime ideal, because their mutual consistency will
prevent any bad things from happening.  But that's a long way from any
kind of rigorous argument :)

>
> In any case it does not really matter (for correctness) if the ideal is
> *not* prime. The worst thing that happens is that ratsimpmodprime proves it
> is not prime and raises an exception ^^.
>
>
>> I tried adding tan(x)*cos(x) - sin(x) to the ideal and (after fixing
>> the above bug), I got a lot of cool results, like
>>
>
> Nice :-). Adding the following kinds of generators is definitely safe:
>
> (new generator) - (arbitrary poly in old gens)
> (new generator)*(arbitrary expression in old gens) - 1.

Awesome.  This kind of algorithm is extremely general, then.  We can
clearly add all the trig functions (including sec, csc, and cot), and
it will automatically find the "best" representation among them
(lowest degree).  Furthermore, we can also generalize it to hyperbolic
trig functions, and indeed, simplification of other kinds of
transcendental functions as well.
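A quick check in that spirit, with cot added alongside tan (s, c, t, u stand for sin, cos, tan, cot; the relations t*c - s and u*s - c are of the "safe" flavour Tom describes):

```python
from sympy import symbols, groebner

s, c, t, u = symbols('s c t u')  # sin, cos, tan, cot stand-ins

G = groebner([s**2 + c**2 - 1, t*c - s, u*s - c], s, c, t, u,
             order='grevlex')

# tan*cot == 1 falls out as a member of the ideal:
assert G.contains(t*u - 1)
```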

So one thing that we might want to work on is improving rewriting and
pattern matching so that we can easily do this.  rewrite() is
currently limited in the kinds of rules it can accept.  For example,
there are several ways to rewrite tan(x) in terms of sin.  Some will
just use sin(x), some will also include cos(x).  Depending on what you
want to do, some of these may be more useful than others.
Furthermore, it would be perfectly legitimate under the current system
for tan(x).rewrite(sin) to return any of them.  We need to think of a
better framework for this.

I'm sure we'll also start to run into limitations of pattern matching
if we do this seriously.

Aaron Meurer

>
>
>> >>> mytrigsimp(sin(x).rewrite(tan).rewrite(cos), n=2)
>> sin(x)
>> >>> mytrigsimp(sin(x)/cos(x))
>> tan(x)
>> >>> mytrigsimp(cos(x)/sin(x))
>> 1/tan(x)
>> >>> mytrigsimp((cos(x)*sin(x) - cos(x))/sin(x))
>> (sin(x) + 1)/tan(x)
>>
>> I tried adding tan double angle identities to the generators list by
>> adding tan(k*x).rewrite(cos).expand(trig=True), but I couldn't get
>> just sin(x).rewrite(tan) to simplify (though I could get some other
>> stuff to work, like
>> mytrigsimp(tan(2*x).rewrite(cos).expand(trig=True), n=2)).
>>
>
> I'll try to see how far we can get it.
>
>
>> I'm curious if we can extend the algorithm to work with sum and
>> difference formula (like sin(x + y) = ...).
>>
>
> Add sin(x)*cos(y) + cos(x)*sin(y) - sin(x + y) to the ideal (admittedly that
> isn't possible the way it is currently generated) and it should work (note
> this is of the "safe" form above). The problem is that sin(x + 2*y) is not
> going to work automagically... you see the pattern.

Exactly.  This is the pattern matching issue. It's non-trivial because
we can write that as sin(x + 2*y) or sin((x + y) + y) or sin(x + y + y)
(with a triple sum formula), and we really have to use the rest of the
expression to figure out which to use.
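The sum-formula generator itself is at least easy to sanity-check:

```python
from sympy import symbols, sin, cos, expand_trig

x, y = symbols('x y')

# The proposed generator sin(x)*cos(y) + cos(x)*sin(y) - sin(x + y)
# really is an identity:
assert expand_trig(sin(x + y)) == sin(x)*cos(y) + cos(x)*sin(y)
```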

>
>
>>>
>>>
>>> Anyhow, I have spent much longer than I intended already on writing this
>>> mail. Let me know what you think!
>>
>>
>> I think it's awesome!
>>
>
> I'll see if I can get it polished :).

Even what you just have now is better than trigsimp, so I think you
should iron out the issues (like that bug) and put it in trigsimp, and
then we can work from there.

Aaron Meurer

>
>
>>>
>>>
>>>
>>> Best,
>>> Tom
>>
>>
>> By the way, why is mytrigsimp in a separate file?  Why not just put it
>> in simplify.py (or create a new trigsimp.py)?
>>
>
> No reason. Just feels better to do some hackety-testing in a new file.
>
>> Aaron Meurer
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to 
sympy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.
