On Mon, 1 Nov 2021 at 17:17, Andreas Schuldei <andr...@schuldei.org> wrote:
> Oscar schrieb am Sonntag, 31. Oktober 2021 um 23:50:57 UTC+1:
>>
>> On Sun, 31 Oct 2021 at 20:10, Andreas Schuldei <and...@schuldei.org> wrote:
>> >
>> > Thank you very much, Oscar. That is excellent feedback.
>> >
>> > You are right. The solver didn't find a solution after about 18h of 
>> > running. For the solver to work, I had broken up my vector equation into 
>> > vector component equations, and while doing that, I chose slightly 
>> > different variables to solve for, too. The equations were still monstrous, 
>> > though, and it is little wonder that the solver found no solution. I will 
>> > give your suggestion to use quaternions a try once I understand what they 
>> > are. :-)
>>
>> They might not help but I have sometimes found that problems involving
>> fully arbitrary 3D rotations can be handled more simply with
>> quaternions.
>>
>> > I don't care if the solution I get is too complicated to look at, as long 
>> > as I can implement it and feed it with my known values. I need it to spit 
>> > out the one result I want - that vector CM0 (or CM1, CM2, or CM3 - in real 
>> > life, those will be roughly similar), but even the value c02, c12, c22, or 
>> > c32 would be equally good to know.
>>
>> It might be possible to solve a subset of your equations for a subset
>> of your unknowns to get a partial analytic solution.
>>
>> > I still refuse to throw a numerical search algorithm at this problem, even 
>> > though that most likely would work, because I hope to better understand 
>> > the problem (and solution) by solving it symbolically.
>>
>> If your ultimate goal is to feed in values then why is it better to do
>> that after calling solve rather than before? You say that you want to
>> better understand the problem and solution but I think that if you got
>> a solution in general symbols here then it would be so monstrously
>> complicated that it wouldn't help to understand anything.
>>
>> Analytic solutions are nice when they are possible and not overly
>> complicated. Numerics are more widely used than symbolics precisely
>> because many problems do not have useful analytic solutions but can be
>> handled numerically.
>
> The technical advantage of a symbolic solution is shorter, constant execution 
> time compared to a numeric search. If all goes well, this algorithm will be 
> used on embedded systems, perhaps under CPU constraints.

It is not always the case that using a symbolic solution gives shorter
execution time. In some cases the symbolic solution explodes in
complexity to the point that just substituting the values into the
solution is slower than solving the original system numerically.

Here's a demonstration:

In [1]: a, b, c, d, e, x = symbols('a:e, x')

In [2]: p = a*x**4 + b*x**3 + c*x**2 + d*x + e

In [3]: values = {a:1,b:-2,c:3,d:-4,e:-5}

In [4]: r1, r2, r3, r4 = roots(p, x)

In [5]: %time result = r1.evalf(subs=values)  # using symbolic solution for one root
CPU times: user 296 ms, sys: 0 ns, total: 296 ms
Wall time: 297 ms

In [6]: %time results = nroots(p.subs(values))  # solve numerically for all roots
CPU times: user 32 ms, sys: 0 ns, total: 32 ms
Wall time: 34.4 ms

If you print out r1 then you can see what the expression looks like
(it's the quartic formula). The formula is so large that we can
actually solve for the roots numerically more efficiently than by
evaluating the formula.

Of course both approaches here are quite slow because they are
implemented in Python and are not that well optimised. Also, evalf and
nroots are not a fair comparison to standard numerical routines
because they both use multiprecision arithmetic for highly accurate
results. We can try using lambdify to get a more representative
timing for pure floating-point evaluation of this result:

In [9]: f = lambdify((a,b,c,d,e), r1, 'numpy')

In [10]: %time result = f(1,-2,3,-4,-5)
<string>:2: RuntimeWarning: invalid value encountered in sqrt
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 823 µs

In [11]: result
Out[11]: array(nan+0.j)

Although this is a lot faster it demonstrates another problem: it
just returns nan because the expression for the root cannot be
evaluated without complex numbers even though the root itself is real.
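The nan comes from numpy's real-valued sqrt: given a negative real
argument it returns nan (with a RuntimeWarning), whereas given a
complex argument it follows the complex branch. A minimal
illustration, independent of the quartic example:

```python
import numpy as np

# numpy's sqrt dispatches on the input dtype: a negative real
# gives nan (the RuntimeWarning is suppressed here via errstate)...
with np.errstate(invalid='ignore'):
    real_branch = np.sqrt(np.float64(-4.0))   # nan

# ...while a complex input takes the complex branch
complex_branch = np.sqrt(complex(-4.0))       # 2j

print(np.isnan(real_branch), complex_branch)
```

So one possible workaround is to call the lambdified formula with
complex inputs (e.g. f(1+0j, -2, 3, -4, -5)) and take the real part
at the end, at the cost of doing complex arithmetic throughout.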

By contrast the numpy numerical roots function gives reasonably
accurate results very quickly:

In [12]: %time results = np.roots([1,-2,3,-4,-5])
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 663 µs

I think for your problem I would probably use sympy to:

1. Find a partial solution for some of the simpler equations.
2. Derive and simplify/optimise expressions for the equations and
their Jacobian (or perhaps its inverse).
3. Use code generation to implement these things in the target language.
4. Run a numerical solver on the embedded system.
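As a toy sketch of steps 2 and 3 (the system F and the variable
names here are made up for illustration, standing in for your real
equations):

```python
from sympy import symbols, Matrix
from sympy.utilities.codegen import codegen

x, y = symbols('x y')

# stand-in 2x2 nonlinear system (placeholder for the real equations)
F = Matrix([x**2 + y**2 - 1, x - y**3])
J = F.jacobian([x, y])  # symbolic Jacobian

# generate C99 source for the Jacobian, ready to compile for the target
[(c_name, c_code), (h_name, h_code)] = codegen(('jac', J), 'C99')
print(c_name)  # jac.c
```

The generated routine takes the unknowns as plain doubles and fills
an output array, so it can be dropped into the embedded solver loop.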

Another possibility is that you could use sympy to get an approximate
solution. Being able to initialise a numerical routine with a good
guess makes it more robust, but it also means that in many
applications only a single Newton iteration is needed to reach the
target accuracy, resulting in a constant-time algorithm.
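For instance, on a scalar stand-in problem (the real system would use
the generated Jacobian instead of a scalar derivative), one Newton
step from a good guess already lands very close to the root:

```python
from sympy import symbols, diff, lambdify

x = symbols('x')
f = x**3 - 2                 # stand-in equation; the root is 2**(1/3)
fn = lambdify(x, f)
dfn = lambdify(x, diff(f, x))

x0 = 1.26                    # good guess, e.g. from an approximate symbolic solution
x1 = x0 - fn(x0) / dfn(x0)   # one Newton iteration: fixed, predictable cost

print(abs(x1 - 2**(1/3)))    # error is tiny after a single step
```

Newton's method converges quadratically, so a guess with a few
correct digits typically yields machine-level accuracy in one or two
fixed-cost steps.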

--
Oscar
