It seems I had somehow missed this conversation on the issues page. It is
definitely true that this transformation adds many auxiliary variables (one
for each non-atomic subexpression). However, what I am working on is
optimizing this encoding to the point where the encoding time plus the
satisfiability-checking time is reduced overall.
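
To make this concrete, here is roughly the kind of definitional
(Tseitin-style) transformation I am talking about. The function and
variable names are only for illustration, not the actual sympy internals:

from itertools import count
from sympy import And, Or, Symbol, symbols, Equivalent

def definitional_form(expr):
    """Name every non-atomic subexpression with a fresh auxiliary
    variable; return (top_literal, list_of_definitions)."""
    fresh = count(1)
    defs = []

    def name(e):
        if e.is_Atom:
            return e                      # atoms need no definition
        args = [name(a) for a in e.args]  # name the children first
        aux = Symbol('_d%d' % next(fresh))
        defs.append(Equivalent(aux, e.func(*args)))
        return aux

    return name(expr), defs

x, y, z = symbols('x y z')
top, defs = definitional_form(Or(And(x, y), And(y, z)))
# defs holds one Equivalent per non-atomic subexpression (the two
# And's and the Or); asserting `top` together with the clausal form
# of every definition is equisatisfiable with the original formula.
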
Firstly, as observed on the issues thread, handling duplicate
subexpressions (so that each distinct subexpression is defined only once)
optimizes the process to some extent; a sketch of that follows.
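
The optimization is essentially to memoize the auxiliary variable per
distinct subexpression (again, just an illustrative sketch):

from itertools import count
from sympy import And, Or, Symbol, symbols, Equivalent

def definitional_form_shared(expr):
    """Like the sketch above, but each distinct subexpression is
    defined (and given an auxiliary variable) only once."""
    fresh = count(1)
    defs, seen = [], {}               # seen: subexpression -> its aux var

    def name(e):
        if e.is_Atom:
            return e
        rebuilt = e.func(*[name(a) for a in e.args])
        if rebuilt not in seen:       # first time we meet this subexpression
            aux = Symbol('_d%d' % next(fresh))
            seen[rebuilt] = aux
            defs.append(Equivalent(aux, rebuilt))
        return seen[rebuilt]

    return name(expr), defs

x, y, z, w = symbols('x y z w')
expr = Or(And(x, y), And(Or(And(x, y), z), w))
top, defs = definitional_form_shared(expr)
# (x & y) occurs twice in expr but is defined only once,
# so len(defs) is 4 instead of 5.
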
Additionally, I worked out a way (I do not know whether it is an obvious
generalization or something new) in which And(a, b, c, ...) can be treated
as a single operator, although the number of clauses this encoding produces
still grows linearly with the number of arguments. This is what makes the
method much faster in cases where the average number of arguments to And/Or
is around 3 or more.
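
Concretely, the n-ary definition of an auxiliary variable needs only n + 1
clauses and a single fresh variable, instead of chaining binary definitions
(roughly 3*(n - 1) clauses and n - 2 extra variables). A sketch, with
made-up helper names:

from sympy import Or, Not, Symbol, symbols

def define_nary_and(aux, args):
    """Clauses for aux <-> And(*args): n + 1 clauses for n arguments."""
    clauses = [Or(Not(aux), a) for a in args]          # aux implies each a_i
    clauses.append(Or(aux, *[Not(a) for a in args]))   # all a_i together imply aux
    return clauses

def define_nary_or(aux, args):
    """Clauses for aux <-> Or(*args): also n + 1 clauses."""
    clauses = [Or(aux, Not(a)) for a in args]          # each a_i implies aux
    clauses.append(Or(Not(aux), *args))                # aux implies some a_i
    return clauses

a, b, c = symbols('a b c')
aux = Symbol('_d1')
clauses = define_nary_and(aux, [a, b, c])
# 4 clauses for 3 arguments:
# (~_d1 | a), (~_d1 | b), (~_d1 | c), (_d1 | ~a | ~b | ~c)
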
I find the point made on the issues page quite interesting, that the
encoding contains a lot of redundancy; I will look into optimizing from
that angle as well. There is yet another polarity-based optimization that I
have yet to look into; my current, rough understanding of it is sketched
below.
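
From what I understand so far (I still need to check the details), this is
the Plaisted-Greenbaum idea of keeping only one direction of each
definition, depending on whether the subexpression occurs positively or
negatively. Something like:

from sympy import Or, Not, Symbol, symbols

def define_and_with_polarity(aux, args, polarity):
    """One-sided definition of aux <-> And(*args).
    polarity: +1 if the subexpression occurs only positively,
    -1 if only negatively, 0 if both."""
    clauses = []
    if polarity >= 0:                                   # need aux -> And(args)
        clauses += [Or(Not(aux), a) for a in args]
    if polarity <= 0:                                   # need And(args) -> aux
        clauses.append(Or(aux, *[Not(a) for a in args]))
    return clauses

a, b, c = symbols('a b c')
aux = Symbol('_d1')
# A purely positive occurrence needs 3 clauses instead of 4.
clauses = define_and_with_polarity(aux, [a, b, c], +1)
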
I think we will have to treat encoding plus solving as the unit of
comparison, i.e. ask whether the entire process becomes faster or slower.
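
For example, a rough timing harness along these lines (using the existing
to_cnf and satisfiable just as stand-ins for whichever encoding is being
tested):

import time
from sympy import symbols, to_cnf
from sympy.logic.inference import satisfiable

def total_time(expr):
    start = time.perf_counter()
    cnf = to_cnf(expr)                 # encoding step
    mid = time.perf_counter()
    satisfiable(cnf)                   # solving step
    end = time.perf_counter()
    return mid - start, end - mid      # (encoding time, solving time)

x, y, z, w = symbols('x y z w')
expr = ((x & y) | (y & z)) >> (z | w)
enc, solve = total_time(expr)
print("encoding: %.6fs, solving: %.6fs, total: %.6fs" % (enc, solve, enc + solve))
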
