On 23.10.2014 at 20:23, James Crist wrote:
> However, this isn't the easiest thing to do in Python. The best
> *composable* option
What's "composable" in this context?
> is to use signal.alarm, which is only available on *NIX
> systems. This can also cause problems with threaded applications.
That would not affect SymPy itself, as it is not multithreaded.
(This, in turn, has its reason in the default Python implementation
having weak-to-nonexistent support for multithreading.)
> Checks for windows
It should work without a problem under Windows; I see no mention that
signal.alarm() fails on any particular platform.
In fact the timeout code for the unit tests uses this, and AFAIK it
works well enough.
> or not running in the main thread could be added to handle this
> though, but would limit its use.
Actually you'll get a Python exception if you try to set a signal
handler anywhere except in the main thread. Or at least the Python docs
claim so; I haven't tried.
OTOH anybody who wants a timeout on SymPy (or any other piece of Python
code) can set up signal.alarm() themselves. There's simply no need for
SymPy itself to cater for this.
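For illustration, such a caller-side wrapper might look like this (a
minimal Unix-only sketch; `call_with_timeout` is a hypothetical name,
not anything SymPy provides):

```python
import signal

def call_with_timeout(func, seconds, *args, **kwargs):
    # Unix-only: arrange for SIGALRM to interrupt func() after `seconds`.
    def handler(signum, frame):
        raise TimeoutError("call timed out")

    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)
    try:
        return func(*args, **kwargs)
    finally:
        signal.alarm(0)                             # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore previous handler
```

Restoring the previous handler in the finally block is what makes this
slightly more composable than installing a handler and forgetting about it.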
> A second option would be to implement a "pseudo-timeout". This only works
> for functions that have many calls, but each call is guaranteed to complete
> in a reasonable amount of time (recursive, simple rules, e.g. `fu`). The
> timeout won't be exact, but should limit excessively long recursive
> functions to approximately the timeout. I wrote up a quick implementation
> of this here <https://gist.github.com/jcrist/c451f3bdd6d038521a12>. It
> requires some function boilerplate for each recursive call that *can't* be
> replaced with a decorator. However, it's only a few lines per function. I
> think this is the best option if we were to go about adding this.
It's quite intrusive.
It's also going to be broken with every new algorithm, because people
will (rightly) concentrate on getting it right first.
This means we'll always have a list of algorithms that do this kind of
cooperative multitasking less often than we'd like.
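For concreteness, the cooperative check that each participating
function would have to carry could be sketched like this (hypothetical
names throughout; James's gist is the actual proposal):

```python
import time

class PseudoTimeout(Exception):
    pass

_deadline = None  # module-level deadline; None means "no timeout armed"

def set_pseudo_timeout(seconds):
    # Arm the deadline; cooperating functions check it on every call.
    global _deadline
    _deadline = time.monotonic() + seconds

def check_timeout():
    # The per-function boilerplate: one call at the top of each
    # recursive function.  Precision depends on how often this runs.
    if _deadline is not None and time.monotonic() > _deadline:
        raise PseudoTimeout("pseudo-timeout exceeded")
```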
There's an alternative: sys.settrace(). I'm not sure what kind of overhead
is associated with that (depending on implementation specifics, it could
be quite big).
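A sketch of what a sys.settrace()-based timeout could look like
(hypothetical helper, not tested against SymPy itself):

```python
import sys
import time

class TraceTimeout(Exception):
    pass

def run_with_trace_timeout(func, seconds):
    # The global trace function fires on every 'call' event, so
    # long-running code gets checked without any cooperation from
    # the code being run -- at the price of tracing overhead.
    deadline = time.monotonic() + seconds

    def tracer(frame, event, arg):
        if time.monotonic() > deadline:
            raise TraceTimeout("timed out")
        return None  # no per-line tracing, call events only

    old_tracer = sys.gettrace()
    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(old_tracer)
```

Note that code which makes no function calls at all (a tight arithmetic
loop, say) would never trigger the check, so this has blind spots too.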
OT3H letting SymPy functions test for timeout on a regular basis isn't
going to come for free, either. People will always have to find the
right middle ground between checking too often (slowdown) or too rarely
(unresponsive).
So... my best advice (not necessarily THE best advice) would be to leave
this to people who call SymPy.
BTW here's the timeout code in sympy/utilities/runtests.py:
    def _timeout(self, function, timeout):
        def callback(x, y):
            signal.alarm(0)
            raise Skipped("Timeout")
        signal.signal(signal.SIGALRM, callback)
        signal.alarm(timeout)  # Set an alarm with a given timeout
        function()
        signal.alarm(0)  # Disable the alarm
It's noncomposable (it assumes exclusive use of SIGALRM), and it's
strictly limited to being run in the main thread. That said, it has been
working really well, and maybe the best approach would be to document it
and point people with timeout needs towards it - those who use Stackless
or whatever real multithreading options are out there will be able to
use a better timeout wrapper, so this should be the least intrusive way
to deal with timeout requirements.
just my 2c.
Jo
--
You received this message because you are subscribed to the Google Groups
"sympy" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/sympy/54495948.9050903%40durchholz.org.