Re: Trapping the segfault of a subprocess.Popen

2011-04-07 Thread Pierre GM
On Apr 7, 1:58 am, Nobody nob...@nowhere.com wrote:
 On Wed, 06 Apr 2011 02:20:22 -0700, Pierre GM wrote:
  I need to run a third-party binary from a python script and retrieve
  its output (and its error messages). I use something like
  process = subprocess.Popen(options, stdout=subprocess.PIPE,
  stderr=subprocess.PIPE)
  (info_out, info_err) = process.communicate()
  That works fine, except that the third-party binary in question doesn't
  behave very nicely and tends to segfault without returning any error. In
  that case, `process.communicate` hangs forever.

 Odd. The .communicate method should return once both stdout and stderr
 report EOF and the process is no longer running. Whether it terminates
 normally or on a signal makes no difference.

 The only thing I can think of which would cause the situation which you
 describe is if the child process spawns a child of its own before
 terminating. In that case, stdout/stderr won't report EOF until any
 processes which inherited them have terminated.
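The inherited-pipe behaviour is easy to demonstrate. The sketch below is POSIX-only (it assumes a /bin/sh is available) and uses modern Python 3 syntax, whereas the thread targets 2.5: the direct child exits immediately, but communicate() still blocks until the background grandchild lets go of the stdout pipe.

```python
import subprocess
import time

# The grandchild ("sleep 2 &") inherits the write end of the stdout
# pipe, so communicate() does not see EOF until it exits -- even
# though the direct child has long since terminated.
start = time.time()
proc = subprocess.Popen(["sh", "-c", "sleep 2 & echo parent exiting"],
                        stdout=subprocess.PIPE)
out, _ = proc.communicate()      # blocks ~2 s, not ~0 s
elapsed = time.time() - start
print(out, round(elapsed, 1))
```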

I think you nailed it. Running the incriminating command line in a
terminal doesn't return to the prompt. In fact, a ps shows that the
process is sleeping in the foreground. Guess I should change the
subject of this thread...



  I thought about calling a `threading.Timer` that would call
  `process.terminate` if `process.wait` doesn't return after a given
  time... But it's not really a solution: the process in question can
  sometimes take a long time to run, and I wouldn't want to kill a
  process still running.
  I also thought about polling every x seconds and stopping when the result
  of a subprocess.Popen(["ps", "-p", str(initialprocess.pid)],
  stdout=subprocess.PIPE) becomes only the header line, but my script
  needs to run on Windows as well (and no ps over there)...

 It should suffice to call .poll() on the process. In case that doesn't
 work, the usual alternative would be to send signal 0 to the process (this
 will fail with ESRCH if the process doesn't exist and do nothing
 otherwise), e.g.:

 import os
 import errno

 def exists(process):
     try:
         os.kill(process.pid, 0)
     except OSError, e:
         if e.errno == errno.ESRCH:
             return False
         raise
     return True

OK, gonna try that, thx.


 You might need to take a different approach for Windows, but the above is
 preferable to trying to parse ps output. Note that this only tells you
 if /some/ process exists with the given PID, not whether the original
 process exists; that information can only be obtained from the Popen
 object.
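For a check that also works on Windows, Popen.poll() alone should do the job; a minimal sketch (Python 3 syntax, though the method exists in 2.5 as well):

```python
import subprocess
import sys

# Cross-platform liveness check: Popen.poll() returns None while the
# child runs, and its exit status once it has terminated -- on Windows
# as well as POSIX, with no `ps` parsing and no signal 0 needed.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(1)"])
still_running = proc.poll()     # None: child has not exited yet
proc.wait()
finished = proc.poll()          # 0: child exited normally
print(still_running, finished)
```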

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Trapping the segfault of a subprocess.Popen

2011-04-07 Thread Pierre GM
On Apr 7, 5:12 am, Terry Reedy tjre...@udel.edu wrote:
 On 4/6/2011 7:58 PM, Nobody wrote:

  On Wed, 06 Apr 2011 02:20:22 -0700, Pierre GM wrote:

  I need to run a third-party binary from a python script and retrieve
  its output (and its error messages). I use something like
  process = subprocess.Popen(options, stdout=subprocess.PIPE,
  stderr=subprocess.PIPE)
  (info_out, info_err) = process.communicate()
  That works fine, except that the third-party binary in question doesn't
  behave very nicely and tends to segfault without returning any error. In
  that case, `process.communicate` hangs forever.

 I am not sure this will help you now, but
 Victor Stinner has added a new module to Python 3.3 that tries to catch
 segfaults and other fatal signals and produce a traceback before Python
 disappears.
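The module in question is faulthandler, which became part of the standard library in 3.3. A minimal sketch of switching it on (note it catches crashes of the Python process itself, not of an external binary):

```python
import faulthandler
import subprocess
import sys

# Enabling faulthandler makes the interpreter dump a Python traceback
# when it receives a fatal signal such as SIGSEGV, SIGFPE or SIGABRT.
faulthandler.enable()
print(faulthandler.is_enabled())        # True

# A child interpreter can opt in via -X faulthandler (or the
# PYTHONFAULTHANDLER environment variable).
proc = subprocess.run(
    [sys.executable, "-X", "faulthandler", "-c", "print('child ok')"],
    capture_output=True, text=True)
print(proc.stdout.strip())              # child ok
```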

Unfortunately, I'm limited to Python 2.5.x for this project. But good
to know, thanks...



Trapping the segfault of a subprocess.Popen

2011-04-06 Thread Pierre GM
All,

I need to run a third-party binary from a python script and retrieve
its output (and its error messages). I use something like
 process = subprocess.Popen(options, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
 (info_out, info_err) = process.communicate()
That works fine, except that the third-party binary in question
doesn't behave very nicely and tends to segfault without returning any
error. In that case, `process.communicate` hangs forever.

I thought about calling a `threading.Timer` that would call
`process.terminate` if `process.wait` doesn't return after a given
time... But it's not really a solution: the process in question can
sometimes take a long time to run, and I wouldn't want to kill a
process still running.
I also thought about polling every x seconds and stopping when the result
of a subprocess.Popen(["ps", "-p", str(initialprocess.pid)],
stdout=subprocess.PIPE) becomes only the header line, but my script
needs to run on Windows as well (and no ps over there)...
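For what it's worth, communicate() does return when the binary itself segfaults, and on POSIX the crash is visible as a negative return code. A sketch in Python 3 syntax, with a self-segfaulting child standing in for the third-party binary:

```python
import signal
import subprocess
import sys

# A child that sends itself SIGSEGV stands in for the misbehaving
# third-party binary (POSIX-only sketch).
crasher = "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"
proc = subprocess.Popen([sys.executable, "-c", crasher],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()

# On POSIX a child killed by a signal gets returncode -<signal number>,
# so a segfault shows up as -signal.SIGSEGV instead of hanging.
print(proc.returncode)
```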

Any suggestion welcome,
Thx in advance
P.


Re: Logging exceptions to a file

2009-05-07 Thread Pierre GM
On May 7, 5:32 am, Lie Ryan lie.1...@gmail.com wrote:
 Pierre GM wrote:
  All,
  I need to log messages to both the console and a given file. I use the
  following code (on Python 2.5)

  import logging
  #
  logging.basicConfig(level=logging.DEBUG,)
  logfile = logging.FileHandler('log.log')
  logfile.setLevel(level=logging.INFO)
  logging.getLogger('').addHandler(logfile)
  #
  mylogger = logging.getLogger('mylogger')
  #
  mylogger.info("an info message")

  So far so good, but I'd like to record (possibly unhandled) exceptions
  in the logfile.
  * Do I need to explicitly trap every single exception?
  * In that case, won't I get 2 log messages on the console (as
  illustrated in the code below)?
  try:
      1/0
  except ZeroDivisionError:
      mylogger.exception(":(")
      raise

  Any comments/idea welcomed
  Cheers.

 Although it is usually not recommended to use a catch-all except, this
 is the case where it might be useful. JUST DON'T FORGET TO RE-RAISE THE
 EXCEPTION.

 if __name__ == '__main__':
      try:
          main()
      except Exception, e:
          # log('Unhandled Exception', e)
          raise

OK for a simple script, but the (unhandled) exceptions need to be
caught at the module level. Any idea?
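One way to catch exceptions at module level without wrapping everything in try/except is to install a sys.excepthook; the hook receives every exception that would otherwise terminate the interpreter. A sketch in Python 3 syntax (the same mechanism exists on 2.5):

```python
import logging
import sys

logging.basicConfig(level=logging.DEBUG)
mylogger = logging.getLogger("mylogger")

def log_uncaught(exc_type, exc_value, exc_tb):
    # Record the full traceback via the logger, then defer to the
    # default hook so the usual crash output still reaches stderr.
    mylogger.critical("Unhandled exception",
                      exc_info=(exc_type, exc_value, exc_tb))
    sys.__excepthook__(exc_type, exc_value, exc_tb)

# Any exception escaping module level now goes through the logger
# (and through its FileHandler, if one is attached) before the crash.
sys.excepthook = log_uncaught
```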


Logging exceptions to a file

2009-05-06 Thread Pierre GM
All,
I need to log messages to both the console and a given file. I use the
following code (on Python 2.5)

 import logging
 #
 logging.basicConfig(level=logging.DEBUG,)
 logfile = logging.FileHandler('log.log')
 logfile.setLevel(level=logging.INFO)
 logging.getLogger('').addHandler(logfile)
 #
 mylogger = logging.getLogger('mylogger')
 #
 mylogger.info("an info message")

So far so good, but I'd like to record (possibly unhandled) exceptions
in the logfile.
* Do I need to explicitly trap every single exception?
* In that case, won't I get 2 log messages on the console (as
illustrated in the code below)?
 try:
     1/0
 except ZeroDivisionError:
     mylogger.exception(":(")
     raise

Any comments/idea welcomed
Cheers.
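To answer the second question directly: a single logger.exception() call emits once per handler, and per-handler levels keep the console and file streams independent. A self-contained sketch, with StringIO sinks standing in for the console and log.log:

```python
import io
import logging

# Two sinks stand in for the console and the log file (a FileHandler
# would behave the same way as the second StreamHandler here).
console, logfile = io.StringIO(), io.StringIO()

log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)
log.propagate = False
console_h = logging.StreamHandler(console)
console_h.setLevel(logging.DEBUG)
file_h = logging.StreamHandler(logfile)
file_h.setLevel(logging.INFO)
log.addHandler(console_h)
log.addHandler(file_h)

try:
    1 / 0
except ZeroDivisionError:
    # .exception() logs at ERROR level with the traceback appended,
    # exactly once per handler -- no doubled console message.
    log.exception("division failed")

log.debug("debug detail")   # below INFO: reaches the console sink only
```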



Re: what is wrong with my python code?

2007-02-07 Thread Pierre GM
On Wednesday 07 February 2007 12:43:34 Dongsheng Ruan wrote:
 I got feedback saying "'list' object is not callable", but I can't figure
 out what is wrong with my code.

 for i in range(l):
  print A(i)

You're calling A when you want to access one of its elements: use square
brackets instead of parentheses.

for i in range(l):
  print A[i]
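The difference in a nutshell (Python 3 syntax):

```python
# Parentheses try to *call* the object; square brackets *index* it.
A = [10, 20, 30]
try:
    A(0)                      # wrong: a list is not callable
    msg = None
except TypeError as exc:
    msg = str(exc)
print(msg)                    # 'list' object is not callable
print(A[0])                   # right: first element, 10
```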