[issue1767511] SocketServer.DatagramRequestHandler

2008-01-23 Thread Ben Bass

Ben Bass added the comment:

I've just bumped into this issue. In my opinion the finish() method 
should only do anything if wfile is not empty, i.e.:

temp = self.wfile.getvalue()
if temp:
    self.socket.sendto(temp, self.client_address)
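For illustration, a self-contained sketch of the proposed check; the FakeSocket and Handler names are made up for the demo, and it is written for a modern Python with bytes I/O rather than the 2.5-era module:

```python
from io import BytesIO

class FakeSocket:
    """Records sendto calls so the fix's behaviour can be observed."""
    def __init__(self):
        self.sent = []

    def sendto(self, data, addr):
        self.sent.append((data, addr))

class Handler:
    """Minimal stand-in for DatagramRequestHandler with the proposed finish()."""
    def __init__(self):
        self.wfile = BytesIO()
        self.socket = FakeSocket()
        self.client_address = ("127.0.0.1", 9999)

    def finish(self):
        temp = self.wfile.getvalue()
        if temp:  # only reply when the handler actually wrote something
            self.socket.sendto(temp, self.client_address)

h = Handler()
h.finish()                 # nothing written: no datagram goes out
h.wfile.write(b"pong")
h.finish()                 # data present: exactly one datagram goes out
print(h.socket.sent)
```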

--
nosy: +bpb

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1767511>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1767511] SocketServer.DatagramRequestHandler

2008-01-23 Thread Ben Bass

Ben Bass added the comment:

Main issue here (as I see it) is that StreamRequestHandler and 
DatagramRequestHandler should behave in the same way. This is not the 
case in Python 2.5.1 for the case where the handle method does not 
respond to the request socket (e.g. in my case it is forwarding data to 
a different socket).

 While handler methods in StreamRequestHandler need not send any data 
back to the request socket, in DatagramRequestHandlers an attempt will 
be made to send data whether any is available or not. This causes a 
socket hang (for several minutes) on Windows with a '10040 Message too 
long' error.

 By only sending data back to the request socket if the handler has 
written to wfile, this is avoided: the nasty socket error goes away and 
the behaviour becomes compatible with StreamRequestHandler.

The test has been updated to cover handlers which do not respond to the 
request; this hangs on stock Python 2.5.1 (I'm not sure how to avoid 
that and fail cleanly), and passes with the changed SocketServer.

p.s. this is my first patch submission to anything, so go easy :-)

Added file: http://bugs.python.org/file9271/DatagramServer.diff

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1767511>
_



[issue2029] "python -m pydoc -g" fails

2008-02-07 Thread Ben Bass

New submission from Ben Bass:

To quickly open a PyDoc browser, I want to be able to run the following:

python -m pydoc -g

This works fine on Python 2.4, but fails on 2.5(.1) with the following
traceback (tested on both WinXP and Solaris 8, same result):

Traceback (most recent call last):
  File "c:\python25\lib\runpy.py", line 95, in run_module
    filename, loader, alter_sys)
  File "c:\python25\lib\runpy.py", line 52, in _run_module_code
    mod_name, mod_fname, mod_loader)
  File "c:\python25\lib\runpy.py", line 32, in _run_code
    exec code in run_globals
  File "c:\python25\lib\pydoc.py", line 2255, in <module>
    if __name__ == '__main__': cli()
  File "c:\python25\lib\pydoc.py", line 2191, in cli
    gui()
  File "c:\python25\lib\pydoc.py", line 2162, in gui
    gui = GUI(root)
  File "c:\python25\lib\pydoc.py", line 2052, in __init__
    import threading
ImportError: No module named threading


When running pydoc.py -g directly (i.e. without -m) it works fine, but
this requires knowing the specific location of the pydoc library file,
so is less helpful.
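In the meantime, one workaround is to import pydoc yourself and call its entry points directly. This sketch only prints the module's location; calling gui() would open the Tk window and block (and note that the Tk GUI was later dropped from pydoc entirely, so the call is left commented out):

```python
# Workaround sketch: bypass runpy by importing pydoc directly.
import pydoc

print(pydoc.__file__)  # shows where the stdlib copy actually lives
# pydoc.gui()          # 2.x only: launches the Tk browser (blocks)
```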

--
components: Library (Lib)
messages: 62145
nosy: bpb
severity: normal
status: open
title: "python -m pydoc -g"  fails
versions: Python 2.5

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue2029>
__



[issue5131] pprint doesn't know how to print a defaultdict

2010-09-28 Thread Ben Bass

Ben Bass  added the comment:

Same applies to collections.deque, which seems closely related (being another 
collections class).  Can this get addressed here or should I open another issue?

(just been pprinting defaultdict(deque) objects, which clearly fails :)
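A quick reproduction sketch; behaviour varies by version, and on the interpreters this report targets pprint falls back to a flat single-line repr for these types:

```python
from collections import defaultdict, deque
from pprint import pformat

# Build a nested defaultdict(deque) like the one described above.
d = defaultdict(deque)
for key in range(3):
    d[key].extend(range(4))

# On affected versions this is one long repr line rather than a
# nicely wrapped layout; newer pprint versions handle both types.
print(pformat(d, width=40))
```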

--
nosy: +bpb

___
Python tracker 
<http://bugs.python.org/issue5131>
___



[issue1692335] Fix exception pickling: Move initial args assignment to BaseException.__new__

2011-04-12 Thread Ben Bass

Ben Bass  added the comment:

Perhaps this should be addressed separately, but subprocess.CalledProcessError 
is subject to this problem: it can't be unpickled, since it has separate 
returncode and cmd attributes but no args.

It's straightforward to make user-defined exceptions include .args and 
have a reasonable __init__, but that's not possible for stdlib 
exceptions.

>>> import subprocess, pickle
>>> try:
...   subprocess.check_call('/bin/false')
... except Exception as e:
...   pickle.loads(pickle.dumps(e))
... 
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python3.1/subprocess.py", line 435, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '/bin/false' returned non-zero exit 
status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
  File "/usr/lib/python3.1/pickle.py", line 1363, in loads
    encoding=encoding, errors=errors).load()
TypeError: __init__() takes at least 3 positional arguments (1 given)
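For contrast, a sketch of the user-defined case mentioned above. ProcessError is a hypothetical analogue of CalledProcessError that routes its state through BaseException.__init__, so .args is populated and pickling round-trips:

```python
import pickle

class ProcessError(Exception):
    """Hypothetical user-defined exception that pickles cleanly because
    all constructor arguments are forwarded to BaseException.__init__."""
    def __init__(self, returncode, cmd):
        super().__init__(returncode, cmd)  # fills .args, which pickle replays
        self.returncode = returncode
        self.cmd = cmd

err = ProcessError(1, "/bin/false")
copy = pickle.loads(pickle.dumps(err))  # no TypeError here
print(copy.returncode, copy.cmd)
```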

--
nosy: +bpb

___
Python tracker 
<http://bugs.python.org/issue1692335>
___



[issue13120] Default nosigint optionto pdb.Pdb() prevents use in non-main thread

2011-10-06 Thread Ben Bass

New submission from Ben Bass :

The new SIGINT behaviour of pdb.Pdb prevents use of pdb within a non-main 
thread without explicitly setting nosigint=True. Specifically the 'continue' 
command causes a traceback as follows:

{{{
...
  File 
"/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/pdb.py", line 
959, in do_continue
signal.signal(signal.SIGINT, self.sigint_handler)
ValueError: signal only works in main thread
}}}

Since the new behaviour seems to be an enhancement rather than anything 
fundamentally necessary to pdb, wouldn't it be better if the default were 
reversed, so the same code would work identically on Python 3.1 (and 
potentially earlier, i.e. Python 2) and Python 3.2?

At the moment in my codebase (rpcpdb) I'm sniffing pdb.Pdb.__init__ with 
inspect.getargspec to determine whether to pass a nosigint=True 
parameter, which clearly isn't ideal!
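The sniffing described above can be sketched as follows. This uses inspect.signature rather than the getargspec call mentioned, since the latter has since been removed from the stdlib; detecting the nosigint parameter by name is the assumption here:

```python
import inspect
import pdb

# Only pass nosigint=True when this interpreter's pdb.Pdb accepts it
# (Python 3.2+); on older versions kwargs stays empty.
kwargs = {}
if "nosigint" in inspect.signature(pdb.Pdb.__init__).parameters:
    kwargs["nosigint"] = True

# Safe to construct from any thread; no SIGINT handler is installed.
debugger = pdb.Pdb(**kwargs)
```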

--
components: Library (Lib)
messages: 145040
nosy: bpb
priority: normal
severity: normal
status: open
title: Default nosigint optionto pdb.Pdb() prevents use in non-main thread
type: behavior
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue13120>
___



[issue13120] Default nosigint option to pdb.Pdb() prevents use in non-main thread

2011-10-06 Thread Ben Bass

Changes by Ben Bass :


--
title: Default nosigint optionto pdb.Pdb() prevents use in non-main thread -> 
Default nosigint option to pdb.Pdb() prevents use in non-main thread

___
Python tracker 
<http://bugs.python.org/issue13120>
___



[issue7337] Add lossy queue to queue library module

2009-11-17 Thread Ben Bass

New submission from Ben Bass :

Many applications would benefit from 'connectionless' queues, i.e. they 
don't want to care whether anything is reading from the other end.  
With the current queue module classes this is not practical, because there 
is a choice between unbounded memory consumption and blocking. I propose 
adding a 'LossyQueue' class to the queue module which would allow 
bounded memory consumption without blocking on put (i.e. items are 
dropped in FIFO manner beyond a certain limit).  In my view this is at 
least as natural as the PriorityQueue and LifoQueue extensions in that 
module.

Outline as follows:

class LossyQueue(Queue):
    "Queue subclass which drops items on overflow"
    def _init(self, maxsize):
        if maxsize > 0:
            # build the deque with maxsize limit
            self.queue = deque(maxlen=maxsize)
        else:
            # same as normal Queue instance
            self.queue = collections.deque()
        # deque alone handles maxsize,
        # so we pretend we have none
        self.maxsize = 0

If there is interest in this I will offer a proper patch with docs and 
tests.
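Pairing the outline with a quick demonstration of the intended semantics, as a self-contained sketch on Python 3: puts never block, and the oldest items are dropped once maxsize is exceeded.

```python
from collections import deque
from queue import Queue  # the module is named Queue on Python 2

class LossyQueue(Queue):
    """Queue subclass which drops the oldest items on overflow (as outlined)."""
    def _init(self, maxsize):
        if maxsize > 0:
            # deque's maxlen enforces the bound by discarding old items
            self.queue = deque(maxlen=maxsize)
        else:
            self.queue = deque()
        # the deque handles the bound itself, so Queue never blocks on put
        self.maxsize = 0

lq = LossyQueue(maxsize=3)
for i in range(5):
    lq.put(i, block=False)  # never raises queue.Full; oldest items drop
items = [lq.get_nowait() for _ in range(3)]
print(items)
```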

--
components: Library (Lib)
messages: 95374
nosy: bpb
severity: normal
status: open
title: Add lossy queue to queue library module
type: feature request
versions: Python 2.7, Python 3.2

___
Python tracker 
<http://bugs.python.org/issue7337>
___



[issue7337] Add lossy queue to queue library module

2009-11-18 Thread Ben Bass

Ben Bass  added the comment:

'Connectionless' comes from seeing this as an analogy with UDP (vs TCP). 
As for why not just use a deque: it's primarily about having the same 
API, so that a client (getter) of the queue shouldn't know or care whether 
it is a 'lossy' queue or a normal queue.  I guess most uses of a normal 
queue (excepting the 'task' functions) could just use a deque, but it 
wouldn't feel natural.

Use cases: non-critical event/status reporting is my canonical example.
Specific examples:
 - a program which executes a long running process in a thread. It wants 
to update a GUI progress bar or similar, which must occur in a different 
thread because of the GUI model. By using a LossyQueue, the server 
thread is simplified; it doesn't have to care whether anything is 
listening on the other end, allowing greater decoupling (e.g. no changes 
required if there isn't a GUI). LossyQueues become part of the interface 
which can be used or not as required.
 - emulating/providing wrapper around UDP sockets
 - many application protocols support a get/set/report type interface 
with the addition of asynchronous events (e.g. SNMP, Netconf, SCPI).  In 
these type of applications a suitable abstraction might be a normal 
Queue(s) for the standard commands and a LossyQueue for the events 
(which some applications might not care about).  The point is that to 
the user of this abstraction, these two interfaces look the same.

The 'server' doesn't care if a client is listening or not (it won't 
block and it won't use unlimited memory).
The 'client' (if it wants to use it) doesn't know that it isn't a normal 
queue (same API).
-> decouples server and client tasks.

--

___
Python tracker 
<http://bugs.python.org/issue7337>
___