[issue16487] Allow ssl certificates to be specified from memory rather than files.

2017-11-30 Thread Martin Richard

Martin Richard added the comment:

FWIW, PyOpenSSL allows loading certificates and keys from a memory buffer, and 
much more. It's also fairly easy to switch from ssl to PyOpenSSL.

It's probably a viable alternative in many cases.
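As a sketch of the in-memory workflow (this assumes the third-party pyOpenSSL package, which may not be installed, hence the fallback; it is not the stdlib ssl API):

```python
# Sketch only: a private key is generated, dumped to a PEM bytes buffer and
# reloaded, without any file ever being involved. Assumes pyOpenSSL.
try:
    from OpenSSL import crypto

    pkey = crypto.PKey()
    pkey.generate_key(crypto.TYPE_RSA, 2048)

    pem = crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey)  # bytes, in memory
    reloaded_bits = crypto.load_privatekey(crypto.FILETYPE_PEM, pem).bits()
except ImportError:
    reloaded_bits = None

print("round-tripped key bits:", reloaded_bits)
```

crypto.load_certificate() works the same way for certificates, which is what the stdlib still can't do for load_cert_chain().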

--

___
Python tracker 
<https://bugs.python.org/issue16487>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28287] Refactor subprocess.Popen to let a subclass handle IO asynchronously

2017-08-25 Thread Martin Richard

Martin Richard added the comment:

Yes, the goal is to isolate the blocking IO in __init__ into other methods so 
Popen can be subclassed in asyncio.

The end goal is to ensure that when asyncio calls Popen(), it doesn't block the 
process. In the context of asyncio, there's no need to make Popen() IOs 
non-blocking as they will be performed with the asyncio API (rather than the IO 
methods provided by the Popen object).
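To illustrate what "performed with the asyncio API" means in practice, here is a minimal sketch using only public APIs (the `python -c` child is just a placeholder command):

```python
import asyncio
import sys


async def main():
    # asyncio spawns the child and then does all pipe IO through the event
    # loop, instead of the blocking file objects exposed by the Popen object.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('hello')",
        stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    return out


result = asyncio.run(main())
print(result)
```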

--

___
Python tracker 
<http://bugs.python.org/issue28287>



[issue28782] SEGFAULT when running a given coroutine

2016-11-25 Thread Martin Richard

Martin Richard added the comment:

Thank you all for fixing this so quickly, it's been done amazingly fast!

--

___
Python tracker 
<http://bugs.python.org/issue28782>



[issue28782] SEGFAULT when running a given coroutine

2016-11-23 Thread Martin Richard

New submission from Martin Richard:

Hi,

I stumbled upon a SEGFAULT while trying Python 3.6.0 on a project using
asyncio. I can't really figure out what's happening, so I reduced the original
code triggering the bug down to a reproducible case (which looks a bit clunky,
sorry). The case has been tested on two Linux systems (Archlinux and Debian),
and with several versions of Python.

The bug appeared between 3.6.0a4 (the most recent version tested that is not
affected) and 3.6.0b1 (so before the C asyncio module, I believe), and is not
fixed in the current repository tip (changeset: 105345:3addf93f4111).

I also produced a traceback using gdb (see below).

The segfault happens around the "await" in the body of Cursor._read_data();
interestingly, if I change anything in the body of the method, the SEGFAULT
disappears and the code works as expected. Also, it seems that calling it from
another coroutine (main() in the example) is required to trigger the bug.

Cheers,
Martin


Case (also attached as test.py):

import asyncio

loop = asyncio.get_event_loop()


class Connection:
    def read_until(self, *args, **kwargs):
        return self

    async def all(self):
        return b"\n"


class Cursor:
    def __init__(self):
        self._connection = Connection()
        self._max_bytes = 100
        self._data = bytearray()

    async def _read_data(self):
        # XXX segfault there, if I change anything in the code, it works...
        while True:
            data = await self._connection.read_until(
                b'\n', max_bytes=self._max_bytes).all()
            self._max_bytes -= len(data)
            if data == b'\n':
                break
            self._data.extend(data)


async def main():
    await Cursor()._read_data()


loop.run_until_complete(main())


Traceback extract (with Python3.6.0b4, --with-pydebug on Linux):

Program received signal SIGSEGV, Segmentation fault.
0x0046d177 in _PyGen_yf (gen=gen@entry=0x734bdaf8) at 
Objects/genobject.c:361
361 Py_INCREF(yf);
(gdb) bt
#0  0x0046d177 in _PyGen_yf (gen=gen@entry=0x734bdaf8) at 
Objects/genobject.c:361
#1  0x0052f49c in _PyEval_EvalFrameDefault (f=0x767067d8, 
throwflag=)
at Python/ceval.c:1992
#2  0x0052a0fc in PyEval_EvalFrameEx (f=f@entry=0x767067d8, 
throwflag=throwflag@entry=0)
at Python/ceval.c:718
#3  0x0046d393 in gen_send_ex (gen=gen@entry=0x734bdc08, 
arg=, 
exc=exc@entry=0, closing=closing@entry=0) at Objects/genobject.c:189
#4  0x0046de8d in _PyGen_Send (gen=gen@entry=0x734bdc08, 
arg=)
at Objects/genobject.c:308
#5  0x7384ba2c in task_step_impl (task=task@entry=0x73263bd8, 
exc=exc@entry=0x0)
at (...)/Python-3.6.0b4/Modules/_asynciomodule.c:1963
#6  0x7384c72e in task_step (task=0x73263bd8, exc=0x0)
at (...)/Python-3.6.0b4/Modules/_asynciomodule.c:2247
#7  0x7384ca79 in task_call_step (arg=, task=)
at (...)/Python-3.6.0b4/Modules/_asynciomodule.c:1848
#8  TaskSendMethWrapper_call (o=, args=, 
kwds=)
at (...)/Python-3.6.0b4/Modules/_asynciomodule.c:1167
#9  0x00446702 in PyObject_Call (func=0x737d7f60, 
args=0x77fb8058, kwargs=0x0)
at Objects/abstract.c:2246
#10 0x005295c8 in do_call_core (func=func@entry=0x737d7f60, 
callargs=callargs@entry=0x77fb8058, kwdict=kwdict@entry=0x0) at 
Python/ceval.c:5054
#11 0x00534c64 in _PyEval_EvalFrameDefault (f=0xb4cb48, 
throwflag=)
at Python/ceval.c:3354
#12 0x0052a0fc in PyEval_EvalFrameEx (f=f@entry=0xb4cb48, 
throwflag=throwflag@entry=0)
at Python/ceval.c:718
#13 0x0052a1cc in _PyFunction_FastCall (co=, 
args=0xb4c5b0, nargs=nargs@entry=1, 
globals=) at Python/ceval.c:4867
(...)

--
components: Interpreter Core
files: test.py
messages: 281573
nosy: martius, yselivanov
priority: normal
severity: normal
status: open
title: SEGFAULT when running a given coroutine
type: crash
versions: Python 3.6, Python 3.7
Added file: http://bugs.python.org/file45612/test.py

___
Python tracker 
<http://bugs.python.org/issue28782>



[issue28287] Refactor subprocess.Popen to let a subclass handle IO asynchronously

2016-09-27 Thread Martin Richard

New submission from Martin Richard:

Hi,

Currently, subprocess.Popen performs blocking IO in its constructor (at least 
on Unix): it reads on a pipe in order to detect the outcome of the pre-exec and 
exec phases in the new child. There is no way yet to modify this behavior, as 
this blocking call is part of a long Popen._execute_child() method.

This is a problem in asyncio (asyncio.subprocess_exec and 
asyncio.subprocess_shell).

I would like to submit a patch which breaks Popen.__init__() and 
Popen._execute_child() into several methods, so it becomes possible to avoid 
blocking calls (the read on the pipe and waitpid) by overriding a few private 
methods without duplicating too much code. The goal is to use it in asyncio, as 
described in this pull request (which currently monkey-patches Popen):
https://github.com/python/asyncio/pull/428

This patch only targets the unix implementation.

Thanks for your feedback.

--
files: popen_execute_child_refactoring.patch
keywords: patch
messages: 277517
nosy: martius
priority: normal
severity: normal
status: open
title: Refactor subprocess.Popen to let a subclass handle IO asynchronously
type: enhancement
versions: Python 3.6
Added file: 
http://bugs.python.org/file44844/popen_execute_child_refactoring.patch

___
Python tracker 
<http://bugs.python.org/issue28287>



[issue24763] asyncio.BaseSubprocessTransport triggers an unavoidable ResourceWarning if process doesn't start

2015-07-31 Thread Martin Richard

New submission from Martin Richard:

An exception can be raised in SubprocessTransport.__init__() from 
SubprocessTransport._start() - for instance because an exception is raised in 
the preexec_fn callback.

In this case, the calling function never gets a reference to the transport 
object, and cannot close the transport. Hence, when the object is deleted, an 
"unclosed transport" ResourceWarning is always triggered.

Here is a test case showing this behavior:
import asyncio

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(asyncio.create_subprocess_exec('/doesntexist'))
except FileNotFoundError:
    pass
finally:
    loop.close()


I propose the attached patch as a solution, which calls 
SubprocessTransport.close() when an exception is raised from 
SubprocessTransport._start() in the constructor.
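The idea can be sketched with a toy transport (hypothetical names, not the actual asyncio code): if _start() raises, __init__() closes the half-constructed object before re-raising, so no warning fires even though the caller never got a reference:

```python
import warnings


class ToyTransport:
    """Hypothetical stand-in for BaseSubprocessTransport."""

    def __init__(self, fail=False):
        self._closed = False
        try:
            self._start(fail)
        except Exception:
            # The caller will never get a reference to self, so this is the
            # only place where cleanup can happen.
            self.close()
            raise

    def _start(self, fail):
        if fail:
            raise FileNotFoundError("/doesntexist")

    def close(self):
        self._closed = True

    def __del__(self):
        if not self._closed:
            warnings.warn(ResourceWarning("unclosed transport"))


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    try:
        ToyTransport(fail=True)
    except FileNotFoundError:
        pass

print("warnings emitted:", len(caught))
```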

--
components: asyncio
files: transport_close_when_exception_init.patch
keywords: patch
messages: 247746
nosy: gvanrossum, haypo, martius, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.BaseSubprocessTransport triggers an unavoidable ResourceWarning 
if process doesn't start
type: resource usage
versions: Python 3.4, Python 3.5
Added file: 
http://bugs.python.org/file40082/transport_close_when_exception_init.patch

___
Python tracker 
<http://bugs.python.org/issue24763>



[issue16487] Allow ssl certificates to be specified from memory rather than files.

2015-07-10 Thread Martin Richard

Martin Richard added the comment:

I'm not sure I know how to do this correctly: I lack experience both
with the OpenSSL C API and with writing Python modules in C.

It may be more flexible, but unless the key is protected/encrypted somehow,
one would need a string or bytes buffer to hold the key when creating the
private key object, which is not very secure. Don't you think that it should be
addressed in a separate issue?

2015-07-09 15:48 GMT+02:00 Christian Heimes :

>
> Christian Heimes added the comment:
>
> I'd rather introduce new types and have the function accept either a
> string (for a path to a file) or an X509 object and a PKey object. It's more
> flexible and secure. With a private key type we can properly support crypto
> ENGINEs and wipe memory when the object gets deallocated.

--

___
Python tracker 
<http://bugs.python.org/issue16487>



[issue16487] Allow ssl certificates to be specified from memory rather than files.

2015-07-09 Thread Martin Richard

Martin Richard added the comment:

You are right.

And if certfile and keyfile (args of load_cert_chain()) accept file-like 
objects, we agree that cafile (load_verify_location()) should accept them too?

--

___
Python tracker 
<http://bugs.python.org/issue16487>



[issue16487] Allow ssl certificates to be specified from memory rather than files.

2015-07-09 Thread Martin Richard

Martin Richard added the comment:

Hi,

I would like to update this patch so it can finally land in cpython, hopefully 
3.6.

tl;dr of the thread:
In a nutshell, the latest patch from Kristján Valur Jónsson updates
SSLContext.load_cert_chain(certfile, keyfile=None, password=None) and
SSLContext.load_verify_locations(cafile=None, capath=None)

so certfile, keyfile and cafile can be either a string representing a path to a 
file or a file-like object.

The discussion seems to favor this API (pass file-like objects) rather than 
using new arguments (certdata, keydata) to pass string or bytes objects.

However, Christian Heimes proposed a patch (which landed in 3.4) which adds a 
cadata argument to load_verify_locations().


So, what should we do?
- allow certfile, keyfile and cafile to be paths or file-like objects,
- add certdata and keydata to load_cert_chain() to be consistent with 
load_verify_locations(), 
- do both.

I'd go with the 2nd solution, to stay consistent with the API and keep things 
simple.
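For the record, the cadata keyword that landed in 3.4 can be checked against the stdlib directly; a quick introspection-only sketch (no certificate is actually loaded, and pem_text in the comment is a hypothetical variable):

```python
import inspect
import ssl

# load_verify_locations() already accepts in-memory CA data via `cadata`.
params = inspect.signature(ssl.SSLContext.load_verify_locations).parameters
print("cadata" in params)

# Usage would look like:
#     ctx = ssl.create_default_context()
#     ctx.load_verify_locations(cadata=pem_text)  # PEM str or DER bytes
```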

--
nosy: +martius
versions: +Python 3.6

___
Python tracker 
<http://bugs.python.org/issue16487>



[issue21998] asyncio: support fork

2015-05-26 Thread Martin Richard

Martin Richard added the comment:

2015-05-26 20:40 GMT+02:00 Yury Selivanov :

>
> Yury Selivanov added the comment:
> The only solution to safely fork a process is to fix loop.close() to
> check if it's called from a forked process and to close the loop in
> a safe way (to avoid breaking the master process).  In this case
> we don't even need to throw a RuntimeError.  But we won't have a
> chance to guarantee that all resources will be freed correctly (right?)
>

If all the tasks are cancelled and the loop's internal structures (callback
lists, task sets, etc.) are cleared, I believe that the garbage collector
will eventually be able to dispose of everything.

However, it's indeed not enough: resources created by other parts of
asyncio may leak (transports, subprocesses). For instance, I proposed adding
a "detach()" method to SubprocessTransport here:
http://bugs.python.org/issue23540 - in this case, I need to close the stdin,
stdout and stderr pipes without killing the subprocess.

> So the idea is (I guess it's the 5th option):
>
> 1. If the forked child doesn't call loop.close() immediately after
> forking we raise RuntimeError on first loop operation.
>
> 2. If the forked child calls (explicitly) loop.close() -- it's fine,
> we just close it, the error won't be raised.  When we close, we only
> close the selector (without unregistering or re-registering any FDs),
> and we clean up callback queues without trying to close anything.
>
> Guido, do you still think that raising a "RuntimeError" in a child
> process in an unavoidable way is a better option?
>


--

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue21998] asyncio: support fork

2015-05-26 Thread Martin Richard

Martin Richard added the comment:

Hi,

My patch was a variation of haypo's patch. The goal was to duplicate the
loop and its internal objects (loop and self pipes) without changing its
state much from the outside (keeping callbacks and active tasks). I
wanted to be conservative with this patch, but it is not the option I
prefer.

I think that raising a RuntimeError in the child is fine, but may not be
enough:

Imho, saying "the loop can't be used anymore in the child" is fine, but "a
process in which lives an asyncio loop must not be forked" is too
restrictive (I'm not thinking of the fork+exec case, which is probably fine
anyway) because a library may rely on child processes, for instance.

Hence, we should allow a program to fork and eventually dispose the
resources of the loop by calling loop.close() - or any other mechanism that
you see fit (clearing all references to the loop is tedious because of the
global default event loop and the cycles between futures/tasks and the
loop).

However, the normal loop.close() sequence will unregister all the fds
registered to the selector, which will impact the parent. Under Linux with
epoll, it's fine if we only close the selector.

I would therefore, in the child after a fork, close the loop without
breaking the selector state (closing without unregister()'ing fds), unset
the default loop so get_event_loop() would create a new loop, then raise
RuntimeError.

I can elaborate on the use case I care about, but in a nutshell, doing so
would allow spawning worker processes able to create their own loop, without
requiring an idle "blank" child process to be used as a base for the workers.
It adds the benefit, for instance, of allowing data to be shared between the
parent and the child by leveraging the OS's copy-on-write.
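The "raise RuntimeError in the child" behaviour can be sketched without actually forking (MiniLoop is a hypothetical toy, not asyncio's real implementation): record the pid at creation and compare it on each operation.

```python
import os


class MiniLoop:
    """Hypothetical sketch: refuse to operate in a forked child."""

    def __init__(self):
        self._pid = os.getpid()

    def _check_fork(self):
        if os.getpid() != self._pid:
            raise RuntimeError(
                "loop was created in another process; call close() first")

    def run_once(self):
        self._check_fork()
        return "ran"


loop = MiniLoop()
print(loop.run_once())

# Simulate a fork by pretending the loop was created under a different pid.
loop._pid = -1
try:
    loop.run_once()
except RuntimeError as exc:
    print("child:", exc)
```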

2015-05-26 18:20 GMT+02:00 Yury Selivanov :

>
> Yury Selivanov added the comment:
>
> > How do other event loops handle fork? Twisted, Tornado, libuv, libev,
> libevent, etc.
>
> It looks like using fork() while an event loop is running isn't
> recommended in any of the above.  If I understand the code correctly, libev
> & gevent reinitialize loops in the forked process (essentially, you have a
> new loop).
>
> I think we have the following options:
>
> 1. Document that using fork() is not recommended.
>
> 2. Detect fork() and re-initialize event loop in the child process
> (cleaning-up callback queues, re-initializing selectors, creating new
> self-pipe).
>
> 3. Detect fork() and raise a RuntimeError.  Document that asyncio event
> loop does not support forking at all.
>
> 4. The most recent patch by Martin detects the fork() and reinitializes
> self-pipe and selector (although all FDs are kept in the new selector).
> I'm not sure I understand this option.
>
> I'm torn between 2 & 3.  Guido, Victor, Martin, what do you think?
>

--

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue23540] Proposal for asyncio: SubprocessTransport.detach() to detach a process from a transport

2015-02-27 Thread Martin Richard

New submission from Martin Richard:

I would like to add a detach() method to base_subprocess.BaseSubprocessTransport, 
which would release the underlying Popen object to the user, pretty much like 
socket.detach() detaches a socket object and returns the fd.

The rationale is the following: the lifetime of a subprocess started using a 
loop is bound to that loop; otherwise, one has to close the loop without 
terminating the process, which leads to resource leaks (the stdin/stdout pipes 
can't be closed).

It may be useful in some cases. For instance, I create a fork of a process 
running a loop which started one or more subprocesses. In the child process, 
I'd like to close the pipes and free the transport objects by calling:

proc = transport.detach()
transport.close()

proc.stdin.close()
proc.stdout.close()
proc.stderr.close()


The process is still running; in the parent process, everything looks like 
before the fork, and the child can forget about the parent loop without fearing 
resource leaks.

It is somewhat related to http://bugs.python.org/issue21998 (Support fork).

I propose a patch which adds BaseSubprocessTransport.detach(), a specialized 
version for _UnixSubprocessTransport taking care of removing the callbacks from 
the ChildWatcher, and a detach() method for the pipe transports for unix and 
proactor.

--
components: asyncio
files: add-detach-to-subprocess_transport.patch
keywords: patch
messages: 236808
nosy: gvanrossum, haypo, martius, yselivanov
priority: normal
severity: normal
status: open
title: Proposal for asyncio: SubprocessTransport.detach() to detach a process 
from a transport
type: enhancement
versions: Python 3.3, Python 3.4, Python 3.5
Added file: 
http://bugs.python.org/file38263/add-detach-to-subprocess_transport.patch

___
Python tracker 
<http://bugs.python.org/issue23540>



[issue23537] BaseSubprocessTransport includes two unused methods

2015-02-27 Thread Martin Richard

New submission from Martin Richard:

base_subprocess.BaseSubprocessTransport implements 
_make_write_subprocess_pipe_proto and _make_read_subprocess_pipe_proto.

Both are private and both raise NotImplementedError. However, when I grep the 
tulip sources for those methods, they are never called nor overridden by 
subclasses of BaseSubprocessTransport.

Shouldn't they be removed?

--
components: asyncio
messages: 236777
nosy: gvanrossum, haypo, martius, yselivanov
priority: normal
severity: normal
status: open
title: BaseSubprocessTransport includes two unused methods

___
Python tracker 
<http://bugs.python.org/issue23537>



[issue21998] asyncio: support fork

2015-02-17 Thread Martin Richard

Martin Richard added the comment:

The goal of the patch is to create a duplicate selector (a new epoll() 
structure with the same watched fds as the original epoll). It allows removing 
fds watched in the child's loop without impacting the parent process.

Actually, it's true that with the current implementation of the selectors 
module (using get_map()), we can achieve the same result as with Victor's 
patch without touching the selector module. I attached a patch doing that, 
which also works with Python 3.4.
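The get_map()-based approach can be sketched like this (toy code using only the public selectors API, not the attached patch itself): a fresh selector is rebuilt from the original's mapping, so changes to the duplicate leave the original untouched.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ, data="reader")

# Build an independent selector watching the same fds via the public
# get_map() API; each selector owns its own kernel-side structure.
dup = selectors.DefaultSelector()
for key in list(sel.get_map().values()):
    dup.register(key.fileobj, key.events, key.data)

dup.unregister(a)  # only affects `dup`, not `sel`
counts = (len(sel.get_map()), len(dup.get_map()))
print(counts)

sel.close()
dup.close()
a.close()
b.close()
```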

I thought about this at_fork() mechanism a bit more and I'm not sure what we 
want to achieve with it. In my opinion, most of the time we will want to 
recycle the loop in the child process (close it and create a new one), because 
we will not want the tasks and callbacks scheduled on the loop to run on both 
the parent and the child (it would probably result in double writes on 
sockets, or double reads, for instance).

With the current implementation of asyncio, I can't recycle the loop for a 
single reason: closing the loop calls _close_self_pipe(), which unregisters 
the pipe from the selector (hence breaking the loop in the parent). Since the 
self pipe is an object internal to the loop, I think it's safe to close the 
pipes without unregistering them from the selector. It is at least true with 
epoll() according to the documentation quoted by neologix, but I hope that we 
can expect it to be true on other Unix platforms too.

--
Added file: http://bugs.python.org/file38164/at_fork-3.patch

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue21998] asyncio: support fork

2015-02-17 Thread Martin Richard

Martin Richard added the comment:

In that case, I suggest a small addition to your patch that would do the trick:

in unix_events.py:

+    def _at_fork(self):
+        super()._at_fork()
+        self._selector._at_fork()
+        self._close_self_pipe()
+        self._make_self_pipe()
+

becomes:

+    def _at_fork(self):
+        super()._at_fork()
+        if not hasattr(self._selector, '_at_fork'):
+            return
+        self._selector._at_fork()
+        self._close_self_pipe()
+        self._make_self_pipe()

--

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue21998] asyncio: support fork

2015-02-17 Thread Martin Richard

Martin Richard added the comment:

I read the patch; it looks good to me for Python 3.5. It will (obviously) not 
work with Python 3.4, since self._selector won't have an _at_fork() method.

I ran the tests of my project with Python 3.5a1 and the patch, and it seems to 
work as expected: i.e. when I close the loop of the parent process in the 
child, it does not affect the parent.

I don't have a case where the loop of the parent is still used in the child, 
though.

--

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue17911] traceback: add a new thin class storing a traceback without storing local variables

2015-01-26 Thread Martin Richard

Changes by Martin Richard :


--
nosy: +martius

___
Python tracker 
<http://bugs.python.org/issue17911>



[issue23209] asyncio: break some cycles

2015-01-12 Thread Martin Richard

Martin Richard added the comment:

I updated the selector patch so BaseSelector.get_key() raises KeyError if the 
mapping is None. All the (non-skipped) tests in test_selectors.py passed.

Anyway, if there is another problem with freeing the mapping object (I don't 
know, maybe "reopening" a loop may be considered?), this patch can probably be 
dropped. Since this cycle is broken when the loop is closed, the objects will 
likely be collected once the program terminates.

--
Added file: http://bugs.python.org/file37681/break-selector-map-cycle.diff

___
Python tracker 
<http://bugs.python.org/issue23209>



[issue23209] asyncio: break some cycles

2015-01-09 Thread Martin Richard

Changes by Martin Richard :


--
components: +asyncio
nosy: +gvanrossum, haypo, yselivanov
type:  -> performance
versions: +Python 3.4

___
Python tracker 
<http://bugs.python.org/issue23209>



[issue23209] asyncio: break some cycles

2015-01-09 Thread Martin Richard

New submission from Martin Richard:

Hi,

I would like to submit 3 trivial modifications, each of which breaks a cycle. 
It is not much, but those three cycles caused a lot of objects to be garbage 
collected. They can now be freed using the reference counting mechanism, and 
therefore reduce the latency that may be caused by the work of the garbage 
collector in a long-living process.

In asyncio/base_subprocess.py:
WriteSubprocessPipeProto.proc is a reference to a BaseSubprocessTransport 
object, which holds a reference to the WriteSubprocessPipeProto in 
self._protocol.

I break the cycle in the protocol at the end of connection_lost().
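The base_subprocess.py change amounts to this pattern (toy classes with hypothetical names, not the actual asyncio code): once the protocol drops its back-reference in connection_lost(), plain reference counting can free the pair without the garbage collector.

```python
import gc
import weakref


class ToyProtocol:
    """Hypothetical stand-in for WriteSubprocessPipeProto."""
    proc = None

    def connection_lost(self, exc):
        self.proc = None  # break the transport <-> protocol cycle


class ToyTransport:
    """Hypothetical stand-in for BaseSubprocessTransport."""

    def __init__(self, protocol):
        self._protocol = protocol
        protocol.proc = self  # the cycle: transport <-> protocol

    def close(self):
        self._protocol.connection_lost(None)


gc.disable()  # rely on CPython reference counting only
t = ToyTransport(ToyProtocol())
ref = weakref.ref(t)
t.close()
del t
print("freed without gc:", ref() is None)
gc.enable()
```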

In asyncio/futures.py:
wrap_future() defines a lambda which uses a variable defined in the function, 
therefore creating a closure that references the wrap_future() function and 
creates a cycle.

In the (really trivial) patch, the lambda uses the argument "future" instead 
of the "fut" variable defined in a closure. The closure is not needed anymore.

This single cycle is very common, because it is created whenever one uses 
getaddrinfo().
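The futures.py fix is the classic closure-versus-own-argument trick; a minimal illustration (not the actual wrap_future() code):

```python
results = []


def make_with_closure(fut):
    # The callback ignores its own argument and captures `fut` from the
    # enclosing scope, so a closure cell is created.
    return lambda future: results.append(fut)


def make_without_closure(fut):
    # The callback uses only its own argument; no closure is created.
    return lambda future: results.append(future)


cb1 = make_with_closure("f1")
cb2 = make_without_closure("f2")
print(cb1.__closure__ is not None, cb2.__closure__ is None)
```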

In asyncio/selectors.py:
_BaseSelectorImpl._map keeps a reference to the _SelectorMapping object, which 
also references the selector with _SelectorMapping._selector.

The reference to the map in the selector is cleared once the selector is closed.

--
files: break-some-cycles.diff
keywords: patch
messages: 233770
nosy: martius
priority: normal
severity: normal
status: open
title: asyncio: break some cycles
Added file: http://bugs.python.org/file37657/break-some-cycles.diff

___
Python tracker 
<http://bugs.python.org/issue23209>



[issue21998] asyncio: a new self-pipe should be created in the child process after fork

2014-12-08 Thread Martin Richard

Martin Richard added the comment:

Currently, this is what I do in the child after the fork:

>>> selector = loop._selector
>>> parent_class = selector.__class__.__bases__[0]
>>> selector.unregister = lambda fd: parent_class.unregister(selector, fd)

It replaces unregister() with _BaseSelectorImpl.unregister(), so "our" data 
structures are still cleaned up (the dict _fd_to_key, for instance).

If a fix for this issue is desired in tulip, the first solution proposed by 
Guido (closing the selector and letting the unregister call fail, see the 
-trivial- patch attached) is probably good enough.

--
keywords: +patch
Added file: 
http://bugs.python.org/file37385/close_self_pipe_after_selector.patch

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue21998] asyncio: a new self-pipe should be created in the child process after fork

2014-12-01 Thread Martin Richard

Martin Richard added the comment:

I said something wrong in my previous comment: removing and re-adding the 
reader callback right after the fork() is obviously subject to a race condition.

I'll go for the monkey patching.

--

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue21998] asyncio: a new self-pipe should be created in the child process after fork

2014-12-01 Thread Martin Richard

Martin Richard added the comment:

Guido,

Currently in my program, I manually remove and then re-add the reader to the 
loop in the parent process right after the fork(). I also considered a dirty 
monkey-patching of remove_reader() and remove_writer() which would act like the 
original versions but without removing the fds from the epoll object (ensuring 
I don't get bitten by the same behavior for another fd).

The easiest fix, I think, is indeed to close the selector without 
unregistering the fds, but I don't know if doing so would have undesired side 
effects on a platform other than Linux (resource leaks, or the close call 
failing, maybe).

--

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue21998] asyncio: a new self-pipe should be created in the child process after fork

2014-11-28 Thread Martin Richard

Martin Richard added the comment:

Hi,

Actually, closing and creating a new loop in the child doesn't work either, at 
least on Linux.

When, in the child, we call loop.close(), it performs:
self.remove_reader(self._ssock)
(in selector_events.py, _close_self_pipe() around line 85)

Both the parent and the child still refer to the same underlying epoll 
structure, at that moment, and calling remove_reader() has an effect on the 
parent process too (which will never watch the self-pipe again).

I attached a test case that demonstrates the issue (and the workaround, 
commented).

--
nosy: +martius
Added file: http://bugs.python.org/file37306/test2.py

___
Python tracker 
<http://bugs.python.org/issue21998>



[issue22638] ssl module: the SSLv3 protocol is vulnerable ("POODLE" attack)

2014-10-15 Thread Martin Richard

Changes by Martin Richard :


--
nosy: +martius

___
Python tracker 
<http://bugs.python.org/issue22638>



[issue22348] Documentation of asyncio.StreamWriter.drain()

2014-09-12 Thread Martin Richard

Martin Richard added the comment:

Here is another patch which mentions the high- and low-water limits. I think 
it's better to talk about them, since they tell exactly what a "full buffer" 
and "partially drained" mean.

On the other hand, StreamWriter wraps the transport but does not expose 
set/get_write_buffer_limits() directly; you reach them through 
stream_writer.transport (which makes sense, StreamWriter is here to help 
writing, not to do plumbing) - so I did not mention those functions.
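A small end-to-end sketch of where drain() fits (a throwaway local echo server; port 0 asks the OS for a free port):

```python
import asyncio


async def handle(reader, writer):
    data = await reader.read(100)
    writer.write(data)
    # drain() only actually waits if the transport buffer is above the
    # high-water mark; otherwise it returns immediately.
    await writer.drain()
    writer.close()


async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    reply = await reader.read(100)

    writer.close()
    server.close()
    await server.wait_closed()
    return reply


reply = asyncio.run(main())
print(reply)
```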

--
Added file: 
http://bugs.python.org/file36606/asyncio-streams-drain-doc-water-limits.patch

___
Python tracker 
<http://bugs.python.org/issue22348>



[issue22348] Documentation of asyncio.StreamWriter.drain()

2014-09-06 Thread Martin Richard

Changes by Martin Richard :


--
hgrepos:  -273

___
Python tracker 
<http://bugs.python.org/issue22348>



[issue22348] Documentation of asyncio.StreamWriter.drain()

2014-09-06 Thread Martin Richard

New submission from Martin Richard:

Hi,

Following the discussion on the python-tulip group, I'd like to propose a patch 
for the documentation of StreamWriter.drain().

This patch aims to give a better description of what drain() is intended to do, 
and when to use it. In particular, it highlights the fact that calling drain() 
does not mean that any write operation will be performed, and that it is not 
required to be called.

--
components: asyncio
files: asyncio-streams-drain-doc.patch
hgrepos: 273
keywords: patch
messages: 226487
nosy: gvanrossum, haypo, martius, yselivanov
priority: normal
severity: normal
status: open
title: Documentation of asyncio.StreamWriter.drain()
type: behavior
versions: Python 3.4
Added file: http://bugs.python.org/file36561/asyncio-streams-drain-doc.patch

___
Python tracker 
<http://bugs.python.org/issue22348>