Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: - fixed
status: open - closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14725
Richard Oudkerk shibt...@gmail.com added the comment:
I found a race where a connection attempt could happen before the listening
socket's listen() method was called.
Vinay, could you update and try again, please?
--
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: - fixed
stage: patch review - committed/rejected
status: open - closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14753
Richard Oudkerk added the comment:
IMHO
1) It should check all predicates.
2) It should return a list of ready conditions.
3) It should *not* accept a list of conditions.
4) from_condition() should be removed.
Also notify() should try again if releasing a waiter raises RuntimeError
because
Richard Oudkerk added the comment:
The spawn branch is in decent shape, although the documentation is not
up-to-date.
I would like to commit before the first alpha.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
I played a bit with the patch and -v -Xshowrefcount. The number of references
and blocks left at exit varies (and is higher than for unpatched python).
It appears that a few (1-3) module dicts are not being purged because they have
been orphaned. (i.e
Richard Oudkerk added the comment:
On 01/08/2013 10:59am, Antoine Pitrou wrote:
If you replace the end of your script with the following:
for name, mod in sys.modules.items():
    if name != 'encodings':
        mod.__dict__['__blob__'] = Blob(name)
del name, mod, Blob
then at the end
Richard Oudkerk added the comment:
You might want to open a prompt and look at gc.get_referrers() for
encodings.mbcs.__dict__ (or another of those modules).
gc.get_referrers(sys.modules['encodings.mbcs'].__dict__)
[<module 'encodings.mbcs' from
'C:\\Repos\\cpython-dirty\\lib\\encodings
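For instance, this relationship can be checked directly (a minimal sketch using encodings.aliases, since encodings.mbcs only exists on Windows):

```python
import gc
import encodings.aliases

# A module object is among the referrers of its own __dict__,
# which is part of what keeps module dicts alive at shutdown.
mod = encodings.aliases
refs = gc.get_referrers(mod.__dict__)
assert any(r is mod for r in refs)
```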
Richard Oudkerk added the comment:
I get different numbers from you. If I run ./python -v -c pass, most
modules in the wiping phase are C extension modules, which is expected.
Pretty much every pure Python module ends up garbage collected before
that.
The *module* gets gc'ed, sure
Richard Oudkerk added the comment:
Also, do note that purge/gc after wiping can still be a regular
gc pass unless the module has been wiped. The gc could be triggered
by another module being wiped.
For me, the modules which die naturally after purging begins are
# purge/gc encodings.aliases
Richard Oudkerk added the comment:
Yes, I agree the patch is ok.
It would be much simpler to keep track of the module dicts if
they were weakrefable. Alternatively, at shutdown a weakrefable object
with a reference to the module dict could be inserted into each module
dict. We
Richard Oudkerk added the comment:
Firstly, list2cmdline() takes a list as its argument, not a string:
import subprocess
print(subprocess.list2cmdline(['\\\\\\1|2\\\\\\']))
\\\1|2\\\
But the problem with passing arguments to a batch file is that cmd.exe parses
arguments differently from how
Richard Oudkerk added the comment:
I think you're missing the point. The implementation is wrong as it
does not do what the documentation says, which is: "A double quotation mark
preceded by a backslash is interpreted as a literal double quotation
mark."
That docstring describes how the string
Changes by Richard Oudkerk shibt...@gmail.com:
Added file: http://bugs.python.org/file31186/b3620777f54c.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
I have done quite a bit of refactoring and added some extra tests.
When I try using the forkserver start method on the OSX Tiger buildbot (the
only OSX one available) I get errors. I have disabled the tests for OSX, but
it seemed to be working before
Richard Oudkerk added the comment:
IMHO it just doesn't make sense to pass 0.0 as a timeout value.
I have written lots of code that looks like
    timeout = max(deadline - time.time(), 0)
    some_function(..., timeout=timeout)
This makes perfect sense. Working code should not be broken
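That clamping pattern, as a small self-contained sketch (the helper name is hypothetical):

```python
import time

def remaining(deadline):
    # Clamp to zero so a deadline in the past never yields a
    # negative timeout.
    return max(deadline - time.time(), 0)

deadline = time.time() + 0.05
timeout = remaining(deadline)
assert 0 <= timeout <= 0.05
```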
Richard Oudkerk added the comment:
Richard, can you say what failed on the OS X 10.4 (Tiger) buildbot?
There seems to be a problem which depends on the order in which you run
the test, and it happens on Linux also. For example if I do
./python -m test -v
Changes by Richard Oudkerk shibt...@gmail.com:
Added file: http://bugs.python.org/file31214/c7aa0005f231.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
The forkserver process is now started using _posixsubprocess.fork_exec(). This
should fix the order dependent problem mentioned before.
Also the forkserver tests are now reenabled on OSX.
--
Changes by Richard Oudkerk shibt...@gmail.com:
Added file: http://bugs.python.org/file31282/4fc7c72b1c5d.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
I have added documentation now so I think it is ready to merge (except for a
change to Makefile).
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
Good for me. This is a very nice addition!
Thanks.
I do see a couple of failed assertions on Windows which presumably happen in a
child process because they do not cause a failure:
Assertion failed: !collecting, file ..\Modules\gcmodule.c, line 1617
Richard Oudkerk added the comment:
Do you mean you want to use a pure Python implementation on Unix?
Then you would have to deal with AF_UNIX (which is the default family for
socketpair() currently). A pure Python implementation which deals with AF_UNIX
would have to temporarily create
Richard Oudkerk added the comment:
I could submit the part that makes it possible to customize the picklers
of multiprocessing.pool.Pool instance to the standard library if people
are interested.
2.7 and 3.3 are in bugfix mode now, so they will not change.
In 3.3 you can do
from
Richard Oudkerk added the comment:
Adding the line
features[0][0]
to the end of main() produces a segfault for me on Linux.
The FAQ for sqlite3 says that
Under Unix, you should not carry an open SQLite database across a
fork() system call into the child process. Problems
Richard Oudkerk added the comment:
On 21/08/2013 3:46pm, Charles-François Natali wrote:
Another, probably cleaner way would be to finally add the atfork()
module (issue #16500), and register this reseed hook (which could then
be implemented in ssl.py).
Wouldn't that still suffer from
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: - duplicate
stage: - committed/rejected
status: open - closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18793
Richard Oudkerk added the comment:
Hopefully this is fixed now.
--
resolution: - fixed
stage: - committed/rejected
status: open - closed
type: - behavior
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18762
Richard Oudkerk added the comment:
Yes I will remove it. I was planning on doing so when PEP 446 was implemented.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18865
Richard Oudkerk added the comment:
The PPC64 buildbot is still failing intermittently.
--
resolution: invalid -
status: closed - open
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: - fixed
stage: - committed/rejected
status: open - closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18865
Richard Oudkerk added the comment:
It looks like the main process keeps getting killed by SIGUSR1. Don't know why.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Richard Oudkerk added the comment:
If the _killer process takes too long to start, it won't send SIGUSR1
before the p process returns...
Thanks!
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Richard Oudkerk added the comment:
It should be fixed now so I will close.
--
resolution: - fixed
stage: - committed/rejected
status: open - closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Richard Oudkerk added the comment:
Try using Popen(..., bufsize=0).
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18868
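The bufsize=0 suggestion above, as a minimal sketch: an unbuffered pipe means each write from the child is readable immediately.

```python
import subprocess
import sys

# bufsize=0 makes the parent's end of the pipe unbuffered.
p = subprocess.Popen([sys.executable, '-c', 'print("hello")'],
                     stdout=subprocess.PIPE, bufsize=0)
out, _ = p.communicate()
assert out.strip() == b'hello'
```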
Richard Oudkerk added the comment:
I've seen test_multiprocessing_forkserver giving warnings too, while
running the whole test suite, but can't reproduce them while running it
alone. The warnings seem quite similar though, so a single fix might
resolve the problem with all the tests
Richard Oudkerk added the comment:
On 05/09/2013 9:28am, Charles-François Natali wrote:
As a side note, in the general case, there's more than a performance
optimization: the problem with unregister() + register() vs a real
modify (e.g. EPOLL_CTL_MOD) is that it's subject to a race condition
Richard Oudkerk added the comment:
LGTM.
But I would move the import of selectors in multiprocessing.connection to just
before the definition of wait() for Unix. It is not needed on Windows, and
unnecessary imports slow down the start-up of new processes
Richard Oudkerk added the comment:
I remember wondering at one time why EPOLLNVAL did not exist, and realizing
that closed fds are just silently unregistered by epoll().
I guess the issue is that some of the selectors indicate a bad fd on
registration, and others do it when polled
Richard Oudkerk added the comment:
With the current patch __repr__() will fail if the untransformed key is
unhashable:
d = collections.transformdict(id)
L = [1, 2, 3]
d[L] = None
d.keys()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Repos\cpython-dirty\lib
Richard Oudkerk added the comment:
With your patch, I think if you call get_start_method() without later calling
set_start_method() then the helper process(es) will never be started.
With the current code, popen.Popen() automatically starts the helper processes
if they have not already been
Richard Oudkerk added the comment:
In my patched version, the private popen.get_start_method gets a kwarg
set_if_needed=True. popen.Popen calls that as before, so its behavior
should not change, while the public get_start_method sets the kwarg to
False.
My mistake
Richard Oudkerk added the comment:
By context I did not really mean a context manager. I just meant an object
(possibly a singleton or module) which implements the same interface as
multiprocessing.
(However, it may be a good idea to also make it a context manager whose
__enter__() method
Richard Oudkerk added the comment:
There are lots of things that behave differently depending on the currently set
start method: Lock(), Semaphore(), Queue(), Value(), ... It is not just when
creating a Process or Pool that you need to know the start method.
Passing a context or start_method
Richard Oudkerk added the comment:
I'll review the patch. (According to http://www.python.org/dev/peps/pep-0429/
feature freeze is expected in late November, so there is not too much of a rush.)
--
___
Python tracker rep...@bugs.python.org
Richard Oudkerk added the comment:
An alternative would be to have separate files NEWS-3.2, NEWS-3.3, NEWS-3.4
etc. If a fix is added to 3.2 and will be merged to 3.3 and 3.4 then you add
an entry to NEWS-3.2 and append some sort of tags to indicate merges:
- Issue #1234: Fix something
Richard Oudkerk added the comment:
Thanks for the doc cleanup -- I am rather busy right now.
Note that stuff does still get replaced by None at shutdown, and this can still
produce errors, even if they are much harder to trigger. If I run the
following program
import _weakref
import
Richard Oudkerk added the comment:
The clearing of modules at shutdown has been substantially changed in 3.4. Now
a best effort is made to let the module go away purely by gc. Those modules
which survive get purged in random order.
In 3.3 all modules were purged, but builtins was special
Richard Oudkerk added the comment:
An alternative would be to use weakref.finalize() which would guarantee that
cleanup happens before any purging occurs. That would allow the use of shutil:
class TemporaryDirectory(object):
    def __init__(self, suffix='', prefix=template, dir=None
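A minimal sketch of that idea (not the stdlib implementation): weakref.finalize ties the cleanup to the object's lifetime and also runs at interpreter exit, before module globals are purged, so shutil can be used safely.

```python
import os
import shutil
import tempfile
import weakref

class TemporaryDirectory:
    def __init__(self, suffix='', prefix='tmp', dir=None):
        self.name = tempfile.mkdtemp(suffix, prefix, dir)
        # finalize() holds only a weak reference to self, so this
        # does not keep the object alive; rmtree runs at most once.
        self._finalizer = weakref.finalize(self, shutil.rmtree, self.name)

    def cleanup(self):
        self._finalizer()

d = TemporaryDirectory()
path = d.name
assert os.path.isdir(path)
d.cleanup()
assert not os.path.exists(path)
```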
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19112
Richard Oudkerk added the comment:
See
http://bugs.python.org/issue436259
This is a problem with Windows' implementation of spawn*() and exec*(). Just
use subprocess instead which gets this stuff right.
Note that on Windows exec*() is useless: it just starts a subprocess and exits
Richard Oudkerk added the comment:
I am not sure what I should see there. There is discussion of DOS,
which is not supported, and some complaints about the Windows execv
function, which has been deprecated since VC++ 2005 (which I hope is also not
supported). Can you be more specific?
_spawn*() and _exec
Richard Oudkerk added the comment:
As I wrote in http://bugs.python.org/issue19066, on Windows execv() is
equivalent to
os.spawnv(os.P_NOWAIT, ...)
os._exit(0)
This means that control is returned to cmd when the child process *starts* (and
afterwards you have cmd and the child
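The equivalence described above can be sketched with a hypothetical child that exits with a known status: P_WAIT blocks until the child finishes and returns its exit status, whereas the Windows exec*() behaves like P_NOWAIT followed by os._exit(0), returning control as soon as the child starts.

```python
import os
import sys

# spawnv with P_WAIT runs the child to completion and returns its
# exit status (7 here); P_NOWAIT would return immediately instead.
status = os.spawnv(os.P_WAIT, sys.executable,
                   [sys.executable, '-c', 'import sys; sys.exit(7)'])
assert status == 7
```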
Richard Oudkerk added the comment:
It is said that execv() is deprecated, but it is not said that it is
an alias of _execv(). It is only said that _execv() is C++ compliant.
http://msdn.microsoft.com/en-us/library/ms235416(v=vs.90).aspx
Microsoft seems to have decided that all functions
Richard Oudkerk added the comment:
Where did you get that info? MSDN is silent about that.
http://msdn.microsoft.com/en-us/library/886kc0as(v=vs.90).aspx
Reading the source code for the C runtime included with Visual Studio.
The problem is not in what I should or should not use. The problem
Richard Oudkerk added the comment:
Hey. This ticket is about os.execv failing on spaced paths on Windows. It
is not a duplicate of issue19124.
It is a duplicate of #436259 [Windows] exec*/spawn* problem with spaces in
args.
--
___
Python tracker
Richard Oudkerk added the comment:
Visual Studio 10+ ? Is it available somewhere for a reference?
Old versions of the relevant files are here:
http://www.controllogics.com/software/VB6/VC98/CRT/SRC/EXECVE.C
http://www.controllogics.com/software/VB6/VC98/CRT/SRC/SPAWNVE.C
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: -sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19066
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: -sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19124
Richard Oudkerk added the comment:
Well, perhaps we can special-case builtins not to be wiped at shutdown.
However, there is another problem here in that the Popen object survives
until the builtins module is wiped. This should be investigated too.
Maybe it is because it uses the evil
Richard Oudkerk added the comment:
Is BoundedSemaphore really supposed to be robust in the face of too many
releases, or does it just provide a sanity check?
I think that releasing a bounded semaphore too many times is a programmer
error, and the exception is just a debugging aid
Richard Oudkerk added the comment:
the previous initializers were not supposed to return any value
Previously, any returned value would have been ignored. But the documentation
does not say that the function has to return None. So I don't think we can
assume there is no compatibility issue
Richard Oudkerk added the comment:
I think "misuse" is an exaggeration. Various functions change some state and
return a value that is usually ignored, e.g. os.umask(), signal.signal().
Global variables usage is a pattern which might lead to code errors and many
developers discourage from
Richard Oudkerk added the comment:
These functions are compliant with POSIX standards and the return values
are actually useful, they return the previously set masks and handlers,
often are ignored but in complex cases it's good to know their previous
state.
Yes. But my point
Richard Oudkerk added the comment:
BTW, the context objects are singletons.
I could not see a sensible way to make ctx.Process be a picklable class (rather
than a method) if there can be multiple instances of a context type. This
means that the helper processes survive until the program
Richard Oudkerk added the comment:
Attached is a patch which allows the use of separate contexts. For example
    try:
        ctx = multiprocessing.get_context('forkserver')
    except ValueError:
        ctx = multiprocessing.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue12413
Richard Oudkerk added the comment:
I'm already confused by the fact that the test is named
test_multiprocessing_spawn and the error is coming from a module named
popen_fork...)
popen_spawn_posix.Popen is a subclass of popen_fork.Popen
Richard Oudkerk added the comment:
I haven't read all of your patch yet, but does this mean a forkserver
will be started regardless of whether it is later used?
No, it is started on demand. But since it is started using
_posixsubprocess.fork_exec(), nothing is inherited from the main
Richard Oudkerk added the comment:
After running ugly_hack(), trying to malloc a largeish block (1MB) fails:
int main(void)
{
    int first;
    void *ptr;
    ptr = malloc(1024*1024);
    assert(ptr != NULL);    /* succeeds */
    free(ptr);
    first = ugly_hack();
    ptr = malloc
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: - fixed
stage: - committed/rejected
status: open - pending
title: Robustness issues in multiprocessing.{get,set}_start_method - Support
different contexts in multiprocessing
type: behavior - enhancement
Changes by Richard Oudkerk shibt...@gmail.com:
--
status: open - closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18999
Richard Oudkerk added the comment:
On 16/10/2013 8:14pm, Guido van Rossum wrote:
(2) I get this message -- what does it mean and should I care?
2 tests altered the execution environment:
test_asyncio.test_base_events test_asyncio.test_futures
Perhaps threads from the ThreadExecutor
Richard Oudkerk added the comment:
I think at module level you can do
    if sys.platform != 'win32':
        raise unittest.SkipTest('Windows only')
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19262
Richard Oudkerk added the comment:
I can reproduce the problem on the Non-Debug Gentoo buildbot using only
os.fork() and os.kill(pid, signal.SIGTERM). See
http://hg.python.org/cpython/file/9853d3a20849/Lib/test/_test_multiprocessing.py#l339
To investigate further I think strace
Richard Oudkerk added the comment:
I fixed the out of space last night. (Someday I'll get around to figuring
out which test it is that is leaving a bunch of data around when it fails,
but I haven't yet).
It looks like on the Debug Gentoo buildbot configure and clean are failing.
Richard Oudkerk added the comment:
I finally have a gdb backtrace of a stuck child (started using os.fork() not
multiprocessing):
#1 0xb76194da in ?? () from /lib/libc.so.6
#2 0xb6d59755 in ?? ()
from
/var/lib/buildslave/custom.murray-gentoo/build/build/lib.linux-i686-3.4-pydebug
Richard Oudkerk added the comment:
Actually, according to strace the call which blocks is
futex(0xb7839454, FUTEX_WAIT_PRIVATE, 1, NULL
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19227
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10015
Richard Oudkerk added the comment:
I guess this should be clarified in the docs, but multiprocessing.pool.Pool is
a *class* whose constructor takes a context argument, whereas
multiprocessing.Pool() is a *bound method* of the default context. (In
previous versions multiprocessing.Pool
Richard Oudkerk added the comment:
I guess we'll have to write platform-dependent code and make this an
optional feature. (Essentially, on platforms like AIX, for a
write-pipe, connection_lost() won't be called unless you try to write
some more bytes to it.)
If we are not capturing stdout
Richard Oudkerk added the comment:
Would it make sense to use socketpair() instead of pipe() on AIX? We could
check for the bug directly rather than checking specifically for AIX.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
Richard Oudkerk added the comment:
Is this patch still of relevance for asyncio?
No, the _overlapped extension contains the IOCP stuff.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16175
Richard Oudkerk added the comment:
Richard, do you have time to get your patch ready for 3.4?
Yes. But we don't seem to have consensus on how to handle exceptions. The
main question is whether a failed prepare callback should prevent the fork from
happening, or just be printed
Richard Oudkerk added the comment:
- now that FDs are non-inheritable by default, fork locks around
subprocess and multiprocessing shouldn't be necessary anymore? What
other use cases does the fork-lock have?
CLOEXEC fds will still be inherited by forked children.
- the current
Richard Oudkerk added the comment:
The following uses socketpair() instead of pipe() for stdin, and works for me
on Linux:
diff -r 7d94e4a68b91 asyncio/unix_events.py
--- a/asyncio/unix_events.pySun Oct 20 20:25:04 2013 -0700
+++ b/asyncio/unix_events.pyMon Oct 21 17:15:19 2013 +0100
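The idea is easy to demonstrate in isolation: socket.socketpair() gives a connected pair of sockets that can stand in where a pipe would be used (a minimal sketch, not the asyncio patch itself):

```python
import socket

# A connected pair of AF_UNIX sockets; bytes written to one end are
# readable from the other, just like a pipe but bidirectional.
a, b = socket.socketpair()
a.sendall(b'ping')
data = b.recv(4)
a.close()
b.close()
```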
Richard Oudkerk added the comment:
Won't using a prepare handler mean that the parent and child processes will use
the same seed until one or other of them forks again?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19227
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: - fixed
stage: needs patch - committed/rejected
status: open - closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19425
Richard Oudkerk added the comment:
This is a test of threading.Barrier rather than anything implemented directly
by multiprocessing.
Tests which involve timeouts tend to be a bit flaky. Increasing the length of
timeouts usually helps, but makes the tests take even longer.
How often have you
Richard Oudkerk added the comment:
Given PEP 446 (fds are now CLOEXEC by default) I prepared an updated patch
where the fork lock is undocumented and subprocess no longer uses the fork
lock. (I did not want to encourage the mixing of threads with fork() without
exec() by exposing the fork
Richard Oudkerk added the comment:
It seems to be a problem with ForkAwareThreadLock. Could you try the attached
patch?
--
Added file: http://bugs.python.org/file29593/forkawarethreadlock.patch
___
Python tracker rep...@bugs.python.org
Richard Oudkerk added the comment:
_afterfork_registry is not supposed to be cleared. But the problem with
ForkAwareThreadLocal meant that the size of the registry at generation n is
2**n!
--
___
Python tracker rep...@bugs.python.org
Richard Oudkerk added the comment:
I *think* we need to keep compatibility with the wire format, but perhaps
we could use a special length value (-1?) to introduce a longer (64-bit)
length value.
Yes we could, although that would not help on Windows pipe connections (where
byte oriented
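The escape-length idea could be sketched like this (a hypothetical framing for illustration, not multiprocessing's actual wire format):

```python
import struct

SENTINEL = 0xFFFFFFFF  # 32-bit "-1" escapes to a 64-bit length

def pack_frame(data):
    n = len(data)
    if n < SENTINEL:
        return struct.pack('!I', n) + data
    # Too big for 32 bits: write the sentinel, then a 64-bit length.
    return struct.pack('!I', SENTINEL) + struct.pack('!Q', n) + data

def unpack_frame(buf):
    n, = struct.unpack_from('!I', buf)
    offset = 4
    if n == SENTINEL:
        n, = struct.unpack_from('!Q', buf, 4)
        offset = 12
    return buf[offset:offset + n]

assert unpack_frame(pack_frame(b'hello')) == b'hello'
```

Old peers that never send the sentinel would remain compatible, since small frames keep the existing 32-bit prefix.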
Richard Oudkerk added the comment:
On 27/03/2013 5:13pm, mrjbq7 wrote:
On a machine with 256GB of RAM, it makes more sense to send arrays
of this size than say on a laptop...
I was thinking more of speed than memory consumption.
--
Richard Oudkerk added the comment:
On 27/03/2013 5:47pm, Charles-François Natali wrote:
multiprocessing currently only allows sharing of such shared arrays
using inheritance.
You mean through fork() COW?
Through fork, yes, but shared rather than copy-on-write.
Perhaps we need
Richard Oudkerk added the comment:
On 27/03/2013 7:27pm, Charles-François Natali wrote:
Charles-François Natali added the comment:
Through fork, yes, but shared rather than copy-on-write.
There's a subtlety: because of refcounting, just treating a COW object
as read-only (e.g. iteratin
Richard Oudkerk added the comment:
On 27/03/2013 8:14pm, Charles-François Natali wrote:
Charles-François Natali added the comment:
Apart from creating, unlinking and resizing the file I don't think there
should be any disk I/O.
On Linux disk I/O only occurs when fsync() or close
Richard Oudkerk added the comment:
On 27/03/13 21:09, Charles-François Natali wrote:
I could, but I don't have to: a shared memory won't incur any I/O or
copy (except if it is swapped). A file-backed mmap will incur a *lot*
of I/O: really, just try writting a 1GB file, and you'll see your
Richard Oudkerk added the comment:
I don't think this is a bug -- processes started with fork() should nearly
always be exited with _exit(). And anyway, using sys.exit() does *not*
guarantee that all deallocators will be called. To be sure of cleanup at exit
you could use (the undocumented
Richard Oudkerk added the comment:
Maybe this is related to
http://bugs.python.org/issue13673
which causes PyTraceback_Print() to fail if a signal is received but
PyErr_CheckSignals() has not been called.
Note that wrapping in try: ... except: raise makes a traceback appear:
try: input