Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: -sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19066
___
___
Python-bugs-list mailing
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: -sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19124
___
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19112
___
Richard Oudkerk added the comment:
The clearing of modules at shutdown has been substantially changed in 3.4. Now
a best effort is made to let the module go away purely by gc. Those modules
which survive get purged in random order.
In 3.3 all modules were purged, but builtins was special
Richard Oudkerk added the comment:
An alternative would be to use weakref.finalize() which would guarantee that
cleanup happens before any purging occurs. That would allow the use of shutil:
class TemporaryDirectory(object):
def __init__(self, suffix="", prefix=template, dir=None):
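A minimal sketch of that idea (a simplified stand-in, not the actual stdlib class): weakref.finalize registers a callback that runs when the object is garbage collected or at interpreter exit, before module globals are purged, so shutil is still usable from the cleanup path.

```python
import shutil
import tempfile
import weakref

class TemporaryDirectory:
    """Simplified sketch of the idea, not the real stdlib class:
    cleanup is delegated to weakref.finalize, which runs at garbage
    collection or interpreter exit, before module purging."""

    def __init__(self, suffix="", prefix="tmp", dir=None):
        self.name = tempfile.mkdtemp(suffix, prefix, dir)
        # shutil.rmtree is captured here, so cleanup does not depend
        # on module globals still being intact at shutdown.
        self._finalizer = weakref.finalize(self, shutil.rmtree, self.name)

    def cleanup(self):
        # detach() returns a non-None value only if the finalizer was
        # still alive, so the directory is removed at most once.
        if self._finalizer.detach():
            shutil.rmtree(self.name)
```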
Richard Oudkerk added the comment:
Thanks for the doc cleanup -- I am rather busy right now.
Note that stuff does still get replaced by None at shutdown, and this can still
produce errors, even if they are much harder to trigger. If I run the
following program
import _weakref
import
Richard Oudkerk added the comment:
An alternative would be to have separate files NEWS-3.2, NEWS-3.3, NEWS-3.4
etc. If a fix is added to 3.2 and will be merged to 3.3 and 3.4 then you add
an entry to NEWS-3.2 and append some sort of tags to indicate merges:
- Issue #1234: Fix something
Richard Oudkerk added the comment:
I'll review the patch. (According to http://www.python.org/dev/peps/pep-0429/
feature freeze is expected in late November, so there is not too much of a rush.)
--
___
Python tracker rep...@bugs.python.org
http
Richard Oudkerk added the comment:
By context I did not really mean a context manager. I just meant an object
(possibly a singleton or module) which implements the same interface as
multiprocessing.
(However, it may be a good idea to also make it a context manager whose
__enter__() method
Richard Oudkerk added the comment:
There are lots of things that behave differently depending on the currently set
start method: Lock(), Semaphore(), Queue(), Value(), ... It is not just when
creating a Process or Pool that you need to know the start method.
Passing a context or start_method
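In the API that eventually shipped in Python 3.4, this became multiprocessing.get_context(): the context object implements the same interface as the multiprocessing module, with the start method fixed per context rather than set globally.

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # A context object mirrors the multiprocessing module's API,
    # but with a fixed start method.
    ctx = mp.get_context("spawn")
    print(ctx.get_start_method())  # "spawn"

    # Lock(), Queue(), Pool() etc. created through the context all
    # honour its start method, not a process-global setting.
    queue = ctx.Queue()
    with ctx.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```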
Richard Oudkerk added the comment:
With the current patch __repr__() will fail if the untransformed key is
unhashable:
d = collections.transformdict(id)
L = [1,2,3]
d[L] = None
d.keys()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Repos\cpython-dirty\lib
Richard Oudkerk added the comment:
With your patch, I think if you call get_start_method() without later calling
set_start_method() then the helper process(es) will never be started.
With the current code, popen.Popen() automatically starts the helper processes
if they have not already been
Richard Oudkerk added the comment:
In my patched version, the private popen.get_start_method gets a kwarg
set_if_needed=True. popen.Popen calls that as before, so its behavior
should not change, while the public get_start_method sets the kwarg to
False.
My mistake
Richard Oudkerk added the comment:
On 05/09/2013 9:28am, Charles-François Natali wrote:
As a side note, in the general case, there's more than a performance
optimization: the problem with unregister() + register() vs a real
modify (e.g. EPOLL_CTL_MOD) is that it's subject to a race condition
Richard Oudkerk added the comment:
LGTM.
But I would move import selectors in multiprocessing.connection to just
before the definition of wait() for Unix. It is not needed on Windows and
unnecessary imports slow down start up of new processes
Richard Oudkerk added the comment:
I remember wondering at one time why EPOLLNVAL did not exist, and realizing
that closed fds are just silently unregistered by epoll().
I guess the issue is that some of the selectors indicate a bad fd on
registration, and others do it when polled
Richard Oudkerk added the comment:
I've seen test_multiprocessing_forkserver giving warnings too, while
running the whole test suite, but can't reproduce them while running it
alone. The warnings seem quite similar though, so a single fix might
resolve the problem with all the tests
Richard Oudkerk added the comment:
Yes I will remove it. I was planning on doing so when PEP 446 was implemented.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18865
Richard Oudkerk added the comment:
The PPC64 buildbot is still failing intermittently.
--
resolution: invalid ->
status: closed -> open
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18865
Richard Oudkerk added the comment:
It looks like the main process keeps getting killed by SIGUSR1. Don't know why.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Richard Oudkerk added the comment:
If the _killer process takes too long to start, it won't send SIGUSR1
before the p process returns...
Thanks!
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Richard Oudkerk added the comment:
It should be fixed now so I will close.
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18786
Richard Oudkerk added the comment:
Try using Popen(..., bufsize=0).
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18868
Richard Oudkerk added the comment:
Hopefully this is fixed now.
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
type: -> behavior
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18762
Richard Oudkerk added the comment:
On 21/08/2013 3:46pm, Charles-François Natali wrote:
Another, probably cleaner way would be to finally add the atfork()
module (issue #16500), and register this reseed hook (which could then
be implemented in ssl.py).
Wouldn't that still suffer from
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> duplicate
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18793
Richard Oudkerk added the comment:
I could submit the part that makes it possible to customize the picklers
of multiprocessing.pool.Pool instance to the standard library if people
are interested.
2.7 and 3.3 are in bugfix mode now, so they will not change.
In 3.3 you can do
from
Richard Oudkerk added the comment:
Adding the line
features[0][0]
to the end of main() produces a segfault for me on Linux.
The FAQ for sqlite3 says that
Under Unix, you should not carry an open SQLite database across a
fork() system call into the child process. Problems
Richard Oudkerk added the comment:
Do you mean you want to use a pure python implementation on Unix?
Then you would have to deal with AF_UNIX (which is the default family for
socketpair() currently). A pure python implementation which deals with AF_UNIX
would have to temporarily create
Richard Oudkerk added the comment:
Good for me. This is a very nice addition!
Thanks.
I do see a couple of failed assertions on Windows which presumably happen in a
child process because they do not cause a failure:
Assertion failed: !collecting, file ..\Modules\gcmodule.c, line 1617
Changes by Richard Oudkerk shibt...@gmail.com:
Added file: http://bugs.python.org/file31282/4fc7c72b1c5d.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
I have added documentation now so I think it is ready to merge (except for a
change to Makefile).
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Changes by Richard Oudkerk shibt...@gmail.com:
Added file: http://bugs.python.org/file31214/c7aa0005f231.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
The forkserver process is now started using _posixsubprocess.fork_exec(). This
should fix the order dependent problem mentioned before.
Also the forkserver tests are now reenabled on OSX.
--
___
Python tracker
Richard Oudkerk added the comment:
Richard, can you say what failed on the OS X 10.4 (Tiger) buildbot?
There seems to be a problem which depends on the order in which you run
the test, and it happens on Linux also. For example if I do
./python -m test -v
Changes by Richard Oudkerk shibt...@gmail.com:
Added file: http://bugs.python.org/file31186/b3620777f54c.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
I have done quite a bit of refactoring and added some extra tests.
When I try using the forkserver start method on the OSX Tiger buildbot (the
only OSX one available) I get errors. I have disabled the tests for OSX, but
it seemed to be working before
Richard Oudkerk added the comment:
IMHO it just doesn't make sense passing 0.0 as a timeout value.
I have written lots of code that looks like
timeout = max(deadline - time.time(), 0)
some_function(..., timeout=timeout)
This makes perfect sense. Working code should not be broken
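The pattern generalises to a small helper (hypothetical name wait_until, built on threading.Condition): clamping the remaining time to zero turns an expired deadline into a non-blocking check rather than an error, which is exactly why rejecting a 0.0 timeout would break working code.

```python
import threading
import time

def wait_until(cond, predicate, deadline):
    """Hypothetical helper: wait on Condition `cond` until predicate()
    becomes true or `deadline` (an absolute time.time() value) passes."""
    with cond:
        while not predicate():
            # Clamp to 0: an already-expired deadline becomes a
            # non-blocking check, the pattern quoted above.
            timeout = max(deadline - time.time(), 0)
            if not cond.wait(timeout):
                # Timed out; report whatever the predicate says now.
                return predicate()
        return True
```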
Richard Oudkerk added the comment:
Firstly, list2cmdline() takes a list as its argument, not a string:
import subprocess
print subprocess.list2cmdline([r'\"1|2\"'])
\\\"1|2\\\"
But the problem with passing arguments to a batch file is that cmd.exe parses
arguments differently from how
Richard Oudkerk added the comment:
I think you're missing the point. The implementation is wrong as it
does not do what the documentation says, which is: "A double quotation
mark preceded by a backslash is interpreted as a literal double
quotation mark."
That docstring describes how the string
Richard Oudkerk added the comment:
On 01/08/2013 10:59am, Antoine Pitrou wrote:
If you replace the end of your script with the following:
for name, mod in sys.modules.items():
    if name != 'encodings':
        mod.__dict__['__blob__'] = Blob(name)
del name, mod, Blob
then at the end
Richard Oudkerk added the comment:
You might want to open a prompt and look at gc.get_referrers() for
encodings.mbcs.__dict__ (or another of those modules).
gc.get_referrers(sys.modules['encodings.mbcs'].__dict__)
[<module 'encodings.mbcs' from
'C:\\Repos\\cpython-dirty\\lib\\encodings
Richard Oudkerk added the comment:
I get different numbers from you. If I run ./python -v -c pass, most
modules in the wiping phase are C extension modules, which is expected.
Pretty much every pure Python module ends up garbage collected before
that.
The *module* gets gc'ed, sure
Richard Oudkerk added the comment:
Also, do note that purge/gc after wiping can still be a regular
gc pass unless the module has been wiped. The gc could be triggered
by another module being wiped.
For me, the modules which die naturally after purging begins are
# purge/gc encodings.aliases
Richard Oudkerk added the comment:
Yes, I agree the patch is ok.
It would be much simpler to keep track of the module dicts if
they were weakrefable. Alternatively, at shutdown a weakrefable object
with a reference to the module dict could be inserted into each module
dict. We
Richard Oudkerk added the comment:
I played a bit with the patch and -v -Xshowrefcount. The number of references
and blocks left at exit varies (and is higher than for unpatched python).
It appears that a few (1-3) module dicts are not being purged because they have
been orphaned. (i.e
Richard Oudkerk added the comment:
The spawn branch is in decent shape, although the documentation is not
up-to-date.
I would like to commit before the first alpha.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8713
Richard Oudkerk added the comment:
IMHO
1) It should check all predicates.
2) It should return a list of ready conditions.
3) It should *not* accept a list of conditions.
4) from_condition() should be removed.
Also notify() should try again if releasing a waiter raises RuntimeError
because
Richard Oudkerk added the comment:
Thanks for the patches!
--
resolution: -> fixed
stage: patch review -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17778
Changes by Richard Oudkerk shibt...@gmail.com:
--
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18344
___
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: needs patch -> committed/rejected
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18344
Richard Oudkerk added the comment:
Thanks for the report.
This should be fixed now in 2.7. (3.1 and 3.2 only get security fixes.)
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18455
___
Richard Oudkerk added the comment:
- would improve POSIX compatibility, it mimics what os.pipe()
does on those OS
I disagree.
On Windows fds can only be inherited if you start processes using the spawn*()
family of functions. If you start them using CreateProcess() then the
underlying
Richard Oudkerk added the comment:
Oops. I confused os.popen() with os.spawn*(). os.spawnv() IS still
implemented using spawnv() in Python 3.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4708
Richard Oudkerk added the comment:
Does that test always fail?
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18382
___
Richard Oudkerk added the comment:
Shouldn't the child process be terminating using os._exit()?
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6642
Richard Oudkerk added the comment:
I think I know what's going on here. For socket IO readline() uses a
readahead buffer size of 1.
Why is that? I think that makefile(mode='rb') and fdopen() both create
BufferedReader objects with the same buffer size.
It looks to me like
Richard Oudkerk added the comment:
Using
    while True:
        if not fileobj.read(8192):
            break

instead of

    for line in fileobj:
        pass
results in higher throughput, but a similar slowdown with makefile(). So this
is not a problem specific
Richard Oudkerk added the comment:
The only real reason for implementing SocketIO in pure Python is because read()
and write() do not work on Windows with sockets. (I think there are also a few
complications involving SSL sockets and the close() method.)
On Windows I have implemented a file
Richard Oudkerk added the comment:
Ah. I had not thought of socket timeouts.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18329
Richard Oudkerk added the comment:
I find that by adding the lines
fileobj.raw.readinto = ss.recv_into
fileobj.raw.read = ss.recv
the speed with makefile() is about 30% slower than with fdopen().
--
___
Python tracker rep
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> works for me
stage: test needed -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7292
Richard Oudkerk added the comment:
Patch attached.
--
keywords: +patch
Added file: http://bugs.python.org/file30748/buf-readall.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18344
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
title: multiprocessing.pool.Pool task/worker handlers are not fork safe -> Pool
methods can only be used by parent process.
type: behavior ->
versions: +Python 2.7
Changes by Richard Oudkerk shibt...@gmail.com:
--
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14206
___
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: -> committed/rejected
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14206
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
versions: +Python 3.3, Python 3.4
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17261
Richard Oudkerk added the comment:
Reopening because I think this is again a problem for Win64 and 3.x. The Win64
buildbots always seem to crash on test_marshal (and I do too).
It appears to be BugsTestCase.test_loads_2x_code() which crashes, which is
virtually the same
Richard Oudkerk added the comment:
Closing because this is caused by #17206 and is already discussed there.
--
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2286
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17097
Richard Oudkerk added the comment:
When modules are garbage collected the associated globals dict is purged -- see
#18214. This means that all values (except __builtins__) are replaced by None.
To work around this run_path() apparently returns a *copy* of the globals dict
which was created
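The copying behaviour is easy to observe; the throwaway script below is just for illustration. Because runpy.run_path() returns a copy of the executed module's globals, the values survive even after the real module dict has been purged.

```python
import os
import runpy
import tempfile

# Write a throwaway script whose globals we can inspect afterwards.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 42\ndef double(n):\n    return 2 * n\n")
    path = f.name

# run_path executes the file in a fresh namespace and returns a *copy*
# of its globals, so the values are not replaced by None when the
# temporary module object is garbage collected and purged.
ns = runpy.run_path(path)
print(ns["x"])           # 42
print(ns["double"](21))  # 42

os.unlink(path)
```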
Richard Oudkerk added the comment:
I think this is a duplicate of #17899.
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18332
Richard Oudkerk added the comment:
I think in Python 3 makefile() returns a TextIOWrapper object by default. To
force the use of binary you need to specify the mode:
fileobj = ss.makefile(mode='rb')
--
nosy: +sbt
___
Python tracker rep
Richard Oudkerk added the comment:
I did this to use the same abstraction that was used extensively for
other purposes, instead of recreating the same abstraction with a deque
as its basis.
So you wanted a FIFO queue and preferred the API of Queue to that of deque?
Well it will be *much
Richard Oudkerk added the comment:
unfinished_tasks is simply used as a counter. It is only accessed while
holding self._cond. If you get this error then I think the error text is
correct -- your program calls task_done() too many times.
The proposed patch silences the sanity check by making
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15818
Richard Oudkerk added the comment:
I was thinking about the line
self.__dict__.update(state)
overwriting new data with stale data.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17621
Richard Oudkerk added the comment:
1. but should not cause any pratical difficulties -- you have a typo in
'pratical' there.
2. What exactly do you mean by managed queues in the new addition?
Woops. Fixed now see 860fc6a2bd21, 347647a1f798. A managed queue is
one created like
Changes by Richard Oudkerk shibt...@gmail.com:
--
assignee: -> docs@python
components: +Documentation -IO, Interpreter Core
nosy: +docs@python
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
type: behavior ->
___
Python
Richard Oudkerk added the comment:
This is really a documentation issue. The doc fix for #18277 covers this.
--
components: +Library (Lib) -Extension Modules
resolution: -> wont fix
stage: -> committed/rejected
status: open -> closed
type: -> behavior
Richard Oudkerk added the comment:
Apologies for being dense, but how would you actually use such a loader?
Would you need to install something in sys.meta_path/sys.path_hooks? Would it
make all imports lazy or only imports of specified modules?
--
nosy: +sbt
Richard Oudkerk added the comment:
Shouldn't the import lock be held to make it threadsafe?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17621
Richard Oudkerk added the comment:
This is a very similar issue to #17985.
While it may seem counter-intuitive, I don't see how it makes any difference.
Another thread/process might remove the item before you can get it.
I find it very difficult to imagine a real program where you can safely
Richard Oudkerk added the comment:
Why would you use a multi-process queue to pass messages from one part of the
program to another part, in the same process and thread? Why not just use a
deque?
Is this something you actually did, or are you just trying to come up with a
plausible example
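For purely in-process, single-threaded message passing, collections.deque gives the same FIFO behaviour as a queue without any locks, pipes, or feeder threads:

```python
from collections import deque

# A plain deque is a FIFO when you append on one end and pop from the
# other; within a single thread no locking is needed at all.
messages = deque()
messages.append("task-1")
messages.append("task-2")

print(messages.popleft())  # task-1
print(messages.popleft())  # task-2
```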
Changes by Richard Oudkerk shibt...@gmail.com:
--
keywords: +gsoc -patch
resolution: -> rejected
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16507
Richard Oudkerk added the comment:
We don't do non-security updates on Python 2.6 anymore.
As a workaround you might be able to do something like
import sys, multiprocessing
sys.frozen = True  # or multiprocessing.forking.WINEXE = True
...
if __name__ == '__main__
Richard Oudkerk added the comment:
I just tried freezing the program
from multiprocessing import freeze_support, Manager

if __name__ == '__main__':
    freeze_support()
    m = Manager()
    l = m.list([1, 2, 3])
    l.append(4)
    print(l)
    print(repr(l))
using cx_Freeze
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17018
Richard Oudkerk added the comment:
See also #9573 and #15914.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18122
___
Changes by Richard Oudkerk shibt...@gmail.com:
--
resolution: -> works for me
stage: -> committed/rejected
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15198
Richard Oudkerk added the comment:
I think if you use timeit then the code is wrapped inside a function before it
is compiled. This means that your code can mostly use faster local lookups
rather than global lookups.
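The effect can be demonstrated directly: timeit compiles the statement into a generated function body, so names bound there are fast locals, while the same statement run through exec() at "module level" pays a globals-dict lookup for every access. A rough illustration (timings vary by machine):

```python
import time
import timeit

stmt = "total = 0\nfor i in range(10000):\n    total += i"

# timeit wraps stmt inside a generated function, so 'total' and 'i'
# become fast local variables there.
func_time = timeit.timeit(stmt, number=200)

# The same statement executed against a plain dict of globals: every
# read and write of 'total' and 'i' is a dictionary lookup.
ns = {}
start = time.perf_counter()
for _ in range(200):
    exec(stmt, ns)
module_time = time.perf_counter() - start

print(func_time, module_time)  # module_time is typically the larger
```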
--
nosy: +sbt
___
Python
Changes by Richard Oudkerk shibt...@gmail.com:
--
stage: committed/rejected ->
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18252
___
Changes by Richard Oudkerk shibt...@gmail.com:
--
stage: -> committed/rejected
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18252
___
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +pitrou
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18214
___
Richard Oudkerk added the comment:
On 15/06/2013 7:11pm, Antoine Pitrou wrote:
Usually garbage collection will end up clearing the module's dict anyway.
This is not true, since global objects might have a __del__ and then hold
the whole module dict alive through a reference cycle. Happily
Changes by Richard Oudkerk shibt...@gmail.com:
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9122
___
New submission from Richard Oudkerk:
Currently when a module is garbage collected its dict is purged by replacing
all values except __builtins__ by None. This helps clear things at shutdown.
But this can cause problems if it occurs *before* shutdown: if we use a
function defined in a module
Richard Oudkerk added the comment:
Do you want something like
f.done() and not f.cancelled() and f.exception() is None
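Wrapped as a tiny helper (hypothetical name succeeded), with the checks ordered so that exception() is never called on a cancelled future, which would raise CancelledError:

```python
from concurrent.futures import Future

def succeeded(f):
    """Hypothetical helper: True only if the future completed
    without being cancelled and without raising."""
    # Order matters: exception() raises CancelledError on a cancelled
    # future, so the cancelled() check must short-circuit first.
    return f.done() and not f.cancelled() and f.exception() is None

ok = Future()
ok.set_result(42)
print(succeeded(ok))      # True

failed = Future()
failed.set_exception(ValueError("boom"))
print(succeeded(failed))  # False
```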
--
nosy: +sbt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18212