Re: [Python-Dev] Add "e" (close and exec) mode to open()
Benjamin wrote:
> I'm not sure it's worth cluttering the open() interface with such a
> non-portable option.

Zbyszek wrote:
> If the best-effort fallback is included, it is quite portable. Definitely
> all modern and semi-modern systems support either the atomic or the
> nonatomic methods.

Gregory wrote:
> I'm not excited about raising an exception when it isn't supported;
> it should attempt to get the same behavior via multiple API calls instead

Yes, I'm proposing the best-effort approach: use O_CLOEXEC/O_NOINHERIT if available, or fcntl()+FD_CLOEXEC otherwise.

My patch requires the fcntl() + FD_CLOEXEC flag or the open() + O_NOINHERIT flag. I failed to find an OS where none of these flags/functions is present.

Usually, when I want to test portability, I test Linux, Windows and Mac OS X. Then I test FreeBSD, OpenBSD and OpenIndiana (Solaris). I don't test other OSes because I don't know them and they are not installed on my PC :-) (I have many VMs.) Should I try other platforms?

Benjamin wrote:
> People requiring such control should use the low-level os.open interface.

os.open() is not very convenient because you have to choose the flag and the function depending on the OS. If the open() flag is rejected, we should at least provide a helper for os.open() + fcntl().

> The myriad cloexec APIs between different platforms suggests to me that using this
> features requires understanding its various quirks on different platforms.

Victor

2013/1/8 Benjamin Peterson :
> 2013/1/7 Gregory P. Smith :
>>
>> On Mon, Jan 7, 2013 at 4:03 PM, Benjamin Peterson
>> wrote:
>>>
>>> 2013/1/7 Victor Stinner :
>>> > Hi,
>>> >
>>> > I would like add a new flag to open() mode to close the file on exec:
>>> > "e". This feature exists using different APIs depending on the OS and
>>> > OS version: O_CLOEXEC, FD_CLOEXEC and O_NOINHERIT. Do you consider
>>> > that such flag would be interesting?
>>>
>>> I'm not sure it's worth cluttering the open() interface with such a
>>> non-portable option. People requiring such control should use the
>>> low-level os.open interface.
>>
>> The ability to supply such flags really belongs on _all_ high or low level
>> file descriptor creating APIs so that things like subprocess_cloexec_pipe()
>> would not be necessary:
>> http://hg.python.org/cpython/file/0afa7b323abb/Modules/_posixsubprocess.c#l729
>
> I think the open() interface should have consistent and
> non-conditional support features to the maximum extent possible. The
> recent addition of "x" is a good example I think. The myriad cloexec
> APIs between different platforms suggests to me that using this
> features requires understanding its various quirks on different
> platforms.
>
> --
> Regards,
> Benjamin
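A minimal sketch of what the best-effort fallback amounts to with today's os and fcntl APIs (the proposed "e" mode does not exist yet, and open_cloexec() is just an illustrative name):

import os
try:
    import fcntl
except ImportError:          # e.g. Windows, which has no fcntl module
    fcntl = None

def open_cloexec(path, flags=os.O_RDONLY):
    """Open *path* with close-on-exec set, as atomically as the OS allows."""
    if hasattr(os, "O_CLOEXEC"):        # atomic variant (Linux 2.6.23+, recent BSDs)
        return os.open(path, flags | os.O_CLOEXEC)
    if hasattr(os, "O_NOINHERIT"):      # Windows spelling of the same idea
        return os.open(path, flags | os.O_NOINHERIT)
    # Non-atomic fallback: open first, then set FD_CLOEXEC with fcntl()
    # (assumes fcntl is available, which was true on every OS Victor found).
    fd = os.open(path, flags)
    fcntl.fcntl(fd, fcntl.F_SETFD,
                fcntl.fcntl(fd, fcntl.F_GETFD) | fcntl.FD_CLOEXEC)
    return fd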
Re: [Python-Dev] Add "e" (close and exec) mode to open()
Oops, I sent my email too early by mistake (it was not finished).

> The myriad cloexec
> APIs between different platforms suggests to me that using this
> features requires understanding its various quirks on different
> platforms.

Sorry, I don't understand. What do you mean by "various quirks"? The "close-on-exec" feature is implemented differently depending on the platform, but it always has the same meaning: the file descriptor is closed automatically when a new program is executed. Running a subprocess is also implemented differently depending on the OS; there are two main approaches: fork()+exec() on UNIX, and a different mechanism on Windows (I don't know how it works on Windows).

Extract of the fcntl() manual page on Linux: "If the FD_CLOEXEC bit is 0, the file descriptor will remain open across an execve(2), otherwise it will be closed."

I would like to expose the OS feature using a portable API that hides the "myriad cloexec APIs".

Victor
[Python-Dev] make test
When I run the tests, I got this:
... ...
testShareLocal (test.test_socket.TestSocketSharing) ... skipped 'Windows
specific'
testTypes (test.test_socket.TestSocketSharing) ... skipped 'Windows specific'
======================================================================
ERROR: test_idna (test.test_socket.GeneralModuleTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/data/users/xwl/Python-3.3.0/Lib/test/test_socket.py", line 1183, in
test_idna
socket.gethostbyname('испытание.python.org')
socket.gaierror: [Errno -3] Temporary failure in name resolution
----------------------------------------------------------------------
Ran 483 tests in 24.816s
FAILED (errors=1, skipped=29)
test test_socket failed
make: *** [test] Error 1
Then I run the failing test manually, as the README guided:
[xwl@ordosz8003 ~/Python-3.3.0]$ ./python -m test -v test_idna
== CPython 3.3.0 (default, Jan 8 2013, 11:07:49) [GCC 4.1.2 20080704 (Red Hat
4.1.2-46)]
== Linux-2.6.18-164.el5-x86_64-with-redhat-5.4-Tikanga little-endian
== /data/users/xwl/Python-3.3.0/build/test_python_17651
Testing with flags: sys.flags(debug=0, inspect=0, interactive=0, optimize=0,
dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0,
verbose=0, bytes_warning=0, quiet=0, hash_randomization=1)
[1/1] test_idna
test test_idna crashed -- Traceback (most recent call last):
File "/data/users/xwl/Python-3.3.0/Lib/test/regrtest.py", line 1213, in
runtest_inner
the_package = __import__(abstest, globals(), locals(), [])
ImportError: No module named 'test.test_idna'
1 test failed:
test_idna
[xwl@ordosz8003 ~/Python-3.3.0]$ pwd
/data/users/xwl/Python-3.3.0
[xwl@ordosz8003 ~/Python-3.3.0]$
Please help me to install this great software on my machine. Thank you!
David.Xu
Re: [Python-Dev] Point of building without threads?
On Mon, 2012-05-07 at 21:49 +0200, Antoine Pitrou wrote:
> I guess a long time ago, threading support in operating systems wasn't
> very widespread, but these days all our supported platforms have it.
> Is it still useful for production purposes to configure
> --without-threads? Do people use this option for something else than
> curiosity of mind?

I hope that the intent behind asking this question was more of being curious, rather than considering dropping --without-threads: unfortunately, multithreading was, still is and probably will remain troublesome on many supercomputing platforms.

Often, once a new supercomputer is launched, as a developer you get a half-baked C/C++ compiler with threading support broken to the point where it's much easier not to use it at all [*] rather than trying to work around the compiler quirks. Of course, the situation improves over the lifetime of each particular computer, but usually, by the time everything is halfway working, the computer itself has become obsolete, so there is not much point in using it anymore.

Moreover, these days there is a clear trend towards OpenMP, so it has become even harder to pressure the manufacturers to fix threads, because they have 101 arguments why you should port your code to OpenMP instead.

HTH.

[*]: Other usual candidates for being broken beyond repair are the linker, especially when it comes to shared libraries, and support for advanced C++ language features, such as templates...

--
Sincerely yours,
Yury V. Zaytsev
Re: [Python-Dev] Point of building without threads?
On Tue, 08 Jan 2013 10:28:25 +0100, "Yury V. Zaytsev" wrote:
> On Mon, 2012-05-07 at 21:49 +0200, Antoine Pitrou wrote:
> > I guess a long time ago, threading support in operating systems
> > wasn't very widespread, but these days all our supported platforms
> > have it. Is it still useful for production purposes to configure
> > --without-threads? Do people use this option for something else than
> > curiosity of mind?
>
> I hope that the intent behind asking this question was more of being
> curious, rather than considering dropping --without-threads:
> unfortunately, multithreading was, still is and probably will remain
> troublesome on many supercomputing platforms.

I was actually asking this question in the hope that we could perhaps simplify our range of build options (and the corresponding C #define's), but you made a convincing point that we should keep the --without-threads option :-)

Thank you

Antoine.
Re: [Python-Dev] Add "e" (close and exec) mode to open()
2013/1/8 Victor Stinner :
> I don't know platform without this flag.

According to the following email, fcntl.FD_CLOEXEC was not available in Python 2.2 on Red Hat 7.3 (in 2003): http://communities.mentor.com/community/cs/archives/qmtest/msg00501.html

I don't know if the constant was not defined in fcntl.h, or if the constant was just not exposed in Python 2.2. Does anyone have such an old version of Red Hat to test whether fcntl.FD_CLOEXEC is available (on a recent version of Python)?

--

In the Python issue #12107, I can read: "I realize this bugreport cannot fix 35 years of a bad design decision in linux." http://bugs.python.org/issue12107

Well... Ruby made a brave choice :-) Ruby (2.0?) sets the close-on-exec flag on *all* file descriptors (except 0, 1, 2) *by default*: http://bugs.ruby-lang.org/issues/5041

This change solves the problem of having to close all file descriptors after a fork to run a new program (see closed Python issues #11284 and #8052), as long as you are not using C extensions that create file descriptors. Ruby applications relying on passing FDs to child processes have to explicitly disable the close-on-exec flag: it was done in Unicorn, for example.

--

See also the issue discussing the usage of O_CLOEXEC and SOCK_CLOEXEC in the libapr: https://issues.apache.org/bugzilla/show_bug.cgi?id=46425

Victor
Re: [Python-Dev] make test
Hello. We are sorry, but we cannot help you. This mailing list is for work on developing Python (adding new features to Python itself and fixing bugs); if you're having problems learning, understanding or using Python, please find another forum. Probably the python-list/comp.lang.python mailing list/news group is the best place; there are Python developers who participate in it; you may get a faster, and probably more complete, answer there. See http://www.python.org/community/ for other lists/news groups/fora. Thank you for understanding.

On Tue, Jan 08, 2013 at 11:21:34AM +0800, Xu Wanglin wrote:
> ImportError: No module named 'test.test_idna'

Really, there is no module test.test_idna. Look into Lib/test/ yourself.

Oleg.
--
Oleg Broytman  http://phdru.name/  [email protected]
Programmers don't die, they just GOSUB without RETURN.
Re: [Python-Dev] make test
On Tue, 08 Jan 2013 16:48:41 +0400, Oleg Broytman wrote: > Hello. > >We are sorry but we cannot help you. This mailing list is to work on > developing Python (adding new features to Python itself and fixing bugs); > if you're having problems learning, understanding or using Python, please > find another forum. Probably python-list/comp.lang.python mailing list/news > group is the best place; there are Python developers who participate in it; > you may get a faster, and probably more complete, answer there. See > http://www.python.org/community/ for other lists/news groups/fora. Thank > you for understanding. > > On Tue, Jan 08, 2013 at 11:21:34AM +0800, Xu Wanglin wrote: > > ImportError: No module named 'test.test_idna' > >Really, there is no module test.test_idna. Look into Lib/test/ yourself. Xu's confusion arises from the fact that the test is named test_idna, but that is actually the name of the test *within* the test file that is named to the right of that name in the unittest output. There is a bug in the issue tracker for having unittest print out the complete path to the individual test in a cut-and-pasteable fashion. Perhaps someone will be motivated to work on a fix :) --David PS: as long as I'm writing this, Xu, that error you got looks like a transient error possibly caused by your local name server...if that is the only failure you got, your Python is working perfectly fine and you can go ahead and install/use it. But as Oleg said, you will get much better and faster help for this kind of question from python-list. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
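For reference, the failing name above (test_idna) is a test method of test.test_socket.GeneralModuleTests inside Lib/test/test_socket.py, so it can be re-run on its own with plain unittest (illustrative command, run from the build directory):

./python -m unittest -v test.test_socket.GeneralModuleTests.test_idna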
Re: [Python-Dev] Point of building without threads?
[ Weird, I can't see your original e-mail Antoine; hijacking Yury's reply instead. ] On Tue, Jan 08, 2013 at 01:28:25AM -0800, Yury V. Zaytsev wrote: > On Mon, 2012-05-07 at 21:49 +0200, Antoine Pitrou wrote: > > > > I guess a long time ago, threading support in operating systems wasn't > > very widespread, but these days all our supported platforms have it. > > Is it still useful for production purposes to configure > > --without-threads? Do people use this option for something else than > > curiosity of mind? All our NetBSD, OpenBSD and DragonFlyBSD slaves use --without-thread. Without it, they all wedge in some way or another. (That should be fixed*/investigated, but, until then, yeah, --without-threads allows for a slightly more useful (but still broken) test suite run on these platforms.) [*]: I suspect the problem with at least OpenBSD is that their userland pthreads implementation just doesn't cut it; there is no hope for the really technical tests that poke and prod at things like correct signal handling and whatnot. Trent. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Point of building without threads?
On Tue, 8 Jan 2013 09:02:00 -0500, Trent Nelson wrote:
> [ Weird, I can't see your original e-mail Antoine; hijacking
> Yury's reply instead. ]

The original e-mail is quite old (it was sent in May) :-)

Regards

Antoine.
Re: [Python-Dev] Point of building without threads?
Trent Nelson wrote: > All our NetBSD, OpenBSD and DragonFlyBSD slaves use --without-thread. > Without it, they all wedge in some way or another. (That should be > fixed*/investigated, but, until then, yeah, --without-threads allows > for a slightly more useful (but still broken) test suite run on > these platforms.) > > [*]: I suspect the problem with at least OpenBSD is that their > userland pthreads implementation just doesn't cut it; there > is no hope for the really technical tests that poke and > prod at things like correct signal handling and whatnot. For OpenBSD the situation should be fixed in the latest release: http://www.openbsd.org/52.html#new I haven't tried it myself though. Stefan Krah ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] More compact dictionaries with faster iteration
On Mon, Dec 10, 2012 at 3:44 AM, Raymond Hettinger
wrote:
> The current memory layout for dictionaries is
> unnecessarily inefficient. It has a sparse table of
> 24-byte entries containing the hash value, key pointer,
> and value pointer.
>
> Instead, the 24-byte entries should be stored in a
> dense table referenced by a sparse table of indices.
>
> For example, the dictionary:
>
> d = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'}
>
> is currently stored as:
>
> entries = [['--', '--', '--'],
>            [-8522787127447073495, 'barry', 'green'],
>            ['--', '--', '--'],
>            ['--', '--', '--'],
>            ['--', '--', '--'],
>            [-9092791511155847987, 'timmy', 'red'],
>            ['--', '--', '--'],
>            [-6480567542315338377, 'guido', 'blue']]
>
> Instead, the data should be organized as follows:
>
> indices = [None, 1, None, None, None, 0, None, 2]
> entries = [[-9092791511155847987, 'timmy', 'red'],
>            [-8522787127447073495, 'barry', 'green'],
>            [-6480567542315338377, 'guido', 'blue']]
>
> Only the data layout needs to change. The hash table
> algorithms would stay the same. All of the current
> optimizations would be kept, including key-sharing
> dicts and custom lookup functions for string-only
> dicts. There is no change to the hash functions, the
> table search order, or collision statistics.
>
> The memory savings are significant (from 30% to 95%
> compression depending on how full the table is).
> Small dicts (size 0, 1, or 2) get the most benefit.
>
> For a sparse table of size t with n entries, the sizes are:
>
> curr_size = 24 * t
> new_size = 24 * n + sizeof(index) * t
>
> In the above timmy/barry/guido example, the current
> size is 192 bytes (eight 24-byte entries) and the new
> size is 80 bytes (three 24-byte entries plus eight
> 1-byte indices). That gives 58% compression.
>
> Note, the sizeof(index) can be as small as a single
> byte for small dicts, two bytes for bigger dicts and
> up to sizeof(Py_ssize_t) for huge dict.
>
> In addition to space savings, the new memory layout
> makes iteration faster. Currently, keys(), values(), and
> items() loop over the sparse table, skipping over free
> slots in the hash table. Now, keys/values/items can
> loop directly over the dense table, using fewer memory
> accesses.
>
> Another benefit is that resizing is faster and
> touches fewer pieces of memory. Currently, every
> hash/key/value entry is moved or copied during a
> resize. In the new layout, only the indices are
> updated. For the most part, the hash/key/value entries
> never move (except for an occasional swap to fill a
> hole left by a deletion).
>
> With the reduced memory footprint, we can also expect
> better cache utilization.
>
> For those wanting to experiment with the design,
> there is a pure Python proof-of-concept here:
>
>http://code.activestate.com/recipes/578375
>
> YMMV: Keep in mind that the above size statistics assume a
> build with 64-bit Py_ssize_t and 64-bit pointers. The
> space savings percentages are a bit different on other
> builds. Also, note that in many applications, the size
> of the data dominates the size of the container (i.e.
> the weight of a bucket of water is mostly the water,
> not the bucket).
>
>
> Raymond
One question Raymond.
The compression ratios stay true provided you don't overallocate entry
list. If you do overallocate you don't really gain that much (it all
depends vastly on details), or even lose in some cases. What do you
think should the strategy be?
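To make the proposed layout concrete, here is a deliberately simplified toy sketch in pure Python. This is not the recipe linked above: it fixes the table size, ignores resizing, deletion and the small-int index arrays, and uses plain linear probing instead of the real table search order.

class CompactDict:
    """Toy mapping using a sparse index table plus a dense entry list."""

    def __init__(self, size=8):            # size must be a power of two here
        self.indices = [None] * size       # sparse table of small integers
        self.entries = []                  # dense rows of [hash, key, value]

    def _slot(self, key):
        mask = len(self.indices) - 1
        i = hash(key) & mask
        while True:
            idx = self.indices[i]
            if idx is None or self.entries[idx][1] == key:
                return i                   # free slot, or the slot holding this key
            i = (i + 1) & mask             # linear probing, for brevity only

    def __setitem__(self, key, value):
        i = self._slot(key)
        if self.indices[i] is None:        # new key: append to the dense table
            self.indices[i] = len(self.entries)
            self.entries.append([hash(key), key, value])
        else:                              # existing key: overwrite in place
            self.entries[self.indices[i]][2] = value

    def __getitem__(self, key):
        idx = self.indices[self._slot(key)]
        if idx is None:
            raise KeyError(key)
        return self.entries[idx][2]

    def __iter__(self):                    # iteration walks only the dense table
        return (key for _, key, _ in self.entries)

d = CompactDict()
d['timmy'] = 'red'; d['barry'] = 'green'; d['guido'] = 'blue'
# d.indices is the sparse table; d.entries holds the three dense 24-byte-style rows.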
Re: [Python-Dev] Point of building without threads?
On Tue, Jan 08, 2013 at 06:15:45AM -0800, Stefan Krah wrote: > Trent Nelson wrote: > > All our NetBSD, OpenBSD and DragonFlyBSD slaves use --without-thread. > > Without it, they all wedge in some way or another. (That should be > > fixed*/investigated, but, until then, yeah, --without-threads allows > > for a slightly more useful (but still broken) test suite run on > > these platforms.) > > > > [*]: I suspect the problem with at least OpenBSD is that their > > userland pthreads implementation just doesn't cut it; there > > is no hope for the really technical tests that poke and > > prod at things like correct signal handling and whatnot. > > For OpenBSD the situation should be fixed in the latest release: > >http://www.openbsd.org/52.html#new > > I haven't tried it myself though. Interesting! I'll look into upgrading the existing Snakebite OpenBSD slaves (they're both at 5.1). Trent. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] is this the fault of import_fresh_module or pickle?
Hello,
I'm still having some struggles with the interaction between pickle and
import overriding with import_fresh_module.
_elementtree.TreeBuilder can't be pickled at this point. When I do this:
from test.support import import_fresh_module
import pickle
P = import_fresh_module('xml.etree.ElementTree', blocked=['_elementtree'])
tb = P.TreeBuilder()
print(pickle.dumps(tb))
Everything works fine. However, if I add import_fresh_module for the C
module:
from test.support import import_fresh_module
import pickle
C = import_fresh_module('xml.etree.ElementTree', fresh=['_elementtree'])
P = import_fresh_module('xml.etree.ElementTree', blocked=['_elementtree'])
tb = P.TreeBuilder()
print(pickle.dumps(tb))
I get an error from pickle.dumps:
Traceback (most recent call last):
  File "mix_c_py_etree.py", line 10, in <module>
    print(pickle.dumps(tb))
_pickle.PicklingError: Can't pickle <class 'xml.etree.ElementTree.TreeBuilder'>:
it's not the same object as xml.etree.ElementTree.TreeBuilder
Note that I didn't change the executed code sequence. All I did was import
the C version of ET before the Python version. I was under the impression
this had to keep working since P = import_fresh_module uses
blocked=['_elementtree'] properly.
This interaction only seems to happen with pickle. What's going on here?
Can we somehow improve import_fresh_module to avoid this? Perhaps actually
deleting previously imported modules with some special keyword flag?
Thanks in advance,
Eli
Re: [Python-Dev] is this the fault of import_fresh_module or pickle?
Eli Bendersky wrote:
> Everything works fine. However, if I add import_fresh_module for the C module:
>
> from test.support import import_fresh_module
> import pickle
> C = import_fresh_module('xml.etree.ElementTree', fresh=['_elementtree'])
> P = import_fresh_module('xml.etree.ElementTree', blocked=['_elementtree'])
sys.modules still contains the C version at this point, so:
sys.modules['xml.etree.ElementTree'] = P
> tb = P.TreeBuilder()
> print(pickle.dumps(tb))
> This interaction only seems to happen with pickle. What's going on here? Can
> we
> somehow improve import_fresh_module to avoid this? Perhaps actually deleting
> previously imported modules with some special keyword flag?
pickle always looks up sys.modules['xml.etree.ElementTree']. Perhaps we
could improve something, but this requirement is rather special; personally
I'm okay with switching sys.modules explicitly in the tests, because that
reminds me of what pickle does.
Stefan Krah
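One way to do the explicit switch Stefan describes, sketched here reusing the P, tb and pickle names from Eli's example:

import sys

saved = sys.modules['xml.etree.ElementTree']
sys.modules['xml.etree.ElementTree'] = P   # make pickle resolve names via the Python module
try:
    data = pickle.dumps(tb)
finally:
    sys.modules['xml.etree.ElementTree'] = saved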
Re: [Python-Dev] Add "e" (close and exec) mode to open()
2013/1/8 Victor Stinner : > Oops, I sent my email too early by mistake (it was not finished). > >> The myriad cloexec >> APIs between different platforms suggests to me that using this >> features requires understanding its various quirks on different >> platforms. > > Sorry, I don't understand. What do you mean by "various quirks". The > "close-on-exec" feature is implemented differently depending on the > platform, but it always have the same meaning. It closes the file when > a subprocess is created. Running a subprocess is also implemented > differently depending on the OS, there are two mains approaches: > fork()+exec() on UNIX, on Windows (I don't know how > it works on Windows). > > Extract of fcntl() manual page on Linux: "If the FD_CLOEXEC bit is > 0, the file descriptor will remain open across an execve(2), otherwise > it will be closed." > > I would like to expose the OS feature using a portable API to hide the > "The myriad cloexec APIs". Okay, fair enough, but I really would like it not to ever raise NotImplementedError. Then you would end up having different codepaths for various oses anyway. -- Regards, Benjamin ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Add "e" (close and exec) mode to open()
2013/1/8 Benjamin Peterson : > Okay, fair enough, but I really would like it not to ever raise > NotImplementedError. Then you would end up having different codepaths > for various oses anyway. So what do you suggest? Victor ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Add "e" (close and exec) mode to open()
2013/1/8 Victor Stinner : > 2013/1/8 Benjamin Peterson : >> Okay, fair enough, but I really would like it not to ever raise >> NotImplementedError. Then you would end up having different codepaths >> for various oses anyway. > > So what do you suggest? If the only systems it doesn't work on is ancient RedHat, that's probably okay. -- Regards, Benjamin ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] is this the fault of import_fresh_module or pickle?
On Tue, Jan 8, 2013 at 8:05 AM, Stefan Krah wrote:
> Eli Bendersky wrote:
> > Everything works fine. However, if I add import_fresh_module for the C
> module:
> >
> > from test.support import import_fresh_module
> > import pickle
> > C = import_fresh_module('xml.etree.ElementTree', fresh=['_elementtree'])
> > P = import_fresh_module('xml.etree.ElementTree',
> blocked=['_elementtree'])
>
> sys.modules still contains the C version at this point, so:
>
> sys.modules['xml.etree.ElementTree'] = P
>
>
> > tb = P.TreeBuilder()
> > print(pickle.dumps(tb))
>
>
>
> > This interaction only seems to happen with pickle. What's going on here?
> Can we
> > somehow improve import_fresh_module to avoid this? Perhaps actually
> deleting
> > previously imported modules with some special keyword flag?
>
> pickle always looks up sys.modules['xml.etree.ElementTree']. Perhaps we
> could improve something, but this requirement is rather special; personally
> I'm okay with switching sys.modules explicitly in the tests, because that
> reminds me of what pickle does.
>
Wouldn’t it be better if import_fresh_module or some alternative
function could do that for you? I mean, wipe out the import cache for
certain modules I don't want to be found?
Eli
Re: [Python-Dev] is this the fault of import_fresh_module or pickle?
On Tue, 08 Jan 2013 17:05:50 +0100, Stefan Krah wrote:
> Eli Bendersky wrote:
> > Everything works fine. However, if I add import_fresh_module for the C
> > module:
> >
> > from test.support import import_fresh_module
> > import pickle
> > C = import_fresh_module('xml.etree.ElementTree', fresh=['_elementtree'])
> > P = import_fresh_module('xml.etree.ElementTree', blocked=['_elementtree'])
>
> sys.modules still contains the C version at this point, so:
>
> sys.modules['xml.etree.ElementTree'] = P
>
>
> > tb = P.TreeBuilder()
> > print(pickle.dumps(tb))
>
> > This interaction only seems to happen with pickle. What's going on here?
> > Can we
> > somehow improve import_fresh_module to avoid this? Perhaps actually deleting
> > previously imported modules with some special keyword flag?
>
> pickle always looks up sys.modules['xml.etree.ElementTree']. Perhaps we
> could improve something, but this requirement is rather special; personally
> I'm okay with switching sys.modules explicitly in the tests, because that
> reminds me of what pickle does.
Handling this case is why having a context-manager form of
import_fresh_module was suggested earlier in this meta-thread. At
least, I think that would solve it, I haven't tried it :)
--David
Re: [Python-Dev] Add "e" (close and exec) mode to open()
2013/1/8 Benjamin Peterson :
> 2013/1/8 Victor Stinner :
>> 2013/1/8 Benjamin Peterson :
>>> Okay, fair enough, but I really would like it not to ever raise
>>> NotImplementedError. Then you would end up having different codepaths
>>> for various oses anyway.
>>
>> So what do you suggest?
>
> If the only systems it doesn't work on is ancient RedHat, that's probably
> okay.

What do you mean? NotImplementedError is acceptable if only rare and/or old OSes raise such an issue?

> According to the following email, fcntl.FD_CLOEXEC was not available
> in Python 2.2 on Red Hat 7.3 (in 2003):
> http://communities.mentor.com/community/cs/archives/qmtest/msg00501.html

This issue looks like http://bugs.python.org/issue496171

So it looks like the problem was just that the constant was not exposed properly whereas the OS supports the feature. I guess that FD_CLOEXEC always worked on Red Hat.

Victor
Re: [Python-Dev] Add "e" (close and exec) mode to open()
2013/1/8 Victor Stinner : > 2013/1/8 Benjamin Peterson : >> 2013/1/8 Victor Stinner : >>> 2013/1/8 Benjamin Peterson : Okay, fair enough, but I really would like it not to ever raise NotImplementedError. Then you would end up having different codepaths for various oses anyway. >>> >>> So what do you suggest? >> >> If the only systems it doesn't work on is ancient RedHat, that's probably >> okay. > > What do you mean? NotIlmplementedError is acceptable if only rare > and/or old OS raise such issue? We have to draw the line somewhere. People writing Python to run on such systems will already have to be aware of such issues. -- Regards, Benjamin ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] is this the fault of import_fresh_module or pickle?
Eli Bendersky wrote:
> On Tue, Jan 8, 2013 at 8:05 AM, Stefan Krah wrote:
> > pickle always looks up sys.modules['xml.etree.ElementTree']. Perhaps we
> > could improve something, but this requirement is rather special; personally
> > I'm okay with switching sys.modules explicitly in the tests, because that
> > reminds me of what pickle does.
>
> Wouldn't it be better if import_fresh_module or some alternative function
> could do that for you? I mean, wipe out the import cache for certain modules I
> don't want to be found?

For a single test, perhaps. ContextAPItests.test_pickle() from test_decimal would look quite strange if import_fresh_module was used repeatedly.

Stefan Krah
Re: [Python-Dev] is this the fault of import_fresh_module or pickle?
On Tue, Jan 8, 2013 at 8:37 AM, R. David Murray wrote:
> On Tue, 08 Jan 2013 17:05:50 +0100, Stefan Krah
> wrote:
> > Eli Bendersky wrote:
> > > Everything works fine. However, if I add import_fresh_module for the C
> module:
> > >
> > > from test.support import import_fresh_module
> > > import pickle
> > > C = import_fresh_module('xml.etree.ElementTree',
> fresh=['_elementtree'])
> > > P = import_fresh_module('xml.etree.ElementTree',
> blocked=['_elementtree'])
> >
> > sys.modules still contains the C version at this point, so:
> >
> > sys.modules['xml.etree.ElementTree'] = P
> >
> >
> > > tb = P.TreeBuilder()
> > > print(pickle.dumps(tb))
> >
> > > This interaction only seems to happen with pickle. What's going on
> here? Can we
> > > somehow improve import_fresh_module to avoid this? Perhaps actually
> deleting
> > > previously imported modules with some special keyword flag?
> >
> > pickle always looks up sys.modules['xml.etree.ElementTree']. Perhaps we
> > could improve something, but this requirement is rather special;
> personally
> > I'm okay with switching sys.modules explicitly in the tests, because that
> > reminds me of what pickle does.
>
> Handling this case is why having a context-manager form of
> import_fresh_module was suggested earlier in this meta-thread. At
> least, I think that would solve it, I haven't tried it :)
>
Would you mind extracting just this idea into this discussion so we can
focus on it here? I personally don't see how making import_fresh_module a
context manager will solve things, unless you add some extra functionality
to it? AFAIU it doesn't remove modules from sys.modules *before* importing,
at this point.
Eli
Re: [Python-Dev] PEP 431 Time zone support improvements - Update
On Fri, Dec 28, 2012 at 10:12 PM, Ronald Oussoren wrote:
> On 28 Dec, 2012, at 21:23, Lennart Regebro wrote:
>
> > Happy Holidays! Here is the update of PEP 431 with the changes that
> > emerged after the earlier discussion.
>
> Why is the new timezone support added in a submodule of datetime? Adding the new
> function and exception to datetime itself wouldn't clutter the API that much,
> and datetime already contains some timezone support (datetime.tzinfo).

Putting the API directly into the datetime module does conflict with the new timezone class from Python 3.2. The timezone() function therefore needs to be called something else, or the timezone class must be renamed.

Alternative names for the timezone() function are get_timezone(), which has already been rejected, and zoneinfo(), which makes it clear that only zoneinfo timezones are relevant. Or the timezone class gets renamed to TimeZone (which is more PEP 8 anyway).

We could allow the timezone() function to take both timezone(offset, [name]) as now, and timezone(name), and return a TimeZone object in the first case and a zoneinfo-based timezone in the second case.

Or maybe somebody else can come up with more clever options?
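A small sketch of the clash being described, using only the existing 3.2 API plus the proposed (hypothetical) zoneinfo() spelling:

from datetime import datetime, timedelta, timezone

cet = timezone(timedelta(hours=1), 'CET')   # the Python 3.2 class: fixed offsets only
# dt = datetime(2013, 1, 8, tzinfo=zoneinfo('Europe/Stockholm'))
# ^ roughly the call a PEP 431 zoneinfo() function would provide; a module-level
#   timezone('Europe/Stockholm') would collide with the class constructor above.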
Re: [Python-Dev] PEP 431 Time zone support improvements - Update
On Tue, Jan 8, 2013 at 1:08 PM, Lennart Regebro wrote: > On Fri, Dec 28, 2012 at 10:12 PM, Ronald Oussoren > wrote: >> >> >> On 28 Dec, 2012, at 21:23, Lennart Regebro wrote: >> >> > Happy Holidays! Here is the update of PEP 431 with the changes that >> > emerged after the earlier discussion. >> >> Why is the new timezone support added in a submodule of datetime? Adding >> the new >> function and exception to datetime itself wouldn't clutter the API that >> much, and datetime >> already contains some timezone support (datetime.tzinfo). > > > Putting the API directly into the datetime module does conflict with the new > timezone class from Python 3.2. The timezone() function therefore needs to > be called something else, or the timezone class must be renamed. Can't rename the class since that would potentially break code (e.g. if the class can be pickled). > > Alternative names for the timezone() function is get_timezone(), which has > already been rejected, and zoneinfo() which makes it clear that it's only > zoneinfo timezones that are relevant. I'm personally +0 on zoneinfo(), but I don't have a better suggestion. -Brett > Or the timezone class get's renamed TimeZone (which is more PEP8 anyway). > > We can allow the timezone() function to take both timezone(offset, [name]) > as now, and timezone(name) and return a TimeZone object in the first case > and a zoneinfo based timezone in the second case. > > Or maybe somebody else can come up with more clever options? > > ___ > Python-Dev mailing list > [email protected] > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Point of building without threads?
2013/1/8 Trent Nelson :
> On Tue, Jan 08, 2013 at 06:15:45AM -0800, Stefan Krah wrote:
>> Trent Nelson wrote:
>> > All our NetBSD, OpenBSD and DragonFlyBSD slaves use --without-thread.
>> > Without it, they all wedge in some way or another. (That should be
>> > fixed*/investigated, but, until then, yeah, --without-threads allows
>> > for a slightly more useful (but still broken) test suite run on
>> > these platforms.)
>> >
>> > [*]: I suspect the problem with at least OpenBSD is that their
>> > userland pthreads implementation just doesn't cut it; there
>> > is no hope for the really technical tests that poke and
>> > prod at things like correct signal handling and whatnot.
>>
>> For OpenBSD the situation should be fixed in the latest release:
>>
>> http://www.openbsd.org/52.html#new
>>
>> I haven't tried it myself though.
>
> Interesting! I'll look into upgrading the existing Snakebite
> OpenBSD slaves (they're both at 5.1).

Oooh yes, many bugs have been fixed by the implementation of threads in the kernel (rthreads in OpenBSD 5.2)!

Just one recent example: on OpenBSD 4.9, FD_CLOEXEC doesn't work with fork()+exec() whereas it works with exec(); on OpenBSD 5.2, both cases work as expected. http://bugs.python.org/issue16850#msg179294

Victor
[Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
Hello. I've read the PEP, and some things raise questions in my mind. Here they are.

1. The series of sock_* methods can be organized into a wrapper around the sock object. This wrapper can then be saved and used later in async-aware code. This way, code like:

    sock = socket(...)
    # later, e.g. in connect()
    yield from tulip.get_event_loop().sock_connect(sock, ...)
    # later, e.g. in read()
    data = yield from tulip.get_event_loop().sock_recv(sock, ...)

will look like:

    sock = socket(...)
    async_sock = tulip.get_event_loop().wrap_socket(sock)
    # later, e.g. in connect()
    yield from async_sock.connect(...)
    # later, e.g. in read()
    data = yield from async_sock.recv(...)

The interface looks cleaner, while plain calls (if they are ever needed) will be only 5 characters longer.

2. Not as great, but it is also possible to wrap an fd in a similar way to make the interface simpler. Instead of:

    add_reader(fd, callback, *args)
    remove_reader(fd)

we can do:

    wrap_fd(fd).reader = functools.partial(callback, *args)
    wrap_fd(fd).reader = None  # or
    del wrap_fd(fd).reader

3. Why not use properties (or fields) instead of methods for cancelled, running and done in the Future class? I think it'll be easier to use, since I expect such attributes to be accessed as properties. I see it as a Java-ism, since in Java Futures have getters for these fields, but they are prefixed with 'is'.

4. Why separate exception() from result() for the Future class? It does the same as result() but with a different interface (return instead of raise). Doesn't this violate the rule "There should be one obvious way to do it"?

5. I think the protocol and transport methods' names are not easy enough to understand:
- write_eof() does not write anything but closes something, so it should be close_writing or something alike;
- in the same way, eof_received() should become something like receive_closed;
- pause() and resume() work with reading only, so they should be suffixed (prefixed) with read(ing), like pause_reading(), resume_reading().

Kind regards, Yuriy.
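A rough sketch of the wrapper proposed in point 1. This is hypothetical: tulip exposes no wrap_socket(); only the loop-level sock_connect()/sock_recv()/sock_sendall() calls it delegates to come from PEP 3156.

import tulip   # reference implementation of PEP 3156

class AsyncSocket:
    """Hypothetical wrapper binding a plain socket to an event loop's sock_* methods."""

    def __init__(self, sock, loop=None):
        self._sock = sock
        self._loop = loop if loop is not None else tulip.get_event_loop()

    def connect(self, address):
        # Delegates to the loop; the caller awaits the result with 'yield from'.
        return self._loop.sock_connect(self._sock, address)

    def recv(self, nbytes):
        return self._loop.sock_recv(self._sock, nbytes)

    def sendall(self, data):
        return self._loop.sock_sendall(self._sock, data)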
Re: [Python-Dev] is this the fault of import_fresh_module or pickle?
On Wed, Jan 9, 2013 at 2:58 AM, Eli Bendersky wrote:
>> Handling this case is why having a context-manager form of
>> import_fresh_module was suggested earlier in this meta-thread. At
>> least, I think that would solve it, I haven't tried it :)
>
>
> Would you mind extracting just this idea into this discussion so we can
> focus on it here? I personally don't see how making import_fresh_module a
> context manager will solve things, unless you add some extra functionality
> to it? AFAIU it doesn't remove modules from sys.modules *before* importing,
> at this point.
Sure it does, that's how it works: the module being imported, as well
as anything requested as a "fresh" module is removed from sys.modules,
anything requested as a "blocked" module is replaced with None (or
maybe 0 - whatever it is that will force ImportError). It then does
the requested import and then *reverts all those changes* to
sys.modules.
It's that last part which is giving you trouble: by the time you run
the actual tests, sys.modules has been reverted to its original state,
so pickle gets confused when it attempts to look things up by name.
Rather than a context manager form of import_fresh_module, what we
really want is a "modules_replaced" context manager:
@contextmanager
def modules_replaced(replacements):
    _missing = object()
    saved = {}
    try:
        for name, mod in replacements.items():
            saved[name] = sys.modules.get(name, _missing)
            sys.modules[name] = mod
        yield
    finally:
        for name, mod in saved.items():
            if mod is _missing:
                del sys.modules[name]
            else:
                sys.modules[name] = mod
And a new import_fresh_modules function that is like
import_fresh_module, but returns a 2-tuple of the requested module and
a mapping of all the affected modules, rather than just the module
object.
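A hypothetical usage sketch for Eli's case, assuming that import_fresh_modules() helper existed and returned the module plus the mapping of affected modules:

P, py_modules = import_fresh_modules('xml.etree.ElementTree',
                                     blocked=['_elementtree'])
tb = P.TreeBuilder()
with modules_replaced(py_modules):
    data = pickle.dumps(tb)   # pickle now resolves the class via the pure-Python module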
However, there will still be cases where this doesn't work (i.e.
modules with import-time side effects that don't support repeated
execution), and the pickle and copy global registries are a couple of
the places that are notorious for not coping with repeated imports of
a module. In that case, the test case will need to figure out what
global state is being modified and deal with that specifically.
(FWIW, this kind of problem is why import_fresh_module is in
test.support rather than importlib and the reload builtin became
imp.reload in Python 3 - "module level code is executed once per
process" is an assumption engrained in most Python developer's brains,
and these functions deliberately violate it).
Cheers,
Nick.
--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
2013/1/8 Yuriy Taraday :
> 4. Why separate exception() from result() for Future class? It does the same
> as result() but with different interface (return instead of raise). Doesn't
> this violate the rule "There should be one obvious way to do it"?

I expect that's a copy-and-paste error. exception() will return the exception if one occurred.

--
Regards,
Benjamin
Re: [Python-Dev] PEP 431 Time zone support improvements - Update
On Wed, Jan 9, 2013 at 5:02 AM, Brett Cannon wrote: > On Tue, Jan 8, 2013 at 1:08 PM, Lennart Regebro wrote: >> Alternative names for the timezone() function is get_timezone(), which has >> already been rejected, and zoneinfo() which makes it clear that it's only >> zoneinfo timezones that are relevant. > > I'm personally +0 on zoneinfo(), but I don't have a better suggestion. zoneinfo() sounds reasonable to me. Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Wed, Jan 9, 2013 at 11:14 AM, Yuriy Taraday wrote: > 4. Why separate exception() from result() for Future class? It does the same > as result() but with different interface (return instead of raise). Doesn't > this violate the rule "There should be one obvious way to do it"? The exception() method exists for the same reason that we support both "key in mapping" and raising KeyError from "mapping[key]": sometimes you want "Look Before You Leap", other times you want to let the exception fly. If you want the latter, just call .result() directly, if you want the former, check .exception() first. Regardless, the Future API isn't really being defined in PEP 3156, as it is mostly inheritied from the previously implemented PEP 3148 (http://www.python.org/dev/peps/pep-3148/#future-objects) Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
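In code, the two styles Nick describes look roughly like this for a PEP 3148 Future `fut` that is already done (handle() and log() are placeholder names, not part of any API):

exc = fut.exception()
if exc is None:              # "Look Before You Leap"
    handle(fut.result())
else:
    log(exc)

try:                         # ...or just let the exception fly
    handle(fut.result())
except Exception as exc:
    log(exc)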
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Tue, Jan 8, 2013 at 6:07 PM, Benjamin Peterson wrote: > 2013/1/8 Yuriy Taraday : >> 4. Why separate exception() from result() for Future class? It does the same >> as result() but with different interface (return instead of raise). Doesn't >> this violate the rule "There should be one obvious way to do it"? > > I expect that's a copy-and-paste error. exception() will return the > exception if one occured. I don't see the typo. It is as Nick explained. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
2013/1/8 Guido van Rossum : > On Tue, Jan 8, 2013 at 6:07 PM, Benjamin Peterson wrote: >> 2013/1/8 Yuriy Taraday : >>> 4. Why separate exception() from result() for Future class? It does the same >>> as result() but with different interface (return instead of raise). Doesn't >>> this violate the rule "There should be one obvious way to do it"? >> >> I expect that's a copy-and-paste error. exception() will return the >> exception if one occured. > > I don't see the typo. It is as Nick explained. PEP 3156 says "exception(). Difference with PEP 3148: This has no timeout argument and does not wait; if the future is not yet done, it raises an exception." I assume it's not supposed to raise. -- Regards, Benjamin ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
2013/1/8 Benjamin Peterson :
> 2013/1/8 Guido van Rossum :
>> On Tue, Jan 8, 2013 at 6:07 PM, Benjamin Peterson wrote:
>>> 2013/1/8 Yuriy Taraday :
>>>> 4. Why separate exception() from result() for Future class? It does the same
>>>> as result() but with different interface (return instead of raise). Doesn't
>>>> this violate the rule "There should be one obvious way to do it"?
>>>
>>> I expect that's a copy-and-paste error. exception() will return the
>>> exception if one occurred.
>>
>> I don't see the typo. It is as Nick explained.
>
> PEP 3156 says "exception(). Difference with PEP 3148: This has no
> timeout argument and does not wait; if the future is not yet done, it
> raises an exception." I assume it's not supposed to raise.

Oh, I see. "it raises an exception" refers to the not-completed-yet exception. Poor reading on my part; never mind.

--
Regards,
Benjamin
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Tue, Jan 8, 2013 at 6:53 PM, Benjamin Peterson wrote: > 2013/1/8 Guido van Rossum : >> On Tue, Jan 8, 2013 at 6:07 PM, Benjamin Peterson >> wrote: >>> 2013/1/8 Yuriy Taraday : 4. Why separate exception() from result() for Future class? It does the same as result() but with different interface (return instead of raise). Doesn't this violate the rule "There should be one obvious way to do it"? >>> >>> I expect that's a copy-and-paste error. exception() will return the >>> exception if one occured. >> >> I don't see the typo. It is as Nick explained. > > PEP 3156 says "exception(). Difference with PEP 3148: This has no > timeout argument and does not wait; if the future is not yet done, it > raises an exception." I assume it's not supposed to raise. No, actually, in that case it *does* raise an exception, because it means that the caller didn't understand the interface. It *returns* an exception object when the Future is done but the "result" is exceptional. But it *raises* when the Future is not done yet. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Wed, Jan 9, 2013 at 6:31 AM, Nick Coghlan wrote: > On Wed, Jan 9, 2013 at 11:14 AM, Yuriy Taraday > wrote: > > 4. Why separate exception() from result() for Future class? It does the > same > > as result() but with different interface (return instead of raise). > Doesn't > > this violate the rule "There should be one obvious way to do it"? > > The exception() method exists for the same reason that we support both > "key in mapping" and raising KeyError from "mapping[key]": sometimes > you want "Look Before You Leap", other times you want to let the > exception fly. If you want the latter, just call .result() directly, > if you want the former, check .exception() first. > Ok, I get it now. Thank you for clarifying. > Regardless, the Future API isn't really being defined in PEP 3156, as > it is mostly inheritied from the previously implemented PEP 3148 > (http://www.python.org/dev/peps/pep-3148/#future-objects) > Then #3 and #4 are about PEP 3148. Why was it done this way? Kind regards, Yuriy. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Tue, Jan 8, 2013 at 5:14 PM, Yuriy Taraday wrote: > I've read the PEP and some things raise questions in my consciousness. Here > they are. Thanks! > 1. Series of sock_ methods can be organized into a wrapper around sock > object. This wrappers can then be saved and used later in async-aware code. > This way code like: > > sock = socket(...) > # later, e.g. in connect() > yield from tulip.get_event_loop().sock_connect(sock, ...) > # later, e.g. in read() > data = yield from tulip.get_event_loop().sock_recv(sock, ...) > > will look like: > > sock = socket(...) > async_sock = tulip.get_event_loop().wrap_socket(sock) > # later, e.g. in connect() > yield from async_sock.connect(...) > # later, e.g. in read() > data = yield from async_sock.recv(...) > > Interface looks cleaner while plain calls (if they ever needed) will be only > 5 chars longer. This is a semi-internal API that is mostly useful to Transport implementers, and there won't be many of those. So I prefer the API that has the fewest classes. > 2. Not as great, but still possible to wrap fd in similar way to make > interface simpler. Instead of: > > add_reader(fd, callback, *args) > remove_reader(fd) > > We can do: > > wrap_fd(fd).reader = functools.partial(callback, *args) > wrap_fd(fd).reader = None # or > del wrap_fd(fd).reader Ditto. > 3. Why not use properties (or fields) instead of methods for cancelled, > running and done in Future class? I think, it'll be easier to use since I > expect such attributes to be accessed as properties. I see it as some > javaism since in Java Future have getters for this fields but they are > prefixed with 'is'. Too late, this is how PEP 3148 defined it. It was indeed inspired by Java Futures. However I would defend using methods here, since these are not all that cheap -- they have to acquire and release a lock. > 4. Why separate exception() from result() for Future class? It does the same > as result() but with different interface (return instead of raise). Doesn't > this violate the rule "There should be one obvious way to do it"? Because it is quite awkward to check for an exception if you have to catch it (4 lines instead of 1). > 5. I think, protocol and transport methods' names are not easy or > understanding enough: > - write_eof() does not write anything but closes smth, should be > close_writing or smth alike; > - the same way eof_received() should become smth like receive_closed; I am indeed struggling a bit with these names, but "writing an EOF" is actually how I think of this (maybe I am dating myself to the time of mag tapes though :-). > - pause() and resume() work with reading only, so they should be suffixed > (prefixed) with read(ing), like pause_reading(), resume_reading(). Agreed. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Tue, Jan 8, 2013 at 8:31 PM, Guido van Rossum wrote: > On Tue, Jan 8, 2013 at 5:14 PM, Yuriy Taraday wrote: >> - pause() and resume() work with reading only, so they should be suffixed >> (prefixed) with read(ing), like pause_reading(), resume_reading(). > > Agreed. I think I want to take that back. I think it is more common for a protocol to want to pause the transport (i.e. hold back data_received() calls) than it is for a transport to want to pause the protocol (i.e. hold back write() calls). So the more common method can have a shorter name. Also, pause_reading() is almost confusing, since the protocol's method is named data_received(), not read_data(). Also, there's no reason for the protocol to want to pause the *write* (send) actions of the transport -- if it wanted to write less, it should not have called write(). The reason to distinguish between the two modes of pausing is that it is sometimes useful to "stack" multiple protocols, and then a protocol in the middle of the stack acts as a transport to the protocol next to it (and vice versa). See the discussion on this list previously, e.g. http://mail.python.org/pipermail/python-ideas/2013-January/018522.html (search for the keyword "stack" in this long message to find the relevant section). -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
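A sketch of the flow-control pattern being discussed, written against the pause()/resume() transport method names as they stand in this thread (the spelling is exactly what is being debated); the buffer threshold and the process_buffer() hook are invented for illustration:

    class ThrottlingProtocol:
        # Illustrative threshold, not part of any PEP.
        HIGH_WATER = 64 * 1024

        def connection_made(self, transport):
            self.transport = transport
            self.buffer = bytearray()

        def data_received(self, data):
            self.buffer.extend(data)
            if len(self.buffer) >= self.HIGH_WATER:
                # Ask the transport to hold back further data_received()
                # calls until the application has drained the buffer.
                self.transport.pause()

        def process_buffer(self):
            # Hypothetical application hook: consume the buffered data,
            # then let the transport deliver data again.
            del self.buffer[:]
            self.transport.resume()

        def eof_received(self):
            pass

        def connection_lost(self, exc):
            pass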
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Wed, Jan 9, 2013 at 8:31 AM, Guido van Rossum wrote: > On Tue, Jan 8, 2013 at 5:14 PM, Yuriy Taraday wrote: > > I've read the PEP and some things raise questions in my mind. > Here > > they are. > > Thanks! > > > 1. The series of sock_ methods can be organized into a wrapper around the sock > > object. These wrappers can then be saved and used later in async-aware > code. > > This is a semi-internal API that is mostly useful to Transport > implementers, and there won't be many of those. So I prefer the API > that has the fewest classes. > > > 2. Not as great, but still possible to wrap fd in a similar way to make the > > interface simpler. > > Ditto. > Ok, I see. Should transports be bound to the event loop on creation? I wonder what would happen if someone changes the current event loop between these calls. > > > 3. Why not use properties (or fields) instead of methods for cancelled, > > running and done in the Future class? I think it'll be easier to use since I > > expect such attributes to be accessed as properties. I see it as a > > Javaism since in Java, Futures have getters for these fields but they are > > prefixed with 'is'. > > Too late, this is how PEP 3148 defined it. It was indeed inspired by > Java Futures. However I would defend using methods here, since these > are not all that cheap -- they have to acquire and release a lock. > > I understand why it should be a method, but still if it's a getter, it should have either a get_ or is_ prefix. Is there any way to change this with the 'Final' PEP? > > 4. Why separate exception() from result() for the Future class? It does the > same > > as result() but with a different interface (return instead of raise). > Doesn't > > this violate the rule "There should be one obvious way to do it"? > > Because it is quite awkward to check for an exception if you have to > catch it (4 lines instead of 1). > > > 5. I think protocol and transport methods' names are not easy or > > understandable enough: > > - write_eof() does not write anything but closes something; it should be > > close_writing or something similar; > > - in the same way, eof_received() should become something like receive_closed; > > I am indeed struggling a bit with these names, but "writing an EOF" is > actually how I think of this (maybe I am dating myself to the time of > mag tapes though :-). > > I have never seen a computer working with a tape, but it's clear to me what they do. I've just imagined the number of words I'll have to say to students about EOFs instead of the simple "it closes our end of one half of a socket". > - pause() and resume() work with reading only, so they should be suffixed > > (prefixed) with read(ing), like pause_reading(), resume_reading(). > > Agreed. > > -- > --Guido van Rossum (python.org/~guido) > -- Kind regards, Yuriy. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Tue, Jan 8, 2013 at 9:02 PM, Yuriy Taraday wrote: > On Wed, Jan 9, 2013 at 8:31 AM, Guido van Rossum wrote: >> On Tue, Jan 8, 2013 at 5:14 PM, Yuriy Taraday wrote: >> > 1. The series of sock_ methods can be organized into a wrapper around the sock >> > object. These wrappers can then be saved and used later in async-aware >> > code. >> >> This is a semi-internal API that is mostly useful to Transport >> implementers, and there won't be many of those. So I prefer the API >> that has the fewest classes. >> >> > 2. Not as great, but still possible to wrap fd in a similar way to make the >> > interface simpler. >> >> Ditto. > > > Ok, I see. > Should transports be bound to the event loop on creation? I wonder what would > happen if someone changes the current event loop between these calls. Yes, this is what the transport implementation does. >> > 3. Why not use properties (or fields) instead of methods for cancelled, >> > running and done in the Future class? I think it'll be easier to use since >> > I >> > expect such attributes to be accessed as properties. I see it as a >> > Javaism since in Java, Futures have getters for these fields but they are >> > prefixed with 'is'. >> >> Too late, this is how PEP 3148 defined it. It was indeed inspired by >> Java Futures. However I would defend using methods here, since these >> are not all that cheap -- they have to acquire and release a lock. >> > > I understand why it should be a method, but still if it's a getter, it > should have either a get_ or is_ prefix. Why? That's not a universal coding standard. The names seem clear enough to me. > Is there any way to change this with the 'Final' PEP? No, the concurrent.futures package has been released (I forget if it was Python 3.2 or 3.3) and we're bound to backwards compatibility. Also I really don't think it's a big deal at all. >> > 4. Why separate exception() from result() for the Future class? It does the >> > same >> > as result() but with a different interface (return instead of raise). >> > Doesn't >> > this violate the rule "There should be one obvious way to do it"? >> >> Because it is quite awkward to check for an exception if you have to >> catch it (4 lines instead of 1). >> >> >> > 5. I think protocol and transport methods' names are not easy or >> > understandable enough: >> > - write_eof() does not write anything but closes something; it should be >> > close_writing or something similar; >> > - in the same way, eof_received() should become something like receive_closed; >> >> I am indeed struggling a bit with these names, but "writing an EOF" is >> actually how I think of this (maybe I am dating myself to the time of >> mag tapes though :-). >> > I have never seen a computer working with a tape, but it's clear to me what > they do. > I've just imagined the number of words I'll have to say to students about > EOFs instead of the simple "it closes our end of one half of a socket". But which half? A socket is two independent streams, one in each direction. Twisted uses half_close() for this concept, but unless you already know what this is for, you are left wondering which half. Which is why I like using 'write' in the name. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
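For readers wondering what "which half" means in practice, here is a plain blocking-socket illustration (no PEP 3156 API involved): shutting down only the write direction is the operation a transport's write_eof() is meant to expose one layer up. The host and request are arbitrary examples:

    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    sock.shutdown(socket.SHUT_WR)      # our write half is now closed...
    chunks = []
    while True:
        chunk = sock.recv(4096)        # ...but the read half still works
        if not chunk:                  # the peer's EOF: an empty bytes object
            break
        chunks.append(chunk)
    sock.close()
    print(b"".join(chunks)[:60])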
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Wed, Jan 9, 2013 at 9:14 AM, Guido van Rossum wrote: > On Tue, Jan 8, 2013 at 9:02 PM, Yuriy Taraday wrote: > > Should transports be bound to the event loop on creation? I wonder what > would > > happen if someone changes the current event loop between these calls. > > Yes, this is what the transport implementation does. > But in theory every sock_ call is independent and returns a Future bound to the current event loop. So if one changes the event loop with an active transport, nothing bad should happen. Or am I missing something? > > I understand why it should be a method, but still if it's a getter, it > > should have either a get_ or is_ prefix. > > Why? That's not a universal coding standard. The names seem clear enough > to me. > When I see (in autocompletion, for example) or remember a name like "running", it triggers the thought that it's a field. When I remember something like is_running, it definitely associates with a method. > > Is there any way to change this with the 'Final' PEP? > > No, the concurrent.futures package has been released (I forget if it > was Python 3.2 or 3.3) and we're bound to backwards compatibility. > Also I really don't think it's a big deal at all. > Yes, not a big deal. > > >> > 5. I think protocol and transport methods' names are not easy or > >> > understandable enough: > >> > - write_eof() does not write anything but closes something; it should be > >> > close_writing or something similar; > >> > - in the same way, eof_received() should become something like receive_closed; > >> > >> I am indeed struggling a bit with these names, but "writing an EOF" is > >> actually how I think of this (maybe I am dating myself to the time of > >> mag tapes though :-). > >> > > I have never seen a computer working with a tape, but it's clear to me what > > they do. > > I've just imagined the number of words I'll have to say to students about > > EOFs instead of the simple "it closes our end of one half of a socket". > > But which half? A socket is two independent streams, one in each > direction. Twisted uses half_close() for this concept, but unless you > already know what this is for, you are left wondering which half. Which > is why I like using 'write' in the name. Yes, the 'write' part is good, I should have mentioned it. I meant to say that I won't need to explain that there were days when we had to handle a special marker at the end of a file. -- Kind regards, Yuriy. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] [Python-ideas] PEP 3156 - Asynchronous IO Support Rebooted
Is this thread really ready to migrate to python-dev when we're still bikeshedding method names? Yuriy Taraday writes: > > But which half? A socket is two independent streams, one in each > > direction. Twisted uses half_close() for this concept, but unless you > > already know what this is for, you are left wondering which half. Which > > is why I like using 'write' in the name. > > Yes, the 'write' part is good, I should have mentioned it. I meant to say that I won't > need to explain that there were days when we had to handle a special marker > at the end of a file. Mystery is good for students. Getting serious, "close_writer" occurred to me as a possibility. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3156 - Asynchronous IO Support Rebooted
On Tue, Jan 8, 2013 at 9:26 PM, Yuriy Taraday wrote: > > > > On Wed, Jan 9, 2013 at 9:14 AM, Guido van Rossum wrote: >> >> On Tue, Jan 8, 2013 at 9:02 PM, Yuriy Taraday wrote: >> > Should transports be bound to the event loop on creation? I wonder what >> > would >> > happen if someone changes the current event loop between these calls. >> >> Yes, this is what the transport implementation does. > > > But in theory every sock_ call is independent and returns a Future bound to > the current event loop. It is bound to the event loop whose sock_() method you called. > So if one changes the event loop with an active transport, nothing bad should > happen. Or am I missing something? Changing event loops in the middle of event processing is not a common (or even useful) pattern. You start the event loop and then leave it alone. >> > I understand why it should be a method, but still if it's a getter, it >> > should have either a get_ or is_ prefix. >> >> Why? That's not a universal coding standard. The names seem clear enough >> to me. > > > When I see (in autocompletion, for example) or remember a name like "running", > it triggers the thought that it's a field. When I remember something like is_running, > it definitely associates with a method. That must be pretty specific to your personal experience. >> > Is there any way to change this with the 'Final' PEP? >> >> No, the concurrent.futures package has been released (I forget if it >> was Python 3.2 or 3.3) and we're bound to backwards compatibility. >> Also I really don't think it's a big deal at all. > > > Yes, not a big deal. >> >> >> >> > 5. I think protocol and transport methods' names are not easy or >> >> > understandable enough: >> >> > - write_eof() does not write anything but closes something; it should be >> >> > close_writing or something similar; >> >> > - in the same way, eof_received() should become something like receive_closed; >> >> >> >> I am indeed struggling a bit with these names, but "writing an EOF" is >> >> actually how I think of this (maybe I am dating myself to the time of >> >> mag tapes though :-). >> >> >> > I have never seen a computer working with a tape, but it's clear to me what >> > they do. >> > I've just imagined the number of words I'll have to say to students >> > about >> > EOFs instead of the simple "it closes our end of one half of a socket". >> >> But which half? A socket is two independent streams, one in each >> direction. Twisted uses half_close() for this concept, but unless you >> already know what this is for, you are left wondering which half. Which >> is why I like using 'write' in the name. > > > Yes, the 'write' part is good, I should have mentioned it. I meant to say that I won't > need to explain that there were days when we had to handle a special marker > at the end of a file. But even today you have to mark the end somehow, to distinguish it from "not done yet, more could be coming". The equivalent is typing ^D into a UNIX terminal (or ^Z on Windows). -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
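On the receiving side, the same idea looks roughly like this; the class name is invented, and the callback names follow the PEP draft. data_received() only ever means "more arrived, possibly with more to come", while eof_received() is the ^D of the connection, telling the protocol that the peer has finished writing:

    class CollectingProtocol:
        def connection_made(self, transport):
            self.transport = transport
            self.parts = []

        def data_received(self, data):
            # More data arrived; more may still be on the way.
            self.parts.append(data)

        def eof_received(self):
            # The peer is done writing; everything it sent is now here.
            payload = b"".join(self.parts)
            print("peer sent %d bytes in total" % len(payload))

        def connection_lost(self, exc):
            pass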
