[issue2675] Curses terminal resize problems when Python is in interactive mode
poq added the comment: Issue 3948 is almost certainly a duplicate. -- ___ Python tracker <http://bugs.python.org/issue2675> ___
[issue2675] Curses terminal resize problems when Python is in interactive mode
poq added the comment:

Just to confirm: curses SIGWINCH handling is still broken in 2.7/3.2 after importing readline. Readline seems to set the LINES/COLUMNS environment variables, and this confuses curses, even if curses is imported after readline. Clearing the LINES/COLUMNS variables after importing readline fixes the issue:

    os.environ['LINES'] = os.environ['COLUMNS'] = ''

or

    os.unsetenv('LINES'); os.unsetenv('COLUMNS')

(or other variations). I spent a couple of hours tearing my hair out over this before I found this report.

It may be possible for Python to work around this readline behavior by saving LINES/COLUMNS and restoring them after initializing readline. Or maybe this should just be documented somewhere.

-- nosy: +poq versions: +Python 3.3 -Python 2.4

___ Python tracker <http://bugs.python.org/issue2675> ___
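A minimal sketch of the workaround described above, assuming a session where readline is loaded before curses (the pop() calls are just one way of clearing the variables; the comment explicitly mentions other variations):

    import readline  # sets LINES/COLUMNS in os.environ as a side effect
    import os

    # Drop readline's values so curses queries the terminal size itself
    # and SIGWINCH resize handling keeps working.
    os.environ.pop('LINES', None)
    os.environ.pop('COLUMNS', None)

    import curses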
[issue13934] sqlite3 test typo
poq added the comment: Now with proper skipping. -- Added file: http://bugs.python.org/file25622/sqlite3-test-hooks.v2.patch ___ Python tracker <http://bugs.python.org/issue13934> ___
[issue13934] sqlite3 test typo
poq added the comment: Sure, why not. Here you go. -- Added file: http://bugs.python.org/file25616/sqlite3-version-doc.patch ___ Python tracker <http://bugs.python.org/issue13934> ___
[issue14386] Expose dictproxy as a public type
poq added the comment: It is exposed as types.DictProxyType in Python 2... -- nosy: +poq ___ Python tracker <http://bugs.python.org/issue14386> ___
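For reference, a quick Python 2 check of the name mentioned above (Python 2 only; this just demonstrates the existing attribute, not the public type being proposed in this issue):

    # Python 2
    import types
    assert types.DictProxyType is type(object.__dict__)
    print(types.DictProxyType)   # <type 'dictproxy'>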
[issue14228] It is impossible to catch sigint on startup in python code
poq added the comment:

> Because the available space for command line switches is rather limited

Limited by what?

> "MYENVVAR=foo python ..."

That does not work with hashbangs (and env is kludgey).

-- ___ Python tracker <http://bugs.python.org/issue14228> ___
[issue14228] It is impossible to catch sigint on startup in python code
poq added the comment:

> No, the point is that the exception may be caused by a real bug and having
> the traceback is tremendously useful to debug such situations. [...]
> KeyboardInterrupt is not different in this regard from, say,
> ZeroDivisionError.

KeyboardInterrupt *is* different though. It is caused by the user instead of by any bug. Of course, it can still be useful to know where the program was interrupted, so showing a traceback for KeyboardInterrupt by default makes sense IMO.

> As I said, a possible solution is to allow users to alter the default signal
> handling (using an env var).

Why an environment variable instead of a command line switch (as suggested by Ian Jackson)? Shouldn't a script be able to decide for itself how it handles signals?

-- ___ Python tracker <http://bugs.python.org/issue14228> ___
[issue14228] It is impossible to catch sigint on startup in python code
poq added the comment:

> It seems not even using -S fixes the problem

That's because you try to use re and os in your except block, and the KeyboardInterrupt is raised before they are imported.

-- nosy: +poq ___ Python tracker <http://bugs.python.org/issue14228> ___
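A sketch of the point being made: anything the except clause relies on has to be imported before the window in which Ctrl-C can arrive (the script below is hypothetical; slow_startup just stands in for whatever runs before the handler is ready):

    import os    # imported up front, so the except block never triggers an import
    import sys
    import time

    def slow_startup():
        time.sleep(10)   # stand-in for expensive startup work

    try:
        slow_startup()
    except KeyboardInterrupt:
        # os and sys are already bound here; no import can be
        # interrupted inside this handler.
        sys.stderr.write("interrupted (pid %d)\n" % os.getpid())
        sys.exit(130)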
[issue12568] Add functions to get the width in columns of a character
poq added the comment: Martin, I agree that wcswidth is incorrect with respect to Unicode. However I don't think that's relevant at all. Python should only try to match the behaviour of the terminal. Since terminals do slightly different things, trying to match them exactly - in all cases, on all systems - is virtually impossible. But AFAICT wcwidth should match the terminal behaviour on nearly all modern systems, so it makes sense to expose it. -- ___ Python tracker <http://bugs.python.org/issue12568> ___
[issue12568] Add functions to get the width in columns of a character
poq added the comment:

It seems this is a bit of a minefield... GNOME Terminal/libvte has an environment variable (VTE_CJK_WIDTH) to override the handling of ambiguous-width characters. It bases its default on the locale (with the comment 'This is basically what GNU libc does'). urxvt just uses the system wcwidth. Xterm uses some voodoo to decide between the system wcwidth and mk_wcwidth(_cjk): http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c

I think the simplest solution is to just expose libc's wc(s)width. It is widely used and is most likely to match the behaviour of the terminal.

FWIW I wrote a little script to test the widths of all Unicode characters, and came up with the following logic to match libvte behaviour:

    import unicodedata

    def wcwidth(c, legacy_cjk=False):
        if c in u'\t\r\n\10\13\14':
            raise ValueError('character %r has no intrinsic width' % c)
        if c in u'\0\5\7\16\17':
            return 0
        if u'\u1160' <= c <= u'\u11ff':
            return 0  # hangul jamo
        if unicodedata.category(c) in ('Mn', 'Me', 'Cf') and c != u'\u00ad':
            return 0  # 00ad = soft hyphen
        eaw = unicodedata.east_asian_width(c)
        if eaw in ('F', 'W'):
            return 2
        if legacy_cjk and eaw == 'A':
            return 2
        return 1

-- ___ Python tracker <http://bugs.python.org/issue12568> ___
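A quick usage check of the function above (assuming the wcwidth() definition from the previous comment is in scope; the sample characters are only illustrative, and the expected values follow from that logic):

    for ch in (u'a', u'\u00e9', u'\u4e2d', u'\u0301'):
        print('%r -> %d' % (ch, wcwidth(ch)))
    # u'a'      -> 1  (narrow)
    # u'\xe9'   -> 1  (ambiguous width, narrow unless legacy_cjk=True)
    # u'\u4e2d' -> 2  (East Asian Wide)
    # u'\u0301' -> 0  (combining mark)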
[issue12568] Add functions to get the width in columns of a character
poq added the comment:

Martin, I think you meant to write "if w == 'A':". Some very common characters have ambiguous widths though (e.g. the Greek alphabet), so you can't just raise an error for them.

http://unicode.org/reports/tr11/ says: "Ambiguous characters occur in East Asian legacy character sets as wide characters, but as narrow (i.e., normal-width) characters in non-East Asian usage." So in practice applications can treat ambiguous characters as narrow by default, with a user setting to use the legacy (wide) width.

As Tom pointed out, there are also a bunch of zero-width characters, and characters with special formatting like tab, soft hyphen, ...

-- nosy: +poq ___ Python tracker <http://bugs.python.org/issue12568> ___
[issue14164] my little contribution to the docs
poq added the comment: It is generally considered more correct to write "floating-point number", because "floating-point" is a compound adjective here. The hyphen clarifies that it should be parsed as ((floating point) number) instead of (floating (point number)). However, in practice "floating point number" is also commonly used. I completely agree with Eli that this is just nitpicking, and not a productive use of Python developers' time. -- nosy: +poq ___ Python tracker <http://bugs.python.org/issue14164> ___
[issue13641] decoding functions in the base64 module could accept unicode strings
poq added the comment: FWIW, I was surprised by the return type of b64encode when I first used it in Python 3. It seems to me that b64encode turns binary data into text and thus intuitively should take bytes and return str. Similarly it seems intuitive to me for b64decode to take str as input and return bytes, as it turns text back into binary data. -- nosy: +poq ___ Python tracker <http://bugs.python.org/issue13641> ___
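For context, the behaviour being discussed: in Python 3, base64.b64encode takes bytes and also returns bytes, and (with the change tracked in this issue) b64decode accepts ASCII str as well as bytes. A quick demonstration:

    import base64

    encoded = base64.b64encode(b'\x00\xff')    # b'AP8=' - bytes in, bytes out
    assert base64.b64decode(encoded) == b'\x00\xff'

    # Decoding from an ASCII str also works once the decoding functions
    # accept unicode strings, which is what this issue is about.
    assert base64.b64decode('AP8=') == b'\x00\xff'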
[issue13773] Support sqlite3 uri filenames
poq added the comment:

> The Python docs should either list them (there aren’t much; pro: all the info is here, con: maintenance) or link to them.

They've already added a new option ('psow') since I opened this report, so linking is probably more future-proof. I've attached an updated patch which adds a link. I've also changed the exception raised when URIs are not supported to sqlite3.NotSupportedError.

> By the way, do you want to give us your full name so that we can credit you?

I prefer anonymity. :)

-- Added file: http://bugs.python.org/file24420/sqlite-uri.v3.patch ___ Python tracker <http://bugs.python.org/issue13773> ___
[issue12993] prepared statements in sqlite3 module
poq added the comment: This can be closed. -- ___ Python tracker <http://bugs.python.org/issue12993> ___
[issue12142] Reference cycle when importing ctypes
poq added the comment: I've attached a patch for the _array_type change. The long double fix is probably dependent on PEP 3118 (#3132). -- keywords: +patch Added file: http://bugs.python.org/file24413/ctypes-leak.patch ___ Python tracker <http://bugs.python.org/issue12142> ___
[issue13773] Support sqlite3 uri filenames
poq added the comment:

> Can you open a bug report for that?

Opened #13934.

> I think the doc could link to the sqlite.org doc about URIs.

I considered this, but the rest of the sqlite3 module documentation doesn't link to the sqlite.org doc pages either. There is only a link to http://www.sqlite.org under 'See also'.

-- ___ Python tracker <http://bugs.python.org/issue13773> ___
[issue13934] sqlite3 test typo
New submission from poq :

The test CheckCollationIsUsed in Lib/sqlite3/test/hooks.py never runs because it checks the wrong version tuple. Patch attached.

--
components: Tests
files: sqlite3-test-hooks.patch
keywords: patch
messages: 152548
nosy: poq
priority: normal
severity: normal
status: open
title: sqlite3 test typo
type: behavior
versions: Python 3.3
Added file: http://bugs.python.org/file24412/sqlite3-test-hooks.patch

___ Python tracker <http://bugs.python.org/issue13934> ___
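The mix-up is easy to make: sqlite3.version_info is the version of the pysqlite module itself, while sqlite3.sqlite_version_info is the version of the underlying SQLite library. A sketch of a library-version-gated skip (the (3, 3, 0) threshold and the test name are only illustrative, not the values used in the actual test):

    import sqlite3
    import unittest

    print(sqlite3.version_info)         # e.g. (2, 6, 0) - pysqlite module version
    print(sqlite3.sqlite_version_info)  # e.g. (3, 7, 4) - SQLite library version

    class CollationTests(unittest.TestCase):
        @unittest.skipIf(sqlite3.sqlite_version_info < (3, 3, 0),
                         "collation hooks need a newer SQLite library")
        def test_collation_is_used(self):
            pass  # the real test body lives in Lib/sqlite3/test/hooks.py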
[issue13773] Support sqlite3 uri filenames
poq added the comment: Thanks for your comments. You're right, I didn't consider positional arguments. Here's a patch that addresses your comments. Should I also rewrap modified lines that were already much too long? I also noticed & fixed an unrelated typo in Lib/sqlite3/test/hooks.py... -- Added file: http://bugs.python.org/file24207/sqlite-uri.v2.patch ___ Python tracker <http://bugs.python.org/issue13773> ___
[issue13773] Support sqlite3 uri filenames
New submission from poq :

URIs are an extensible way to pass options to SQLite. See: http://www.sqlite.org/uri.html

The patch adds a keyword argument "uri" to sqlite3.connect which causes the filename to be parsed as a URI if set to True.

--
components: Extension Modules
files: sqlite-uri.patch
keywords: patch
messages: 151089
nosy: poq
priority: normal
severity: normal
status: open
title: Support sqlite3 uri filenames
type: enhancement
versions: Python 3.3
Added file: http://bugs.python.org/file24205/sqlite-uri.patch

___ Python tracker <http://bugs.python.org/issue13773> ___
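What the proposed keyword looks like in use (a sketch; the file path and the mode=ro parameter are just examples of the query options documented at sqlite.org/uri.html):

    import sqlite3

    # With uri=True the string is parsed as a URI filename, so query
    # parameters such as mode=ro (open read-only) take effect.
    con = sqlite3.connect('file:/tmp/example.db?mode=ro', uri=True)

    # Without uri=True the same string would be treated as a literal filename.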
[issue12997] sqlite3: PRAGMA foreign_keys = ON doesn't work
poq added the comment:

sqlite3.version_info = (2, 6, 0)
sqlite3.sqlite_version_info = (3, 7, 4)
pysqlite2.version_info = (2, 6, 0)
pysqlite2.sqlite_version_info = (3, 7, 4)

-- ___ Python tracker <http://bugs.python.org/issue12997> ___
[issue13031] [PATCH] small speed-up for tarfile.py when unzipping tarballs
poq added the comment: I don't think you even need the slice, if you use unpack_from. -- nosy: +poq ___ Python tracker <http://bugs.python.org/issue13031> ___
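To illustrate the suggestion (a sketch only; the buffer, format and offset below are placeholders, not the exact code in tarfile.py):

    import struct

    buf = b'\x00' * 512   # stand-in for a tar header block

    # With a slice: a copy of buf[148:156] is created before unpacking.
    with_slice = struct.unpack("8B", buf[148:156])

    # With unpack_from: reads straight out of the original buffer, no copy.
    without_slice = struct.unpack_from("8B", buf, 148)

    assert with_slice == without_slice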
[issue12997] sqlite3: PRAGMA foreign_keys = ON doesn't work
poq added the comment:

Nope.

$ sqlite3
SQLite version 3.7.4
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> pragma foreign_keys;
0
sqlite>

$ python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> c = sqlite3.connect(':memory:')
>>> list(c.execute('pragma foreign_keys'))
[(0,)]
>>> list(c.execute('pragma foreign_keys = on'))
[]
>>> list(c.execute('pragma foreign_keys'))
[(1,)]

-- ___ Python tracker <http://bugs.python.org/issue12997> ___
[issue12993] prepared statements in sqlite3 module
poq added the comment: The sqlite3 module already uses prepared statements. Quoting from the documentation: "The sqlite3 module internally uses a statement cache to avoid SQL parsing overhead. If you want to explicitly set the number of statements that are cached for the connection, you can set the cached_statements parameter. The currently implemented default is to cache 100 statements." -- nosy: +poq ___ Python tracker <http://bugs.python.org/issue12993> ___
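A quick illustration of the parameter mentioned in the quoted documentation (the value 200 and the table are arbitrary):

    import sqlite3

    # Raise the per-connection prepared-statement cache above the default of 100.
    con = sqlite3.connect(':memory:', cached_statements=200)

    con.execute('create table t (x integer)')
    # Repeated executions of the same SQL text reuse the cached prepared
    # statement, so the SQL is only parsed once.
    for i in range(1000):
        con.execute('insert into t (x) values (?)', (i,))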
[issue12997] sqlite3: PRAGMA foreign_keys = ON doesn't work
poq added the comment:

Works for me?

$ python2.7 t.py
Traceback (most recent call last):
  File "t.py", line 13, in <module>
    con.execute("insert into track (artist_id) values (1)")
sqlite3.IntegrityError: foreign key constraint failed

$ python3.2 t.py
Traceback (most recent call last):
  File "t.py", line 13, in <module>
    con.execute("insert into track (artist_id) values (1)")
sqlite3.IntegrityError: foreign key constraint failed

-- nosy: +poq ___ Python tracker <http://bugs.python.org/issue12997> ___
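The t.py script itself is not included here; a hypothetical script along these lines produces the same failure (the table and column names are made up to match the traceback):

    import sqlite3

    con = sqlite3.connect(':memory:')
    con.execute('pragma foreign_keys = on')   # must be issued on each connection
    con.execute('create table artist (id integer primary key)')
    con.execute('create table track (artist_id integer references artist(id))')

    # No artist with id 1 exists, so with enforcement enabled this raises
    # sqlite3.IntegrityError: foreign key constraint failed
    con.execute("insert into track (artist_id) values (1)")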
[issue12778] JSON-serializing a large container takes too much memory
poq added the comment:

> Is iterencode() used much? I would think dump() and dumps() see the most use.

Of course. I'd just prefer an elegant & complete solution. But I agree accelerating just dump() would already be much better than the current situation.

-- ___ Python tracker <http://bugs.python.org/issue12778> ___
[issue12778] JSON-serializing a large container takes too much memory
poq added the comment:

> It would just need to call a given callable (fp.write) at regular intervals and that would be enough to C-accelerate dump().

True, but that would just special-case dump(), just like dumps() is special-cased now. Ideally JSONEncoder.iterencode() would be accelerated, so you wouldn't need any special cases. Or deprecate iterencode() and replace it with a callback interface...

-- ___ Python tracker <http://bugs.python.org/issue12778> ___
[issue12778] JSON-serializing a large container takes too much memory
poq added the comment: I think this is because dumps() uses the C encoder. Making the C encoder incremental (i.e. iterator-based) like the Python encoder would solve this. I actually looked into doing this for issue #12134, but it didn't seem so simple; since C has no yield, I think the iterator would need to maintain its own stack to keep track of where it is in the object tree it's encoding... If there is interest though, I may be able to write a patch when I have some time off again... -- nosy: +poq ___ Python tracker <http://bugs.python.org/issue12778> ___
[issue12134] json.dump much slower than dumps
poq added the comment: dump() is not slower because it's incremental though. It's slower because it's pure Python. I don't think there is necessarily a memory/speed trade-off; it should be possible to write an incremental encoder in C as well. -- ___ Python tracker <http://bugs.python.org/issue12134> ___
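Until the C encoder grows an incremental path, the usual workaround is to serialize with the C-accelerated dumps() and write the result in one go, trading dump()'s lower memory use for speed (fast_dump is just an illustrative name, not a stdlib function):

    import json

    def fast_dump(obj, fp):
        # One-shot C encoder builds the whole string in memory, then a single write.
        fp.write(json.dumps(obj))

    with open('/tmp/out.json', 'w') as fp:
        fast_dump([[1, 2, 3] * 10] * 10, fp)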
[issue12142] Reference cycle when importing ctypes
poq added the comment:

Tests succeed with this change. There is only one use of _array_type, which is in the same module. This use is presumably tested, because the test fails if I change the line to _array_type = type(Structure).

In fact, everything must behave exactly the same after this change, because the two values are identical:

    >>> from ctypes import *
    >>> type(c_int * 3) is type(Array)
    True

-- ___ Python tracker <http://bugs.python.org/issue12142> ___
[issue12134] json.dump much slower than dumps
poq added the comment: Alright. I wouldn't mind a little note in the docs; I certainly did not expect that these two functions would perform so differently. Would it be very difficult though to add buffering support to the C encoder? -- ___ Python tracker <http://bugs.python.org/issue12134> ___
[issue12142] Reference cycle when importing ctypes
Changes by poq : -- title: eference cycle when importing ctypes -> Reference cycle when importing ctypes ___ Python tracker <http://bugs.python.org/issue12142> ___
[issue12142] eference cycle when importing ctypes
Changes by poq : -- title: Circular reference when importing ctypes -> eference cycle when importing ctypes ___ Python tracker <http://bugs.python.org/issue12142> ___
[issue12142] Circular reference when importing ctypes
New submission from poq :

When importing ctypes after gc.set_debug(gc.DEBUG_LEAK), the garbage collector finds a 'c_int_Array_3' class and some related objects. The class is created in ctypes/_endian.py:

    _array_type = type(c_int * 3)

It seems that this could be avoided with:

    _array_type = type(Array)

Of course, I realize this is not a bug because normally it will just get collected. It is just an extremely minor annoyance because this is currently the only thing still found by DEBUG_LEAK for my program ;)

--
components: ctypes
messages: 136485
nosy: poq
priority: normal
severity: normal
status: open
title: Circular reference when importing ctypes
type: resource usage
versions: Python 3.3

___ Python tracker <http://bugs.python.org/issue12142> ___
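A minimal reproduction of the observation above (a sketch to run as a standalone script; the exact objects reported may differ between Python versions):

    import gc

    gc.set_debug(gc.DEBUG_LEAK)
    import ctypes   # executes ctypes/_endian.py, which creates type(c_int * 3)
    gc.collect()

    # DEBUG_LEAK includes DEBUG_SAVEALL, so collectable cycles are kept in
    # gc.garbage instead of being freed; the throwaway array class shows up here.
    for obj in gc.garbage:
        print(type(obj), obj)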
[issue12134] json.dump much slower than dumps
New submission from poq :

    import json, timeit

    obj = [[1,2,3]*10]*10

    class writable(object):
        def write(self, buf):
            pass

    w = writable()

    print('dumps: %.3f' % timeit.timeit(lambda: json.dumps(obj), number=1))
    print('dump: %.3f' % timeit.timeit(lambda: json.dump(obj, w), number=1))

On my machine this outputs:

    dumps: 0.391
    dump: 4.501

I believe this is mostly caused by dump using JSONEncoder.iterencode without _one_shot=True, resulting in c_make_encoder not being used.

--
components: Extension Modules
messages: 136439
nosy: poq
priority: normal
severity: normal
status: open
title: json.dump much slower than dumps
type: performance
versions: Python 2.7, Python 3.2

___ Python tracker <http://bugs.python.org/issue12134> ___