Sworddragon added the comment:
> $ cat badfilename.py
> badfn = "こんにちは".encode('euc-jp').decode('utf-8', 'surrogateescape')
> print("bad filename:", badfn)
>
> $ PYTHONIOENCODING=utf-8:backslashreplace python3 badfilename.py
> bad filename: \udca4\udcb
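The round-trip behind this testcase can be sketched as follows (PEP 383's surrogateescape handler; the variable names are illustrative):

```python
# Bytes that are not valid UTF-8 decode to lone surrogates (\udcXX)
# and re-encode back to exactly the original bytes.
raw = "こんにちは".encode("euc-jp")             # EUC-JP bytes, not valid UTF-8
text = raw.decode("utf-8", "surrogateescape")   # bad bytes -> \udca4, \udcb3, ...
assert text.encode("utf-8", "surrogateescape") == raw
```

This is why PYTHONIOENCODING needs an error handler such as backslashreplace to print such strings at all: the lone surrogates cannot be encoded by strict UTF-8.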
Sworddragon added the comment:
> What do you mean by "make the C locale"?
I was pointing to the Platform Support Changes of PEP 538.
> I'm not sure of the name of each mode yet.
>
> After having written the "Use Cases" section and especially the
> Moj
Sworddragon added the comment:
On looking into PEP 538 and PEP 540 I think PEP 540 is the way to go. It
provides an option for a stronger encapsulation of the de-/encoding logic
between the interpreter and the developer. Instead of caring about error
handling, the developer now has to care
Sworddragon added the comment:
The point is that this ticket proposes using the surrogateescape error handler
for sys.stdout and sys.stdin under the C locale. I have never used
surrogateescape explicitly before and thus have no experience with it, and
consulting the documentation mentions throwing
Sworddragon added the comment:
Bug #28180 has caused me to take a look at the "encoding" issue that this and
the earlier tickets have tried to solve more or less. Being a bit unsure what
the root cause and intention for all this was, I'm now at a point to actually
check this ti
Changes by Sworddragon <sworddrag...@aol.com>:
--
nosy: +Sworddragon
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue28180>
___
Sworddragon added the comment:
The proposal is in the startpost.
--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26656>
___
Sworddragon added the comment:
Maybe it sounded a bit confusing, but this text was not meant as a direct
match against the documentation.
--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/i
Sworddragon added the comment:
If we decide to word it this way eventually the sentence for "Windows only"
needs to be updated too. Not sure about the other sentences, as they sound a
bit as if they would guarantee what they say. Maybe somebody else
Sworddragon added the comment:
Edit: Forgot to say that the first point also applies to the options "Install
launcher for all users (recommended)" and "for all users (requires elevation)"
or "Install for all users" dependent on wha
New submission from Sworddragon:
On installing Python 3.5.1 with the Windows installer I have noticed some
things that could maybe be enhanced:
- The launcher provides the option "Add Python 3.5 to PATH" on the first page.
But if a custom installation is chosen the option app
Sworddragon added the comment:
I'm assuming this means this must be called on every interactive Python
instance. If not, just correct me.
While this might be an enhancement it would still be too inconvenient.
Basically something that can be configured permanently for example with
~/.bashrc
New submission from Sworddragon:
The documentation for re.compile says "Compile a regular expression pattern
into a regular expression object, which can be used for matching using its
match() and search() methods, described below." which implies that match() and
search() are the on
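For reference, compiled pattern objects expose more methods than match() and search(); a short sketch:

```python
import re

# A compiled pattern also offers findall(), finditer(), fullmatch(),
# split() and sub(), not only match() and search().
pat = re.compile(r"\d+")
assert pat.findall("a1b22") == ["1", "22"]
assert pat.fullmatch("123") is not None
assert pat.sub("#", "a1b22c") == "a#b#c"
```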
Sworddragon added the comment:
I'm wondering what the recursion limit is if -l and -r are not given. Does it
default to 10 too, or is there no limit? If the former is the case, maybe this
should also get documented.
--
___
Python tracker <
New submission from Sworddragon:
After ticket #21404 got solved I'm noticing that the default value for the
compresslevel argument is not mentioned. Maybe this can be documented too.
--
components: Library (Lib)
messages: 253592
nosy: Sworddragon
priority: normal
severity: normal
Sworddragon added the comment:
In tarfile; the compresslevel argument is mentioned there, but not its
default value.
--
title: Default value for compresslevel is not documented -> tarfile: Default
value for compresslevel is not documen
New submission from Sworddragon:
I'm noticing some things in the argparse help output that could maybe be enhanced.
Here is a testcase that produces an example output:
#!/usr/bin/python3 -BEOObbs
# coding=utf-8
import argparse
arguments = argparse.ArgumentParser()
arguments.add_argument('-t
Sworddragon added the comment:
> The formatting of choices has been discussed in other bug/issues.
What was the reason that showing the choices only once by default was not chosen?
--
___
Python tracker <rep...@bugs.python.org>
<http://bug
New submission from Sworddragon:
In the argparse module I'm noticing that for example an optional argument and a
positional argument can target the same namespace. Here is a testcase:
#!/usr/bin/python3 -BEOObbs
# coding=utf-8
import argparse
arguments = argparse.ArgumentParser
New submission from Sworddragon:
It seems Python's own regular expressions aren't capable of handling nested
structures, so maybe this could be enhanced like it is done with PCRE's
recursive patterns.
--
components: Library (Lib)
messages: 251951
nosy: Sworddragon
priority: normal
severity
Sworddragon added the comment:
In this case probably all is fine then. But there is a minor thing I noticed
from one of your previous posts: You said parser.add_argument returns an Action
object but I noticed that this is not mentioned on "16.4.3. The add_argument()
method". I
Sworddragon added the comment:
I'm actually not fully sure why you are telling me all this, especially in
this specific way.
But I would also go the other way, by removing ArgumentParser.get_default and
ArgumentParser.set_defaults if we think the current ways of getting/setting are
enough
Sworddragon added the comment:
I was myself in a situation where I needed the values of the choices key of 2
specific arguments. Currently I'm solving this by storing them in variables,
but it could probably be cleaner by using a getter.
--
___
Python
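As mentioned above in the thread, add_argument() returns the Action object, which already works as a getter for keys like choices; a short sketch (argument names are illustrative):

```python
import argparse

# add_argument() returns the Action object, so choices and default can be
# read back later without a dedicated getter on the parser.
parser = argparse.ArgumentParser()
arch = parser.add_argument("--arch", choices=["x86", "arm"], default="x86")
assert arch.choices == ["x86", "arm"]
assert arch.default == "x86"
```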
New submission from Sworddragon:
On making a look at the argparse documentation to figure out if I can get the
value of the choices key of a specific argument after argument parsing or if I
have to implement it myself I noticed that there is a getter/setter for the
default key
Sworddragon added the comment:
I was thinking about cases where the default is variable for example a call to
platform.machine() while the choices list (and the script itself) might not
support all exotic architectures for its use that might be returned now or in a
future version of Python
Changes by Sworddragon sworddrag...@aol.com:
--
nosy: +Sworddragon
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9625
___
New submission from Sworddragon:
I'm seeing in the documentation 8 different os.exec* functions that differ only
slightly. I think, from the way they differ, they could all be merged into
one function, which could look like this:
os.exec(file, args, env, use_path)
- file is the path
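The proposed merge can be sketched as a thin dispatcher over the existing variants (the name exec_ and its signature are hypothetical, following the proposal; like the real os.exec* functions, it never returns):

```python
import os

def exec_(file, args, env=None, use_path=False):
    # Hypothetical unified wrapper over the eight os.exec* variants:
    # env selects the *e forms, use_path selects the *p forms.
    if use_path:
        if env is None:
            os.execvp(file, args)
        else:
            os.execvpe(file, args, env)
    else:
        if env is None:
            os.execv(file, args)
        else:
            os.execve(file, args, env)
```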
New submission from Sworddragon:
The library compileall has the option -f to force the rebuilding of the
bytecode files so I thought maybe there could also be an option to delete all
bytecode files which no longer have a corresponding non-bytecode library.
--
components: Library (Lib)
messages
Sworddragon added the comment:
If there is no real use for socket.recvfrom(0) (and then probably
socket.recv(0) too) maybe a bufsize argument of 0 should throw an exception?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
New submission from Sworddragon:
For example on sending ICMP packets and receiving the data socket.recv(1) does
wait for data while socket.recv(0) doesn't. socket.recvfrom(1) does wait for
data too but I'm also seeing that socket.recvfrom(0) is waiting for data which
doesn't look correct
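The recv(0) behaviour can be demonstrated with a socketpair (observed on Linux; other platforms may differ):

```python
import socket

# On Linux, recv(0) returns b"" immediately and consumes nothing,
# while recv(1) blocks until at least one byte is available.
a, b = socket.socketpair()
a.sendall(b"hi")
assert b.recv(0) == b""   # returns at once, data still queued
assert b.recv(1) == b"h"
assert b.recv(1) == b"i"
a.close(); b.close()
```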
New submission from Sworddragon:
From the documentation:
os.O_DSYNC
os.O_RSYNC
os.O_SYNC
os.O_NDELAY
os.O_NONBLOCK
os.O_NOCTTY
os.O_SHLOCK
os.O_EXLOCK
os.O_CLOEXEC
These constants are only available on Unix.
But os.O_SHLOCK and os.O_EXLOCK are not available on my system (Linux
3.16.7
Changes by Sworddragon sworddrag...@aol.com:
--
type:  -> behavior
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23105
___
Changes by Sworddragon sworddrag...@aol.com:
--
components: +Library (Lib)
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23105
___
Sworddragon added the comment:
I have missed the first part of the documentation and am not sure if something
really needs to be changed. But if you think so, maybe comments like "These
constants are only available on Unix." could be extended by the word
"commonly", like "These constants
Sworddragon added the comment:
It works if "-q 0" is given, without the need of a workaround. So this was
just a feature of apt that was causing this behavior. I think there is nothing
more to do here, so I'm closing this ticket.
--
resolution:  -> not a bug
status: open -> closed
New submission from Sworddragon:
There is currently shlex.split() that is for example useful to split a command
string and pass it to subprocess.Popen with shell=False. But I'm missing a
function that does the opposite: Building the command string from a list that
could for example
Sworddragon added the comment:
Yes, it is possible to do this with a few other commands. But I think it would
be still a nice enhancement to have a direct function for it.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
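A sketch of such a function built from the existing shlex.quote(); Python 3.8 later added exactly this as shlex.join():

```python
import shlex

# The inverse of shlex.split(): quote each argument and join with spaces,
# so the result can be passed safely to a shell.
args = ["grep", "-r", "hello world", "my file.txt"]
cmd = " ".join(shlex.quote(a) for a in args)
assert shlex.split(cmd) == args
```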
New submission from Sworddragon:
The application apt-get on Linux does scale its output dependent on the size
of the terminal, but I have noticed that there are differences if I'm calling
apt-get directly or with a subprocess without shell and creationflags set (so
that creationflags should
New submission from Sworddragon:
On reading the output of an application (for example "apt-get download
firefox") that dynamically changes a line (possibly with the terminal control
character \r) I have noticed that read(1) does not read the output until it has
finished with a newline
Changes by Sworddragon sworddrag...@aol.com:
Removed file: http://bugs.python.org/file36661/test.py
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22443
Sworddragon added the comment:
Edit: Updated testcase as I forgot to flush the output (in case somebody hints
to it).
--
Added file: http://bugs.python.org/file36662/test.py
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22443
Sworddragon added the comment:
> The buffering of stdout and/or stderr of your application probably
> changes if the application runs in a terminal (TTY) or if the output is
> redirected to a pipe (not a TTY). See the setvbuf() function.
This means in the worst case there is currently no official
Sworddragon added the comment:
> You don't need to compile Python. Just compile nobuffer.c to
> libnobuffer.so. See the documentation in nobuffer.c.
Strictly following the documentation does not work:
sworddragon@ubuntu:~/tmp$ gcc -shared -o nobuffer.so interceptor.c
gcc: error: interceptor.c
Sworddragon added the comment:
Why must stdin of the subprocess be closed so that a read() on stdout can
return?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22439
Sworddragon added the comment:
But this happens also on read(1). I'm not even getting partial output.
1. I'm calling diff in a way where it expects input to compare.
2. I'm writing and flushing to diff's stdin.
3. diff seems to not get this content until I close its stdin
Sworddragon added the comment:
Ah, now I see it. Thanks for your hint.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22439
___
Sworddragon added the comment:
I was able to compile the library, but after executing
"LD_PRELOAD=./libnobuffer.so ./test.py" I'm seeing no difference. The unflushed
output is still not being read with read(1).
--
___
Python tracker rep
Sworddragon added the comment:
"stdbuf -o 0 ./test.py" and "unbuffer ./test.py" don't change the result
either. Or is something wrong with my testcase?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22443
Changes by Sworddragon sworddrag...@aol.com:
Removed file: http://bugs.python.org/file36660/test.py
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22441
Changes by Sworddragon sworddrag...@aol.com:
Added file: http://bugs.python.org/file36667/test.py
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22441
Sworddragon added the comment:
Edit: Updated testcase as it contained an unneeded argument from an older
testcase (in case it confuses somebody).
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22441
New submission from Sworddragon:
On sending something to stdin of a process that was called with subprocess (for
example diff) I have figured out that all is working fine if stdin is closed
but flushing stdin will cause a hang (the same as nothing would be done). In
the attachments
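This is expected for filters like diff, which read stdin until EOF; flushing alone does not signal the end of input. A minimal sketch using communicate(), which writes the data, closes stdin and collects the output in one step (cat stands in for diff here, since any stdin-to-EOF filter behaves the same):

```python
import subprocess

# communicate() closes the child's stdin after writing, so the filter
# sees EOF and can produce its output without a hang.
p = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(b"hello\n")
assert out == b"hello\n"
```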
Sworddragon added the comment:
> and if you try to receive less bytes than the datagram size, the rest will
> be discarded, like UDP.
I'm wondering how it would then be possible to fetch packets of an unknown
size without using an extremely big buffer
Sworddragon added the comment:
> It is too late to change the unicode-escape encoding.
So it will stay at ISO-8859-1? If yes I think this ticket can be closed as wont
fix.
--
status: pending -> open
___
Python tracker rep...@bugs.python.org
http
Sworddragon added the comment:
I have retested this with the correct linked version and it is working fine now
so I'm closing this ticket.
--
resolution:  -> not a bug
status: open -> closed
___
Python tracker rep...@bugs.python.org
http
New submission from Sworddragon:
If I'm receiving data from a socket (several bytes) and making the first call
to socket.recv(1) all is fine but the second call won't get any further data.
But doing this again with socket.recv(2) instead will successfully get the 2
bytes. Here is a testcase
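This matches documented stream-socket behaviour: recv(n) may return fewer than n bytes, so callers are expected to loop. A sketch of such a helper (the name recv_exact is illustrative):

```python
import socket

def recv_exact(sock, n):
    # recv(n) may return fewer than n bytes on a stream socket,
    # so keep calling it until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection early")
        buf += chunk
    return buf
```

Usage: with `a, b = socket.socketpair()`, after `a.sendall(b"abcd")` a call to `recv_exact(b, 4)` returns all four bytes regardless of how the kernel split them.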
New submission from Sworddragon:
This is a fork from this ticket: http://bugs.python.org/issue21404
tarfile has a compression level and now seems to be getting the missing
documentation for it. But there is still a compression level missing for
zipfile.
--
components: Library (Lib)
messages
Changes by Sworddragon sworddrag...@aol.com:
--
type:  -> enhancement
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21417
___
Sworddragon added the comment:
Sure, here is the new ticket: http://bugs.python.org/issue21417
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21404
Sworddragon added the comment:
Could it be that compress_level is not documented?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21404
Sworddragon added the comment:
Then this one is easy: the documentation just needs an update. But then there
is still zipfile, which doesn't provide (or at least document) a compression
level.
--
___
Python tracker rep...@bugs.python.org
http
Changes by Sworddragon sworddrag...@aol.com:
--
type:  -> enhancement
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21404
___
New submission from Sworddragon:
The tarfile/zipfile libraries don't seem to provide a direct way to specify
the compression level. I have now ported my code from subprocess to
tarfile/zipfile to achieve platform independency but would be happy if I could
also control the compression level
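For tarfile's gzip and bzip2 modes a compresslevel keyword is in fact accepted (the thread's point is that it was undocumented at the time); a short sketch writing into memory:

```python
import io
import tarfile

# tarfile.open() forwards compresslevel to the gzip/bzip2 backend.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz", compresslevel=9) as tf:
    info = tarfile.TarInfo("hello.txt")
    data = b"hello"
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))
assert buf.getvalue()[:2] == b"\x1f\x8b"  # gzip magic bytes
```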
Sworddragon added the comment:
The TarFile class provides more options. Alternatively a file object could be
used but this means additional code (and maybe IO overhead).
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21369
Sworddragon added the comment:
Interesting, after reading the documentation again I would now assume that
this is what **kwargs is for.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21369
New submission from Sworddragon:
tarfile.open() optionally supports a compression method in the mode argument
in the form of 'filemode[:compression]', but tarfile.TarFile() only supports
'a', 'r' and 'w'. Is there a special reason that tarfile.TarFile() doesn't
directly support
Sworddragon added the comment:
The documentation says that unicode_internal is deprecated since Python 3.3 but
not unicode_escape. Also, isn't unicode_escape different from utf-8? For
example my original intention was to convert 2 byte string characters to their
control characters
New submission from Sworddragon:
I have made some tests with encoding/decoding in conjunction with
unicode-escape and got some strange results:
>>> print('ä')
ä
>>> print('ä'.encode('utf-8'))
b'\xc3\xa4'
>>> print('ä'.encode('utf-8').decode('unicode-escape'))
Ã¤
>>> print('ä'.encode('utf-8').decode
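The results stop looking strange once one knows that unicode-escape interprets bytes as Latin-1 plus backslash escape sequences, not as UTF-8; a sketch:

```python
# unicode-escape decodes each byte as Latin-1 and expands \-escapes,
# so UTF-8 input comes out as mojibake.
utf8 = "ä".encode("utf-8")                           # b'\xc3\xa4'
assert utf8.decode("unicode-escape") == "\xc3\xa4"   # 'Ã¤', not 'ä'
assert b"\\n".decode("unicode-escape") == "\n"       # escapes are expanded
assert utf8.decode("utf-8") == "ä"                   # the intended decoding
```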
New submission from Sworddragon:
I have noticed that since Python 3.4 the interactive mode does log all commands
to ~/.python_history. This caused me to switch into normal user mode and look
for a solution. With Google I have found the related entry in the documentation:
On systems
Sworddragon added the comment:
It sounds to me like "del dir_list" does only delete the copied list while
"del dir_list[:]" accesses the reference and empties that list. If I'm not
wrong with this assumption, I think you meant "dir_list" instead of "root_dir"
in your post.
But thanks
But thanks
New submission from Sworddragon:
The following was tested on Linux. In the attachments is the example code and
here is my output:
sworddragon@ubuntu:/tmp$ ./test.py
1
I'm deleting the list of directories on every recursion and skipping if I'm
directly in /proc (which is theoretically
New submission from Sworddragon:
With Python 3.4.0 RC1, on using the command "unoconv -o test.pdf test.odt" I'm
getting a segmentation fault. In the attachments are the used LibreOffice
document and a GDB backtrace. The used version of unoconv was 0.6-6 from Ubuntu
14.04 dev and can be currently
Changes by Sworddragon sworddrag...@aol.com:
Added file: http://bugs.python.org/file34207/test.odt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20756
Sworddragon added the comment:
> Was it rebuilt linked against Python 3.4, instead of Python 3.3?
I don't know. Is ../Python/pystate.c that throws the error not a part of Python?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
Sworddragon added the comment:
> The fact that write() uses sys.getfilesystemencoding() is either
> a defect or a bad design (I leave the decision to you).
I have good news for you. write() does not call sys.getfilesystemencoding(),
because the encoding is set at the time the file is opened
Sworddragon added the comment:
Instead, open() determines the default encoding by calling the same function
that's used to initialize Py_FileSystemDefaultEncoding: get_locale_encoding()
in Python/pythonrun.c. Which on POSIX systems calls the POSIX function
nl_langinfo().
open() will use
Sworddragon added the comment:
By the way I have found a valid use case for LANG=C. udev and Upstart are not
setting LANG which will result in the ascii encoding for invoked Python
scripts. This could be a problem since these applications are commonly dealing
with non-ascii filesystems
Sworddragon added the comment:
https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/1235483
After opening many hundreds of tickets I would say: With luck this ticket will get
a response within the next year. But in the worst case it will be simply
refused.
I found examples using LANG=$LANG
Sworddragon added the comment:
What would happen if we call this example script with LANG=C once the patch is applied?:
---
import os
for name in sorted(os.listdir('ä')):
    print(name)
---
Would it throw an exception on os.listdir('ä')?
--
___
Python
New submission from Sworddragon:
From the documentation: The '*', '+', and '?' qualifiers are all greedy;
But this is not the case for '?'. In the attachments is an example which shows
this: re.search(r'1?', '01') should find '1' but it doesn't find anything.
--
components: Library
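The reported result is reproducible but consistent with greediness: re.search() returns the leftmost match, and at position 0 the optional '1' is absent, so the empty string matches there before the '1' at position 1 is ever considered. A sketch:

```python
import re

# search() is leftmost-first: r'1?' matches the empty string at index 0.
assert re.search(r"1?", "01").group() == ""
# Once the match is anchored past index 0, the greedy '?' does consume '1'.
assert re.search(r"01?", "01").group() == "01"
```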
Sworddragon added the comment:
> I'm closing the issue as invalid, because Python 3 behaviour is correct and
> must not be changed.
The fact that write() uses sys.getfilesystemencoding() is either a defect or a
bad design (I leave the decision to you).
But I'm still missing a reply to my
Sworddragon added the comment:
> If the environment variable is not enough
There is a big difference between environment variables and internal calls:
Environment variables are user-space while builtin/library functions are
developer-space.
> I have good news for you. write() does not call
Sworddragon added the comment:
You should keep things simpler:
- Python and the operating system/filesystem are in a client-server
relationship and Python should validate everything.
- It doesn't matter what you will finally decide to be the default encoding on
various places - all will provide
Sworddragon added the comment:
Using an environment variable is not the holy grail for this. On writing a
non-single-user application you can't expect the user to set extra environment
variables.
If compatibility is the only reason in my opinion it would be much better to
include something
Sworddragon added the comment:
It is nice that you could fix the documentation due to this report, but this
was just a side effect - so closing this report and moving it to Documentation
was maybe wrong.
--
___
Python tracker rep...@bugs.python.org
Sworddragon added the comment:
> This idea was already proposed in issue #8622, but it was a big fail.
Not completely: If your locale is UTF-8 and you want to operate on a UTF-8
filesystem all is fine. But what if you then want to operate on an NTFS
(non-UTF-8) partition? As I know
Sworddragon added the comment:
I have extended the benchmark a little and here are my new results:
concatenate_string() : 0.037489
concatenate_bytes(): 2.920202
concatenate_bytearray(): 0.157311
concatenate_string_io(): 0.035397
concatenate_bytes_io
Sworddragon added the comment:
> We aren't going to add the optimization shortcut for bytes
There is still the question: Why isn't this going to be optimized?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19801
New submission from Sworddragon:
It seems that print() and write() (and maybe other of such I/O functions) are
relying on sys.getfilesystemencoding(). But these functions are not operating
with filenames but with their content. In the attachments is an example script
which demonstrates
New submission from Sworddragon:
sys.getfilesystemencoding() says for Unix: On Unix, the encoding is the user’s
preference according to the result of nl_langinfo(CODESET), or 'utf-8' if
nl_langinfo(CODESET) failed.
In my opinion relying on the locale environment is risky since
filesystem
New submission from Sworddragon:
In the attachments is a testcase which does concatenate 10 times a string
and then 10 times a bytes object. Here is my result:
sworddragon@ubuntu:~/tmp$ ./test.py
String: 0.03165316581726074
Bytes : 0.5805566310882568
--
components: Benchmarks
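The gap comes from CPython's in-place optimization for str concatenation, which bytes objects lack; accumulating into a bytearray or io.BytesIO avoids the quadratic cost, as a sketch:

```python
import io

# bytes += copies the whole buffer each time; bytearray and BytesIO
# grow in place, then convert once at the end.
buf = bytearray()
for _ in range(1000):
    buf += b"x"
data = bytes(buf)
assert data == b"x" * 1000

bio = io.BytesIO()
for _ in range(1000):
    bio.write(b"x")
assert bio.getvalue() == data
```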
New submission from Sworddragon:
socket(7) does contain SO_PRIORITY but trying to use this value will result in
this error: AttributeError: 'module' object has no attribute 'SO_PRIORITY'
--
components: Library (Lib)
messages: 204506
nosy: Sworddragon
priority: normal
severity: normal
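Until the constant is exported by the socket module, the numeric value can be used directly. A sketch assuming Linux, where SO_PRIORITY is 12 in <asm-generic/socket.h> (the getattr fallback covers Python builds that do export it):

```python
import socket

# SO_PRIORITY = 12 on Linux; priorities 0-6 can be set without privileges.
SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 6)
assert s.getsockopt(socket.SOL_SOCKET, SO_PRIORITY) == 6
s.close()
```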
Sworddragon added the comment:
After checking it: Yes it does, thanks for the hint. In this case I'm closing
this ticket now.
--
resolution:  -> invalid
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19671
Sworddragon added the comment:
> Hi. Since Python 3.2, compileall functions support the optimization level
> through the `optimize` parameter.
> There is no command-line option to control the optimization level used by the
> compile() function, because the Python interpreter itself already
New submission from Sworddragon:
Currently on calling one of the compileall functions it is not possible to
pass the optimization level as an argument. The bytecode will be created
depending on the optimization level of the current script instance. But if a
script wants to
compile .pyc files
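For reference, the `optimize` parameter mentioned in the reply decouples this from the interpreter's own -O level; a sketch assuming Python 3.5+, where the optimized cache file carries an .opt-2 tag:

```python
import compileall
import pathlib
import tempfile

# Compile a throwaway module at optimization level 2, independent of
# how this interpreter itself was started.
d = pathlib.Path(tempfile.mkdtemp())
(d / "mod.py").write_text('"""docstring"""\nx = 1\n')
assert compileall.compile_dir(str(d), optimize=2, quiet=1)
pyc = list((d / "__pycache__").glob("*.opt-2.pyc"))
assert len(pyc) == 1
```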
New submission from Sworddragon:
Currently the documentation sometimes mentions specific exceptions but most of
the time it does not. As I'm often catching exceptions to ensure high
stability this gets a little difficult. For example print() can trigger a
BrokenPipeError and most file functions
Sworddragon added the comment:
I'm fine with this decision as it would be a lot of work. But this also means
programming with Python isn't suited for high-stability applications - due to
the lack of important information in the documentation.
An alternate way would be to rely on error
Sworddragon added the comment:
Correct, but the second part of my last message was just my opinion that I
would prefer error codes over exceptions, because they already imply a
complete documentation for this part via return codes/error arguments/other
potential ways
New submission from Sworddragon:
All functions of compileall provide a maxlevels argument which defaults
to 10. But it is currently not possible to disable this recursion limitation.
Maybe it would be useful to have a special value like -1 to disable this
limitation and allow to compile
Sworddragon added the comment:
> Do realize this is a one-time memory cost, though, because next execution
> will load from the .pyo and thus will never load the docstring into memory.
Except in 2 cases:
- The bytecode was previously generated with -O.
- The bytecode couldn't be written