[issue46766] Add a class for file operations so a syntax such as open("file.img", File.Write | File.Binary | File.Disk) is possible.

2022-02-16 Thread Isaac Johnson


Isaac Johnson  added the comment:

Well, that is how it works with open(): it is implemented in the io module and 
then added to builtins.

--




[issue46766] Add a class for file operations so a syntax such as open("file.img", File.Write | File.Binary | File.Disk) is possible.

2022-02-16 Thread Isaac Johnson


Isaac Johnson  added the comment:

Well, it wouldn't need to be imported. I was working on including it in 
builtins, like open(). It wouldn't be very convenient if it needed to be 
imported.

--




[issue46766] Add a class for file operations so a syntax such as open("file.img", File.Write | File.Binary | File.Disk) is possible.

2022-02-15 Thread Isaac Johnson


Change by Isaac Johnson :


--
type:  -> enhancement




[issue46766] Add a class for file operations so a syntax such as open("file.img", File.Write | File.Binary | File.Disk) is possible.

2022-02-15 Thread Isaac Johnson


Isaac Johnson  added the comment:

I'm currently working on implementing this. It will probably be a few weeks.

--




[issue46766] Add a class for file operations so a syntax such as open("file.img", File.Write | File.Binary | File.Disk) is possible.

2022-02-15 Thread Isaac Johnson


New submission from Isaac Johnson :

I think it would be great for something like this to live alongside the io 
module. It would improve code readability.
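
For illustration, a rough sketch of what such a flags class could look like 
using enum.Flag; the File class and its members below are hypothetical names 
mirroring the issue title, not an existing API:

from enum import Flag, auto

class File(Flag):
    # Hypothetical flag names taken from the issue title.
    Read = auto()
    Write = auto()
    Binary = auto()
    Disk = auto()

flags = File.Write | File.Binary | File.Disk
print(File.Write in flags)   # True
print(File.Read in flags)    # False
print(flags)                 # combined flag value (repr varies by Python version)

open() itself would still need to translate such a combination into its 
existing mode strings; that mapping is not sketched here.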

--
components: Library (Lib)
messages: 413315
nosy: isaacsjohnson22
priority: normal
severity: normal
status: open
title: Add a class for file operations so a syntax such as open("file.img", 
File.Write | File.Binary | File.Disk) is possible.
versions: Python 3.11




[issue37578] Change Glob: Allow Recursion for Hidden Files

2021-12-16 Thread Isaac Muse


Isaac Muse  added the comment:

If this were to be done, you'd want to make sure character sequences also match 
hidden files: [.]. Handling just * and ? would be incomplete. And if ** were 
allowed to match a leading dot, it still should not match . or ..
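
For reference, a small illustration (using a throwaway temporary directory) of 
the current behaviour being discussed, where neither * nor a [.] sequence 
matches a leading dot:

import glob
import os
import tempfile

d = tempfile.mkdtemp()
for name in (".hidden", "visible"):
    open(os.path.join(d, name), "w").close()

print(glob.glob(os.path.join(d, "*")))          # only 'visible'
print(glob.glob(os.path.join(d, "[.]hidden")))  # no match: [.] does not match a leading dot
print(glob.glob(os.path.join(d, ".*")))         # '.hidden' only with an explicit dot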

--
nosy: +Isaac Muse




[issue13475] Add '--mainpath'/'--nomainpath' command line options to override sys.path[0] initialisation

2021-11-26 Thread Neil Isaac


Change by Neil Isaac :


--
nosy: +nisaac




[issue45552] Close asyncore/asynchat/smtpd issues and list them here

2021-10-21 Thread Isaac Boukris


Change by Isaac Boukris :


--
keywords: +patch
nosy: +Isaac Boukris
nosy_count: 1.0 -> 2.0
pull_requests: +27397
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/11770




[issue45255] sqlite3.connect() should check if the sqlite file exists and throw a FileNotFoundError if it doesn't

2021-09-21 Thread Isaac Boates


New submission from Isaac Boates :

I was just using the sqlite3 package and was very confused when trying to open 
an sqlite database from a relative path, because the only error provided was:

  File "/path/to/filepy", line 50, in __init__
self.connection = sqlite3.connect(path)
sqlite3.OperationalError: unable to open database file

It turns out I was just executing Python from the wrong location and therefore 
my relative path was broken. Not a big problem. But it was confusing because it 
only throws this generic OperationalError.

Could it instead throw a FileNotFoundError if the db simply doesn't exist at 
the specified path?
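
For what it's worth, a user-side workaround is possible today. The sketch 
below (the connect_existing helper name is just an example) checks for the 
file first and also opens the database via an SQLite URI with mode=rw, so a 
missing file is never silently created:

import os
import sqlite3

def connect_existing(path):
    """Connect to an existing SQLite database or raise FileNotFoundError."""
    if not os.path.exists(path):
        raise FileNotFoundError(f"no such database file: {path}")
    # mode=rw keeps sqlite3 from creating a brand-new empty database file.
    return sqlite3.connect(f"file:{path}?mode=rw", uri=True)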

--
messages: 402309
nosy: iboates
priority: normal
severity: normal
status: open
title: sqlite3.connect() should check if the sqlite file exists and throw a 
FileNotFoundError if it doesn't
versions: Python 3.8




[issue45038] Bugs

2021-08-28 Thread Jonathan Isaac


New submission from Jonathan Isaac :

Jonathan Isaac

Sent with Aqua Mail for Android
https://www.mobisystems.com/aqua-mail

--
messages: 400479
nosy: bonesisaac1982
priority: normal
severity: normal
status: open
title: Bugs




[issue45037] theme-change.py for tkinter lib

2021-08-28 Thread Jonathan Isaac


Jonathan Isaac  added the comment:

Bugs

--
components: +Parser
nosy: +lys.nikolaou, pablogsal
type:  -> crash
versions: +Python 3.11, Python 3.6




[issue45037] theme-change.py for tkinter lib

2021-08-28 Thread Jonathan Isaac


Jonathan Isaac  added the comment:

Get the code!

--
nosy: +bonesisaac1982




[issue35673] Loader for namespace packages

2021-07-16 Thread Isaac


Isaac  added the comment:

Not sure if it's proper etiquette to bump issues on the tracker, but is there 
any interest in this issue for 3.11?

--
nosy: +fwahhab




[issue44380] glob.glob handling of * (asterisk) wildcard is broken

2021-06-10 Thread Isaac Muse


Isaac Muse  added the comment:

Sadly, this is because pathlib's glob and glob.glob use different 
implementations. And glob.glob does not provide something equivalent to a 
DOTALL flag that would allow a user to glob hidden files without explicitly 
writing the leading dot in the pattern.

--
nosy: +Isaac Muse




[issue44175] What do "cased" and "uncased" mean?

2021-05-21 Thread Isaac Ge


Isaac Ge  added the comment:

Or we could integrate the explanation of uncased characters into the footnote 
for cased characters, and attach that footnote to "str.istitle()" and 
"str.upper()" as well.

--




[issue44175] What do "cased" and "uncased" mean?

2021-05-21 Thread Isaac Ge


Isaac Ge  added the comment:

@ Josh Rosenberg Sorry, I mistook "follow" as "be followed by". Thanks to your 
explication, the document is coherent. I admit that I cannot conjure up any 
better alternative.

I noticed that "cased character" is explained via the footnote: 
https://docs.python.org/3/library/stdtypes.html?highlight=istitle#id6

So it may be better to add a footnote for "uncased characters" as well, like 
the ones in "str.istitle()" and "str.upper()".

By the way, the footnote for "cased character" is a bit confusing because of 
the curt abbreviations "Lu", "Ll", and "Lt". I did not understand these until I 
found out they refer to Unicode general categories, so we could add a link 
pointing to the relevant Unicode documentation.

--




[issue44175] What do "cased" and "uncased" mean?

2021-05-19 Thread Isaac Ge


Isaac Ge  added the comment:

Why does "a".istitle() return "False" while it is not followed by any uncased 
character?
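
For reference, a few concrete results; besides the "follow" rule, the method 
also requires at least one cased (upper- or titlecase) character somewhere in 
the string:

print("a".istitle())            # False: no uppercase/titlecase character at all
print("A".istitle())            # True
print("Hello World".istitle())  # True
print("Hello world".istitle())  # False: lowercase 'w' follows the uncased space
print("HELLO".istitle())        # False: uppercase 'E' follows the cased 'H'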

--




[issue44175] What do "cased" and "uncased" mean?

2021-05-18 Thread Isaac Ge


Change by Isaac Ge :


--
title: What does "cased" and "uncased" mean? -> What do "cased" and "uncased" 
mean?




[issue44175] What does "cased" and "uncased" mean?

2021-05-18 Thread Isaac Ge


New submission from Isaac Ge :

str.istitle(): Return True if the string is a titlecased string and there is at 
least one character, for example uppercase characters may only follow uncased 
characters and lowercase characters only cased ones. Return False otherwise.

I saw this description in the doc. But what do "cased" and "uncased" mean? 
I looked them up in a dictionary, and the latter only says: "cased in something: 
completely covered with a particular material".

I think "cased" may mean "capitalized", but, if so, that usage is not endorsed 
by dictionaries, so I find the word confusing or informal. The same goes for 
"uncased".

--
assignee: docs@python
components: Documentation
messages: 393920
nosy: docs@python, otakutyrant
priority: normal
severity: normal
status: open
title: What does "cased" and "uncased" mean?
versions: Python 3.9




[issue42830] tempfile mkstemp() leaks file descriptors if os.close() is not called

2021-01-28 Thread Isaac Young


Isaac Young  added the comment:

Perhaps the documentation should be more explicit, but I wouldn't say this is 
an issue. Both mkstemp and mkdtemp are low-level functions which are intended 
to have this kind of flexibility.

os.unlink, and the equivalent os.remove, are POSIX-defined functions which 
always delete the name from the filesystem, but the file itself remains 
accessible as long as there are open file descriptors referencing it. So 
calling os.close(file_descriptor) is actually how you are expected to use this 
API.

Is there any reason you don't want to use [Named]TemporaryFile? They are high 
level interfaces which handle the cleanup.
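
A small sketch contrasting the two levels of API mentioned above:

import os
import tempfile

# Low-level API: the caller owns the descriptor and must clean up explicitly.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"scratch data")
finally:
    os.close(fd)     # release the file descriptor
    os.unlink(path)  # remove the name from the filesystem

# High-level API: cleanup happens automatically when the context exits.
with tempfile.NamedTemporaryFile() as tmp:
    tmp.write(b"scratch data")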

--
nosy: +StillSubjectToChange




[issue39682] pathlib.Path objects can be used as context managers

2020-03-05 Thread Isaac Muse


Isaac Muse  added the comment:

Wrong thread sorry

--




[issue39856] glob : some 'unix style' glob items are not supported

2020-03-05 Thread Isaac Muse


Isaac Muse  added the comment:

Brace expansion does not currently exist in Python's glob. You'd have to use a 
third party module to expand the braces and then run glob on each returned 
pattern, or use a third party module that implements a glob that does it for 
you.

Shameless plug:

Brace expansion: https://github.com/facelessuser/bracex

Glob that does it for you (when the feature is enabled): 
https://github.com/facelessuser/wcmatch

Now whether Python should integrate such behavior by default is another 
question.
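
For completeness, a rough stdlib-only sketch of the "expand the braces 
yourself, then glob each pattern" approach (not a full Bash-compatible 
expander: no nesting, ranges, or escapes):

import glob
import re

def expand_braces(pattern):
    """Expand simple {a,b,c} alternatives in a glob pattern."""
    m = re.search(r"\{([^{}]*)\}", pattern)
    if not m:
        return [pattern]
    head, tail = pattern[:m.start()], pattern[m.end():]
    return [expanded
            for alt in m.group(1).split(",")
            for expanded in expand_braces(head + alt + tail)]

patterns = expand_braces("src/*.{py,txt}")
print(patterns)                                   # ['src/*.py', 'src/*.txt']
files = [f for p in patterns for f in glob.glob(p)]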

--
nosy: +Isaac Muse




[issue39682] pathlib.Path objects can be used as context managers

2020-03-05 Thread Isaac Muse


Isaac Muse  added the comment:

Brace expansion does not currently exist in Python's glob. You'd have to use a 
third party module to expand the braces and then run glob on each returned 
pattern, or use a third party module that implements a glob that does it for 
you.

Shameless plug:

Brace expansion: https://github.com/facelessuser/bracex

Glob that does it for you (when the feature is enabled): 
https://github.com/facelessuser/wcmatch

Now whether Python should integrate such behavior by default is another 
question.

--
nosy: +Isaac Muse




[issue39532] Pathlib: handling of `.` in paths and patterns creates unmatchable paths

2020-02-02 Thread Isaac Muse


Isaac Muse  added the comment:

The more I think about this, the more I think the normalization of paths is 
actually fine; it is the normalization of the patterns that is problematic, or 
rather the difference in normalization. I could live with the pattern 
normalization of `.` and trailing `/` if it were at least consistent with what 
happens to paths. I still find the modification of the glob pattern in this 
manner surprising, but at least it wouldn't cause cases like this to fail.

--




[issue39532] Pathlib: handling of `.` in paths and patterns creates unmatchable paths

2020-02-02 Thread Isaac Muse


New submission from Isaac Muse :

It appears that the pathlib library strips out `.` in glob paths when they 
represent a directory. This is kind of a naive approach in my opinion, but I 
understand what it was trying to achieve.

When a path is given to pathlib, it normalizes it by stripping out 
non-essential things like `.` that represent directories, and strips out 
trailing `/` to give a path without unnecessary parts (the stripping of 
trailing `/` is another discussion).

But there is a small twist: when given an empty string or just a dot, you need 
something to stand in for the directory, so a `.` is allowed.

So it appears the idea was: since this normalization is applied to paths, why 
not apply it to the glob patterns as well. But the special logic that ensures 
you don't end up with an empty string to match does not get applied to the 
glob patterns. This creates unmatchable paths:

>>> import pathlib
>>> str(pathlib.Path('.'))
'.'
>>> pathlib.Path('.').match('.')
Traceback (most recent call last):
  File "", line 1, in 
  File "C:\Python36\lib\pathlib.py", line 939, in match
raise ValueError("empty pattern")
ValueError: empty pattern

I wonder if it is appropriate to apply this `.` stripping to glob patterns. 
Personally, I think the glob pattern, except for slash normalization, should 
remain unchanged, but if it is to be normalized above and beyond this, at the 
very least should use the exact same logic that is applied to the paths.

------
components: Library (Lib)
messages: 361259
nosy: Isaac Muse
priority: normal
severity: normal
status: open
title: Pathlib: handling of `.` in paths and patterns creates unmatchable paths
type: behavior
versions: Python 3.8




[issue29249] Pathlib glob ** bug

2020-01-31 Thread Isaac Muse


Isaac Muse  added the comment:

I think the idea of adding a globmatch function is a decent idea.

That is what I did in a library I wrote to get more out of glob than what 
Python offered out of the box: 
https://facelessuser.github.io/wcmatch/pathlib/#purepathglobmatch. 

Specifically, the difference is that globmatch is a pure match of a path; it 
doesn't do the implied `**` at the beginning of a pattern like match does. 
While it doesn't enable `**` by default, such features are controlled by flags:
>>> pathlib.Path("a/b/c/d/e.txt").match('a/*/**/*', flags=pathlib.GLOBSTAR)
True

This isn't to promote my library, but more to say, as a user, I found such 
functionality worth adding. I think it would be generally nice to have such 
functionality in some form in Python by default. Maybe something called 
`globmatch` that offers that could be worthwhile.

--
nosy: +Isaac Muse




ANN: calf 0.3.1

2019-11-09 Thread Isaac To
Hello all,

I'm glad to announce the first public release of calf, 0.3.1:
https://pypi.org/project/calf/0.3.1/

About
=

calf: Command Argument Loading Function for Python

Calf lets you remove all your command argument parsing code, at least
for simple cases. Only the implementation function is left, with
initialization code that uses calf to call this function. The command
argument parser is configured with a proper docstring, and perhaps
some annotations (argument types) and default values for the
parameters. In other words, things that you would write anyway.

The docstring can be written in Google, Sphinx, epydoc or Numpy style,
and the design makes it easy to swap the parsing function with your
own. In fact, you can customize such a wide range of characteristics
of calf that you can treat it as a slightly restricted frontend to
the ArgumentParser under the hood. Used in this way, you can treat
calf as a cute way to configure argparse.

This package shamelessly stole a lot of ideas from plac, but hopes to
be more focused on creating comfortable command line interfaces rather
than becoming a Swiss Army knife for programs with a text-only user
interface.




[issue37591] test_concurrent_future failed

2019-10-07 Thread Isaac Turner


Isaac Turner  added the comment:

I'm seeing the same error on Ubuntu LTS 16.04.6 on an ARM64 platform.

$ make && make test

...

0:09:18 load avg: 2.09 [ 78/416] test_complex
0:09:20 load avg: 2.08 [ 79/416] test_concurrent_futures

Traceback:
 Thread 0x007f61f0 (most recent call first):
  File "/home/minit/Python-3.7.4/Lib/threading.py", line 296 in wait
  File "/home/minit/Python-3.7.4/Lib/multiprocessing/queues.py", line 224 in 
_feed
  File "/home/minit/Python-3.7.4/Lib/threading.py", line 870 in run
  File "/home/minit/Python-3.7.4/Lib/threading.py", line 926 in _bootstrap_inner
  File "/home/minit/Python-3.7.4/Lib/threading.py", line 890 in _bootstrap

Thread 0x007f93fff1f0 (most recent call first):
  File "/home/minit/Python-3.7.4/Lib/selectors.py", line 415 in select
  File "/home/minit/Python-3.7.4/Lib/multiprocessing/connection.py", line 920 
in wait
  File "/home/minit/Python-3.7.4/Lib/concurrent/futures/process.py", line 361 
in _queue_management_worker
  File "/home/minit/Python-3.7.4/Lib/threading.py", line 870 in run
  File "/home/minit/Python-3.7.4/Lib/threading.py", line 926 in _bootstrap_inner
  File "/home/minit/Python-3.7.4/Lib/threading.py", line 890 in _bootstrap

Current thread 0x007fa6296000 (most recent call first):
  File "/home/minit/Python-3.7.4/Lib/test/test_concurrent_futures.py", line 917 
in _fail_on_deadlock
  File "/home/minit/Python-3.7.4/Lib/test/test_concurrent_futures.py", line 978 
in test_crash
  File "/home/minit/Python-3.7.4/Lib/unittest/case.py", line 628 in run
  File "/home/minit/Python-3.7.4/Lib/unittest/case.py", line 676 in __call__
  File "/home/minit/Python-3.7.4/Lib/unittest/suite.py", line 122 in run
  File "/home/minit/Python-3.7.4/Lib/unittest/suite.py", line 84 in __call__
  File "/home/minit/Python-3.7.4/Lib/unittest/suite.py", line 122 in run
  File "/home/minit/Python-3.7.4/Lib/unittest/suite.py", line 84 in __call__
  File "/home/minit/Python-3.7.4/Lib/unittest/suite.py", line 122 in run
  File "/home/minit/Python-3.7.4/Lib/unittest/suite.py", line 84 in __call__
  File "/home/minit/Python-3.7.4/Lib/test/support/testresult.py", line 162 in 
run
  File "/home/minit/Python-3.7.4/Lib/test/support/__init__.py", line 1915 in 
_run_suite
  File "/home/minit/Python-3.7.4/Lib/test/support/__init__.py", line 2011 in 
run_unittest
  File "/home/minit/Python-3.7.4/Lib/test/test_concurrent_futures.py", line 
1245 in test_main
  File "/home/minit/Python-3.7.4/Lib/test/support/__init__.py", line 2143 in 
decorator
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/runtest.py", line 228 in 
_runtest_inner2
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/runtest.py", line 264 in 
_runtest_inner
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/runtest.py", line 149 in 
_runtest
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/runtest.py", line 187 in 
runtest
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/main.py", line 390 in 
run_tests_sequential
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/main.py", line 488 in 
run_tests
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/main.py", line 642 in 
_main
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/main.py", line 588 in main
  File "/home/minit/Python-3.7.4/Lib/test/libregrtest/main.py", line 663 in main
  File "/home/minit/Python-3.7.4/Lib/test/regrtest.py", line 46 in _main
  File "/home/minit/Python-3.7.4/Lib/test/regrtest.py", line 50 in 
  File "/home/minit/Python-3.7.4/Lib/runpy.py", line 85 in _run_code
  File "/home/minit/Python-3.7.4/Lib/runpy.py", line 193 in _run_module_as_main

test test_concurrent_futures failed

--
nosy: +Isaac Turner




[issue35913] asyncore: allow handling of half closed connections

2019-02-07 Thread Isaac Boukris


Isaac Boukris  added the comment:

if not data:
    # a closed connection is indicated by signaling
    # a read condition, and having recv() return 0.
    self.handle_close()
    return b''

The above is the current code. Do you agree that it makes a wrong assumption 
and therefore behaves incorrectly? If so, how do you suggest fixing it without 
adding a new method?

Otherwise; maybe we can at least amend the comment in the code, and perhaps add 
a word or two to the doc.

--




[issue35913] asyncore: allow handling of half closed connections

2019-02-06 Thread Isaac Boukris


Isaac Boukris  added the comment:

> But I want to raise the flag again: why are we adding new functionality to 
> the *deprecated* module? It violates our own deprecation policy, doesn't it?

I'm biased but I see this as more of a small and subtle fix for the current 
logic that incorrectly treats this as a closed connection, rather than a new 
feature.
In addition, it could serve as a documentation hint for people troubleshooting 
edge cases in their code (especially people who are not familiar with these 
semantics).

> Point with asyncore/chat is that every time you try to fix them you end up 
> messing with the public API one way or another.

I'd agree about the first commit (avoid calling recv with size zero), which may 
change the behavior for a poorly written application that tries to read a chunk 
of zero bytes, but the second commit is straight forward and I can't see how it 
could break anything.

--




[issue35913] asyncore: allow handling of half closed connections

2019-02-06 Thread Isaac Boukris


Isaac Boukris  added the comment:

> It seems recv() returning b"" is an alias for "connection lost". E.g. in 
> Twisted:

To my understanding, technically the connection is not fully closed, it is just 
shut-down for reading but we can still perform write operations on it (that is, 
the client may be still waiting for the response). I can reproduce it with an 
http-1.0 client, I'll try to put up a test to demonstrate it more clearly.

From the recv() man page:
When a stream socket peer has performed an orderly shutdown, the return value 
will be 0 (the traditional "end-of-file" return).
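
A minimal stand-alone illustration of the situation (using a plain socketpair 
rather than asyncore): after the peer shuts down its write side, recv() 
returns b'' even though the connection can still carry a response in the other 
direction.

import socket

a, b = socket.socketpair()

a.sendall(b"request")
a.shutdown(socket.SHUT_WR)   # half-close: 'a' is done writing

print(b.recv(1024))          # b'request'
print(b.recv(1024))          # b'' -- the end-of-file return, not a dead connection

b.sendall(b"response")       # the other direction still works
print(a.recv(1024))          # b'response'

a.close()
b.close()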

--




[issue35913] asyncore: allow handling of half closed connections

2019-02-06 Thread Isaac Boukris


Isaac Boukris  added the comment:

Fair enough. I'll sign the CLA meanwhile you consider it.

In my opinion it may still be useful in addressing issues in existing projects 
written using asyncore (and maybe for python2 as well).

Thanks!

--




[issue35913] asyncore: allow handling of half closed connections

2019-02-06 Thread Isaac Boukris


New submission from Isaac Boukris :

When recv() returns 0 we may still have data to send. Add a handler for this 
case, which may happen with some protocols, notably HTTP/1.0.

Also, do not call recv with a buffer size of zero, to avoid an ambiguous return 
value (see the recv man page).

--
components: Library (Lib)
messages: 334944
nosy: Isaac Boukris
priority: normal
severity: normal
status: open
title: asyncore: allow handling of half closed connections
type: behavior




[issue34583] os.stat() wrongfully returns False for symlink on Windows 10 v1803

2018-09-04 Thread Isaac Shabtay


New submission from Isaac Shabtay :

Windows 10 Pro, v1803.
Created a directory: D:\Test
Created a symbolic link to it: C:\Test -> D:\Test

The current user has permissions to access the link, however os.stat() fails:

>>> os.stat('C:\\Test')
Traceback (most recent call last):
  File "", line 1, in 
PermissionError: [WinError 5] Access is denied: 'C:\\Test'

The only change in my system since this has last worked, is that I upgraded to 
v1803 (used to be v1709 up until about a week ago).

--
components: Library (Lib)
messages: 324605
nosy: Isaac Shabtay
priority: normal
severity: normal
status: open
title: os.stat() wrongfully returns False for symlink on Windows 10 v1803
versions: Python 3.7




[issue33766] Grammar Incongruence

2018-06-03 Thread Isaac Elliott


Isaac Elliott  added the comment:

Cool, thanks for the help. Should I submit a PR with the updated documentation?

--




[issue33766] Grammar Incongruence

2018-06-03 Thread Isaac Elliott


Isaac Elliott  added the comment:

I went through that document before I created this issue. I can't find anything 
which describes this behavior - could you be more specific please?

--




[issue33766] Grammar Incongruence

2018-06-03 Thread Isaac Elliott


Isaac Elliott  added the comment:

Thanks for the clarification. Is there a reference to this in the documentation?

--




[issue33766] Grammar Incongruence

2018-06-03 Thread Isaac Elliott


New submission from Isaac Elliott :

echo 'print("a");print("b")' > test.py

This program is grammatically incorrect according to the specification 
(https://docs.python.org/3.8/reference/grammar.html). But Python 3 runs it 
without issue.


It's this production here

simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE

which says 'simple_stmt's must be terminated by a newline. However, the program 
I wrote doesn't contain any newlines.

I think the grammar spec is missing some information, but I'm not quite sure 
what. Does anyone have an idea?
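
One way to see what is going on (assuming a reasonably recent Python 3) is the 
stdlib tokenize module, which mirrors the tokenizer's behaviour of 
synthesizing a NEWLINE token even when the source has no trailing newline:

import io
import tokenize

src = 'print("a");print("b")'   # note: no trailing newline in the source
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# the output ends with a synthesized NEWLINE token followed by ENDMARKER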

--
components: Interpreter Core
messages: 318617
nosy: Isaac Elliott
priority: normal
severity: normal
status: open
title: Grammar Incongruence
type: behavior
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8




[issue33149] Parser stack overflows

2018-03-26 Thread Isaac Elliott

Isaac Elliott <isaace71...@gmail.com> added the comment:

Because of the way recursive descent parsing works,

[[

is actually the minimal input required to reproduce this in python3.

In python2, the bug is still present, but requires a slightly deeper nesting:



--




[issue33149] Parser stack overflows

2018-03-26 Thread Isaac Elliott

New submission from Isaac Elliott <isaace71...@gmail.com>:

python3's parser stack overflows on deeply-nested expressions, for example:

[[]]

or

aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa(aa()

These are both minimal examples, so if you remove one level of nesting from 
either then python3 will behave normally.
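
The nested examples above were truncated in transit; an equivalent input can 
be regenerated programmatically, e.g. (the exact depth needed varies by 
version and platform):

depth = 200                      # deep enough for the old pgen parser's fixed stack
src = "[" * depth + "]" * depth  # i.e. [[[[ ... ]]]]
compile(src, "<test>", "eval")   # fails to parse (MemoryError or SyntaxError,
                                 # depending on the Python version)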

--
messages: 314485
nosy: Isaac Elliott
priority: normal
severity: normal
status: open
title: Parser stack overflows
versions: Python 3.6




[issue31263] Assigning to subscript/slice of literal is permitted

2017-08-23 Thread Isaac Elliott

Isaac Elliott added the comment:

Does backward compatibility take priority over correct behavior? What process 
is followed when fixing a bug causes a breaking change?

--




[issue31263] Assigning to subscript/slice of literal is permitted

2017-08-23 Thread Isaac Elliott

Isaac Elliott added the comment:

Yes I would disallow a script such as
`a = [0]; [5, a][1][:] = [3]` (note: your example of just `[5, a][1][:] = [3]` 
does not run, so I assumed it must be used in a situation like this)

Evaluating the target of an assignment is unnecessary, we can syntactically 
determine whether some left hand side can be assigned to:

* Identifiers are assignable (`a = 2`)
* Attribute accesses are assignable, provided that the left of the dot is 
assignable (`a.foo = 2`, `a.b.c.foo`, etc)
* Subscripts are assignable, provided that the outer expression is assignable 
(`a[1] = 2`, `a.foo[b] = 2`, `a[1][2][3] = 2`, `a.b[1].c[2] = 2`)
* Lists are assignable, provided that all their elements are assignable 
(`[a,b,c] = [1,2,3]`)
* Expression lists/tuples are assignable, provided that all their elements are 
assignable (`a, b = (1, 2)`, `(a,b,c) = (1,2,3)`)
* Unpackings are assignable, provided that their argument is assignable (`*a, = 
[1,2,3]`, `a, *b = [1,2,3]`)
* Slices are assignable, provided that the outer expression is assignable 
(`a[:] = [1,2,3]`, `a.foo[1:2] = [1]`
* Everything else is not assignable (did I forget anything?)

This can definitely be encoded as a context-free grammar, although I don't know 
if it will present conflicts in the parser generator.

I do think it's worth it. Python is one of the most widely used programming 
languages, and it's our responsibility to ensure it behaves correctly.

--




[issue31263] Assigning to subscript/slice of literal is permitted

2017-08-22 Thread Isaac Elliott

New submission from Isaac Elliott:

In Python 3.5 and 3.6 (at least), the language reference presents a grammar 
that disallows assignment to literals.

For example, `(a for 1 in [1,2,3])` is a syntax error, as is `(1, a) = (2, 3)`.

However the grammar doesn't prevent assignment to subscripted or sliced 
literals.

For example neither `(a for [1,2,3][0] in [1,2,3])` nor `([1,2,3][0], a) = (2, 
3)` are considered syntax errors.

Similar behavior is exhibited for slices.

The problem is that the `target_list` production 
(https://docs.python.org/3.5/reference/simple_stmts.html#grammar-token-target_list)
 reuses the `subscription` and `slicing` productions which both use the 
`primary` production, allowing literals on their left side.

--
messages: 300740
nosy: Isaac Elliott
priority: normal
severity: normal
status: open
title: Assigning to subscript/slice of literal is permitted
type: behavior
versions: Python 3.5, Python 3.6




[issue31085] Add option for namedtuple to name its result type automatically

2017-08-02 Thread Isaac Morland

Isaac Morland added the comment:

Not if one of the attributes is something that cannot be part of a typename:

>>> fields = ['def', '-']
>>> namedtuple ('test', fields, rename=True).__doc__
'test(_0, _1)'
>>> namedtuple ('__'.join (fields), fields, rename=True).__doc__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/collections.py", line 339, in namedtuple
    'alphanumeric characters and underscores: %r' % name)
ValueError: Type names and field names can only contain alphanumeric characters and underscores: 'def_-'
>>>

Which I admit is a weird thing to be doing, but duplicating attribute names
or trying to use a keyword as an attribute name (or anything else that
requires rename=True) is also weird.

Also it's far from clear that the pre-renaming field names are what is
wanted in the auto-generated typename. If I was actually using attribute
names that required renaming I would want the auto-generated typename to
match the renamed attributes. The original fieldnames play no part in the
operation of the namedtuple class or its instances once it has been
created: only the renamed fieldnames even remain reachable from the
namedtuple object.

Anyway I think I'm probably out at this point. I think Python development
is not a good cultural fit for me, based on this discussion. Which is
weird, since I love working in Python. I even like the whitespace
indentation, although admittedly not quite as much as I thought I would
before I tried it. I hugely enjoy the expressiveness of the language
features, combined with the small but useful set of immediately-available
library functions, together with the multitude of importable standard
modules backing it all up. But I should have known when functools.compose
(which ought to be almost the first thing in any sort of "functional
programming" library) was rejected that I should stay away from attempting
to get involved in the enhancement side of things.

--




[issue31085] Add option for namedtuple to name its result type automatically

2017-08-02 Thread Isaac Morland

Isaac Morland added the comment:

OK, so it's pretty clear this is heading towards a rejection, but I can't
help but respond to your points:

On 2 August 2017 at 01:12, Raymond Hettinger <rep...@bugs.python.org> wrote:

* This would be a potentially  confusing addition to the API.
>

I'm giving a natural meaning to providing a None where it is not permitted
now. The meaning is to provide a reasonable value for the missing
parameter. How could that be confusing? Also it's completely ignorable -
people don't have to pass None and get the auto-generated typename if they
don't want to.

> * It may also encourage bad practices that we don't want to see in real
> code.
>

What bad practices? There are lots of times when providing an explicit name
is a waste of effort. This provides a simple way of telling the library to
figure it out. Aren't there supposedly just two hard things in computer
science? Naming things, and cache invalidation. An opportunity to avoid
naming things that don't need to be specifically named is something worth
taking.

> * We want to be able to search for the namedtuple definition, want to have
> a meaningful repr, and want pickling to be easy.
>

You mean by searching for the typename in the source code? In my primary
usecase, the typename is computed regardless, so it doesn't appear in the
source code and can't be searched for. The other suggestion which appeared
at one point was passing "_" as the typename. This is going to be somewhat
challenging to search for also.

As to the meaningful repr, that is why I want auto-generation of the
typename. This is not for uses like this:

MyType = namedtuple ('MyType', ['a', 'b', 'c'])

It is for ones more like this:

rowtype = namedtuple (None, row_headings)

Or as it currently has to be:

rowtype = namedtuple ('rowtype', row_headings)

(leading to all the rowtypes being the same name, so less meaningful)

Or:

rowtype = namedtuple ('__'.join (row_headings), row_headings)

(which repeats the irrelevant-in-its-details computation wherever it is
needed and doesn't support rename=True, unless a more complicated
computation that duplicates code inside of namedtuple() is repeated)

Finally I'm not clear on how pickling is made more difficult by having
namedtuple() generate a typename. The created type still has a typename.
But I'm interested - this is the only point I don't think I understand.

* This doesn't have to be shoe-horned into the namedtuple API.  If an
> actual need did arise, it is trivial to write a wrapper that specifies
> whatever auto-naming logic happens to make sense for a particular
> application:
>
> >>> from collections import namedtuple
> >>> def auto_namedtuple(*attrnames, **kwargs):
> typename = '_'.join(attrnames)
> return namedtuple(typename, attrnames, **kwargs)
>
> >>> NT = auto_namedtuple('name', 'rank', 'serial')
> >>> print(NT.__doc__)
> name_rank_serial(name, rank, serial)

Your code will not work if rename=True is needed. I don't want to repeat
the rename logic as doing so is a code smell.

In short, I'm disappointed. I'm not surprised to make a suggestion, and
have people point out problems. For example, my original proposal ignored
the difficulties of creating the C implementation, and the issue of
circular imports, and I very much appreciated those criticisms. But I am
disappointed at the quality of the objections to these modified proposals.

--




[issue31085] Add option for namedtuple to name its result type automatically

2017-08-02 Thread Isaac Morland

Isaac Morland added the comment:

On 1 August 2017 at 14:32, R. David Murray <rep...@bugs.python.org> wrote:

>
> R. David Murray added the comment:
>
> I wrote a "parameterized tests" extension for unittest, and it has the
> option of autogenerating the test name from the parameter names and
> values.  I've never used that feature, and I am considering ripping it out
> before I release the package, to simplify the code.  If I do I might
> replace it with a hook for generating the test name so that the user can
> choose their own auto-naming scheme.
>
> Perhaps that would be an option here: a hook for generating the name, that
> would be called where you want your None processing to be?  That would not
> be simpler than your proposal, but it would be more general (satisfy more
> use cases) and might be worth the cost.  On the other hand, other
> developers might not like the API bloat ;)
>

It's August, not April. Raymond Hettinger is accusing my proposed API of
being potentially confusing, while you're suggesting providing a hook? All
I want is the option of telling namedtuple() to make up its own typename,
for situations where there should be one but I don't want to provide it.

Having said that, if people really think a hook like this is worth doing,
I'll implement it. But I agree that it seems excessively complicated. Let's
see if auto-generation is useful first, then if somebody wants a different
auto-generation, provide the capability.

--




[issue31085] Add option for namedtuple to name its result type automatically

2017-08-01 Thread Isaac Morland

Isaac Morland added the comment:

First, another note I would like to point out: this is much nicer to write
within namedtuple than as a wrapper function because it is trivial to use
the existing rename logic when needed, as seen in the diff I provided. I
suppose I could write a wrapper which calls namedtuple and then changes the
class name after creation but that just feels icky. The only other
alternatives would be to duplicate the rename logic or have the wrapper not
work with rename.

By way of response to R. David Murray: Every use case, of everything, is
specialized. Another way of thinking of what I'm suggesting is that I would
like to make providing a typename optional, and have the library do its
best based on the other information provided in the call to namedtuple.
This pretty well has to mean mashing the fieldnames together in some way
because no other information about the contents of the namedtuple is
provided. So I think this is a very natural feature: what else could it
possibly mean to pass None for the typename?

If for a particular application some other more meaningful auto-generated
name is needed, that could still be provided to namedtuple(). For example,
an ORM that uses the underlying table name.

In response to other suggestions, I don't see how one can prefer "_" all
over the place in debugging output to a string that identifies the
fieldnames involved. Or really, just the option of having a string that
identifies the fieldnames: I'm not forcing anyone to stop passing '_'.

To INADA Naoki: thanks for pointing that out. I agree that in the subclass
case it no longer matters what typename is used for the namedtuple itself.
But isn't that a good reason to allow skipping the parameter, or (since you
can't just skip positional parameters) passing an explicit None?

On 1 August 2017 at 11:02, R. David Murray <rep...@bugs.python.org> wrote:

>
> R. David Murray added the comment:
>
> I think the "vaguely" pretty much says it, and you are the at least the
> first person who has *requested* it :)
>
> This is one of those cost-versus-benefit calculations.  It is a
> specialized use case, and in other specialized use cases the "automatically
> generated" name that makes the most sense is likely to be something
> different than an amalgamation of the field names.
>
> So I vote -0.5.  I don't think even the small complication of the existing
> code is worth it, but I'm not strongly opposed.
>
> --
> nosy: +r.david.murray
>
> ___
> Python tracker <rep...@bugs.python.org>
> <http://bugs.python.org/issue31085>
> ___
>

--




[issue31086] Add namedattrgetter function which acts like attrgetter but uses namedtuple

2017-07-31 Thread Isaac Morland

Isaac Morland added the comment:

Maybe the issue is that I work with SQL constantly.  In SQL, if I say "SELECT 
a, b, c FROM t" and table t has columns a, b, c, d, e, f, I can still select a, 
b, and c from the result.  So to me it is natural that getting a bunch of 
attributes returns something (row or object, depending on the context), where 
the attributes are still labelled.

I understand why this was rejected as a universal change to attrgetter - in 
particular, I didn't re-evaluate the appropriateness of the change once I 
realized that attrgetter has a C implementation - but I don't understand why 
this isn't considered a natural option to provide.

Using rename=True is just a way of having it not blow up if an attribute name 
requiring renaming is supplied.  I agree that actually using such an attribute 
requires either guessing the name generated by the rename logic in namedtuple 
or using numeric indexing.  If namedtuple didn't have rename=True then I 
wouldn't try to re-implement it but since it does I figure it's worth typing ", 
rename=True" once - it's hardly going to hurt anything.

Finally as to use cases, I agree that if the only thing one is doing is sorting 
it doesn't matter.  But with groupby it can be very useful.  Say I have an 
iterator providing objects with fields (heading_id, heading_text, item_id, 
item_text).  I want to display each heading, followed by its items.

So, I groupby attrgetter ('heading_id', 'heading_text'), and write a loop 
something like this:

for heading, items in groupby (source, attrgetter ('heading_id', 'heading_text')):
    # display heading
    # refer to heading.heading_id and heading.heading_text
    for item in items:
        # display item
        # refer to item.item_id and item.item_text

Except I can't, because heading doesn't have attribute names.  If I replace 
attrgetter with namedattrgetter then I'm fine.  How would you write this?  In 
the past I've used items[0] but that is (a) ugly and (b) requires "items = 
list(items)" which is just noise.

I feel like depending on what is being done with map and filter you could have 
a similar situation where you want to refer to the specific fields of the tuple 
coming back from the function returned by attrgetter.
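
To make the use case concrete, here is a self-contained sketch of the idea 
together with the groupby loop above. The namedattrgetter definition below is 
only a stand-in for the patch proposed in this issue (it assumes two or more 
attribute names), and the Row data is made up for illustration:

from collections import namedtuple
from itertools import groupby
from operator import attrgetter

def namedattrgetter(*attrs):
    """Like attrgetter, but the extracted values keep their attribute names."""
    nt = namedtuple('namedattrgetter', attrs, rename=True)
    ag = attrgetter(*attrs)
    return lambda obj: nt._make(ag(obj))

Row = namedtuple('Row', ['heading_id', 'heading_text', 'item_id', 'item_text'])
rows = [Row(1, 'Fruit', 10, 'apple'), Row(1, 'Fruit', 11, 'pear'),
        Row(2, 'Vegetables', 20, 'leek')]

for heading, items in groupby(rows, namedattrgetter('heading_id', 'heading_text')):
    print(heading.heading_id, heading.heading_text)   # named access on the key
    for item in items:
        print('  ', item.item_id, item.item_text)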

--




[issue31085] Add option for namedtuple to name its result type automatically

2017-07-31 Thread Isaac Morland

Isaac Morland added the comment:

I want a meaningful name to appear in debugging output generated by repr() or 
str(), not just _ all over the place.  I just don't want to specifically come 
up with the meaningful name myself.

Right now I pass in the same generated name ('__'.join (field_names)) to the 
constructor, but this means I need to repeat that logic in any other similar 
application, and I would have to put in special handling if any of my attribute 
names required renaming.

I would rather be explicit that I'm not providing a specific name.  With your 
'_' suggestion it looks like a magic value - why '_'?  By specifying None, it's 
obvious at the call point that I'm explicitly declining to provide a name, and 
then the code generates a semi-meaningful name automatically.

Also, please note that I moved the place where typename is assigned to after 
the part where it handles the rename stuff, so the generated names 
automatically incorporate a suitable default and remain valid identifiers.

I'm having trouble seeing the downside here.  I'm adding one "is None" check 
and one line of code to the existing procedure.  I can't believe I'm the only 
person who has wanted to skip making up a type name but still wanted something 
vaguely meaningful in debug output.

--




[issue31086] Add namedattrgetter function which acts like attrgetter but uses namedtuple

2017-07-30 Thread Isaac Morland

Isaac Morland added the comment:

Here is the diff.  Note that I assume implementation of #31085, which allows me 
to push determination of a name for the namedtuple down into namedtuple itself:

diff --git a/Lib/collections/__init__.py b/Lib/collections/__init__.py
index 62cf708..d507d23 100644
--- a/Lib/collections/__init__.py
+++ b/Lib/collections/__init__.py
@@ -14,7 +14,8 @@ list, set, and tuple.
 
 '''
 
-__all__ = ['deque', 'defaultdict', 'namedtuple', 'UserDict', 'UserList',
+__all__ = ['deque', 'defaultdict', 'namedtuple', 'namedattrgetter',
+'UserDict', 'UserList',
 'UserString', 'Counter', 'OrderedDict', 'ChainMap']
 
 # For backwards compatibility, continue to make the collections ABCs
@@ -23,7 +24,7 @@ from _collections_abc import *
 import _collections_abc
 __all__ += _collections_abc.__all__
 
-from operator import itemgetter as _itemgetter, eq as _eq
+from operator import itemgetter as _itemgetter, attrgetter as _attrgetter, eq as _eq
 from keyword import iskeyword as _iskeyword
 import sys as _sys
 import heapq as _heapq
@@ -451,6 +452,14 @@ def namedtuple(typename, field_names, *, verbose=False, rename=False, module=None):
 
     return result
 
+def namedattrgetter (attr, *attrs):
+    ag = _attrgetter (attr, *attrs)
+
+    if attrs:
+        nt = namedtuple (None, (attr,) + attrs, rename=True)
+        return lambda obj: nt._make (ag (obj))
+    else:
+        return ag
 
 
 ###  Counter

--




[issue31086] Add namedattrgetter function which acts like attrgetter but uses namedtuple

2017-07-30 Thread Isaac Morland

New submission from Isaac Morland:

This is meant to replace my proposal in #30020 to change attrgetter to use 
namedtuple.  By creating a new function implemented in Python, I avoid making 
changes to the existing attrgetter, which means that both the need of 
implementing a C version and the risk of changing the performance or other 
characteristics of the existing function are eliminated.

My suggestion is to put this in the collections module next to namedtuple.  
This eliminates the circular import problem and is a natural fit as it is an 
application of namedtuple.

--
components: Library (Lib)
messages: 299534
nosy: Isaac Morland
priority: normal
severity: normal
status: open
title: Add namedattrgetter function which acts like attrgetter but uses 
namedtuple
type: enhancement
versions: Python 3.7




[issue31085] Add option for namedtuple to name its result type automatically

2017-07-30 Thread Isaac Morland

Isaac Morland added the comment:

I'm hoping to make a pull request but while I figure that out here is the diff:

diff --git a/Lib/collections/__init__.py b/Lib/collections/__init__.py
index 8408255..62cf708 100644
--- a/Lib/collections/__init__.py
+++ b/Lib/collections/__init__.py
@@ -384,7 +384,6 @@ def namedtuple(typename, field_names, *, verbose=False, rename=False, module=None):
     if isinstance(field_names, str):
         field_names = field_names.replace(',', ' ').split()
     field_names = list(map(str, field_names))
-    typename = str(typename)
     if rename:
         seen = set()
         for index, name in enumerate(field_names):
@@ -394,6 +393,10 @@ def namedtuple(typename, field_names, *, verbose=False, rename=False, module=None):
                 or name in seen):
                 field_names[index] = '_%d' % index
             seen.add(name)
+    if typename is None:
+        typename = '__'.join (field_names)
+    else:
+        typename = str(typename)
     for name in [typename] + field_names:
         if type(name) is not str:
             raise TypeError('Type names and field names must be strings')

--




[issue31085] Add option for namedtuple to name its result type automatically

2017-07-30 Thread Isaac Morland

New submission from Isaac Morland:

I would like to have the possibility of creating a namedtuple type without 
explicitly giving it a name.  I see two major use cases for this:

1) Automatic creation of namedtuples for things like CSV files with headers 
(see #1818) or SQL results (see #13299).  In this case at the point of calling 
namedtuple I have column headings (or otherwise automatically-determined 
attribute names), but there probably isn't a specific class name that makes 
sense to use.

2) Subclassing from a namedtuple invocation; I obviously need to name my 
subclass, but the name passed to the namedtuple invocation is essentially 
useless.

My idea is to allow giving None for the typename parameter of namedtuple, like 
this:

class MyCustomBehaviourNamedtuple (namedtuple (None, ['a', 'b'])):
...

In this case namedtuple will generate a name based on the field names.

This should be backward compatible because right now passing None raises a 
TypeError.  So there is no change if a non-None typename is passed, and an 
exception is replaced by computing a default typename if None is passed.

Patch to follow.

--
components: Library (Lib)
messages: 299532
nosy: Isaac Morland
priority: normal
severity: normal
status: open
title: Add option for namedtuple to name its result type automatically
type: enhancement
versions: Python 3.7




[issue30020] Make attrgetter use namedtuple

2017-04-08 Thread Isaac Morland

Isaac Morland added the comment:

What are the "other issues"?

As to the issue you raise here, that's why I use rename=True.

First create a type with an underscore attribute:

>>> t = namedtuple ('t', ['a', '1234'], rename=True)

(just an easy way of creating such a type; use of namedtuple specifically is 
admittedly a bit of a red herring)

Now create an object and illustrate its attributes:

>>> tt = t ('c', 'd')
>>> tt.a
'c'
>>> tt._1
'd'

Now use my modified attrgetter to get the attributes as a namedtuple:

>>> attrgetter ('a', '_1') (tt)
attrgetter(a='c', _1='d')
>>> 

And the example from the help, used in the test file I've already attached, 
illustrates that the dotted attribute case also works.

Essentially, my patch provides no benefit for attrgetter specified attributes 
that aren't valid namedtuple attribute names, but because of rename=True it 
still works and doesn't break anything.  So if you give "a" as an attribute 
name, the output will have an "a" attribute; if you give "_b" as an attribute 
name, the output will have an "_1" (or whatever number) attribute.  Similarly, 
it doesn't help with dotted attributes, but it doesn't hurt either.

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30020>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30020] Make attrgetter use namedtuple

2017-04-08 Thread Isaac Morland

Isaac Morland added the comment:

I've attached a file which illustrates what I'm proposing to happen with the 
examples from the help.  Note that attrgetter (attr) is not affected, only 
attrgetter (*attrs) for more than one attribute.  The idea is that tuples 
resulting from attrgetter functions will retain the attribute names from the 
original object.  In some work I have done recently this would have been very 
handy with groupby.

I had some initial confusion because changing the Python attrgetter 
implementation didn't make any difference.  Once I realized I needed to turn 
off import of the C implementation, I figured the rest out fairly quickly.  
Here is the diff:

diff --git a/Lib/operator.py b/Lib/operator.py
index 0e2e53e..9b2a8fa 100644
--- a/Lib/operator.py
+++ b/Lib/operator.py
@@ -247,8 +247,12 @@ class attrgetter:
 else:
 self._attrs = (attr,) + attrs
 getters = tuple(map(attrgetter, self._attrs))
+
+from collections import namedtuple
+nt = namedtuple ('attrgetter', self._attrs, rename=True)
+
 def func(obj):
-return tuple(getter(obj) for getter in getters)
+return nt._make (getter(obj) for getter in getters)
 self._call = func
 
 def __call__(self, obj):
@@ -409,7 +413,7 @@ def ixor(a, b):
 
 
 try:
-from _operator import *
+pass
 except ImportError:
 pass
 else:

There are some issues that still need to be addressed.  The biggest is that 
I've turned off the C implementation.  I assume that we'll need a C 
implementation the new version.  In addition to this:

1) I just call the namedtuple type "attrgetter".  I'm thinking something 
obtained by mashing together the field names or something similar might be more 
appropriate.  However, I would prefer not to repeat the logic within namedtuple 
that deals with field names that aren't identifiers.  So I'm wondering if maybe 
I should also modify namedtuple to allow None as the type name, in which case 
it would use an appropriate default type name based on the field names.

2) I import from collections inside the function.  It didn't seem to work at 
the top-level, I'm guessing because I'm in the library and collections isn't 
ready when operator is initialized.  This may be fine I just point it out as 
something on which I could use advice.

I'm hoping this provides enough detail for people to understand what I'm 
proposing and evaluate whether this is a desirable enhancement.  If so, I'll 
dig into the C implementation next, although I may need assistance with that.
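
In the meantime, here is a minimal standalone sketch of the same idea that can be 
tried without touching operator.py (the name named_attrgetter is just for 
illustration, not part of the patch):

from collections import namedtuple
from operator import attrgetter

def named_attrgetter(*attrs):
    # Build a namedtuple type from the requested attribute names; rename=True
    # maps names that are not valid identifiers (e.g. dotted attributes)
    # to _0, _1, ...
    nt = namedtuple('attrgetter', attrs, rename=True)
    getters = tuple(attrgetter(a) for a in attrs)
    def getter(obj):
        return nt._make(g(obj) for g in getters)
    return getter

>>> named_attrgetter('real', 'imag')(3 + 4j)
attrgetter(real=3.0, imag=4.0)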

--
Added file: http://bugs.python.org/file46791/test_attrgetter.py

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30020>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30020] Make attrgetter use namedtuple

2017-04-07 Thread Isaac Morland

New submission from Isaac Morland:

I would find it useful if the tuples returned by attrgetter functions were 
namedtuples.  An initial look at the code for attrgetter suggests that this 
would be an easy change and should make little difference to performance.  
Giving a namedtuple where previously a tuple was returned seems unlikely to 
trigger bugs in existing code so I propose to simply change attrgetter rather 
than providing a parameter to specify whether or not to use the new behaviour.

Patch will be forthcoming but comments appreciated.

--
components: Library (Lib)
messages: 291314
nosy: Isaac Morland
priority: normal
severity: normal
status: open
title: Make attrgetter use namedtuple
type: enhancement
versions: Python 3.7

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30020>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25831] dbm.gnu leaks file descriptors on .reorganize()

2015-12-09 Thread Isaac Schwabacher

New submission from Isaac Schwabacher:

I found this because test_dbm_gnu fails on NFS; my initial thought was that the test 
was failing to close a file somewhere (similarly to #20876), but a little 
digging suggested that the problem is in dbm.gnu itself:

$ ./python
Python 3.5.1 (default, Dec  9 2015, 11:55:23) 
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dbm.gnu
>>> import subprocess
>>> db = dbm.gnu.open('foo', 'c')
>>> db.reorganize()
>>> db.close()
>>> subprocess.check_call(['lsof', 'foo'])
COMMAND PIDUSER  FD   TYPE DEVICE SIZE/OFFNODE NAME
python  2302377 schwabacher mem-W  REG   0,5298304 25833923756 foo
0

A quick look at _gdbmmodule.c makes clear that the problem is upstream, but 
their bug tracker has 9 total entries... The best bet might just be to skip the 
test on NFS.

--
components: Library (Lib)
messages: 256159
nosy: ischwabacher
priority: normal
severity: normal
status: open
title: dbm.gnu leaks file descriptors on .reorganize()
type: behavior
versions: Python 3.5

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25831>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25831] dbm.gnu leaks file descriptors on .reorganize()

2015-12-09 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

Further searching reveals this as a dupe of #13947. Closing.

--
status: open -> closed

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25831>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25818] If protocol_factory raises an error, the connection closes but no stacktrace is printed on the server.

2015-12-07 Thread Isaac Dickinson

New submission from Isaac Dickinson:

This makes it a nightmare to figure out why your connections are abruptly 
closing.  It should print an error at least.

--
components: asyncio
files: broken.py
messages: 256081
nosy: SunDwarf, gvanrossum, haypo, yselivanov
priority: normal
severity: normal
status: open
title: If protocol_factory raises an error, the connection closes but no 
stacktrace is printed on the server.
versions: Python 3.4, Python 3.5
Added file: http://bugs.python.org/file41265/broken.py

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25818>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25818] asyncio: If protocol_factory raises an error, the connection closes but no stacktrace is printed on the server.

2015-12-07 Thread Isaac Dickinson

Changes by Isaac Dickinson <eyesism...@gmail.com>:


--
title: If protocol_factory raises an error, the connection closes but no 
stacktrace is printed on the server. -> asyncio: If protocol_factory raises an 
error, the connection closes but no stacktrace is printed on the server.

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25818>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue25818] asyncio: If protocol_factory raises an error, the connection closes but no stacktrace is printed on the server.

2015-12-07 Thread Isaac Dickinson

Isaac Dickinson added the comment:

I completely forgot asyncio has a debug mode. Ignore this.

--
resolution:  -> not a bug

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25818>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24920] shutil.get_terminal_size throws AttributeError

2015-08-25 Thread Isaac Levy

Isaac Levy added the comment:

I guess users need to check standard streams for None. There aren't many uses of 
stream attributes in core libs.

Maybe the catch should be Exception -- since it's documented to return a fallback 
on error.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue24920
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24920] shutil.get_terminal_size throws AttributeError

2015-08-23 Thread Isaac Levy

New submission from Isaac Levy:

OS: windows 7, python 3.4.3, tk version 8.6.1

os.get_terminal_size also fails.


>>> shutil.get_terminal_size()
Traceback (most recent call last):
  File "<pyshell#4>", line 1, in <module>
    shutil.get_terminal_size()
  File "C:\Python34\lib\shutil.py", line 1058, in get_terminal_size
    size = os.get_terminal_size(sys.__stdout__.fileno())
AttributeError: 'NoneType' object has no attribute 'fileno'
>>> os.get_terminal_size()
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    os.get_terminal_size()
ValueError: bad file descriptor

--
components: IDLE
messages: 249039
nosy: Isaac Levy
priority: normal
severity: normal
status: open
title: shutil.get_terminal_size throws AttributeError
type: crash
versions: Python 3.4

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue24920
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21483] Skip os.utime() test on NFS?

2015-03-05 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

...and fixed a spot where git diff + copy/paste truncated a long line.  
/sheepish

--
Added file: http://bugs.python.org/file38346/test_import.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21483
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20876] python -m test test_pathlib fails

2015-03-05 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

Fixed a truncated line in the patch.

--
Added file: http://bugs.python.org/file38347/test_support.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20876
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19884] Importing readline produces erroneous output

2015-03-03 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

From the OP:

> This was reported at [1] and originally at [2]. The readline maintainer 
> suggests [3] using:
> 
> rl_variable_bind ("enable-meta-key", "off");
> 
> which was introduced in readline 6.1. Do you think it'd be safe to add the 
> above line?

From 3.4.3 final:

@unittest.skipIf(readline._READLINE_VERSION < 0x0600
                 and "libedit" not in readline.__doc__,
                 "not supported in this library version")


The test currently fails on readline version 6.0.  The version to test on needs 
a bump to 0x0610.

--
nosy: +ischwabacher

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19884
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19884] Importing readline produces erroneous output

2015-03-03 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

Whoops, that's 0x0601.  Though Maxime gives evidence that the version should in 
fact be 0x0603.  (Note that while OS X ships with libedit over libreadline, 
anyone who wants to can install the real thing instead of that pale imitation; 
the test would have been skipped if Maxime were using libedit.)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19884
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21483] Skip os.utime() test on NFS?

2015-03-01 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

Patch to do precisely this.  Wish I'd spent more time searching for this thread 
and less time debugging; it would have saved me a lot of trouble.

--
keywords: +patch
nosy: +ischwabacher
Added file: http://bugs.python.org/file38291/test_import.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21483
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20876] python -m test test_pathlib fails

2015-03-01 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

This behavior is caused by the way NFS clients implement unlinking open files: 
instead of unlinking an open file, the filesystem renames it to .nfs and 
unlinks it on close.  (The search term you want is silly rename.)  The reason 
this problem appears is that `test.support.fs_is_case_insensitive()` unlinks 
but fails to close the temporary file that it creates.  Of course, any attempt 
to unlink the .nfs file (for instance, by `shutil.rmtree`) just succeeds in 
renaming it to .nfs, so there is no way to delete the parent directory 
until the file is closed.

The attached patch modifies the offending function to use the 
`tempfile.NamedTemporaryFile` context manager, which closes the file on leaving 
the block.
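
A minimal sketch of the pattern in question (not the actual patch):

import tempfile

# The context manager closes the handle when the block exits, so the NFS
# client can really unlink the file instead of silly-renaming it.
with tempfile.NamedTemporaryFile(dir='.') as probe:
    probe.write(b'x')
    probe.flush()
# here the file is closed and already gone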

--
keywords: +patch
nosy: +ischwabacher
Added file: http://bugs.python.org/file38289/test_support.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20876
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23331] Add non-interactive version of Bdb.runcall

2015-01-27 Thread Isaac Jurado

New submission from Isaac Jurado:

The Bdb.runcall method shows a prompt right at the beginning of the function.  
If breakpoints are defined, it is sometimes handy to skip the prompt until the 
next breakpoint, if any.

This use case came up in our development environment for a Django application, 
where we needed to set breakpoints only for certain requests, allowing 
other requests to run without interruption.  Our solution was to wrap the app with a 
WSGI middleware, something like this:

import sys
from bdb import BdbQuit
from pdb import Pdb

class DebuggingMiddleware(object):

    def __init__(self, app):
        self.aplicacion = app
        self.debugger = Pdb()
        our_setup_breakpoints_function(self.debugger)

    def __call__(self, environ, start_response):
        environ['DEBUGGER'] = self.debugger
        frame = sys._getframe()
        self.debugger.reset()
        frame.f_trace = self.debugger.trace_dispatch
        self.debugger.botframe = frame
        self.debugger._set_stopinfo(frame, None, -1)
        sys.settrace(self.debugger.trace_dispatch)
        try:
            return self.aplicacion(environ, start_response)
        except BdbQuit:
            pass  # Return None implicitly
        finally:
            self.debugger.quitting = 1
            sys.settrace(None)

As you can see, it is basically a mix of Bdb.set_trace and Bdb.set_continue 
which we came up with by trial and error.  If there were something like 
Bdb.runcall_no_prompt or an extra flag to Bdb.runcall to trigger this 
behaviour, this copy and paste would not be necessary.

--
components: Library (Lib)
messages: 234819
nosy: etanol
priority: normal
severity: normal
status: open
title: Add non-interactive version of Bdb.runcall
type: enhancement
versions: Python 3.5

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23331
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22147] PosixPath() constructor should not accept strings with embedded NUL bytes

2014-08-05 Thread Isaac Schwabacher

New submission from Isaac Schwabacher:

This is listed as a python3.4 issue even though I only tried this on the 
python2.7 backport because I don't have a python3 handy, but I was not able to 
find an indication, either here or elsewhere, that this had been addressed.  
Please forgive me if it has.

The `pathlib.PosixPath()` constructor currently accepts strings containing NUL 
bytes, converting them into paths containing NUL bytes. POSIX specifies that a 
pathname may not contain embedded NULs.

It appears that `PosixPath.stat()` is checking for embedded NUL, but 
`PosixPath.open()` is not!  For safety, constructing a `PosixPath` with 
embedded NULs should be forbidden.

`pathlib.WindowsPath()` should probably receive the same treatment.

Observed behavior:

```python

>>> from pathlib import Path

>>> Path("\0I'm not malicious, I'm mischievous!")
PosixPath("\x00I'm not malicious, I'm mischievous!")

>>> _.open()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../site-packages/pathlib.py", line 1077, in open
    return io.open(str(self), mode, buffering, encoding, errors, newline)
IOError: [Errno 2] No such file or directory: ''

>>> Path('/') / _
PosixPath("/\x00I'm not malicious, I'm mischievous!")

>>> _.open()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../site-packages/pathlib.py", line 1077, in open
    return io.open(str(self), mode, buffering, encoding, errors, newline)
IOError: [Errno 21] Is a directory: "/\x00I'm not malicious, I'm mischievous!"

>>> _.stat()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../site-packages/pathlib.py", line 1051, in stat
    return self._accessor.stat(self)
  File ".../site-packages/pathlib.py", line 346, in wrapped
    return strfunc(str(pathobj), *args)
TypeError: must be encoded string without NULL bytes, not str

>>> p1 = Path('/etc/passwd\0/hello.txt').open()

>>> p2 = Path('/etc/passwd').open()

>>> os.path.sameopenfile(p1.fileno(), p2.fileno())
True  # DANGER WILL ROBINSON!

```

Expected behavior:

```python

>>> Path("/\0I'm not malicious, I'm mischievous!")
...
ValueError: Illegal byte '\x00' in path

```
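
For anyone who wants this check today, a minimal sketch of a guard at construction 
time (a hypothetical helper, not part of pathlib):

```python
from pathlib import PurePosixPath

def safe_posix_path(s):
    # Reject embedded NUL bytes before building the path object.
    if '\0' in s:
        raise ValueError("Illegal byte '\\x00' in path")
    return PurePosixPath(s)
```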

--
messages: 224880
nosy: ischwabacher
priority: normal
severity: normal
status: open
title: PosixPath() constructor should not accept strings with embedded NUL bytes
type: security
versions: Python 3.4

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22147
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22147] PosixPath() constructor should not accept strings with embedded NUL bytes

2014-08-05 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

Further digging reveals that the issue with `open()` was fixed in #13848 (the 
bug was in the `io` module).  I still believe that this should fail in the 
`pathlib.Path` constructor, but this is less of a security issue.

--
type: security -> behavior

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22147
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21039] pathlib strips trailing slash

2014-08-05 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

This may be only syntactic sugar, but it is POSIX-specified syntactic sugar: 
according to http://pubs.opengroup.org/onlinepubs/9699919799/, trailing slashes 
in pathnames are semantically meaningful in pathname resolution.  Tilde escapes 
are not mentioned.

4.12 Pathname Resolution


[...]

A pathname that contains at least one non-<slash> character and that ends with 
one or more trailing <slash> characters shall not be resolved successfully 
unless the last pathname component before the trailing <slash> characters names 
an existing directory or a directory entry that is to be created for a 
directory immediately after the pathname is resolved. Interfaces using pathname 
resolution may specify additional constraints[1] when a pathname that does not 
name an existing directory contains at least one non-<slash> character and 
contains one or more trailing <slash> characters.

If a symbolic link is encountered during pathname resolution, the behavior 
shall depend on whether the pathname component is at the end of the pathname 
and on the function being performed. If all of the following are true, then 
pathname resolution is complete:

1. This is the last pathname component of the pathname.

2. The pathname has no trailing slash.

3. The function is required to act on the symbolic link itself, or certain 
arguments direct that the function act on the symbolic link itself.

In all other cases, the system shall prefix the remaining pathname, if any, 
with the contents of the symbolic link. [...]

--
nosy: +ischwabacher

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21039
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Extracting the value from Netcdf file with longitude and lattitude

2014-05-16 Thread Isaac Won
Hi,
My question may be confusing.

Now I would like to extract temperature values from model output with python.

My model output has separate temperature, longitude and latitude variables.

So, I overlap these three grid variables on one figure to show temperature with 
longitude and latitude through model domain.

Up to this point, everything is fine. The problem is extracting the temperature 
value at a certain longitude and latitude. 

The temperature variable doesn't have coordinates, only values on the grid.

Do you have idea about this issue?

Below is my code for the 2 D plot with temperature on model domain.

varn1 = 'T2'
varn2 = 'XLONG'
varn3 = 'XLAT'
Temp  = read_netcdf(filin,varn1)
Lon  = read_netcdf(filin,varn2)
Lat  = read_netcdf(filin,varn3)

Temp_plt =  Temp[12,:,:]
Lon_plt = Lon[12,:,:]
Lat_plt = Lat[12,:,:]
x = Lon_plt
y = Lat_plt

Temp_c = Temp_plt-273.15
myPLT = plt.pcolor(x,y,Temp_c)
mxlabel = plt.xlabel('Latitude')
mylabel = plt.ylabel('Longitude')
plt.xlim(126.35,127.35)
plt.ylim(37.16,37.84)
myBAR = plt.colorbar(myPLT)
myBAR.set_label('Temperature ($^\circ$C)')
plt.show()

--
read_netcdf is a code for extracting values of [time, x,y] format.

I think that the point is to bind x, y in Temp_plt with x, y in Lon_plt and 
Lat_plt to extract temperature values with longitude and latitude input.
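
For what it's worth, a minimal sketch of the nearest-grid-cell lookup described above 
(it reuses Lon_plt, Lat_plt and Temp_c from the code; the query point is hypothetical):

target_lon, target_lat = 126.9, 37.5               # point to look up
# squared distance in coordinate space to every grid cell
dist2 = (Lon_plt - target_lon)**2 + (Lat_plt - target_lat)**2
j, i = np.unravel_index(np.argmin(dist2), dist2.shape)
print Temp_c[j, i]                                  # temperature at the nearest cell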

This question might be confusing. If you can't understand please let me know.

Any idea or help will be really appreciated.

Best regards,

Hoonill




-- 
https://mail.python.org/mailman/listinfo/python-list


Google app engine database

2014-02-22 Thread glenn . a . isaac
Is there a way to make sure that, whenever you iterate on a Google App Engine app 
that uses a database, the stored info does not get wiped/deleted?  Please 
advise.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue20496] function definition tutorial encourages bad practice

2014-02-02 Thread Alan Isaac

New submission from Alan Isaac:

Section 4.6 of the tutorial introduces function definition:
http://docs.python.org/3/tutorial/controlflow.html#defining-functions

The first example defines a function that *prints* a Fibonacci series.

A basic mistake made by students new to programming is to use a function to 
print values rather than to return them.  In this sense, the example encourages 
bad practice and misses an opportunity to instruct.  Since they have already 
met lists in Section 3, I suggest that returning a list of the values and then 
printing the list would enhance the tutorial.
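
For example, the kind of definition the tutorial could lead with (a sketch, not the 
tutorial's actual text):

def fib2(n):
    """Return a list containing the Fibonacci series up to n."""
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)
        a, b = b, a + b
    return result

print(fib2(100))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]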

--
assignee: docs@python
components: Documentation
messages: 210077
nosy: aisaac, docs@python
priority: normal
severity: normal
status: open
title: function definition tutorial encourages bad practice
type: enhancement

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20496
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Drawing shaded area depending on distance with latitude and altitude coordinate

2014-01-06 Thread Isaac Won
I have tried to make a plot of points with longitude and latitude coordinates, 
and draw a shaded area based on distance from one point. So, I thought that I could 
use the contourf function from matplotlib. My code is:
import haversine
import numpy as np
import matplotlib.pyplot as plt
with open(filin, 'r') as f:
    arrays = [map(float, line.split()) for line in f]
    newa = [[x[1],-x[2]] for x in arrays]

lat = np.zeros(275)
lon = np.zeros(275)
for c in range(0,275):
    lat[c] = newa[c][0]
    lon[c] = newa[c][1]


dis = np.zeros(275)

for c in range(0,275):
    dis[c] = haversine.distance(newa[0],[lat[c],lon[c]])

dis1 = [[]]*1

for c in range(0,275):
    dis1[0].append(dis[c])


cs = plt.contourf(lon,lat,dis1)
cb = plt.colorbar(cs)

plt.plot(-lon[0],lat[0],'ro')
plt.plot(-lon[275],lat[275],'ko')
plt.plot(-lon[1:275],lat[1:275],'bo')
plt.xlabel('Longitude(West)')
plt.ylabel('Latitude(North)')
plt.gca().invert_xaxis()
plt.show()

My idea in this code was that I could make a shaded contour of distance from a 
certain point, which is noted as newa[0] in the code. I calculated distances 
between newa[0] and the other points with the haversine module, which calculates 
distances from the longitudes and latitudes of two points. However, whenever I ran 
this code, I got an error related to X, Y or Z in contourf, such as:
TypeError: Length of x must be number of columns in z, and length of y must 
be number of rows.

If I use meshgrid for X and Y, I also get:
TypeError: Inputs x and y must be 1D or 2D.

I just need to draw a shaded contour of distance from one point on top of 
the plot of the points.
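
For what it's worth, a minimal sketch of one way to do that: contourf wants Z on a 
regular grid, so evaluate the distance on a lon/lat mesh instead of at the 275 
scattered points (haversine.distance and newa[0] are reused from the code above):

glon = np.linspace(lon.min(), lon.max(), 100)
glat = np.linspace(lat.min(), lat.max(), 100)
GLON, GLAT = np.meshgrid(glon, glat)
GDIST = np.array([[haversine.distance(newa[0], [la, lo]) for lo in glon]
                  for la in glat])
cs = plt.contourf(GLON, GLAT, GDIST)
cb = plt.colorbar(cs)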

If you can give any idea or hint, I will really appreciate it. Thank you, Isaac
-- 
https://mail.python.org/mailman/listinfo/python-list


Using MFDataset to combine netcdf files in python

2013-12-02 Thread Isaac Won
I am trying to combine netcdf files, but it continuously shows:  File 
"CBL_plot.py", line 11, in <module>: f = MFDataset(fili);  File "utils.pyx", line 274, in 
netCDF4.MFDataset.__init__ (netCDF4.c:3822);  IOError: master dataset THref_11:00.nc 
does not have a aggregation dimension.

So, I checked only one netcdf files and the information of a netcdf file is as 
below:

float64 th_ref(u't',) unlimited dimensions = () current size = (30,)

It looks like there is no aggregation dimension. However, I would like to combine 
those netcdf files rather than just using them one by one. Is there any way to 
create an aggregation dimension to make MFDataset work?

Below is the python code I used:
import numpy as np
from netCDF4 import MFDataset
varn = 'th_ref'
fili = 'THref_*nc'
f = MFDataset(fili)
Th  = f.variables[varn]
Th_ref=np.array(Th[:])
print Th.shape
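
For reference, a minimal sketch that sidesteps MFDataset by reading the files one by 
one and stacking the arrays (it assumes the same variable name and file pattern as 
above):

import numpy as np
from glob import glob
from netCDF4 import Dataset

varn = 'th_ref'
parts = []
for fn in sorted(glob('THref_*nc')):
    ds = Dataset(fn)                     # open each file read-only
    parts.append(ds.variables[varn][:])  # pull the whole variable into memory
    ds.close()
Th_ref = np.concatenate(parts)
print Th_ref.shape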

I will really appreciate any help, idea, and hint.

Thank you, Isaac
-- 
https://mail.python.org/mailman/listinfo/python-list


Plot a contour inside a contour

2013-11-14 Thread Isaac Won
I tried to plot one smaller contour inside of the other larger contour. I have 
two different 2D-arrays. One is with smaller grid spacing and smaller domain 
size and the other is with larger spacing and larger domain size. So, I tried 
to use fig.add_axes function as follows:
fig = plt.figure()
ax1 = fig.add_axes([0.1,0.1,0.8,0.8])
 .
 .
dx   = 450
NX   = SHFX_plt.shape[1]
NY   = SHFX_plt.shape[0]
xdist= (np.arange(NX)*dx+dx/2.)/1000. 
ydist= (np.arange(NY)*dx+dx/2.)/1000.
myPLT = plt.pcolor(xdist,ydist,SHFX_plt)
 .
 .
ax2 = fig.add_axes([8.,8.,18.,18.])
dx1  = 150
NX1   = SHFX_plt1.shape[1]
NY1   = SHFX_plt1.shape[0]
print 'NX1=',NX1,'NY1=',NY1
xdist1= (np.arange(NX1)*dx1+dx1/2.)/1000.
ydist1= (np.arange(NY1)*dx1+dx1/2.)/1000.
myPLT1 = plt.pcolor(xdist1,ydist1,SHFX_plt1)
plt.show()

My intention is to plot ax2 on top of ax1, starting at xdist and ydist = 8, with an 18 
by 18 size.

However, the result only seems to show ax1.
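
One thing worth noting, as a sketch: fig.add_axes() takes figure-fraction coordinates 
in [0, 1] ([left, bottom, width, height]), not data coordinates, so an inset roughly 
in the middle of the figure looks like this (random data stands in for the two fields):

import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax1 = fig.add_axes([0.1, 0.1, 0.8, 0.8])
ax1.pcolor(np.random.rand(20, 20))          # stand-in for the outer field
ax2 = fig.add_axes([0.45, 0.45, 0.35, 0.35])
ax2.pcolor(np.random.rand(60, 60))          # stand-in for the inner field
plt.show()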

I will really appreciate any help or idea.

Thank you, Isaac
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Plot a contour inside a contour

2013-11-14 Thread Isaac Won
On Thursday, November 14, 2013 2:01:39 PM UTC-8, John Ladasky wrote:
 On Thursday, November 14, 2013 11:39:37 AM UTC-8, Isaac Won wrote:
 
  I tried to plot one smaller contour inside of the other larger contour.
 
  
 
 Using what software?  A plotting package is not part of the Python standard 
 library.
 Thanks John, I am using Matplotlib package. I will ask the question in the 
 matplotlib-users discussion group as you suggested.
Thank you again, Isaac
 
 
 You did not show the import statements in your code.  If I had to guess, I 
 would say that you are using the Matplotlib package.  Questions which are 
 specific to matplotlib should be asked in the matplotlib-users discussion 
 group:
 
 
 
 https://lists.sourceforge.net/lists/listinfo/matplotlib-users

-- 
https://mail.python.org/mailman/listinfo/python-list


using print() with multiprocessing and pythonw

2013-11-12 Thread Isaac Gerg
I launch my program with pythonw and begin it with the code below so that all 
my print()'s go to the log file specified. 

if sys.executable.find('pythonw') >= 0:
    # Redirect all console output to file.
    sys.stdout = open("pythonw - stdout stderr.log", 'w')
    sys.stderr = sys.stdout

During the course of my program, I call multiprocessing.Process() and launch a 
function several times.  That function has print()'s inside (which are from 
warnings being printed by python).  This printing causes the multiprocess to 
crash.  How can I fix my code so that the print()'s are suppressed? I would hate 
to do a warnings.filterwarnings('ignore') because then, when I unit test those 
functions, the warnings don't appear.
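
One hedged sketch of a middle ground: silence warnings only inside the function that 
the child process runs, so direct unit tests of the real code still see them 
(real_work here is a stand-in for the actual target function):

import warnings

def quiet_worker(*args, **kwargs):
    # The filter applies only inside this call (and hence only in the child
    # process whose target it is); it is undone when the block exits.
    with warnings.catch_warnings():
        warnings.simplefilter('ignore')
        return real_work(*args, **kwargs)   # real_work is a placeholder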

Thanks in advance,
Isaac
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: using print() with multiprocessing and pythonw

2013-11-12 Thread Isaac Gerg
Thanks for the reply, Bill.  The problem is that the text I am getting is from a 
Python warning message, not one of my own print() function calls.
-- 
https://mail.python.org/mailman/listinfo/python-list


Python 3.2 | WIndows 7 -- Multiprocessing and files not closing

2013-10-10 Thread Isaac Gerg
I have a function that looks like the following:

#-
filename = 'c:\testfile.h5'
f = open(filename,'r')
data = f.read()

q = multiprocessing.Queue()
p = multiprocess.Process(target=myFunction,args=(data,q))
p.start()
result = q.get()
p.join()
q.close()

f.close()

os.remove(filename)
#-

When I run this code, I get an error on the last line when I try to remove the 
file.  It tells me that someone has access to the file.  When I remove the 
queue and multiprocessing stuff, the function works fine.

What is going on here?

Thanks in advance,
Isaac


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing

2013-10-10 Thread Isaac Gerg
Sorry, I am just providing pseudo code since the code I have is quite large.

As I mentioned, the code works fine when I remove the multiprocessing stuff, so 
the filename is not the issue (though you are right in your correction).

Someone with the same problem posted a smaller, more complete example here:

http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib

None of the solutions posted work.

On Thursday, October 10, 2013 12:38:19 PM UTC-4, Piet van Oostrum wrote:
 Isaac Gerg isaac.g...@gergltd.com writes:
 
 
 
  I have a function that looks like the following:
 
 
 
 That doesn't look like a function
 
 
 
 
 
  #-
 
  filename = 'c:\testfile.h5'
 
 
 
 Your filename is most probably wrong. It should be something like:
 
 
 
 filename = 'c:/testfile.h5'
 
 filename = 'c:\\testfile.h5'
 
 filename = r'c:\testfile.h5'
 
 -- 
 
 Piet van Oostrum p...@vanoostrum.org
 
 WWW: http://pietvanoostrum.com/
 
 PGP key: [8DAE142BE17999C4]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing

2013-10-10 Thread Isaac Gerg
On Thu, Oct 10, 2013 at 2:41 PM, Ned Batchelder n...@nedbatchelder.comwrote:

 On 10/10/13 12:44 PM, Isaac Gerg wrote:

 Sorry, I am just providing pseudo code since I the code i have is quite
 large.

 As I mentioned, the code works fine when I remove the multirpcessing
 stuff so the filename is not the issue (though you are right in your
 correction).

 Someone with the same problem posted a smaller, more complete example
 here:

 http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib

 None of the solutions posted work.


 (BTW: it's better form to reply beneath the original text, not above it.)

 None of the solutions try the obvious thing of closing the file before
 spawning more processes.  Would that work for you?  A with statement is a
 convenient way to do this:

 with open(filename,'r') as f:
 data = f.read()

 The file is closed automatically when the with statement ends.

 --Ned.


 On Thursday, October 10, 2013 12:38:19 PM UTC-4, Piet van Oostrum wrote:

 Isaac Gerg isaac.g...@gergltd.com writes:



  I have a function that looks like the following:



 That doesn't look like a function



  #-**
 filename = 'c:\testfile.h5'



 Your filename is most probably wrong. It should be something like:



 filename = 'c:/testfile.h5'

 filename = 'c:\\testfile.h5'

 filename = r'c:\testfile.h5'

 --

 Piet van Oostrum p...@vanoostrum.org

 WWW: http://pietvanoostrum.com/

 PGP key: [8DAE142BE17999C4]



I will try what you suggest and see if it works.

Additionally, is there a place on the web to view this conversation and
reply?  Currently, I am only able to access this list through email.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing

2013-10-10 Thread Isaac Gerg
On Thu, Oct 10, 2013 at 2:49 PM, Isaac Gerg isaac.g...@gergltd.com wrote:




 On Thu, Oct 10, 2013 at 2:41 PM, Ned Batchelder n...@nedbatchelder.comwrote:

 On 10/10/13 12:44 PM, Isaac Gerg wrote:

 Sorry, I am just providing pseudo code since I the code i have is quite
 large.

 As I mentioned, the code works fine when I remove the multirpcessing
 stuff so the filename is not the issue (though you are right in your
 correction).

 Someone with the same problem posted a smaller, more complete example
 here:

 http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib

 None of the solutions posted work.


 (BTW: it's better form to reply beneath the original text, not above it.)

 None of the solutions try the obvious thing of closing the file before
 spawning more processes.  Would that work for you?  A with statement is a
 convenient way to do this:

 with open(filename,'r') as f:
 data = f.read()

 The file is closed automatically when the with statement ends.

 --Ned.


 On Thursday, October 10, 2013 12:38:19 PM UTC-4, Piet van Oostrum wrote:

 Isaac Gerg isaac.g...@gergltd.com writes:



  I have a function that looks like the following:



 That doesn't look like a function



  #-**
 filename = 'c:\testfile.h5'



 Your filename is most probably wrong. It should be something like:



 filename = 'c:/testfile.h5'

 filename = 'c:\\testfile.h5'

 filename = r'c:\testfile.h5'

 --

 Piet van Oostrum p...@vanoostrum.org

 WWW: http://pietvanoostrum.com/

 PGP key: [8DAE142BE17999C4]



 I will try what you suggest and see if it works.

 Additionally, is there a place on the web to view this conversation and
 reply?  Currently, I am only able to access this list through email.



Ned, I am unable to try what you suggest.  The multiprocess.Process call is
within a class but its target is a static method outside of the class thus
no pickling.  I cannot close the file and then reopen after the
multiprocess.Process call because other threads may be reading from the
file during that time.  Is there a way in Python 3.2 to prevent the
multiprocess.Process from inheriting the file descriptors from the parent
process OR, is there a way to ensure that the multiprocess is completely
closed and garbage collected by the time I want to use os.remove()?

Isaac
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing

2013-10-10 Thread Isaac Gerg
Hi Piet,

Here is a real code example: 
http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib

As I said before, I had provided pseudocode.

I cannot close the file after reading because it is part of a class and other 
threads may be calling member functions which read from the file.

Isaac
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue18844] allow weights in random.choice

2013-08-26 Thread Alan Isaac

New submission from Alan Isaac:

The need for weighted random choices is so common that it is addressed as a 
common task in the docs:
http://docs.python.org/dev/library/random.html

This enhancement request is to add an optional argument to random.choice, which 
must be a sequence of non-negative numbers (the weights) having the same length 
as the main argument.
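
For illustration, a minimal sketch of the behaviour being requested, built from the 
bisect/accumulate recipe the docs already describe (the helper name is made up):

import random
from bisect import bisect
from itertools import accumulate

def weighted_choice(seq, weights):
    # Pick one element of seq with probability proportional to its weight.
    cum = list(accumulate(weights))
    return seq[bisect(cum, random.random() * cum[-1])]

print(weighted_choice(['a', 'b', 'c'], [10, 1, 1]))   # 'a' most of the time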

--
messages: 196229
nosy: aisaac
priority: normal
severity: normal
status: open
title: allow weights in random.choice
type: enhancement

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18844
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Why is the argparse module so inflexible?

2013-06-28 Thread Isaac To
On Sat, Jun 29, 2013 at 9:36 AM, Ethan Furman et...@stoneleaf.us wrote:

 On 06/27/2013 03:49 PM, Steven D'Aprano wrote:


 Libraries should not call sys.exit, or raise SystemExit. Whether to quit
 or not is not the library's decision to make, that decision belongs to
 the application layer. Yes, the application could always catch
 SystemExit, but it shouldn't have to.


 So a library that is explicitly designed to make command-line scripts
 easier and friendlier should quit with a traceback?

 Really?


Perhaps put the library's handling of the exception (calling sys.exit with a
message) into a method, so that the user can override it (e.g., so that it
just throws the same exception to the caller of the library)?
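
As a concrete sketch of that idea applied to argparse (the class name is 
illustrative only):

import argparse

class RaisingArgumentParser(argparse.ArgumentParser):
    # Override the error hook so a parse failure raises instead of
    # printing usage and calling sys.exit(2).
    def error(self, message):
        raise ValueError(message)

parser = RaisingArgumentParser()
parser.add_argument('--n', type=int)
try:
    parser.parse_args(['--n', 'not-a-number'])
except ValueError as exc:
    print('caught:', exc)
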
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I hate you all

2013-04-05 Thread Isaac To
You underestimated the arrogance of Python.  In Python 3 a tab doesn't map to 4
spaces.  It doesn't map to any number of spaces.  Tabs and spaces are
completely unrelated.  If you have a function whose first indentation
level is 4 (or any number of) spaces, a following line that starts not with 4
spaces but instead with a tab always leads you to the TabError exception.

If you like to play tricks, you can use 4 spaces plus a tab as the next
indentation level.  I'd rather not do this kind of thing, and would forget about
using tabs at all.  You are out of luck if you want to play the
tab-space tricks, but if you follow the lead, you'll soon find that code
will be more reliable without tabs, especially if you cut and paste code from
others.


On Sat, Apr 6, 2013 at 6:04 AM, terminato...@gmail.com wrote:

 On Saturday, April 6, 2013 12:55:29 AM UTC+3, John Gordon wrote:
  In 64d4fb7c-6a75-4b5f-b5c8-06a4b2b5d...@googlegroups.com
 terminato...@gmail.com writes:
 
   How can python authors be so arrogant to impose their tabs and spaces
   options on me ? It should be my choice if I want to use tabs or not !
 
  You are free to use tabs, but you must be consistent.  You can't mix
  tabs and spaces for lines of code at the same indentation level.

 They say so, but python does not work that way. This is a simple script:

 from unittest import TestCase

 class SvnExternalCmdTests(TestCase) :
 def test_parse_svn_external(self) :
 for sample_external in sample_svn_externals :
 self.assertEqual(parse_svn_externals(sample_external[0][0],
 sample_external[0][1]), [ sample_external[1] ])

 And at the `for` statement at line 5 I get:

 C:\Documents and Settings\Adrian\Projects>python sample-py3.py
   File sample-py3.py, line 5
 for sample_external in sample_svn_externals :
 ^
 TabError: inconsistent use of tabs and spaces in indentation


 Line 5 is the only line in the file that starts at col 9 (after a tab).
 Being the only line in the file with that indent level, how can it be
 inconsistent ?

 You can try the script as it is, and see python 3.3 will not run it
 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Set x to to None and del x doesn't release memory in python 2.7.1 (HPUX 11.23, ia64)

2013-03-09 Thread Isaac To
In general, it is hard for any process to return the memory the OS allocates
to it back to the OS, short of exiting the whole process.  The only case
where this works reliably is when the process allocates a chunk of memory by
mmap (which is chosen by libc when it mallocs or callocs a large chunk of
memory), and that whole chunk is not needed any more.  In that case the
process can munmap it.  Evidently you are not seeing that in your program.
What you allocate might be too small (so libc chooses to allocate it using
another system call, sbrk), or the allocated memory may also hold other
objects that have not been freed.

If you want to reduce the footprint of a long-running program that
periodically allocates a large chunk of memory, the easiest solution is
to fork a different process to perform the computations that need the
memory.  That way, you can exit the process after you complete the
computation, and at that point all memory allocated to it is guaranteed to
be freed to the OS.

Modules like multiprocessing probably make the idea sufficiently easy to
implement.
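
A minimal sketch of that pattern (expensive_work and data are placeholders):

import multiprocessing

def run_in_child(queue, data):
    # All memory this child allocates goes back to the OS when it exits.
    queue.put(expensive_work(data))        # expensive_work is a placeholder

queue = multiprocessing.Queue()
p = multiprocessing.Process(target=run_in_child, args=(queue, data))
p.start()
result = queue.get()                        # fetch the result before joining
p.join()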


On Sat, Mar 9, 2013 at 4:07 PM, Wong Wah Meng-R32813
r32...@freescale.comwrote:



 If the memory usage is continually growing, you have something
 else that is a problem -- something is holding onto objects. Even if Python
 is not returning memory to the OS, it should be reusing the memory it has
 if objects are being freed.
 --
 [] Yes I have verified my python application is reusing the memory (just
 that it doesn't reduce once it has grown) and my python process doesn't
 have any issue to run even though it is seen taking up more than 2G in
 footprint. My problem is capacity planning on the server whereby since my
 python process doesn't release memory back to the OS, the OS wasn't able to
 allocate memory when a new process is spawn off.

 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Triple nested loop python (While loop insde of for loop inside of while loop)

2013-03-01 Thread Isaac Won
I am trying to make my triple nested loop work. My code is:
c = 4
y1 = []
m1 = []
std1 = []
while c < 24:
c = c + 1
a = []
f.seek(0,0)
for columns in ( raw.strip().split() for raw in f ):
a.append(columns[c])
x = np.array(a, float)
not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'nearest')
p = interp(indices)

N = len(p)
dt = 900.0 #Time step (seconds)
fs = 1./dt #Sampling frequency
KA,PSD = oned_Fourierspectrum(p,dt) # Call Song's 1D FS function
time_axis = np.linspace(0.0,N,num = N,endpoint = False)*15/(60*24) 
plot_freq = 24*3600.*KA #Convert to cycles per day 
plot_period = 1.0/plot_freq # convert to days/cycle
fpsd = plot_freq*PSD
d = -1 
while d < 335: 
d = d + 1 
y = fpsd[d] 
y1 = y1 + [y]   
   m = np.mean(y1)
m1 = m1 + [m]
print m1


My purpose is to make a list of [mean(fpsd[0]), mean(fpsd[1]), mean(fpsd[2]).. 
mean(fpsd[335])]. Each y1 would be the list of fpsd[d].

I checked that it works pretty well before the second while loop, and I can get 
the individual mean of fpsd[d]. However, with the second while loop, it produces 
definitely wrong numbers. Would you help me with this problem?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Triple nested loop python (While loop insde of for loop inside of while loop)

2013-03-01 Thread Isaac Won
Thank you, Chris.
I just want to accumulate the values from y repeatedly.
If y = 1,2,3...10, I just want to have [1,2,3...10] at once.
On Friday, March 1, 2013 7:41:05 AM UTC-6, Chris Angelico wrote:
 On Fri, Mar 1, 2013 at 7:59 PM, Isaac Won winef...@gmail.com wrote:
 
  while c 24:
 
  for columns in ( raw.strip().split() for raw in f ):
 
  while d 335:
 
 
 
 Note your indentation levels: the code does not agree with your
 
 subject line. The third loop is not actually inside your second.
 
 Should it be?
 
 
 
 ChrisA

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Triple nested loop python (While loop insde of for loop inside of while loop)

2013-03-01 Thread Isaac Won
On Friday, March 1, 2013 7:41:05 AM UTC-6, Chris Angelico wrote:
 On Fri, Mar 1, 2013 at 7:59 PM, Isaac Won winef...@gmail.com wrote:
 
  while c 24:
 
  for columns in ( raw.strip().split() for raw in f ):
 
  while d 335:
 
 
 
 Note your indentation levels: the code does not agree with your
 
 subject line. The third loop is not actually inside your second.
 
 Should it be?
 
 
 
 ChrisA

Yes, the third loop should be inside of my whole loop.
Thank you,
Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Triple nested loop python (While loop insde of for loop inside of while loop)

2013-03-01 Thread Isaac Won
Thank you Ulrich for the reply.
What I really want to get from this code is m1, as I said. For this purpose, for 
instance, the values of fpsd up to the second loop and those from the third loop 
should be the same, but they are not. That is actually my main question.
Thank you,
Isaac
On Friday, March 1, 2013 6:00:42 AM UTC-6, Ulrich Eckhardt wrote:
 Am 01.03.2013 09:59, schrieb Isaac Won:
 
  try to make my triple nested loop working. My code would be:
 
  c = 4
 
 [...]
 
  while c 24:
 
   c = c + 1
 
 
 
 This is bad style and you shouldn't do that in python. The question that 
 
 comes up for me is whether something else is modifying c in that loop, 
 
 but I think the answer is no. For that reason, use Python's way:
 
 
 
for c in range(5, 25):
 
...
 
 
 
 That way it is also clear that the first value in the loop is 5, while 
 
 the initial c = 4 seems to suggest something different. Also, the last 
 
 value is 24, not 23.
 
 
 
 
 
 
 
   while d 335:
 
   d = d + 1
 
   y = fpsd[d]
 
   y1 = y1 + [y]
 
  m = np.mean(y1)
 
   m1 = m1 + [m]
 
 
 
 Apart from the wrong indention (don't mix tabs and spaces, see PEP 8!) 
 
 and the that d in range(336) is better style, you don't start with an 
 
 empty y1, except on the first iteration of the outer loop.
 
 
 
 I'm not really sure if that answers your problem. In any case, please 
 
 drop everything not necessary to demostrate the problem before posting. 
 
 This makes it easier to see what is going wrong both for you and others. 
 
 Also make sure that others can actually run the code.
 
 
 
 
 
 Greetings from Hamburg!
 
 
 
 Uli

-- 
http://mail.python.org/mailman/listinfo/python-list


Putting the loop inside of loop properly

2013-03-01 Thread Isaac Won
I would just like to make my previous question simpler; I adjusted my 
code a bit with help from Ulrich and Chris.
The basic structure of my code is:

for c in range(5,25):

for columns in ( raw.strip().split() for raw in f ):
a.append(columns[c])
x = np.array(a, float)
not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'nearest')
p = interp(indices)


N = len(p)


fpsd = plot_freq*PSD
f.seek(0,0)
for d in range(336):

y = fpsd[d]
y1 = y1 + [y]
m = np.mean(y1)
m1 = m1 + [m]
--
I just removed seemingly unnecessary lines. I expect that the last loop can produce 
each ordered value (first, second, ..., last (336th)) of fpsd from the former loop.
fpsd would be 20 lists. So, fpsd[0] in the third loop should be the first values from 
the 20 lists, and I expect them to be accumulated into y1. So, y1 should be the list 
of first values from the 20 fpsd lists, and m is the mean of y1. I expect this to 
repeat 336 times and accumulate into m1. However, it doesn't work, and the fpsd values 
in and out of the last loop are totally different.
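
For what it's worth, a minimal sketch of the accumulation step being described, 
assuming the 20 per-column fpsd arrays are first collected into a list (call it 
all_fpsd) inside the c loop:

import numpy as np

# all_fpsd[k] is the length-336 fpsd array from the k-th column pass
spectra = np.array([fpsd[:336] for fpsd in all_fpsd])
m1 = list(np.mean(spectra, axis=0))   # 336 means, one per frequency index
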
Is my question clear?
Any help or questions would be really appreciated.
Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Import redirects

2013-02-11 Thread Isaac To
On Mon, Feb 11, 2013 at 8:27 PM, Oscar Benjamin
oscar.j.benja...@gmail.comwrote:

 On 11 February 2013 06:50, Isaac To isaac...@gmail.com wrote:
  Except one thing: it doesn't really work.  If I `import foo.baz.mymod`
 now,
  and if in bar.baz.mymod there is a statement `import bar.baz.depmod`,
 then
  it fails.  It correctly load the file bar/baz/depmod.py, and it assigns
  the resulting module to the package object bar.baz as the depmod
 variable.
  But it fails to assign the module object of mymod into the bar.baz
  module.  So after `import foo.baz.mymod`, `foo.baz.mymod` results in an
  AttributeError saying 'module' object has no attribute 'mymod'.  The
 natural
  `import bar.baz.mymod` is not affected.

 My guess is that you have two copies of the module object bar.baz with
 one under the name foo.baz and the other under the name bar.baz. mymod
 is inserted at bar.baz but not at foo.baz. I think a solution in this
 case would be to have your foo/__init__.py also import the subpackage
 'bar.baz' and give it both names in sys.modules:

 import bar.baz
 sys.modules['foo.baz'] = bar.baz


Thanks for the suggestion.  It is indeed attractive if I need only to
pre-import all the subpackage and not to redirect individual modules.  On
the other hand, when I actually try this I found that it doesn't really
work as intended.  What I actually wrote is, as foo/__init__.py:

import sys
import bar
import bar.baz
sys.modules['foo.baz'] = bar.baz
sys.modules['foo'] = bar

One funny effect I get is this:

>>> import bar.baz.mymod
>>> bar.baz.mymod
<module 'bar.baz.mymod' from 'bar/baz/mymod.pyc'>
>>> import foo.baz.mymod
>>> bar.baz.mymod
<module 'foo.baz.mymod' from 'bar/baz/mymod.pyc'>

By importing foo.baz.mymod, I change the name of the module from
bar.baz.mymod to foo.baz.mymod.  If that is not bad enough, I also see
this:

>>> import bar.baz.mymod as bbm
>>> import foo.baz.mymod as fbm
>>> bbm is fbm
False

Both effects are there even if bar/baz/mymod.py no longer `import
bar.baz.depmod`.

It looks to me that package imports are so magical that I shouldn't do
anything funny to it, as anything that seems to work might bite me a few
minutes later.

Regards,
Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Import redirects

2013-02-10 Thread Isaac To
I have a package (say foo) that I want to rename (say, to bar), and for
compatibility reasons I want to be able to use the old package name to
refer to the new package.  Copying files or using filesystem symlinks is
probably not the way to go, since that means any object in the modules of
the package would be duplicated, changing one will not cause the other to
be updated.  Instead, I tried the following as the content of
`foo/__init__.py`:

import sys
import bar
sys.modules['foo'] = bar

To my surprise, it seems to work.  If I `import foo` now, the above will
cause bar to be loaded and be used, which is expected.  But even if I
`import foo.baz` now (without first `import foo`), it will now correctly
import bar.baz in its place.

Except one thing: it doesn't really work.  If I `import foo.baz.mymod` now,
and if in bar.baz.mymod there is a statement `import bar.baz.depmod`,
then it fails.  It correctly load the file bar/baz/depmod.py, and it
assigns the resulting module to the package object bar.baz as the depmod
variable.  But it fails to assign the module object of mymod into the
bar.baz module.  So after `import foo.baz.mymod`, `foo.baz.mymod` results
in an AttributeError saying 'module' object has no attribute 'mymod'.  The
natural `import bar.baz.mymod` is not affected.

I tested it with both Python 2.7 and Python 3.2, and I see exactly the same
behavior.

Anyone knows why that happen?  My current work-around is to use the above
code only for modules and not for packages, which is suboptimal.  Anyone
knows a better work-around?
-- 
http://mail.python.org/mailman/listinfo/python-list


About a value error called 'ValueError: A value in x_new is below the interpolation range'

2013-02-05 Thread Isaac Won
Dear all,

I am trying to calculate correlation coefficients between one time series 
and other time series. However, there are some missing values. So, I 
interpolated each time series with 1d interpolation in scipy and got 
correlation coefficients between them. This code works well for some data sets, 
but doesn't for some others. The following is the actual error I got:
0.0708904109589
0.0801369863014
0.0751141552511
0.0938356164384
0.0769406392694
Traceback (most recent call last):
  File error_removed.py, line 56, in module
i2 = interp(indices)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 394, in __call__
out_of_bounds = self._check_bounds(x_new)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 449, in _check_bounds
raise ValueError(A value in x_new is below the interpolation 
ValueError: A value in x_new is below the interpolation range.

This time it says 'A value in x_new is below the interpolation range', but sometimes 
it says it is above the interpolation range.

I would like to provide some self-contained code, but I am not sure how to make 
it represent my case well.
I just put all of my code here. I apologize for the inconvenience.
---
-
a = []
c = 4
with open(filin1, 'r') as f1:
arrays = [map(float, line.split()) for line in f1]
newa = [[x[1],x[2]] for x in arrays]

o = newa[58]
f = open(filin, 'r')
percent1 = []
for columns in ( raw.strip().split() for raw in f ):
a.append(columns[63])
x = np.array(a, float)

not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan])
#interp = np.interp(indices, indices[not_nan], x[not_nan])
i1 = interp(indices)

f.close
h1 = []
p1 = []
while c < 278:
c = c + 1
d = c - 5
b = []


f.seek(0,0)
for columns in ( raw.strip().split() for raw in f ):

b.append(columns[c])
 y = np.array(b, float)
h = haversine.distance(o, newa[d])
n = len(y)
l = b.count('nan')
percent = l/8760.
percent1 = percent1 + [percent]
   #print l, percent

if percent < 0.1:
not_nan = np.logical_not(np.isnan(y))
indices = np.arange(len(y))

interp = interp1d(indices[not_nan], y[not_nan])
#interp = np.interp(indices, indices[not_nan], x[not_nan])
i2 = interp(indices)

pearcoef = sp.pearsonr(i1,i2)
p = pearcoef[0]
p1 = p1 + [p]
h1 = h1 + [h]
print percent

print h1
print p1
print len(p1)
plt.plot(h1, p1, 'o')
plt.xlabel('Distance(km)')
plt.ylabel('Correlation coefficient')
plt.grid(True)
plt.show()
---
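
For what it's worth, a minimal sketch of a workaround using the np.interp line that is 
already commented out above: interp1d only covers the span between the first and last 
non-NaN samples, so a series that starts or ends with NaN raises exactly this error, 
while np.interp clamps to the end values instead of raising (y is the series being 
interpolated, as in the code):

import numpy as np

not_nan = np.logical_not(np.isnan(y))
indices = np.arange(len(y))
i2 = np.interp(indices, indices[not_nan], y[not_nan])
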
I will really appreciate any help or advice.

Best regards,

Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Isaac Won
On Tuesday, January 22, 2013 10:06:41 PM UTC-6, Isaac Won wrote:
 Hi all,
 
 
 
 I have tried to use different interpolation methods with Scipy. My code seems 
 just fine with linear interpolation, but shows memory error with quadratic. I 
 am a novice for python. I will appreciate any help.
 
 
 
 #code
 
 f = open(filin, r)
 
 for columns in ( raw.strip().split() for raw in f ):
 
 a.append(columns[5])
 
 x = np.array(a, float)
 
 
 
 
 
 not_nan = np.logical_not(np.isnan(x))
 
 indices = np.arange(len(x))
 
 interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
 
 
 
 p = interp(indices)
 
 
 
 
 
 The number of data is 31747.
 
 
 
 Thank you,
 
 
 
 Isaac

I am really grateful to both Ulrich and Oscar.

To Oscar
My actual error message is:
File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 311, in __init__
self._spline = splmake(x,oriented_y,order=order)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 809, in splmake
coefs = func(xk, yk, order, conds, B)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 530, in _find_smoothest
u,s,vh = np.dual.svd(B)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
 line 91, in svd
full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
--
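
For what it's worth, a minimal sketch of a workaround: the traceback shows the 
kind='quadratic' path building a dense system and running an SVD on it (splmake -> 
_find_smoothest -> svd), which is what runs out of memory with ~32000 points; a spline 
class fitted to the same points avoids that step (indices, not_nan and x are the 
arrays from the code in the original message):

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# k=2 gives a quadratic spline through the non-NaN samples
spline = InterpolatedUnivariateSpline(indices[not_nan], x[not_nan], k=2)
p = spline(indices)
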
Thank you,

Hoonill
-- 
http://mail.python.org/mailman/listinfo/python-list

