Re: advanced SimpleHTTPServer?

2016-11-03 Thread justin walters
On Thu, Nov 3, 2016 at 10:27 PM, Rustom Mody  wrote:

> On Thursday, November 3, 2016 at 1:23:05 AM UTC+5:30, Eric S. Johansson
> wrote:
> > On 11/2/2016 2:40 PM, Chris Warrick wrote:
> > > Because, as the old saying goes, any sufficiently complicated Bottle
> > > or Flask app contains an ad hoc, informally-specified, bug-ridden,
> > > slow implementation of half of Django. (In the form of various plugins
> > > to do databases, accounts, admin panels etc.)
> >
> > That's not a special attribute of bottle, flask or Django. Ad hoc,
> > informally specified, bug ridden slow implementations abound.  We focus
> > too much on scaling up and not enough on scaling down. We (designers)
> > also have not properly addressed configuration complexity issues.
>
> This scaling up vs down idea is an important one.
> Related to Buchberger’s blackbox whitebox principle
>
> >
> > If I'm going to do something once, and it costs me more than a couple of
> > hours to figure it out, it's too expensive in general, but definitely if
> > I forget what I learned. That's why bottle/flask systems meet a need.
> > They're not too expensive to forget what you learned.
> >
> > Django makes the cost of forgetting extremely expensive. I think of
> > using Django as a career rather than a toolbox.
>
> That's snide... and probably accurate ;-)
> Among my more unpleasant programming experiences was Ruby-on-Rails,
> and my impression is that Ruby is fine; Rails, not.
> Django I don't know, and my impression is it's a shade better than Rails.
>
> It would be nice to discover the bottle inside the flask inside django
>
> Put differently:
> Frameworks are full-featured and horrible to use
> APIs are elegant but ultimately underpowered
> DSLs (e.g. requests) are in an intermediate sweet spot; we need more DSL families
> --
> https://mail.python.org/mailman/listinfo/python-list
>

I work with Django every day. Knowing Django is like knowing another
ecosystem. It's totally
worth learning though. The speed of development is absolutely unbeatable. I
can build a fully featured
and good-looking blog in about 10 minutes. It's nuts.

The best part about it though, is that it's really just simple Python under
the hood for the most part. You
can override or modify any part of it to make it work in exactly the way
you want it to. I'm a huge Django fanboy,
so excuse the gushing. The docs are also some of the most comprehensive
I've ever seen.


Re: advanced SimpleHTTPServer?

2016-11-03 Thread Rustom Mody
On Thursday, November 3, 2016 at 1:23:05 AM UTC+5:30, Eric S. Johansson wrote:
> On 11/2/2016 2:40 PM, Chris Warrick wrote:
> > Because, as the old saying goes, any sufficiently complicated Bottle
> > or Flask app contains an ad hoc, informally-specified, bug-ridden,
> > slow implementation of half of Django. (In the form of various plugins
> > to do databases, accounts, admin panels etc.)
> 
> That's not a special attribute of bottle, flask or Django. Ad hoc,
> informally specified, bug ridden slow implementations abound.  We focus
> too much on scaling up and not enough on scaling down. We (designers) 
> also have not properly addressed configuration complexity issues.

This scaling up vs down idea is an important one.
Related to Buchberger’s blackbox whitebox principle
 
> 
> If I'm going to do something once, and it costs me more than a couple of
> hours to figure it out, it's too expensive in general, but definitely if
> I forget what I learned. That's why bottle/flask systems meet a need.
> They're not too expensive to forget what you learned.
> 
> Django makes the cost of forgetting extremely expensive. I think of
> using Django as a career rather than a toolbox.

That's snide... and probably accurate ;-)
Among my more unpleasant programming experiences was Ruby-on-Rails,
and my impression is that Ruby is fine; Rails, not.
Django I don't know, and my impression is it's a shade better than Rails.

It would be nice to discover the bottle inside the flask inside django

Put differently:
Frameworks are full-featured and horrible to use
APIs are elegant but ultimately underpowered
DSLs (e.g. requests) are in an intermediate sweet spot; we need more DSL families


[issue10408] Denser dicts and linear probing

2016-11-03 Thread INADA Naoki

INADA Naoki added the comment:

> - make dicts denser by making the resize factor 2 instead of 4 for small dicts

This had already been implemented when I started work on the compact dict.

> - improve cache locality on collisions by using linear probing

set does this, but dict doesn't do it for now.

In the case of the compact dict, linear probing only affects the index table
(dk_indices). dk_indices is small (64 bytes when dk_size == 64), so one or two
cache lines can hold the whole dk_indices of a small dict.
So the performance benefit of linear probing will be smaller than with the
previous dict implementation.

I'll re-evaluate it.
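
As a rough illustration of the idea (plain Python, not the actual CPython code):
with the compact layout, a lookup probes only the small index table and follows
the stored index into the dense entry array.

from collections import namedtuple

Entry = namedtuple("Entry", "key value")

def lookup(indices, entries, key):
    """Probe the small index table, then follow the stored index
    into the dense entries array (compact-dict style)."""
    mask = len(indices) - 1
    i = hash(key) & mask
    while True:
        ix = indices[i]
        if ix is None:                  # empty index slot: key not present
            return None
        entry = entries[ix]
        if entry.key == key:
            return entry.value
        i = (i + 1) & mask              # linear probing: try the next slot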

--




Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread Mario R. Osorio
I don't know much about these topics, but wouldn't Soundex do the job? (A rough sketch follows below the quote.)

 On Thursday, November 3, 2016 at 12:18:19 PM UTC-4, Fillmore wrote:
> Hi there, apologies for the generic question. Here is my problem: let's
> say that I have a list of lists of strings.
> 
> list1:#strings are sort of similar to one another
> 
>my_nice_string_blabla
>my_nice_string_blqbli
>my_nice_string_bl0bla
>my_nice_string_aru
> 
> 
> list2:#strings are mostly different from one another
> 
>my_nice_string_blabla
>some_other_string
>yet_another_unrelated string
>wow_totally_different_from_others_too
> 
> 
> I would like an algorithm that can look at the strings and determine 
> that strings in list1 are sort of similar to one another, while the 
> strings in list2 are all different.
> Ideally, it would be nice to have some kind of 'coherence index' that I 
> can exploit to separate lists given a certain threshold.
> 
> I was about to concoct something using levensthein distance, but then I 
> figured that it would be expensive to compute and I may be reinventing 
> the wheel.
> 
> Thanks in advance to python masters that may have suggestions...
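
For what it's worth, a simplified Soundex sketch (the classic algorithm, with the
vowel/h/w separator rules approximated) applied to strings like these:

def soundex(word):
    # letter -> digit groups of the classic Soundex code
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [c for c in word.lower() if c.isalpha()]
    if not letters:
        return ""
    out, prev = letters[0].upper(), codes.get(letters[0], "")
    for c in letters[1:]:
        digit = codes.get(c, "")
        if digit and digit != prev:     # skip vowels and collapse repeats
            out += digit
        prev = digit
    return (out + "000")[:4]            # pad/truncate to the usual 4 characters

print(soundex("my_nice_string_blabla"))   # 'M522'
print(soundex("some_other_string"))       # 'S536'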



[issue28606] Suspected bug in python optimizer with decorators

2016-11-03 Thread R. David Murray

R. David Murray added the comment:

The statement in question causes the compiler to make 'tags' a local variable 
in the function, and thus you get the error on the assignment.  See 
https://docs.python.org/3/faq/programming.html#id8.
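
A minimal illustration of that FAQ entry (not the reporter's code):

tags = ["outer"]

def decorator():
    print(tags)        # raises UnboundLocalError: the assignment below makes
    tags = ["inner"]   # 'tags' local to the whole function, so the read above
                       # no longer sees the outer name

decorator()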

--
nosy: +r.david.murray
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Terry Reedy

On 11/3/2016 2:56 AM, arthurhavli...@gmail.com wrote:


lst = [ item for item in lst if predicate(item) ]
lst = [ f(item) for item in lst ]

Both these expressions feature redundancy: lst occurs twice and item at least
twice. Additionally, readability is hurt, because one has to dive through
the semantics of the comprehension to truly understand whether I am filtering
the list or remapping its values.

...

Language support for performing these operations in place could improve
their efficiency through reduced use of memory.


We already have that: slice assignment with an iterator.

lst[:] = (item for item in lst if predicate(item))
lst[:] = map(f, lst)  # iterator in 3.x.

To save memory, stop using unneeded temporary lists and use iterators 
instead.  If slice assignment is implemented as I hope, it will minimize
memory operations.  (But I have not read the code.) It should overwrite
existing slots until either a) the iterator is exhausted or b) existing 
memory is used up.  When lst is both source and destination, only case 
a) can happen.  When it does, the list can be finalized with its new 
contents.


As for timings.

from timeit import Timer
setup = """data = list(range(1))
def func(x):
    return x
"""
t1a = Timer('data[:] = [func(a) for a in data]', setup=setup)
t1b = Timer('data[:] = (func(a) for a in data)', setup=setup)
t2a = Timer('data[:] = list(map(func, data))', setup=setup)
t2b = Timer('data[:] = map(func, data)', setup=setup)

print('t1a', min(t1a.repeat(number=500, repeat=7)))
print('t1b', min(t1b.repeat(number=500, repeat=7)))
print('t2a', min(t2a.repeat(number=500, repeat=7)))
print('t2b', min(t2b.repeat(number=500, repeat=7)))
#
t1a 0.5675313005414555
t1b 0.7034254675598604
t2a 0.518128598520
t2b 0.5196112759726024

If f does more work, the % difference among these will decrease.


--
Terry Jan Reedy



[issue28605] Remove mention of LTO when referencing --with-optimization in What's New

2016-11-03 Thread STINNER Victor

STINNER Victor added the comment:

LTO was excluded from --with-optimizations by issue #28032.

--
nosy: +haypo




[issue28605] Remove mention of LTO when referencing --with-optimization in What's New

2016-11-03 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 1f750fff788e by Brett Cannon in branch '3.6':
Issue #28605: Fix the help and What's New entry for --with-optimizations.
https://hg.python.org/cpython/rev/1f750fff788e

New changeset 4000de2dcd24 by Brett Cannon in branch 'default':
Merge for issue #28605
https://hg.python.org/cpython/rev/4000de2dcd24

--
nosy: +python-dev




[issue28605] Remove mention of LTO when referencing --with-optimization in What's New

2016-11-03 Thread Brett Cannon

Changes by Brett Cannon :


--
resolution:  -> fixed
status: open -> closed




Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread jladasky
On Thursday, November 3, 2016 at 3:47:41 PM UTC-7, jlad...@itu.edu wrote:
> On Thursday, November 3, 2016 at 1:09:48 PM UTC-7, Neil D. Cerutti wrote:
> > you may also be 
> > able to use some items "off the shelf" from Python's difflib.
> 
> I wasn't aware of that module, thanks for the tip!
> 
> difflib.SequenceMatcher.ratio() returns a numerical value which represents 
> the "similarity" between two strings.  I don't see a precise definition of 
> "similar", but it may do what the OP needs.

Following up to myself... I just experimented with 
difflib.SequenceMatcher.ratio() and discovered something.  The algorithm is not 
"commutative."  That is, it doesn't ALWAYS produce the same ratio when the two 
strings are swapped.

Here's an excerpt from my interpreter session.

==

In [1]: from difflib import SequenceMatcher

In [2]: import numpy as np

In [3]: sim = np.zeros((4,4))


== snip ==


In [10]: strings
Out[10]: 
('Here is a string.',
 'Here is a slightly different string.',
 'This string should be significantly different from the other two?',
 "Let's look at all these string similarity values in a matrix.")

In [11]: for r, s1 in enumerate(strings):
    ...:     for c, s2 in enumerate(strings):
    ...:         m = SequenceMatcher(lambda x:x=="", s1, s2)
    ...:         sim[r,c] = m.ratio()
    ...:

In [12]: sim
Out[12]: 
array([[ 1.,  0.64150943,  0.2195122 ,  0.30769231],
   [ 0.64150943,  1.,  0.47524752,  0.30927835],
   [ 0.2195122 ,  0.45544554,  1.,  0.28571429],
   [ 0.30769231,  0.28865979,  0.,  1.]])

==

The values along the matrix diagonal, of course, are all ones, because each 
string was compared to itself.

I also expected the values reflected across the matrix diagonal to match.  The 
first row does in fact match the first column.  The remaining numbers disagree 
somewhat.  The differences are not large, but they are there.  I don't know the 
reason why.  Caveat programmer.


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread Fillmore

On 11/3/2016 6:47 PM, jlada...@itu.edu wrote:

On Thursday, November 3, 2016 at 1:09:48 PM UTC-7, Neil D. Cerutti wrote:

you may also be
able to use some items "off the shelf" from Python's difflib.


I wasn't aware of that module, thanks for the tip!

difflib.SequenceMatcher.ratio() returns a numerical value which represents
> the "similarity" between two strings.  I don't see a precise 
definition of

> "similar", but it may do what the OP needs.





I may end up rolling my own algo, but thanks for the tip, this does seem 
like useful stuff indeed





[issue28607] C implementation of parts of copy.deepcopy

2016-11-03 Thread Brett Cannon

Changes by Brett Cannon :


--
components: +Extension Modules -Library (Lib)
stage:  -> test needed




[issue28607] C implementation of parts of copy.deepcopy

2016-11-03 Thread Rasmus Villemoes

New submission from Rasmus Villemoes:

This is mostly an RFC patch. It compiles and passes the test suite. A somewhat 
silly microbenchmark such as

./python -m timeit -s 'import copy; x = dict([(str(x), x) for x in 
range(1)]);'  'copy.deepcopy(x)'

runs about 30x faster. In the (2.7 only) application which motivated this, the 
part of its initialization that does a lot of deepcopying drops from 11s to 3s. 
That the speedup is so much smaller is presumably because the application holds
on to the deepcopies, so there's much more allocation going on than in the
microbenchmark, but I haven't investigated thoroughly. In any case, a 3.5x
speedup is also nice.

--
components: Library (Lib)
files: deepcopy.patch
keywords: patch
messages: 280032
nosy: villemoes
priority: normal
severity: normal
status: open
title: C implementation of parts of copy.deepcopy
type: performance
versions: Python 3.7
Added file: http://bugs.python.org/file45344/deepcopy.patch




Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread jladasky
On Thursday, November 3, 2016 at 1:09:48 PM UTC-7, Neil D. Cerutti wrote:
> you may also be 
> able to use some items "off the shelf" from Python's difflib.

I wasn't aware of that module, thanks for the tip!

difflib.SequenceMatcher.ratio() returns a numerical value which represents the 
"similarity" between two strings.  I don't see a precise definition of 
"similar", but it may do what the OP needs.



Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Paul Rubin
arthurhavli...@gmail.com writes:
> I would gladly appreciate your returns on this, regarding:
> 1 - Whether a similar proposition has been made
> 2 - If you find this of any interest at all
> 3 - If you have a suggestion for improving the proposal

Bleccch.  Might be ok as a behind-the-scenes optimization by the
compiler.  If you want something like C++ move semantics, use C++.


[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Yury Selivanov added the comment:

+# Wake up the loop if the finalizer was called from
+# a different thread.
+self._write_to_self()

Yeah, looks like shutdown_asyncgens somehow ended up in 3.5 branch (there's no 
harm in it being there).  I'll sync the branches.

--




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Guido van Rossum

Guido van Rossum added the comment:

PS. I noticed there are a few lines different between the "upstream" repo
and the 3.5 stdlib:

+# Wake up the loop if the finalizer was called from
+# a different thread.
+self._write_to_self()

On Thu, Nov 3, 2016 at 3:12 PM, Yury Selivanov 
wrote:

>
> Changes by Yury Selivanov :
>
>
> --
> resolution:  -> fixed
> stage: commit review -> resolved
> status: open -> closed
> type:  -> performance
>
> ___
> Python tracker 
> 
> ___
>

--




[issue23996] _PyGen_FetchStopIterationValue() crashes on unnormalised exceptions

2016-11-03 Thread Yury Selivanov

Yury Selivanov added the comment:

> Looks like I forgot about this. My final fix still hasn't been applied, so 
> the code in Py3.4+ is incorrect now.

Left a question in code review

--




[issue28604] Exception raised by python3.5 when using en_GB locale

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

I suspect this issue is similar to issue25812. en_GB has a non-UTF-8 encoding
(likely iso8859-1). The currency symbol £ is encoded in that encoding as b'\xa3',
but Python tries to decode b'\xa3' with an encoding determined by another locale
setting (LC_CTYPE).
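
A quick illustration of the mismatch (assuming iso8859-1 really is the encoding
in play):

raw = b'\xa3'                    # '£' as an iso8859-1 locale would encode it
print(raw.decode('iso8859-1'))   # '£'
raw.decode('utf-8')              # raises UnicodeDecodeError, much like the failure above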

--
nosy: +lemburg, loewis, serhiy.storchaka
versions: +Python 3.7 -Python 3.4




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Yury Selivanov added the comment:

Guido, Andrew, thanks for reviews!

I've fixed some unittests before committing the patch.

--




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Changes by Yury Selivanov :


--
resolution:  -> fixed
stage: commit review -> resolved
status: open -> closed
type:  -> performance




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 128ffe3c3eb9 by Yury Selivanov in branch '3.5':
Issue #28600: Optimize loop.call_soon().
https://hg.python.org/cpython/rev/128ffe3c3eb9

New changeset 4f570a612aec by Yury Selivanov in branch '3.6':
Merge 3.5 (issue #28600)
https://hg.python.org/cpython/rev/4f570a612aec

New changeset 46c3eede41a6 by Yury Selivanov in branch 'default':
Merge 3.6 (issue #28600)
https://hg.python.org/cpython/rev/46c3eede41a6

--
nosy: +python-dev




Re: Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Cousin Stanley
Constantin Sorin wrote:

> Hello, I recently started to make a dice game in Python.
> 
> Everything was nice and beautiful, until now.
> 
> My problem is that when I try to play and I win or lose
> or it's a tie, next time it will continue only with that.
>  

  Following is a link to a version of your code
  rewritten to run using python3  

http://csphx.net/fire_dice.py.txt

  The only significant differences 
  between python2 and python3
  in your code are  

python2 . python3

 print ... print( )

 raw_input( ) ... input( )


Bob Gailer wrote: 

> 
> The proper way to handle this 
> is to put the entire body of game() 
> in a while loop.
>   
> Since the values of e and f are not changed in the loop 
> he will continue to get the same thing.
>  

  These changes are the key
  to making the program loop
  as desired  

  All other changes are mostly cosmetic  

 
  * Note *

I  am  partial to white space 
both  horizontal  and  vertical ... :-)
  

-- 
Stanley C. Kitching
Human Being
Phoenix, Arizona



[issue28606] Suspected bug in python optimizer with decorators

2016-11-03 Thread Brian Smith

New submission from Brian Smith:

While using decorators in python 3.5.2, I ran into a surprising bug where the 
decorator sometimes lost context of the outer scope.  The attached file 
demonstrates this problem.

In this file, we have 2 decorators.  They are identical, except that the first 
has one line (at line 11) commented out.

When I run the program, I get the following output:

dev:~/dev/route105/workspace/ir/play$ python bug.py
Trying dec1:
in dec1: {'tags': ['foo:1'], 'name': 'foo'}
inside dec1.decorator: {'tags': {'tags': ['foo:1', ['name:subsystem']], 'name': 
'foo'}, 'func': }

Trying dec2:
in dec2: {'tags': ['foo:1'], 'name': 'foo'}
inside dec2.decorator: {'func': }
Traceback (most recent call last):
  File "bug.py", line 42, in 
    @dec2(name="foo", tags=["foo:1"])
  File "bug.py", line 27, in decorator
    name = tags["name"]
UnboundLocalError: local variable 'tags' referenced before assignment

There are two issues here:
1) In dec1, the keyword argument 'name' exists in the outer scope, but not in 
the inner scope.  For some reason, the keyword argument 'tags' exists in both 
scopes.

2) In dec2, adding the line
 tags=tags["tags"]
causes the keyword argument 'tags' to disappear from the inner scope.

The only thing I could think of was a compiler or optimizer bug.

--
components: Interpreter Core
files: bug.py
messages: 280025
nosy: Brian Smith
priority: normal
severity: normal
status: open
title: Suspected bug in python optimizer with decorators
type: behavior
versions: Python 3.5
Added file: http://bugs.python.org/file45343/bug.py




[issue28605] Remove mention of LTO when referencing --with-optimization in What's New

2016-11-03 Thread Brett Cannon

New submission from Brett Cannon:

The What's New doc for Python 3.6 mentions that --with-optimizations turns on
LTO, which is no longer true.

--
assignee: brett.cannon
components: Documentation
messages: 280024
nosy: brett.cannon, gregory.p.smith
priority: deferred blocker
severity: normal
status: open
title: Remove mention of LTO when referencing --with-optimization in What's New
versions: Python 3.6




Re: data interpolation

2016-11-03 Thread Bob Gailer
On Nov 3, 2016 6:10 AM, "Heli"  wrote:
>
> Hi,
>
> I have a question about data interpolation using Python. I have a big
ascii file containing data in the following format and around 200M points.
>
> id, xcoordinate, ycoordinate, zcoordinate
>
> then I have a second file containing data in the following format, ( 2M
values)
>
> id, xcoordinate, ycoordinate, zcoordinate, value1, value2, value3,...,
valueN
>
> I would need to get values for x,y,z coordinates of file 1 from values of
file2.
>
Apologies but I have no idea what you're asking. Can you give us some
examples?

> I don't know whether my data in file1 and file2 is from a structured or
unstructured grid source. I was wondering which interpolation module, either
from scipy or scikit-learn, you would recommend I use?
>
> I would also appreciate if you could recommend me some sample
example/reference.
>
> Thanks in Advance for your help,
> --
> https://mail.python.org/mailman/listinfo/python-list
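
For scattered points like these, a minimal sketch along the lines of
scipy.interpolate.griddata (the file names, column layout and choice of value1
are only assumptions here) could look like:

import numpy as np
from scipy.interpolate import griddata

known = np.loadtxt("file2.txt", delimiter=",")    # id, x, y, z, value1, ..., valueN
targets = np.loadtxt("file1.txt", delimiter=",")  # id, x, y, z

points = known[:, 1:4]     # x, y, z of the points that carry values
values = known[:, 4]       # value1 column
query = targets[:, 1:4]    # x, y, z where values are wanted

# 'linear' triangulates the known points; method='nearest' is cheaper but
# rougher, and 200M query points would likely need to be processed in chunks.
value1_at_targets = griddata(points, values, query, method="linear")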


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Chris Angelico
On Fri, Nov 4, 2016 at 4:00 AM, Steve D'Aprano
 wrote:
> On Fri, 4 Nov 2016 01:05 am, Chris Angelico wrote:
>
>> On Thu, Nov 3, 2016 at 7:29 PM, Steven D'Aprano
>>  wrote:
 lst = map (lambda x: x*5, lst)
 lst = filter (lambda x: x%3 == 1, lst)
 And perform especially bad in CPython compared to a comprehension.
>>>
>>> I doubt that.
>>>
>>
>> It's entirely possible. A list comp involves one function call (zero
>> in Py2), but map/lambda involves a function call per element in the
>> list. Function calls have overhead.
>
> I don't know why you think that list comps don't involve function calls.

List comps themselves involve one function call (zero in Py2). What
you do inside the expression is your business. Do you agree that list
comps don't have the overhead of opening and closing files?

files = "/etc/hostname", "/etc/resolv.conf", ".bashrc"
contents = [open(fn).read() for fn in files]

In a comparison between comprehensions and map, this cost is
irrelevant, unless your point is that "they're all pretty quick".

> Here's some timing results using 3.5 on my computer. For simplicity, so
> folks can replicate the test themselves, here's the timing code:
>
>
> from timeit import Timer
> setup = """data = list(range(1))
> def func(x):  # simulate some calculation
> return {x+1: x**2}
> """
> t1 = Timer('[func(a) for a in data]', setup=setup)
> t2 = Timer('list(map(func, data))', setup=setup)

This is very different from the original example, about which the OP
said that map performs badly, and you doubted it. In that example, the
list comp includes the expression directly, whereas map (by its
nature) must use a function. The "inline expression" of a
comprehension is more efficient than the "inline expression" of a
lambda function given to map.

> And here's the timing results on my machine:
>
> py> min(t1.repeat(number=1000, repeat=7))  # list comp
> 18.2571472954005
> py> min(t2.repeat(number=1000, repeat=7))  # map
> 18.157311914488673
>
> So there's hardly any difference, but map() is about 0.5% faster in this
> case.

Right. As has often been stated, map is perfectly efficient, *if* the
body of the comprehension would be simply a function call, nothing
more. You can map(str, stuff) to stringify a bunch of things. Nice.
But narrow, and not the code that was being discussed.

> Now, it is true that *if* you can write the list comprehension as a direct
> calculation in an expression instead of a function call:
>
> [a+1 for a in data]
>
> *and* compare it to map using a function call:
>
> map(lambda a: a+1, data)
>
>
> then the overhead of the function call in map() may be significant.

Thing is, this is extremely common. How often do you actually use a
comprehension with something that is absolutely exactly a function
call on the element in question?

> But the
> extra cost is effectively just a multiplier. It isn't going to change
> the "Big Oh" behaviour.

Sure it doesn't. In each case, the cost is linear. But the work is
linear, so I would expect the time to be linear.

> So map() here is less than a factor of two slower. I wouldn't call
> that "especially bad" -- often times, a factor of two is not important.
> What really hurts is O(N**2) performance, or worse.
>
> So at worst, map() is maybe half as fast as a list comprehension, and at
> best, its perhaps a smidgen faster. I would expect around the same
> performance, differing only by a small multiplicative factor: I wouldn't
> expect one to be thousands or even tens of times slower that the other.

But this conclusion I agree with. There is a performance difference,
but it is not overly significant. Compared to the *actual work*
involved in the task (going through one list and doing some operation
on each element), the difference between map and a comprehension is
generally going to be negligible.

ChrisA


Re: Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Bob Gailer
On Nov 3, 2016 11:30 AM, "Constantin Sorin"  wrote:
>
> Hello, I recently started to make a dice game in Python. Everything was
nice and beautiful, until now. My problem is that when I try to play and I
win or lose or it's a tie, next time it will continue only with that.
> Exemple:
> Enter name >> Sorin
> Money = 2
> Bet >> 2
> You won!
> Money 4
> Bet >> 2
> You won!
> and it loops like this :/
>

What are the rules of the game?

> Here is the code:
>
> import time
> import os
> import random
> os = os.system
> os("clear")
>
> print "What is your name?"
> name = raw_input(">>")
>
> def lost():
>     print "You lost the game! Wanna play again? Y/N"
>     ch = raw_input(">>")
>     if ch == "Y":
>         game()

When you call a function from within that function you are using recursion.
This is not what recursion is intended for. If you play long enough you
will run out of memory.

The proper way to handle this is to put the entire body of game() in a
while loop.

>     elif ch == "N":
>         exit()
>
>
>
> def game():
>     os("clear")
>     a = random.randint(1,6)
>     b = random.randint(1,6)
>     c = random.randint(1,6)
>     d = random.randint(1,6)
>     e = a + b
>     f = c + d
>     money = 2
>     while money > 0:
>         print "Welcome to FireDice %s!" %name
>         print "Your money: %s$" %money
>         print "How much do you bet?"
>         bet = input(">>")
>         if e > f:
>             print "you won!"
>             money = money + bet
>         elif e < f:
>             print "you lost"
>             money = money - bet
>         else:
>             print "?"
>
>         print money
>     lost()

Since the values of e and f are not changed in the loop he will continue to
get the same thing.
>
> game()
> --
> https://mail.python.org/mailman/listinfo/python-list
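
A rough Python 3 sketch of Bob's suggestion above (the dice are rolled inside
the loop so each bet gets a fresh result; names follow the original script):

import random

def game(name, money=2):
    while money > 0:
        e = random.randint(1, 6) + random.randint(1, 6)   # re-rolled every round
        f = random.randint(1, 6) + random.randint(1, 6)
        print("Welcome to FireDice %s!" % name)
        print("Your money: %s$" % money)
        bet = int(input("How much do you bet? >> "))
        if e > f:
            print("you won!")
            money += bet
        elif e < f:
            print("you lost")
            money -= bet
        else:
            print("it's a tie")
        print(money)
    print("You are out of money! Game over.")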


[issue28604] Exception raised by python3.5 when using en_GB locale

2016-11-03 Thread Guillaume Pasquet (Etenil)

New submission from Guillaume Pasquet (Etenil):

This issue was originally reported on Fedora's Bugzilla: 
https://bugzilla.redhat.com/show_bug.cgi?id=1391280

Description of problem:
After switching the monetary locale to en_GB, python then raises an exception 
when calling locale.localeconv()

Version-Release number of selected component (if applicable):
3.5.2-4.fc25

How reproducible:
Every time

Steps to Reproduce:
1. Write a python3 script or open the interactive interpreter with "python3"
2. Enter the following
import locale
locale.setlocale(locale.LC_MONETARY, 'en_GB')
locale.localeconv()
3. Observe that python raises an encoding exception

Actual results:
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib64/python3.5/locale.py", line 110, in localeconv
    d = _localeconv()
UnicodeDecodeError: 'locale' codec can't decode byte 0xa3 in position 0: 
Invalid or incomplete multibyte or wide character

Expected results:
A dictionary of locale data similar to (for en_US):
{'mon_thousands_sep': ',', 'currency_symbol': '$', 'negative_sign': '-', 
'p_sep_by_space': 0, 'frac_digits': 2, 'int_frac_digits': 2, 'decimal_point': 
'.', 'mon_decimal_point': '.', 'positive_sign': '', 'p_cs_precedes': 1, 
'p_sign_posn': 1, 'mon_grouping': [3, 3, 0], 'n_cs_precedes': 1, 'n_sign_posn': 
1, 'grouping': [3, 3, 0], 'thousands_sep': ',', 'int_curr_symbol': 'USD ', 
'n_sep_by_space': 0}

Note:
This was reproduced on Linux Mint 18 (python 3.5.2), and also on Fedora with 
python 3.4 and python 3.6 (compiled).

--
components: Interpreter Core
messages: 280023
nosy: Guillaume Pasquet (Etenil)
priority: normal
severity: normal
status: open
title: Exception raised by python3.5 when using en_GB locale
type: behavior
versions: Python 3.4, Python 3.5, Python 3.6




[issue28603] traceback module can't format/print unhashable exceptions

2016-11-03 Thread Andreas Stührk

New submission from Andreas Stührk:

The traceback module tries to handle loops caused by an exception's __cause__ 
or __context__ attributes when printing tracebacks. To do so, it adds already 
seen exceptions to a set. Unfortunately, it doesn't handle unhashable 
exceptions:

>>> class E(Exception): __hash__ = None
...
>>> traceback.print_exception(E, E(), None)
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python3.5/traceback.py", line 100, in print_exception
    type(value), value, tb, limit=limit).format(chain=chain):
  File "/usr/lib/python3.5/traceback.py", line 439, in __init__
    _seen.add(exc_value)
TypeError: unhashable type: 'E'

CPython's internal exception printing pretty much does the same, except it 
ignores any exception while operating on the seen set (see 
https://hg.python.org/cpython/file/8ee4ed577c03/Python/pythonrun.c#l813 ff).

Attached is a patch that makes the traceback module ignore TypeErrors while 
operating on the seen set. It also adds a (minimal) test.

--
components: Library (Lib)
files: unhashable_exceptions.diff
keywords: patch
messages: 280022
nosy: Trundle
priority: normal
severity: normal
status: open
title: traceback module can't format/print unhashable exceptions
Added file: http://bugs.python.org/file45342/unhashable_exceptions.diff




[issue26980] The path argument of asyncio.BaseEventLoop.create_unix_connection is not documented

2016-11-03 Thread Guido van Rossum

Changes by Guido van Rossum :


--
resolution:  -> fixed
stage: needs patch -> resolved
status: open -> closed




[issue26980] The path argument of asyncio.BaseEventLoop.create_unix_connection is not documented

2016-11-03 Thread Roundup Robot

Roundup Robot added the comment:

New changeset b97b0201c2f4 by Guido van Rossum in branch '3.5':
Issue #26980: Improve docs for create_unix_connection(). By Mariatta.
https://hg.python.org/cpython/rev/b97b0201c2f4

New changeset ddbba4739ef4 by Guido van Rossum in branch '3.6':
Issue #26980: Improve docs for create_unix_connection(). By Mariatta. (3.5->3.6)
https://hg.python.org/cpython/rev/ddbba4739ef4

New changeset d6f4c1b864e6 by Guido van Rossum in branch 'default':
Issue #26980: Improve docs for create_unix_connection(). By Mariatta. (3.6->3.7)
https://hg.python.org/cpython/rev/d6f4c1b864e6

--
nosy: +python-dev




[issue26980] The path argument of asyncio.BaseEventLoop.create_unix_connection is not documented

2016-11-03 Thread Guido van Rossum

Guido van Rossum added the comment:

LGTM, will apply shortly.

--




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Yury Selivanov added the comment:

> LGTM

Will commit this soon.

> maybe make dirty hack and check hasattr(callback, 'send') ?

If you schedule a coroutine object it will fail anyways, because it's not 
callable.

--




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Марк Коренберг

Марк Коренберг added the comment:

> haypo added the check because people called `.call_later()` with coroutine 
> instead of callback very often

maybe make dirty hack and check hasattr(callback, 'send') ?

--
nosy: +mmarkk




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread STINNER Victor

STINNER Victor added the comment:

call_soon3.patch: LGTM. It enhances error messages (fix the method name) and 
should make asyncio faster.

--




[issue23996] _PyGen_FetchStopIterationValue() crashes on unnormalised exceptions

2016-11-03 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
type: crash -> behavior
versions: +Python 3.7




Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Arthur Havlicek
I understand that the cost of change is such that it's very unlikely
something like this ever goes into Python, but I feel like the interest of
the proposal is being underestimated here, so I'm going to argue
a few points and give a bit more context as needed.

> While mapping and filtering are common operations, I'm not sure mapping
> and filtering and then reassigning back to the original sequence is
> especially common.

It depends on your context. In the last 3 months, I stumbled across this at
least 3 times, which is 3 times more often than I used a lambda or a metaclass
or a decorator or any other fancy language feature that we simply avoid whenever
possible. It also happened to my colleague. I remember these examples
because we had a bit of humour about how nice inlined ternaries inside
comprehensions can be, but I could be missing a lot more.

The reason I'm especially impacted by this is that I am the maintainer
of a decent-sized Python application (~50-100K lines of code) that
extensively uses lists and dictionaries. We value "low level" data
manipulation and efficiency a lot more than complex, non-obvious
constructs. In other contexts, it may be very different. Note that my
context is only relevant as an illustration here; I don't expect a feature to
save me, since we are currently shipping to CentOS 6 and thus will not see
the light of Python 3.7 in the next 10 years (an optimistic estimate).

> Arthur, I would suggest looking at what numpy and pandas do.

In my example context, their benefits don't apply, because I'm not going to
rewrite code that uses lists to use np.array instead, for example.
Although the performance boost is likely to be bigger if they are used properly,
I would have preferred a smaller boost that comes for (almost) free.

Like most Python programmers, I don't need a performance
boost very badly, but that does not mean I disregard performance entirely.
The need for performance is not so binary that it either doesn't matter at all
or is enough to motivate a rewrite.

> To my eyes, this proposed syntax is completely foreign

I must admit I don't have much imagination for syntax proposals... all that
mattered to me here was to make it clear you are doing an in-place
modification. Feel free to completely ignore that part. Any proposal
is welcome, of course.
About Readability & Redundancy

I have misused the terms here, but I wasn't expecting so much nitpicking. I
should have used the term maintainability, because that one is bland and
subjective enough that nobody would have noticed :D

How about "I find that cooler." Good enough ?

In a less sarcastic tone:

What I truly meant here is that when you contain the behavior of your code
inside a specific keyword or syntax, you are making your intentions clear
to the reader. It may be harder for the reader to gain access to that knowledge
in the first place, but it makes things easier over time.

Famous example:

When you learned programming, you may have had no idea what "+=" was doing,
but now that you do, you probably rate the "a += 2" syntax as much better than
"a = a + 2". You save a token, but more importantly, you make
your intentions clearer, because "+=" rings a bell, while "=" is a more
generic syntax with a broader meaning.

> So map() here is less than a factor of two slower. I wouldn't call
> that "especially bad" -- often times, a factor of two is not important.
> What really hurts is O(N**2) performance, or worse.

When you evaluate your application's bottlenecks or your average daily
algorithmic work, perhaps. But for core language features we
are not on the same scale. If language X is 2 times faster than
language Y at the same task, that's a huge selling point, and is of real
importance.


[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Ned Deily

Changes by Ned Deily :


--
stage:  -> commit review




Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread Neil D. Cerutti

On 11/3/2016 1:49 PM, jlada...@itu.edu wrote:

The Levenshtein distance is a very precise definition of dissimilarity between 
sequences.  It specifies the minimum number of single-element edits you would 
need to change one sequence into another.  You are right that it is fairly 
expensive to compute.

But you asked for an algorithm that would determine whether groups of strings are 
"sort of similar".  How imprecise can you be?  An analysis of the frequency of 
each individual character in a string might be good enough for you.


I also once used a Levenshtein distance algo in Python (code snippet 
D0DE4716-B6E6-4161-9219-2903BF8F547F) to compare names of students (it 
worked, but turned out to not be what I needed), but you may also be 
able to use some items "off the shelf" from Python's difflib.


--
Neil Cerutti



[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Guido van Rossum

Guido van Rossum added the comment:

Patch 3 LGTM.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28602] `tzinfo.fromutc()` fails when used for a fold-aware tzinfo implementation

2016-11-03 Thread Alexander Belopolsky

Alexander Belopolsky added the comment:

> I can't think of a single actual downside to this change - all it does is 
> preserve the original behavior of `fromutc()`.

You've got me on the fence here.  If what you are saying is correct, it would 
make sense to make this change and better do it before 3.6 is out, but it would 
take me some effort to convince myself that an implementation that reuses 
patched fromutc() is indeed correct.

Why don't you implement tzrange.fromutc() in terms of, say,
tzrange.simple_fromutc(), which is your own patched version of tzinfo.fromutc()?
If this passes your tests and is simpler than a straightforward fromutc() that
does not have to look at tzinfo through the needle hole of the utcoffset()/dst()
pair but knows the internals of your timezone object, we can consider promoting
your simple_fromutc() to be the default stdlib implementation.

Alternatively, you can just convince Tim. :-)

--




[issue28602] `tzinfo.fromutc()` fails when used for a fold-aware tzinfo implementation

2016-11-03 Thread Paul G

Paul G added the comment:

> After all, how much effort would it save for you in dateutil if you could 
> reuse the base class fromutc?

Realistically, this saves me nothing since I have to re-implement it anyway in
all versions <= Python 3.6 (basically just the exact same algorithm with
line 997 replaced with enfold(dt, fold=1) rather than dt.replace(fold=1)), but
I'd rather it fall back to the standard `fromutc()` in fold-aware versions of
Python 3.6.

That said, I don't see how it's a big can of worms to open. If you're going to 
provide `fromutc()` functionality, it should not be deliberately broken. As I 
mentioned above, I see no actual downside in having `fromutc()` actually work 
as advertised and as intended.

--




[issue28602] `tzinfo.fromutc()` fails when used for a fold-aware tzinfo implementation

2016-11-03 Thread Alexander Belopolsky

Alexander Belopolsky added the comment:

Paul G at Github:

"""
To be clear, I'm just saying that fromutc() goes from returning something 
meaningful (the correct date and time, with no indication of what side of the 
fold it's on) to something useless (an incorrect date and time) in the case 
where you're on the STD side of the divide. That change would restore the 
original behavior.

For most of the tzinfo implementations I'm providing (tzrange, tzwin, etc), 
there's no problem with an invariant standard time offset, so I'd prefer to 
fall back on the default algorithm in those cases.

Another advantage of using the standard algorithm as a starting point is that
all the type checking and such that's done in fromutc is done at that level,
and I don't have to keep track of handling that.

All that said, it's not a huge deal either way.
"""

Since it is not a big issue for you, I would rather not reopen this can of 
worms.  It may be better to return a clearly erroneous result on fold-aware 
tzinfos to push the developers like you in the right direction. :-)

After all, how much effort would it save for you in dateutil if you could reuse 
the base class fromutc?

--




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread STINNER Victor

STINNER Victor added the comment:

I didn't review the patch, but I agree with the overall approach: expensive
checks can be made only in debug mode.

If people are concerned by the removal of the check by default, we should
repeat everywhere in the doc that async programming is hard and that the
debug mode saves a lot of time ;-)

--




[issue28602] `tzinfo.fromutc()` fails when used for a fold-aware tzinfo implementation

2016-11-03 Thread Paul G

Paul G added the comment:

Of the `tzinfo` implementations provided by `python-dateutil`, `tzrange`, 
`tzstr` (GNU TZ strings), `tzwin` (Windows style time zones) and `tzlocal` all 
satisfy this condition. These are basically all implementations of default 
system time zone information.

With current implementations `tzfile` and `tzical` also use the invariant 
algorithm, though theoretically there are edge cases where this will cause 
problems, and those should have their `fromutc()` adjusted.

In any case, I can't think of a single actual downside to this change - all it 
does is preserve the original behavior of `fromutc()`. As currently 
implemented, the algorithm is simply wrong when `dst()` is affected by `fold`, 
and this change would have no effect on zones where `dst()` is *not* affected 
by fold.

--




[issue10408] Denser dicts and linear probing

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

What is the status of this issue in the light of compact dict implementation?

--
nosy: +inada.naoki




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Changes by Yury Selivanov :


Added file: http://bugs.python.org/file45341/call_soon3.patch




[issue28602] `tzinfo.fromutc()` fails when used for a fold-aware tzinfo implementation

2016-11-03 Thread Alexander Belopolsky

Alexander Belopolsky added the comment:

I don't think timezones that satisfy the invariant expected by the default
fromutc() are common enough that we need to provide special support for them.
Note that in most cases it is the UTC-to-local conversion that is
straightforward while local-to-UTC is tricky, so code that reduces a simple
task to a harder one has questionable utility.

--
assignee:  -> belopolsky
type: behavior -> enhancement




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Changes by Yury Selivanov :


Added file: http://bugs.python.org/file45340/call_soon2.patch




[issue25264] test_marshal always crashs on "AMD64 Windows10 2.7" buildbot

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Crashes seems was fixed in issue22734 and issue27019.

--
resolution:  -> duplicate
stage:  -> resolved
status: open -> closed
superseder:  -> marshal needs a lower stack depth for debug builds on Windows




Re: constructor classmethods

2016-11-03 Thread Ethan Furman

On 11/03/2016 07:45 AM, Ethan Furman wrote:

On 11/03/2016 01:50 AM, teppo.p...@gmail.com wrote:


The guide is written with C++ in mind, yet the concepts stand for any
 programming language really. Read it through and think about it. If
 you come back to this topic and say: "yeah, but it's c++", then you
 haven't understood it.


The ideas (loose coupling, easy testing) are certainly applicable in Python -- 
the specific methods talked about in that paper, however, are not.


Speaking specifically about the gyrations needed for the sole purpose of 
testing.

The paper had a lot of good things to say about decoupling, and in that light 
if the class in question should work with any Queue, then it should be passed 
in -- however, if it's an implementation detail, then it shouldn't.
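
A small sketch of the distinction (a hypothetical class, not from the thread):
pass the queue in when any queue should work, and tests get to substitute their
own.

import queue

class Worker:
    def __init__(self, task_queue=None):
        # Injected dependency: tests can pass a prefilled or fake queue;
        # production code gets a default implementation.
        self.tasks = task_queue if task_queue is not None else queue.Queue()

    def add(self, item):
        self.tasks.put(item)

# In a test:
fake = queue.Queue()
w = Worker(task_queue=fake)
w.add("job")
assert fake.get() == "job"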

--
~Ethan~


[issue28602] `tzinfo.fromutc()` fails when used for a fold-aware tzinfo implementation

2016-11-03 Thread Paul G

New submission from Paul G:

After PEP-495, the default value for non-fold-aware datetimes is that they 
return the DST side, not the STD side (as was the assumption before PEP-495). 
This invalidates an assumption made in `tz.fromutc()`. See lines 991-1000 of 
datetime.py:

        dtdst = dt.dst()
        if dtdst is None:
            raise ValueError("fromutc() requires a non-None dst() result")
        delta = dtoff - dtdst
        if delta:
            dt += delta
            dtdst = dt.dst()
            if dtdst is None:
                raise ValueError("fromutc(): dt.dst gave inconsistent "
                                 "results; cannot convert")

Line 997 
(https://github.com/python/cpython/blob/be8de928e5d2f1cd4d4c9c3e6545b170f2b02f1b/Lib/datetime.py#L997)
 assumes that an ambiguous datetime will return the STD side, not the DST side, 
and as a result, if you feed it a date in UTC that should resolve to the STD 
side, it will actually return 1 hour (or whatever the DST offset is) AFTER the 
ambiguous date that should be returned.

If 997 is changed to:

dtdst = dt.replace(fold=1).dst()

That will not affect fold-naive zones (which are instructed to ignore the 
`fold` parameter) and restore the original behavior. This will allow fold-aware 
timezones to be implemented as a wrapper around `fromutc()` rather than a 
complete re-implementation, e.g.:

    class FoldAwareTzInfo(datetime.tzinfo):
        def fromutc(self, dt):
            dt_wall = super(FoldAwareTzInfo, self).fromutc(dt)

            is_fold = self._get_fold_status(dt, dt_wall)

            return dt_wall.replace(fold=is_fold)

--
messages: 280007
nosy: belopolsky, p-ganssle, tim.peters
priority: normal
severity: normal
status: open
title: `tzinfo.fromutc()` fails when used for a fold-aware tzinfo implementation
type: behavior
versions: Python 3.6, Python 3.7




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Yury Selivanov added the comment:

> IIRC haypo added the check because people called `.call_later()` with 
> coroutine instead of callback very often.

We'll update asyncio docs in 3.6 with a tutorial to focus on coroutines (not on 
low-level event loop).  This should leave the loop API to advanced users.

> But checking in debug mode looks very reasonable to me if it is so expensive.

Exactly.

--




[issue28601] Ambiguous datetime comparisons should use == rather than 'is' for tzinfo comparison

2016-11-03 Thread Alexander Belopolsky

Alexander Belopolsky added the comment:

See Datetime-SIG thread 
.

--
assignee:  -> belopolsky
nosy: +tim.peters
stage:  -> needs patch




[issue28601] Ambiguous datetime comparisons should use == rather than 'is' for tzinfo comparison

2016-11-03 Thread Paul G

New submission from Paul G:

According to PEP495 
(https://www.python.org/dev/peps/pep-0495/#aware-datetime-equality-comparison) 
datetimes are considered not equal if they are an ambiguous time and have 
different zones. However, currently "interzone comparison" is defined / 
implemented as the zones being the same object rather than the zones comparing 
equal.

One issue with this is that it actually breaks backwards compatibility of the 
language, because there doesn't seem to be a way to provide a 
(backwards-compatible) class that implements folding behavior and has 
equivalent dates compare equal. An example using python-dateutil:

```
from datetime import datetime
from dateutil import tz

NYC = tz.gettz('America/New_York')
ET = tz.gettz('US/Eastern')

dt = datetime(2011, 11, 6, 5, 30, tzinfo=tz.tzutc()) # This is 2011-11-06 01:30 
EDT-4

dt_edt = dt.astimezone(ET)
dt_nyc = dt.astimezone(NYC)

print(dt_nyc == dt_edt)
```

In Python 3.5 that will return True, in Python 3.6 it will return False, even 
though 'US/Eastern' and 'America/New_York' are the same zone. In this case, I 
might be able to enforce that these time zones are singletons so that `is` 
always returns True (though this may have other negative consequences for 
utility), but even that solution would fall apart for things like `tzrange` and 
`tzstr`, where you can know that the `dt.utcoffset()`s are going to be 
identical for ALL values of `dt`, but you can't force the objects to be 
identical.

I would suggest that it be changed to use `__eq__` to determine whether two 
`tzinfo` objects are the same zone, as this will allow tzinfo providers to 
create `tzinfo` objects with a consistent behavior between versions in this 
edge case.

--
components: Library (Lib)
messages: 280003
nosy: belopolsky, p-ganssle
priority: normal
severity: normal
status: open
title: Ambiguous datetime comparisons should use == rather than 'is' for tzinfo 
comparison
type: behavior
versions: Python 3.6, Python 3.7




[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Andrew Svetlov

Andrew Svetlov added the comment:

The patch looks good.
IIRC haypo added the check because people called `.call_later()` with coroutine 
instead of callback very often.

But checking in debug mode looks very reasonable to me if it is so expensive.

--


[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

Changes by Yury Selivanov :


--
keywords: +patch
Added file: http://bugs.python.org/file45339/call_soon.patch


[issue28600] asyncio: Optimize loop.call_soon

2016-11-03 Thread Yury Selivanov

New submission from Yury Selivanov:

loop.call_soon is the central function of asyncio.  Everything goes through it.

Current design of asyncio.loop.call_soon makes the following checks:

1. [debug mode] check that the loop is not closed
2. [debug mode] check that we are calling call_soon from the proper thread
3. [always] check that callback is not a coroutine or a coroutine function

Check #3 is very expensive, because it uses an 'isinstance' check.  Moreover, 
the isinstance check is against an ABC, which makes the call even slower.

Attached patch makes check #3 optional and run only in debug mode.  This is a 
very safe patch to merge.

This makes asyncio another 17% (sic!) faster.  In fact it becomes as fast as 
curio for realistic streams benchmarks.
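
A rough sketch of the idea (not the attached patch itself; the attribute and
helper names follow the 3.5/3.6 BaseEventLoop layout but are assumptions here):

from asyncio import coroutines, events

def call_soon(self, callback, *args):
    # Sketch of BaseEventLoop.call_soon with all three checks gated on debug mode.
    if self._debug:
        self._check_closed()
        self._check_thread()
        if (coroutines.iscoroutine(callback)
                or coroutines.iscoroutinefunction(callback)):
            raise TypeError("coroutines cannot be used with call_soon()")
    handle = events.Handle(callback, args, self)
    self._ready.append(handle)
    return handle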

--
assignee: yselivanov
components: asyncio
messages: 280002
nosy: asvetlov, gvanrossum, haypo, inada.naoki, ned.deily, yselivanov
priority: normal
severity: normal
status: open
title: asyncio: Optimize loop.call_soon
versions: Python 3.6, Python 3.7


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread jladasky
The Levenshtein distance is a very precise definition of dissimilarity between 
sequences.  It specifies the minimum number of single-element edits you would 
need to change one sequence into another.  You are right that it is fairly 
expensive to compute.
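
For reference, a compact version of that edit-distance computation (illustrative
only; O(len(a) * len(b)) time and O(len(b)) space):

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]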

But you asked for an algorithm that would determine whether groups of strings 
are "sort of similar".  How imprecise can you be?  An analysis of the frequency 
of each individual character in a string might be good enough for you.
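
A rough sketch of that cheaper approach, with an averaged pairwise score acting
as the requested "coherence index" (function names and the exact scoring are
illustrative, not a standard recipe):

from collections import Counter
from itertools import combinations

def char_similarity(s1, s2):
    # Fraction of characters (with multiplicity) shared by the two strings.
    shared = sum((Counter(s1) & Counter(s2)).values())
    return shared / max(len(s1), len(s2), 1)

def coherence_index(strings):
    # Average pairwise similarity: closer to 1.0 for "sort of similar" groups.
    pairs = list(combinations(strings, 2))
    if not pairs:
        return 1.0
    return sum(char_similarity(a, b) for a, b in pairs) / len(pairs)

On the two example lists in the original post, list1 should score noticeably
higher than list2.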
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue28598] RHS not consulted in `str % subclass_of_str` case.

2016-11-03 Thread Martijn Pieters

Martijn Pieters added the comment:

Here's a proposed patch for tip; what versions would it be worth backporting 
this to?

(Note, there's no NEWS update in this patch).

--
keywords: +patch
Added file: http://bugs.python.org/file45338/issue28598.patch


ANN: PyWavelets v0.5.0 release

2016-11-03 Thread Gregory Lee
On behalf of the PyWavelets development team I am pleased to announce the
release of PyWavelets 0.5.0.


PyWavelets is a Python toolbox implementing both discrete and continuous
wavelet transforms (mathematical time-frequency transforms) with a wide
range of built-in wavelets.  C/Cython are used for the low-level routines,
enabling high performance.  Key Features of PyWavelets are:

  * 1D, 2D and nD Forward and Inverse Discrete Wavelet Transform (DWT and
IDWT)
  * 1D, 2D and nD Multilevel DWT and IDWT
  * 1D and 2D Forward and Inverse Stationary Wavelet Transform
  * 1D and 2D Wavelet Packet decomposition and reconstruction
  * 1D Continuous Wavelet Transform
  * When multiple valid implementations are available, we have chosen to
maintain consistency with MATLAB's Wavelet Toolbox.

PyWavelets 0.5.0 is the culmination of 1 year of work.  In addition to
several new features, substantial refactoring of the underlying C and
Cython code have been made.

Highlights of this release include:

- 1D continuous wavelet transforms
- new discrete wavelets added (additional Daubechies and Coiflet wavelets)
- new 'reflect' extension mode for discrete wavelet transforms
- faster performance for multilevel forward stationary wavelet transforms
(SWT)
- n-dimensional support added to forward SWT
- routines to convert multilevel DWT coefficients to and from a single array
- axis support for multilevel DWT
- substantial refactoring/reorganization of the underlying C and Cython code
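
For readers new to the library, a minimal discrete-transform round trip (not
taken from the release notes) looks roughly like this:

import pywt

# Single-level DWT of a short signal with a Daubechies-2 wavelet...
cA, cD = pywt.dwt([1, 2, 3, 4, 5, 6], 'db2')

# ...and the inverse transform to reconstruct the original signal.
reconstructed = pywt.idwt(cA, cD, 'db2')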

Full details in the release notes at:
http://pywavelets.readthedocs.io/en/latest/release.0.5.0.html

This release requires Python 2.6, 2.7 or 3.3-3.5 and Numpy 1.9.1 or
greater. Sources can be found on https://pypi.python.org/pypi/PyWavelets
and https://github.com/PyWavelets/pywt/releases.

As always, new contributors are welcome to join us at
https://github.com/PyWavelets/pywt
-- 
https://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Steve D'Aprano
On Fri, 4 Nov 2016 01:05 am, Chris Angelico wrote:

> On Thu, Nov 3, 2016 at 7:29 PM, Steven D'Aprano
>  wrote:
>>> lst = map (lambda x: x*5, lst)
>>> lst = filter (lambda x: x%3 == 1, lst)
>>> And perform especially bad in CPython compared to a comprehension.
>>
>> I doubt that.
>>
> 
> It's entirely possible. A list comp involves one function call (zero
> in Py2), but map/lambda involves a function call per element in the
> list. Function calls have overhead.

I don't know why you think that list comps don't involve function calls.

Here's some timing results using 3.5 on my computer. For simplicity, so
folks can replicate the test themselves, here's the timing code:


from timeit import Timer
setup = """data = list(range(1))
def func(x):  # simulate some calculation
return {x+1: x**2}
"""
t1 = Timer('[func(a) for a in data]', setup=setup)
t2 = Timer('list(map(func, data))', setup=setup)



And here's the timing results on my machine:

py> min(t1.repeat(number=1000, repeat=7))  # list comp
18.2571472954005
py> min(t2.repeat(number=1000, repeat=7))  # map
18.157311914488673

So there's hardly any difference, but map() is about 0.5% faster in this
case.

Now, it is true that *if* you can write the list comprehension as a direct
calculation in an expression instead of a function call:

[a+1 for a in data]

*and* compare it to map using a function call:

map(lambda a: a+1, data)


then the overhead of the function call in map() may be significant. But the
extra cost is effectively just a multiplier. It isn't going to change
the "Big Oh" behaviour. Here's an example:

t3 = Timer('[a+1 for a in data]', setup=setup)
t4 = Timer('list(map(lambda a: a+1, data))', setup=setup)
extra = """from functools import partial
from operator import add
"""
t5 = Timer('list(map(partial(add, 1), data))', setup=setup+extra)

And the timing results:

py> min(t3.repeat(number=1000, repeat=7))  # list comp with expression
2.6977453008294106
py> min(t4.repeat(number=1000, repeat=7))  # map with function
4.280411267653108
py> min(t5.repeat(number=1000, repeat=7))  # map with partial
3.446241058409214

So map() here is less than a factor of two slower. I wouldn't call
that "especially bad" -- often times, a factor of two is not important.
What really hurts is O(N**2) performance, or worse.

So at worst, map() is maybe half as fast as a list comprehension, and at
best, it's perhaps a smidgen faster. I would expect around the same
performance, differing only by a small multiplicative factor: I wouldn't
expect one to be thousands or even tens of times slower than the other.



-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread justin walters
On Thu, Nov 3, 2016 at 9:18 AM, Fillmore 
wrote:

>
> Hi there, apologies for the generic question. Here is my problem: let's say
> that I have a list of lists of strings.
>
> list1:#strings are sort of similar to one another
>
>   my_nice_string_blabla
>   my_nice_string_blqbli
>   my_nice_string_bl0bla
>   my_nice_string_aru
>
>
> list2:#strings are mostly different from one another
>
>   my_nice_string_blabla
>   some_other_string
>   yet_another_unrelated string
>   wow_totally_different_from_others_too
>
>
> I would like an algorithm that can look at the strings and determine that
> strings in list1 are sort of similar to one another, while the strings in
> list2 are all different.
> Ideally, it would be nice to have some kind of 'coherence index' that I
> can exploit to separate lists given a certain threshold.
>
> I was about to concoct something using Levenshtein distance, but then I
> figured that it would be expensive to compute and I may be reinventing the
> wheel.
>
> Thanks in advance to python masters that may have suggestions...
>
>
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>


When you say similar, do you mean similar in the amount of duplicate
words/letters? Or were you more interested
in similar sentence structure?
-- 
https://mail.python.org/mailman/listinfo/python-list


need some kind of "coherence index" for a group of strings

2016-11-03 Thread Fillmore


Hi there, apologies for the generic question. Here is my problem: let's 
say that I have a list of lists of strings.


list1:#strings are sort of similar to one another

  my_nice_string_blabla
  my_nice_string_blqbli
  my_nice_string_bl0bla
  my_nice_string_aru


list2:#strings are mostly different from one another

  my_nice_string_blabla
  some_other_string
  yet_another_unrelated string
  wow_totally_different_from_others_too


I would like an algorithm that can look at the strings and determine 
that strings in list1 are sort of similar to one another, while the 
strings in list2 are all different.
Ideally, it would be nice to have some kind of 'coherence index' that I 
can exploit to separate lists given a certain threshold.


I was about to concoct something using Levenshtein distance, but then I 
figured that it would be expensive to compute and I may be reinventing 
the wheel.


Thanks in advance to python masters that may have suggestions...



--
https://mail.python.org/mailman/listinfo/python-list


[issue25652] collections.UserString.__rmod__() raises NameError

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Issue28598 may be related.

--


[issue28599] AboutDialog set_name is ignored

2016-11-03 Thread Zachary Ware

Changes by Zachary Ware :


--
resolution: not a bug -> third party


[issue25251] Unknown MS Compiler version 1900

2016-11-03 Thread Steve Dower

Steve Dower added the comment:

That sounds like a great feature for setuptools.

Core distutils is highly focused on what is needed for the core product, and we 
are very much trying to avoid bloating it, whereas setuptools is free to add 
extensions wherever necessary and make them available on earlier versions of 
Python.

The setuptools issue tracker is at https://github.com/pypa/setuptools

--


[issue28599] AboutDialog set_name is ignored

2016-11-03 Thread Zachary Ware

Zachary Ware added the comment:

That appears to be an issue with the third-party PyGObject project, please open 
an issue on the PyGObject issue tracker: 
https://bugzilla.gnome.org/page.cgi?id=browse.html=pygobject

--
nosy: +zach.ware
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed


[issue28599] AboutDialog set_name is ignored

2016-11-03 Thread rebelxt

New submission from rebelxt:

Python 3.5.2 (default, Sep 10 2016, 08:21:44) [GCC 5.4.0 20160609] on linux 
Mint 18 and Gtk 3.

I run a python3 script that includes these statements:

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GObject
aboutdialog = Gtk.AboutDialog()
aboutdialog.set_name('arbitrary')
aboutdialog.run()
aboutdialog.destroy()

The script name is displayed in the About dialog, while the 'arbitrary' name is 
ignored. This is inconsistent with python2/gtk2. If the set_name 
method/function exists, it should do something.

I have no way of testing this on any later version of python.

--
messages: 279998
nosy: rebelxt
priority: normal
severity: normal
status: open
title: AboutDialog set_name is ignored
type: behavior
versions: Python 3.5


[issue28598] RHS not consulted in `str % subclass_of_str` case.

2016-11-03 Thread Xiang Zhang

Changes by Xiang Zhang :


--
nosy: +xiang.zhang


Re: Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Constantin Sorin
I use Linux and python 2.7.12
-- 
https://mail.python.org/mailman/listinfo/python-list


Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Constantin Sorin
Hello, I recently started to make a dice game in Python. Everything was nice and 
beautiful, until now. My problem is that when I play and I win, lose, or it's a 
tie, every following round keeps giving the same result.
Example:
Enter name >> Sorin
Money = 2
Bet >> 2
You won!
Money 4
Bet >> 2
You won!
and it loops like this :/

Here is the code:

import time
import os
import random
os = os.system
os("clear")

print "What is your name?"
name = raw_input(">>")

def lost():
    print "Yoy lost the game!Wanna Play again?Y/N"
    ch = raw_input(">>")
    if ch == "Y":
        game()
    elif ch == "N":
        exit()


def game():
    os("clear")
    a = random.randint(1,6)
    b = random.randint(1,6)
    c = random.randint(1,6)
    d = random.randint(1,6)
    e = a + b
    f = c + d
    money = 2
    while money > 0:
        print "Welcome to FireDice %s!" %name
        print "Your money: %s$" %money
        print "How much do you bet?"
        bet = input(">>")
        if e > f:
            print "you won!"
            money = money + bet
        elif e < f:
            print "you lost"
            money = money - bet
        else:
            print "?"

        print money
    lost()

game()
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Terry Reedy

On 11/3/2016 4:29 AM, Steven D'Aprano wrote:


Nonsense. It is perfectly readable because it is explicit about what is being
done, unlike some magic method that you have to read the docs to understand
what it does.


Agreed.


A list comprehension or for-loop is more general and can be combined so you can
do both:

alist[:] = [func(x) for x in alist if condition(x)]


The qualifier 'list' is not needed.  The right hand side of slice 
assignment just has to be an iterable.  So a second iteration to build 
an intermediate list is not required.


alist[:] = (func(x) for x in alist if condition(x))

The parentheses around the generator expression are required here. 
(Steven, I know you know that, but not everyone else will.)


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-03 Thread Chris Angelico
On Thu, Nov 3, 2016 at 7:50 PM,   wrote:
> Little bit background related to this topic. It all starts from this article:
> http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf
>
> The guide is written in c++ in mind, yet the concepts stands for any 
> programming language really. Read it through and think about it. If you come 
> back to this topic and say: "yeah, but it's c++", then you haven't understood 
> it.

I don't have a problem with something written for C++ (though I do
have a problem with a thirty-eight page document on how to make your
code testable - TLDR), but do bear in mind that a *lot* of C++ code
can be simplified when it's brought to Python. One Python feature that
C++ doesn't have, mentioned already in this thread, is the way you can
have a ton of parameters with defaults, and you then specify only
those you want, as keyword args:

def __init__(self, important_arg1, important_arg2,
             queue=None, cache_size=50, whatever=...):
    pass

MyClass("foo", 123, cache_size=75)

I can ignore all the arguments that don't matter, and provide only the
one or two that I actually need to change. Cognitive load is
drastically reduced, compared to the "alternative constructor"
pattern, where I have to remember not to construct anything in the
normal way.

You can't do that in C++, so it's not going to be mentioned in that
document, but it's an excellent pattern to follow.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-03 Thread Ethan Furman

On 11/03/2016 01:50 AM, teppo.p...@gmail.com wrote:


The guide is written in c++ in mind, yet the concepts stands for any
 programming language really. Read it through and think about it. If
 you come back to this topic and say: "yeah, but it's c++", then you
 haven't understood it.


The ideas (loose coupling, easy testing) are certainly applicable in Python -- 
the specific methods talked about in that paper, however, are not.

To go back to the original example:

def __init__(self, ...):
    self.queue = Queue()

we have several different (easy!) ways to do dependency injection:

* inject a mock Queue into the module
* make queue a default parameter

If it's just testing, go with the first option:

import the_module_to_test
the_module_to_test.Queue = MockQueue

and away you go.

If the class in question has legitimate, non-testing, reasons to specify 
different Queues, then make it a default argument instead:

def __init__(self, ..., queue=None):
    if queue is None:
        queue = Queue()
    self.queue = queue

or, if it's just for testing but you don't want to hassle injecting a MockQueue 
into the module itself:

def __init__(self, ..., _queue=None):
    if _queue is None:
        _queue = Queue()
    self.queue = _queue

or, if the queue is only initialized (and not used) during __init__ (so you can 
replace it after construction with no worries):

class Example:
    def __init__(self, ...):
        self.queue = Queue()

ex = Example()
ex.queue = MockQueue()
# proceed with test

The thing each of those possibilities have in common is that the normal 
use-case of just creating the thing and moving on is the very simple:

my_obj = Example(...)

To sum up:  your concerns are valid, but using c++ (and many other language) 
idioms in Python does not make good Python code.

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


[issue28123] _PyDict_GetItem_KnownHash ignores DKIX_ERROR return

2016-11-03 Thread INADA Naoki

INADA Naoki added the comment:

LGTM.

--


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Chris Angelico
On Thu, Nov 3, 2016 at 7:29 PM, Steven D'Aprano
 wrote:
>> lst = map (lambda x: x*5, lst)
>> lst = filter (lambda x: x%3 == 1, lst)
>> And perform especially bad in CPython compared to a comprehension.
>
> I doubt that.
>

It's entirely possible. A list comp involves one function call (zero
in Py2), but map/lambda involves a function call per element in the
list. Function calls have overhead.

Arthur, I would suggest looking at what numpy and pandas do. When
you're working with ridiculously large data sets, they absolutely
shine; and if you're not working with that much data, the performance
of map or a list comp is unlikely to be significant. If the numpy
folks have a problem that can't be solved without new syntax, then a
proposal can be brought to the core (like matmul, which was approved
and implemented).

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue28494] is_zipfile false positives

2016-11-03 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
assignee:  -> serhiy.storchaka


[issue28597] f-string behavior is conflicting with its documentation

2016-11-03 Thread Eric V. Smith

Eric V. Smith added the comment:

This is a duplicate of issue 28590.

--
resolution:  -> duplicate
stage:  -> resolved
status: open -> closed
superseder:  -> fstring's '{' from escape sequences does not start an expression


[issue28597] f-string behavior is conflicting with its documentation

2016-11-03 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
assignee:  -> docs@python
components: +Documentation -Interpreter Core
nosy: +docs@python, eric.smith
type:  -> behavior
versions: +Python 3.7


[issue28387] double free in io.TextIOWrapper

2016-11-03 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed


[issue28387] double free in io.TextIOWrapper

2016-11-03 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 91f024fc9b3a by Serhiy Storchaka in branch '2.7':
Issue #28387: Fixed possible crash in _io.TextIOWrapper deallocator when
https://hg.python.org/cpython/rev/91f024fc9b3a

New changeset 89f7386104e2 by Serhiy Storchaka in branch '3.5':
Issue #28387: Fixed possible crash in _io.TextIOWrapper deallocator when
https://hg.python.org/cpython/rev/89f7386104e2

New changeset c4319c0d0131 by Serhiy Storchaka in branch '3.6':
Issue #28387: Fixed possible crash in _io.TextIOWrapper deallocator when
https://hg.python.org/cpython/rev/c4319c0d0131

New changeset 36af3566b67a by Serhiy Storchaka in branch 'default':
Issue #28387: Fixed possible crash in _io.TextIOWrapper deallocator when
https://hg.python.org/cpython/rev/36af3566b67a

--
nosy: +python-dev


[issue28598] RHS not consulted in `str % subclass_of_str` case.

2016-11-03 Thread Martijn Pieters

New submission from Martijn Pieters:

The `BINARY_MODULO` operator hardcodes a test for `PyUnicode`:

TARGET(BINARY_MODULO) {
    PyObject *divisor = POP();
    PyObject *dividend = TOP();
    PyObject *res = PyUnicode_CheckExact(dividend) ?
        PyUnicode_Format(dividend, divisor) :
        PyNumber_Remainder(dividend, divisor);

This means that a RHS subclass of str can't override the operator:

>>> class Foo(str):
...     def __rmod__(self, other):
...         return self % other
...
>>> "Bar: %s" % Foo("Foo: %s")
'Bar: Foo: %s'

The expected output there is "Foo: Bar: %s".

This works correctly for `bytes`:

>>> class FooBytes(bytes):
...     def __rmod__(self, other):
...         return self % other
...
>>> b"Bar: %s" % FooBytes(b"Foo: %s")
b'Foo: Bar: %s'

and for all other types where the RHS is a subclass.

Perhaps there should be a test to see if `divisor` is a subclass, and in that 
case take the slow path?

--
components: Interpreter Core
messages: 279993
nosy: mjpieters
priority: normal
severity: normal
status: open
title: RHS not consulted in `str % subclass_of_str` case.
type: behavior
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6


[issue28597] f-string behavior is conflicting with its documentation

2016-11-03 Thread Fabio Zadrozny

New submission from Fabio Zadrozny:

The file:

/Doc/reference/lexical_analysis.rst says that things as:

f"abc {a[\"x\"]} def"  # workaround: escape the inner quotes
f"newline: {ord('\\n')}"  # workaround: double escaping
fr"newline: {ord('\n')}"  # workaround: raw outer string

are accepted in f-strings, yet, all those examples raise a:

SyntaxError: f-string expression part cannot include a backslash

The current Python version where this was tested is: 3.6.0b4

So, either those cases should be supported or lexical_analysis.rst should be 
updated to say that '\' is not valid in the expression part of f-strings.
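
(For illustration, a backslash-free spelling that 3.6.0b4 does accept is to
hoist the escape out of the expression part; `newline_code` is just an
arbitrary name:)

newline_code = ord('\n')          # the escape lives outside the f-string
print(f"newline: {newline_code}")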

--
components: Interpreter Core
messages: 279992
nosy: fabioz
priority: normal
severity: normal
status: open
title: f-string behavior is conflicting with its documentation
versions: Python 3.6


Re: Problems with read_eager and Telnet

2016-11-03 Thread kenansharon
On Monday, 28 February 2011 10:54:56 UTC-5, Robi  wrote:
> Hi everybody,
>  I'm totally new to Python but well motivated :-)
> 
> I'm fooling around with Python in order to interface with FlightGear
> using a telnet connection.
> 
> I can do what I had in mind (send some commands and read output from
> Flightgear using the telnetlib) with a read_until() object to catch
> every output line I need, but it proved to be very slow (it takes 1/10
> of a sec for every read_until().
> 
> I tried using the read_eager() object and it's w faster (it
> does the job in 1/100 of a sec, maybe more, I didn't tested) but it
> gives me problems, it gets back strange strings, repeated ones,
> partially broken ones, well ... I don't know what's going on with it.
> 
> You see, I don't know telnet (the protocol) very good, I'm very new to
> Python and Python's docs are not very specific about that read_eager(9
> stuff.
> 
> Could someone point me to some more documentation about that? or at
> least help me in getting a correct idea of what's going on with
> read_eager()?
> 
> I'm going on investigating but a help from here would be very
> appreciated :-)
> 
> Thanks in advance,
>Roberto

Can someone explain to me how read_eager and read_very_eager decide when to 
stop reading data from a socket?

If the command sent causes the server to reply with multiple lines, how do the 
Python functions decide when to stop accepting new data from the socket?
-- 
https://mail.python.org/mailman/listinfo/python-list


Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread arthurhavlicek
Hi everybody,

I have an enhancement proposal for Python and, as suggested by PEP 1, am 
exposing a stub to the mailing list before possibly starting writing a PEP. 
This is my first message to python mailing list. I hope you will find this 
content of interest.

Python features a powerful and fast way to create lists through comprehensions. 
Because of their ease of use and efficiency through native implementation, they 
are an advantageous alternative to map, filter, and more. However, when used as 
a replacement for an in-place version of these functions, they are sub-optimal, 
and Python offers no alternative.

This lack of implementation has already been pointed out:
http://stackoverflow.com/questions/4148375/is-there-an-in-place-equivalent-to-map-in-python
Notice the concerns of the OP in his comments to the replies in this one:
http://stackoverflow.com/questions/3000461/python-map-in-place
In this answer, developers are wondering about performance issues 
regarding both loop iteration and comprehension:
http://stackoverflow.com/a/1540069/3323394

I would suggest implementing a language-level, in-place version of them. There 
are several motivations for this:

1 - Code readability and reduced redundancy

lst = [ item for item in lst if predicate(item) ]
lst = [ f(item) for item in lst ]

Both these expressions feature redundancy: lst occurs twice and item at least 
twice. Additionally, readability is hurt, because one has to dive through 
the semantics of the comprehension to truly understand whether I am filtering 
the list or remapping its values.

Map and filter, although they are more explicit, also feature redundancy. They 
look OK when passing an existing function:

lst = map (f, lst)
lst = filter (predicate, lst)

But they are less elegant when using an expression, as one has to wrap it in 
a lambda:

lst = map (lambda x: x*5, lst)
lst = filter (lambda x: x%3 == 1, lst)

And perform especially bad in CPython compared to a comprehension.

2 - Efficiency

Language-level support for making these operations in-place could improve 
their efficiency through reduced use of memory.
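
For comparison, the closest existing spelling of an in-place update is slice
assignment, which reuses the same list object but still builds the new elements
before storing them (a sketch of the status quo, not of the proposed syntax):

lst = [1, 2, 3, 4, 5]
lst[:] = [x * 5 for x in lst]              # in-place map: same list object
lst[:] = [x for x in lst if x % 3 == 1]    # in-place filter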


I would propose this syntax. (TODO: find appropriate keywords I guess):

lst.map x: x*5
lst.filter x: x%3 == 1

They can work for dictionaries too.

dct.map k,v: v*5
dct.filter k,v: k+v == 10

The reasoning for the need of a language-level approach is the need for an 
efficient implementation that would support giving an arbitrary expression and 
not only a function. Expressions are shorter and, I guess, would be more 
efficient.


I would gladly appreciate your feedback on this, regarding:
1 - Whether a similar proposition has been made
2 - If you find this of any interest at all
3 - If you have a suggestion for improving the proposal

Thanks for reading. Have a nice day
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue28123] _PyDict_GetItem_KnownHash ignores DKIX_ERROR return

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

LGTM. I'll commit the patch soon if there are no comments from other core 
developers.

--
stage: patch review -> commit review


[issue28498] tk busy command

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Updated patch skips tests with the cursor option on OSX/Aqua.

--
Added file: http://bugs.python.org/file45337/tk_busy_6.diff


[issue23262] webbrowser module broken with Firefox 36+

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

I don't understand what you disagree with. I can imagine an old Python 2.7 with 
an old browser, an old Python 2.7 with a new browser, a new Python 2.7 with a 
new browser, and even a new Python 2.7 with an old browser on the same computer 
(but the latter is very unlikely).

An old Python 2.7 doesn't work with a new browser and we can't do anything 
about this. We can make the new Python 2.7.12 work with a new browser. But I'm 
not sure that we can break compatibility with an old browser that could have 
been installed when Python 2.7 was first installed on the same computer.

--


[issue27779] Sync-up docstrings in C version of the the decimal module

2016-11-03 Thread Stefan Krah

Changes by Stefan Krah :


--
keywords:  -easy, patch


[issue27779] Sync-up docstrings in C version of the the decimal module

2016-11-03 Thread Stefan Krah

Changes by Stefan Krah :


--
keywords: +patch


[issue27779] Sync-up docstrings in C version of the the decimal module

2016-11-03 Thread Stefan Krah

Stefan Krah added the comment:

Okay great.  I think it's probably best to produce an initial patch with the 
verbatim Python docstrings (you can of course address the comments that I 
already made), then we mark the passages that are clearly not valid for 
_decimal or outdated for _pydecimal, then we go for extra clarity.

--


[issue28123] _PyDict_GetItem_KnownHash ignores DKIX_ERROR return

2016-11-03 Thread Xiang Zhang

Xiang Zhang added the comment:

> If _PyDict_GetItem_KnownHash() returns an error, it is very likely that the 
> following insertdict() with the same key will return an error.

Makes sense.

--
assignee: haypo -> serhiy.storchaka
Added file: http://bugs.python.org/file45336/issue28123_v6.patch


[issue28385] Bytes objects should reject all formatting codes with an error message

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

The proposition of making default __format__ equivalent to str() will be 
addressed in separate issue. Python-Dev thread: 
https://mail.python.org/pipermail/python-dev/2016-October/146765.html.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed


[issue28123] _PyDict_GetItem_KnownHash ignores DKIX_ERROR return

2016-11-03 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

If _PyDict_GetItem_KnownHash() returns an error, it is very likely that the 
following insertdict() with the same key will return an error. I would prefer to 
return an error right after _PyDict_GetItem_KnownHash() fails. This would 
look more straightforward.

--


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Ned Batchelder
On Thursday, November 3, 2016 at 4:30:00 AM UTC-4, Steven D'Aprano wrote:
> On Thursday 03 November 2016 17:56, arthurhavli...@gmail.com wrote:
> > I would propose this syntax. (TODO: find appropriate keywords I guess):
> > 
> > lst.map x: x*5
> > lst.filter x: x%3 == 1
> 
> I think the chances of Guido accepting new syntax for something as trivial as 
> this with three existing solutions is absolutely zero.
> 
> I think the chances of Guido accepting new list/dict methods for in place map 
> and/or filter is a tiny bit higher than zero.


To my eyes, this proposed syntax is completely foreign. "lst.map" looks
like attribute or method access on lst, but there's no operation on the
result, or function call.  They implicitly assign back to lst, with no
recognizable syntax to indicate that they do (= or "as" is the usual
marker).

While mapping and filtering are common operations, I'm not sure mapping
and filtering and then reassigning back to the original sequence is
especially common.  It's certainly not common enough to deserve completely
new syntax when the existing methods already exist.

Part of the problem here is that you value explicitness, but also are
trying to reduce redundancy.  When you say that lst occurs twice in
your examples, what I see is that it occurs twice because it's being
used for two different things: it is the source of the data, and it is
also the target for the result. I think it is good to have it appear
twice in this case, especially since there's no reason to think it will
usually be used for both purposes.  How do I do exactly the same data
manipulation, but then assign it to lst2 because I belatedly realized
I wanted both the before and after list?  Under your proposed syntax,
I would have to completely rewrite the line to use a different filtering
mechanism because now I need to unbundle the filtering and the assignment.
That seems unfortunate.

You've done the right thing by bringing the proposal here.  I think it
is useful to see how people approach the language, and where they want
changes.  Discussing the pros and cons is a good way to get a deeper
appreciation both for the language and the rationale for its design.
But I don't think this proposal has a chance of moving forward.

--Ned.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue26935] android: test_os fails

2016-11-03 Thread Xavier de Gaye

Xavier de Gaye added the comment:

This new patch takes into account Martin's comment in msg265099 and fixes a call 
to getpwall(), as Android does not have getpwall().

--
assignee:  -> xdegaye
stage: patch review -> commit review
versions: +Python 3.7
Added file: http://bugs.python.org/file45335/test_urandom_fd_reopened_2.patch


data interpolation

2016-11-03 Thread Heli
Hi, 

I have a question about data interpolation using Python. I have a big ASCII 
file containing data in the following format, with around 200M points. 

id, xcoordinate, ycoordinate, zcoordinate

then I have a second file containing data in the following format, ( 2M values) 

id, xcoordinate, ycoordinate, zcoordinate, value1, value2, value3,..., valueN

I would need to get values for x,y,z coordinates of file 1 from values of 
file2.  

I don't know whether the data in file1 and file2 comes from a structured or 
unstructured grid source. I was wondering which interpolation module, from 
either scipy or scikit-learn, you would recommend I use?

I would also appreciate if you could recommend me some sample 
example/reference. 

Thanks in Advance for your help, 
-- 
https://mail.python.org/mailman/listinfo/python-list


PyDev 5.3.1 Released

2016-11-03 Thread Fabio Zadrozny
Release Highlights:
---

* **Important** PyDev now requires Java 8 and Eclipse 4.6 (Neon) onwards.

* PyDev 5.2.0 is the last release supporting Eclipse 4.5 (Mars).

* **Code Completion**

* Substring completions are **on by default** (may be turned off in the
code-completion preferences).
* Fixed issue with code-completion using from..import..as aliases.

* **Others**

* Auto-fix imports with Ctrl+Shift+O properly sorts items based on the
same sorting improvements for code-completion.
* When fixing unresolved import (with Ctrl+1) it properly resolves
dependent projects (bugfix for regression in 5.3.0).
* **async** and **await** keywords are properly highlighted.
* **async** blocks properly auto-indented.
* In PEP 448 list unpack variable was not being marked as a "Load"
variable (which made the code analysis yield false positives).

What is PyDev?
---

PyDev is an open-source Python IDE on top of Eclipse for Python, Jython and
IronPython development.

It comes with goodies such as code completion, syntax highlighting, syntax
analysis, code analysis, refactor, debug, interactive console, etc.

Details on PyDev: http://pydev.org
Details on its development: http://pydev.blogspot.com


What is LiClipse?
---

LiClipse is a PyDev standalone with goodies such as support for Multiple
cursors, theming, TextMate bundles and a number of other languages such as
Django Templates, Jinja2, Kivy Language, Mako Templates, Html, Javascript,
etc.

It's also a commercial counterpart which helps supporting the development
of PyDev.

Details on LiClipse: http://www.liclipse.com/



Cheers,

--
Fabio Zadrozny
--
Software Developer

LiClipse
http://www.liclipse.com

PyDev - Python Development Environment for Eclipse
http://pydev.org
http://pydev.blogspot.com

PyVmMonitor - Python Profiler
http://www.pyvmmonitor.com/
-- 
https://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


  1   2   >