Re: advanced SimpleHTTPServer?

2016-11-03 Thread justin walters
On Thu, Nov 3, 2016 at 10:27 PM, Rustom Mody  wrote:

> On Thursday, November 3, 2016 at 1:23:05 AM UTC+5:30, Eric S. Johansson
> wrote:
> > On 11/2/2016 2:40 PM, Chris Warrick wrote:
> > > Because, as the old saying goes, any sufficiently complicated Bottle
> > > or Flask app contains an ad hoc, informally-specified, bug-ridden,
> > > slow implementation of half of Django. (In the form of various plugins
> > > to do databases, accounts, admin panels etc.)
> >
> > That's not a special attribute of bottle, flask or Django. Ad hoc,
> > informally specified, bug ridden slow implementations abound.  We focus
> > too much on scaling up and not enough on scaling down. We (designers)
> > also have not properly addressed configuration complexity issues.
>
> This scaling up vs down idea is an important one.
> Related to Buchberger’s blackbox whitebox principle
>
> >
> > If I'm going to do something once, and it costs me more than a couple of
> > hours to figure out, it's too expensive in general, but definitely if
> > I forget what I learned. That's why bottle/flask systems meet a need.
> > They're not too expensive to forget what you learned.
> >
> > Django makes the cost of forgetting extremely expensive. I think of
> > using Django as a career rather than a toolbox.
>
> That's snide... and probably accurate ;-)
> Among my more unpleasant programming experiences was Ruby-on-Rails.
> My impression is that Ruby is fine; Rails, not so much.
> Django I don't know, but my impression is it's a shade better than Rails.
>
> It would be nice to discover the bottle inside the flask inside django
>
> Put differently:
> Frameworks are full-featured and horrible to use
> APIs are elegant but ultimately underpowered
> DSLs (e.g. requests) are an intermediate sweet spot; we need more DSL families
> --
> https://mail.python.org/mailman/listinfo/python-list
>

I work with Django every day. Knowing Django is like knowing another
ecosystem. It's totally
worth learning though. The speed of development is absolutely unbeatable. I
can build a fully featured
and good-looking blog in about 10 minutes. It's nuts.

The best part about it though, is that it's really just simple Python under
the hood for the most part. You
can override or modify any part of it to make it work in exactly the way
you want it to. I'm a huge Django fanboy,
so excuse the gushing. The docs are also some of the most comprehensive
I've ever seen.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: advanced SimpleHTTPServer?

2016-11-03 Thread Rustom Mody
On Thursday, November 3, 2016 at 1:23:05 AM UTC+5:30, Eric S. Johansson wrote:
> On 11/2/2016 2:40 PM, Chris Warrick wrote:
> > Because, as the old saying goes, any sufficiently complicated Bottle
> > or Flask app contains an ad hoc, informally-specified, bug-ridden,
> > slow implementation of half of Django. (In the form of various plugins
> > to do databases, accounts, admin panels etc.)
> 
> That's not a special attribute of bottle, flask or Django. Ad hoc,
> informally specified, bug ridden slow implementations abound.  We focus
> too much on scaling up and not enough on scaling down. We (designers) 
> also have not properly addressed configuration complexity issues.

This scaling up vs down idea is an important one.
Related to Buchberger’s blackbox whitebox principle
 
> 
> If I'm going to do something once, and it costs me more than a couple of
> hours to figure out, it's too expensive in general, but definitely if
> I forget what I learned. That's why bottle/flask systems meet a need.
> They're not too expensive to forget what you learned.
> 
> Django makes the cost of forgetting extremely expensive. I think of
> using Django as a career rather than a toolbox.

That's snide... and probably accurate ;-)
Among my more unpleasant programming experiences was Ruby-on-Rails.
My impression is that Ruby is fine; Rails, not so much.
Django I don't know, but my impression is it's a shade better than Rails.

It would be nice to discover the bottle inside the flask inside django

Put differently:
Frameworks are full-featured and horrible to use
APIs are elegant but ultimately underpowered
DSLs (e.g. requests) are an intermediate sweet spot; we need more DSL families
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread Mario R. Osorio
I don't know much about these topics, but wouldn't Soundex do the job?
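For what it's worth, Soundex is simple enough to sketch in a few lines. This is a simplified version (standard letter codes, but it skips full Soundex's special rule for 'h' and 'w' sitting between letters with the same code):

```python
def soundex(name):
    """Simplified American Soundex: first letter + up to three digit codes.
    Non-coded letters (vowels, h, w, y) break runs of identical codes."""
    table = {}
    for letters, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")):
        for ch in letters:
            table[ch] = digit
    name = name.lower()
    code = name[0].upper()
    prev = table.get(name[0], "")
    for ch in name[1:]:
        digit = table.get(ch, "")
        if digit and digit != prev:
            code += digit
        prev = digit            # a non-coded letter resets the run
    return (code + "000")[:4]   # pad with zeros to four characters

assert soundex("Robert") == "R163"
assert soundex("Rupert") == "R163"   # similar-sounding names collide
```

Note, though, that Soundex targets phonetic similarity of names; for the OP's arbitrary identifiers it may group strings that merely start with the same letter.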

 On Thursday, November 3, 2016 at 12:18:19 PM UTC-4, Fillmore wrote:
> Hi there, apologies for the generic question. Here is my problem: let's
> say that I have a list of lists of strings.
> 
> list1:#strings are sort of similar to one another
> 
>my_nice_string_blabla
>my_nice_string_blqbli
>my_nice_string_bl0bla
>my_nice_string_aru
> 
> 
> list2:#strings are mostly different from one another
> 
>my_nice_string_blabla
>some_other_string
>yet_another_unrelated string
>wow_totally_different_from_others_too
> 
> 
> I would like an algorithm that can look at the strings and determine 
> that strings in list1 are sort of similar to one another, while the 
> strings in list2 are all different.
> Ideally, it would be nice to have some kind of 'coherence index' that I 
> can exploit to separate lists given a certain threshold.
> 
> I was about to concoct something using levensthein distance, but then I 
> figured that it would be expensive to compute and I may be reinventing 
> the wheel.
> 
> Thanks in advance to python masters that may have suggestions...

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Terry Reedy

On 11/3/2016 2:56 AM, arthurhavli...@gmail.com wrote:


lst = [ item for item in lst if predicate(item) ]
lst = [ f(item) for item in lst ]

Both these expressions feature redundancy: lst occurs twice and item at least
twice. Additionally, readability is hurt, because one has to dive through
the semantics of the comprehension to truly understand that I am filtering the
list or remapping its values.

...

Language support for performing these operations in-place could improve their
efficiency through reduced use of memory.


We already have that: slice assignment with an iterator.

lst[:] = (item for item in lst if predicate(item))
lst[:] = map(f, lst)  # iterator in 3.x.

To save memory, stop using unneeded temporary lists and use iterators 
instead.  If slice assignment is implemented as I hope, it will reuse the 
existing memory.  (But I have not read the code.)  It should overwrite 
existing slots until either a) the iterator is exhausted or b) existing 
memory is used up.  When lst is both source and destination, only case 
a) can happen.  When it does, the list can be finalized with its new 
contents.
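A tiny sketch of the in-place behaviour (the name `alias` is only there for demonstration): slice assignment replaces the list's contents without rebinding the name, so every existing reference sees the new values.

```python
# Filter-and-map via slice assignment: same list object, new contents.
lst = [1, 2, 3, 4, 5, 6]
alias = lst          # second reference to the same list object
lst[:] = (x * 10 for x in lst if x % 2 == 0)

assert lst == [20, 40, 60]
assert alias is lst  # the object identity is unchanged
```

This is exactly what a plain `lst = [...]` rebinding does not give you: with rebinding, `alias` would still point at the old list.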


As for timings.

from timeit import Timer
setup = """data = list(range(1))
def func(x):
return x
"""
t1a = Timer('data[:] = [func(a) for a in data]', setup=setup)
t1b = Timer('data[:] = (func(a) for a in data)', setup=setup)
t2a = Timer('data[:] = list(map(func, data))', setup=setup)
t2b = Timer('data[:] = map(func, data)', setup=setup)

print('t1a', min(t1a.repeat(number=500, repeat=7)))
print('t1b', min(t1b.repeat(number=500, repeat=7)))
print('t2a', min(t2a.repeat(number=500, repeat=7)))
print('t2b', min(t2b.repeat(number=500, repeat=7)))
#
t1a 0.5675313005414555
t1b 0.7034254675598604
t2a 0.518128598520
t2b 0.5196112759726024

If f does more work, the % difference among these will decrease.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread jladasky
On Thursday, November 3, 2016 at 3:47:41 PM UTC-7, jlad...@itu.edu wrote:
> On Thursday, November 3, 2016 at 1:09:48 PM UTC-7, Neil D. Cerutti wrote:
> > you may also be 
> > able to use some items "off the shelf" from Python's difflib.
> 
> I wasn't aware of that module, thanks for the tip!
> 
> difflib.SequenceMatcher.ratio() returns a numerical value which represents 
> the "similarity" between two strings.  I don't see a precise definition of 
> "similar", but it may do what the OP needs.

Following up to myself... I just experimented with 
difflib.SequenceMatcher.ratio() and discovered something.  The algorithm is not 
"commutative."  That is, it doesn't ALWAYS produce the same ratio when the two 
strings are swapped.

Here's an excerpt from my interpreter session.

==

In [1]: from difflib import SequenceMatcher

In [2]: import numpy as np

In [3]: sim = np.zeros((4,4))


== snip ==


In [10]: strings
Out[10]: 
('Here is a string.',
 'Here is a slightly different string.',
 'This string should be significantly different from the other two?',
 "Let's look at all these string similarity values in a matrix.")

In [11]: for r, s1 in enumerate(strings):
   : for c, s2 in enumerate(strings):
   : m = SequenceMatcher(lambda x:x=="", s1, s2)
   : sim[r,c] = m.ratio()
   :

In [12]: sim
Out[12]: 
array([[ 1.,  0.64150943,  0.2195122 ,  0.30769231],
   [ 0.64150943,  1.,  0.47524752,  0.30927835],
   [ 0.2195122 ,  0.45544554,  1.,  0.28571429],
   [ 0.30769231,  0.28865979,  0.,  1.]])

==

The values along the matrix diagonal, of course, are all ones, because each 
string was compared to itself.

I also expected the values reflected across the matrix diagonal to match.  The 
first row does in fact match the first column.  The remaining numbers disagree 
somewhat.  The differences are not large, but they are there.  I don't know the 
reason why.  Caveat programmer.
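A minimal way to see this for yourself, using two of the strings from the session above (the exact values depend on the difflib version, so no particular numbers are asserted here):

```python
from difflib import SequenceMatcher

a = 'Here is a slightly different string.'
b = "Let's look at all these string similarity values in a matrix."

# ratio() is not guaranteed to be symmetric: SequenceMatcher indexes the
# *second* sequence and selects matching blocks greedily, so swapping the
# arguments can lead to a slightly different set of matching blocks.
r_ab = SequenceMatcher(None, a, b).ratio()
r_ba = SequenceMatcher(None, b, a).ratio()
print(r_ab, r_ba)  # the two values may differ slightly
```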
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread Fillmore

On 11/3/2016 6:47 PM, jlada...@itu.edu wrote:

On Thursday, November 3, 2016 at 1:09:48 PM UTC-7, Neil D. Cerutti wrote:

you may also be
able to use some items "off the shelf" from Python's difflib.


I wasn't aware of that module, thanks for the tip!

difflib.SequenceMatcher.ratio() returns a numerical value which represents
the "similarity" between two strings.  I don't see a precise definition of
"similar", but it may do what the OP needs.





I may end up rolling my own algo, but thanks for the tip; this does seem 
like useful stuff indeed.



--
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread jladasky
On Thursday, November 3, 2016 at 1:09:48 PM UTC-7, Neil D. Cerutti wrote:
> you may also be 
> able to use some items "off the shelf" from Python's difflib.

I wasn't aware of that module, thanks for the tip!

difflib.SequenceMatcher.ratio() returns a numerical value which represents the 
"similarity" between two strings.  I don't see a precise definition of 
"similar", but it may do what the OP needs.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Paul Rubin
arthurhavli...@gmail.com writes:
> I would gladly appreciate your returns on this, regarding:
> 1 - Whether a similar proposition has been made
> 2 - If you find this of any interest at all
> 3 - If you have a suggestion for improving the proposal

Bleccch.  Might be ok as a behind-the-scenes optimization by the
compiler.  If you want something like C++ move semantics, use C++.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Cousin Stanley
Constantin Sorin wrote:

> Hello, I recently started to make a dice game in Python.
> 
> Everything was nice and beautiful, until now.
> 
> My problem is that when I try to play and I win or lose
> or it's a tie, next time it will continue only with that.
>  

  Following is a link to a version of your code
  rewritten to run using python3  

http://csphx.net/fire_dice.py.txt

  The only significant differences 
  between python2 and python3
  in your code are  

python2 ... python3

 print ... print( )

 raw_input( ) ... input( )
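One wrinkle the listing glosses over: Python 3's input() behaves like Python 2's raw_input() and always returns a string, whereas the original code's Python 2 input() evaluated the typed text to a number. So a Python 3 port of `bet = input(">>")` needs an explicit conversion — a sketch with a hypothetical helper:

```python
def read_bet(text):
    """Parse a bet typed at the prompt.  In Python 3, input() returns a
    str, so numeric input must be converted explicitly."""
    try:
        return int(text)
    except ValueError:
        return None  # not a number; the caller can re-prompt

assert read_bet("2") == 2
assert read_bet("two") is None
```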


Bob Gailer wrote: 

> 
> The proper way to handle this 
> is to put the entire body of game() 
> in a while loop.
>   
> Since the values of e and f are not changed in the loop 
> he will continue to get the same thing.
>  

  These changes are the key
  to making the program loop
  as desired  

  All other changes are mostly cosmetic  

 
  * Note *

I  am  partial to white space 
both  horizontal  and  vertical ... :-)
  

-- 
Stanley C. Kitching
Human Being
Phoenix, Arizona

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: data interpolation

2016-11-03 Thread Bob Gailer
On Nov 3, 2016 6:10 AM, "Heli"  wrote:
>
> Hi,
>
> I have a question about data interpolation using python. I have a big
ascii file containg data in the following format and around 200M points.
>
> id, xcoordinate, ycoordinate, zcoordinate
>
> then I have a second file containing data in the following format, ( 2M
values)
>
> id, xcoordinate, ycoordinate, zcoordinate, value1, value2, value3,...,
valueN
>
> I would need to get values for x,y,z coordinates of file 1 from values of
file2.
>
Apologies but I have no idea what you're asking. Can you give us some
examples?

> I don't know whether my data in file1 and 2 is from structured or
unstructured grid source. I was wondering which interpolation module either
from scipy or scikit-learn you recommend me to use?
>
> I would also appreciate if you could recommend me some sample
example/reference.
>
> Thanks in Advance for your help,
> --
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Chris Angelico
On Fri, Nov 4, 2016 at 4:00 AM, Steve D'Aprano
 wrote:
> On Fri, 4 Nov 2016 01:05 am, Chris Angelico wrote:
>
>> On Thu, Nov 3, 2016 at 7:29 PM, Steven D'Aprano
>>  wrote:
>>>> lst = map (lambda x: x*5, lst)
>>>> lst = filter (lambda x: x%3 == 1, lst)
>>>> And perform especially bad in CPython compared to a comprehension.
>>>
>>> I doubt that.
>>>
>>
>> It's entirely possible. A list comp involves one function call (zero
>> in Py2), but map/lambda involves a function call per element in the
>> list. Function calls have overhead.
>
> I don't know why you think that list comps don't involve function calls.

List comps themselves involve one function call (zero in Py2). What
you do inside the expression is your business. Do you agree that list
comps don't have the overhead of opening and closing files?

files = "/etc/hostname", "/etc/resolv.conf", ".bashrc"
contents = [open(fn).read() for fn in files]

In a comparison between comprehensions and map, this cost is
irrelevant, unless your point is that "they're all pretty quick".

> Here's some timing results using 3.5 on my computer. For simplicity, so
> folks can replicate the test themselves, here's the timing code:
>
>
> from timeit import Timer
> setup = """data = list(range(1))
> def func(x):  # simulate some calculation
> return {x+1: x**2}
> """
> t1 = Timer('[func(a) for a in data]', setup=setup)
> t2 = Timer('list(map(func, data))', setup=setup)

This is very different from the original example, about which the OP
said that map performs badly, and you doubted it. In that example, the
list comp includes the expression directly, whereas map (by its
nature) must use a function. The "inline expression" of a
comprehension is more efficient than the "inline expression" of a
lambda function given to map.

> And here's the timing results on my machine:
>
> py> min(t1.repeat(number=1000, repeat=7))  # list comp
> 18.2571472954005
> py> min(t2.repeat(number=1000, repeat=7))  # map
> 18.157311914488673
>
> So there's hardly any difference, but map() is about 0.5% faster in this
> case.

Right. As has often been stated, map is perfectly efficient, *if* the
body of the comprehension would be simply a function call, nothing
more. You can map(str, stuff) to stringify a bunch of things. Nice.
But narrow, and not the code that was being discussed.

> Now, it is true that *if* you can write the list comprehension as a direct
> calculation in an expression instead of a function call:
>
> [a+1 for a in data]
>
> *and* compare it to map using a function call:
>
> map(lambda a: a+1, data)
>
>
> then the overhead of the function call in map() may be significant.

Thing is, this is extremely common. How often do you actually use a
comprehension with something that is absolutely exactly a function
call on the element in question?

> But the
> extra cost is effectively just a multiplier. It isn't going to change
> the "Big Oh" behaviour.

Sure it doesn't. In each case, the cost is linear. But the work is
linear, so I would expect the time to be linear.

> So map() here is less than a factor of two slower. I wouldn't call
> that "especially bad" -- often times, a factor of two is not important.
> What really hurts is O(N**2) performance, or worse.
>
> So at worst, map() is maybe half as fast as a list comprehension, and at
> best, it's perhaps a smidgen faster. I would expect around the same
> performance, differing only by a small multiplicative factor: I wouldn't
> expect one to be thousands or even tens of times slower than the other.

But this conclusion I agree with. There is a performance difference,
but it is not overly significant. Compared to the *actual work*
involved in the task (going through one list and doing some operation
on each element), the difference between map and a comprehension is
generally going to be negligible.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Bob Gailer
On Nov 3, 2016 11:30 AM, "Constantin Sorin"  wrote:
>
> Hello, I recently started to make a dice game in Python. Everything was
nice and beautiful, until now. My problem is that when I try to play and I
win or lose or it's a tie, next time it will continue only with that.
> Exemple:
> Enter name >> Sorin
> Money = 2
> Bet >> 2
> You won!
> Money 4
> Bet >> 2
> You won!
> and it loops like this :/
>

What are the rules of the game?

> Here is the code:
>
> import time
> import os
> import random
> os = os.system
> os("clear")
>
> print "What is your name?"
> name = raw_input(">>")
>
> def lost():
> print "Yoy lost the game!Wanna Play again?Y/N"
> ch = raw_input(">>")
> if ch == "Y":
> game()

When you call a function from within that function you are using recursion.
This is not what recursion is intended for. If you play long enough you
will run out of memory.

The proper way to handle this is to put the entire body of game() in a
while loop.

> elif ch == "N":
> exit()
>
>
>
> def game():
> os("clear")
> a = random.randint(1,6)
> b = random.randint(1,6)
> c = random.randint(1,6)
> d = random.randint(1,6)
> e = a + b
> f = c + d
> money = 2
> while money > 0:
> print "Welcome to FireDice %s!" %name
> print "Your money: %s$" %money
> print "How much do you bet?"
> bet = input(">>")
> if e > f:
> print "you won!"
> money = money + bet
> elif e < f:
> print "you lost"
> money = money - bet
> else:
> print "?"
>
> print money
> lost()

Since the values of e and f are not changed in the loop he will continue to
get the same thing.
>
> game()
> --
> https://mail.python.org/mailman/listinfo/python-list
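Putting those two points together — wrap the whole game body in a loop and roll the dice inside it — a minimal Python 3 sketch (function names and the betting rule are assumptions carried over from the original code):

```python
import random

def play_round(money, bet, rng=random):
    """One round: roll two dice each for the player and the house."""
    player = rng.randint(1, 6) + rng.randint(1, 6)
    house = rng.randint(1, 6) + rng.randint(1, 6)
    if player > house:
        return money + bet   # win
    if player < house:
        return money - bet   # loss
    return money             # tie

def game(name, bets, money=2):
    """The dice are re-rolled every round, so each outcome can differ."""
    for bet in bets:
        if money <= 0:
            break            # broke: stop playing
        money = play_round(money, bet)
        print("%s, your money: %s$" % (name, money))
    return money

result = game("Sorin", [2, 2, 2])
assert result in (0, 2, 4, 6, 8)
```

Taking the bets as a parameter (instead of calling input() inside the loop) keeps the sketch testable; an interactive version would read the bet each iteration.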
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Arthur Havlicek
I understand that, the cost of change is such that it's very unlikely
something like this ever goes into Python, but I feel like the interest of
the proposition is being underestimated here, that's why I'm going to argue
a few points and give a bit more context as needed.

> While mapping and filtering are common operations, I'm not sure mapping
> and filtering and then reassigning back to the original sequence is
> especially common.

It depends on your context. In the last 3 months, I stumbled across this at
least 3 times, which is 3 times more than I used a lambda or a metaclass or
a decorator or other fancy language feature that we simply avoid whenever
possible. It also happened to my colleague. I remember these examples
because we had a bit of humour about how nice inlined ternaries inside
comprehensions can be, but I could be missing a lot more.

The reason I'm being especially impacted by this is because I am maintainer
of a decent-sized Python application (~50-100K lines of code) that
extensively uses lists and dictionaries. We value "low level" data
manipulation and efficiency a lot more than complex, non-obvious
constructs. In other contexts, it may be very different. Note that my
context is only relevant for illustration here, I don't expect a feature to
save me since we are currently shipping to Centos 6 and thus will not see
the light of Python 3.7 in the next 10 years (optimistic estimation).

> Arthur, I would suggest looking at what numpy and pandas do.

In my example context, their benefits can't apply, because I'm not going to
rewrite code that uses lists to use np.array instead, for example.
Although the performance boost is likely to be bigger if used properly, I
would have preferred a lesser boost that comes for (almost) free.

Like most Python programmers, I'm not in the position of needing a performance
boost very badly, but that does not mean I disregard performance entirely.
The need for performance is not so binary that it either doesn't matter at all
or is enough to motivate a rewrite.

> To my eyes, this proposed syntax is completely foreign

I must admit I don't have much imagination for syntax proposals...all that
mattered to me here was to make it clear you are doing an in-place
modification. Feel free to completely ignore that part. Any proposal
welcomed of course.
About Readability & Redundancy

I have misused the terms here, but I wasn't expecting so much nitpicking. I
should have used the term maintainability, because that one is bland and
subjective enough that nobody would have noticed :D

How about "I find that cooler." Good enough?

In a less sarcastic tone:

What I truly meant here is that when you contain the behavior of your code
inside a specific keyword or syntax, you are making your intentions clear
to the reader. It may be harder for the reader to gain that knowledge in
the first place, but it makes reading easier over time.

Famous example:

When you learned programming, you may have had no idea what "+=" was doing,
but now that you do, you probably rate the "a += 2" syntax much better than
"a = a + 2". You save a token, but more importantly, you make your
intentions clearer, because "+=" rings a bell, while "=" is a more generic
syntax with a broader meaning.

> So map() here is less than a factor of two slower. I wouldn't call
> that "especially bad" -- often times, a factor of two is not important.
> What really hurts is O(N**2) performance, or worse.

When you evaluate your application's bottleneck, or in everyday algorithmic
work, perhaps. But for language core features we are not on the same scale.
If a language X is 2 times faster than a language Y at the same task, that's
a huge selling point, and is of real importance.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread Neil D. Cerutti

On 11/3/2016 1:49 PM, jlada...@itu.edu wrote:

The Levenshtein distance is a very precise definition of dissimilarity between 
sequences.  It specifies the minimum number of single-element edits you would 
need to change one sequence into another.  You are right that it is fairly 
expensive to compute.

But you asked for an algorithm that would determine whether groups of strings are 
"sort of similar".  How imprecise can you be?  An analysis of the frequency of 
each individual character in a string might be good enough for you.


I also once used a Levenshtein distance algo in Python (code snippet 
D0DE4716-B6E6-4161-9219-2903BF8F547F) to compare names of students (it 
worked, but turned out to not be what I needed), but you may also be 
able to use some items "off the shelf" from Python's difflib.


--
Neil Cerutti

--
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-03 Thread Ethan Furman

On 11/03/2016 07:45 AM, Ethan Furman wrote:

On 11/03/2016 01:50 AM, teppo.p...@gmail.com wrote:


The guide is written with C++ in mind, yet the concepts stand for any
 programming language really. Read it through and think about it. If
 you come back to this topic and say: "yeah, but it's c++", then you
 haven't understood it.


The ideas (loose coupling, easy testing) are certainly applicable in Python -- 
the specific methods talked about in that paper, however, are not.


Speaking specifically about the gyrations needed for the sole purpose of 
testing.

The paper had a lot of good things to say about decoupling, and in that light 
if the class in question should work with any Queue, then it should be passed 
in -- however, if it's an implementation detail, then it shouldn't.
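A minimal sketch of that distinction (class names hypothetical): if the class should work with any queue, accept it in the constructor — which also makes it trivial to inject a test double — and if the queue is an implementation detail, build it internally.

```python
from queue import Queue

class Worker:
    """Accepts any queue-like object; defaults to an internal Queue when
    the queue is just an implementation detail."""
    def __init__(self, tasks=None):
        self.tasks = tasks if tasks is not None else Queue()

class FakeQueue:
    """Minimal test double that records what gets queued."""
    def __init__(self):
        self.items = []
    def put(self, item):
        self.items.append(item)

fake = FakeQueue()
w = Worker(fake)        # loose coupling: the collaborator is injected
w.tasks.put("job")
assert fake.items == ["job"]
assert isinstance(Worker().tasks, Queue)   # default: internal detail
```

No classmethod-constructor gyrations are needed for this in Python; a default argument covers both cases.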

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread jladasky
The Levenshtein distance is a very precise definition of dissimilarity between 
sequences.  It specifies the minimum number of single-element edits you would 
need to change one sequence into another.  You are right that it is fairly 
expensive to compute.

But you asked for an algorithm that would determine whether groups of strings 
are "sort of similar".  How imprecise can you be?  An analysis of the frequency 
of each individual character in a string might be good enough for you.
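For reference, the textbook dynamic-programming version of that definition fits in a few lines (this is the two-row variant, keeping memory at O(len(t)) rather than the full matrix):

```python
def levenshtein(s, t):
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn s into t.  O(len(s)*len(t)) time."""
    prev = list(range(len(t) + 1))      # distances from "" to prefixes of t
    for i, sc in enumerate(s, 1):
        curr = [i]                      # distance from s[:i] to ""
        for j, tc in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # delete sc
                            curr[j - 1] + 1,             # insert tc
                            prev[j - 1] + (sc != tc)))   # substitute/match
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
assert levenshtein("", "abc") == 3
```

The quadratic cost is per string pair, which is why comparing every pair in a large list gets expensive quickly.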
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Steve D'Aprano
On Fri, 4 Nov 2016 01:05 am, Chris Angelico wrote:

> On Thu, Nov 3, 2016 at 7:29 PM, Steven D'Aprano
>  wrote:
>>> lst = map (lambda x: x*5, lst)
>>> lst = filter (lambda x: x%3 == 1, lst)
>>> And perform especially bad in CPython compared to a comprehension.
>>
>> I doubt that.
>>
> 
> It's entirely possible. A list comp involves one function call (zero
> in Py2), but map/lambda involves a function call per element in the
> list. Function calls have overhead.

I don't know why you think that list comps don't involve function calls.

Here's some timing results using 3.5 on my computer. For simplicity, so
folks can replicate the test themselves, here's the timing code:


from timeit import Timer
setup = """data = list(range(1))
def func(x):  # simulate some calculation
return {x+1: x**2}
"""
t1 = Timer('[func(a) for a in data]', setup=setup)
t2 = Timer('list(map(func, data))', setup=setup)



And here's the timing results on my machine:

py> min(t1.repeat(number=1000, repeat=7))  # list comp
18.2571472954005
py> min(t2.repeat(number=1000, repeat=7))  # map
18.157311914488673

So there's hardly any difference, but map() is about 0.5% faster in this
case.

Now, it is true that *if* you can write the list comprehension as a direct
calculation in an expression instead of a function call:

[a+1 for a in data]

*and* compare it to map using a function call:

map(lambda a: a+1, data)


then the overhead of the function call in map() may be significant. But the
extra cost is effectively just a multiplier. It isn't going to change
the "Big Oh" behaviour. Here's an example:

t3 = Timer('[a+1 for a in data]', setup=setup)
t4 = Timer('list(map(lambda a: a+1, data))', setup=setup)
extra = """from functools import partial
from operator import add
"""
t5 = Timer('list(map(partial(add, 1), data))', setup=setup+extra)

And the timing results:

py> min(t3.repeat(number=1000, repeat=7))  # list comp with expression
2.6977453008294106
py> min(t4.repeat(number=1000, repeat=7))  # map with function
4.280411267653108
py> min(t5.repeat(number=1000, repeat=7))  # map with partial
3.446241058409214

So map() here is less than a factor of two slower. I wouldn't call
that "especially bad" -- often times, a factor of two is not important.
What really hurts is O(N**2) performance, or worse.

So at worst, map() is maybe half as fast as a list comprehension, and at
best, it's perhaps a smidgen faster. I would expect around the same
performance, differing only by a small multiplicative factor: I wouldn't
expect one to be thousands or even tens of times slower than the other.



-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: need some kind of "coherence index" for a group of strings

2016-11-03 Thread justin walters
On Thu, Nov 3, 2016 at 9:18 AM, Fillmore 
wrote:

>
> Hi there, apologies for the generic question. Here is my problem: let's say
> that I have a list of lists of strings.
>
> list1:#strings are sort of similar to one another
>
>   my_nice_string_blabla
>   my_nice_string_blqbli
>   my_nice_string_bl0bla
>   my_nice_string_aru
>
>
> list2:#strings are mostly different from one another
>
>   my_nice_string_blabla
>   some_other_string
>   yet_another_unrelated string
>   wow_totally_different_from_others_too
>
>
> I would like an algorithm that can look at the strings and determine that
> strings in list1 are sort of similar to one another, while the strings in
> list2 are all different.
> Ideally, it would be nice to have some kind of 'coherence index' that I
> can exploit to separate lists given a certain threshold.
>
> I was about to concoct something using levensthein distance, but then I
> figured that it would be expensive to compute and I may be reinventing the
> wheel.
>
> Thanks in advance to python masters that may have suggestions...
>
>
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>


When you say similar, do you mean similar in the amount of duplicate
words/letters? Or were you more interested
in similar sentence structure?
-- 
https://mail.python.org/mailman/listinfo/python-list


need some kind of "coherence index" for a group of strings

2016-11-03 Thread Fillmore


Hi there, apologies for the generic question. Here is my problem: let's 
say that I have a list of lists of strings.


list1:#strings are sort of similar to one another

  my_nice_string_blabla
  my_nice_string_blqbli
  my_nice_string_bl0bla
  my_nice_string_aru


list2:#strings are mostly different from one another

  my_nice_string_blabla
  some_other_string
  yet_another_unrelated string
  wow_totally_different_from_others_too


I would like an algorithm that can look at the strings and determine 
that strings in list1 are sort of similar to one another, while the 
strings in list2 are all different.
Ideally, it would be nice to have some kind of 'coherence index' that I 
can exploit to separate lists given a certain threshold.


I was about to concoct something using levensthein distance, but then I 
figured that it would be expensive to compute and I may be reinventing 
the wheel.


Thanks in advance to python masters that may have suggestions...
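One cheap way to get such an index, sketched here with the stdlib's difflib rather than Levenshtein (`coherence` is a hypothetical name; the mean pairwise ratio is quadratic in the list length, so large lists should be sampled):

```python
from difflib import SequenceMatcher
from itertools import combinations

def coherence(strings):
    """Mean pairwise similarity ratio: close to 1.0 for near-identical
    strings, much lower for unrelated ones."""
    pairs = list(combinations(strings, 2))
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio()
               for a, b in pairs) / len(pairs)

list1 = ["my_nice_string_blabla", "my_nice_string_blqbli",
         "my_nice_string_bl0bla", "my_nice_string_aru"]
list2 = ["my_nice_string_blabla", "some_other_string",
         "yet_another_unrelated string",
         "wow_totally_different_from_others_too"]

assert coherence(["abc", "abc"]) == 1.0
assert coherence(list1) > coherence(list2)
```

A threshold between the two averages would then separate "coherent" lists from "incoherent" ones; where exactly to put it is an empirical question.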



--
https://mail.python.org/mailman/listinfo/python-list


Re: Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Constantin Sorin
I use Linux and python 2.7.12
-- 
https://mail.python.org/mailman/listinfo/python-list


Python Dice Game/Need help with my script/looping!

2016-11-03 Thread Constantin Sorin
Hello, I recently started to make a dice game in Python. Everything was nice and 
beautiful, until now. My problem is that when I try to play and I win or lose or 
it's a tie, next time it will continue only with that.
Exemple:
Enter name >> Sorin
Money = 2
Bet >> 2
You won!
Money 4
Bet >> 2
You won!
and it loops like this :/

Here is the code:

import time
import os
import random
os = os.system
os("clear")

print "What is your name?"
name = raw_input(">>")

def lost():
    print "You lost the game! Wanna play again? Y/N"
    ch = raw_input(">>")
    if ch == "Y":
        game()
    elif ch == "N":
        exit()


def game():
    os("clear")
    a = random.randint(1,6)
    b = random.randint(1,6)
    c = random.randint(1,6)
    d = random.randint(1,6)
    e = a + b
    f = c + d
    money = 2
    while money > 0:
        print "Welcome to FireDice %s!" %name
        print "Your money: %s$" %money
        print "How much do you bet?"
        bet = input(">>")
        if e > f:
            print "you won!"
            money = money + bet
        elif e < f:
            print "you lost"
            money = money - bet
        else:
            print "?"

    print money
    lost()

game()
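For what it's worth, the likely cause (my reading, not confirmed in the thread): the four randint calls happen once, before the while loop, so e and f never change and every round repeats the first outcome. A minimal sketch of the fix, re-rolling inside the round (runs on Python 2 and 3):

```python
import random

def play_round(bet, money, rng=random):
    # roll all four dice on every round -- in the posted code the rolls
    # happen once, before the while loop, so the result never changes
    e = rng.randint(1, 6) + rng.randint(1, 6)
    f = rng.randint(1, 6) + rng.randint(1, 6)
    if e > f:
        return money + bet
    elif e < f:
        return money - bet
    return money  # tie: money unchanged
```

The game loop would then call play_round once per bet instead of reusing e and f.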
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Terry Reedy

On 11/3/2016 4:29 AM, Steven D'Aprano wrote:


Nonsense. It is perfectly readable because it is explicit about what is being
done, unlike some magic method that you have to read the docs to understand
what it does.


Agreed.


A list comprehension or for-loop is more general and can be combined so you can
do both:

alist[:] = [func(x) for x in alist if condition(x)]


The qualifier 'list' is not needed.  The right hand side of slice 
assignment just has to be an iterable, so a second iteration to build 
an intermediate list is not required.


alist[:] = (func(x) for x in alist if condition(x))

The parentheses around the generator expression are required here. 
(Steven, I know you know that, but not everyone else will.)
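A small demonstration of the point (mine, not from the original exchange) that the generator-expression form both filters and remaps in one slice assignment; in CPython the generator is fully consumed before the list is mutated, so reading from alist while assigning into alist[:] is safe:

```python
alist = [1, 2, 3, 4, 5, 6]
# keep the even items, multiplied by ten, and write them back in place
alist[:] = (x * 10 for x in alist if x % 2 == 0)
print(alist)  # [20, 40, 60]
```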


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-03 Thread Chris Angelico
On Thu, Nov 3, 2016 at 7:50 PM,   wrote:
> Little bit background related to this topic. It all starts from this article:
> http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf
>
> The guide is written in c++ in mind, yet the concepts stands for any 
> programming language really. Read it through and think about it. If you come 
> back to this topic and say: "yeah, but it's c++", then you haven't understood 
> it.

I don't have a problem with something written for C++ (though I do
have a problem with a thirty-eight page document on how to make your
code testable - TLDR), but do bear in mind that a *lot* of C++ code
can be simplified when it's brought to Python. One Python feature that
C++ doesn't have, mentioned already in this thread, is the way you can
have a ton of parameters with defaults, and you then specify only
those you want, as keyword args:

def __init__(self, important_arg1, important_arg2,
             queue=None, cache_size=50, whatever=...):
    pass

MyClass("foo", 123, cache_size=75)

I can ignore all the arguments that don't matter, and provide only the
one or two that I actually need to change. Cognitive load is
drastically reduced, compared to the "alternative constructor"
pattern, where I have to remember not to construct anything in the
normal way.

You can't do that in C++, so it's not going to be mentioned in that
document, but it's an excellent pattern to follow.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-03 Thread Ethan Furman

On 11/03/2016 01:50 AM, teppo.p...@gmail.com wrote:


The guide is written in c++ in mind, yet the concepts stands for any
 programming language really. Read it through and think about it. If
 you come back to this topic and say: "yeah, but it's c++", then you
 haven't understood it.


The ideas (loose coupling, easy testing) are certainly applicable in Python -- 
the specific methods talked about in that paper, however, are not.

To go back to the original example:

def __init__(self, ...):
    self.queue = Queue()

we have several different (easy!) ways to do dependency injection:

* inject a mock Queue into the module
* make queue a default parameter

If it's just testing, go with the first option:

import the_module_to_test
the_module_to_test.Queue = MockQueue

and away you go.

If the class in question has legitimate, non-testing, reasons to specify 
different Queues, then make it a default argument instead:

def __init__(self, ..., queue=None):
    if queue is None:
        queue = Queue()
    self.queue = queue

or, if it's just for testing but you don't want to hassle injecting a MockQueue 
into the module itself:

def __init__(self, ..., _queue=None):
    if _queue is None:
        _queue = Queue()
    self.queue = _queue

or, if the queue is only initialized (and not used) during __init__ (so you can 
replace it after construction with no worries):

class Example:
    def __init__(self, ...):
        self.queue = Queue()

ex = Example()
ex.queue = MockQueue()
# proceed with test

The thing each of those possibilities have in common is that the normal 
use-case of just creating the thing and moving on is the very simple:

my_obj = Example(...)

To sum up:  your concerns are valid, but using c++ (and many other language) 
idioms in Python does not make good Python code.

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Chris Angelico
On Thu, Nov 3, 2016 at 7:29 PM, Steven D'Aprano
 wrote:
>> lst = map (lambda x: x*5, lst)
>> lst = filter (lambda x: x%3 == 1, lst)
>> And perform especially bad in CPython compared to a comprehension.
>
> I doubt that.
>

It's entirely possible. A list comp involves one function call (zero
in Py2), but map/lambda involves a function call per element in the
list. Function calls have overhead.

Arthur, I would suggest looking at what numpy and pandas do. When
you're working with ridiculously large data sets, they absolutely
shine; and if you're not working with that much data, the performance
of map or a list comp is unlikely to be significant. If the numpy
folks have a problem that can't be solved without new syntax, then a
proposal can be brought to the core (like matmul, which was approved
and implemented).
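To illustrate what "shine" means here (my sketch, not Chris's): numpy ufuncs operate element-wise in C and can write into an existing array, so there is no per-element Python function call and no second list allocated:

```python
import numpy as np

a = np.arange(10)

# true in-place operations: the buffer of `a` is updated directly
a *= 5
np.mod(a, 3, out=a)

print(a.tolist())
```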

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Problems with read_eager and Telnet

2016-11-03 Thread kenansharon
On Monday, 28 February 2011 10:54:56 UTC-5, Robi  wrote:
> Hi everybody,
>  I'm totally new to Python but well motivated :-)
> 
> I'm fooling around with Python in order to interface with FlightGear
> using a telnet connection.
> 
> I can do what I had in mind (send some commands and read output from
> Flightgear using the telnetlib) with a read_until() object to catch
> every output line I need, but it proved to be very slow (it takes 1/10
> of a sec for every read_until().
> 
> I tried using the read_eager() object and it's way faster (it
> does the job in 1/100 of a sec, maybe more, I didn't test) but it
> gives me problems, it gets back strange strings, repeated ones,
> partially broken ones, well ... I don't know what's going on with it.
> 
> You see, I don't know telnet (the protocol) very well, I'm very new to
> Python and Python's docs are not very specific about that read_eager()
> stuff.
> 
> Could someone point me to some more documentation about that? or at
> least help me in getting a correct idea of what's going on with
> read_eager()?
> 
> I'm going on investigating but a help from here would be very
> appreciated :-)
> 
> Thanks in advance,
>Roberto

So, can someone explain for me how read_eager and read_very_eager decide 
when to stop reading data from a socket?

If the command sent causes the server to reply with multiple lines how do the 
Python functions decide when to stop accepting new data from the socket?
-- 
https://mail.python.org/mailman/listinfo/python-list


Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread arthurhavlicek
Hi everybody,

I have an enhancement proposal for Python and, as suggested by PEP 1, am 
exposing a stub to the mailing list before possibly starting writing a PEP. 
This is my first message to python mailing list. I hope you will find this 
content of interest.

Python features a powerful and fast way to create lists through comprehensions. 
Because of their ease of use and efficiency through native implementation, they 
are an advantageous alternative to map, filter, and more. However, when used as 
a replacement for an in-place version of these functions, they are sub-optimal, 
and Python offers no alternative.

This lack of implementation has already been pointed out:
http://stackoverflow.com/questions/4148375/is-there-an-in-place-equivalent-to-map-in-python
Notice the concerns of the OP in his comments to the replies in this one:
http://stackoverflow.com/questions/3000461/python-map-in-place
In this answer, developers are wondering about performance issues 
regarding both loop iteration and comprehension:
http://stackoverflow.com/a/1540069/3323394

I would suggest implementing a language-level, in-place version of them. There 
are several motivations for this:

1 - Code readability and reduced redundancy

lst = [ item for item in lst if predicate(item) ]
lst = [ f(item) for item in lst ]

Both these expressions feature redundancy: lst occurs twice and item at least 
twice. Additionally, readability is hurt, because one has to dive through the 
semantics of the comprehension to truly understand whether I am filtering the 
list or remapping its values.

Map and filter, although they are more explicit, also feature redundancy. They 
look OK with a function as the predicate:

lst = map (f, lst)
lst = filter (predicate, lst)

But they are less elegant when using an expression, since one has to wrap it 
in a lambda:

lst = map (lambda x: x*5, lst)
lst = filter (lambda x: x%3 == 1, lst)

And perform especially bad in CPython compared to a comprehension.

2 - Efficiency

Language support for these operations to be made in-place could improve their 
efficiency through reduced use of memory.


I would propose this syntax. (TODO: find appropriate keywords I guess):

lst.map x: x*5
lst.filter x: x%3 == 1

They can work for dictionaries too.

dct.map k,v: v*5
dct.filter k,v: k+v == 10

The reasoning behind the need for a language-level approach is the need for an 
efficient implementation that supports an arbitrary expression and not only a 
function. Expressions are shorter and, I guess, would be more efficient.
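To make the efficiency argument concrete, here is one way (my sketch, not part of the proposal itself) an in-place filter could be implemented even today: one pass, O(n), no intermediate list, by compacting the surviving items toward the front:

```python
def filter_in_place(predicate, lst):
    # overwrite the list front-to-back with the items that survive,
    # then truncate the leftover tail in one step
    write = 0
    for item in lst:
        if predicate(item):
            lst[write] = item
            write += 1
    del lst[write:]

lst = list(range(10))
filter_in_place(lambda x: x % 3 == 1, lst)
print(lst)  # [1, 4, 7]
```

A language-level version would presumably do the same thing in C, with the expression compiled instead of called as a function.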


I would gladly appreciate your feedback on this, regarding:
1 - Whether a similar proposition has been made
2 - If you find this of any interest at all
3 - If you have a suggestion for improving the proposal

Thanks for reading. Have a nice day
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Ned Batchelder
On Thursday, November 3, 2016 at 4:30:00 AM UTC-4, Steven D'Aprano wrote:
> On Thursday 03 November 2016 17:56, arthurhavli...@gmail.com wrote:
> > I would propose this syntax. (TODO: find appropriate keywords I guess):
> > 
> > lst.map x: x*5
> > lst.filter x: x%3 == 1
> 
> I think the chances of Guido accepting new syntax for something as trivial as 
> this with three existing solutions is absolutely zero.
> 
> I think the chances of Guido accepting new list/dict methods for in place map 
> and/or filter is a tiny bit higher than zero.


To my eyes, this proposed syntax is completely foreign. "lst.map" looks
like attribute or method access on lst, but there's no operation on the
result, or function call.  They implicitly assign back to lst, with no
recognizable syntax to indicate that they do (= or "as" is the usual
marker).

While mapping and filtering are common operations, I'm not sure mapping
and filtering and then reassigning back to the original sequence is
especially common.  It's certainly not common enough to deserve completely
new syntax when the existing methods already exist.

Part of the problem here is that you value explicitness, but also are
trying to reduce redundancy.  When you say that lst occurs twice in
your examples, what I see is that it occurs twice because it's being
used for two different things: it is the source of the data, and it is
also the target for the result. I think it is good to have it appear
twice in this case, especially since there's no reason to think it will
usually be used for both purposes.  How do I do exactly the same data
manipulation, but then assign it to lst2 because I belatedly realized
I wanted both the before and after list?  Under your proposed syntax,
I would have to completely rewrite the line to use a different filtering
mechanism because now I need to unbundle the filtering and the assignment.
That seems unfortunate.

You've done the right thing by bringing the proposal here.  I think it
is useful to see how people approach the language, and where they want
changes.  Discussing the pros and cons is a good way to get a deeper
appreciation both for the language and the rationale for its design.
But I don't think this proposal has a chance of moving forward.

--Ned.
-- 
https://mail.python.org/mailman/listinfo/python-list


data interpolation

2016-11-03 Thread Heli
Hi, 

I have a question about data interpolation using Python. I have a big ascii 
file containing data in the following format, with around 200M points. 

id, xcoordinate, ycoordinate, zcoordinate

then I have a second file containing data in the following format, ( 2M values) 

id, xcoordinate, ycoordinate, zcoordinate, value1, value2, value3,..., valueN

I would need to get values for x,y,z coordinates of file 1 from values of 
file2.  

I don't know whether my data in files 1 and 2 is from a structured or 
unstructured grid source. I was wondering which interpolation module, either 
from scipy or scikit-learn, you would recommend me to use?

I would also appreciate if you could recommend me some sample 
example/reference. 
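For unstructured points, scipy.interpolate.griddata is the usual starting point. A rough sketch with made-up stand-in data (an assumption on my part; with 200M query points you would process them in chunks, and nearest-neighbour via cKDTree may be the only tractable method):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)

# stand-ins: file 2 = points with known values, file 1 = query points
known_xyz = rng.random((500, 3))
known_val = known_xyz[:, 0] + 2.0 * known_xyz[:, 1]   # fake "value1"
query_xyz = rng.random((20, 3))

est = griddata(known_xyz, known_val, query_xyz, method="linear")
# linear interpolation returns NaN outside the convex hull of the data;
# fall back to nearest-neighbour there
nearest = griddata(known_xyz, known_val, query_xyz, method="nearest")
est = np.where(np.isnan(est), nearest, est)
```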

Thanks in Advance for your help, 
-- 
https://mail.python.org/mailman/listinfo/python-list


PyDev 5.3.1 Released

2016-11-03 Thread Fabio Zadrozny
Release Highlights:
---

* **Important** PyDev now requires Java 8 and Eclipse 4.6 (Neon) onwards.

* PyDev 5.2.0 is the last release supporting Eclipse 4.5 (Mars).

* **Code Completion**

* Substring completions are **on by default** (may be turned off in the
code-completion preferences).
* Fixed issue with code-completion using from..import..as aliases.

* **Others**

* Auto-fix imports with Ctrl+Shift+O properly sorts items based on the
same sorting improvements for code-completion.
* When fixing unresolved import (with Ctrl+1) it properly resolves
dependent projects (bugfix for regression in 5.3.0).
* **async** and **await** keywords are properly highlighted.
* **async** blocks properly auto-indented.
* In PEP 448 list unpack variable was not being marked as a "Load"
variable (which made the code analysis yield false positives).

What is PyDev?
---

PyDev is an open-source IDE on top of Eclipse for Python, Jython and
IronPython development.

It comes with goodies such as code completion, syntax highlighting, syntax
analysis, code analysis, refactor, debug, interactive console, etc.

Details on PyDev: http://pydev.org
Details on its development: http://pydev.blogspot.com


What is LiClipse?
---

LiClipse is a PyDev standalone with goodies such as support for Multiple
cursors, theming, TextMate bundles and a number of other languages such as
Django Templates, Jinja2, Kivy Language, Mako Templates, Html, Javascript,
etc.

It's also a commercial counterpart which helps supporting the development
of PyDev.

Details on LiClipse: http://www.liclipse.com/



Cheers,

--
Fabio Zadrozny
--
Software Developer

LiClipse
http://www.liclipse.com

PyDev - Python Development Environment for Eclipse
http://pydev.org
http://pydev.blogspot.com

PyVmMonitor - Python Profiler
http://www.pyvmmonitor.com/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Reading Fortran Ascii output using python

2016-11-03 Thread Heli
On Monday, October 31, 2016 at 8:03:53 PM UTC+1, MRAB wrote:
> On 2016-10-31 17:46, Heli wrote:
> > On Monday, October 31, 2016 at 6:30:12 PM UTC+1, Irmen de Jong wrote:
> >> On 31-10-2016 18:20, Heli wrote:
> >> > Hi all,
> >> >
> >> > I am trying to read an ascii file written in Fortran90 using python. I 
> >> > am reading this file by opening the input file and then reading using:
> >> >
> >> > inputfile.readline()
> >> >
> >> > On each line of the ascii file I have a few numbers like this:
> >> >
> >> > line 1: 1
> >> > line 2: 1000.834739 2000.38473 3000.349798
> >> > line 3: 1000 2000 5000.69394 99934.374638 54646.9784
> >> >
> >> > The problem is when I have more than 3 numbers on the same line such as 
> >> > line 3, python seems to read this using two reads. This means the above 
> >> > example will be read like this:
> >> >
> >> > line 1: 1
> >> > line 2: 1000.834739 2000.38473 3000.349798
> >> > line 3: 1000 2000 5000.69394
> >> > line 4: 99934.374638 54646.9784
> >> >
> >> > How can I fix this for each fortran line to be read correctly using 
> >> > python?
> >> >
> >> > Thanks in Advance for your help,
> >> >
> >> >
> >>
> >> You don't show any code so it's hard to say what is going on.
> >> My guess is that your file contains spurious newlines and/or CRLF 
> >> combinations.
> >>
> >> Try opening the file in universal newline mode and see what happens?
> >>
> >> with open("fortranfile.txt", "rU") as f:
> >> for line in f:
> >> print("LINE:", line)
> >>
> >>
> >> Irmen
> >
> > Thanks Irmen,
> >
> > I tried with "rU" but that did not make a difference. The problem is a line 
> > that with one single write statement in my fortran code :
> >
> > write(UNIT=9,FMT="(99g20.8)")  value
> >
> > seems to be read in two python inputfile.readline().
> >
> > Any ideas how I should be fixing this?
> >
> > Thanks,
> >
> What is actually in the file?
> 
> Try opening it in binary mode and print using the ascii function:
> 
>  with open("fortranfile.txt", "rb") as f:
>  contents = f.read()
> 
>  print("CONTENTS:", ascii(contents))

Thanks guys, 
I solved the problem on the Fortran side. Some lines contained newline 
characters, which I fixed by setting format descriptors in Fortran.

Thanks for your help, 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-03 Thread teppo . pera
Hello everyone, I'll step into the conversation too, as I think it is quite an 
important topic. I'd be the one my colleague calls keen on this practice.

Little bit background related to this topic. It all starts from this article:
http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf

The guide is written with c++ in mind, yet the concepts stand for any 
programming language really. Read it through and think about it. If you come 
back to this topic and say: "yeah, but it's c++", then you haven't understood 
it. 

Steven is correct in his post: this is dependency injection (thanks for seeing 
through my suggestion, I was starting to think I'm completely crazy; now I'm 
just partially crazy). The create classmethod is mostly for convenience, 
especially in those cases where the class would require injecting multiple 
dependencies that don't need to come from outside. I'm actually fine using 
default parameters. I'd like to use __new__ and __init__ here, but I haven't 
figured out how to do it with those in a way that's comfortable to write and 
read, which is usually a key point in selling a new practice, and probably 
also a reason why the create classmethod is not popular either, regardless of 
it being useful.

My key goal is to maximize control over object creation, which makes writing 
tests much, much simpler. This is not necessarily OOP, and definitely not a 
strict form of it. It comes from the above article, which it follows fairly 
strictly. I have at least found it a very useful practice to have no logic in 
constructors. The mock library is definitely a powerful mechanism for solving 
some testability problems in Python, but it can (if you're not careful about 
how you write the classes you are going to test) make tests very hard to read, 
and if you want to treat your tests as documentation, that is unacceptable.

In order to write code that just solves the problem, none of these patterns 
are needed. To solve a problem in a manner that makes it possible to return 
to it later on, these patterns start to matter.

Br,
Teppo
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pre-pep discussion material: in-place equivalents to map and filter

2016-11-03 Thread Steven D'Aprano
On Thursday 03 November 2016 17:56, arthurhavli...@gmail.com wrote:

[...]
> Python features a powerful and fast way to create lists through
> comprehensions. Because of their ease of use and efficiency through native
> implementation, they are an advantageous alternative to map, filter, and
> more. However, when used in replacement for an in-place version of these
> functions, they are sub-optimal, and Python offer no alternative.


Of course Python offers an alternative: the basic for loop.


for i, x in enumerate(alist):
    alist[i] = func(x)


Not enough of a one-liner for you? Write a helper function:

def inplace_map(func, alist):
    for i, x in enumerate(alist):
        alist[i] = func(x)


In practice, you may even find that except for the most enormous lists (so big 
that memory becomes an issue, so we're talking tens of millions of items) it 
will probably be faster to use a slice and a comprehension:

alist[:] = [func(x) for x in alist]
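The claim is easy to measure; a quick sketch of how (mine, and the numbers vary by build, list size, and how expensive func is):

```python
import timeit

setup = "alist = list(range(100_000)); func = abs"

# in-place for-loop: no second list, but one index assignment per item
t_loop = timeit.timeit(
    "for i, x in enumerate(alist): alist[i] = func(x)",
    setup=setup, number=50)

# list comprehension plus slice assignment: builds a new list, then
# copies it over the old one's contents
t_slice = timeit.timeit(
    "alist[:] = [func(x) for x in alist]",
    setup=setup, number=50)

print("in-place loop: %.3fs  slice+listcomp: %.3fs" % (t_loop, t_slice))
```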


[...]
> Notice the concerns of OPs in his comments to replies in this one:
> http://stackoverflow.com/questions/3000461/python-map-in-place

What I saw was the OP's *premature optimization*:

"I wanted to use map for performance gains"

"I figured map would be quicker than the list comprehension way"

Although in fairness he does also say:

"I'm working on a big list, and the times we're talking about are seconds in 
difference, which clearly matters"

I'm not necessarily sure that seconds always matters -- if your code takes 90 
seconds, or 96 seconds, who is going to even notice? But let's assume he's 
right and a few seconds difference makes a real difference.

He has three obvious alternatives to waiting until Python 3.7 (the earliest 
such a new feature can be added):

- modify the list in place with a for-loop;
- slice assignment using map;
- slice assignment using list comprehension.


> 1 - Code readability and reduced redundancy
> 
> lst = [ item for item in lst if predicate(item) ]
> lst = [ f(item) for item in lst ]
> 
> Both these expressions feature redundancy, lst occurs twice and item at least
> twice. 

That's not what redundancy means. "Redundancy" doesn't refer to the re-use of 
any arbitrary token. It means doing the same thing twice in two different 
places.


> Additionally, the readability is hurt, because one has to dive through
> the semantics of the comprehension to truely understand I am filtering the
> list or remapping its values.

Nonsense. It is perfectly readable because it is explicit about what is being 
done, unlike some magic method that you have to read the docs to understand 
what it does.

A list comprehension or for-loop is more general and can be combined so you can 
do both:

alist[:] = [func(x) for x in alist if condition(x)]



> Map and filter, although they are more explicit, 

*Less* explicit.

To most people, "map" means the thing that you follow when you are travelling 
in unfamiliar territory, and "filter" means the paper doohickey you put in your 
coffee machine to keep the ground up coffee from ending up in your cup.


> also feature redundancy.
> They look OK with functional predicate:
> 
> lst = map (f, lst)
> lst = filter (predicate, lst)
> 
> But are less elegant when using an expression, than one has to convert
> through a lambda:
> 
> lst = map (lambda x: x*5, lst)
> lst = filter (lambda x: x%3 == 1, lst)

And that's why we have list comprehensions.


> And perform especially bad in CPython compared to a comprehension.

I doubt that.



> 2 - Efficiency
> 
> A language support for these operations to be made in-place could improve the
> efficiency of this operations through reduced use of memory.

*shrug*

Saving memory sometimes costs time.


> I would propose this syntax. (TODO: find appropriate keywords I guess):
> 
> lst.map x: x*5
> lst.filter x: x%3 == 1

I think the chances of Guido accepting new syntax for something as trivial as 
this with three existing solutions is absolutely zero.

I think the chances of Guido accepting new list/dict methods for in place map 
and/or filter is a tiny bit higher than zero.


> The reasonning for the need of a language-level approach is the need for an
> efficient implementation that would support giving an arbitrary expression
> and not only a function. 

We already have three of those: for-loops, list comprehensions, and map.




-- 
Steven
299792.458 km/s — not just a good idea, it’s the law!

-- 
https://mail.python.org/mailman/listinfo/python-list