Re: Expression can be simplified on list

2016-09-28 Thread Rustom Mody
On Thursday, September 15, 2016 at 1:43:05 AM UTC+5:30, Terry Reedy wrote:
> On 9/14/2016 3:16 AM, Rustom Mody wrote:
> 
> > In THOSE TYPES that element can justifiably serve as a falsey (empty) type
> >
> > However to extrapolate from here and believe that ALL TYPES can have a 
> > falsey
> > value meaningfully, especially in some obvious fashion, is mathematically 
> > nonsense.
> 
> Python makes no such nonsense claim.  By default, Python objects are truthy.
> 
>  >>> bool(object())
> True
> 
> Because True is the default, object need not and at least in CPython 
> does not have a __bool__ (or __len__) method.  Classes with no falsey 
> objects, such as functions, generators, and codes, need not do anything 
> either.  In the absence of an override function, the internal bool code 
> returns True.
> 
Not sure what you are trying to say, Terry...
Your English suggests you disagree with me.
Your example is exactly what I am saying: if a type has a behavior in which
all values are always True (true-ish), it's a rather strange kind of bool-nature.

Shall we say it has Buddha-nature? ;-)

> It is up to particular classes to override that default and say that it 
> has one or more Falsey objects.  They do so by defining a __bool__ 
> method that returns False for the falsey objects (and True otherwise) or 
> by defining a __len__ method that returns int 0 for falsey objects (and 
> non-0 ints otherwise).  If a class defines both, __bool__ wins.

Sure, one can always (OK, usually) avoid a bug in a system by not using the
feature that triggers the bug. Are you suggesting that makes the bug
non-existent?

In more detail:
- If user/programmer defines a new type
- Which has no dunder bool
- Which has no dunder len
- Which has no ... (all the other things like len that can make for a 
non-trivial bool behavior)
- And then uses a value of that type in a non-trivial bool-consuming position
such as the condition of an if/while etc

There's a very good chance that bool-usage is buggy

In more mundane terms, dunder bool defaulting to true is about as useful 
as if it defaulted to
2*random.random()

Why not default it in the way that AttributeError/NameError/TypeError etc
are raised?
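For reference, the precedence rules Terry describes can be checked directly.
A small illustrative sketch (the class names are made up for the example):

```python
class NoDunders:
    pass                      # no __bool__, no __len__: always truthy

class Empty:
    def __len__(self):
        return 0              # __len__ returning 0 makes instances falsey

class Contrary:
    def __len__(self):
        return 0              # would make it falsey...
    def __bool__(self):
        return True           # ...but __bool__ wins when both are defined

print(bool(NoDunders()))      # True  (the default)
print(bool(Empty()))          # False
print(bool(Contrary()))       # True
```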
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Steven D'Aprano
On Thursday 29 September 2016 16:13, Gregory Ewing wrote:

> Philosophical question: Is a function that never
> returns actually a function?

Mathematically, all functions return, instantly. Or rather, mathematics occurs 
in an abstract environment where there is no time. Everything happens 
simultaneously. So even infinite sums or products can be calculated instantly 
-- if they converge.

So, yes, even functions that never return are functions. You just need to 
collapse all of infinite time into a single instant.


-- 
Steven
git gets easier once you get the basic idea that branches are homeomorphic 
endofunctors mapping submanifolds of a Hilbert space.



Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Gregory Ewing

Paul Moore wrote:

What "allows side effects" in languages like Haskell is the fact that the
runtime behaviour of the language is not defined as "calculating the value of
the main function" but rather as "making the process that the main function
defines as an abstract monad actually happen".


That's an interesting way of looking at it. To put it
another way, what matters isn't just the final result,
but *how* the final result is arrived at.

In the case where the main function never returns,
the "final result" doesn't even exist, and all you
have left is the process.

Philosophical question: Is a function that never
returns actually a function?

--
Greg


Re: A regular expression question

2016-09-28 Thread Ben Finney
Cpcp Cp  writes:

> Look this
>
> >>> import re
> >>> text="asdfnbd]"
> >>> m=re.sub("n*?","?",text)
> >>> print m
> ?a?s?d?f?n?b?d?]?
>
> I don't understand the 'non-greedy' pattern.

Since ‘n*?’ matches zero or more ‘n’s, it can match zero ‘n’s (the empty
string) adjacent to every actual character.

It's non-greedy because it matches as few characters as will allow the
match to succeed.

> I think the repl argument should replace every char in text and
> output "".

I hope that helps you understand why that expectation is wrong :-)

Regular expression patterns are *not* an easy topic. Try experimenting
and learning with <http://www.regexr.com/>.

-- 
 \  “If I haven't seen as far as others, it is because giants were |
  `\   standing on my shoulders.” —Hal Abelson |
_o__)  |
Ben Finney



A regular expression question

2016-09-28 Thread Cpcp Cp
Look this

>>> import re
>>> text="asdfnbd]"
>>> m=re.sub("n*?","?",text)
>>> print m
?a?s?d?f?n?b?d?]?

I don't understand the 'non-greedy' pattern.

I think the repl argument should replace every char in text and output
"".



Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Chris Angelico
On Thu, Sep 29, 2016 at 1:53 PM, Steve D'Aprano wrote:
> John Cook suggests that functional programming gets harder and harder to do
> right (both for the compiler and for the programmer) as you asymptotically
> approach 100% pure, and suggests the heuristic that (say) 85% pure is the
> sweet spot: functional in the small, object oriented in the large.
>
> http://www.johndcook.com/blog/2009/03/23/functional-in-the-small-oo-in-the-large/
>
>
> I agree. I find that applying functional techniques to individual methods
> makes my classes much better:
>
> - local variables are fine;
> - global variables are not;
> - global constants are okay;
> - mutating the state of the instance should be kept to the absolute minimum;
> - and only done from a few mutator methods, not from arbitrary methods;
> - attributes should *usually* be passed as arguments to methods, not
>   treated as part of the environment.

I would agree with you. "Functional programming" is not an alternative
to imperative or object-oriented programming; it's a style, a set of
rules, that makes small functions infinitely easier to reason about,
test, and debug. My points about practicality basically boil down to
the same thing as you were saying: 100% pure is infinitely hard.

ChrisA


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Steve D'Aprano
On Wed, 28 Sep 2016 07:18 pm, Chris Angelico wrote:

> On Wed, Sep 28, 2016 at 6:52 PM, Gregory Ewing wrote:
>> Chris Angelico wrote:
>>>
>>>
>>>  wrote:
>>>
 * No side effects (new variable bindings may be created, but
  existing ones cannot be changed; no mutable data structures).
>>>
>>>
>>> If that's adhered to 100%, the language is useless for any operation
>>> that cannot be handled as a "result at end of calculation" function.

They're useless for that too, because you, the caller, cannot see the result
of the calculation. Printing is a side-effect.

Technically you can view the program state through a debugger, but that's
exposing implementation details and besides, it's not very practical.

But really, this is nit-picking. It is a little like arguing that no
programming languages are actually Turing complete, since no language has
an infinite amount of memory available. Of course a programming language
needs to have *some* way of doing IO, be that print, writing to files, or
flicking the lights on and off in Morse Code, but the challenge is to wall
that off in such a way that it doesn't affect the desirable (to functional
programmers at least) "no side-effects" property.

However, there's no universally agreed upon definition of "side-effect".
Some people might disagree that printing to stdout is a side-effect in the
sense that matters, namely changes to *program* state. Changes to the rest
of the universe are fine.

Some will say that local state (local variables within a function) is okay
so long as that's just an implementation detail. Others insist that even
functions' internal implementation must be pure. After all, a function is
itself a program. If we believe that programs are easier to reason about if
they have no global mutable state (no global variables), then wrapping that
program in "define function"/"end function" tags shouldn't change that.

Others will point out that "no side-effects" is a leaky abstraction, and
like all such abstractions, it leaks. Your functional program will use
memory, the CPU will get warmer, etc.

http://www.johndcook.com/blog/2010/05/18/pure-functions-have-side-effects/

John Cook suggests that functional programming gets harder and harder to do
right (both for the compiler and for the programmer) as you asymptotically
approach 100% pure, and suggests the heuristic that (say) 85% pure is the
sweet spot: functional in the small, object oriented in the large.

http://www.johndcook.com/blog/2009/03/23/functional-in-the-small-oo-in-the-large/


I agree. I find that applying functional techniques to individual methods
makes my classes much better:

- local variables are fine;
- global variables are not;
- global constants are okay;
- mutating the state of the instance should be kept to the absolute minimum;
- and only done from a few mutator methods, not from arbitrary methods;
- attributes should *usually* be passed as arguments to methods, not 
  treated as part of the environment.


That last one is probably the most controversial. Let me explain.


Suppose I have a class with state and at least two methods:


class Robot:
    def __init__(self):
        self.position = [0, 0]  # a list, so move() can mutate it in place
    def move(self, x, y):
        self.position[0] += x
        self.position[1] += y
    def report(self):
        # Robot, where are you?
        print("I'm at %r" % self.position)


So far so good. But what if we want to make the report() method a bit more
fancy? Maybe we want to spruce it up a bit, and allow subclasses to
customize how they actually report (print, show a dialog box, write to a
log file, whatever you like):

    def report(self):
        self.format()
        self.display_report()

    def display_report(self):
        print("I'm at %r" % self.formatted_position)

    def format(self):
        self.formatted_position = (
            "x coordinate %f" % self.position[0],
            "y coordinate %f" % self.position[1]
            )

Now you have this weird dependency where format() communicates with report()
by side-effect, and you cannot test format() or display_report() except by
modifying the position of the Robot.

I see so much OO code written like this, and it is bad, evil, wrong, it
sucks and I don't like it! Just a little bit of functional technique makes
all the difference:

    def report(self):
        self.display_report(self.format(self.position))

    def display_report(self, report):
        print("I'm at %r" % report)

    def format(self, position):
        return ("x coordinate %f" % position[0],
                "y coordinate %f" % position[1])

It's easier to understand them (less mystery state in the environment),
easier to test, and easier to convince yourself that the code is correct.
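One payoff of the functional style, as a quick self-contained sketch of the
class above: format() can be exercised without ever moving a robot:

```python
class Robot:
    def __init__(self):
        self.position = [0, 0]

    def move(self, x, y):
        self.position[0] += x
        self.position[1] += y

    def report(self):
        self.display_report(self.format(self.position))

    def display_report(self, report):
        print("I'm at %r" % report)

    def format(self, position):
        return ("x coordinate %f" % position[0],
                "y coordinate %f" % position[1])

r = Robot()
# format() depends only on its argument, so testing it needs no setup:
print(r.format([3, 4]))  # ('x coordinate 3.000000', 'y coordinate 4.000000')
```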



[...]
> If monads allow mutations or side effects, they are by definition not
> pure functions, and violate your bullet point. Languages like Haskell
> have them not because they are an intrinsic part of functio

Re: Syncing up iterators with gaps

2016-09-28 Thread Steve D'Aprano
On Thu, 29 Sep 2016 11:38 am, Tim Chase wrote:

> This seems to discard the data's origin (data1/data2/data3) which is
> how I determine whether to use process_a(), process_b(), or
> process_c() in my original example where N iterators were returned,
> one for each input iterator.

So add another stage to your generator pipeline, one which adds a unique ID
to the output of each generator so you know where it came from.

Hint: the ID doesn't have to be an ID *number*. It can be the process_a,
process_b, process_c ... function itself. Then instead of doing:

for key, (id, stuff) in groupby(merge(data1, data2, data3), keyfunc):
for x in stuff:
if id == 1:
process_a(key, *x)
elif id == 2:
process_b(key, *x)
elif ...



or even:

DISPATCH = {1: process_a, 2: process_b, ...}

for key, (id, stuff) in groupby(merge(data1, data2, data3), keyfunc):
for x in stuff:
DISPATCH[id](key, *x)


you can do:


for key, (process, stuff) in groupby(merge(data1, data2, data3), keyfunc):
for x in stuff:
process(key, *x)
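A runnable sketch of that tagging idea, using heapq.merge for the merge stage
(process_a/process_b here are illustrative stand-ins for the real handlers,
and they log instead of doing real work):

```python
from heapq import merge
from itertools import groupby

log = []
def process_a(key, value): log.append(("a", key, value))
def process_b(key, value): log.append(("b", key, value))

def tag(data, process):
    # Attach the handler function itself to each record, so each row's
    # origin survives the merge.
    return ((key, process, value) for key, value in data)

data1 = [(1, "one A"), (1, "one B"), (2, "two"), (5, "five")]
data2 = [(1, "uno"), (2, "dos"), (4, "cuatro")]

merged = merge(tag(data1, process_a), tag(data2, process_b),
               key=lambda rec: rec[0])
for key, group in groupby(merged, key=lambda rec: rec[0]):
    for _, process, value in group:
        process(key, value)

print(log)
```

Because heapq.merge is stable, rows with equal keys come out in iterator
order: data1's rows before data2's for each key.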





-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.



Re: Nested for loops and print statements

2016-09-28 Thread Larry Hudson via Python-list

On 09/27/2016 09:20 PM, Steven D'Aprano wrote:

On Wednesday 28 September 2016 12:48, Larry Hudson wrote:


As they came through in the newsgroup, BOTH run correctly, because both
versions had leading spaces only.
(I did a careful copy/paste to check this.)


Copying and pasting from the news client may not be sufficient to show what
whitespace is actually used...

Exactly.  That's why I pointed out the occasional (mis)handling of tabs in
newsreaders, and said these examples came through in MY reader (Thunderbird)
as spaces only, so both examples ran correctly.


--
 -=- Larry -=-


Re: Syncing up iterators with gaps

2016-09-28 Thread Tim Chase
On 2016-09-29 10:20, Steve D'Aprano wrote:
> On Thu, 29 Sep 2016 05:10 am, Tim Chase wrote:
> >   data1 = [ # key, data1
> > (1, "one A"),
> > (1, "one B"),
> > (2, "two"),
> > (5, "five"),
> > ]
> 
> So data1 has keys 1, 1, 2, 5.
> Likewise data2 has keys 1, 2, 3, 3, 3, 4 and data3 has keys 2, 4, 5.

Correct

> (data3 also has *two* values, not one, which is an additional
> complication.)

As commented towards the end, the source is a set of CSV files, so each
row is a list where a particular (identifiable) item is the key.
Assume that one can use something like get_key(row) to return the key,
which in the above could be implemented as

  get_key = lambda row: row[0]

and for my csv.DictReader data, would be something like

  get_key = lambda row: row["Account Number"]

> > And I'd like to do something like
> > 
> >   for common_key, d1, d2, d3 in magic_happens_here(data1, data2,
> > data3):
> 
> What's common_key? In particular, given that data1, data2 and data3
> have the first key each of 1, 1 and 2 respectively, how do you get:
> 
> > So in the above data, the outer FOR loop would
> > happen 5 times with common_key being [1, 2, 3, 4, 5]
> 
> I'm confused. Is common_key a *constant* [1, 2, 3, 4, 5] or are you
> saying that it iterates over 1, 2, 3, 4, 5?

Your later interpretation is correct: it iterates over each unique key
once, in order.  So if you

  data1.append((17, "seventeen"))

the outer loop would iterate over [1,2,3,4,5,17]

(so not constant, to hopefully answer that part of your question)

The actual keys are account-numbers, so they're ascii-sorted strings
of the form "1234567-8901", ascending in order through the files.
But for equality/less-than/greater-than comparisons, they work
effectively as integers in my example.

> If the latter, it sounds like you want something like a cross between
> itertools.groupby and the "merge" stage of mergesort.

That's a pretty good description at some level.  I looked into
groupby() but was having trouble getting it to do what I wanted.

> Note that I have modified data3 so instead of three columns, (key
> value value), it has two (key value) and value is a 2-tuple.

I'm cool with that.  Since they're CSV rows, you can imagine the
source data then as a generator something like

  data1 = ( (get_key(row), row) for row in my_csv_iter1 )

to get the data to look like your example input data.

> So first you want an iterator that does an N-way merge:
> 
> merged = [(1, "one A"), (1, "one B"), (1, "uno"), 
>   (2, "two"), (2, "dos"), (2, ("ii", "extra alpha")), 
>   (3, "tres x"), (3, "tres y"), (3, "tres z"),
>   (4, "cuatro"), (4, ("iv", "extra beta")),
>   (5, "five"), (5, ("v", "extra gamma")),
>   ]

This seems to discard the data's origin (data1/data2/data3) which is
how I determine whether to use process_a(), process_b(), or
process_c() in my original example where N iterators were returned,
one for each input iterator.  So the desired output would be akin to
(converting everything to tuples as you suggest below)

  [
   (1, [("one A",), ("one B",)], [("uno",)], []),
   (2, [("two",)], [("dos",)], [("ii", "extra alpha")]),
   (3, [], [("tres x",), ("tres y",), ("tres z",)], []),
   (4, [], [("cuatro",)], [("iv", "extra beta")]),
   (5, [("five",)], [], [("v", "extra gamma")]),
   ]

only instead of N list()s, having N generators that are smart enough
to yield the corresponding data.

> You might find it easier to have *all* the iterators yield (key,
> tuple) pairs, where data1 and data2 yield a 1-tuple and data3
> yields a 2-tuple.

Right.  Sorry my example obscured that shoulda-obviously-been-used
simplification.

-tkc






Re: How to make a foreign function run as fast as possible in Windows?

2016-09-28 Thread jfong
Paul  Moore at 2016/9/28 11:31:50PM wrote:
> Taking a step back from the more detailed answers, would I be right to assume 
> that you want to call this external function multiple times from Python, and 
> each call could take days to run? Or is it that you have lots of calls to 
> make and each one takes a small amount of time but the total time for all the 
> calls is in days?
> 
> And furthermore, can I assume that the external function is *not* written to 
> take advantage of multiple CPUs, so that if you call the function once, it's 
> only using one of the CPUs you have? Is it fully utilising a single CPU, or 
> is it actually not CPU-bound for a single call?
> 
> To give specific suggestions, we really need to know a bit more about your 
> issue.

Forgive me, I didn't notice these details would influence the answer :-)

Python will call it once. The central part of this function was written in
assembly for performance. During its execution, this part might be called
thousands of millions of times. The function was written to run on a single
CPU, but the problem it wants to solve can easily be distributed across
multiple CPUs.

--Jach


Re: How to make a foreign function run as fast as possible in Windows?

2016-09-28 Thread jfong
eryk sun at 2016/9/28 1:05:32PM wrote:
> In Unix, Python's os module may have sched_setaffinity() to set the
> CPU affinity for all threads in a given process.
> 
> In Windows, you can use ctypes to call SetProcessAffinityMask,
> SetThreadAffinityMask, or SetThreadIdealProcessor (a hint for the
> scheduler). On a NUMA system you can call GetNumaNodeProcessorMask(Ex)
> to get the mask of CPUs that are on a given NUMA node. The cmd shell's
> "start" command supports "/numa" and "/affinity" options, which can be
> combined.

Seems I'll have to dive into Windows to understand its usage :-)
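For the archives, a hedged sketch of the SetProcessAffinityMask route eryk sun
mentions. The os.sched_setaffinity fallback and the helper's name are my
additions; error handling is kept minimal, and masks wider than 64 bits are
not handled:

```python
import ctypes
import os
import sys

def set_affinity_mask(mask):
    """Pin the current process to the CPUs set in the bitmask `mask`."""
    if sys.platform == "win32":
        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        handle = kernel32.GetCurrentProcess()  # pseudo-handle, no close needed
        if not kernel32.SetProcessAffinityMask(handle, mask):
            raise ctypes.WinError(ctypes.get_last_error())
    elif hasattr(os, "sched_setaffinity"):
        # Unix equivalent: translate the bitmask into a set of CPU indices.
        os.sched_setaffinity(0, {i for i in range(64) if mask >> i & 1})
    else:
        raise NotImplementedError("no affinity API on this platform")
```

For example, set_affinity_mask(0b0011) would restrict the process to CPUs 0
and 1 on either platform.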


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Steve D'Aprano
On Thu, 29 Sep 2016 03:46 am, Chris Angelico wrote:

> That's exactly how a function works in an imperative language, and
> it's exactly what the FP advocates detest: opaque state. So is the
> difference between "function" and "monad" in Haskell the same as "pure
> function" and "function" in other contexts?

Honestly Chris, what's hard to understand about this? Monads are burritos.

http://blog.plover.com/prog/burritos.html



-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.



Re: Syncing up iterators with gaps

2016-09-28 Thread Steve D'Aprano
On Thu, 29 Sep 2016 05:10 am, Tim Chase wrote:

> I've got several iterators sharing a common key in the same order and
> would like to iterate over them in parallel, operating on all items
> with the same key.  I've simplified the data a bit here, but it would
> be something like
> 
>   data1 = [ # key, data1
> (1, "one A"),
> (1, "one B"),
> (2, "two"),
> (5, "five"),
> ]

So data1 has keys 1, 1, 2, 5.

Likewise data2 has keys 1, 2, 3, 3, 3, 4 and data3 has keys 2, 4, 5.

(data3 also has *two* values, not one, which is an additional complication.)

> And I'd like to do something like
> 
>   for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):

What's common_key? In particular, given that data1, data2 and data3 have the
first key each of 1, 1 and 2 respectively, how do you get:

> So in the above data, the outer FOR loop would
> happen 5 times with common_key being [1, 2, 3, 4, 5]

I'm confused. Is common_key a *constant* [1, 2, 3, 4, 5] or are you saying
that it iterates over 1, 2, 3, 4, 5?


If the latter, it sounds like you want something like a cross between
itertools.groupby and the "merge" stage of mergesort. You have at least
three sorted(?) iterators, representing the CSV files, let's say they
iterate over

data1 = [(1, "one A"), (1, "one B"), (2, "two"), (5, "five")]

data2 = [(1, "uno"), (2, "dos"), (3, "tres x"), (3, "tres y"), 
 (3, "tres z"), (4, "cuatro")]

data3 = [ (2, ("ii", "extra alpha")), (4, ("iv", "extra beta")),
  (5, ("v", "extra gamma"))]


Note that I have modified data3 so instead of three columns, (key value
value), it has two (key value) and value is a 2-tuple.

So first you want an iterator that does an N-way merge:

merged = [(1, "one A"), (1, "one B"), (1, "uno"), 
  (2, "two"), (2, "dos"), (2, ("ii", "extra alpha")), 
  (3, "tres x"), (3, "tres y"), (3, "tres z"),
  (4, "cuatro"), (4, ("iv", "extra beta")),
  (5, "five"), (5, ("v", "extra gamma")),
  ]

and then you can simply call itertools.groupby to group by the common keys:

1: ["one A", "one B", "uno"]
2: ["two", "dos", ("ii", "extra alpha")]
3: ["tres x", "tres y", "tres z"]
4: ["cuatro", ("iv", "extra beta")]
5: ["five", ("v", "extra gamma")]

and then you can extract the separate columns from each value.
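Both stages are in the standard library already: heapq.merge does the N-way
merge and itertools.groupby does the grouping. A sketch with the sample data
above (the key= argument matters, because comparing whole rows would try to
compare strings with tuples on equal keys):

```python
from heapq import merge
from itertools import groupby
from operator import itemgetter

data1 = [(1, "one A"), (1, "one B"), (2, "two"), (5, "five")]
data2 = [(1, "uno"), (2, "dos"), (3, "tres x"), (3, "tres y"),
         (3, "tres z"), (4, "cuatro")]
data3 = [(2, ("ii", "extra alpha")), (4, ("iv", "extra beta")),
         (5, ("v", "extra gamma"))]

# Compare rows by key only; merge is lazy and stable.
merged = merge(data1, data2, data3, key=itemgetter(0))
grouped = {key: [value for _, value in group]
           for key, group in groupby(merged, key=itemgetter(0))}
print(grouped[3])  # ['tres x', 'tres y', 'tres z']
```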

You might find it easier to have *all* the iterators yield (key, tuple)
pairs, where data1 and data2 yield a 1-tuple and data3 yields a 2-tuple.


If you look on ActiveState, I'm pretty sure you will find a recipe from
Raymond Hettinger for a merge sort or heap sort or something along those
lines, which you can probably adapt for an arbitrary number of inputs.




-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.



Re: how to automate java application in window using python

2016-09-28 Thread Lawrence D’Oliveiro
On Thursday, September 29, 2016 at 11:54:46 AM UTC+13, Emile van Sebille wrote:
> Which worked for me! You should try it. Sloppy programming has always 
> been unreliable.

So it is clear you don’t have an answer to the OP’s question after all. Just 
some vague, meaningless generalities.


Re: Can this be easily done in Python?

2016-09-28 Thread Mario R. Osorio
I'm not sure I understand your question, but I *think* you are talking about
dynamically executing chunks of code. If that is the case, there are a couple
of ways to do it. These are some links that might interest you:

http://stackoverflow.com/questions/3974554/python-how-to-generate-the-code-on-the-fly

http://stackoverflow.com/questions/32073600/python-how-to-create-a-class-name-on-the-fly

http://lucumr.pocoo.org/2011/2/1/exec-in-python/


On Tuesday, September 27, 2016 at 3:58:59 PM UTC-4, TUA wrote:
> Is the following possible in Python?
> 
> Given how the line below works
> 
> TransactionTerms = 'TransactionTerms'
> 
> 
> have something like
> 
> TransactionTerms = 
> 
> that sets the variable TransactionTerms to its own name as string 
> representation without having to specify it explicitly as in the line 
> above
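There is no way for an assignment target to learn its own name at binding
time, but a common workaround is to keep the names in one place and generate
the bindings from them. A sketch (the field names are illustrative, and
whether writing into globals() is wise is another question):

```python
# Each name in this list becomes a module-level variable bound to the
# name itself, so the name is written out exactly once:
FIELD_NAMES = ["TransactionTerms", "SettlementDate"]

globals().update({name: name for name in FIELD_NAMES})

print(TransactionTerms)  # prints: TransactionTerms
```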



Re: how to automate java application in window using python

2016-09-28 Thread Emile van Sebille

On 09/28/2016 02:52 PM, Lawrence D’Oliveiro wrote:

On Thursday, September 29, 2016 at 4:57:10 AM UTC+13, Emile van Sebille wrote:

My point was that it is possible to automate windows reliably as long as the
programming is robust.


Sounds like circular reasoning.



Which worked for me! You should try it. Sloppy programming has always 
been unreliable.


Emile



Re: how to automate java application in window using python

2016-09-28 Thread Lawrence D’Oliveiro
On Thursday, September 29, 2016 at 4:57:10 AM UTC+13, Emile van Sebille wrote:
> My point was that it is possible to automate windows reliably as long as the
> programming is robust.

Sounds like circular reasoning.


Re: Using the Windows "embedded" distribution of Python

2016-09-28 Thread eryk sun
On Wed, Sep 28, 2016 at 2:35 PM, Paul Moore wrote:
> So I thought I'd try SetDllDirectory. That works for python36.dll, but if I 
> load
> python3.dll, it can't find Py_Main - the export shows as "(forwarded to
> python36.Py_Main)", maybe the forwarding doesn't handle SetDllDirectory?

It works for me. Are you calling SetDllDirectory with the
fully-qualified path? If not it's relative to the working directory,
which isn't necessarily (generally is not) the application directory,
in which case delay-loading python36.dll will fail. You can create the
fully-qualified path from the application path, i.e.
GetModuleFileNameW(NULL, ...).

That said, I prefer using LoadLibraryExW(absolute_path_to_python3,
NULL, LOAD_WITH_ALTERED_SEARCH_PATH). The alternate search substitutes
the DLL's directory for the application directory when loading
dependent DLLs, which allows loading python36.dll and vcruntime140.dll
without having to modify the DLL search path of the entire process.
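A ctypes sketch of that LoadLibraryExW call (Windows only; the flag value is
the one documented in the Windows SDK headers, and the helper name and path
argument are illustrative):

```python
import ctypes

LOAD_WITH_ALTERED_SEARCH_PATH = 0x00000008  # from winbase.h

def load_with_altered_search_path(dll_path):
    """Load dll_path so its *own* directory is searched for dependent DLLs.

    dll_path must be fully qualified, e.g. r"C:\\embed\\python3.dll".
    """
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.LoadLibraryExW.restype = ctypes.c_void_p
    kernel32.LoadLibraryExW.argtypes = (ctypes.c_wchar_p, ctypes.c_void_p,
                                        ctypes.c_uint32)
    handle = kernel32.LoadLibraryExW(dll_path, None,
                                     LOAD_WITH_ALTERED_SEARCH_PATH)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    return handle
```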


Re: Syncing up iterators with gaps

2016-09-28 Thread Peter Otten
Tim Chase wrote:

> I've got several iterators sharing a common key in the same order and
> would like to iterate over them in parallel, operating on all items
> with the same key.  I've simplified the data a bit here, but it would
> be something like
> 
>   data1 = [ # key, data1
> (1, "one A"),
> (1, "one B"),
> (2, "two"),
> (5, "five"),
> ]
> 
>   data2 = [ # key, data1
> (1, "uno"),
> (2, "dos"),
> (3, "tres x"),
> (3, "tres y"),
> (3, "tres z"),
> (4, "cuatro"),
> ]
> 
>   data3 = [ # key, data1, data2
> (2, "ii", "extra alpha"),
> (4, "iv", "extra beta"),
> (5, "v", "extra gamma"),
> ]
> 
> And I'd like to do something like
> 
>   for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):
> for row in d1:
>   process_a(common_key, row)
> for thing in d2:
>   process_b(common_key, row)
> for thing in d3:
>   process_c(common_key, row)
> 
> which would yield the common_key, along with enough of each of those
> iterators (note that gaps can happen, but the sortable order should
> remain the same).  So in the above data, the outer FOR loop would
> happen 5 times with common_key being [1, 2, 3, 4, 5], and each of
> [d1, d2, d3] being an iterator that deals with just that data.
> 
> My original method was hauling everything into memory and making
> multiple passes filtering on the data. However, the actual sources
> are CSV-files, some of which are hundreds of megs in size, and my
> system was taking a bit of a hit.  So I was hoping for a way to do
> this with each iterator making only one complete pass through each
> source (since they're sorted by common key).
> 
> It's somewhat similar to the *nix "join" command, only dealing with
> N files.
> 
> Thanks for any hints.
> 
> -tkc

A bit messy; you might try replacing the groups list with a dict:

$ cat merge.py  
from itertools import groupby
from operator import itemgetter

first = itemgetter(0)
rest = itemgetter(slice(1, None))


def magic_happens_here(*columns):
grouped = [groupby(column, key=first) for column in columns]
missing = object()

def getnext(g):
nonlocal n
try:
k, g = next(g)
except StopIteration:
n -= 1
return (missing, None)
return k, g

n = len(grouped)
groups = [getnext(g) for g in grouped]
while n:
minkey = min(k for k, g in groups if k is not missing)
yield (minkey,) + tuple(
map(rest, g) if k == minkey else ()
for k, g in groups)
for i, (k, g) in enumerate(groups):
if k == minkey:
groups[i] = getnext(grouped[i])


if __name__ == "__main__":
data1 = [  # key, data1
(1, "one A"),
(1, "one B"),
(2, "two"),
(5, "five"),
]

data2 = [  # key, data1
(1, "uno"),
(2, "dos"),
(3, "tres x"),
(3, "tres y"),
(3, "tres z"),
(4, "cuatro"),
]

data3 = [  # key, data1, data2
(2, "ii", "extra alpha"),
(4, "iv", "extra beta"),
(5, "v", "extra gamma"),
]

for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):
print(common_key)
for d in d1, d2, d3:
print("", list(d))
$ python3 merge.py 
1
 [('one A',), ('one B',)]
 [('uno',)]
 []
2
 [('two',)]
 [('dos',)]
 [('ii', 'extra alpha')]
3
 []
 [('tres x',), ('tres y',), ('tres z',)]
 []
4
 []
 [('cuatro',)]
 [('iv', 'extra beta')]
5
 [('five',)]
 []
 [('v', 'extra gamma')]




Re: Syncing up iterators with gaps

2016-09-28 Thread Chris Kaynor
Here is a slight variation of Chris A's code that does not require
more than a single look-ahead per generator. It may be better
depending on the exact data passed in.

Chris A's version will store all of the items for each output that
have a matching key, which, depending on the expected data, could use
quite a bit of memory. This version yields a list of generators, which
then allows for never having more than a single lookahead per list.
The generators returned must be consumed immediately or they will be
emptied - I put in a safety loop that consumes them before continuing
processing.

My version is likely better if your processing does not require
storing (most of) the items and you expect there to be a large number
of common keys in each iterator. If you expect only a couple of items
per shared key per list, Chris A's version will probably perform
better for slightly more memory usage, as well as being somewhat safer
and simpler.

def magic_happens_here(*iters):
def gen(j):
while nexts[j][0] == common_key:
yield nexts[j]
nexts[j] = next(iters[j], (None,))
iters = [iter(it) for it in iters]
nexts = [next(it, (None,)) for it in iters]
while "moar stuff":
try: common_key = min(row[0] for row in nexts if row[0])
except ValueError: break # No moar stuff
outputs = [common_key]
for i in range(len(nexts)): # code smell, sorry
outputs.append(gen(i))
yield outputs
# The following three lines confirm that the generators provided
#  were consumed. This allows not exhausting the yielded generators.
#  If this is not included, and the iterator is not consumed, it can
#  result in an infinite loop.
for output in outputs[1:]:
for item in output:
pass
Chris


On Wed, Sep 28, 2016 at 12:48 PM, Chris Angelico  wrote:
> On Thu, Sep 29, 2016 at 5:10 AM, Tim Chase wrote:
>> And I'd like to do something like
>>
>>   for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):
>> for row in d1:
>>   process_a(common_key, row)
>> for thing in d2:
>>   process_b(common_key, row)
>> for thing in d3:
>>   process_c(common_key, row)
>
> Assuming that the keys are totally ordered and the data sets are
> sorted, this should work:
>
> def magic_happens_here(*iters):
>     iters = [iter(it) for it in iters]
>     nexts = [next(it, (None,)) for it in iters]
>     while "moar stuff":
>         try: common_key = min(row[0] for row in nexts if row[0])
>         except ValueError: break # No moar stuff
>         outputs = [common_key]
>         for i in range(len(nexts)): # code smell, sorry
>             output = []
>             while nexts[i][0] == common_key:
>                 output.append(nexts[i])
>                 nexts[i] = next(iters[i], (None,))
>             outputs.append(output)
>         yield outputs
>
> Basically, it takes the lowest available key, then takes everything of
> that key and yields it as a unit.
>
> Code not tested. Use at your own risk.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syncing up iterators with gaps

2016-09-28 Thread Chris Angelico
On Thu, Sep 29, 2016 at 5:10 AM, Tim Chase
 wrote:
> And I'd like to do something like
>
>   for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):
> for row in d1:
>   process_a(common_key, row)
> for thing in d2:
>   process_b(common_key, row)
> for thing in d3:
>   process_c(common_key, row)

Assuming that the keys are totally ordered and the data sets are
sorted, this should work:

def magic_happens_here(*iters):
    iters = [iter(it) for it in iters]
    nexts = [next(it, (None,)) for it in iters]
    while "moar stuff":
        try: common_key = min(row[0] for row in nexts if row[0])
        except ValueError: break # No moar stuff
        outputs = [common_key]
        for i in range(len(nexts)): # code smell, sorry
            output = []
            while nexts[i][0] == common_key:
                output.append(nexts[i])
                nexts[i] = next(iters[i], (None,))
            outputs.append(output)
        yield outputs

Basically, it takes the lowest available key, then takes everything of
that key and yields it as a unit.

Code not tested. Use at your own risk.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syncing up iterators with gaps

2016-09-28 Thread Terry Reedy

On 9/28/2016 3:10 PM, Tim Chase wrote:

I've got several iterators sharing a common key in the same order and
would like to iterate over them in parallel, operating on all items
with the same key.  I've simplified the data a bit here, but it would
be something like

  data1 = [ # key, data1
(1, "one A"),
(1, "one B"),
(2, "two"),
(5, "five"),
]

  data2 = [ # key, data1
(1, "uno"),
(2, "dos"),
(3, "tres x"),
(3, "tres y"),
(3, "tres z"),
(4, "cuatro"),
]

  data3 = [ # key, data1, data2
(2, "ii", "extra alpha"),
(4, "iv", "extra beta"),
(5, "v", "extra gamma"),
]

And I'd like to do something like

  for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):
for row in d1:
  process_a(common_key, row)
for thing in d2:
  process_b(common_key, row)
for thing in d3:
  process_c(common_key, row)

which would yield the common_key, along with enough of each of those
iterators (note that gaps can happen, but the sortable order should
remain the same).  So in the above data, the outer FOR loop would
happen 5 times with common_key being [1, 2, 3, 4, 5], and each of
[d1, d2, d3] being an iterator that deals with just that data.


You just need d1, d2, d3 to be iterables, such as a list.  Write a magic 
generator that opens the three files and reads one line of each (with 
next()).  Then, in a while-True loop, find the minimum key and make 3 lists (up 
to 2 possibly empty) of the items in each file with that key.  This will 
require up to 3 inner loops.  The read-ahead makes this slightly messy. 
If any list is not empty, yield the key and 3 lists.  Otherwise break 
the outer loop.



My original method was hauling everything into memory and making
multiple passes filtering on the data. However, the actual sources
are CSV-files, some of which are hundreds of megs in size, and my
system was taking a bit of a hit.  So I was hoping for a way to do
this with each iterator making only one complete pass through each
source (since they're sorted by common key).

It's somewhat similar to the *nix "join" command, only dealing with
N files.


It is also somewhat similar to a 3-way mergesort.
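
[Editorial note: the mergesort analogy suggests a standard-library sketch that
was not posted in the thread — `heapq.merge` for the k-way merge plus
`itertools.groupby` for the per-key grouping. The name `merged_groups` is
made up; it buckets each key's rows into lists, so it trades a little memory
per key group for simplicity, while still reading each input only once.]

```python
import heapq
import itertools

def merged_groups(*iterables):
    """Group rows from several key-sorted iterables by their common key.

    Yields (key, bucket_0, bucket_1, ...) where bucket_i holds the rows
    from the i-th input carrying that key (possibly an empty list).
    """
    def tagged(i, rows):
        # Tag each row as (key, source-index, row); ties between sources
        # break on the index, so heapq.merge never compares raw rows.
        for row in rows:
            yield (row[0], i, row)

    merged = heapq.merge(*(tagged(i, it) for i, it in enumerate(iterables)))
    for key, group in itertools.groupby(merged, key=lambda t: t[0]):
        buckets = [[] for _ in iterables]
        for _, i, row in group:
            buckets[i].append(row)
        yield (key, *buckets)

# Tim's sample data from this thread:
data1 = [(1, "one A"), (1, "one B"), (2, "two"), (5, "five")]
data2 = [(1, "uno"), (2, "dos"), (3, "tres x"),
         (3, "tres y"), (3, "tres z"), (4, "cuatro")]
data3 = [(2, "ii", "extra alpha"), (4, "iv", "extra beta"),
         (5, "v", "extra gamma")]

for key, d1, d2, d3 in merged_groups(data1, data2, data3):
    print(key, len(d1), len(d2), len(d3))
# 1 2 1 0
# 2 1 1 1
# 3 0 3 0
# 4 0 1 1
# 5 1 0 1
```

The gaps fall out naturally as empty buckets, and the outer loop runs once
per key, 1 through 5, exactly as the original question asked.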

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Syncing up iterators with gaps

2016-09-28 Thread Tim Chase
I've got several iterators sharing a common key in the same order and
would like to iterate over them in parallel, operating on all items
with the same key.  I've simplified the data a bit here, but it would
be something like

  data1 = [ # key, data1
(1, "one A"),
(1, "one B"),
(2, "two"),
(5, "five"),
]

  data2 = [ # key, data1
(1, "uno"),
(2, "dos"),
(3, "tres x"),
(3, "tres y"),
(3, "tres z"),
(4, "cuatro"),
]

  data3 = [ # key, data1, data2
(2, "ii", "extra alpha"),
(4, "iv", "extra beta"),
(5, "v", "extra gamma"),
]

And I'd like to do something like

  for common_key, d1, d2, d3 in magic_happens_here(data1, data2, data3):
for row in d1:
  process_a(common_key, row)
for thing in d2:
  process_b(common_key, row)
for thing in d3:
  process_c(common_key, row)

which would yield the common_key, along with enough of each of those
iterators (note that gaps can happen, but the sortable order should
remain the same).  So in the above data, the outer FOR loop would
happen 5 times with common_key being [1, 2, 3, 4, 5], and each of
[d1, d2, d3] being an iterator that deals with just that data.

My original method was hauling everything into memory and making
multiple passes filtering on the data. However, the actual sources
are CSV-files, some of which are hundreds of megs in size, and my
system was taking a bit of a hit.  So I was hoping for a way to do
this with each iterator making only one complete pass through each
source (since they're sorted by common key).

It's somewhat similar to the *nix "join" command, only dealing with
N files.

Thanks for any hints.

-tkc



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Chris Angelico
On Thu, Sep 29, 2016 at 2:43 AM, Random832  wrote:
> On Wed, Sep 28, 2016, at 11:41, Paul Moore wrote:
>> What "allows side effects" in languages like Haskell is the fact that the
>> runtime behaviour of the language is not defined as "calculating the
>> value of the main function" but rather as "making the process that the
>> main functon defines as an abstract monad actually happen".
>
> Well, from another point of view, the output (that is, the set of
> changes to files, among other things, that is defined by the
> monad-thingy) is *part of* the value of the main function. And the state
> of the universe prior to running it is part of the input is part of the
> arguments.

That's exactly how a function works in an imperative language, and
it's exactly what the FP advocates detest: opaque state. So is the
difference between "function" and "monad" in Haskell the same as "pure
function" and "function" in other contexts?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Chris Angelico
On Thu, Sep 29, 2016 at 2:33 AM, Steve D'Aprano
 wrote:
 Procedural programming under another name...
>>>
>>> Only in the sense that procedural programming is unstructured programming
>>> under another name. What is a procedure call but a disguised GOSUB, and
>>> what is GOSUB but a pair of GOTOs?
>>
>> Continuation-passing style is only GOTOs. Instead of returning to the
>> caller, procedures pass control to the continuation, together with the
>> values that the continuation is expecting from the procedure.
>>
>> I guess you can think of it as a way to disguise a GOSUB.
>
>
> Really, if you think about it, both functional and procedural programming
> are exactly the same as programming in assembly language. Returning a value
> from a function pushes that value onto the function call stack, which is
> really just a disguised assembly MOV command.

http://xkcd.com/435/

Also relevant to this conversation:

https://xkcd.com/1270/

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to call this method from main method

2016-09-28 Thread Peter Pearson
On Tue, 27 Sep 2016 23:44:11 -0700 (PDT), prasanthk...@gmail.com wrote:
[snip]
>
> if __name__ == '__main__':
> GenAccessToken("This_is_a_Test_QED_MAC_Key_Which_Needs_to_be_at_Least_32_Bytes_Long",
>"default", "default", 6, "g,m,a,s,c,p,d")
>
> When I call the above method from the main block it does not appear
> to return the value, but when I use print it shows the value. Is
> there anything wrong with returning the value from a method?

How do you know it's not returning a value?  You don't save the return
value anywhere.
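
[Editorial note: a minimal sketch of the distinction — the function and its
return value here are hypothetical stand-ins, not the OP's actual code.]

```python
def gen_access_token():
    # Stand-in for the OP's GenAccessToken; real arguments omitted.
    return "token-123"

gen_access_token()           # return value is computed, then discarded
token = gen_access_token()   # return value is bound to a name...
print(token)                 # ...so it can be used later: prints token-123
```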

-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Can this be easily done in Python?

2016-09-28 Thread Peter Pearson
On Tue, 27 Sep 2016 12:58:40 -0700 (PDT), TUA  wrote:
> Is the following possible in Python?
>
> Given how the line below works
>
> TransactionTerms = 'TransactionTerms'
>
>
> have something like
>
> TransactionTerms = 
>
> that sets the variable TransactionTerms to its own name as string
> representation without having to specify it explicitly as in the line
> above


You say "variable", but the use of that word often signals a 
misunderstanding about Python.

Python has "objects".  An object often has a name, and in fact often
has several names.  Attempting to associate an object with "its name"
looks like a Quixotic quest to me.

-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Random832
On Wed, Sep 28, 2016, at 11:41, Paul Moore wrote:
> What "allows side effects" in languages like Haskell is the fact that the
> runtime behaviour of the language is not defined as "calculating the
> value of the main function" but rather as "making the process that the
> main functon defines as an abstract monad actually happen".

Well, from another point of view, the output (that is, the set of
changes to files, among other things, that is defined by the
monad-thingy) is *part of* the value of the main function. And the state
of the universe prior to running it is part of the input is part of the
arguments.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Steve D'Aprano
On Wed, 28 Sep 2016 11:05 pm, Jussi Piitulainen wrote:

> Steve D'Aprano writes:
> 
>> On Wed, 28 Sep 2016 08:03 pm, Lawrence D’Oliveiro wrote:
>>
>>> On Wednesday, September 28, 2016 at 9:53:05 PM UTC+13, Gregory Ewing
>>> wrote:
 Essentially you write the whole program in continuation-
 passing style, with a state object being passed down an
 infinite chain of function calls.
>>> 
>>> Procedural programming under another name...
>>
>> Only in the sense that procedural programming is unstructured programming
>> under another name. What is a procedure call but a disguised GOSUB, and
>> what is GOSUB but a pair of GOTOs?
> 
> Continuation-passing style is only GOTOs. Instead of returning to the
> caller, procedures pass control to the continuation, together with the
> values that the continuation is expecting from the procedure.
> 
> I guess you can think of it as a way to disguise a GOSUB.


Really, if you think about it, both functional and procedural programming
are exactly the same as programming in assembly language. Returning a value
from a function pushes that value onto the function call stack, which is
really just a disguised assembly MOV command.



-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to automate java application in window using python

2016-09-28 Thread Emile van Sebille

On 09/23/2016 05:02 PM, Lawrence D’Oliveiro wrote:

On Thursday, September 22, 2016 at 8:34:20 AM UTC+12, Emile wrote:

Hmm, then I'll have to wait longer to experience the unreliability as
the handful of automated gui tools I'm running has only been up 10 to 12
years or so.


You sound like you have a solution for the OP, then.



My solution was to automate one of the then available windows gui 
scripting tools. I've stopped doing windows in the meantime and no 
longer know what's available.  My point was that it is possible to 
automate windows reliably as long as the programming is robust. You 
indicated you found automating unreliable -- I disagree.


Emile


--
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Paul Moore
On Wednesday, 28 September 2016 10:19:01 UTC+1, Chris Angelico  wrote:
> If monads allow mutations or side effects, they are by definition not
> pure functions, and violate your bullet point. Languages like Haskell
> have them not because they are an intrinsic part of functional
> programming languages, but because they are an intrinsic part of
> practical/useful programming languages.

Monads don't "allow" mutations or side effects. However, they allow you to 
express the *process* of making mutations or side effects in an abstract manner.

What "allows side effects" in languages like Haskell is the fact that the 
runtime behaviour of the language is not defined as "calculating the value of 
the main function" but rather as "making the process that the main functon 
defines as an abstract monad actually happen".

That's a completely informal and personal interpretation of what's going on, 
and Haskell users might not agree with it[1]. But for me the key point in 
working out what Haskell was doing was when I realised that their execution 
model wasn't the naive "evaluate the main function" model that I'd understood 
from when I first learned about functional programming.

Paul

[1] One of the problems with modern functional programming is that its 
advocates typically have major problems explaining their interpretation of 
what's going on, with the result that they generally lose their audience long 
before they manage to explain the actually useful things that come from their 
way of thinking.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to make a foreign function run as fast as possible in Windows?

2016-09-28 Thread Paul Moore
On Tuesday, 27 September 2016 02:49:08 UTC+1, jf...@ms4.hinet.net  wrote:
> This function is in a DLL. It's small but may run for days before complete. I 
> want it takes 100% core usage. Threading seems not a good idea for it shares 
> the core with others. Will the multiprocessing module do it? Any suggestion?

Taking a step back from the more detailed answers, would I be right to assume 
that you want to call this external function multiple times from Python, and 
each call could take days to run? Or is it that you have lots of calls to make 
and each one takes a small amount of time but the total time for all the calls 
is in days?

And furthermore, can I assume that the external function is *not* written to 
take advantage of multiple CPUs, so that if you call the function once, it's 
only using one of the CPUs you have? Is it fully utilising a single CPU, or is 
it actually not CPU-bound for a single call?

To give specific suggestions, we really need to know a bit more about your 
issue.

Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread alister
On Wed, 28 Sep 2016 21:48:20 +1000, Steve D'Aprano wrote:

> On Wed, 28 Sep 2016 08:03 pm, Lawrence D’Oliveiro wrote:
> 
>> On Wednesday, September 28, 2016 at 9:53:05 PM UTC+13, Gregory Ewing
>> wrote:
>>> Essentially you write the whole program in continuation-
>>> passing style, with a state object being passed down an infinite chain
>>> of function calls.
>> 
>> Procedural programming under another name...
> 
> Only in the sense that procedural programming is unstructured
> programming under another name. What is a procedure call but a disguised
> GOSUB, and what is GOSUB but a pair of GOTOs?

by this analysis isn't any function call the same?

are not loops just stylised  conditional gotos?




-- 
We only support a 28000 bps connection.
-- 
https://mail.python.org/mailman/listinfo/python-list


Using the Windows "embedded" distribution of Python

2016-09-28 Thread Paul Moore
This is probably more of a Windows question than a Python question, but as it's 
related to embedding Python, I thought I'd try here anyway.

I'm writing some command line applications in Python, and I want to bundle them 
into a standalone form using the new "embedding" distribution of Python. The 
basic process is easy enough - write a small exe that sets up argv, and then 
call Py_Main. Furthermore, as I'm *only* using Py_Main, I can use the 
restricted API, and make my code binary compatible with any Python 3 version.

So far so good. I have my exe, and my application code. I create a directory 
for my app, put the exe/app code in there, and then unzip the embedded 
distribution in there. And everything works. Yay!

However, I'm expecting my users to put my application directory on their PATH 
(as these are command line utilities). And I don't really want my private copy 
of the Python DLLs to be exposed on the user's PATH (I don't *know* that it'll 
interfere with their actual Python installation, but I'd prefer not to take 
risks). And this is when my problems start. Ideally, I'd put the embedded 
Python distribution in a subdirectory (say "embed") of my application 
directory. But the standard loader won't find python3.dll from there. So I have 
to do something a bit better.

As I'm only using Py_Main, I thought I'd be OK to dynamically load it - 
LoadLibrary on the DLL, then GetProcAddress. No big deal. But I need to tell 
LoadLibrary to get the DLL from the "embed" subdirectory. I suppose I could add 
that directory to PATH locally to my program, but that seems clumsy. So I 
thought I'd try SetDllDirectory. That works for python36.dll, but if I load 
python3.dll, it can't find Py_Main - the export shows as "(forwarded to 
python36.Py_Main)", maybe the forwarding doesn't handle SetDllDirectory?

So, basically what I'm asking:

* If I want to put my application on PATH, am I stuck with having the embedded 
distribution in the same directory, and also on PATH?
* If I can put the embedded distribution in a subdirectory, can that be made to 
work with python3.dll, or will I have to use python36.dll?

None of this is an issue with the most likely use of the embedded distribution 
(GUI apps, or server apps, both of which are likely to be run by absolute path, 
and so don't need to be on PATH). But I'd really like to be able to 
promote the embedded distribution as an alternative to tools like py2exe or 
cx_Freeze, so it would be good to know if a solution is possible (hmm, how come 
py2exe, and tools like Mercurial, which AFAIK use it, don't have this issue too?)

Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Jussi Piitulainen
Steve D'Aprano writes:

> On Wed, 28 Sep 2016 08:03 pm, Lawrence D’Oliveiro wrote:
>
>> On Wednesday, September 28, 2016 at 9:53:05 PM UTC+13, Gregory Ewing
>> wrote:
>>> Essentially you write the whole program in continuation-
>>> passing style, with a state object being passed down an
>>> infinite chain of function calls.
>> 
>> Procedural programming under another name...
>
> Only in the sense that procedural programming is unstructured programming
> under another name. What is a procedure call but a disguised GOSUB, and
> what is GOSUB but a pair of GOTOs?

Continuation-passing style is only GOTOs. Instead of returning to the
caller, procedures pass control to the continuation, together with the
values that the continuation is expecting from the procedure.

I guess you can think of it as a way to disguise a GOSUB.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Steve D'Aprano
On Wed, 28 Sep 2016 08:03 pm, Lawrence D’Oliveiro wrote:

> On Wednesday, September 28, 2016 at 9:53:05 PM UTC+13, Gregory Ewing
> wrote:
>> Essentially you write the whole program in continuation-
>> passing style, with a state object being passed down an
>> infinite chain of function calls.
> 
> Procedural programming under another name...

Only in the sense that procedural programming is unstructured programming
under another name. What is a procedure call but a disguised GOSUB, and
what is GOSUB but a pair of GOTOs?




-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Can this be easily done in Python?

2016-09-28 Thread Matt Wheeler
On Tue, 27 Sep 2016 at 20:58 TUA  wrote:

> Is the following possible in Python?
>
> Given how the line below works
>
> TransactionTerms = 'TransactionTerms'
>
>
> have something like
>
> TransactionTerms = 
>
> that sets the variable TransactionTerms to its own name as string
> representation without having to specify it explicitly as in the line
> above
>

(forgot to send to list, sorry)

```
def name(name):
globals()[name] = name


name('hi')

print(hi)
```

Or alternatively

```
import inspect


def assign():
return inspect.stack()[1].code_context[0].split('=')[0].strip()


thing = assign()

print(thing)
```

But why?

Both of these are pretty dodgy abuses of Python (I can't decide which
is worse, anyone?), and I've only tested them on 3.5. I present them
here purely because I was interested in working them out, and absolve
myself of any responsibility should anyone find them in use in
production code! :P (don't do that!)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Steve D'Aprano
On Wed, 28 Sep 2016 07:07 pm, Chris Angelico wrote:

> On Wed, Sep 28, 2016 at 6:46 PM, Marko Rauhamaa  wrote:
>> Lawrence D’Oliveiro :
>>
>>> On Wednesday, September 28, 2016 at 6:51:17 PM UTC+13, ast wrote:
 I noticed that searching in a set is faster than searching in a list.
>>>
>>> That’s why we have sets.
>>
>> I would have thought the point of sets is to have set semantics, just
>> like the point of lists is to have list semantics.
> 
> And set semantics are what, exactly? Membership is a primary one.

Yep, that's pretty much it...

> "Searching in a set" in the OP's language is a demonstration of the
> 'in' operator, a membership/containment check.
> 
> (Other equally important set semantics include intersection and union,

Both of those revolve on membership testing.

The union of A and B is the set of all elements in A plus those in B; the
intersection of A and B is the set of all elements in both A and B.
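
[Editorial note: a toy illustration, not code from the thread — both
operations can be written entirely in terms of `in` membership tests.]

```python
A = {1, 2, 3}
B = {2, 3, 4}

# Intersection and difference driven purely by membership checks:
intersection = {x for x in A if x in B}
difference = {x for x in A if x not in B}

assert intersection == A & B   # {2, 3}
assert difference == A - B     # {1}
```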


> but membership inclusion checks are definitely up there among primary
> purposes of sets.)

I can't think of a set operation (apart from add and remove) which doesn't
revolve around membership.




-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python C API: How to debug reference leak?

2016-09-28 Thread Chris Angelico
On Wed, Sep 28, 2016 at 9:25 PM, Gregory Ewing
 wrote:
> Chris Angelico wrote:
>>
>> If you've Py_DECREFed it and then peek into its internals, you're
>> aiming a gun at your foot.
>
>
> That's true. A safer way would be to look at the refcount
> *before* decreffing and verify that it's what you expect.

Exactly, which is presumably what dieter meant by a refcount of 1
being the one you're holding. A refcount of *2* indicates another
reference somewhere.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python C API: How to debug reference leak?

2016-09-28 Thread Gregory Ewing

Chris Angelico wrote:

If you've Py_DECREFed it and then peek into its internals, you're
aiming a gun at your foot.


That's true. A safer way would be to look at the refcount
*before* decreffing and verify that it's what you expect.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Michiel Overtoom
Hi,

Brandon Rhodes gave a talk about dictionaries (similar to sets), 'The Mighty 
Dictionary': 

https://www.youtube.com/watch?v=C4Kc8xzcA68


You also might be interested in his talk about Python data structures, 'All 
Your Ducks In A Row':

https://www.youtube.com/watch?v=fYlnfvKVDoM

Greetings,

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Case insensitive replacement?

2016-09-28 Thread Tim Chase
On 2016-09-27 18:10, MRAB wrote:
> The disadvantage of your "string-hacking" is that you're assuming
> that the uppercase version of a string is the same length as the
> original:

Ah, good point.  I went with using the regexp version for now since I
needed to move forward with something, so I'm glad I opted reasonably.

Thanks,

-tkc



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread ast


"Steven D'Aprano"  a écrit dans le message de 
news:57eb7dff$0$1598$c3e8da3$54964...@news.astraweb.com...

On Wednesday 28 September 2016 15:51, ast wrote:


Hello

I noticed that searching in a set is faster than searching in a list.

[...]

I tried a search in a tuple, it's not different that in a list.
Any comments ?



A list, array or tuple is basically a linear array that needs to be searched
one item at a time:

[ a | b | c | d | ... | x | y | z ]

To find x, Python has to start at the beginning and look at every item in turn,
until it finds x.

But a set is basically a hash table. It is an array, but laid out differently,
with blank cells:

[ # | # | h | p | a | # | m | y | b | # | # | f | x | ... | # ]

Notice that the items are jumbled up in arbitrary order. So how does Python
find them?

Python calls hash() on the value, which returns a number, and that points
directly to the cell which would contain the value if it were there. So if you
search for x, Python calls hash(x) which will return (say) 12. Python then
looks in cell 12, and if x is there, it returns True, and if it's not, it
returns False. So instead of looking at 24 cells in the array to find x, it
calculates a hash (which is usually fast), then looks at 1 cell.

(This is a little bit of a simplification -- the reality is a bit more
complicated, but you can look up "hash tables" on the web or in computer
science books. They are a standard data structure, so there is plenty of
information available.)

On average, if you have a list with 1000 items, you need to look at 500 items
before finding the one you are looking for. With a set or dict with 1000 items,
on average you need to look at 1 item before finding the one you are looking
for. And that is why sets and dicts are usually faster than lists.



--
Steven
git gets easier once you get the basic idea that branches are homeomorphic
endofunctors mapping submanifolds of a Hilbert space.



Thanks a lot, very interesting. 
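
[Editorial note: the hash-table behaviour described above is easy to observe
with a rough timing sketch; absolute numbers vary by machine, this is
illustrative only.]

```python
import timeit

items = list(range(100000))
as_set = set(items)
needle = 99999  # worst case for the list: the last element

# 100 membership tests each; the list scans linearly, the set hashes once.
t_list = timeit.timeit(lambda: needle in items, number=100)
t_set = timeit.timeit(lambda: needle in as_set, number=100)

print("list: %.5fs   set: %.5fs" % (t_list, t_set))
```

On a typical machine the set lookup comes out several orders of magnitude
faster, matching the O(n) versus O(1) analysis above.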


--
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Marko Rauhamaa
Chris Angelico :

> On Wed, Sep 28, 2016 at 6:46 PM, Marko Rauhamaa  wrote:
>> I would have thought the point of sets is to have set semantics, just
>> like the point of lists is to have list semantics.
>
> And set semantics are what, exactly? Membership is a primary one.

With sets, you can only check for membership.

Lists are mappings that map indices to objects.

> "Searching in a set" in the OP's language is a demonstration of the
> 'in' operator, a membership/containment check.
>
> (Other equally important set semantics include intersection and union,
> but membership inclusion checks are definitely up there among primary
> purposes of sets.)

Yes. However, membership is a feature of lists as well.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Steven D'Aprano
On Wednesday 28 September 2016 15:27, Gregory Ewing wrote:

> * No side effects (new variable bindings may be created, but
>existing ones cannot be changed; no mutable data structures).

As I understand it, for some advanced functional languages like Haskell, that 
is only true as far as the interface of the language (the API) is concerned. 
Implementation-wise, the language may in fact use a mutable data structure, 
provided it is provable that only one piece of code is accessing it at the 
relevant times.

The analogy with Python is the string concatenation optimization. Officially, 
writing:

a + b + c + d + e


in Python has to create and destroy the temporary, intermediate strings:
(a+b)

(a+b+c)

(a+b+c+d)

before the final concatenation is performed. But, provided there is only one 
reference to the initial string a, and if certain other conditions related to 
memory management also hold, then this can be optimized as an in-place 
concatenation without having to create new strings, even though strings are 
actually considered immutable and growing them in place is forbidden.


My understanding is that smart functional languages like Haskell do that sort 
of thing a lot, which avoids them being painfully slow.





-- 
Steven
git gets easier once you get the basic idea that branches are homeomorphic 
endofunctors mapping submanifolds of a Hilbert space.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to reduce the DRY violation in this code

2016-09-28 Thread Lorenzo Sutton



On 27/09/2016 17:49, Steve D'Aprano wrote:

I have a class that takes a bunch of optional arguments. They're all
optional, with default values of various types. For simplicity, let's say
some are ints and some are floats:


class Spam:
def __init__(self, bashful=10.0, doc=20.0, dopey=30.0,
 grumpy=40, happy=50, sleepy=60, sneezy=70):
# the usual assign arguments to attributes dance...
self.bashful = bashful
self.doc = doc
# etc.


I also have an alternative constructor that will be called with string
arguments.


May I ask: do you really need to add this method? Can't you ensure that 
the data passed during initialisation is already of the right type (i.e. 
can you convert to floats/ints externally)? If not, why not?


Lorenzo.

It converts the strings to the appropriate type, then calls the

real constructor, which calls __init__. Again, I want the arguments to be
optional, which means providing default values:


@classmethod
def from_strings(cls, bashful='10.0', doc='20.0', dopey='30.0',
 grumpy='40', happy='50', sleepy='60', sneezy='70'):
bashful = float(bashful)
doc = float(doc)
dopey = float(dopey)
grumpy = int(grumpy)
happy = int(happy)
sleepy = int(sleepy)
sneezy = int(sneezy)
return cls(bashful, doc, dopey, grumpy, happy, sleepy, sneezy)


That's a pretty ugly DRY violation. Imagine that I change the default value
for bashful from 10.0 to (let's say) 99. I have to touch the code in three
places (to say nothing of unit tests):

- modify the default value in __init__
- modify the stringified default value in from_strings
- change the conversion function from float to int in from_strings


Not to mention that each parameter is named seven times.


How can I improve this code to reduce the number of times I have to repeat
myself?






--
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Lawrence D’Oliveiro
On Wednesday, September 28, 2016 at 9:53:05 PM UTC+13, Gregory Ewing wrote:
> Essentially you write the whole program in continuation-
> passing style, with a state object being passed down an
> infinite chain of function calls.

Procedural programming under another name...
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to make a foreign function run as fast as possible in Windows?

2016-09-28 Thread alister
On Tue, 27 Sep 2016 19:13:51 -0700, jfong wrote:

> eryk sun at 2016/9/27 11:44:49AM wrote:
>> The threads of a process do not share a single core. The OS schedules
>> threads to distribute the load across all cores
> 
> hmmm... your answer overthrows all my knowledge about Python threads
> completely :-( I had actually considered using ProcessPoolExecutor
> to do it.
> 
> If the load is distributed by the OS scheduler across all cores, does
> that mean I can't make one core solely run a piece of code for me,
> and so I have no control over its performance?

this would be implementation specific, not part of the language 
specification. 



-- 
Ryan's Law:
Make three correct guesses consecutively
and you will establish yourself as an expert.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: what's the difference of Template.append(...) and Template.prepend(...) in pipes module

2016-09-28 Thread Cpcp Cp

https://docs.python.org/2/library/pipes.html
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to reduce the DRY violation in this code

2016-09-28 Thread Yann Kaiser
You could use `attrs` for this along with the convert option, if you're
open to receiving mixed arguments:

>>> @attr.s
... class C(object):
... x = attr.ib(convert=int)
>>> o = C("1")
>>> o.x
1

https://attrs.readthedocs.io/en/stable/examples.html#conversion
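A stdlib-only alternative, sketched here with a hypothetical cut-down `Spam`: derive both the default value and the conversion function from `__init__`'s own defaults via `inspect.signature`, so each parameter is declared exactly once (this assumes every default's type is callable on a string, as `float` and `int` are).

```python
import inspect

class Spam:
    def __init__(self, bashful=10.0, doc=20.0, grumpy=40):
        self.bashful = bashful
        self.doc = doc
        self.grumpy = grumpy

    @classmethod
    def from_strings(cls, **kwargs):
        # Reuse __init__'s defaults: convert each string argument with
        # the type of the corresponding default (float or int here).
        params = inspect.signature(cls.__init__).parameters
        converted = {name: type(params[name].default)(value)
                     for name, value in kwargs.items()}
        return cls(**converted)
```

Now changing bashful's default from 10.0 to 99 in `__init__` automatically changes both the default and the conversion used by `from_strings`.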

On Tue, Sep 27, 2016, 16:51 Steve D'Aprano 
wrote:

> I have a class that takes a bunch of optional arguments. They're all
> optional, with default values of various types. For simplicity, let's say
> some are ints and some are floats:
>
>
> class Spam:
> def __init__(self, bashful=10.0, doc=20.0, dopey=30.0,
>  grumpy=40, happy=50, sleepy=60, sneezy=70):
> # the usual assign arguments to attributes dance...
> self.bashful = bashful
> self.doc = doc
> # etc.
>
>
> I also have an alternative constructor that will be called with string
> arguments. It converts the strings to the appropriate type, then calls the
> real constructor, which calls __init__. Again, I want the arguments to be
> optional, which means providing default values:
>
>
> @classmethod
> def from_strings(cls, bashful='10.0', doc='20.0', dopey='30.0',
>  grumpy='40', happy='50', sleepy='60', sneezy='70'):
> bashful = float(bashful)
> doc = float(doc)
> dopey = float(dopey)
> grumpy = int(grumpy)
> happy = int(happy)
> sleepy = int(sleepy)
> sneezy = int(sneezy)
> return cls(bashful, doc, dopey, grumpy, happy, sleepy, sneezy)
>
>
> That's a pretty ugly DRY violation. Imagine that I change the default value
> for bashful from 10.0 to (let's say) 99. I have to touch the code in three
> places (to say nothing of unit tests):
>
> - modify the default value in __init__
> - modify the stringified default value in from_strings
> - change the conversion function from float to int in from_strings
>
>
> Not to mention that each parameter is named seven times.
>
>
> How can I improve this code to reduce the number of times I have to repeat
> myself?
>
>
>
>
>
> --
> Steve
> “Cheer up,” they said, “things could be worse.” So I cheered up, and sure
> enough, things got worse.
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
Yann Kaiser
kaiser.y...@gmail.com
yann.kai...@efrei.net
+33 6 51 64 01 89
https://github.com/epsy
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python C API: How to debug reference leak?

2016-09-28 Thread Chris Angelico
On Wed, Sep 28, 2016 at 6:56 PM, Gregory Ewing
 wrote:
> dieter wrote:
>>
>> dl l  writes:
>>
>>> When I debug in C++, I see the reference count of a PyObject is 1.
>
>>> How can I find out where is referencing this object?
>>
>>
>> Likely, it is the reference, you are holding:
>
>
> Unless you've just Py_DECREFed it, expecting it to go
> away, and the refcount is still 1, in which case there's
> still another reference somewhere else.

If you've Py_DECREFed it and then peek into its internals, you're
aiming a gun at your foot. In theory, the decref could have been
followed by some other operation (or a context switch) that allocated
another object and happened to use the same address (not as unlikely
as you might think, given that some object types use free lists).
Always have a reference to something before you mess with it.
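The Python-level counterpart of this caution, as a small illustrative sketch (CPython-specific): even reading a reference count perturbs it, because the inspecting call itself holds a temporary reference.

```python
import sys

x = object()
# sys.getrefcount reports at least 2 here: the binding `x` plus the
# temporary reference held by the function's own argument.
count = sys.getrefcount(x)
assert count >= 2

refs = [x, x]  # two more references via the list
assert sys.getrefcount(x) >= 4
```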

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to reduce the DRY violation in this code

2016-09-28 Thread alister
On Wed, 28 Sep 2016 01:49:56 +1000, Steve D'Aprano wrote:

> I have a class that takes a bunch of optional arguments. They're all
> optional, with default values of various types. For simplicity, let's
> say some are ints and some are floats:
> 
> 
> class Spam:
> def __init__(self, bashful=10.0, doc=20.0, dopey=30.0,
>  grumpy=40, happy=50, sleepy=60, sneezy=70):
> # the usual assign arguments to attributes dance... self.bashful
> = bashful self.doc = doc # etc.
> 
> 
> I also have an alternative constructor that will be called with string
> arguments. It converts the strings to the appropriate type, then calls
> the real constructor, which calls __init__. Again, I want the arguments
> to be optional, which means providing default values:
> 
> 
> @classmethod def from_strings(cls, bashful='10.0', doc='20.0',
> dopey='30.0',
>  grumpy='40', happy='50', sleepy='60', sneezy='70'):
> bashful = float(bashful)
> doc = float(doc)
> dopey = float(dopey)
> grumpy = int(grumpy)
> happy = int(happy)
> sleepy = int(sleepy)
> sneezy = int(sneezy)
> return cls(bashful, doc, dopey, grumpy, happy, sleepy, sneezy)
> 
> 
> That's a pretty ugly DRY violation. Imagine that I change the default
> value for bashful from 10.0 to (let's say) 99. I have to touch the code
> in three places (to say nothing of unit tests):
> 
This is often referred to as using "Magic Numbers" and can be avoided by 
using constants:

HAPPY = '50'
SNEEZY = '70'


class Spam:
    def __init__(self, happy=HAPPY, sneezy=SNEEZY, ...):
        ...

of course Python does not have true constants (hence the PEP 8 
recommendation of making them all caps) so you have to behave yourself 
regarding their usage.
At least now they only need to be changed in one location.
 

that said when I find i have situations like this it is often a good idea 
to re-investigate my approach to the overall problem to see if there is 
another way to implement it.


-- 
"Humor is a drug which it's the fashion to abuse."
-- William Gilbert
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Chris Angelico
On Wed, Sep 28, 2016 at 6:52 PM, Gregory Ewing
 wrote:
> Chris Angelico wrote:
>>
>>
>>  wrote:
>>
>>> * No side effects (new variable bindings may be created, but
>>>  existing ones cannot be changed; no mutable data structures).
>>
>>
>> If that's adhered to 100%, the language is useless for any operation
>> that cannot be handled as a "result at end of calculation" function.
>
>
> Surprisingly, that turns out not to be true. Modern
> functional programming has developed some very elegant
> techniques for expressing state-changing operations in
> a functional way, using things they call monads.
>
> Essentially you write the whole program in continuation-
> passing style, with a state object being passed down an
> infinite chain of function calls.
>
> None of the functions ever return, so there's no
> danger of an "old" state being seen again after the state
> has changed. This allows in-place mutations to be
> performed as optimisations; it also allows I/O to be
> handled in a functional way.

If monads allow mutations or side effects, they are by definition not
pure functions, and violate your bullet point. Languages like Haskell
have them not because they are an intrinsic part of functional
programming languages, but because they are an intrinsic part of
practical/useful programming languages.

>> You can't produce intermediate output. You can't even produce
>> debugging output (at least, not from within the program - you'd need
>> an external debugger). You certainly can't have a long-running program
>> that handles multiple requests (eg a web server).
>
>
> Well, people seem to be writing web servers in functional
> languages anyway, judging by the results of googling for
> "haskell web server". :-)

Again, Haskell has such features in order to make it useful, not
because they're part of the definition of "functional programming".

>> Unless you're deliberately defining
>> "functional language" as "clone of Haskell" or something, there's no
>> particular reason for this to be a requirement.
>
>
> It's not *my* definition, I'm just pointing out what
> the term "functional language" means to the people
> who design and use such languages. It means a lot more
> than just "a language that has functions". If that's
> your definition, then almost any language designed
> in the last few decades is a functional language, and
> the term is next to meaningless.

Of course it's more than "a language that has functions"; but I'd say
that a more useful comparison would be "languages that require
functional idioms exclusively" vs "languages that support functional
idioms" vs "languages with no functional programming support". Python
is squarely in the second camp, with features like list
comprehensions, map/reduce, etc, but never forcing you to use them. (C
would be one I'd put into the third camp - I don't think it'd be at
all practical to try to use any sort of functional programming idiom
in C. But I might be wrong.)

>> Python is predominantly a *practical* language. Since purely
>> functional programming is fundamentally impractical, Python doesn't
>> force us into it.
>
>
> The point is that a true functional language (as
> opposed to just "a language that has functions") *does*
> force you into it. Lack of side effects is considered
> an important part of that paradigm, because it allows
> programs to be reasoned about in a mathematical way.
> If that's not enforced, the point of the whole thing
> is lost.
>
> The fact that Python *doesn't* force you into it means
> that Python is not a functional language in that sense.

Right. But my point is that *no* truly useful general-purpose language
actually enforces all of this. A language with absolutely no side
effects etc is a great way to represent algebra (algebra is all about
a search for truth that existed from the beginning - if you discover
half way through that x is 7, then x must have actually been 7 when
the program started, only you didn't know it yet), but it's a terrible
way to do any sort of I/O, and it's rather impractical for really
long-running calculations, being that much harder to debug. The
languages most commonly referred to as "functional languages" are
merely *closer to* a pure no-side-effect language - they encourage
functional styles more strongly than Python does.

I strongly support the use of pure functions in places where that
makes sense. They're much easier to reason about, write unit tests
for, etc, etc, than are functions with side effects. But most
practical programming work is made easier, not harder, by permitting
side effects.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Chris Angelico
On Wed, Sep 28, 2016 at 6:46 PM, Marko Rauhamaa  wrote:
> Lawrence D’Oliveiro :
>
>> On Wednesday, September 28, 2016 at 6:51:17 PM UTC+13, ast wrote:
>>> I noticed that searching in a set is faster than searching in a list.
>>
>> That’s why we have sets.
>
> I would have thought the point of sets is to have set semantics, just
> like the point of lists is to have list semantics.

And set semantics are what, exactly? Membership is a primary one.
"Searching in a set" in the OP's language is a demonstration of the
'in' operator, a membership/containment check.

(Other equally important set semantics include intersection and union,
but membership inclusion checks are definitely up there among primary
purposes of sets.)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Steven D'Aprano
On Wednesday 28 September 2016 15:51, ast wrote:

> Hello
> 
> I noticed that searching in a set is faster than searching in a list.
[...]
> I tried a search in a tuple, it's no different than in a list.
> Any comments ?


A list, array or tuple is basically a linear array that needs to be searched 
one item at a time:

[ a | b | c | d | ... | x | y | z ]

To find x, Python has to start at the beginning and look at every item in turn, 
until it finds x.

But a set is basically a hash table. It is an array, but laid out differently, 
with blank cells:

[ # | # | h | p | a | # | m | y | b | # | # | f | x | ... | # ]

Notice that the items are jumbled up in arbitrary order. So how does Python 
find them?

Python calls hash() on the value, which returns a number, and that points 
directly to the cell which would contain the value if it were there. So if you 
search for x, Python calls hash(x) which will return (say) 12. Python then 
looks in cell 12, and if x is there, it returns True, and if it's not, it 
returns False. So instead of looking at 24 cells in the array to find x, it 
calculates a hash (which is usually fast), then looks at 1 cell.

(This is a little bit of a simplification -- the reality is a bit more 
complicated, but you can look up "hash tables" on the web or in computer 
science books. They are a standard data structure, so there is plenty of 
information available.)

On average, if you have a list with 1000 items, you need to look at 500 items 
before finding the one you are looking for. With a set or dict with 1000 items, 
on average you need to look at 1 item before finding the one you are looking 
for. And that is why sets and dicts are usually faster than lists.
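The cell-lookup idea above can be sketched as a toy hash table (illustrative only; CPython's real implementation uses open addressing with a more elaborate probe sequence, plus automatic resizing):

```python
class ToySet:
    """Toy fixed-size hash table (no resizing -- keep it under-full)."""

    def __init__(self, size=16):
        self.slots = [None] * size

    def add(self, value):
        i = hash(value) % len(self.slots)      # hash points at a cell
        while self.slots[i] is not None and self.slots[i] != value:
            i = (i + 1) % len(self.slots)      # linear probing on collision
        self.slots[i] = value

    def __contains__(self, value):
        i = hash(value) % len(self.slots)
        while self.slots[i] is not None:
            if self.slots[i] == value:
                return True
            i = (i + 1) % len(self.slots)
        return False                           # hit an empty cell: absent

s = ToySet()
for item in ("h", "p", "x"):
    s.add(item)
```

Membership is decided after looking at one cell (or a few, on collision), instead of scanning every element.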



-- 
Steven
git gets easier once you get the basic idea that branches are homeomorphic 
endofunctors mapping submanifolds of a Hilbert space.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python C API: How to debug reference leak?

2016-09-28 Thread Gregory Ewing

dieter wrote:

dl l  writes:


When I debug in C++, I see the reference count of a PyObject is 1.

>> How can I find out where is referencing this object?


Likely, it is the reference, you are holding:


Unless you've just Py_DECREFed it, expecting it to go
away, and the refcount is still 1, in which case there's
still another reference somewhere else.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Gregory Ewing

Chris Angelico wrote:


 wrote:


* No side effects (new variable bindings may be created, but
 existing ones cannot be changed; no mutable data structures).


If that's adhered to 100%, the language is useless for any operation
that cannot be handled as a "result at end of calculation" function.


Surprisingly, that turns out not to be true. Modern
functional programming has developed some very elegant
techniques for expressing state-changing operations in
a functional way, using things they call monads.

Essentially you write the whole program in continuation-
passing style, with a state object being passed down an
infinite chain of function calls.

None of the functions ever return, so there's no
danger of an "old" state being seen again after the state
has changed. This allows in-place mutations to be
performed as optimisations; it also allows I/O to be
handled in a functional way.
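A minimal Python sketch of that state-passing style (illustrative only; real monadic I/O in Haskell is considerably more structured): every action maps the current state to a (value, new_state) pair, and the state is threaded through explicitly instead of being mutated.

```python
def get(state):
    # Action: yield the current state as the value, state unchanged.
    return state, state

def put(new_state):
    # Action factory: replace the state, yielding no interesting value.
    def action(state):
        return None, new_state
    return action

def run(actions, state):
    # Thread the state through a sequence of actions, never mutating it.
    values = []
    for act in actions:
        value, state = act(state)
        values.append(value)
    return values, state

values, final = run([get, put(42), get], 0)
# values == [0, None, 42], final == 42
```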


You can't produce intermediate output. You can't even produce
debugging output (at least, not from within the program - you'd need
an external debugger). You certainly can't have a long-running program
that handles multiple requests (eg a web server).


Well, people seem to be writing web servers in functional
languages anyway, judging by the results of googling for
"haskell web server". :-)


* Syntactic support for currying.


Is that really such a big deal?


It is when higher-order functions are such a fundamental
part of the ecosystem that you can hardly take a breath
without needing to curry something.


Unless you're deliberately defining
"functional language" as "clone of Haskell" or something, there's no
particular reason for this to be a requirement.


It's not *my* definition, I'm just pointing out what
the term "functional language" means to the people
who design and use such languages. It means a lot more
than just "a language that has functions". If that's
your definition, then almost any language designed
in the last few decades is a functional language, and
the term is next to meaningless.


Python is predominantly a *practical* language. Since purely
functional programming is fundamentally impractical, Python doesn't
force us into it.


The point is that a true functional language (as
opposed to just "a language that has functions") *does*
force you into it. Lack of side effects is considered
an important part of that paradigm, because it allows
programs to be reasoned about in a mathematical way.
If that's not enforced, the point of the whole thing
is lost.

The fact that Python *doesn't* force you into it means
that Python is not a functional language in that sense.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Marko Rauhamaa
Lawrence D’Oliveiro :

> On Wednesday, September 28, 2016 at 6:51:17 PM UTC+13, ast wrote:
>> I noticed that searching in a set is faster than searching in a list.
>
> That’s why we have sets.

I would have thought the point of sets is to have set semantics, just
like the point of lists is to have list semantics.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to call a method returning a value from a main function

2016-09-28 Thread Paul Moore
On Wednesday, 28 September 2016 07:47:38 UTC+1, prasanth kotagiri  wrote:
> def GenAccessToken(mackey, authid, configid, tokenexp, *perm):
>     args = JWTParms()
>     args.configurationId = configid
>     args.authzSystemMacKey = mackey
>     args.authzSystemId = authid
>     args.tokenExpiryInSeconds = tokenexp
>     args.permissions = perm
>     tokenGen = TokenGenerator(args)
>     tok = tokenGen.generate()
>     return tok
> 
> if __name__ == '__main__':
>     GenAccessToken(
>         "This_is_a_Test_QED_MAC_Key_Which_Needs_to_be_at_Least_32_Bytes_Long",
>         "default", "default", 6, "g,m,a,s,c,p,d")
> 
> when i am calling the above method it is not returning any value but when i 
> use print it is printing the value. Is there any wrong in returning the value 
> from above method. Please help me ASAP

GenAccessToken is returning a value (from "return tok") but you're not doing 
anything with that value in your "if name == '__main__'" section. You need to 
assign the return value to a variable, or print it, or whatever you want to do 
with it. Otherwise Python will assume you weren't interested in the return 
value and simply throw it away.

Try something like

tok = GenAccessToken(
    "This_is_a_Test_QED_MAC_Key_Which_Needs_to_be_at_Least_32_Bytes_Long",
    "default", "default", 6, "g,m,a,s,c,p,d")
print(tok)

Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: what's the difference of Template.append(...) and Template.prepend(...) in pipes module

2016-09-28 Thread Steven D'Aprano
On Wednesday 28 September 2016 16:47, Cpcp Cp wrote:

> Template.append(cmd, kind) and Template.prepend(cmd, kind)
> Append a new action at the end.The cmd variable must be a valid bourne shell
> command. The kind variable consists of two letters.

"Append" means to put at the end.

"Prepend" means to put at the beginning.


Here is a list: [1, 2, 3, 4]


If we APPEND "foo", we get: [1, 2, 3, 4, foo]


If we PREPEND "foo", we get: [foo, 1, 2, 3, 4]


Does that help?
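In plain list terms (a sketch of the idea only; `Template.append` and `Template.prepend` add pipeline steps, not list items):

```python
items = [1, 2, 3, 4]

items.append('foo')      # append: 'foo' goes on the end
assert items == [1, 2, 3, 4, 'foo']

items.insert(0, 'bar')   # prepend: 'bar' goes at the front
assert items == ['bar', 1, 2, 3, 4, 'foo']
```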



> My os is windows 7.But this module is used for POSIX.

Which module? We don't know which module you are talking about, which makes it 
hard to give you a specific answer.






-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Lawrence D’Oliveiro
On Wednesday, September 28, 2016 at 6:51:17 PM UTC+13, ast wrote:
> I noticed that searching in a set is faster than searching in a list.

That’s why we have sets.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: what's the difference of Template.append(...) and Template.prepend(...) in pipes module

2016-09-28 Thread Lawrence D’Oliveiro
On Wednesday, September 28, 2016 at 7:47:46 PM UTC+13, Cpcp Cp wrote:
> My os is windows 7.But this module is used for POSIX.

Windows 10 has a Linux layer, I believe. Why not try that?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread Peter Otten
Lawrence D’Oliveiro wrote:

> On Wednesday, September 28, 2016 at 3:35:58 AM UTC+13, Peter Otten wrote:
>> is Python actually a "functional language"?
> 
> Yes

[snip]

No. To replace the mostly irrelevant link with something addressing my 
question:





-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread Christian Gollwitzer

Am 28.09.16 um 07:51 schrieb ast:

Hello

I noticed that searching in a set is faster than searching in a list.


Because a set is a hashtable, which has constant-time lookup, whereas in 
a list it needs to compare with every element.



from timeit import Timer
from random import randint

l = [i for i in range(100)]
s = set(l)


Try longer lists/sets and the difference should increase.

Christian


t1 = Timer("randint(0, 200) in l", "from __main__ import l, randint")
t2 = Timer("randint(0, 200) in s", "from __main__ import s, randint")

t1.repeat(3, 10)
[1.459111982109448, 1.4568229341997494, 1.4329947660946232]

t2.repeat(3, 10)
[0.8499233841172327, 0.854728743457656, 0.8618653348400471]

I tried a search in a tuple, it's no different than in a list.
Any comments ?


--
https://mail.python.org/mailman/listinfo/python-list


Re: Why searching in a set is much faster than in a list ?

2016-09-28 Thread dieter
"ast"  writes:
> ...
> I noticed that searching in a set is faster than searching in a list.

A "set" is actually similar to a degenerate "dict". It uses hashing
to access its content quickly, which gives you (amortized, asymptotically)
constant runtime. With a list, you iterate over its elements and
compare along the way - which gives (...) linear runtime.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to call this method from main method

2016-09-28 Thread Ben Finney
prasanthk...@gmail.com writes:

> When i am calling the above method from main method

You don't have a main method, and you don't specify what “the above
method” is.

I assume you mean “… calling the ‘GenAccessToken’ function” (which is
not a method of anything, in the code you showed). But maybe you mean
something different?

> it is not returning the value but when i use print it is showing the
> value.

Can you show an example Python interactive session which:

* Imports your module.

* Calls the function.

* Demonstrates the behaviour that is confusing you.

> Please help me ASAP

You may not realise (I assume you are not a native English speaker),
that is *very* demanding and rude. We are a community of volunteers
here, asking for a response “ASAP” is making demands you are not in a
position to make.

-- 
 \“Don't fight forces, use them.” —Richard Buckminster Fuller, |
  `\   _Shelter_, 1932 |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread dieter
Peng Yu  writes:
> Hi, In many other functional languages, one can change the closure of a
> function. Is it possible in python?

I do not think so: the corresponding attributes/types ("__closure__", "cell")
are explicitly designed to be read-only.

However, I have never missed closure changeability. Should I really need it
(in an extremely rare case), I would bind the corresponding variable
to an object which could be modified at will.
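The workaround described above, as a short sketch: close over a mutable container, and "change the closure" by mutating the container's contents rather than rebinding the captured name.

```python
def make_counter():
    state = {'n': 0}          # mutable object captured by the closure

    def bump():
        state['n'] += 1       # no rebinding needed: mutate in place
        return state['n']

    return bump, state

bump, state = make_counter()
bump()                        # -> 1
bump()                        # -> 2
state['n'] = 100              # "change the closure" from outside
# the next bump() returns 101
```

In Python 3 one can also use `nonlocal` inside the nested function to rebind the enclosing variable directly.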

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python C API: How to debug reference leak?

2016-09-28 Thread dieter
dl l  writes:

> When I debug in C++, I see the reference count of a PyObject is 1. I don't
> know where is referencing this object. How can I find out where is
> referencing this object?

Likely, it is the reference you are holding: typically, whenever
you can access a Python object, that object has a reference count of at least 1
(because it must lie on at least one access path -- the one used by you).

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is there a way to change the closure of a python function?

2016-09-28 Thread ast


"jmp"  a écrit dans le message de 
news:mailman.31.1474987306.2302.python-l...@python.org...

On 09/27/2016 04:01 PM, Peng Yu wrote:




Note: functions are objects and can have attributes; however, I rarely see 
these attributes used, and there may be good reasons for that.




It could be use to implement a cache for example.

def fact1(n):
    if not hasattr(fact1, '_cache'):
        fact1._cache = {0: 1}

    if n not in fact1._cache:
        fact1._cache[n] = n * fact1(n - 1)

    return fact1._cache[n]


--
https://mail.python.org/mailman/listinfo/python-list


Re: Python C API: How to debug reference leak?

2016-09-28 Thread dieter
dl l  writes:

> Thanks for reply. Is there any function in C to get the reference objects
> of a object? I want to debug where are referencing the object.

Depending on your understanding of "reference objects", this would
be "gc.get_referents" or "gc.get_referrers".

Of course, those are not "C" functions but Python functions in the module
"gc". However, (with some effort) you can use them in "C" as
well.

Most types (at least the standard container types) will have
"gc" support functions which allow to determine the "referents"
of a corresponding object.
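At the Python level the two functions look like this (a minimal sketch; the referrers returned can also include frames and module namespaces, depending on the interpreter):

```python
import gc

class Node:
    pass

a = Node()
holder = [a]                       # one container referring to `a`

# Who refers to `a`? The list `holder` must be among the referrers.
assert any(r is holder for r in gc.get_referrers(a))

# What does `holder` refer to? `a` must be among its referents.
assert any(obj is a for obj in gc.get_referents(holder))
```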

Note: those "gc" functions are implemented in "C". But, almost surely,
they are declared "static", i.e. not globally exposed (in order
not to pollute the global symbol namespace).


When I want to use Python functions in C, I write a bit of "Cython" source
and use "Cython" to compile it into "C". Then, I copy and paste
this "C" code into my own "C" application to have the functions
available there.


Important note for debugging:
(Almost) all Python functions expect that they are called only when the
active thread holds the GIL ("Global Interpreter Lock").
Calling Python functions during debugging may violate this restriction
and this can lead to abnormal behaviour.

In single thread applications, the real danger is likely not too large.
In multi thread applications, I have already seen "SIGSEGV"s caused
by this.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: what's the difference of Template.append(...) and Template.prepend(...) in pipes module

2016-09-28 Thread Cpcp Cp
Please give me a simple example. Thanks!
-- 
https://mail.python.org/mailman/listinfo/python-list