Re: Problem with keyboard up/down arrows in Python 2.4 interpreter

2011-03-24 Thread Julien
On Mar 22, 5:37 pm, Benjamin Kaplan  wrote:
> On Tue, Mar 22, 2011 at 2:27 AM, Julien  wrote:
> > Hi,
>
> > I'm having problems when typing the up/down arrows in the Python 2.4
> > interpreter (exact version: Python 2.4.6 (#1, Mar  3 2011, 15:45:53)
> > [GCC 4.2.1 (Apple Inc. build 5664)] on darwin).
>
> > When I press the up arrow it outputs "^[[A" and when I press the down
> > arrow it outputs "^[[B".
>
> > I've googled it and it looks like it might be an issue with
> > readline not being installed or configured properly. Is that correct?
> > If so, how can I fix this issue?
>
> > Many thanks.
>
> > Kind regards,
>
> > Julien
> > --
>
> How did you install Python? If you used Macports, it looks like
> readline support is a separate port for 2.4. Try installing
> py-readline.
>

Hi,

Which readline package should I install? I've tried with 'pip install
py-readline' but that package doesn't seem to exist. I've tried with
'pyreadline' (without the dash) but that didn't fix the issue.

Many thanks,

Julien
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: functools.partial doesn't work without using named parameter

2011-03-24 Thread Paddy
P.S: Python 3.2!
-- 
http://mail.python.org/mailman/listinfo/python-list


functools.partial doesn't work without using named parameter

2011-03-24 Thread Paddy
Hi, I just found the following oddity: for function fsf1 I am forced to 
use a named parameter for correct evaluation. I was wondering why it doesn't 
work without one, yet the example from the docs, which wraps int to create 
basetwo, doesn't need this?
The example:

>>> from functools import partial
>>> basetwo = partial(int, base=2)
>>> basetwo('10010')
18
>>> 
>>> def fs(f, s): return [f(value) for value in s]

>>> def f1(value): return value * 2

>>> s = [0, 1, 2, 3]
>>> fs(f1, s)
[0, 2, 4, 6]
>>> 
>>> fsf1 = partial(fs, f=f1)
>>> fsf1(s)
Traceback (most recent call last):
  File "", line 1, in 
fsf1(s)
TypeError: fs() got multiple values for keyword argument 'f'
>>> # BUT
>>> fsf1(s=s)
[0, 2, 4, 6]
>>> 

Would someone help?

- Thanks in advance, Paddy.
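
(A minimal sketch of what is going on, for anyone hitting the same error: partial 
*prepends* its stored positional arguments, so with f frozen as a keyword, the call 
fsf1(s) binds s to the first parameter f *and* also passes f=f1, hence the 
TypeError. Freezing f positionally avoids the clash.)

```python
from functools import partial

def fs(f, s):
    return [f(value) for value in s]

def f1(value):
    return value * 2

s = [0, 1, 2, 3]

# partial(fs, f1) freezes f positionally: fsf1(s) becomes fs(f1, s).
fsf1 = partial(fs, f1)
print(fsf1(s))  # [0, 2, 4, 6]
```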
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Stefan Behnel

Steven D'Aprano, 25.03.2011 06:46:

On Thu, 24 Mar 2011 18:32:11 -0700, Carl Banks wrote:


It's probably the least justified builtin other than pow.


I don't know about that. Correctly, efficiently and *quickly*
implementing the three-argument version of pow is exactly the sort of
thing that should be in the built-ins, or at least the standard library.


I think that touches it already. We have a "math" module, so why is there a 
*builtin* for doing special math? How much code is there really that only 
uses pow() and does not import "math"?


Stefan

--
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Paul Rubin
Steven D'Aprano  writes:
> I don't know about that. Correctly, efficiently and *quickly* 
> implementing the three-argument version of pow is exactly the sort of 
> thing that should be in the built-ins, or at least the standard library.

There is also math.pow which works slightly differently from built-in
pow, sigh.
-- 
http://mail.python.org/mailman/listinfo/python-list


ConfigParser.NoSectionError: No section: 'locations'

2011-03-24 Thread Şansal Birbaş
Hi,

I successfully created an .exe file for my Python code. As a .py file, it works 
like a charm. But when I try to run it from the exe version, I get error as 
follows:

  Traceback (most recent call last):
  File "CreateAS.pyw", line 14, in 
  File "pulp\__init__.pyc", line 33, in 
  File "pulp\pulp.pyc", line 103, in 
  File "pulp\solvers.pyc", line 101, in 
  File "pulp\solvers.pyc", line 59, in initialize
  File "ConfigParser.pyc", line 532, in get
ConfigParser.NoSectionError: No section: 'locations'

How can I solve that?

Thanks in advance.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Steven D'Aprano
On Thu, 24 Mar 2011 18:32:11 -0700, Carl Banks wrote:

> It's probably the least justified builtin other than pow.

I don't know about that. Correctly, efficiently and *quickly* 
implementing the three-argument version of pow is exactly the sort of 
thing that should be in the built-ins, or at least the standard library.

[steve@sylar ~]$ python3 -m timeit -s "n=2" "(n**3)%123456789"
1000 loops, best of 3: 633 usec per loop
[steve@sylar obfuscate]$ python3 -m timeit -s "n=2" "pow(n, 3, 123456789)"
10 loops, best of 3: 6.03 usec per loop

That's two orders of magnitude faster.

(You need the n=2 to defeat Python's compile-time constant folding, 
otherwise timing measurements will be misleading.)



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Steven D'Aprano
On Thu, 24 Mar 2011 17:26:34 -0500, Chris Rebert wrote:

> On Thu, Mar 24, 2011 at 1:39 PM, Seldon  wrote:
>> Hi, I have a question about generating variable assignments
>> dynamically.
> 
> This can frequently be a code smell.

Is there any time when it's not a code smell? A code smell is something 
that makes you go "Hmmm..." (or at least, it *should* do so), not 
necessarily the wrong thing.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Steven D'Aprano
On Thu, 24 Mar 2011 19:51:08 -0700, scattered wrote:

> On Mar 24, 7:18 pm, Steven D'Aprano <steve+comp.lang.pyt...@pearwood.info> wrote:
>> On Thu, 24 Mar 2011 14:39:34 -0700, scattered wrote:
>> > Could try:
>>
>> >>> my_list = [("x", 7), ("y", 8)]
>> >>> for pair in my_list: exec(pair[0] + " = " + str(pair[1]))
>> >>> x,y
>> (7,8)
>>
>> Please don't ever do such a thing. The world has enough buggy software
>> vulnerable to code injection attacks without you encouraging newbies to
>> write more.
>>
>> If (generic) you, the programmer, can write
>>
>> my_list = [("x", 7), ("y", 8)]
>> for pair in my_list:
>>     exec(pair[0] + " = " + str(pair[1]))
>>
>> in your code, then you should stop messing about and just write:
>>
>> x = 7
>> y = 8
>>
>>
> Good question - though presumably the OP had some motivation for wanting
> to do this dynamically. Possibly some sort of code-template (though then
> it would probably make more sense to create a text file which contains
> the bindings you want, which could then be expanded to a full-fledged
> program).

Of course he has some motivation for wanting to do it. Nearly every bad 
idea ever done was done because somebody thought it was a good idea. 
Templating via exec was a bad idea the first time somebody thought "I 
know! I'll just use exec to create variables!" and it remains a bad idea 
today. 

(Although with care and a lot of effort, you can make it less bad. See, 
for example, namedtuple in the standard library. Read the source code for 
it, and see just how much effort Raymond Hettinger puts into making it 
safe. And call me paranoid if you like, but I still wouldn't trust it if 
the names were coming from anonymous users over the internet and injected 
straight into namedtuple. Just because *I* can't see a vulnerability, 
doesn't mean there isn't one.)



>> instead. The only time this technique is even *possibly* justified is
>> if the contents of my_list comes from external data not known at
>> compile-time.
> 
> Here is another possibility: you are using Python *interactively* in

In my earlier post, I described the dynamic creation of variables as:

"... something you should *nearly* always avoid doing." [Emphasis added.]

Congratulations, you've found one of the few exceptions. Of course an 
interactive shell must allow you to create variables interactively. It 
would hardly be interactive if you couldn't. This is why interactive 
shells are often called a REPL: Read Eval (or Exec) Print Loop.

Note also that I was describing *variables*. Although there are a lot of 
similarities between variables and instance attributes (so much so that 
some other languages call them both variables), there are some subtle 
differences, and those differences are important. Dynamic *attribute* 
creation is not anywhere near as bad a code-smell as dynamic variables:

setattr(instance, name, value)  # A slight whiff, but usually okay.

globals()[name] = value  # Smells pretty bad, but very occasionally okay.

exec(name + " = " + str(value))  # Reeks like Satan's armpit after a long
# day mucking out the pits of excrement with the souls of the Damned.
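
(To make that ranking concrete, here is a small sketch of the mild option; the 
class and attribute names are made up for illustration:)

```python
class Record:
    """A plain holder object (hypothetical example)."""
    pass

instance = Record()
name, value = "score", 42  # dynamic name/value, e.g. from parsed data

# The mild option: a dynamic attribute on an object you control.
setattr(instance, name, value)
print(instance.score)  # 42

# getattr is the matching read side.
print(getattr(instance, "score"))  # 42
```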


> solving cryptograms (as a matter of fact - I was doing exactly this
> yesterday in trying to solve some Playfair ciphers).

If you're interested in ciphers, you might find this useful:

http://pypi.python.org/pypi/obfuscate



> You have a
> ciphertext that is a stream of letters in the range A...Z. You need to
> consult frequencies of letters, pairs of letters, triples of letters,
> and quadruples of letters that occur. So, you write a script that steps
> through the cipher text, creates a dictionary which records frequencies
> of strings of length <= 4, and, as an added convenience, creates
> bindings of frequencies to these strings.

I don't call that a convenience. I call that a problem. What happens when 
your cipher text includes

"...aBzNk6YPq7psGNQ1Pj?lenprem1zGWdefmspzzQ..."

How many problems can you see there?



> Thus - if you want to know how
> often, say, EFW occurs in the ciphertext you just type EFW (rather than
> freq["EFW"]) and the Python shell returns the frequency.

Sounds like a terrible idea. What do you do when you want to know how 
often "len" or "1xy" or "???" or "def" occurs?



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why memoizing is faster

2011-03-24 Thread Terry Reedy

On 3/24/2011 8:26 PM, Fons Adriaensen wrote:

On Thu, Mar 24, 2011 at 08:12:22PM -0400, Terry Reedy wrote:


The irony of this is that memoizing 'recursive' functions with a
decorator depends on the fact that Python does not have truly recursive
functions. A function cannot call itself directly.


I wonder what exactly is meant by that last sentence:

* It doesn't happen (since the function name is evaluated
   to find the function to be called, as you explained).


In particular, the function name is resolved to an object when the 
function is called and executed, rather than when the function is 
compiled. To be more exact, I should have said, "A user-written, 
Python-coded function cannot call itself." This is assuming that we 
exclude implementation-dependent hackery that subverts Python semantics 
by exploiting implementation-specific information that is not part of 
the language or its specification.


Consider

def f(): return f()

This looks bad. Infinite recursion. Bad programmer...
Now continue with

f_orig=f
def f(): return 1
print(f_orig())
# 1

In context, f_orig is not recursive at all, in spite of its appearance.

Now consider this proposal: Introduce a keyword or special name, such as 
'__func__', that means 'this function', to be resolved as such just 
after compilation. With that, "def f(): return __func__()" would be 
directly recursive, and any wrapper would be ignored by the __func__() call.


Wrappers are not needed to memoize with recursion either:

def fib_recur(n, _cache=[1, 1]):
    if n >= len(_cache):
        _cache.append(fib_recur(n-2) + fib_recur(n-1))
    return _cache[n]

Perhaps prettier than fib_iter, posted before, but subject to external 
rebinding and stack overflow.


Both fib_recur and fib_iter calculate fib(100) as 573147844013817084101 
and fib(1000) as 
7033036771142281582183525487718354977018126983635873274260490508715453711819693357974224949456261173348775044924176599108818636326545022364710601205337412127386733998139373125598767690091902245245323403501

in a small fraction of a second.
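
(For what it's worth, 3.2's new functools.lru_cache packages the same 
wrapper-rebinding trick, so the hand-rolled memoizer is no longer needed:)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # The decorator rebinds the name 'fib' to its wrapper, so this
    # "recursive" call goes through the cache, as discussed above.
    return 1 if n <= 1 else fib(n - 1) + fib(n - 2)

print(fib(100))  # 573147844013817084101
```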

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Understanding the Pystone measurement

2011-03-24 Thread tleeuwenb...@gmail.com
Hi there,

Is there a good writeup of what the pystone measurement actually
means? I'm working on benchmarking of some Python code at work, and
I'm interested in how Pystone might be relevant to me. I've tried
googling, but I can't find any introductory / definitional
descriptions of what this module/measurement means.

Any pointers much appreciated!

Regards,
-Tennessee
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Tim Leslie
On 25 March 2011 13:51, scattered  wrote:
> Here is another possibility: you are using Python *interactively* in
> solving cryptograms (as a matter of fact - I was doing exactly this
> yesterday in trying to solve some Playfair ciphers). You have a
> ciphertext that is a stream of letters in the range A...Z. You need to
> consult frequencies of letters, pairs of letters, triples of letters,
> and quadruples of letters that occur. So, you write a script that
> steps through the cipher text, creates a dictionary which records
> frequencies of strings of length <= 4, and, as an added convenience,
> creates bindings of frequencies to these strings. Thus - if you want
> to know how often, say, EFW occurs in the ciphertext you just type EFW
> (rather than freq["EFW"])

And what happens when you want to know the frequency of "if", "def",
"for" or any other variable which matches a keyword?

Tim
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread scattered
On Mar 24, 10:51 pm, scattered  wrote:

[snip]

>  I can easily imagine other
> situations in which a user might want to create a large number of
> bindings for interactive use. Maybe as a teacher (I'm a math teacher)
> you have written a student-class which contains things like methods to
> compute averages, return lists of missing assignments, etc. At the
> prompt you run a script that creates a binding for each student in a
> section to a student object so that you can just type something like
> JohnDoe.missing() to get a list of their missing assignments. Just
> because exec() *can* be misused doesn't mean that there aren't valid
> uses for it. You would be foolish to run exec() on unparsed externally
> supplied data - but perhaps you are running it on data that you
> yourself have generated.
>

I think I might actually implement the above. A quick experiment with
IDLE's Python shell shows that an assignment generated by exec() is
immediately usable with autocomplete. Many of my students have weird
names like Victor Jarinski (a made-up but not untypical example).
Typing in average["Victor Jarinski"] for a dictionary look-up requires
a fair amount of typing, but if I have a top-level binding I can just
type Vic and then invoke auto-complete to get to
VictorJarinski.average() with a fraction of the key-strokes.

-scattered
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Jason Swails
On Thu, Mar 24, 2011 at 4:05 PM, Steven D'Aprano <steve+comp.lang.pyt...@pearwood.info> wrote:

> On Thu, 24 Mar 2011 19:39:21 +0100, Seldon wrote:
>
> > Hi, I have a question about generating variable assignments dynamically.
> [...]
> > Now, I would like to use data contained in this list to dynamically
> > generate assignments of the form "var1 = value1", etc., where var1 is an
> > identifier equal (as a string) to the 'var1' in the list.
>
> Why on earth would you want to do that?
>

Doesn't optparse do something like that?  (I was actually looking for a way
to do this type of thing)  OptionParser.add_option() etc. etc. where you add
a variable and key, yet it assigns it as a new attribute of the Options
class that it returns upon OptionParser.parse_args().

I would see this as a way of making a class like that generalizable.  I
agree that __main__ (or something contained within) would certainly need to
know the name of each variable to be even remotely useful, but that doesn't
mean that each part inside does as well.

It's certainly doable with dicts and just using key/value pairs, but I'd rather do
something like

opts.variable

than

opts.dictionary['variable']

Just my 2c.  Thanks to JM for suggesting setattr, btw.
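
(A rough sketch of the setattr approach for an options-style object; the names 
here are invented for illustration, not optparse's actual internals:)

```python
class Options:
    """Bag of options; each key becomes an attribute (illustrative only)."""
    def __init__(self, values):
        for name, value in values.items():
            setattr(self, name, value)

opts = Options({"variable": "hello", "verbose": True})
print(opts.variable)  # 'hello' -- instead of opts.dictionary['variable']
```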

--Jason
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread scattered
On Mar 24, 7:18 pm, Steven D'Aprano  wrote:
> On Thu, 24 Mar 2011 14:39:34 -0700, scattered wrote:
> > Could try:
>
> >>> my_list = [("x", 7), ("y", 8)]
> >>> for pair in my_list: exec(pair[0] + " = " + str(pair[1]))
> >>> x,y
> (7,8)
>
> Please don't ever do such a thing. The world has enough buggy software
> vulnerable to code injection attacks without you encouraging newbies to
> write more.
>
> If (generic) you, the programmer, can write
>
> my_list = [("x", 7), ("y", 8)]
> for pair in my_list:
>     exec(pair[0] + " = " + str(pair[1]))
>
> in your code, then you should stop messing about and just write:
>
> x = 7
> y = 8
>

Good question - though presumably the OP had some motivation for
wanting to do this dynamically. Possibly some sort of code-template
(though then it would probably make more sense to create a text file
which contains the bindings you want, which could then be expanded to
a full-fledged program).

> instead. The only time this technique is even *possibly* justified is if
> the contents of my_list comes from external data not known at compile-
> time.

Here is another possibility: you are using Python *interactively* in
solving cryptograms (as a matter of fact - I was doing exactly this
yesterday in trying to solve some Playfair ciphers). You have a
ciphertext that is a stream of letters in the range A...Z. You need to
consult frequencies of letters, pairs of letters, triples of letters,
and quadruples of letters that occur. So, you write a script that
steps through the cipher text, creates a dictionary which records
frequencies of strings of length <= 4, and, as an added convenience,
creates bindings of frequencies to these strings. Thus - if you want
to know how often, say, EFW occurs in the ciphertext you just type EFW
(rather than freq["EFW"]) and the Python shell returns the frequency.
While this example may seem far-fetched, I can easily imagine other
situations in which a user might want to create a large number of
bindings for interactive use. Maybe as a teacher (I'm a math teacher)
you have written a student-class which contains things like methods to
compute averages, return lists of missing assignments, etc. At the
prompt you run a script that creates a binding for each student in a
section to a student object so that you can just type something like
JohnDoe.missing() to get a list of their missing assignments. Just
because exec() *can* be misused doesn't mean that there aren't valid
uses for it. You would be foolish to run exec() on unparsed externally
supplied data - but perhaps you are running it on data that you
yourself have generated.

> But that means you're vulnerable to a hostile user injecting code
> into your data:
>
> my_list = [("import os; os.system('echo \"deleting all files...\"'); x",
> 7), ("y", 8)]
> for pair in my_list:
>     exec(pair[0] + " = " + str(pair[1]))
>
> Code injection attacks are some of the most common source of viruses and
> malware, far more common (and much easier to perform!) today than buffer
> overflows. If an unprivileged process can inject code into something that
> a privileged process is running, your computer is compromised.
>
> --
> Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Carl Banks
On Mar 24, 5:37 pm, "Martin v. Loewis"  wrote:
> > The cmp argument doesn't depend in any way on an object's __cmp__
> > method, so getting rid of __cmp__ wasn't any good reason to also get
> > rid of the cmp argument
>
> So what do you think about the cmp() builtin? Should have stayed,
> or was it ok to remove it?

Since it's trivial to implement by hand, there's no point for it to be
a builtin.  There wasn't any point before rich comparisons, either.
I'd vote not merely ok to remove, but probably a slight improvement.
It's probably the least justified builtin other than pow.


> If it should have stayed: what should its implementation have looked like?

Here is how cmp is documented: "The return value is negative if x < y,
zero if x == y and strictly positive if x > y."

So if it were retained as a built-in, the above documentation suggests
the following implementation:

def cmp(x, y):
    if x < y: return -1
    if x == y: return 0
    if x > y: return 1
    raise ValueError('arguments to cmp are not well-ordered')

(Another, maybe better, option would be to implement it so as to have
the same expectations as list.sort, which I believe only requires
__eq__ and __gt__.)


> If it was ok to remove it: how are people supposed to fill out the cmp=
> argument in cases where they use the cmp() builtin in 2.x?

Since it's trivial to implement, they can just write their own cmp
function, and as an added bonus they can work around any peculiarities
with an incomplete comparison set.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Martin v. Loewis
> The cmp argument doesn't depend in any way on an object's __cmp__
> method, so getting rid of __cmp__ wasn't any good reason to also get
> rid of the cmp argument

So what do you think about the cmp() builtin? Should have stayed,
or was it ok to remove it?

If it should have stayed: what should its implementation have looked like?

If it was ok to remove it: how are people supposed to fill out the cmp=
argument in cases where they use the cmp() builtin in 2.x?

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread MRAB

On 24/03/2011 23:49, Steven D'Aprano wrote:

On Thu, 24 Mar 2011 17:47:05 +0100, Antoon Pardon wrote:


However since that seems to be a problem for you I will be more
detailed. The original poster didn't ask for cases in which cmp was
necessary, he asked for cases in which not using cmp was cumbersome.


I'm the original poster, and that's not what I said. I said:

"If anyone has any use-cases for sorting with a comparison function that
either can't be written using a key function, or that perform really
badly when done so, this would be a good time to speak up."

You'll notice that I said nothing about whether writing the code was easy
or cumbersome, and nothing about readability.



I
gave a case where not using cmp was cumbersome. a tuple where you want
it sorted with first item in descending order and second item ascending.


No, I'm sorry, calling sort twice is not cumbersome. In fact, if somebody
gave me code that sorted tuples in that way using a comparison function,
I would immediately rip it out and replace it with two calls to sort: not
only is it usually faster and more efficient, but it's easier to read,
easier to reason about, and easier to write.


from operator import itemgetter
data.sort(key=itemgetter(1))
data.sort(key=itemgetter(0), reverse=True)


If there are two, it isn't too bad, but if there are several (some
ascending, some descending), then it gets a little cumbersome.


A cmp function for this task may have been justified back in the Dark
Ages of Python 2.2, before Python's sort was guaranteed to be stable, but
there's no need for it now.


[snip]
It's one of those things I wouldn't have voted to remove, but now it's
gone, I'm unsure how much I'd like it back! I wouldn't object to it
returning...
--
http://mail.python.org/mailman/listinfo/python-list


Re: why memoizing is faster

2011-03-24 Thread Fons Adriaensen
On Thu, Mar 24, 2011 at 08:12:22PM -0400, Terry Reedy wrote:

> The irony of this is that memoizing 'recursive' functions with a  
> decorator depends on the fact that Python does not have truly recursive  
> functions. A function cannot call itself directly.

I wonder what exactly is meant by that last sentence:

* It doesn't happen (since the function name is evaluated
  to find the function to be called, as you explained).

or

* Even if a variable pointing directly to the function
  were provided (as a global or function argument),
  calling it is not supposed to work (for some reason).

??

Ciao,

-- 
FA

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why memoizing is faster

2011-03-24 Thread Terry Reedy

On 3/24/2011 9:48 AM, Andrea Crotti wrote:

I was showing a nice memoize decorator to a friend using the classic
fibonacci problem.

--8<---cut here---start->8---
def memoize(f, cache={}):
    def g(*args, **kwargs):
        # first must create a key to store the arguments called
        # it's formed by the function pointer, *args and **kwargs
        key = (f, tuple(args), frozenset(kwargs.items()))
        # if the result is not there compute it, store and return it
        if key not in cache:
            cache[key] = f(*args, **kwargs)
        return cache[key]
    return g

# without the caching already for 100 it would be unfeasible
@memoize
def fib(n):
    if n <= 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)


The irony of this is that memoizing 'recursive' functions with a 
decorator depends on the fact that Python does not have truly recursive 
functions. A function cannot call itself directly. It can only call 
whatever callable is bound externally to its definition name. Hence, 
memoize rebinds the definition name (in this case 'fib') to a wrapper, 
so that the function once known as 'fib' no longer calls itself but 
calls the wrapper instead.


If Python did what some have asked, which is to make 'recursive' 
functions actually recursive by replacing a local variable that matches 
the function name (in this 'fib') with a direct reference to the 
function itself, as a constant (and this could be done), then the 
wrapper would not get called from within the function.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Callback mysteries

2011-03-24 Thread Fons Adriaensen
Hello all,

I wonder if someone could explain some of the following.

(Python 3.2)

I have a class which has a method called 'callback()'.
An instance of this class calls a C extension which
then calls back into Python.

In all cases below, two arguments are passed to the C
code and end up in 

PyObject *object;
PyObject *method;

The original solution was:

P:  self -> object,  "callback" -> method
C:  PyObject_CallMethodObjArgs(object, method, 0);

and this works nicely. 

Just out of academic interest, I was wondering if the run-time
lookup of 'callback' could be avoided. My first attempt was:

P:  self -> object, classname.callback -> method
C:  PyObject_CallFunctionObjArgs(method, object, 0); 

and this also works, I'm just not sure it's kosher.

Now in theory 'self.callback' should contain all info
that is required (or not ?). So I also tried:

P:  self -> object, self.callback -> method
C:  PyObject_CallFunctionObjArgs(method, PyMethod_Self(method), 0)

which fails,  

P:  self -> object, self.callback -> method
C:  PyObject_CallFunctionObjArgs(PyMethod_Function(method), 
PyMethod_Self(method), 0);

which also fails.

And indeed the value returned by PyMethod_Self() is not equal to 
the value of object (= self). In fact it seems not to depend on
the calling instance at all. Do I misunderstand what PyMethod_Self()
is supposed to return ?
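
(For reference, a pure-Python sketch of what a bound method carries: `self.callback` 
already holds both the function and the instance, which is what 
PyMethod_Function/PyMethod_Self are meant to unpack. The class name below is 
invented for illustration:)

```python
class Widget:
    def callback(self):
        return "called"

obj = Widget()
bound = obj.callback        # bound method: carries obj with it
plain = Widget.callback     # plain function: needs obj passed explicitly

print(bound() == plain(obj) == "called")  # True
print(bound.__self__ is obj)              # True
print(bound.__func__ is plain)            # True
```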

I also tried 

P:  self -> object, self.callback -> method
C:  PyObject_CallObject(method, 0);

which also fails.

Any comments that help me understand these things will be appreciated !


Ciao,

-- 
FA




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why memoizing is faster

2011-03-24 Thread Terry Reedy

On 3/24/2011 9:48 AM, Andrea Crotti wrote:


def fib_iter(n):
    ls = {0: 1, 1: 1}


Storing a linear array in a dict is a bit bizarre


    for i in range(2, n+1):
        ls[i] = ls[i-1] + ls[i-2]

    return ls[max(ls)]


So is using max(keys) to find the highest index, which you already know 
(it is n). Actually, fib_iter(0) will return fib_iter(1), and in general 
that would be wrong. Above should just be ls[n].


Third, why toss away the array only to recalculate with each call?
Memoizing *is* faster ;-). So do it for the iterative version too.


and what happens is surprising, this version is 20 times slower than the
recursive memoized version.


For the reason Stefan explained and hinted above. Try the following instead:

def fib_iter(n, _cache=[1, 1]):
    k = len(_cache)
    if n >= k:
        for i in range(k, n+1):
            _cache.append(_cache[i-2] + _cache[i-1])
    return _cache[n]

This should be slightly faster than the crazy-key cache for the memoized 
recursive version.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Steven D'Aprano
On Thu, 24 Mar 2011 17:47:05 +0100, Antoon Pardon wrote:

> However since that seems to be a problem for you I will be more
> detailed. The original poster didn't ask for cases in which cmp was
> necessary, he asked for cases in which not using cmp was cumbersome.

I'm the original poster, and that's not what I said. I said:

"If anyone has any use-cases for sorting with a comparison function that 
either can't be written using a key function, or that perform really 
badly when done so, this would be a good time to speak up."

You'll notice that I said nothing about whether writing the code was easy 
or cumbersome, and nothing about readability.


> I
> gave a case where not using cmp was cumbersome. a tuple where you want
> it sorted with first item in descending order and second item ascending.

No, I'm sorry, calling sort twice is not cumbersome. In fact, if somebody 
gave me code that sorted tuples in that way using a comparison function, 
I would immediately rip it out and replace it with two calls to sort: not 
only is it usually faster and more efficient, but it's easier to read, 
easier to reason about, and easier to write.


from operator import itemgetter
data.sort(key=itemgetter(1))
data.sort(key=itemgetter(0), reverse=True)


A cmp function for this task may have been justified back in the Dark 
Ages of Python 2.2, before Python's sort was guaranteed to be stable, but 
there's no need for it now.
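
(The two-pass trick relies on sort stability; a quick check with made-up data:)

```python
from operator import itemgetter

data = [(1, 'b'), (2, 'a'), (1, 'a'), (2, 'b')]

# Secondary key first, then primary: because Python's sort is stable,
# ties on the primary key keep the secondary ordering.
data.sort(key=itemgetter(1))
data.sort(key=itemgetter(0), reverse=True)
print(data)  # [(2, 'a'), (2, 'b'), (1, 'a'), (1, 'b')]
```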


> You then responded, that you could solve that by sorting in multiple
> steps. The problem with this response is that how items are to be
> compared can be decided in a different module from where they are
> actually sorted.

So what? Instead of this:

# Module A decides how to sort, module B actually does the sort.
# In module A:
B.do_sorting(cmp=some_comparison_function)

it will become this:

# In module A:
func = functools.cmp_to_key(some_comparison_function)
B.do_sorting(key=func)

That's hardly cumbersome.

I'm looking for cases where using cmp_to_key doesn't work at all, or 
performs badly, if such cases exist at all.
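
(For completeness, a sketch of cmp_to_key applied to the descending-first, 
ascending-second tuple case from earlier in the thread; the sample data is 
invented:)

```python
from functools import cmp_to_key

def compare(tp1, tp2):
    # Descending on the first item, ascending on the second.
    def basic_cmp(a, b):
        return (a > b) - (a < b)
    return basic_cmp(tp2[0], tp1[0]) or basic_cmp(tp1[1], tp2[1])

data = [(1, 'b'), (2, 'a'), (1, 'a'), (2, 'b')]
result = sorted(data, key=cmp_to_key(compare))
print(result)  # [(2, 'a'), (2, 'b'), (1, 'a'), (1, 'b')]
```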



> First of all, there is no need for a dynamical generated cmp function in
> my provided case. My cmp fumction would simply look like this:
> 
> def cmp(tp1, tp2):
>   return cmp(tp2[0], tp1[0]) or cmp(tp1[1], tp2[1])

"Simply"?

Perhaps that works, I don't know -- but I do know that I can't tell 
whether it works by just looking at it. That makes it complex enough that 
it should have a test suite, to ensure it works the way you expect.

But regardless, nobody argues that a comparison function must always be 
complicated. Or even that a comparison function is never less complicated 
than a key function. That's not the point. The point is to find an 
example where a comparison function will work where no key function is 
either possible or practical.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Steven D'Aprano
On Thu, 24 Mar 2011 14:39:34 -0700, scattered wrote:

> Could try:
> 
>>> my_list = [("x", 7), ("y", 8)]
>>> for pair in my_list: exec(pair[0] + " = " + str(pair[1]))
>>> x, y
(7, 8)


Please don't ever do such a thing. The world has enough buggy software 
vulnerable to code injection attacks without you encouraging newbies to 
write more.

If (generic) you, the programmer, can write 

my_list = [("x", 7), ("y", 8)]
for pair in my_list:
exec(pair[0] + " = " + str(pair[1]))


in your code, then you should stop messing about and just write:

x = 7
y = 8

instead. The only time this technique is even *possibly* justified is if 
the contents of my_list comes from external data not known at compile-
time. But that means you're vulnerable to a hostile user injecting code 
into your data:

my_list = [("import os; os.system('echo \"deleting all files...\"'); x", 
7), ("y", 8)]
for pair in my_list:
exec(pair[0] + " = " + str(pair[1]))


Code injection attacks are some of the most common source of viruses and 
malware, far more common (and much easier to perform!) today than buffer 
overflows. If an unprivileged process can inject code into something that 
a privileged process is running, your computer is compromised.
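When the values truly must come from external data, parse them as literals instead of executing them; `ast.literal_eval` accepts only Python literal syntax and refuses anything executable (a sketch with made-up input data):

```python
import ast

untrusted = [("x", "7"), ("y", "[1, 2, 3]")]
data = {}
for name, text in untrusted:
    # literal_eval evaluates only literals (numbers, strings, lists, ...);
    # function calls and imports raise ValueError instead of running
    data[name] = ast.literal_eval(text)
assert data == {"x": 7, "y": [1, 2, 3]}

blocked = False
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    blocked = True
assert blocked
```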



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Steven D'Aprano
On Thu, 24 Mar 2011 19:39:21 +0100, Seldon wrote:

> Hi, I have a question about generating variable assignments dynamically.
[...]
> Now, I would like to use data contained in this list to dynamically
> generate assignments of the form "var1 = value1", etc., where var1 is an
> identifier equal (as a string) to the 'var1' in the list.

Why on earth would you want to do that?

Named variables are useful when you know the variable name when you are 
writing the code:

number_of_rows = 42
#... and later
for row in range(number_of_rows): pass

Great, that works fine. But if you don't know what the variable is called 
when you are writing the code, how are you supposed to refer to it later?

s = get_string()
create variable with name given by s and value 42
# ... and later
for row in ... um... what goes here???


This sort of dynamic creation of variables is an anti-pattern: something 
you should nearly always avoid doing. The better way of dealing with the 
problem of dynamic references is to use a dict with key:value pairs:

s = get_string()
data = {s: 42}
# ... and later
for row in range(data[s]): pass

In this case, the string s is the key, possibly "number_of_rows". You, 
the programmer, don't care what that string actually is, because you 
never need to refer to it directly. You always refer to it indirectly via 
the variable name s.
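Applied to the kind of list from the original question, the whole thing collapses to a dict and needs no exec at all (a sketch):

```python
pairs = [("x", 7), ("y", 8)]   # the (name, value) input data
data = dict(pairs)             # no exec, no injection risk

# later, look values up by name instead of via a magic variable
assert data["x"] == 7
assert data["y"] == 8
```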


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Chris Rebert
On Thu, Mar 24, 2011 at 1:39 PM, Seldon  wrote:
> Hi, I have a question about generating variable assignments dynamically.

This can frequently be a code smell.

> I have a list of 2-tuples like this
>
> (
> (var1, value1),
> (var2, value2),
> .. ,
> )
>
> where var1, var2, etc. are strings and value1, value2 are generic objects.
>
> Now, I would like to use data contained in this list to dynamically generate
> assignments of the form "var1 = value1", etc., where var1 is an identifier
> equal (as a string) to the 'var1' in the list.
>
> Is there a way to achieve this ?

Is there a reason you can't or don't want to use a dict or object
attributes instead?

Cheers,
Chris
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread scattered
On Mar 24, 2:39 pm, Seldon  wrote:
> Hi, I have a question about generating variable assignments dynamically.
>
> I have a list of 2-tuples like this
>
> (
> (var1, value1),
> (var2, value2),
> .. ,
> )
>
> where var1, var2, etc. are strings and value1, value2 are generic objects.
>
> Now, I would like to use data contained in this list to dynamically
> generate assignments of the form "var1 = value1", etc., where var1 is an
> identifier equal (as a string) to the 'var1' in the list.
>
> Is there a way to achieve this ?
>
> Thanks in advance for any answer.

Could try:

>>> my_list = [("x", 7), ("y", 8)]
>>> for pair in my_list: exec(pair[0] + " = " + str(pair[1]))
>>> x, y
(7, 8)



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Convert ctypes 16 bit c_short array to a 32 bit numpy array

2011-03-24 Thread Wanderer
On Mar 24, 3:14 pm, Wanderer  wrote:
> I'm using ctypes to have a dll fill a buffer with 16 bit data. I then
> want to convert this data to a numpy array. The code snippet below
> converts the data from 16 bit to 32 bit, but two 16 bit numbers are
> concatenated to make a 32 bit number and half the array is zero.
>
>         Buffer = (c_short * byteSize)()
>         self.cam.Qframe.pBuffer = cast(pointer(Buffer), c_void_p)
>         perr = self.cam.GrabFrame()
>         image1 = np.frombuffer(Buffer, int)
>         xdim = self.cam.Qframe.width
>         ydim = self.cam.Qframe.height
>         image2 = image1.reshape(xdim, ydim)
>
> image2 looks like
>
> [[6291555 6357091 6160481 ..., 6488160 6226020 6553697]
>  [6488163 6422625 6684770 ..., 6422624 6553697 6553696]
>  [6488160 6357091 6226018 ..., 6815842 6422627 6553696]
>  ...,
>  [      0       0       0 ...,       0       0       0]
>  [      0       0       0 ...,       0       0       0]
>  [      0       0       0 ...,       0       0       0]]
>
> How do I convert 16 bit data to 32 bit data?
> Thanks

I figured it out.

Buffer = (c_ubyte * byteSize)()
self.cam.Qframe.pBuffer = cast(pointer(Buffer), c_void_p)
perr = self.cam.GrabFrame()
image1 = np.frombuffer(Buffer, np.uint16)
xdim = self.cam.Qframe.width
ydim = self.cam.Qframe.height
image2 = image1.reshape(xdim, ydim)

Though Eclipse thinks
Buffer = (c_ubyte * byteSize)()

is an error.
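The original symptom is easy to reproduce without a camera or numpy: reinterpreting the same bytes with a 32-bit type fuses adjacent 16-bit values, which is what `np.frombuffer(Buffer, int)` was doing. A stdlib-only sketch of mine:

```python
import ctypes
import struct

buf = (ctypes.c_short * 4)(1, 2, 3, 4)   # 8 bytes of 16-bit data
raw = ctypes.string_at(ctypes.addressof(buf), ctypes.sizeof(buf))

as_16bit = struct.unpack("4h", raw)      # read back as 16-bit: values intact
as_32bit = struct.unpack("2i", raw)      # read as 32-bit: adjacent pairs fused

assert as_16bit == (1, 2, 3, 4)
assert len(as_32bit) == 2 and as_32bit[0] not in (1, 2, 3, 4)
```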
-- 
http://mail.python.org/mailman/listinfo/python-list


Convert ctypes 16 bit c_short array to a 32 bit numpy array

2011-03-24 Thread Wanderer

I'm using ctypes to have a dll fill a buffer with 16 bit data. I then
want to convert this data to a numpy array. The code snippet below
converts the data from 16 bit to 32 bit, but two 16 bit numbers are
concatenated to make a 32 bit number and half the array is zero.

Buffer = (c_short * byteSize)()
self.cam.Qframe.pBuffer = cast(pointer(Buffer), c_void_p)
perr = self.cam.GrabFrame()
image1 = np.frombuffer(Buffer, int)
xdim = self.cam.Qframe.width
ydim = self.cam.Qframe.height
image2 = image1.reshape(xdim, ydim)

image2 looks like

[[6291555 6357091 6160481 ..., 6488160 6226020 6553697]
 [6488163 6422625 6684770 ..., 6422624 6553697 6553696]
 [6488160 6357091 6226018 ..., 6815842 6422627 6553696]
 ...,
 [  0   0   0 ...,   0   0   0]
 [  0   0   0 ...,   0   0   0]
 [  0   0   0 ...,   0   0   0]]

How do I convert 16 bit data to 32 bit data?
Thanks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic assigments

2011-03-24 Thread Jean-Michel Pichavant

Seldon wrote:

Hi, I have a question about generating variable assignments dynamically.

I have a list of 2-tuples like this

(
(var1, value1),
(var2, value2),
.. ,
)

where var1, var2, etc. are strings and value1, value2 are generic 
objects.


Now, I would like to use data contained in this list to dynamically 
generate assignments of the form "var1 = value1", etc., where var1 is an 
identifier equal (as a string) to the 'var1' in the list.


Is there a way to achieve this ?

Thanks in advance for any answer.

class Pouf: pass

var1 = 'paf'
value1 = 1
setattr(Pouf, var1, value1)

print Pouf.paf
> 1


JM
--
http://mail.python.org/mailman/listinfo/python-list


dynamic assigments

2011-03-24 Thread Seldon

Hi, I have a question about generating variable assignments dynamically.

I have a list of 2-tuples like this

(
(var1, value1),
(var2, value2),
.. ,
)

where var1, var2, etc. are strings and value1, value2 are generic objects.

Now, I would like to use data contained in this list to dynamically 
generate assignments of the form "var1 = value1", etc., where var1 is an 
identifier equal (as a string) to the 'var1' in the list.


Is there a way to achieve this ?

Thanks in advance for any answer.
--
http://mail.python.org/mailman/listinfo/python-list


About gmon.out for profiling

2011-03-24 Thread Baskaran Sankaran
Hi,

I have two questions regarding the profiling. My python version
automatically generates gmon.out for every script that I run and so I guess
that it must have been compiled with the appropriate option.

Firstly, it is not clear to me how I can use this. I've run python with
the -m cProfile switch and I wonder if they give different information.

Secondly, I would like to know how I can disable the generation of gmon.out.
Sometimes it is plain annoying to have these files in different places.

I would appreciate suggestions on both questions.

Thanks
- Baskaran
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: autoscale y to current xrange in view - matplotlib

2011-03-24 Thread Blockheads Oi Oi

On 23/03/2011 18:17, urban_gibbon wrote:
[snip]

You're likely to get answers on the matplotlib users mailing list 
https://lists.sourceforge.net/lists/listinfo/matplotlib-users


--
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Ian Kelly
On Thu, Mar 24, 2011 at 10:47 AM, Antoon Pardon
 wrote:
>> That's not what you wrote before.  You wrote "I can't do the sort in
>> multiple steps."  I was just responding to what you wrote.
>
> That is because I tend to assume some intelligence in those I
> communicate with, so that I don't need to be pedantically clear and
> spell everything out.
>
> However since that seems to be a problem for you I will be more
> detailed. The original poster didn't ask for cases in which cmp
> was necessary, he asked for cases in which not using cmp was
> cumbersome.

False.  He asked for either sort of case:

> If anyone has any use-cases for sorting with a comparison function that
> either can't be written using a key function, or that perform really
> badly when done so, this would be a good time to speak up.


> You then responded, that you could solve that by sorting in multiple
> steps.

No, I did not.

> The problem with this response is that how items are to be
> compared can be decided in a different module from where they are
> actually sorted. So if we would accept this MO, this would mean
> that any module that needed to sort something according to a provided
> key, would need to provide for the possibility that the sorting would
> have to be done in multiple steps. So, in order to avoid complexity
> in the internal python sort, we would increase the complexity of
> all library code that would need to sort things in a user-provided
> way and didn't want to barf in such an event.

That is a perfectly reasonable argument that you apparently were too
lazy to even mention before.  It is completely different from and not
at all implied by the patently absurd statement "So this list is
sorted within the class but how it is sorted is decided outside the
class. So I can't do the sort in multiple steps."

I would not say "The sky is green" when what I mean is "The color of
the sky is the result of the atmosphere scattering the white light
from the sun in the smaller wavelengths, including some green light."
Would you?

> So when I wrote I couldn't do that, I implicitly meant if you
> want a less cumbersome solution than using cmp, because your
> solution would make all library code more cumbersome.
>
>> > I think I have provided such a case. If you don't agree then
>> > don't just give examples of what I can do, argue how your solution
>> > would be a less ugly or more natural way to handle this.
>>
>> Well, the alternative is to generate the cmp function from the
>> externally selected keys dynamically at runtime, is it not?  Something
>> like this:
>>
>> def get_cmp(keys):
>>     def cmp(a, b):
>>         for (key, reversed) in keys:
>>             if reversed:
>>                 result = cmp(key(b), key(a))
>>             else:
>>                 result = cmp(key(a), key(b))
>>             if result != 0:
>>                 return result
>>         return result
>>     return cmp
>>
>> I fail to see how that is any more natural than my solution, unless
>> you have another way of doing it.  And it's certainly not going to be
>> any faster.
>
> First of all, there is no need for a dynamically generated cmp function in
> my provided case. My cmp function would simply look like this:
>
> def cmp(tp1, tp2):
>  return cmp(tp2[0], tp1[0]) or cmp(tp1[1], tp2[1])

I misunderstood your use case, then.  When you wrote that "how it is
sorted is decided outside of the class" in the context of sorting
based on multiple keys, I took that to imply that the external code
was deciding the sorting at runtime, e.g. that perhaps your priority
queue was being displayed in a GUI table that can be sorted on
multiple columns simultaneously.  But evidently you expect that this
key-based sort order will always be hard-coded?

> Second, there is not only the question of what is more natural, but
> there is also the question of how you are spreading the complexity.
> Your first solution would mean spreading the complexity to all library
> code where the user could provide the order-relation of the items that
> would then be sorted within the library. This alternative of your, even
> if too complicated than needed would mean that libary code could just
> use the sort method and the complicated code is limited to those
> specific items that need it.

On the other hand, if at some point your library really does need to
accept a dynamically generated sort order, then you are pushing the
complexity onto the client code by forcing it to generate a cmp
function like the one I posted above.  But perhaps that will never be
the case for your purpose, in which case I take your point.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Antoon Pardon
On Thu, Mar 24, 2011 at 09:06:44AM -0600, Ian Kelly wrote:
> On Thu, Mar 24, 2011 at 3:23 AM, Antoon Pardon
>  wrote:
> > Sure I can do that. I can do lots of things like writing a CMP class
> > that I will use as a key and where I can implement the logic for
> > comparing the original objects, which I otherwise would have put in a
> > cmp function. I thought this wasn't about how one can get by without
> > the cmp argument. This was about cases where the cmp argument was the
> > more beautiful or natural way to handle this case.
> 
> That's not what you wrote before.  You wrote "I can't do the sort in
> multiple steps."  I was just responding to what you wrote.

That is because I tend to assume some intelligence in those I
communicate with, so that I don't need to be pedantically clear and
spell everything out.

However, since that seems to be a problem for you, I will be more
detailed. The original poster didn't ask for cases in which cmp
was necessary; he asked for cases in which not using cmp was
cumbersome. I gave a case where not using cmp was cumbersome:
a tuple, where you want it sorted with the first item in descending
order and the second item ascending.

You then responded, that you could solve that by sorting in multiple
steps. The problem with this response is that how items are to be
compared can be decided in a different module from where they are
actually sorted. So if we would accept this MO, this would mean
that any module that needed to sort something according to a provided
key, would need to provide for the possibility that the sorting would
have to be done in multiple steps. So, in order to avoid complexity
in the internal python sort, we would increase the complexity of
all library code that would need to sort things in a user-provided
way and didn't want to barf in such an event.

So when I wrote I couldn't do that, I implicitly meant if you
want a less cumbersome solution than using cmp, because your
solution would make all library code more cumbersome.

> > I think I have provided such a case. If you don't agree then
> > don't just give examples of what I can do, argue how your solution
> > would be a less ugly or more natural way to handle this.
> 
> Well, the alternative is to generate the cmp function from the
> externally selected keys dynamically at runtime, is it not?  Something
> like this:
> 
> def get_cmp(keys):
> def cmp(a, b):
> for (key, reversed) in keys:
> if reversed:
> result = cmp(key(b), key(a))
> else:
> result = cmp(key(a), key(b))
> if result != 0:
> return result
> return result
> return cmp
> 
> I fail to see how that is any more natural than my solution, unless
> you have another way of doing it.  And it's certainly not going to be
> any faster.

First of all, there is no need for a dynamically generated cmp function in
my provided case. My cmp function would simply look like this:

def cmp(tp1, tp2):
  return cmp(tp2[0], tp1[0]) or cmp(tp1[1], tp2[1])

Second, there is not only the question of what is more natural, but
there is also the question of how you are spreading the complexity.
Your first solution would mean spreading the complexity to all library
code where the user could provide the order-relation of the items that
would then be sorted within the library. This alternative of yours, even
if more complicated than needed, would mean that library code could just
use the sort method and the complicated code is limited to those
specific items that need it.

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Instant File I/O

2011-03-24 Thread Jordan Meyer
That did the trick! Thanks!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: os.stat bug?

2011-03-24 Thread Laszlo Nagy



It's the OS kernel. If it was Python or the C library, sending SIGKILL
would result in immediate termination.

Is the disk interface operating in PIO mode? A slow disk shouldn't cause
100% CPU consumption; the OS would just get on with something else (or
just idle) while waiting for data to become available. But if it's
having to copy data from the controller one word at a time, that could
cause it (and would also make the disk appear slow).

This is a RAID 1+0 array with 10 hard disks plus a SCSI controller with 
2GB write-back cache. I don't think it has anything to do with disk 
speed. The CPU load goes up to 100% (the disk I/O does not go up that much).


I'm working on a different storage method. We will be storing these 
logs in a real database instead of separate CSV files. So probably this 
problem will cease.


Thanks,

   Laszlo


--
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Ian Kelly
On Thu, Mar 24, 2011 at 3:23 AM, Antoon Pardon
 wrote:
> Sure I can do that. I can do lots of things like writing a CMP class
> that I will use as a key and where I can implement the logic for
> comparing the original objects, which I otherwise would have put in a
> cmp function. I thought this wasn't about how one can get by without
> the cmp argument. This was about cases where the cmp argument was the
> more beautiful or natural way to handle this case.

That's not what you wrote before.  You wrote "I can't do the sort in
multiple steps."  I was just responding to what you wrote.

> I think I have provided such a case. If you don't agree then
> don't just give examples of what I can do, argue how your solution
> would be a less ugly or more natural way to handle this.

Well, the alternative is to generate the cmp function from the
externally selected keys dynamically at runtime, is it not?  Something
like this:

def get_cmp(keys):
    def compare(a, b):
        # named "compare", not "cmp": an inner function called cmp
        # would shadow the builtin cmp it needs to call
        result = 0
        for (key, reverse) in keys:
            if reverse:
                result = cmp(key(b), key(a))
            else:
                result = cmp(key(a), key(b))
            if result != 0:
                return result
        return result
    return compare

I fail to see how that is any more natural than my solution, unless
you have another way of doing it.  And it's certainly not going to be
any faster.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Mel
Carl Banks wrote:

> On Mar 23, 1:38 pm, Paul Rubin  wrote:
>> Well, I thought it was also to get rid of 3-way cmp in general, in favor
>> of rich comparison.
>
> Supporting both __cmp__ and rich comparison methods of a class does
> add a lot of complexity.  The cmp argument of sort doesn't.
>
> The cmp argument doesn't depend in any way on an object's __cmp__
> method, so getting rid of __cmp__ wasn't any good readon to also get
> rid of the cmp argument; their only relationship is that they're
> spelled the same.  Nor is there any reason why cmp being a useful
> argument of sort should indicate that __cmp__ should be retained in
> classes.

I would have thought that the upper limit of the cost of supporting cmp=
and key= would be two different front-ends to the internal sort.

Mel.

-- 
http://mail.python.org/mailman/listinfo/python-list


No more Python support in NetBeans 7.0

2011-03-24 Thread Kees Bakker
Hi,

Sad news (for me, at least): in the upcoming version 7.0 of NetBeans
there will be no Python plugin anymore.

I have been using NetBeans for Python development for a while now
and I was very happy with it.

See this archive for details:
http://netbeans.org/projects/www/lists/nbpython-dev/archive/2010-11/message/0
http://netbeans.org/projects/www/lists/nbpython-dev/archive/2011-01/message/0
-- 
Kees

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why memoizing is faster

2011-03-24 Thread Stefan Behnel

Andrea Crotti, 24.03.2011 14:48:

I was showing a nice memoize decorator to a friend using the classic
fibonacci problem.

--8<---cut here---start->8---
   def memoize(f, cache={}):
   def g(*args, **kwargs):
   # first must create a key to store the arguments called
   # it's formed by the function pointer, *args and **kwargs
   key = ( f, tuple(args), frozenset(kwargs.items()) )
   # if the result is not there compute it, store and return it
   if key not in cache:
   cache[key] = f(*args, **kwargs)

   return cache[key]

   return g

   # without the caching already for 100 it would be unfeasible
   @memoize
   def fib(n):
   if n<= 1:
   return 1
   else:
   return fib(n-1) + fib(n-2)

--8<---cut here---end--->8---


It works very well, but he said that the iterative version would be
anyway faster.

But I tried this

--8<---cut here---start->8---
   def fib_iter(n):
   ls = {0: 1, 1:1}
   for i in range(2, n+1):
   ls[i] = ls[i-1] + ls[i-2]

   return ls[max(ls)]
--8<---cut here---end--->8---

and what happens is surprising: this version is 20 times slower than the
recursive memoized version.


What have you used for timing? "timeit"? Note that the memoized version 
saves the results globally, so even the end result is cached, and the next 
call takes the result from there, instead of actually doing anything. The 
timeit module repeatedly calls the code and only gives you the best runs, 
i.e. those where the result was already cached.
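The effect can be shown without timeit at all by counting how often the wrapped function body actually runs (a simplified sketch of the decorator from the thread, ignoring keyword arguments):

```python
def memoize(f, cache={}):
    def g(*args):
        key = (f, args)
        if key not in cache:
            cache[key] = f(*args)
        return cache[key]
    return g

calls = []

@memoize
def fib(n):
    calls.append(n)
    return 1 if n <= 1 else fib(n - 1) + fib(n - 2)

fib(30)
assert len(calls) == 31   # each subproblem computed exactly once
fib(30)
assert len(calls) == 31   # "re-timing" the call does no work at all
```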


Stefan

--
http://mail.python.org/mailman/listinfo/python-list


why memoizing is faster

2011-03-24 Thread Andrea Crotti
I was showing a nice memoize decorator to a friend using the classic
fibonacci problem.

--8<---cut here---start->8---
  def memoize(f, cache={}):
  def g(*args, **kwargs):
  # first must create a key to store the arguments called
  # it's formed by the function pointer, *args and **kwargs
  key = ( f, tuple(args), frozenset(kwargs.items()) )
  # if the result is not there compute it, store and return it
  if key not in cache:
  cache[key] = f(*args, **kwargs)
  
  return cache[key]
  
  return g
  
  # without the caching already for 100 it would be unfeasible
  @memoize
  def fib(n):
  if n <= 1:
  return 1
  else:
  return fib(n-1) + fib(n-2)
  
--8<---cut here---end--->8---


It works very well, but he said that the iterative version would be
anyway faster.

But I tried this
  
--8<---cut here---start->8---
  def fib_iter(n):
  ls = {0: 1, 1:1}
  for i in range(2, n+1):
  ls[i] = ls[i-1] + ls[i-2]
  
  return ls[max(ls)]
--8<---cut here---end--->8---

and what happens is surprising: this version is 20 times slower than the
recursive memoized version.

I first had a list of the two last elements and I thought that maybe the
swaps were expensive, now with a dictionary it should be very fast.

Am I missing something?
Thanks, 
Andrea
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "in house" pypi?

2011-03-24 Thread Miki Tebeka
> The easiest solution is to use a plain file system. Make a directory per
> project, and put all distributions of the project into the directory.
> Then have Apache serve the parent directory, with DirectoryIndex turned
> on.
Great, thanks!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threading with Socket Server

2011-03-24 Thread baloan
If you don't mind using the coroutine library eventlet, you can
implement a single-threaded solution. See the example below. For your
use case you need to change controller to load the shelve every
eventlet.sleep(n) seconds.

Regards, Andreas

# eventlet single thread demo

""" prc_publish.eventlet

Price Publisher
"""

# imports

from argparse import ArgumentParser
import eventlet
import logging
import os
import random
import sys
import cPickle as pickle

LOG = logging.getLogger()

# definitions

def main(argv=None):
if argv is None:
argv = sys.argv
LOG.info("starting '%s %s'", os.path.basename(argv[0]), " ".join(argv[1:]))
# parse options and arguments
parser = ArgumentParser(description="Price Publisher")
parser.add_argument("-f", "--file", dest="filename",
  help="read configuration from %(dest)s")
parser.add_argument("-p", "--port", default=8001, type=int,
  help="server port [default: %(default)s]")
args = parser.parse_args()
print args
# create product dict
prds = { }
pubqs = []
for n in range(10):
key = "AB" + "{:04}".format(n)
prds["AB" + key] = Pricer(key)
# start one thread for price changes
eventlet.spawn(controller, prds, pubqs)
address = ('localhost', 8010)
eventlet.spawn(listener, address, pubqs)
# main thread runs eventlet loop
while True:
eventlet.sleep(10)


def listener(address, pubqs):
sock = eventlet.listen(address)
while True:
LOG.info('waiting for connection on %s', address)
cx, remote = sock.accept()
LOG.info("accepting connection from %s", remote)
inq = eventlet.queue.Queue()
pubqs.append(inq)
eventlet.spawn(receiver, cx)
eventlet.spawn(publisher, pubqs, inq, cx)


def publisher(pubqs, inq, cx):
LOG.info("Publisher running")
try:
while True:
# what happens if client does not pick up
# what happens if client dies during queue wait
try:
with eventlet.Timeout(1):
item = inq.get()
s = pickle.dumps(item, pickle.HIGHEST_PROTOCOL)
# s = "{0[0]} {0[1]}\n\r".format(item)
cx.send(s)
except eventlet.Timeout:
# raises IOError if connection lost
cx.fileno()
# if connection closes
except IOError, e:
LOG.info(e)
# make sure to close the socket
finally:
cx.close()
pubqs.remove(inq)
LOG.info("Publisher terminated")


def receiver(cx):
LOG.info("Receiver running")
try:
while True:
# what happens if client does not pick up
s = cx.recv(4096)
if not s:
break
LOG.info(s)
# if connection closes
except IOError, e:
LOG.info(e)
# make sure to close the socket
finally:
cx.close()
LOG.info("Receiver terminated")

def controller(prds, pubqs):
while True:
LOG.info("controller: price update cycle, %i pubqs",
len(pubqs))
Pricer.VOLA = update_vola(Pricer.VOLA)
for prd in prds.values():
prd.run()
for pubq in pubqs:
pubq.put((prd.name, prd.prc))
eventlet.sleep(5)

def update_vola(old_vola):
new_vola = max(old_vola + random.choice((-1, +1)) * 0.01, 0.01)
return new_vola

class Pricer(object):
VOLA = 0.01
def __init__(self, name):
self.name = name
self.prc = random.random() * 100.0

def run(self):
self.prc += random.choice((-1, +1)) * self.prc * self.VOLA


if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s.%(msecs)03i %(levelname).4s %(funcName)10s: %(message)s',
datefmt='%H:%M:%S')
main()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "in house" pypi?

2011-03-24 Thread Christian Heimes
Am 24.03.2011 12:49, schrieb Billy Earney:
> Another possible solution, would be to use urlimport
> http://pypi.python.org/pypi/urlimport/
> if the packages are 100% python (no
> c, etc), you could create a single repository, serve that via a web server,
> and users could easy import modules without even installing them..

At work we keep all third party and pure Python packages in our svn
repository. It's a practical solution that makes deployment and tracking
of changes very easy. We also have a full installation of Python 2.6 and
JRE including extension modules in our svn. Only on Unix do we install
Python and extensions from source.

Christian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "in house" pypi?

2011-03-24 Thread Billy Earney
Another possible solution, would be to use urlimport
http://pypi.python.org/pypi/urlimport/
if the packages are 100% python (no
c, etc), you could create a single repository, serve that via a web server,
and users could easy import modules without even installing them..

This isn't exactly what you asked for, but depending on your situation,
could be useful.

Hope this helps.

On Thu, Mar 24, 2011 at 6:22 AM, Christian Heimes  wrote:

> Am 24.03.2011 04:19, schrieb Miki Tebeka:
> > Greetings,
> >
> > My company want to distribute Python packages internally. We would like
> something like an internal PyPi where people can upload and easy_install
> from packages.
> >
> > Is there such a ready made solution?
> > I'd like something as simple as possible, without my install headache.
>
> Plain simple solution:
>
>  * configure a host in your apache config and point it to a directory on
> the file system
>  * create directory simple  and turn directory index on for it and all
> its descendants
>  * create a directory for every package and download the files into
> these directories
>
> The download link for e.g. lxml 2.3 should look like
> http://your.pipy.com/simple/lxml/lxml-2.3.tar.gz. Now point run
> "easy_install --index-url http://your.pipy.com/simple lxml==2.3"
>
> Christian
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "in house" pypi?

2011-03-24 Thread Christian Heimes
Am 24.03.2011 04:19, schrieb Miki Tebeka:
> Greetings,
> 
> My company want to distribute Python packages internally. We would like 
> something like an internal PyPi where people can upload and easy_install from 
> packages.
> 
> Is there such a ready made solution?
> I'd like something as simple as possible, without my install headache.

Plain simple solution:

 * configure a host in your apache config and point it to a directory on
the file system
 * create a directory "simple" and turn directory index on for it and all
its descendants
 * create a directory for every package and download the files into
these directories

The download link for e.g. lxml 2.3 should look like
http://your.pipy.com/simple/lxml/lxml-2.3.tar.gz. Now run
"easy_install --index-url http://your.pipy.com/simple lxml==2.3"

Christian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Antoon Pardon
On Wed, Mar 23, 2011 at 05:51:07PM +0100, Stefan Behnel wrote:
> >>
> >>You can use a stable sort in two steps for that.
> >
> >Which isn't helpfull if where you decide how they have to be sorted is
> >not the place where they are actually sorted.
> >
> >I have a class that is a priority queue. Elements are added at random but
> >are removed highest priority first. The priority queue can have a key or
> >a cmp function for deciding which item is the highest priority. It
> >can also take a list as an initializor, which will then be sorted.
> >
> >So this list is sorted within the class but how it is sorted is decided
> >outside the class. So I can't do the sort in multiple steps.
> 
> That sounds more like a use case for heap sort than for Python's
> builtin list sort. See the heapq module.

No, the heapq module is not useful here. The heapq functions don't have
a cmp or a key argument, so you can't use them with priorities that
differ from the normal order of the items.

Beyond that, solving this particular problem is beside the point. It
was just an illustration of how sorting a list can happen in a place
different from where the order relation of the items is decided. That
fact makes it less obvious to sort in multiple steps at the place where
the sort actually happens.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Antoon Pardon
On Wed, Mar 23, 2011 at 10:40:11AM -0600, Ian Kelly wrote:
> On Wed, Mar 23, 2011 at 9:14 AM, Antoon Pardon wrote:
> > Which isn't helpfull if where you decide how they have to be sorted is
> > not the place where they are actually sorted.
> >
> > I have a class that is a priority queue. Elements are added at random but
> > are removed highest priority first. The priority queue can have a key or
> > a cmp function for deciding which item is the highest priority. It
> > can also take a list as an initializor, which will then be sorted.
> >
> > So this list is sorted within the class but how it is sorted is decided
> > outside the class. So I can't do the sort in multiple steps.
> 
> You can't do this?
> 
> for (key, reversed) in self.get_multiple_sort_keys():
> self.pq.sort(key=key, reversed=reversed)

Sure, I can do that. I can do lots of things, like writing a CMP class
that I use as a key, implementing in it the comparison logic for the
original objects that I would otherwise have put in a cmp function. But
I thought this wasn't about how one can get by without the cmp
argument. This was about cases where the cmp argument is the more
beautiful or natural way to handle things.

I think I have provided such a case. If you don't agree, then don't
just give examples of what I can do; argue why your solution would be
a less ugly or more natural way to handle this.
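Incidentally, the "CMP class used as a key" trick is exactly what functools.cmp_to_key (added in Python 2.7/3.2) implements, so an old-style cmp function can still be plugged into sort. A small sketch (the comparison function is made up for illustration):

```python
from functools import cmp_to_key

def reverse_cmp(a, b):
    # old-style comparison: negative, zero, or positive result;
    # arguments swapped to get descending order
    return (b > a) - (b < a)

data = [3, 1, 2]
data.sort(key=cmp_to_key(reverse_cmp))
print(data)  # [3, 2, 1]
```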
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Special logging module needed

2011-03-24 Thread Laszlo Nagy

On 2011-03-23 19:33, Dan Stromberg wrote:


On Wed, Mar 23, 2011 at 7:37 AM, Laszlo Nagy wrote:


I was also thinking about storing data in a gdbm database. One
file for each month storing at most 100 log messages for every key
value. Then one file for each day in the current month, storing
one message for each key value. Incremental backup would be easy,
and reading back old messages would be fast enough (just need to
do a few hash lookups). However, implementing a high availability
service around this is not that easy.


I think a slight variation of this sounds like a good bet for you.  
But when you "open" a database, create a temporary copy, and when you 
close the database, rename it back to its original name.  Then your 
backups should be able to easily get a self-consistent (if not up to 
the millisecond) snapshot.
My idea was to open all databases in read-only mode, except the one for 
the last day. So it will be possible to archive these files (except the 
last day's). No need to make copies. I have also developed an algorithm 
that merges the database of the "previous day" into the database of 
the "month of the previous day". This happens when a day switch occurs. 
The algorithm detects this, and it can merge the databases while log 
messages are still being added. The service is only suspended twice a 
day, when the fragmented and defragmented databases are switched. But 
that is only a single "file rename" operation and it takes less than 0.1 
seconds. So the algorithm is ready and I can implement it, but it is not 
easy to do. I could spend many days on it and then it may turn out 
that it is not as efficient as I thought.


I cannot believe that others didn't run into the same problem. This is 
why I posted to the list. I don't want to reinvent the wheel if I don't 
need to.



Or did you have some other problem in mind for the gdbm version?

Nope.


BTW, avoid huge directories, of course, especially if you don't have 
hashed or btree directories. One way is to compute a longish hash key 
(SHA?) and use a trie-like structure in the filesystem, with 
Fibonacci-length chunks of the hash key becoming directories and 
subdirectories.

Hmm that's a good idea. Thanks!
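A sketch of that sharding idea (function name, chunk lengths, and paths are my own illustrative choices, not anything prescribed in the thread):

```python
import hashlib
import os

def shard_path(root, key, chunks=(1, 1, 2, 3)):
    """Map a key to nested directories using Fibonacci-length chunks
    of its SHA-1 hex digest; the remainder becomes the leaf name."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    parts, pos = [], 0
    for n in chunks:
        parts.append(digest[pos:pos + n])
        pos += n
    parts.append(digest[pos:])   # leaf: the rest of the digest
    return os.path.join(root, *parts)

print(shard_path("/var/log/msgs", "session-42"))
```

Each directory level then holds at most 16**n entries for its chunk length n, so no single directory grows huge.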

   L



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Guido rethinking removal of cmp from sort method

2011-03-24 Thread Paul Rubin
Dennis Lee Bieber  writes:
>   The first half of the problem description -- "Elements are added at
> random" seems more suited to an in-place insertion sort method. 

This is precisely what a priority queue is for.  Insertions take 
O(log n) time and there's very little space overhead in heapq's
list-based implementation.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Instant File I/O

2011-03-24 Thread Charles

"jam1991" wrote in message 
news:c0c76bc4-8923-4a46-9c36-6e1a0375f...@l11g2000yqb.googlegroups.com...
[snip]
> they sign into the program with; however, this information doesn't
> appear in the file until after the program has closed. This poses a
> problem for retrieving the up-to-date statistics data during the same
> session. Is there anyway I can fix this? I'm using .write() to write
[snip]

.flush() ?
From http://www.tutorialspoint.com/python/file_methods.htm
file.flush()
Flush the internal buffer, like stdio's fflush. This may be a no-op on some 
file-like objects.
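A small sketch of the fix (the file name and message are made up):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "stats.txt")
f = open(path, "w")
f.write("user logged in\n")
f.flush()                 # push Python's internal buffer to the OS now
os.fsync(f.fileno())      # optional: ask the OS to write it to disk too

# A second reader sees the data while the file is still open:
print(open(path).read())  # user logged in
f.close()
```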
Charles


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "in house" pypi?

2011-03-24 Thread John Nagle

On 3/23/2011 8:19 PM, Miki Tebeka wrote:

Greetings,

My company wants to distribute Python packages internally. We would
like something like an internal PyPI where people can upload packages
and easy_install from them.

Is there such a ready made solution? I'd like something as simple as
possible, without my install headache.

Thanks, -- Miki


  PyPI isn't a code repository, like CPAN or SourceForge.
It's mostly a collection of links.

  Take a look at CPAN, Perl's package repository. That's
well organized and useful.  Modules are stored in a common archive
after an approval process, and can be downloaded and installed
in a standard way.

"easy_install" generally isn't easy.  It has some built-in
assumptions about where things are stored, assumptions which
often don't hold true.

John Nagle
--
http://mail.python.org/mailman/listinfo/python-list