Re: empty lists vs empty generators

2005-05-02 Thread jfj
Brian Roberts wrote:

> I'm using generators and iterators more and more instead of
> passing lists around, and prefer them.  However, I'm not clear on the
> best way to detect an empty generator (one that will return no items)
> when some sort of special case handling is required.
>

Usually it will be the job of the generator to signal something like 
this.  I think a possible way might be:

 class GeneratorEmpty(Exception): pass

 def generator():
     if not X:
         raise GeneratorEmpty
     for i in X:
         yield i

 try:
     for x in generator():
         something(x)
 except GeneratorEmpty:
     generator_special_case()

The trick is that when generators raise exceptions they terminate.
Although this is probably not what you want.  The thing is that you
cannot know if a generator will return any elements until you call
its next() method.


> Q2: Is there a way that handles both lists and generators, so I don't
> have to worry about which one I've got?

I don't think this is possible.  A generator must be called (with
next()) in order for its code to take over and see if it is empty or
not.  Unlike the list.
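One workaround (not from the thread, and using today's Python syntax) handles both questions at once: pull the first item eagerly, and if that succeeds, chain it back in front of the rest. The `consume` helper and its `sum` payload are made up for illustration:

```python
import itertools

def consume(iterable):
    # Works for lists and generators alike: try to pull the first item.
    it = iter(iterable)
    try:
        first = next(it)
    except StopIteration:
        return None              # empty: the caller's special case
    # Chain the first item back in front of the rest and process everything.
    return sum(itertools.chain([first], it))

print(consume([]))                    # None
print(consume(x for x in range(4)))   # 6
print(consume([1, 2, 3]))             # 6
```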


jfj

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can .py be complied?

2005-04-28 Thread jfj
[EMAIL PROTECTED] wrote:
IMO the fact that so many people ask
"How can I create executables in Python on Windows"
indicates that the standard "batteries included" Windows Python
distribution is missing a vital battery. There are tools such as
py2exe, but this functionality should be built-in, so that a newbie to
Python can just download it, type
python -o foo.exe foo.py
at the command line, and get an executable, without any further effort.

Since this is about Windows, and Windows users just want everything in
".exe" form (no matter if it also contains spyware) and don't care
about its size (they just want the damn exe), and since there is
zero chance that Python will be included in the next Windows
distribution while these people still want the exe (they do, really),
I think I have a convenient solution for them.
/* small program in C in a self-extracting archive */
if (have_application ("Python")) {
have_python:
    system ("python.exe my_application.py");
} else {
    printf ("This software requires Python. Please wait while the "
            "necessary components are installed.\n");
    download_python_from_python_org ();
    system ("install_python.exe");
    goto have_python;
}
Seriously, people who want executables wouldn't notice the difference.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's do list comprehensions do that generator expressions don't?

2005-04-25 Thread jfj
Mike Meyer wrote:
jfj <[EMAIL PROTECTED]> writes:

I think a better question would be "What do *generator expressions* do
that list comprehensions don't?".  And always use list comprehensions
unless you want the extra bit.

As the OP, I can say why I didn't ask those questions.
Sorry. I was referring to the subject line:)
Generator expressions don't build the entire list in memory before you
have to deal with it. This makes it possible to deal with expressions
that are too long to fit in memory.
Which means that the real rule should be always use generator
expressions, unless you *know* the expression will always fit in
memory.
Consider this code, which I also included in my first reply:
   x = [i for i in something()]
   random.shuffle (x)
   x.sort ()
Shuffle and sort are two examples where you need *the entire list* to
work.  Similarly for a dictionary where the values are small lists.
In this example using a generator buys you *nothing* because you
will immediately build a list.
So there are cases where we need the list as the product of an algorithm
and a generator is not good enough.  In fact, in my experience with 
python so far I'd say that those cases are the most common case.
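That point can be made concrete (a sketch, in modern print() syntax): whether you use a list comprehension or feed a genexp to list(), the whole list exists before shuffle or sort can do anything.

```python
import random

# Both forms materialize the complete list before shuffle/sort can run,
# so the generator expression buys nothing here.
x = [i * i for i in range(10)]        # list comprehension
y = list(i * i for i in range(10))    # genexp fed to list(): same list
assert x == y

random.shuffle(x)   # needs the whole list in memory
x.sort()            # likewise
print(x[3])         # 9
```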

That is the one question.
The other question is "why not list(generator) instead of [list 
comprehension]?"

I guess that lists are *so important* that having a primary language
feature for building them is worth it.  On the other hand "list()" is
not a primary operator of the python language. It is merely a builtin
function.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's do list comprehensions do that generator expressions don't?

2005-04-25 Thread jfj
Robert Kern wrote:
jfj wrote:
2) to convert a list/tuple/string to a list, which is
done extremely fast.

Add "any iterable". Genexps are iterables.
The thing is that when you want to convert a tuple to a list
you already know its size, so you can avoid using append()
and growing the list gradually.  For iterables you can't avoid
appending items until StopIteration, so using list() doesn't have
any advantage.  The OP was about genexps vs list comprehensions
but this is about list() vs. list comprehensions.
Possibly. I find them too similar with little enough to choose between them, hence the OP's question. 
One solution is to forget about list().  If you want a list use []. 
Unless you want to convert a tuple...

I think a better question would be "What do *generator expressions* do
that list comprehensions don't?".  And always use list comprehensions
unless you want the extra bit.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's do list comprehensions do that generator expressions don't?

2005-04-25 Thread jfj
jfj wrote:
  make_fractal_with_seed (x for x in range(1) if fibonacci_prime 
(x))

and this is stupendous.
At least range should be xrange.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to "generalize" a function?

2005-04-25 Thread jfj
Thomas Köllmann wrote:
Hi, everybody!
I'm teaching myself Python, and I have no experience in programming
apart from some years of shell scripting. So, please bear with me.
These two functions are part of an administrative script I've set
myself as a first lesson, and, as you will see, they're practically the
same except for one variable. So I'd like to weld them together -- but I
can't find out how to.
Pass the variable as an argument probably.
But because generally you wouldn't want to recompile the regexp (this 
should be done once), you could say:

# untested -- assumes networkConf (the config file name) is defined elsewhere
import re
import string

def makewriter (regexp_string):
    def writeFunc (ip, regex=re.compile (regexp_string)):
        confFile = open (networkConf, 'r')
        conf = confFile.readlines ()
        confFile.close ()
        for line in conf:
            if regex.search (line):
                addressLine = line
        addressLineNum = conf.index (addressLine)
        address = string.split (addressLine, ' ')
        address[1] = ip + "\n"
        conf[addressLineNum] = string.join (address)
        confFile = open (networkConf, 'w')
        confFile.writelines (conf)
        confFile.close ()
    return writeFunc

writeIP = makewriter ('(.*)address(.*)')
writeMask = makewriter ('(.*)netmask(.*)')
This is rather advanced Python programming, but it shows off the cool
dynamic function creation features, and it's never too early to
get into them ;)
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's do list comprehensions do that generator expressions don't?

2005-04-25 Thread jfj
Robert Kern wrote:
Mike Meyer wrote:
Ok, we've added list comprehensions to the language, and seen that
they were good. We've added generator expressions to the language, and
seen that they were good as well.
I'm left a bit confused, though - when would I use a list comp instead
of a generator expression if I'm going to require 2.4 anyway?
If you want a list right away, you'd use a list comprehension:
 X = [i for i in something() if somethingelse()]
 random.shuffle(X)
 print X[23]
On the other hand, generator expressions should be used
only when the code can be written as a pipe.  For example, a filter
of an -otherwise- very long list:
  make_fractal_with_seed (x for x in range(1) if 
fibonacci_prime (x))



Never. If you really need a list
list(x*x for x in xrange(10))
Sadly, we can't remove list comprehensions until 3.0.
Why???
Then we should also remove:
 x=[] in favor of x=list()
 x=[1,2,3] in favor of x=list((1,2,3))
I think "list" is useful only:
1) to subclass it
2) to convert a list/tuple/string to a list, which is
done extremely fast.
But for iterators I find the list comprehension syntax nicer.
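The "extremely fast" conversion comes down to the input advertising its length. In today's Python, `operator.length_hint` makes the difference visible (a sketch): a tuple reports its size up front, while a generator cannot.

```python
from operator import length_hint

t = (1, 2, 3, 4)
g = (i for i in t)

# A tuple reports its size, so list(t) can allocate the result in one step.
print(length_hint(t))      # 4
# A generator reports nothing: list(g) must append until StopIteration.
print(length_hint(g, 0))   # 0 (the default we supplied)
```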
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Calling a Perl Module from Python

2005-04-06 Thread jfj
Swaroop C H wrote:

AFAIK, there isn't any reliable way to call Perl modules from Python.

pyperl. pyperl. pyperl.
+10 on making this a standard module for embrace+extend reasons
*and* because perl is not that bad after all.
jf

--
http://mail.python.org/mailman/listinfo/python-list


Re: StopIteration in the if clause of a generator expression

2005-04-01 Thread jfj
Peter Otten wrote:
To confuse newbies and old hands alike, Bengt Richter wrote:
got me for one:)

To make it a bit clearer, a StopIteration raised in a generator expression
silently terminates that generator:
*any* exception raised from a generator, terminates the generator
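A small demonstration of that rule, in modern syntax (note that in current Python, PEP 479 converts a StopIteration raised *inside* a generator into RuntimeError, so an ordinary exception is used here):

```python
def gen():
    yield 1
    raise ValueError("boom")
    yield 2                  # never reached

g = gen()
print(next(g))               # 1
try:
    next(g)                  # the ValueError propagates out...
except ValueError:
    pass
try:
    next(g)                  # ...and the generator is finished for good
except StopIteration:
    print("terminated")
```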
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Calling __init__ with multiple inheritance

2005-03-29 Thread jfj
Peter Otten wrote:

Child()
child
father
mother
parent # <-- parent only once
<__main__.Child object at 0x402ad38c>
D-uh?

class Parent(object):
    def __init__(self):
        print "parent"
        super(Parent, self).__init__()

class Father(Parent):
    def __init__(self):
        print "father"
        super(Father, self).__init__()
        print "D-uh"

class Mother(Parent):
    def __init__(self):
        print "mother"
        super(Mother, self).__init__()
        print "D-oh"

class Child(Father, Mother):
    def __init__(self):
        print "child"
        super(Child, self).__init__()

Child()

This prints
 child
 father
 mother
 parent
 D-oh
 D-uh
Therefore super is a very intelligent function indeed!
jf
--
http://mail.python.org/mailman/listinfo/python-list


Re: Calling __init__ with multiple inheritance

2005-03-28 Thread jfj
Peter Otten wrote:
jfj wrote:
Peter Otten wrote:

Here is an alternative approach that massages the initializer signatures
a bit to work with super() in a multiple-inheritance environment:

   super(Father, self).__init__(p_father=p_father, **more)
Is there any advantage using super in this case?
I think the case Father.__init__ (self, params) is simpler
and does the job perfectly well.
I agree.
 
Ok. thanks for the confirmation.

super seems to be needed in "Dynamic Inheritance" cases where
we don't know an object's bases and there are complicated mro issues!

Suppose you wanted factor out common code from the Father and Mother classes
into a Parent class -- something neither complicated nor farfetched. With
explicit calls to Parent.__init__() you would end up calling it twice from
Child.__init__(). So when you anticipate that your class hierarchy may
change, or that your classes may be subclassed by users of your library, I
think super() is somewhat less errorprone.
I accept the case that you avoid bugs if you extend the hierarchy
upwards.  Although that's rare.
As for the case where the users of the library want to subclass, I don't
see a problem.  They know they must subclass from class XXX and so they
call XXX.__init__ to construct it.
In the case of Parent diamond inheritance, super() can avoid calling
the __init__ of parent twice?  How?
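How: super() walks the MRO, so each class in the diamond runs exactly once, while explicit base-class calls walk the inheritance *tree* and visit the shared base twice. A sketch with hypothetical classes, in modern syntax:

```python
calls = []

class Parent(object):
    def __init__(self):
        calls.append("parent")
        super(Parent, self).__init__()

# Explicit base-class calls: the diamond's top is visited twice.
class Father(Parent):
    def __init__(self):
        Parent.__init__(self)

class Mother(Parent):
    def __init__(self):
        Parent.__init__(self)

class Child(Father, Mother):
    def __init__(self):
        Father.__init__(self)
        Mother.__init__(self)

Child()
print(calls.count("parent"))     # 2

# Cooperative super() calls: each class in the MRO runs exactly once.
del calls[:]

class Father2(Parent):
    def __init__(self):
        super(Father2, self).__init__()

class Mother2(Parent):
    def __init__(self):
        super(Mother2, self).__init__()

class Child2(Father2, Mother2):
    def __init__(self):
        super(Child2, self).__init__()

Child2()
print(calls.count("parent"))     # 1
```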
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Calling __init__ with multiple inheritance

2005-03-28 Thread jfj
Peter Otten wrote:

Here is an alternative approach that massages the initializer signatures a
bit to work with super() in a multiple-inheritance environment:

super(Father, self).__init__(p_father=p_father, **more)

Is there any advantage using super in this case?
I think the case Father.__init__ (self, params) is simpler
and does the job perfectly well.
super seems to be needed in "Dynamic Inheritance" cases where
we don't know an object's bases and there are complicated mro issues!
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Getting the word to conventional programmers

2005-03-22 Thread jfj
Terry Reedy wrote:
"Cameron Laird" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]

*DevSource* profiles "The State of the Scripting Universe" in
http://www.devsource.com/article2/0,1759,1778141,00.asp >.

Interesting quote from Guido: "If the same effort were poured into speeding 
up Python as Sun devoted to Java, Python would be better than Java in every 
respect."

Except for the standard, powerful, 
looks-good-everywhere-and-has-a-tree-widget GUI toolkit? :)

Seriously, I think this is *very* important.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python scope is too complicated

2005-03-21 Thread jfj
Dan Bishop wrote:

>>> x = 17
>>> sum(x for x in xrange(101))
5050
>>> x
17
Your example with generator expressions is interesting.
Even more interesting is:
def foo(x):
    y = (i for i in x)
    return y
From the disassembly it seems that the generator is a code object but
'x' is not a cell variable. WTF?  How do I disassemble the generator?
Must look into this.
[ list comprehensions are different because they expand to inlined 
bytecode ]
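To answer the disassembly question with today's tooling: the genexp compiles to its own code object, and the outermost iterable is passed in as a hidden argument conventionally named '.0', which is why 'x' shows up as neither a cell nor a free variable. A sketch of this CPython detail, in modern syntax:

```python
import dis

def foo(x):
    y = (i for i in x)
    return y

g = foo([1, 2, 3])
# The generator object carries the genexp's code object; disassemble that.
print('.0' in g.gi_code.co_varnames)   # True: the hidden iterable argument
dis.dis(g.gi_code)
```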

jf
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python scope is too complicated

2005-03-20 Thread jfj
Max wrote:
Yeah, I know. It's the price we pay for forsaking variable declarations. 
But for java programmers like me, Py's scoping is too complicated. 
Please explain what constitutes a block/namespace, and how to refer to 
variables outside of it.

Some may disagree, but for me the easiest way to understand python's
scopes is this:
"""In Python, there are only two scopes: the global and the local.
The global scope is a dictionary, while the local scope of a
function is extremely fast.  There are no other scopes.  There are
no scopes for the nested statements inside code blocks, and there are
no class scopes.  As a special case, nested function definitions
appear to be something like a nested scope, but in reality this is
detected at compile time and a strange feature called 'cell variables'
is used."""
In order to write to a global variable from a function, we have
to use:
global var
which notifies the compiler that assignment to 'var' does not make
a new local variable, but it modifies the global one (OT: wouldn't
"global.var = 3" be nicer?).  On the other hand, if we just want to
read a global variable we don't have to say "global var" because
the compiler sees that there is no assignment to 'var' in the function
code and therefore intelligently concludes that it's about a global
one.
Generally, python sees everything as
exec "code" in global_dictionary, local_dictionary
In this case it uses the opcode LOAD_NAME which looks first in locals
and then in globals (and actually, then in __builtins__)
For functions it uses either LOAD_FAST for locals or LOAD_GLOBAL for
globals.
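The opcode difference is easy to see with the dis module (a sketch, in modern syntax):

```python
import dis

x = 1

def f():
    y = 2
    return x + y   # x: LOAD_GLOBAL, y: LOAD_FAST

ops = {ins.opname for ins in dis.get_instructions(f)}
print("LOAD_GLOBAL" in ops)   # True
print("LOAD_FAST" in ops)     # True
```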
HTH
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with classmethods

2005-03-11 Thread jfj
Ruud wrote:
So far for *how* it works. As to *why* it works like this, I don't know
for sure. But my guess is that the reasoning was something as follows:
if you define a function (regular or something special like a
classmethod) only for an instance of a class, you obviously don't
want to use it in a class context: it is -by definition- invisible to
the class, or to other instances of the same class.
One possible use case would be to store a callback function.
And in that case you definitely don't want the class magic to happen
when you reference the function.

Yep. Got it. Indeed the reason seems to be a valid optimization:
- in 99% of the cases where you request something from an instance,
it is a plain old variable;
- in 99% of the cases where you request something from a class, it's a
function.

So it would be a waste of time to check for the conversion when
something exists in the __dict__ of the instance, indeed.
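The lookup difference is observable directly: a plain function stored in an instance's __dict__ is returned as-is, with no binding magic. A sketch with hypothetical names, in modern syntax:

```python
class A(object):
    def method(self):
        return "bound through the class"

a = A()
# Stored on the instance: the descriptor protocol never fires,
# so the function is called without a self argument.
a.callback = lambda: "plain attribute"
print(a.callback())   # plain attribute
print(a.method())     # bound through the class
```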
OTOH, I'm talking about the "concept of python" and not CPython 
implementation, and that's why I have these questions:)

Thanks,
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with classmethods

2005-03-11 Thread jfj
Diez B. Roggisch wrote:
Moreover the documentation 
says that if the first argument is an instance, its class will be
used for the classmethod.  OTOH, "class scope" is not a real thing in
python.  It is just some code which is executed and then we get its
locals and use it on Class(localsDict, BasesTuple, ClassName) to make
a new class, it seems.  So we can create a classmethod in any scope
and then just attach it to a class's dictionary.

I'd still call the code executed inside a class statement block a "scope" -
for example in a def-statement the scope is the current frame of execution,
so 

def foo():
   bar = "baz"
Absolutely.  In fact, here is another interesting case which can lead to
nice recipes:
class B:
    print "Making class B!"
    if blah:
        def f(self): print self
    else:
        def f(self): raise StopIteration
    print "My locals are:", locals(), "and I will make a class from them"
makes bar part of the frame's local variables. Scopes just exchange or
stack the dicts for name lookup.

Well, actually, to put things right, in python there are only *two*
real scopes: global scope and local scope.
I remember that when I was a newbie I was confused by this.
The thing is that there is no nesting of scopes in reality.
Just to help other newbies avoid the confusion, I believe it would
be better to say, from the start, that:
- There definitely is no nesting of scopes for if/for/while/try
blocks.
- There is no nesting of scopes when nested functions reference
stuff from the enclosing function.  It looks like there is but
there isn't because then we should be able to say:
exec "some code" in locals(), cellvars(), globals()
which we can't and proves that referencing variables from the enclosing
function is indeed a questionable feature.  It all happens because the
parser detects that there are variables with that name in the enclosing
functions and creates special cellvars...
- No nesting of scopes for classes because we *have* to use 'self'
(which is a good thing IMHO).
jf
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with classmethods

2005-03-11 Thread jfj
Diez B. Roggisch wrote:
I understand that this is a very peculiar use of
classmethods but is this error intentional?
Or did I completely missed the point somewhere?

A little bit: classmethods are defined in a class context.
def foo(cls):
    print cls

class A:
    foo = classmethod(foo)
The error you observe seems to be a result of your "abuse" of classmethod
outside a class scope.

Not necessarily:
def foo(cls):
    print cls

f = classmethod(foo)

class A: pass
A.f = f
a = A()
a.f()
This works.  Anyway, the confusion starts from the documentation of
classmethod().  Since classmethod is a *function* of builtins we can
invoke it as such from wherever we want.  Moreover the documentation
says that if the first argument is an instance, its class will be
used for the classmethod.  OTOH, "class scope" is not a real thing in
python.  It is just some code which is executed and then we get its
locals and use it on Class(localsDict, BasesTuple, ClassName) to make
a new class, it seems.  So we can create a classmethod in any scope
and then just attach it to a class's dictionary.
jf
--
http://mail.python.org/mailman/listinfo/python-list


Confused with classmethods

2005-03-11 Thread jfj
Hi.
Suppose this:

def foo (x):
    print x

f = classmethod (foo)

class A: pass
a = A()
a.f = f
a.f()
# TypeError: 'classmethod' object is not callable!
###
I understand that this is a very peculiar use of
classmethods but is this error intentional?
Or did I completely miss the point somewhere?
j.
--
http://mail.python.org/mailman/listinfo/python-list


Re: lambda closure question

2005-02-21 Thread jfj
Carl Banks wrote:
transformations gets rebound, so you'd need a reference to it.
That certainly is an application.  I guess it depends on one's
programming background.
I'd only use nested (function, class) definition to accomplish
such a feature:

def genclass(x, y):
    class myclass:
        M = x
        def f(self, z):
            return self.M + y + z
    return myclass

A = genclass(1, 2)
a = A()
#
where i would prefer python to expand this as a template to:
class myclass:
    M = 1
    def f(self, z):
        return self.M + 2 + z

A = myclass
a = A()
IOW, I'd like nested (function, class) definitions to
be used *only* for dynamic code generation with runtime
constants, and OOP *forced* for other applications like
the fractal example:)
jf
---
# if you're dissatisfied with the situation on the planet
# slow down the circulation of money
--
http://mail.python.org/mailman/listinfo/python-list


Re: lambda closure question

2005-02-21 Thread jfj
Antoon Pardon wrote:
On 2005-02-19, jfj <[EMAIL PROTECTED]> wrote:
once foo() returns there is no way to modify 'x'!
It becomes a kind of constant.

In this particular case yes. But not in general, what about
this:

def F():
    l = []
    def pop():
        return l.pop()
    def push(e):
        l.append(e)
    return pop, push

Others will point this out, but if I'm fast enough...
This does not rebind 'l' to a different object.
It calls methods on the list, and because the list is mutable its
contents are modified.
'l' still refers to the same list object, and that can never change once
F() has returned.
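The distinction -- mutating the object versus rebinding the name -- can be shown directly (a sketch in modern syntax; Python 3 later added `nonlocal` precisely to allow real rebinding):

```python
def F():
    l = []
    def push(e):
        l.append(e)        # mutates the object that l refers to: fine
    def rebind():
        l = "other"        # assignment creates a *new local* l here
    return l, push, rebind

lst, push, rebind = F()
push(42)
rebind()
print(lst)   # [42]: the closed-over name still refers to the same list
```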
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: lambda closure question

2005-02-20 Thread jfj
Carl Banks wrote:
Say you want to calculate a list of points of an iterated fractal.
Here's the idea: you have a set of linear transformations.  You take
the origin (0,0) as the first point, and then apply each transformation
in turn to get a new point.  You recursively apply each transformation
at each additional point, up to a certain depth.
I like these kind of things. Ahhh, the best use out of one's computer.
What's the best way to write such a function?  Using a nested function,
it's easy as pie.  (Details may be a little wrong)
def ifs(transformations, maxdepth):
    def add_point(x, y, angle, depth):
        ifspoints.append((x, y, angle))
        if depth < maxdepth:
            for t in transformations:
                nx = x + t.x*cos(angle) - t.y*sin(angle)
                ny = y + t.x*sin(angle) + t.y*cos(angle)
                nangle = angle + t.angle
                add_point(nx, ny, nangle, depth+1)
    ifspoints = []
    add_point(0.0, 0.0, 0.0, 0)
    return ifspoints
If you didn't have the nested function, you'd have to pass not only
depth, but also maxdepth, transformations, and ifspoints around.
I see. Some people complained that you have to use "self." all the
time in methods:)  If that was lifted the above would be done with
a class then?
The costly extra feature is this:
###
def foo():
    def f():
        print x
    x = 1
    f()
    x = 2
    f()
    return f

foo()()
#
which prints '1 2 2'
The fractal code runs a little _slower_ because of this ability.
Although the specific program does not take advantage of it!
If you had something a little more complex than this, with more than
one nested function and more complex pieces of data you'd need to pass
around, the savings you get from not having to pass a data structure
around and referencing all that data through the structure can be quite
a lot.
Generally, I believe OOP is a superset of nested functions (as a
feature).  Nested functions make sense to me as primary python
def statements which ``create new code objects with arbitrary constants''
When I see :
###
def foo(x):
    def bar():
        print x
    return bar
###
and the call:
 xx = foo(1)
I imagine that python runs:
 def bar():
     print 1
 xx = bar
But alas, it doesn't:(
No. It carries around a silly cell object for the
rest of its lifetime.  see xx.func_closure.
At least, I think this is what happens...
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: lambda closure question

2005-02-19 Thread jfj
Carl Banks wrote:
jfj wrote:
Carl Banks wrote:
Ted Lilley wrote:
Unfortunately, it doesn't work.  It seems the closure keeps track
of
the variable fed to it dynamically - if the variable changes after
[...]
At least, that's the explanation I'm deducing from this behavior.

And that's the correct explanation, chief.

It is intended that way.  As an example of why that is: consider a
nested function called "printvars()" that you could insert in various
places within a function to print out the value of some local variable.
If you did that, you wouldn't want printvars to print the values at
the time it was bound, would you?
Allow me to disagree (and start a new "confused with closures"
thread:)
We know that python does not have references to variables. To some
newcomers this may seem annoying but eventually they understand the
pythonic way and they do without them. But in the case of closures
python supports references!

That's right.  It's called Practicality beats purity.
Yes, but according to the python philosophy one could pass locals()
to the nested function and grab the values from there. Seems practical
for the rareness of this...
 My experience bears this out: I find that about half the nested
functions I use are to reference an outer scope, half I return as a
closure.  Only once or twice did I try to use a nested function inside
the defining function.  I would guess this is more or less typical of
how nested functions are used.  If so, it was the right decision.
The question is how many of the nested functions you use take advantage
of the fact that the deref'd variable is a *reference to a variable* and
not a *constant* (different constant for each definition of the nested
function of course).
Or, IOW, it seems very reasonable that if somebody wants to write the
"printvars()" function, he could simply pass locals().
I would accept this being the right decision if there was a statement
like "global var", called "deref var" with which nested funcs could
modify those variables.  Then we could say:
#
def foo():
    def f1(y):
        deref x
        x = y
    def f2():
        print x
    x = 11
    return f1, f2

But one should go with OOP in that case instead.
Obligatory Python 3000 suggestion:
I hope in python 3000, we'll get rid of CellObjects and insert
freevars at the consts of the function object at MAKE_CLOSURE.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: lambda closure question

2005-02-19 Thread jfj
Carl Banks wrote:
Ted Lilley wrote:

Unfortunately, it doesn't work.  It seems the closure keeps track of
the variable fed to it dynamically - if the variable changes after
 [...]
At least, that's the explanation I'm deducing from this behavior.

And that's the correct explanation, chief.

It is intended that way.  As an example of why that is: consider a
nested function called "printvars()" that you could insert in various
places within a function to print out the value of some local variable.
 If you did that, you wouldn't want printvars to print the values at
the time it was bound, would you?
Allow me to disagree (and start a new "confused with closures" thread:)
We know that python does not have references to variables. To some
newcomers this may seem annoying but eventually they understand the
pythonic way and they do without them. But in the case of closures
python supports references!
Nested functions seem to do two independent things:
  1) reference variables of an outer local scope
  2) are functions bound dynamically to constants
These two are independent because in:
##
def foo():
    def nested():
        print x
    f = nested
    x = 'sassad'
    f()
    x = 'aafdss'
    f()
    return f
##
once foo() returns there is no way to modify 'x'!
It becomes a kind of constant.
So IMVHO, one will never need both features. You either want to
return a function "bound to a local value" *xor* "use a local
function which has access to locals as if they were globals".
Supposing the default behaviour was that python does what Carl
suggests (closures don't keep track of variables), would there
be an elegant pythonic way to implement "nested functions that
reference variables from their outer scope"? (although I find this
of little use and un-pythonic)
Personally, i think that this is a problem with the way lisp
and other languages define the term "closure". But python is
different IMO, and the ability to reference variables is not
useful.
cheers,
jfj
---
# the president
--
http://mail.python.org/mailman/listinfo/python-list


Re: [newbie] Confused with raise w/o args

2005-02-14 Thread jfj
Fredrik Lundh wrote:
"jfj" <[EMAIL PROTECTED]> wrote:

Wait! second that. We would like to

hmm.  are you seconding yourself, and refering to you and yourself as we?
:)
"we" refers to all python users.
no.  your foo() function raises B, and is called from the exception
handler in b1.  exception handlers are free to raise new exceptions
at any time.
Yes but 'B' has been handled while the handler of 'A' has not finished yet.
Here is a better example:
##
# comment out one of the two
# raise statements to test this
import traceback
def foo():
    raise # this raises an 'A'
    try:
        raise B
    except:
        pass
    raise # this raises a 'B'. I find this peculiar

try:
    try:
        raise A
    except:
        foo()
except:
    traceback.print_exc()
##
It seems that there are no uses of this "feature".
I guess that's the easy way to implement re-raise in CPython.
I quickly grepped through the sources (;)) and it seems that
perhaps it would be possible to introduce a new opcode
JUMP_FORWARD_CLEANUP_EXC which would be used to leave except clauses.
Then, RAISE_VARARGS would push exceptions on to an "exception stack"
and JUMP_FORWARD_CLEANUP_EXC would pop them.
The bottom element in such an exception stack would be the
UnhandledException.  're-raise' would use the stack-top exception.
However, there are no bytecode changes until AST is merged, so this
will have to wait..
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: [EVALUATION] - E02 - Support for MinGW Open Source Compiler

2005-02-14 Thread jfj
bruno modulix wrote:
Ilias Lazaridis wrote:
I'm a newcomer to python:
[EVALUATION] - E01: The Java Failure - May Python Helps?
http://groups-beta.google.com/group/comp.lang.python/msg/75f0c5c35374f553
My trollometer's beeping...
When person 'A' calls person 'B' a troll, these are the possibilities:
1. 'A' is indeed a troll
2. 'B' is the troll
3. both 'A' and 'B' are trolls
4. nobody is a troll. they're computer scientists passionate about their 
ideas and they are trying to convince each other.

5. nobody is a troll and there is no trolling going on.
Now, it's rather common to accuse people of trolling these days.
The fact that Markus Wankus said that Ilias is a troll does not mean
that everybody should reply to him in that tone.
This is a one-vs-many battle and it sucks.
gerald
--
http://mail.python.org/mailman/listinfo/python-list


Re: [newbie] Confused with raise w/o args

2005-02-14 Thread jfj
jfj wrote:
IMHO, a more clean operation of raise would be either:
  1) raise w/o args allowed *only* inside an except clause to
 re-raise the exception being handled by the clause.
Wait! second that. We would like to
###
def bar():
    raise

def b5():
    try:
        raise A
    except:
        bar ()
#
So, restricting raise into except clauses only, is not good.
Change the proposal to:
>   1) raise w/o args re-raises the exception being handled
>  or UnhandledException.
here is another confusing case:
###
import sys

class A:
    pass
class B:
    pass

def foo ():
    try:
        raise B
    except:
        pass
    raise

def b1():
    try:
        raise A
    except:
        foo()

try:
    b1 ()
except:
    print sys.exc_info()[0]
##
This reports that __main__.B is raised but wouldn't it be better
to raise an 'A' since this is the currently handled exception?
jf
--
http://mail.python.org/mailman/listinfo/python-list


[newbie] Confused with raise w/o args

2005-02-14 Thread jfj
Hello.
I am a bit confused with 'raise' without any arguments.
Suppose the testcase below (I hope it's correct!):
##
import sys

class A:
    pass
class B:
    pass

def foo():
    try:
        raise B
    except:
        pass

def b1 ():
    try:
        raise A
    except:
        foo ()
    raise  # raises A

def b2 ():
    try:
        raise A
    except:
        try:
            raise B
        except:
            pass
    raise  # raises B

def b3 ():
    foo ()
    raise # raises B

def b4 ():
    try:
        raise A
    except:
        pass
    foo ()
    raise  # raises A

#
try:
    b1 ()
except:
    print sys.exc_info()
try:
    b2 ()
except:
    print sys.exc_info()
try:
    b3 ()
except:
    print sys.exc_info()
try:
    b4 ()
except:
    print sys.exc_info()

The semantics of raise without arguments are that the exceptions
of the current scope have a higher priority.  That can be seen
in functions b1() and b2(): although an exception of type
'B' was the last handled by Python, b1() raises 'A'.
However, b3() shows that the exceptions from other scopes *are*
available.
The b4() case demonstrates the confusion better.
IMHO, a more clean operation of raise would be either:
  1) raise w/o args allowed *only* inside an except clause to
 re-raise the exception being handled by the clause.
  2) always re-raise the last exception no matter the scope.
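Option (1), at least, describes behaviour that is already dependable today: a bare raise lexically inside an except clause re-raises that clause's exception. A minimal sketch (class and message names are illustrative):

```python
import sys

class MyError(Exception):
    pass

def handler():
    try:
        raise MyError("original")
    except MyError:
        # bare raise inside the except clause: re-raises the
        # exception currently being handled
        raise

try:
    handler()
except MyError:
    # sys.exc_info()[1] is the exception instance; it is the same
    # "original" MyError that the inner clause re-raised
    caught = sys.exc_info()[1]
```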
It appears to me that this behaviour is a bit weird and I would
like to ask:  Are the semantics of 'raise w/o args' a result of
python's implementation?  If python was re-designed, would that
be different?  In python 3000, would you consider changing this
and if yes, to what semantics?  May this be changed in 2.5?
Thanks
jfj
-
# don't hold back
--
http://mail.python.org/mailman/listinfo/python-list


Re: [EVALUATION] - E02 - Support for MinGW Open Source Compiler

2005-02-14 Thread jfj
Michael Hoffman wrote:
Ilias Lazaridis wrote:

b) Why does the Python Foundation not ensure, that the python 
source-code is directly compilable with MinGW?

Why should they? It already runs on Windows with a freely available
compiler.
The point is that the freely available compiler wouldn't be free if
it weren't for gcc.  Just for that I _believe_ Python, being open source,
should support MinGW as the default.  But I *don't care* and I don't
mind, really ;)

jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: THREAD_STACK_SIZE and python performance?

2005-02-12 Thread jfj
Stein Morten Sandbech wrote:
The FreeBSD patch, setting the value to 0x10
seems to be enough for most of our zope servers,
however, I've noticed that we get an occasional
server death even with this value. This is on normal
load, but handling many and large CMS operations
in zope/plone.
Just curious: how much recursion are we talking about in those CMS
operations? 100 calls? 1000 calls?
If it's nowhere near that, then the problem should be somewhere else.
So, any thoughts on even larger thread stacks?
Generally there should be no problems. The only case where the stack
size *may* matter, AFAIK, is threads. Because each thread has its own
stack --in non-Stackless Python-- the stack size may be an issue
if it's too large and there are **many** threads. Otherwise, it should
not have any impact on performance.
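As an aside, the per-thread stack size can also be requested from Python code itself via threading.stack_size(); a small sketch, assuming a platform and Python version that support setting it (it is not available everywhere, and the minimum and granularity are platform-dependent):

```python
import threading

# Ask for a 4 MB stack for threads created from now on.
threading.stack_size(4 * 1024 * 1024)

results = []

def deep(n):
    # burn some stack frames in the new thread
    if n:
        deep(n - 1)
    else:
        results.append('done')

t = threading.Thread(target=deep, args=(500,))
t.start()
t.join()
```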
jfj
# cure to insomnia: when you lay down, try to figure out
# WTF is wrong with you.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Iterate through dictionary of file objects and file names

2005-02-12 Thread jfj
Brian Beck wrote:
print "Closed: %s" % fileName
Call me a pedant, but what's wrong with:
print 'closed: ' + filename
or
print 'closed:', filename
?
The modulus operator is good, but don't over-use it; otherwise it's bad style.
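(Where the modulus operator does pull its weight is with several values and real format control; the names below are just illustrative:)

```python
filename = 'data.txt'
nbytes = 10240

# one format string, several values, explicit formatting
msg = 'closed: %s (%.1f KB)' % (filename, nbytes / 1024.0)
```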
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: A great Alan Kay quote

2005-02-10 Thread jfj
Peter Hansen wrote:
Grant Edwards wrote:
In an interview at 
http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=273
Alan Kay said something I really liked, and I think it applies
equally well to Python as well as the languages mentioned:

I characterized one way of looking at languages in this
way: a lot of them are either the agglutination of features
or they're a crystallization of style. Languages such as
APL, Lisp, and Smalltalk are what you might call style
languages, where there's a real center and imputed style to
how you're supposed to do everything.
I think that "a crystallization of style" sums things up nicely.
The rest of the interview is pretty interesting as well.

Then Perl is an "agglutination of styles", while Python might
be considered a "crystallization of features"...
Bah. My impression from the interview was: "there are no good
languages anymore. In my time we made great languages, but today
they all suck. Perl, for example."
I got the impression that the interview is as hard on Python
as on Perl and any of the languages of the '90s and '00s.
From the interview:
""" You could think of it as putting a low-pass filter on some of the 
good ideas from the ’60s and ’70s, as computing spread out much, much 
faster than educating unsophisticated people can happen. In the last 25 
years or so, we actually got something like a pop culture, similar to 
what happened when television came on the scene and some of its 
inventors thought it would be a way of getting Shakespeare to the 
masses. But they forgot that you have to be more sophisticated and have 
more perspective to understand Shakespeare. What television was able to 
do was to capture people as they were.

So I think the lack of a real computer science today, and the lack of 
real software engineering today, is partly due to this pop culture.
"""

So let's not be so self-important and read this interview
as one that bashes Perl and admires Python. It ain't. Python is pop
culture according to Mr Kay. I'll leave the rest to slashdot..
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-07 Thread jfj
I just realized that I'm trolling.
We all sometimes get into a trollwar despite our intentions
(even FL). So, I would like to apologize on behalf
of all those who participated in it. I'm sorry you had to witness
that.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-07 Thread jfj
Steven Bethard wrote:
For your benefit, and the benefit of others who make this same mistake, 
I'll rewrite your original post in a manner that won't seem so 
self-important and confrontational[1].

``People who code too much tend to get authoritative'', in general.
I am an arrogant noob -- just like arrogant experts...
Inconsistently y'rs
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-07 Thread jfj
Alex Martelli wrote:
>
I think the problem is that you know python so well that you are used
to the way things are and everything seems natural the way it is.
For a newbie, the behaviour I mentioned seems indeed a bit inconsistent.
"Inconsistent" not as in mathematics but as in "wow! I'd thought this
should work like this". Like you have a perception of the language
and then this feature changes everything. It's a different kind of
"inconsistency" and it doesn't give python a bad name or something,
nor does it break the mathematical laws of the universe.
There is no trolling involved here. Maybe wrong use of the English
language. But then again, this is the newsgroups and the English
language is ill-used all the time :)
Regards,
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-07 Thread jfj
Hello Diez.
I sent a reply to the previous message requesting an example of
what would break, but unfortunately it didn't make it to python-list.
Here it is for the record [
Diez B. Roggisch wrote:
If things worked as you want them to, that would mean that passing a bound
method as an argument to a class, and storing it there in an instance
variable, would "eat up" the arguments - surely not the desired behaviour.

Could you please give an example of this ?
If you mean:
class A:
    def f(x):
        print x

class B:
    pass

a = A()
B.f = a.f
b = B()
b.f()  # surprise: it works!!
then that is equally "weird" as my case. "Eat up" sounds
rather bad, but all that will happen is an argument-number
mismatch.
In fact it seems more appropriate to use a.f.im_func in this
case to "convert the bound method to an unbound one".
Maybe this happens more often?
Thanks,
G.
]
Diez B. Roggisch wrote:
Is there a good reason that the __get__ of a boundmethod does not
create a new boundmethod wrapper over the first boundmethod?

I already gave you the good reason:
class A:
    def callback(self, arg):
        print self, arg

def callback(arg):
    print arg

class DoSomethingWithCallback:
    def __init__(self, cb):
        self.cb = cb
    def run(self):
        for i in xrange(100):
            self.cb(i)
u = DoSomethingWithCallback(A().callback)
  ^
Oops!!!
I think it would be more obvious to somebody who doesn't know
the python internals, to say:
 # take the callback from A and install it to DSWC
 u=DoSomethingWithCallback( A.callback)
or
 u=DoSomethingWithCallback (A().callback.im_func)
or
 u=DoSomethingWithCallback (A().im_class.callback)

v = DoSomethingWithCallback(callback)
# would crash if your suggestion worked
u.run()
v.run()
It would complain about not enough arguments.
As it does in the case I'm confused about!
If you are after currying - look at the cookbook, there are recipes for
that.
I'm satisfied with Alex's response that it is like that for backwards
compatibility, but in Python 3000 it may be the other way around.
Thanks all.
jfj
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-07 Thread jfj
Alex Martelli wrote:
jfj <[EMAIL PROTECTED]> wrote:
Then, without looking at the previous code, one can say that "x" is a
function which takes one argument.

One can say whatever one wishes, but saying it does not make it true.
One can say that x is a green frog, but that's false: x is a
boundmethod.
One can say that x is a function, but that's false: x is a boundmethod.
One can say that x is a spade, but that's false: x is a boundmethod.
I understand that a function and a boundmethod are *different* things.
For one, a *boundmethod* has the attributes im_self and im_class, which
a function does not have (nor does a green frog). Thus they are not
the same thing.
HOWEVER, what I ask is WHY don't we set the tp_descr_get of
the boundmethod object to be the same as the func_descr_get???
Or WHY do they *have* to differ in this specific part?
I quickly looked at the source of python and it seems that a
one-liner would be enough to enable this. So it's not that it
would be hard to implement it, or inefficient.
A bound method would *still* be a boundmethod.
We could additionally have:
>>> type(x)
<type 'instancemethod'>

Is there a good reason that the __get__ of a boundmethod does not
create a new boundmethod wrapper over the first boundmethod?
G.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-06 Thread jfj
Alex Martelli wrote:
I still wouldn't see any "inconsistency" even if two different ways of
proceeding gave the same result in a certain case.  That would be like
saying that having x-(-y) give the same result as x+y (when x and y are
numbers) is ``inconsistent''... the word ``inconsistent'' just doesn't
MEAN that!
"Inconsistent" means sort of the reverse: one way of proceeding giving
different results.  But the fact that the same operation on objects of
different types may well give different results isn't _inconsistent_ --
it's the sole purpose of HAVING different types in the first place...!
Ok! I said I was confused in the first place!
G.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-06 Thread jfj
Diez B. Roggisch wrote:
If things worked as you want them to, that would mean that passing a bound
method as an argument to a class, and storing it there in an instance
variable, would "eat up" the arguments - surely not the desired behaviour.
Could you please give an example of this ?
If you mean:
class A:
    def f(x):
        print x

class B:
    pass

a = A()
B.f = a.f
b = B()
b.f()  # surprise: it works!!
then that is equally "weird" as my case.
In fact it seems more appropriate to use a.f.im_func in this
case to "convert the bound method to an unbound one".
Maybe this happens more often though?
Thanks,
G.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-06 Thread jfj
Alex Martelli wrote:
jfj <[EMAIL PROTECTED]> wrote:
Isn't that inconsistent?

That Python has many callable types, not all of which are descriptors?
I don't see any inconsistency there.  Sure, a more generalized currying
(argument-prebinding) capability would be more powerful, but not more
consistent (there's a PEP about that, I believe).
Thanks for the explanation.
The inconsistency I see is that if I wanted this kind of behavior
I would've used the staticmethod() builtin (which in a few words
alters __get__ to return the function unmodified).
So I would write
 A.foo = staticmethod (b.foo)
But now, it always acts as staticmethod:(
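To make that concrete, a small sketch of what staticmethod() does here (class names are illustrative):

```python
class B(object):
    def foo(self, y):
        return (self, y)

class A(object):
    pass

b = B()
A.foo = staticmethod(b.foo)

a = A()
# staticmethod hands back the wrapped object untouched on attribute
# access, so a.foo is simply the bound method b.foo: 'a' is not
# passed as an argument.
r = a.foo(1)
```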
Anyway, if there's a PEP about it, I'm +1 because it's "pythonic".
G.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Confused with methods

2005-02-06 Thread jfj
Dan Perl wrote:
"jfj" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
However this is not possible for another instance method:

class A:
    pass
class B:
    def foo(x, y):
        print x, y

b = B()
A.foo = b.foo
a = A()
# error!!!
a.foo()
##
Python complains that 'foo() takes exactly 2 arguments (1 given)'.
But by calling "b.foo(1)" we prove that it is indeed a function which
takes exactly one argument.

Isn't that inconsistent?

You called b.foo(1) but a.foo().  Note one argument in the first call and no
arguments in the second call.  Had you called a.foo(1), you would
have gotten the same result as with b.foo(1).  I suppose that was just a
small omission on your part, but what are you trying to do anyway?  It's a
very strange use of instance methods.


No omission.
If I say:
x=b.foo
x(1)
Then, without looking at the previous code, one can say that "x" is a
function which takes one argument. Continuing with "x":
A.foo = x
# this is ok
A.foo(1)
a=A()
# this is not ok
a.foo()
I expected that when we add this "x" to a class's dictionary and
then request it from an instance of that class, it would be
converted to a bound method and receive its --one-- argument
from the referring instance.
So "a.foo()" == "A.foo(a)" == "x(a)" == "b.foo(a)" == "B.foo(b,a)",
or at least "why not?" (head exploded?:)
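The im_func route mentioned earlier spelled out as a sketch (classes are illustrative; the attribute is spelled im_func in 2.x, and later Pythons also call it __func__, which is used here):

```python
class B(object):
    def foo(x, y):
        return (x, y)

class A(object):
    pass

b = B()

# Strip the binding off b.foo and install the underlying function
# object on A.  (b.foo.im_func in 2.x; b.foo.__func__ later.)
A.foo = b.foo.__func__

a = A()
# Now the usual binding machinery kicks in: a.foo(1) calls the
# function with x=a, y=1.
result = a.foo(1)
```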
I'm not trying to do something specific with this though.
G.
--
http://mail.python.org/mailman/listinfo/python-list


Confused with methods

2005-02-06 Thread jfj
I don't understand.
We can take a function and attach it to an object, and then call it
as an instance method as long as it has at least one argument:
#
class A:
    pass

def foo(x):
    print x

A.foo = foo
a = A()
a.foo()
#
However this is not possible for another instance method:

class A:
    pass
class B:
    def foo(x, y):
        print x, y

b = B()
A.foo = b.foo
a = A()
# error!!!
a.foo()
##
Python complains that 'foo() takes exactly 2 arguments (1 given)'.
But by calling "b.foo(1)" we prove that it is indeed a function which takes
exactly one argument.
Isn't that inconsistent?
Thanks,
Gerald.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about 'None'

2005-01-28 Thread jfj
Francis Girard wrote:
What was the goal behind this rule ?
If you have a list which contains integers, strings, tuples, lists and
dicts and you sort it and print it, it will be easier to detect what
you're looking for:)
G.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about 'None'

2005-01-27 Thread jfj
Francis Girard wrote:
Wow ! What is it that are compared ? I think it's the references (i.e. the
addresses) that are compared. The "None" reference may map to the physical 0x0
address whereas 100 is internally interpreted as an object for which the
reference (i.e. address) exists and is therefore greater than 0x0.

Am I interpreting correctly ?

Not really. If objects of different types are compared (like comparing a
string with a list), then no matter what, all strings must be "smaller"
than all lists, or the opposite.
So the fast way to accomplish this is to compare the addresses of the
objects' method tables: something which is common to all objects of
the same type.
G.
--
http://mail.python.org/mailman/listinfo/python-list


Re: python memory blow out

2005-01-27 Thread jfj
Stephen Thorne wrote:
On Thu, 27 Jan 2005 09:08:59 +0800, Simon Wittber
<[EMAIL PROTECTED]> wrote:
According to the above post:
a) If the allocation is > 256 bytes, call the system malloc.
b) If the allocation is < 256, use its own malloc implementation, which
allocates memory in 256 kB chunks and never releases it.
I imagine this means that large memory allocations are eventually
released back to the operating system. However, in my case, this
appears to be not happening.

There was a recent patch posted to python-dev list which allows python
to release memory back to the operating system once the 256kb chunk is
no longer used.
The policy is that the memory allocated for those objects is as much as
the maximum number of them that was needed during the program's run.

This is "bad" in rare cases: a program which
- at some point, while it normally needs 10-20 integers, peaks and
allocates 1000 integers, and
- does not terminate after that peak but keeps running for a long
time without ever needing many integers again.

Such programs are rather rare. Moreover, the OS will happily swap out
the unused int blocks after a while. A more pathological situation would
be, in the above scenario, to release all the integers except
every 1000th. Assuming those ints left are accessed frequently,
the OS can't even swap out the pages!

But such programs, unless intentionally constructed, are **very** rare,
and it's supposed to be better to have a faster Python in the general case.

Gerald.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Tuple slices

2005-01-27 Thread jfj
Nick Coghlan wrote:
1. Applies only if you are making large slices, or a lot of slices with 
each containing at least 3 elements.
  A view can also *cost* memory, when it looks at a small piece of a 
large item. The view will keep the entire item alive, even though it 
needs only a small piece.
That is correct.
2. Hell no. The *elements* aren't copied, pointers to the elements are. 
If you *don't* copy the pointers, then every item access through the 
view involves an indirection as the index into the original sequence 
gets calculated.
If you have
x=(1,2,...11)
y=x[:-1]
then you copy 10 pointers AND you INCREF them AND you DECREF them
when y dies.
The unfortunate case from (1) would be:
x=(1,2,...11)
x=x[:1]
So views *may* save memory in some applications, but are unlikely to 
save time in any application (except any saving resulting from the 
memory saving).

They do: the tp_dealloc of a tuple view doesn't decref the element pointers.
We should look at what the most common case is.
Gerald
-PS: the PEP for the removal ought to have a ":)" at the end.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Tuple slices

2005-01-26 Thread jfj
Jeff Shannon wrote:

So, what problem is it, exactly, that you think you'd solve by making 
tuple slices a view rather than a copy?

I think views are good for
 1) saving memory
 2) saving time (as you don't have to copy the elements into the new tuple)
And they are worth it. However (as in other cases with slicing), it is 
very easy and fast to create a view for a slice with the default step 
'1', while it's a PITA and totally not worth it to create a view for a 
slice with a non-default step. I think it would be good to:

 if slice_step == 1:
     create_view
 else:
     create_new_tuple
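A step-1 view could be little more than an (object, start, stop) triple that translates indices on access. A rough sketch of the idea in Python itself, just to show the bookkeeping (a real implementation would live in C, and this toy class is read-only):

```python
class TupleView(object):
    """Read-only view over a contiguous slice of a tuple."""
    def __init__(self, base, start, stop):
        n = len(base)
        # clamp offsets the way slicing does
        self._base = base
        self._start = min(max(start, 0), n)
        self._stop = min(max(stop, self._start), n)

    def __len__(self):
        return self._stop - self._start

    def __getitem__(self, i):
        if i < 0:
            i += len(self)
        if not 0 <= i < len(self):
            raise IndexError(i)
        # translate into the base tuple: no element copying happened
        return self._base[self._start + i]

x = tuple(range(11))
y = TupleView(x, 0, len(x) - 1)   # like x[:-1], but no copying
```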
Actually, I think that slices with a step are a bad feature in general,
and I think I will write a PEP to suggest their removal in Python 3000.
Gerald
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python! Is! Truly! Amazing!

2005-01-03 Thread jfj
Ron Garret wrote:
In article <[EMAIL PROTECTED]>,
 "Erik  Bethke" <[EMAIL PROTECTED]> wrote:

I have NEVER experienced this kind of programming joy.

Just wait until you discover Lisp!
;-)

I've had it with all those lisp posts lately ;-)
There were functional and non-functional programming languages (the 
first being *much* simpler to implement). There is a *reason* people 
chose C over lisp. It's not that we were all blind and didn't see the 
amazingness of lisp. Procedural languages are simply better, and I'm not 
replying to this flamewar.

Thank you:)
G.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda going out of fashion

2004-12-23 Thread jfj
Stephen Thorne wrote:
Hi guys,
I'm a little worried about the expected disappearance of lambda in
python3000. I've had my brain badly broken by functional programming
in the past, and I would hate to see things suddenly become harder
than they need to be.
Don't worry, it's not gonna go away, because too much software
depends on it. If you don't follow the advice that lambda is
deprecated and you keep using it, more software will depend on it and it
will never disappear :)

Personally I'm not a fan of functional programming but lambda *is* 
useful when I want to say for example:

  f (callback=lambda x, y: foo (y,x))
I don't believe it will ever disappear.
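(That argument-swapping callback spelled out, with an illustrative foo:)

```python
def foo(a, b):
    return (a, b)

# a tiny adapter that swaps its two arguments before calling foo
callback = lambda x, y: foo(y, x)
swapped = callback(1, 2)
```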
G.
--
http://mail.python.org/mailman/listinfo/python-list


Re: PyCon is coming - we need your help

2004-12-22 Thread jfj
I wish it was in Amsterdam.. ;)
Gerald
--
http://mail.python.org/mailman/listinfo/python-list


Why are tuples immutable?

2004-12-13 Thread jfj
Yo.
Why can't we __setitem__ for tuples?
The way I see it is that if we enable __setitem__ for tuples there
doesn't seem to be any performance penalty if the users don't use it
(aka, python performance independent of tuple mutability).
On the other hand, right now we have to use a list if we want to
__setitem__ on a sequence. If we could use tuples in the cases where
we want to modify items but not modify the length of the sequence,
programs could be considerably faster. Yes?
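For what it's worth, a fixed-length mutable sequence is already expressible today; a quick sketch of the usual stand-ins (not tuples, but they cover the use case):

```python
from array import array

# A list created once and mutated in place; its length simply
# never changes.
seq = [0] * 4
seq[2] = 99

# For homogeneous numeric data, the array module gives a compact
# fixed-type equivalent that also supports item assignment.
a = array('i', [1, 2, 3, 4])
a[2] = 99
```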
Enlighten me.
G.
--
http://mail.python.org/mailman/listinfo/python-list