const objects (was Re: Death to tuples!)

2005-12-14 Thread Gabriel Zachmann

I was wondering why python doesn't contain a way to make things const?

If it were possible to declare variables at the time they are bound to 
objects that they should not allow modification of the object, then we would 
have a concept _orthogonal_ to data types themselves and, as a by-product, a 
way to declare tuples as constant lists.

So this could look like this:

 const l = [1, 2, 3]

 def foo( const l ): ...

and also

 const d = { 1 : 1, 2 : 2, ... }

etc.

It seems to me that implementing that feature would be fairly easy.
All that would be needed is a flag with each variable.

Just my tupence,
Gabriel.


-- 
/-----------------------------------------------------------------------\
| Any intelligent fool can make things bigger, more complex,            |
| or more violent. It takes a touch of genius - and a lot of courage -  |
| to move in the opposite direction. (Einstein)                         |
\-----------------------------------------------------------------------/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: const objects (was Re: Death to tuples!)

2005-12-14 Thread Steven D'Aprano
On Wed, 14 Dec 2005 10:57:05 +0100, Gabriel Zachmann wrote:

 I was wondering why python doesn't contain a way to make things const?
 
 If it were possible to declare variables at the time they are bound to 
 objects that they should not allow modification of the object, then we would 
 have a concept _orthogonal_ to data types themselves and, as a by-product, a 
 way to declare tuples as constant lists.

In an earlier thread, somebody took me to task for saying that Python
doesn't have variables, but names and objects instead.

This is another example of the mental confusion that occurs when you think
of Python having variables. Some languages have variables. Some do not. Do
not apply the rules of behaviour of C (which has variables) to Python
(which does not).

Python already has objects which do not allow modification of the object.
They are called tuples, strings, ints, floats, and other immutables.
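That claim is easy to check in any interpreter (sketched here in Python 3 syntax, unlike the 2.x sessions quoted later in this thread):

```python
t = (1, 2, 3)
try:
    t[0] = 99          # tuples reject item assignment
    mutated = True
except TypeError:
    mutated = False

print(mutated)         # False: the tuple could not be modified
print(t)               # (1, 2, 3)
```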



 So this could look like this:
 
  const l = [1, 2, 3]

(As an aside, it is poor practice to use l for a name, because it is too
easy to mistake for 1 in many fonts. I always use capital L for quick
throw-away lists, like using x or n for numbers or s for a string.)

Let's do some tests with the constant list, which I will call L:


>>> L.count(2)
1
>>> L.append(4)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ConstantError: can't modify constant object

(Obviously I've faked that last error message.) So far so good: we can
call list methods on L, but we can't modify it.

But now look what happens when we rebind the name L:

>>> L = 2
>>> print L
2

Rebinding the name L doesn't do anything to the object that L pointed to.
That constant list will still be floating in memory somewhere. If L was
the only reference to it, then it will be garbage collected and the memory
it uses reclaimed.
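The same point in runnable form (Python 3 print syntax; the name also_L is invented for illustration, to keep the list alive after the rebinding):

```python
L = [1, 2, 3]
also_L = L        # a second name bound to the same list object
L = 2             # rebind the name L; the list object is untouched
print(L)          # 2
print(also_L)     # [1, 2, 3] -- still alive via the other reference
```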


Now, let's look at another problem with the idea of constants for Python:

>>> L = [1, 2, 3]  # just an ordinary modifiable list
>>> const D = {1: "hello world", 2: L}  # constant dict

Try to modify the dictionary:

>>> D[0] = "parrot"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ConstantError: can't modify constant object

So far so good.

>>> L.append(4)

What should happen now? Should Python allow the modification of ordinary
list L? If it does, then this lets you modify constants through the back
door: we've changed one of the items of a supposedly unchangeable dict.

But if we *don't* allow the change to take place, we've locked up an
ordinary, modifiable list simply by putting it inside a constant. This
will be a great way to cause horrible side-effects: you have some code
which accesses an ordinary list, and expects to be able to modify it. Some
other piece of code, perhaps in another module, puts that list inside a
constant, and *bam*, your code breaks when you try to modify your own list.
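Python's existing immutables already exhibit exactly this back door, no hypothetical const needed; a tuple stands in for the constant dict here:

```python
inner = [1, 2, 3]
frozen = (inner, "spam")   # the tuple itself cannot be modified...
inner.append(4)            # ...but objects it contains still can be
print(frozen)              # ([1, 2, 3, 4], 'spam')
```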

Let me ask you this: what problem are you trying to solve by adding
constants to Python? 



 It seems to me that implementing that feature would be fairly easy.
 All that would be needed is a flag with each variable.

Surely not. Just adding a flag to objects would not actually implement the
change in behaviour you want. You need to code changes to the parser to
recognise the new keyword, and you also need code to actually make objects
unmodifiable.



-- 
Steven.



Re: const objects (was Re: Death to tuples!)

2005-12-14 Thread Steve Holden
Gabriel Zachmann wrote:
[...]
 
 It seems to me that implementing that feature would be fairly easy.
 All that would be needed is a flag with each variable.
 
It seems to me like it should be quite easy to add a sixth forward gear 
to my car, but I'm quite sure an auto engineer would quickly be able to 
point out several reasons why it wasn't, as well as questioning my 
need for a sixth gear in the first place.

Perhaps you could explain why the absence of const objects is a problem?

regards
  Steve
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006  www.python.org/pycon/



Re: const objects (was Re: Death to tuples!)

2005-12-14 Thread Christopher Subich
Gabriel Zachmann wrote:
 
 I was wondering why python doesn't contain a way to make things const?
 
 If it were possible to declare variables at the time they are bound to 
 objects that they should not allow modification of the object, then we 
 would have a concept _orthogonal_ to data types themselves and, as a 
 by-product, a way to declare tuples as constant lists.
.
.
.
 It seems to me that implementing that feature would be fairly easy.
 All that would be needed is a flag with each variable.

Nope, that's not all you need; in fact, your definition of 'const' 
conflates two sorts of constants.

Consider:

 const l = 1
 l = 2 # error?

And
 const l = []
 l.append("foo") # error?

with its more general:
 const foo = MyClass()
 foo.myMethod() # error?  myMethod might mutate.

And none of this can prevent:
 d = {}
 const foo=[d]
 d['bar']='baz'

The first constant is the only well-defined one in Python: a constant
name.  A constant name would prohibit rebinding of the name for the
scope of the name.  Of course, it can't prevent any mutation whatsoever
of the object which is referenced by the name.

Conceptually, a constant name would be possible in a python-like 
language, but it would require significant change to the language to 
implement; possibly something along the lines of name/attribute 
unification (because with properties it's possible to have 
nearly-constant[1] attributes on class instances).
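The "nearly-constant attributes" of footnote [1] can be sketched with a property (the Config class and limit attribute are invented for illustration; writing to _limit or __dict__ still works, which is exactly the footnote's caveat):

```python
class Config:
    def __init__(self, limit):
        self._limit = limit

    @property
    def limit(self):            # readable attribute...
        return self._limit
    # ...with no setter defined, so assignment raises AttributeError

c = Config(10)
try:
    c.limit = 20
    blocked = False
except AttributeError:
    blocked = True

print(blocked)     # True -- though c._limit = 20 would still sneak past
print(c.limit)     # 10
```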

The other form of constant, that of a frozen object, is difficult 
(probably impossible) to do for a general object: without knowing ahead 
of time the effects of any method invocation, it is very difficult to 
know whether the object will be mutated.  Combine this with exec/eval 
(as the most absurd level of generality), and I'd argue that it is 
probably theoretically impossible.

For more limited cases, and for more limited definitions of immutable, 
and ignoring completely the effects of extremely strange code, you might 
be able to hack something together with a metaclass (or do something 
along the lines of a frozenset).  I wouldn't recommend it just for 
general use.

Really, the single best purpose of constant names/objects is for
compiler optimization, which CPython doesn't do as of yet.  When it
does, possibly through the PyPy project, constants will more likely be
discovered automatically from analysis of running code.

[1] -- barring straight modification of __dict__


Re: const objects (was Re: Death to tuples!)

2005-12-14 Thread Magnus Lycka
Gabriel Zachmann wrote:
 
 I was wondering why python doesn't contain a way to make things const?
 
 If it were possible to declare variables at the time they are bound to 
 objects that they should not allow modification of the object, then we 
 would have a concept _orthogonal_ to data types themselves and, as a 
 by-product, a way to declare tuples as constant lists.
 
 So this could look like this:
 
 const l = [1, 2, 3]

That was a bit confusing. Is it the name 'l' or the list
object [1, 2, 3] that you want to make const? If you want
to make the list object immutable, it would make more sense
to write "l = const [1, 2, 3]". I don't quite see the point,
though.

If you could write "const l = [1, 2, 3]", that should logically
mean that the name l is fixed to the (mutable) list object
that initially contains [1, 2, 3], i.e. l.append(6) is OK,
but l = 'something completely different' in the same scope
as "const l = [1, 2, 3]" would be forbidden.

Besides, what's the use case for mutable numbers for instance,
when you always use freely rebindable references in your source
code to refer to these numbers. Do you want to be able to play
nasty tricks like this?

  f = 5
  v = f
  v++
  print f
6

It seems to me that you don't quite understand what the
assignment operator does in Python. Please read
http://effbot.org/zone/python-objects.htm
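Magnus's point about rebinding can be shown with legal Python (there is no ++ operator; v = v + 1 rebinds the name rather than mutating the number):

```python
f = 5
v = f            # v now refers to the same object as f
v = v + 1        # rebinds v to a brand-new object; f is unaffected
print(f)         # 5
print(v)         # 6
```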



Re: const objects (was Re: Death to tuples!)

2005-12-14 Thread Tom Anderson
On Wed, 14 Dec 2005, Steven D'Aprano wrote:

 On Wed, 14 Dec 2005 10:57:05 +0100, Gabriel Zachmann wrote:

 I was wondering why python doesn't contain a way to make things const?

 If it were possible to declare variables at the time they are bound 
 to objects that they should not allow modification of the object, then 
 we would have a concept _orthogonal_ to data types themselves and, as a 
 by-product, a way to declare tuples as constant lists.

 In an earlier thread, somebody took me to task for saying that Python 
 doesn't have variables, but names and objects instead.

I'd hardly say it was a taking to task - that phrase implies 
authoritativeness on my part! :)

 This is another example of the mental confusion that occurs when you 
 think of Python having variables.

What? What does this have to do with it? The problem here - as Christopher 
and Magnus point out - is the conflation in the OP's mind of the idea of a 
variable, and of the object referenced by that variable. He could have 
expressed the same confusion using your names-values-and-bindings 
terminology - just replace 'variable' with 'name'. The expression would be 
nonsensical, but it's nonsensical in the variables-objects-and-pointers 
terminology too.

 Some languages have variables. Some do not.

Well, there is the lambda calculus, I guess ...

tom

-- 
The sky above the port was the colour of television, tuned to a dead
channel


Re: const objects (was Re: Death to tuples!)

2005-12-14 Thread Steven D'Aprano
On Wed, 14 Dec 2005 18:35:51 +, Tom Anderson wrote:

 On Wed, 14 Dec 2005, Steven D'Aprano wrote:
 
 On Wed, 14 Dec 2005 10:57:05 +0100, Gabriel Zachmann wrote:

 I was wondering why python doesn't contain a way to make things const?

 If it were possible to declare variables at the time they are bound 
 to objects that they should not allow modification of the object, then 
 we would have a concept _orthogonal_ to data types themselves and, as a 
 by-product, a way to declare tuples as constant lists.

 In an earlier thread, somebody took me to task for saying that Python 
 doesn't have variables, but names and objects instead.
 
 I'd hardly say it was a taking to task - that phrase implies 
 authoritativeness on my part! :)
 
 This is another example of the mental confusion that occurs when you 
 think of Python having variables.
 
 What? What does this have to do with it? The problem here - as Christopher 
 and Magnus point out - is the conflation in the OP's mind of the idea of a 
 variable, and of the object referenced by that variable. He could have 
 expressed the same confusion using your names-values-and-bindings 
 terminology - just replace 'variable' with 'name'. The expression would be 
 nonsensical, but it's nonsensical in the variables-objects-and-pointers 
 terminology too.

If the OP was thinking names-and-bindings, he would have immediately
realised there is a difference between unmodifiable OBJECTS and
unchangeable NAMES, a distinction which doesn't appear to have even passed
his mind.

A variable is a single entity of name+value, so it makes perfect sense to
imagine a variable with a constant, unchangeable value. But a name+object
is two entities, and to implement constants you would have to have both
unmodifiable objects and names that can't be rebound -- and even that may
not be sufficient.



-- 
Steven.



Re: Death to tuples!

2005-12-05 Thread Antoon Pardon
Op 2005-12-02, Bengt Richter schreef [EMAIL PROTECTED]:
 On 2 Dec 2005 13:05:43 GMT, Antoon Pardon [EMAIL PROTECTED] wrote:

On 2005-12-02, Bengt Richter [EMAIL PROTECTED] wrote:
 On 1 Dec 2005 09:24:30 GMT, Antoon Pardon [EMAIL PROTECTED] wrote:

On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:


Personally I expect the following pieces of code

  a = const expression
  b = same expression

to be equivalent with

  a = const expression
  b = a

But that isn't the case when the const expression is a list.

 ISTM the line above is a symptom of a bug in your mental Python source 
 interpreter.
 It's a contradiction. A list can't be a const expression.
 We probably can't make real progress until that is debugged ;-)
 Note: assert const expression is a list should raise a mental exception ;-)

Why should "const expression is a list" raise a mental exception with
me? I think it should have raised a mental exception with the designer.
If there is a problem with const list expressions, maybe the language
shouldn't have something that looks so much like one?


This seems to go against the pythonic spirit of explicit is
better than implicit.
 Unless you accept that '[' is explicitly different from '(' ;-)


It also seems to go against the way default arguments are treated.
I suspect another bug ;-)

The question is: where is the bug? You can start from the idea that
the language is how it was defined, and thus by definition correct,
and so any problem is a user problem.

You can also notice that a specific construct is a stumbling block
for a lot of new people and wonder whether that doesn't say something
about the design.

 Do you want
 to write an accurate historical account, or are you expressing discomfort
 from having had to revise your mental model of other programming languages
 to fit Python? Or do you want to try to enhance Python in some way?

If there is discomfort, it has more to do with the fact that revising
my mental model to fit Python in one aspect doesn't translate into
understanding other aspects of Python well enough.

 An example?

Well there is the documentation about function calls, which states
something like "the first positional argument provided will go
to the first parameter, ..." and that default values will be used
for parameters not filled by arguments. Then you stumble on
the built-in function range with the signature:

  range([start,] stop[, step])

Why, if you provide only one argument, does it go to the second
parameter?
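The surprise described above is easy to reproduce (Python 3, where range must be wrapped in list() to display its contents):

```python
print(list(range(5)))        # [0, 1, 2, 3, 4] -- the lone argument is 'stop'
print(list(range(2, 5)))     # [2, 3, 4]       -- start=2, stop=5
print(list(range(2, 9, 3)))  # [2, 5, 8]       -- start, stop, step
```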

Why are a number of constructs for specifying/creating a value/object
limited to subscriptions? Why is it impossible to do the following:

  a = ...
  f(...)
  a = 3:8
  tree.keys('a':'b')

Why is how you can work with defaults in slices not similar to
how you work with defaults in calls? You can do:

  lst[:7]

So why can't you call range as follows:

  range(,7)


lst[::] is a perfectly acceptable slice, so why doesn't 'slice()' work?

Positional arguments must come before keyword arguments, but when
you somehow need to do the following:

  foo(arg0, *args, kwd = value)

You suddenly find out the above is illegal and it should be written

  foo(arg0, kwd = value, *args)


people are making use of that rule too much, making the total language
less practical as a whole.
 IMO this is hand waving unless you can point to specifics, and a kind of
 unseemly propaganda/innuendo if you can't.

IMO the use of negative indexing is the prime example in this case.
Sure, it is practical that if you want the last element of a list,
you can just use -1 as a subscript. However, in a lot of cases -1
is just as out of bounds as an index greater than the list length.

At one time I was given a lower and an upper limit for a slice of a list;
both could range from 0 to len - 1. But I really needed three slices:
I needed lst[low:up], lst[low-1:up-1] and lst[low+1:up+1].
Getting lst[low+1:up+1] wasn't a problem; the way Python treats
slices gave me just what I wanted, even if low and up were too big.
But when low or up was zero, lst[low-1:up-1] gave trouble.

If I want lst[low:up] in reverse, then the following works in general:

  lst[up-1 : low-1 : -1]

Except of course when low or up is zero.
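The zero-boundary problem can be reproduced directly (Python 3; the None workaround shown at the end is one common fix, not something proposed in the thread):

```python
lst = list(range(10))
low, up = 3, 7
print(lst[up-1:low-1:-1])    # [6, 5, 4, 3] -- reversed lst[low:up], as hoped

low = 0
print(lst[up-1:low-1:-1])    # [] -- low-1 == -1 now means "the last element"

# One workaround: substitute None when the bound would go negative
stop = low - 1 if low > 0 else None
print(lst[up-1:stop:-1])     # [6, 5, 4, 3, 2, 1, 0]
```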


Of course I can make a list subclass that works as I want, but IMO that
is the wrong way around. People should use a subclass for special cases,
like indexes that wrap around, not use a subclass to remove the special
casing that was put in the base class.

Of course this example fits between the examples above, and some of
those probably will fit here too.

 or what are we pursuing?

What I'm pursuing, I think, is that people would think about what
impractical effects can arise when you drop purity for practicality.
 I think this is nicely said and important. I wish it were possible
 to arrive at a statement like this without wading though massive 
 irrelevancies ;-)

Well I hope you didn't have to wade so much this time.

 BTW, I am participating in this thread more out of interest in

Re: Death to tuples!

2005-12-02 Thread Antoon Pardon
On 2005-12-01, Mike Meyer [EMAIL PROTECTED] wrote:
 Antoon Pardon [EMAIL PROTECTED] writes:
 On 2005-12-01, Mike Meyer [EMAIL PROTECTED] wrote:
 Antoon Pardon [EMAIL PROTECTED] writes:
 I know what happens, I would like to know, why they made this choice.
 One could argue that the expression for the default argument belongs
 to the code for the function and thus should be executed at call time.
 Not at definion time. Just as other expressions in the function are
 not evaluated at definition time.

 The idiom to get a default argument evaluated at call time with the
 current behavior is:

 def f(arg = None):
     if arg is None:
         arg = BuildArg()

 What's the idiom to get a default argument evaluated at definition
 time if it were as you suggested?

 Well there are two possibilities I can think of:

 1)
   arg_default = ...
   def f(arg = arg_default):
  ...

 Yuch. Mostly because it doesn't work:

 arg_default = ...
 def f(arg = arg_default):
 ...

 arg_default = ...
 def g(arg = arg_default):

 That one looks like an accident waiting to happen.

The fact that accidents can happen doesn't mean it doesn't work.
IMO accidents can happen here because Python
doesn't allow a name to be marked as constant or unrebindable.

 This may not have been the reason it was done in the first place, but
 this loss of functionality would seem to justify the current behavior.

 And, just for fun:

 def setdefaults(**defaults):
     def maker(func):
         def called(*args, **kwds):
             defaults.update(kwds)
             func(*args, **defaults)
         return called
     return maker

So it seems that with a decorator there would be no loss of
functionality.
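For comparison, here are the two default behaviours side by side (f_shared and f_fresh are invented names; the sentinel idiom is the one Mike quoted earlier in the thread):

```python
def f_shared(items=[]):     # the default list is created once, at def time
    items.append(1)
    return items

def f_fresh(items=None):    # sentinel: build a fresh list on each call
    if items is None:
        items = []
    items.append(1)
    return items

a = f_shared()
b = f_shared()
print(a is b, b)              # True [1, 1] -- one shared list, growing
print(f_fresh(), f_fresh())   # [1] [1]    -- two independent lists
```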

-- 
Antoon Pardon


Re: Death to tuples!

2005-12-02 Thread Antoon Pardon
On 2005-12-02, Bengt Richter [EMAIL PROTECTED] wrote:
 On 1 Dec 2005 09:24:30 GMT, Antoon Pardon [EMAIL PROTECTED] wrote:

On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:

 The left one is equivalent to:

 __anon = []
 def Foo(l):
...

 Foo(__anon)
 Foo(__anon)
 
 So, why shouldn't: 
 
res = []
for i in range(10):
   res.append(i*i)
 
 be equivallent to:
 
   __anon = list()
   ...
 
res = __anon
for i in range(10):
   res.append(i*i)

 Because the empty list expression '[]' is evaluated when the expression 
 containing it is executed.

This doesn't follow. The fact that this is how it is now doesn't mean
that this is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be executed
when a function is called, where '[]' is contained in the expression
  ^^^[1] ^
determining the default value.
  ^^^[2]
 Ok, but [] (without the quotes) is just one possible expression, so
 presumably you have to follow your rules for all default value expressions.
 Plain [] evaluates to a fresh new empty list whenever it is evaluated,

Yes, one of the questions I have is why Python people would consider
it a problem if it wasn't.

Personally I expect the following pieces of code

  a = const expression
  b = same expression

to be equivalent with

  a = const expression
  b = a

But that isn't the case when the const expression is a list.

A person looking at:

  a = [1 , 2]

sees something resembling

  a = (1 , 2)

Yet the two are treated very differently. As far as I understand, the
first is translated into some kind of list((1,2)) statement while
the second is built at compile time and just bound.

This seems to go against the pythonic spirit of explicit is
better than implicit.

It also seems to go against the way default arguments are treated.

 but that's independent of scope. An expression in general may depend on
 names that have to be looked up, which requires not only a place to look
 for them, but also persistence of the name bindings. so def 
 foo(arg=PI*func(x)): ...
 means that at call-time you would have to find 'PI', 'func', and 'x' 
 somewhere.
 Where  how?
 1) If they should be re-evaluated in the enclosing scope, as default arg 
 expressions
 are now, you can just write foo(PI*func(x)) as your call.

I may be a bit pedantic. (Read that as I probably am)

But you can't necessarily write foo(PI*func(x)) as your call, because PI
and func may not be within scope where the call is made.

 So you would be asking
 for foo() to be an abbreviation of that. Which would give you a fresh list if
 foo was defined def foo(arg=[]): ...

This was my first thought.

 Of course, if you wanted just the expression value as now at def time, you 
 could write
 def foo(...):...; foo.__default0=PI*fun(x) and later call 
 foo(foo.__default0), which is
 what foo() effectively does now.

 2) Or did you want the def code to look up the bindings at def time and save 
 them
 in, say, a tuple __deftup0=(PI, func, x) that captures the def-time bindings 
 in the scope
 enclosing the def, so that when foo is called, it can do arg = 
 _deftup0[0]*_deftup0[1](_deftup0[2])
 to initialize arg and maybe trigger some side effects at call time.

This is tricky, I think it would depend on how foo(arg=[]) would be
translated. 

   2a) _deftup0=([]), with a subsequent arg = _deftup0[0]
or 
   2b) _deftup0=(list, ()), with subsequently arg = _deftup0[0](_deftup0[1])


My feeling is that this proposal would create a lot of confusion.

Something like def f(arg = s) might give very different results
depending on whether s is a list or a tuple.

 3) Or did you want to save the names themselves, __default0_names=('PI', 
 'func', 'x')
 and look them up at foo call time, which is tricky as things are now, but 
 could be done?

No, this would make for some kind of dynamic scoping; I don't think it
would mesh with the static scoping Python has now.


 The left has one list created outside the body of the function, the
 right one has two lists created outside the body of the function. Why
 on earth should these be the same?
 
 Why on earth should it be the same list, when a function is called
 and is provided with a list as a default argument?
 It's not provided with a list -- it's provided with a _reference_ to a list.
 You know this by now, I think. Do you want clone-object-on-new-reference 
 semantics?
 A sort of indirect value semantics? If you do, and you think that ought to be
 default semantics, you don't want Python. OTOH, if you want a specific effect,
 why not look for a way to do it either within python, or as a graceful 
 syntactic
 enhancement to python? E.g., def foo(arg{expr}):... could mean evaluate arg 
 as you would like.
 Now the ball is in your court to define as you would like (exactly and 
 precisely ;-)

I didn't start my 

Re: Death to tuples!

2005-12-02 Thread Mike Meyer
Antoon Pardon [EMAIL PROTECTED] writes:
 Well there are two possibilities I can think of:

 1)
   arg_default = ...
   def f(arg = arg_default):
  ...

 Yuch. Mostly because it doesn't work:

 arg_default = ...
 def f(arg = arg_default):
 ...

 arg_default = ...
 def g(arg = arg_default):

 That one looks like an accident waiting to happen.
 The fact that accidents can happen doesn't mean it doesn't work.
 IMO accidents can happen here because Python
 doesn't allow a name to be marked as constant or unrebindable.

Lots of accidents could be prevented if Python marked names in various
ways: with a type, or as only being visible to certain other types, or
whatever. A change that requires such a construct in order to be
usable probably needs rethinking.

Even if that weren't a problem, this would still require introducing
a new variable into the global namespace for every such
argument. Unlike other similar constructs, you *can't* clean them up,
because the whole point is that they be around later.

The decorator was an indication of a possible solution. I know it
fails in some cases, and it probably fails in others as well.

  mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Death to tuples!

2005-12-02 Thread Mike Meyer
Antoon Pardon [EMAIL PROTECTED] writes:
 On 2005-12-02, Bengt Richter [EMAIL PROTECTED] wrote:
 On 1 Dec 2005 09:24:30 GMT, Antoon Pardon [EMAIL PROTECTED] wrote:
On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:
I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be executed
when a function is called, where '[]' is contained in the expression
  ^^^[1] ^
determining the default value.
  ^^^[2]
 Ok, but [] (without the quotes) is just one possible expression, so
 presumably you have to follow your rules for all default value expressions.
 Plain [] evaluates to a fresh new empty list whenever it is evaluated,
 Yes, one of the questions I have is why Python people would consider
 it a problem if it wasn't.

That would make [] behave differently from [compute_a_value()]. This
doesn't seem like a good idea.

 Personally I expect the following pieces of code
   a = const expression
   b = same expression
 to be equivalent with
   a = const expression
   b = a
 But that isn't the case when the const expression is a list.

It isn't the case when the const expression is a tuple, either:

>>> x = (1, 2)
>>> (1, 2) is x
False

or an integer:

>>> a = 1234
>>> a is 1234
False

Every value (in the sense of a syntactic element that's a value, and
not a keyword, variable, or other construct) occurring in a program
should represent a different object. If the compiler can prove that a
value can't be changed, it's allowed to use a single instance for all
occurrences of that value. Is there *any* language that behaves
differently from this?
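This is easy to probe, though the identity results are implementation details (CPython interns small ints and may merge constants within a code block), so only the equalities are guaranteed:

```python
a = 1234
b = 1234
print(a == b)   # True: equal values, always
print(a is b)   # unspecified: the implementation may or may not reuse the object

x = (1, 2)
y = (1, 2)
print(x == y)   # True
print(x is y)   # again unspecified; the compiler is *allowed* to merge them
```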

 A person looking at:
   a = [1 , 2]
 sees something resembling
   a = (1 , 2)
 Yet the two are treated very differently. As far as I understand, the
 first is translated into some kind of list((1,2)) statement while
 the second is built at compile time and just bound.

No, that translation doesn't happen. [1, 2] builds a list of
values. (1, 2) builds and binds a constant, which is only possible
because it, unlike [1, 2], *is* a constant. list(1, 2) calls the
function list on a pair of values:

>>> def f():
...     a = [1, 2]
...     b = list(1, 2)
...     c = (1, 2)
...
>>> dis.dis(f)
  2   0 LOAD_CONST   1 (1)
  3 LOAD_CONST   2 (2)
  6 BUILD_LIST   2
  9 STORE_FAST   0 (a)

  3  12 LOAD_GLOBAL  1 (list)
 15 LOAD_CONST   1 (1)
 18 LOAD_CONST   2 (2)
 21 CALL_FUNCTION2
 24 STORE_FAST   2 (b)

  4  27 LOAD_CONST   3 ((1, 2))
 30 STORE_FAST   1 (c)
 33 LOAD_CONST   0 (None)
 36 RETURN_VALUE

 This seems to go against the pythonic spirit of explicit is
 better than implicit.

Even if [arg] were just syntactic sugar for list(arg), why would
that be implicit in some way?

 It also seems to go against the way default arguments are treated.

Only if you don't understand how default arguments are treated.
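The def-time evaluation Mike refers to can be made visible by counting how often the default expression runs (make_default is an invented helper for this sketch):

```python
calls = []

def make_default():
    calls.append(1)          # record each evaluation of the default expression
    return []

def f(arg=make_default()):   # evaluated exactly once, at 'def' time
    return arg

f(); f(); f()
print(len(calls))            # 1 -- not 3: the default was built only once
print(f() is f())            # True -- every call shares that one object
```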

 mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Death to tuples!

2005-12-02 Thread Bengt Richter
On 2 Dec 2005 13:05:43 GMT, Antoon Pardon [EMAIL PROTECTED] wrote:

On 2005-12-02, Bengt Richter [EMAIL PROTECTED] wrote:
 On 1 Dec 2005 09:24:30 GMT, Antoon Pardon [EMAIL PROTECTED] wrote:

On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:


Personally I expect the following pieces of code

  a = const expression
  b = same expression

to be equivalent with

  a = const expression
  b = a

But that isn't the case when the const expression is a list.

ISTM the line above is a symptom of a bug in your mental Python source 
interpreter.
It's a contradiction. A list can't be a const expression.
We probably can't make real progress until that is debugged ;-)
Note: assert const expression is a list should raise a mental exception ;-)

A person looking at:

  a = [1 , 2]
English: let a refer to a mutable container object initialized to contain
an ordered sequence of specified elements 1 and 2.

sees something resembling

  a = (1 , 2)
English: let a refer to an immutable container object initialized to contain
an ordered sequence of specified elements 1 and 2.

Yet the two are treated very differently. As far as I understand the
first is translated into some kind of list((1,2)) statement while
They are of course different in that two different kinds of objects
(mutable vs immutable containers) are generated. This can allow an optimization
in the one case, but not generally in the other.

the second is built at compile time and just bound.
a = (1, 2) is built at compile time, but a = (x, y) would not be, since x and y
can't generally be known at compile time. This is a matter of optimization, not
semantics. a = (1, 2) _could_ be built with the same code as a = (x, y), picking
up the constants 1 and 2 as arguments to a dynamic construction of the tuple,
done in the identical way as the construction would be done with x and y. But
that is a red herring in this discussion, if we are talking about the language
rather than the implementation.


This seems to go against the pythonic spirit of explicit is
better than implicit.
Unless you accept that '[' is explicitly different from '(' ;-)


It also seems to go against the way default arguments are treated.
I suspect another bug ;-)



 but that's independent of scope. An expression in general may depend on
 names that have to be looked up, which requires not only a place to look
 for them, but also persistence of the name bindings. so def 
 foo(arg=PI*func(x)): ...
 means that at call-time you would have to find 'PI', 'func', and 'x' 
 somewhere.
 Where  how?
 1) If they should be re-evaluated in the enclosing scope, as default arg 
 expressions
 are now, you can just write foo(PI*func(x)) as your call.

I may be a bit pedantic. (Read that as I probably am)

But you can't necessarily write foo(PI*func(x)) as your call, because PI
and func may not be within scope where the call is made.
Yes, I was trying to make you notice this ;-)


 So you would be asking
 for foo() to be an abbreviation of that. Which would give you a fresh list if
 foo was defined def foo(arg=[]): ...

This was my first thought.

[...]
 Why on earth should it be the same list, when a function is called
 and is provided with a list as a default argument?
 It's not provided with a list -- it's provided with a _reference_ to a list.
 You know this by now, I think. Do you want clone-object-on-new-reference
 semantics? A sort of indirect value semantics? If you do, and you think that
 ought to be default semantics, you don't want Python. OTOH, if you want a
 specific effect, why not look for a way to do it either within python, or as
 a graceful syntactic enhancement to python? E.g., def foo(arg{expr}): ...
 could mean evaluate arg as you would like.
 Now the ball is in your court to define as you would like (exactly and
 precisely ;-)

I didn't start my question because I wanted something to change in
python. It was just something I wondered about. Now I wouldn't

I wonder if this something will still exist once you get
"assert const expression is a list" to raise a mental exception ;-)

mind python being enhanced at this point, so should the python
people decide to work on this, I'll give you my proposal, using your
syntax.

  def foo(arg{expr}):
      ...

should be translated to something like:

  class _def: pass

  def foo(arg=_def):
      if arg is _def:
          arg = expr
      ...

I think this is equivalent to your first proposal and probably
not worth the trouble, since it is not that difficult to get
the behaviour one wants.
Again, I'm not proposing anything except to help lay out evidence.
The above is just a spelling of typical idiom for mutable default
value initialization.
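For concreteness, here is a runnable spelling of that idiom using a module-level sentinel object (the names _MISSING and foo are only illustrative):

```python
_MISSING = object()  # unique sentinel, created once at module load

def foo(arg=_MISSING):
    if arg is _MISSING:
        # This expression runs at call time, so each call gets a fresh list.
        arg = []
    arg.append(1)
    return arg
```

Unlike def foo(arg=[]), two bare calls foo() here return two distinct lists.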


I think such a proposal would be most advantageous for newbies,
because the two possibilities for default values would make them
think about what the differences are between the two, so they
are less likely to be confused by the def f(l=[]) case.

So are you saying it's not worth the trouble 

Re: Death to tuples!

2005-12-01 Thread Antoon Pardon
On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:

 The left one is equivalent to:

 __anon = []
 def Foo(l):
...

 Foo(__anon)
 Foo(__anon)
 
 So, why shouldn't: 
 
res = []
for i in range(10):
   res.append(i*i)
 
 be equivalent to:
 
   __anon = list()
   ...
 
res = __anon
for i in range(10):
   res.append(i*i)

 Because the empty list expression '[]' is evaluated when the expression 
 containing it is executed.

This doesn't follow. The fact that this is how it is now doesn't mean
that this is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be evaluated
when a function is called, where '[]' is contained in the expression
determining the default value.

 The left has one list created outside the body of the function, the
 right one has two lists created outside the body of the function. Why
 on earth should these be the same?
 
 Why on earth should it be the same list, when a function is called
 and is provided with a list as a default argument?

 Because the empty list expression '[]' is evaluated when the 
 expression containing it is executed.

Again you are just stating the specific choice python has made.
Not why they made this choice.

 I see no reason why your and my question should be answered
 differently. 

 We are agreed on that, the answers should be the same, and indeed they are. 
 In each case the list is created when the expression (an assigment or a 
 function definition) is executed. The behaviour, as it currently is, is 
 entirely self-consistent.

 I think perhaps you are confusing the execution of the function body with 
 the execution of the function definition. They are quite distinct: the 
 function definition evaluates any default arguments and creates a new 
 function object binding the code with the default arguments and any scoped 
 variables the function may have.

I know what happens; I would like to know why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time,
not at definition time, just as other expressions in the function are
not evaluated at definition time.

So if these kinds of expressions are evaluated at definition time,
I don't see what would be so problematic about other expressions in
the function being evaluated at definition time too.

 If the system tried to delay the evaluation until the function was called 
 you would get surprising results as variables referenced in the default 
 argument expressions could have changed their values.

This would be no more surprising than a variable referenced in a normal
expression changing value between two evaluations.
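The behaviour being debated is easy to demonstrate (a minimal sketch; f is a made-up name):

```python
def f(l=[]):        # the [] is evaluated once, when the def statement executes
    l.append(0)
    return l

first = f()
second = f()
# Both calls returned the very same list object, which has grown:
assert first is second
assert first == [0, 0]
```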

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-12-01 Thread Antoon Pardon
On 2005-11-30, Christophe [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:
 On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 
Antoon Pardon wrote:

But lets just consider. Your above code could simply be rewritten
as follows.

  res = list()
  for i in range(10):
 res.append(i*i)


I don't understand your point here? You want list() to create a new list 
and [] to return the same (initially empty) list throughout the run of the 
program?
 
 
 No, but I think that each occurrence returning the same (initially empty)
 list throughout the run of the program would be consistent with how
 default arguments are treated.

 What about this:

 def f(a):
     res = [a]
     return res

 How can you return the same list that way? Do you propose to make such
 a construct illegal?

I don't propose anything. This is AFAIC just a philosophical
exploration of the pros and cons of certain python decisions.

To answer your question: the [a] is not a constant list, so
maybe it should be illegal. The way python works now, each list
is implicitly constructed. So maybe it would have been better
if python required such a construction to be made explicit.

If people had been required to write:

  a = list()
  b = list()

Instead of being able to write

  a = []
  b = []

It would have been clearer that a and b are not the same list.

-- 
Antoon Pardon


Re: Death to tuples!

2005-12-01 Thread Duncan Booth
Antoon Pardon wrote:

 I know what happens; I would like to know why they made this choice.
 One could argue that the expression for the default argument belongs
 to the code for the function and thus should be executed at call time,
 not at definition time, just as other expressions in the function are
 not evaluated at definition time.
 
Yes you could argue for that, but I think it would lead to a more complex 
and confusing language.

The 'why' is probably at least partly historical. When default arguments 
were added to Python there were no bound variables, so the option of 
delaying the evaluation simply wasn't there. However, I'm sure that if 
default arguments were being added today, and there was a choice between 
using closures or evaluating the defaults at function definition time, the 
choice would still come down on the side of simplicity and clarity.

(Actually, I think it isn't true that Python today could support evaluating 
default arguments inside the function without further changes to how it 
works: currently class variables aren't in scope inside methods so you 
would need to add support for that at the very least.)

If you want the expressions to use closures then you can do that by putting 
expressions inside the function. If you changed default arguments to make 
them work in the same way, then you would have to play a lot more games 
with factory functions. Most of the tricks you can play of the x=x default 
argument variety are just tricks, but sometimes they can be very useful 
tricks.
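One such useful trick: because defaults are evaluated at definition time, the i=i idiom captures the current value of a loop variable in each closure (a standard illustration, not an example from this thread):

```python
# Each lambda freezes the current i in its own default argument.
adders = [lambda n, i=i: n + i for i in range(3)]
results = [f(10) for f in adders]      # [10, 11, 12]

# Without the trick, every closure sees the final value of i.
late = [lambda n: n + i for i in range(3)]
late_results = [f(10) for f in late]   # [12, 12, 12]
```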


Re: Death to tuples!

2005-12-01 Thread Mike Meyer
Antoon Pardon [EMAIL PROTECTED] writes:
 I know what happens; I would like to know why they made this choice.
 One could argue that the expression for the default argument belongs
 to the code for the function and thus should be executed at call time,
 not at definition time, just as other expressions in the function are
 not evaluated at definition time.

The idiom to get a default argument evaluated at call time with the
current behavior is:

def f(arg = None):
    if arg is None:
        arg = BuildArg()

What's the idiom to get a default argument evaluated at definition
time if it were as you suggested?

 mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Death to tuples!

2005-12-01 Thread Fuzzyman

Mike Meyer wrote:
 Antoon Pardon [EMAIL PROTECTED] writes:
  I know what happens; I would like to know why they made this choice.
  One could argue that the expression for the default argument belongs
  to the code for the function and thus should be executed at call time,
  not at definition time, just as other expressions in the function are
  not evaluated at definition time.

 The idiom to get a default argument evaluated at call time with the
 current behavior is:

 def f(arg = None):
     if arg is None:
         arg = BuildArg()

 What's the idiom to get a default argument evaluated at definition
 time if it were as you suggested?


Having default arguments evaluated at definition time certainly bites a
lot of newbies. It allows useful tricks with nested scopes though.

All the best,

Fuzzyman
http://www.voidspace.org.uk/python/index.shtml

  mike
 --
 Mike Meyer [EMAIL PROTECTED]
 http://www.mired.org/home/mwm/
 Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.



Re: Death to tuples!

2005-12-01 Thread Antoon Pardon
On 2005-12-01, Mike Meyer [EMAIL PROTECTED] wrote:
 Antoon Pardon [EMAIL PROTECTED] writes:
 I know what happens; I would like to know why they made this choice.
 One could argue that the expression for the default argument belongs
 to the code for the function and thus should be executed at call time,
 not at definition time, just as other expressions in the function are
 not evaluated at definition time.

 The idiom to get a default argument evaluated at call time with the
 current behavior is:

 def f(arg = None):
     if arg is None:
         arg = BuildArg()

 What's the idiom to get a default argument evaluated at definition
 time if it were as you suggested?

Well there are two possibilities I can think of:

1)
  arg_default = ...
  def f(arg = arg_default):
      ...

2)
  def f(arg = None):
      if arg is None:
          arg = default

-- 
Antoon Pardon


Re: Death to tuples!

2005-12-01 Thread Mike Meyer
Antoon Pardon [EMAIL PROTECTED] writes:
 On 2005-12-01, Mike Meyer [EMAIL PROTECTED] wrote:
 Antoon Pardon [EMAIL PROTECTED] writes:
 I know what happens; I would like to know why they made this choice.
 One could argue that the expression for the default argument belongs
 to the code for the function and thus should be executed at call time,
 not at definition time, just as other expressions in the function are
 not evaluated at definition time.

 The idiom to get a default argument evaluated at call time with the
 current behavior is:

 def f(arg = None):
     if arg is None:
         arg = BuildArg()

 What's the idiom to get a default argument evaluated at definition
 time if it were as you suggested?

 Well there are two possibilities I can think of:

 1)
   arg_default = ...
   def f(arg = arg_default):
       ...

Yuch. Mostly because it doesn't work:

arg_default = ...
def f(arg = arg_default):
    ...

arg_default = ...
def g(arg = arg_default):
    ...

That one looks like an accident waiting to happen.

 2)
   def f(arg = None):
       if arg is None:
           arg = default

Um, that's just rewriting the first one in an uglier fashion, except
you omitted setting the default value before the function.

This may not have been the reason it was done in the first place, but
this loss of functionality would seem to justify the current behavior.

And, just for fun:

def setdefaults(**defaults):
    def maker(func):
        def called(*args, **kwds):
            defaults.update(kwds)
            func(*args, **defaults)
        return called
    return maker
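For what it's worth, that decorator has a sharp edge: defaults.update(kwds) mutates the captured dict, so an override passed in one call persists into later calls. A standalone sketch showing this (with a return added so the result is visible; greet is a made-up example, not from the thread):

```python
def setdefaults(**defaults):
    def maker(func):
        def called(*args, **kwds):
            defaults.update(kwds)             # mutates the shared defaults dict
            return func(*args, **defaults)
        return called
    return maker

@setdefaults(greeting="hello")
def greet(name, greeting):
    return "%s, %s" % (greeting, name)

r1 = greet("world")                   # "hello, world"
r2 = greet("world", greeting="hi")    # "hi, world"
r3 = greet("world")                   # "hi, world" -- the override leaked
```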

mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Death to tuples!

2005-12-01 Thread Bengt Richter
On 1 Dec 2005 09:24:30 GMT, Antoon Pardon [EMAIL PROTECTED] wrote:

On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:

 The left one is equivalent to:

 __anon = []
 def Foo(l):
...

 Foo(__anon)
 Foo(__anon)
 
 So, why shouldn't: 
 
res = []
for i in range(10):
   res.append(i*i)
 
 be equivalent to:
 
   __anon = list()
   ...
 
res = __anon
for i in range(10):
   res.append(i*i)

 Because the empty list expression '[]' is evaluated when the expression 
 containing it is executed.

This doesn't follow. The fact that this is how it is now doesn't mean
that this is the way it should be.

I think one could argue that since '[]' is normally evaluated when
the expression containing it is executed, it should also be executed
when a function is called, where '[]' is contained in the expression
 ^^^[1] ^
determining the default value.
 ^^^[2]
Ok, but [] (without the quotes) is just one possible expression, so
presumably you have to follow your rules for all default value expressions.
Plain [] evaluates to a fresh new empty list whenever it is evaluated,
but that's independent of scope. An expression in general may depend on
names that have to be looked up, which requires not only a place to look
for them, but also persistence of the name bindings. So

    def foo(arg=PI*func(x)): ...

means that at call time you would have to find 'PI', 'func', and 'x' somewhere.
Where and how?

1) If they should be re-evaluated in the enclosing scope, as default arg
expressions are now, you can just write foo(PI*func(x)) as your call. So you
would be asking for foo() to be an abbreviation of that. Which would give you
a fresh list if foo was defined def foo(arg=[]): ...

Of course, if you wanted just the expression value as now at def time, you
could write def foo(...): ...; foo.__default0 = PI*func(x) and later call
foo(foo.__default0), which is what foo() effectively does now.

2) Or did you want the def code to look up the bindings at def time and save
them in, say, a tuple __deftup0 = (PI, func, x) that captures the def-time
bindings in the scope enclosing the def, so that when foo is called, it can do
arg = __deftup0[0]*__deftup0[1](__deftup0[2]) to initialize arg and maybe
trigger some side effects at call time.

3) Or did you want to save the names themselves, __default0_names = ('PI',
'func', 'x'), and look them up at foo call time, which is tricky as things are
now, but could be done?
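A fourth spelling, achievable in today's python without any new syntax, is to pass a zero-argument callable and evaluate it at call time (a sketch under that assumption; the names are illustrative):

```python
def foo(arg=None, make_default=list):
    # make_default is invoked at call time, so each call can get a fresh value
    if arg is None:
        arg = make_default()
    arg.append(1)
    return arg
```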


 The left has one list created outside the body of the function, the
 right one has two lists created outside the body of the function. Why
 on earth should these be the same?
 
 Why on earth should it be the same list, when a function is called
 and is provided with a list as a default argument?
It's not provided with a list -- it's provided with a _reference_ to a list.
You know this by now, I think. Do you want clone-object-on-new-reference
semantics? A sort of indirect value semantics? If you do, and you think that
ought to be default semantics, you don't want Python. OTOH, if you want a
specific effect, why not look for a way to do it either within python, or as a
graceful syntactic enhancement to python? E.g., def foo(arg{expr}): ... could
mean evaluate arg as you would like. Now the ball is in your court to define
as you would like (exactly and precisely ;-)



 Because the empty list expression '[]' is evaluated when the 
 expression containing it is executed.

Again you are just stating the specific choice python has made.
Not why they made this choice.
Why are you interested in the answer to this question? ;-) Do you want
to write an accurate historical account, or are you expressing discomfort
from having had to revise your mental model of other programming languages
to fit Python? Or do you want to try to enhance Python in some way?

 I see no reason why your and my question should be answered
 differently. 

 We are agreed on that, the answers should be the same, and indeed they are. 
 In each case the list is created when the expression (an assigment or a 
 function definition) is executed. The behaviour, as it currently is, is 
 entirely self-consistent.

 I think perhaps you are confusing the execution of the function body with 
 the execution of the function definition. They are quite distinct: the 
 function definition evaluates any default arguments and creates a new 
 function object binding the code with the default arguments and any scoped 
 variables the function may have.

I know what happens; I would like to know why they made this choice.
One could argue that the expression for the default argument belongs
to the code for the function and thus should be executed at call time,
not at definition time, just as other expressions in the function are
not evaluated at definition time.
Maybe it was just easier, and worked very well, and no one showed a need
for doing it differently that couldn't easily 

Re: Death to tuples!

2005-12-01 Thread bonono

Bengt Richter wrote:
 
  Because the empty list expression '[]' is evaluated when the
  expression containing it is executed.
 
 Again you are just stating the specific choice python has made.
 Not why they made this choice.
 Why are you interested in the answer to this question? ;-) Do you want
 to write an accurate historical account, or are you expressing discomfort
 from having had to revise your mental model of other programming languages
 to fit Python? Or do you want to try to enhance Python in some way?

My WAG :

Because it is usually presented as "this is the best way" rather than
"this is the python way". For the former, I think people would be
curious why it is best (or better than the other alternatives considered),
as a learning exercise maybe.



Re: Death to tuples!

2005-11-30 Thread Duncan Booth
Mike Meyer wrote:

 An object is compatible with an exception if it is either the object
 that identifies the exception, or (for exceptions that are classes)
 it is a base class of the exception, or it is a tuple containing an
 item that is compatible with the exception.

 Requiring a tuple here means that the code doesn't have to worry
 about a possible recursive data structure.
 
 How so?
 
 except ExceptType1, (ExceptType2, ExceptType3, (ExceptType4,
 ExceptType5): 
 
 I suspect this won't work, but meets the description.
 
 Lists could be used here with the same restrictions as tuples.

I think you meant:

  except (ExceptType1, (ExceptType2, ExceptType3, (ExceptType4,
ExceptType5))):

otherwise the first comma separates the exception specification from the 
target and you get a syntax error even before the syntax error you might 
expect from the unmatched parentheses. And yes, that works fine.

e.g. You can do this:

>>> ExcGroup1 = ReferenceError, RuntimeError
>>> ExcGroup2 = MemoryError, SyntaxError, ExcGroup1
>>> ExcGroup3 = TypeError, ExcGroup2
>>> try:
...     raise ReferenceError("Hello")
... except ExcGroup3, e:
...     print "Caught", e
...
Caught Hello
>>> ExcGroup3
(<class exceptions.TypeError at 0x009645A0>, (<class exceptions.MemoryError
at 0x00977120>, <class exceptions.SyntaxError at 0x00964A50>, (<class
exceptions.ReferenceError at 0x009770C0>, <class exceptions.RuntimeError at
0x00964840>)))
 

The exception tuple has the same sort of nesting as your example, and no
matter how deeply nested, the system will find the relevant exception.

Now consider what happens if you used a list here:

>>> ExcGroup1 = [ReferenceError, RuntimeError]
>>> ExcGroup2 = MemoryError, SyntaxError, ExcGroup1
>>> ExcGroup3 = TypeError, ExcGroup2
>>> ExcGroup1.append(ExcGroup3)
>>> ExcGroup3
(<class exceptions.TypeError at 0x009645A0>, (<class exceptions.MemoryError
at 0x00977120>, <class exceptions.SyntaxError at 0x00964A50>, [<class
exceptions.ReferenceError at 0x009770C0>, <class exceptions.RuntimeError at
0x00964840>, (<class exceptions.TypeError at 0x009645A0>, (<class
exceptions.MemoryError at 0x00977120>, <class exceptions.SyntaxError at
0x00964A50>, [...]))]))
>>> try:
...     raise ReferenceError("Hello")
... except ExcGroup3, e:
...     print "Caught", e
...


Traceback (most recent call last):
  File "<pyshell#53>", line 2, in -toplevel-
    raise ReferenceError("Hello")
ReferenceError: Hello


As it currently stands, the exception is not caught because anything in the 
exception specification which is not a tuple is considered to be a 
(potential) exception.

With lists it suddenly becomes possible to have a recursive data structure. 
repr goes to a lot of effort to ensure that it can detect the recursion and 
replace that particular part of the output with ellipses. The exception 
handling code would have to do something similar if it accepted lists and 
that would slow down all exception processing. By only traversing inside 
tuples we can be sure that, even if the data structure as a whole is 
recursive, the part which is traversed is not.

(Note that the repr code also fails to detect the recursive tuple; it isn't
until it hits the list a second time that it spots the recursion.)
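The recursion detection repr performs can be seen directly with a list that contains itself (a minimal illustration):

```python
l = [1, 2]
l.append(l)        # l now contains a reference to itself
text = repr(l)     # repr notices the cycle and substitutes ellipses
# text == '[1, 2, [...]]'
```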


Re: Death to tuples!

2005-11-30 Thread Antoon Pardon
On 2005-11-29, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:

 The question is, should we consider this a problem? Personally, I
 see this as not very different from functions with a list as a default
 argument. In that case we often have a list used as a constant too.

 Yet python doesn't have a problem with mutating this list so that on
 the next call a 'different' list is the default. So if mutating
 a list used as a constant is not a problem there, why should it
 be a problem in your example?
 
 Are you serious about that?

I'm at least serious enough that I do consider the two cases
somewhat equivalent.

 The semantics of default arguments are quite clearly defined (although 
 surprising to some people): the default argument is evaluated once when the 
 function is defined and the same value is then reused on each call.

 The semantics of list constants are also clearly defined: a new list is 
 created each time the statement is executed. Consider:

   res = []
   for i in range(10):
  res.append(i*i)

 If the same list was reused each time this code was executed the list would 
 get very long. Pre-evaluating a constant list and creating a copy each time 
 wouldn't break the semantics, but simply reusing it would be disastrous.

This is not about how things are defined, but about whether we should
consider it a problem if it were defined differently. And no, I am not
arguing python should change this. It would break too much code and would
make python all the more surprising.

But let's just consider. Your above code could simply be rewritten
as follows.

  res = list()
  for i in range(10):
      res.append(i*i)

Personally I think that the two following pieces of code should
give the same result.

  def Foo(l=[]):        def Foo(l):
      ...                   ...

  Foo()                 Foo([])
  Foo()                 Foo([])

Just as I think you want your piece of code to give the same
result as how I rewrote it.

I have a problem understanding people who find that the lower pair
don't have to be equivalent while the upper pair must.

-- 
Antoon Pardon


Re: Death to tuples!

2005-11-30 Thread Duncan Booth
Antoon Pardon wrote:
 But let's just consider. Your above code could simply be rewritten
 as follows.

   res = list()
   for i in range(10):
       res.append(i*i)
 
I don't understand your point here? You want list() to create a new list 
and [] to return the same (initially empty) list throughout the run of the 
program?


 Personally I think that the two following pieces of code should
 give the same result.

   def Foo(l=[]):        def Foo(l):
       ...                   ...

   Foo()                 Foo([])
   Foo()                 Foo([])

 Just as I think you want your piece of code to give the same
 result as how I rewrote it.

 I have a problem understanding people who find that the lower pair
 don't have to be equivalent while the upper pair must.

The left one is equivalent to:

__anon = []
def Foo(l):
   ...

Foo(__anon)
Foo(__anon)

The left has one list created outside the body of the function, the right 
one has two lists created outside the body of the function. Why on earth 
should these be the same?

Or to put it even more simply, it seems that you are suggesting:

__tmp = []
x = __tmp
y = __tmp

should do the same as:

x = []
y = []


Re: Death to tuples!

2005-11-30 Thread Antoon Pardon
On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:
 But let's just consider. Your above code could simply be rewritten
 as follows.

   res = list()
   for i in range(10):
       res.append(i*i)
 
 I don't understand your point here? You want list() to create a new list 
 and [] to return the same (initially empty) list throughout the run of the 
 program?

No, but I think that each occurrence returning the same (initially empty)
list throughout the run of the program would be consistent with how
default arguments are treated.

 Personally I think that the two following pieces of code should
 give the same result.

   def Foo(l=[]):        def Foo(l):
       ...                   ...

   Foo()                 Foo([])
   Foo()                 Foo([])

 Just as I think you want your piece of code to give the same
 result as how I rewrote it.

 I have a problem understanding people who find that the lower pair
 don't have to be equivalent while the upper pair must.

 The left one is equivalent to:

 __anon = []
 def Foo(l):
...

 Foo(__anon)
 Foo(__anon)

So, why shouldn't: 

   res = []
   for i in range(10):
  res.append(i*i)

be equivalent to:

  __anon = list()
  ...

   res = __anon
   for i in range(10):
  res.append(i*i)

 The left has one list created outside the body of the function, the right 
 one has two lists created outside the body of the function. Why on earth 
 should these be the same?

Why on earth should it be the same list, when a function is called
and is provided with a list as a default argument?

I see no reason why your and my question should be answered differently.

 Or to put it even more simply, it seems that you are suggesting:

 __tmp = []
 x = __tmp
 y = __tmp

 should do the same as:

 x = []
 y = []

No, I'm not suggesting it should; I just don't see why it should be
considered a problem if it did the same, given that this is the
kind of behaviour we already have with lists as default arguments.

Why is it a problem when a constant list is mutated in an expression,
but not a problem when a constant list is mutated as a default
argument?

-- 
Antoon Pardon


Re: Death to tuples!

2005-11-30 Thread Duncan Booth
Antoon Pardon wrote:

 The left one is equivalent to:

 __anon = []
 def Foo(l):
...

 Foo(__anon)
 Foo(__anon)
 
 So, why shouldn't: 
 
res = []
for i in range(10):
   res.append(i*i)
 
  be equivalent to:
 
   __anon = list()
   ...
 
res = __anon
for i in range(10):
   res.append(i*i)

Because the empty list expression '[]' is evaluated when the expression 
containing it is executed.

 
 The left has one list created outside the body of the function, the
 right one has two lists created outside the body of the function. Why
 on earth should these be the same?
 
 Why on earth should it be the same list, when a function is called
 and is provided with a list as a default argument?

Because the empty list expression '[]' is evaluated when the 
expression containing it is executed.

 
 I see no reason why your and my question should be answered
 differently. 

We are agreed on that, the answers should be the same, and indeed they are. 
In each case the list is created when the expression (an assigment or a 
function definition) is executed. The behaviour, as it currently is, is 
entirely self-consistent.

I think perhaps you are confusing the execution of the function body with 
the execution of the function definition. They are quite distinct: the 
function definition evaluates any default arguments and creates a new 
function object binding the code with the default arguments and any scoped 
variables the function may have.

If the system tried to delay the evaluation until the function was called 
you would get surprising results as variables referenced in the default 
argument expressions could have changed their values.


Re: Death to tuples!

2005-11-30 Thread Christophe
Antoon Pardon wrote:
 On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote:
 
Antoon Pardon wrote:

But let's just consider. Your above code could simply be rewritten
as follows.

  res = list()
  for i in range(10):
      res.append(i*i)


I don't understand your point here? You want list() to create a new list 
and [] to return the same (initially empty) list throughout the run of the 
program?
 
 
 No, but I think that each occurrence returning the same (initially empty)
 list throughout the run of the program would be consistent with how
 default arguments are treated.

What about this:

def f(a):
    res = [a]
    return res

How can you return the same list that way? Do you propose to make such
a construct illegal?


Re: Death to tuples!

2005-11-29 Thread Bengt Richter
On 27 Nov 2005 23:33:27 -0800, Dan Bishop [EMAIL PROTECTED] wrote:

Mike Meyer wrote:
 It seems that the distinction between tuples and lists has slowly been
 fading away. What we call tuple unpacking works fine with lists on
 either side of the assignment, and iterators on the values side. IIRC,
 apply used to require that the second argument be a tuple; it now
 accepts sequences, and has been deprecated in favor of *args, which
 accepts not only sequences but iterators.

 Is there any place in the language that still requires tuples instead
 of sequences, except for use as dictionary keys?

The % operator for strings.  And in argument lists.

def __setitem__(self, (row, column), value):
   ...

Seems like str.__mod__ could take an arbitrary (BTW, matching length
necessarily? Or just long enough?) iterable in place of a tuple, just like it
can take an arbitrary mapping object in place of a dict for e.g.
'%(name)s' % {'name': 'name value'}
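The tuple-vs-single-value behaviour of % is worth seeing side by side (these still behave the same way in modern Python):

```python
# A tuple is unpacked to fill multiple specifiers:
s1 = "%s and %s" % ("spam", "eggs")        # 'spam and eggs'
# A mapping fills named specifiers:
s2 = "%(name)s" % {"name": "name value"}   # 'name value'
# Any other object -- including a list -- counts as one single value:
s3 = "%s" % [1, 2]                          # '[1, 2]'
```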

Regards,
Bengt Richter


Re: Death to tuples!

2005-11-29 Thread Duncan Booth
Antoon Pardon wrote:

The question is, should we consider this a problem? Personally, I
see this as not very different from functions with a list as a default
argument. In that case we often have a list used as a constant too.

Yet python doesn't have a problem with mutating this list so that on
the next call a 'different' list is the default. So if mutating
a list used as a constant is not a problem there, why should it
be a problem in your example?
 
Are you serious about that?

The semantics of default arguments are quite clearly defined (although 
suprising to some people): the default argument is evaluated once when the 
function is defined and the same value is then reused on each call.

The semantics of list constants are also clearly defined: a new list is 
created each time the statement is executed. Consider:

  res = []
  for i in range(10):
 res.append(i*i)

If the same list was reused each time this code was executed the list would 
get very long. Pre-evaluating a constant list and creating a copy each time 
wouldn't break the semantics, but simply reusing it would be disastrous.
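The contrast can be seen directly (a small sketch in modern syntax; the function names are illustrative):

```python
def append_to(x, acc=[]):     # default list is built once, at "def" time
    acc.append(x)
    return acc

def fresh(x):
    res = []                  # list display: a brand-new list per call
    res.append(x)
    return res

print(append_to(1))   # [1]
print(append_to(2))   # [1, 2] -- the same default list, mutated in place
print(fresh(1))       # [1]
print(fresh(2))       # [2]    -- a fresh list every time
```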
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread Fredrik Lundh
Paddy wrote:

 I would consider
  t =  ([1,2], [3,4])
 to be assigning a tuple with two list elements to t.
 The inner lists will be mutable but I did not know you could change the
 outer tuple and still have the same tuple object.

you can't.

but since hash(tuple) is defined in terms of map(hash, t), the resulting
tuple is not hashable.
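Concretely (a sketch):

```python
t = ([1, 2], [3, 4])        # a tuple you can't rebind elements of...
try:
    hash(t)                 # ...but whose hash fails, because hash of a
except TypeError:           #    tuple is computed from its elements' hashes
    print("unhashable")

u = (1, 2, (3, 4))          # immutable all the way down: hashable
print(hash(u) == hash((1, 2, (3, 4))))
```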

also see:

http://effbot.org/zone/python-hash.htm#tuples

/F



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread [EMAIL PROTECTED]
Having a general frozen() system makes a lot of sense.  People use
tuples for two different things: as a lightweight record type (e.g.,
(x, y) coordinate pairs), and as an immutable list.  The latter is not
officially sanctioned but is widely believed to be the purpose for
tuples.  And the value of an immutable list is obvious: you can use it
as a constant or pass it to a function and know it won't be abused.  So
why not an immutable dictionary too?  There are many times I would have
preferred this if it were available.  frozen() is appealing because it
provides a general solution for all container types.
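A minimal sketch of what such an immutable dictionary could look like (hypothetical; `frozendict` is not a real builtin, and this version assumes all values are themselves hashable):

```python
class frozendict(dict):
    def _blocked(self, *args, **kwargs):
        raise TypeError("frozendict is immutable")
    __setitem__ = __delitem__ = clear = pop = popitem = _blocked
    setdefault = update = _blocked
    def __hash__(self):
        # hash via a frozenset of items, so key order doesn't matter
        return hash(frozenset(self.items()))

d = frozendict(x=1, y=2)
table = {d: "frozendict works as a dict key"}
print(table[frozendict(y=2, x=1)])
```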

I'm not sure tuple should be eliminated, however.  It still has value
to show that these things are a group.  And it's very lightweight to
construct.

-- Mike Orr [EMAIL PROTECTED]

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread Bengt Richter
On 28 Nov 2005 14:48:35 GMT, Duncan Booth [EMAIL PROTECTED] wrote:

Antoon Pardon wrote:

  >>> def func(x):
  ...   if x in [1,3,5,7,8]:
  ...     print 'x is really odd'
  ...
  >>> dis.dis(func)
 ...
    3          20 LOAD_FAST            0 (x)
               23 LOAD_CONST           2 (1)
               26 LOAD_CONST           3 (3)
               29 LOAD_CONST           4 (5)
               32 LOAD_CONST           5 (7)
               35 LOAD_CONST           6 (8)
               38 BUILD_LIST           5
               41 COMPARE_OP           6 (in)
 
 I'm probably missing something, but what would be the problem if this
 list was created during compile time?

Not much in this particular instance. 'x in aList' is implemented as 
aList.__contains__(x), so there isn't any easy way to get hold of the 
list[*] and keep a reference to it. On the other hand:

def func(x):
return x + [1, 3, 5, 7, 8]

we could pass in an object x with an add operator which gets hold of its 
right hand operand and mutates it.

So the problem is that we can't just turn any list used as a constant into 
a constant list, we need to be absolutely sure that the list is used only 
in a few very restricted circumstances, and since there isn't actually any 
benefit to using a list here rather than a tuple it hardly seems 
worthwhile.

There might be some mileage in compiling the list as a constant and copying 
it before use, but you would have to do a lot of timing tests to be sure.

[*] except through f.func_code.co_consts, but that doesn't count.

If we had a way to effect an override of a specific instance's attribute 
accesses
to make certain attribute names act as if they were defined in type(instance), 
and
if we could do this with function instances, and if function local accesses 
would
check if names were one of the ones specified for the function instance being 
called,
then we could define locally named constants etc like properties.

The general mechanism would be that instance.__classvars__ if present would make
instance.x act like instance.__classvars__['x'].__get__(instance). IOW as if 
descriptor
access for descriptors defined in type(instance). Thus if you wrote
instance.__classvars__ = dict(t=property(lambda self, 
t=__import__('time').ctime:t())) 

then that would make instance.t act as if you had assigned the property to 
type(instance).t
-- which is handy for types whose instances don't let you assign to 
type(instance).

This gets around to instances of the function type. Let's say we had a decorator
defined like

def deco(f):
    f.__classvars__ = dict(now=property(lambda f, t=__import__('time').ctime: t()))
    return f

then if function local name access acted like self.name access where self was 
the function
instance[1], and type(self) was checked for descriptors, meaning in this case 
foo.__classvars__
would get checked like type(self).__dict__, then when you wrote

@deco
def foo():
    print now  # => print foo.__classvars__['now'].__get__(foo, type(foo))
               #    (i.e., as if property defined as type(self) attribute where self is foo,
               #    but not accessed via global name as in above illustrative expression)
    now = 42   # => AttributeError: can't set attribute

This would also let you add properties to other unmodifiable types like the 
module type you see
via type(__import__('anymod')), e.g.,

import foomod
foomod.__classvars__ = dict(ver=property(lambda mod: '(Version specially formatted): %r' % mod.version))
# (or foomod could define its own __classvars__)
then
foomod.ver
would act like a property defined in the __classvars__ - extended class 
variable namespace, and return
what a normal property would. Plain functions in foomod.__classvars__ would 
return bound methods with
foomod in self position, so you could call graphics.draw(args) and know that 
draw if so designed
could be defined as def draw(mod, *args): ...

Just another idea ;-)
[1] actually, during a function call, the frame instance is probably more like 
the self, but let's say the name resolution order for local access extends to 
foo.__classvars__ somehow as if that were an overriding front end base class 
dict of the mro chain.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread jepler
On Tue, Nov 29, 2005 at 10:41:13AM +, Bengt Richter wrote:
 Seems like str.__mod__ could take an arbitrary (BTW, matching length, 
 necessarily?
 Or just long enough?) iterable in place of a tuple, just like it can take
 an arbitrary mapping object in place of a dict for e.g. '%(name)s'% 
 {'name':'name value'}

What, and break reams of perfectly working code?

s = set([1, 2, 3])
t = [4, 5, 6]
u = "qwerty"
v = iter([None])
print "The set is: %s" % s
print "The list is: %s" % t
print "The string is: %s" % u
print "The iterable is: %s" % v

Jeff


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Death to tuples!

2005-11-29 Thread Fredrik Lundh
[EMAIL PROTECTED] wrote:

 Having a general frozen() system makes a lot of sense.  People use
 tuples for two different things: as a lightweight record type (e.g.,
 (x, y) coordinate pairs), and as an immutable list.  The latter is not
 officially sanctioned but is widely believed to be the purpose for
 tuples.  And the value of an immutable list is obvious: you can use it
 as a constant or pass it to a function and know it won't be abused.  So
 why not an immutable dictionary too?  There are many times I would have
 preferred this if it were available.  frozen() is appealing because it
 provides a general solution for all container types.

http://www.python.org/peps/pep-0351.html
http://www.python.org/dev/summary/2005-10-16_2005-10-31.html#freeze-protocol

/F



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread Duncan Booth
Dan Bishop wrote:

 Is there any place in the language that still requires tuples instead
 of sequences, except for use as dictionary keys?
 
 The % operator for strings.  And in argument lists.
 
 def __setitem__(self, (row, column), value):
...
 
Don't forget the exception specification in a try..except statement:

 An object is compatible with an exception if it is either the object
 that identifies the exception, or (for exceptions that are classes) it
 is a base class of the exception, or it is a tuple containing an item
 that is compatible with the exception.

Requiring a tuple here means that the code doesn't have to worry about a 
possible recursive data structure.
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Unifying Attributes and Names (was: Re: Death to tuples!)

2005-11-29 Thread Christopher Subich
Bengt Richter wrote:

 If we had a way to effect an override of a specific instance's attribute 
 accesses
 to make certain attribute names act as if they were defined in 
 type(instance), and
 if we could do this with function instances, and if function local accesses 
 would
 check if names were one of the ones specified for the function instance being 
 called,
 then we could define locally named constants etc like properties.
 
 The general mechanism would be that instance.__classvars__ if present would 
 make

Nah... you're not nearly going far enough with this.  I'd suggest a full 
unification of names and attributes.  This would also enhance 
lexical scoping and allow an outer keyword to set values in an outer 
namespace without doing royally weird stuff.

In general, all lexical blocks which currently have a local namespace 
(right now, modules and functions) would have a __namespace__ variable, 
containing the current namespace object.  Operations to get/set/delete 
names would be exactly translated to getattr/setattr/delattrs.  Getattrs 
on a namespace that does not contain the relevant name recurse up the 
chain of nested namespaces, to the global (module) namespace, which will 
raise an AttributeError if not found.

This allows exact replication of current behaviour, with a couple 
interesting twists:
1) i = i+1 with i in only an outer scope actually works now; it uses 
the outer scope i and creates a local i binding.
2) global variables are easily defined by a descriptor:
def global_var(name):
    return property(
        lambda self: getattr(self.global, name),
        lambda (self, v): setattr(self.global, name, v),
        lambda self: delattr(self.global, name),
        "Global variable %s" % name)
3) outer variables under write access (outer x, x = 1) are also 
well-defined by descriptor (exercise left for reader).  No more weird 
machinations involving a list in order to build an accumulator function, 
for example.  Indeed, this is probably the primary benefit.
4) Generally, descriptor-based names become possible, allowing some 
rather interesting features[*]:
i) True constants, which cannot be rebound (mutable objects aside)
   ii) Aliases, such that 'a' and 'b' actually reference the same bit, 
so a = 1 -> b == 1
  iii) Deep references, such that 'a' could be a reference to my_list[4].
   iv) Dynamic variables, such as a now_time that implicitly expands 
to some function.
5) With redefinition of the __namespace__ object, interesting run-time 
manipulations become possible, such as redefining a variable used by a 
function to be local/global/outer.  Very dangerous, of course, but 
potentially very flexible.  One case that comes to mind is a profiling 
namespace, which tracks how often variables are accessed -- 
over-frequented variables might lead to better-optimized code, and 
unaccessed variables might indicate dead code.

[*] -- I'm not saying that any of these examples are particularly good 
ideas; indeed, abuse of them would be incredibly ugly.  It's just that 
these are the first things that come to mind, because they're also so 
related to the obvious use-cases of properties.

The first reaction to this is going to be a definite ew, and I'd 
agree; this would make Python names be non-absolute [then again, the 
__classvars__ suggestion goes nearly as far anyway].  But this 
unification does bring all the power of instance.attribute down to the 
level of local_name.

The single biggest practical benefit is an easy definiton of an outer 
keyword: lexical closures in Python would then become truly on-par with 
use of global variables.  The accumulator example would become:
def make_accum(init):
    i = init
    def adder(j):
        outer i  #[1]
        i += j
        return i
    return adder

[1] -- note, this 'outer' check will have to require that 'i' be defined 
in an outer namespace -at the time the definition is compiled-. 
Otherwise, the variable might have to be created at runtime (as can be 
done now with 'global'), but there's no obvious choice on which 
namespace to create it in: global, or the immediately-outer one?  This 
implies the following peculiar behaviour (but I think it's for the best):

  # no i exists
  def f(): # will error on definition
      outer i
      print i
  def g(): # won't error
      print i
  i = 1
  f()
  g()

Definitely a Py3K proposal, though.
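For reference, the accumulator above can be written today with the list trick the post calls "weird machinations" (a sketch; mutating a one-element list sidesteps the need to rebind an outer name):

```python
def make_accum(init):
    cell = [init]            # a one-element list as a mutable cell
    def adder(j):
        cell[0] += j         # mutate, don't rebind: no 'outer' needed
        return cell[0]
    return adder

acc = make_accum(10)
print(acc(5))    # 15
print(acc(1))    # 16
```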
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread skip

 Actually, no, I hadn't.  I don't use tuples that way.  It's rare when
 I have a tuple whose elements are not all floats, strings or ints,
 and I never put mutable containers in them.

Alex> You never have a dict whose values are lists?  

Sorry, incomplete explanation.  I never create tuples which contain mutable
containers, so I never have the can't use 'em as dict keys and related
problems.  My approach to use of tuples pretty much matches Guido's intent I
think: small, immutable, record-like things.

immutable-all-the-way-down-ly, y'rs,

Skip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread Bengt Richter
On Tue, 29 Nov 2005 09:26:53 -0600, [EMAIL PROTECTED] wrote:



On Tue, Nov 29, 2005 at 10:41:13AM +, Bengt Richter wrote:
 Seems like str.__mod__ could take an arbitrary (BTW, matching length, 
 necessarily?
 Or just long enough?) iterable in place of a tuple, just like it can take
 an arbitrary mapping object in place of a dict for e.g. '%(name)s'% 
 {'name':'name value'}

What, and break reams of perfectly working code?

There you go again ;-) There I went again, d'oh ;-/

s = set([1, 2, 3])
t = [4, 5, 6]
u = "qwerty"
v = iter([None])
print "The set is: %s" % s
print "The list is: %s" % t
print "The string is: %s" % u
print "The iterable is: %s" % v

I guess I could argue for an iterable with a next method in single-arg form
just having its next method called to get successive arguments. If you
wanted the above effect for v, you would have to do the same as you now do to
print a single tuple argument, i.e., you wrap it in a 1-tuple like (v,)

This might even let you define an object that has both __getitem__ and next
methods for mixing formats like '%(first)s %s %(third)s' % map_n_next_object

reminder...

  >>> tup = (1, 2, 3)
  >>> print "The tuple is: %s" % tup
  Traceback (most recent call last):
    File "<stdin>", line 1, in ?
  TypeError: not all arguments converted during string formatting
  >>> print "The tuple is: %s" % tup,
  Traceback (most recent call last):
    File "<stdin>", line 1, in ?
  TypeError: not all arguments converted during string formatting
  >>> print "The tuple is: %s" % (tup,)
  The tuple is: (1, 2, 3)

But, yeah, it is handy to pass a single non-tuple arg.

But iterables as iterators have possibilities too, e.g. maybe
it = iter(seq)
print 'From %s: %s' % (it,)
print 'Unpacked: %s %s' % it # as if (it.next(), it.next())
print 'Next two: %s %s' % it # ditto

You will get StopIteration if you run out of args though, so maybe str.__mod__
should catch that and convert it to its "TypeError: not enough arguments for
format string" when/if it finally happens.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread Mike Meyer
Duncan Booth [EMAIL PROTECTED] writes:
 Dan Bishop wrote:
 Is there any place in the language that still requires tuples instead
 of sequences, except for use as dictionary keys?
 
 The % operator for strings.  And in argument lists.
 
 def __setitem__(self, (row, column), value):
...
 
  Don't forget the exception specification in a try..except statement:

 An object is compatible with an exception if it is either the object
 that identifies the exception, or (for exceptions that are classes) it
 is a base class of the exception, or it is a tuple containing an item
 that is compatible with the exception.

 Requiring a tuple here means that the code doesn't have to worry about a 
 possible recursive data structure.

How so?

except ExceptType1, (ExceptType2, ExceptType3, (ExceptType4, ExceptType5)):

I suspect this won't work, but meets the description.

Lists could be used here with the same restrictions as tuples.

  mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-29 Thread Nicola Larosa
 The new intended use is as an immutable sequence type, not a
 lightweight C struct. The new name to denote this new use -
 following in the footsteps of the set type - is frozenlist. The
 changes to the implementation would be adding any non-mutating methods
 of list to tuple, which appears to mean index and count.

Yeah, I like this! frozenlist is nice, and we get a real immutable
sequence for a change.

-- 
Nicola Larosa - [EMAIL PROTECTED]

She was up the learning curve like a mountain goat.
 -- WasterDave on Slashdot, October 2005

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Sybren Stuvel
Mike Meyer enlightened us with:
 Is there any place in the language that still requires tuples
 instead of sequences, except for use as dictionary keys?

Anything that's an immutable sequence of numbers. For instance, a pair
of coordinates. Or a value and a weight for that value.

 If not, then it's not clear that tuples as a distinct data type
 still serves a purpose in the language. In which case, I think it's
 appropriate to consider doing away with tuples.

I really disagree. There are countless examples where adding or
removing elements from a list just wouldn't be right.

 The new intended use is as an immutable sequence type, not a
 lightweight C struct.

It's the same, really. A lightweight list of elements, where each
element has its own meaning, is both an immutable sequence as well as
a lightweight C struct.

Sybren
-- 
The problem with the world is stupidity. Not saying there should be a
capital punishment for stupidity, but why don't we just take the
safety labels off of everything and let the problem solve itself? 
 Frank Zappa
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Peter Hansen
Mike Meyer wrote:
 It seems that the distinction between tuples and lists has slowly been
 fading away. What we call tuple unpacking works fine with lists on
 either side of the assignment, and iterators on the values side. IIRC,
 apply used to require that the second argument be a tuple; it now
 accepts sequences, and has been deprecated in favor of *args, which
 accepts not only sequences but iterators.
 
 Is there any place in the language that still requires tuples instead
 of sequences, except for use as dictionary keys?

Would it be possible to optimize your frozenlist so that the objects 
would be created during compilation time and rather than only during 
runtime?  If not then tuples() have a distinct performance advantage in 
code like the following where they are used as local constants:

  >>> def func(x):
  ...   if x in (1, 3, 5, 7, 8):
  ...     print 'x is really odd'
  ...
  >>> import dis
  >>> dis.dis(func)

    3          20 LOAD_FAST            0 (x)
               23 LOAD_CONST           8 ((1, 3, 5, 7, 8))
               26 COMPARE_OP           6 (in)

  >>> def func(x):
  ...   if x in [1,3,5,7,8]:
  ...     print 'x is really odd'
  ...
  >>> dis.dis(func)

    3          20 LOAD_FAST            0 (x)
               23 LOAD_CONST           2 (1)
               26 LOAD_CONST           3 (3)
               29 LOAD_CONST           4 (5)
               32 LOAD_CONST           5 (7)
               35 LOAD_CONST           6 (8)
               38 BUILD_LIST           5
               41 COMPARE_OP           6 (in)


-Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Antoon Pardon
Op 2005-11-28, Peter Hansen schreef [EMAIL PROTECTED]:
 Mike Meyer wrote:
 It seems that the distinction between tuples and lists has slowly been
 fading away. What we call tuple unpacking works fine with lists on
 either side of the assignment, and iterators on the values side. IIRC,
 apply used to require that the second argument be a tuple; it now
  accepts sequences, and has been deprecated in favor of *args, which
 accepts not only sequences but iterators.
 
 Is there any place in the language that still requires tuples instead
 of sequences, except for use as dictionary keys?

 Would it be possible to optimize your frozenlist so that the objects 
 would be created during compilation time and rather than only during 
 runtime?  If not then tuples() have a distinct performance advantage in 
 code like the following where they are used as local constants:

  >>> def func(x):
  ...   if x in (1, 3, 5, 7, 8):
  ...     print 'x is really odd'
  ...
  >>> import dis
  >>> dis.dis(func)
 
    3          20 LOAD_FAST            0 (x)
               23 LOAD_CONST           8 ((1, 3, 5, 7, 8))
               26 COMPARE_OP           6 (in)
 
  >>> def func(x):
  ...   if x in [1,3,5,7,8]:
  ...     print 'x is really odd'
  ...
  >>> dis.dis(func)
 
    3          20 LOAD_FAST            0 (x)
               23 LOAD_CONST           2 (1)
               26 LOAD_CONST           3 (3)
               29 LOAD_CONST           4 (5)
               32 LOAD_CONST           5 (7)
               35 LOAD_CONST           6 (8)
               38 BUILD_LIST           5
               41 COMPARE_OP           6 (in)

I'm probably missing something, but what would be the problem if this
list was created during compile time?

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Duncan Booth
Antoon Pardon wrote:

  >>> def func(x):
  ...   if x in [1,3,5,7,8]:
  ...     print 'x is really odd'
  ...
  >>> dis.dis(func)
 
    3          20 LOAD_FAST            0 (x)
               23 LOAD_CONST           2 (1)
               26 LOAD_CONST           3 (3)
               29 LOAD_CONST           4 (5)
               32 LOAD_CONST           5 (7)
               35 LOAD_CONST           6 (8)
               38 BUILD_LIST           5
               41 COMPARE_OP           6 (in)
 
 I'm probably missing something, but what would be the problem if this
 list was created during compile time?

Not much in this particular instance. 'x in aList' is implemented as 
aList.__contains__(x), so there isn't any easy way to get hold of the 
list[*] and keep a reference to it. On the other hand:

def func(x):
return x + [1, 3, 5, 7, 8]

we could pass in an object x with an add operator which gets hold of its 
right hand operand and mutates it.

So the problem is that we can't just turn any list used as a constant into 
a constant list, we need to be absolutely sure that the list is used only 
in a few very restricted circumstances, and since there isn't actually any 
benefit to using a list here rather than a tuple it hardly seems 
worthwhile.

There might be some mileage in compiling the list as a constant and copying 
it before use, but you would have to do a lot of timing tests to be sure.

[*] except through f.func_code.co_consts, but that doesn't count.
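The mutating add operator can be made concrete (a sketch; Sneaky is a hypothetical class):

```python
class Sneaky:
    def __add__(self, other):
        other.append(99)     # mutate the list literal we were handed
        return other

def func(x):
    return x + [1, 3, 5, 7, 8]

print(func(Sneaky()))   # appends 99 to the list built for this call
print(func(Sneaky()))   # same result again, because a new list is built
                        # on every call; a shared compile-time constant
                        # would keep growing instead
```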
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Steven Bethard
Dan Bishop wrote:
 Mike Meyer wrote:
 
Is there any place in the language that still requires tuples instead
of sequences, except for use as dictionary keys?
 
 The % operator for strings.  And in argument lists.
 
 def __setitem__(self, (row, column), value):
...

Interesting that both of these two things[1][2] have recently been 
suggested as candidates for removal in Python 3.0.

[1]http://www.python.org/dev/summary/2005-09-01_2005-09-15.html#string-formatting-in-python-3-0
[2]http://www.python.org/dev/summary/2005-09-16_2005-09-30.html#removing-nested-function-parameters

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Aahz
In article [EMAIL PROTECTED], Mike Meyer  [EMAIL PROTECTED] wrote:

Is there any place in the language that still requires tuples instead
of sequences, except for use as dictionary keys?

If not, then it's not clear that tuples as a distinct data type
still serves a purpose in the language. In which case, I think it's
appropriate to consider doing away with tuples. Well, not really:
just changing their intended use, changing the name to note that, and
tweaking the implementation to conform to this.

There's still the obstacle known as Guido.  I suggest you write a PEP so
that whatever decision gets made, there's a document.
-- 
Aahz ([EMAIL PROTECTED])   * http://www.pythoncraft.com/

If you think it's expensive to hire a professional to do the job, wait
until you hire an amateur.  --Red Adair
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Mike Meyer
Steven Bethard [EMAIL PROTECTED] writes:
 Dan Bishop wrote:
 Mike Meyer wrote:

Is there any place in the language that still requires tuples instead
of sequences, except for use as dictionary keys?
 The % operator for strings.  And in argument lists.
 def __setitem__(self, (row, column), value):
...
 Interesting that both of these two things[1][2] have recently been
 suggested as candidates for removal in Python 3.0.
 [1]http://www.python.org/dev/summary/2005-09-01_2005-09-15.html#string-formatting-in-python-3-0
 [2]http://www.python.org/dev/summary/2005-09-16_2005-09-30.html#removing-nested-function-parameters

#2 I actually mentioned in passing, as it's part of the general
concept of tuple unpacking. When names are bound, you can use a
tuple for an lvalue, and the sequence on the rhs will be unpacked
into the various names in the lvalue:

for key, value in mydict.iteritems(): ...
a, (b, c) = (1, 2), (3, 4)

I think of the parameters of a function as just another case of
this; any solution that works for the above two should work for
function paremeters as well.

 mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Mike Meyer
Peter Hansen [EMAIL PROTECTED] writes:
 Mike Meyer wrote:
 It seems that the distinction between tuples and lists has slowly been
 fading away. What we call tuple unpacking works fine with lists on
 either side of the assignment, and iterators on the values side. IIRC,
 apply used to require that the second argument be a tuple; it now
 accepts sequences, and has been deprecated in favor of *args, which
 accepts not only sequences but iterators.
 Is there any place in the language that still requires tuples instead
 of sequences, except for use as dictionary keys?
 Would it be possible to optimize your frozenlist so that the objects
 would be created during compilation time and rather than only during
 runtime?  If not then tuples() have a distinct performance advantage
 in code like the following where they are used as local constants:

No, because I want to change the definition.

Tuples have the problem that they are immutable, except when they're
not (or for proper values of immutable, your choice). They're
hashable, except when they're not. Or equivalently, they can be used
as dictionary keys - or set elements - except when they can't.

I want frozenlists to *not* be that way. A frozenlist should *always*
be valid as a dictionary key. This changes the semantics significantly
- and means that creating a frozenlist from a list could be very
expensive.

Doing this requires extending the frozen concept, adding two new
types, a new magic method, and maybe a new builtin function.

Right now, a frozenset is always valid as a dictionary key. The concept
extension is to provide facilities to freeze nearly everything.

New types:
frozenlist: a tuple with count/index methods, and the constraint
that all the elements are frozen.
frozendict: A dictionary without __setitem__, and with the constraint
that all values stored in it are frozen.

New magic method: __freeze__(self): returns a frozen version of self.

Possibe new builtin:
freeze(obj) - returns the frozen form of obj. For lists, it
returns the equivalent frozenlist; for a set, it returns the
equivalent frozenset. For dicts, it returns the equivalent
frozendict. For other built-ins, it returns the original object.
For classes written in Python, if the instance has __freeze__, it
returns the result of calling that. Otherwise, if the instance
has __hash__, or does not have __cmp__ and __eq__ (i.e. - if
the class is hashable as is), it returns the instance. If all
of that fails, it throws an exception. While not strictly
required, it appears to be handy, and it makes thinking about
the new types much simpler.
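A rough sketch of the proposed freeze() semantics (my own reading of the description above; plain tuple and frozenset stand in here for the proposed frozenlist and frozendict types, which don't exist):

```python
def freeze(obj):
    """Return a hashable ("frozen") version of obj, recursing into containers."""
    if isinstance(obj, list):
        return tuple(freeze(item) for item in obj)      # stand-in for frozenlist
    if isinstance(obj, (set, frozenset)):
        return frozenset(freeze(item) for item in obj)
    if isinstance(obj, dict):
        # stand-in for frozendict: a frozenset of frozen (key, value) pairs
        return frozenset((freeze(k), freeze(v)) for k, v in obj.items())
    if hasattr(obj, "__freeze__"):                      # the proposed magic method
        return obj.__freeze__()
    hash(obj)    # raises TypeError if obj is not hashable as-is
    return obj

d = {freeze([1, [2, 3]]): "ok"}    # a frozen list is always a valid dict key
```

Unlike tuple(), the result is hashable all the way down, so it can always be used as a dictionary key.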

You can't always create a frozenlist (or frozendict) at compile time. I.e.:

a = []
foo(a)
fa = frozenlist([a])

You don't know at compile time what a is going to be. Since you can't
change the frozenlist after creation, you have to wait until you know
the value of a to create fa.

This version is *not* suitable as an alias for tuple. It might work as
a replacement for tuple in Py3K, but a number of issues have to be
worked out - most notably tuple packing. % on strings is easy - you
make it accept either a list or a frozenlist (or, if we're going to
change it, maybe an arbitrary sequence). But tuple unpacking would need
some real work. Do we blow off the comma syntax for creating tuples,
and force people to write real lists? Doesn't seem like a good idea to
me.

mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Death to tuples!

2005-11-28 Thread skip

Mike> Tuples have the problem that they are immutable, except when
Mike> they're not (or for proper values of immutable, your
Mike> choice). They're hashable, except when they're not. Or
Mike> equivalently, they can be used as dictionary keys - or set
Mike> elements - except when they can't.

For those of us not following this thread closely, can you identify cases
where tuples are mutable, not hashable or can't be used as dictionary keys?
I've never encountered any such cases.

Skip


Re: Death to tuples!

2005-11-28 Thread Paul Rubin
[EMAIL PROTECTED] writes:
 For those of us not following this thread closely, can you identify cases
 where tuples are mutable, not hashable or can't be used as dictionary keys?
 I've never encountered any such cases.

t = ([1,2], [3,4])


Re: Death to tuples!

2005-11-28 Thread Aahz
In article [EMAIL PROTECTED],
Paul Rubin  http://[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:

 For those of us not following this thread closely, can you identify
 cases where tuples are mutable, not hashable or can't be used as
 dictionary keys?  I've never encountered any such cases.

t = ([1,2], [3,4])

Exactly.  Technically, the tuple is still immutable, but because it
contains mutable objects, it is no longer hashable and cannot be used as
a dict key.  One of the odder bits about this usage is that augmented
assignment breaks, but not completely:

>>> t = ([1,2], [3,4])
>>> t[0] += [5]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support item assignment
>>> t
([1, 2, 5], [3, 4])

(I'm pretty sure Skip has seen this before, but I figure it's a good
reminder.)
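The half-broken behaviour follows from how augmented assignment expands: `t[0] += [5]` runs `t[0] = t[0].__iadd__([5])`, and the list mutates itself in place before the tuple rejects the assignment back. A sketch:

```python
t = ([1, 2], [3, 4])
inner = t[0]
try:
    t[0] += [5]    # list.__iadd__ extends in place and returns the same list...
except TypeError:
    pass           # ...then t.__setitem__(0, ...) fails: tuples are immutable
assert inner == [1, 2, 5]          # the mutation already happened
assert t == ([1, 2, 5], [3, 4])    # so the tuple's contents changed anyway
```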
-- 
Aahz ([EMAIL PROTECTED])   * http://www.pythoncraft.com/

If you think it's expensive to hire a professional to do the job, wait
until you hire an amateur.  --Red Adair


Re: Death to tuples!

2005-11-28 Thread Paddy
I would consider
 t =  ([1,2], [3,4])
to be assigning a tuple with two list elements to t.
The inner lists will be mutable but I did not know you could change the
outer tuple and still have the same tuple object.

- Pad.



Re: Death to tuples!

2005-11-28 Thread Paul Rubin
Paddy [EMAIL PROTECTED] writes:
 I would consider
  t =  ([1,2], [3,4])
 to be assigning a tuple with two list elements to t.
 The inner lists will be mutable but I did not know you could change the
 outer tuple and still have the same tuple object.

Whether t is mutable is a question of definitions.  Either way, you
can't hash t or use it as a dictionary key.


Re: Death to tuples!

2005-11-28 Thread Mike Meyer
[EMAIL PROTECTED] writes:
 Mike> Tuples have the problem that they are immutable, except when
 Mike> they're not (or for proper values of immutable, your
 Mike> choice). They're hashable, except when they're not. Or
 Mike> equivalently, they can be used as dictionary keys - or set
 Mike> elements - except when they can't.
 For those of us not following this thread closely, can you identify cases
 where tuples are mutable, not hashable or can't be used as dictionary keys?
 I've never encountered any such cases.

Actually, that didn't come from this thread. But it happens if one of
the elements in the tuple is mutable. For instance:

>>> t = [],
>>> type(t)
<type 'tuple'>
>>> t == ([],)
True
>>> hash(t)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: list objects are unhashable
>>> t[0].append(1)
>>> t == ([],)
False
>>> {t: 23}
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: list objects are unhashable
>>>

For builtins, the three cases - hashable, immutable and usable as
dictionary keys - are all identical. Instances of Python classes with
proper __hash__ and __eq__ definitions will be hashable and usable as
dictionary keys. Immutable for them is a messy question.
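To illustrate the messiness, here is a hypothetical class that is hashable, and hence a valid dict key, while remaining freely mutable:

```python
class Point:
    """Hashable and usable as a dict key, yet technically mutable."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
    def __hash__(self):
        return hash((self.x, self.y))

key = Point(1, 2)
d = {key: "home"}
assert d[Point(1, 2)] == "home"   # works: equal value, equal hash
key.x = 99                        # nothing prevents mutation...
assert Point(1, 2) not in d       # ...and now the entry is unreachable:
assert Point(99, 2) not in d      # it sits under the old hash bucket
```

Mutating a key after insertion strands the entry under its stale hash, which is why hashable and immutable are usually expected to coincide.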

mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Death to tuples!

2005-11-28 Thread skip

 For those of us not following this thread closely, can you identify
 cases where tuples are mutable, not hashable or can't be used as
 dictionary keys?  I've never encountered any such cases.
 
 t = ([1,2], [3,4])

...

 >>> t = ([1,2], [3,4])
 >>> t[0] += [5]
aahz> Traceback (most recent call last):
aahz>   File "<stdin>", line 1, in ?
aahz> TypeError: object doesn't support item assignment
 >>> t
aahz> ([1, 2, 5], [3, 4])

aahz> (I'm pretty sure Skip has seen this before, but I figure it's a
aahz> good reminder.)

Actually, no, I hadn't.  I don't use tuples that way.  It's rare when I have
a tuple whose elements are not all floats, strings or ints, and I never put
mutable containers in them.

Skip


Re: Death to tuples!

2005-11-28 Thread Peter Hansen
Antoon Pardon wrote:
 On 2005-11-28, Peter Hansen [EMAIL PROTECTED] wrote:
Mike Meyer wrote:
Is there any place in the language that still requires tuples instead
of sequences, except for use as dictionary keys?

Would it be possible to optimize your frozenlist so that the objects 
would be created during compilation time and rather than only during 
runtime?  If not then tuples() have a distinct performance advantage in 
code like the following where they are used as local constants:
[snip code]
 I'm probably missing something, but what would be the problem if this
 list was created during compile time?

??  I was *asking* if there would be a problem doing that, not saying 
there would be a problem.  I think you misread or misunderstood something.

-Peter



Re: Death to tuples!

2005-11-28 Thread Steven Bethard
Mike Meyer wrote:
 Steven Bethard [EMAIL PROTECTED] writes:
 
Dan Bishop wrote:

Mike Meyer wrote:


Is there any place in the language that still requires tuples instead
of sequences, except for use as dictionary keys?

The % operator for strings.  And in argument lists.
def __setitem__(self, (row, column), value):
   ...

Interesting that both of these two things[1][2] have recently been
suggested as candidates for removal in Python 3.0.
[1]http://www.python.org/dev/summary/2005-09-01_2005-09-15.html#string-formatting-in-python-3-0
[2]http://www.python.org/dev/summary/2005-09-16_2005-09-30.html#removing-nested-function-parameters
 
 #2 I actually mentioned in passing, as it's part of the general
 concept of tuple unpacking. When names are bound, you can use a
 tuple for an lvalue, and the sequence on the rhs will be unpacked
 into the various names in the lvalue:
 
 for key, value = mydict.iteritems(): ...
 a, (b, c) = (1, 2), (3, 4)
 
 I think of the parameters of a function as just another case of
 this; any solution that works for the above two should work for
 function parameters as well.

The difference is that currently, you have to use tuple syntax in 
functions, while you have your choice of syntaxes with normal unpacking::

py> def f(a, (b, c)):
...     pass
...
py> def f(a, [b, c]):
...     pass
...
Traceback (most recent call last):
  File "<interactive input>", line 1
    def f(a, [b, c]):
             ^
SyntaxError: invalid syntax
py> a, (b, c) = (1, 2), (3, 4)
py> a, [b, c] = (1, 2), (3, 4)
py> a, [b, c] = [1, 2], (3, 4)
py> a, [b, c] = [1, 2], [3, 4]

Of course, the result in either case is still a tuple.  So I do agree 
that Python doesn't actually require tuples in function definitions; 
just their syntax.

STeVe


Re: Death to tuples!

2005-11-28 Thread Alex Martelli
Mike Meyer [EMAIL PROTECTED] wrote:
   ...
 concept of tuple unpacking. When names are bound, you can use a
 tuple for an lvalue, and the sequence on the rhs will be unpacked
 into the various names in the lvalue:
 
 for key, value = mydict.iteritems(): ...
 a, (b, c) = (1, 2), (3, 4)
 
 I think of the parameters of a function as just another case of
 this; any solution that works for the above two should work for
 function parameters as well.

I agree that conceptually the situations are identical, it's just that
currently using [ ] on the left of an equal sign is OK, while using them
in a function's signature is a syntax error.  No doubt that part of the
syntax could be modified (expanded), I imagine that nothing bad would
follow as a consequence.


Alex


Re: Death to tuples!

2005-11-28 Thread Alex Martelli
[EMAIL PROTECTED] wrote:
   ...
 Actually, no, I hadn't.  I don't use tuples that way.  It's rare when I have
 a tuple whose elements are not all floats, strings or ints, and I never put
 mutable containers in them.

You never have a dict whose values are lists?  Or never call .items (or
.iteritems, or popitem, ...) on that dict?  I happen to use dicts with
list values often, and sometimes use the mentioned methods on them...
and said methods will then return one or more tuples with a mutable
container in them.  I've also been known to pass lists as arguments to
functions (if the functions receives arguments with *args, there you are
again: args is then a tuple with mutable containers in it), use
statements such as:
return 1, 2, [x+1 for x in wah]
which also build such tuples, and so on, and so forth... tuples get
created pretty freely in many cases, after all.


Alex
 


Re: Death to tuples!

2005-11-28 Thread Rikard Bosnjakovic
Steven Bethard wrote:

 [1]http://www.python.org/dev/summary/2005-09-01_2005-09-15.html#string-formatting-in-python-3-0
  

Reading this link, I find this:

Currently, using % for string formatting has a number of inconvenient 
consequences:

 * precedence issues: "a%sa" % "b"*4 produces 'abaabaabaaba', not
'abbbba'  [...]


Testing it locally:

  >>> "a%sa" % "b"*4
'abaabaabaaba'

But "b"*4 is not a tuple, as the formatting arguments are supposed to be,
so why blame it for the erroneous behaviour?

Works fine this way:

  >>> "a%sa" % ("b"*4,)
'abbbba'

So I really do not see the problem. For my own format-strings, I always 
add a final comma to make sure it's a tuple I'm using.

Just my $0.02.


-- 
Sincerely,  |http://bos.hack.org/cv/
Rikard Bosnjakovic  | Code chef - will cook for food



Re: Death to tuples!

2005-11-28 Thread bonono

Rikard Bosnjakovic wrote:
 Steven Bethard wrote:

  [1]http://www.python.org/dev/summary/2005-09-01_2005-09-15.html#string-formatting-in-python-3-0

 Reading this link, I find this:

 Currently, using % for string formatting has a number of inconvenient
 consequences:

  * precedence issues: "a%sa" % "b"*4 produces 'abaabaabaaba', not
 'abbbba'  [...]


 Testing it locally:

   >>> "a%sa" % "b"*4
 'abaabaabaaba'

 But "b"*4 is not a tuple, as the formatting arguments are supposed to be,
 so why blame it for the erroneous behaviour?

 Works fine this way:

   >>> "a%sa" % ("b"*4,)
 'abbbba'

 So I really do not see the problem. For my own format-strings, I always
 add a final comma to make sure it's a tuple I'm using.

 Just my $0.02.

this doesn't look like a tuple issue, but one of operator precedence.

"a%sa" % ("b"*4)

also gives the expected answer, 'abbbba'.
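The precedence point can be checked directly; % and * bind equally tightly and group left to right (b is assumed here to be the string "b"):

```python
b = "b"
# "a%sa" % b * 4 parses as ("a%sa" % b) * 4, hence the repetition
assert "a%sa" % b * 4 == "abaabaabaaba"
# parenthesising the right operand gives the intended single substitution
assert "a%sa" % (b * 4) == "abbbba"
```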



Re: Death to tuples!

2005-11-28 Thread Antoon Pardon
On 2005-11-28, Duncan Booth [EMAIL PROTECTED] wrote:
 Antoon Pardon wrote:

  >>> def func(x):
  ...     if x in [1,3,5,7,8]:
  ...         print 'x is really odd'
  ...
  >>> dis.dis(func)
 ...
3  20 LOAD_FAST0 (x)
   23 LOAD_CONST   2 (1)
   26 LOAD_CONST   3 (3)
   29 LOAD_CONST   4 (5)
   32 LOAD_CONST   5 (7)
   35 LOAD_CONST   6 (8)
   38 BUILD_LIST   5
   41 COMPARE_OP   6 (in)
 
 I'm probably missing something, but what would be the problem if this
 list was created during compile time?

 Not much in this particular instance. 'x in aList' is implemented as 
 aList.__contains__(x), so there isn't any easy way to get hold of the 
 list[*] and keep a reference to it. On the other hand:

 def func(x):
 return x + [1, 3, 5, 7, 8]

 we could pass in an object x with an add operator which gets hold of its 
 right hand operand and mutates it.

 So the problem is that we can't just turn any list used as a constant into 
 a constant list, we need to be absolutely sure that the list is used only 
 in a few very restricted circumstances, and since there isn't actually any 
 benefit to using a list here rather than a tuple it hardly seems 
 worthwhile.

The question is, should we consider this a problem. Personally, I
see this as not very different from functions with a list as a default
argument. In that case we often have a list used as a constant too.

Yet Python doesn't have a problem with mutating this list so that on
the next call a 'different' list is the default. So if mutating
a list used as a constant is not a problem there, why should it
be a problem in your example?
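The default-argument behaviour referred to above can be demonstrated in a couple of lines (a minimal sketch):

```python
def append_to(item, bucket=[]):    # the default list is built once, at def time
    bucket.append(item)
    return bucket

assert append_to(1) == [1]
assert append_to(2) == [1, 2]      # the same list object persists across calls
```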

-- 
Antoon Pardon


Death to tuples!

2005-11-27 Thread Mike Meyer
It seems that the distinction between tuples and lists has slowly been
fading away. What we call tuple unpacking works fine with lists on
either side of the assignment, and iterators on the values side. IIRC,
apply used to require that the second argument be a tuple; it now
accepts sequences, and has been deprecated in favor of *args, which
accepts not only sequences but iterators.

Is there any place in the language that still requires tuples instead
of sequences, except for use as dictionary keys?

If not, then it's not clear that tuples as a distinct data type still
serves a purpose in the language. In which case, I think it's
appropriate to consider doing away with tuples. Well, not really: just
changing their intended use, changing the name to note that, and
tweaking the implementation to conform to this.

The new intended use is as an immutable sequence type, not a
lightweight C struct. The new name to denote this new use -
following in the footsteps of the set type - is frozenlist. The
changes to the implementation would be adding any non-mutating methods
of list to tuple, which appears to mean index and count.

Removing the tuple type is clearly a Py3K action. Adding frozenlist
could be done now. Whether or not we could make tuple an alias for
frozenlist before Py3K is an interesting question.

   mike
-- 
Mike Meyer [EMAIL PROTECTED]  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Death to tuples!

2005-11-27 Thread Paul Rubin
Mike Meyer [EMAIL PROTECTED] writes:
 The new intended use is as an immutable sequence type, not a
 lightweight C struct. The new name to denote this new use -
 following in the footsteps of the set type - is frozenlist. The
 changes to the implementation would be adding any non-mutating methods
 of list to tuple, which appears to mean index and count.

I like this.  I also note that tuples predated classes.  The
appropriate way to do a C struct these days is often as a class
instance with __slots__.
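A sketch of that style of struct replacement, with illustrative names (the explicit object base is needed for __slots__ to take effect in the Python of that era):

```python
class RGB(object):
    __slots__ = ("r", "g", "b")    # fixed attribute set, no per-instance __dict__
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

c = RGB(255, 128, 0)
assert (c.r, c.g, c.b) == (255, 128, 0)
try:
    c.alpha = 1.0                  # attributes outside __slots__ are rejected
except AttributeError:
    pass
```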


Re: Death to tuples!

2005-11-27 Thread Dan Bishop
Mike Meyer wrote:
 It seems that the distinction between tuples and lists has slowly been
 fading away. What we call tuple unpacking works fine with lists on
 either side of the assignment, and iterators on the values side. IIRC,
 apply used to require that the second argument be a tuple; it now
 accepts sequences, and has been deprecated in favor of *args, which
 accepts not only sequences but iterators.

 Is there any place in the language that still requires tuples instead
 of sequences, except for use as dictionary keys?

The % operator for strings.  And in argument lists.

def __setitem__(self, (row, column), value):
   ...
