> Do people think it would be too radical if the built-in open()
> function was removed altogether, requiring all code that opens files
> to import the io module first? This would make it easier to identify
> modules that engage in I/O.
+1.
Presumably you can still write to the standard input, ou
> I got déjà vu here :)
> http://mail.python.org/pipermail/python-3000/2007-April/007045.html
>
> and Guido's answer:
> http://mail.python.org/pipermail/python-3000/2007-April/007063.html
Well yes, but if it's done at the lexical level, the INDENT and DEDENT
tokens don't exist.
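For the record, the INDENT/DEDENT tokens under discussion are easy to observe with today's `tokenize` module; a minimal sketch:

```python
import io
import tokenize

# Tokenize a tiny block: the lexer turns indentation changes into
# explicit INDENT and DEDENT tokens.
src = "if x:\n    y = 1\nz = 2\n"
names = [tokenize.tok_name[tok.type]
         for tok in tokenize.generate_tokens(io.StringIO(src).readline)]
print(names)
```

The token stream includes an INDENT before `y = 1` and a DEDENT before `z = 2`, which is exactly what a lexical-level rule would have to produce or suppress.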
___
> Too dangerous. The most common Python syntax error (by far, even for
> experienced users) is omission of a colon. If the missing colon starts
> to have its own special meaning, that would not be a good thing.
It's not special -- omitting it would have exactly the same effect as
omitting a colon.
Yes, I have read Swift :-) And in that spirit, I don't know whether to take
this proposal seriously because it's kind of radical. Nevertheless, here
goes...
It has occurred to me that as Python stands today, an indent always begins
with a colon. So in principle, we could define anything that lo
> Incidentally, I know one Python programmer who writes list literals
> like this:
>
> mylist = [
>       1
>     , 2
>     , 3
>     ]
>
> In a fixed-width font, the commas and brackets are all in the same
> column. While "bleech" is the proper reaction ;-), that does work
> It would also change the meaning of existing valid programs such as:
>
>     x = 1,
>     y()
This is the strongest argument against the idea that I've seen so far.
It could be solved by *not* treating , as an operator, and by keeping the
open bracket rule.
> OTOH, the "open bracket" rule is certainly sufficient by itself, and
> is invaluable for writing "big" list, tuple, and dict literals (things
> I doubt come up in Andrew's EFL inspiration).
If comma is treated as an operator, the "open bracket" rule doesn't seem all
that invaluable to me. Can y
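For reference, the "open bracket" rule being debated is the existing behavior where the lexer joins physical lines inside any unclosed bracket pair; a minimal sketch:

```python
# Inside an unclosed (), [] or {}, the tokenizer joins physical lines,
# so multi-line literals and calls need no backslash continuation.
total = sum([
    1,
    2,
    3,
])
coords = (1,
          2)
print(total, coords)  # → 6 (1, 2)
```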
> I am worried that (as no indent is required on the next line) it will
> accidentally introduce legal interpretations for certain common (?)
> typos, e.g.
>     x = y+    # Used to be y+1, the 1 got dropped
>     f(x)
A reasonable worry. It could still be solved at the lexical level by
requiring ev
Looking at PEP-3125, I see that one of the rejected alternatives is to allow
any unfinished expression to indicate a line continuation.
I would like to suggest a modification to that alternative that has worked
successfully in another programming language, namely Stu Feldman's EFL. EFL
is a langu
> I'm not proposing to remove the feature, however I'd like to see an
> alternative method for declaring statements that cross a line boundary.
> I seem to recall at one point someone suggesting the use of ellipsis,
> which makes a lot of sense to me:
>
> sorted_result = partition_lower( input
> FWIW, I always liked the `parameter passing is assignment` semantics
> of Python. I sure hope nobody is going to start a crus^H^H^H^HPEP to
> remove tuple unpacking in general from the language!
Isn't the point of this discussion that it is already gone?
___
> "contract" is a better term, IMO, since it's already used in CS (as in
> Eiffel), and describes the situation more correctly: *behavior* rather
> than *signature*.
> "ability" just doesn't seem right to me: my class is not *able* to be a
> set,
> it *behaves* like a set. it follows the set contract.
> Hm, I think it would be fine if there *was* no distinction. IOW if
>
> def foo(a: None) -> None: pass
>
> was indistinguishable from
>
> def foo(a): pass
>
> In fact I think I'd prefer it that way. Having an explicit way to say
> "no type here, move along" sounds fine.
I'd like to urge again
> I believe this example captures an important requirement that has been
> expressed both by Bill Janssen and by Andrew Koenig (needless to say I
> agree): abilities should be able to imply contracts that go beyond a
> collection of methods (without necessarily being able to e
> More of a tagging approach, then?
>
> Something like
>
> class MyNewClass(ExistingClass,
>                  OtherInterfaceIJustNoticedExistingClassImplements):
>     pass
>
> ?
Maybe, maybe not. I'm not sure. I'm thinking that it may be useful to be
able somehow to assert that pre-existing class C has pr
> I think it would also be great if we had "ability
> algebra" whereby you could state that a given ability A is composed of
> existing abilities A1 and A2, and every object or class that already
> has A1 and A2 will automatically be considered to have A.
Yes!
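One way such "ability algebra" could be sketched in today's Python is an ABC whose subclass hook checks for the component abilities. `SizedIterable` is a hypothetical name, and the "abilities" here are simply the presence of `__len__` and `__iter__`:

```python
from abc import ABCMeta

# Sketch: the composed ability SizedIterable is defined as the
# conjunction of two existing abilities (__len__ and __iter__).
# Any class providing both is automatically considered to have it.
class SizedIterable(metaclass=ABCMeta):
    @classmethod
    def __subclasshook__(cls, C):
        if cls is SizedIterable:
            has_len = any("__len__" in B.__dict__ for B in C.__mro__)
            has_iter = any("__iter__" in B.__dict__ for B in C.__mro__)
            if has_len and has_iter:
                return True
        return NotImplemented

print(issubclass(list, SizedIterable))  # → True: lists have both
print(issubclass(int, SizedIterable))   # → False: ints have neither
```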
> However, I do *not* want to go the
> Andrew Koenig writes:
> > For example, I can imagine a single interface having multiple
> > abilities.
> Perhaps because it inherits from multiple sub-interfaces?
Or perhaps because after the interface was defined, someone noticed that it
happened to have those abilities and
> What does it add to have to declare a class as being "Iterable", if it
> already implements __iter__? What does the notion of "Iterable" add to
> the execution *or* understanding of the code?
Let's suppose for the sake of argument that declaring a class as being
"Iterable" adds nothing. What d
> Hm, I would think it extends very naturally to instance variables, and
> I see no reason why it's a particularly bad idea to also extend it
> (mostly in our minds anyway, I expect) to describe certain behaviors
> that cannot be expressed (easily) in code.
I think of an ability as a property of a
> I'll try to make an argument for "Interface" over "Ability" using
> examples from Zope.
> It seems to me that not all interfaces coincide with something the
> object can _do_. Some speak to what can be done _to_ an object:
When I see the word "interface", I think of the collection of method calls.
> Both 'ability' and 'interface' imply (to me, anyway) that the class
> being inspected is an actor, that it 'does something' rather than being
> operated on.
I chose 'ability' because to me it doesn't require that the class being
inspected is active by itself. For example, it feels natural to me
> I believe Ka-Ping once proposed something similar. This also jibes
> nicely with the "verify" functionality that I mentioned. However, I
> don't know how easy it will be to write such compliance tests given
> that typically the constructor signature is not part of the ability.
> It may be more pr
> It strikes me that one aspect of "being Pythonic" is a strong reliance
> on politeness: that's what duck-typing is all about.
Part of my motivation for entering this discussion is that C++ templates use
duck-typing, and the C++ community has for several years been thinking about
how to add more
> I'm not sure I said anything Guido didn't already say, but I wanted to
> make the distinction between *enforcing* correct behavior and
> *communicating* correct behavior. Java (and Zope, apparently) simply
> have a more formal way of doing this. Stock Python doesn't.
What he said!
___
> This looks like you're rediscovering or reinventing Java- or
> Zope-style interfaces: those are not passed on via inheritance but via
> a separate "implements" clause.
I don't think I'm quite reinventing Java interfaces. In Java, where just
about everything is a method, it's temptingly easy to
> The only completely accurate semantic check is to run
> the program and see if it produces the right result.
If that were possible, we could solve the halting problem :-)
> In other words, don't LBYL, but Just Do It, and use
> unit tests.
A fine idea when it's possible. Unfortunately, it's not always possible.
> That's not what I would say. I would say that you
> should design your code so that you don't *need*
> to find out whether it's an iterator or not. Instead,
> you just require an iterator, always, and trust that
> the calling code will give you one. If it doesn't,
> your unit tests will catch that.
"iterator", then every iterator is also able to signal the
presence of an ability--and we most definitely do not want that!
In other words, whether or not we choose to define a family of types that
stand for particular abilities, I think we shoul
> Static != non-duck.
> One could imagine static duck typing (is it the same as structural
> typing?) with type inference. I wonder if some existing languages have
> static duck typing (boo? haskell?).
C++ (using templates).
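As a point of comparison, Python itself later grew a form of static duck typing in `typing.Protocol` (structural typing checked by static type checkers, optionally at runtime too). `Duck` and `Mallard` are invented names for the sketch:

```python
from typing import Protocol, runtime_checkable

# Structural ("duck") typing: any class with a matching quack() method
# conforms, with no inheritance or declaration required.
@runtime_checkable
class Duck(Protocol):
    def quack(self) -> str: ...

class Mallard:
    def quack(self) -> str:
        return "quack"

print(isinstance(Mallard(), Duck))  # → True
```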
___
> Duck typing is a seriously bad idea
Why?
___
Python-3000 mailing list
Python-3000@python.org
http://mail.python.org/mailman/listinfo/python-3000
> Being technical hacker types, we can cope with describing the ins and
> outs of how our code works, but are less sure on the motivations for
> stackless-style technologies as used in real-world applications :)
Continuations. Which in turn are useful in multiuser interactive
applications, among others.
> However, I also realize that requiring every access to a class variable
> to instantiate a new method object is expensive, to say the least.
Why does every access to a class variable have to instantiate a new method
object?
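The cost being asked about can be observed directly: in CPython, each attribute access on an instance builds a fresh bound-method object through the descriptor protocol. A small sketch (`A` is an arbitrary class):

```python
class A:
    def f(self):
        pass

a = A()
# Each access to a.f creates a new bound-method object wrapping A.f.
print(a.f is a.f)           # → False: two distinct method objects
print(a.f.__func__ is A.f)  # → True: both wrap the same function
```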
___
> From a code style perspective, I've always felt that the magical
> __underscore__ names should not be referred to outside of the class
> implementing those names. The double underscores are an indication that
> this method or property is in most normal use cases referred to
> implicitly by use ra
> Guido van Rossum wrote:
> > So how about we change callable() and add hashable(), iterable() and
> > whatever else makes sense so that these all become like this:
> >
> > def callable(x):
> >     return getattr(x, "__call__", None) is not None
> This *still* doesn't fully solve the problem in
> While this makes sense from the perspective you mention, paraphrased
> as "different objects have different capabilities, and I want to query
> what capabilities this object has," I'm not convinced that any but the
> most uncommon uses involve enumerating the capabilities of an object.
> And thos
> > I felt uncomfortable about exposing the implementation that way
> ...but double-underscore methods are part of the language definition,
> not part of the implementation.
Yes and no. For example, it appears to be part of the language definition
that foo() is equivalent to foo.__call__(), but
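One place the equivalence visibly breaks down in today's CPython: the call syntax looks `__call__` up on the type, not on the instance. A small sketch:

```python
class C:
    pass

c = C()
c.__call__ = lambda: "instance attribute"  # attached to the instance only

print(c.__call__())  # explicit attribute lookup finds it
try:
    c()  # implicit call: __call__ is looked up on type(c), so this fails
except TypeError:
    print("c() raised TypeError")
```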
> > I for one have gotten along quite happily without
> > ever wanting to test for hashability.
> Me too, as said elsewhere. This whole thread seems like a purely
> theoretical debate.
That may be because it has departed from its original intent.
I started the thread because I wanted to call att
> With your CD example, you need an external resource (the CD itself) in
> order to calculate the hash - in that case, you can't safely defer
> the hash calculation until the first time you know you need it,
> since you don't know whether or not you'll have access to the
> physical CD at that point.
> All of which is a long-winded way of saying "calculation of an object hash
> should be both cheap and idempotent" :)
Actually, I disagree -- I don't see why there's anything wrong with a hash
being expensive to calculate the first time you do it.
For example, consider a string type in which the
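The truncated example above appears to head toward a lazily computed, cached hash. A sketch of that idea under my own assumptions: `BigString` is a hypothetical class, and `hash(self._data)` stands in for the expensive first-time computation:

```python
class BigString:
    """String-like type whose hash is expensive the first time
    and cached afterwards; later calls are cheap and idempotent."""
    def __init__(self, data):
        self._data = data
        self._hash = None

    def __hash__(self):
        if self._hash is None:
            self._hash = hash(self._data)  # stand-in for an expensive scan
        return self._hash

    def __eq__(self, other):
        return isinstance(other, BigString) and self._data == other._data

s = BigString("x" * 1000)
print(hash(s) == hash(s))  # → True: same value on every call
```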
> Which case are you considering? In case 1, __hash__ of the parent object
> is not idempotent because it depends on whether __hash__ of the child
> object has a cached value or not.
I don't see why you should say that; it's certainly not my intent. What did
I say that makes you think that that d
> In both cases, __hash__ is not idempotent, and is thus an abomination.
Why do you say it's not idempotent? The first time you call it, either it
works or it doesn't. If it doesn't work, then you shouldn't have called it
in the first place. If it does work, all subsequent calls will return the same value.
> A __hash__ method with side effects is not formally prohibited in the
> documentation but nevertheless, a __hash__ which is not idempotent is
> an abomination.[1] Thus, there is no need for a test of whether __hash__
> will succeed: just try it.
> [1] I subtly switched language from "side effects"
> But you've just pointed out that they're *not*
> the same kind of concept, no matter how much
> you might wish that there were.
> The only way to make hashability testable at
> less cost than attempting to do it would be
> to have a separate __is_hashable__ method for
> that purpose, which would
> > That would be at odds with the approach taken with
> > list.__hash__ which has to be called in order to find-out it is not
> > hashable.
> That feature of __hash__ is just an unfortunate necessity.
> It arises because hashability is sometimes a "deep"
> property (i.e. it depends on the hashability of the contained objects).
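The "deep" property is easy to demonstrate with tuples, whose hashability depends on their elements:

```python
# Hashability is "deep": a tuple is hashable only when every element is,
# so the only reliable test is to attempt the hash and catch TypeError.
print(hash((1, 2)))  # works: both elements are hashable
try:
    hash((1, [2]))   # the list inside makes the tuple unhashable
except TypeError as e:
    print("unhashable:", e)
```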
> Then it becomes a matter of whether it's worth having callable()
> around as an alternative spelling. Those arguing in favour of
> it would have to explain whether we should also have addable(),
> subtractable(), multipliable(), indexable(), etc. etc. etc...
I'd love to be able to determine wheth
> Andrew Koenig wrote:
> > I note in PEP 3000 the proposal to remove callable(), with the comment
> "just call the object and catch the exception."
> I think that should be amended to "just use hasattr(obj, '__call__')
> instead". That's
> Agreed. I think the people who want to use this as a test for whether
> a client passed them a usable object are barking up the wrong tree.
> What I do see it as useful for is making an api that accepts a
> foo-like-object, or a callable object that returns a foo-like-object.
Yes. What really g
I note in PEP 3000 the proposal to remove callable(), with the comment "just
call the object and catch the exception."
I think that's a bad idea, because it takes away the ability to separate the
callability test from the first call. As a simple example, suppose you're
writing a function that you
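A guess at the kind of situation meant here: validating a callback up front, before starting work that would be expensive to unwind. `process` and `on_each` are invented names for the sketch:

```python
# Separate the callability test from the first call: fail fast on a bad
# callback instead of discovering the problem partway through the work.
def process(records, on_each):
    if not callable(on_each):
        raise TypeError("on_each must be callable")
    return [on_each(r) for r in records]

print(process([1, 2, 3], lambda r: r * 2))  # → [2, 4, 6]
```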
> moreover, you can say a set is a "kind of" a keys-only dict. in fact,
> the first implementation of set used a dict, where the keys were the
> elements of the set, and their value was always True.
Or you could adopt the approach used by SETL: A dict is equivalent to a set
of 2-tuples. In other
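The SETL-style equivalence can be sketched directly: a dict carries the same information as a set of (key, value) pairs, plus a unique-keys constraint:

```python
# A dict viewed as a set of 2-tuples: the two forms round-trip losslessly
# as long as the keys are unique.
d = {"a": 1, "b": 2}
pairs = set(d.items())
print(pairs == {("a", 1), ("b", 2)})  # → True
print(dict(pairs) == d)               # → True: the mapping round-trips
```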