Re: Python's only one way to do it philosophy isn't good?

2007-07-09 Thread Chris Mellon
On 7/6/07, Douglas Alan [EMAIL PROTECTED] wrote:
 Chris Mellon [EMAIL PROTECTED] writes:

  Sure, but thats part of the general refcounting vs GC argument -
  refcounting gives (a certain level of) timeliness in resource
  collection, GC often only runs under memory pressure. If you're
  saying that we should keep refcounting because it provides better
  handling of non-memory limited resources like file handles, I
  probably wouldn't argue. But saying we should keep refcounting
  because people like to and should write code that relies on implicit
  scope level object destruction I very strongly argue against.

 And why would you do that?  People rely very heavily in C++ on when
 destructors will be called, and they are in fact encouraged to do so.
 They are, in fact, encouraged to do so *so* much that constructs like
 finally and with have been rejected by the C++ BDFL.  Instead, you
 are told to use smart pointers, or what have you, to clean up your
 allocated resources.


For the record, C++ doesn't have a BDFL. And yes, I know that it's
used all the time in C++ and is heavily encouraged. However, C++ has
totally different object semantics than Python, and there's no reason
to think that we should use it because a different language with
different rules does it. For one thing, Python doesn't have the
concept of stack objects that C++ does.

 I see no reason not to make Python at least as expressive a programming
 language as C++.


I have an overwhelming urge to say something vulgar here. I'm going to
restrain myself and point out that this isn't a discussion about
expressiveness.

   Relying on the specific semantics of refcounting to give
   certain lifetimes is a logic error.
  
   For example:
  
   f = some_file() #maybe it's the file store for a database implementation
   f.write('a bunch of stuff')
   del f
   #insert code that assumes f is closed.

  That's not a logic error if you are coding in CPython, though I agree
  that in this particular case the explicit use of with would be
  preferable due to its clarity.

  I stand by my statement. I feel that writing code in this manner is
  like writing C code that assumes uninitialized pointers are 0 -
  regardless of whether it works, it's erroneous and bad practice at
  best, and actively harmful at worst.

 That's a poor analogy.  C doesn't guarantee that pointers will be
 initialized to 0, and in fact, they typically are not.  CPython, on
 the other hand, guarantees that the refcounter behaves a certain
 way.


It's a perfect analogy, because the value of an uninitialized pointer
in C is *implementation dependent*. The standard gives you no guidance
one way or the other, and an implementation is free to assign any
value it wants. Including 0, and it's not uncommon for implementations
to do so, at least in certain configurations.

The Python language reference explicitly does *not* guarantee the
behavior of the refcounter. By relying on it, you are relying on an
implementation specific, non-specified behavior. Exactly like you'd be
doing if you rely on the value of uninitialized variables in C.

 There are languages other than C that guarantee that values are
 initialized in certain ways.  Are you going to also assert that in
 those languages you should not rely on the initialization rules?


Of course not. Because they *do* guarantee and specify that. C
doesn't, and neither does Python.


Re: Python's only one way to do it philosophy isn't good?

2007-07-09 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 And why would you do that?  People rely very heavily in C++ on when
 destructors will be called, and they are in fact encouraged to do so.
 They are, in fact, encouraged to do so *so* much that constructs like
 finally and with have been rejected by the C++ BDFL.  Instead, you
 are told to use smart pointers, or what have you, to clean up your
 allocated resources.

 For the record, C++ doesn't have a BDFL.

Yes, I know.

   http://dictionary.reference.com/browse/analogy

 And yes, I know that it's used all the time in C++ and is heavily
 encouraged. However, C++ has totally different object semantics than
 Python,

That would depend on how you program in C++.  If you use a framework
based on refcounted smart pointers, then it is rather similar.
Especially if you back that up in your application with a conservative
garbage collector, or what have you.

 I see no reason not to make Python at least as expressive a programming
 language as C++.

 I have an overwhelming urge to say something vulgar here. I'm going
 to restrain myself and point out that this isn't a discussion about
 expressiveness.

Says who?

 That's a poor analogy.  C doesn't guarantee that pointers will be
 initialized to 0, and in fact, they typically are not.  CPython, on
 the other hand, guarantees that the refcounter behaves a certain
 way.

 It's a perfect analogy, because the value of an uninitialized pointer
 in C is *implementation dependent*.

Name one common C implementation that guarantees that uninitialized
pointers will be initialized to null.  None that I have *ever* used
make such a guarantee.  In fact, uninitialized values have always been
garbage with every C compiler I have ever used.

If gcc guaranteed that uninitialized variables would always be zeroed,
and you knew that your code would always be compiled with gcc, then
you would be perfectly justified in coding in a style that assumed
null values for uninitialized variables.  Those are some big if's,
though.

 The Python language reference explicitly does *not* guarantee the
 behavior of the refcounter.

Are you suggesting that it is likely to change?  If so, I think you
will find a huge uproar about it.

 By relying on it, you are relying on an implementation specific,
 non-specified behavior.

I'm relying on a feature that has worked fine since the early '90s,
and if it is ever changed in the future, I'm sure that plenty of other
language changes will come along with it that will make adapting code
that relies on this feature the least of my porting worries.

 Exactly like you'd be doing if you rely on the value of
 uninitialized variables in C.

Exactly like I'd be doing if I made Unix system calls in my C code.
After all, system calls are implementation dependent, aren't they?
That doesn't mean that I don't rely on them every day.

 There are languages other than C that guarantee that values are
 initialized in certain ways.  Are you going to also assert that in
 those languages you should not rely on the initialization rules?

 Of course not. Because they *do* guarantee and specify that. C
 doesn't, and neither does Python.

CPython does by tradition *and* by popular will.

Also the language reference manual specifically indicates that CPython
uses a refcounter and documents that it collects objects as soon as
they become unreachable (with the appropriate caveats about circular
references, tracing, debugging, and stored tracebacks).

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-09 Thread Steve Holden
Douglas Alan wrote:
 Chris Mellon [EMAIL PROTECTED] writes:
[...]
 The Python language reference explicitly does *not* guarantee the
 behavior of the refcounter.
 
 Are you suggesting that it is likely to change?  If so, I think you
 will find a huge uproar about it.
 
 By relying on it, you are relying on an implementation specific,
 non-specified behavior.
 
 I'm relying on a feature that has worked fine since the early '90s,
 and if it is ever changed in the future, I'm sure that plenty of other
 language changes will come along with it that will make adapting code
 that relies on this feature the least of my porting worries.
 
Damn, it seems to be broken on my Jython/IronPython installations, maybe 
I should complain. Oh no, I can't, because it *isn't* *part* *of* *the* 
*language*. ...

 Exactly like you'd be doing if you rely on the value of
 uninitialized variables in C.
 
 Exactly like I'd be doing if I made Unix system calls in my C code.
 After all, system calls are implementation dependent, aren't they?
 That doesn't mean that I don't rely on them every day.
 
That depends on whether you program to a specific standard or not.

 There are languages other than C that guarantee that values are
 initialized in certain ways.  Are you going to also assert that in
 those languages you should not rely on the initialization rules?
 
 Of course not. Because they *do* guarantee and specify that. C
 doesn't, and neither does Python.
 
 CPython does by tradition *and* by popular will.
 
But you make the mistake of assuming that Python is CPython, which it isn't.

 Also the language reference manual specifically indicates that CPython
 uses a refcounter and documents that it collects objects as soon as
 they become unreachable (with the appropriate caveats about circular
 references, tracing, debugging, and stored tracebacks).
 
Indeed, but that *is* implementation dependent. As long as you stick to 
CPython you'll be fine. That's allowed. Just be careful about the 
discussions you get into :-)

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden



Re: Python's only one way to do it philosophy isn't good?

2007-07-09 Thread Douglas Alan
Steve Holden [EMAIL PROTECTED] writes:

 I'm relying on a feature that has worked fine since the early '90s,
 and if it is ever changed in the future, I'm sure that plenty of other
 language changes will come along with it that will make adapting code
 that relies on this feature the least of my porting worries.

 Damn, it seems to be broken on my Jython/IronPython installations,
 maybe I should complain. Oh no, I can't, because it *isn't* *part*
 *of* *the* *language*. ...

As I have mentioned *many* times, I'm coding in CPython 2.5, and I
typically make extensive use of Unix-specific calls.  Consequently, I
have absolutely no interest in making my code compatible with Jython
or IronPython, since Jython is stuck at 2.2, IronPython at 2.4, and
neither provides full support for the Python Standard Library or access
to Unix-specific functionality.

I might at some point want to write some Jython code to make use of
Java libraries, but when I code in Jython, I will have absolutely no
interest in trying to make that code compatible with CPython, since
that cannot be if my Jython code calls libraries that are not
available to CPython.

 Exactly like you'd be doing if you rely on the value of
 uninitialized variables in C.

 Exactly like I'd be doing if I made Unix system calls in my C code.
 After all, system calls are implementation dependent, aren't they?
 That doesn't mean that I don't rely on them every day.

 That depends on whether you program to a specific standard or not.

What standard would that be?  Posix is too restrictive.
BSD/OSX/Linux/Solaris are all different.  I make my program work on
the platform I'm writing it for (keeping in mind what platforms I
might want to port to in the future, in order to avoid obvious
portability pitfalls), and then if the program needs to be ported
eventually to other platforms, I figure out how to do that when the
time comes.

 Of course not. Because they *do* guarantee and specify that. C
 doesn't, and neither does Python.

 CPython does by tradition *and* by popular will.

 But you make the mistake of assuming that Python is CPython, which it isn't.

I do not make that mistake.  I refer to CPython as "Python", as does
99% of the Python community.  When I talk about Jython, I call it
"Jython", and when I talk about IronPython I refer to it as
"IronPython".  None of this implies that I don't understand that
CPython has features in it that a more strict interpretation of the
word "Python" doesn't necessarily have, just as when I call a tomato
a "vegetable" that doesn't mean that I don't understand that it is
really a fruit.

 Also the language reference manual specifically indicates that
 CPython uses a refcounter and documents that it collects objects as
 soon as they become unreachable (with the appropriate caveats about
 circular references, tracing, debugging, and stored tracebacks).

 Indeed, but that *is* implementation dependent. As long as you stick
 to CPython you'll be fine. That's allowed. Just be careful about the
 discussions you get into :-)

I've stated over and over again that all I typically care about is
CPython, and what I'm criticized for is for my choice to program for
CPython, rather than for a more generalized least-common-denominator
Python.

When I program for C++, I also program for the compilers and OS'es
that I will be using, as trying to write C++ code that will compile
under all C++ compilers and OS'es is an utterly losing proposition.

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-06 Thread Chris Mellon
On 7/5/07, Douglas Alan [EMAIL PROTECTED] wrote:
 Chris Mellon [EMAIL PROTECTED] writes:

  Some people here have been arguing that all code should use with to
  ensure that the files are closed.  But this still wouldn't solve the
  problem of the large data structures being left around for an
  arbitrary amount of time.

  I don't think anyone has suggested that. Let me be clear about *my*
  position: When you need to ensure that a file has been closed by a
  certain time, you need to be explicit about it. When you don't care,
  just that it will be closed soonish then relying on normal object
  lifetime calls is sufficient. This is true regardless of whether
  object lifetimes are handled via refcount or via true garbage
  collection.

 But it's *not* true at all when relying only on a true GC!  Your
 program could easily run out of file descriptors if you only have a
 real garbage collector and code this way (and are opening lots of
 files).  This is why destructors are useless in Java -- you can't rely
 on them *ever* being called.  In Python, however, destructors are
 quite useful due to the refcounter.


Sure, but thats part of the general refcounting vs GC argument -
refcounting gives (a certain level of) timeliness in resource
collection, GC often only runs under memory pressure. If you're saying
that we should keep refcounting because it provides better handling of
non-memory limited resources like file handles, I probably wouldn't
argue. But saying we should keep refcounting because people like to
and should write code that relies on implicit scope level object
destruction I very strongly argue against.

  Relying on the specific semantics of refcounting to give
  certain lifetimes is a logic error.
 
  For example:
 
  f = some_file() #maybe it's the file store for a database implementation
  f.write('a bunch of stuff')
  del f
  #insert code that assumes f is closed.

 That's not a logic error if you are coding in CPython, though I agree
 that in this particular case the explicit use of with would be
 preferable due to its clarity.


I stand by my statement. I feel that writing code in this manner is
like writing C code that assumes uninitialized pointers are 0 -
regardless of whether it works, it's erroneous and bad practice at
best, and actively harmful at worst.


Re: Python's only one way to do it philosophy isn't good?

2007-07-06 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 Sure, but thats part of the general refcounting vs GC argument -
 refcounting gives (a certain level of) timeliness in resource
 collection, GC often only runs under memory pressure. If you're
 saying that we should keep refcounting because it provides better
 handling of non-memory limited resources like file handles, I
 probably wouldn't argue. But saying we should keep refcounting
 because people like to and should write code that relies on implicit
 scope level object destruction I very strongly argue against.

And why would you do that?  People rely very heavily in C++ on when
destructors will be called, and they are in fact encouraged to do so.
They are, in fact, encouraged to do so *so* much that constructs like
finally and with have been rejected by the C++ BDFL.  Instead, you
are told to use smart pointers, or what have you, to clean up your
allocated resources.

I see no reason not to make Python at least as expressive a programming
language as C++.

  Relying on the specific semantics of refcounting to give
  certain lifetimes is a logic error.
 
  For example:
 
  f = some_file() #maybe it's the file store for a database implementation
  f.write('a bunch of stuff')
  del f
  #insert code that assumes f is closed.

 That's not a logic error if you are coding in CPython, though I agree
 that in this particular case the explicit use of with would be
 preferable due to its clarity.

 I stand by my statement. I feel that writing code in this manner is
 like writing C code that assumes uninitialized pointers are 0 -
 regardless of whether it works, it's erroneous and bad practice at
 best, and actively harmful at worst.

That's a poor analogy.  C doesn't guarantee that pointers will be
initialized to 0, and in fact, they typically are not.  CPython, on
the other hand, guarantees that the refcounter behaves a certain
way.

There are languages other than C that guarantee that values are
initialized in certain ways.  Are you going to also assert that in
those languages you should not rely on the initialization rules?

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-05 Thread Chris Mellon
On 7/2/07, Douglas Alan [EMAIL PROTECTED] wrote:
 Lenard Lindstrom [EMAIL PROTECTED] writes:
  If they are simply a performance tweak then it's not an issue *. I
  was just concerned that the calls were necessary to keep resources
  from being exhausted.

 Well, if you catch an exception and don't return quickly, you have to
 consider not only the possibility that there could be some open files
 left in the traceback, but also that there could be large and now
 useless data structures stored in the traceback.

 Some people here have been arguing that all code should use with to
 ensure that the files are closed.  But this still wouldn't solve the
 problem of the large data structures being left around for an
 arbitrary amount of time.


I don't think anyone has suggested that. Let me be clear about *my*
position: When you need to ensure that a file has been closed by a
certain time, you need to be explicit about it. When you don't care,
just that it will be closed soonish then relying on normal object
lifetime calls is sufficient. This is true regardless of whether
object lifetimes are handled via refcount or via true garbage
collection. Relying on the specific semantics of refcounting to give
certain lifetimes is a logic error.

For example:

f = some_file() #maybe it's the file store for a database implementation
f.write('a bunch of stuff')
del f
#insert code that assumes f is closed.

This is the sort of code that I warn against writing.

f = some_file()
with f:
  f.write("a bunch of stuff")
#insert code that assumes f is closed, but correctly this time

is better.

On the other hand,
f = some_file()
f.write("a bunch of stuff")
#insert code that doesn't care about the state of f

is also fine. It *remains* fine no matter what kind of object lifetime
policy we have. The very worst case is that the file will never be
closed. However, this is exactly the sort of guarantee that GC can't
make, just as it can't ensure that you won't run out of memory. That's
a general case argument about refcounting semantics vs GC semantics,
and there are benefits and disadvantages to both sides.

What I am arguing against are explicit assumptions based on implicit
behaviors. Those are always fragile, and doubly so when the implicit
behavior isn't guaranteed (and, in fact, is explicitly *not*
guaranteed, as with refcounting semantics).


Re: Python's only one way to do it philosophy isn't good?

2007-07-05 Thread Falcolas
On Jul 5, 10:30 am, Chris Mellon [EMAIL PROTECTED] wrote:

 I don't think anyone has suggested that. Let me be clear about *my*
 position: When you need to ensure that a file has been closed by a
 certain time, you need to be explicit about it. When you don't care,
 just that it will be closed soonish then relying on normal object
 lifetime calls is sufficient. This is true regardless of whether
 object lifetimes are handled via refcount or via true garbage
 collection. Relying on the specific semantics of refcounting to give
 certain lifetimes is a logic error.

 For example:

 f = some_file() #maybe it's the file store for a database implementation
 f.write('a bunch of stuff')
 del f
 #insert code that assumes f is closed.

 This is the sort of code that I warn against writing.

 f = some_file()
 with f:
   f.write("a bunch of stuff")
 #insert code that assumes f is closed, but correctly this time

 is better.

This has raised a few questions in my mind. So, here's my newbie
question based off this.

Is this:

f = open("xyz", "w")
f.write("wheee")
f.close()
# Assume file is closed properly.

as safe as your code:

f = some_file()
with f:
  f.write("a bunch of stuff")
#insert code that assumes f is closed, but correctly this time

Thanks!

G



Re: Python's only one way to do it philosophy isn't good?

2007-07-05 Thread John Nagle
Falcolas wrote:
 On Jul 5, 10:30 am, Chris Mellon [EMAIL PROTECTED] wrote:
 
I don't think anyone has suggested that. Let me be clear about *my*
position: When you need to ensure that a file has been closed by a
certain time, you need to be explicit about it. When you don't care,
just that it will be closed soonish then relying on normal object
lifetime calls is sufficient. This is true regardless of whether
object lifetimes are handled via refcount or via true garbage
collection. Relying on the specific semantics of refcounting to give
certain lifetimes is a logic error.

We may need a guarantee that if you create a local object and
don't copy a strong reference to it to an outer scope, upon exit from
the scope, the object will be destroyed.
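
To illustrate, here is a minimal sketch of what such a guarantee would
promise (CPython's refcounter already happens to behave this way; a
tracing collector need not):

class Tracked(object):
    def __del__(self):
        print 'destroyed'

def scope():
    t = Tracked()   # local object; no strong reference escapes

scope()             # under the guarantee, 'destroyed' prints here
print 'after scope'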

John Nagle


Re: Python's only one way to do it philosophy isn't good?

2007-07-05 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 Some people here have been arguing that all code should use with to
 ensure that the files are closed.  But this still wouldn't solve the
 problem of the large data structures being left around for an
 arbitrary amount of time.

 I don't think anyone has suggested that. Let me be clear about *my*
 position: When you need to ensure that a file has been closed by a
 certain time, you need to be explicit about it. When you don't care,
 just that it will be closed soonish then relying on normal object
 lifetime calls is sufficient. This is true regardless of whether
 object lifetimes are handled via refcount or via true garbage
 collection.

But it's *not* true at all when relying only on a true GC!  Your
program could easily run out of file descriptors if you only have a
real garbage collector and code this way (and are opening lots of
files).  This is why destructors are useless in Java -- you can't rely
on them *ever* being called.  In Python, however, destructors are
quite useful due to the refcounter.

 Relying on the specific semantics of refcounting to give
 certain lifetimes is a logic error.

 For example:

 f = some_file() #maybe it's the file store for a database implementation
 f.write('a bunch of stuff')
 del f
 #insert code that assumes f is closed.

That's not a logic error if you are coding in CPython, though I agree
that in this particular case the explicit use of with would be
preferable due to its clarity.

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-05 Thread Lenard Lindstrom
Falcolas wrote:
 f = some_file() #maybe it's the file store for a database implementation
 f.write('a bunch of stuff')
 del f
 #insert code that assumes f is closed.

 This is the sort of code that I warn against writing.

 f = some_file()
 with f:
   f.write("a bunch of stuff")
 #insert code that assumes f is closed, but correctly this time

 is better.
 
 This has raised a few questions in my mind. So, here's my newbie
 question based off this.
 
 Is this:
 
  f = open("xyz", "w")
  f.write("wheee")
 f.close()
 # Assume file is closed properly.
 

This will not immediately close f if f.write raises an exception since 
the program stack is kept alive as a traceback.

 as safe as your code:
 
 f = some_file()
 with f:
   f.write("a bunch of stuff")
 #insert code that assumes f is closed, but correctly this time
 

The with statement is designed to be safer. It contains an implicit 
try/finally that lets the file close itself in case of an exception.
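
Roughly, the with version behaves like this hand-written equivalent
(a sketch: the real statement works through the file's __enter__ and
__exit__ methods, and on Python 2.5 with itself must be enabled via
from __future__ import with_statement):

f = some_file()
try:
    f.write("a bunch of stuff")
finally:
    f.close()   # runs even if write() raises, before the exception
                # propagates, so the traceback never holds an open file
#insert code that assumes f is closed, but correctly this time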

--
Lenard Lindstrom
[EMAIL PROTECTED]


Re: Python's only one way to do it philosophy isn't good?

2007-07-04 Thread Jorgen Grahn
On Fri, 29 Jun 2007 12:53:49 -0400, Douglas Alan [EMAIL PROTECTED] wrote:
 Steve Holden [EMAIL PROTECTED] writes:

 Python doesn't *have* any refcounting semantics.

 I'm not convinced that Python has *any* semantics at all outside of
 specific implementations.  It has never been standardized to the rigor
 of your typical barely-readable language standards document.

 If you rely on the behavior of CPython's memory allocation and
 garbage collection you run the risk of producing programs that won't
 port to Jython, or IronPython, or PyPy, or ...

 This is a trade-off that many users *are* willing to make.

 Yes, I have no interest at the moment in trying to make my code
 portable between every possible implementation of Python, since I have
 no idea what features such implementations may or may not support.
 When I code in Python, I'm coding for CPython.  In the future, I may
 do some stuff in Jython, but I wouldn't call it Python -- I'd call
 it Jython.

Yeah, especially since Jython is currently (according to the Wikipedia
article) an implementation of Python 2.2 ... not even *I* use 
versions that are that old these days!

[I have, for a long time, been meaning to post here about refcounting
and relying on CPython's __del__ semantics, but I've never had the
energy to write clearly or handle the inevitable flame war. So I'll
just note that my view on this seems similar to Doug's.]

/Jorgen

-- 
  // Jorgen Grahn grahn@Ph'nglui mglw'nafh Cthulhu
\X/ snipabacken.dyndns.org  R'lyeh wgah'nagl fhtagn!


Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom [EMAIL PROTECTED] writes:

 Explicitly clear the exception? With sys.exc_clear?

 Yes.  Is there a problem with that?

 As long as nothing tries to re-raise the exception I doubt it breaks
 anything:

  >>> import sys
  >>> try:
  ...     raise StandardError("Hello")
  ... except StandardError:
  ...     sys.exc_clear()
  ...     raise

  Traceback (most recent call last):
    File "<pyshell#6>", line 5, in <module>
      raise
  TypeError: exceptions must be classes, instances, or strings
  (deprecated), not NoneType

I guess I don't really see that as a problem.  Exceptions should
normally only be re-raised where they are caught.  If a piece of code
has decided to handle an exception, and considers it dealt with, there
is no reason for it not to clear the exception, and good reason for it
to do so.  Also, any caught exception is automatically cleared when
the catching procedure returns anyway, so it's not like Python has
ever considered a caught exception to be precious information that
ought to be preserved long past the point where it is handled.
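
For instance, here is a minimal sketch of the sort of code I have in
mind (the callables are hypothetical):

import sys

def main_loop(get_request, handle_request, log_error):
    while True:
        try:
            handle_request(get_request())
        except Exception, e:
            log_error(e)
            sys.exc_clear()   # dealt with; drop the traceback now so
                              # any files or large locals in its frames
                              # can be reclaimed immediately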

 But it is like calling the garbage collector. You are tuning the
 program to ensure some resource isn't exhausted.

I'm not sure I see the analogy: Calling the GC can be expensive,
clearing an exception is not.  The exception is going to be cleared
anyway when the procedure returns, the GC wouldn't likely be.

It's much more like explicitly assigning None to a variable that
contains a large data structure when you no longer need the contents
of the variable.  Doing this sort of thing can be a wise thing to do
in certain situations.
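
For instance (a sketch, with hypothetical helpers):

def report(path):
    data = load_huge_table(path)   # hypothetical: builds a big structure
    total = len(data)
    data = None    # drop the only reference; CPython reclaims the
                   # table here, not at the end of this long function
    do_lots_of_other_work()        # hypothetical: runs for a while
    return total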

 It relies on implementation specific behavior to be provably
 reliable*.

As Python is not a formally standardized language, and one typically
relies on the fact that CPython itself is ported to just about every
platform known to Man, I don't find this to be a particular worry.

 If this is indeed the most obvious way to do things in your
 particular use case then Python, and many other languages, is
 missing something. If the particular problem is isolated,
 formalized, and general solution found, then a PEP can be
 submitted. If accepted, this would ensure future and cross-platform
 compatibility.

Well, I think that the refcounting semantics of CPython are useful,
and allow one to often write simpler, easier-to-read and maintain
code.  I think that Jython and IronPython, etc., should adopt these
semantics, but I imagine they might not for performance reasons.  I
don't generally use Python for its speediness, however, but rather
for its pleasant syntax and semantics and large, effective library.

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom [EMAIL PROTECTED] writes:

 You don't necessarily want a function that raises an exception to
 deallocate all of its resources before raising the exception, since
 you may want access to these resources for debugging, or what have
 you.

 No problem:

 [...]

   class MyFile(file):
       def __exit__(self, exc_type, exc_val, exc_tb):
           if exc_type is not None:
               self.my_last_posn = self.tell()
           return file.__exit__(self, exc_type, exc_val, exc_tb)

I'm not sure I understand you here.  You're saying that I should have
the foresight to wrap all my file opens in a special class to
facilitate debugging?

If so, (1) I don't have that much foresight and don't want to have
to.  (2) I debug code that other people have written, and they often
have less foresight than me.  (3) It would make my code less clear to
have every file open wrapped in some special class.

Or are you suggesting that early in __main__.main(), when I wish to
debug something, I do something like:

   __builtins__.open = __builtins__.file = MyFile

?

I suppose that would work.  I'd still prefer to clear exceptions,
though, in those few cases in which a function has caught an exception
and isn't going to be returning soon and have the resources generally
kept alive in the traceback.  To me, that's the more elegant and
general solution.

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Lenard Lindstrom
Douglas Alan wrote:
 Lenard Lindstrom [EMAIL PROTECTED] writes:
 
 Explicitly clear the exception? With sys.exc_clear?
 
 Yes.  Is there a problem with that?
 
 As long as nothing tries to re-raise the exception I doubt it breaks
 anything:

  >>> import sys
  >>> try:
  ...     raise StandardError("Hello")
  ... except StandardError:
  ...     sys.exc_clear()
  ...     raise

  Traceback (most recent call last):
    File "<pyshell#6>", line 5, in <module>
      raise
  TypeError: exceptions must be classes, instances, or strings
  (deprecated), not NoneType
 
 I guess I don't really see that as a problem.  Exceptions should
 normally only be re-raised where they are caught.  If a piece of code
 has decided to handle an exception, and considers it dealt with, there
 is no reason for it not to clear the exception, and good reason for it
 to do so.

It is only a problem if refactoring the code could mean the exception is 
re-raised instead of handled at that point. Should the call to exc_clear 
be overlooked then the newly added raise will not work.

 Also, any caught exception is automatically cleared when
 the catching procedure returns anyway, so it's not like Python has
 ever considered a caught exception to be precious information that
 ought to be preserved long past the point where it is handled.
 

That's the point. Python takes care of clearing the traceback. Calls to 
exc_clear are rarely seen. If they are simply a performance tweak then 
it's not an issue *. I was just concerned that the calls were necessary 
to keep resources from being exhausted.

 But it is like calling the garbage collector. You are tuning the
 program to ensure some resource isn't exhausted.
 
 I'm not sure I see the analogy: Calling the GC can be expensive,
 clearing an exception is not.  The exception is going to be cleared
 anyway when the procedure returns, the GC wouldn't likely be.
 

The intent of a high level language is to free the programmer from such 
concerns as memory management. So a call to the GC is out-of-place in a 
production program. Anyone encountering such a call would wonder what is 
so critical about that particular point in the execution. So 
encountering an exc_clear would make me wonder why it is so important to 
free that traceback. I would hope the comments would explain it.

 It's much more like explicitly assigning None to a variable that
 contains a large data structure when you no longer need the contents
 of the variable.  Doing this sort of thing can be a wise thing to do
 in certain situations.
 

I just delete the name myself. But this is different. Removing a name 
from the namespace, or setting it to None, prevents an accidental access 
later. A caught traceback is invisible.

 It relies on implementation specific behavior to be provably
 reliable*.
 
 As Python is not a formally standardized language, and one typically
 relies on the fact that CPython itself is ported to just about every
 platform known to Man, I don't find this to be a particular worry.
 

But some things will make it into ISO Python. Registered exit handlers 
will be called at program termination. A context manager's __exit__ 
method will be called when leaving a with statement. But garbage 
collection will be implementation-defined **.

 If this is indeed the most obvious way to do things in your
 particular use case then Python, and many other languages, is
 missing something. If the particular problem is isolated,
 formalized, and general solution found, then a PEP can be
 submitted. If accepted, this would ensure future and cross-platform
 compatibility.
 
 Well, I think that the refcounting semantics of CPython are useful,
 and allow one to often write simpler, easier-to-read and maintain
 code.

Just as long as you have weighed the benefits against a future move to a 
JIT-accelerated, continuation supporting PyPy interpreter that might not 
use reference counting.

 I think that Jython and IronPython, etc., should adopt these
 semantics, but I imagine they might not for performance reasons.  I
 don't generally use Python for its speediness, however, but rather
 for its pleasant syntax and semantics and large, effective library.
 

Yet improved performance appeared to be a priority in Python 2.4 
development, and Python's speed continues to be a concern.


* I see in section 26.1 of the Python 2.5 /Python Library Reference/ as
regards exc_clear: "This function can also be used to try to free
resources and trigger object finalization, though no guarantee is made
as to what objects will be freed, if any." So using exc_clear is not so
much frowned upon as questioned.

** A term that crops up a lot in the C standard /ISO/IEC 9899:1999 (E)/. :-)

--
Lenard Lindstrom
[EMAIL PROTECTED]



Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Lenard Lindstrom
Douglas Alan wrote:
 Lenard Lindstrom [EMAIL PROTECTED] writes:
 
 You don't necessarily want a function that raises an exception to
 deallocate all of its resources before raising the exception, since
 you may want access to these resources for debugging, or what have
 you.
 
 No problem:

 [...]

   class MyFile(file):
       def __exit__(self, exc_type, exc_val, exc_tb):
           if exc_type is not None:
               self.my_last_posn = self.tell()
           return file.__exit__(self, exc_type, exc_val, exc_tb)
 
 I'm not sure I understand you here.  You're saying that I should have
 the foresight to wrap all my file opens in a special class to
 facilitate debugging?
 
 If so, (1) I don't have that much foresight and don't want to have
 to.  (2) I debug code that other people have written, and they often
 have less foresight than me.  (3) It would make my code less clear to
 have every file open wrapped in some special class.
 

Obviously you had the foresight to realize with statements could 
compromise debugging. I never considered it myself. I don't know the 
specifics of your particular project, what the requirements are. So I 
can't claim this is the right solution for you. But the option is available.

 Or are you suggesting that early in __main__.main(), when I wish to
 debug something, I do something like:
 
__builtins__.open = __builtins__.file = MyFile
 
 ?
 
 I suppose that would work.

No, I would never suggest replacing a builtin like that. Even replacing 
a definite hook like __import__ is risky, should more than one package 
try and do it in a program.

 I'd still prefer to clear exceptions,
 though, in those few cases in which a function has caught an exception
 and isn't going to be returning soon and have the resources generally
 kept alive in the traceback.  To me, that's the more elegant and
 general solution.
 

As long as the code isn't dependent on explicitly cleared exceptions. 
But if it is I assume it is well documented.

--
Lenard Lindstrom
[EMAIL PROTECTED]



Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom [EMAIL PROTECTED] writes:

 I'm not sure I understand you here.  You're saying that I should have
 the foresight to wrap all my file opens in a special class to
 facilitate debugging?

 Obviously you had the foresight to realize with statements could
 compromise debugging. I never considered it myself.

It's not really so much a matter of having foresight, as much as
having had experience debugging a fair amount of code.  And, at times,
having benefited from the traditional idiomatic way of coding in
Python, where files are not explicitly closed.

Since there are benefits with the typical coding style, and I find
there to be no significant downside, other than if, perhaps some code
holds onto tracebacks, I suggest that the problem be idiomatically
addressed in the *few* code locations that hold onto tracebacks,
rather than in all the *myriad* code locations that open and close
files.

 Or are you suggesting that early in __main__.main(), when I wish to
 debug something, I do something like:
__builtins__.open = __builtins__.file = MyFile
 ?
 I suppose that would work.

 No, I would never suggest replacing a builtin like that. Even
 replacing a definite hook like __import__ is risky, should more than
 one package try and do it in a program.

That misinterpretation of your idea would only be reasonable while
actually debugging, not for standard execution.  Standard rules of
coding elegance don't apply while debugging, so I think the
misinterpretation might be a reasonable alternative.  Still I think
I'd just prefer to stick to the status quo in this regard.

 As long as the code isn't dependent on explicitly cleared
 exceptions. But if it is I assume it is well documented.

Typically the resource in question is an open file.  These usually
don't have to be closed in a particularly timely fashion.  If, for
some reason, a file absolutely needs to be closed rapidly, then it's
probably best to use with in such a case.  Otherwise, I vote for the
de facto standard idiom of relying on the refcounter along with
explicitly clearing exceptions in the situations we've previously
discussed.

If some code doesn't explicitly clear an exception, though, and holds
onto the most recent one while running in a loop (or what have
you), in the cases we are considering, it hardly seems like the end of
the world.  It will just take a little bit longer for a single file to
be closed than might ideally be desired.  But this lack of ideal
behavior is usually not going to cause much trouble.

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Douglas Alan
Lenard Lindstrom [EMAIL PROTECTED] writes:

 Also, any caught exception is automatically cleared when
 the catching procedure returns anyway, so it's not like Python has
 ever considered a caught exception to be precious information that
 ought to be preserved long past the point where it is handled.

 That's the point. Python takes care of clearing the traceback. Calls
 to exc_clear are rarely seen.

But that's probably because it's very rare to catch an exception and
then not return quickly.  Typically, the only place this would happen
is in main(), or one of its helpers.

 If they are simply a performance tweak then it's not an issue *. I
 was just concerned that the calls were necessary to keep resources
 from being exhausted.

Well, if you catch an exception and don't return quickly, you have to
consider not only the possibility that there could be some open files
left in the traceback, but also that there could be large and now
useless data structures stored in the traceback.

Some people here have been arguing that all code should use with to
ensure that the files are closed.  But this still wouldn't solve the
problem of the large data structures being left around for an
arbitrary amount of time.
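
Here's a small demonstration of the point in CPython 2.5 (the file
name is made up):

import sys

def work():
    big = range(10 ** 6)           # large temporary structure
    f = open("scratch.txt", "w")   # an open file, for good measure
    raise RuntimeError("boom")     # the frame still references both

try:
    work()
except RuntimeError:
    pass

# The exception is handled, but until it is cleared or replaced, its
# traceback keeps work()'s frame -- and thus big and f -- alive:
frame = sys.exc_info()[2].tb_next.tb_frame
print len(frame.f_locals["big"]), frame.f_locals["f"].closed
# prints: 1000000 False -- with would have closed f, but big lingers.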

 But some things will make it into ISO Python.

Is there a movement afoot of which I'm unaware to make an ISO standard
for Python?

 Just as long as you have weighed the benefits against a future move
 to a JIT-accelerated, continuation supporting PyPy interpreter that
 might not use reference counting.

I'll worry about that day when it happens, since many of my calls to
the standard library will probably break anyway at that point.  Not to
mention that I don't stay within the confines of Python 2.2, which is
where Jython currently is.  (E.g., Jython does not have generators.)
Etc.

 I think that Jython and IronPython, etc., should adopt these
 semantics, but I imagine they might not for performance reasons.  I
 don't generally use Python for its speediness, however, but rather
 for its pleasant syntax and semantics and large, effective
 library.

 Yet improved performance appeared to be a priority in Python 2.4
 development, and Python's speed continues to be a concern.

I don't think the refcounting semantics should slow Python down much
considering that it never has aimed for C-level performance anyway.
(Some people claim it's a drag on supporting threads.  I'm skeptical,
though.)  I can see it being a drag on something like Jython, though,
where you are going through a number of different layers to get from
Jython code to the hardware.

Also, I imagine that no one wants to put in the work in Jython to have
a refcounter when the garbage collector comes with the JVM for free.

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Lenard Lindstrom
Douglas Alan wrote:
 Lenard Lindstrom [EMAIL PROTECTED] writes:
 

 Or are you suggesting that early in __main__.main(), when I wish to
 debug something, I do something like:
__builtins__.open = __builtins__.file = MyFile
 ?
 I suppose that would work.
 
 No, I would never suggest replacing a builtin like that. Even
 replacing a definite hook like __import__ is risky, should more than
 one package try and do it in a program.
 
 That misinterpretation of your idea would only be reasonable while
 actually debugging, not for standard execution.  Standard rules of
 coding elegance don't apply while debugging, so I think the
 misinterpretation might be a reasonable alternative.  Still I think
 I'd just prefer to stick to the status quo in this regard.
 

I totally missed the "when I wish to debug something". Skimming when I
should be reading.

---
Lenard Lindstrom
[EMAIL PROTECTED]


Re: Python's only one way to do it philosophy isn't good?

2007-07-02 Thread Lenard Lindstrom
Douglas Alan wrote:
 Lenard Lindstrom [EMAIL PROTECTED] writes:
 
 But some things will make it into ISO Python.
 
 Is there a movement afoot of which I'm unaware to make an ISO standard
 for Python?
 

Not that I know of. But it would seem any language that lasts long 
enough will become ISO standard. I just meant that even though Python 
has no formal standard there are documented promises that will not be 
broken lightly by any implementation.

---
Lenard Lindstrom
[EMAIL PROTECTED]


Re: Python's only one way to do it philosophy isn't good?

2007-07-01 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
  Haskell and ML are both based on typed lambda calculus, unlike Lisp,
  which is based on untyped lambda calculus.  Certainly the most
  familiar features of Lisp (dynamic typing, S-expression syntax,
  programs as data (Lisp's macro system results from this)) are absent
  from Haskell and ML.
 
 And that is supposed to make them better and more flexible??? 

Well no, by itself the absence of those Lisp characteristics mainly
means it's a pretty far stretch to say that Haskell and ML are Lisp
dialects.

 The ideal language of the future will have *optional* manifest
  typing along with type-inference, and will have some sort of pragma
 to turn on warnings when variables are forced to become dynamic due
 to there not being enough type information to infer the type.  But
 it will still allow programming with dynamic typing when that is
 necessary.

If I understand correctly, in Haskell these are called existential types:

   http://haskell.org/hawiki/ExistentialTypes

 The last time I looked at Haskell, it was still in the stage of being
 a language that only an academic could love.  

I used to hear the same thing said about Lisp.

  Haskell's type system lets it do stuff that Lisp can't approach.
 
 What kind of stuff?  Compile-time polymorphism is cool for efficiency
 and type safety, but doesn't actually provide you with any extra
 functionality that I'm aware of.

For example, it can guarantee referential transparency of functions
that don't live in certain monads.  E.g. if a function takes an
integer arg and returns an integer (f :: Integer -> Integer), the type
system guarantees that computing f has no side effects (it doesn't
mutate arrays, doesn't open network sockets, doesn't print messages,
etc).  That is very helpful for concurrency, see the paper Composable
Memory Transactions linked from here:

  http://research.microsoft.com/Users/simonpj/papers/stm/index.htm

other stuff there is interesting too.

 Where do you get this idea that the Lisp world does not get such
 things as parallelism?  StarLisp was designed for the Connection
 Machine...

Many parallel programs have been written in Lisp and *Lisp, and
similarly in C, C++, Java, and Python, through careful use of manually
placed synchronization primitives, just as many programs using dynamic
memory allocation have been written in C with manual use of malloc and
free.  This presentation shows some stuff happening in Haskell that
sounds almost as cool as bringing garbage collection to the
malloc/free era:

 http://research.microsoft.com/~simonpj/papers/ndp/NdpSlides.pdf

As for where languages are going, I think I already mentioned Tim
Sweeney's presentation on "The Next Mainstream Programming Language":

  http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf

It's not Haskell, but its type system is even more advanced than Haskell's.


Re: Python's only one way to do it philosophy isn't good?

2007-07-01 Thread Lenard Lindstrom
Douglas Alan wrote:
 Lenard Lindstrom [EMAIL PROTECTED] writes:
 
 Explicitly clear the exception? With sys.exc_clear?
 
 Yes.  Is there a problem with that?
 

As long as nothing tries to re-raise the exception I doubt it breaks 
anything:

>>> import sys
>>> try:
...     raise StandardError("Hello")
... except StandardError:
...     sys.exc_clear()
...     raise

Traceback (most recent call last):
  File "<pyshell#6>", line 5, in <module>
    raise
TypeError: exceptions must be classes, instances, or strings
(deprecated), not NoneType


But it is like calling the garbage collector. You are tuning the program 
to ensure some resource isn't exhausted. It relies on implementation 
specific behavior to be provably reliable*. If this is indeed the most 
obvious way to do things in your particular use case then Python, and 
many other languages, is missing something. If the particular problem is 
isolated, formalized, and general solution found, then a PEP can be 
submitted. If accepted, this would ensure future and cross-platform 
compatibility.


* reference counting is an integral part of the CPython C api so cannot 
be changed without breaking a lot of extension modules. It will remain 
as long as CPython is implemented in C.


---
Lenard Lindstrom
[EMAIL PROTECTED]


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Michele Simionato
On Jun 29, 3:42 pm, Douglas Alan [EMAIL PROTECTED] wrote:
 Michele Simionato [EMAIL PROTECTED] writes:
  I've written plenty of Python code that relied on destructors to
  deallocate resources, and the code always worked.
  You have been lucky:

 No I haven't been lucky -- I just know what I'm doing.



  $ cat deallocating.py
  import logging

  class C(object):
      def __init__(self):
          logging.warn('Allocating resource ...')

      def __del__(self):
          logging.warn('De-allocating resource ...')
          print 'THIS IS NEVER REACHED!'

  if __name__ == '__main__':
      c = C()

  $ python deallocating.py
  WARNING:root:Allocating resource ...
  Exception exceptions.AttributeError: "'NoneType' object has no
  attribute 'warn'" in <bound method C.__del__ of <__main__.C object
  at 0xb7b9436c>> ignored

 Right.  So?  I understand this issue completely and I code
 accordingly.

What does it mean you 'code accordingly'?  IMO the only clean way out
of this issue is to NOT rely on the garbage collector and to manage
resource deallocation explicitly, not implicitly.  Actually I wrote a
recipe to help with this a couple of months ago and this discussion
prompted me to publish it (the sketch after the link shows the
general idea):
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/523007
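
The general shape of the approach (tying deallocation to a with block
rather than to __del__) is something like this sketch, which is *not*
the recipe itself:

from __future__ import with_statement   # Python 2.5
from contextlib import contextmanager
import logging

@contextmanager
def resource():
    logging.warn('Allocating resource ...')
    try:
        yield 'the resource'
    finally:
        logging.warn('De-allocating resource ...')   # always reached,
                                                     # unlike __del__

with resource() as r:
    pass   # use the resource here
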
But how would you solve the issue using destructors only?  I am just
curious; I would be happy if there was a simple and *reliable*
solution, but I sort of doubt it.  Hoping to be proven wrong,


Michele Simionato



Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Douglas Alan
Paul Rubin http://[EMAIL PROTECTED] writes:

 Douglas Alan [EMAIL PROTECTED] writes:

 But that's a library issue, not a language issue.  The technology
 exists completely within Lisp to accomplish these things, and most
 Lisp programmers even know how to do this, as application frameworks
 in Lisp often do this kind.  The problem is getting anything put into
 the standard.  Standardizing committees just suck.

 Lisp is just moribund, is all.  Haskell has a standardizing committee
 and yet there are lots of implementations taking the language in new
 and interesting directions all the time.  The most useful extensions
 become de facto standards and then they make it into the real
 standard.

You only say this because you are not aware of all the cool dialects
of Lisp that are invented.  The problem is that they rarely leave the
tiny community that uses them, because each community comes up with
its own different cool dialect of Lisp.  So, clearly the issue is not
one of any lack of motivation or people working on Lisp innovations --
it's getting them to sit down together and agree on a standard.

This, of course is a serious problem.  One that is very similar to the
problem with Python vs. Ruby on Rails.  It's not the problem that you are
ascribing to Lisp, however.

|oug

P.S. Besides Haskell is basically a refinement of ML, which is a
dialect of Lisp.

P.P.S. I doubt that any day soon any purely (or even mostly)
functional language is going to gain any sort of popularity outside of
academia.  Maybe 20 years from now, they will, but I wouldn't bet on
it.


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
 P.S. Besides Haskell is basically a refinement of ML, which is a
 dialect of Lisp.

I'd say Haskell and ML are descended from Lisp, just like mammals are
descended from fish.


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Douglas Alan
Michele Simionato [EMAIL PROTECTED] writes:

 Right.  So?  I understand this issue completely and I code
 accordingly.

 What does it mean you 'code accordingly'? IMO the only clean way out
 of this issue is to NOT rely on the garbage collector and to manage
  resource deallocation explicitly, not implicitly.

(1) I don't rely on the refcounter for resources that ABSOLUTELY,
POSITIVELY must be freed before the scope is left.  In the code that
I've worked on, only a small fraction of resources would fall into
this category.  Open files, for instance, rarely do.  For open files,
in fact, I actually want access to them in the traceback for debugging
purposes, so closing them using with would be the opposite of what I
want.

(2) I don't squirrel away references to tracebacks.

(3) If a procedure catches an exception but isn't going to return
quickly, I clear the exception.
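
For example, here is a sketch of rule (1) in practice (the names are
hypothetical):

from __future__ import with_statement   # Python 2.5
import threading

lock = threading.Lock()   # ABSOLUTELY, POSITIVELY must be released

def update(path):
    with lock:           # rule (1): explicit, scope-bound release
        f = open(path)   # an ordinary file: no with, on purpose; if
        return f.read()  # something raises, f stays visible in the
                         # traceback for debugging, and otherwise the
                         # refcounter closes it on return (in CPython)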

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Douglas Alan
Paul Rubin http://[EMAIL PROTECTED] writes:

 Douglas Alan [EMAIL PROTECTED] writes:

 P.S. Besides Haskell is basically a refinement of ML, which is a
 dialect of Lisp.

 I'd say Haskell and ML are descended from Lisp, just like mammals are
 descended from fish.

Hardly -- they all want to share the elegance of lambda calculus,
n'est-ce pas?  Also, ML was originally implemented in Lisp, and IIRC,
at least in early versions, it shared much of Lisp's syntax.

Also, Scheme has a purely functional core (few people stick to it, of
course), and there are purely functional dialects of Lisp.

|oug


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
  I'd say Haskell and ML are descended from Lisp, just like mammals are
  descended from fish.
 
 Hardly -- they all want to share the elegance of lambda calculus,
  n'est-ce pas?  Also, ML was originally implemented in Lisp, and IIRC,
  at least in early versions, it shared much of Lisp's syntax.

Haskell and ML are both based on typed lambda calculus, unlike Lisp
which is based on untyped lambda calculus.  Certainly the most
familiar features of Lisp (dynamic typing, S-expression syntax,
programs as data (Lisp's macro system results from this)) are absent
from Haskell and ML.  Haskell's type system lets it do stuff that
Lisp can't approach.  I'm reserving judgement about whether Haskell is
really practical for application development, but it can do stuff that
no traditional Lisp can (e.g. its concurrency and parallelism stuff,
with correctness enforced by the type system).  It makes it pretty
clear that Lisp has become Blub.

ML's original implementation language is completely irrelevant; after
all Python is still implemented in C.

 Also, Scheme has a purely functional core (few people stick to it, of
 course), and there are purely functional dialects of Lisp.

Scheme has never been purely functional.  It has had mutation since
the beginning.

Hedgehog Lisp (purely functional, doesn't have setq etc.) is really
cute.  I almost used it in an embedded project but that got cancelled
too early.  It seems to me more like a poor man's Erlang, though, than
anything resembling ML.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Douglas Alan
Paul Rubin http://[EMAIL PROTECTED] writes:

 Haskell and ML both evaluate typed lambda calculus, unlike Lisp, which
 is based on untyped lambda calculus.  Certainly the most
 familiar features of Lisp (dynamic typing, S-expression syntax,
 programs as data (Lisp's macro system results from this)) are absent
 from Haskell and ML.

And that is supposed to make them better and more flexible???  The
ideal language of the future will have *optional* manifest typing
along with type-inference, and will have some sort of pramgma to turn
on warnings when variables are forced to become dynamic due to there
not being enough type information to infer the type.  But it will
still allow programming with dynamic typing when that is necessary.

The last time I looked at Haskell, it was still in the stage of being
a language that only an academic could love.  Though, it was certainly
interesting.

 Haskell's type system lets it do stuff that Lisp can't approach.

What kind of stuff?  Compile-time polymorphism is cool for efficiency
and type safety, but doesn't actually provide you with any extra
functionality that I'm aware of.

 I'm reserving judgement about whether Haskell is really practical
 for application development, but it can do stuff that no traditional
 Lisp can (e.g. its concurrency and parallelism stuff, with
 correctness enforced by the type system).  It makes it pretty clear
 that Lisp has become Blub.

Where do you get this idea that the Lisp world does not get such
things as parallelism?  StarLisp was designed for the Connection
Machine by Thinking Machines themselves.  The Connection Machine was
one of the most parallel machines ever designed.  Alas, it was ahead of
its time.

Also, I know a research scientist at CSAIL at MIT who has designed and
implemented a version of Lisp for doing audio and video art.  It was
designed from the ground-up to deal with realtime audio and video
streams as first class objects.  It's actually pretty incredible -- in
just a few lines of code, you can set up a program that displays the
same video multiplied and tiled into a large grid of little video
tiles, but where a different filter or delay is applied to each tile.
This allows for some stunningly strange and interesting video output.
Similar things can be done in the language with music (though if you
did that particular experiment it would probably just sound
cacophonous).

Does that sound like an understanding of concurrency to you?  Yes, I
thought so.

Also, Dylan has optional manifest types and type inference, so the
Lisp community understands some of the benefits of static typing.
(Even MacLisp had optional manifest types, but they weren't there for
safety, but rather for performance.  Using them, you could get
Fortran-level performance out of Lisp, which was quite a feat at the
time.)

 ML's original implementation language is completely irrelevant;
 after all Python is still implemented in C.

Except that in the case of ML, it was mostly just a thin veneer on
Lisp that added a typing system and type inference.

 Also, Scheme has a purely functional core (few people stick to it, of
 course), and there are purely functional dialects of Lisp.

 Scheme has never been purely functional.  It has had mutation since
 the beginning.

I never said that it was purely functional -- I said that it has a
purely functional core.  I.e., all the functions that have side effects
have an ! on their ends (or at least they did when I learned the
language), and there are styles of programming in Scheme that
discourage using any of those functions.

|oug

P.S.  The last time I took a language class (about five or six years
ago), the most interesting languages I thought were descended from
Self, not any functional language.  (And Self, of course, is descended
from Smalltalk, which is descended from Lisp.)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Douglas Alan
Lenard Lindstrom [EMAIL PROTECTED] writes:

 Explicitly clear the exception? With sys.exc_clear?

Yes.  Is there a problem with that?

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-30 Thread Douglas Alan
I wrote:

 P.S.  The last time I took a language class (about five or six years
 ago), the most interesting languages I thought were descended from
 Self, not any functional language.  (And Self, of course, is descended
 from Smalltalk, which is descended from Lisp.)

I think that Cecil is the particular language that I was most thinking
of:

   http://en.wikipedia.org/wiki/Cecil_programming_language

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Michele Simionato
On Jun 29, 6:44 am, Douglas Alan [EMAIL PROTECTED] wrote:

 I've written plenty of Python code that relied on destructors to
 deallocate resources, and the code always worked.

You have been lucky:

$ cat deallocating.py
import logging

class C(object):
    def __init__(self):
        logging.warn('Allocating resource ...')

    def __del__(self):
        # At interpreter shutdown, module globals such as 'logging' may
        # already have been torn down (set to None), so this call fails:
        logging.warn('De-allocating resource ...')
        print 'THIS IS NEVER REACHED!'

if __name__ == '__main__':
    c = C()

$ python deallocating.py
WARNING:root:Allocating resource ...
Exception exceptions.AttributeError: 'NoneType' object has no
attribute 'warn' in <bound method C.__del__ of <__main__.C object at
0xb7b9436c>> ignored

Just because your experience has been positive, you should not
dismiss the opinion of those who clearly have more experience than
you on the subtleties of Python.

 Michele

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Hrvoje Niksic
Douglas Alan [EMAIL PROTECTED] writes:

 I think you overstate your case.  Lispers understand iteration
 interfaces perfectly well, but tend to prefer mapping functions to
 iteration because mapping functions are both easier to code (they
 are basically equivalent to coding generators) and efficient (like
 non-generator-implemented iterators).  The downside is that they are
 not quite as flexible as iterators (which can be hard to code) and
 generators, which are slow.

Why do you think generators are any slower than hand-coded iterators?
Consider a trivial sequence iterator:

$ python -m timeit -s 'l=[1] * 100
class foo(object):
  def __init__(self, l):
    self.l = l
    self.i = 0
  def __iter__(self):
    return self
  def next(self):
    self.i += 1
    try:
      return self.l[self.i - 1]
    except IndexError:
      raise StopIteration
' 'tuple(foo(l))'
1000 loops, best of 3: 173 usec per loop

The equivalent generator is not only easier to write, but also
considerably faster:

$ python -m timeit -s 'l=[1] * 100
def foo(l):
  i = 0
  while 1:
    try:
      yield l[i]
    except IndexError:
      break
    i += 1
' 'tuple(foo(l))'
10000 loops, best of 3: 46 usec per loop
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Chris Mellon
On 6/28/07, Douglas Alan [EMAIL PROTECTED] wrote:
 Chris Mellon [EMAIL PROTECTED] writes:

  Obviously. But there's nothing about the with statement that's
  different than using smart pointers in this regard.

 Sure there is -- smart pointers handle many sorts of situations, while
 with only handles the case where the lifetime of the object
 corresponds to the scope.


The entire point of RAII is that you use objects whose lifetime
corresponds with a scope. Smart pointers are an RAII technique to
manage refcounts, not a resource management technique in and of
themselves.


  To the extent that your code ever worked when you relied on this
  detail, it will continue to work.

 I've written plenty of Python code that relied on destructors to
 deallocate resources, and the code always worked.


This is roughly equivalent to someone saying that they don't bother
initializing pointers to 0 in C, because it's always worked for them.
The fact that it works in certain cases (in the C case, when you're
working in the debug mode of certain compilers or standard libs) does
not mean that code that relies on it working is correct.

  There are no plans to replace pythons refcounting with fancier GC
  schemes that I am aware of.

 This is counter to what other people have been saying.  They have been
 worrying me by saying that the refcounter may go away and so you may
 not be able to rely on predictable object lifetimes in the future.


Well, the official language implementation explicitly warns against
relying on the behavior you've been relying on. And of course, for the
purposes you've been using it for, it'll continue to work even if
Python did eliminate refcounting - soon enough deallocation of
non-time-sensitive resources. So I don't know what you're hollering
about.

You're arguing in 2 directions here. You don't want refcounting to go
away, because you rely on it to close things exactly when there are no
more references. On the other hand, you're claiming that implicit
management and its pitfalls are fine because most of the time you
don't need the resource to be closed in a deterministic manner.

If you're relying on refcounting for timely, guaranteed,
deterministic resource management then your code is *wrong* already,
for the same reason that someone who assumes that uninitialized
pointers in C will be 0 is wrong.

If you're relying on refcounting for soon enough resource management
then it'll continue to work no matter what GC scheme python may or may
not move to.

  Nothing about Pythons memory management has changed. I know I'm
  repeating myself here, but you just don't seem to grasp this
  concept.  Python has *never* had deterministic destruction of
  objects. It was never guaranteed, and code that seemed like it
  benefited from it was fragile.

 It was not fragile in my experience.  If a resource *positively*,
 *absolutely* needed to be deallocated at a certain point in the code
 (and occasionally that was the case), then I would code that way.  But
 that has been far from the typical case for me.


Your experience was wrong, then. It's fragile because it's easy for
external callers to grab refcounts to your objects, and it's easy for
code modifications to cause resources to live longer. If you don't
*care* about that, then by all means, don't control the resource
explicitly. You can continue to do this no matter what - people work
with files like this in Java all the time, for the same reason they do
it in Python. Memory and files are not the end all of resources.

You're arguing against explicit resource management with the argument
that you don't need to manage resources. Can you not see how
ridiculously circular this is?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Michele Simionato [EMAIL PROTECTED] writes:

 I've written plenty of Python code that relied on destructors to
 deallocate resources, and the code always worked.

 You have been lucky:

No I haven't been lucky -- I just know what I'm doing.

 $ cat deallocating.py
 import logging

 class C(object):
     def __init__(self):
         logging.warn('Allocating resource ...')

     def __del__(self):
         logging.warn('De-allocating resource ...')
         print 'THIS IS NEVER REACHED!'

 if __name__ == '__main__':
     c = C()

 $ python deallocating.py
 WARNING:root:Allocating resource ...
 Exception exceptions.AttributeError: 'NoneType' object has no
 attribute 'warn' in <bound method C.__del__ of <__main__.C object at
 0xb7b9436c>> ignored

Right.  So?  I understand this issue completely and I code
accordingly.

 Just because your experience has been positive, you should not
 dismiss the opinion of those who clearly have more experience than
 you on the subtleties of Python.

I don't dismiss their opinion at all.  All I've stated is that for my
purposes I find that the refcounting semantics of Python to be useful,
expressive, and dependable, and that I wouldn't like it one bit if
they were removed from Python.

Those who claim that the refcounting semantics are not useful are the
ones who are dismissing my experience.  (And the experience of
zillions of other Python programmers who have happily been relying on
them.)

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Dennis Lee Bieber [EMAIL PROTECTED] writes:

   LISP and FORTH are cousins...

Not really.  Their only real similarity (other than the similarities
shared by most programming languages) is that they both use a form of
Polish notation.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Hrvoje Niksic [EMAIL PROTECTED] writes:

 Douglas Alan [EMAIL PROTECTED] writes:

 I think you overstate your case.  Lispers understand iteration
 interfaces perfectly well, but tend to prefer mapping functions to
 iteration because mapping functions are both easier to code (they
 are basically equivalent to coding generators) and efficient (like
 non-generator-implemented iterators).  The downside is that they are
 not quite as flexible as iterators (which can be hard to code) and
 generators, which are slow.

 Why do you think generators are any slower than hand-coded iterators?

Generators aren't slower than hand-coded iterators in *Python*, but
that's because Python is a slow language.  In a fast language, such as
a Lisp, generators are like 100 times slower than mapping functions.
(At least they were on Lisp Machines, where generators were
implemented using a more general coroutining mechanism [i.e., stack
groups].  *Perhaps* there would be some opportunities for more
optimization if they had used a less general mechanism.)

CLU, which I believe is the language that invented generators, limited
them to the power of mapping functions (i.e., you couldn't have
multiple generators instantiated in parallel), making them really
syntactic sugar for mapping functions.  The reason for this limitation
was performance.  CLU was a fast language.
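
(To illustrate the distinction in Python terms -- a rough sketch, not
CLU code:)

# Mapping-function style: the producer drives and the consumer is just
# a callback, so only one traversal is in progress at a time.
def for_each(seq, fn):
    for x in seq:
        fn(x)

# Generator style: the consumer drives, so several instantiations can
# be interleaved -- exactly the power that CLU's restriction gave up.
def elements(seq):
    for x in seq:
        yield x

for a, b in zip(elements('abc'), elements('xyz')):
    print a, b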

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Chris Mellon
On 6/29/07, Douglas Alan [EMAIL PROTECTED] wrote:
 Chris Mellon [EMAIL PROTECTED] writes:

  You're arguing against explicit resource management with the argument
  that you don't need to manage resources. Can you not see how
  ridiculously circular this is?

 No.  It is insane to leave files unclosed in Java (unless you know for
 sure that your program is not going to be opening many files) because
 you don't even know that the garbage collector will ever even run, and
 you could easily run out of file descriptors, and hog system
 resources.

 On the other hand, in Python, you can be 100% sure that your files
 will be closed in a timely manner without explicitly closing them, as
 long as you are safe in making certain assumptions about how your code
 will be used.  Such assumptions are called preconditions, which are
 an understood notion in software engineering and by me when I write
 software.


Next time there's one of those "software development isn't really
engineering" debates going on, I'm sure that we'll be able to settle
the argument by pointing out that relying on *explicitly* unreliable
implementation details is defined as engineering by some people.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 You're arguing against explicit resource management with the argument
 that you don't need to manage resources. Can you not see how
 ridiculously circular this is?

No.  It is insane to leave files unclosed in Java (unless you know for
sure that your program is not going to be opening many files) because
you don't even know that the garbage collector will ever even run, and
you could easily run out of file descriptors, and hog system
resources.

On the other hand, in Python, you can be 100% sure that your files
will be closed in a timely manner without explicitly closing them, as
long as you are safe in making certain assumptions about how your code
will be used.  Such assumptions are called preconditions, which are
an understood notion in software engineering and by me when I write
software.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Jean-Paul Calderone
On Fri, 29 Jun 2007 09:56:14 -0400, Douglas Alan [EMAIL PROTECTED] wrote:
Chris Mellon [EMAIL PROTECTED] writes:

 You're arguing against explicit resource management with the argument
 that you don't need to manage resources. Can you not see how
 ridiculously circular this is?

No.  It is insane to leave files unclosed in Java (unless you know for
sure that your program is not going to be opening many files) because
you don't even know that the garbage collector will ever even run, and
you could easily run out of file descriptors, and hog system
resources.

On the other hand, in Python, you can be 100% sure that your files
will be closed in a timely manner without explicitly closing them, as
long as you are safe in making certain assumptions about how your code
will be used.  Such assumptions are called preconditions, which are
an understood notion in software engineering and by me when I write
software.

You realize that Python has exceptions, right?  Have you ever encountered
a traceback object?  Is one of your preconditions that no one will ever
handle an exception raised by your code or by their own code when it is
invoked by yours?

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 On the other hand, in Python, you can be 100% sure that your files
 will be closed in a timely manner without explicitly closing them, as
 long as you are safe in making certain assumptions about how your code
 will be used.  Such assumptions are called preconditions, which are
 an understood notion in software engineering and by me when I write
 software.

 Next time there's one of those "software development isn't really
 engineering" debates going on, I'm sure that we'll be able to settle
 the argument by pointing out that relying on *explicitly* unreliable
 implementation details is defined as engineering by some people.

The proof of the pudding is in the eating.  I've worked on very large
programs that exhibited very few bugs, and ran flawlessly for many
years.  One remotely managed the memory of a space telescope, and the
code was pretty tricky.  I was sure when writing the code that there
would be a number of obscure bugs that I would end up having to pull
my hair out debugging, but it's been running flawlessly for more than
a decade now, with hardly any debugging at all.

Engineering to a large degree is knowing where to dedicate your
efforts.  If you dedicate them to where they are not needed, then you
have less time to dedicate them to where they truly are.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Hrvoje Niksic
Douglas Alan [EMAIL PROTECTED] writes:

  The downside is that they are not quite as flexible as iterators
 (which can be hard to code) and generators, which are slow.

 Why do you think generators are any slower than hand-coded iterators?

 Generators aren't slower than hand-coded iterators in *Python*, but
 that's because Python is a slow language.

But then it should be slow for both generators and iterators.

 *Perhaps* there would be some opportunities for more optimization if
 they had used a less general mechanism.)

Or if the generators were built into the language and directly
supported by the compiler.  In some cases implementing a feature is
*not* a simple case of writing a macro, even in Lisp.  Generators may
well be one such case.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Jean-Paul Calderone [EMAIL PROTECTED] writes:

On the other hand, in Python, you can be 100% sure that your files
will be closed in a timely manner without explicitly closing them, as
long as you are safe in making certain assumptions about how your code
will be used.  Such assumptions are called preconditions, which are
an understood notion in software engineering and by me when I write
software.

 You realize that Python has exceptions, right?

Yes, of course.

 Have you ever encountered a traceback object?

Yes, of course.

 Is one of your preconditions that no one will ever handle an
 exception raised by your code or by their own code when it is
 invoked by yours?

A precondition of much of my Python code is that callers won't
squirrel away large numbers of tracebacks for long periods of time.  I
can live with that.  Another precondition of much of my code is that
the caller doesn't assume that it is thread-safe.  Another
precondition is that the caller doesn't assume that it is likely to
meet real-time constraints.  Another precondition is that the caller
doesn't need my functions to promise not to generate any garbage that
might cause the GC to be invoked.

If I had to write all my code to work well without making *any*
assumptions about what the needs of the caller might be, then my code
would have to be much more complicated, and then I'd spend more effort
making my code handle situations that it won't face for my purposes.
Consequently, I'd have less time to make my software have the
functionality that I actually require.

Regarding, specifically, tracebacks holding onto references to open
files -- have you considered that you may actually *want* to see the
file in the state that it was in when the exception was raised for the
purposes of debugging, rather than having it forcefully closed on you?

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Steve Holden
Douglas Alan wrote:
 Michele Simionato [EMAIL PROTECTED] writes:
 
 I've written plenty of Python code that relied on destructors to
 deallocate resources, and the code always worked.
 
 You have been lucky:
 
 No I haven't been lucky -- I just know what I'm doing.
 
 $ cat deallocating.py
 import logging

 class C(object):
     def __init__(self):
         logging.warn('Allocating resource ...')

     def __del__(self):
         logging.warn('De-allocating resource ...')
         print 'THIS IS NEVER REACHED!'

 if __name__ == '__main__':
     c = C()

 $ python deallocating.py
 WARNING:root:Allocating resource ...
 Exception exceptions.AttributeError: 'NoneType' object has no
 attribute 'warn' in <bound method C.__del__ of <__main__.C object at
 0xb7b9436c>> ignored
 
 Right.  So?  I understand this issue completely and I code
 accordingly.
 
 Just because your experience has been positive, you should not
 dismiss the opinion of those who clearly have more experience than
 you on the subtleties of Python.
 
 I don't dismiss their opinion at all.  All I've stated is that for my
 purposes I find that the refcounting semantics of Python to be useful,
 expressive, and dependable, and that I wouldn't like it one bit if
 they were removed from Python.
 
 Those who claim that the refcounting semantics are not useful are the
 ones who are dismissing my experience.  (And the experience of
 zillions of other Python programmers who have happily been relying on
 them.)
 
 |oug

Python doesn't *have* any refcounting semantics. If you rely on the 
behavior of CPython's memory allocation and garbage collection you run 
the risk of producing programs that won't port to Jython, or IronPython, 
or PyPy, or ...

This is a trade-off that many users *are* willing to make.

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Duncan Booth
Douglas Alan [EMAIL PROTECTED] wrote:

 Is one of your preconditions that no one will ever handle an
 exception raised by your code or by their own code when it is
 invoked by yours?
 
 A precondition of much of my Python code is that callers won't
 squirrel away large numbers of tracebacks for long periods of time.  I
 can live with that.  Another precondition of much of my code is that
 the caller doesn't assume that it is thread-safe.  Another
 precondition is that the caller doesn't assume that it is likely to
 meet real-time constraints.  Another precondition is that the caller
 doesn't need my functions to promise not to generate any garbage that
 might cause the GC to be invoked.

None of that is relevant.

Have you ever seen any code looking roughly like this?

def mainloop():
    while somecondition:
        try:
            dosomestuff()
        except SomeExceptions:
            handletheexception()

Now, imagine somewhere deep inside dosomestuff an exception is raised while 
you have a file open and the exception is handled in mainloop. If the loop 
then continues with a fresh call to dosomestuff the traceback object will 
continue to exist until the next exception is thrown or until mainloop 
returns.

There is no 'squirrelling away' needed here. The point is that it is easy 
to write code which accidentally holds onto tracebacks. It is not 
reasonable to expect the caller to analyse all situations where the 
traceback object could continue to exist.
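
(A small CPython 2 demonstration of the effect; the file name is
arbitrary:)

import sys

def dosomestuff():
    f = open('/dev/null')       # stands in for any scarce resource
    raise RuntimeError('oops')  # the traceback references f's frame

try:
    dosomestuff()
except RuntimeError:
    pass                        # handled -- but not forgotten

tb = sys.exc_info()[2]          # the traceback is still reachable here
print tb.tb_next.tb_frame.f_locals['f'].closed  # False: f is held open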
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Hrvoje Niksic [EMAIL PROTECTED] writes:

 Generators aren't slower than hand-coded iterators in *Python*, but
 that's because Python is a slow language.

 But then it should be slow for both generators and iterators.

Python *is* slow for both generators and iterators.  It's slow for
*everything*, except for cases when you can have most of the work done
within C-coded functions or operations that perform a lot of work
within a single call.  (Or, of course, cases where you are i/o
limited, or whatever.)

 *Perhaps* there would be some opportunities for more optimization if
 they had used a less general mechanism.)

 Or if the generators were built into the language and directly
 supported by the compiler.  In some cases implementing a feature is
 *not* a simple case of writing a macro, even in Lisp.  Generators may
 well be one such case.

You can't implement generators in Lisp (with or without macros)
without support for generators within the Lisp implementation.  This
support was provided as stack groups on Lisp Machines and as
continuations in Scheme.  Both stack groups and continuations are
slow.  I strongly suspect that if they had provided direct support for
generators, rather than indirectly via stack groups and continuations,
that that support would have been slow as well.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Steve Holden [EMAIL PROTECTED] writes:

 Python doesn't *have* any refcounting semantics.

I'm not convinced that Python has *any* semantics at all outside of
specific implementations.  It has never been standardized to the rigor
of your typical barely-readable language standards document.

 If you rely on the behavior of CPython's memory allocation and
 garbage collection you run the risk of producing programs that won't
 port tp Jython, or IronPython, or PyPy, or ...

 This is a trade-off that many users *are* willing to make.

Yes, I have no interest at the moment in trying to make my code
portable between every possible implementation of Python, since I have
no idea what features such implementations may or may not support.
When I code in Python, I'm coding for CPython.  In the future, I may
do some stuff in Jython, but I wouldn't call it Python -- I'd call
it Jython.  When I do code for Jython, I'd be using it to get to
Java libraries that would make my code non-portable to CPython, so
portability here seems to be a red herring.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Duncan Booth [EMAIL PROTECTED] writes:

 A precondition of much of my Python code is that callers won't
 squirrel away large numbers of tracebacks for long periods of time.  I
 can live with that.  Another precondition of much of my code is that
 the caller doesn't assume that it is thread-safe.  Another
 precondition is that the caller doesn't assume that it is likely to
 meet real-time constraints.  Another precondition is that the caller
 doesn't need my functions to promise not to generate any garbage that
 might cause the GC to be invoked.

 None of that is relevant.

Of course it is.  I said "large numbers of tracebacks" up there, and
you promptly ignored that precondition in your subsequent
counterexample.

 Have you ever seen any code looking roughly like this?

 def mainloop():
     while somecondition:
         try:
             dosomestuff()
         except SomeExceptions:
             handletheexception()

Of course.

 Now, imagine somewhere deep inside dosomestuff an exception is
 raised while you have a file open and the exception is handled in
 mainloop. If the loop then continues with a fresh call to
 dosomestuff the traceback object will continue to exist until the
 next exception is thrown or until mainloop returns.

It's typically okay in my software for a single (or a few) files to
remain open for longer than I might expect.  What it couldn't handle
is running out of file descriptors, or the like.  (Just like it
couldn't handle running out of memory.)  But that's not going to
happen with your counterexample.

If I were worried about a file or two remaining open too long, I'd
clear the exception in the mainloop above, after handling it.  Python
lets you do that, doesn't it?

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Lenard Lindstrom
Douglas Alan wrote:
 
 [I]n Python, you can be 100% sure that your files
 will be closed in a timely manner without explicitly closing them, as
 long as you are safe in making certain assumptions about how your code
 will be used.  Such assumptions are called preconditions, which are
 an understood notion in software engineering and by me when I write
 software.
 

So documenting an assumption is more effective than removing the 
assumption using a with statement?

--
Lenard Lindstrom
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Douglas Alan
Lenard Lindstrom [EMAIL PROTECTED] writes:

 Douglas Alan wrote:

 [I]n Python, you can be 100% sure that your files
 will be closed in a timely manner without explicitly closing them, as
 long as you are safe in making certain assumptions about how your code
 will be used.  Such assumptions are called preconditions, which are
 an understood notion in software engineering and by me when I write
 software.

 So documenting an assumption is more effective than removing the
 assumption using a with statement?

Once again I state that I have nothing against with statements.  I
used it all the time ages ago in Lisp.

But (1) try/finally blocks were not to my liking for this sort of
thing because they are verbose and I think error-prone for code
maintenance.  I and many others prefer relying on the refcounter for
file closing over the try/finally solution.  Consequently, using the
refcounter for such things is a well-entrenched and succinct idiom.
with statements are a big improvement over try/finally, but for
things like file closing, it's six of one, half dozen of the other
compared against just relying on the refcounter.

(2) with statements do not work in all situations because often you
need to have an open file (or what have you) survive the scope in
which it was opened.  You may need to have multiple objects be able to
read and/or write to the file.  And yet, the file may not want to be
kept open for the entire life of the program.  If you have to decide
when to explicitly close the file, then you end up with the same sort
of modularity issues as when you have to free memory explicitly.  The
refcounter handles these sorts of situations with aplomb.

(3) Any code that is saving tracebacks should assume that it is likely
to cause trouble, unless it is using code that is explicitly
documented to be robust in the face of this, just as any code that
wants to share objects between multiple threads should assume that
this is likely to cause trouble, unless it is using code that is
explicitly documented to be robust in the face of this.

(4) Any code that catches exceptions should either return soon or
clear the exception.  If it doesn't, the problem is not with the
callee, but with the caller.

(5) You don't necessarily want a function that raises an exception to
deallocate all of its resources before raising the exception, since
you may want access to these resources for debugging, or what have you.
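
(To make points (1) and (2) above concrete, a minimal CPython sketch;
the class names and path are invented for illustration:)

# (1) The entrenched idiom: the temporary's refcount hits zero as soon
# as read() returns, so CPython closes the file immediately.
data = open('/etc/hostname').read()

# (2) A file that must outlive the scope that opened it, shared by two
# consumers; with can't express this, but the refcounter copes fine.
class Parser(object):
    def __init__(self, f):
        self.f = f

class Indexer(object):
    def __init__(self, f):
        self.f = f

def make_tools(path):
    f = open(path)
    return Parser(f), Indexer(f)

p, i = make_tools('/etc/hostname')
del p                           # file still open: i holds a reference
del i                           # refcount hits zero; CPython closes it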

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Lenard Lindstrom
Douglas Alan wrote:
 Lenard Lindstrom [EMAIL PROTECTED] writes:
 
 Douglas Alan wrote:
 
 [I]n Python, you can be 100% sure that your files
 will be closed in a timely manner without explicitly closing them, as
 long as you are safe in making certain assumptions about how your code
 will be used.  Such assumptions are called preconditions, which are
 an understood notion in software engineering and by me when I write
 software.
 
 So documenting an assumption is more effective than removing the
 assumption using a with statement?
 
 Once again I state that I have nothing against with statements.  I
 used it all the time ages ago in Lisp.
 

Sorry if I implied that. I assumed it would be clear I was only 
referring to the specific case of implicitly closing files using 
reference counting.

 But (1) try/finally blocks were not to my liking for this sort of
 thing because they are verbose and I think error-prone for code
 maintenance.  I and many others prefer relying on the refcounter for
 file closing over the try/finally solution.  Consequently, using the
 refcounter for such things is a well-entrenched and succinct idiom.
 with statements are a big improvement over try/finally, but for
 things like file closing, it's six of one, half dozen of the other
 compared against just relying on the refcounter.
 

I agree that try/finally is not a good way to handle resources.

 (2) with statements do not work in all situations because often you
 need to have an open file (or what have you) survive the scope in
 which it was opened.  You may need to have multiple objects be able to
 read and/or write to the file.  And yet, the file may not want to be
 kept open for the entire life of the program.  If you have to decide
 when to explicitly close the file, then you end up with the same sort
 of modularity issues as when you have to free memory explicitly.  The
 refcounter handles these sorts of situations with aplomb.
 

Hmm. I come from a C background, so I don't normally think of a file
object as leading a nomadic life. I automatically associate a file with
a home scope that is responsible for opening and closing it. That scope
could be defined by a function or a module. But I'm not a theorist, so
I can't make any general claims. I can see, though, how the ref count
could close a file sooner than if one waits until returning to some
ultimate enclosing scope.

 (3) Any code that is saving tracebacks should assume that it is likely
 to cause trouble, unless it is using code that is explicitly
 documented to be robust in the face of this, just as any code that
 wants to share objects between multiple threads should assume that
 this is likely to cause trouble, unless it is using code that is
 explicitly documented to be robust in the face of this.
 

Luckily there is not much need to save tracebacks.

 (4) Any code that catches exceptions should either return soon or
 clear the exception.  If it doesn't, the problem is not with the
 callee, but with the caller.
 

Explicitly clear the exception? With sys.exc_clear?

 (5) You don't necessarily want a function that raises an exception to
 deallocate all of its resources before raising the exception, since
 you may want access to these resources for debugging, or what have you.
 

No problem:

>>> class MyFile(file):
...     def __exit__(self, exc_type, exc_val, exc_tb):
...         if exc_type is None:
...             file.__exit__(self, None, None, None)
...         return False
...
>>> del f
Traceback (most recent call last):
  File "<pyshell#36>", line 1, in <module>
    del f
NameError: name 'f' is not defined
>>> try:
...     with MyFile("something", "w") as f:
...         raise StandardError()
... except StandardError:
...     print "Caught"
...
Caught
>>> f.closed
False


But that is not very imaginative:

>>> class MyFile(file):
...     def __exit__(self, exc_type, exc_val, exc_tb):
...         if exc_type is not None:
...             self.my_last_posn = self.tell()
...         return file.__exit__(self, exc_type, exc_val, exc_tb)
...
>>> del f
Traceback (most recent call last):
  File "<pyshell#44>", line 1, in <module>
    del f
NameError: name 'f' is not defined
>>> try:
...     with MyFile("something", "w") as f:
...         f.write("A line of text\n")
...         raise StandardError()
... except StandardError:
...     print "Caught"
...
Caught
>>> f.closed
True
>>> f.my_last_posn
16L

---
Lenard Lindstrom
[EMAIL PROTECTED]


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-29 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
 But that's a library issue, not a language issue.  The technology
 exists completely within Lisp to accomplish these things, and most
 Lisp programmers even know how to do this, as application frameworks
 in Lisp often do this kind.  The problem is getting anything put into
 the standard.  Standardizing committees just suck.

Lisp is just moribund, is all.  Haskell has a standardizing committee
and yet there are lots of implementations taking the language in new
and interesting directions all the time.  The most useful extensions
become de facto standards and then they make it into the real
standard.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread John Nagle
Douglas Alan wrote:
 Chris Mellon [EMAIL PROTECTED] writes:
On 6/27/07, Douglas Alan [EMAIL PROTECTED] wrote:

This totally misrepresents the case. The with statement and the
context manager is a superset of the RAII functionality.
 
 
 No, it isn't.  C++ allows you to define smart pointers (one of many
 RAII techniques), which can use refcounting or other tracking
 techniques.  Refcounting smart pointers are part of Boost and have
 made it into TR1, which means they're on track to be included in the
 next standard library.  One need not have waited for Boost, as they can
 be implemented in about a page of code.
 
 The standard library also has auto_ptr, which is a different sort of
 smart pointer, which allows for somewhat fancier RAII than
 scope-based.

Smart pointers in C++ never quite work.  In order to do anything
with the pointer, you have to bring it out as a raw pointer, which makes
the smart pointer unsafe.  Even auto_ptr, after three standardization
attempts, is still unsafe.

Much handwaving around this problem comes from the Boost crowd, but
in the end, you just can't do safe reference counted pointers via
C++ templates. It requires language support.

This is off topic, though, for Python.  If anybody cares,
look at my postings in comp.lang.c++.std for a few years back.

Python is close to getting it right, but not quite.  Python destructors
aren't airtight; you can pass the self pointer out of a destructor, which
re-animates the object.  This generally results in undesirable behavior.
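
(A hedged CPython 2 sketch of such re-animation; the class name is
invented:)

saved = None

class Phoenix(object):
    def __del__(self):
        global saved
        saved = self    # self escapes its own destructor: the refcount
                        # climbs back up from zero, re-animating it

p = Phoenix()
del p                   # __del__ runs here ...
print saved             # ... yet the instance is alive again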

Microsoft's managed C++ has the same problem.  They explicitly addressed
re-animation and consider the possibility that a destructor can be called
twice.  To see the true horror of this approach, read

http://www.codeproject.com/managedcpp/cppclidtors.asp

Microsoft Managed C++ ended up having destructors, finalizers,
explicit destruction, scope-based destruction of locals, re-animation,
and nondeterministic garbage collection, all in one language.
(One might suspect that this was intended to drive people to C#.)

In Python, if you have reference loops involving objects
with destructors, the objects don't get reclaimed at all.  You don't
want to call destructors from the garbage collector.  That creates
major problems, like introducing unexpected concurrency and weird
destructor ordering issues.

Much of the problem is that Python, like Perl and Java, started out
with strong pointers only, and, like Perl and Java, weak pointers
were added as afterthoughts.  Once you have weak pointers, you can
do it right.  Because weak pointers went in late, there's a legacy
code problem, mostly in GUI libraries.

One right answer would be a pure reference counted system where
loops are outright errors, and you must use weak pointers for backpointers.
I write Python code in that style, and run with GC in debug mode,
to detect leaks. I modified BeautifulSoup to use weak pointers
where appropriate, and passed those patches back to the author.
When all or part of a tree is detached, it goes away immediately,
rather than hanging around until the next GC cycle.  The general
idea is that pointers toward the leaves of trees should be strong
pointers, and pointers toward the root should be weak pointers.
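
(A minimal Python sketch of that convention, using the standard
weakref module; the Node class is invented for illustration:)

import weakref

class Node(object):
    def __init__(self, parent=None):
        self.children = []          # strong pointers toward the leaves
        self._parent = weakref.ref(parent) if parent is not None else None
        if parent is not None:
            parent.children.append(self)

    def parent(self):
        return self._parent() if self._parent is not None else None

root = Node()
leaf = Node(root)
del root             # no strong cycle, so the subtree dies immediately
print leaf.parent()  # None: the weak backpointer has gone dead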

For a truly sound system, you'd want to detect reference loops
at the moment they're created, and handle them as errors.  This
is quite possible, although inefficient for certain operations.
Reversing a linked list that has depth counts is expensive.  But then,
Python lists aren't implemented as linked lists; they're variable sized arrays
with one reference count for the whole array.  So, in practice,
the cases where maintaining depth counts gets expensive
are rare.

Then you'd want a way to limit the scope of self within a destructor,
so that you can't use it in a context which could result in it
outliving the destruction of the object.  This is a bit tricky,
and might require some extra checking in destructors.
The basic idea is that once the reference count has gone to 0,
anything that increments it is a serious error.  (As mentioned
above, Microsoft Managed C++ allowed re-animation, and it's
clear from that experience that you don't want to go there.)

With those approaches, destructors
would be sound, order of destruction would be well defined, and
the "here be dragons" notes about destructors could come out of
the documentation.

With that, we wouldn't need with.  Or a garbage collector.

If you like minimalism, this is the way to go.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
  Before the with statement, you could do the same thing but you
  needed nested try/finally blocks
 
 No, you didn't -- you could just encapsulate the resource acquisition
 into an object and allow the destructor to deallocate the resource.

But without the try/finally blocks, if there is an unhandled
exception, it passes a traceback object to higher levels of the
program, and the traceback contains a pointer to the resource, so you
can't be sure the resource will ever be freed.  That was part of the
motivation for the with statement.

 And how's that?  I should think that modern architectures would have
 an efficient way of adding and subtracting from an int atomically.  

I'm not sure.  In STM implementations it's usually done with a
compare-and-swap instruction (CMPXCHG on the x86) so you read the old
integer, increment a local copy, and CMPXCHG the copy into the object,
checking the swapped-out value to make sure that nobody else changed
the object between the copy and the swap (rollback and try again if
someone has).  It might be interesting to wrap Python refcounts that
way, but really, Python should move to a compacting GC of some kind,
so the heap doesn't get all fragmented.  Cache misses are a lot more
expensive now than they were in the era when CPython was first
written.
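
(A sketch of that loop; cas() here is a toy stand-in for the hardware
CMPXCHG, built from a lock purely so the example runs -- the real thing
would be a single atomic instruction:)

import threading

_lock = threading.Lock()

def cas(obj, name, old, new):
    # pretend-atomic compare-and-swap: succeeds only if nobody else
    # has changed the field since we read it
    _lock.acquire()
    try:
        if getattr(obj, name) == old:
            setattr(obj, name, new)
            return True
        return False
    finally:
        _lock.release()

def atomic_incref(obj):
    while True:
        old = obj.refcount                      # read the old count
        if cas(obj, 'refcount', old, old + 1):  # try to swap in old + 1
            return old + 1                      # nobody raced us: done
        # otherwise another thread changed the count between the read
        # and the swap: roll back (discard old) and try again

class Obj(object):
    refcount = 0

print atomic_incref(Obj())  # 1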

 If they don't, I have a hard time seeing how *any* multi-threaded
 applications are going to be able to make good use of multiple processors.

They carefully manage the number of mutable objects shared between
threads is how.  A concept that doesn't mix with CPython's use of
reference counts.

 Yes, there is.  [Lisp] it's a very flexible language that can adapt
 to the needs of projects that need to push the boundaries of what
 computer programmers typically do.

Really, if they used better languages they'd be able to operate within
boundaries instead of pushing them.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Antoon Pardon
On 2007-06-23, Steven D'Aprano [EMAIL PROTECTED] wrote:
 On Fri, 22 Jun 2007 13:21:14 -0400, Douglas Alan wrote:

 I.e., I could write a new object system for Lisp faster than I could
 even begin to fathom the internal of CPython.  Not only that, I have
 absolutely no desire to spend my valuable free time writing C code.
 I'd much rather be hacking in Python, thank you very much.

 Which is very valuable... IF you care about writing a new object system. I
 don't, and I think most developers don't, which is why Lisp-like macros
 haven't taken off.

I find this a rather sad kind of argument. It seems to imply that
python is only for problems that are rather common or similar to
those. If most people don't care about the kind of problem you
are working on, it seems from this kind of argument that python
is not the language you should be looking at.

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Chris Mellon
On 6/27/07, Douglas Alan [EMAIL PROTECTED] wrote:
 Chris Mellon [EMAIL PROTECTED] writes:

  On 6/27/07, Douglas Alan [EMAIL PROTECTED] wrote:

  The C++ folks feel so strongly about this, that they refuse to provide
  finally, and insist instead that you use destructors and RAII to do
  resource deallocation.  Personally, I think that's taking things a bit
  too far, but I'd rather it be that way than lose the usefulness of
  destructors and have to use when or finally to explicitly
  deallocate resources.

  This totally misrepresents the case. The with statement and the
  context manager is a superset of the RAII functionality.

 No, it isn't.  C++ allows you to define smart pointers (one of many
 RAII techniques), which can use refcounting or other tracking
 techniques.  Refcounting smart pointers are part of Boost and have
 made it into TR1, which means they're on track to be included in the
 next standard library.  One need not have waited for Boost, as they can
 be implemented in about a page of code.

 The standard library also has auto_ptr, which is a different sort of
 smart pointer, which allows for somewhat fancier RAII than
 scope-based.


Obviously. But there's nothing about the with statement that's
different than using smart pointers in this regard. I take it back,
there's one case - when you need only one scope in a function, with
requires an extra block while C++ style RAII allows you to avoid it.

  It doesn't overload object lifetimes, rather it makes the intent
  (code execution upon entrance and exit of a block) explicit.

 But I don't typically wish for this sort of intent to be made
 explicit.  TMI!  I used with for *many* years in Lisp, since this is
 how non-memory resource deallocation has been dealt with in Lisp since
 the dawn of time.  I can tell you from many years of experience that
 relying on Python's refcounter is superior.


I question the relevance of your experience, then. Refcounting is fine
for memory, but as you mention below, memory is only one kind of
resource and refcounting is not necessarily the best technique for all
resources. Java has the same problem, where you've got GC so you don't
have to worry about memory, but no tools for managing non-memory
resources.

 Shouldn't you be happy that there's something I like more about Python
 than Lisp?


I honestly don't care if anyone prefers Python over Lisp or vice
versa. If you like Lisp, you know where it is.

  Nobody in their right mind has ever tried to get rid of explicit
  resource management - explicit resource management is exactly what you
  do every time you create an object, or you use RAII, or you open a
  file.

 This just isn't true.  For many years I have not had to explicitly
 close files in Python.  Nor have I had to do so in C++.  They have
 been closed for me implicitly.  With is not implicit -- or at least
 not nearly as implicit as was previous practice in Python, or as is
 current practice in C++.


You still don't have to manually close files. But you cannot, and
never could, rely on them being closed at a given time unless you did
so. If you need a file to be closed in a deterministic manner, then
you must close it explicitly. The with statement is not implicit and
never has been. Implicit resource management is *insufficient* for
the general resource management case. It works fine for memory, it's
okay for files (until it isn't), it's terrible for thread locks and
network connections and database transactions. Those things require
*explicit* resource management.
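
(For instance, a minimal sketch of the explicit style in question,
using a thread lock; assumes Python 2.5's future import:)

from __future__ import with_statement
import threading

lock = threading.Lock()
shared = {}

def update(key, value):
    with lock:               # acquired here; released deterministically
        shared[key] = value  # on exit, even if the body raises

update('answer', 42)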

  *Manual* memory management, where the tracking of references and
  scopes is placed upon the programmer, is what people are trying to
  get rid of and the with statement contributes to that goal, it
  doesn't detract from it.

 As far as I am concerned, memory is just one resource amongst many,
 and the programmer's life should be made easier in dealing with all
 such resources.


Which is exactly what the with statement is for.

  Before the with statement, you could do the same thing but you
  needed nested try/finally blocks

 No, you didn't -- you could just encapsulate the resource acquisition
 into an object and allow the destructor to deallocate the resource.


If you did this in Python, your code was wrong. You were coding C++ in
Python. Don't do it.

  RAII is a good technique, but don't get caught up on the
  implementation details.

 I'm not -- I'm caught up in the loss of power and elegance that will
 be caused by deprecating the use of destructors for resource
 deallocation.


Python has *never had this*. This never worked. It could seem to work
if you carefully, manually, inspected your code and managed your
object lifetimes. This is much more work than the with statement.

To the extent that your code ever worked when you relied on this
detail, it will continue to work. There are no plans to replace
pythons refcounting with fancier GC schemes that I am aware of.

  The with statement does exactly the same thing, but is 

Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Andy Freeman
On Jun 27, 11:41 pm, John Nagle [EMAIL PROTECTED] wrote:
 One right answer would be a pure reference counted system where
 loops are outright errors, and you must use weak pointers for backpointers.
 ... The general
 idea is that pointers toward the leaves of trees should be strong
 pointers, and pointers toward the root should be weak pointers.

While I agree that weak pointers are good and can not be an
afterthought, I've written code where "back" changed dynamically, and
I'm pretty sure that Nagle has as well.

Many programs with circular lists have an outside pointer to the
current element, but the current element changes.  All of the links
implementing the list have to be strong enough to keep all of the list
alive.
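
(A small Python sketch of such a ring; every link must stay strong, or
part of the ring could be collected out from under us:)

class Ring(object):
    def __init__(self, value):
        self.value = value
        self.next = self            # strong link (initially to itself)

    def insert_after(self, value):
        node = Ring(value)
        node.next = self.next
        self.next = node
        return node

current = Ring(1)
current.insert_after(2)             # ring: 1 -> 2 -> 1 -> ...
current = current.next              # rotate the one outside pointer; the
                                    # old node survives only via the
                                    # ring's internal strong links
print current.value                 # 2
# Note that the ring is itself a reference cycle -- exactly what a pure
# refcounting scheme would have to forbid or leak.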

Yes, one can implement a circular list as a vector with a current
index, but that has space and/or time consequences.  It's unclear that
that approach generalizes for more complicated structures.  (You can't
just pull all of the links out into such lists.)

In short, while disallowing loops with strong pointers is a right
answer, it isn't always a right answer, so it can't be the only
answer.

-andy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread John Nagle
Andy Freeman wrote:
 On Jun 27, 11:41 pm, John Nagle [EMAIL PROTECTED] wrote:

 While I agree that weak pointers are good and can not be an
  afterthought, I've written code where "back" changed dynamically, and
 I'm pretty sure that Nagle has as well.

That sort of thing tends to show up in GUI libraries, especially
ones that have event ordering issues.  It's a tough area.

 Many programs with circular lists have an outside pointer to the
 current element, but the current element changes.  All of the links
 implementing the list have to be strong enough to keep all of the list
 alive.
 Yes, one can implement a circular list as a vector with a current
 index, but that has space and/or time consequences.  

We used to see things like that back in the early 1980s, but today,
worrying about the space overhead associated with keeping separate
track of ownership and position in a circular buffer chain isn't
a big deal.  I last saw that in a FireWire driver, and even there,
it wasn't really necessary.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Andy Freeman
On Jun 28, 1:09 pm, John Nagle [EMAIL PROTECTED] wrote:
 Andy Freeman wrote:
  On Jun 27, 11:41 pm, John Nagle [EMAIL PROTECTED] wrote:
  While I agree that weak pointers are good and can not be an
   afterthought, I've written code where "back" changed dynamically, and
  I'm pretty sure that Nagle has as well.

 That sort of thing tends to show up in GUI libraries, especially
 ones that have event ordering issues.  It's a tough area.

It shows up almost anywhere one needs to handle recurring operations.
It also shows up in many dynamic search structures.

  Yes, one can implement a circular list as a vector with a current
  index, but that has space and/or time consequences.  

 We used to see things like that back in the early 1980s, but today,
 worrying about the space overhead associated with keeping separate
 track of ownership and position in a circular buffer chain isn't
 a big deal.

Insert and delete can be a big deal.  O(1) is nice.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Steve Holden
Douglas Woodrow wrote:
 On Wed, 27 Jun 2007 01:45:44, Douglas Alan [EMAIL PROTECTED] wrote
 A chaque son gout
 
 I apologise for this irrelevant interruption to the conversation, but 
 this isn't the first time you've written that.
 
 The word "chaque" is not a pronoun.
 
 http://grammaire.reverso.net/index_alpha/Fiches/Fiche220.htm

Right, he probably means "Chaqu'un à son gout" (roughly, "each to his
own taste").

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Douglas Alan
Steve Holden [EMAIL PROTECTED] writes:

 Douglas Woodrow wrote:

 On Wed, 27 Jun 2007 01:45:44, Douglas Alan [EMAIL PROTECTED] wrote

 A chaque son gout

 I apologise for this irrelevant interruption to the conversation,
 but this isn't the first time you've written that.  The word
 chaque is not a pronoun.

 http://grammaire.reverso.net/index_alpha/Fiches/Fiche220.htm

 Right, he probably means Chaqu'un à son gout (roughly, each to his
 own taste).

Actually, it's chacun.  And the à may precede the chacun.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Steve Holden
Douglas Alan wrote:
 Steve Holden [EMAIL PROTECTED] writes:
 
 Douglas Woodrow wrote:
 
 On Wed, 27 Jun 2007 01:45:44, Douglas Alan [EMAIL PROTECTED] wrote
 
 A chaque son gout
 
 I apologise for this irrelevant interruption to the conversation,
 but this isn't the first time you've written that.  The word
 chaque is not a pronoun.
 
 http://grammaire.reverso.net/index_alpha/Fiches/Fiche220.htm
 
 Right, he probably means Chaqu'un à son gout (roughly, each to his
 own taste).
 
 Actually, it's chacun.  And the à may precede the chacun.
 
 |oug

http://everything2.com/?node_id=388997 is clearly not authoritative, as 
the literal translation about which it speaks is far from literal (it 
mistakes the preposition "à" (to) for "a", the present tense of the verb 
"to have"). I suppose the literal translation is "Each one to his own 
taste". It does offer some support to my theory, however. So I'll quote 
the damned thing anyway.

chacun is an elision of the two words Chaque (each) and un (one), 
and use of those two words is at least equally correct, though where it 
stands in modern usage I must confess I have no idea. The word order you 
suggest would be less likely to be used by a peasant than a lawyer. 
Being a peasant, I naturally used the other wording.

IANA linguist-ical-ly y'rs  - steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden
--- Asciimercial --
Get on the web: Blog, lens and tag the Internet
Many services currently offer free registration
--- Thank You for Reading -

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Douglas Alan
Steve Holden [EMAIL PROTECTED] writes:

 Actually, it's chacun.  And the à may precede the chacun.

 |oug

 chacun is an elision of the two words Chaque (each) and un
 (one), and use of those two words is at least equally correct, though
 where it stands in modern usage I must confess I have no idea.

Google can answer that: 158,000 hits for chaqu'un, 57 million for
chacun.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-28 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 Obviously. But there's nothing about the with statement that's
 different than using smart pointers in this regard.

Sure there is -- smart pointers handle many sorts of situations, while
with only handles the case where the lifetime of the object
corresponds to the scope.
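
A toy illustration of the difference under CPython (the Handle class
is hypothetical): the object's lifetime follows the last reference,
not any lexical block, which is what with would tie it to:

    class Handle:
        def __del__(self):
            print 'released'

    def make():
        h = Handle()
        return h        # the lifetime escapes the creating scope, so
                        # a with-block inside make() would be too early

    x = make()
    y = x
    del x               # still alive: y holds a reference
    del y               # refcount hits zero; 'released' prints here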

 But I don't typically wish for this sort of intent to be made
 explicit.  TMI!  I used with for *many* years in Lisp, since this
 is how non-memory resource deallocation has been dealt with in Lisp
 since the dawn of time.  I can tell you from many years of
 experience that relying on Python's refcounter is superior.

 I question the relevance of your experience, then.

Gee, thanks.

 Refcounting is fine for memory, but as you mention below, memory is
 only one kind of resource and refcounting is not necessarily the
 best technique for all resources.

I never said that it is the best technique for *all* resources.  Just
the most typical ones.

 This just isn't true.  For many years I have not had to explicitly
 close files in Python.  Nor have I had to do so in C++.  They have
 been closed for me implicitly.  With is not implicit -- or at least
 not nearly as implicit as was previous practice in Python, or as is
 current practice in C++.

 You still don't have to manually close files. But you cannot, and
 never could, rely on them being closed at a given time unless you
 did so.

You could for most intents and purposes.

 If you need a file to be closed in a deterministic manner, then you
 must close it explicitly.

You don't typically need them to be closed in a completely fool-proof
deterministic fashion.  If some other code catches your exceptions and
holds onto the traceback, then it must know that doing so can delay a
few file-closings, or the like.

 The with statement is not implicit and never has been. Implicit
 resource management is *insufficient* for the general resource
 management case. It works fine for memory, it's okay for files
 (until it isn't), it's terrible for thread locks and network
 connections and database transactions. Those things require
 *explicit* resource management.

Yes, I agree there are certain situations in which you certainly want
with, or something like it.  I've never disagreed with that
assertion at all.  I just don't agree that for most Python code this
is the *typical* case.

 To the extent that your code ever worked when you relied on this
 detail, it will continue to work.

I've written plenty of Python code that relied on destructors to
deallocate resources, and the code always worked.

 There are no plans to replace pythons refcounting with fancier GC
 schemes that I am aware of.

This is counter to what other people have been saying.  They have been
worrying me by saying that the refcounter may go away and so you may
not be able to rely on predictable object lifetimes in the future.

 Nothing about Pythons memory management has changed. I know I'm
 repeating myself here, but you just don't seem to grasp this
 concept.  Python has *never* had deterministic destruction of
 objects. It was never guaranteed, and code that seemed like it
 benefited from it was fragile.

It was not fragile in my experience.  If a resource *positively*,
*absolutely* needed to be deallocated at a certain point in the code
(and occasionally that was the case), then I would code that way.  But
that has been far from the typical case for me.

 Purify tells me that I know more about the behavior of my code than
 you do: I've *never* had any memory leaks in large C++ programs that
 used refcounted smart pointers that were caused by cycles in my data
 structures that I didn't know about.

 I'm talking about Python refcounts. For example, a subtle resource
 leak that has caught me before is that tracebacks hold references to
 locals in the unwound stack.

Yes, I'm aware of that.  Most programs don't hold onto tracebacks for
long.  If you are working with software that does, then, I agree, that
sometimes one will have to code things more precisely.

 If you relied on refcounting to clean up a resource, and you needed
 exception handling, the resource wasn't released until *after* the
 exception unwound, which could be a problem. Also holding onto
 tracebacks for latter processing (not uncommon in event based
 programs) would artificially extend the lifetime of the resource. If
 the resource you were managing was a thread lock this could be a
 real problem.

Right -- I've always explicitly managed thread locks.
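
For what it's worth, a minimal sketch of the traceback pitfall Chris
describes (class and function names hypothetical):

    import sys

    class Resource:
        def __del__(self):
            print 'resource released'

    def work():
        r = Resource()
        raise RuntimeError('oops')

    try:
        work()
    except RuntimeError:
        tb = sys.exc_info()[2]   # the traceback holds work()'s frame,
                                 # and that frame holds the local r
    del tb                       # only now can the Resource be reclaimed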

 I really have no desire to code in C, thank you.  I'd rather be coding
 in Python.  (Hence my [idle] desire for macros in Python, so that I
 could do even more of my work in Python.)

 In this particular conversation, I really don't think that theres much
 to say beyond put up or shut up.

I think your attitude here is unPythonic.

 The experts in the field have said that it's not practical.

Guido has occasionally said that he might consider a macro facility
for a future 

Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Douglas Alan
Paul Rubin http://[EMAIL PROTECTED] writes:

 Douglas Alan [EMAIL PROTECTED] writes:

  In the Maclisp era functions like mapcar worked on lists, and
  generated equally long lists in memory.

 I'm aware, but there were various different mapping functions.  map,
 as opposed to mapcar didn't return any values at all, and so you had
 to rely on side effects with it.

 The thing is there was no standard way in Maclisp to write something
 like Python's count function and map over it.  This could be done in
 Scheme with streams, of course.

I'm not sure that you can blame MacLisp for not being object-oriented.
The idea hadn't even been invented yet when MacLisp was implemented
(unless you count Simula).  If someone went to make an OO version of
MacLisp, I'm sure they'd get all this more or less right, and people
have certainly implemented dialects of Lisp that are consistently OO.

 Right -- I wrote iterators, not generators.

 Python iterators (the __iter__ methods on classes) are written with
 yield statements as often as not.

I certainly agree that iterators can be implemented with generators,
but generators are a language feature that are impossible to provide
without deep language support, while iterators are just an OO
interface that any OO language can provide.  Though without a good
macro facility the syntax to use them may not be so nice.

 That's not ugly.  The fact that CPython has a reference-counting GC
 makes the lifetime of object predictable, which means that like in
 C++, and unlike in Java, you can use destructors to good effect.  This
 is one of the huge boons of C++.  The predictability of lifespan makes
 the language more expressive and powerful.  The move to deprecate
 relying on this feature in Python is a bad thing, if you ask me, and
 removes one of the advantages that Python had over Lisp.

 No that's wrong, C++ has no GC at all, reference counting or
 otherwise, so its destructors only run when the object is manually
 released or goes out of scope.

Right, but implementing generic reference-counted smart pointers is
about a page of code in C++, and nearly every large C++ application
I've seen uses such things.

 Python (as of 2.5) does that using the new with statement, which
 finally makes it possible to escape from that losing GC-dependent
 idiom.  The with statement handles most cases that C++ destructors
 normally handle.

Gee, that's back to the future with 1975 Lisp technology.  Destructors
are a much better model for dealing with such things (see not *all*
good ideas come from Lisp -- a few come from C++) and I am dismayed
that Python is deprecating their use in favor of explicit resource
management.  Explicit resource management means needlessly verbose
code and more opportunity for resource leaks.

The C++ folks feel so strongly about this, that they refuse to provide
finally, and insist instead that you use destructors and RAII to do
resource deallocation.  Personally, I think that's taking things a bit
too far, but I'd rather it be that way than lose the usefulness of
destructors and have to use when or finally to explicitly
deallocate resources.

 Python object lifetimes are in fact NOT predictable because the ref
 counting doesn't (and can't) pick up cyclic structure.

Right, but that doesn't mean that 99.9% of the time, the programmer
can't immediately tell that cycles aren't going to be an issue.

I love having a *real* garbage collector, but I've also dealt with C++
programs that are 100,000+ lines long and I wrote plenty of Python
code before it had a real garbage collector, and I never had any
problem with cyclic data structures causing leaks.  Cycles are really
not all that common, and when they do occur, it's usually not very
difficult to figure out where to add a few lines to a destructor to
break the cycle.

 And the refcounts are a performance pig in multithreaded code,
 because of how often they have to be incremented and updated.

I'm willing to pay the performance penalty to have the advantage of
not having to use constructs like when.

Also, I'm not convinced that it has to be a huge performance hit.
Some Lisp implementations had a "1, 2, 3, many" (or something like that)
reference-counter for reclaiming short-lived objects.  This bypassed
the real GC and was considered a performance optimization.  (It was
probably on a Lisp Machine, though, where they had special hardware to
help.)

 That's why CPython has the notorious GIL (a giant lock around the
 whole interpreter that stops more than one interpreter thread from
 being active at a time), because putting locks on the refcounts
 (someone tried in the late 90's) to allow multi-cpu parallelism
 slows the interpreter to a crawl.

All due to the ref-counter?  I find this really hard to believe.
People write multi-threaded code all the time in C++ and also use
smart pointers at the same time.  I'm sure they have to be a bit
careful, but they certainly don't require a GIL.

I *would* believe that getting rid of 

Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Douglas Woodrow
On Wed, 27 Jun 2007 01:45:44, Douglas Alan [EMAIL PROTECTED] wrote
A chaque son gout

I apologise for this irrelevant interruption to the conversation, but 
this isn't the first time you've written that.

The word chaque is not a pronoun.

http://grammaire.reverso.net/index_alpha/Fiches/Fiche220.htm
-- 
Doug Woodrow

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Paul Rubin
Dennis Lee Bieber [EMAIL PROTECTED] writes:
   What happens when two individuals release libraries using these
 proposed macros -- and have implemented conflicting macros using the same
 identifiers -- and you try to use both libraries in one application?

Something like the current situation with Python web frameworks ;)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
  The thing is there was no standard way in Maclisp to write something
  like Python's count function and map over it.  This could be done in
  Scheme with streams, of course.
 
 I'm not sure that you can blame MacLisp for not being object-oriented.
 The idea hadn't even been invented yet when MacLisp was implemented
 (unless you count Simula).  If someone went to make an OO version of
 MacLisp, I'm sure they'd get all this more or less right, and people
 have certainly implemented dialects of Lisp that are consistently OO.

count() has nothing to do with OO; it's just the infinite stream
1, 2, 3, ... which can be implemented as a Scheme closure the obvious way.
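
In Python terms it is just a generator (the standard library's
itertools.count does the same job):

    from itertools import imap

    def count(n=1):
        # the infinite stream 1, 2, 3, ...
        while True:
            yield n
            n += 1

    squares = imap(lambda i: i*i, count())   # lazily maps the stream
    print squares.next()                     # 1
    print squares.next()                     # 4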

 Right, but implementing generic reference-counted smart pointers is
 about a page of code in C++, and nearly every large C++ application
 I've seen uses such things.

That's because C++ has no GC.

 Gee, that's back to the future with 1975 Lisp technology.  Destructors
 are a much better model for dealing with such things (see not *all*
 good ideas come from Lisp -- a few come from C++) and I am dismayed
 that Python is deprecating their use in favor of explicit resource
 management.  Explicit resource management means needlessly verbose
 code and more opportunity for resource leaks.

And relying on refcounts to free a resource at a particular time is
precisely explicit resource management.  What if something makes a
copy of the pointer that you didn't keep track of?  The whole idea of
so-called smart pointers is to not have to keep track of them after
all.  Anything like destructors is not in the spirit of GC at all.
The idea of GC is to be invisible, so the language semantics can be
defined as if all objects stay around forever.  GC should only reclaim
something if there is no way to know that it is gone.  For stuff like
file handles, you can tell whether they are gone or not, for example
by trying to unmount the file system.  Therefore they should not be
managed by GC.

 Also, I'm not convinced that it has to be a huge performance hit.
 Some Lisp implementations had a "1, 2, 3, many" (or something like that)
 reference-counter for reclaiming short-lived objects.  This bypassed
 the real GC and was considered a performance optimization.  (It was
 probably on a Lisp Machine, though, where they had special hardware to
 help.)

That is a common technique and it's usually done with just one bit.

  because putting locks on the refcounts (someone tried in the late
  90's) to allow multi-cpu parallelism slows the interpreter to a crawl.
 
 All due to the ref-counter?  I find this really hard to believe.
 People write multi-threaded code all the time in C++ and also use
 smart pointers at the same time.  I'm sure they have to be a bit
 careful, but they certainly don't require a GIL.

Heap allocation in C++ is a relatively heavyweight process that's used
sort of sparingly.  C++ code normally uses a combination of static
allocation (fixed objects and globals), stack allocation (automatic
variables), immediate values (fixnums), and (when necessary) heap
allocation.  In CPython, *everything* is on the heap, even small
integers, so saying x = 3 has to bump the refcount for the integer 3.
There is a LOT more refcount bashing in CPython than there would be
in something like a Boost application.
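
Easy to see from the interpreter (exact counts vary by build, since
the interpreter itself holds many references to small ints):

    import sys
    print sys.getrefcount(3)   # already large: the int object is shared
    x = 3                      # a plain assignment bumps the count
    print sys.getrefcount(3)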

  Lisp may always be around in some tiny niche but its use as a
  large-scale systems development language has stopped making sense.
 
 It still makes perfect sense for AI research.  

I think most AI research is being done in other languages nowadays.
There are some legacy Lisp systems around and some die-hards but
I have the impression newer stuff is being done in ML or Haskell.

I personally use Emacs Lisp every day and I think Hedgehog Lisp (a
tiny functional Lisp dialect intended for embedded platforms like cell
phones--the runtime is just 20 kbytes) is a very cool piece of code.
But using CL for new, large system development just seems crazy today.

 Re Lisp, though, there used to be a joke (which turned out to be
 false), which went, I don't know what the most popular programming
 language will be in 20 years, but it will be called 'Fortran'.  In
 reality, I don't know what the most popular language will be called 20
 years from now, but it will *be* Lisp.

Well, they say APL is a perfect crystal--if you add anything to it, it
becomes flawed; while Lisp is a ball of mud--you can throw in anything
you want and it's still Lisp.  However I don't believe for an instant
that large system development in 2027 will be done in anything like CL.

See:

  http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt
  http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf

(both are the same presentation, the links are the original Powerpoint
version and a pdf conversion) for a game developer discussing his
experiences with a 500 KLOC C++ program, describing where he thinks
things are going.  I find it compelling.  Mostly he wants 

Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Andy Freeman
On Jun 27, 1:15 am, Paul Rubin http://[EMAIL PROTECTED] wrote:
 Dennis Lee Bieber [EMAIL PROTECTED] writes:

 What happens when two individuals release libraries using these
  proposed macros -- and have implemented conflicting macros using the same
  identifiers -- and you try to use both libraries in one application?

 Something like the current situation with Python web frameworks ;)

Actually, no.  For python, the most reasonable macro scope would be
the file, so different files in the same application could easily use
conflicting macros without any problems.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Andy Freeman
On Jun 26, 10:03 am, Paul Rubin http://[EMAIL PROTECTED] wrote:
  Map doesn't work on generators or iterators because they're not part
  of the common lisp spec, but if someone implemented them as a library,
  said library could easily include a map that handled them as well.

 Right, more scattered special purpose kludges instead of a powerful
 uniform interface.

Huh?  The interface could continue to be (map ...).

Python's for statement relies on the fact that python is mostly object
oriented and many of the predefined types have an iterator interface.
Lisp lists and vectors currently aren't objects and very few of the
predefined types have an iterator interface.

It's easy enough to get around the lack of objectness and add the
equivalent of an iterator interface, in either language.  The fact that
lisp folks haven't bothered suggests that this isn't a big enough
issue.

The difference is that lisp users can easily define python-like for
while python folks have to wait for the implementation.

Syntax matters.
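
For reference, the uniform interface in question is just
__iter__/next; anything that speaks it works with for, map, and list
comprehensions alike (a sketch, with next spelled as in Python 2):

    class Countdown:
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            return self
        def next(self):              # the protocol method for-loops call
            if self.n <= 0:
                raise StopIteration
            self.n -= 1
            return self.n + 1

    for i in Countdown(3):
        print i                      # 3, 2, 1
    print map(str, Countdown(3))     # ['3', '2', '1']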


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Chris Mellon
On 6/27/07, Andy Freeman [EMAIL PROTECTED] wrote:
 On Jun 26, 10:03 am, Paul Rubin http://[EMAIL PROTECTED] wrote:
   Map doesn't work on generators or iterators because they're not part
   of the common lisp spec, but if someone implemented them as a library,
   said library could easily include a map that handled them as well.
 
  Right, more scattered special purpose kludges instead of a powerful
  uniform interface.

 Huh?  The interface could continue to be (map ...).

 Python's for statement relies on the fact that python is mostly object
 oriented and many of the predefined types have an iterator interface.
 Lisp lists and vectors currently aren't objects and very few of the
 predefined types have an iterator interface.

 It's easy enough to get around the lack of objectness and add the
 equivalent of an iterator interface, in either language.  The fact that
 lisp folks haven't bothered suggests that this isn't a big enough
 issue.


Is this where I get to call Lispers Blub programmers, because they
can't see the clear benefit to a generic iteration interface?

 The difference is that lisp users can easily define python-like for
 while python folks have to wait for the implementation.


Yes, but Python already has it (so the wait time is 0), and the Lisp
user doesn't.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Chris Mellon
On 6/27/07, Douglas Alan [EMAIL PROTECTED] wrote:
 Paul Rubin http://[EMAIL PROTECTED] writes:


 Gee, that's back to the future with 1975 Lisp technology.  Destructors
 are a much better model for dealing with such things (see not *all*
 good ideas come from Lisp -- a few come from C++) and I am dismayed
 that Python is deprecating their use in favor of explicit resource
 management.  Explicit resource management means needlessly verbose
 code and more opportunity for resource leaks.

 The C++ folks feel so strongly about this, that they refuse to provide
 finally, and insist instead that you use destructors and RAII to do
 resource deallocation.  Personally, I think that's taking things a bit
 too far, but I'd rather it be that way than lose the usefulness of
 destructors and have to use when or finally to explicitly
 deallocate resources.


This totally misrepresents the case. The with statement and the
context manager is a superset of the RAII functionality. It doesn't
overload object lifetimes, rather it makes the intent (code execution
upon entrance and exit of a block) explicit. You use it in almost
exactly the same way you use RAII in C++ (creating new blocks as you
need new scopes), and it performs exactly the same function.

Nobody in their right mind has ever tried to get rid of explicit
resource management - explicit resource management is exactly what you
do every time you create an object, or you use RAII, or you open a
file. *Manual* memory management, where the tracking of references and
scopes is placed upon the programmer, is what people are trying to get
rid of and the with statement contributes to that goal, it doesn't
detract from it. Before the with statement, you could do the same
thing but you needed nested try/finally blocks and you had to
carefully keep track of the scopes, order of object creation, which
objects were created, all that. The with statement removes the manual,
error prone work from that and lets you more easily write your intent
- which is *precisely* explicit resource management.

RAII is a good technique, but don't get caught up on the
implementation details. The fact that it's implemented via stack
objects with ctors and dtors is a red herring. The significant feature
is that you've got explicit, predictable resource management with
(and this is the important bit) a guarantee that code will be called
in all cases of scope exit.

The with statement does exactly the same thing, but is actually
superior because

a) It doesn't tie the resource management to object creation. This
means you can use, for example, with lock: instead of the C++ style
Locker(lock)

and

b) You can tell whether you exited with an exception, and what that
exception is, so you can take different actions based on error
conditions vs expected exit. This is a significant benefit, it allows
the application of context managers to cases where RAII is weak. For
example, controlling transactions.
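
A sketch of that transaction case (the connection API is made up, but
the __exit__ signature is the real protocol):

    class transaction:
        def __init__(self, conn):
            self.conn = conn
        def __enter__(self):
            self.conn.begin()
            return self.conn
        def __exit__(self, exc_type, exc_value, tb):
            # unlike a C++ destructor, __exit__ is told whether the
            # block raised and what it raised
            if exc_type is None:
                self.conn.commit()
            else:
                self.conn.rollback()
            return False             # don't swallow the exception

    # with transaction(conn) as c:
    #     c.execute('...')           # commit on success, else rollback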

  Python object lifetimes are in fact NOT predictable because the ref
  counting doesn't (and can't) pick up cyclic structure.

 Right, but that doesn't mean that 99.9% of the time, the programmer
 can't immediately tell that cycles aren't going to be an issue.

 I love having a *real* garbage collector, but I've also dealt with C++
 programs that are 100,000+ lines long and I wrote plenty of Python
 code before it had a real garbage collector, and I never had any
 problem with cyclic data structures causing leaks.  Cycles are really
 not all that common, and when they do occur, it's usually not very
 difficult to figure out where to add a few lines to a destructor to
 break the cycle.


They can occur in the most bizarre and unexpected places. To the point
where I suspect that the reality is simply that you never noticed your
cycles, not that they didn't exist.

  And the refcounts are a performance pig in multithreaded code,
  because of how often they have to be incremented and updated.

 I'm willing to pay the performance penalty to have the advantage of
 not having to use constructs like when.


"with". And if you think you won't need it because python will get
real GC, you're very confused about what GC does and how.

 Also, I'm not convinced that it has to be a huge performance hit.
 Some Lisp implementations had a "1, 2, 3, many" (or something like that)
 reference-counter for reclaiming short-lived objects.  This bypassed
 the real GC and was considered a performance optimization.  (It was
 probably on a Lisp Machine, though, where they had special hardware to
 help.)

  That's why CPython has the notorious GIL (a giant lock around the
  whole interpreter that stops more than one interpreter thread from
  being active at a time), because putting locks on the refcounts
  (someone tried in the late 90's) to allow multi-cpu parallelism
  slows the interpreter to a crawl.

 All due to the ref-counter?  I find this really hard to believe.
 People write multi-threaded code all the time in C++ and also use
 

Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread joswig
On Jun 27, 10:51 am, Paul Rubin http://[EMAIL PROTECTED] wrote:

 I personally use Emacs Lisp every day and I think Hedgehog Lisp (a
 tiny functional Lisp dialect intended for embedded platforms like cell
 phones--the runtime is just 20 kbytes) is a very cool piece of code.
 But using CL for new, large system development just seems crazy today.

It seems that many of the hardcore Lisp developers are busy developing
the core of new airline system software (pricing, reservation, ...)
in Common Lisp. It has already replaced some mainframes...
Kind of crazy. I guess that counts as very large systems development.

There is surely also lots of Python involved, IIRC.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 Is this where I get to call Lispers Blub programmers, because they
 can't see the clear benefit to a generic iteration interface?

I think you overstate your case.  Lispers understand iteration
interfaces perfectly well, but tend to prefer mapping functions to
iteration because mapping functions are both easier to code (they are
basically equivalent to coding generators) and efficient (like
non-generator-implemented iterators).  The downside is that they are
not quite as flexible as iterators (which can be hard to code) and
generators, which are slow.
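
The shapes really are nearly identical; compare (a hypothetical
tree-walking example):

    # mapping function: easy to write, but the producer keeps control
    def map_tree(fn, tree):
        if isinstance(tree, list):
            for child in tree:
                map_tree(fn, child)
        else:
            fn(tree)

    # generator: the same recursion, but control returns to the
    # consumer at each leaf, so it composes with for-loops and friends
    def iter_tree(tree):
        if isinstance(tree, list):
            for child in tree:
                for leaf in iter_tree(child):
                    yield leaf
        else:
            yield tree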

Lispers have long since understood how to write mapping function to
iterator converters using stack groups or continuations, but Common
Lisp never mandated stack groups or continuations for conforming
implementations.  Scheme, of course, has continuations, and there are
implementations of Common Lisp with stack groups.

 The difference is that lisp users can easily define python-like for
 while python folks have to wait for the implementation.

 Yes, but Python already has it (so the wait time is 0), and the Lisp
 user doesn't.

So do Lispers, provided that they use an implementation of Lisp that
has the aforementioned extensions to the standard.  If they don't,
they are, unfortunately, the prisoners of the standardizing committees.

And, I guarantee you, that if Python were specified by a standardizing
committee, it would suffer this very same fate.

Regarding there being way too many good but incompatible
implementations of Lisp -- I understand.  The very same thing has
caused Ruby to incredibly rapidly close the lead that Python has
traditionally had over Ruby.  The reason for this is that there are
too many good but incompatible Python web dev frameworks, and only one
good one for Ruby.  So, we see that while Lisp suffers from too much
of a good thing, so does Python, and that may be the death of it if
Ruby on Rails keeps barreling down on Python like a runaway train.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Andy Freeman
On Jun 27, 8:09 am, Chris Mellon [EMAIL PROTECTED] wrote:
 On 6/27/07, Andy Freeman [EMAIL PROTECTED] wrote:
  It's easy enough to get around the lack of objectness and add the
  equivalent of an iterator interface, in either language.  The fact that
  lisp folks haven't bothered suggests that this isn't a big enough
  issue.

 Is this where I get to call Lispers Blub programmers, because they
 can't see the clear benefit to a generic iteration interface?

The Blub argument relies on inability to implement comparable
functionality in blub.  (For example, C programmers don't get to
call Pythonists Blub programmers because Python doesn't use {} and
Pythonistas don't get to say the same about C programmers because C
doesn't use whitespace.)  Generic iterators can be implemented by lisp
programmers and some have.  Others haven't had the need.

  The difference is that lisp users can easily define python-like for
  while python folks have to wait for the implementation.

 Yes, but Python already has it (so the wait time is 0), and the Lisp
 user doesn't.

"for" isn't the last useful bit of syntax.  Python programmers got to
wait until 2.5 to get "with".  Python 2.6 will probably have syntax
that wasn't in Python 2.5.

Lisp programmers with a syntax itch don't wait anywhere near that long.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Lenard Lindstrom
Douglas Alan wrote:
 
 Lispers have long since understood how to write mapping function to
 iterator converters using stack groups or continuations, but Common
 Lisp never mandated stack groups or continuations for conforming
 implementations.  Scheme, of course, has continuations, and there are
 implementations of Common Lisp with stack groups.
 

Those stack groups

http://common-lisp.net/project/bknr/static/lmman/fd-sg.xml

remind me of Python greenlets

http://cheeseshop.python.org/pypi/greenlet .
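
They are a close match.  An outline of the mapping-function-to-iterator
conversion built on greenlet's switch() (untested; mapc below stands in
for any Lisp-style mapper):

    from greenlet import greenlet

    def iterate(mapping_fn, *args):
        # drive the mapping function in a coroutine; each value it
        # hands to its callback is switched back out to the consumer
        sentinel = object()
        main = greenlet.getcurrent()
        def produce():
            mapping_fn(main.switch, *args)
            main.switch(sentinel)
        worker = greenlet(produce)
        while True:
            item = worker.switch()
            if item is sentinel:
                return
            yield item

    def mapc(fn, seq):
        for x in seq:
            fn(x)

    # iterate(mapc, [1, 2, 3]) then behaves like iter([1, 2, 3])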


---
Lenard Lindstrom
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Douglas Alan
Chris Mellon [EMAIL PROTECTED] writes:

 On 6/27/07, Douglas Alan [EMAIL PROTECTED] wrote:

 The C++ folks feel so strongly about this, that they refuse to provide
 finally, and insist instead that you use destructors and RAII to do
 resource deallocation.  Personally, I think that's taking things a bit
 too far, but I'd rather it be that way than lose the usefulness of
 destructors and have to use when or finally to explicitly
 deallocate resources.

 This totally misrepresents the case. The with statement and the
 context manager is a superset of the RAII functionality.

No, it isn't.  C++ allows you to define smart pointers (one of many
RAII techniques), which can use refcounting or other tracking
techniques.  Refcounting smart pointers are part of Boost and have
made it into TR1, which means they're on track to be included in the
next standard library.  One need not have waited for Boost, as they can
be implemented in about a page of code.

The standard library also has auto_ptr, which is a different sort of
smart pointer, which allows for somewhat fancier RAII than
scope-based.

 It doesn't overload object lifetimes, rather it makes the intent
 (code execution upon entrance and exit of a block) explicit.

But I don't typically wish for this sort of intent to be made
explicit.  TMI!  I used with for *many* years in Lisp, since this is
how non-memory resource deallocation has been dealt with in Lisp since
the dawn of time.  I can tell you from many years of experience that
relying on Python's refcounter is superior.

Shouldn't you be happy that there's something I like more about Python
than Lisp?

 Nobody in their right mind has ever tried to get rid of explicit
 resource management - explicit resource management is exactly what you
 do every time you create an object, or you use RAII, or you open a
 file.

This just isn't true.  For many years I have not had to explicitly
close files in Python.  Nor have I had to do so in C++.  They have
been closed for me implicitly.  With is not implicit -- or at least
not nearly as implicit as was previous practice in Python, or as is
current practice in C++.
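
The idiom I mean (it works under CPython's refcounting, though the
language reference never promised it):

    def read_config(path):
        return open(path).read()   # CPython closes the file as soon as
                                   # the temporary's refcount hits zero;
                                   # Jython, say, may keep it open longer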

 *Manual* memory management, where the tracking of references and
 scopes is placed upon the programmer, is what people are trying to
 get rid of and the with statement contributes to that goal, it
 doesn't detract from it.

As far as I am concerned, memory is just one resource amongst many,
and the programmer's life should be made easier in dealing with all
such resources.

 Before the with statement, you could do the same thing but you
 needed nested try/finally blocks

No, you didn't -- you could just encapsulate the resource acquisition
into an object and allow the destructor to deallocate the resource.
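
That is, the pre-with pattern (a hypothetical wrapper):

    import socket

    class Connection:
        def __init__(self, host, port):
            self.sock = socket.socket()
            self.sock.connect((host, port))
        def __del__(self):
            # runs when the last reference dies; under CPython that is
            # deterministic, barring cycles
            self.sock.close()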

 RAII is a good technique, but don't get caught up on the
 implementation details.

I'm not -- I'm caught up in the loss of power and elegance that will
be caused by deprecating the use of destructors for resource
deallocation.

 The with statement does exactly the same thing, but is actually
 superior because

 a) It doesn't tie the resource management to object creation. This
 means you can use, for example, with lock: instead of the C++ style
 Locker(lock)

I know all about with.  As I mentioned above, Lisp has had it since
the dawn of time.  And I have nothing against it, since it is at times
quite useful.  I'm just dismayed at the idea of deprecating reliance
on destructors in favor of with for the majority of cases when the
destructor usage works well and is more elegant.

 b) You can tell whether you exited with an exception, and what that
 exception is, so you can take different actions based on error
 conditions vs expected exit. This is a significant benefit, it
 allows the application of context managers to cases where RAII is
 weak. For example, controlling transactions.

Yes, for the case where you might want to do fancy handling of
exceptions raised during resource deallocation, then when is
superior, which is why it is good to have in addition to the
traditional Python mechanism, not as a replacement for it.

 Right, but that doesn't mean that 99.9% of the time, the programmer
 can't immediately tell that cycles aren't going to be an issue.

 They can occur in the most bizarre and unexpected places. To the point
 where I suspect that the reality is simply that you never noticed your
 cycles, not that they didn't exist.

Purify tells me that I know more about the behavior of my code than
you do: I've *never* had any memory leaks in large C++ programs that
used refcounted smart pointers that were caused by cycles in my data
structures that I didn't know about.

 And if you think you won't need it because python will get real GC,
 you're very confused about what GC does and how.

Ummm, I know all about real GC, and I'm quite aware that Python has
had it for quite some time now.  (Though the implementation is rather
different last I checked than it would be for a language that didn't
also have 

Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Douglas Alan
Douglas Woodrow [EMAIL PROTECTED] writes:

 On Wed, 27 Jun 2007 01:45:44, Douglas Alan [EMAIL PROTECTED] wrote

A chaque son gout

 I apologise for this irrelevant interruption to the conversation, but
 this isn't the first time you've written that.

 The word chaque is not a pronoun.

A chacun son epellation.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Graham Breed
Dennis Lee Bieber wrote:

   But if these macros are supposed to allow one to sort of extend
 Python syntax, are you really going to code things like

   macrolib1.keyword

 everywhere?

I don't see why that *shouldn't* work.  Or from macrolib1 import
keyword as foo.  And to be truly Pythonic the keywords would have to
be scoped like normal Python variables.  One problem is that such a
system wouldn't be able to redefine existing keywords.

Let's wait for a concrete proposal before delving into this rats'
cauldron any further.


   Graham

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-27 Thread Douglas Alan
Dennis Lee Bieber [EMAIL PROTECTED] writes:

   But if these macros are supposed to allow one to sort of extend
 Python syntax, are you really going to code things like

   macrolib1.keyword
 everywhere?

No -- I would expect that macros (if done the way that I would like
them to be done) would work something like so:

   from setMacro import macro set, macro let
   let x = 1
   set x += 1

The macros "let" and "set" (like all macro invocations) would have to
be the first tokens on a line.  They would be passed either the
strings "x = 1" and "x += 1", or some tokenized version thereof.
There would be parsing libraries to help them from there.

For macros that need to continue over more than one line, e.g.,
perhaps something like

   let x = 1
       y = 2
       z = 3
   set x = y + z
       y = x + z
       z = x + y
   print x, y, z

the macro would parse up to when the indentation returns to the previous
level.

For macros that need to return values, a new bracketing syntax would
be needed.  Perhaps something like:

   while $(let x = foo()):
       print x

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
  In the Maclisp era functions like mapcar worked on lists, and
  generated equally long lists in memory.
 
 I'm aware, but there were various different mapping functions.  map,
 as opposed to mapcar didn't return any values at all, and so you had
 to rely on side effects with it.

The thing is there was no standard way in Maclisp to write something
like Python's count function and map over it.  This could be done in
Scheme with streams, of course.

 Right -- I wrote iterators, not generators.

Python iterators (the __iter__ methods on classes) are written with
yield statements as often as not.

  The point is that mapcar (as the name implies) advances down a list
  using cdr, i.e. it only operates on lists, not general iterators or
  streams or whatever.
 
 Right, but each sequence type had its own corresponding mapping functions.

Precisely, I think that's what Alexander was trying to get across, Lisp
didn't have a uniform interface for traversing different types of sequence.

 they had standardized such things.  This would not be particularly
 difficult to do, other than the getting everyone to agree on just what
 the interfaces should be.  But Lisp programmers, are of course, just
 as recalcitrant as Python programmers.

Python programmers tend to accept what the language gives them and use
it and not try to subvert it too much.  I don't say that is good or bad.

 And in Python's case, the reference manual is just an incomplete
 description of the features offered by the implementation, and people
 revel in features that are not yet in the reference manual.

No I don't think so, unless you count some things that are in accepted
PEP's and therefore can be considered part of the reference docs, even
though they haven't yet been merged into the manual.

 That's not ugly.  The fact that CPython has a reference-counting GC
 makes the lifetime of object predictable, which means that like in
 C++, and unlike in Java, you can use destructors to good effect.  This
 is one of the huge boons of C++.  The predictability of lifespan makes
 the language more expressive and powerful.  The move to deprecate
 relying on this feature in Python is a bad thing, if you ask me, and
 removes one of the advantages that Python had over Lisp.

No that's wrong, C++ has no GC at all, reference counting or
otherwise, so its destructors only run when the object is manually
released or goes out of scope.  The compiler normally doesn't attempt
lifetime analysis and it would probably be against the rules to free
an object as soon as it became inaccessible anyway.  Python (as of
2.5) does that using the new with statement, which finally makes it
possible to escape from that losing GC-dependent idiom.  The with
statement handles most cases that C++ destructors normally handle.

Python object lifetimes are in fact NOT predictable because the ref
counting doesn't (and can't) pick up cyclic structure.  Occasionally a
cyclic GC comes along and frees up cyclic garbage, so some destructors
don't get run til then.  Of course you can manually organize your code
so that stuff with destructors don't land in cyclic structures, but
now you don't really have automatic GC any more, you have (partially)
manual storage management.  And the refcounts are a performance pig in
multithreaded code, because of how often they have to be incremented
and updated.  That's why CPython has the notorious GIL (a giant lock
around the whole interpreter that stops more than one interpreter
thread from being active at a time), because putting locks on the
refcounts (someone tried in the late 90's) to allow multi-cpu
parallelism slows the interpreter to a crawl.

Meanwhile 4-core x86 cpu's are shipping on the desktop, and network
servers not dependent on the complex x86 architecture are using
16-core MIPS processors (www.movidis.com).  Python is taking a beating
all the time because of its inability to use parallel cpu's, and it's
only going to get worse unless/until PyPy fixes the situation.  And
that means serious GC instead of ref counting.

 And it's not bygone -- it's just nichified.  Lisp is forever -- you'll see.

Lisp may always be around in some tiny niche but its use as a
large-scale systems development language has stopped making sense.

If you want to see something really pathetic, hang out on
comp.lang.forth sometime.  It's just amazing how unaware the
inhabitants there are of how irrelevant their language has become.
Lisp isn't that far gone yet, but it's getting more and more like that.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Andy Freeman
On Jun 26, 12:26 am, Paul Rubin http://[EMAIL PROTECTED] wrote:
 Precisely, I think that's what Alexander was trying to get across, Lisp
 didn't have a uniform interface for traversing different types of sequence.

And he's wrong, at least as far as common lisp is concerned - map does
exactly that.

http://www.lispworks.com/documentation/HyperSpec/Body/f_map.htm

Map doesn't work on generators or iterators because they're not part
of the common lisp spec, but if someone implemented them as a library,
said library could easily include a map that handled them as well.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Andy Freeman
On Jun 26, 8:49 am, Andy Freeman [EMAIL PROTECTED] wrote:
 Map doesn't work on generators or iterators because they're not part
 of the common lisp spec, but if someone implemented them as a library,
 said library could easily include a map that handled them as well.

Note that this is a consequence of something that Python does
better than lisp.  Far more parts of python are defined in terms of
named operations which are data-type independent.  As a result, they
work on things that the implementor (or spec) never considered.

That said, it's no big deal for a lisp program that needed an enhanced
map that also understands iterators and generators to use it.

Compare that with what a programmer using Python 2.4 has to do if
she'd like the functionality provided by 2.5's with statement.  Yes,
with is just syntax, but it's extremely useful syntax, syntax that
can be easily implemented with lisp-style macros.





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Paul Rubin
Andy Freeman [EMAIL PROTECTED] writes:
 And he's wrong, at least as far as common lisp is concerned - map does
 exactly that.
 
 http://www.lispworks.com/documentation/HyperSpec/Body/f_map.htm

"sequence" there just means vectors and lists.

 Map doesn't work on generators or iterators because they're not part
 of the common lisp spec, but if someone implemented them as a library,
 said library could easily include a map that handled them as well.

Right, more scattered special purpose kludges instead of a powerful
uniform interface.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Paul Rubin
Andy Freeman [EMAIL PROTECTED] writes:
 Compare that with what a programmer using Python 2.4 has to do if
 she'd like the functionality provided by 2.5's with statement.  Yes,
 with is just syntax, but it's extremely useful syntax, syntax that
 can be easily implemented with lisp-style macros.

Not really.  The with statement's binding targets all have to support
the protocol, which means a lot of different libraries need redesign.
You can't do that with macros.  Macros can handle some narrow special
cases such as file-like objects, handled in Python with
contextlib.closing.
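
For reference, that case looks like this (under Python 2.5 you also
need from __future__ import with_statement):

    from contextlib import closing
    import urllib

    # urlopen's result has close() but no __enter__/__exit__ of its
    # own, so closing() adapts it to the with protocol
    with closing(urllib.urlopen('http://www.python.org')) as page:
        data = page.read()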

That said, the with statement was missing from Python for much too
long, since users were happy to rely on reference counting.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Andy Freeman
On Jun 26, 10:10 am, Paul Rubin http://[EMAIL PROTECTED] wrote:
 Andy Freeman [EMAIL PROTECTED] writes:
  Compare that with what a programmer using Python 2.4 has to do if
  she'd like the functionality provided by 2.5's with statement.  Yes,
  with is just syntax, but it's extremely useful syntax, syntax that
  can be easily implemented with lisp-style macros.

 Not really.

Yes really, as the relevant PEP shows.  The "it works like" pseudo-code
is very close to how it would be defined with lisp-style macros.

 The with statement's binding targets all have to support
 the protocol, which means a lot of different libraries need redesign.

That's a different problem, and it's reasonably solvable for anyone
who wants to use the roll-your-own with while writing an application
running under 2.4.  (You just add the relevant methods to the
appropriate classes.)

The big obstacle is the syntax of the with-statement.  There's no way
to define it in python with user-code.
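
Right -- per PEP 343's own translation (simplified here; make_resource
is a placeholder for any object with the 2.5-style methods), 2.4 users
must spell the whole dance out by hand:

    import sys

    mgr = make_resource()
    exc = True
    try:
        try:
            value = mgr.__enter__()
            # ... the body of the would-be with-block ...
        except:
            exc = False
            if not mgr.__exit__(*sys.exc_info()):
                raise
    finally:
        if exc:
            mgr.__exit__(None, None, None)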

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Graham Breed
Douglas Alan wrote:
 Graham Breed [EMAIL PROTECTED] writes:

  Another way is to decorate functions with their local variables:

  from strict import my

  @my("item")
  def f(x=1, y=2.5, z=[1,2,4]):
      x = float(x)
      w = float(y)
      return [item+x-y for item in z]

 Well, I suppose that's a bit better than the previous suggestion, but
 (1) it breaks the style rule of not declaring variables until you need
 them, and (2) it doesn't catch double initialization.

(1) is a style rule that many style guides explicitly violate.  What
is (2) and why would it be a problem?

A better way that I think is fine syntactically would be

from strict import norebind, set

@norebind
def f(x=1, y=2.5, z=[1,2,4]):
    set(x=float(x))
    set(w=float(y))
    return [item+x-y for item in z]

It won't work because the Python semantics don't allow a function to
alter a nested namespace.  Or for a decorator to get at the locals of
the function it's decorating.  It's an example of Python restricting
flexibility, certainly.

  The best way to catch false rebindings is to stick a comment with
  the word rebound after every statement where you think you're
  rebinding a variable.

 No, the best way to catch false rebindings is to have the computers
 catch such errors for you.  That's what you pay them for.

How does the computer know which rebindings are false unless you tell
it?

  Then you can search your code for cases where there's a rebound
  comment but no rebinding.

 And how do I easily do that?  And how do I know if I even need to in
 the face of sometimes subtle bugs?

In UNIX, you do it by putting this line in a batch file:

egrep -H 'rebound' $* | egrep -v '^[^:]+:[[:space:]]*([.[:alnum:]]+)[[:space:]]*=(|.*[^.])\\1\'

You don't know you need to do it, of course.  Like you wouldn't know
you needed to use the let and set macros if that were possible.
Automated checks are only useful for problems you know you might have.

  Assuming you're the kind of person who knows that false rebindings
  can lead to perplexing bugs, but doesn't check apparent rebindings
  in a paranoid way every time a perplexing bug comes up, anyway.
  (They aren't that common in modern python code, after all.)

 They're not that uncommon, either.

The 300-odd line file I happened to have open had no examples of the
form x = f(x).  There was one rebinding of an argument, such as:

if something is None:
something = default_value

but that's not the case you were worried about.  If you've decided it
does worry you after all there may be a decorator/function pattern
that can check that no new variables have been declared up to a
certain point.

I also checked a 400-odd file which has one rebinding that the search
caught.  And also this line:

m, n = n, m%n

which isn't of the form I was searching for.  Neither would the set()
solution above be valid, or the substitution below.  I'm sure it can
be done with regular expressions, but they'd get complicated.  The
best way would be to use a parser, but unfortunately I don't
understand the current Python grammar for assignments.  I'd certainly
be interested to see how your proposed macros would handle this kind
of thing.

This is important because the Python syntax is complicated enough that
you have to be careful playing around with it.  Getting macros to work
the way you want with results acceptable to the general community
looks like a huge viper pit to me.  That may be why you're being so
vague about the implementation, and why no macro advocates have
managed to get a PEP together.  A preprocessor that can read in
modified Python syntax and output some form of real Python might do
what you want.  It's something you could work on as a third-party
extension and it should be able to do anything macros can.


That aside, the short code sample I give below does have a rebinding
of exactly the form you were worried about.  It's still idiomatic for
text substitutions and so code with a lot of text substitutions will
likely have a lot of rebindings.  You could give each substituted text
a different name.  I think that makes some sense because if you're
changing the text you should give it a name to reflect the changes.
But it's still error prone: you might use the wrong (valid) name
subsequently.  Better is to check for unused variables.

 I've certainly had it happen to me on several occasions, and sometimes
 they've been hard to find as I might not even see the mispeling even
 if I read the code 20 times.

With vim, all you have to do is go to the relevant line and type ^* to
check that the two names are really the same.  I see you use Emacs but
I'm sure that has an equivalent.

 (Like the time I spent all day trying to figure out why my assembly
 code wasn't working when I was a student and finally I decided to ask
 the TA for help, and while talking him through my code so that he
 could tell me what I was doing wrong, I finally noticed the rO where
 there was supposed to 

Re: Python's only one way to do it philosophy isn't good?

2007-06-26 Thread Douglas Alan
Paul Rubin http://[EMAIL PROTECTED] writes:

 Andy Freeman [EMAIL PROTECTED] writes:

 Compare that with what a programmer using Python 2.4 has to do if
 she'd like the functionality provided by 2.5's with statement.  Yes,
 with is just syntax, but it's extremely useful syntax, syntax that
 can be easily implemented with lisp-style macros.

 Not really.  The with statement's binding targets all have to support
 the protocol, which means a lot of different libraries need redesign.
 You can't do that with macros.

But that's a library issue, not a language issue.  The technology
exists completely within Lisp to accomplish these things, and most
Lisp programmers even know how to do this, as application frameworks
in Lisp often do this kind of thing.  The problem is getting anything put into
the standard.  Standardizing committees just suck.

I just saw a presentation today on the Boost library for C++.  This
project started because the standard library for C++ is woefully
inadequate for today's programming needs, but any chance of getting
big additions into the standard library will take 5-10 years.
Apparently this is true for all computer language standards.  And even
then, the new standard will be seriously lacking, because it is
usually based on armchair thinking rather than real-world usage.

So the Boost guys are making a defacto standard (or so they hope)
library for C++ that has more of the stuff you want, and then when the
standardizing committees get around to revising the actual standard,
the new standard will already be in wide use, meaning they just have
to sign off on it (and perhaps suggest a few tweaks).

Alas, the Lisp standards are stuck in this sort of morass, even while
many implementations do all the right things.

Python doesn't have this problem because it operates like Boost to
begin with, rather than having a zillion implementations tracking some
slow moving standard that then mandates things that might be nearly
impossible to implement, while leaving out much of what people need.

But then again, neither do many dialects of Lisp, which are developed
more or less like Python is.  But then they aren't standards
compliant, and so they don't receive wide adoption.

 Macros can handle some narrow special cases such as file-like
 objects, handled in Python with contextlib.closing.

Macros handle the language part of things in Lisp perfectly well in
this regard.  But you are right -- they certainly can't make
standardizing committees do the right thing.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-25 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
 And likewise, good macro programming can solve some problems that no
 amount of linting could ever solve.

I think Lisp is more needful of macros than other languages, because
its underlying primitives are too, well, primitive.  You have to write
all the abstractions yourself.  Python has built-in abstractions for a
few container types like lists and dicts, and now a new and more
general one (iterators), so it's the next level up.  Haskell abstracts
the concept of containers to something called monads, so operations
like loops and list comprehensions fall out automatically (it took me
a while to realize that--Haskell listcomps weren't a bright new idea
someone thought of adding to an otherwise complete language: they were
already inherently present in the list monad operations and their
current syntax is just minor sugaring and is actually restricted on
purpose to make the error messages less confusing).

So, a bunch of stuff one needs macros to do conveniently in Lisp, can
be done with Python's built-in syntax.  And a bunch of stuff that
Python could use macros for, are easily done in Haskell using delayed
evaluation and monads.  And Haskell is starting to grow its own macro
system (templates) but that's probably a sign that an even higher
level language (maybe with dependent types or something) would make
the templates unnecessary.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-25 Thread Douglas Alan
Paul Rubin http://[EMAIL PROTECTED] writes:

 Douglas Alan [EMAIL PROTECTED] writes:
 And likewise, good macro programming can solve some problems that no
 amount of linting could ever solve.

 I think Lisp is more needful of macros than other languages, because
 its underlying primitives are too, well, primitive.  You have to write
 all the abstractions yourself.

Well, not really, because you typically use Common Lisp with CLOS and a
class library.  If you ask me, the more things that can (elegantly) be
moved out of the core language and into a standard library, the
better.

 Python has built-in abstractions for a few container types like
 lists and dicts, and now a new and more general one (iterators), so
 it's the next level up.

Common Lisp has had all these things for ages.

 And a bunch of stuff that Python could use macros for, are easily
 done in Haskell using delayed evaluation and monads.  And Haskell is
 starting to grow its own macro system (templates) but that's
 probably a sign that an even higher level language (maybe with
 dependent types or something) would make the templates unnecessary.

Alas, I can't comment too much on Haskell, as, although I am familiar
with it to some extent, I am far from proficient in it.  Don't worry
-- it's on my to-do list.

I think I'd like to take Gerry Sussman's new graduate class first,
though, and find out how it can all be done in Scheme.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-25 Thread Alexander Schmolck
Douglas Alan [EMAIL PROTECTED] writes:

 Python has built-in abstractions for a few container types like
 lists and dicts, and now a new and more general one (iterators), so
 it's the next level up.

 Common Lisp has had all these things for ages.

Rubbish. Do you actually know any common lisp?

There is precisely no way to express

for x in xs:
blah(x)

or
x = xs[key]

in either Scheme or CL, which is a major defect of both languages
(although there has been a recent and limited proposal for sequence
iteration by C. Rhodes which is implemented as an experimental
extension in SBCL).  This is stuff that even C++, which is about the
lowest-level language anyone uses for general-purpose programming
these days, has been able to express for decades (modulo foreach
syntax).

In a decent Scheme it's easy enough to define your own collection
class/iteration protocol, which does allow you to do something like the
above, but of course only for container abstractions that you have some
control over yourself.  Even in this limited sense you can forget about
doing this in CL in a way that meshes nicely with the existing
primitives (inter alia because of spurious inconsistencies between e.g.
sequence and hash access, and under-specification of the exception
hierarchy), and you can likewise forget about getting anything as
expressive as generators/coroutines in CL with reasonable effort and
performance; CL won't even allow you to write LOOPs over custom
container types (the nonstandard ITERATE package has limited support
for this).
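
For comparison, the Python protocol at issue is small enough to show
whole -- a toy container supporting both forms:

class Table(object):
    """Toy container supporting both 'for x in xs' and 'xs[key]'."""
    def __init__(self, pairs):
        self._data = dict(pairs)
    def __getitem__(self, key):        # xs[key]
        return self._data[key]
    def __iter__(self):                # for x in xs:
        return iter(self._data)

xs = Table([('a', 1), ('b', 2)])
for x in xs:
    print x, xs[x]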

'as
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-25 Thread Douglas Alan
Alexander Schmolck [EMAIL PROTECTED] writes:

 Douglas Alan [EMAIL PROTECTED] writes:

 Python has built-in abstractions for a few container types like
 lists and dicts, and now a new and more general one (iterators), so
 it's the next level up.

 Common Lisp has had all these things for ages.

 Rubbish. Do you actually know any common lisp?

Yes, though it's been quite a while, and it was mostly on Lisp
Machines; at the time, Common Lisp was still being standardized, and
so Lisp Machine (Chine Nual) Lisp wasn't quite Common Lisp compliant.
Also, Lisp Machine Lisp had a lot of features, such as stack groups,
that weren't put into Common Lisp.  And my experience predates CLOS,
as at the time Lisp Machines used Flavors.

Most of my Lisp experience is actually in MacLisp (and Ulisp and
Proto, neither of which you've likely heard of).  MacLisp was an
immediate precursor of Common Lisp, and didn't have a standard object
system at all (I rolled one myself for my applications), but it had
the Loop macro, and if I recall correctly, the MacLisp Loop macro was
nearly identical to the Chine Nual Loop macro, which I thought was
ported rather unsullied to Common Lisp.  In any case, IIRC, there were
hooks in the Loop macro for dealing with iterators, and I actually
used them to provide an iterator-like interface to generators (for
Lisp Machines) that I coded up with macros and stack groups.

It may be that these hooks didn't make it into the Common Lisp Loop
macro, or that my memory of what was provided by the macro is a little
off.  What's not off, is that it was really easy to implement these
things, and it wasn't like I was some sort of Lisp guru -- I was just
an undergraduate student.

I will certainly admit that Lisp programmers at the time were (and
likely still are) much more enamored of mapping functions than of
iterators.  Mapping functions certainly get the job done as elegantly
as iterators most of the time, although I would agree that they are
not quite so general.  Of course, using generators, I was easily able
to make a converter that would take a mapping function and return a
corresponding iterator.
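
In today's Python, that converter is the sort of thing you'd write
with a thread standing in for the stack group -- a rough sketch (an
assumed reconstruction, not the original code):

import threading, Queue  # the Queue module is "queue" in Python 3

_DONE = object()

def iterate(mapping_function, seq):
    # mapping_function(f, seq) is expected to call f on each element,
    # mapc/mapcar style; we run it in a thread and pull the elements
    # back out through a bounded queue, yielding them one at a time.
    q = Queue.Queue(maxsize=1)
    def producer():
        mapping_function(q.put, seq)
        q.put(_DONE)
    t = threading.Thread(target=producer)
    t.setDaemon(True)
    t.start()
    while True:
        item = q.get()
        if item is _DONE:
            return
        yield item

for x in iterate(lambda f, xs: map(f, xs), [1, 2, 3]):
    print x  # prints 1, 2, 3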

Scheme, on the other hand, at least by idiom, has computation
streams, and streams are equivalent to iterators.

 There is precisely no way to express

 for x in xs:
 blah(x)

The canonical way to do this in Lisp would be something like:

    (mapcar (lambda (x) (blah x))
            xs)

Though there would (at least in MacLisp) be a differently named
mapping function for each sequence type, which makes things a bit less
convenient, as you have to know the name of the mapping function
for each type.

 or
 x = xs[key]

I'm not sure what you are asserting?  That Common Lisp doesn't have
hash tables?  That's certainly not the case.  Or that it doesn't
provide standard generic functions for accessing them, so you can
provide your own dictionaries that are implemented differently and
then use exactly the same interface?  The latter I would believe, as
that would be one of my criticisms of Lisp -- although it's pretty
cool that you can load whatever object system you would like (CLOS
being by far the most common), it also means that the core language
itself is a bit deficient in OO terms.

This problem would be significantly mitigated by defining new
standards for such things in terms of CLOS, but unfortunately
standards change unbearably slowly.  There are certainly many
implementations of Lisp that solve these issues, but they have a hard
time achieving wide adoption.  A language like Python, which is
defined by its implementation, rather than by a standard, can move
much more quickly.  This debate, though, is really more about the
best model for language definition than about what the ideal language
is like.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-25 Thread Paul Rubin
Douglas Alan [EMAIL PROTECTED] writes:
 I will certainly admit that Lisp programmers at the time were (and
 likely still are) much more enamored of mapping functions than of
 iterators.  Mapping functions certainly get the job done as elegantly
 as iterators most of the time, although I would agree that they are
 not quite so general.

In the Maclisp era functions like mapcar worked on lists, and
generated equally long lists in memory.  It was sort of before my time
but I have the impression that Maclisp was completely dynamically
scoped and as such, it couldn't cleanly make anything like generators
(since it had no way to make lexical closures).

 Scheme, on the other hand, at least by idiom, has computation
 streams, and streams are equivalent to iterators.

No not really, they (in SICP) are at best more like class instances
with a method that mutates some state.  There's nothing like a yield
statement in the idiom.  You could do it with call/cc but SICP just
uses ordinary closures to implement streams.
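
In Python terms, the SICP construction is roughly this -- a stream as
a pair of (head, thunk-for-the-rest), built from ordinary closures:

def integers_from(n):
    # the rest of the stream is a closure, forced only on demand
    return (n, lambda: integers_from(n + 1))

def take(stream, k):
    out = []
    while k:
        head, rest = stream
        out.append(head)
        stream = rest()  # force the next cell
        k -= 1
    return out

print take(integers_from(0), 5)   # [0, 1, 2, 3, 4]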

 The canonical way to do this in Lisp would be something like:
(mapcar (lambda (x) (blah x)) xs)

At least you could spare our eyesight by writing that as 
(mapcar #'blah xs) ;-).  The point is that mapcar (as the name 
implies) advances down a list using cdr, i.e. it only operates
on lists, not general iterators or streams or whatever.

  x = xs[key]
 
 I'm not sure what you are asserting?  That Common Lisp doesn't have
 hash tables?  That's certainly not the case.  Or that it doesn't
 provide standard generic functions for accessing them

The latter.  Of course there are getf/setf, but those are necessarily
macros.

 A language like Python, which is defined by its implementation,
 rather than by a standard, can move much more quickly.  This debate,
 though, is really more about the best model for language definition
 than about what the ideal language is like.

Python is not Perl and it has in principle always been defined by its
reference manual, though until fairly recently it's fostered a style
of relying on various ugly CPython artifacts like the reference
counting GC.

Lisp accumulated a lot of cruft over the decades and it kept some
baggage that it really could have done without.  I don't think
Python's designers learned nearly as much from Lisp as they could
have, and Python has suffered because of it.  Lisp still has an
awesome beauty in both the CL and Scheme incarnations.  But it's like
listening to Elvis music--even if it can still get you dancing, at the
end of the day it's still a reflection of a bygone culture.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-25 Thread Douglas Alan
Paul Rubin http://[EMAIL PROTECTED] writes:

 Douglas Alan [EMAIL PROTECTED] writes:

 I will certainly admit that Lisp programmers at the time were (and
 likely still are) much more enamored of mapping functions than of
 iterators.  Mapping functions certainly get the job done as elegantly
 as iterators most of the time, although I would agree that they are
 not quite so general.

 In the Maclisp era functions like mapcar worked on lists, and
 generated equally long lists in memory.

I'm aware, but there were various different mapping functions.  map,
as opposed to mapcar, didn't return any values at all, and so you had
to rely on side effects with it.

 It was sort of before my time but I have the impression that Maclisp
 was completely dynamically scoped and as such,

Yes, that's right.

 it couldn't cleanly make anything like generators (since it had no
 way to make lexical closures).

That's right, generators would have been quite difficult to do in
MacLisp.  But a Lisp Machine (with stack groups) could have done them,
and did, with or without closures.

 Scheme, on the other hand, at least by idiom, has computation
 streams, and streams are equivalent to iterators.

 No not really, they (in SICP) are at best more like class instances
 with a method that mutates some state.  There's nothing like a yield
 statement in the idiom.

Right -- I wrote iterators, not generators.

 You could do it with call/cc but SICP just uses ordinary closures to
 implement streams.

Yes, that's right.

 The canonical way to do this in Lisp would be something like:
(mapcar (lambda (x) (blah x)) xs)

 At least you could spare our eyesight by writing that as 
 (mapcar #'blah xs) ;-).

Good point!  But I just love lambda -- even when I'm just using it as
a NOP.  (Also, I couldn't remember the syntax for accessing the
function property of a symbol in MacLisp.)

 The point is that mapcar (as the name implies) advances down a list
 using cdr, i.e. it only operates on lists, not general iterators or
 streams or whatever.

Right, but each sequence type had its own corresponding mapping
functions.

  x = xs[key]

 I'm not sure what you are asserting?  That Common Lisp doesn't have
 hash tables?  That's certainly not the case.  Or that it doesn't
 provide standard generic functions for accessing them

 The latter.  Of course there are getf/setf, but those are necessarily
 macros.

Right.  OO on primitive data types is kind of hard in a non-OO
language.  So, when writing an application in MacLisp, or Lisp Machine
Lisp, I might have had to spend a bit of time writing an application
framework that provided the OO features I needed.  This was not
particularly hard to do in Lisp, but surely not nearly as nice as if
they had standardized such things.  That would not have been
particularly difficult to do, other than getting everyone to agree on
just what the interfaces should be.  But Lisp programmers are, of
course, just as recalcitrant as Python programmers.

 A language like Python, which is defined by its implementation,
 rather than by a standard, can move much more quickly.  This debate,
 though, is really more about the best model for language definition
 than about what the ideal language is like.

 Python is not Perl and it has in principle always been defined by its
 reference manual,

And in Python's case, the reference manual is just an incomplete
description of the features offered by the implementation, and people
revel in features that are not yet in the reference manual.

 though until fairly recently it's fostered a style of relying on
 various ugly CPython artifacts like the reference counting GC.

That's not ugly.  The fact that CPython has a reference-counting GC
makes the lifetimes of objects predictable, which means that, as in
C++, and unlike in Java, you can use destructors to good effect.  This
is one of the huge boons of C++.  The predictability of lifespan makes
the language more expressive and powerful.  The move to deprecate
relying on this feature in Python is a bad thing, if you ask me, and
removes one of the advantages that Python had over Lisp.
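
The pattern in question -- deterministic under CPython's refcounting,
though the language reference makes no such promise:

class Resource(object):
    def __init__(self, name):
        self.name = name
    def __del__(self):
        # Under CPython this runs as soon as the last reference dies;
        # other implementations may defer it arbitrarily.
        print 'releasing', self.name

def work():
    r = Resource('db handle')
    # ... use r ...
    # under CPython, __del__ fires right here, as r goes out of scope

work()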

 Lisp accumulated a lot of cruft over the decades and it kept some
 baggage that it really could have done without.

Indeed -- true of most languages.  Of course, there have been quite a
few Lisp dialects that have been cleaned up in quite a few ways (e.g.,
Dylan), but they, of course, have a hard time achieving any
significant traction.

 I don't think Python's designers learned nearly as much from Lisp as
 they could have, and Python has suffered because of it.

Amen.

 Lisp still has an awesome beauty in both the CL and Scheme
 incarnations.

Indeed.

 But it's like listening to Elvis music--even if it can still get you
 dancing, at the end of the day it's still a reflection of a bygone
 culture.

Lisp is more like The Beatles.

And it's not bygone -- it's just nichified.  Lisp is forever -- you'll
see.

|oug
-- 

Re: Python's only one way to do it philosophy isn't good?

2007-06-24 Thread Douglas Alan
Steven D'Aprano [EMAIL PROTECTED] writes:

 On Sat, 23 Jun 2007 14:56:35 -0400, Douglas Alan wrote:

 How long did it take you to write the macros, and use them, compared
 to running Pylint or Pychecker or equivalent?

 An hour?  Who cares?  You write it once and then you have it for the
 rest of your life.  You put it in a widely available library, and then
 *every* programmer also has it for the rest of their lives.  The
 amortized cost: $0.00.  The value: priceless.

 Really? Where do I download this macro? How do I find out about it? How
 many Lisp programmers are using it now?

(1) I didn't have to write such a macro for Lisp, as Lisp works
differently.  For one thing, Lisp already has let and set special
forms.  (Lisp uses the term special form for what Python would call
a statement, but Lisp doesn't call them statements since they return
values.)

(2) You act as if I have no heavy criticisms of Lisp or the Lisp
community.  I critique everything with equal vigor, and keep an eye
out for the good aspects and ideas of everything with equal vigor.

 How does your glib response jibe with your earlier claims that the
 weakness of Lisp/Scheme is the lack of good libraries?

(1) See above. (2) My response wasn't glib.

 Googling for ' Douglas Allen download lisp OR scheme ' wasn't very
 promising.

(1) You spelled my name wrong.  (2) I haven't written any libraries
for any mainstream dialects of Lisp since there was a web.  I did
write a multiple dispatch lookup cacher for a research dialect of
Lisp, but it  was just an exercise for a version of Lisp that few
people have ever used.

 In fairness, the various Python lints/checkers aren't part of the standard
 library either, but they are well-know standards.

In general I don't like such checkers, as I tend to find them more
annoying than useful.

 Thanks, but that's just too syntactically ugly and verbose for me to
 use.

 Syntactically ugly? Verbose?

 Compare yours with mine:

 let x = 0
 let y = 1
 let z = 2
 set x = 99 

 (Looks like BASIC, circa 1979.)

It looks like a lot of languages.  And there's a reason for that -- it
was a good idea.

 variables.declare(x=0, y=1, z=2)
 variables.x = 99

 (Standard Python syntax.)

 I don't think having two easily confused names, let and set is an
 advantage,

Let and set are not easily confused.  Lisp programmers have had
absolutely no problem keeping the distinction separate for the last 47
years now.

 but if you don't like the word declare you could change it to
 let, or change the name of the module to set (although that runs the
 risk of confusing it with sets).

 Because this uses perfectly stock-standard Python syntax, you could even
 do this, so you type fewer characters:

 v = variables
 v.x = 99

 and it would Just Work. 

I wouldn't program that way, and no one that I know would either.

In this regard you sound exactly like all the C++ folks who, when you
point out that something in C++ is inadequate for one's needs, point
you at some cumbersome and ugly solution and then tell you that since
C++ can already deal with the complaint, there's no good reason to
consider changing C++.  Consequently, C++ still doesn't have
a finally statement, and it requires either making instance
variables public or forcing the programmer to write lots of
boilerplate code writing setter and getter functions.  Fortunately,
the Python developers finally saw the errors of their ways in this
regard and fixed the situation.  But, it seems to me that you would
have been one of those people saying that there's no need to have a
way of overriding attribute assignment and fetching, as you can always
just write all that extra boilerplate code, or instead add an extra
layer of indirection (proxy objects) in your instance data to have
things done the way you want, at the expense of ugly code.
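
The fix in question is Python's property machinery, which lets
attribute syntax stay plain while the class intercepts it -- e.g.:

class Temperature(object):
    def __init__(self):
        self._celsius = 0.0
    def _get_celsius(self):
        return self._celsius
    def _set_celsius(self, value):
        if value < -273.15:
            raise ValueError('below absolute zero')
        self._celsius = value
    celsius = property(_get_celsius, _set_celsius)

t = Temperature()
t.celsius = 20.0   # plain attribute syntax; the setter still runs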

 Not only that, but my fellow Python programmers would be sure to
 come and shoot me if I were to code that way.

 *shrug* They'd shoot you if you used let x = 0 too.

Clearly you are not familiar with the programmers that I work with.
As I mentioned previously, at least one of them is quite upset about
the auto-declaration feature of most scripting languages, and your
suggestion would not make her any happier.

 One of the reasons that I want to use Python is because I like reading
 and writing code that is easy to read and looks good.  I don't want to
 bend it to my will at the expense of ugly looking code.

 But the ugly looking code is stock-standard Python syntax.

There are many things that cannot be done elegantly in stock Python
syntax (e.g. multiple predicate dispatch), which is why, when
programming in Python, one often sticks to doing things the way that
*can* be done elegantly.  (This can often result in programs that are
structured less elegantly in the large, however.)  If you don't
recognize this, then you must be livid over the addition to Python of
decorators, list and generator comprehensions, etc.  After all, Python
is 

Re: Python's only one way to do it philosophy isn't good?

2007-06-24 Thread Robert Brown

Steven D'Aprano [EMAIL PROTECTED] writes:
 Graham talks about 25% of the Viaweb code base being macros. Imagine how
 productive his coders would have been if the language was not quite
 so minimalistic, so that they could do what they wanted without the
 _lack_ of syntax getting in the way.

Paul Graham's Viaweb code was written in Common Lisp, which is the least
minimalistic dialect of Lisp that I know.  Even though they were using this
powerful tool, they still found it useful to create new syntactic
abstractions.  How much less productive would they have been had they not
had this opportunity?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-24 Thread Douglas Alan
Steven D'Aprano [EMAIL PROTECTED] writes:

 You seem oblivious to the fact that one of the huge benefits of Python
 is its elegant and readable syntax.  The problem with not having a
 flexible syntax, is that a programming language can't provide
 off-the-shelf an elegant syntax for all functionality that will ever
 be needed.

 It is hardly off-the-shelf if somebody has to create new syntax
 for it.

Ummm, that's my point.  No language can provide all the syntax that
will ever be needed to write elegant code.  If module authors can
provide the syntax needed to use their module elegantly, then problem
solved.

 Eventually programmers find themselves in need of new
 elegant functionality, but without a corresponding elegant syntax to
 go along with the new functionality, the result is code that does not
 look elegant and is therefore difficult to read and thus maintain.

 That's true, as far as it goes, but I think you over-state your
 case.

I do not.

It is so easy for you, without *any* experience with a language (i.e.,
Lisp) or its community to completely dismiss the knowledge and wisdom
acquired by that community.  Doesn't that disturb you a bit?

 The syntax included in Python is excellent for most things, and even
 at its weakest, is still good. I can't think of any part of Python's
 syntax that is out-and-out bad.

The proposed syntax for using the proposed predicate-based multimethod
library is ungainly.

Until decorators were added to the language, the way to do things that
decorators are good for was ugly.  Decorators patch up one ugliness,
but who wants Python to become an old boat with lots of patches?
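
Compare the pre-2.4 spelling with the decorator spelling of the very
same thing:

class C(object):
    def f():
        return 42
    f = staticmethod(f)   # the old, after-the-fact spelling

class D(object):
    @staticmethod         # the 2.4 spelling; identical effect
    def f():
        return 42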

Nearly every addition made to Python since 1.5 could have been done in
the standard library, rather than in the core language, if Python had
a good macro system.  The exceptions, I think, are objects all the way
down, and generators.  Though generators could have been done in the
standard library too, if Python had first-class continuations, as
Scheme and Ruby do.

Over time, an infinite number of examples will turn up like this, and
I claim (1) that it is better to modify the standard library than to
modify the language implementation, and that (2) it is better to allow
people to experiment with language features without having to modify
the implementation, and (3) that it is better to allow people to
distribute new language features for experimentation or production in
a loadable, modular fashion, and (4) that it is better to allow
application developers to develop new language features for their
application frameworks than to not.

 The reality is, one can go a long, long, long distance with Python's
 syntax.

And you can go a long, long way with Basic, or Fortran, or C, or C++,
or Haskell, or Lisp.  None of this implies that there aren't
deficiencies in all of these languages.  Python is no exception.
Python just happens to be better than most in a number of significant
regards.

 Most requests for new syntax I've seen fall into a few
 categories:

 * optimization, e.g. case, repeat, multi-line lambda

I don't give a hoot about case or repeat, though a Lisp-like loop
macro might be nice.  (The loop macro is a little mini-language
optimized for coding complicated loops.)  A multi-line lambda would
be very nice.

 * language Foo looks like this, it is kewl

Sometimes language Foo has features that are actually important for a
specific application or problem domain.  It's no accident, for
instance, that Lisp is still the preferred language for doing AI
research.  It's better for Python if Python can accommodate these
applications and domains than for Python to give up these markets to
Foo.

 * the usual braces/whitespace flamewars
 * trying to get static type checking into the language


 So let's be specific -- what do you think Python's syntax is missing? If
 Python did have a macro facility, what would you change?

In addition to the examples given above, symbols would be nice.  Lisp
has 'em, Ruby has 'em, Python doesn't.  They are very useful.
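
The nearest stock approximation is interned strings, which buy the
cheap identity comparison but not a distinct type:

a = intern('foo')   # a builtin in Python 2; sys.intern in Python 3
b = intern('foo')
assert a is b       # symbol-style identity comparison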

An elegant multimethod-based object system will be essential for
every language someday, when the time is right for people to
understand the advantages.

Manifest typing will be essential.

A backtracking system is important for some applications.  Perhaps all
applications, someday.

The ability to make mini-languages for specific domains, like fields
of math and science, is very useful, so the mathematicians and
scientists can denote things in a notation that is closer to the
notation that they actually work in.

Etc., etc., etc.  The future is long, and our ability to peer into it
is blurry, and languages that can adapt to the unforeseen needs of that
blurry future are the ones that will survive.

For instance, I can state with almost 100% certainty that one hundred
years from now, some dialect of Lisp will still be around and in
common usage.  I can't say the same thing about Python.  I can't say
that about Python ten years from 

Re: Python's only one way to do it philosophy isn't good?

2007-06-24 Thread Graham Breed
Steven D'Aprano wrote:

 But if you really want declarations, you can have them.

 >>> import variables
 >>> variables.declare(x=1, y=2.5, z=[1, 2, 4])
 >>> variables.x = None
 >>> variables.w = 0
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "variables.py", line 15, in __setattr__
     raise self.DeclarationError("Variable '%s' not declared" % name)
 variables.DeclarationError: Variable 'w' not declared
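
The variables module itself isn't shown anywhere in the thread; here
is a minimal sketch that would produce the behaviour above (an assumed
implementation, not necessarily Steven's):

# variables.py -- assumed reconstruction from the traceback above
import sys

class _Variables(object):
    class DeclarationError(TypeError):
        pass
    def declare(self, **bindings):
        # writes directly into __dict__, bypassing __setattr__ below
        self.__dict__.update(bindings)
    def __setattr__(self, name, value):
        if name not in self.__dict__:
            raise self.DeclarationError(
                "Variable '%s' not declared" % name)
        self.__dict__[name] = value

# Replace the module with an instance so that plain attribute
# assignment on "variables" goes through __setattr__ above.
sys.modules[__name__] = _Variables()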

Another way is to decorate functions with their local variables:

>>> from strict import my
>>> @my("item")
... def f(x=1, y=2.5, z=[1,2,4]):
...     x = float(x)
...     w = float(y)
...     return [item+x-y for item in z]
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "strict.py", line 11, in dec
    raise DeclarationError("No slot for %s" % varname)
strict.DeclarationError: No slot for w

and the implementation

import re

class DeclarationError(TypeError): pass

def my(slots=""):
    tokens = slots.split()
    def dec(func):
        code = func.func_code  # func.__code__ in later Pythons
        # every local that isn't an argument must be declared in slots
        for varname in code.co_varnames[code.co_argcount:]:
            if re.match(r'\w+$', varname) and varname not in tokens:
                raise DeclarationError("No slot for %s" % varname)
        return func
    return dec


The best way to catch false rebindings is to stick a comment with the
word rebound after every statement where you think you're rebinding
a variable.  Then you can search your code for cases where there's a
rebound comment but no rebinding.  Assuming you're the kind of
person who knows that false rebindings can lead to perplexing bugs,
but doesn't check apparent rebindings in a paranoid way every time a
perplexing bug comes up, anyway.  (They aren't that common in modern
python code, after all.)  And that you remembered to add the comments
(like you would have remembered the let and set).  And you're also the
kind of person who's troubled by perplexing bugs but doesn't run a
fully fledged lint.  Maybe that's the kind of person who wouldn't put
up with anything short of a macro as in the original proposal.  All I
know is that it's the kind of person I don't want to second guess.


   Graham

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-24 Thread Douglas Alan
Graham Breed [EMAIL PROTECTED] writes:

 Another way is to decorate functions with their local variables:

 from strict import my
 @my(item)
 ... def f(x=1, y=2.5, z=[1,2,4]):
 ... x = float(x)
 ... w = float(y)
 ... return [item+x-y for item in z]

Well, I suppose that's a bit better than the previous suggestion, but
(1) it breaks the style rule of not declaring variables until you need
them, and (2) it doesn't catch double initialization.

 The best way to catch false rebindings is to stick a comment with
 the word rebound after every statement where you think you're
 rebinding a variable.

No, the best way to catch false rebindings is to have the computers
catch such errors for you.  That's what you pay them for.

 Then you can search your code for cases where there's a rebound
 comment but no rebinding.

And how do I easily do that?  And how do I know if I even need to in
the face of sometimes subtle bugs?

 Assuming you're the kind of person who knows that false rebindings
 can lead to perplexing bugs, but doesn't check apparent rebindings
 in a paranoid way every time a perplexing bug comes up, anyway.
 (They aren't that common in modern python code, after all.)

They're not that uncommon, either.

I've certainly had it happen to me on several occasions, and sometimes
they've been hard to find as I might not even see the mispeling even
if I read the code 20 times.

(Like the time I spent all day trying to figure out why my assembly
code wasn't working when I was a student and finally I decided to ask
the TA for help, and while talking him through my code so that he
could tell me what I was doing wrong, I finally noticed the rO where
there was supposed to be an r0.  It's amazing how useful a TA can
be, while doing nothing at all!)

 And you're also the kind of person who's troubled by perplexing bugs
 but doesn't run a fully fledged lint.

Maybe PyLint is better than Lint for C was (hated it!), but my idea of
RAD does not include wading through piles of useless warning messages
looking for the needle warning in the warning haystack.  Or running
any other programs in the midst of my code, run, code, run, ..., loop.

 Maybe that's the kind of person who wouldn't put up with anything
 short of a macro as in the original proposal.  All I know is that
 it's the kind of person I don't want to second guess.

As it is, I code in Python the way that a normal Python programmer
would, and when I have a bug, I track it down through sometimes
painstaking debugging as a normal Python programmer would.  Just as
any other normal Python programmer, I would not use the alternatives
suggested so far, as I'd find them cumbersome and inelegant.  I'd
prefer not to have been bit by the bugs to begin with.  Consequently,
I'd use let and set statements, if they were provided (or if I could
implement them), just as I have the equivalents to let and set in
every other programming language that I commonly program in other than
Python.

|oug
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python's only one way to do it philosophy isn't good?

2007-06-24 Thread Michele Simionato
On Jun 23, 6:39 pm, Douglas Alan [EMAIL PROTECTED] wrote:

 One of the things that annoys me when coding in Python (and this is a
 flaw that even lowly Perl has a good solution for), is that if you do
 something like

  longVarableName = foo(longVariableName)

 You end up with a bug that can be very hard to track down.

You should really be using pychecker (as well as Emacs's
autocompletion feature ...):

~$ cat x.py
def foo(x): return x

longVariableName = 1
longVarableName = foo(longVariableName)

~$ pychecker -v x.py
Processing x...

Warnings...

x.py:4: Variable (longVarableName) not used

[I know you will not be satisfied with this, but pychecker is really
useful, since it catches many other errors that no amount of
macroprogramming would ever remove].

   Michele Simionato

-- 
http://mail.python.org/mailman/listinfo/python-list

