Re: explicitly declare closures???

2001-08-28 Thread Dave Mitchell

Ken Fox [EMAIL PROTECTED] wrote:
 We must be very careful not to confuse closure with Perl's
 current implementation of closure. You've stumbled onto a bug in
 Perl, not discovered a feature of closures. Perl's closures
 were horribly buggy until release 5.004. (Thanks Chip!)

Er, no, it's not a bug - or at least Gurusamy didn't think so.

 Closed variables are just local variables. There's nothing special
 about refering to a variable from an inner scope. You don't want
 to write
 
   sub foo {
 my $x;
 
 if (...) { my outer $x; $x = 0 }
 else { my outer $x; $x = 1 }
 
 $x;
   }
 
 do you? So why make people using closures do it?

The whole point is that closed variables *aren't* 'just local variables'.
The inner $x's in the following 2 lines are vastly different:

sub foo { my $x = ... { $x } }
sub foo { my $x = ... sub { $x } }

In the first line, the two $x's both refer to the same thing. In the
second line, they don't. To all intents and purposes, the inner $x in the
second line is declaring a new lexical which happens to grab the outer $x's
value at the time the anon sub is instantiated.

The reason why removing the 'middle' $x in the following

{ my $x = 'bar'; sub foo { $x; sub {$x} }}

causes the behaviour to change is that the middle $x implicitly gives
foo() a copy of $x at compile time. When the anon sub is cloned,
it picks up the current value of foo()'s $x. Without the middle $x, the
cloned sub picks up the outer $x instead.

Since the use of a bare lexical in a nested sub to all intents and purposes
introduces a new variable, I thought it would help *people* if this were
explicitly shown. It would also resolve some fuzzy scoping issues
by making things explicit. In the following, should the anon sub grab
foo()'s $x or the outer $x ?

{ my $x = 'bar'; sub foo {  {$x}; sub {$x}  }}

In bleedperl, the outer $x is grabbed, while the following line causes
foo()'s $x to be grabbed:

{ my $x = 'bar'; sub foo {  sub {$x}; {$x}  }}

Clearly one of them is a bug, but which one? No one on p5p seemed to want
to decide.

Use of an 'outer' declaration would make this explicit:

{ my $x = 'bar'; sub foo {  outer $x;  sub {$x} } }  # grab foo's $x
{ my $x = 'bar'; sub foo { {outer $x;} sub {$x} } }  # grab outer $x

Dave M.




Re: explicitly declare closures???

2001-08-28 Thread Ken Fox

Dave Mitchell wrote:
 The whole point is that closed variables *aren't* 'just local variables'.
 The inner $x's in the following 2 lines are vastly different:
 
 sub foo { my $x= ... { $x } }
 sub foo { my $x= ... sub { $x } }

You really need to learn what a closure is. There's a very nice book
called Structure and Interpretation of Computer Programs that can
give you a deep understanding. **

Anyways, it's important to understand that closures do not change the
scoping rules. A closure simply *captures* an existing environment.
If the environment isn't captured properly or changes after capture,
then you have buggy closures.
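
For reference, the textbook capture behaviour as a runnable sketch
(make_counter is just an illustrative name): each call creates a fresh
environment, and the returned sub keeps referring to that same environment
afterwards.

sub make_counter {
    my $n = 0;                      # this lexical is the captured environment
    return sub { ++$n };
}

my $c1 = make_counter();
my $c2 = make_counter();
print $c1->(), $c1->(), $c2->(), "\n";   # prints 121 -- each closure owns its $n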

 causes the behaviour to change is that the middle $x implicitly gives
 foo() a copy of $x at compile time. When the anon sub is cloned,
 it picks up the current value of foo()'s $x. Without the middle $x, the
 cloned sub picks up the outer $x instead.

You're speaking in Perl implementation terms. I've already told you
that if Perl acts the way you say it does, then Perl has buggy
closures. You don't need to explain a bug to know that one exists!

On page 260 in Programming Perl (3rd ed), Larry/Tom/Jon talk about
how Perl's closures (should) behave:

  In other words, you are guaranteed to get the same copy of
  a lexical variable each time ...
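
As a concrete illustration of that guarantee (a sketch, not an example from
the book): each pass of a foreach loop gets a fresh $i, and each anonymous
sub keeps the copy that was in scope when it was created.

my @subs;
for my $i (1 .. 3) {
    push @subs, sub { $i };      # captures this iteration's $i
}
print map { $_->() } @subs;      # prints 123
print "\n";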

IMHO bugs in Perl 5 shouldn't carry over to Perl 6. (Unless, of course,
we *like* the bugs... ;)

- Ken

** Unfortunately the term closure has two important meanings that
are not really related. We're talking about closing a subroutine's
environment, which is not how SICP uses the word. If you want a
Closures For 21 Dummies sort of book, this is not it.



Re: Expunge implicit @_ passing

2001-08-28 Thread Michael G Schwern

On Tue, Aug 28, 2001 at 09:10:40AM -0400, Ken Fox wrote:
 One of the cool things about Perl's OO system is that it lets
 us invent new type systems. This IMHO is its greatest strength.
 Perhaps this is also why some OO people hate Perl's OO?

Yes, this sort of thing FRIGHTENS THE HELL out of non-Perl people.
This is not a bad thing; it just means they have to stop expecting the
language designer to dictate their whole universe.  They can only see
hordes of malicious hackers and irresponsible junior programmers
blowing away their classes at run-time.

As the pendulum swings in the other direction you get mind-bogglingly
silly things like finalize which I just learned of today.


I'm going to be giving a talk about just this sort of thing at JAOO to
a room full of Java people.  Should be interesting.


-- 

Michael G. Schwern   [EMAIL PROTECTED]   http://www.pobox.com/~schwern/
Perl6 Quality Assurance [EMAIL PROTECTED]   Kwalitee Is Job One
Your average appeasement engineer is about as clued-up on computers as
the average computer hacker is about B.O.
-- BOFH



Re: explicitly declare closures???

2001-08-28 Thread Dave Mitchell

Ken Fox [EMAIL PROTECTED] wrote:
 You really need to learn what a closure is. There's a very nice book
 called Structure and Interpretation of Computer Programs that can
 give you a deep understanding. **

Quite possibly I do. Anyway, I've now got the book on order :-)

 You're speaking in Perl implementation terms. I've already told you
 that if Perl acts the way you say it does, then Perl has buggy
 closures. You don't need to explain a bug to know that one exists!

Okay, to humour me for a mo', what should the following 2 examples
output if Perl were doing the right thing?


sub pr { print $_[0] || 'undef', "\n" }

{ my $x = 'X'; sub f { $F = sub { pr $x } } }
f(); $F->();

{ my $y = 'Y'; sub g { pr $y; $G = sub { pr $y } } }
g(); $G->();


Dave.




finalization

2001-08-28 Thread Hong Zhang

 On Tue, Aug 28, 2001 at 09:13:25AM -0400, Michael G Schwern wrote:
  As the pendulum swings in the other direction you get mind-bogglingly
  silly things like finalize which I just learned of today.
 
 What's so silly about finalize?  It's pretty much identical to Perl's
 DESTROY.  (Except that Java's non-refcounting GC doesn't provide the
 same guarantees on when an object will be finalized.)  (And the problem
 that all too many JVMs have had buggy implementations of finalizers...
 that's an implementation issue, not a language one, though.)

Finalization is significantly different from DESTROY and from C++
destructors. The problem is that there is no guarantee the finalization
method will ever be called, or when it will be called. For example, should
exit() call all finalization methods? That is very likely to cause deadlock.
I don't think C++ will unwind all thread stacks and call destructors in this
case.

Most finalization is used to deal with external resources, such as open
files, sockets, and windows. You don't really want to depend on finalization
for those, since you are very likely to run out of the default file
descriptor limit before the finalization kicks in. The rule of thumb is to
let finalization clean up resources when an unexpected exception happens. It
is more like a safety net or parachute: it will most likely help you in an
emergency, but there is no guarantee it will work.
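
For contrast, a minimal Perl 5 sketch of relying on prompt destruction
instead (TempFile is a made-up class): under reference counting the handle
is closed as soon as the object goes out of scope, which is exactly the
promptness that finalization does not guarantee.

package TempFile;

sub new {
    my ($class, $path) = @_;
    open my $fh, '>', $path or die "open $path: $!";
    return bless { fh => $fh, path => $path }, $class;
}

sub DESTROY {
    my $self = shift;
    close $self->{fh};              # runs as soon as the last reference is gone
    unlink $self->{path};
}

package main;

for my $i (1 .. 10_000) {
    my $tmp = TempFile->new("/tmp/demo.$i");
    # ... use $tmp ...
}   # DESTROY fires at the end of each iteration, so descriptors never pile up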

Hong



Re: Expunge implicit @_ passing

2001-08-28 Thread Michael G Schwern

On Tue, Aug 28, 2001 at 10:47:35AM -0700, Damien Neil wrote:
 On Tue, Aug 28, 2001 at 09:13:25AM -0400, Michael G Schwern wrote:
  As the pendulum swings in the other direction you get mind-bogglingly
  silly things like finalize which I just learned of today.
 
 What's so silly about finalize?

Sorry, I meant final.  final classes and methods.  The idea that you
can prevent someone from subclassing your class or overriding your
methods.  I've seen things that hinder reuse, but this is the first
time I've seen one that violently blocks reuse!

Wow.  I'm reading the Sun tutorial on the subject.  Interesting reading.
http://java.sun.com/docs/books/tutorial/java/javaOO/final.html

They list two reasons to make your class final.  One is security
(which might actually be valid, but I doubt it will hold up to
determined attack), the other though...

  You may also wish to declare a class as final for object-oriented
  design reasons. You may think that your class is perfect or that,
  conceptually, your class should have no subclasses.

The idea that a class is either 'perfect' or 'complete' has to be the
silliest, most arrogant thing I've ever heard!


Anyhow, just don't anyone suggest putting this in Perl 6.  I know
where you live.


-- 

Michael G. Schwern   [EMAIL PROTECTED]   http://www.pobox.com/~schwern/
Perl6 Quality Assurance [EMAIL PROTECTED]   Kwalitee Is Job One
Good tidings, my native American Indian friend!  America will soon again
be yours!  Please accept 5th Avenue as an initial return!



RE: Expunge implicit @_ passing

2001-08-28 Thread Hong Zhang

 Sorry, I meant final.  final classes and methods.  The idea that you
 can prevent someone from subclassing your class or overriding your
 methods.  I've seen things that hinder reuse, but this is the first
 time I've seen one that violently blocks reuse!

final is only useful in a language with strongly typed variables, such as
Java. If the variable is not strongly typed, people can always use
delegation or a has-a scheme to subvert the class, even if the class itself
is declared final. For a truly well designed class, which has no
public/protected fields and no protected methods, it really does not matter
whether it is final or not, since a subclass cannot do anything beyond the
class's public interface.
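
A rough Perl sketch of that delegation escape hatch (Counter and its methods
are made up for illustration): hold the "final" object inside a wrapper,
override the one method you care about, and forward the rest.

package LoudCounter;

sub new {
    my ($class, @args) = @_;
    return bless { inner => Counter->new(@args) }, $class;   # has-a, not is-a
}

sub increment {                       # the one method we want to change
    my $self = shift;
    warn "incrementing!\n";
    return $self->{inner}->increment(@_);
}

sub AUTOLOAD {                        # forward everything else untouched
    my $self = shift;
    our $AUTOLOAD;
    (my $method = $AUTOLOAD) =~ s/.*:://;
    return if $method eq 'DESTROY';
    return $self->{inner}->$method(@_);
}

1;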

Unless we want Perl to be strongly typed everywhere, I doubt the usefulness
of final except for documentation purposes.

Hong



RE: Expunge implicit @_ passing

2001-08-28 Thread Brent Dax

# -Original Message-
# From: Michael G Schwern [mailto:[EMAIL PROTECTED]]
# Sent: Tuesday, August 28, 2001 4:35 PM
# To: [EMAIL PROTECTED]
# Subject: Re: Expunge implicit @_ passing
#
#
# On Tue, Aug 28, 2001 at 10:47:35AM -0700, Damien Neil wrote:
#  On Tue, Aug 28, 2001 at 09:13:25AM -0400, Michael G Schwern wrote:
#   As the pendulum swings in the other direction you get mind-bogglingly
#   silly things like finalize which I just learned of today.
# 
#  What's so silly about finalize?
#
# Sorry, I meant final.  final classes and methods.  The idea that you
# can prevent someone from subclassing your class or overriding your
# methods.  I've seen things that hinder reuse, but this is the first
# time I've seen one that violently blocks reuse!

On the other hand, it could stop some of the really stupid uses for
inheritance I've seen.  The dumbest one was in the high-school Advanced
Placement C++ classes--the queue and stack classes inherited from the
array class!  (It was private inheritance, so you couldn't tell this
from the outside.)  This was one of the biggest kludges I've ever seen,
and a good example of a bad use of is-a.  It also meant that the class
was nearly impossible to modify for different storage--it was far easier
to just write a new class with the same interface.  Stupid, stupid,
stupid.
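
For what it's worth, a quick Perl sketch of the has-a version (illustrative
only): the stack owns an array instead of inheriting from one, so the
storage could be swapped later without touching the interface.

package Stack;

sub new   { return bless { items => [] }, shift }   # storage stays private
sub push  { my $self = shift; push @{ $self->{items} }, @_; return $self }
sub pop   { my $self = shift; return pop @{ $self->{items} } }
sub peek  { my $self = shift; return $self->{items}[-1] }
sub count { my $self = shift; return scalar @{ $self->{items} } }

package main;

my $s = Stack->new;
$s->push(1, 2, 3);
print $s->pop, "\n";    # 3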

--Brent Dax
[EMAIL PROTECTED]




Re: finalization

2001-08-28 Thread Jan Dubois

On Tue, 28 Aug 2001 21:07:03 -0400 (EDT), Sam Tregar [EMAIL PROTECTED]
wrote:

 On Wed, 29 Aug 2001, Jeremy Howard wrote:
 
  The answer used in .NET is to have a dispose() method (which is not a
  special name--just an informal standard) that the class user calls
  manually to clean up resources. It's not an ideal solution but there
  don't seem to be many other practical options.
 
 Well, there's the Perl 5 reference counting solution.  In normal cases
 DESTROY is called as soon as it can be.  Of course we're all anxious to
 get into the leaky GC boat with Java and C# because we've heard it's
 faster.  I wonder how fast it is when it's halfway under water and out of
 file descriptors.

With GC, it is of course again the duty of the programmer to make sure the
resources are freed on time:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpapndx/html/_cor_finalize_and_dispose.asp

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguidnf/html/cpconcleaningupunmanagedresources.asp

C# has some syntactic sugar to make this a little more convenient to use:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/csspec/html/vclrfcsharpspec_8_13.asp

Having to do explicit resource management is a real pain when you are used
to Perl 5's reference counting. :)

BTW, the lure of GC over refcounting is *not* the speed (it is only
slightly faster).  The advantage is that it takes care of circular
references.  And it can remove sandbars in your heap for long running
processes.  It is also supposed to make your programs more robust because
you don't have to bother with keeping your reference counts right.  But
it has a ton of its own problems with resource management, so I'm not
convinced there.
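
A minimal Perl 5 sketch of the circular-reference problem a tracing GC would
make moot (Node is just an illustrative class; Scalar::Util::weaken provides
the usual manual workaround):

use Scalar::Util qw(weaken);

package Node;
sub new     { return bless { name => $_[1] }, $_[0] }
sub DESTROY { print "destroying $_[0]{name}\n" }

package main;

{
    my $parent = Node->new('parent');
    my $child  = Node->new('child');
    $parent->{child} = $child;
    $child->{parent} = $parent;     # cycle: refcounts never drop to zero
}
# Neither DESTROY has run yet -- the pair leaks until global destruction.

{
    my $parent = Node->new('parent2');
    my $child  = Node->new('child2');
    $parent->{child} = $child;
    $child->{parent} = $parent;
    weaken($child->{parent});       # break the cycle by hand
}
# Both objects are destroyed promptly when this block exits.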

-Jan




Re: finalization

2001-08-28 Thread Damien Neil

On Tue, Aug 28, 2001 at 09:07:03PM -0400, Sam Tregar wrote:
 Well, there's the Perl 5 reference counting solution.  In normal cases
 DESTROY is called as soon as it can be.  Of course we're all anxious to
 get into the leaky GC boat with Java and C# because we've heard it's
 faster.  I wonder how fast it is when it's halfway under water and out of
 file descriptors.

Speaking purely for myself, I'd rather have a non-refcounting GC
because I'm rather tired of going through elaborate tricks to avoid
ever creating circular data structures.

- Damien



Re: finalization

2001-08-28 Thread Jeremy Howard

Sam Tregar wrote:
 On Wed, 29 Aug 2001, Jeremy Howard wrote:

  The answer used in .NET is to have a dispose() method (which is not a
  special name--just an informal standard) that the class user calls
  manually to clean up resources. It's not an ideal solution but there
  don't seem to be many other practical options.

 Well, there's the Perl 5 reference counting solution.  In normal cases
 DESTROY is called as soon as it can be.  Of course we're all anxious to
 get into the leaky GC boat with Java and C# because we've heard it's
 faster.  I wonder how fast it is when it's halfway under water and out of
 file descriptors.

I don't think speed is where the interest is coming from. GC should fix
common memory problems, such as the nasty circular references issue that has
caught all of us at some time.





Re: finalization

2001-08-28 Thread Jan Dubois

On Tue, 28 Aug 2001 19:04:20 -0700, Hong Zhang [EMAIL PROTECTED]
wrote:

 Normally, GC is more efficient than ref counting, since you have many
 advanced GC algorithms to choose from and don't have to pay malloc overhead.

You still need to malloc() your memory; however, I realize that the
allocator can be *really* fast here.  But still, you give a lot of the
gain back during the mark-and-sweep phase, especially if you also
move/compact the memory.

The big gain only comes in when your program is small/quick enough to
actually finish before the GC kicks in the first time (think CGI).  In
that case you just discard the whole heap instead of doing a proper
garbage collection (unless of course someone thought they could still do
something inside a finalizer during global destruction and you still need
to finalize every other object on your heap :).

 On MP machines, ref counting is really slow because of the atomic
 instructions, which are very slow. I measured atomic x86 instructions such
 as LOCK INC DWORD PTR [ECX] a long time ago. I believe each instruction
 takes about 10 to 30 clock cycles.

Don't even dream of accessing Perl scalars simultaneously from multiple
threads without some kind of locking.  To keep their internal caching
behavior consistent, you'll need to lock them for even the most simple
operations (see the failure of the Perl 5.005 thread model).

But even if you give up the caching behavior, what about strings?  Atomic
updates, eh?  Welcome to the world of immutable strings.  Just allocate a
new string every time you need to modify it and update the string
reference atomically.  You want a modifiable buffer?  Get a StringBuilder
object and lock it on every access. :)  We could just as well switch to
Java or C#.
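
For flavour, a small sketch of what that per-access locking looks like in
Perl's later ithreads/threads::shared model (not the 5.005 model under
discussion):

use threads;
use threads::shared;

my $count : shared = 0;

sub bump {
    for (1 .. 1000) {
        lock($count);      # without this, ++ is a non-atomic read/modify/write
        $count++;
    }
}

my @workers = map { threads->create(\&bump) } 1 .. 4;
$_->join for @workers;

print "$count\n";          # 4000 with the lock; unpredictable without it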

-Jan