Bryan C. Warnock [EMAIL PROTECTED] wrote:
On Thursday 06 September 2001 06:16 am, Dave Mitchell wrote:
One further worry of mine concerns the action of %MY:: on unintroduced
variables (especially the action of delete).
my $x = 100;
{
    my $x = (%MY::{'$x'} = \200, $x+1);
From: Dave Mitchell [mailto:[EMAIL PROTECTED]]
Bryan C. Warnock [EMAIL PROTECTED] mused:
Consider it like, oh, PATH and executables:
`perl` will search PATH and execute the first perl
found, but 'rm perl' will not. It would only remove
a perl in my current scope..., er, directory.
At 02:19 PM 9/6/2001 +0200, Bart Lateur wrote:
On Tue, 04 Sep 2001 18:38:20 -0400, Dan Sugalski wrote:
At 09:20 AM 9/5/2001 +1100, Damian Conway wrote:
The main uses are (surprise):
* introducing lexically scoped subroutines into a caller's scope
I knew there was something
Dan Sugalski wrote:
... you have to take into account the possibility that a
variable outside your immediate scope (because it's been defined in an
outer level of scope) might get replaced by a variable in some intermediate
level, things get tricky.
Other things get tricky too. How about
DS == Dan Sugalski [EMAIL PROTECTED] writes:
DS> my $foo = 'a';
DS> {
DS>     {
DS>         %MY[-1]{'$foo'} = 'B';
DS>         print $foo;
DS>     }
DS> }
explain %MY[-1] please.
my impression is that that is illegal/meaningless in perl6. maybe you meant
something with caller and
At 11:51 AM 9/6/2001 -0400, Uri Guttman wrote:
DS == Dan Sugalski [EMAIL PROTECTED] writes:
DS> my $foo = 'a';
DS> {
DS>     {
DS>         %MY[-1]{'$foo'} = 'B';
DS>         print $foo;
DS>     }
DS> }
explain %MY[-1] please.
my impression is that that is
Hong Zhang wrote:
How do you define "currently loaded"? If things are lazily loaded,
the stuff you expect to have been loaded may not have been loaded.
We could load placeholders that go and load the bigger methods
as needed, for instance.
--
David
At 11:44 AM 9/6/2001 -0400, Ken Fox wrote:
Yeah, I can see it now. Perl 6 has three kinds of variables:
dynamically scoped package variables, statically scoped lexical
variables and Magical Disappearing Reappearing Surprise Your
Friends Every Time variables. Oh, and by the way, lexicals
are
One further worry of mine concerns the action of %MY:: on unintroduced
variables (especially the action of delete).
my $x = 100;
{
    my $x = (%MY::{'$x'} = \200, $x+1);
    print "inner=$x, ";
}
print "outer=$x";
I'm guessing this prints "inner=201, outer=200"
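Whatever %MY:: ends up doing here, the "unintroduced" rule itself can be modeled. Below is a minimal Python sketch (not Perl, and every name in it is invented for illustration) of a pad chain in which, as in Perl 5, a `my` variable is not introduced until after its initializing expression has been evaluated, so the right-hand side still sees the outer variable:

```python
# Hypothetical model of the "introduction" rule: a freshly declared
# lexical is not visible inside its own initializing expression.

class Scope:
    def __init__(self, outer=None):
        self.vars = {}          # the pad: name -> value
        self.outer = outer

    def look(self, name):
        # walk the scope chain outward until the name is found
        scope = self
        while scope is not None:
            if name in scope.vars:
                return scope.vars[name]
            scope = scope.outer
        raise NameError(name)

outer = Scope()
outer.vars['$x'] = 100

inner = Scope(outer)
# "my $x = $x + 1": evaluate the RHS *before* introducing the new $x,
# so the lookup still finds the outer variable.
rhs = inner.look('$x') + 1      # sees outer $x == 100
inner.vars['$x'] = rhs          # only now is the inner $x introduced

print(inner.look('$x'))         # 101
print(outer.look('$x'))         # 100
```

The open question in the thread is what a `%MY::` store or delete does to this picture between evaluation and introduction.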
As for
my $x = 50;
{
my $x =
Damian Conway wrote:
proper lexically-scoped modules.
sub foo { print "outer foo\n" };
{
    local *foo = sub { print "inner foo\n" };
    foo();
};
foo();
did what I wanted it to. Should I extend Pollute:: to make
this possible:
in file
Simon Cozens [EMAIL PROTECTED] wrote:
On Thu, Sep 06, 2001 at 11:05:37AM +0100, Dave Mitchell wrote:
I'm trying to get my head round the relationship between pad lexicals,
pad tmps, and registers (if any).
It's exactly the same as the relationship between auto variables, C
temporaries
On Thu, Sep 06, 2001 at 02:35:53PM +0100, Dave Mitchell wrote:
The Perl equivalent $a = $a + $a*$b requires a
temporary PMC to store the intermediate result ($a*$b)
Probably a temporary INT or NUM register, in fact. But I see
your point. I wouldn't be surprised if some of the PMC registers
had
On Mon, 03 Sep 2001 19:29:09 -0400, Ken Fox wrote:
*How* are they fundamentally different?
Perl's local variables are dynamically scoped. This means that
they are *globally visible* -- you never know where the actual
variable you're using came from. If you set a local variable,
all the
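Ken's point about global visibility is easy to make concrete. Here is a hedged Python sketch (names like `local_set` and `leave_scope` are invented for illustration) of `local`-style dynamic scoping as a save/restore stack over one global symbol table, which is essentially how Perl 5 implements it:

```python
# Dynamic scoping a la Perl's `local`: assignment saves the old value
# and restores it on scope exit, so every function called in between
# sees the temporary value -- the "globally visible" effect described.

SAVESTACK = []
GLOBALS = {'$Q': 3}

def local_set(name, value):
    # local $Q = value;  -- save the old value, install the new one
    SAVESTACK.append((name, GLOBALS[name]))
    GLOBALS[name] = value

def leave_scope():
    # closing brace: restore whatever was saved
    name, old = SAVESTACK.pop()
    GLOBALS[name] = old

def callee():
    # never mentions $Q, yet sees the localized value: that is the
    # action-at-a-distance being described
    return GLOBALS['$Q']

local_set('$Q', 4)      # { local $Q = 4;
print(callee())         # 4
leave_scope()           # }
print(callee())         # 3
```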
Simon Cozens [EMAIL PROTECTED] wrote:
On Thu, Sep 06, 2001 at 02:35:53PM +0100, Dave Mitchell wrote:
The Perl equivalent $a = $a + $a*$b requires a
temporary PMC to store the intermediate result ($a*$b)
Probably a temporary INT or NUM register, in fact. But I see
your point. I wouldn't
Dave Mitchell wrote:
The Perl equivalent $a = $a + $a*$b requires a
temporary PMC to store the intermediate result ($a*$b). I'm asking
where this tmp PMC comes from.
The PMC will be stashed in a register. The PMC's value will be
stored either on the heap or in a special memory pool reserved
for
On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
So I guess I'm asking whether we're abandoning the Perl 5 concept
of a pad full of tmp targets, each hardcoded as the target for individual
ops to store their tmp results in.
Not entirely; the last thing we want to be doing is
Simon Cozens [EMAIL PROTECTED] wrote:
On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
So I guess I'm asking whether we're abandoning the Perl 5 concept
of a pad full of tmp targets, each hardcoded as the target for individual
ops to store their tmp results in.
Not
On Sun, Sep 02, 2001 at 11:56:10PM +0100, Simon Cozens wrote:
Here's the first of a bunch of things I'm writing which should give you
practical information to get you up to speed on what we're going to be doing
with Parrot so we can get you coding away. :) Think of them as having a
Dave Mitchell wrote:
So how does that all work then? What does the parrot assembler for
foo($x+1, $x+2, ..., $x+65)
The arg list will be on the stack. Parrot just allocates new PMCs and
pushes the PMC on the stack.
I assume it will look something like
new_pmc pmc_register[0]
add
Simon Cozens wrote:
I want to get on with writing all the other documents like this one, but
I don't want the questions raised in this thread to go undocumented and
unanswered. I would *love* it if someone could volunteer to send me a patch
to the original document tightening it up in the
On Thu, Sep 06, 2001 at 10:46:56AM -0400, Ken Fox wrote:
Sure. I can do that while *waiting patiently* for Parrot to be
released. ;)
Don't tell Nat I said this, but we're hoping for around the
beginning of next week.
Simon
At 10:44 AM 9/6/2001 +0200, Bart Lateur wrote:
On Mon, 03 Sep 2001 19:30:33 -0400, Dan Sugalski wrote:
The less real question, "Should pads be hashes or arrays?", can be answered
by whichever is ultimately cheaper. My bet is we'll probably keep the
array structure with embedded names, and do a
At 10:41 AM 9/6/2001 +0200, Bart Lateur wrote:
First of all, currently, you can localize an element from a hash or an
array, even if the variable is lexically scoped.
This doesn't actually have anything to do with lexicals, globals, or pads.
And the reason the keyword local works on elements of
On Mon, 03 Sep 2001 19:30:33 -0400, Dan Sugalski wrote:
The less real question, "Should pads be hashes or arrays?", can be answered
by whichever is ultimately cheaper. My bet is we'll probably keep the
array structure with embedded names, and do a linear search for those rare
times you're
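That design can be sketched directly. Below is a small Python model (all names invented for illustration) of an array pad with embedded names: compiled code addresses lexicals by integer slot on the hot path, and only a rare by-name lookup, of the kind %MY would need, pays for the linear search:

```python
# Sketch of the pad layout described: slots addressed by index, with
# the names kept alongside so by-name lookup remains possible.

class Pad:
    def __init__(self):
        self.names = []    # embedded names, parallel to the slots
        self.slots = []

    def declare(self, name, value):
        self.names.append(name)
        self.slots.append(value)
        return len(self.slots) - 1      # compiler remembers this index

    def get(self, index):
        # hot path: O(1) access by the compiled-in slot number
        return self.slots[index]

    def find(self, name):
        # rare path: O(n) linear search over the embedded names
        for i, n in enumerate(self.names):
            if n == name:
                return i
        return None

pad = Pad()
ix = pad.declare('$x', 100)
pad.declare('$y', 200)

print(pad.get(ix))                  # 100, by compiled-in index
print(pad.get(pad.find('$y')))      # 200, by runtime name lookup
```

The bet, as stated, is that `find` is called so rarely that a hash per pad isn't worth its overhead.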
At 10:45 AM 9/6/2001 -0400, Ken Fox wrote:
Dave Mitchell wrote:
So how does that all work then? What does the parrot assembler for
foo($x+1, $x+2, ..., $x+65)
The arg list will be on the stack. Parrot just allocates new PMCs and
pushes the PMC on the stack.
No, it won't actually.
At 05:08 PM 9/5/2001 -0500, David L. Nicol wrote:
what if:
* there is a way to say that no new classes will be introduced
Then pigs will probably be dive-bombing the Concorde, and demons ice
skating. This is the language Damian programs in, after all... :)
At 03:21 PM 9/6/2001 +0100, Dave Mitchell wrote:
Simon Cozens [EMAIL PROTECTED] wrote:
On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
So I guess I'm asking whether we're abandoning the Perl 5 concept
of a pad full of tmp targets, each hardcoded as the target for individual
From: Dave Mitchell [mailto:[EMAIL PROTECTED]]
Subject: pads and lexicals
Dave confused as always M.
I just wanted to say that I'm really enjoying this pad/lexical thread.
There's a lot of info passing back and forth that I don't believe is clearly
documented in perlguts, etc. I expect
At 10:11 AM 9/6/2001 -0500, Garrett Goebel wrote:
I just wanted to say that I'm really enjoying this pad/lexical thread.
There's a lot of info passing back and forth that I don't believe is clearly
documented in perlguts, etc. I expect when this thread runs its course,
you'll be a whole lot less
On 09/05/01 Nick Ing-Simmons wrote:
It's easier to generate code for a stack machine
True, but it is easier to generate FAST code for a register machine.
A stack machine forces a lot of book-keeping: either run-time inc/dec of sp,
or alternatively compile-time what-is-the-offset-now stuff. The
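The trade-off can be made concrete with two toy evaluators for `$a = $a + $a*$b`, one stack-based and one register-based. This is an illustrative Python sketch under invented instruction names, not Parrot's actual instruction set:

```python
# The book-keeping difference, using $a = $a + $a * $b.  Stack code
# threads every value through push/pop (run-time sp traffic); register
# code names its operands directly.

def run_stack(code, vars):
    stack = []
    for op, *args in code:
        if op == 'push':    stack.append(vars[args[0]])
        elif op == 'mul':   b = stack.pop(); a = stack.pop(); stack.append(a * b)
        elif op == 'add':   b = stack.pop(); a = stack.pop(); stack.append(a + b)
        elif op == 'store': vars[args[0]] = stack.pop()
    return vars

def run_regs(code, vars, nregs=4):
    r = [0] * nregs
    for op, *args in code:
        if op == 'load':    r[args[0]] = vars[args[1]]
        elif op == 'mul':   r[args[0]] = r[args[1]] * r[args[2]]
        elif op == 'add':   r[args[0]] = r[args[1]] + r[args[2]]
        elif op == 'store': vars[args[0]] = r[args[1]]
    return vars

stack_code = [('push', '$a'), ('push', '$a'), ('push', '$b'),
              ('mul',), ('add',), ('store', '$a')]
reg_code   = [('load', 0, '$a'), ('load', 1, '$b'),
              ('mul', 2, 0, 1),           # r2 = $a * $b  (the temp)
              ('add', 0, 0, 2),           # r0 = $a + r2
              ('store', '$a', 0)]

print(run_stack(stack_code, {'$a': 2, '$b': 3}))   # {'$a': 8, '$b': 3}
print(run_regs(reg_code,   {'$a': 2, '$b': 3}))    # {'$a': 8, '$b': 3}
```

Note where the temporary lives in each: an anonymous stack cell versus an explicitly named register (r2), which is exactly the pad-tmp versus register question being discussed.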
Dan Sugalski [EMAIL PROTECTED] wrote:
What we're going to do is have a get_temp opcode to fetch temporary PMCs.
Where do they come from? Leave a plate of milk and cookies on your back
porch and the Temp PMC Gnomes will bring them. :)
Ah, things are starting to make sense!
new P0,
On 09/05/01 Dan Sugalski wrote:
It's easier to generate code for a stack machine
So? Take a look at all the stack-based interpreters. I can name a bunch,
including perl. They're all slow. Some slower than others, and perl tends
to be the fastest of the bunch, but they're all slow.
Have a
At 05:00 PM 9/6/2001 +0100, Dave Mitchell wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:
What we're going to do is have a get_temp opcode to fetch temporary PMCs.
Where do they come from? Leave a plate of milk and cookies on your back
porch and the Temp PMC Gnomes will bring them. :)
Ah,
Paolo Molaro wrote:
If anyone has any
evidence that coding a stack-based virtual machine or a register one
provides for better instructions scheduling in the dispatch code,
please step forward.
I think we're going to have some evidence in a few weeks. I'm not
sure which side the evidence is
I'm trying to get my head round the relationship between pad lexicals,
pad tmps, and registers (if any).
The PMC registers are just a way of allowing the the address of a PMC to
be passed to an op, and possibly remembered for soonish reuse, right?
So presumably we still have the equivalent of a
On Thu, Sep 06, 2001 at 12:13:11PM -0400, Dan Sugalski wrote:
Hmmm. Yes, in fact it should. That code will end up with a list of 65
identical scalars in it. Bad Dan! No cookie for me.
Damn. I guess that means we have to write a compiler after all. I was
looking forward to having Dan assemble
On 09/05/01 Hong Zhang wrote:
I think we need to get some initial performance characteristics of register
machine vs stack machine before we go too far. There are not many points left
to debate on the mailing list.
Unfortunately, getting meaningful figures is quite hard; there are
so many things to
At 06:12 PM 9/6/2001 +0200, Paolo Molaro wrote:
On 09/05/01 Dan Sugalski wrote:
It's easier to generate code for a stack machine
So? Take a look at all the stack-based interpreters. I can name a bunch,
including perl. They're all slow. Some slower than others, and perl tends
to be the
At 10:45 AM 09-06-2001 -0400, Ken Fox wrote:
Dave Mitchell wrote:
So how does that all work then? What does the parrot assembler for
foo($x+1, $x+2, ..., $x+65)
The arg list will be on the stack. Parrot just allocates new PMCs and
pushes the PMC on the stack.
I assume it will look
(Firstly, I'd say trust Nick's expertise--he has spent a good-sized chunk
of his career doing software simulations of CPUs, and knows whereof he
speaks, both in terms of software running on hardware and software running
on software)
At 05:33 PM 9/6/2001 +0200, Paolo Molaro wrote:
I believe
Dan Sugalski wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:
Where do they come from? Leave a plate of milk and cookies on your back
porch and the Temp PMC Gnomes will bring them. :)
Bad Dan! No cookie for me.
You aren't fooling anybody anymore... You might just as well stop the
charade
At 01:21 PM 9/6/2001 -0400, Ken Fox wrote:
Dan Sugalski wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:
Where do they come from? Leave a plate of milk and cookies on your back
porch and the Temp PMC Gnomes will bring them. :)
Bad Dan! No cookie for me.
You aren't fooling anybody
At 06:12 PM 9/6/2001 +0200, Paolo Molaro wrote:
As I said in another mail, I think the stack-based approach will not
be necessarily faster, but it will allow more optimizations down the path.
It may well be 20 % slower in some cases when interpreted, but if it allows
me to easily JIT it and get
Dave Mitchell:
# Simon Cozens [EMAIL PROTECTED] wrote:
# On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
# So I guess I'm asking whether we're abandoning the Perl 5 concept
# of a pad full of tmp targets, each hardcoded as the
# target for individual
# ops to store their tmp
On 09/06/01 Dan Sugalski wrote:
Okay, I just did a test run, converting my sample program from interpreted
to compiled. (Hand-conversion, unfortunately, to C that went through GCC)
Went from 2.72M ops/sec to the equivalent of 22.5M ops/sec. And with -O3 on
it went to 120M ops/sec. The
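The shape of that experiment can be sketched (in Python rather than C): the same loop run through an opcode-dispatch table and as straight-line code. The toy program and names below are illustrative only, not Dan's benchmark:

```python
# Toy version of the interpreted-vs-compiled comparison: identical
# arithmetic, but the interpreter pays a table lookup plus indirect
# call per op, which is where the order-of-magnitude gap comes from.

def interp(n):
    # each iteration dispatches two "ops" through a table
    ops = {'add': lambda s: s.__setitem__('acc', s['acc'] + s['i']),
           'dec': lambda s: s.__setitem__('i', s['i'] - 1)}
    state = {'acc': 0, 'i': n}
    while state['i'] > 0:
        for opcode in ('add', 'dec'):
            ops[opcode](state)          # dispatch: lookup + call
    return state['acc']

def compiled(n):
    # the same loop with the dispatch stripped out
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(interp(100), compiled(100))   # 5050 5050
```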
On Thu, Sep 06, 2001 at 11:05:37AM +0100, Dave Mitchell wrote:
I'm trying to get my head round the relationship between pad lexicals,
pad tmps, and registers (if any).
It's exactly the same as the relationship between auto variables, C
temporaries and machine registers.
Simon
Dan Sugalski:
...
#       new      P0, list     # New list in P0
#       get_lex  P1, $x       # Find $x
#       get_type I0, P1       # Get $x's type
#       set_i    I1, 1        # Set our loop var
# $10:  new      P2, I0       # Get a temp of the same type as $x
#
At 09:11 PM 9/6/2001 +0200, Paolo Molaro wrote:
On 09/06/01 Dan Sugalski wrote:
The original mono interpreter (that didn't implement all the semantics
required by IL code that slow down interpretation) ran about 4 times
faster than perl/python on benchmarks dominated by branches, function
At 09:22 PM 9/6/2001 +0200, Paolo Molaro wrote:
A 10x slowdown on that kind of code is normal for an interpreter
(where 10x can range from 5x to 20x, depending on the semantics).
If we're in the normal range, then, I'm happy.
Well, until we get equivalent benchmarks for Mono, in which case I
At 12:04 PM 9/6/2001 -0700, Brent Dax wrote:
If foo is an unprototyped function (and thus takes a list in P0) we can
immediately push the values of those calculations on to the list,
something like (in a lame pseudo-assembler that doesn't use the right
names for instructions):
FWIW, it's:
Dan Sugalski wrote:
At 02:05 PM 9/6/2001 -0400, Ken Fox wrote:
You wrote on perl6-internals:
get_lex P1, $x # Find $x
get_type I0, P1 # Get $x's type
[ loop using P1 and I0 ]
That code isn't safe! If %MY is changed at run-time, the
type and location of $x
At 02:44 PM 9/6/2001 -0400, Ken Fox wrote:
Could you compile the following for us with the assumption that
g() does not change its caller?
Maybe later. Pressed for time at the moment, sorry.
What if g() *appears* to be safe when perl compiles the loop, but
later on somebody replaces its
From: Ken Fox [mailto:[EMAIL PROTECTED]]
I think we have a language question... What should the following
print?
my $x = 1;
my $y = \$x;
my $z = 2;
%MY::{'$x'} = \$z;
$z = 3;
print "$x, $$y, $z\n";
a. 2, 1, 3
b. 2, 2, 3
c. 3, 1, 3
d. 3, 3, 3
e. exception: not enough
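One way to see why the listed answers differ is to model names and containers separately. Here is a Python sketch (all names hypothetical) in which `%MY::{'$x'} = \$z` is read as rebinding the *name* `$x` to `$z`'s container, while `$y` keeps its reference to the original container:

```python
# Model: a pad maps names to containers; a reference captures the
# container itself, not the name.

class Container:
    def __init__(self, value):
        self.value = value

pad = {}
pad['$x'] = Container(1)        # my $x = 1;
y_ref = pad['$x']               # my $y = \$x;  ($y captures the container)
pad['$z'] = Container(2)        # my $z = 2;

pad['$x'] = pad['$z']           # %MY::{'$x'} = \$z;  (rebind the name)
pad['$z'].value = 3             # $z = 3;

print(pad['$x'].value, y_ref.value, pad['$z'].value)   # 3 1 3  -> answer (c)
```

Under the other plausible reading, where the assignment copies $z's value into $x's existing container, $x and $$y would share one container holding 2, giving answer (b). Which reading Perl 6 picks is exactly the open question.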
Here's a list of what any Perl 6 implementation of lexicals must be able to
cope with (barring additions from future apocalypses). Can anyone think of
anything else?
From Perl 5:
* multiple instances of the same variable name within different scopes
of the same sub
* The notion of
On Thursday 06 September 2001 08:53 am, Dave Mitchell wrote:
But surely %MY:: allows you to access/manipulate variables that are in
scope, not just variables that are defined in the current scope, ie
my $x = 100;
{
print $MY::{'$x'};
}
I would expect that to print 100, not 'undef'. Are
On Thursday 06 September 2001 06:01 pm, Garrett Goebel wrote:
From: Ken Fox [mailto:[EMAIL PROTECTED]]
I think we have a language question... What should the following
print?
my $x = 1;
my $y = \$x;
my $z = 2;
%MY::{'$x'} = \$z;
$z = 3;
print "$x, $$y, $z\n";
a.
Bryan thought:
my $x = 1;
my $y = \$x;
my $z = 2;
%MY::{'$x'} = \$z;
$z = 3;
print "$x, $$y, $z\n";
My $x container contains 1. ($x = 1)
My $y container contains a ref to the $x container. ($x = 1, $y = \$x)
My $z container contains 2.
Dave Mitchell wrote:
Here's a list of what any Perl 6 implementation of lexicals must be able to
cope with (barring additions from future apocalypses). Can anyone think of
anything else?
I would like
perl -le 'my $Q = 3; {local $Q = 4; print $Q}'
to print 4 instead of crashing in
Dan Sugalski wrote:
I think you're also overestimating the freakout factor.
Probably. I'm not really worried about surprising programmers
when they debug their code. Most of the time they've requested
the surprise and will at least have a tiny clue about what
happened.
I'm worried a little
Dave Mitchell wrote:
Can anyone think of anything else?
You omitted the most important property of lexical variables:
[From perlsub.pod]
Unlike dynamic variables created by the C<local> operator, lexical
variables declared with C<my> are totally hidden from the outside
world, including
At 02:05 PM 9/6/2001 -0400, Ken Fox wrote:
Dan Sugalski wrote:
[stuff I snipped]
I'm worried a little about building features with global effects.
Part of Perl 6 is elimination of action-at-a-distance, but now
we're building the swiss-army-knife-of-action-at-a-distance.
I don't know how much of
From: Ken Fox [mailto:[EMAIL PROTECTED]]
Dan Sugalski wrote:
I think you're also overestimating the freakout factor.
Probably. I'm not really worried about surprising programmers
when they debug their code. Most of the time they've requested
the surprise and will at least have a tiny
Dan Sugalski:
# At 12:04 PM 9/6/2001 -0700, Brent Dax wrote:
# If foo is an unprototyped function (and thus takes a list in
# P0) we can
# immediately push the values of those calculations on to the list,
# something like (in a lame pseudo-assembler that doesn't use the right
# names for
On 09/06/01 Dan Sugalski wrote:
Then I'm impressed. I expect you've done some things that I haven't yet.
The only optimizations that interpreter had were computed goto and
allocating the eval stack with alloca() instead of malloc().
Of course, now it's slower, because I implemented the full