Re: IMC returning ints

2004-01-22 Thread Leopold Toetsch
Steve Fink [EMAIL PROTECTED] wrote:

 I did a cvs update, and it looks like imcc doesn't properly return
 integers anymore from nonprototyped routines.

I don't even know if this is allowed. But anyway, if the call is non
prototyped, native types should go into P3. So you have the overhead of
PMC creation anyway plus the overhead of the array access.

And mainly, the return conventions are still broken.

leo


Re: how to subclass dynamic PMCs?

2004-01-22 Thread Leopold Toetsch
Michal Wallace [EMAIL PROTECTED] wrote:

 Hi all,

 I'm hoping this is just a simple linker option, but
 I've been reading ld manuals for the past few
 hours and I just don't get it. :)

 I'm trying to make a dynamically loaded PMC that
 subclasses another dynamically loaded PMC.

It's a linker problem, but not too simple. Your analysis is correct:
pistring needs the symbol Parrot_PiSequence_get_integer, so you have to
provide it:

I did something like this:

$ make -C dynclasses
$ cp dynclasses/pisequence.so blib/lib/libpisequence.so
$ cd dynclasses; cc -shared -L/usr/local/lib   -o pistring.so  \
  -I../include -I../classes  -L../blib/lib  -lparrot pistring.c \
  -lpisequence   ;  cd ..
$ cp dynclasses/pistring.so runtime/parrot/dynext/

$ parrot wl.imc   # run your test program
51
52

I also had -Wl,-E in the Makefile when compiling pisequence. I don't
know if it's necessary.

Another (untested) possibility: you could try appending the two pi*.c
files into a single file and then compiling the combined source to a shared
lib.

 Michal J Wallace

leo


Another GC bug

2004-01-22 Thread Leopold Toetsch
Running parrot built with --gc=libc works fine; all tests pass *except*
t/pmc/pmc_62 when --gc-debug is set.

I could track this down until here:

classes/default.c:

static void
check_set_std_props(Parrot_Interp interp, PMC *pmc, STRING *key, PMC *value)
{
if (!string_compare(interp, key, string_from_cstring(interp, "_ro",
0)))

During the string_compare a DOD run is triggered (by the --gc-debug flag)
which frees the newly allocated string_from_cstring s2. I don't know
yet why this string is freed. It should get marked during
trace_system_areas. Something is fishy here.

I have converted that one now into a static constant string, but this 
isn't a solution.

We have already discussed a few times what to do with such constant
strings, e.g. declare them static with some macro tricks or use a string
hash. We need a solution for this.
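
As a rough illustration of the "static with macro tricks" idea, here is a
generic C sketch (the names my_string_t, make_string and key_is_ro are
stand-ins, not Parrot's STRING API): the constant string is built once at
its use site and then reused, instead of being allocated on every call.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct { char *data; size_t len; } my_string_t;  /* stand-in for STRING */

    /* stand-in for string_from_cstring(): allocates a fresh string */
    static my_string_t *make_string(const char *cstr) {
        my_string_t *s = malloc(sizeof *s);
        s->data = strdup(cstr);
        s->len  = strlen(cstr);
        return s;
    }

    /* the "declare them static" idea: build the constant string once and
     * keep it alive for the lifetime of the program */
    static int key_is_ro(const char *key) {
        static my_string_t *ro;
        if (!ro)
            ro = make_string("_ro");
        return strcmp(key, ro->data) == 0;
    }

    int main(void) {
        printf("%d %d\n", key_is_ro("_ro"), key_is_ro("flags"));  /* prints: 1 0 */
        return 0;
    }

A real fix would also have to anchor such strings so a DOD run can't
collect them, which is exactly the macro-vs-string-hash question above.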

The second question is, of course, why that string isn't marked during DOD.

Any hints and improvements welcome,
leo


Re: how to subclass dynamic PMCs?

2004-01-22 Thread Michal Wallace
On Thu, 22 Jan 2004, Leopold Toetsch wrote:

 Michal Wallace [EMAIL PROTECTED] wrote:

  I'm trying to make a dynamically loaded PMC that
  subclasses another dynamically loaded PMC.

 It's a linker problem, but not too simple. Your analysis is correct:
 pistring needs the symbol Parrot_PiSequence_get_integer, so you have to
 provide it:

Thanks Leo! You rock!!!


 I did something like this:

 $ make -C dynclasses
 $ cp dynclasses/pisequence.so blib/lib/libpisequence.so

Aha! I was trying to figure out how to do -lpisequence.
It didn't occur to me to just RENAME it. :)


 $ cd dynclasses; cc -shared -L/usr/local/lib   -o pistring.so  \
   -I../include -I../classes  -L../blib/lib  -lparrot pistring.c \
   -lpisequence   ;  cd ..
 $ cp dynclasses/pistring.so runtime/parrot/dynext/

 $ parrot wl.imc # run your test program
 51
 52

Awesome! Thanks again! :)


 I also had -Wl,-E in the Makefile when compiling pisequence. I don't
 know if it's necessary.

 Another (untested) possibility: you could try appending the two pi*.c
 files into a single file and then compiling the combined source to a shared
 lib.

That makes sense. I'll try it. If it works, I'll patch
pmc2c2.pl to do this automatically. I'm sure this won't
be the last time someone wants to provide a whole set of
classes at once.

 leo

Sincerely,

Michal J Wallace
Sabren Enterprises, Inc.
-
contact: [EMAIL PROTECTED]
hosting: http://www.cornerhost.com/
my site: http://www.withoutane.com/
--


Re: [perl #25129] IO Buffer Test

2004-01-22 Thread Leopold Toetsch
Arvindh Rajesh Tamilmani [EMAIL PROTECTED] wrote:
   name=biotests.patch

Something is wrong with these tests: the patch adds PASM code to t/src/io.t.
Even if that works, PASM tests shouldn't be in t/src but in t/pmc/io.t.

leo


Re: [RESEND] Q: Array vs SArray

2004-01-22 Thread Leopold Toetsch
Dan Sugalski [EMAIL PROTECTED] wrote:

 *) Array - fixed-size, mixed-type array

 Personally I'd leave Array as it is, since it does one of the things
 that we need it to do.

Array isn't really mixed-type. It has methods to store or retrieve
non-PMC types, but they are converted internally to PMCs. A true
mixed-type Array could use a typed union, i.e. the HashEntry type. This
needs more space, but is type-safe and avoids PMC creation overhead.
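
For illustration, a minimal C sketch of the typed-union entry Leo describes
(the tag names and the set of cases here are made up, not Parrot's actual
HashEntry layout): native values are stored directly in the slot, so no PMC
has to be created for them.

    #include <stdio.h>

    /* tag saying which union member is live for this slot */
    typedef enum { entry_INT, entry_NUM, entry_STRING, entry_PMC } entry_type;

    typedef struct {
        entry_type type;
        union {
            long        int_val;
            double      num_val;
            const char *str_val;   /* would be STRING* in Parrot */
            void       *pmc_val;   /* would be PMC*    in Parrot */
        } val;
    } array_entry;

    int main(void) {
        array_entry a[3];
        int i;

        a[0].type = entry_INT;    a[0].val.int_val = 42;     /* no PMC created */
        a[1].type = entry_NUM;    a[1].val.num_val = 3.14;
        a[2].type = entry_STRING; a[2].val.str_val = "hello";

        for (i = 0; i < 3; i++) {
            switch (a[i].type) {
            case entry_INT:    printf("int %ld\n", a[i].val.int_val); break;
            case entry_NUM:    printf("num %g\n",  a[i].val.num_val); break;
            case entry_STRING: printf("str %s\n",  a[i].val.str_val); break;
            default:           printf("pmc\n");                       break;
            }
        }
        return 0;
    }

The extra space per slot is the union plus the tag, which is the trade-off
mentioned above.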

What do we need?

leo


Re: how to subclass dynamic PMCs?

2004-01-22 Thread Tim Bunce
On Thu, Jan 22, 2004 at 07:56:51AM -0500, Michal Wallace wrote:
 
  I did something like this:
 
  $ make -C dynclasses
  $ cp dynclasses/pisequence.so blib/lib/libpisequence.so
 
 Aha! I was trying to figure out how to do -lpisequence.
 It didn't occur to me to just RENAME it. :)

Perhaps all such .so's should be generated with lib at the start of the name.

Tim.


[perl #25232] [PATCH] PIO_read bugs on reading larger chunks

2004-01-22 Thread Tamilmani, Arvindh Rajesh (Cognizant)
# New Ticket Created by  Tamilmani, Arvindh Rajesh (Cognizant) 
# Please include the string:  [perl #25232]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=25232 


The attached patch
a) fixes bugs in reading a larger chunk when the read buffer is not empty
(io/io_buf.c),
b) adds corresponding test cases (t/src/io.t), and
c) makes a minor comment fix (io/io.c).

arvindh


bufread.patch
Description: Binary data


[perl #25233] Memory pool corruption

2004-01-22 Thread via RT
# New Ticket Created by  Dan Sugalski 
# Please include the string:  [perl #25233]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=25233 


I'm finding parrot's killing its memory pools somewhere and dying 
when it goes to compact memory during a GC sweep. The corruption's 
relatively recent, though I'm not yet sure where. (It wasn't there 
around January 14th, give or take a day)

I'll try and get a small test case to show it, though that may be 
tricky. It doesn't show with the current test suite, though it does 
show with my demo/work code.
-- 
 Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
   teddy bears get drunk


RE: [perl #25129] IO Buffer Test

2004-01-22 Thread Tamilmani, Arvindh Rajesh (Cognizant)
Even if that works, PASM tests shouldn't be in t/src but in t/pmc/io.t.

Sorry, I didn't know that.
The attached patch is for t/pmc/io.t

leo

Thanks,
Arvindh


biotests2.patch
Description: biotests2.patch


Re: Semantics of vector operations

2004-01-22 Thread Larry Wall
On Thu, Jan 22, 2004 at 03:53:04AM +0100, A. Pagaltzis wrote:
: Good point; however, this means different way to think of the
: vector ops than we had so far. Basically, we're moving from the
: realm of vector ops to that of vectorized operands.
: 
: In light of this, I think Austin's proposal of marking the
: operands as vectorized makes a lot of sense. It was an unexpected
: that had me taken aback for a moment, but I like it more the more
: I think about it. It *feels* right to emphasize vectorization as
: something that happens to an operand, rather than something
: that's a property of the operation.

I think some people will want to think of it one way, while others
will want to think of it the other way.  If that's the case, the
proper place to put the marker is between the operand and the operator.

You might argue that we should force people to think of it one way or
the other.  But there's a reason that some people will think of it
one way while others will think of it the other way--I'd argue that
vectorization is not something that happens to *either* the operand
or the operator.  Vectorization is a different *relationship* between
the operator and the operand.  As such, I still think it belongs
between.

Plus, in the symmetrical case, it *looks* symmetrical.  Marking the
args in front makes everything look asymmetrical whether it is or not.

Larry


Re: Semantics of vector operations

2004-01-22 Thread Larry Wall
On Thu, Jan 22, 2004 at 03:57:26AM +0100, A. Pagaltzis wrote:
: * Piers Cawley [EMAIL PROTECTED] [2004-01-21 23:33]:
:  And once you go to an image based IDE and have access to the
:  bytecode of the code you're writing there's all *sorts* of
:  interesting things you can do. And that's before one starts to
:  imagine attaching the IDE/debugger to a running process...
: 
: $smalltalkometer++ ? :)

%languageometer.values »+=« rand;
 
Not to be confused with

%languageometer.values »+= rand;

which would presumably add the *same* number to all languageometers.

Larry


RE: Semantics of vector operations (Damian)

2004-01-22 Thread Austin Hastings


 -Original Message-
 From: Larry Wall [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 22, 2004 12:50 PM
 To: Language List
 Subject: Re: Semantics of vector operations


 On Thu, Jan 22, 2004 at 03:57:26AM +0100, A. Pagaltzis wrote:
 : * Piers Cawley [EMAIL PROTECTED] [2004-01-21 23:33]:
 :  And once you go to an image based IDE and have access to the
 :  bytecode of the code you're writing there's all *sorts* of
 :  interesting things you can do. And that's before one starts to
 :  imagine attaching the IDE/debugger to a running process...
 :
 : $smalltalkometer++ ? :)

 %languageometer.values »+=« rand;

 Not to be confused with

 %languageometer.values »+= rand;

 which would presumably add the *same* number to all languageometers.


In reverse order:

 %languageometer.values »+= rand;

This is the same as

 all( %languageometer.values ) += rand;

right?

And is this

 %languageometer.values »+=« rand;

the same as

all( %languageometer.values ) += one( rand );

?

=Austin



RE: Semantics of vector operations

2004-01-22 Thread Austin Hastings


 -Original Message-
 From: Larry Wall [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 22, 2004 12:39 PM
 To: Language List
 Subject: Re: Semantics of vector operations


 On Thu, Jan 22, 2004 at 03:53:04AM +0100, A. Pagaltzis wrote:
 : Good point; however, this means different way to think of the
 : vector ops than we had so far. Basically, we're moving from the
 : realm of vector ops to that of vectorized operands.
 :
 : In light of this, I think Austin's proposal of marking the
 : operands as vectorized makes a lot of sense. It was an unexpected
 : that had me taken aback for a moment, but I like it more the more
 : I think about it. It *feels* right to emphasize vectorization as
 : something that happens to an operand, rather than something
 : that's a property of the operation.

 I think some people will want to think of it one way, while others
 will want to think of it the other way.  If that's the case, the
 proper place to put the marker is between the operand and the operator.


How do you handle operator precedence/associativity?

That is,

   $a + $b + $c

If you're going to vectorize, and combine, then you'll want to group. I
think making the vectorizer a grouper as well kills two birds with one
stone.

  $a + $b + $c

vs.

  $a + ($b + $c)


 You might argue that we should force people to think of it one way or
 the other.  But there's a reason that some people will think of it
 one way while others will think of it the other way--I'd argue that
 vectorization is not something that happens to *either* the operand
 or the operator.  Vectorization is a different *relationship* between
 the operator and the operand.  As such, I still think it belongs
 between.

 Plus, in the symmetrical case, it *looks* symmetrical.  Marking the
 args in front makes everything look asymmetrical whether it is or not.

Just a refresher, what *exactly* does vectorization do, again?  I think of
it as smart list-plus-times behavior, but when we go into matrix arithmetic,
that doesn't hold up. Luke?

=Austin





Re: Comma Operator

2004-01-22 Thread Larry Wall
On Wed, Jan 21, 2004 at 08:51:33PM -0500, Joe Gottman wrote:
: Great, so
: $x = foo(), bar();
: means the same thing as
: $x = ( foo(), bar() );

No, we haven't changed the relative precedence of assignment and comma.
I've been tempted to, but I always come back to liking the parens
for visual psychological purposes.  Plus changing the precedence
would break

loop $i = 0, $j = 0; $x[$i,$j]; $i++, $j++ { ... }

:  Is the optimizer going to be smart enough so that given the expression
: $x = (foo(), bar(), glarch())[-1];
: 
: Perl6 won't have to construct a three-element array just to return the last
: element?

It's interesting to me that you assume there would be a three element
array there.  If the ()[] works like it does in Perl 5, the stuff inside
the parens is always evaluated in list context.  Which means that foo(),
bar(), and glarch() can any of them return 0 or more elements.  There's
no guarantee that the final value doesn't come from foo() or bar().

Now, in Perl 6, slices are distinguished from single subscripts only by
the form of the subscript, not by any sigil.  So we can know for a fact
that [-1] is not a slice.  But that doesn't mean we can necessarily
assume that the list in parens wants to evaluate its args in scalar
context.  Maybe it does, and maybe it doesn't.

That doesn't mean that we have to put the C comma operator back
in though.  It might just mean that the default is wrong on ()[].
Suppose we say that

(foo(), bar(), glarch())[-1]

by default evaluates its arguments in scalar context.  To get the Perl 5
behavior, you'd merely have to use a splat to put the list into list
context:

(* foo(), bar(), glarch())[-1]

Then (...)[] would become the standard way of producing a list of
things evaluated in scalar context.  Alternately, if you don't like
the splat list, you can always say

[foo(), bar(), glarch()][-1]

which does the same thing.

Given all that, I think we can say that, yes, the compiler can
optimize foo() and bar() to know they're running in void context.
I'd generalize that to say that any unselected element of a scalar
list can be evaluated in a void context, whether the subscript is -1
or 0 or any constant slice.

Larry


Re: Semantics of vector operations (Damian)

2004-01-22 Thread Jonathan Scott Duff
On Thu, Jan 22, 2004 at 01:10:23PM -0500, Austin Hastings wrote:
 In reverse order:
 
  %languageometer.values »+= rand;
 
 This is the same as
 
  all( %languageometer.values ) += rand;
 
 right?

It's the same as 

$r = rand;
$_ += $r for %languageometer.values

Your junction looks like it should work, but I think you're really
adding the random number to the junction, not to the elements that compose
the junction, thus none of %languageometer.values are modified.

 And is this
 
  %languageometer.values »+=« rand;
 
 the same as
 
 all( %languageometer.values ) += one( rand );

I don't think so.  It's like:

$_ += rand for %languageometer.values

perhaps if you had:

$j |= rand for (0..%languageometer.values)
any(%languageometer.values) += $j;

Though I'm not sure what that would mean.  

I don't think junctions apply at all in vectorization.   They seem to
be completely orthogonal.

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Re: Semantics of vector operations

2004-01-22 Thread Larry Wall
On Thu, Jan 22, 2004 at 01:10:25PM -0500, Austin Hastings wrote:
:  -Original Message-
:  From: Larry Wall [mailto:[EMAIL PROTECTED]
:  I think some people will want to think of it one way, while others
:  will want to think of it the other way.  If that's the case, the
:  proper place to put the marker is between the operand and the operator.
: 
: How do you handle operator precedence/associativity?

Me?  I handle it by making it an adverb on the base operator.  :-)

: That is,
: 
:$a + $b + $c
: 
: If you're going to vectorize, and combine, then you'll want to group. I
: think making the vectorizer a grouper as well kills two birds with one
: stone.
: 
:   $a + $b + $c

Er, pardon me, but bletch.

: vs.
: 
:   $a + ($b + $c)

That's much clearer to me.  (Ignoring the fact that you can't add
references. :-)

:  You might argue that we should force people to think of it one way or
:  the other.  But there's a reason that some people will think of it
:  one way while others will think of it the other way--I'd argue that
:  vectorization is not something that happens to *either* the operand
:  or the operator.  Vectorization is a different *relationship* between
:  the operator and the operand.  As such, I still think it belongs
:  between.
: 
:  Plus, in the symmetrical case, it *looks* symmetrical.  Marking the
:  args in front makes everything look asymmetrical whether it is or not.
: 
: Just a refresher, what *exactly* does vectorization do, again?  I think of
: it as smart list-plus-times behavior, but when we go into matrix arithmetic,
: that doesn't hold up. Luke?

It really means please dereference this scalar in an intelligent
fashion for this particular operator.  The exact behavior is
allowed to depend on the operator and the types of both arguments
(if both sides are vectorized).  That is, it's probably a multimethod
in disguise.

Actually, it's a bit misleading to call it a vectorizing operator.
It's really a dwim operator that will commonly be used for vectorizing
in the case of one-dimensional lists.  For higher dimensional beasties,
it means make these conform and then apply the operator in some kind
of distributed fashion, with a bias toward leaf operations.

I don't think that »X« means Do whatever the mathematicians want X
to do.  Unicode operators have to be good for something, after all.

So perhaps it's best to call » and « distribution modifiers or
some such.

Larry


Re: Semantics of vector operations (Damian)

2004-01-22 Thread Luke Palmer
Jonathan Scott Duff writes:
 On Thu, Jan 22, 2004 at 01:10:23PM -0500, Austin Hastings wrote:
  In reverse order:
  
   %languageometer.values »+= rand;
  
  This is the same as
  
   all( %languageometer.values ) += rand;
  
  right?

Well, yes.  It's also the same as each of:

any(  %languageometer.values ) += rand;
none( %languageometer.values ) += rand;
one(  %languageometer.values ) += rand;

Since the junctions aren't yet being evaluated in boolean context, the
type of the junction doesn't matter.  Which is why making junctions
applicable lvalues might be a bad idea.  I'm not sure, but this looks
like a potential confuser.

 It's the same as 
 
   $r = rand;
   $_ += $r for %languageometer.values
 
 Your junction looks like it should work but I think you're really
 adding the random number to the junction, not the elements that compose
 the junction thus none of %languageometer.values are modified.

Hmm... that depends on whether junctions hold references, or what they
do in lvalue context, and a whole bunch of other, undecided things.

  And is this
  
   %languageometer.values »+=« rand;
  
  the same as
  
  all( %languageometer.values ) += one( rand );

No, what you just wrote is the same as:

all( %languageometer.values ) += rand;

 I don't think so.  It's like:
 
   $_ += rand for %languageometer.values

Sort of.  I think Larry was implying that rand returned an infinite list
of random numbers in list context.  If not, then what he said was wrong,
because it would be sick to say that:

(1,2,3,4,5) + foo()

Calls foo() 5 times.

 perhaps if you had:
 
   $j |= rand for (0..%languageometer.values)
   any(%languageometer.values) += $j;
 
 Though I'm not sure what that would mean.  

Nonononono! Don't do that!  That adds *each* of the random numbers to
*each* of the values.  That is, each of the values would increase by
approximately %languageometer/2.

 I don't think junctions apply at all in vectorization.   They seem to
 be completely orthogonal.

I'd be happy if that were the case.

Luke



Re: Semantics of vector operations

2004-01-22 Thread A. Pagaltzis
* Larry Wall [EMAIL PROTECTED] [2004-01-22 18:40]:
 You might argue that we should force people to think of it one
 way or the other.

I wouldn't, because if I did I should've been talking to Guido
rather than you in the first place. :-)

And because I'm talking to you, I'll wonder whether maybe we
ought to have both options.

 I'd argue that vectorization is not something that happens to
 *either* the operand or the operator.  Vectorization is a
 different *relationship* between the operator and the operand.
 As such, I still think it belongs between.

That makes a lot of sense; consider me convinced.

Even if I agree after all though, that doesn't make me like the
way »+ and particularly +« look any more than I liked them
before. I usually scoff at line noise remarks, but in this case
I'd feel forced to mutter it myself -- it just continues to feel
like too big a change in behaviour dictated by a single magic
character.

While »+« is a little ugly as well, it does stand out boldly,
something that could not IMHO be said about the one-sided
variants. I'd argue that we really should use something more
visually prominent for the one-sided case.

Maybe »»+ and +«« or something? But the non-Unicode variant would
be, uh, less than pretty.

 Plus, in the symmetrical case, it *looks* symmetrical.  Marking
 the args in front makes everything look asymmetrical whether it
 is or not.

I was actually thinking something like

»$a« + »$b«

in which case asymmetry would not be an issue.

-- 
Regards,
Aristotle
 
If you can't laugh at yourself, you don't take life seriously enough.


Re: Semantics of vector operations

2004-01-22 Thread Luke Palmer
Austin Hastings writes:
 How do you handle operator precedence/associativity?
 
 That is,
 
$a + $b + $c
 
 If you're going to vectorize, and combine, then you'll want to group. I
 think making the vectorizer a grouper as well kills two birds with one
 stone.
 
   $a + $b + $c
 
 vs.
 
   $a + ($b + $c)

I have to agree with Larry here, the latter is much cleaner.

I'm actually starting to like this proposal.  I used to shiver at the
implementation of the old way, where people used the operator to group
arbitrary parts of the expression.  I wouldn't even know how to parse
that, much less interpret it when it's parsed.

Now, we have a clear way to call a method on a list of values:

@list ».method

And a clear way to call a list of methods on a value:

$value.« @methods

It's turning out pretty nice.

  You might argue that we should force people to think of it one way or
  the other.  But there's a reason that some people will think of it
  one way while others will think of it the other way--I'd argue that
  vectorization is not something that happens to *either* the operand
  or the operator.  Vectorization is a different *relationship* between
  the operator and the operand.  As such, I still think it belongs
  between.
 
  Plus, in the symmetrical case, it *looks* symmetrical.  Marking the
  args in front makes everything look asymmetrical whether it is or not.
 
 Just a refresher, what *exactly* does vectorization do, again?  I think of
 it as smart list-plus-times behavior, but when we go into matrix arithmetic,
 that doesn't hold up. Luke?

Well, for being called vector operators, they're ending up pretty
useless as far as working with mathematical vectors.  As a
mathematician, I'd want:

@vec1 »*« @vec2

To do an inner or an outer product (probably outer, as it has no
implicit connotations of the + operator).  That is, it would come out
either a matrix or a scalar.

But there are other times when I'd want that to call the operator
respectively.  Then you get this for the inner product:

sum(@vec1 »*« @vec2)

Which isn't so bad, after all.

Hmm, but if it does give an outer product, then a generic tensor product
is as easy as:

reduce { $^a + $^b } @A »*« @B

And nobody can fathom how happy that would make me.

Also, you'd get the nice consistency that:

@A »+« @B

Is the same as both:

map { @A »+ $^b } @B
map { $^a +« @B } @A

Which is undoubtedly what the mathematician would expect  (It's very
reminiscent of how junctions currently work).

But then there's the problem of how you express the oft-wanted:

map -> $i { @A[$i] + @B[$i] } 0..^min([EMAIL PROTECTED], [EMAIL PROTECTED])

(Hopefully when @A and @B are of equal length).

Maybe there's a way to do it so that we can both be happy: one syntax
that does one thing, another that does the other.  Like:

@A »+« @B   # One-at-a-time
@A «+» @B   # Outer product

Or something.  Hmm, then both:

@A »+ $b
@A «+ $b

Would mean the same thing. 

Luke



Re: Semantics of vector operations (Damian)

2004-01-22 Thread Luke Palmer
Austin Hastings writes:
  Sortof.  I think Larry was implying that rand returned an infinite list
  of random numbers in list context.  If not, then what he said was wrong,
  because it would be sick to say that:
  
  (1,2,3,4,5) + foo()
  
  Calls foo() 5 times.
 
 Why would it be sick, and in what context? 
 
 With Larry's new vectorized sides suggestion, putting a guillemet on the right
 side of the operator vectorizes the right side operand, which *should* call foo() 
 five times.
 
  (1,2,3,4,5) +  foo()   # do { my $x=foo(); (1+$x, 2+$x, 3+$x, 4+$x, 5+$x); }
  (1,2,3,4,5) + foo()   # (1+foo(), 2+foo(), 3+foo(), 4+foo(), 5+foo())

I think that one is:

do { my @x=foo(); ([EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL 
PROTECTED], [EMAIL PROTECTED]) }

We've forgotten that foo() could return a list in list context. :-)

  (1,2,3,4,5)  + foo()   # Maybe the same as above? What does 
 infix:+(@list,$scalar) do?

Well, what does a list return in scalar context?  In the presence of the
C comma, it returns 5 for the last thing evaluated.  In its absence, it
returns 5 for the length.

  (1,2,3,4,5)  +  foo()   # foo() in list context? What does infix:+(@list, 
 @list2) do?

Same deal, 5 + $(foo())

Luke



Re: Semantics of vector operations (Damian)

2004-01-22 Thread Luke Palmer
Luke Palmer writes:
   (1,2,3,4,5)  + foo()   # Maybe the same as above? What does 
  infix:+(@list,$scalar) do?
 
 Well, what does a list return in scalar context?  In the presence of the
 C comma, it returns 5 for the last thing evaluated.  In its absence, it
 returns 5 for the length.
 
   (1,2,3,4,5)  +  foo()   # foo() in list context? What does infix:+(@list, 
  @list2) do?
 
 Same deal, 5 + $(foo())

And of course I forgot to read you comments.  So you want to add two
lists, as in:

[1,2,3,4,5] + [foo()]

Well, that's an error, I think.  That or it adds the lengths.

Luke



Re: Semantics of vector operations (Damian)

2004-01-22 Thread Jonathan Scott Duff
On Thu, Jan 22, 2004 at 01:28:42PM -0500, Austin Hastings wrote:
  From: Jonathan Scott Duff [mailto:[EMAIL PROTECTED]
  On Thu, Jan 22, 2004 at 01:10:23PM -0500, Austin Hastings wrote:
   In reverse order:
  
%languageometer.values »+= rand;
  
   This is the same as
  
all( %languageometer.values ) += rand;
  
   right?
 
  It's the same as
 
  $r = rand;
  $_ += $r for %languageometer.values
 
  Your junction looks like it should work but I think you're really
  adding the random number to the junction, not the elements that compose
  the junction thus none of %languageometer.values are modified.
 
 It would be disappointing if junctions could not be lvalues.

Oh, I think that junctions can be lvalues but a junction is different
from the things that compose it.  I.e.,

$a = 5; $b = 10;
$c = $a | $b;
$c += 5;

print "$a $b\n";
if $c > 10 { print "More than 10!\n"; }

would output

5 10
More than 10!

because the *junction* has the +5 attached to it rather than the
individual elements of the junction.  Read the if statement as "if any
of (5 or 10) + 5 is greater than 10, ..."  Which is the same as "if
any of 10 or 15 is greater than 10, ..."

I hope I'm making sense.

  I don't think junctions apply at all in vectorization.   They seem to
  be completely orthogonal.
 
 I'm curious if that's true, or if they're two different ways of getting to
 the same data. (At least in the one-dimension case.)

I'm just waiting for Damian to speak up :-)

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Re: Semantics of vector operations

2004-01-22 Thread Jonathan Scott Duff
On Thu, Jan 22, 2004 at 11:25:41AM -0800, Larry Wall wrote:
 On Thu, Jan 22, 2004 at 01:10:25PM -0500, Austin Hastings wrote:
 :  -Original Message-
 :  From: Larry Wall [mailto:[EMAIL PROTECTED]
 :  I think some people will want to think of it one way, while others
 :  will want to think of it the other way.  If that's the case, the
 :  proper place to put the marker is between the operand and the operator.
 : 
 : How do you handle operator precedence/associativity?
 
 Me?  I handle it by making it an adverb on the base operator.  :-)

Does that mean it should get the colon?  :)

 I don't think that »X« means Do whatever the mathematicians want X
 to do.  Unicode operators have to be good for something, after all.
 
 So perhaps it's best to call » and « distribution modifiers or
 some such.

Could someone put the non-unicode variants up there so those of us
with unicode-ignorant MUAs can know what exactly we're talking about?
Or alternatively (and certainly better), could someone clue me on how
to make mutt unicode-aware?

thanks,

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Re: Semantics of vector operations

2004-01-22 Thread Larry Wall
On Thu, Jan 22, 2004 at 02:28:09PM -0700, Luke Palmer wrote:
: Well, for being called vector operators, they're ending up pretty
: useless as far as working with mathematical vectors.

Which is why I suggested calling them distributors or some such.

: As a
: mathematician, I'd want:
: 
: @vec1 »*« @vec2
: 
: To do an inner or an outer product (probably outer, as it has no
: implicit connotations of the + operator).  That is, it would come out
: either a matrix or a scalar.
: 
: But there are other times when I'd want that to call the operator
: respectively.  Then you get this for the inner product:
: 
: sum(@vec1 »*« @vec2)
: 
: Which isn't so bad, after all.

Yes, and I think we have to stick with the naive view of what * would
do, since there are times you simply want to do a bunch of multiplications
in parallel.

: Hmm, but if it does give an outer product, then a generic tensor product
: is as easy as:
: 
: reduce { $^a + $^b } @A »*« @B
: 
: And nobody can fathom how happy that would make me.

I'd think it would make you even happier to just use the appropriate
Unicode operators directly (presuming there are such).

: Also, you'd get the nice consistency that:
: 
: @A »+« @B
: 
: Is the same as both:
: 
: map { @A »+ $^b } @B
: map { $^a +« @B } @A
: 
: Which is undoubtedly what the mathematician would expect  (It's very
: reminiscent of how junctions currently work).
: 
: But then there's the problem of how you express the oft-wanted:
: 
: map -> $i { @A[$i] + @B[$i] } 0..^min([EMAIL PROTECTED], [EMAIL PROTECTED])
: 
: (Hopefully when @A and @B are of equal length).

Yes, though in fact »+« is supposed to do max() rather than min() here.
Which, in the case of arrays of equal length, comes out to the same
thing...

: Maybe there's a way to do it so that we can both be happy: one syntax
: that does one thing, another that does the other.  Like:
: 
: @A »+« @B   # One-at-a-time
: @A «+» @B   # Outer product
: 
: Or something.  Hmm, then both:
: 
: @A »+ $b
: @A «+ $b
: 
: Would mean the same thing. 

Which says to me that outer product really wants to be something like
X or  or even  (shades of APL).  In the for-what-it's-worth department,
it looks like Python might be using @ for that.

Larry


Re: Semantics of vector operations

2004-01-22 Thread Larry Wall
On Thu, Jan 22, 2004 at 01:34:33PM -0600, Jonathan Scott Duff wrote:
: On Thu, Jan 22, 2004 at 11:25:41AM -0800, Larry Wall wrote:
:  Me?  I handle it by making it an adverb on the base operator.  :-)
: 
: Does that mean it should get the colon?  :)

Only if all adverbs in English end in -ly.

Of course, my name ends -ly in Japan--I had to learn to answer to Rally
when I was there.  So maybe I still get the colon...  :-)

:  I don't think that »X« means Do whatever the mathematicians want X
:  to do.  Unicode operators have to be good for something, after all.
:  
:  So perhaps it's best to call » and « distribution modifiers or
:  some such.
: 
: Could someone put the non-unicode variants up there so those of us
: with unicode-ignorant MUAs can know what exactly we're talking about?

Those are just the German/French quotes that look like « and ».

: Or alternatively (and certainly better), could someone clue me on how
: to make mutt unicode-aware?

Modern versions of mutt are already unicode aware--that's what I'm using.
Make sure .muttrc has

set charset=utf-8
set editor=vim(or any editor that can handle utf-8)
set send_charset=us-ascii:iso-8859-1:utf-8

The main thing is you have to make sure your xterm (or equivalent)
unicode aware.  This starts a (reversed video) unicode terminal on
my machine:

LANG=en_US.UTF-8 xterm \
-fg white -bg black \
-u8 \
-fn '-Misc-Fixed-Medium-R-Normal--18-120-100-100-C-90-ISO10646-1'

I'm also using gnome-terminal 2.4.0.1, which knows how to do utf-8
if you tell it in the preferences.

Of course, this is all from the latest Fedora Core, so your software
might not be so up-to-date.  And other folks might prefer something
other than en_US.  It's the .UTF-8 that's the important part though.
I run some windows in ja_JP.UTF-8.  And, actually, my send_charset is

set send_charset=us-ascii:iso-8859-1:iso-2022-jp:utf-8

because I have Japanese friends who prefer iso-2022-jp because they
don't know how to read utf-8 yet.

Larry


Re: Semantics of vector operations

2004-01-22 Thread Joe Gottman

- Original Message - 
From: Luke Palmer [EMAIL PROTECTED]
To: Austin Hastings [EMAIL PROTECTED]
Cc: Larry Wall [EMAIL PROTECTED]; Language List [EMAIL PROTECTED]
Sent: Thursday, January 22, 2004 4:28 PM
Subject: [perl] Re: Semantics of vector operations


 Austin Hastings writes:
  How do you handle operator precedence/associativity?
 
  That is,
 
 $a + $b + $c
 
  If you're going to vectorize, and combine, then you'll want to group. I
  think making the vectorizer a grouper as well kills two birds with one
  stone.
 
$a + $b + $c
 
  vs.
 
$a + ($b + $c)

 I have to agree with Larry here, the latter is much cleaner.

 I'm actually starting to like this proposal.  I used to shiver at the
 implementation of the old way, where people used the operator to group
 arbitrary parts of the expression.  I wouldn't even know how to parse
 that, much less interpret it when it's parsed.

 Now, we have a clear way to call a method on a list of values:

 @list ».method

 And a clear way to call a list of methods on a value:

 $value.« @methods

 It's turning out pretty nice.

   I just realized a potential flaw here.  Consider the code
$a >>= 1;

   Will this right-shift the value of $a one bit and assign the result to $a
(the current meaning)?  Or will it assign the value 1 to each element in the
array referenced by $a (as suggested by the new syntax)?  Both of these are
perfectly valid operations, and I don't think it's acceptable to have the
same syntax mean both.  I'm aware that using »= instead of >>= will
eliminate the inconsistency, but not everyone has easy access to Unicode
keyboards.

Joe Gottman




Re: Semantics of vector operations

2004-01-22 Thread Larry Wall
On Thu, Jan 22, 2004 at 08:08:13PM -0500, Joe Gottman wrote:
:I just realized a potential flaw here.  Consider the code
: $a = 1;
: 
:Will this right-shift the value of $a one bit and assign the result to $a
: (the current meaning)?  Or will it assign the value 1 to each element in the
: array referenced by $a (as suggested by the new syntax).  Both of these are
: perfectly valid operations, and I don't think its acceptable to have the
: same syntax mean both.  I'm aware that using »= instead of >>= will
: eliminate the inconsistency, but not everyone has easy access to Unicode
: keyboards.

Well,

$a >>= 1

would still presumably be unambiguous, and do the right thing, albeit
with run-time dwimmery.  On the other hand, we've renamed all the
other bitwise operators, so maybe we should rename these too:

+<  bitwise left shift
+>  bitwise right shift

which also gives us useful string bitshift ops:

~<  stringwise left shift
~>  stringwise right shift

as well as the never-before-thought-of:

?<  boolean left shift
?>  boolean right shift

Those last would be a great addition insofar as they could always
participate in constant folding.  Er, unless the right argument is 0,
of course...  :-)

Ain't orthogonality wonderful...

Larry


Re: Semantics of vector operations

2004-01-22 Thread Edwin Steiner
Luke Palmer [EMAIL PROTECTED] writes:
 @A »+« @B   # One-at-a-time
 @A «+» @B   # Outer product

 Or something.  Hmm, then both:

 @A »+ $b
 @A «+ $b

There is a page you may find inspiring:

http://www.ritsumei.ac.jp/~akitaoka/index-e.html

Sorry, I could not resist. :) The one-sided operators make sense to me
but combining this with both « and » seems hard on the eyes.

That being said I really like the general direction that Perl 6 is
going and I'm looking forward to using it. You're all doing great
work!

back to lurking-mode
-Edwin



Re: Semantics of vector operations

2004-01-22 Thread Luke Palmer
Larry Wall writes:
 On Thu, Jan 22, 2004 at 08:08:13PM -0500, Joe Gottman wrote:
 :I just realized a potential flaw here.  Consider the code
 : $a >>= 1;
 : 
 :Will this right-shift the value of $a one bit and assign the result to $a
 : (the current meaning)?  Or will it assign the value 1 to each element in the
 : array referenced by $a (as suggested by the new syntax).  Both of these are
 : perfectly valid operations, and I don't think its acceptable to have the
 : same syntax mean both.  I'm aware that using »= instead of >>= will
 : eliminate the inconsistency, but not everyone has easy access to Unicode
 : keyboards.
 
 Well,
 
 $a >>= 1
 
 would still presumably be unambiguous, and do the right thing, albeit
 with run-time dwimmery.  On the other hand, we've renamed all the
 other bitwise operators, so maybe we should rename these too:
 
 +<   bitwise left shift
 +>   bitwise right shift

I could have sworn we already did that.  I thought they were:

 +<<
 +>>

But I guess that's an extra unneeded character.

Luke

 which also gives us useful string bitshift ops:
 
 ~<   stringwise left shift
 ~>   stringwise right shift
 
 as well as the never-before-thought-of:
 
 ?<   boolean left shift
 ?>   boolean right shift
 
 Those last would be a great addition insofar as they could always
 participate in constant folding.  Er, unless the right argument is 0,
 of course...  :-)
 
 Ain't orthogonality wonderful...
 
 Larry


Re: Start of thread proposal

2004-01-22 Thread Dan Sugalski
At 12:15 AM +0100 1/22/04, Leopold Toetsch wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:

[ No Guarantees WRT data access ]

... seems to indicate that even whole ops like add P,P,P are atomic.

 Yep. They have to be, because they need to guarantee the integrity of
 the pmc structures and the data hanging off them (which includes
 buffer and string stuff)
But isn't that a contradiction? Or better: when even an opcode like the
above is atomic, then access to a shared PerlNum should be guaranteed
to be atomic too.
Sure, but there's a *lot* more to user data integrity than atomic 
access to individual pieces. That's the least of the problem. The 
user data issue is one where you have multiple pieces being updated, 
or one piece being updated multiple times--that's the stuff we're not 
guaranteeing.

So, while we will make sure that storing a single value into a hash 
happens atomically, we won't guarantee that a series of stores into 
the hash, or a combination of loads and stores, or even a combination 
of reads and writes to a scalar, will happen atomically.
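
As a hedged illustration of that distinction (plain pthreads here, not
whatever locking API Parrot ends up exposing): even if every individual
load and store were atomic, a compound read-test-write sequence is only
correct if the user-level code wraps the whole sequence in its own lock.

    #include <pthread.h>
    #include <stdio.h>

    static long balance = 100;
    static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *withdraw(void *arg) {
        (void)arg;
        /* read + test + write: three steps, so per-access atomicity from
         * the VM would not be enough -- the sequence itself needs a lock */
        pthread_mutex_lock(&balance_lock);
        if (balance >= 60)
            balance -= 60;
        pthread_mutex_unlock(&balance_lock);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, withdraw, NULL);
        pthread_create(&b, NULL, withdraw, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("balance = %ld\n", balance);   /* 40: only one withdrawal fits */
        return 0;
    }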
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


RE: Start of thread proposal

2004-01-22 Thread Dan Sugalski
[Note to everyone -- I'm digging through my mail so be prepared for a 
potential set of responses to things that're already answered...]
At 6:37 PM -0500 1/19/04, Gordon Henriksen wrote:
Dan Sugalski wrote:

 For a copying collector to work, all the mutators must be blocked,
 and arguably all readers should be blocked as well.
True of non-moving collectors, too. Or, let me put it this way: non-
copying *GC* (the sweep or copy phase) can be threadsafe, but the mark
phase is never threadsafe. The way in which marking is not
threadsafe is a bit more pathological (i.e., it's not the common case
as it is with the copying collector), but a standard tracing DOD
cannot be correct when competing with mutators. It WILL collect non-
garbage (those are MUTATORS, remember), and the result WILL be
Heisenbugs and crashes.
Some of what I've written up addresses why. It's pretty simple to
demonstrate a single case to prove the point, but I don't feel like
re-creating the ASCII art right now. :) I'll send that section when I
get out of the office.
parrot will have to be able to suspend all threads in the environment.
Unfortunate, but really quite unavoidable.
I'm not sure that's the case. What we need to do is suspend metadata 
mutation--that is, buffers can't be resized while a gc run is in 
progress. Other than that, if we have guarantees of aligned atomic 
pointer writes (so we don't have word shearing to deal with) we don't 
have to stop the mutation of the contents of the buffers themselves.

The only tricky bit comes in with the examination of the root set of 
other threads--accessing the hardware register contents of another 
running thread may be... interesting. (Which, I suppose, argues for 
some sort of volatile marking of the temp variables)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: [RESEND] Q: Array vs SArray

2004-01-22 Thread Dan Sugalski
At 2:15 PM -0500 1/21/04, Matt Fowles wrote:
All~

So, lets do the classes as:

*) Array - fixed-size, mixed-type array
*) vPArray - variable-sized PMC array
*) PArray - Fixed-size PMC array
*) vSArray - variable-sized string array
*) SArray - fixed-size string array
I suggest using Array to mean fixed size and Vector to mean variable size.
I'd rather not. Vector, for me at least, has some specific 
connotations (from physics) that don't really match what we're 
talking about here. They're more vectors in the mathematical sense, 
but they won't behave like mathematical vectors so I don't think 
that's a good idea either.

Array, while mundane (and a bit annoying with the prefix stuff tacked 
on) is at least accurate.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: IMC returning ints

2004-01-22 Thread Dan Sugalski
At 10:28 AM +0100 1/22/04, Leopold Toetsch wrote:
Steve Fink [EMAIL PROTECTED] wrote:

 I did a cvs update, and it looks like imcc doesn't properly return
 integers anymore from nonprototyped routines.
I don't even know if this is allowed. But anyway, if the call is non
prototyped, native types should go into P3. So you have the overhead of
PMC creation anyway plus the overhead of the array access.
And mainly, the return conventions are still broken.
I thought those were fixed. There's no difference between calling and 
return conventions -- a return is just a call to the return 
continuation with parameters, the same way that a call is, well, a 
call to the (uninstantiated) sub continuation with parameters.

I'll go thump pdd03.
--
Dan
--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: [DOCS] C code documentation

2004-01-22 Thread Dan Sugalski
At 10:42 AM +0100 1/21/04, Michael Scott wrote:
Perhaps the most controversial feature of all this is that I'm using 
rows of 80 '#'s as visual delimiters to distinguish documentation 
sections from code.
Please don't. If you really, really must, chop it down to 60 or so 
characters. 80 may wrap in some editors under some situations, and 
anything bigger than 72 may trigger wrapping behaviour in mail 
programs. (I think I'd as soon you didn't put in delimiters like this 
at all, but I can live with it if I must)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Start of thread proposal

2004-01-22 Thread Dan Sugalski
At 4:59 PM -0800 1/19/04, Dave Whipp wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:
 =head2 Guarantees
Maybe this isn't strictly a threading thing, but are we going to make any
guarantees about event orderings? For example, will we guarantee that a
sequence of events send from one thread to another will always be received
in the order they are sent?
Hrm. I suppose we really ought to, so we will. If we prioritize 
events (and I'm still torn) then they'll be in priority then send 
order.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: [DOCS] CVS version $Id strings

2004-01-22 Thread Dan Sugalski
At 2:06 PM +0100 1/19/04, Michael Scott wrote:
Some files have CVS version $Id strings, some don't.

While tidying up the documentation I'm visiting every file. I can either:

1) add them when missing
2) remove them when present
3) do nothing
I was inclined to (1) until I reflected that it did preserve a 
relation between local and repository versions. Say one has a number 
of different check outs of the distribution, then the $Id strings 
might come in handy to distinguish between them. So in the end I 
incline to (2).

Does anyone have strong feelings either way?
Leave the CVS version strings in. They get automagically updated on 
checkin and it's a useful way to see which version of a file you have.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Optimization brainstorm: variable clusters

2004-01-22 Thread Dan Sugalski
At 2:52 PM +0100 1/17/04, Elizabeth Mattijsen wrote:
Don't know why you think it would be fetched 3 times, but as with
tied variables in Perl 5, a fetch is done _every time_ the value of
the tied variable is needed.
You misunderstand. I'm talking about fetching the PMC for the 
variable *out of the stash* every time. This done with the assumption 
that if the *stash* is tied, then every use of a variable from the 
stash must refetch it out so the stash tie code can decide what it 
wants to do.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Vtables organization

2004-01-22 Thread Dan Sugalski
At 8:37 AM -0500 1/19/04, Benjamin K. Stuhl wrote:
Luke Palmer wrote:

Benjamin K. Stuhl writes:

Other than the special case of :readonly, can you give me an example
of when you'd need to, rather than simply writing a PMC class that
inherits from some base? I'm having trouble thinking of an example of
when you'd want to be able to do that... After all, since operator
overloading and tying effectively _replace_ the builtin operations,
what more does one need?


Well, other than Constant, we need to be able to put locking on shared
PMCs.  We'd like to add a debug trace to a PMC.  We could even make any
kind of PMC sync itself with an external source on access, though that's
a bit of a pathological case.
Indeed, all of this can be done, however, by subclassing Ref.  I think
the reason this isn't sufficient is that we want to change the actual
PMC into this new variant when it is possibly already in existence.
Like my C<supplant> was evilly trying to do.  Perhaps there's a way to
get that working safely...
The issue is that the PMC's original vtable assumes (and should, 
IMHO be _able_ to assume) that it has total control over the PMC's 
data,
Well... I think I'll disagree here. The *class* vtable can assume 
this. However that doesn't mean that any random vtable function can.

In addition to the thread autolocking front end and debugging front 
end vtable functions, both of which can be generic, there's the 
potential for tracing and auditing front end functions, input data 
massaging wrappers, and all manner of Truly Evil front (and back) end 
wrappers that don't need to actually access the guts of the PMC, but 
can instead rely on the other vtable functions to get the information 
that they need to operate.

Not that this necessarily mandates passing in the vtable pointer to 
the functions, but the uses aren't exactly marginal.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Threads... last call

2004-01-22 Thread Dan Sugalski
Last chance to get in comments on the first half of the proposal. If 
it looks adequate, I'll put together the technical details 
(functions, protocols, structures, and whatnot) and send that off for 
abuse^Wdiscussion. After that we'll finalize it, PDD the thing, and 
get the implementation in and going.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: [DOCS] C code documentation

2004-01-22 Thread Michael Scott
Yep. I bounced Sam's comment around in my head for a while until I saw 
that I was only putting them in for my own current convenience - makes 
it easier to see what I'm doing as I'm doing it - so they won't be 
there. Minimal is best. And anyway who wants to be SO 20th century.

Mike

On 22 Jan 2004, at 19:33, Dan Sugalski wrote:

At 10:42 AM +0100 1/21/04, Michael Scott wrote:
Perhaps the most controversial feature of all this is that I'm using 
rows of 80 '#'s as visual delimiters to distinguish documentation 
sections from code.
Please don't. If you really, really must, chop it down to 60 or so 
characters. 80 may wrap in some editors under some situations, and 
anything bigger than 72 may trigger wrapping behaviour in mail 
programs. (I think I'd as soon you didn't put in delimiters like this 
at all, but I can live with it if I must)
--
Dan

--it's like 
this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: [DOCS] CVS version $Id strings

2004-01-22 Thread Michael Scott
Duh. Rereading that I can see I got my numbers in a twist. I've been 
adding them where missing.

On 22 Jan 2004, at 19:39, Dan Sugalski wrote:

At 2:06 PM +0100 1/19/04, Michael Scott wrote:
Some files have CVS version $Id strings, some don't.

While tidying up the documentation I'm visiting every file. I can 
either:

1) add them when missing
2) remove them when present
3) do nothing
I was inclined to (1) until I reflected that it did preserve a 
relation between local and repository versions. Say one has a number 
of different check outs of the distribution, then the $Id strings 
might come in handy to distinguish between them. So in the end I 
incline to (2).

Does anyone have strong feelings either way?
Leave the CVS version strings in. They get automagically updated on 
checkin and it's a useful way to see which version of a file you have.
--
Dan

--it's like 
this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Managed and unmanaged structs (Another for the todo list)

2004-01-22 Thread chromatic
On Thu, 2004-01-15 at 09:16, Leopold Toetsch wrote:

 Dan Sugalski [EMAIL PROTECTED] wrote:

  If that's living in an managedstruct, then accessing the struct
  elements should be as simple as:
 
  set I0, P20['bar']
  set S1, P20['plugh']
  set P20['baz'], 15
 
 That's mostly done, except for named keys (I used arrays). If you like
 named keys, an OrderedHash would provide both named and indexed access.

How does an OrderedHash cross the NCI boundary?  That is, I know how a
ManagedStruct or UnmanagedStruct converts to something the wrapped
library can understand -- the PMC_data() macro makes sense.  How does it
work for an OrderedHash?

I looked at this and wondered where to hang the mapping of names to
array indices.  The data member of the PMC looks full up.

My other idea was to wrap the struct in an object and write methods that
set the appropriate array members, but there's still NCI to consider.

Of course, I could be missing something extremely simple, but if no one
ever asks

-- c



Re: open issue review (easy stuff)

2004-01-22 Thread Robert Spier
 Is there any way to get RT to close tickets (or change their status) 
 entirely via e-mail? That'd make this a lot easier if we could throw 
 a:
 RT-Status: Closed
 or something like it in the reply to a bug report that notes the bug 
 has been fixed.

I could implement this, but there are authentication issues.

The rt CLI is another option.  At some point in the future, I will
document this better on bugs.perl.org. (Redoing bugs.perl.org is on my
short-list.)

-R


Re: Start of thread proposal

2004-01-22 Thread Leopold Toetsch
Dan Sugalski [EMAIL PROTECTED] wrote:

 The only tricky bit comes in with the examination of the root set of
 other threads--accessing the hardware register contents of another
 running thread may be... interesting. (Which, I suppose, argues for
 some sort of volatile marking of the temp variables)

You'll provide the interesting part, that is:

use Psi::Estimate::CPU_Register_Changes_in_Future_till_mark_is_done;

SCNR, leo


Re: [perl #25233] Memory pool corruption

2004-01-22 Thread Leopold Toetsch
Dan Sugalski [EMAIL PROTECTED] wrote:

 I'm finding parrot's killing its memory pools somewhere and dying
 when it goes to compact memory during a GC sweep.

Yep. See also "Memory corruption" by Steve Fink and my follow-ups.

leo


Re: IMC returning ints

2004-01-22 Thread Leopold Toetsch
Dan Sugalski [EMAIL PROTECTED] wrote:
 At 10:28 AM +0100 1/22/04, Leopold Toetsch wrote:

And mainly the return convention are still broken.

 I thought those were fixed.

Not yet.

 ... There's no difference between calling and
 return conventions

To be done.

leo


Re: Threads... last call

2004-01-22 Thread Deven T. Corzine
Dan Sugalski wrote:

Last chance to get in comments on the first half of the proposal. If 
it looks adequate, I'll put together the technical details (functions, 
protocols, structures, and whatnot) and send that off for 
abuse^Wdiscussion. After that we'll finalize it, PDD the thing, and 
get the implementation in and going.
Dan,

Sorry to jump in out of the blue here, but did you respond to Damien 
Neil's message about locking issues?  (Or did I just miss it?)

This sounds like it could be a critically important design question; 
wouldn't it be best to address it before jumping into implementation?  
If there's a better approach available, wouldn't this be the best time 
to determine that?

Deven

Date: Wed, 21 Jan 2004 13:32:52 -0800
From: Damien Neil [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Start of thread proposal
Message-ID: [EMAIL PROTECTED]
References: [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED]
In-Reply-To: [EMAIL PROTECTED]
Content-Length: 1429
On Wed, Jan 21, 2004 at 01:14:46PM -0500, Dan Sugalski wrote:
... seems to indicate that even whole ops like add P,P,P are atomic.

Yep. They have to be, because they need to guarantee the integrity of 
the pmc structures and the data hanging off them (which includes 
buffer and string stuff)
Personally, I think it would be better to use corruption-resistant
buffer and string structures, and avoid locking during basic data
access.  While there are substantial differences in VM design--PMCs
are much more complicated than any JVM data type--the JVM does provide
a good example that this can be done, and done efficiently.
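To make the corruption-resistant idea concrete: if string bodies are immutable
and an update publishes a freshly built body with a single atomic pointer
store, a racing reader sees either the old body or the new one, never a torn
mix. A standalone C11 sketch under that assumption follows; none of the names
or layouts are Parrot's STRING.

/* Standalone C11 sketch of a lock-free, corruption-resistant string slot:
 * bodies are immutable, updates publish a new body with one atomic swap.
 * This is an illustration, not Parrot's STRING implementation. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t len;
    char   data[];           /* immutable after creation */
} str_body;

typedef struct {
    _Atomic(str_body *) body;
} str_slot;

static str_body *make_body(const char *s)
{
    size_t len = strlen(s);
    str_body *b = malloc(sizeof *b + len + 1);
    b->len = len;
    memcpy(b->data, s, len + 1);
    return b;
}

/* Writer: build the new body privately, then publish it in one step.
 * The old body is leaked here; a GC would reclaim it once no reader
 * can still hold a pointer to it. */
static void slot_set(str_slot *slot, const char *s)
{
    str_body *fresh = make_body(s);
    atomic_store(&slot->body, fresh);
}

/* Reader: a single atomic load yields a consistent (len, data) pair. */
static void slot_print(str_slot *slot)
{
    str_body *b = atomic_load(&slot->body);
    printf("%zu: %s\n", b->len, b->data);
}

int main(void)
{
    str_slot slot = { .body = NULL };
    slot_set(&slot, "hello");
    slot_print(&slot);
    slot_set(&slot, "world, but longer");
    slot_print(&slot);
    return 0;
}

The cost moves from every access to allocation and reclamation, which is
roughly the trade the JVM makes with its immutable strings.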
Failing this, it would be worth investigating what the real-world
performance difference is between acquiring multiple locks per VM
operation (current Parrot proposal) vs. having a single lock
controlling all data access (Python) or jettisoning OS threads
entirely in favor of VM-level threading (Ruby).  This forfeits the
ability to take advantage of multiple CPUs--but Leopold's initial
timing tests of shared PMCs were showing a potential 3-5x slowdown
from excessive locking.
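The order of magnitude being argued about is easy to feel with a toy
measurement: time a trivial operation with and without an uncontended pthread
mutex taken around every iteration. The numbers depend entirely on the
platform; this is only an illustrative standalone sketch, not a Parrot
benchmark.

/* Standalone sketch, not a Parrot benchmark: rough cost of taking an
 * uncontended mutex around every "op" versus doing the op with no lock. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 10000000L

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static double seconds(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    volatile long counter = 0;
    struct timespec t0, t1, t2;
    long i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERATIONS; i++)
        counter += 1;                      /* the unlocked "op" */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    for (i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);         /* one lock per "op" */
        counter += 1;
        pthread_mutex_unlock(&lock);
    }
    clock_gettime(CLOCK_MONOTONIC, &t2);

    printf("unlocked: %.3fs  locked: %.3fs\n",
           seconds(t0, t1), seconds(t1, t2));
    return 0;
}

Even with no contention at all, the locked loop is typically several times
slower per iteration on common hardware, which is the same shape as the
shared-PMC timings mentioned above.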
I've seen software before that was redesigned to take advantage of
multiple CPUs--and then required no less than four CPUs to match
the performance of the older, single-CPU version.  The problem was
largely attributed to excessive locking of mostly-uncontested data
structures.
   - Damien




Re: Threads... last call

2004-01-22 Thread Josh Wilmes

I'm also concerned by those timings that leo posted.
0.0001 vs 0.0005 ms on a set - that magnitude of locking overhead
seems pretty crazy to me.

It seemed like a few people have said that the JVM style of locking
can reduce this, so it seems to me that it merits some serious 
consideration, even if it may require some changes to the design of
parrot.

I'm not familiar enough with the implementation details here to say much 
one way or another. But it seems to me that if this is one of those
low-level decisions that will be impossible to change later and will
forever constrain perl's performance, then it's important not to rush
into a bad choice because it seems more straightforward.

Perhaps some more experimentation is in order at this time.

--Josh


At 17:24 on 01/22/2004 EST, Deven T. Corzine [EMAIL PROTECTED] wrote:

 [Deven's message, quoted in full above together with Damien Neil's text, trimmed]



Re: [COMMIT] IMCC gets high level sub call syntax

2004-01-22 Thread Will Coleda
My most recent get tcl working again error was in fact, the result of 
a bad jump, as Leo suggested. Of course, that jump was never made 
before, and in trying to track down why I was hitting an error 
condition I hadn't before, I realized that I had a few 
non-calling-convention subs near where the issue was occurring. In the 
process of updating everything to be shiny and use the very latest 
calling syntax (so I can then work on tracking down actual bugs 
somewhere), I found that the samples listed below both fail with:

7 8 nine
SArray index out of bounds!
The end of parrot -t shows:

PC=105; OP=885 (set_i_ic); ARGS=(I5=7, 10)
PC=108; OP=885 (set_i_ic); ARGS=(I0=-98, 1)
PC=111; OP=885 (set_i_ic); ARGS=(I1=3, 1)
PC=114; OP=885 (set_i_ic); ARGS=(I2=0, 0)
PC=117; OP=885 (set_i_ic); ARGS=(I3=-2, 0)
PC=120; OP=885 (set_i_ic); ARGS=(I4=0, 0)
PC=123; OP=38 (invoke_p); ARGS=(P1=RetContinuation=PMC(0x8bb7d8))
PC=48; OP=1003 (restoretop)
PC=49; OP=801 (shift_i_p); ARGS=(I16=8, P3=SArray=PMC(0x8bb7f0))
Which kind of stops me dead in my tracks, as I'm loath to put things
back to the old, bulky calling conventions.

Once this gets fixed, I'd be happy to submit a doc patch for 
./imcc/docs/calling_conventions to document the syntax.

OOC, any progress on

$P1(arg)

working yet? (where $P1 is a Sub-like PMC?) This would allow me to rid 
my code of the pcc_call style entirely.

Regards.

On Sunday, November 16, 2003, at 08:03  PM, Melvin Smith wrote:

# Sample 1
.sub _main
  .local int i
  .local int j
  .local string s
  i = 7
  $I1 = 8
  s = "nine"
  I0 = _foo(7, 8, "nine")
  print "return: "
  print I0
  print "\n"
  end
.end
.sub _foo
  .param int i
  .param int j
  .param string s
  print i
  print " "
  print j
  print " "
  print s
  print "\n"
  .pcc_begin_return
  .return 10
  .pcc_end_return
.end

# Sample 2, multiple return values
.sub _main
  .local int i
  .local int j
  .local string s
  i = 7
  $I1 = 8
  s = "nine"
  (I0, I1) = _foo(7, 8, "nine")
  print "return: "
  print I0
  print " "
  print I1
  print "\n"
  end
.end
.sub _foo
  .param int i
  .param int j
  .param string s
  print i
  print " "
  print j
  print " "
  print s
  print "\n"
  .pcc_begin_return
  .return 10
  .return 11
  .pcc_end_return
.end
--
Will "Coke" Coleda
will at coleda dot com



Closable Tickets

2004-01-22 Thread Matt Fowles
Robert~

You can close the following tickets

24848
24840
22281
21988
Matt


How does perl handle HLL C<eval>?

2004-01-22 Thread nigelsandever
The subject says it all. 

As parrot is designed to be targeted by many languages,
how will it handle 'eval' opcodes for those different languages?

Shell out to a separate process?

Nigel.




Re: How does perl handle HLL C<eval>?

2004-01-22 Thread Michal Wallace
On Fri, 23 Jan 2004 [EMAIL PROTECTED] wrote:

 The subject says it all.

 As parrot is designed to be targeted by many languages,
 how will it handle 'eval' opcodes for those different languages?

 Shell out to a separate process?

You could do that, or you can provide a C-based
compiler as a PMC or you can teach your language
to compile itself... Or you can even write your
language in some other language that targets parrot. :)
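One way to picture the compiler-as-a-PMC route is a registry keyed by language
name that an eval op consults before compiling the source string and invoking
the result. The C below is a standalone sketch with invented names; it is not
Parrot's actual compreg interface.

/* Standalone sketch (invented names, not Parrot's API): a registry that maps
 * a language name to a compile function, which is how an eval op for several
 * HLLs could be dispatched inside one VM. */
#include <stdio.h>
#include <string.h>

typedef int (*compile_fn)(const char *source);   /* returns a fake "bytecode id" */

typedef struct {
    const char *language;
    compile_fn  compile;
} compiler_entry;

static int compile_mini_a(const char *source)
{
    printf("[mini-a] compiling: %s\n", source);
    return 1;
}

static int compile_mini_b(const char *source)
{
    printf("[mini-b] compiling: %s\n", source);
    return 2;
}

static const compiler_entry registry[] = {
    { "MiniA", compile_mini_a },
    { "MiniB", compile_mini_b },
};

/* What an eval op could do: find the right compiler and compile the string;
 * the VM would then invoke the result.  Here we only return the fake id. */
static int vm_eval(const char *language, const char *source)
{
    size_t i;
    for (i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].language, language) == 0)
            return registry[i].compile(source);
    fprintf(stderr, "no compiler registered for %s\n", language);
    return -1;
}

int main(void)
{
    vm_eval("MiniA", "print 42");
    vm_eval("MiniB", "(display 42)");
    return 0;
}

Shelling out to a separate process is only needed when a language has no
compiler that can be run inside the process at all.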


 Nigel.


Sincerely,

Michal J Wallace
Sabren Enterprises, Inc.
-
contact: [EMAIL PROTECTED]
hosting: http://www.cornerhost.com/
my site: http://www.withoutane.com/
--