Re: Cygwin versus plain XP (for working with Pugs+Parrot together)

2006-01-02 Thread Greg Bacon
In message [EMAIL PROTECTED],
Joshua Hoblitt writes:

: Can you post the output of `prove -v t/op/trans.t`?  I suspect
: that atan2() may be misbehaving on cygwin in the same way that it does
: on Solaris.

After upping to r10836, I needed the following patch to build:

Index: src/classes/os.pmc
===================================================================
--- src/classes/os.pmc  (revision 10836)
+++ src/classes/os.pmc  (working copy)
@@ -20,6 +20,7 @@
 */
 
 #include <parrot/parrot.h>
+#include <limits.h>
 
 static PMC * OS_PMC;
 pmclass OS singleton {

The output of the requested command is below:

$ prove -v t/op/trans.t
t/op/trans....1..19
ok 1 - sin
ok 2 - cos
ok 3 - tan
ok 4 - sec
ok 5 - atan
ok 6 - asin
ok 7 - acos
ok 8 - asec
ok 9 - cosh
ok 10 - sinh
ok 11 - tanh
ok 12 - sech

# Failed test (t/op/trans.t at line 313)
#  got: 'ok 1
# ok 2
# ok 3
# ok 4
not ok 13 - atan2
# ok 5
# ok 6
# ok 7
# ok 8
# ok 9
# ok 10
# ok 11
# ok 12
# ok 13
# ok 14
# ok 15
# ok 16
# not 0.00ok 17
# '
# expected: 'ok 1
# ok 2
# ok 3
# ok 4
# ok 5
# ok 6
# ok 7
# ok 8
# ok 9
# ok 10
# ok 11
# ok 12
# ok 13
# ok 14
# ok 15
# ok 16
# ok 17
# '
ok 14 - log2
ok 15 - log10
ok 16 - ln
ok 17 - exp
ok 18 - pow
ok 19 - sqrt
# Looks like you failed 1 test of 19.
dubious
Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 13
Failed 1/19 tests, 94.74% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
t/op/trans.t1   256191   5.26%  13
Failed 1/1 test scripts, 0.00% okay. 1/19 subtests failed, 94.74% okay.

Enjoy,
Greg


Re: Configuration error in parrot-0.4.0

2006-01-02 Thread Sastry
Hi
I was able to install parrot with the earlier version itself. Thanks for the help.

regards
Ravi Sastry


On 12/30/05, Joshua Hoblitt [EMAIL PROTECTED] wrote:
> Does this issue still occur with recent svn sources?
>
> -J
>
> --
> On Wed, Dec 28, 2005 at 06:54:18AM -0800, jerry gay wrote:
> > On 12/28/05, Sastry [EMAIL PROTECTED] wrote:
> > > Hi
> > > I tried building parrot on Linux 2.4.20 and I get the following error
> > > during the gmake process. I have the default perl-5.8.6 built on my
> > > system. Can anybody suggest what this error is and how to overcome
> > > it?
> >
> > [snip extra build error info]
> >
> > > /usr/local/bin/perl -e 'chdir shift @ARGV; system q{gmake}, @ARGV;
> > > exit $? >> 8;' dynclasses
> > > gmake[1]: Entering directory `/home/sastry/parrot-0.4.0/dynclasses'
> > > /usr/local/bin/perl /home/sastry/parrot-0.4.0/build_tools/pmc2c.pl
> > > --dump gdbmhash.pmc
> > > Can't write 'classes/default.dump': No such file or directory at
> > > /home/sastry/parrot-0.4.0/build_tools/pmc2c.pl line 687.
> > > pmc2c dump failed (512)
> > > gmake[1]: *** [all] Error 2
> > > gmake[1]: Leaving directory `/home/sastry/parrot-0.4.0/dynclasses'
> > > gmake: *** [dynclasses.dummy] Error 2
> > > [EMAIL PROTECTED] parrot-0.4.0]$
> >
> > i assume you have the 0.4.0 release tarball. i don't know why it's not
> > working with that configuration, sorry--perhaps there's somebody with
> > linux and gmake who can chime in. in the meantime, perhaps an updated
> > tarball or a fresh checkout from svn will prove successful. see
> > http://www.parrotcode.org/source.html for instructions on downloading
> > either.
> >
> > hope that helps.
> > ~jerry




--


[perl #34549] atan2() isn't IEEE compliant on OpenBSD/*BSD/Cygwin/Solaris

2006-01-02 Thread Joshua Hoblitt via RT
I've committed a possible fix for openbsd, cygwin, & solaris as changesets
r10839 & r10843.  I basically applied what Steve Peters proposed but
with the changes in math.c instead of creating init.c (as agreed to on
#parrot).

This doesn't appear to have done anything for gcc/solaris... can someone
test openbsd and cygwin?

-J

--


Re: [perl #34549] atan2() isn't IEEE compliant on OpenBSD/*BSD/Cygwin/Solaris

2006-01-02 Thread Greg Bacon
In message [EMAIL PROTECTED],
Joshua Hoblitt via RT writes:

: I've committed a possible fix for openbsd, cygwin, & solaris as changesets
: r10839 & r10843.  I basically applied what Steve Peters proposed but
: with the changes in math.c instead of creating init.c (as agreed to on
: #parrot).
: 
: This doesn't appear to have done anything for gcc/solaris... can someone
: test openbsd and cygwin?

After upping to r10844, trans.t still fails:

t/op/trans....1..19
ok 1 - sin
ok 2 - cos
ok 3 - tan
ok 4 - sec
ok 5 - atan
ok 6 - asin
ok 7 - acos
ok 8 - asec
ok 9 - cosh
ok 10 - sinh
ok 11 - tanh
ok 12 - sech

# Failed test (t/op/trans.t at line 313)
#  got: 'ok 1
# ok 2
# ok 3
# ok 4
# ok 5
# ok 6
# ok 7
# ok 8
# ok 9
# ok 10
# ok 11
# ok 12
# ok 13
# ok 14
# ok 15
# ok 16
not ok 13 - atan2
# not 0.00ok 17
# '
# expected: 'ok 1
# ok 2
# ok 3
# ok 4
# ok 5
# ok 6
# ok 7
# ok 8
# ok 9
# ok 10
# ok 11
# ok 12
# ok 13
# ok 14
# ok 15
# ok 16
# ok 17
# '
ok 14 - log2
ok 15 - log10
ok 16 - ln
ok 17 - exp
ok 18 - pow
ok 19 - sqrt
# Looks like you failed 1 test of 19.
dubious
Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 13
Failed 1/19 tests, 94.74% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
t/op/trans.t1   256191   5.26%  13
Failed 1/1 test scripts, 0.00% okay. 1/19 subtests failed, 94.74% okay.


Re: IMCC optimizer instruction duplication and opcode write/reads

2006-01-02 Thread Amos Robinson
 Argh. Just realised my old address, [EMAIL PROTECTED], could receive emails
but not send them (not even to itself!)


 On Dec 31, 2005, at 15:43, Amos Robinson wrote:

--

 A copy_ins() function would be nice, if needed.

 However, this doesn't seem to work with e.g. set_args.

 Why?

In my local repo with branch_cond_loop optimizations,
parrot -v -d runtime/parrot/library/Data/Dumper/Default.pir
gives me...

-- SNIP --
push %s, %s push
push %s, %s push
sprintf %s, %s, %s  sprintf
set %s, %s[%s]  set
error:imcc:The opcode '_' (1) was not found. Check the type and number
of the
arguments
in file 'runtime/parrot/library/Data/Dumper/Default.pir' line 342

The op after the set is a method call: self.dump(...). If I change it to,
say, a label (that is being used), it works fine.
I can get it working for now by checking that it's not gonna move any PCC
directives - at the moment it just doesn't like any sort of branches.


--

 Duplicating e.g. a function call needs a bit more effort, e.g.
 allocation and filling the pcc_sub structure.

Hmm, okay. I was hoping I could've just copied the set_args, get_results,
and callmethodccs. I'll have a look further into the PCC stuff you did.

--

 The in/out doesn't tell you if a PMC is modified like in 'pop', but tells
you if the register is unmodified (in) or has new contents after the
operation (out). E.g.

pop P0, P1   # (out, in)

 doesn't modify P1 (it is a pointer to the same location before and after
the operation - if the contents of the array change, it doesn't matter at
all). But P0 has a totally new value after the operation, namely a pointer
to the previously last value of P1.

 This information is needed for register allocation inside imcc itself
and in the JIT. The 'out' means that the life range of the old 'P0' ends
at that instruction and a new life range of 'P0' is starting.


Ahh, okay. How many ops would you say there are that change contents of
non-LHS args? I don't want to cause too much trouble, but for used_once I
think some way of checking whether it's one of those is necessary, even if
I just hard-code the checks.

 Amos Robinson

 leo


Re: $/ and $! should be env (and explanation of env variables)

2006-01-02 Thread TSa

HaloO,

happy new year to Everybody!

Luke Palmer wrote:

Env variables are implicitly passed up through any number of call
frames.


Interesting to note that you imagine the call chain to grow upwards
where I would say 'implicitly passed down'. Nevertheless I would
also think of upwards being the positive direction where you find
your CALLER's environment with a $+ twigil var or where the $^ twigiled
vars come from ;)

Since exception handling is about *not* returning but calling your
CALLER's CATCH blocks in the same direction as the environments are
traversed, there should be some way of navigating in the opposite
direction of these environments up---or down---to the point where
the error occurred. Could this be through $- twigil vars? Or by
having an @ENV array that is indexed in opposite call direction yielding
this frame's %ENV? Thus @ENV[-1] would nicely refer to the actual
error environment. And [EMAIL PROTECTED] tells you how far away you are catching
the error. But this would not allow retrieving non-exceptional
environmental data frames unless forward indexing from @[EMAIL PROTECTED] and
beyond is supported.

Well, alternatively we could have an ERR namespace. BTW, how is the
object data environment handled? Is there a SELF namespace? How much
of that is automagically accessible in CATCH blocks?



$/ was formerly lexical, and making it environmental, first of all,
allows substitutions to be regular functions:

$foo.subst(rx| /(.*?)/ |, { <it>$1</it> })

Since .subst could choose to bind $/ on behalf of its caller.


Just let me get that right. Whatever $foo contains is stringified
and passed as $?SELF to .subst with $_ as a readonly alias?
Then the string is copied into an 'env $/ is rw' inside subst and
handed over to some engine-level implementation routine which actually
performs the substitution. Finally .subst returns the value of this $/.
Well, and for an in-place modification of $foo the call reads
$foo.=subst(...), I guess?

Or was the idea to have three declarations

  env $_ is rw = CALLER$_; # = $?SELF in methods?
  env $/ is rw = CALLER$/; # = $?SELF in rules?
  env $! is rw = undef;  # = CALLER$! when exceptional

being implicitly in effect at the start of every sub (or block)?
Thus subst could just modify $+/ without disturbing CALLER$foo
which might actually be invisible to subst?



It is also aesthetically appealing to have all (three) punctuation
variables being environmental.


And it clears up the strange notion of lexical my-vars :)
I hope lexical now *only* means scanning braces outwards on
the source code level with capturing of state in the (run)time
dimension thrown in for closure creation, lazy evaluation etc.

Talking about lazy evaluation: does it make sense to unify
the global $*IN and $*OUT with lazy evaluation in general?
That is $OUT is where yield writes its value(s) to and $IN
is for retrieving the input. Iteration constructs automatically
stall or resume coroutines as needed.
--


Re: Deep copy

2006-01-02 Thread TSa

HaloO,

Larry Wall wrote:

I think that deep copying is rare enough in practice that it should
be dehuffmanized to .deepcopy, perhaps with optional arguments saying
how deep.


So perhaps .copy:deep then?



 Simple shallow copy is .copy, whereas .clone is a .bless
variant that will copy based on the deep/shallow preferences of the
item being cloned.  The default might be identical to .copy though.
Perhaps those two can/should be unified.


Yes! Variations of the same theme should go into variations on the
call site with the underlying implementation being generic enough
to support all variants parametrically.
--


Re: IMCC optimizer instruction duplication and opcode write/reads

2006-01-02 Thread Leopold Toetsch


On Jan 2, 2006, at 16:53, Amos Robinson wrote:

error:imcc:The opcode '_' (1) was not found. Check the type and number
of the arguments


Looks strange. gdb might help.

Hmm, okay. I was hoping I could've just copied the set_args, get_results,
and callmethodccs. I'll have a look further into the PCC stuff you did.


Copying these instructions should work after all the directives are 
expanded, i.e. when the optimizer sees it.



Ahh, okay. How many ops would you say there are that change contents of
non-LHS args? I don't want to cause too much trouble, but for used_once I
think some way of checking whether it's one of those is necessary, even if
I just hard-code the checks.


Just some 'gcd' opcodes with multiple out args come to my mind 
currently - but please recheck.


leo



Re: real ranges

2006-01-02 Thread TSa

HaloO Eric,

you wrote:

#strictly outside
($a <> 3..6) === (3 > $a > 6) === (3 > $a || $a > 6)


Just looking at that hurts my head, how can $a be smaller than three
and larger than 6?  That doesn't make even a little sense.


To my twisted brain it does ;)

The idea is that
   outside === !inside
           === !($a >< 3..6)
           === !(3 < $a < 6)
           === !(3 < $a && $a < 6)
           === !(3 < $a) || !($a < 6)  # DeMorgan of booleans
           ===  3 >= $a || $a >= 6

Well, stricture complicates the picture a bit.

  strictly outside === !(non-strictly inside)

that is also the case if comparison operators could be
negated directly

   >  === !<=
      === !( < || == )
      === !< && !=
      === >= && !=

We could write that as operator junctions

 infix:{'>'}  ::= none( infix:{'<'}, infix:{'=='} )
 infix:{'>='} ::= any(  infix:{'>'}, infix:{'=='} )



($a <> 3..6) === ($a < 3 || $a > 6)


I would write that ($a < 3 || 6 < $a) which is just the flipped
version of my (3 > $a || $a > 6) and short circuit it to (3 > $a > 6).
That makes doubled < and > sort of complementary orders when you think
of the Nums as wrapping around from +Inf to -Inf. In fact the number
line is the projection of the unit circle. In other words the range
6..3 might be considered as the inverse of 3..6, that is all real
numbers outside of 3..6 not including 3 and 6.



Your intermediate step makes no sense at all.  I would think that (and expect)

($a > 3..6) === ($a > 6);

You could always use:

my $range = 3..6;
if ($range.min < $a < $range.max)        # == inside
if ($range.min > $a || $a > $range.max)  # == outside

Then you don't have to warp any meaning of < or >; sure it's longer
but it's obvious to anyone exactly what it means.


With Juerd's remark of using ~~ and !~ for insideness checking on ranges,
I think using < and <= to mean completely below, and > and >= to mean
completely above in the order, there remains to define what == and !=
should mean. My current idea is to let == mean on the boundary and !=
then obviously everything strictly inside or outside. But it might also
be in the spirit of reals as used in maths to have no single Num $n
being equal to any range but the zero measure range $n..$n. This makes
two ranges equal if and only if their .min and .max are equal.

This gives the set of 4 dual pairs of ops

   <   <=  ==  ~~

   >=  >   !=  !~  # negation of the above


The driving force behind this real ranges thing is that I consider
nums in general as coroutines iterating smaller and smaller intervals
around a limit. That actually is how reals are defined! And all math
functions are just parametric versions of such number iterating subs.

Writing programs that implement certain nums means adhering to a number
of roles like Order which brings in < and > or Equal which requires ==
and !=; taken together these also give <=, >= and <=>. Finally, role Range
brings in .., ~~ and !~ for insideness checking.

Role Division is actually very interesting because there are just the 4
division algebras of the Reals, Complex, Quaternions and Octonions.

Sorry, if this is all too far off-topic.
--


Re: [perl #34549] atan2() isn't IEEE compliant on OpenBSD/*BSD/Cygwin/Solaris

2006-01-02 Thread Steve Peters
On Mon, Jan 02, 2006 at 09:01:55AM -0600, Greg Bacon wrote:
> In message [EMAIL PROTECTED],
> Joshua Hoblitt via RT writes:
> 
> : I've committed a possible fix for openbsd, cygwin, & solaris as changesets
> : r10839 & r10843.  I basically applied what Steve Peters proposed but
> : with the changes in math.c instead of creating init.c (as agreed to on
> : #parrot).
> : 
> : This doesn't appear to have done anything for gcc/solaris... can someone
> : test openbsd and cygwin?
> 
> After upping to r10844, trans.t still fails:
> 

What operating system are you using?

Steve Peters
[EMAIL PROTECTED]


Pugs-PIL: Containers adopts Roles.

2006-01-02 Thread Audrey Tang

Today Stevan started writing out Roles for container types,
so there can be multiple classes that implement the Hash/Array/Scalar
interface, so operations like .{} and .[] can apply to user-defined
types as well.
This is similar to the Perl 5 way of using the "tie"
interface, as well as overloading @{} and %{},
but because Perl 5 is a strongly typed language with only five
($@%&*) container types, ultimately you need to decompose the
user-defined class to one of those five things, XS-based solutions like
PDL notwithstanding.
With roles, user-defined classes can be first class citizens that
conform to various interfaces (pair, args, sigs, list, ranges, etc...),
and it'd be much easier to write an ordered hash class that does
both the Array and Hash interface.
We are working toward something like Scala's traits
hierarchy, starting with the bare minimum already defined in docs/quickref/data.
As the main Bootstrap.pil is getting huge with the container type
interfaces, I factored them out into multiple small files in src/PIL/Native/Bootstrap/.
The next step for me is to create another surface syntax for PIL2
-- this time a bare subset of syntactically-valid Perl 6 -- and compile
it to the already-running-fine bootstrapped PILN runcore.
The compiler itself will have access to an object space, and simply
serialize the final (garbage-collected) state as the executable image,
ready to be run by invoking the main routine in *::('').
Once we can pass the t/01-sanity/
tests with this compiler, the rest of the job is to port all
desugaring, primitives, as well as other assorted magics from the old
runcore over, a process not unlike what iblech has been doing for the
JavaScript runcore. Stay tuned...



T and L parameter types for NCI

2006-01-02 Thread Dan Sugalski
I just went, after ages, and sync'd up with a current parrot for my 
work project. Fixing things to work with the changes has been... 
interesting.


The big hang up has been the removal of the T and L parameter types 
for NCI calls. T was a string pointer array and L was a long array. 
They're still documented in call_list.txt, there are still references 
to them in parts of the library, and there are fragments of the code 
for them in nativecall.pl.


Change 9260 did this back in September (yes, it has been ages; I'm 
just syncing up now). This breaks the postgres.pir interface code -- 
making calls into postgres now isn't possible, as the interpreter 
pukes and dies when you try.


Are there alternatives? The documentation for this stuff is worse now 
than when I wrote it originally, and it's not clear what needs to be 
done to duplicate the functionality of the removed call types.

--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: real ranges

2006-01-02 Thread TSa

HaloO,

Luke Palmer wrote:

In fact, it might even bug me more.  I'm a proponent of the idea that
one name (in a particular scope) is one concept.  We don't overload +
to mean concatenation, we don't overload << to mean output, and we
don't overload < to mean outside.


I agree. And have converted inside to be implemented by ~~ as
hinted by Juerd. But even with the previous idea 1..10  -5
would have been false I guess:

  1..10  -5 === 1  -5  10
 === 1  -5  -5  10 === false

Actually +(10..1) == -9 because the measure is always .max - .min
even if .max  .min! But the Range role might be applicable where
the strict ordering  and  don't. E.g. in the complex plane ranges
form rectangles or ring segments depending on the parametrisation
choosen. In the ifinitesimal (point)limit both coincide, of course!



Supposing you fixed those operators, I really think that this is a job
for a module.  All the syntax and semantics you propose can be easily
implemented by a module, except for the basic definition of ... 


Which was the actual reason to post the idea in the first place.
Note that the proposed ,, and ;; nicely complement the other
list constructors. The infinite lazy list might actually be
written with pre- or postfix ,,, without running into the same
problem as ..., which in my conception of .. as numeric nicely denotes
a Code that doesn't actually iterate a Num. Also the ^..^ forms
are much less important since we've got the unary ^ now. Character
doubling and tripling is actually a theme in Perl6 operator design.

Just compare @a[[;]0,1,2] to @a[0;;2] visually. Not to mention
the---at least to my eye---very strange @; slice vars. BTW, is @foo
automagically related to @;foo? S02 is actually very close to
@foo[1;;5] with its @foo[1;*;5] form. The fact that the [;] reduce
form as slice actually means [==] makes things not easier.

Here is an idea to replace @;x just with a code var:

 my x;
x ==  %hash.keys.grep: {/^X/};
x == =;
x == 1,,,;  # note the ,,,
x == gather { loop { take rand 100 } };

%hash{x} # nice usage of closure index

which to me looks much nicer and increases the importance of
the & sigil and code types! At the first invocation of <== the
Pipe is autovivified in &x and the second invocation adds to the
outer dimension of the List of Pipe. Note that I think that in

  &x <== 1,,4;

operator <== sees a :(Call of &x) type as lhs. What semantic
that might have I don't know. But if that's not an error then perhaps
the former content is lost, an empty code ref is autovivified into &x
and given to <== as lhs. YMMV, though ;)

Actually evaluating a Pipe stored in a & var might just result
in unboxing the Pipe without changing the former content.

 &x <== 1,,4;

 &x <== &x <== &x <== &x; # &x now is Pipe (1,,4;1,,4;1,,4)


Also I realized that the zip, each, roundrobin and cat functions
mentioned in S03 are somewhat redundant with proper typing.
What diffentiates

 cat(@x;@y)

from

 (@x,@y)  #?

Or

   for zip(@names; @codes) -> [$name, $zip] {...}

from

   for (@names; @codes) -> [$name, $zip] {...} #?

And why is

   for each(@names; @codes) -> $name, $zip {...}

not just

   for (@names; @codes) -> $name, $zip {...}  #?

Apart from being good documentation, of course.


Yet another thing I didn't mention so far is that I see , and ;
as corresponding to ~ for strings. So parts of the conglomerate of
roles for Num and List is replicated with a different syntactical
wrapper for Str. And user-defined classes can hook into this at will
by means of role composition and operator overloading.

At a lower level all this is unified syntactically! But how much
the syntax there resembles the humanized forms of Perl6, I can only
guess. On the engine level the only thing that remains is MMD and
error handling.



Maybe you could make it your first Perl 6 module?


Uhh, let's see what I can do about that. Unfortunately
I'm lazy there... *$TSa :)
--


Re: [RFC] Dynamic binding patch

2006-01-02 Thread Bob Rogers
Table of contents

1.  Deep binding is not appropriate.
2.  Outline of a shallow-binding solution.
3.  Unbinding must work with a general stack-unwinding mechanism.
4.  Conclusion (not).

1.  Deep binding is not appropriate.

   It has always been clear that a save/modify/restore mechanism for
at least some of the functions of Perl5 local/Perl6 temp/let will be
needed.  Such a mechanism is likely to be complicated (see below), which
is why I had tried to solve part of the problem independently with a
relatively lightweight solution.  (Indeed, I doubt I grok temp/let
yet, so it's probably more complicated than I think.)  Because of this
complexity, and because shallow binding can subsume the special case of
binding symbol table values, there is little advantage to having a
parallel mechanism solely for binding globals.  Indeed, there are enough
things that interact with stack-unwinding as it is.  So, please consider
my previous proposal withdrawn.  (Sigh.)

   That leaves us with shallow binding.

2.  Outline of a shallow-binding solution.

   Shallow binding essentially just means the straightforward
save/modify/restore approach.  During execution of a particular
stretch of code in a given context, that context's dynamic bindings are
in place in the heap, and any alien being who can reach into Parrot's
memory (gdb, for one) would find the values established by local/temp.
This means that we must ensure that those dynamic bindings are undone
when exiting the context, even temporarily, and redone when re-entering.
There are a number of cases:

   A.  Sub/method call.  This is the simplest; we are creating a new
context that needs to inherit the old one, but otherwise the environment
stays the same.

   B.  Sub/method return.  The current context's dynamic bindings must
be undone, effectively popping them off.

   C.  Continuation call.  This involves leaving one context (without
necessarily exiting it permanently) and entering another.  The simplest
case is just B, but in general the two contexts, call them X and Y, do
not necessarily have any particular relationship.  We need to (1) find
the common ancestor of X and Y, call it Q; (2) undo the dynamic bindings
between X and Q in reverse order, but without throwing them away; and
(3) redo dynamic bindings between Q and Y in forward order.  Call this
operation rezipping the dynamic binding stack.

   D.  Coroutine yield.  This turns out to be equivalent to C, except
that the "from" context is not abandoned.

   E.  Throwing an exception.  This is simpler than the general
continuation case, since the throw is always to an ancestor, but with an
interesting wrinkle.  A sub may make bindings before pushing a handler,
or afterwards, or both, so the Exception_Handler object must somehow
record the dynamic binding state when it is created.  Care must be taken
to avoid allowing the control stack to get out of phase with respect to
the binding stack.  One way to do this would be to use the control stack
for dynamic bindings, forcing bindings and handlers to take note of each
other, but this requires further thought.

   [I had thought that context switching between threads would be
another such case, but the ithreads model saves us the trouble; I didn't
understand that at the time I made my previous posts.  Dynamic binding
of shared variables needs to be handled correctly, especially with
respect to locking, but I expect this can be addressed independently.]

   So most of the complexity boils down to continuations and exceptions.
Note that if the common ancestor Q has dynamic bindings in effect when
the calls leading to X and Y are made, then we don't want to undo/redo
all of Q's bindings; really, we want to find the common ancestor of the
dynamic binding stacks, and not that of the contexts.  Rezipping can be
done in time proportional to the number of bindings that must be
undone/redone if we keep a depth counter, similar to the recursion_depth
field in struct Parrot_Context, in each dynamic binding record.  Let us
call this structure a Parrot_DBR (for dynamic binding record).  The
Parrot_DBR therefore needs the following:

typedef struct _parrot_dynamic_binding_record {
Parrot_DBR *prev;   /* backpointer to previous binding. */
INTVAL depth;   /* stack depth, used for rezipping. */
PMC *location;  /* a generalized location PMC, not NULL. */
PMC *value; /* the saved value; NULL means unbound. */
} Parrot_DBR;

Note that we cannot make this a doubly-linked list; coroutines would
make it branch, so there might be multiple forward pointers.  However,
it would be handy to have a temp_next slot for rezipping, to make it
easier to go forward to the destination context from the common
ancestor.

   The saved value slot contains the old value when the binding is in
effect, and the newly bound value otherwise.  Rezipping past a
Parrot_DBR in either direction therefore consists of 

Re: T and L parameter types for NCI

2006-01-02 Thread Leopold Toetsch


On Jan 2, 2006, at 19:36, Dan Sugalski wrote:

The big hang up has been the removal of the T and L parameter types 
for NCI calls. T was a string pointer array and L was a long array.


[ ... ]

Are there alternatives? The documentation for this stuff is worse now 
than when I wrote it originally, and it's not clear what needs to be 
done to duplicate the functionality of the removed call types.


Sorry for the documentation mismatch, but you know that's the hardest 
part. Anyway using {Un,}ManagedStruct for any item more complex than a 
simple type (incl. C-strings) is the way to go. There are by far too 
many possible permutations of foo* items to be covered with just one or 
two plain signature chars.


This pod:

$ perldoc docs/pmc/struct.pod

should cover most of it.

Some examples are e.g. t/pmc/nci.t: nci_pi - struct with ints
or runtime/parrot/library/libpcre.pir, which is using array of ints too.
The SDL libraries are also using rather complex structures built with 
the *Struct interface.


BTW: while looking at runtime/parrot/library/postgres.pir and 
runtime/parrot/library/postgres.declarations I discovered some 
inconsistencies, especially with these more complex structures, e.g.:


dlfunc $P2, $P1, 'PQexecParams',   'pptiLTLLi'
dlfunc $P2, $P1, 'PQexecPrepared', 'pptit33i'

while the C declarations are vastly the same, the PIR counterparts are 
totally different.


HTH,
leo



Re: relationship between slurpy parameters and named args?

2006-01-02 Thread TSa

HaloO,

Austin Frank wrote:
It seems to me like these are related contexts-- arguments to a sub are 
supposed to fulfill its parameter list.  This makes the overloading of 
prefix:* confusing to me.


Would an explicit type List help?


I'm pretty sure we don't need slurpiness in argument lists, and I don't 
know if the prefix:* notation for named arguments would be useful in 
parameter lists.  But just because I can reason that, for example, 
prefix:* in an argument list can't mean slurpiness, that doesn't make 
it clear to me what prefix:* _does_ mean in that context.


Slurpiness in a parameter list is a property of a *zone*, not of a
single parameter, IIRC. In a parameter list * actually is not an operator
but a syntactic type markup! As arity indicator it could actually be
given after the param like the !, ? and = forms.

If the slurpy zone has more type constraints than just 'give me all',
these have to be met by the dynamic args. The very rudimentary split
in the types is on behalf of named versus positional, reflected in the @
versus % sigils. Using prefix:* at the call site just defers the
matching to dispatch time unless the type inferencer knows that there
is no chance of meeting the requirements! And parens are needed to
itemize pairs syntactically.

Note that the invocant zone does neither support slurpyness with *
nor optionality with ?. And I'm not sure how explicit types like
List or Pipe behave with respect to dispatch.


I think I understand that prefix:* is available outside of parameter 
lists because that's the only place we need it to mean slurpiness.


No, I think outside of parameter declarations prefix:* indicates
lazy evaluation, which I see as a snapshotting of the part of the state
that is relevant for producing the actual values later in the program.
But this might contradict the synopses, which talk of 'when a value
becomes known', which sounds like an ASAP-bound ref whose value at deref
time depends on side-effects between the lazification and deref!


So, is there a conceptual connection between imposing named argument 
interpretation on pairs in an arg list and slurping up the end of a 
parameter list?  Are there other meanings of prefix:* that relate to 
one or the other of these two meanings?


I see the following type correspondences:

  *$   Item of List  # accessible through the $ variable
  *@   List of Item  # accessible through the @ variable
  *%   List of Pair  # accessible through the % variable

and perhaps

  *&   Item of Code  # accessible through the & variable

but this last one is not slurping up more than one block.
Hmm, I'm also unsure if a single *$ param suffices to
bind a complete list of args into an Item of Ref of List.
OTOH, it is clearly stated that a *$ after a *@ or *% never
receives values.
--


Junctions again (was Re: binding arguments)

2006-01-02 Thread TSa

HaloO,

Luke Palmer wrote:

The point was that you should know when you're passing a named
argument, always.  Objects that behave specially when passed to a
function prevent the ability to abstract uniformly using functions.[1]
...
[1] This is one of my quibbles with junctions, too.


You mean the fact that after $junc = any(1,2,3) there is
no syntactic indication of non-scalar magic in subsequent
uses of $junc e.g. when subs are auto-threaded? I strongly
agree. But I'm striving for a definition where the predicate
nature of the junctions is obvious and the magic under control
of the type system.

The least I think should be done here is to restrict the magic
to happen from within & vars, combined with not too much auto-enref
and -deref of junction (code) refs. The design is somewhat too
friendly to the 'junctions are values' idea but then not auto-hypering
list operations...

But I have no idea for this nice syntax, yet. Perhaps something like

  my &junc = any(1,2,3);
  my $val = 1;

  if junc( infix:<==>, $val ) {...}

which is arguably clumsy. The part that needs smarting up is handing
in the boolean operator ref. Might a slurpy block work?

  if junc($val): {==} {...}

Or

  if junc:{==}($val) {...}

Or $val out front

  if $val == :{junc} {...}

which doesn't work with two junctions.

Or reduce syntax:

  if [==] junc, $val {...}
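The 'hand in the boolean operator' idea can be sketched in Python (the
`Junction` class and its call shape are hypothetical, not a proposed
Perl 6 API): the junction stores its eigenstates and applies an
explicitly passed predicate at call time.

```python
import operator

class Junction:
    """Hypothetical sketch of an 'any' junction that takes the
    comparison operator explicitly, mirroring junc(infix:<==>, $val)."""
    def __init__(self, *states):
        self.states = states

    def __call__(self, op, value):
        # Apply the handed-in boolean operator to each eigenstate
        # and collapse with 'any' semantics.
        return any(op(s, value) for s in self.states)

junc = Junction(1, 2, 3)
result = junc(operator.eq, 1)   # True: 1 is among the eigenstates
```

The clumsiness the email complains about shows up here too: the operator
must be named explicitly instead of appearing in infix position.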

OTOH, explicit overloads of all ops applicable to junctions
might end up where we are now:

  if  $junc == $val {...}

Hmm, wasn't that recently defined as actually being

  if +$junc === +$val {...}  #  3 === 1 -- false

Or how else does a junction numerify? Thus the problem only remains with
generic equivalence if explicit, inhomogeneous overloads for junctions
exist. And there are no generic <, >, <= and >=. Does a !== or ^^^
antivalence op exist? I guess not.
--


[PATCH] t/pmc/os.t 'mkdir' test can fail

2006-01-02 Thread Bob Rogers
   . . . depending on where Parrot is located.  Mine is in
/usr/src/parrot, so the code expected /usr/xpto/parrot/src instead of
/usr/src/parrot/xpto . . .

-- Bob Rogers
   http://rgrjr.dyndns.org/


Index: t/pmc/os.t
===
--- t/pmc/os.t  (revision 10854)
+++ t/pmc/os.t  (working copy)
@@ -86,7 +86,7 @@
 # Test mkdir
 
 my $xpto = $upcwd;
-$xpto =~ s/src/xpto/;
+$xpto =~ s/src$/xpto/;
 
 pir_output_is(<<'CODE', <<'OUT', "Test mkdir");
 .sub main :main


Re: Junctions again (was Re: binding arguments)

2006-01-02 Thread Luke Palmer
On 1/2/06, TSa [EMAIL PROTECTED] wrote:
 But I have no idea for this nice syntax, yet. Perhaps something like

my &junc = any(1,2,3);
my $val = 1;

if junc( infix:<==>, $val ) {...}

 which is arguably clumsy.

I don't think anyone would waste his time arguing that.  :-)

 The part that needs smarting up is handing
 in the boolean operator ref. Might a slurpy block work?

if junc($val): {==} {...}

This all reminds me of Haskell's incantation of the same thing:

if (x ==) `any` [1,2,3]
then ... else ...

Which reads nicely, but it is quite opaque to the naive user. 
Whatever solution we end up with for Junctions, Larry wants it to
support this:

if $x == 1 | 2 | 3 {...}

And I'm almost sure that I agree with him.  It's too bad, because
except for that little detail, fmap was looking pretty darn nice for
junctions.

There is a conflict of design interest here.  We would like to maintain:

* Function abstraction
* Variable abstraction

for junctions, but we would also like to maintain genericity with
respect to user-defined operators.  Of the proposals so far:

Quantum::Superpositions behavior violates genericity with respect to
user-defined operators.

Autothreading behavior violates function abstraction.

Lexical expansion (i.e. just having the compiler turn $x == 1 | 2 into
$x == 1 || $x == 2) violates variable abstraction.

So, with these three constraints in mind, fmap again comes out on top.
Yes, I'm quite proud of it.  Unfortunately, it's ugly, and that's a
constraint too.
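For contrast, the two mechanisms can be sketched side by side in Python
(the `Any`, `fmap`, and `collapse` names are hypothetical illustrations,
not the actual proposal): lexical expansion rewrites the comparison
syntactically, while fmap maps the operation over the eigenstates and
collapses only in boolean context.

```python
import operator

class Any:
    """Hypothetical 'any' junction used to contrast the proposals."""
    def __init__(self, *states):
        self.states = states

def fmap(op, junc, value):
    # fmap-style proposal: map the operation over the eigenstates,
    # producing a junction of boolean results.
    return Any(*(op(s, value) for s in junc.states))

def collapse(junc):
    # Boolean context collapses an 'any' junction with any().
    return any(junc.states)

x = 1

# Lexical expansion of  $x == 1 | 2 | 3  is literally:
expanded = (x == 1) or (x == 2) or (x == 3)

# fmap keeps the junction abstract until the boolean collapse:
fmapped = collapse(fmap(operator.eq, Any(1, 2, 3), x))
```

The sketch makes the trade-off visible: `expanded` mentions `x` three
times (breaking variable abstraction), while the fmap route preserves
both abstractions at the cost of the extra wrapping machinery.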

There's got to be a good solution lying around here somewhere...

Luke


Re: [RFC] Dynamic binding patch

2006-01-02 Thread Larry Wall
On Mon, Jan 02, 2006 at 06:55:24PM -0500, Bob Rogers wrote:
: [2]  About two-thirds of the way through A06 (search for "temporize
:  object attributes"), Larry says that this will be done via
:  closures.  In order to support rezipping, such a closure would need
:  to accept a new value to store and return the old value.  Or maybe
:  the .TEMP method could just return a location PMC?

I would prefer the latter for the general case.  As with any rw
location, you're returning a proxy/location object with an appropriate
lvalue interface.  Any user-specified closures end up being methods
of that object.  Treating such a closure as the proxy object not
only confuses getting with setting but also confuses both of those
with the original identification of the location.  I was not terribly
consistent in carrying this idea through in the original Apocalypses.
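Larry's 'proxy/location object with an appropriate lvalue interface' can
be sketched in Python (the `Location` class and `temporize` helper are
hypothetical illustrations, not Parrot's actual PMC API): temporization
saves the old value through the proxy and restores it on unwind, which
is the single-undo model the rest of the email discusses.

```python
from contextlib import contextmanager

class Location:
    """Hypothetical proxy for a storage location: get/set form the
    'lvalue interface', so temporization works through it uniformly."""
    def __init__(self, obj, attr):
        self.obj, self.attr = obj, attr

    def get(self):
        return getattr(self.obj, self.attr)

    def set(self, value):
        setattr(self.obj, self.attr, value)

@contextmanager
def temporize(loc, new_value):
    # Save the old value via the proxy, install the new one,
    # and restore it when the dynamic scope is exited.
    old = loc.get()
    loc.set(new_value)
    try:
        yield
    finally:
        loc.set(old)

class Config:
    verbose = False

loc = Location(Config, "verbose")
with temporize(loc, True):
    inside = Config.verbose    # temporized value in effect
after = Config.verbose         # restored on exit
```

Getting, setting, and identifying the location are kept distinct here,
which is exactly the separation Larry says a bare closure would blur;
rezipping after a resumable exception would need more than this
single try/finally unwind.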

By the way, it's getting to be a bit dangerous to quote the Apocalypses
rather than the Synopses, since they've diverged somewhat, and the
Synopses are being kept rather more up-to-date.  The post block
has turned into a leave block, for instance.  (Pre/post blocks are
reserved for Design-By-Contract now.)

Anyway, I think we'll need to recognize over the long haul that some
forms of control flow are simply incompatible with certain kinds of
state change, in a Haskellian monadic sense.  I think if the first
whack at rezipping can simply detect such situations and refuse to
proceed, we'll at least be up to the Perl 5 level, even if we can't
continue into a temporized area.  After all, the entire Perl 5 model
assumes that you only have to undo once.  It'd be nice to generalize
from that, but Perl 6 is moving more toward using a thing we're calling
environmental lexical variables to represent dynamic context anyway,
which I think is more like what you'd call a deep binding.  So from
a Perl perspective, temporization is being de-emphasized somewhat.

On the other hand, as you point out, you do have to be careful about
unwinding exceptions.  We'll need to do that as lazily as possible,
since some exceptions will include an optional continuation to
resume if the exception thrower wants to allow the exception to be
defatalized.  In fact, I suspect we'll end up handling all warnings
as exceptions that are defatalized by default by an outermost warning
exception handler, and it'd be a shame if, when we resume after
a warning, the temporizations have all been undone.  Even if they
get rezipped at resumption, that's gotta be a big performance hit
just to emit a warning.  So maybe there's some way of running CATCH
blocks in the dynamic context of the thrower without rezipping twice.
Have to think about the semantic ramifications of that though...

Larry