Re: about binary protocol porting

2022-01-03 Thread Geoffrey Broadwell
Just from skimming some of the relevant docs (not having written a 
driver for Apache Ignite before), some thoughts:


 * It does indeed look like there is enough info, both as documentation
   and example code, to write codecs and drivers for Ignite
 * The formats and protocols look rather baroque, with significant
   historical baggage -- it's going to take quite a bit of work to get
   a fully compliant driver, though it does look like a smaller subset
   could be built to match just a particular need
 * There is a strong Java flavor to everything; there is some impedance
   mismatch with Raku (such as the Char array
   
<https://ignite.apache.org/docs/latest/binary-client-protocol/data-format#char-array>
   type, which is an array of UTF-16 code units that doesn't
   necessarily contain valid decodeable text)
 * There seems to be tension in the design between the desire to
   support a schema-less/plain-data mode and a schema/object mode; Raku
   easily has the metaobject protocol chops to make the latter possible
   without invoking truly deep magic, but it does require somewhat more
   advanced knowledge to write
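
To make the Char-array impedance mismatch concrete, here is a sketch in Python (used purely as a neutral illustration; this is not Ignite driver code): a UTF-16 code-unit array containing an unpaired surrogate has no valid text decoding at all.

```python
# A UTF-16 "char array" is a sequence of 16-bit code units, not guaranteed
# to be decodable text: 0xD83D below is an unpaired high surrogate.
units = [0x0048, 0x0069, 0xD83D]          # 'H', 'i', lone surrogate
raw = b"".join(u.to_bytes(2, "little") for u in units)

try:
    text = raw.decode("utf-16-le")        # strict decode rejects lone surrogates
except UnicodeDecodeError:
    text = None

print(text)                               # None: valid code units, invalid text
```

A driver therefore has to decide whether to expose such arrays as integers, as text with replacement characters, or as an error.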

So in short: It looks doable, but quite a fair chunk of work depending 
on how complete you need it to be, and some decisions need to be made 
about how pedantically to support their Java-flavored APIs.



On 1/3/22 7:39 PM, Piper H wrote:

Glad to hear these suggestions, @Geoffrey.
I also have a question: this product has a clearly specified binary protocol; 
do you know how to port it to Perl or Perl 6?

https://ignite.apache.org/docs/latest/binary-client-protocol/binary-client-protocol
I was using their Python/Ruby clients, but there is no Perl version.

Thanks.
Piper


Re: about binary protocol porting

2022-01-03 Thread Geoffrey Broadwell
I love doing binary codecs for Raku[1]!  How you approach this really 
depends on what formats and protocols you want to create Raku modules for.


The first thing you need to be able to do is test if your codec is 
correct.  It is notoriously easy to make a tiny mistake in a protocol 
implementation and (especially for binary protocols) miss it entirely 
because it only happens in certain edge cases.


If the format or protocol in question is open and has one or more public 
test suites, you're in good shape.  Raku gives a lot of power for 
refactoring tests to be very clean, and I've had good success doing this 
with several formats.


If there is no public test suite, but you can find RFCs or other 
detailed specs, you can often bootstrap a bespoke test suite from the 
examples in the spec documents.  Failing that, sometimes you can find 
sites (even Wikipedia, for the most common formats) that have 
known-correct examples to start with, or have published reverse 
engineering of files or captured data.
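
For example, examples transcribed from a spec can be turned directly into a test-vector table. A minimal Python sketch of the pattern, using an invented toy codec (a big-endian unsigned 32-bit integer, not anything Ignite-specific):

```python
import struct

# Hypothetical codec under test: big-endian unsigned 32-bit integers.
def encode_u32(n):
    return struct.pack(">I", n)

def decode_u32(b):
    return struct.unpack(">I", b)[0]

# Test vectors as (value, wire bytes) pairs, transcribed from spec examples.
VECTORS = [
    (0,          bytes.fromhex("00000000")),
    (1,          bytes.fromhex("00000001")),
    (4294967295, bytes.fromhex("ffffffff")),
]

for value, wire in VECTORS:
    assert encode_u32(value) == wire, (value, wire)
    assert decode_u32(wire) == value
print("all vectors pass")
```

The same table drives both directions of the codec, which is exactly what bootstrapping from spec examples buys you.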


If the format is truly proprietary, you'll be getting lots of reverse 
engineering practice of your own. 


Now that you have some way of testing correctness, you'll want to be 
able to diagnose the incorrect bits.  Make sure you have some way of 
presenting easily-readable text expansions of the binary format, because 
just comparing raw buffer contents can be rather tedious (though I admit 
to having found bugs in a public test suite by spending so much time 
staring at the buffers I could tell they'd messed up a translation in a 
way that made the test always pass).  If the format or protocol has an 
official text translation/diagnostic/debug format -- CBOR, BSON, 
Protobuf, etc. all have these -- so much the better, you should support 
that format as soon as practical.
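
A minimal hex-dump helper of the kind that makes buffer comparisons readable, sketched here in Python (the function name and layout are just one plausible choice):

```python
def hexdump(buf, width=16):
    """Render a buffer as offset / hex / ASCII lines for eyeball diffing."""
    lines = []
    for off in range(0, len(buf), width):
        chunk = buf[off:off + width]
        hexed = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexed:<{width * 3}} {text}")
    return "\n".join(lines)

print(hexdump(b"\x01\x02Hello\x00"))
```

Diffing two such dumps line by line is far less tedious than comparing raw buffer reprs.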


Once you get down to the nitty-gritty of writing the codec, I find it is 
very important to make it work before making it fast. There is a lot of 
room for tuning Raku code, but it is WAY easier to get things going in 
the right direction by starting off with idiomatic Raku -- given/when, 
treating the data buffer as if it was a normal Array (Positional 
really), and so on.


Make sure that with every protocol feature you add, you make tests newly 
pass, and (I find at least) that you write the coding and 
decoding bits at the same time, so you can check that you can round-trip 
data successfully.  For the love of all that is good, don't implement 
any obtuse features before the core features are rock solid and pass the 
test suite with nary a hiccup.
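
The round-trip discipline can be sketched with an invented toy codec (Python for illustration; a length-prefixed UTF-8 string):

```python
import struct

# Toy codec: 16-bit big-endian length prefix followed by UTF-8 bytes.
# Encode and decode are written together so every feature is round-trip
# testable the moment it lands.
def encode_str(s):
    data = s.encode("utf-8")
    return struct.pack(">H", len(data)) + data

def decode_str(buf):
    (n,) = struct.unpack_from(">H", buf, 0)
    return buf[2:2 + n].decode("utf-8")

for sample in ["", "hello", "héllo", "x" * 1000]:
    assert decode_str(encode_str(sample)) == sample
print("round-trips ok")
```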


After that, when you think you're ready to optimize, write performance 
/tests/ first.  Make sure you test with data that will both use your 
codec in a typical manner, and also test out all the odd corners.  
You're looking for things that seem weirdly slow; this usually indicates 
a thinko like copying the entire buffer each time you read a byte from 
it, or somesuch.
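
That buffer-copying thinko can be illustrated with a deliberately quadratic Python sketch; both functions compute the same result, but the first copies the entire remaining buffer on every byte read:

```python
# The "thinko": re-slicing the buffer per byte copies the whole remainder
# each time, turning a linear scan into quadratic work.
def checksum_slicing(buf):
    total = 0
    while buf:
        total = (total + buf[0]) & 0xFF
        buf = buf[1:]          # copies the entire remaining buffer!
    return total

def checksum_indexed(buf):
    total = 0
    for b in buf:              # single pass, no copying
        total = (total + b) & 0xFF
    return total

data = bytes(range(256)) * 8
assert checksum_slicing(data) == checksum_indexed(data)
```

A performance test over growing input sizes makes the quadratic version stand out immediately, even before profiling.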


Once you've got the obvious performance kinks worked out, come by and 
ask again, and we can give you further advice from there.  Or heck, just 
come visit us on IRC (#raku at Libera.chat), and we'll be happy to 
help.  (Do stick around for a while though, because traffic varies 
strongly by time of day and day of week.)


Best Regards,


Geoff (japhb)


[1]  I'm a bit of a nut for it, really.  In the distant past, I wrapped 
C libraries to get the job done, but more recently I've done them as 
plain Raku code (and sometimes NQP, the language that Rakudo is written in).


I've written some of the binary format codecs for Raku:

 * https://github.com/japhb/CBOR-Simple
   
 * https://github.com/japhb/BSON-Simple
   
 * https://github.com/japhb/Terminal-ANSIParser
   
 * https://github.com/japhb/TinyFloats
   

Modified or tuned others:

 * https://github.com/samuraisam/p6-pb/commits?author=japhb
   
 * https://github.com/japhb/serializer-perf
   
 * (Lots of stuff spread across various Cro repositories)

Added a spec extension for an existing standardized format (CBOR):

 * https://github.com/japhb/cbor-specs/blob/main/capture.md
   

And I think I forgot a few things.  




Re: [perl #130716] [CONC] unbounded supply {} + react {} = pseudo-hang

2017-02-05 Thread Geoffrey Broadwell via perl6-compiler
Responses inline ...

On Sat, Feb 4, 2017 at 7:08 AM, jn...@jnthn.net via RT <
perl6-bugs-follo...@perl.org> wrote:

> On Fri, 03 Feb 2017 21:20:59 -0800, g...@google.com wrote:
> > See the following gist:
> >
> > https://gist.github.com/japhb/40772099ed24e20ec2c37c06f434594b
> >
> > (If you run that at the command line, you'll probably want to pipe it to
> > `head -30` or so; it will output a lot of lines very quickly!)
> >
> > Essentially it appears that unlike the friendly one-at-a-time behavior of
> > .act, react/whenever will try to exhaust all the emits from an unbounded
> > supply *before* delivering any of them to the whenever code -- which
> makes
> > it awfully hard to have the whenever tell the supply when to stop.
>
> Firstly, the boring observations: there are two mistakes in the gist.
>
> 1) A role is not a closure, so:
> } does role { method done { $done = True } }
> Will not behave as you want.
>

It took me a minute to realize this was true, because if you move the `my
$s2 = make-supply;` from the react section right up under the creation of
$s1, it's clear that things go terribly wrong -- it apparently only worked
for me because I was only creating one at a time and finishing with one
before creating the next.

That said, this raises two questions:

A. How did this work in the first place?  Was the role's reference to $done
pointing to a single static slot?

B. Why isn't a role declaration a closure?  I understand that the
attributes and methods need to be flattened into the composed class; is
this because the contents of the role and the class are inserted into one
combined block that can only have one outer lexical context?

2) In the react example, there is $s1.done, when I presume $s2.done was
> meant.
>

Yup, didn't notice that pasto because the gist was essentially a merge of
two different cases, and the behavior after merging was the same as before.


> Even with these corrected, the behavior under consideration still occurs.
>
> The deadlock we're seeing here is thanks to the intersection of two
> individually reasonable things.
>
> The first is the general principle that supplies are about taming, not
> introducing, concurrency. There are, of course, a number of Supply factory
> methods that will introduce concurrency (Supply.interval, for example),
> together with a number of supply operators that also will - typically,
> anything involving time, such as the delay method. Naturally, schedule-on
> also can. But these are all quite explicitly asking for the concurrency
> (and all are delegating to something else - a scheduler - to actually
> provide it).
>
> The second, which is in some ways a follow-on from the first, is the
> actor-like semantics of supply and react blocks. Only one thread may be
> inside of a given instance of a supply or react block at a time, including
> any of the whenever blocks inside of it. This has two important
> consequences:
>
> 1) You can be sure your setup logic inside of the supply or react block
> will complete before any messages are processed.
>
> 2) You can be sure that you'll never end up with data races on any of the
> variables declared inside of your supply or react block because only one
> message will be processed at a time.
>
> This all works out well if the supply being tapped truly *is* an
> asynchronous source of data - which is what supplies are primarily aimed
> at. In the case we're considering here, however, it is not. Thanks to the
> first principle, we don't introduce concurrency, so we tap the supply on
> the thread running the react block's body. It never hands back control due
> to the loop inside of it, running straight into the concurrency control
> mechanism.
>

OK, the above makes sense to me, but why does the .act version work
properly then?  When I first read that react {} was supplying actor-like
semantics, I assumed that meant it works just like .act -- but it doesn't.
Why not?  What am I missing here?


> A one-word fix is to introduce a bit of concurrency explicitly:
>
> react {
>     start whenever $s2 -> $n {
>         say "Received $n";
>         $s2.done if $n >= 5;
>     }
> }
>
> With this, the react block's setup can complete, and then it starts
> processing the messages.
>

Well ... that kinda works.  As I tried this (with a `sleep 2` added at
program end) and a few other variants -- using `last` instead of `$s2.done`
as recommended in the irclog, using `loop` instead of `until $done`,
getting rid of the role application and instead putting `my $done = False;
CLOSE $done = True;` inside the supply {} block, etc. -- I found that every
variation I tried sometimes worked, and sometimes led to sadness.  For
example, it might emit a largish number of times, then stop emitting and
just hang (way past the length of the sleep).  The version using `last` and
`CLOSE` together would sometimes emit quite a few times before exiting,
with the last few emits interspersed with `===SORRY!===` and `last without
loop 

Re: [perl #101858] [PATCH] [BUG] List.unshift won't unshift false values in nom

2011-10-21 Thread Geoffrey Broadwell
See attached short patch to src/core/List.pm to fix #101858.


-'f

From d18c6af3e8c8bd2e1dc43d132fcc2cb39fc41e6c Mon Sep 17 00:00:00 2001
From: Geoffrey Broadwell ge...@broadwell.org
Date: Thu, 20 Oct 2011 21:28:36 -0700
Subject: [PATCH] List.unshift(): loop while @elems is non-empty, not while first element is true; fixes #101858

---
 src/core/List.pm |4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/src/core/List.pm b/src/core/List.pm
index 0fa72bd..e679717 100644
--- a/src/core/List.pm
+++ b/src/core/List.pm
@@ -194,9 +194,7 @@ my class List does Positional {
 }
 
 method unshift(*@elems) {
-while @elems.pop -> $e {
-nqp::unshift($!items, $e)
-}
+nqp::unshift($!items, @elems.pop) while @elems;
 self
 }
 
-- 
1.7.4.1
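
The falsy-value bug this patch fixes is language-neutral: looping while the popped *value* is true bails out on the first false element instead of draining the list. A hypothetical Python rendering of the before/after:

```python
# Buggy pattern: test the popped element's truth, losing false values
# (0, "", etc.) and everything queued behind them.
def unshift_buggy(items, elems):
    while elems:
        e = elems.pop()
        if not e:              # the original loop condition tested the value...
            break              # ...so a false value ended the loop early
        items.insert(0, e)
    return items

# Fixed pattern: test the list for emptiness, not the element for truth.
def unshift_fixed(items, elems):
    while elems:
        items.insert(0, elems.pop())
    return items

print(unshift_buggy([9], [1, 0, 2]))  # [2, 9] -- the 0 (and the 1) were lost
print(unshift_fixed([9], [1, 0, 2]))  # [1, 0, 2, 9]
```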



Re: r31789 -[S32] DateTime immutable, leap seconds validation

2010-07-23 Thread Geoffrey Broadwell
On Fri, 2010-07-23 at 15:26 +0200, Raphael Descamps wrote:
 Have a look at nqp-rx + kakapo + plumage + proto/PLS for some examples
 where you can help without any C or Perl 5 knowledge:
 
 http://gitorious.org/parrot-plumage

Best.  Suggestion.  Ever.

:-)


-'f




Re: [perl #74334] [PATCH] Fix Parcel.sort (fixes the very first example in http://cloud.github.com/downloads/perl6/book/book-2010-04.pdf) and Hash.sort

2010-04-13 Thread Geoffrey Broadwell
On Tue, 2010-04-13 at 05:42 -0700, Ira Byerly wrote:
 Note that one test in t/spec/S32-list/sort.t fails...
 
 not ok 4 - array of mixed numbers including Inf/NaN
 #  got: [-Inf, -61/20, -1e-07, 1/10, 11/10, 2, 42, Inf, NaN]
 # expected: [NaN, -Inf, -61/20, -1e-07, 1/10, 11/10, 2, 42, Inf]
 
 This is due to NaN sorting greater than Inf, rather than less than -Inf.
 Since this is consistent with the Rakudo spaceship operator...
 
 $ perl6 -e 'say NaN <=> Inf'
 1
 
 ... I'm not sure that it is appropriate to fix it in the code; it might be
 better to change the test to match Rakudo's behavior, or perhaps the test
 should accept either +Inf or -Inf.  The IEEE 754 standard just specifies
 that NaN should be unordered.

In that case, the spec test should not care where the NaN sorts -- it
should be implementation-dependent (unless $Larry has declared
otherwise).  However, that does not mean you should take NaN out of the
sort testing -- you still want to test that it can be sorted against
any other numeric value without invoking HCF (Halt and Catch Fire).

Probably the easiest solution is just to do the sort, check that it
contains a NaN somewhere, then grep the NaN out of the results and
compare the cleaned results against an expected list with no NaN in it.
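
That test strategy can be sketched as follows (Python for illustration; the helper name is invented):

```python
import math

def assert_sorted_with_nan(result, expected_without_nan):
    """Check a sort result that may place NaN anywhere: require that a NaN
    survived the sort, then compare everything else against the expected
    order."""
    assert any(math.isnan(x) for x in result), "NaN vanished from the sort"
    cleaned = [x for x in result if not math.isnan(x)]
    assert cleaned == expected_without_nan, cleaned

# Passes whether the implementation sorts NaN first or last:
assert_sorted_with_nan([float("nan"), float("-inf"), 1.0, 2.0],
                       [float("-inf"), 1.0, 2.0])
assert_sorted_with_nan([float("-inf"), 1.0, 2.0, float("nan")],
                       [float("-inf"), 1.0, 2.0])
print("ok")
```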


-'f




Re: underscores vs hyphens (was Re: A new era for Temporal)

2010-04-12 Thread Geoffrey Broadwell
On Mon, 2010-04-12 at 11:23 -0700, Larry Wall wrote:
 The standard parser will likely be pointing out spelling errors and
 conjecturing emendations for near misses.  Whole-program analysis can
 even do this for any method names that look wrongish.  The difference
 between Acme-X and Acme_X is no worse than the difference between
 Damian and Damien, at least in Levenshtein distance.

Ah yes, I forgot about this feature.  Consider my argument for choosing
only one separator throughout the standard setting Way Less Adamant --
though I still think it's a good idea.


-'f




Re: underscores vs hyphens (was Re: A new era for Temporal)

2010-04-11 Thread Geoffrey Broadwell
On Sat, 2010-04-10 at 17:20 -0700, yary wrote:
 Adjectives and nouns aren't English-only. So Damian's proposal is
 multi-culti. One could argue that Perl's identifiers, keywords, etc
 are based on English so that it is more difficult for a non-English
 speaker to discern why underscore is used in some places and hyphens
 in other. The solution to that would be rote memorization of method
 names, including _ and - in the spelling. Not ideal, but most
 likely what many English speaking programmers would do too. And would
 cuss over.

And there's the rub for me.  One of the goals of Perl 6 is to reduce the
amount of rote memorization of special cases that Perl 5 required.  Any
mixed use of _ and - in the standard setting defies that goal.

(FWIW, I don't really care which is used -- I see arguments for both --
but I do firmly believe the standard setting should only use one or the
other.  Damian's Temporal example in which only one method used a
different separator made the rules-versus-exceptions part of my brain
scream for mercy.)


-'f




Re: A common and useful thing that doesn't appear to be easy in Perl 6

2010-04-06 Thread Geoffrey Broadwell
First: what Damian said.

Second: Whatever syntax people come up with has to make it easy and
type-safe to name particular combinations of those bits.

In other words, you should be able to make a bitset with Unix-style
permissions:

OTHER_EXECUTE
OTHER_WRITE
OTHER_READ
GROUP_EXECUTE
GROUP_WRITE
GROUP_READ
...

But still be able to make bitmasks (ignore the syntax here):

OTHER_MASK = OTHER_READ +| OTHER_WRITE +| OTHER_EXECUTE;
GROUP_MASK = GROUP_READ +| GROUP_WRITE +| GROUP_EXECUTE;
...

These bitmasks should be properly typed with respect to the original
bitset; which is to say, this should work:

my Permissions $other_perms = $file_perms +& OTHER_MASK;
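
For comparison, Python's enum.IntFlag implements roughly the requested behavior; a sketch (class and member names here are illustrative, mirroring the Unix-style example above):

```python
from enum import IntFlag

# Named bits plus named masks in the same type, so masking a permission
# set yields another value of that type.
class Permissions(IntFlag):
    OTHER_EXECUTE = 1 << 0
    OTHER_WRITE   = 1 << 1
    OTHER_READ    = 1 << 2
    GROUP_EXECUTE = 1 << 3
    GROUP_WRITE   = 1 << 4
    GROUP_READ    = 1 << 5
    OTHER_MASK    = OTHER_READ | OTHER_WRITE | OTHER_EXECUTE
    GROUP_MASK    = GROUP_READ | GROUP_WRITE | GROUP_EXECUTE

file_perms  = Permissions.OTHER_READ | Permissions.GROUP_WRITE
other_perms = file_perms & Permissions.OTHER_MASK   # still a Permissions value
print(other_perms == Permissions.OTHER_READ)        # True
print(isinstance(other_perms, Permissions))         # True
```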


-'f




Re: r30205 - docs/Perl6/Spec

2010-03-26 Thread Geoffrey Broadwell
On Fri, 2010-03-26 at 08:38 +0100, pugs-comm...@feather.perl6.nl wrote:
 .doit: { $^a <=> $^b }  # okay
 .doit(): { $^a <=> $^b }# okay
 .doit(1,2,3): { $^a <=> $^b }   # okay
 +.doit(1,2,3): { $^a <=> $^b }   # okay

 +.doit:{ $^a <=> $^b }  # okay
 +.doit():{ $^a <=> $^b }# okay
 +.doit(1,2,3):{ $^a <=> $^b }   # okay
 +.doit(1,2,3):{ $^a <=> $^b }   # okay

My eyes must be playing tricks on me -- I can't see the difference
between the last two lines in each of the above blocks.  What am I
missing?


-'f




Re: [perl #73350] [PATCH] Add p5chomp and add p5chomp and p5chop to tests

2010-03-07 Thread Geoffrey Broadwell
On Sat, 2010-03-06 at 07:52 -0800, Martin Kjeldsen wrote:
 +if $str ~~ /\x0a$/ {
 +$str = $str.substr(0, $str.chars - 1);

Unless newlines are being canonicalized elsewhere, this seems
*nix-specific.

(Sorry I haven't researched further, this just caught my eye in passing;
feel free to ignore if it doesn't make sense.)
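
A newline-portable chomp can be sketched as follows (Python for illustration only; real Perl 5 chomp removes the current value of $/, not a fixed set of endings):

```python
def p5chomp(s):
    """Remove one trailing line ending: \r\n, \n, or bare \r."""
    if s.endswith("\r\n"):
        return s[:-2]
    if s.endswith("\n") or s.endswith("\r"):
        return s[:-1]
    return s

print(p5chomp("line\r\n"))  # line
print(p5chomp("line\n"))    # line
print(p5chomp("line"))      # line
```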


-'f




Re: Rakudo Perl database access

2009-12-15 Thread Geoffrey Broadwell
On Tue, 2009-12-15 at 09:47 -0500, Guy Hulbert wrote:
 This:
 
 http://trac.parrot.org/parrot/wiki/ModuleEcosystem
 
 works better for me as starting point for plumage.

That's due for some updates; currently stalled on my tuits, but it's on
my short list.  Sadly, my short list is getting rather long these
days 

On the plus side, what's making that list so long is things to do for
Perl 6 folks.  :-)


-'f




Re: p6 Q: How do I metaprogram this?

2009-12-08 Thread Geoffrey Broadwell
On Tue, 2009-12-08 at 18:58 -0500, Austin Hastings wrote:
 I know that I could 'metaprogram' this stuff by using string 
 manipulation on the various method names, and then calling a 
 (self-built) call_method($obj, $method_name, ...args...) function.

You don't need to write this by hand.  NQP-rx supports the "method call
by name" Perl 6 syntax:

$obj.$method_name(...args...);

which makes this kind of thing much easier.  I use it in Plumage in a
number of places.

 But I'm curious if there's some P6 feature I've forgotten about (which 
 I've forgotten most of them, excepting the rev number) that would let me 
 do this without having to go too far away from the metal.

The above syntax is actually pretty close to the metal because it
translates directly to standard PIR ops.


-'f




Re: p6 Q: How do I metaprogram this?

2009-12-08 Thread Geoffrey Broadwell
On Wed, 2009-12-09 at 00:16 -0500, Austin Hastings wrote:
 Geoffrey Broadwell wrote:
  On Tue, 2009-12-08 at 18:58 -0500, Austin Hastings wrote:

  I know that I could 'metaprogram' this stuff by using string 
  manipulation on the various method names, and then calling a 
  (self-built) call_method($obj, $method_name, ...args...) function.
 
  You don't need to write this by hand.  NQP-rx supports the method call
  by name Perl 6 syntax:
 
  $obj.$method_name(...args...);
 
 The problem I have with the above is that it seems to require a second 
 layer of call. Something like:
 
 sub beforeall_methods() { return 
 fetch_methods_by_category('beforeall'); }
 
 sub fetch_methods_by_category($cat) {...}
 
 Essentially, it's one level of function call to translate code into data 
 (method name into string) and then the template function is the second 
 layer of call.

I'm not entirely sure what you mean here by "translate code into data
(method name into string)".  The method name is already a string, which
is why I offered the call by name syntax above.  But of course if you
have a code object for the method itself, you could do this in Perl 6:

$obj.$method(...args...);

Sadly this does not currently work in NQP-rx, though IIUC there's no
reason it couldn't (and in fact I've already requested this feature
because it would be useful for some function table stuff I do).

Full Perl 6 offers a number of features that would be useful for calling
a whole pile of dynamically-chosen methods on an object, but few have
been implemented in NQP-rx.  (I would assume because there hasn't been a
lot of demand for it yet.)

I'll let the Perl 6 gurus follow up with actual syntax examples for some
of these nifty features.  ;-)


-'f




Re: Parrot and Perl 6 Summary

2009-11-23 Thread Geoffrey Broadwell
On Mon, 2009-11-23 at 01:15 +0100, Lithos wrote:
 Today I posted my first attempt at summarizing Perl 6 and Parrot things at
 
http://lith-ology.blogspot.com/
 
 Any comments and corrections welcome!

This is *very* valuable to us.  Please keep it up!


-'f




Re: Parrot Bug Summary

2009-11-23 Thread Geoffrey Broadwell
On Mon, 2009-11-23 at 14:00 +, Parrot Bug Summary wrote:
 Parrot Bug Summary
 
 http://rt.perl.org/rt3/NoAuth/parrot/Overview.html

Now that we've emptied RT, can we shut this off?


-'f




Re: trouble building rakudo

2009-11-22 Thread Geoffrey Broadwell
On Sun, 2009-11-22 at 08:15 -0800, Paul Simon wrote:
 The system has only 256 MB with few resources being used. That's a big 
 difference
 between 12+ hours and 3 minutes! Maybe I should invest in more RAM :-)

Yep, if you see swapping, more RAM is probably the single most effective
performance enhancer you can possibly throw at it.  The slowdown from
swapping to a spinning disk completely swamps all performance
differences between CPUs, for instance.

FWIW, the next thing to improve depends on your common tasks: a solid
state disk and a better video card are usually the best bets.  The SSD
is great for compiles, program startups, and other disk-intensive
processes.  (Get a good one, the cheap ones are awful.)  The video card
is most useful if you do 3D, watch videos, or have a desktop that uses a
compositing engine.  If you just use twm to run xterms, it's not all
that valuable.  :-)

Still, both of those are WAY further down the list than getting enough
RAM to do the compile in main memory.


-'f




Re: r29129 - docs/Perl6/Spec

2009-11-19 Thread Geoffrey Broadwell
I kinda like 'blorst'.  The word makes me think of a warm stew on a cold
winter night.  And I agree with the searchability advantage of 'blorst'
as well.

/bikeshed


-'f




Re: S26 - The Next Generation

2009-09-17 Thread Geoffrey Broadwell
On Thu, 2009-09-17 at 11:12 -0700, yary wrote:
 On Thu, Sep 17, 2009 at 1:05 AM, Damian Conway dam...@conway.org wrote:
  Aaron Sherman asked:
 ...
  I'd very much like to establish that at default optimization levels for
  execution, this information is not guaranteed to be maintained past the
  creation of the AST.
 
  Unfortunately, it is. Perl 6 defines that Perl 6 programs can always
  access their own Pod at runtime (via $=POD). You probably can't even
  optimize the information away in the absence of any compile-time
  reference to $=POD, since there are plenty of symbolic ways to refer to
  $=POD at run-time.
 
 Can some concept/implementation of $=POD laziness only incur the
 memory and performance hit on access?

Alternately it should be possible to declare that the Pod data be
dropped before mainline runtime begins.  For example, it ought to be
possible for a compiling implementation such as Rakudo to declare that
the Pod data not be frozen into the PBC file.

(If this is already specced, I apologize -- I haven't searched for it.)


-'f




Re: patch for t/spec/s06-multi/type-based.t

2009-08-21 Thread Geoffrey Broadwell
On Fri, 2009-08-21 at 14:24 +1100, Илья wrote:
 -multi foo (@bar) { "Array " ~ join(', ', @bar) }
 -multi foo (%bar)  { "Hash " ~ join(', ', %bar.keys.sort) }
 +multi foo (@bar) { "Positioanl " ~ join(', ', @bar) }
 +multi foo (%bar)  { "Associative " ~ join(', ', %bar.keys.sort) }

Typo in third line there ("Positioanl ").


-'f




Re: r27605 - docs/Perl6/Spec/S32-setting-library

2009-07-19 Thread Geoffrey Broadwell
On Sat, 2009-07-18 at 21:22 -0400, James Cloos wrote:
 lwall + enum TrigBase is export <Radians Degrees Gradians Circles>;
 
 Is Circles of much value?
 
 I can see Semicircles, since that would make the range (-1,1] or [-1,1).
 But a range of [0,1) or (0,1] seems *much* less useful.
 
 Or am I missing an obvious use case?

With Circles, simple int/frac always gets you a count of full rotations
and an angle always between 0 and one full rotation.  Seems useful to
me.

Semicircles sounds useful as well, for the reason you state above.


-'f




Re: Trying to install Rakudo

2009-06-21 Thread Geoffrey Broadwell
On Sun, 2009-06-21 at 13:14 +0100, Lyle wrote:
 Hi Ron,
  It looks like the SVN repo was just down temporarily. Try again and 
 it'll probably work. You can also run the svn commands directly to grab 
 parrot... Take a look in gen_parrot.pl
 
 Ron Savage wrote:
  Checking out Parrot r39599 via svn...
  svn: OPTIONS of 'https://svn.parrot.org/parrot/trunk': could not connect
  to server (https://svn.parrot.org)

If you're using Debian testing, there's another possibility -- you may
have fallen victim to a broken libneon package.  You will either need to
upgrade that package and all its versioned dependencies to the unstable
versions, or change your ~/.subversion/servers file to include the
following line in the [global] section:

http-library = serf

I chose the latter fix because it was simpler, but I've had a couple
bits of weirdness.  I've heard that the libneon upgrade is the better
long-term solution, so YMMV.


-'f




Re: RPN calculator in Perl 6

2009-06-08 Thread Geoffrey Broadwell
On Sat, 2009-06-06 at 13:18 -0300, Daniel Ruoso wrote: 
 http://sial.org/pbot/37077
 A slightly improved syntax, as per jnthn++ suggestion...

My list mail has been very delayed, so this may be out of sequence, but
in case no one mentioned it yet:

http://sial.org/pbot/37102

(That's ruoso++'s later 37100 paste with a couple small tweaks by me.)

I wanted to shrink that even further by replacing the given/when with a
direct call of the correct op variant, but I couldn't figure out how to
do that in current Rakudo.  (And just using eval inside a multi seemed
to be broken.)


-'f




Re: Amazing Perl 6

2009-05-29 Thread Geoffrey Broadwell
Tim Nelson:
 There's some standard that says this is how to generate unicode:

 1. Hold down Ctrl+Shift
 2. Press U
 3. Type the hexadecimal for the Unicode character
 4. Release Ctrl+Shift

This works under GNOME, which also has a variant that is a little
friendlier to the fingers (and probably also works better with various
accessibility changes to the shift keys):

1. Press Ctrl+Shift+U
2. Release; see 'underlined u' feedback
3. Type the hex for the Unicode character (leading 0's optional);
   hex digits continue showing underline feedback
4. Press Enter; underlined u and digits are replaced with final glyph


-'f




Re: [perl #63874] [PATCH] floor-ceiling-round-sign-and-abs-in-perl6

2009-03-24 Thread Geoffrey Broadwell
On Tue, 2009-03-24 at 11:56 -0500, Patrick R. Michaud wrote:
 On Mon, Mar 16, 2009 at 09:41:37PM -0700, Geoffrey Broadwell wrote:
  On Mon, 2009-03-16 at 21:08 -0500, Patrick R. Michaud wrote:
   By putting floor/ceiling/round/sign/abs as a candidates for the setting 
   I was really aiming more for inline PIR than a pure Perl 6 solution.
   We still need those functions to have signatures and (perhaps)
   participate in multidispatch, and that's easier if the function
   definitions are Perl 6 (with the function bodies being inline PIR
   or a mixture of Perl 6 and inline PIR).
  
  Gotcha.  Sounds fine by me (as long as the Perl 6 signatures don't carry
  significantly more overhead than the pure-PIR version).
 
 They do carry more overhead (perhaps even a significant amount), but it's 
 a necessary overhead because we want them to properly participate in 
 multidispatch, and we'd like things like .signature to work properly.

I get the feeling that dispatch performance is going to be utterly
critical for Perl 6 (all implementations).  It seems to me that in
gaining flexibility and orthogonality, we've lost a lot of places that
Perl 5 could special case things for speed.

Of course, Perl 6 allows us to optimize *different* things for speed --
hyperoperators come to mind -- but it's hard to let go of the things
that you already have, you know?

/me goes back to blind faith in the coming happy place 


-'f




Re: [perl #63874] [PATCH] floor-ceiling-round-sign-and-abs-in-perl6

2009-03-16 Thread Geoffrey Broadwell
On Mon, 2009-03-16 at 21:08 -0500, Patrick R. Michaud wrote:
 On Sun, Mar 15, 2009 at 01:25:28AM -0500, fREW Schmidt wrote:
   Lesson from the Forth world: In cases where the semantic of a high-level
   word exactly (or very closely) matches an instruction in the hardware's
   ISA, it really deserves to be a primitive.
  
  Yeah, the main reason I did it was because it was on the rakudo wiki for
  candidates for the setting.
 
 By putting floor/ceiling/round/sign/abs as candidates for the setting 
 I was really aiming more for inline PIR than a pure Perl 6 solution.
 We still need those functions to have signatures and (perhaps)
 participate in multidispatch, and that's easier if the function
 definitions are Perl 6 (with the function bodies being inline PIR
 or a mixture of Perl 6 and inline PIR).

Gotcha.  Sounds fine by me (as long as the Perl 6 signatures don't carry
significantly more overhead than the pure-PIR version).


-'f




Re: [perl #63874] [PATCH] floor-ceiling-round-sign-and-abs-in-perl6

2009-03-14 Thread Geoffrey Broadwell
On Sat, 2009-03-14 at 14:07 -0700, fREW Schmidt wrote:
 # New Ticket Created by  fREW Schmidt 
 # Please include the string:  [perl #63874]
 # in the subject line of all future correspondence about this issue. 
 # URL: http://rt.perl.org/rt3/Ticket/Display.html?id=63874 
 
 
 , perl6

I'm all in favor of converting things that are complex in PIR to things
that are simple in Perl 6 ... but why convert things that boil down to a
single instruction in PIR into complex things in Perl 6?  Especially
since the complex Perl 6 code is highly likely to run a couple orders of
magnitude slower?

Lesson from the Forth world: In cases where the semantic of a high-level
word exactly (or very closely) matches an instruction in the hardware's
ISA, it really deserves to be a primitive.


-'f




Re: [perl #63626] Re: bouncing parrot...@parrotcode.org

2009-03-04 Thread Geoffrey Broadwell
  Could you change parrot...@parrotcode.org to simply bounce with a  
  message:
 
  --
  Please submit reports to Parrot using the web interface:
 
  https://trac.parrot.org/parrot/newticket
 
  Thanks,
  The Parrot Team
  --
 
  Thanks!
  Allison

Out of curiosity, why don't we allow emails to create tickets in Trac?


-'f




Re: Comparing inexact values (was Re: Temporal changes)

2009-02-24 Thread Geoffrey Broadwell
On Tue, 2009-02-24 at 12:31 -0800, Jon Lang wrote:
   $y ± 5  # same as ($y - 5) | ($y + 5)
   $y within 5 # same as ($y - 5) .. ($y + 5)

Oh, that's just beautiful.


-'f




Re: Temporal revisited

2009-02-20 Thread Geoffrey Broadwell
On Fri, 2009-02-20 at 15:33 -0600, Dave Rolsky wrote:
 Of course, if you're dealing with TAI only, you're safe for constants up 
 to ONE_WEEK.

So we just define ONE_MONTH as 4 * ONE_WEEK, right?

*duck*


-'f




Re: Spec reorganisation

2009-02-19 Thread Geoffrey Broadwell
On Thu, 2009-02-19 at 22:57 +1100, Timothy S. Nelson wrote:
 On Thu, 19 Feb 2009, Carl Mäsak wrote:
  A tree is a graph without cycles.

That's insufficient.  In fact, there are a number of ways that the
general concept of an acyclic graph must be constrained before you get
something you can call a 'tree'.

  The concept of a root is very common in computer representations, but in 
  no way necessary for a general tree. In fact, in phylogenetics, it's 
  business as usual to handle unrooted trees. This makes the $root attribute 
  meaningless in at least some cases.
 
   Interesting.  I'm happy to assume that $root is allowed to be 
 Undefined, I think.  But let me ask a question; were you to represent an 
 unrooted tree in a computer, how would you do it so that, if you had to look 
 around the tree, you could do it?  You'd need some node that was an 
 entry-point into the tree.  That's the purpose I'm trying to get at here.

A tree with nodes but without a root is not a tree -- it's a collection
of trees, more commonly called a grove or forest.


-'f




Re: r25122 - docs/Perl6/Spec

2009-01-30 Thread Geoffrey Broadwell
On Fri, 2009-01-30 at 08:12 +0100, pugs-comm...@feather.perl6.nl wrote:
 @@ -103,7 +106,7 @@
  =item *
  
  POD sections may be used reliably as multiline comments in Perl 6.
 -Unlike in Perl 5, POD syntax now requires that C<=begin comment>
 +Unlike in Perl 5, POD syntax now lets you use C<=begin comment>
  and C<=end comment> delimit a POD block correctly without the need
  for C<=cut>.  (In fact, C<=cut> is now gone.)  The format name does
  not have to be C<comment> -- any unrecognized format name will do

I believe that with this change in wording the next line needs to use
'to delimit' rather than just 'delimit'.


-'f




Re: RFD: Built-in testing

2009-01-21 Thread Geoffrey Broadwell
On Wed, 2009-01-21 at 14:23 +, Peter Scott wrote:
 On Wed, 21 Jan 2009 13:35:50 +0100, Carl Mäsak wrote:
  I'm trying to explain to myself why I don't like this idea at all. I'm
  only partially successful. Other people seem to have no problem with it,
  so I might just be wrong, or part of a very small, ignorable minority.
  :) 
 
 I find myself echoing you.  I don't have the language design skills others 
 are displaying here.  I can only evaluate this from an educator's point of 
 view and say that the P5 syntax of
 
 is $x, 42, 'Got The Answer';
 
 is just about the conceivable pinnacle of elegance for at least that form 
 of question.  (Compare, e.g., the logorrhoea of Java tests.)  I do not see 
 how I could tell a student with a straight face that the P6 proposal is an 
 improvement, at which point the conversation would devolve into a 
 defensive argument I do not want to have.
 
 I get that 'is' is already taken and we do not want the grammar to engage 
 in Clintonesque parsing when it encounters the token.  Okay.  But how do I 
 justify the new syntax to a student?  What are they getting that makes up 
 for what looks like a fall in readability?

I don't quite understand the problem with using the same syntax as in
Perl 5, just uppercasing the verbs so they won't conflict with everyday
syntactic features:

OK($bool,  'Widget claimed success');
IS($x, 42, 'Widget produced the right answer');

(This is ignoring issues of placement of parens or curlies to make the
Perl 6 syntax attractive and consistent with other constructs -- I'm
just talking about using verb rather than adverb syntax, with our
already properly Huffmanized verb names intact.)

I do like the idea of having TEST {} blocks that go inactive when not in
testing mode (however that is defined).  But other than that, I don't
understand the value of the other syntactic changes suggested, the
adverb syntax in particular.  Maybe I'm missing something obvious ...


-'f




Re: design of the Prelude (was Re: Rakudo leaving the Parrot nest)

2009-01-15 Thread Geoffrey Broadwell
On Thu, 2009-01-15 at 16:03 -0800, Darren Duncan wrote:
 Patrick R. Michaud wrote (on p6c):
  On Thu, Jan 15, 2009 at 08:53:33AM +0100, Moritz Lenz wrote:
  Another thing to keep in mind is that once we start to have a Perl 6
  prelude, we might decide to be nice neighbors and share it with other
  implementations, as far as that's practical. 
  
  My guess is that there will be a shared prelude that is maintained
  in a central repository like the spectests, but that individual
  implementations are likely to want or need customized versions of
  the prelude for performance or implementation reasons.  In this
  sense the shared prelude acts as a reference standard that
  implementations can use directly or optimize as appropriate.
 
 What I recommend, and forgive me if things already work this way, is to 
 expand 
 the Prelude so that it defines every Perl 6 core type and operator using pure 
 Perl 6, complete enough such that there are as few as possible actual 
 primitives 
 not defined in terms of other things.  This Prelude would then be shared 
 unchanged between all the Perl 6 implementations.
 
 Then, each implementation would also define its own PreludeOverride file 
 (name 
 can be different) in which it lists a subset of the type and operator 
 definitions in the Prelude that the particular implementation has its own 
 implementation-specific version of, and the latter then takes precedence over 
 the former in terms of being compiled and executed by the implementation.

The problem with this method is that there are usually *several* ways to
implement each feature in terms of some number of other features.  The
creators of the shared prelude are then stuck with the problem of
deciding which of these to use.  If their choices do not match the way a
particular implementation is designed, it will then be necessary for the
implementation to replace large swaths of the Prelude to get decent
performance.

For example, implementations in pure C, Common Lisp, and PIR will
probably have VASTLY different concepts of available and optimized
primitive operations.  A prelude written with any one of them in mind
may well be pessimal for one of the others.

That's not to say it's not a useful idea for helping to jumpstart new
implementations -- I just somewhat doubt that a mature implementation
will be able to use more than a fraction of a common prelude.


-'f

P.S.  I did this sort of thing once -- a Forth prelude that attempted to
minimize the primitive set, and it *was* very nice from an abstract
perspective.  Unfortunately, it also made some operations take millions
of cycles that would take no more than one assembly instruction on just
about every CPU known to man.  It's a REALLY easy trap to fall into.




Re: [PATCH] Add .trim method

2009-01-12 Thread Geoffrey Broadwell
On Mon, 2009-01-12 at 07:01 -0800, Ovid wrote:
 - Original Message 
 
 
I could optionally make the following work:
   
  $string.trim(:leading<0>);
  $string.trim(:trailing<0>);
 
 Alternatively, those could be ltrim() and rtrim().  If you need to 
 dynamically determine what you're going to trim, you couldn't just set 
 variables to do it, though. You'd have to figure out which methods to call.  
 Or all could be allowed and $string.trim(:leading<0>) could call $string.rtrim 
 internally.

When I saw your proposed syntax above, instead of reading "don't trim
leading/trailing whitespace", I read "change the definition of
'whitespace' to 'codepoint 0'" for leading/trailing.

That of course raises the question of how one *would* properly override
trim's concept of whitespace ...


-'f




Re: Allocation of PASM registers (in PASM mode)

2009-01-11 Thread Geoffrey Broadwell
On Sun, 2009-01-11 at 12:39 -0500, Andrew Whitworth wrote:
 This is something that obviously needs to be avoided. PASM doesn't
 require that P42 be the 42nd register in an array. It only requires
 that values put into P42 aren't overwritten and the register isn't
 repurposed later. The simplest allocator to avoid this problem would
 probably be hash-based, where the string "P1000" maps to a
 small-numbered but unique integer value. Each register name maps
 uniquely to an actual register storage location, just not necessarily
 the one specified in the name.

Do you propose to do this during PASM compile, PBC load, or PBC
interpret?  There are tradeoffs:

  * If only mapped in the PASM compile, then the bytecode is still
risky, and you haven't really addressed the root problem.

  * If mapped during PBC load, that breaks the mmap and execute
optimized path.  One possible mitigation is to only perform
the register remapping if the maximum register number seen in
the PBC is larger than some threshold.  The max register number
might be available as a side effect of general PBC verification;
one hopes that it will not add too much CPU time to that pass.

  * If instead the mapping is done during the PBC interpret phase,
that makes almost every op slower.  This is probably just
unacceptable, performance-wise.
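A hash-based allocator of the sort described above is simple at its core; a rough Python sketch (my illustration only -- it ignores register types, lifetimes, and the compile/load/interpret question entirely):

```python
def remap_registers(register_names):
    """Map sparse register names (e.g. 'P1000') to small, dense slot numbers.

    Each name gets a unique storage slot, just not necessarily the one
    implied by the name -- which is the only property PASM actually requires.
    """
    mapping = {}
    for name in register_names:
        if name not in mapping:
            mapping[name] = len(mapping)  # next unused slot
    return mapping
```

So a stream that mentions P1000, P3, and P7 needs only three actual registers, whatever the maximum register number seen in the bytecode happens to be.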


-'f




Re: r24809 - docs/Perl6/Spec

2009-01-08 Thread Geoffrey Broadwell
On Thu, 2009-01-08 at 23:06 +0100, pugs-comm...@feather.perl6.nl wrote:
 +=item -0 *octal/hex*
 +
 +Sets input record separator.  Missing due to lack of specification in
 +L<Synopsis 16|S16-io>.  There is a comment about this in the L</Notes>
 +section at the end of this document.

I use this option quite a bit -- but only in the bare '-0' syntax,
meaning null-terminated lines, necessary for robust command line pipes.
I only rarely use the full form.  In fact, really rarely.

 +=item -i *extension*
 +
 +Modify files in-place.  Haven't thought about it enough to add yet, but
 +I'm certain it has a strong following. {{TODO review decision here}}

Oh yeah.  I use this a LOT.

 +=item -l
 +
 +Enable automatic line-ending processing.  This is the default behavior.

And there was much rejoicing ...


-'f




Re: Converting a Perl 5 pseudo-continuation to Perl 6

2009-01-02 Thread Geoffrey Broadwell
On Fri, 2009-01-02 at 14:19 +0200, Leon Timmermans wrote:
 When going OO, I'd say an augment()/inner() approach would be
 cleanest. See 
 http://search.cpan.org/~drolsky/Moose/lib/Moose/Cookbook/Basics/Recipe6.pod
 for an example. I don't know how to express that in Perl 6 though.

There's no description on that page, just sample code, but it looks like
augment performs a similar function to the .wrap method on Routines in
Perl 6.  That's an interesting variation of my approach #4, I think:

  4. In order to keep the sub separate, but still not split the
pid_file_handler call, I came up with a variation of #3 in which
pid_file_handler takes a callback parameter:
 
sub init_server {
# ...
pid_file_handler($options{pid_file}, become_daemon);
# ...
}
 
sub pid_file_handler($pid_file, callback) {
# ... top half ...
callback();
# ... bottom half ...
}

I like your idea a little better than the callback method, because I can
see the logic behind saying "I want to make an enhanced version of
become_daemon that is *also* able to handle PID files."  However, it ties
the two together -- the PID file handling cannot be used in any context
other than becoming a daemon, and in particular it's not obvious how you
would unit test it.


-'f




Re: r24737 - docs/Perl6/Spec

2009-01-02 Thread Geoffrey Broadwell
On Fri, 2009-01-02 at 17:08 +0100, pugs-comm...@feather.perl6.nl wrote:
 +=head2 Synopsis
 +
 +  multi sub perl6(
 +Bool :a($autosplit),
 +Bool :c($check-syntax),
 +Bool :$doc,
 +:e($execute),
 +:$execute-lax,  #TODO fix illegal -e6 syntax. -6? not legal. -x? hrmm
 +Bool :F($autoloop-split),
 +Bool :h($help),
 +:I(@include),
 +#TODO -M,
 +Bool :n($autoloop-no-print),
 +:O($output-format) = 'exe',
 +:o($output-file) = $*OUT,
 +Bool :p($autoloop-print),
 +:S(@search-path),
 +Bool :T($taint),
 +Bool :v($version),
 +Bool :V($verbose-config),
 +  );

I find this a little difficult to skim, because parallel things are not
aligned.  Aligning on ':' should make things much easier on the eyes:

multi sub perl6(
Bool :a($autosplit),
Bool :c($check-syntax),
Bool :$doc,
 :e($execute),
 :$execute-lax,
Bool :F($autoloop-split),
Bool :h($help),
 :I(@include),
 #TODO -M,
Bool :n($autoloop-no-print),
 :O($output-format) = 'exe',
 :o($output-file)   = $*OUT,
Bool :p($autoloop-print),
 :S(@search-path),
Bool :T($taint),
Bool :v($version),
Bool :V($verbose-config),
);

Ah, that's a bit better.  Looking at the above, is $execute-lax supposed
to be a boolean, or is it really a generic scalar?  It's also not
obvious what a boolean named $doc does -- which probably means either
that it's not supposed to be a boolean, or it needs a somewhat more
descriptive long name (or both).

Also, in Perl 5 taint is tri-valued, because it has a warnings-only
mode.  How will that be supported by Perl 6?

Finally, how do the defaults of $output-file and $output-format interact
so that the default behavior remains compile-and-execute?  Changing the
default to compile-to-exe seems unperlish to me ...


-'f




Re: r24737 - docs/Perl6/Spec

2009-01-02 Thread Geoffrey Broadwell
Thank you for the quick turnaround!

On Fri, 2009-01-02 at 10:55 -0800, jerry gay wrote:
 On Fri, Jan 2, 2009 at 09:27, Geoffrey Broadwell ge...@broadwell.org wrote:
  It's also not
  obvious what a boolean named $doc does -- which probably means either
  that it's not supposed to be a boolean, or it needs a somewhat more
  descriptive long name (or both).

I think this is the only remaining item you had not yet responded to --
er, unless I missed it.


-'f




Re: Converting a Perl 5 pseudo-continuation to Perl 6

2009-01-02 Thread Geoffrey Broadwell
On Fri, 2009-01-02 at 22:56 +0100, Aristotle Pagaltzis wrote:
  When I asked this question on #perl6, pmurias suggested using
  gather/take syntax, but that didn't feel right to me either --
  it's contrived in a similar way to using a one-off closure.
 
 Contrived how?

Meaning, the gather/take syntax doesn't make much sense, because we're
not gathering anything; the PID file handler has nothing to return.
We'd only be using it for the side effect of being able to pause the
callee's execution and resume it later.

 When you have an explicit entity representing the continuation,
 all of these questions resolve themselves in at once: all calls
 to the original routine create a new continuation, and all calls
 via the state object are resumptions. There is no ambiguity or
 subtlety to think about.

I like this argument.  I'm not sure it's applicable in every case, but
it certainly applies to the class of situations containing my problem.

 So from the perspective of the caller, I consider the “one-off”
 closure ideal: the first call yields an object that can be used
 to resume the call.
 
 However, I agree that having to use an extra block inside the
 routine and return it explicity is suboptimal. It would be nice
 if there was a `yield` keyword that not only threw a resumable
 exception, but also closed over the exception object in a
 function that, when called, resumes the original function.
 
 That way, you get this combination:
 
 sub pid_file_handler ( $filename ) {
 # ... top half ...
 yield;
 # ... bottom half ...
 }
 
 sub init_server {
 # ...
 my $write_pid = pid_file_handler( $options<pid_file> );
 become_daemon();
 $write_pid();
 # ...
 }

That's pretty nice.  Perhaps we can make it even cleaner with a few
small tweaks to init_server():

sub init_server(:$pid_file, ...) {
# ...
my write_pid := pid_file_handler($pid_file);
become_daemon();
write_pid();
# ...
}

So far, this variant is winning for me, I think.  It's slightly more
verbose on the caller's side than the yield variant I had proposed, but
it's also more explicit, and allows (as you said) a clean syntactic
separation between starting the PID file handler and continuing it.

It does bring up a question, though.  What if pid_file_handler() needed
to be broken into three or more pieces, thus containing multiple yield
statements?  Does only the first one return a continuation object, which
can be called repeatedly to continue after each yield like this?

sub init_server(:$pid_file, ...) {
# ...
my more_pid_stuff := pid_file_handler($pid_file);
become_daemon();

more_pid_stuff();
do_something();

more_pid_stuff();
do_something_else();

more_pid_stuff();
# ...
}

Or does each yield produce a fresh new continuation object like this?

sub init_server(:$pid_file, ...) {
# ...
my write_pid   := pid_file_handler($pid_file);
become_daemon();

my fold_pid:= write_pid();
do_something();

my spindle_pid := fold_pid();
do_something_else();

spindle_pid();
# ...
}

(Note that I assume you can simply ignore the returned object if you
don't plan to continue the operation any more, without raising a
warning.)

Certainly the first version has less visual clutter, so I tend to lean
that way by default.  But the second design would allow one to create a
tree of partial executions, by calling any earlier continuation object
again.  That's a very powerful concept that I don't want to give up on.
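For what it's worth, the first (ratcheting) design is exactly how generators behave in languages that have them; a rough Python sketch of the multi-yield case (stage names invented for illustration):

```python
log = []

def pid_file_handler(pid_file):
    log.append("claim " + pid_file)   # first piece of the handler
    yield
    log.append("write pid")           # second piece
    yield
    log.append("clean up")            # third piece

more_pid_stuff = pid_file_handler("server.pid")
next(more_pid_stuff)        # runs the first piece, pauses at the first yield
next(more_pid_stuff)        # resumes: second piece, pauses at the second yield
next(more_pid_stuff, None)  # resumes: third piece; the generator is now done
```

The tree-of-partial-executions design can't be had this way, though: a generator is one-shot, so calling an earlier resumption point a second time would require true re-invocable continuations.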

Supporting both feels like it might be an adverb on the invocation
(possibly with a frosty sugar coating available).  It would be nice to
support invoking a continuation in ratcheting and forgetful modes.

Thoughts?


-'f




Re: r24737 - docs/Perl6/Spec

2009-01-02 Thread Geoffrey Broadwell
On Fri, 2009-01-02 at 12:27 -0800, jerry gay wrote:
 oh, yes, whoops! i responded to someone else in #pugs earlier, and
 forgot to address the item here. Cperl6 --doc replaces p5's
 Cperldoc (that's the latest idea from damian, although it seems not
 to be published yet).

Ah, I get it!  What about perldoc's special modes?  Will these go in
++DOC ... ++/DOC sections?

 the most likely short names, C<-d -o -c> are all taken by
 either p5 or p6 command-line. i don't want to use C<-d>, because that
 has to do with the debugger in p5, so makes it harder for p6 to catch
 accidental usage. C<--doc> probably warrants a short name, since it
 will be called frequently--i hope, reducing irc traffic :) but i
 haven't decided on a good name yet. i'm open to suggestions.

Don't have any yet.  Will let my subconscious ruminate on it.


-'f




Converting a Perl 5 pseudo-continuation to Perl 6

2009-01-01 Thread Geoffrey Broadwell
In the below Perl 5 code, I refactored to pull the two halves of the PID
file handling out of init_server(), but to do so, I had to return a sub
from pid_file_handler() that acted as a continuation.  The syntax is a
bit ugly, though.  Is there a cleaner way to do this in Perl 6?

##
sub init_server {
my %options  = @_;

# ...

# Do top (pre-daemonize) portion of PID file handling.
my $handler = pid_file_handler($options{pid_file});

# Detach from parent session and get to clean state.
become_daemon();

# Do bottom (post-daemonize) portion of PID file handling.
    $handler->();

# ...
}

sub pid_file_handler {
# Do top half (pre-daemonize) PID file handling ...
my $filename = shift;
my $basename = lc $BRAND;
    my $PID_FILE = $filename || "$PID_FILE_DIR/$basename.pid";
my $pid_file = open_pid_file($PID_FILE);

# ... and return a continuation on the bottom half (post-daemonize).
return sub {
$MASTER_PID  =  $$;
print $pid_file $$;
close $pid_file;
};
}
##
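(For comparison: the same top-half/bottom-half split is what generator-style yield gives you in, say, Python -- a rough sketch, with the file handling reduced to its skeleton:)

```python
import os

def pid_file_handler(filename):
    # Top half (pre-daemonize): claim the PID file.
    pid_file = open(filename, "w")   # real code would claim it atomically
    yield                            # pause here; the caller resumes us later
    # Bottom half (post-daemonize): record the daemon's PID.
    pid_file.write(str(os.getpid()))
    pid_file.close()

def init_server(pid_file_name):
    handler = pid_file_handler(pid_file_name)
    next(handler)          # run the top half, stop at the yield
    # become_daemon() would go here
    next(handler, None)    # resume: run the bottom half to completion
```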

When I asked this question on #perl6, pmurias suggested using
gather/take syntax, but that didn't feel right to me either -- it's
contrived in a similar way to using a one-off closure.

pmichaud offered several possibilities (I've converted some of his
suggestions expressed as prose into code, so the errors there are mine):

1. Take advantage of Perl 6 syntax reduction to turn 'return sub {...}'
   into 'return {...}' (or even just fall off the end with '{...}', I
   suppose).  This is visually slightly better, but still leaves the
   bottom half inside a block that merely exists to satisfy Perl, not
   actually representing anything intrinsic about the problem.

2. Throw a resumable exception in the middle:

   sub init_server {
   # ...
   pid_file_handler($options{pid_file});
   become_daemon();
   pid_file_handler();
   # ...
   }

   sub pid_file_handler {
   # ... top half ...
   throw ResumableException;
   # ... bottom half ...
   }

   He also suggested a variant syntax with an adverb on return:

   sub pid_file_handler {
   # ... top half ...
   return :resumable;
   # ... bottom half ...
   }

   I suggested a naked yield syntax:

   sub pid_file_handler {
   # ... top half ...
   yield;
   # ... bottom half ...
   }

   These all desugar to the same thing, of course.

3. Make become_daemon a part of pid_file_handler, or vice-versa.
   I rejected both of these on the basis of separating different
   things into different subs.  The two tasks are only tangentially
   related, and neither really seems like a subordinate op of the
   other.

4. In order to keep the sub separate, but still not split the
   pid_file_handler call, I came up with a variation of #3 in which
   pid_file_handler takes a callback parameter:

   sub init_server {
   # ...
   pid_file_handler($options{pid_file}, become_daemon);
   # ...
   }

   sub pid_file_handler($pid_file, callback) {
   # ... top half ...
   callback();
   # ... bottom half ...
   }

   That seems like a silly contortion to hide the problem, and
   doesn't represent my intent well -- the pid file handler doesn't
   need to send a message, it needs to yield control while waiting
   for something else to happen.

5. Make a new PidHandler class and address the problem in OO fashion:

   sub init_server {
   # ...
   my $pid_handler = PidHandler.new(file => $options{pid_file});
   $pid_handler.top();
   become_daemon();
   $pid_handler.bottom();
   #...
   }

   This is certainly workable, but again feels like a contrived
   workaround in the same way that gather/take and return {...} do.
   Plus, writing a new class and using OO/method call syntax just to
   allow a sub to be split seems like pointless busy work.  Not
   as bad in Perl 6 as in Perl 5, but still.

In the end, I think I like the 'naked yield' idea best of the ones we
have so far.  Any comments or other ideas? [1]


-'f

[1] Other than that I've used the word 'contrived' too many times.  :-)




Re: Converting a Perl 5 pseudo-continuation to Perl 6

2009-01-01 Thread Geoffrey Broadwell
On Fri, 2009-01-02 at 00:30 +0200, Leon Timmermans wrote:
 I can't help wondering why does pid_file_handler need to be split up
 in the first place? Why wouldn't it be possible to simply call
 pid_file_handler after become_daemon?

Two answers:

1. If an error occurs that will not allow the PID file to be created
   (another copy of the daemon is already running, the user doesn't
   have required root permissions, or what have you), the program
   should die visibly at the command line, rather than *appearing* to
   launch but actually just spitting an error into the syslog and
   disappearing silently.  Checking for another running daemon and
   taking ownership of the pid file should be an atomic operation
   (or at the very least err on the side of failing noisily if
   something fishy happens), so I can't just check for an existing
   pid file before daemonizing, and then create the new pid file after.

   It's not visible in the code I posted, but the program should also
   do a number of other sanity checks before it daemonizes, for the
   very same reasons.  For example, it should load all modules it
   expects to use before becoming a daemon, and complain loudly if
   it can't.

2. The particular code I used is just a decent example to ask about
   the general question of a better syntax for interrupting and
   continuing a sub.  So even if I could do what you say, I'd still
   have the question.  :-)
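(The atomic check-and-claim in point 1 is typically spelled O_CREAT|O_EXCL at the open() level; a minimal Python sketch of that pattern, reusing the open_pid_file name from the earlier code but with everything else assumed:)

```python
import os

def open_pid_file(path):
    # O_CREAT | O_EXCL makes "check for an existing file" and "create it"
    # a single atomic step: if another daemon already owns the PID file,
    # os.open fails with FileExistsError instead of silently clobbering it.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    return os.fdopen(fd, "w")
```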


-'f




Re: [perl #60048] [BUG] [MMD] CGP Does Not Work with PCC Runcore Reentry

2008-12-23 Thread Geoffrey Broadwell
On Tue, 2008-12-23 at 17:31 -0800, Will Coleda via RT wrote:
 chromatic mentioned on #parrot that if we remove PIC, we're going to break 
 all the 
 predereferenced runcores. After some discussion, this probably means ripping 
 out:
 
 16:42 @chromatic Everything other than the default core, the nearly-useless 
profiling core, and the gc-debug core.
 
 So, I vote we update the deprecation notice in trunk to include the runcores 
 (which means 
 delaying the removal until post-0.9.0), and then I can continue the mayhem 
 and destruction 
 that has begun in the branch.
 
 Comments?

This is certainly a biggie, but I believe we've been doing this on a
smaller scale more and more lately: removing functionality and/or
optimizations that we don't have the spare cycles to support.

On the one hand this is a good thing -- we'll actually hit 1.0 in a few
months.  I'm all for getting that wider audience.

On the other hand, I'm somewhat concerned that Parrot 1.0 will either
itself be rather slow, or will architecturally force HLL implementations
to be slow.  While looking for the IRC discussion mentioned by Coke, I
found the following interchange (slightly edited for clarity):

donaldh   Hmm. Bad memory profile for rakudo. A piece of PIR that runs
a SQLite query and prints ~18000 rows tops out at 6 MB when
run with -G. The equivalent in Rakudo tops out at 1.6GB
chromatic PGE/PCT/Rakudo uses more STRINGs and PMCs.  If you disable
garbage collection, Parrot won't reuse them.
donaldh   Sure. I'm just realising how much pressure Rakudo is putting
on GC.
pmichaud  rakudo is somewhat constrained by the architecture Parrot
provides, unfortunately.

This interchange raised a flag for me.  Am I incorrect in seeing this as
a problem?  Since Parrot 1.0 is supposed to be the stable interface for
HLL implementors to aim for, I'd hate for that interface to be very
suboptimal, performance-wise, even if it is technically sufficient to
get things to *work*.  Or is the plan that Parrot 1.5/2.0 are going to
include the needed performance and functional improvements as part of
the push to production?


-'f




Re: For your encouragement

2008-12-05 Thread Geoffrey Broadwell
On Fri, 2008-12-05 at 09:10 -0600, Andy Lester wrote:
 On Dec 5, 2008, at 4:13 AM, Simon Cozens wrote:
 
  I just ran this code, which worked with the expected results:
 
 
 Beautiful.  Posted to Perlbuzz.
 
 http://perlbuzz.com/2008/12/database-access-in-perl-6-is-coming-along-nicely.html

Someone needs to reply to the comments from readers who have confused
DBI and DBDI, and have thus decided we are turning Perl into Java.

I can't, because as Perlbuzz oh-so-helpfully tells me when I try to
submit my comment: Registration is required.  With no indication how
to actually do so.


-'f




Re: For your encouragement

2008-12-05 Thread Geoffrey Broadwell
On Fri, 2008-12-05 at 13:13 -0600, Andy Lester wrote:
 On Dec 5, 2008, at 1:11 PM, Geoffrey Broadwell wrote:
  I can't, because as Perlbuzz oh-so-helpfully tells me when I try to
  submit my comment: "Registration is required."  With no indication how
  to actually do so.
 
 You have to have JavaScript turned on.  Sorry that the message sucks.   
 It's on my to-do list to fix.

OK, that's fair enough -- but why does submitting a dead simple form
require JavaScript?

Hmmm, maybe I should be taking this up with the MT developers.  Are you
running a current enough rev that it's likely still a problem?  (I don't
want to go through the trouble of installing a local MT just to check
that.  :-)


-'f




Re: how to write literals of some Perl 6 types?

2008-12-02 Thread Geoffrey Broadwell
On Tue, 2008-12-02 at 08:50 +0100, Carl Mäsak wrote:
 Darren ():
   Bit
   Blob
   Set
   Bag
   Mapping
 
  How does one write anonymous value literals of those types?  And I mean
  directly, not by writing a literal of some other type and using a conversion
  function to derive the above?
 
 Why is the latter method insufficient for your needs?

Efficiency reasons, among others.  We can quibble over the syntax, but
it would be awfully nice if implementations were able to generate the
data structure as early as possible -- at compile time (for literals
containing only constants) or during runtime as a single build pass
rather than build-other-type-then-convert (for literals containing
runtime-evaluated expressions).

If there isn't an easy way for the implementation to make this
optimization, then we're stuck with some of the basic types taking
twice the time and space to create that other similar types do, for no
good reason.  Mind you, some implementations may get lucky by using a
common all-powerful collection implementation underneath, and turning
the conversion into a simple type relabel (constant cost in time and
space), but that doesn't generalize to highly-tuned implementations that
optimize each collection type's data structures individually.
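CPython is a concrete data point for the literals-of-constants case: a set literal used in a membership test is folded into a frozenset constant by the bytecode compiler, so nothing is built-then-converted at runtime. A quick, CPython-specific way to observe it:

```python
def is_vowel(ch):
    # A literal containing only constants: the compiler folds it into a
    # frozenset stored in the function's constant pool, built exactly once.
    return ch in {"a", "e", "i", "o", "u"}

# The folded constant is visible in the compiled code object.
folded = frozenset("aeiou") in is_vowel.__code__.co_consts
```

On current CPython, `folded` comes out True; literals containing runtime-evaluated expressions, as noted above, still need the single-pass build.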


-'f




Re: how to write literals of some Perl 6 types?

2008-12-02 Thread Geoffrey Broadwell
On Tue, 2008-12-02 at 13:07 -0700, David Green wrote:
 On 2008-Dec-2, at 12:33 pm, Geoffrey Broadwell wrote:
  On Tue, 2008-12-02 at 08:50 +0100, Carl Mäsak wrote:
  Darren ():
  How does one write anonymous value literals of those types?
  Why is the latter method [conversion] insufficient for your needs?
  Efficiency reasons, among others.
 
 Surely the optimizer will perform conversions of constants at compile  
 time.

It would be nice to expect that (though I don't, actually) ... but the
second half of my statement was at least as important.  It also matters
how this is handled for runtime expressions (literals that aren't
constants).

I was merely saying that we must avoid deciding the semantics in a way
that prevents a runtime-varying literal from being constructed
efficiently.


-'f




Re: how to write literals of some Perl 6 types?

2008-12-02 Thread Geoffrey Broadwell
On Tue, 2008-12-02 at 21:21 +0100, Leon Timmermans wrote:
 If you really want it, a macro can fix all of this for you.
 That's the beauty of macros: these kinds of things are possible if you
 need them.

Sure, but user-written macros are also an easy out that allows one to
avoid making hard decisions about syntax and semantics.  Where the base
language is concerned, we should avoid waving our hands and telling the
user to paper over our indecisiveness.

Though perhaps you meant that the base language should implement a few
standard macros that convert sugary syntax into something efficient, in
which case I'm fine with that answer.


-'f




Re: Files, Directories, Resources, Operating Systems

2008-11-26 Thread Geoffrey Broadwell
On Wed, 2008-11-26 at 11:34 -0800, Darren Duncan wrote:
 I agree with the idea of making Perl 6's filesystem/etc interface more 
 abstract, 
 as previously discussed, and also that users should be able to choose between 
 different levels of abstraction where that makes sense, either picking a more 
 portable interface versus a more platform-specific one.

Agreed on both counts.

 Following up on Tim Bunce's comment about looking at prior art, I also 
 recommend 
 looking at the SQLite DBMS, specifically its virtual file system layer; this 
 one 
 is designed to give you deterministic behaviour and introspection over a wide 
 range of storage systems and attributes, both on PCs and on embedded devices, 
 or 
 hard disks versus flash or write once vs write many etc, where a lot of 
 otherwise-assumptions are spelled out.  One relevant url is 
 http://sqlite.org/c3ref/vfs.html and for the moment I forget where other good 
 urls are.

There are also higher-level VFS systems, such as Icculus.org PhysicsFS,
which goes farther than just abstracting the OS operations.  It also
abstracts away the differences between archives and real directories,
unions multiple directory trees on top of each other, and transparently
redirects writes to a different trunk than reads:

http://icculus.org/physfs/

I want to be able to support that functionality in a way that still
allows me to open and close PhysicsFS files and directories the way
I would normally.  I want to be able to layer it *under* the standard
Perl IO ops, rather than above them.

The following is all obvious, but just to keep it in people's minds and
frame the discussion:

Being able to layer IO abstractions is at least as important as the
basic OS abstraction itself -- as well as the ability to use the high
level abstraction most of the time, but reach down the stack when
needed.  This implies making best effort to minimize the ways in which
upper layers will be hopelessly confused by low-level operations, and
documenting the heck out of the problem areas.

These layers should be mix-and-match as much as possible, with
abstractions designed with common interfaces.  Certainly Perl 5's IO
layers, as well as any networking or library stack, are prior art here.
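To make the PhysicsFS-style behavior concrete, here is a hypothetical
sketch (in Python, purely as an analogy -- the UnionFS class and its
layout are invented for illustration, not a real PhysicsFS API): reads
search a stack of roots in order, while writes are redirected to a
single designated write root:

```python
# Invented illustration of a PhysicsFS-style union filesystem:
# reads search a stack of roots in order; writes always go to one
# designated write root.
import os
import tempfile

class UnionFS:
    def __init__(self, read_roots, write_root):
        self.read_roots = list(read_roots)
        self.write_root = write_root

    def read(self, relpath):
        # First root containing the path wins (later roots are shadowed).
        for root in self.read_roots:
            path = os.path.join(root, relpath)
            if os.path.exists(path):
                with open(path) as f:
                    return f.read()
        raise FileNotFoundError(relpath)

    def write(self, relpath, data):
        # All writes land in the write root, never in the other trees.
        path = os.path.join(self.write_root, relpath)
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        with open(path, "w") as f:
            f.write(data)

base, patch, savedir = (tempfile.mkdtemp() for _ in range(3))
with open(os.path.join(base, "readme.txt"), "w") as f:
    f.write("base copy")
with open(os.path.join(patch, "readme.txt"), "w") as f:
    f.write("patched copy")

vfs = UnionFS([patch, base, savedir], savedir)
print(vfs.read("readme.txt"))   # patch root shadows the base tree
vfs.write("save.txt", "game state")
```

The point of layering this *under* the normal IO ops is that ordinary
open/close code would transparently get exactly this behavior.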

 To summarize, what we really want is something more generic than 
 case-sensitivity, which is text normalization and text folding in general, as 
 well as distinctly dealing with distinctness for representation versus 
 distinctness for mutual exclusivity.

Yes, definitely.

 [This] implies that 
 sensitivity is special whereas sensitivity should be considered normal, and 
 rather insensitivity should be considered special.

If only that were true in other areas of life.  :-)


-'f




Re: [svn:perl6-synopsis] r14586 - doc/trunk/design/syn

2008-10-05 Thread Geoffrey Broadwell
On Sun, 2008-10-05 at 17:05 -0700, [EMAIL PROTECTED] wrote:
 +C<< infix:<...> >>, the series operator.


Lovely, just lovely.

 +1, 3, 5 ... *    # odd numbers
 +1. 2. 4 ... *    # powers of 2

Did you mean to use commas on that second line?


-'f




Re: [perl #59600] [PATCH] Require Storable 2.13 indirectly by requiring perl 5.8.6

2008-10-03 Thread Geoffrey Broadwell
On Fri, 2008-10-03 at 08:55 -0700, Will Coleda wrote:
 Index: Makefile.PL
 ===
 -BEGIN { require 5.008 }
 +BEGIN { require 5.8.6 }

 Index: Configure.pl
 ===
 -use 5.008;
 +use 5.8.6;

I understand that it doesn't matter for anything used post-configure,
because in theory the user should have gotten a friendly error message
at configure time and not even make it to the other files -- but for
these two files, I believe we should use the backward compatible syntax,
so that ancient Perls will be friendly to people just trying to get
started.

(From `perldoc -f use`:

Specifying VERSION as a literal of the form v5.6.1 should generally be
avoided, because it leads to misleading error messages under earlier
versions of Perl (that is, prior to 5.6.0) that do not support this
syntax.  The equivalent numeric version should be used instead.

use v5.6.1; # compile time version check
use 5.6.1;  # ditto
use 5.006_001;  # ditto; preferred for backwards compatibility
)


-'f




Re: Revisiting lexicals, part 1

2008-09-25 Thread Geoffrey Broadwell
Tom Christiansen:
  Don't we have to solve all this to get the Perl 6 debugger
  working anyway?  
 
 Although I'm unsure why that might be, I also recognize the possibility
 that there may well exist hypothetical documents, unread by me, which
 mandate some scenario or behavior wherein the answer to your question 
 can only be yes.

My original thinking behind that question came from a few different
vague questions in my mind about what kind of scope manipulation was
allowed in Perl 6 (and therefore would be expected from the debugger).
Admittedly, there was a certain amount of laziness on my part in going
with my hunch rather than rereading the relevant spec docs ...

For example, it is not clear to me how the Perl 6 debugger can
temporarily unwarp the grammar (to evaluate input using the core
language, instead of whatever abomination is in scope) while still
*otherwise* respecting the local scopes, without having most of the
scope introspection/modification tools fully working.

 [... much detail on why the design of the Perl 5 debugger does not
  allow new persistent lexical variables to be created ...]

Yes, I knew all that, but it's nice to see it all spelled out again in
one place.

 Perhaps you're saying that you would *like* to see the perl6 debugger provide
 a facility under which scoped constructs like these could seem to outlive
 their scope, probably by making an allowance so that some sort of variant
 C<eval STRING> construct be made available that's not a disguised
 C<eval {STRING}>, as currently occurs.

Yes.  I've wanted that from Perl 5 for ages.  I *thought* this was
something I could expect from the improved scope introspection
facilities in Perl 6, but in retrospect I may have filled in gaps in the
spec with my wishful thinking.

  (Aside from which, it would be useful to have this capability properly
  exposed, for writing shell-style UIs that can escape to raw Perl.)
 
 Can't see what you feel to be stopping you from doing that now, considering that
 many existence-proofs show they already do this.  You must be talking about
 some sneaky way to violate the inviolable boundary of scope.

Why yes, I am.  :-)

 [... mild prejudice towards a unix view of parents and children ...]
 That is, they wish for changes in their own process to somehow propagate
 *upwards* in a most unnatural fashion to affect those who created them,
 rather than downwards following the natural order of things to affect their
 unborn children.

This model is weak.  I understand the reasons for it in the unix world
view, but having worked in programming environments that allowed more
powerful modes of interaction, I don't buy it as the natural order.
(And in fact, the concept of one thing being able to unilaterally affect
another is decidedly non-physical, so calling it natural is at best a
misnomer.)

Circling back to Perl -- for me, Perl 6 is all about continuing the
long-standing Perl practice of merging in the best ideas of everyone
else.  It's about time we merged the several-decades-old concept of true
first-class support for interactive evaluation.  And that means
providing some way to strip the implicit new scope off of eval.

To me, the move to scopeless eval is a shift on the order of the
introduction of closures -- you really don't realize how bloody useful
they are until you have them.
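For readers who haven't worked in such an environment, the following
Python analogy (Python only because it makes the sketch runnable; this
is not a proposal for Perl 6 syntax) shows what a scopeless eval buys:
every string is evaluated against one persistent namespace, so
declarations stick across evaluations:

```python
# Python analogy (not Perl): an interactive loop where definitions
# persist, because every string is exec'd into one shared namespace
# rather than into a fresh scope wrapped around each eval.
session = {}   # one persistent "pad" for the whole session

def interactive_eval(src):
    exec(src, session)   # no new scope introduced around src

interactive_eval("x = 40")
interactive_eval("def bump(n): return n + 2")
interactive_eval("result = bump(x)")
print(session["result"])   # -> 42
```

Contrast Perl 5's debugger, where each eval'd string gets its own scope
and a `my` variable evaporates as soon as the string finishes.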

Patrick Michaud:
 Taking a pure Parrot perspective, Parrot doesn't and probably
 shouldn't impose a particular view of debugging on the languages it
 supports.  Clearly we can support the style of debugging and
 interactive
 execution that you've described happens with perl 5, but Parrot might 
 also come across a dynamic language where eval'd code is in scope 
 and can modify the current lexical environment.  So, while Parrot
 probably won't impose this view on Perl (5 or 6), it may still
 need to evolve to support it at some point.

 From a Perl 6 perspective, given that Pugs provides an interactive
 mode where one can do my $variable and have it stick, it
 may be that this becomes a standard feature in Perl 6 in
 general.  Fortunately that's not my call, but I can see why people
 may want something like it for Rakudo as well, and people running
 Python on Parrot will certainly expect interactively entered
 lexical variable declarations to work.

Certainly users of other languages also expect to have real
interactive modes, wherein declarations persist, so I expect Parrot/PCT
most likely *will* want to support this -- and when the capability is
provided by both Rakudo and Pugs, I'd hope it is not too hard to convince
$Larry to make it official.


-'f




Re: Revisiting lexicals, part 1

2008-09-24 Thread Geoffrey Broadwell
On Wed, 2008-09-24 at 18:09 -0500, Patrick R. Michaud wrote:
 On Thu, Sep 25, 2008 at 12:10:35AM +0200, Reini Urban wrote:
  2008/9/24 Patrick R. Michaud [EMAIL PROTECTED]:
   So, in order to get the behavior you're describing from the interactive
   prompt, we'll probably need more than just Perl 6's 'eval'.  In
   particular, the interactive prompt mode will need to be able to
   maintain its own dynamic lexical pad (i.e., a DynLexPad) and have
   some way of extracting any lexical changes from whatever code string
   it evaluates.
  
  I wouldn't call them DynLexPad or lexicals at all, I would call them
  just globals.  lexvars could shadow them though, but this is a user 
  problem then.
 
 This approach might expose some rough edges, though -- things like
 MY::, OUTER::, *::, etc. might not work as expected, or those 
 constructs would have to know when they're dealing with interactive 
 mode pseudo-lexical-globals instead of what the rest of the
 system is using.
 
 Still, we might consider something along these lines -- perhaps
 as a stopgap approach if nothing else.

Don't we have to solve all this to get the Perl 6 debugger working
anyway?  (Aside from which, it would be useful to have this capability
properly exposed, for writing shell-style UIs that can escape to raw
Perl.)


-'f




Re: [svn:parrot] r31049 - in trunk: include/parrot languages/perl6/src/builtins languages/perl6/src/parser languages/perl6/t

2008-09-18 Thread Geoffrey Broadwell
On Thu, 2008-09-18 at 07:34 -0500, Patrick R. Michaud wrote:
  Aggregating coroutine and aggregating yield aren't nearly as zippy  
  as 'gather' and 'take', but they're more meaningful to a broader  
  audience, which may help the feature spread.

I don't buy this.  The Perl 6 terms are well chosen, and as soon as you
know what they mean in the context of programming, you won't forget.
The other versions ... well, let's leave it at "easy to forget".  (OK,
one more thing -- the word "coroutine" scares people.  "Gather" does
not.)

 I'm rather hoping and expecting that gather and take become 
 the meaningful names for this feature, much like grep started 
 out as a Unix shell command but is now the language-agnostic term for
 "extract things from a list matching a pattern".

Now *this* I agree with.  The first system to make a feature standard
gets first try at standardizing the name.  If they've chosen the name
well, there's a decent chance it will stick.
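As a concrete illustration of the concept being named -- hedged as a
Python analogy, since generator functions are the closest widely-known
cousin of gather/take:

```python
# Python analogy: Perl 6's gather/take maps closely onto generator
# functions, where 'yield' plays the role of 'take' and calling the
# generator plays the role of 'gather' -- values are produced lazily.
def evens_up_to(limit):
    n = 0
    while n <= limit:
        yield n        # ~ take n
        n += 2

lazy = evens_up_to(10)     # ~ gather { ... }: nothing runs yet
print(list(lazy))          # -> [0, 2, 4, 6, 8, 10]
```

The body only executes as values are demanded, which is what makes the
feature a lazy-list generator rather than mere list building.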


-'f




Re: [svn:parrot] r31049 - in trunk: include/parrot languages/perl6/src/builtins languages/perl6/src/parser languages/perl6/t

2008-09-18 Thread Geoffrey Broadwell
On Thu, 2008-09-18 at 10:28 -0700, jerry gay wrote:
 On Thu, Sep 18, 2008 at 10:21 AM, Patrick R. Michaud [EMAIL PROTECTED] 
 wrote:
  On Thu, Sep 18, 2008 at 09:06:44AM -0700, jerry gay wrote:
  what some refer to as traits, perl 6 calls roles.

The Perl 6 name is a better, more natural and self-describing name.
Larry has a gift for naming, and he puts a lot of effort into making
names obvious.  I think it's a mistake to ignore that, just because
Larry didn't design all of the major dynamic languages himself.

  Other languages have adopted the Perl shortname of hash as well,
  including Ruby and this odd little creature known as Parrot.  Perhaps
  we should rename Parrot's Hash class to AssociativePMCArray?  1/2 ;-)

I personally agree that 'hash' is by far the better name.  But to be
fair, it helps that 'hash' is one of the basic data structures taught to
every CS student freshman year ...

  we should call gather and take by their proper names where they're
  defined. aggregating coroutine is more precise and descriptive than
  is gather,

If you had no idea what an 'aggregating coroutine' was, would your first
guess be something that acts as a generator for a lazy list?  Really?
And you'd get that faster than guessing what 'gather' might mean?  Do
you think the same is true of someone without a CS degree and/or a
rather advanced background?

  however gather is much easier to say in polite company,
  and is therefore a better name to use at the language level.

We should not have the implementation and the HLLs use utterly different
terminology for the same concept (unless every HLL uses different
terminology and they all suck) -- that will just confuse contributors
who don't do full time core work.  It is certainly proper for the core
and the HLLs to use different terminology for things that are similar
but different, but in this case we're talking about the implementation
of the HLL concept -- it should use the same terminology.

Of course, I'm fine with using slightly more verbosity in the core,
because it will be more rarely looked at and therefore needs to optimize
more for clarity than stroke reduction.

  By this reasoning, we should also change the other exceptions:
 
 .CONTROL_RETURN   =   .CONTROL_SUB_RETURN   (or .CONTROL_SUB_EXIT)
 .CONTROL_BREAK=   .CONTROL_LOOP_EXIT
 .CONTROL_CONTINUE =   .CONTROL_LOOP_NEXT
 
  and perhaps add .CONTROL_LOOP_REPEAT there as well.  Note that I'm not at
  all opposed to this -- if we're going to do it for one, we really
  ought to do it for all.
 
 agreed. precision is of little benefit unless it's consistent across
 related functionality.

Along the same lines, how about one of the following pairs?

  * .CONTROL_GENERATOR_GATHER and .CONTROL_GENERATOR_TAKE
  * .CONTROL_GENERATOR_SINK   and .CONTROL_GENERATOR_SOURCE
  * .CONTROL_GATHER_SINK  and .CONTROL_GATHER_SOURCE
  * .CONTROL_GATHER_LIST  and .CONTROL_YIELD_LIST_ELEMENT(S)
  * .CONTROL_LIST_GATHER  and .CONTROL_LIST_YIELD
  * .CONTROL_LAZY_GATHER  and .CONTROL_LAZY_YIELD
  * .CONTROL_LAZY_LIST_GATHER and .CONTROL_LAZY_LIST_YIELD

(Or something similar; my naming fu is off today.)


-'f




Re: Speccing Test.pm?

2008-09-03 Thread Geoffrey Broadwell
On Tue, 2008-09-02 at 12:32 -0700, Darren Duncan wrote:
 Now a common factor to both of my proposals is that this Test.pm is 
 intentionally kept as simple as possible and contains just the 
 functionality needed to bootstrap the official Perl 6 test suite; if the 
 official test suite doesn't use certain features, then they don't exist in 
 this Test.pm, in general.

This doesn't quite address one assumed detail -- should the official
test suite be modified to use as few (and as simple) Test.pm features as
possible, so that Test.pm can then be made even simpler?  This would
likely make the test suite slightly clumsier in places, while making it
easier for a new implementation to get enough functionality in place so
that Test.pm is fully supported.

 There would still be room for third party Test modules, as those would be 
 richer and provide functionality that would be useful for testing language 
 extensions / CPAN modules but that aren't needed by the tests for Perl 6 
 itself.

If the test suite is modified as above, then there pretty much HAVE to
be additional Test modules -- people programming third-party code would
go insane using only the anemic Test.pm that would be sufficient for a
simplified test suite.

Of course, that doesn't mean that a more extensive Test module can't be
standardized, or even an official version written that all perl6's can
ship with.  It doesn't have to be all-encompassing, but a core set of
best practices test tools, perhaps just taken from Perl 5 experience
of the TDD folks and modified for Perl 6 differences, would be nice to
rely on everywhere.


-'f




Re: [perl #58410] [TODO] Deprecate n_* variants of the math opcodes

2008-08-28 Thread Geoffrey Broadwell
On Thu, 2008-08-28 at 00:03 -0700, Allison Randal wrote:
 Briefly discussed on the phone with Patrick, Jerry, and chromatic: The 
 versions of the math opcodes that modify an existing destination PMC 
 instead of creating a new destination PMC are not useful to HLLs, 
 because they make assumptions about assignment semantics that don't hold 
 true for all (or possibly even any) HLLs. Code generated from PCT takes 
 the result of the math op as a temporary result value, and then performs 
 a separate assignment operation to the HLL result variable, following 
 the HLLs semantics for assignment.
 
 The plan is to make the regular variants (like 'add') create a new 
 destination PMC, and then deprecate the old n_* variants (like 'n_add').

What is the replacement for the old regular variants that use a
pre-existing destination?

A few years ago when I was doing copious Perl 5 PDL work, I found that
in certain loops I would get bottlenecked entirely by creation and
destruction of temporaries.  I might be doing several dozen math
operations per iteration, but I as the programmer knew that I only
needed a handful of temporaries, and these could be created outside the
loop.  The vast majority of the object cycling was quite obviously
wasted.  In some cases, I could work around this by considerably
uglifying the code and/or reaching through several layers of
abstraction, but sometimes there was no recourse except to drop to
PDL::PP (specially preprocessed C) or even C itself.

I'd like to be able to write decently-performing math libraries in PIR,
instead of having to drop down all the way to C.  Being forced to create
and destroy loads of temporaries I don't need will make this nigh
impossible, let alone putting a major strain on the GC subsystem.

Will there be some way to stay in PIR and still optimize away temporary
cycling?
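The allocation pattern at issue can be sketched in plain Python (lists
standing in for PMCs; the function names here are made up for
illustration, not actual Parrot opcodes):

```python
# Illustrative only -- invented names, not Parrot opcodes.
def add_new(a, b):
    # "create a new destination" style: fresh allocation every call
    return [x + y for x, y in zip(a, b)]

def add_into(dest, a, b):
    # destructive style: reuse a caller-provided destination
    for i in range(len(dest)):
        dest[i] = a[i] + b[i]
    return dest

a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]

# Inside a hot loop, the destructive form creates its temporary once:
tmp = [0.0] * len(a)
for _ in range(1000):
    add_into(tmp, a, b)        # zero allocations per iteration

fresh = add_new(a, b)          # one allocation per call
assert fresh == tmp == [11.0, 22.0, 33.0]
```

Dropping the second form entirely means every iteration of the loop
pays the first form's allocation cost, which is precisely the PDL
bottleneck described above.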


-'f




Re: NCI and Calling Conventions (esp. on Windows)

2008-08-20 Thread Geoffrey Broadwell
On Wed, 2008-08-20 at 22:20 +0200, Ron Blaschke wrote:
 I think we need a way to select the calling convention for a function,
 similar to, or maybe even part of, the signature.  Also, it would be
 good to have a way to select a calling convention when loading a
 library, as a calling convention is usually used consistently, and
 providing defaults for well known libraries.

tewk's C99 parser / NCI JIT code should handle part of this (and could
be expanded to do more).  We may want to just settle on extending that
work as needed, rather than trying to shoehorn it into old-style NCI.


-'f




Re: Inter-HLL Mapping Notes

2008-08-18 Thread Geoffrey Broadwell
On Mon, 2008-08-18 at 17:44 -0400, Michael Peters wrote:
 Allison Randal wrote:
 
  It's true that you can't get a Python array and expect it to respond to 
  all the same method calls as a Tcl array. But that Python array is just 
  another variable type, that accepts keyed access and method calls. You 
  treat it as a user-defined data type, and read the documentation to find 
  out what method calls it accepts.
 
 This is similar to how Inline::Java works. If you call a Java method you 
 get back a Java object. It's up to you to translate that into a Perl 
 object and you do that by reading the docs for that Java object and 
 calling it's methods.
 
 There is some magic to make Strings in Java be strings in Perl (same for 
 numbers, etc). But it's not too deep. I would expect that this kind of 
 shallow magic would be implemented in a Perl library. One each for the 
 most popular languages. And it would probably just consist of some 
 methods that do casts.
 
  While I don't expect every foreign data type to be immediately 
  translated to a native type as soon as they touch the native code, I do 
  expect it will be best practice to have any library in any language only 
  return types native to its language. 
 
 Absolutely. Some of the talk of automagic object translation was getting 
 me worried. If I'm calling a Java library it should return exactly what 
 it's documentation says (a Java object). Then I as the application 
 author should take the responsibility to make sure that code that uses 
 my library should get exactly what it's expecting.

Speaking just for myself, I had no expectation that we would be able to
create a be-all end-all inter-HLL type mapper.  Instead, I want to avoid
forcing every HLL wrapper author to write a lot of very similar mapping
scaffolding for common types, for every library and/or foreign HLL they
wrap.

I want to make the common cases easy -- and I'm happy to just keep the
hard mappings possible.  Common stuff should Just Work, without having
to do a lot of pointless busywork.

If I have to hand-write all the PIR to wrap something like Perl5's
Term::ReadLine::GNU in another language, I'll go insane.  Sure, I might
reasonably choose to write a custom automated PIR generator to do all
that, but I certainly don't want to write a new generator for each
library I wrap.  So I might try to generalize it to simplify writing the
type mappings for the most common basic types.  Pretty soon I begin to
wonder how much of the generator is specific to the HLL I'm working on,
and how much could be shared with other HLLs ...

At which point the whole thing starts to look like something Parrot
should provide, just like it provides a grammar engine, compiler
framework, etc., even though there's no reason every HLL couldn't write
their own tools from scratch (and some do).  PCT just makes things SO
much easier for common cases that it's like coming out of the dark ages.

I think it would be mighty fine if HLL developers could get that feeling
for inter-HLL call wrappers, too.
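The kind of shared scaffolding I have in mind might look, very roughly,
like this hypothetical Python sketch -- a registry of converters for the
common types, applied recursively, with everything else passed through
as an opaque foreign object (all names invented for illustration):

```python
# Hypothetical shared type-mapping scaffolding: converters for the
# common cases are registered once and applied recursively; anything
# unregistered passes through as an opaque foreign value.
converters = {}

def register(foreign_type, fn):
    converters[foreign_type] = fn

def to_native(value):
    fn = converters.get(type(value))
    return fn(value) if fn else value   # opaque pass-through

# Register the easy, common cases once -- not per wrapped library:
register(bytes, lambda b: b.decode("utf-8"))
register(tuple, lambda t: [to_native(v) for v in t])
register(dict,  lambda d: {to_native(k): to_native(v)
                           for k, v in d.items()})

foreign = {b"name": (b"parrot", 1)}
print(to_native(foreign))   # -> {'name': ['parrot', 1]}
```

Each HLL wrapper would then only supply the converters for its handful
of genuinely exotic types, instead of rebuilding the whole table.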


-'f




Re: [perl #57942] [BUG] Smolder failure [linelength, compilers/pirc]

2008-08-15 Thread Geoffrey Broadwell
On Fri, 2008-08-15 at 07:00 -0700, Will Coleda wrote:
 #not ok 1 - Line length ok
 #   Failed test 'Line length ok'
 #   at t/codingstd/linelength.t line 80.
 # Lines longer than coding standard limit (100 columns) in 1 files:
 # /home/smoke/parrot/compilers/pirc/new/pirsymbol.c:256: 104 cols
 # Looks like you failed 1 test of 1.
 
 This causes -all- smolder reports to be marked as failures.

Perhaps 'make codetest' or 'make codingstd_tests' should be an automated
commit hurdle?  Meaning, SVN won't allow the commit if those don't pass.

(Before anyone asks, I do not know how to write SVN commit hurdles.  I
just seem to recall they are possible.)


-'f




Re: [perl #57942] [BUG] Smolder failure [linelength, compilers/pirc]

2008-08-15 Thread Geoffrey Broadwell
On Fri, 2008-08-15 at 11:57 -0400, Will Coleda wrote:
  This causes -all- smolder reports to be marked as failures.
 
  Perhaps 'make codetest' or 'make codingstd_tests' should be an automated
  commit hurdle?  Meaning, SVN won't allow the commit if those don't pass.
 
 Assuming we actually want to be running these tests all the time, and
 having codingstd violations 'break the build', this is not an
 unreasonable approach.[1]
 
 ISTR our current hosting providers would like to avoid such things,
 but that's an understanding from many years ago.
 
 [1] I don't think that's the way to go.


It seems then that we have two remaining options:

  1. Don't run codingstd as part of smolder.

  2. Differentiate a codingstd failure and a real failure in smolder.

Which one of these are you proposing?


-'f




Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 13:20 -0400, Will Coleda wrote:
 On Tue, Aug 5, 2008 at 1:10 PM, chromatic [EMAIL PROTECTED] wrote:
  Gah, no maintenance releases please!  See "Mommy, why did it take over five
  years to release a new stable version of Perl 5 with a bugfix I made in
  2002?"

 Perhaps I used an official term when I didn't mean to here.
 
 Let's simplify: I can easily see us needing at least dev and
 production branches (one of which can be trunk), which is one more
 than we have now.

We will definitely need multiple long-lived branches.  Just to make
explicit the reasoning: data loss, security, or otherwise critical
bugfixes that should be backported to one or more already released
versions and re-released immediately.  That's a lot harder if you don't
have release branches.  Of course, you can branch lazily, since releases
are tagged.  But we have to assume that there *will* be multiple
long-lived branches that won't merge and go away.

However, I'm against the practice of branching before release to
stabilize an assumed-crazy trunk.  I prefer the (general) way we do
things now: releases are made from the trunk code, which is kept as
high-quality as possible; small changes are made directly to trunk;
large changes are made in a branch and merged to trunk when ready.

The details may be ripe for improvement, however.  There seems to be an
implicit assumption from at least some of us that a merge back to
trunk should be (or even 'needs' to be) an all or nothing affair.
Several SCMs make it easier to cherry-pick changes from the branch,
merge them back to trunk, and keep the diff in even a long-lived feature
development branch as small as possible.  git for example (combined with
Stacked GIT or a similar tool) has decent support for altering existing
commits in a branch to make them easier to merge piecemeal.  I don't
have enough SVK fu to know how well this development model is supported
there.


-'f




Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 11:19 -0400, Jesse Vincent wrote:
 [branch feature]

This sounds very useful.  Is the SVK paradigm changing so that online
use is assumed, and offline is a mode to switch to temporarily?  I'm
used to thinking of SVK in one of two ways:

1. As a better SVN client for normal always-online use
2. As a full-time disconnected client, with rare online use
   to merge back to the SVN master

Is this new branch mode intended to generalize and replace the above
two?  Or is it a third use case entirely?

 If this seems appealing, I'm sure I could get some clkao cycles if  
 there's more you folks need.

My biggest request (which you may or may not have any influence over) is
better distro packaging.  Both Debian and rpmforge have gone through
periods where SVK was completely fubar.  In fact, earlier this year
Debian screwed up their SVK package to the point of helpfully
uninstalling it and making it impossible to reinstall.  And when the
replacement finally came, after a very long wait, it crashed all over
the place.  I avoided data loss by the skin of my teeth.  That whole
situation is what made me try git-svn -- I didn't have another decent
choice for disconnected Parrot work.

Anyway, applying some resources here and there to help the distro
packagers may have a big positive effect on the SVK user base.


-'f




Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 12:54 -0700, chromatic wrote:
 On Tuesday 05 August 2008 12:35:50 Geoffrey Broadwell wrote:
  bugfixes that should be backported to one or more already released
  versions and re-released immediately.

 I can see patching the previous release in case of a critical bugfix, but if 
 we get in the habit of encouraging users to expect updates of anything older 
 than the previous stable release for free, we've doomed the project.

That's why I was careful to say 'one or more'.  As in greater than zero,
but other than that it's a separate policy decision that I was not
trying to address in my previous message.

 Point releases every month.  Major releases every three months.

Agree, except I'd like to hear more about how you define a 'major
release'.

 Complete and 
 utter refusal to support users who expect that they can install Parrot 1.0 
 and get free support from the mailing list or IRC for the next eight to ten 
 years.

Half agree.  I agree that we should only *directly* support a release
for a limited time, though I think the minimum sane time would be major
release before current one -- 3-6 months at any given moment, given
your above schedule.  In other words, just because we do a new 3 month
release, doesn't mean we immediately de-support the one we did just 3
months ago.

Now, I might argue for a longer direct support schedule than just 'most
recent + 1', but I think any less than that can't work in real life.

Beyond that, I think we need to explicitly acknowledge that distro
packagers have a longer schedule to care about.  While we may not
support them directly, we still need to have a process in place to make
sure they are notified about critical problems that may apply to
previous releases, so that they can go back and check/patch their
versions.  We should also facilitate any process that will help
different distros to help each other to backport our trunk fixes in a
timely fashion.

In short, we don't have to do the hard work for the distros ourselves,
but we can't leave them out in the cold, either.


-'f




Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 16:19 -0400, Michael Peters wrote:
 We also need to think about deprecation cycles. If you deprecate a 
 feature in 1 version and then it disappears in the next then the time 
 between when my code works and when it doesn't is only 6 months. Some 
 distros provide support for several years.

Which reminds me: chromatic, what was your reasoning for major releases
being every three months, instead of four or six?

I agree we don't want to go much beyond six months for our major
releases, but with at least two major distros that aim for decent
freshness (Ubuntu and Fedora) using six month release cycles, I'm
curious what we gain with a shorter cycle than that.

A six month release cycle makes deprecation-and-removal a one year
affair, which isn't too bad.  And we can fairly tell users who want more
stability than that to use the slow distro that matches each fast
distro we aim for -- Debian instead of Ubuntu, RHEL/CentOS instead of
Fedora, for example.

(Separately, I agree that one month point releases seem to work well for
us.  I don't see any reason to change that.)


-'f




Re: [perl #57344] [TODO] Change runtime/parrot/* to runtime/*

2008-07-28 Thread Geoffrey Broadwell
I'll reply to the rest of this (if someone doesn't beat me to it)
tomorrow, but just wanted to comment on your closing comment:

On Sun, 2008-07-27 at 22:25 -0700, jerry gay wrote:
 that's an install tree
 policy, and as far as i'm concerned, it hasn't been addressed yet
 (along with many other install-related policies.)

It seems to be time to make these policy decisions, at least on a draft
basis -- because we have considerable volunteer effort being applied to
getting installable packages working.  We should be answering the
necessary policy questions so that said volunteer effort can
simultaneously -Ofun and help Parrot.  We should *not* put this off for
an indefinite future and lose the contribution.


-'f




Re: [perl #56996] [TODO] remove non FHS-compliant searchpaths

2008-07-27 Thread Geoffrey Broadwell
On Sun, 2008-07-27 at 13:13 +0200, Reini Urban wrote:
 +stat $I0, conf_file, 0
 +if $I0 goto conf
 +
 +# If installed into /usr/lib/parrot, not /usr/runtime/parrot
 +# This logic has to be reversed when installed versions should
 run faster
 +# than source builds.

Reverse it now; we'll never remember to get back to this in the future.

 +conf_file = interpinfo .INTERPINFO_RUNTIME_PREFIX
 +conf_file .= "/lib/parrot/include/config.fpmc"
 +conf:

 +name = interpinfo .INTERPINFO_RUNTIME_PREFIX
 +concat name, "lib/parrot/dynext/"
  concat name, request

Since we're using PIR in both places, we should probably use the .=
sugar in both places.  Yes, I know the second file has some 'concat's in
it already.  Here's an opportunity to fix that.  :-)


-'f




Re: [perl #57344] [TODO] Change runtime/parrot/* to runtime/*

2008-07-27 Thread Geoffrey Broadwell
On Sun, 2008-07-27 at 12:10 -0700, Will Coleda via RT wrote:
 On Sun, Jul 27, 2008 at 1:08 PM, via RT Geoffrey Broadwell
 [EMAIL PROTECTED] wrote:
  # New Ticket Created by  Geoffrey Broadwell
  # Please include the string:  [perl #57344]
  # in the subject line of all future correspondence about this issue.
  # URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57344 
 
 
  In the source repository, the 'parrot' in runtime/parrot/foo is
  pointless.  It's a singleton directory, and it's redundant.
 
  It also means that when we install, we either end up with silly
  directories like /usr/lib/parrot/runtime/parrot/foo/, or we have to
  clutter the .include and load_bytecode code with checks in both
  prefix/runtime/parrot/foo/ *AND* prefix/runtime/foo/.
 
  The solution to this is to fix the source tree -- move the children of
  runtime/parrot/ up one level, and get rid of the 'parrot' directory.
 
  In summary, there are three things that can budge, depending on how you
  view this problem: cleanliness, performance, and the source tree.  I
  vote for fixing the source tree.
 
 What about the runtime for things that are not core parrot, for
 example, the runtimes of the various languages? Tcl, for example, has
 a runtime PBC, op library, and some dynpmcs.[1]
 
 (Which don't go into a separate place under runtime at the moment, but
 arguably could.)

I thought of that.  rurban (I think) said on IRC that the language
dynpmcs go in .../dynext already, and other libraries either do, or
could, go under .../library.

Now, if we decide to standardize on moving all language runtimes to
siblings of runtime/parrot, I'd be fine with leaving it.  But my
understanding is that over time, the languages will migrate out of the
repo altogether.  Of course, this hasn't happened yet, because every
time we change fundamentals in Parrot, we need to fix all the languages.
Having Kea-CL out of tree has already shown the problems there.  There's
also nothing that says that even when moved out of tree eventually, we
can't make them all use the same directory structure under runtime/ 

So: either we decide to actually move things from the languages to
parallel runtime/parrot/, or we should remove the singleton /parrot/
directory from the middle of the path.


-'f




Re: [perl #57260] [BUG] Segfaults in sprintf opcode

2008-07-25 Thread Geoffrey Broadwell
On Fri, 2008-07-25 at 13:40 +0200, Peter Gibbs wrote:
 +HUGEINTVAL num;

Does this really need to be a HUGEINTVAL?  Why is INTVAL not sufficient?


-'f




Re: [perl #57260] [BUG] Segfaults in sprintf opcode

2008-07-25 Thread Geoffrey Broadwell
On Fri, 2008-07-25 at 22:18 +0200, Peter Gibbs wrote:
 typedef HUGEINTVAL(*sprintf_getint_t) (PARROT_INTERP,INTVAL, 
 SPRINTF_OBJ *);
 
 So, since obj->getint returns a HUGEINTVAL, I gave it one to store the 
 result in.

Fair enough, that's good enough for me.

 As to why sprintf_obj is defined that way, I have no clue.

A question for another day ... when we take a pass through our typedefs
and check ourselves for sanity.


-'f




Re: [perl #57190] HLL Interoperation

2008-07-23 Thread Geoffrey Broadwell
On Wed, 2008-07-23 at 10:11 -0400, Bob Rogers wrote:
 True.  But passing a Complex to any language that does not have a
 concept of Complex is going to cause problems if the language tries to
 treat it as anything but a black box.  And a black box doesn't require a
 special representation.

But we *do* need to have a defined way to pass black boxes back and
forth, as when registering (and later calling) a cross-HLL callback with
associated data.

 But if you can't represent X in a language *at all*, what does it
 mean to map such a thing?

That's a good question.  Does it throw an exception?  Does it
automatically become a black box?  Does it convert to an opaque object
with methods?  Does it become a frozen data structure that the
destination language just doesn't know how to thaw?

Defining behavior for the exceptional cases must be part of our type
mapping system.
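One way to pin those answers down is to make the outcome of a mapping an explicit value rather than an implicit behavior. A minimal C sketch of the idea (all names here are hypothetical; none of this is actual Parrot API):

```c
#include <assert.h>

/* Possible outcomes when a source-HLL type crosses into a destination HLL. */
typedef enum {
    MAP_DIRECT,     /* destination has a native equivalent            */
    MAP_BLACK_BOX,  /* passed through opaquely; round-trips unchanged */
    MAP_EXCEPTION   /* no sane mapping; raise in the destination      */
} map_kind;

/* Decide how to map, given two capability flags:
 * dest_understands: the destination HLL has a native equivalent type.
 * used_opaquely:    the destination only stores/returns the value. */
static map_kind choose_mapping(int dest_understands, int used_opaquely)
{
    if (dest_understands)
        return MAP_DIRECT;
    if (used_opaquely)
        return MAP_BLACK_BOX;
    return MAP_EXCEPTION;
}
```

The point is only that the exceptional cases become enumerable and testable, instead of being whatever each language pair happens to do.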

  2. We may simply decide that when you pass containers between HLLs,
   you explicitly give up some or all of the source language's
   guarantees, and the source language compiler is required to treat
   the data structure after that point as if it had been exposed to
   hard radiation.
 What guarantees?  When you pass a hash to function that expects that arg
 to be an array, strange things are bound to happen.  And they are bound
 to be equally strange regardless of whether the call to this function
 was foreign or native.  It seems strange to expect inter-language
 calling to be somehow safer than intra-language calling.

An inter-language call could be less safe or more safe, depending on
implementation.  If Parrot guarantees that a deep copy will happen, or
that the structure passed is somehow guaranteed to be read-only, there
may be less danger than even a native call.  If Parrot *might* pass a
writeable reference to the original structure, then the source language
needs to treat an inter-HLL call as Tainting not just the structure
contents, but the structure itself.


-'f




Re: HLLs needing OpenGL support

2008-07-23 Thread Geoffrey Broadwell
On Wed, 2008-07-23 at 17:09 +0200, François Perrad wrote:
  From a couple comments you make later, it sounds like you're aiming to
  be perfectly API compatible with the original library implementations
  for Lua and PHP, so that moving to Parrot is a drop-in replacement as
  far as the user's source code is concerned; am I correct in this
  assumption?

 Yes, fully compatible. For me, a Lua-like or PHP-like is just a toy.

Gotcha.  Makes sense -- making the implementation switch as smooth and
easy as possible will greatly assist bringing them over to the light
side.

  I'm not sure what you mean by only for bootstrap here.  Do you mean
  because of potential performance problems implementing heavy math code
  in PIR as opposed to C?

 Currently, the whole parrot/perl6 project is in bootstrap stage, ie. 
 Perl6 is not available.
 So, the choice is :
  - waiting for Perl6 or other magic tools
  - writing PIR today, and doing the job

Ah, I think I understand you now.

  Why do you silently ignore extra arguments?  Is that part of the
  standard Lua design?

 It's the standard Lua behavior. So, it's a requirement.

Got it.  And this is true even for C-implemented libraries?  Lua's NCI
automatically does extra argument stripping?

  Why were they written in C instead of PHP in the existing PHP
  implementation?  Is there any reason we wouldn't be providing a nice
  service to the PHP community by doing the rewrite, totally aside from
  our needs for the Parrot project?

 I think they use C for performance reason.

Do you think they would be fast enough if written in PIR?  Or is Pipp so
much faster than the original PHP implementation, that a rewrite in pure
PHP now makes sense?

  Finally, improvement in runtime/parrot/library/OpenGL.pir :
  at this time, just split the function '_export_all_functions' into 
  '_export_GL_functions' & '_export_GLUT_functions'.
 
  Should be easy.  However, the OpenGL header parser will try to detect
  and wrap any GL-related API it can find.  So we can do this two ways:
 
 1. Have a single function that takes one or more API names
(GL, GLU, GLE, AGL, GLX ...) and exports them.  This gets
closer to acting like I believe the Namespace API intended.
 
 2. Have an export function per API, as you suggested; we'll need
on the order of a dozen of these to handle the GL APIs seen on
each of our platforms.  It's a fair amount of copied boilerplate,
but may be mildly more efficient at run time (though I certainly
hope that symbol export isn't a bottleneck for anyone ...).
 
  In either case, we need to decide what happens when the user tries to
  export an API that hasn't been wrapped.  I'm thinking exception 

Would any other Parrot Porters like to weigh in on this?


-'f




Re: HLLs needing OpenGL support

2008-07-22 Thread Geoffrey Broadwell
On Tue, 2008-07-22 at 09:03 +0200, François Perrad wrote:
 Ok, talking about libraries :
 Lua compiler & Lua Standard Libraries are complete (as far as the 
 current Parrot supports it).
 So, since April 2008, I wrote some extension libraries for Lua
 Since mid-June 2008, I tried to write extension libraries for Pipp
 (PHP supplies more than 2500 functions !!!, huge work)

From a couple comments you make later, it sounds like you're aiming to
be perfectly API compatible with the original library implementations
for Lua and PHP, so that moving to Parrot is a drop-in replacement as
far as the user's source code is concerned; am I correct in this
assumption?

 I believe that PHP could be the killer application for Parrot

Why PHP in particular?  Is it because of what Parrot can do for PHP
(performance, stability (we hope), interop, ...), or because of what PHP
can do for Parrot (bring lots of eyeballs to Parrot, as Tim Bunce
mentioned for embedding Parrot in Java)?

  - lua/src/lib/md5.pir & sha1.pir : wrapper over PMC MD5 & SHA1 (I wrote 
 them on the outside of Lua)
 - lua/src/lib/random.pir : wrapper over 
 library/Math/Random/mt19937ar.pir (I wrote it on the outside of Lua)
  - lua/src/lib/uuid.pir : wrapper over library/uuid.pir (I wrote it on 
 the outside of Lua)

Meaning, you wrote the PMCs / Parrot library PIR before working on Lua,
without any Lua dependencies?

  - lua/src/lib/lfs.pir : Lua File System library, over the PMC OS
 (still incomplete)
  - pipp/src/common/php_base64.pir : wrapper over library/MIME/Base64.pir 
 (but incomplete because PHP API has more options than MIME/Base64 Perl API)
  - pipp/src/common/php_pcre.pir : wrapper over library/pcre.pir 
 (incomplete because no PhpArray support, and the NCI PCRE is incomplete 
 and seems very old)

Can you please create tickets for the stuff that's missing that you
need?  Maybe we can get someone to take on improvements of these, so you
don't have to 

 As I need to write a NCI wrapper over gmp, I begin my study of NCI by 
 using the OpenGL one (with Lua).

Makes sense.

 If I try to summarize my experiment with libraries :
 Parrot will supply 3 kinds of common libraries for HLL
  - written in Perl6, but currently not available (I don't know if NQP is 
 suitable for writing library)

In theory perhaps, but NQP has a lot of limitations (intentionally).

  - written in PIR, but only for bootstrap, because as Bernhard 
 Schmalhofer tell PIR is not a decent language
 + good for libraries not common (in fact equivalent to builtins)
 - lua/src/lib/bitlib.pir
 - lua/src/lib/lfs.pir
 - pipp/src/common/php_ctype.pir
 - pipp/src/common/php_math.pir
 - pipp/src/common/php_type.pir
 + good when no native library available (but a full test suite is 
 needed)
 - library/Math/Random/mt19937ar.pir (Mersenne Twister)

I'm not sure what you mean by only for bootstrap here.  Do you mean
because of potential performance problems implementing heavy math code
in PIR as opposed to C?

  - binding over native (C/C++) shared libraries
 + with native PMC (C compile/link)
- sometime, for security reason, a static linkage is mandatory 
 (libssl is shared lib, but its subset libcrypto is static lib)
- other advantage, PMC allows direct OO interface

And for security libraries, being able to precisely control buffer
copies is important 

 + with NCI : the best way (no C compile/link)
- but only procedural interface (no direct OO)

How many system libraries provide a true OO interface only, with no
procedural or procedural-pretending-to-be-OO interface (like GNOME, for
instance)?  What functionality do we actually *need* to support those
stragglers?

 2 designs choices :
 - For long term maintenance, I write PIR close to original C. For 
 example, I start Lua on Parrot  aligned with version 5.0.2  and now it's 
 5.1.3. And in most of case, the original C is the only valid (updated) 
 user  requirements documentation.

Like the old "Perl 5 (the language) *is* what perl5 (the VM) *does*"
problem 

 - I try to emit the same (as possible) error or warning messages than 
 the original implementation and to have the same interface. Rule of 
 Least Surprise for the end user. And I could run the test suite against 
 the original implementation.

Makes sense.

 So, in the init function of a library, I wrote (or generate) some boring 
 code like :
 .const .Sub _mod_func = 'func'
 _mod_func.'setfenv'(_lua__GLOBAL)
 set $P1, 'func'
 _mod[$P1] = _mod_func
 I wait for a IMCC improvement (hi kjs) in order to support :
  .macro register(tname, fname)
  .const .Sub $fname = .fname
  $fname.'setfenv'(_lua__GLOBAL)
  set $P1, .fname
  .tname[$P1] = $fname
  .endm

Again, what's missing here?  Can you write up an RT ticket for the
missing functionality?  Or is there one already?

 - Currently, the LuaTable 

Re: Inter-HLL Mapping Notes

2008-07-22 Thread Geoffrey Broadwell
On Tue, 2008-07-22 at 15:37 -0700, chromatic wrote:
 The wiki page at:
 
   http://www.perlfoundation.org/parrot/index.cgi?inter_hll_mapping_notes
 
 seems to be missing the rationale for *why* it's necessary to map types 
 between languages?  (Also see If Perl 6 has to care about the internal 
 storage format of an Integer PMC, it's doing something very wrong.)

What about HLL-specific container types, above the level of the basic
Parrot-provided types?  What does Lisp do with a Perl 5 Scalar?  What
does Forth do with a LuaTable?  How do you work with a Perl 6 Capture or
Junction in LOLCODE?  What about Haskell user-defined types in any
language that doesn't understand records or junctive types?

If your answer is "Use only Parrot-standard opcodes/vtables on them", I
argue that you leave a lot of the source container's functionality on
the table, and you may in fact be constrained to do things that are
meaningless or absurd.

If your answer is "Treat them as opaque objects with methods" then:

   A) you have just defined a type mapping

   B) this may be much lower performance than another mapping could be

   C) what about destination languages without native standardized OO?

   D) every source language is burdened with defining a complete set
  of dumbed-down methods to access every type, which it may not
  need or use internally

   E) you've lost the syntactic convenience that comes with being
  able to work with advanced types using native syntax in the
  destination language, if the source type could map directly

If you want to be smarter than either of these, then you need to figure
out a better way to do HLL-to-HLL typemapping.  Hence our discussion.


-'f




Re: [perl #57190] HLL Interoperation

2008-07-22 Thread Geoffrey Broadwell
On Tue, 2008-07-22 at 22:58 -0400, Bob Rogers wrote:
 So I would argue that (1) what seem like differences in numbers in
 the various languages are really differences in the way those languages
 define their numeric operators, not in the numbers themselves;

I disagree.  How do you represent Complex in a language that doesn't
have a way to represent a number with more than one dimension?  This is
a fundamentally different kind of thing than any simpler numeric type.

  and (2)
 standardizing on common numeric data type will avoid the impossible job
 of making the Parrot built-in arithmetic be all things to all languages
 (and all combinations thereof).

That's certainly a possible choice, but it's still a mapping, and I
argue not the only sane one.  More than one language has a Complex type,
but not all of them do.  If we make Complex the base Parrot type that
everything gets converted to, then some languages will be ... unhappy.
If we standardize on some other numeric type, then we fail the round
trip test spectacularly.
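To make the round trip test concrete: whenever the common type is narrower than the source type, the trip back loses information. A toy C sketch of that failure mode (hypothetical names, not Parrot code):

```c
#include <assert.h>

/* A complex number, and a destination language whose only numeric
 * type is a double.  The mapping down is necessarily lossy. */
typedef struct { double re, im; } complex_t;

static double to_double(complex_t c)
{
    return c.re;                     /* imaginary part is discarded */
}

static complex_t from_double(double d)
{
    complex_t c = { d, 0.0 };        /* can only reconstruct the real axis */
    return c;
}
```

Mapping 1+2i down and back yields 1+0i, so the round trip fails exactly as described above.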

I would further argue that *any* sort of type mapping is problematic
 when calling across language boundaries.  If I pass an array of arrays
 of ... of numbers from one language to another, then mapping would seem
 to require a deep copy.  This changes the API:  The native call into
 this routine can see side effects to the passed data structure, whereas
 the foreign call would not.  (Or am I misunderstanding what you mean by
 mapping here?)

I have two answers to that:

  1. It may not be a deep copy; it may be an autobox of some sort.  But
 that just begs the question of how to explain the source language's
 semantics to all destination languages.  Something we need to
 discuss, clearly.  :-)

  2. We may simply decide that when you pass containers between HLLs,
 you explicitly give up some or all of the source language's
 guarantees, and the source language compiler is required to treat
 the data structure after that point as if it had been exposed to
 hard radiation.


-'f




Re: HLLs needing OpenGL support

2008-07-21 Thread Geoffrey Broadwell
On Mon, 2008-07-21 at 09:34 +0200, François Perrad wrote:
 Geoffrey Broadwell a écrit :
  fperrad: How do these bindings actually work?
 They'll work with runtime/parrot/library/OpenGL.pir.

OK ... so what could be improved about runtime/parrot/library/OpenGL.pir
so that you didn't have to write any bindings at all, or so that your
bindings could be greatly simplified?  So far, I'm seeing the following:

1. HLL access to the GL constants.  At the very least, you should
   already be able to define your constant table using the generated
   constants in runtime/parrot/include/opengl_defines.pasm, rather than
   having to hardcode them all.  Even better, this should all be wrapped
   up in an HLL-friendly way for you, but I've been looking for
   suggestions on how best to do that in a cross-HLL manner.

2. Namespace unflattening (glFoo -> gl.Foo, glutBar -> glut.Bar).  That
   should be easy for runtime/parrot/library/OpenGL.pir to do, but may
   not be valuable to you if you already have to do everything else
   below.

3. All subs are marked :anon, and then manually added into a global
   LuaTable which appears to be reimplementing a namespace.  Why?  And
   if all Lua namespaces are created this way, does LuaTable implement
   enough of the Parrot namespace API that other HLLs will be able to
   work with Lua-implemented modules?

4. Ignore extra args to each function (which I'm just guessing is the
   purpose of the '.param pmc extra :slurpy' on every sub).  Why do
   you want to do this?

5. All params are marked :optional (but don't have matching :opt_flag
   params) and seem required by the code.  Again, why do this?

6. Argument type checking and conversion.  This appears to be the real
   problem, though this seems like the exact kind of problem that Parrot
   was supposed to make easier for us.  If you have to manually wrap
   every function in a cross-language library in every HLL because
   Parrot won't Do The Right Thing, that seems like a design flaw.

7. Simplified wrappers around some common functions.  I've been thinking
   about creating some of these (most OpenGL wrappers for scripting
   languages seem to do this, to a greater or lesser degree).  Whether
   it is worth it to try to do this in runtime/parrot/library/OpenGL.pir
   depends on how many HLL implementors are trying to get exact ports
   of existing bindings in the original (non-parrot) implementation of
   their language, and how many would be willing to share a common
   simplified binding.

Anything else I'm missing?
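For what it's worth, the unflattening in point 2 is mechanical. A C sketch of the lowercase-prefix split (assuming gl/glu/glut/glx-style naming; this is an illustration, not the actual OpenGL.pir code):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Split a flattened C-style name like "glutSolidTeapot" into its API
 * prefix ("glut") and member name ("SolidTeapot").  Assumes the prefix
 * is the leading run of lowercase letters, which holds for the GL APIs.
 * Caller supplies buffers large enough for the pieces. */
static void unflatten(const char *flat, char *ns, char *member)
{
    size_t i = 0;
    while (flat[i] && islower((unsigned char)flat[i]))
        i++;
    memcpy(ns, flat, i);
    ns[i] = '\0';
    strcpy(member, flat + i);
}
```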


-'f




HLLs needing OpenGL support

2008-07-20 Thread Geoffrey Broadwell
I noticed a couple commits overnight for Lua to support OpenGL.  I'm a
bit confused by them, since they don't seem to actually *do* anything,
just lots of (hopefully automatically generated!) scaffolding.

fperrad: How do these bindings actually work?

Everyone:

We're getting to the stage that HLLs are starting to want common library
access.  Some of these already exist as more or less decent PIR bindings
in /runtime/parrot/library/ ; others are either very incomplete or
non-existant.  Still, the time has come to be working on making it
easier for HLL authors to get a fully colloquial binding using the
common libraries, rather than having to roll their own from scratch.

pmichaud, jonathan, and I discussed a basic design a few weeks ago in
#parrot for PCT languages to use cross-language modules; the basic
protocol we sketched out should be workable for non-PCT languages as
well.  That protocol needs to be fleshed out and implemented.

Also, I would love to know what I can do for the OpenGL binding to make
it more amenable to use by the HLLs.  I notice that the Lua bindings
want to unflatten the namespace ('gl.Foo' rather than 'glFoo').  What
other changes do various HLLs typically make to C library bindings?


-'f




Loading libs under different names

2008-07-19 Thread Geoffrey Broadwell
I've noticed several patches from you today in which you're adding code
to try to load an existing library under additional library names for
cygwin support.  It's beginning to look like this is a common operation.

I needed this for the OpenGL bindings, so I wrote a utility routine in
runtime/parrot/library/OpenGL.pir called _load_lib_with_fallbacks() that
encapsulates this sort of fallback behavior.
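The core of that routine is just a first-success loop over candidate names. A C sketch of the shape (the loader callback and the names below are hypothetical stand-ins for Parrot's loadlib, not the actual PIR):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the platform loader (dlopen, LoadLibrary, loadlib op). */
typedef void *(*loader_fn)(const char *name);

/* Try each candidate name in order; the first that loads wins. */
static void *load_with_fallbacks(loader_fn load_one,
                                 const char *const names[], size_t count)
{
    for (size_t i = 0; i < count; i++) {
        void *handle = load_one(names[i]);
        if (handle)
            return handle;
    }
    return NULL;  /* none of the fallback names loaded */
}

/* Hypothetical stub loader for demonstration: pretends that only a
 * library named "glut32" exists on this system. */
static void *demo_loader(const char *name)
{
    return strcmp(name, "glut32") == 0 ? (void *)1 : NULL;
}
```

Concern #2 below is exactly what this simple shape can't express: the list assumes every name refers to the same library.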

It would be easy to split that routine out into a utility library, and
use it everywhere, but I've got some concerns for which I'd like input
from the Parrot Porters:

  1. If this is indeed a really common operation, it might be worth
 moving it down the stack.  Instead of using a PIR library,
 perhaps we should allow the loadlib opcode to take arguments
 other than a single simple string, and use the additional info
 to define fallbacks that are automatically tried.

  2. It's not clear to me that a simple list of names is appropriate
 everywhere.  It works when all the variations are unique (and
 fully compatible), but it doesn't work so well when the name of
 library A on one platform is the same as library B on another
 platform.  But perhaps this is a problem that doesn't exist in
 the wild, or is so rare it's simpler just to special case it.

  3. If we try to do something smarter for #2, I fear being sucked
 into a vortex of complexity, and I really don't want to go there.

Thoughts?


-'f




Re: [perl #57006] [PATCH] add cygwin opengl config quirks

2008-07-17 Thread Geoffrey Broadwell
On Thu, 2008-07-17 at 22:50 +0200, Reini Urban wrote:
 The problem I had with the w32api libs was -lglut32. with linking 
 directly to the dll /usr/bin/glut32.dll everything works fine, and I'll 
 get rid of freeglut as default.

I'm not sure I understand what you meant here.

 Now I only have to find out what's wrong with that importlib, then I 
 send the revised patch. for now it is:

I'm fine with the patch conceptually, but some details:

 --- origsrc/parrot-0.6.4/config/auto/opengl.pm2008-06-02 
 
 -
   =head3 MSVC
 
 -
   =head3 MinGW

I find it easier to read raw POD with two lines of blank above headers;
please don't remove these.  :-)

 +=head3 Cygwin/w32api
 
 -=head3 cygwin
 +The Cygwin/w32api for native opengl support
 
 - : No details yet
 +F<-lglut32 -lglu32 -lopengl32>

These should be replaced with the actual package names you need to
install (w32api, opengl, ...?)

 +Requires a X server.

In this case, use 'an' instead of 'a'.

Also note that Coke had discussed moving this kind of optional library
requirements documentation to a separate file in docs/ -- if that is
done, then most of the POD from this file can be moved there, leaving
just a stub with a link to make the file in docs/ easy to find.

 +# Prefer Cygwin/w32api over Cygwin/X, but use X when DISPLAY is set

How about:

# Prefer Cygwin/w32api over Cygwin/X unless DISPLAY is set

 + cygwin  => '-lglut -L/usr/X11R6/lib -lGLU -lGL'

Why not just use 'win32_gcc' here?  Otherwise, it's not clear below how
this relates to win32_gcc in the non-X case.

 +} ) } else {

Please uncuddle that else.  :-)

Thanks for your work on this, rurban!


-'f




Re: [perl #56996] [TODO] remove non FHS-compliant searchpaths

2008-07-16 Thread Geoffrey Broadwell
On Wed, 2008-07-16 at 10:04 -0700, Reini Urban wrote:
 Remove
/usr/runtime/parrot/include
/usr/runtime/parrot
/usr
 paths from the .include searchpath.

+1 for not adding these to the searchpath by default.

(We shouldn't do something messed up like adding them in one place, and
then removing them in another.)


-'f




Re: [perl #56628] [BUG][PATCH] cygwin opengl libs

2008-07-07 Thread Geoffrey Broadwell
On Mon, 2008-07-07 at 17:21 +0200, Reini Urban wrote:
 Donald Hunter via RT schrieb:
  I think you must be linking against the X11 libGLU and libGL, where I am
  linking against the w32 native libraries.
 
 So we have to use some detection heuristic to separate the X11 case 
 (such as $ENV{DISPLAY} set, and libGLU available),
 from your win32 native case.

Certainly checking for $ENV{DISPLAY} is trivial; that leaves the GLU
check.  Is there a reliable place to find libGLU?  Is it guaranteed to
be in /usr/lib ?  What is the correct capitalization under cygwin?  For
that matter, is the cygwin libGLU a .so or a .dll?

In other words, is the following a sufficient check?

$using_x11_mesa =  defined $ENV{DISPLAY}
                && -e '/usr/lib/libGLU.so';

Hmmm.  Will the cygwin libGLU work without using X11?  (Do we have to
check for $ENV{DISPLAY} there?)

What do we do if the user has both libGLU and glu32 installed?


-'f




Re: [perl #56636] [BUG] segfault from sort if comparison is always 1

2008-07-06 Thread Geoffrey Broadwell
On Sat, 2008-07-05 at 20:11 -0700, Andrew Johnson wrote:
 Parrot_quicksort() is in src/utils.c; the first do-while loop has nothing to
 stop it when j reaches 0, so it keeps going outside of the data array. I
 guess that the while condition needs j > 0 adding to it to prevent that from
 happening.

Better yet, we should replace the inherently insecure quicksort
algorithm (insecure in the "vulnerable to algorithmic attack" sense)
with a more secure mergesort like perl5 uses.  IIRC, perl5's mergesort
is also carefully crafted to be as sensible as possible in the face of
insane compare functions 
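For illustration, the safe pattern is simply to check the index bound before calling the comparator, so even a pathological compare function can't drive the scan off the array. A toy C sketch (insertion sort as a stand-in; this is not the Parrot or perl5 code):

```c
#include <assert.h>

/* Hypothetical comparator type, mirroring the shape of the bug report. */
typedef int (*cmp_fn)(int a, int b);

/* A broken comparator that claims every element is greater. */
static int always_greater(int a, int b) { (void)a; (void)b; return 1; }

/* A sane ascending comparator for comparison. */
static int ascending(int a, int b) { return (a > b) - (a < b); }

/* The inner scan checks the bound *before* calling the comparator, so
 * an inconsistent comparator cannot push the index below zero. */
static void safe_sort(int *data, int n, cmp_fn cmp)
{
    for (int i = 1; i < n; i++) {
        int key = data[i];
        int j   = i - 1;
        while (j >= 0 && cmp(data[j], key) > 0) {  /* bound check first */
            data[j + 1] = data[j];
            j--;
        }
        data[j + 1] = key;
    }
}
```

With `always_greater` the result is garbage (as it must be), but the sort terminates and never reads outside the array.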


-'f




Re: [ITP] parrot-0.6.3 with parrot-perl6

2008-07-03 Thread Geoffrey Broadwell
First, thank you for working on this!

Now some comments ...

On Thu, 2008-07-03 at 19:07 +0200, Reini Urban wrote:
 parrot-languages is my compressed version of the fedora split,
 they have for every single language a seperate package.

Thank you for choosing the single-package route here.

 I just left the docs/examples, the others stripped it.
 pdb is called parrot_pdb, disassemble is called pbc_disassemble.

Both of these are good changes (and I don't think the other packagers
should have stripped the examples; they are arguably necessary to
understand certain constructions, since our official docs are not
complete and are difficult to navigate in places).

 .include searchpath:
   /usr/runtime/parrot/include
   /usr/runtime/parrot
   /usr
   /usr/lib/parrot/include
   /usr/lib/parrot/
   .

/usr/runtime doesn't seem right to me (and it's not FHS-compliant,
IIRC).  I notice that none of your packages install to /usr/runtime, and
I'd prefer to just drop it.  Why is raw '/usr' included?

Side note to the Parrot Porters: the redundant 'parrot' directory under
'runtime' in the source tree makes no sense to me.  I'm in favor of
moving its three subdirectories up a level, and dropping the extra
cruft.

   /usr/share/doc/parrot/LICENSE
   /usr/share/doc/parrot/RESPONSIBLE_PARTIES
   /usr/share/doc/parrot/TODO

Should probably include NEWS here in the main parrot package; for the
last year and a half, all of the key updates have been listed there
(ChangeLog is only useful for historical spelunking, and can be placed
in -devel).  Also, CREDITS fills in a lot of details that
RESPONSIBLE_PARTIES leaves out, and it arguably belongs in the main
package right next to the LICENSE.  Plus it maximizes karma exposure for
our valued contributor base; for the same reason, I'd vote for
DONORS.pod here as well.  A binary package version of the official
README that drops the PREREQUISITES, INSTRUCTIONS, and NOTES blocks
seems like a good idea too.

DEPRECATED.pod and PBC_COMPAT need to be in the -devel package.

Speaking of all these docs, do the various distro ports/packages include
manpages (at least a minimal 'parrot' manpage, as I believe Debian and
OpenBSD require to comply with their respective policies)?


-'f




Commit message summary on first line

2008-07-02 Thread Geoffrey Broadwell
Please consider putting the change summary in the very first line of the
commit message, rather than just the subsystem ID.  For example, prefer
this:

[foobar] Fix compile under VoodooCC
* Frobnicated the whosit
* Defenestrated the whatsit
* Sacrificed a chicken under a full blue moon

Rather than this:

[foobar]
Fix compile under VoodooCC:
* Frobnicated the whosit
* Defenestrated the whatsit
* Sacrificed a chicken under a full blue moon

A number of tools display only the first line of the commit message when
displaying lists of changes.  With only the subsystem IDs in the first
line, such lists look like this:

[foobar]
[baz]
[quux]
[quux]
[xyzzy]

That doesn't help much, aside from seeing what subsystems are getting
attention lately.

See the attached screenshot for an example seen in gitk.


-'f

attachment: gitk-parrot.png

Re: [svn:parrot] r28910 - branches/pdd25cx/src/ops

2008-07-01 Thread Geoffrey Broadwell
On Tue, 2008-07-01 at 11:46 -0700, chromatic wrote: 
  if (next < interp->code->base.data
  ||  next >= (interp->code->base.data + interp->code->base.size))

Oh, that's just pretty.  I've long been in the habit of laying out
whitespace for multiple if tests like this:

if (   next <  interp->code->base.data
    || next >= (interp->code->base.data + interp->code->base.size))

But I like your version better -- mine always seemed a little scattered
on the left.

Thank you!


-'f




Re: [perl #55978] [PATCH] [OpenGL] cygwin fixes from donaldh++

2008-06-18 Thread Geoffrey Broadwell
On Wed, 2008-06-18 at 09:06 -0700, chromatic wrote:
 On Tuesday 17 June 2008 21:06:58 Geoffrey Broadwell wrote:
 
Index: src/dynext.c
  ===
  --- src/dynext.c(revision 28459)
  +++ src/dynext.c(working copy)
  @@ -276,12 +276,10 @@
    /* And on cygwin replace a leading "lib" by "cyg". */
   #ifdef __CYGWIN__
    if (!STRING_IS_EMPTY(lib) && memcmp(lib->strstart, "lib", 3) == 0) {
   -strcpy(path->strstart, lib->strstart);
   +path = string_append(interp,
   +string_from_cstring(interp, "cyg", 3),
   +string_substr(interp, lib, 3, lib->strlen - 3, NULL, 0));
 
 That string_from_cstring could almost probably be CONST_STRING.  If we can 
 swing that, it's usually much better.

OK, regenerated with this change.  Tested to not break on Linux, but
that's not saying much, given the #ifdef __CYGWIN__ 


-'f

Index: src/dynext.c
===
--- src/dynext.c	(revision 28514)
+++ src/dynext.c	(working copy)
@@ -276,12 +276,9 @@
 /* And on cygwin replace a leading "lib" by "cyg". */
 #ifdef __CYGWIN__
 if (!STRING_IS_EMPTY(lib)  memcmp(lib-strstart, lib, 3) == 0) {
-strcpy(path-strstart, lib-strstart);
+path = string_append(interp, CONST_STRING(interp, cyg),
+string_substr(interp, lib, 3, lib-strlen - 3, NULL, 0));
 
-path-strstart[0] = 'c';
-path-strstart[1] = 'y';
-path-strstart[2] = 'g';
-
 *handle   = Parrot_dlopen(path-strstart);
 
 if (*handle)
Index: lib/Parrot/Configure/Step/Methods.pm
===
--- lib/Parrot/Configure/Step/Methods.pm	(revision 28514)
+++ lib/Parrot/Configure/Step/Methods.pm	(working copy)
@@ -198,7 +198,8 @@
 my $args = shift;
 croak "_add_to_libs() takes hashref: $!" unless ref($args) eq 'HASH';
 my $platform =
-  ($args->{osname} =~ /mswin32/i &&
+  (($args->{osname} =~ /mswin32/i ||
+	$args->{osname} =~ /cygwin/i) &&
    $args->{cc} =~ /^gcc/i)  ? 'win32_gcc'
 :  $args->{osname} =~ /mswin32/i ? 'win32_nongcc'
 :  $args->{osname} =~ /darwin/i  ? 'darwin'
Index: config/gen/opengl.pm
===
--- config/gen/opengl.pm	(revision 28514)
+++ config/gen/opengl.pm	(working copy)
@@ -410,6 +410,9 @@
 '/System/Library/Frameworks/OpenGL.framework/Headers/*.h',
 '/System/Library/Frameworks/GLUT.framework/Headers/*.h',
 
+# Cygwin
+'/usr/include/w32api/GL/*.h',
+
 # Windows/MSVC
 (map "$_/gl/*.h" => @include_paths_win32),
 
@@ -444,11 +447,19 @@
 # $ENV{HOME}/src/osx-insane/usr/X11R6 1/include/GL/*.h,
 );
 
+print "\nChecking for OpenGL headers using the following globs:\n\t",
+join("\n\t", @header_globs), "\n"
+if $verbose;
+
 my @header_files = sort map {File::Glob::bsd_glob($_)} @header_globs;
 
 my %skip = map {($_ => 1)} @SKIP;
 @header_files = grep {my ($file) = m{([^/]+)$}; !$skip{$file}} @header_files;
 
+print "\nFound the following OpenGL headers:\n\t",
+join("\n\t", @header_files), "\n"
+if $verbose;
+
 die "OpenGL enabled and detected, but no OpenGL headers found!"
 unless @header_files;
 


Re: [perl #55910] - running Configure.pl on PPC OS X

2008-06-17 Thread Geoffrey Broadwell
On Mon, 2008-06-16 at 23:17 -0500, Packy Anderson wrote:
 Of course, I'm still doing perl Configure.pl --without-opengl, but  
 I don't know if that's a problem with hints file or with the GLUT  
 implementation...

That would be for me to look at ...  Join #parrot (irc.perl.org) and
ping japhb, or file a separate RT with details, and I'll happily debug
your OpenGL troubles with you.


-'f




Re: [perl #40204] [BUG] line numbers of *some* runtime errors are one too low

2008-06-17 Thread Geoffrey Broadwell
On Mon, 2008-06-16 at 20:01 -0700, Will Coleda via RT wrote:
 On Sat Aug 19 14:30:53 2006, chip wrote:
  Runtime errors seem to be off by one these days.  Anybody play with line
  numbering recently?

Note that this bug (or a similar one) affects line numbers of
disassembly (I assume because the line number info in the PBC is wrong).
You can see this effect with tools/util/dump_pbc.pl; certain PIR
constructs seem to be reliably misnumbered, so that the disassembly for
a line will appear just before the matching source code rather than just
after.


-'f




Re: [perl #55950] problem compiling OpenGL/GLUT on PPC OS X

2008-06-17 Thread Geoffrey Broadwell
On Tue, 2008-06-17 at 11:27 -0700, Packy Anderson wrote:
 Here's the command I'm using to configure and make
 $ make realclean; CC=gcc-4.0 CX=g++-4.0 perl Configure.pl \
 --cc=$CC --cxx=$CX --link=$CX --ld=$CX --optimize; make -j 2

That's a pretty advanced build method.  OK, let's sanity check:

* Does it still fail if you don't override the compiler and linker?
* Does it still fail if you don't run a parallel make?
* Does it still fail if you don't configure with --optimize?
* Does it still fail for just 'make realclean; perl Configure.pl; make'?

(These aren't just silly questions.  Parrot is not tested as often or as
well with non-default build configurations, so a bug could easily have
crept back in.)

 c++ -o digest_group.bundle lib-digest_group.o md2.o md4.o md5.o
 ripemd160.o sha.o sha1.o sha256.o sha512.o -lm -framework OpenGL
 -framework GLUT -lcrypto -L/usr/local/lib
 -L/usr/local/source/parrot/blib/lib -L/opt/local/lib
 -L/usr/local/source/parrot/blib/lib -bundle -undefined dynamic_lookup
 -L/usr/local/source/parrot/blib/lib -lparrot
 /usr/bin/ld: warning can't open dynamic library:
 /opt/local/lib/libz.1.dylib (checking for undefined symbols may be
 affected) (No such file or directory, errno = 2)

This seems very odd to me.  I'm not sure what is needing libz, or why it
is pulling it from /opt/local/ ...  I've asked another Mac OS X person
to take a look at this, since it makes no sense to me.


-'f




Re: Release warm-up! Call for NEWS, CREDITS and PLATFORMS updates.

2008-06-16 Thread Geoffrey Broadwell
On Fri, 2008-06-13 at 18:35 +0100, Nuno 'smash' Carvalho wrote:
 Parrot next release is on schedule for next Tuesday, June 17th. Unless
 any showstopping bugs are reported in the next few days. In
 preparation, please update NEWS with the latest hackings, also report
 any PLATFORMS updates.

As noted in IRC ... the NEWS file has only been partially updated, and
the release is tomorrow.  More updates welcome (appropriate to previous
detail levels, of course)!


-'f




Re: [perl #55530] OpenGL configure step emits a large number of warnings

2008-06-09 Thread Geoffrey Broadwell
 This is on a Gentoo linux amd64 machine, with Parrot trunk r28204, and an
 unstable (git) version of mesa built from the x11 overlay.
 
 Is this normal?

Nope, not normal.

Try the attached patch.  It's an update of the patch in #55228; I'll
update that RT in a moment.


-'f

diff --git a/config/gen/opengl.pm b/config/gen/opengl.pm
index 64a368f..f20c833 100644
--- a/config/gen/opengl.pm
+++ b/config/gen/opengl.pm
@@ -108,12 +108,19 @@ my %C_TYPE = (
 SphereMap   => 'void',
 Display => 'void',
 XVisualInfo => 'void',
+GLEWContext => 'void',
+GLXEWContext=> 'void',
+WGLEWContext=> 'void',
 _CGLContextObject   => 'void',
+CGDirectDisplayID   => 'void',
 GLXHyperpipeConfigSGIX  => 'void',
 GLXHyperpipeNetworkSGIX => 'void',
+PIXELFORMATDESCRIPTOR   => 'void',
+COLORREF=> 'void',
 
 wchar_t => 'void',
 
+GLMfunctions=> 'void*',
 GLXContext  => 'void*',
 GLXFBConfig => 'void*',
 GLXFBConfigSGIX => 'void*',
@@ -121,6 +128,21 @@ my %C_TYPE = (
 CGLPixelFormatObj   => 'void*',
 CGLRendererInfoObj  => 'void*',
 CGLPBufferObj   => 'void*',
+AGLContext  => 'void*',
+AGLDevice   => 'void*',
+AGLDrawable => 'void*',
+AGLPixelFormat  => 'void*',
+AGLRendererInfo => 'void*',
+AGLPbuffer  => 'void*',
+GDHandle=> 'void*',
+WindowRef   => 'void*',
+HIViewRef   => 'void*',
+Style   => 'void*',
+HDC => 'void*',
+HGLRC   => 'void*',
+LPGLYPHMETRICSFLOAT => 'void*',
+LPLAYERPLANEDESCRIPTOR  => 'void*',
+LPPIXELFORMATDESCRIPTOR => 'void*',
 
 GLchar  => 'char',
 GLcharARB   => 'char',
@@ -137,6 +159,8 @@ my %C_TYPE = (
 Status  => 'int',
 GLint   => 'int',
 GLsizei => 'int',
+GLfixed => 'int',
+GLclampx=> 'int',
 int32_t => 'int',
 
 GLenum  => 'unsigned int',
@@ -234,6 +258,7 @@ my @IGNORE = (
 'glutGetProcAddress',
 'glXGetProcAddress',
 'glXGetProcAddressARB',
+'wglGetProcAddress',
 
 # Don't handle this odd create/callback register function yet
 'glutCreateMenu',
@@ -260,6 +285,14 @@ my @IGNORE = (
 'uview_direction',
 'uviewpoint',
 
+# Some versions of GLUT declare these both with and without prefixes;
+# ignore the non-prefixed versions
+'SwapBuffers',
+'ChoosePixelFormat',
+'DescribePixelFormat',
+'GetPixelFormat',
+'SetPixelFormat',
+
 # Can't handle longlong until RT 53406 is done
 'glPresentFrameKeyedNV',
 'glPresentFrameDualFillNV',
@@ -276,11 +309,37 @@ my @IGNORE = (
 );
 
 my @SKIP = (
+# Can't properly support these yet; some (such as the internal headers)
+# may never be supported.
+
+# Mesa non-standard driver headers
+'amesa.h',
+'dmesa.h',
+'foomesa.h',
+'fxmesa.h',
+'ggimesa.h',
+'mesa_wgl.h',
+'mglmesa.h',
+'osmesa.h',
+'svgamesa.h',
+'uglmesa.h',
+'wmesa.h',
+'xmesa.h',
+'xmesa_xf86.h',
+'xmesa_x.h',
+
 # Mesa API-mangling headers (to load vendor GL and Mesa simultaneously)
 'gl_mangle.h',
 'glu_mangle.h',
 'glx_mangle.h',
 
+# OpenVMS API-mangling header
+'vms_x_fix.h',
+
+# Internal headers for DRI
+'dri_interface.h',
+'glcore.h',
+
 # Apple CGL OpenGL API conversion macros
 'CGLMacro.h',
 
@@ -299,6 +358,12 @@ my @SKIP = (
 'gizmo.h',
 'hslider.h',
 'vslider.h',
+
+# SGI GLw Drawing Area headers
+'GLwDrawA.h',
+'GLwDrawAP.h',
+'GLwMDrawA.h',
+'GLwMDrawAP.h',
 );
 
 my $MACRO_FILE = 'runtime/parrot/include/opengl_defines.pasm';
@@ -331,8 +396,9 @@ sub runstep {
 s{\\}{/}g foreach @include_paths_win32;
 
 my @header_globs = (
-# Default location for most UNIX-like platforms
+# Default locations for most UNIX-like platforms
 '/usr/include/GL/*.h',
+'/usr/local/include/GL/*.h',
 
 # Mac OS X
 '/System/Library/Frameworks/OpenGL.framework/Headers/*.h',
@@ -341,6 +407,7 @@ sub runstep {
 # Windows/MSVC
 (map "$_/gl/*.h" => @include_paths_win32),
 
+# # Portability testing headers
 # $ENV{HOME}/src/osx/headers/GLUT/*.h,
 # $ENV{HOME}/src/osx/headers/OpenGL/*.h,
 # $ENV{HOME}/src/osx-10.4/GLUT/*.h,
@@ -350,6 +417,25 @@ sub runstep {
 # $ENV{HOME}/src/cygwin/opengl-1.1.0/glut-3.7.3/include/mui/*.h,
 # $ENV{HOME}/src/glut-3.7.6/include/GL/*.h,
 # $ENV{HOME}/src/glut-3.7.6/include/mui/*.h,
+# $ENV{HOME}/src/freebsd-gl/usr/local/include/GL/*.h,
+
+# 

Re: [perl #55228] [BUG] Configuration problem with GLUT on macintel leopard

2008-06-05 Thread Geoffrey Broadwell
On Thu, 2008-06-05 at 02:54 -0700, Stephane Payrard via RT wrote:
 On Wed Jun 04 21:40:56 2008, japhb wrote:
  cognominal apparently has an absolutely insane collection of GL headers
  on his system.  I've made a number of portability fixes in response;
  hopefully the attached patch should fix his OpenGL issues.
 What are you doing? I hope you just select the most appropriate GL library. 

As I mentioned in IRC: I need to make sure my code works with every GL
lib/header we can find, because your troubles are just a proxy for the
silent majority that don't file bug reports (they just get discouraged
instead).

 Anyway the fix is not yet perfect. The Configure.PL still dies :
 
 Use of uninitialized value in hash element at config/gen/opengl.pm line 490.
 
 step gen::opengl died during execution: 'GLEW_FUN_EXPORT' is defined as 
 'GLEWAPI', but no 
 'GLEWAPI' has been defined at config/gen/opengl.pm line 494.
 
  at Configure.pl line 66

Hmmm, it looks like I got so caught up in fixing all the other weirdness
in your headers, I neglected to notice that we seem to have missed the
GLEW header that was causing your initial problem.  The good thing is
that you would have hit all the other errors I fixed once this one went
away, so it's not lost effort.  ;-)

I think I need the contents of:

/usr/include/GL/
/System/Library/Frameworks/OpenGL.framework/Headers/
/System/Library/Frameworks/GLUT.framework/Headers/

Believe it or not, none of those were in the tarball you sent!

/me boggles


-'f




Re: [perl #55290] [BUG] get_iter() not implemented in class 'ResizableStringArray'

2008-06-05 Thread Geoffrey Broadwell
On Thu, 2008-06-05 at 12:16 +0200, Jonathan Worthington wrote:
 chromatic wrote:
  On Wednesday 04 June 2008 11:28:58 Geoffrey Broadwell wrote:

  The op '$P0 = iter $P1' doesn't work if $P1 is a ResizableStringArray.
  I haven't tested, but I suspect the same may be true of the some other
  *Array PMCs as well.
 
  This should be fixed up, so we can move the 'iter' op from experimental
  to standard status.
  
 
  This should do it.  If you want to work up some tests, we can get this 
  applied.

 Thanks, applied this along with a (passing) test.

In my original ticket, I mentioned three problems:

1. ResizableStringArray didn't have get_iter.
2. Other *Array PMCs may be missing that as well.
3. The 'iter' op is still listed as experimental.

Clearly #1 was the focus of chromatic's patch.  Jonathan, did you
address #2 and #3?


-'f




Re: [perl #52988] [PATCH] OpenGL binding, part 1

2008-06-05 Thread Geoffrey Broadwell
On Thu, 2008-06-05 at 17:36 -0700, Ivan B. Serezhkin via RT wrote:
 Hello.

Hi there!

 FreeBSD users are humans too, with two arms and two legs =)

Of course!  I just haven't been able to find a Parrot/FreeBSD/OpenGL
person previously.  Standard request:  please send me a tarball of all
of your OpenGL/GLU/GLUT/etc. headers, so I can test them and incorporate
fixes.

 This is the fix for the configure to configure OpenGL under FreeBSD 
 correctly.

Comments below.

 Another workarround in FreeBSD ports commited to repository.

What SVN revision were those changes made in, so I can review?

 +HGLRC  => 'void*',
 +HDC=> 'void*',
 +
 +PIXELFORMATDESCRIPTOR   => 'void',
 +LPPIXELFORMATDESCRIPTOR => 'void*',
 +LPLAYERPLANEDESCRIPTOR  => 'void*',
 +COLORREF   => 'void',
 +LPGLYPHMETRICSFLOAT=> 'void*',

This is a subset of the ones that appear in my most recent patch to
#55228.  Can you try that patch, and see if it includes everything you
need?

 +PROC   => 'void*', # But this is function pointer
 - what to do ?

I'm ignoring all of the functions that return PROCs (this list was also
updated by my #55228 patch), so this line isn't needed.

 +# windows functions
 +'SwapBuffers',
 +'ChoosePixelFormat',
 +'DescribePixelFormat',
 +'GetPixelFormat',
 +'SetPixelFormat',

I'm surprised you're seeing these at all, since you're on FreeBSD.
Another reason I'd like to see your header collection.  :-)

  my @header_globs = (
  # Default location for most UNIX-like platforms
  '/usr/include/GL/*.h',
 -
 +'/usr/local/include/GL/*.h',

That's certainly reasonable; I'll include that in my patches.

  # We only care about regular function prototypes
 -next unless /API/ or /\bextern\b/ or /\bmui[A-Z]/;
 +next unless /API\s/ or /\bextern\b/ or /\bmui[A-Z]/;

What problem is this change trying to fix?

Thank you for your help!


-'f




Re: [perl #55228] [BUG] Configuration problem with GLUT on macintel leopard

2008-06-05 Thread Geoffrey Broadwell
On Thu, 2008-06-05 at 08:57 -0700, Geoffrey Broadwell wrote:
 On Thu, 2008-06-05 at 02:54 -0700, Stephane Payrard via RT wrote:
  Anyway the fix is not yet perfect. The Configure.PL still dies :
  
  Use of uninitialized value in hash element at config/gen/opengl.pm line 490.
  
  step gen::opengl died during execution: 'GLEW_FUN_EXPORT' is defined as 
  'GLEWAPI', but no 
  'GLEWAPI' has been defined at config/gen/opengl.pm line 494.
  
   at Configure.pl line 66
 
 Hmmm, it looks like I got so caught up in fixing all the other weirdness
 in your headers, I neglected to notice that we seem to have missed the
 GLEW header that was causing your initial problem.  The good thing is
 that you would have hit all the other errors I fixed once this one went
 away, so it's not lost effort.  ;-)

OK, I have fixed this problem, merged fixes from vany++ in RT #52988,
and fixed a couple more things as well.

Please try the newest version of the patch, attached.


-'f

Index: config/gen/opengl.pm
===
--- config/gen/opengl.pm	(revision 28127)
+++ config/gen/opengl.pm	(working copy)
@@ -108,12 +108,19 @@
 SphereMap   => 'void',
 Display => 'void',
 XVisualInfo => 'void',
+GLEWContext => 'void',
+GLXEWContext=> 'void',
+WGLEWContext=> 'void',
 _CGLContextObject   => 'void',
+CGDirectDisplayID   => 'void',
 GLXHyperpipeConfigSGIX  => 'void',
 GLXHyperpipeNetworkSGIX => 'void',
+PIXELFORMATDESCRIPTOR   => 'void',
+COLORREF=> 'void',
 
 wchar_t => 'void',
 
+GLMfunctions=> 'void*',
 GLXContext  => 'void*',
 GLXFBConfig => 'void*',
 GLXFBConfigSGIX => 'void*',
@@ -121,6 +128,21 @@
 CGLPixelFormatObj   => 'void*',
 CGLRendererInfoObj  => 'void*',
 CGLPBufferObj   => 'void*',
+AGLContext  => 'void*',
+AGLDevice   => 'void*',
+AGLDrawable => 'void*',
+AGLPixelFormat  => 'void*',
+AGLRendererInfo => 'void*',
+AGLPbuffer  => 'void*',
+GDHandle=> 'void*',
+WindowRef   => 'void*',
+HIViewRef   => 'void*',
+Style   => 'void*',
+HDC => 'void*',
+HGLRC   => 'void*',
+LPGLYPHMETRICSFLOAT => 'void*',
+LPLAYERPLANEDESCRIPTOR  => 'void*',
+LPPIXELFORMATDESCRIPTOR => 'void*',
 
 GLchar  => 'char',
 GLcharARB   => 'char',
@@ -137,6 +159,8 @@
 Status  => 'int',
 GLint   => 'int',
 GLsizei => 'int',
+GLfixed => 'int',
+GLclampx=> 'int',
 int32_t => 'int',
 
 GLenum  => 'unsigned int',
@@ -234,6 +258,7 @@
 'glutGetProcAddress',
 'glXGetProcAddress',
 'glXGetProcAddressARB',
+'wglGetProcAddress',
 
 # Don't handle this odd create/callback register function yet
 'glutCreateMenu',
@@ -260,6 +285,14 @@
 'uview_direction',
 'uviewpoint',
 
+# Some versions of GLUT declare these both with and without prefixes;
+# ignore the non-prefixed versions
+'SwapBuffers',
+'ChoosePixelFormat',
+'DescribePixelFormat',
+'GetPixelFormat',
+'SetPixelFormat',
+
 # Can't handle longlong until RT 53406 is done
 'glPresentFrameKeyedNV',
 'glPresentFrameDualFillNV',
@@ -276,11 +309,36 @@
 );
 
 my @SKIP = (
+# Can't properly support these yet; some (such as the internal headers)
+# may never be supported.
+
+# Mesa non-standard driver headers
+'amesa.h',
+'dmesa.h',
+'fxmesa.h',
+'ggimesa.h',
+'mesa_wgl.h',
+'mglmesa.h',
+'osmesa.h',
+'svgamesa.h',
+'uglmesa.h',
+'wmesa.h',
+'xmesa.h',
+'xmesa_xf86.h',
+'xmesa_x.h',
+
 # Mesa API-mangling headers (to load vendor GL and Mesa simultaneously)
 'gl_mangle.h',
 'glu_mangle.h',
 'glx_mangle.h',
 
+# OpenVMS API-mangling header
+'vms_x_fix.h',
+
+# Internal headers for DRI
+'dri_interface.h',
+'glcore.h',
+
 # Apple CGL OpenGL API conversion macros
 'CGLMacro.h',
 
@@ -299,6 +357,12 @@
 'gizmo.h',
 'hslider.h',
 'vslider.h',
+
+# SGI GLw Drawing Area headers
+'GLwDrawA.h',
+'GLwDrawAP.h',
+'GLwMDrawA.h',
+'GLwMDrawAP.h',
 );
 
 my $MACRO_FILE = 'runtime/parrot/include/opengl_defines.pasm';
@@ -331,8 +395,9 @@
 s{\\}{/}g foreach @include_paths_win32;
 
 my @header_globs = (
-# Default location for most UNIX-like platforms
+# Default locations for most UNIX-like platforms
 '/usr/include/GL/*.h',
+'/usr/local/include/GL/*.h',
 
 # Mac OS X
 '/System/Library/Frameworks/OpenGL.framework/Headers/*.h

Re: [perl #52988] [PATCH] OpenGL binding, part 1

2008-06-05 Thread Geoffrey Broadwell
I've incorporated all of your fixes, plus some fixes for cognominal, in
the latest patch attached to RT #55228.  Please give that a try.


-'f




Re: [perl #55238] [BUG] OpenGL breaks the build

2008-06-03 Thread Geoffrey Broadwell
On Tue, 2008-06-03 at 08:33 -0700, Will Coleda wrote:
 # New Ticket Created by  Will Coleda 
 # Please include the string:  [perl #55238]
 # in the subject line of all future correspondence about this issue. 
 # URL: http://rt.perl.org/rt3/Ticket/Display.html?id=55238 
 
 
 ./parrot -o runtime/parrot/library/OpenGL.pbc 
 runtime/parrot/library/OpenGL.pir
 error:imcc:No such file or directory
 in file 'runtime/parrot/library/OpenGL.pir' line 79
 make: *** [runtime/parrot/library/OpenGL.pbc] Error 2
 
 The offending line is shown below:
 
 76
 77  .namespace ['OpenGL']
 78
 79  .include 'library/OpenGL_funcs.pir'
 80
 81
 82  =item _opengl_init()
 
 OpenGL_funcs.pir' doesn't appear to be in the repository, so when we
 try to build it, it dies.
 
 svn blame says.
 
  27975  japhb .include 'library/OpenGL_funcs.pir'
 
 Regards.

That file is generated during Configure.pl (by config/gen/opengl.pm).  I
have a pending question out asking what the right place is to add
makefile dependencies for libraries in runtime/parrot/library/ (because
most of them contain at least dependencies on generated files in
runtime/parrot/include that as far as I can tell we don't enforce).
Unfortunately, each time I've asked, I've been Warnocked.

There is, however, another problem here.  I'm not sure how you ended up
*not* having an OpenGL_funcs.pir when you went to make.  According to
config/gen/makefiles/root.in , OpenGL_funcs.pir is conditioned on
has_opengl and included in GEN_PASM_INCLUDES, which is part of
CONFIGURE_GENERATED_FILES, which should *not* be removed on 'make
clean', only on 'make realclean'.  And the latter requires a 'perl
Configure.pl' afterwards, which regenerates that file!

In the mean time, a 'make realclean; perl Configure.pl; make' ought to
fix it for you.  But I still want to know how you ended up ready to make
without having that file.
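For the record, the missing dependency could be stated in the makefile along these lines (a sketch only -- the actual target names and generator invocation in config/gen/makefiles/root.in may differ):

```make
# Sketch: make the .pbc depend on the generated include, and say how the
# generated file comes back if it is missing (Configure.pl runs
# config/gen/opengl.pm).  Illustrative rules, not the real root.in.
runtime/parrot/library/OpenGL.pbc: runtime/parrot/library/OpenGL.pir \
		runtime/parrot/library/OpenGL_funcs.pir
	./parrot -o $@ runtime/parrot/library/OpenGL.pir

runtime/parrot/library/OpenGL_funcs.pir: config/gen/opengl.pm
	$(PERL) Configure.pl
```

With a rule like that in place, 'make' would at least fail loudly at the dependency rather than inside imcc.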


-'f




Re: [perl #54562] [TODO] DEVELOPING should stop lagging reality

2008-06-03 Thread Geoffrey Broadwell
On Mon, 2008-06-02 at 20:28 -0700, chromatic via RT wrote:
 On Monday 02 June 2008 20:05:22 Bob Rogers wrote:
 
  Agreed, but doesn't this info really belong in README?  Then DEVELOPING
  really only needs the middle paragraph, which is unchanging, and there
  would be one less file to have to update when cutting a new release.
 
 Or is there some purpose in keeping this information out of the hands
  of mere tarball-downloaders?
 
 DEVELOPING mostly exists so that people who check out interim releases run 
 extra tests by default and people who download only official releases don't.
 
 There may be a better way to accomplish this.

OK, how about this:

1. As with Bob's suggestion, DEVELOPING is reduced to just the
unchanging middle paragraph.  It is then merely a flag -- removing that
function as well is better left to another RT.

2. The information about the next release moves to NEWS, where I think
it belongs.  As soon as a release is cut and functionality starts to be
committed, we should start updating NEWS.  A simple change to the format
of the header line makes it serve the duty previously done by the first
paragraph in DEVELOPING.

See the attached patch.


-'f

diff --git a/DEVELOPING b/DEVELOPING
index 66d50d4..b1d5aee 100644
--- a/DEVELOPING
+++ b/DEVELOPING
@@ -1,11 +1,5 @@
 # $Id$
 
-THIS RELEASE: Parrot 0.6.2   2008.05.20
-PREVIOUS RELEASE: Parrot 0.6.1   2008.04.15
-
-This file should only exist in development distributions. Delete it
+This file should only exist in development distributions.  Delete it
 (and its entry in the MANIFEST) before packaging Parrot up for a CPAN
 or other release distribution.
-
-'THIS RELEASE' is the goal of the current development.
-'PREVIOUS RELEASE' is the release that has been last let out into the wild.
diff --git a/NEWS b/NEWS
index d740ad8..bb6ba56 100644
--- a/NEWS
+++ b/NEWS
@@ -1,5 +1,12 @@
 # $Id$
 
+New for next release (2008-06-17, version undecided)
+- Configuration
+  + expanded step gen::opengl
+- Miscellaneous
+  + ported OpenGL/GLU/GLUT bindings to Win32 and more Mac OS X variants
+  + generate OpenGL/GLU/GLUT bindings by parsing system headers
+
 New in 0.6.2
 - Specification
   + updated and launched pdd28_strings.pod

