[perl #57568] [PATCH][Lua] 4 bugfixes for the Lua language and test cases.

2008-08-05 Thread François PERRAD via RT
On Sun Aug 03 16:11:56 2008, Neopallium wrote:
 
 I have separated the bugfixes into different patch files and included
 one large patch file (all_changes.patch) that includes all the other
 patches.
 
 fix_assignlist.patch:
 In Lua you can swap variables using an assignlist like this:
 v1,v2 = v2,v1
 The problem is that both v1 & v2 would equal the value in v2.  The
 value in v1 was lost.  To fix this I added the use of temp registers
 to store the values before the final assignments.  The life.lua
 script was affected by this bug.
 
 test_swap_assignlist.patch:
 Adds test case for bug fixed in 'fix_assignlist.patch'
 
both applied in r29995.
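The semantics being restored here can be sketched outside Parrot; Python's multiple assignment behaves like Lua's assignlist, so the buggy and fixed lowerings look like this (illustrative only, not the actual POST code):

```python
# Illustrative Python model of the assignlist fix. In the buggy lowering
# the targets are assigned left to right, so v1's old value is lost; the
# fixed lowering evaluates the whole right-hand side into temporaries
# (the "temp registers" of the patch) before any assignment happens.

def naive_assign(env):
    # Buggy: v1 is overwritten before its old value is read.
    env["v1"] = env["v2"]
    env["v2"] = env["v1"]   # reads the already-clobbered v1
    return env

def fixed_assign(env):
    # Fixed: copy the RHS values into temporaries first, then assign.
    t1 = env["v2"]
    t2 = env["v1"]
    env["v1"] = t1
    env["v2"] = t2
    return env

print(naive_assign({"v1": 1, "v2": 2}))  # {'v1': 2, 'v2': 2} -- v1 lost
print(fixed_assign({"v1": 1, "v2": 2}))  # {'v1': 2, 'v2': 1} -- swapped
```

In Lua itself, `v1,v2 = v2,v1` must behave like `fixed_assign`.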

 fix_environ_reg_cache.patch:
 Accessing a global in Lua after a logical and operation would cause a
 crash if the second operand was a global variable and the first
 operand was false.  The POST pass was caching the register name used
 to access global variables, to speed up global access.  That cached
 register needs to be cleared after method calls and branches.  It
 wasn't being cleared after the logical and operation, so the next
 global access tried to use a register that would not be set if the
 branch wasn't taken.  This bug is why the life.lua script was
 crashing.
 
 test_global_access.patch:
 Adds test case for bug fixed in 'fix_environ_reg_cache.patch'
both applied in r29997.
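The caching bug is easy to model in miniature. This toy Python pass (all names invented, not the real POSTGrammar.tg code) caches the register holding the environment and shows why the cache must be cleared at branches:

```python
# Toy model (invented names) of the environment-register cache in the POST
# pass: the first global access loads the globals table into a register and
# caches the register name; any branch or call must clear the cache, because
# the cached load may sit on a path that was skipped at run time.

class EnvRegCache:
    def __init__(self):
        self.env_reg = None                 # cached register name, if any

    def access_global(self, code):
        if self.env_reg is None:
            self.env_reg = "$P90"           # pick a register, emit the load
            code.append(f"{self.env_reg} = get_global_env()")
        code.append(f"use {self.env_reg}")
        return self.env_reg

    def after_branch_or_call(self):
        self.env_reg = None                 # the fix: invalidate the cache

code = []
cache = EnvRegCache()
cache.access_global(code)       # emits the load, caches $P90
cache.after_branch_or_call()    # a logical 'and' compiles to a branch
cache.access_global(code)       # re-emits the load instead of reusing $P90
print(code)
```

Without the `after_branch_or_call` call, the second access would reuse a register that was never set on the untaken path.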

 
 fix_fornum.patch:
 The loop count variable in for loops was being passed by reference
 instead of by value, as is required for numbers in Lua.  To fix this
 I added a temp register for the real loop counter and cloned that
 value into the count variable that is visible to the loop code.  This
 bug affected the sieve.lua script, making it return only 1 prime.
applied in r30005.
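The by-value loop counter can be sketched like so (Python used for illustration; the real fix clones a value into a Parrot register):

```python
# Python model of the fornum fix: the loop keeps a private counter (the
# "temp register") and hands the body a per-iteration copy, so numbers
# behave as values -- rebinding the visible variable cannot derail the loop.

def fornum(start, stop, step, body):
    counter = start              # real loop counter, hidden from the body
    while counter <= stop:
        i = counter              # cloned value visible to the loop code
        body(i)                  # the body may rebind or overwrite i freely
        counter += step          # iteration advances the private counter

seen = []
fornum(1, 5, 1, seen.append)
print(seen)  # [1, 2, 3, 4, 5]
```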

 
 fix_lua_bytecode_loader.patch:
 This patch fixes the loading of Lua bytecode files.
applied in r29998.

 
 test_fix_bisect_output.patch:
 The expected output for the bisect.lua script seems to be wrong.  I
 have tested that script with Lua 5.1.3 on three different computers
 and the output doesn't match the expected output for this test case.
 Even the live demo on lua.org (http://www.lua.org/cgi-bin/demo)
 matches the output from my three computers.  This patch updates the
 expected output file so it matches the output from the official Lua
 interpreter.
Applied with modification in r30009.
The current output reference was created with Lua 5.1.3 on Windows (from
LuaBinaries).
The code of the Lua interpreter is extremely portable, but its output is not.
So, now there are 2 reference outputs.

Thanks again for these real improvements.

François.

Note: the ticket #57504 (mktime vs 64bits) was prematurely closed.
See http://rt.perl.org/rt3/Ticket/Update.html?id=57504

 
 test_from_lua.patch:
 Removes TODO on bisect & sieve tests.  Changes skip message for the
 life test from "crash" to "uses too much memory with default runcore".
 
 The life.lua script uses too much memory (1 Gbyte) when using the
 slow/fast/computed-goto cores, even when the JIT is turned on.  Using
 the --leak-test option doesn't show any leaks, and the --gc-debug
 option doesn't improve memory usage.  The CGP & switched cores don't
 have this problem; they use less than 60 Mbytes.
 
 diffstats for all_changes.patch:
  src/POSTGrammar.tg   |   14 ++
  src/lib/luaaux.pir   |2 +-
  t/assign.t   |   30 +-
  t/expr.t |   18 +-
  t/test-from-lua.t|   12 +---
  t/test/bisect-output.txt |6 +++---
  6 files changed, 61 insertions(+), 21 deletions(-)
 



[svn:parrot-pdd] r30011 - trunk/docs/pdds

2008-08-05 Thread Whiteknight
Author: Whiteknight
Date: Mon Aug  4 15:10:22 2008
New Revision: 30011

Modified:
   trunk/docs/pdds/pdd09_gc.pod

Log:
[docs/pdd] update pdd09 to include more descriptions, more information and some
much-needed clarity. These are all lessons I've learned the hard way.

Modified: trunk/docs/pdds/pdd09_gc.pod
==
--- trunk/docs/pdds/pdd09_gc.pod(original)
+++ trunk/docs/pdds/pdd09_gc.podMon Aug  4 15:10:22 2008
@@ -161,6 +161,15 @@
 The primary GC model for PMCs, at least for the 1.0 release, will use a
 tri-color incremental marking scheme, combined with a concurrent sweep scheme.
 
+=head2 Terminology
+
+A GC run is composed of two distinct operations: Finding objects which are
+dead (the trace phase) and freeing dead objects for later reuse (the
+sweep phase). The sweep phase is also known as the collection phase. The
+trace phase is also known as the mark phase and less frequently as the
+dead object detection phase. The use of the term "dead object detection"
+and its acronym DOD has been deprecated.
+
 =head2 Initial Marking
 
 Each PMC has a C<flags> member which, among other things, facilitates garbage
@@ -186,7 +195,7 @@
 
 =item Global stash
 
-=item System stack
+=item System stack and processor registers
 
 =item Current PMC register set
 
@@ -335,9 +344,9 @@
 
 =head3 Initialization
 
-Each GC core declares an initialization routine, which is called from
-F<src/memory.c>:mem_setup_allocator() after creating C<arena_base> in the
-interpreter struct.
+Each GC core declares an initialization routine as a function pointer,
+which is installed in F<src/memory.c>:mem_setup_allocator() after
+creating C<arena_base> in the interpreter struct.
 
 =over 4
 
@@ -357,40 +366,66 @@
 
 =over 4
 
+=item C<void (*init_gc_system) (Interp *)>
+
+Initialize the GC system. Install the additional function pointers into
+the Arenas structure, and prepare any private storage to be used by
+the GC in the Arenas->gc_private field.
+
 =item C<void (*do_gc_mark) (Interp *, int flags)>
 
 Trigger or perform a GC run. With an incremental GC core, this may only
-start/continue a partial mark phase, rather than marking the entire tree of
-live objects.
+start/continue a partial mark phase or sweep phase, rather than performing an
+entire run from start to finish. It may take several calls to C<do_gc_mark> in
+order to complete an entire incremental run.
+
+For a concurrent collector, calls to this function may activate a concurrent
+collection thread or, if such a thread is already running, do nothing at all.
 
-Flags is one of:
+The C<do_gc_mark> function is called from the C<Parrot_do_dod_run> function,
+and should not usually be called directly.
+
+C<flags> is one of:
 
 =over 4
 
+=item C<0>
+
+Run the GC normally, including the trace and the sweep phases, if applicable.
+Incremental GCs will likely only run one portion of the complete GC run, and
+repeated calls would be required for a complete run. A complete trace of all
+system areas is not required.
+
 =item GC_trace_normal | GC_trace_stack_FLAG
 
-Run a normal GC cycle. This is normally called by resource shortage in the
-buffer memory pools before a collection is run. The bit named
-C<GC_trace_stack_FLAG> indicates that the C-stack (and other system areas
-like the processor registers) have to be traced too.
-
-The implementation might or might not actually run a full GC cycle. If an
-incremental GC system just finished the mark phase, it would do nothing.  OTOH
-if no objects are currently marked live, the implementation should run the
-mark phase, so that copying of dead objects is avoided.
+Run a normal GC trace cycle, at least. This is typically called when there
+is a resource shortage in the buffer memory pools before the sweep phase is
+run. The processor registers and any other system areas have to be traced too.
+
+Behavior is determined by the GC implementation, and might or might not
+actually run a full GC cycle. If the system is an incremental GC, it might
+do nothing depending on the current state of the GC. In an incremental GC, if
+the GC is already past the trace phase it may opt to do nothing and return
+immediately. A copying collector may choose to run a mark phase if it hasn't
+yet, to prevent the unnecessary copying of dead objects later on.
 
 =item GC_lazy_FLAG
 
 Do a timely destruction run. The goal is either to detect all objects that
-need timely destruction or to do a full collection. In the former case the
-collection can be interrupted or postponed. This is called from the Parrot
-run-loop. No system areas have to be traced.
+need timely destruction or to do a full collection. This is called from the
+Parrot run-loop, typically when a lexical scope is exited and the local
+variables in that scope need to be cleaned up. Many types of PMC objects, such
+as line-buffered IO PMCs, rely on this behavior for proper operation.
+
+No system areas have to be traced.
 
 =item 
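The tri-color incremental marking scheme that pdd09 describes above can be sketched in a few lines (Python, illustrative only; the object graph and names are invented):

```python
# Minimal sketch of tri-color marking: white = unvisited (presumed dead),
# grey = reached but children not yet scanned, black = fully scanned.
# The sweep (collection) phase then frees everything still white. Each
# pass through the while loop can serve as one increment of the run.

def gc_run(roots, children):
    # children: dict mapping each object to the objects it references
    white = set(children)         # every allocated object starts white
    grey = set()
    black = set()
    for r in roots:               # initial marking from the root set
        if r in white:
            white.discard(r)
            grey.add(r)
    while grey:                   # incremental step: scan one grey object
        obj = grey.pop()
        black.add(obj)
        for child in children[obj]:
            if child in white:
                white.discard(child)
                grey.add(child)
    return black, white           # live objects, dead objects to sweep

heap = {"A": ["B"], "B": [], "C": ["C"]}   # C is an unreachable cycle
live, dead = gc_run(roots=["A"], children=heap)
print(sorted(live), sorted(dead))  # ['A', 'B'] ['C']
```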

[perl #57468] unused VNSNPRINTF check?

2008-08-05 Thread Christoph Otto via RT
On Mon Aug 04 16:29:03 2008, coke wrote:
 As I mentioned on IRC, I'd recommend just removing it instead of
 adding another probe we're not sure we need. We can always come back
 to this ticket and grab your patch for later application if we need
 to.
 

Good enough.  Andy (who originally wrote it) doesn't care either way, so
I'm taking out the extra conditional and marking this RT as resolved.
It'd be safer to use vsnprintf, but this is the only place it'd be used
and a single use doesn't justify the extra configuration and testing code.

The code in question was removed in r30028.

If this ever gets revived, kid51 noted that there should be some
documentation stating the difference between vsprintf and vsnprintf, and
that a t/steps/auto_vsnprintf-01.t should also be added.


[perl #57602] [PATCH] typo in docs/pdds/pdd19_pir.pod

2008-08-05 Thread via RT
# New Ticket Created by  Bob Wilkinson 
# Please include the string:  [perl #57602]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57602 


Hello

 Please find the attached patch for a spelling mistake in
 docs/pdds/pdd19_pir.pod

Bob
Index: docs/pdds/pdd19_pir.pod
===
--- docs/pdds/pdd19_pir.pod (revision 30010)
+++ docs/pdds/pdd19_pir.pod (working copy)
@@ -189,7 +189,7 @@
   set S0, utf8:unicode:«
 
 The encoding and charset are attached to the string constant, and
-adopted by any string containter the constant is assigned to.
+adopted by any string container the constant is assigned to.
 
 The standard escape sequences are honored within strings with an
 alternate encoding, so in the example above, you can include a


[svn:perl6-synopsis] r14572 - doc/trunk/design/syn

2008-08-05 Thread audreyt
Author: audreyt
Date: Tue Aug  5 02:43:49 2008
New Revision: 14572

Modified:
   doc/trunk/design/syn/S12.pod

Log:
* Typo spotted by John M. Dlugosz++:

  method close is export () { ... }  # Wrong
  method close () is export { ... }  # Right

Modified: doc/trunk/design/syn/S12.pod
==
--- doc/trunk/design/syn/S12.pod(original)
+++ doc/trunk/design/syn/S12.podTue Aug  5 02:43:49 2008
@@ -12,9 +12,9 @@
 
   Maintainer: Larry Wall [EMAIL PROTECTED]
   Date: 27 Oct 2004
-  Last Modified: 10 Jul 2008
+  Last Modified: 5 Aug 2008
   Number: 12
-  Version: 61
+  Version: 62
 
 =head1 Overview
 
@@ -222,7 +222,7 @@
 close($handle);
 close $handle;
 
-However, here the built-in B<IO> class defines C<method close is export ()>,
+However, here the built-in B<IO> class defines C<method close () is export>,
 which puts a C<multi sub close (IO)> in scope by default.  Thus if the
 C<$handle> evaluates to an IO object, then the two subroutine calls above
 are still translated into method calls.


Re: syntax question: method close is export ()

2008-08-05 Thread Audrey Tang

John M. Dlugosz wrote:
Does that mean that traits can come before the signature?  Or should it 
be corrected to

method close () is export { ... }


It's a simple typo.  Thanks, fixed in r14572.

Cheers,
Audrey



Class Name Question

2008-08-05 Thread John M. Dlugosz
In S12: So when you say Dog, you're referring to both a package and a
protoobject, the latter of which points to the actual object representing
the class via HOW.


Does that mean that the object referred to by Dog does both roles?  In that 
case the latter is confusing wording.


Or does it mean that the compiler returns one of two different objects depending 
on context?  In addition to the listop!  So, how does it know which is wanted?


E.g.
my $x = Dog;  # the undefined Dog
my $y = ::Dog; # the package

but that means other wording is wrong, in that :: in rvalue context is not a 
no-op exactly.


If, on the other hand, Dog is always a listop that in the 0-ary case returns the 
protoobject, the protoobject can be defined to also do the Abstraction role, or 
perhaps, Ah! mix in the Package as a property.


my ::z ::= Dog;
Dog::func1();
z::func1();  # same thing

In that case, the first line works because the object is the undefined dog but
not the package object, so there is an implicit conversion to Package.  The middle
line might work by knowing the context of a qualified name, so either the 
compiler knows this specifically or can handle anything in the symbol table that 
 can be implicitly converted to an Abstraction.


--John


syntax question: method close is export ()

2008-08-05 Thread John M. Dlugosz
Does that mean that traits can come before the signature?  Or should it be 
corrected to

method close () is export { ... }

?


Re: Edits to submit

2008-08-05 Thread Audrey Tang

John M. Dlugosz wrote:
I've edited several of the S??.pod files, but I have not heard back from
the owner ($Larry, whose name is on the top of the file) about accepting,
merging, or rejecting my changes.


I've posted the files to http://www.dlugosz.com/Perl6/offerings/ so 
they don't get lost, until someone with authority wants to diff them.


I'm diffing them (slowly), and have committed your stylistic edits to 
S02.pod.  Thanks!


However, in S02 you removed the Code class and replaced it with Routine, 
but that does not really work; for example, a bare block is a Code, but 
it cannot be a Routine since it can't be wrapped in place, and caller() 
would bypass it when considering caller frames.


Cheers,
Audrey



Re: Edits to submit

2008-08-05 Thread Audrey Tang

Audrey Tang wrote:
However, in S02 you removed the Code class and replaced it with Routine, 
but that does not really work; for example, a bare block is a Code, but 
it cannot be a Routine since it can't be wrapped in place, and caller() 
would bypass it when considering caller frames.


I should've been more explicit.  While I don't really have a problem
with replacing Code with Callable (except the latter is more wordy, so
why not replace Callable with Code...), the issue is that your S02.pod
edits indicate that a variable &foo must always be bound to a Routine
object. However, variables with the & sigil can be bound to a Block as
well, so replacing Code with Routine at lines 1487 and 1512 doesn't
quite work. :-)


Cheers,
Audrey



[perl #57610] [PATCH] Resumable exceptions

2008-08-05 Thread via RT
# New Ticket Created by  Stephen Weeks 
# Please include the string:  [perl #57610]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57610 


pdd23:

Exception handlers can resume execution immediately after the
throw opcode by invoking the resume continuation which is stored
in the exception object.  That continuation must be invoked with no
parameters; in other words, throw never returns a value.

Exception.pmc has the following attributes:

ATTR INTVAL    id;           /* The task ID in the scheduler. */
ATTR FLOATVAL  birthtime;    /* The creation time stamp of the exception. */
ATTR STRING   *message;      /* The exception message. */
ATTR PMC      *payload;      /* The payload for the exception. */
ATTR INTVAL    severity;     /* The severity of the exception. */
ATTR INTVAL    type;         /* The type of the exception. */
ATTR INTVAL    exit_code;    /* The exit code of the exception. */
ATTR PMC      *stacktrace;   /* The stacktrace of an exception. */
ATTR INTVAL    handled;      /* Whether the exception has been handled. */
ATTR PMC      *handler_iter; /* An iterator of handlers (for rethrow). */
ATTR Parrot_Context *handler_ctx; /* A stored context for handler iterator. */

None of these is a continuation.

The throw opcode passes the address of the next opcode to
Parrot_ex_throw_from_op, but Petfo only uses it in:

address = VTABLE_invoke(interp, handler, dest);

and the ExceptionHandler PMC's invoke() does not use that parameter
at all.


This first draft of a patch adds an attribute to the exception pmc to
hold a return continuation, creates a retcontinuation pmc in the throw
opcode and assigns it to that attribute, and patches
new_ret_continuation to initialize the new continuation's from_ctx
attribute in the same way new_continuation does.

This last item is there to fix a segfault.  I don't understand parrot's
continuations well enough yet to have any idea why they were different,
so I just guessed.  I don't know if it's wrong, but it doesn't seem to
fail any extra tests.

Added a simple test case.
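The resume-continuation idea can be modeled with a Python generator (all names invented): the suspended generator plays the role of the return continuation that this patch stores on the Exception PMC, letting a handler re-enter the program right after the throw.

```python
# Sketch of pdd23-style resumable exceptions. The "program" is written as
# a generator so the handler can re-enter it after the throw point; per
# pdd23, the continuation is invoked with no value (throw never returns
# a value), which maps to send(None) here.

def program(log):
    log.append("before throw")
    yield "some error"            # 'throw': suspend, handing control out
    log.append("after throw")     # reached only if the handler resumes

log = []
gen = program(log)
error = next(gen)                 # runs until the throw; handler gets control
try:
    gen.send(None)                # resume continuation: invoked with no value
except StopIteration:
    pass                          # program ran to completion after resuming
print(log)  # ['before throw', 'after throw']
```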
From 21bc85c3ae1d749187b250bd898028d63a92891f Mon Sep 17 00:00:00 2001
From: Stephen Weeks [EMAIL PROTECTED]
Date: Tue, 5 Aug 2008 04:55:30 -0600
Subject: [PATCH] Add a return continuation attribute to the Exception pmc and 
fill it in the throw opcode.

---
 src/ops/core.ops  |4 +++-
 src/pmc/exception.pmc |   10 ++
 src/sub.c |2 +-
 t/op/exceptions.t |   25 -
 4 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/src/ops/core.ops b/src/ops/core.ops
index 25f8775..da821f3 100644
--- a/src/ops/core.ops
+++ b/src/ops/core.ops
@@ -816,7 +816,9 @@ inline op pop_eh() {
 
 inline op throw(invar PMC) :flow {
 opcode_t * const ret  = expr NEXT();
-opcode_t * const dest = Parrot_ex_throw_from_op(interp, $1, ret);
+Parrot_cont * resume = new_ret_continuation_pmc(interp, ret);
+VTABLE_set_attr_str(interp, $1, string_from_literal(interp, "retcont"), resume);
+opcode_t * const dest = Parrot_ex_throw_from_op(interp, $1, resume);
 goto ADDRESS(dest);
 }
 
diff --git a/src/pmc/exception.pmc b/src/pmc/exception.pmc
index 15c7056..2a09882 100644
--- a/src/pmc/exception.pmc
+++ b/src/pmc/exception.pmc
@@ -57,6 +57,7 @@ pmclass Exception {
 ATTR FLOATVAL  birthtime;    /* The creation time stamp of the exception. */
 ATTR STRING   *message;      /* The exception message. */
 ATTR PMC      *payload;      /* The payload for the exception. */
+ATTR PMC      *retcont;      /* The return continuation for the exception. */
 ATTR INTVALseverity; /* The severity of the exception. */
 ATTR INTVALtype; /* The type of the exception. */
 ATTR INTVALexit_code;/* The exit code of the exception. */
@@ -93,6 +94,7 @@ Initializes the exception with default values.
 core_struct->handled      = 0;
 core_struct->message      = CONST_STRING(interp, "");
 core_struct->payload      = PMCNULL;
+core_struct->retcont      = PMCNULL;
 core_struct->stacktrace   = PMCNULL;
 core_struct->handler_iter = PMCNULL;
 }
@@ -113,6 +115,8 @@ Mark any active exception data as live.
 pobject_lives(interp, (PObj *)core_struct->message);
 if (core_struct->payload)
 pobject_lives(interp, (PObj *)core_struct->payload);
+if (core_struct->retcont)
+pobject_lives(interp, (PObj *)core_struct->retcont);
 if (core_struct->stacktrace)
 pobject_lives(interp, (PObj *)core_struct->stacktrace);
 if (core_struct->handler_iter)
@@ -530,6 +534,9 @@ Retrieve an attribute value for the exception object.
 else if (string_equal(INTERP, name, CONST_STRING(INTERP, "payload")) == 0) {
 GET_ATTR_payload(interp, SELF, value);
 }
+else if (string_equal(INTERP, name, 

[svn:perl6-synopsis] r14571 - doc/trunk/design/syn

2008-08-05 Thread audreyt
Author: audreyt
Date: Tue Aug  5 02:38:33 2008
New Revision: 14571

Modified:
   doc/trunk/design/syn/S02.pod

Log:
* S02: A few more C<...> and C<...> blocks, contributed by John M. Dlugosz++.

Modified: doc/trunk/design/syn/S02.pod
==
--- doc/trunk/design/syn/S02.pod(original)
+++ doc/trunk/design/syn/S02.podTue Aug  5 02:38:33 2008
@@ -12,9 +12,9 @@
 
   Maintainer: Larry Wall [EMAIL PROTECTED]
   Date: 10 Aug 2004
-  Last Modified: 25 Jul 2008
+  Last Modified: 5 Aug 2008
   Number: 2
-  Version: 133
+  Version: 134
 
 This document summarizes Apocalypse 2, which covers small-scale
 lexical items and typological issues.  (These Synopses also contain
@@ -1415,7 +1415,7 @@
 There is a need to distinguish list assignment from list binding.
 List assignment works much like it does in Perl 5, copying the
 values.  There's a new C<:=> binding operator that lets you bind
-names to Array and Hash objects without copying, in the same way
+names to C<Array> and C<Hash> objects without copying, in the same way
 as subroutine arguments are bound to formal parameters.  See S06
 for more about binding.
 
@@ -1544,7 +1544,7 @@
 
 =item *
 
-In numeric context (i.e. when cast into C<Int> or C<Num>), a Hash object
+In numeric context (i.e. when cast into C<Int> or C<Num>), a C<Hash> object
 becomes the number of pairs contained in the hash.  In a boolean context, a
 Hash object is true if there are any pairs in the hash.  In either case,
 any intrinsic iterator would be reset.  (If hashes do carry an intrinsic
@@ -1807,7 +1807,7 @@
 it starts in the current dynamic scope and from there
 scans outward through all dynamic scopes until it finds a
 contextual variable of that name in that context's lexical scope.
-(Use of C<$+FOO> is equivalent to CONTEXT::$FOO or $CONTEXT::FOO.)
+(Use of C<$+FOO> is equivalent to C<< CONTEXT::$FOO >> or C<< $CONTEXT::FOO >>.)
 If after scanning all the lexical scopes of each dynamic scope,
 there is no variable of that name, it looks in the C<*> package.
 If there is no variable in the C<*> package and the variable is
@@ -1921,7 +1921,7 @@
 C<$?FILE> and C<$?LINE> are your current file and line number, for
 instance.  C<?> is not a shortcut for a package name like C<*> is.
 Instead of C<$?OUTER::SUB> you probably want to write C<< OUTER::$?SUB >>.
-Within code that is being run during the compile, such as BEGIN blocks, or
+Within code that is being run during the compile, such as C<BEGIN> blocks, or
 macro bodies, or constant initializers, the compiler variables must be referred
 to as (for instance) C<< COMPILING::$?LINE >> if the bare C<$?LINE> would
 be taken to be the value during the compilation of the currently running


Re: [perl #57476] [pdb] parrot version

2008-08-05 Thread NotFound
On Thu, Jul 31, 2008 at 8:20 PM, via RT Will Coleda
[EMAIL PROTECTED] wrote:

 The parrot_debugger version should, IMO, be identical to the parrot
 version. (e.g., it's currently reporting as 0.4 instead of 0.6.x.)

Done in r30034

-- 
Salu2


Re: [perl #57438] [DEPRECATED] [PDD19] .pragma n_operators

2008-08-05 Thread Klaas-Jan Stol
As far as I could see, it seems that the whole n_operators thing is no
longer mentioned in pdd19.

if it's what Pm thinks, just a change from .pragma n_operators to
.n_operators, then that should be added.

kjs

On Sat, Aug 2, 2008 at 6:56 PM, Patrick R. Michaud [EMAIL PROTECTED]wrote:

 On Thu, Jul 31, 2008 at 10:07:49AM +0100, Klaas-Jan Stol wrote:
  On Wed, Jul 30, 2008 at 9:06 PM, via RT Will Coleda wrote
   From PDD19:
  
   =item .pragma n_operators [deprecated]
 
  does this mean that by default all ops will have the n_ prefix?
  That would imply some variants of these ops are removed (namely, the
  non-n_-prefixed ones).
 
  I guess what my question is, what's the reason for removing this?

 I think all this means is that the pragma itself is deprecated.
 I would presume that the n_operators remain, and that programs can
 continue to generate both n_ and non-n_ opcodes as needed.

 Pm



Re: Beta of web services to fulfill smoke Queryability requirements.

2008-08-05 Thread Ronald Schmidt

Michael Peters wrote:

Ronald Schmidt wrote:
I've been meaning to update that wiki page to point to the progress 
we're making toward this. I should also write up how Smolder already 
accomplishes those goals (well, the ones it does accomplish).


Thanks.  If I had noticed that smolder was already in development to fix 
the issues on the RFP  page I would probably have moved on to a 
different project.  I now notice the link to smolder at the bottom of 
the page, but there is no mention in the page content of smolder's role 
as a solution to the requirements for better reports.


My system currently fulfills some requirements that do not seem to be 
handled by your system including:


   * Give me the results of all reports of t/steps/inter_progs-01.t on
 all OS/platform/compiler combinations for the past seven days.

   * Tell me which OS/platform/compiler combinations have been
 smoke-tested in the past thirty days.

It also provides analysis for reports submitted with make smoke, which 
continue to be submitted.  So I am still interested in suggesting that 
the wiki page be updated to note both, as you mention, how smolder 
accomplishes those goals, as well as a link to my system's reports at 
"http://www.software-path.com/parrotsmoke".


Smolder uses TAP, so no need to scrape web pages. And we expose that 
TAP for download if someone wants to analyze it further with some 
other tools. It also uses SQLite, so if we decide to move this install 
over to a TPF owned box (or a box that just has parrot related smoke 
info on it) we can even expose the SQLite file itself for 
downloading/querying.
My system uses MySQL but populates the MySQL database by scraping smoke 
reports.  I agree that processing TAP is a better solution.  If you are 
willing to share the DDL used by smolder I might well be interested in 
porting some of my reports to your system.  Note that the current smoke 
reports include TAP data, and some of my parsing work might be used to 
put current 'make smoke' reports in your database, if such an effort is 
felt to be of interest.
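Since the smoke reports embed TAP, a first cut at analyzing them without scraping HTML is only a few lines; this Python sketch (hypothetical, not smolder's code) tallies pass/fail/skip from a TAP stream:

```python
# Minimal TAP consumer sketch: counts pass/fail/skip lines from a TAP
# stream like those embedded in the smoke reports. A real tool would
# also validate the plan line, handle TODO directives, and nesting.
import re

def summarize_tap(tap_text):
    results = {"pass": 0, "fail": 0, "skip": 0}
    for line in tap_text.splitlines():
        m = re.match(r"(not )?ok\b(.*)", line.strip())
        if not m:
            continue                      # ignore plan/diagnostic lines
        if "# skip" in m.group(2).lower():
            results["skip"] += 1          # "ok N # SKIP reason"
        elif m.group(1):
            results["fail"] += 1          # "not ok N"
        else:
            results["pass"] += 1          # "ok N"
    return results

tap = """1..3
ok 1 - loads
not ok 2 - swap assignlist
ok 3 - fornum # SKIP needs JIT
"""
print(summarize_tap(tap))  # {'pass': 1, 'fail': 1, 'skip': 1}
```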


Ron



Re: Beta of web services to fulfill smoke Queryability requirements.

2008-08-05 Thread Will Coleda
On Tue, Aug 5, 2008 at 10:39 AM, Ronald Schmidt
[EMAIL PROTECTED] wrote:
 Michael Peters wrote:

 Ronald Schmidt wrote:
 I've been meaning to update that wiki page to point to the progress we're
 making toward this. I should also write up how Smolder already accomplishes
 those goals (well, the ones it does accomplish).

 Thanks.  If I had noticed that smolder was already in development to fix the
 issues on the RFP  page I would probably have moved on to a different
 project.  I now notice the link to smolder at the bottom of the page, but
 there is no mention in the page content of smolder's role as a solution to
 the requirements for better reports.

Smolder is a fairly new player in our testing strategy, and it looks
like we dropped the ball on keeping the wiki up to date here. My
apologies. I did hit the links you sent out earlier, and they were
helpful. See below.

 My system currently fulfills some requirements that do not seem to be
 handled by your system including:

   * Give me the results of all reports of t/steps/inter_progs-01.t on
 all OS/platform/compiler combinations for the past seven days.

   * Tell me which OS/platform/compiler combinations have been
 smoke-tested in the past thirty days.

More queryability of the smolder data would be spiffy.

I especially like the first one here. Makes it easy to see if
something is isolated to a particular platform or set of platforms.

 It also provides analysis for reports submitted with make smoke, which
 continue to be submitted.  So I am still interested in suggesting that the
 wiki page be updated to note both, as you mention, how smolder accomplishes
 those goals as well as a link to my system's reports at
 "http://www.software-path.com/parrotsmoke".

Ronald, you can go ahead and add your own data there if you like.

Michael, can you update the page with more smolder information?

 Smolder uses TAP, so no need to scrape web pages. And we expose that TAP
 for download if someone wants to analyze it further with some other tools.
 It also uses SQLite, so if we decide to move this install over to a TPF
 owned box (or a box that just has parrot related smoke info on it) we can
 even expose the SQLite file itself for downloading/querying.

 My system uses Mysql but populates the MYSQL database by scraping smoke
 reports.  I agree that processing TAP is a better solution.  If you are
 willing to share the ddl used by smolder I might well be interested in
 porting some of my reports to your system.  Note that the current smoke
 reports include TAP data and some of my parsing work might be used to put
 current 'make smoke' reports in your database, if such an effort is felt to
 be of interest.

 Ron





-- 
Will Coke Coleda


Branching

2008-08-05 Thread Will Coleda
Using svn as a backing store, how can we more easily work with long
lived branches?

I've some existing branches which are long lived, and doing the svn
merge either way is extremely slow.

I know much of our community used svk for a while; I think the usage
there has dropped off as git is the new shiny. My usage of svk was for
local branching; I couldn't easily share my work in progress with the
community.

Does anyone have *any* recommendations? (including: you're doing the
merge wrong).

I'm bccing the current svn admins to find out if they have any ideas as well.

Would an upgrade on the server side to 1.5 help with performance? (It
would certainly make the maintenance aspect of the merging less
painful.)

-- 
Will Coke Coleda


Re: Edits to submit - Routine/Callable

2008-08-05 Thread John M. Dlugosz

Audrey Tang audreyt-at-audreyt.org |Perl 6| wrote:

Audrey Tang wrote:
However, in S02 you removed the Code class and replaced it with 
Routine, but that does not really work; for example, a bare block is 
a Code, but it cannot be a Routine since it can't be wrapped in 
place, and caller() would bypass it when considering caller frames.


I should've been more explicit.  While I don't really have a problem 
with replacing Code with Callable (except the latter is more wordy, so 
why not replace Callable with Code...), 
1) Nobody cared enough to discuss it for a couple weeks, and I decided 
that's one reason why this is moving so slowly... better to =do= it 
already.  See earlier post on Callable/Code.

2) Callable is mentioned in S02 as the role that goes with & variables, 
and that is the latest, most official statement from Larry.  Use of Code 
for that role (eqv to an & variable) is a relic from before roles.  Why 
not use Code instead of Callable?  I think Larry wanted those key sigil 
roles to have names that are adjectives.  In any case, they form a nice 
matched set.


the issue is that your S02.pod edits indicate that a variable &foo 
must always be bound to a Routine object. However, variables with the 
& sigil can be bound to a Block as well, so replacing Code with 
Routine at lines 1487 and 1512 doesn't quite work. :-)




I must have made a mistake; that should have been Callable.  Callable is 
synonymous with the & sigil.




Cheers,
Audrey






Re: Branching

2008-08-05 Thread Kevin Tew

Git is really nice for:
local branches,
frequently (daily) rebasing local branches to keep in sync with HEAD,
publishing local branches for others to review,
allowing non-committers to make changes and publish those changes publicly.


Kevin

Will Coleda wrote:

Using svn as a backing store, how can we more easily work with long
lived branches?

I've some existing branches which are long lived, and doing the svn
merge either way is extremely slow.

I know much of our community used svk for a while; I think the usage
there has dropped off as git is the new shiny. My usage of svk was for
local branching; I couldn't easily share my work in progress with the
community.

Does anyone have *any* recommendations? (including: you're doing the
merge wrong).

I'm bccing the current svn admins to find out if they have any ideas as well.

Would an upgrade on the server side to 1.5 help with performance? (It
would certainly make the maintenance aspect of the merging less
painful.)

  




Re: Branching

2008-08-05 Thread Will Coleda
On Tue, Aug 5, 2008 at 11:04 AM, Kevin Tew [EMAIL PROTECTED] wrote:
 Git is really nice for:
local branches,

This is on par with svk...

frequently(daily) rebasing local branches to keep in sync with HEAD,

How does this work? What's the pain threshold?

publishing local branches for others to review,

Is this something that we'd host near the svn repository? On a machine
like feather? Wherever a developer wanted? (How would we advertise?)

allowing non-committers to make changes and publish those changes
 publicly

So, like patches in RT, but closer to the 'metal', as it were?

 Kevin

 Will Coleda wrote:

 Using svn as a backing store, how can we more easily work with long
 lived branches?

 I've some existing branches which are long lived, and doing the svn
 merge either way is extremely slow.

 I know much of our community used svk for a while; I think the usage
 there has dropped off as git is the new shiny. My usage of svk was for
 local branching; I couldn't easily share my work in progress with the
 community.

 Does anyone have *any* recommendations? (including: you're doing the
 merge wrong).

 I'm bccing the current svn admins to find out if they have any ideas as
 well.

 Would an upgrade on the server side to 1.5 help with performance? (It
 would certainly make the maintenance aspect of the merging less
 painful.)







-- 
Will Coke Coleda


Re: Branching

2008-08-05 Thread Jesse Vincent


On Aug 5, 2008, at 10:51 AM, Will Coleda wrote:


Using svn as a backing store, how can we more easily work with long
lived branches?

I've some existing branches which are long lived, and doing the svn
merge either way is extremely slow.

I know much of our community used svk for a while; I think the usage
there has dropped off as git is the new shiny. My usage of svk was for
local branching; I couldn't easily share my work in progress with the
community.



Not that I'm biased, but

SVK 2.2b1 is out today. It contains many, many bugfixes and  
performance improvements (http://search.cpan.org/~clkao/SVK-v2.1.99_01/)


It also has a bunch of new features. The one that's made my life  
easier is the new branch command.


branch is designed to encapsulate the sorts of branching operations  
people working on a project often actually _do_. In addition to the  
below, there are more tools to help release engineers keep track of  
and merge branches.


# create a branch
svk br --create p5-implementation parrot

# get a checkout

svk br --co p5-implementation parrot

cd p5-implementation

# hack hack hack

# realize you're late for your plane

svk br --offline

# svk clones the remote branch to local

# hack hack hack

# land, find net

svk br --online

# svk merges down changes from the upstream copy of the branch and  
then pushes your changes



If this seems appealing, I'm sure I could get some clkao cycles if  
there's more you folks need.


-j



Does anyone have *any* recommendations? (including: you're doing the
merge wrong).

I'm bccing the current svn admins to find out if they have any ideas  
as well.


Would an upgrade on the server side to 1.5 help with performance? (It
would certainly make the maintenance aspect of the merging less
painful.)

--
Will Coke Coleda





Re: Branching

2008-08-05 Thread Will Coleda
On Tue, Aug 5, 2008 at 11:19 AM, Jesse Vincent [EMAIL PROTECTED] wrote:

 On Aug 5, 2008, at 10:51 AM, Will Coleda wrote:

 Using svn as a backing store, how can we more easily work with long
 lived branches?

 I've some existing branches which are long lived, and doing the svn
 merge either way is extremely slow.

 I know much of our community used svk for a while; I think the usage
 there has dropped off as git is the new shiny. My usage of svk was for
 local branching; I couldn't easily share my work in progress with the
 community.


 Not that I'm biased, but

 SVK 2.2b1 is out today. It contains many, many bugfixes and performance
 improvements (http://search.cpan.org/~clkao/SVK-v2.1.99_01/)

 It also has a bunch of new features. The one that's made my life easier is
 the new branch command.

 branch is designed to encapsulate the sorts of branching operations people
 working on a project often actually _do_. In addition to the below, there
 are more tools to help release engineers keep track of and merge branches.

 # create a branch
 svk br --create p5-implementation parrot

 # get a checkout

 svk br --co p5-implementation parrot

 cd p5-implementation

 # hack hack hack

 # realize you're late for your plane

 svk br --offline

 # svk clones the remote branch to local

 # hack hack hack

 # land, find net

 svk br --online

 # svk merges down changes from the upstream copy of the branch and then
 pushes your changes


 If this seems appealing, I'm sure I could get some clkao cycles if there's
 more you folks need.

 -j


 Does anyone have *any* recommendations? (including: you're doing the
 merge wrong).

 I'm bccing the current svn admins to find out if they have any ideas as
 well.

 Would an upgrade on the server side to 1.5 help with performance? (It
 would certainly make the maintenance aspect of the merging less
 painful.)

 --
 Will Coke Coleda




Sounds spiffy.

So these branch commands actually create branches on the svn
repository that's doing the hosting, so they're defacto shared with
the community in the obvious location? (presuming you're online and
pushing changes back?)

That seems to be the best of both worlds, presuming it handles the
merging better/faster/cleaner than 'svn merge' does.

-- 
Will Coke Coleda


Re: Branching

2008-08-05 Thread Jesse Vincent


On Aug 5, 2008, at 11:32 AM, Will Coleda wrote:


[SVK 2.2]



Sounds spiffy.

So these branch commands actually create branches on the svn
repository that's doing the hosting, so they're defacto shared with
the community in the obvious location? (presuming you're online and
pushing changes back?)


Correct.


That seems to be the best of both worlds, presuming it handles the
merging better/faster/cleaner than 'svn merge' does.


So long as everyone doing the merges for a given set of branches uses  
svk, then it can keep track of what's going on and make your life easy.


And, of course, we want to know if it's not making your life easier,  
so we can improve things.


-j




--
Will Coke Coleda





Re: Branching

2008-08-05 Thread Reini Urban
2008/8/5 Will Coleda [EMAIL PROTECTED]:
 So these branch commands actually create branches on the svn
 repository that's doing the hosting, so they're defacto shared with
 the community in the obvious location? (presuming you're online and
 pushing changes back?)

 That seems to be the best of both worlds, presuming it handles the
 merging better/faster/cleaner than 'svn merge' does.

But this is only the best for committers.

For non-committers, who would have to wait a longer time to get their
patches applied, it would be better to use a git branch and merge it more
often to get in the upstream updates.
Like my bigger patches, which are often heavily outdated by other
changes when they are applied.
-- 
Reini Urban
http://phpwiki.org/ http://murbreak.at/


[Fwd: Re: Branching]

2008-08-05 Thread Kevin Tew


---BeginMessage---

Will Coleda wrote:

On Tue, Aug 5, 2008 at 11:04 AM, Kevin Tew [EMAIL PROTECTED] wrote:
  

Git is really nice for:
   local branches,



This is on par with svk...

  

   frequently(daily) rebasing local branches to keep in sync with HEAD,



How does this work? What's the pain threshold?

  
git knows which changes came from, say, svn.perl.org and which changes you 
made in your local branch.
git-rebase removes all your local changes, applies the new changes from 
svn.perl.org, and then reapplies your local changes, giving you the 
opportunity to edit each local change if a conflict occurs while they are 
being reapplied.  Git is not that different from patches; the pain 
threshold is just much lower.
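Kevin's rebase description can be sketched as a plain git session (the repo
and branch names below are hypothetical; a git-svn user would run `git svn
rebase` against the real svn.perl.org mirror instead of `git rebase trunk`):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

git checkout -qb trunk                 # stands in for the upstream HEAD
echo base > shared.txt
git add shared.txt
git commit -qm "trunk: initial import"

git checkout -qb topic                 # my local branch
echo fix > local.txt
git add local.txt
git commit -qm "topic: local change"

git checkout -q trunk                  # meanwhile, upstream moves on
echo more >> shared.txt
git commit -qam "trunk: upstream change"

git checkout -q topic
git rebase -q trunk                    # remove, update, then reapply my commits
tip=$(git log --format=%s -1)          # my change now sits atop the new trunk
echo "$tip"
```

After the rebase, the local commit is the branch tip and the new upstream
commit is an ancestor of it, which is exactly the "patch waiting to be
applied" shape discussed later in the thread.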



   publishing local branches for others to review,



Is this something that we'd host near the svn repository? On a machine
like feather? Wherever a developer wanted? (How would we advertise?)

  

Yes, Yes, and Yes.
With git, branches (ie patch sets) are just like html pages. 
It doesn't really matter where they live.  The url just has to be 
publicly accessible if you want others to see the content.




   allowing non-committers to make changes and publish those changes
publicly



So, like patches in RT, but closer to the 'metal', as it were?

  
The web analogy isn't perfect, but I like it so I'm going to stretch it 
further.


Git is like a web-browser for source control changes. 
Once I have the url for a branch I'm interested in, I can track that 
branch and see the new changes on that branch each time I browse

to it.

Patches are like saving the raw html source of a web page and sending it 
via email to let a co-worker know about this cool new web site I found.
Patches are a one-time snapshot of the work, they don't allow me to 
continue to track its progress.
The pain of patches is kinda analogous to  having to save raw html out 
of an email to disk every time I want to visit a web site.
Git drastically reduces the pain threshold of implementing the patch 
model of development.


Kevin

---End Message---


Re: Branching

2008-08-05 Thread Jesse Vincent


On Aug 5, 2008, at 11:50 AM, Reini Urban wrote:


2008/8/5 Will Coleda [EMAIL PROTECTED]:

So these branch commands actually create branches on the svn
repository that's doing the hosting, so they're defacto shared with
the community in the obvious location? (presuming you're online and
pushing changes back?)

That seems to be the best of both worlds, presuming it handles the
merging better/faster/cleaner than 'svn merge' does.


But this is only the best for committers.

For non-committers who would have to wait longer time to get their
patches applied
it would be better to use a git branch and merge it more often to get
in the upstream updates.
Like my bigger patches, which are often heavily outdated by other
changes when they are applied.


Howso?  An svk local branch can easily pull from an upstream branch  
repeatedly over time and can push changes to a patch file suitable  
for application upstream.


(I'm _really_ not trying to start a VCS flamewar here and would be  
perfectly happy to continue this in private mail)


-j




--
Reini Urban
http://phpwiki.org/ http://murbreak.at/





[perl #47972] [DEPRECATED] getclass opcode

2008-08-05 Thread Will Coleda via RT
On Tue Jul 01 18:56:40 2008, coke wrote:
 On Thu Nov 29 22:08:11 2007, [EMAIL PROTECTED] wrote:
  Will Coleda wrote:
  
   1) using getclass (aka, reject this ticket)
   2) doing something custom for the say method here (like, say,
   translating say 'what' into something like getstdout P0;
   P0.'say'('what');
   3) eliminating the automagic method translation used here and just
   writing a 'say' opcode.
   4) find a syntax that works generically like the current method
 does;
  
   There are currently 42 of these automagic translations (found in
   src/builtins.c)
  
   What's the desired approach here? I'd prefer 4 slightly over 3;
 neither
   requires a deprecation cycle (unless as part of 3 we decide to not
   support opcodes/faux-opcodes for some of them.); 2 is evil. I am
 neutral
   on 1.
 
  Another alternative is to update ParrotIO so it works with the new
  'get_class'. That change is in the works, I/O will be integrated
 into
  the new OO model, instead of using its own custom OO-like system.
 But
  the new I/O model is scheduled for completed implementation in May,
 and
  it'd be nice to remove 'getclass' before then.
 
  I'm in favor of making 'say' a standard opcode, whatever else we do.
  That is, assuming it's worth keeping. Show of hands if you use it
 and
  want to keep it.
 
  Allison
 
 
 In order to facilitate the removal of the getclass opcode, I've
 created a branch
 no_builtin_methods to experiment with removing the special
 translation that exists for
 many methods on builtin PMCs and, if necessary, replacing them with
 actual opcodes.
 (Especially the 'say' variants.)

This patch ( http://nopaste.snit.ch/13743 ) reflects the current state
of the branch (modulo any merging difficulty I had).

- getclass opcodes are removed (usage replaced with 'new' or 'get_class')
- adds say opcodes (replacing the most common usage of the builtin methods)
- converts any PIR for other builtin methods to use explicit methods.
- removes any code in IMCC for managing the builtins (which involved
inserting the getclass opcode into the bytecode to do the lookup)

There may be some more cleanup that could be done to IMCC; I may have
made some code paths unreachable with this patch. Comments welcome, but
I'll plan on merging this in before the next release unless there's an
objection.

-- 
Will Coke Coleda


Re: Branching

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 07:51:35 Will Coleda wrote:

 Using svn as a backing store, how can we more easily work with long
 lived branches?

 I've some existing branches which are long lived, and doing the svn
 merge either way is extremely slow.

 I know much of our community used svk for a while; I think the usage
 there has dropped off as git is the new shiny. My usage of svk was for
 local branching; I couldn't easily share my work in progress with the
 community.

 Does anyone have *any* recommendations? (including: you're doing the
 merge wrong).

Don't use long-lived branches.  The smaller the merge in *any* system, the 
easier it is.

-- c


Re: Branching

2008-08-05 Thread Andy Lester


On Aug 5, 2008, at 11:12 AM, chromatic wrote:

Don't use long-lived branches.  The smaller the merge in *any*  
system, the

easier it is.



I agree 100%.  If you think your project is so big that you have to  
have a long-lived branch, then it should be broken up into smaller,  
mergeable milestones.


Branches that don't merge back to trunk regularly are out of touch  
with the rest of development.


Length of a branch increases technical debt of merging exponentially.

xoox,
Andy

--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance






A few multiple dispatch questions

2008-08-05 Thread Jonathan Worthington

Hi,

I am currently reviewing bits of the spec surrounding multiple dispatch 
and, of course, have a question or two (I'll probably have some more 
later, as the dust settles in my head).


1) The spec says:

--
A proto also adds an implicit multi to all routines of
the same short name within its scope, unless they have an explicit modifier.
--

If you write:

proto sub foo(:$thing) { ... }
sub foo(Int $x) { ... }
only sub foo() { ... }

Does this give some kind of error, because you've declared something 
with 'only', but it clearly can't be the only one because we also have a 
proto in there?



2) If I write:

multi sub foo(Int $blah) { ... } # 1
proto sub foo(:$blah) is thingy { ... } # 2
multi sub foo() { ... } # 3

Does #1 get the thingy trait, or not because it was declared before the 
proto was? I'm clear that #3 gets it...



3) The spec says:

--
A parameter list may have at most one double semicolon; parameters after 
it are
never considered for multiple dispatch (except of course that they can 
still

veto if their number or types mismatch).
--

Does the veto take place once the multiple dispatch has given us a 
candidate and we try to bind the parameters to the signature, or as part 
of the multiple dispatch? For example, supposing I declare:


multi foo(Int $a;; Num $b) { ... } # 1
multi foo(Int $a;; Str $b) { ... } # 2
multi foo(Int $a;; Num $b, Num $c) { ... } # 3

What happens with these?

foo(2, RandomThing.new); # Ambiguous dispatch error
foo(2, 2.5); # Ambiguous dispatch error, or 1 because 2 vetos?
foo(1, 2.5, 3.4); # Ambiguous dispatch error, or 3 because only one with 
arity match?


Basically, what I'm getting at is, are all of these multi-methods 
ambiguous because they all have the same long name, and just because 
binding fails doesn't make us return into the multiple dispatch 
algorithm? (This is what I'm kinda expecting and would mean every one of 
these fails. But I just want to check that is what was meant by the 
wording.)


Thanks!

Jonathan


Re: new article, A Romp Through Infinity

2008-08-05 Thread TSa

HaloO,

John M. Dlugosz wrote:
Please let me know if you see any coding errors, and of course any 
feedback is welcome.


Firstly, shouldn't there also be infinite strings? E.g. 'ab' x Inf
is a regularly infinite string and ~pi as well. Other classes might
have elaborate notions of infinity. The Complex e.g. might have an
angle associated to an Inf.

Secondly, you only have a single Inf constant and its negation. But
there should be a multitude of infinities. E.g. a code fragment

my Int $a = random(0..1) < 0.5 ?? 3 !! Inf;
my Int $b = $a + 1;
say "yes" if $b > $a;

should always print "yes". That is, we continue counting after Inf such
that we have transfinite ordinals.

0, 1, 2, ..., Inf, Inf+1, Inf+2, ..., Inf*2, Inf*2+1, ...

The implementation is straightforward as an array of coefficients
of the Inf powers, with Inf**0 == 1 being the finite Ints. The sign
bit goes separate from the magnitude. That is you can do the usual
Int arithmetic in the ranges Inf..^Inf*2 and -Inf*2^..-Inf except
that Inf has no predecessor and -Inf no successor. Well, and we lose
commutativity of + and *. I.e. 1 + $a != $a + 1 if $a is transfinite.

I'm not sure if such a concept of interesting values of infinity
is overly useful, though. In TeX e.g. there are infinitely stretchable
spacings of different infinitudes so that they overwrite each other.
Or take a stereographic projection near the point opposite of the
center of projection where you can usefully clip instead of getting
into funny folding of values into the valid range.

Also I think we can have finite conceptual infinities for types like
int32 and num64. In the latter case we also have infinitely small
values and infinities like sqrt(2). In short everything that falls
out of the finite range of these types and is captured in Int or Num.
BTW, with an infinite precision Num I see no need for the Rat type!


Regards, TSa.
--

The unavoidable price of reliability is simplicity -- C.A.R. Hoare
Simplicity does not precede complexity, but follows it. -- A.J. Perlis
1 + 2 + 3 + 4 + ... = -1/12  -- Srinivasa Ramanujan


Re: A few multiple dispatch questions

2008-08-05 Thread TSa

HaloO,

Jonathan Worthington wrote:
Does the veto take place once the multiple dispatch has given us a 
candidate and we try to bind the parameters to the signature, or as part 
of the multiple dispatch? For example, supposing I declare:


multi foo(Int $a;; Num $b) { ... } # 1
multi foo(Int $a;; Str $b) { ... } # 2
multi foo(Int $a;; Num $b, Num $c) { ... } # 3

What happens with these?


I would expect that since all parameters are required that
they never show up in the same candidate set that is considered
by the dispatcher to find the most specific.



foo(2, RandomThing.new); # Ambiguous dispatch error


Assuming that the second parameter is incompatible with Num and Str
you should get a "can't dispatch" error---no ambiguity at all.


foo(2, 2.5); # Ambiguous dispatch error, or 1 because 2 vetos?


I must admit that I've never grasped that veto business. But why
should it be necessary here?

foo(1, 2.5, 3.4); # Ambiguous dispatch error, or 3 because only one with 
arity match?


Yeah, only #3 in the applicable method set.


Regards, TSa.
--

The unavoidable price of reliability is simplicity -- C.A.R. Hoare
Simplicity does not precede complexity, but follows it. -- A.J. Perlis
1 + 2 + 3 + 4 + ... = -1/12  -- Srinivasa Ramanujan


Re: Branching

2008-08-05 Thread Will Coleda
On Tue, Aug 5, 2008 at 12:16 PM, Andy Lester [EMAIL PROTECTED] wrote:

 On Aug 5, 2008, at 11:12 AM, chromatic wrote:

 Don't use long-lived branches.  The smaller the merge in *any* system, the
 easier it is.


 I agree 100%.  If you think your project is so big that you have to have a
 long-lived branch, then it should be broken up into smaller, mergeable
 milestones.

I agree 50%. =-)

In my particular case, it's not a question of big in terms of -code-.
it's big in terms of time to deliver. Something that I might have
planned to take a week in a branch may be interrupted by real life for
several months.

 Branches that don't merge back to trunk regularly are out of touch with the
 rest of development.

I disagree. Branches that don't rebase from trunk regularly are out of
touch, yes. If you rebase regularly, then you're basically just a
patch waiting to be applied.

 Length of a branch increases technical debt of merging exponentially.

Given our current toolset, it certainly does increase. However, there
appear to be tools for which this issue is greatly mitigated.

If we can adapt our toolset to fit our community instead of adapting
our community to fit our toolset, I'd rather go that way. I'd like
something that let us:

1) keep trunk pretty close to release-quality all the time. We're
pretty much at this point now.
2) work on sharable branches that...
3) were easily rebased from trunk.
4) were easily merged back into trunk

We're a little weak on #3 and #4 at the moment. (It just took me 20-30
minutes to run an svn:merge command between the no_builtin_methods
branch and the repository, and then a few more minutes to clean up
some things that shouldn't have been included, presumably as a result
of issues with #3.)

Whatever methodology we end up with, we also want it to address the
fact that it's probably going to be different once we have 1.0; We're
going to have to start using branches regularly to handle maintenance
and new feature releases. I would hesitate to recommend our current
system for that.

Regards.

 xoox,
 Andy

 --
 Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance





-- 
Will Coke Coleda


Re: Branching

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 09:48:22 Will Coleda wrote:

 Branches that don't rebase from trunk regularly are out of
 touch, yes. If you rebase regularly, then you're basically just a
 patch waiting to be applied.

... and, as time goes by, an ever-larger patch waiting to land on trunk with a 
big thunk of unbisectable, unreviewable code.

 Whatever methodology we end up with, we also want it to address the
 fact that it's probably going to be different once we have 1.0; We're
 going to have to start using branches regularly to handle maintenance
 and new feature releases. I would hesitate to recommend our current
 system for that.

Gah, no maintenance releases please!  See "Mommy, why did it take over five 
years to release a new stable version of Perl 5 with a bugfix I made in 
2002?"

-- c


Re: Branching

2008-08-05 Thread Will Coleda
On Tue, Aug 5, 2008 at 1:10 PM, chromatic [EMAIL PROTECTED] wrote:
 On Tuesday 05 August 2008 09:48:22 Will Coleda wrote:

 Branches that don't rebase from trunk regularly are out of
 touch, yes. If you rebase regularly, then you're basically just a
 patch waiting to be applied.

 ... and, as time goes by, an ever-larger patch waiting to land on trunk with a
 big thunk of unbisectable, unreviewable code.

Again, that assumes that you're changing a lot of code in the branch.
My problem is just keeping up with the changes in trunk when your
particular patch is small. Even that is currently painful and
shouldn't be.

And can you explain how the patch is unreviewable? Unbisectable I can
see, if you're looking just at trunk and not the branch.

 Whatever methodology we end up with, we also want it to address the
 fact that it's probably going to be different once we have 1.0; We're
 going to have to start using branches regularly to handle maintenance
 and new feature releases. I would hesitate to recommend our current
 system for that.

 Gah, no maintenance releases please!  See Mommy, why did it take over five
 years to release a new stable version of Perl 5 with a bugfix I made in
 2002?

 -- c


Perhaps I used an official term when I didn't mean to here.

Let's simplify: I can easily see us needing at least dev and
production branches (one of which can be trunk), which is one more
than we have now.


-- 
Will Coke Coleda


[svn:parrot-pdd] r30041 - trunk/docs/pdds

2008-08-05 Thread kjs
Author: kjs
Date: Tue Aug  5 11:16:36 2008
New Revision: 30041

Modified:
   trunk/docs/pdds/pdd19_pir.pod

Log:
[pdd19] add an RT ticket referring to deprecated old-style pasm registers 
deprecation decision.

Modified: trunk/docs/pdds/pdd19_pir.pod
==
--- trunk/docs/pdds/pdd19_pir.pod   (original)
+++ trunk/docs/pdds/pdd19_pir.pod   Tue Aug  5 11:16:36 2008
@@ -103,7 +103,7 @@
 register, if it is the only register in the subroutine.
 
 {{DEPRECATION NOTE: PIR will no longer support the old PASM-style syntax
-for registers without dollar signs: C<In>, C<Sn>, C<Nn>, C<Pn>.}}
+for registers without dollar signs: C<In>, C<Sn>, C<Nn>, C<Pn>. RT#57638}}
 
 =head2 Constants
 


Re: A few multiple dispatch questions

2008-08-05 Thread Larry Wall
On Tue, Aug 05, 2008 at 06:17:30PM +0200, Jonathan Worthington wrote:
 Hi,

 I am currently reviewing bits of the spec surrounding multiple dispatch  
 and, of course, have a question or two (I'll probably have some more  
 later, as the dust settles in my head).

 1) The spec says:

 --
 A proto also adds an implicit multi to all routines of
 the same short name within its scope, unless they have an explicit modifier.
 --

 If you write:

 proto sub foo(:$thing) { ... }
 sub foo(Int $x) { ... }
 only sub foo() { ... }

 Does this give some kind of error, because you've declared something  
 with 'only', but it clearly can't be the only one because we also have a  
 proto in there?

I'd consider it an error.

 2) If I write:

 multi sub foo(Int $blah) { ... } # 1
 proto sub foo(:$blah) is thingy { ... } # 2
 multi sub foo() { ... } # 3

 Does #1 get the thingy trait, or not because it was declared before the  
 proto was? I'm clear that #3 gets it...

I think a proto cannot be expected to work retroactively.  In fact, I
think it's probably an error to declare a proto after a non-proto in the
same scope.

 3) The spec says:

 --
 A parameter list may have at most one double semicolon; parameters after  
 it are
 never considered for multiple dispatch (except of course that they can  
 still
 veto if their number or types mismatch).
 --

 Does the veto take place once the multiple dispatch has given us a  
 candidate and we try to bind the parameters to the signature, or as part  
 of the multiple dispatch? For example, supposing I declare:

 multi foo(Int $a;; Num $b) { ... } # 1
 multi foo(Int $a;; Str $b) { ... } # 2
 multi foo(Int $a;; Num $b, Num $c) { ... } # 3

 What happens with these?

 foo(2, RandomThing.new); # Ambiguous dispatch error
 foo(2, 2.5); # Ambiguous dispatch error, or 1 because 2 vetos?
 foo(1, 2.5, 3.4); # Ambiguous dispatch error, or 3 because only one with  
 arity match?

 Basically, what I'm getting at is, are all of these multi-methods  
 ambiguous because they all have the same long name, and just because  
 binding fails doesn't make us return into the multiple dispatch  
 algorithm? (This is what I'm kinda expecting and would mean every one of  
 these fails. But I just want to check that is what was meant by the  
 wording.)

I believe "veto" is giving the wrong idea here as something that
happens after the fact.  What's the term for only allowing acceptable
candidates to put their names on the ballot?  Anyway, as TSa surmises,
the ballot is vetted or stacked in advance--only those candidates
that *could* bind are considered to be part of the candidate set.
In the abstract, candidates that cannot match never get their names
on the ballot, though of course an implementation might choose to
determine this lazily as long as it preserves the same semantics.

Alternately, even if the list of valid candidates is determined
eagerly, if the candidate list construction is memoized based on the
typeshape of the Capture, it will generally not have to be redone
until you see a different typeshape (where the meaning of different
may depend on how specific the signatures are, and in particular on
whether any of the signatures rely on subset constraints (including
individual values, which are just degenerate subsets)).
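Larry's "only candidates that could bind get on the ballot" semantics, plus
memoization by typeshape, can be sketched in Python (hypothetical names;
Python's int/float/str stand in for Int/Num/Str, and "could bind" is
approximated by an arity check plus subclass checks):

```python
from functools import lru_cache

# Hypothetical candidate table mirroring Jonathan's example.  The part
# before ';;' would be ranked for specificity; here every parameter simply
# gates admission to the candidate set (the "ballot").
CANDIDATES = [
    ("foo/1", (int, float)),         # multi foo(Int $a;; Num $b)
    ("foo/2", (int, str)),           # multi foo(Int $a;; Str $b)
    ("foo/3", (int, float, float)),  # multi foo(Int $a;; Num $b, Num $c)
]


def could_bind(sig, typeshape):
    # a candidate gets on the ballot only if arity and types could bind
    return (len(sig) == len(typeshape)
            and all(issubclass(t, s) for t, s in zip(typeshape, sig)))


@lru_cache(maxsize=None)
def viable(typeshape):
    # memoized per typeshape, as Larry suggests an implementation might do
    return tuple(name for name, sig in CANDIDATES if could_bind(sig, typeshape))


def dispatch(*args):
    names = viable(tuple(type(a) for a in args))
    if not names:
        raise TypeError("no applicable candidates")
    if len(names) > 1:
        raise TypeError("ambiguous dispatch")
    return names[0]
```

Under this model foo(2, 2.5) reaches #1 and foo(1, 2.5, 3.4) reaches #3
with no ambiguity, because the non-binding candidates never make the ballot.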

Larry


Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 13:20 -0400, Will Coleda wrote:
 On Tue, Aug 5, 2008 at 1:10 PM, chromatic [EMAIL PROTECTED] wrote:
  Gah, no maintenance releases please!  See Mommy, why did it take over five
  years to release a new stable version of Perl 5 with a bugfix I made in
  2002?

 Perhaps I used an official term when I didn't mean to here.
 
 Let's simplify: I can easily see us needing at least dev and
 production branches (one of which can be trunk), which is one more
 than we have now.

We will definitely need multiple long-lived branches.  Just to make
explicit the reasoning: data loss, security, or otherwise critical
bugfixes that should be backported to one or more already released
versions and re-released immediately.  That's a lot harder if you don't
have release branches.  Of course, you can branch lazily, since releases
are tagged.  But we have to assume that there *will* be multiple
long-lived branches that won't merge and go away.

However, I'm against the practice of branching before release to
stabilize an assumed-crazy trunk.  I prefer the (general) way we do
things now: releases are made from the trunk code, which is kept as
high-quality as possible; small changes are made directly to trunk;
large changes are made in a branch and merged to trunk when ready.

The details may be ripe for improvement, however.  There seems to be an
implicit assumption from at least some of us that a merge back to
trunk should be (or even 'needs' to be) an all or nothing affair.
Several SCMs make it easier to cherry-pick changes from the branch,
merge them back to trunk, and keep the diff in even a long-lived feature
development branch as small as possible.  git for example (combined with
Stacked GIT or a similar tool) has decent support for altering existing
commits in a branch to make them easier to merge piecemeal.  I don't
have enough SVK fu to know how well this development model is supported
there.
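The piecemeal style Geoffrey describes can be sketched with stock git (no
StGit needed for the simple case): land one finished commit from a
long-lived feature branch onto trunk while the in-progress work stays on
the branch.  Repo and branch names below are hypothetical.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

git checkout -qb trunk
echo base > a.txt
git add a.txt
git commit -qm "trunk: base"

git checkout -qb feature
echo small > fix.txt
git add fix.txt
git commit -qm "feature: small finished piece"
echo big > wip.txt
git add wip.txt
git commit -qm "feature: still in progress"

ready=$(git rev-parse feature~1)       # the one commit that is ready to land
git checkout -q trunk
git cherry-pick "$ready" >/dev/null    # merge just that piece back to trunk
git log --format=%s -1
```

Trunk now carries the finished piece, and the diff remaining on the feature
branch stays as small as possible.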


-'f




Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 11:19 -0400, Jesse Vincent wrote:
 [branch feature]

This sounds very useful.  Is the SVK paradigm changing so that online
use is assumed, and offline is a mode to switch to temporarily?  I'm
used to thinking of SVK in one of two ways:

1. As a better SVN client for normal always-online use
2. As a full-time disconnected client, with rare online use
   to merge back to the SVN master

Is this new branch mode intended to generalize and replace the above
two?  Or is it a third use case entirely?

 If this seems appealing, I'm sure I could get some clkao cycles if  
 there's more you folks need.

My biggest request (which you may or may not have any influence over) is
better distro packaging.  Both Debian and rpmforge have gone through
periods where SVK was completely fubar.  In fact, earlier this year
Debian screwed up their SVK package to the point of helpfully
uninstalling it and making it impossible to reinstall.  And when the
replacement finally came, after a very long wait, it crashed all over
the place.  I avoided data loss by the skin of my teeth.  That whole
situation is what made me try git-svn -- I didn't have another decent
choice for disconnected Parrot work.

Anyway, applying some resources here and there to help the distro
packagers may have a big positive effect on the SVK user base.


-'f




Re: Branching

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 12:35:50 Geoffrey Broadwell wrote:

 We will definitely need multiple long-lived branches.  Just to make
 explicit the reasoning: data loss, security, or otherwise critical
 bugfixes that should be backported to one or more already released
 versions and re-released immediately.  That's a lot harder if you don't
 have release branches.  Of course, you can branch lazily, since releases
 are tagged.  But we have to assume that there *will* be multiple
 long-lived branches that won't merge and go away.

That's horrible.  I doubt you give morphine to a five-year-old who falls and
scrapes his knee.  This is the equivalent in releasing software.

I can see patching the previous release in case of a critical bugfix, but if 
we get in the habit of encouraging users to expect updates of anything older 
than the previous stable release for free, we've doomed the project.

Point releases every month.  Major releases every three months.  Complete and 
utter refusal to support users who expect that they can install Parrot 1.0 
and get free support from the mailing list or IRC for the next eight to ten 
years.  If we don't set those expectations now, we might avoid the problem of 
these leeches in the future.  (Again, see Perl 5 for an example of why 
maintaining multiple stable branches doesn't work, why encouraging people not 
to upgrade doesn't work, why release candidates don't work, and why 
feature-based releases don't work.)

 However, I'm against the practice of branching before release to
 stabilize an assumed-crazy trunk.  I prefer the (general) way we do
 things now: releases are made from the trunk code, which is kept as
 high-quality as possible; small changes are made directly to trunk;
 large changes are made in a branch and merged to trunk when ready.

Agree.

 The details may be ripe for improvement, however.  There seems to be an
 implicit assumption from at least some of us that a merge back to
 trunk should be (or even 'needs' to be) an all or nothing affair.

Traditionally, it has been.  If mergebacks are small and frequent, I can live 
with long-lived branches.

The problem isn't that branches live too long.  The problem is that features 
are too big and take too long to develop.  Small, working, isolated steps are 
always better than big thuds.  Branches only seem to protect us from those 
big thuds by keeping trunk stable.  (If you can't keep your branch stable... 
well, there's another problem obvious to anyone who watched the concurrency 
branch.)

-- c


Re: A few multiple dispatch questions

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 12:01:29 Larry Wall wrote:

 I believe veto is giving the wrong idea here as something that
 happens after the fact.  What's the term for only allowing acceptable
 candidates to put their names on the ballot?

disenfranchise

-- c


Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 12:54 -0700, chromatic wrote:
 On Tuesday 05 August 2008 12:35:50 Geoffrey Broadwell wrote:
  bugfixes that should be backported to one or more already released
  versions and re-released immediately.

 I can see patching the previous release in case of a critical bugfix, but if 
 we get in the habit of encouraging users to expect updates of anything older 
 than the previous stable release for free, we've doomed the project.

That's why I was careful to say 'one or more'.  As in greater than zero,
but other than that it's a separate policy decision that I was not
trying to address in my previous message.

 Point releases every month.  Major releases every three months.

Agree, except I'd like to hear more about how you define a 'major
release'.

 Complete and 
 utter refusal to support users who expect that they can install Parrot 1.0 
 and get free support from the mailing list or IRC for the next eight to ten 
 years.

Half agree.  I agree that we should only *directly* support a release
for a limited time, though I think the minimum sane time would be major
release before current one -- 3-6 months at any given moment, given
your above schedule.  In other words, just because we do a new 3 month
release, doesn't mean we immediately de-support the one we did just 3
months ago.

Now, I might argue for a longer direct support schedule than just 'most
recent + 1', but I think any less than that can't work in real life.

Beyond that, I think we need to explicitly acknowledge that distro
packagers have a longer schedule to care about.  While we may not
support them directly, we still need to have a process in place to make
sure they are notified about critical problems that may apply to
previous releases, so that they can go back and check/patch their
versions.  We should also facilitate any process that will help
different distros to help each other to backport our trunk fixes in a
timely fashion.

In short, we don't have to do the hard work for the distros ourselves,
but we can't leave them out in the cold, either.


-'f




Re: Branching

2008-08-05 Thread Michael Peters

Geoffrey Broadwell wrote:

  Complete and
  utter refusal to support users who expect that they can install Parrot 1.0
  and get free support from the mailing list or IRC for the next eight to ten
  years.

 Half agree.  I agree that we should only *directly* support a release
 for a limited time, though I think the minimum sane time would be major
 release before current one -- 3-6 months at any given moment, given
 your above schedule.  In other words, just because we do a new 3 month
 release, doesn't mean we immediately de-support the one we did just 3
 months ago.

 Now, I might argue for a longer direct support schedule than just 'most
 recent + 1', but I think any less than that can't work in real life.


We also need to think about deprecation cycles. If you deprecate a 
feature in 1 version and then it disappears in the next then the time 
between when my code works and when it doesn't is only 6 months. Some 
distros provide support for several years.


--
Michael Peters
Plus Three, LP



Re: Branching

2008-08-05 Thread Jesse Vincent


On Aug 5, 2008, at 3:46 PM, Geoffrey Broadwell wrote:


 On Tue, 2008-08-05 at 11:19 -0400, Jesse Vincent wrote:
  [branch feature]

 This sounds very useful.  Is the SVK paradigm changing so that online
 use is assumed, and offline is a mode to switch to temporarily?

No. But that's a common enough use case that it should be easy.

 I'm used to thinking of SVK in one of two ways:

    1. As a better SVN client for normal always-online use
    2. As a full-time disconnected client, with rare online use
       to merge back to the SVN master

 Is this new branch mode intended to generalize and replace the above
 two?  Or is it a third use case entirely?

It's a layer of sugar which we've found to be helpful in both use cases.

  If this seems appealing, I'm sure I could get some clkao cycles if
  there's more you folks need.

 My biggest request (which you may or may not have any influence over) is
 better distro packaging.  Both Debian and rpmforge have gone through
 periods where SVK was completely fubar.

We've applied gentle pressure where we can for this. What we _have_
done is build binary releases of SVK which should work on a modern
(mac, linux, win32) box without depending on a distribution or
packager's limited time and resources.

Builds for 2.2b1 should appear at http://download.bestpractical.com/pub/svk/
within a few days.


Re: Branching

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 13:14:27 Geoffrey Broadwell wrote:

  I can see patching the previous release in case of a critical bugfix, but
  if we get in the habit of encouraging users to expect updates of anything
  older than the previous stable release for free, we've doomed the
  project.

 That's why I was careful to say 'one or more'.  As in greater than zero,
 but other than that it's a separate policy decision that I was not
 trying to address in my previous message.

Fair enough.

  Point releases every month.  Major releases every three months.
 Agree, except I'd like to hear more about how you define a 'major
 release'.

Deprecation takes one major release.  We support the current major release 
(and possibly the previous major release for critical bugs only).

 Half agree.  I agree that we should only *directly* support a release
 for a limited time, though I think the minimum sane time would be major
 release before current one -- 3-6 months at any given moment, given
 your above schedule.  In other words, just because we do a new 3 month
 release, doesn't mean we immediately de-support the one we did just 3
 months ago.

Right.

 Now, I might argue for a longer direct support schedule than just 'most
 recent + 1', but I think any less than that can't work in real life.

I'm not sure any more of that can work in real life.

 Beyond that, I think we need to explicitly acknowledge that distro
 packagers have a longer schedule to care about.  While we may not
 support them directly, we still need to have a process in place to make
 sure they are notified about critical problems that may apply to
 previous releases, so that they can go back and check/patch their
 versions.  We should also facilitate any process that will help
 different distros to help each other to backport our trunk fixes in a
 timely fashion.

 In short, we don't have to do the hard work for the distros ourselves,
 but we can't leave them out in the cold, either.

I can imagine creating a mailing list for critical update notifications, but 
hold little hope for support from downstream unless the packager happens to 
be a contributing member of the project.

-- c


Re: Branching

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 13:19:52 Michael Peters wrote:

 If you deprecate a
 feature in 1 version and then it disappears in the next then the time
 between when my code works and when it doesn't is only 6 months. Some
 distros provide support for several years.

If they want to support ancient versions of code, that's their choice.  
Presumably they have customers paying them to do so.

(Also, no one forces you to upgrade.  If you want to keep running an ancient 
version, feel free.  Just don't expect me to care if you find a bug we fixed 
ancient - 1 years ago.  You get the source code for all previous versions.  
You get support for recent versions for free.  That's still quite a deal, 
even if for everything else you get my rate card.)

-- c


Re: Branching

2008-08-05 Thread Geoffrey Broadwell
On Tue, 2008-08-05 at 16:19 -0400, Michael Peters wrote:
 We also need to think about deprecation cycles. If you deprecate a 
 feature in 1 version and then it disappears in the next then the time 
 between when my code works and when it doesn't is only 6 months. Some 
 distros provide support for several years.

Which reminds me: chromatic, what was your reasoning for major releases
being every three months, instead of four or six?

I agree we don't want to go much beyond six months for our major
releases, but with at least two major distros that aim for decent
freshness (Ubuntu and Fedora) using six month release cycles, I'm
curious what we gain with a shorter cycle than that.

A six month release cycle makes deprecation-and-removal a one year
affair, which isn't too bad.  And we can fairly tell users who want more
stability than that to use the slow distro that matches each fast
distro we aim for -- Debian instead of Ubuntu, RHEL/CentOS instead of
Fedora, for example.

(Separately, I agree that one month point releases seem to work well for
us.  I don't see any reason to change that.)


-'f




Re: Branching

2008-08-05 Thread jerry gay
On Tue, Aug 5, 2008 at 1:47 PM, Geoffrey Broadwell [EMAIL PROTECTED] wrote:
 On Tue, 2008-08-05 at 16:19 -0400, Michael Peters wrote:
 We also need to think about deprecation cycles. If you deprecate a
 feature in 1 version and then it disappears in the next then the time
 between when my code works and when it doesn't is only 6 months. Some
 distros provide support for several years.

 Which reminds me: chromatic, what was your reasoning for major releases
 being every three months, instead of four or six?

 I agree we don't want to go much beyond six months for our major
 releases, but with at least two major distros that aim for decent
 freshness (Ubuntu and Fedora) using six month release cycles, I'm
 curious what we gain with a shorter cycle than that.

 A six month release cycle makes deprecation-and-removal a one year
 affair, which isn't too bad.  And we can fairly tell users who want more
 stability than that to use the slow distro that matches each fast
 distro we aim for -- Debian instead of Ubuntu, RHEL/CentOS instead of
 Fedora, for example.

 (Separately, I agree that one month point releases seem to work well for
 us.  I don't see any reason to change that.)

please start a new thread as this has moved off-topic.
~jerry


Re: enterprise parrot

2008-08-05 Thread Eric Wilhelm
# from jerry gay
# on Tuesday 05 August 2008 14:13:
 On Tue, Aug 5, 2008 at 1:47 PM, Geoffrey Broadwell wrote:

  Which reminds me: chromatic, what was your reasoning for major
  releases being every three months, instead of four or six?

  I agree we don't want to go much beyond six months for our major
  releases, but with at least two major distros that aim for decent
  freshness (Ubuntu and Fedora) using six month release cycles, I'm
  curious what we gain with a shorter cycle than that.

 please start a new thread as this has moved off-topic.

Indeed.

There's quite a bit of ground left to be covered before anyone needs to 
worry about how much the support contracts are going to cost.

I imagine that release cycles and deprecation of parrot features isn't 
going to mean nearly as much churn to RHEL or Ubuntu LTS, or 
Debian stable users as it would with e.g. Perl 5 -- because they will 
typically be interfacing with the HLL, which provides a bit of buffer.

--Eric
-- 
A counterintuitive sansevieria trifasciata was once literalized 
guiltily.
--Product of Artificial Intelligence
---
http://scratchcomputing.com
---


[perl #57626] [BUG] perl6 -e 'say hello' == Segmentation fault

2008-08-05 Thread [EMAIL PROTECTED] (via RT)
# New Ticket Created by  [EMAIL PROTECTED] 
# Please include the string:  [perl #57626]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57626 


* Today I downloaded parrot-0.6.4 to my Debian PC.
* I also installed the libicu-dev, libicu38 packages before configuration
* cd parrot-0.6.4; perl Configure.pl; make; cd languages/perl6; make perl6
* It seems that every program that starts with 'say' produces a
Segmentation fault on my system
* I will happily provide additional info (if requested) or test newer
versions on my PC to verify that the bug is present/gone.

Best regards,

Yaakov


[perl #57630] [TODO] Deprecate DOD acronym for PDD09

2008-08-05 Thread via RT
# New Ticket Created by  Andrew Whitworth 
# Please include the string:  [perl #57630]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57630 


PDD09 says that the acronym DOD for dead object detection is
deprecated. At the moment, there are still a number of functions,
variables, datafields and mentions in the PDDs of this acronym. This
ticket is going to be a placeholder for keeping track of the status of
this deprecation effort.

I'll be working on this piecewise, when I have the tuits.

--Andrew Whitworth


[perl #57636] [TODO][PDD19] Document the reason for :unique_reg flag

2008-08-05 Thread via RT
# New Ticket Created by  Klaas-Jan Stol 
# Please include the string:  [perl #57636]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57636 


From pdd19:

The optional :unique_reg modifier will force the register allocator to
associate the identifier with a unique register for the duration of the
subroutine.


This, however, does not document /why/ you would want to do that. Why do we
have this flag?
Maybe add an example.

This needs to be clarified.


kjs


[perl #57634] [RFC] Remove .globalconst from PIR

2008-08-05 Thread via RT
# New Ticket Created by  Klaas-Jan Stol 
# Please include the string:  [perl #57634]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57634 


hi,

in PIR you can use the .globalconst directive in a sub to define a constant
that is globally accessible.
Likewise, you can use the .const directive in a sub to define a constant
that is local to that sub.

.sub foo
 .globalconst int answer = 42
 .const num PI = 3.14

.end

answer in this case is globally accessible (in any other sub that is parsed
AFTER the foo subroutine, I should note).
PI in this case is only accessible in this subroutine foo.


However, I question the need for .globalconst, as the .const directive can
also be used /outside/ of a subroutine, like so:

.const int answer = 42


Therefore, the .globalconst directive seems to be superfluous; why have two
directives that do the same thing?  If a .globalconst is accessible globally
anyway, there's no need to define it WITHIN a sub.

My proposal, then, is to remove the .globalconst directive:
whenever you need a global const, use .const outside of a
subroutine;
whenever you need a local const (in a sub), use .const inside a
subroutine.
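
For illustration, the proposed style would look something like this (a
sketch based on the semantics described above; 'answer', 'PI', and 'foo'
are just example names):

.const int answer = 42    # global const, defined outside any sub

.sub foo
 .const num PI = 3.14     # local const, visible only inside foo
 print answer             # the global const is usable here too
.end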

comments welcome,
kjs


[perl #57638] [IMCC] old-style PASM registers no longer supported.

2008-08-05 Thread via RT
# New Ticket Created by  Klaas-Jan Stol 
# Please include the string:  [perl #57638]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=57638 


From PDD19:

{{DEPRECATION NOTE: PIR will no longer support the old PASM-style syntax
for registers without dollar signs: In, Sn, Nn, Pn. }}
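
A minimal sketch of the two syntaxes (hypothetical example values,
assuming PIR's dollar-sign register form):

# old PASM-style register names (deprecated):
set I0, 42

# PIR register names, with dollar signs:
set $I0, 42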


kjs


Re: A few multiple dispatch questions

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 15:25:47 Bob Rogers wrote:

  On Tuesday 05 August 2008 12:01:29 Larry Wall wrote:
   I believe veto is giving the wrong idea here as something that
   happens after the fact.  What's the term for only allowing
   acceptable candidates to put their names on the ballot?

  disenfranchise

 In the context of balloting, that applies to voters; the equivalent word
 for candidates is qualify (or disqualify).

My state has closed primaries; same effect.

-- c



Re: [perl #57626] [BUG] perl6 -e 'say hello' == Segmentation fault

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 07:50:41 [EMAIL PROTECTED] (via RT) wrote:

 * Today I downloaded parrot-0.6.4 to my Debian PC.
 * I also installed the libicu-dev, libicu38 packages before configuration
 * cd parrot-0.6.4; perl Configure.pl; make; cd languages/perl6; make perl6
 * It seems that every program that starts with 'say' produces a
 Segmentation fault on my system
 * I will happily provide additional info (if requested) or test newer
 versions on my PC to verify that the bug is present/gone.

I just fixed one such bug in optimized builds (r30047) for all programs which 
use fakecutables such as perl6 or pbc_to_exe.  Can you show a program which 
demonstrates this error after that checkin?

-- c


Re: [perl #57608] [PATCH] add ports/cygwin

2008-08-05 Thread chromatic
On Tuesday 05 August 2008 01:35:48 Reini Urban wrote:

 Attached patch adds the directory ports/cygwin with
 the most recent cygports file,
 the most recent src patch and the sources for the CYGWIN patches.
 (the contents of parrot-0.6.4-2.cygwin.patch which creates those files
 in CYGWIN-PATCHES/)

Thanks, applied as r30048.

-- c


Re: [perl #57546] [PATCH] tags-xemacs

2008-08-05 Thread chromatic
On Sunday 03 August 2008 05:15:07 Reini Urban wrote:

 Attached patch adds support for the old ctags/etags from XEmacs 21

Thanks, applied as r30049.

-- c


Re: [perl #57486] [patch] Fix stat / lstat test failure on Cygwin

2008-08-05 Thread chromatic
On Thursday 31 July 2008 15:16:33 Donald Hunter wrote:

 This is a patch for t/pmc/os.t to fix test failures on Cygwin. This is
 generated against revision 29913.

Thanks, applied as r30050.

 One anomaly remains. The test was skipping the inode field for Cygwin
 because the installed Perl 5 is generating a longer number so I'm masking
 it. I will need to investigate why Perl 5 generates a larger inode value
 than Parrot. (Maybe stat is actually broken in Parrot on Cygwin).

That's a strong possibility.

-- c


Re: [perl #38432] [BUG] Exception thrown from constructor leads to oddness/segfault

2008-08-05 Thread chromatic
On Monday 04 August 2008 16:22:35 Will Coleda via RT wrote:

 Post pdd25cx mergeback, updating the syntax yet again, and simplifying it
 slightly, we have the attached file, which generates:

 ok #test exception from init vtable
 not ok #test exception from init vtable

 Even better, remove the 'end' opcode (which shouldn't be needed), and you
 get a segmentation fault. (see attached for backtrace)

The segmentation fault is because, in the exception handler, the current 
context used for an NCI call into the say method on ParrotIO (for example) 
has no calling context.

As to *why* that's happening, I'm not entirely sure.

-- c