The inevitable question: why not support Polish Notation or Reverse
Polish?  It is well known to be easier to use, since it involves no
ambiguities in regard to association, and is notationally clearer (not
needing parentheses).  It also seems to make it easier to analyze what
one might want to mean by

[op] (p1 p2 p3 ... )

E.g., 

Convention 1

        [op] (p1 p2 p3 ... ) =by definition= p1 p2 op p3 op ... pN op

Convention 2
    
        [op] (p1 p2 p3 ... ) =by definition= p1 p2 p3 ... pN op op op ... op

One could support both conventions (with notational differences) ...

Example

        [^] (2 3 4) =defn 1= 2 3 ^ 4 ^ , i.e., (2^3)^4

        [^] (2 3 4) =defn 2= 2 3 4 ^ ^ , i.e., 2^(3^4)

Etc.
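For what it's worth, the two conventions are just the familiar left and
right folds.  A minimal sketch (Python standing in for illustration;
`fold_left` and `fold_right` are names made up here, not anything from
Perl 6):

```python
from functools import reduce

def fold_left(op, items):
    # Convention 1: p1 p2 op p3 op ... pN op  ==  ((p1 op p2) op p3) ...
    return reduce(op, items)

def fold_right(op, items):
    # Convention 2: p1 p2 ... pN op op ... op  ==  p1 op (p2 op (... op pN))
    # Walk the list from the right, swapping the operand order.
    return reduce(lambda acc, x: op(x, acc), reversed(list(items)))

pow_ = lambda a, b: a ** b
print(fold_left(pow_, [2, 3, 4]))   # (2^3)^4 = 4096
print(fold_right(pow_, [2, 3, 4]))  # 2^(3^4) = 2^81
```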

-
Hugh Miller
e-mail: [EMAIL PROTECTED]


-----Original Message-----
From: Darren Duncan [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 30, 2008 2:00 AM
To: p6l
Subject: Re: Query re: duction and precedence.

Mark J. Reed wrote:
> You anticipated me. So, is there a core method for 
> foldl/foldr/inject/reduce, or do you have to roll your own as in p5?
> 
> On 3/29/08, Larry Wall <[EMAIL PROTECTED]> wrote:
>> On Sat, Mar 29, 2008 at 10:18:53PM -0400, Mark J. Reed wrote:
>> : In general, is
>> :
>> : [op] (p1,p2,p3,p4...)
>> :
>> : expected to return the same result as
>> :
>> : p1 op p2 op p3 op p4...
>> :
>> : including precedence considerations?
>>
>> Yes, in fact the section on Reduction Operators uses exponentiation 
>> obliquely in one of its examples of something that should work 
>> right-to-left.  Admittedly it's not clearly stated there...
>>
>> But the basic idea is always that the reduction form should produce 
>> the same result as if you'd written it out with interspersed infixes.
>> It's a linguistic construct, not a functional programming construct.
>> It's not intended to be in the same league as foldl and foldr, and is
>> only just slightly beyond a macro insofar as it can intersperse 
>> however many operators it needs to for an arbitrarily sized list.
>> It's not making any attempt to deal with anonymous first-class 
>> functions though.  Call a real function for that. :)

I think it would be powerful, while not too difficult, for Perl 6's
"reduce" to be able to do everything you'd get in functional
programming, at least some of the time.

Generally speaking, as long as the base operator is associative, then

   [op] *$seq_or_array_etc

... should auto-parallelize with a deterministic result; or as long as
the base operator is both associative and commutative, then

   [op] *$set_or_bag_or_seq_or_array_etc

... should also auto-parallelize with a deterministic result.

And then you get all the functional programming goodies.  The first
example works for string catenation, but the second doesn't; the second
does work for sum|product|and|or|xor|union|intersection though.
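To make the auto-parallelize point concrete: an associative operator
lets the reducer regroup the operands into a pairwise tree without
changing the result, since only the grouping changes, never the operand
order.  A sketch in Python (the `tree_reduce` helper is invented here
for illustration, not an actual Perl 6 or Python facility):

```python
from functools import reduce
import operator

def tree_reduce(op, items):
    # Pairwise (tree-shaped) reduction: each pass combines adjacent
    # pairs, halving the list; legal whenever op is associative.
    items = list(items)
    while len(items) > 1:
        items = [reduce(op, items[i:i + 2])
                 for i in range(0, len(items), 2)]
    return items[0]

words = ["a", "b", "c", "d", "e"]
# String catenation is associative but not commutative, so regrouping
# is safe while reordering would not be:
print(tree_reduce(operator.add, words))   # same as left-to-right: 'abcde'
print(reduce(operator.add, words))        # 'abcde'
print(tree_reduce(operator.add, [1, 2, 3, 4, 5]))  # 15
```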

Some base operators have an identity value in case the input collection
is empty, as is the case with the above operators, but others only work
with a non-empty input, such as mean|median|mode.
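In Python terms, for example, the identity value is the optional third
argument to reduce, while an operator without one has no choice but to
reject empty input:

```python
from functools import reduce
import operator
import statistics

nums = []
print(reduce(operator.add, nums, 0))  # identity of + is 0
print(reduce(operator.mul, nums, 1))  # identity of * is 1
try:
    statistics.mean(nums)             # mean has no identity value
except statistics.StatisticsError as e:
    print("mean of empty input:", e)
```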

For other operators, the non-associative ones and so on, the work will
probably all have to be linear, e.g. difference|quotient|exponentiation.

Something I'm wondering, though, realistically how often would one
actually be reducing on an operator that is not associative?  What
practical use is there for [-] (3,4,5) for example?
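Read as a left fold it is just successive subtraction, (3 - 4) - 5; a
one-liner in Python for illustration:

```python
from functools import reduce
import operator

# [-] (3,4,5) as a left fold: (3 - 4) - 5
print(reduce(operator.sub, [3, 4, 5]))  # -6
```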

Are you just supporting that with all operators for parsing rule
simplicity as per a macro?  I can understand that reasoning, but
otherwise ...

I would think it makes sense to restrict the use of the reduction
meta-operator to just work over operators that are at least associative.

-- Darren Duncan
