Hi Doug,

Doug McNutt writes:
> >The first source of discussion was around whether there is any
> >consistent meaning to array operations. The source of this was that
> >some felt that other people may want C<*> to mean something other
> >than element-wise multiplication by default (e.g. matrix inner
> >product). However no-one actually said that I<they> wanted it to work
> >this way, only that I<others> may prefer it, particularly
> >mathematicians.
> 
> This physicist wants it that way. In scalar context $dotproduct = @A
> >>*<< @B; ought to return the scalar (dot, inner) product of vectors @A
> and @B by which I mean the sum of the products of like components. In
> array context it is quite reasonable to return the element by element
> products but it will be for accounting rather than for electrical
> engineering, mapmaking, and 3D rendering.
> 
> @C = @A >>+<< @B; returning the vector sum is mathematically correct.
> 
> An intrinsic ability to figure the dot product is useful for matrix
> multiplication. 

I was sincerely pissed when PDL, the Perl language _for_ mathematicians,
was not capable of easily doing a tensor inner product.  The "threading"
system was too advanced, and you needed to jump through hoops to get it
to do the simple thing.

While Perl 6 is not being designed as a language for mathematicians,
it's not being designed as a language against mathematicians.  I want an
inner product.

One can generally do a dot product like so:

    $prod = @a >>*<< @b ==> sum;

Where:

    sub sum(*@data) { reduce { $^a + $^b } 0, @data }

But

    $prod = @a >>*<< @b;

won't get you the dot product without some kind of module.  It'll get
you a reference to the array whose elements are @a[$i] * @b[$i].  And
putting a numeric context on that won't do it either; that will give you
the length of the resulting array.
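
Just to put numbers on it (same syntax as above, worked out by hand; the
exact spelling may of course still shift before things settle):

    my @a = (1, 2, 3);
    my @b = (4, 5, 6);

    my @c    = @a >>*<< @b;        # (4, 10, 18) -- element by element
    my $prod = sum(@a >>*<< @b);   # 4 + 10 + 18 = 32, the inner product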

PDL has a lot of liberty in Perl 6, though.  Its authors can easily
overload the piddle operations to do the Right Thing.  If they so
desire (I don't know whether I'd be for or against this), they could
even make up a whole new piddle sigil, so you could go:

    @>pid = (1,2,3,4,5);

And have the ordinary scalar operators work right on it automatically.
That might actually be the way to go, because when I'm programming these
sorts of things, I have arrays and I have vectors, and I rarely
interchange them.
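
To make that concrete, here's roughly the kind of overload I mean (a
sketch only: the Matrix class, its guts, and the exact multi/infix
spelling are mine, not anything that's been specced):

    class Matrix {
        has @.rows;   # each element is one row, itself an array
    }

    # Real mathematical multiplication for Matrix objects; ordinary
    # arrays keep the element-wise >>*<< behavior.
    multi sub infix:<*> (Matrix $a, Matrix $b) {
        my @result;
        for 0 .. $a.rows.end -> $i {
            for 0 .. $b.rows[0].end -> $j {
                my @col = $b.rows.map({ $_[$j] });   # column $j of $b
                @result[$i][$j] = sum($a.rows[$i].list >>*<< @col);
            }
        }
        Matrix.new(rows => @result);
    }

With that in place, using the matrices from the RFC 82 example quoted
below:

    my $m1 = Matrix.new(rows => ([1,2], [3,4]));
    my $m2 = Matrix.new(rows => ([2,2], [1,1]));
    my $m3 = $m1 * $m2;   # rows ([4,4], [10,10]), not the element-wise
                          # ([2,4], [3,4])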

> RFC 82 offers a mathematically abhorrent example of
> element by element "matrix" multiplication:
> 
> > my int @mat1 = ([1,2],  [3,4]);
> > my int @mat2 = ([2,2],  [1,1]);
> > my @mat3 = @mat1 * @mat2;   # ([2,4],[3,4])
>  
> Perhaps the designations "mat1, 2, 3" are not intended to be
> abbreviations for the word matrix but it is dangerous indeed to
> provide a syntax that will surely be confused with true matrix
> multiplication. @matC = @matA >>*<< @matB should produce elements in
> the result which are the dot products of the row vectors of the left
> matrix with the column vectors of the right. Note that one or the
> other of the input matrices needs to be transposed and some
> standardized notation is required. Matrix multiplication is not
> commutative.
> 
> The apocalypse at <http://dev.perl.org/perl6/apocalypse/A03.html> has
> this to say about confusing people.
> 
> >Anyway, in essence, I'm rejecting the underlying premise of this RFC
> >(82), that we'll have strong enough typing to intuit the right
> >behavior without confusing people. Nevertheless, we'll still have
> >easy-to-use (and more importantly, easy-to-recognize)
> >hyper-operators.
> 
> And then it goes on about other vector-like things:.
> 
> >This RFC also asks about how return values for functions like abs( )
> >might be specified. I expect sub declarations to (optionally) include
> >a return type, so this would be sufficient to figure out which
> >functions would know how to map a scalar to a scalar.
> 
> There is the norm of a vector, actually the length if we're talking
> about 3 dimensional geometry, which can be considered the abs(@A) as a
> scalar. It would be the positive square root of the sum of the squares
> of the components.  = sqrt ( @A >>*<< @A ).
> 
> It would also be possible to include a scalar norm of a matrix which
> would be its determinant but that's probably fodder for a module
> dedicated to solution of linear equations.

And abs() really doesn't seem like the place to put the norm.  That
feels more like a job for norm() to me.
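
Something like this, leaning on the sum() from above (again, just a
sketch; norm() isn't specced anywhere):

    # Euclidean norm: positive square root of the sum of the squares
    # of the components.
    sub norm (@v) { sqrt( sum(@v >>*<< @v) ) }

    my @a = (3, 4);
    say norm(@a);    # 5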

> An element by element concatenation of two arrays of text is certainly
> reasonable. As a multiplication I'm not sure what it would mean but
> it's a bit like the @mat1 >>*<< @mat2 in the quote above. Not very
> interesting or useful.
> 
> Whatever be the final result for perl 6 syntax it ought either to use
> the terms matrix and vector as employed by the likes of Maxwell and
> Heisenberg and provide mathematically correct operators or it should
> avoid those terms when documenting hyper-operators that do something
> else.

So, I guess it won't be comforting to you to hear that we're calling
hyper-operators vector operators.  I personally think the operators
themselves aren't misleading, though.  They have clear and well-defined
semantics, and when the operands are one-dimensional arrays, "vector
operator" makes a lot of sense.

Anyway, I'm kind of losing my point.  Basically, I think that the
mathematically correct stuff should be saved for other kinds of
variables or other kinds of operators.  It's more important to have
consistent and useful semantics for the vector operators than it is to
make >>*<< mathematically correct on plain arrays.

PDL will likely be the place to go for these semantics.  Considering my
vast disappointment with the current state of PDL, I'll probably be a
contributor this second time 'round.

> And then there are those two-component vectors which are complex
> numbers. . . and vector cross products. . . and quaternions. . .

All of which deserve types and their own set of semantics.
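
For instance, a complex type could carry its own multiplication along
with it (sketch only; the Cplx name and its innards are made up for
illustration):

    class Cplx {
        has $.re;
        has $.im;
    }

    # Complex multiplication: (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    multi sub infix:<*> (Cplx $x, Cplx $y) {
        Cplx.new( re => $x.re * $y.re - $x.im * $y.im,
                  im => $x.re * $y.im + $x.im * $y.re );
    }

    my $i = Cplx.new(re => 0, im => 1);
    my $sq = $i * $i;
    say $sq.re, ' ', $sq.im;    # -1 0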

Luke
