Re: No C<pow> op with PMC arguments?

2004-11-09 Thread Jeff Clites
On Nov 8, 2004, at 3:08 AM, Leopold Toetsch wrote:
Jeff Clites [EMAIL PROTECTED] wrote:
No. The binary operations in Python are opcodes, as well as in Parrot.
And both provide the syntax to override the opcode doing a method call,
that's it.

I guess we'll just have to disagree here. I don't see any evidence of this
UTSL please. The code is even inlined:
,--[ Python/ceval.c ]
|   case BINARY_ADD:
|   w = POP();
|   v = TOP();
|   if (PyInt_CheckExact(v) && PyInt_CheckExact(w)) {
|   /* INLINE: int + int */
|   register long a, b, i;
|   a = PyInt_AS_LONG(v);
|   b = PyInt_AS_LONG(w);
|   i = a + b;
|   if ((i^a) < 0 && (i^b) < 0)
|   goto slow_add;
|   x = PyInt_FromLong(i);
`
But I said, from an API/behavior perspective. How the regular Python 
interpreter is implemented isn't the point--it's how the language acts 
that's important. And I can't think of any user code in which a+b and 
a.__add__(b) act differently, and I think that's intentional--an 
explicit language design decision. The implementation above of BINARY_ADD is 
most likely an optimization--the code for BINARY_MULTIPLY (and 
exponentiation, division, etc.) looks like this:

case BINARY_MULTIPLY:
w = POP();
v = TOP();
x = PyNumber_Multiply(v, w);
Py_DECREF(v);
Py_DECREF(w);
SET_TOP(x);
if (x != NULL) continue;
break;
And again, what about Ruby? If you believe in matching the current 
philosophy of the language, it won't use ops for operators (but rather, 
method calls), and won't do the right thing for objects with 
vtable/MMDs, and no corresponding methods.

Not actually MMD in Python--behavior only depends on the left operand,
it seems.
It's hard to say what Python actually does. It's a mess of nested if's.
Just look at the behavior--that's what's important:
Behavior depends only on left operand:
>>> class Foo:
...     def __add__(a,b): return 7
...
>>> x = Foo()
>>> x + x
7
>>> x + 3
7
>>> x + b
7
>>> x + (1,2)
7
All the following are error cases. The error message varies depending on 
the left operand only:

>>> 3 + b
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for +: 'int' and 'str'
>>> 3 + x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for +: 'int' and 'instance'
>>> 3 + (1,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsupported operand type(s) for +: 'int' and 'tuple'
>>> b + 3
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'int' objects
>>> b + x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'instance' objects
>>> b + (1,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'tuple' objects
>>> (1,2) + 3
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: can only concatenate tuple (not "int") to tuple
>>> (1,2) + b
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: can only concatenate tuple (not "str") to tuple
>>> (1,2) + x
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: can only concatenate tuple (not "instance") to tuple
   null dest
   dest = l + r
should produce a *new* dest PMC.

Yes, it's a separate issue, but it's pointing out a general design
problem with these ops--their baseline behavior isn't useful.
It *is* useful. If the destination exists, you can use it. The
destination PMC acts as a reference then, changing the value in place.
But in case of Python it's not of much use
Right, changing the value in-place would do the wrong thing, for 
Python. (It depends on whether the arguments to the op are references, 
or the actual values. If they're references, then it can work 
correctly, but then we don't want to be MMD dispatching on the 
(reference) types, but rather on the types of what they're pointing 
to.)

except for the inplace (augmented) operations.
Yes, but that ends up being just for the two-argument forms (and even 
those don't work for Python--a += 3 doesn't really update in-place in 
Python, but returns a new instance). In-place operators tend to only 
take one argument on the right side, so the p_p_p forms aren't useful 
for this.
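The int-versus-list contrast behind this can be checked directly in present-day Python (an illustrative sketch, not code from the thread):

```python
# For ints, "a += 3" does not update in place: it rebinds the name
# to a brand-new object. Mutable types like lists are the exception.
a = 5
before = id(a)
a += 3
assert a == 8
assert id(a) != before   # a new int object was produced

lst = [1, 2]
before = id(lst)
lst += [3]               # list.__iadd__ mutates the existing object
assert lst == [1, 2, 3]
assert id(lst) == before # same object, changed in place
```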

..., but for
PMCs this could compile like a = b.plus(c).

but you don't need add_p_p_p, just method invocation.
Why should we do method invocation, with all its overhead, if for the
normal case a plain function call will do?
Ah, that's the key. Method invocation 

Re: No C<pow> op with PMC arguments?

2004-11-09 Thread Dan Sugalski
At 12:44 AM -0800 11/9/04, Jeff Clites wrote:
And again, what about Ruby? If you believe in matching the current 
philosophy of the language, it won't use ops for operators (but 
rather, method calls), and won't do the right thing for objects with 
vtable/MMDs, and no corresponding methods.
Yes it will. You're very much missing the point here. Method calls 
absolutely *can't* be used, and the MMD system *must* be used by all 
language compilers. Any language that doesn't won't handle PMCs 
coming from other languages, and operations with those PMCs won't 
work. This is bad.

The MMD system, as it is set up, allows for the current python/ruby 
semantics. That's what the default function for a type does. It also 
allows for the full and proper MMD semantics in any case where an 
overridden operation exists.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: No C<pow> op with PMC arguments?

2004-11-08 Thread Leopold Toetsch
Jeff Clites [EMAIL PROTECTED] wrote:
 On Nov 5, 2004, at 9:40 AM, Leopold Toetsch wrote:

 In Python, semantically you know that you'll end up doing a method call
 (or, behaving as though you had), so it's very roundabout to do a
 method call by using an op which you know will fall back to doing a
 method call. Clearer just to do the method call.

No. The binary operations in Python are opcodes, as well as in Parrot.
And both provide the syntax to override the opcode doing a method call,
that's it.

 The only thing that's special is that there are certain built-in
 classes, and some of them implement __pow__, but that's not really
 anything special about __pow__.

Yes. And these certain *builtin* classes have MMD functions for binary
opcodes.

 And even the ops we currently have are broken semantically. Consider a
 = b + c in Python. This can't compile to add_p_p_p, because for that
 op to work, you have to already have an existing object in the first P
 register specified. But in Python, a is going to hold the result of
 b + c, which in general will be a new object and could be of any
 type, and has nothing to do with what's already in a.

That's a totally different thing and we have to address that. I have
already proposed that the sequence:

   null dest
   dest = l + r

should produce a *new* dest PMC. That's quite simple. We just have to
pass the address of the dest PMC pointer instead of the PMC to all such
operations. Warnocked.

 ... I think
 we should create PMC-based ops only if one of the following criteria
 are met: (a) there's no other reasonable way to provide some needed
 functionality,

So, this is already a perfect reason to have these opcodes with PMCs.

  a = b + c

behaves differently, if b and c are plain (small) integers or
overflowing integers or complex numbers and so on. You can't provide
this functionality w/o PMCs.

 JEff

leo


Re: No C<pow> op with PMC arguments?

2004-11-08 Thread Jeff Clites
On Nov 8, 2004, at 12:50 AM, Leopold Toetsch wrote:
Jeff Clites [EMAIL PROTECTED] wrote:
On Nov 5, 2004, at 9:40 AM, Leopold Toetsch wrote:

In Python, semantically you know that you'll end up doing a method call
(or, behaving as though you had), so it's very roundabout to do a
method call by using an op which you know will fall back to doing a
method call. Clearer just to do the method call.
No. The binary operations in Python are opcodes, as well as in Parrot.
And both provide the syntax to override the opcode doing a method call,
that's it.
I guess we'll just have to disagree here. I don't see any evidence of 
this from an API/behavior perspective in Python. I think the existence 
of a separate Python opcode is just a holdover from a time when these 
infix operators only existed for built-in types (just inferring this). 
I can't find any case where Python would act differently, if these were 
compiled directly to method calls. And for Ruby, it's *explicit* that 
these operators are just method calls. And languages like Java don't 
have these operators at all, for objects.
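The invariant being relied on here is easy to check with a made-up class (illustrative sketch; Vec is invented, not from the thread):

```python
# For a user-defined class, "v + w" and "v.__add__(w)" are two spellings
# of the same operation: same result, same failure modes.
class Vec:
    def __init__(self, x):
        self.x = x
    def __add__(self, other):
        return Vec(self.x + other.x)

v, w = Vec(1), Vec(2)
assert (v + w).x == 3          # infix syntax
assert v.__add__(w).x == 3     # explicit method call, same effect
```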

The only thing that's special is that there are certain built-in
classes, and some of them implement __pow__, but that's not really
anything special about __pow__.
Yes. And these certain *builtin* classes have MMD functions for binary
opcodes.
Not actually MMD in Python--behavior only depends on the left operand, 
it seems.

And even the ops we currently have are broken semantically. Consider 
a = b + c in Python. This can't compile to add_p_p_p, because for that
op to work, you have to already have an existing object in the first P
register specified. But in Python, a is going to hold the result of
b + c, which in general will be a new object and could be of any
type, and has nothing to do with what's already in a.
That's a totally different thing and we have to address that. I have
already proposed that the sequence:
   null dest
   dest = l + r
should produce a *new* dest PMC. That's quite simple. We just have to
pass the address of the dest PMC pointer instead of the PMC to all such
operations. Warnocked.
Yes, it's a separate issue, but it's pointing out a general design 
problem with these ops--their baseline behavior isn't useful. The 
result of l + r will not depend on what's to the left of the = by 
HLL semantics, for any case I can think of. (Perl cares about context, 
but that's not really the same thing.)

... I think
we should create PMC-based ops only if one of the following criteria
are met: (a) there's no other reasonable way to provide some needed
functionality,
So, this is already a perfect reason to have these opcodes with PMCs.
  a = b + c
behaves differently, if b and c are plain (small) integers or
overflowing integers or complex numbers and so on. You can't provide
this functionality w/o PMCs.
I don't understand this example. Certainly you need PMCs, but if b and 
c are I or N types, of course you'd use add_i_i_i or add_n_n_n, but for 
PMCs this could compile like a = b.plus(c). Of course you have to 
know I v. N v. P at compile-time, and there's no reason that I/N v. P 
pasm must look identical, for similar-looking HLL code. You need PMCs, 
but you don't need add_p_p_p, just method invocation.

For complex numbers and such, I'd want to be able to define classes for 
them in bytecode. For that to work, ops would eventually have to 
resolve to method calls anyway. (You can't create a new PMC and 
vtable/MMDs in bytecode.) Why not skip the middle-man?

JEff


Re: No C<pow> op with PMC arguments?

2004-11-08 Thread Leopold Toetsch
Jeff Clites [EMAIL PROTECTED] wrote:

 No. The binary operations in Python are opcodes, as well as in Parrot.
 And both provide the syntax to override the opcode doing a method call,
 that's it.

 I guess we'll just have to disagree here. I don't see any evidence of
 this

UTSL please. The code is even inlined:

,--[ Python/ceval.c ]
|   case BINARY_ADD:
|   w = POP();
|   v = TOP();
|   if (PyInt_CheckExact(v) && PyInt_CheckExact(w)) {
|   /* INLINE: int + int */
|   register long a, b, i;
|   a = PyInt_AS_LONG(v);
|   b = PyInt_AS_LONG(w);
|   i = a + b;
|   if ((i^a) < 0 && (i^b) < 0)
|   goto slow_add;
|   x = PyInt_FromLong(i);
`

 Not actually MMD in Python--behavior only depends on the left operand,
 it seems.

It's hard to say what Python actually does. It's a mess of nested if's.

null dest
dest = l + r

 should produce a *new* dest PMC.

 Yes, it's a separate issue, but it's pointing out a general design
 problem with these ops--their baseline behavior isn't useful.

It *is* useful. If the destination exists, you can use it. The
destination PMC acts as a reference then, changing the value in place.
But in case of Python it's not of much use, except for the inplace
(augmented) operations.

 ..., but for
 PMCs this could compile like a = b.plus(c).

 but you don't need add_p_p_p, just method invocation.

Why should we do method invocation, with all its overhead, if for the
normal case a plain function call will do?

 For complex numbers and such, I'd want to be able to define classes for
 them in bytecode. For that to work, ops would eventually have to
 resolve to method calls anyway.

This is all working now already. You can do that. Again: if a method is
there it's used (well, almost: MMD overriding isn't done yet, vtables are fine):

.sub main @MAIN
.local pmc MyInt
getclass $P0, "Integer"
subclass MyInt, $P0, "MyInt"
.local pmc i, j, k
$I0 = find_type "MyInt"
# current hack - MMD overriding still missing
$P0 = find_global "MyInt", "__add"
.include "mmd.pasm"
mmdvtregister .MMD_ADD, $I0, $I0, $P0
# end hack
i = new $I0
j = new $I0
k = new $I0
j = 2
k = 3
i = j + k
print i
print "\n"
.end
.namespace [ MyInt ]
.sub __add
.param pmc l
.param pmc r
.param pmc d
$I0 = l
$I1 = r
$I2 = $I0 + $I1
$I2 = 42   # test
d = $I2
.end

 JEff

leo


Re: No C<pow> op with PMC arguments?

2004-11-07 Thread Jeff Clites
On Nov 5, 2004, at 9:40 AM, Leopold Toetsch wrote:
Jeff Clites [EMAIL PROTECTED] wrote:
a) As Sam says, in Python y**z is just shorthand for
y.__pow__(z)--they will compile down to exactly the same thing
(required for Python to behave correctly).
I don't think so (and you can replace with add, sub, ... FWIW). All these
binops compile to Parrot opcodes.
I'm saying that, by Python semantics they do the same thing--it's an 
open question how they should compile, that's what we're discussing. It 
is true that in the real Python they do compile to different Python 
ops (probably for historical reasons), but semantically they are 
identical. And importantly, not only do they produce the same results, 
but also there's never a case in which one works and the other produces 
an error.

These call the MMD dispatcher. Now
depending on the type of left and right, we e.g. call into a function
living in classes/*.pmc that does the Right Thing.
A plain PMC does the binop (if it can do it). If it's an object
derived from a PMC it either delegates to the PMC or, if provided by the
user, to the __pow__ function. If it's an object it does a full method
lookup for that object ...
In Python, semantically you know that you'll end up doing a method call 
(or, behaving as though you had), so it's very roundabout to do a 
method call by using an op which you know will fall back to doing a 
method call. Clearer just to do the method call.

And currently, PerlInts used in Python code would handle a + b, but 
fail for a.__add__(b) (since that method isn't defined, currently). 
That breaks an invariant of Python--a + b should work if-and-only-if 
a.__add__(b) would, and should produce the same result. Compiling to 
ops, this isn't guaranteed, and currently in fact doesn't happen. Real 
Python behaves as though infix operators are just an alternate syntax 
for certain method calls. And Ruby explicitly defines them this 
way--infix notation is just a syntax trick. (Note: not true for *all* 
operators, but at least for the mathematical infix operators.)

... Since __pow__ isn't
special, we don't need anything to support it that we wouldn't need
for any other arbitrary method, say y.elbowCanOpener(z).
No. One is builtin and one isn't. These are really different. The only
common thing is that the former can be overridden, while the latter is
always provided by user code.
I wouldn't describe that as "can be overridden". If I define my own 
class, __pow__ is implemented on that class if-and-only-if I define it, 
just like any other method. I'm not overriding any default behavior--if 
I don't define it, it's just not there for my class.
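A short illustration of the point (hypothetical classes, not from the thread): __pow__ participates in the infix syntax exactly when it is defined, like any other method:

```python
# __pow__ is not "overriding" anything: define it and ** works,
# leave it out and ** is simply absent for that class.
class HasPow:
    def __pow__(self, other):
        return 42

class NoPow:
    pass

assert HasPow() ** 3 == 42        # defined, so infix works
assert HasPow().__pow__(3) == 42  # and the explicit call agrees
try:
    NoPow() ** 3
    raised = False
except TypeError:                 # not defined: no default to fall back on
    raised = True
assert raised
```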

The only thing that's special is that there are certain built-in 
classes, and some of them implement __pow__, but that's not really 
anything special about __pow__.

So I don't think an op really gives us what we want.
This sentence would obviously be true for all binops then. It isn't.
Yes, I think it's true for most of our PMC ops.
And even the ops we currently have are broken semantically. Consider a 
= b + c in Python. This can't compile to add_p_p_p, because for that 
op to work, you have to already have an existing object in the first P 
register specified. But in Python, a is going to hold the result of 
b + c, which in general will be a new object and could be of any 
type, and has nothing to do with what's already in a. I don't seem to 
have any way to add 2 PMCs, and have as the result a third PMC, whose 
type is not known at compile-time.
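Ordinary Python shows the same thing: one compiled addition site yields results whose type is only known at run time (an illustrative sketch):

```python
# The same bytecode runs for every call; the result type is a
# property of the runtime operands, not of the compiled code.
def add(b, c):
    return b + c

assert type(add(1, 2)) is int
assert type(add(1, 2.5)) is float
assert type(add("a", "b")) is str
assert type(add((1,), (2,))) is tuple
```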

So yes, I am pointing out what I think is a larger design problem in 
Parrot--it's not specific to pow at all. And I think that we should 
not define PMC ops just because corresponding I or N ops exist. I think 
we should create PMC-based ops only if one of the following criteria 
are met: (a) there's no other reasonable way to provide some needed 
functionality, (b) there is some significant performance benefit to 
providing it as an op, or (c) it's needed to provide some interpreter 
functionality like invoke, but I think that this case is already 
covered by (a) and (b). Using methods when possible would let us get 
rid of most of the PMC ops, including things such as pow which make 
no sense for most PMC types, and leave us with a smaller set that makes 
more sense across PMC types.

JEff


Re: No C<pow> op with PMC arguments?

2004-11-07 Thread Jeff Clites
On Nov 5, 2004, at 10:03 AM, Sam Ruby wrote:
Jeff Clites wrote:
a) As Sam says, in Python y**z is just shorthand for 
y.__pow__(z)--they will compile down to exactly the same thing 
(required for Python to behave correctly). Since __pow__ isn't 
special, we don't need anything to support it that we wouldn't need 
for any other arbitrary method, say y.elbowCanOpener(z).
[snip]
So I don't think an op really gives us what we want. (Specifically, 
we could define a pow_p_p_p op, but then Python wouldn't use it for 
the case Sam brought up.) I'd apply the same argument to many of the 
other p_p_p ops that we have--they don't give us what we need at the 
HLL level (though they may still be necessary for other uses).
It is my intent that Python *would* use this method.  What is 
important isn't that y**z actually call y.__pow__(z), but that the two 
have the same effect.

Let's take a related example: y+z.  I could make sure that each are 
PMCs
[Not really a need for a check--everything will be a PMC, right? Or if 
not, you need to know before emitting the op anyway. Side issue, 
though.]

and then find the __add__ attribute, retrieve it, and then use it as a 
basis for a subroutine call, as the semantics of Python would seem to 
require.  Or I could simply emit the add op.

How do I make these the same?  I have a common base class for all 
python objects which defines an __add__ method thus:

 METHOD PMC* __add__(PMC *value) {
 PMC * ret = pmc_new(INTERP, dynclass_PyObject);
 mmd_dispatch_v_ppp(INTERP, SELF, value, ret, MMD_ADD);
 return ret;
 }
... so, people who invoke the __add__ method explicitly get the same 
function done, albeit at a marginally higher cost.
There are three problems I see with this:
1) If you have 2 PerlInts in Python code, then a + b will work, but 
a.__add__(b) won't, since PerlInts won't inherit from your Python 
base class. To my mind, that breaks an invariant of Python.

2) If your __add__ method above were somehow in place even for 
PerlInts, it would produce a PyObject as its result, instead of a 
PerlInt, which is what I would have expected. That's a basic problem 
with our p_p_p ops currently--the return type can't be decided by the 
implementation of the MMD method which is called.

and...
Now to complete this, what I plan to do is to also implement all 
MMD/vtable operations at the PyObject level and have them call the 
corresponding Python methods.  This will enable user level __add__ 
methods to be called.

Prior to calling the method, there will need to be a check to ensure 
that the method to be called was, in fact, overridden.  If not, a 
type_error exception will be thrown.
3) As described by Leo, the op would call the MMD dispatcher, which 
would ultimately do the method call, then your method above would call 
the MMD dispatcher, so you'd get an infinite loop, right? And if you 
avoid this by adding some check (as you mentioned) to make sure you 
only call __add__ if it was overridden (really, implemented at the user 
level), then your default implementation above will never be called, 
right?
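The circularity can be modeled in a few lines of Python (a toy model of the dispatch, not Parrot code; mmd_add stands in for the MMD dispatcher, PyObject for the common base class):

```python
# The op's dispatcher falls back to __add__; the base class's default
# __add__ calls back into the dispatcher: mutual recursion.
def mmd_add(l, r):
    return l.__add__(r)          # dispatcher falls through to the method

class PyObject:
    def __add__(self, other):
        return mmd_add(self, other)  # default method re-enters MMD

try:
    mmd_add(PyObject(), PyObject())
    looped = False
except RecursionError:
    looped = True
assert looped                    # the loop Jeff describes
```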

If instead you just compile infix operators as method calls, then all 
of those problems go away, and it's much simpler conceptually and in 
terms of implementation.

And for Ruby (the language), it's explicit that infix operators are 
just an alternate syntax for method calls, so compiling them as ops is 
even more semantically problematic there.

JEff


Re: No C<pow> op with PMC arguments?

2004-11-07 Thread Jeff Clites
On Nov 4, 2004, at 5:24 AM, Sam Ruby wrote:
[Referring to infix operators as an alternate syntax for a named method 
call]

What's the downside of compiling this code in this way?  If you are a 
Python programmer and all the objects that you are dealing with were 
created by Python code, then not much.  However, if somebody wanted to 
create a language independent complex number implementation, then it 
wouldn't exactly be obvious to a Python programmer how one would raise 
such a complex number to a given power.  Either the authors of the 
complex PMC would have to research and mimic the signatures of all the 
popular languages, or they would have to provide a fallback method 
that is accessible to all and educate people to use it.
Yes, and I think that compiling using ops makes things worse, because 
of languages such as Java which don't have operator overloading, so 
you'd have to make all functionality available as method calls anyway, 
so why bother with the ops? Methods are much more flexible, and don't 
bloat the VM.

Ultimately, Parrot will need something akin to .Net's concept of a 
Common Language Specification which defines a set of rules for 
designing code to interoperate.  A description of .Net's CLS rules can 
be found in sections 7 and 11 (a total of six pages) in the CLI 
Partition I - Architecture document[1].
I think that ultimately, code will break down into two categories:
1) Code designed with multiple languages in mind.
2) Code designed with only one language in mind.
Code in case (2) will be awkward to use in other languages, but it 
should definitely be possible to use it somehow. For case (1), we need 
to make this easy for library authors to do.

In terms of method naming, we may want to do something automatic (if 
you name your method such and such, it will appear to Python named 
this, and Ruby named that, and Perl named...), or it may be better to 
provide an explicit way to create language-specific method aliases. 
Some cases are simple (__mul__ in Python and * in Ruby and 
multiply in Java should all map to the same method for mathematical 
objects, probably), but others are more subtle (adding two array-like 
things means something different in different languages, potentially: 
append v. componentwise add v. componentwise add only if they have the 
same length). Doing something automatic saves a bunch of redundant 
work in the former case, but could cause problems in the latter.[1]  
And whatever the approach, it should be possible (even easy) for 
someone to take a library designed for only one language, and provide 
some cross-language mapping info and turn it into a nice cross-language 
library, without necessarily having to dig into the source code. (I'm 
thinking here of being able to specify the mapping in a document 
separate from the source code.) This is sort of treating method names 
as part of the interface, and not the implementation.

[1] Automatic mapping could also cause problems in the case where I 
design a library and intend it to be cross-language, but where I want 
the API to look identical across languages--I don't want to 
accidentally trip over a method name which happens to get translated 
for me, when I don't intend for that to happen. (For instance, I might 
name a method pow which controls some power level, not intending 
anything about exponentiation.) But, this wouldn't be a problem if the 
automatic approach were to let me somehow register a method as 
filling a certain role (my method 'blah' should be called to perform 
numeric addition), rather than inferring it from the method name. 
Probably a tricky balance between convenience and control/flexibility.
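The register-a-role idea could look something like this (a hypothetical sketch; fills_role, ROLE_MAP, and the Reactor class are all invented for illustration):

```python
# Instead of inferring meaning from a name ("pow" must mean
# exponentiation), the author declares which method fills which
# cross-language role; unregistered methods stay language-local.
ROLE_MAP = {}

def fills_role(role):
    def register(fn):
        cls_name = fn.__qualname__.split(".")[0]
        ROLE_MAP.setdefault(cls_name, {})[role] = fn.__name__
        return fn
    return register

class Reactor:
    @fills_role("numeric_add")
    def combine(self, other):
        return Reactor()
    def pow(self, level):   # controls a power *level*; deliberately unregistered
        pass

assert ROLE_MAP == {"Reactor": {"numeric_add": "combine"}}
```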

JEff


Re: No C<pow> op with PMC arguments? (Modified by Jeff Clites)

2004-11-07 Thread Jeff Clites
[missed cc-ing the list when I sent this originally]
On Nov 5, 2004, at 10:39 AM, Brent 'Dax' Royal-Gordon wrote:
Jeff Clites [EMAIL PROTECTED] wrote:
There are a
myriad of interesting mathematical types and operations, but they don't
need dedicated ops to support them.
But we're not talking about adding pow_bignum, pow_complex and
pow_matrix.  We're talking about adding pow--a fundamental operation
by most standards--and a bignum, complex number, or matrix can Do The
Right Thing when pow is called on it.
What's bugging me is that PMCs are not meant to be specifically 
mathematical types--of the 72 in classes/*.pmc, only a few are. pow 
isn't a fundamental operation, by my thinking, on PMC types--it makes 
no sense for most of them. (Similarly for other mathematical 
operations.) Modeling these as method calls, rather than ops, seems to 
be a better fit conceptually. And if you look across languages, it 
makes even more sense (specifically in the case of mathematical infix 
operators): Java doesn't have operator overloading, and would use 
method calls anyway; Ruby explicitly treats infix operators as an 
alternative syntax for method calls; and Python semantically behaves 
this way as well.

(And even the seemingly obvious
cases aren't: There are at least three different operations on vectors
which could be called multiplication. I don't think the mul op
should be used for any of them.)
I would assume mul on a matrix would perform the same matrix
multiplication the public school system tortured me with in Algebra 2
and Precalculus.
But I said vectors, not matrices. For vectors, you have dot product 
(inner product), cross product (outer product), and component-wise 
product (not often used in math, but useful in programming). Three 
different things which have equal claim on mul. There's not a 
one-to-one correspondence between fundamental operations on ints/floats 
and other mathematical types.
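The three candidate meanings of mul for vectors, written out as plain functions (illustrative Python, not from the thread; no claim about which one an op should pick):

```python
# Three distinct operations, each with an equal claim on "mul".
def dot(u, v):                       # inner product -> scalar
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):                     # outer product, 3-D only -> vector
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def componentwise(u, v):             # elementwise product -> vector
    return [a * b for a, b in zip(u, v)]

u, v = [1, 0, 0], [0, 1, 0]
assert dot(u, v) == 0
assert cross(u, v) == [0, 0, 1]
assert componentwise(u, v) == [0, 0, 0]
```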

What I *can* see a case for is removing *all* binary ops from their
current special pseudo-vtable status; instead, create special names
for them that won't conflict with anything, and turn them into normal
methods (or normal multimethods, as the case may be).
Yes, that's my thought. I think that we should only have PMC ops which 
make sense for most PMC types (or which are needed for basic 
interpreter functionality, like invoke)--it would make for a fairly 
short list. I don't think that we should have an op for PMCs just 
because we have a corresponding op for ints and floats. And no matter 
what, in terms of methods naming we'll need some mechanism for exposing 
a given method with different names for different languages--what shows 
up as __mul__ in Python should show up as multiply in Java and * 
in Ruby. That's something which hasn't been addressed/discussed yet.

But that can't happen until we have N-ary multimethod support which 
scales well
enough that we don't have to worry about the multimethod table
becoming too big.
Though in the case of Python, these don't act as multimethods--they 
dispatch on the left operand only.

JEff


Re: No C<pow> op with PMC arguments?

2004-11-05 Thread Luke Palmer
Jeff Clites writes:
 On Nov 4, 2004, at 8:29 PM, Brent 'Dax' Royal-Gordon wrote:
 This is true.  But how do you define a number?  Do you include
 floating-point?  Fixed-point?  Bignum?  Bigrat?  Complex?  Surreal?
 Matrix?  N registers don't even begin to encompass all the numbers
 out there.
 
 Floating point, and possibly integer. Those are the numeric primitives 
 of processors. Other aggregate mathematical types are always defined in 
 terms of those (in a computing context), one way or another.

The question is, though, how do compilers think of it?  That is, does
the compiler have the liberty, given the code:

$x ** $y

To emit:

pow $P0, x, y

Or must it use a named multimethod?

This is just the age-old question, is this operation fundamental
according to Parrot?

Luke


Re: No C<pow> op with PMC arguments?

2004-11-05 Thread Leopold Toetsch
Luke Palmer [EMAIL PROTECTED] wrote:

 The question is, though, how do compilers think of it?  That is, does
 the compiler have the liberty, given the code:

 $x ** $y

 To emit:

 pow $P0, x, y

 Or must it use a named multimethod?

Well, that's a thing compilers (or their writers ;) have to know. We can
just provide a consistent set of operands for implemented opcodes. The
compiler can query the opcode library, if it contains an opcode (imcc
does that). But the compiler must still have a clue that such and opcode
exists (and under which name).

gcc is in a worse position. It can hardly query the i386 if it can
execute lwx ;)

 This is just the age-old question, is this operation fundamental
 according to Parrot?

pow is an MMD opcode; it's in Python too. It's fundamental. The syntax
x ** y is indicating that too.

 Luke

leo


Re: No C<pow> op with PMC arguments?

2004-11-05 Thread Jeff Clites
On Nov 4, 2004, at 5:24 AM, Sam Ruby wrote:
From a Python or Ruby language perspective, infix operators are not 
fundamental operations associated with specific types, they are 
syntactic sugar for method calls.

At the moment, I'm compiling x=y**z into:
x = y.__pow__(z)
There is nothing reserved about the name __pow__.  Any class can 
define a method by this name, and such methods can accept arguments of 
any type, and return objects of any type.  They can be called 
explicitly, or via the infix syntax.
Of course--I should have realized that. I knew that's how Python 
handles +, etc.--don't know why I assumed exponentiation would be 
different.

So scratch what I said. I should have said this:
Languages tend to take one of the following two approaches when it 
comes to generalizing operations on basic types (numbers, strings) into 
operations on object types.

1) Generalization via conversion to a basic type. As an example, some 
languages generalize numeric addition, obj1 + obj2, as being 
syntactic sugar for something like, obj1.floatValue() + 
obj2.floatValue(). (That is, you do a basic operation on non-basic 
types by converting them into the relevant basic types, then performing 
the operation on those.) This is how Perl5 handles string 
concatenation--you string-concatenate two objects by string-ifying 
them, and concatenating those strings.

2) Generalization by method call. Some languages treat obj1 + obj2 as 
syntactic sugar for something like obj1.add(obj2). That is, you 
generalize in the obvious o.o. way. This is how Python (and C++) 
treats infix operators.

Different languages choose (1) v. (2), and can certainly mix-and-match 
(take one approach for some operations, another for others). Another 
way a language may mix-and-match is to do (2) if such a method is 
defined on the object, and fall back to (1) if it isn't.
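The mix-and-match rule can be sketched as a single dispatch function (hypothetical names; generic_add and Money are invented for illustration):

```python
# Use the object's own method -- approach (2) -- when it exists,
# else fall back to conversion, approach (1).
def generic_add(obj1, obj2):
    add = getattr(type(obj1), "add", None)
    if add is not None:
        return add(obj1, obj2)            # (2): method call
    return float(obj1) + float(obj2)      # (1): convert, then add

class Money:
    def __init__(self, cents):
        self.cents = cents
    def add(self, other):
        return Money(self.cents + other.cents)

assert generic_add(Money(100), Money(50)).cents == 150
assert generic_add("1.5", "2.5") == 4.0   # no method: falls back to (1)
```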

Now from a Parrot perspective: Case (1) is already handled by 
Parrot--it's just an exercise in code generation by a compiler. For 
case (2), I think these operations correspond to method calls on 
objects (in the Parrot sense--the stuff in src/objects.c), not MMD or 
vtable operations accessed via custom ops. Here are a couple of 
examples why:

a) As Sam says, in Python y**z is just shorthand for 
y.__pow__(z)--they will compile down to exactly the same thing 
(required for Python to behave correctly). Since __pow__ isn't 
special, we don't need anything to support it that we wouldn't need 
for any other arbitrary method, say y.elbowCanOpener(z).

b) I can define arbitrary Python classes with arbitrary implementations 
of __pow__, and change those implementations on-the-fly, on a per-class 
or per-instance basis. These aren't new PMC-classes, and I don't think 
that the op-plus-MMD approach gives us the ability to handle that.
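Point (b) is easy to demonstrate: __pow__ can be redefined at runtime on a per-class basis, and the infix operator picks up the change immediately. (Per-instance redefinition also worked with the old-style classes of 2004-era Python.)

```python
class Foo:
    def __pow__(self, n):
        return "old"

f = Foo()
print(f ** 2)                          # "old"
Foo.__pow__ = lambda self, n: "new"    # redefined on the fly
print(f ** 2)                          # "new" -- dispatch sees the change
```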

Summary: Both cases (1) and (2) are syntactic sugar, and case (1) is 
sugar for casting/conversion, and case (2) is sugar for object-method 
calls.

So I don't think an op really gives us what we want. (Specifically, we 
could define a pow_p_p_p op, but then Python wouldn't use it for the 
case Sam brought up.) I'd apply the same argument to many of the other 
p_p_p ops that we have--they don't give us what we need at the HLL 
level (though they may still be necessary for other uses).

JEff


Re: No C<pow> op with PMC arguments?

2004-11-05 Thread Leopold Toetsch
Jeff Clites [EMAIL PROTECTED] wrote:

 a) As Sam says, in Python y**z is just shorthand for
 y.__pow__(z)--they will compile down to exactly the same thing
 (required for Python to behave correctly).

I don't think so (and you can replace with add, sub, ... FWIW). All these
binops compile to Parrot opcodes. These call the MMD dispatcher. Now
depending on the type of left and right, we e.g. call into a function
living in classes/*.pmc that does the Right Thing.

A plain PMC does the binop (if it can do it). If it's an object
derived from a PMC, it either delegates to the PMC or, if provided by the
user, to the __pow__ function. If it's an object it does a full method
lookup for that object ...
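A rough model (not Parrot's actual code) of the dispatch Leo describes: the opcode consults a multimethod table keyed on both operand types, with the looked-up function doing the Right Thing for that pair.

```python
# Hypothetical MMD table: key on (left type, right type), as the
# binop opcodes do in Parrot's dispatcher.
MMD_POW = {
    (int, int):     lambda a, b: a ** b,
    (float, float): lambda a, b: a ** b,
}

def pow_op(a, b):
    fn = MMD_POW.get((type(a), type(b)))
    if fn is None:
        raise TypeError("no pow variant for %r, %r" % (type(a), type(b)))
    return fn(a, b)

print(pow_op(2, 10))   # 1024
```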

 ... Since __pow__ isn't
 special, we don't need anything to support it that we wouldn't need
 for any other arbitrary method, say y.elbowCanOpener(z).

No. One is builtin and one isn't. These are really different. The only
common thing is that the former can be overridden, while the latter is
always provided by user code.

 So I don't think an op really gives us what we want.

This sentence would obviously be true for all binops then. It isn't.

 JEff

leo


Re: No C<pow> op with PMC arguments?

2004-11-05 Thread Sam Ruby
Jeff Clites wrote:
a) As Sam says, in Python y**z is just shorthand for 
y.__pow__(z)--they will compile down to exactly the same thing 
(required for Python to behave correctly). Since __pow__ isn't 
special, we don't need anything to support it that we wouldn't need 
for any other arbitrary method, say y.elbowCanOpener(z).

[snip]
So I don't think an op really gives us what we want. (Specifically, we 
could define a pow_p_p_p op, but then Python wouldn't use it for the 
case Sam brought up.) I'd apply the same argument to many of the other 
p_p_p ops that we have--they don't give us what we need at the HLL 
level (though they may still be necessary for other uses).
It is my intent that Python *would* use this method.  What is important 
isn't that y**z actually call y.__pow__(z), but that the two have the 
same effect.

Let's take a related example: y+z.  I could make sure that each are 
PMCs, and then find the __add__ attribute, retrieve it, and then use it 
as a basis for a subroutine call, as the semantics of Python would seem 
to require.  Or I could simply emit the add op.

How do I make these the same?  I have a common base class for all python 
objects which defines an __add__ method thus:

 METHOD PMC* __add__(PMC *value) {
 PMC * ret = pmc_new(INTERP, dynclass_PyObject);
 mmd_dispatch_v_ppp(INTERP, SELF, value, ret, MMD_ADD);
 return ret;
 }
... so, people who invoke the __add__ method explicitly get the same 
function done, albeit at a marginally higher cost.

Now to complete this, what I plan to do is to also implement all 
MMD/vtable operations at the PyObject level and have them call the 
corresponding Python methods.  This will enable user level __add__ 
methods to be called.

Prior to calling the method, there will need to be a check to ensure 
that the method to be called was, in fact, overridden.  If not, a 
type_error exception will be thrown.
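The "call the Python method only if it was overridden" check Sam describes can be sketched like this (class and function names are illustrative, not Parrot's):

```python
# Base-class method identity tells us whether a subclass overrode it.
class PyObject:
    def __add__(self, other):
        raise NotImplementedError

def mmd_add_hook(left, right):
    meth = type(left).__add__
    if meth is PyObject.__add__:        # not overridden: type error
        raise TypeError("unsupported operand types")
    return meth(left, right)

class WithAdd(PyObject):
    def __add__(self, other):
        return 42

print(mmd_add_hook(WithAdd(), WithAdd()))   # 42
```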

- Sam Ruby



Re: No C<pow> op with PMC arguments?

2004-11-05 Thread Jeff Clites
On Nov 4, 2004, at 10:30 PM, Brent 'Dax' Royal-Gordon wrote:
On Thu, 4 Nov 2004 21:46:19 -0800, Jeff Clites [EMAIL PROTECTED] wrote:
On Nov 4, 2004, at 8:29 PM, Brent 'Dax' Royal-Gordon wrote:
This is true.  But how do you define a number?  Do you include
floating-point?  Fixed-point?  Bignum?  Bigrat?  Complex?  Surreal?
Matrix?  N registers don't even begin to encompass all the numbers
out there.
Floating point, and possibly integer. Those are the numeric primitives
of processors. Other aggregate mathematical types are always defined 
in
terms of those (in a computing context), one way or another.
Yes, but your decomposition (N2=P2; N3=P3; N1=N2+N3; P1=N1) doesn't
take anything but the primitives into account.
Yes--see my subsequent message (just sent a moment ago), as that 
decomposition isn't what I meant. But you asked how I define a 
number--that's what I was answering above, from a computing 
perspective, not a more general question.

My point there, when I said one way or another, was that for example, 
even in mathematics, you define addition, multiplication, etc. on 
complex numbers in terms of such operations over the real numbers. 
(That is, you define them in terms of operations on a pair of real 
numbers.) Based on the design of processors, and of Parrot, there's a 
good performance reason to define basic ops on integers and floating 
point numbers--namely, they'll often JIT down to single instructions. 
Such ops on PMCs won't have that performance benefit--they'll still 
involve table lookups and multiple function calls to execute. So a 
mechanism other than an op may be more appropriate there. There are a 
myriad of interesting mathematical types and operations, but they don't 
need dedicated ops to support them. (And even the seemingly obvious 
cases aren't: There are at least three different operations on vectors 
which could be called multiplication. I don't think the mul op 
should be used for any of them.)
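The distinct "multiplications" on vectors that Jeff alludes to are easy to enumerate; none is an obvious candidate for a single generic mul op:

```python
# Three different operations a language might call vector "multiplication".
def scalar_mul(c, v):
    return [c * x for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

print(dot([1, 2, 3], [4, 5, 6]))    # 32
print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```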

JEff


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Leopold Toetsch
Sam Ruby wrote:
This omission seems odd.  Was this intentional?
A single pow_p_p_p op backed by a (non-MMD) vtable entry would make it 
easier to support code like the following:
Well, Python has a pow vtable slot. And it should be MMD.
Patches welcome,
leo


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Jeff Clites
On Nov 3, 2004, at 8:09 AM, Dan Sugalski wrote:
At 11:04 AM -0500 11/3/04, Sam Ruby wrote:

A single pow_p_p_p op backed by a (non-MMD) vtable entry would make 
it easier to support code like the following:

  def f(x): return x**3
  print f(3), f(2.5)
Yeah, it would. I know I'm going to regret asking, but... any reason 
*not* to make it MMD? (Though I have no idea what happens if you 
square a matrix)
I feel like we have op-itis and vtable-itis. I would think that rather 
than a pow_p_p_p, you'd compile x**y as something like:

set N0, P0
set N1, P1
pow N2, N0, N1
new P2, .PythonNumber
assign P2, N2
I.e., PMCs don't inherently exponentiate--numbers do, and you can 
exponentiate PMCs by numberizing them, exponentiating, and creating a 
PMC with the result.

Or, if we must have an op, implement it something like this (without 
needing a new vtable entry, or MMD):

inline op pow(out PMC, in PMC, in PMC) :base_core {
  $1 = pmc_new(interpreter, $2->vtable->type(interpreter, $2));
  $1->vtable->set_number_native(interpreter, $1,
     pow( $2->vtable->get_number(interpreter, $2),
          $3->vtable->get_number(interpreter, $3)));
  goto NEXT();
}
Those would probably JIT to about the same thing, given register 
mapping.

This is the viewpoint that pow() isn't a fundamental operation that 
makes sense for all types; it's a numeric operation, which can be 
extended in a straightforward manner to types which know how to 
represent themselves as numbers. (e.g., it's gibberish to raise a 
ManagedStruct to a ParrotIO power, except that you can stretch and 
interpret such a thing as just implicit num-ification of the 
arguments.)

JEff


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Sam Ruby
Jeff Clites wrote:
On Nov 3, 2004, at 8:09 AM, Dan Sugalski wrote:
At 11:04 AM -0500 11/3/04, Sam Ruby wrote:

A single pow_p_p_p op backed by a (non-MMD) vtable entry would make 
it easier to support code like the following:

  def f(x): return x**3
  print f(3), f(2.5)

Yeah, it would. I know I'm going to regret asking, but... any reason 
*not* to make it MMD? (Though I have no idea what happens if you 
square a matrix)
I feel like we have op-itis and vtable-itis. I would think that rather 
than a pow_p_p_p, you'd compile x**y as something like:

set N0, P0
set N1, P1
pow N2, N0, N1
new P2, .PythonNumber
assign P2, N2
I.e., PMCs don't inherently exponentiate--numbers do, and you can 
exponentiate PMCs by numberizing them, exponentiating, and creating a 
PMC with the result.

Or, if we must have an op, implement it something like this (without 
needing a new vtable entry, or MMD):

inline op pow(out PMC, in PMC, in PMC) :base_core {
  $1 = pmc_new(interpreter, $2->vtable->type(interpreter, $2));
  $1->vtable->set_number_native(interpreter, $1,
     pow( $2->vtable->get_number(interpreter, $2),
          $3->vtable->get_number(interpreter, $3)));
  goto NEXT();
}
Those would probably JIT to about the same thing, given register mapping.
This is the viewpoint that pow() isn't a fundamental operation that 
makes sense for all types; it's a numeric operation, which can be 
extended in a straightforward manner to types which know how to represent 
themselves as numbers. (e.g., it's gibberish to raise a ManagedStruct to 
a ParrotIO power, except that you can stretch and interpret such a thing 
as just implicit num-ification of the arguments.)
From a Python or Ruby language perspective, infix operators are not 
fundamental operations associated with specific types, they are 
syntactic sugar for method calls.

At the moment, I'm compiling x=y**z into:
x = y.__pow__(z)
There is nothing reserved about the name __pow__.  Any class can 
define a method by this name, and such methods can accept arguments of 
any type, and return objects of any type.  They can be called 
explicitly, or via the infix syntax.

What's the downside of compiling this code in this way?  If you are a 
Python programmer and all the objects that you are dealing with were 
created by Python code, then not much.  However, if somebody wanted to 
create a language independent complex number implementation, then it 
wouldn't exactly be obvious to a Python programmer how one would raise 
such a complex number to a given power.  Either the authors of the 
complex PMC would have to research and mimic the signatures of all the 
popular languages, or they would have to provide a fallback method that 
is accessible to all and educate people to use it.

Ultimately, Parrot will need something akin to .Net's concept of a 
Common Language Specification which defines a set of rules for 
designing code to interoperate.  A description of .Net's CLS rules can 
be found in sections 7 and 11 (a total of six pages) in the CLI 
Partition I - Architecture document[1].

- Sam Ruby
[1] http://msdn.microsoft.com/net/ecma/


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Dan Sugalski
At 1:19 AM -0800 11/4/04, Jeff Clites wrote:
On Nov 3, 2004, at 8:09 AM, Dan Sugalski wrote:
At 11:04 AM -0500 11/3/04, Sam Ruby wrote:

A single pow_p_p_p op backed by a (non-MMD) vtable entry would 
make it easier to support code like the following:

  def f(x): return x**3
  print f(3), f(2.5)
Yeah, it would. I know I'm going to regret asking, but... any 
reason *not* to make it MMD? (Though I have no idea what happens if 
you square a matrix)
I feel like we have op-itis and vtable-itis. I would think that 
rather than a pow_p_p_p, you'd compile x**y as something like:
Or, in this case, MMD-itis, since that's the right thing to do here.
If it can be overridden, and in this case it certainly can, then we 
must allow for it. We only get to scam for speed *after* we meet the 
basic language requirements.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Leopold Toetsch
Jeff Clites [EMAIL PROTECTED] wrote:

 I feel like we have op-itis and vtable-itis.

I'm for sure the last one that would add an opcode or a vtable, if it's
not needed. But in that case it has to be one. The PMC can be any kind
of plain scalar and also *complex*. We have different operations with
different results.

So your example

 set N0, P0
 set N1, P1
 pow N2, N0, N1

doesn't work for complex numbers.

$1->vtable->set_number_native(interpreter, $1,

Same problem.

 JEff

leo


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Brent 'Dax' Royal-Gordon
Jeff Clites [EMAIL PROTECTED] wrote:
 I.e., PMCs don't inherently exponentiate--numbers do, and you can
 exponentiate PMCs by numberizing them, exponentiating, and creating a
 PMC with the result.

This is true.  But how do you define a number?  Do you include
floating-point?  Fixed-point?  Bignum?  Bigrat?  Complex?  Surreal? 
Matrix?  N registers don't even begin to encompass all the numbers
out there.

-- 
Brent 'Dax' Royal-Gordon [EMAIL PROTECTED]
Perl and Parrot hacker

There is no cabal.


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Jeff Clites
On Nov 4, 2004, at 8:29 PM, Brent 'Dax' Royal-Gordon wrote:
Jeff Clites [EMAIL PROTECTED] wrote:
I.e., PMCs don't inherently exponentiate--numbers do, and you can
exponentiate PMCs by numberizing them, exponentiating, and creating a
PMC with the result.
This is true.  But how do you define a number?  Do you include
floating-point?  Fixed-point?  Bignum?  Bigrat?  Complex?  Surreal?
Matrix?  N registers don't even begin to encompass all the numbers
out there.
Floating point, and possibly integer. Those are the numeric primitives 
of processors. Other aggregate mathematical types are always defined in 
terms of those (in a computing context), one way or another.

JEff


Re: No C<pow> op with PMC arguments?

2004-11-04 Thread Brent 'Dax' Royal-Gordon
On Thu, 4 Nov 2004 21:46:19 -0800, Jeff Clites [EMAIL PROTECTED] wrote:
 On Nov 4, 2004, at 8:29 PM, Brent 'Dax' Royal-Gordon wrote:
  This is true.  But how do you define a number?  Do you include
  floating-point?  Fixed-point?  Bignum?  Bigrat?  Complex?  Surreal?
  Matrix?  N registers don't even begin to encompass all the numbers
  out there.
 
 Floating point, and possibly integer. Those are the numeric primitives
 of processors. Other aggregate mathematical types are always defined in
 terms of those (in a computing context), one way or another.

Yes, but your decomposition (N2=P2; N3=P3; N1=N2+N3; P1=N1) doesn't
take anything but the primitives into account.  It would destroy the
meaningfulness of performing a pow() on a complex number, or even just
a bignum (which the language isn't necessarily even aware will be
involved in a particular operation--many will convert smoothly between
integer and bignum).
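Brent's point about smooth integer/bignum conversion is visible in one line of Python: integers promote to bignums transparently, which a pow restricted to hardware primitives would lose.

```python
# Exact bignum arithmetic, no overflow and no programmer intervention.
big = 2 ** 100
print(big)    # far beyond any native integer register
```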

Dynamic languages generally try to hide the reality of the machines
they run on from the programmer; things like pow only works on
numeric primitives smack the programmer in the face with that
reality.  (Sure, languages can work around it, but their various hacks
will probably be mutually incompatible and less efficient than just
doing it ourselves.)  Operations that only work with primitives makes
sense for hardware, but out here in the realm of software we can do
better.

-- 
Brent 'Dax' Royal-Gordon [EMAIL PROTECTED]
Perl and Parrot hacker

There is no cabal.


No C<pow> op with PMC arguments?

2004-11-03 Thread Sam Ruby
This omission seems odd.  Was this intentional?
A single pow_p_p_p op backed by a (non-MMD) vtable entry would make it 
easier to support code like the following:

  def f(x): return x**3
  print f(3), f(2.5)
- Sam Ruby


Re: No C<pow> op with PMC arguments?

2004-11-03 Thread Dan Sugalski
At 11:04 AM -0500 11/3/04, Sam Ruby wrote:
This omission seems odd.  Was this intentional?
Nope.
A single pow_p_p_p op backed by a (non-MMD) vtable entry would make 
it easier to support code like the following:

  def f(x): return x**3
  print f(3), f(2.5)
Yeah, it would. I know I'm going to regret asking, but... any reason 
*not* to make it MMD? (Though I have no idea what happens if you 
square a matrix)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: No C<pow> op with PMC arguments?

2004-11-03 Thread Matt Fowles
Dan~

On Wed, 3 Nov 2004 11:09:49 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:
 Yeah, it would. I know I'm going to regret asking, but... any reason
 *not* to make it MMD? (Though I have no idea what happens if you
 square a matrix)

Squaring a matrix is easy (so long as it is square).

A^2 == A * A.

What gets more fun is raising something (usually e) to a matrix power.
 Then you have to do things with the Jordan canonical form and
decompose your matrix into eigenvalues and stuff.  On the plus side,
this also allows you to define the sin and cos of a matrix... ::evil
grin::
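Computing e^A via the Jordan canonical form, as Matt mentions, is the analytic route; the plain power series e^A = I + A + A^2/2! + ... makes the idea concrete. This is only an illustrative sketch (small matrices, fixed term count), not a numerically robust implementation.

```python
# Matrix exponential by truncated power series.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=20):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]                               # A^0/0!
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]   # A^k/k!
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Nilpotent A: the series terminates, so e^A = I + A exactly.
E = mat_exp([[0.0, 1.0], [0.0, 0.0]])
print(E)
```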

Matt
-- 
Computer Science is merely the post-Turing Decline of Formal Systems Theory.
-???


Re: No C<pow> op with PMC arguments?

2004-11-03 Thread Sam Ruby
Dan Sugalski wrote:
At 11:04 AM -0500 11/3/04, Sam Ruby wrote:
This omission seems odd.  Was this intentional?
Nope.
A single pow_p_p_p op backed by a (non-MMD) vtable entry would make it 
easier to support code like the following:

  def f(x): return x**3
  print f(3), f(2.5)
Yeah, it would. I know I'm going to regret asking, but... any reason 
*not* to make it MMD? (Though I have no idea what happens if you square 
a matrix)
No objection to a MMD, just attempting to propose the simplest thing 
that can possibly work.  Also, there are all sorts of other opcodes 
that might be worth discussing, like pow_p_p_ic.

- Sam Ruby
P.S.  Yes, squaring a matrix is a valid operation.  As would be squaring a 
complex number.


Re: No C<pow> op with PMC arguments?

2004-11-03 Thread Jerome Quelin
On 04/11/03 11:19 -0500, Matt Fowles wrote:
 What gets more fun is raising something (usually e) to a matrix power.
  Then you have to do things with the Jordan canonical form and
 decompose your matrix into eigenvalues and stuff.  On the plus side,
 this also allows you to define the sin and cos of a matrix... ::evil
 grin::

Damn, is it a new rule that perl 6 summarizer should be a maths teacher? :-)

Jérôme
-- 
[EMAIL PROTECTED]


Re: No C<pow> op with PMC arguments?

2004-11-03 Thread Matt Fowles
Jerome~


On Wed, 3 Nov 2004 18:33:28 +0100, Jerome Quelin [EMAIL PROTECTED] wrote:
 Damn, is it a new rule that perl 6 summarizer should be a maths teacher? :-)

Actually, as an American I would be a lowly math teacher... ;-)

Matt
-- 
Computer Science is merely the post-Turing Decline of Formal Systems Theory.
-???