On 17 February 2015 at 04:06, Jonathan S. Shapiro <[email protected]> wrote:
> I want to try to summarize where we are in the current discussion. We
> appear to be coming down to two proposals:
>
> Option 1: Functions take N>0 argument patterns and return 1 result. We
> may or may not adopt curried application syntax (i.e. not decorating with
> parentheses), but the number of actual parameters must match the number of
> formal parameters. Partial application is not permitted in the absence of
> some syntactic construct expressing that the partial application was
> intentional, incidentally signaling the programmer's acceptance of a
> lambda injection that may cause allocation if the optimizer cannot
> eliminate it.
>
> Option 2: Functions take exactly one argument, which may be a pattern such
> as a tuple pattern that has sub-constituents. Implementation-level arity is
> inferred based on the type of the singleton parameter. The present proposal
> is that tuple patterns be used to express functions taking more than one
> argument, but we could presumably infer arity from other sorts of patterns
> as well if we choose to do so. Concerns of lambda injection resulting from
> partial application do not arise, because all functions have arity 1 by
> virtue of the language definition.

I had taken the time to formalise a few other possibilities, but decided I
didn't like them. Right now, I want:

Option 3 (similar to Option 1): There is some syntax for expressing native
arities within types, whether by grouping arguments into tuples, wonky
arrows, numbers, or an effect monad. These have no consequence for copy
compatibility, besides their influence on the regions involved. Wherever an
explicit type is assigned to a value, that type gives its native arity.
Otherwise:

* Native arity is inferred at phi nodes. The result of a phi has the
  minimum of the native arities of its arguments. This ensures that effects
  are correctly sequenced.

* Partial application of a native-arity-N function to M < N arguments
  results in a closure with native arity N - M, unless the type of the
  result is specified explicitly. This is safe because no effect could have
  occurred. (A small sketch of these two rules appears at the end of this
  message.)

Alternatively, the use-set of an associated definition can be used to infer
a maximum native arity (as the minimum of the arities of each use). My
opinion is that this does the right thing when used with generic
algorithms: since they don't care about the remaining arguments, this
allows explicit partial evaluation when necessary.

Points of allocation are determinable locally and are overridable (by
specifying the arity directly), which is a bonus.

--
William Leslie

Notice:
Likely much of this email is, by the nature of copyright, covered under
copyright law. You absolutely MAY reproduce any part of it in accordance
with the copyright law of the nation you are reading this in. Any attempt
to DENY YOU THOSE RIGHTS would be illegal without prior contractual
agreement.
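
A minimal sketch of the two inference rules above, written as a toy model in
Haskell rather than anything resembling BitC syntax (Expr, nativeArity, and
the arity-as-integer encoding are all made up for illustration):

    -- Toy model: native arity as a plain integer attached to expressions.
    -- A phi takes the minimum of its arguments' arities; applying an
    -- arity-N function to M arguments leaves a closure of arity N - M;
    -- an explicit ascription overrides inference.
    data Expr
      = Var String          -- reference to some definition
      | Lam Int Expr        -- lambda whose declared native arity is known
      | App Expr Int        -- application to M actual arguments
      | Phi Expr Expr       -- join point (phi node) of two definitions
      | Ascribe Expr Int    -- explicit type annotation fixing the arity

    nativeArity :: Expr -> Int
    nativeArity (Var _)       = 1     -- unknown reference: assume arity 1
    nativeArity (Lam n _)     = n
    nativeArity (Ascribe _ n) = n     -- explicit type wins
    nativeArity (Phi a b)     = min (nativeArity a) (nativeArity b)
    nativeArity (App f m)     = max 0 (nativeArity f - m)

    main :: IO ()
    main = do
      let add = Lam 2 (Var "body")                     -- native arity 2
      print (nativeArity (App add 1))                  -- 1: closure of arity N - M
      print (nativeArity (Phi add (Lam 1 (Var "g"))))  -- 1: minimum at the phi

The sketch is only meant to show that both rules are local: the arity of a
phi or of a partial application can be computed at that point, without
inspecting later uses.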
