On 14 Feb 2015 16:39, "Jonathan S. Shapiro" <[email protected]> wrote:
>
> On Fri, Feb 13, 2015 at 12:35 PM, Keean Schupke <[email protected]> wrote:
>>
>>
>>> Given your proposal, consider a function using curried application that
has been constrained by a NOALLOC effect. Does an algorithm exist that is
guaranteed to perform curried application without any allocation, in the
case where the function being applied cannot be statically resolved, and so
we cannot guess the function's native arity given that you propose it
should not be encoded in the type?
>>
>>
The compiler would process the import and store two type signatures for the
function, the native-type and the 'local-type'. The program would initially
be type checked with the local-type, then optimisation passes applied, then
type checked again with the native-type, and fail this type check if it has
been unable to solve the uncurrying (or it can add thunks if allowed).
>
>
> I don't think that works. If we take the position that implementation and
type should be firmly separate, then when I see a function in a module
signature, how do I know its arity?

Because you import the function by its native arity, and give it a local
alias with the arity you want to use.

>
> It's possible that you are assuming the F# approach here, but note that
their approach *also* avoids placing arity in the type. It appears to rely
on seeing the definition of the function. So in their scheme, if I can see
>
>> def f x y = x + y
>> f : 'a -> 'a -> 'a

Not the F# approach which is implicit, but an explicit alias.

>
>
> I know that's a function with arity 2. But if I merely see 'a -> 'a -> 'a
(which is what I would see at a module boundary declaration), I don't have
the arity information. The issue is even more apparent when you consider the

No, I am saying at the module boundary you see both the native-arity and
the local-arity. At the module boundary you import the function with the
native-arity, and supply a local type alias (with whatever calling
convention you want) that applies inside that module only, so:

import 'C' add :: (int32, int32) -> int32 as int32 -> int32 -> int32


makes 'add' with native-arity 2 visible in this module as a curried
function.
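To make the intent concrete, here is a minimal Haskell sketch of the adapter the compiler would generate (the names `addNative` and `add` are illustrative; the `import ... as ...` syntax above is the hypothetical proposal, not Haskell):

```haskell
-- Stand-in for the native, uncurried, arity-2 import:
--   import 'C' add :: (int32, int32) -> int32
addNative :: (Int, Int) -> Int
addNative (x, y) = x + y

-- The local curried alias from the `as int32 -> int32 -> int32` clause.
-- Inside this module `add` is used curried; the compiler still knows the
-- native signature and can rewrite saturated calls back to `addNative`.
add :: Int -> Int -> Int
add x y = addNative (x, y)
```

Both views denote the same function; only the calling convention differs.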


>
>> def f x = fun y -> x + y
>
>
> example they gave, which apparently gets assigned arity 2 in their scheme
by unspecified means.
>
> I'm not saying that arity has to be associated with type. I'm saying that
it has to be associated with *something*. And it really *appears* to me
that it has to be associated with type, because if it isn't, there seem to
be a whole lot of ways that the two can get separated and also a whole lot
of places (e.g. function parameters) where there is no way to convey the
needed information. In this sense, OCaml and Haskell are cheating. It's not
that they don't encode arity in type. It's that they do it by assigning
arity 1 to all functions.

You know the arity because it is stated explicitly in the import, rather
than being carried implicitly by assigning arity 1 to every function.

>
>>
>> I cannot prove it is possible, but I have a hunch that by AST
manipulation you can always solve the uncurrying problem. The idea is that
the function must be applied to both arguments at some point, so we just
need to bring the arguments together (this assumes purity) by reordering
the code. I think beta reduction would work. Imagine:
>>
>> inc x = add 1 x
>>
>> f x = inc x
>
>
> You have made a number of arguments of this form during this thread. A
consistent flaw in those arguments is that you have relied (perhaps
unintentionally) on the ability to statically resolve the procedures. I
agree that you can do the term substitution you propose in your example
above. Please explain how to do the comparable term substitution in the
body of the following function:
>
>> def f (g : 'a -> 'a -> 'a) x = g x
>
>
> Note that all you have for g here is its type, which (per your desire)
does not convey arity. We know enough to apply g to x. We don't know enough
to know whether that application allocates or not. This is true because we
don't have enough information to know whether it is a partial application.

We know the native arity of 'g' from the module import statement.
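The ambiguity Jonathan describes is real at the level of curried types alone; a small Haskell sketch (names illustrative) shows two functions with the same curried type but different native arity, which is exactly the information the explicit import would record:

```haskell
-- Native arity 2: `gArity2 x` is a partial application (builds a closure).
gArity2 :: Int -> Int -> Int
gArity2 x y = x + y

-- Native arity 1, returning a function: `gArity1 x` is a saturated call.
gArity1 :: Int -> (Int -> Int)
gArity1 x = \y -> x + y

-- Both have the same curried type, so from the type of g alone we cannot
-- tell whether `g x` allocates; the import's native signature would.
f :: (Int -> Int -> Int) -> Int -> (Int -> Int)
f g x = g x
```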


>
>
> Stepping *down* a level of abstraction here, let's think about the
optimization that does closure elimination. So we have something like:
>
>> def myAdd x y = x + y
>
>
> and an expression
>
>> myAdd x v // which in curried form is really
>> (myAdd x) v
>
>
> Good so far. And now we'll fiddle with some region types to get the
closure record for the (myAdd x) part lifted into the parent stack. Still
good. And having done that we're going to rewrite the expression into an
application of an arity-2 version of myAdd, whereupon we can apply dataflow
and eliminate the closure record. All of this we can do, except for one
crucial thing: in an arity-less type system, we have no way to type that
intermediate myAdd in a way that tells us it has arity 2!

It has to be explicitly stated in the module import, expressed as a native
type signature like:

(int32, int32) -> int32

which takes a 2-tuple argument and returns a single value. You can then
optionally supply a local type alias as in:

int32 -> int32 -> int32

Provided the compiler can match up the elements of the two types, the
function looks like the local alias when you use it (so it has only one
valid type signature, with no overloading), but the original native type
signature remains available to the compiler inside this module.
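In conventional compiler terms this is essentially a worker/wrapper split (as GHC performs it); a rough Haskell sketch with illustrative names:

```haskell
-- The "worker" is the uncurried, native arity-2 form.
myAddWorker :: (Int, Int) -> Int
myAddWorker (x, y) = x + y

-- The "wrapper" restores the curried local view.
myAdd :: Int -> Int -> Int
myAdd x y = myAddWorker (x, y)

-- A saturated call like (myAdd 3) 4 can be rewritten to the worker,
-- eliminating the intermediate closure record:
saturated :: Int
saturated = myAddWorker (3, 4)

-- An unsaturated call site must keep the closure:
partial :: Int -> Int
partial = myAdd 3
```

The native signature is what lets the optimizer type the intermediate arity-2 form; the wrapper is what the module's users see.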


>
> One of the things we tried in BitC v0 that was a huge success was to keep
the type system all the way down. It did wonders for catching mistakes. If
you want to do that (and I do), the optimization sequence above requires
the ability to capture arity in type.

Yes; in fact we have two precise arities: the local one that you can use,
and the native one for the compiler to target during optimisation.


>
>> I would again want monadic effect control so all 'application' arrows
are pure...
>
>
> Until somebody explains to me how to reconcile monads and concurrency, we
can't go there. Even if we *can* explain that, there are still concerns
about monad composability.

So let's start with composition. You need 'set' types in the type system;
then you can have a monad (let's call it Eff) that is parametric on a set
of types:

class Eff e a ...

Where 'e' is a set of types, one for each separate effect (or state
container). Because 'e' is inferred and polymorphic, the type system simply
unions these sets together during type inference, so the correct inferred
set of 'effects' for a composed computation is the union of its parts.

See this paper for details of how it is implemented in Haskell:

https://www.cl.cam.ac.uk/~dao29/publ/haskell14-effects.pdf

Note this uses a lot of complex type-level programming because Haskell does
not have native set types. If you had those, the whole thing would be much
simpler: you would basically just need to infer the effects.
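As a value-level analogy (the paper tracks the set in the *type*; this sketch, with invented effect names, tracks it in the value to stay simple), composition unioning effect sets can look like:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- An effectful computation paired with the set of effects it performs.
newtype Eff a = Eff (Set String, a)

runEff :: Eff a -> (Set String, a)
runEff (Eff p) = p

instance Functor Eff where
  fmap f (Eff (e, a)) = Eff (e, f a)

instance Applicative Eff where
  pure a = Eff (Set.empty, a)
  Eff (e1, f) <*> Eff (e2, a) = Eff (Set.union e1 e2, f a)

instance Monad Eff where
  -- Composition unions the effect sets of the two computations.
  Eff (e1, a) >>= k = let Eff (e2, b) = k a
                      in Eff (Set.union e1 e2, b)

readTemp :: Eff Int
readTemp = Eff (Set.fromList ["ReadSensor"], 21)

logLine :: String -> Eff ()
logLine _ = Eff (Set.fromList ["Log"], ())

-- The inferred effect set of the whole program is the union of its parts.
program :: Eff Int
program = do
  t <- readTemp
  logLine "measured"
  return (t + 1)
```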


Concerning concurrency, I don't see any problem with monads... They are
simply a type-system artifact that allows composition of types. See this
post:

http://r6.ca/blog/20110520T220201Z.html

The type of IO is not itself a monad; it is a data type. The monad just
provides composition of these data types.
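A tiny sketch of that separation (illustrative names, in Haskell): the effectful program is just a data structure, the Monad instance only composes it, and a separate pure interpreter gives it meaning:

```haskell
-- One hypothetical primitive effect, "emit a line", as a plain constructor.
data Action a
  = Done a
  | Say String (Action a)

instance Functor Action where
  fmap f (Done a)  = Done (f a)
  fmap f (Say s k) = Say s (fmap f k)

instance Applicative Action where
  pure = Done
  Done f  <*> a = fmap f a
  Say s k <*> a = Say s (k <*> a)

instance Monad Action where
  -- The monad only glues programs together; it performs nothing.
  Done a  >>= f = f a
  Say s k >>= f = Say s (k >>= f)

say :: String -> Action ()
say s = Say s (Done ())

hello :: Action Int
hello = do
  say "hello"
  say "world"
  return 42

-- A separate, pure interpreter over the data type.
run :: Action a -> ([String], a)
run (Done a)  = ([], a)
run (Say s k) = let (ss, a) = run k in (s : ss, a)
```

A concurrent runtime is then one possible interpreter among others; nothing in the data type or its composition fixes the execution strategy.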

It may be that I have missed your issue with concurrency? If I have, could
you elaborate?


Keean.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
