Now we're getting somewhere. Just a few questions left.
On Sun, Feb 15, 2015 at 4:24 AM, Keean Schupke <[email protected]> wrote:
>> As defined, the inferred type of f is ('a -> 'a -> 'a) -> 'a -> 'a -> 'a.
>
> f is a function passed a function and returning a function that takes a
> single argument and returns a function taking a single argument and
> returning a value, so native arity (1, 1, 1)
>
> The first argument is a function that takes a function and returns a
> function that takes a value and returns a value (1, 1).
I'm assuming you meant "takes a value and returns a function that
takes a value and returns a value". I don't really know why you're
spelling it out in English.
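(For concreteness, since we only ever see the type, I'm assuming f is defined
roughly like this Haskell sketch:

    f :: (a -> a -> a) -> a -> a -> a
    f g x y = g x y

i.e. fully curried at the source level, so that the whole question is what
compiled arity f and g end up with.)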
>> When f is called from another module, what native arity must a closure
>> be in order to safely pass it to f?
>
> It must have arity (1, 1)... i.e. a function which takes a single argument
> and which returns a function that takes a single argument (all of type 'a).
>> What native arity is assumed of the parameter g when compiling f?
>
> (1, 1) as described above
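Concretely, the kind of thing that has to line up on both sides is something
like this (a Haskell sketch with invented names, Int standing in for 'a):

    mkAdder :: Int -> Int -> Int -> Int
    mkAdder base x y = base + x + y

    g :: Int -> Int -> Int
    g = mkAdder 10    -- a genuine closure over 'base'

and the question is whether the compiler that built g gave it the (1, 1) shape
that f's compiler is assuming.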
>> Note that the answers to the above two questions must be the same, but
>> one answer must come from the compiler of a module importing f, and
>> one must come from the compiler of the module defining f.
>
> Here's where you lose me. We have already said all type signatures at module
> boundaries should be explicit, due to the desire for a stable interface. In
> any case, in one module you would say:
>
> export f :: ('a -> 'a -> 'a) -> 'a -> 'a -> 'a
>
> In the other module you would say:
>
> import f :: ('a -> 'a -> 'a) -> 'a -> 'a -> 'a
>
> If these are different it is a link-time error, not a compile-time error.
So in this case, no tuplization/uncurrying/whatever occurs. So of
course then it works.
>> How does your scheme ensure that these answers will always be the same?
>
> It doesn't; a mismatch is a link-time error, but libraries can provide 'import'
> header files to make sure users get this right and don't have to type it all
> in.
Or something like that. But again, since nothing really happened in
this example, it's just comparing the surface types.
>> And finally:
>> In your scheme, what does the programmer do to control the native arity of
>> g?
>
> Hmm, I don't see the issue, nothing here has required local aliases so far,
> but let's say I do:
>
> import f :: ('a -> 'a -> 'a) -> 'a -> 'a -> 'a as (('a ,'a) -> 'a, 'a) -> 'a
>
> I could then use it with a 'g' with the type ('a, 'a) -> 'a locally in this
> module.
((('a ,'a) -> 'a, 'a) -> 'a) doesn't look isomorphic to
(('a -> 'a -> 'a) -> 'a -> 'a -> 'a). A function of your type only takes one
'a after the function argument.
Maybe you meant ((('a ,'a) -> 'a, 'a) -> 'a -> 'a).
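That one really is interconvertible with the original; a quick Haskell check,
with adapter names of my own choosing:

    toTupled :: ((a -> a -> a) -> a -> a -> a) -> ((a, a) -> a, a) -> a -> a
    toTupled f (g, x) y = f (curry g) x y

    toCurried :: (((a, a) -> a, a) -> a -> a) -> (a -> a -> a) -> a -> a -> a
    toCurried f g x y = f (uncurry g, x) y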
But anyway, that's the opposite of what I'm asking for. You are
showing me how to make f look like a different type than the one from its
definition, without actually changing the way the definition is
compiled. I want to know how you'd get the compiler to implement it as
if it had the type ((('a,'a)->'a,'a)->'a->'a), so that g has arity 2.
(And apparently, f has arity (2,1) while we're at it.)
Is your answer to write f like
def f(g,x) y = g(x,y)
and then maybe say something like "But use it as (('a->'a->'a)->'a->'a->'a)."?
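In Haskell terms I think that comes out as the pair below (my rendering, names
mine), with the wrapper supplying the curried surface view:

    -- the definition the compiler actually compiles, with g at arity 2
    fTupled :: ((a, a) -> a, a) -> a -> a
    fTupled (g, x) y = g (x, y)

    -- the curried view handed to callers
    fCurried :: (a -> a -> a) -> a -> a -> a
    fCurried g x y = fTupled (uncurry g, x) y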
>> >> Anyway, how is this business of using different inter-module types any
>> >> better than using an arity-aware type system for compiled modules, or
>> >> automatically tuplizing functions under the hood?
>> >
>> > Hmm, the type system is arity aware, the exact arities are encoded in the
>> > types using tuples.
>>
>> OK, I can use that point of view. The comparison I was asking for is
>> essentially the same, whether you use arities or unboxed tuples under
>> the hood.
>
> Well I could imagine a third syntax for arities like:
>
> h :: {int32, int32, int32} -> {int32, int32} -> int32
>
> but it ends up looking just like tuples, and for unboxed tuples the stack
> representation would likely be the same. However if you want to allow
> unboxed tuples, it might be necessary to have something like this...
Whether we want to treat multi-arg as different from unboxed tuples is
independent of how we pick native arity. It looks like Shap is leaning
towards using unboxed tuples exclusively.
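For what it's worth, GHC's unboxed tuples would let you spell your h roughly
like this (my transliteration, with Int standing in for int32, not your
proposed syntax):

    {-# LANGUAGE UnboxedTuples #-}

    h :: (# Int, Int, Int #) -> (# Int, Int #) -> Int
    h (# a, b, c #) (# d, e #) = a + b + c + d + e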
> I need
> to think about this some more as there are other reasons to make function
> calling explicit versus a value, i.e. a unary function needs to be
> distinguished in the type system from a value.
BitC will be eagerly evaluated. A _nullary_ function needs to be
distinguished from a value. Probably with unit, if we're using tuples.
> In Haskell this would be done
> with a datatype. Let's just say we have a datatype 'F' for function
> application; the above can be:
>
> h :: int32 -> int32 -> int32 -> F (int32 -> int32 -> F int32)
>
> This is probably my preferred solution as F captures side effects too,
> leaving the function arrow '->' as a pure function. We can then make 'F' a
> monad to allow composition of function application. We might call this monad
> 'IO'?
Whether to use monads for effects is another issue that really seems
quite separate to me.
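(If I'm reading the 'F' idea right, it amounts to roughly this in Haskell; the
newtype is my guess at the encoding:

    import Data.Int (Int32)

    newtype F a = F (IO a)   -- 'F' marks actual application/effects; '->' stays pure

    h :: Int32 -> Int32 -> Int32 -> F (Int32 -> Int32 -> F Int32)
    h a b c = F (pure (\d e -> F (pure (a + b + c + d + e))))

but as I say, that feels like a separate thread.)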
While I'm on monads: for what I think is the first time on this list, I'd
like to say that I don't like the point of view that effects are _just_
certain monads. They can be modeled with certain monads, but they're
special, because the purpose of effects is to expose some subset of
the functionality of one particular monad: the IO monad. IO is the
"true" monad which gives you all the stuff your computer can do that
Haskell left out. Effects hand out pieces of that power in a more
fine-grained way. But monads in general have nothing to do with the IO
monad in particular.
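To make "handing out pieces" concrete, I mean something along these lines
(just a sketch, the class name is my own):

    class Monad m => Teletype m where
      putLine :: String -> m ()

    instance Teletype IO where
      putLine = putStrLn

    greet :: Teletype m => m ()   -- may print, but cannot do arbitrary IO
    greet = putLine "hello"

greet gets exactly one sliver of what IO can do and nothing more.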
The difficulty of composing monads might be due to the huge amount of
extra generality of monads over effects. The problem of composing
monads in general is not a problem for BitC, AFAICT.
I don't see any reason why you couldn't do concurrent programming in
the IO monad. But I also don't see any reason why BitC would _want_
to.
>> > With regards to automatic tuplization, you might be coding in a tuplized
>> > style and want to import a library of curried functions and not have to
>> > mix styles (or import a library of tuplized functions and use them in a
>> > curried style). Doing this automatically like F# works only one way, and
>> > seems a little permissive, as automatic tuplization may not work (or even
>> > be desired in some code that relies on hoisting part of a curried
>> > application outside of a loop for efficiency).
>>
>> By automatic tuplization, I mean the user can use either tuplized or
>> curried functions, and the compiler will decide, based on the
>> definitions, what the type should've been to best avoid closure
>> allocation. Usually this means turning curried functions into tuplized
>> ones. This changes their types, so both the original "surface" types
>> and "tuplized" types are recorded in the compiled module. When
>> importing a module, the functions still appear to the programmer to
>> have their surface types, but the compiler also knows their tuplized
>> types, so it can generate the correct call code.
>>
>> The main difference I see between what I've just described and what
>> you're calling for is that you do not store the surface types in a
>> compiled module, so the importer picks their own surface types to use.
>
> Yes, this avoids the problem with libraries using different styles.
OK, so for some reason, I thought your proposal was trying to avoid
allocation all this time. But now it looks like you're leaving that up
to the programmer, and your proposal smooths over superficial
differences between API styles. I'm cool with that.
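(For the record, the shape I had in mind was a worker/wrapper split, roughly
this, with invented names:

    -- surface type: what the programmer wrote and what importers see
    add3 :: Int -> Int -> Int -> Int
    add3 a b c = add3W (a, b, c)

    -- tuplized worker the compiler records alongside it and calls directly
    add3W :: (Int, Int, Int) -> Int
    add3W (a, b, c) = a + b + c

similar in spirit to what GHC already does internally.)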
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev