On 05 Jun 2014, at 19:41, Mark van Gulik <[email protected]> wrote:

> We saw something peculiar in your implementation:
> 
> Method "_reduce" is [ 
>   v : una 
> |
>   a1 ::= v's arg1 full reduce;
>   if a1 = v's arg1 then [ a1 v's func call ] else [ a1 v's func λ ]
> ] : value;
> 
> It appears that you're performing a full reduction (to normal form) of a1 
> before applying the unary function.  And similarly for both arguments for 
> binary.  While this gives the same order of evaluation (and strictness) as, 
> say, Haskell, it does so in one jump.  Technically, the only time you would 
> be reducing a unary application is if you wanted that application to reach 
> head normal form, so it's still correct to require the argument to first 
> reach head normal form "in one step".  But wouldn't that lead to an 
> accidental excess of strictness if only the *head* normal form of the 
> argument is needed?  Am I reading that correctly?  The evaluation order when 
> I ran your example seems to agree with this interpretation, but maybe I'm 
> misinterpreting the purpose of the full reduce operation.

first, thank you for discussing my implementation.
yes, spread's evaluation is somewhere in between strict and lazy. there is a deeper 
purpose for that, which i will soon reveal ;)
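to make the excess-strictness point concrete, here is a rough python sketch (all names invented; this is not spread's actual representation): fully reducing an argument forces parts of it that the applied function may never inspect, whereas stopping at head normal form does not.

```python
class Thunk:
    """a delayed computation; force() runs it once and caches the result."""
    def __init__(self, fn):
        self.fn = fn
        self.forced = False
        self.value = None
    def force(self):
        if not self.forced:
            self.value = self.fn()
            self.forced = True
        return self.value

steps = []

def cons(head, tail):
    return ("cons", head, tail)

# argument: a pair whose tail is expensive (here it just records being forced)
arg = cons(1, Thunk(lambda: steps.append("tail forced") or 99))

def reduce_whnf(v):
    """head normal form: expose only the outermost constructor."""
    return v.force() if isinstance(v, Thunk) else v

def reduce_full(v):
    """normal form: force every thunk inside the value."""
    if isinstance(v, Thunk):
        return reduce_full(v.force())
    if isinstance(v, tuple):
        return tuple(reduce_full(x) for x in v)
    return v

def head(pair):
    return pair[1]

head(reduce_whnf(arg))      # whnf: the tail thunk is never touched
assert steps == []

head(reduce_full(arg))      # full reduce: the tail is forced as well
assert steps == ["tail forced"]
```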

> I also see that the test right after the reduction checks to see if v's 
> reduction to a1 was nilpotent (not just idempotent), so really the function 
> application is accomplished in exactly one or two steps, depending on whether 
> the argument was already in normal form.

indeed, two steps at most. cool that you’ve spotted that.
notice that the function that is applied may return another thunk, which can be 
reduced further, etc.
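the "returns another thunk, reduce again" loop can be sketched like this (hypothetical names, just to show the shape of the iteration):

```python
class Thunk:
    """a deferred computation; fn() may itself yield another thunk."""
    def __init__(self, fn):
        self.fn = fn

def reduce_full(v):
    """keep reducing while the result is itself a thunk -- each function
    application takes at most two steps, but the returned value may need
    further reduction."""
    count = 0
    while isinstance(v, Thunk):
        v = v.fn()
        count += 1
    return v, count

# a chain of thunks: each reduction step yields another thunk until 42
chain = Thunk(lambda: Thunk(lambda: 42))
value, steps = reduce_full(chain)
assert (value, steps) == (42, 2)
```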

> As for adding strong typing... which kind did you have in mind?  It should be 
> possible to augment every value with another field that holds the expected 
> type.

but that would require an extra type field per value, which is rather wasteful.

>  A constant value's expected type could be the type of the constant, and a 
> function value's type could be the function's type.

so for constant values and function values there would be no need for an extra 
type field, right?
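in other words (a hedged sketch, with an invented representation): constants and function values already determine their own types, so an extra stored field would only be needed on application nodes.

```python
class Apply:
    """a function application; only this kind of node stores an extra
    expected-type field."""
    def __init__(self, func, arg, expected_type):
        self.func, self.arg, self.expected_type = func, arg, expected_type

def inc(x):
    return x + 1
inc.sig = "integer -> integer"   # a function value carries its own signature

def type_of(v):
    if isinstance(v, int):
        return "integer"              # a constant's type is the constant's type
    if isinstance(v, Apply):
        return v.expected_type        # only applications need the stored field
    if callable(v):
        return getattr(v, "sig", "function")
    raise TypeError(v)

assert type_of(3) == "integer"
assert type_of(inc) == "integer -> integer"
assert type_of(Apply(inc, 3, "integer")) == "integer"
```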

>  A function application's type would then be the function type's return type. 
>  While correct, this is far less powerful than a parametric type system, even 
> taking into account that we'd be using Avail's own (parametric) types.  The 
> reason is that a program in spread defines no new methods, so there would be 
> nowhere to hang semantic restrictions for its expressions.

additional operations can be defined (not in spread itself) by implementing a 
specific “__call” method. 
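the idea is roughly this (sketched with python's `__call__` as a stand-in for spread's "__call" hook; names are illustrative only): a new operation defined outside the core language just has to satisfy the call protocol.

```python
class Negate:
    """a host-defined operation plugged in via the call hook."""
    def __call__(self, x):
        return -x

def apply_op(op, arg):
    # the interpreter only needs the call protocol, not knowledge of the op
    return op(arg)

assert apply_op(Negate(), 5) == -5
```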

> You could include one more field in functions that indicates how a function's 
> return type can be strengthened as a function of its input types.  Applying 
> this until it converges at a fixed-point would give you a potentially 
> non-terminating type resolution process (not strictly an algorithm, since it 
> sometimes doesn't complete -- just like some functional type deduction 
> mechanisms).  You could stop before the fixed-point if either the types were 
> precise enough for a whole program to be shown type-safe, or if any computed 
> type violated an existing constraint (by narrowing the type to bottom).  
> Tricky about the bottom thing, since guarded subexpressions may very well 
> have type bottom if they're supposed to be unreachable dynamically.

interesting ideas and observations - thank you!
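the fixed-point idea quoted above could be sketched like this (all names hypothetical; types are modelled crudely as sets of possible values, where a smaller set is a stronger type and the empty set is bottom). note that the loop is exactly the "not strictly an algorithm" part: for some rule sets it need not terminate.

```python
BOTTOM = frozenset()

def strengthen(types, rules):
    """iterate type-strengthening rules to a fixed point.
    types: name -> frozenset of possible values.
    rules: functions mapping the whole environment to a refined environment.
    returns (types, ok), with ok False if any type narrowed to bottom."""
    while True:
        new = dict(types)
        for rule in rules:
            new = rule(new)
        if any(t == BOTTOM for t in new.values()):
            return new, False          # a constraint was violated
        if new == types:
            return new, True           # converged at a fixed point
        types = new

# example: y = x + 1, with x known to lie in {1, 2}
rules = [lambda env: {**env, "y": frozenset(v + 1 for v in env["x"])}]
env, ok = strengthen({"x": frozenset({1, 2}),
                      "y": frozenset(range(10))}, rules)
assert ok and env["y"] == frozenset({2, 3})
```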

> Alternatively, you could store quantified type declarations with each 
> function value that says how the provided types and the output types are 
> related, then use a bi-directional semi-unification algorithm like 
> Hindley-Milner to seal the gaps.  But as you can tell from Avail, I prefer 
> the previous approach :-).

yep, i’ve done some haskell in the past and i really didn’t like programming 
‘monadically’.

> I suggest mulling over the possibilities for a while, maybe by working 
> through some examples with higher-order functions like foldl, and operating 
> on data structures like, oh, let's say... a lazy, typed version of your 
> treap.  Cons lists aren't going to be sufficient to reveal the nasty problems 
> that quantified types introduce.

actually, i’ve implemented my treap library as a spread dsl in scala. i got it 
working and strongly typed.
but i believe avail is more capable of type-checking spread than scala is.
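on the foldl suggestion: a left fold is indeed a good stress test, because a lazy foldl builds a chain of deferred applications that is only reduced on demand. a rough python sketch of that thunk chain (invented names, not the spread implementation):

```python
def lazy_foldl(f, acc, xs):
    """build a chain of thunks instead of folding eagerly."""
    thunk = lambda: acc
    for x in xs:
        thunk = (lambda t=thunk, v=x: f(t(), v))
    return thunk          # nothing is computed until the thunk is forced

forced = []
def add(a, b):
    forced.append(b)      # record when each element is actually used
    return a + b

t = lazy_foldl(add, 0, [1, 2, 3])
assert forced == []       # no work done yet
assert t() == 6           # forcing reduces the whole chain
assert forced == [1, 2, 3]
```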

> Good luck!

thanks!

cheers,
Robbert.
