On 4 May 2015 at 00:23, Matt Oliveri <[email protected]> wrote:
>
> > In the type inference systems I have written, you need a ground type for
> all
> > type variables to allow the program to compile. The only exception is
> > existentials, where you pass the witness at runtime. This is not some
> > coincidence but a fundamental property, and precisely why you need a
> witness
> > at runtime for existentials but not for any other type.
>
> Hmm, well maybe that's a way to look at it. But it sure doesn't sound
> like the usual way to look at it.


It's how most real type checkers are implemented, although it's different
from the academic descriptions that use substitution. It's how the OCaml type
checker is implemented, for example. Maybe these other implementers have not
realised that all (non-existential) type variables must be grounded, but it
is a requirement, and I am sure I am not the first to spot it.

Polymorphic recursion is tricky, but it has many properties in common with
existential types, in that the 'polymorphic' type cannot be returned and
cannot escape from the closure created by the polymorphically recursive
function. You can see the similarity in the type signatures too:

data Box = forall a . Show a => Box a

f :: forall a . Show a => a -> Int -> String

You could look at this another way: after completing type inference, any
ungrounded type variables are existential, or part of polymorphic
recursion. They tell you exactly where you need runtime polymorphism.
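To illustrate what I mean by a type variable that can never be grounded, here
is a standard polymorphic-recursion sketch (the names Nested and depth are my
own, chosen for the example): 'a' is instantiated at a deeper type on every
recursive call, so no single ground type can ever be assigned to it:

data Nested a = Flat a | Deeper (Nested [a])

depth :: Nested a -> Int
depth (Flat _)   = 0
depth (Deeper n) = 1 + depth n

The signature on 'depth' is mandatory in Haskell precisely because the
recursion is polymorphic.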


> > I am not saying that there do not exist type systems which do not have
> the
> > relationship between strata and phasing. Dependent types are the obvious
> > counter example.
>
> There seems to be a bit of a disconnect in your responses, so I guess
> you're not understanding one of my points.
>

I am saying there certainly exist type systems where the above property
doesn't hold, but that is a subset of all type systems. Starting with
simple type systems, the property above holds, so it seems reasonable to
say it holds in general unless you add a particular feature to your type
system that breaks it.


> It's not just dependent type systems, it's practically all type
> systems from or based on typed functional languages.
>

No, I disagree. Simply typed lambda calculus has the above property, and
adding higher-order functions does not remove it. HM plus higher-order
functions retains it too. Existentials are a controllable exception (we
can use the existential type itself to tell us to introduce runtime
polymorphism).
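For example (a minimal sketch, with my own names): a higher-order HM function
like 'apply' is polymorphic in isolation, but at every use site its type
variables are instantiated at ground types, so the whole program can in
principle be fully specialised:

apply :: (a -> b) -> a -> b
apply f x = f x

main :: IO ()
main = do
    print (apply (+ 1) (41 :: Int))   -- a = Int, b = Int
    putStrLn (apply reverse "hello")  -- a = String, b = String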


> Strange. To me, doing (restricted) polymorphism and type classes, but
> with the same implementation style as C++ templates seems right up
> modal staging's alley. It would be an improvement over templates in
> that, like real type abstraction, you don't need to check each
> template instance.
>

Okay, first you need to convince me the above property does not hold before
I think it's worth discussing staging. Otherwise we have two mechanisms
trying to achieve the same thing, which seems like a bad design to me.


> I don't see why boxing and garbage collection would be the culprits
> for losing static resolution. Those things make type systems easier.


Because you get tagged types at runtime, you end up with code that is
non-specialised. Yes, it does make things easier by removing the need to
completely specialise, but then the compiler is no longer generating code as
fast as C/C++.
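To make the cost concrete, here is a hedged GHC-flavoured sketch (my own
example): a constrained polymorphic function is compiled once and takes a Num
dictionary at runtime, which is the tagged, non-specialised route; you only
get the fully specialised, C++-template-like code when a ground-type copy is
generated, e.g. via a SPECIALISE pragma (GHC may also do this on its own when
optimising):

-- Generic version: one compiled body, dictionary passed at runtime.
sumSquares :: Num a => [a] -> a
sumSquares xs = sum [ x * x | x <- xs ]

-- Request a ground-type copy with no dictionary passing.
{-# SPECIALISE sumSquares :: [Int] -> Int #-}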


> Well, we've established that you're thinking about type systems in a
> very strange way. We'll see how you'd express this after you realize
> you're supposed to be doing type checking at abstract types.
>

I don't get your point; I am type checking with abstract types. I am saying
the property that all type variables end up grounded is there even if you do
check with abstract types. Just because you can't see the elephant in the
room does not mean it is not there.


> Depends on what's "good enough". I think it's plenty good enough for
> most application code.
>

Good enough for code for which performance (and hardware cost when scaled
out) is not a factor.

But if you can have the performance and write beautiful, elegant code, what's
the problem? I think you can have both, and that the features I would leave
out would not make the code better (more readable, simpler).

The goal is to write the simplest and most understandable version of some
algorithm, and have it be fast too, without jumping through hoops. In C++ I
can write fast code with elegant abstractions using templates. In Haskell I
have to annotate my code for strictness in some arcane way.
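For example (my own sketch of the kind of annotation I mean), a naive
accumulating loop builds a chain of thunks, so to get predictable performance
you have to mark the accumulators strict yourself with bang patterns:

{-# LANGUAGE BangPatterns #-}

mean :: [Double] -> Double
mean = go 0 0
  where
    go :: Double -> Int -> [Double] -> Double
    go !s !n []       = s / fromIntegral n
    go !s !n (x : xs) = go (s + x) (n + 1) xs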


> It shouldn't remind you of eval. It should remind you of macros. Just
> much much cleaner. (But also less expressive.)
>

Well, code quoting, quasi-quoting, variable substitution: all these things
make code harder to read and make its function less obvious.
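To be concrete about the kind of thing I mean, here is a small Template
Haskell sketch (the classic 'power' example, in my own rendering): the
quotation brackets and splices do the job, but what the generated code does
is much less obvious than an ordinary function:

{-# LANGUAGE TemplateHaskell #-}
module Power where

import Language.Haskell.TH

-- Builds the expression \x -> x * x * ... * x * 1 at compile time.
power :: Int -> Q Exp
power 0 = [| \_ -> (1 :: Int) |]
power n = [| \x -> x * $(power (n - 1)) x |]

A use site in another module would then splice it in as $(power 3) 2.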

>
> Since you said your existantials have a runtime witness, I can tell
> you already that they are a dead wrong way of doing dependent types.
> One of the major draws of dependent types is moving evidence and
> checks to compile time. E.g. array bounds and bounds checking.
>

You cannot move such checks to compile time, because you don't know what the
values are until runtime.


> > types depending on values = existentials
>
> No. This is dependent types. This is exactly dependent types. We can
> talk about what existential-like types various systems have later.
> Dependent typing is conceptually prior to any specific type
> constructor. But understanding abstract types is conceptually prior to
> dependent typing.
>

data Box = forall a . Show a => Box a

Once we put something in the box, we cannot know its type at compile time;
this is the meaning of not allowing the type variable to escape. However, we
do know the type at runtime, and we can demonstrate that:

f (Box a) = show a

Calling 'f' on a boxed value calls the correct version of 'show'. So here we
have a type depending on a value: whatever value is put in the box. We can
put a value in the box at runtime using polymorphic recursion:

{-# LANGUAGE ExistentialQuantification #-}

-- Peano-style numerals: each application of S is a different type.
data Z = Z deriving Show
data S x = S x deriving Show

-- The existential box: the concrete type inside is hidden; only Show is known.
data Box = forall a . Show a => Box a

-- Polymorphic recursion: 'a' is instantiated at a deeper type on every call.
reify' :: Show a => a -> Int -> Box
reify' a 0         = Box a
reify' a x | x > 0 = reify' (S a) (x - 1)
reify' _ _         = error "reify': negative count"

reify :: Int -> Box
reify = reify' Z

main :: IO ()
main = do
    n <- fmap read getLine :: IO Int
    putStrLn $ case reify n of
        Box c -> show c
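For example (with the program above), entering 2 at the prompt builds
Box (S (S Z)) and prints "S (S Z)"; the depth of the nesting, and hence the
concrete type hidden in the box, is only determined at runtime by the value
read from input.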



On 4 May 2015 at 01:38, Matt Oliveri <[email protected]> wrote:
>
> You _can_ implement an HM type checker that way, if it's really just
> HM. (E.g., no modules.) Still, no-one does that.
>

I think you misunderstand: you still type check abstractly; the grounding
property is simply there once you have type checked the whole program.

> Try the following code in OCaml, or analogous code in whatever
> language w/ generic types you have on hand:
>
> (fun (f:'a->'a->'a)->f) (fun x->x)
> Error: This expression [x] has type 'a but an expression was expected of
> type
>          'a -> 'a
>        The type variable 'a occurs inside 'a -> 'a
>
> Now how could it know that expression is nonsense (without
> equirecursive types at least) without knowing what 'a is? Only one
> explanation: it type checked it.
>

The function is type checked, but its use is not: a function that is never
used may as well not exist.

In the type system this is simply an uninstantiated type. I can have as many
polymorphic types in the polymorphic environment as I like, but if they are
never instantiated they are not part of this program, and play no part in
the typing of the program.
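A minimal sketch of the claim as I read it (hypothetical names): 'unused'
sits in the environment with a polymorphic type, but since it is never
instantiated it plays no part in the type derivation of main:

unused :: a -> a -> a
unused x _ = x

main :: IO ()
main = print (length "grounded")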


> > I think you are not fully thinking through the process of type checking.
>
> Either we are having a hilarious misunderstanding, or one of us has
> learned type checkers all wrong. :)
>

I think you are not thinking about the type derivation of the program, but
instead think that the types of unused functions are somehow relevant to the
type of the program.


> Maybe this helps:
> Abstract types means having variables in types, such that you can
> check *expressions* at those abstract types.
>
> > Lets say I have some functions:
> >
> > f :: forall a b . a -> b -> b
> > g :: forall a b .  a -> b -> a
> >
> > Now I can type check these polymorphic function definitions on their own
> > with the type variables no problem,
>
> You have not defined any functions...
>

Actually, I have: each of those functions has only one possible definition. I
don't need to specify the value level because it is obvious:

f x y = y
g x y = x

There are no other possibilities.


> >but lets say I now want to type check:
> > h x = f g x
> >
> > To work out the type of 'h' I need to instantiate and unify the types of
> f
> > and g with the type variable for the variable 'x'.
>
> Right. So you get:
> h :: forall a . a -> a
>

The actual type is not important.

> > When I get to the main program:
> >
> > main = f g 1 2 3
>
> You get a type error because you gave f too many arguments.
>

main = f g 1

> > No type variables are allowed to be unknown, because they do not exist
> > outside of main.
>
> What do you mean?
>

We know the types of all the variables, i.e. the type of main does not
contain any type variables. The two are connected: unless we introduce a new
type variable existentially, there can be no quantified type variables left
in the type of main. The type system is an algebra, and the left and right
hand sides must be equivalent: quantifiers cannot just appear.

> > There is no way for main to pass a value of unknown type to
> > any function,
>
> Well you're mixing up runtime and compile time in that sentence. There
> _is_ a way to use an expression of unknown type as an argument. It'll
> just screw up if you run it:
>
> main = f (hd []) 3
>

So you are relying on the fact that we know at compile time that we never
use the list. If we use the list, we need to know its type. This is fine, as
there are no unknown types in the program. Yes, there can be unknown types
in dead code, which a strict type checker should reject, but I don't really
mind if it doesn't.

The compiler would just dead-code eliminate the expression, hence there
would still be no ungrounded type variables in the program.

> Because you could add "existentials", and still have typeclasses over
> types with existentials, and add instance arguments, and add dependent
> types, and add everything but the kitchen sink, and you'd still know
> that types "in the mode" would be known at compile time.


This is not what I intend. You can still have type classes over existentials.
The point is that you know when to use runtime polymorphism (when there is
an existential) and when you can resolve statically.

The point is not that you cannot do these things, but that:

- There is a clear rule for when runtime polymorphism is needed, and that
rule is exact.

- The places where runtime polymorphism is used are clearly visible to the
programmer. They have to actively use existentials to get the runtime
polymorphism. You don't pay for what you don't (explicitly) use.
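A minimal sketch of that rule (my own example): 'showStatic' resolves the
Show instance at compile time because the type is ground, while 'showDynamic'
must consult the dictionary carried inside the Box at runtime, and the
existential in the type is exactly what tells you so:

{-# LANGUAGE ExistentialQuantification #-}

data Box = forall a . Show a => Box a

-- Ground type: 'show' resolves statically to the Int instance.
showStatic :: String
showStatic = show (42 :: Int)

-- Existential: the Show dictionary travels with the value at runtime.
showDynamic :: Box -> String
showDynamic (Box x) = show x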


Keean.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
