On Tue, Jun 16, 2015 at 7:05 AM, Matt Rice <[email protected]> wrote:
> On Tue, Jun 16, 2015 at 1:04 AM, Matt Oliveri <[email protected]> wrote:
>> On Tue, Jun 16, 2015 at 2:52 AM, Matt Rice <[email protected]> wrote:
>>> So, I was reading this paper about phase distinction (something I
>>> haven't been sure what to call until now),
>>>
>>> and it occurred to me that a statically typed AST, with statically
>>> typed sub-trees, may require more interesting forms of phase
>>> separation than are typically seen, in the form of an expression for
>>> producing the type of the sub-tree
>>
>> Which notion of "statically typed AST"? Shap's/mine, which can be done
>> in simply-typed languages, or ASTs of (only) well-typed programs,
>> which require fancier type systems?
>
> ASTs for a well-typed compiler, so the latter.

"Well-typed compiler" doesn't seem any less ambiguous than "statically typed AST".
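
To make the two readings concrete, here's a rough Haskell sketch (the
names are purely illustrative, nothing to do with BitC): a plain AST
type that can still represent ill-typed object programs, versus a GADT
indexed so that only well-typed object programs are representable.

    {-# LANGUAGE GADTs #-}

    -- "Statically typed AST" in the weak sense: the AST itself has a
    -- type, but it happily represents ill-typed object programs.
    data Expr
      = Lit Int
      | BoolLit Bool
      | Add Expr Expr
      | If Expr Expr Expr     -- nothing rules out If (Lit 3) ...

    -- The stronger sense: the type index guarantees well-typedness.
    data TExpr t where
      TLit  :: Int  -> TExpr Int
      TBool :: Bool -> TExpr Bool
      TAdd  :: TExpr Int -> TExpr Int -> TExpr Int
      TIf   :: TExpr Bool -> TExpr a -> TExpr a -> TExpr a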

>> Either way, I don't actually see what it has to do with phase
>> distinction. The type of an AST is compile-time, and the AST itself is
>> runtime (or it can be either, with dependent types), no matter how
>> complex the type is. Can you explain your reasoning?
>
> My reasoning is that it's complicated to talk about this, primarily
> because something can be interpreted in any of these ways,

Hmm. Maybe you're right that these ideas are not pinned down carefully
enough, outside of a specific system. But I always thought it was
pretty straightforward what counts as compile time or runtime: it's a
matter of how the language is implemented. The type system technically
cannot tell you about compile time vs runtime. Even with staged types,
it's just giving the implementation a big hint.
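
Just to illustrate the kind of "hint" I mean, here's the classic staged
power example in (untyped) Template Haskell; not "staged types" exactly,
and nothing BitC-specific, but it shows the phase split: the recursion
over n happens when the splice runs at compile time, the
multiplications happen at run time.

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH

    -- Unfolds x^n into n explicit multiplications at splice time.
    power :: Int -> Q Exp -> Q Exp
    power 0 _ = [| 1 |]
    power n x = [| $x * $(power (n - 1) x) |]

    -- Wraps the unfolding in a lambda so it can be spliced.
    -- Used from another module:  power5 = $(mkPower 5)
    -- which behaves like         \x -> x * x * x * x * x
    mkPower :: Int -> Q Exp
    mkPower n = [| \x -> $(power n [| x |]) |]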

> and frustratingly set up so that you cannot speak about a type's
> return value, or specify some return value's future type
> existentially,

I don't understand. A type doesn't have a return value. And what is a
"future type"? You're not talking about futures, are you?

> when we limit ourselves to one side of the type checking equation, yet
> the tools work on both, and ownership of type in the resulting
> language becomes underspecified, and when type class "returns" a new
> type class associated with a type, who owns the resulting type class,
> and that you can go down a class of order, but cannot go back up
> one...

I hope you mean "type checking equation" figuratively; otherwise I
don't know what you mean. I also don't know what "ownership of type"
could mean. By "class of order", do you mean the Ord type class? What
does it mean to go up or down it?
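
If by a type class "returning" a new type (or class) associated with a
type you mean something like associated types, here is a minimal
Haskell sketch of that feature. This is purely my guess at what you
mean, and the names are made up.

    {-# LANGUAGE TypeFamilies #-}

    -- Each instance picks out an associated type, Elem c.
    class Container c where
      type Elem c
      empty  :: c
      insert :: Elem c -> c -> c

    instance Container [a] where
      type Elem [a] = a
      empty  = []
      insert = (:)

But even then I don't see who would "own" Elem [a] beyond the instance
that declares it.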

Maybe it would be helpful to draw a distinction between types, which
do not exist at runtime, and type representations, which can? So even
with Type:Type, we're really just saying there's a type of type
representations. The trick is that you're allowed to (statically!) use
an expression for a type representation as a type. And this only seems
to make sense in referentially transparent languages.
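
A rough Haskell sketch of what I mean by a type representation (again
just an illustration, not a BitC proposal): an ordinary datatype whose
values stand for types, and whose index only the type checker cares
about.

    {-# LANGUAGE GADTs #-}

    -- Runtime values that represent types; the types themselves are
    -- erased, but these representations are ordinary data.
    data Rep a where
      RInt   :: Rep Int
      RBool  :: Rep Bool
      RPair  :: Rep a -> Rep b -> Rep (a, b)
      RArrow :: Rep a -> Rep b -> Rep (a -> b)

    -- A representation can be inspected at run time like any value...
    describe :: Rep a -> String
    describe RInt         = "Int"
    describe RBool        = "Bool"
    describe (RPair a b)  = "(" ++ describe a ++ ", " ++ describe b ++ ")"
    describe (RArrow a b) = describe a ++ " -> " ++ describe b
    -- ...while the index `a` exists only for the type checker.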

> the theory is that we should be interpreting sequences of provably
> true theories and forcing everything, language primitives included,
> to admit to returning a value, which can be thrown away if you really
> don't want it,

Are you getting at total functions here? That is, functions which will
eventually return a value; they can't run forever?
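
For instance (my examples, not yours):

    -- Total: structural recursion on the list, so it must terminate.
    len :: [a] -> Int
    len []     = 0
    len (_:xs) = 1 + len xs

    -- Not total: nothing forces this to ever return a value.
    spin :: Int -> Int
    spin n = spin (n + 1)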

> and letting type be a regular value which just looks like the
> identity function on some future well-defined type.

Mm, no, I don't get this.

> it's easy to fall under the spell of omniscient type values, rather
> than isolating phases from one another, and reusing the language.

I don't know what you mean by "omniscient". But it kind of sounds like
you're against having code be reusable across phases.

>> Dependent types look impossible if you assume all code belongs to a
>> certain (unique) phase. But the better way to think about it is that
>> code in a dependent type system possibly belongs to both phases.
>
> right, or to restate it, you can decide via some pure function whether
> you work with a runtime value or a compile-time value

I don't think that's what I'm saying at all. If we have a value in the
compiler, it's compile time. If we have a value in the program that
got compiled, it's runtime. Deciding which it is makes no sense.

What I was saying is that when a compiler for a dependently-typed
language sees some code, it can run it on compile-time values, to help
with type checking, and it can also compile it to be run by the
compiled program.
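
Here's a rough Haskell approximation of that, using GADTs and type
families as a stand-in for real dependent types (illustrative only):
the addition on lengths is evaluated by the type checker at compile
time, and the "same" additive structure shows up again at run time in
how append walks its first argument.

    {-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

    data Nat = Z | S Nat

    -- Evaluated by the type checker while checking append's type.
    type family Plus (n :: Nat) (m :: Nat) :: Nat where
      Plus 'Z     m = m
      Plus ('S n) m = 'S (Plus n m)

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Compiled and run like any other function.
    append :: Vec n a -> Vec m a -> Vec (Plus n m) a
    append VNil         ys = ys
    append (VCons x xs) ys = VCons x (append xs ys)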