On 12 Jun 2015 11:04, "Matt Oliveri" <[email protected]> wrote:
> Right. And the typing rules, which are stated in terms of ASTs, not
> surface syntax, are the type checker spec. And they are separate.

They need not be. In my Prolog type inference implementations I have
combined grammar and type inference rules. I think the Prolog rules are
about as close to a formal specification as you can get.
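Not Prolog, but here is a rough Python sketch of the same idea, with grammar and typing combined into one set of rules (the toy language and all names are illustrative):

```python
# Sketch: grammar rules and typing rules combined in one pass.
# Each "rule" both recognizes syntax and returns the inferred type.
# Toy grammar: expr ::= term ('+' term)* ; term ::= INT | BOOL

def parse_term(tokens, pos):
    """Leaf rule: consumes one token, returns (type, next_pos)."""
    tok = tokens[pos]
    if tok.isdigit():
        return "int", pos + 1
    if tok in ("true", "false"):
        return "bool", pos + 1
    raise SyntaxError(f"unexpected token {tok!r}")

def parse_expr(tokens, pos=0):
    """'+' chains: parsing and type inference happen together."""
    ty, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        if ty != "int":
            raise TypeError("'+' expects int operands")
        rty, pos = parse_term(tokens, pos + 1)
        if rty != "int":
            raise TypeError("'+' expects int operands")
        ty = "int"
    return ty, pos

print(parse_expr("1 + 2 + 3".split()))   # ('int', 5)
```

The point is that ill-typed input is rejected during parsing, so there is no separate AST-walking type checker.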

> In both cases, the spec is _generative_ whereas the implementation is
> _analytic_. That is, a CFG tells you how to generate all the parseable
> inputs, whereas a parser takes an input and tests if it's parseable
> (and also does something with it, hopefully). Typing rules tell you
> how to generate all the well-typed program fragments (as ASTs),
> whereas a type checker takes an AST and tests if it's well-typed.
>
> PEGs are actually analytic, and I'd argue they're really no better
> than an abstract combinator API. So I thought you were saying that
> because a parser generates an AST, we could think of a parser, with
> integrated type checking, made with combinators, as a stand-in for a
> grammar and typing rules, since it's like a PEG that generates all
> well-typed program fragments.
>
> And I was trying to explain why this approach does not provide enough
> evidence that the programmer knows what they're doing. Aside from what
> I was saying about not keeping parsing and typing separate, and buggy
> combinator implementations, you have Shap's old complaint that with a
> PEG, you can't tell if the language is ambiguous.
>
> > For example we take an EBNF grammar specification, and write it using
> > parser combinators, or compile it into code. What else is there to check
> > it against?
>
> Since (I think) EBNF is generative, it's actually not a given that the
> combinators parse the same language that the grammar generates. You'd
> have to prove it. So that would be something to check even if you
> consider a type-checking, combinator-based parser to be an acceptable
> definition of a type system, which I don't.
>
> You can always take an implementation to be a spec, in which case it's
> correct by definition. But most implementations have bugs, so taking
> an implementation for a spec usually gets you a pretty stupid
> definition of correctness.

Actually the grammar and the parser are the same thing; it's just that the
parser has an additional literal representation attached to the leaf terms
of the grammar.
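As a rough illustration (Python, all names invented): the combinator expression below is structurally the grammar rule itself, and the only extra information is the literal text attached at the leaves.

```python
# Minimal parser combinators. The parser mirrors the grammar rule
#   list ::= '[' INT (',' INT)* ']'
# one-to-one; the literals '[' ',' ']' are attached at the leaf terms.

def lit(s):
    def p(inp, pos):
        return (s, pos + len(s)) if inp.startswith(s, pos) else None
    return p

def integer(inp, pos):
    end = pos
    while end < len(inp) and inp[end].isdigit():
        end += 1
    return (int(inp[pos:end]), end) if end > pos else None

def seq(*ps):
    def p(inp, pos):
        out = []
        for q in ps:
            r = q(inp, pos)
            if r is None:
                return None
            v, pos = r
            out.append(v)
        return out, pos
    return p

def many(q):
    def p(inp, pos):
        out = []
        while (r := q(inp, pos)) is not None:
            v, pos = r
            out.append(v)
        return out, pos
    return p

# list ::= '[' INT (',' INT)* ']'
int_list = seq(lit("["), integer, many(seq(lit(","), integer)), lit("]"))

print(int_list("[1,2,3]", 0))
```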

So far I have been assuming we are discussing implementing a parser for our
new language in some existing widely supported language (like C++).

But if we are talking about how our new language might better support
compiler writing, then that's a different question. What would a proof look
like?

If we can express the typed grammar as a logic program (see the example
arity-aware inference algorithm I posted a while back), then we need only
statically assert that each parser implements a type transformation given
by the grammar. We might do this by naming the grammar rules and asserting
which grammar rule each parser implements.
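One way this could look in practice (an illustrative Python sketch, not the proposed language): name each grammar rule together with the type it assigns, have each parser declare which rule it claims to implement, and then check the claimed correspondence.

```python
# Sketch: each named grammar rule is paired with the type it assigns;
# each parser is tagged with the rule it implements, so a checker can
# test the claimed correspondence on sample inputs.

RULES = {
    "int_lit":  "int",    # rule int_lit : INT  => int
    "bool_lit": "bool",   # rule bool_lit : BOOL => bool
}

IMPLEMENTS = {}

def implements(rule):
    """Tag a parser with the grammar rule it claims to implement."""
    def wrap(parser):
        IMPLEMENTS[rule] = parser
        return parser
    return wrap

@implements("int_lit")
def parse_int(tok):
    return ("int", int(tok)) if tok.isdigit() else None

@implements("bool_lit")
def parse_bool(tok):
    return ("bool", tok == "true") if tok in ("true", "false") else None

def check(rule, samples):
    """Test that the parser's inferred type matches the rule's type."""
    parser, want = IMPLEMENTS[rule], RULES[rule]
    return all((r := parser(s)) is not None and r[0] == want
               for s in samples)

print(check("int_lit", ["0", "42"]))    # True
print(check("bool_lit", ["true"]))      # True
```

In the new language itself this check would be a static assertion discharged by the compiler rather than a runtime test.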

Keean.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
