On Thu, Jul 3, 2014 at 3:48 PM, Jonathan S. Shapiro <[email protected]> wrote:
> On Thu, Jul 3, 2014 at 10:57 AM, Geoffrey Irving <[email protected]> wrote:
>>
>> By the way: the reason I don't think this is a problem is that the
>> compiler can just give up after a while and say "I'm not sure". I
>> don't see how this pragmatic solution differs from standard
>> Hindley-Milner eating all the RAM in the universe due to exponential
>> blowup even with a simple decidable type system.
>
> It's very different. If two systems differ in RAM, that's an easy thing to
> fix. If two compilers differ in when they choose to quit on the problem,
> that's a failure of language definition.
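For reference, the kind of blowup I have in mind above is the classic
nested-pairs example, sketched here in Haskell:

  f0 x = (x, x)
  f1 x = f0 (f0 x)   -- inferred type: t -> ((t, t), (t, t))
  f2 x = f1 (f1 x)   -- the inferred type already has 16 leaves
  f3 x = f2 (f2 x)   -- 256 leaves; fN grows doubly exponentially

Every compiler agrees these are well typed, but writing the inferred
types out exhausts memory long before N gets large.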
Fair enough. In that case, we either need real dependent types with strong normalization plus a compile-time / runtime stratification, or something much weaker than dependent types (and also weaker than C++ templates, Haskell with GHC extensions, etc.).

I hope that systems languages get to dependent types in the long term, but I frankly don't think they are practical without some heuristic theorem proving support, which is much worse in terms of compiler dependence than a time cutoff. That probably rules them out for bitc.

I haven't thought much about the space of types-with-values weak enough to be fully decidable, so I'm not sure anything practical exists. I've only skimmed Habit, and it was a while ago, so it might be an example of such a system.

One could also imagine rigorously defining the set of accepted programs by giving a semantics for compile-time execution that includes a counter, and standardizing the number of "standard instructions" allowed per function. That seems almost too horrendous to mention, but I think it would actually work. There would need to be a language feature for changing this number in source, not least to allow different compilers to be compared against a standard test suite for compliance. A rough sketch of the idea is below.
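Concretely, here is a minimal sketch in Haskell rather than anything
bitc-specific (the expression type and the cost model are made up; the
point is only that running out of fuel is a defined, portable outcome):

  -- A toy compile-time evaluator where each constructor visited costs
  -- one unit of "fuel".  Exhausting the budget is a defined rejection,
  -- so any two conforming compilers accept and reject the same programs.
  data Expr
    = Lit Int
    | Add Expr Expr
    | If Expr Expr Expr          -- nonzero condition means true
    deriving Show

  data Result
    = Value Int
    | OutOfFuel                  -- budget exceeded: rejected, portably
    deriving Show

  -- Returns the remaining fuel along with the result.
  eval :: Int -> Expr -> (Int, Result)
  eval fuel _ | fuel <= 0 = (0, OutOfFuel)
  eval fuel (Lit n)       = (fuel - 1, Value n)
  eval fuel (Add a b)     =
    case eval (fuel - 1) a of
      (f', Value x) -> case eval f' b of
                         (f'', Value y) -> (f'', Value (x + y))
                         out            -> out
      out           -> out
  eval fuel (If c t e)    =
    case eval (fuel - 1) c of
      (f', Value 0) -> eval f' e
      (f', Value _) -> eval f' t
      out           -> out

The standard would then fix the per-function fuel budget (modulo the
source-level override mentioned above), so that "eval 1000 e" gives the
same answer everywhere: either a value or a rejection, never a
compiler-dependent hang.

Geoffrey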
