Stefan Holdermans wrote:
This is rather typical in the field of program analysis. Getting the analysis precise is impossible in general, as it reduces to solving the halting problem. So, the next best thing is to get an approximate answer. An important design guideline for such an analysis is to "err on the safe side". That means you'll end up either

- with an analysis that tells you that a given function *is guaranteed to* inspect all elements of its argument list or else that the function *may* inspect fewer elements; or

- with an analysis that tells you that a given function *is guaranteed not to* inspect all elements of its argument list or else that the function *may* inspect all elements.

Of course, which of these modalities suits you best depends on your needs.
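To make the two modalities concrete, here is a minimal Haskell sketch (all type and function names are invented for exposition; this is not any real analysis library). The key point is that a sound analysis keeps a definite answer only when it can prove it, and combining information from two control-flow branches must stay on the safe side:

    -- In each answer type, one constructor is a proven fact and the
    -- other is the safe over-approximation.
    data UsageA = GuaranteedInspectsAll | MayInspectFewer  -- first modality
    data UsageB = GuaranteedNotAll      | MayInspectAll    -- second modality

    -- Where control flow splits (say, the two arms of an 'if'), the
    -- definite answer survives only if both arms agree; otherwise the
    -- analysis must fall back to the approximate answer.
    joinA :: UsageA -> UsageA -> UsageA
    joinA GuaranteedInspectsAll GuaranteedInspectsAll = GuaranteedInspectsAll
    joinA _                     _                     = MayInspectFewer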

Right. So what you're saying is that for most program properties X, you can partition the set of all possible programs into the set for which X is true, the set for which X is false, and a final set of programs for which we can't actually determine the truth of X. Is that about right?
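As a tiny Haskell illustration of that partition (my own names, purely for exposition), a sound analysis for a property X gives one of three answers, and the third bucket is exactly the price paid for computability:

    data Verdict
      = Holds     -- X proved true
      | Fails     -- X proved false
      | Unknown   -- undecided; must be treated conservatively,
                  -- per the "err on the safe side" guideline
      deriving (Eq, Show)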

I find it interesting that one quite hard-looking property - "this program is well-typed" - can always be decided. Perhaps that's because the language is specifically designed to constrain the set of possible programs in such a way that this is possible? Or perhaps it's just that the set of programs which can't be proved well-typed are simply rejected as if they were provably ill-typed? *shrugs*
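For what it's worth, the second guess is essentially what happens. It is easy to exhibit a program that could never actually go wrong at run time but that the type checker rejects anyway (an illustrative example, not from the thread):

    -- The condition is literally True, so the else-branch is dead code
    -- and this could never crash at run time.  The type checker cannot
    -- prove that, though, and rejects the definition as ill-typed:
    --
    --   bad :: Int
    --   bad = if True then 1 else "unreachable"
    --
    -- GHC reports something like: Couldn't match type '[Char]' with 'Int'.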

Can somebody explain to me exactly what the halting problem says? As I understand it, it doesn't say "you can't tell if this program halts"; it says "you cannot write a single algorithm which can tell, for *every* program, whether it halts or not". (Similar to the way that Galois dude didn't say "you can't solve higher-degree polynomials"; he said "you can't solve all possible Nth-degree polynomials *with one formula* [involving only a specific set of mathematical operators]". As in, you *can* solve particular higher-degree polynomials - just not with a single formula for all of them, or not using only the specified operators.)
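That reading is correct. The standard proof is a diagonal argument, which can be sketched in Haskell; 'halts' below is the hypothetical decider, passed in as a parameter precisely because no real total function can fill that role:

    -- Given any claimed self-halting decider, build the program it
    -- must get wrong: if the decider says "halts", loop forever.
    diag :: (prog -> Bool) -> prog -> ()
    diag halts p = if halts p then loop else ()
      where
        loop :: ()
        loop = loop

    -- If a total 'halts' correctly decided "p halts on its own
    -- encoding", then 'diag halts' applied to its own encoding would
    -- halt exactly when it doesn't -- a contradiction.  Note that this
    -- rules out one algorithm covering all programs; it says nothing
    -- against proving termination of some particular program.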

I am not aware of any work on exactly this type of analysis, but you may want to have a look at strictness analysis first; I think there's quite a bit of overlap with what you are trying to achieve.
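To see the connection (an invented example): strictness analysis distinguishes functions by how much of their argument they are guaranteed to force, which is a special case of asking how much of a list gets consumed:

    -- Both take a list, but a strictness/usage analysis should be able
    -- to tell them apart:
    takesAll :: [Int] -> Int
    takesAll = sum     -- forces the entire spine and every element

    takesOne :: [Int] -> Int
    takesOne = head    -- forces only the first cons cell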

I was actually thinking about having a chain of functions, and wondering whether you can remove the intermediate lists, and/or "optimise" them into some other data structure. But that would depend on how much of the list is consumed - which, as I suspected, is tricky to determine.
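Incidentally, GHC already does a form of this for pipelines built from fusible functions: with optimisation on, foldr/build list fusion typically removes the intermediate lists altogether (standard GHC behaviour, not something established in this thread):

    -- With -O, GHC's list fusion usually compiles this to a single
    -- loop: neither [1 .. n] nor the mapped list is ever built.
    sumSquares :: Int -> Int
    sumSquares n = sum (map (^ 2) [1 .. n])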

