2009/6/15 Paul Chiusano <paul.chius...@gmail.com>:
> Hello,
>
> I was recently trying to figure out whether there is a way, at runtime, to do better strictness analysis for polymorphic HOFs, where the strictness of some arguments might depend on the strictness of the function arguments that are passed in [1]. As an example, consider foldl. The 'seed' parameter of foldl can be made strict as long as the binary function used for the fold is strict in its arguments. Unfortunately, because strictness analysis is done statically, Haskell can't assume anything about the strictness of the binary function - assuming we only compile one instance of foldl, it must be the most conservative version possible, and that means making the seed parameter lazy. :-(
>
> I started thinking about ways you could do a check at runtime for this sort of thing, something to the effect of asking foldl, before heap-allocating a thunk for the seed parameter, whether that parameter could be made strict. foldl could then inspect the other arguments that have been supplied and, based on these, either evaluate the seed parameter or go ahead and create a thunk for it. It's a runtime cost, sure, but would it be more than the cost of having to do an additional heap allocation? In any case a small runtime cost might be worth it if the analysis becomes more uniform.
>
> So, my question is: does doing this sort of runtime analysis seem like a horrible idea? Is it even possible? Has anyone tried this in the past? (So far I haven't found anything, but I would love references if people have them.)
>
> Note that I'm not suggesting Haskell should do anything like this. I'm playing around with the ideas because I'm interested in creating a lazy language, and I was hoping to have strictness analysis be very predictable and uniform - something the programmer can count on and use to reason simply about space usage ... which might be a hopelessly unrealistic goal! I guess the more general question is: is "perfect" strictness analysis (however that is defined) possible, if we're willing to incur some runtime cost? What would that look like?
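To make the quoted foldl example concrete, here is a minimal sketch (the names sumStrict, sumLazy and lastOr are only illustrative, not anything from the original post): forcing the seed is safe and helpful when the binary function is strict in its accumulator, but changes the meaning when it is not.

    import Data.List (foldl')

    -- (+) is strict in both arguments, so forcing the seed at every step
    -- (as foldl' does) is safe and avoids building a chain of thunks:
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0

    -- Plain foldl keeps the seed lazy, so the same sum accumulates
    -- one suspension per element and can overflow the stack:
    sumLazy :: [Int] -> Int
    sumLazy = foldl (+) 0

    -- A binary function that ignores its accumulator.  Here the lazy seed
    -- matters: foldl never forces the undefined element, but blindly
    -- strictifying the seed would hit bottom.
    lastOr :: Int -> [Int] -> Int
    lastOr = foldl (\_ x -> x)

    main :: IO ()
    main = do
      print (sumStrict [1 .. 1000000])
      print (sumLazy [1 .. 10])
      print (lastOr 0 [undefined, 42])   -- prints 42 with a lazy seed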
The idea looks cool, but perfect strictness analysis is not possible, i.e. the problem of determining whether f _|_ = _|_ is undecidable: it is a non-trivial semantic property of f (there exist f's for which it is true, and ones for which it is false), and all non-trivial semantic properties of programs are undecidable, thanks to Rice's theorem.

> Best,
> Paul
>
> [1]: More background on my thinking here - a bit half-baked, so bear with me!
> http://pchiusano.blogspot.com/2009/06/perfect-strictness-analysis-part-1.html
> http://pchiusano.blogspot.com/2009/06/perfect-strictness-analysis-part-2.html

--
Eugene Kirpichov
Web IR developer, market.yandex.ru
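P.S. A tiny illustration of the non-triviality above (just a sketch; strictId and lazyConst are made-up names for this example): one function maps _|_ to _|_ and the other does not, so the property "f _|_ = _|_" is neither always true nor always false, which is all Rice's theorem needs.

    import Control.Exception (SomeException, evaluate, try)

    -- For this f, f undefined = undefined: forcing the result forces the argument.
    strictId :: Int -> Int
    strictId x = x `seq` x

    -- For this f, f undefined = 0, which is not bottom: the argument is never inspected.
    lazyConst :: Int -> Int
    lazyConst _ = 0

    main :: IO ()
    main = do
      r <- try (evaluate (strictId undefined)) :: IO (Either SomeException Int)
      putStrLn (either (const "strictId: hit bottom") show r)
      print (lazyConst undefined)   -- prints 0; this f is not strict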