Kind of a random question for today :). We all know that we can write map in terms of foldr:
> map f = foldr ((:) . f) []

As I understand it, this is essentially because foldr encapsulates all primitive recursive functions on lists, and since map is primitive recursive, we can implement it in terms of a fold.

One interesting thing to note is that if we are also given references, a function to sequence monadic actions ('sequence_') and reverse, we can write foldr in terms of map. (I do this in the IO monad, but as long as you have refs it doesn't really matter -- if you used a "better" monad, you could replace the unsafePerformIO with something that doesn't sound so bad; in this case the operation is safe anyway. A sketch along those lines, using ST, appears at the end of this message.)

> import Data.IORef (newIORef, modifyIORef, readIORef)
> import System.IO.Unsafe (unsafePerformIO)
>
> foldr f z l = unsafePerformIO $ do
>   v <- newIORef z
>   sequence_ $ map (modifyIORef v . f) (reverse l)
>   readIORef v

Of course, instead of sequence_ we can postulate rsequence_, which sequences its actions right to left (and so lets us drop the reverse), defined as:

> rsequence_ :: Monad m => [m a] -> m ()
> rsequence_ []     = return ()
> rsequence_ (x:xs) = do rsequence_ xs; x; return ()

It's probably also possible to do it by storing in the reference a function which we compose, but that's not interesting here.

Now, when we're in the IO monad, the difference between the real foldr and the simulated one is that the simulated one cannot deal with infinite inputs. For instance,

> head $ foldr (:) [] [1..]

should return 1, which it does for the real version; the simulated one hangs. One might like to attribute this to the fact that IO is a strict monad, but that doesn't seem to be all of it -- we would also need to be able to read and write references lazily to get this to work completely.

Q1: is this correct? Is there a way to "fix" the above definition, or to use a different monad, so as to get laziness? I think not, but I can't prove it.

Now, if we just limit ourselves to finite inputs, things look a lot better. However, this brings up the main question I really have. foldr (in its original form) is in some sense quintessentially primitive recursive on lists, whereas map is not. Yet the combination map + rsequence_ + references seems to be enough to give you full primitive recursive power. What more can we say about this? Clearly references play the largest role here, but it also seems impossible to remove the dependence on the sequencing operation. Clearly there is some class of algorithms weaker than primitive recursion into which both map and rsequence_ fall. These two functions are in some sense counterparts to each other: map retains list structure and only modifies elements, while rsequence_ destroys list structure and doesn't modify (just "runs") the elements. Is there some sort of theory that talks about this relationship? And why is it that, put together (plus refs), these things are strong enough to subsume primitive recursion?

 - Hal

p.s., responses delayed until after ICFP are equally welcome :)

--
Hal Daume III                            | [EMAIL PROTECTED]
 "Arrest this man, he talks in maths."   | www.isi.edu/~hdaume
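
Following up on the "better monad" remark above: a minimal sketch of the same simulation in the ST monad, where runST keeps the mutation local and no unsafePerformIO is needed. It assumes only Control.Monad.ST and Data.STRef from the standard libraries; the name foldrST is just illustrative, and rsequence_ is repeated from above so the fragment stands alone.

> import Control.Monad.ST (runST)
> import Data.STRef (newSTRef, modifySTRef, readSTRef)
>
> -- right-to-left sequencing, exactly as defined above
> rsequence_ :: Monad m => [m a] -> m ()
> rsequence_ []     = return ()
> rsequence_ (x:xs) = do rsequence_ xs; x; return ()
>
> -- foldr simulated with map + rsequence_ + a mutable ST reference;
> -- because rsequence_ runs right to left, no reverse is needed
> foldrST :: (a -> b -> b) -> b -> [a] -> b
> foldrST f z l = runST (do
>   v <- newSTRef z
>   rsequence_ (map (modifySTRef v . f) l)
>   readSTRef v)

Note that this changes nothing about the laziness question: head (foldrST (:) [] [1..]) hangs just like the IO version, since the whole list of actions has to be run before runST can return the accumulated result.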