[Haskell-cafe] ANNOUNCE: sloth-0.0.1
Hi, I have just released a prototype implementation of a tool (http://hackage.haskell.org/package/sloth) for checking whether a function is unnecessarily strict. It is inspired by Olaf Chitil's StrictCheck tool (http://www.cs.kent.ac.uk/people/staff/oc/) but does not propose improvements that lead to non-sequential function definitions. For more details there is a short explanation with some examples at http://www.informatik.uni-kiel.de/fileadmin/arbeitsgruppen/ca_prog_develop/Projekte/Sloth/Examples.pdf. If you discover a bug or have any questions please use the human bug tracking system at jan_christiansen[at]gmx[dot]de. Cheers, Jan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] space-efficient, composable list transformers [was: Re: Reifying case expressions]
Hi, On Jan 2, 2012, at 2:34 PM, Heinrich Apfelmus wrote: Without an explicit guarantee that the function is incremental, we can't do anything here. But we can just add another constructor to that effect if we turn ListTo into a GADT:

  data ListTo a b where
    CaseOf   :: b -> (a -> ListTo a b) -> ListTo a b
    Fmap     :: (b -> c) -> ListTo a b -> ListTo a c
    FmapCons :: b -> ListTo a [b] -> ListTo a [b]

I did not follow your discussion, but how about using an additional GADT for the argument of Fmap, that is

  data Fun a b where
    Fun  :: (a -> b) -> Fun a b
    Cons :: a -> Fun [a] [a]

  data ListTo a b where
    CaseOf :: b -> (a -> ListTo a b) -> ListTo a b
    Fmap   :: Fun b c -> ListTo a b -> ListTo a c

and provide a function to interpret this data type as well

  interpretFun :: Fun a b -> a -> b
  interpretFun (Fun f)  = f
  interpretFun (Cons x) = (x:)

For the sequential composition, if I am not mistaken:

  (.) :: ListTo b c -> ListTo a [b] -> ListTo a c
  (CaseOf _ cons) . (Fmap (Cons y) b) = cons y . b
  (Fmap f a)      . (Fmap g b)        = Fmap f $ a . (Fmap g b)
  a . (CaseOf nil cons) = CaseOf (interpret a nil) ((a .) . cons)
  a . (Fmap f b)        = fmap (interpret a . interpretFun f) b

  -- functor instance
  instance Functor (ListTo a) where
    fmap f = normalize . Fmap (Fun f)

  normalize :: ListTo a b -> ListTo a b
  normalize (Fmap (Fun f) (Fmap (Fun g) c)) = fmap (f . g) c
  normalize x = x

Cheers, Jan
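The message uses `interpret` without defining it. The following is a minimal runnable sketch under the assumption of a straightforward list interpreter; `interpret`, its clauses for `CaseOf`/`Fmap`, and the example consumer `sumL` are my assumptions, not part of the original message.

```haskell
{-# LANGUAGE GADTs #-}

data Fun a b where
  Fun  :: (a -> b) -> Fun a b
  Cons :: a -> Fun [a] [a]

interpretFun :: Fun a b -> a -> b
interpretFun (Fun f)  = f
interpretFun (Cons x) = (x:)

data ListTo a b where
  CaseOf :: b -> (a -> ListTo a b) -> ListTo a b
  Fmap   :: Fun b c -> ListTo a b -> ListTo a c

-- run a list consumer against an input list (assumed interpreter)
interpret :: ListTo a b -> [a] -> b
interpret (CaseOf nil cons) xs = case xs of
  []     -> nil
  (y:ys) -> interpret (cons y) ys
interpret (Fmap f c) xs = interpretFun f (interpret c xs)

-- hypothetical example consumer: sums its input
sumL :: ListTo Int Int
sumL = CaseOf 0 (\x -> Fmap (Fun (x +)) sumL)
```

For example, `interpret sumL [1,2,3,4]` unfolds the `CaseOf` once per element and accumulates with `Fmap`, yielding 10.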
Re: [Haskell-cafe] Properties for Foldable
Hi, On Jul 29, 2011, at 10:15 PM, Conal Elliott wrote: Is there a collection of laws associated with the Foldable class? Or for Traversable? - Conal If you are not aware of The Essence of the Iterator Pattern by Jeremy Gibbons and Bruno Oliveira (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.850&rep=rep1&type=pdf) it might be interesting. Especially chapter 5 discusses some laws of Traversable instances. Cheers, Jan
Re: [Haskell-cafe] Properties for Foldable
On Jul 30, 2011, at 12:40 AM, Sebastian Fischer wrote: Interesting. However I don't understand why the instance in Section 5.5 is not already forbidden by the purity law traverse pure = pure and a 'no duplication' constraint would be necessary. For example: traverse Id [1,2,3] = Id [1,1,2,2,3,3] /= Id [1,2,3]. What am I missing? I suspect you missed the use of const in the definition of the instance, if I am referring to the correct instance. The following instance duplicates elements, where duplication of elements is meant in the sense that the instance duplicates the applicative action on an element and not the element itself (at least in the sense of duplication of elements you used above).

  instance Traversable [] where
    traverse f []     = pure []
    traverse f (x:xs) = pure (const (:)) <*> f x <*> f x <*> traverse f xs

If f is pure the duplication has no effect, but it might have an effect if it is not. Can we derive Foldable laws from the Traversable laws? As foldMap is supposed to be equivalent to foldMapDefault and we have

  foldMapDefault f = getConst . traverse (Const . f)

I think we can derive similar laws. To me, in this case the no duplication constraint seems very desirable for all instances of Traversable, as foldMapDefault is probably not supposed to use the function f more often than there are elements in the data structure. Cheers, Jan
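To make the duplication visible, one can run the duplicating traversal (extracted here as a standalone function `traverseDup`, a name of my own) with an effect-counting applicative; `Const (Sum Int)` records one unit per call of `f`:

```haskell
import Data.Functor.Const
import Data.Monoid (Sum (..))

-- the duplicating list traversal from the message, as a standalone function
traverseDup :: Applicative f => (a -> f b) -> [a] -> f [b]
traverseDup _ []     = pure []
traverseDup f (x:xs) = pure (const (:)) <*> f x <*> f x <*> traverseDup f xs

-- count how often the applicative action is performed
countEffects :: (a -> Const (Sum Int) b) -> [a] -> Int
countEffects f = getSum . getConst . traverseDup f
```

With a pure effect the result is unchanged (`traverseDup Just [1,2,3]` is `Just [1,2,3]`), but the counting applicative sees each element's action twice: `countEffects (const (Const 1)) [1,2,3]` is 6, not 3.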
Re: [Haskell-cafe] How large is the Haskell community ?
On 12.02.2011, at 21:18, Aaron Gray wrote: I was wondering if anyone had an idea or estimate as to how large the Haskell community is? All the answers made me wonder what the criterion is to be a member of the Haskell community. Are you a member if you downloaded ghc, if you have (at least once) defined a Monad instance, if you have written a hackage package, if you have contributed to the Monad.Reader, if you have a github account with at least one Haskell project, if you read at least one of the haskell mailing lists, if you contribute to a haskell mailing list (perhaps on a regular basis), if you post on reddit, if you answer/ask questions on stackoverflow, if you have written at least one line of code in Haskell, if Haskell is one of the programming languages you use, if Haskell is the one programming language you use, if you have written a PhD thesis related to Haskell, if you have asked a type related question only Oleg Kiselyov was able to answer, if you know what a Monoid is and know how to use it ... ; ) Cheers, Jan
Re: [Haskell-cafe] Are constructors strict?
Hi, On 22.01.2011, at 08:12, Sebastian Fischer wrote: Also, Jan, I don't understand your comment about continuation monads. Maybe I am a bit numb today. What property do you mean do continuation monads have or not? I was wrong there. If there exist values x and y with x /= y and you have a function f such that f x /= f y, then we have f _|_ = _|_ (at least if f is a sequential function). I thought this property might fail if x and y are functions, but I was totally wrong. Therefore the laws

  mzero >>= f = mzero

and

  return x >>= f = f x

together with mzero /= mplus m n and mzero /= mplus (m >>= f) (n >>= f) for some m and n imply that we have _|_ >>= f = _|_ if >>= is sequential. Cheers, Jan
Re: [Haskell-cafe] Are constructors strict?
On 20.01.2011, at 22:18, Daryoush Mehrtash wrote: I am having a hard time understanding the following paragraph in the Purely Functional Lazy Non-deterministic Programming paper http://www.cs.rutgers.edu/~ccshan/rational/lazy-nondet.pdf: The problem with the naive monadic encoding of non-determinism is that the arguments to a constructor must be deterministic. If these arguments are themselves results of non-deterministic computations, these computations must be performed completely before we can apply the constructor to build a non-deterministic result. Why do the arguments to constructors have to be deterministic? Why is it that thunks are not used in this case? If you use a monadic model for non-determinism in Haskell you use the standard constructors provided by Haskell and the bind operator to apply them to a non-deterministic (monadic) value. For example, consider the non-deterministic value coin defined as follows.

  coin :: MonadPlus m => m Bool
  coin = return False `mplus` return True

To apply the constructor Just to this value you use the bind operator as follows.

  test :: MonadPlus m => m (Maybe Bool)
  test = coin >>= return . Just

Now consider the following definition.

  loop :: MonadPlus m => m Bool
  loop = loop

If we apply Just to loop as follows

  test2 :: MonadPlus m => m (Maybe Bool)
  test2 = loop >>= return . Just

the evaluation of test2 does not terminate because >>= has to evaluate loop. But this does not correctly reflect the behaviour in a functional logic language like Curry. For example, if you have a function that checks whether the outermost constructor of test2 is Just, the test is supposed to be successful. In the naive model for non-determinism this is not the case. Cheers, Jan
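A concrete instance of this, assuming the list monad as the MonadPlus (and lifting the two coin results with `return`, which the types require), can be run directly:

```haskell
import Control.Monad

-- a non-deterministic Boolean: both results
coin :: MonadPlus m => m Bool
coin = return False `mplus` return True

-- applying the constructor Just through bind
test :: MonadPlus m => m (Maybe Bool)
test = coin >>= return . Just
```

In the list monad, `test` enumerates `[Just False, Just True]`, while the analogous `test2` built from a diverging `loop` would not produce even the outermost `Just`, which is the point of the quoted paragraph.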
Re: [Haskell-cafe] Are constructors strict?
On 21.01.2011, at 11:52, Daryoush Mehrtash wrote: Do I have to have MonadPlus m or would any other Monad class work the same way? Not all monad instances satisfy undefined >>= return . Just = undefined, if that's what you are asking for. For example, consider the identity monad.

  instance Monad Identity where
    return  = Identity
    m >>= k = k (runIdentity m)

Then we have

  undefined >>= return . Just = (Identity . Just) (runIdentity undefined)
                              = Identity (Just undefined)
                              /= undefined

If >>= performs pattern matching on its first argument, like most instances do, then you get undefined >>= return . Just = undefined. I think that the MonadPlus laws

  mplus m n >>= f = mplus (m >>= f) (n >>= f)

called (Ldistr) in the paper and

  mzero >>= f = mzero

called (Lzero) in the paper imply undefined >>= return . Just = undefined, at least if you have mzero /= mplus m n, which is quite desirable. I don't think that this holds for continuation based monads. But Sebastian will most certainly know better as he is one of the authors of the paper. Cheers, Jan
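This can be checked concretely with the Identity monad from base (Data.Functor.Identity); `isJust` succeeds without forcing the bottom, confirming that here undefined >>= return . Just is not undefined:

```haskell
import Data.Functor.Identity
import Data.Maybe (isJust)

-- Identity's bind does not pattern match on its first argument,
-- so binding undefined still produces an outermost Just.
notBottom :: Bool
notBottom =
  isJust (runIdentity (undefined >>= return . Just :: Identity (Maybe ())))
```

Evaluating `notBottom` yields True; a bind that pattern matched on its first argument would instead diverge here.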
Re: [Haskell-cafe] Lambda-case / lambda-if
On 02.10.2010, at 20:35, Henning Thielemann wrote: On Sat, 2 Oct 2010, Colin Paul Adams wrote:

  Prelude> (if then "Haskell" else "Cafe") False
  "Cafe"
      -- Max

My reaction is to ask: Can you write this as (if then else) False "Haskell" "Cafe"? Sure:

  ifThenElse :: Bool -> a -> a -> a
  ifThenElse True  x _ = x
  ifThenElse False _ y = y

  Prelude> ifThenElse False "Haskell" "Cafe"

You can use a similar approach for case expressions ; )

  import Prelude hiding ( catch )
  import System.IO.Unsafe ( unsafePerformIO )
  import Control.Exception ( catch, evaluate, PatternMatchFail )

  caseOf :: a -> [a -> b] -> b
  caseOf x = unsafePerformIO . firstMatch . map ($ x)

  firstMatch :: [a] -> IO a
  firstMatch (x:xs) = catch (evaluate x) (handlePatternFail (firstMatch xs))

  handlePatternFail :: a -> PatternMatchFail -> a
  handlePatternFail x _ = x

  test = (flip caseOf [\1 -> "One", \_ -> "Not-one"]) 1

Well, to tell the truth, this does not work correctly, as the following example shows.

  test2 = (flip caseOf [\1 -> ((\2 -> "One") 3), \_ -> "Not-one"]) 1
Re: [Haskell-cafe] Re: Coding conventions for Haskell?
Hi, On 01.10.2010, at 09:57, Wolfgang Jeltsch wrote: I think this is much more readable, since you can easily see which of those lines belong together. I wonder whether there are empirical results which show that a certain kind of spacing is more readable for a certain kind of programmer (for example, experienced, novice, ...). I remember that I once read a paper about camelCase vs. underscore function names. I think they came to the conclusion that camelCase is advantageous for non-programmers but that there was no difference when they tested programmers. But I might be wrong here. Cheers, Jan
[Haskell-cafe] gathering position information lazily using traverse
Dear Applicative experts, I am seeking advice on Applicative instances and their use in traverse. Consider the following Applicative instance.

  newtype Proj a = Proj { unProj :: [Bool] -> a }

  instance Functor Proj where
    fmap g (Proj f) = Proj (g . f)

  instance Applicative Proj where
    pure = Proj . const
    Proj f <*> Proj x = Proj (\p -> f (False:p) (x (True:p)))

In fact, this is not an Applicative instance, as it does not satisfy the laws. On the basis of this instance I have defined the following function.

  gshape :: Traversable t => t a -> t [Bool]
  gshape x = unProj (traverse (const (Proj reverse)) x) []

The idea is simply to replace every polymorphic component by an identifier that identifies the position of the component in the data structure. That is, provided with the identifier I want to be able to project to the corresponding component. In this case this identifier is a path in the idiomatic term from the root to the component. I can define a correct Applicative instance if I add an additional constructor, which represents pure. I did not prove that it satisfies all laws, but I think it does.

  data Proj a = Pure a | Proj ([Bool] -> a)

  instance Functor Proj where
    fmap g (Pure x) = Pure (g x)
    fmap g (Proj f) = Proj (g . f)

  instance Applicative Proj where
    pure x = Pure x
    Pure f <*> Pure x = Pure (f x)
    Pure f <*> Proj x = Proj (\p -> f (x p))
    Proj f <*> Pure x = Proj (\p -> f p x)
    Proj f <*> Proj x = Proj (\p -> f (False:p) (x (True:p)))

My problem is that this correct instance is too strict for my purpose. It is important that gshape operates correctly on partial data, that is, even if its argument is partial all the components should be replaced correctly. For example, we have

  gshape (Node _|_ 0 (Leaf 1)) = Node _|_ [False,True] (Leaf [True])

If the applicative instance performs pattern matching, like the latter instance, then gshape is too strict. Therefore I suspect that there is no correct Applicative instance that satisfies my needs, but I am not at all certain.
Thanks, Jan
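For what it's worth, the first (unlawful but lazy) instance can be exercised on plain lists; the expected paths below follow from unfolding traverse for lists, and because Proj is a newtype the instance is lazy enough to handle partial input:

```haskell
newtype Proj a = Proj { unProj :: [Bool] -> a }

instance Functor Proj where
  fmap g (Proj f) = Proj (g . f)

-- not a lawful Applicative (see the message), but lazy enough for gshape
instance Applicative Proj where
  pure = Proj . const
  Proj f <*> Proj x = Proj (\p -> f (False:p) (x (True:p)))

gshape :: Traversable t => t a -> t [Bool]
gshape x = unProj (traverse (const (Proj reverse)) x) []
```

For example, `gshape "abc"` yields `[[False],[True,False],[True,True,False]]`, and on partial input `head (gshape ('a' : undefined))` still yields `[False]`.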
Re: [Haskell-cafe] gathering position information lazily using traverse
On 10.09.2010, at 23:58, Felipe Lessa wrote: Hmmm... On Fri, Sep 10, 2010 at 6:47 PM, Jan Christiansen j...@informatik.uni-kiel.de wrote:

  instance Applicative Proj where
    pure = Proj . const
    Proj f <*> Proj x = Proj (\p -> f (False:p) (x (True:p)))

  (pure f) <*> Proj x === Proj (const f) <*> Proj x
                      === Proj (\p -> (const f) (False:p) (x (True:p)))
                      === Proj (\p -> f (x (True:p)))

  Proj f <*> (pure x) === Proj f <*> Proj (const x)
                      === Proj (\p -> f (False:p) ((const x) (True:p)))
                      === Proj (\p -> f (False:p) x)

  instance Applicative Proj where
    pure x = Pure x
    Pure f <*> Pure x = Pure (f x)
    Pure f <*> Proj x = Proj (\p -> f (x p))
    Proj f <*> Pure x = Proj (\p -> f p x)
    Proj f <*> Proj x = Proj (\p -> f (False:p) (x (True:p)))

  (pure f) <*> Proj x === Pure f <*> Proj x === Proj (\p -> f (x p))

  (Proj f) <*> (pure x) === Proj f <*> Pure x === Proj (\p -> f p x)

Was this difference intended? Yes, this is intended. This difference is the reason why the former instance does not satisfy the Applicative laws while the latter does. The first instance provides every subterm of an idiomatic term with a position. Even a pure term is provided with a position, although it does not use it. The latter instance does not provide a pure term with a position, as it does not need one. Therefore, the second instance simply passes position p to a subterm if the other subterm is pure. In the example for the first instance we can observe that we unnecessarily extend the position with True and False respectively.
Re: [Haskell-cafe] Unnecessarily strict implementations
On 03.09.2010, at 14:38, Daniel Fischer wrote: I can't reproduce that. For me, it leaks also with profiling. Have you used optimizations? It disappears if I compile the program with -O2. Without profiling I get the following. Here the maximum residency is nearly 45MB.

  $ ghc --make Temp.hs -fforce-recomp
  [1 of 1] Compiling Main ( Temp.hs, Temp.o )
  Linking Temp ...
  $ ./Temp +RTS -sstderr
  ./Temp +RTS -sstderr
  5458199
    647,520,792 bytes allocated in the heap
    256,581,176 bytes copied during GC
     44,934,408 bytes maximum residency (11 sample(s))
      1,363,496 bytes maximum slop
            103 MB total memory in use (1 MB lost due to fragmentation)

  Generation 0:  1223 collections, 0 parallel, 0.83s, 0.85s elapsed
  Generation 1:    11 collections, 0 parallel, 0.49s, 0.62s elapsed

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    0.63s  (  0.67s elapsed)
  GC    time    1.32s  (  1.46s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time    1.96s  (  2.13s elapsed)

  %GC time      67.6%  (68.5% elapsed)

  Alloc rate    1,022,883,082 bytes per MUT second

  Productivity  32.3% of total user, 29.6% of total elapsed

With profiling it looks as follows. Here the maximum residency is less than 15KB.

  $ ghc --make Temp.hs -prof -auto-all -fforce-recomp
  [1 of 1] Compiling Main ( Temp.hs, Temp.o )
  Linking Temp ...
  $ ./Temp +RTS -sstderr
  ./Temp +RTS -sstderr
  5458199
  1,051,844,836 bytes allocated in the heap
    110,134,944 bytes copied during GC
         14,216 bytes maximum residency (96 sample(s))
         37,068 bytes maximum slop
              2 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0:  1908 collections, 0 parallel, 0.57s, 0.59s elapsed
  Generation 1:    96 collections, 0 parallel, 0.02s, 0.02s elapsed

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    1.46s  (  1.51s elapsed)
  GC    time    0.60s  (  0.61s elapsed)
  RP    time    0.00s  (  0.00s elapsed)
  PROF  time    0.00s  (  0.00s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time    2.05s  (  2.12s elapsed)

  %GC time      29.0%  (28.7% elapsed)

  Alloc rate    721,170,248 bytes per MUT second

  Productivity  71.0% of total user, 68.9% of total elapsed
Re: [Haskell-cafe] Unnecessarily strict implementations
Hi, On 02.09.2010, at 01:35, Daniel Fischer wrote: It's not that it's not as non-strict as possible per se. (Sorry, had to :) It's that intersperse's current definition (in GHC at least) can cause a space leak. In this case, making the function less strict can cure it; in other cases, more strictness might be the solution. I would be very happy if you would share this example with me. I have been looking for an example where the current implementation of intersperse or inits causes a space leak for quite a while now. On the other hand, we currently have intersect [] _|_ = [], and one of intersect _|_ [] and intersect [] _|_ must give _|_. Which one is a matter of choice. I am sorry for not being precise. You are right. But right now we have intersect xs [] = _|_ for every list xs terminated by _|_. But it suffices to evaluate xs to head normal form to decide that the result should be []. That is, we could have

  intersect [] _|_ = [] and intersect (_|_:_|_) [] = []

or

  intersect [] (_|_:_|_) = [] and intersect _|_ [] = []

and the current implementation satisfies neither. And before that, the rule intersect [] _ = [], if the current behaviour of intersect [] should be retained. That's a deal. The implication (<=) :: Bool -> Bool -> Bool is too strict as well. We have False <= _|_ = _|_ as well as _|_ <= True = _|_, while one of these cases could yield True. I'm not convinced either should (nor that they shouldn't). I think this is a matter of elegance rather than a matter of efficiency. In the same way as I prefer False && _|_ = False over False && _|_ = _|_, I prefer False <= _|_ = True over False <= _|_ = _|_. The last slide lists among the problems "proposes undesirably inefficient functions (reverse)". I wouldn't equate 'not minimally strict' with 'too strict'. Minimal strictness can also have negative effects; one must look at each case individually. I second this, but in my opinion the minimally strict implementation should be the default if there is no reason against it.
Cheers, Jan
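The preferred, less strict Boolean implication can be sketched as a plain function (hypothetical name `impl`; not the Ord method itself), following left-to-right pattern matching so that False `impl` _|_ = True:

```haskell
-- less strict implication on Bool: a False antecedent decides the
-- result without looking at the consequent
impl :: Bool -> Bool -> Bool
impl False _ = True
impl True  b = b
```

With this definition `impl False undefined` evaluates to True, whereas the compare-based default for `(<=)` diverges on the same arguments.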
Re: [Haskell-cafe] Unnecessarily strict implementations
Hi, On 02.09.2010, at 13:41, Daniel Fischer wrote: takes a little to run and keeps the entire file until the first occurrence of pat in memory. First of all, thanks very much for the detailed instructions. I have rewritten the example slightly, using Strings instead of ByteStrings. Replacing all occurrences of 'ä' by &auml; in the collected works of Shakespeare ; ) has a maximum memory usage of around 65MB with the current implementation of intersperse, while it has a maximum memory usage of only around 5KB with the less strict implementation.

  replaceBy :: Eq alpha => alpha -> [alpha] -> [alpha] -> [alpha]
  replaceBy x sep = concat . intersperse sep . splitBy (==x)

  splitBy :: (alpha -> Bool) -> [alpha] -> [[alpha]]
  splitBy _ [] = []
  splitBy p xs =
    case break p xs of
      (l,ys) -> l : case ys of
                      []     -> []
                      (_:zs) -> splitBy p zs

This function only runs in constant space if I use the strict pattern matching on the result of break. If I use the following implementation I observe a linear memory usage instead.

  splitBy' :: (alpha -> Bool) -> [alpha] -> [[alpha]]
  splitBy' _ [] = []
  splitBy' p xs = l : case ys of
                        []     -> []
                        (_:zs) -> splitBy' p zs
    where (l,ys) = break p xs

I think this is due to the Wadler tuple space leak. The same would apply to the current implementation of lines. I wonder whether an implementation of lines analogous to splitBy has any disadvantages. That is, we could have intersect [] _|_ = [] and intersect (_|_:_|_) [] = [] or intersect [] (_|_:_|_) = [] and intersect _|_ [] = [], and the current implementation satisfies neither. Right. So the question is, has the current implementation advantages over either of these? (I don't see any.) If not, which of these two behaviours is preferable? I'd prefer the first one, as it is in line with the left-to-right pattern matching of Haskell. I have mixed feelings about those. Part of me dislikes breaking the symmetry between (<=), (==) and compare.
I think you should not blame (<=) for the existence of a function that yields a superset of the information that (<=) yields ; ) Cheers, Jan
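The "lines analogous to splitBy" idea mentioned in this exchange can be sketched as follows (hypothetical name `lines'`, my own spelling), using the same strict match on the pair returned by break:

```haskell
-- lines in the style of splitBy: strict match on break's result pair
-- to avoid the Wadler tuple space leak
lines' :: String -> [String]
lines' "" = []
lines' s  =
  case break (== '\n') s of
    (l, rest) -> l : case rest of
                       []        -> []
                       (_:rest') -> lines' rest'
```

On finite input this agrees with the Prelude's lines (e.g. on "a\nbc\n"), and it is still productive on partial input: the first characters of the first line are available before the rest of the string is demanded.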
[Haskell-cafe] Unnecessarily strict implementations
Hi, there is a new ticket that Data.List.intersperse is not as non-strict as possible (http://hackage.haskell.org/trac/ghc/ticket/4282). I have observed some other functions which are unnecessarily strict, and it might be advantageous to change their definitions as well. I think it is known that the current implementations of inits and tails are too strict. At least I think I have once read a post to haskell-cafe about this topic. Furthermore, intersect is too strict. We have intersect _|_ [] = _|_, and the current implementation is linear in the size of xs if we evaluate intersect xs []. I think simply adding a rule intersect _ [] = [] to the current implementation solves this issue. The implication (<=) :: Bool -> Bool -> Bool is too strict as well. We have False <= _|_ = _|_ as well as _|_ <= True = _|_, while one of these cases could yield True. The problem is that (<=) is defined by means of compare. This effect shows up for all data types with a least element if the default implementation of (<=) by means of compare is used. Furthermore, there are a couple of functions which are too strict but whose minimally strict implementations do not provide benefits. For example, reverse is too strict, as has already been observed by Olaf Chitil (http://www.cs.kent.ac.uk/people/staff/oc/Talks/ifl2006NonStrictProgramming.pdf). Cheers, Jan
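A less strict intersperse in the spirit of the ticket can be sketched as follows (hypothetical name `intersperse'`; the ticket itself contains the authoritative version). It emits the first element before inspecting the tail, instead of looking ahead for the one-element case:

```haskell
-- lazier intersperse: the head of the result is available as soon as
-- the head of the input is, without inspecting the tail
intersperse' :: a -> [a] -> [a]
intersperse' _   []     = []
intersperse' sep (x:xs) = x : prependToAll xs
  where
    prependToAll []     = []
    prependToAll (y:ys) = sep : y : prependToAll ys
```

On total lists it agrees with the original (`intersperse' ',' "abc"` is `"a,b,c"`), but `head (intersperse' 0 (1 : undefined))` is 1, whereas a definition that matches on the tail first diverges here.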
Re: [Haskell-cafe] powerSet = filterM (const [True, False]) and Data.List permutation
Hi, I am replying to a thread called Data.List permutations on ghc-users and a thread called powerSet = filterM (const [True, False]) ... is this obfuscated haskell? on haskell cafe. On 04.08.2009, at 19:48, Slavomir Kaslev wrote: A friend of mine, new to functional programming, was entertaining himself by writing different combinatorial algorithms in Haskell. He asked me for some help so I sent him my quick and dirty solutions for generating variations and permutations: On the haskell cafe thread it was observed that you can implement the permutations function in a non-deterministic flavour. The ideas behind these implementations closely resemble implementations of corresponding functions in Curry. We can generalise your implementation to an arbitrary MonadPlus. The idea is that the MonadPlus represents non-determinism. `inter` non-deterministically inserts an element at every possible position of its argument list.

  inter x []          = [[x]]
  inter x yys@(y:ys)  = [x:yys] ++ map (y:) (inter x ys)

  interM :: MonadPlus m => a -> [a] -> m [a]
  interM x []         = return [x]
  interM x yys@(y:ys) = return (x:yys) `mplus` liftM (y:) (interM x ys)

  perm []     = [[]]
  perm (x:xs) = concatMap (inter x) (perm xs)

  permM :: MonadPlus m => [a] -> m [a]
  permM []     = return []
  permM (x:xs) = interM x =<< permM xs

Alternatively we can implement permM by means of foldM.

  permM :: MonadPlus m => [a] -> m [a]
  permM = foldM (flip interM) []

A standard example for the use of non-determinism in Curry is a perm function that looks very similar to `permM`, with the slight difference that you do not need the monad in Curry. An alternative to this definition is to define a monadic version of insertion sort.
First we define a monadic equivalent of the `insertBy` function as follows:

  -- insertBy :: (a -> a -> Bool) -> a -> [a] -> [a]
  -- insertBy _ x [] = [x]
  -- insertBy le x (y:ys) =
  --   if le x y
  --     then x : y : ys
  --     else y : insertBy le x ys

  insertByM :: MonadPlus m => (a -> a -> m Bool) -> a -> [a] -> m [a]
  insertByM _ x [] = return [x]
  insertByM le x (y:ys) = do
    b <- le x y
    if b then return (x:y:ys)
         else liftM (y:) (insertByM le x ys)

Note that this function is very similar to interM, that is, we have

  interM = insertByM (\_ _ -> return False `mplus` return True)

On the basis of `insertBy` we can define insertion sort.

  -- sortBy :: (a -> a -> Bool) -> [a] -> [a]
  -- sortBy le = foldr (insertBy le) []

In the same manner we can define a function `sortByM` by means of `insertByM`.

  sortByM :: MonadPlus m => (a -> a -> m Bool) -> [a] -> m [a]
  sortByM le = foldM (flip (insertByM le)) []

Now we can define a function that enumerates all permutations by means of `sortByM`.

  permM :: MonadPlus m => [a] -> m [a]
  permM = sortByM (\_ _ -> return False `mplus` return True)

Interestingly, we can also define permM by means of monadic counterparts of other sorting algorithms like mergeSort. Although there were some arguments on haskell cafe that this would not work for other sorting algorithms, it looks like this is not the case. At least the corresponding implementation of perm by means of mergeSort in Curry works well for lists that I can test in reasonable time. Cheers, Jan
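The insertion-sort-based enumeration can be checked with the list monad as the concrete MonadPlus; each insertion position is chosen exactly once, so a list of length n yields n! permutations:

```haskell
import Control.Monad

-- monadic insert: the comparison may be non-deterministic
insertByM :: MonadPlus m => (a -> a -> m Bool) -> a -> [a] -> m [a]
insertByM _ x [] = return [x]
insertByM le x (y:ys) = do
  b <- le x y
  if b then return (x:y:ys)
       else liftM (y:) (insertByM le x ys)

-- monadic insertion sort
sortByM :: MonadPlus m => (a -> a -> m Bool) -> [a] -> m [a]
sortByM le = foldM (flip (insertByM le)) []

-- "sorting" with an always-both comparison enumerates all permutations
permM :: MonadPlus m => [a] -> m [a]
permM = sortByM (\_ _ -> return False `mplus` return True)
```

In the list monad, `permM [1,2,3]` produces all six permutations, each exactly once.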
Re: [Haskell-cafe] Suggestions for simulating Object ID
Hi, On 30.06.2009, at 11:55, Hemanth Kapila wrote: Can someone please suggest something on simulating some sort of object IDs in Haskell? I mean, something like the following relation will hold: a1 == a2 => (objectID a1) == (objectID a2). I am not quite sure, but perhaps stable names are also interesting for you. They only provide the following: mkStableName x == mkStableName y => x == y. http://www.haskell.org/ghc/docs/latest/html/libraries/base/System-Mem-StableName.html Stable names are a way of performing fast (O(1)), not-quite-exact comparison between objects. Stable names solve the following problem: suppose you want to build a hash table with Haskell objects as keys, but you want to use pointer equality for comparison; maybe because the keys are large and hashing would be slow, or perhaps because the keys are infinite in size. We can't build a hash table using the address of the object as the key, because objects get moved around by the garbage collector, meaning a re-hash would be necessary after every garbage collection. Cheers, Jan
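A small sketch of how stable names give a pointer-like equality test; the helper `sameObject` is my own name, not part of the StableName API:

```haskell
import System.Mem.StableName

-- If this returns True, the two arguments are the same object.
-- False is inconclusive: equal values may still get distinct stable names.
sameObject :: a -> a -> IO Bool
sameObject x y = do
  sn1 <- makeStableName x
  sn2 <- makeStableName y
  return (sn1 == sn2)
```

Passing the same binding twice yields True; the interesting guarantee is the converse direction, that True never relates two distinct objects.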
[Haskell-cafe] memoization using unsafePerformIO
Hi, I have tried to implement a memo function using stable names and weak pointers like it is presented in the paper Stretching the Storage Manager. There is an abstract datatype SNMap a b which implements a map that maps values of type StableName a to values of type b. The map is located in an IORef.

  {-# NOINLINE memo #-}
  memo :: (a -> b) -> a -> b
  memo f = unsafePerformIO (do
    tref <- newSNMap
    return (applyWeak f tref))

  {-# NOINLINE applyWeak #-}
  applyWeak :: (a -> b) -> SNMap a (Weak b) -> a -> b
  applyWeak f tbl arg = unsafePerformIO (do
    sn  <- makeStableName arg
    lkp <- lookupSNMap tbl sn
    case lkp of
      Nothing   -> not_found sn
      Just weak -> do
        val <- deRefWeak weak
        case val of
          Just result -> return result
          Nothing     -> not_found sn)
   where
    not_found sn = do
      let res = f arg
      weak <- mkWeak arg res Nothing
      insertSNMap tbl sn weak
      return res

Using this memo function I have implemented a fibonacci function like it is defined in the paper.

  fib :: Int -> Int
  fib = memo ufib

  ufib :: Int -> Int
  ufib 0 = 1
  ufib 1 = 1
  ufib n = fib (n-1) + fib (n-2)

When I compile this program with 6.10.3, fib yields the correct results up to 18, but for 19 it detects a loop. Can anybody give me a hint how I can avoid this behaviour (and still use unsafePerformIO ; ) )? Cheers, Jan
Re: [Haskell-cafe] Maintaining laziness: another example
Hi, this is a very nice example. On 08.06.2009, at 14:31, Eugene Kirpichov wrote: Cool! Probably one should start teaching with 'case' instead of pattern function definitions; that would put an accent on what is forced and in what order. I like this idea. Only after the student understands the laziness issues, introduce pattern signatures. But I think this will not solve the problem. Even for an experienced Haskell programmer it is not easy to check whether a function is too strict. I think it would be helpful if you define your functions using case expressions, but in general you do not want to do this. For example, consider the implementation of intersect in Data.List.

  intersect xs ys = [x | x <- xs, any (== x) ys]

This definition is too strict. The evaluation of intersect [] [1..] yields [], while the evaluation of intersect [1..] [] does not terminate. This function can be improved such that it yields the empty list in both cases. This function was probably not implemented by a Haskell novice ; ) Cheers, Jan
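A lazier intersect along these lines can be sketched as follows (hypothetical name `intersect'`), adding the two empty-list rules so that both intersect' [] _|_ and intersect' (_|_:_|_) [] yield []:

```haskell
-- intersect with left-to-right pattern matching on both [] cases:
-- the second argument is only inspected once the first is non-empty
intersect' :: Eq a => [a] -> [a] -> [a]
intersect' [] _  = []
intersect' _  [] = []
intersect' xs ys = [x | x <- xs, any (== x) ys]
```

With this definition `intersect' [1..] []` (and even `intersect' [] undefined`) terminates with [], while the comprehension alone diverges on an infinite first argument and an empty second one.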
Re: [Haskell-cafe] Purely logical programming language
Hi, On 26.05.2009, at 21:24, Lauri Alanko wrote: Mercury also has type classes and other Haskellisms, so if you're interested in doing Prolog the Haskell way, you should definitely have a look at it. I have to admit that I am not very familiar with Mercury. But if you are looking for doing Prolog the Haskell way, <advertise>you can also have a look at Curry</advertise>. Curry is a lazy functional logic programming language that has a Haskell like syntax (http://www.curry-language.org/). Besides standard functional features it provides non-determinism and narrowing. In contrast to Haskell, overlapping rules in function definitions induce non-determinism. For example, the following function non-deterministically inserts an element at each position of a list.

  insert :: a -> [a] -> [a]
  insert x xs     = x : xs
  insert x (y:ys) = y : insert x ys

From the side-effects point of view Curry is very boring, as it does not provide type classes, but there is one monad, namely the IO monad, for doing side effects. Cheers, Jan
[Haskell-cafe] monad . comonad = id
Hi, I have a question regarding the connection between monads and comonads. I always thought of the comonad operations as being the inverse operations of the corresponding monad functions. As I do not know the underlying theory I thought I would simply ask the masters. Is there a formal justification for this intuition? That is, are there always corresponding instances for Monad and Comonad such that the following laws are satisfied?

  extract . return  == id
  join . duplicate  == id

If this is the case I would like to know what the corresponding monad for the following comonad could be. This comonad treats lists as pointed sets where the first element is focused.

  instance Comonad [] where
    extract      = head
    duplicate xs = init (zipWith (++) (tails xs) (inits xs))

Obviously we can define return x = [x]. But I do not know how to define join. Cheers, Jan
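The two comonad-specific laws that can be checked directly on this instance, extract . duplicate = id and fmap extract . duplicate = id, do hold on finite lists; written as plain functions rather than class methods, duplicate produces the rotations of its argument:

```haskell
import Data.List (tails, inits)

-- the proposed comonad operations, as plain functions
extract :: [a] -> a
extract = head

duplicate :: [a] -> [[a]]
duplicate xs = init (zipWith (++) (tails xs) (inits xs))
```

For example, `duplicate [1,2,3]` is `[[1,2,3],[2,3,1],[3,1,2]]`, whose head is the original list (first law) and whose element-wise heads rebuild the original list (second law).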
Re: [Haskell-cafe] monad . comonad = id
Hi,

On 08.05.2009, at 13:14, Miguel Mitrofanov wrote:

>> I have a question regarding the connection between monads and comonads. I always thought of the comonad operations as being the inverse operations of the corresponding monad functions.
>
> That's not true.

I thought there is some kind of duality between monads and comonads in category theory. Does this only refer to the types of the functions, or am I totally wrong here?

>> If this is the case I would like to know what the corresponding monad for the following comonad could be. This comonad treats lists as pointed sets where the first element is focused.
>>
>>  instance Comonad [] where
>>    extract = head
>>    duplicate xs = init (zipWith (++) (tails xs) (inits xs))
>
> First of all, this is NOT a comonad. The term comonad doesn't refer just to instances of the Comonad class, it refers to instances that satisfy several laws.

I am aware of this. I thought it does satisfy all the laws. Which one have I missed?

> Secondly, [] is a monad, and there is an instance of the Monad class for it. It's working; don't fix it.

I am also aware of this. Perhaps I should have used a data type like

  data Pointed a = Pointed a [a]

but in this case a corresponding duplicate implementation is more complicated. I am very sorry if I have asked a stupid question.

Cheers,
Jan
Re: [Haskell-cafe] Code Golf
Hi,

On 16.04.2009, at 05:08, Matt Morrow wrote:

> And i forgot to include the defs of (co)prod:
>
>  prod () p1 p2 = (\a -> p1 a p2 a)

I think this one is liftM2 of the ((->) a) Monad instance.

Cheers,
Jan
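Spelling this out: in the ((->) a) monad, liftM2 applies two functions to the same argument and combines the two results with a binary operator. A small sketch with the combining operator made explicit (prodF is a name chosen here for illustration):

```haskell
import Control.Monad (liftM2)

-- Combine the results of two functions applied to a shared argument.
prodF :: (b -> c -> d) -> (a -> b) -> (a -> c) -> a -> d
prodF f p1 p2 = \a -> f (p1 a) (p2 a)

-- In the ((->) a) monad this is exactly liftM2:
--   liftM2 f p1 p2 a  ==  f (p1 a) (p2 a)
```

For example, both prodF (+) (*2) (+3) 5 and liftM2 (+) (*2) (+3) 5 evaluate to (5*2) + (5+3) = 18.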
Re: [Haskell-cafe] Code Golf
Hi,

On 15.04.2009, at 13:28, Sebastian Fischer wrote:

> Actually, there are a number of implementations that implement the same behaviour as the original version, e.g.,
>
>  diag = concat . foldr diags []
>   where
>    diags []     ys = ys
>    diags (x:xs) ys = [x] : merge xs ys
>
>    merge []       ys = ys
>    merge xs@(_:_) [] = map (:[]) xs
>    merge (x:xs) (y:ys) = (x:y) : merge xs ys

I think your first implementation is a slightly unreadable : ) implementation of this version but uses functional lists instead of standard lists. If we replace some of the lists by functional lists we get the following

  diag :: [[a]] -> [a]
  diag = toList . concatFL . foldr diags []
   where
    diags []     ys = ys
    diags (x:xs) ys = (x:) : merge xs ys

    merge []       ys = ys
    merge xs@(_:_) [] = map (:) xs
    merge (x:xs) (y:ys) = ((x:) . y) : merge xs ys

with the following definitions

  concatFL :: [[a] -> [a]] -> [a] -> [a]
  concatFL = foldr (.) id

  toList :: ([a] -> [a]) -> [a]
  toList fl = fl []

Additionally we can move the 'map (:)' in merge to diags

  diag :: [[a]] -> [a]
  diag = toList . concatFL . foldr diags []
   where
    diags []     ys = ys
    diags (x:xs) ys = (x:) : merge (map (:) xs) ys

    merge []       ys = ys
    merge xs@(_:_) [] = xs
    merge (x:xs) (y:ys) = (x . y) : merge xs ys

If we now replace toList and concatFL by their definitions it looks very similar to the original implementation.

  diag :: [[a]] -> [a]
  diag l = foldr (.) id (foldr diags [] l) []
   where
    diags []     ys = ys
    diags (x:xs) ys = (x:) : merge (map (:) xs) ys

    merge []       ys = ys
    merge xs@(_:_) [] = xs
    merge (x:xs) (y:ys) = (x . y) : merge xs ys

  diag l = foldr (.) id ((sel l . flip sel) ((:[]).(:))) []
   where
    sel = foldr (\a b c -> id : mrg (a c) (b c)) (const []) . map (flip id)
    mrg []     ys = ys
    mrg xs     [] = xs
    mrg (x:xs) (y:ys) = (x.y) : mrg xs ys

I guess that we can inline diags and get the definition above but I am kind of stuck here.

Cheers,
Jan
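The functional lists used here are ordinary difference lists: a list is represented by the function that prepends it, so concatenation becomes function composition and costs nothing per element. A self-contained sketch of the middle version above:

```haskell
-- A functional list ([a] -> [a]) prepends its elements;
-- concatenation of functional lists is function composition.
concatFL :: [[a] -> [a]] -> [a] -> [a]
concatFL = foldr (.) id

toList :: ([a] -> [a]) -> [a]
toList fl = fl []

-- Diagonalisation via functional lists, as in the message above.
diag :: [[a]] -> [a]
diag = toList . concatFL . foldr diags []
 where
  diags []     ys = ys
  diags (x:xs) ys = (x:) : merge (map (:) xs) ys

  merge []       ys = ys
  merge xs@(_:_) [] = xs
  merge (x:xs) (y:ys) = (x . y) : merge xs ys
```

For example, toList (concatFL [(1:), (2:) . (3:)]) is [1,2,3], and diag [[1,2],[3,4]] enumerates the anti-diagonals [1], [2,3], [4] as [1,2,3,4].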
Re: Re: [Haskell-cafe] How to make code least strict?
Thomas Davie wrote:

> Further to all the playing with unamb to get some very cool behaviors, you might want to look at Olaf Chitil's paper here: http://www.cs.kent.ac.uk/pubs/2006/2477/index.html It outlines a tool for checking if your programs are as non-strict as they can be.

We have discussed StrictCheck on the thread Maintaining Laziness, which also covered the topic of least strictness but wasn't as popular as this one. I have put up a page with functions from Data.List which are not least strict. I want to extend this with other examples from standard libraries. The page can be found at http://www.informatik.uni-kiel.de/~jac/strictcheck/.

Jan
Re: [Haskell-cafe] Maintaining laziness
Am 14.01.2009 um 15:26 schrieb Henning Thielemann:

> See the List functions in http://hackage.haskell.org/cgi-bin/hackage-scripts/package/utility-ht Version 0.0.1 was before applying StrictCheck, 0.0.2 after processing its suggestions. I have stopped when I saw that it repeatedly shows examples where I expect that they are unresolvable.

Thanks very much. Your library has provided me with an invaluable insight.

The original example from the IFL paper that showed problems with lists is reverse. StrictCheck states the following:

  You have to fulfil
  f _|_ = _|_
  f (_|_ : _|_) = _|_  ->  (_|_ : _|_)
  f (_|_ : []) = (_|_ : [])
  f (_|_ : (_|_ : _|_)) = _|_  ->  (_|_ : (_|_ : _|_))
  f (_|_ : (_|_ : [])) = (_|_ : (_|_ : []))
  f (_|_ : (_|_ : (_|_ : _|_))) = _|_  ->  (_|_ : (_|_ : (_|_ : _|_)))

That is, if the argument of reverse is known to have at least two elements, the result should have at least two elements. The only implementation I could imagine that fulfilled these requirements had quadratic complexity. But your shorterList inspired me to an implementation that is not as efficient as the original one but is linear. That is, there is a least strict implementation of reverse that is nearly as efficient as the standard implementation.

  withSpineOf :: [a] -> [a] -> [a]
  _ `withSpineOf` []     = []
  l `withSpineOf` (_:ys) = x : (xs `withSpineOf` ys)
    where x:xs = l

  rev xs = reverse xs `withSpineOf` xs

If anybody knows a prettier implementation of reverse that is least-strict, I would be delighted to hear about it.

By the way, I was wrong that your example is similar to the reverse example. My new implementation of StrictCheck states that maybePrefixOf is least strict.

Cheers,
Jan
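The gained laziness can be observed directly: rev returns as much list spine as its argument exposes, before any element of the reversed list is demanded. A small sketch (same definitions as above):

```haskell
-- Copy the spine of the second list, taking elements from the first.
-- The where pattern is matched lazily, so elements of l are only
-- forced when they are demanded.
withSpineOf :: [a] -> [a] -> [a]
_ `withSpineOf` []     = []
l `withSpineOf` (_:ys) = x : (xs `withSpineOf` ys)
  where x:xs = l

-- reverse, made less strict: the result spine mirrors the input spine.
rev :: [a] -> [a]
rev xs = reverse xs `withSpineOf` xs

-- rev [1,2,3] == [3,2,1], and rev (1:2:undefined) already exposes
-- two list constructors without touching the undefined tail.
```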
Re: [Haskell-cafe] Maintaining laziness
Hi,

Am 12.01.2009 um 14:37 schrieb Henning Thielemann:

> On Mon, 12 Jan 2009, Jan Christiansen wrote:
>
>> I am not sure whether this would be a good idea. The original version makes a lot of suggestions which are not satisfiable, but it is not at all trivial to decide which are satisfiable and which are not. I have rewritten StrictCheck from scratch to overcome this problem. But the current implementation is not presentable right now, and I have no formal proof that the criterion that I am using is correct.
>
> As I said, the current version is already very useful. I have applied it to several basic functions and found a lot to improve.

OK, I am sorry for my negative attitude. I have just had some negative experiences where the tool made suggestions, I implemented them, and afterwards the tool complained about another case. In fact it was not possible to fulfil both cases at all, but it was very difficult to detect this. I would be very interested in functions that can be improved with respect to non-strictness as test cases for my work.

Cheers,
Jan
[Haskell-cafe] Maintaining laziness
This is great. I have been working on the very same topic for quite a while now. My aim is to develop a tool that tells you whether a function is least strict. My work is based on an idea by Olaf Chitil: http://www.cs.kent.ac.uk/people/staff/oc/

I think he was the first to introduce the idea of least strictness. In a paper in the pre-proceedings of IFL 2006 he introduced the term least strict and presented the idea behind a tool called StrictCheck, which was supposed to check whether a function is least strict. The problem with this tool was that it proposes non-sequential function definitions that are not implementable in Haskell.

The main example in the StrictCheck paper is the definition of unzip by using foldr. It is very similar to your partition example, as it is too strict if you do not use lazy pattern matching for the tuple structure.

  unzip2 :: [(a, b)] -> ([a], [b])
  unzip2 = foldr (\(x,y) (xs,ys) -> (x:xs, y:ys)) ([],[])

My favourite example right now is the multiplication of Peano numbers. Suppose the definition of a data type for Peano numbers and addition and multiplication.

  data Natural = Z | S Natural

  (+) :: Natural -> Natural -> Natural
  Z   + y = y
  S x + y = S (x + y)

  (*) :: Natural -> Natural -> Natural
  Z   * _ = Z
  S x * y = y + (x * y)

Now we can define an infinite Peano number as follows

  infinity :: Natural
  infinity = S infinity

The evaluation of Z * infinity yields Z while infinity * Z does not terminate. The tool I am working on is supposed to yield the following information.

  *Natural> strictCheck2 (*) 5 100
  You have to fulfil
  f (S _|_) Z = Z  (_|_)

This states that the evaluation of S _|_ * Z yields _|_ but it could yield Z. Therefore we can improve the function as follows

  (*) :: Natural -> Natural -> Natural
  Z   * _ = Z
  S _ * Z = Z
  S x * y = y + (x * y)

Now the evaluation of infinity * Z yields Z as we wanted it to.
If we test this new implementation using StrictCheck we get

  *Natural> strictCheck2 (*) 5 100
  The function seems to be least strict

The non least strict definition of (*) is taken from the module Data.Number.Natural, which was published by Lennart Augustsson in the numbers package on Hackage. I mention this here because I think it clearly shows that it is hard even for experienced Haskell programmers to define least strict functions. An even more promising area of application are functional logic programs, as least strict functions result in smaller search spaces. For example, I was able to improve the enumeration of Pythagorean triples in Curry by a factor of 4 just by making a function least strict.

Cheers,
Jan
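The difference between the two definitions of (*) is easy to observe without any tool; a self-contained sketch (hiding the Prelude operators so the names can be reused):

```haskell
import Prelude hiding ((+), (*))

data Natural = Z | S Natural

(+) :: Natural -> Natural -> Natural
Z   + y = y
S x + y = S (x + y)

-- Least strict multiplication: the extra S _ * Z rule returns Z
-- after evaluating the first argument only to weak head normal form.
(*) :: Natural -> Natural -> Natural
Z   * _ = Z
S _ * Z = Z
S x * y = y + (x * y)

infinity :: Natural
infinity = S infinity

isZero :: Natural -> Bool
isZero Z = True
isZero _ = False

-- isZero (infinity * Z) now terminates with True; with the original
-- two-rule definition it would loop forever.
```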