Re: [Haskell-cafe] PSA: do not install xcode 5 if you are using ghc 7.6
Just to add to Carter's message: if you happened to install Xcode 5 anyway, then realized your mistake, uninstalled it, and installed Xcode 4 again, you will STILL have the command line tools that came with Xcode 5, and your Haskell toolchain will STILL be broken -- and so far I have been unable to fix that by any combination of cabal's gcc-location, ghc's -pgmP, and various other options. This was true even though I installed the command line tools that came with Xcode 4.

Eventually I fixed this by manually installing the command line tools separately from

  https://developer.apple.com/downloads/index.action?=command%20line%20tools#

Unlike ticking the "Command line tools" option in Xcode itself, this actually overwrites the command line tools in /usr/bin, and my Haskell toolchain was once again restored. I don't know why I couldn't get it to work by pointing to the relevant tools in /Applications/Xcode.app/Contents/Developer/usr/bin, but there you go. Anyway, this might save some of you some headaches.

-Edsko

On Fri, Sep 20, 2013 at 7:05 PM, Carter Schonwald carter.schonw...@gmail.com wrote:

glad to help. an alternative for the discerning power user is to install a recent version of gcc locally (eg 4.8), and build 7.6.3 with that! (or just repoint your ghc settings file to a locally built version of real gcc.)

yes, assuming we have the time (after all, it's all volunteer time), that is the plan.

On Fri, Sep 20, 2013 at 1:50 PM, Adam Foltzer acfolt...@gmail.com wrote:

Hi Carter,

Thanks for this heads up! Many of us here are cutting-edge Mac users, and would have been bitten by this.

Darin and I plan to spend some time next month preparing an unofficial patched version of ghc 7.6 that should play nice with clang / Xcode 5, though by that time ghc 7.8 will be in RC status at the very least. Can this be backported to the 7.6.3 tag and released as 7.6.4?
It would be nice not to have to choose between running the latest Xcode and the ability to test multiple GHC versions.

Cheers,
Adam

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Errors with Template Haskell
The Template Haskell quotation monad (Q) has proper support for fail:

  module A where

  import Language.Haskell.TH

  foo :: Q Exp
  foo = fail "Custom compile error!"

and

  module B where

  import A

  main :: IO ()
  main = print $foo

gives

  B.hs:6:14:
      Custom compile error!
      In the first argument of `print', namely `$foo'
      In the expression: print ($foo)
      In an equation for `main': main = print ($foo)

-E

On Fri, Aug 9, 2013 at 9:48 AM, Jose A. Lopes jabolo...@google.com wrote:

Hi,

In Template Haskell, what is the proper way of signalling an error? For example, you are generating code and you detect that a given parameter does not fulfill a precondition (e.g., a String is empty), and you want to abort compilation with a descriptive error message.

Thanks,
Jose

--
Jose Antonio Lopes
Ganeti Engineering
Google Germany GmbH
Dienerstr. 12, 80331, München

Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
Geschäftsführer: Graham Law, Christine Elizabeth Flores
Steuernummer: 48/725/00206
Umsatzsteueridentifikationsnummer: DE813741370
Re: [Haskell-cafe] meaning of referential transparency
I have quite a detailed discussion of this concept, and related concepts, in Section 2.8 of my PhD thesis (https://www.cs.tcd.ie/Edsko.de.Vries/pub/MakingUniquenessTypingLessUnique-screen.pdf).

-E

On Sat, Apr 6, 2013 at 7:13 PM, Kim-Ee Yeoh k...@atamo.com wrote:

On Sun, Apr 7, 2013 at 12:43 AM, Henning Thielemann lemm...@henning-thielemann.de wrote:

Can someone enlighten me about the origin of the term "referential transparency"? I can look up the definition of referential transparency in the functional programming sense in the Haskell Wiki, and I can look up the meaning of "reference" and "transparency" in a dictionary, but I don't know why these words were chosen as the name for this defined property.

Instead of an immaculately precise definition, may I suggest going about it from the practical-benefits POV? RT matters so much in Haskell because of the engineering leverage it gives us. Bird's Pearls are a good source of Why Equational Reasoning Matters.

-- Kim-Ee
Re: [Haskell-cafe] Threadscope 0.2.2 goes in segmentation fault on Mac Os X 10.8.3
Hi Alfredo,

No dark magic as far as I recall (except in the actual bundling as a Mac app; unfortunately that required some magic, the GTK libraries don't relocate so easily :-( ). I didn't have any problems building. I compiled it with ghc 7.6.1, with the GTK libraries installed manually (there are some suggestions on how to do that at the very end of my Comprehensive Haskell Sandboxes post, http://www.edsko.net/2013/02/10/comprehensive-haskell-sandboxes/). It used to be a lot more painful (requiring the latest versions of Haskell libraries, with patches etc.) but these days the situation is a lot better. (That's not to say that problems like the one you reported don't still crop up from time to time, and can cause many a sleepless night.)

Edsko

On Wed, Apr 3, 2013 at 9:33 PM, Alfredo Di Napoli alfredo.dinap...@gmail.com wrote:

Thanks Edsko, the app is awesome and it's starting just fine. Even though this fixes my problem, it doesn't solve the root cause, namely why it was failing. Can you tell me a bit more about the dark magic you used to make it work? Which GHC version did you use?

Thanks a lot,
A.

On 3 April 2013 12:40, Edsko de Vries edskodevr...@gmail.com wrote:

I provide a ThreadScope binary on my site (http://www.edsko.net/2013/01/24/threadscope-0-2-2/) which runs fine for me on 10.8.3.

-E

On Mon, Apr 1, 2013 at 8:01 AM, Dominic Steinitz domi...@steinitz.org wrote:

Alfredo Di Napoli alfredo.dinapoli at gmail.com writes:

That said, has anyone had any luck in running Threadscope on Mac OS X 10.8 at all?

Thanks,
A.

I think I have encountered the same problem: https://groups.google.com/d/msg/parallel-haskell/-lhrgNN8elw/KzqLM9BzoJwJ In my experience, anything that uses gtk is a problem on a Mac. I still intend to do some analysis *not* using threadscope but using event logs directly, but that is at least a few weeks away.

Dominic.
Re: [Haskell-cafe] Threadscope 0.2.2 goes in segmentation fault on Mac Os X 10.8.3
a) 7.6.2 vs 7.6.1 seems unlikely to be the issue, although theoretically possible I guess.
b) Actually, the blog post is about how to set things up by hand for better control than either of those tools gives you; but again, I don't think it's relevant.
c) This might be a bigger difference. I don't know what version brew installs, where it installs it, etc.
d) And this might be related too; yes, I'm using XQuartz and have GTK compiled for it; currently using 2.7.4, but I don't know if I upgraded since building.

-E

On Thu, Apr 4, 2013 at 9:07 AM, Alfredo Di Napoli alfredo.dinap...@gmail.com wrote:

Hi Edsko, thanks for the reply. The only things that might affect the outcome are:

a) GHC version: I'm running ghc 7.6.2 instead of 7.6.1.
b) Don't know if you are using cabal-dev for sandboxing (like any good Haskell programmer I'm too lazy to open your blog post :D ), whilst I'm using hsenv.
c) I've brewed GTK instead of manually installing it, but gtk-demo runs just fine.
d) Are you using XQuartz? If yes, which version?

Thanks again!
A.

On 4 April 2013 08:52, Edsko de Vries edskodevr...@gmail.com wrote:

Hi Alfredo,

No dark magic as far as I recall (except in the actual bundling as a Mac app; unfortunately that required some magic, the GTK libraries don't relocate so easily :-( ). I didn't have any problems building. I compiled it with ghc 7.6.1, with the GTK libraries installed manually (there are some suggestions on how to do that at the very end of my Comprehensive Haskell Sandboxes post, http://www.edsko.net/2013/02/10/comprehensive-haskell-sandboxes/). It used to be a lot more painful (requiring the latest versions of Haskell libraries, with patches etc.) but these days the situation is a lot better. (That's not to say that problems like the one you reported don't still crop up from time to time, and can cause many a sleepless night.)

Edsko

On Wed, Apr 3, 2013 at 9:33 PM, Alfredo Di Napoli alfredo.dinap...@gmail.com wrote:

Thanks Edsko, the app is awesome and it's starting just fine. Even though this fixes my problem, it doesn't solve the root cause, namely why it was failing. Can you tell me a bit more about the dark magic you used to make it work? Which GHC version did you use?

Thanks a lot,
A.

On 3 April 2013 12:40, Edsko de Vries edskodevr...@gmail.com wrote:

I provide a ThreadScope binary on my site (http://www.edsko.net/2013/01/24/threadscope-0-2-2/) which runs fine for me on 10.8.3.

-E

On Mon, Apr 1, 2013 at 8:01 AM, Dominic Steinitz domi...@steinitz.org wrote:

Alfredo Di Napoli alfredo.dinapoli at gmail.com writes:

That said, has anyone had any luck in running Threadscope on Mac OS X 10.8 at all?

Thanks,
A.

I think I have encountered the same problem: https://groups.google.com/d/msg/parallel-haskell/-lhrgNN8elw/KzqLM9BzoJwJ In my experience, anything that uses gtk is a problem on a Mac. I still intend to do some analysis *not* using threadscope but using event logs directly, but that is at least a few weeks away.

Dominic.
Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock
Yes please!

-E

On Thu, Apr 4, 2013 at 5:49 PM, Johan Tibell johan.tib...@gmail.com wrote:

Hi all,

Haddock's current markup language leaves something to be desired once you want to write more serious documentation (e.g. several paragraphs of introductory text at the top of the module doc). Several features are lacking (bold text, links that render as text instead of URLs, inline HTML).

I suggest that we implement an alternative haddock syntax that's a superset of Markdown. It's a superset in the sense that we still want to support linkifying Haskell identifiers, etc. Modules that want to use the new syntax (which will probably be incompatible with the current syntax) can set:

  {-# HADDOCK Markdown #-}

on top of the source file.

Ticket: http://trac.haskell.org/haddock/ticket/244

-- Johan
Re: [Haskell-cafe] Threadscope 0.2.2 goes in segmentation fault on Mac Os X 10.8.3
I provide a ThreadScope binary on my site (http://www.edsko.net/2013/01/24/threadscope-0-2-2/) which runs fine for me on 10.8.3.

-E

On Mon, Apr 1, 2013 at 8:01 AM, Dominic Steinitz domi...@steinitz.org wrote:

Alfredo Di Napoli alfredo.dinapoli at gmail.com writes:

That said, has anyone had any luck in running Threadscope on Mac OS X 10.8 at all?

Thanks,
A.

I think I have encountered the same problem: https://groups.google.com/d/msg/parallel-haskell/-lhrgNN8elw/KzqLM9BzoJwJ In my experience, anything that uses gtk is a problem on a Mac. I still intend to do some analysis *not* using threadscope but using event logs directly, but that is at least a few weeks away.

Dominic.
Re: [Haskell-cafe] Library API design: functional objects VS type classes
What is the advantage of using type classes? A function of the form

  f :: Show a => ...

really has an implicit argument

  f :: Show__Dict a -> ...

that the compiler infers for us. So the advantage of type classes is one of convenience: we don't have to pass dictionaries around, or even figure out which dictionaries we need; the compiler does that for us.

But if we have a type class of the form

  class Foo a where
    mkFoo     :: IO FooToken
    otherFun1 :: FooToken -> ...
    otherFun2 :: FooToken -> ...

then this advantage is mostly lost; we still need to pass around an explicit FooToken object. In a case like this, I don't see the advantage of using a type class over using a data type

  data Foo = Foo {
      otherFun1 :: ...
    , otherFun2 :: ...
    }

  mkFoo :: ... -> Foo

There are exceptions; for instance, if you want to encode 'inheritance' in some way then type classes might still be useful; for instance, see the Gtk2Hs library, which uses this extensively.

Edsko
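To make the record-of-functions alternative above concrete, here is a small self-contained sketch; the `Counter` type and its fields are hypothetical names of my own, standing in for the `Foo`/`otherFun1`/`otherFun2` placeholders:

```haskell
-- A record of functions plays the role of an explicitly passed dictionary.
data Counter = Counter
  { increment :: Int -> Int     -- step the counter
  , render    :: Int -> String  -- display the current value
  }

-- mkCounter plays the role of mkFoo: an ordinary constructor function,
-- no type class needed.
mkCounter :: Int -> Counter
mkCounter step = Counter
  { increment = (+ step)
  , render    = \n -> "count = " ++ show n
  }

main :: IO ()
main = do
  let c = mkCounter 2
  -- We pass the "dictionary" c around by hand, just as we would have to
  -- pass a FooToken around with the type class formulation.
  putStrLn (render c (increment c 0))
```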
Re: [Haskell-cafe] adding recursion to a DSL
Hi Joerg,

You might find "Abstract Syntax Graphs for Domain Specific Languages" by Bruno Oliveira and Andres Löh (http://ropas.snu.ac.kr/~bruno/papers/ASGDSL.pdf) a helpful reference for adding things like recursion (and other binding constructs) to your DSL.

Edsko

On Tue, Feb 19, 2013 at 9:47 AM, Emil Axelsson e...@chalmers.se wrote:

You probably don't need recursion in the DSL for this (that would require a way to detect cycles in the expressions). For this example, it looks like all you need is to add something like `map` as a DSL construct. Your example could perhaps be expressed as

  forEach (1,1000) (\n -> out (matrixMult, A, n, row, matrix-row))

For this you need a way to reify functions in the DSL. For an example of how to do this, see the `While` and `Arr` constructs in this paper: http://www.cse.chalmers.se/~emax/documents/svenningsson2013combining.pdf

I'm not familiar with your particular DSL though, so I might have missed something.

/ Emil

2013-02-17 23:53, frit...@joerg.cc skrev:

I have a tiny DSL that actually works quite well. When I say

  import language.CWMWL

  main = runCWMWL $ do
    out (matrixMult, A, 1, row, matrix-row)

then runCWMWL is a function that is exported by language.CWMWL. This parses the expression and takes some action. Now, A is the name of the matrix and the third tuple element would represent the number of the row. I want to achieve some sort of elegant recursion (meaning readable code, a good representation) that would let me do something like

  sequence [ out (matrixMult, A, n, row, matrix-row) | n <- [1..1000] ]

but in a nicer manner and without expanding this to 1000 lines of code at compile time. How can I best introduce recursion into my DSL, or borrow this from the host language Haskell effectively?

--Joerg
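Emil's suggestion of reifying the loop as a `forEach` construct can be illustrated with a toy deep embedding. This is my own sketch, not code from the thread: the `Expr` constructors and the interpreter are hypothetical stand-ins for the real CWMWL DSL.

```haskell
-- A toy deep embedding: 'ForEach' reifies the loop, so the AST stays
-- small instead of being unrolled into 1000 copies at construction time.
data Expr
  = Out String Int                    -- emit a tuple (simplified here to
                                      -- an operation name and an index)
  | ForEach (Int, Int) (Int -> Expr)  -- loop bounds plus a reified body
  | Seq [Expr]                        -- sequential composition

-- A toy interpreter; a real DSL would perform I/O or generate code here.
eval :: Expr -> [String]
eval (Out name n)         = [name ++ "/" ++ show n]
eval (ForEach (lo, hi) f) = concatMap (eval . f) [lo .. hi]
eval (Seq es)             = concatMap eval es

-- The thread's example, with the range shrunk to keep the output short.
example :: Expr
example = ForEach (1, 3) (\n -> Out "matrixMult" n)

main :: IO ()
main = mapM_ putStrLn (eval example)
```

Note that the loop body is an ordinary Haskell function `Int -> Expr`; this is the "reify functions in the DSL" step, borrowing binding from the host language exactly as the cited papers describe.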
[Haskell-cafe] Difference Lists versus Accumulators
Hey all,

The connection between difference lists and accumulators is probably well known, but I only recently realized it myself, and a quick Google search didn't turn up any page where this was explicitly stated, so I thought this observation might be useful to some.

Every beginner Haskell student learns that this definition of reverse has O(n^2) complexity:

  reverse []       = []
  reverse (x : xs) = reverse xs ++ [x]

But what happens when we return a difference list instead, replacing [] with id, (++) with (.) and [x] with (x :)?

  reverse' []       = id
  reverse' (x : xs) = reverse' xs . (x :)

This definition of reverse' has type

  reverse' :: [a] -> [a] -> [a]

Let's inline the second argument:

  reverse' [] acc       = acc
  reverse' (x : xs) acc = reverse' xs (x : acc)

And we have recovered the standard definition using an accumulator! I thought that was cute :) And it may help in understanding why difference lists are useful.

Edsko
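The three definitions above can be checked side by side; the suffixed names below are mine, to keep them from clashing with the Prelude's reverse:

```haskell
-- Naive O(n^2) reverse: each (++) re-walks the already-reversed suffix.
slowReverse :: [a] -> [a]
slowReverse []       = []
slowReverse (x : xs) = slowReverse xs ++ [x]

-- Difference-list version: the result is a function awaiting the "rest".
dlReverse :: [a] -> [a] -> [a]
dlReverse []       = id
dlReverse (x : xs) = dlReverse xs . (x :)

-- Accumulator version, obtained by inlining the second argument.
accReverse :: [a] -> [a] -> [a]
accReverse []       acc = acc
accReverse (x : xs) acc = accReverse xs (x : acc)

main :: IO ()
main = print ( slowReverse [1 .. 5 :: Int]
             , dlReverse   [1 .. 5 :: Int] []
             , accReverse  [1 .. 5 :: Int] [] )
-- All three produce [5,4,3,2,1].
```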
Re: [Haskell-cafe] License of CloudHaskell code write by Haskell language.
Hi Chatsiri,

Yes, there are multiple backends for Cloud Haskell. The Azure backend is, as you say, work in progress, although it's almost in a usable state and we hope to release a first version (with minimal functionality) soon. There is also the SimpleLocalnet backend, which you can use for local networks and which is very convenient during development.

Provided that you can use the TCP transport, the development of other backends should not be too difficult. For instance, it should be relatively little work to develop a backend for Amazon's EC2 infrastructure (especially now that there is a much improved version of the Haskell bindings for libssh2). If however you need to develop a backend which requires a different network transport (such as UDP, say), you would need to develop a new Network.Transport implementation, which is a more serious undertaking.

Edsko
Re: [Haskell-cafe] License of CloudHaskell code write by Haskell language.
Hi,

The version of Cloud Haskell you cite is a prototype. I recommend you use the 'distributed-process' package instead; it is licensed under BSD3.

Edsko
Re: [Haskell-cafe] Cloud Haskell real usage example
Hi Thiago,

Let me address your questions one by one.

On Wed, Aug 22, 2012 at 1:01 AM, Thiago Negri evoh...@gmail.com wrote:

Hello everyone. I'm taking my first steps in Cloud Haskell and got some unexpected behaviors. I used the code from "Raspberry Pi in a Haskell Cloud" [1] as a first example. I did try to switch the code to use Template Haskell with no luck, so I stuck with the verbose style.

I have pasted a version of your code that uses Template Haskell at http://hpaste.org/73520. Where did you get stuck?

I changed some of the code: from ProcessId-based messaging to a typed channel to receive the Pong; using startSlave to start the worker nodes; and changed the master node to loop forever sending pings to the worker nodes.

The unexpected behaviors:

- Dropping a worker node while the master is running makes the master node crash.

There are two things going on here:

1. A bug in the SimpleLocalnet backend meant that if you dropped a worker node, findSlaves might not return. I have fixed this and uploaded it to Hackage as version 0.2.0.5.

2. But even with this fix, you will still need to take into account that workers may disappear after they have been reported by findSlaves. spawn will actually throw an exception if the specified node is unreachable (it is debatable whether this is the right behaviour -- see below).

- Master node does not see worker nodes started after the master process.

Yes, startMaster is merely a convenience function. I have modified the documentation to specify more clearly what startMaster does:

  -- | 'startMaster' finds all slaves /currently/ available on the local
  -- network, redirects all log messages to itself, and then calls the
  -- specified process, passing the list of slave nodes.
  --
  -- Terminates when the specified process terminates. If you want to
  -- terminate the slaves when the master terminates, you should manually
  -- call 'terminateAllSlaves'.
  --
  -- If you start more slave nodes after having started the master node,
  -- you can discover them with later calls to 'findSlaves', but be aware
  -- that you will need to call 'redirectLogHere' to redirect their logs
  -- to the master node.
  --
  -- Note that you can use the functionality of SimpleLocalnet directly
  -- (through 'Backend'), instead of using 'startMaster'/'startSlave', if
  -- the master/slave distinction does not suit your application.

Note that with these modifications there is still something slightly unfortunate: if you delete a worker and then restart it *at the same port*, the master will not see it. There is a very good reason for this: Cloud Haskell guarantees reliable ordered message passing, and we want a clear semantics for this (unlike, say, in Erlang, where you might send messages M1, M2 and M3 from P to Q, and Q might receive M1 and M3 but not M2, under certain circumstances). We (the developers of Cloud Haskell, Simon Peyton-Jones and some others) are still debating what the best approach is here; in the meantime, if you restart a worker node, just give it a different port number.

Let me know if you have any other questions, and feel free to open an issue at https://github.com/haskell-distributed/distributed-process/issues?state=open if you think you found a bug.

Edsko
Re: [Haskell-cafe] What's the deal with Clean?
On 4 Nov 2009, at 13:36, Alberto G. Corona wrote:

Artyom. I know what uniqueness means. What I meant is that the context in which uniqueness is used, for imperative sequences:

  (y, s')  = proc1 s x
  (z, s'') = proc2 s' y
  ...

is essentially the same sequence as if we rewrite a state monad to make the state explicit. When the state is the world state, then it is similar to the IO monad.

Yes, as long as there is a single thing that is being updated there's little difference between the state monad and a unique type. But uniqueness typing is more general. For instance, a function which updates two arrays

  f (arr1, arr2) = (update arr1 0 'x', update arr2 0 'y')

is easily written in functional style in Clean, whereas in Haskell we need to sequentialize the two updates:

  f (arr1, arr2) = do
    writeArray arr1 0 'x'
    writeArray arr2 0 'y'

You can find a more detailed comparison in my thesis (https://www.cs.tcd.ie/Edsko.de.Vries/pub/MakingUniquenessTypingLessUnique-screen.pdf, Section 2.8.7).

-Edsko
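For concreteness, here is one way the sequentialized Haskell version might look as a complete program, using immutable arrays thawed into ST; the thaw/freeze scaffolding and the `thawA` helper are my own additions, not part of the original message:

```haskell
import Control.Monad.ST (ST, runST)
import Data.Array (Array, elems, listArray)
import Data.Array.ST (STArray, freeze, thaw, writeArray)

-- Monomorphic thaw, to pin down which mutable array type we mean
-- (thaw is otherwise ambiguous between STArray and STUArray).
thawA :: Array Int Char -> ST s (STArray s Int Char)
thawA = thaw

f :: (Array Int Char, Array Int Char) -> (Array Int Char, Array Int Char)
f (a1, a2) = runST $ do
  m1 <- thawA a1
  m2 <- thawA a2
  writeArray m1 0 'x'  -- the two writes are forced into a sequence,
  writeArray m2 0 'y'  -- even though they touch independent arrays
  r1 <- freeze m1
  r2 <- freeze m2
  return (r1, r2)

main :: IO ()
main = do
  let (r1, r2) = f (listArray (0, 1) "ab", listArray (0, 1) "cd")
  putStrLn (elems r1 ++ " " ++ elems r2)
```

In Clean, the uniqueness types would let the two updates appear as an ordinary pair of independent function applications, with no monadic sequencing at all.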
Re: [Haskell-cafe] What's the deal with Clean?
I'm not sure I follow you? The compiler can't reorder the two updates or do them in parallel (IO is not a commutative monad). You might tell the compiler this explicitly, but then you are writing lower and lower level code, further removed from the functional paradigm.

Edsko

On 4 Nov 2009, at 15:27, David Leimbach wrote:

On Wed, Nov 4, 2009 at 7:11 AM, Edsko de Vries edskodevr...@gmail.com wrote:

On 4 Nov 2009, at 13:36, Alberto G. Corona wrote:

Artyom. I know what uniqueness means. What I meant is that the context in which uniqueness is used, for imperative sequences:

  (y, s')  = proc1 s x
  (z, s'') = proc2 s' y
  ...

is essentially the same sequence as if we rewrite a state monad to make the state explicit. When the state is the world state, then it is similar to the IO monad.

Yes, as long as there is a single thing that is being updated there's little difference between the state monad and a unique type. But uniqueness typing is more general. For instance, a function which updates two arrays

  f (arr1, arr2) = (update arr1 0 'x', update arr2 0 'y')

is easily written in functional style in Clean, whereas in Haskell we need to sequentialize the two updates:

  f (arr1, arr2) = do
    writeArray arr1 0 'x'
    writeArray arr2 0 'y'

Those sequential updates can be run concurrently on both, just with different syntax, though, right?

You can find a more detailed comparison in my thesis (https://www.cs.tcd.ie/Edsko.de.Vries/pub/MakingUniquenessTypingLessUnique-screen.pdf, Section 2.8.7).

-Edsko
Re: [Haskell-cafe] Bool as type class to serve EDSLs.
+1. I agree completely; I've missed this often, for exactly the same reasons.

Edsko
Re: [Haskell-cafe] Re: A guess on stack-overflows - thunks build-up and tail recursion
The problem occurs when the result value is needed and thus the thunks need to be reduced, starting with the outermost, which can't be reduced without reducing the next one, etc.; it's these reduction steps that are pushed on the stack until its size causes a stack overflow.

Yes, that's exactly right, and something that's not often pointed out.

Edsko
Re: [Haskell-cafe] least fixed points above something
I've used a similar function myself, but why write it in such a complicated way? How about

  lfp :: Eq a => (a -> a) -> a -> a
  lfp f x
    | f x == x  = x
    | otherwise = lfp f (f x)

Edsko

On 19 Mar 2009, at 09:49, Jens Blanck wrote:

Hi,

I found myself writing the following

  leastFixedPoint :: (Eq a) => (a -> a) -> a -> a
  leastFixedPoint f x = fst . head . dropWhile (uncurry (/=)) $ zip l (tail l)
    where l = iterate f x

and was a bit surprised that I couldn't get any matches on hoogle for the type above. The closest one is fix :: (a -> a) -> a, but that sort of assumes that we're starting the fixed point search from the bottom element (undefined).

Anyway, is there a nicer way of doing the above?

Jens
Re: [Haskell-cafe] least fixed points above something
I always feel that the compiler should do such optimizations for me :)

On 19 Mar 2009, at 16:21, Neil Mitchell wrote:

I've used a similar function myself, but why write it in such a complicated way? How about

  lfp :: Eq a => (a -> a) -> a -> a
  lfp f x
    | f x == x  = x
    | otherwise = lfp f (f x)

I've used a similar function too, but your version computes f x twice per iteration. I wrote mine as:

  fix :: Eq a => (a -> a) -> a -> a
  fix f x = if x == x2 then x else fix f x2
    where x2 = f x

I find this fix much more useful than the standard fix.

Thanks, Neil
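Both variants from the thread can be checked with a small example; `lfp2` is my name for Neil's version (renamed to avoid clashing with the standard fix), which shares the computation of f x:

```haskell
-- Edsko's version: evaluates f x twice per iteration.
lfp :: Eq a => (a -> a) -> a -> a
lfp f x
  | f x == x  = x
  | otherwise = lfp f (f x)

-- Neil's version: f x is computed once and shared via the where clause.
lfp2 :: Eq a => (a -> a) -> a -> a
lfp2 f x = if x == x2 then x else lfp2 f x2
  where x2 = f x

main :: IO ()
main = do
  -- Iterating "increment, clamped at 10" from 0 converges to the
  -- fixed point 10.
  print (lfp  (\n -> min (n + 1) 10) (0 :: Int))
  print (lfp2 (\n -> min (n + 1) 10) (0 :: Int))
```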
Re: [Haskell-cafe] least fixed points above something
On 19 Mar 2009, at 16:37, Martijn van Steenbergen wrote:

Neil Mitchell wrote:

  if length (replicate 1 'a') == 1 then [] else head (replicate 1 'a')

This program will use O(1) memory.

Doesn't length force evaluation of the cells?

Yes, but without CSE every cell can immediately be garbage collected; hence, CSE can lead to space leaks -- a fair point.

Edsko
[Haskell-cafe] *almost* composition in writer monad
Hi,

Does this function remind anybody of anything? It seems like I'm missing an obvious abstraction:

  composeWriter :: [a -> (a, b)] -> a -> (a, [b])
  composeWriter []       a = (a, [])
  composeWriter (f : fs) a =
    let (a', b)       = f a
        (final_a, bs) = composeWriter fs a'
    in (final_a, b : bs)

It's almost but not quite composition for functions in the Writer monad; the difference is that every function has exactly one b result, rather than a list of them.

Edsko
Re: [Haskell-cafe] *almost* composition in writer monad
Doh, yes, of course. I had a feeling I was missing something obvious :) Thanks :)

On 4 Mar 2009, at 17:29, Miguel Mitrofanov wrote:

Isn't that sequence in the State monad?

On 4 Mar 2009, at 19:37, Edsko de Vries wrote:

Hi,

Does this function remind anybody of anything? It seems like I'm missing an obvious abstraction:

  composeWriter :: [a -> (a, b)] -> a -> (a, [b])
  composeWriter []       a = (a, [])
  composeWriter (f : fs) a =
    let (a', b)       = f a
        (final_a, bs) = composeWriter fs a'
    in (final_a, b : bs)

It's almost but not quite composition for functions in the Writer monad; the difference is that every function has exactly one b result, rather than a list of them.

Edsko
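Miguel's observation can be made precise with a runnable check. This is my own sketch (assuming the mtl package for Control.Monad.State): composeWriter is mapM/sequence in the State monad, up to the pair components being swapped relative to State's s -> (a, s) convention:

```haskell
import Control.Monad.State (runState, state)
import Data.Tuple (swap)

-- The function from the original message.
composeWriter :: [a -> (a, b)] -> a -> (a, [b])
composeWriter []       a = (a, [])
composeWriter (f : fs) a =
  let (a', b)       = f a
      (final_a, bs) = composeWriter fs a'
  in (final_a, b : bs)

-- The same thing via State: each a -> (a, b) becomes a state action by
-- swapping its result pair, and mapM threads the state for us.
viaState :: [a -> (a, b)] -> a -> (a, [b])
viaState fs a = swap (runState (mapM (state . (swap .)) fs) a)

main :: IO ()
main = do
  let fs = [\n -> (n + 1, show n), \n -> (n * 2, show n)]
  print (composeWriter fs (3 :: Int))
  print (viaState      fs (3 :: Int))
-- Both print (8,["3","4"]).
```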
Re: [Haskell-cafe] Strict version of Data.Map.map
I guess so. Maybe using mapAccum helps:

  import qualified Data.Map as M

  strictMap :: (a -> b) -> M.Map k a -> M.Map k b
  strictMap f m = case M.mapAccum f' () m of
      ((), m') -> m'
    where
      f' () x = x' `seq` ((), x')
        where x' = f x

  testStrictness mapper = m `seq` "Not strict."
    where
      m = mapper (const e) (M.singleton () ())
      e = error "Strict!" :: ()

Very clever. I had tried to use mapAccum but I couldn't figure out what to put in the accumulator. I didn't realize it didn't make a difference (unit will do) as long as it is evaluated when the Map is. seq wrecks my head ;)

Thanks!

Edsko
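A runnable version of the strictness test above, using Control.Exception to observe whether the error thunk escapes; the exception plumbing is my own addition to the snippet from the thread:

```haskell
import qualified Data.Map as M
import Control.Exception (SomeException, evaluate, try)

-- strictMap as suggested above: thread a unit accumulator through
-- mapAccum, seq'ing each new element into it, so that forcing the map
-- forces every element.
strictMap :: (a -> b) -> M.Map k a -> M.Map k b
strictMap f m = case M.mapAccum f' () m of
    ((), m') -> m'
  where
    f' () x = let x' = f x in x' `seq` ((), x')

main :: IO ()
main = do
  let bomb = const (error "boom") :: () -> ()
  -- Lazy map: the error thunk is stored but never forced.
  lazyR   <- try (evaluate (M.map bomb (M.singleton 'k' ()) `seq` ()))
  -- Strict map: forcing the result forces the elements, so the error escapes.
  strictR <- try (evaluate (strictMap bomb (M.singleton 'k' ()) `seq` ()))
  let threw = either (\e -> const True (e :: SomeException)) (const False)
  putStrLn ("M.map forces elements:     " ++ show (threw lazyR))
  putStrLn ("strictMap forces elements: " ++ show (threw strictR))
```

(Newer versions of containers ship Data.Map.Strict, which makes this hand-rolled version unnecessary, but the mapAccum trick is a nice illustration of how the accumulator can carry demand.)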
[Haskell-cafe] Strict version of Data.Map.map
Hi,

Is it possible to write a strict version of Data.Map.map (so that the Map becomes strict in the elements as well as the keys)?

Thanks,
Edsko
Re: [Haskell-cafe] Strict version of Data.Map.map
On Thu, Feb 26, 2009 at 12:45:09PM -0300, Felipe Lessa wrote:

I'd advise you to see Control.Parallel.Strategies, especially the NFData class and the rnf function.

What is the time complexity of running rnf on a Data.Map? If it is O(n), then surely running rnf on my map after every 'map' operation is hardly going to speed things up?

Edsko
Re: [Haskell-cafe] ANN: The Typeclassopedia, and request for feedback
Hey,

Another comment: I feel that the Const datatype in Control.Applicative deserves to be better known; you might mention it in your article, especially since it connects Applicative with Monoid. (In Conor's article, he calls that datatype 'Accy' and shows why it is so useful.)

Edsko
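The Applicative/Monoid connection via Const can be shown in a few lines; this is my own illustration of the point (the `sumViaConst` name is made up):

```haskell
import Control.Applicative (Const (..))
import Data.Monoid (Sum (..))

-- Const m is an Applicative whenever m is a Monoid: <*> simply mappends
-- the two constants, discarding the phantom "result". This lets a
-- traversal accumulate a summary instead of rebuilding the structure --
-- the same idea as McBride's 'Accy'.
sumViaConst :: Num a => [a] -> a
sumViaConst = getSum . getConst . traverse (Const . Sum)

main :: IO ()
main = print (sumViaConst [1, 2, 3 :: Int])
-- Prints 6.
```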
Re: [Haskell-cafe] ANN: The Typeclassopedia, and request for feedback
Hi Brent,

I want to congratulate you on your article! An excellent piece of work, which should be compulsory reading for all serious Haskell programmers :)

My one suggestion would be that you expand on some of the examples; for example, in the monoid section, you refer to various cool applications of Monoid, but do not include them in the paper. Dedicated readers might follow up on your (rather long!) references list, but many will not, and it is a shame if they miss the elegance that Monoid permits. I think that if you were to inline some of these examples, the article would be even better :)

Thanks for writing it!

Edsko
Re: [Haskell-cafe] Another point-free question (=, join, ap)
On Fri, Feb 13, 2009 at 05:21:50PM +0100, Thomas Davie wrote: Hey, Thanks for all the suggestions. I was hoping that there was some uniform pattern that would extend to n arguments (rather than having to use liftM2, liftM3, etc. or have different 'application' operators in between the different arguments); perhaps not. Oh well :) Sure you can! What you want is Control.Applicative, not Control.Monad. (<*>) is the generic application you're looking for: pure (+) <*> [1,2,3] <*> [4,5,6] gives [5,6,7,6,7,8,7,8,9] Note that pure f <*> y can be shortened to fmap f y though, which Control.Applicative defines a handy infix version of: (+) <$> [1,2,3] <*> [4,5,6] gives [5,6,7,6,7,8,7,8,9] Hope that provides what you want Hi Bob, Thanks for the suggestion, but that solution does not work when the function I want to apply (in your case, (+)) is monadic itself. Then I'd still have to write join $ f <$> [1,2,3] <*> [4,5,6] :( Edsko
Re: [Haskell-cafe] Another point-free question (=, join, ap)
Hi Conor, Will this do? http://www.haskell.org/haskellwiki/Idiom_brackets You get to write iI f a1 a2 a3 Ji for do x1 <- a1; x2 <- a2; x3 <- a3; f x1 x2 x3 amongst other things... Cool :-) I had seen those idiom brackets before and put them on my mental 'things I want to understand' list but never got round to them. Very nice! Now if only ghc would allow me to write unicode so that I can write *real* brackets.. (Something the Agda people do very well!) Thanks! Edsko
Re: [Haskell-cafe] Another point-free question (=, join, ap)
Hey, Thanks for all the suggestions. I was hoping that there was some uniform pattern that would extend to n arguments (rather than having to use liftM2, liftM3, etc. or have different 'application' operators in between the different arguments); perhaps not. Oh well :) Thanks again! Edsko
[Haskell-cafe] Another point-free question (=, join, ap)
Hi, I can desugar do x' <- x; f x' as x >>= \x' -> f x' which is clearly the same as x >>= f However, now consider do x' <- x; y' <- y; f x' y' Desugared, this is x >>= \x' -> y >>= \y' -> f x' y' I can simplify the second half to x >>= \x' -> y >>= f x' but now we are stuck. I feel it should be possible to write something like x ... y ... f or perhaps f ... x ... y The best I could come up with was join $ return f `ap` x `ap` y which is not terrible but not quite as easy as I feel this should be. Any hints? Edsko
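To make the pattern above concrete, here is the join/return/ap idiom at work in the Maybe monad (a sketch; safeDiv is just a stand-in for any two-argument function that is itself monadic):

```haskell
import Control.Monad (join, ap)

-- A two-argument function that is itself monadic: division guarded against zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- Apply a monadic two-argument function to two monadic arguments:
-- return f `ap` x `ap` y builds Maybe (Maybe Int), and join flattens it.
apply2 :: Maybe Int -> Maybe Int -> Maybe Int
apply2 x y = join $ return safeDiv `ap` x `ap` y
```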
[Haskell-cafe] Looking for pointfree version
Hi, Is there a nice way to write down :: Focus -> [Focus] down p = concat [downPar p, downNew p, downTrans p] in point-free style? (It doesn't make much difference what these functions do; if it helps, their types are downPar, downNew, downTrans :: Focus -> [Focus].) Ideally, I would like to write something like down = downPar ... downNew ... downTrans but I'm not sure what should be on the dots. This works: down = concat . flip map [downPar, downNew, downTrans] . flip ($) but is extremely ugly and doesn't really explain what's going on :) (It seems to me I should be able to take advantage of the list monad, somehow.) Pointers appreciated! Edsko
Re: [Haskell-cafe] Looking for pointfree version
Perfect! Beautiful. I was hoping there'd be a simple solution like that. Thanks! On 9 Feb 2009, at 14:31, Wouter Swierstra wrote: <snip> How about using Data.Monoid: down = downPar `mappend` downNew `mappend` downTrans Wouter
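The reason this works: functions into a monoid form a monoid pointwise, and lists are a monoid under (++). A self-contained sketch with a stand-in Focus type (the real downPar etc. are not shown in the thread, so these bodies are illustrative only):

```haskell
import Data.Monoid (mappend)

type Focus = Int  -- stand-in for the real type

downPar, downNew, downTrans :: Focus -> [Focus]
downPar   p = [p + 1]
downNew   p = [p + 2]
downTrans p = [p + 3]

-- instance Monoid b => Monoid (a -> b) makes mappend pointwise concatenation:
-- down p = downPar p ++ downNew p ++ downTrans p
down :: Focus -> [Focus]
down = downPar `mappend` downNew `mappend` downTrans
```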
Re: [Haskell-cafe] Just how unsafe is unsafe
Hi, My opinion is that unsafeXXX is acceptable only when its use is preserved behind an abstraction that is referentially transparent and type safe. Others may be able to help refine this statement. I would agree with this. The problem is that impurity spreads easily. For example, suppose we have this truly random number generator, 'random'. As soon as we have this, then *almost every* function is potentially impure: f :: Integer -> Integer f x = random + x g :: [Integer] g = repeat random etc. etc. The compiler has no way of tracking impurity other than through the type system which is, of course, exactly what monads do. To echo the sentiment above, the only safe way to use unsafePerformIO is to hide it behind a function that *is* guaranteed to be pure (i.e., returns the same values for the same arguments, can be inlined, etc.). And even then, I would recommend against it. Let me give a *practical* reason why. For a long time, I would never even have considered unsafePerformIO, but I recently had an application that needed unique global identifiers in lots of places, and I was reluctant to pass state around everywhere; *but* I could hide it in a few functions, which were themselves pure. It looked something like: -- Replace (some) name by a number quote :: Integer -> Term -> Term -- Replace a number by (that) name unquote :: Name -> Term -> Term -- Typical usage foo t = let l = getUnsafeUniqueGlobalIdentifier () in unquote l . <do some stuff> . quote l Since unquote l . quote l is an identity operation, 'foo' itself is pure -- provided that nothing in <do some stuff> relies on the exact identity of the identifier. --- A rule which I broke at some point, got some very strange behaviour, and took me ages to debug. This was mostly due to laziness, which made the point of execution of the unsafe operation very difficult to predict.
For example, every call to getUnsafeUniqueGlobalIdentifier (it wasn't actually called that, don't worry :-) yielded a number one higher than the previous. However, in a list of terms [t1, t2, .., tn], all of which include some unique identifier, it is *not* the generation of the list that determines whether the identifiers in these terms are incrementing, but the *evaluation* of the list -- when the terms are forced to normal form. I then called 'sort' on this list, and sort depended on the values of these identifiers -- but since sort evaluated the terms in the list to normal form in a hard to predict order, the order of the list was anything but sorted! --- Moreover, you need all sorts of compiler options or nasty hacks (the unit argument to getUnsafeUniqueGlobalIdentifier above is no mistake) to avoid the compiler optimizing your code in ways that you did not expect. In the end, I ended up rewriting the entire application to avoid the use of these global unique identifiers, because it was simply too difficult to get right. I felt I was writing C code again, chasing bugs due to dangling pointers and the wrong memory being used. Not a time I want to return to! Moral of the story: unless you really really need to and really really know what you are doing -- do not use unsafePerformIO. Uncontrolled side effects and laziness will cause extremely hard to track behaviour in your program, and things are almost guaranteed to go wrong. Edsko
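The purely functional alternative alluded to above is to thread a counter explicitly, so the order in which identifiers are handed out is fixed by monadic sequencing rather than by evaluation order. A minimal sketch (not taken from the original application; a tiny state monad is hand-rolled here so the example is self-contained, where real code would use Control.Monad.State):

```haskell
-- A minimal fresh-name supply: a state monad over an Integer counter.
newtype Fresh a = Fresh { runFresh' :: Integer -> (a, Integer) }

instance Functor Fresh where
  fmap f (Fresh g) = Fresh (\s -> let (a, s') = g s in (f a, s'))

instance Applicative Fresh where
  pure a = Fresh (\s -> (a, s))
  Fresh f <*> Fresh g =
    Fresh (\s -> let (h, s') = f s; (a, s'') = g s' in (h a, s''))

instance Monad Fresh where
  Fresh g >>= k = Fresh (\s -> let (a, s') = g s in runFresh' (k a) s')

-- Hand out the next identifier and bump the counter.
fresh :: Fresh Integer
fresh = Fresh (\n -> (n, n + 1))

runFresh :: Fresh a -> a
runFresh m = fst (runFresh' m 0)
```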
Re: [Haskell-cafe] Type wildcards
Hi, On Tue, Dec 16, 2008 at 05:26:00PM +0200, Eyal Lotem wrote: Martin Foster (aka. EvilTerran) suggested an interesting idea, and I decided it was too nice to ignore/be forgotten inside Martin's head... So I'd like to try and suggest it. Type wildcards that allow partially specifying types, e.g.: f :: _ -> String f x = show x This will instruct the type-inferrer to fill out the wild-card part only (e.g.: Show a => a). What you are describing is (an instance of) the 'some' operator. For example, see http://research.microsoft.com/en-us/um/people/daan/download/papers/hmf.pdf Section 5.1, Partial annotations. And yes, I agree, they are useful :) Edsko
Re: [Haskell-cafe] Proof that Haskell is RT
See 'What is a purely functional language?' by Sabry. Not quite a formal proof about *Haskell*, but then we would first need a formal semantics of Haskell to be able to do that proof ;-) On 12 Nov 2008, at 10:11, Andrew Birkett wrote: Hi, Is there a formal proof that the Haskell language is referentially transparent? Many people state Haskell is RT without backing up that claim. I know that, in practice, I can't write any counter-examples but that's a bit hand-wavy. Is there a formal proof that, for all possible Haskell programs, we can replace coreferent expressions without changing the meaning of a program? (I was writing a blog post about the origins of the phrase 'referentially transparent' and it made me think about this) Andrew -- - http://www.nobugs.org -
[Haskell-cafe] Implementing pi-calculus using STM
Hi, (Note: assumes knowledge of pi-calculus.) I am playing with writing a simple interpreter for the pi-calculus using STM. The implementation of most of the operators of the pi-calculus is straightforward, but I am unsure on how to implement the replication operator. The interpretation of !P should be the interpretation of the infinite number of parallel processes P | P | P ... . I am implementing parallel processes using the fork operation, but spawning an infinite amount of processes -- even if the (infinite) majority of them all immediately lock -- seems like a bad idea. So I would prefer to spawn one P, and as soon as the thread that interprets P makes any progress at all, spawn another, and so on. I'm not too sure however how to achieve this. Any ideas? Edsko
Re: [Haskell-cafe] Associative Commutative Unification
On Tue, Jul 08, 2008 at 08:24:45AM -0400, John D. Ramsdell wrote: The Haskell typechecker contains a nice example of a unifier for freely generated terms. My focus is on equational unification, but thanks anyway. Are you aware of Term Rewriting and All That? It describes how to do associative-commutative unification; whether it satisfies your 'obviously correct' criterion I don't know. Edsko
Re: [Haskell-cafe] Haskell's type system
On Tue, Jun 17, 2008 at 04:40:51PM -0400, Ron Alford wrote: I'm trying to wrap my head around the theoretical aspects of Haskell's type system. Is there a discussion of the topic separate from the language itself? Since I come from a rather logic-y background, I have this (far-fetched) hope that there is a translation from Haskell's type syntax to first-order logic (or an extension there-of). Is this done? Doable? I'll give you some terms to Google for. The ideal type system for Haskell (ignoring type classes) is System F. Haskell'98 doesn't quite get there, but recent extensions such as boxy types and FPH do. System F corresponds to second-order propositional logic: types correspond to propositions in this logic, and Haskell programs to the corresponding proofs (by the Curry-Howard isomorphism). The type system of Haskell'98 is a bit weird from a logical perspective, and sort of corresponds to the 'rank 1.5' predicative fragment of second-order propositional logic (I say rank 1.5 because although no abstraction over rank-1 types is allowed normally, limited abstraction over rank-1 types is allowed through let-polymorphism -- so this is *almost* first-order logic but not quite: slightly more powerful). Regarding type classes, I'm not 100% sure what the logical equivalent is, although one can regard a type such as forall a. Eq a => a -> a as requiring a proof (evidence) that equality on a is decidable. Where this sits formally as a logic I'm not sure though. Edsko
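To make the Curry-Howard reading concrete, here is the classic System F encoding of conjunction, which needs exactly the rank-2 polymorphism that Haskell'98 lacks (a standard textbook example, not specific to this thread):

```haskell
{-# LANGUAGE RankNTypes #-}

-- In System F (second-order propositional logic), conjunction is definable:
-- A and B is encoded as: forall C. (A -> B -> C) -> C
type Pair a b = forall c. (a -> b -> c) -> c

-- The introduction rule for conjunction.
pair :: a -> b -> Pair a b
pair x y = \k -> k x y

-- The two elimination rules (the projections).
fstP :: Pair a b -> a
fstP p = p (\x _ -> x)

sndP :: Pair a b -> b
sndP p = p (\_ y -> y)
```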
Re: [Haskell-cafe] Moving forall over type constructors
On Mon, Jun 09, 2008 at 03:20:33PM +0200, Klaus Ostermann wrote: At first I'd like to thank Claus, Ryan, Edsko, Luke and Derek for their quite helpful replies to my previous thread. In the course of following their advice I encountered the problem of moving a forall quantifier over a wrapper type constructor. If I have newtype Wrapper a = Wrapper a and I instantiate Wrapper with a polymorphic type, then it is possible to move the quantifier outside: outside :: Wrapper (forall a. (t a)) -> (forall a. Wrapper (t a)) outside (Wrapper x) = Wrapper x (surprisingly the code does not work with the implementation 'outside x = x'; I guess this is a ghc bug) Not a bug; those two types are not the same. In the code you've given, ghc needs to find evidence that it can create an element of type (Wrapper (t a)) for any 'a'; fortunately, it can do so because it has 'x', which can create a 't a' for any 'a'. inside :: (forall a. Wrapper (t a)) -> Wrapper (forall a. (t a)) inside x = x results in the following error: But here we have an argument that can return a Wrapper (t a) for any 'a'; that does *not* mean it can return a wrapper of a polymorphic type. If you think about 'a' as an actual argument, then you could pass 'Int' to get a Wrapper (t Int), Bool to get a Wrapper (t Bool), or even (forall a. a -> a) to get a Wrapper (t (forall a. a -> a)), but no argument at all could make a Wrapper (forall a. t a). Edsko
Re: [Haskell-cafe] Moving forall over type constructors
On Mon, Jun 09, 2008 at 06:55:20AM -0700, Klaus Ostermann wrote: But here we have an argument that can return a Wrapper (t a) for any 'a'; that does *not* mean it can return a wrapper of a polymorphic type. If you think about 'a' as an actual argument, then you could pass 'Int' to get a Wrapper (t Int), Bool to get a Wrapper (t Bool), or even (forall a. a -> a) to get a Wrapper (t (forall a. a -> a)), but no argument at all could make a Wrapper (forall a. t a). I just found out that it *is* possible to implement the inside function, namely as follows: inside :: forall t. ((forall a. Wrapper (t a)) -> Wrapper (forall a. (t a))) inside x = Wrapper f where f :: forall a. (t a) f = unwrap x unwrap (Wrapper z) = z I guess this solves my problem. Sorry for bothering you with this question. I still find it a bit weird to write all these obfuscated identity functions to make the type checker happy, though. As I said, the types are not isomorphic -- if you think of type parameters as arguments you will see why. Edsko
Re: [Haskell-cafe] Question about kinds
On Fri, Jun 06, 2008 at 03:41:07PM -0700, Klaus Ostermann wrote: Why does the code below not pass the type checker? If I could explicitly parameterize y with the type constructor Id (as e.g. in System F), then 'y Id' should have the type Int -> Int and hence y Id x should be OK, but with Haskell's implicit type parameters it does not work. The type of 'Id' is expanded to simply 'Int', and 'Int' does not unify with 'm a' for any type constructor 'm': Haskell's type-level functions are always unevaluated (type synonyms don't count: they are always expanded). So, if you wanted to do this, you'd need to do newtype Id a = Id a Of course, now your values need to be tagged as well. Edsko
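A sketch of the suggested fix: a newtype, unlike a type synonym, is a real type constructor, so it can instantiate the 'm' in 'm a', at the cost of tagging and untagging values (incr here is just an illustrative consumer, not from the original question):

```haskell
newtype Id a = Id a

runId :: Id a -> a
runId (Id x) = x

-- fmap's type mentions 'f a'; the newtype Id can fill that slot,
-- which a type synonym (type Id a = a) never could.
instance Functor Id where
  fmap f (Id x) = Id (f x)

incr :: Id Int -> Id Int
incr = fmap (+ 1)
```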
Re: [Haskell-cafe] What is the maturity of Haskell Web Frameworks
On Wed, Jun 04, 2008 at 10:30:49PM -0400, Paul L wrote: Pardon me to hijack this thread, but I have an idea to build a different kind of Web Framework and am not sure if somebody has already done it. Have a look at iTasks, written in Clean. Not *quite* Haskell, I know, but close enough. It does what you suggest, it does what the OP suggested (assigning tasks to people etc) and much more. For example, read the following ICFP paper: http://www.st.cs.ru.nl/papers/2007/plar2007-ICFP07-iTasks.pdf Edsko
Re: [Haskell-cafe] automatically deriving Map and Filter on datatypes etc.
On Thu, Jun 05, 2008 at 10:39:16AM +0200, Thomas Davie wrote: Even deriving an instance of Functor seems rather implausible; what should it do for data Wierd a b = Nil | A a (Wierd a b) | B b (Wierd a b) Should fmap's function argument operate on 'a's, 'b's, or both? Generic Haskell can do all that. Read Ralf Hinze's Generic Programs and Proofs for the theory: http://www.informatik.uni-bonn.de/~ralf/publications/habilitation.pdf Edsko
Re: [Haskell-cafe] hs-plugins compile error
On Tue, Jun 03, 2008 at 03:07:33PM +0100, John O'Donnell wrote: Hi, What is the status of hs-plugins? I recently tried to install the version plugins-1.2 on Hackage, using a GNU/Linux box with Fedora 9 and ghc-6.8.2, but didn't get past the configure stage (see config.log below). The installation script is invoking gcc with a -V command line argument, but according to gcc documentation -V requires an argument. Has anyone managed to get hs-plugins to work? If so, what platform, which version of ghc and gcc, and where did you find the hs-plugins source? It's working fine for me (after applying the patch I submitted), on Debian Lenny (which currently comes with ghc 6.8.2, gcc 4.2.4) and the source from Hackage. Source from the darcs repository also works fine (after applying the same patch). I *am* getting an application crash sometimes when I close my application, but I have not yet been able to track that down, so I'm not 100% sure it's due to hs-plugins. Edsko
[Haskell-cafe] hs-plugins compile error
Hi, I'm getting the compilation error that is actually logged on Hackage: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/plugins Below is a small diff file that resolves these problems; I don't know what the proper protocol is for submitting these diffs but it may be useful to someone. - E

diff -ur plugins-1.2-orig/src/System/Plugins/Env.hs plugins-1.2/src/System/Plugins/Env.hs
--- plugins-1.2-orig/src/System/Plugins/Env.hs 2008-06-02 14:57:59.0 +0100
+++ plugins-1.2/src/System/Plugins/Env.hs 2008-06-02 15:00:25.0 +0100
@@ -73,7 +73,7 @@
 import Control.Concurrent.MVar ( MVar(), newMVar, withMVar )
-import Distribution.Package
+import Distribution.Package hiding (packageName)
 import Text.ParserCombinators.ReadP
 import qualified Data.Map as M
diff -ur plugins-1.2-orig/src/System/Plugins/PackageAPI.hs plugins-1.2/src/System/Plugins/PackageAPI.hs
--- plugins-1.2-orig/src/System/Plugins/PackageAPI.hs 2008-06-02 14:57:59.0 +0100
+++ plugins-1.2/src/System/Plugins/PackageAPI.hs 2008-06-02 14:59:49.0 +0100
@@ -40,7 +40,7 @@
 #if CABAL == 1 || __GLASGOW_HASKELL__ >= 604
 import Distribution.InstalledPackageInfo
-import Distribution.Package
+import Distribution.Package hiding (depends, packageName)
 #else
 import System.Plugins.Package
 #endif
diff -ur plugins-1.2-orig/src/System/Plugins/ParsePkgConfCabal.hs plugins-1.2/src/System/Plugins/ParsePkgConfCabal.hs
--- plugins-1.2-orig/src/System/Plugins/ParsePkgConfCabal.hs 2008-06-02 14:57:59.0 +0100
+++ plugins-1.2/src/System/Plugins/ParsePkgConfCabal.hs 2008-06-02 14:58:56.0 +0100
@@ -6,7 +6,7 @@
 ) where
 import Distribution.InstalledPackageInfo
-import Distribution.Package
+import Distribution.Package hiding (depends)
 import Distribution.Version
 import Data.Char ( isSpace, isAlpha, isAlphaNum, isUpper, isDigit )
Re: [Haskell-cafe] hs-plugins compile error
Hi Don, Is this the kind of thing you mean (I'm not really a darcs user; this is the patch created by darcs record):

[Hide some names to remove ambiguity errors
Edsko de Vries [EMAIL PROTECTED]**20080602202001] {
hunk ./src/System/Plugins/Env.hs 76
-import Distribution.Package
+import Distribution.Package hiding (packageName)
hunk ./src/System/Plugins/PackageAPI.hs 43
-import Distribution.Package
+import Distribution.Package hiding (depends, packageName)
hunk ./src/System/Plugins/ParsePkgConfCabal.hs 9
-import Distribution.Package
+import Distribution.Package hiding (depends)
}

Edsko
Re: [Haskell-cafe] I/O without monads, using an event loop
On Fri, May 30, 2008 at 03:09:37PM +0100, Robin Green wrote: I have been thinking about to what extent you could cleanly do I/O without explicit use of the I/O monad, and without uniqueness types (which are the main alternative to monads in pure functional programming, and are used in the Concurrent Clean programming language). Suppose you have a main event handler function, like this: eventMain :: (Event, SystemState AppState) -> (Command, SystemState AppState) This function could be called over and over in an event loop, until an EndProgram command was received, and the event loop would itself do all the actual I/O (the SystemStates are only in-memory representations of some part of the system state, plus the application's own state). Things like disk I/O could be done with commands which generate events when complete. Interprocess communication could be done in the same way. Then eventMain, and everything called by it, would be referentially transparent, and yet non-monadic. You could of course build higher-level stuff on top of that. On the other hand, it's quite stateful, because anything you need to remember between events needs to be recorded, either in the SystemState or externally (e.g. in a file). I suppose this is the most important disadvantage? Is there any published work or code using this approach, or something like it, in a pure functional language? I'm primarily interested in embedded system and desktop UIs, rather than say web-based systems, although both would be interesting. Yeah, check the History of Haskell paper, in particular Section 7. Edsko
Re: [Haskell-cafe] Aren't type system extensions fun? [Further analysis]
On Thu, May 29, 2008 at 01:44:24PM +0200, Roberto Zunino wrote: Kim-Ee Yeoh wrote: How about foo :: (exists m :: * -> *. forall a. a -> m a) -> (m Char, m Bool) Thank you: I had actually thought about something like that. First, the exists above should actually span over the whole type, so it becomes a forall because (->) is contravariant in its 1st argument: foo :: forall m :: * -> *. (forall a. a -> m a) -> (m Char, m Bool) This seems to be Haskell+extensions, but note that m above is meant to be an arbitrary type-level function, and not a type constructor (in general). So, I am not surprised if undecidability issues arise in type checking/inference. :-) Type *checking* is still decidable (System Fw is sufficiently powerful to model these types) but type inference now needs higher-order unification, which indeed is undecidable. Another valid type for foo can be done AFAICS with intersection types: foo :: (Char -> a /\ Bool -> b) -> (a,b) But I cannot comment about their inference, or usefulness in practice. Again, undecidable :) In fact, I believe that an inference algorithm for intersection types is equivalent to solving the halting problem. Type checking however is decidable, but expensive. Edsko
Re: [Haskell-cafe] Re: Monad for HOAS?
On Wed, May 14, 2008 at 06:01:37PM -0400, Chung-chieh Shan wrote: Conal Elliott [EMAIL PROTECTED] wrote in article [EMAIL PROTECTED] in gmane.comp.lang.haskell.cafe: I share your perspective, Edsko. If foo and (Let foo id) are indistinguishable to clients of your module and are equal with respect to your intended semantics of Exp, then I'd say at least this one monad law holds. - Conal I am at least sympathetic to this perspective, but the Expr constructors are not as polymorphic as the monad operations: if in do a <- foo; return a foo has type ExprM String (perhaps foo is equal to return []), then we want to generate the DSL expression Let [] id, but [] is not of type Expr. Because whenever foo's type is not ExprM Expr the above code using do notation must be exactly equal to foo, by parametricity even when foo's type is ExprM Expr we cannot generate Let. Yes, absolutely. This is the core difficulty in designing the monad, and the reason why I started experimenting with adding a type constructor to Expr: data Expr a = One | Add (Expr a) (Expr a) | Let (Expr a) (Expr a -> Expr a) | Place a This is useful regardless, because we can now define catamorphisms over Expr. Nevertheless, I still can't see how to define my monad properly (other than using Lauri's suggestion, which has already improved the readability of my code). Return is now easy (return = Place), and it should be relatively easy to define a join operation Expr (Expr a) -> Expr a, *but* since Expr uses HOAS, it is not an instance of Functor and so we cannot use the join operator to define bind. Actually, I think this approach is a dead end too, but I'm not 100% sure. Edsko
Re: [Haskell-cafe] Monad for HOAS?
Hi, On Wed, May 14, 2008 at 03:59:58PM +0300, Lauri Alanko wrote: On Wed, May 14, 2008 at 10:11:17AM +0100, Edsko de Vries wrote: Suppose we have some data structure that uses HOAS; typically, a DSL with explicit sharing. For example: data Expr = One | Add Expr Expr | Let Expr (Expr -> Expr) When I use such a data structure, I find myself writing expressions such as Let foo $ \a -> Let bar $ \b -> Add a b It seems to me that there should be a monad here somewhere, so that I can write this approximately like do a <- foo; b <- bar; return (Add a b) Neat idea, but the monad can't work exactly as you propose, because it'd break the monad laws: do { a <- foo; return a } should be equal to foo, but in your example it'd result in Let foo id. However, with an explicit binding marker the continuation monad does what you want:

import Control.Monad.Cont

data Expr = One | Add Expr Expr | Let Expr (Expr -> Expr)

type ExprM = Cont Expr

bind :: Expr -> ExprM Expr
bind e = Cont (Let e)

runExprM :: ExprM Expr -> Expr
runExprM e = runCont e id

Now you'd write your example as do a <- bind foo; b <- bind bar; return (Add a b) Nice. That's certainly a step towards what I was looking for, although it requires slightly more tags than I was hoping for :) But I've incorporated it into my code and it's much better already :) You mention that a direct implementation of what I suggested would break the monad laws, as (foo) and (Let foo id) are not equal. But one might argue that they are in fact, in a sense, equivalent. Do you reckon that if it is acceptable that foo and do { a <- foo; return a } don't return equal terms (but equivalent terms), we can do still better? Thanks again, Edsko
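To check that the Cont-based monad really rebuilds the Let nodes, here is a self-contained variant (the continuation monad is inlined rather than imported from mtl, and countLets is my addition for testing, not part of the thread):

```haskell
data Expr = One | Add Expr Expr | Let Expr (Expr -> Expr)

-- A minimal continuation monad, specialised as in Lauri's reply.
newtype ExprM a = ExprM { runExprM' :: (a -> Expr) -> Expr }

instance Functor ExprM where
  fmap f (ExprM g) = ExprM (\k -> g (k . f))

instance Applicative ExprM where
  pure a = ExprM ($ a)
  ExprM f <*> ExprM g = ExprM (\k -> f (\h -> g (k . h)))

instance Monad ExprM where
  ExprM g >>= h = ExprM (\k -> g (\a -> runExprM' (h a) k))

-- The explicit binding marker: wrap the rest of the computation in a Let.
bind :: Expr -> ExprM Expr
bind e = ExprM (Let e)

runExprM :: ExprM Expr -> Expr
runExprM e = runExprM' e id

-- Count the Let nodes, to observe the sharing structure.
countLets :: Expr -> Int
countLets One = 0
countLets (Add l r) = countLets l + countLets r
countLets (Let e f) = 1 + countLets e + countLets (f One)

-- do { a <- bind (1+1); b <- bind (a+a); return (a+b) }
example :: Expr
example = runExprM $ do
  a <- bind (Add One One)
  b <- bind (Add a a)
  return (Add a b)
```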
Re: [Haskell-cafe] Stack vs Heap allocation
Hi, No, the thunks are (usually) stored on the heap. You don't get the stack overflow until you actually force the computation, at which point you have an expression like: (...(((1+2)+3)+4) ... + 1000) which requires stack in proportion to the number of nested parentheses (effectively). Ah, that makes sense! So does it make sense to talk about tail-recursive thunks? Or does the evaluation of thunks always take stack space proportional to the nesting level? Edsko
[Haskell-cafe] Stack vs Heap allocation
Hi, How can I know whether something will be stack or heap allocated? For example, in the standard example of why foldl (+) 0 will fail to evaluate a long list of integers due to a stack overflow, but foldl' won't, it is pointed out that foldl starts building up unevaluated thunks. So, apparently, these thunks are allocated on the stack rather than on the heap with a pointer to the thunk on the stack. (I understand that foldl' is asymptotically better than foldl space-wise; that is irrelevant to my question.) On the other hand, in this version that sums all the values in a tree

data Tree = Leaf Int | Node Tree Tree

sum :: Tree -> Int
sum t = sum' [t] 0
  where sum' [] acc = acc
        sum' (Leaf i : ts) acc = sum' ts $! (i + acc)
        sum' (Node l r : ts) acc = sum' (l : r : ts) acc

we are building up a (potentially) large list of trees yet to be processed, but never run out of stack space. Apparently, the list is built up on the heap rather than on the stack. What is the fundamental difference between these two examples? Why is the list of trees yet to be processed allocated on the heap (with a pointer on the stack, presumably) but the unevaluated thunks in the foldl example allocated on the stack? Thanks, Edsko
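The foldl half of the question can be seen side by side: foldl builds a nested addition thunk whose eventual forcing needs stack linear in the list length, while foldl' forces the accumulator at each step. A sketch:

```haskell
import Data.List (foldl')

-- foldl (+) 0 [1..n] builds the thunk (((0+1)+2)+...+n); forcing that
-- nested expression later needs O(n) stack.
lazySum :: [Integer] -> Integer
lazySum = foldl (+) 0

-- foldl' forces the accumulator at each step, so only O(1) stack is
-- needed when the result is demanded.
strictSum :: [Integer] -> Integer
strictSum = foldl' (+) 0
```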
Re: [Haskell-cafe] Re: Understanding tail recursion and trees
Hi, I think Huet's Zipper is intended to solve this sort of problem.

data Path = Top | BranchL Path Tree | BranchR Tree Path
type Zipper = (Path, Tree)

openZipper :: Tree -> Zipper
openZipper t = (Top, t)

Conceptually the zipper is a tree with one subtree selected. You can move the selection point with the (partial) functions defined below.

downL, downR, up :: Zipper -> Zipper
downL (p, Branch l r) = (BranchL p r, l)
downR (p, Branch l r) = (BranchR l p, r)
up (BranchL p r, l) = (p, Branch l r)
up (BranchR l p, r) = (p, Branch l r)

Note that these functions just shuffle existing subtrees around. Depending on your traversal pattern, these can run in roughly constant space. Using the zipper, we can define functions that traverse the entire tree and return a new tree:

number :: Tree -> Tree
number t = down Top t 0
  where down p (Leaf _) n = up p (Leaf n) $! n + 1
        down p (Branch l r) n = down (BranchL p r) l n
        up Top t n = t
        up (BranchL p r) l n = down (BranchR l p) r n
        up (BranchR l p) r n = up p (Branch l r) n

Please correct me if I'm wrong, but doesn't the size of the zipper grow every time we move down the left branch? I.e., by the time we reach the leaf, we'll have a zipper (BranchL (BranchL ..)) of size the depth of the tree? Or am I missing something? Edsko
[Haskell-cafe] Understanding tail recursion and trees
Hi, I am writing a simple compiler for a small DSL embedded in Haskell, and am struggling to identify and remove the source of a stack error when compiling larger examples. To understand the issues better, I was playing around with tail recursion on trees when I came across the following problem. Suppose we want to count the number of leaves in a tree. The obvious naive (non tail-recursive) version will run out of stack space quickly on larger trees. To test this, I defined a function that generates left-biased (gentreeL, code below) or right-biased (gentreeR) trees: that is, trees of depth n with leaves only on the right (or only on the left, respectively). Now, we can define two traversals: one that has a tail call for the left subtree, after having traversed the right (acountL, below); and one that has a tail call for the right subtree, after having traversed the left (acountR). I was expecting acountL to work on the left-biased tree but not on the right-biased tree -- and that assumption turned out to be correct. However, I was also expecting (by duality :) acountR to work on the right-biased tree, but not on the left-biased tree, whereas in fact it works on both! (Indeed, it works on reallybigtree3, which combines the left- and right-biased trees, as well.) Can anyone explain why this is happening? Why is acountR not running out of stack space on the left-biased tree? Also, if this is a quirk rather than something I can rely on, is there a way to write a function that can count the number of leaves in reallybigtree3 without running out of stack space?
Thanks (code follows),

Edsko

  data Tree = Leaf Integer | Branch Tree Tree

  -- naive count
  ncount :: Tree -> Integer
  ncount (Leaf _)       = 1
  ncount (Branch t1 t2) = ncount t1 + ncount t2

  -- generate left-biased tree (right nodes are always single leaves)
  gentreeL :: Integer -> Tree
  gentreeL 0 = Leaf 0
  gentreeL n = Branch (gentreeL (n-1)) (Leaf 0)

  -- generate right-biased tree (left nodes are always single leaves)
  gentreeR :: Integer -> Tree
  gentreeR 0 = Leaf 0
  gentreeR n = Branch (Leaf 0) (gentreeR (n-1))

  -- test examples
  reallybigtree1 = gentreeL 200
  reallybigtree2 = gentreeR 200
  reallybigtree3 = Branch (gentreeL 200) (gentreeR 200)

  -- count with tail calls for the left subtree
  acountL :: Tree -> Integer -> Integer
  acountL (Leaf _)       acc = acc + 1
  acountL (Branch t1 t2) acc = acountL t1 $! (acountL t2 acc)

  -- count with tail calls for the right subtree
  acountR :: Tree -> Integer -> Integer
  acountR (Leaf _)       acc = acc + 1
  acountR (Branch t1 t2) acc = acountR t2 $! (acountL t1 acc)
[Haskell-cafe] Re: Understanding tail recursion and trees
Hi,

Thanks to Miguel for pointing out my silly error. So at least my understanding of tail recursion is correct :) So then the question becomes: what *is* the best way to write this function? One version I can think of is

  ecount :: [Tree] -> Integer -> Integer
  ecount []                  acc = acc
  ecount (Leaf _ : ts)       acc = ecount ts $! (acc + 1)
  ecount (Branch t1 t2 : ts) acc = ecount (t1 : t2 : ts) acc

which essentially maintains an explicit stack and runs on all trees. Are there better ways to do this?

Thanks again and sorry for my mistake,

Edsko
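For what it's worth, the explicit-stack version really does run in constant call-stack space on all three tree shapes; any growth happens in the list on the heap. A self-contained sketch (my transcription of the code from this thread):

```haskell
data Tree = Leaf Integer | Branch Tree Tree

-- Left- and right-biased spines of depth n (n + 1 leaves each).
gentreeL, gentreeR :: Integer -> Tree
gentreeL 0 = Leaf 0
gentreeL n = Branch (gentreeL (n - 1)) (Leaf 0)
gentreeR 0 = Leaf 0
gentreeR n = Branch (Leaf 0) (gentreeR (n - 1))

-- Count leaves with an explicit stack of pending subtrees; every call is a
-- tail call, and the accumulator is forced at each leaf.
ecount :: [Tree] -> Integer -> Integer
ecount []                  acc = acc
ecount (Leaf _ : ts)       acc = ecount ts $! (acc + 1)
ecount (Branch t1 t2 : ts) acc = ecount (t1 : t2 : ts) acc
```

A tree combining a deep left-biased and a deep right-biased spine, e.g. `Branch (gentreeL 100000) (gentreeR 100000)`, counts fine: it has 100001 + 100001 = 200002 leaves.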
[Haskell-cafe] Functional programmer's intuition for adjunctions?
Hi,

Is there an intuition that can be used to explain adjunctions to functional programmers, even if the match isn't necessarily 100% perfect (like natural transformations and polymorphic functions)?

Thanks,

Edsko
Re: [Haskell-cafe] Functional programmer's intuition for adjunctions?
On Tue, Mar 04, 2008 at 11:58:38AM -0600, Derek Elkins wrote:
> On Tue, 2008-03-04 at 17:16 +, Edsko de Vries wrote:
> > Hi,
> >
> > Is there an intuition that can be used to explain adjunctions to
> > functional programmers, even if the match isn't necessarily 100%
> > perfect (like natural transformations and polymorphic functions)?
>
> Well when you pretend Hask is Set, many things can be discussed fairly
> directly. F is left adjoint to U, F -| U, if Hom(FA,B) is naturally
> isomorphic to Hom(A,UB), natural in A and B. A natural transformation
> over Set is just a polymorphic function. So we have an adjunction if we
> have two functions
>
>   phi    :: (F a -> b) -> (a -> U b)
>   phiInv :: (a -> U b) -> (F a -> b)
>
> such that phi . phiInv = id and phiInv . phi = id, and F and U are
> instances of Functor. The unit and counit respectively are then just
> phi id and phiInv id.

Sure, but that doesn't really explain what an adjunction *is*. For me, it helps to think of a natural transformation as a polymorphic function: it gives me an intuition about what a natural transformation is. Specializing the definition of an adjunction for Hask (or Set) helps understanding the *definitions*, but not the *intention*, if that makes any sense..

Edsko
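One concrete instance of the phi/phiInv recipe above (my addition, not from the thread) is the currying adjunction: in Hask, F a = (a, s) is left adjoint to U b = s -> b, and the natural isomorphism is essentially curry/uncurry:

```haskell
-- Hom((a, s), b) ~ Hom(a, s -> b), witnessed by the two inverse functions:

phi :: ((a, s) -> b) -> (a -> (s -> b))
phi f = \a s -> f (a, s)            -- curry, up to argument order

phiInv :: (a -> (s -> b)) -> ((a, s) -> b)
phiInv g = \(a, s) -> g a s         -- uncurry, up to argument order

-- The unit and counit are phi id and phiInv id, as Derek says:

unit :: a -> (s -> (a, s))          -- pairs a value with the "state"
unit = phi id

counit :: (s -> b, s) -> b          -- function application
counit = phiInv id
```

A side remark on why this example may help the intuition: composing U after F gives a -> s -> (a, s), i.e. the shape of the State monad; adjunctions compose into monads this way.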
[Haskell-cafe] OT: Isorecursive types and type abstraction
Hi,

This is rather off-topic but the audience of this list may be the right one; if there is a more appropriate venue for this question, please let me know.

Most descriptions of recursive types state that iso-recursive types (with explicit 'fold' and 'unfold' operators) are easy to typecheck, and that equi-recursive types are much more difficult. My question regards using iso-recursive types (explicitly, not implicitly using algebraic data types) together with type abstraction and application. The usual typing rules for fold and unfold are

  e :: Fix X. t
  ----------------------------- Unfold
  unfold e :: [X := Fix X. t] t

  e :: [X := Fix X. t] t
  ---------------------- Fold
  fold e :: Fix X. t

This works ok for the following type:

  ListInt = Fix L. 1 + Int * L

(where 1 is the unit type). If x :: ListInt then

  unfold x :: 1 + Int * ListInt

using the Unfold typing rule. However, this breaks when using type abstraction and application. Consider the polymorphic version of list:

  List = Fix L. /\A. 1 + A * L A

Now if we have x :: List Int we can no longer type unfold x, because x does not have type (Fix ..) but ((Fix ..) Int) instead. Of course, we can unroll List Int by first unrolling List, and then re-applying the unrolled type to Int to get

  (/\A. 1 + A * (Fix L. /\A. 1 + A * L A) A) Int

which can be simplified to

  1 + Int * List Int

but this is not usually mentioned (that I could find; in particular, TAPL does not mention it) and I'm worried that there are subtleties here that I'm missing -- nor do I have an exact definition of what this 'extended' unrolling rule should do. Any hints or pointers to relevant literature would be appreciated!

Edsko

PS. One way to simplify the problem is to redefine List as

  List = /\A. Fix L. 1 + A * L

so that List Int can easily be simplified to the right form (Fix ..); but that can only be done for regular datatypes. For example, the nested type

  Perfect = Fix L. /\A. A + L (A, A)

cannot be so rewritten because the argument to L in the recursive call is different.
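The iso-recursive view can be made concrete in Haskell (my sketch, not from the question): a newtype's constructor and projection play exactly the role of the explicit fold/unfold witnesses. This is the "regular" encoding from the PS, List = /\A. Fix L. 1 + A * L, with the type abstraction lifted over the fixpoint; the nested Perfect type would need a higher-kinded Fix instead.

```haskell
-- The fixpoint of a pattern functor; Fold/unfold witness the isomorphism
-- between Fix f and f (Fix f), as in the typing rules above.
newtype Fix f = Fold { unfold :: f (Fix f) }

-- Pattern functor of lists: ListF a l ~ 1 + a * l.
data ListF a l = Nil | Cons a l

type List a = Fix (ListF a)

nil :: List a
nil = Fold Nil

cons :: a -> List a -> List a
cons x xs = Fold (Cons x xs)

-- Every use of the list must go through an explicit unfold.
toHaskellList :: List a -> [a]
toHaskellList xs = case unfold xs of
  Nil       -> []
  Cons y ys -> y : toHaskellList ys
```

For example, `toHaskellList (cons 1 (cons 2 nil))` gives `[1,2]`.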
Re: [Haskell-cafe] OT: Isorecursive types and type abstraction
On Thu, Jan 24, 2008 at 10:06:04AM -0600, Antoine Latter wrote:
> Can Fix be made to work with higher-kinded types? If so, would the
> following work:
>
>   Perfect = /\ A . Fix (L :: * -> *) . (A + L (A,A))

Hi,

Thanks for your quick reply. Unfortunately, your solution does not work. For Fix X. t to be well-defined, we must have that if 'X' has kind 'k', then 't' must also have kind 'k' (compare the type of 'fix' in Haskell: (a -> a) -> a).

> Keep in mind I have no idea what the Perfect data structure is
> supposed to look like.

The Haskell equivalent would be

  data Perfect a = Leaf a | Branch (Perfect (a, a))

and models perfect binary trees (I admit, a slightly headwrecking datatype! :)

Edsko
Re: [Haskell-cafe] OT: Isorecursive types and type abstraction
On Thu, Jan 24, 2008 at 10:46:36AM -0600, Antoine Latter wrote:
> Hmm ... How about:
>
>   Perfect :: * -> * = Fix (L :: * -> *) . /\ A . (A + L (A,A))
>
>   unfold Perfect
>     = [L := Fix L . t] t   where t = /\ A . (A + L (A,A))
>     = /\ A . (A + (Fix L . /\ B . (B + L (B,B))) (A,A))
>
> assuming alpha-equality. Because L and (Fix L . t) are of kind (* -> *),
> the substitution should be okay. Am I missing something, again?

The problem is not that Perfect as it stands cannot be unrolled; the problem is that without some hackery, I don't know how to unroll the type of a term if that type is of the form ((Fix ..) applied to some arguments) rather than just (Fix ..) -- whether that is List or Perfect. The only reason I mention Perfect is that for List I can 'lift' the /\A over the Fix, but I cannot do that with Perfect.

Edsko
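Antoine's higher-kinded Fix can be written down in Haskell (my sketch, not from the thread): a fixpoint operator at kind (* -> *) applied to the pattern functor of Perfect, where the recursive position is used at a different argument type, making the type nested.

```haskell
-- Fixpoint at kind (* -> *): FixH f a ~ f (FixH f) a.
newtype FixH f a = InH { outH :: f (FixH f) a }

-- Pattern functor of Perfect; the recursive position p is applied at (a, a),
-- which is what makes Perfect a nested (non-regular) type.
data PerfectF p a = Leaf a | Branch (p (a, a))

type Perfect = FixH PerfectF
-- cf. the direct definition: data Perfect a = Leaf a | Branch (Perfect (a, a))

leaf :: a -> Perfect a
leaf = InH . Leaf

branch :: Perfect (a, a) -> Perfect a
branch = InH . Branch

-- Counting leaves needs polymorphic recursion (hence the signature):
-- a tree under Branch holds pairs, so it contributes twice its own size.
size :: Perfect a -> Int
size t = case outH t of
  Leaf _   -> 1
  Branch u -> 2 * size u
```

For example, `size (branch (branch (leaf (('a','b'),('c','d')))))` is 4, a perfect tree of depth 2.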
[Haskell-cafe] Precedence and associativity in a pretty-printer
Hi,

Suppose we have some algebraic datatype describing an expression language containing the usual suspects (various binary arithmetic operators such as addition, subtraction, multiplication, division, exponentiation; function abstraction and application; etc.), each with their own precedence (multiplication binds stronger than addition) and associativity (function application is left associative, exponentiation is right associative, addition is associative, etc.). Is there a nice way to pretty-print such an expression with the minimal number of brackets? I can come up with something, but I'm sure somebody thought hard about this problem before and came up with a really nice solution :)

Any hints or pointers would be appreciated,

Thanks,

Edsko
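One standard answer (my sketch, not from the thread) is the showsPrec technique: pass down the precedence of the enclosing context, and bracket a node only when the context binds more tightly than the node itself. Associativity is handled by printing the "good" side at the operator's own precedence and the other side one level higher. The Expr type and precedence levels below are illustrative assumptions:

```haskell
data Expr = Lit Integer
          | Add Expr Expr   -- left associative,  precedence 6
          | Mul Expr Expr   -- left associative,  precedence 7
          | Pow Expr Expr   -- right associative, precedence 8

pretty :: Expr -> String
pretty e = go 0 e ""
  where
    -- d is the precedence of the enclosing context.
    go :: Int -> Expr -> ShowS
    go _ (Lit n)   = shows n
    go d (Add l r) = paren (d > 6) (go 6 l . showString " + " . go 7 r)
    go d (Mul l r) = paren (d > 7) (go 7 l . showString " * " . go 8 r)
    go d (Pow l r) = paren (d > 8) (go 9 l . showString " ^ " . go 8 r)

    paren True  s = showChar '(' . s . showChar ')'
    paren False s = s
```

This prints e.g. `Mul (Add (Lit 1) (Lit 2)) (Lit 3)` as `(1 + 2) * 3` but `Add (Lit 1) (Mul (Lit 2) (Lit 3))` as `1 + 2 * 3`, and right-nested Pow without brackets.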
[Haskell-cafe] Re: [Haskell] Fixpoint combinator without recursion
> Yeah, it's rather cool. IIRC, this style of encoding of recursion
> operators is attributed to Morris.

Do you have a reference?

> Before the advent of equality coercions, GHC typically had problems
> generating code for these kinds of definitions. Did you test this with
> a release version? If so, how did you get around the code-generation
> problem? Is it the NOINLINE pragma that does the trick?

Yep. Without the NOINLINE pragma, ghc tries to inline the definition of fac, expanding it ad infinitum (this is a known bug in ghc and mentioned in the ghc manual). Hugs doesn't have a problem, though.

Edsko
[Haskell-cafe] Re: [Haskell] Fixpoint combinator without recursion
On Wed, Apr 04, 2007 at 11:05:51PM +0200, Stefan Holdermans wrote:
> Edsko,
>
> > > Yeah, it's rather cool. IIRC, this style of encoding of recursion
> > > operators is attributed to Morris.
> >
> > Do you have a reference?
>
> James H. Morris. Lambda calculus models of programming languages.
> Technical Report MIT-LCS//MIT/LCS/TR-57, Massachusetts Institute of
> Technology, 1968.

Aah, I guess that's a bit old to be available online :) Does he talk about negative datatypes though? The Y combinator itself isn't really the point of my little exercise; it's more that I can code the Y combinator without making any recursive calls (in fact, there aren't any recursive calls in that program at all).

Edsko
[Haskell-cafe] Re: [Haskell] Fixpoint combinator without recursion
On Wed, Apr 04, 2007 at 11:15:25PM +0200, Stefan Holdermans wrote:
> Edsko,
>
> > > James H. Morris. Lambda calculus models of programming languages.
> > > Technical Report MIT-LCS//MIT/LCS/TR-57, Massachusetts Institute of
> > > Technology, 1968.
> >
> > Aah, I guess that's a bit old to be available online :) Does he talk
> > about negative datatypes though? The Y combinator itself isn't really
> > the point of my little exercise; it's more that I can code the Y
> > combinator without making any recursive calls (in fact, there aren't
> > any recursive calls in that program at all).
>
> If I recall correctly that's exactly what he demonstrates, i.e., that
> fixed-point combinators can be encoded without value-level recursion,
> but by instead making use of types that are contravariantly recursive.

I see. Thanks for the reference! Must try to dig that up (the MIT publication database appears to be offline at the moment).

Thanks again,

Edsko
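For readers of the archive: the construction this thread is discussing looks roughly as follows (my reconstruction, not Edsko's exact code). The only recursion is in the datatype, which mentions itself in a negative (argument) position; no value-level definition is recursive:

```haskell
-- Rec occurs to the left of the arrow: a contravariantly recursive type.
newtype Rec a = In { out :: Rec a -> a }

-- A fixpoint combinator with no recursive calls anywhere: the recursion
-- comes entirely from self-application through In/out.
y :: (a -> a) -> a
y f = g (In g)
  where g x = f (out x x)

-- As discussed above, GHC may try to inline such definitions forever.
{-# NOINLINE fac #-}
fac :: Integer -> Integer
fac = y (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
```

Here `fac 5` evaluates to 120 even though neither y nor fac is defined recursively.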
[Haskell-cafe] Search monad
Hey,

I have a structure containing Xs in various places, like so

  data X
  data Structure = Structure .. [X] .. [X] ..

And I defined mapStructure

  mapStructure :: (X -> X) -> (Structure -> Structure)

I then wanted to use mapStructure to define queries as well as transformations on structures. I generalized mapStructure to mapStructureM:

  mapStructureM :: Monad m => (X -> m X) -> (Structure -> m Structure)

and then defined the following search monad:

  data Search f a b = Found (f a) b

  class Monad (m a) => SearchMonad m a where
    found :: a -> m a a

  fromSearch :: Search f a b -> f a
  fromSearch (Found a _) = a

  search :: SearchMonad m a => (a -> Bool) -> a -> m a a
  search f a | f a       = found a
             | otherwise = return a

Instances of the monad for finding one and for finding all elements:

  instance SearchMonad (Search Maybe) a where
    found a = Found (Just a) a

  instance SearchMonad (Search []) a where
    found a = Found [a] a

  instance Monad (Search Maybe a) where
    return b = Found Nothing b
    Found (Just a) a' >>= f = case f a' of
                                Found _ b -> Found (Just a) b
    Found Nothing a'  >>= f = f a'

  instance Monad (Search [] a) where
    return b = Found [] b
    Found as a' >>= f = case f a' of
                          Found as' b -> Found (as ++ as') b

Here is a simple sample session with ghci:

  *Util> fromSearch $ mapM (search even) [1,3,5] :: Maybe Int
  Nothing
  *Util> fromSearch $ mapM (search even) [1,2,3,4,5] :: Maybe Int
  Just 2
  *Util> fromSearch $ mapM (search even) [1,2,3,4,5] :: [Int]
  [2,4]

What I'm wondering about is if this monad is an instance of a more general monad (if so, which one?), and generally if people have any comments about these definitions.

Thanks!

Edsko
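One plausible answer to the closing question (my observation, not from the thread): Search f a b pairs a result b with an accumulating f a, which is the shape of a Writer monad over a monoid -- the list monoid for "all matches", and a first-match monoid (Data.Monoid's First) for "one match". A self-contained sketch with a minimal Writer (names W, runW, and the reworked search are my inventions):

```haskell
import Data.Monoid (First(..))

-- A minimal Writer monad: a computation that also accumulates a monoid.
newtype W w a = W { runW :: (w, a) }

instance Functor (W w) where
  fmap f (W (w, a)) = W (w, f a)

instance Monoid w => Applicative (W w) where
  pure a = W (mempty, a)
  W (w1, f) <*> W (w2, a) = W (w1 <> w2, f a)

instance Monoid w => Monad (W w) where
  W (w1, a) >>= f = let W (w2, b) = f a in W (w1 <> w2, b)

tell :: w -> W w ()
tell w = W (w, ())

-- Record the element in the monoid whenever the predicate holds; the choice
-- of wrap (singleton list vs. First . Just) picks "all" vs. "first".
search :: Monoid w => (a -> w) -> (a -> Bool) -> a -> W w a
search wrap p a = (if p a then tell (wrap a) else pure ()) >> pure a
```

Then `fst (runW (mapM (search (:[]) even) [1,2,3,4,5]))` gives `[2,4]`, and with `First . Just` as the wrapper the accumulated monoid is `First (Just 2)`, mirroring the two Search instances.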