Re: [Haskell] Re: package mounting
> What about packages with multiple module trees like, say, Cabal?

That's a good question, and I think the right answer is not to do
anything special to support them. I assume that what you're referring
to with Cabal is that there is no common prefix for all of the module
names, but rather a small set of common prefixes (Distribution.*,
Language.Haskell.Extension). Under my proposal, if we want to get rid
of the 'Distribution' module prefix within the Cabal source code, then
we'll have to either rename the Language.Haskell.Extension module, or
move it to another package.

Best,

Frederik

--
http://ofb.net/~frederik/

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell
[Haskell] Re: package mounting
On Sun, Oct 29, 2006 at 10:03:32PM -0400, Samuel Bronson wrote:
> On 10/25/06, Frederik Eaton <[EMAIL PROTECTED]> wrote:
> > http://hackage.haskell.org/trac/ghc/wiki/PackageMounting
>
> It looks nice, but don't you think the -package-base flag ought to
> take both the package name *and* the mountpoint?

My intention was that -package-base specifies a base for the package
specified in the preceding -package flag, but I'll clarify it in the
document. In other words, it is an optional argument, and the syntax is

    ghc ... -package PACKAGE -package-base BASE ...

(giving package PACKAGE the mount point BASE).

> Otherwise, this looks like what I've wanted all along, if only I knew
> it ;-).

Excellent, thanks.

Frederik

--
http://ofb.net/~frederik/
[Haskell] package mounting
Hi lists,

I recently read Simon Peyton Jones' proposal:

    http://hackage.haskell.org/trac/ghc/wiki/GhcPackages

and disagreed with some of the design decisions. (To be fair, the
aspects I disagree with are shared with most or all of the other
proposals.) So I've put an alternative proposal here:

    http://hackage.haskell.org/trac/ghc/wiki/PackageMounting

Discussion is welcome (but only on the libraries@ list, please).
Hopefully we can weigh the benefits of each proposal, reach an
agreement, and write up a more formal specification within the next few
months or so, provided that people decide this is an extension we want
to implement.

Cheers,

Frederik

P.S. I'm sorry that I missed Sven's earlier package mounting thread in
July, but I hope the above article covers what my response would have
been.

--
http://ofb.net/~frederik/
Re: [Haskell] thread-local variables
> Furthermore, can we move this thread from the Haskell mailing list
> (which should not have heavy traffic) to either Haskell-Café, or
> the libraries list?

Sure, moving to haskell-cafe.

Frederik

--
http://ofb.net/~frederik/
Re: [Haskell] thread-local variables
On Tue, Aug 08, 2006 at 04:21:06PM +0300, Einar Karttunen wrote:
> On 07.08 13:16, Frederik Eaton wrote:
> > > How would this work together with the FFI?
> >
> > It wouldn't, at least I wouldn't care if it didn't.
>
> Suddenly breaking libraries that happen to use FFI behind your
> back does not seem like a good conservative extension.

FFI already doesn't mix well with GHC's IO handles. What if I write to
file descriptor 1 before all data in stdout has been flushed? Is that a
reason not to allow FFI?

> I think we should move the discussion to the wiki as Simon
> suggested. I can create a wiki page if you don't want to.
>
> http://haskell.org/haskellwiki/Thread_local_storage

I think the wiki is a good place for proposals, but not for most
discussion.

> > What about my example:
> >
> >     newMain host environment program_args
> >             network_config locale terminal_settings
> >             stdin stdout stderr = do
> >         ...
> >
> > Now, let's see. We might want two threads to have the same network
> > configuration, but a different view of the filesystem; or the same
> > view of the filesystem, but a different set of environment
> > variables; or the same environment, but different command line
> > arguments. All three cases are pretty common in practice. We might
> > also want to have the same arguments but different IO handles - as
> > in a multi-threaded server application.
>
> This won't be pretty even with TLS. Our fancy app will probably mix
> in STM and pass callback actions to the thread processing packets
> coming directly from the network interface. Quickly the TLS approach
> seems problematic - we need to know what actions depend on each other
> and how.

I don't understand. Does TLS make such a design harder or easier?

> > And the part that implements the filesystem might want to access
> > the network (if there is a network filesystem). And the part that
> > starts processes with an environment might want to access the
> > filesystem, for instance to read the code for the process and for
> > shared libraries; and maybe it also wants to get the hostname from
> > the network layer. And the part that starts programs with arguments
> > might want to access the environment (for instance, to get the
> > current locale), as well as the filesystem (for instance, to read
> > locale configuration files). And the part that accesses the IO
> > handles might also want to access not just the program arguments
> > but the environment, and the filesystem, and the network.
>
> So we have the following dependencies:
>
>     FileSystem  -> Network
>     Environment -> FileSystem, Network
>     Arguments   -> Environment, FileSystem
>     IO Handles  -> Arguments, Environment, FileSystem, Network
>
> With TLS every one of them has type IO. Now the programmer is
> supposed to know that he has to configure the network before using
> program arguments? So a programmer first wanting to process command
> line arguments and only then configuring the network will probably
> have hidden bugs.

The running example is an example of an executable starting in an
operating system. So everything is already configured by the time it
starts, as you know. My application will be no different - for
instance, the database-related parameter will be set; then a request
thread will start, and after parsing the request, a user-id parameter
will be set, and then the request-processing functions will be called.
There is no reason for the main server thread to call any of the
request-processing functions, because it doesn't have a request to
process.

> It becomes very hard to know what different components depend on.
>
> Even if we had to define all those instances that would be
> 1+2+1+3 = 7 instance declarations. Not 5^2 = 25 instances.
> Or use small wrapper combinators (which I prefer).

O(x) doesn't mean "same as x".

> btw how would the TLS solution elegantly handle that I'd like
> separate network configurations for e.g.
>     IO Handle -> Network (socket)  and
>     IO Handle -> FileSystem (NFS) -> Network
> ?

The filesystem could send its actions to be executed in a separate
thread, which has its own configuration?

> > So here is an example where we have nested layers, and each layer
> > accesses most of the layers below it.
>
> And this will cause problems. A good API should not encourage going
> to the lower levels directly. If the lowest level changes then with
> your design one has to make O(layers) changes instead of O(1) if the
> layers are not available directly.

No, you just write a compatibility wrapper over the new implementation.

> If one of the layers adds
Re: [Haskell] thread-local variables
Hi Simon,

It is good that you support thread-local variables. I have initialized
a wiki page:

    http://haskell.org/haskellwiki/Thread_local_storage

The main difference between my proposal and yours, as I see it, is
that your proposal is based on "keys" which can be used for other
things. I think that leads to an interface which is less natural. In
my proposal, the IOParam type is quite similar to an IORef - it has a
user-specified initial state, and the internal implementation is
hidden from the user - yours differs in both of these aspects.

> * I agree with Robert that a key issue is initialisation. Maybe it
> should be possible to associate an initialiser with a key. I have not
> thought this out.

I still don't understand this, so it is not mentioned on the wiki.

> * A key issue is this: when forking a thread, does the new thread
> inherit the current thread's bindings, or does it get a
> freshly-initialised set. Sometimes you want one, sometimes the other,
> alas.

I think the inheritance semantics are more useful and also more
general. If I wanted a freshly-initialized set of bindings, and I only
had inheritance semantics, then I could start a thread early on, while
all the bindings are in their initial state, have this thread read
actions from a channel and execute them in sub-threads of itself, and
implement a 'fork' variant based on this. More generally, I could do
the same thing from a sub-thread of the main thread - I could start a
thread with any set of bindings, and use it to launch other threads
with those bindings. In this way, the "initial" set of bindings is not
specially privileged over intermediate sets of bindings.

> On the GHC front, we're going to be busy with 6.6 etc until after
> ICFP, so nothing is going to happen fast -- which gives an
> opportunity to discuss it. However it's just infeasible for the
> community at large to follow a long email thread like this one. My
> suggestion would be for the interested parties to proceed somewhat as
> we did with packages.
> (http://hackage.haskell.org/trac/ghc/wiki/GhcPackages)

I have put a page on the wiki summarizing the thread. However, I want
to say that I think email is a better medium for most ongoing
discussions. (I'm not sure if I may have suggested the opposite
earlier.) For those who are not interested in a discussion, it should
be easy in most mail readers to ignore or hide a long thread, or to
skip to the very end of it to get a rough idea of where things stand.

I think it is a good idea to have proposals on a wiki, though, so that
the product of all agreed-upon amendments and alterations can be
easily referred to. When discussions happen on a wiki, they often take
the same threaded form as email discussions (see Wikipedia) - but they
are seen by fewer interested people, and the interface is clumsier.
For instance, I can subscribe to email notification when a wiki page
changes - thanks to whoever finally made this possible on haskell.org,
by the way - but I have to read the updated version to figure out
whether the modification was replying to me or to another poster,
whereas my mail reader clearly flags messages where I appear in the
recipients list.

Frederik

--
http://ofb.net/~frederik/
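The launcher construction described above can be sketched concretely.
No actual TLS is involved in this sketch (the point is only the
control structure: a thread started early reads actions from a channel
and forks each one, so under inheritance semantics the forked threads
would see the launcher's bindings rather than the caller's); all names
are hypothetical:

```haskell
import Control.Concurrent

-- A launcher: started while the (hypothetical) thread-local bindings
-- are still in their initial state. It returns an action that hands
-- work to the launcher thread, which forks a sub-thread for each job.
newLauncher :: IO (IO () -> IO ())
newLauncher = do
  ch <- newChan
  _  <- forkIO (launcherLoop ch)
  return (writeChan ch)

-- Read actions from the channel forever, forking each one as a
-- sub-thread of the launcher (so each would inherit its bindings).
launcherLoop :: Chan (IO ()) -> IO ()
launcherLoop ch = do
  act <- readChan ch
  _   <- forkIO act
  launcherLoop ch
```

The 'fork' variant mentioned in the text is then just the function
returned by newLauncher.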
Re: [Haskell] thread-local variables
On Sun, Aug 06, 2006 at 01:36:15PM +0300, Einar Karttunen wrote:
> On 06.08 02:41, Frederik Eaton wrote:
> > Also, note that my proposal differs in that thread-local variables
> > are not writable, but can only be changed by calling (e.g. in my
> > API) 'withIOParam'. This is still just as general, because an IORef
> > can be stored in a thread-local variable, but it makes it easier to
> > reason about the more common use case where TLS is used to make IO
> > a Reader; and it makes it easier to share modifiable state across
> > more than one thread. I.e. if modifiable state is stored as
> > 'IOParam (IORef a)' then the default is for the stored 'IORef a' to
> > be shared across all threads; it can only be changed "locally" for
> > a specified action and any sub-threads using 'withIOParam'; and if
> > some library I use decides to fork a thread behind the scenes, it
> > won't change my program's behavior.
>
> Perhaps a function like this would solve all our problems:
>
>     -- | Tie all TLS references in the IO action to the current
>     -- environment rather than the environment it will actually
>     -- be executed in.
>     tieToCurrentTLS :: IO a -> IO (IO a)

"Our" problems? :) Well, it should be easy to implement. I think it's
a good idea.

> > I think it is a good idea to have stdin, cwd, etc. be thread-local.
>
> How would this work together with the FFI?

It wouldn't; at least, I wouldn't care if it didn't.

> > I don't understand why the 'TL' monad is necessary, but I haven't
> > read the proposal very carefully.
>
> The TL monad is necessary to make initialization order problems go
> away.

That's what it seemed like the intended purpose was, but I don't see
any initialization order problems in my proposal.

> On 05.08 19:56, Frederik Eaton wrote:
> > That doesn't answer the question: What if my application has a need
> > for several different sets of parameters - what if it doesn't make
> > sense to combine them into a single monad? What if there are 'n'
> > layers? Is it incorrect to say that the monadic approach requires
> > code size O(n^2)?
>
> A well-designed monadic approach does not require O(n^2). But if you
> want to design code in a way that requires O(n^2) code size you can
> do it.
>
> Parallel layers require O(layers).
> Nested layers hiding the lower layer need O(layers).
>
> This is not a problem in practice and makes refactoring very easy.

Is that true? I would be very careful when making generalizations
about all software design. What about my example:

    newMain host environment program_args
            network_config locale terminal_settings
            stdin stdout stderr = do
        ...

Now, let's see. We might want two threads to have the same network
configuration, but a different view of the filesystem; or the same
view of the filesystem, but a different set of environment variables;
or the same environment, but different command line arguments. All
three cases are pretty common in practice. We might also want to have
the same arguments but different IO handles - as in a multi-threaded
server application.

And the part that implements the filesystem might want to access the
network (if there is a network filesystem). And the part that starts
processes with an environment might want to access the filesystem, for
instance to read the code for the process and for shared libraries;
and maybe it also wants to get the hostname from the network layer.
And the part that starts programs with arguments might want to access
the environment (for instance, to get the current locale), as well as
the filesystem (for instance, to read locale configuration files). And
the part that accesses the IO handles might also want to access not
just the program arguments but the environment, and the filesystem,
and the network.

So here is an example where we have nested layers, and each layer
accesses most of the layers below it:

    kernel (networking, devices)
    filesystem
    linker
    libc
    application

If we started with a library that dealt with OS devices such as the
network, and used a special monad for that; and then if we built upon
that a layer for keeping track of environment variables, with another
monad; and then a layer for invoking executables with arguments; and
then a layer for IO; all with monads - then we would have a good
modular, extensible design, which, due to the interactions between
layers, would, in Haskell, require code length which is quadratic in
the number of layers. (Of course, it's true that in real operating
systems, each of these layers has its own set of interfaces to the
other layers - so the monadic approach is actually not more verbose.
But the point is that it
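The tieToCurrentTLS function quoted above is indeed easy to express on
top of any read/override interface. As an illustration only, here is a
single thread-local cell emulated with a ThreadId-keyed map, plus the
tie combinator; all names are hypothetical, the override is
deliberately not restored afterwards, and inheritance by sub-threads
is ignored:

```haskell
import Control.Concurrent (ThreadId, myThreadId, forkIO)
import Control.Concurrent.MVar
import Data.IORef
import qualified Data.Map as Map

-- A single emulated thread-local cell with a default value.
data TLS a = TLS a (IORef (Map.Map ThreadId a))

newTLS :: a -> IO (TLS a)
newTLS def = fmap (TLS def) (newIORef Map.empty)

-- Threads with no local override see the default.
getTLS :: TLS a -> IO a
getTLS (TLS def ref) = do
  tid <- myThreadId
  fmap (Map.findWithDefault def tid) (readIORef ref)

setTLS :: TLS a -> a -> IO ()
setTLS (TLS _ ref) v = do
  tid <- myThreadId
  atomicModifyIORef ref (\m -> (Map.insert tid v m, ()))

-- Tie the cell's value in an action to the *current* thread's value,
-- rather than the value of whatever thread eventually runs it.
tieToCurrentTLS :: TLS a -> IO b -> IO (IO b)
tieToCurrentTLS tls act = do
  v <- getTLS tls
  return (setTLS tls v >> act)
```

Running the tied action in a freshly forked thread makes it see the
capturing thread's value instead of the cell's default.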
Re: [Haskell] thread-local variables
> > Here is a naive and dirty implementation. The largest problem is
> > that TypeRep is not in Ord. An alternative approach using Dynamic
> > would be possible, but I like the connection between the key and
> > the associated type.
> >
> > http://www.cs.helsinki.fi/u/ekarttun/haskell/TLS/
> >
> > Not optimized for performance at all.
>
> You've redefined 'fork'. If I want a library which works with other
> libraries, that will not be an option. The original purpose of my
> posting to this thread was to ask for two standard functions which
> would let me define thread-local variables in a way which is
> interoperable with other libraries, to the same extent as 'withArgs'
> and 'withProgName' are.

I also forgot to mention that if you hold on to a ThreadId, it
apparently causes the whole thread to be retained. Simon Marlow
explained this on 2005/10/18:

m> One could argue that getting the parent ThreadId is something that
m> should be supported natively by forkIO, and I might be inclined to
m> agree. Unfortunately there are some subtleties: currently a ThreadId
m> is represented by a pointer to the thread itself, which causes the
m> thread to be kept alive. This has implications not only for space
m> leaks, but also for reporting deadlock: if you have a ThreadId for a
m> thread, you can send it an exception with throwTo at any time, and
m> hence the runtime can never determine that the thread is deadlocked,
m> so it will never get the NonTermination exception. Perhaps we need
m> two kinds of ThreadId: a weak one for use in Maps, and a strong one
m> that you can use with throwTo. But then building a Map in which some
m> elements can be garbage collected is a bit tricky (it can be done
m> though; see our old Memo table implementation in
m> fptools/hslibs/util/Memo.hs).

So this is another problem with your implementation, and another
reason why I want TLS support in the standard libraries.

Frederik

--
http://ofb.net/~frederik/
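For what it's worth, the "weak ThreadId" Marlow describes did later
materialize: newer GHC versions export mkWeakThreadId from GHC.Conc,
yielding a weak reference that does not keep the thread alive, while
the ordinary ThreadId remains the strong, throwTo-capable kind. A
small demonstration (assuming a GHC recent enough to provide
mkWeakThreadId):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import GHC.Conc (mkWeakThreadId)
import System.Mem.Weak (deRefWeak)

-- Take a weak reference to a running thread: deRefWeak yields Just
-- the ThreadId while the thread is alive, and a Map of such weak
-- references would not pin its threads or defeat deadlock detection.
weakDemo :: IO Bool
weakDemo = do
  started <- newEmptyMVar
  stop    <- newEmptyMVar
  tid <- forkIO (putMVar started () >> takeMVar stop)
  takeMVar started                 -- wait until the thread is running
  wtid  <- mkWeakThreadId tid
  alive <- deRefWeak wtid          -- Just tid: the thread is alive
  putMVar stop ()                  -- let the thread finish
  return (maybe False (const True) alive)
```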
Re: [Haskell] thread-local variables
Hi Robert,

I looked over your proposal. I'm not sure I'm in favor of introducing
a new keyword; it seems unnecessary.

Also, note that my proposal differs in that thread-local variables are
not writable, but can only be changed by calling (e.g. in my API)
'withIOParam'. This is still just as general, because an IORef can be
stored in a thread-local variable, but it makes it easier to reason
about the more common use case where TLS is used to make IO a Reader;
and it makes it easier to share modifiable state across more than one
thread. I.e. if modifiable state is stored as 'IOParam (IORef a)' then
the default is for the stored 'IORef a' to be shared across all
threads; it can only be changed "locally" for a specified action and
any sub-threads using 'withIOParam'; and if some library I use decides
to fork a thread behind the scenes, it won't change my program's
behavior.

I think it is a good idea to have stdin, cwd, etc. be thread-local.

I don't understand why the 'TL' monad is necessary, but I haven't read
the proposal very carefully.

Best,

Frederik

On Sat, Aug 05, 2006 at 02:18:58PM -0400, Robert Dockins wrote:
> Sorry to jump into this thread so late. However, I'd like to take a
> moment to remind everyone that some time ago I put a concrete
> proposal for thread-local variables on the table.
>
> http://article.gmane.org/gmane.comp.lang.haskell.cafe/11010
>
> I believe this proposal addresses the initialization issues that
> Einar has been discussing. In my proposal, thread-local variables
> always have some defined value, and they obtain their values at
> well-defined points.
>
> The linked message also gives several use cases that I felt motivated
> the proposal.
>
> --
> Rob Dockins
>
> Talk softly and drive a Sherman tank.
> Laugh hard, it's a long way to the bank.
> -- TMBG

--
http://ofb.net/~frederik/
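To make the shape of the proposed API concrete, here is a rough
emulation of the hypothetical IOParam type in ordinary Haskell, using
a ThreadId-keyed map. This sketch omits two things the proposal
implies: inheritance of overrides by sub-threads, and exception safety
in withIOParam.

```haskell
import Control.Concurrent (ThreadId, myThreadId)
import Data.IORef
import qualified Data.Map as Map

-- Hypothetical IOParam: a thread-local, read-only parameter with a
-- user-specified default value, like an IORef whose contents can only
-- be overridden locally.
data IOParam a = IOParam a (IORef (Map.Map ThreadId a))

newIOParam :: a -> IO (IOParam a)
newIOParam def = fmap (IOParam def) (newIORef Map.empty)

-- Read the parameter; threads with no local override see the default.
getIOParam :: IOParam a -> IO a
getIOParam (IOParam def ref) = do
  tid <- myThreadId
  m   <- readIORef ref
  return (Map.findWithDefault def tid m)

-- Override the parameter for the given action only (in this thread;
-- a real implementation would also cover sub-threads and exceptions).
withIOParam :: IOParam a -> a -> IO b -> IO b
withIOParam p@(IOParam _ ref) v act = do
  tid <- myThreadId
  old <- getIOParam p
  atomicModifyIORef ref (\m -> (Map.insert tid v m, ()))
  r <- act
  atomicModifyIORef ref (\m -> (Map.insert tid old m, ()))
  return r
```

Storing an 'IOParam (IORef a)' then gives the shared-by-default,
locally-overridable behavior described above.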
Re: [Haskell] thread-local variables
> > > As said before the monadic approach can be quite clean. I
> > > haven't used implicit parameters that much, so I won't comment on
> > > them.
> >
> > Perhaps you can give an example? As I said, a single monad won't
> > suffice for me, because different libraries only know about
> > different parts of the state. With TLS, one can delimit the scope
> > of parameters by making the references to them module-internal, for
> > instance.
> >
> > With monads, I imagine that I'll need for each parameter
> >
> > (1) a MonadX class, with a liftX member
> > (2) a catchX function
> > (3) a MonadY instance, for each wrapped monad Y (thus the number of
> >     such instances will be O(n^2) where n is the number of
> >     parameters)
>
> That is usually the wrong approach. Newtype something like
> "StateT AppState IO". Use something like:
>
>     runWithPart :: (AppState -> c) -> (c -> IO a) -> AppM a
>
> to define nice actions for different parts of the libraries.
>
> Usually this is very easy if one uses combinators and high-level
> constructs, and messier if it is hard to find the right combinators.
>
> If you look at the various web frameworks in Haskell you will notice
> that most of them live happily with one monad and don't suffer from
> problems because of that.

That doesn't answer the question: What if my application has a need
for several different sets of parameters - what if it doesn't make
sense to combine them into a single monad? What if there are 'n'
layers? Is it incorrect to say that the monadic approach requires code
size O(n^2)?

> > With TLS, I need
> >
> > (1) a declaration "x = unsafePerformIO $ newIOParam ..."
>
> And you don't have any static guarantees that you have done all the
> proper initialization calls before you use them.

Well, there are a lot of things I don't have static guarantees for.
For instance, sometimes I call the function 'head', and the compiler
isn't able to verify that the argument isn't an empty list. If I
initialize my TLS to 'undefined' then I'll get a similar error
message, at run time. For another example, I don't use monadic regions
when I do file IO. I can live with that.

> ... Also if we have two pieces of the same per-thread state that we
> wish to use in one thread (e.g. db-connections) then the TLS approach
> becomes quite hard.

No harder than the monadic approach, in my opinion.

> Here is a naive and dirty implementation. The largest problem is that
> TypeRep is not in Ord. An alternative approach using Dynamic would be
> possible, but I like the connection between the key and the
> associated type.
>
> http://www.cs.helsinki.fi/u/ekarttun/haskell/TLS/
>
> Not optimized for performance at all.

You've redefined 'fork'. If I want a library which works with other
libraries, that will not be an option. The original purpose of my
posting to this thread was to ask for two standard functions which
would let me define thread-local variables in a way which is
interoperable with other libraries, to the same extent as 'withArgs'
and 'withProgName' are.

Frederik

--
http://ofb.net/~frederik/
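Einar's runWithPart suggestion can be sketched without committing to a
particular transformer stack by treating the application monad as a
plain reader over the state record (AppState and its fields are
hypothetical, and his suggested newtype over "StateT AppState IO" is
simplified to a function type here):

```haskell
-- Hypothetical application state holding the various parameters.
data AppState = AppState { dbConn :: String, logName :: String }

-- Simplified stand-in for Einar's newtyped "StateT AppState IO".
type AppM a = AppState -> IO a

-- Run an IO action that needs only one part of the state, selected by
-- a projection; each library sees only the part it asks for, which is
-- how one monad can serve several independent parameter sets.
runWithPart :: (AppState -> c) -> (c -> IO a) -> AppM a
runWithPart select k st = k (select st)

-- Example: an action that only needs the database connection string.
pingDb :: AppM Int
pingDb = runWithPart dbConn (\conn -> return (length conn))
```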
Re: [Haskell] thread-local variables
> > Maybe I'm misunderstanding your position - maybe you think that I
> > should use lots of different processes to segregate global state
> > into separate contexts? Well, that's nice, but I'd rather not. For
> > instance, I'm writing a server - and it's just not efficient to use
> > a separate process for each request. And there are some things such
> > as database connections, current user id, log files, various
> > profiling data, etc., that I would like to be thread-global but not
> > process-global.
>
> I have done many servers in Haskell. Usually I have threads allocated
> to specific tasks rather than specific requests.
>
> What guarantees does your code have that all the relevant parameters
> are already initialized - and how can a user of the code know which
> TLS variables need to be initialized?

You could ask the same questions about process-global state, couldn't
you?

> If it is documented maybe it could be done at the level of an
> implicit parameter?

Do you think implicit parameters are better than TLS?

> > Or maybe you think that certain types of global state should be
> > privileged - for instance, that all of the things which are
> > arguments to 'newMain' above are OK to have as global state, but
> > that anything else should be passed as function arguments, thus
> > making thread-localization moot. I disagree with this - I am a
> > proponent of extensibility, and think that the language should make
> > as few things as possible "built-in". I want to define my own
> > application-specific global state, and, additionally, I want to
> > have it thread-global, not process-global.
>
> This can cause much fun with the FFI. If we change e.g. stdout to be
> thread-specific, what should we do before each foreign call? Same
> with the other things that are related to the OS process in question.
>
> A thread is a context of execution while a process is a context for
> resources. Would you like to have multiple Haskell processes inside
> one OS process?

If you want to think of it that way, then sure.

> I don't consider these very different:
> 1) use one thread from a pre-allocated pool to do a task
> 2) fork a new thread to do the task
>
> With TLS they are vastly different.

If you don't consider them different, then you can start using (2)
instead of (1).

> > You asked for an example, but, because of the nature of this topic,
> > it would have to be a very large example to prove my point.
> > Thread-local variables are things that only become really useful in
> > large programs. Instead, I've asked you to put yourself in my shoes
> > - what if the bits of context that you already take for granted in
> > your programs had to be thread-local? How would you cope, without
> > thread-local variables, in such a situation?
>
> I have been using an application-specific monad (a newtyped
> transformer) and a clean set of functions, so that the implementation
> is not hardcoded and can be changed easily. Thus I haven't had the
> same difficulties as you.
>
> I don't think many of the process-global resources would make sense
> on a per-thread basis, and I am not against all global state.

You say "many", but the question is "are there any".

> > > But I would say that I think I would find having to know what
> > > thread a particular bit of code was running in in order to "grok
> > > it" very strange,
> >
> > I agree that it is important to have code which is easy to
> > understand.
> >
> > Usually, functions run in the same thread as their caller, unless
> > they are passed to something with the word 'fork' in the name.
> > That's a good rule of thumb that is in fact sufficient to let you
> > understand the code I write. Also, if that's too much to remember,
> > then since I'm only proposing and using non-mutable thread-local
> > state (i.e. it behaves like a MonadReader), and since I'm not
> > passing actions between threads as Einar is, you can forget about
> > the 'fork' caveat.
>
> The only problem appears when someone uses two libraries, one written
> by me and another written by you, and wonders "why is my program
> failing in mysterious ways".

Can you give the API for your library? I have a hard time imagining
how it could not be obvious that a thread pool is being used.

> > I think the code would in fact be more difficult to "grok" if all
> > of the things which I want to be thread-local were instead passed
> > around as parameters, a la 'newMain'. This is simply because, in
> > that scenario, there would be much more code to read, and it would
> > be very repetitive. If I used special monads for my state, then the
> > situation would be only slightly better - a single monad would not
> > suffice, and I'd be faced with a plethora of 'lift' functions and
> > redefinitions of 'catch', as well as long type signatures and a
> > crowded namespace.
>
> As said before the monadic approach can be quite clean. I haven't
> used implicit parameters that much, so I won't comment on them.

Perhaps you can
Re: [Haskell] thread-local variables
> As for the subject under discussion (thread local state), I am
> personally sceptical about it. Why do we need it? Are we talking
> about safety or just convenience/API elegance? I've never encountered
> a situation where I've needed thread local state (but this does not
> necessarily make it evil :-)

OK. What if all Haskell processes, all over the world, were made into
threads in the same large process? There are a lot of things that are
currently "global" state - as in, process-global - which would have to
become non-global in some way; pretty much all interaction with the
world: file IO, networking, command line arguments, system
environment, etc.

You, Einar, and others seem to be arguing that the only way to make
these things non-global should be to either make them explicit
arguments to functions, or to have them appear explicitly in the type
of the application's primary monad. For instance, this simple program:

    main :: IO ()
    main = do
        putStrLn "Hello world"

might, in Adrian Hey and Einar Karttunen's world, become:

    newMain host environment program_args
            network_config locale terminal_settings
            stdin stdout stderr = do
        hPutStrLn stdout (defaultEncoding locale) "Hello world"

Now, some people might find this second version delightfully explicit,
but I'd have doubts about whether such people are actually trying to
get things done, or whether they see the language as an end in itself.
As for me, I prefer the first version - it saves reading and typing,
and is perfectly clear, and I have work to do.

Maybe I'm misunderstanding your position - maybe you think that I
should use lots of different processes to segregate global state into
separate contexts? Well, that's nice, but I'd rather not. For
instance, I'm writing a server - and it's just not efficient to use a
separate process for each request. And there are some things such as
database connections, current user id, log files, various profiling
data, etc., that I would like to be thread-global but not
process-global.

Or maybe you think that certain types of global state should be
privileged - for instance, that all of the things which are arguments
to 'newMain' above are OK to have as global state, but that anything
else should be passed as function arguments, thus making
thread-localization moot. I disagree with this - I am a proponent of
extensibility, and think that the language should make as few things
as possible "built-in". I want to define my own application-specific
global state, and, additionally, I want to have it thread-global, not
process-global.

You asked for an example, but, because of the nature of this topic, it
would have to be a very large example to prove my point. Thread-local
variables are things that only become really useful in large programs.
Instead, I've asked you to put yourself in my shoes - what if the bits
of context that you already take for granted in your programs had to
be thread-local? How would you cope, without thread-local variables,
in such a situation?

> But I would say that I think I would find having to know what thread
> a particular bit of code was running in in order to "grok it" very
> strange,

I agree that it is important to have code which is easy to understand.

Usually, functions run in the same thread as their caller, unless they
are passed to something with the word 'fork' in the name. That's a
good rule of thumb that is in fact sufficient to let you understand
the code I write. Also, if that's too much to remember, then since I'm
only proposing and using non-mutable thread-local state (i.e. it
behaves like a MonadReader), and since I'm not passing actions between
threads as Einar is, you can forget about the 'fork' caveat.

I think the code would in fact be more difficult to "grok" if all of
the things which I want to be thread-local were instead passed around
as parameters, a la 'newMain'. This is simply because, in that
scenario, there would be much more code to read, and it would be very
repetitive. If I used special monads for my state, then the situation
would be only slightly better - a single monad would not suffice, and
I'd be faced with a plethora of 'lift' functions and redefinitions of
'catch', as well as long type signatures and a crowded namespace.

> unless there was some obvious technical reason why the thread local
> state needed to be thread local (can't think of any such reason right
> now).

Some things are not immediately obvious. If you don't like to think of
reasons, then just take my word for it that it would help me. A
facility for thread-local variables would be just another of many
facilities that programmers could choose from when designing their
code. I'm not asking you to change the way you program - I don't care
how other people program. I trust them to know what is best for their
particular application. It's none of my business, anyway. Since Simon
Marlow said that he had been considering a thread-local variable
facility, I merely wanted
Re: [Haskell] thread-local variables (was: Re: Implicit Parameters)
On Mon, Jul 31, 2006 at 03:09:59PM +0300, Einar Karttunen wrote: > On 31.07 03:18, Frederik Eaton wrote: > > I don't think it's necessarily such a big deal. Presumably the library > > with the worker threads will have to be invoked somewhere. One should > > just make sure that it is invoked in the appropriate environment, for > > instance with the database connection already properly initialized. > > > > (*) One might even want to change the environment a little within each > > thread, for instance so that errors get logged to a thread-specific > > log file. > > So we have the following: > 1) the library is initialized and spawns worker thread Tw > 2) application initializes the database connection and it >is associated with the current thread Tc and all the children >it will have (unless changed) > 3) the application calls the library in Tc passing an IO action >to it. The IO action refers to the TLS thinking it is in >Tc where it is valid. > 4) the library runs the callback code in Tw where the TLS state is >invalid. This is even worse than a global variable in this case. If you have threads, and you have something which needs to be different among different threads, then it is hard for me to see how thread-local variables could be worse than global variables in any case at all. > Of course one can argue that the application should first initialize > the database handle. But if the app uses worker threads (spawned > before library initialization) then things will break if a library > uses TLS and callbacks and they end up running in threads created > before the library initialization. OK, sure. In certain situations you have to keep track of whether a function to which you pass an action might be sending it off to be run in a different thread. We've been over this. Perhaps users should be warned in the documentation - and in the documentation for exceptions as well. 
I really don't see that as a problem that would sneak up on people, since if you're passing an action to a function, rather than executing it yourself, then in most cases it should be clear to programmers that the action will run in a different context if not a different thread altogether. And if you want to force the context to be the same, you wrap the action in something restoring that context, just like you would have to do with your state transformer monad stack. Another way to write buggy code is to have it so bloated with extra syntax - e.g. with monad conversions, or extra function parameters, as you propose - that it becomes impossible to read and understand. Frederik -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
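The "wrap the action in something restoring that context" idea above can be made concrete. The following is a minimal sketch, not part of any proposal in this thread; all names (`Store`, `forkWithContext`, etc.) are hypothetical, and the store is a plain `ThreadId`-keyed map rather than real runtime support.

```haskell
import Control.Concurrent
import Control.Concurrent.MVar
import qualified Data.Map as M

-- A toy thread-local store of strings, keyed by ThreadId.
type Store = MVar (M.Map ThreadId String)

newStore :: IO Store
newStore = newMVar M.empty

setLocal :: Store -> String -> IO ()
setLocal store v = do
  t <- myThreadId
  modifyMVar_ store (return . M.insert t v)

getLocal :: Store -> IO (Maybe String)
getLocal store = do
  t <- myThreadId
  M.lookup t <$> readMVar store

-- Fork a thread that first re-installs the caller's binding, so the
-- action sees the same context it would have seen in the parent.
forkWithContext :: Store -> IO () -> IO ThreadId
forkWithContext store act = do
  v <- getLocal store        -- captured in the parent thread
  forkIO $ do
    maybe (return ()) (setLocal store) v
    act
```

This is exactly the discipline described above: the caller captures its context before handing the action off, instead of hoping the worker thread has the right state.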
Re: [Haskell] thread-local variables (was: Re: Implicit Parameters)
On Mon, Jul 31, 2006 at 03:54:29AM +0300, Einar Karttunen wrote: > On 30.07 11:49, Frederik Eaton wrote: > > No, because the thread in which it runs inherits any thread-local > > state from its parent. > > So we have different threads modifying the thread-local state? > If it is a copy then updates are not propagated. As I said, please read my code. There are no "updates". > What about a design with 10 worker threads taking requests > from a "Chan (IO ())" and running them (this occurs in real code). > To get things right they should use the TLS-context relevant > to each "IO ()" rather than the thread. I could see how either behavior might be desirable, see below. (*) > (snip) > Usually I just define one custom monad for the application which > wraps the stack of monad transformers. Thus changing the monad stack > does not affect the application code. A quite clean and efficient > solution. That does sound like a clean approach. However, I think that my approach would be cleaner; and in any case I think that both approaches should be available to the programmer. > My main objection to the TLS is that it looks like normal IO, > but changing the thread that evaluates it can break things in ways > that are hard to debug. E.g. we have an application that uses > TLS and passes an IO action to a library that happens to use > a pool of worker threads that invisible to the application. > Or the same with the role of the application and library reversed. I don't think it's necessarily such a big deal. Presumably the library with the worker threads will have to be invoked somewhere. One should just make sure that it is invoked in the appropriate environment, for instance with the database connection already properly initialized. (*) One might even want to change the environment a little within each thread, for instance so that errors get logged to a thread-specific log file. 
> Offering it up as a separate library should be ok as it would > be very easy to spot and take extra care not to cause problems. That's good to hear. Regards, Frederik -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] thread-local variables (was: Re: Implicit Parameters)
On Sun, Jul 30, 2006 at 12:35:42PM +0300, Einar Karttunen wrote: > On 29.07 13:25, Frederik Eaton wrote: > > I think support for thread-local variables is something which is > > urgently needed. It's very frustrating that using concurrency in > > Haskell is so easy and nice, yet when it comes to IORefs there is no > > way to get thread-local behavior. Furthermore, that one can make > > certain things thread-local (e.g. with withArgs, withProgName) makes > > the solution seem close at hand (although I can appreciate that it may > > not be). Yet isn't it just a matter of making a Map with existentially > > quantified values part of the state of each thread, just as the > > program name and arguments are also part of that state? > > Are thread local variables really a good idea in Haskell? Yes. > If variables are thread local how would this combinator work: Do read the code I posted. Please note I'm not suggesting that *all* variables be thread local, I was proposing a special data-type for that. > withTimeOut :: Int -> IO a -> IO a > withTimeOut tout op = mdo > mv <- newEmptyMVar > wt <- forkIO $ do try op >>= tryPutMVar mv >> killThread kt > kt <- forkIO $ do threadDelay tout > e <- tryPutMVar mv $ Left $ DynException $ toDyn > TimeOutException > if e then killThread wt else return () > either throw return =<< takeMVar mv > > > Would it change the semantics of the action as it is run in a > different thread (this is a must if there are potentially blocking FFI > calls). No, because the thread in which it runs inherits any thread-local state from its parent. > Now if the action changes the thread local state then > it behaves differently. Do we really want that? I'm not sure what you're suggesting. The API I proposed actually doesn't let users discover when their actions are running in sub-threads. (Can you write an example using that API?) However, even if it did, I don't see a problem. Do you think that we should get rid of 'myThreadId', for instance? I don't. 
> Usually one can just add a monad that wraps IO/STM and provides the > variables one needs. This has the good side of making scoping > explicit. That's easier said than done. Sometimes I take that route. But sometimes I don't want 5 different monads wrapping each other, each with its own 'lift' and 'catch' functions, making error messages indecipherable and code difficult to read and debug. Do you propose creating a special monad for file operations? For network operations? No? Then I don't see why I should have to make a special monad for database operations. Or, if the answer was "yes", then fine: obfuscate your own code, but please don't ask me to do the same. Let's support both ways of doing things, and we can be different. Frederik -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
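For contrast, the "one custom monad wrapping the stack" approach that Einar describes can be sketched with `ReaderT` from the transformers library (which ships with GHC). The `Env` record and function names here are hypothetical, chosen to match the database/logging examples in this thread.

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

-- Hypothetical application environment: the kind of context the thread
-- proposes making thread-local, carried here by a ReaderT instead.
data Env = Env { dbConn :: String, logName :: String }

type App = ReaderT Env IO

runApp :: Env -> App a -> IO a
runApp env app = runReaderT app env

-- Code in App reads the context without threading parameters by hand,
-- at the cost of the changed type signature that Frederik objects to.
describeConn :: App String
describeConn = do
  c <- asks dbConn
  l <- asks logName
  liftIO (return ())          -- plain IO is still available via liftIO
  return (l ++ " uses " ++ c)
```

This is the clean case with a single transformer; the 'lift'/'catch' plumbing Frederik complains about only appears once several transformers are stacked.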
[Haskell] thread-local variables (was: Re: Implicit Parameters)
Hi, Sorry to bring up this thread from so long ago. On Wed, Mar 01, 2006 at 11:53:42AM +, Simon Marlow wrote: > Ashley Yakeley wrote: > >Simon Marlow wrote: > >>Simon & I have discussed doing some form of thread-local state, which > >>covers many uses of implicit > >>parameters and is much preferable IMO. Thread-local state doesn't change > >>your types, and it > >>doesn't require passing any extra parameters at runtime. It works > >>perfectly well for the OS > >>example you give, for example. > >Interesting. What would that look like in code? > > No concrete plans yet. There have been proposals for thread-local variables > in the past on this > list and haskell-cafe, and other languages have similar features (eg. > Scheme's support for dynamic > scoping). Doing something along these lines is likely to be quite > straightforward to implement, > won't require any changes to the type system, and gives you a useful form of > implicit parameters > without any of the drawbacks. > > The main difference from implicit parameters would be that thread-local > variables would be > restricted to the IO monad. I think support for thread-local variables is something which is urgently needed. It's very frustrating that using concurrency in Haskell is so easy and nice, yet when it comes to IORefs there is no way to get thread-local behavior. Furthermore, that one can make certain things thread-local (e.g. with withArgs, withProgName) makes the solution seem close at hand (although I can appreciate that it may not be). Yet isn't it just a matter of making a Map with existentially quantified values part of the state of each thread, just as the program name and arguments are also part of that state?

import qualified Data.Map as M
import Data.Maybe
import Data.Unique
import Data.IORef
import Data.Typeable

-- only these 2 must be implemented:
withParams :: ParamsMap -> IO () -> IO ()
getParams :: IO ParamsMap
--
type ParamsMap = M.Map Unique Value
data Value = forall a . (Typeable a) => V a
type IOParam a = IORef (Unique, a)

newIOParam :: Typeable a => a -> IO (IOParam a)
newIOParam def = do
  k <- newUnique
  newIORef (k,def)

withIOParam :: Typeable a => IOParam a -> a -> IO () -> IO ()
withIOParam p value act = do
  (k,def) <- readIORef p
  m <- getParams
  withParams (M.insert k (V value) m) act

getIOParam :: Typeable a => IOParam a -> IO a
getIOParam p = do
  (k,def) <- readIORef p
  m <- getParams
  return $ fromMaybe def (M.lookup k m >>= (\ (V x) -> cast x))

Frederik

P.S. I sent a message about this a while back, when I was trying to implement my own version using ThreadId (not really a good approach). -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
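The sketch above leaves `withParams`/`getParams` as runtime primitives. To make it runnable without runtime support, one can substitute a single process-global `IORef` for the per-thread map; this loses the thread-locality (which is the whole point of the proposal) but lets the `IOParam` layer be type-checked and exercised. The stand-in below also generalizes `IO ()` to `IO a` so actions can return results; none of this is from the original post.

```haskell
{-# LANGUAGE ExistentialQuantification #-}
import qualified Data.Map as M
import Data.IORef
import Data.Maybe (fromMaybe)
import Data.Typeable
import Data.Unique
import System.IO.Unsafe (unsafePerformIO)

data Value = forall a . Typeable a => V a
type ParamsMap = M.Map Unique Value

-- Stand-in for the two runtime primitives: one process-global map.
-- (A real implementation would keep one map per thread, and this
-- save/restore is not exception-safe -- it is only a sketch.)
{-# NOINLINE paramsRef #-}
paramsRef :: IORef ParamsMap
paramsRef = unsafePerformIO (newIORef M.empty)

getParams :: IO ParamsMap
getParams = readIORef paramsRef

withParams :: ParamsMap -> IO a -> IO a
withParams m act = do
  old <- readIORef paramsRef
  writeIORef paramsRef m
  r <- act
  writeIORef paramsRef old
  return r

type IOParam a = IORef (Unique, a)

newIOParam :: Typeable a => a -> IO (IOParam a)
newIOParam def = do
  k <- newUnique
  newIORef (k, def)

withIOParam :: Typeable a => IOParam a -> a -> IO b -> IO b
withIOParam p value act = do
  (k, _) <- readIORef p
  m <- getParams
  withParams (M.insert k (V value) m) act

getIOParam :: Typeable a => IOParam a -> IO a
getIOParam p = do
  (k, def) <- readIORef p
  m <- getParams
  return $ fromMaybe def (M.lookup k m >>= \(V x) -> cast x)
```

With this stand-in, a binding made by `withIOParam` is visible inside the wrapped action and reverts afterwards, which is the dynamic-scoping behavior the post asks for.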
[Haskell] ANNOUNCE: An index-aware linear algebra library in Haskell
An index-aware linear algebra library in Haskell I've been exploring the implementation of a library for linear algebra, i.e. manipulating vectors and matrices and so forth, which has as a fundamental design goal the exposure of index types and ranges to the type system so that operand conformability can be statically guaranteed (e.g. an attempt to add two matrices of incompatible sizes, or multiply an NxK matrix by an MxL matrix where K /= M, is flagged statically by the compiler). A significant contribution of this work is a mechanism for supporting interactive use. This is challenging because it requires instantiating values with user-specified index types directly, rather than via CPS rank-2 polymorphic "with" functions as in Kiselyov and Shan's "Implicit Configurations". The solution is based on Template Haskell. MOTIVATION: Haskell has never been a serious language choice for numerics, partly because of efficiency problems. However, a large portion of the demand for number-processing in science is met by other interpreted languages such as Octave and Matlab. These languages are just as slow, or slower, than Haskell when it comes to executing a large number of operations in succession. Their advantage is in making available to the user a collection of highly-optimized linear algebra routines. Since many numerical tasks consist of simple, repetitive operations on large amounts of data, and because these operations can often be reformulated in terms of operations on vectors and matrices, such matrix languages can give good performance on appropriately written code, and have become standard, viable workbenches for numerical analysis. However, matrix languages are often good for little more than matrix manipulations - their support for more complicated data structures is poor. Furthermore, they are generally dynamically typed, which means that type errors only become evident at runtime. 
In particular, operand conformability errors, caused for instance by multiplying matrices A*B instead of B*A or A'*B' (in Matlab notation), are runtime errors, and such errors can be masked when dimensions happen to be equal, for instance when A and B are square matrices as in the above example. I believe that with a good linear algebra library which solves this problem via static typing, Haskell could make a valuable contribution in the area of technical computing. DESIGN: - The fundamental type is a "vector", which includes an element type and an index type. Matrices are vectors indexed by pairs. Vectors are members of the Vector class. The element type is a class parameter, to allow vectors over specialized types: class Vector v e | v -> e (note, the index type is not a class parameter) http://ofb.net/~frederik/futility/src/Vector/Base.hs - The building-block index type wraps a type-level integer N, and accepts a range of possible values 0..(N-1). I use a modified version of Oleg Kiselyov and Chung-chieh Shan's "Implicit Configurations" paper for the type-level integers. Index types must implement the Dom ("domain") class. Superclasses are Bounded, Enum, Ix, Eq, and Show. Instances are defined so that index types may be combined with (,) and Either. Additionally, there is a generics-style method domCastPair which allows us to detect when a vector is also a matrix. The primary intent of this is to make it possible for libraries such as Alberto Ruiz's GSL library to implement our interface, using different data types for matrices and vectors "under the hood". I also use domCastPair to display vectors and matrices differently. http://ofb.net/~frederik/futility/src/Domain.hs http://ofb.net/~frederik/futility/src/Prepose.hs - I have provided an example implementation of most of the operations using the Array type. 
This is very slow for numerical operations (about 200x slower than Octave), but it allows arbitrary (boxed) element types, which faster implementations are unlikely to do. http://ofb.net/~frederik/futility/src/Vector/Array.hs Thus in the current state the library is a working prototype. For it to be practically useful, it still needs a Vector instance which wraps some fast linear algebra library such as the GSL (at the expense of only supporting a restricted set of element types). A faster Haskell-only implementation would also be possible using unboxed arrays, but I think it is unlikely that this would be able to come close to the performance of an external C or Fortran library such as GSL. - Due to the need to specify index types at some point, input of vectors is difficult. I have provided two functions in Fu.Vector.Base which should cover most of the cases: listVec :: Vector v e => [e] -> (forall n . (ReflectNum n) => v (L n) -> w) -> w listMat :: Vector v e => [[e]] -> (forall n m . (ReflectNum n, ReflectNum m) => v (L m, L n) -> w) -> w However, these aren't useful in interactive situations. So I have also provided some template-haskell routines http://ofb.net/~fre
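The library described above predates GHC's type-level naturals and builds its index types from the Kiselyov-Shan encoding. With a modern GHC the same conformability guarantee can be sketched much more briefly using GHC.TypeLits; this toy is not the library's API, just an illustration of the design goal that multiplying an NxK by an MxL matrix with K /= M fails at compile time.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import Data.List (transpose)
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- A toy matrix whose dimensions live only in the type.
newtype Mat (n :: Nat) (m :: Nat) = Mat [[Double]] deriving Show

-- Smart constructor: checks the runtime data against the type-level sizes.
mat :: forall n m. (KnownNat n, KnownNat m) => [[Double]] -> Maybe (Mat n m)
mat rows
  | length rows == r, all ((== c) . length) rows = Just (Mat rows)
  | otherwise = Nothing
  where
    r = fromIntegral (natVal (Proxy :: Proxy n))
    c = fromIntegral (natVal (Proxy :: Proxy m))

-- Multiplication only type-checks when the inner dimensions agree;
-- 'mmul a a' for a non-square 'a' is rejected by the compiler.
mmul :: Mat n k -> Mat k m -> Mat n m
mmul (Mat a) (Mat b) =
  Mat [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]
```

The harder problems the announcement tackles (interactive instantiation of index types, wrapping external BLAS-style libraries) are not addressed by this sketch.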
Re: [Haskell] QuickCheck revival and Cabal
Hi, Why not just call it, say, Test.QuickCheck2? I think module names should reflect only their functionality. I don't see how "External" or "Contrib" or "Chalmers" would say anything useful about the functionality of the modules. A while ago I sent a proposal for "package mounting", which I think would let us avoid this whole issue: http://www.haskell.org//pipermail/libraries/2005-June/004009.html I am opposed to a situation such as Java's, in which every module is permanently fixed somewhere in a huge module hierarchy, for the reasons I describe in the proposal (namely, in part because I don't think that global-name-choosing should be such a fundamental part of coding - as you said, it is a source of agony). The proposal was an attempt to describe an alternative approach. I think the ideal solution would be: - Your code is released in a package with modules named "Batch", "Poly", "Utils", etc., i.e. with no qualification. - The package has a default mount point of Test.QuickCheck. If people want the old version, then they can specify the old package instead of the new package on the compiler/interpreter command line. The response to my proposal was semi-positive, but I don't think any work has gone towards implementing it (certainly not by myself). Cheers, Frederik On Tue, Apr 11, 2006 at 01:02:51PM +0200, Koen Claessen wrote: > Dear all, > > For the past couple of years, I have been quietly hacking on a brand new > version of QuickCheck with lots of cool features. I have been > distributing copies to some friends, but have not released any official > package. > > Now, after lots of peer pressure, the time has come that I want to > release the current version as a Cabal package. > > I have been agonizing however over where in the module hierarchy the new > QuickCheck package should be. > > There is currently an old QuickCheck version in the standard hierarchy > in Test.QuickCheck. 
As the new QuickCheck is incompatible with the old > one, I do not want to override that place. Rather, I would like to > create my own little space in the hierarchy where the new version can > sit and develop. > > It feels to me that there should be a convention that people use to add > their own contributions to the module hierarchy without the danger of > clashing with other packages. > > Proposals: > > Contrib.Chalmers.QuickCheck > External.Chalmers.QuickCheck > Chalmers.QuickCheck > Contrib.Test.QuickCheck > Contrib.QuickCheck > > The first three I like -- but not the last one; I don't want to rule out > anyone else (except for my own colleagues :-) making their own version > of QuickCheck and releasing it somewhere in the tree. > > What does one think? > > Regards, > /Koen > > PS1. Previously discussions about this were referred to the libraries > mailing list but I feel that this is of interest to the larger crowd too. > > PS2. I welcome myself back to the Haskell mailing list after years of > inactivity :-) > > ___ > Haskell mailing list > Haskell@haskell.org > http://www.haskell.org/mailman/listinfo/haskell > -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
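Under the package mounting proposal referenced above, the "default mount point" idea would make the naming question a per-user choice rather than a global one. A hypothetical invocation (the flag syntax follows the PackageMounting wiki page; the package names are made up):

```
# New QuickCheck ships modules with unqualified names ("Batch", "Poly",
# ...) and a default mount point of Test.QuickCheck; a user wanting both
# versions side by side could remount the new one:
ghc Main.hs -package QuickCheck-1.0 \
            -package QuickCheck-2.0 -package-base Test.QuickCheck2
```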
Re: [Haskell] Read Instances for Data.Map and Data.Set
On Wed, Oct 19, 2005 at 08:20:10PM +0200, Georg Martius wrote: > Hi folks, > > I was really annoyed by the fact that for Data.Map and Data.Set are no Read > instances declared, but Show instances are! I believe there should be some > kind of unwritten rule that in the standart lib the Show and Read instances > come pairwise and are fully compatible. I've been annoyed by this too. I wrote my own instances at one point. Frederik ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
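For the record, "writing my own instances" here amounts to parsing the `fromList [...]` form that the `Show` instance for `Data.Map` produces. A standalone helper sketching this (the name `readMap` is hypothetical; recent versions of the containers library ship a matching `Read` instance, making this unnecessary today):

```haskell
import Data.List (stripPrefix)
import qualified Data.Map as M

-- Parse the "fromList [...]" text that 'show' emits for a Data.Map,
-- reusing the Read instances of the keys and values.
readMap :: (Ord k, Read k, Read v) => String -> Maybe (M.Map k v)
readMap s = do
  rest <- stripPrefix "fromList " s
  case reads rest of
    [(pairs, "")] -> Just (M.fromList pairs)
    _             -> Nothing
```

This gives the Show/Read round trip the post asks for: `readMap (show m) == Just m`.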
Re: [Haskell] reader-like IO, parents of threads
What about adding support for hooks in forkIO? These could be useful for other things as well. Pthreads could be said to have this functionality: -- Function: int pthread_atfork (void (*PREPARE)(void), void (*PARENT)(void), void (*CHILD)(void)) `pthread_atfork' registers handler functions to be called just before and just after a new process is created with `fork'. The PREPARE handler will be called from the parent process, just before the new process is created. The PARENT handler will be called from the parent process, just before `fork' returns. The CHILD handler will be called from the child process, just before `fork' returns. As well as: -- Function: void pthread_cleanup_push (void (*ROUTINE) (void *), void *ARG) `pthread_cleanup_push' installs the ROUTINE function with argument ARG as a cleanup handler. From this point on to the matching `pthread_cleanup_pop', the function ROUTINE will be called with arguments ARG when the thread terminates, either through `pthread_exit' or by cancellation. If several cleanup handlers are active at that point, they are called in LIFO order: the most recently installed handler is called first. Of course, 'fork' has a bit of a different meaning in pthreads. I don't know if there is support for handlers which are run when a new thread is created. (Pthreads also has support for "thread-specific data": -- Function: int pthread_setspecific (pthread_key_t KEY, const void *POINTER) `pthread_setspecific' changes the value associated with KEY in the calling thread, storing the given POINTER instead. If there is no such key KEY, it returns `EINVAL'. Otherwise it returns 0. -- Function: void * pthread_getspecific (pthread_key_t KEY) `pthread_getspecific' returns the value currently associated with KEY in the calling thread. If there is no such key KEY, it returns `NULL'. 
) Regards, Frederik On Tue, Oct 18, 2005 at 11:47:29AM +0100, Simon Marlow wrote: > It seems that you can do this as long as you provide your own version of > forkIO, but not if you want to use the built-in forkIO. > > One could argue that getting the parent ThreadId is something that > should be supported natively by forkIO, and I might be inlined to agree. > Unfortunately there are some subtleties: currently a ThreadId is > represented by a pointer to the thread itself, which causes the thread > to be kept alive. This has implications not only for space leaks, but > also for reporting deadlock: if you have a ThreadId for a thread, you > can send it an exception with throwTo at any time, and hence the runtime > can never determine that the thread is deadlocked so it will never get > the NonTermination exception. Perhaps we need two kinds of ThreadId: a > weak one for use in Maps, and a strong one that you can use with > throwTo. But then building a Map in which some elements can be garbage > collected is a bit tricky (it can be done though; see our old Memo table > implementation in fptools/hslibs/util/Memo.hs). > > Cheers, > Simon > > On 16 October 2005 20:53, Frederik Eaton wrote: > > > John Meacham suggested that I should be a little more clear about the > > semantics I'm seeking. Also, apparently it isn't possible to implement > > writeTLRef/modifyTLRef with the data structure I gave: > > > >> data TLRef a = TLR a (MVar (Map ThreadId a)) > > (the first argument is a default value, the second is a map storing > > the values in each thread. The MVar is for safe concurrent access) > > > > Without those functions, it looks a little more like the Reader monad > > I'm comparing it to. > > > > - What happens on fork? The child thread effectively gets a "copy" of > > each TLRef in its parent. They have the same values, but modifying > > them using withTLRef has no effect on the values in other threads. > > > > - Can you pass a TLRef to a different thread? 
Yes, but the value it > > holds will not be the same when it is dereferenced in a different > > thread. > > > > The problem with writeTLRef is that if a child thread looks up the > > default value for an unbound reference by looking up the value in its > > parent, but after calling forkIO the parent changes the value with > > writeTLRef, then the child thread will get the wrong value. It is > > supposed to only see the value which was stored in the reference at > > the point where forkIO was called. > > > > Also, for this reason, I think withTLRef would have to be implemented > > by creating a separate thread with forkIO
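The pthread_atfork-style hooks suggested above can be sketched as a wrapper around forkIO, without runtime support: a "prepare" hook runs in the parent just before the fork and can capture context, and a "child" hook runs first in the new thread. The name `forkIOWithHooks` is hypothetical.

```haskell
import Control.Concurrent
import Control.Concurrent.MVar

-- PREPARE runs in the parent and can capture context (for example the
-- parent's ThreadId); CHILD runs first in the new thread, before the
-- thread's body.
forkIOWithHooks :: IO a -> (a -> IO ()) -> IO () -> IO ThreadId
forkIOWithHooks prepare child act = do
  captured <- prepare              -- in the parent
  forkIO (child captured >> act)   -- in the child
```

With `myThreadId` as the prepare hook, the child learns its parent's ThreadId at fork time, which is the "parentThreadId" facility discussed in this thread (though only for threads created through the wrapper, not the built-in forkIO).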
Re: [Haskell] reader-like IO, parents of threads
John Meacham suggested that I should be a little more clear about the semantics I'm seeking. Also, apparently it isn't possible to implement writeTLRef/modifyTLRef with the data structure I gave: > data TLRef a = TLR a (MVar (Map ThreadId a)) (the first argument is a default value, the second is a map storing the values in each thread. The MVar is for safe concurrent access) Without those functions, it looks a little more like the Reader monad I'm comparing it to. - What happens on fork? The child thread effectively gets a "copy" of each TLRef in its parent. They have the same values, but modifying them using withTLRef has no effect on the values in other threads. - Can you pass a TLRef to a different thread? Yes, but the value it holds will not be the same when it is dereferenced in a different thread. The problem with writeTLRef is that if a child thread looks up the default value for an unbound reference by looking up the value in its parent, but after calling forkIO the parent changes the value with writeTLRef, then the child thread will get the wrong value. It is supposed to only see the value which was stored in the reference at the point where forkIO was called. Also, for this reason, I think withTLRef would have to be implemented by creating a separate thread with forkIO and waiting for it to finish. This would avoid overwriting a value which other child threads might still need to access. Note that an e.g. "myParentThreadId" function isn't enough - what is needed is a parentThreadId :: ThreadId -> IO (Maybe ThreadId) which can look up the parent of an arbitrary thread. Alternatively, if 'forkIO' supported hooks to run before and/or after forking, then a 'parentThreadId' function could be implemented from that. Frederik On Sun, Oct 16, 2005 at 04:40:40AM -0700, Frederik Eaton wrote: > Hi, > > I'm trying to get MonadReader-like functionality in the IO monad. It > doesn't appear possible implement it with the interfaces that > Haskell98 or GHC provide. 
I'm looking for something like "thread-local > variables". The interface could be something like this: > > newTLRef :: a -> IO (TLRef a) > withTLRef :: TLRef a -> a -> IO b -> IO b > readTLRef :: TLRef a -> IO a > writeTLRef :: TLRef a -> a -> IO () > modifyTLRef :: TLRef a -> (a -> a) -> IO () > > This would have a lot of uses. I am aware of the "Implicit > Configurations" paper by Kiselyov and Shan, but a solution such as > theirs which requires modifying the type signatures of all > intermediate function calls is not suitable. I want to be able to say > "run algorithm A using database D" without requiring all of the > functions in algorithm A to know that databases are somehow involved. > One way to look at it is that I am seeking something like the > type-based approach, but easier and with less explicit syntax; another > way to look at it is that I am seeking something like a global IORef > based approach, but more safe. > > An implementation based on ThreadId-keyed maps is almost workable: > > data TLRef a = TLR a (MVar (Map ThreadId a)) > > The problem with this is that while it is possible to find out the > ThreadId of the current thread, it doesn't appear to be possible to > get the ThreadId of the parent thread, which would be needed for > values to be properly inherited. > > Is there a way around this? Will there ever be standard support for > either finding the thread id of the parent of the current thread, or > for something like the thread-local references I have proposed? > > Thanks, > > Frederik > ___ > Haskell mailing list > Haskell@haskell.org > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
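The `TLRef` type quoted above can be given the described operations directly; the sketch below implements `withTLRef` by forking a fresh thread and waiting for it, as the post suggests, so the binding cannot clobber a value other children still need. It supports only one level of binding: a grandchild thread falls back to the default, which is exactly the parent-lookup limitation the post describes. (Exceptions in the wrapped action would deadlock this sketch.)

```haskell
import Control.Concurrent
import Control.Concurrent.MVar
import qualified Data.Map as M

-- The posted data type: a default value plus a per-thread map.
data TLRef a = TLR a (MVar (M.Map ThreadId a))

newTLRef :: a -> IO (TLRef a)
newTLRef def = TLR def <$> newMVar M.empty

readTLRef :: TLRef a -> IO a
readTLRef (TLR def mv) = do
  t <- myThreadId
  M.findWithDefault def t <$> readMVar mv

-- Run the action with the value bound, in a separate thread, and wait.
withTLRef :: TLRef a -> a -> IO b -> IO b
withTLRef (TLR _ mv) v act = do
  done <- newEmptyMVar
  _ <- forkIO $ do
         t <- myThreadId
         modifyMVar_ mv (return . M.insert t v)
         r <- act
         modifyMVar_ mv (return . M.delete t)
         putMVar done r
  takeMVar done
```

`writeTLRef`/`modifyTLRef` are omitted, matching the message above, which drops them for exactly the inheritance reasons it explains.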
[Haskell] reader-like IO, parents of threads
Hi, I'm trying to get MonadReader-like functionality in the IO monad. It doesn't appear possible to implement it with the interfaces that Haskell98 or GHC provide. I'm looking for something like "thread-local variables". The interface could be something like this:

newTLRef :: a -> IO (TLRef a)
withTLRef :: TLRef a -> a -> IO b -> IO b
readTLRef :: TLRef a -> IO a
writeTLRef :: TLRef a -> a -> IO ()
modifyTLRef :: TLRef a -> (a -> a) -> IO ()

This would have a lot of uses. I am aware of the "Implicit Configurations" paper by Kiselyov and Shan, but a solution such as theirs which requires modifying the type signatures of all intermediate function calls is not suitable. I want to be able to say "run algorithm A using database D" without requiring all of the functions in algorithm A to know that databases are somehow involved. One way to look at it is that I am seeking something like the type-based approach, but easier and with less explicit syntax; another way to look at it is that I am seeking something like a global IORef based approach, but more safe. An implementation based on ThreadId-keyed maps is almost workable:

data TLRef a = TLR a (MVar (Map ThreadId a))

The problem with this is that while it is possible to find out the ThreadId of the current thread, it doesn't appear to be possible to get the ThreadId of the parent thread, which would be needed for values to be properly inherited. Is there a way around this? Will there ever be standard support for either finding the thread id of the parent of the current thread, or for something like the thread-local references I have proposed? Thanks, Frederik ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] Literal for Infinity
But they all have a largest and smallest possible value, as I have already indicated. On Sun, Oct 02, 2005 at 04:35:02PM +0200, Lennart Augustsson wrote: > Not all FP representations have infinity, and even if > they do, they might only have one infinity. > > -- Lennart > > Frederik Eaton wrote: > >I've previously mentioned that I would like to see an 'instance > >Bounded Double' etc., as part of the standard, which would use 1/0 for > >maxBound, or the largest possible value (there must be one!) for > >platforms where that is not possible. I don't see a problem with > >looking at Double values as if they were bounded by -infinity and > >+infinity. > > > >On Thu, Sep 29, 2005 at 09:11:25PM +0300, Yitzchak Gale wrote: > > > >>Hi Jacques, > >> > >>Thanks also to you for a most interesting reply. > >> > >>This same discussion has taken place on the > >>discussion list of every modern general-purpose > >>programming language. > >> > >>The same points are always raised and argued, and > >>the conclusion is always the same: floating point > >>exceptions should raise exceptions. Programs that > >>are so sensitive that the tiny overhead makes a > >>difference should use numeric libraries, unboxed > >>types, FFI, and the like. > >> > >>In Haskell also, it looks like the infrastructure > >>was already laid in the Control.Exception module. > >>I hope we will soon be using it. > >> > >>I personally would like also to see alternative > >>functions that return values in the Error monad. > >> > >>Regards, > >>Yitz > >> > >>On Thu, Sep 29, 2005 at 03:13:27PM +0300, Jacques Carette wrote: > >> > >>>The IEEE 754 standard says (fairly clearly) that +1.0 / +0.0 is one of > >>>the most 'stable' definitions of Infinity (in Float at least). > >>>Throwing an exception is also regarded as a possibility in IEEE 754, but > >>>it is expected that that is not the default, as experience shows that > >>>that is a sub-optimal default. 
Mathematical software (Maple, > >>>Mathematica, Matlab) have generally moved in that direction. > >>> > >>>Almost all hardware implementations of float arithmetic now default to > >>>IEEE 754 arithmetic. Having the arithmetic do 'something else' involves > >>>more CPU cycles, so users should generally complain if their system's > >>>arithmetic differs from IEEE 754 arithmetic without some deep reason to > >>>do so [there are some; read and understand William Kahan's papers for > >>>these]. > >>> > >>>Jacques > >> > >>___ > >>Haskell mailing list > >>Haskell@haskell.org > >>http://www.haskell.org/mailman/listinfo/haskell > >> > > > >___ > >Haskell mailing list > >Haskell@haskell.org > >http://www.haskell.org/mailman/listinfo/haskell > > > ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] Literal for Infinity
I've previously mentioned that I would like to see an 'instance Bounded Double' etc., as part of the standard, which would use 1/0 for maxBound, or the largest possible value (there must be one!) for platforms where that is not possible. I don't see a problem with looking at Double values as if they were bounded by -infinity and +infinity. On Thu, Sep 29, 2005 at 09:11:25PM +0300, Yitzchak Gale wrote: > Hi Jacques, > > Thanks also to you for a most interesting reply. > > This same discussion has taken place on the > discussion list of every modern general-purpose > programming language. > > The same points are always raised and argued, and > the conclusion is always the same: floating point > exceptions should raise exceptions. Programs that > are so sensitive that the tiny overhead makes a > difference should use numeric libraries, unboxed > types, FFI, and the like. > > In Haskell also, it looks like the infrastructure > was already laid in the Control.Exception module. > I hope we will soon be using it. > > I personally would like also to see alternative > functions that return values in the Error monad. > > Regards, > Yitz > > On Thu, Sep 29, 2005 at 03:13:27PM +0300, Jacques Carette wrote: > > The IEEE 754 standard says (fairly clearly) that +1.0 / +0.0 is one of > > the most 'stable' definitions of Infinity (in Float at least). > > Throwing an exception is also regarded as a possibility in IEEE 754, but > > it is expected that that is not the default, as experience shows that > > that is a sub-optimal default. Mathematical software (Maple, > > Mathematica, Matlab) have generally moved in that direction. > > > > Almost all hardware implementations of float arithmetic now default to > > IEEE 754 arithmetic. 
Having the arithmetic do 'something else' involves > > more CPU cycles, so users should generally complain if their system's > > arithmetic differs from IEEE 754 arithmetic without some deep reason to > > do so [there are some; read and understand William Kahan's papers for > > these]. > > > > Jacques > ___ > Haskell mailing list > Haskell@haskell.org > http://www.haskell.org/mailman/listinfo/haskell > ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
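As a quick check of the claim in the message above: IEEE 754 doubles really do carry an infinity that an 'instance Bounded Double' could use, and it compares above every finite value. A sketch (the instance itself is the proposal's, not standard Haskell):

```haskell
-- The proposed (non-standard) instance would be, roughly:
--
--   instance Bounded Double where
--     minBound = -1/0
--     maxBound = 1/0
--
-- IEEE 754 already supplies the values it needs:
posInf, largestFinite :: Double
posInf        = 1 / 0                       -- positive infinity
largestFinite = 1.7976931348623157e308      -- largest finite Double

-- infinity behaves as an upper bound on every finite Double
aboveEverything :: Bool
aboveEverything = posInf > largestFinite
```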
Re: [Haskell] Mixing monadic and non-monadic functions
> I have another proposal, though. Introduce a new keyword, which I'll > call "borrow" (the opposite of "return"), that behaves like a > function of type (Monad m) => m a -> a inside of do statements. More > precisely, a do expression of the form > > do { ... ; ... borrow E ... ; ... } > > is transformed into > > do { ... ; x <- E ; ... x ... ; ... } > > where x is a fresh variable. If more than one borrow form appears in > the same do statement, they are pulled out from left to right, which > matches the convention already used in liftM2, ap, mapM, etc. I think this is a good idea. I like the inline "<-", or maybe something like "@". I'm not sure what you intend to do about nested "do" statements, though. If they correspond to different monads, I might want to have a 'borrow' in the inner "do" statement create a lifted expression in the outer "do" statement. Furthermore, I might want to have a lifted expression in the outer "do" create something which needs to be evaluated again in the monad of the inner "do" to produce the final value. In any case, it would certainly be good to have better support for lifting; and something which doesn't weaken the type system is likely to be implemented before something that does is, so I am in favor of investigation along the lines of your proposal. Frederik -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
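The left-to-right pull-out described above can be written by hand today; a runnable sketch at the Maybe monad (the values are arbitrary illustrations):

```haskell
import Control.Monad (liftM2)

-- Proposed:   total = return (borrow (Just 1) + borrow (Just 2))
-- Desugared, pulling borrows out left to right:
total :: Maybe Int
total = do
  x <- Just 1          -- first borrow
  y <- Just 2          -- second borrow
  return (x + y)

-- The same left-to-right convention, spelled with liftM2:
total' :: Maybe Int
total' = liftM2 (+) (Just 1) (Just 2)
```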
Re: [Haskell] Monadification as a refactoring [was: Mixing monadic and non-monadic functions]
On Sat, Sep 10, 2005 at 12:55:15AM +0100, Claus Reinke wrote: > life is funny, isn't it? so many people so eagerly (lazily, in my case) > discussing conversion between non-monadic and monadic code, I'm trying to discuss a new syntax, not code transformations. I agree that the two are related. I'm interested in the latter, but I don't understand it very well. I think of refactoring as an operation that takes source code to source code, i.e. unlike most operations done on source code, refactoring produces output which is meant to be edited by humans. Is this correct? But if it is, doesn't it mean that one would like refactorizations to have some ill-defined "reversibility" property: a refactorization should have an inverse which commutes with simple edits. For instance, if I (a) rename a variable, and then (b) introduce a new reference to the renamed variable somewhere, I can later decide to change the name back, reverting (a), without losing the work I did in the meantime in (b). I can do this by applying another rename operation, which will also affect the new reference. Or, if I (a) take a bit of code and remove it to a separate function, and then (b) modify the body of that function, I can later decide to inline the function back into the one place which calls it, thus reverting (a), without losing the modification done in (b). Yet, I don't see how the "monadification" operations you propose could have this property. They are certainly code transformations! But they seem irreversible - once I (a) apply your transformations and (b) edit the output, I can't revert (a) without losing the work done in (b). Changes to the code become tightly coupled, design becomes less tractable. > yet when we asked for your opinions and suggestions on this very > topic only a short while ago, we got a total of 4 (four) replies - > all quite useful, mind you, so we were grateful, but still one > wonders.. 
we might have assumed that not many people cared after > all: > > http://www.haskell.org//pipermail/haskell/2005-March/015557.html It might have been more useful to ask for survey replies to be sent to the list. Often the various opinions of a large number of people can be compressed to a few representative positions. But if respondents can't see what opinions have been expressed so far, then this time-saving compression becomes impossible. That is just my opinion. > shall I assume that all participants in this discussion have joined > the Haskell parade since then, and have proceeded rapidly to the > problems of monadic lifting?-) in which case I'd invite you to have > a look at that survey and the papers mentioned. I should do that, yes! It's just that I was a bit late, having misplaced my trumpet. > > > I thought the easy answer would be to inject non-monadic values into the > > > monad (assuming one already rejiggered things to do automatic lifting). > > I'd phrase it slightly differently: what (I think) one wants are implicit > coercions > between monadic and non-monadic types of expressions, where the coercions > lift non-monadic values into the monad in question, while embedding monadic > computations in the current monad to get a non-monadic result if only that is > needed (although one might think of the latter as partially lifting the > operation > that needs the non-monadic result). > > only I wouldn't want those implicit coercions to be introduced unless > programmers explicitly ask for that (one usually only converts code from > non-monadic to monadic once, and while the details of that step might > be tiresome and in need of tool-support, the step itself should be explicit > - see my comment on (2) below). > > > Note that in (a), "pure" values are never used where monads are asked > > for, only the other way around. > > that is probably where some would beg to differ - if you lift operations, > why not lift values as well? 
Oh, one should do both, I was just giving a case where value-lifting didn't happen, as a counterexample to Aaron's viewpoint. > > I think that supporting syntax (a) for semantics (b) should be a > > feature because: (1) it is (usually) obvious what (a) means; (2) it > > eliminates the single-use variable 'v' - single-use variables like > > this occur a lot in monadic Haskell code, and I think they make it > > harder to read and write; (3) it would support the math-like syntax > > that I presented in my original message. > > (1) "(usually) obvious" is tech-speak for "(perhaps) possible to > figure out, though probably not uniquely determined"?-) > > when mathematicians abuse notation in the "obvious" way, > there is usually an assumed context in which the intended > abuses are clearly defined (if not, there is another context > in which the "obvious" things will go unexpectedly awry). > > (2) the nice thing about Haskell is that it *distinguishes* between > monadic and non-monadic computations, and between evaluation > and execution o
Re: [Haskell] Re: Mixing monadic and non-monadic functions
ue for a large class of mathematical shorthand (and I think that the importance of notation is underrated). I think most such shorthand can be understood simply in terms of lifting, and I hypothesize that we can find an automatic lifting rule along the lines I've described which will not be as ambiguous as you suggest. Frederik > On 09/09/05, Frederik Eaton <[EMAIL PROTECTED]> wrote: > > By the way, I thought it would be obvious, but a lot of people seem to > > be missing the fact that I'm not (as Sean, I believe, isn't) > > requesting limited support for 1 or 2 or 3 argument functions or > > certain type classes to be applied to monads, or for certain > > operations to defined on certain types. I know at least how to define > > type classes and functions. If this is what I wanted I would probably > > do it myself. > > > > > I thought the easy answer would be to inject non-monadic values into the > > > monad (assuming one already rejiggered things to do automatic lifting). > > > > I don't know if this is the right way of looking at it. Do you have an > > example? > > > > My idea is that you should be able to have code like this: > > > > -- (a) > > > > m3 :: a -> m b > > > > m6 = do > > m1 > > m2 > > m3 (p1 (p2 p3 (p4 m4 p5)) p6) > > m5 > > > > - where the m* values are functions returning monads and the p* values > > are so-called "pure" functions, i.e. functions which don't take monad > > values or return monad results (so currently the above code won't > > type-check beacuse of m4) - but have it be interpreted as: > > > > -- (b) > > > > m3 :: a -> m b > > > > m6 = do > > m1 > > m2 > > v <- m4 > > m3 (p1 (p2 p3 (p4 v p5) p6) > > m5 > > > > Note that in (a), "pure" values are never used where monads are asked > > for, only the other way around. 
> > > > I think that supporting syntax (a) for semantics (b) should be a > > feature because: (1) it is (usually) obvious what (a) means; (2) it > > eliminates the single-use variable 'v' - single-use variables like > > this occur a lot in monadic Haskell code, and I think they make it > > harder to read and write; (3) it would support the math-like syntax > > that I presented in my original message. > > > > It might be hard to modify the type checker to get it to work, but I > > think it is possible, and I see no reason not to be as general as > > possible. > > > > Would it mean treating the 'Monad' class specially? Perhaps, but I > > don't think this is a reason to avoid it. Further, it is likely that > > whatever is done to extend the type checker could be given a general > > interface, which Monad would simply take advantage of, using a > > meta-declaration in the same spirit as "infixr" etc. > > > > Also, I do not think that template haskell is powerful enough to > > support this, but I'm willing to be proven wrong. > > > > Frederik > > > > -- > > http://ofb.net/~frederik/ > > ___ > > Haskell mailing list > > Haskell@haskell.org > > http://www.haskell.org/mailman/listinfo/haskell > > > -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] Re: Mixing monadic and non-monadic functions
By the way, I thought it would be obvious, but a lot of people seem to be missing the fact that I'm not (as Sean, I believe, isn't) requesting limited support for 1 or 2 or 3 argument functions or certain type classes to be applied to monads, or for certain operations to be defined on certain types. I know at least how to define type classes and functions. If this is what I wanted I would probably do it myself.

> I thought the easy answer would be to inject non-monadic values into the
> monad (assuming one already rejiggered things to do automatic lifting).

I don't know if this is the right way of looking at it. Do you have an example?

My idea is that you should be able to have code like this:

-- (a)

m3 :: a -> m b

m6 = do
    m1
    m2
    m3 (p1 (p2 p3 (p4 m4 p5)) p6)
    m5

- where the m* values are functions returning monads and the p* values are so-called "pure" functions, i.e. functions which don't take monad values or return monad results (so currently the above code won't type-check because of m4) - but have it be interpreted as:

-- (b)

m3 :: a -> m b

m6 = do
    m1
    m2
    v <- m4
    m3 (p1 (p2 p3 (p4 v p5)) p6)
    m5

Note that in (a), "pure" values are never used where monads are asked for, only the other way around. I think that supporting syntax (a) for semantics (b) should be a feature because: (1) it is (usually) obvious what (a) means; (2) it eliminates the single-use variable 'v' - single-use variables like this occur a lot in monadic Haskell code, and I think they make it harder to read and write; (3) it would support the math-like syntax that I presented in my original message. It might be hard to modify the type checker to get it to work, but I think it is possible, and I see no reason not to be as general as possible. Would it mean treating the 'Monad' class specially? Perhaps, but I don't think this is a reason to avoid it.
Further, it is likely that whatever is done to extend the type checker could be given a general interface, which Monad would simply take advantage of, using a meta-declaration in the same spirit as "infixr" etc. Also, I do not think that template haskell is powerful enough to support this, but I'm willing to be proven wrong. Frederik -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
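A concrete, runnable stand-in for the (a)/(b) transformation above, instantiated at Maybe; all definitions are hypothetical fillers for the m*/p* names:

```haskell
-- Hypothetical stand-ins for the m*/p* names in the message above:
p4 :: Int -> Int -> Int      -- a "pure" function
p4 = (+)

m4 :: Maybe Int              -- a monadic value used in argument position
m4 = Just 7

m3 :: Int -> Maybe Int       -- a function returning a monadic result
m3 = Just

-- Form (a) would read   m6 = m3 (p4 m4 5)   and is ill-typed today;
-- form (b) is the interpretation the proposal assigns to it:
m6 :: Maybe Int
m6 = do
  v <- m4
  m3 (p4 v 5)
```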
Re: [Haskell] Mixing monadic and non-monadic functions
Anyway, if the idea is to ultimately wrap every value in an expression like ([1,2]+[3,4]) in a 'run' application, that doesn't sound very useful. Program structure might be improved, but it would be bloated out by these calls. Also, I don't know what would happen to the readability of type checker errors. I think it would be more useful if the compiler took care of this automatically. I think it would be worthwhile just for making imperative code more readable. Frederik P.S. By the way, did you misunderstand what I meant by 'automatic lifting'? Note that I'm talking about "lift" as in 'liftM', not 'lift' from MonadTrans. On Fri, Sep 09, 2005 at 01:17:57PM -0700, Frederik Eaton wrote: > On Thu, Sep 08, 2005 at 09:34:33AM +0100, Keean Schupke wrote: > > Can't you do automatic lifting with a "Runnable" class: > > > > class Runnable x y where > >run :: x -> y > > > > instance Runnable (m a) (m a) where > > run = id > > > > instance Runnable (s -> m a) (s -> m a) where > > run = id > > instance (Monad m,Monad n,MonadT t m,Runnable (m a) (n a)) => Runnable > > (t m a) (n a) where > > run = run . down > > Interesting... > > > instance (Monad m,MonadT t m,Monad (t m)) => Runnable (t m a) (m a) > > where > > run = down > > The above is redundant, right? > > > Where: > > > > class (Monad m,Monad (t m)) => MonadT t m where > >up :: m a -> t m a > >down :: t m a -> m a > > > > For example for StateT: > > ... > > So, 'run' is more like a form of casting than running, right? > > How do I use it to add two lists? Where do the 'run' applications go? > Do you have an explicit example? 
> > I was trying to test things out, and I'm running into problems with > the type system, for instance when I declare: > > class Cast x y where > cast :: x -> y > > instance Monad m => Cast x (m x) where > cast = return > > p1 :: (Monad m, Num a) => m (a -> a -> a) > p1 = cast (+) > > it says: > > Could not deduce (Cast (a1 -> a1 -> a1) (m (a -> a -> a))) > from the context (Monad m, Num a) > arising from use of `cast' at runnable1.hs:14:5-8 > > But this should match the instance I declared, I don't understand what > the problem is. > > Frederik > > -- > http://ofb.net/~frederik/ > -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
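To make the P.S. concrete: 'liftM'/'liftM2' from Control.Monad lift a pure function over monadic arguments, which is the sense of "lifting" used throughout this thread; 'lift' from MonadTrans instead embeds an action one level up a transformer stack, and is unrelated apart from the name. A minimal illustration of the former:

```haskell
import Control.Monad (liftM, liftM2)

incremented :: Maybe Int
incremented = liftM (+ 1) (Just 2)      -- lift a unary pure function

summed :: [Int]
summed = liftM2 (+) [1, 2] [3, 4]       -- lift a binary pure function
```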
Re: [Haskell] Mixing monadic and non-monadic functions
On Thu, Sep 08, 2005 at 09:34:33AM +0100, Keean Schupke wrote:
> Can't you do automatic lifting with a "Runnable" class:
>
> class Runnable x y where
>    run :: x -> y
>
> instance Runnable (m a) (m a) where
>     run = id
>
> instance Runnable (s -> m a) (s -> m a) where
>     run = id
>
> instance (Monad m,Monad n,MonadT t m,Runnable (m a) (n a)) => Runnable
> (t m a) (n a) where
>     run = run . down

Interesting...

> instance (Monad m,MonadT t m,Monad (t m)) => Runnable (t m a) (m a) where
>     run = down

The above is redundant, right?

> Where:
>
> class (Monad m,Monad (t m)) => MonadT t m where
>    up :: m a -> t m a
>    down :: t m a -> m a
>
> For example for StateT:
> ...

So, 'run' is more like a form of casting than running, right?

How do I use it to add two lists? Where do the 'run' applications go? Do you have an explicit example?

I was trying to test things out, and I'm running into problems with the type system, for instance when I declare:

class Cast x y where
    cast :: x -> y

instance Monad m => Cast x (m x) where
    cast = return

p1 :: (Monad m, Num a) => m (a -> a -> a)
p1 = cast (+)

it says:

    Could not deduce (Cast (a1 -> a1 -> a1) (m (a -> a -> a)))
        from the context (Monad m, Num a)
        arising from use of `cast' at runnable1.hs:14:5-8

But this should match the instance I declared, I don't understand what the problem is.

Frederik

-- 
http://ofb.net/~frederik/
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell
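One plausible reading of the error above: the goal mentions two different type variables (a1 from instantiating (+), a from p1's signature), and selecting the instance 'Cast x (m x)' would force a1 = a — an improvement step that instance matching does not perform. Pinning (+) to one monomorphic type lets the instance head match; a sketch (extensions assumed for the multi-parameter class):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Cast x y where
  cast :: x -> y

instance Monad m => Cast x (m x) where
  cast = return

-- With the argument monomorphised, the wanted constraint is
--   Cast (Integer -> Integer -> Integer) (m (Integer -> Integer -> Integer))
-- which the instance head matches directly:
p1 :: Monad m => m (Integer -> Integer -> Integer)
p1 = cast ((+) :: Integer -> Integer -> Integer)

-- applying the lifted function at the Maybe monad:
applied :: Integer
applied = case (p1 :: Maybe (Integer -> Integer -> Integer)) of
  Just f  -> f 1 2
  Nothing -> 0
```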
Re: [Haskell] Mixing monadic and non-monadic functions
On Thu, Sep 08, 2005 at 09:30:34AM -0700, Scherrer, Chad wrote:
> One of Mark Jones's articles suggests something like
>
> class Plus a b c | a b -> c where
>     (+) :: a -> b -> c
>
> Would
>
> instance (Plus a b c, Monad m) => Plus (m a) (m b) (m c) where
>     mx + my = do x <- mx
>                  y <- my
>                  return (x + y)
>
> do what you're looking for?

Hi Chad,

I'm not sure exactly what you have in mind. Obviously I want something that applies to all functions, with any number of arguments, and not just (+). Furthermore, it should handle cases like 1+[2,3] where only one value is monadic. Keean Schupke's suggestion sounds more likely to be useful, but I'm still reading it. In any case, a minimum of syntactic overhead is desired.

Frederik

> --
> Original message:
>
> Hi,
>
> Sean's comment (yeah, it was like a billion years ago, just catching
> up) is something that I've often thought myself.
>
> I want the type system to be able to do "automatic lifting" of monads,
> i.e., since [] is a monad, I should be able to write the following:
>
> [1,2]+[3,4]
>
> and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}".
>
> Also, I would have
>
> Reader (+1) + Reader (+4) == Reader (\x -> 2*x+5)
>
> The point I want to make is that this is much more general than IO or
> monads! I think we all understand intuitively what mathematicians mean
> when they add two sets
>
> {1,2}+{3,4} (i.e. { x+y | x\in {1,2}, y\in {3,4}})
>
> or when they add functions
>
> (f+g)(x) where f(x)=x+1 and g(x)=x+4
>
> So "automatic lifting" is a feature which is very simple to describe,
> but which gives both of these notations their intuitive mathematical
> meaning - not to mention making monadic code much tidier (who wants to
> spend their time naming variables which are only used once?). I think
> it deserves more attention.
>
> I agree that in its simplest incarnation, there is some ugliness: the
> order in which the values in the arguments are extracted from their
> monads could be said to be arbitrary.
Personally, I do not think that > this in itself is a reason to reject the concept. Because of currying, > the order of function arguments is already important in Haskell. If > you think of the proposed operation not as lifting, but as inserting > `ap`s: > > return f `ap` x1 `ap` ... `ap` xn > > then the ordering problem doesn't seem like such a big deal. I mean, > what other order does one expect, than one in which the arguments are > read in the same order that 'f' is applied to them? > > Although it is true that in most of the instances where this feature > would be used, the order in which arguments are read from their monads > will not matter; yet that does not change the fact that in cases where > order *does* matter it's pretty damn easy to figure out what it will > be. For instance, in > > print ("a: " ++ readLn ++ "\nb: " ++ readLn) > > two lines are read and then printed. Does anybody for a moment > question what order the lines should be read in? > > Frederik > > -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
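Chad's sketch above can be completed into something compilable; this assumes the usual GHC extensions (UndecidableInstances because the fundep's coverage condition fails for the lifted instance), and — as noted in the reply — it still does not cover the mixed case 1 + [2,3]:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}
import Prelude hiding ((+))
import qualified Prelude as P
import Control.Monad (liftM2)

class Plus a b c | a b -> c where
  (+) :: a -> b -> c

instance Plus Int Int Int where
  (+) = (P.+)

-- the lifted instance from the message above
instance (Plus a b c, Monad m) => Plus (m a) (m b) (m c) where
  mx + my = liftM2 (+) mx my

-- with the argument types pinned down, inference proceeds via the fundep
example :: [Int]
example = ([1, 2] :: [Int]) + ([3, 4] :: [Int])   -- [4,5,5,6]
```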
Re: [Haskell] Mixing monadic and non-monadic functions
On Thu, Sep 08, 2005 at 10:35:49AM +0200, Wolfgang Lux wrote: > Frederik Eaton wrote: > > >I want the type system to be able to do "automatic lifting" of monads, > >i.e., since [] is a monad, I should be able to write the following: > >and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}". > > Are you sure that this is the interpretation you have in mind? The > expression do {a<-[1,2]; b<-[3,4]; return (a+b)} does *not* compute the > element-wise sum of the two lists, but returns the list [4,5,5,6]. To > me, this would be a very counter intuitive result for an expression > [1,2]+[3,4]. Thanks for bringing up a good point. Yes, this is what I have in mind. As I see it, the monadic interface for lists gives them the semantics of (multi)sets. Adding two sets could only be interpreted as I have said. If you were adding, say, arrays, elementwise, the monad would be more like a reader monad, which I also gave an example of, with the parameter being the array index. Furthermore, it's hard to see how one would elegantly flesh out the semantics you propose for lists. What if the two lists have different lengths? Thus I think set semantics is more appropriate for a list monad. Frederik -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
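The two candidate meanings side by side: the monadic (multiset) reading gives every pairing, while the element-wise reading is ZipList-style and runs straight into the different-lengths question raised above:

```haskell
-- monadic/set semantics: every pairing, as in the do-block above
cartesianSum :: [Int] -> [Int] -> [Int]
cartesianSum xs ys = do
  x <- xs
  y <- ys
  return (x + y)

-- element-wise semantics: silently truncates to the shorter list
elementwiseSum :: [Int] -> [Int] -> [Int]
elementwiseSum = zipWith (+)
```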
Re: [Haskell] Mixing monadic and non-monadic functions
I guess what I don't understand is what's wrong with the first alternative you mention: > One way of preventing the compiler from rearranging effects is to > thread though a dummy variable - like a "World token", ala the IO > monad - which makes the order of operations explicit as an extra > data dependency, then compile the resulting code. which sounds sort of the same as the semantics I'm envisioning. Frederik On Wed, Sep 07, 2005 at 11:41:41PM -0700, Frederik Eaton wrote: > > Frederik, > > To do "automatic lifting" you need to do a (higher-order) effect analysis, > > then make sure the > > compiler doesn't rearrange applications which have conflicting effects. > > Hrm, I disagree. I don't think this is what I was advocating in my > message. > > What I'm advocating is a reasonably simple modification of the type > checker to allow a more concise syntax. Unless I'm misunderstanding > you, there is no special "effect analysis" needed. > > I haven't worked it out exactly, but what you'd do is the following: > > 1. keep track of when you are unifying types within a "monadic >context"; for instance when you unify "Monad m => x -> m b" with >"Monad m => y -> m b", you have to unify "x" and "y". this second >unification of "x" and "y" will be done within a "context" to which >the monad "m" has been added, to make a note of the fact that >computations in "m" within "x" or "y" can be lifted. > > 2. if two types don't unify, but you can get them to unify by >inserting a lift operation from one of the current context monads, >then do that. e.g. when you find an application where a function >expects an argument of type "a" and the user is passing something >of type "m a", and "m" is in the context (so we know that we can >eventually get rid of it), then do the application with `ap` >instead of "$". > > I don't pretend that this is rigorous, but I do hope it gives a better > idea of what I'm talking about doing. 
The point of the last few > paragraphs of my message was to argue that even with this syntax > change, users will still be able to easily reason about the > side-effects of monadic parts of their code. Do you disagree with that > assertion? Or are you just saying that the syntax change as I propose > it is called "effect analysis"? > > Frederik > > -- > http://ofb.net/~frederik/ > ___ > Haskell mailing list > Haskell@haskell.org > http://www.haskell.org/mailman/listinfo/haskell > -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
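The quoted "World token" alternative, sketched as explicit state threading with a toy world that just logs output; the w0 -> w1 -> w2 data dependency is what makes the order of effects explicit (all names hypothetical):

```haskell
type World = [String]     -- hypothetical: the world is just an output log

emit :: String -> World -> ((), World)
emit s w = ((), w ++ [s])

-- each step consumes the token the previous step produced,
-- so the two emits cannot be reordered
program :: World -> ((), World)
program w0 =
  let (_, w1) = emit "a" w0
      (_, w2) = emit "b" w1
  in ((), w2)
```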
Re: [Haskell] mailing list headaches
On Thu, Sep 08, 2005 at 08:39:29AM +0200, Tomasz Zielonka wrote: > On Wed, Sep 07, 2005 at 12:46:42PM -0700, Frederik Eaton wrote: > > Hi all, > > Hi! > > > After some weeks of squinting, I've ended up settling with the > > following partial solution in my configuration files (I use Mutt): > > > > set strict_threads=yes > > folder-hook folders/haskell set strict_threads=no > > folder-hook folders/libraries set strict_threads=no > > folder-hook folders/glasgow-haskell set strict_threads=no > > folder-hook folders/glasgow-haskell-bugs set strict_threads=no > > Nice, thanks! BTW, could you also share the configuration you use to > split e-mails into folders? Do you use procmail for this? Ooh, no, I don't use procmail. I don't like procmail at all. I wrote my own set of scripts in perl. I run 'w3m' with a local cgi script which shows all the folders along with their new/total counts, and when I select one of the links it opens mutt with that folder. Then I defined mutt macros to do things like respool messages (if I change the filter script) or move them to other folders. I don't expect that this hackery will be very useful to you, but I've posted it here so you can see: http://ofb.net/~frederik/mailproc.tar.gz One thing it has is perl code to automatically recognize mailing list messages and put them into appropriately-named folders. Apparently I'm subscribed to over 100 mailing lists. I'd wanted to do something more elegant, something like John Meacham's Ginsu but backed by an SQL database, but never got around to it. Frederik -- http://ofb.net/~frederik/ ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell] Mixing monadic and non-monadic functions
> Frederik,
> To do "automatic lifting" you need to do a (higher-order) effect analysis,
> then make sure the compiler doesn't rearrange applications which have
> conflicting effects.

Hrm, I disagree. I don't think this is what I was advocating in my message.

What I'm advocating is a reasonably simple modification of the type checker to allow a more concise syntax. Unless I'm misunderstanding you, there is no special "effect analysis" needed.

I haven't worked it out exactly, but what you'd do is the following:

1. keep track of when you are unifying types within a "monadic context"; for instance when you unify "Monad m => x -> m b" with "Monad m => y -> m b", you have to unify "x" and "y". this second unification of "x" and "y" will be done within a "context" to which the monad "m" has been added, to make a note of the fact that computations in "m" within "x" or "y" can be lifted.

2. if two types don't unify, but you can get them to unify by inserting a lift operation from one of the current context monads, then do that. e.g. when you find an application where a function expects an argument of type "a" and the user is passing something of type "m a", and "m" is in the context (so we know that we can eventually get rid of it), then do the application with `ap` instead of "$".

I don't pretend that this is rigorous, but I do hope it gives a better idea of what I'm talking about doing. The point of the last few paragraphs of my message was to argue that even with this syntax change, users will still be able to easily reason about the side-effects of monadic parts of their code. Do you disagree with that assertion? Or are you just saying that the syntax change as I propose it is called "effect analysis"?

Frederik

-- 
http://ofb.net/~frederik/
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell
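Step 2 can be traced by hand on a small case (names hypothetical, echoing the earlier m*/p* example): the ill-typed application p4 m4 p5 would be elaborated into applications through `ap`:

```haskell
import Control.Monad (ap)

p4 :: Int -> Int -> Int      -- pure function
p4 = (+)

m4 :: Maybe Int              -- monadic argument in a pure position
m4 = Just 10

p5 :: Int                    -- pure argument
p5 = 5

-- the elaboration step 2 would produce for   p4 m4 p5 :
elaborated :: Maybe Int
elaborated = return p4 `ap` m4 `ap` return p5
```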
Re: [Haskell] Mixing monadic and non-monadic functions
Hi, Sean's comment (yeah, it was like a billion years ago, just catching up) is something that I've often thought myself. I want the type system to be able to do "automatic lifting" of monads, i.e., since [] is a monad, I should be able to write the following: [1,2]+[3,4] and have it interpreted as "do {a<-[1,2]; b<-[3,4]; return (a+b)}". Also, I would have Reader (+1) + Reader (+4) == Reader (\x -> 2*x+5) The point I want to make is that this is much more general than IO or monads! I think we all understand intuitively what mathematicians mean when they add two sets {1,2}+{3,4} (i.e. { x+y | x\in {1,2}, y\in {3,4}}) or when they add functions (f+g)(x) where f(x)=x+1 and g(x)=x+4 So "automatic lifting" is a feature which is very simple to describe, but which gives both of these notations their intuitive mathematical meaning - not to mention making monadic code much tidier (who wants to spend their time naming variables which are only used once?). I think it deserves more attention. I agree that in its simplest incarnation, there is some ugliness: the order in which the values in the arguments are extracted from their monads could be said to be arbitrary. Personally, I do not think that this in itself is a reason to reject the concept. Because of currying, the order of function arguments is already important in Haskell. If you think of the proposed operation not as lifting, but as inserting `ap`s: return f `ap` x1 `ap` ... `ap` xn then the ordering problem doesn't seem like such a big deal. I mean, what other order does one expect, than one in which the arguments are read in the same order that 'f' is applied to them? Although it is true that in most of the instances where this feature would be used, the order in which arguments are read from their monads will not matter; yet that does not change the fact that in cases where order *does* matter it's pretty damn easy to figure out what it will be. 
For instance, in print ("a: " ++ readLn ++ "\nb: " ++ readLn) two lines are read and then printed. Does anybody for a moment question what order the lines should be read in? Frederik On Tue, Mar 23, 2004 at 12:55:56PM -0500, Sean E. Russell wrote: > On Tuesday 23 March 2004 11:36, Graham Klyne wrote: > > I think you're a rather stuck with the "temporary variables" (which they're > > not really), but it might be possible to hide some of the untidiness in an > > auxiliary monadic function. > > That seems to be the common suggestion: write my own visitors. > > I'm just surprised that there isn't a more elegant mechanism for getting > interoperability between monadic and non-monadic functions. The current > state of affairs just seems awkward. > > [Warning: quasi-rant] > > Caveat: I'm not smart enough, and I don't know enough, to criticize Haskell, > so please don't misconstrue my comments. To quote Einstein: "When I'm asking > simple questions and I'm getting simple answers, I'm talking to God." I > simply mistrust, and therefore question, systems where simple things are > overly involved. > > The standard explaination about why monads are so troublesome always sounds > like an excuse to me. We have monads, because they allow side-effects. Ok. > If programs that used side effects were uncommon, I'd be fine with them being > troublesome -- but they aren't. Maybe it is just me, but my Haskell programs > invariably develop a need for side effects within a few tens of lines of > code, whether IO, Maybe, or whatnot. And I can't help but think that > language support to make dealing with monads easier -- that is, to integrate > monads with the rest of the language, so as to alleviate the need for > constant lifting -- would be a Good Thing. > > Hmmm. Could I say that Haskell requires "heavy lifting"? 
>
> --
> ### SER
> ### Deutsch|Esperanto|Francaise|Linux|XML|Java|Ruby|Aikido
> ### http://www.germane-software.com/~ser jabber.com:ser ICQ:83578737
> ### GPG: http://www.germane-software.com/~ser/Security/ser_public.gpg

--
http://ofb.net/~frederik/
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell
[Haskell] mailing list headaches
Hi all,

I've been trying for some time to get threading to work properly on this mailing list. The problem is that I don't want to thread by subject in all of my folders, because then messages with short subjects like "hi", "hey", etc., in my personal folders will end up together. However, threading by "References", which RFC 2822 says SHOULD be possible, and which works on my other folders, doesn't work well on Haskell mailing lists. Presumably the issue is that there are a large number of Windows users with strange mail clients which don't insert "References" headers.

After some weeks of squinting, I've ended up settling with the following partial solution in my configuration files (I use Mutt):

    set strict_threads=yes
    folder-hook folders/haskell set strict_threads=no
    folder-hook folders/libraries set strict_threads=no
    folder-hook folders/glasgow-haskell set strict_threads=no
    folder-hook folders/glasgow-haskell-bugs set strict_threads=no

I thought I'd share this feature because a lot of people use Mutt and it makes the Haskell mailing lists a bit easier to follow. It isn't perfect, because threads organized by subject are only one layer deep - you end up getting a list instead of a tree, except that since Mutt uses References where possible it ends up being a list of trees, where at the root of each tree is either a reply to the first message of the thread, or someone with a non-conforming mail client.

Of course, another problem is the mailing list archives, which also try to organize threads by "References" but fail on these lists:

http://www.haskell.org/pipermail/glasgow-haskell-users/2005-June/thread.html

Thus it seems the list archive software also requires fixing or reconfiguration...

Frederik

--
http://ofb.net/~frederik/
[Haskell] [OT] xemacs (was: Re: emacs haskell mode)
On Mon, Sep 05, 2005 at 06:53:21PM -0400, Stefan Monnier wrote:
> > Hmm. I use font-lock, but I use xemacs (21.4). Maybe that's the cause
> > of the difference.
>
> IIRC XEmacs's support for syntax-table text-properties (and more
> specifically for font-lock-syntactic-keywords) has been generally fairly
> late. I'd expect it to be fully working by now, but maybe only in 21.5.
>
> Try Emacs ;-)

I switched from emacs to xemacs because of gnuserv/gnuclient. As I understand it (please correct me), with xemacs "gnuclient" one can attach a "frame" of a server xemacs process to the current terminal. Emacs only supports "emacsclient", which allows you to open a file in a remote server process, but not on the terminal which you run "emacsclient" from, so it's not as useful. The difference means that xemacs can simulate a fast-loading terminal editor, whereas emacs cannot.

There are other problems with xemacs - the default syntax-coloring colors are all mapped to "white" on the terminal, for instance, and one has to manually reassign them to standard terminal colors in order to get syntax-coloring to work. I would switch back to emacs if it were not for the above difference.

Frederik

--
http://ofb.net/~frederik/
[Haskell] Re: emacs haskell mode
On Mon, Sep 05, 2005 at 11:48:12AM -0400, Stefan Monnier wrote:
> > Sorry if I've asked this before, but is version 2.0 of haskell-mode on
> > your website really the newest version?
>
> It's the latest released version. There's newer code in the CVS, of course.
>
> > I ask because the file haskell-mode.el says "Version: 1.43" in the header,
> > and I thought I remembered certain bugs being fixed which are present in
> > your version. For instance, I thought that "\(x)->x" parsed correctly in
> > the latest version, but in the version 2.0 which you distribute, the
> > backslash is still interpreted incorrectly as escaping the parenthesis.
>
> The backslash problem is only fixed if you use font-lock. Do you use
> font-lock? Also it may not work in all versions of Emacs. Which version do
> you use?

Hmm. I use font-lock, but I use xemacs (21.4). Maybe that's the cause of the difference.

Frederik

--
http://ofb.net/~frederik/
[Haskell] emacs haskell mode
Hi Stefan,

Sorry if I've asked this before, but is version 2.0 of haskell-mode on your website really the newest version?

I ask because the file haskell-mode.el says "Version: 1.43" in the header, and I thought I remembered certain bugs being fixed which are present in your version. For instance, I thought that "\(x)->x" parsed correctly in the latest version, but in the version 2.0 which you distribute, the backslash is still interpreted incorrectly as escaping the parenthesis.

Frederik

--
http://ofb.net/~frederik/
Re: [Haskell] offside rule question
On Thu, Jul 14, 2005 at 03:15:32AM +0200, Lennart Augustsson wrote:
> The offside rule is patronizing. :)
> It tries to force you to lay out your program in a certain way.
> If you like that way, good.

I disagree. The offside rule in general makes a more concise syntax available to the programmer, who would probably choose a similar indentation style anyway. The issue that I brought up is a case where the programmer is *prevented* from using a certain syntax, for the sole reason that, if what you say is correct, someone has determined that the prohibition is "good for him".

I dislike such design rationales because they always end up hurting advanced users, who may have atypical needs, but who should ideally play an important role in promoting the language to others; it makes it seem like the plan is instead to hype the language to managers with the intent that they force it on their subordinates as a "regimen" rather than as a flexible tool.

I don't really think that this example is such a big deal, since it is so easy to work around; I just wanted to say what I meant by "patronizing". You'll find a great many better bad examples in "The Design and Evolution of C++". :)

Frederik

> If you don't like that way, you can use {;} as you say.
>
> -- Lennart
>
> Frederik Eaton wrote:
> > Huh, that seems patronizing. Well at least I can override it with {}.
> >
> > Thanks,
> >
> > Frederik
> >
> > On Thu, Jul 14, 2005 at 02:42:53AM +0200, Lennart Augustsson wrote:
> >
> >> That's how it is defined in the Haskell definition.
> >>
> >> But there is a reason. The offside rule (or whatever yoy want to
> >> call it) is there to give visual cues. If you were allowed to override
> >> these easily just because it's parsable in principle then your code
> >> would no longer have these visual cues that make Haskell code fairly
> >> easy to read.
> >>
> >> -- Lennart
Re: [Haskell] offside rule question
Huh, that seems patronizing. Well at least I can override it with {}.

Thanks,

Frederik

On Thu, Jul 14, 2005 at 02:42:53AM +0200, Lennart Augustsson wrote:
> That's how it is defined in the Haskell definition.
>
> But there is a reason. The offside rule (or whatever yoy want to
> call it) is there to give visual cues. If you were allowed to override
> these easily just because it's parsable in principle then your code
> would no longer have these visual cues that make Haskell code fairly
> easy to read.
>
> -- Lennart
>
> Frederik Eaton wrote:
> > Compiling the following module (with ghc) fails with error message
> > "parse error (possibly incorrect indentation)", pointing to the let
> > statement. The error goes away when I indent the lines marked "--*".
> >
> > But I don't understand how what I've written could be ambiguous. If I
> > am inside a parenthesized expression, then I can't possibly start
> > another let-clause. The fact that the compiler won't acknowledge this
> > fact ends up causing a lot of my code to be squished up against the
> > right margin when it seems like it shouldn't have to be.
> >
> > module Main where
> >
> > main :: IO ()
> > main = do
> >   let a = (map (\x->
> >       x+1)    --*
> >       [0..9]) --*
> >   print a
> >   return ()
> >
> > Is there a reason for this behavior or is it just a shortcoming of the
> > compiler?
> >
> > Frederik
[Haskell] offside rule question
Compiling the following module (with ghc) fails with error message "parse error (possibly incorrect indentation)", pointing to the let statement. The error goes away when I indent the lines marked "--*".

But I don't understand how what I've written could be ambiguous. If I am inside a parenthesized expression, then I can't possibly start another let-clause. The fact that the compiler won't acknowledge this fact ends up causing a lot of my code to be squished up against the right margin when it seems like it shouldn't have to be.

module Main where

main :: IO ()
main = do
  let a = (map (\x->
      x+1)    --*
      [0..9]) --*
  print a
  return ()

Is there a reason for this behavior or is it just a shortcoming of the compiler?

Frederik
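For comparison, here is a sketch of the explicit-layout workaround mentioned in the replies: with explicit braces the layout rule is switched off inside the 'let', so the continuation lines can stay at the shallow indentation the original version wanted.

```haskell
module Main where

-- With explicit { } around the let bindings, the layout algorithm no
-- longer treats the continuation lines as new bindings, so they need
-- not be indented past 'a'.
main :: IO ()
main = do
  let { a = map (\x ->
      x+1)
      [0..9] }
  print a
  return ()
```

Running this prints `[1,2,3,4,5,6,7,8,9,10]`, exactly what the rejected layout-based version was trying to say.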
[Haskell] Re: package mounting
(By the way, sorry about cross-posting again. People have already replied to just 'haskell@' and just 'libraries@' but I'll try to stick to 'libraries@' after this since it seems that some users' mail clients show them two copies otherwise)

> This idea has been raised before, but it was a while back, and we called
> it "grafting". Here's the start of the thread, which went on for quite
> some time:
>
> http://www.haskell.org/pipermail/libraries/2003-August/001310.html

Actually it looks like that is slightly different from my idea, perhaps I should have expounded a bit more in my original post. Correct me if I'm wrong, but in my proposal:

- grafting/mounting would be done per compilation unit. In yours it seems it would be done per (user, system).

- configuration of graft/mount points would be done at compile time, zero or one times per package import option. In yours it would be at package install time.

I think my proposal is better. In it there would be no cross-system compatibility issues: since each program would specify where its imported packages get mounted itself, you could write a Haskell program on one system and be assured that someone else could use it on another system without problems. I think this is a rather important property and we shouldn't allow it to be broken - a warning that, as you say, "This wouldn't be recommended though: any source code you write won't compile on someone else's system that doesn't have the same convention" is quite insufficient in my opinion!

Plus, I think the ability to remount packages at non-standard locations is an important one, but for the above reason your proposal makes it too dangerous and therefore unusable in practice.

Our proposals agree on this point:

> The implementation must obey the following rule:
> When compiling a module belonging to a package, that package
> is temporarily grafted into the root of the module hierarchy.

It was kind of "implicit" in my proposal though.
I think the "alternative design" which is mentioned in your proposal is interesting:

> Alternative design: modules in the current package could be specified
> explicitly, perhaps by prefixing them with '.'. This would avoid the
> possibility of overlap between the current package and the global
> hierarchy, at the expense of having to add lots of extra '.'s.

I think that might be a useful feature. Obviously one could introduce the '.' syntax and still allow the present syntax to be used for backward compatibility.

Frederik

--
http://ofb.net/~frederik/
[Haskell] rfc: package mounting
Hi all,

It looks like there's been a bit of recent discussion regarding module and package namespaces. There is a certain possible design feature that I don't think has been mentioned yet, that I think would be very helpful, so I thought I should at least bring it up.

What I want is to be able to build a module namespace for a program out of packages in much the same way that filesystem namespaces are built, namely with mounting operations, rather than just by "union" or "overlay" operations as in the status quo. In other words I would like to be able to specify along with the "-package" option a "mount point" for that package in the module namespace. One possible option syntax might be e.g.

    -package my-graphics-lib -package-base Graphics.UI.MyGraphicsLib

(Also, for backward compatibility and convenience, packages should probably be able to specify a default "mount point", to allow existing compiler command-line syntax to be used.)

The idea is that with such a feature, library packages could get rid of the common module path prefixes which currently must be specified in every module in the library (such as "Graphics.UI.MyGraphicsLib" above). These prefixes would instead be specified once by each user of the library package (unless the default was desired), perhaps after the package import option on the compiler command line. Modules would have simple unqualified names within the library, like "Button" or "Window" which, if the package mount point were specified as say "Graphics.UI.MyGraphicsLib" in a certain compiler invocation, would be mapped to "Graphics.UI.MyGraphicsLib.Button" and "Graphics.UI.MyGraphicsLib.Window" respectively for code compiled by that invocation. But they could just as easily be mapped to "MGL.Button" etc. in a different invocation in a different project if a different mount point were preferred or were necessary to eliminate a namespace collision.

There would be many benefits to being able to do things this way.
First, developers would be able to move shared code across libraries without having to worry about the need to make widespread trivial changes to reflect the new module names. I could copy a 'Debug' or 'Util' module into my library from another library, and not have to go through the code to update the module hierarchy base location - furthermore I could incorporate new upstream changes easily without having to repeat this menial fixing-up procedure each time. While it's true that new version control systems like 'darcs' are meant to handle search-and-replace style changes effectively, I think that as far as this issue goes, a VC-based solution would be less elegant and less usable than what I am proposing.

Second, this would decouple some aspects of the design process that in my opinion shouldn't be coupled. I would be able to start writing a library before deciding on a name, for instance - currently I at least have to stick in a dummy name as the module namespace base to avoid potential conflicts with other library imports while testing. But under this proposal I could just concentrate on building interior, bottom-up functionality first - at the end of the process a certain set of the package modules would be marked for external visibility, would comprise the exterior interface, and would suggest to me a fitting package name. Setting this name would only involve touching the cabal file rather than every single source file in my library. This would also make it easier to merge and split packages.

Third, it would encourage the use of lightweight modules, by reducing the maintenance overhead of each module.
Currently modules are the only way (correct me) to partition parts of the top-level namespace of a program - this is OK except that especially in libraries each module contains a certain amount of administrative paperwork, which is to say that it has to know the name of the library that contains it, because that, or some form of it, has to be part of the module name; and other importers of the module have to specify this information too; and as argued above there is a little work involved in touching up these references when code moves between libraries or when the library name changes. As a result I think people end up sticking more code in the same module at times when multiple modules would have been otherwise more suitable.

Fourth, I think there would be psychological benefits. I think it's a bit patronizing to the programmer that he has to pretend to remind himself "you are in the following package" at the top of each file. I think people can easily enough keep track of that amount of state. It's as if the building code required me to put a sign with the current city and country in each room of my house. These are bits of context that I can easily call to mind if necessary, but which I would sometimes like to temporarily forget about. I believe programming is somewhat the same. We've come a long way from languages like C where one has to decide whether to
[Haskell] Re: runghc, ".hs" suffix
Excellent, thanks. Once you've committed this, it seems like there should be potential for haskell scripts to be used in some of the same situations as e.g. perl scripts. Is 'runhaskell' supposed to be a standard handle to a working haskell interpreter, e.g. possibly runhugs or runnhc? Should people put '#!/usr/bin/env runhaskell' at the top of scripts?

Frederik

On Mon, May 16, 2005 at 03:52:51PM +0100, Simon Marlow wrote:
> I'm working on this. It's not trivial because GHC assumes all over the
> place that the suffix on a file determines how it should be compiled,
> but I've now implemented something similar to gcc's -x flag.
>
> Cheers,
> Simon
>
> On 14 May 2005 20:04, Frederik Eaton wrote:
>
> > (moving to a separate thread)
> >
> > Also, how hard would it be to make it so that runghc doesn't require
> > script file names to end with ".hs"? Some people like to keep their
> > executable names language-neutral.
> >
> > Frederik
> >
> > On Wed, May 11, 2005 at 10:55:03AM +0100, Simon Marlow wrote:
> >> On 11 May 2005 07:37, Frederik Eaton wrote:
> >>
> >>> It looks like runghc is exiting with status 0 if there is a
> >>> problem. Shouldn't this be non-zero?
> >>>
> >>> $ runghc -V
> >>> The Glorious Glasgow Haskell Compilation System, version 6.4
> >>> $ runghc /dev/null; echo $?
> >>> Could not find module `/dev/null':
> >>>   use -v to see a list of the files searched for
> >>>   (one of the roots of the dependency analysis)
> >>>
> >>> :1:76:
> >>>   Failed to load interface for `Main':
> >>>     Could not find module `Main':
> >>>       it is not a module in the current program, or in any known
> >>>       package.
> >>> 0
> >>
> >> It turns out I fixed this bug after 6.4, but later the fix was
> >> accidentally reverted. So thanks for reporting this again, it'll
> >> work properly in 6.4.1.
> >>
> >> Cheers,
> >> Simon

--
http://ofb.net/~frederik/
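As an illustration of the '#!/usr/bin/env runhaskell' idea, here is a minimal script in that style (a sketch; it assumes runhaskell is on the PATH and the file has its executable bit set, and the `greeting` name is mine):

```haskell
#!/usr/bin/env runhaskell
-- A Haskell script in the perl style: no separate compile step,
-- just `chmod +x` the file and run it. runghc/runhaskell (and GHC)
-- skip a leading '#!' line.

greeting :: String
greeting = "hello from a Haskell script"

main :: IO ()
main = putStrLn greeting
```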
Re: [Haskell] instance Bounded Double
Perhaps some motivation is in order. In an interval arithmetic library, I have 'Bounded a' as a constraint on the instance 'Fractional (Interval a)', because an interval of maximum bound can result when dividing by an interval containing zero. In a function solver library I wrote, parameters need to be specified with constraints on their values - so that the natural default can be (minBound, maxBound), I require them to be Bounded.

Having these bounds be plus and minus infinity is elegant, but not necessary for either of these applications, so I think that even if some platforms don't have affine infinities there should still be a suitably defined maxBound and minBound for Double - just maximum and minimum representable values should be fine. (Besides, exactly how many floating point implementations use projective infinities?)

Frederik

On Sun, Mar 13, 2005 at 11:08:26PM +, Thomas Davie wrote:
> I may be barking up the wrong tree here, but I think the key to this
> discussion is that real numbers are not bounded, while doubles are
> bounded. One cannot say what the smallest or largest real number are,
> but one can say what the smallest or largest double are (and it is
> unfortunately implementation specific, and probably pretty messy to set
> up). We could define maxBound as
> (2^(mantisa_space))^(2^(exponent_space)) and min bound pretty
> similarly... But I'm sure that everyone will agree that this is a
> horrible hack.
>
> One may even question whether Doubles should be bounded, in that they
> are an attempt to represent real numbers, and as such should come as
> close as is possible to being real numbers (meaning not having bounds).
>
> Sorry for a possibly irrelevant ramble.
>
> Bob
>
> On Mar 13, 2005, at 11:02 PM, Lennart Augustsson wrote:
>
> > And what would you have minBound and maxBound be?
> > I guess you could use +/- the maximum value representable.
> > Going for infinity is rather dodgy, and assumes an FP
> > representation that has infinity.
> >
> > -- Lennart
> >
> > Frederik Eaton wrote:
> >> Interesting. In that case, I would agree that portability seems like
> >> another reason to define a Bounded instance for Double. That way users
> >> could call 'maxBound' and 'minBound' rather than 1/0 and -(1/0)...
> >>
> >> Frederik
> >>
> >> On Fri, Mar 11, 2005 at 11:10:33AM +0100, Lennart Augustsson wrote:
> >>> Haskell does not guarantee that 1/0 is well defined,
> >>> nor that -(1/0) is different from 1/0.
> >>> While the former is true for IEEE floating point numbers,
> >>> the latter is only true when using affine infinities.
> >>>
> >>> -- Lennart
> >>>
> >>> Frederik Eaton wrote:
> >>>
> >>>> Shouldn't Double, Float, etc. be instances of Bounded?
> >>>>
> >>>> I've declared e.g.
> >>>>
> >>>> instance Bounded Double where
> >>>>   minBound = -(1/0)
> >>>>   maxBound = 1/0
> >>>>
> >>>> in a module where I needed it and there doesn't seem to be any issue
> >>>> with the definition...
> >>>>
> >>>> Frederik

--
http://ofb.net/~frederik/
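As a concrete sketch of the "maximum representable value" option discussed above (an orphan instance written here purely for illustration - base defines no Bounded instance for Double - using the largest finite IEEE-754 double-precision value):

```haskell
-- Illustration only: bound Double by its largest and smallest
-- *finite* values rather than by infinities, so the definition
-- works even on platforms without affine infinities.
instance Bounded Double where
  maxBound =  1.7976931348623157e308
  minBound = -1.7976931348623157e308

main :: IO ()
main = print (minBound :: Double, maxBound :: Double)
```

With this definition both bounds are ordinary finite doubles, so code like the interval-arithmetic example above never has to manipulate infinities at all.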
Re: [Haskell] instance Bounded Double
Interesting. In that case, I would agree that portability seems like another reason to define a Bounded instance for Double. That way users could call 'maxBound' and 'minBound' rather than 1/0 and -(1/0)...

Frederik

On Fri, Mar 11, 2005 at 11:10:33AM +0100, Lennart Augustsson wrote:
> Haskell does not guarantee that 1/0 is well defined,
> nor that -(1/0) is different from 1/0.
> While the former is true for IEEE floating point numbers,
> the latter is only true when using affine infinities.
>
> -- Lennart
>
> Frederik Eaton wrote:
> > Shouldn't Double, Float, etc. be instances of Bounded?
> >
> > I've declared e.g.
> >
> > instance Bounded Double where
> >   minBound = -(1/0)
> >   maxBound = 1/0
> >
> > in a module where I needed it and there doesn't seem to be any issue
> > with the definition...
> >
> > Frederik

--
http://ofb.net/~frederik/
[Haskell] instance Bounded Double
Shouldn't Double, Float, etc. be instances of Bounded?

I've declared e.g.

    instance Bounded Double where
      minBound = -(1/0)
      maxBound = 1/0

in a module where I needed it and there doesn't seem to be any issue with the definition...

Frederik

--
http://ofb.net/~frederik/
[Haskell] signature of when, unless
Wouldn't it be more useful if the type was

    when :: Monad m => Bool -> m a -> m ()

not

    when :: Monad m => Bool -> m () -> m ()

...?

Frederik
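A sketch of the proposed signature (named `when'` here so it does not clash with the standard `when` from Control.Monad):

```haskell
import Control.Monad (void)

-- Like Control.Monad.when, but the action may have any result type;
-- the result is discarded, saving callers a trailing (>> return ()).
when' :: Monad m => Bool -> m a -> m ()
when' True  act = void act
when' False _   = return ()

main :: IO ()
main = do
  when' True  (return 42 :: IO Int)  -- runs; result thrown away
  when' False (putStrLn "skipped")   -- never runs
  putStrLn "done"
```

The standard `Bool -> m () -> m ()` type forces a caller with a `m a` action to discard the result explicitly before calling `when`; this variant does the discarding itself.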