Re: lazily handling exceptions in lazy sources (Re: [Haskell-cafe] Re:

2007-03-15 Thread oleg

> the usual caveats about unsafePerformIO apply, so perhaps you wouldn't want
> to use this in a database library..

Indeed. This is quite problematic, ranging from the practical point of view of
making resources difficult to control (cf. another thread on file
handle leakage), to the theoretical point that side effects and a lazy
evaluation strategy are a bad mix, severely limiting the equational
theory and making the code hard to reason about. I do care about all
of these issues; otherwise I would have programmed in C.

That reminds me of Simon Peyton Jones' POPL 2003 presentation, the
retrospective on Haskell. He said that the fact that lazy evaluation
and side effects are a poor match has kept the designers from adding all
kinds of problematic hacks to the language. Laziness has kept
Haskell pure -- until the monad (notation) came along and showed
how to do side effects in a principled way. If keeping the purity
and keeping unsolved problems open until a principled solution comes
along have worked so well in the past, why change now?

As to the original question
>> Is this really a solution? Currently, getContents reports no errors
>> but does perfect error recovery: the result of the computation prior to
>> the error is preserved and reported to the caller. Imprecise
>> exceptions give us error reporting -- but no error recovery. All
>> previously computed results are lost. Here's a typical scenario:
>> do l <- getContents
>>    return (map process l)

a better (albeit still quite unsatisfactory) answer might be to change the
interface of getContents so that it takes the handler as an argument:
newGetContents :: (Exception -> IO String) -> IO String
The old getContents is equivalent to "newGetContents (const (return []))".
If the handler needs to notify the rest of the program of an error, it
may save the information from the exception in an IORef defined in an
outer scope. If this looks like inversion of control, that's
because it is...
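
For concreteness, a minimal sketch of that inversion of control, assuming the
hypothetical newGetContents above and the old (pre-extensible-exceptions)
Exception type from Control.Exception; getContentsLogged and its IORef argument
are made-up names for illustration only:

import Control.Exception (Exception)
import Data.IORef

newGetContents :: (Exception -> IO String) -> IO String
newGetContents = undefined   -- hypothetical; not in the standard libraries

-- the old getContents: swallow the error and end the string
getContentsOld :: IO String
getContentsOld = newGetContents (const (return []))

-- a handler that records the failure in an IORef supplied by the caller
getContentsLogged :: IORef (Maybe Exception) -> IO String
getContentsLogged err = newGetContents handler
  where handler e = writeIORef err (Just e) >> return []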

Often the problem can be solved via a left-fold enumerator, like the
one in Takusen. In the context of reading a file, such an enumerator is
described in
http://okmij.org/ftp/Haskell/misc.html#fold-stream

One of the examples in that article was specifically reading only a
few characters from a file. With an enumerator, we guarantee that file
handles do not leak, that files are closed at precise and
predictable moments, and that we never read the whole file into memory unless
the programmer specifically wishes to.
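
To give a feel for the shape of such an enumerator (this is only a sketch; the
names and the seed-threading style are illustrative, not the actual Takusen or
fold-stream interface):

import System.IO
import Control.Exception (bracket)

-- the iteratee returns either a final seed (stop early) or a new seed (continue)
type Iteratee seed = seed -> Char -> IO (Either seed seed)

-- the enumerator owns the handle: it opens, drives the iteratee, and closes,
-- so the handle cannot leak and is closed at a predictable moment
enumFile :: FilePath -> Iteratee seed -> seed -> IO seed
enumFile path iter seed0 = bracket (openFile path ReadMode) hClose (loop seed0)
  where
    loop seed h = do
      eof <- hIsEOF h
      if eof then return seed else do
        c <- hGetChar h
        r <- iter seed c
        case r of
          Left done   -> return done    -- iteratee asked to stop
          Right seed' -> loop seed' h   -- keep going

-- example: read at most the first five characters of a file, then stop
firstFive :: FilePath -> IO String
firstFive path = fmap reverse (enumFile path step "")
  where step acc c | length acc >= 4 = return (Left  (c:acc))
                   | otherwise       = return (Right (c:acc))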

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] N and R are categories, no?

2007-03-15 Thread Ulf Norell

On 3/15/07, Steve Downey <[EMAIL PROTECTED]> wrote:


Ok, I'm trying to write down, not another monad tutorial, because I
don't know that much yet, but an explication of my current
understanding of monads.

But before I write down something that is just flat wrong, I thought
I'd get a cross check. (and I can't get to #haskell)

Monads are Functors. Functors are projections from one category to
another such that structure is preserved. One example I have in mind
is the embedding of the natural numbers into the real numbers. The
mapping is so good, that we don't flinch at saying 1 == 1.0.



Monads are endofunctors (functors from one category to itself). This is easy
to see from the type of join:

join : m (m a) -> m a

For Haskell monads the category is the category of Haskell types and Haskell
functions. In this category N and R are objects, so you'll get the wrong
idea trying to see them as categories.
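
As a quick Haskell rendering of that type (join is exported by Control.Monad;
the definition via (>>=) below is the standard one):

join :: Monad m => m (m a) -> m a
join mma = mma >>= id

-- e.g. join (Just (Just 3)) == Just 3,   join [[1,2],[3]] == [1,2,3]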

/ Ulf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] SYB vs HList (again)

2007-03-15 Thread Bas van Dijk

2007/3/15, S. Alexander Jacobson <[EMAIL PROTECTED]>:

...
Any opinions on these issues would be very appreciated.
...


Maybe the relational database system CoddFish which uses HList
provides some insights:

http://haskell.org/haskellwiki/Libraries_and_tools/Database_interfaces/CoddFish

They also had to deal with default values to model SQL attributes that
can contain NULL values.

Regards,

Bas van Dijk.

Offtopic:
I'm trying to use CoddFish in a HAppS application I'm developing but I
can't get it to compile under ghc-6.6 :-(
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Matthew Brecknell
Ketil Malde:
> Perhaps this is an esoteric way, but I think the nicest approach is to 
> parse into a strict structure.  If you fully evaluate each Email (or 
> whatever structure you parse into), there will be no unevaluated thunks 
> linking to the file, and it will be closed.

Not necessarily so, since you are making assumptions about the
timeliness of garbage collection. I was similarly sceptical of Claus'
suggestion:

Claus Reinke:
> in order to keep the overall structure, one could move readFile backwards
> and parseEmail forwards in the pipeline, until the two meet. then make sure
> that parseEmail completely constructs the internal representation of each
> email, thereby keeping no implicit references to the external representation.

So here's a test. I don't have any big maildirs handy, so this is based
on the simple exercise of printing the first line of each of a large
number of files. First, the preamble.

> import Control.Exception (bracket)
> import System.Environment
> import System.IO

> main = do
>   t:n:fs <- getArgs
>   ([test0,test1,test2,test3] !! read t) (take (read n) $ cycle fs)

The following example corresponds to Pete's original program. As
expected, when called with a sufficiently large number of files, it
always results in file handle exhaustion without producing any output:

> test0 files = mapM readFile files >>= mapM_ (putStrLn.head.lines)

The next example corresponds (I think) to Claus' suggestion, in which
the readFile and putStrLn are performed at the same point in the
pipeline. I found that sometimes this runs without error, but other
times it fails with file handle exhaustion. This seems to depend on the
mood of the garbage collector, or at least the external conditions in
which the garbage collector operates. It also appears to fail more
frequently for small files. Without any knowledge of garbage collector
internals, I'm guessing that this is because readFile reads in 8K
chunks. For files significantly smaller than 8K, garbage collection
cycles are likely to be much less frequent, and therefore there is a
greater likelihood of file handle exhaustion between GC cycles.

> test1 files = mapM_ doStuff files where
>   doStuff f = readFile f >>= putStrLn.head.lines

The third is similar to the second, except it adds strictness
annotations to force the file to be read to the end. As expected, this
saves me from file handle exhaustion, but it is grossly inefficient for
large files.

> test2 files = mapM_ doStuff files where
>   doStuff f = do
> contents <- readFile f
> putStrLn $ head $ lines contents
> return $! force contents
>   force (x:xs) = force xs
>   force [] = ()

In the fourth example, I explicitly close the file handle. This also
saves me from file handle exhaustion, but I must be careful to force
everything I need to be read before returning. Returning a lazy
computation would be no good, as discovered in [1]. In this case,
putStrLn does all the forcing I need.

> test3 files = mapM_ bracketStuff files where
>   bracketStuff f = bracket (openFile f ReadMode) hClose doStuff
>   doStuff h = hGetContents h >>= putStrLn.head.lines

As Oleg points out in [2], all of the above have the problem that it is
impossible to tell the difference between a read error and end-of-file.
I had intended to write an example using explicitly sequenced I/O, but
Oleg has saved me the trouble with the post he made just now [3].

[1]http://www.haskell.org/pipermail/haskell-cafe/2007-March/023189.html
[2]http://www.haskell.org/pipermail/haskell-cafe/2007-March/023073.html
[3]http://www.haskell.org/pipermail/haskell-cafe/2007-March/023523.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Bertram Felgenhauer

On 3/14/07, Pete Kazmier <[EMAIL PROTECTED]> wrote:

When using readFile to process a large number of files, I am exceeding
the resource limits for the maximum number of open file descriptors on
my system.  How can I enhance my program to deal with this situation
without making significant changes?


I made it work with 20k files with only minor modifications.


> type Subject = String
> data Email   = Email {from :: From, subject :: Subject} deriving Show


It has been pointed out that parseEmail would work better if it were
strict; the easiest way to accomplish this seems to be to replace the
above line by

data Email   = Email {from :: !From, subject :: !Subject} deriving Show

[snip]


> fileContentsOfDirectory :: FilePath -> IO [String]
> fileContentsOfDirectory dir =
>     setCurrentDirectory  dir >>
>     getDirectoryContents dir >>=
>     filterM doesFileExist    >>=  -- ignore directories
>     mapM readFile


And here's another culprit - readFile actually opens the file before
any of its output is used. So I imported System.IO.Unsafe and replaced
the last line above by

   mapM (unsafeInterleaveIO . readFile)

With these two changes the program seems to work fine.

HTH,

Bertram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Ketil Malde

Matthew Brecknell wrote:

Ketil Malde:
  
Perhaps this is an esoteric way, but I think the nicest approach is to 
parse into a strict structure.  If you fully evaluate each Email (or 
whatever structure you parse into), there will be no unevaluated thunks 
linking to the file, and it will be closed.



Not necessarily so, since you are making assumptions about the
timeliness of garbage collection. 
Good point.  I'd have hoped that the RTS would GC on file handle 
exhaustion, but perhaps this is hard to do?


I think parsing input strictly is good practice anyway, since parsing is 
often not very compute intensive, and tends to reduce the memory 
required.  At least, that is my experience.


-k
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Claus Reinke

Not necessarily so, since you are making assumptions about the
timeliness of garbage collection. I was similarly sceptical of Claus'
suggestion:

Claus Reinke:

in order to keep the overall structure, one could move readFile backwards
and parseEmail forwards in the pipeline, until the two meet. then make sure
that parseEmail completely constructs the internal representation of each
email, thereby keeping no implicit references to the external representation.


you are quite right to be skeptical!-) indeed, in the latest Handle documentation, 
we still find the following excuse for GHC:


http://www.haskell.org/ghc/docs/latest/html/libraries/base/System-IO.html#t%3AHandle

   GHC note: a Handle will be automatically closed when the garbage collector 
   detects that it has become unreferenced by the program. However, relying on 
   this behaviour is not generally recommended: the garbage collector is unpredictable. 
   If possible, use an explicit hClose to close Handles when they are no longer 
   required. GHC does not currently attempt to free up file descriptors when they have 
   run out, it is your responsibility to ensure that this doesn't happen. 


this issue has been discussed in the past, and i consider it a bug if the memory
manager tells me to handle memory myself;-) so i do hope that this infelicity will
be removed in the future (run out of file descriptors -> run a garbage collection
and try again, before giving up entirely).

in fact, my local version had two variants of processFile - the one i posted and
one with explicit file handle handling (the code was restructured this way exactly
to hide this implementation decision in a single function). i did test both variants
on a directory with lots of copies of a few emails (>2000 files), and both worked
on my system, so i hoped -rather than checked- that the handle collection issue
had finally been fixed, and made the mistake of removing the more complex
variant before posting. thanks for pointing out that error - as the documentation
above demonstrates, it isn't good to rely on assumptions, nor on tests.

so here is the alternate variant of processFile (for which i imported System.IO):


processFile path = do
  f <- openFile path ReadMode
  text <- hGetContents f
  let email = parseEmail text
  email `seq` hClose f
  return email


all this hassle to expose a file handle to call hClose on, just so that the GC 
does not have to..


thanks,
claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Donald Bruce Stewart
claus.reinke:
> >Not necessarily so, since you are making assumptions about the
> >timeliness of garbage collection. I was similarly sceptical of Claus'
> >suggestion:
> >
> >Claus Reinke:
> >>in order to keep the overall structure, one could move readFile backwards
> >>and parseEmail forwards in the pipeline, until the two meet. then make 
> >>sure
> >>that parseEmail completely constructs the internal representation of each
> >>email, thereby keeping no implicit references to the external 
> >>representation.
> 
> you are quite right to be skeptical!-) indeed, in the latest Handle 
> documentation, we still find the following excuse for GHC:
> 
> http://www.haskell.org/ghc/docs/latest/html/libraries/base/System-IO.html#t%3AHandle
> 
>GHC note: a Handle will be automatically closed when the garbage 
>collector detects that it has become unreferenced by the program. 
>However, relying on this behaviour is not generally recommended: the 
>garbage collector is unpredictable. If possible, use an explicit
>hClose to close Handles when they are no longer required. GHC 
>does not currently attempt to free up file descriptors when they have 
>run out, it is your responsibility to ensure that this doesn't happen. 
> this issue has been discussed in the past, and i consider it a bug if the 
> memory
> manager tells me to handle memory myself;-) so i do hope that this 
> infelicity will
> be removed in the future (run out of file descriptors -> run a garbage 
> collection
> and try again, before giving up entirely).
> 
> in fact, my local version had two variants of processFile - the one i 
> posted and
> one with explicit file handle handling (the code was restructured this way 
> exactly
> to hide this implementation decision in a single function). i did test both 
> variants
> on a directory with lots of copies of a few emails (>2000 files), and both 
> worked
> on my system, so i hoped -rather than checked- that the handle collection 
> issue
> had finally been fixed, and made the mistake of removing the more complex
> variant before posting. thanks for pointing out that error - as the 
> documentation
> above demonstrates, it isn't good to rely on assumptions, nor on tests.
> 
> so here is the alternate variant of processFile (for which i imported 
> System.IO):
> 
> >processFile path = do
> >  f <- openFile path ReadMode
> >  text <- hGetContents f
> >  let email = parseEmail text
> >  email `seq` hClose f
> >  return email
> 
> all this hazzle to expose a file handle to call hClose on, just so that the 
> GC does not have to..
> 

Are we at the point that we should consider adding some documentation on
how to deal with this issue? And are the recommendations either to use
strict IO (should we have a package for System.IO.Strict?) or to demand
strictness in the consumer of the data?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Bertram Felgenhauer
Ketil Malde wrote:
> Bertram Felgenhauer wrote:
> >>> type Subject = String
> >>> data Email   = Email {from :: From, subject :: Subject} deriving Show
> >data Email   = Email {from :: !From, subject :: !Subject} deriving Show
> ...except that From and Subject are Strings, and thus the strictness 
> annotation only forces WHNF.  I.e., you also need to modify parseEmail 
> to force these.
> 
> -k

You're right. Actually, the program will be strict enough if the From
header always precedes the Subject header (which was the case in my
tests), but that's not immediately obvious.

Modifying getHeader to force its result is the clean solution, say:

  getHeader = forceString . fromMaybe "N/A" . flip lookup headers
  forceString s = length s `seq` s

Having to rely on GC to close the fds quickly enough is another problem;
can this be solved on the library side, maybe by performing GCs when
running out of FDs?

Bertram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Make strict (IO String) lazy

2007-03-15 Thread Henning Thielemann

 On the one hand, in the standard libraries there are functions like
readFile, getContents, hGetContents which read a file lazily. This is
often a nice feature, but sometimes leads to unexpected results, say when
reading a file and overwriting it with modified contents. Unfortunately
the standard libraries provide no functions for strict reading, and one
has to do this manually.
 On the other hand, when I write some IO function that returns a String, I
easily end up with a function which produces the String in a strict way.
(Say I call some shell commands and concatenate their outputs.)
 What is the preferred way to turn a strict (IO String) into a lazy one?
forkIO? forkOS? How would one derive readFile from a hypothetical
strictReadFile?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Make strict (IO String) lazy

2007-03-15 Thread Björn Bringert

Henning Thielemann wrote:

 On the one hand, in the standard libraries there are functions like
readFile, getContents, hGetContents which read a file lazily. This is
often a nice feature, but sometimes lead to unexpected results, say when
reading a file and overwriting it with modified contents. Unfortunately
the standard libraries provide no functions for strict reading, and one
has to do this manually.
 On the other hand, when I write some IO function that returns a String, I
easily end up with a function which produces the String in a strict way.
(Say I call some shell commands and concatenate their outputs.)
 What is the preferred way to turn a strict (IO String) into a lazy one?
forkIO? forkOS? How would one derive readFile from a hypothetical
strictReadFile?


Perhaps I misunderstood you, but wouldn't using fork* just make it 
nondeterministic, not lazy? unsafeInterleaveIO is the way to go, though 
it won't allow you to write readFile using strictReadFile. Rather, it 
allows you to write readFile using hGetChar. unsafeInterleaveIO . 
strictReadFile is not lazy enough, since it reads the whole file when 
you force the head of the string.
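
For illustration, a rough model of that hGetChar-based approach (only a sketch of
the idea, not the real hGetContents implementation; note that the handle stays
open until the entire string has been demanded):

import System.IO
import System.IO.Unsafe (unsafeInterleaveIO)

lazyRead :: Handle -> IO String
lazyRead h = unsafeInterleaveIO $ do
  eof <- hIsEOF h
  if eof
    then hClose h >> return []
    else do c  <- hGetChar h
            cs <- lazyRead h      -- deferred until the tail is demanded
            return (c:cs)

lazyReadFile :: FilePath -> IO String
lazyReadFile path = openFile path ReadMode >>= lazyRead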


/Björn
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Make strict (IO String) lazy

2007-03-15 Thread Donald Bruce Stewart
lemming:
> 
>  On the one hand, in the standard libraries there are functions like
> readFile, getContents, hGetContents which read a file lazily. This is
> often a nice feature, but sometimes lead to unexpected results, say when
> reading a file and overwriting it with modified contents. Unfortunately
> the standard libraries provide no functions for strict reading, and one
> has to do this manually.

Data.ByteString.readFile is strict, and then:

import Control.Monad (liftM)
import qualified Data.ByteString.Char8 as B   -- Char8 so that unpack gives a String

strictReadFile :: FilePath -> IO String
strictReadFile = liftM B.unpack . B.readFile

>  On the other hand, when I write some IO function that returns a String, I
> easily end up with a function which produces the String in a strict way.
> (Say I call some shell commands and concatenate their outputs.)
>  What is the preferred way to turn a strict (IO String) into a lazy one?
> forkIO? forkOS? How would one derive readFile from a hypothetical
> strictReadFile?

unsafeInterleaveIO strict reads?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lazy IO and closing of file handles

2007-03-15 Thread Claus Reinke

http://www.haskell.org/ghc/docs/latest/html/libraries/base/System-IO.html#t%3AHandle

   GHC note: a Handle will be automatically closed when the garbage
   collector detects that it has become unreferenced by the program.
   However, relying on this behaviour is not generally recommended: the
   garbage collector is unpredictable. If possible, use an explicit
   hClose to close Handles when they are no longer required. GHC
   does not currently attempt to free up file descriptors when they have
   run out, it is your responsibility to ensure that this doesn't happen.



this issue has been discussed in the past, and i consider it a bug if the
memory manager tells me to handle memory myself;-) so i do hope that this
infelicity will be removed in the future (run out of file descriptors -> run a
garbage collection and try again, before giving up entirely).



Are we at the point that we should consider adding some documentation on
how to deal with this issue? And are the recommendations either to use
strict IO (should we have a package for System.IO.Strict?) or to demand
strictness in the consumer of the data?


i'm all for having a readFileNow, right next to readFile. apart from that, it
might be sufficient to mention explicitly, in the lazy i/o docs, that

   - lazy i/o and strict i/o are separate approaches to i/o
   - lazy i/o is more abstract, strict i/o gives better control of resources
   - mixing lazy and strict i/o is to be approached with special attention,
     because the strict i/o exposes features that are assumed to be hidden
     when using lazy i/o

(should it be asynchronous vs synchronous i/o, instead of lazy vs strict?)
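
just to make that concrete, a minimal sketch of what such a readFileNow could
look like (an illustration only, not a worked-out library proposal):

import System.IO

readFileNow :: FilePath -> IO String
readFileNow path = do
  h <- openFile path ReadMode
  s <- hGetContents h
  length s `seq` hClose h   -- force the whole contents before closing
  return s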

as for the specific issue at hand: i've seen software with thick folders of
well-written manuals explaining all the intricacies of using said software. and
i've seen software which was so obvious to use that it needed hardly any
printed manuals. guess which one i prefer?-)

in good old Hugs, for instance, we find in function newHandle in src/iomonad.c
http://cvs.haskell.org/cgi-bin/cvsweb.cgi/hugs98/src/iomonad.c?rev=1.104;content-type=text%2Fx-cvsweb-markup;f=h;only_with_tag=MAIN

   /* return a free Handle or throw an IOError */
   /* Search for unused handle */
   /* If at first we don't     */
   /* succeed, garbage collect */
   /* and try again ...        */
   /* ... before we give up    */

so, instead of documenting limitations and workarounds, this issue should be
fixed in GHC as well. in the meantime, the existing documentation of the GHC
issue with handles is not easy to notice because readFile does not even mention
handles, and their docs are in System.IO, not in Prelude.

readFile refers to getContents, which refers to hGetContents stdin, which explains
when handles are semi-closed and closed, but doesn't mention the implications
discussed in the Handle docs.

my suggestion would be that all operations that might leak handles simply ought
to have their docs include a direct link to the Handle docs, as in "see notes on
possible file handle leakage". perhaps the Handle docs are also the right place
for the notes on lazy vs strict i/o, with appropriate links to that section ("see
notes on lazy vs strict i/o")?

claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Foralls in records

2007-03-15 Thread Adde
Thanks! Serious food for thought.
Ten years of object oriented brainwashing to undo :)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] N and R are categories, no?

2007-03-15 Thread Nicolas Frisby

That said, N and R are indeed categories; however, considering them as
categories should be carefully interlaced with your intuitions about
them as types.

I haven't formally checked it, but I would bet that this endofunctor
over N, called Sign, is a monad:

 Sign x = x + x
 Pos = injectLeft
 Neg = injectRight

 unit = Pos
 join (Pos (Pos n)) = Pos n
 join (Pos (Neg n)) = Neg n
 join (Neg (Pos n)) = Neg n
 join (Neg (Neg n)) = Pos n

Pos and Neg are just labels for sign. I'm assuming N is the naturals,
not the integers; thus this monad might actually be useful :). Also
note that this means there is not necessarily a mapping from F x -> x.
Neg 3 should not necessarily map to 3. Also, this structure
probably satisfies many more laws than just the monad laws--e.g.
monoids or monoidals.
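
To check the intuition, a quick (unofficial) Haskell rendering of Sign -- it is
essentially a writer monad over the two-element group of signs, with the element
type standing in for N:

import Control.Applicative (Applicative(..))

data Sign a = Pos a | Neg a deriving Show

instance Functor Sign where
  fmap f (Pos x) = Pos (f x)
  fmap f (Neg x) = Neg (f x)

joinSign :: Sign (Sign a) -> Sign a   -- signs multiply
joinSign (Pos (Pos x)) = Pos x
joinSign (Pos (Neg x)) = Neg x
joinSign (Neg (Pos x)) = Neg x
joinSign (Neg (Neg x)) = Pos x

instance Applicative Sign where
  pure = Pos
  f <*> x = joinSign (fmap (\g -> fmap g x) f)

instance Monad Sign where
  return = pure
  m >>= f = joinSign (fmap f m)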

So while it might not always make sense to consider N and R as
categories when learning about category theory and Haskell, it might
be helpful to learn about monads (and other notions) in categories
simpler than the Fun category of functional types and partial
functions--N and R could be good categories for such learning.
Have fun!

On 3/15/07, Ulf Norell <[EMAIL PROTECTED]> wrote:



On 3/15/07, Steve Downey <[EMAIL PROTECTED]> wrote:
> Ok, I'm trying to write down, not another monad tutorial, because I
> don't know that much yet, but an explication of my current
> understanding of monads.
>
> But before I write down something that is just flat wrong, I thought
> I'd get a cross check. (and I can't get to #haskell)
>
> Monads are Functors. Functors are projections from one category to
> another such that structure is preserved. One example I have in mind
> is the embedding of the natural numbers into the real numbers. The
> mapping is so good, that we don't flinch at saying 1 == 1.0.

 Monads are endofunctors (functors from one category to itself). This is
easy to see from the type of join:

 join : m (m a) -> m a

 For Haskell monads the category is the category of Haskell types and
Haskell functions. In this category N and R are objects, so you'll get the
wrong idea trying to see them as categories.

 / Ulf


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell beginner

2007-03-15 Thread arnuld

i want to start learning Haskell and am willing to master it :-)  i have
done some Common Lisp from http://www.gigamonkeys.com/book/. so i know
what functions, variables, etc. are. BUT i have never done any real
life programming or any kind of software development.

in my country no Haskell books are available :-( , hence i can only
use online Tutorials. my main purpose is to learn real-life
programming by contributing some "Haskell coding" to a GPLed Software
written using Haskell.

i have found "Yet Another Haskell Tutorial" after searching the archives.

is it really a good idea to learn Haskell using only Online material ?
(when one does not have any offline resources)



--
http://arnuld.blogspot.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell beginner

2007-03-15 Thread Scott Williams

Hi Arnuld,

It's certainly possible. I'm new to the language too and learning it much
the same way. It sounds like you don't have a lot of experience programming.
I think this is an asset, not a liability. I would be interested in what
concepts are the hardest to understand so the community can develop better
tutorials for users without a lot of functional programming and abstract
math background.

On 3/15/07, arnuld <[EMAIL PROTECTED]> wrote:


i want to start learning Haskell and willing to master it :-)  i have
done some Common Lisp from http://www.gigamonkeys.com/book/. so i know
what are functions, variables, etc. BUT i have never done any real
life programmming or any kind of software development.

in my country no Haskell books are available :-( , hence i can only
use online Tutorials. my main purpose is to learn real-life
programming by contributing some "Haskell coding" to a GPLed Software
written using Haskell.

i have found "Yet Another Haskell Tutorial" after searching the archives.

is it really a good idea to learn Haskell using only Online material ?
(when one does not have any offline resources)



--
http://arnuld.blogspot.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





--
Scott Williams
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell beginner

2007-03-15 Thread Jefferson Heard
Yet another Haskell Tutorial is the way I learned it.  I find that subscribing 
to the mailing lists and reading the tutorial material is generally speaking 
enough.

On Thursday 15 March 2007 14:30:03 arnuld wrote:
> i want to start learning Haskell and willing to master it :-)  i have
> done some Common Lisp from http://www.gigamonkeys.com/book/. so i know
> what are functions, variables, etc. BUT i have never done any real
> life programmming or any kind of software development.
>
> in my country no Haskell books are available :-( , hence i can only
> use online Tutorials. my main purpose is to learn real-life
> programming by contributing some "Haskell coding" to a GPLed Software
> written using Haskell.
>
> i have found "Yet Another Haskell Tutorial" after searching the archives.
>
> is it really a good idea to learn Haskell using only Online material ?
> (when one does not have any offline resources)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell beginner

2007-03-15 Thread Paul Johnson

arnuld wrote:
i want to start learning Haskell and willing to master it :-)  


Also check out the Wikibook at 
http://en.wikibooks.org/wiki/Programming:Haskell


Do call back here if you need help.  The Wikibook people would also 
appreciate feedback on what you found easy or difficult to understand.


Paul.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] How do I avoid stack overflows?

2007-03-15 Thread DavidA
Hi.

I'm trying to write some code which involves lots of matrix multiplications, 
but whenever I do too many, I get stack overflows (in both GHCi 6.4.2, and 
Hugs May 2006). The following code exhibits the problem.

import List (transpose)

u <.> v = sum $ zipWith (*) u v

a <<*>> b = multMx a (transpose b)
  where
    multMx [] _ = []
    multMx (u:us) bT = map (u <.>) bT : multMx us bT

id3 = [[1,0,0],[0,1,0],[0,0,1]]

test = iterate (<<*>> id3) id3 !! 100

I tried to fix the problem using seq, as follows:

iterate' f x = x : seq x' (iterate' f x') where x' = f x

test' = iterate' (<<*>> id3) id3 !! 100

However, in both cases, the code causes stack overflows in both interpreters. 
(It sometimes kills GHCi, which I guess is a bug.)

Any ideas?

Thanks.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Lack of expressiveness in kinds?

2007-03-15 Thread Andrew Wagner

Ok, so I'm inching closer to understanding monads, and this question
popped up today. Consider the following 2 declarations:

data Foo a = Bar a
data (Ord a) => Baz a = Bah a

Note that both of these have kind * -> *. However, Baz could never be
an instance of monad, because there is a restriction on the types it
can operate on. Foo, however, is completely polymorphic, without
limitation. It seems to me that there ought to be a way to express the
difference between the two in the type/kind system. For example, you
can almost, but not quite, say that in the declaration "class Monad m
where..", m must be of kind *->*, but that's not quite enough to say,
because of this example. Am I just missing something, or is there a
reason the kind of Baz shouldn't be something other than *->*?
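
To see the mismatch concretely (a sketch only; the exact compiler message will
differ):

data Foo a = Bar a
data (Ord a) => Baz a = Bah a

instance Monad Foo where
  return      = Bar
  Bar x >>= f = f x

-- instance Monad Baz where
--   return = Bah    -- rejected: return :: a -> Baz a must work for *every* a,
--                   -- but the constructor Bah demands (Ord a)
--   Bah x >>= f = f x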
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell beginner

2007-03-15 Thread Sebastian Sylvan

On 3/15/07, arnuld <[EMAIL PROTECTED]> wrote:

i want to start learning Haskell and willing to master it :-)  i have
done some Common Lisp from http://www.gigamonkeys.com/book/. so i know
what are functions, variables, etc. BUT i have never done any real
life programmming or any kind of software development.

in my country no Haskell books are available :-( , hence i can only
use online Tutorials. my main purpose is to learn real-life
programming by contributing some "Haskell coding" to a GPLed Software
written using Haskell.

i have found "Yet Another Haskell Tutorial" after searching the archives.

is it really a good idea to learn Haskell using only Online material ?
(when one does not have any offline resources)



Hang out in #haskell in IRC (freenode). That's probably the best
resource you'll ever find when learning Haskell.

So here's the strategy:
1. Read tutorials
2. When you get stuck, ask on #haskell
3. Goto 1

(yes I deliberately wrote an imperative algorithm using gotos to
describe how to learn Haskell, and I found it amusing)


--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell beginner

2007-03-15 Thread Andrew Wagner

This is all a good idea, but I've found that I've never learned nearly
as much as when I started bashing out some code. So I highly recommend
starting up some project that's interesting to you too.



Hang out in #haskell in IRC (freenode). That's probably the best
resource you'll ever find when learning Haskell.

So here's the strategy:
1. Read tutorials
2. When you get stuck, ask on #haskell
3. Goto 1

(yes I deliberately wrote an imperative algorithm using gotos to
describe how to learn Haskell, and I found it amusing)


--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] N and R are categories, no?

2007-03-15 Thread Dominic Steinitz
> I haven't formally checked it, but I would bet that this endofunctor
> over N, called Sign, is a monad:

Just to be picky, a functor isn't a monad. A monad is a triple consisting of a 
functor and 2 natural transformations which make certain diagrams commute.

If you are looking for examples, I always think that a partially ordered set 
is a good one because the objects don't have any elements. A functor is then an 
order-preserving map between 2 ordered sets and a monad is then a closure operator 
(http://en.wikipedia.org/wiki/Closure_operator) - I didn't know this latter 
fact until I just looked it up.
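
Spelling that out: for a poset (P, <=) viewed as a category, the monad data reads
exactly as the closure-operator axioms:

\begin{align*}
  \text{functor (monotone):} \quad & x \le y \implies c(x) \le c(y) \\
  \text{unit } \eta:          \quad & x \le c(x) \\
  \text{multiplication } \mu: \quad & c(c(x)) \le c(x)
    \qquad \text{(with } \eta \text{ this gives } c(c(x)) = c(x)\text{)}
\end{align*}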

Dominic.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Haskell beginner

2007-03-15 Thread Bulat Ziganshin
Hello Sebastian,

Friday, March 16, 2007, 12:15:09 AM, you wrote:
> So here's the strategy:
> 1. Read tutorials
> 2. When you get stuck, ask on #haskell
> 3. Goto 1

> (yes I deliberately wrote an imperative algorithm using gotos to
> describe how to learn Haskell, and I found it amusing)

knowledge <- iterateM [read tutorial >>= ask_irc]

-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Generalised merge

2007-03-15 Thread Paul Johnson
The most common kind of primitive recursive function I find myself 
writing these days is a variation on the theme of merging two sorted lists.


You can see some examples in my Ranged Sets library at 
http://ranged-sets.sourceforge.net/.  For instance:


-- | Set union for ranged sets.  Infix precedence is left 6.
rSetUnion, (-\/-) :: DiscreteOrdered v => RSet v -> RSet v -> RSet v
-- Implementation note: rSetUnion merges the two lists into a single
-- sorted list and then calls normalise to combine overlapping ranges.
rSetUnion (RSet ls1) (RSet ls2) = RSet $ normalise $ merge ls1 ls2
   where
      merge ls1 [] = ls1
      merge [] ls2 = ls2
      merge ls1@(h1:t1) ls2@(h2:t2) =
         if h1 < h2
            then h1 : merge t1 ls2
            else h2 : merge ls1 t2

-- | Set intersection for ranged sets.  Infix precedence is left 7.
rSetIntersection, (-/\-) :: DiscreteOrdered v => RSet v -> RSet v -> RSet v
rSetIntersection (RSet ls1) (RSet ls2) =
   RSet $ filter (not . rangeIsEmpty) $ merge ls1 ls2
   where
      merge ls1@(h1:t1) ls2@(h2:t2) =
         rangeIntersection h1 h2
            : if rangeUpper h1 < rangeUpper h2
                 then merge t1 ls2
                 else merge ls1 t2
      merge _ _ = []


Union also has its own merge function.

The worst case I've come across was at work.  I can't talk about the 
details, but it involved manipulating two functions of time represented 
by lists of samples.   So I had a type TimeFunc = [(Value, Time)], and 
the job was to compare two TimeFuncs with samples at different times.  
Step 1 was to interpolate each TimeFunc with the values for the times in 
the other TimeFunc, giving a result of type [(Value, Value, Time)] for 
the union of all the times in both original TimeFuncs.  I wrote a truly 
hairy zipTimeFunc function with guards to match each possible case.  It 
worked, but it must have been 100 lines if you include the comments to 
explain each case and demonstrate totality.


So I'm wondering if anyone has a more general pattern.  Unfold?  Some 
variation on a theme of zip?  I once tried writing a generalised merge, 
but it needed half a dozen functions as arguments to handle all the 
various cases.  It was kludgier than just rolling a new merge routine 
every time.  And I don't think it would have handled the TimeFunc case.
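
One possible shape, just as a sketch (mergeBy and its argument names are made up
here; the three combining functions are the "half a dozen arguments" problem in
miniature):

-- a generalised merge over two sorted lists: the caller supplies the comparison
-- and says what to emit for a left-only element, a right-only element, or a
-- matching pair
mergeBy :: (a -> b -> Ordering)
        -> (a -> c)          -- left only
        -> (b -> c)          -- right only
        -> (a -> b -> c)     -- both
        -> [a] -> [b] -> [c]
mergeBy cmp left right both = go
  where
    go xs []               = map left xs
    go [] ys               = map right ys
    go xs@(x:xt) ys@(y:yt) = case cmp x y of
      LT -> left  x   : go xt ys
      GT -> right y   : go xs yt
      EQ -> both  x y : go xt yt

-- the TimeFunc alignment could use 'both' to pair up coincident samples and
-- 'left'/'right' to interpolate the missing side (interpolation itself not shown)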


Paul.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How do I avoid stack overflows?

2007-03-15 Thread Claus Reinke
I'm trying to write some code which involves lots of matrix multiplications, 
but whenever I do too many, I get stack overflows (in both GHCi 6.4.2, and 
Hugs May 2006). The following code exhibits the problem.
.. 

I tried to fix the problem using seq, as follows:

iterate' f x = x : seq x' (iterate' f x') where x' = f x


since you're working with lists, and nested at that, seq isn't going to buy much
(it'll evaluate the matrix to being non-empty, without forcing its rows, let alone
elements). you might find the recent thread on avoiding temporary arrays interesting:

http://www.haskell.org/pipermail/haskell-cafe/2007-March/023286.html

if you like to stay with binary lists, you might want to consider using strict
lists (no unevaluated heads or tails). i happen to have some strict list code lying
around from that earlier thread which addresses your problem:-)


hth,
claus

import List (transpose)

u <.> v = summulS u v

a <<*>> b = multMx a (transposeS b)
  where
    multMx Nil _        = Nil
    multMx (u :< us) bT = mapS (u <.>) bT :< multMx us bT

id3 = fromList $ map fromList [[1,0,0],[0,1,0],[0,0,1]]

iterate' f x = x : seq x' (iterate' f x') where x' = f x

test' = toList $ mapS toList $ iterate' (<<*>> id3) id3 !! 100

-- copy in strict list type and operations

infixr 5 :<
data SL a = Nil | !a :< !(SL a) deriving Show -- head- and spine-strict lists

fromList = foldr (:<) Nil
toList Nil       = []
toList (x :< xs) = x : toList xs

mapS f Nil       = Nil
mapS f (x :< xs) = f x :< mapS f xs

summulS u v = go 0 u v   -- strict sum of products
  where go acc (x :< xs) (y :< ys) = go (acc + x*y) xs ys
        go acc _ _                 = acc

headS (h :< _) = h
tailS (_ :< t) = t

transposeS :: SL (SL a) -> SL (SL a)
transposeS Nil          = Nil
transposeS (Nil :< xss) = transposeS xss
transposeS ((x :< xs) :< xss) =
  (x :< mapS headS xss) :< transposeS (xs :< mapS tailS xss)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] N and R are categories, no?

2007-03-15 Thread Nicolas Frisby

Thanks for keeping me honest ;)

On 3/15/07, Dominic Steinitz <[EMAIL PROTECTED]> wrote:

> I haven't formally checked it, but I would bet that this endofunctor
> over N, called Sign, is a monad:

Just to be picky a functor isn't a monad. A monad is a triple consisting of a
functor and 2 natural transformations which make certain diagrams commute.

If you are looking for examples, I always think that a partially ordered set
is a good because the objects don't have any elements. A functor is then an
order preserving map between 2 ordered sets and monad is then a closure
(http://en.wikipedia.org/wiki/Closure_operator) - I didn't know this latter
fact until I just looked it up.

Dominic.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Lack of expressiveness in kinds?

2007-03-15 Thread oleg

Andrew Wagner wrote
> data Foo a = Bar a
> data (Ord a) => Baz a = Bah a
>
> Note that both of these have kind * -> *. However, Baz could never be
> an instance of monad, because there is a restriction on the types it
> can operate on.

There is a wide-spread opinion that one ought not to give a context to a
data type declaration (please search for `restricted datatypes
Haskell'). Someone said that in the GHC typechecker such contexts are called
stupidctx. There has been a proposal to remove that feature from
Haskell, although I like it as a specification tool.  John Hughes
wrote a paper about a better use for that feature:

  John Hughes. 1999. Restricted datatypes in Haskell.
  In Proceedings of the 1999 Haskell workshop, ed. Erik Meijer. Technical
  Report UU-CS-1999-28, Department of Computer Science, Utrecht University.
  http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps
That proposal has not been implemented.

One should point out that restricted monads are available in
Haskell right now:
 http://www.haskell.org/pipermail/haskell-prime/2006-February/000498.html

It seems one can even use the do-notation for them, with the help of
the `rebindable syntax' feature of GHC. This is because the types of
restricted bind and return are exactly the same as those of regular
bind and return. Only the `Monad' constraint is a bit different.
Restricted monads are a strict superset of ordinary monads, so
backwards compatibility is maintained.
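
For a flavour of what this looks like, here is a small sketch in the spirit of
those proposals (Suitable/SuitableBind are made-up names, not the exact classes
from the message linked above), using Data.Set as the classic restricted monad:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import qualified Data.Set as Set

-- return and bind keep their usual shapes; only the class constraints change
class Suitable m a where
  returnR :: a -> m a

class (Suitable m a, Suitable m b) => SuitableBind m a b where
  bindR :: m a -> (a -> m b) -> m b

-- Set becomes a "monad" over Ord element types only
instance Ord a => Suitable Set.Set a where
  returnR = Set.singleton

instance (Ord a, Ord b) => SuitableBind Set.Set a b where
  bindR s f = Set.unions (map f (Set.toList s))

-- ordinary monads are recovered by instances that place no constraint on a
instance Suitable [] a where
  returnR x = [x]
instance SuitableBind [] a b where
  bindR = flip concatMap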

One almost wishes for a fuller integration of restricted monads into
the language...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell beginner

2007-03-15 Thread Albert Y. C. Lai

Sebastian Sylvan wrote:

So here's the strategy:
1. Read tutorials
2. When you get stuck, ask on #haskell
3. Goto 1

(yes I deliberately wrote an imperative algorithm using gotos to
describe how to learn Haskell, and I found it amusing)


It is just a standard transformation away from a politically correct 
description:


Learn haskell by:
0. Read tutorials
1. When you get stuck, ask on #haskell
2. Learn haskell.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Trouble trying to find packages for ubuntu linux

2007-03-15 Thread Chad Scherrer

Brian,

I had this exact problem, and I found this approach to work wonderfully:

http://pupeno.com/2006/12/17/unstable-packages-on-ubuntu/


Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Hs-Generics] FW: [Haskell-cafe] SYB vs HList (again)

2007-03-15 Thread oleg

[Please follow-up to [EMAIL PROTECTED]

S. Alexander Jacobson wrote:
> HLists require you to define Labels and basically only use label
> values that are themselves either scalar or HLists.
> ...
> With SYB you create field labels using newtype (or data) declarations
> e.g.
>
>data Salary = S {salary::Float}
>
> With HList, label declarations are really verbose e.g.
>
>data SalaryLabel deriving(Typeable)
>type Salary = Field (Proxy SalaryLabel) Int
>salary = proxy :: Proxy FooLabel

Actually there is no requirement that HList record names must be
scalar `labels', must be Proxies, or require such a complex
declaration. From HList's high-level point of view, any collection can be a
record provided the type of each item is unique and there is some way
to extract the value associated with that type. The HList library
provides two implementations of Records (and there was one more,
obsolete now). There could be more. For example, I have just committed
yet another implementation,
http://darcs.haskell.org/HList/src/RecordD.hs
Here a record is a list of things that have a type and a value and
provide a way to extract that value. The example from the end of this
file seems worth quoting:

> data Name  = Name String String deriving Show
> newtype Salary = S Float deriving Show
> data Dept  = D String Int deriving Show
>
> person = (Name "Joe" "Doe") .*. (S 1000) .*. (D "CIO" 123) .*. emptyRecord
>
> -- could be derived automatically, like Typeable...
> instance Fieldish Name (String,String) where 
> fromField (Name s1 s2) = (s1,s2)
> instance Fieldish Salary Float where
> fromField (S n) = n
> instance Fieldish Dept (String,Int) where
> fromField (D s n) = (s,n)
>
> test1 = show person
> -- When a field acts as a label, only its type matters, not the contents
> test2 = person .!. (Name undefined undefined)
> test3 = person .!. (undefined::Salary)
> test5 = person .!. (D "xxx" 111)



> I don't know exactly how HList handles default values but I assume you
> can restrict use of those values to explicit deserialization contexts.
> Is that correct?

I'm not sure what you mean about the restriction of default values to
deserialization contexts. Anyway, HList provides a left-biased union
of two records: hLeftUnion r1 r2 is the record r1 augmented with all
the fields from r2 that didn't occur in r2. One may consider r2 to be
the record with default fields and the corresponding values.

> It would be really nice if there was some way to tell Haskell that
> HLists have no more fields than the ones you happen to be getting and
> setting in your code. Effectively that would mean you get data
> structure inference not just function type inference which would be
> really cool!

I'm not sure I follow. Could you outline an example of the code you
wish work? Incidentally, a lot of the library depends on the record
types being members of some specific classes. One can define

> newtype ClosedRecord r = ClosedRecord r

To make a ClosedRecord to be a record from which we can extract the
values of some fields, we merely need to say
> instance HasField l r v => HasField l (ClosedRecord r) v
> where hLookupByLabel l (ClosedRecord r) = hLookupByLabel l r

Since we did not make this record a member of HExtend or HAppend, it
is not extensible.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] How do I avoid stack overflows?

2007-03-15 Thread oleg

DavidA wrote:
> I'm trying to write some code which involves lots of matrix multiplications,
> but whenever I do too many, I get stack overflows (in both GHCi 6.4.2, and
> Hugs May 2006).

By placing a couple of strictness annotations, your test' gives the
expected answer (given some time) on Hugs. GHCi unfortunately runs
into some kind of bug (it says so itself), an unimplemented opcode.
The test'' below gives that bug instantly...


import List (transpose)

-- not needed here
-- foldl' f z [] = z
-- foldl' f z (h:t) = (foldl' f $! f h z) t
-- sum' l = foldl' (+) 0 l

map' f [] = []
map' f (h:t) = scons (f h) (map' f t)

-- strict cons. Could be associated with an infix op, e.g., :$
scons :: a -> [a] -> [a]
scons x l | x `seq` l `seq` False = undefined
scons x l = x:l


u <.> v = sum $ zipWith (*) u v

a <<*>> b = multMx a (transpose b)
  where
    multMx [] _ = []
    multMx (u:us) bT = scons (map' (u <.>) bT) (multMx us bT)

id3 = [[1,0,0],[0,1,0],[0,0,1]]

-- test = iterate (<<*>> id3) id3 !! 100

iterate' f x = x : seq x' (iterate' f x') where x' = f x
test' = iterate' (<<*>> id3) id3 !! 100

test'' = head $ drop 100 $ iterate' (<<*>> id3) id3
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe