Re: [Haskell-cafe] System.Posix, forked processes and pseudo terminals on Linux

2010-08-20 Thread Donn Cave
Quoth Erik de Castro Lopo ,
...
> The code below *almost* works. It's currently printing out:
>
> parent : Forked child was here!
> parent : Message from parent.
> Read 21 bytes
>
> while I think it should print:
>
> parent : Forked child was here!
> parent : Read 21 bytes
>
> Any clues on why 'Message from parent.' is also ending up on
> stdout? This is ghc-6.12.1 on Debian Linux.

My guess is that the default tty attributes include ECHO.  So the
data you write to the master fd is echoed back, as though by the
forked process but actually by the terminal driver.  You can turn
ECHO off.
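
For example, something along these lines should do it (a minimal
sketch only; disableEcho is a name I've made up, and you would call
it on the pty fd before doing the writes):

import System.Posix.Terminal
import System.Posix.Types (Fd)

-- Clear the ECHO flag so the terminal driver stops reflecting
-- back the data written to the other end of the pty.
disableEcho :: Fd -> IO ()
disableEcho fd = do
    attrs <- getTerminalAttributes fd
    setTerminalAttributes fd (attrs `withoutMode` EnableEcho) Immediately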

Donn Cave, d...@avvanta.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] System.Posix, forked processes and pseudo terminals on Linux

2010-08-20 Thread Erik de Castro Lopo
Hi all,

I've got a bit of code below that's not quite working as I expect
it to. The basic idea is that it opens a master/slave pair of
pseudo terminals, forks a child process and then performs
bi-directional communication between the parent and the child
via the master/slave pseudo terminal.

The code really does need to forkProcess, because once the
comms is working I want to exec another process in the child.
I also intend to dup the file descriptors so that stdin, stdout
and stderr all point to the slave's end of the pty.
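
For the record, that dup-ing step would look roughly like this (a
minimal sketch; redirectToSlave is just a name I've made up):

import System.Posix.IO (dupTo, stdInput, stdOutput, stdError)
import System.Posix.Types (Fd)

-- Point the child's stdin, stdout and stderr at the slave end of the pty.
redirectToSlave :: Fd -> IO ()
redirectToSlave slave = do
    _ <- dupTo slave stdInput
    _ <- dupTo slave stdOutput
    _ <- dupTo slave stdError
    return ()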

The code below *almost* works. It's currently printing out:

parent : Forked child was here!
parent : Message from parent.
Read 21 bytes

while I think it should print:

parent : Forked child was here!
parent : Read 21 bytes

Any clues on why 'Message from parent.' is also ending up on
stdout? This is ghc-6.12.1 on Debian Linux.

Cheers,
Erik

import System.Posix.IO
import System.Posix.Process
import System.Posix.Terminal
import System.Posix.Types

main :: IO ()
main
 = do   (master, slave) <- openPseudoTerminal
        _childId <- forkProcess $ forkedChild (master, slave)
        closeFd slave
        runParent master

runParent :: Fd -> IO ()
runParent fd
 = do   (str, _) <- fdRead fd 1024
        putStr $ "parent : " ++ str
        _ <- fdWrite fd "Message from parent.\n"
        (str2, _) <- fdRead fd 1024
        putStr $ "parent : " ++ str2

forkedChild :: (Fd, Fd) -> IO ()
forkedChild (master, fd)
 = do   closeFd master
        _ <- fdWrite fd "Forked child was here!\n"
        (_, count) <- fdRead fd 1024
        _ <- fdWrite fd $ "Read " ++ show count ++ " bytes\n"
        closeFd fd


-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread John Millikin
On Fri, Aug 20, 2010 at 21:02, Paulo Tanimoto  wrote:
> Hi John,
>
> What do you think of putting those parsing functions like head, last,
> length, etc, under another module or, alternatively, putting the main
> definitions under another module (say, Base or Core)?  I wouldn't mind
> if they all get re-exported.
>
> I say that because the library aims to be minimalistic, so it would
> be nice to import only the core parts.  That makes it easy to avoid
> some name clashes as well.

My goal isn't to be "minimalistic", necessarily, I just don't want to
drag in huge dependencies like haskell98. Ideally, the API would be
something like that of "bytestring" or "text" -- easy to understand,
comprehensive, and with large dependencies factored out to related
modules (like "text-icu" or "bytestring-mmap"). Having lots (and lots
and lots) of exports is OK, as long as it's easy for users to
understand how they work.

Regarding my comment for the parsing functions: I'm starting to think
that's wrong. They're not for "parsing" so much as general data
manipulation. Especially cases like dropWhile or peek, which are
useful to all sorts of data types which can't really be "parsed".

The next release will probably see an expansion and documentation of
that section, though some of the more useless ones (length, last) will
be removed unless anybody speaks up.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread Felipe Lessa
On Sat, Aug 21, 2010 at 12:30 AM, John Millikin  wrote:
> Just released 0.2. It has the text IO and codecs module, with support
> for ASCII, ISO-8859-1, UTF-8, UTF-16, and UTF-32. It should be
> relatively easy to add support for codec libraries like libicu or
> libiconv in the future. Both encoding and decoding are incremental, so
> you can (for example) process million-line logfiles in constant space.

I think it would be nice to say in the docs that a constant sized
buffer isn't used.

Alas, Data.Text.IO.hGetLine internally uses Data.Text.concat.  This
means that you need to do an additional copy whenever a newline is not
found in the first buffer.  So there's a performance reason to have an
hGet as well =).

> This also changes the binary enumHandle to use non-blocking IO, as
> recommended by Magnus Therning. I'm embarrassed to admit I still don't
> understand the improvement, exactly, but three people so far have told
> me it's a good idea.

Me neither =).

Cheers!

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread Paulo Tanimoto
Hi John,

What do you think of putting those parsing functions like head, last,
length, etc, under another module or, alternatively, putting the main
definitions under another module (say, Base or Core)?  I wouldn't mind
if they all get re-exported.

I say that because the library aims to be minimalistic, so it would
be nice to import only the core parts.  That makes it easy to avoid
some name clashes as well.

Take care,

Paulo
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread John Millikin
Just released 0.2. It has the text IO and codecs module, with support
for ASCII, ISO-8859-1, UTF-8, UTF-16, and UTF-32. It should be
relatively easy to add support for codec libraries like libicu or
libiconv in the future. Both encoding and decoding are incremental, so
you can (for example) process million-line logfiles in constant space.

Examples/wc.hs has been updated to use this decoding module for its
"character count" mode, which should allow users to see how it's used.
Basically, you use 'joinI' to flatten the iteratees returned from
enumeratees. The joinI / enumeratee style is used for implementing
nested streams.

This also changes the binary enumHandle to use non-blocking IO, as
recommended by Magnus Therning. I'm embarrassed to admit I still don't
understand the improvement, exactly, but three people so far have told
me it's a good idea.

As always, API docs and a literate PDF are available at:

http://ianen.org/haskell/enumerator/api-docs/
http://ianen.org/haskell/enumerator/enumerator.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Equality constraints and RankNTypes - how do I assist type inference

2010-08-20 Thread Brandon S Allbery KF8NH
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 8/20/10 16:08 , DavidA wrote:
> type Tensor u v =
> (u ~ Vect k a, v ~ Vect k b) => Vect k (TensorBasis a b) -- **

IIRC this actually substitutes as

  (forall k a b. (u ~ Vect k a, v ~ Vect k b) => Vect k (TensorBasis a b))

and the implicit forall will generally mess things up because it won't be
floated out to the top level.  (Or in other words, constraints in type
declarations don't generally do what you intend.)

- -- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.10 (Darwin)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkxvHvoACgkQIn7hlCsL25V7/gCgt9NaWZBFV7VFCYbs5Q6hqgGG
ke0AoNFQU6VOXboK7daFI6IAgUiyKfGx
=JL3q
-END PGP SIGNATURE-
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A cabal odyssey

2010-08-20 Thread Ivan Lazar Miljenovic
Andrew Coppin  writes:

> Duncan Coutts wrote:
>> Yup, there's a ticket for it.
>>   
>
> In fact, there appears to be a ticket for every single thing I
> originally mentioned. And they're all ancient tickets too. So,
> yeah... nothing to do here.
>
> (Unless you're suggesting that I should try to actually *fix* these
> things. The way I figure it, if an army of developers who are already
> experts on the subject haven't been able to fix it yet, it must be
> extremely hard, and so there's no way *I* can fix it.)

Or maybe they have other things to do (e.g. Duncan is working, finishing
off his PhD thesis and answering queries like this; when do you expect
him to get any hacking done? :p).

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: cabal-install profiling libraries

2010-08-20 Thread Jason Dagit
On Fri, Aug 20, 2010 at 7:14 AM, Johan Tibell wrote:

> On Fri, Aug 20, 2010 at 4:07 PM, Johannes Waldmann <
> waldm...@imn.htwk-leipzig.de> wrote:
>
>> Of course I understand "lack of developer time".
>> Could any of this be forked out as student projects?
>>
>
> These kind of projects are perfect for Google Summer of Code. We had two
> Cabal projects this year (Hackage 2 and unit testing support).
>
> The next GSoC is quite far in the future (9 months or so) but if we created
> some well written proposals for Cabal features we'd like to see implemented
> well in time for next year's GSoC we could get some students to work on
> them.
>

I would like to encourage this workflow.  Plan the writeup, project
specification and whatnot, as if we were going to get GSoC students to do
the work.  In the best case, someone (anyone whether they are a GSoC student
or not) comes along and says, "Oh, what a well written proposal.  I'll go
implement it!"  In the worst case we never find anyone to implement the
proposal, but this is the open source world and if something is really
valuable someone usually comes by to implement it.

On the downside, sometimes it's harder to specify the correct
behaviour / implementation of such features in a document than it is to
actually implement them.

Eventually these documents could even help future generations of cabal devs
understand why things are the way they are.

Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: text-json-qq 0.2.0 (now with haskell-src-meta goodness)

2010-08-20 Thread Oscar Finnsson
Hi,

I've just uploaded a new version of text-json-qq, the json quasiquoter.

Now it's possible (thanks to haskell-src-meta) to insert Haskell code
inside the qq-code:

> myCode = [$jsonQQ| {age: <| age + 34 :: Integer |>, name: <| map toUpper name |>} |]
>   where age  = 34 :: Integer
>         name = "Pelle"

For further info read the documentation at
http://hackage.haskell.org/package/text-json-qq

or read/fork the source code at
http://github.com/finnsson/text-json-qq

-- Oscar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread John Millikin
On Fri, Aug 20, 2010 at 14:58, Magnus Therning  wrote:
> Indeed.
>
> In many protocols it would force the attacker to send well-formed requests
> though.  I think this is true for many text-based protocols like
> HTTP.
>
> The looping can be handled effectively through hWaitForInput.
>
> There are also other reasons for doing non-blocking IO, not least that it
> makes developing and manual testing a lot nicer.

I think I'm failing to understand something.

Using a non-blocking read doesn't change how the iteratees react to
well- or mal-formed requests. All it does is change the failure
condition from "blocked indefinitely" to "looping indefinitely".

Replacing the hGet with a combination of hWaitForInput /
hGetNonBlocking would cause a third failure condition, "looping
indefinitely with periodic blocks". This doesn't seem to be an
improvement over simply blocking.
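
For concreteness, the combination being discussed would look roughly
like this (a sketch only; readChunk is a made-up name, not part of any
package):

import qualified Data.ByteString as B
import System.IO (Handle, hWaitForInput)

-- Wait up to 'ms' milliseconds for input, then take whatever bytes
-- are immediately available (possibly none), without blocking further.
readChunk :: Handle -> Int -> Int -> IO B.ByteString
readChunk h ms n = do
    ready <- hWaitForInput h ms
    if ready
        then B.hGetNonBlocking h n
        else return B.empty

An enumerator built on that still has to loop while readChunk keeps
returning empty chunks, which is the "periodic blocks" failure mode
described above.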

Do you have any example code which works well using a non-blocking
enumerator, but fails with a blocking one?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: binary-generic-0.2, generic binary serialisation using binary and syb.

2010-08-20 Thread Alexey Khudyakov

On 21.08.2010 01:38, Lars Petersen wrote:

* Float and Double are serialised big-endian according to binary-ieee754


I'd like to point out that binary-ieee754 is dead slow [1] and not usable
when one cares about performance. IMHO the lack of a way to serialize
floating point data in IEEE754 format is one of the problems of the
binary package.


[1] ~30 times slower than reading/writing with peek/poke, although I had
to patch binary for that.
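
For reference, the kind of round trip being measured is roughly the
following (a minimal sketch using binary-ieee754's big-endian
getter/putter; the test value is arbitrary):

import Data.Binary.Get (runGet)
import Data.Binary.Put (runPut)
import Data.Binary.IEEE754 (getFloat64be, putFloat64be)

-- Encode a Double to its big-endian IEEE754 representation and back.
roundTrip :: Double -> Double
roundTrip x = runGet getFloat64be (runPut (putFloat64be x))

main :: IO ()
main = print (roundTrip 3.14159)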


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread Magnus Therning
On 20/08/10 22:32, John Millikin wrote:
> On Fri, Aug 20, 2010 at 12:52, Magnus Therning  wrote:
>> You don't need to send that much data, the current implementation of
>> Enumerator uses hGet, which blocks, so just send the server a few bytes and
>> it'll be sitting there waiting for input until it times out (if ever).
>> Open a few hundred of those connections and you're likely to cause the
>> server to run out of FDs.  Of course this is already coded up in tools like
>> slowloris[1] :-)
>
> Correct me if I'm wrong, but I'm pretty sure changing the implementation to
> something non-blocking like hGetNonBlocking will not fix this. Hooking up an
> iteratee to an enumerator which doesn't block will cause it to loop forever,
> which is arguably worse than simply blocking.
>
> The best way I can think of to defeat a handle-exhaustion attack is to
> enforce a timeout on HTTP header parsing, using something like
> System.Timeout. This protects against slowloris, since requiring the
> entire header to be parsed within some fixed small period of time
> prevents the socket from being held open via slowly-trickled headers.

Indeed.

In many protocols it would force the attacker to send well-formed requests
though.  I think this is true for many text-based protocols like
HTTP.

The looping can be handled effectively through hWaitForInput.

There are also other reasons for doing non-blocking IO, not least that it
makes developing and manual testing a lot nicer.

/M

-- 
Magnus Therning(OpenPGP: 0xAB4DFBA4)
magnus@therning.org   Jabber: magnus@therning.org
http://therning.org/magnus identi.ca|twitter: magthe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: binary-generic-0.2, generic binary serialisation using binary and syb.

2010-08-20 Thread Lars Petersen


Hello cafe,

although there was no announcement for version 0.1 there is one for 
the current 0.2:


binary-generic allows one to perform binary serialisation without
explicitly defining every type-specific case.
If an algebraic type has an instance of the 'Data' class, the library is
able to serialise it in a canonical way.


Unfortunately version 0.2 is not binary compatible with 0.1. I decided
to break this for the sake of simplicity: in 0.2 all multibyte values
are encoded big-endian. For future versions I'll try to supply
compatibility functions even if something changes, but since 0.1 is not
even a week old, I think it's not necessary this time.


Okay, features: Common primitive types are supported out of the box:

* Char, Word, Int are serialised as big-endian, taken from Data.Binary
* Float and Double are serialised big-endian according to 
binary-ieee754

* Integer is serialized as in Data.Binary, but consistently big-endian
* Data.ByteString as it is
* Data.Text as Utf8

For types that are not supported yet, 'Data.Binary.Extensions' describes
an easy way to add extensions. You are also free to override certain
choices and supply your own.


If you think there are more types that should be supported without 
explicit extension, drop me a line.



Cheers,
 Lars

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread John Millikin
On Fri, Aug 20, 2010 at 12:52, Magnus Therning  wrote:
> You don't need to send that much data, the current implementation of
> Enumerator uses hGet, which blocks, so just send the server a few bytes and
> it'll be sitting there waiting for input until it times out (if ever).
> Open a
> few hundred of those connections and you're likely to cause the server
> to run
> out of FDs.  Of course this is already coded up in tools like
> slowloris[1] :-)

Correct me if I'm wrong, but I'm pretty sure changing the
implementation to something non-blocking like hGetNonBlocking will not
fix this. Hooking up an iteratee to an enumerator which doesn't block
will cause it to loop forever, which is arguably worse than simply
blocking.

The best way I can think of to defeat a handle-exhaustion attack is to
enforce a timeout on HTTP header parsing, using something like
System.Timeout. This protects against slowloris, since requiring the
entire header to be parsed within some fixed small period of time
prevents the socket from being held open via slowly-trickled headers.
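
A rough sketch of that idea (parseHeaders and handleRequest below are
hypothetical stand-ins for a real server's parsing and dispatch code):

import System.Timeout (timeout)

-- Hypothetical stand-ins for the real parser and handler.
parseHeaders :: IO [String]
parseHeaders = return ["Host: example.com"]

handleRequest :: [String] -> IO ()
handleRequest hs = putStrLn ("got " ++ show (length hs) ++ " header(s)")

-- Give the client at most ten seconds to deliver complete headers;
-- a slowloris-style trickle hits the timeout and the connection is dropped.
main :: IO ()
main = do
    mheaders <- timeout (10 * 1000000) parseHeaders
    case mheaders of
        Nothing      -> putStrLn "header timeout - dropping connection"
        Just headers -> handleRequest headers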
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Equality constraints and RankNTypes - how do I assist type inference

2010-08-20 Thread DavidA
Hi,

I have the following code, using equality constraints and (I believe) 
RankNTypes:

{-# LANGUAGE MultiParamTypeClasses, TypeFamilies,
             RankNTypes, ExistentialQuantification #-}
{-# LANGUAGE FlexibleInstances, TypeSynonymInstances #-}

-- import Math.Algebra.Group.PermutationGroup

-- Vector space over field k with basis b
data Vect k b = V [(b,k)] deriving (Eq,Show)

data TensorBasis a b = T a b deriving (Eq, Ord, Show)

-- Tensor product of two vector spaces
type Tensor u v =
    (u ~ Vect k a, v ~ Vect k b) => Vect k (TensorBasis a b) -- **

class Algebra k v where -- "v is a k-algebra"
    unit :: k -> v
    mult :: Tensor v v -> v

type GroupAlgebra k = Vect k Int -- (Permutation Int)

instance Num k => Algebra k (GroupAlgebra k) where
    unit 0 = V []
    unit x = V [(1,x)]
    mult (V ts) = V [(g*h,x) | (T g h, x) <- ts]

Everything is fine except for the last line,
which causes the following error message:

Couldn't match expected type `Tensor (GroupAlgebra k) (GroupAlgebra k)'
       against inferred type `Vect k1 b'
In the pattern: V ts
In the definition of `mult':
    mult (V ts) = V [(g * h, x) | (T g h, x) <- ts]
In the instance declaration for `Algebra k (GroupAlgebra k)'

But as far as I can tell, I've told it that these two types are the same,
at the line marked -- **.
How do I help it out with type inference? ("It", in this case, is GHCi 6.12.1.)

Any ideas?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread Magnus Therning
On 20/08/10 17:30, Felipe Lessa wrote:
> On Fri, Aug 20, 2010 at 1:12 PM, John Millikin  wrote:
>> This thought occurred to me, but really, how often are you going to
>> have a 10 GiB **text** file with no newlines? Remember, this is for
>> text (log files, INI-style configs, plain .txt), not binary (HTML,
>> XML, JSON). Off the top of my head, I can't think of any case where
>> you'd expect to see 10 GiB in a single line.
>>
>> In the worst case, you can just use "decode" to process bytes coming
>> from the ByteString-based enumHandle, which should give nicely chunked
>> text.
>
> I was thinking about an attacker, not a use case.  Think of a web
> server accepting queries using iteratees internally.  This may open
> door to at least DoS attacks.

You don't need to send that much data, the current implementation of
Enumerator uses hGet, which blocks, so just send the server a few bytes and
it'll be sitting there waiting for input until it times out (if ever).
Open a
few hundred of those connections and you're likely to cause the server
to run
out of FDs.  Of course this is already coded up in tools like
slowloris[1] :-)

/M

[1] http://ha.ckers.org/slowloris/
-- 
Magnus Therning(OpenPGP: 0xAB4DFBA4)
magnus@therning.org   Jabber: magnus@therning.org
http://therning.org/magnus identi.ca|twitter: magthe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A cabal odyssey

2010-08-20 Thread Andrew Coppin

wren ng thornton wrote:

Andrew Coppin wrote:
I guess I just figured that since Cabal is used by hundreds of 
millions of people every single day, any little glitches I might have 
come across have already been seen by at least 1,000 people before me 
(and hence, the developers already know about it and just haven't had 
time to fix it yet).


Which is why, when filing a report, you scan/search the bug tracker 
first to make sure you're not filing a duplicate :)


Just remember, those thousands of other people are probably thinking 
the same thing you are. This is well a well-documented phenomenon in 
more serious circumstances:


http://en.wikipedia.org/wiki/Bystander_effect



Well, I just had a look, and it seems every issue I've mentioned already 
has a (very old) ticket. So apparently these are all very well-known 
issues (and presumably too hard to fix).


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A cabal odyssey

2010-08-20 Thread Andrew Coppin

Duncan Coutts wrote:

Yes, when cabal runs haddock on a package, it generates a comprehensive
index if none is present or expands it with the new docs.
Quite cool that :)

  

It's something I've always _wanted_ Cabal to do, but this is the first time
I've ever seen it happen. I don't know what particularly I did to make this
happen, and now it seems to be gone, so...



If you have documentation enabled then it is recreated every time you
install a package.
  


That's what I was expecting to happen... but no. Each package gets its 
documentation generated, but the master index I deleted seems to be gone 
forever.



(Though only for user packages, since we have not yet worked out
somewhere sensible we can stick a global index).
  


Uh... maybe that's it then? Yes, I think I changed it from local to 
global. It was putting all the binaries and documentation under 
Documents and Settings. I changed the install type to global, and it 
went back to putting stuff under Program Files\Haskell like it always 
used to.


I'm not sure what "somewhere sensible" is supposed to mean; until I 
deleted it, the master index was under Program Files\Haskell\doc, right 
next to all the globally-installed packages. Or did you mean there isn't 
a good place on Unix?



Yup, there's a ticket for it.
  


In fact, there appears to be a ticket for every single thing I 
originally mentioned. And they're all ancient tickets too. So, yeah... 
nothing to do here.


(Unless you're suggesting that I should try to actually *fix* these 
things. The way I figure it, if an army of developers who are already 
experts on the subject haven't been able to fix it yet, it must be 
extremely hard, and so there's no way *I* can fix it.)



If you have documentation enabled (ie use --enable-documentation on
the command line, or have "documentation: True" in the ~/.cabal/config
file) then docs get created for each package you install, and the
haddock index/contents of all installed docs gets updated.
  


Right. I still get documentation for each package, just no master index. 
That's the way it always used to work, and that apparently is the way it 
works again now...



I imagine it's so that each package can be placed in a completely arbitrary
place in the filesystem, and the links still work. I'd actually be surprised
if these URLs work on Linux either; they don't appear to follow the requisite
web standards.



You may be right, or perhaps URL syntax is just liberal enough to let
unix style paths work. It's still a bug of course that we're not using
the file:// protocol which makes it not work on windows. I filed it
here:
http://hackage.haskell.org/trac/hackage/ticket/516#comment:6
  


Yeah, I believe at least under HTTP, "/" refers to the root folder of 
the current server, so that probably works for an absolute path. "C:\" 
isn't going to be valid without a protocol spec. (I actually cannot 
remember now whether the Windows paths had forward or backward slashes.) 
I think either removing the drive spec or adding a protocol spec should 
fix this; the latter would seem more "correct". (Again, this is a 
Haddock issue rather than Cabal, isn't it?)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Unix emulation [ANNOUNCE: Sifflet visual programming language, release 1.0!]

2010-08-20 Thread Andrew Coppin

Henk-Jan van Tuyl wrote:
Curl compiles without problems on my Windows XP system. There is a 
HaskellWiki page [0] that describes how to compile packages with Unix 
scripts on Windows systems.


I did once try setting up MinGW and MSYS, just to see if I could make it 
work. But after many, many hours of trying to comprehend the terse 
documentation, I finally gave up. It's just too hard to get it to work. 
(I never even got as far as *trying* to build anything; I just couldn't 
install the tools.)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Code that writes code

2010-08-20 Thread Andrew Coppin

Graham Klyne wrote:

Maybe not helpful to you at this stage, but...

An alternative to generating source code is to factor out the common 
"boilerplate" elements into separate functions, suitably 
parameterized, and to use higher order functions to stitch these 
together.


Well, yeah, if you've got so much boilerplate that you have to automate
generating the boilerplate, you're probably doing it wrong. ;-)


All I'm actually using it to do is generate a set of fixed-size 
containers (each of which has a bazillion class instances). I've got a 
variable-sized container, but sometimes it's useful to statically 
guarantee that a container is a specific size. In addition, by being 
fixed-size you can get a few small performance gains. That's really all 
I'm using autogeneration for.


I suppose instead of building an ADT for each container size, I could 
just write a newtype over the variable-size container and put a phantom 
type on it representing the size... That would give me the static 
guarantees but not the efficiency.
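
Something like the following sketch, I suppose (all names hypothetical,
with a plain list standing in for the variable-size container):

{-# LANGUAGE EmptyDataDecls #-}

-- A newtype over the variable-size container, with a phantom parameter
-- recording the size at the type level.
newtype Sized n a = Sized [a] deriving Show

data Three   -- type-level tag meaning "exactly three elements"

mkThree :: a -> a -> a -> Sized Three a
mkThree x y z = Sized [x, y, z]

-- Only containers statically known to hold three elements are accepted.
sumThree :: Num a => Sized Three a -> a
sumThree (Sized xs) = sum xs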


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] feasability of implementing an awk interpreter.

2010-08-20 Thread Don Stewart
There's a lot of examples of languages implemented in Haskell to choose
from, too


http://haskell.org/haskellwiki/Applications_and_libraries/Compilers_and_interpreters#Large_languages

michael:
> Thank you all for your encouragement. I need to think about the core
> functionality, and do some reading.
> 
> On Fri, Aug 20, 2010 at 2:33 AM, Josef Svenningsson
>  wrote:
> > On Fri, Aug 20, 2010 at 6:05 AM, Jason Dagit  wrote:
> >>
> >>
> >> On Thu, Aug 19, 2010 at 8:05 PM, Michael Litchard 
> >> wrote:
> >>>
> >>> I'd like the community to give me feedback on the difficulty level of
> >>> implementing an awk interpreter. What language features would be
> >>> required? Specifically I'm hoping that TH is not necessary because I'm
> >>> nowhere near that skill level.
> >>
> > Implementing an awk interpreter in Haskell can be a fun project. I have a
> > half finished implementation lying around on the hard drive. It's perfectly
> > possible to implement it without using any super fancy language features.
> > But as other people have pointed out, monads are helpful for dealing with a
> > lot of the plumbing in the interpreter.
> >>>
> >>> An outline of a possible approach would be appreciated. I am using
> >>> http://www.math.utah.edu/docs/info/gawk_toc.html
> >>> as a guide to the language description.
> >>
> >> You might also focus on the 'core' of awk.  Think about, what is the
> >> minimal language and start from there.  Grow your implementation adding
> >> features bit by bit.  It's also a good opportunity to do testing.  You have
> >> a reference implementation and so you can write lots of tests for each
> >> feature as you add them.
> >
> > When I wrote my awk interpreter I decided to go for the whole language from
> > start. I had reasons for doing this as there were certain aspects of this
> > that I wanted to capture but it is not they way I would recommend going
> > about it. I definitely second Jason's advice at trying to capture the core
> > functionality first.
> > Have fun,
> > Josef
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
> 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] feasability of implementing an awk interpreter.

2010-08-20 Thread Michael Litchard
Thank you all for your encouragement. I need to think about the core
functionality, and do some reading.

On Fri, Aug 20, 2010 at 2:33 AM, Josef Svenningsson
 wrote:
> On Fri, Aug 20, 2010 at 6:05 AM, Jason Dagit  wrote:
>>
>>
>> On Thu, Aug 19, 2010 at 8:05 PM, Michael Litchard 
>> wrote:
>>>
>>> I'd like the community to give me feedback on the difficulty level of
>>> implementing an awk interpreter. What language features would be
>>> required? Specifically I'm hoping that TH is not necessary because I'm
>>> nowhere near that skill level.
>>
> Implementing an awk interpreter in Haskell can be a fun project. I have a
> half finished implementation lying around on the hard drive. It's perfectly
> possible to implement it without using any super fancy language features.
> But as other people have pointed out, monads are helpful for dealing with a
> lot of the plumbing in the interpreter.
>>>
>>> An outline of a possible approach would be appreciated. I am using
>>> http://www.math.utah.edu/docs/info/gawk_toc.html
>>> as a guide to the language description.
>>
>> You might also focus on the 'core' of awk.  Think about, what is the
>> minimal language and start from there.  Grow your implementation adding
>> features bit by bit.  It's also a good opportunity to do testing.  You have
>> a reference implementation and so you can write lots of tests for each
>> feature as you add them.
>
> When I wrote my awk interpreter I decided to go for the whole language from
> start. I had reasons for doing this as there were certain aspects of this
> that I wanted to capture but it is not they way I would recommend going
> about it. I definitely second Jason's advice at trying to capture the core
> functionality first.
> Have fun,
> Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread John Millikin
On Fri, Aug 20, 2010 at 09:30, Felipe Lessa  wrote:
> I was thinking about an attacker, not a use case.  Think of a web
> server accepting queries using iteratees internally.  This may open
> door to at least DoS attacks.

Web servers parse/generate HTTP, which is byte-based. They should be
using the bytes-based handle enumerator.

> And then, we use iteratees because we don't like the unpredictability
> of lazy IO.  Why should iteratees be unpredictable when dealing with
> Text?  Besides the memory consumption problem, there may be
> performance problems if the lines are too short.

If you don't want unpredictable performance, use bytes-based IO and
decode it with "decode utf8" or something similar.

Text-based IO merely exists to solve the most common case, which is a
small file in local encoding with relatively short (< 200 char) lines.
If you need to handle more complicated cases, such as:

* Files in fixed or self-described encodings (JSON, XML)
* Files with unknown encodings (HTML, RSS)
* Files with content in multiple encodings (EMail)
* Files containing potentially malicious input (such as public server log files)

Then you need to read them as bytes and decide yourself which decoding
is necessary.
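
A minimal sketch of that approach, using plain bytestring IO plus
text's decodeUtf8 (nothing enumerator-specific here; the file name is
just an example and the code assumes the bytes really are UTF-8):

import qualified Data.ByteString as B
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
    bytes <- B.readFile "input.txt"   -- byte-based IO, no locale guessing
    let txt = TE.decodeUtf8 bytes     -- we decided the encoding ourselves
    TIO.putStrLn (T.toUpper txt)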
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Creating binary distributions with Cabal

2010-08-20 Thread Christopher Done
Hey autopackage looks swish! WiX also looks like a nice, more native
solution for Windows. Cheers!

On 20 August 2010 11:36, Magnus Therning  wrote:
> On Fri, Aug 20, 2010 at 10:18, Christopher Done
>  wrote:
>> Does Cabal have a way to produce binary distributions from a package?
>>
>> I need to create a binary distribution of my project which does not
>> depend on GHC or any development tools. The package should include all
>> required data files and configuration files. I've got the latter
>> covered with Data-Files and getDataFileName, but not sure about what
>> to do regarding configuration files -- read/write to
>> $HOME/.myproject/config or $HOME/.myprojectrc, etc., or what?
>>
>> I'm specifically targeting Redhat because that's the production
>> server, but I'm wondering if there is or will be a way to agnostically
>> access data files and configuration files without having to think
>> about what OS it will be running on, in the same way I can use sockets
>> or file access without worrying about the particular OS.
>>
>> Something like cabal sdist --binary --rpm/deb/arch/win/etc?
>>
>> How does everyone else package up their Haskell programs for binary
>> distribution?
>
> This is what package managers like rpm, dpkg, pacman, etc shine at.  So for
> distribution on Linux that's what I suggest you use.  For Windows you'd
> probably have to hook things up to some installer-generator (WiX[1] maybe?).
>
> Other options are autopackage[2] and zeroinstall[3].
>
> /M
>
>
> [1] http://wix.sourceforge.net/
> [2] http://www.autopackage.org/
> [3] http://zero-install.sourceforge.net/
> --
> Magnus Therning                        (OpenPGP: 0xAB4DFBA4)
> magnus@therning.org          Jabber: magnus@therning.org
> http://therning.org/magnus         identi.ca|twitter: magthe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Creating binary distributions with Cabal

2010-08-20 Thread Christopher Done
On 20 August 2010 11:43, Duncan Coutts  wrote:
> On 20 August 2010 10:18, Christopher Done  wrote:
>> Does Cabal have a way to produce binary distributions from a package?
>
> No but it's not too hard to do.
>
> If you actually want an RPM or a DEB etc, then look into the cabal2rpm
> etc tools, they help automate the process.

Thanks, I hadn't seen this! It's ideal for my specific use case. :-)

> If you want a generic binary then:
>
> You first prepare an image by using:
>
> cabal copy --destdir=./tmp/image/
>
> Now you tar up the image directory, unpack it on the target.
>
> Note that the prefix/paths you specified at configure time need to be
> the same on the target machine. There is no support yet on unix for
> relocatable / prefix independent binaries. In particular it needs the
> paths to be correct to be able to find data files.

Hmm, this is okay for me in this particular case anyway as I'm just
giving a distribution to the production admins who then unpack it,
i.e. I know the configuration of the target machine.

> Right, config files you should just look in a per-user or global
> location. You can use a data file to store a default so that the
> program can work with no config file.

Seems reasonable when someone else says it. Wasn't sure if there might
be a standard API for dealing with this. Thus far I've been relying on
a --config=PATH argument to the program. I suppose a combination
thereof encompasses most use cases.

On 20 August 2010 18:25, John MacFarlane  wrote:
> Do you know about getAppUserDataDirectory in System.Directory?

Thanks, I'd forgotten about that.
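
A minimal sketch of how those pieces might fit together (the
"myproject" and "config" names are just placeholders):

import System.Directory (getAppUserDataDirectory, doesFileExist)
import System.FilePath ((</>))

-- Resolves to ~/.myproject/config on Unix and the corresponding
-- per-user application data directory on Windows.
configPath :: IO FilePath
configPath = do
    dir <- getAppUserDataDirectory "myproject"
    return (dir </> "config")

main :: IO ()
main = do
    path   <- configPath
    exists <- doesFileExist path
    putStrLn (path ++ if exists then " (found)" else " (missing)")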
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread Felipe Lessa
On Fri, Aug 20, 2010 at 1:12 PM, John Millikin  wrote:
> This thought occurred to me, but really, how often are you going to
> have a 10 GiB **text** file with no newlines? Remember, this is for
> text (log files, INI-style configs, plain .txt), not binary (HTML,
> XML, JSON). Off the top of my head, I can't think of any case where
> you'd expect to see 10 GiB in a single line.
>
> In the worst case, you can just use "decode" to process bytes coming
> from the ByteString-based enumHandle, which should give nicely chunked
> text.

I was thinking about an attacker, not a use case.  Think of a web
server accepting queries using iteratees internally.  This may open
door to at least DoS attacks.

And then, we use iteratees because we don't like the unpredictability
of lazy IO.  Why should iteratees be unpredictable when dealing with
Text?  Besides the memory consumption problem, there may be
performance problems if the lines are too short.

Cheers! =)

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread John Millikin
On Fri, Aug 20, 2010 at 08:59, Felipe Lessa  wrote:
> On Fri, Aug 20, 2010 at 12:51 PM, John Millikin  wrote:
>> Currently, I'm planning on the following type signatures for D.E.Text.
>> 'enumHandle' will use Text's hGetLine, since there doesn't seem to be
>> any text-based equivalent to ByteString's 'hGet'.
>
> CC'ing text's maintainer.  Using 'hGetLine' will cause baaad surprises
> when you process a 10 GiB file with no '\n' in sight.

This thought occurred to me, but really, how often are you going to
have a 10 GiB **text** file with no newlines? Remember, this is for
text (log files, INI-style configs, plain .txt), not binary (HTML,
XML, JSON). Off the top of my head, I can't think of any case where
you'd expect to see 10 GiB in a single line.

In the worst case, you can just use "decode" to process bytes coming
from the ByteString-based enumHandle, which should give nicely chunked
text.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread Felipe Lessa
On Fri, Aug 20, 2010 at 12:51 PM, John Millikin  wrote:
> Currently, I'm planning on the following type signatures for D.E.Text.
> 'enumHandle' will use Text's hGetLine, since there doesn't seem to be
> any text-based equivalent to ByteString's 'hGet'.

CC'ing text's maintainer.  Using 'hGetLine' will cause baaad surprises
when you process a 10 GiB file with no '\n' in sight.

Cheers! =)

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread John Millikin
On Fri, Aug 20, 2010 at 04:01, Simon Marlow  wrote:
> Handle IO is also doing Unicode encoding/decoding, which iteratees bypass.
>  Have you thought about how to incorporate encoding/decoding?

Yes; there will be a module Data.Enumerator.Text which contains
locale-based IO, enumeratee-based encoding/decoding, and so forth.
Since "iteratee" doesn't have any text-based IO, I figured it wasn't
necessary for a first release; getting feedback on the basic soundness
of the package was more important.

Currently, I'm planning on the following type signatures for D.E.Text.
'enumHandle' will use Text's hGetLine, since there doesn't seem to be
any text-based equivalent to ByteString's 'hGet'.



enumHandle :: Handle -> Enumerator SomeException Text IO b

enumFile :: FilePath -> Enumerator SomeException Text IO b

data Codec = Codec
    { codecName :: Text
    , codecEncode :: Text -> Either SomeException ByteString
    , codecDecode :: ByteString -> Either SomeException (Text, ByteString)
    }

encode :: Codec -> Enumeratee SomeException Text ByteString m b

decode :: Codec -> Enumeratee SomeException ByteString Text m b

utf8 :: Codec

utf16le :: Codec

utf16be :: Codec

utf32le :: Codec

utf32be :: Codec

ascii :: Codec

iso8859_1 :: Codec

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: cabal-install profiling libraries

2010-08-20 Thread Johannes Waldmann
Johannes Waldmann  imn.htwk-leipzig.de> writes:

> I will teach a course (Sept. - Jan.) 

noh, it's  Oct. - Jan. 

otherwise it'd be too much of a good thing ...

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: cabal-install profiling libraries

2010-08-20 Thread Johannes Waldmann

> Could any of this be forked out as student projects?

I will teach a course (Sept. - Jan.) that introduces Haskell
(students know Java). Part of the coursework is a programming project.
I could assign some cabal tickets - but perhaps that's a bit far-fetched
(requires understanding of the ghc infrastructure - too time-consuming?)

J.W.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: cabal-install profiling libraries

2010-08-20 Thread Ivan Lazar Miljenovic
Johannes Waldmann  writes:

> Daniel Fischer  web.de> writes:
>
>> The problem is that other packages may depend on them too, so when cabal
>> automatically reinstalls, those can break.
>
> how can this be - if the re-installed package is compiled 
> from the exact original source (as I just learned, cabal stores the sources)?
>
> or do you mean "the dependent packages must be recompiled" - 
> well, then cabal could just do it?

The latter, except that cabal-install doesn't know what you have
installed (it can only go on the information supplied by ghc-pkg for now).

> Of course I understand "lack of developer time".
> Could any of this be forked out as student projects?

Some of it was forked out to a GSoC project (a testing hook).
Otherwise, I'm sure Duncan et. al. will support good-quality patches
that solve the various bugs/feature requests on the bug tracker.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: cabal-install profiling libraries

2010-08-20 Thread Johan Tibell
On Fri, Aug 20, 2010 at 4:07 PM, Johannes Waldmann <
waldm...@imn.htwk-leipzig.de> wrote:

> Of course I understand "lack of developer time".
> Could any of this be forked out as student projects?
>

These kind of projects are perfect for Google Summer of Code. We had two
Cabal projects this year (Hackage 2 and unit testing support).

The next GSoC is quite far in the future (9 months or so) but if we created
some well written proposals for Cabal features we'd like to see implemented
well in time for next year's GSoC we could get some students to work on
them.

Cheers,
Johan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: cabal-install profiling libraries

2010-08-20 Thread Johannes Waldmann
Daniel Fischer  web.de> writes:

> The problem is that other packages may depend on them too, so when cabal
> automatically reinstalls, those can break.

how can this be - if the re-installed package is compiled 
from the exact original source (as I just learned, cabal stores the sources)?

or do you mean "the dependent packages must be recompiled" - 
well, then cabal could just do it?


Of course I understand "lack of developer time".
Could any of this be forked out as student projects?

J.W.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal-install profiling libraries

2010-08-20 Thread Daniel Fischer
On Friday 20 August 2010 15:20:41, Johannes Waldmann wrote:
> Here's another instance of the machine (*) telling me what to do,
> instead of doing it (or am I missing something):
>
> I have a large set of cabal packages installed with ghc.
> Then suddenly I need some package Foo with profiling.
> So I switch to library-profiling: True  in  my .cabal/config,
> and then "cabal install Foo" -  failing with the message:
>
>  Perhaps you haven't installed the profiling libraries for package `Bar'
>
> for some package Bar that Foo depends upon. - Dear Cabal: Yes!
> I know that I haven't installed them! I want you to install them for me!
> But it isn't listening ...

The problem is that other packages may depend on them too, so when cabal
automatically reinstalls, those can break.
I don't think GHC can register a profiling version of the package and leave 
the vanilla package in peace, so then cabal can't just build the profiling 
lib and keep the old vanilla either.

>
> (*) "machine" = everything in that metal box that was so expensive
> and has a lot of cables coming out, and ventilators running.
>
>
> Of course you know that I have the highest respect for the work
> of the cabal authors. I'm just suggesting that the above feature
> (auto-re-install dependencies) would be helpful. Perhaps it's already
> there? If not - would it be hard to specify? To build? Or would it have
> bad consequences?
>
> Is it "cabal upgrade --reinstall"? But that was deprecated?

cabal install --reinstall

> Here I really want "reinstall with exactly the same versions".
> Is it the problem that their sources may have vanished, meanwhile?
> Could it be solved by having "cabal install" storing a copy of
> the source package that it used?

cabal keeps the tarballs of the packages, so that's not a problem.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal-install profiling libraries

2010-08-20 Thread Duncan Coutts
On 20 August 2010 14:20, Johannes Waldmann  wrote:
> Here's another instance of the machine (*) telling me what to do,
> instead of doing it (or am I missing something):
>
> I have a large set of cabal packages installed with ghc.
> Then suddenly I need some package Foo with profiling.
> So I switch to library-profiling: True  in  my .cabal/config,
> and then "cabal install Foo" -  failing with the message:
>
>  Perhaps you haven't installed the profiling libraries for package `Bar'
>
> for some package Bar that Foo depends upon. - Dear Cabal: Yes!
> I know that I haven't installed them! I want you to install them for me!
> But it isn't listening ...

> Of course you know that I have the highest respect for the work
> of the cabal authors. I'm just suggesting that the above feature
> (auto-re-install dependencies) would be helpful.

As usual the problem is lack of devevloper time to implement all these
nice features we all want.

http://hackage.haskell.org/trac/hackage/ticket/282

> Perhaps it's already there?
> If not - would it be hard to specify? To build? Or would it have
> bad consequences?

From the ticket:

    Our current thinking on this issue is that we should track each "way"
    separately. That is, we should register profiling, vanilla and any other
    ways with ghc-pkg as independent package instances. This needs
    coordination with ghc since it means a change to the package
    registration information to include the way.

The idea is that once we track each way separately then Cabal will
know if the profiling way is installed or not and we can install the
profiling instance if it is missing without messing up any existing
instances.

> Is it "cabal upgrade --reinstall"? But that was deprecated?

Yes, "upgrade" is deprecated, use "install" instead. (The meaning /
behaviour of "upgrade" just sowed confusion.)

> Here I really want "reinstall with exactly the same versions".

Use: cabal install --reinstall foo-x.y.z

> Is it the problem that their sources may have vanished, meanwhile?
> Could it be solved by having "cabal install" storing a copy of
> the source package that it used?

No, the problem is we don't actually know if the profiling versions of
libs are installed or not. The ghc-pkg database does not contain this
information. Also, if we did know and started reinstalling packages,
what happens if we get half way and fail, we'd have messed up existing
installed working packages. Having profiling instances be separate
will make it all much easier.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A cabal odyssey

2010-08-20 Thread Ivan Lazar Miljenovic
Duncan Coutts  writes:

> On 19 August 2010 21:15, Andrew Coppin  wrote:
>> Daniel Fischer wrote:
>
>>> Yes, when cabal runs haddock on a package, it generates a comprehensive
>>> index if none is present or expands it with the new docs.
>>> Quite cool that :)
>>>
>>
>> It's something I've always _wanted_ Cabal to do, but this is the first time
>> I've ever seen it happen. I don't know what particularly I did to make this
>> happen, and now it seems to be gone, so...
>
> If you have documentation enabled then it is recreated every time you
> install a package.
>
> (Though only for user packages, since we have not yet worked out
> somewhere sensible we can stick a global index).

/usr/share/doc/haskell/ ?  However I think distros usually don't like
un-versioned directories there... :(

One thing I always find slightly irritating is that when using what
seems to be the default "/usr/share/doc/packagename-packageversion",
then every time I upgrade something then my bookmark links are all wrong
(and my browse history is full of now-useless links).

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] cabal-install profiling libraries

2010-08-20 Thread Johannes Waldmann
Here's another instance of the machine (*) telling me what to do, 
instead of doing it (or am I missing something):

I have a large set of cabal packages installed with ghc.
Then suddenly I need some package Foo with profiling. 
So I switch to library-profiling: True  in  my .cabal/config,
and then "cabal install Foo" -  failing with the message:

 Perhaps you haven't installed the profiling libraries for package `Bar'

for some package Bar that Foo depends upon. - Dear Cabal: Yes!
I know that I haven't installed them! I want you to install them for me!
But it isn't listening ...

(*) "machine" = everything in that metal box that was so expensive
and has a lot of cables coming out, and ventilators running.


Of course you know that I have the highest respect for the work
of the cabal authors. I'm just suggesting that the above feature
(auto-re-install dependencies) would be helpful. Perhaps it's already there?
If not - would it be hard to specify? To build? Or would it have
bad consequences?

Is it "cabal upgrade --reinstall"? But that was deprecated?
Here I really want "reinstall with exactly the same versions".
Is it the problem that their sources may have vanished, meanwhile?
Could it be solved by having "cabal install" storing a copy of
the source package that it used?

Thanks - J.W.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GtkImageView release!

2010-08-20 Thread Andy Stewart
Hi all,

I have released gtkimageview, a Gtk API for building image viewers.

Here is screenshot :
http://www.flickr.com/photos/48809...@n02/4909785139/

You can click the bottom-right corner to pop up a navigation window and
drag the visible area of the image.

Here is demo :
https://patch-tag.com/r/AndyStewart/gtkimageview/snapshot/current/content/pretty/demo

Enjoy!

  -- Andy

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lazy skip list?

2010-08-20 Thread Felipe Lessa
On Fri, Aug 20, 2010 at 3:57 AM, Luke Palmer  wrote:
> On Thu, Aug 19, 2010 at 9:57 PM, Felipe Lessa  wrote:
>> However, I haven't thought about how operations such as 'cons' and
>> 'tail' would be implemented =).  OP just asked about indexing ;-).
>
> Well if all you need is indexing, then an integer trie does it, right?
>  http://hackage.haskell.org/package/data-inttrie

Probably!  More specifically,

type SkipList a = (Int, IntTrie a)

index :: SkipList a -> Int -> Maybe a
index (n, t) i = if i < n && i >= 0 then Just (apply t i) else Nothing

However, with the API exposed in data-inttrie it isn't possible to
implement fromList/toList in time O(n), only O(n log n), assuming that
modify/apply are O(log n).  Worse yet, if we wanted our fromList to
work with infinite lists we would need to do something like

import Data.List (genericLength)
import Number.Peano.Inf (Nat) -- from peano-inf on Hackage

type SkipList a = (Nat, IntTrie a)

fromList :: [a] -> SkipList a
fromList xs = (genericLength xs, fmap (xs !!) identity)

The problem here is that 'fromList' is now O(n²).  If IntTrie exposed
a Traversable interface, I think it would be possible to write a
'fromList' in O(n) using a state monad.  However, I don't know if it
is possible to write a Traversable instance in the first place.

Cheers! =)

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Academic Haskell Course

2010-08-20 Thread Pierre-Etienne Meunier
> Can anyone point me towards existing work I could use? Open course
> material and syllabuses I could use, with the necessary references?

If I was to do the same, and my students already knew haskell (which will be 
the case after a few courses), I'd certainly read Chris Okasaki's book (or his 
thesis) on data structures in functional programming.

Greetings,
Pierre
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Code that writes code

2010-08-20 Thread Johannes Waldmann
Graham Klyne  ninebynine.org> writes:

> [...] rather than go through the step of
> generating source code and feeding it back into a Haskell compiler, it may be
> possible to use higher order functions to directly assemble the required logic
> within a single program.  For me, this is one of the great power-features of
> functional programming [...]

I agree one-hundred-percently, and that's also what I stress when I teach.

But of course this has to be balanced with the observation
that in current Haskell, not everything is a value.
Functions are, but modules and types are not.
That's why you cannot directly handle them programmatically.

So you either rewrite the program (unify the "similar" modules/types)
or resort to syntactic manipulation (as a compiler pass - like template haskell,
or by external processors) which has the severe downside 
of losing static typechecking (even if the generator is type-checked,
you cannot be sure that its output is type-safe).

Anyway the original poster asked about cabal integration.
For that, code generation in the compiler (template haskell)
certainly is easier than external processors.

The gtk2hs project also needs to generate boilerplate,
and they put their generators into a separate package
http://hackage.haskell.org/package/gtk2hs-buildtools
that you need to cabal-install first.
(somewhat strangely, gtk2hs-buildtools is not a dependency of gtk?
Is that because cabal packages cannot depend on executables?)


J.W.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANNOUNCE: enumerator, an alternative iteratee package

2010-08-20 Thread Simon Marlow

On 19/08/2010 18:21, John Millikin wrote:

On Wed, Aug 18, 2010 at 23:33, Jason Dagit  wrote:

The main reason I would use iteratees is for performance reasons.  To help
me, as a potential consumer of your library, could you please provide
benchmarks for comparing the performance of enumerator with say, a)
iteratee, b) lazy/strict bytestring, and c) Prelude functions?
I'm interested in both max memory consumption and run-times.  Using
criterion and/or progression to get the run-times would be icing on an
already delicious cake!


Oleg has some benchmarks of his implementation at
<http://okmij.org/ftp/Haskell/Iteratee/Lazy-vs-correct.txt>, which
clock iteratees at about twice as fast as lazy IO. He also compares
them to a native "wc", but his comparison is flawed, because he's
comparing a String iteratee vs byte-based wc.


Handle IO is also doing Unicode encoding/decoding, which iteratees 
bypass.  Have you thought about how to incorporate encoding/decoding?


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Creating binary distributions with Cabal

2010-08-20 Thread Duncan Coutts
On 20 August 2010 10:18, Christopher Done  wrote:
> Does Cabal have a way to produce binary distributions from a package?

No, but it's not too hard to do.

If you actually want an RPM or a DEB etc, then look into the cabal2rpm
etc tools, they help automate the process.

If you want a generic binary then:

You first prepare an image by using:

cabal copy --destdir=./tmp/image/

Now you tar up the image directory and unpack it on the target machine.

Note that the prefix/paths you specified at configure time need to be
the same on the target machine. There is no support yet on unix for
relocatable / prefix independent binaries. In particular it needs the
paths to be correct to be able to find data files.

> I need to create a binary distribution of my project which does not
> depend on GHC or any development tools. The package should include all
> required data files and configuration files. I've got the latter
> covered with Data-Files and getDataFileName, but not sure about what
> to do regarding configuration files -- read/write to
> $HOME/.myproject/config or $HOME/.myprojectrc, etc., or what?

Right, for config files you should just look in a per-user or global
location. You can use a data file to store a default so that the
program can work with no config file.
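
A minimal sketch of that fallback (my own illustration; the package name
"myproject" and the file names are hypothetical, and "default.config" would be
listed under data-files in the .cabal file):

import Paths_myproject (getDataFileName)   -- generated by Cabal
import System.Directory (doesFileExist, getHomeDirectory)
import System.FilePath ((</>))

-- read the per-user config if it exists, otherwise the bundled default
loadConfig :: IO String
loadConfig = do
  home <- getHomeDirectory
  let userCfg = home </> ".myproject" </> "config"
  exists <- doesFileExist userCfg
  path <- if exists
            then return userCfg
            else getDataFileName "default.config"
  readFile path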

> I'm specifically targeting Redhat because that's the production
> server, but I'm wondering if there is or will be a way to agnostically
> access data files and configuration files without having to think
> about what OS it will be running on, in the same way I can use sockets
> or file access without worrying about the particular OS.
>
> Something like cabal sdist --binary --rpm/deb/arch/win/etc?

We might eventually add something for generic binaries but we will
leave specific distros and packaging systems to specialised tools.

> How does everyone else package up their Haskell programs for binary
> distribution?

As I mentioned there are also tools like cabal2rpm that help build
binary packages for specific distros.

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Creating binary distributions with Cabal

2010-08-20 Thread Magnus Therning
On Fri, Aug 20, 2010 at 10:18, Christopher Done
 wrote:
> Does Cabal have a way to produce binary distributions from a package?
>
> I need to create a binary distribution of my project which does not
> depend on GHC or any development tools. The package should include all
> required data files and configuration files. I've got the latter
> covered with Data-Files and getDataFileName, but not sure about what
> to do regarding configuration files -- read/write to
> $HOME/.myproject/config or $HOME/.myprojectrc, etc., or what?
>
> I'm specifically targeting Redhat because that's the production
> server, but I'm wondering if there is or will be a way to agnostically
> access data files and configuration files without having to think
> about what OS it will be running on, in the same way I can use sockets
> or file access without worrying about the particular OS.
>
> Something like cabal sdist --binary --rpm/deb/arch/win/etc?
>
> How does everyone else package up their Haskell programs for binary
> distribution?

This is what package managers like rpm, dpkg, pacman, etc. shine at.  So for
distribution on Linux that's what I suggest you use.  For Windows you'd
probably have to hook things up to some installer-generator (WiX[1] maybe?).

Other options are autopackage[2] and zeroinstall[3].

/M


[1] http://wix.sourceforge.net/
[2] http://www.autopackage.org/
[3] http://zero-install.sourceforge.net/
-- 
Magnus Therning                        (OpenPGP: 0xAB4DFBA4)
magnus@therning.org          Jabber: magnus@therning.org
http://therning.org/magnus         identi.ca|twitter: magthe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A cabal odyssey

2010-08-20 Thread Duncan Coutts
On 19 August 2010 21:15, Andrew Coppin  wrote:
> Daniel Fischer wrote:

>> Yes, when cabal runs haddock on a package, it generates a comprehensive
>> index if none is present or expands it with the new docs.
>> Quite cool that :)
>>
>
> It's something I've always _wanted_ Cabal to do, but this is the first time
> I've ever seen it happen. I don't know what particularly I did to make this
> happen, and now it seems to be gone, so...

If you have documentation enabled then it is recreated every time you
install a package.

(Though only for user packages, since we have not yet worked out
somewhere sensible we can stick a global index).

> I gathered. Apparently there's no "cabal uninstall" or even merely a "cabal
> unregister" yet... (There must surely be a ticket for that already?)

Yup, there's a ticket for it.

> Well, the worst thing that can happen is I get no documentation, which isn't
> exactly a disaster. I'm just wondering how these files got created to start
> with; adding more packages doesn't appear to recreate it. I suppose I could
> try reinstalling all of them...

If you have documentation enabled (ie use --enable-documentation on
the command line, or have "documentation: True" in the ~/.cabal/config
file) then docs get created for each package you install, and the
haddock index/contents of all installed docs gets updated.

>>> Then again, all the links were broken anyway. They all had paths like
>>> "C:\Program Files\Haskell\...whatever", and Mozilla apparently expects
>>> them to say "file://C:/Program Files/Haskell/...whatever". It kept
>>> whining that "the C:\ protocol is not registered"
>>
>> Apparently, haddock links to absolute paths. That's of course not the
>> right thing to do if the path begins with an invalid protocol specifier
>> ("C:"). And it's annoying if you want to move the docs.
>>
>
> I imagine it's so that each package can be placed in a completely arbitrary
> place in the filesystem, and the links still work. I'd actually be surprised
> if these URLs work on Linux either; they don't appear to follow the requisit
> web standards.

You may be right, or perhaps URL syntax is just liberal enough to let
Unix-style paths work. It's still a bug of course that we're not using
the file:// protocol which makes it not work on windows. I filed it
here:
http://hackage.haskell.org/trac/hackage/ticket/516#comment:6

Duncan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] feasability of implementing an awk interpreter.

2010-08-20 Thread Josef Svenningsson
On Fri, Aug 20, 2010 at 6:05 AM, Jason Dagit  wrote:

>
>
> On Thu, Aug 19, 2010 at 8:05 PM, Michael Litchard wrote:
>
>> I'd like the community to give me feedback on the difficulty level of
>> implementing an awk interpreter. What language features would be
>> required? Specifically I'm hoping that TH is not necessary because I'm
>> nowhere near that skill level.
>>
>
Implementing an awk interpreter in Haskell can be a fun project. I have a
half finished implementation lying around on the hard drive. It's perfectly
possible to implement it without using any super fancy language features.
But as other people have pointed out, monads are helpful for dealing with a
lot of the plumbing in the interpreter.

>> An outline of a possible approach would be appreciated. I am using
>> http://www.math.utah.edu/docs/info/gawk_toc.html
>> as a guide to the language description.
>>
>
> You might also focus on the 'core' of awk.  Think about, what is the
> minimal language and start from there.  Grow your implementation adding
> features bit by bit.  It's also a good opportunity to do testing.  You have
> a reference implementation and so you can write lots of tests for each
> feature as you add them.
>
When I wrote my awk interpreter I decided to go for the whole language from
the start. I had reasons for doing this, as there were certain aspects of it
that I wanted to capture, but it is not the way I would recommend going
about it. I definitely second Jason's advice to try to capture the core
functionality first.
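
To make that "start from the core" advice concrete, here is a tiny sketch of my
own (not from either implementation): an awk program reduced to a list of
pattern/action rules applied to every input line.

import Data.List (isInfixOf)

type Record = String                              -- one input line
data Rule   = Rule (Record -> Bool) (Record -> String)

-- run every matching rule's action on every record, in order
runAwk :: [Rule] -> String -> String
runAwk rules input =
  unlines [ act r | r <- lines input, Rule pat act <- rules, pat r ]

-- e.g. the awk program  /error/ { print length($0) ": " $0 }
example :: [Rule]
example = [ Rule ("error" `isInfixOf`) (\r -> show (length r) ++ ": " ++ r) ]

main :: IO ()
main = interact (runAwk example)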

Have fun,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Code that writes code

2010-08-20 Thread Christopher Done
Check out the userHooks in Cabal[1]. I believe you can use, e.g.,
hookedPreProcessors[2] or preBuild to preprocess your files into
regular Haskell files before building takes place.

[1]: 
http://www.haskell.org/ghc/docs/6.12.1/html/libraries/Cabal/Distribution-Simple-UserHooks.html#t%3AUserHooks
[2]: 
http://www.haskell.org/ghc/docs/6.12.1/html/libraries/Cabal/Distribution-Simple-UserHooks.html#v%3AhookedPreProcessors
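
For concreteness, here is a minimal sketch of a Setup.hs using the preBuild
hook (my own illustration, not from the Cabal docs; the generator script
"codegen/Generate.hs" and its output module are made up, and the package's
.cabal file would need "build-type: Custom" for the hooks to run):

-- Setup.hs: regenerate a boilerplate module before every build.
import Distribution.Simple (defaultMainWithHooks, simpleUserHooks, UserHooks(..))
import System.Process (rawSystem)

main :: IO ()
main = defaultMainWithHooks simpleUserHooks
  { preBuild = \args flags -> do
      -- run the generator; it is expected to write src/Boilerplate.hs
      _ <- rawSystem "runhaskell" ["codegen/Generate.hs"]
      -- then fall through to the default preBuild behaviour
      preBuild simpleUserHooks args flags
  }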

On 19 August 2010 23:00, Andrew Coppin  wrote:
> I'm working on a small Haskell package. One module in particular contains so
> much boilerplate that rather than write the code myself, I wrote a small
> Haskell program that autogenerates it for me.
>
> What's the best way to package this for Cabal? Just stick the generated file
> in there? Or is there some (easy) way to tell Cabal how to recreate this
> file itself?
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Creating binary distributions with Cabal

2010-08-20 Thread Christopher Done
Does Cabal have a way to produce binary distributions from a package?

I need to create a binary distribution of my project which does not
depend on GHC or any development tools. The package should include all
required data files and configuration files. I've got the latter
covered with Data-Files and getDataFileName, but not sure about what
to do regarding configuration files -- read/write to
$HOME/.myproject/config or $HOME/.myprojectrc, etc., or what?

I'm specifically targeting Redhat because that's the production
server, but I'm wondering if there is or will be a way to agnostically
access data files and configuration files without having to think
about what OS it will be running on, in the same way I can use sockets
or file access without worrying about the particular OS.

Something like cabal sdist --binary --rpm/deb/arch/win/etc?

How does everyone else package up their Haskell programs for binary
distribution?

Cheers!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Sifflet visual programming language, release 1.0!

2010-08-20 Thread Henk-Jan van Tuyl
On Fri, 20 Aug 2010 08:30:24 +0200, Andrew Coppin  
 wrote:



gdwe...@iue.edu wrote:
The problem is not usually that the C library doesn't exist for Windows  
(they tend to be widely portable, in fact). Rather, the problem is that  
Cabal won't build the Haskell binding. I've tried in the past, and I've  
never yet got it to work even once. The only known exception is Gtk2hs,  
which somehow manages to build on Windows.


In the case of Curl, Cabal downloads it, unpacks it, sees that it uses a  
autoconf script and dies. (At least Cabal now correctly reports the  
/cause/ of the problem - the configure script.) Things like autoconf,  
automake, bash, sed, awk, etc. do not usually exist on Windows, so any  
packages that require these tools won't build. And even the packages
that don't need them usually fall over anyway, being unable to find the C
headers in C:\usr\local or something dumb like that.


Curl compiles without problems on my Windows XP system. There is a  
HaskellWiki page [0] that describes how to compile packages with Unix  
scripts on Windows systems.


Regards,
Henk-Jan van Tuyl


[0] http://www.haskell.org/haskellwiki/Windows#Tools_for_compilation

--
http://Van.Tuyl.eu/
http://members.chello.nl/hjgtuyl/tourdemonad.html
--
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Code that writes code

2010-08-20 Thread Graham Klyne

Maybe not helpful to you at this stage, but...

An alternative to generating source code is to factor out the common
"boilerplate" elements into separate functions, suitably parameterized, and to
use higher order functions to stitch these together.

An example of this kind of approach, which is handled by code generation in some
other languages (e.g. lex, yacc, etc), is the Parsec combinator-based parsing
library (http://www.haskell.org/haskellwiki/Parsec) - instead of generating
code, the syntax "rules" are written directly using Haskell functions and
assemble the common underlying repeated logic dynamically, behind the scenes.

I adopted a development of this approach for a programme with a built-in
scripting language that I implemented some time ago:  the scripting language was
parsed using Parsec, not into a syntax tree, but directly into a dynamically
assembled function that could be applied to some data to perform the scripted
function (http://www.ninebynine.org/RDFNotes/Swish/Intro.html).

What I'm trying to point out here that, rather than go through the step of
generating source code and feeding it back into a Haskell compiler, it may be
possible to use higher order functions to directly assemble the required logic
within a single program.  For me, this is one of the great power-features of
functional programming, which I now tend to use where possible in other
languages that support functions as first class values.
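
As a small illustration of that point (my own sketch, not taken from Swish): a
Parsec parser can return a function directly, so a toy "script" like
"add 3; mul 2" is parsed straight into an Int -> Int transformation with no
intermediate code generation.

import Text.ParserCombinators.Parsec

-- one scripted step, parsed directly into a function
step :: Parser (Int -> Int)
step = do
  op <- choice [string "add" >> return (+), string "mul" >> return (*)]
  spaces
  n  <- fmap read (many1 digit)
  return (op n)

-- a whole script becomes its steps applied in order
script :: Parser (Int -> Int)
script = do
  fs <- step `sepBy` (char ';' >> spaces)
  return (\x -> foldl (flip ($)) x fs)

main :: IO ()
main = case parse script "" "add 3; mul 2" of
  Left err -> print err
  Right f  -> print (f 10)        -- prints 26, i.e. (10 + 3) * 2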

#g
--

Andrew Coppin wrote:
I'm working on a small Haskell package. One module in particular 
contains so much boilerplate that rather than write the code myself, I 
wrote a small Haskell program that autogenerates it for me.


What's the best way to package this for Cabal? Just stick the generated 
file in there? Or is there some (easy) way to tell Cabal how to recreate 
this file itself?





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: philosophy of Haskell

2010-08-20 Thread Heinrich Apfelmus

Ertugrul Soeylemez wrote:

Heinrich Apfelmus wrote:


In particular, the World -> (a,World) model is unsuitable even without
concurrency because it cannot distinguish

loop, loop' :: IO ()
loop  = loop
loop' = putStr "c" >> loop'

I interpret the "EDSL model" to be the operational semantics presented
in the tutorial paper.


Huh?!  Let's translate them.  'loop' becomes:

  undefined

But loop' becomes:

  \w0 -> let (w1, ()) = putStr "c" w0
         in loop' w1

Because this program runs forever it makes no sense to ask what its
result is after the program is run, but that's evaluation semantics.
Semantically they are both undefined.


They do have well-defined semantics, namely  loop = _|_ = loop' ; the
problem is that they are equal. You note that



execution is something separate and there is no Haskell notion for it.
In particular execution is /not/ evaluation, and the program's monadic
result is not related to the world state.


, but the whole point of the  IO a = World -> (a, World)  model is to 
give *denotational* semantics to IO. The goal is that two values of type 
 IO a  should do the same thing exactly when their denotations  World 
-> (a, World)  are equal. Clearly, the above examples show that this 
goal is not achieved.


If you also have to look at how these functions  World -> (a,World) 
"are executed", i.e. if you cannot treat them as *pure* functions, then 
the world passing model is no use; it's easier to just leave  IO a 
opaque and not introduce the complicating  World  metaphor.



Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] feasability of implementing an awk interpreter.

2010-08-20 Thread John Lask

On 20/08/2010 1:35 PM, Jason Dagit wrote:
Fairly easy. You might want to check out the following tutorial:

http://www.crsr.net/Programming_Languages/SoftwareTools/ch5.html

He implements a basic grep tool; you might then want to check out one of
the regex packages as a basis for your implementation of awk.




On Thu, Aug 19, 2010 at 8:05 PM, Michael Litchard  wrote:

I'd like the community to give me feedback on the difficulty level of
implementing an awk interpreter. What language features would be
required? Specifically I'm hoping that TH is not necessary because I'm
nowhere near that skill level.


I'd love to have portable pure haskell implementations of the
traditional unix tools.  If it were done well, it would allow you to
'cabal install' yourself into a usable dev environment on windows :)
  I'd much rather do that than deal with cygwin/mingw.

Someone (was it Stephen Hicks?) was writing (or finished writing?) an sh
parser and I got really excited for the same reason.  It would be a cool
project, but I'm not sure I can justify to myself spending my spare
cycles on it.



An outline of a possible approach would be appreciated. I am using
http://www.math.utah.edu/docs/info/gawk_toc.html
as a guide to the language description.


I think this is a good opportunity for you to learn about monad
transformers.  To that end, I think you will like this paper (quite easy
for beginners to pick up):
http://www.grabmueller.de/martin/www/pub/Transformers.en.html

At least, that's how I first learned about them and I though it was easy
to read at the time :)

You might also want to read (and try) some of the tutorials that focus
on creating interpreters just to sort of get some practice in that area.
  I haven't read it, but I've heard good things about this one:
http://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours

You might also focus on the 'core' of awk.  Think about, what is the
minimal language and start from there.  Grow your implementation adding
features bit by bit.  It's also a good opportunity to do testing.  You
have a reference implementation and so you can write lots of tests for
each feature as you add them.

I hope that helps,
Jason



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe