[Haskell-cafe] data-accessor-template-0.2.1.3 breaks cabal-install with ghc-6.12
Hello, I've recently run across an odd situation. I have a brand new install of GHC 6.12.1 (just did a fresh install of OS X 10.6) and am reinstalling some libraries, and I've run into an unusual problem. I'm not sure whether this would be considered a cabal bug, but I don't think it's a good situation. data-accessor-template depends upon template-haskell (>=2.2 && <2.4), unless built with the flag -ftemplate_2_4, in which case it depends upon template-haskell 2.4 (appropriate for GHC 6.12.1). The relevant lines of the .cabal file are:

    Flag template_2_4
      description: Adapt to TemplateHaskell version of GHC-6.12
      default: False

    If flag(template_2_4)
      Hs-Source-Dirs: src-5
      Build-Depends: template-haskell >=2.4 && <2.5
    Else
      Hs-Source-Dirs: src-3
      Build-Depends: template-haskell >=2.2 && <2.4

I can install data-accessor-template when the template_2_4 flag is used, but when I then try to install a package that depends upon data-accessor-template, cabal-install calculates the template-haskell >=2.2 && <2.4 dependency and attempts to fulfill it, which fails at packedstring (it can't find Data.Data in a hidden base package). I've included output from attempting to install Chart that demonstrates the behavior. Note that even though cabal-install sees data-accessor-template as installed, it attempts to reinstall it.

    John-Latos-MacBook:packages johnlato$ cabal install -v -O Chart
    In order, the following would be installed:
    packedstring-0.1.0.1 (new package)
    template-haskell-2.3.0.1 (new version)
    data-accessor-template-0.2.1.3 (reinstall)
      changes: template-haskell-2.4.0.0 -> 2.3.0.1
    Chart-0.12 (new package)

This affects everything down the dependency tree, notably Chart, criterion, and yi. I can work around this by specifying a previous version of data-accessor-template, but unfortunately I don't know how to address the real problem.

Cheers,
John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
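One way the package could sidestep the manual flag entirely (a sketch, not taken from the actual data-accessor-template package) is to branch on the compiler version instead of a user-set flag, so cabal-install needs no extra information to pick the right sources:

```cabal
-- Hypothetical alternative stanza: select sources by GHC version
-- rather than by a default-False flag.
If impl(ghc >= 6.12)
  Hs-Source-Dirs: src-5
  Build-Depends: template-haskell >=2.4 && <2.5
Else
  Hs-Source-Dirs: src-3
  Build-Depends: template-haskell >=2.2 && <2.4
```

Whether this is acceptable upstream is of course for the maintainer to decide.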
Re: [Haskell-cafe] haskell-src type inference algorithm?
On Fri, Feb 12, 2010 at 03:47:56PM +1100, Bernie Pope wrote:
> On 12 February 2010 10:13, Niklas Broberg wrote:
> >> Anyone know of a type inference utility that can run right on haskell-src
> >> types? or one that could be easily adapted?
> >
> > This is very high on my wish-list for haskell-src-exts, and I'm hoping
> > the stuff Lennart will contribute will go a long way towards making it
> > feasible. I believe I can safely say that no such tool exists (and if
> > it does, why haven't you told me?? ;-)), but if you implement (parts
> > of) one yourself I'd be more than interested to see, and incorporate,
> > the results.
>
> A long time ago I worked on hatchet:
>
> http://www.cs.mu.oz.au/~bjpop/hatchet/src/hatchet.tar.gz
>
> which I believe was incorporated into JHC.

Yes, hatchet formed the base of the original type checker for jhc. It has since been fully replaced, probably twice over, but jhc could not have gotten off the ground without it.

John
--
John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/
Re: [Haskell-cafe] Where are the Takusen sources?
On Fri, Feb 12, 2010 at 3:24 PM, Brandon S. Allbery KF8NH <allb...@ece.cmu.edu> wrote:
> On Feb 12, 2010, at 18:17 , Jason Dagit wrote:
>> I wanted the takusen sources as I may want to add features. I looked on
>> hackage and it lists this url for the repository:
>> http://darcs.haskell.org/takusen
>
> In general, stuff that was on darcs.haskell.org (which is being retired)
> seems to be moving to code.haskell.org. I found
> http://code.haskell.org/takusen and it looks reasonably up to date (10 Feb).

Great! Thanks!

Jason
Re: [Haskell-cafe] ANN: hmatrix 0.8.3
On Fri, Feb 12, 2010 at 02:56:56PM +0100, Alberto Ruiz wrote:
> I have released a new version of hmatrix, a library for numeric
> computation based on LAPACK, BLAS and GSL. Recent developments
> include improved SVD functions, a simple ODE solver, easier OS/X
> installation (thanks to H. Apfelmus), and updated tutorial.

Thanks for working on this nice package!

Cheers,
-- Felipe.
Re: [Haskell-cafe] haddock forgets parens in type sigs?
2010/2/12 Johannes Waldmann:
> The annotated type of "update" is missing parentheses:
> http://hackage.haskell.org/packages/archive/haskelldb/0.12/doc/html/Database-HaskellDB.html#v%3Aupdate
> (compare with the signature given in the source) - Best, J.W.

Already fixed in the darcs version. Stay tuned for the next release.

David
Re: [Haskell-cafe] Where are the Takusen sources?
On Feb 12, 2010, at 18:17 , Jason Dagit wrote:
> I wanted the takusen sources as I may want to add features. I looked on
> hackage and it lists this url for the repository:
> http://darcs.haskell.org/takusen

In general, stuff that was on darcs.haskell.org (which is being retired) seems to be moving to code.haskell.org. I found http://code.haskell.org/takusen and it looks reasonably up to date (10 Feb).

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH
[Haskell-cafe] Where are the Takusen sources?
Hello,

I wanted the takusen sources as I may want to add features. I looked on hackage and it lists this url for the repository:

http://darcs.haskell.org/takusen

I get a 404 on that URL. I checked Oleg's website:

http://okmij.org/ftp/Haskell/misc.html#takusen

And the links on that page have similar problems. I see that the 0.8.5 release has source on hackage, but I would prefer access to the full repository as I may want to send patches.

Does anyone know what has happened to Takusen?

Thanks,
Jason
Re: [Haskell-cafe] Type arithmetic with ATs/TFs
On Fri, Feb 12, 2010 at 2:10 PM, Edward Kmett wrote:
> On Fri, Feb 12, 2010 at 2:11 PM, Andrew Coppin wrote:
>> OK, well in that case, I'm utterly puzzled as to why both forms exist in
>> the first place. If TFs don't allow you to do anything that can't be done
>> with ATs, why have them?
>>
>> My head hurts...
>
> s/GADT/Fundep/g ?
>
> You can say anything you might say with type families using GADTs, but
> you'll often be talking about stuff you don't care about. =)
>
> Sometimes you don't care what the Element type of a container is, just that
> it is a container. Yet using GADTs you must always reference the content type.
>
>     size :: Container' c e => c -> Int -- using Container' defined with GADTs
>
> as opposed to
>
>     size :: Container c => c -> Int
>
> That doesn't seem like a huge sacrifice at first, until you start
> considering things like:
>
> http://hackage.haskell.org/packages/archive/category-extras/0.53.5/doc/html/Control-Category-Cartesian-Closed.html
>
> Instead of just being able to talk about a CCC based on the type used for
> its homomorphisms, now I must constantly talk about the type used for its
> product, and exponentials, and the identity of the product, even when I
> don't care about those properties!
>
> This ability to not talk about those extra types becomes useful when you
> start defining data types.
>
> Say you define a simple imperative growable hash data type, parameterized
> over the monad type. You could do so with TFs fairly easily:
>
>     newtype Hash m k v = Hash (Ref m (Array Int (Ref m [(k,v)])))
>     empty :: MonadRef m => m (Hash m k v)
>     insert :: (Hashable k, MonadRef m) => k -> v -> Hash m k v -> m ()
>
> But the GADT version leaks implementation-dependent details out to the data type:
>
>     newtype Hash r k v = Hash (r (Array Int (r [(k,v)])))
>     empty :: MonadRef m r => m (Hash r k v)
>     insert :: (Hashable k, MonadRef m r) => k -> v -> Hash r k v -> m ()
>
> This gets worse as you need more and more typeclass machinery.
>
> On the other hand, GADTs are useful when you want to define multidirectional
> mutual dependencies without repeating yourself. Each is a win in terms of
> the amount of boilerplate you have to write in different circumstances.
>
>     class Foo a b c | a b -> c, b c -> a, c a -> b where
>       foo :: a -> b -> c
>
> would require 3 different class associated types, one for each fundep.
>
> -Edward Kmett
[Haskell-cafe] haddock forgets parens in type sigs?
The annotated type of "update" is missing parentheses:

http://hackage.haskell.org/packages/archive/haskelldb/0.12/doc/html/Database-HaskellDB.html#v%3Aupdate

(compare with the signature given in the source) - Best, J.W.
Re: [Haskell-cafe] Haskell IDEs on Windows; gtk2hs
On Friday 12 February 2010 22:45:10, Alistair Bayley wrote:
> I thought I'd try some of the Haskell IDEs: eclipsefp, leksah, and yi.
> So far, leksah requires gtk2hs (and apparently yi can use it too?),
> and the latest gtk2hs installer for Windows doesn't like the latest
> Haskell Platform, so I'm going to try building gtk2hs from source,
> unless anyone tells me that it's a waste of time.
>
> Building yi fails with:
>
>     Yi\Prelude.hs:182:9:
>         Duplicate instance declarations:
>           instance Category Accessor.T -- Defined at Yi\Prelude.hs:182:9-38
>           instance Category Accessor.T
>             -- Defined in data-accessor-0.2.1.2:Data.Accessor.Private
>     cabal: Error: some packages failed to install:
>     yi-0.6.1 failed during the building phase. The exception was:
>     exit: ExitFailure 1
>
> Presumably data-accessor has been updated, but yi has not? What is the
> easiest fix? Downgrade to an earlier version of data-accessor?

You would need to edit the .cabal file anyway so that the older version of data-accessor would be used. Since you need to unpack the package and edit a file anyway, you might as well comment out the instance declaration in Yi/Prelude.hs (I think).

> So that just leaves eclipsefp. Still trying to figure out how to
> install eclipse plugins...
>
> Alistair
[Haskell-cafe] Haskell IDEs on Windows; gtk2hs
I thought I'd try some of the Haskell IDEs: eclipsefp, leksah, and yi. So far, leksah requires gtk2hs (and apparently yi can use it too?), and the latest gtk2hs installer for Windows doesn't like the latest Haskell Platform, so I'm going to try building gtk2hs from source, unless anyone tells me that it's a waste of time.

Building yi fails with:

    Yi\Prelude.hs:182:9:
        Duplicate instance declarations:
          instance Category Accessor.T -- Defined at Yi\Prelude.hs:182:9-38
          instance Category Accessor.T
            -- Defined in data-accessor-0.2.1.2:Data.Accessor.Private
    cabal: Error: some packages failed to install:
    yi-0.6.1 failed during the building phase. The exception was:
    exit: ExitFailure 1

Presumably data-accessor has been updated, but yi has not? What is the easiest fix? Downgrade to an earlier version of data-accessor?

So that just leaves eclipsefp. Still trying to figure out how to install eclipse plugins...

Alistair
Re: [Haskell-cafe] Type arithmetic with ATs/TFs
On Fri, Feb 12, 2010 at 2:11 PM, Andrew Coppin wrote:
> OK, well in that case, I'm utterly puzzled as to why both forms exist in
> the first place. If TFs don't allow you to do anything that can't be done
> with ATs, why have them?
>
> My head hurts...

You can say anything you might say with type families using GADTs, but you'll often be talking about stuff you don't care about. =)

Sometimes you don't care what the Element type of a container is, just that it is a container. Yet using GADTs you must always reference the content type.

    size :: Container' c e => c -> Int -- using Container' defined with GADTs

as opposed to

    size :: Container c => c -> Int

That doesn't seem like a huge sacrifice at first, until you start considering things like:

http://hackage.haskell.org/packages/archive/category-extras/0.53.5/doc/html/Control-Category-Cartesian-Closed.html

Instead of just being able to talk about a CCC based on the type used for its homomorphisms, now I must constantly talk about the type used for its product, and exponentials, and the identity of the product, even when I don't care about those properties!

This ability to not talk about those extra types becomes useful when you start defining data types.

Say you define a simple imperative growable hash data type, parameterized over the monad type. You could do so with TFs fairly easily:

    newtype Hash m k v = Hash (Ref m (Array Int (Ref m [(k,v)])))
    empty :: MonadRef m => m (Hash m k v)
    insert :: (Hashable k, MonadRef m) => k -> v -> Hash m k v -> m ()

But the GADT version leaks implementation-dependent details out to the data type:

    newtype Hash r k v = Hash (r (Array Int (r [(k,v)])))
    empty :: MonadRef m r => m (Hash r k v)
    insert :: (Hashable k, MonadRef m r) => k -> v -> Hash r k v -> m ()

This gets worse as you need more and more typeclass machinery.

On the other hand, GADTs are useful when you want to define multidirectional mutual dependencies without repeating yourself. Each is a win in terms of the amount of boilerplate you have to write in different circumstances.

    class Foo a b c | a b -> c, b c -> a, c a -> b where
      foo :: a -> b -> c

would require 3 different class associated types, one for each fundep.

-Edward Kmett
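A minimal compilable sketch of the trade-off described above (class, instance, and family names are made up for illustration): the fundep class states all three dependencies at once, while the single associated type in the second version captures only the a b -> c direction — the other two directions would each need a further family.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             TypeFamilies, FlexibleInstances #-}

-- Fundep version: one class, three functional dependencies.
class Foo a b c | a b -> c, b c -> a, c a -> b where
  foo :: a -> b -> c

instance Foo Int Bool String where
  foo n b = show n ++ show b

-- Associated-type version: this single family only encodes a b -> c;
-- encoding b c -> a and c a -> b would require two more families.
class Foo' a b where
  type C a b
  foo' :: a -> b -> C a b

instance Foo' Int Bool where
  type C Int Bool = String
  foo' n b = show n ++ show b

main :: IO ()
main = do
  putStrLn (foo  (3 :: Int) True :: String)  -- prints "3True"
  putStrLn (foo' (3 :: Int) True)            -- prints "3True"
```
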
[Haskell-cafe] Call for Copy: Monad.Reader Issue 16
Call for Copy: The Monad.Reader - Issue 16

Whether you're an established academic or have only just started learning Haskell, if you have something to say, please consider writing an article for The Monad.Reader! The submission deadline for Issue 16 will be:

**Friday, April 16, 2010**

The Monad.Reader

The Monad.Reader is an electronic magazine about all things Haskell. It is less formal than a journal, but somehow more enduring than a wiki page. There have been a wide variety of articles: exciting code fragments, intriguing puzzles, book reviews, tutorials, and even half-baked research ideas.

Submission Details

Get in touch with me if you intend to submit something -- the sooner you let me know what you're up to, the better. Please submit articles for the next issue to me by e-mail (byorgey at cis.upenn.edu). Articles should be written according to the guidelines available from

http://themonadreader.wordpress.com/contributing/

Please submit your article in PDF, together with any source files you used. The sources will be released together with the magazine under a BSD license. If you would like to submit an article but have trouble with LaTeX, please let me know and we'll work something out.
Re: [Haskell-cafe] Type arithmetic with ATs/TFs
On Fri, Feb 12, 2010 at 2:11 PM, Andrew Coppin wrote:
> OK, well in that case, I'm utterly puzzled as to why both forms exist in the
> first place. If TFs don't allow you to do anything that can't be done with
> ATs, why have them?
>
> My head hurts...

I think the question is the reverse -- why do ATs exist when you can do everything with the more general type families? This is the answer from the GHC documentation:

"Type families appear in two flavours: (1) they can be defined on the toplevel or (2) they can appear inside type classes (in which case they are known as associated type synonyms). The former is the more general variant, as it lacks the requirement for the type-indices to coincide with the class parameters. However, the latter can lead to more clearly structured code and compiler warnings if some type instances were - possibly accidentally - omitted."

http://www.haskell.org/haskellwiki/GHC/Indexed_types#Detailed_definition_of_type_synonym_families
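A small compilable illustration of the two flavours the documentation describes (the names Elem, Elem', Container, and firstElem are mine, not from GHC):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Flavour (1): a toplevel (open) type family; the index need not
-- coincide with any class parameter.
type family Elem c
type instance Elem [a] = a

-- Flavour (2): the same idea as an associated type synonym; the index
-- must match the class parameter, and GHC can warn when an instance
-- forgets to give the type definition.
class Container c where
  type Elem' c
  firstElem :: c -> Maybe (Elem' c)

instance Container [a] where
  type Elem' [a] = a
  firstElem []    = Nothing
  firstElem (x:_) = Just x

main :: IO ()
main = print (firstElem "abc")  -- prints Just 'a'
```
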
Re: [Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
Simon Marlow wrote:
> On 11/02/2010 20:57, Alp Mestanogullari wrote:
>> It seems quite big for a 3 months project made by a student, though.
>
> No kidding :-) I last rewrote the RTS in 1998: but even so, it was about
> 20k lines.

Man, that's at least two orders of magnitude larger than anything I've ever written in my entire life! And to think that's just the RTS - the part of GHC that most people don't even notice. ;-) Did you really write all that code single-handedly?

Also... those old release notes are some hard-core nostalgia. ;-) "At least 32 MB of RAM"... that's special. As is the list of "new" features in GHC. (Exceptions. Ooo!)
Iteratee and parsec (was: Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14)
On Fri, 2010-02-12 at 12:51, John Lato wrote:
> Yes, the remaining part will be returned, but the consumed portion is
> lost. I couldn't figure out how to solve that problem other than
> cacheing everything.

I decided to post the new code on a webpage (http://www.doc.ic.ac.uk/~mmp08/iteratee/) so as not to spam everyone's inbox. I think you want something more like safeParsecIteratee from my code. From the same CPU, more random numbers (I wonder how they'll look in a month when the BS problems are resolved).

For ByteString:

                  Maciej's     Maciej's Safe  John's
  Short parser
      5:          0.000144s    0.40s          0.67s
     10:          0.52s        0.42s          0.48s
     15:          0.53s        0.52s          0.61s
     20:          0.41s        0.33s          0.39s
     50:          0.54s        0.49s          0.000111s
    100:          0.82s        0.000101s      0.000254s
   1000:          0.000610s    0.000623s      0.014414s
      1:          0.007069s    0.007947s      1.197706s
     10:          0.058025s    0.057382s      117.231680s
  Short failing parser
      5:          0.000104s    0.30s          0.26s
     10:          0.28s        0.24s          0.23s
     15:          0.26s        0.25s          0.31s
     20:          0.27s        0.28s          0.25s
     50:          0.27s        0.25s          0.42s
    100:          0.26s        0.24s          0.23s
   1000:          0.24s        0.23s          0.23s
      1:          0.000259s    0.25s          0.22s
     10:          0.25s        0.39s          0.24s
  Failing parser
      5:          0.25s        0.24s          0.22s
     10:          0.27s        0.24s          0.26s
     15:          0.28s        0.28s          0.31s
     20:          0.32s        0.45s          0.38s
     50:          0.45s        0.45s          0.96s
    100:          0.69s        0.000144s      0.000228s
   1000:          0.000544s    0.000512s      0.013124s
      1:          0.004760s    0.004695s      1.240703s
     10:          0.046858s    0.046897s      119.860964s

For []:

                  Maciej's     Maciej's Safe  John's
  Short parser
      5:          0.000215s    0.000141s      0.000541s
     10:          0.54s        0.000286s      0.000178s
     15:          0.46s        0.78s          0.000248s
     20:          0.000130s    0.50s          0.000420s
     50:          0.66s        0.000200s      0.000785s
    100:          0.000176s    0.000240s      0.001522s
   1000:          0.000826s    0.000857s      0.014399s
      1:          0.006674s    0.007185s      0.381615s
     10:          0.062452s    0.065178s      31.454621s
  Short failing parser
      5:          0.000210s    0.54s          0.99s
     10:          0.96s        0.37s          0.000104s
     15:          0.59s        0.39s          0.000184s
     20:          0.38s        0.36s          0.000114s
     50:          0.37s        0.000100s      0.000111s
    100:          0.000165s    0.37s          0.000103s
   1000:          0.79s        0.36s          0.000103s
      1:          0.37s        0.37s          0.000179s
     10:          0.37s        0.000168s      0.000104s
  Failing parser
      5:          0.37s        0.90s          0.89s
     10:          0.000157s    0.55s          0.000169s
     15:          0.62s        0.39s          0.000303s
     20:          0.43s        0.000194s      0.000311s
     50:          0.000183s    0.56s          0.000780s
    100:          0.80s        0.000172s      0.001624s
   1000:          0.000714s    0.000714s      0.014076s
      1:          0.005451s    0.006890s      0.379960s
     10:          0.052609s    0.055770s      31.537776s

The timings were about the same in every run. Also it seems that keeping a reference to the input does not create a significant slow-down, if that is not an artefact of the testing method. The short failing parser probably has an error somewhere.

> Interesting. I expect good performance as long as chunks don't need
> to be concatenated. The default chunk size is either 4096 or 8192 (I
> don't remember ATM). This also assumes that no intervening functions
> (take, drop, etc.) alter the stream too significantly. Testing 1e5
> wouldn't do more than two concats, and particularly with bytestrings
> shouldn't impose too much penalty. List performance would be much
> worse though.

With 1e5 it should have 1e5/(4e3 or 8e3) \approx 10-20 concats.

Regards

PS. Why does iteratee use transformers? It seems identical (both have functional dependencies etc.) to mtl, except that mtl is standard in the platform. Using both leads to clashes between names.

> Short answer: I am using iterat
Re: [Haskell-cafe] Type arithmetic with ATs/TFs
Robert Greayer wrote:
> What Ryan said, and here's an example of addition with ATs, specifically
> (not thoroughly tested, but tested a little). The translation to TFs sans
> ATs is straightforward.
>
>     class Add a b where
>       type SumType a b
>
>     instance Add Zero Zero where
>       type SumType Zero Zero = Zero
>
>     instance Add (Succ a) Zero where
>       type SumType (Succ a) Zero = Succ a
>
>     instance Add Zero (Succ a) where
>       type SumType Zero (Succ a) = Succ a
>
>     instance Add (Succ a) (Succ b) where
>       type SumType (Succ a) (Succ b) = Succ (Succ (SumType a b))

I'm pretty sure this is almost exactly what I wrote in the first place, and it didn't work. I'll try again and see if I get anywhere...
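For reference, a self-contained variant of the same idea that compiles with GHC's type families. The Zero/Succ data declarations and the Nat/Proxy reflection helpers are mine, added only so the type-level result can be observed at the value level:

```haskell
{-# LANGUAGE TypeFamilies, EmptyDataDecls, ScopedTypeVariables #-}

data Zero
data Succ n

-- Type-level addition as a standalone (toplevel) type family.
type family Sum a b
type instance Sum Zero     b = b
type instance Sum (Succ a) b = Succ (Sum a b)

-- Reflect a type-level natural back to an Int for testing.
data Proxy n = Proxy

class Nat n where
  nat :: Proxy n -> Int

instance Nat Zero where
  nat _ = 0

instance Nat n => Nat (Succ n) where
  nat _ = 1 + nat (Proxy :: Proxy n)

main :: IO ()
main = print (nat (Proxy :: Proxy (Sum (Succ (Succ Zero)) (Succ Zero))))
-- 2 + 1 at the type level, so this prints 3
```
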
Re: [Haskell-cafe] Type arithmetic with ATs/TFs
Ryan Ingram wrote:
> Actually, at least in GHC, associated types are just syntax sugar for type
> families. That is, this code:
>
>     class Container c where
>       type Element c :: *
>       view :: c -> Maybe (Element c, c)
>
>     instance Container [a] where
>       type Element [a] = a
>       view []     = Nothing
>       view (x:xs) = Just (x,xs)
>
> is the same as this code:
>
>     type family Element c :: *
>
>     class Container c where
>       view :: c -> Maybe (Element c, c)
>
>     type instance Element [a] = a
>
>     instance Container [a] where
>       view []     = Nothing
>       view (x:xs) = Just (x,xs)

OK, well in that case, I'm utterly puzzled as to why both forms exist in the first place. If TFs don't allow you to do anything that can't be done with ATs, why have them?

My head hurts...
[Haskell-cafe] Re: MissingH dropped QuickCheck dep
Don Stewart wrote:
> Excellent!
>
> Would it be possible to disable the runtests executable by default?
> Enable it only with a conditional?

It's been that way for quite some time now:

    Executable runtests
      Buildable: False

heh, and I didn't even add a flag for it yet like I have with HDBC. Guess I ought to do that so a person can build tests more easily if they wish.

-- John
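Such a flag might look roughly like this (a sketch modelled on the HDBC-style arrangement John mentions; the flag name is made up):

```cabal
Flag buildtests
  description: Build the test executable
  default: False

Executable runtests
  Buildable: False
  if flag(buildtests)
    Buildable: True
```

With a stanza like this, something along the lines of `cabal install -fbuildtests` would turn the test executable back on.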
[Haskell-cafe] Re: MissingH dropped QuickCheck dep
jgoerzen:
> On Fri, Feb 12, 2010 at 08:45:09AM -0800, John MacFarlane wrote:
>> +++ thomas hartman [Feb 11 10 21:07 ]:
>>> gitit on hackage is still blocked because of dependency on missingh,
>>> which depends on qc1. Not an easy fix -- I couldn't figure out how to
>>> migrate testpack to qc2.
>>>
>>> However, missingh dependency was removed from gitit head
>>>
>>> http://github.com/jgm/gitit
>>>
>>> so that's good.
>>
>> No, gitit head still depends on MissingH, via ConfigFile.
>>
>> I imagine John will update MissingH to use QuickCheck2 soon...
>
> Hey guys, I took a look at MissingH and there was no need for the main
> library to depend on QuickCheck in the first place. It was only
> needed by the tests. So I've uploaded a new MissingH 1.1.0.2 to
> Hackage that drops that dep.
>
> That ought to solve it for you.

Excellent!

Would it be possible to disable the runtests executable by default? Enable it only with a conditional?

-- Don
Re: [Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
2010/02/12 stefan kersten:
> On 12.02.10 16:29, Simon Marlow wrote:
>> I'm aware that some people need a GC with shorter pause
>> times. We'll probably put that on the roadmap at some point.
>
> for some applications (like realtime audio processing) it
> would be interesting to even have short pause times with a
> guaranteed upper bound, but i realize this is a very
> specialized need that could be better served by making the GC
> implementation swappable (which otoh doesn't seem to be
> trivial).

I think this is not a unique need. When you consider things like scalable network services with strong SLAs, largish embedded systems (iPhone, planes, &c.) and other environments where verification is a big win, it's generally also important to control latency and memory use.

To be honest, though, I am of two minds about this. Why shouldn't we enforce our timing/memory requirements by writing EDSLs and compiling them? The approach Atom takes is maybe the most flexible option (there be parens, though).

-- Jason Dusek
Re: [Haskell-cafe] Time for a San Francisco Hackathon?
I'd help organize. How do these usually work? Some worthy package is selected for hacking? People hack whatever they like?

-- Jason Dusek
[Haskell-cafe] Re: sendfile leaking descriptors on Linux?
Jeremy Shaw wrote:

    import Control.Concurrent
    import Control.Concurrent.MVar
    import System.Posix.Types

    data RW = Read | Write

    threadWaitReadWrite :: Fd -> IO RW
    threadWaitReadWrite fd = do
      m <- newEmptyMVar
      rid <- forkIO $ threadWaitRead fd >> putMVar m Read
      wid <- forkIO $ threadWaitWrite fd >> putMVar m Write
      r <- takeMVar m
      killThread rid
      killThread wid
      return r

Initial testing seems promising. I haven't been able to provoke the "leak" during 15-20 minutes of testing. I'll test more thoroughly during the weekend.

Cheers,
[Haskell-cafe] MissingH dropped QuickCheck dep
On Fri, Feb 12, 2010 at 08:45:09AM -0800, John MacFarlane wrote:
> +++ thomas hartman [Feb 11 10 21:07 ]:
>> gitit on hackage is still blocked because of dependency on missingh,
>> which depends on qc1. Not an easy fix -- I couldn't figure out how to
>> migrate testpack to qc2.
>>
>> However, missingh dependency was removed from gitit head
>>
>> http://github.com/jgm/gitit
>>
>> so that's good.
>
> No, gitit head still depends on MissingH, via ConfigFile.
>
> I imagine John will update MissingH to use QuickCheck2 soon...
>
> John

Hey guys, I took a look at MissingH and there was no need for the main library to depend on QuickCheck in the first place. It was only needed by the tests. So I've uploaded a new MissingH 1.1.0.2 to Hackage that drops that dep.

That ought to solve it for you.

-- John
Re: [Haskell-cafe] Using Cabal during development
* On Tue, Feb 09 2010, Johan Tibell wrote:
> On Tue, Feb 9, 2010 at 6:10 AM, Ketil Malde wrote:
>> Limestraël writes:
>>> how do usually Haskell developpers build their softwares (and
>>> especially medium or big libraries) while they are still developping them?
>>> With cabal-install, by doing one 'cabal configure' once and 'cabal build'
>>> each time they have altered their code?
>>> With only Cabal, through some 'runhaskell Setup.hs build's?
>>
>> Generally, the first thing I do is hit C-c C-l in Emacs to load the
>> current file into a haskell process. Then back to fix the type errors
>> (click on the error to jump to the code), and iterate until it loads
>> correctly.
>
> It's really unfortunate that this approach doesn't work for .hsc files.
> When writing low level libraries I often have a couple of these which
> forces me out of my nice Emacs workflow into an Emacs + terminal + Cabal
> workflow.

This is solve-able. I bind compile-command to the c2hs invocation, and then have my C-c C-l keybinding run "compile" before "inferior-haskell-load-file". The only hangup is that having a -*- comment at the top of a c2hs file confuses ghc when c2hs generates the code. So I set the buffer-local variable via eproject (in the .eproject file in the project root) instead, and everything is happy.

I would post the exact code, but it's on my work machine, and it's nearly impossible to get files out of my work environment without getting nasty emails about how I'm stealing the company's IP. (But hey, at least I get to use Haskell. I can't complain too loudly :)

Ping me on #haskell or #emacs if you need help getting something like this working.

Oh; one other thing. You don't need to leave emacs to run cabal commands; you can run ":!cabal build" from the ghci window, you can bind compile-command to "cabal build" and run M-x compile, or you can use eshell. I use all three approaches depending on my mood. But since I'm on Windows, I certainly never venture outside of my Emacs window. :)

Regards,
Jonathan Rockway

-- Just "another Haskell hacker"
[Haskell-cafe] Re: Infix and postfix operators in Parsec
Parse a sequence of primitive expressions first and then do fixity resolution independent from Parsec.

C.

Christian Maeder wrote:
> It seems that the case of identical postfix and infix operators was not
> considered. So I recommend to write something from scratch.
>
> Cheers Christian
>
> Xinyu Jiang wrote:
>> I'm writing a parser for a Haskell-style language, and when I need to
>> use the same symbol for infix, prefix and postfix operators, the
>> combinator "buildExpressionParser" seems not to work as intended. For
>> example, in:
>>
>>   (1) x + y
>>   (2) x +
>>   (3) + x
>>
>> If I set the priority of the postfix version of "+" to be higher than
>> the infix version, the parser cannot recognize (1), for it stops at the
>> end of "x +". And if the priority of postfix "+" is lower, Parsec
>> complains about (2) by returning an error message which expects
>> something after "+".
>>
>> Is there some way to get over this problem, and let me be able to still
>> benefit from the expression mechanism in Parsec. Or should I write these
>> stuff from scratch? Thanks.
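The two-pass approach Christian suggests can be sketched at the token level like this (a toy model, not actual Parsec code; the Tok/Expr types are made up, and the prefix case (3) is omitted for brevity). The first pass would produce a flat token sequence; the second pass decides whether each "+" is infix or postfix by looking at what follows it:

```haskell
-- Toy second pass: given a flat token list, resolve each '+' as infix
-- when an operand follows it, and as postfix otherwise.
data Tok  = TNum Int | TPlus
  deriving (Eq, Show)

data Expr = Num Int | Add Expr Expr | PostInc Expr
  deriving (Eq, Show)

resolve :: [Tok] -> Maybe Expr
resolve (TNum n : rest) = go (Num n) rest
  where
    go acc []                        = Just acc
    -- '+' followed by an operand: infix (handles "x + y")
    go acc (TPlus : TNum m : rest')  = go (Add acc (Num m)) rest'
    -- trailing '+': postfix (handles "x +")
    go acc (TPlus : rest')           = go (PostInc acc) rest'
    go _   _                         = Nothing
resolve _ = Nothing
```

For example, resolve [TNum 1, TPlus, TNum 2] yields Just (Add (Num 1) (Num 2)), while resolve [TNum 1, TPlus] yields Just (PostInc (Num 1)) — the two cases that buildExpressionParser cannot distinguish with a fixed priority ordering.
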
[Haskell-cafe] Re: Infix and postfix operators in Parsec
It seems that the case of identical postfix and infix operators was not considered. So I recommend to write something from scratch.

Cheers Christian

Xinyu Jiang wrote:
> I'm writing a parser for a Haskell-style language, and when I need to
> use the same symbol for infix, prefix and postfix operators, the
> combinator "buildExpressionParser" seems not to work as intended. For
> example, in:
>
>   (1) x + y
>   (2) x +
>   (3) + x
>
> If I set the priority of the postfix version of "+" to be higher than
> the infix version, the parser cannot recognize (1), for it stops at the
> end of "x +". And if the priority of postfix "+" is lower, Parsec
> complains about (2) by returning an error message which expects
> something after "+".
>
> Is there some way to get over this problem, and let me be able to still
> benefit from the expression mechanism in Parsec. Or should I write these
> stuff from scratch? Thanks.
[Haskell-cafe] Infix and postfix operators in Parsec
I'm writing a parser for a Haskell-style language, and when I need to use the same symbol for infix, prefix and postfix operators, the combinator "buildExpressionParser" does not seem to work as intended. For example, in:

(1) x + y
(2) x +
(3) + x

If I set the priority of the postfix version of "+" higher than that of the infix version, the parser cannot recognize (1), because it stops at the end of "x +". And if the priority of postfix "+" is lower, Parsec complains about (2), returning an error message that expects something after "+".

Is there some way to get around this problem so that I can still benefit from the expression mechanism in Parsec, or should I write this from scratch? Thanks.
Re: [Haskell-cafe] Haskell RPC / Cluster
Perhaps not exactly what you're after, but at least in the same vein:

http://hackage.haskell.org/package/hspread
http://www.spread.org/

On Fri, Feb 12, 2010 at 8:19 AM, Rick R wrote:
> I am preparing to embark on some serious cluster-oriented coding (high
> availability, monitoring, failover, etc.). My primary concern is
> conforming to standards. I would also like to aid any existing projects
> that fall under this scope. HackPar seems currently targeted towards
> HPC-style clustering, but the page seems to hint at future work in the
> cloud/high-availability area.
>
> I was looking around for RPC libs for Haskell and stumbled across this:
>
> http://github.com/mariusaeriksen/bert
>
> It implements BERT, which is based on Erlang's binary serialization
> protocol. It seems to have quite a bit of support.
>
> Does anyone know of any other RPC modules for Haskell? In addition,
> can anyone recommend other cluster-oriented modules for monitoring,
> process management, etc.?
>
> If those don't exist, can anyone recommend some standards off of which
> to base these? SNMP seems obvious (and daunting); any others?
>
> Thanks,
> Rick
[Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
On 11/02/2010 21:55, Evan Laforge wrote:
> On Thu, Feb 11, 2010 at 1:49 PM, John Van Enk wrote:
>> Perhaps just defining the interface and demonstrating that different
>> RTS's are swappable would be enough?
>
> I read a paper by (I think) a Simon, in which he described a Haskell
> RTS. It would make it easier to experiment with GC, scheduling, and
> whatever else. I recall a few problems, such as performance, but
> nothing really intractable. A swappable RTS would be a nice side effect.

You're probably referring to this:

http://www.haskell.org/~simonmar/papers/conc-substrate.pdf

The idea there was to move as much of the scheduler as possible into Haskell. It's still something we'd like to do, but getting even close to the performance of the current RTS was difficult, which is why the project is currently dormant. In order to get decent performance we'd probably have to sacrifice some of the nice abstractions, like transactions, but then the advantages become less clear. I'm hoping that someday hardware TM will help here.

Also, it was only the scheduler, which is quite a small part of the RTS (probably 5% is an overestimate).

Cheers,
Simon
[Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
On 12/02/2010 15:45, John Van Enk wrote:
> I _think_ that the abstract points out that reference-counted garbage
> collection can be done deterministically. Haskell could some day be an
> excellent replacement for C/Ada in safety-critical markets, but some
> serious changes to the RTS (most having to do with memory allocation,
> garbage collection, and multi-threading) would have to be made. If the
> GC becomes deterministic, then a much better case can be made for using
> the language on a plane or in medical devices.

In a sense the GC *is* deterministic: it guarantees to collect all the unreachable garbage. But I expect what you're referring to is the fact that the garbage remains around for a non-deterministic amount of time. To me that doesn't seem to be a problem: you could run the GC at any time to reclaim it (pause times notwithstanding).

Even if you collected garbage immediately, I wouldn't feel comfortable about claiming any kind of deterministic memory behaviour for Haskell, given that transformations performed by the compiler behind your back can change the space usage, sometimes asymptotically. If you have to have guaranteed deterministic memory usage, perhaps something like Hume [1] is more appropriate?

Cheers,
Simon

[1] http://www-fp.cs.st-andrews.ac.uk/hume/index.shtml
Re: [Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
On 12.02.10 16:29, Simon Marlow wrote:
> I'm aware that some people need a GC with shorter pause times. We'll
> probably put that on the roadmap at some point.

for some applications (like realtime audio processing) it would be interesting to even have short pause times with a guaranteed upper bound, but i realize this is a very specialized need that could be better served by making the GC implementation swappable (which otoh doesn't seem to be trivial).
[Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
On 11/02/2010 20:57, Alp Mestanogullari wrote:
> It seems quite big for a 3-month project made by a student, though.

No kidding :-) I last rewrote the RTS in 1998:

http://www.mail-archive.com/glasgow-haskell-us...@haskell.org/msg00329.html

So as you can see from that announcement, it took "a few months" to rewrite the RTS. At the time, we redesigned things quite a bit, so that includes changes in the compiler too. Back then of course the RTS didn't have a few things it has now:

- anything to do with multithreading or parallel execution
- generational GC
- profiling
- dynamic linking and the byte-code interpreter (GHCi)
- STM
- asynchronous exceptions (throwTo)
- event logging and tracing

but even so, it was about 20k lines. It did have concurrency, a 2-space GC, the FFI, all the primitives, and lots of debugging code.

Cheers,
Simon
[Haskell-cafe] Re: ANN: Dungeons of Wor - a largish FRP example and a fun game, all in one!
On Feb 12, 4:22 am, Simon Michael wrote:
> Exciting! But on a mac, I can't get the window to become focussed or
> accept input. Tips?

Tips:

$ cabal install mkbndl
$ cd ~/.cabal/bin
$ mkbndl dow
$ open Dow.app

Ok, but you need to click the right control key to select stuff on the menu, and you probably only have a left control key, so:

$ cd
$ tar xvzf ~/.cabal/packages/hackage.haskell.org/dow/0.1.0/dow-0.1.0.tar.gz
$ cd dow-0.1.0/src
$ head -n 86 Main.hs > tmp
$ echo " kt1 <- getKey ' '" >> tmp
$ tail -n 7 Main.hs >> tmp
$ cp tmp Main.hs
$ cd ..
$ cabal install
$ cd ~/.cabal/bin
$ mkbndl -f dow
$ open Dow.app
[Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
I _think_ that the abstract points out that reference-counted garbage collection can be done deterministically. Haskell could some day be an excellent replacement for C/Ada in safety-critical markets, but some serious changes to the RTS (most having to do with memory allocation, garbage collection, and multi-threading) would have to be made. If the GC becomes deterministic, then a much better case can be made for using the language on a plane or in medical devices.

/jve

On Fri, Feb 12, 2010 at 10:29 AM, Simon Marlow wrote:
> On 11/02/2010 17:01, John Van Enk wrote:
>> Here's the paper:
>> http://comjnl.oxfordjournals.org/cgi/content/abstract/33/5/466
>
> Can you say a bit about why that GC fits your needs? Must it be that
> particular algorithm? I don't seem to be able to find the paper online.
>
> Replacing GHC's RTS is no mean feat, as you're probably aware. There are
> a large number of dependencies between the compiler, the RTS, and the
> low-level libraries. I expect rather than thinking about replacing the
> RTS it would be more profitable to look at what kinds of things you need
> the RTS to do that it currently does not.
>
> I'm aware that some people need a GC with shorter pause times. We'll
> probably put that on the roadmap at some point.
>
> Cheers,
> Simon
[Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
On 11/02/2010 17:01, John Van Enk wrote:
> Here's the paper:
> http://comjnl.oxfordjournals.org/cgi/content/abstract/33/5/466

Can you say a bit about why that GC fits your needs? Must it be that particular algorithm? I don't seem to be able to find the paper online.

Replacing GHC's RTS is no mean feat, as you're probably aware. There are a large number of dependencies between the compiler, the RTS, and the low-level libraries. I expect rather than thinking about replacing the RTS it would be more profitable to look at what kinds of things you need the RTS to do that it currently does not.

I'm aware that some people need a GC with shorter pause times. We'll probably put that on the roadmap at some point.

Cheers,
Simon
[Haskell-cafe] ANN: hmatrix 0.8.3
Hello, I have released a new version of hmatrix, a library for numeric computation based on LAPACK, BLAS and GSL. Recent developments include improved SVD functions, a simple ODE solver, easier OS X installation (thanks to H. Apfelmus), and an updated tutorial.

hackage  : http://hackage.haskell.org/package/hmatrix
home page: http://code.haskell.org/hmatrix
tutorial : http://code.haskell.org/hmatrix/hmatrix.pdf

Any feedback is welcome!

Alberto Ruiz
Re: [Haskell-cafe] vector to uvector and back again
On Friday 12 February 2010 8:12:51 am Roman Leshchinskiy wrote:
> That's actually a conscious decision. Since vectors support O(1) slicing,
> you can simply copy a slice of the source vector into a slice of the
> target vector.

Ah! I hadn't thought of that. That makes sense.

> At the moment, it is (although it ought to be wrapped in a nicer
> interface). Something like memcpy doesn't work for Data.Vector.Unboxed
> because the ByteArrays aren't pinned. I don't really want to provide
> thawing until someone convinces me that it is actually useful.

Well, my use case is (of course) that I have lots of algorithms on mutable arrays, but they work just as well on immutable arrays by creating an intermediary. So I provided a combinator 'apply' that did something like:

  apply algo iv = new (safeThaw iv >>= \mv -> algo mv >> return mv)

In uvector, the safeThaw part was copying iv into mv with a provided function. For the port, I used unstream . stream, which works fine assuming stream produces a correct size hint, I guess. That's the extent of what I have use for at the moment, though.

> BTW, vector also supports array recycling so you could implement true
> in-place sorting for fused pipelines. Something like
>
>   map (+1) . sort . update xs
>
> wouldn't allocate any temporary arrays in that case.

I'll look into it.

-- Dan
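The "thaw a copy, mutate it in place, freeze the result" shape of Dan's `apply` combinator can be shown concretely with the standard STUArray type instead of uvector/vector (a sketch with hypothetical names; `thaw` copies the immutable array, so the original is never mutated):

```haskell
{-# LANGUAGE RankNTypes #-}
-- Sketch of the 'apply' combinator described above, transplanted onto
-- STUArray: thaw a copy of the immutable array, run an in-place
-- algorithm on the copy, then freeze it back.  Names are illustrative.
import Control.Monad.ST (ST)
import Data.Array.ST (STUArray, runSTUArray, thaw, getBounds, readArray, writeArray)
import Data.Array.Unboxed (UArray, listArray, elems)

applyAlgo :: (forall s. STUArray s Int Int -> ST s ())
          -> UArray Int Int -> UArray Int Int
applyAlgo algo iv = runSTUArray (thaw iv >>= \mv -> algo mv >> return mv)

-- an example in-place algorithm: reverse the elements
inPlaceReverse :: STUArray s Int Int -> ST s ()
inPlaceReverse mv = do
  (lo, hi) <- getBounds mv
  let go i j
        | i >= j    = return ()
        | otherwise = do
            a <- readArray mv i
            b <- readArray mv j
            writeArray mv i b
            writeArray mv j a
            go (i + 1) (j - 1)
  go lo hi
```

For example, `elems (applyAlgo inPlaceReverse (listArray (0,4) [1,2,3,4,5]))` yields `[5,4,3,2,1]`, while the input array is unchanged.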
[Haskell-cafe] Haskell RPC / Cluster
I am preparing to embark on some serious cluster-oriented coding (high availability, monitoring, failover, etc.). My primary concern is conforming to standards. I would also like to aid any existing projects that fall under this scope. HackPar seems currently targeted towards HPC-style clustering, but the page seems to hint at future work in the cloud/high-availability area.

I was looking around for RPC libs for Haskell and stumbled across this:

http://github.com/mariusaeriksen/bert

It implements BERT, which is based on Erlang's binary serialization protocol. It seems to have quite a bit of support.

Does anyone know of any other RPC modules for Haskell? In addition, can anyone recommend other cluster-oriented modules for monitoring, process management, etc.?

If those don't exist, can anyone recommend some standards off of which to base these? SNMP seems obvious (and daunting); any others?

Thanks,
Rick
Re: [Haskell-cafe] vector to uvector and back again
On 12/02/2010, at 23:28, Dan Doel wrote:
> On Thursday 11 February 2010 8:54:15 pm Dan Doel wrote:
>> On Thursday 11 February 2010 12:43:10 pm stefan kersten wrote:
>>> On 10.02.10 19:03, Bryan O'Sullivan wrote:
>>>> I'm thinking of switching the statistics library over to using vector.
>>>
>>> that would be even better of course! an O(0) solution, at least for me ;)
>>> let me know if i can be of any help (e.g. in testing). i suppose
>>> uvector-algorithms would also need to be ported to vector, then.
>>
>> I could do this.
>
> To this end, I've done a preliminary port of the library, such that all
> the modules compile. I've just used safe operations so far, so it's
> probably a significant decrease in performance over the 0.2
> uvector-algorithms (unless perhaps you turn off the bounds checking
> flag), but it's a start. It can be gotten with:
>
>   darcs get http://code.haskell.org/~dolio/vector-algorithms

That's great, thanks!

FWIW, vector has two kinds of bounds checks: "real" ones which catch invalid indices supplied by the user (on by default) and internal ones which catch bugs in the library (off by default since the library is, of course, bug-free ;-). I guess you'd eventually want to use the latter but not the former; that's exactly what unsafe operations provide.

> I only encountered a couple snags during the porting so far:
>
> * swap isn't exported from D.V.Generic.Mutable, so I'm using my own.

Ah, I'll export it. Also, I gladly accept patches :-)

> * I use a copy with an offset into the from and to arrays, and with a
>   length (this is necessary for merge sort). However, I only saw a whole
>   array copy (and only with identical sizes) in vector (so I wrote my own
>   again).

That's actually a conscious decision. Since vectors support O(1) slicing, you can simply copy a slice of the source vector into a slice of the target vector.

> * Some kind of thawing of immutable vectors into mutable vectors, or other
>   way to copy the former into the latter would be useful. Right now I'm
>   using unstream . stream, but I'm not sure that's the best way to do it.

At the moment, it is (although it ought to be wrapped in a nicer interface). Something like memcpy doesn't work for Data.Vector.Unboxed because the ByteArrays aren't pinned. I don't really want to provide thawing until someone convinces me that it is actually useful.

BTW, vector also supports array recycling so you could implement true in-place sorting for fused pipelines. Something like

  map (+1) . sort . update xs

wouldn't allocate any temporary arrays in that case.

Roman
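The "copy with an offset into the from and to arrays, and with a length" that Dan describes can be sketched against the standard STUArray type rather than vector (the helper name is hypothetical; vector itself would express this as copying one O(1) slice into another, per Roman's remark):

```haskell
-- Sketch of an offset-and-length copy between mutable arrays, the
-- operation discussed above, written over plain STUArrays.  The name
-- 'copySlice' is invented for illustration.
import Control.Monad (forM_)
import Control.Monad.ST (ST)
import Data.Array.ST (STUArray, runSTUArray, newListArray, readArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

copySlice :: STUArray s Int Int  -- source
          -> Int                 -- source offset
          -> STUArray s Int Int  -- destination
          -> Int                 -- destination offset
          -> Int                 -- number of elements to copy
          -> ST s ()
copySlice src so dst doff n =
  forM_ [0 .. n - 1] $ \i ->
    readArray src (so + i) >>= writeArray dst (doff + i)
```

For example, copying three elements from offset 1 of `[1,2,3,4,5]` into offset 2 of a zeroed five-element array leaves `[0,0,2,3,4]`.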
Re: [Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
On 12 Feb 2010, at 12:32, Matthias Görgens wrote:
>> It might be big for SoC but perhaps there's some well-defined subset,
>> like fix some blocking issue?
>
> Good idea. By the way, do all SoC projects have to be single-contributor
> projects, or could someone get together with a friend and work together
> on a somewhat larger project?

In theory, two students could work together on a single project. However, in practice there would need to be a clear delineation (in advance) of what each student would contribute, so that we can determine whether either student individually succeeds and gets paid. Also, at the initial submission stage, there is no guarantee that if one student gets a place, the other will as well. So there would need to be a contingency plan for what each student would do in the absence of the other.

Regards,
Malcolm
Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14
On Fri, Feb 12, 2010 at 10:32 AM, Maciej Piechotka wrote:
> On Thu, 2010-02-11 at 17:49 -0600, John Lato wrote:
>> On Thu, Feb 11, 2010 at 10:00 AM, Gregory Collins wrote:
>>> Maciej Piechotka writes:
>>>
>>>> On Tue, 2010-02-09 at 16:41 +0000, John Lato wrote:
>>>>>
>>>>> See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a valid
>>>>> Stream instance using iteratee. Also Gregory Collins recently posted
>>>>> an iteratee wrapper for Attoparsec to haskell-cafe. To my knowledge
>>>>> these are not yet in any packages, but hackage is vast.
>>>>
>>>> Hmm. Am I correct that his implementation caches everything?
>>>
>>> The one that John posted (iteratees on top of parsec) has to keep a copy
>>> of the entire input, because parsec wants to be able to do arbitrary
>>> backtracking on the stream.
>>
>> This is true, however I believe this alternative approach is also
>> correct. The Cursor holds the stream state, and parsec holds on to
>> the Cursor for backtracking. Data is only read within the Iteratee
>> monad when it goes beyond the currently available cursors, at which
>> point another cursor is added to the linked list (implemented with
>> IORef or other mutable reference).
>>
>> The downside to this approach is that data is consumed from the
>> iteratee stream for a partial parse, even if the parse fails. I did
>> not want this behavior, so I chose a different approach.
>
> Hmm. AFAIU your code you are doing something like:
>
>> concatCursor :: (Monad m, Reference r m, StreamChunk c el)
>>              => Cursor r m c el -> m (c el)
>> concatCursor c = liftM mconcat (concatCursor' c)
>>
>> concatCursor' :: (Monad m, Reference r m, StreamChunk c el)
>>               => Cursor r m c el -> m [c el]
>> concatCursor' (Cursor r v) =
>>     liftM2 (:) (return v) (readRef r >>= concatNextCursor')
>>
>> concatNextCursor' :: (Monad m, Reference r m, StreamChunk c el)
>>                   => NextCursor r m c el -> m [c el]
>> concatNextCursor' (NextCursor c) = concatCursor' $! c
>> concatNextCursor' _ = return $! []
>>
>> parsecIteratee' :: (Monad m, Reference r m, StreamChunk c el)
>>                 => ParsecT (Cursor r m c el) u (IterateeG c el m) a
>>                 -> u
>>                 -> SourceName
>>                 -> IterateeG c el m (Either ParseError a)
>> parsecIteratee' p u sn = do
>>     c <- lift $ mkCursor :: IterateeG c el m (Cursor r m c el)
>>     res <- runParserT (liftM2 (,) p getInput) u sn c
>>     case res of
>>       Right (a, c) -> do sc <- lift $ concatCursor c
>>                          liftI $! Done (Right a) $! Chunk $! sc
>>       Left err -> return $ Left err
>
> Which seems like it should work (I only checked whether it is supposed
> to compile). Unfortunately I need to work around the clash between
> transformers and mtl.
>
> EDIT. Oops, sorry. It will not work. However, it will (as it should)
> return the remaining part back to the stream.

Yes, the remaining part will be returned, but the consumed portion is lost. I couldn't figure out how to solve that problem other than caching everything.

>>>> I tried to rewrite the implementation using... well, an imperative
>>>> linked list. For a trivial benchmark it shows a large improvement
>>>> (although it may be due to an error in the test, such as using
>>>> ByteString) and, I believe, it allows memory to be freed before the
>>>> finish.
>>>>
>>>> Results of test on Core 2 Duo 2.8 GHz:
>>>> 10:     0.000455s   0.000181s
>>>> 100:    0.000669s   0.001104s
>>>> 1000:   0.005209s   0.023704s
>>>> 10000:  0.053292s   1.423443s
>>>> 100000: 0.508093s 132.208597s

>> I expected poor performance of my code for larger numbers of elements,
>> as demonstrated here.
>
> I haven't tested for more than 1e5 (which was in a comment).

Interesting. I expect good performance as long as chunks don't need to be concatenated. The default chunk size is either 4096 or 8192 (I don't remember ATM). This also assumes that no intervening functions (take, drop, etc.) alter the stream too significantly. Testing 1e5 wouldn't do more than two concats, and particularly with bytestrings shouldn't impose too much penalty. List performance would be much worse though.

Incidentally, performance of the WrappedByteString newtype is poor relative to true bytestrings. This will be fixed in the next major release (due in maybe a month or so?).

>> I envisioned the usage scenario where parsers would be relatively
>> short (<20 chars), and most of the work would be done directly with
>> iteratees. In this case it would be more important to preserve the
>> stream state in the case of a failed parse, and the performance issues
>> of appending chunks wouldn't arise either.
>
> Fortunately parsec does not limit the number of streams per monad, so it
> is up to the user which one he will choose (depending on the problem).

Good point.

> Regards
> PS. Why does iteratee use transformers? It seems to be identical (both
> have functional dependencies etc.) to mtl, except that mtl is standard
> in the platform. Using both leads to clashes between names.
Re: [Haskell-cafe] Re: How many "Haskell Engineer I/II/III"s are there?
> It might be big for SoC but perhaps there's some well-defined subset,
> like fix some blocking issue?

Good idea. By the way, do all SoC projects have to be single-contributor projects, or could someone get together with a friend and work together on a somewhat larger project?
Re: [Haskell-cafe] vector to uvector and back again
On Thursday 11 February 2010 8:54:15 pm Dan Doel wrote:
> On Thursday 11 February 2010 12:43:10 pm stefan kersten wrote:
>> On 10.02.10 19:03, Bryan O'Sullivan wrote:
>>> I'm thinking of switching the statistics library over to using vector.
>>
>> that would be even better of course! an O(0) solution, at least for me ;)
>> let me know if i can be of any help (e.g. in testing). i suppose
>> uvector-algorithms would also need to be ported to vector, then.
>
> I could do this.

To this end, I've done a preliminary port of the library, such that all the modules compile. I've just used safe operations so far, so it's probably a significant decrease in performance over the 0.2 uvector-algorithms (unless perhaps you turn off the bounds checking flag), but it's a start. It can be gotten with:

  darcs get http://code.haskell.org/~dolio/vector-algorithms

I only encountered a couple of snags during the porting so far:

* swap isn't exported from D.V.Generic.Mutable, so I'm using my own.

* I use a copy with an offset into the from and to arrays, and with a
  length (this is necessary for merge sort). However, I only saw a whole
  array copy (and only with identical sizes) in vector (so I wrote my own
  again).

* Some kind of thawing of immutable vectors into mutable vectors, or other
  way to copy the former into the latter would be useful. Right now I'm
  using unstream . stream, but I'm not sure that's the best way to do it.

Other than that, things went pretty smoothly. I haven't ported the test suite or benchmarks yet, so I don't recommend that anyone actually uses this for anything important yet.

Cheers,
-- Dan
Re: [Haskell-cafe] haskell-src type inference algorithm?
On Fri, Feb 12, 2010 at 10:14 AM, Lennart Augustsson wrote:
> Well, something like such a tool exists, but I can't give it away.

I know. :-)

/Niklas
Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14
On Thu, 2010-02-11 at 17:49 -0600, John Lato wrote:
> On Thu, Feb 11, 2010 at 10:00 AM, Gregory Collins wrote:
>> Maciej Piechotka writes:
>>
>>> On Tue, 2010-02-09 at 16:41 +0000, John Lato wrote:
>>>>
>>>> See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a valid
>>>> Stream instance using iteratee. Also Gregory Collins recently posted
>>>> an iteratee wrapper for Attoparsec to haskell-cafe. To my knowledge
>>>> these are not yet in any packages, but hackage is vast.
>>>
>>> Hmm. Am I correct that his implementation caches everything?
>>
>> The one that John posted (iteratees on top of parsec) has to keep a copy
>> of the entire input, because parsec wants to be able to do arbitrary
>> backtracking on the stream.
>
> This is true, however I believe this alternative approach is also
> correct. The Cursor holds the stream state, and parsec holds on to
> the Cursor for backtracking. Data is only read within the Iteratee
> monad when it goes beyond the currently available cursors, at which
> point another cursor is added to the linked list (implemented with
> IORef or other mutable reference).
>
> The downside to this approach is that data is consumed from the
> iteratee stream for a partial parse, even if the parse fails. I did
> not want this behavior, so I chose a different approach.

Hmm. AFAIU your code you are doing something like:

> concatCursor :: (Monad m, Reference r m, StreamChunk c el)
>              => Cursor r m c el -> m (c el)
> concatCursor c = liftM mconcat (concatCursor' c)
>
> concatCursor' :: (Monad m, Reference r m, StreamChunk c el)
>               => Cursor r m c el -> m [c el]
> concatCursor' (Cursor r v) =
>     liftM2 (:) (return v) (readRef r >>= concatNextCursor')
>
> concatNextCursor' :: (Monad m, Reference r m, StreamChunk c el)
>                   => NextCursor r m c el -> m [c el]
> concatNextCursor' (NextCursor c) = concatCursor' $! c
> concatNextCursor' _ = return $! []
>
> parsecIteratee' :: (Monad m, Reference r m, StreamChunk c el)
>                 => ParsecT (Cursor r m c el) u (IterateeG c el m) a
>                 -> u
>                 -> SourceName
>                 -> IterateeG c el m (Either ParseError a)
> parsecIteratee' p u sn = do
>     c <- lift $ mkCursor :: IterateeG c el m (Cursor r m c el)
>     res <- runParserT (liftM2 (,) p getInput) u sn c
>     case res of
>       Right (a, c) -> do sc <- lift $ concatCursor c
>                          liftI $! Done (Right a) $! Chunk $! sc
>       Left err -> return $ Left err

Which seems like it should work (I only checked whether it is supposed to compile). Unfortunately I need to work around the clash between transformers and mtl.

EDIT. Oops, sorry. It will not work. However, it will (as it should) return the remaining part back to the stream.

>>> I tried to rewrite the implementation using... well, an imperative
>>> linked list. For a trivial benchmark it shows a large improvement
>>> (although it may be due to an error in the test, such as using
>>> ByteString) and, I believe, it allows memory to be freed before the
>>> finish.
>>>
>>> Results of test on Core 2 Duo 2.8 GHz:
>>> 10:     0.000455s   0.000181s
>>> 100:    0.000669s   0.001104s
>>> 1000:   0.005209s   0.023704s
>>> 10000:  0.053292s   1.423443s
>>> 100000: 0.508093s 132.208597s

> I'm surprised your version has better performance for small numbers of
> elements. I wonder if it's partially due to more aggressive inlining
> from GHC or something of that nature. Or maybe your version compiles
> to a tighter loop as elements can be gc'd.

It is possible, as my code was in the same module. I'll try to use 2 different modules.

> I expected poor performance of my code for larger numbers of elements,
> as demonstrated here.

I haven't tested for more than 1e5 (which was in a comment).

> I envisioned the usage scenario where parsers would be relatively
> short (<20 chars), and most of the work would be done directly with
> iteratees. In this case it would be more important to preserve the
> stream state in the case of a failed parse, and the performance issues
> of appending chunks wouldn't arise either.

Fortunately parsec does not limit the number of streams per monad, so it is up to the user which one he will choose (depending on the problem).

> Cheers,
> John

Regards

PS. Why does iteratee use transformers? It seems to be identical (both have functional dependencies etc.) to mtl, except that mtl is standard in the platform. Using both leads to clashes between names.
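The Cursor/NextCursor chain being discussed in this thread reduces, if we fix the reference type to IORef and the chunk type to an ordinary Monoid, to a small base-only sketch (the real code is polymorphic over `Reference r m` and `StreamChunk c el`; these simplified names are illustrative):

```haskell
-- Simplified IO/IORef-only sketch of the cursor linked list: each
-- cursor holds one chunk plus a mutable link to the next cursor,
-- appended lazily as the parser demands more input.
import Data.IORef

data NextCursor a = NextCursor (Cursor a) | End
data Cursor a = Cursor (IORef (NextCursor a)) a

mkCursor :: a -> IO (Cursor a)
mkCursor v = do
  r <- newIORef End
  return (Cursor r v)

-- append a new chunk at the given (tail) cursor, returning the new tail
extendCursor :: Cursor a -> a -> IO (Cursor a)
extendCursor (Cursor r _) v = do
  c' <- mkCursor v
  writeIORef r (NextCursor c')
  return c'

-- walk the chain from a cursor and concatenate every chunk
concatCursor :: Monoid a => Cursor a -> IO a
concatCursor (Cursor r v) = do
  next <- readIORef r
  case next of
    End          -> return v
    NextCursor c -> fmap (mappend v) (concatCursor c)
```

Because parsec only ever holds a `Cursor` (an immutable pair of chunk and link), backtracking just means re-walking the chain from an earlier cursor — which is also why every chunk reached so far stays alive, the caching behaviour discussed above.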
Re: [Haskell-cafe] haskell-src type inference algorithm?
Well, something like such a tool exists, but I can't give it away.

On Fri, Feb 12, 2010 at 12:13 AM, Niklas Broberg wrote:
>> Anyone know of a type inference utility that can run right on
>> haskell-src types? or one that could be easily adapted?
>
> This is very high on my wish-list for haskell-src-exts, and I'm hoping
> the stuff Lennart will contribute will go a long way towards making it
> feasible. I believe I can safely say that no such tool exists (and if
> it does, why haven't you told me?? ;-)), but if you implement (parts
> of) one yourself I'd be more than interested to see, and incorporate,
> the results.
>
> Cheers,
>
> /Niklas