Re: [Haskell-cafe] [Haskell] ANNOUNCE: set-monad
Hi Derek,

Thanks for clarifying your point. You are right that (fromList . toList) = id is desirable and that it holds for Data.Set. But your suggestion that this property does not hold for Data.Set.Monad is not correct. Please check out the repo; I have just pushed a QuickCheck definition for this property. With a little bit of effort, one can also prove this by hand.

Let me also clarify that Data.Set.Monad exports Set as an abstract data type (i.e., the user cannot inspect its internal structure). Also, the run function is only used internally and is not exposed to the users.

Cheers, George

On 19 June 2012 02:21, Derek Elkins derek.a.elk...@gmail.com wrote:

On Jun 18, 2012 4:54 PM, George Giorgidze giorgi...@gmail.com wrote:

Hi Derek,

On 16 June 2012 21:53, Derek Elkins derek.a.elk...@gmail.com wrote:

The law that ends up failing is toList . fromList /= id, i.e. fmap g . toList . fromList . fmap h /= fmap g . fmap h

This is not related to functor laws. The property that you desire is about toList and fromList.

Sorry, I typoed. I meant to write fromList . toList, though that should have been clear from context. This is a law that I'm pretty sure does hold for Data.Set, potentially modulo bottom. It is a quite desirable law but, as you correctly state, not required. If you add this (non)conversion, you will get the behavior to which Dan alludes.

The real upshot is that Prim . run is not id. This is not immediately obvious, but it is actually the key to why this technique works. A Data.Set.Monad Set is not a set, as I mentioned in my previous email. To drive the point home, you can easily implement fromSet and toSet. In fact, they're just Prim and run. Thus, you will fail to have fromSet . toSet = id, though you will have toSet . fromSet = id, i.e. run . Prim = id. This shows that Data.Set.Set embeds into but is not isomorphic to Data.Set.Monad.Set.

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
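For readers who want to see how run . Prim = id can hold while Prim . run is not id, here is a minimal, self-contained sketch of the deep-embedding technique under discussion. The constructor and function names (Prim, run, Bind, Return) are modelled on the internals mentioned in the thread; set-monad's actual implementation has more constructors and optimisations, so treat this as an illustration, not the library's real code:

```haskell
{-# LANGUAGE GADTs #-}
import qualified Data.Set as S

-- Monadic operations are reified as a syntax tree; Bind and Return
-- need no Ord constraint, which is what makes a Monad instance possible.
data SetM a where
  Prim   :: Ord a => S.Set a -> SetM a
  Return :: a -> SetM a
  Bind   :: SetM a -> (a -> SetM b) -> SetM b

-- Ord is required only here, at interpretation time.
run :: Ord a => SetM a -> S.Set a
run (Prim s)            = s
run (Return x)          = S.singleton x
run (Bind (Prim s) f)   = S.foldr (S.union . run . f) S.empty s
run (Bind (Return x) f) = run (f x)
run (Bind (Bind m f) g) = run (Bind m (\x -> Bind (f x) g))
```

Note that run . Prim = id by the first equation, but Prim . run is not id: interpreting a Bind tree and re-wrapping it forgets the tree structure, which is exactly the asymmetry Derek describes.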
Re: [Haskell-cafe] Again, version conflicting problem with cabal-install
And today I met another situation, which I think is solvable by computer. Package A depends on magicloud (any) and container (0.4.0.0), and package magicloud depends on container (any). Now I've installed magicloud, using container 0.5.0.0. Then I failed to install A, with any solver. So are the solvers using the state of what is installed, rather than the definitions from the original packages?

On Wed, May 2, 2012 at 2:47 PM, Magicloud Magiclouds magicloud.magiclo...@gmail.com wrote:

Hi, long time no see. I am using cabal-install-0.15.0 and the new solver. It sure solves some problems; thanks for the work. On the other hand, there are still packages where I just have no idea how the author built them (probably against a not-up-to-date Hackage database). After human-solver checking, these packages are not able to build at all under certain conditions. So I am voting for the force-allow flag.

On Sat, Feb 4, 2012 at 12:36 AM, Andres Löh andres.l...@googlemail.com wrote:

Hi.

--force-allow=foo-1.3, with the semantics that all dependencies on foo will be changed to allow foo-1.3 to be chosen. Would that be ok? Other suggestions?

Can't this be integrated with the current --constraint flag?

It could be, but ...

If the constraint is able to be satisfied without unrestricting any bounds, fine. Otherwise, unrestrict any bounds on that constraint. What would be the drawbacks?

... it shouldn't happen automatically. There are perfectly valid and safe reasons to use --constraint, whereas this new feature is inherently unsafe. But allowing general constraint syntax and calling the flag something with constraint in it is perhaps a good idea. An advantage is being able to specify --constraint='foo >= 1.3' to get foo-1.3.7.2 instead of having to find out exactly which version you want. And if you already know what you want, you may always say --constraint='foo == 1.3.7.2'.

Yes. Looking forward to the new solver! =)

I need testers and feedback. You can already use it.
It's in the cabal-install development version, and can be enabled by saying --solver=modular on the command line.

Cheers, Andres

-- 竹密岂妨流水过 山高哪阻野云飞 (A dense bamboo grove does not stop the stream from flowing; a high mountain does not block the wild clouds from flying.) And for G+, please use magiclouds#gmail.com.
Re: [Haskell-cafe] Again, version conflicting problem with cabal-install
Hi.

Hackage A depends on magicloud (any) and container (0.4.0.0), and hackage magicloud depends on container (any). Now I've installed magicloud, using container 0.5.0.0. Then I failed to install A, with any solver. So the solvers are using the status that is installed, not the definitions from original hackages?

Could you please provide me with something I can reproduce? I'm not sure what you mean by A or magicloud. Are you saying they have no further dependencies except for the one on container? If you could just state which concrete packages have caused the problem, it'd probably be easier. The trace output of the modular solver with -v3 would also help (send it to me personally if it's too long).

Thanks, Andres
Re: [Haskell-cafe] Hackage 2 maintainership
Hi,

Is there any news on how things are going? What remains to be done to get us to Hackage 2? I found this list of tickets: https://github.com/haskell/cabal/issues?labels=hackage2page=1state=open Is there anything more to be done?

Best regards, Krzysztof Skrzętnicki

On Tue, Feb 14, 2012 at 12:44 AM, Ben Gamari bgamari.f...@gmail.com wrote:

Hey all,

Those of you who follow the Haskell subreddit no doubt saw today's post regarding the status of Hackage 2. As has been said many times in the past, the primary blocker at this point to the adoption of Hackage 2 appears to be the lack of an administrator. It seems to me this is a poor reason for this effort to be held up.

Having taken a bit of time to consider, I would be willing to put in some effort to get things moving and would be willing to maintain the haskell.org Hackage 2.0 instance going forward if necessary. I currently have a running installation on my personal machine and things seem to be working as they should. On the whole, installation was quite trivial, so it seems likely that the project is indeed at a point where it can take real use (although a logout option in the web interface would make testing a bit easier).

That being said, it would in my opinion be silly to proceed without fixing the Hackage Trac. It was taken down earlier this year due to spamming[1] and it seems the recovery project has been orphaned. I would be willing to help with this effort, but it seems like someone more familiar with the haskell.org infrastructure might be better equipped to handle the situation.
It seems that this process will go something like this:

1) Bring the Hackage Trac back from the dead
2) Bring up a Hackage 2 instance alongside the existing hackage.haskell.org
3) Enlist testers
4) Let things simmer for a few weeks/months, ensuring nothing explodes
5) After it's agreed that things are stable, eventually swap the Hackage 1 and 2 instances

This will surely be a non-trivial process but I would be willing to move things forward.

Cheers, - Ben

[1] http://www.haskell.org/pipermail/cabal-devel/2012-January/008427.html
[Haskell-cafe] Call for Papers: STVR Special Issue on Tests and Proofs
Apologies for duplicates.

CALL FOR PAPERS
STVR Special Issue on Tests and Proofs
http://lifc.univ-fcomte.fr/tap2012/stvr/

The Software Testing, Verification and Reliability (STVR) journal (http://www3.interscience.wiley.com/journal/13635/home) invites authors to submit papers to a Special Issue on Tests and Proofs.

Background
==
The increasing use of software and the growing system complexity make focused software testing a challenging task. Recent years have seen an increasing industrial and academic interest in the use of static and dynamic analysis techniques together. Success has been reported combining different test techniques, such as model-based testing, structural testing, or concolic testing, with static techniques such as program slicing, dependency analysis, model checking, abstract interpretation, predicate abstraction, or verification. This special issue serves as a platform for researchers and practitioners to present theory, results, experience and advances in Tests and Proofs (TAP).

Topics
==
This special issue focuses on all topics relevant to TAP.
In particular, the topics of interest include, but are not limited to:

* Program proving with the aid of testing techniques
* New challenges in automated reasoning emerging from the specificities of test generation
* Verification and testing techniques combining proofs and tests
* Generation of test data, oracles, or preambles by deductive techniques such as: theorem proving, model checking, symbolic execution, constraint logic programming, SAT and SMT solving
* Model-based testing and verification
* Automatic bug finding
* Debugging of programs combining static and dynamic analysis
* Transfer of concepts from testing to proving (e.g., coverage criteria) and from proving to testing
* Formal frameworks for test and proof
* Tool descriptions, experience reports and evaluation of test and proof
* Case studies combining tests and proofs
* Applying combinations of test and proof techniques to new application domains, such as validating security protocols or vulnerability detection in programs
* The processes, techniques, and tools that support test and proof

Submission Information
==
The deadline for submissions is 17th December, 2012. Notification of decisions will be given by April 15th, 2013. All submissions must contain original unpublished work not being considered for publication elsewhere. Original extensions to conference papers (clearly identifying the additional contributions) are also encouraged unless prohibited by copyright. Submissions will be refereed according to the standard procedures of Software Testing, Verification and Reliability. Please submit your paper electronically using the Software Testing, Verification and Reliability manuscript submission site. Select "Special Issue Paper" and enter "Tests and Proofs" as the title.

Important Dates:
* Paper submission: December 17, 2012
* Notification: April 15, 2013

Guest Editors
=
* Achim D. Brucker, SAP Research, Germany http://www.brucker.ch/
* Wolfgang Grieskamp, Google, U.S.A.
http://www.linkedin.com/in/wgrieskamp
* Jacques Julliand, University of Franche-Comté, France http://lifc.univ-fcomte.fr/page_personnelle/accueil/8

-- Dr. Achim D. Brucker, SAP Research, Vincenz-Priessnitz-Str. 1, D-76131 Karlsruhe, Phone: +49 6227 7-52595, http://www.brucker.ch
[Haskell-cafe] [ANN] GenCheck - a generalized property-based testing framework
Test.GenCheck is a Haskell library for /generalized proposition-based testing/. It simultaneously generalizes *QuickCheck* and *SmallCheck*. Its main novel features are:

* introduces a number of /testing strategies/ and /strategy combinators/
* introduces a variety of test execution methods
* guarantees uniform sampling (at each rank) for the random strategy
* guarantees both uniqueness and coverage of all structures for the exhaustive strategy
* introduces an /extreme/ strategy for testing unbalanced structures
* also introduces a /uniform/ strategy which does uniform sampling along an enumeration
* allows different strategies to be mixed; for example, one can exhaustively test all binary trees up to a certain size, filled with random integers
* complete separation between properties, generators, testing strategies and test execution methods

The package is based on a lot of previous research in combinatorics (combinatorial enumeration of structures and the theory of Species), as well as a number of established concepts in testing (from a software engineering perspective). Further to the features already implemented in this first release, the package contains an extensible, general framework for generators, test case generation and management. It can also very easily be generalized to cover many more combinatorial structures unavailable as Haskell types.

The package also provides interfaces for different levels of usage: a 'simple' interface for dealing with straightforward testing, a 'medium' interface for those who want to explore different testing strategies, and an 'advanced' interface for access to the full power of GenCheck.

See http://hackage.haskell.org/package/gencheck for further details.
In the source repository (https://github.com/JacquesCarette/GenCheck), the file tutorial/reverse/TestReverseList.lhs shows the simplest kinds of tests (standard and deep for structures, or base for unstructured types) and reporting (checking, testing and full report) for the classical list reverse function. The files in tutorial/list_zipper show what can be done with the medium-level interface (this tutorial is currently incomplete). The brave user can read the source code of the package for the advanced usage -- but we'll write a tutorial for this too, later.

User beware: this is gencheck-0.1, and there are still a few rough edges. We plan to add a Template Haskell feature which should make deriving enumerators automatic for version 0.2.

Jacques and Gordon
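For readers unfamiliar with this style of testing, here is the property from the reverse tutorial expressed in plain Haskell, together with a naive stand-in for an exhaustive strategy with guaranteed uniqueness and coverage at each rank. This is NOT GenCheck's actual API (its generators, strategies and reporting are richer); it is only a sketch of the underlying idea:

```haskell
-- The classic property behind TestReverseList.lhs: reversing twice
-- is the identity on lists.
propRevRev :: [Int] -> Bool
propRevRev xs = reverse (reverse xs) == xs

-- All lists of exactly rank n over a fixed alphabet: every structure
-- is produced exactly once, so coverage and uniqueness both hold.
listsOfRank :: [a] -> Int -> [[a]]
listsOfRank _  0 = [[]]
listsOfRank xs n = [ x : rest | x <- xs, rest <- listsOfRank xs (n - 1) ]

-- Exhaustively check the property for all ranks up to n.
checkUpToRank :: Int -> Bool
checkUpToRank n = all propRevRev (concatMap (listsOfRank [0, 1]) [0 .. n])
```

A real framework separates the property, the generator, and the strategy, exactly as the feature list above describes; here they are fused only for brevity.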
[Haskell-cafe] Problem with plugins
Hi,

I'm currently working on extending the hascat server. My problem is that, for whatever odd reason, it will only work on GHC 7.0, or alternatively if I execute it with runghc or in GHCi. If I compile it with GHC > 7.0 and execute it, then I get this:

$ ~/.cabal/bin/hascat config.xml
Installing Root at /
hascat: /home/tvh/.cabal/lib/plugins-1.5.2.1/ghc-7.4.1/HSplugins-1.5.2.1.o: unknown symbol `ghczm7zi4zi1_ErrUtils_zdsinsertzugo3_info'
hascat: unloadObj: can't find `/usr/lib/ghc/binary-0.5.1.0/HSbinary-0.5.1.0.o' to unload
user error (unloadObj: failed)
Installing Hascat Server Info at /ServerInfo/
hascat: /home/tvh/.cabal/lib/hascat-system-0.2/ghc-7.4.1/HShascat-system-0.2.o: unknown symbol `base_DataziMaybe_Nothing_closure'
hascat: unloadObj: can't find `/usr/lib/ghc/Cabal-1.14.0/HSCabal-1.14.0.o' to unload
user error (unloadObj: failed)
Installing Hascat Application Manager at /Manager/
hascat: /home/tvh/.cabal/lib/hascat-system-0.2/ghc-7.4.1/HShascat-system-0.2.o: unknown symbol `base_DataziMaybe_Nothing_closure'
hascat: unloadObj: can't find `/usr/lib/ghc/Cabal-1.14.0/HSCabal-1.14.0.o' to unload
user error (unloadObj: failed)
Installing Hascat Application Deployer at /Deployer/
hascat: /usr/lib/haskell-packages/ghc/lib/zlib-0.5.3.3/ghc-7.4.1/HSzlib-0.5.3.3.o: unknown symbol `base_GHCziForeignPtr_ForeignPtr_con_info'
hascat: unloadObj: can't find `/usr/lib/ghc/Cabal-1.14.0/HSCabal-1.14.0.o' to unload
user error (unloadObj: failed)
Waiting for connections on port 8012

Is there a way to make this work?

Greetings, Timo
[Haskell-cafe] Haskell Implementors' Workshop 2012, Second CFT
Call for Talks
ACM SIGPLAN Haskell Implementors' Workshop
http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2012
Copenhagen, Denmark, September 14th, 2012

The workshop will be held in conjunction with ICFP 2012: http://www.icfpconference.org/icfp2012/

Important dates
Proposal Deadline: 10th July 2012
Notification: 27th July 2012
Workshop: 14th September 2012

The Haskell Implementors' Workshop is to be held alongside ICFP 2012 this year in Copenhagen, Denmark. There will be no proceedings; it is an informal gathering of people involved in the design and development of Haskell implementations, tools, libraries, and supporting infrastructure.

This relatively new workshop reflects the growth of the user community: there is a clear need for a well-supported tool chain for the development, distribution, deployment, and configuration of Haskell software. The aim is for this workshop to give the people involved with building the infrastructure behind this ecosystem an opportunity to bat around ideas, share experiences, and ask for feedback from fellow experts.

We intend the workshop to have an informal and interactive feel, with a flexible timetable and plenty of room for ad-hoc discussion, demos, and impromptu short talks.

Scope and target audience
-
It is important to distinguish the Haskell Implementors' Workshop from the Haskell Symposium, which is also co-located with ICFP 2012. The Haskell Symposium is for the publication of Haskell-related research. In contrast, the Haskell Implementors' Workshop will have no proceedings -- although we will aim to make talk videos, slides and presented data available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying technology. We want to bring together anyone interested in the nitty-gritty details behind turning plain-text source code into a deployed product.
Having said that, members of the wider Haskell community are more than welcome to attend the workshop -- we need your feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics that people feel we've missed, so by all means submit a proposal even if it doesn't fit exactly into one of these buckets:

* Compilation techniques
* Language features and extensions
* Type system implementation
* Concurrency and parallelism: language design and implementation
* Performance, optimisation and benchmarking
* Virtual machines and run-time systems
* Libraries and tools for development or deployment

Talks
-
At this stage we would like to invite proposals from potential speakers for a relatively short talk. We are aiming for 20-minute talks with 10 minutes for questions and changeovers. We want to hear from people writing compilers, tools, or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. Please submit a talk title and abstract of no more than 200 words to: johan.tib...@gmail.com

We will also have a lightning talks session which will be organised on the day. These talks will be 2-10 minutes, depending on available time. Suggested topics for lightning talks are to present a single idea, a work-in-progress project, a problem to intrigue and perplex Haskell implementors, or simply to ask for feedback and collaborators.

Organisers
--
* Lennart Augustsson (Standard Chartered Bank)
* Manuel M T Chakravarty (University of New South Wales)
* Gregory Collins - co-chair (Google)
* Simon Marlow (Microsoft Research)
* David Terei (Stanford University)
* Johan Tibell - co-chair (Google)
Re: [Haskell-cafe] [Haskell] [ANN] GenCheck - a generalized property-based testing framework
Hi Jacques,

This looks very similar to the recently released testing-feat library: http://hackage.haskell.org/package/testing-feat-0.2

I get a build error on the latest platform:

Test\GenCheck\Base\LabelledPartition.lhs:126:3:
    The equation(s) for `new' have two arguments,
    but its type `[a] -> Map k a' has only one
    In the instance declaration for `LabelledPartition (Map k) a'

Regards, Jonas

On 19 June 2012 17:04, Jacques Carette care...@mcmaster.ca wrote:

Test.GenCheck is a Haskell library for generalized proposition-based testing. It simultaneously generalizes QuickCheck and SmallCheck. Its main novel features are: introduces a number of testing strategies and strategy combinators; introduces a variety of test execution methods; guarantees uniform sampling (at each rank) for the random strategy; guarantees both uniqueness and coverage of all structures for the exhaustive strategy; introduces an extreme strategy for testing unbalanced structures; also introduces a uniform strategy which does uniform sampling along an enumeration; allows different strategies to be mixed, for example one can exhaustively test all binary trees up to a certain size, filled with random integers; complete separation between properties, generators, testing strategies and test execution methods.

The package is based on a lot of previous research in combinatorics (combinatorial enumeration of structures and the theory of Species), as well as a number of established concepts in testing (from a software engineering perspective). In other words, further to the features already implemented in this first release, the package contains an extensible, general framework for generators, test case generation and management. It can also be very easily generalized to cover many more combinatorial structures unavailable as Haskell types. The package also provides interfaces for different levels of usage.
In other words, there is a 'simple' interface for dealing with straightforward testing, a 'medium' interface for those who want to explore different testing strategies, and an 'advanced' interface for access to the full power of GenCheck. See http://hackage.haskell.org/package/gencheck for further details.

In the source repository (https://github.com/JacquesCarette/GenCheck), the file tutorial/reverse/TestReverseList.lhs shows the simplest kinds of tests (standard and deep for structures, or base for unstructured types) and reporting (checking, testing and full report) for the classical list reverse function. The files in tutorial/list_zipper show what can be done with the medium level interface (this tutorial is currently incomplete). The brave user can read the source code of the package for the advanced usage -- but we'll write a tutorial for this too, later.

User beware: this is gencheck-0.1, there are still a few rough edges. We plan to add a Template Haskell feature to this which should make deriving enumerators automatic for version 0.2.

Jacques and Gordon
Re: [Haskell-cafe] [Haskell] [ANN] GenCheck - a generalized property-based testing framework
[Only cc:ing cafe]

There are definite similarities, yes - I only became aware of testing-feat very recently. You seem to have concentrated more on efficiency, while we have focused more on high-level modular design and on strategies. We should probably merge our efforts, if you are willing.

I mistakenly built the release uploaded to Hackage from an experimental branch (which is currently broken). The sources on GitHub work. I'll update gencheck shortly to fix this; sorry.

Jacques

On 12-06-19 12:58 PM, Jonas Almström Duregård wrote:

Hi Jacques, This looks very similar to the recently released testing-feat library: http://hackage.haskell.org/package/testing-feat-0.2 I get a build error on the latest platform:

Test\GenCheck\Base\LabelledPartition.lhs:126:3:
    The equation(s) for `new' have two arguments,
    but its type `[a] -> Map k a' has only one
    In the instance declaration for `LabelledPartition (Map k) a'

Regards, Jonas
Re: [Haskell-cafe] [Haskell-beginners] Function application versus function composition performance
A good functional programming language has a good code algebra after parsing, to which algebraic transformations can be applied for optimization - for example, reducing the need to generate intermediate data structures. See: fusion.

On Tue, Jun 19, 2012 at 3:01 PM, Matt Ford m...@dancingfrog.co.uk wrote:

Hi All,

My last question got me thinking (always dangerous): an expression that doubles a list and takes the 2nd element (index 1), I assume, goes through the following steps.

double (double [1,2,3,4]) !! 1
double (1*2 : double [2,3,4]) !! 1
(1*2*2 : double (double [2,3,4])) !! 1
(1*2*2 : double (2*2 : double [3,4])) !! 1
(1*2*2 : 2*2*2 : double (double [3,4])) !! 1
2*2*2
8

Up until the element requested, all the preceding elements have to have their expression formed as Haskell processes the calculation.

Now I want to compare this to using function composition:

(double . double) [1,2,3,4] !! 1

This is the bit I'm unsure of - what does the composition look like? It is easy to see that it could be simplified to something like:

(double . double) (x:xs) = x*4 : (double . double) xs

This would mean the steps could be written:

(double . double) [1,2,3,4] !! 1
(1*4 : (double . double) [2,3,4]) !! 1
(1*4 : 2*4 : (double . double) [3,4]) !! 1
2*4
8

Ignoring the start and end steps, it is half the number of steps compared to the application version - significant, then, over long lists.

So is this true: are composite functions simplified in Haskell in general, so that they may have improved performance over function application? I've seen posts on the internet that say it's a matter of style only: http://stackoverflow.com/questions/3030675/haskell-function-composition-and-function-application-idioms-correct-us . But my reasoning suggests it could be more than that. Perhaps function application is similarly optimised - maybe by replacing all function applications with composition and then simplifying? Or maybe the simplifying/optimisation step never happens?
As you can see, I'm just guessing at things :-) But it's nice to wonder.

Many thanks for any pointers,
Matt.

--
Regards, KC
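Matt's hand-fused definition can be checked concretely. Assuming double is map (* 2) (the thread never defines it, so this is an inferred definition), the fusion he derives by hand is exactly what GHC's "map/map" rewrite rule, map f (map g xs) = map (f . g) xs, performs at -O; it fires whether the call site uses composition or nested application, which is why the choice between the two is a matter of style, not performance:

```haskell
-- Inferred definition of double from the thread's examples.
double :: [Int] -> [Int]
double = map (* 2)

-- Matt's hand-fused version:
--   (double . double) (x:xs) = x*4 : (double . double) xs
doubleTwice :: [Int] -> [Int]
doubleTwice = map (* 4)

-- With optimisation, GHC rewrites double (double xs) and
-- (double . double) xs into the same single traversal as doubleTwice,
-- so no intermediate list is built in either style.
```

Laziness is also why indexing stops early in both variants: (!!) only forces the cons cells up to the requested position.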
Re: [Haskell-cafe] Lazy producing a list in the strict ST monad
It doesn't work like that by default, and here is why:

-- an infinite tree of values
data InfTree a = Branch a (InfTree a) (InfTree a)

buildTree :: Num a => STRef s a -> ST s (InfTree a)
buildTree ref = do
  n <- readSTRef ref
  writeSTRef ref $! (n+1)
  left  <- buildTree ref
  right <- buildTree ref
  return (Branch n left right)

makeTree :: Num a => ST s (InfTree a)
makeTree = do
  ref <- newSTRef 0
  buildTree ref

-- should be referentially transparent, i.e. these two expressions
-- should be equivalent
pureInfTree1, pureInfTree2 :: InfTree Integer
pureInfTree1 = runST makeTree
pureInfTree2 = runST makeTree

element (Branch x _ _) = x
goLeft  (Branch _ x _) = x
goRight (Branch _ _ x) = x

test :: IO ()
test = do
  let left1  = goLeft  pureInfTree1
  let right1 = goRight pureInfTree1
  let left2  = goLeft  pureInfTree2
  let right2 = goRight pureInfTree2
  evaluate (element left1)
  evaluate (element right1)
  evaluate (element right2)
  evaluate (element left2)
  print (element left1 == element left2) -- should be True!

Right now this code diverges, because buildTree diverges. If buildTree were lazy, test would print False because of the order of evaluation. You can make buildTree lazy if you want:

import Control.Monad.ST.Unsafe

buildTree :: Num a => STRef s a -> ST s (InfTree a)
buildTree ref = do
  n <- readSTRef ref
  writeSTRef ref $! (n+1)
  left  <- unsafeInterleaveST (buildTree ref)
  right <- unsafeInterleaveST (buildTree ref)
  return (Branch n left right)

In order to safely use unsafeInterleaveST, you need to prove that none of the references used by the computation passed to unsafeInterleaveST can be used by any code after the unsafeInterleaveST; so this 'lazy' list generation is safe:

buildList :: Num a => ST s [a]
buildList = do
  ref <- newSTRef 0
  let loop = do
        n <- readSTRef ref
        writeSTRef ref $! (n+1)
        rest <- unsafeInterleaveST loop
        return (n : rest)
  loop

because we are guaranteed that the only reference to ref exists inside the loop, which uses it in a linear fashion.
So you may be able to get away with it... but you have to make a proof manually that the compiler isn't able to infer for you.

-- ryan

On Sun, Jun 10, 2012 at 5:37 AM, Nicu Ionita nicu.ion...@acons.at wrote:

Hi,

I'm trying to produce a list in the strict ST monad. The documentation of ST says that the monad is strict in the state, but not in the values. So I expect that, when returning a list, I get back only the cons cell (with 2 unevaluated thunks). Then, when I need the first element (head), it will be evaluated (with whatever actions are necessary in the ST universe) and the tail is again a cons cell with unevaluated parts.

Internally my list is stored in a vector, and the elements are generated phase-wise, each phase generating 0 or more elements in the vector, with a function splitMove driving this process (see code below). I would expect that the first phase triggers, generates some moves, then (after these are consumed from the list) the next phase triggers, generating the next few moves, and so on. But when I trace the phases (Debug.Trace.trace) I get all the trace messages in front of the first move:

Moves for fen: rnbqkbnr/pp3ppp/4p3/2pp4/3P4/2NQ4/PPP1/R1B1KBNR w
After move generation...
0 = 0 : next phase
3 = 3 : next phase
3 = 3 : next phase
42 = 42 : next phase
44 = 44 : next phase
d4c5 g1f3 g1h3 c3b1 ...

This seems not to be just an unhappy combination of trace and ST, as the program without trace is also slower than the same algorithm implemented with plain lists, which is hard to believe (in many cases the move list is not consumed to the end). I wonder if my expectation is wrong, but I don't find a way to do this. Here is the (incomplete) code:

produceList ... = runST $ do
  ml <- newMList ...
  listMoves ml

-- Transforms a move list to a list of moves - lazy
listMoves :: MList s -> ST s [Move]
listMoves ml = do
  sm <- splitMove ml
  case sm of
    Just (m, ml') -> do
      rest <- listMoves ml'
      return $ m : rest
    Nothing -> return []

-- Split the first move from the move list and return it together with
-- the new move list (without the first move). Return Nothing if there
-- is no further move
splitMove :: MList s -> ST s (Maybe (Move, MList s))
splitMove ml
  | mlToMove ml >= mlToGen ml = do
      mml <- trace trm $ nextPhase ml
      case mml of
        Nothing  -> return Nothing
        Just ml' -> splitMove ml'
  | otherwise = do
      m <- U.unsafeRead (mlVec ml) (mlToMove ml)
      case mlCheck ml ml m of
        Ok    -> return $ Just (m, ml1)
        Skip  -> splitMove ml1
        Delay -> splitMove ml1 { mlBads = m : mlBads ml }
  where
    ml1 = ml { mlToMove = mlToMove ml + 1 }
    trm = show (mlToMove ml) ++ " = " ++ show (mlToGen ml) ++ " : next phase"
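The lazy list generation sketched in Ryan's reply can be completed into a self-contained, runnable form. The structure follows his buildList; the names lazyNats and naturals are added here for the demo:

```haskell
import Control.Monad.ST (ST, runST)
import Control.Monad.ST.Unsafe (unsafeInterleaveST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- Lazily produce 0, 1, 2, ... from a strict ST computation.  Safe,
-- because ref never escapes: only loop itself uses it, linearly.
lazyNats :: ST s [Integer]
lazyNats = do
  ref <- newSTRef 0
  let loop = do
        n <- readSTRef ref
        writeSTRef ref $! (n + 1)
        rest <- unsafeInterleaveST loop
        return (n : rest)
  loop

naturals :: [Integer]
naturals = runST lazyNats
```

Each cons cell is produced on demand: forcing the tail resumes the suspended ST computation, which is exactly the phase-by-phase behaviour Nicu wants from splitMove.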
[Haskell-cafe] The Layout Rule
I am looking for background material on how GHC and other Haskell compilers implement the layout rule. Are there any papers, documentation, commentary, etc. that discuss the actual implementation of this rule (even if only a paragraph or two)? I've already looked at the parsing code in GHC and UHC. Do any other Haskell compilers have interesting approaches to implementing the layout rule?

I am writing a paper about a new formalism for indentation-sensitive languages, and I want to ensure I've covered the appropriate background material on existing implementations of the layout rule.

Michael D. Adams
mdmko...@gmail.com
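For context, the rule in question is the one that lets indentation stand in for explicit braces and semicolons: the Haskell Report specifies a function (usually called L) that inserts the missing { ; } tokens based on column positions. A small illustration, where both definitions parse to the same tree:

```haskell
-- Layout-sensitive form: the column of the first alternative opens an
-- implicit block, and later lines at that same column start new items.
classify :: Int -> String
classify x = case x of
  0 -> "zero"
  _ -> "other"

-- What the layout algorithm effectively inserts:
classify' :: Int -> String
classify' x = case x of { 0 -> "zero" ; _ -> "other" }
```

The tricky part for implementations, and presumably the motivation for the question, is that the Report's algorithm consults the parser (the "parse-error(t)" side condition), entangling lexing and parsing.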
Re: [Haskell-cafe] Again, version conflicting problem with cabal-install
Hi,

The names here were just placeholders. And I just found out the reason: package magicloud is local (not from hackage.haskell.org), so there is no information about it in the cabal index (I assume). But this raises another question: how do I reference a local package in a .cabal file, so that the solver can use the dependencies as originally defined, rather than those of the compiled binary (whose dependencies are fixed)?

On Tue, Jun 19, 2012 at 5:50 PM, Andres Löh andres.l...@googlemail.com wrote:

Hi.

Hackage A depends on magicloud (any) and container (0.4.0.0), and hackage magicloud depends on container (any). Now I've installed magicloud, using container 0.5.0.0. Then I failed to install A, with any solver. So the solvers are using the status that is installed, not the definitions from original hackages?

Could you please provide me with something I can reproduce? I'm not sure what you mean by A or magicloud. Are you saying they have no further dependencies except for the one on container? If you could just state which concrete packages have caused the problem, it'd probably be easier. The trace output of the modular solver with -v3 would also help (send it to me personally if it's too long).

Thanks. Andres

--
竹密岂妨流水过 山高哪阻野云飞 (A dense bamboo grove does not stop the stream from flowing; a high mountain does not block the wild clouds from flying.) And for G+, please use magiclouds#gmail.com.
Re: [Haskell-cafe] [Haskell] ANNOUNCE: set-monad
Un-top-posted. See below. On 19 June 2012 02:21, Derek Elkins derek.a.elk...@gmail.com wrote: On Jun 18, 2012 4:54 PM, George Giorgidze giorgi...@gmail.com wrote: Hi Derek, On 16 June 2012 21:53, Derek Elkins derek.a.elk...@gmail.com wrote: The law that ends up failing is toList . fromList /= id, i.e. fmap g . toList . fromList . fmap h /= fmap g . fmap h This is not related to the functor laws. The property that you desire is about toList and fromList. Sorry, I typoed. I meant to write fromList . toList, though that should've been clear from context. This is a law that I'm pretty sure does hold for Data.Set, potentially modulo bottom. It is a quite desirable law but, as you correctly state, not required. If you add this (non)conversion, you will get the behavior to which Dan alludes. The real upshot is that Prim . run is not id. This is not immediately obvious, but it is actually the key to why this technique works. A Data.Set.Monad Set is not a set, as I mentioned in my previous email. To drive the point home, you can easily implement fromSet and toSet. In fact, they're just Prim and run. Thus, you will fail to have fromSet . toSet = id, though you will have toSet . fromSet = id, i.e. run . Prim = id. This shows that Data.Set.Set embeds into but is not isomorphic to Data.Set.Monad.Set. On Tue, Jun 19, 2012 at 4:02 AM, George Giorgidze giorgi...@gmail.com wrote: Hi Derek, Thanks for clarifying your point. You are right that (fromList . toList) = id is a desirable property, and it holds for Data.Set. But your suggestion that this property does not hold for Data.Set.Monad is not correct. Please check out the repo; I have just pushed a QuickCheck definition for this property. With a little bit of effort, one can also prove this by hand. This is impressive because it's false. The whole point of my original response was to justify Dan's intuition but explain why it was misled in this case.
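[To make the Prim/run discussion concrete, here is a minimal sketch of the deep-embedding technique being debated. The constructor names follow the thread, but this simplified version carries the Ord constraint in Bind; the real set-monad package recovers the constraint differently, so treat this as an illustration rather than the library's actual definition:]

```haskell
{-# LANGUAGE GADTs #-}
module SetMonadSketch where

import qualified Data.Set as S

-- A set computation is a deeply embedded program, not a set: fmap and
-- (>>=) would merely build syntax, and 'run' interprets the program
-- using the Ord instance.
data Set a where
  Prim   :: S.Set a -> Set a
  Return :: a -> Set a
  Bind   :: Ord b => Set b -> (b -> Set a) -> Set a

run :: Ord a => Set a -> S.Set a
run (Prim s)   = s
run (Return x) = S.singleton x
run (Bind m f) = S.unions [ run (f x) | x <- S.toList (run m) ]

-- toList and fromList in terms of run and Prim, as in the thread:
toList :: Ord a => Set a -> [a]
toList = S.toList . run

fromList :: Ord a => [a] -> Set a
fromList = Prim . S.fromList

-- run . Prim = id, so toList . fromList = id holds. But Prim . run is
-- not the identity on programs: fromList . toList collapses a pending
-- computation to its current value, which changes how later fmaps are
-- observed. This is exactly the embedding-but-not-isomorphism point.
```

This is why both parties can be partly right: equalities stated on the underlying Data.Set.Set hold, while equalities stated on the embedded programs need not.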
Let me also clarify that Data.Set.Monad exports Set as an abstract data type (i.e., the user cannot inspect its internal structure). Also, the run function is only used internally and is not exposed to users. If fromList . toList = id is true for Data.Set.Set, then fromList . toList for Data.Set.Monad.Set reduces to Prim . run. I only spoke of the internal functions to get rid of the noise, but Data.Set.fromList . Data.Set.Monad.toList = run, and Data.Set.Monad.fromList . Data.Set.toList = Prim, so it doesn't matter that these are internal functions. As I said to Dan, I will say to you: between Dan and myself the counter-example has already been provided; all you need to do is execute it. Here's the code; if fromList . toList = id, then ex4 should produce the same result as ex5 (and ex6).

import Data.Set.Monad

data X = X Int Int deriving (Show)

instance Eq X where
  X a _ == X b _ = a == b

instance Ord X where
  compare (X a _) (X b _) = compare a b

f (X _ b) = X b b
g (X _ b) = X 1 b

xs = Prelude.map (\x -> X x x) [1..10]

-- should be a singleton
ex1 = toList . fromList $ Prelude.map g xs

-- should have 10 elements
ex2 = toList $ fmap (f . g) $ fromList xs

-- should have 1 element
ex3 = toList $ fmap g $ fromList xs

-- should have 10 elements, fmap f . fmap g = fmap (f . g)
ex4 = toList $ fmap f . fmap g $ fromList xs

-- should have 1 element, we don't generate elements out of nowhere
ex5 = toList $ fmap f $ fromList ex3

-- i.e.
ex6 = toList $ fmap f . fromList . toList . fmap g $ fromList xs

main = mapM_ print [ex1, ex2, ex3, ex4, ex5, ex6]
[Haskell-cafe] ANNOUNCE: Leksah 0.12.1.2 (fixes some metadata issues)
This release has an important bug fix for the metadata download. When metadata was downloaded using libcurl, it was not treated as binary data. If you used one of our binary installers, or if you built Leksah with the -flibcurl flag, then it is likely you have bad metadata files. To fix this, please install Leksah 0.12.1.2, then delete your ~/.leksah-0.12/metadata folder to force Leksah to download the metadata files again. The first time you run Leksah after doing this, it will take a while to update the metadata (you can track its progress by looking in the metadata directory). If you chose to generate your metadata locally, or you used wget to download it (the default if you cabal installed Leksah), then your metadata should be OK. There are some other bug fixes in this release, including one for metadata generation for users with lots of .cabal files (some lazy IO was leaving file handles open). Source is on Hackage and at https://github.com/leksah

Binary installers (use ghc --version to work out which one you need):

OS X
http://leksah.org/packages/leksah-0.12.1.2-ghc-7.0.3.dmg
http://leksah.org/packages/leksah-0.12.1.2-ghc-7.0.4.dmg
http://leksah.org/packages/leksah-0.12.1.2-ghc-7.4.1.dmg

Windows
http://leksah.org/packages/leksah-0.12.1.2-ghc-6.12.3.exe
http://leksah.org/packages/leksah-0.12.1.2-ghc-7.0.3.exe
http://leksah.org/packages/leksah-0.12.1.2-ghc-7.0.4.exe
http://leksah.org/packages/leksah-0.12.1.2-ghc-7.4.1.exe